Advances in Condition Monitoring and Structural Health Monitoring: WCCM 2019 (Lecture Notes in Mechanical Engineering) [1st ed. 2021] 9811591989, 9789811591983

This book comprises the selected contributions from the 2nd World Congress on Condition Monitoring (WCCM 2019), held in Singapore in December 2019.


English · Pages: 810 [759] · Year: 2021

Table of contents:
Contents
About the Editors
Signal Processing: Fault Diagnosis for Complex System Monitoring
Detection and Localization of a Gear Fault Using Automatic Continuous Monitoring of the Modulation Functions
1 Introduction
2 Demodulation Process in AStrion and Physical Meaning of Modulation Functions
3 Data from the GOTIX Experimental Bench
4 Monitoring of the Modulation Functions
5 Demodulation Function Features
6 Limitations of the Method
7 Conclusions and Perspectives
References
Multi-harmonic Demodulation for Instantaneous Speed Estimation Using Maximum Likelihood Harmonic Weighting
1 Introduction
2 Methodology
2.1 Multi-harmonic Demodulation (MHD)
2.2 Maximum Likelihood Estimation of Harmonic Weighting
2.3 Practical Considerations
3 Simulation Results
4 Experimental Results
5 Conclusions
References
A Variational Bayesian Approach for Order Tracking in Rotating Machines
1 Introduction
2 A Brief Review of the Vold-Kalman Filter
2.1 Method Description
2.2 Shortcomings of the Method
3 Proposed Method
3.1 Bayesian Formulation
3.2 Algorithmic Solution
4 Experimental Results
4.1 Numerical Evaluation
4.2 Application to Helicopter Vibration Data
5 Conclusion
References
Reconstructing Speed from Vibration Response to Detect Wind Turbine Faults
1 Introduction
2 Hilbert Transform to Estimate Speed
3 Fault Detection using Speed Estimation
4 Conclusion
References
Non-regular Sampling and Compressive Sensing for Gearbox Monitoring
1 Introduction
2 Gear Diagnosis by Compressive Sensing
2.1 Gear Vibrations
2.2 Non-regular Sampling Method
2.3 Problem Formulation
2.4 Proposed Solution and Algorithm
3 Application
4 Conclusion
References
Change Detection in Smart Grids Using Dynamic Mixtures of t-Distributions
1 Introduction
2 Problem Statement and State of the Art
3 Simultaneous Change Detection and Clustering
3.1 Robust Mixture Modeling
3.2 Formulation in the Form of a Generative Model
4 Incremental Learning
4.1 EM Algorithm
4.2 Incremental Algorithm
5 Experiments
5.1 Model Selection
5.2 Results
6 Conclusion
References
Unbalance Detection and Prediction in Induction Machine Using Recursive PCA and Wiener Process
1 Introduction
2 Proposed Method
2.1 Recursive PCA
2.2 Wiener Process
3 Material and Results
3.1 Motor Unbalance Experiments
3.2 Results and Discussion
4 Conclusions
References
Automatic Fault Diagnosis in Wind Turbine Applications
1 Introduction
2 Automatic Diagnosis
3 Automatic Diagnosis in Real Wind Turbines
3.1 Planetary Stage Fault
3.2 Third Stage Wheel Fault
3.3 Generator Bearing Inner and Outer Race Defects
3.4 Generator Bearing Rolling Element Fault
4 Conclusion
References
Blade Monitoring in Turbomachines Using Strain Measurements
1 Introduction
2 Problem Statement and Formulation
2.1 Experiment Setup
2.2 Problem Modeling
3 Proposed Solution
3.1 Cyclo-Non-stationary Modeling of the Blades’ Signal
3.2 The Synchronous Fitting
4 Results
5 Conclusion
References
Measuring the Effectiveness of Descriptors to Indicate Faults in Wind Turbines
1 Introduction
2 Machine Learning Methods
3 Data
4 Results and Discussion
4.1 Planetary Stage Fault
4.2 Second Stage Wheel Fault
5 Conclusion
References
Gearbox Condition Monitoring Using Sparse Filtering and Parameterized Time–Frequency Analysis
1 Introduction
2 Theory Background
2.1 Sparse Filtering
2.2 Parameterized TFA
3 The Proposed Signal Processing Method
4 Simulation Results
4.1 Signal Structure
4.2 Signal Processing by Sparse Filtering
4.3 Signal Extraction by Parameterized TFA
5 Experiments
5.1 Data Description
5.2 Signal Processing
6 Conclusion
References
Rotating Machinery Fault Diagnosis with Weighted Variational Manifold Learning
1 Introduction
2 Weighted Variational Manifold Learning
2.1 Variational Mode Decomposition Based on SDA
2.2 Time–Frequency Manifold Learning
2.3 Weighted Coefficient-Based Signal Reconstruction
2.4 Procedure of the WVML Method
3 Experimental Study
3.1 Experimental Facility
3.2 Experimental Results
4 Conclusion
References
Recent Advances on Anomaly Detection for Industrial Applications
A Data-Driven Approach to Anomaly Detection and Health Monitoring for Artificial Satellites
1 Introduction
2 Data-Driven Health Monitoring for Artificial Satellites
2.1 Anomaly Detection by Machine Learning
2.2 Examples of Anomaly Detection Applied to Satellite Data
3 Lessons Learned
3.1 Importance of Data Preprocessing
3.2 Modeling of Satellite House-Keeping Data
3.3 Difficulty in Unsupervised Learning
3.4 Expected Roles of Data-Driven Approaches
4 Challenges
4.1 Explainability
4.2 Automation of Data Preprocessing
4.3 Transfer Learning
4.4 Interaction with Operators
5 BISHAMON: A New Platform for Data-Driven Health Monitoring
6 Conclusion
References
Robust Linear Regression and Anomaly Detection in the Presence of Poisson Noise Using Expectation-Propagation
1 Introduction
2 Exact Bayesian Model
2.1 Likelihood
2.2 Prior Distributions
2.3 Joint Posterior Distribution
3 EP for Robust Linear Regression with Poisson Noise
3.1 EP Approximation Using Auxiliary Variables
3.2 EP Updates
4 Simulation Results
4.1 Comparison with Monte Carlo Sampling (Without Anomalies)
4.2 Anomaly Detection
5 Conclusions
References
Spacecraft Health Monitoring Using a Weighted Sparse Decomposition
1 Introduction
2 Proposed Anomaly Detection Method
2.1 Preprocessing
2.2 Anomaly Detection Using a Weighted Sparse Decomposition
2.3 Anomaly Detection Strategy
2.4 Weight Updating Using a Correlation Coefficient
3 Experimental Results
4 Conclusion
References
When Anomalies Meet Aero-elasticity: An Aerospace Industry Illustration of Fault Detection Challenges
1 Introduction
2 Industrial Context and Constraints
3 Oscillatory Failure Cases (OFC)
4 Limit Cycle Oscillations (LCO)
5 Conclusions
References
Anomaly Detection with Discrete Minimax Classifier for Imbalanced Dataset or Uncertain Class Proportions
1 Introduction
2 Computation of the Minimax Classifier
3 Experiments
3.1 Training Step
3.2 Test Step
4 Conclusion
References
A Comparative Study of Statistical-Based Change Detection Methods for Multidimensional and Multitemporal SAR Images
1 Introduction
1.1 Motivations
1.2 Change Detection Overview
2 Statistical Modeling of SAR Images
2.1 The Data
2.2 Gaussian Model
2.3 Non-Gaussian Modeling
3 Dissimilarity Measures
3.1 Problem Formulation
3.2 Hypothesis Testing Statistics
3.3 Information-Theoretic Measures
3.4 Riemannian Geometry Distances
3.5 Optimal Transport
3.6 Summary
4 Analysis of Complexity
5 Experiments on Real Data
5.1 Methodology and Aim of the Study
5.2 Datasets Used
5.3 Reproducibility
5.4 Results and Discussion
6 Conclusions
References
Bearing Anomaly Detection Based on Generative Adversarial Network
1 Introduction
2 Generative Adversarial Network
3 GAN-Based Bearing Anomaly Detection Method
4 Experiment Description
5 Conclusion
References
Classification/Analysis for Fault or Wear Diagnosis
Specific Small Digital-Twin for Turbofan Engines
1 Engine State
2 Normalization
3 More Data and Time Series
4 Smoothing and Organizing
5 Neural Network
6 Conclusion
References
Fault Diagnosis of Single-Stage Bevel Gearbox by Energy Operator and J48 Algorithm
1 Introduction
2 Theoretical Background
2.1 Energy Operator (EO)
2.2 J48 Decision Tree Algorithm
3 Experimental Setup
4 Results and Discussions
4.1 Fault Identification
4.2 Fault Classification
5 Conclusion
References
An Efficient Multi-scale-Based Multi-fractal Analysis Method to Extract Weak Signals for Gearbox Fault Diagnosis
1 Introduction
2 The GFD Framework and Related Works
2.1 Proposed GFD Framework
2.2 Distance-Based Feature Selection
2.3 Random Forest Classifier
3 Multi-scale-Based Multi-fractal Analysis Method
3.1 Basic MFDFA Method
3.2 Multi-scale-Based Multi-fractal Analysis Method
4 Results and Analysis
4.1 Descriptions of Datasets
4.2 Experimental Setup
4.3 Experimental Results
5 Conclusions
References
A Novel Method of PEM Fuel Cell Fault Diagnosis Based on Signal-to-Image Conversion
1 Introduction
2 PEM Fuel Cell Data Generation and Feature Extraction Techniques
2.1 PEM Fuel Cell 2D Image Generation Technique
2.2 Image Feature Extraction Techniques
3 Description of PEM Fuel Cell Test
4 Performance of Proposed Method in PEM Fuel Cell Fault Diagnosis
5 Conclusions
References
Fault Classification of Polymer Electrolyte Membrane Fuel Cell System Based on Empirical Mode Decomposition
1 Introduction
2 PEMFCs Typical Structure and Operating Principle
3 Description of EMD and Proposed Feature Selection Method
4 Effectiveness of Proposed Method in PEMFC Fault Diagnosis
4.1 The PEM Fuel Experimental Setup
4.2 Performance of Selected Features in PEMFC Fault Diagnosis
5 Conclusion
References
Fleet-Oriented Pattern Mining Combined with Time Series Signature Extraction for Understanding of Wind Farm Response to Storm Conditions
1 Introduction
2 Methodology
2.1 Overview
2.2 Frequent Pattern Mining
2.3 Data-Windowing
2.4 Farm-Wide Support
3 Experimental Case
3.1 Overview
3.2 Pattern Mining on Farm 1
3.3 Pattern Searching in Farm 2
3.4 SCADA 1-Second Event Analysis on Farm 2
3.5 Event Fusion
4 Conclusions
References
Efficiency and Durability of Spur Gears with Surface Coatings and Surface Texturing
1 Introduction
2 Test Setup
3 Experimental Results: Efficiency and Durability
3.1 Efficiency
3.2 Durability
4 Conclusions
References
System Model for Condition Monitoring and Prognostic
Wheel-Rail Contact Forces Monitoring Based on Space Fixed-Point Strain of Wheel Web and DIC Technology
1 Introduction
2 Wheel Static Finite Element Model
2.1 Finite Element Modeling
2.2 Result Analysis
3 Measurement Method
4 Simulation Verification
5 Conclusions and Future Work
5.1 Conclusions
5.2 Future Work
References
Experimental Validation of the Blade Excitation in a Shaft Vibration Signals
1 Introduction
2 Asymmetry in Blisk Vibration
3 Experiments
4 Results
5 Conclusions
References
Seismic Risk Evaluation by Fragility Curves Using Metamodel Methods
1 Introduction
2 Theories and Methodology
2.1 Fragility Function
2.2 Analytical Fragility Curves Development Methods
2.3 Metamodel Methods
2.4 Metamodel Accuracy Evaluation
3 Numerical Cases
3.1 Four-Story Building Model
3.2 Uncertainties
3.3 Comparison of Multiple Stripes Analysis and Incremental Dynamic Analysis
3.4 Metamodel Accuracy Evaluation
3.5 Development of Fragility Curves by Considering Uncertainties
4 Conclusions
References
Reliability Centred Maintenance (RCM) Assessment of Rail Wear Degradation
1 Reliability and Rail Wear Degradation
2 Rail Wear Degradation Methodology
3 Wear Degradation Analysis
4 Lifetime Data Analysis
5 Conclusion
References
Prognostic of Rolling Element Bearings Based on Early-Stage Developing Faults
1 Introduction
2 Feature Selection
2.1 Rolling Bearing Failure Modes
2.2 Selecting Appropriate Band Filter for Envelope Analysis
2.3 Degradation Features Extraction
3 Experimental Data
4 Artificial Intelligence Modeling
5 Results
6 Conclusion
References
Recent Technologies for Smart and Safe Systems
Fault-Tolerant Actuation Architectures for Unmanned Aerial Vehicles
1 Introduction
2 Fault-Tolerant Technologies
2.1 Electrical Motors
2.2 Electrical Power Drive
3 Fault-Tolerant Actuation Architectures
4 Discussion
4.1 Electrical Topology
4.2 Architecture Characterization for Airworthiness Certification
5 Conclusion
References
Towards Certifiable Fault-Tolerant Actuation Architectures for UAVs
1 Introduction
2 UAV Safety Requirements Review
2.1 European Aviation Safety Agency Civil Rulemaking and SC-RPAS.1309
2.2 Military Regulations
2.3 Consequences for TEMA-UAV Requirements
3 A Scheme for Evaluating Flight Control Actuation Architectures
4 Use Case Analysis
4.1 Aircraft FHA
4.2 Use Case Definition
4.3 Derivation of Safety Requirements/Preliminary Aircraft Safety Assessment (PASA)
4.4 PASA Results
4.5 Consequences for TEMA-UAV
5 Conclusion
References
Development of a Patrol System Using Augmented Reality and 360-Degree Camera
1 Introduction
2 Self-localization Technology
3 Support Function Using AR Technology [2, 3]
3.1 AR Technology
3.2 Examples of AR Application
4 Patrol System Using 360-Degree Camera
4.1 360-Degree Video Browsing Function
4.2 Differential Processing Function Between Before and After Images
5 Conclusions
References
Research on Evaluation Method of Safety Integrity for Safety-Related Electric Control System of Amusement Ride
1 Background
2 Process of Safety Integrity Evaluation
3 RSIL Allocation
3.1 Identification of SRECS
3.2 Risk Assessment of SRCF
4 SIL Verification
4.1 Reliability Parameters
4.2 RSIL Verification
5 Optimization and Test Confirmation
6 Conclusions
References
Study on Warning System of Transportable Pressure Equipment from Regulatory Perspective
1 Introduction
2 Overall Design of Warning System
3 Risk Identification and Index System Establishment
3.1 Risk Identification
3.2 Warning Index System Establishment
3.3 Index Weight Calculation
4 Study on Warning Model
4.1 Classification of Warning Levels
4.2 Warning Model
5 Conclusions
References
Industry 4.0: Why Machine Learning Matters?
1 Introduction
2 Industry 4.0
3 Why Machine Learning Matters?
4 Case Studies of Machine Learning in Industry 4.0
4.1 Machine Learning in Additive Manufacturing
4.2 Machine Learning in Digital Twin
5 Conclusion and Perspectives
References
Condition Monitoring of Electromechanical Systems
A Method for Online Monitoring of Rotor–Stator Rub in Steam Turbines
1 Introduction
2 Rotor–Stator Contact Diagnostics
3 Localization of the Rub During Operation of the Turbine
4 Conclusion
References
Implementation of Instantaneous Power Spectrum Analysis for Diagnosis of Three-Phase Induction Motor Faults Under Mechanical Load Oscillations; Case Study of Mobarakeh Steel Company, Iran
1 Introduction
2 IPSA in Healthy, Faulty and Load Oscillation Conditions
2.1 Case of a Healthy Motor
2.2 Case of a Defective Motor
2.3 Case of a Varying Mechanical Load
3 Introducing the Case Study
4 Computer Simulation and Results
5 Conclusion
References
Analysis of Unbalanced Magnetic Pull in a Multi-physic Model of Induction Machine with an Eccentric Rotor
1 Introduction
2 Method of Analysis
2.1 Multi-physic Model
2.2 Effective Air-Gap Distance
2.3 Calculation of UMP
3 Results and Discussions
3.1 Simulation Results of the Case with 10% Ee Static Eccentricity
3.2 Variation of UMP with Different Eccentricities
3.3 Vibration Analysis from the Support Acceleration
4 Conclusions
References
Bearings Fault Diagnosis in Electromechanical System Using Transient Motor Current Signature Analysis
1 Introduction
2 Experimental Analysis
2.1 Experimental Setup
2.2 Experimentation
3 Methodology
3.1 Wavelet Transform
3.2 Artificial Neural Network
4 Results
5 Conclusions
References
SHM/NDT Technologies and Applications
Assessment of RC Bridge Decks with Active/Passive NDT
1 Introduction
2 Snap-Shot Monitoring
3 Continuous Monitoring
4 Future Technologies to Introduce the NDTs to Maintenance Program of Infrastructures
5 Conclusions
References
Estimation of In-Plane Strain Field Using Computer Vision to Capture Local Damages in Bridge Structures
1 Introduction
2 Experimental Study
3 Conclusion
References
Wireless Strain Sensors for Dynamic Stress Analysis of Rail Steel Structural Integrity
1 Introduction
2 Strain Measurement
3 Numerical Simulation
4 Fatigue Life Prediction
5 Conclusions
References
Purpose and Practical Capabilities of the Metal Magnetic Memory Method
1 Introduction
2 Stress Concentration Zones in Metal Magnetic Memory Method
3 Integration of MMM Method with Other NDT Methods
4 Development Capabilities of the MMM Method
5 Conclusion
References
Development and Application of Inner Detector in Drilling Riser Auxiliary Lines Based on Magnetic Memory Testing
1 Introduction
2 Design of the Inner Detector
2.1 Analysis of Design Requirement
2.2 Details of the Design
3 Laboratory Test
3.1 Artificial Flaws
3.2 Data Processing and Result
4 Industrial Application
5 Conclusion and Perspective
Reference
Defect Recognition and Classification Techniques for Multi-layer Tubular Structures in Oil and Gas Wells by Using Pulsed Eddy Current Testing
1 Introduction
2 Experiment System and Specimen
2.1 Experiment System
2.2 Tubular Structures Defect Specimens
3 Theory
3.1 PEC Detection Principle
3.2 Principal Component Analysis
3.3 Random Forest
4 Results and Analysis
4.1 Defect Recognition
4.2 Defect Classification
5 Conclusion
References
Corrected Phase Reconstruction in Diffraction Phase Microscopy System
1 Introduction
2 Phase Reconstruction in DPM System
2.1 Conventional Phase Reconstruction Process
2.2 Local Corrections of Unwrapped Phase
3 Results and Discussion
4 Conclusions and Future Works
References
Study on Fatigue Damage of Automotive Aluminum Alloy Sheet Based on CT Scanning
1 Introduction
2 Identification of Mesoscopic Defects
2.1 Material and Specimen
2.2 Fatigue Test and Nondestructive Detection
3 Results and Discussion
4 Conclusions
References
IoT-Web-Based Integrated Wireless Sensory Framework for Non-destructive Monitoring and Evaluation of On-Site Concrete Conditions
1 Introduction
2 Research Background and Objectives
2.1 Research Objectives
2.2 Maturity Concept of Concrete
2.3 Internet of Things (IoT) in Construction Practices
3 Research Methodology
3.1 Concrete Specimen Preparation
3.2 Maturity Functions
3.3 Temperature Sensors and IoT Micro-controllers
3.4 Web Application Development
4 Results and Discussion
4.1 Concrete Temperature and Maturity Index
4.2 Concrete Compressive Strength by Uniaxial and Maturity Methods
4.3 Conclusion and Recommendations
References
A Study of Atmospheric Corrosive Environment in the North East Line Tunnel
1 Introduction
2 Methodology
2.1 Exposure Sites
2.2 Corrosion Specimens
2.3 Metal Coupons
3 Results and Discussion
3.1 Temperature and Relative Humidity
3.2 Metal Loss of Silver and Copper Sensors
3.3 Mass Loss of Metal Coupons
3.4 Microstructure Analysis
4 Conclusions and Future Work
References
Acoustic Emission
Fatigue Crack Growth Monitoring Using Acoustic Emission: A Case Study on Railway Axles
1 Introduction
2 Acoustic Emission Monitoring
3 Self-organising Map
4 Experimental Set-up
5 Results
5.1 Raw Data
5.2 SOM Training
5.3 Clustered Data
6 Discussion
7 Conclusion
References
Acoustic Emission Monitoring for Bending Failure of Laminated Glass Used on Glass Water Slide
1 Introduction
2 Experimental Details
2.1 Specimen Preparation
2.2 Three-Point Bending Tests and AE Monitoring
3 Results and Discussion
3.1 Mechanical Performance Analysis
3.2 AE Characteristics Analysis
3.3 Displacement Field and Strain Field Distribution of Laminated Glass
4 Conclusions
References
Source Location of Acoustic Emission for Anisotropic Material Utilizing Artificial Intelligence (WCCM2019)
1 Introduction
2 Experiment
2.1 Measurement of AE
2.2 Use of Neural Network
3 Result and Discussion
4 Conclusion
References
Standard Development and Application on Acoustic Emission Condition Monitoring of Amusement Device
1 Introduction
2 Acoustic Emission Testing of Rolling Bearings
2.1 Test Rig and Instrument
2.2 AE Characteristics of Rolling Bearings
3 Analysis and Evaluation Method of AE Condition Monitoring of Amusement Devices
3.1 AE Data Analysis
3.2 Evaluation Method
4 Application on the Top Spin
4.1 Testing Object and AE Instrument
4.2 Testing Results
5 Conclusions
References
Fault Diagnosis of Pitch Bearings on Wind Turbine Based on Acoustic Emission Method
1 Introduction
2 Methodology
2.1 Kurtosis
2.2 Continuous Wavelet Transform (CWT)
2.3 Wavelet Packet Decomposition and Reconstruction
3 Experiment
3.1 Tensile Experiment
3.2 Experimental Conclusions
4 System Design and Application
4.1 System Design
4.2 Analysis Result of Real Application Data
5 Conclusion
References
Ultrasonic Testing
In Situ Condition Monitoring of High-Speed Rail Tracks Using Diffuse Ultrasonic Waves: From Theory to Applications
1 Introduction
2 DUW-based Approach and Condition-contrasting Concept
3 Experimental Validation
3.1 Set-Up and Test Strategy
3.2 Results
4 Deployment and Implementation of the Proposed Approach
5 Conclusions
References
Theoretical Analysis of Scattering Amplitude Calculation of Torsional Wave on a Pipe
1 Introduction
2 Reciprocity Theorem Application
2.1 Reciprocity Theorem
2.2 Equivalent Force of Surface Traction
3 Application to Scattering by a Surface Corrosion
4 Torsional Wave Scattered Wave Calculation
5 Conclusions
References
Detection of a Micro-crack on a Rotating Steel Shaft Using Noncontact Ultrasonic Modulation Measurement
1 Introduction
2 Air-Coupled Ultrasonic System
3 Spectral Correlation-Based Crack Diagnosis
4 Detection of a Micro-crack on Steel Shafts
5 Conclusion
References
Measurement of Axial Force of Bolted Structures Based on Ultrasonic Testing and Metal Magnetic Memory Testing
1 Introduction
2 Principle of Measurement
2.1 Acoustoelasticity Effect for Ultrasonic Testing
2.2 Stress-Magnetization Effect for Metal Magnetic Memory Testing
3 Experimental Systems
4 Numerical Simulation
5 Test Results and Discussion
5.1 Results for Ultrasonic Testing
5.2 Results for Metal Magnetic Memory Testing
6 Conclusions
References
Ultrasonic Testing of Anchor Bolts
1 Introduction
2 Proposed UT Inspection
2.1 Process Flowchart for the Test Phase
2.2 Process Flowchart for the Normalization Phase
2.3 Process Flowchart for the Evaluation Phase
3 Conclusion
References
Weld Inspection of Piping Elbow Using Flexible PAUT Probe
1 Introduction
2 Experimental Details
2.1 Design and Fabrication of Flexible Probe
2.2 Elbow Specimen
2.3 Elbow Specimen
3 Results
3.1 Characterization of the Flexible Probe
3.2 Detection of Artificial Defects by PAUT Probe
4 Conclusions
References
Laser Ultrasonic Imaging of Wavefield Spatial Gradients for Damage Detection
1 Introduction
2 Ultrasonic Wavefield Spatial Gradients
3 Experimental Setup
4 Results and Discussion
4.1 Single-Damage Case
4.2 Multi-location Damage Case
4.3 Multi-type Damage Case
5 Conclusion
References
Guided Waves Testing
Guided Ultrasonic Waves in Glass Laminate Aluminium Reinforced Epoxy
1 Why Knowledge of Guided Ultrasonic Waves in Fibre Metal Laminates Is Important
2 Experimental Setup
3 Numerical Simulation
4 Résumé
References
Ultrasonic Guided Waves in Unidirectional Fiber-Reinforced Composite Plates
1 Introduction
2 Guided Waves in a Unidirectional Composite
3 Guided Wave Motions in a Unidirectional Composite Plate by Reciprocity Considerations
4 Conclusions
References
Mode Selectivity and Frequency Dependence of Guided Waves Generated by Piezoelectric Wafer Transducers in Rebars Embedded in Concrete
1 Introduction
2 Numerical Modelling
3 Results and Discussion
4 Conclusions
References
How Temperature Variations Affect the Propagation Behaviour of Guided Ultrasonic Waves in Different Materials
1 Introduction
2 Experimental Setup
3 Data Analysis
4 Conclusion
References
Nonlinearity Study of Damaged Spring Materials Using Guided Wave
1 Introduction
2 Theory
2.1 General Solution of Nonlinear Wave Equation in the Cylindrical Rod
3 Experiment Setup and Experiment Result
3.1 Experiment Setup
3.2 Experiment Result
4 Conclusion
References
Transducer Arrangement for Baseline-Free Damage Detection Methods With Guided Waves
1 Introduction
2 Structural Health Monitoring with Guided Wave
2.1 Baseline Methods
2.2 Simultaneous Baseline Methods
2.3 Lamb Wave Decomposition
3 Simulation
4 Measurement Setup
5 Summary and Conclusions
References
Mode Switchable Guided Elastic Wave Transducer Based on Piezoelectric Fibre Patches
1 Introduction
2 Basics of Piezoelectric Fibre Patches (PFPs) and the New PFP Design
3 Modelling of the Guided Waves Generated by the Mode Switchable PFP
3.1 The Numerical Model
3.2 Modelling Results
4 Discussion and Further Work
References
Lamb Wave Excitation Using a Flexible Laser Ultrasound Transducer for Structural Health Monitoring
1 Introduction
2 Photo-acoustic Performance of the Transducer
3 Lamb Wave Excitation
4 Damage Detection Using the Laser Ultrasound Transmitter
5 Conclusions
References
Lamb Wave Propagation of S0 and A0 Modes in Unidirectional CFRP Plate for Condition Monitoring: Comparison of FEM Simulation with Experiments
1 Introduction
2 Conceptual Clarification of Group Velocity
3 Methodology
4 Results and Discussion
5 Conclusion
References
Thermographic Testing
Three-Dimensional Reconstruction of Leaked Gas Cloud Image Based on Computed Tomography Processing of Multiple Optical Paths Infrared Measurement Data
1 Introduction
2 Measurement Principle
2.1 Gas Visualization Principle
2.2 ART Reconstructed Method
3 Experimental Equipment
4 Experimental Results
5 Conclusion
Reference
Experimental Analysis on Fault Prediction of AC-Contactor by Infrared Thermography
1 Introduction
2 Residual Life Prediction Model
2.1 Weibull Distribution
2.2 Median Rank Method
3 Experimental Design
3.1 Platform Building
3.2 Experimental Steps
4 Experiment and Analysis
4.1 Analysis of Heating Mechanism
4.2 Remaining Life Prediction
5 Conclusion
References
Research on the Detecting Method of Motor Running State of Amusement Device Based on Infrared Thermography Technology
1 Introduction
2 Experiment Object and Equipment
3 Experiment Schemes
4 Experiment Preparation
4.1 Calibration for Thermal Imagers
4.2 Thermal Resistor Embedded
4.3 Platform Construction
5 Data Collection
5.1 Motor Internal Temperature Rise Data Acquisition
5.2 Motor External Temperature Rise Data Acquisition
6 Analysis of Infrared Thermography Method for Testing Motor Running State
6.1 Analysis of Methods for Testing Temperature Rise of Bearing
6.2 Analysis of Method for Inspecting Stator Temperature Rise by Infrared Thermography
7 Conclusion
References
Application of Ultrasound Thermometry to Condition Monitoring of Heated Materials
1 Introduction
2 Determining Temperature Profile and Heat Flux by Ultrasound
3 Experiment and Result
3.1 Temperature Profiling
3.2 Heat Flux Monitoring
4 Conclusions
References
Gas Leak Source Identification by Inverse Problem Analysis of Infrared Measurement Data
1 Introduction
2 Identification of Gas Leakage Source by Infrared Image and Inverse Analysis
3 Results and Discussion
4 Conclusions
Reference


Lecture Notes in Mechanical Engineering

Len Gelman · Nadine Martin · Andrew A. Malcolm · Chin Kian (Edmund) Liew
Editors

Advances in Condition Monitoring and Structural Health Monitoring WCCM 2019

Lecture Notes in Mechanical Engineering

Series Editors:
Francisco Cavas-Martínez, Departamento de Estructuras, Universidad Politécnica de Cartagena, Cartagena, Murcia, Spain
Fakher Chaari, National School of Engineers, University of Sfax, Sfax, Tunisia
Francesco Gherardini, Dipartimento di Ingegneria, Università di Modena e Reggio Emilia, Modena, Italy
Mohamed Haddar, National School of Engineers of Sfax (ENIS), Sfax, Tunisia
Vitalii Ivanov, Department of Manufacturing Engineering Machine and Tools, Sumy State University, Sumy, Ukraine
Young W. Kwon, Department of Manufacturing Engineering and Aerospace Engineering, Graduate School of Engineering and Applied Science, Monterey, CA, USA
Justyna Trojanowska, Poznan University of Technology, Poznan, Poland

Lecture Notes in Mechanical Engineering (LNME) publishes the latest developments in Mechanical Engineering—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNME. Volumes published in LNME embrace all aspects, subfields and new challenges of mechanical engineering. Topics in the series include:

• Engineering Design
• Machinery and Machine Elements
• Mechanical Structures and Stress Analysis
• Automotive Engineering
• Engine Technology
• Aerospace Technology and Astronautics
• Nanotechnology and Microengineering
• Control, Robotics, Mechatronics
• MEMS
• Theoretical and Applied Mechanics
• Dynamical Systems, Control
• Fluid Mechanics
• Engineering Thermodynamics, Heat and Mass Transfer
• Manufacturing
• Precision Engineering, Instrumentation, Measurement
• Materials Engineering
• Tribology and Surface Technology

To submit a proposal or request further information, please contact the Springer Editor of your location:
China: Dr. Mengchu Huang at [email protected]
India: Priya Vyas at [email protected]
Rest of Asia, Australia, New Zealand: Swati Meherishi at [email protected]
All other countries: Dr. Leontina Di Cecco at [email protected]
To submit a proposal for a monograph, please check our Springer Tracts in Mechanical Engineering at http://www.springer.com/series/11693 or contact [email protected]
Indexed by SCOPUS. All books published in the series are submitted for consideration in Web of Science.

More information about this series at http://www.springer.com/series/11236

Len Gelman · Nadine Martin · Andrew A. Malcolm · Chin Kian (Edmund) Liew
Editors

Advances in Condition Monitoring and Structural Health Monitoring
WCCM 2019

Editors
Len Gelman, University of Huddersfield, Huddersfield, UK
Nadine Martin, CNRS, Grenoble INP, GIPSA-Lab, University Grenoble Alpes, Grenoble, Rhône, France
Andrew A. Malcolm, Advanced Remanufacturing and Technology Centre, Singapore, Singapore
Chin Kian (Edmund) Liew, Singapore Institute of Technology, Singapore, Singapore

ISSN 2195-4356 ISSN 2195-4364 (electronic) Lecture Notes in Mechanical Engineering ISBN 978-981-15-9198-3 ISBN 978-981-15-9199-0 (eBook) https://doi.org/10.1007/978-981-15-9199-0 © Springer Nature Singapore Pte Ltd. 2021 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Contents

Signal Processing: Fault Diagnosis for Complex System Monitoring

Detection and Localization of a Gear Fault Using Automatic Continuous Monitoring of the Modulation Functions
X. Laval, N. Martin, P. Bellemain, and C. Mailhes

Multi-harmonic Demodulation for Instantaneous Speed Estimation Using Maximum Likelihood Harmonic Weighting
C. Peeters, J. Antoni, and J. Helsen

A Variational Bayesian Approach for Order Tracking in Rotating Machines
Y. Marnissi, D. Abboud, and M. Elbadaoui

Reconstructing Speed from Vibration Response to Detect Wind Turbine Faults
K. Marhadi and D. Saputra

Non-regular Sampling and Compressive Sensing for Gearbox Monitoring
C. Parellier, T. Bovkun, T. Denimal, J. Gehin, D. Abboud, and Y. Marnissi

Change Detection in Smart Grids Using Dynamic Mixtures of t-Distributions
A. Samé, M. L. Abadi, and L. Oukhellou

Unbalance Detection and Prediction in Induction Machine Using Recursive PCA and Wiener Process
A. Picot, J. Régnier, and P. Maussion

Automatic Fault Diagnosis in Wind Turbine Applications
D. Saputra and K. Marhadi

Blade Monitoring in Turbomachines Using Strain Measurements
D. Abboud, M. Elbadaoui, and N. Tableau

Measuring the Effectiveness of Descriptors to Indicate Faults in Wind Turbines
D. Saputra and K. Marhadi

Gearbox Condition Monitoring Using Sparse Filtering and Parameterized Time–Frequency Analysis
Shiyang Wang, Zhen Liu, and Qingbo He

Rotating Machinery Fault Diagnosis with Weighted Variational Manifold Learning
Quanchang Li, Xiaoxi Ding, Wenbin Huang, and Yimin Shao

Recent Advances on Anomaly Detection for Industrial Applications

A Data-Driven Approach to Anomaly Detection and Health Monitoring for Artificial Satellites
Takehisa Yairi, Yusuke Fukushima, Chun Fui Liew, Yuki Sakai, and Yukihito Yamaguchi

Robust Linear Regression and Anomaly Detection in the Presence of Poisson Noise Using Expectation-Propagation
Yoann Altmann, Dan Yao, Stephen McLaughlin, and Mike E. Davies

Spacecraft Health Monitoring Using a Weighted Sparse Decomposition
Barbara Pilastre, Jean-Yves Tourneret, Loïc Boussouf, and Stéphane D’escrivan

When Anomalies Meet Aero-elasticity: An Aerospace Industry Illustration of Fault Detection Challenges
P. Goupil, J.-Y. Tourneret, S. Urbano, and E. Chaumette

Anomaly Detection with Discrete Minimax Classifier for Imbalanced Dataset or Uncertain Class Proportions
Cyprien Gilet and Lionel Fillatre

A Comparative Study of Statistical-Based Change Detection Methods for Multidimensional and Multitemporal SAR Images
Ammar Mian and Frédéric Pascal

Bearing Anomaly Detection Based on Generative Adversarial Network
Jun Dai, Jun Wang, Weiguo Huang, Changqing Shen, Juanjuan Shi, Xingxing Jiang, and Zhongkui Zhu

Classification/Analysis for Fault or Wear Diagnosis

Specific Small Digital-Twin for Turbofan Engines
J. Lacaille

Fault Diagnosis of Single-Stage Bevel Gearbox by Energy Operator and J48 Algorithm
V. Kumar, A. K. Verma, and S. Sarangi

An Efficient Multi-scale-Based Multi-fractal Analysis Method to Extract Weak Signals for Gearbox Fault Diagnosis
Ruqiang Yan, Fei Shen, and Hongxing Tao

A Novel Method of PEM Fuel Cell Fault Diagnosis Based on Signal-to-Image Conversion
Zhongyong Liu, Weitao Pan, Yousif Yahia Ahmed Abuker, and Lei Mao

Fault Classification of Polymer Electrolyte Membrane Fuel Cell System Based on Empirical Mode Decomposition
Yousif Yahia Ahmed Abuker, Zhongyoug Liu, Lisa Jackson, and Lei Mao

Fleet-Oriented Pattern Mining Combined with Time Series Signature Extraction for Understanding of Wind Farm Response to Storm Conditions
P.-J. Daems, L. Feremans, T. Verstraeten, B. Cule, B. Goethals, and J. Helsen

Efficiency and Durability of Spur Gears with Surface Coatings and Surface Texturing
Giovanni Iarriccio, Antonio Zippo, Marco Barbieri, and Francesco Pellicano

System Model for Condition Monitoring and Prognostic

Wheel-Rail Contact Forces Monitoring Based on Space Fixed-Point Strain of Wheel Web and DIC Technology
Xiaochao Wang, Zhenggang Lu, Juyao Wei, and Yang He

Experimental Validation of the Blade Excitation in a Shaft Vibration Signals
V. Vasicek, J. Liska, J. Strnad, and J. Jakl

Seismic Risk Evaluation by Fragility Curves Using Metamodel Methods
H. Z. Yang and C. G. Koh

Reliability Centred Maintenance (RCM) Assessment of Rail Wear Degradation
H. J. Hoh, J. L. Wang, A. S. Nellian, and J. H. L. Pang

Prognostic of Rolling Element Bearings Based on Early-Stage Developing Faults
M. Hosseini Yazdi, M. Behzad, and S. Khodaygan

Recent Technologies for Smart and Safe Systems

Fault-Tolerant Actuation Architectures for Unmanned Aerial Vehicles
M. A. A. Ismail, C. Bosch, S. Wiedemann, and A. Bierig

Towards Certifiable Fault-Tolerant Actuation Architectures for UAVs
C. Bosch, M. A. A. Ismail, S. Wiedemann, and M. Hajek

Development of a Patrol System Using Augmented Reality and 360-Degree Camera
Kenji Osaki, Tomomi Oshima, Naoya Sakamoto, Tetsuro Aikawa, Yuya Nishi, Tomokazu Kaneko, Makoto Hatakeyama, and Mitsuaki Yoshida

Research on Evaluation Method of Safety Integrity for Safety-Related Electric Control System of Amusement Ride
Weike Song, Peng Cheng, and Ran Liu

Study on Warning System of Transportable Pressure Equipment from Regulatory Perspective
Z. R. Yang, Z. Z. Chen, Z. W. Wu, and P. Yu

Industry 4.0: Why Machine Learning Matters?
T. H. Gan, J. Kanfoud, H. Nedunuri, A. Amini, and G. Feng

Condition Monitoring of Electromechanical Systems

A Method for Online Monitoring of Rotor–Stator Rub in Steam Turbines
J. Liska and J. Jakl

Implementation of Instantaneous Power Spectrum Analysis for Diagnosis of Three-Phase Induction Motor Faults Under Mechanical Load Oscillations; Case Study of Mobarakeh Steel Company, Iran
S. Mani, M. Kafil, and H. Ahmadi

Analysis of Unbalanced Magnetic Pull in a Multi-physic Model of Induction Machine with an Eccentric Rotor
Xiaowen Li, Adeline Bourdon, Didier Rémond, Samuel Koechlin, and Dany Prieto

Bearings Fault Diagnosis in Electromechanical System Using Transient Motor Current Signature Analysis
Alok Verma, Haja Kuthubudeen, Viswanathan Vaiyapuri, Sivakumar Nadarajan, Narasimalu Srikanth, and Amit Gupta

SHM/NDT Technologies and Applications

Assessment of RC Bridge Decks with Active/Passive NDT
T. Shiotani, H. Asaue, K. Hashimoto, and K. Watabe

Estimation of In-Plane Strain Field Using Computer Vision to Capture Local Damages in Bridge Structures
T. J. Saravanan and M. Nishio

Wireless Strain Sensors for Dynamic Stress Analysis of Rail Steel Structural Integrity
X. P. Shi, Z. F. Liu, K. S. Tsang, H. J. Hoh, Y. Liu, Kelvin Tan Kaijun, Phua Song Yong, Kelvin, and J. H. L. Pang

Purpose and Practical Capabilities of the Metal Magnetic Memory Method
A. A. Dubov, S. M. Kolokolnikov, M. N. Baharin, M. F. M. Yunoh, and N. M. Roslin

Development and Application of Inner Detector in Drilling Riser Auxiliary Lines Based on Magnetic Memory Testing
Xiangyuan Liu, Jianchun Fan, Wei Zhou, and Shujie Liu

Defect Recognition and Classification Techniques for Multi-layer Tubular Structures in Oil and Gas Wells by Using Pulsed Eddy Current Testing
Hang Jin, Hua Huang, Zhaohe Yang, Tianyu Ding, Pingjie Huang, Banteng Liu, and Dibo Hou

Corrected Phase Reconstruction in Diffraction Phase Microscopy System
V. P. Bui, S. Gorelik, Z. E. Ooi, Q. Wang, and C. E. Png

Study on Fatigue Damage of Automotive Aluminum Alloy Sheet Based on CT Scanning
Ya li Yang, Hao Chen, Jie Shen, and Yongfang Li

IoT-Web-Based Integrated Wireless Sensory Framework for Non-destructive Monitoring and Evaluation of On-Site Concrete Conditions
A. O. Olaniyi, T. Watanabe, K. Umeda, and C. Hashimoto

A Study of Atmospheric Corrosive Environment in the North East Line Tunnel
L. T. Tan, S. M. Goh, H. Y. Gan, and Y. L. Foo

Acoustic Emission

Fatigue Crack Growth Monitoring Using Acoustic Emission: A Case Study on Railway Axles
R. Marks, A. Stancu, and S. Soua

Acoustic Emission Monitoring for Bending Failure of Laminated Glass Used on Glass Water Slide
Ran Liu, Gongtian Shen, Yong Zhang, Wei Zhou, and Pengfei Zhang

Source Location of Acoustic Emission for Anisotropic Material Utilizing Artificial Intelligence (WCCM2019)
Y. Mizutani, N. Inagaki, K. R. Kholish, Q. Zhu, and A. Todoroki

Standard Development and Application on Acoustic Emission Condition Monitoring of Amusement Device
J. J. Zhang, G. T. Shen, Z. W. Wu, and Y. L. Yuan

Fault Diagnosis of Pitch Bearings on Wind Turbine Based on Acoustic Emission Method
Guang-hai Li, Yang ming An, Tao Guo, and Guang-quan Hou

Ultrasonic Testing

In Situ Condition Monitoring of High-Speed Rail Tracks Using Diffuse Ultrasonic Waves: From Theory to Applications
K. Wang, W. Cao, and Z. Su

Theoretical Analysis of Scattering Amplitude Calculation of Torsional Wave on a Pipe
Jaesun Lee, Jan D. Achenbach, and Younho Cho

Detection of a Micro-crack on a Rotating Steel Shaft Using Noncontact Ultrasonic Modulation Measurement
I. Jeon, P. Liu, and H. Sohn

Measurement of Axial Force of Bolted Structures Based on Ultrasonic Testing and Metal Magnetic Memory Testing
Siqi Yang, Laibin Zhang, and Jianchun Fan

Ultrasonic Testing of Anchor Bolts
Y. M. Tong, A. R. K. Rajkumar, H. G. Chew, and C. K. Liew

Weld Inspection of Piping Elbow Using Flexible PAUT Probe
S. J. Lim, T. S. Park, M. Y. Choi, and I. K. Park

Laser Ultrasonic Imaging of Wavefield Spatial Gradients for Damage Detection
S. Y. Chong, Z. Wu, and M. D. Todd

Guided Waves Testing

Guided Ultrasonic Waves in Glass Laminate Aluminium Reinforced Epoxy
M. Rennoch, M. Koerdt, A. S. Herrmann, N. Rauter, and R. Lammering

Ultrasonic Guided Waves in Unidirectional Fiber-Reinforced Composite Plates
Ductho Le, Jaesun Lee, Younho Cho, Duy Kien Dao, Truong Giang Nguyen, and Haidang Phan

Mode Selectivity and Frequency Dependence of Guided Waves Generated by Piezoelectric Wafer Transducers in Rebars Embedded in Concrete
Rajeshwara Chary Sriramadasu, Ye Lu, and Sauvik Banerjee

How Temperature Variations Affect the Propagation Behaviour of Guided Ultrasonic Waves in Different Materials
M. Rennoch, J. Moll, M. Koerdt, A. S. Herrmann, T. Wandowski, and W. M. Ostachowicz

Nonlinearity Study of Damaged Spring Materials Using Guided Wave
Jeongnam Kim, Yonghee Lee, Sungun Lee, and Younho Cho

Transducer Arrangement for Baseline-Free Damage Detection Methods With Guided Waves
U. Lieske, T. Gaul, and L. Schubert

Mode Switchable Guided Elastic Wave Transducer Based on Piezoelectric Fibre Patches
Y. Kim, K. Tschöke, L. Schubert, and B. Köhler

Lamb Wave Excitation Using a Flexible Laser Ultrasound Transducer for Structural Health Monitoring
Wei Li, Jitao Xiong, and Wenbin Huang

Lamb Wave Propagation of S0 and A0 Modes in Unidirectional CFRP Plate for Condition Monitoring: Comparison of FEM Simulation with Experiments
Subin Kim, Joonseo Song, Seungmin Kim, Younho Cho, and Young H. Kim

Thermographic Testing

Three-Dimensional Reconstruction of Leaked Gas Cloud Image Based on Computed Tomography Processing of Multiple Optical Paths Infrared Measurement Data
M. Uchida, D. Shiozawa, T. Sakagami, and S. Kubo

Experimental Analysis on Fault Prediction of AC-Contactor by Infrared Thermography
Yue Yu, Wei Li, Chao Ye, and Bin Hu

Research on the Detecting Method of Motor Running State of Amusement Device Based on Infrared Thermography Technology
C. Ye, Y. Yu, J. J. Zhang, R. Liu, and G. T. Shen

Application of Ultrasound Thermometry to Condition Monitoring of Heated Materials
I. Ihara, R. Sawada, Y. Ogawa, and Y. Kawano

Gas Leak Source Identification by Inverse Problem Analysis of Infrared Measurement Data
H. Nishimura, D. Shiozawa, T. Sakagami, and S. Kubo

About the Editors

Len Gelman, Ph.D., Dr. of Sciences (Habilitation) joined Huddersfield University as Professor, Chair in Signal Processing/Condition Monitoring and Director of Centre for Efficiency and Performance Engineering, in 2017 from Cranfield University, where he worked as Professor and Chair in Vibro-Acoustical Monitoring since 2002. Len developed novel condition monitoring technologies for aircraft engines, gearboxes, bearings, turbines and centrifugal compressors. Len published more than 250 publications, 17 patents and is Co-editor of 12 Springer books. He is Fellow of: BINDT, International Association of Engineers and Institution of Diagnostic Engineers, Executive Director, International Society for Condition Monitoring, Honorary Technical Editor, International Journal of Condition Monitoring, Editor-in-Chief, International Journal of Engineering Sciences (SCMR), Chair, annual International Condition Monitoring Conferences, Honorary Co-chair, annual World Congresses of Engineering, Co-chair, International Congress COMADEM 2019 and Chair, International Scientific Committee of Third World Congress, Condition Monitoring. Len is Editorial Board member of journals Applied Sciences, Electronics, Sensors, International Journal Insight, International Journal of Acoustics and Vibration, International Journal of Prognostics and Health Management, International Journal of Basic and Applied Sciences and International Journal of Pure and Applied Mathematics. He was Chair, First World Congress, Condition Monitoring, Chair, Second World Congress, Engineering Asset Management and Chair, International Committee of Second World Congress, Condition Monitoring. Len is Chair of International CM Groups of ICNDT and EFNDT and Member of ISO Technical Committee, Condition Monitoring. Len made 42 plenary keynotes at major international conferences. He was Visiting Professor at ten Universities abroad. Len received two Rolls-Royce (UK) Awards for Innovation, COMADIT Prize (by British Institute of NDT) for significant contribution through research/development in condition monitoring, Oxford Academic Health Science Network Award, William Smith Prize by UK Institution of Mechanical Engineers and USA Navy Award.


Nadine Martin is a senior researcher at the CNRS, Nat. Centre of Scientific Research, within GIPSA-lab, Grenoble Image Speech Signal and Automatic, University of Grenoble Alpes, France. In the signal-processing domain, her research interests are the analysis, detection and modeling of non-stationary signals in the time-frequency domain, with a particular interest in conditional preventive monitoring of complex systems such as wind turbines or paper machines. Her current research focuses on expert-level automatic signal processing for continuous fault detection and localization with severity tracking. N. Martin is the author of around 150 papers, 2 patents, 7 book chapters, and co-editor of the book Time-Frequency Decision. Currently, she is an expert for MINALOGIC, the French Global Innovation Cluster for Digital Technologies, Chair of ISCM, the Int. Society for Condition Monitoring, Vice-Editor-in-Chief of IJCM, the International Journal of Condition Monitoring, and a member of a working group of the International Committee for Non-Destructive Testing, ICNDT. Within ISCM, she is active in the scientific program of Condition Monitoring conferences.

Andrew A. Malcolm is the Deputy Group Manager for the Intelligent Process Verification Group, and Senior Technical Lead for the Non-Destructive Evaluation Team at the Advanced Remanufacturing and Technology Centre (ARTC) under the Agency for Science, Technology and Research (A*STAR) in Singapore. He received his M.Sc. in Computer-Aided Engineering and his Ph.D. for a thesis on precision measurement through fringe pattern analysis from Liverpool John Moores University in 1987 and 1995 respectively. Prior to joining A*STAR in early 1996, he worked for various UK and European companies developing vision systems for non-destructive testing, process control and product inspection, and for Liverpool John Moores University as an Industrial Research Fellow and Sessional Lecturer. He has a background in non-destructive testing and inspection, machine and computer vision, image processing, visual guidance systems for autonomous vehicles, stereo vision and teleoperation control. He is currently involved with a number of research and industry projects at ARTC developing and implementing novel techniques in the areas of ultrasonic testing, eddy current testing, thermography, X-ray imaging and computed tomography, focusing on technology transfer for industrial implementation. Key application domains include aerospace, heavy machinery and fast-moving consumer goods (FMCG). His current research interests include X-ray computed tomography for visualisation and dimensional measurement, X-ray inspection and advanced image processing, non-destructive testing and evaluation, application of artificial intelligence within NDE, and the role of NDE in Industry 4.0. He is a Fellow of the Non-destructive Testing Society of Singapore and currently chairman of the certification committee responsible for the internationally-recognised SGNDT certification scheme accredited by the Singapore Accreditation Council in accordance with ISO 9712:2012 and EN4179 schemes.


Chin Kian (Edmund) Liew has more than 15 years of teaching and research experience in the field of non-destructive testing and structural health monitoring. His early research interests were in guided wave ultrasound, where he developed pattern recognition techniques to identify defects in metal beams. Subsequently, he researched several emerging NDT methods to inspect helicopter composite airframe structures for Australian Aerospace, including phased array ultrasound, radiography, and thermography. He joined the Singapore Institute of Technology (SIT) in 2013, where he was instrumental in establishing an Authorised Training Organisation (ATO) at the university for the Singapore NDT (SGNDT) professional certification. He is currently an Associate Professor at SIT, where he continues to be active in the field of NDT, delivering education and training as well as conducting applied research for a diverse range of industry projects, including the rail, aerospace, civil, and marine engineering sectors.

Signal Processing: Fault Diagnosis for Complex System Monitoring

Detection and Localization of a Gear Fault Using Automatic Continuous Monitoring of the Modulation Functions
X. Laval, N. Martin, P. Bellemain, and C. Mailhes

Abstract In the context of automatic and preventive condition monitoring of rotating machines, this paper presents a case study of a naturally worn parallel straight gear, monitored through the evolution of the modulation functions. The Hilbert demodulation is performed automatically, considering only the frequency content of the signals detected by the AStrion software. The gear has been worn over 3000 h with a constant axial load. A particular focus is set on the amplitude modulation function in order to assess its ability to characterize both the severity of the wear and the most worn part of the gear. The results are compared with on-site observations of the teeth. For this purpose, the evolutions of both amplitude and phase modulations over several meshing harmonics are compared, as well as the demodulation of both the original and the residual signals. Indicators to automatically classify the wear are discussed.

Keywords Condition monitoring · Signal processing · Hilbert demodulation · Gear fault detection · Continuous surveillance

X. Laval (B) · N. Martin · P. Bellemain
University Grenoble Alpes, CNRS, Grenoble INP, GIPSA-Lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402 Saint Martin D’heres, France
e-mail: [email protected]

C. Mailhes
University of Toulouse, IRIT/INPT-ENSEEIHT/TéSA, Toulouse, France

© Springer Nature Singapore Pte Ltd. 2021
L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_1

1 Introduction

Gearboxes are critical components of mechanical systems. An unforeseen failure in an industrial context can lead to costly shutdowns and replacements, so functional and efficient condition monitoring of these components is crucial. Day-to-day preventive maintenance, however, consists mainly in monitoring standard features such as the peak-to-peak value, the root mean square or the kurtosis, and these features are of limited use when the goal is early detection of faults whose weak vibrations are embedded in much stronger ones.
mode decomposition [4], Teager-Kaiser energy operator [5]. A common method is the Hilbert demodulation, which efficiency has been proven by McFadden [6] and Randall [7]. This paper presents a case study of a gearbox wear through automatic methods with the AStrion software using the Hilbert demodulation method. AStrion is a preventive maintenance software, developed coinjointly between the GIPSA-lab in Grenoble, France and the IRIT in Toulouse, France. This software, which principles have been presented in [8–10], operates by automatically detecting peaks and classifying them in harmonics and sideband families. The sideband families can be automatically demodulated to estimate the temporal modulation functions. This paper aims at characterizing the wear by monitoring the evolution of these functions with simple operators requiring no a priori knowledge of the system. The aim is to propose automatic methods to characterize spallings without mechanical modelling, using sensors on different positions to give credit to the diagnosis. The demodulation process will be shortly presented first. The GOTIX bench, from where the data come from, will be introduced in a second part. The monitoring of the modulation functions on a set of sensors will be presented in a third part. Specific indicators are defined from these modulation functions. The fourth part shows the ability of the indicator trends along timestamp signals to characterize the wear. Discussion of the method and its limitations in operational context will be undertaken in the last part.

2 Demodulation Process in AStrion and Physical Meaning of Modulation Functions

From the identified harmonic and sideband families, AStrion is able to demodulate a specific family in the signals where this family has been detected. The demodulation module first applies a multi-rate filter on a band corresponding either to the number of detected sidebands or to a specific user-defined number of sidebands. This narrow-band spectrum is then shifted to zero before applying a synchronous averaging at the modulation frequency. Finally, the Hilbert transform is applied without removing the deterministic part. The method has been presented in detail in [11]. After demodulation, three temporal functions representing the amplitude modulation (AM), the phase modulation (PM) and the frequency modulation (FM) are given over one cycle of the modulation frequency. If the demodulated harmonic corresponds to a meshing frequency, the AM is the envelope of the gearbox vibration. Its mean corresponds to the energy of the demodulated harmonic. It can thus be described as the instantaneous acceleration, which is an image of the force generated at the point of meshing. If the modulating frequency corresponds to a shaft connected to the gearbox, once the synchronous averaging has been applied, the AM represents the variations of the force generated by the meshing over one rotation of the gear. This force is influenced in particular by shaft misalignments, load asymmetry and teeth meshing. Furthermore, a tooth with a defect such as spalling has a different stiffness compared to a healthy one [12]. Since the acceleration measurement is directly influenced by stiffness variation, it is expected that for a damaged gear, the AM variations will be representative of the fault. The FM is the derivative of the PM and represents the instantaneous frequency, its mean being equal to the demodulated harmonic frequency. In the case of a gearbox modulated by a shaft, after the synchronous averaging at the modulation frequency, the FM represents the instantaneous speed of meshing. It can be influenced, for example, by cracked teeth. However, in particular for low-energy harmonics or for strong spalling, the AM can drop close to zero and be folded, causing a 180° phase shift and an artificial surge in the FM. Thus the FM is more problematic to use, since it can lead to false interpretations.
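To make the chain of operations concrete, the sketch below shows a simplified Hilbert demodulation of a band around one meshing harmonic, yielding synchronously averaged AM and FM functions over one modulation cycle. It is only a stand-in for the AStrion module described above (which filters, shifts and synchronously averages before applying the Hilbert transform); the function name, band definition and angular binning are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def demodulate_harmonic(x, fs, f_carrier, f_mod, n_sidebands=9, n_bins=360):
    """Rough sketch of AM/FM extraction around one meshing harmonic.

    x           : vibration signal (1-D array)
    fs          : sampling frequency [Hz]
    f_carrier   : harmonic to demodulate, e.g. a meshing frequency [Hz]
    f_mod       : modulation frequency, e.g. the gear shaft rotation [Hz]
    n_sidebands : number of sidebands kept on each side of the carrier
    n_bins      : angular resolution of the synchronously averaged functions
    """
    n = np.arange(len(x))
    # Complex demodulation: shift the band of interest down to zero frequency
    baseband = hilbert(x) * np.exp(-2j * np.pi * f_carrier * n / fs)
    # Keep only +/- n_sidebands * f_mod around the carrier (ideal low-pass in the FFT domain)
    X = np.fft.fft(baseband)
    freqs = np.fft.fftfreq(len(x), d=1 / fs)
    X[np.abs(freqs) > n_sidebands * f_mod] = 0.0
    baseband = np.fft.ifft(X)

    am = np.abs(baseband)                      # amplitude modulation (envelope)
    pm = np.unwrap(np.angle(baseband))         # phase modulation
    fm = np.gradient(pm) * fs / (2 * np.pi)    # frequency modulation (Hz deviation from the carrier)

    # Synchronous averaging over one cycle of the modulation frequency
    cycle_pos = (f_mod * n / fs) % 1.0         # position within the modulation cycle [0, 1)
    bins = (cycle_pos * n_bins).astype(int)
    counts = np.bincount(bins, minlength=n_bins)
    am_avg = np.bincount(bins, weights=am, minlength=n_bins) / counts
    fm_avg = np.bincount(bins, weights=fm, minlength=n_bins) / counts
    return am_avg, fm_avg
```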

3 Data from the GOTIX Experimental Bench

The GOTIX bench, hosted in the GIPSA-lab, is an experimental setup consisting of a gearbox driven by an asynchronous motor and loaded by a DC generator. A synchronized set of data comprising vibrations, currents, voltages and angular positions is recorded through an OROS38 acquisition system. More information can be found on the GOTIX website http://www.gipsa-lab.grenoble-inp.fr/projet/gotix/presentation.html, from which data can be downloaded. The main experiment is a long-term natural wear experiment on a parallel straight-tooth multiplier gearbox composed of a Z_gear = 57 teeth gear and a Z_pinion = 15 teeth pinion. Thus, the vibration signals are all dominated by the meshing harmonics. The different harmonics are modulated by both the input and output shafts. AStrion is able to detect all harmonics and modulations automatically, up to 57 sidebands between the strongest harmonics. AStrion results in a frequency zoom are presented in Fig. 1. It is noticeable that the sidebands ±19 of the gear shaft rotation are predominant. This is due to the assembly phase frequency, generated by the fact that the numbers of gear teeth and pinion teeth are not prime to each other: Z_gear/19 = Z_pinion/5 = 3. In order to avoid interference from this frequency and its sidebands, the maximum number of sidebands has been set to 9 on each side of the harmonic.

Fig. 1 AStrion results in a frequency zoom of an estimated GOTIX signal. The meshing harmonics are identified in (a), and an additional zoom shows the identified sidebands around the third harmonic in (b)

The bench has been rotating for over 6000 h with the same pair of gears, slowly wearing them in a completely natural way. Recordings were performed every 5 h. Currently, the gears are worn, with spalling marks on several teeth, but not yet out of use. The last change in the operating conditions happened 3300 h ago for technical reasons. Since then, the speed and load have been nearly constant, at 475 rpm and 760 Nm respectively on the driving gear shaft. Thus, the study has been performed on these last 3300 h, with 650 signals investigated, each signal corresponding to 80 s of recording. Six accelerometers are located at 3 different positions, measuring 2 axes at each position. The sensors considered in the following parts, denominated sensor 1 and sensor 2, are both radially oriented. Sensor 1 is on the gear while sensor 2 is on the pinion. The AStrion results in Fig. 1 show a predominance of the third harmonic. This is true for the 4 radial accelerometers on GOTIX and probably indicates a system resonance. However, the choice here is not to take any resonance into account but to demodulate all harmonics and to compare their similarities in order to identify the potentially faulty parts of the gear.
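As a quick sanity check on the quantities quoted in this section, the characteristic frequencies of the GOTIX gear pair can be derived from the tooth counts and the quoted shaft speed; the short worked example below only reproduces that arithmetic.

```python
# Characteristic frequencies of the GOTIX gear pair (worked example from the quoted figures).
rpm_gear = 475                       # driving gear shaft speed [rpm]
z_gear, z_pinion = 57, 15

f_gear = rpm_gear / 60.0             # gear shaft rotation  ~ 7.92 Hz
f_mesh = f_gear * z_gear             # meshing frequency    ~ 451.3 Hz
f_pinion = f_mesh / z_pinion         # pinion shaft rotation ~ 30.1 Hz

# The tooth counts share a common factor of 3 (57/19 = 15/5 = 3), so strong sidebands
# appear at +/-19 gear-shaft orders around each meshing harmonic (assembly phase frequency).
print(f"gear shaft: {f_gear:.2f} Hz, mesh: {f_mesh:.2f} Hz, pinion shaft: {f_pinion:.2f} Hz")
```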

4 Monitoring of the Modulation Functions

In order to obtain comparable modulation functions, the signals are all synchronized using a top-tour signal from the modulating gear shaft. Thus, the resulting modulation functions start from the same angular position of the gear. Harmonics with a sufficient number of sidebands are all investigated, whatever their amplitude. Furthermore, the AM and FM functions are normalized by their mean value in order to compare only the shape of these functions and to be independent of their harmonic amplitude and frequency variations. These two features are also monitored independently by AStrion. As can be seen in Fig. 2, for sensor 1, the different functions present a continuous evolution as the gear slowly wears. One can already notice trends in these plots, in particular around 200° and 280°: at these angular positions, the AM seems to present a significant evolution, either as an increase (H1) or a decrease (H3 and H6). This is also true when studying other harmonics. However, it can be very long and tedious to study each of these harmonics for every gear present in a system.

Fig. 2 Evolution of AM and FM functions for 3 different harmonics (H1, H3 and H6) over the hours of gear rotation for sensor 1. Each function length corresponds to one gear rotation, from 0° to 360°

The evolution is also present on the FM, but it can be masked by large peaks, visible on H6 in particular. These peaks arise from a 180° phase shift in the PM, caused by the folding of the AM when its value comes close to zero. This effect is illustrated in Fig. 3. They can be problematic in the context of automatic detection. They can, however, be used to automatically discard AM functions presenting this folding, their shape being distorted.

Fig. 3 AM and FM from the 6th harmonic for 2 folded cases. The link between the FM peaks and the AM folding is clearly visible on the datatips

In order to confirm the diagnosis, it can be interesting to compare accelerometers at different positions. Figure 4 shows the AM for 2 different accelerometers, among the 8 available in the system, both oriented radially: accelerometer 1 is located on the pinion shaft while accelerometer 2 is located on the gear shaft. While differences are visible, the trends are similar. By automatically demodulating both, it is possible to confirm or refute a diagnosis.

Fig. 4 AM for H1 and H3 for sensors 1 and 2 from 2 different accelerometers

Even if an experienced operator could establish a diagnosis, the process of scanning the demodulation functions for every harmonic generated by every mechanical element of a complex system can be very time-consuming and inefficient. Hence the need for features representing these evolutions that can be monitored with automatic classification techniques.

5 Demodulation Function Features

The aim of this part is to extract as much information as possible from the AM and FM functions using simple features. Thus, only classical features such as peak-to-peak (PP), root mean square (RMS) and kurtosis (K) values have been used. First, these features are computed for each measurement on the global AM functions. But in order to improve the method as well as the number of features available for the automatic classification, a gliding feature is computed on each function with a rectangular angular window of length 38°, which amounts to dividing the functions into 19 parts with a 50% overlap between consecutive parts to avoid side effects. As shown in [11], the filtering due to the choice of the demodulation band, ±9 sidebands in the present case, strongly alters the shape of the modulation functions. The choice of 19 parts is empirical and corresponds to N_parts = n_sidebands_left + n_sidebands_right + 1. It can thus be determined automatically. When linking it to the mechanics, each part contains 3 teeth, plus an overlap of around one tooth and a half on each side. The first six harmonics are examined. In the following, the x-axis is labelled with zone numbers, each zone corresponding to 3 teeth of the gear as above-mentioned.

Figure 5 shows the PP feature across the 19 angular zones over the 3300 h of rotation. The AM functions of H1, H3 and H6 highlight angular zones where PP increases, in particular in zones 9 and 10, reflecting the slow wearing of the gear. However, this phenomenon is not so clear on the FM evolution. As seen in Fig. 5, H1 FM does not present any evolution, while H3 shows some maxima in zones 10 and 11, but it does not evolve continuously. Only H6 shows an evolution.

Fig. 5 Gliding peak-to-peak features for 3 harmonics on the amplitude and frequency modulation functions and for all the time stamps. The blank lines correspond to the signals for which the modulation sidebands have not been detected. The x-axis corresponds to 19 angular zones of 3 teeth each

Focusing on the AM, Fig. 6 confirms the interest of the PP feature, both on the whole AM and on different zones. For clarity and manual comparison purposes, the features have been normalized (divided by their mean value) for every figure. In an automatic context, this step may not be necessary. However, a median filter has also been applied to smooth the results. Since its purpose is also to improve the classification results, this step is more important. The PP value on the whole AM (a) steadily increases for 4 harmonics. Furthermore, it increases for 5 harmonics in zone 10 (d), all of them starting their increase around the same moment, between 2000 and 2400 h of rotation. Only H4 starts to decrease at this moment, after a steady increase. This shows that this zone in particular is more worn than the rest of the gear. On the other zones, only zone 13 (h) shows signs of wear, with 3 of the 6 harmonics increasing. On the contrary, no trend can be discerned in zones 3 (g) and 19 (i). The results from kurtosis and RMS, whether on the whole AM (b and c) or in the worn zone 10 (e and f), are also inconclusive.

Fig. 6 For harmonics H1 to H6, PP, RMS and Kurtosis on the whole amplitude modulation function (a), (b), (c). PP, RMS and Kurtosis for zone 10 (d), (e), (f). PP for zones 3, 13 and 19 (g), (h), (i)

The fact that the PP increases in a particular zone for several harmonics is a strong validation of the PP increasing on the whole AM. It increases the likelihood of advanced wear in this zone and can reinforce an automatic diagnosis. When inspecting the state of the gear teeth in Fig. 7, it is confirmed that zone 10 is the most worn. Indeed, large spalling marks on teeth 30, 31 and 46 can be seen, the largest being on tooth 31. Teeth 30 and 31 belong to zone 10. Smaller spalling marks have been observed on 18 teeth, such as tooth 21.
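A minimal sketch of the zone-based feature extraction described above is given below: the synchronously averaged AM function over one gear rotation is divided into 19 angular windows of 38° with 50% overlap, and a peak-to-peak value is computed in each window. The exact window bookkeeping used by AStrion is not detailed in the paper, so the indexing here is an assumption. Trending the returned values per zone over successive recordings reproduces the kind of maps shown in Fig. 5.

```python
import numpy as np

def gliding_peak_to_peak(am, n_parts=19, overlap=0.5):
    """Peak-to-peak of an AM function (one gear rotation, 0-360 deg) per angular zone.

    am      : synchronously averaged amplitude modulation, sampled over one rotation
    n_parts : number of angular zones (n_sidebands_left + n_sidebands_right + 1 = 19 here)
    overlap : fraction of overlap between consecutive windows (0.5 -> 38 deg windows)
    """
    n = len(am)
    zone_step = n / n_parts                          # ~19 deg of "own" span per zone
    win = int(round(zone_step / (1.0 - overlap)))    # ~38 deg window with 50 % overlap
    pp = np.empty(n_parts)
    for z in range(n_parts):
        start = int(round(z * zone_step - (win - zone_step) / 2))
        idx = np.arange(start, start + win) % n      # wrap around the 360 deg cycle
        pp[z] = am[idx].max() - am[idx].min()
    return pp
```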


Fig. 7 Teeth 21, 30, 31 and 46 from the gear of the GOTIX bench. Circled in red are spalling marks

6 Limitations of the Method

The strongest limitation of the presented method is probably the fact that it needs quite stable operating conditions, in particular speed and load, for each measurement. The shapes of the modulation functions are strongly dependent on the excitation of the system. This article shows that with constant operating conditions, such as on an experimental bench like GOTIX, the AM and FM present a continuous evolution, pertinent for the detection of defects. However, in the context of industrial exploitation, this is rarely the case. A solution could be to monitor the state parameters (speed and torque in particular) to trigger a measurement only when the system is operating at a predefined condition. This can work for systems that are remotely and continuously monitored, such as wind turbines. When the measurements are performed by technicians, they can set these conditions before performing the measurements. Another limitation is the necessity to have angularly synchronized measurements, all starting from the same gear angle. In the results presented here, the measurements have all been synchronized using a top-tour on the gear shaft. However, such sensors are not always present and cannot be mounted easily in an industrial context. The last limitation is the changes in the system due to maintenance: greasing, for example, is likely to change the profile of the AM function abruptly and needs to be taken into account in the context of automatic monitoring, by feeding this information to the monitoring software.

7 Conclusions and Perspectives

This paper presented a case study of the natural wear of a gearbox and how to automatically assess it by monitoring the evolution of the AM and FM functions for several harmonics. It has been shown that the wear is highlighted in the very continuous evolution of the AM, more than of the FM, and that a clear indicator of it is the use of the peak-to-peak feature, both on the whole AM and on specific parts associated with the most worn part of the gear. The confirmation that the wear is concentrated on a specific part of the gear, as well as the fact that other sensors present the same characteristics, is used as a double check to ensure reliability. The double advantage of dividing the AM from several harmonics into several zones is both to provide more features for an automatic classification, increasing the reliability of the alarm raised, and to provide technicians with a zone to check on the gear in order to decide whether the gear should be replaced. The GOTIX experiments on the worn gear will resume after a repair of the driving system, and will go on until the gearbox breaks naturally. It will then be interesting to check whether the zones identified as worn now will be the ones to break, and to see the future evolution of the AM functions. Furthermore, the same study should be conducted on other types of mechanical components, bearings or planetary gearboxes for example, as well as in an industrial context, in order to confirm its interest.

References

1. Fanning P, Carden EP (2004) Vibration based condition monitoring: a review. SHM 3(4):355–377
2. Márquez FPG et al (2012) Condition monitoring of wind turbines: techniques and methods. Renew Energy 46:169–178. https://doi.org/10.1016/j.renene.2012.03.003
3. Wang WJ, McFadden PD (1995) Application of wavelets to gearbox vibration signals for fault detection. JSV 192(5):927–939
4. Liang M et al (2012) Fault diagnosis for wind turbine planetary gearboxes via demodulation analysis based on ensemble empirical mode decomposition and energy separation. Renew Energy 47:112–126. https://doi.org/10.1016/j.renene.2012.04.019
5. Antoniadou I et al (2015) A time–frequency analysis approach for condition monitoring of a wind turbine gearbox under varying load conditions. MSSP 64–65:188–216. http://dx.doi.org/10.1016/j.ymssp.2015.03.003
6. McFadden PD, Smith JD (1985) A signal processing technique for detecting local defects in a gear from the signal average of the vibration. In: Proceedings of the institution of mechanical engineers, vol 199, no 43, pp 287–292. https://dx.doi.org/10.1243/PIME_PROC_1985_199_125_02
7. Randall RB (1982) A new method of modelling gear faults. ASME, J Mech Des 104:259–267. https://dx.doi.org/10.1115/1.3256334


8. Firla M et al (2016) Automatic characteristic frequency association and all-sidebands demodulation for detection of a bearing fault of a wind turbine test rig. MSSP 80:335–348. https://doi.org/10.1016/j.ymssp.2016.04.036
9. Martin N, Mailhes C (2018) Automatic data-driven spectral analysis based on a multi-estimator approach. Signal Process 146:112–125. https://doi.org/10.1016/j.sigpro.2017.12.024
10. Gerber T, Martin N, Mailhes C (2015) Time-frequency tracking of spectral structures estimated by a data-driven method. IEEE Trans Ind Electron Spec Session 52(10):6616–6626. https://dx.doi.org/10.1109/TIE.2015.2458781
11. Laval X et al (2018) Vibration response demodulation, shock model and time tracking. In: CM 2018 and MFPT 2018, Nottingham, UK, 10–12 September 2018
12. Chaari F et al (2008) Effect of spalling or tooth breakage on gearmesh stiffness and dynamic response of a one-stage spur gear transmission. Eur J Mech A/Solids 27:691–705. https://doi.org/10.1016/j.euromechsol.2007.11.005

Multi-harmonic Demodulation for Instantaneous Speed Estimation Using Maximum Likelihood Harmonic Weighting C. Peeters, J. Antoni, and J. Helsen

Abstract This paper details a novel way to estimate the instantaneous angular speed of a rotating shaft directly from its vibration signal measurement. Speed estimation through single-harmonic phase demodulation has proven itself already multiple times in the past to be a fairly reliable way to obtain accurate instantaneous angular speed estimations. However, if the chosen harmonic experiences any disturbances such as crossing orders or noise increases, the end result can be severely compromised. The technique in this paper utilizes the phase of multiple harmonics in the signal simultaneously to obtain an estimate of the instantaneous speed. The novelty lies in the simultaneous phase demodulation formulation and in the way the phases of the harmonics are combined using a maximum likelihood estimation approach. To examine the robustness of this technique, this paper investigates the potential benefits and hindrances of using a multi-harmonic demodulation approach as compared to a single-harmonic approach. Simulation and experimental results show that a significant increase in estimation accuracy can be gained by employing more information contained within the phases of the harmonics in the vibration signal. Keywords Vibrations · Instantaneous speed estimation · Tacholess · Harmonics · Phase demodulation

1 Introduction

Vibration analysis has become one of the main tools in condition monitoring of rotating machines. A recurring problem in the analysis of rotating machinery vibrations is the fluctuation of the rotation speed, causing non-stationary behavior of the vibration signal content. This issue has received much attention in recent years since knowledge of the rotation speed is an integral part of almost every vibration-based condition monitoring scheme. In general, the overall goal of condition monitoring is to obtain an understanding of the behavior of the machine and its components that is as accurate as possible. Therefore, rotation speed estimation has become one of the mainstays in vibration analysis to assess machine health. Accurate knowledge of the rotation speed allows the speed variation to be compensated for. Typically, this compensation is done by transforming the signal from the time to the angle domain. While it is possible to install an encoder or tachometer to measure the IAS directly on the machine, this imposes an additional cost for the machine operator since a technician needs to dedicate time and resources to the instrumentation [1, 2]. Because of this increased cost, a significant amount of research has been done on the estimation of the instantaneous angular speed from the vibration signal itself. A recent review of several commonly used signal processing techniques to estimate the speed from the vibration is provided in [3]. It turns out that the majority of techniques for speed estimation from the vibration signal can be divided into two groups. One group focuses on tracking harmonics in a time-frequency representation (TFR) of the signal. Methodologies that fall into this group typically try either to generate a TFR that is as accurate as possible, such as in [4–6], or to improve the tracking itself in the TFR as much as possible, such as in [7–9]. The second group utilizes phase demodulation of a harmonic. Typically, a single harmonic with a high signal-to-noise ratio is chosen and then demodulated using its analytic signal representation; examples are given in [10–13]. Methods that do not fall into these two groups employ various other ways to estimate the speed and include techniques such as Kalman filters, scale transforms, or wavelets. This paper discusses the derivation of a multi-harmonic phase demodulation methodology and can thus be categorized as a member of the second group of techniques. The methodology section describes how the phases of multiple harmonics can be combined and how the technique can be made more robust to low signal-to-noise ratio harmonics. The latter improvement significantly increases the applicability of this technique to industrial cases. The proposed methodology is illustrated on a simulated signal and on an experimental case study using vibrations from an aircraft jet engine gearbox. The obtained results of the multi-harmonic demodulation are compared to the conventional single-harmonic demodulation method. This comparison clearly indicates the significant gain in accuracy and robustness that can be made by incorporating more harmonics and thus more signal content in the speed estimation process.

C. Peeters (B) · J. Helsen
Applied Mechanics, Vrije Universiteit Brussel, Pleinlaan 2, Brussels, Belgium
e-mail: [email protected]
J. Antoni
Laboratoire Vibrations Acoustique, INSA-Lyon, Villeurbanne, France

2 Methodology The idea of the method is based on extending the conventional single-harmonic phase demodulation to multiple harmonic phase demodulation. Additionally, a maximum likelihood estimation methodology is added to estimate weights for every harmonic. Both steps are described in Sects. 2.1 and 2.2.


2.1 Multi-harmonic Demodulation (MHD)

The vibration signal x(n) (of length N) is considered to be a sum of a set of K complex harmonic signals x_k(n), each defined as a complex exponential:

\[ x(n) = \sum_{k=1}^{K} A_k(n)\, e^{j(\alpha_k \theta(n) + \varphi_k)} \tag{1} \]

with n being the sample number, k the harmonic number, K the total number of harmonics, A_k the real amplitude, \alpha_k the harmonic order, \varphi_k the constant phase and \theta the instantaneous angle of rotation. For just one harmonic k this sum can be written as:

\[ x_k(n) = A_k(n)\, e^{j(\alpha_k \theta(n) + \varphi_k)} + \nu_k(n) \tag{2} \]

In practice, this harmonic is filtered out using band-pass filtering. Calculating the derivative \dot{x}_k(n) gives:

\[ \dot{x}_k(n) = \dot{A}_k(n)\, e^{j(\alpha_k \theta(n) + \varphi_k)} + A_k(n)\, e^{j(\alpha_k \theta(n) + \varphi_k)}\, j\alpha_k \dot{\theta}(n) \tag{3} \]

The linear relationship between x_k(n) and \dot{x}_k(n) can be written as follows:

\[ \dot{x}_k(n) = \beta_k(n)\, x_k(n) \tag{4} \]

where \beta_k(n) is equal to:

\[ \beta_k(n) = \frac{\dot{A}_k(n)}{A_k(n)} + j\alpha_k \dot{\theta}(n) \tag{5} \]

Now, we introduce process noise \nu_k(n) (e.g., due to imperfect sampling and numerical differentiation):

\[ \dot{x}_k(n) = \beta_k(n)\, x_k(n) + \nu_k(n) \tag{6} \]

The least-squares estimator of \dot{\theta}(n) using all the filtered narrow-band harmonic signals is given by:

\[ \hat{\dot{\theta}}(n) = \arg\min_{\dot{\theta}(n)} \sum_{k=1}^{K} \left| \dot{x}_k(n) - j\alpha_k \dot{\theta}(n)\, x_k(n) \right|^2 \tag{7} \]

Calculating the derivative and setting it to zero gives, after some algebra, the following:

\[ \hat{\dot{\theta}}(n) = \frac{\sum_{k=1}^{K} \Im\!\left\{ \dot{x}_k(n)\, x_k^{*}(n) \right\}}{\sum_{k=1}^{K} \alpha_k \left| x_k(n) \right|^2} \tag{8} \]

where \Im\{\cdot\} denotes the imaginary part.
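As an illustration, Eq. (8) can be evaluated directly once the band-filtered complex harmonic signals and their derivatives are available; the sketch below assumes the K harmonics are stored as the rows of complex arrays, which is an implementation choice rather than something prescribed by the paper.

```python
import numpy as np

def speed_ls(xk, xk_dot, alpha):
    """Least-squares instantaneous speed (Eq. 8) with uniform harmonic weighting.

    xk     : (K, N) complex demodulated harmonic signals
    xk_dot : (K, N) their time derivatives
    alpha  : (K,) harmonic orders alpha_k
    Returns theta_dot(n), the instantaneous angular speed estimate per sample.
    """
    num = np.imag(xk_dot * np.conj(xk)).sum(axis=0)
    den = (alpha[:, None] * np.abs(xk) ** 2).sum(axis=0)
    return num / den
```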

2.2 Maximum Likelihood Estimation of Harmonic Weighting

Equation 8 gives a uniform weighting to every harmonic. In an attempt to make the weights more dependent on the signal-to-noise ratio, a maximum likelihood estimation is derived. Three assumptions are made:

1. The process noise is a univariate complex Gaussian distribution with zero mean and unknown variance, i.e. \nu_k(n) \sim \mathcal{N}\!\left(0, \sigma_k^2(n)\right).
2. The statistics of the process noise are independent of time and of the harmonic number k.
3. The process noises of the K different harmonics are assumed to be uncorrelated.

The second assumption is wrong since \nu_k(n) does depend on time and the harmonic number, but it allows us to use a simple Gaussian distribution. By incorporating the complex Gaussian probability density function of the process noise, we arrive at the likelihood function:

\[ P\!\left( x_1, x_2, \ldots, x_K \mid \hat{\dot{\theta}}(n) \right) = \frac{\prod_{k=1}^{K} \exp\!\left( -\dfrac{\left| \dot{x}_k(n) - \beta_k(n) x_k(n) \right|^2}{\sigma_k^2(n)} \right)}{\pi^{KN} \prod_{k} \sigma_k^2(n)} = L\!\left( \dot{\theta}(n) \right) \tag{9} \]

To find the \hat{\dot{\theta}} that maximizes Eq. 9, we minimize the negative log-likelihood:

\[ \frac{\delta \left( -\ln L\!\left( \hat{\dot{\theta}}(n) \right) \right)}{\delta \hat{\dot{\theta}}} = 0 \tag{10} \]

Solving the derivative for \hat{\dot{\theta}} gives, after some algebra:

\[ \hat{\dot{\theta}}(n)_{\mathrm{ML}} = \frac{\sum_{k=1}^{K} \Im\!\left\{ \dot{x}_k(n)\, x_k^{*}(n) \right\} \dfrac{\alpha_k}{\sigma_k^2(n)}}{\sum_{k=1}^{K} \left| x_k(n) \right|^2 \dfrac{\alpha_k^2}{\sigma_k^2(n)}} \tag{11} \]

The optimal weight for each harmonic is thus given by:

\[ w_k(n) = \frac{\alpha_k}{\sigma_k^2(n)} \tag{12} \]

The noise variance is estimated using again a maximum likelihood estimation, and it is assumed to be constant over a short time interval [n - I, n + I]. The PDF of the noise variance is given by:

\[ P\!\left( x_k \mid \sigma_k^2 \right) = \prod_{i=n-I}^{n+I} \frac{1}{\pi \sigma_k^2} \exp\!\left( -\frac{\left| \dot{x}_k(i) - \beta_k(i) x_k(i) \right|^2}{\sigma_k^2} \right) \tag{13} \]

Maximizing the log-likelihood gives:

\[ \hat{\sigma}_k(n)_{\mathrm{ML}} \approx \sqrt{ \frac{1}{2I+1} \sum_{i=n-I}^{n+I} \left| \dot{x}_k(i) - j\alpha_k \dot{\theta}(i)\, x_k(i) \right|^2 } \tag{14} \]

2.3 Practical Considerations

The main benefit of the formulation in Eq. 11 is that the denominator has a small probability of getting close to zero, since it uses the amplitudes of multiple harmonics. In practice, the complex harmonic signals x_k(n) are obtained using complex demodulation with a roughly estimated speed. The rough speed can be determined in various ways that need little input (e.g., through maximum tracking in the STFT). For the differentiation involved, a filter can be chosen that does not amplify high-frequency noise. Differentiating filters typically increase high-frequency noise, which can be observed by looking at their transfer function. Four examples of transfer function amplitude responses are shown in Fig. 1. This paper employs a first-order difference scheme since it has the lowest computation time and the produced results did not vary significantly with the differentiation filter type. The MHD method is run iteratively by estimating, in an alternating fashion, first \hat{\dot{\theta}}(n)_{\mathrm{ML}} and then \sigma_k(n)_{\mathrm{ML}}, until convergence or until the maximum number of iterations has been reached.

Fig. 1 Four examples of the magnitude responses of various differentiation filters
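A compact sketch of this alternating scheme (Eqs. 8, 11 and 14) is given below. The complex demodulation producing the harmonic signals, the choice of differentiation filter and the convergence test are outside its scope, and the moving-average noise estimate over the interval [n − I, n + I] uses a plain uniform window; all names and default values are illustrative.

```python
import numpy as np

def mhd_ml_speed(xk, xk_dot, alpha, n_iter=20, half_window=50, eps=1e-12):
    """Iterative multi-harmonic demodulation with ML harmonic weighting (Eqs. 8, 11, 14).

    xk, xk_dot  : (K, N) complex harmonic signals and their time derivatives
    alpha       : (K,) harmonic orders alpha_k
    half_window : I, half-length of the interval used to estimate the noise variance
    """
    a = alpha[:, None]
    # Eq. 8: uniformly weighted least-squares estimate as the starting point
    theta_dot = np.imag(xk_dot * np.conj(xk)).sum(axis=0) / (a * np.abs(xk) ** 2).sum(axis=0)
    kernel = np.ones(2 * half_window + 1) / (2 * half_window + 1)
    for _ in range(n_iter):
        # Eq. 14: local noise variance of each harmonic around the current speed estimate
        resid2 = np.abs(xk_dot - 1j * a * theta_dot * xk) ** 2
        sigma2 = np.array([np.convolve(r, kernel, mode="same") for r in resid2]) + eps
        # Eq. 11: maximum-likelihood weighted combination of the harmonics
        num = (np.imag(xk_dot * np.conj(xk)) * a / sigma2).sum(axis=0)
        den = (np.abs(xk) ** 2 * a ** 2 / sigma2).sum(axis=0)
        theta_dot = num / den
    return theta_dot
```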

3 Simulation Results

In order to highlight the influence of the maximum likelihood weighting, a simple example is chosen where a signal is simulated that consists of the following:

• 5 sinusoidal harmonics with a base rotation speed of 31 Hz, with the harmonic numbers chosen to be odd. The 5th harmonic, however, is chosen to be asynchronous with the speed. This means the speed-synchronous signal content only consists of the 1st, 3rd, 7th, and 9th harmonics.
• 1 lightly damped resonance in the middle of the 7th harmonic, which is detrimental for the phase demodulation of that harmonic.
• Time-varying and unitary amplitudes for all 5 harmonics (i.e., mono-component sinusoidal modulation).
• Additive white Gaussian noise at an SNR of −5 dB.

The spectrogram of the signal is shown in Fig. 2. The red rectangles indicate the lightly damped mode coinciding with the 7th harmonic and the asynchronous 5th harmonic. The simulated signal has a sample rate of 700 Hz and a duration of 20 s. In total, the MHD is run for 20 iterations.
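For reference, a signal with the ingredients listed above can be generated along the following lines; the modulation depth, the resonance damping, the exact asynchronous frequency and the relative amplitudes are not specified in the paper, so the values below are placeholders.

```python
import numpy as np
from scipy.signal import lfilter

fs, T, f0 = 700, 20.0, 31.0                  # sample rate [Hz], duration [s], base speed [Hz]
t = np.arange(int(fs * T)) / fs
rng = np.random.default_rng(0)

phase = 2 * np.pi * f0 * t                   # constant base speed (a slow drift could be added)
x = np.zeros_like(t)
for h in (1, 3, 7, 9):                       # speed-synchronous odd harmonics
    am = 1.0 + 0.3 * np.sin(2 * np.pi * 0.5 * t)      # placeholder sinusoidal amplitude modulation
    x += am * np.cos(h * phase)
x += np.cos(2 * np.pi * 5.2 * f0 * t)        # "5th" harmonic made asynchronous with the speed

# Lightly damped resonance near the 7th harmonic (7 * 31 = 217 Hz), excited by white noise
wn = 2 * np.pi * 7 * f0 / fs
r = 0.995                                    # pole radius -> light damping (placeholder)
resonance = lfilter([1.0], [1.0, -2 * r * np.cos(wn), r ** 2], rng.standard_normal(len(t)))
x += 0.5 * resonance / np.std(resonance)

# Additive white Gaussian noise at roughly -5 dB SNR
x += rng.standard_normal(len(t)) * np.std(x) * 10 ** (5 / 20)
```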

Fig. 2 Spectrogram of the simulated signal


Fig. 3 Left: Speed estimation results comparison for MHD with and without MLE weighting. Right: Normalized average weights per harmonic and per iteration

Figure 3 shows the speed estimation results with and without maximum likelihood estimation of the weights. It is clear that without proper weighting, the estimation fails. The figure also displays the evolution of the weights, which clearly shows the proper selection of the optimal harmonics for the speed estimation.

4 Experimental Results

The experimental data set comes from the Safran contest at the Surveillance 8 conference, held in Roanne, France [14]. A diagram of the aircraft engine with the faulty bearings and the locations of the accelerometers is shown in Fig. 4. A spectrogram of the processed data of sensor 2 is displayed in Fig. 5. It is obtained using a Hanning window with a length of 2^11 samples and an overlap of 95%. For comparison, the single-harmonic phase demodulation method is used on the 38th harmonic of the high-pressure shaft. The multi-harmonic demodulation method uses the first 60 harmonics as input. The estimated speed profiles for both methods are shown in Fig. 6 on the left. Visually, it is difficult to tell which method performs best. Therefore, the estimated speeds are compared to the measured encoder speed.

Fig. 4 Left: Engine and accessory gearbox. Shafts are identified by labels in amber color. Right: Diagram of the kinematics of the gearbox


Fig. 5 Spectrogram of the aircraft engine data

Fig. 6 Left: Estimated speed profiles. Right: Mean and median absolute errors

Figure 6 shows on the right the mean and median absolute deviations from the encoder speed for both methods. The multi-harmonic method clearly outperforms the single-harmonic demodulation method.

5 Conclusions

This paper discusses the potential gain in accuracy and robustness when using multiple harmonics for phase demodulation as compared to just one single harmonic. The proposed methodology presents a framework for incorporating multiple demodulated phase signals into a single demodulation step. It also provides a means to optimize the available set of harmonics such that the actual speed estimation is more accurate and less influenced by asynchronous signal components or poorly excited harmonics.

Acknowledgements The authors from VUB would like to thank the agency for Innovation by Science and Technology in Belgium for supporting the SBO HYMOP project and also SIM for the support with the MASIWEC project. They would also like to thank FWO for providing funding for a long stay abroad of Cédric Peeters at INSA Lyon.


References

1. Potter R (1990) A new order tracking method for rotating machinery. Sound Vibr 24(9):30–34
2. Fyfe K, Munck E (1997) Analysis of computed order tracking. Mech Syst Signal Process 11(2):187–205
3. Peeters C, Leclère Q, Antoni J, Lindahl P, Donnal J, Leeb S, Helsen J (2019) Review and comparison of tacholess instantaneous speed estimation methods on experimental vibration data. Mech Syst Signal Process 129:407–436
4. Peng Z, Meng G, Chu F, Lang Z, Zhang W, Yang Y (2011) Polynomial chirplet transform with application to instantaneous frequency estimation. IEEE Trans Instrum Meas 60(9):3222–3229
5. Kwok HK, Jones DL (2000) Improved instantaneous frequency estimation using an adaptive short-time fourier transform. IEEE Trans Signal Process 48(10):2964–2972
6. Cheung S, Lim JS (1992) Combined multiresolution (wide-band/narrow-band) spectrogram. IEEE Trans Signal Process 40(4):975–977
7. Barrett RF, Holdsworth DA (1993) Frequency tracking using hidden markov models with amplitude and phase information. IEEE Trans Signal Process 41(10):2965–2976
8. Wang S, Chen X, Wang Y, Cai G, Ding B, Zhang X (2015) Nonlinear squeezing time-frequency transform for weak signal detection. Sig Process 113(195):210
9. Schmidt S, Heyns PS, De Villiers JP (2018) A tacholess order tracking methodology based on a probabilistic approach to incorporate angular acceleration information into the maxima tracking process. Mech Syst Signal Process 100:630–646
10. Urbanek J, Barszcz T, Sawalhi N, Randall R (2011) Comparison of amplitude-based and phase-based methods for speed tracking in application to wind turbines. Metrol Meas Syst 18(2):295–304
11. Urbanek J, Barszcz T, Antoni J (2013) A two-step procedure for estimation of instantaneous rotational speed with large fluctuations. Mech Syst Signal Process 38(1):96–102
12. Combet F, Gelman L (2007) An automated methodology for performing time synchronous averaging of a gearbox signal without speed sensor. Mech Syst Signal Process 21(6):2590–2606
13. Bonnardot F, El Badaoui M, Randall R, Daniere J, Guillet F (2005) Use of the acceleration signal of a gearbox in order to perform angular resampling (with limited speed fluctuation). Mech Syst Signal Process 19(4):766–785
14. Antoni J, Griaton J, Andre H, Avendano-Valencia LD, Bonnardot F, Cardona-Morales O, Castellanos-Dominguez G, Daga AP, Leclere Q, Vicuna CM et al (2017) Feedback on the surveillance 8 challenge: vibration-based diagnosis of a Safran aircraft engine. Mech Syst Signal Process 97:112–144

A Variational Bayesian Approach for Order Tracking in Rotating Machines Y. Marnissi, D. Abboud, and M. Elbadaoui

Abstract Order tracking is important in rotating machine signal processing. It consists in tracking the complex envelopes of phase-varying sinusoids associated with distinct rotating mechanical elements. The resulting magnitudes and phases are usually used for vibration level control and system resonance identification. A common approach to tackle this issue is the Vold-Kalman filter. However, the performance of this filter is highly conditioned by the choice of some tuning parameters that control its selectivity. In this paper, a new order tracking method is proposed to jointly estimate the targeted envelopes and the optimal values of these parameters in an automatic manner. To do so, the order tracking problem is formulated within the Bayesian framework. In detail, prior distributions are assigned to the target variables (including the tuning parameters and the noise variance); posterior distributions are then derived and estimated using a variational Bayesian approximation algorithm. Results on simulated and real helicopter vibration data demonstrate the effectiveness of the proposed approach.

Keywords Order tracking · Vold-Kalman filter · Bayesian framework · Vibrations · Nonstationary · Rotating machines

1 Introduction

In rotating machines, each mechanical source generates a unique pattern as the machine operates. For many components, the vibratory response can be modelled accurately through a sum of phase and amplitude modulated sinusoids whose instantaneous frequencies are linearly related to the rotating frequency of the rotating component. The estimation of the amplitude and phase modulations, also known as order tracking, is of high importance in rotating machine signal processing. On the one hand, it provides the instantaneous vibratory level through the magnitude, as well as potential "phase jumps" which are associated with resonance modes. On the other hand, it can be used to estimate the deterministic components and separate them from other random interfering contributions (e.g. environmental noise, electromagnetic interference, measurement errors, etc.). When the instantaneous frequencies of the targeted components are known, a common approach to tackle this issue is the Vold-Kalman filter [1–5]. Several variants of this algorithm have been proposed in Refs. [6–8]. This filter models the order tracking problem through two equations. The first one is the data equation, which establishes the link between the observation (raw signal) and the component(s) of interest. The second is the structural equation, which adds a constraint on the smoothness of the estimated envelopes. This constraint is dictated by some tuning parameters which control the degree of smoothness or, equivalently, the selectivity of the filtering [9]. Nowadays, these parameters are very often set empirically by the operator, though their values can significantly affect the estimation. This paper addresses the shortcomings of the Vold-Kalman filter by adopting a new approach based on the Bayesian framework. One of the main advantages of the proposed formulation is that it allows joint estimation of the target envelopes and the tuning parameters from the measured signal, while providing a good approximation of the minimum mean square error estimators. This paper is organized as follows: Sect. 2 opens with a brief review of the original Vold-Kalman filter as well as a discussion of its shortcomings. In Sect. 3, the theoretical fundamentals of the newly proposed Bayesian method are presented together with an efficient iterative algorithm. In Sect. 4, the efficiency of the new method is demonstrated and compared with the Vold-Kalman method on simulations and helicopter vibration data. Finally, some conclusions are drawn in Sect. 5.

Y. Marnissi · D. Abboud (B) · M. Elbadaoui
Information & Signal Technology center, SafranTech, Rue des Jeunes Bois, Châteaufort, 78772 Magny-les-Hameaux, France
e-mail: [email protected]
M. Elbadaoui
LASPI, EA3059, Université de Lyon, UJM-St-Etienne, 42023 Saint-Etienne, France

2 A Brief Review of the Vold-Kalman Filter

2.1 Method Description

Let K > 0 be the number of target components (i.e. sinusoids) to be extracted from a measured signal of length N > 0. The data equation states that each target component k ∈ {1, ..., K} is equal to the observed signal up to an additive error \delta_k(n). This can be formulated as follows:

\[ \forall k \in \{1, \ldots, K\},\ \forall n \in \{1, \ldots, N\}: \quad y(n) - x_k(n)\, e^{j\phi_k(n)} = \delta_k(n) \tag{1} \]

The latter formulation is called a mono-component estimation problem, since each of the K components is estimated separately and independently from the others. Another approach to tackle this issue is to jointly consider the K components. In this case, Eq. (1) turns into a multi-component estimation problem with an additive error \delta(n) as follows:

\[ \forall n \in \{1, \ldots, N\}: \quad y(n) - \sum_{k=1}^{K} x_k(n)\, e^{j\phi_k(n)} = \delta(n) \tag{2} \]

The Vold-Kalman filter also requires prior information on the target components, which is integrated into the structural equation. More specifically, for every k ∈ {1, ..., K}, the complex envelope x_k(n) is assumed to have a smooth and regular variation. This is ensured by constraining the discrete difference of the complex envelope to be slowly varying, so that it acts as a low-pass filter:

\[ \forall k \in \{1, \ldots, K\},\ \forall n \in \{1, \ldots, N\}: \quad \nabla^{p+1} x_k(n) = \epsilon_k(n) \tag{3} \]

where \nabla^{p+1} is the difference operator of order p and \epsilon_k(n) is an additive error term. Given the data and structural equations (1) and (3), respectively, the least-squares estimator of each complex envelope x_k = [x_k(1), \ldots, x_k(N)]^T in the mono-component estimation problem reduces to the minimizer of the \ell_2^2 norm of the residual errors \delta_k(n) and \epsilon_k(n). The solution reads

\[ \forall k \in \{1, \ldots, K\}: \quad \hat{x}_k = \left( I + r_k A_p^T A_p \right)^{-1} C_k^H y \tag{4} \]

where r_k is a positive weighting factor, A_p is the matrix of the discrete difference of order p, C_k is the diagonal matrix formed by the terms \exp(j\phi_k(1)), \ldots, \exp(j\phi_k(N)), and (\cdot)^H denotes the conjugate transpose. Similarly, in the case of the multi-component estimation problem, the least-squares estimator of the vector of complex envelopes x = [x_1^T, \ldots, x_K^T]^T can be expressed in matrix form:

\[ \hat{x} = \left( C^H C + \Lambda^T R \Lambda \right)^{-1} C^H y \tag{5} \]

where C = [C_1, \ldots, C_K], R is the diagonal matrix formed with r_1, \ldots, r_K and \Lambda is the block-diagonal matrix formed by K matrices A_p.
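As an illustration of Eq. (4), the mono-component estimate can be computed with a sparse banded solve once the instantaneous phase of the tracked order is known; the sketch below is a simplified stand-in for a full Vold-Kalman implementation, and the construction of the difference matrix from repeated first differences is an assumption about the discretization.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def vold_kalman_mono(y, phi_k, r_k, p=2):
    """Mono-component Vold-Kalman envelope estimate (Eq. 4).

    y     : measured signal of length N (real or complex)
    phi_k : instantaneous phase of the tracked order, phi_k(n) [rad]
    r_k   : positive weighting factor controlling the filter selectivity
    p     : order of the discrete difference used in the structural equation
    """
    N = len(y)
    # A_p: discrete difference operator of order p, built by repeated first differences
    A = sp.eye(N, format="csc")
    for _ in range(p):
        rows = A.shape[0]
        D = sp.diags([-1.0, 1.0], [0, 1], shape=(rows - 1, rows), format="csc")
        A = D @ A
    ck_h_y = np.exp(-1j * np.asarray(phi_k)) * y        # C_k^H y with C_k = diag(exp(j*phi_k))
    M = (sp.eye(N, format="csc") + r_k * (A.T @ A)).astype(complex)
    return spsolve(M.tocsc(), ck_h_y)                   # x_hat_k = (I + r_k A_p^T A_p)^{-1} C_k^H y
```

Larger values of r_k make the system matrix stiffer and the envelope smoother, which is exactly the selectivity/conditioning trade-off discussed in the next subsection.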

2.2 Shortcomings of the Method

The quality of the estimator is very sensitive to the choice of the weighting factor r_k. On the one hand, the larger r_k is, the more the solution is influenced by the structural equation, resulting in a very smooth envelope. This also amounts to saying that the larger r_k is, the narrower the bandwidth of the filter and the higher the selectivity. However, using large values of r_k may cause numerical issues due to the bad conditioning of the matrix \left( C^H C + \Lambda^T R \Lambda \right), which increases linearly with r_k, resulting in nonsensical results. On the other hand, decreasing the value of r_k may lead to an amplification of the noise level in the estimated amplitude. The setting of the weighting factor turns out to be more complicated in the case of a multi-component estimation problem. In fact, considering the same value for all r_k is not generally a reliable solution, since the different components may have different energy ranges.

3 Proposed Method

3.1 Bayesian Formulation

The considered observation model is similar to that of (2). The coefficients of the noise \delta(n) are assumed to be i.i.d. according to a zero-mean complex Gaussian distribution of unknown variance 2\tau^{-1}, which means that their real and imaginary parts are real Gaussian random variables of variance \tau^{-1} (i.e. \tau denotes the precision of the real and imaginary variables). Given the Gaussian assumption, the observation distribution given x and \tau reduces to a complex Gaussian distribution with mean Cx and variance 2\tau^{-1}. Just like the Vold-Kalman method, a constraint on the smoothness of the envelopes is needed. Equation (3) can be relaxed to the probabilistic framework by considering instead the probability density function of x given the precision \tau and the regularization parameters r_1, \ldots, r_K:

\[ p(x \mid r_1, \ldots, r_K, \tau) = \prod_{k=1}^{K} p(x_k \mid r_k, \tau) \tag{6} \]

with

\[ \forall k \in \{1, \ldots, K\}: \quad p(x_k \mid r_k, \tau) = \mathrm{Cte} \cdot r_k^{N} \tau^{N} \exp\!\left( -\tfrac{1}{2}\, \tau\, r_k \left\| A_p x_k \right\|^2 \right) \tag{7} \]

where \|\cdot\| denotes the \ell_2-norm and \mathrm{Cte} is a constant independent of r_k, \tau and x_k. The noise precision \tau and the regularization parameters r_1, \ldots, r_K will be estimated jointly with the complex envelopes. For simplification reasons, conjugate priors are adopted: for each r_k, a Gamma prior with real positive parameters a_k > 0, b_k > 0, i.e. p(r_k \mid a_k, b_k) \propto r_k^{a_k - 1} \exp(-b_k r_k), and for the noise precision parameter, a Gamma prior with positive real hyperparameters a > 0, b > 0, i.e. p(\tau \mid a, b) \propto \tau^{a-1} \exp(-b\tau). Note that the hyperparameters a_k, b_k, a, b are fixed according to our prior knowledge about the parameters r_k and \tau. In the case where prior information is unavailable, one can choose small values for these parameters to obtain diffuse prior distributions. The Bayes rule allows the posterior distribution of the set of unknown parameters \Theta = \{x, r_1, \ldots, r_K, \tau\} to be derived from the observation and the considered prior distributions as follows:

\[ p(\Theta \mid y) = \frac{p(y \mid x, \tau)\, p(x \mid r_1, \ldots, r_K, \tau)\, p(\tau \mid a, b) \prod_{k=1}^{K} p(r_k \mid a_k, b_k)}{p(y)} \tag{8} \]

The negative logarithm of this conditional distribution reads, up to an additive constant independent of \Theta:

\[ f(\Theta, y) = \frac{\tau}{2} \left\| Cx - y \right\|^2 + \frac{\tau}{2} \sum_{k=1}^{K} r_k \left\| A_p x_k \right\|^2 + b\tau - \left( a + N(K+1) \right) \log(\tau) + \sum_{k=1}^{K} b_k r_k - \sum_{k=1}^{K} (N + a_k) \log(r_k) \tag{9} \]

The aim now is to derive estimates of the variables in \Theta from their joint posterior distribution (8). As the posterior distribution has an intricate form, a variational Bayesian approximation (VBA) approach is adopted to approximate it. This is addressed in the next subsection.

3.2 Algorithmic Solution

3.2.1 Variational Bayes Approximation (VBA) Principle

The VBA approach consists in approximating p(\Theta \mid y) with a distribution q(\Theta) which is as close as possible to p(\Theta \mid y) in the sense of the Kullback–Leibler divergence criterion [10]. In order to have a tractable solution, the approximate probability density is assumed to have the following factorized form:

\[ q(\Theta) = \prod_{k=1}^{K} q_{x_k}(x_k) \prod_{k=1}^{K} q_{r_k}(r_k)\, q_{\tau}(\tau) \tag{10} \]

The advantage of this factorized form is to enable a disjoint estimation of the different components while ensuring a multi-component modelling of the problem. In this case, the optimal density approximation for each variable is expressed as the exponential of the expectation of -f(\Theta, y) with respect to the densities of the remaining variables, i.e.

\[ \forall \theta_r \in \Theta: \quad q_{\theta_r}(\theta_r) \propto \exp\!\left( \mathbb{E}_{\prod_{i \neq r} q_{\theta_i}(\theta_i)} \left\{ -f(\Theta, y) \right\} \right) \tag{11} \]


3.2.2 Densities Update

Using the expression (11), one can easily show that the optimal density of the envelope x_k reduces to a circularly symmetric complex normal distribution, that is q_{x_k}(x_k) \equiv \mathcal{CN}(m_k, 2\Sigma_k), where

\[ \Sigma_k = \frac{\beta}{\alpha} \left( \mathrm{Id} + \frac{\alpha_k}{\beta_k} A_p^T A_p \right)^{-1} \quad \text{and} \quad m_k = \frac{\alpha}{\beta}\, \Sigma_k\, C_k^H \left( y - \sum_{k' \neq k} C_{k'} m_{k'} \right) \tag{12} \]

which means that the real and imaginary parts of x_k follow Gaussian distributions with means equal to the real and the imaginary parts of m_k respectively, and with the same real covariance matrix \Sigma_k. It can be noted that the computation of m_k requires inverting a matrix that has the same structure as the one that intervenes in the mono-component Vold-Kalman estimation. However, contrary to the mono-component estimation, an additional term involving the other components, namely \sum_{k' \neq k} C_{k'} m_{k'}, appears in the approximate posterior mean m_k to account for the bias introduced by the other components. Similarly, the optimal densities reduce to Gamma distributions for r_k and \tau, i.e. q_{r_k}(r_k) \equiv \mathcal{G}(\alpha_k, \beta_k) and q_{\tau}(\tau) \equiv \mathcal{G}(\alpha, \beta), where

\[ \alpha_k = a_k + N, \qquad \beta_k = b_k + \frac{\alpha}{2\beta} \left( \left\| A_p m_k \right\|^2 + 2\,\mathrm{trace}\!\left( A_p^T A_p \Sigma_k \right) \right) \tag{13} \]

\[ \alpha = a + N(K+1), \qquad \beta = b + \frac{1}{2} \left( \left\| \sum_{k=1}^{K} C_k m_k - y \right\|^2 + \sum_{k=1}^{K} \mathrm{trace}\!\left[ \Sigma_k \right] \right) + \sum_{k=1}^{K} \frac{\alpha_k}{2\beta_k} \left( \left\| A_p m_k \right\|^2 + 2\,\mathrm{trace}\!\left( A_p^T A_p \Sigma_k \right) \right) \tag{14} \]

3.2.3 Iterative Algorithm

Due to the implicit relations existing between the variables, these approximate densities are computed iteratively by updating the separable density of one variable while fixing the others, which results in the following algorithm:

1. Set initial values \alpha_k^{(0)}, \beta_k^{(0)}, m_k^{(0)}, \Sigma_k^{(0)} for k \in \{1, \ldots, K\}, and \alpha^{(0)}, \beta^{(0)}.
2. For t = 0, 1, \ldots
   a. For k = 1, \ldots, K:
      i. Update \Sigma_k^{(t+1)} and m_k^{(t+1)} using (12)
      ii. Update \alpha_k^{(t+1)}, \beta_k^{(t+1)} using (13)
   b. Update \alpha^{(t+1)}, \beta^{(t+1)} using (14).


The approximate posterior means at convergence are then used as estimators of r_k, \tau and x_k, that is

\[ \hat{\tau} = \frac{\alpha^{\mathrm{opt}}}{\beta^{\mathrm{opt}}}, \qquad \hat{r}_k = \frac{\alpha_k^{\mathrm{opt}}}{\beta_k^{\mathrm{opt}}}, \qquad \hat{x}_k = m_k^{\mathrm{opt}}. \]

3.2.4 Implementation Issues

Since the main objective is the computation of the posterior mean, the calculation and the storage of \Sigma_k is not required. However, the update of the approximate densities of the other parameters requires the knowledge of the second-order statistics of x_k, which depend on \Sigma_k. Several solutions can be adopted, for instance, using circulant border conditions for the discrete difference matrix A_p. This results in a circulant matrix \Sigma_k that is diagonalizable in the Fourier domain, so that all second-order statistics can be computed easily in the Fourier domain. Additional solutions using a diagonal approximation or a Monte Carlo approach can also be found in [10].
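A schematic implementation of the update loop of Sect. 3.2.3 might look as follows. It uses dense algebra and explicitly formed covariance matrices, so it is only practical for short signals and does not exploit the circulant simplification discussed above; the hyperparameter defaults, the initialization and the exact arrangement of the Eq. (14) update used here are assumptions made for illustration.

```python
import numpy as np

def vba_order_tracking(y, phases, p=2, a=1e-3, b=1e-3, ak=1e-3, bk=1e-3, n_iter=50):
    """Schematic VBA update loop (Eqs. 12-14), dense version for small N.

    y      : measured signal of length N (real or complex)
    phases : (K, N) instantaneous phases phi_k(n) of the tracked orders [rad]
    """
    K, N = phases.shape
    A = np.eye(N)
    for _ in range(p):                       # discrete difference operator A_p
        A = np.diff(A, axis=0)
    AtA = A.T @ A
    C = np.exp(1j * phases)                  # row k holds the diagonal of C_k
    m = np.zeros((K, N), dtype=complex)
    alpha_k = np.full(K, ak + N)             # shape parameters fixed by Eqs. 13-14
    beta_k = np.full(K, bk + 1.0)            # placeholder initialization
    alpha, beta = a + N * (K + 1), b + 1.0

    for _ in range(n_iter):
        trace_sum, smooth_sum = 0.0, 0.0
        for k in range(K):
            Sigma_k = (beta / alpha) * np.linalg.inv(np.eye(N) + (alpha_k[k] / beta_k[k]) * AtA)
            others = (C * m).sum(axis=0) - C[k] * m[k]          # contribution of the other components
            m[k] = (alpha / beta) * (Sigma_k @ (np.conj(C[k]) * (y - others)))
            quad = np.linalg.norm(A @ m[k]) ** 2 + 2 * np.trace(AtA @ Sigma_k)
            beta_k[k] = bk + (alpha / (2 * beta)) * quad        # Eq. 13
            trace_sum += np.trace(Sigma_k)
            smooth_sum += (alpha_k[k] / (2 * beta_k[k])) * quad
        fit = np.linalg.norm((C * m).sum(axis=0) - y) ** 2
        beta = b + 0.5 * (fit + trace_sum) + smooth_sum          # Eq. 14
    return m, alpha_k / beta_k, alpha / beta                     # envelopes, r_k, tau
```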

4 Experimental Results

4.1 Numerical Evaluation

The synthetic signal is a three-second signal with a sampling frequency F_s = 5000 Hz, comprising two sinusoidal components whose instantaneous frequencies are given in Fig. 1a. The instantaneous frequency of the first component, plotted with a black continuous line, is a Hanning window over the signal length, while that of the second component, plotted with a red continuous line, consists of a monotonically increasing function over 1 s, cascaded with a constant frequency (220 Hz) over the remaining two seconds. In order to evaluate the methods in a challenging setting, the second profile (red line) is purposely chosen close to the other profile in the increasing part and intersects it in the remaining part (precisely at 2 s). A complex Gaussian noise with a precision equal to 21.38 × 10^−4 is also added, which corresponds to a signal-to-noise ratio equal to 1; this gives K = 2 and N = 15,000. The amplitudes of the envelopes estimated using the proposed approach are given in Fig. 1b, together with the corresponding actual magnitudes (dotted lines). The estimated values for the other parameters are respectively r_1 = 1.58 × 10^8, r_2 = 1.09 × 10^10 and τ̂ = 21.02 × 10^−4. r_1 and r_2 are then used as inputs in the standard Vold-Kalman filter in order to provide estimators using the mono-component and multi-component models. The obtained estimators are shown in Fig. 1c, d. One can see that the multi-component estimation removes the artifacts of close and crossing frequencies. Despite the factorized form (10) assuming that the densities of the different components are separable, the Bayesian approach gives results comparable to those of the multi-component Vold-Kalman filter, yet without requiring a manual setting of the filter bandwidths.

Fig. 1 Results on simulated data. a Speed profile. The actual magnitude (dotted lines) and the estimated magnitudes using the b proposed approach, c mono-component Vold-Kalman and d multi-component Vold-Kalman

Fig. 2 Results on helicopter vibration signal. a Initial spectrogram. b Spectrogram of the estimated component. c Estimated envelope







4.2 Application to Helicopter Vibration Data

The proposed method is now applied to a helicopter vibration signal acquired with a constant sampling frequency during a run-up, in order to automatically extract the high-pressure shaft component. The spectrograms of the initial signal and of the extracted component, as well as the amplitude of the estimated envelope, are displayed in Fig. 2. Note that, for confidentiality reasons, the sampling frequency is set to one and the acceleration signal is normalized by its standard deviation. The estimated envelope shows an obvious and smooth increase of the amplitude caused by the passage through a structural resonance.

5 Conclusion

This paper proposed a new approach for vibro-acoustic component extraction in rotating machines using a Bayesian framework. The data and structural equations proposed by the standard Vold-Kalman filter have been expressed in probabilistic form. The regularization parameters controlling the selectivity of the filter are also modelled as random variables with Gamma distributions. Estimators for the envelopes, the noise variance and the regularization parameters are then derived from their resulting posterior distribution using a variational Bayesian approach. Experiments performed on simulated data and a real case of a helicopter vibration signal demonstrate the good performance of the method.

References

1. Vold H, Mains M, Blough J (1997) Theoretical foundations for high performance order tracking with the Vold-Kalman tracking filter. In: SAE transactions, pp 3046–3050
2. Vold H, Herlufsen H, Mains M, Corwin-Renner D (1997) Multi axle order tracking with the Vold-Kalman tracking filter. Sound Vibr 5:30–35
3. Herlufsen H, Gade S, Konstantin-Hansen H (1997) Characteristics of the Vold-Kalman order tracking filter. Sound Vibr 33:34–44
4. Pan MC, Lin YF (2006) Further exploration of Vold–Kalman-filtering order tracking with shaft-speed information—I: Theoretical part, numerical implementation and parameter investigations. MSSP 5:1134–1154
5. Pan MC, Lin YF (2006) Further exploration of Vold–Kalman-filtering order tracking with shaft-speed information—II: Engineering applications. MSSP 6:1410–1428
6. Feldbauer C, Holdrich R (2000) Realization of a Vold-Kalman tracking filter—a least squares problem. In: COST G-6, Limerick, pp 1–4
7. Pan MC, Lin YF (2010) Extended angular-velocity Vold–Kalman order tracking. JDSMC 3:031001
8. Pan MC, Lin YF (2010) High efficient crossing-order decoupling in Vold-Kalman filtering order tracking based on independent component analysis. MSSP 6:1756–1766
9. Tuma J (2005) Setting the passband width in the Vold-Kalman order tracking filter. In: ICSV, Lisbon, Portugal, pp 11–14
10. Marnissi Y, Zheng Y, Chouzenoux E, Pesquet JC (2017) A variational Bayesian approach for image restoration—application to image deblurring with Poisson-Gaussian noise. IEEE Trans Comput Imaging 4:722–737

Reconstructing Speed from Vibration Response to Detect Wind Turbine Faults K. Marhadi and D. Saputra

Abstract Speed information is often necessary to detect certain failure modes in wind turbines, such as unbalance, looseness, misalignment, and gear-related problems. This is normally obtained from tacho sensors targeting certain shafts, e.g. the generator shaft or the main shaft. Tacho pulses can be used to track the rotational speed of certain components as well as to perform angular resampling. Most importantly, knowing the machine speed is necessary for alarming speed-dependent failure modes. Correct speed information may be unavailable due to a broken tacho sensor or an incorrect setting of the sensor. Consequently, many failure modes cannot be detected when this information is not available. Failing to detect these failure modes early may cause unexpected secondary damage that leads to unexpected downtime of the wind turbines and loss of production. Maintenance work on correcting a speed sensor is costly, especially for offshore wind turbines, and may result in downtime. Furthermore, a condition monitoring system (CMS) for wind turbines is significantly cheaper without speed sensors, considering that a wind farm normally consists of many wind turbines. Therefore, it is important to acquire speed information without the need for a tacho sensor. This paper presents an industrial implementation of speed reconstruction based on the vibration response to detect speed-related faults, such as unbalance and gear faults. The speed is determined by frequency demodulation of response vibration signals. The Hilbert transform is used to extract the instantaneous frequency, where the derivative of the analytic signal is computed in the frequency domain. Using data from wind turbines monitored by Brüel & Kjær Vibro, we show that this method can detect speed-related faults as early as when the speed information is available through a real speed sensor. The method was able to detect unbalance, misalignment and looseness on wind turbine generators, as well as gear-related problems on the high-speed stage, intermediate stage, and planetary stage gearboxes. This offers the possibility of reducing the cost of wind turbine CMS significantly.

K. Marhadi (B) · D. Saputra
Bruel & Kjær Vibro, Skodsborgvej 307B, 2850 Nærum, Denmark
e-mail: [email protected]
D. Saputra
e-mail: [email protected]


Keywords Wind turbine CMS · Tacho pulses · Hilbert transform · Machine speed · Order cepstrum

1 Introduction Condition monitoring of a wind turbine requires monitoring the magnitudes of certain frequencies, also known as descriptors. These descriptors describe the state of a component and are trended for alarming purposes. Some of them require the speed information of certain shafts, for example, unbalance in a generator and gear-related faults in a gearbox [1] and [2]. This information is normally obtained from a speed sensor. In a wind turbine, the speed sensor is normally attached to the generator shaft. Shaft speeds and tooth mesh frequencies at various stages can be determined from the generator shaft speed using the ratios of the various shafts in the gearbox. A non-contacting eddy current displacement sensor is normally used as the speed reference sensor. This type of sensor needs to be adjusted correctly to obtain a correct speed measurement. The generator shaft rotation often drifts over time, which can make the initial speed sensor setting invalid. An incorrect speed sensor setting can make the speed reference measurement invalid, which results in unreliable speed-dependent descriptors. As a result, failure modes that require speed information for their detection may be missed. Several works mitigating the problem of invalid speed sensor readings have been presented [3–8]. They mostly involve estimating the speed information from vibration, so that a speed sensor is not required. In [8], speed information is estimated for wind turbine applications in case the speed sensor becomes unavailable; in that paper, prior information from the sensor is required before it becomes unavailable for the algorithm to work. Those works mainly focused on estimating the speed information itself rather than using it for fault detection. This paper focuses on using the estimated speed to detect faults in wind turbines. The speed is estimated by frequency demodulation of response vibration signals using the Hilbert transform as presented in [9] and [10]. The work in [9] shows that this is the most accurate method to determine the instantaneous speed of a shaft by frequency demodulation.

2 Hilbert Transform to Estimate Speed As described in [9] and [10], the instantaneous phase and amplitude of a vibration signal can be determined by Hilbert transform techniques, which result in an analytic signal that corresponds to the real signal. This signal can be represented as:

$$x_a(t) = A(t)\exp(j\phi(t)), \quad (1)$$

where $\dot{\phi}(t) = \omega(t) = 2\pi f(t)$.


The derivative of the instantaneous phase of an analytic signal can be represented as:

$$\omega(t) = \dot{\phi}(t) = \operatorname{Im}\!\left(\frac{\dot{x}_a(t)}{x_a(t)}\right), \quad (2)$$

where $\dot{x}_a(t)$ is the derivative of $x_a(t)$. As explained in [9], the exact derivative can be obtained by multiplying the spectral values by $j\omega$ over the frequency ranges to be demodulated before performing the inverse transform of the spectrum. With Eq. 2, the instantaneous speed of a shaft can be determined from the vibration signal at all times. However, the initial estimated speed needs to be smoothed, because the differentiation introduces noise in real vibration signals, as described in [10]. In the current paper, the initial estimated speed is smoothed using an FIR filter in the time domain, as presented in [10].
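The demodulation chain above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the band-selection step, the FIR filter length and cut-off, and the function names are assumptions made for the example.

```python
import numpy as np
from scipy.signal import hilbert, firwin, filtfilt

def estimate_speed(x, fs, band, n_teeth, fir_len=501, cutoff_hz=10.0):
    """Instantaneous shaft speed (Hz) by frequency demodulation of a
    vibration signal, following the Hilbert-transform approach of Sect. 2."""
    n = len(x)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)

    # Select the carrier band (e.g. around a tooth-mesh harmonic) in the spectrum
    X = np.fft.fft(x)
    X[(np.abs(freqs) < band[0]) | (np.abs(freqs) > band[1])] = 0.0
    carrier = np.fft.ifft(X).real

    # Analytic signal and its exact derivative (multiplication by j*omega)
    xa = hilbert(carrier)
    xa_dot = np.fft.ifft(1j * 2 * np.pi * freqs * np.fft.fft(xa))

    # Eq. (2): instantaneous angular frequency, converted to shaft speed
    omega = np.imag(xa_dot / xa)
    mesh_freq = omega / (2 * np.pi)        # instantaneous carrier frequency (Hz)
    shaft_speed = mesh_freq / n_teeth      # divide by the tooth count of the stage

    # Smooth the noisy estimate with a zero-phase FIR low-pass filter
    fir = firwin(fir_len, cutoff_hz, fs=fs)
    return filtfilt(fir, [1.0], shaft_speed)
```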

3 Fault Detection using Speed Estimation In Bruel and Kjær Vibro wind turbine condition monitoring applications, time waveform data are obtained approximately every two days. The time waveforms contain vibration signals from all monitored components. The length of each time waveform is 10.24 s with a sampling rate of 25,600 Hz. The type of wind turbine analyzed in this work has a gearbox configuration of one planetary and two helical stages. For brevity, this paper only presents the detection of generator unbalance and of gear faults at the intermediate (2nd) stage and planetary (1st) stage based on estimated speed information. A critical part of estimating speed without a speed sensor is selecting the frequency band to be demodulated, from which the speed is determined. The carrier frequency to be demodulated in this band must be as clean as possible from contamination by other components/frequencies. Furthermore, it must be able to accommodate the variation of the speed to be estimated, e.g. not more than 15% speed variation. Based on the work in [8], a band is selected to demodulate the first tooth mesh frequency of the high speed (3rd) stage of the wind turbine gearbox. Speed is estimated from the high speed stage vibration signal for the duration of the time waveform obtained, i.e. 10.24 s. The estimated speed is then converted into a rotation angle to perform order analysis, i.e. angular resampling of vibration signals, and to track the orders of interest. This is done by integrating the estimated instantaneous speed over time to get the cumulative rotations as a function of time. All failure modes detected here are based on tracking orders that indicate faults. To detect generator unbalance, the first order magnitude (1X) of the generator shaft is tracked over a period of four months, as shown in Fig. 1 for a 2 MW turbine. An increasing trend of 1X can be seen clearly starting mid April, indicating the development of unbalance. The tacho information from the generator shaft, which is also contained in the 10.24 s time waveform data, can be used to verify the estimated


Fig. 1 Trend of first order magnitude (1X) of generator shaft

speed. Throughout the paper, it is the generator shaft speed that is estimated as the reference. For comparison, the first order magnitude based on the real tacho is also presented. The maximum difference of 1X based on the estimated and the real tacho is 9.75%, and the mean difference is 4.56%. Figure 2 shows a comparison of the average actual speed and the average estimated speed in 10.24 s over the period of four months. The magnitude of 1X is proportional to the rotational speed. As Fig. 2 shows, there are small differences between the actual and estimated speeds. The maximum difference between the average actual and the average estimated speed is 0.34%. Despite the differences, unbalance can still be detected, and its development is tracked. Detecting gear-related problems involves the identification of sidebands around gear tooth mesh frequencies. The spacing of the sidebands indicates which gear has

Fig. 2 Trend of average actual and estimated speeds of turbine with unbalance generator


the problem. The spacing of the sidebands indicates modulation by the shaft to which the gear is attached. As described in [11], it is very convenient to identify families of sidebands using the cepstrum. Thus, the order cepstrum, which is defined as the inverse Fourier transform of the log-order spectrum, is used in the current work to identify sidebands related to gear problems. To detect a gear problem at the second (intermediate) stage, the vibration signal from the intermediate stage sensor is order-tracked according to the intermediate stage shaft rotations. Gear-related problems at this shaft thus manifest as sidebands spaced by one order of the intermediate shaft in the order spectrum. It is convenient to track one cycle in the order cepstrum to indicate the development of a gear-related problem at the intermediate shaft. Figure 3 shows the development of a gear problem at the intermediate shaft of a 2 MW turbine over a period of three months. As the figure shows, the gear problem occurred after the 21st of January. The average difference between one cycle tracked based on the real speed and one cycle tracked based on the estimated speed is 48.07%, and the maximum difference is 286%. Figure 4 shows a comparison between the actual and estimated generator speeds. The maximum difference between the actual and estimated generator speed is 16.86%. Despite a maximum speed difference of approximately 17%, the gear problem can be detected. A gear problem at the planetary stage, which is a very difficult failure mode to detect in wind turbines, can also be detected with the estimated speed. A wind turbine had a planetary ring defect. The ring is stationary, and the input from the main rotor of the turbine drives the planetary stage carrier. The ring problem resulted in strong modulation at three times the carrier speed, known as the ring defect frequency, which is tracked in the order cepstrum. Figure 5 shows the trend of the ring defect of the planet carrier from the order cepstrum over a period of six months. Significant development of the fault can be seen at the end of March as the magnitude of the ring defect increases. The average difference between the ring defect based on the estimated speed and the one based on the real

Fig. 3 Trend of first cycle of intermediate shaft from an order cepstrum


Fig. 4 Trend of average actual and estimated generator shaft speeds of turbine with gear problem at the intermediate shaft

Fig. 5 Trend of ring defect from an order cepstrum

tacho is 8.29%, and the maximum difference is 60.76%. Figure 6 shows a comparison between the actual and estimated generator speeds of the turbine. The maximum difference between the actual and estimated generator speed is 0.49%. Although there are a few clear differences at certain points in time, the problem at the planetary stage can be detected. The descriptors calculated based on the estimated speed in the present work can differ substantially from those calculated based on the real speed at certain times. However, the failure modes can still be detected over a long period. In fact, the faults were detected at approximately the same time as those trended using descriptors calculated from the real speed. Descriptors based on estimated and real speeds follow the same trend over time. Table 1 presents a summary of the results.
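To make the order-tracking steps described above concrete, the following sketch integrates an estimated speed profile into cumulative rotations, angularly resamples the signal, and derives the order spectrum, the order cepstrum and the 1X descriptor. It is a simplified illustration under assumed settings (a uniform rotation grid, the names samples_per_rev and max_order), not the monitoring system's actual code.

```python
import numpy as np

def order_analysis(x, fs, inst_speed_hz, max_order=32):
    """Angular resampling and order spectrum/cepstrum from an estimated
    instantaneous speed profile (same length as the vibration signal x)."""
    # Cumulative rotation (in revolutions) by integrating the speed over time
    rotations = np.cumsum(inst_speed_hz) / fs

    # Resample the signal on a uniform rotation grid (angular resampling)
    n_rev = int(np.floor(rotations[-1]))
    samples_per_rev = 2 * max_order                 # Nyquist in the order domain
    rot_grid = np.arange(n_rev * samples_per_rev) / samples_per_rev
    x_ang = np.interp(rot_grid, rotations, x)

    # Order spectrum: FFT over the angle axis; order 1 = once per revolution
    spec = np.abs(np.fft.rfft(x_ang)) / len(x_ang)
    orders = np.fft.rfftfreq(len(x_ang), d=1.0 / samples_per_rev)

    # Order cepstrum: inverse FFT of the log order spectrum; a peak at a
    # quefrency of one revolution reveals sidebands spaced by one order
    cepstrum = np.fft.irfft(np.log(spec + 1e-12))

    first_order_mag = spec[np.argmin(np.abs(orders - 1.0))]   # 1X descriptor
    return orders, spec, cepstrum, first_order_mag
```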


Fig. 6 Trend of average actual and estimated generator shaft speeds of turbine with planetary stage problem

Table 1 Summary of differences between estimated and real speeds

| Turbines | Average absolute generator speed difference (%) | Maximum average generator speed difference (%) | Average absolute descriptor difference (%) | Maximum absolute descriptor difference (%) |
|---|---|---|---|---|
| Turbine with unbalance | 0.04 | 0.34 | 4.56 | 9.75 |
| Turbine with 2nd stage gear fault | 0.48 | 16.86 | 48.09 | 286 |
| Turbine with ring defect | 0.04 | 0.49 | 8.29 | 60.76 |

4 Conclusion Wind turbine failure modes that require speed information for their detection can be identified with estimated speed information. The faults can be detected even when the estimated speed is not 100% accurate. They are identified at the same time as when descriptors based on the real speed show the development of faults. This offers the possibility of condition monitoring of wind turbines without a speed sensor, which could significantly reduce the overall cost of wind turbine condition monitoring.


References
1. Bartelmus W, Zimroz R (2009) Vibration condition monitoring of planetary gearbox under varying external load. Mech Syst Sig Process 23(1):246–257
2. Taylor JI (2000) The gear analysis handbook. The Vibration Institute
3. Coats MD, Sawalhi N, Randall R (2009) Extraction of tach information from a vibration signal for improved synchronous averaging. Acoustics, Adelaide
4. Urbanek J, Zimroz R, Barszcz T, Antoni J (2012) Reconstruction of rotational speed from vibration signal – comparison of methods. In: The 9th conference on condition monitoring and machine failure prevention technologies, London, pp 866–878
5. Zhao M et al (2013) A tacho-less order tracking technique for large speed variations. Mech Syst Signal Process 40(1):76–90
6. Zimroz R, Milioz F, Martin N (2010) A procedure of vibration analysis from planetary gearbox under non-stationary cyclic operations by instantaneous frequency estimation in time-frequency domain. In: The 7th international conference on condition monitoring and machinery failure prevention technologies, Stafford
7. Zimroz R et al (2011) Measurement of instantaneous shaft speed by advanced vibration signal processing—application to wind turbine gearbox. Metrol Meas Syst 18(4):701–712
8. Skrimpas GA et al (2015) Speed estimation in geared wind turbines using the maximum correlation coefficient. In: Annual conference of the prognostic and health management society, San Diego
9. Randall RB, Smith WA (2018) Accuracy of speed determination of a machine from frequency demodulation of response vibration signals. In: The 15th international conference on condition monitoring and machinery failure prevention technologies, Nottingham
10. Randall R, Smith W (2016) Use of the Teager Kaiser energy operator to estimate machine speed. In: European conference of the prognostic and health management society, Bilbao
11. Marhadi K, Skrimpas GA (2017) Using cepstrum information to detect gearbox faults in wind turbine planetary stage gearbox. In: The 1st world congress on condition monitoring, London

Non-regular Sampling and Compressive Sensing for Gearbox Monitoring C. Parellier, T. Bovkun, T. Denimal, J. Gehin, D. Abboud, and Y. Marnissi

Abstract Vibration-based condition monitoring is a powerful approach to achieve preventive maintenance and control the efficiency of mechanical systems. The common monitoring scheme consists of capturing vibration signals at a sufficiently high sampling frequency to account for the dynamic and kinematic properties of the machine; condition indicators are then constructed and saved or sent to a ground-based surveillance station. A major drawback of such an approach is that these indicators may not be sufficient to establish an accurate diagnosis, and the signal itself is needed in various situations. As the sampling rate of vibration signals and, consequently, their size depend strongly on the highest frequency to be monitored, it is generally set high in order to respect the Nyquist theorem. As a consequence, sending these data can be compromised by the limited bandwidth of the transmission channel. This paper proposes a non-regular sampling strategy via compressive sensing to significantly reduce the sampling frequency (below the Nyquist limit) and the data size, while preserving enough information for diagnosis. It is based on the hypothesis that the measured signal consists of a sum of sinusoids corrupted by random Gaussian noise. The signal recovery problem from random samples is then formulated as an ℓ2–ℓ1 optimization problem, which is solved via the iterative shrinkage-thresholding algorithm. Finally, the potential of the suggested approach is demonstrated on real vibration signals measured from a spur gearbox with a spalling fault that progresses over 12 days. Keywords Health monitoring · Compressive sensing · Non-regular sampling · ℓ1-minimization · Iterative shrinkage thresholding · Gearbox

C. Parellier · T. Bovkun · T. Denimal · J. Gehin Télécom Paristech, 46 Rue Barrault, 75013 Paris, France D. Abboud (B) · Y. Marnissi Safran Tech, Rue de Jeunes Bois, 78772 Châteaufort, Magny-les-Hameaux, France e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_5


1 Introduction Gearboxes are mechanical equipment used across a large range of industries. They consist of at least a pair of gears engaged with each other to ensure power transmission. Their geometry determines the gear ratio and, therefore, the speed and torque ratio. Consequently, the design and chipping state of the teeth have a major impact on the gearbox efficiency. A broken tooth may lead to severe accidents in some critical applications. In this respect, their in-service monitoring is crucial. Vibration-based condition monitoring comprises a class of well-established methods used in a predictive maintenance strategy. Sensors (e.g., accelerometers) are mounted in the vicinity of the monitored components. Traditional methods then consist in acquiring signals on a regular basis, i.e., with a constant time step. The sampling rate must be chosen high enough to account for the dynamic and kinematic properties of the machine. Afterwards, the signal is processed to compute condition indicators, which provide health information about the rotating machine. While this approach strongly condenses the data, other useful health information may be missed. In fact, it is more suitable to have a signal reasonably close to the real one. In many applications (e.g., aircraft engine monitoring), vibration signals are acquired at relatively high sampling rates due to the very high rotating frequency of the engine components. In detail, the sampling rate must respect the Nyquist/Shannon (NS) theorem, which states that the sampling frequency must be greater than twice the maximum frequency in the spectrum. This results in a huge volume of data, which is constraining in terms of transmission bandwidth and/or memory storage. To address the issues above, compressive sensing (CS) [1] has been a continuously growing research area in signal processing since 2006. It is an efficient data acquisition theory based on the sparsity of the input signal. CS allows acquiring a sampled signal and faithfully recovering the original signal while the (average) sampling rate is much lower than the NS rate. Because of these advantages, CS is widely used in numerous engineering areas such as electrical, electronic, and telecommunications engineering [2–4]. CS has proven its efficiency in rotating machinery fault detection [5], where a sparse representation classification algorithm is applied to solve problems based on ℓ1-norm minimization. Höglund et al. [6] applied CS to bridge damage detection, using a real-world vibration dataset, and proved its ability to recover a signal sampled at a rate of 90%. Their solution used for sensing a random matrix with either i.i.d. Gaussian entries or entries drawn i.i.d. from {1, −1} according to a Bernoulli distribution. This paper presents an application of compressive sensing to vibration signals produced by a spur gearbox. In order to detect gear damage, the implemented method should guarantee a reliable recovered signal from the under-sampled observed signal. The rest of the paper is organized as follows: Sect. 2 succinctly reviews the model of vibrations generated by gears, in addition to some common gear condition indicators. Then, the proposed monitoring approach is detailed. Section 3 focuses on the demonstration of the recovery quality on real signals acquired on a test bench,


including a local gear fault in a spur gearbox that evolves across the experiment. The paper closes with a conclusion in Sect. 4.

2 Gear Diagnosis by Compressive Sensing This section starts with a brief review of the characteristic frequencies of a gearbox subjected to a fault and related gear condition indicators. Next, the non-regular sampling method is detailed. Last, the problem is formulated and an efficient algorithm for signal recovery is proposed.

2.1 Gear Vibrations The studied system consists of a simple spur gearbox with two wheels. The driver gear transmits a rotational motion by meshing with the driven gear. These meshing forces are the main source of vibration. In the ideal situation, this force is spread evenly among the teeth, so its temporal profile is periodic (in the stationary case) with a frequency called the gear meshing frequency, subsequently denoted by $F_{GM} \in \mathbb{R}$. Each time a local fault in a tooth makes contact under a certain load with another tooth, an abrupt change in the contact force occurs, leading to an impulse in the vibration signal. The generated impulses are likely to modulate the meshing signal and generate multiple sidebands in the spectrum, carried by $F_{GM}$ and its harmonics and spaced by the faulty gear frequency. The number and energy of these sidebands are likely to increase with the progression of the local fault. $F_{GM} = N_i f_i$, where $N_i \in \mathbb{N}$ is the number of teeth of gear $i \in \mathbb{N}$ and $f_i \in \mathbb{R}$ is the rotating frequency of gear $i$. Since the frequencies $F_{GM}$ and $f_{1,2}$ are known, a gear condition indicator (GCI) $I_i$ can be constructed:

$$I_i = \frac{\sum_{k=-L}^{L} S(f_{GM} + k\, f_i)^2}{S(f_{GM})^2}, \quad (1)$$

where $S(f)$ denotes the signal spectrum, $f$ is the frequency (in Hz) and $L \in \mathbb{N}$ is defined such that $2L + 1$ is the considered number of sidebands. The crest factor (CF) is a condition indicator used in vibration-based condition monitoring, known to be sensitive during the early phases of damage:

$$CF(s) = \frac{S_{peaks}(s)}{RMS(s)}, \quad (2)$$

where $S_{peaks}(s)$ is the maximum value of the signal $s$ and $RMS(s)$ the root mean square of the signal $s$.


Indicators (1) and (2) are used afterwards as part of the validation of the proposed approach in Sect. 3.
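A direct implementation of indicators (1) and (2) could look as follows. This is a sketch: taking the peak as the maximum absolute value and using the nearest spectral bin for each sideband frequency are assumptions of the example, not choices stated in the paper.

```python
import numpy as np

def gear_condition_indicator(spectrum, freqs, f_gm, f_i, n_sidebands=4):
    """GCI of Eq. (1): energy of the sidebands spaced by f_i around the
    meshing frequency f_gm, normalised by the meshing-line energy."""
    def amp(f):
        return spectrum[np.argmin(np.abs(freqs - f))]   # nearest bin amplitude

    num = sum(amp(f_gm + k * f_i) ** 2
              for k in range(-n_sidebands, n_sidebands + 1))
    return num / amp(f_gm) ** 2

def crest_factor(s):
    """Crest factor of Eq. (2): peak value over RMS."""
    return np.max(np.abs(s)) / np.sqrt(np.mean(s ** 2))

# Example usage (values are illustrative):
# spec = np.abs(np.fft.rfft(s)); freqs = np.fft.rfftfreq(len(s), 1 / fs)
# gci_gear1 = gear_condition_indicator(spec, freqs, f_gm=330.0, f_i=330.0 / 20)
```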

2.2 Non-regular Sampling Method One considers a continuous-time signal $c : T \to \mathbb{R}$, where $T \subset \mathbb{R}$ is the observation interval, and whose highest frequency is denoted $f_{max} \in \mathbb{R}$. The NS theorem imposes a regular sampling at a frequency $f_{NS} = 1/T_{NS} \geq 2 f_{max}$ to capture all the information of $c$. This condition leads to a discrete-time signal $s \in \mathbb{R}^N$, where $N \geq T/T_{NS} = 2 T f_{max}$. A non-regular sampling can be performed to reduce the overall size of the data needed to describe periodic signals and thus obtain an observed signal $y \in \mathbb{R}^M$ such that $M \ll N$. To achieve the non-regular sampling and control the sampling rate, the sampled instants $\tau_m \in \mathbb{R}$ are selected following jittered random sampling:

$$\tau_m = m\,\frac{T}{M} + \delta_m, \quad (3)$$

where $\delta_m$ is an i.i.d. random variable with the uniform probability distribution $\mathcal{U}$ bounded between 0 and $T/M$, which means $\delta_m \sim \mathcal{U}(0, T/M)$. Numerically, the sampling process is computed with:

$$y = \Phi s, \quad (4)$$

where $\Phi \in \mathbb{R}^{M \times N}$ is the measurement matrix. To illustrate the previous method, the matrices $\Phi_{1,2} \in \mathbb{R}^{N/2 \times N}$ shown in (5) are two realizations of a simplified case with sampling rate $M = N/2$, in which one instant is sampled in each interval (highlighted with a red square).

(5)
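The jittered sampling of Eq. (3) and the selection operation of Eq. (4) can be emulated as below. Rounding each sampled instant to the nearest regular sample is an assumption made for this sketch.

```python
import numpy as np

def jittered_sampling(s, fs, m):
    """Jittered random sampling: one instant drawn uniformly in each of the
    M sub-intervals of the observation window (sketch of Eqs. 3-4)."""
    n = len(s)
    t_total = n / fs
    # tau_m = m*T/M + delta_m  with  delta_m ~ U(0, T/M)
    delta = np.random.uniform(0.0, t_total / m, size=m)
    tau = np.arange(m) * t_total / m + delta
    idx = np.minimum((tau * fs).astype(int), n - 1)   # nearest regular sample
    # y = Phi @ s, where Phi selects the rows 'idx' of the identity matrix
    return s[idx], idx
```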

2.3 Problem Formulation The non-regular sampling produces an observed signal y contaminated with interfering and sampling noises. The compressed sensing framework gives an efficient


approach to recover a signal reasonably close to $s$ [7]. Applying this technique, two constraints need to be fulfilled: sparsity and incoherence. Quasi-stationary mechanical signals are often quasi- and/or multi-periodic in the time domain and, thus, can be faithfully represented in the Fourier domain through a small number of non-zero coefficients (which define the Fourier coefficients of a periodic signal). In other words, mechanical signals are likely to exhibit a sparse spectrum characterized by numerous harmonics whose frequencies depend on the system kinematics. For this reason, the Fourier basis $\Psi \in \mathbb{C}^{N \times N}$ is perhaps the most convenient representation of the signal $s$. Incidentally, the incoherence property between the representation basis and the measurement matrix is guaranteed by the random measurement matrix $\Phi$ previously detailed in Sect. 2.2. Equation (4) can then be reformulated as:

$$y = \Phi \Psi^{-1} x + \epsilon, \quad (6)$$

where $x \in \mathbb{R}^N$ is the signal $s$ in the frequency basis and $\epsilon$ a Gaussian noise independent of the signal.

2.4 Proposed Solution and Algorithm To fulfill the sparsity condition on $x$ while reducing the Gaussian noise $\epsilon$, a common approach to solve (6) is the least squares method with $\ell_1$-norm regularization, also referred to as LASSO [8]:

$$\hat{x} \in \operatorname*{argmin}_{x} \left\| y - \Phi \Psi^{-1} x \right\|_2^2 + \lambda \left\| x \right\|_1, \quad (7)$$

where $\|\cdot\|_i$ is the $\ell_i$-norm and $\lambda \in \mathbb{R}_+$ is the regularization term. The larger $\lambda$ is, the more the solution is influenced by the penalty. A large $\lambda$ results in a very sparse signal and may lead to a poor estimation of peaks with low amplitudes. Conversely, decreasing the value of $\lambda$ may result in a poorly sparse signal and a high level of residual noise. Therefore, the choice of $\lambda$ can significantly affect the estimation. To solve (7) numerically, one of the most popular methods is the class of "iterative shrinkage-thresholding" algorithms called ISTA [9]. In order to improve the algorithmic efficiency, it has been coded from scratch for the specific problem (7) (Fig. 1). In fact, the computation of the Fourier transform matrix $\Psi$ is avoided due to its high computational cost. Instead, the Fast Fourier Transform implemented in SciPy [10] is used.
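The following sketch illustrates ISTA applied to problem (7) with the FFT standing in for the Fourier matrix, in the spirit of the flowchart of Fig. 1. It is not the authors' implementation: the unit step size (valid because sampling rows of a unitary FFT gives an operator norm of at most one), the norm='ortho' convention and the default value of λ are assumptions of this example.

```python
import numpy as np

def ista_recover(y, idx, n, lam=0.1, n_iter=200):
    """ISTA for the l2-l1 problem (7): recover a sparse spectrum x from the
    non-regular samples y = Phi s, with s the inverse Fourier transform of x."""
    def forward(x):                        # A x = Phi Psi^{-1} x
        return np.fft.ifft(x, norm="ortho")[idx]

    def adjoint(r):                        # A^H r (adjoint of selection = zero filling)
        z = np.zeros(n, dtype=complex)
        z[idx] = r
        return np.fft.fft(z, norm="ortho")

    def soft(v, t):                        # complex soft-thresholding operator
        mag = np.abs(v)
        return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * v, 0.0)

    x = np.zeros(n, dtype=complex)
    step = 1.0                             # valid step since ||A||^2 <= 1 here
    for _ in range(n_iter):
        x = soft(x + step * adjoint(y - forward(x)), lam * step)
    s_hat = np.fft.ifft(x, norm="ortho").real
    return x, s_hat
```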


Fig. 1 Flowchart of the implementation of ISTA for the specific problem (7)

3 Application Raw signals are measured at high resolution on a test bench with an accelerometer mounted in the vicinity of the gearbox (Sect. 2.1). One signal is acquired per day, for 12 days, at a sampling frequency of 20 kHz. Gear 1 and gear 2 have, respectively, N1 = 20 teeth and N2 = 21 teeth. $F_{GM}$ is around 330 Hz. The target phenomenon to identify is a progressive local fault in gear 1. Figure 2 illustrates the raw temporal signal and the corresponding spectrum on the first day (healthy state) and on the last day (faulty state). In the latter, the presence of a high number of sidebands with significant magnitude compared to the meshing frequency magnitude is an obvious sign of the presence of damage. The increase of this ratio indicates a progression of the damage.

Fig. 2 Temporal and frequency signal at day 1, healthy state, and day 12, faulty state


Figures 3, 4 and 5 provide an overview of the recovered spectrum superposed on the raw one. Since the gear 1 fault begins to increase significantly from day 9, the figures focus on days 1, 9, and 12. One observes that the nine harmonics of $F_{GM}$ are recovered with a low information loss, both in terms of frequency and amplitude, even for frequencies higher than the NS criterion. Figure 6 compares the GCI (1) of the raw signals and the recovered ones for both gears. It is computed on four harmonics of sidebands around $F_{GM}$. The high peak of gear 1 at day 11 indicates a strong spalling on this gear. The GCI values of gear 2 remain low, which denotes a good gear condition. Finally, the recovery is efficient down to a sampling rate of 0.2, since the spalling condition of gear 1 is still identified. Figure 7 shows the CF (2) results. One observes that the trend is preserved at all sampling rates: there is no significant fault before day 8, and a local fault appears and strongly increases from day 9.

Fig. 3 Frequency spectrum of raw and recovered signal at day 1, for a sampling rate of 0.2

Fig. 4 Frequency spectrum of raw and recovered signal at day 9, minor fault, for a sampling rate of 0.2

Fig. 5 Frequency spectrum of raw and recovered signal at day 12, major fault, for a sampling rate of 0.2


Fig. 6 Evolution per day of GCI (1) computed on spur gear 1 and 2 for sampling rate ∈ {0.5, 0.4, 0.3, 0.2}

Fig. 7 Evolution of crest factor through days, computed for sampling rate ∈ {0.5, 0.4, 0.3, 0.2}

4 Conclusion This study focused on the ability to transmit a lightweight yet informative mechanical vibration signal in a condition monitoring framework. The sampling based on a uniform jitter distribution via compressive sensing showed good performance in recovering the signal. In particular, it was demonstrated that the proposed approach was able to recover high-frequency peaks in the spectrum, even beyond the Nyquist frequency. However, since the problem is reduced to a LASSO optimization problem, its solution depends on the threshold hyperparameter $\lambda$, which has a major impact on the recovery efficiency. The search for an ideal parameter was performed through a manual iterative method, which may be insufficient to deal with a large number of signals. By studying more input signals, one may be able to determine a set of values of $\lambda$ relevant to some signal classes. Moreover, this step may be automated via a Bayesian approach that takes advantage of prior knowledge on the model variables. Acknowledgements We thank A. Sabourin and C. Tillier of Télécom ParisTech (LTCI), who provided valuable assistance.


References
1. Donoho DL (2006) Compressed sensing. IEEE Trans Inf Theor 52(4):1289–1306
2. Herman MA, Strohmer T (2009) High-resolution radar via compressed sensing. IEEE Trans Signal Process 57(6):2275–2284
3. Duarte MF et al (2008) Single-pixel imaging via compressive sampling. IEEE Signal Process Mag 25(2):83–91
4. Tian Z, Giannakis GB (2007) Compressed sensing for wideband cognitive radios. In: International conference on acoustics, speech and signal processing, Honolulu, pp 1357–1360
5. Tang G (2015) Sparse classification of rotating machinery faults based on compressive sensing strategy. Mechatronics 31:60–67
6. Höglund J, Wei B, Hu W, Karoumi R (2014) Compressive sensing for bridge damage detection. In: Proceedings of 5th Nordic workshop on system and network optimization for wireless
7. Candès EJ, Romberg JK, Tao T (2006) Stable signal recovery from incomplete and inaccurate measurements. Commun Pure Appl Math 59(8):1207–1223
8. Tibshirani R (1996) Regression shrinkage and selection via the lasso. J Roy Stat Soc Ser B 58(1):267–288
9. Combettes PL, Pesquet JC (2011) Proximal splitting methods in signal processing. Springer series optimization A, vol 49. Springer, New York, pp 185–212
10. Jones E et al (2019) SciPy: open source scientific tools for Python. https://www.scipy.org/

Change Detection in Smart Grids Using Dynamic Mixtures of t-Distributions A. Samé, M. L. Abadi, and L. Oukhellou

Abstract The analysis of massive amounts of data collected via smart grids can contribute to manage more efficiently energy production and water resources. Within this framework, a method is proposed in this article to detect changes in panel data from smart electricity and water networks. The adopted approach is based on a representation of the data by clusters whose occurrence probability varies in the course of time. Because of the cyclical nature of the studied data, these probabilities are designed to be piecewise periodic functions whose break points reflect the intended changes. A mixture of t-distributions with a hierarchical structure forms the basis of our proposal. This model was also chosen for its robustness properties. The weights of the mixture, at the top of the hierarchy, constitute the change detectors. At the lowest hierarchy level, the mixture weights are designed to model the periodic dynamics of the data. The incremental strategy adopted for parameter estimation makes it possible to process large amounts of data. The practical relevance of the proposed method is illustrated through realistic urban data. Keywords Energy and water monitoring · Smart grid · Panel data · Change detection · Segmentation and clustering · Incremental learning

1 Introduction Nowadays, with the advent of smart cities, high-resolution data acquired through sensor networks allow for a more accurate monitoring of energy and water, simultaneously at different points of the urban area. Processing this data can help ensure a best compromise between supply and demand, thus leading to natural resource savings. In this context, the data usually collected are consumption measurements collected and transmitted via smart meters. As these data describe the dynamics of a set of entities (individual houses or collective buildings) over time, we can formalize them as panel data. New algorithms based on flexible latent variable models are A. Samé (B) · M. L. Abadi · L. Oukhellou COSYS-GRETTIA, Univ Gustave Eiffel, IFSTTAR, F-77447 Marne-la-Vallée, France e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_6


useful for the descriptive and predictive analysis of these data. As shown in the following section, several studies have already been conducted to address this complex problem. The main purpose of this paper is (off-line) change detection in such panel data, a problem which can also be seen as the segmentation of the data along the time axis. The proposed approach is based on the representation of panel data, which are dynamic in nature, by a set of classes whose occurrence probability varies over time. To detect changes while taking into account the cyclic nature of the data, the prior probability of the classes is assumed to be piecewise periodic. In this way, we assimilate the break times of these probabilities to the change points. To formalize these assumptions, we rely on a dynamic mixture of t-distributions with a hierarchical structure, which has also been chosen for its robustness properties. To allow the processing of large amounts of data, the model parameters are estimated using an incremental expectation–maximization algorithm derived from the variational method initiated by Neal and Hinton [17]. The article is structured as follows. Section 2 states the problem of change detection from panel data and provides a brief state of the art on this issue. Section 3 introduces the proposed hierarchical mixture model dedicated to change detection from panel data. Section 4 describes the incremental strategy adopted for parameter estimation, and Sect. 5 is dedicated to the application of this method on a realistic dataset from a smart grid.

2 Problem Statement and State of the Art In this paper, we focus on energy, water, or gas consumption data from smart meters associated to individual houses or collective buildings in an urban area. We assume that, for each meter, the consumption is available at hourly intervals during several days. As the main interest of this article lies in the dynamics of a panel of consumers on a daily time scale, the data will be denoted as (x 1 , . . . , x T ), where x t = (x t1 , . . . , x tn ) will designate the cross-sectional sample extracted on day t ≤ T from n meters, with x ti ∈ Rd . In this study, each individual x ti consists of d = 24 values (hourly consumption). The problem addressed in this paper is that of change detection in these panel data. Let us recall that this problem consists in finding time instants at which changes occur in the data distribution. This process requires the data distributions, before and after the change point, to be determined. Figure 1 illustrates the problem of change detection in a panel of daily consumption curves. Several studies have already focused on the problem of change detection in panel data. In [11, 12], the so-called random break model has been proposed. It assumes that each time series has its own break point. The likelihood, which is similar to that of a mixture model, is maximized via a specific Expectation–Maximization (EM) algorithm. In [2, 3, 10], a statistical approach is used for estimating a common break in the mean and the variance of panel data. The parameters estimation is performed


Fig. 1 Change detection in a panel of daily consumption curves from a water grid

via a quasi-maximum likelihood method. To detect a change in the mean of panel data, an adaptation of the cumulative sum (CUSUM) method has been developed in [4, 9], and a CUSUM-based statistic has been proposed in [14] to detect a common variance change in panel data. The problem of multiple change point detection in panel data has been addressed in [5] through the use of a double-CUSUM statistic and a bootstrap procedure.

3 Simultaneous Change Detection and Clustering The method proposed for change detection in panel data is based on a mixture of multivariate t-distributions whose mixing weights, themselves, are modeled as a temporal mixture model which is piecewise periodic. A two-way clustering of panel data is derived from this approach: a clustering of observations x ti to one of K clusters and a clustering of the time instants t to one of L contiguous segments. This latter clustering constitutes the detection of change points. The model and its parameter estimation strategy are successively described in the following subsections.

3.1 Robust Mixture Modeling According to the proposed model, each observation $x_{ti}$ is assumed to be generated from the following hierarchical mixture distribution:

$$p(x_{ti}; \theta) = \sum_{k=1}^{K} \sum_{l=1}^{L} \pi_l(t; \alpha)\, \pi_{lk}(u_t; \beta_l)\, f(x_{ti}; \mu_k, \Sigma_k, \vartheta_k), \quad (1)$$

where $\theta$ represents the set of parameters of the model, $L$ and $K$ are the numbers of mixture components at the two levels of the hierarchy, and $f$ is the density function of a multivariate t-distribution with mean $\mu \in \mathbb{R}^d$, scale matrix $\Sigma \in \mathbb{R}^{d \times d}$, and degrees of freedom $\vartheta$, defined by

$$f(x; \mu, \Sigma, \vartheta) = \frac{\Gamma\!\left(\frac{\vartheta + d}{2}\right)}{\Gamma\!\left(\frac{\vartheta}{2}\right) (\pi \vartheta)^{d/2}\, |\Sigma|^{1/2}} \left(1 + \frac{(x - \mu)^{\top} \Sigma^{-1} (x - \mu)}{\vartheta}\right)^{-\frac{1}{2}(\vartheta + d)}, \quad (2)$$


where $\Gamma$ is the Gamma function. Here, we use the multivariate t-distribution rather than the classic Gaussian distribution for its more generic character (a Gaussian density is obtained when $\vartheta$ tends to infinity) and for its robustness to outliers [16]. For our model, the global weights $p_k(t) \stackrel{\mathrm{def}}{=} \sum_{l=1}^{L} \pi_l(t; \alpha)\, \pi_{lk}(u_t; \beta_l)$ allow for a segmentation of the time axis into contiguous parts, while remaining periodic in each segment. To this end, the mixing weights $\pi_l$ are defined by

$$\pi_l(t; \alpha) = \frac{\exp(\alpha_{l1} t + \alpha_{l0})}{\sum_{h} \exp(\alpha_{h1} t + \alpha_{h0})}, \quad (3)$$

with $\alpha = (\alpha_{l0}, \alpha_{l1})_{l=1,\ldots,L} \in \mathbb{R}^{2L}$. Using linear functions of time inside the logistic transformation leads to the desired segmentation [18]. For instance, for $L = 2$ segments, the single change point ($t^* = -\alpha_{l0}/\alpha_{l1}$) corresponds to the intersection of two logistic functions [18]. In order to allow the proportions $\pi_{lk}$ to vary periodically in the course of time while taking into account special calendar events (e.g., public holidays), they are defined as

$$\pi_{lk}(u_t; \beta_l) = \frac{\exp(\beta_{lk}^{\top} u_t)}{\sum_{h} \exp(\beta_{lh}^{\top} u_t)}, \quad (4)$$

where $\beta_l = (\beta_{l1}, \ldots, \beta_{lK})$ is the set of parameters to be estimated, with $\beta_{lk} = (\beta_{lk0}, \ldots, \beta_{l,k,2q+1})^{\top} \in \mathbb{R}^{2q+2}$, and $u_t = (1, C_1(t), S_1(t), \ldots, C_q(t), S_q(t), 1_{sd}(t))^{\top}$ is the corresponding covariate vector, with

$$C_j(t) = \cos\!\left(\frac{2\pi j t}{m}\right), \quad S_j(t) = \sin\!\left(\frac{2\pi j t}{m}\right), \quad 1_{sd}(t) = \begin{cases} 1 & \text{if day } t \text{ is special} \\ 0 & \text{otherwise.} \end{cases} \quad (5)$$

It should be noticed that $m = 7$ (weekly seasonality), and $q \,(\leq \lfloor m/2 \rfloor)$ is the required number of trigonometric terms, where $\lfloor a \rfloor$ is the integer part of $a$. If there is no special day in the studied period, then we use $u_t = (1, C_1(t), S_1(t), \ldots, C_q(t), S_q(t))^{\top}$ as covariate vector with the new set of parameters $\beta_{lk} = (\beta_{lk0}, \ldots, \beta_{l,k,2q})^{\top}$. For identifiability purposes, the parameters $(\alpha_{L0}, \alpha_{L1})$ and $\beta_{lK}$ are set to the null vector.
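As an illustration of Eqs. (3)-(5), the piecewise-periodic weights can be computed as follows. The array shapes and the numerical stabilisation of the softmax are choices of this sketch, not of the paper.

```python
import numpy as np

def covariates(t, q=3, m=7, special=None):
    """Covariate vector u_t of Eq. (5) for day t (1-based)."""
    u = [1.0]
    for j in range(1, q + 1):
        u += [np.cos(2 * np.pi * j * t / m), np.sin(2 * np.pi * j * t / m)]
    if special is not None:
        u.append(1.0 if t in special else 0.0)     # special-day indicator
    return np.array(u)

def segment_weights(t, alpha):
    """pi_l(t; alpha) of Eq. (3); alpha has shape (L, 2) = (alpha_l0, alpha_l1),
    with the last row fixed to zero for identifiability."""
    logits = alpha[:, 1] * t + alpha[:, 0]
    w = np.exp(logits - logits.max())
    return w / w.sum()

def cluster_weights(u, beta_l):
    """pi_lk(u_t; beta_l) of Eq. (4); beta_l has shape (K, len(u)), last row zero."""
    logits = beta_l @ u
    w = np.exp(logits - logits.max())
    return w / w.sum()

def global_weights(t, alpha, beta, q=3, m=7, special=None):
    """p_k(t) = sum_l pi_l(t) * pi_lk(u_t): the time-varying cluster prior."""
    u = covariates(t, q, m, special)
    pi_l = segment_weights(t, alpha)
    return sum(pi_l[l] * cluster_weights(u, beta[l]) for l in range(len(pi_l)))
```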


3.2 Formulation in the Form of a Generative Model From a generative point of view, the proposed model can be defined as follows:

1. Generation of the cluster labels $z_{ti} \in \{1, \ldots, K\}$ of observations $x_{ti}$ as follows:
   (a) drawing of the segment label $w_{ti} \in \{1, \ldots, L\}$ according to the multinomial distribution $\mathcal{M}(1; \pi_1(t; \alpha), \ldots, \pi_L(t; \alpha))$;
   (b) drawing of the cluster label $z_{ti} \in \{1, \ldots, K\}$, given $w_{ti} = l$, according to the multinomial distribution $\mathcal{M}(1; \pi_{l1}(u_t; \beta_l), \ldots, \pi_{lK}(u_t; \beta_l))$;
2. generation of $x_{ti}$, given $z_{ti} = k$, from the t-distribution with parameters $\mu_k$, $\Sigma_k$, $\vartheta_k$ as follows:
   (a) generation of $y_{ti}$ according to the Gamma distribution $\gamma\!\left(\frac{\vartheta_k}{2}, \frac{\vartheta_k}{2}\right)$;
   (b) generation of $x_{ti}$, given $y_{ti}$, according to the distribution $\mathcal{N}\!\left(\mu_k, \frac{\Sigma_k}{y_{ti}}\right)$.

Therefore, the proposed model involves the discrete latent variables $w = (w_1, \ldots, w_T)$ and $z = (z_1, \ldots, z_T)$, and the continuous latent variables $y = (y_1, \ldots, y_T)$, with $w_t = (w_{ti})_{i=1,\ldots,n}$, $z_t = (z_{ti})_{i=1,\ldots,n}$, and $y_t = (y_{ti})_{i=1,\ldots,n}$. The graphical representation corresponding to the proposed model is given in Fig. 2. Figure 3 displays examples of weights $\pi_l$ and $\pi_{lk}$, and cluster labels $z_{ti}$ generated using ($L = 2$, $K = 3$). As can be seen, the mixture proportions of the highest level of the hierarchy allow for an effective segmentation of the time axis (change point at $t = 36$), while the mixture proportions of the bottom of the hierarchy take into account the periodic dynamics of the data. The cluster labels generated from the combination $p_k(t)$ of these proportions show two successive periodic behaviors.
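The generative procedure above translates directly into a sampler. The sketch below reuses the helper functions from the previous snippet and assumes simple array shapes for the parameters.

```python
import numpy as np

def sample_panel(T, n, alpha, beta, mu, sigma, nu, q=3, m=7, rng=None):
    """Draw a synthetic panel (x_ti) from the generative model of Sect. 3.2."""
    rng = np.random.default_rng(rng)
    d = mu.shape[1]
    X = np.zeros((T, n, d))
    Z = np.zeros((T, n), dtype=int)
    for t in range(1, T + 1):
        u = covariates(t, q, m)
        pi_l = segment_weights(t, alpha)
        for i in range(n):
            w = rng.choice(len(pi_l), p=pi_l)              # segment label w_ti
            z = rng.choice(mu.shape[0], p=cluster_weights(u, beta[w]))
            y = rng.gamma(nu[z] / 2.0, 2.0 / nu[z])        # y_ti ~ Gamma(nu/2, nu/2)
            x = rng.multivariate_normal(mu[z], sigma[z] / y)
            X[t - 1, i], Z[t - 1, i] = x, z
    return X, Z
```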

4 Incremental Learning Given the temporal data $x = (x_1, \ldots, x_T)$ and their covariates $(u_1, \ldots, u_T)$, the parameter vector $\theta$ is estimated by maximizing the log-likelihood

$$\mathcal{L}(\theta) = \sum_{t,i} \log\left(\sum_{k,l} \pi_l(t; \alpha)\, \pi_{lk}(u_t; \beta_l)\, f(x_{ti}; \mu_k, \Sigma_k, \vartheta_k)\right) \quad (6)$$

Fig. 2 Graphical probabilistic representation associated to the proposed model


Fig. 3 Examples of mixture proportions obtained with L = 2, K = 3, and q = 3, and cluster labels zti generated from these probabilities: (top) the mixture proportions at the highest level of the hierarchy allow the detection of a change at t = 36; (middle) the mixture proportions at the bottom of the hierarchy take into account the periodic nature of the data; (bottom) the cluster labels zti generated from the combination pk (t) of these proportions show two different periodic behaviors

through the Expectation–Maximization (EM) algorithm [7, 15]. For our specific mixture model, each step of this algorithm requires the computation of two categories of posterior probabilities, in addition to the use of multivariate t-distributions. In the perspective of processing large amounts of data, an incremental version of the EM algorithm is proposed. Before describing this version, we start by developing the off-line version of the algorithm.


4.1 EM Algorithm Using the generative formulation of the model, the complete log-likelihood can be written as

$$CL(\theta) = \sum_{t,i,k,l} w_{til}\, z_{tik} \log\left[\pi_l(t; \alpha)\, \pi_{lk}(u_t; \beta_l)\, \gamma\!\left(y_{ti}; \frac{\vartheta_k}{2}, \frac{\vartheta_k}{2}\right) \mathcal{N}\!\left(x_{ti}; \mu_k, \frac{\Sigma_k}{y_{ti}}\right)\right], \quad (7)$$

where $\gamma(\cdot\,; a, b)$ is the Gamma probability density function with parameters $a$ and $b$, and $\mathcal{N}(\cdot\,; \mu, \Sigma)$ is the Gaussian density with mean $\mu$ and covariance matrix $\Sigma$. The variables $w_{til}$ and $z_{tik}$ are defined as follows: $w_{til} = 1$ if $w_{ti} = l$ and 0 otherwise, and $z_{tik} = 1$ if $z_{ti} = k$ and 0 otherwise. The EM algorithm starts from an initial parameter $\theta^{(0)}$ and iterates the following two steps until convergence.

Expectation Step Computation of the expected complete log-likelihood conditionally on the observed data and the current parameter $\theta^{(c)}$, which is given by the following auxiliary function $Q$:

$$Q(\theta, \theta^{(c)}) = E\!\left[CL(\theta) \,|\, x_1, \ldots, x_T; \theta^{(c)}\right] = Q_1^{(c)}(\alpha) + \sum_{l} Q_{2l}^{(c)}(\beta_l) + \sum_{k} Q_{3k}^{(c)}(\mu_k, \Sigma_k) + \sum_{k} Q_{4k}^{(c)}(\vartheta_k), \quad (8)$$

with

$$Q_1^{(c)}(\alpha) = \sum_{t,i,l,k} \tau_{tilk}^{(c)} \log \pi_l(t; \alpha) \quad (9)$$

$$Q_{2l}^{(c)}(\beta_l) = \sum_{t,i,k} \tau_{tilk}^{(c)} \log \pi_{lk}(u_t; \beta_l) \quad (10)$$

$$Q_{3k}^{(c)}(\mu_k, \Sigma_k) = \sum_{t,i,l} \tau_{tilk}^{(c)} \log \mathcal{N}\!\left(x_{ti}; \mu_k, \frac{\Sigma_k}{y_{ti}}\right) \quad (11)$$

$$Q_{4k}^{(c)}(\vartheta_k) = \sum_{t,i,l} \tau_{tilk}^{(c)} \log \gamma\!\left(y_{ti}; \frac{\vartheta_k}{2}, \frac{\vartheta_k}{2}\right), \quad (12)$$

in which $y_{ti}$ and $\log y_{ti}$ are replaced by their conditional expectations under the current parameter, the latter involving the term $\psi\!\left(\frac{\vartheta_k^{(c)} + d}{2}\right) - \log \frac{\vartheta_k^{(c)} + d}{2}$. Here $\psi$ refers to the Digamma function defined by $\psi(a) = \partial \log \Gamma(a)/\partial a = \Gamma'(a)/\Gamma(a)$, $\tau_{tilk}^{(c)}$ is the posterior probability that $x_{ti}$ originates from the $k$th cluster and the $l$th segment given the current parameter, and $\lambda_{tik}^{(c)}$ is the conditional expectation of $y_{ti}$ given the current parameter. These expectations can be written as

$$\tau_{tilk}^{(c)} = E\!\left[w_{til} z_{tik} \,|\, x_{ti}; \theta^{(c)}\right] = \frac{\pi_l(t; \alpha^{(c)})\, \pi_{lk}(u_t; \beta_l^{(c)})\, f(x_{ti}; \mu_k^{(c)}, \Sigma_k^{(c)}, \vartheta_k^{(c)})}{\sum_{l,k} \pi_l(t; \alpha^{(c)})\, \pi_{lk}(u_t; \beta_l^{(c)})\, f(x_{ti}; \mu_k^{(c)}, \Sigma_k^{(c)}, \vartheta_k^{(c)})}, \quad (13)$$

$$\lambda_{tik}^{(c)} = E\!\left[y_{ti} \,|\, z_{ti} = k, x_{ti}; \theta^{(c)}\right] = \frac{\vartheta_k^{(c)} + d}{\vartheta_k^{(c)} + (x_{ti} - \mu_k^{(c)})^{\top} \Sigma_k^{(c)\,-1} (x_{ti} - \mu_k^{(c)})}. \quad (14)$$

Maximization Step Computation of the new parameter $\theta^{(c+1)}$ by maximizing the $Q$ function with respect to $\theta$. The maximizations of $Q_1^{(c)}$ and $Q_{2l}^{(c)}$ are logistic regression problems weighted by the probabilities $\tau_{tilk}^{(c)}$, which cannot be solved in closed form. The Iteratively Reweighted Least Squares (IRLS) algorithm [8, 13] is exploited to this end. In a similar way as for the standard mixture of Gaussian distributions [15], we perform the maximization of $Q_{3k}^{(c)}(\mu_k, \Sigma_k)$.

4.2 Incremental Algorithm As is usually the case with smart meter data, the EM algorithm can become slow for massive panel data ($T \times n$ relatively large) since each of its iterations, and more particularly the E-step, requires visiting the whole dataset. A solution would consist in distributing the E-step among several processing units [6, 19], which is made possible by the fact that the posterior probabilities $\tau_{tilk}$ can be computed independently. In this article, we opted for another approach, which consists in adapting the incremental strategy initiated by Neal and Hinton [17] to our context of periodic mixture modeling. Before describing our incremental algorithm, it should first be emphasized that the EM algorithm previously described can also be written by using only the following sufficient statistics:

$$S_{1lk}^{(c)} = \sum_{t} s_{1tlk}^{(c)} \quad \text{where} \quad s_{1tlk}^{(c)} = \sum_{i} \tau_{tilk}^{(c)} \quad (15)$$

$$S_{2lk}^{(c)} = \sum_{t} s_{2tlk}^{(c)} \quad \text{where} \quad s_{2tlk}^{(c)} = \sum_{i} \tau_{tilk}^{(c)} \lambda_{tik}^{(c)} \quad (16)$$

$$S_{3lk}^{(c)} = \sum_{t} s_{3tlk}^{(c)} \quad \text{where} \quad s_{3tlk}^{(c)} = \sum_{i} \tau_{tilk}^{(c)} \log \lambda_{tik}^{(c)} \quad (17)$$

$$S_{4lk}^{(c)} = \sum_{t} s_{4tlk}^{(c)} \quad \text{where} \quad s_{4tlk}^{(c)} = \sum_{i} \tau_{tilk}^{(c)} \lambda_{tik}^{(c)} x_{ti} \quad (18)$$

$$S_{5lk}^{(c)} = \sum_{t} s_{5tlk}^{(c)} \quad \text{where} \quad s_{5tlk}^{(c)} = \sum_{i} \tau_{tilk}^{(c)} \lambda_{tik}^{(c)} x_{ti} x_{ti}^{\top}. \quad (19)$$



To accelerate the EM algorithm without visiting the whole dataset, we use partial E-steps and incremental parameter updating, which are based on the computation of sufficient statistics [17]. This strategy also has the advantage of accelerating the convergence of the algorithm while keeping its monotonicity property. We opted in this article for specific partial E-steps, where the posterior probabilities $\tau_{tilk}$ and $\lambda_{tik}$ are computed only for a day $t$. The additive nature of the expected sufficient statistics defined by (15)–(19) enables their incremental updating. The main steps of this procedure are detailed by the following pseudo-code. It will be called IEM-SPMIX, where I stands for "incremental," EM for "Expectation–Maximization," S for "segmental," P for "periodic," and MIX for "mixture." Once the estimate $\hat{\theta}$ is provided by the algorithm, a contiguous segmentation of the time instants can be obtained by setting $E_l$ ($l = 1, \ldots, L$) as follows:

$$E_l = \left\{ t \in [1; T];\; \pi_l(t; \hat{\alpha}) = \max_{1 \leq h \leq L} \pi_h(t; \hat{\alpha}) \right\}. \quad (20)$$

The estimated change points are therefore the starting time stamps of the segments $E_l$. A partition of the data is deduced by setting

$$\hat{z}_{ti} = k \;\Leftrightarrow\; k = \operatorname*{argmax}_{1 \leq h \leq K} P\!\left(z_{ti} = h \,|\, x_{ti}; \hat{\theta}\right) = \operatorname*{argmax}_{1 \leq h \leq K} \sum_{l} \hat{\tau}_{tilh}, \quad (21)$$

where $\hat{\tau}_{tilk}$ is given by (13).

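The printed pseudo-code of IEM-SPMIX is not reproduced here, but the incremental idea of updating the expected sufficient statistics (15)-(19) day by day, replacing a day's previous contribution when it is revisited, can be sketched as follows. Shapes and function names are assumptions of this illustration, not the paper's exact algorithm.

```python
import numpy as np

def day_sufficient_stats(tau_t, lam_t, x_t):
    """Per-day statistics s1..s5 for day t (partial E-step).
    tau_t : (n, L, K) posterior probabilities tau_tilk
    lam_t : (n, K)    conditional expectations lambda_tik
    x_t   : (n, d)    observations of day t"""
    s1 = tau_t.sum(axis=0)                                        # (L, K)
    s2 = np.einsum("ilk,ik->lk", tau_t, lam_t)                    # (L, K)
    s3 = np.einsum("ilk,ik->lk", tau_t, np.log(lam_t))            # (L, K)
    s4 = np.einsum("ilk,ik,id->lkd", tau_t, lam_t, x_t)           # (L, K, d)
    s5 = np.einsum("ilk,ik,id,ie->lkde", tau_t, lam_t, x_t, x_t)  # (L, K, d, d)
    return {"S1": s1, "S2": s2, "S3": s3, "S4": s4, "S5": s5}

def incremental_e_step(stats, day_stats, old_day_stats=None):
    """Add the new day's contribution to the running totals S1..S5 and, if
    this day had already been visited, remove its previous contribution."""
    for name, value in day_stats.items():
        stats[name] = stats.get(name, 0.0) + value
        if old_day_stats is not None:
            stats[name] -= old_day_stats[name]
    return stats
```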

5 Experiments In this section, realistic consumption data similar to those collected from real-world water networks are used to evaluate the proposed change detection algorithm. They are obtained through simulations from a specific dynamic model [1]. With these data, the ground truth segmentation is available, thereby allowing the evaluation of the method. Let us recall that each observation $x_{ti} \in \mathbb{R}^{24}$ corresponds to the behavior adopted by a user $i$ during a day $t$ (there is one numeric value per hour). The dataset has $L = 3$ segments (changes at $t = 36$ and $t = 71$) and $K = 8$ clusters. The number of time instants is $T = 105$ days (which corresponds to 15 weeks), and the number of individuals is $n = 100$. These data can also be formalized as a tensor of $100 \times 105 \times 24$ numeric values. A part of these data is displayed in Fig. 4, where each subplot represents the set of observations $(x_{ti})_{i=1,\ldots,n}$ of a day $t$, $x_{ti}$ being displayed longitudinally as a time series of 24 values. Since the main concern of this study is change detection in consumption behavior (time series shape) rather than consumption volume, an additional transformation has been applied to the data: each series $x_{ti} = (x_{tis})_{s=1,\ldots,24}$ was standardized by setting $x_{tis}$ to $(x_{tis} - \bar{x}_{ti})/\sigma(x_{ti})$, where $\bar{x}_{ti}$ and $\sigma(x_{ti})$ are, respectively, the mean and standard deviation of $x_{ti}$.

Fig. 4 Extract of a realistic panel dataset (20 smart meters observed during 16 days)


Table 1 BIC criterion in relation to the number of segments and the number of clusters

|  | K = 6 | K = 7 | K = 8 | K = 9 | K = 10 |
|---|---|---|---|---|---|
| L = 1 | 371,954.51 | 371,014.72 | 369,707.24 | 370,405.08 | 370,614.00 |
| L = 2 | 369,621.13 | 368,902.14 | 367,214.11 | 367,838.20 | 368,290.31 |
| L = 3 | 368,327.83 | 367,193.04 | 365,951.70 | 366,382.93 | 366,992.62 |
| L = 4 | 368,536.86 | 367,563.87 | 366,413.30 | 366,883.04 | 367,651.21 |
| L = 5 | 368,833.13 | 367,990.21 | 366,940.57 | 367,544.82 | 368,241.98 |

Table 2 BIC criterion based on the Gaussian mixture model, in relation to the numbers of segments and clusters

|  | K = 6 | K = 7 | K = 8 | K = 9 | K = 10 |
|---|---|---|---|---|---|
| L = 1 | 393,606.63 | 392,564.63 | 391,145.70 | 391,627.69 | 392,105.33 |
| L = 2 | 391,304.51 | 390,223.47 | 388,635.08 | 389,140.04 | 389,729.94 |
| L = 3 | 389,826.11 | 388,684.20 | 387,099.98 | 387,729.74 | 388,307.09 |
| L = 4 | 390,081.98 | 389,019.08 | 387,494.16 | 388,189.17 | 388,861.90 |
| L = 5 | 390,413.55 | 389,427.82 | 387,923.91 | 388,692.74 | 389,448.64 |

5.1 Model Selection To select the best couple (L, K), we have relied on the minimization of the Bayesian Information Criterion (BIC), which is defined as $BIC(L, K) = -2\mathcal{L}(\hat{\theta}) + m_{L,K} \log(nT)$, where $m_{L,K}$ is the number of free parameters of the model and $\hat{\theta}$ is the maximum likelihood estimate provided by the IEM-SPMIX algorithm. This estimation algorithm has thus been launched for different pairs (L, K), with L varying in the set {1, …, 5} and K varying in the set {6, …, 10}. The results achieved are given in Table 1. The minimum value of BIC is obtained for the pair (L = 3, K = 8), which corresponds to the reality of the data. We also compared these BIC values with those achieved by using a Gaussian distribution instead of the t-distribution. The results obtained are given in Table 2. As can be observed, the resulting values of BIC in this case are higher than those obtained with the proposed model, which shows that our model is better adapted to the considered data.

5.2 Results Using the number of segments (L = 3) and the number of clusters (K = 8) which have been selected in the previous section, the algorithm IEM-SPMIX has been launched on the panel data $(x_{ti})_{t=1,\ldots,T,\, i=1,\ldots,n}$. Figure 5 shows the resulting daily


Fig. 5 Daily cluster labels and change points (vertical lines) obtained with IEM-SPMIX (left); consumption patterns corresponding to the clusters (right)

cluster labels together with the estimated change points (vertical lines at day 36 and day 71) and the eight estimated means $\mu_k$ of the t-distributions. The estimated change points coincide with the true change points. The displayed means constitute typical consumption behavior patterns. For instance, pattern $\mu_2$, which is a typical behavior of active residents (two well-separated consumption peaks), is frequent during the working days of segment 2, while pattern $\mu_7$ is frequent during the weekends of segment 3. Thanks to our robust mixture modeling approach, Fig. 6 allows us to see, within the panel data, the observations that can be considered as noise, and displays some of these observations. An observation $x_{ti}$ has been considered as noise if $\sum_{l,k} \tau_{tilk}\, (x_{ti} - \hat{\mu}_k)^{\top} \hat{\Sigma}_k^{-1} (x_{ti} - \hat{\mu}_k) > \chi^2_{d;0.95}$, where $\chi^2_{d;\alpha}$ is the $\alpha$-percentile of the Chi-square distribution with $d$ degrees of freedom [16]. In practice, we observed that these noisy data could generally be attributed to atypical behaviors specific to the studied panels and, depending on the period, to exogenous factors (temperature, precipitation).





Fig. 6 Noisy observations within the panel (left); corresponding consumption curves (right)


The estimated mixture proportions are displayed in Fig. 7. As shown in Fig. 7a, the change points are given by t = 36 and t = 71. The segment-specific periodic proportions are displayed in Fig. 7b–d. They show the local dynamics of the panel data distribution. The global dynamic behavior of the data is given by Fig. 7e, which shows the time-evolving probability $p_k(t) = P(z_{ti} = k; \hat{\theta}) = \sum_{l=1}^{L} \pi_l(t; \hat{\alpha})\, \pi_{lk}(u_t; \hat{\beta}_l)$. Using this plot, we can distinguish the differences between segments, which can help give insight into the data.



Fig. 7 Estimated mixture proportions: a $\pi_l(t; \hat{\alpha})$, b–d $\pi_{lk}(u_t; \hat{\beta}_l)$, and e $p_k(t)$






6 Conclusion A change detection method, specially dedicated to periodic panel data emanating from smart electricity and water networks, has been proposed in this article. A hierarchical mixture of t-distributions whose weights evolve over time forms the basis of this method. The weights of the mixture, at the top of the hierarchy, constitute the change detectors. At the lowest hierarchy level, the mixture weights are designed to model the periodic dynamics of the data. An incremental strategy, which is based on the expected sufficient statistics of the model, has been adopted for parameter estimation. It allows keeping the monotonic convergence property of the log-likelihood. The proposed method proved to be effective on realistic simulated data. As short-term perspectives of this work, comparisons of our method with a two-phase approach involving clustering and then segmentation will be conducted. Although BIC can help select the number of clusters and segments, we observed in practice that an underestimation of the number of clusters could lead to poor detection of changes, which is not the case when the number of clusters is over-estimated. Taking this fact into account in the model selection phase therefore deserves to be studied.

References
1. Abadi ML, Samé A, Oukhellou L, Cheifetz N, Mandel P, Féliers C, Chesneau O (2017) Predictive classification of water consumption time series using nonhomogeneous Markov models. In: IEEE DSAA, pp 323–331
2. Bai J (2010) Common breaks in means and variances for panel data. J Econometrics 157(1):78–92
3. Bai J, Carrion-I-Silvestre JL (2009) Structural changes, common stochastic trends and unit roots in panel data. Rev Econ Stud 76(2):471–501
4. Chan J, Horváth L, Hušková M (2013) Darling–Erdős limit results for change-point detection in panel data. J Stat Plann Infer 143(5):955–970
5. Cho H et al (2016) Change-point detection in panel data via double CUSUM statistic. Electron J Stat 10(2):2000–2038
6. Dean J, Ghemawat S (2008) MapReduce: simplified data processing on large clusters. ACM 51(1):107–113
7. Dempster AP, Laird NM, Rubin DB (1977) Maximum likelihood from incomplete data via the EM algorithm. J Roy Stat Soc B 39:1–38
8. Green P (1984) Iteratively reweighted least squares for maximum likelihood estimation, and some robust and resistant alternatives. J Roy Stat Soc B 46(2):149–192
9. Horváth L, Hušková M (2012) Change-point detection in panel data. J Time Ser Anal 33(4):631–648
10. Im KS, Lee J, Tieslau M (2005) Panel LM unit-root tests with level shifts. Oxford Bull Econ Stat 67(3):393–419
11. Joseph L, Wolfson DB (1992) Estimation in multi-path change-point problems. Commun Stat Theor Methods 21(4):897–913
12. Joseph L, Wolfson DB (1993) Maximum likelihood estimation in the multi-path change-point problem. Ann Inst Stat Math 45(3):511–530
13. Krishnapuram B, Carin L, Figueiredo MAT, Hartemink AJ (2005) Sparse multinomial logistic regression: fast algorithms and generalization bounds. IEEE Trans Pattern Anal Mach Intell 27(6):957–968
14. Li F, Tian Z, Xiao Y, Chen Z (2015) Variance change-point detection in panel data models. Econ Lett 126:140–143
15. McLachlan GJ, Krishnan T (2008) The EM algorithm and extensions. Wiley, New York
16. McLachlan GJ, Ng SK, Bean R (2016) Robust cluster analysis via mixture models. Aust J Stat 35(2):157–174
17. Neal R, Hinton G (1998) A view of the EM algorithm that justifies incremental, sparse, and other variants. In: Jordan MI (ed) Learning in graphical models. Springer, Dordrecht
18. Samé A, Chamroukhi F, Govaert G, Aknin P (2011) Model-based clustering and segmentation of time series with changes in regime. Adv Data Anal Classif 5(4):301–321
19. Wolfe J, Haghighi A, Klein D (2008) Fully distributed EM for very large datasets. In: ICML'08, pp 1184–1191

Unbalance Detection and Prediction in Induction Machine Using Recursive PCA and Wiener Process A. Picot, J. Régnier, and P. Maussion

Abstract This paper focuses on the detection of unbalance in induction machines and the prediction of its dynamic evolution based on the analysis of the current signals. The proposed method is based on the combination of recursive Principal Component Analysis (PCA) and a Wiener process. The recursive PCA is processed on the different features computed from the current signals in order to choose the most relevant ones. This way the fault detection is processed with the only knowledge of the healthy state of the machine. A linear Wiener process is then used to model the behavior of PCA components and predict their evolution over time in order to estimate the dynamic of the tracked fault along with the remaining useful life (RUL). The proposed method is applied on real data from a 5.5 kW induction machine with three different levels of unbalance and obtains very promising results. Keywords Principal component analysis · Wiener process · Induction machine · Unbalance diagnosis · Current-based diagnosis

1 Introduction

Condition monitoring of electrical machines has become an important industrial concern for safety and reliability. Electrical and mechanical faults have been well documented over the years in the case of single-fault scenarios. Unfortunately, the industrial reality is often more complex, especially when there is no a priori knowledge of the faults that may occur. Mechanical faults have different origins: unbalance, eccentricity, or bearing damage. The most popular approach to diagnose these different faults


is called Motor Current Signal Analysis (MCSA) and has been widely documented [1]. This method is based on the spectral analysis of the motor currents, generally computed using the Fast Fourier Transform (FFT). Mechanical faults modify the spectral content of the currents through specific harmonic components, which depend on the fault and on the system characteristics [2–4]. Unfortunately, the electro-mechanical complexity is generally higher in industrial applications due to the use of gearboxes, screws, external bearings, and couplings, among others. This results in a high risk of overlapping fault effects, even more so under multiple operating conditions. Artificial intelligence techniques have proven very effective in tackling multi-fault scenarios [5–7]. In the particular case where only the healthy behavior of the system is known, novelty detection techniques such as Self-Organising Maps (SOM) [8] or recursive Principal Component Analysis (PCA) [9] have shown promising results. For health management, predicting the useful life left at a particular time of operation, known as the Remaining Useful Life (RUL), is more valuable than merely detecting a fault: it makes it possible to correct the system before failure occurs and thus to shorten production shutdowns. A review of the different techniques for RUL estimation is given in [10]. Among them, the Wiener process is an interesting way to model the problem: the degradation is modeled as a time-dependent drift with additive Brownian noise [11, 12]. This paper proposes to use recursive PCA along with a Wiener process in order to detect unbalance in induction machines. The recursive PCA is used to select the most relevant features for the detection, knowing only the healthy behavior of the machine. A 2D Wiener process is then applied in the PCA plane in order to predict the evolution of the PCA features over time. Both techniques are presented in Sect. 2. The proposed method is evaluated in Sect. 3 on experimental data measured on a 5.5 kW induction machine with different unbalance levels.
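To make the MCSA step concrete, the short sketch below (not part of the original paper; the function name and parameters are illustrative) computes the amplitude spectrum of one phase current with NumPy, from which fault-related harmonics and sidebands can then be read:

```python
import numpy as np

def current_spectrum(i_phase, fs):
    """Amplitude spectrum of one phase current, the basic quantity used in MCSA.

    i_phase : 1-D array of current samples; fs : sampling frequency in Hz.
    Returns (frequencies, amplitudes) for the positive half of the spectrum.
    """
    n = len(i_phase)
    window = np.hanning(n)                                  # reduce leakage around the supply harmonics
    amplitudes = 2.0 * np.abs(np.fft.rfft(i_phase * window)) / np.sum(window)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, amplitudes
```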

2 Proposed Method

2.1 Recursive PCA

PCA is a multivariate statistical method that transforms possibly correlated variables into uncorrelated ones by maximizing the variance between them. The new variables, called Principal Components, are linear combinations of the original ones. This well-known technique is often used to reduce the size of a data set by extracting its most important information [13]. The idea of recursive PCA, as presented in [9], is to compute a PCA on the first observations, which constitute a healthy database. The data are then projected in this new space. If several new observations in a row lie far away from the healthy group, they are considered to indicate a new fault, and a new PCA is computed on the healthy and the faulty data in order to obtain the best subspace to separate them. Otherwise, the PCA is not updated. This process is repeated


Fig. 1 Algorithm of the recursive PCA

every time the observations do not fit existing groups. The algorithm of recursive PCA is depicted in Fig. 1.
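A minimal sketch of this recursive scheme is given below, assuming the feature matrix is already available. It is not the authors' implementation of [9] (which uses a weighted distance); the threshold values and names are purely illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

def recursive_pca(features, n_healthy=100, n_components=2, n_consecutive=5, k_sigma=3.0):
    """Sketch of the recursive PCA scheme of Fig. 1 (not the authors' exact implementation).

    features : (n_observations, n_features) array whose first n_healthy rows are healthy.
    Returns the distance of each later observation to the healthy group and the indices
    at which a new fault was declared (and the PCA refit).
    """
    pca = PCA(n_components=n_components).fit(features[:n_healthy])
    scores = pca.transform(features[:n_healthy])
    center, scale = scores.mean(axis=0), scores.std(axis=0)

    distances, detections, run = [], [], 0
    for i in range(n_healthy, len(features)):
        score = pca.transform(features[i:i + 1])[0]
        d = np.linalg.norm((score - center) / scale)        # normalized distance to healthy group
        distances.append(d)
        run = run + 1 if d > k_sigma else 0
        if run == n_consecutive:                            # several abnormal observations in a row
            detections.append(i)
            pca = PCA(n_components=n_components).fit(features[:i + 1])   # refit on healthy + faulty data
            scores = pca.transform(features[:n_healthy])
            center, scale = scores.mean(axis=0), scores.std(axis=0)
            run = 0
    return np.array(distances), detections
```

In this sketch, the PCA is simply refit on all data seen so far once a run of abnormal observations is found, mimicking the update step of Fig. 1.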

2.2 Wiener Process

As described in [11, 12], a Wiener-process-based degradation X(t) is generally described as

X(t) = X(0) + λt + σB(t)    (1)

where X(0) is the initial degradation value (usually equal to 0), λ is the drift parameter, σ > 0 is the diffusion coefficient, and B(t) is the standard Brownian motion. Such a model assumes that the drift rate is constant. The parameters λ and σ are recursively estimated from the data using a Maximum Likelihood Estimator (MLE). Then, the degradation at time t > t_i, given the current observation x_i at time t_i, can be estimated according to (2).


X(t) = x_i + λ(t − t_i) + σB(t − t_i)    (2)

In this work, the Wiener process is applied to the PCA features in order to follow and predict the system evolution in the PCA space. However, the PCA subspace is likely to change when a new fault is detected, because of the recursive PCA algorithm. Let x and y be the coordinates in the first PCA subspace (before fault detection) and x′ and y′ the coordinates in the new one (after fault detection). The variables (x, y) can be expressed as Wiener processes according to (3).

x(t) = λ_x t + σ_x B(t)
y(t) = λ_y t + σ_y B(t)    (3)

It is supposed that the new coordinates (x′, y′) can be expressed as a linear combination of (x, y) as

x′(t) = ax(t) + by(t)
y′(t) = cx(t) + dy(t)    (4)

with a, b, c, d constants that can be computed from already known observations. The new coordinates can then be expressed as a Wiener process whose parameters are linear combinations of the old ones, according to (5).

λ_x′ = aλ_x + bλ_y
σ_x′ = aσ_x + bσ_y
λ_y′ = cλ_x + dλ_y
σ_y′ = cσ_x + dσ_y    (5)
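As an illustration of Eqs. (1)–(2), the following sketch estimates the drift and diffusion parameters by maximum likelihood and extrapolates the degradation. It is a simplified, batch (non-recursive) version of the estimator described in [11, 12], with hypothetical function names.

```python
import numpy as np

def fit_wiener(t, x):
    """MLE of the drift and diffusion of X(t) = X(0) + λt + σB(t), cf. Eq. (1).

    t, x : arrays of observation times and degradation values (t strictly increasing).
    """
    dt, dx = np.diff(t), np.diff(x)
    lam = (x[-1] - x[0]) / (t[-1] - t[0])                  # maximum-likelihood drift
    sigma = np.sqrt(np.mean((dx - lam * dt) ** 2 / dt))    # maximum-likelihood diffusion
    return lam, sigma

def predict_wiener(x_i, t_i, t, lam, sigma):
    """Mean and standard deviation of X(t) given observation x_i at time t_i, cf. Eq. (2)."""
    return x_i + lam * (t - t_i), sigma * np.sqrt(t - t_i)
```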

3 Material and Results

3.1 Motor Unbalance Experiments

The present method is tested on current signals recorded on a 5.5 kW induction machine (IM) with two pole pairs. The load level of the IM is regulated by a direct-current generator. An iron disk placed on the shaft is used to create different unbalance levels. The experimental system is depicted in Fig. 2. The IM is fed by an open-loop inverter at 40 Hz and operates at a low load level. The phase currents of the IM are acquired with a sampling frequency of f_e = 10 kHz. A total of 250 five-second recordings of the IM currents have been measured on the machine. These measurements include 100 recordings with no unbalance, corresponding to the healthy state of the machine, 50 recordings with a low level of unbalance, noted B1, created by a small weight (77.5 g) fixed to the iron disk, 50 recordings with a medium


Fig. 2 Experimental test bench

level of unbalance, noted B2, created by a medium weight (133 g), and 50 recordings with a high level of unbalance, noted B3, created by a heavy weight (274 g). These unbalances produce load torque oscillations at f_r with amplitudes of 0.15%, 0.27%, and 0.55% of the IM rated torque, respectively. Several time-domain and frequency-domain features are computed from the different recordings; the list of features can be found in [9].

3.2 Results and Discussion

The distance between the current observation and the healthy state is displayed in Fig. 3. The recursive PCA detected a new fault at recording 100, and a new PCA was performed at this point. This appears in Fig. 3 as a large step in the distance criterion, due to the appearance of B1; this step is indeed amplified by the PCA, as explained in [9]. The distance criterion then remains high, but no significant difference is found between the three unbalance levels: the algorithm considers them as the same fault and no new PCA is computed. So, the fault is well detected by the method, but no difference is found between the three fault severities. It is interesting to note that the fault was detected knowing only the healthy state and without any a priori knowledge of the features to monitor. Only the two main components are kept by the PCA. It is then possible to follow the evolution of the system by modeling these two components as Wiener processes. Figure 4 displays three examples of prediction 50 recordings ahead. Figure 4a is just before the PCA is recomputed and Fig. 4b right after (recordings 100 and 101). Figure 4c displays the prediction at recording 220. In these figures, the observation at t + 50 recordings is displayed as a magenta square and its prediction as a red circle. The


Fig. 3 Distance between the current observation and the healthy state

Fig. 4 Examples of prediction at time t = 100 (a), t = 101 (b) and t = 220 (c)

dashed circle represents the standard deviation of the prediction, and the blue crosses are the past observations at current time t. At time t = 100 recordings, the prediction is poor: the standard deviation is large and the observation is far from the predicted point, because the subspace is not yet optimized for the fault detection. Once the fault is detected and the PCA is recomputed, the new subspace is better suited for the fault detection and the prediction is more accurate: the observation is closer to the predicted one and lies inside the standard deviation circle (t = 101 recordings). At time t = 220 recordings, the prediction improves further; it is very close to the observation and the error circle is reduced. Figure 5 shows the error between the prediction and the observation for the proposed method (blue curve). As an indication, the error obtained when using the Wiener process alone is depicted as the dashed red curve. The prediction error decreases considerably at the beginning because the Wiener process parameters are identified more and more precisely. At recording 100, the PCA is recomputed and the prediction error becomes smaller for the proposed method, thanks to the new subspace optimized for unbalance monitoring. At the end, the prediction error decreases again. Overall, the prediction error is smaller with the proposed method.


Fig. 5 Error between predictions and observations

Nevertheless, the prediction error remains roughly constant after the fault detection, whereas a decrease could have been expected because the Wiener parameters should be better estimated over time. This might be because the degradation here is not really linear. The analysis of the distance between the observation and the healthy state in Fig. 3 shows that the method considers the unbalance as a single fault and does not distinguish between the three severities. Other drift models should be investigated in order to reach better results.

4 Conclusions

This paper presents an original method for the automated detection of mechanical unbalance in induction machines using current signals. The proposed method is based on recursive PCA, i.e., the PCA is updated every time a new fault is detected, in order to choose the best subspace and features for the fault detection. In this way, the method only needs to know the healthy behavior of the machine and can be seen as a novelty detection algorithm. The fault is tracked using a Wiener process applied to the PCA components in order to predict its evolution in the PCA subspace. The proposed method has been tested on real experimental data from a 5.5 kW induction machine for three different levels of unbalance. The unbalance was correctly detected, but no specific difference between the three unbalance levels was found; the features chosen by the PCA should be analysed to better understand this point. The prediction of the fault evolution seems correct and gets more and more precise with time. However, better precision could be expected, and different forms of drift


parameters should be tested in order to increase the robustness of the proposed method.

References

1. Thomson WT, Culbert I (2017) Motor current signature analysis for induction motors. Wiley/IEEE Press
2. Henao H, Razik H, Capolino G-A (2005) Analytical approach of the stator current frequency harmonics computation for detection of induction machine rotor faults. IEEE Trans Ind Appl 41(3):801–807
3. Trajin B, Chabert M, Régnier J, Faucher J (2009) Hilbert versus concordia transform for three-phase machine stator current time-frequency monitoring. Mech Syst Sig Process 23(8):2648–2657
4. Picot A, Obeid Z, Régnier J, Poignant S, Darnis O, Maussion P (2014) Statistic-based spectral indicator for bearing fault detection in permanent-magnet synchronous machines using the stator current. Mech Syst Signal Process 46(2):424–441
5. Delgado M, Cirrincione G, Espinosa AG, Ortega JA, Henao H (2013) Dedicated hierarchy of neural networks applied to bearings degradation assessment. In: 9th IEEE international symposium on diagnostics for electric machines, power electronics and drives (SDEMPED), pp 544–551
6. D'Angelo MF, Palhares RM, Cosme LB, Aguiar LA, Fonseca FS, Caminhas WM (2014) Fault detection in dynamic systems by a fuzzy/Bayesian network formulation. Appl Soft Comput 21:647–653
7. Rauber TW, de Assis Boldt F, Varejão FM (2015) Heterogeneous feature models and feature selection applied to bearing fault diagnosis. IEEE Trans Ind Electron 62(1):637–646
8. Carino JA, Delgado-Prieto M, Zurita D, Millan M, Redondo JAO, Romero Troncoso R (2016) Enhanced industrial machinery condition monitoring methodology based on novelty detection and multi-modal analysis. IEEE Access 4:7594–7604
9. Picot A, Régnier J, Maussion P (2019) Mechanical faults detection in induction machine using recursive PCA with weighted distance. In: IEEE international conference on industrial technology (ICIT), pp 1–6
10. Si X-S, Wang W, Hu C-H, Zhou D-H (2011) Remaining useful life estimation—a review on the statistical data driven approaches. Eur J Oper Res 213:1–14
11. Si X-S, Wang W, Hu C-H, Chen M-Y, Zhou D-H (2013) A Wiener-process-based degradation model with a recursive filter algorithm for remaining useful life estimation. Mech Syst Sig Process 35:219–237
12. Zhai Q, Ye Z-S (2017) RUL prediction of deteriorating products using an adaptive Wiener process model. IEEE Trans Ind Inform 13(6):2911–2921
13. Abdi H, Williams LJ (2010) Principal component analysis. Wiley Interdisc Rev: Comput Stat 2(4):433–459

Automatic Fault Diagnosis in Wind Turbine Applications D. Saputra and K. Marhadi

Abstract A wind turbine may have any—out of hundreds—types of bearing on its generator or other components. The bearing type is often unknown until a possible defect occurs or until a diagnostic engineer performs a diagnosis. We identify faults on different wind turbine components by matching their fault frequencies with frequency harmonics and sidebands found in the spectrum analysis. It is crucial to know the fault frequencies, such as ball pass inner and outer race defect frequencies, before the diagnosis. Lack of expertise and experience in identifying bearing type and frequency may result in an incorrect diagnosis. Moreover, not knowing the bearing frequency leads to the inability to track bearing fault development continuously. In this paper, we propose automatic diagnosis, a novel method to identify prominent peaks above carpet level in power spectrum analysis, and then label them for potential faults. Prominent peaks are noticeable spectral peaks above the noise floor, and we calculate the noise floor using median filtering. Once we extract the peaks, we implement priority labeling. The fittest bearing frequency is intelligently estimated using approximate peak matching and envelope of bandpass segment. We use the estimated frequency to identify, quantify, and track early and late bearing faults without knowing the bearing type. Using data from wind turbines monitored by Brüel & Kjær Vibro, we show that automatic diagnosis accurately detects faults, such as ball pass outer race fault, ball spin frequency fault, generator imbalance, and gear fault. We expect automatic diagnosis to reduce manual identification of fault frequencies. Implementing automatic diagnosis could significantly reduce the cost of simultaneously monitoring the condition of hundreds, or even thousands, of wind turbines.

Keywords Power spectrum analysis · Wind turbine · Fault frequency · Sidebands · Harmonics · Automatic diagnosis


1 Introduction

A condition monitoring system (CMS) is vital for effective maintenance and ensuring wind turbines' maximum uptime, and it is even more crucial for offshore wind turbines. The implementation of CMS in the wind turbine industry has been presented in [1]. Most CMS for wind turbines are based on vibration analysis [2–4]; in particular, [4] presented a case study of using vibration data to monitor fault progression in the generator bearing of a wind turbine in a real industrial application. As described in [4], the primary components monitored in a wind turbine CMS are the generator, gearbox, main bearing, and tower. Accelerometers are installed on these components, and there can be up to ten accelerometers installed in a wind turbine. The data acquisition unit in a wind turbine collects vibration data continuously from each sensor. Different vibration measurements, also known as descriptors, monitor any fault development in a wind turbine. To monitor generator bearings, for example, several descriptors are used at different frequency ranges. The data acquisition unit continuously calculates the overall vibration RMS level (ISO RMS [10–1000 Hz]), the high-frequency bandpass (HFBP [1–10 kHz]), the high-frequency crest factor (HFCF), and several harmonics or orders of the running speed of the generator (e.g., 1X, 2X) from each sensor. Different failure modes can be monitored from various sensors using different sets of descriptors. Reference [4] showed that, in standard industrial practice, each descriptor is trended over time to detect specific failure modes. An alarm or warning for a particular failure mode is triggered when the trend of specific descriptors crosses predefined thresholds. Diagnostic engineers subsequently perform vibration analysis to confirm and describe the alarms. Broadband descriptors are prone to generate false alarms. For example, line frequency and crosstalk among different turbine components can falsely increase ISO RMS, crest factor, and peak-to-peak. An alarm from a broadband descriptor such as ISO RMS or crest factor can mean (a) one very severe fault, (b) multiple less severe faults, or (c) no critical faults, for example, standstill marks on generator bearings. Also, broadband descriptors cannot detect some severe faults: a generator bearing can have a severe outer race defect without any alarm due to a relatively low ISO RMS. Moreover, broadband descriptors cannot detect roller and cage defects because their peaks lie in the same frequency region as the generator running speed harmonics. Furthermore, the bearing type is often unknown because generator bearing replacements may be cheap and frequent, which would require continuously updating the bearing information in the CMS. Incorrect bearing fault frequencies could lead to misdiagnosis of the turbine component. When monitoring hundreds or thousands of turbines, human involvement in confirming the alarms, identifying faults without alarms, and finding fault frequencies before performing diagnosis is potentially cumbersome. Thus, it is vital to automate the whole process as much as possible. This paper proposes a novel method, namely


automatic diagnosis, which calculates and trends narrow-band descriptors to detect faults and tracks unknown bearing frequencies.

2 Automatic Diagnosis

There are four essential strategies in automatic diagnosis: angular resampling, identification of peaks, priority labeling, and frequency tracking. Angular resampling is essential in performing accurate machine diagnosis: it overcomes the speed variation issue and eliminates smearing in the power spectrum, making fault peaks clear and easy to track. Spectral peaks are identified based on the following criteria (a minimal code sketch of this step is given after the diagnosis cases listed below):

1. Signals larger than 3 dB relative to the spectrum carpet level are considered as peaks.
2. The carpet level is determined by median filtering.
3. Peaks are always local maxima.
4. For pruning purposes, chosen peaks should be within the first and third quartiles plus 1.5 times the interquartile range of all identified peaks.

The chosen peaks are then prioritized and labeled according to the failure modes to be detected. For instance, when detecting a bearing fault, the shaft's running speed harmonics are removed first, then the bearing fault peaks such as the inner race ball pass frequency (BPFI) and the outer race ball pass frequency (BPFO) are identified. However, the line frequency should always be the first to be removed, as it is mainly related to electrical interference, not to mechanical faults. Some fault frequencies with the same priority, such as BPFO and sidebands of the tooth meshing frequency (TMF), cannot be removed, as they can coexist at the same frequency and accumulate in amplitude. Finally, the peaks related to a known failure mode are summed and trended over time as a descriptor; in other words, one failure mode is described by one descriptor.

A unique feature of automatic diagnosis is its ability to track unknown bearing frequencies. When monitoring thousands of wind turbines from different turbine owners, it is considered fortunate for diagnostic engineers to know which bearing is installed, especially on the generator side. Frequency tracking is beneficial in these three diagnosis cases:

1. The diagnosed shaft has only one bearing, and we have a list of possible bearing types. In that case, the bearing frequency consistently converges to a number when a bearing fault develops. However, this case is relatively infrequent, as sensors usually crosstalk with each other.
2. The diagnosed shaft has many bearings, and we have a list of possible bearing types for each bearing position. In that case, the bearing fault peaks are grouped by bearing position, and the best bearing type is presented for each position. Bearing types will still be unknown until a fault develops. However, when several bearings are failing, automatic diagnosis can identify which bearing is more severely damaged than the others. This advantage solves the crosstalk issue.


3. There is no list of possible bearing types. In that case, the correct bearing frequencies can be discovered, given an adequately wide frequency search space.
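The following sketch illustrates the peak-identification criteria listed above. It is not Brüel & Kjær Vibro's production code; the kernel length and threshold values are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks, medfilt

def prominent_peaks(spectrum_db, kernel=101, threshold_db=3.0):
    """Sketch of the peak-identification criteria 1-4 (illustrative, not the production code).

    spectrum_db : power (or order) spectrum in dB; kernel : odd median-filter length in bins.
    """
    carpet = medfilt(spectrum_db, kernel_size=kernel)       # criterion 2: carpet level by median filtering
    excess = spectrum_db - carpet
    peaks, _ = find_peaks(excess, height=threshold_db)      # criteria 1 and 3: local maxima > 3 dB
    if peaks.size == 0:
        return peaks
    heights = excess[peaks]
    q1, q3 = np.percentile(heights, [25, 75])
    iqr = q3 - q1
    keep = (heights >= q1 - 1.5 * iqr) & (heights <= q3 + 1.5 * iqr)   # criterion 4: pruning
    return peaks[keep]
```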

3 Automatic Diagnosis in Real Wind Turbines

Over the years, Brüel & Kjær Vibro has monitored more than 8,000 turbines of various types from different manufacturers. We collect the descriptor scalars every half hour and the time waveform data from various sensors at different turbine components every two days, or once every few hours in some turbines. The time waveform sampling rate from each sensor is 25.6 kHz, with a length of 10.24 seconds. In this paper, we use these time waveform data to perform automatic diagnosis over specific periods. We focus on detecting a planetary defect, a 3rd stage gear wheel fault, and a generator bearing fault on turbines with one-planetary-and-two-helical (1P2H) gearbox configurations. We use the terms "frequency" and "order" interchangeably. We indicate a fault by an increasing trend in the automatic diagnosis descriptor value, and we determine the criteria for automatic detection beforehand.

3.1 Planetary Stage Fault

A planetary stage in a wind turbine gearbox consists of a sun gear, a ring gear, a planet carrier, and typically three planet gears. The planet gears revolve around the sun gear while rotating on their planet shafts at the same time. Once a planet gear finishes one revolution, the number of meshing teeth of the sun gear becomes equal to the number of meshing teeth of the ring gear. Consequently, the ring and sun defect frequencies equal the number of planets times the meshing frequency divided by the number of teeth of the respective gear, while the planet defect frequency equals the number of failing planets times the meshing frequency divided by the number of planet gear teeth. Sidebands around the planetary TMF with a spacing equal to the planetary defect frequency indicate the presence of a planetary defect. Considering that the planet gears move at the same speed as the carrier shaft, sidebands with a spacing equal to the carrier shaft speed could also be present around the planetary defect sidebands, as described in [5]. The sum of all these sidebands is used as a planetary defect indicator. In automatic diagnosis, these sidebands are tracked in the order spectrum. The time waveforms obtained from a sensor at the intermediate stage are angular-resampled based on the low-speed shaft that connects the planetary stage gear and the 2nd stage helical gear. Figure 1 shows the order spectrum of a turbine having a planetary defect. Figure 2 shows all the ring defect peaks. Figure 3 shows the sum of all these peaks, and the sums of all peaks related to sun and planet defects, as descriptors trended over a year. According to the figure, the fault was developing


Fig. 1 The black line is the order spectrum of a turbine with a planetary defect seen from an intermediate-speed stage sensor with a resolution of 0.078. The small red bar is the third TMF of the planetary stage. The other planetary TMFs are too small to be peaks

Fig. 2 The blue bars are the ring defect peaks around its TMF. The small red bar is the third TMF of the planetary stage. The other planetary TMFs are too small to be considered as peaks

since April 30, 2012. The turbine owner performed maintenance on August 8, 2012, and the trends returned to normal.
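As a worked illustration of the defect-frequency relations described above, the helper below (a hypothetical function for illustration only; gearbox conventions may differ) computes the ring, sun, and planet defect frequencies from the planetary tooth-mesh frequency and the tooth counts.

```python
def planetary_defect_orders(tmf, n_planets, z_ring, z_sun, z_planet, n_failing_planets=1):
    """Defect frequencies of a planetary stage from the relations above (hypothetical helper;
    tmf is the planetary tooth-mesh frequency, z_* are tooth counts, units follow tmf)."""
    ring_defect = n_planets * tmf / z_ring
    sun_defect = n_planets * tmf / z_sun
    planet_defect = n_failing_planets * tmf / z_planet
    return ring_defect, sun_defect, planet_defect

# Example (illustrative numbers): tmf = 100, 3 planets, z_ring = 96, z_sun = 24, z_planet = 36
# gives ring, sun, and planet defect frequencies of 3.125, 12.5, and about 2.78 in the same units.
```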

3.2 Third Stage Wheel Fault

To detect a wheel fault at the 3rd stage of the gearbox, we trend the sum of the sidebands of its TMF with a spacing equal to the running speed of the intermediate-speed shaft. Figure 4 displays


Fig. 3 The trend of the planetary defect descriptors over a year

Fig. 4 The trend of the 3rd stage wheel and pinion fault descriptors over a year

the trend of the descriptor for a turbine having a 3rd stage wheel fault. The fault started to develop in December 2015. The turbine owner performed maintenance on April 28, 2017, and since then the descriptor trend has been back to a low level. Figures 5 and 6 depict the turbine's order spectra when the 3rd stage wheel and pinion faults are present, respectively. Figure 6 indicates that a pinion fault could be present. However, as indicated in Fig. 5, the wheel fault is more severe than the pinion fault because there are more wheel fault sidebands with high amplitudes. The pinion fault could be a secondary effect of the wheel fault.


Fig. 5 The red bars are the 3rd stage wheel fault sidebands around its TMF, angular resampled with the high-speed shaft, seen from high-speed stage sensor with a resolution of 0.004

Fig. 6 The pink bars are the 3rd stage pinion fault sidebands around its TMF

3.3 Generator Bearing Inner and Outer Race Defects

When the bearing fault frequencies, for example, BPFI and BPFO, are unknown, the most probable frequencies must be determined first. This step requires having an adequately wide frequency band to search. The predicted bearing fault frequency can be determined when the frequency trend is relatively consistent for some time. The descriptor value that indicates a bearing fault in automatic diagnosis is the sum of the harmonics of the predicted frequency and their sidebands.
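A minimal sketch of such a narrow-band descriptor is given below: it sums, from an order spectrum, the amplitudes at the harmonics of a predicted fault order and at their running-speed sidebands. The function name, tolerance, and counts are illustrative assumptions, not the implementation used in the monitoring system.

```python
import numpy as np

def bearing_descriptor(orders, amps, fault_order, speed_order=1.0, n_harm=5, n_side=2, tol=0.02):
    """Sketch of a narrow-band bearing descriptor: sum of the amplitudes at the harmonics of a
    (predicted) fault order and at their running-speed sidebands in an order spectrum."""
    def amp_at(target):
        idx = np.where(np.abs(orders - target) <= tol)[0]
        return amps[idx].max() if idx.size else 0.0

    total = 0.0
    for h in range(1, n_harm + 1):
        total += amp_at(h * fault_order)                    # harmonic of the fault frequency
        for s in range(1, n_side + 1):                      # sidebands around that harmonic
            total += amp_at(h * fault_order + s * speed_order)
            total += amp_at(h * fault_order - s * speed_order)
    return total
```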


Fig. 7 The trend of the outer race and inner race defects on the generator bearing over 4.5 years. Misalignment, outer race defect, and inner race defect are in m/s². BPFO and BPFI are in orders of the generator running speed

Figure 7 displays the trends of the bearing outer and inner race defects in a turbine. The outer race defect is the sum of the amplitudes of the BPFO harmonics, while the inner race defect is the sum of the amplitudes of the BPFI harmonics and their running speed sidebands. Misalignment is the amplitude at twice the generator running speed. BPFO and BPFI are the tracked frequencies of the outer and inner race defects, respectively. Generally, the outer race bearing fault is high before July 2017, which is confirmed by the consistent trend of BPFO since January 1, 2015. Meanwhile, the inner race bearing defect starts to develop on May 17, 2016, confirmed by the consistent trend of BPFI from May 17, 2016, to July 2017. We can infer from the shift of BPFI and BPFO in August 2017 that the generator bearing type changed. The predicted BPFO and BPFI frequencies fluctuate after August 2017; the lack of a consistent frequency indicates no bearing fault or only a mild one. After the bearing replacement, the bearing defect trend is still relatively high, which could indicate the presence of a real bearing defect. Around spring 2018, the inner race bearing defect trend slightly increases, yet there is no indication of a bearing change. This pattern could be due to a misalignment issue present since January 2018, indicated by the purple line. Once the generator was realigned by early summer 2018, the inner race defect trend returned to its previous level.

3.4 Generator Bearing Rolling Element Fault

When the ball spin frequency (BSF) and the fundamental train frequency (FTF) are unknown, the most probable frequencies must be determined to monitor the rolling element defect. The predicted BSF and FTF can be determined when both are relatively consistent for some time.


Figure 8 displays the trend of the rolling element fault in a turbine. The rolling element fault (with sidebands) is the sum of the amplitudes of the BSF harmonics and their FTF sidebands. BSF is the tracked ball spin frequency. In general, the rolling element fault descriptor suddenly increased on February 22, 2015, confirmed by the consistent BSF trend. Figure 9 shows the turbine's order spectrum when the rolling element fault was at its maximum level on February 22, 2015.

Fig. 8 The trend of the rolling element fault on the generator bearing over three months. Rolling element fault is in m/s², and BSF is in Hz

Fig. 9 The order spectrum of a turbine with BSF fault on generator bearing, resampled with generator shaft, seen from generator drive-end sensor


4 Conclusion

Automatic diagnosis represents a failure mode as a single descriptor, which significantly simplifies fault detection. Automatic diagnosis can also eliminate manual identification of fault frequencies when such information is not available. We can fully automate the whole process using predetermined criteria for fault detection. Automatic diagnosis will help monitor thousands of turbines with minimum human effort, especially with IoT infrastructure availability. In the end, automatic diagnosis makes wind turbine condition monitoring very cost-effective.

References

1. Andersson C, Gutt S, Hastings M (2007) Cost-effective monitoring solution using external surveillance centre. In: The 2nd world congress on engineering asset management and the 4th international conference on condition monitoring, Harrogate, UK
2. Tavner P (2012) Offshore wind turbines. The Institute of Engineering and Technology
3. Crabtree C (2011) Condition monitoring techniques for wind turbines. Unpublished doctoral dissertation, Durham University
4. Marhadi K, Hilmisson R (2013) Simple and effective technique for early detection of rolling element bearing fault: a case study in wind turbine application. In: COMADEM, Helsinki
5. Miao Q, Zhou Q (2015) Planetary gearbox vibration signal characteristics analysis and fault diagnosis. Shock and Vibration

Blade Monitoring in Turbomachines Using Strain Measurements D. Abboud, M. Elbadaoui, and N. Tableau

Abstract Blade rubbing is an undesirable but unavoidable phenomenon in turbomachinery. It decreases the engine performance and can lead to blade and/or stator damage. Thus, blade monitoring is of high importance, not only to ensure safety but also to optimize the engine efficiency. In particular, the detection and identification of the instants of contact between the blades and the stator provide valuable information that can be used in a preventive maintenance scheme. This paper proposes a new detection approach based on strain gauge measurements. Since the rubbing phenomenon creates a shock wave that deforms the structure, the resulting deformations can be efficiently measured by a strain gauge mounted on the stator. However, the strain signal is generally corrupted by other mechanical and electromagnetic interferences. To tackle this issue, the authors propose a signal separation technique based on modeling the blades' signal as cyclo-non-stationary. The use of the synchronous fitting is then proposed to separate the blades' component from environmental noise. The proposed technique is validated on real signals captured from a strain gauge mounted on the stator of a turbomachine.

Keywords Health monitoring · Blades · Turbomachines · Cyclo-non-stationary · Source separation · Strain gauges



1 Introduction

Turbomachines aim at transforming the kinetic energy of a fluid into mechanical energy (and vice versa) via a rotary assembly called a rotor. They have different designs according to their functions: turbines, pumps, compressors, turbochargers, etc. Nevertheless, most of them have a common architecture consisting of a rotor (a wheel mounted on a shaft) connected to a fixed housing (the stator) via a bearing. To allow a relative rotational movement between the stator and the rotor, the rotor is necessarily spaced from the stator, and this spacing is commonly called the blade clearance of the turbomachine. Blade clearance is an important design and operation parameter of a turbomachine. Its minimization prevents excessive amounts of fluid from bypassing the rotating row of blades and, therefore, improves the overall efficiency of the turbomachine. However, this clearance is subjected to fluctuations due to thermal expansion and mechanical phenomena dependent on the operating regime of the turbomachine as well as on other environmental conditions (e.g., temperature) [1]. Therefore, a minimal clearance often induces contacts between the blade tip and the stator, which lead to blade rubbing. Though inevitable, the rubbing phenomenon decreases the engine performance and can lead to blade and/or stator damage [2]. Thus, detecting and identifying the instants of contact between the blades and the stator is valuable and offers at least two advantages. The first one is to obtain information about the existing wear of the turbomachine, which can feed a preventive maintenance scheme. The second one is to provide the clearance control system with the contact information. In practice, contact can be detected using stator vibrations, acoustic signals, and/or dynamic pressure [3, 4]. In this paper, it is proposed to measure the deformation (or the displacement) using strain gauges mounted on the stator. As for any sensor, it was found that the signal is highly corrupted by electromagnetic interference and background noise. Proper signal processing techniques are thus needed to reduce the environmental noise so that the signals associated with the contact can be accurately distinguished. In this paper, a signal processing method is proposed to extract the mechanical component produced by the blades' rub from noisy perturbations. In this regard, a model for the strain signal produced by the blade rubbing phenomenon is proposed within the "cyclo-non-stationary" framework. This framework was originally proposed to model vibration signals captured under nonstationary regimes [5]. In this paper, it will be shown that this framework is general enough to encompass strain signals resulting from the blade rubbing phenomenon, and that it offers a straightforward way to extract the blades' contribution from environmental noise through the synchronous fitting [6, 7]. The rest of the paper is organized as follows: Sect. 2 describes the benchmark used to demonstrate the proposed approach and formulates the problem. Section 3 shows how the strain signal resulting from blade rubbing is cyclo-non-stationary, and reviews the synchronous fitting previously proposed in Refs. [6, 7], which is used to separate the blade contributions from the noise. Section 4 exposes and discusses the results. The paper is sealed with a conclusion in Sect. 5.


Fig. 1 Illustration of the rubbing test bench (a), focus on the integrated stator section (b), and the strain gauges bonded on the stator section (c)

2 Problem Statement and Formulation

This section describes the experiment setup used to validate the blade rub detection approach proposed in this paper. A mathematical model of the strain gauge signal is then proposed.

2.1 Experiment Setup

To test the blade rubbing phenomenon and its consequences on the stator, a specific rubbing test bed has been used (Fig. 1a). The experimental setup consists of a rotor combining three identical metallic blades and a fully integrated stator section mounted on a sliding table (Fig. 1b). Several strain gauges have been bonded on the stator parts (Fig. 1c) so that strain values near the contact location can be measured during the tests. Tests have been conducted at a rotor speed of 15,000 rpm. The penetrating speed (relative approach between stator and rotor) has been set between 0.01 and 0.15 mm/s, and a maximum penetrating depth of 0.4 mm has been reached during the tests. To properly measure the dynamic phenomena, the strain sampling frequency has been fixed to 240 kHz during the tests.

2.2 Problem Modeling

When a blade tip rubs against the stator, a sudden variation in the stress occurs at the interface owing to the transfer of kinetic energy into elastic energy. Often, the


shock waves propagate into and excite the stator structure, which responds according to its normal modes. While the occurrence of these forces is cyclic, due to the rotating nature of the rotor, their magnitude is likely to change from cycle to cycle, as it can depend on the depth of the rubbing and on other operational conditions. Based on this, considering a rotor with R blades, it is proposed to approximate the contribution of the rth blade to the measured strain signal as follows:

d_r[n] = ( Σ_{q=1}^{Q} a_qr δ[n − qN − τ_r] ) ⊗ h_r[n] = Σ_{q=1}^{Q} a_qr h_r[n − qN − τ_r],  ∀n ∈ {1, …, L}    (1)

where n stands for the discrete angle variable, N is the number of samples over which the rotor turns one cycle, L is the whole signal length, Q = L/N denotes the number of cycles, ⊗ is the numerical convolution operator, δ[n] is the Kronecker function, h_r[n] is the impulse response of the stator excited by the rth blade, a_qr is the magnitude of the contact force (the excitation) generated by the rth blade at the qth cycle (note that a_qr = 0 in the absence of rubbing), τ_r = τ_0 + (r − 0.5)/R is the angular shift expressed in number of samples, and τ_0 is the angular shift of the first blade. Assuming that the correlation length of the impulse response of the structure is smaller than one cycle, Eq. (1) becomes:

d_r[n] ≈ a_r[n] c_r[n],  ∀n ∈ {1, …, L}    (2)

where c_r[n] = Σ_{q=1}^{Q} h_r[n − qN − τ_r] is a periodic function of period N and a_r[n] is a smooth function such that a_r[n − qN − τ_r] = a_qr. The measured signal is obtained by summing the contributions of all the blades and an environmental noise denoted by W[n]:

X[n] = D[n] + W[n],  ∀n ∈ {1, …, L}    (3)

where

D[n] = Σ_{r=1}^{R} d_r[n],  ∀n ∈ {1, …, L}    (4)
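To make the model of Eqs. (1)–(4) concrete, the toy simulation below generates one impact per blade and per cycle, convolved with an assumed stator impulse response and corrupted by noise. It is not taken from the paper; all numerical values and names are arbitrary illustrative assumptions.

```python
import numpy as np

def simulate_strain(R=3, Q=60, N=1024, decay=0.05, noise=0.5, seed=0):
    """Toy simulation of Eqs. (1)-(4): one impact per blade and per cycle, modulated from cycle
    to cycle, convolved with an assumed stator impulse response, plus additive noise W[n]."""
    rng = np.random.default_rng(seed)
    L = Q * N
    n = np.arange(N)
    h = np.exp(-decay * n) * np.sin(2 * np.pi * 0.1 * n)    # assumed impulse response h_r[n]
    X = noise * rng.standard_normal(L)                      # environmental noise W[n]
    for r in range(R):
        tau = int((r + 0.5) * N / R)                        # angular shift of blade r
        a = np.maximum(0.0, np.linspace(-0.5, 2.0, Q))      # smooth cycle-to-cycle magnitudes a_qr
        excitation = np.zeros(L)
        excitation[tau::N] = a                              # impulses at n = qN + tau
        X += np.convolve(excitation, h)[:L]                 # d_r[n] of Eq. (1)
    return X
```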


3 Proposed Solution

This section shows the cyclo-non-stationary nature of the blades' signal and reviews the synchronous fitting technique.

3.1 Cyclo-Non-stationary Modeling of the Blades' Signal

Considering the smoothness of the modulation functions a_1[n], …, a_R[n], and applying the Weierstrass theorem, one may rewrite d_r[n] as a Pth-order polynomial with periodic coefficients:

d_r[n] = Σ_{p=1}^{P} c̃_rp[n] n^p,  ∀r ∈ {1, …, R}, ∀n ∈ {1, …, L}    (5)

where c̃_rp[n] = e_pr c_r[n] and the e_pr are the polynomial coefficients such that a_r[n] = Σ_{p=1}^{P} e_pr n^p. Then, the signal of interest, D[n], can also be expressed through a Pth-order polynomial with periodic coefficients:

D[n] = Σ_{p=1}^{P} C_p[n] n^p,  ∀n ∈ {1, …, L}    (6)

where C_p[n] = Σ_{r=1}^{R} c̃_rp[n] is a periodic function of period N (because it is a sum of periodic functions with the same period). This is typically a deterministic cyclo-non-stationary signal. The next subsection addresses the extraction of this component from asynchronous noise.

3.2 The Synchronous Fitting

The synchronous fitting is a cyclo-non-stationary method that estimates a deterministic cyclo-non-stationary component with a single fundamental period when the latter is corrupted by other asynchronous components. It can be seen as a generalization of the synchronous average, a common cyclostationary tool [8], to the cyclo-non-stationary case. Hereafter, this method is applied to estimate the signal of interest D[n]. Let us first denote by n̄ = ⟨n − 1⟩_N + 1 the angular position within the period N (⟨a⟩_b stands for the remainder of the division of a by b). Because C_p[n] is periodic with period N, one has C_p[n̄] = C_p[n̄ + (q − 1)N] for any integer q = 1, …, Q (Q is the number of cycles). One can rewrite Eq. (6) as:


D[n̄ + (q − 1)N] = Σ_{p=0}^{P} C_p[n̄] (n̄ + (q − 1)N)^p,  ∀q ∈ {1, …, Q}, ∀n̄ ∈ {1, …, N}    (7)

Applying the binomial theorem, (n̄ + (q − 1)N)^p = Σ_{i=0}^{p} l_i^p N^i (n̄ − N)^{p−i} q^i, where the l_i^p are the binomial coefficients, one can deduce that the samples associated with the position n̄, s_q[n̄] = D[n̄ + (q − 1)N] for all integers q ∈ {1, …, Q}, also define a Pth-order polynomial with constant coefficients, viz.

s_q[n̄] = Σ_{p=0}^{P} b_p[n̄] q^p,  ∀q ∈ {1, …, Q}, ∀n̄ ∈ {1, …, N}    (8)

 j where b p [n] = N p Pj= p C p (n − N ) j− p c j [n]. It is important to note that b p [n] is parametrized by n. An estimation of the predictible part is to find the curve s[n] =  T s1 [n], . . . , s Q [n] for all n ∈ {1, . . . , N } that reduces the noise to find an estimation of b[n] = [b1 [n], . . . , b P [n]]T for each n ∈ {1, . . . , N } to a given order P of the polynomial order. This can be made by minimizing the mean square error, i.e. ⎛

∀n ∈ {1, . . . , N } b[n] = argmin⎝ ⎛ = argmin⎝ ⎛

Q 

⎞ W [n + (q − 1)N ]2 ⎠

q=1 Q 

2



D[n + (q − 1)N ] − xq [n] ⎠

q=1

⎞2 ⎞ ⎛ Q P   = argmin⎝ ⎝ b p [n]q p − xq [n]⎠ ⎠ q=1

(9)

p=0

where x_q[n̄] = X[n̄ + (q − 1)N]. The matrix Φ of dimension Q × (P + 1) is defined such that Φ_q,p = q^{p−1} (with q ∈ {1, …, Q} and p ∈ {1, …, P + 1}), and x[n̄] = [x_1[n̄], …, x_Q[n̄]]^T. The above equation can be rewritten as:

b[n̄] = argmin ‖Φ b[n̄] − x[n̄]‖²,  ∀n̄ ∈ {1, …, N}    (10)

and its solution writes:

b[n̄] = (Φ^T Φ)^{−1} Φ^T x[n̄],  ∀n̄ ∈ {1, …, N}    (11)

Blade Monitoring in Turbomachines Using Strain Measurements

93

Now that the coefficients are calculated, the signal of interest located at n̄ can be found as:

s[n̄] = Φ b[n̄] = Φ (Φ^T Φ)^{−1} Φ^T x[n̄],  ∀n̄ ∈ {1, …, N}    (12)

Last, the deterministic component can be estimated as follows:

D̂[n] = s_q[n̄],  with n̄ = ⟨n − 1⟩_N + 1 and q = 1 + (n − n̄)/N,  ∀n ∈ {1, …, L}    (13)
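A minimal NumPy sketch of the synchronous fitting of Eqs. (8)–(13) is given below. It solves the least-squares problem with a direct solver rather than the explicit normal equations of Eq. (11), and it assumes the signal has already been angle-resampled to an integer number of cycles; the function name and default order are illustrative.

```python
import numpy as np

def synchronous_fitting(x, N, P=8):
    """Sketch of the synchronous fitting of Eqs. (8)-(13).

    x : angle-resampled signal containing Q = len(x) // N whole cycles of N samples;
    P : polynomial order. Returns the estimated deterministic cyclo-non-stationary part.
    """
    Q = len(x) // N
    X = np.asarray(x[:Q * N], dtype=float).reshape(Q, N)   # column n̄ holds x_q[n̄], q = 1..Q
    q = np.arange(1, Q + 1, dtype=float)
    Phi = np.vander(q, P + 1, increasing=True)             # Phi[q, p] = q**p, cf. Eq. (9)
    B, *_ = np.linalg.lstsq(Phi, X, rcond=None)            # coefficients b_p[n̄], cf. Eq. (11)
    S = Phi @ B                                            # fitted curves s_q[n̄], cf. Eq. (12)
    return S.reshape(-1)                                   # reassembled estimate of D[n], cf. Eq. (13)

# The residual x[:Q * N] - synchronous_fitting(x, N) then contains the asynchronous part
# (e.g., the electromagnetic interferences), as illustrated in Sect. 4.
```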

4 Results

This section is dedicated to validating the proposed approach on the signal obtained from the experiment described in Sect. 2.1. The time signal is displayed in Fig. 2 (top plot) together with its spectrogram. The spectrogram indicates the presence of strong electromagnetic interferences that mask the mechanical component generated by the blades; these components appear as horizontal lines in the spectrogram (being typically harmonics with fixed frequencies, asynchronous to the shaft rotation). The rubbing occurs after 18 s and lasts almost two seconds, during which the depth of the rubbing changes. The occurrence of this phenomenon is seen in the spectrogram and manifests itself through a wideband waveform that extends over a large frequency band. As expected, the excitation is angle-periodic with respect to the blade rotation

Fig. 2 Raw strain gauge signal and its spectrogram


and blade pass frequencies. Nevertheless, it is important to estimate the strain signal associated with the rubbing itself to clearly identify and quantify the phenomenon. It is thus proposed to apply the synchronous fitting with respect to the blade rotation frequency. Though acquired under a stationary condition, the time signal is first resampled using a blade position sensor that measures the passage of the blades, to compensate for small speed fluctuations. The obtained signal is now synchronized with the blade positions, and its plot is shown in Fig. 3 (top plot). The synchronous fitting is then applied using an 8th-order polynomial, and the obtained results are shown in Fig. 3. Very interestingly, the estimated signal returns small values before and after the rubbing phenomenon, but high values during it, which clearly indicates the presence and the strength of the rubbing. The residual signal, which basically consists of electromagnetic interferences, appears more regular and has similar time statistics throughout; this is expected, as it is independent of the mechanical condition of the system. It is important to note that the strain signal contains time-varying DC components which turn out to be useless in the detection process; this component is separated by simply applying a narrow-band low-pass filter and is then disregarded. Finally, a close-up of the waveform during the rubbing is displayed in Fig. 4, showing three events in each cycle associated with the response of each blade in the rotor (the rotor consists of three blades). These results highlight the potential of the proposed approach to detect and quantify rubbing in turbomachines.

Fig. 3 Raw strain gauge signal in the angular domain, the blades’ component estimated by the synchronous fitting, and the corresponding residue that basically consists of the electrical part


Fig. 4 Close-up in the rubbing zone of the raw strain gauge signal, the blades’ component estimated using the synchronous fitting, and the corresponding residue that basically consists of the electrical part

5 Conclusion

In this paper, a blade rub detection technique based on strain gauge measurements is proposed. A mathematical model for the strain signal is proposed to model the rub phenomenon, and it is shown that the related response is deterministic cyclo-non-stationary. While the periodicity is clearly justified by the rotating nature of the blades in the angular domain, the cycle-to-cycle change of the statistics is physically caused by the presence or absence of the rub on the one hand, and by its depth on the other hand. This finding offers a straightforward way to extract this component from the environmental noise. Accordingly, the synchronous fitting is used to perform the separation (or denoising) step. An experiment has been conducted on real turbomachines to validate the technique. Furthermore, the filtered signals have been used to calibrate a dedicated numerical model [9]. The results highlight the efficiency of the synchronous fitting in extracting the blades' component and in indicating the occurrence of blade rub.

References

1. Jia B, Zhang X (2011) An optical fiber blade tip clearance sensor for active clearance control applications. Procedia Eng 15:984–988
2. Barschdorff D, Korthauer R (1986) Aspects of failure diagnosis on rotating parts of turbomachinery using computer simulation and pattern recognition methods. In: International conference on condition monitoring, Brighton, United Kingdom, May 21–23
3. Abdelrhman AM, Hee LM, Leong MS, Al-Obaidi S (2014) Condition monitoring of blade in turbomachinery: a review. Adv Mech Eng 2014:10
4. Abdelrhman AM et al (2015) Time frequency analysis for blade rub detection in multistage rotor system. Appl Mech Mater 773–774:95–99
5. Antoni J, Ducleaux N, Nghiem G, Wang S (2013) Separation of combustion noise in IC engines under cyclo-non-stationary regime. Mech Syst Signal Process 38(1, SI):223–236
6. Daher Z, Sekko E, Antoni J, Capdessus C, Allam L (2010) Estimation of the synchronous average under varying rotating speed condition for vibration monitoring. In: Proceedings of ISMA 2010 including USD 2010
7. Abboud D, Marnissi Y, Assoumane A, Elbadaoui M (2019) Synchronous fitting for deterministic signal extraction in non-stationary regimes: application to helicopter vibrations. In: Proceedings of the international conference of Surveillance, VISHNO and EVA (SURVISHNO 2019), 8–10 July 2019, Lyon, France
8. Antoni J, Bonnardot F, Raad A, El Badaoui M (2004) Cyclostationary modelling of rotating machine vibration signals. Mech Syst Signal Process 18(6):1285–1314
9. Nyssen F, Tableau N, Lavazec D, Batailly A (2019) Experimental and numerical characterization of a ceramic matrix composite shroud segment under impact loading. Int J Mech Sci (submitted)

Measuring the Effectiveness of Descriptors to Indicate Faults in Wind Turbines D. Saputra and K. Marhadi

Abstract Over the years, numerous descriptors have been proposed to indicate the health and various failure modes of wind turbine components. Description and mathematical interpretation of descriptors, such as kurtosis, crest factor, peak to peak, mesh, and residual RMS, have been shown in several works to detect bearing and gear faults. However, there are only a few works that evaluate the effectiveness of various descriptors in detecting faults. This paper proposes Descriptor Ranking, a novel machine learning application that measures descriptors' effectiveness to indicate different failure modes accurately. The proposed application uses different feature selection algorithms to rank the importance of the descriptors. Results show that Descriptor Ranking can correctly rank the importance of any calculable descriptors. Also, we evaluate the performance of different feature selection algorithms, such as random forest regressor and logistic regression. Given a balanced set of vibration data with apparent faults and no faults, Descriptor Ranking visualizes different descriptors' effectiveness ranks, including newly proposed descriptors, in detecting different failure modes. Knowing the effectiveness of a descriptor in indicating specific failure modes is essential to optimize the condition monitoring strategy of wind turbines or any other machinery. Having multiple descriptors to identify the same failure mode can be avoided. Hence, we can reduce false alarms, and faults can be detected more accurately. With fewer false alarms, monitoring the condition of many wind turbines is going to be more efficient.

Keywords Wind turbine faults · Descriptor · Machine learning · Feature selection



1 Introduction

The strategy for monitoring the condition of a wind turbine has evolved over the last decade. Routine analysis of the vibration signal from sensors near the monitored components is one popular strategy. We perform vibration analysis by calculating statistical features, called descriptors, that indicate various failure modes. Different descriptors have been proposed and mathematically interpreted to detect bearing and gear faults, such as residual RMS, kurtosis, and peak to peak [1]. These descriptors are subsequently trended over time to monitor any developing faults. There have not been many evaluations of the effectiveness of the various descriptors known to identify different failure modes. We should avoid ineffective descriptors, as they may lead to false alarms; also, an excessive number of descriptors to be calculated requires unnecessary computational resources. Hence, we should not implement newly proposed descriptors without thorough evaluation. This paper proposes Descriptor Ranking, a novel machine learning application that selects descriptors that accurately indicate different failure modes. We compare some well-known descriptors and descriptors calculated based on a method called Automatic Diagnosis [2]. In a nutshell, Automatic Diagnosis descriptors calculate the sum of the amplitudes of the peaks related to particular failure modes in the order spectrum.

2 Machine Learning Methods

Machine learning offers an effective way to select, through feature selection, the descriptors that best explain a failure mode. A useful feature selection algorithm intelligently recognizes redundant and irrelevant descriptors, which can then be removed without incurring much loss of supervised learning prediction power. The initial, most important step in feature selection is data gathering: there should be plenty of real vibration data, balanced between recordings containing mechanical faults and recordings with no faults. A biased result may occur if there are not enough data or if there are more data in one group than in the other. The subsequent crucial step is data cleaning. A short time waveform can cause erratic behavior in fault detection, primarily when evaluating many of these time waveforms over a certain period. To illustrate, suppose there are 20 files in time waveform format, each with a duration of 10 seconds, and a specific fault signature appears briefly in the waveform every 20 seconds. Then 10 seconds is not a long enough window to guarantee the presence of the fault: ten waveform files might indicate the presence of a fault while the other ten indicate no fault. The machine learning training set should be free from such false-negative cases. Another common mistake in data cleaning is misdiagnosis, i.e., when a time waveform file is labelled as faulty while there is no fault, or vice versa; multiple faults considered as just one fault is also a case of misdiagnosis. Also, we should not


use data without a confirmed diagnosis. In wind turbine operation and maintenance, it is possible to miss an actual fault, even with a physical inspection. The inability to physically identify an actual fault could give incorrect feedback on the given diagnosis. In such cases, we cannot use the data either. The next step is to create a training set by calculating the descriptor values and assigning a class to each vibration record: faulty (1) or no-fault (0). We can then feed a well-generated training set into a machine learning algorithm to obtain a predictive model that describes the relationship between the descriptors and the fault status, i.e., faulty or no-fault. The model is subsequently used to rank the effectiveness of a descriptor in assigning the fault or no-fault condition. There are several feature selection algorithms that we can use, as listed in Table 1. In this paper, we use the Random Forest Regressor algorithm [3] to rank the descriptors' importance. The algorithm performs very well compared to the other methods in our examples because Random Forest uses bagging and feature randomness when building the individual decision trees, creating a de-correlated forest of trees whose combined prediction is more accurate than that of any individual tree. In real life, the wisdom of the crowd is generally better than lone decision making; this is the philosophy behind the Random Forest algorithm. The Random Forest algorithm takes a vote from many decision trees, each built over a random subset of the training set and a random subset of features. This guarantees that the individual trees are de-correlated, thus minimizing overfitting. Each tree split divides the training data into two clusters, each of which contains sub-datasets similar among themselves and different from the other cluster. Each descriptor's importance is therefore calculated from the purity of the clusters it produces, where impurity is a measure of how often a randomly chosen sample would be labelled incorrectly. Impurity is measured using the Gini index in the Random Forest Classifier and the mean squared error (MSE) in the Random Forest Regressor. The idea behind the Random Forest algorithm is to calculate how much the presence of each descriptor decreases the impurity: the more a descriptor decreases the impurity, the more important the descriptor is. The impurity decrease from each descriptor is subsequently averaged across all trees to determine the descriptor's final importance, represented as a fraction of all descriptors' total importance.
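A minimal sketch of this ranking step with scikit-learn is shown below. It is not the production implementation; the descriptor table and labels are assumed to be available as a pandas DataFrame and a 0/1 label vector.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def rank_descriptors(X, y, n_trees=500, seed=0):
    """Sketch of Descriptor Ranking with a random forest regressor (not the production code).

    X : DataFrame of descriptor values (one column per descriptor); y : 0/1 fault labels.
    Returns the descriptors sorted by impurity-based importance (fractions summing to 1),
    together with the out-of-bag R² as a rough check of the model's predictive power.
    """
    model = RandomForestRegressor(n_estimators=n_trees, oob_score=True, random_state=seed)
    model.fit(X, y)
    importances = pd.Series(model.feature_importances_, index=X.columns)
    return importances.sort_values(ascending=False), model.oob_score_
```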

3 Data

Over the years, Brüel & Kjær Vibro has monitored the condition of more than 8,000 turbines of various types from different manufacturers. Every half hour, we collect various descriptors from these turbines. We also collect time waveform data from sensors located at different turbine components every two days, or every few hours in some turbines. The time waveform sampling rate from each sensor is 25.6 kHz with a length of 10.24 seconds. We used these time waveform data to generate descriptors and to evaluate their importance in detecting faults. This paper focuses on finding the best descriptors to detect a planetary defect and a second stage wheel fault on turbines with a one-planetary-and-two-helical (1P2H) gearbox configuration.


Table 1 List of feature selection algorithms compared in this paper (the detailed evaluation is omitted for brevity)

Pearson correlation coefficient (PCC): Linear correlation between a descriptor and the fault status, regardless of correlation among descriptors.
Distance correlation: Addresses PCC's inability to measure variable independence.
F-regression: Univariate linear regression that uses an F-test to assess the significance of the relation between a descriptor and the fault status, regardless of correlation among descriptors.
Random forest classifier: Ensemble learning algorithm that constructs decision trees on various subsamples and uses the mode of their predictions to improve predictive accuracy and to control overfitting. Out-of-bag samples are used to estimate the R2 on unseen data.
Random forest regressor: Similar to the above, but it uses averaging instead of the mode.
Extra trees classifier: Similar to Random Forest, but instead of searching for the most discriminative thresholds, thresholds are drawn at random for each candidate feature and the best of these is picked as the splitting rule. This reduces the variance of the model but slightly increases bias.
Extra trees regressor: Similar to the above, but it uses averaging instead of the mode.
Random forest regressor with drop-column method: A random forest regressor is run, one descriptor is removed, the reduced training set is retrained, and this is repeated for the other descriptors. This calculates whether the remaining descriptors can replace the removed one.
Random forest classifier with drop-column method: Similar to the above, but using a random forest classifier.
Logistic regression: A statistical model that uses a logistic function to model a binary-class training set. The SAGA solver is used as it handles L1 regularization very well.
Maximal Information Correlation: Linear and non-linear association between a descriptor and the fault status, regardless of correlation among descriptors.
Ridge regression: Regression algorithm using linear least squares and L2 regularization.
Lasso regression: Regression algorithm using linear least squares and L1 regularization.
Randomized lasso regression: Lasso computed on each resampling of the training set.


The dataset used in this work was obtained from customers of Brüel & Kjær Vibro and is subject to a confidentiality agreement between Brüel & Kjær Vibro and its customers.

4 Results and Discussion

4.1 Planetary Stage Fault

The turbine owner suspected a planetary defect around April 13, 2016, see Fig. 1. The owner finally replaced the gearbox on July 31, 2017, and no faults have been detected since then. There are 173 data points without faults and 170 data points containing faults during these approximately two years, which provides an excellent, balanced training set for Descriptor Ranking. The broadband descriptors used in this comparison are Envelope Condition Unit (ECU), High-Frequency Band Pass (HFBP), ISOA, MESH, Residual Value (RV), Tooth Mesh Frequency (TMF), High-Frequency Crest Factor (HFCF), High-Frequency Peak (HFPK), Order Magnitude (MA), and other bandpass measurements in selected ranges. Some of these descriptors were explained in [1], and others are Brüel & Kjær Vibro proprietary descriptors. Most of the descriptors are measured in acceleration, but some are measured in velocity. High frequency in this work is typically defined as above 1000 Hz.

Fig. 1 The trend of the planetary defect descriptors over four years using Automatic Diagnosis. RingDF_AutomaticDiagnosis is an Automatic Diagnosis descriptor, while RV_Planetary and RV_IntermediateStage are the residual value after the spectrum is resampled using a low-speed shaft and an intermediate-speed shaft, respectively


Fig. 2 The importance of various descriptors to detect a planetary defect, calculated using the Random Forest Regressor algorithm

ECU is the vibration's effective value after enveloping the high-frequency range into the frequency ranges of the bearing defects. ISOA is the effective value of the acceleration measurement according to the vibration standard ISO 10816. MESH is the effective value of the vibration between the first and the third gear mesh frequencies. RV is the effective value between the gear tooth mesh frequencies. TMF is the tooth mesh frequency; in this work, TMFs are measured up to the fifth TMF (5TMF). The first- and second-order magnitudes (1MA and 2MA) of shafts in the gearbox and generator are also measured. Meanwhile, the Automatic Diagnosis descriptors used for comparison are the sums of peaks of the sun defect (SunDF), planet defect (PlaDF), and ring defect (RingDF) sidebands, resampled with the carrier shaft (Carr) and the low-speed shaft (Lss). The result from Descriptor Ranking, shown in Fig. 2, is that the Automatic Diagnosis descriptors outperform the broadband descriptors. Moreover, the Automatic Diagnosis descriptors indicate that the planetary fault is a ring defect, rather than a sun or planet defect.

4.2 Second Stage Wheel Fault

A second stage wheel fault was suspected on October 1, 2017, as seen in Fig. 3. The turbine owner finally replaced the gearbox on September 24, 2018. However, a pinion fault started to develop after the maintenance. As there are about 50 more data points during the fault period than during the no-fault period, around 50 no-fault data points need to be added to balance the training set.


Fig. 3 The trend of the 2nd stage wheel fault descriptors over 4.5 years calculated using Automatic Diagnosis. RV is multiplied by five for visualization purposes as the original RV values are too small

We should not take data points from the period when the wheel might already have started failing; therefore, the first 50 points in 2015, i.e., before the fault is seen, were taken. This provides a balanced dataset for Descriptor Ranking with 156 no-fault data points and 154 fault data points. Figure 4 presents the Descriptor Ranking results. Seventeen descriptors were ranked based on the calculated importance. Some broadband descriptors from Fig. 2 were used here, along with additional ones, such as low-frequency kurtosis (LFKurt), low-frequency peak-to-peak (LFPK), the effective value of a certain low-frequency bandpass according to the vibration standard ISO 10816-21 (ISOA21), high-frequency kurtosis (HFKurt), and low-frequency crest factor (LFCF). In general, the Automatic Diagnosis descriptor 2WheelFault (2nd stage wheel fault) is much more effective in detecting the fault than the other 16 descriptors. This is because the Automatic Diagnosis descriptor is a narrow-band descriptor, as explained in [1], and the fault in the training set is not a 2PinionFault (2nd stage pinion fault). A second stage pinion fault developed after the maintenance, as seen in Fig. 3; therefore, the residual value (RV) increased after maintenance. RV is normally considered the best broadband descriptor to detect this fault [1, 4]. However, RV cannot differentiate between a wheel fault and a pinion fault because it only represents the energy between tooth mesh frequencies.

5 Conclusion

The effectiveness of descriptors can be determined using Descriptor Ranking. The Automatic Diagnosis descriptors in this study outperform the other descriptors. Any newly proposed descriptor can be evaluated using Descriptor Ranking.


Fig. 4 The importance of various descriptors to detect the 2nd stage wheel fault, calculated using the Random Forest Regressor algorithm

Having effective descriptors in condition monitoring could help reduce false alarms and increase the accuracy of fault detection.

References 1. Zhu J et al (2014) Survey of condition indicators for condition monitoring systems. In: Annual conference of the prognostics and health management society, vol 5 2. Saputra D, Marhadi K (2020) On automatic fault diagnosis in wind turbine condition monitoring. In: PHM society European conference, vol 5, no 1 3. Breiman L (2001) Random forests. Mach Learn 45(1):5–32 4. Skrimpas GA et al (2017) Residual signal feature extraction for gearbox planetary stage fault detection. Wind Energy 28(8):1389–1404

Gearbox Condition Monitoring Using Sparse Filtering and Parameterized Time–Frequency Analysis Shiyang Wang, Zhen Liu, and Qingbo He

Abstract It is challenging to monitor the condition of a gearbox operating under variable conditions. Because such variable conditions are inherently non-stationary, a time–frequency analysis (TFA) approach is well suited to analyzing the frequency components of interest. As an emerging TFA method, parameterized TFA uses additional parameters to parameterize the kernel functions, so a complex signal can be characterized more accurately. However, estimating the instantaneous frequency under the interference of background noise remains an issue. This paper presents a method to solve this problem using sparse filtering, which is capable of enhancing a particular desired feature. After sparse filtering, parameterized TFA is used to achieve signal decomposition and clarify the signal spectrum structure. Finally, the interfering frequencies are extracted and removed, and the processed signals are used to monitor the gearbox. The effectiveness of this method is verified by simulated and experimental signal analysis in gearbox condition monitoring. Keywords Gearbox condition monitoring · Time–frequency analysis · Sparse filtering · Signal decomposition

1 Introduction

In modern industry, gearboxes are among the vital components of rotating machinery [1]. A gearbox includes components such as shafts, gears and bearings. Gears account for 60% of the damaged parts in a gearbox, and the failure of the gear itself has the greatest impact on the system [2].

S. Wang
Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
Z. Liu · Q. He (B)
State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021
L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_11


Therefore, faults must be diagnosed and eliminated before an accident occurs. If a gear in the gear train fails, the amplitude of the meshing frequency and its harmonic components will rise, and modulation sidebands will appear around these frequencies [3]. These impulsive signal components are weak, so identifying them is a very meaningful task. In practice, the gearbox generally operates under variable conditions. Recently, by adopting a matched kernel function for a pre-defined signal model, parameterized TFA has become applicable. Yang et al. [4] proposed the generalized parametric time–frequency analysis method, which extends the kernel form of parametric time–frequency analysis from multiple specific forms to arbitrary ones. This method is useful for analyzing strongly time-varying signals. Kerber et al. [5] proposed a chirplet atom with polynomial local frequency delay to analyze the corresponding non-stationary signal. Therefore, parameterized TFA is used here to achieve signal decomposition and clarify the signal spectrum structure. However, identifying a specified varying frequency under the interference of background noise remains an issue for parameterized TFA. This paper presents a method to solve this problem using sparse filtering. Sparse filtering is one of the most effective unsupervised learning algorithms. Zennaro and Chen [6] emphasized that sparse filtering is an optimized algorithm that contains only one hyperparameter. Lei et al. [7] used sparse filtering to learn representations of impulsive features. Jia et al. [8] optimized the cost function, e.g., the lp/lq norm, to enhance weak impulsive signals. Recent research has shown that sparse filtering can enhance a particular desired feature, which achieves the desired effect in gearbox condition monitoring. In the proposed study, sparse filtering is first used to enhance the particular desired feature. After that, parameterized TFA is used to represent the desired signal components. This is beneficial for removing the interfering frequencies and extracting the useful components for gearbox condition monitoring. In the following, Sect. 2 presents the theoretical background related to this study, Sect. 3 describes the proposed method, Sects. 4 and 5 present the simulation and experimental results, respectively, and Sect. 6 gives some concluding remarks.

2 Theory Background

2.1 Sparse Filtering

Different from usual learning algorithms that model the training data distribution, sparse filtering is an unsupervised algorithm that directly analyzes the feature distribution of the training data under the guidance of so-called good-feature properties. The objective function is optimized with only one tunable parameter. Strictly speaking, sparse filtering is just a feature extractor [9].


In terms of its network structure and feature matrix, the sparse filtering algorithm can be regarded as a neural network with a two-layer structure. For a stochastic input $x = (x_1, x_2, \ldots, x_n)^T$, after multiplication by the weight matrix $W = (w_{i,j}) \in \mathbb{R}^{m \times n}$ of the network, the vector $y = (y_1, y_2, \ldots, y_m)^T$ is obtained as

$$ y = W x \tag{1} $$

Then an activation function $g$ is applied to $y$, and the activated eigenvector $f \in \mathbb{R}^m$ is obtained as

$$ f = g(y) = g(W x) \tag{2} $$

The soft-absolute function $g(z) = \sqrt{\varepsilon + z^2} \approx |z|$ is used, where $\varepsilon$ is a small positive constant. Consider a training set containing $n_s$ samples $S = \{x^{(1)}, x^{(2)}, \ldots, x^{(n_s)}\}$, and record $f^{(i)} = \big(f_1^{(i)}, f_2^{(i)}, \ldots, f_m^{(i)}\big)^T$, the linearly transformed and activated feature vector, where

$$ f^{(i)} = g\big(W x^{(i)}\big), \quad i = 1, 2, \ldots, n_s \tag{3} $$

The selection of $W$ thus strongly determines the quality of the feature matrix $F$. In addition, $F$ should satisfy the following three criteria [10]: (1) population sparsity (sparse features per example); (2) lifetime sparsity (sparse features across examples); and (3) high dispersal (uniform activity distribution). The objective function is obtained by the following three steps:

1. Row normalization: change the matrix $F$ into $\tilde{F} = (\tilde{f}_j^{(i)})_{m \times n_s}$, where $\tilde{f}_j^{(i)} = f_j^{(i)} / \| f_j \|_2$.
2. Column normalization: change the matrix $\tilde{F}$ into $\hat{F} = (\hat{f}_j^{(i)})_{m \times n_s}$, where $\hat{f}_j^{(i)} = \tilde{f}_j^{(i)} / \| \tilde{f}^{(i)} \|_2$.
3. Take the sum of the absolute values of the matrix elements:

$$ L(W) = \sum_{i=1}^{n_s} \big\| \hat{f}^{(i)} \big\|_1 \tag{4} $$

After the above three steps, $L(W)$ is the objective function, and the sparse filtering algorithm minimizes it, i.e., $\min_W L(W)$. Its minimizer is the optimal weight matrix.
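As an illustration of the objective above, the following NumPy sketch implements L(W) (soft-absolute activation, row and column l2 normalization, l1 sum) and minimizes it with a general-purpose optimizer. The toy data, the feature dimension and the use of a numerical-gradient L-BFGS optimizer are assumptions for illustration; practical implementations use analytic gradients.

```python
import numpy as np
from scipy.optimize import minimize

def sparse_filtering_objective(w_flat, X, m, eps=1e-8):
    """L(W): soft-absolute activation, row then column l2 normalization,
    then the l1 sum of the normalized feature matrix (Eq. (4)).
    X has shape (n, n_s): n input dimensions, n_s training samples."""
    n, n_s = X.shape
    W = w_flat.reshape(m, n)
    F = np.sqrt((W @ X) ** 2 + eps)                                # f = g(Wx)
    F_row = F / np.linalg.norm(F, axis=1, keepdims=True)           # lifetime (row) normalization
    F_hat = F_row / np.linalg.norm(F_row, axis=0, keepdims=True)   # per-sample (column) normalization
    return np.abs(F_hat).sum()

# Toy usage with random data (illustrative only; slow without analytic gradients).
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 100))       # 20-dim inputs, 100 samples
m = 8                                    # number of learned features
w0 = rng.standard_normal(m * X.shape[0])
res = minimize(sparse_filtering_objective, w0, args=(X, m), method="L-BFGS-B")
W_opt = res.x.reshape(m, X.shape[0])     # learned weight matrix
```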


2.2 Parameterized TFA

The Fourier transform yields an explicit frequency spectrum under stationary conditions. However, for signals whose frequency varies with time, the Fourier transform is of limited use [4]. Recently, by adopting a matched kernel function for a pre-defined signal model, parameterized TFA has become more effective in characterizing the real time–frequency pattern of a non-stationary signal. As one of the parameterized TFA methods, the chirplet transform uses a slanted line with an arbitrary tilt angle in the time–frequency domain to analyze a linear frequency-modulated signal. The essence of the chirplet transform is to project a signal onto a set of linearly frequency-modulated wavelet functions. The chirplet transform is defined as

$$ C_{wt}(t, \omega; c) = A(t) \int_{-\infty}^{+\infty} \bar{x}(\tau)\, g_\sigma^{*}(\tau - t)\, \exp(-j\omega\tau)\, d\tau \tag{5} $$

with

$$ \Phi_c^{R}(\tau) = \exp(-j c \tau^2 / 2), \quad \Phi_{c,t}^{M}(\tau) = \exp(j c t \tau), \quad g_\sigma(t) = \frac{1}{\sqrt{2\pi\sigma}} \exp\!\big(-t^2 / 4\sigma\big), $$
$$ \bar{x}(\tau) = x(\tau)\, \Phi_c^{R}(\tau)\, \Phi_{c,t}^{M}(\tau), \quad A(t) = \exp\!\big[-j\big(c t^2 / 2 - \omega t\big)\big] \tag{6} $$

The quantities in (6) are as follows: $x(\tau)$ is the signal; $\Phi_c^{R}(\tau)$ is the frequency rotation operator with linear frequency modulation parameter $c$, which rotates the signal $x(\tau)$ in the time–frequency plane by an angle $\theta$ with $\tan(\theta) = -c$; $\Phi_{c,t}^{M}(\tau)$ is a frequency shift operator that translates the rotated signal $x(\tau)\Phi_c^{R}(\tau)$ in the time–frequency plane by a frequency increment of $ct$ near time $t$; $A(t)$ is the amplitude term; and $g_\sigma(t)$ is the Gaussian window function.
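A practical way to view the chirplet transform is as de-chirping (multiplication by the frequency rotation operator) followed by a Gaussian-windowed STFT. The sketch below follows this simplified view and omits the frequency-shift and amplitude terms of Eq. (6); the chirp rate, window parameters and test signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft, get_window

def chirplet_transform(x, fs, c, nperseg=256, noverlap=224):
    """Simplified chirplet transform: de-chirp with the frequency rotation
    operator exp(-j*c*t^2/2), then take a Gaussian-windowed STFT."""
    t = np.arange(len(x)) / fs
    rotated = x * np.exp(-1j * c * t ** 2 / 2)            # frequency rotation operator
    win = get_window(("gaussian", nperseg / 8), nperseg)  # Gaussian window g_sigma
    f, tt, S = stft(rotated, fs=fs, window=win, nperseg=nperseg,
                    noverlap=noverlap, return_onesided=False)
    return f, tt, S

# Illustrative linear-chirp test signal: instantaneous frequency 100 + 100*t Hz.
fs = 5120
t = np.arange(0, 2, 1 / fs)
x = np.cos(2 * np.pi * (100 * t + 50 * t ** 2))
f, tt, S = chirplet_transform(x, fs, c=2 * np.pi * 100)  # chirp rate matched to 100 Hz/s
```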

3 The Proposed Signal Processing Method

Integrating the above theory, we propose a method combining sparse filtering and parameterized TFA. As shown in the scheme in Fig. 1, the gearbox signal x(t) is first input into the sparse filtering constructor to find a specific atom, and the filtered signal enhances a specified feature. Based on the spectrum information provided by this step, parameterized TFA is then used to achieve signal decomposition and clarify the signal spectrum structure. Finally, the interfering frequencies are extracted and removed, and the processed signals are better suited to monitoring the gearbox.


Fig. 1 Scheme of the presented method

4 Simulation Results

4.1 Signal Structure

To represent a running gearbox, we used two gear signals to simulate its condition. The first gear signal has a meshing frequency of 770 Hz and a fault frequency of 70 Hz. The second gear signal has a meshing frequency of 800 Hz and a fault frequency of 90 Hz. The two signals are superimposed, and white noise is added at a signal-to-noise ratio of 0 dB. The spectrum and envelope spectrum of the constructed signal are shown in Fig. 2; the frequency components are mixed together.
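For reference, the following sketch generates a signal of the kind described above: two amplitude-modulated meshing components (770 Hz modulated at 70 Hz, 800 Hz modulated at 90 Hz) plus white noise at 0 dB SNR. The sampling rate, modulation depth and duration are assumptions, as they are not specified above.

```python
import numpy as np
from scipy.signal import hilbert

fs = 20480                                   # sampling rate (assumed, not stated above)
t = np.arange(0, 1.0, 1 / fs)

# Gear 1: 770 Hz meshing frequency modulated by a 70 Hz fault frequency.
# Gear 2: 800 Hz meshing frequency modulated by a 90 Hz fault frequency.
s1 = (1 + 0.5 * np.cos(2 * np.pi * 70 * t)) * np.cos(2 * np.pi * 770 * t)
s2 = (1 + 0.5 * np.cos(2 * np.pi * 90 * t)) * np.cos(2 * np.pi * 800 * t)
clean = s1 + s2

# White noise at 0 dB SNR (noise power equal to signal power).
rng = np.random.default_rng(0)
noise = rng.standard_normal(len(t))
noise *= np.sqrt(clean.var() / noise.var())
x = clean + noise

# Envelope spectrum via the analytic signal (Hilbert transform).
envelope = np.abs(hilbert(x))
env_spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
```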

Fig. 2 Simulated gearbox signal x(t): a x(t) waveform; b spectrum; c envelope spectrum


Fig. 3 Results of sparse filtering: a signal, b spectrum, c envelope spectrum of the first signal (meshing frequency 770 Hz); d signal, e spectrum, f envelope spectrum of the second signal (meshing frequency 800 Hz)

4.2 Signal Processing by Sparse Filtering

The above signal contains information from multiple sets of gears whose meshing frequencies are close to each other, with the fault frequencies modulated onto the meshing frequencies. The information associated with two similar meshing frequencies is therefore difficult to distinguish. Sparse filtering can separate the signals according to the selection of the atoms and clarify the characteristics of the signals; by selecting a specific atom, it enhances the corresponding features. In Fig. 3, the meshing frequencies of the two sets of signals are 770 and 800 Hz, respectively. The signals are completely separated, and the denoising is effective. Sparse filtering thus has the capacity to detect impulsive components and identify different periodic components.

4.3 Signal Extraction by Parameterized TFA

Parameterized TFA is well suited to non-stationary working conditions, so it is used as a feasible method for signal extraction. However, it requires the relevant spectral components to be clarified first. Based on the two signals separated by sparse filtering, the results extracted by parameterized time–frequency analysis are shown in Fig. 4. This example shows that the corresponding feature signal is enhanced by sparse filtering and that parameterized time–frequency analysis then achieves signal decomposition and clarifies the signal spectrum structure. This method can be used to monitor the gearbox.


Fig. 4 Results of parameterized TFA: a signal, b spectrum, c envelope spectrum of the first signal (meshing frequency 770 Hz); d signal, e spectrum, f envelope spectrum of the second signal (meshing frequency 800 Hz)

5 Experiments

5.1 Data Description

To prove the validity of the proposed method, this study analyzes a set of actual gearbox signals. The measured signal duration is ten seconds, and the sampling frequency is 25,600 Hz. A stable five-second segment of the signal is taken for analysis. As shown in Fig. 5, the raw signal shows some large spectral components, but the envelope spectrum does not reveal valuable information, so no conclusion can be drawn without further processing.

5.2 Signal Processing

Figure 5 shows the signal processed by sparse filtering. We can clearly find 1209 Hz and its harmonic components in the spectrum and the envelope spectrum. After confirmation with the manufacturer, this is the resonant frequency of the gearbox, i.e., an interference frequency of no diagnostic value. Figure 6a shows the time–frequency diagram, in which the dotted line marks the components to be removed. By directly subtracting the interfering components, other latent frequency components are highlighted in the time–frequency diagram, as shown in Fig. 6b. Parameterized TFA is then used again to extract the useful signal component. Figure 6c shows an extracted latent time-varying frequency component around 6700 Hz. This time-varying frequency provides more information related to the gearbox condition, which is beneficial for further condition monitoring. Combining sparse filtering and parameterized TFA, the components carrying the main interference energy can be removed and other latent frequency components can be highlighted.


Fig. 5 Raw signal and result of sparse filtering: a signal, b spectrum, c envelope spectrum of raw signal; d signal, e spectrum, f envelope spectrum of the processed signal by sparse filtering

Fig. 6 Results of parameterized TFA: a preprocessed time–frequency diagram, b time–frequency diagram after interference removal, c time-varying frequency after interference removal

For signals with periodic repetitive pulses, this method lays a good foundation for subsequent research in related directions and makes data analysis possible in the presence of noisy components.

6 Conclusion

This paper proposed a method combining sparse filtering and parameterized TFA for gearbox condition monitoring. With a specific atom, sparse filtering is able to distinguish different signal components, especially periodic and impulsive ones, but it cannot preserve the amplitude of the signal. Parameterized TFA is therefore used to achieve signal decomposition and clarify the signal spectrum structure, which preserves the original information and can handle time-varying signals. The results of this study verify the effectiveness of the combined method, which can estimate the instantaneous frequency under the interference of background noise, achieve signal decomposition, and clarify the signal spectrum structure. It lays a good foundation for subsequent research on gearbox condition monitoring and fault diagnosis.


Acknowledgements This work was supported by Joint Fund of Equipment Pre-research and Ministry of Education under Grant 6141A02022141, and the National Program for Support of Top-Notch Young Professionals.

References 1. He Q, Kong F, Yan R (2007) Subspace-based gearbox condition monitoring by kernel principal component analysis. Mech Syst Sig Process 21(4):1755–1772 2. Randall RB (2011) Vibration-based condition monitoring: industrial, aerospace and automotive applications. Wiley 3. Lin J, Zuo MJ (2003) Gearbox fault diagnosis using adaptive wavelet filter. Mech Syst Sig Process 17(6):1259–1269 4. Yang Y et al (2014) General parameterized time-frequency transform. IEEE Trans Sig Process 62(11):2751–2764 5. Kerber F et al (2010) Attenuation analysis of lamb waves using the chirplet transform. EURASIP J Adv Sig Process 2010(5):1–6 6. Zennaro FM, Chen K (2017) Towards understanding sparse filtering: a theoretical perspective. Neural Netw Off J Int Neural Netw Soc 98:154 7. Lei Y et al (2016) An intelligent fault diagnosis method using unsupervised feature learning towards mechanical big data. IEEE Trans Ind Electron 63:3137–3147 8. Jia X et al (2018) Sparse filtering with the generalized lp/lq norm and its applications to the condition monitoring of rotating machinery. Mech Syst Sig Process 102:198–213 9. Xiong W, He Q, Ouyang K (2018) Feature-difference sparse filtering for bearing health monitoring. In: 2018 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Houston, TX, USA, pp 1–5 10. Ngiam J et al (2011) Sparse filtering. In: Advances in neural information processing systems, pp 1125–1133

Rotating Machinery Fault Diagnosis with Weighted Variational Manifold Learning Quanchang Li, Xiaoxi Ding, Wenbin Huang, and Yimin Shao

Abstract The transient characteristics caused by partial faults of rotating machinery provide technical evidence for health diagnosis, but they are unavoidably overwhelmed by noise in complex modulation bands. Usually, only the optimal component at a selected scale is filtered and extracted as the evidence for fault diagnosis, which may ignore important features distributed at other scales. Focusing on the feature distribution over the whole band, this paper proposes a novel weighted variational manifold learning (WVML) method for transient feature reinforcement and fault diagnosis from a multi-scale point of view. First, a series of intrinsic mode functions containing different modulated frequency information is automatically obtained via an improved variational mode decomposition (IVMD) approach, in which a signal difference average criterion is introduced to determine the appropriate decomposition level. Time–frequency manifold (TFM) learning is then used to sequentially mine their intrinsic structures in the corresponding time–frequency domain. The reinforced signal for each variational mode is obtained via phase preservation and a series of inverse transforms, which suppresses the in-band noise. Combining the reinforced mode signals with different weight coefficients, the reconstructed waveform is finally synthesized. In this manner, the desired features at all scales can be soundly reinforced, with the mode manifolds mined and dynamically weighted in a self-learning way. The proposed WVML scheme is verified on an experimental gear defect signal, and the result reveals that this method yields excellent accuracy in the health diagnosis of rotating machinery. Keywords Variational mode decomposition · Manifold learning · Multi-scale analysis · Rotating machinery · Fault diagnosis

Q. Li · X. Ding (B) · W. Huang · Y. Shao
State Key Laboratory of Mechanical Transmission, Chongqing University, No.174 Shapingba Street, Shapingba District, Chongqing 400044, China
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021
L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_12


1 Introduction

Rotating machinery plays a crucial role in industrial manufacturing, and its partial faults may develop persistently and then lead to serious accidents and economic losses [1, 2]. As a result, it is indispensable to diagnose the health of key parts of rotating machinery such as bearings. Because the periodic transient impulses in the acquired signals carry pivotal physical knowledge concerning faulty bearing dynamics [3], vibration analysis is the most widespread technique used in rotating machinery health diagnosis. A variety of signal processing approaches focus on reinforcing defective transient features and suppressing noise. Band-pass filtering [4] was the first method used to eliminate noise by removing frequency components outside a given frequency range, but it cannot distinguish fault-related components from in-band noise in the same frequency band. As a method for selecting a frequency band of a particular range, the wavelet transform (WT) [5] offers good scale variation characteristics in frequency decomposition, but the choice of the mother wavelet and the redundancy of the wavelet impose certain application restrictions. Empirical mode decomposition (EMD) [6] works on the time scale of the data sequence, and its effectiveness depends largely on the method for finding extreme points, the interpolation of the extreme points into the envelope, and the stopping criterion. In addition, the robustness of EMD is limited by its lack of a rigorous mathematical foundation. Building on EMD, a newer method called variational mode decomposition (VMD) [7] has been widely introduced in fault identification and diagnosis. Liu et al. [8] applied VMD with a CNN to extract planetary gear features and achieved 100% recognition. Bagheri et al. [9] proposed a VMD-based modal identification algorithm for engineering structures and demonstrated its efficacy. However, VMD itself still has the following issues. First, the number of decomposition levels in VMD needs to be given in advance, and it is difficult to determine appropriately from prior knowledge. Second, VMD retains a denoising effect similar to the Wiener filter to some extent, but it is powerless to remove the in-band noise of the signal. Recently, a data-driven approach called time–frequency manifold (TFM) learning, proposed by He et al. [10], has achieved excellent feature extraction by mining the intrinsic structures contained in the nature of different faults. Different from normal manifold learning, TFM learning takes the non-stationary properties hidden in the high-dimensional time–frequency distribution into consideration and gives more effective results. Many TFM-based methods, such as resonance-based TFM feature extraction [11], TFM-based fault classification [12], and TFM sparse reconstruction [13], have been studied and have also produced good results. However, on the one hand, the large computational cost of nonlinear manifold learning greatly restricts further applications to multi-modulation feature extraction in practical operations. On the other hand, TFM learning cannot satisfy the requirement of full-band structure mining owing to its amplitude loss.


Given its potential value in realistic rotating machinery fault diagnosis, a novel weighted variational manifold learning (WVML) method is proposed to accomplish multi-scale transient feature extraction from complex modulated signals in a data-driven way. Thanks to the excellent performance of VMD in modulated signal decomposition, band-limited intrinsic mode functions (IMFs) reflecting different modulation information are first obtained using an improved VMD method, so as to ensure that each band is not affected by adjacent feature distributions. Then, the manifold structures of the decomposed IMFs are mined and enhanced by lifting each IMF to the time–frequency plane for TFM learning. The short-time Fourier transform (STFT) and its inverse are used to map the signal into the time–frequency domain and back to the time domain. The final reconstructed signal is acquired by synthesizing the learned manifold signals with different weights that dynamically adjust the proportion of each IMF to emphasize the fault-relevant frequency components. These merits make it possible for both the VMD and TFM techniques to process the full scale efficiently, acquire comprehensive modulation information, and provide a more accurate fault diagnosis result.

2 Weighted Variational Manifold Learning

2.1 Variational Mode Decomposition Based on SDA

As a novel signal decomposition approach, VMD decomposes a given signal y(t) into K band-limited IMFs by continually updating the center frequencies ω_k and the corresponding bandwidths so as to minimize the sum of the estimated mode bandwidths [14]. Different from existing signal decomposition methods such as EMD or WT, VMD redefines the physical meaning of the IMFs as AM-FM components with a narrow-band attribute. Under this assumption, the different sub-bands modulated by the meshing frequency, resonant frequency, and so on in rotating machinery can be separated. To obtain each IMF and the corresponding bandwidth, the following constrained variational problem needs to be solved:

$$ \min_{\{u_k(t)\},\{\omega_k\}} \sum_{k} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 \quad \text{s.t.} \quad \sum_{k} u_k(t) = y(t) \tag{1} $$

where {u_k(t)} := {u_1(t), u_2(t), …, u_K(t)} and {ω_k} := {ω_1, …, ω_K} are, respectively, the sets of decomposed components and their corresponding center frequencies. In this way, K modulated signals with different frequency ranges can be obtained. However, the VMD parameter K needs to be determined in advance, which is important but difficult. In the framework of the proposed WVML method, to avoid spurious components caused by an inaccurate setting of the VMD decomposition level, a signal difference average (SDA) criterion [15] is introduced.


The SDA criterion is used to adaptively determine the number of IMFs in VMD, that is, to select the parameter K. SDA reflects the similarity between the sum of the IMFs and the raw signal, and is calculated as

$$ \gamma = \frac{1}{N_t} \sum_{i=1}^{N_t} \big( y_{\mathrm{IMFs}}(i) - y_s(i) \big) \tag{2} $$

where N_t is the total number of points in the signal sequence, y_IMFs is the signal reconstructed by summing all the IMFs, and y_s is the original signal. The smaller the SDA value, the higher the similarity between the IMFs and the signal. The SDA-based determination can be described as follows (see the sketch after this list):

1. For a given range k ∈ [l, u], each value of k is input in turn to decompose the signal y_s.
2. Acquire the reconstructed signal y_IMFs by summing all the IMFs.
3. Calculate the SDA γ of each reconstructed signal y_IMFs and obtain the SDA curve. Choose the most reasonable k value as the optimal decomposition level K_opt.
4. Finally, the waveforms of the K_opt IMFs are obtained, which contain different modulated distributions in the frequency domain.
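A sketch of this SDA-based selection loop is given below. The function vmd_decompose is a hypothetical placeholder for any VMD implementation returning k IMFs as rows of an array; the search range and the use of the smallest absolute SDA value are illustrative choices.

```python
import numpy as np

def sda(imfs, y):
    """Signal difference average (Eq. (2)) between the IMF sum and the raw signal."""
    return np.mean(np.sum(imfs, axis=0) - y)

def select_k(y, vmd_decompose, k_range=range(2, 9)):
    """Pick the decomposition level with the smallest |SDA|.
    vmd_decompose(y, k) is a hypothetical placeholder returning an array (k, len(y))."""
    scores = {k: abs(sda(vmd_decompose(y, k), y)) for k in k_range}
    k_opt = min(scores, key=scores.get)
    return k_opt, scores
```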

2.2 Time–Frequency Manifold Learning

The TFM signature reflects the nonlinear manifold characteristics of non-stationary signals in the time–frequency domain. It is usually mined by operating manifold learning on a succession of TFDs in the reconstructed phase space [10]. In this study, the TFM signature of each IMF is learned on its TFD over a clear frequency range, so that different modulated bands are treated according to their different distribution characteristics. A TFM is generally learnt through the following procedure. A time series analysis technique called phase space reconstruction (PSR) is first applied to rebuild the manifold. For a given IMF x(t) with N_t points, its ith phase point vector in the m-dimensional phase space is

$$ X_i^m = [x_i, x_{i+\tau}, \ldots, x_{i+(m-1)\tau}] \tag{3} $$

where m is the embedding dimension, x_i is the ith point of the IMF x(t), and τ is the time delay. Then, the vectors {X_i^m | i = 1, 2, …, n} are aligned in chronological order to form a time-dependent matrix in the phase space

$$ P = \big[ X_1^m, X_2^m, \ldots, X_n^m \big]^T \in \mathbb{R}^{m \times n} \tag{4} $$

$$ P(j, h) = x_{h + (j-1)\tau} \tag{5} $$


where τ = 1, n = N_t − m + 1, j ∈ [1, m] and h ∈ [1, n]. Then, phase space tracking is operated in the time–frequency domain so as to transform each row of the time-dependent matrix into a TFD by the discrete STFT

$$ S_j(v, h) = \sum_{l=-\infty}^{\infty} P_j[l]\, w[h - l]\, e^{-i \frac{2\pi}{L} v l} \tag{6} $$

where h is the position on the time axis, v is the position on the frequency axis, L is the total number of discrete frequencies, and w(h) is a short window sliding along the time axis. The obtained S_j(v, h) can be regarded as a series of complex matrices of size L × n, where L is the total number of frequency points and n is the total number of time points. Consequently, m TFDs containing only the amplitude matrices A_j, together with their corresponding phase parts θ_j, can be obtained from S_j(v, h), each of which can also be represented as a matrix. By applying a manifold learning algorithm in the reconstructed phase space, the TFM signature can be obtained from the previous TFDs. The local tangent space alignment (LTSA) algorithm is applied in this paper to compute d TFM signatures. Thus, the latent structure reflecting the desired pattern of the IMF can be automatically extracted with the noise removed, in particular with the in-band noise suppressed to some extent. Furthermore, the waveform of each TFM signature in the time domain can be recovered via the inverse STFT.
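The following sketch outlines the phase space reconstruction of Eqs. (3)-(5) and the row-wise STFT of Eq. (6), producing the stack of TFDs (amplitude and phase) fed to the manifold learning step. The embedding dimension and window parameters are illustrative; the LTSA step itself (available, for example, in scikit-learn's LocallyLinearEmbedding with method="ltsa") is not shown.

```python
import numpy as np
from scipy.signal import stft

def phase_space_matrix(x, m, tau=1):
    """Build the m x n time-dependent matrix P of Eqs. (4)-(5), n = N_t - (m-1)*tau."""
    n = len(x) - (m - 1) * tau
    return np.stack([x[j * tau: j * tau + n] for j in range(m)], axis=0)

def tfd_stack(x, m, fs, nperseg=128, noverlap=96):
    """Lift each row of the phase space matrix to a time-frequency distribution
    (amplitude and phase), as the input to manifold learning (e.g. LTSA)."""
    P = phase_space_matrix(x, m)
    amps, phases = [], []
    for row in P:
        _, _, S = stft(row, fs=fs, nperseg=nperseg, noverlap=noverlap)
        amps.append(np.abs(S))      # amplitude matrix A_j
        phases.append(np.angle(S))  # phase part theta_j (kept for reconstruction)
    return np.array(amps), np.array(phases)   # shapes (m, L, n_frames)
```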

2.3 Weighted Coefficient-Based Signal Reconstruction

As Ref. [10] shows, the manifold signature loses absolute amplitude information during matrix normalization; as a result, the maximum amplitude of each IMF becomes the same, which contradicts the fact that different IMFs contribute different weights to the signal. Therefore, a group of weight coefficients is introduced here to reconstruct the waveform more accurately. The Pearson correlation coefficient [13] is a statistical indicator used to measure the degree of correlation between two variables. For the decomposed kth IMF x_k(t) = [x_1, x_2, …, x_{N_t}] and the raw signal y(t) = [y_1, y_2, …, y_{N_t}], the cross-correlation coefficient is calculated as

$$ r_k = \frac{\sum_{i=1}^{N_t} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{N_t} (x_i - \bar{x})^2 \cdot \sum_{i=1}^{N_t} (y_i - \bar{y})^2}} \tag{7} $$

where $\bar{x}$ is the mean of x_k(t), $\bar{y}$ is the mean of y(t), and N_t is the total number of points in the sequence x_k(t). The resulting value r_k ranges from −1 to 1; the closer it is to 1, the more relevant the two variables are.


Based on the assumption that an output IMF component containing transient information related to the fault features has a high correlation with the raw signal, the coefficient r_k between any IMF x_k(t) and the raw signal y(t) is used to give the corresponding component a different proportion in the synthesized signal. Finally, the manifold signal is synthesized as

$$ \hat{y}(t) = \sum_{k=1}^{K_{\mathrm{opt}}} r_k \cdot x_k(t) \tag{8} $$
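A compact sketch of Eqs. (7)-(8) follows: each manifold-enhanced IMF is weighted by its Pearson correlation with the raw signal and the weighted components are summed. The array shapes are assumptions for illustration.

```python
import numpy as np

def weighted_reconstruction(manifold_signals, y):
    """Eqs. (7)-(8): weight each manifold-enhanced IMF by its Pearson
    correlation with the raw signal y, then sum the weighted components.
    manifold_signals: array of shape (K_opt, N_t); y: array of shape (N_t,)."""
    y_c = y - y.mean()
    weights = []
    for x_k in manifold_signals:
        x_c = x_k - x_k.mean()
        r_k = np.sum(x_c * y_c) / np.sqrt(np.sum(x_c ** 2) * np.sum(y_c ** 2))
        weights.append(r_k)
    weights = np.asarray(weights)
    return weights @ manifold_signals, weights   # synthesized signal and the r_k values
```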

2.4 Procedure of the WVML Method

Figure 1 illustrates the steps of the proposed WVML method. In this manner, the new synthesized signal can be further processed for fault diagnosis of rotating machinery components such as bearings and gears.

3 Experimental Study

In this section, a case of experimental gear data is analyzed to validate the advantages and effectiveness of the proposed WVML method in manifold mining for health diagnosis of rotating machinery.

3.1 Experimental Facility

A gearbox setup was built for the validation of the proposed WVML method. As Fig. 2 shows, it chiefly consists of a three-phase induction motor, two flexible couplings, a magnetic powder brake, and a two-stage gearbox. The two-stage gearbox has a total transmission ratio of 3.59, and the number of teeth of each gear is labeled in Fig. 2. A thin spalling fault with a length of 15 mm, width of 3 mm, and depth of 1 mm was seeded in the second-stage gear Z3. Four acceleration sensors are installed at the bearing plates of the gearbox. In addition, an industrial computer with an LMS acquisition system was employed to collect the vibration signal for subsequent processing and analysis. Figure 3 shows a photograph of the experimental data acquisition setup. The sampling frequency is 5120 Hz. To reduce the amount of calculation, a 4x down-sampling operation was performed in the later analysis. The vibration signal collected by the acceleration sensor installed at the bearing plate of gear Z3 is analyzed in this validation.


Fig. 1 Steps of the WVML method

3.2 Experimental Results

To verify the effect of the WVML method in practice, the vibration signal collected at 700 rpm and 60 Nm, with 5120 points, was analyzed as an example. Figure 4 shows the waveform and spectral characteristics of the raw spalling defect signal, in which inconspicuous transient impulses are submerged by heavy background noise in both Fig. 4a, b. In addition, the frequency components obtained by the Hilbert transform and the Fourier transform are shown in Fig. 4c, d, where the modulation frequency and harmonic frequencies are difficult to identify in the two frequency spectra.


Fig. 2 Schematic of the gearbox setup

Fig. 3 Photograph of the experimental setup

The first step of WVML is to decompose the raw signal into K_opt IMFs, each carrying its own modulated frequency information, through the variational solution. The value K_opt was determined based on the SDA criterion, and the resulting curve is plotted in Fig. 5a.


Fig. 4 Raw spalling defective signal: a waveform in time domain; b TFD; c amplitude spectrum; d envelope spectrum

Fig. 5 SDA curve and modes decomposition of gear defective signal

Within the range of effective decomposition levels, the minimum SDA value was acquired at k = 3, which corresponds to the optimal decomposition level. Figure 5b shows the frequency band separation of each IMF obtained by VMD. The three decomposed IMFs were then transformed into the time–frequency domain for manifold learning. The comparison of IMF2 and its reconstructed signal in Fig. 6 is taken as an example to illustrate TFM learning. From Fig. 6b, d, it can be observed that within the frequency range of [400, 640] the manifolds were successfully extracted from the raw TFD with the full-band noise suppressed. Furthermore, the manifold signal reconstructed by a series of inverse transforms shows clearer periods of transient impulses. All three manifold signals were then used for the final learned signal reconstruction. The final waveform and TFD, the amplitude spectrum, and the corresponding envelope spectrum are plotted in Fig. 7. Compared with the raw waveform in Fig. 4, the periodic features have been enhanced not only in the time domain but also in the time–frequency domain. From Fig. 7c, d, one can see that the meshing frequencies f_m1 and f_m2 in the amplitude spectrum,


Fig. 6 The comparison between IMF2 and its manifold signal: a raw signal of IMF2; b TFD of IMF2; c reconstructed manifold signal of IMF2; d TFM of IMF2

Fig. 7 Reconstructed spalling defective signal: a waveform in time domain; b TFM; c amplitude spectrum; d envelope spectrum

as well as the modulated rotating frequency f_r and its multi-harmonic frequencies in the envelope spectrum, are emphasized, which is significant for gear fault diagnosis. From this application to gear fault diagnosis, it can be concluded that, while preserving the full frequency band information, the manifold structure of the desired time–frequency impulses is significantly enhanced by the proposed WVML method, which also provides a more obvious spectral result for fault diagnosis.


4 Conclusion

In this paper, a novel time–frequency analysis method named weighted variational manifold learning (WVML) is proposed for the enhancement of complex modulated transient features. Different from existing methods, the WVML method focuses on multi-scale feature reconstruction with lossless recovery of defect information. By employing the mode self-separation theory in the framework of manifold learning, the complex modulation information embedded in a non-stationary signal can be soundly reinforced, with the mode manifolds mined and weighted in a self-learning way. Additionally, an SDA criterion is introduced for the determination of the optimal decomposition level. A practical gear defect signal is analyzed, and the result shows that the proposed method is effective in extracting complex multi-scale sideband information with enhanced transient features. Furthermore, it can be foreseen that the proposed WVML method supplies a suitable time–frequency architecture for multi-scale modulation analysis in which data mining can realize a greater potential.

References 1. Rai A, Upadhyay S (2016) A review on signal processing techniques utilized in the fault diagnosis of rolling element bearings. Tribol Int 96:289–306 2. Ding X, He Q (2017) Energy-fluctuated multiscale feature learning with deep convnet for intelligent spindle bearing fault diagnosis. IEEE Trans Instrum Meas 66(8):1926–1935 3. Chen X, Wang S, Qiao B, Chen Q (2018) Basic research on machinery fault diagnosis: past, present, and future trends. Front Mech Eng 13(2):264–291 4. Hussain S (2017) Fault diagnosis of gearbox using particle swarm optimization and second order transient analysis. J Vibr Acoust-Trans ASME 139(2):021015 5. Pandya D, Upadhyay S, Harsha S (2015) Fault diagnosis of high-speed rolling element bearings using wavelet packet transform. Int J Sig Imaging Syst Eng 6:390–401 6. Zvokelj M, Zupan S, Preil I (2016) EEMD-based multiscale ICA method for slewing bearing fault detection and diagnosis. J Sound Vib 370:394–423 7. Li F, Li R, Tian L et al (2019) Data-driven time-frequency analysis method based on variational mode decomposition and its application to gear fault diagnosis in variable working conditions. Mech Syst Sig Process 116:462–479 8. Liu C, Cheng G, Chen X, Pang Y (2018) Planetary gears feature extraction and fault diagnosis method based on VMD and CNN. Sensors 18(5):1523 9. Bagheri A, Ozbulut O, Harris D (2018) Structural system identification based on variational mode decomposition. J Sound Vib 417:182–197 10. He Q, Liu Y, Long Q, Wang J (2012) Time-frequency manifold as a signature for machine health diagnosis. IEEE Trans Instrum Meas 61(5):1218–1230 11. Yan J, Sun H, Chen H et al (2018) Resonance-based time-frequency manifold for feature extraction of ship-radiated noise. Sensors 18(4):936 12. He M, He D, Qu Y (2016) A new signal processing and feature extraction approach for bearing fault diagnosis using AE sensors. J Fail Anal Prev 16(5):821–827 13. Ding X, He Q (2016) Time-frequency manifold sparse reconstruction: a novel method for bearing fault feature extraction. Mech Syst Sig Process 80:392–413


14. Li Y, Cheng G, Liu C, Chen X (2018) Study on planetary gear fault diagnosis based on variational mode decomposition and deep neural networks. Measurement 130(8):94–104 15. Isham M, Leong M, Lim M et al (2018) Variational mode decomposition: mode determination method for rotating machinery diagnosis. J Vibroengineering 20(7):2604–2621

Recent Advances on Anomaly Detection for Industrial Applications

A Data-Driven Approach to Anomaly Detection and Health Monitoring for Artificial Satellites Takehisa Yairi, Yusuke Fukushima, Chun Fui Liew, Yuki Sakai, and Yukihito Yamaguchi

Abstract In the operation of artificial satellites, overall health monitoring of the system and detection of symptoms of potential troubles are very important. For about twenty years, we have studied data-driven health monitoring methods for artificial satellites, in which machine learning techniques are applied to satellite housekeeping data. In this paper, we review the basics of anomaly detection based on unsupervised learning and some results from early studies, and then discuss the lessons learned and the challenges to be tackled in data-driven health monitoring for artificial satellites. Keywords Anomaly detection · Data-driven health monitoring · Artificial satellite

1 Introduction

One of the most important tasks in the operation of artificial satellites is health monitoring, i.e., checking whether the system is working correctly or not. As modern satellite systems are getting more complicated and an increasing number of satellites are planned for launch in the near future, it is impossible or impractical for a limited number of experienced operators to monitor all satellites individually and carefully. Therefore, more reliable and inexpensive ways of health monitoring for satellites are demanded. The simplest and most primitive method of spacecraft health monitoring is so-called limit-checking, in which upper and lower limits of normal ranges are imposed on the variables (sometimes called parameters) to be monitored. Then, if any sensor measurement goes over a limit, an alarm goes off or a warning is issued.

T. Yairi (B) · C. F. Liew
Research Center for Advanced Science and Technology, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo, Japan
e-mail: [email protected]
Y. Fukushima · Y. Sakai · Y. Yamaguchi
Research Center for Advanced Science and Technology, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo, Japan
© Springer Nature Singapore Pte Ltd. 2021
L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_13


As limit-checking is easy for humans to understand and implement, it still remains the main monitoring method in spacecraft operation. It is not always easy, however, to decide appropriate threshold values for a tremendous number of variables. Besides, it is very hard to detect subtle early symptoms of potentially serious troubles by limit-checking. Rule-based [1] and model-based [2] approaches were studied and developed as more advanced health monitoring and diagnosis methods, which are expected to overcome the limitations of limit-checking. While those approaches have the advantage that they are able to detect unknown anomalies and diagnose the causes of detected faults through powerful inference capabilities, it is generally expensive to construct and maintain the rule bases or model bases because a lot of expert knowledge is required to do so. This prevents us from employing these advanced health monitoring methods for every small satellite, as a different satellite needs a different rule base or model, which is impractical from the viewpoint of cost. In these circumstances, the data-driven approach to the health monitoring of artificial satellites has drawn much attention recently [3–5]. It does not require expert knowledge, unlike the rule- and model-based approaches. Instead, this approach utilizes the large amount of past sensor data from spacecraft bus systems (i.e., housekeeping or HK data) as its main information source. The rising popularity of machine learning and artificial intelligence is another driving force. On the other hand, the data-driven approach does not work at all without data. Besides, in the health monitoring of artificial satellites (and other critical systems), we cannot take advantage of supervised learning techniques in most cases, as there is usually little past anomaly data in these applications. The authors have been studying and developing data-driven anomaly detection methods for artificial satellites for more than 15 years [6–9]. In the remainder of this paper, we present the lessons learned in this series of studies and discuss possible approaches to the challenges.

2 Data-Driven Health Monitoring for Artificial Satellites

2.1 Anomaly Detection by Machine Learning

The central part of data-driven health monitoring is the technique called anomaly detection or novelty detection, which has been developed in the machine learning research community. Anomaly detection differs from supervised classification in that the training data is composed of only normal samples, whereas supervised classification takes both normal and anomalous samples as its training data. This difference comes from the fact that anomaly data samples are usually unavailable or rare beforehand in actual health monitoring applications. The basic procedure of anomaly detection is as follows: (1) a statistical model of the normal behavior of the system is learned from training data; (2) an anomaly score for each test data sample is computed by evaluating it against the learned normal behavior model. To learn the normal behavior models, a variety of unsupervised learning algorithms, including dimensionality reduction, clustering and density estimation, can be employed.


We can understand how unsupervised learning works as an anomaly detector in several ways. In the following, we introduce three typical interpretations: probabilistic, geometrical and encoder-decoder.

(a) Probabilistic interpretation. A typical use of unsupervised learning is to estimate the probability density distribution of given data samples. Such methods are roughly classified into two types: parametric models, such as the multivariate Gaussian and the mixture of Gaussians, and non-parametric models, such as kernel density estimation. When we apply them to anomaly detection, we first learn the density distribution model of normal data samples and then compute the (log-)likelihood of the test samples. If the log-likelihood is significantly low, the sample is detected as an anomaly; the minus log-likelihood can therefore be used as the anomaly score.

(b) Geometrical interpretation. Another major topic in unsupervised learning is dimensionality reduction, represented by principal component analysis (PCA). Dimensionality reduction methods approximate the distribution of data in the originally high-dimensional space by lower-dimensional subspaces or manifolds. In the application to anomaly detection, we obtain a "normal" subspace representing the distribution of normal training data and then compute the distance of each test sample from the obtained normal subspace. If the distance is larger than a threshold, the sample is regarded as an anomaly. Similarly, clustering, another representative unsupervised learning method, can also be used for this purpose; in that case, we compute the distance from a test sample to the nearest cluster center.

(c) Encoder/decoder interpretation. An autoencoder is an unsupervised neural network, which can be viewed as a non-linear extension of PCA. It consists of an encoder, which converts original high-dimensional data into lower-dimensional data, and a decoder, which recovers the original dimensionality from the compressed data. The encoder and decoder are trained simultaneously so that they recover the original data with minimum loss. The idea of using an autoencoder for anomaly detection can be explained as follows: once the autoencoder is trained using only normal data samples, it will recover a given test data sample well if it is normal, but will fail to do so if it is anomalous. Therefore, we can use the reconstruction error, i.e., the difference between the input and output patterns, as the anomaly score.
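To make the first two interpretations concrete, the following sketch computes anomaly scores with scikit-learn in both ways: minus log-likelihood under a Gaussian mixture fitted to normal data, and the reconstruction error from a PCA "normal" subspace. The data, the number of components and the dimensions are placeholders, not actual satellite HK data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import PCA

# X_train: normal samples only; X_test: samples to be scored (placeholders here).
rng = np.random.default_rng(0)
X_train = rng.standard_normal((1000, 20))
X_test = rng.standard_normal((100, 20))

# (a) Probabilistic: minus log-likelihood under a mixture model of normal data.
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0).fit(X_train)
score_prob = -gmm.score_samples(X_test)          # higher score = more anomalous

# (b) Geometrical: distance of each test sample from the "normal" PCA subspace.
pca = PCA(n_components=5).fit(X_train)
recon = pca.inverse_transform(pca.transform(X_test))
score_geom = np.linalg.norm(X_test - recon, axis=1)   # reconstruction error
```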

2.2 Examples of Anomaly Detection Applied to Satellite Data

In this subsection, we show a few examples of applying data-driven anomaly detection methods to the housekeeping data of satellites. The details of the satellites are omitted here for confidentiality reasons.

(a) Detection of a solar paddle trouble. Figure 1 shows a part of the electric power subsystem (EPS) data of a communication testing satellite.


Fig. 1 HK data in EPS of a communication testing satellite

Although it is hard to detect by visual inspection, a trouble in one of the solar paddles was observed at the time indicated by the red dotted line in the figure. In this case study, we applied several unsupervised learning algorithms to the early part of this period for training to obtain density distribution models and then evaluated each data sample over the whole period. Figure 2 shows the anomaly scores (minus log-likelihoods) evaluated by them. Since this trouble is hard to detect even for experts, it is not surprising that simple distribution models such as probabilistic PCA (PPCA) and the Gaussian mixture failed to detect the anomaly. We found, however, that the minus log-likelihood computed by the mixture of probabilistic PCA (MPPCA) significantly increased at the time the trouble occurred, which means MPPCA successfully detected the anomaly. We consider that this is because MPPCA is more suitable than the others for modeling the high-dimensional and multi-modal spacecraft HK data.

(b) Detection of battery trouble symptoms. Figure 3 shows a part of the HK data of another satellite, consisting of 228 analog variables. While some irregular patterns can be found in the last part of the period, it is hard to find significant symptoms prior to them. Figure 4 shows the anomaly score (i.e., minus log-likelihood) over this period when MPPCA is applied to the data. We can see non-trivial increases of the score at least three times prior to the known troubles. Although experts examined these events and confirmed that they were not symptoms of the subsequent battery troubles, they admitted that the detection of these events is still valuable because they were considered to be minor issues in another battery cell.


Fig. 2 Log-likelihood of HK data: (a) GMM, (b) PPCA, (c) MPPCA

possibility of the data-driven approach to detect subtle changes or symptoms which are easily overlooked by operators.

(c) Detection of attitude anomaly and novel events. Figures 5 and 6 show numerical and categorical variables of the HK data of yet another satellite, respectively. In the latter figure, for each categorical variable, the categorical values other than the most frequent one are indicated by different colors. In general, spacecraft HK data consists of both numerical variables (typically corresponding to sensor measurements) and categorical variables (corresponding to discrete modes of subsystem components). In order to incorporate the categorical variables into the statistical model, we integrated the MPPCA model with multi-dimensional categorical distribution models. Figure 7 shows the anomaly score over this period. We found that it successfully detected "unusual" events during this period, including an attitude anomaly and a rare event which had never been encountered before. As the categorical variables contain the


Fig. 3 HK data with battery troubles (228 variables; the training and test periods and the known troubles are indicated on the time axis)

information about issued commands, this model is expected to check whether the state of the satellite changes as expected after a command is given.

3 Lessons Learned

3.1 Importance of Data Preprocessing

Recently, thanks to the growing interest in machine learning, we can easily use many of the latest algorithms with just a few lines of code. However, before applying them to the HK data of artificial satellites, it is necessary to preprocess the raw data appropriately. First, the original HK data usually contains a high percentage of missing values. This is partly because different sensors have different sampling rates and partly because operators intentionally omit some portion of the data to reduce the amount of communication from the satellite to the ground. Unfortunately, since the majority of popular machine learning algorithms are unable to deal with missing values, we have to complete the missing elements by some means. Based on our experience, elaborate completion methods such as high-order polynomial interpolation are not recommended, because they risk producing undesirable artifacts as a side effect. Besides, satellite HK data occasionally contains exceptionally large abnormal values caused by errors in data conversion or transmission. Although those error values can also be seen as a kind of anomaly, we are not interested in them in most cases because they are temporary and not related to any serious troubles in


Fig. 4 Anomaly score evaluated by MPPCA



Fig. 5 Numerical variables

the whole system. We therefore call this type of erroneous value a "trivial" outlier. Trivial outliers should be removed beforehand in the data preprocessing stage for two reasons. First, they may hide the truly serious anomalies from the operators, because the absolute values of the former are generally much larger than those of the latter. Second, if outliers with extremely large values are contained in the training data, the learned models are severely degraded.
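A minimal sketch of such preprocessing is given below, assuming the HK data is loaded in a pandas DataFrame; the robust median-based rule used to flag trivial outliers and the use of linear interpolation are illustrative choices, not the exact procedure used in our operations.

```python
import numpy as np
import pandas as pd

def preprocess_hk(df: pd.DataFrame, k: float = 10.0) -> pd.DataFrame:
    """Simple preprocessing of raw HK telemetry:
    (1) remove 'trivial' outliers (exceptionally large conversion/transmission errors),
    (2) fill missing values with low-order interpolation (no high-order polynomials)."""
    out = df.copy()
    for col in out.columns:
        x = out[col]
        med = x.median()
        mad = (x - med).abs().median() + 1e-9
        trivial = (x - med).abs() > k * mad      # crude robust rule for huge spikes
        out.loc[trivial, col] = np.nan           # treat trivial outliers as missing
    # linear interpolation, then fill the remaining gaps at the edges
    return out.interpolate(method="linear", limit_direction="both")
```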

3.2 Modeling of Satellite House-Keeping Data

A significant characteristic of satellite HK data is its heterogeneity. That is to say, the variables constituting HK data are first divided into two types: continuous variables that take real values and discrete variables that take categorical values. Typically, the former correspond to sensor measurements, whereas the latter represent statuses or modes of components. In addition, the continuous variables cover a diversity of physical quantities, units and ranges. This heterogeneity of HK data is a big difference from other data sources such as imagery data, in which all variables (i.e., pixels) have the same range and quantity. Another essential property of HK data


Fig. 6 Categorical variables

Fig. 7 Anomaly score


is the temporal dependency among data samples. In other words, HK data is a multidimensional (or even high-dimensional) time series generated from a spacecraft, which is a dynamical system. However, it is not always easy and straightforward to model the temporal dependency, because sensor measurements may have different sampling rates and are not synchronized. Another remarkable property of HK data is the multimodality of the data distribution, or the existence of cluster structure. This is because the whole satellite system and its subsystems have multiple operational modes. Interestingly, within each mode (or cluster), data samples are often distributed on almost linear subspaces. We consider that this is because artificial satellite systems are designed to operate linearly around stable states. While deep neural networks (DNNs) have gained a great reputation in many domains such as image recognition and natural language processing, it is important for state-of-the-art techniques to deal with the properties of satellite HK data described above in order to obtain successful results.

3.3 Difficulty in Unsupervised Learning

The main difficulty in anomaly detection for artificial satellites comes from the fact that we have few data samples of the anomalies we want to detect in advance, which makes it impossible to use supervised classification methods, the most reliable and promising techniques. Therefore, we have no choice but to use unsupervised learning methods, which have difficulties in deciding thresholds and avoiding false alarms when the amount of training data is not sufficient. To overcome this difficulty, we need domain knowledge or other information sources that compensate for the lack of anomaly data.

3.4 Expected Roles of Data-Driven Approaches

As seen in the past case studies, the data-driven approach to satellite health monitoring has some advantages over conventional methods. For example, it can discover subtle changes and events in the HK data which may be symptoms of serious troubles but are occasionally overlooked by humans. However, we consider that human operators and knowledge-based approaches are still necessary. While the data-driven approach is good at monitoring a large amount of HK data broadly and shallowly, it is poor at the logical inference that is necessary for decision making and fault diagnosis.


4 Challenges

4.1 Explainability

Recently, the concept of "explainable AI" has been spreading across many application areas. Explainability is also important in satellite health monitoring, as it is not enough just to detect anomalies. It is required to answer questions such as "Why is it considered to be an anomaly?" or "How serious is it?". The latter question is not easy, because the anomaly score used in data-driven anomaly detection indicates how likely a sample is to be an anomaly, but not how critical the anomaly is. Obviously, an important challenge in data-driven health monitoring is to improve its explainability. In our opinion, it is impractical to realize such a high-level explainability only by (unsupervised) learning from data. Some logical inference to handle expert knowledge and domain models will be necessary.

4.2 Automation of Data Preprocessing

As we pointed out in Sect. 3.1, the raw HK data sent from the satellite needs appropriate preprocessing, because it is not directly usable by most machine learning algorithms. This preprocessing is a very troublesome and time-consuming task for operators, which could become a barrier to the wide use of data-driven health monitoring. Therefore, the automation of the data preprocessing is considered a key challenge.

4.3 Transfer Learning

One difficulty of the data-driven approach is that we usually do not have a sufficient amount of training data in the early stage soon after launch. It would be desirable to utilize the data of other satellites instead. As mega-constellation missions consisting of the same type of satellites are being planned and there is a trend toward standardization of bus systems, this could be realized in the future. It is still challenging, however, to perform transfer learning or to share training data among different types of satellites.

4.4 Interaction with Operators

As discussed in Sect. 3.1, anomaly detection based on unsupervised learning has certain limitations, because it uses only past normal data samples. One possible idea to overcome them is to utilize the user feedback or reaction from


the operators, who evaluate the detection results of the system, as partial or incomplete label information, and to improve the performance gradually. This is closely related to topics such as active learning, relevance feedback, semi-supervised learning and lifelong machine learning, which draw attention in the machine learning community.

5 BISHAMON: A New Platform for Data-Driven Health Monitoring

Based on the lessons and discussions in the previous sections, we are now developing a novel platform for monitoring the house-keeping data of various artificial satellites, named BISHAMON (Big data-driven Intelligent HeAlth MONitoring). Supported by the Japan Science and Technology Agency, this project plans to develop a prototype by the end of FY2019 and to start an experimental service in FY2020.

6 Conclusion

In this paper, we presented an overview of data-driven health monitoring for artificial satellites. This approach takes full advantage of the large amount of past housekeeping data and of machine learning techniques. At the same time, however, non-trivial challenges remain before this approach becomes widespread and can deal with the drastic increase in artificial satellites expected in the coming years.

Acknowledgements This work was supported by JST START Grant Number JPMJST1814, Japan. Some results in this paper are from our past studies conducted under the support of JSPS KAKENHI Grant Number 26289320.


Robust Linear Regression and Anomaly Detection in the Presence of Poisson Noise Using Expectation-Propagation

Yoann Altmann, Dan Yao, Stephen McLaughlin, and Mike E. Davies

Abstract This paper presents a family of approximate Bayesian methods for joint anomaly detection and linear regression in the presence of non-Gaussian noise. Robust anomaly detection using non-convex sparsity-promoting regularization terms is generally challenging, in particular when additional uncertainty measures about the estimation process are needed, e.g., posterior probabilities of anomaly presence. The problem becomes even more challenging in the presence of non-Gaussian noise (e.g., Poisson distributed), of additional constraints on the regression coefficients (e.g., positivity), and when the anomalies present complex structures (e.g., structured sparsity). Uncertainty quantification is classically addressed using Bayesian methods. Specifically, Monte Carlo methods are the preferred tools to handle complex models. Unfortunately, such simulation methods suffer from a significant computational cost and are thus not scalable for fast inference in high dimensional problems. In this paper, we thus propose fast alternatives based on Expectation-Propagation (EP) methods, which aim at approximating complex distributions by more tractable models to simplify the inference process. The main problem addressed in this paper is linear regression and (sparse) anomaly detection in the presence of noisy measurements. The aim of this paper is to demonstrate the potential benefits and assess the performance of such EP-based methods. The results obtained illustrate that approximate methods can provide satisfactory results with a reasonable computational cost. It is important to note that the proposed methods are sufficiently generic to be used in other applications involving condition monitoring.

Y. Altmann (B) · D. Yao · S. McLaughlin
School of Engineering and Physical Science, Heriot-Watt University, Edinburgh, UK
e-mail: [email protected]
D. Yao, e-mail: [email protected]
S. McLaughlin, e-mail: [email protected]
M. E. Davies
School of Engineering, University of Edinburgh, Edinburgh, UK
e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2021
L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_14


Keywords Robust regression · Bayesian inference · Anomaly detection · Expectation-propagation · Generalized linear models

1 Introduction

Joint robust linear regression and anomaly detection is a challenging problem encountered in many industrial applications whereby sensors can be prone to random spurious readings. This problem is particularly challenging when the observation noise differs from the classical Gaussian assumption (e.g., Poisson noise) and when the sparse anomalies present spatial or temporal structures. This is typically the case for spectral unmixing problems [1, 2], for which the detected flux is quantified at a photon level and is affected by Poisson noise. While Bayesian models coupled with Monte Carlo sampling strategies stand as classical tools to handle challenging regression problems and enable uncertainty quantification for subsequent decision making, they still suffer from a high computational complexity which prevents their broad deployment in industrial applications requiring fast analysis [3]. Consequently, approximate Bayesian methods have been receiving growing interest for their better scalability. A classical approach to performing approximate Bayesian inference relies on using a Kullback-Leibler (KL) divergence as an asymmetric discrepancy criterion between an exact but complex distribution and an approximating distribution. For instance, minimizing the direct KL divergence between the approximate and exact distributions [4] leads to Variational Bayes (VB) approaches. On the other hand, it turns out that minimizing the reverse KL divergence (i.e., between the exact and the approximate distributions) leads to a saddle point problem. Although provably convergent double loop algorithms [5] have been proposed, they suffer from high computational complexity. Expectation-Propagation (EP) methods form a family of faster fixed point message passing solutions which minimize sequentially (and locally) a set of reverse KL divergences [6]. Although their convergence is in general not ensured (damping strategies can be adopted to mitigate this limitation), when EP methods converge, they can provide better estimates than VB for problems where both methods can be applied. In particular, VB approaches tend to underestimate posterior variances more severely than EP. Gaussian approximations or messages are classically used within the EP framework [7], leading to more tractable KL divergence minimization problems. To reduce the number of sequential EP updates involving one-dimensional densities, it is possible to use, locally, multivariate Gaussian approximations. In particular, in high dimensional (e.g., imaging) problems, these factors can be constrained to have diagonal covariance matrices. Recently, a class of algorithms called Vector Approximate Message Passing (VAMP) [8] has been derived using EP with diagonal/isotropic Gaussian messages over vectors, although the noise model can be non-Gaussian. While Poisson likelihoods have recently been considered within EP [9], and in particular for linear regression in [10], here we use EP for joint robust linear regression and anomaly detection. In [10], we showed that


by considering extended Bayesian models, it is possible to design several EP-based methods, while trading off computational complexity and accuracy of the approximations. To handle sparse anomalies, we use a splitting scheme similar to that in [11], whereby non-log-concave spike-and-slab prior models can be considered. The proposed models and associated a posteriori estimates are compared, through a series of experiments, to results obtained via Monte Carlo sampling from the true Bayesian model (in the absence of anomaly). Although a thorough comparison of the proposed methods with MCMC methods in the presence of anomalies would be more useful (as in [10] for the more standard linear regression problem), methods such as those investigated in [1, 2] (see also [3]) are still too computationally demanding, suffer from poor convergence properties in high dimensional settings, and are not considered here. Thus, here we only present the anomaly detection results obtained by our method. The remainder of this paper is organized as follows. The exact Bayesian model used for linear regression is introduced in Sect. 2. Section 3 discusses the associated factor graph, the approximation based on Expectation-Propagation and the associated update rules. Simulation results obtained using simulated data are presented in Sect. 4 and conclusions are finally reported in Sect. 5.

2 Exact Bayesian Model

2.1 Likelihood

We consider the robust estimation of a vector $x = [x_1, \ldots, x_N]^T$ from a set of noisy measurements $y = [y_1, \ldots, y_M]^T$, with $N \leq M$. Precisely, assuming mutual independence between the elements of $y$ conditioned on the value of the vector $u = [u_1, \ldots, u_M]^T$ with positive entries (and depending on $x$), each measurement $y_m$ follows a Poisson distribution, i.e., $f(y_m | u_m) = (u_m)^{y_m} \exp[-u_m]/y_m!$, $\forall m \in \{1, \ldots, M\}$. In this work, we consider the problem of linear regression in the presence of anomalies in the data and we assume that the vector $x$ of interest is related to the vector $u$, containing the means of the Poisson distributions above, by the following equation

$$u = Ax + z \odot r \qquad (1)$$

where $A = [a_1, \ldots, a_M]^T$ is a known $M \times N$ matrix with positive elements (the vectors $\{a_m\}_{m=1,\ldots,M}$ are the rows of $A$). Moreover, $r = [r_1, \ldots, r_M]^T \in \mathbb{R}_+^M$, $z = [z_1, \ldots, z_M]^T \in \{0, 1\}^M$ is a vector of binary labels, and $\odot$ denotes the Hadamard (element-wise) product, such that $v = z \odot r$ or $v_m = z_m r_m$, $\forall m$, i.e.,

$$v_m = \begin{cases} 0 & \text{if } z_m = 0 \ (\text{absence of outlier}) \\ r_m & \text{if } z_m = 1 \ (\text{presence of outlier}). \end{cases}$$

While we could have simply used $v$ to represent the anomalies, using the two vectors $z$ and $r$ as in [1, 11] allows us to separate and assign prior distributions to the locations (through $z$) and the values (through $r$) of the anomalies. Using the mutual independence between the elements of $y$, conditioned on the value of $(x, z, r)$ (or $u$), the joint likelihood reduces to

$$f(y|x, z, r) = \prod_{m=1}^{M} f(y_m|x, z, r) = \prod_{m=1}^{M} f(y_m|a_m^T x + z_m r_m), \qquad (2)$$

i.e., to a product of $M$ independent Poisson distributions. Since $A = [a_1, \ldots, a_M]^T$ is known, it is omitted hereafter in all the conditional distributions to simplify notations.
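For illustration, the observation model (1)–(2) can be simulated as follows; the dimensions and parameter values are arbitrary and only show how y is generated from x, z and r.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 200, 10
A = rng.uniform(0.0, 1.0, size=(M, N))        # known matrix with positive elements
x = rng.gamma(shape=1.0, scale=1.0, size=N)   # positive regression coefficients
z = rng.binomial(1, 0.1, size=M)              # binary anomaly locations (prob. 10%)
r = rng.gamma(shape=1.0, scale=500.0, size=M) # anomaly amplitudes
u = A @ x + z * r                             # Poisson means, Eq. (1)
y = rng.poisson(u)                            # measurements, Eq. (2)
```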

2.2 Prior Distributions

To complete our Bayesian model, we need to define prior distributions for the unknown model parameters in Eq. (2). First, we assume that the prior model $f(x)$ is separable in $x$, i.e., can be written as a product of $N$ independent distributions

$$f(x) = \prod_{n=1}^{N} f_x(x_n).$$

Moreover, although different distributions could be assigned to the elements of $x$, here we use the same distribution for all the elements of $x$. As discussed in [10], considering independent prior distributions based on truncated Gaussian or exponential distributions simplifies the EP-based inference significantly, but more complex prior models ensuring the positivity of $x$ can be used. Without loss of generality, here we consider a product of independent and identical gamma distributions whose shape and scale parameters $(k_x, \theta_x)$ are user-defined based on prior knowledge about $x$ (if available). If a limited amount of information is available, weakly informative distributions (i.e., with large variances) can be used, while still enforcing the positivity of $x$. As mentioned above, using $(z, r)$ instead of $v$ allows us to assign independent prior distributions to the location and amplitude of the outliers. For the anomaly amplitudes, we use (as for $x$) a set of $M$ identical and independent gamma prior distributions, i.e.,

$$f(r) = \prod_{m=1}^{M} f_r(r_m).$$


Finally, we consider that the sparse anomalies can be grouped across some of the $M$ measurements. In a similar fashion to [1], in this work we cluster the data into $K$ groups of successive measurements and assume that if one measurement is corrupted, its whole group of $M/K$ measurements is corrupted. Precisely, if $\mathcal{I}_k$ is the set of indices of the measurements in the $k$th group, we can define $\tilde{z}^{(k)} \in \{0, 1\}$ such that $z_m = \tilde{z}^{(k)}$, $\forall m \in \mathcal{I}_k$. We then assign $\tilde{z}^{(k)}$ a Bernoulli distribution with mean $\pi$, i.e., $f_z(\tilde{z}^{(k)} = 1) = \pi$, $\forall k \in \{1, \ldots, K\}$, which represents the (user-defined) prior probability of a block being corrupted by anomalies. Note that once the user-defined groups are fixed, the prior distribution of $z$ is fully defined by the prior model assigned to $\tilde{z} = [\tilde{z}^{(1)}, \ldots, \tilde{z}^{(K)}]^T$, i.e.,

$$f(\tilde{z}) = \prod_{k=1}^{K} f_z(\tilde{z}^{(k)}).$$

As illustrated in Fig. 1, if $K = M$, the locations of the anomalies are completely uncorrelated. Note that here, we used a simple grouping strategy for the anomalies and used the same prior probability $\pi$ for each block, but more complex models (e.g., different group sizes, non-successive measurements) can be used depending on the application.

Fig. 1 Example of sparse anomalies (first and third subplots) generated with M = 200 and π = 10%. While, on average, the two anomaly sequences have the same number of non-zero values, the two bottom subplots have been obtained with K = M, i.e., without structured sparsity, while the two top plots have been obtained with K = 40 groups of 5 successive measurements
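A short sketch of how the group-structured anomaly supports of Fig. 1 can be generated is given below (assuming, as in the text, K blocks of M/K successive measurements sharing one Bernoulli label).

```python
import numpy as np

def sample_structured_support(M: int, K: int, pi: float, rng) -> np.ndarray:
    """Draw z in {0,1}^M by sampling one Bernoulli(pi) label per block of M//K
    successive measurements and replicating it within the block."""
    z_tilde = rng.binomial(1, pi, size=K)     # one label per group
    return np.repeat(z_tilde, M // K)         # z_m = z_tilde^(k) for all m in I_k

rng = np.random.default_rng(2)
z_unstructured = sample_structured_support(200, 200, 0.10, rng)  # K = M
z_structured = sample_structured_support(200, 40, 0.10, rng)     # K = 40, blocks of 5
```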


2.3 Joint Posterior Distribution

Using the joint likelihood $f(y|x, r, \tilde{z})$ and the prior distributions $f(x)$, $f(r)$ and $f(\tilde{z})$, we obtain

$$f(y, x, r, \tilde{z}) = f(y|x, r, \tilde{z})\, f(x)\, f(r)\, f(\tilde{z}) = \prod_{m=1}^{M} f(y_m|a_m^T x + z_m r_m) \prod_{m=1}^{M} f_r(r_m) \prod_{n=1}^{N} f_x(x_n) \prod_{k=1}^{K} f_z(\tilde{z}^{(k)}), \qquad (3)$$

and the (marginal) posterior distribution of $x$, our main parameter of interest, is given by

$$f(x|y) = \sum_{\tilde{z}} \int f(y|x, r, \tilde{z})\, f(x)\, f(r)\, f(\tilde{z})/f(y)\, \mathrm{d}r. \qquad (4)$$



In the absence of anomaly, estimating $x$ via maximum a posteriori (MAP) estimation is relatively simple if $f(x)$ is log-concave, since the MAP estimator can be obtained by solving a convex optimization problem [12–14]. However, the MAP estimator is only a point estimate and such an optimization strategy does not provide a posteriori confidence measures about $x$. Moreover, the problem becomes non-convex in the presence of anomalies, unless a convex prior model is used for $v$ (as in [15] for instance). In this work, we focus on approximating the posterior mean and covariance (or marginal variances) of $f(x|y)$ while also estimating the posterior probabilities of outlier presence, i.e., $f(\tilde{z}^{(k)} = 1|y)$. Unfortunately, computing the posterior mean and covariance matrix analytically is not possible, especially when $M$ is large, as the integrations involved in Eq. (4) become computationally intractable. Simulation methods are instead classically used to sample from the posterior distribution and approximate the quantities of interest. For instance, constrained Hamiltonian Monte Carlo methods [16] have been considered to solve regression problems in the presence of Poisson noise [2] (see also [3] for a comparison of samplers including a bouncy particle sampler [17]). However, simulation-based approaches still suffer from a high computational cost and poor scalability properties. Indeed, while satisfactory results have been obtained using MCMC methods [2] for robust spectral unmixing, the associated computational cost was kept reasonably low thanks to the small number of measurements per pixel (M = 32, N ≤ 16) and the use of an Ising model promoting correlated locations of the anomalies and improving the mixing properties of the Markov chains. Thus, approximate Bayesian methods (including EP methods) appear as promising and scalable alternative solutions.


3 EP for Robust Linear Regression with Poisson Noise

EP methods appear as promising alternatives to Monte Carlo simulation, in terms of estimation performance and computational complexity. Similarly to Variational Bayes (VB) approaches, they consist of approximating the joint posterior distribution $f(x, r, \tilde{z}|y)$ by a simpler distribution (e.g., by imposing a posteriori independence between $x$, $r$ and $\tilde{z}$) whose moments are simpler to compute. However, in contrast to VB, EP methods can be applied more generally by choosing the families of approximating distributions and by imposing additional constraints on the solution. In [10], we compared several EP-based methods for linear regression with Poisson noise, based on Gaussian approximations with different covariance structures. In particular, we used a general data augmentation scheme and investigated the estimation performance as additional constraints (local independence, isotropy) were imposed. A similar approach is adopted here to handle potential outliers. The key ingredient used in this work is the consideration of an augmented model based on Eq. (1). Let $u = [u_1, \ldots, u_M]^T$ be a vector of auxiliary variables; the model in Eq. (3) can then be extended as

$$f(y, u, x, r, \tilde{z}) = f(y|u)\, f(u|x, r, \tilde{z})\, f(x)\, f(r)\, f(\tilde{z}),$$

with $f(u|x, r, \tilde{z}) = \prod_{m=1}^{M} \delta(u_m - a_m^T x - z_m r_m) = \delta(u - Ax - z \odot r)$, where $\delta(\cdot)$ is the Dirac delta function. Note that Eq. (3) can be obtained by marginalizing $u$, i.e., using $\int f(y, u, x, r, \tilde{z})\, \mathrm{d}u$, and that computing the marginal mean and covariance of $x$ can be achieved using either the original or the extended model. While EP-based inference could be achieved with the original model (similarly to the work in [10], which first considered a non-extended model), the extended model offers, as will be discussed in this section, more flexibility in terms of parallel implementation of the EP updates.

3.1 EP Approximation Using Auxiliary Variables

As in [10, 11], this joint density can be approximated by a product of unnormalized multivariate Gaussian distributions (for $(u, x, r)$) and Bernoulli distributions (for $\tilde{z}$). More precisely, we denote by $q(\cdot)$ the approximating factors and use

$$\begin{cases} f(y|u) \approx q_{u,0}(u) \\ f_r(r) \approx q_{r,0}(r) \\ f_x(x) \approx q_{x,0}(x) \\ \delta(u - Ax - z \odot r) \approx q_{u,1}(u)\, q_{x,1}(x)\, q_{r,1}(r)\, q_{z,1}(\tilde{z}) \end{cases}$$

[Fig. 2 shows the factor graph: the variable nodes y, u, x, r and z̃ are connected through the factor nodes f_y(u), δ(u − Ax − z ⊙ r), f_x(x), f_r(r) and f_z(z̃), each paired with its approximating factor(s) q.]

Fig. 2 Factor graph used to perform EP-based robust linear regression using variable splitting. The rectangular boxes (resp. circles) represent the factor (resp. variable) nodes and the approximating factors are shown in blue

where the approximating factors are defined as

$$\begin{cases} q_{u,i}(u) \propto \mathcal{N}(u;\, m_{u,i}, \Sigma_{u,i}), & \forall i \in \{0, 1\} \\ q_{r,i}(r) \propto \mathcal{N}(r;\, m_{r,i}, \Sigma_{r,i}), & \forall i \in \{0, 1\} \\ q_{x,i}(x) \propto \mathcal{N}(x;\, m_{x,i}, \Sigma_{x,i}), & \forall i \in \{0, 1\} \\ q_{z,1}(\tilde{z}) \propto \prod_{k=1}^{K} \mathrm{Ber}(\tilde{z}_k;\, \pi_{k,1}), \end{cases} \qquad (5)$$

where $\mathcal{N}(\cdot;\, m_{u,i}, \Sigma_{u,i})$ is the probability density function of the multivariate normal distribution with mean $m_{u,i}$ and covariance $\Sigma_{u,i}$. Note that since the approximating distributions for $\tilde{z}$ are products of Bernoulli distributions (as is $f(\tilde{z})$), the approximation of $f(\tilde{z})$ is exact and we obtain $q_{z,0}(\tilde{z}) = f(\tilde{z})$. In other words, $q_{z,0}(\tilde{z})$ will not need to be updated during the EP algorithm (see the comment below). The resulting factor graph is depicted in Fig. 2. Using this factorization, the overall approximation of $f(u, x, r, \tilde{z}, y)$, denoted by $Q(u, x, r, \tilde{z})$ (its dependence on $y$ has been omitted for ease of reading), is given by

$$Q(u, x, r, \tilde{z}) = Q_u(u)\, Q_x(x)\, Q_r(r)\, Q_z(\tilde{z})$$

where $Q_u(u) \propto \prod_{i=0}^{1} q_{u,i}(u)$, $Q_x(x) \propto \prod_{i=0}^{1} q_{x,i}(x)$, $Q_r(r) \propto \prod_{i=0}^{1} q_{r,i}(r)$ and $Q_z(\tilde{z}) \propto \prod_{i=0}^{1} q_{z,i}(\tilde{z})$. To simplify notations later, we also introduce $Q_i(u, x, r, \tilde{z}) = q_{u,i}(u)\, q_{x,i}(x)\, q_{r,i}(r)\, q_{z,i}(\tilde{z})$, $\forall i \in \{0, 1\}$, such that $Q(u, x, r, \tilde{z}) = Q_0(u, x, r, \tilde{z})\, Q_1(u, x, r, \tilde{z})$.


3.2 EP Updates

The overall aim of EP methods is to optimize $Q(u, x, r, \tilde{z})$ to minimize the Kullback-Leibler (KL) divergence¹

$$\min_{Q(u, x, r, \tilde{z})} \mathrm{KL}\left(f(u, x, r, \tilde{z}, y)\,\|\, Q(u, x, r, \tilde{z})\right).$$

Unfortunately, this minimization problem is intractable and the EP updates consist instead of the sequential and iterative optimization of each factor node in Fig. 2. Precisely, one iteration of the four required updates is given by

$$\begin{aligned} &\text{(i)} \ \min_{q_{u,0}(u)} \mathrm{KL}\left(f(y|u)\, q_{u,1}(u)\,\|\, q_{u,0}(u)\, q_{u,1}(u)\right),\\ &\text{(ii)} \ \min_{Q_1(u,x,r,\tilde{z})} \mathrm{KL}\left(\delta(u - Ax - z \odot r)\, Q_0(u,x,r,\tilde{z})\,\|\, Q_1(u,x,r,\tilde{z})\, Q_0(u,x,r,\tilde{z})\right),\\ &\text{(iii)} \ \min_{q_{x,0}(x)} \mathrm{KL}\left(f(x)\, q_{x,1}(x)\,\|\, q_{x,0}(x)\, q_{x,1}(x)\right),\\ &\text{(iv)} \ \min_{q_{r,0}(r)} \mathrm{KL}\left(f(r)\, q_{r,1}(r)\,\|\, q_{r,0}(r)\, q_{r,1}(r)\right). \end{aligned}$$

These updates are then repeated until convergence. Note that although five factor nodes are displayed in Fig. 2, since $q_{z,0}(\tilde{z})$ is in the same family of distributions as $f(\tilde{z})$, we obtain $q_{z,0}(\tilde{z}) = f(\tilde{z})$, which can be set during the initialization of the method. While several stopping criteria can be used, here we simply iterate until the variations of the means and variances of the approximation $Q(u, x, r, \tilde{z})$ between successive iterations become negligible. We also set a maximal number of iterations (e.g., 100) to stop the algorithm if the first condition is not met. It is important to note that in general, EP methods are not guaranteed to converge. However, as in [10], we used a classical damping strategy (see [11] for details) and did not experience convergence issues in any of the scenarios considered in Sect. 4. Prior to presenting briefly how the updates (i)–(iv) are performed, it is important to discuss the additional constraints imposed on $Q(u, x, r, \tilde{z})$ and in particular on the different covariance matrices in Eq. (5). While allowing full covariance matrices increases the flexibility and potentially the quality of the approximation, it can also lead to numerical instabilities if too general approximations are used. For instance, since we used products of independent prior distributions to define $f_x(x)$ and $f_r(r)$, it is not useful to use full covariance matrices in their approximations, e.g., in $q_{x,0}(x)$ and $q_{r,0}(r)$. While a full analysis of the optimal covariance structures would be interesting, it is not the main contribution of this work and we leave it for future work. In Table 1, we report the structures of the different covariance matrices involved in Fig. 2 and will motivate some of these choices in the next paragraph. We did not impose additional constraints on the means of the different distributions as we did not experience numerical issues with the covariance matrices in Table 1.

¹ Note that VB methods are also based on KL-divergence minimization, but they use the direct KL-divergence $\mathrm{KL}\left(Q(u, x, r, \tilde{z})\,\|\, f(u, x, r, \tilde{z}, y)\right)$.


Table 1 Properties of the different covariance matrices involved in the approximation of Fig. 2. "Isotropic" means that the covariance matrix is proportional to the identity matrix

Σ_{u,0}    Σ_{u,1}     Σ_{r,0}    Σ_{r,1}     Σ_{x,0}    Σ_{x,1}
Diagonal   Isotropic   Diagonal   Isotropic   Diagonal   Full

For brevity, we only present briefly how the four updates (i)–(iv) are computed. The interested reader is invited to consult [10, 11] for further details. Because Gaussian and Bernoulli distributions are chosen as approximating distributions, the KL minimization problems (i)–(iv) can be solved by computing the mean and (co)variance of the distributions on the left-hand side of the divergences (also referred to as tilted distributions) and by matching these moments to those of the distributions on the right-hand side of the KL divergence. If constraints are added (e.g., isotropy of the covariance matrix), the KL minimization can be achieved using gradient-based methods (see [11]) as it can be cast as a convex optimization problem.

3.2.1 Update of q_{u,0}

Using the facts that $\Sigma_{u,1}$ is diagonal (it is even proportional to the identity matrix) and that the likelihood is separable in $u$, the tilted distribution $f(y|u)\, q_{u,1}(u)$ has a diagonal covariance matrix. Consequently, the KL minimization in (i) can be performed element-wise using the method proposed in [9].
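For illustration only, the element-wise moments needed here can be approximated by a simple one-dimensional quadrature, as sketched below; this is a generic stand-in for the dedicated method of [9], and the grid choice is an assumption of the sketch.

```python
import numpy as np

def tilted_moments_poisson(y, m, v, n_grid=4000):
    """Mean and variance of the 1-D tilted density p(u) ∝ u^y exp(-u) N(u; m, v)
    on u > 0, approximated with a simple grid-based quadrature."""
    hi = max(m + 8.0 * np.sqrt(v), 10.0 * (y + 1.0))
    u = np.linspace(1e-8, hi, n_grid)
    log_p = y * np.log(u) - u - 0.5 * (u - m) ** 2 / v   # unnormalized log density
    w = np.exp(log_p - log_p.max())                      # stabilized weights
    w /= w.sum()
    mean = np.sum(w * u)
    var = np.sum(w * (u - mean) ** 2)
    return mean, var

# e.g., one measurement y_m = 5 with current Gaussian factor N(u; 3, 4)
print(tilted_moments_poisson(5, 3.0, 4.0))
```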

3.2.2 Update of Q_1

The update of $Q_1$ relies on computing moments of the tilted distribution $\delta(u - Ax - z \odot r)\, Q_0(u, x, r, \tilde{z})$. While it is not possible to compute analytically the full covariance matrix associated with that tilted distribution, it is worth recalling that $Q(u, x, r, \tilde{z}) = Q_u(u)\, Q_x(x)\, Q_r(r)\, Q_z(\tilde{z})$. Thus, we only need to compute the moments of the marginals of the tilted distribution w.r.t. $x$, $\{u_m\}_m$, $\{r_m\}_m$ and $\{z_m\}_m$. These updates are similar to those used in [10, 11] and are thus not further detailed here. However, it is important to mention that while we used a full covariance matrix for $\Sigma_{x,1}$ to capture the posterior correlation between the elements of $x$ induced by $A$, it is possible, as in [10, 11], to only keep its diagonal elements. This can accelerate the update of $q_{x,0}$ below, at the cost of losing the posterior correlations between the elements of $x$ in $Q_x(\cdot)$.

3.2.3 Update of q_{x,0}

Once $Q_1(\cdot)$ has been updated, the update of $q_{x,0}$ is performed exactly as in the "EPDF" method in [10]. Because $\Sigma_{x,1}$ is a full matrix, the marginal means and variances


of $q_{x,0}$ are updated sequentially (see [10] for details). However, if $\Sigma_{x,1}$ is chosen to be diagonal, the update of $q_{x,0}$ is the same as that of $q_{r,0}$ below.

3.2.4 Update of q_{r,0}

Similarly to $q_{x,0}$, the update of $q_{r,0}$ is performed exactly as in the "EP-II" method in [10]. Because $\Sigma_{r,1}$ is diagonal, the marginal means and variances of $q_{r,0}$ can be updated in a parallel manner. The tilted distribution $f(r)\, q_{r,1}(r)$ is proportional to a product of $M$ independent truncated Gaussian distributions, whose moments are easy to compute analytically. Since many updates can be performed in parallel, the EP scheme used is highly scalable (when $M$ is large). The next section presents simulation results illustrating the benefits of the proposed inference strategy.
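As an illustration of the building block used in this update, the sketch below computes the mean and variance of a Gaussian restricted to the positive axis with scipy; treating the tilted distribution as a positively truncated Gaussian corresponds to the special case of an exponential (gamma with shape 1) prior, which is an assumption of this sketch only.

```python
import numpy as np
from scipy.stats import truncnorm

def positive_gaussian_moments(m, s):
    """Mean and variance of N(m, s^2) restricted to (0, +inf),
    computed for each element of m and s (vectorized over the M amplitudes)."""
    a = (0.0 - m) / s                 # standardized lower truncation bound
    b = np.full_like(m, np.inf)       # no upper truncation
    return truncnorm.mean(a, b, loc=m, scale=s), truncnorm.var(a, b, loc=m, scale=s)

# hypothetical cavity means / standard deviations for three anomaly amplitudes r_m
m = np.array([-0.5, 0.3, 2.0])
s = np.array([1.0, 0.8, 0.5])
print(positive_gaussian_moments(m, s))
```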

4 Simulation Results

In this work, we illustrate the potential benefits of the proposed approach for robust linear regression and anomaly detection through a series of experiments conducted with simulated data, which allows us to use ground-truth information for validation.

4.1 Comparison with Monte Carlo Sampling (Without Anomalies)

As discussed at the end of Sect. 2, MCMC methods can become extremely slow for large values of $M$ when addressing the joint linear regression and anomaly detection problem, which makes the comparison with the reference/optimal estimators (considering the MCMC-based posterior mean and covariances as the gold standard) difficult in practice. Thus, we first compare the performance of the proposed method to that of an MCMC method assuming no outliers, using data generated without outliers. In all the simulations conducted in this section we used $(M, N) = (500, 20)$ and the matrix $A$ has been generated randomly such that its columns are correlated, as can be seen in Fig. 3. First, we generated anomaly-free data using $x$ generated from a product of independent gamma distributions with shape and scale parameters equal to $(k_x, \theta_x) = (1, 1)$. These parameters have also been used to define the prior distribution $f_x(\cdot)$. Moreover, we used $K = M$ (no structured sparsity), $\pi = 10\%$ and a product of independent gamma distributions with shape and scale parameters equal to $(k_r, \theta_r) = (1, 500)$ for $f(r)$.


Fig. 3 $A^T A$ for the matrix A used in Sect. 4. The condition number of A is κ(A) = 35.18

Figure 4 compares the posterior mean and covariance matrices estimated using the proposed EP method and the MCMC method used as reference in [10] (using $N_{MC} = 20{,}000$ Monte Carlo iterations to ensure the convergence of the chains). This figure shows that in the absence of anomalies, the two methods provide very similar results, illustrating the ability of the EP method to approximate the actual non-Gaussian posterior distribution. Note that for these results, all the marginal probabilities $f(\tilde{z}^{(k)} = 1|y)$ obtained via EP satisfy $f(\tilde{z}^{(k)} = 1|y) < 10^{-4}$, i.e., the method clearly identifies that no anomalies are present. Moreover, although the EP method estimates more parameters than the MCMC method (it estimates means/covariances associated with $(r, z)$), the EP method converges within 50 iterations in 1.5 s using a Matlab 2017b implementation running on a MacBook Pro with 16 GB of RAM and a 2.9 GHz Intel Core i7 processor, while the MCMC method takes 10 s.


Fig. 4 Top: ground truth x (red) and posterior means of x estimated by the proposed EP-based method (black) and reference MCMC method (dashed blue) in the absence of outliers in the data. Bottom: associated posterior covariance matrices obtained by the two methods

4.2 Anomaly Detection

In this section, we illustrate the ability of the proposed method to perform jointly robust linear regression and anomaly identification in the presence of Poisson noise. First, we generated data corrupted with anomalies using $(k_x, \theta_x) = (1, 1)$, $(k_r, \theta_r) = (1, 500)$, $K = M$ and $\pi = 10\%$. The main results obtained are presented in Fig. 5. The second subplot shows the posterior mean and the associated ±3 standard deviation intervals (obtained from the estimated posterior covariance matrix) of $x$. The posterior mean of $r$ is presented in the third subplot but the confidence intervals are omitted here as they are difficult to see. The bottom subplot depicts the marginal probabilities $f(\tilde{z}^{(k)} = 1|y)$ of outlier presence together with the actual values of $z^{(k)}$ ($-z^{(k)}$ is displayed to facilitate the visual comparison). This figure shows that the proposed method is able to identify all the significant anomalies (associated with high posterior probabilities $f(\tilde{z}^{(k)} = 1|y)$) while the smallest anomalies are assigned lower probabilities. Finally, we assess the performance of the proposed method in the presence of spatially correlated anomalies. As in the previous experiment, we used $(k_x, \theta_x) = (1, 1)$, $(k_r, \theta_r) = (1, 500)$ and $\pi = 10\%$ but the anomalies were generated using $K = M/5$ (groups of 5 successive anomalies). Figure 6 depicts the regression and


Fig. 5 Top row: realization of y generated using Eq. (2). Second row: ground truth x (red line), posterior mean (black line) and associated ±3 standard deviation intervals (blue dashed lines) of x in the presence of 10% of outliers in the data. Third row: actual anomalies (red line) and posterior mean of r (black line). Bottom: marginal probabilities f(z̃^(k) = 1|y) of outlier presence (blue line) together with the actual values of z^(k) (red line)

anomaly detection results obtained using $K = M/5$ (left-hand side subplots) and $K = M$ (right-hand side subplots). While including additional information about the anomaly structure does not significantly improve the regression performance in this example (similar variances in the top row of Fig. 6), it improves the detection and localization of the anomalies. Indeed, it further reduces the probabilities $f(\tilde{z}^{(k)} = 1|y)$ in the absence of anomalies, i.e., it reduces the overall probability of false alarm.

5 Conclusions

In this work, we present a new set of Expectation-Propagation methods for joint robust linear regression and anomaly detection in the presence of Poisson noise. Using an extended Bayesian model, it is possible to derive efficient EP updates and handle non-convex posterior distributions induced by the use of spike-and-slab prior models. The resulting approximate Bayesian inference scheme provides satisfactory results, with a significantly lower computational cost when compared to intrinsically sequential Monte Carlo sampling strategies. Although simple experiments have been


Fig. 6 Linear regression and anomaly detection results obtained using K = M/5 (left-hand side subplots) and K = M (right-hand side subplots) for data generated with K = M/5. Top: ground truth x (red line), posterior mean (black line) and associated ±3 standard deviation intervals (blue dashed lines) of x in the presence of 10% of outliers in the data. Middle: actual anomalies (red line) and posterior mean of r (black line). Bottom: marginal probabilities f(z̃^(k) = 1|y) of outlier presence (blue line) together with the actual values of z^(k) (red line)

conducted with 1D signals, the method can be extended to handle group-sparsity in 2D (e.g., images) or higher dimensions. Additional experiments conducted on the robust spectral unmixing problem addressed in [2] (but not presented here) confirmed the benefits of the proposed method for the detection of structured anomalies in spectral signatures. Future work first includes a thorough comparison with reference MCMC-based methods (e.g., the method proposed in [1]) to confirm these preliminary results. It would also be interesting to extend this method to other anomaly models, e.g., where $u = (1 - z) \odot (Ax) + z \odot r$.

Acknowledgements This work was supported by the Royal Academy of Engineering under the Research Fellowship scheme RF201617/16/31.


References
1. Newstadt GE, Hero AO III, Simmons J (2014) Robust spectral unmixing for anomaly detection. In: Proceedings IEEE-SP workshop on statistical signal processing, Gold Coast, Australia
2. Altmann Y, Maccarone A, McCarthy A, Newstadt G, Buller GS, McLaughlin S, Hero A (2017) Robust spectral unmixing of sparse multispectral lidar waveforms using gamma Markov random fields. IEEE Trans Comput Imaging 3(4):658–670
3. Tachella J, Altmann Y, Pereyra M, Tourneret J-Y (2018) Bayesian restoration of high-dimensional photon-starved images. In: Proceedings European signal processing conference (EUSIPCO), Rome, Italy
4. Blei DM, Kucukelbir A, McAuliffe JD (2017) Variational inference: a review for statisticians. J Am Stat Assoc 112(518):859–877
5. Yuille AL, Rangarajan A (2002) The concave-convex procedure (CCCP). In: Advances in neural information processing systems. MIT Press, pp 1033–1040
6. Minka TP (2001) Expectation propagation for approximate Bayesian inference. In: Proceedings of the seventeenth conference on uncertainty in artificial intelligence. Morgan Kaufmann Publishers Inc., pp 362–369
7. Seeger MW (2008) Bayesian inference and optimal design for the sparse linear model. J Mach Learn Res 9:759–813
8. Schniter P, Rangan S, Fletcher AK (2016) Vector approximate message passing for the generalized linear model. In: 50th Asilomar conference on signals, systems and computers, Pacific Grove, CA, USA, pp 1525–1529
9. Ko Y-J, Seeger MW (2016) Expectation propagation for rectified linear Poisson regression. In: Asian conference on machine learning, ser. Proceedings of machine learning research, vol 45, Hong Kong, pp 253–268
10. Altmann Y, Perelli A, Davies ME (2019) Expectation-Propagation algorithms for linear regression with Poisson noise: application to photon-limited spectral unmixing. In: IEEE international conference on acoustics, speech, and signal processing (ICASSP), Brighton, UK
11. Hernández-Lobato J, Hernández-Lobato D, Suárez A (2015) Expectation propagation in linear regression models with spike-and-slab priors. Mach Learn 99(3):437–487. https://doi.org/10.1007/s10994-014-5475-7
12. Figueiredo M, Bioucas-Dias J (2010) Restoration of Poissonian images using alternating direction optimization. IEEE Trans Image Proc 19(12):3133–3145
13. Altmann Y, Maccarone A, Halimi A, McCarthy A, Buller GS, McLaughlin S (2016) Efficient range estimation and material quantification from multispectral lidar waveforms. In: Proceedings sensor signal processing for defence (SSPD) conference, Edinburgh, UK
14. Tobin R, Altmann Y, Ren X, McCarthy A, Lamb RA, McLaughlin S, Buller GS (2017) Comparative study of sampling strategies for sparse photon multispectral lidar imaging: towards mosaic filter arrays. J Opt 19(9):094006
15. Févotte C, Dobigeon N (2015) Nonlinear hyperspectral unmixing with robust nonnegative matrix factorization. IEEE Trans Image Proc 24(12):4810–4819
16. Brooks S (2011) Handbook of Markov chain Monte Carlo. Chapman & Hall/CRC Handbooks of Modern Statistical Methods. Taylor & Francis
17. Bouchard-Côté A, Vollmer SJ, Doucet A (2018) The bouncy particle sampler: a nonreversible rejection-free Markov chain Monte Carlo method. J Am Stat Assoc 113(522):855–867

Spacecraft Health Monitoring Using a Weighted Sparse Decomposition

Barbara Pilastre, Jean-Yves Tourneret, Loïc Boussouf, and Stéphane D'escrivan

Abstract In space operations, spacecraft health monitoring and failure prevention are major issues. This important task can be handled by monitoring housekeeping telemetry time series using anomaly detection (AD) techniques. The success of machine learning methods makes them attractive for AD in telemetry via semi-supervised learning. Semi-supervised learning consists of learning a reference model from past telemetry acquired without anomalies in the so-called learning step. In a second step, referred to as the test step, the most recent telemetry time series are compared to this reference model in order to detect potential anomalies. This paper presents an extension of an existing AD method based on a sparse decomposition of test signals on a dictionary of normal patterns. The proposed method has the advantage of accounting for possible relationships between different telemetry parameters and can integrate external information via appropriate weights that allow detection performance to be improved. After recalling the main steps of an existing AD method based on a sparse decomposition (Pilastre et al., Sign Proc, 2019 [1]) for multivariate telemetry data, we investigate a weighted version of this method, referred to as W-ADDICT, that allows external information to be included in the detection step. Some representative results obtained using an anomaly dataset composed of actual anomalies that occurred on several satellites show the interest of the proposed weighting strategy using external information obtained from the correlation coefficient between the tested data and its decomposition on the dictionary.

B. Pilastre (B) · J.-Y. Tourneret
INP-ENSEEIHT, University of Toulouse, 2 Rue Camichel, 31071 Toulouse, France
e-mail: [email protected]
J.-Y. Tourneret, e-mail: [email protected]
IRIT UMR5505, 118 Route de Narbonne, 31062 Toulouse, France
TéSA, 7 Boulevard de la Gare, 31500 Toulouse, France
L. Boussouf
Airbus Defence and Space, 31 rue des cosmonautes, 31400 Toulouse, France
e-mail: [email protected]
S. D'escrivan
Centre National d'Études Spatiales (CNES), 18 avenue Édouard Belin, 31400 Toulouse, France
e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2021
L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_15


Keywords Machine learning · Spacecraft health monitoring · Anomaly detection · Sparse decomposition · Dictionary learning

1 Introduction

Spacecraft health monitoring and failure prevention are major issues that can be handled by checking housekeeping telemetry time series. Housekeeping telemetry is composed of hundreds to thousands of telemetry parameters describing the evolution over time of physical quantities such as temperature, pressure and voltage. Detecting anomalies in these parameters jointly can be a complicated task. Thus, a lot of state-of-the-art telemetry AD methods consider a univariate framework, which consists of detecting anomalies in univariate time series separately via semi-supervised learning. The idea is to learn nominal spacecraft behaviours from past telemetry composed of normal patterns (i.e., data without anomalies) and build a reference model to which new data can be compared. Then, any deviation from the nominal behaviour is considered as a fault. In this context, no assumptions are made about the anomalies, whose type is not required for the learning. This represents a real advantage compared to other methods based on supervised learning, in which labelled data are needed. Examples of univariate anomalies affecting telemetry time series are displayed in Fig. 1 (top, red boxes). Popular machine learning (ML) methods that have been investigated in this framework include the one-class support vector machine [2], nearest neighbour techniques [3, 4] or neural networks [5, 6]. However, these methods do not account for possible relationships existing between different parameters and thus cannot detect multivariate anomalies resulting from changes in the relationships between several telemetry time series. An example of such a multivariate anomaly is displayed in Fig. 1 (bottom, red box). The abnormal behaviour in the top time series of this figure (outlined in the red box) will be detected more easily by using the bottom time series. This kind of situation is often referred to as a "contextual anomaly", which has been recently investigated in [7–9].

2 Proposed Anomaly Detection Method

2.1 Preprocessing

The proposed AD method first segments the different telemetry time series into windows of fixed size w, as illustrated in Fig. 2. This segmentation step has been


Fig. 1 Examples of univariate (top) and multivariate (bottom) anomalies

Fig. 2 Segmentation of telemetry into windows


used successfully in many telemetry AD methods [2, 3]. Each resulting matrix is then transformed into a vector whose first $w$ components correspond to the first time series, denoted as $y_1$, whose components $w + 1$ to $2w$ correspond to the second time series, denoted as $y_2$, and so on. This preprocessing creates mixed signals composed of telemetry time series formed by the different parameters, i.e., $y = [y_1^T, \ldots, y_K^T]^T$ with $y_k \in \mathbb{R}^w$, $k = 1, \ldots, K$, where $K$ is the number of telemetry parameters and $w$ is the size of the time window.
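A minimal sketch of this segmentation and stacking step is given below; it assumes the K telemetry parameters are already resampled onto a common time grid, which is an assumption of the illustration rather than part of the method description.

```python
import numpy as np

def build_mixed_signals(telemetry: np.ndarray, w: int) -> np.ndarray:
    """Segment K telemetry time series (rows of `telemetry`, length T) into
    non-overlapping windows of size w and stack the K windows of each segment
    into a single mixed signal y = [y_1^T, ..., y_K^T]^T of length N = K * w."""
    K, T = telemetry.shape
    n_windows = T // w
    segments = telemetry[:, :n_windows * w].reshape(K, n_windows, w)
    # for each window index, concatenate the K parameter windows into one vector
    return np.concatenate([segments[k] for k in range(K)], axis=1)  # (n_windows, K*w)

Y = build_mixed_signals(np.random.randn(7, 900), w=50)   # e.g., K = 7, w = 50 -> N = 350
print(Y.shape)   # (18, 350)
```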

2.2 Anomaly Detection Using a Weighted Sparse Decomposition

Sparse decompositions have received increasing attention in many signal and image processing applications. These decompositions were investigated for univariate AD in [10]. The AD strategy introduced in [10] decomposes each multivariate test signal as the sum of three signals: a nominal signal defined by a sparse representation in a dictionary of normal patterns, an anomaly signal and an additive noise, such that

$$y = \Phi x + e + b \qquad (1)$$

where $y \in \mathbb{R}^N$ is the multivariate test signal, $\Phi \in \mathbb{R}^{N \times L}$ is a dictionary previously learned from data describing normal behaviours of the spacecraft, $x \in \mathbb{R}^L$ is a sparse vector of coefficients, $e \in \mathbb{R}^N$ is a possible anomaly signal (with $e = 0$ in the absence of anomaly) and $b \in \mathbb{R}^N$ is an additive noise. The proposed multivariate framework and the associated preprocessing divide the multivariate signals $y$ and $e$ into $K$ blocks associated with the $K$ telemetry parameters, i.e., $y = [y_1^T, \ldots, y_K^T]^T$ and $e = [e_1^T, \ldots, e_K^T]^T$. The resulting sparse decomposition problem can be expressed as follows [1]

$$\min_{x, e} \frac{1}{2}\| y - \Phi x - e \|_2^2 + a\|x\|_1 + b\sum_{k=1}^{K}\|e_k\|_2 \qquad (2)$$

where $\|x\|_1 = \sum_{l=1}^{L}|x_l|$ is the $\ell_1$ norm of the vector $x = (x_1, \ldots, x_L)^T$ and $\|e_k\|_2$ is the Euclidean norm of the vector $e_k$. Note that (2) considers two distinct sparsity constraints for the coefficient vector $x$ and the anomaly signal $e$. This formulation reflects the fact that a nominal continuous signal can be well approximated by a linear combination of a few atoms of the dictionary (sparsity of $x$ using the $\ell_1$ norm) and that anomalies are rare and affect few parameters at the same time (sparsity of $e$ using the $\ell_{21}$ norm). This paper generalizes the AD strategy defined by (2), referred to as ADDICT [1], by the integration of external information via appropriate weights. The proposed model for AD in spacecraft telemetry is similar to the well-known group-lasso model [11] that assigns a weight to each group. In this work we consider K


groups associated with the K telemetry parameters and propose to solve the following problem

$$\arg\min_{x, e} \frac{1}{2}\|y - \Phi x - e\|_2^2 + a\|x\|_1 + b\sum_{k=1}^{K} w_k\|e_k\|_2 \qquad (3)$$

where $w_k > 0$ is the kth weight, associated with the kth parameter. The problem (3) can be solved with the alternating direction method of multipliers (ADMM) [12] by adding an auxiliary variable $z$ and the constraint $z = x$, leading to

$$\arg\min_{x, e, z} C(x, e, z) = \frac{1}{2}\|y - \Phi x - e\|_2^2 + a\|z\|_1 + b\sum_{k=1}^{K} w_k\|e_k\|_2 \quad \text{s.t. } z = x \qquad (4)$$

where "s.t." means "subject to". Note that, contrary to Problem (3), the first and second terms of (4) are decoupled, which allows an easier estimation of the vector $x$. The ADMM algorithm associated with (4) minimizes the following augmented Lagrangian

$$L_A(x, z, e, m, \mu) = C(x, e, z) + m^T(z - x) + \frac{\mu}{2}\|z - x\|_2^2 \qquad (5)$$

where $m$ is a vector containing the Lagrange multipliers and $\mu$ is a regularization parameter controlling the level of deviation between $z$ and $x$. The ADMM algorithm iteratively updates $x$, $z$, $e$ and $m$ as follows.

Update of x: $x$ is classically updated as follows

$$x^{k+1} = \arg\min_{x} \frac{1}{2}\|y - \Phi x - e^k\|_2^2 + (m^k)^T(z^k - x) + \frac{\mu^k}{2}\|z^k - x\|_2^2. \qquad (6)$$

Simple algebra leads to

$$x^{k+1} = (\Phi^T\Phi + \mu^k I)^{-1}\left[\Phi^T(y - e^k) + m^k + \mu^k z^k\right]. \qquad (7)$$

Update of z: The update of $z$ is defined as

$$z^{k+1} = \arg\min_{z} a\|z\|_1 + (m^k)^T(z - x^{k+1}) + \frac{\mu^k}{2}\|z - x^{k+1}\|_2^2. \qquad (8)$$

The solution of (8) is given by the element-wise soft thresholding operator $z^{k+1} = S_{\gamma^k}\!\left(x^{k+1} - \frac{1}{\mu^k}m^k\right)$ with $\gamma^k = \frac{a}{\mu^k}$, where the thresholding operator $S_\gamma(u)$ is defined by

$$[S_\gamma(u)](n) = \begin{cases} u(n) - \gamma & \text{if } u(n) > \gamma \\ 0 & \text{if } |u(n)| \leq \gamma \\ u(n) + \gamma & \text{if } u(n) < -\gamma \end{cases} \qquad (9)$$

where $u(n)$ is the nth component of $u$.

Updates of m and μ: The updates of $m$ and $\mu$ are defined as $m^{k+1} = m^k + \mu^k(z^{k+1} - x^{k+1})$ and $\mu^{k+1} = \rho\mu^k$ with $\rho > 1$.

Update of e: Due to the reweighted scheme investigated in this paper, the update of the anomaly vector $e$ changes from the non-weighted scheme studied in [1], leading to

$$\hat{e} = \arg\min_{e} \frac{1}{2}\|y - \Phi x - e\|_2^2 + b\sum_{k=1}^{K} w_k\|e_k\|_2. \qquad (10)$$

The problem (10) can be divided into the following K independent problems

$$\hat{e}_k = \arg\min_{e_k} \frac{1}{2}\|h_k - e_k\|_2^2 + b\, w_k\|e_k\|_2, \qquad (11)$$

where $h = y - \Phi x$ and $h_k$ is the kth block of $h$, associated with the kth telemetry time series, for $k = 1, \ldots, K$. The solution of (11) is given by the group-shrinkage operator $\hat{e} = T_b(h)$ defined by

$$[T_b(h)]_k = \begin{cases} \dfrac{\|h_k\|_2 - b\, w_k}{\|h_k\|_2}\, h_k & \text{if } \|h_k\|_2 > b\, w_k \\ 0 & \text{otherwise} \end{cases} \qquad (12)$$

Note that the estimation of the anomaly signal $e$ directly depends on the residual $h$ resulting from the sparse decomposition of the test signal in the dictionary of normal patterns. Moreover, a value of the weight higher than 1 tends to reduce AD whereas a value lower than 1 promotes AD (for $w_k \geq 1$, the condition $\|h_k\|_2 > b\, w_k$ is more difficult to satisfy, which leads to $\hat{e}_k = 0$ more frequently).
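The shrinkage operators (9) and (12) and the resulting ADMM iterations (6)–(12) can be sketched as follows; the initialization, iteration count, parameter values and the exact ordering of the updates are illustrative assumptions of this sketch.

```python
import numpy as np

def soft_threshold(u, gamma):                     # Eq. (9)
    return np.sign(u) * np.maximum(np.abs(u) - gamma, 0.0)

def group_shrinkage(h, b, w_k, K):                # Eq. (12), h split into K equal blocks
    e = np.zeros_like(h)
    for k, hk in enumerate(np.split(h, K)):       # requires len(h) divisible by K
        nk = np.linalg.norm(hk)
        if nk > b * w_k[k]:
            e[k * hk.size:(k + 1) * hk.size] = (nk - b * w_k[k]) / nk * hk
    return e

def w_addict(y, Phi, a, b, w_k, K, n_iter=100, mu=1.0, rho=1.05):
    """One possible ADMM implementation of the weighted problem (3)."""
    N, L = Phi.shape
    x, z, e, m = np.zeros(L), np.zeros(L), np.zeros(N), np.zeros(L)
    for _ in range(n_iter):
        x = np.linalg.solve(Phi.T @ Phi + mu * np.eye(L),
                            Phi.T @ (y - e) + m + mu * z)        # Eq. (7)
        z = soft_threshold(x - m / mu, a / mu)                   # Eqs. (8)-(9)
        e = group_shrinkage(y - Phi @ x, b, w_k, K)              # Eqs. (10)-(12)
        m = m + mu * (z - x)
        mu = rho * mu
    return x, e
```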

2.3 Anomaly Detection Strategy

The proposed AD strategy, referred to as W-ADDICT (for Weighted Anomaly Detection using a sparse decomposition on a DICTionary), is defined by comparing the estimated anomaly signal, denoted as $\hat{e}$, to an appropriate threshold, i.e.,

Anomaly detected if $\|\hat{e}\|_2^2 > S_{\mathrm{PFA}}$

(13)

where $S_{\mathrm{PFA}}$ is the threshold depending on the desired compromise between false alarm and good detection rates. This threshold can be determined using receiver operating characteristics (ROCs) if a ground truth is available, or using the user's experience. ROCs express the probability of detection PD as a function of the probability of false alarm PFA. In this work, these probabilities were computed using ground-truth time series and the threshold $S_{\mathrm{PFA}}$ was determined from the pair (PFA, PD) located closest to the ideal point (0, 1).
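As a small illustration of this threshold selection rule, the sketch below scans candidate thresholds and keeps the one whose operating point (PFA, PD) is closest to the ideal point (0, 1); the use of the anomaly scores as detection statistics follows the text, while the variable names and candidate grid are assumptions.

```python
import numpy as np

def threshold_from_roc(scores, labels, thresholds):
    """Pick the threshold whose (PFA, PD) point is closest to the ideal point (0, 1).

    scores     : anomaly scores ||e_hat||_2^2 of the ground-truth signals
    labels     : True for anomalous signals, False for nominal ones
    thresholds : candidate threshold values
    """
    scores = np.asarray(scores)
    labels = np.asarray(labels, dtype=bool)
    best_t, best_d = None, np.inf
    for t in thresholds:
        detected = scores > t
        pd = detected[labels].mean()       # probability of detection
        pfa = detected[~labels].mean()     # probability of false alarm
        d = np.hypot(pfa, 1.0 - pd)        # distance to the ideal point (0, 1)
        if d < best_d:
            best_d, best_t = d, t
    return best_t
```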

2.4 Weight Updating Using a Correlation Coefficient

Weighting the norms $\|e_k\|_2$ allows external information from expert knowledge to be considered. This paper investigates the usefulness of the correlation coefficient between the test signal $y$ and its sparse approximation in the dictionary, assuming that the smaller the value of the correlation coefficient, the higher the probability that $y$ is affected by an anomaly. As explained before, the weights interfere in the detection process in such a way that a small value of the weight promotes the detection of an anomaly. Conversely, a large value of the weight tends to reduce the number of detected anomalies. In order to respect this idea, we propose to build the weights with the following function

$$f_\alpha : [-1, 1] \to \mathbb{R}^+, \qquad c_k \mapsto w_k = \frac{1}{\big((1+\alpha) - c_k\big)^2}$$

(14)

which is an increasing function of ck where ck is the correlation coefficient between the test signal and its reconstruction associated with the kth parameter and α is an appropriate hyperparameter fixed by the user. The choice of this function is motivated by its simplicity since it only depends on one parameter α. Note that ck = α corresponds to wk = 1, i.e., to the unweighted model. For ck < α, the kth weight satisfies wk < 1, which favours AD. Conversely, when ck > α, the kth weight is such that wk > 1, which limits AD. Our experiments have shown that α can be tuned in the interval ] − 1, +1[ without loss of generality.
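A one-line implementation of this weight construction, assuming the reconstruction of (14) given above, could read as follows; the clipping of the correlation coefficient to [-1, 1] is only added for numerical safety.

```python
import numpy as np

def weight_from_correlation(c_k, alpha=0.5):
    """Weight w_k built from the correlation coefficient c_k, cf. (14).

    alpha is the user-fixed hyperparameter; c_k = alpha gives w_k = 1 (unweighted model).
    """
    c_k = np.clip(c_k, -1.0, 1.0)
    return 1.0 / ((1.0 + alpha) - c_k) ** 2
```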

3 Experimental Results

The AD method (2) and its weighted version (3) (with weights built using the correlation coefficient as proposed in Sect. 2.4) have been evaluated on a representative anomaly dataset composed of $K = 7$ telemetry parameters with an available ground truth. The hyperparameters $a$ and $b$ were tuned by cross-validation using the ground truth and ROC curves. In all the experiments, the dictionary was composed of $L = 2000$ atoms learnt using the K-SVD algorithm [13] with two months of nominal telemetry (without anomalies), which represents approximately 5000 mixed training signals obtained after applying the preprocessing described in Sect. 2.1 with the parameter $w = 50$ (i.e., the signal length is $N = 350$). The anomaly database was built using 18 days of telemetry, i.e., was composed of 1000 signals, 82 of them being affected by anomalies. Note that the 82 anomalies were divided into 5 anomaly periods with various durations, as illustrated in the three examples of anomalies displayed in Fig. 1. Note also that a specific attention was devoted to the construction of a heterogeneous database (containing discrete and continuous time series, with anomalies of different amplitudes and durations, …). Finally, it is important to note that the majority of anomalies are actual anomalies that have been observed in operated satellites.

Fig. 3 Anomaly scores obtained with the unweighted algorithm (2) (left) and with the weighted one (3) (right), with ground truth marked by red background and the detection threshold indicated by the red horizontal lines

Figure 3 displays the anomaly scores $\|\hat{e}\|_2^2$ returned by the unweighted AD method (2) (left) and the weighted one (3) (right) for each of the 1000 test signals of the anomaly dataset. Note that the actual anomalies are marked by red backgrounds whereas the red horizontal lines indicate the detection thresholds (determined using cross-validation as described in Sect. 2.3). As can be observed, the score exceeds the threshold almost exclusively in the presence of anomalies, meaning that anomalies are well detected and that few false alarms are returned. Note that the contextual anomaly corresponding to the last anomaly period of the ground truth is also well detected. These first results show the interest of using sparse decompositions for anomaly detection in spacecraft telemetry. Furthermore, they show that the weighted model allows the number of false alarms to be reduced. For example, the test signals #237, #535 and #921 indicated by blue arrows are false alarms for the non-weighted scheme whereas they are not detected by the weighted algorithm. Quantitative results in terms of probability of detection, probability of false alarm and area under the curve (AUC) are reported in Table 1. These results confirm that the weighted model reduces the number of false alarms for a fixed probability of detection (for PD = 89%, the PFA decreases from PFA = 5.2% for the unweighted model to PFA = 2.2% for the weighted one). Note that false alarm reduction is very interesting for spacecraft monitoring applications because each detection of the algorithm is examined by experts, which is potentially a very time-consuming task. Note also that the detection threshold might be optimized using other criteria, e.g., allowing a smaller value of PD to decrease the number of false alarms.


Table 1 Probability of detection, probability of false alarm and area under the curve (AUC) for the unweighted model (2) and the proposed weighted model (3)

Method                  Threshold   PD (%)   PFA (%)   AUC
Unweighted model (2)    4.9         89       5.23      0.93
Weighted model (3)      4.3         89       2.18      0.96

4 Conclusion

This paper showed the interest of sparse decompositions for anomaly detection in multivariate housekeeping telemetry time series. In particular, external information provided by the user can be incorporated in the anomaly detection via appropriate weights. Our first results indicated that weights built from the correlation coefficient between the test signal and its sparse decomposition in a dictionary of normal patterns allowed a significant reduction of the false alarm probability. For future work, a scaling-up is needed to evaluate the anomaly detection method in an operating context including hundreds to thousands of telemetry time series. Concerning the weighted model, it would be interesting to build weights using other information than the correlation coefficient. Finally, we think that the model could be updated sequentially using expert feedback in order to improve detection, e.g., in the case of anomalies evolving with time. This opens the way for many works related to online or sequential anomaly detection, which could be useful for spacecraft health monitoring.

Acknowledgements The authors would like to thank Pierre-Baptiste Lambert from CNES and Clémentine Barreyre from Airbus Defence and Space for fruitful discussions about anomaly detection in spacecraft telemetry. This work was supported by CNES and Airbus Defence and Space.

References
1. Pilastre B et al (2019) Anomaly detection in mixed telemetry data using a sparse representation and dictionary learning. Signal Process, to appear
2. Fuertes S et al (2016) Improving spacecraft health monitoring with automatic anomaly detection techniques. In: Proceedings of the international conference on space operations (SpaceOps 2016), Daejeon, South Korea
3. Martínez-Heras J-A, Donati A, Kirksch M, Schmidt F (2012) New telemetry monitoring paradigm with novelty detection. In: Proceedings of the international conference on space operations (SpaceOps 2012), Stockholm, Sweden
4. O'Meara C et al (2016) ATHMoS: automated telemetry health monitoring system at GSOC using outlier detection and supervised machine learning. In: Proceedings of the international conference on space operations (SpaceOps 2016), Daejeon, South Korea
5. O'Meara C et al (2018) Applications of deep learning neural networks to satellite telemetry monitoring. In: Proceedings of the international conference on space operations (SpaceOps 2018), Marseille, France


6. Hundman K et al (2018) Detecting spacecraft anomalies using LSTMs and nonparametric dynamic thresholding. In: Proceedings of the international conference on knowledge discovery and data mining (KDD'18), London, United Kingdom, pp 387–395
7. Takeishi N et al (2014) Anomaly detection from multivariate time-series with sparse representation. In: Proceedings of the IEEE international conference on systems, man and cybernetics, San Diego, CA, USA
8. Yairi T et al (2017) A data-driven health monitoring method for satellite housekeeping data based on probabilistic clustering and dimensionality reduction. IEEE Trans Aerosp Electron Syst 53(3):1384–1401
9. Barreyre C (2018) Statistiques en grande dimension pour la détection d'anomalies dans les données fonctionnelles issues des satellites. Ph.D. dissertation, Université de Toulouse, Toulouse, France
10. Adler A et al (2015) Sparse coding with anomaly detection. J Signal Process Syst 79(2):179–188
11. Yuan M et al (2006) Model selection and estimation in regression with grouped variables. J Roy Statist Soc Series B (Methodological) 68(1):49–67
12. Boyd S et al (2010) Distributed optimization and statistical learning via the alternating direction method of multipliers. Found Trends Mach Learn 3(1):1–222
13. Aharon M et al (2006) K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans Signal Process 54(11):4311–4322

When Anomalies Meet Aero-elasticity: An Aerospace Industry Illustration of Fault Detection Challenges P. Goupil, J.-Y. Tourneret, S. Urbano, and E. Chaumette

Abstract This paper aims at describing some industrial aerospace applications requiring anomaly detection. After introducing the industrial context and needs, the paper investigates two examples of oscillation/vibration detection associated with aero-elastic phenomena: (i) the first example related to aircraft structural design optimization is characterized by very stringent constraints on detection time; (ii) the second example linked to equipment condition monitoring has more relaxed constraints in terms of detection requirements. Keywords Aerospace · Civil aircraft · Flight control system · Oscillation detection · Vibration diagnosis

P. Goupil (B) · S. Urbano
AIRBUS, Aircraft Control, 316 route de Bayonne, Cedex 09, 31060 Toulouse, France
e-mail: [email protected]
J.-Y. Tourneret
INP/ENSEEIHT-IRIT-TéSA, University of Toulouse, Toulouse, France
E. Chaumette
Isae-Supaero, University of Toulouse, Toulouse, France
© Springer Nature Singapore Pte Ltd. 2021
L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_16

1 Introduction

The flight control system of an aircraft is an avionic system used to control its attitude, trajectory and speed. In its electrical version, termed electrical flight control system (EFCS, a.k.a. fly-by-wire), it consists of all the components between the pilot inputs (in the cockpit) and the movable surfaces called control surfaces, including these two elements. In more detail, it comprises wires, digital buses, probes and sensors, actuators, power sources (e.g. hydraulic) and flight control computers managing the whole system. EFCS was first developed in the 1960s by Aerospatiale and installed on Concorde (1969) as an analogue system. Much later, digital EFCS was introduced first on the Airbus A310 (1982, spoilers, slats and flaps only) and was then generalized to all control surfaces (3-axis) on the A320 (1987). This is now generalized to all Airbus aircraft (e.g. A350, A380). The main advantages of the EFCS include sophisticated control of the aircraft, flight envelope protection functions, pilot workload alleviation and weight saving [1, 2]. The EFCS is designed to meet very stringent requirements in terms of dependability and availability. The detection of all related anomalies (e.g. faults, failures or any abnormal behaviour) is therefore a very important point to be considered in the aircraft design. In this context, this paper details two particular scenarios of anomaly detection, both related to vibrations and their specific constraints: the oscillatory failure cases (OFC) and the limit cycle oscillation (LCO)-induced vibrations, both affecting the aircraft during flight due to coupling with aero-elastic phenomena. However, these scenarios have different origins and consequences as well as diverse requirements in terms of fault detection. OFC leads to very stringent constraints on detection time and is related to aircraft structural design optimization. Conversely, LCO has more relaxed constraints in terms of detection requirements and is linked to equipment condition monitoring. Both scenarios must be compliant with severe robustness rules, and dedicated solutions must be compatible with real-time limitations.

The paper is organized as follows: Sect. 2 is devoted to a review of the main real-time constraints, with the goal of making the reader aware of the strong industrial limitations that must be taken into account from the early phases of the solution development. Section 3 details the OFC context and describes the solutions currently used and certified on in-service aircraft as well as current research studies. Section 4 is dedicated to LCO-induced vibrations and presents related ongoing research activities. Finally, Sect. 5 presents some conclusions and perspectives.

2 Industrial Context and Constraints

EFCS development is an arduous process because of its criticality, in the sense that any severe malfunction could potentially affect the flight or prevent its correct achievement in the optimum configuration (e.g. aircraft performance, fuel consumption, …). Moreover, most of the design requirements come directly from the aviation authorities (e.g. the European Aviation Safety Agency), and taking these regulations into account leads to developing the flight control computer software according to Design Assurance Level A, which is the most stringent requirement for real-time on-board software. In this context, getting the aircraft certified means demonstrating that it complies with the regulations, mainly thanks to a large amount of verification and validation activities which can be summarized as follows: (i) Verification: are we developing the system right? (ii) Validation: are we developing the right system? In practice, and especially for complex systems like the EFCS, these activities involve a large number of tests of different natures (simulators, system bench, flight tests…) with a total duration longer than the development phase. Non-standard tests may be required, such as checking the algorithm's long-term behaviour in steady phase (to verify that there is no divergence). Not only is the average performance required but also the performance in extreme and unusual conditions, at the operational boundaries.


Focusing more on anomaly detection, the general industrial state of practice [2] provides high levels of hardware redundancy (sensors, computers, wires, etc.) in order to perform consistency tests, cross-checks and built-in tests of various sophistications. The flight control computers, hosting the EFCS software and especially the anomaly detection algorithms, are multirate time-triggered, which means that they are able to manage different sampling frequencies. It also implies potentially mixing asynchronous signals with different sampling rates. Due to the life cycle of a civil aircraft, the flight control computers are equipped with CPUs with limited computing power (e.g. the A320 family was designed in the 1980s and is still flying). This means that it is tricky to embed complex and non-deterministic algorithms such as optimization routines. The software is generally specified thanks to a graphical tool describing accurately all the functions to be ensured. A limited set of graphical symbols (e.g. adder, filter, integrator, lookup tables, very much in the style of Simulink blocks) is used to describe each part of the algorithm. The limitations imposed by such a process guarantee on the one hand that the code has a complexity level which allows its implementation on the actual computer, but on the other hand they make the designer's task more complex. For instance, it is not possible to easily implement vector or matrix computations (no dedicated graphical symbols), and thus the corresponding calculation must be broken down into a series of scalar multiplications and additions. Note that there is no state-space representation and no direct way to implement conditional loops. It must also be noticed that the mastering of any innovative algorithm, which is more complex than the current state of practice, is crucial for applications by non-experts. In particular, the high-level tuning of hyper-parameters must be clearly explained: how to tune the parameters on a different aircraft model or on a different sensor?

3 Oscillatory Failure Cases (OFC)

Oscillatory failure case (OFC) is the term given to a class of failures occurring in the actuator servo loop that cause unwanted oscillations of the control surface [3]. An OFC can result from the faulty behaviour of an electronic component or an actuator mechanical failure. It takes the form of a sinusoidal signal propagating through the control loop, leading to control surface vibration. The current Airbus servo loop principle is depicted in Fig. 1. The flight control computer has a dual channel structure where the so-called COM (command) channel is dedicated mainly to the flight control law computation and to the control surface servo loop. The so-called MON (monitoring) channel is primarily dedicated to the monitoring of all EFCS components. The OFC effects can be modelled as the injection of a periodic signal at two specific points of the control loop: the analogue output and the analogue input (see Fig. 1). As the actuator acts as a low-pass filter, the OFC phenomenon is generally limited to approximately [0–12] Hz. The main effect of an OFC located within the actuator bandwidth is to generate loads on the structure. Coupled with the aero-elastic behaviour of the aircraft, an OFC may lead to unacceptably high loads.


Fig. 1 Airbus control surface servo loop

The OFC amplitude must be contained by system design within an envelope function of the frequency. The capability to detect these failures is very important because it has an impact on the structural design of the aircraft. Moreover, the load envelope constraints must be respected. More precisely, if an OFC of given amplitude cannot be detected and passivated, this amplitude must be considered for load computations. The result of this computation can lead to reinforcing the structure in case the loads are exceeded. In order to avoid reinforcing the structure, and consequently to save weight, low-amplitude OFC must be detectable early. Thus, there is a link between anomaly detection and aircraft weight saving which is not obvious at first sight. Concretely, for structure-related system objectives, it is necessary to detect an OFC beyond a given amplitude in a given number of periods, whatever the unknown OFC frequency. The latest OFC detection strategy used on board Airbus aircraft is based on analytical redundancy. It is a classical approach in the field of fault detection, where the actual actuator rod position, measured by a dedicated sensor, is compared to an estimated position produced by an actuator model fed by the command signal. This residual generation is followed by a decision-making step to confirm the oscillation, typically by counting successive and alternate crossings of a given threshold. This kind of technique has been chosen because, on the one hand, actuator models were already widely used and mastered, and on the other hand, the OFC signal appears "as is" on the residual (no transformation), which eases the threshold tuning. The frequency range to monitor is broad and, as the computing power is limited, it is not possible to use classical spectral analysis. OFC detection in the temporal domain alleviates the computational burden and automatically adjusts to all frequencies. The estimated position $\hat{p}$ is the result of a discrete integration of the estimated velocity $\dot{\hat{p}}$ expressed by the following equation:

$$\dot{\hat{p}}(t) = V_0(t)\,\sqrt{\dfrac{P(t) - \dfrac{F_{aero}(t) + F_{damp}(t)}{S}}{P_{ref}}} \qquad (1)$$


where $P$ is the hydraulic pressure delivered at the input of the actuator, $F_{aero}$ represents the aerodynamic forces applied on the control surface, $F_{damp} = k_d\,\dot{\hat{p}}^2$ is the servo control load of the adjacent actuator in damping mode (in case of a dual active/passive scheme, i.e. one active actuator and one passive actuator in stand-by mode in case of a reconfiguration), $S$ is the surface area of the actuator piston, $P_{ref}$ is the differential pressure corresponding to the maximum rod speed and $V_0(t)$ is the maximum speed of one actuator without any load. In a first simplified version, the parameter vector $\theta = (P, F_{aero}, k_d)^T$ is assumed to be constant (fixed to its most probable value) to reduce the computational complexity. In a second version, to achieve better performance, i.e. an earlier detection of lower OFC amplitudes, $\theta$ can be estimated by means of an extended Kalman filter to reduce modelling errors [4]. Both versions have been extensively validated, certified and are flying on in-service aircraft. However, these model-based approaches have the drawback of requiring a model for each type of actuator (e.g. three kinds of actuators in the A380), and the corresponding model must be tuned on each aircraft type (e.g. A380 and A350). This is why more generic techniques have been investigated, such as data-driven approaches, which have the advantage of being less dependent on the system. The main idea is to monitor the similarity between the input (command signal) and the output (actuator rod or control surface position) of the system under study (the control surface servo loop, Fig. 1): in a faulty situation there is a loss of similarity between the input and output signals, compared to the fault-free case. One interesting idea is a modified version of the so-called complexity-invariant distance (CID) proposed in [5]. It consists of computing a similarity index by weighting the Euclidean distance between input and output by a correcting factor representing the curvilinear distance of these signals [6]:

$$CID(x, y) = ED(x, y)\, CD(x, y) \qquad (2)$$

where $x$ is the command, $y$ the actuator/control surface position, $ED$ the Euclidean distance and $CD$ the curvilinear distance, defined as

$$ED(x, y) = \sqrt{\sum_{i=1}^{N}(x_i - y_{i-\tau})^2}, \qquad CD(x, y) = \frac{CD(x)}{CD(y)}, \qquad CD(x) = \sum_{i=1}^{N-1}\sqrt{(x_i - x_{i+1})^2 + T_S^2} \qquad (3)$$

where $T_S$ is the sampling time and $\tau$ is the drag error, i.e., the nominal delay between the command and the position. The index $CID(x, y)$ is compared to a given threshold and an OFC is detected when it stays greater than the threshold during a given time. Figure 2 shows an example displaying the command $x$ and the position $y$ (top figure) and an OFC occurring at t = 15 s. At the bottom of the figure, it is clear that ED is not high enough to detect the fault (same value before and after the fault occurrence), especially due to an artificial bias between $x$ and $y$. However, the


Fig. 2 Example of CID index variations for OFC detection

correcting factor CD is more sensitive to the fault and the global CID index allows us to discriminate the fault-free and faulty situations.
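To make this comparison step more concrete, a minimal Python sketch of the CID index of (2)–(3) is given below. It is only an illustration of the formulas as reconstructed above (in particular the arc-length form of the curvilinear distance), not the certified on-board implementation; the drag error tau is assumed to be expressed in samples.

```python
import numpy as np

def cid_index(x, y, tau, Ts):
    """CID similarity index between command x and measured position y, cf. (2)-(3).

    x, y : 1-D arrays of the same length N (command and actuator/surface position)
    tau  : drag error, i.e. nominal delay (in samples) between command and position
    Ts   : sampling time
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Euclidean distance with the nominal delay compensated: sum over (x_i - y_{i-tau})^2
    ed = np.sqrt(np.sum((x[tau:] - y[:len(y) - tau]) ** 2))
    # curvilinear (arc-length) distance of a signal
    def cd(s):
        return np.sum(np.sqrt(np.diff(s) ** 2 + Ts ** 2))
    return ed * cd(x) / cd(y)

# An OFC would be flagged when the index stays above a tuned threshold
# for a given number of consecutive evaluation windows.
```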

4 Limit Cycle Oscillations (LCO)

The LCO phenomenon consists of an unwanted sustained oscillation of a control surface due to the combined effect of aero-elastic behaviour and an increased level of mechanical free-play in the elements that connect the control surface to the aerodynamic surface. Under some circumstances, it can lead to slight airframe vibrations, which can impact passenger and crew comfort. The current state of practice for LCO-vibration detection relies mainly on pilot sensitivity and on long-term scheduled maintenance. Once the vibrations have been detected by the crew, a maintenance task is triggered, but the troubleshooting is tricky. In the works presented in this paper, two goals are pursued: (i) to estimate the duration and amplitude of the vibrations in order to compute a condition monitoring index to be communicated to the operating airline, and (ii) to ease the vibration troubleshooting in order to facilitate the maintenance tasks (i.e. to find directly the location of the vibration). The challenge is to build a condition monitoring index to be communicated to the airline to trigger a specific and accurate maintenance task and avoid any unscheduled work that could lead to flight delays or cancellations. Thanks to more accurate vibration troubleshooting, the maintenance task will be shortened and will not impact the airline in terms of aircraft immobilization. The control aspect is not considered here, i.e. the possibility to cancel the vibrations by injecting a dedicated command is not investigated. The main idea investigated is to monitor individually each actuator using the already existing information (Fig. 1) in order to detect and diagnose the vibrations and


to de facto isolate the faulty component. Using the same analytical redundancy technique as for OFC is not possible here, as there is no simple model of the system under study (a control surface linked to an aerodynamic surface leading to the airframe vibrations). Using a distance-based solution is also impossible, as the vibration amplitudes to detect are very small, far below the OFC objectives and the performance observed in the OFC context. Using the same notation as for the OFC, a simple model of the actuator dynamics is

$$y(n) = \lambda x(n - \tau)$$

(4)

where n is the discrete time (n = 0, . . . , N −1), λ is an attenuation or amplification factor (e.g. due to the aerodynamic forces applied on the control surface) and τ is the aforementioned drag error. As the problem to solve can be formalized as the detection of deterministic signals with unknown parameters, assuming the simple model (4), the following binary hypothesis testing problem can be investigated:

$$H_0: y(n) = \lambda x(n - \tau) + w(n)$$
$$H_1: y(n) = \lambda x(n - \tau) + A\cos(2\pi f n + \phi) + w(n)$$

(5)

with $w(n) \sim \mathcal{N}(0, \sigma^2)$. Under the null hypothesis $H_0$, the control surface position $y(n)$ is simply a delayed and amplified/attenuated version of the command $x(n)$ corrupted by some additive Gaussian noise $w(n)$. Under the alternative hypothesis $H_1$, the control surface position is polluted by the vibration, modelled as a sinusoid of a given amplitude, frequency and phase. In the binary hypothesis testing problem (5), the optimal decision rule is based on the probability distribution of the observations under both hypotheses and the a priori probability of each hypothesis (Bayes detector). In the absence of any a priori probability for the two hypotheses, which is the case here, the most common detector is the likelihood ratio test (LRT) resulting from the Neyman-Pearson test [7]. Unfortunately, optimal statistical tests such as the LRT cannot always be implemented since there are often some unknown parameters in the observation model, leading to the so-called composite hypothesis testing problem [8]. A very common approach in this situation is to replace the unknown parameters in the LRT by their maximum likelihood estimators (MLE), following the ideas of the generalized likelihood ratio test (GLRT) [8]. The GLRT requires estimating the unknown parameters under both hypotheses: $\{\lambda, \tau\}$ under $H_0$ and $\{\lambda, \tau, A, f, \phi\}$ under $H_1$, using the maximum likelihood principle. All the details regarding these estimators are available in [9], where the test and its theoretical performance are derived and validated. Once the GLRT has been established, the detection rule consists of comparing the GLRT statistic to a threshold that can be derived from the desired false alarm rate [9] (resulting from strong industrial constraints). The GLRT requires estimating the parameters of the model (5), i.e., the LCO amplitude $\hat{A}$ and duration $\hat{T}$. Moreover, the energy generated by the vibrations can be estimated as


$$\hat{E}_{LCO} = \int_{\hat{T}} \hat{A}^2\, dt$$

(6)

For diagnosis, it is reasonable to assume that only a fraction of the real energy $E_{LCO}$ is dissipated through the wear process that increases the free-play level, so a simple index $I$ (for a given flight $F$) can be

$$I(F) = k_w \int_{\hat{T}_F} \hat{A}_F^2\, dt$$

(7)

where $k_w$ is a wear severity coefficient. This definition can be recursively extended to $n$ flights:

$$I(F_n) = \lambda_f\, I(F_{n-1}) + k_w \int_{\hat{T}_{F_n}} \hat{A}_{F_n}^2\, dt \qquad (8)$$

where $\lambda_f$ is a forgetting factor adjusted to reduce the undesired effects of estimation errors. Figure 3 shows an example of the GLRT statistics on flight test data for the elevator control surface where LCO signals have been injected artificially. On the top, the command signal $x$ in blue is called "com", and the control surface position $y$ in red is called "RVDT". A first vibration of a very small amplitude A1 (not given for confidentiality reasons) is injected at t = T1 for 30 s, and a second vibration of amplitude A2 = 2·A1 is injected at t = T2 for 20 s. A1 and A2 are so small that they are not visible. The command and corresponding control surface positions are very close as the drag error is small. The two graphs at the bottom of Fig. 3 show the estimated LCO vibration duration and the corresponding estimated amplitudes. It can be observed that the estimated amplitudes are very close to the ground truth displayed in blue. Figure 3 also shows the accumulated duration (middle curve). The global accuracy is correct even if the first duration is slightly underestimated and the second one is marginally overestimated.

Fig. 3 Example of the GLRT-based algorithm on real flight data with vibrations artificially injected (first injection at t = T1 = 500 s with amplitude A1 for 30 s, second injection at t = T2 = 1000 s with amplitude A2 = 2·A1 for 20 s)
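As a rough illustration of how the per-flight quantities feeding the index of (8) could be computed, the sketch below estimates the vibration amplitude from the peak of the spectrum of the residual and updates the recursive condition-monitoring index. The periodogram-based amplitude estimate is only a crude stand-in for the maximum-likelihood estimators of [9], and the values of k_w and lambda_f are hypothetical.

```python
import numpy as np

def estimate_vibration_amplitude(x, y, lam, tau):
    """Crude amplitude estimate of a sinusoidal vibration in the servo-loop residual.

    x, y : command and control-surface position over one analysis window
    lam  : gain lambda of the simple model y(n) = lam * x(n - tau)
    tau  : drag error in samples
    """
    r = y[tau:] - lam * x[:len(x) - tau]             # residual under H0
    n = len(r)
    spec = np.abs(np.fft.rfft(r - r.mean())) / n     # one-sided spectrum
    return 2.0 * spec[1:].max()                      # a tone of amplitude A peaks near A/2

def update_health_index(prev_index, amp_estimates, dt, k_w=1.0, lam_f=0.95):
    """Recursive condition-monitoring index over flights, cf. (8)."""
    energy = np.sum(np.square(amp_estimates)) * dt   # discrete version of the integral of A^2 dt
    return lam_f * prev_index + k_w * energy
```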

5 Conclusions

This paper deals with two aerospace industrial challenges related to vibration detection, with the goal of exemplifying concrete applications and their engineering constraints. In the first challenge, the vibrations must be detected very quickly in real time to trigger a hardware reconfiguration (i.e. using the adjacent stand-by actuator). The second challenge is dedicated to detecting and estimating very small vibrations due to mechanical free-play resulting from a wearing effect. Note that in this second case, the problem is not to detect the vibrations very swiftly but rather to detect incipient faults, before they are felt by the pilots. These two different challenges illustrate the variety of engineering applications. For future work, some appealing avenues are worth mentioning. Several techniques have been considered for OFC over several decades; future work could be dedicated to determining adaptive thresholds and considering different decision strategies. For LCO, it could be helpful to complement the vibration diagnosis by a free-play detection and diagnosis strategy, since mechanical free-play is generally the cause of the vibrations.

References
1. Brière D, Favre C, Traverse P (1995) A family of fault-tolerant systems: electrical flight controls, from A320/330/340 to future military transport aircraft. Microprocessors Microsyst 19(2)
2. Goupil P (2011) AIRBUS state of the art and practices on FDI and FTC in flight control system. Control Eng Pract 19(6):524–539
3. Goupil P (2010) Oscillatory failure case detection in the A380 electrical flight control system by analytical redundancy. Control Eng Pract 18(9):1110–1119
4. Lavigne L, Zolghadri A, Goupil P, Simon P (2011) A model-based technique for early and robust detection of oscillatory failure case in A380 actuators. Int J Control Autom Syst 9(1):42–49


5. Batista GEAPA, Keogh EJ, Tataw OM, De Souza VMA (2014) CID: an efficient complexity-invariant distance for time series. Data Min Knowl Disc 28(3):634–669
6. Urbano S, Chaumette E, Goupil P, Tourneret J-Y (2017) A data-driven approach for actuator servo loop failure detection. In: Proceedings of the 20th IFAC World Congress, IFAC-PapersOnLine, vol 50, no 1, pp 13544–13549
7. Kay SM (1998) Fundamentals of statistical signal processing. Detection theory, vol 2. Prentice Hall
8. Van Trees HL (2004) Detection, estimation, and modulation theory. John Wiley & Sons
9. Urbano S, Chaumette E, Goupil P, Tourneret J-Y (2018) Aircraft vibration detection and diagnosis for predictive maintenance using a GLR test. In: Proceedings of the 10th IFAC Symposium on Fault Detection, Supervision and Safety for Technical Processes (Safeprocess 2018), Warsaw, Poland, Aug 2018, pp 1030–1036

Anomaly Detection with Discrete Minimax Classifier for Imbalanced Dataset or Uncertain Class Proportions Cyprien Gilet and Lionel Fillatre

Abstract Supervised classification is becoming a useful tool in condition monitoring. For example, from a set of measured features (temperature, vibration, etc.), we might be interested in predicting machine failures. Given some costs of class misclassifications and a set of labeled learning samples, this task aims to fit a decision rule by minimizing the empirical risk of misclassifications. However, learning a classifier when the class proportions of the training set are uncertain, and might differ from the unknown state of nature, may increase the misclassification risk when classifying some test samples. This drawback can also occur when dealing with imbalanced datasets, which is common in condition monitoring. To make a decision rule robust with respect to the class proportions, a solution is to learn a decision rule which minimizes the maximum class-conditional risk. Such a decision rule is called a minimax classifier. This paper studies the minimax classifier for classifying discrete or discretized features between several classes. Our algorithm is applied to a real condition monitoring database. Keywords Minimax classifier · Uncertain class proportions · Imbalanced dataset

The authors thank the Provence-Alpes-Côte d'Azur region for its financial support.
C. Gilet (B) · L. Fillatre
University of Côte d'Azur, CNRS, I3S Laboratory, Sophia-Antipolis, France
e-mail: [email protected]
L. Fillatre
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021
L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_17

1 Introduction

Context: This paper studies a new anomaly detection method based on supervised classification. For example, from a set of measured features (temperature, vibration, etc.), we might be interested in predicting potential machine failures. However, very often we have to face three difficulties when dealing with such databases. Firstly, both numerical and categorical features are available in the database, and it might be necessary to preprocess the features in order to optimize predictions. In such cases, an effective approach for performing supervised classification is to discretize the numerical features [1] in order to work only with discrete features. Secondly, the costs of misclassifications are given by experts and have to be considered fixed. These costs aim to penalize differently the classification errors which may bring more or less important consequences. Thirdly, the class proportions are imbalanced. For example, the proportion of components presenting a failure is in general much lower than the proportion of components without failure. This problem leads most classifiers to neglect the classes containing the fewest samples. The risk of misclassification can also increase when the class proportions are uncertain, or can change in time, which is also possible when we have no idea of the state of nature. In this paper we propose a new supervised classification algorithm which aims to address all these difficulties.

Notations and problem statement: The task of supervised classification aims to minimize the empirical risk of misclassifications from a finite set of labeled learning samples $\mathcal{D} = \{(Y_i, X_i),\ i \in \mathcal{I}\}$. Let $K \ge 2$ be the number of classes, $\mathcal{Y} := \{1, \ldots, K\}$ the set of observed labels, $\hat{\mathcal{Y}} = \mathcal{Y}$ the set of predicted labels, $\mathcal{X}$ the space where all the features are defined, and $m$ the number of samples available in the training set. Let us denote $Y_i$ the random variable characterizing the label of the sample $i$, and $X_i = [X_{i1}, \ldots, X_{id}] \in \mathcal{X}$ the random vector containing all the $d$ features describing the sample $i$. Let us define $\Delta = \{\delta : \mathcal{X} \to \hat{\mathcal{Y}}\}$ the set of all the classifiers and let us consider a decision rule $\delta \in \Delta$. Finally, let us consider the matrix of costs $C \in \mathbb{R}_+^{K \times K}$ such that, for all $(k, l) \in \mathcal{Y} \times \hat{\mathcal{Y}}$, $C_{k,l}$ is the cost of assigning the label $\hat{Y}_i = l$ to the sample $i$ while its true class $Y_i$ is $k$. The empirical risk of misclassifications $\hat{r}(\delta) = \frac{1}{m}\sum_{i\in\mathcal{I}} C_{Y_i, \delta(X_i)}$ of $\delta$ can be written as [2]:

$$\hat{r}(\delta) = \sum_{k\in\mathcal{Y}} \hat{\pi}_k\, \hat{R}_k(\delta), \qquad (1)$$

where, for all $k \in \mathcal{Y}$, $\hat{\pi}_k$ is the proportion of samples from the class $k$, and

$$\hat{R}_k(\delta) = \sum_{l\in\hat{\mathcal{Y}}} C_{k,l}\, \hat{P}\big(\delta(X_i) = l \mid Y_i = k\big), \qquad (2)$$

with $\hat{P}(\delta(X_i) = l \mid Y_i = k)$ the probability of predicting the label $l$ when the class is $k$.

Benefits of the minimax classifier: Let us denote $\delta_{\hat\pi}$ the classifier $\delta \in \Delta$ fitted from the training set $\mathcal{D}$ satisfying the class proportions $\hat\pi = [\hat\pi_1, \ldots, \hat\pi_K]$. This classifier is then used to predict the labels $Y_i'$ of new test samples from their associated features $X_i' \in \mathcal{X}$. Let us consider the test set $\mathcal{D}' = \{(Y_i', X_i'),\ i \in \mathcal{I}'\}$ composed of $m'$ test samples satisfying the class proportions $\pi' = [\pi'_1, \ldots, \pi'_K]$. The risk associated with the classifier $\delta_{\hat\pi}$ and the class proportions $\pi'$ is then denoted and defined by $\hat{r}(\pi', \delta_{\hat\pi}) = \sum_{k\in\mathcal{Y}} \pi'_k\, \hat{R}_k(\delta_{\hat\pi})$. Hence, the risk evolves linearly when the class proportions $\pi'$ differ from $\hat\pi$. This fact is illustrated in Fig. 1, right. As described in [2], a solution to make a decision rule robust to differences of the class proportions between $\mathcal{D}$ and $\mathcal{D}'$ is to minimize $\max_{k\in\mathcal{Y}} \hat{R}_k(\delta_{\hat\pi})$ with respect to all possible changes of the class proportions. Let us denote $\mathcal{S}$ the $K$-dimensional probabilistic simplex. This minimax problem can be written as

$$\delta^B_{\bar\pi} = \operatorname*{arg\,min}_{\delta\in\Delta}\ \max_{\pi\in\mathcal{S}}\ \hat{r}(\pi, \delta_\pi). \qquad (3)$$

Fig. 1 Scania trucks database. Left: selection of the number of categories $T$ using the k-means quantizer (training-set and validation-set average risks). Right: expectation of the evolution of the risks $\hat{r}(\pi', \delta)$ for each classifier $\delta \in \{\delta^{LR}_{\hat\pi}, \delta^{SVM}_{\hat\pi}, \delta^{RF}_{\hat\pi}, \delta^{B}_{\hat\pi}, \delta^{B}_{\bar\pi}\}$ as a function of $\pi'_1$. Since $K = 2$, $\hat{r}(\pi', \delta_{\hat\pi})$ can be rewritten as $\hat{r}(\pi', \delta_{\hat\pi}) = \pi'_1 \hat{R}_1(\delta_{\hat\pi}) + \pi'_2 \hat{R}_2(\delta_{\hat\pi}) = \pi'_1\big(\hat{R}_1(\delta_{\hat\pi}) - \hat{R}_2(\delta_{\hat\pi})\big) + \hat{R}_2(\delta_{\hat\pi})$. It is then clear that $\hat{r}(\pi', \delta_{\hat\pi})$ is a linear function of $\pi'_1$. Finally, let us recall that $V(\pi') = \hat{r}(\pi', \delta^B_{\pi'})$ is the estimation of the minimum risk we could expect when considering the class proportions $\pi'$

As explained in [2], the minimax classifier $\delta^B_{\bar\pi}$ tends to equalize as much as possible the class-conditional risks, and thus to consider all classes fairly.

Related works: A common approach to deal with imbalanced datasets is to balance the data by resampling the training set. However, this approach may increase the misclassification risk when classifying test samples which are imbalanced. Another common approach is cost-sensitive learning [3, 4], which aims to optimize the costs of class misclassifications in order to counterbalance the number of occurrences of each class. But, as previously mentioned, in our context we need to consider these costs fixed since they are given by experts. The task of learning the class proportions which maximize the minimum empirical risk was already studied in past years. One of the closest works related to our context is [5]. The authors proposed an interesting fixed-point algorithm based on generalized entropy and strict sense Bayesian loss functions. This approach alternates a resampling step of the learning set with an evaluation step of the class-conditional risks, and it leads to an estimate of the least-favorable priors. However, the fixed-point algorithm needs the minimax rule to be an equalizer rule. We can show that this assumption is in general not satisfied when considering discrete features. Moreover, in practice, re-sampling the data to precisely match some class proportions is not always possible due to the size of the smallest classes in the training set.


Contributions: We consider here discrete or discretized features ($\mathcal{X} \subset \mathbb{N}^d$), $K \ge 2$ classes, and a given matrix of costs $C \in \mathbb{R}_+^{K \times K}$ considered fixed. We show that under these conditions we can calculate the discrete Bayes classifier $\delta^B_{\hat\pi}$ which minimizes the empirical risk over the training set. Then we propose an algorithm to compute the minimax classifier (3). The convergence is guaranteed. We finally apply our algorithm to the real condition monitoring database [6] in order to illustrate the robustness of our minimax classifier.

2 Computation of the Minimax Classifier

In order to manipulate both numerical and categorical features, we discretized the data with the k-means quantizer. In other words, each feature vector $X_i \in \mathbb{R}^d$ composed of all the features is quantized with the index of the centroid closest to it, i.e., $X_i \to Q(X_i) = x_t$, where $Q : \mathbb{R}^d \to \{x_1, \ldots, x_T\}$ denotes the k-means quantizer with $T$ centroids and $t$ is the index of the centroid of the cluster to which $X_i$ belongs. Hence, the set of features becomes $\mathcal{X} = \{x_1, \ldots, x_T\}$. Let us note $\mathcal{T} := \{1, \ldots, T\}$, $\mathcal{I}_k := \{i \in \mathcal{I} : Y_i = k\}$ the set of labeled learning samples belonging to class $k$, and $m_k := \sum_{i\in\mathcal{I}} \mathbf{1}_{\{Y_i = k\}}$ the number of samples in class $k$. Let

$$\hat{p}_{kt} := \frac{1}{m_k}\sum_{i\in\mathcal{I}_k} \mathbf{1}_{\{X_i = x_t\}} \qquad (4)$$

be the empirical probability of observing $x_t$ in class $k$, and let us denote $\mathbf{1}_{\{\cdot\}}$ the indicator function. The following theorem characterizes the empirical discrete Bayes classifier $\delta^B_{\hat\pi}$ and its associated empirical risk of misclassifications.

Theorem 1 The decision rule

$$\delta^B_{\hat\pi} : X_i \mapsto \arg\min_{l\in\hat{\mathcal{Y}}} \sum_{t\in\mathcal{T}}\sum_{k\in\mathcal{Y}} C_{k,l}\, \hat{\pi}_k\, \hat{p}_{kt}\, \mathbf{1}_{\{X_i = x_t\}} \qquad (5)$$

minimizes (1) over $\Delta$, and its associated empirical risk is

$$\hat{r}\big(\hat\pi, \delta^B_{\hat\pi}\big) = \sum_{t\in\mathcal{T}}\sum_{k\in\mathcal{Y}}\sum_{l\in\hat{\mathcal{Y}}} C_{k,l}\, \hat{\pi}_k\, \hat{p}_{kt}\, \mathbf{1}_{\left\{\sum_{k\in\mathcal{Y}} C_{k,l}\hat{\pi}_k\hat{p}_{kt} \,=\, \min_{q\in\hat{\mathcal{Y}}} \sum_{k\in\mathcal{Y}} C_{k,q}\hat{\pi}_k\hat{p}_{kt}\right\}}. \qquad (6)$$

Proof The proof is omitted due to space limitations. More details are given in [7].

Let $V : \pi \mapsto \hat{r}(\pi, \delta^B_\pi)$ be the empirical Bayes risk (6) extended over the simplex $\mathcal{S}$. Thus, when the probabilities $\hat{p}_{kt}$ are fixed, $V(\pi)$ is the minimum risk we can expect when considering the class proportions $\pi$. It is shown in [7] that the minimax classifier is the Bayes classifier $\delta^B_{\bar\pi}$ associated with the class proportions $\bar\pi$ satisfying


$$\bar\pi = \arg\max_{\pi\in\mathcal{S}} V(\pi). \qquad (7)$$

As described in [7], $V$ is concave and, in general, non-differentiable over $\mathcal{S}$. Hence, to compute $\bar\pi$, we propose to use a projected subgradient algorithm based on [8]. This algorithm is fully described in Algorithm 1. Its convergence to $\bar\pi$, as a function of the number $N$ of iterations, is established in [7].

Algorithm 1 Computation of the minimax classifier $\delta^B_{\bar\pi}$

Input: $(Y_i, X_i)_{i\in\mathcal{I}}$, $K$, $N$.
Compute $\pi^{(1)} = \hat\pi$.
Compute the $\hat{p}_{kt}$'s as described in (4).
$\bar\pi \leftarrow \pi^{(1)}$, $\bar{r} \leftarrow \hat{r}(\hat\pi, \delta^B_{\hat\pi})$  [see (6)]
for $n = 1$ to $N$ do
  for $k = 1$ to $K$ do
    $g_k^{(n)} \leftarrow \hat{R}_k\big(\delta^B_{\pi^{(n)}}\big) = \sum_{t\in\mathcal{T}}\sum_{l\in\hat{\mathcal{Y}}} C_{k,l}\, \hat{p}_{kt}\, \mathbf{1}_{\{\sum_{k\in\mathcal{Y}} C_{k,l}\pi_k^{(n)}\hat{p}_{kt} \,=\, \min_{q\in\hat{\mathcal{Y}}} \sum_{k\in\mathcal{Y}} C_{k,q}\pi_k^{(n)}\hat{p}_{kt}\}}$
  end for
  $r^{(n)} = \sum_{k=1}^{K} \pi_k^{(n)} g_k^{(n)}$  [see (1)]
  if $r^{(n)} > \bar{r}$ then $\bar{r} \leftarrow r^{(n)}$, $\bar\pi \leftarrow \pi^{(n)}$ end if
  $\gamma_n \leftarrow 1/n$, $\eta_n \leftarrow \max\{1, \|g^{(n)}\|_2\}$  [see [9] for the projection $P_{\mathcal{S}}$ onto the simplex $\mathcal{S}$]
  $\pi^{(n+1)} \leftarrow P_{\mathcal{S}}\big(\pi^{(n)} + \gamma_n\, g^{(n)}/\eta_n\big)$
end for
Output: $\bar{r}$, $\bar\pi$, and $\delta^B_{\bar\pi} : X_i \mapsto \arg\min_{l\in\hat{\mathcal{Y}}} \sum_{t\in\mathcal{T}}\sum_{k\in\mathcal{Y}} C_{k,l}\, \bar\pi_k\, \hat{p}_{kt}\, \mathbf{1}_{\{X_i = x_t\}}$

3 Experiments

We applied our algorithm to the real condition monitoring database [6]. This public database focuses on the Air Pressure System (APS) used for various functions in Scania trucks such as braking and gear changes. Measurements of a specific APS component were collected from heavy Scania trucks in everyday usage. The goal is to predict a potential failure of this component. We therefore consider $K = 2$ classes, where class 1 corresponds to an APS without failure and class 2 corresponds to an APS presenting a failure. For this database, the costs of class misclassifications were given by experts:

$$C = \begin{pmatrix} 0 & 10 \\ 500 & 0 \end{pmatrix},$$

so that the cost of predicting a non-existing failure is $10, while the cost of missing a failure is $500. After removing missing values, the training set $\mathcal{D}$ contains the measurements of m = 54,731 samples, of which m_1 = 54,137 do not present any failure


and m_2 = 594 present a failure. Hence, the class proportions $\hat\pi = [0.9891, 0.0109]$ of the training set are highly imbalanced, which highly complicates the task of predicting a failure. Each sample is described by $d = 129$ numerical and categorical anonymized features. For this database, an independent test dataset $\mathcal{D}'$ is given, and contains the measurements of m' = 14,578 samples (after removing samples with missing values). The class proportions of this test set are $\pi' = [0.9848, 0.0152]$. We compared our minimax classifier $\delta^B_{\bar\pi}$ to the discrete Bayes classifier $\delta^B_{\hat\pi}$ (5), and to three other well-known methods: Logistic Regression [10], Random Forest [11], and Support Vector Machine (SVM) [12]. These three last methods were applied to the original (non-quantized) features. In the following, we will use the notations $\delta^{LR}_{\hat\pi}$ for the Logistic Regression, $\delta^{RF}_{\hat\pi}$ for the Random Forest, and $\delta^{SVM}_{\hat\pi}$ for the SVM. To compare these five methods, we consider the empirical risk of misclassifications $\hat{r}(\pi', \delta_{\hat\pi})$, the maximum of the class-conditional risks $\max_{k\in\mathcal{Y}} \hat{R}_k(\delta_{\hat\pi})$, and the Total Cost ($) given by

$$TC(\delta) = C_{1,2}\sum_{i\in\mathcal{I}'_1} \mathbf{1}_{\{\hat{Y}'_i = 2\}} + C_{2,1}\sum_{i\in\mathcal{I}'_2} \mathbf{1}_{\{\hat{Y}'_i = 1\}} = m'\, \hat{r}\big(\pi', \delta_{\hat\pi}\big).$$

3.1 Training Step

In order to apply our algorithm, we quantized the features with the k-means algorithm with T = 40 centroids. The choice of T is important. It was established from Fig. 1, left. For each value of T, we performed a tenfold cross-validation over a training subset. Of the 10 subsamples, a single subsample is retained as the validation set for testing the classifier, and the remaining 9 subsamples are used as training set for learning the classifier. The blue curve shows that the classifier is better for a large value of T, but its generalization error on the validation set, plotted in red, increases from T around 40. We therefore chose T = 40, which is a good trade-off between minimizing the average empirical risk over the subtraining set and over the validation set.

In Table 1 we can observe that the class-conditional risks $\hat{R}_1(\delta)$ and $\hat{R}_2(\delta)$ are very unbalanced for the classifiers $\delta^{LR}_{\hat\pi}$, $\delta^{SVM}_{\hat\pi}$, $\delta^{RF}_{\hat\pi}$, $\delta^B_{\hat\pi}$. This means that for these classifiers the risk of missing a failure is much higher than the risk of predicting a non-existent failure. In the case where we are interested in minimizing the risk of missing failures, it is therefore better to consider our minimax classifier $\delta^B_{\bar\pi}$, since this decision rule aims to minimize the maximum of the class-conditional risks and tends to equalize them as much as possible. This fact is also illustrated in Fig. 1, right. In Table 2 we studied the empirical probabilities $\hat{P}(\delta(X_i) = l \mid Y_i = k)$ for all $\delta \in \{\delta^{LR}_{\hat\pi}, \delta^{SVM}_{\hat\pi}, \delta^{RF}_{\hat\pi}, \delta^B_{\hat\pi}, \delta^B_{\bar\pi}\}$. In Table 2, and more precisely in the confusion matrix associated with the minimax classifier $\delta^B_{\bar\pi}$, we can observe that this decision rule gets the best predictions associated with class 2. Note that concerning the training step, the discrete Bayes classifier $\delta^B_{\hat\pi}$ gets very good results too, even concerning class 1, as detailed in Tables 1 and 2.

Table 1 Results associated to the training dataset with π̂ = [0.9891, 0.0109]. The minimax classifier is associated to π̄ = [0.6045, 0.3955] and satisfies r̂(π̄, δ^B_π̄) = 6.0437

                            Original features                    Quantized features
                            δ^LR_π̂     δ^SVM_π̂     δ^RF_π̂      δ^B_π̂      δ^B_π̄
r̂(π̂, δ)   ($/sample)       1.98        5.31         0.94         0.97        5.60
R̂_1(δ)    ($/sample)       0.011       0.004        0.003        0.486       5.592
R̂_2(δ)    ($/sample)       181.818     489.057      86.700       45.455      6.734
TC(δ)      ($)              108,580     290,720      51,640       53,300      306,740
Time       (s)              501.90      991.48       213.41       10.17       10.34

Table 2 Confusion matrices associated to the training step (rows: true class Y; columns: predicted class Ŷ = 1, Ŷ = 2)

δ^LR_π̂ :   Y = 1: 0.9989  0.0011   |   Y = 2: 0.3636  0.6364
δ^SVM_π̂:   Y = 1: 0.9996  0.0004   |   Y = 2: 0.9781  0.0219
δ^RF_π̂ :   Y = 1: 0.9997  0.0003   |   Y = 2: 0.1734  0.8266
δ^B_π̂  :   Y = 1: 0.9514  0.0486   |   Y = 2: 0.0909  0.9091
δ^B_π̄  :   Y = 1: 0.4408  0.5592   |   Y = 2: 0.0135  0.9865

However, we can observe in Fig. 1, right, that it would be better to consider the minimax classifier when the class proportions of the test set $\pi'$ differ from $\hat\pi$, since its associated risk is more stable than that of the discrete Bayes classifier $\delta^B_{\hat\pi}$. Finally, in Table 1, we can observe an important gap between the computation times of the classifiers $\delta^{LR}_{\hat\pi}$, $\delta^{SVM}_{\hat\pi}$, $\delta^{RF}_{\hat\pi}$ and those of the classifiers $\delta^B_{\hat\pi}$, $\delta^B_{\bar\pi}$. Note that the processing times were obtained on a computer using an i7 processor (3.1 GHz).

3.2 Test Step

Concerning the test step, we first applied each classifier $\delta^{LR}_{\hat\pi}$, $\delta^{SVM}_{\hat\pi}$, $\delta^{RF}_{\hat\pi}$, $\delta^B_{\hat\pi}$ and $\delta^B_{\bar\pi}$ to the entire test dataset provided in the database. The results are detailed in Tables 3 and 4. We can observe that the risks and the class-conditional risks associated with each classifier are pretty similar to the training step (except for the Random Forest). Note that here $\pi'$ is similar to $\hat\pi$. In order to illustrate the robustness of the classifier $\delta^B_{\bar\pi}$ when $\pi'$ differs from $\hat\pi$, we considered the following experiment. First, we generated M = 10,000 random priors $\pi^{(s)}$, $s \in \mathcal{S} := \{1, \ldots, M\}$, over the simplex using the procedure proposed by [13]. For each $s$, we then generated a test subset containing 201 samples (the size of the smallest class in $\mathcal{D}'$) satisfying the class proportions $\pi^{(s)}$. Thus, each trained classifier $\delta \in \{\delta^{LR}_{\hat\pi}, \delta^{SVM}_{\hat\pi}, \delta^{RF}_{\hat\pi}, \delta^B_{\hat\pi}, \delta^B_{\bar\pi}\}$ is tested on each test subset satisfying the class proportions $\pi^{(s)}$. The results are detailed in Fig. 2.


Table 3 Results associated to the test dataset with π' = [0.9848, 0.0152]

                            Original features                    Quantized features
                            δ^LR_π̂     δ^SVM_π̂     δ^RF_π̂      δ^B_π̂      δ^B_π̄
r̂(π', δ)  ($/sample)       2.59        7.41         2.90         1.14        5.65
R̂_1(δ)    ($/sample)       0.021       0.001        0.016        0.461       5.629
R̂_2(δ)    ($/sample)       169.683     488.687      190.045      45.259      6.787
TC(δ)      ($)              37,800      108,010      42,230       16,620      82,320

Table 4 Confusion matrices associated to the test step (rows: true class Y; columns: predicted class Ŷ = 1, Ŷ = 2)

δ^LR_π̂ :   Y = 1: 0.9979  0.0021   |   Y = 2: 0.3394  0.6606
δ^SVM_π̂:   Y = 1: 0.9999  0.0001   |   Y = 2: 0.9774  0.0226
δ^RF_π̂ :   Y = 1: 0.9984  0.0016   |   Y = 2: 0.3801  0.6199
δ^B_π̂  :   Y = 1: 0.9539  0.0461   |   Y = 2: 0.0905  0.9095
δ^B_π̄  :   Y = 1: 0.4371  0.5629   |   Y = 2: 0.0136  0.9864

Fig. 2 Dispersion of the risks, the Total Costs, and the maximums of the class-conditional risks, associated to the M test subsets satisfying the class proportions $\pi^{(s)}$, $s \in \mathcal{S}$

We can observe that the dispersion of the risks $\hat{r}(\pi^{(s)}, \delta^B_{\bar\pi})$ is considerably lower and thinner than for the other classifiers. This observation holds for the Total Cost and for the maximum of the class-conditional risks. This illustrates that the minimax classifier $\delta^B_{\bar\pi}$ is more robust when $\pi'$ differs from $\hat\pi$.
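The robustness experiment can be reproduced along the following lines. Drawing the priors uniformly over the simplex is done here with a Dirichlet(1, …, 1) draw, which is only one possible implementation of the procedure attributed to [13]; the subset size of 201 follows the text, and sampling with replacement is an assumption made so that any drawn proportion can be honoured.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_prior_subsets(y_test, n_subsets=10_000, subset_size=201, n_classes=2):
    """Yield (pi_s, indices) pairs: random class proportions and a matching test subset."""
    idx_by_class = [np.flatnonzero(y_test == k + 1) for k in range(n_classes)]
    for _ in range(n_subsets):
        pi_s = rng.dirichlet(np.ones(n_classes))       # uniform draw on the simplex
        counts = np.floor(pi_s * subset_size).astype(int)
        counts[-1] = subset_size - counts[:-1].sum()   # make the counts sum to subset_size
        subset = np.concatenate([
            rng.choice(idx_by_class[k], size=counts[k], replace=True)
            for k in range(n_classes)
        ])
        yield pi_s, subset
```

Each subset can then be fed to the five trained classifiers to obtain the dispersion of the risks shown in Fig. 2.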

4 Conclusion

This paper proposes an algorithm to compute the minimax classifier which (i) is suitable for working on discrete/discretized features, (ii) takes into account costs given by experts, and (iii) is robust to imbalanced or uncertain class proportions. In future work, we will investigate the robustness of the classifier with respect to the probabilities $\hat{p}_{kt}$.


References
1. Yang Y, Webb GI (2009) Discretization for naive-Bayes learning: managing discretization bias and variance. Mach Learn 74(1):39–74
2. Poor HV (1994) An introduction to signal detection and estimation, 2nd edn. Springer, Berlin
3. Ávila Pires B, Szepesvari C, Ghavamzadeh M (2013) Cost-sensitive multiclass classification risk bounds. In: Proceedings of the 30th international conference on machine learning, vol 28
4. Drummond C, Holte RC (2003) C4.5, class imbalance, and cost sensitivity: why undersampling beats oversampling. In: Proceedings of the ICML'03 workshop on learning from imbalanced datasets
5. Guerrero-Curieses A, Alaiz-Rodriguez R, Cid-Sueiro J (2004) A fixed-point algorithm to minimax learning with neural networks. IEEE Trans Syst Man Cybern Part C Appl Rev 34(4):383–392
6. Scania CV AB (2016) APS failure at Scania trucks data set. UCI machine learning repository (Online). Available: https://www.kaggle.com/uciml/aps-failure-at-scania-trucks-data-set
7. Gilet C, Barbosa S, Fillatre L (2019) Minimax classifier with box constraint on the priors. hal-02296592
8. Alber YI, Iusem AN, Solodov MV (1998) On the projected subgradient method for nonsmooth convex optimization in a Hilbert space. Math Program 81:23–35
9. Condat L (2016) Fast projection onto the simplex and the ℓ1 ball. Math Program 158(1):575–585
10. McCullagh P, Nelder JA (1990) Generalized linear models, 2nd edn. Chapman & Hall
11. Breiman L (2001) Random forests. Mach Learn 45:5–32
12. Christianini N, Shawe-Taylor JC (2000) An introduction to support vector machines and other kernel-based learning methods. Cambridge University Press, Cambridge
13. Reed WJ (1974) Random points in a simplex. Pacific J Math 54(2):183–198

A Comparative Study of Statistical-Based Change Detection Methods for Multidimensional and Multitemporal SAR Images Ammar Mian and Frédéric Pascal

Abstract This paper addresses the problem of activity monitoring through change detection in a time series of multidimensional Synthetic Aperture Radar (SAR) images. Thanks to the all-weather and all-illumination acquisition capabilities of SAR sensors, this technology has become widely popular in recent decades when it comes to the monitoring of large areas. As a consequence, a plethora of methodologies to process the increasing amount of data has emerged. In order to present a clear picture of available techniques from a practical standpoint, the current paper aims at presenting an overview of statistical-based methodologies which are adapted to the processing of noisy and multidimensional data obtained from the latest generation of sensors. To tackle the various big data challenges, namely the problems of missing data, outliers/corrupted data and heterogeneous data, robust alternatives are studied in the statistics and signal processing community. In particular, we investigate the use of advanced robust approaches considering non-Gaussian modeling, which appear to be better suited to handle high-resolution heterogeneous images. To illustrate the attractiveness of the different methodologies presented, a comparative study on real high-resolution data has been realized. From this study, it appears that robust methodologies enjoy better detection performance through a complexity trade-off with regard to other non-robust alternatives.

Keywords Change detection · Synthetic aperture radar · Time series · Activity monitoring

A. Mian (B)
SONDRA, CentraleSupélec, 3 rue Joliot Curie, 91192 Gif-sur-Yvette, France
LISTIC, Université Savoie Mont-Blanc, 74944 Annecy le Vieux, France
e-mail: [email protected]
F. Pascal
Laboratoire des Signaux et Systèmes, CentraleSupélec, 3 rue Joliot Curie, 91192 Gif-sur-Yvette, France
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021
L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_18


1 Introduction

1.1 Motivations

Change detection (CD) for remotely sensed images of the Earth has been a popular subject of study for recent decades. It has indeed attracted a plethora of scholars due to its various military (activity monitoring) and civil (geophysics, disaster assessment, etc.) applications. With the increase in the number of space missions with Synthetic Aperture Radar (SAR) sensors, the amount of readily available data has led to a big data era. In order to process this quantity, automatic algorithms helping to analyze time series of images have to be developed. Thus, CD algorithms have been thoroughly investigated in the recent literature [1]. In that context, statistical-based methodologies have been successful in processing images that are subject to the well-known speckle noise. Notably, when it concerns the processing of multidimensional (such as polarimetric SAR) and multitemporal images, various methodologies have been proposed in the literature. The present paper aims at giving a short overview of parametric methodologies based on popular distribution models of the SAR data.

1.2 Change Detection Overview The literature on the subject is dense, and a variety of methodologies are available (see [1] or [2] for an overview). Broadly speaking, as illustrated in Fig. 1, a change detection algorithm relies on three separate elements: • A pre-processing phase in which the time series of images has to be co-registered, meaning that, by applying geometric transforms, each pixel of every image corresponds to the same physical location. Various methodologies also consider a denoising step in which the speckle noise is reduced thanks to filtering techniques [3, 4]. Lastly, features can be selected from the images for a more concise or descriptive representation for CD purposes. Some possibilities include wavelet decomposition [5], Markov fields [6] or Principal Component Analysis [7], among others. • A comparison step in which the features at each date are compared among themselves. This step consists in finding a relevant distance to measure dissimilarities of features between the different images of the series. Many techniques exist, based on various principles. • A post-processing step which varies depending on the methodology used for the comparison step. It can either correspond to a thresholding [8, 9] or involve machine learning classification algorithms [10].


Fig. 1 General procedure for a change detection methodology (pre-processing: corrections, co-registration, denoising, feature selection; comparison: pixel level or object level, supervised or non-supervised; post-processing: thresholding or classification)

We will deal presently with comparison step methodologies. More specifically, we will place ourselves in the context of pixel-level methodologies, as opposed to object-level methodologies as defined in [1]. In this paradigm, the comparison is done on a local spatial neighborhood through the principle of the sliding window: for each pixel, a spatial neighborhood is defined and a distance function is computed to compare the similarities of this neighborhood between the images at different times. Statistical-based methodologies rely on a probability model in order to infer changes. Those methods can rely on a model with associated parameters, in which case they are referred to as parametric methodologies. Nonparametric methods consider statistical tools that do not involve parameters [11], or Bayesian models [12] with a prior on the distribution of those parameters. We will consider presently the parametric approach, which has been the most popular subject of study in the recent decade thanks to the seminal work by Conradsen et al. [13], which showed the attractiveness of applying statistical results on the equality of covariance matrices for CD. We will first introduce the probability models used to describe SAR images and their parameters in Sect. 2. Then, in Sect. 3, we will detail several dissimilarity functions referenced in the literature for SAR change detection. Finally, we will compare the performance of these functions on two distinct datasets in Sect. 5 and conclude in Sect. 6.


Fig. 2 Illustration of the sliding window (in grey) approach (p = 3, N = 9). The central pixel $\mathbf{x}_5^t$ corresponds to the test pixel

2 Statistical Modeling of SAR Images 2.1 The Data Denote by $\mathcal{W} = \{\mathbf{X}_1, \ldots, \mathbf{X}_T\}$ a collection of $T$ mutually independent groups of $p$-dimensional i.i.d. complex vectors: $\mathbf{X}_t = [\mathbf{x}_1^t, \ldots, \mathbf{x}_N^t] \in \mathbb{C}^{p \times N}$. For Single Look Complex (SLC) SAR images, these sets correspond to the local observations in a spatial sliding window, as illustrated in Fig. 2. The subscript $k$ corresponds to a spatial index while the superscript $t$ corresponds to a time index. For Multilook Complex (MLC) images, the complex amplitude data $\mathbf{x}_k^t$ are not made available. The Sample Covariance Matrix (SCM) is provided instead, in order to filter the speckle noise and compress the data. The number of pixels $L$ used for the averaging is called the number of looks.

2.2 Gaussian Model The centered complex Gaussian distribution is the one most encountered in the SAR literature, notably for modelling polarimetric images. Indeed, since each pixel consists of the coherent sum of the contributions of many scatterers, it is expected, thanks to the central limit theorem, that the Gaussian model is not too far from the actual empirical distribution. A random vector $\mathbf{x} \in \mathbb{C}^p$ is said to follow the complex Gaussian distribution (denoted $\mathbf{x} \sim \mathcal{CN}(\mathbf{0}_p, \boldsymbol{\Sigma})$) if its probability density function (p.d.f.) is:
$$p_{\mathbf{x}}^{\mathcal{CN}}(\mathbf{x}; \boldsymbol{\Sigma}) = \frac{1}{\pi^p |\boldsymbol{\Sigma}|} \exp\left(-\mathbf{x}^H \boldsymbol{\Sigma}^{-1} \mathbf{x}\right). \quad (1)$$


For multilook images, the matrix obtained through data averaging is assumed to be Wishart distributed as a consequence.

2.3 Non-Gaussian Modeling While the Gaussian distribution is popular, it fails to accurately describe the heterogeneity observed in very high-resolution images, as described in [14], [15] or [16]. Indeed, in those images, the number of scatterers in each pixel is greatly reduced with regard to low-resolution images. To better describe the observed distribution of the data, other models have been considered. For example, the K-distribution has been considered in [17, 18], the Weibull distribution in [19] or the inverse generalized Gaussian distribution in [20]. These various models belong to the family of complex elliptical distributions, which generalizes them, as discussed in [21]. This model depends on a density generator function $g: \mathbb{R}^+ \to \mathbb{R}^+$ satisfying the condition $m_{p,g} = \int_{\mathbb{R}^+} t^{p-1} g(t)\,\mathrm{d}t < \infty$ and a positive definite matrix $\boldsymbol{\Sigma} \in \mathcal{S}_H^p$: a random vector $\mathbf{x} \in \mathbb{C}^p$ follows a complex elliptical distribution (denoted $\mathbf{x} \sim \mathcal{CE}(\mathbf{0}_p, g, \boldsymbol{\Sigma})$) if its p.d.f. is the following:
$$p_{\mathbf{x}}^{\mathcal{CE}}(\mathbf{x}; \boldsymbol{\Sigma}, g) = C_{p,g}\, |\boldsymbol{\Sigma}|^{-1}\, g\!\left(\mathbf{x}^H \boldsymbol{\Sigma}^{-1} \mathbf{x}\right), \quad (2)$$
where $C_{p,g}$ is a normalization constant ensuring that $\int_{\mathbb{C}^p} p_{\mathbf{x}}^{\mathcal{CE}}(\mathbf{x}; \boldsymbol{\Sigma}, g)\,\mathrm{d}\mathbf{x} = 1$. Another representation of those distributions can be found through the compound-Gaussian model, sometimes referred to as the product model. In this case, a random vector $\mathbf{x}$ follows a complex compound-Gaussian (CCG) distribution if:
$$\mathbf{x} = \sqrt{\tau}\, \mathbf{z}, \quad (3)$$
where $\mathbf{z} \sim \mathcal{CN}(\mathbf{0}_p, \boldsymbol{\xi})$ is called the speckle and $\tau$, called the texture, follows a probability distribution on $\mathbb{R}^+$. This quantity is often assumed to be deterministic in order to generalize the Gaussian distribution without having to consider a model on the texture. In this case, the statistical model for the pixels in a sliding window is written:
$$\mathbf{x}_k^t \sim \sqrt{\tau_k^t}\, \mathbf{z}_k^t, \quad (4)$$
where $\tau_k^t \in \mathbb{R}^+$ is deterministic and $\mathbf{z}_k^t \sim \mathcal{CN}(\mathbf{0}_p, \boldsymbol{\xi}_t)$. The elliptical and CCG distributions are probability models which are in the scope of the robust statistics literature initiated by works such as [22–24]. More details can be found in the books [25] and [26]. Following their definition, a method is said to be robust, in the context of this paper, when its statistical properties do not depend on the density generator function in the elliptical case, or on the set of texture parameters in the deterministic CCG case. Using these models, we describe hereafter various dissimilarity measures which can be used for change detection.
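To make the deterministic compound-Gaussian model of (4) concrete, the following minimal Python sketch simulates one sliding window of SLC pixels. The Toeplitz speckle covariance and the Gamma-distributed texture values are illustrative assumptions of the sketch, not choices made by the authors.

```python
import numpy as np

def complex_gaussian(n_samples, sigma, rng):
    """Draw zero-mean circular complex Gaussian vectors with covariance sigma."""
    p = sigma.shape[0]
    l = np.linalg.cholesky(sigma)
    z = (rng.standard_normal((p, n_samples)) + 1j * rng.standard_normal((p, n_samples))) / np.sqrt(2)
    return l @ z

def simulate_ccg_window(p=3, n=25, t=2, seed=0):
    """Simulate a window of SLC pixels x_k^t = sqrt(tau_k^t) * z_k^t (Eq. (4))."""
    rng = np.random.default_rng(seed)
    # Illustrative speckle covariance, shared over time (no change): Toeplitz 0.5^|i-j|.
    sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p))).astype(float)
    data = np.empty((t, p, n), dtype=complex)
    for ti in range(t):
        z = complex_gaussian(n, sigma, rng)              # speckle z_k^t ~ CN(0, xi)
        tau = rng.gamma(shape=1.0, scale=1.0, size=n)    # illustrative texture values
        data[ti] = np.sqrt(tau) * z
    return data

if __name__ == "__main__":
    print(simulate_ccg_window().shape)  # (T, p, N)
```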


3 Dissimilarity Measures 3.1 Problem Formulation Under a parametric approach, CD can be achieved by deciding between the two following alternative hypotheses:
$$\begin{cases} \mathrm{H}_0: \ \boldsymbol{\theta}_1 = \cdots = \boldsymbol{\theta}_T = \boldsymbol{\theta}_0 & \text{(no change)}, \\ \mathrm{H}_1: \ \exists (t, t'), \ \boldsymbol{\theta}_t \neq \boldsymbol{\theta}_{t'} & \text{(change)}, \end{cases} \quad (5)$$
where $\boldsymbol{\theta}$ corresponds to the parameters of the distribution used as a model. In order to decide, a dissimilarity measure between the data over time is needed. It can be seen as a function, also called a statistic:
$$\hat{\Lambda}: \mathbb{C}^{p \times N \times T} \to \mathbb{R}, \qquad \mathcal{W} \mapsto \hat{\Lambda}(\mathcal{W}),$$
such that $\hat{\Lambda}(\mathcal{W})$ is high when $\mathrm{H}_1$ is true and low otherwise. An interesting property of such a function is the Constant False Alarm Rate (CFAR) property. This property holds when the distribution of the statistic under the $\mathrm{H}_0$ hypothesis does not depend on the parameters of the problem. This allows selecting a threshold value for the detection which is directly linked to the probability of false alarm. The threshold is often obtained by deriving the distribution of the statistic, or by Monte-Carlo simulations when this distribution is not available.

3.2 Hypothesis Testing Statistics The first kind of statistics comes from the statistical literature on hypothesis testing. Several techniques have been developed to obtain a decision statistic. A comparative study of this framework can be found in [27]. The analysis presented there showed that many approaches lead to equivalent results. Thus, three statistics remain: • the Generalized Likelihood Ratio Test (GLRT) statistic:
$$\hat{\Lambda}_{\mathrm{G}} = \frac{\big|\hat{\boldsymbol{\Sigma}}_0^{\mathrm{SCM}}\big|^{TN}}{\prod_{t=1}^{T} \big|\hat{\boldsymbol{\Sigma}}_t^{\mathrm{SCM}}\big|^{N}}, \quad (6)$$
where:
$$\forall t, \ \hat{\boldsymbol{\Sigma}}_t^{\mathrm{SCM}} = \frac{1}{N} \sum_{k=1}^{N} \mathbf{x}_k^t \mathbf{x}_k^{t\,H} \quad \text{and} \quad \hat{\boldsymbol{\Sigma}}_0^{\mathrm{SCM}} = \frac{1}{T} \sum_{t=1}^{T} \hat{\boldsymbol{\Sigma}}_t^{\mathrm{SCM}}. \quad (7)$$
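A minimal Python sketch of the Gaussian GLRT of (6)-(7) is given below. Working with log-determinants (and thus with the logarithm of (6)) is a numerical-stability choice of this sketch, not part of the original definition.

```python
import numpy as np

def scm(x):
    """Sample covariance matrix of p x N complex data (Eq. (7))."""
    return x @ x.conj().T / x.shape[1]

def gaussian_glrt(window):
    """Logarithm of the Gaussian GLRT statistic of Eq. (6) for a T x p x N window."""
    t, _, n = window.shape
    sigmas = [scm(window[i]) for i in range(t)]
    sigma0 = sum(sigmas) / t
    logdet0 = np.linalg.slogdet(sigma0)[1]
    logdets = [np.linalg.slogdet(s)[1] for s in sigmas]
    # log of Eq. (6); a detection threshold can be applied on the log directly.
    return t * n * logdet0 - n * sum(logdets)
```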

This statistic is well known in the literature and its properties have been well studied. [28] derived its distribution under the null hypothesis and proposed an approximation as a function of $\chi^2$ distributions. • the $t_1$ statistic:
$$\hat{\Lambda}_{t_1} = \frac{1}{T} \sum_{t=1}^{T} \mathrm{Tr}\!\left(\big(\hat{\boldsymbol{\Sigma}}_0^{\mathrm{SCM}}\big)^{-1} \hat{\boldsymbol{\Sigma}}_t^{\mathrm{SCM}}\right)^{2}. \quad (8)$$
This statistic has the CFAR property and has a $\chi^2$ asymptotic distribution (when $N \to \infty$). • the Wald statistic:

$$\hat{\Lambda}_{\mathrm{Wald}} = N \sum_{t=2}^{T} \mathrm{Tr}\!\left[\left(\mathbf{I}_p - \hat{\boldsymbol{\Sigma}}_1^{\mathrm{SCM}} \big(\hat{\boldsymbol{\Sigma}}_t^{\mathrm{SCM}}\big)^{-1}\right)^{2}\right] \quad (9)$$
$$\qquad - \; q\!\left(\sum_{t=2}^{T} \mathrm{vec}(\boldsymbol{\Upsilon}_t), \ \sum_{t=1}^{T} N\, \big(\hat{\boldsymbol{\Sigma}}_t^{\mathrm{SCM}}\big)^{-\dagger} \otimes \big(\hat{\boldsymbol{\Sigma}}_t^{\mathrm{SCM}}\big)^{-1}\right), \quad (10)$$
where $\dagger$ is the transpose operator and:
$$\boldsymbol{\Upsilon}_t = N \left(\big(\hat{\boldsymbol{\Sigma}}_t^{\mathrm{SCM}}\big)^{-1} \hat{\boldsymbol{\Sigma}}_1^{\mathrm{SCM}} \big(\hat{\boldsymbol{\Sigma}}_t^{\mathrm{SCM}}\big)^{-1} - \big(\hat{\boldsymbol{\Sigma}}_t^{\mathrm{SCM}}\big)^{-1}\right), \quad (11)$$
$$q(\mathbf{x}, \boldsymbol{\Sigma}) = \mathbf{x}^H \boldsymbol{\Sigma}^{-1} \mathbf{x}. \quad (12)$$

This statistic has the CFAR property and has a $\chi^2$ asymptotic distribution (when $N \to \infty$). It is, however, computationally more complex than the previous ones due to the Kronecker product. In order to deal with multilook images, and for a pair of two images ($T = 2$), [29] proposed to use the complex Hotelling-Lawley statistic given by:
$$\hat{\Lambda}_{\mathrm{HTL}} = \mathrm{Tr}\!\left(\big(\hat{\boldsymbol{\Sigma}}_0^{\mathrm{SCM}}\big)^{-1} \hat{\boldsymbol{\Sigma}}_1^{\mathrm{SCM}}\right), \quad (13)$$
where the estimates correspond to the multilook covariance matrices at dates 0 and 1. The authors also derived the distribution under the null hypothesis and showed that it can be approximated by a Fisher-Snedecor distribution. Their study showed a potential improvement of the detection rate compared to the GLRT statistic. Concerning non-Gaussian models, [30] considered the derivation of the GLRT using the compound-Gaussian model with deterministic texture parameters:


$$\hat{\Lambda}_{\mathrm{MT}} = \frac{\big|\hat{\boldsymbol{\xi}}_0^{\mathrm{MT}}\big|^{TN}}{\prod_{t=1}^{T} \big|\hat{\boldsymbol{\xi}}_{W_t}^{\mathrm{TE}}\big|^{N}} \; \prod_{k=1}^{N} \frac{\left(\sum_{t=1}^{T} q\big(\hat{\boldsymbol{\xi}}_0^{\mathrm{MT}}, \mathbf{x}_k^t\big)\right)^{Tp}}{\prod_{t=1}^{T} \left(T\, q\big(\hat{\boldsymbol{\xi}}_{W_t}^{\mathrm{TE}}, \mathbf{x}_k^t\big)\right)^{p}}, \quad (14)$$
where $W_t = \{\mathbf{x}_k^t : 1 \leq k \leq N\}$,
$$\hat{\boldsymbol{\xi}}_{W_t}^{\mathrm{TE}} = \frac{p}{N} \sum_{\mathbf{x}_k^t \in W_t} \frac{\mathbf{x}_k^t \mathbf{x}_k^{t\,H}}{q\big(\hat{\boldsymbol{\xi}}_{W_t}^{\mathrm{TE}}, \mathbf{x}_k^t\big)}, \quad \text{and} \quad \hat{\boldsymbol{\xi}}_0^{\mathrm{MT}} = \frac{p}{N} \sum_{k=1}^{N} \frac{\sum_{t=1}^{T} \mathbf{x}_k^t \mathbf{x}_k^{t\,H}}{\sum_{t=1}^{T} q\big(\hat{\boldsymbol{\xi}}_0^{\mathrm{MT}}, \mathbf{x}_k^t\big)}, \quad (15)$$
and $q(\boldsymbol{\xi}, \mathbf{x})$ is defined in (12). This statistic has been shown to be more robust to very heterogeneous data, for which the Gaussian model is a poor fit. This comes, however, at a cost in complexity. Indeed, the solutions of two fixed-point equations have to be computed, which is computationally expensive. The statistic has the CFAR property, but its distribution under the null hypothesis has not yet been derived.
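The following sketch illustrates the Tyler-type fixed-point iteration used for $\hat{\boldsymbol{\xi}}^{\mathrm{TE}}$ in (15); the trace normalisation and the default number of iterations are illustrative choices of the sketch (Sect. 4 uses 15 iterations). The joint estimator $\hat{\boldsymbol{\xi}}_0^{\mathrm{MT}}$ can be obtained with the same scheme by pooling the dates.

```python
import numpy as np

def tyler_estimator(x, n_iter=15):
    """Fixed-point Tyler-type estimator xi_hat^TE of Eq. (15) for p x N complex data."""
    p, n = x.shape
    xi = np.eye(p, dtype=complex)
    for _ in range(n_iter):
        # q(xi, x_k) = x_k^H xi^{-1} x_k for every column k.
        q = np.real(np.einsum("in,ij,jn->n", x.conj(), np.linalg.inv(xi), x))
        xi_new = (p / n) * (x / q) @ x.conj().T
        # Trace normalisation removes the scale ambiguity of the fixed point.
        xi = p * xi_new / np.trace(xi_new).real
    return xi
```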

3.3 Information-Theoretic Measures Information theory tools are based on another approach, which consists in measuring quantities related to the information available in the data. In order to compare two distributions, distances can be computed. When a parametric model is used, the distance often depends on the parameters of the two distributions compared. Since in many applications the parameters are not known, estimates are plugged into the distance function. The Kullback-Leibler divergence is a popular measure, notably in SAR change detection problems [31]. For the Gaussian case, a distance can be obtained through the symmetrised version of the divergence:
$$\hat{\Lambda}_{\mathrm{KL}} = \frac{1}{2}\left[ d\big(\hat{\boldsymbol{\Sigma}}_0^{\mathrm{SCM}}, \hat{\boldsymbol{\Sigma}}_1^{\mathrm{SCM}}\big) + d\big(\hat{\boldsymbol{\Sigma}}_1^{\mathrm{SCM}}, \hat{\boldsymbol{\Sigma}}_0^{\mathrm{SCM}}\big)\right], \quad (16)$$
where $d(\mathbf{A}, \mathbf{B}) = \mathrm{Tr}(\mathbf{B}^{-1}\mathbf{A}) + \log\frac{|\mathbf{A}|}{|\mathbf{B}|}$. [32] has shown that this distance is asymptotically distributed as a $\chi^2$ distribution.
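A short sketch of the symmetrised Kullback-Leibler statistic of (16); the log-determinant terms of $d(\cdot,\cdot)$ cancel in the symmetrised sum, which the sketch exploits.

```python
import numpy as np

def kl_statistic(sigma0, sigma1):
    """Symmetrised Kullback-Leibler statistic of Eq. (16) between two covariance estimates."""
    a = np.linalg.solve(sigma1, sigma0)  # sigma1^{-1} sigma0
    b = np.linalg.solve(sigma0, sigma1)  # sigma0^{-1} sigma1
    return 0.5 * np.real(np.trace(a) + np.trace(b))
```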


Other measures include the Rényi entropy or the Bhattacharyya distance, which have been shown to behave similarly in [32]. They will thus be omitted in the present paper. For non-Gaussian models, [33] proposed a measure in the multilook case and for $T = 2$, based on the principle of mutual information. This distance, based on a Gamma model for the texture parameters, is a generalization of the one proposed in [34]:
$$\hat{\Lambda}_{\mathrm{MLL}} = \mathrm{MLL}(W_1) + \mathrm{MLL}(W_2) - \mathrm{MLL}(W_{12}), \quad (17)$$
where $W_{12} = W_1 \cup W_2$ and
$$\mathrm{MLL}(W) = N\,\tfrac{\nu + pL}{2}\big(\log(L\nu) - \log(\mu)\big) - N L \log\big|\boldsymbol{\xi}_W^{\mathrm{TE}}\big| - N \log \Gamma(\nu) + \tfrac{\nu - pL}{2} \sum_{\mathbf{x}_k^t \in W} \log \mathrm{Tr}\!\left[\big(\boldsymbol{\xi}_W^{\mathrm{TE}}\big)^{-1} \mathbf{x}_k^t \mathbf{x}_k^{t\,H}\right] + \sum_{\mathbf{x}_k^t \in W} \log K_{\nu - pL}\!\left(2 \sqrt{\tfrac{L\nu}{\mu}\, \mathrm{Tr}\!\left[\big(\boldsymbol{\xi}_W^{\mathrm{TE}}\big)^{-1} \mathbf{x}_k^t \mathbf{x}_k^{t\,H}\right]}\right), \quad (18)$$
where $\Gamma$ is the Gamma function, $L$ is the number of equivalent looks, $\boldsymbol{\xi}_W^{\mathrm{TE}}$ is defined in (15), $K_{\nu}(z)$ is the modified Bessel function of the second kind, and $(\mu, \nu)$ are the parameters of a Gamma distribution obtained by fitting the distribution of the texture parameter set
$$\mathcal{T} = \left\{\hat{\tau}_k^t = q\big(\boldsymbol{\xi}_W^{\mathrm{TE}}, \mathbf{x}_k^t\big)/p \ : \ \mathbf{x}_k^t \in W\right\}. \quad (19)$$
The method yields a high computational cost, since the test statistic relies on a Bessel function as well as on the fitting of a Gamma distribution.

3.4 Riemannian Geometry Distances Riemannian geometry is an alternative concept which allows us to compare distributions. Indeed, when the parameters of a distribution lie in a Riemannian manifold (for example, covariance matrices lie in the Riemannian manifold of positive definite matrices $\mathcal{S}_H^p$ [35]), it is possible to define a metric which can be related to the Kullback-Leibler divergence. Consequently, distances between two probability distributions which take into account the geometric properties of the parameter space have been considered in the literature. Notably, for a Gaussian model, we have the following distance [36]:
$$\hat{\Lambda}_{\mathrm{RG}} = \left\| \log\!\left(\big(\hat{\boldsymbol{\Sigma}}_1^{\mathrm{SCM}}\big)^{-\frac{1}{2}} \hat{\boldsymbol{\Sigma}}_2^{\mathrm{SCM}} \big(\hat{\boldsymbol{\Sigma}}_1^{\mathrm{SCM}}\big)^{-\frac{1}{2}}\right) \right\|_F^2, \quad (20)$$


where $\log$ is the matrix logarithm, which, for $\boldsymbol{\Sigma} \in \mathcal{S}_H^p$ with eigenvalue decomposition $\boldsymbol{\Sigma} = \mathbf{U} \boldsymbol{\Delta} \mathbf{U}^H$, is defined as follows:
$$\log(\boldsymbol{\Sigma}) = \mathbf{U} \log(\boldsymbol{\Delta}) \mathbf{U}^H, \quad (21)$$
where the $\log$ on the right-hand side is the pointwise logarithm. For the elliptical case, a Riemannian CES distance can be obtained as well [37]:
$$\hat{\Lambda}_{\mathrm{RE}} = \alpha \sum_{i=1}^{p} \log^2 \lambda_i + \beta \left(\sum_{i=1}^{p} \log \lambda_i\right)^2, \quad (22)$$
where $\lambda_i$ is the $i$-th eigenvalue of $\big(\hat{\boldsymbol{\Sigma}}_1^{\mathrm{MLE}}\big)^{-1} \hat{\boldsymbol{\Sigma}}_2^{\mathrm{MLE}}$, $\hat{\boldsymbol{\Sigma}}_{\in\{1,2\}}^{\mathrm{MLE}}$ is the MLE of the scatter matrix in the CES case, and $\alpha$, $\beta$ depend on the density generator function. For a Student-t distribution, we have:
$$\alpha = \frac{d+p}{d+p+1}, \quad \beta = \alpha - 1, \quad \hat{\boldsymbol{\Sigma}}^{\mathrm{MLE}} = \frac{d+p}{N} \sum_{k=1}^{N} \frac{\mathbf{x}_k \mathbf{x}_k^H}{d + \mathbf{x}_k^H \big(\hat{\boldsymbol{\Sigma}}^{\mathrm{MLE}}\big)^{-1} \mathbf{x}_k}, \quad (23)$$
where $d$ is the number of degrees of freedom of the Student-t distribution. The concept of Riemannian geometry has also been considered in [38], where a methodology has been proposed for polarimetric SAR images based on the Kennaugh matrix decomposition. This distance, however, is not based on a probability model and will thus be omitted in the present paper.
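The two Riemannian distances (20) and (22)-(23) can be sketched with SciPy as follows; this is only an illustration (the Student-t scatter-matrix MLE would itself be obtained by a fixed-point iteration similar to the Tyler one sketched above).

```python
import numpy as np
from scipy.linalg import sqrtm, logm, eigvals

def riemannian_gaussian(sigma1, sigma2):
    """Affine-invariant distance of Eq. (20) between two covariance estimates."""
    isqrt = np.linalg.inv(sqrtm(sigma1))
    return np.linalg.norm(logm(isqrt @ sigma2 @ isqrt), "fro") ** 2

def riemannian_elliptical(sigma1, sigma2, d, p):
    """CES Riemannian distance of Eq. (22) with the Student-t weights of Eq. (23)."""
    alpha = (d + p) / (d + p + 1)
    beta = alpha - 1
    lam = np.real(eigvals(np.linalg.solve(sigma1, sigma2)))
    log_lam = np.log(lam)
    return alpha * np.sum(log_lam ** 2) + beta * np.sum(log_lam) ** 2
```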

3.5 Optimal Transport Finally, a distance can be obtained through the concept of optimal transport, which defines a distance as the cost of displacing the mass of a reference distribution onto a target distribution. It has been shown in [39] that this distance has a closed form for elliptical distributions:
$$d(\boldsymbol{\Sigma}_1, \boldsymbol{\Sigma}_2) = \mathrm{Tr}(\boldsymbol{\Sigma}_1) + \mathrm{Tr}(\boldsymbol{\Sigma}_2) - 2\,\mathrm{Tr}\!\left[\left(\boldsymbol{\Sigma}_1^{\frac{1}{2}} \boldsymbol{\Sigma}_2 \boldsymbol{\Sigma}_1^{\frac{1}{2}}\right)^{\frac{1}{2}}\right]. \quad (24)$$
Thanks to this result, we can define two statistics for the Gaussian and elliptical cases by plugging in estimates of the covariance matrices. A Wasserstein statistic for the Gaussian case can be computed by taking:
$$\hat{\Lambda}_{\mathrm{WG}} = d\big(\hat{\boldsymbol{\Sigma}}_1^{\mathrm{SCM}}, \hat{\boldsymbol{\Sigma}}_2^{\mathrm{SCM}}\big). \quad (25)$$
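A sketch of the closed-form Wasserstein distance (24)-(25) using SciPy's matrix square root:

```python
import numpy as np
from scipy.linalg import sqrtm

def wasserstein_gaussian(sigma1, sigma2):
    """Closed-form optimal transport distance of Eqs. (24)-(25) between two covariances."""
    s1_half = sqrtm(sigma1)
    cross = sqrtm(s1_half @ sigma2 @ s1_half)
    return np.real(np.trace(sigma1) + np.trace(sigma2) - 2 * np.trace(cross))
```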

Table 1 Summary of statistics with their respective properties

| Statistic | Reference | Model | Eq. | T > 2 | SLC/MLC | CFAR | Asymptotic null distribution |
|---|---|---|---|---|---|---|---|
| Λ̂_G | [13] | Gaussian | (6) | ✓ | Both | ✓ | χ² (asymptotic) |
| Λ̂_t1 | [27] | Gaussian | (8) | ✓ | Both | ✓ | χ² (asymptotic) |
| Λ̂_Wald | [27] | Gaussian | (10) | ✓ | Both | ✓ | χ² (asymptotic) |
| Λ̂_HTL | [29] | Gaussian | (13) | × | Both | ✓ | Fisher-Snedecor (approximation) |
| Λ̂_MT | [30] | Deterministic CCG | (14) | ✓ | SLC | ✓ | × |
| Λ̂_KL | [32] | Gaussian | (16) | × | Both | ✓ | χ² (asymptotic) |
| Λ̂_MLL | [33] | Compound-Gaussian | (17) | × | MLC | × | × |
| Λ̂_RG | [36] | Gaussian | (20) | × | Both | × | × |
| Λ̂_RE | [37] | Student-t | (22) | × | SLC | × | × |
| Λ̂_WG | [40] | Gaussian | (25) | × | Both | × | × |
| Λ̂_WE | [40] | Elliptical | (26) | × | SLC | × | × |

A Wasserstein distance in the elliptical case can be obtained by taking:
$$\hat{\Lambda}_{\mathrm{WE}} = d\big(\hat{\tau}_1 \hat{\boldsymbol{\xi}}_1^{\mathrm{TE}}, \hat{\tau}_2 \hat{\boldsymbol{\xi}}_2^{\mathrm{TE}}\big), \quad (26)$$
where $\hat{\boldsymbol{\xi}}_{\in\{1,2\}}^{\mathrm{TE}}$ is defined in (15) and $\hat{\tau}_{\in\{1,2\}} = \frac{1}{pN} \sum_{k=1}^{N} q\big(\hat{\boldsymbol{\xi}}^{\mathrm{TE}}, \mathbf{x}_k\big)$.

3.6 Summary A summary of the different statistics with their respective properties can be found in Table 1.

4 Analysis of Complexity To compare the attractiveness of the different statistics described in Sect. 3 with regard to computational complexity, we consider an experimental Monte-Carlo analysis on synthetic Gaussian data. A set of data $\mathcal{W}$ is generated with parameter values $p = 10$, $N = 25$ and $T = 2$. The Gaussian data are generated with a covariance matrix $(\boldsymbol{\Sigma})_{ij} = 0.5^{|i-j|}$, and 4000 Monte-Carlo trials have been considered. For the statistics involving a fixed-point equation, we fixed the number of iterations to 15, and we chose $L = 1$ (Single Look data).


Table 2 Time consumption in seconds

| Λ̂_G | Λ̂_t1 | Λ̂_Wald | Λ̂_HTL | Λ̂_MT | Λ̂_KL | Λ̂_MLL | Λ̂_RG | Λ̂_RE | Λ̂_WG | Λ̂_WE |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.001 | 0.001 | 0.003 | 0.001 | 0.004 | 0.001 | 0.006 | 0.011 | 0.007 | 0.002 | – |

Finally, the simulation has been done on a 2.50 GHz processor in Python 3.7. The mean execution times are provided in Table 2. Several observations can be made: • Most of the Gaussian-derived statistics have similar time consumption, since they are based on the computation of the SCM, which is less expensive than a fixed-point estimation. The Wald statistic is more expensive than the Gaussian GLRT and $t_1$ statistics due to the inversion of a Kronecker product, which is more expensive than a simple inverse. Λ̂_RG has the highest time consumption, even compared to the non-Gaussian methodologies, because it requires computing matrix square roots, which can take a long time depending on the implementation (the one used corresponds to the SciPy implementation, which could be improved with regard to complexity). • Concerning the non-Gaussian statistics, the time consumption is generally higher than for the Gaussian methodologies due to the fixed-point estimation. The best methodology with regard to time consumption is Λ̂_MT, since it requires fewer operations than the others.

5 Experiments on Real Data 5.1 Methodology and Aim of the Study We compare the detection performance of all the statistics presented in Sect. 3 on real SLC polarimetric SAR datasets for which a ground truth is available. In order to compute the statistics, we use a sliding window of size 5 × 5 ($N = 25$). The fixed-point estimates are computed with 5 iterations to save time. For the statistic Λ̂_MLL, since it considers a multilook image, we choose $L = 1$. For the statistic Λ̂_RE, based on the Student-t distribution, we choose $d = 3$ in order to have a distribution far from the Gaussian one. When the data correspond to a series of length $T > 2$, only the adapted statistics are computed. The detection performance is measured through the Receiver Operating Characteristic (ROC) curve, which quantifies the probability of detection as a function of the probability of false alarm.


5.2 Datasets Used We consider three scenes of UAVSAR polarimetric data obtained from https://uavsar.jpl.nasa.gov (Courtesy NASA/JPL-Caltech), presented in Figs. 3 and 4 with their associated ground truth. The first two scenes are referenced under label SanAnd_26524_03 Segment 4 and correspond to a pair of images at dates April 23, 2009, and May 11, 2015 ($T = 2$). The ground truth for these two scenes has been fetched from [41]. The third scene is referenced under label Snjoaq_14511 and corresponds to a series of $T = 17$ images; the ground truth has been produced by the present authors.

Fig. 3 UAVSAR Scenes 1 (top) and 2 (bottom) in Pauli representation. Left: Image at first date. Middle: Image at second date. Right: Ground Truth


Fig. 4 UAVSAR Scene 3 in Pauli representation. Left: Image at t = 1. Right: Ground Truth on the whole series

5.3 Reproducibility The codes (in Python 3.7) are available at https://github.com/AmmarMian/WCCM2019. The datasets can be obtained from the UAVSAR website (https://uavsar.jpl.nasa.gov) using the labels given, and the ground truth may be obtained from the authors.

5.4 Results and Discussion The ROC curves for all statistics and the three scenes are presented in Fig. 5. A transcription of the detection performance at $P_{FA} = 0.01$ is also given in Table 3. The first observation that can be made is that Λ̂_MLL generally has poorer detection performance than the other statistics. This is mostly explained by the fact that it has been developed for a multilook scenario, which is not the case here. Thus, even by taking $L = 1$, this statistic yields poorer detection performance than most of the others. The statistics obtained through the Wasserstein distance, Λ̂_WG and Λ̂_WE, appear not to work well for this CD problem, since their detection performance is lower than that of the other distances, especially for Scene 2. For the case $T = 2$, considering the Gaussian-derived statistics Λ̂_G, Λ̂_t1, Λ̂_KL, Λ̂_HTL and Λ̂_RG, it is difficult to single out a best statistic, since their relative performance varies depending on the scene. Λ̂_Wald appears to be lower in all cases, but its detection performance is not far behind. When the number of images is high ($T > 2$), Λ̂_G seems to have better performance than its counterparts, but no conclusion can be made since only one dataset with $T > 2$ was available. Finally, concerning robust statistics, Λ̂_MT has the overall best performance on all scenes. In Scene 1, it has slightly lower performance than Λ̂_HTL, but on the other scenes (especially Scene 3), the detection performance is greatly improved. The gain of this statistic compared to the Gaussian ones is explained by the heterogeneity of the images at the transitions between objects. As an example, the outputs of both Λ̂_G and Λ̂_MT are presented in Fig. 6.


Fig. 5 ROC plots for a window size of 5 × 5 for the three scenes. Top-right: Scene 1. Bottom-left: Scene 2. Bottom-right: Scene 3

Table 3 Probability of detection at a false alarm rate of 1%

| Statistic | Λ̂_G | Λ̂_t1 | Λ̂_Wald | Λ̂_HTL | Λ̂_MT | Λ̂_KL | Λ̂_MLL | Λ̂_RG | Λ̂_RE | Λ̂_WG | Λ̂_WE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Scene 1 | 0.41 | 0.41 | 0.35 | 0.43 | 0.41 | 0.40 | 0.18 | 0.41 | 0.39 | 0.12 | 0.36 |
| Scene 2 | 0.33 | 0.30 | 0.27 | 0.32 | 0.39 | 0.32 | 0.11 | 0.33 | 0.29 | 0.19 | 0.19 |
| Scene 3 | 0.90 | 0.75 | 0.62 | – | 0.95 | – | – | – | – | – | – |

For these data, the Gaussian-derived statistic, which does not take the heterogeneity into account, yields a high value of the statistic at the transitions between the fields, while the CCG statistic does not, which constitutes a big improvement concerning false alarms. The natural Student-t statistic Λ̂_RE does not have good detection performance, which is explained by a mismatch in the degrees of freedom chosen to model the distribution of the data. Moreover, the Student-t model itself can be inaccurate for the actual data. In this regard, Λ̂_MT is able to obtain better performance without relying on a specific elliptical model and thus has better robustness to mismatch scenarios.


Fig. 6 Output of Λ̂_G (left) and Λ̂_MT (right) on Scene 3. The major difference between the two statistics is highlighted by red arrows

6 Conclusions We have described several parametric approaches for CD in multidimensional and multitemporal SAR images, derived from various principles, and detailed their properties. The main result of the comparative study on real data undertaken in Sect. 5 was to show that the statistic Λ̂_MT, based on a deterministic CCG modelling, provided the best detection performance thanks to its inherent robustness properties. Indeed, by taking into account the heterogeneity of the backscattering power within the local window, this statistic allowed us to reduce the number of false alarms while keeping the amount of true detections consistent with Gaussian-derived detectors. This improvement comes with a higher computational cost, as explored in Sect. 4, which has been shown to be reasonable with regard to the detection performance obtained.

References 1. Hussain M, Chen D, Cheng A, Wei H, Stanley D (2013) Change detection from remotely sensed images: From pixel-based to object-based approaches. (ISPRS) J Photogrammetry Remote Sens 80:91–106 (Online). Available: http://www.sciencedirect.com/science/article/ pii/S0924271613000804 2. Hecheltjen A, Thonfeld F, Menz G (2014) Recent advances in remote sensing change detection—a review. Springer Netherlands, Dordrecht, pp 145–178 3. Achim A, Tsakalides P, Bezerianos A (2003) Sar image denoising via Bayesian wavelet shrinkage based on heavy-tailed modeling. IEEE Trans Geosci Remote Sens 41(8):1773–1784 4. Foucher S, Lopez-Martinez C (2014) Analysis, evaluation, and comparison of polarimetric SAR speckle filtering techniques. IEEE Trans Image Process 23(4):1751–1764 5. Mian A, Ovarlez J-P, Ginolhac G, Atto AM (2017) Multivariate change detection on high resolution monovariate SAR image using linear time-frequency analysis. In: 25th European Signal Processing Conference (EUSIPCO). Kos, Greece 6. Wang F, Wu Y, Zhang Q, Zhang P, Li M, Lu Y (2013) Unsupervised change detection on SAR images using triplet Markov field model. IEEE Geosci Remote Sens Lett 10(4):697–701 7. Yousif O, Ban Y (2013) Improving urban change detection from multitemporal SAR images using pca-nlm. IEEE Trans Geosci Remote Sens 51(4):2032–2041


8. Bruzzone L, Prieto DF (2000) Automatic analysis of the difference image for unsupervised change detection. IEEE Trans Geosci Remote Sens 38(3):1171–1182 9. Kervrann C, Boulanger J (2006) Optimal spatial adaptation for patch-based image denoising. IEEE Trans Image Process 15(10):2866–2878 10. Gong M, Zhao J, Liu J, Miao Q, Jiao L (2016) Change detection in synthetic aperture radar images based on deep neural networks. IEEE Trans Neural Netw Learn Syst 27(1):125–138 11. Aiazzi B, Alparone L, Baronti S, Garzelli A, Zoppetti C (2013) Nonparametric change detection in multitemporal SAR images based on mean-shift clustering. IEEE Trans Geosci Remote Sens 51(4):2022–2031 12. Prendes J, Chabert M, Pascal F, Giros A, Tourneret J (2015) Change detection for optical and radar images using a bayesian nonparametric model coupled with a markov random field. In: 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), April 2015, pp 1513–1517 13. Conradsen K, Nielsen AA, Schou J, Skriver H (2001) Change detection in polarimetric SAR data and the complex Wishart distribution. In: IGARSS 2001. Scanning the present and resolving the future. Proceedings. IEEE 2001 International Geoscience and Remote Sensing Symposium (Cat. No.01CH37217), vol 6, pp 2628–2630 14. Greco MS, Gini F (2007) Statistical analysis of high-resolution SAR ground clutter data. IEEE Trans Geosci Remote Sens 45(3):566–575 15. Gao G (2010) Statistical modeling of SAR images: a survey. Sensors 10(1):775–795 16. Ollila E, Tyler DE, Koivunen V, Poor HV (2012) Compound-Gaussian clutter modeling with an inverse Gaussian texture distribution. IEEE Signal Process Lett 19(12):876–879 17. Yueh S, Kong J, Jao J, Shin R, Novak L (1989) K-distribution and polarimetric terrain radar clutter. J Electromagnetic Waves Appl 3(8):747–768 (Online). Available: https://doi.org/10. 1163/156939389X00412 18. Muller H (1994) K statistics of terrain clutter in high resolution SAR images. In: Proceedings of IGARSS ’94 - 1994 IEEE International Geoscience and Remote Sensing Symposium, vol 4, Aug 1994, pp 2146–2148 19. Bucciarelli T, Lombardo P, Oliver CJ, Perrotta M (1995) A compound Weibull model for SAR texture analysis. In: 1995 International Geoscience and Remote Sensing Symposium, IGARSS ’95. Quantitative Remote Sensing for Science and Applications, vol 1, July 1995, pp 181–183 20. Freitas CC, Frery AC, Correia AH (2005) The polarimetric g distribution for SAR data analysis. Environmetrics 16(1):13–31 (Online). Available: https://onlinelibrary.wiley.com/doi/abs/10. 1002/env.658 21. Ollila E, Tyler DE, Koivunen V, Poor HV (2012) Complex elliptically symmetric distributions: survey, new results and applications. IEEE Trans Signal Process 60(11):5597–5625 22. Maronna RA (1976) Robust M-estimators of multivariate location and scatter. Ann Statistics 4(1):51–67 23. Yohai VJ (1974) Robust estimation in the linear model. Ann Statist 2(3):562–567, 05 1974 (Online). Available: https://doi.org/10.1214/aos/1176342717 24. Martin B, Pierre D. Robust estimation of the sur model. Canadian J Statistics 28(2):277–288 (Online). Available: https://onlinelibrary.wiley.com/doi/abs/10.2307/3315978 25. Maronna RA, Martin DR, Yohai VJ (2006) Robust statistics: theory and methods. 1st edn, ser. Wiley Series in Probability and Statistics, Wiley 26. Zoubir AM, Koivunen V, Ollila E, Muma M (2018) Robust statistics for signal processing. Cambridge University Press 27. 
Ciuonzo D, Carotenuto V, Maio AD (2017) On multiple covariance equality testing with application to SAR change detection. IEEE Trans Signal Process 65(19):5078–5091 28. Anderson T (2003) An introduction to multivariate statistical analysis, Ser. Wiley Series in Probability and Statistics, Wiley (Online). Available: https://books.google.es/books? id=Cmm9QgAACAAJ 29. Akbari V, Anfinsen SN, Doulgeris AP, Eltoft T, Moser G, Serpico SB (2016) Polarimetric SAR change detection with the complex Hotelling-Lawley trace statistic. IEEE Trans Geosci Remote Sens 54(7):3953–3966


30. Mian A, Ginolhac G, Ovarlez J-P, Atto AM (2019) New robust statistics for change detection in time series of multivariate SAR images. IEEE Trans Signal Process 67(2):520–534 31. Inglada J, Mercier G (2007) A new statistical similarity measure for change detection in multitemporal SAR images and its extension to multiscale change analysis. IEEE Trans Geosci Remote Sens 45(5):1432–1445 32. Frery AC, Nascimento ADC, Cintra RJ (2014) Analytic expressions for stochastic distances between relaxed complex Wishart distributions. IEEE Trans Geosci Remote Sens 52:1213– 1226 33. Liu M, Zhang H, Wang C, Wu F (2014) Change detection of multilook polarimetric SAR images using heterogeneous clutter models. IEEE Trans Geosci Remote Sens 52(12):7483–7494 34. Beaulieu J, Touzi R (2004) Segmentation of textured polarimetric SAR scenes by likelihood approximation. IEEE Trans Geosci Remote Sens 42(10):2063–2072 35. Skovgaard LT (1984) A Riemannian geometry of the multivariate normal model. Scandinavian J Statistics 211–223 36. Smith ST (2005) Covariance, subspace, and intrinsic crame/spl acute/r-rao bounds. IEEE Trans Signal Process 53(5):1610–1630 37. Breloy A, Ginolhac G, Renaux A, Bouchard F (2019) Intrinsic cramérao bounds for scatter and shape matrices estimation in ces distributions. IEEE Signal Process Lett 26(2):262–266 38. Ratha D, De S, Celik T, Bhattacharya A (2017) Change detection in polarimetric SAR images using a geodesic distance between scattering mechanisms. IEEE Geosci Remote Sens Lett 14(7):1066–1070 39. Gelbrich M (1990) On a formula for the l2 wasserstein metric between measures on euclidean and hilbert spaces. Mathematische Nachrichten 147(1):185–203 40. Ghaffari N, Walker S (2018) On multivariate optimal transportation. arXiv preprint arXiv:1801.03516 41. Nascimento ADC, Frery AC, Cintra RJ (2019) Detecting changes in fully polarimetric SAR imagery with statistical information theory. IEEE Trans Geosci Remote Sens 57(3):1380–1392

Bearing Anomaly Detection Based on Generative Adversarial Network Jun Dai, Jun Wang, Weiguo Huang, Changqing Shen, Juanjuan Shi, Xingxing Jiang, and Zhongkui Zhu

Abstract It is challenging to detect an incipient fault of a bearing in a rotating machine operated under complex conditions. Current intelligent fault diagnosis models require massive historical data of bearing monitoring signals under different health states together with the corresponding state labels. However, in some cases, only samples acquired when the bearing is under healthy condition are available. To deal with the scenario where anomalous samples are absent, this paper proposes a bearing anomaly detection method based on a generative adversarial network (GAN). Specifically, only the normal samples are used to train the proposed GAN model, by which the data distribution of the normal signals is learned. Then, newly acquired signals with unknown health states are taken as the input data of the trained model. The residual loss of each new sample is calculated by comparing the dissimilarity between the generated sample and the input sample. Because the trained model is optimized on the normal samples, the residual losses of anomalous samples will be large due to their data distribution differing from that of the normal samples. Finally, a bearing anomaly is detected when the residual loss of the newly acquired sample increases quickly during continuous inspection of the bearing health condition. Life-cycle vibration signals of a bearing measured in a run-to-failure test are used to validate the proposed method. The result indicates that the bearing anomaly can be detected earlier by the proposed GAN model than by the traditional method based on a signal time-domain statistical criterion, which is of great significance for incipient weak fault detection in mechanical systems. Keywords Anomaly detection · Deep learning · Generative adversarial network · Rolling bearing · Rotating machines

J. Dai · J. Wang (B) · W. Huang · C. Shen · J. Shi · X. Jiang · Z. Zhu Institute of Industrial Measurement, Control and Equipment Diagnostics, School of Rail Transportation, Soochow University, Suzhou, People’s Republic of China e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_19


1 Introduction In modern industries, mechanical systems tend to be more and more automatic, precise, and efficient. Minor damage may affect the normal operation of the whole system and even cause serious accidents. In order to accurately capture the health status of mechanical systems, condition monitoring systems have been used to collect massive data from the equipment [1], and data-driven maintenance has become an important technique for mechanical system condition monitoring and fault diagnosis [2]. Traditional signal analysis methods, such as the wavelet transform (WT) [3], independent component analysis (ICA) [4], and empirical mode decomposition (EMD) [5], can extract effective fault components from the original signals. However, the fault diagnosis performance of these methods depends on the signal processing experience of the maintenance personnel. In order to extract valid features from the measured data efficiently, deep learning has been applied to the field of machinery fault diagnosis. Tamilselvan and Wang [6] proposed a deep belief network (DBN) health state classification model and applied it to aircraft engine health diagnosis. Jia et al. [1] used an auto-encoder (AE) model and the back-propagation algorithm to extract fault features from original signals. Zhang et al. [7] constructed a convolutional neural network (CNN) model for bearing fault diagnosis. Zhao et al. [8] proposed a residual neural network (ResNet) based on dynamic weighted wavelet coefficients for fault monitoring. These studies indicate that deep learning has good application prospects in mechanical equipment condition monitoring and fault diagnosis [9]. However, these models are trained by inputting massive historical data and the corresponding labels. In a practical diagnosis task, it is difficult to obtain samples under all the different fault states. In cases where safety is of paramount importance, such as the condition monitoring of bearings installed in a train wheelset or an aero-engine, only samples acquired when the bearing is in the healthy state are available. In order to detect anomalies of a machine without anomalous samples, a novel method that relies only on the available normal samples is urgently needed. The generative adversarial network (GAN) is a semi-supervised feature learning algorithm proposed by Goodfellow in 2014 [10]. It has been applied in many fields, including image processing [11, 12], style transfer [13], and anomaly detection [14, 15]. The GAN has also been applied to fault diagnosis in recent years. Wang et al. [16] solved the problem of insufficient data samples by using a GAN, and then used a stacked sparse AE to extract fault-sensitive features. Yong et al. [2] dealt with the data imbalance problem by using a GAN to oversample the classes with fewer samples; a deep neural network model was then used to classify the different faults. Han et al. [17] proposed a model named categorical adversarial auto-encoder (CatAAE) for unsupervised fault diagnosis of rolling bearings. Since the GAN is able to learn the distribution of a group of data during the training process, and to discover the distribution diversity between the training data and a new group of data during the testing process, it can be used to solve the problem of the absence of anomalous samples in a practical diagnosis task.


In this paper, a new GAN-based anomaly detection method is proposed for bearing condition monitoring. First, samples of the bearing under the healthy state are used to train the GAN model. Then, newly acquired signals with unknown health states are taken as the input of the trained model. Because the data distributions of the anomalous samples differ from that of the normal samples, the GAN model trained on the normal samples does not fit the anomalous samples. By measuring the dissimilarity between the input data and the output data of the model, the anomaly of the bearing can finally be detected. A group of life-cycle vibration signals of a rolling bearing validates the enhanced performance of the proposed GAN-based anomaly detection method as compared to the traditional method based on time-domain statistical criteria.

2 Generative Adversarial Network The GAN contains a generator and a discriminator, as illustrated in Fig. 1. The generator aims to map latent vectors from the latent space to the high-dimensional data space, by which fake samples are created. The discriminator is used to tell the fake and real samples apart. During the training process, the generated samples are driven to become similar to the real samples, and the data distribution of the real samples is learned. The GAN is optimized through the competition between the generator and the discriminator. In Fig. 1, the latent vector z is a sample from a prior distribution p_z. G(z) represents a generated sample. We hope the distribution of the generated sample G(z) is close to that of the real sample x. The discriminator is used to discriminate between the real and the generated samples; it outputs a probability reflecting whether the input sample is real or fake. When the real sample x is input, D(x) is expected to be close to 1; conversely, when the generated sample G(z) is input, the output D(G(z)) is expected to be close to 0. The objective function can be described as follows:
$$\min_G \max_D \; \mathbb{E}_{x \sim p_{data}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big] \quad (1)$$

Fig. 1 Structure of the GAN (latent vector z → Generator → G(z); real sample x and generated sample G(z) → Discriminator → real/fake)
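As an illustration of the adversarial objective (1), the following sketch shows one alternating update of the discriminator and the generator. PyTorch, the fully connected architectures, the layer sizes and the learning rates are assumptions of this sketch; the chapter does not specify the implementation.

```python
import torch
import torch.nn as nn

latent_dim, sample_dim = 32, 1280  # placeholder sizes (e.g. a one-sided spectrum of a 2560-sample cycle)

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, sample_dim))
D = nn.Sequential(nn.Linear(sample_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(x_real):
    """One adversarial update implementing the min-max objective of Eq. (1)."""
    batch = x_real.shape[0]
    z = torch.randn(batch, latent_dim)
    # Discriminator update: push D(x) towards 1 and D(G(z)) towards 0.
    opt_d.zero_grad()
    loss_d = bce(D(x_real), torch.ones(batch, 1)) + bce(D(G(z).detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()
    # Generator update (non-saturating form): push D(G(z)) towards 1.
    opt_g.zero_grad()
    loss_g = bce(D(G(z)), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```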


3 GAN-Based Bearing Anomaly Detection Method


In the condition monitoring of a specific rolling bearing, the data collected in the early stage are normal samples. In order to detect the anomaly of the bearing as soon as a defect occurs, without anomalous samples, a new GAN model is established in this paper. The proposed model is trained using only the normal samples; that is to say, the normal samples are taken as the real samples in Fig. 1. During the testing process, the newly collected samples are taken as the real samples successively, and the generator and discriminator are kept fixed. The latent vector is updated by calculating the dissimilarity between a real sample and the corresponding generated sample. Since the latent space has smooth transitions [18], in the ideal case the best latent vector z* produces a generated sample that is as similar as possible to the real sample x. If a new sample corresponds to the normal condition, the dissimilarity between the generated sample and the real sample will be very small. On the contrary, if the new sample corresponds to an abnormal condition, due to the different data distributions of the normal and anomalous samples, the optimal generated sample G(z*) will hardly reproduce the real sample. Therefore, the dissimilarity between the real sample x and the generated sample G(z*) is a good indicator of the anomaly degree. The flowchart of the proposed GAN-based bearing anomaly detection method is shown in Fig. 2, where I represents the total number of iterations during the training process and K represents the total number of latent-vector updates during the testing process. Firstly, the GAN model is trained with backpropagation by the bearing vibration data under normal condition after preprocessing via the Fourier transform and normalization
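A minimal sketch of the latent-vector update described above: the latent vector z is searched for a fixed, already trained generator, and the remaining dissimilarity is used as the residual loss (anomaly indicator). PyTorch, the number of updates and the learning rate are assumptions of this sketch.

```python
import torch

def residual_loss(x_new, G, n_updates=500, latent_dim=32, lr=1e-2):
    """Search z* minimising the dissimilarity ||x - G(z)|| for a fixed trained generator G,
    and return the residual loss of the new sample x_new (shape: 1 x sample_dim)."""
    for p in G.parameters():
        p.requires_grad_(False)  # the generator stays fixed during testing
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_updates):
        opt.zero_grad()
        loss = torch.sum(torch.abs(x_new - G(z)))  # dissimilarity between real and generated sample
        loss.backward()
        opt.step()
    with torch.no_grad():
        return torch.sum(torch.abs(x_new - G(z))).item()
```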


The shape parameter β represents the failure rate trend of the reliability bathtub curve (Fig. 2): decreasing (β < 1), constant (β = 1) or increasing (β > 1). The scale parameter η represents the time by which 63.2% of the population has failed. The cumulative distribution function F(t) (also known as the failure distribution) and the reliability function R(t) are
$$F(t) = 1 - e^{-\left(\frac{t}{\eta}\right)^{\beta}} \quad (2)$$
$$R(t) = e^{-\left(\frac{t}{\eta}\right)^{\beta}} \quad (3)$$
while the failure rate is
$$\lambda(t) = \frac{\beta}{\eta}\left(\frac{t}{\eta}\right)^{\beta - 1} \quad (4)$$

The historical wear measurements from year 1999 to 2017 of various metro lines were filtered to consider only the curves of sharper radius R300 (represented by data points of radius R275 to R350) and radius R500 (represented by data points of radius R475 to R550). Other information pertaining to the rail geometry (chainage to determine location, NSEW direction, rail foundation type) was preserved. In the rail-wheel contact at the curves, head wear is observed to be more significant on the low rail, while gauge corner wear dominates on the high rail. Permissible rail wear values have been specified for wear at the gauge corner (W3 < 18 mm) and at the head (W1 < 16 mm) (Fig. 3).


Fig. 3 Wear measurement of rail profile

(a) High rail gauge corner wear degradation (W3 < 18 mm)

(b) Low rail head wear degradation (W1 < 16 mm)

Fig. 4 Degradation plots for radius R300 tunnel curves

3 Wear Degradation Analysis Linear regression is used to fit the historical wear measurements [9], which are then extrapolated to the critical values to obtain time-to-failure information. The results for the sharp-radius R300 curves in the tunnel sections are presented in logarithmic scale in Fig. 4. Similar regressions were conducted for the R500 curves in the tunnel sections (Fig. 5) and in the viaduct sections of the track (Fig. 6).
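A minimal sketch of the extrapolation step, assuming a simple linear wear trend per curve; the measurement values in the example are hypothetical and only illustrate the procedure.

```python
import numpy as np

def time_to_failure(years, wear, wear_limit):
    """Fit a linear wear trend and extrapolate to the permissible wear limit.
    Returns the estimated year at which the limit is reached."""
    slope, intercept = np.polyfit(years, wear, deg=1)
    if slope <= 0:
        return np.inf  # no increasing trend -> no predicted failure
    return (wear_limit - intercept) / slope

# Hypothetical example: gauge-corner wear (W3) measurements for one R300 curve.
years = np.array([1999, 2003, 2007, 2011, 2015, 2017])
w3 = np.array([2.1, 4.8, 7.4, 9.9, 12.6, 14.0])  # mm, illustrative values only
print(time_to_failure(years, w3, wear_limit=18.0))
```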

4 Lifetime Data Analysis The wear failure mode in rails is expected to show an increasing failure rate trend, corresponding to the wear-out phase of the bathtub curve (Fig. 2). This can be represented by a Weibull distribution with β > 1, and is corroborated by the estimated Weibull parameters summarized in Table 1. Based on the lifetime data analysis of the sharp-radius R300 tunnel section curves (Fig. 7), the wear-out behaviour is more significant (β > 2) for the high rail gauge


(a) High rail gauge corner wear degradation (W3 < 18 mm)

(b) Low rail head wear degradation (W1 < 16 mm)

Fig. 5 Degradation plots for radius R500 tunnel curves

(a) High rail gauge corner wear degradation (W3 < 18 mm)

(b) Low rail head wear degradation (W1 < 16 mm)

Fig. 6 Degradation plots for radius R500 viaduct curves

Table 1 Summary of Weibull parameters

| Curve radius, rail (high/low) | β | η (years) |
|---|---|---|
| R300 (tunnel, high rail) | 2.18 | 25.2 |
| R300 (tunnel, low rail) | 1.57 | 37.0 |
| R500 (tunnel, high rail) | 1.35 | 20.9 |
| R500 (tunnel, low rail) | 1.59 | 24.8 |
| R500 (viaduct, high rail) | 1.42 | 25.1 |
| R500 (viaduct, low rail) | 1.88 | 58.9 |

corner wear, while slight wear-out behaviour is observed for the low rail head wear (1 < β < 2). For the moderately sharp curves of radius R500, the wear for both viaduct and tunnel foundation types was observed to exhibit slight wear-out behaviour (1 < β < 2). Based on the Weibull parameters, the reliability of the various cases is compared (Fig. 8). The reliability for wear of the low rails in the R500 viaduct and R300 tunnel
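The reliability comparison of Fig. 8 can be reproduced from Eq. (3) and the parameters of Table 1, for example as follows (plotting omitted):

```python
import numpy as np

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)^beta), Eq. (3)."""
    return np.exp(-(t / eta) ** beta)

# Weibull parameters (beta, eta in years) taken from Table 1.
cases = {
    "R300 tunnel, high rail": (2.18, 25.2),
    "R300 tunnel, low rail": (1.57, 37.0),
    "R500 tunnel, high rail": (1.35, 20.9),
    "R500 tunnel, low rail": (1.59, 24.8),
    "R500 viaduct, high rail": (1.42, 25.1),
    "R500 viaduct, low rail": (1.88, 58.9),
}
t = np.linspace(0, 50, 101)
for name, (beta, eta) in cases.items():
    r = weibull_reliability(t, beta, eta)
    print(f"{name}: R(25 years) = {r[50]:.2f}")
```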


(a) High rail gauge corner wear Weibull plot

(b) Low rail head wear Weibull plot

Fig. 7 Weibull plots for R300 curves at tunnel sections


Fig. 8 Reliability of rail based on wear failure modes for various conditions

are generally higher than that of the high rail wear and the low rail R500 tunnel. In general, the reliability of the low rail wear is higher than that of the high rail, while wear is more prominent at the tunnel compared to the viaduct section.

5 Conclusion An approach to predict rail track wear degradation based on historical rail wear records is presented. Critical rail track radius of R300 for tunnels and R500 (tunnel and viaduct) is compared, showing that wear is more prominent at the tunnel, and that high rail gauge corner wear is more prominent than low rail head wear. Low rail head


wear for cases of R300 tunnel and R500 viaduct is shown to be less prominent than the other cases. The Weibull parameters estimated are used to generate reliability plots to plan timely maintenance and replacement of rails. Acknowledgements This research work was conducted in the SMRT-NTU Smart Urban Rail Corporate Laboratory with funding support from the National Research Foundation (NRF), SMRT and Nanyang Technological University; under the Corp Laboratory @ University Scheme.

References 1. Gorjian N, Ma L, Mittinty M, Yarlagadda P, Sun Y (2010) A review on degradation models in reliability analysis. In: Engineering asset lifecycle management, pp 369–384 2. Lu CJ, Meeker WQ (1993) Using degradation measures to estimate a time-to-failure distribution. Technometrics 35(2):161–174 3. Ferreira JC, Freitas MA, Colosimo EA (2012) Degradation data analysis for samples under unequal operating conditions: a case study on train wheels. J Appl Stat 39(12):2721–2739 4. Cannon DF, Edel KO, Grassie SL, Sawley K (2003) Rail defects: an overview. Fatigue Fract Eng Mater Struct 26:865–886 5. Dick T, Barkan CPL, Chapman E, Stehly MP (2001) Analysis of factors affecting the location and frequency of broken rails. In: Proceedings of the world congress on railway research, Cologne, Germany 6. Larsson D (2004) A study of the track degradation process related to changes in railway traffic. Licentiate thesis, monograph, Luleå Tekniska Universitet, Luleå 7. Reddy V (2007) Development of an integrated model for assessment of operational risks in rail track, PhD 8. Chattopadhyay G, Kumar S (2009) Parameter estimation for rail degradation model. Int J Perform Eng 5(2):119–130 9. Freitas MA, de Toledo MLG, Colosimo EA, Pires MC (2009) Using degradation data to assess reliability: a case study on train wheel degradation. Qual Reliab Eng Int 25(5):607–629 10. Weibull W (1951) A statistical distribution function of wide applicability. J Appl Mech 13:293– 297

Prognostic of Rolling Element Bearings Based on Early-Stage Developing Faults M. Hosseini Yazdi, M. Behzad, and S. Khodaygan

Abstract Rolling element bearing (REB) failure is one of the most common types of damage in rotating machinery. Accordingly, correct prediction of the remaining useful life (RUL) of REBs is a crucial challenge for improving the reliability of machines. One of the main difficulties in implementing data-driven methods for RUL prediction is choosing proper features that represent the real damage progression. In this article, by using the outcomes of frequency analysis through the envelope method, the initiated/existing defects on the ball bearings are identified. Also, new features based on the developing faults of ball bearings are recommended to estimate the RUL. Early-stage faults in ball bearings usually include inner race, outer race, ball and cage failures. The proposed features represent the contribution of each failure mode to the overall degradation. By calculating the severity of each failure mode, its contribution can be considered as an input to an artificial neural network. Also, the wavelet transform is used to choose an appropriate frequency band for filtering the vibration signal. The laboratory accelerated-life ball bearing data (PROGNOSTIA) are used to validate the method. To reduce random variations in the recorded vibration data, which are common in real-life experiments, a preprocessing algorithm is applied to the raw data. The results obtained using the new features show a more accurate estimation of the bearings' RUL and an enhanced prediction capability of the proposed method. The results also indicate that if the contribution of each failure mode is considered as the input of the neural network, the RUL is predicted more precisely. Keywords Prognostic · Neural network · Rolling element bearing · Vibration · Remaining useful life

M. Hosseini Yazdi · M. Behzad (B) · S. Khodaygan Mechanical Engineering, Sharif University of Technology, Azadi St., Tehran, Iran e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_31


1 Introduction Correct failure time prediction of mechanical elements plays a vital role in improving the reliability of production in different industries [1, 2]. Rolling element bearing (REB) failure is one of the most frequent types of damage in rotating machines, so researchers have worked on the remaining useful life (RUL) of REBs. The effectiveness of vibration-based condition monitoring in this field has motivated many researchers to focus on this subject [3]. In general, the approaches used to predict failure times are divided into three general categories: physical models, knowledge-based models and data-driven models. In data-driven models, features of the recorded data are required to detect bearing degradation as well as to estimate the RUL. The features must carry adequate information about the health of the machine. Artificial intelligence (AI) methods are one of the main categories of data-driven models and involve two stages: training (developing an AI model) and predicting the RUL using the recorded data [4, 5]. Researchers have used different features to predict the RUL. Gebraeel et al. [6] employed a neural network (NN) model with the vibration amplitudes at the failure frequencies as input features to predict the RUL. Mahammad et al. [7] used the root mean square (RMS) and kurtosis as inputs to a feed-forward neural network (FFNN) for RUL prediction. Wang et al. [8] investigated the effects of using a recurrent neural network with neuro-fuzzy systems for failure time prediction of bearings. Zhao et al. [9] offered a method based on time-frequency domain features to predict the RUL. In this study, new features based on the developing faults of ball bearings are suggested to estimate the RUL. The FFNN method has been used for prediction. By calculating the severity of each failure mode, the contribution of each failure mode can be considered as an input to an artificial neural network. Lastly, the outcomes of using the new features and commonly used features are compared.

2 Feature Selection 2.1 Rolling Bearing Failure Modes Rolling bearings are among the most commonly used elements in industry. Randall and Antoni [10] studied advanced signal processing techniques for REB diagnostics. The frequencies that appear in the vibration signal in the case of various faults play an essential role in frequency analysis, and they can be computed from the geometric properties and the shaft speed. The equations for the different frequencies are as follows (Eqs. (1)–(3)):
$$\text{Ball pass frequency, outer race:} \quad BPFO = \frac{n f_r}{2}\left(1 - \frac{d}{D}\cos\phi\right) \quad (1)$$
$$\text{Ball pass frequency, inner race:} \quad BPFI = \frac{n f_r}{2}\left(1 + \frac{d}{D}\cos\phi\right) \quad (2)$$
$$\text{Ball (roller) spin frequency:} \quad BSF = \frac{D}{2d}\, f_r \left(1 - \left(\frac{d}{D}\cos\phi\right)^{2}\right) \quad (3)$$
where n is the number of rolling elements, d is the ball diameter, D is the pitch diameter, φ is the angle of the load from the radial plane, and f_r is the shaft speed. In envelope analysis, the signal in the resonance region is bandpass filtered, and then the fast Fourier transform (FFT) is taken. The main challenge is the determination of the filter range, especially when the fault signatures are weak.
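A small helper, assuming the standard kinematic formulas above, for computing the bearing characteristic frequencies; the geometry values in the example are hypothetical and do not come from the PROGNOSTIA data sheet.

```python
import math

def bearing_frequencies(n_balls, d, D, phi_deg, f_shaft):
    """Bearing characteristic frequencies of Eqs. (1)-(3) in Hz.
    n_balls: number of rolling elements, d: ball diameter, D: pitch diameter,
    phi_deg: contact angle in degrees, f_shaft: shaft speed in Hz."""
    c = (d / D) * math.cos(math.radians(phi_deg))
    bpfo = 0.5 * n_balls * f_shaft * (1 - c)
    bpfi = 0.5 * n_balls * f_shaft * (1 + c)
    bsf = (D / (2 * d)) * f_shaft * (1 - c ** 2)
    return bpfo, bpfi, bsf

# Hypothetical geometry, shaft speed 1800 rpm = 30 Hz (first operating condition).
print(bearing_frequencies(n_balls=13, d=3.5, D=25.6, phi_deg=0.0, f_shaft=30.0))
```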

2.2 Selecting Appropriate Band Filter for Envelope Analysis In this paper, the Shannon entropy of the filtered signal (wavelet coefficients) is adopted as the objective function of the algorithm. As the inner race defect frequency (BPFI) is the largest bearing characteristic frequency (BCF), the bandwidth is chosen as 5·BPFI for all scales, and the value of the center frequency f_0 is set equal to the sampling frequency f_s [11]. There are many approaches to calculate the optimum scale parameter S. To select the optimal scale range, some constraints need to be taken into account, as follows. According to the Nyquist sampling theorem, for the upper cut-off frequency (lowest bound of the scale):
$$\frac{f_s}{S_{min}} + \frac{5\,BPFI}{2} \leq \frac{f_s}{2.5} \;\;\Rightarrow\;\; S_{min} \geq \frac{f_s}{0.4 f_s - 2.5\,BPFI} \quad (4)$$
In order to decrease the interference of shaft harmonic effects, the lower cut-off frequency is required to be 25 times larger than the shaft speed:
$$\frac{f_s}{S'_{max}} - \frac{5\,BPFI}{2} \geq 25 f_{shaft} \;\;\Rightarrow\;\; S'_{max} \leq \frac{f_s}{25 f_{shaft} + 2.5\,BPFI} \quad (5)$$
In the continuous wavelet transform (CWT), the admissibility condition implies that the wavelet has no zero-frequency component; in other words, the wavelet function has zero mean. To achieve this condition, f_0/σ should be large enough. If f_0/σ > 1.3, the admissibility condition is satisfied with an acceptable approximation [12]. So, an additional constraint on the lower frequency is considered as
$$\frac{f_0}{\sigma} = \frac{f_s}{\sigma S''_{max}} > 1.3 \;\;\Rightarrow\;\; S''_{max} < \frac{f_s}{1.3\,\sigma} \quad (6)$$
Therefore, to select the upper limit of the scale, both (5) and (6) must be satisfied, so the upper scale limit is selected as
$$S_{max} = \min\left(S'_{max},\, S''_{max}\right) \quad (7)$$
In conclusion, the range of the scale is selected as
$$\frac{f_s}{0.4 f_s - 2.5\,BPFI} < S_i < S_{max} \quad (8)$$
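The scale limits of (4)-(8) can be computed directly; in the sketch below, the BPFI and the wavelet bandwidth parameter σ are placeholder values, while only the 25.6 kHz sampling rate and the 1800 rpm shaft speed come from the experiment described later.

```python
def scale_range(f_s, bpfi, f_shaft, sigma):
    """Scale limits of Eqs. (4)-(8) for the CWT-based band selection (frequencies in Hz)."""
    s_min = f_s / (0.4 * f_s - 2.5 * bpfi)          # Eq. (4)
    s_max_1 = f_s / (25.0 * f_shaft + 2.5 * bpfi)   # Eq. (5)
    s_max_2 = f_s / (1.3 * sigma)                   # Eq. (6)
    s_max = min(s_max_1, s_max_2)                   # Eq. (7)
    return s_min, s_max                             # valid scales: s_min < S_i < s_max, Eq. (8)

# Illustrative call: PROGNOSTIA sampling rate, 1800 rpm shaft, hypothetical BPFI and sigma.
print(scale_range(f_s=25600.0, bpfi=170.0, f_shaft=1800 / 60, sigma=500.0))
```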

2.3 Degradation Feature Extraction The steps of the recommended approach for obtaining the features are described in Fig. 1; the full description is as follows: Step 1. Transform the time-domain signals into the frequency domain by employing the fast Fourier transform (FFT): one cycle includes 2560 samples of the vibration signal, which are transformed into the frequency domain by applying the FFT. Step 2. Frequency-wise plot: the amplitude at a fixed frequency (e.g., BPFI) varies over the different cycles. Accordingly, the amplitudes at a constant frequency are collected over the various cycles, which is known as a frequency-wise plot. This is performed for three frequencies: BPFI, BPFO and BSF. The amplitudes of the envelope spectrum at the defect frequencies of the inner ring, outer ring and balls are denoted IF, OF and BF, respectively.

Fig. 1 Degradation feature extraction
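The following sketch illustrates the frequency-wise feature extraction of Fig. 1, assuming the filtered (envelope) signal of each 2560-sample cycle is already available; the feature names IF, OF and BF and their characteristic frequencies are taken from the text, while the array layout is an assumption.

```python
import numpy as np

def frequency_wise_features(cycles, f_s, bcf_hz):
    """Frequency-wise feature extraction (Sect. 2.3, Fig. 1).

    cycles : array of shape (n_cycles, 2560), one filtered/envelope record per cycle
    f_s    : sampling frequency in Hz
    bcf_hz : mapping of feature name to characteristic frequency, e.g. {"IF": BPFI, "OF": BPFO, "BF": BSF}

    Returns one amplitude trend per characteristic frequency (one value per cycle).
    """
    n = cycles.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / f_s)
    spectra = 2.0 / n * np.abs(np.fft.rfft(cycles, axis=1))   # Step 1: FFT of every cycle
    trends = {}
    for name, f in bcf_hz.items():                            # Step 2: amplitude at each BCF
        idx = int(np.argmin(np.abs(freqs - f)))
        trends[name] = spectra[:, idx]                        # frequency-wise plot over the cycles
    return trends
```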


3 Experimental Data In the PHM 2012 conference [13], a dataset of ball bearing accelerated life tests (PROGNOSTIA) was published. PROGNOSTIA includes 17 ball bearing run-to-failure tests under three operating conditions. In this experiment, an electric motor with a power of 250 W running at 2830 rpm drives the shaft. A gearbox reduces the electric motor's speed to the required value of less than 2,000 rpm. In these tests, both the horizontal and vertical vibration of the bearing housing have been monitored, and the accelerometers have recorded the vibration signals at 25.6 kHz every 10 s. It should be mentioned that 0.1 s of every 10 s is considered as one cycle, so there are 2560 samples in each cycle (Fig. 2). In this paper, the first operating condition is chosen to examine the proposed approach, because the applied load in this operating condition is equivalent to the dynamic load of the ball bearing. The specifications of the first operating condition are presented in Table 1. One of the standard critical features in the time domain is the RMS of the acceleration signal [12]. In many cases, RMS is a useful tool for following the failure process. Thus, the RMS trends of the seven datasets in operating condition one are plotted in Fig. 3. As can be seen, the RMS feature of the acceleration signal in bearings 1

Fig. 2 Overview of PROGNOSTIA [13]

Table 1 Specifications of PROGNOSTIA test
OC | Speed (RPM) | Load (N) | Number of tests
1 | 1800 | 4000 | 7


Fig. 3 RMS trends for first working condition

and 3 has a three-stage pattern of degradation (regular operation, slow and fast degradation). In bearings 5, 6 and 7, this feature does not appear to be a good indication of failure from the beginning of the failure process. In these circumstances, RMS cannot be a proper and reliable feature for estimating RUL and the defect growth process.

4 Artificial Intelligence Modeling In this paper, an FFNN is used to predict the RUL of the ball bearings. This network comprises two layers. The number of neurons in the first layer is ten, which was selected by trial and error. A nonlinear sigmoid function is taken for the output of the first layer, and a linear function is used to produce the final output of the network. To reduce noise in the feature trends, a smoothing algorithm is applied to all the suggested features. In the FFNN model, 55% of the data points are used for training, 30% of the points are employed for validation, and the remaining 15% of the points are used for testing. In the prediction stage, the features obtained from the vibration data should be smoothed with the same procedure that was applied to the training data. Before presenting the results, it is helpful to summarize the proposed algorithm as a flowchart. The main steps of the proposed method are shown in Fig. 4.
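A minimal sketch of such a two-layer network is shown below using scikit-learn's MLPRegressor (the chapter does not state which implementation was used); the feature matrix and RUL target are synthetic placeholders, not PROGNOSTIA data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic placeholders: three smoothed feature trends (IF, OF, BF) that grow towards
# the end of life, and a linear RUL target in percent (100 % at the start, 0 % at failure).
rng = np.random.default_rng(0)
n_cycles = 2000
rul = np.linspace(100.0, 0.0, n_cycles)
features = np.outer(100.0 - rul, [0.010, 0.007, 0.004]) + rng.normal(0.0, 0.05, (n_cycles, 3))

# 55 % training, 30 % validation, 15 % test, matching the split described above.
X_train, X_rest, y_train, y_rest = train_test_split(features, rul, train_size=0.55, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, train_size=30 / 45, random_state=0)

# Two-layer network: ten sigmoid ("logistic") hidden neurons and a linear output unit.
model = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic", max_iter=5000, random_state=0)
model.fit(X_train, y_train)
print("validation R^2:", model.score(X_val, y_val))
```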

5 Results In summary, the algorithm used in this paper can be explained as follows: The first step is to remove the noise from the vibration signal, which is done by wavelet transform. The next step is to calculate the features associated with the defect frequency of the REB. Of course, determining the proper band for filtering the signal is very important.

Fig. 4 Flowchart of the proposed method (blocks: vibration signal, wavelet transform, de-noised signal, Shannon entropy, feature extraction, smoothed features, neural network training, RUL prediction)

Fig. 5 a Frequency response—b Envelope Analysis (Signal No. 2750, Test 1-1)

According to Fig. 5a, in test 1-1 the resonance band is between 500 and 3000 Hz. Nevertheless, there is no sign of faults when the filter is applied within this range. However, the frequency band from 4000 to 6000 Hz, which is selected by the Shannon entropy method, is the most suitable band for the fault signatures to appear (Fig. 5b). When a fault strikes the mating surface, the resulting impacts can excite resonances in the inner ring, outer ring, balls, sensors and other components of the system, so more than one resonance occurs. There are several resonances in the frequency response, and they certainly do not all have the same intensity. In conventional methods, the resonance region is selected for filtering. In many cases, especially when defects are in the initial stages of growth, this selected range involves other factors [14]. In the proposed method, both strong and weak resonances are investigated, and the resonance that is most indicative of the defect and least affected by disturbing factors (noise) is selected. It should be noted that each test includes a large number of time-domain signals, which should be converted into the frequency domain. The signal amplitudes can then be selected at the BCFs. These values are plotted for all the signals and represent the share of each failure mode in the failure. Table 2 shows the characteristic frequencies of the inner ring, outer ring and balls for the first operating condition.

Table 2 Bearing characteristic frequencies
OC | BSF (Hz) | BPFO (Hz) | BPFI (Hz)
1 | 107.7 | 168.3 | 221.6


In Fig. 6, the variation in the amplitude of the BCFs is shown for Test 1-1. According to Fig. 6, it can be seen that all three features have an increasing trend at the end of life. The more interesting result is that if these values are used as inputs to the neural network, the RUL is estimated more precisely. RMS has been used for comparison with the new features because it follows the degradation trend over the lifetime, has been used in the literature as the main feature for RUL prediction, and performs better than other standard features. Additionally, information on the input and output of the neural network in the two cases is presented in Table 3. In Fig. 7, the life expectancy results for test 1-3 are given for the two cases. The red line shows the real life and the blue line the predicted life. Estimations of the RUL are shown as a percentage: for the real life, at the start of the test all of the life is left (100%), and no life is left at the end of the experiment (0%). The RUL reduction is also considered linear. It can be seen that when the new features (IF, OF, BF) are used, the RUL predictions are more accurate. This procedure is repeated for all bearings, and the error values for the two cases are given in Table 4. The error is calculated according to "(9)":

$$Error = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{RUL_{predicted} - RUL_{real}}{RUL_{real}}\right| \quad (9)$$
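Equation "(9)" can be implemented directly; the helper below is a minimal sketch. Points with RUL_real = 0 (the very last instant of the test) must be excluded beforehand, otherwise the relative error is undefined; how the chapter handles this is not stated.

```python
import numpy as np

def rul_error(rul_predicted, rul_real):
    """Mean relative RUL error, Eq. (9)."""
    rul_predicted = np.asarray(rul_predicted, dtype=float)
    rul_real = np.asarray(rul_real, dtype=float)
    return float(np.mean(np.abs(rul_predicted - rul_real) / rul_real))

print(rul_error([80.0, 45.0, 12.0], [90.0, 50.0, 10.0]))  # simple illustrative call
```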

As can be understood from Table 4, the RUL results are more accurate if features that indicate the defects in the ball bearings are used rather than the RMS, which only indicates the overall bearing condition. When a general parameter

Fig. 6 Process of changing the amplitude of the envelope at the failure frequencies: a inner race defect, b outer race defect, c ball defect

Table 3 Neural network specifications in two cases
Case number | Input | Output
1 | RMS | RUL
2 | IF, OF, BF | RUL


Fig. 7 Estimation of the RUL (Right: with first case input, Left: with second case inputs)

Table 4 Error values of the bearing tests in two cases
Predicting dataset | Training datasets | Error in case 1 | Error in case 2
1 | 1–3, 1–4, 1–5, 1–6, 1–7 | 0.425 | 0.352
2 | 1–1, 1–4, 1–5, 1–6, 1–7 | 0.578 | 0.416
3 | 1–1, 1–4, 1–5, 1–6, 1–7 | 0.364 | 0.297
4 | 1–1, 1–3, 1–5, 1–6, 1–7 | 0.692 | 0.513
5 | 1–1, 1–3, 1–4, 1–6, 1–7 | 0.257 | 0.176
6 | 1–1, 1–3, 1–4, 1–5, 1–7 | 0.384 | 0.261
7 | 1–1, 1–3, 1–4, 1–5, 1–6 | 0.457 | 0.329
Average numerical testing error | | 0.451 | 0.334

like RMS is used, there is no distinction between the components, and they are all treated the same.

6 Conclusion In this paper, new features based on the developing faults of ball bearings are suggested for estimating the RUL. These features represent the share of each failure mode. Also, the wavelet transform is used to choose a proper frequency band for filtering the vibration signal. The laboratory dataset of the bearing accelerated life tests (PROGNOSTIA) is employed to validate the method. The results indicate that if the contribution of each failure mode is considered as the input of the neural network, then the life is estimated more precisely. Applying this algorithm has two advantages: (1) the life is estimated more precisely, and (2) the failure mode of the ball bearing is known, so maintenance actions can be taken to reduce the damage. In these situations, one should use condition-monitoring


methods that show the state of the equipment at any given time and can be used to estimate the lifetime more accurately.

References

1. Albrecht PF et al (1987) Assessment of the reliability of motors in utility applications. IEEE Trans Energy Conv 3:396–406
2. Crabtree CJ, Donatella Z, Tavner PJ (2014) Survey of commercially available condition monitoring systems for wind turbines
3. Randall RB (2011) Vibration-based condition monitoring: industrial, aerospace and automotive applications. John Wiley & Sons
4. Peng Y, Dong M, Zuo MJ (2010) Current status of machine prognostics in condition-based maintenance: a review. Int J Adv Manuf Technol 50:297–313
5. Huang et al (2015) Support vector machine based estimation of remaining useful life: current research status and future trends. J Mech Sci Technol 29:151–163
6. Gebraeel N et al (2004) Residual life predictions from vibration-based degradation signals: a neural network approach. IEEE Trans Ind Electron 51:694–700
7. Mahamad AK, Saon S, Hiyama T (2010) Predicting remaining useful life of rotating machinery based artificial neural network. Comput Math Appl 60:1078–1087
8. Wang WQ, Golnaraghi MF, Ismail F (2004) Prognosis of machine health condition using neuro-fuzzy systems. Mech Syst Sig Process 18:813–831
9. Zhao M, Tang B, Tan Q (2016) Bearing remaining useful life estimation based on time–frequency representation and supervised dimensionality reduction. Measurement 86:41–55
10. Randall RB, Antoni J (2011) Rolling element bearing diagnostics—a tutorial. Mech Syst Sig Process 25(2):485–520
11. Shi DF, Wang WJ, Qu LS (2004) Defect detection for bearings using envelope spectra of wavelet transform. J Vib Acoust 126(4):567
12. Liao H, Zhao W, Guo H (2006) Predicting remaining useful life of an individual unit using proportional hazards model and logistic regression model. In: Reliability and maintainability symposium, pp 127–132
13. Nectoux P et al (2012) An experimental platform for bearings accelerated degradation tests. In: IEEE international conference on prognostics and health management, PHM 2012, pp 1–8. IEEE Catalog Number: CPF12PHM-CDR
14. Hosseini M, Yazdi M, Behzad B, Ghodrati A, Vahed T (2019) Experimental study of rolling element bearing failure pattern based on vibration growth process. In: 29th European safety and reliability conference (ESREL), Hannover, Germany

Recent Technologies for Smart and Safe Systems

Fault-Tolerant Actuation Architectures for Unmanned Aerial Vehicles M. A. A. Ismail, C. Bosch, S. Wiedemann, and A. Bierig

Abstract Rising civilian applications that make use of unmanned aerial vehicles (UAVs) demand crucial precautions to minimize safety hazards. Future UAVs are expected to incorporate fault-tolerant architectures for critical on-board systems to ensure compliance with airworthiness certification. Reliability reports of in-service UAVs have shown that flight control actuators are among the leading root-causes of UAV mishaps. In this paper, the current state-of-the-art actuation architectures for UAVs are reviewed to identify technical requirements for certification. This work is part of the TEMA-UAV research project aimed at developing certifiable fault-Tolerant Electro-Mechanical Actuators for future UAVs. Keywords Airworthiness certification for UAVs · Electro-mechanical actuators · Fault-tolerant control

1 Introduction Unmanned aerial vehicles (UAVs) have provided superior performance for many civil and military applications due to their low acquisition and operating costs. However, the reliability level of in-service UAVs is not equivalent to that of general aviation, according to several reliability reports based on about 290,000 flight hours (FH) [1–3]. The US Office of the Secretary of Defense has stated, "improving UA reliability is the single most immediate and long-reaching need to ensure their success" [1]. UAVs currently have a loss rate ten times worse than that of the manned aircraft category, signaling that the

Fig. 1 Average source of system failures for in-service UAV fleets [1] (IAI fleet and US army fleet). IAI = Israeli Aircraft Industries

root-causes of these losses should be identified and mitigated [3]. Here, the scope is UAVs with a takeoff weight > 25 kg because of their certification potential [4]. System failures for two large fleets (the US army and Israeli Aircraft Industries) are shown in Fig. 1, in which flight control systems are the second highest root-cause of UAV failures. Flight control system failures (Fig. 1) have been investigated, with the following findings [1, 2]:
• Flight control system failures are usually failures of the electro-mechanical actuators (EMAs).
• The root-causes of EMA failures within the US army fleets can be grouped into:
– EMAs designed for neither aerospace nor UAVs (e.g., Pioneer UAVs, with a loss rate of 334 per 100,000 FH compared to 1 per 100,000 FH for general aviation); and
– EMAs that partially involve components matching the manned aircraft category (e.g., Predator UAVs, with a loss rate of 32 per 100,000 FH).
The superior reliability level of the manned aircraft category is maintained by certification requirements that are being extended to UAVs [4]. Recently, the European Aviation Safety Agency published safety regulations for UAVs considering three UAV categories: open (takeoff weight < 25 kg), specific, and certified. A detailed discussion of the certification requirements for different UAVs is provided in a follow-up paper [4]. In this paper, the scope is flight control actuators that are directly influenced by the expected certification conditions for UAVs: C1. No single failure will result in a catastrophic-failure condition (evaluated by a UAV-level functional hazard analysis); and C2. Each catastrophic-failure condition is extremely improbable. The first condition implies a fault-tolerant control (FTC) design, as the flight control function must not be interrupted after the first failure [5]. The second condition relates to a quantitative probability of occurrence for possible catastrophic scenarios. Two FTC approaches have been cited [6–9] for mitigating flight control actuator failures: the aircraft and actuator FTCs shown in Fig. 2. The aircraft FTC is implemented in the flight control computer (FCC), which activates modified flight control laws to compensate for actuator failures, as described in


Fig. 2 FTCs for accommodating UAV actuator failures: aircraft-based FTC (aerodynamic model monitoring and reconfiguration in the flight control) and actuator-based FTC (health monitoring, reconfiguration and a backup actuator)

[6, 7]. In [6], a multi-variable FTC was developed for controlling a fixed-wing UAV with a defective actuator, using only one operational control surface. In [7], an FTC with a bank of observers was used to compensate for jammed flight control actuators, but it was limited to small jammed angles, making it insufficient for certification requirements. The FTC actuation architectures are based on subsystem redundancies and health monitoring functions [8, 9]. In this paper, the current state-of-the-art FTC actuation technologies and architectures for UAVs are reviewed in Sects. 2 and 3, respectively. The technical challenges for airworthiness certification and a new electrical fault-tolerant topology are discussed in Sect. 4.

2 Fault-Tolerant Technologies 2.1 Electrical Motors Several electrical motor technologies have been evaluated briefly in the literature for flight control actuators [9–14]. Among those technologies are machines without magnets, such as induction machines (IMs), switched reluctance machines (SRMs) and synchronous reluctance machines (SynRMs), and machines with permanent magnets (PMs), such as surface-mounted permanent-magnet synchronous machines


(SPMSM or PMSM) and interior permanent-magnet synchronous machines (IPMSM). IMs are known for their ruggedness, reliability, and inexpensive cost, but they provide limited power-to-weight density [9]. SRMs are considered more widely in fault-tolerant applications due to their multiple single-phase winding configurations and their inherent capability to continue the operation after a phase fault. However, they also exhibit comparatively high torque ripples (especially during a phase loss), comparatively higher acoustic noise, and a reduced power density of approximately 30–50% compared to SPMSMs [10, 11]. SynRMs use distributed three-phase windings, allowing synchronous rotor operation with the stator magnetic field. Their torque ripples are comparable to those of the SPMSM and IPMSM, while their power density lies between the SPMSM and the IM [12] and is strongly dependent on the rotor geometry. PM-free machines have the advantages of no cogging or drag torque (in case of a short circuit failure), lower cost, easier manufacture, and greater robustness against mechanical and temperature shocks. Due to their low back-electromotive force, these machines lead to better inverter control failsafe operation, since the inverter does not need to deal with uncontrollable rectification, even at high speed [13]. PM machines, by contrast, are widely used as flight control actuators because of their superior power-to-weight density and their relatively simple control techniques [9, 14]. The IPMSM has, in comparison with the SPMSM, a higher reluctance torque and, therefore, a higher power density. Nevertheless, PM machines have, besides cogging effects (if the rotor or stator is not skewed), several fault-tolerance-related drawbacks, such as drag torque (caused by short circuit faults), electrical and thermal phase coupling (triggered by distributed windings), magnet (hysteresis) losses, and comparatively high back-electromotive force. PMs are also prone to change over time as a function of temperature stresses (aging) and field-weakening operation, and they impose a speed limitation if the magnets are surface-mounted instead of buried (e.g., SPMSM). The permanent-magnet-assisted synchronous reluctance machine (PMa-SynRM) is a hybrid reluctance machine possessing PM elements partially inserted into the rotor flux barriers for increased power density and, consequently, increased manufacturing complexity [12]. Instead of rare-earth magnets, cheaper ferrites can be used, which leads to decreased power density and actuator temperature range. That being said, PMa-SynRMs enjoy the aforementioned PM-less advantages to a certain degree as well as higher torque-to-weight, in comparison with the SynRM, and an extended field-weakening range compared to the SPMSM and IPMSM [13]. Efficiency and power density depend strongly on the rotor geometry and the inserted magnetic material. Therefore, it is scalable between the IPMSM and SynRM [15, 16]. Due to design similarity, the reliability of the PMa-SynRM can be considered equivalent to or better than that of the IPMSM, as less magnet material is used. Less magnet material also grants a lower drag torque compared to conventional PM machines in case of a short circuit fault. Consequently, and in a fault-tolerant context, conventional PM machines should be overrated from two to four times the nominal ratings [11] depending on low- or high-speed operation.
Yet, because of lower drag torque and magnet material, the PMa-SynRM requires a smaller overrating, making it an interesting alternative to conventional PM and reluctance machines by size and weight.


2.2 Electrical Power Drive The power drive is responsible for controlling electric power and delivering it to the motor through power transistors (e.g., MOSFETs or IGBTs [14]). The arrangement of the power transistors depends on the motor configuration, namely multiple independent three-phase modules (MITPMs) and multiple independent single-phase modules (MISPMs). MITPMs encompass one or more motors divided into sets of three-phase star-connected modules, each powered by a three-phase transistor bridge. Rated power is supplied by n sets of three-phase modules connected in parallel. For example, in the 3 + 3 system shown in Fig. 3 (left), two sets of three-phase modules are redundant in one rotor [9, 17]. MISPMs encompass a single motor divided into n electrically isolated phases, each powered by an H-bridge. Rated power is supplied by n sets of H-bridge modules. For example, in the 2 + 1 system shown in Fig. 3 (right), three sets of isolated single-phase modules are redundant in one rotor. The motor can still be operated with two active phases (i.e., with one faulty phase) using modified pulse width modulation (PWM) waveforms [14]. The isolated phases may comprise three to six phases [8, 9].

3 Fault-Tolerant Actuation Architectures Fault-tolerant actuation architectures, A1–A6 in Fig. 4, comprise control units (CONs), monitoring units (MONs), position sensors for the actuator deflection angle, communication channels (COMs) to FCCs, clutches, and electric motors based on the technologies in Sect. 2.1. All of these architectures include fault-tolerant electrical motors configured as dual motors (e.g., A1, A3, A4, and A5) or as single motors with backup windings (e.g., A2 and A6). The mechanical power of the dual electric motors is transferred to the control surface by using either a torque-summing gearbox (e.g., A3 and A4), an electric clutch (e.g., A5) or a directly controlled surface linkage (e.g., A1). Sensor redundancies are present for the position sensors because of their high failure rates. A4 and A5 feature a typical duplex layout used for general-purpose EMAs, such as those in many in-service UAVs, but they are not fault-tolerant in flight. A6 involves a voter to reliably tolerate incorrect commands from triple FCCs, but it has only one control unit, making it insufficient to keep the actuator operational after its first failure. A5 was developed for optionally piloted vehicles (OPVs, a category between UAVs and certified manned aircraft [18]). Further details on how A5 manages in-flight reconfiguration for optionally piloted vehicles are not available.

Fig. 3 Widely cited drive topologies for 3 + 3 (left) and for 2 + 1 (right) [9]

Fig. 4 Examples of fault-tolerant actuation architectures for UAVs: A1 [20], A2 [21], A3 [22], A4 [23], A5 [18], A6 [8]. Legend: M motor, C control unit, N monitoring unit, O communication, P position sensor, V voter unit, SG torque-summing gearbox, CL clutch, G gearbox, W winding, D drive unit, CS control surface, H H-bridge of switches, I current sensor, T temperature sensor, PS power supply


4 Discussion 4.1 Electrical Topology An evaluation of electrical topologies, including PMSMs and PMa-SynRMs, is shown in Fig. 5. The complexity is the number of power transistor (PT) switches (e.g., MOSFETs), multiplied by 4 for better illustration. The overratings for the individual power transistors in an SPMSM are calculated in [14]; they mainly comprise additional torque contributions for a partial loss of actuator torque caused by an open-circuit lane (a single phase for MSPMs or a three-phase set for MTPMs) and a drag torque caused by a short-circuit lane at low speeds. The overratings for PMa-SynRMs are estimated by assuming a reduction of the drag torque only, by a factor of 0.9, according to the investigation in [19], while the overrating due to the open-circuit fault remains unchanged. Considering minimum complexity as a vital selection criterion for UAVs, topologies 2 + 1 and 3 + 3 are the best candidates within the MISPMs and MITPMs, respectively. In a 3 + 3 topology, the motor control provides ripple-free torque after losing an active lane. In a 2 + 1 topology, the motor control must use asymmetric switching waveforms to keep the motor running with only two phases [9, 14]. These waveforms cause undesirably high levels of current harmonics, violating the total harmonic distortion requirements set by DO-160 [9], which may be considered for certification. Therefore, a 3 + 3 topology of PMa-SynRMs has good potential for certified fault-tolerant actuation for UAVs.

Fig. 5 Evaluation of different electrical topologies (MSPMs and MTPMs) in terms of complexity (4*PT), PM overrating (%) and PMa-SynRM overrating (%), with a non-fault-tolerant reference case
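As a small illustration of the complexity metric, the sketch below counts power-transistor switches for the two preferred topologies, assuming the usual six switches per three-phase bridge and four per H-bridge; the lane counts follow the description in Sect. 2.2, and the exact values plotted in Fig. 5 are not reproduced here.

```python
def transistor_count(n_lanes, lane_type):
    """Power-transistor count for the drive topologies of Sect. 2.2.

    'three-phase'  : each lane is a three-phase bridge, assumed to use 6 switches (MITPM lane).
    'single-phase' : each lane is an H-bridge, assumed to use 4 switches (MISPM lane).
    """
    switches_per_lane = {"three-phase": 6, "single-phase": 4}
    return n_lanes * switches_per_lane[lane_type]

# 3 + 3 topology: two three-phase lanes; 2 + 1 topology: three isolated single-phase lanes.
for name, (lanes, kind) in {"3+3": (2, "three-phase"), "2+1": (3, "single-phase")}.items():
    print(f"{name}: {transistor_count(lanes, kind)} power-transistor switches")
```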


4.2 Architecture Characterization for Airworthiness Certification The UAV actuation architectures cited in this study can be characterized into three groups:
(1) Non-fault-tolerant actuation architectures
– Single-channel industrial actuators
– Actuator failures may be tolerated in FCCs by modified control laws.
Examples: Pioneer UAVs [2] and [6, 7]
(2) Partial fault-tolerant actuation architectures
– FCC communication channels are fewer than three, which is insufficient for tolerating a drift (a challenging fault) or incorrect FCC commands in general [8, 14].
– On-ground or only partial in-flight reconfiguration after a single critical failure.
Examples: A1 [11], A3 [13], A4 [14], and A5 [15].
(3) Full fault-tolerant actuation architectures
– Triplex communications applied for reliable FCC commands by a voter.
– All catastrophic-failure conditions to which the actuator and its controller are the main contributor or only source are mitigated by fail-operational or failsafe modes.
– In-flight actuator reconfiguration after a single critical failure.
Example: A2 [12]; however, neither a prototype nor a test setup has been reported. This group complies with C1 (Sect. 1), but C2 requires a reliability analysis discussed in a follow-up study [4].

5 Conclusion The limited reliability of in-service UAVs has been significantly attributed to failures caused by flight control EMAs, because they are neither fully fault-tolerant nor qualified for UAVs. Recently, fault-tolerant actuators have been adopted by some commercial UAVs, mainly comprising fault-tolerant capabilities for the electric motors and position sensors. These actuators are mostly based on PM motors to achieve a high power-to-weight density. However, PM motors should be overrated two to four times to compensate for expected electrical failures. Alternatively, the PMa-SynRM in a 3 + 3 drive topology has been discussed here; its lower drag torque and reduced magnet material allow a smaller overrating, making it an interesting alternative to conventional PM-based EMAs in terms of size and weight. The term "fault-tolerant actuator" has been used frequently in the literature to describe EMAs that can partially maintain functionality


after some critical failures. A fault-tolerant actuator for a certified UAV demands more rigorous requirements, namely: (a) all significant failures should be identified within a specific probability of occurrence as defined by a certification body; (b) these failures should be accommodated by fault detection and reconfiguration techniques; and (c) these techniques should be operational online during flight phases. Acknowledgements This work was supported by the TEMA-UAV project through the German

National Aerospace Research Program (Lufo V-3).

References 1. US Office of the Secretary of Defense (2005) Unmanned Aircraft Systems Roadmap 2005– 2030. US Office of the Secretary of Defense, United States. https://fas.org/irp/program/collect/ uav_roadmap2005.pdf. Accessed 2005 2. Israel K, Nesbit R (2004) Defense science board study on unmanned aerial vehicles and uninhabited combat aerial vehicles. US Defense Science Board, United States 3. Sadeghzadeh I, Zhang Y (2011) A review on fault-tolerant control for unmanned aerial vehicles (UAVs). Infotech@Aerospace. p 1472 4. Bosch C et al (2019, under publication) Towards certifiable fault-tolerant actuation architectures for UAVs. In: WCCM 2019, Singapore 5. Lauffs JP, Holzapfel F (2016) Hardware-in-the-loop platform for development of redundant smart actuators. Aircraft Eng Aerosp Technol: Int J 88(3):358–364 6. Venkataraman R, Seiler PJ (2016) Safe flight using one aerodynamic control surface. In: AIAA guidance, navigation, and control conference, p 0634 7. Bateman F, Noura H, Ouladsine M (2011) Active fault diagnosis and major actuator failure accommodation: application to a UAV. In: Advances in flight control systems, InTech 2011, pp 137–158 8. Di Rito G, Galatolo R, Schettini F (2016) Self-monitoring electro-mechanical actuator for medium altitude long endurance unmanned aerial vehicle flight controls. Adv Mech Eng 8(5):1687814016644576 9. Cao W et al (2011) Overview of electric motor technologies used for more electric aircraft (MEA). IEEE Trans Ind Electron 59(9):3523–3531 10. Radun AV (1992) High-power density switched reluctance motor drive for aerospace applications. IEEE Trans Ind Appl 28(1):113–119 11. Pickert V (2019) Fault Tolerant. ECPE, Leinfelden-Echterdingen, Germany 12. Donaghy-Spargo CM (2016) Synchronous reluctance motor technology: opportunities, challenges and future direction. Eng Technol Ref, pp 1–15 13. Wang B et al (2017) A fault-tolerant machine drive based on permanent magnet-assisted synchronous reluctance machine. IEEE Trans Ind Appl 54(2):1349–1359 14. Bennett JW (2010) Fault tolerant electromechanical actuators for aircraft. Ph.D. thesis, Newcastle University 15. Ibrahim M, Rashad E, Sergeant P (2017) Performance comparison of conventional synchronous reluctance machines and PM-assisted types with combined star-delta winding. Energies 10(10):1500


16. Torres López Torres C (2018) Analysis and implementation of a methodology for optimal PMa-SynRM design taking into account performances and reliability, Ph.D. thesis, Universitat Politècnica de Catalunya 17. Zhu J, Niu X (2010) Investigation of short-circuit fault in a fault-tolerant brushless permanent magnet ac motor drive with redundancy. In: 5th IEEE conference on industrial electronics and applications, pp 1238–1242 18. Technical manuals Pegasus GmbH, 10 December 2018. www.pegasus.de 19. Ullah S et al (2016) A permanent magnet assisted switched reluctance machine for more electric aircraft. In: XXII ICEM, pp 79–85 20. Flurry SJ (2010) KR Patent No. 100972516B1 21. Clarkstone JK, Morgan S, White N (2012) Australian Patent No. 2012235920B2 22. Hadeokju (2016) KR Patent No. 101658459 23. Technical manuals Volz GmbH, 10 December 2018. www.volz.de

Towards Certifiable Fault-Tolerant Actuation Architectures for UAVs C. Bosch, M. A. A. Ismail, S. Wiedemann, and M. Hajek

Abstract There is an increasing demand for integrating unmanned aerial vehicles (UAVs) into civilian airspaces. Consequently, onboard systems should be evaluated for possible threats to human life. This paper discusses future certification requirements for flight control actuators. A scheme is proposed for evaluating reliability requirements for flight control electro-mechanical actuators (EMAs) considering different flight control configurations. This work is part of the TEMA-UAV project, which aims at developing certifiable fault-tolerant actuation for future UAVs. Keywords Electro-mechanical actuators · Airworthy certification · Fault-tolerant architectures

1 Introduction Unmanned aerial vehicles (UAVs) are mainly used in leisure products and in military applications. However, due to the continuous technological advancements of electro-mechanical actuators (EMAs), power electronics and controllers [1], as well as their availability at lower costs, a recent study [2] predicts increased demand for UAVs in the next 30 years. Future UAV operation scenarios include not only government authority missions, such as border security, maritime surveillance, and military actions, but also their use for delivery purposes, medical supply, fire-fighting, and agriculture. However, the increasing interest in medium-sized UAVs demands


new certification requirements, including their onboard systems and utilised actuation architectures. This paper focuses on future flight control EMAs and proposes a scheme to evaluate reliability requirements of such a system in consideration of different flight control configurations.

2 UAV Safety Requirements Review 2.1 European Aviation Safety Agency Civil Rulemaking and SC-RPAS.1309 Currently, aviation authorities focus on creating a regulatory basis for UAV safety and certification. Resulting from this dynamic rulemaking, systems manufacturers are often required to anticipate aspects of certification before applicable regulations have been published. Following several review phases, the European Aviation Safety Agency (EASA) recently published regulations (EU) 2019/947 [3] and 2019/945 [4] describing top-level safety targets for manufacturers and operators, considering three different UAV risk categories: open (takeoff weight < 25 kg), specific and certified. The open category aims at low-risk operations and is not the focus of this discussion. The other categories represent two different approaches to UAV certification: 1. Operations within the specific category demand an operational risk assessment in which the operator assesses the mission risk or shows conformity with standard scenarios. Due to geographical or temporal UAV operation, we regard this as a mission-based certification [3, 4]. 2. A certification similar to that needed for manned aviation might be required for the certified category. UAVs being operated over assemblies of people and exceeding a characteristic dimension of 3 m are subject to this category [4]. While EASA has not published Certification Specifications for UAVs yet, the Joint Authorities for Rulemaking on Unmanned Systems (JARUS) proposed CS-LURS (rotorcraft) [5] and CS-LUAS (fixed-wing) [6], both providing regulations for UAVs lighter than 750 kg. Furthermore, the Schiebel S-100 UAV, which has a takeoff weight of 200 kg, was given special conditions based on CS-LURS as a certification basis [7]. For type certification, aircraft development should follow a top-down approach, which is extensively described in guidelines ARP4754 (system development) [8] and ARP4761 (safety assessment) [9]. The first step consists of a structured investigation of potential functional failure conditions on aircraft level, also referred to as functional hazard assessment (FHA). It requires information about criticalities and their accepted quantitative probabilities. Both CS-LURS and CS-LUAS reference AMC RPAS.1309. In 2015, EASA proposed Special Conditions SC-RPAS.1309-01 for UAVs lighter than 750 kg [10]. As depicted in Table 1, higher probabilities are acceptable than in manned aviation [8]. In addition, SC-RPAS.1309 accepts a loss of


Table 1 Acceptable UAV risk for SC-RPAS.1309 [10]
Failure condition classification | NSE | MIN | MAJ | HAZ | CAT
Allowable quantitative probabilities [h^-1] | – | 10^-3 | 10^-4 | 10^-5 | 10^-6
Design assurance level (DAL) | – | D | C | C | B
NSE No safety effect, MIN Minor, MAJ Major, HAZ Hazardous, CAT Catastrophic

Table 2 Acceptable UAV risk according to STANAG 4671 [11]
Failure condition classification | NSE | MIN | MAJ | HAZ | CAT
Allowable quantitative probabilities [h^-1] | – | 10^-3 | 10^-4 | 10^-5 | 10^-6

vehicle as hazardous “where it can be reasonably expected that multiple fatalities will not occur” [10]. The specified design assurance levels (DALs) also show a reduction compared to manned aviation [8, 10].

2.2 Military Regulations Military certification of UAVs often follows STANAG 4671. This standard provides airworthiness requirements for UAVs with takeoff weights between 150 and 20,000 kg. Table 2 denotes accepted probabilities [11].

2.3 Consequences for TEMA-UAV Requirements Sections 2.1 and 2.2 show that
– UAVs exceeding characteristic dimensions of 3 m are likely to be subject to a structured type certification process following new EASA regulations.
– STANAG 4671 and SC-RPAS.1309 represent very similar safety and probability requirements.
The Use Cases considered for TEMA-UAV (see Sect. 4) are in the same size range as the Schiebel S-100 aircraft. Both the planned EASA certification efforts for the S-100 and the Use Cases' dimensions (>3 m) support the decision to plan for a regular type certification. We regard this as a worst-case scenario in case the operator's concept of operations is not eligible for the specific category.


3 A Scheme for Evaluating Flight Control Actuation Architectures Certification requirements are usually described in terms of high-level safety constraints. A transition to low-level reliability constraints is achievable by evaluating possible functional deficiencies for a given flight control architecture. We propose a scheme for evaluating these requirements for flight control EMAs, as shown in Fig. 1. The top-level stage is related to safety objectives that are defined by certification regulations for UAVs flying in non-segregated airspace. This FHA process, as already mentioned in Sect. 2, requires respective UAV criticality and occurrence information (e.g. [10, 11]) on a design-independent level. For a UAV use case, we define a flight controls layout comprising the number of control surfaces for primary flight control. Applying safety requirements to flight control levels requires a specific flight controls layout. For example, a total loss of the pitch control has a catastrophic effect on UAV level. The accepted probability of failure may increase if the flight controls layout for the pitch function consists of duplex actuation channels. The outcomes of FHA involve qualitative and quantitative conditions for a candidate fault tolerant control (FTC) architecture. The qualitative conditions ensure that no single point of failure can conflict with safety-critical

Fig. 1 A scheme for evaluating flight control actuation architectures


actuation functions. The quantitative conditions define the required failure rates for a candidate FTC architecture to be consistent with certification regulations. For UAV design, we discuss a candidate FTC architecture fulfilling the necessary functional and operational requirements, such as flight control computers, actuators, and fault-tolerant features. In order to point out potential failure modes for fault detection and reconfiguration methods of a candidate FTC architecture, the qualitative conditions from the FHA are evaluated by a failure modes, effects and criticality analysis (FMECA). Failures can be mitigated by FTC features (e.g. redundancy) or by maintenance inspections. The quantitative conditions from the FHA are evaluated by fault tree analysis (FTA) to ensure that all safety-critical failure modes have a probability of occurrence lower than the threshold determined by the FHA. To be considered certifiable, it could be necessary to update candidate FTC architectures by adding or editing FTC features.

4 Use Case Analysis Di Rito, Galatolo and Schettini performed an aircraft FHA for a fixed wing configuration [12]. By extending their methodology for different aircraft configurations, we can analyze design-specific actuation system criticalities. For that purpose, we regard several UAV use cases to derive respective safety requirements for the system level. Figure 2 illustrates the integration of different use cases in the ARP development processes.

Fig. 2 Use case evaluation in accordance with ARP development and safety processes [8]


Table 3 Example of a simplified fixed-wing FHA (RMT: Remote)
FHA function | Failure cond. | Possible failure effect | Class./Pr.
Control roll | Loss of roll | May result in uncontrollable flight state | CAT/10^-6 h^-1
Control roll | Erroneous roll | Erratic roll inputs result in uncontrollable flight state | CAT/10^-6 h^-1
Control roll | Partial err. roll | Limited control might help fly to a safe crash site | HAZ/10^-5 h^-1
Control pitch | Loss of pitch | May result in uncontrollable flight state | CAT/10^-6 h^-1
Control pitch | Erroneous pitch | Erratic pitch inputs result in uncontrollable flight state | CAT/10^-6 h^-1
Control pitch | Partial err. pitch | Limited control might help fly to a safe crash site | HAZ/10^-5 h^-1
Control yaw | Loss of yaw | In crosswinds, the workload on the RMT crew increases | MAJ/10^-4 h^-1
Control yaw | Erroneous yaw | Erratic yaw inputs result in uncontrollable flight state | CAT/10^-6 h^-1
Control yaw | Partial err. yaw | Limited yaw control may help fly to a safe crash site | HAZ/10^-5 h^-1

4.1 Aircraft FHA Top-level failure conditions are dealt with in an aircraft FHA [9], in which we consider different aircraft configurations while neglecting the specific flight control layout. We regard three separate FHAs (fixed-wing, rotorcraft and gyrocopter). For every function, respective failure conditions are assessed for their criticality. Table 3 depicts a simplified example for a fixed-wing UAV.

4.2 Use Case Definition According to the Unmanned Vehicles Handbook, fixed-wing UAVs had the biggest share (≈65–70%) of production and development UAVs in the year 2008, followed by rotorcraft configurations with a share of ≈20% [13]. To represent this, we examine three fixed-wing flight control architectures and one rotorcraft application. Following previous work by Bierig et al., a generic gyrocopter architecture is also included in this analysis [14]. Figure 3 depicts Use Cases 1–5. The fixed wing Use Cases (1–3) differ in the number of respective control surfaces for roll, pitch and yaw and were derived from existing UAV examples. Use Case 4 features a classic main/tail rotor configuration, while Use Case 5 describes a gyrocopter flight control architecture.


Fig. 3 Use Cases 1–5 (control surfaces and actuation legs are marked as dotted rectangles)

4.3 Derivation of Safety Requirements/Preliminary Aircraft Safety Assessment (PASA) Following the use case definition, we can derive quantitative and qualitative requirements. For this analysis, several assumptions are required. For Use Cases 1–3 we propose:
– One actuation leg per control surface.
– For pitch and roll: the FHA failure condition erroneous is applicable if at least 50% of the control surfaces of the respective axis have failed. If less than 50% of the surfaces have failed, we assume a partial erroneous state. The failure condition loss is assessed as catastrophic on aircraft level, but it does not need to be discussed here because the surfaces would need to fail in a specific way (free floating), which induces less strict reliability requirements on the system level.
– For yaw: the FHA failure condition erroneous is applicable if more than 50% of the control surfaces have failed. If ≤50% of the surfaces have failed, we assume a partial erroneous state.
In the following analysis, we use Boolean logic to derive the required actuation-level probabilities (λ_ACT) necessary to meet the top-level target. Table 4 shows an example for Use Case 2. For reasons of clarity, only the most critical failures are shown. For Use Case 4 we assume:
– Any failure of the tail rotor actuation is considered erroneous and loss on aircraft level.


Table 4 Preliminary aircraft safety assessment/actuation requirements derived from aircraft FHA (Use Case 2)
FHA function | FHA failure condition | Failed surfaces | Actuation requirement equation
Control roll | Erroneous roll | 2 of 4 ailerons | 6 · λ²_ACT ≤ 10^-6 h^-1
Control roll | Partial erroneous roll | 1 of 4 ailerons | 4 · λ_ACT ≤ 10^-5 h^-1
Control pitch | Erroneous pitch | 1 of 2 elevators | 2 · λ_ACT ≤ 10^-6 h^-1
Control pitch | Partial erroneous pitch | ————— | —————
Control yaw | Erroneous yaw | 2 of 2 rudders | λ²_ACT ≤ 10^-6 h^-1
Control yaw | Partial erroneous yaw | 1 of 2 rudders | 2 · λ_ACT ≤ 10^-5 h^-1

– If at least one of the swashplate actuation legs fails, both top-level failure conditions erroneous and loss are triggered.
For Use Case 5:
– If either of the two actuation legs fails, both top-level failure conditions erroneous and loss are triggered.
– Any failure of the rudder actuation leg is considered erroneous and loss on aircraft level.
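A minimal sketch of this Boolean-logic budgeting for Use Case 2 is given below; the tuples simply restate the requirement equations of Table 4, and each budget is obtained by solving the inequality for λ_ACT.

```python
# Requirement equations of Table 4 (Use Case 2): each entry is
# (number of contributing leg combinations k, power of lambda, top-level target per flight hour).
requirements = {
    "erroneous roll (2 of 4 ailerons)":   (6, 2, 1e-6),
    "partial err. roll (1 of 4)":         (4, 1, 1e-5),
    "erroneous pitch (1 of 2 elevators)": (2, 1, 1e-6),
    "erroneous yaw (2 of 2 rudders)":     (1, 2, 1e-6),
    "partial err. yaw (1 of 2)":          (2, 1, 1e-5),
}

def lambda_budget(k, power, target):
    """Solve k * lambda_ACT**power <= target for the allowed per-leg failure rate."""
    return (target / k) ** (1.0 / power)

budgets = {name: lambda_budget(*req) for name, req in requirements.items()}
for name, lam in budgets.items():
    print(f"{name}: lambda_ACT <= {lam:.2e} 1/h")
print("binding condition:", min(budgets, key=budgets.get))
```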

4.4 PASA Results Figure 4 illustrates specific actuation requirements for previously defined use cases for critical failure conditions. The strictest requirements can be found in the swashplate actuation of Use Case 4 and the rotor disk actuation of the gyrocopter application, while average budgeted failure rates are slightly higher in fixed-wing applications. For Use Cases 1–3, a reduced number of control surfaces increases the failure rate budget per surface if the same respective aircraft FHA failure condition is triggered.

4.5 Consequences for TEMA-UAV For the development of a prototype actuator, which is part of TEMA-UAV, we selected the rotor disk actuation of Use Case 5 for the following reasons. Firstly, there is a relatively low failure budget for the rotor disk actuation. Secondly, know-how from other DLR projects and the possibility of prototype testing at DLR are crucial [14]. A design-specific system FHA for this use case is depicted in Table 5.


Fig. 4 Failure rate requirement for one actuation leg λACT for Use Cases 1–5 (number of legs in brackets)

Table 5 System FHA excerpt for one actuation leg of the rotor disk (Use Case 5)
Function | Failure condition | Failure effect | Class./λ_ACT,req
Command actuation leg | Erroneous actuation | Erratic roll/pitch inputs cannot be compensated | CAT/5 · 10^-7 h^-1
Command actuation leg | Loss of actuation | Lost control of flight attitude | CAT/5 · 10^-7 h^-1

As part of the actuation requirements specification, the system FHA creates the basis for the subsequent preliminary system safety assessment (PSSA) to investigate different actuator architectures.

5 Conclusion Although certification specifications have not been published yet, the probabilities and DALs discussed in Sect. 2 represent the most important safety requirements for aircraft level. For UAVs exceeding a size of 3 m, we recommend concentrating on a full type certification. The use case analyses illustrated the strong dependency between aircraft failure conditions and qualitative and quantitative safety requirements on actuation level, which depend on the system design. The analyses also emphasized that the actuation legs of fixed-wing configurations might have a less strict probability requirement than those of rotorcraft configurations. In a rotorcraft or gyrocopter UAV, every described actuation leg is needed for safe continuation of flight. In this work, actuation requirements are insensitive to specific reliability data.


However, at a later stage, knowledge about component failures will be required to assess specific architectures, which can fulfill the budget determined in this paper. Acknowledgements This work is supported by the TEMA-UAV project through the German

National Aerospace Research Program (Lufo V-3).

References

1. Ismail MAA, Bosch C, Wiedemann S, Windelberg J (2019, under publication) Fault tolerant actuation architectures for UAVs. In: WCCM 2019, Singapore, 2–5 December 2019
2. European Drones Outlook Study—Unlocking the value for Europe (2016) SESAR, Brussels, November
3. Commission Implementation Regulation (EU) 2019/947, EASA, Brussels, 24 May 2019
4. Commission Delegated Regulation (EU) 2019/945, EASA, Brussels, 12 March 2019
5. Certification Specification for Light Unmanned Rotorcraft Systems (CS-LURS), JARUS, 30 October 2013
6. Recommendations for Certification Specification for Light Unmanned Aeroplane Systems (CS-LUAS), JARUS, November 2016
7. Special Condition for EASA Type Certification Base – Camcopter S-100c RPAS, EASA, 09 July 2016
8. Guidelines for Development of Civil Aircraft and Systems, SAE ARP4754, November 1996
9. Guidelines and Methods for Conducting the Safety Assessment Process on Civil Airborne Systems and Equipment, SAE ARP4761, December 1996
10. Special Condition—Equipment, Systems, and Installation, SC-RPAS.1309-01, 24 July 2015
11. Unmanned Aerial Vehicles Systems Airworthiness Requirements (USAR), STANAG 4671, September 2009
12. Di Rito G, Galatolo R, Schettini F (2016) Self-monitoring electro-mechanical actuator for medium altitude long endurance unmanned aerial vehicle flight controls. Adv Mech Eng 8(5):1–11
13. Donaldson P, Lake D (2008) Unmanned aircraft – in production/unmanned aircraft – in development. In: Unmanned vehicles handbook, United Kingdom, Shephard, pp 13–84
14. Bierig A et al (2018) Design considerations and test of the flight control actuators for a demonstrator for an unmanned freight transportation aircraft. In: R3ASC 2018, Toulouse

Development of a Patrol System Using Augmented Reality and 360-Degree Camera Kenji Osaki, Tomomi Oshima, Naoya Sakamoto, Tetsuro Aikawa, Yuya Nishi, Tomokazu Kaneko, Makoto Hatakeyama, and Mitsuaki Yoshida

Abstract From the viewpoints of safe plant operation and improvement of the utilization rate of equipment, reduction of human error in inspection work and efficient maintenance engineering and inspection work are needed. Therefore, we developed a maintenance support tool using augmented reality (AR) technology and a 360-degree camera. Furthermore, self-localization technology using Visual Simultaneous Localization and Mapping (Visual SLAM) is adopted. Inspection work and data management linked with the localization data from Visual SLAM are the key features of this system. These technologies help users identify equipment, assist users' recognition and judgment of the object conditions, and enable effective inspection data management. Keywords Patrol system · Augmented reality · Visual SLAM · 360-degree camera

1 Introduction From the viewpoints of safe plant operation and improvement of the utilization rate of equipment, reduction of human error in inspection work and efficient maintenance engineering and inspection work are needed. Therefore, we developed a patrol system using augmented reality (AR) technology, a 360-degree camera, and self-localization technology. AR is the technology to merge a live view of the real world with computer-generated images. AR makes it possible to display information required for inspection on the screen of a portable device such as a smart phone or a tablet PC. A 360-degree


camera can capture images in all directions. The 360-degree camera can record the spherical image around an inspection worker and provides an inspection image in whatever direction is needed. Early detection of changes in the onsite situation is made possible by differential processing against past spherical images. Furthermore, self-localization technology using Visual Simultaneous Localization and Mapping (Visual SLAM) is adopted for navigation to the inspection point. These technologies help users identify equipment, assist users' recognition and judgment of the object conditions, and enable effective inspection data management. In this paper, the prototype under development is reported.

2 Self-localization Technology We adopted self-localization technology using Visual Simultaneous Localization and Mapping [1]. This technology simultaneously estimates the self-position and creates a surrounding map using the information obtained by the camera and the depth sensor of the portable device. The concept of this technology is shown in Fig. 1. The position of the portable device in three-dimensional coordinates is obtained by computing the distances to characteristic points on the surrounding objects (Object 1 to Object 6) recognized through the pictures (Picture 1 to Picture 3) taken at each place. As a result, prediction of the position of the portable device and recognition of the objects to be inspected can be achieved indoors, where the global positioning system (GPS) is unavailable. This technology helps the inspection worker navigate to the inspection point and record inspection data combined with position information.
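Visual SLAM estimates the device pose and the surrounding map jointly; the sketch below only illustrates the position-from-distances idea of Fig. 1 with a simple least-squares fit to already-mapped feature points, not the full SLAM pipeline, and all coordinates are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_device(feature_points, distances, x0=np.zeros(3)):
    """Estimate the device position from distances to known 3-D characteristic points.

    feature_points : (N, 3) array of mapped characteristic-point coordinates
    distances      : (N,) array of measured distances to those points
    """
    def residuals(x):
        return np.linalg.norm(feature_points - x, axis=1) - distances
    return least_squares(residuals, x0).x

# Synthetic check: four feature points and a device located at (1.0, 2.0, 0.5).
pts = np.array([[0, 0, 0], [4, 0, 0], [0, 4, 0], [0, 0, 3]], dtype=float)
true_pos = np.array([1.0, 2.0, 0.5])
d = np.linalg.norm(pts - true_pos, axis=1)
print(locate_device(pts, d))
```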

Fig. 1 Self-localization method using Visual Simultaneous Localization and Mapping


Fig. 2 Typical AR content displayed on the smart phone

3 Support Function Using AR Technology [2, 3] 3.1 AR Technology AR is the technology to merge a live view of the real world with computer-generated images. Virtual content is superimposed on the real image. Typical AR content displayed on a smart phone is shown in Fig. 2. AR makes it possible to display information required for inspection on the application screen of a portable device such as a smart phone or a tablet personal computer (PC). Furthermore, self-localization technology using Visual SLAM is adopted for navigation to the inspection point.

3.2 Examples of AR Application The support items, functions, and applied technologies are selected based on information from sample cases, such as inspection errors and operation errors in maintenance tasks, as shown in Fig. 3. They can support various inspection tasks.

3.2.1 Support of Reading Instruments

In inspection tasks, reading and recording of numerical values of analog instruments at field sites are carried out. An application screen of this function is shown in Fig. 4(1). In this sample, a photograph of the analog meters obtained beforehand is registered as a photograph marker. Each meter is automatically recognized by


Fig. 3 Function and applied technology of the maintenance support tool

Fig. 4 Screen images of the AR application: (1) support of reading instruments, (2) support of sensor attachment, (3) management of information using virtual tags

using the photograph marker, and frames and numbers are appended by AR to the four meters. From these frames and numbers, workers can recognize that the tool has identified the meters to be inspected. The tool automatically reads the values of the analog meters from the positions of their needles. The value for each meter is displayed on the checklist and is automatically saved once finally checked by the worker. This function helps improve the efficiency of the inspection tasks and reduces human errors as well.
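The recognition pipeline itself is not detailed in the chapter; the sketch below only illustrates the final step of converting a detected needle angle into a reading, with hypothetical gauge limits.

```python
def needle_value(angle_deg, angle_min_deg, angle_max_deg, value_min, value_max):
    """Map a detected needle angle to a gauge reading by linear interpolation.

    angle_min_deg/angle_max_deg : needle angles at the minimum and maximum of the scale
    value_min/value_max         : corresponding scale values
    """
    fraction = (angle_deg - angle_min_deg) / (angle_max_deg - angle_min_deg)
    fraction = min(max(fraction, 0.0), 1.0)   # clamp to the gauge range
    return value_min + fraction * (value_max - value_min)

# Hypothetical example: a 0-10 bar gauge whose needle sweeps from -45 deg to 225 deg.
print(needle_value(90.0, -45.0, 225.0, 0.0, 10.0))
```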


3.2.2 Support of Sensor Attachment

Vibration measurements of pumps are conducted during periodical inspection. The sensors for these measurements usually have to be attached at positions specified in the manuals. An application screen of this function is shown in Fig. 4(2). The tool recognizes the pump to be inspected by reading the AR marker. The positions where sensors should be attached are displayed as virtual objects, based on their positional relationship to the AR marker.

3.2.3 Management of Information Using Virtual Tags

Workers occasionally have to record cautions and notices during field tasks such as inspection and maintenance. While these records have usually been managed with physical tags at the site, this tool manages them as electronic information. As shown in Fig. 4(3), virtual tags containing cautions and notices are attached to the object of interest and displayed on the screen. Since they are recorded together with position information, workers can easily locate them on a map at the next inspection, which improves the efficiency of confirmation tasks as well as the management of the status of corrective actions.
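The paper does not describe the data model of the virtual tags; the minimal sketch below only illustrates the idea of a tag record that couples the caution text with a position in the SLAM map frame, so it can be shown on the map at the next inspection. All field names are assumptions for illustration.

```python
# Minimal sketch of a virtual tag record with position information
# (field names are illustrative assumptions, not the tool's actual schema).
from dataclasses import dataclass, asdict, field
from datetime import datetime
import json

@dataclass
class VirtualTag:
    tag_id: str
    text: str                 # caution / notice written by the worker
    map_position: tuple       # (x, y, z) in the Visual SLAM map frame, metres
    target_equipment: str
    status: str = "open"      # e.g. changed to "corrected" once the action is done
    created_at: str = field(
        default_factory=lambda: datetime.now().isoformat(timespec="seconds"))

tag = VirtualTag("TAG-0001", "Oil seepage at flange, re-check next patrol",
                 (12.4, 3.1, 0.9), "Pump P-101")
print(json.dumps(asdict(tag), indent=2))  # stored/uploaded together with the map
```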

4 Patrol System Using 360-Degree Camera

4.1 360-Degree Video Browsing Function

The configuration of a patrol system using a 360-degree camera is shown in Fig. 5. The 360-degree video and the movement trajectory measured by Visual SLAM technology are recorded. The time-synchronized 360-degree video and the movement trajectory are displayed on a map such as a layout diagram. The 360-degree video at the position selected on the trajectory can be viewed.

Fig. 5 Configuration of the patrol system using a 360-degree camera


Fig. 6 Screen image of the browsing function combined with 3D data

An example of the 360-degree image browsing system is shown in Fig. 6. The position where each image was registered is indicated by a round mark on the layout diagram of the equipment. Here, only the 360-degree images at the points to be compared are extracted from the movement trajectory. The 360-degree image, laser scan data, and 3D-CAD data at the same position can be compared and viewed. This browsing function can be used to grasp changes in the site situation during design, construction, and remodeling work.

4.2 Differential Processing Function Between Before and After Images

We have developed a technology that automatically detects changes between two 360-degree images in order to quickly grasp changes in the field situation, such as changes in equipment layout and discoloration due to leakage. Here, the result of evaluating the applicability of differential processing to changes in equipment layout is presented. The differential processing algorithm is shown in Fig. 7. Feature points in the 360-degree video are extracted and converted into 3D point cloud data. Next, the position of the equipment is calculated from the camera position and orientation measured by Visual SLAM technology and from the 3D point cloud data. The position of the equipment at another time in the same place is


Fig. 7 Algorithm for differential processing

calculated from the video taken at that other time by the same method. A change is detected by differentially processing the equipment positions obtained from the two videos. The result of the difference processing (point cloud and voxel) is shown in Fig. 8. The viewpoint in the figure is the position of the camera, and the parts where the layout of the equipment has changed are displayed as voxels A and B. Voxels A and B show the moved equipment and the newly appeared equipment, respectively. The result of superimposing the difference processing result on the 360-degree video is shown in Fig. 9. It was confirmed that changes of about 30 cm in the equipment layout can be detected within 3 m of the camera.
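As an illustration of the voxel-differencing idea only (not the authors' algorithm), the numpy sketch below occupies voxels from each of two point clouds and flags voxels occupied at one epoch but not the other, roughly corresponding to equipment that disappeared or newly appeared. The voxel size and the random clouds are placeholder values; they simply echo the roughly 30 cm detection granularity reported above.

```python
# Hypothetical numpy sketch of voxel-based change detection between two point clouds
# reconstructed from 360-degree videos at different times (voxel size is an example value).
import numpy as np

def occupied_voxels(points, voxel=0.3, origin=np.zeros(3)):
    """Return the set of voxel indices occupied by a point cloud (N x 3, metres)."""
    idx = np.floor((points - origin) / voxel).astype(int)
    return set(map(tuple, idx))

def detect_changes(cloud_before, cloud_after, voxel=0.3):
    before = occupied_voxels(cloud_before, voxel)
    after = occupied_voxels(cloud_after, voxel)
    removed = before - after     # roughly: equipment that moved away / disappeared
    appeared = after - before    # roughly: newly appeared equipment
    return removed, appeared

# Example with random clouds standing in for the SLAM reconstructions
rng = np.random.default_rng(0)
before = rng.uniform(0, 3, size=(5000, 3))
after = np.vstack([before, rng.uniform(0, 3, size=(200, 3)) + [3.0, 0.0, 0.0]])
removed, appeared = detect_changes(before, after, voxel=0.3)
print(len(removed), "voxels removed,", len(appeared), "voxels appeared")
```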

5 Conclusions

• In order to improve the safety and efficiency of maintenance activities and reduce human error, we developed a patrol system using augmented reality (AR) technology, a 360-degree camera, and self-localization technology.
• A prototype system using AR technology was developed based on assumed use cases at field sites. The effects of the tool are being verified in field tests, functions are being improved and added, and onsite application is planned.
• A system that can view omnidirectional images of any location on an indoor map has been developed using a 360-degree camera and a self-localization function based on Visual SLAM technology. In addition, it was confirmed that changes in the layout of equipment can be detected from images taken at different times by differential


Fig. 8 Result of differential processing (point cloud and voxel)

Fig. 9 Result of differential processing (360-degree image)


processing. In the future, we will improve the accuracy of functional verification and differential processing.

References

1. Fuentes-Pacheco J, Ruiz-Ascencio J, Rendón-Mancha JM (2012) Visual simultaneous localization and mapping: a survey. Artif Intell Rev 43:55–81
2. Oshima T, Osaki K, Nishi Y (2019) Development of a maintenance support tool using augmented reality technology. In: Proceedings of 27th international conference on nuclear engineering, ICONE27-2029
3. EJAM Development of a maintenance support tool using augmented reality technology. http://www.jsm.or.jp/ejam/Vol.11No.2/NT/NT94/94.html

Research on Evaluation Method of Safety Integrity for Safety-Related Electric Control System of Amusement Ride

Weike Song, Peng Cheng, and Ran Liu

Abstract The safety-related electric control system (SRECS) of an amusement ride is an important system that ensures the safe operation of the device. Its safety integrity determines whether the SRECS can perform safety functions reliably. At present, there is no established professional safety integrity evaluation of the SRECS for amusement rides anywhere in the world. Based on an analysis of the structural types and movement characteristics of amusement rides, this paper develops a safety integrity evaluation method suitable for the SRECS of amusement rides, with reference to the international functional safety standards for safety-related systems. The first step is to identify the safety-related control functions (SRCF) of the SRECS with the help of a failure case database, and then to carry out a risk assessment of each SRCF considering the severity and probability of failure consequences such as death, injury, and high-altitude detention. The required safety integrity level of the ride (RSILr) is obtained from a risk matrix. The four reliability indicators of the designed control loop are analyzed, and the safety integrity level (RSIL) of the control loop is obtained using the safety integrity histogram. Finally, the effectiveness of the safety integrity of the control system is confirmed by tests.

Keywords Amusement ride · Safety-related electric control system (SRECS) · Safety-related control function (SRCF) · Safety integrity level of ride (RSIL) · Risk assessment

1 Background

With the further development of amusement rides towards more diverse experiences, their complexity is becoming higher and higher, which requires more accurate control systems. By adopting advanced safety PLC control technology, using a variety of sensors for data


conversion and transmission, and employing advanced AC/DC drive technology for power transmission, precise control of amusement rides is achieved. At the same time, the increasing complexity of the system raises the reliability requirements of the rides. In particular, for the important safety-related systems that involve personal safety, reliability directly determines the operational safety of amusement rides [1]. A safety-related system is a system that ensures the safe operation of amusement rides, and includes safety-related mechanical systems and the safety-related electric control system (SRECS). Safety-related systems are essential to ensure the safe operation of amusement rides. Common safety-related systems of amusement rides include the safety interlocking system, emergency stop system, safety deceleration braking system, anti-collision protection system, anti-roll-back protection system, anti-overspeed protection system, and so on. At present, the design of amusement rides focuses more on safety-related mechanical systems and pays less attention to safety-related control systems, and safety integrity evaluation of safety functions is not carried out. The resulting accidents of amusement rides are not unusual. Based on the structural and movement characteristics of amusement rides, this paper develops a safety integrity evaluation method suitable for the SRECS of amusement rides, with reference to the international functional safety standards for safety-related systems. The first step is to identify the safety-related control functions (SRCF) of the SRECS with the help of a failure case database, and then to carry out a risk assessment of each SRCF considering the severity and probability of failure consequences such as death, injury, and high-altitude detention. The required safety integrity level of the ride (RSILr) is obtained from a risk matrix. The four reliability indicators of the designed control loop are analyzed, and the safety integrity level (RSIL) of the control loop is obtained using the safety integrity histogram. Finally, the effectiveness of the safety integrity of the control system is confirmed by tests, which should be combined with the actual operating conditions of the amusement ride.

2 Process of Safety Integrity Evaluation

Safety integrity evaluation of amusement rides includes identification of the SRECS, risk assessment of the SRCF, safety circuit design, RSIL calculation, safety integrity evaluation, optimal design of the safety circuit, and experimental confirmation, as shown in Fig. 1. The identification of the SRECS and the risk assessment of the SRCF are the prerequisites for RSIL allocation. RSIL calculation and evaluation of the safety circuit are parts of RSIL verification. Therefore, RSIL allocation and RSIL verification are the main body of the safety integrity evaluation of amusement rides.


Fig. 1 Process of safety integrity evaluation (flowchart: identification of SRECS and risk assessment of SRCF (Se, CI = Fr + Pr + Av) for RSIL allocation; safety circuit design (Cat., MTTFd, DCavg, CCF), RSIL calculation and safety integrity evaluation for RSIL verification; if the design requirements are met, test confirmation, otherwise return to safety circuit design)

3 RSIL Allocation

3.1 Identification of SRECS

The safety integrity evaluation of the SRECS of an amusement ride should first determine the evaluation object. The SRECS that really affects the safety of the ride often accounts for only a small part of the whole control system. Designers need to accurately extract the SRECS from the complex control system according to the structural and movement characteristics of the ride, design experience, and the failure database [2, 3]. Due to the diversity and novelty of amusement rides, the SRCFs implemented by the SRECS are also diverse, which makes identification difficult. Therefore, the identification of the SRECS depends largely on the designer's understanding of the safety performance of the ride.


According to the current regulations and standards and the classification and grading requirements for amusement rides, combined with actual work experience, typical SRECS and SRCF of common amusement rides are identified as shown in Table 1. This table can be used as a reference when identifying the SRECS and SRCF of actual rides, but it must be extended or reduced according to the specific features of the equipment.

Table 1 Typical SRECS and SRCF of common amusement rides (each SRECS is listed with its SRCF)

No. 1, ride type: rotation ride, typical ride: large pendulum
• Closing and locking detection system of safety bar: closing, locking, and ride startup interlock function
• Lifting platform closing detection system: lift platform drop in place and ride startup interlock function
• Emergency stop system: emergency stop function

No. 2, ride type: roller coaster, typical ride: launch coaster
• Closing and locking detection system of safety bar: closing, locking, and ride startup interlock function
• Station safety door closing detection system: station safety door closing and ride startup interlock function
• Emergency stop system: emergency stop function
• Vehicle position detection system: vehicle position and start interlock, anti-collision function; vehicle position and brake interlock, anti-collision function

No. 3, ride type: lifting tower, typical ride: fly tower
• Station lifting friction wheel position detection system: station friction wheel drop in place and ride startup interlock function
• Closing and locking detection system of safety bar: closing, locking, and ride startup interlock function
• Emergency stop system: emergency stop function
• Steel wire rope broken detection system: steel wire rope broken and ride stop interlock function
• Gondola position detection system: prevent gondola bottoming and topping function


3.2 Risk Assessment of SRCF

The purpose of the risk assessment of an SRCF is to obtain the safety integrity level requirement (RSILr) of that function. An improper risk assessment method will lead to a safety integrity level that is too high or too low: too high an RSILr causes waste, while too low an RSILr does not meet the safety requirements and leads to unacceptable risks. RSIL is a discrete level (one of three possible levels) that specifies the safety integrity requirements assigned to the SRCF of the SRECS; level 3 is the highest and level 1 is the lowest. Because the SRECS of amusement rides differs from the traditional safety-instrumented process control system, this paper uses a risk matrix method based on the amusement ride failure database to evaluate and allocate RSILr. The risk parameters are derived from the following elements:

Se: severity of the harm that may result from the danger
Fr: frequency and duration of exposure
Pr: probability of the hazardous event occurring
Av: possibility of avoiding or limiting the injury.

For the parameter Se of an amusement ride, personnel death, serious injury, minor injury, personnel detention, economic losses, social influence, etc., should be considered. The three parameters related to the probability of injury (Fr, Pr, and Av) should be evaluated independently of each other, and every parameter should consider the worst case. The classifications of Se, Fr, Pr, and Av are shown in Tables 2, 3, 4, and 5, considering the failure database.

Table 2 Classification of Se
Se = 4: Death of the person / unrecoverable and serious injury / high-altitude detention time greater than 1 h / direct economic loss more than 10,000 RMB
Se = 3: No death / unrecoverable and serious injury, but with the possibility of continuing to do the same work after rehabilitation / high-altitude detention time greater than 1 h / direct economic loss more than 10,000 RMB
Se = 2: No death / minor injury / high-altitude detention time less than 1 h / direct economic loss less than 10,000 RMB
Se = 1: No death / scared / high-altitude detention time less than 1 h / direct economic loss less than 10,000 RMB

Table 3 Classification of Fr (duration of exposure > 10 min)
Fr = 5: frequency of exposure ≤ 1 day
Fr = 4: > 1 day to ≤ 1 week
Fr = 3: > 1 week to ≤ 1 month
Fr = 2: > 1 month to ≤ 1 year
Fr = 1: > 1 year


Table 4 Classification of Pr
Pr = 5: Extremely high; similar accidents have occurred
Pr = 4: High; similar failures have occurred and occur frequently in similar devices
Pr = 3: Medium; similar failures have occurred and occur occasionally in similar devices
Pr = 2: Low; occurrence is possible, but no such case has occurred in similar devices
Pr = 1: Very low; occurrence is so unlikely that it can be ignored

The technical parameters above are selected from the performance behavior of safety functions in specific amusement rides, referring to the knowledge database. In this method, RSIL is assigned by referring to the RSIL allocation matrix shown in Table 6, where the rows represent severity (Se) and the columns represent the class index (CI) obtained by adding Fr, Pr, and Av. The row–column intersection is the RSILr that should be assigned to the SRCF.

Table 5 Classification of Av
Av = 5: Almost none; danger occurs quickly / passengers cannot be evacuated from danger / the surrounding environment is bad / danger is not easily identified
Av = 4: Small; danger occurs quickly / passengers cannot be evacuated from danger / good surroundings / danger is not easy to identify
Av = 3: Medium; danger occurs quickly / passengers cannot be evacuated from danger / good surroundings / danger is easily identified
Av = 2: Large; danger occurs slowly (or delayed) / passengers cannot be evacuated from danger / good surroundings / danger is easily identified
Av = 1: Very large; danger occurs slowly (or delayed) / passengers can evacuate from danger / good surroundings / danger is easily identified

Table 6 RSIL allocation matrix
Se    CI 3–4    CI 5–7    CI 8–10    CI 11–13    CI 14–15
4     RSIL2     RSIL2     RSIL2      RSIL3       RSIL3
3     /         /         RSIL1      RSIL2       RSIL3
2     /         /         /          RSIL1       RSIL2
1     /         /         /          /           RSIL1
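To make the allocation step concrete, the short Python sketch below encodes the Table 6 lookup: CI is the sum of Fr, Pr, and Av, and the row for the severity Se gives RSILr (None where the table shows "/"). The function name and the example parameter values are illustrative only.

```python
# Illustrative sketch of the RSIL allocation of Table 6: CI = Fr + Pr + Av,
# the row for Se gives the required level RSILr (None where the table shows "/").
def allocate_rsilr(se, fr, pr, av):
    ci = fr + pr + av                        # class index, 3..15
    bands = [(3, 4), (5, 7), (8, 10), (11, 13), (14, 15)]
    matrix = {                               # Se -> RSILr per CI band
        4: [2, 2, 2, 3, 3],
        3: [None, None, 1, 2, 3],
        2: [None, None, None, 1, 2],
        1: [None, None, None, None, 1],
    }
    for band, rsil in zip(bands, matrix[se]):
        if band[0] <= ci <= band[1]:
            return rsil
    raise ValueError("CI out of range")

# Example: severe outcome (Se = 4), frequent exposure, medium probability, small avoidance
print(allocate_rsilr(se=4, fr=5, pr=3, av=4))   # CI = 12 -> RSIL 3
```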


4 SIL Verification

The safety control circuit is the carrier that implements the RSIL of the SRCF. After the safety evaluation has determined the RSILr value, the safety control circuit has been designed, and the SRCF has been extracted, the next step is to determine the actual RSIL value of the SRCF. To facilitate the actual evaluation work and further refine the RSIL of the SRCF, this paper refers to the latest requirements for evaluating the PL value of the SRP/CS in ISO 13849-1 [4]. The RSIL value of the SRCF is determined from the following reliability parameters: control category, mean time to dangerous failure, average diagnostic coverage, and common cause failure. The main methods of RSIL verification are the reliability block diagram, the fault tree, the Markov model, etc. However, regardless of the method, verification is based on evaluating the reliability of the safety circuit components. The reliability indexes of the safety circuit are the premise of RSIL verification.

4.1 Reliability Parameters

4.1.1 Cat. (Control Category)

The reliability level of the SRECS with respect to faults, and its ability to detect and warn of faults in the faulty state, are the most important factors determining the RSIL. There are five categories of safety circuits: B, 1, 2, 3, and 4. The higher the category, the higher the reliability. The control category is the basic characteristic of a safety circuit.

4.1.2 MTTFd (Mean Time to Dangerous Failure)

Since the overall reliability of the system is determined by the combination of the reliabilities of its different elements, the average failure rate is also one of the main factors that determine the reliability of the system. Generally speaking, MTTFd is the inverse of the failure rate, MTTFd = 1/λ, expressed in years. This value can apply to a single component or describe a whole system; for a system, the mean time to dangerous failure can be calculated by determining the MTTFd of each part. When the risk caused by the number of mechanical operations is considered, the B10d value (the mean number of cycles until 10% of the components fail dangerously), used in formula (1), should also be taken into account. In this case, the MTTFd value can be obtained from the following formulas by setting the annual average number of operations nop (formula (2)), where formula (3) applies to a series circuit and formula (4) to a parallel circuit [5].


$$\mathrm{MTTF_d} = \frac{B_{10d}}{0.1 \times n_{op}} \qquad (1)$$

$$n_{op} = \frac{d_{op} \times h_{op} \times 3600}{t_{cycle}} \qquad (2)$$

$$\frac{1}{\mathrm{MTTF_d}} = \sum_{i=1}^{n} \frac{1}{\mathrm{MTTF_{d,i}}} \qquad (3)$$

$$\mathrm{MTTF_d} = \frac{2}{3}\left[\mathrm{MTTF_{d1}} + \mathrm{MTTF_{d2}} - \frac{1}{\frac{1}{\mathrm{MTTF_{d1}}} + \frac{1}{\mathrm{MTTF_{d2}}}}\right] \qquad (4)$$

Due to the strong seasonality and regularity of the operation of amusement rides, reasonable parameters can be set in the initial design phase according to the operation conditions and predicted usage of this type of equipment in the past. The MTTFd values of safety PLC, safety relay, and so on are generally given by the manufacturer after their verification, so the designer could choose the certified safety devices preferentially when designing the circuit.
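A hedged numeric sketch of formulas (1) to (4) is given below: MTTFd from B10d and the annual number of operations, plus the series and two-channel combinations. The operating parameters (days per year, hours per day, cycle time, B10d, sensor MTTFd) are invented example values and do not come from the paper.

```python
# Hypothetical example applying formulas (1)-(4); all parameter values are invented.
def n_op(d_op, h_op, t_cycle):
    """Annual number of operations from operating days/year, hours/day and cycle time in s (formula 2)."""
    return d_op * h_op * 3600.0 / t_cycle

def mttfd_from_b10d(b10d, n_op_per_year):
    """MTTFd in years from the B10d cycle count (formula 1)."""
    return b10d / (0.1 * n_op_per_year)

def mttfd_series(*mttfds):
    """Series combination: 1/MTTFd = sum of 1/MTTFd_i (formula 3)."""
    return 1.0 / sum(1.0 / m for m in mttfds)

def mttfd_parallel(m1, m2):
    """Two-channel symmetrisation (formula 4)."""
    return (2.0 / 3.0) * (m1 + m2 - 1.0 / (1.0 / m1 + 1.0 / m2))

nop = n_op(d_op=200, h_op=8, t_cycle=120)        # e.g. 200 days/yr, 8 h/day, 2-min cycle
m_contactor = mttfd_from_b10d(b10d=1.3e6, n_op_per_year=nop)
m_channel = mttfd_series(m_contactor, 150.0)     # contactor in series with a 150-year sensor
print(round(nop), "operations/year;", round(m_channel, 1), "years per channel")
```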

4.1.3 DCavg (Average Diagnostic Coverage)

The average diagnostic coverage is the average self-diagnostic capability of the SRECS. Reasonable self-diagnosis and self-monitoring measures can be used as a complementary measure when a low-reliability circuit architecture is adopted. This method can effectively improve the RSIL level of the circuit, and it is also in line with the current situation of amusement ride design and manufacture in China: a higher integrity level can be achieved by selecting relatively cheap devices and adopting reasonable self-detection measures.

4.1.4 CCF (Common Cause Failure)

Common cause failure refers to failures of different hardware structures that arise from a common cause. ISO 13849-1 specifies that the CCF value is determined by a comprehensive score over eight specified factors and shall not be less than 65 points; only SRECS of category 2 or above require CCF evaluation.

4.2 RSIL Verification

RSIL verification depends on the above four reliability parameters, but the most important parameter for the overall safety performance of the system is the control category. However, higher or lower safety performance can also be achieved


Fig. 2 Determination of RSIL considering Cat., DCavg and MTTFd (columns from Cat. B with no diagnostic coverage up to Cat. 4 with high DCavg; the resulting level ranges from RSIL 1 to RSIL 3 depending on whether the MTTFd of each channel is low, medium, or high)

by adjusting the other three parameters within the same control category. Therefore, a certain flexibility can be achieved at the design stage according to different design input requirements. On the premise of ensuring safety performance, the manufacturing cost can be properly controlled. The final safety integrity level can be determined from Fig. 2.

5 Optimization and Test Confirmation

The purpose of the safety integrity evaluation of amusement rides is to assess whether the safety functions meet the actual needs. If the actual RSIL value obtained from the above evaluation does not meet the RSILr, the safety circuit or the safety components of that safety function must be re-optimized. The RSIL value of a safety function can be improved by raising the Cat. of the safety circuit or by improving the reliability indexes of the safety components [5]. The experimental confirmation of a safety function consists of analyzing and testing the SRCF of the SRECS. The validation work includes application analysis and, if necessary, tests carried out according to the validation plan. The application analysis should be carried out in parallel with the design process in order to solve relatively easy problems as soon as possible. During the tests, various input combinations are applied to the relevant safety components of the SRECS in order to confirm the safety function, and the corresponding outputs are compared with the prescribed outputs. The confirmation of the safety function should be reflected in the operation and maintenance manual of the amusement ride, and the inspection cycle for the confirmation of the safety


function should be clearly defined as well to keep the safety function continuously functional.

6 Conclusions

The safety integrity evaluation method for amusement rides presented here is applicable to SRECS directly related to personal safety. It fills a gap in the safety evaluation of amusement rides and has great practical significance. However, amusement rides differ from the traditional safety-instrumented systems of the process industry, and there are great differences in RSIL allocation and verification. At the same time, the diversity and complexity of amusement rides mean that there are many difficulties in evaluating the integrity of their safety functions, which will be a key point of future research. It mainly concerns the following aspects:

(1) RSIL allocation currently depends mainly on traditional experience and the failure case database. The RSIL of safety circuits should be allocated more rationally by developing more effective and accurate quantitative evaluation methods.
(2) For RSIL verification, most of the control components used in domestic amusement rides do not have specific reliability indexes. In this case, how to determine the reliability indexes of safety circuit components is particularly important.
(3) The RSIL of the safety circuit depends not only on the safety and reliability of the hardware circuit; there are also further influencing factors worth studying, such as software reliability and human factors.

Acknowledgements This work was supported by the national key research and development program of China (Grant No. 2016YFF0203105).

References

1. Alizadeh S, Sriramula S (2018) Impact of common cause failure on reliability performance of redundant safety related systems subject to process demand. Reliab Eng Syst Saf 172:129–150
2. Wu C, Hua J (2018) Research on methodology of functional safety research and development. J Saf Sci Technol 14(8):23–28
3. Ma B, Sun ZQ (2016) Quantitative evaluation of safety integrity level based on MBF parameter model. J East China Univ Sci Technol 42(1):97–103


4. Safety of machinery - Safety-related parts of control systems - Part 1: General principles for design, ISO 13849-1:2015
5. Safety of machinery - Functional safety of safety-related electrical, electronic and programmable electronic control systems, IEC 62061:2015

Study on Warning System of Transportable Pressure Equipment from Regulatory Perspective

Z. R. Yang, Z. Z. Chen, Z. W. Wu, and P. Yu

Abstract Accidents involving transportable pressure equipment happen occasionally, and some of them are due to weak supervision of its use and maintenance. In order to reduce this kind of accident, a warning system designed from a regulatory perspective is studied in this paper. The warning system covers several significant processes of transportable pressure equipment, including design, manufacture, use, filling, and maintenance. First, the risks were identified and analyzed for all processes. Second, the index system was established and the weight of each index was calculated using the AHP and entropy methods. Third, the warning level classification and the warning model are described. The warning system provides a useful reference for the safety management of transportable pressure equipment.

Keywords Transportable pressure equipment · Supervision · Warning system · Index system

1 Introduction

With the high-speed development of the economy, the demand for energy resources is increasing. Transportable pressure equipment (TPE) is a kind of container for transporting LNG, liquid oxygen, liquid hydrogen, liquid nitrogen, and some other liquid chemicals, so the demand for this kind of special equipment is also growing rapidly. Effective regulation and supervision of the relevant companies would benefit the safety and stable operation of both the equipment and the industry.


In China, there are several regulations and rules about the supervision of special equipment, such as the regulations on safety supervision of special equipment, rules for on-site supervision and inspection of special equipment, regulations on accident report, investigation and handling of special equipment, approval rules for special equipment inspection institutions, and so on. Even though the regulatory system for special equipment is developing rapidly, some accidents still happen because of weak supervision. According to the statistical data released by the State Administration for Market Regulation in 2018, there were 6 accidents involving gas cylinders, including three cases caused by illegal or improper operation, one case caused by an equipment defect and safety attachment failure, one case caused by illegal business, and one case caused by other reasons [1]. The accidents of TPE may sometimes cause vast economic losses and fatalities, so normalized regulation and supervision should be enforced during the operation of special equipment. Presently, research on TPE focuses on design, manufacture, non-destructive testing, and so on [2–8]. In order to safeguard the safety of TPE during use, warning systems have been studied. However, most existing warning systems did not take regulation and supervision into consideration, and their main purpose is to keep the equipment safe during transportation [9–11]. So, in this paper, a warning system from the regulatory perspective is proposed, and the risk identification, warning index system, and warning model are studied.

2 Overall Design of Warning System

This regulatory warning system covers regulation and supervision in the processes of TPE design, manufacture, use, filling, and maintenance. The schematic diagram is shown in Fig. 1. Supervision is conducted by the country, province, and city regulatory administrations to inspect the processes in the design, manufacture, using, and filling companies. Then, according to the relevant regulations and rules, the assessment of each process can be obtained. The supervisors inspect and record each

Fig. 1 The schematic diagram of warning system


index, and the data are then input into the warning system, where the assessment results are computed using the warning model. The corresponding warning level is then given according to the results. Moreover, the warning information, i.e., the warning level and the assessment results, is sent to the regulatory administration. The warning system can reflect the potential risks of the current equipment in time, which ensures that each piece of equipment runs safely and steadily. Meanwhile, all the data collected by the system for each piece of equipment, process, and company can serve as a valuable reference for evaluating the industry and make it easy to find defects. It is also helpful for determining the industry's development direction.

3 Risk Identification and Index System Establishment

3.1 Risk Identification

The regulatory risk of TPE mainly arises in the processes of design, manufacture, factory test, use, filling, and maintenance. Most of the accidents are related to weak supervision. The warning system identifies and analyzes the risks in the above processes.

3.1.1 Risk in Design and Manufacture Process

Design is the primary process for TPE, so its safety is significant. In this process, the specification, design drawings, the results of numerical model tests, and the other product references should be scrutinized to avoid design defects. In the manufacturing process, the company or factory should have the corresponding qualifications in terms of human, material, and financial resources. So, the main risks in the design and manufacture process are design defects and unqualified products.

3.1.2 Risk in Type Test Process

Type test is a compulsory process for each batch of TPE; it is used to verify whether the product is qualified in both shape and performance. Meanwhile, it is also a way to evaluate whether the design and manufacture companies are qualified in this area. The main risk in this process is a low qualification rate.

3.1.3 Risk in Using Process

There are multiple causes of hazards in the using process, but the risks are mainly caused by disobeying regulations or operating rules when using TPE and by employing


unqualified operators. Risks therefore include delayed operations in the processes of registration before or after use, routine inspection, maintenance, periodical inspection, and annual inspection. The unqualified operators employed in these processes are also a kind of risk.

3.1.4 Risk in Filling Process

Filling is one of the common processes when using TPE, and it is also an important and high-risk process because the substances in the TPE are hazardous chemicals. Therefore, specific equipment and qualified workers are required for this process. The main risks in this process are irregular substances, unqualified filling companies and operators, and delayed inspection of the filling equipment and safety attachments.

3.1.5 Risk in Periodical Inspection Process

Periodical inspection and maintenance are necessary for TPE. All TPEs should be inspected and maintained periodically and annually according to the specifications, regulations, and rules. The risks in this process are delayed inspection and maintenance.

3.2 Warning Index System Establishment

Based on the risk identification, the causes of each risk were analyzed and the warning index system was established. There are five first-class indexes and several second-class indexes in the index system. The warning system assesses the safety conditions and gives warning information using the data of each second-class index.

3.2.1 First-Class Index

A first-class index is an indicator of the safety condition of the TPE which shows whether it is safe to use the current TPE. According to the contents of the risk identification, the first-class index is divided into five indexes: design and manufacture safety, type test safety, using safety, filling safety, and periodical inspection safety (by the country, province, or city supervision administration). The warning index system consists of several indexes related to the design, manufacture, using, and filling companies, so the warning system is also a tool to evaluate the ability of every company.


3.2.2 Second-Class Index

The first-class indexes are macro indicators of the safety condition, so more specific indexes that can easily be obtained by rating or calculation are needed. In this index system, there are several second-class indexes under the corresponding first-class indexes. All the second-class indexes were chosen by professional experts in this field and are indicators closely related to the safety conditions. The index system, including first-class and second-class indexes, is shown in Table 1.

3.3 Index Weight Calculation

The index system is a complex system, and in order to obtain the synthetic score, the weight of each index is calculated. The Analytic Hierarchy Process (AHP) is a common method for calculating index weights and is usually used when there are few indexes in one system. In this study, AHP is used to calculate the weights of the first-class indexes. First, the first-class indexes are analyzed, and the hierarchy model and the judgement matrix are established. Second, a set of overall level priorities for the hierarchy is obtained. Third, the consistency of the judgement is checked. Finally, the weight of each index is obtained. As there are many second-class indexes, AHP is not applicable for them, so the entropy method is applied to calculate the weights of the second-class indexes. For this, TPEs from different design and manufacture companies and different product batches are chosen as samples, and the weight of each index is calculated following the steps of the entropy method shown in Fig. 2.
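As an illustration of the entropy weighting step (a generic sketch, not the authors' exact implementation), the numpy snippet below normalises a sample matrix of index values, computes the information entropy of each index, and derives the weights from the redundancy (1 minus entropy). The sample data are placeholders, and all indexes are assumed to be "larger is better".

```python
# Minimal sketch of the entropy weight method for second-class indexes.
# Rows are sampled TPEs, columns are index values; the data below are placeholders.
import numpy as np

def entropy_weights(X):
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    # Min-max normalisation per index (assumes "larger is better" indexes)
    span = X.max(axis=0) - X.min(axis=0) + 1e-12
    X = (X - X.min(axis=0)) / span
    P = (X + 1e-12) / (X + 1e-12).sum(axis=0)        # proportion of each sample per index
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)     # information entropy of each index
    d = 1.0 - e                                      # redundancy: more dispersion, more weight
    return d / d.sum()

samples = [[0.98, 0.90, 0.95],                       # e.g. timeliness / qualification rates
           [0.85, 0.92, 0.80],
           [0.99, 0.70, 0.90],
           [0.92, 0.88, 0.85]]
print(entropy_weights(samples).round(3))
```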

4 Study on Warning Model

4.1 Classification of Warning Levels

The warning is divided into four levels: level I, level II, level III, and level IV, which mean heavy, medium, light, and no warning, respectively. Different levels represent different safety conditions, and corresponding measures should be implemented. In the case of a level III or level IV warning, the company can use the TPEs, but it has to deal with the warning information shown in a level III warning within a given time. In the case of a level II warning, the TPEs cannot be used at present; the supervision administration and the company should investigate the potential risks and find solutions, and after the problems have been solved, the TPEs can be used again. In the case of level I, the TPEs are forbidden to be used. After investigation, if the risk is related to quality problems of the TPEs, the TPEs should be scrapped immediately; on the other hand, if the risk is related to


Table 1 Indexes of warning index system (first-class index, second-class indexes, relevant company)

Design and manufacture safety (relevant companies: design company, manufacture company)
• Design FTY
• Human resources
• Site resources
• Equipment resources
• Factory test FTY

Type test safety
• Type test FTY

Using safety (relevant company: use company)
• Timeliness rate of registration
• Timeliness rate of routine inspection
• Timeliness rate of maintenance
• Timeliness rate of periodical inspection
• Timeliness rate of annual inspection
• Timeliness rate of safety attachment inspection
• Implementation rate of safety regulations
• Qualification rate of safety manager
• Abnormal using condition rate

Filling safety (relevant company: filling company)
• Qualification rate of filling substances
• Qualification rate of filling parameters
• Human resources
• Site resources
• Equipment resources
• Timeliness rate of filling equipment inspection
• Qualification rate of filling operators
• Abnormal filling condition rate

Periodical safety (country, province, city) (relevant company: use/manufacture company)
• Timeliness rate of periodical inspection
• Timeliness rate of annual inspection


Fig. 2 The calculation steps of second-class indexes weight

the weak regulation of any of the companies, those companies should be fined or disqualified in this field.

4.2 Warning Model

All the analyses in the warning system are qualitative, and few regulations and rules determine the safety threshold of every index and of the whole system. So, in this warning system, a statistical method is used to determine the safety thresholds. The indexes that have a concrete value are calculated by definite formulas,


for example, FTY, timeliness rate, qualification rate, and so on. For the other indexes, for which it is difficult to obtain a concrete value, a scoring system is adopted, and the score is given by the supervisor according to the regulations and rules. The data of the second-class indexes are collected, recorded, and scored by the local, city, province, or country supervision administration, or come from the reports submitted by the companies themselves. TPEs with an electronic label (such as RFID) record and update the relevant information in their own labels, and TPEs without an electronic label have their information stored in a computer under a certain code. All the information is uploaded to the warning system. Moreover, the sum over all second-class indexes of the product of the original index value and its weight is taken as the synthetic result. Several TPEs were used as a sample, but the equipment used in this sample should be different from that used in Sect. 3.3, and the synthetic score for every TPE is obtained using the calculation method mentioned above. Based on these statistics, a value in the top 20% is set as a level IV warning, a value in the range of 20%–50% as a level III warning, a value in the range of 50%–80% as a level II warning, and a value in the last 20% as a level I warning. The warning information is shown in the warning system and sent to all related authorities and companies.
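The sketch below illustrates this scoring step under stated assumptions: the synthetic score is the weighted sum of the second-class index values, and the warning level follows from the score's percentile position in the sampled population using the 20%/50%/80% thresholds described above. The index values, weights, and sample population are invented for the example.

```python
# Sketch of the synthetic scoring and percentile-based warning levels described above.
# Index values, weights and the sample population are invented for illustration.
import numpy as np

def synthetic_score(index_values, weights):
    """Weighted sum of second-class index values (both 1-D arrays of equal length)."""
    return float(np.dot(index_values, weights))

def warning_level(score, sample_scores):
    """Map a score to a warning level from its position in the sampled population."""
    p80, p50, p20 = np.percentile(sample_scores, [80, 50, 20])
    if score >= p80:
        return "Level IV (no warning)"
    if score >= p50:
        return "Level III (light warning)"
    if score >= p20:
        return "Level II (medium warning)"
    return "Level I (heavy warning)"

rng = np.random.default_rng(1)
weights = np.array([0.4, 0.35, 0.25])
population = rng.uniform(0.6, 1.0, size=(100, 3)) @ weights   # scores of sampled TPEs
print(warning_level(synthetic_score([0.95, 0.90, 0.97], weights), population))
```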

5 Conclusions

In this paper, a warning system from a regulatory perspective is designed. The system covers regulation and supervision in almost every process of TPE, including the design, manufacture, using, filling, inspection, and maintenance processes. The index system was established based on the risk analysis and consists of five first-class indexes: design and manufacture safety, type test safety, using safety, filling safety, and periodical inspection safety. In order to obtain a better assessment, several second-class indexes are set under the corresponding first-class indexes. The weight of every index was calculated using the AHP and entropy methods. Then, the warning model was studied, and the synthetic score of the safety conditions can be obtained using the warning model. In this system, there are four warning levels, and the warning information of every TPE can be sent to the related authorities and companies.

Acknowledgements We would like to thank the National Key R&D Program of China "Research on Key Technologies of Risk Supervision and Management for Typical Transportable Pressure Equipment" (Grant No. 2017YFC0805605) for the financial support.


References

1. Notice of the state administration of market supervision on the safety of special equipment nationwide in 2018, 04 April 2019. http://www.samr.gov.cn/tzsbj/dtzb/gzdt/201904/t20190404_292592.html
2. Tschirschwitz R, Krentel D, Kluge M et al (2017) Mobile gas cylinders in fire: consequences in case of failure. Fire Saf J 91:989–996
3. Ge S, Zhang YL, Xu XD (2004) Measurement and analysis of residue stress conditions in high pressure gas cylinder. In: ICRS-7, Xi'an, pp 322–327
4. Yue Z, Yu F (2013) Research on materials corrosion failure model of liquefied natural gas (LNG) cylinder for vehicle. In: ICMSE, Dalian, pp 690–693
5. Pal U, Mukhopadhyay G, Bhattacharya S (2017) Failure analysis of an exploded CO2 gas cylinder. Eng Fail Anal 79:455–463
6. Kingklang S, Daodon W, Uthaisangsuk V (2019) Failure investigation of liquefied petroleum gas cylinder using FAD and XFEM. Int J Pressure Vessels Piping 171:69–78
7. Song BH, Lee JH, Jo YD et al (2016) Study on the safety management of toxic gas cylinder distribution using RFID. Int J Saf Secur Eng 6(2):209–218
8. Kim JN, Jeong BY, Park MH (2017) Accident analysis of gas cylinder handling work based on occupational injuries data. Hum Fact Ergon Manufact 27(6):280–288
9. Liu SJ, Chen ZZ, Huang QH (2018) Construction of monitoring and management platform for intelligent connected special equipment-taking transportable pressure special equipment for example. China Spec Equip Saf 34(8):1–8
10. Liu X, Rapik Saat M, Barkan CPL (2013) Integrated risk reduction framework to improve railway hazardous materials transportation safety. J Hazard Mater 260:131–140
11. Ghaleh S, Omidvari M, Nassiri P et al (2019) Pattern of safety risk assessment in road fleet transportation of hazardous materials (oil materials). Saf Sci 116:1–12

Industry 4.0: Why Machine Learning Matters?

T. H. Gan, J. Kanfoud, H. Nedunuri, A. Amini, and G. Feng

Abstract Machine learning is at the forefront of the Industry 4.0 revolution. Both the amount and the complexity of data generated by sensors make interpretation and ingestion of the data beyond human capability. It is impossible for engineers to optimise an extremely complex production line and to understand which unit or machine is responsible for quality or productivity degradation. It is extremely difficult for engineers to process monitoring or inspection data, as this requires a protocol, a trained and certified engineer, and experience. It is also extremely difficult to guarantee the quality of every single product, particularly at high production rates. Artificial intelligence can help answer these questions. Indeed, machine learning can be used for predictive or condition-based maintenance: even without knowing the design of the machine (i.e., gearbox stages, bearing design, etc.), a machine learning algorithm is capable of monitoring deviations of sensor features from a healthy state. Machine learning can also be used to monitor the quality of production by automating the quality control process, to monitor the additive manufacturing process in order to detect defects during printing and allow mitigation through real-time machining and reprinting of the defective area, and to ensure the quality of very complex and sensitive production processes such as MEMS, electronic wafers, solar cells, or OLED screens. Brunel Innovation Centre (BIC) is working on developing algorithms combining statistical methods, signal/image processing for feature extraction, and deep learning for automated defect recognition, for quality control and for predictive maintenance. Brunel Innovation Centre is also working on integrating those technologies into the digital twin.

Keywords Industry 4.0 · Machine learning · Digital twin · In-line inspection



1 Introduction

The term industrial revolution was first popularised by the English economic historian Arnold Toynbee to describe the process that led to the United Kingdom's economic and industrial development from 1760 to 1840. Steam power and mechanised production are among the most recognisable developments associated with the 1st industrial revolution, but it had various technological, socio-economic, and cultural features. Among the technological changes were the use of new materials (iron), the use of new energy sources (coal, steam), the invention of new machines (the spinning jenny), a novel organisation of work in the factory system, the development of transportation and communication, and the increasing application of science to industry. The 2nd industrial revolution (1870–1910), also referred to as the technological revolution, is considered the basis of modern life, paving the way for widespread electricity, modern transport, water supply, sewage systems, etc. This was achieved through major advances such as petrol replacing coal as the main energy source and the discovery of the Bessemer process for the mass production of steel, but also electrification and the invention of the internal combustion engine. The increase in the size of industries resulted in complex production, maintenance, and financial problems that led to significant developments in industrial engineering, manufacturing engineering, and business management. Such developments were pioneered by Frederick Winslow Taylor, leading to the theory bearing his name [1], and by Ford [2], with the optimisation of factories for the mass production of standardised products. This was achieved thanks to precise and organised moving assembly lines using purpose-built machines, simplifying production and allowing the employment of unskilled workers. The 3rd industrial revolution (1950 to 1980) is the digital revolution, built on the development of semiconductors, transistors, and microprocessors. This led to a progressive move from analogue to digital electronics in industry. The development of programmable logic controllers, PID controllers, computers, and network communication resulted in control systems such as SCADA and allowed a progressive automation of production and the centralisation of monitoring and control of equipment, machines, processes, and production. Industry 4.0, or the 4th industrial revolution, is a revolution in the making, targeting smart and flexible manufacturing with mass customisation, better quality, and improved productivity. This paper is organised in three sections: the first presents Industry 4.0, the second explains why machine learning is needed for Industry 4.0, and the final section presents examples of work developed by Brunel Innovation Centre (BIC) in the context of Industry 4.0.

2 Industry 4.0

Industry 4.0 is a strategic German initiative, described in the final report of the working group [3]. It is easier to understand Industry 4.0 through its main objectives:


• Production flexibility: optimising the steps in a manufacturing process in order to reduce cost, optimise machine loads, and reduce dead time.
• Convertible factories: factories that generate individualised products, which requires a modular factory that adapts in near real time to customer-specific and differing requirements while achieving mass, low-cost production.
• Customer-oriented solutions: using sensor data to better understand the customer environment and needs and to adapt the product or service accordingly.
• Optimised logistics: linking customer orders to predictive algorithms assessing the consumables and material required, and optimising the supplier choice and the delivery route to achieve both optimised cost and respected deadlines.
• Use of data: data was already available thanks to the 3rd industrial revolution, but it is local, and the widely used SCADA standard is very rigid and not suitable for production flexibility and convertible factories. In the context of Industry 4.0, production data can be used more intelligently to monitor process efficiency, cost, health, and reliability, and also to monitor the product during its life, opening up opportunities for new business models (service rather than product, or both).
• Circular economy: the availability of data during a product's life allows assessing the life of the product and planning its recycling as well.

Industry 4.0 is an outcome of the current digital era, but where the 3rd industrial revolution was driven by digital electronics, the 4th revolution is driven by the Internet. Indeed, connectivity is key to Industry 4.0. By connectivity we mean the intra-connectivity that was achieved in the 3rd revolution but lacked flexibility; more importantly, we mean the vertical connectivity with the supply chain and the customers and the horizontal connectivity between production, maintenance, logistics, research, marketing, and finance. The near real-time flow of data between different departments and different stakeholders is the key to achieving the above objectives. In practice, Industry 4.0 is achieved by continuously mapping the physical events during production and service to the digital domain. Paolo et al. [4] list the key enabling technologies for Industry 4.0, as shown in Fig. 1.

3 Why Machine Learning Matters?

Industry 4.0 is built upon connectivity and data. The data come from the production units, the machines, the supply chain, the logistics, the intelligent and connected products, etc. The reason such an amount of data is collected is to support manufacturing, maintenance, industrial planning, and business intelligence. Wang et al. [5] state that manufacturing systems should be able to monitor physical processes and make smart decisions through real-time communication with humans, machines, and sensors. Processing the potentially huge amount of data and making smart decisions require machine learning, as humans are unable to


Fig. 1 Enabling technologies of Industry 4.0 [4]: Internet of Things, cloud computing, cybersecurity, big data, autonomous robots, digital twin and simulation, horizontal and vertical system integration, augmented reality, additive manufacturing

process either complex or huge amounts of data. Through a training process, machine learning algorithms are capable of creating virtual models of a production line, a factory, or a process and of linking the materials, power and consumables, the production steps and times, and the output, no matter how complex it is. The trained models allow optimising the production according to customer-specific requirements and delivery times, minimising cost and respecting deadlines while guaranteeing quality. Machine learning algorithms can automatically predict the required consumables and materials and inform the supply chain, as well as optimise the logistics for both cost and guaranteed on-time delivery. The trained algorithms could handle all of the above operations, partially or totally, depending on the quality and reliability of the data coming from the different stakeholders. They can also provide scenarios to humans, who make the final decisions. Machine learning algorithms can be used to monitor, without extra sensors, the health of the process and of the production by evaluating deviations in the power or time of production of the whole process as well as of individual machines, and to alert on deterioration and risk of failure. If the production process is equipped with relevant sensors for condition monitoring of machinery and/or quality control of the product at different stages, they can be used to monitor in real time both the quality of the products and the health of the machines, allowing a high degree of certainty about the productivity, efficiency, health, and quality of the production units.
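A generic sketch of the baseline-deviation idea described above is given below (not BIC's algorithms): a healthy baseline is fitted on features already logged by the line, such as power draw and cycle time, and new samples are flagged when their Mahalanobis distance from the baseline exceeds a threshold. The data and the threshold are placeholders.

```python
# Hedged sketch: flag deviations of process features (e.g. power draw, cycle time)
# from a healthy baseline using the Mahalanobis distance; data and threshold are placeholders.
import numpy as np

class BaselineMonitor:
    def fit(self, healthy):                      # healthy: (n_samples, n_features)
        healthy = np.asarray(healthy, float)
        self.mean = healthy.mean(axis=0)
        self.cov_inv = np.linalg.pinv(np.cov(healthy, rowvar=False))
        return self

    def distance(self, x):
        d = np.asarray(x, float) - self.mean
        return float(np.sqrt(d @ self.cov_inv @ d))

rng = np.random.default_rng(2)
healthy = rng.normal([5.0, 30.0], [0.2, 1.0], size=(500, 2))   # kW, seconds per cycle
monitor = BaselineMonitor().fit(healthy)
for sample in ([5.1, 30.5], [6.2, 36.0]):                      # second sample is degraded
    flag = "ALERT" if monitor.distance(sample) > 3.0 else "ok"
    print(sample, round(monitor.distance(sample), 1), flag)
```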


4 Case Studies of Machine Learning in Industry 4.0

In the context of Industry 4.0, BIC's work focuses on developing machine learning algorithms for condition monitoring, structural health monitoring, and in-line inspection. In this section, we review the impact of machine learning on various Industry 4.0 areas, emphasising BIC developments.

4.1 Machine Learning in Additive Manufacturing

Additive manufacturing is a key technology for Industry 4.0 as it allows production flexibility and customer-oriented solutions. Defects that occur during the additive manufacturing (AM) process can result in irreversible damage and structural failure of the object after its manufacture [6]. Machine learning can be implemented in the AM process to achieve high-quality products, for example as an additional layer on top of real-time and in situ non-destructive testing (NDT) evaluations. The nature of some AM methods means that not all NDT techniques are effective in characterising critical defects [6]. Thermography, X-ray computed tomography (CT scan), and digital radiography are limited by image resolution and cost and are not ideal for real-time monitoring [6]. Beyond defect identification and quality control, ML can be applied in additive manufacturing to process parameter optimisation and to process monitoring and control. Collins et al. [7] reported a machine learning model capable of predicting and estimating the mechanical strength of a part manufactured using electron beam direct manufacturing; the model was able to identify the distribution of yield strength using deep learning and genetic algorithms (GA). For metal arc welding, Xiong et al. [8] proposed a deep learning model to optimise the bead geometry based on several process parameters such as feed rate, welding speed, arc voltage, and nozzle-to-plate distance. In the EM-ReSt project [6], BIC is developing electromagnetic acoustic transducers (EMAT) and eddy current transducers (ECT) for real-time monitoring of AM components. Big data collection and analysis will be performed and blended with statistical, machine learning, and big data analysis techniques to estimate the likelihood of AM techniques introducing anomalies into the printed structures.
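To illustrate the kind of process-parameter model discussed above (this is not a reproduction of the model in [8]), the short scikit-learn sketch below regresses a bead-geometry quantity from wire feed rate, travel speed, arc voltage, and nozzle-to-plate distance on synthetic data with an invented dependency.

```python
# Illustrative sketch (not the model of [8]): learn bead geometry from process parameters
# on synthetic data, using a small neural network regressor.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
# Columns: wire feed rate (m/min), travel speed (mm/s), arc voltage (V), nozzle distance (mm)
X = rng.uniform([2, 3, 18, 10], [10, 12, 28, 18], size=(400, 4))
# Synthetic "bead width" with a simple made-up dependency plus noise
y = 1.5 * X[:, 0] / X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.05, 400)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0))
model.fit(X[:300], y[:300])
print("held-out R^2:", round(model.score(X[300:], y[300:]), 3))
```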

4.2 Machine Learning in Digital Twin

A digital twin is a digital representation of a physical system, developed independently and connected to the physical asset. The digital information can include a multi-physics-based model of the asset or sensory information. According to Glaessgen and Stargel [9], "the digital twin is an integrated multi-physics, multiscale, probabilistic simulation of a complex product and uses the best available


physical models, sensor updates, etc., to mirror the life of its corresponding twin”. Multiple researchers have empirically and theoretically shown that machine learning and big data technologies improve production efficiency. Techniques such as data mining, pattern recognition and deep learning have been demonstrated [10–12]. However, actual real-world applications and reported case studies of ML in manufacturing are very limited [13]. ML and big data are being utilised in manufacturing domains such as value creation [14] and quality control [15]. For tackling uncertainties in manufacturing, Monostori [16] reported a hybrid AI and multi-strategy machine learning approach. Machine learning can be implemented in all the above-listed manufacturing domains. Min et al. [13] proposed a machine learning-based digital twin framework, shown in Fig. 2. This framework aims to optimise production in the petrochemical industry. Similar models can be implemented in other industries to improve production efficiency (Fig. 3). BIC is involved in collaborative research projects aiming at developing digital twins of a wind turbine and of a bridge. WindTwin [17] aims to create a virtual copy of a wind turbine. The digital twin of the asset aims to improve the understanding

Fig. 2 Machine learning-based digital twin framework [13]


Fig. 3 Project smart bridge concept [18]

of performance variations, anticipate wear and failures and implement condition-based maintenance instead of planned scheduled maintenance. The project is developing Modelica models of the machinery and applying machine learning to enhance condition monitoring algorithms and improve defect detection. SmartBridge [19] aims to create a virtual model of a railway bridge located near London. The virtual model allows monitoring the condition and degradation of the asset by combining data from IoT sensors with Artificial Intelligence algorithms, considering environmental, operational and historic conditions. The project also aims at developing risk-based inspection algorithms based on the monitoring data and the loading of the bridge.

5 Conclusion and Perspectives Industry 4.0 and its enabling technologies are a very dynamic area of interest for industry and academia. A lot of resources are being invested in developing the enabling technologies: IIoT, cloud computing, machine learning models and libraries, etc. Industry 4.0 still suffers from a limitation common to any young concept:


the lack of standards. Indeed, there is no communication protocol for Industry 4.0, no security standards and not even a standard definition of Industry 4.0. According to Annalise Suzuki, Director of Technology and Engagement at Elysium Inc., there is no standard for machine-to-machine communication, which is key to Industry 4.0 implementation. The development of standards for Industry 4.0 is particularly challenging given that this revolution aims to connect not only the whole supply chain but all stakeholders, which makes agreements more difficult to achieve.

References
1. Taylor FW (1911) The principles of scientific management. Harper and Brothers, New York
2. Ford H, Crowther S (1922) My life and work. Doubleday, New York
3. Platform Industrie 4.0, 26 June 2019. https://www.plattform-i40.de/PI40/Navigation/EN/Home/home.html
4. Paolo C et al (2018) Product lifecycle management to support Industry 4.0. In: 15th IFIP WG 5.1 international conference, proceedings, Turin, Italy, vol 540. Springer
5. Wang S et al (2016) Towards smart factory for industry 4.0: a self-organized multi-agent system with big data based feedback and coordination. Comput Netw 101:158–168
6. Project EM-ReSt. https://www.brunel.ac.uk/research/Projects/EM-ReSt
7. Collins PC et al (2014) Progress toward an integration of process–structure–property–performance models for “three-dimensional (3-D) printing” of titanium alloys. JOM 66(7):1299–1309
8. Xiong J et al (2014) Bead geometry prediction for robotic GMAW-based rapid manufacturing through a neural network and a second-order regression analysis. J Intell Manuf 25(1):157–163
9. Glaessgen E, Stargel D (2012) The digital twin paradigm for future NASA and US Air Force vehicles. In: 53rd AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics and materials conference, Honolulu, Hawaii
10. Carbonell JG et al (1983) An overview of machine learning, pp 3–23. Morgan Kaufmann, New York
11. Kateris D et al (2014) A machine learning approach for the condition monitoring of rotating machinery. J Mech Sci Technol 28(1):61–71
12. Köksal G et al (2011) A review of data mining applications for quality improvement in manufacturing industry. Expert Syst Appl 38(10):13448–13467
13. Min Q et al (2019) Machine learning based digital twin framework for production optimization in petrochemical industry. Int J Inf Manag 49:502–519
14. ur Rahman MH et al (2016) Big data reduction framework for value creation in sustainable enterprises. Int J Inf Manag 36(6):917–928
15. Tellaeche A, Ramón A (2016) Machine learning algorithms for quality control in plastic molding industry. In: IEEE 18th conference on emerging technologies & factory automation, Cagliari, Italy
16. Monostori L (2003) AI and machine learning techniques for managing complexity, changes and uncertainties in manufacturing. Eng Appl Artif Intell 16(4):277–291
17. Project WindTwin. https://www.brunel.ac.uk/people/project/184220
18. Project SmartBridge. https://www.brunel.ac.uk/research/Institutes/Institute-of-Materials-andManufacturing/Structural-Integrity/Brunel-Innovation-Centre/Project-summaries/SmartBridge
19. https://gtr.ukri.org/projects?ref=103883

Condition Monitoring of Electromechanical Systems

A Method for Online Monitoring of Rotor–Stator Rub in Steam Turbines J. Liska and J. Jakl

Abstract In recent years, the operation of steam turbines has been associated with the occurrence of rotor–stator rub. This is because the clearances in the turbine flow paths are being reduced in an effort to increase turbine efficiency. At present, detailed rubbing diagnostics is implemented as offline analysis of measured data. The detection of rotor–stator rub is based on the analysis of vibration signals in the time domain, where a machine operator monitors the overall level of vibration or the vector of the rotation frequency contained in the measured vibration signal. This paper deals with the design and implementation of a system for online monitoring of rotor–stator rub, implemented as the rub advanced monitoring system (RAMS), which, in addition to the detection phase, also provides localization of the place of contact occurrence based on additional knowledge of the rotor geometry. Besides the contact on the shaft, the contact between the blade tips and the stator seal can also be detected and localized. The contact is in that case accompanied by periodic impacts whose frequency, for example, corresponds to the product of the number of swirl brakes above the blade tips and the rotation frequency. With this information obtained from standard rotor vibration signals, the rotor operation can be diagnosed in more detail and, if necessary, the turbine operation can be adapted to the current situation. Keywords Rotor–stator rub · Full spectrum · Diagnostics · Remote monitoring

1 Introduction Advanced analysis of rotor vibration signals provides additional information on rotor condition and its operation. From the long-term point of view, a higher efficiency of the turbine is achieved primarily by reducing the clearances between the rotor and stator, and the seal parameters are designed to minimize steam energy losses. The rotor–stator contact is subsequently a necessary result appearing during the


machine operation. It includes several physical phenomena, such as friction, impacts, changes in system stiffness due to the physical connection of the rotor and stator, etc. Especially after installation of a new rotor and seals, there is a high probability of rub occurrence due to the small clearances in the seals. In some cases, the seals are abraded and the contact usually disappears. However, a more intensive contact causes the rotor to heat up at the point of contact, which is accompanied by a thermal bow of the rotor. The condition may deteriorate rapidly and cause serious damage to the machine. Muszynska describes the rub problem in [1]. At present, diagnostics of the rotor–stator contact is based on an offline analysis of vibration data. Within the turbine operation, the overall rotor vibration and the 1X phasor are monitored. As discussed by Goldman in [2], the variation of the 1X phase and amplitude at constant rotation speed may indicate a bent shaft caused by an inhomogeneous thermal stress of the rotor during the rub occurrence. This is an indirect method that detects only rub accompanied by rotor bending. Several authors [3–5] have proposed more or less advanced methods for rub monitoring and identification. These methods utilize primarily signal analysis in the time–frequency domain. Time–frequency signal processing methods allow analyzing and describing the time dependency of the signal frequency content. The inputs for method testing are usually obtained by numerical simulations of a rotor–stator contact model (see, for example, [3]) or experimentally using laboratory rotor stands and rigs (see [6]). Rotor–stator rub detection methods are most often based on the analysis of rotor vibration data (relative rotor or absolute stator signals). Hall and Mba in [7] have also proposed acoustic emission for the analysis of rub signals.

2 Rotor–Stator Contact Diagnostics As mentioned above, the methods for offline rub diagnostics described in the literature are based on the processing and representation of relative rotor vibration signals by evaluating the development of 1X phasors over time (see Fig. 1). In addition, Muszynska in [1] and [8] describes a method of calculating the full spectrum that allows the determination of rotor precession. A change of rotor precession from forward to backward may be an additional indicator of rub presence, with the exception of a rotor supported by anisotropic bearings during transient changes of rotor speed. Periodic partial rub is a special type of rub, when the rotor hits the stator n times per m turns. As a symptom of partial rub, the appearance of subsynchronous frequencies in the spectrum of the analyzed signals can be detected. The issue of online detection and early detection of rubbing has been solved by automation of the full spectra processing (method of full cumulative spectrum) and by subsequent standardization of the subharmonic frequency components of the spectrum. A novel method for periodic and partial rub detection was described in [9] and [10] (Fig. 2). Thresholding of the resulting characteristic values is used for automatic detection of partial rub. The resulting algorithm (see [11] and [12]) has been implemented into


Fig. 1 Development of 1X phasors during contact—measuring planes on gearbox and generator

Fig. 2 Waveform of characteristic (standardized) values—HP (left) and LP (right) part of TG 200 MW

the rub advanced monitoring system (RAMS), which, in addition to detection phase, provides also localization of the place of contact occurrence, based on additional knowledge of the rotor geometry.
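To make the full-spectrum step concrete, the following Python sketch outlines the general idea, assuming two orthogonal relative rotor vibration signals x and y: the complex signal x + jy is Fourier transformed so that positive frequencies correspond to forward precession and negative frequencies to backward precession, and a simple sub-synchronous indicator is obtained by comparing the sub-synchronous content with the forward 1X component. The band limits, the ratio-style indicator and the synthetic test signal are illustrative assumptions, not the actual RAMS implementation.

import numpy as np

def full_spectrum(x, y, fs):
    # Full spectrum of two orthogonal relative rotor vibration signals:
    # positive frequencies = forward precession, negative = backward precession.
    z = np.asarray(x) + 1j * np.asarray(y)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(z))) / len(z)
    freqs = np.fft.fftshift(np.fft.fftfreq(len(z), d=1.0 / fs))
    return freqs, spectrum

def subsynchronous_ratio(freqs, amp, f_rot, band=(0.3, 0.7)):
    # Ratio of sub-synchronous content (e.g. around 0.5X) to the forward 1X component;
    # thresholding such a ratio is one possible way to flag periodic partial rub.
    sub = (np.abs(freqs) > band[0] * f_rot) & (np.abs(freqs) < band[1] * f_rot)
    one_x = np.argmin(np.abs(freqs - f_rot))
    return amp[sub].sum() / amp[one_x]

# Synthetic check: forward 1X orbit plus a weak forward 0.5X component.
fs, f_rot = 5000.0, 50.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * f_rot * t) + 0.1 * np.cos(np.pi * f_rot * t)
y = np.sin(2 * np.pi * f_rot * t) + 0.1 * np.sin(np.pi * f_rot * t)
freqs, amp = full_spectrum(x, y, fs)
print(subsynchronous_ratio(freqs, amp, f_rot))   # approximately 0.1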


3 Localization of the Rub During Operation of the Turbine The symptoms of shaft rubbing were successfully detected many times by the rub advanced monitoring system during the operation of the steam turbine. Besides the contact on the shaft, the contact between the blade tips and the stator seal can also be localized. In the following text, a case study of blade rubbing localization is presented. The rub symptoms were significantly and simultaneously measured at three measurement planes (bearing pedestals). Consequently, the localization of the contact origin was performed. Firstly, the blade contact was recognized at the excited frequency (276 Hz), which corresponds to the product of the rotation speed and the number of “swirl brakes” installed in the stator (see Fig. 3). To identify the time of arrival (TOA) of the rub-induced vibration, a method comparing signal characteristics in two consecutive sliding windows (length N) in predefined frequency bands was used. The aim of the method is to identify signal changes. The ratio of two consecutive windows shows the local properties. Let us consider a signal s and its n-th sample. Then the quantity w describes the signal changes by dividing the signal parts between samples n and n + N − 1 and between samples n − N and n. The rub impact TOA is identified as the maximum of the w-value.

w(n) = \frac{\sum_{i=n}^{n+N-1} |s(i) - s(i-1)|}{\sum_{i=n-N}^{n-1} |s(i) - s(i-1)|} \qquad (1)

The w-value is used to determine the TOA as an input for data processing in the rub localization phase after its detection (Fig. 4). The global maximum of w-value could be also guaranteed by appropriate choice of window length N because of signal stationarity and low damping before the rub

Fig. 3 Time–frequency representation of the rub—absolute bearing vibration signal


Fig. 4 w-value (red) detects the beginning of impact arrival in selected vibration frequency band (blue)

origin. A window length of 1 ms (i.e. N = 100) has been verified for signals with a sampling frequency of 50 kHz. The advantage of this approach is its ability to adapt itself at different frequencies to determine the beginning of the rub event in the time–frequency domain. In most cases, the transmission and propagation velocity of the rub-induced vibration through the rotor is not known. Therefore, the detection of the probable place is simplified by using a linear localization based on the knowledge of the distances between the measurement planes. In this case, three measurement planes are used for localization, and therefore it is not necessary to have prior knowledge of the speed of wave propagation. A frequency range of 270–280 Hz was used for the analysis of the blade rub in the specific case of the event described in this article. The frequency band of 1–6 Hz is the band of the shaft rubbing, i.e. a contact in the labyrinth seals (see Fig. 3). The time–frequency representation was calculated using an approximation of the continuous Gabor transform. To estimate the TOA of the rub vibration at the measuring sensor, the method based on (1) was implemented. Figure 5 shows the resulting axial rub localization in the form of a histogram (the column width is 50 mm). The picture also schematically shows the turbine HP part in the background and several rotating blade stages. The places of sensor installation are highlighted and labeled as SV1–SV3 (shaft vibration). The highest occurrence of the contact was detected between the 4th and the 6th stage of the HP rotor. The variance of the localization partly depends on the noise of the measuring chain and on the operational deflection shape of the rotor during the rub occurrence. However, the turbine inspection revealed that the rub occurred at multiple places, with the greatest damage in the place determined by the localization (see Fig. 6).
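A minimal Python sketch of the TOA estimation based on (1) is given below; the window length and sampling frequency follow the values mentioned in the text, while the synthetic test signal is an invented example used only to show the shape of the computation.

import numpy as np

def w_value(s, N):
    # Sliding ratio of summed absolute first differences in two consecutive windows, Eq. (1).
    d = np.abs(np.diff(s))
    w = np.zeros(len(s))
    for n in range(N + 1, len(s) - N):
        numerator = d[n - 1:n + N - 1].sum()      # samples n .. n+N-1
        denominator = d[n - N - 1:n - 1].sum()    # samples n-N .. n-1
        w[n] = numerator / denominator if denominator > 0 else 0.0
    return w

def time_of_arrival(s, fs, N=100):
    # The rub impact TOA is taken as the position of the maximum w-value.
    return np.argmax(w_value(s, N)) / fs

# Example: stationary noise followed by an impact-like burst at about 0.12 s.
fs = 50_000.0
s = np.random.default_rng(0).normal(0.0, 1.0, 10_000)
s[6_000:6_050] += 8.0 * np.hanning(50)            # simulated rub-induced impact
print(time_of_arrival(s, fs, N=100))              # close to 6000 / fs = 0.12 s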


Fig. 5 Example of contact localization in the form of histogram—HP part of TG 200 MW

There is also a need to identify the place of contact circumferentially, not only axially. We use a combination of both absolute bearing vibration and relative shaft vibration signals to approximate the tangential localization. By combining the information from both sensor types, the angle of maximum contact intensity can be determined. The resulting tangential angle of contact is displayed in Fig. 7 (right-bottom part). The shown ellipse matches the orbit of the rotor based on the relative rotor vibration measurement. The color of the ellipse is based on the evaluation of the amplitude in the characteristic band of the absolute stator vibration signal (minimum = blue, maximum = red). Thus, the maximum amplitude (red) determines the angle of the contact between the rotor and stator. With this information obtained from standard rotor vibration signals, the rotor operation can be diagnosed in more detail with a precise rub localization and, if necessary, the turbine operation can be adapted to the current situation.

4 Conclusion After the rub detection and localization, the HP turbine part was inspected and the estimated rub places were examined in detail. The inspection outputs have confirmed the results of the linear localization presented above (axial and tangential part). The most significant abrasion of the swirl brakes was detected between the 4th and 6th stage above the blade tips, and the honeycomb seal was also missing in several places in the bottom part of the HP stator. In this case, visual inspection has helped to evaluate the


Fig. 6 Inspection of the stator part of 220 MW steam turbine after the rub occurrence

Fig. 7 Significant damage of the honeycomb seal (above blade tips) and tangential localization of the blade rubbing


results of the rub detection and localization method and to increase the confidence of turbine operators in its ability. The RAMS monitoring system with the implemented method provides the rub localization with centimeter accuracy. This significantly reduces the duration of a forced outage, since it is known in advance where exactly the rub occurred. Moreover, the planning of the repair can be much more detailed for the same reason. From the steam turbine operation point of view, the developed technology serves as an early alert system that protects steam turbines from the serious damage that rub can easily cause when it is not revealed on time. Acknowledgements This work was supported from ERDF under project “Research Cooperation for Higher Efficiency and Reliability of Blade Machines (LoStr)” No. CZ.02.1.01/0.0/0.0/16_026/0008389 and by the project from the Ministry of Education, Youth and Sports of the Czech Republic, PUNTIS-LO1506.

References
1. Muszynska A (2005) Rotordynamics. Taylor & Francis, London
2. Goldman P, Muszynska A, Bently DE (2000) Thermal bending of the rotor due to rotor-to-stator rub. Int J Rotating Mach 6(2):91–100
3. Patel TH, Darpe AK (2009) Study of coast-up vibration response for rub detection. Mech Mach Theory 44:1570–1579
4. Peng Z, He Y, Lu Q, Chu F (2003) Feature extraction of the rub-impact rotor system by means of wavelet analysis. J Sound Vib 259(4):1000–1010
5. Lim MH, Leong MS (2013) Detection of early faults in rotating machinery based on wavelet analysis. In: Advances in mechanical engineering, vol 2013, 8 p
6. Abuzaid MA, Eleshaky ME, Zedan MG (2009) Effect of partial rotor-to-stator rub on shaft vibration. J Mech Sci Technol 23:170–182
7. Hall LD, Mba D (2004) Diagnosis of continuous rotor-stator rubbing in large scale turbine units using acoustic emissions. Ultrasonics 41:765–773
8. Goldman P, Muszynska A (1999) Application of full spectrum to rotating machinery diagnostics. Orbit 20(1):17–21
9. Liska J, Jakl J, Cerny V (2011) The use of time-frequency methods in rotor/stator impact-rubbing detection. In: Proceedings of ASME 2011 international design engineering technical conferences & computers and information in engineering conference, IDETC/CIE 2011, Washington, DC, pp 1–10
10. Liska J, Jakl J, Janecek E (2012) Steam turbine rotor/stator impact and rubbing detection. In: Proceedings of 9th international conference on condition monitoring and machinery failure prevention technologies, London, pp 1–12
11. Liska J, Cerny V. A method of detecting and localizing partial rotor-stator rubbing during the operation of a turbine. US patent US9903787B2
12. Liska J, Cerny V. A method of detecting and localizing partial rotor-stator rubbing during the operation of a turbine. European patent EP2746541

Implementation of Instantaneous Power Spectrum Analysis for Diagnosis of Three-Phase Induction Motor Faults Under Mechanical Load Oscillations; Case Study of Mobarakeh Steel Company, Iran S. Mani, M. Kafil, and H. Ahmadi Abstract Induction motors are the most commonly used prime movers in various industrial applications. Their widespread use necessitates a thorough condition monitoring framework to assure their well-being, reduce costs, increase their lifetime and avoid unwanted stoppages. Intensive research efforts have been focused on motor current signature analysis (MCSA), a technique which utilizes the spectral analysis of the stator current and aims to detect electrical and mechanical defects. Although MCSA is one of the most common and efficient techniques in condition monitoring of induction motors, several drawbacks hinder its applicability and efficiency in certain conditions; one of the most important is when the motor faces oscillating loads, where fault-related sidebands in the MCSA spectrum are confused with the ones due to load oscillation, which leads to false indications. In this paper, a new fault detection technique focusing on active and reactive instantaneous power signatures rather than the stator current is proposed for the diagnosis of faults in three-phase induction motors and the prevention of false indications due to overlapping fault sideband components with those related to load oscillations. As a specific case study, a large 2 MW motor in the steel production process at Mobarakeh Steel Company, Iran, with a relatively oscillating load is studied. A common fault in induction motors, a broken rotor bar, is simulated for verification of the results. Computer simulation is performed using MATLAB based on the proposed case study, and both approaches are compared in both healthy and faulty situations, highlighting the advantages of the proposed method. Keywords Motor Current Signature Analysis · Diagnosis · Instantaneous power · Broken rotor bar · Oscillating load


1 Introduction Condition-based maintenance (CbM) and condition monitoring (CM) assess the current status of equipment and indicate the type of maintenance required, while achieving optimal use of machine parts and guaranteeing that breakdowns will not occur unexpectedly. With the help of an intelligent diagnostic condition monitoring system, CbM is able to present exact and valuable information about the condition of a machine. Many methods have been developed to support the development of condition monitoring systems using specialized sensors [1–4]. The most prevalent type of motor utilized in industry is the induction motor, which makes up 95% of industrial prime movers [5, 6]. It can therefore be posited that induction motors are the most commonly used electromechanical machines in industry. During their operation, they are exposed to various loading and environmental conditions which, acting together with the natural aging process of the motor, may lead to numerous types of failures. As a result, and due to their important role in industry, significant efforts have been dedicated to induction machine fault diagnosis, and many techniques have been proposed to detect and diagnose incipient motor faults, including both electrical and mechanical faults. In recent years, the use of analysis methods together with computerized data acquisition and processing systems has opened new areas in the research of condition monitoring for induction motors. Computer and transducer technologies combined with advanced signal processing methods have made it possible to apply condition monitoring systems more effectively and with lower maintenance costs [7, 8]. The key point is that it remains necessary during the maintenance period to obtain data regarding the status of the machines by on-line condition monitoring, so that machinery failures can be effectively reduced. Motor current signature analysis (MCSA) is one of the simplest and most popular methods for the detection of three-phase induction motor faults [9–12]. MCSA is based on the analysis of the spectral components of one or more motor currents when the motor is at the steady state [9, 10]. Its remote monitoring capability, based on the stator current measurement available in the motor control center, its simple setup and its accurate results make it an attractive and convenient technology to use in an industrial setting. Although MCSA proves to be non-intrusive and economical for motor fault detection, false indications in the MCSA approach due to the overlap of fault-related sidebands with those related to load oscillations are unavoidable [13–25]. Thus, it is essential to have a non-invasive on-line condition monitoring scheme which has the ability to detect defects under various loading conditions. In this paper, in place of the stator current and to avoid the shortcomings of this approach, the instantaneous power is proposed as a medium for motor signature analysis oriented toward the detection of motor abnormalities [26–28]. The procedure is known as the instantaneous power signature approach (IPSA) and uses line-to-line voltages and line currents to


derive the desired signals [29–33]. Numerous studies, such as [34, 35], have investigated this phenomenon and made a comparison between MCSA and IPSA, describing the advantages of the latter over the former in such practical conditions. It is shown that the amount of information carried by the active and reactive instantaneous power signals is higher than that deducible from the current signal alone. As mentioned earlier, the advantage of IPSA over MCSA is most pronounced in cases where the load oscillates with time; the lateral fault sidebands in the current signal are then confused with those related to load variations, leading to false indications. The IPSA approach can assist in such cases by distinguishing the two sidebands. In order to investigate the effectiveness and feasibility of the proposed approach in industrial applications, a comprehensive simulation is presented in this paper, offering the possibility to model fault conditions and to investigate whether their separation from load torque oscillations is possible, in both the MCSA and IPSA approaches. The simulation is performed in the MATLAB environment, based on industrial data derived from a Stand Reversing Mill 2 MW induction motor, located at the cold rolling mill plant of Mobarakeh Steel Company, Iran.

2 IPSA in Healthy, Faulty and Load Oscillation Conditions The outstanding advantages and probable drawbacks of MCSA have been elaborated in the previous section. As mentioned, in cases accompanied by load variations, MCSA has serious shortcomings, as the fault sidebands in the current spectrum can be confused with the ones related to load oscillations, and this may lead to false indications. The instantaneous power spectrum carries more information in this regard and can therefore assist in such cases. In this section, IPSA is investigated with a focus on a healthy and a defective motor, taking load variation scenarios into account.

2.1 Case of a Healthy Motor Let us consider a healthy induction motor, powered from a balanced three-phase source of sinusoidal voltages and driving a constant load. Taking this motor as a reference, the waveforms of the voltage and current of phase a can be written as (1) and (2) [36, 37]. It is worth mentioning that no abnormalities in the stator windings are assumed in further considerations.

v_a(t) = \sqrt{2}\, V \cos(\omega t) \qquad (1)

i_a(t) = \sqrt{2}\, I \cos(\omega t - \varphi) \qquad (2)


where V and I are the stator voltage and current RMS values, respectively, and φ is the phase angle. The voltages and currents of phases b and c are obtained with a phase difference of 2π/3. The three-phase instantaneous active and reactive powers p_{3ph}(t) and q_{3ph}(t) are calculated via (3) and (4).

p_{3ph}(t) = v_a i_a + v_b i_b + v_c i_c = 3VI\cos\varphi \qquad (3)

q_{3ph}(t) = \frac{1}{\sqrt{3}}\left[(v_b - v_c)\, i_a + (v_c - v_a)\, i_b + (v_a - v_b)\, i_c\right] = 3VI\sin\varphi \qquad (4)
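As a quick numerical check of (3) and (4), the short Python sketch below builds balanced three-phase waveforms and verifies that, for a healthy machine and constant load, the instantaneous active and reactive powers reduce to the constants 3VI cos φ and 3VI sin φ. All numerical values (RMS voltage and current, phase angle, supply frequency) are arbitrary assumptions used only for this check.

import numpy as np

V, I, phi, f = 635.0, 1200.0, 0.5, 50.0     # assumed RMS values, phase angle (rad), supply frequency (Hz)
w = 2 * np.pi * f
t = np.arange(0.0, 0.2, 1e-4)

# Balanced three-phase voltages and currents, Eqs. (1)-(2) with 2*pi/3 phase shifts.
va = np.sqrt(2) * V * np.cos(w * t)
vb = np.sqrt(2) * V * np.cos(w * t - 2 * np.pi / 3)
vc = np.sqrt(2) * V * np.cos(w * t + 2 * np.pi / 3)
ia = np.sqrt(2) * I * np.cos(w * t - phi)
ib = np.sqrt(2) * I * np.cos(w * t - 2 * np.pi / 3 - phi)
ic = np.sqrt(2) * I * np.cos(w * t + 2 * np.pi / 3 - phi)

p3ph = va * ia + vb * ib + vc * ic                                      # Eq. (3)
q3ph = ((vb - vc) * ia + (vc - va) * ib + (va - vb) * ic) / np.sqrt(3)  # Eq. (4)

print(np.allclose(p3ph, 3 * V * I * np.cos(phi)))   # True: constant 3*V*I*cos(phi)
print(np.allclose(q3ph, 3 * V * I * np.sin(phi)))   # True: constant 3*V*I*sin(phi)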

2.2 Case of a Defective Motor Let us now consider the presence of a fault in the rotor which modulates the current spectrum. As mentioned before, the occurrence of a rotor cage fault or a mechanical abnormality is characterized by the appearance of a sequence of sideband components around the fundamental, at frequencies f_s ± kf_d, in the stator current spectrum (with f_d = f_b, f_r or f_0 for the case of broken rotor bars, air-gap eccentricity faults, or load torque oscillations, respectively). A broken rotor bar is a relatively common fault in induction motors and accounts for a large percentage of all induction motor failures [9, 10]. It is taken as the motor fault in this study and simulation. For simplicity, it is considered that the failure causes only an amplitude modulation of the stator current. It can also be shown that phase modulations appear in the current as amplitude modulations, a result derived from Bessel-type functions, which are integral to the mathematical model of the induction motor used in the simulation section of this study. The modulated current i_{a,f} can be expressed by:

i_{a,f}(t) = i_a(t) + \sum_{k=1}^{\infty}\left[\sqrt{2}\, I_{1,k} \cos\!\left((\omega - k\omega_f)t - \varphi_{1,k}\right) + \sqrt{2}\, I_{2,k} \cos\!\left((\omega + k\omega_f)t - \varphi_{2,k}\right)\right] \qquad (5)

where i_a(t) is the healthy-mode current of phase a and ω_f is the angular frequency of the failure. The instantaneous active and reactive power signals can then be calculated by

p_f(t) = 3VI\cos\varphi + 3V \sum_{k=1}^{\infty}\left[I_{1,k} \cos\!\left(k\omega_f t + \varphi_{1,k}\right) - I_{2,k} \cos\!\left(k\omega_f t - \varphi_{2,k}\right)\right] \qquad (6)

q_f(t) = 3VI\sin\varphi + 3V \sum_{k=1}^{\infty}\left[I_{1,k} \sin\!\left(k\omega_f t + \varphi_{1,k}\right) - I_{2,k} \sin\!\left(k\omega_f t - \varphi_{2,k}\right)\right] \qquad (7)

Thus, with some mathematical simplifications, the amplitudes of the fault components produced in the active and reactive power spectra are derived via (8) and (9).

P_{f,k} = 3V \sqrt{I_{1,k}^2 + I_{2,k}^2 + 2 I_{1,k} I_{2,k} \cos\!\left(\varphi_{1,k} + \varphi_{2,k}\right)} \qquad (8)

Q_{f,k} = 3V \sqrt{I_{1,k}^2 + I_{2,k}^2 - 2 I_{1,k} I_{2,k} \cos\!\left(\varphi_{1,k} + \varphi_{2,k}\right)} \qquad (9)

Based on the phase angles φ_{1,k} and φ_{2,k} in the rotor fault and load oscillation scenarios, the sideband frequency is visible in only one of the two power spectra, while it is visible in the current spectrum in both cases. The superiority of this approach is that the load oscillation can be distinguished from rotor faults, as the oscillation frequency component is absent in the reactive power spectrum and the rotor fault frequency component is absent in the active power spectrum. These two spectra can therefore be used together to discriminate the two frequency components.
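In practice, this discrimination can be reduced to a small decision rule once the spectra of the stator current, the active power and the reactive power are available. The Python sketch below is only a schematic illustration of that rule; the band tolerance and detection threshold are illustrative assumptions, and the spectra are assumed to be already computed (for example by an FFT of the measured signals).

import numpy as np

def peak_amplitude(freqs, spectrum, f0, tol=0.2):
    # Largest spectral amplitude in a narrow band around the candidate frequency f0.
    band = np.abs(np.asarray(freqs) - f0) <= tol
    return np.asarray(spectrum)[band].max() if band.any() else 0.0

def classify_sideband(freqs_p, P, freqs_q, Q, f_candidate, threshold=0.01):
    # Following the reasoning above: a load-oscillation component should be absent from
    # the reactive power spectrum, while a rotor-fault component should be absent from
    # the active power spectrum.
    in_p = peak_amplitude(freqs_p, P, f_candidate) > threshold
    in_q = peak_amplitude(freqs_q, Q, f_candidate) > threshold
    if in_q and not in_p:
        return "rotor fault"
    if in_p and not in_q:
        return "load oscillation"
    return "inconclusive"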

2.3 Case of a Varying Mechanical Load Abnormal mechanical load conditions often significantly influence the characteristic signatures in the motor current spectrum [38, 39]. Mechanical load abnormalities can be classified into two main types. The first type is mechanical faults, such as shaft misalignments, that essentially create a rotor eccentricity inside the motor, so the fault indicator frequencies directly reveal them. The second type of mechanical load anomaly, such as those occurring in reciprocating compressors, may introduce the very same sideband harmonics in the stator current spectrum while the rotor shaft is not influenced at all. In other words, the second type of mechanical load anomaly does not produce any rotor fault inside the motor itself [39, 40]. This leads to a major drawback of the MCSA approach: when analyzing the sideband harmonics in a stator current spectrum, a rotor fault condition can be confused with these mechanical load anomalies. This is the main issue in this study, where a false indication is produced when a low-frequency oscillating load is confused with a rotor fault in the MCSA spectrum. The approach presented in this study aims to compensate for this specific drawback.


Fig. 1 The SRM motor at MSC

3 Introducing the Case Study Mobarakeh Steel Company (MSC), the largest steel producer in the MENA region, operates numerous electric motors, most significantly induction motors with various working scenarios, environmental conditions, ages and power ratings. The importance of CbM and CM techniques at MSC, specifically for induction motors, has urged engineers and operators to focus on novel approaches in order to run a comprehensive condition monitoring (CM) framework for the motors, to modify the present structure, which is mostly based on the MCSA approach, and to tackle its probable shortcomings. Several applications at MSC face oscillating loads, all of which highlight the drawbacks of MCSA in those industrial cases. IPSA is the proposed solution for the MSC case. The data derived from a Stand Reversing Mill (2 MW SRM motor) at the Cold Rolling Plant at MSC, shown in Fig. 1, are used in this paper to investigate the effectiveness of the proposed approach. Table 1 presents the characteristics of the introduced case study.

4 Computer Simulation and Results In this section, the mathematical model of induction motors based on the direct-quadrature (d-q) framework of the machine is simulated in the MATLAB Simulink environment. Figure 2 depicts a brief view of the Simulink model used in this study.

Table 1 The SRM motor characteristics

Parameter           Value     Unit
Rated power         2         MW
Power factor        0.87
Rated frequency     50        Hz
Rated voltage       1100      V
Number of poles     4
Stator Rs           0.0061
Rotor Rr            0.0067
Stator Xls          0.1071
Rotor Xlr           0.0593

Fig. 2 The schematic MATLAB Simulink model

In order to investigate the false indications produced by the presence of oscillating torques in an induction motor, a healthy motor in oscillating torque condition is simulated in the first stage. The time-varying oscillating torque of the SRM motor introduced in the previous section is applied to the simulated model. Figure 3 shows the rotor shaft speed while Fig. 4 represents the shaft torque of the motor. All three spectra (current, active and reactive power) are illustrated in Figs. 5, 6, 7, 8, 9 and 10, for the two scenarios of a healthy and a faulty motor, both with oscillating loads. In the steady-state working condition of the motor, the stator current of the motor driving an oscillating load is illustrated in the frequency domain in Fig. 5. It is shown


Fig. 3 The rotor shaft speed (m/s) versus time

Fig. 4 The rotor shaft torque (N m) versus time

Fig. 5 Stator current spectrum of the healthy motor in time varying load condition


Fig. 6 Frequency spectrum of instantaneous active power for the healthy motor in time varying load condition

Fig. 7 Frequency spectrum of instantaneous reactive power for the healthy motor in time varying load condition

that only the electrical working frequency of the motor, 50 Hz, is visible in the stator current spectrum, together with the lateral sidebands related to the time-varying torque. The spectra of the instantaneous active and reactive powers are depicted in Figs. 6 and 7. It is worth mentioning that all values in the figures in this subsection are normalized to the maximum value of the corresponding spectrum. It is shown that a sideband at 6.528 Hz is present in both the current and the active power spectra, in both healthy and faulty conditions and with similar values, indicating that load oscillation components are present in both signals. This is different in the case of the reactive power


0.1 0.09 0.08

Pfaulty(db)

0.07 0.06 0.05

X: 6.528 Y: 0.04035

0.04 0.03 0.02 0.01 0

2

4

6

8

10

12

14

f

Fig. 9 Frequency spectrum of instantaneous active power for the faulty motor in time varying load condition

spectrum, where the 2.083 Hz component pertains to the rotor fault and not to the load oscillation. In the second stage, a defective motor is simulated with a broken rotor bar defect. The fault frequency components of this defect are given in (10).

f_{brb} = (1 \pm 2ks)\, f_s \qquad (10)

where f_s denotes the line frequency, s represents the motor slip and k is an integer.
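As a small numerical illustration of (10) for a 50 Hz supply, the sketch below evaluates the first two pairs of broken rotor bar sidebands; the slip value is an assumed example (chosen so that the low-frequency offset 2ksf_s is close to the 2.08 Hz component mentioned above), not a measured figure.

fs_line = 50.0      # supply frequency (Hz)
slip = 0.0208       # assumed slip, for illustration only

for k in (1, 2):
    offset = 2 * k * slip * fs_line              # low-frequency component seen directly in p(t)/q(t)
    lower = fs_line - offset
    upper = fs_line + offset
    print(f"k={k}: offset {offset:.2f} Hz, current sidebands at {lower:.2f} Hz and {upper:.2f} Hz")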


Fig. 10 Frequency spectrum of instantaneous reactive power for the faulty motor in time varying load condition

With the defective motor simulated, Fig. 8 illustrates the stator current spectrum of the faulty motor in oscillating load condition. The f_{brb} frequencies related to the simulated fault can be seen in this figure. The instantaneous active and reactive powers in the frequency domain are represented in Figs. 9 and 10. It is shown that the f_f component is present in the current spectrum. In other words, the fault component is transmitted to the current signal; so in case of simultaneous load oscillation and rotor faults, the current signal cannot assist in distinguishing the two frequency sidebands. Besides, although the current spectrum sidebands in the two mentioned conditions have different values, the presence of noise or other types of electromagnetic disturbance may alter the amplitudes of the sidebands, which further encourages false indications, as the f_f component is present in the current spectrum in both cases of a faulty rotor and an oscillating load. On the other hand, since the load oscillation frequency components are absent in the instantaneous reactive power spectrum and the rotor fault-related frequency components f_f are absent in the active power spectrum, the fault can be identified in the motor via this approach. So in case of unclear results, both instantaneous power spectra can be checked, and depending on which spectrum the specific sideband appears in, it can be determined whether it is related to the oscillating load or to the rotor fault. According to the formulation and the simulation presented in this study, it is clearly shown that the instantaneous power spectrum method has advantageous features compared to MCSA. This is due to the fact that both fault- and load oscillation-related sidebands are present in the stator current, which makes discrimination of these two cases difficult when they occur simultaneously. The advantage of the instantaneous active and reactive power spectra in this case is that the fault-related frequency can be fully distinguished, which assists in the diagnosis of the defect in (prevalent) simultaneous problems. This study, incorporating a detailed computer simulation with technical data from an industrial case, clearly highlights the


advantage of instantaneous power spectrum analysis in such specific cases, which are in fact quite prevalent in most large industries like MSC, studied as the case study in this research.

5 Conclusion This study assesses motor current signature analysis (MCSA), the most prevalent fault detection and diagnosis method for induction motors, and investigates its drawbacks when facing time-varying loading conditions. It is shown that MCSA has serious shortcomings due to the false indications resulting from the presence of rotor faults and load oscillations at similar frequencies. In order to prevent such false indications in the MCSA spectrum, a new approach taking both current and voltage signals into account is studied. Instantaneous power signature analysis (IPSA) is the solution for cases where the load varies with time. A MATLAB Simulink simulation is performed in this study to compare both approaches for a 2 MW Stand Reversing Mill (SRM) motor case study at Mobarakeh Steel Company (MSC) in Iran, focusing on its actual data and incorporating low-frequency load torque oscillations. The research investigates the advantages and disadvantages of the methods under study and highlights the superiority of the presented approach over the one more prevalent in industry. Based on the derived results, it is shown that IPSA is more effective than MCSA in discriminating load-varying conditions from rotor faults in induction motors, since it delivers reliable results for fault detection and diagnosis. Acknowledgements The authors would like to deeply thank the Rotary Machinery and Power Equipment Technical Inspection Office and Cold Rolling Mill Plant at Mobarakeh Steel Company, Isfahan, Iran, for provision of information, technical infrastructure and access to the required data.

References 1. Li B, Chow MY, Tipsuwan Y, Hung JC (2000) Neural-network-based motor rolling bearing fault diagnosis. IEEE Trans Ind Electron 47:1060–1069 2. Tavner PJ, Ran L, Pennman J, Sedding H (2008) Condition monitoring of rotating electrical machines. Research Studies Press Ltd., Letchworth, England 3. Okoro IO (2005) Steady and transient states thermal analysis of a 7.5-kW squirrel-cage induction machine at rated-load operation. IEEE Trans Energy Convers 20:730–736 4. Braun JM, Brown G (1990) Operational performance of generator condition monitors: comparison of heated and unheated ion-chambers. IEEE Trans Energy Convers 5:344–349 5. Thomson WT, Fenger M (2001) Current signature analysis to detect induction motor faults. IEEE Ind Appl Mag 7:26–34 6. Intesar A (2008) Investigation of single and multiple faults under varying load conditions using multiple sensor types to improve condition monitoring of induction machines 7. Choi S, Akin B, Rahimian MM, Toliyat HA (2001) Implementation of fault diagnosis algorithm for induction machines based on advanced digital signal processing techniques. IEEE Trans Ind Electron 58:937–948


8. Siyambalapitiya DJT, McLaren PG (1990) Reliability improvement and economic benefits of on-line monitoring system for large induction machines. IEEE Trans Ind Appl 26:1018–1025 9. Nandi S, Toliyat HA, Li X (2005) Condition monitoring and fault diagnosis of electrical motors—a review. IEEE Trans Energy Convers 20:719–729 10. Benbouzid MEH (2000) A review of induction motors signature analysis as medium for fault detection. IEEE Trans Ind Electron 47:984–993 11. Tandon N, Yadava GS, Ramakrishna KM (2007) A comparison of some condition monitoring techniques for the detection of defect in induction motor ball bearing. Mech Syst Signal Process 21:244–256 12. Tavner PJ (2008) Review of condition monitoring of rotating electrical machines. IET Electr Power Appl 2:215–247 13. Stack JR, Habetler TG, Harley RG (2004) Fault classification and fault signature production for rolling element bearings in electric machines. IEEE Trans Ind Appl 40:735–739 14. Sun Q, Tang Y (2002) Singularity analysis using continuous wavelet transform for bearing fault diagnosis. Mech Syst Signal Process 16:1025–1041 15. Schoen RR, Habetler TG (1997) Evaluation and implementation of a system to eliminate arbitrary load effects in current-based monitoring of induction machines. IEEE Trans Ind Appl 33:1571–1577 16. Frosini L, Bassi E (2010) Stator current and motor efficiency as indicators for different types of bearing faults in induction motors. IEEE Trans Ind Electron 57:244–251 17. Blodt M, Granjon P, Raison B, Rostaing G (2008) Models for bearing damage detection in induction motors using stator current monitoring. IEEE Trans Ind Electron 1:383–388 18. Jung JH, Lee JJ, Kwon BH (2006) Online diagnosis of induction motor using MCSA. IEEE Trans Ind Appl 53:1842–1852 19. Stack JR, Habetler TG, Harley RG (2005) Experimentally generating faults in rolling element bearing via shaft current. IEEE Trans Ind Appl 41:25–29 20. Randall RB, Antoni J (2011) Rolling element bearing diagnostics—a tutorial. Mech Syst Signal Process 25:485–520 21. Jardine AKS, Lin D, Banjevic D (2006) A review on machinery diagnostics and prognostics implementing condition-based maintenance. Mech Syst Signal Process 20:1483–1510 22. Bellini A et al (2002) On-field experience with on-line diagnosis of large induction motors cage failures using MCSA. IEEE Trans Ind Appl 38(4):1045–1053 23. Lee S et al (2013) Evaluation of the influence of rotor axial air duct design on condition monitoring of induction motors. IEEE Trans Ind Appl 49(5):2024–2033 24. Culbert IM, Rhodes W (2007) Using current signature analysis technology to reliably detect cage winding defects in squirrel-cage induction motors. IEEE Trans Ind Appl 43(2):422–428 25. Leith D, Rankin D (1991) Real time expert system for identifying rotor faults and mechanical influences in induction motor phase current. In: Proceeding of fifth international conference on electric machines and drives IET, London, UK, 11–13 Sept 1991, pp. 46–50 26. Bellini A, Filippetti F, Franceschini G, Tassoni C, Kliman GB (2001) Quantitative evaluation of induction motor broken bars by means of electrical signature analysis. IEEE Trans Ind Appl 37(5):1248–1255 27. Liu Z, Yin X, Zhang Z, Chen D, Chen W (2004) Online rotor mixed fault diagnosis way based on spectrum analysis of instantaneous power in squirrel cage induction motors. IEEE Trans Energy Convers 19(3):485–490 28. Didier G, Ternisien E, Caspary O, Razik H (2006) Fault detection of broken rotor bars in induction motor using a global fault index. 
IEEE Trans Ind Appl 42(1):79–88 29. Faiz J, Ojaghi M (2009) Instantaneous-power harmonics as indexes for mixed eccentricity fault in mains-fed and open/closed-loop drive connected squirrel-cage induction motors. IEEE Trans Ind Electron 56(11):4718–4726 30. Legowski SF, Sadrul Ula AHM, Trzynadlowski AM (1996) Instantaneous power as a medium for the signature analysis of induction motors. IEEE Trans Ind Appl 32(4):904–909 31. Benbouzid MEH, Kliman GB (2003) What stator current processing-based technique to use for induction motor rotor faults diagnosis? IEEE Trans Energy Convers 18(2):238–244


32. Nemec M, Drobniˇc K, Nedeljkovi´c D, Fišer R, Ambrožiˇc V (2010) Detection of broken bars in induction motor through the analysis of supply voltage modulation. IEEE Trans Ind Electron 57(8):2879–2888 33. Zagirnyak MV, Mamchur DG, Kalinov AP (2010) Elimination of the influence of supply mains low-quality parameters on the results of induction motor diagnostics. In: Proceeding of XIX international conference on electrical machines—ICEM, Rome, Italy 34. Salles G, Filippetti F, Tassoni C, Crellet G, Franceschini G (2000) Monitoring of induction motor load by neural network techniques. IEEE Trans Power Electron 15(4):762–768 35. Kral C, Habetler TG, Harley RG (2004) Detection of mechanical imbalances of induction machines without spectral analysis of time domain signals. IEEE Trans Ind Appl 40(4):1101– 1106 36. Shoen RR, Lin BK, Habetler TG, Schlag JH, Farag S (1995) An unsupervised, on-line system for induction motor fault detection using stator current monitoring. IEEE Trans Ind Appl 31:1280–1286 37. Maier R (1992) Protection of squirrel-cage induction motor utilizing instantaneous power and phase information. IEEE Trans Ind Appl 28:376–380 38. Obaid RR, Habetler TG (2003) Effect of load on detecting mechanical faults in small induction motors. In: Proceeding of IEEE SDEMPED, Atlanta, GA, pp 307–311 39. Long W, Habetler TG, Harley HG (2007) Review of separating mechanical load effects from rotor faults detection in induction motors. In: Proceeding of IEEE SDEMPED, pp 221–225 40. Drif M, Benouzza N, Bendiabdellah A, Dente JA (2001) Induction motor load effect diagnostic utilizing instantaneous power spectrum. In: Proceeding of 9th European conference on power electronic applications

Analysis of Unbalanced Magnetic Pull in a Multi-physic Model of Induction Machine with an Eccentric Rotor 2nd World Congress on Condition Monitoring (Singapore 2–5 December 2019) Xiaowen Li, Adeline Bourdon, Didier Rémond, Samuel Koechlin, and Dany Prieto Abstract Based on a multi-physic model of an asynchronous electrical machine with a strong electro-magneto-mechanical interaction, the Unbalanced Magnetic Pull (UMP) generated from the uneven air-gap due to the rotor eccentricity is calculated and discussed. The proposed model includes strong couplings between mechanical, magnetic, and electrical behaviors, in particular the relationship between angle sampling and time sampling of the rotor position, which introduces the instantaneous angular speed. Simulations with different input static eccentricity values are investigated in rated operation, and the effects of UMP on the dynamic behavior of the whole system are analyzed. Vibration analysis is performed at the supports to extract information in vibration signals for eccentricity monitoring. Keywords Unbalanced magnetic pull (UMP) · Asynchronous electrical machine · Eccentric rotor · Rotor center orbit · Multi-physic model

1 Introduction Unbalanced Magnetic Pull (UMP) is a radial electromagnetic force generated in the uneven air-gap of an electrical machine when a rotor eccentricity exists. Usually, it is ignored in the vibration analysis due to the difficulty of its measurement. However, once it is produced, it pulls the rotor toward the direction of the minimum air-gap and continues to worsen the eccentric situation. Noise and vibration are generated inside the motor, which increase the bearing wear and other NVH problems on the external structure. In industry, the motor sometimes needs to be installed on a


poorly stable structure like a cantilever or a long flexible shaft. Under the effect of gravity, the bending of the shaft increases the rotor eccentricity level and potentially leads to rotor–stator contact during operation, which may destroy the windings or the motor's internal structure. Considering that an electrical machine is a multi-physic system consisting of three different fields that operate together and interact with each other, a multi-physic analysis model is established to study the effects of UMP on the vibration behavior of an induction machine with an eccentric rotor. Particular attention is paid to the rotor angular position, which is considered as an output without any assumption on its time history. This relationship offers the capability to describe precisely both cyclic phenomena (components with discrete geometry in rotation) and time phenomena (such as resonances in mechanical parts). In this paper, the simulation results for a 10% eccentricity of the average air-gap distance are discussed first to present the effect of UMP on the whole system. Afterwards, several simulation results for different eccentricity values are compared to highlight the linear variation with small eccentricities and the nonlinear effects appearing for large eccentricity values. The rotor center orbit is studied to analyze the stability of the system. The connection between the support vibration and UMP is also analyzed at the end of the paper to explain one of the important vibration sources in electrical machines and how to extract information in vibration signals for eccentricity monitoring.

2 Method of Analysis 2.1 Multi-physic Model Fourati et al. have proposed a multi-physic model of a 3-phase induction machine based on an angular approach in [1]. In the present paper, this electro-magneto-mechanical interaction is reinforced by adding the coupling of the UMP (F_{emx}, F_{emy}) generated inside the motor due to the air-gap eccentricity. The modified strong multi-physics coupling is described in Fig. 1.

2.2 Effective Air-Gap Distance Considering the rotor eccentricity, the air-gap distance around the periphery of the rotor is no longer uniform. In most of the literature, it is calculated by an approximate expression as a function of the shaft rotation angle, as in [2]. Only the rotor whirling motion is considered, so the rotor center has no variation in terms of rotation angle. Hence, in this paper, a formula for the instantaneous effective air-gap distance as a function of the rotor center coordinates is defined in (1) according to Fig. 2.


Fig. 1 Multi-physics couplings

Fig. 2 Calculation of instantaneous air-gap associated with the stator tooth i

e_i(x_r, y_r) = \sqrt{R_s^2 - 2R_s\left(\cos\theta_{si}\cdot x_r + \sin\theta_{si}\cdot y_r\right) + x_r^2 + y_r^2} - R_r \qquad (1)

where R_s and R_r are the radii of the stator inner ring and rotor outer ring, respectively, θ_{si} is the angular position of each stator tooth, and x_r and y_r are the coordinates of


the rotor center. This is the effective air-gap distance between each pair of stator and rotor teeth, which is used with the Ostovic model from [9] to calculate the air-gap permeance.
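Equation (1) translates directly into code. The Python sketch below evaluates the effective air-gap for every stator tooth at a given rotor-centre position; the geometry (stator and rotor radii, number of stator teeth) is a placeholder assumption used only to exercise the formula, not the geometry of the studied machine.

import numpy as np

def effective_air_gap(xr, yr, Rs, Rr, n_s):
    # Effective air-gap e_i(xr, yr) of Eq. (1) for every stator tooth i.
    theta_si = 2 * np.pi * np.arange(n_s) / n_s          # angular position of each stator tooth
    return np.sqrt(Rs**2 - 2 * Rs * (np.cos(theta_si) * xr + np.sin(theta_si) * yr)
                   + xr**2 + yr**2) - Rr

# Placeholder geometry: 48 stator teeth, 0.5 mm mean air-gap, 10% static eccentricity along x.
Rs, Rr, n_s = 80.0e-3, 79.5e-3, 48
gap0 = Rs - Rr
e = effective_air_gap(0.1 * gap0, 0.0, Rs, Rr, n_s)
print(e.min(), e.max())    # narrowest gap on the +x side, widest on the opposite side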

2.3 Calculation of UMP In contrast to the Maxwell stress tensor method used in references [2–7], in our model the UMP is obtained by the principle of virtual work [1, 5, 8, 9], yielding a more realistic UMP value in a simpler manner. Each component of the UMP is calculated by deriving the magnetic co-energy along the x and y directions according to (2) and (3).

F_{emx} = \frac{1}{2}\sum_{i=1}^{n_s}\sum_{j=1}^{n_r} \Phi_{ij}^2 \cdot \frac{1}{p_{ij}^2} \cdot \frac{\partial p_{ij}}{\partial x} \qquad (2)

F_{emy} = \frac{1}{2}\sum_{i=1}^{n_s}\sum_{j=1}^{n_r} \Phi_{ij}^2 \cdot \frac{1}{p_{ij}^2} \cdot \frac{\partial p_{ij}}{\partial y} \qquad (3)

where n_s and n_r are the stator and rotor tooth numbers, respectively, Φ_{ij} is the magnetic flux transferred in the permeance network of the air-gap area, and p_{ij} is the air-gap permeance between each pair of stator and rotor teeth.
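The structure of the computation in (2) and (3) can be sketched as follows, with the permeance derivatives approximated by central finite differences with respect to the rotor-centre coordinates. The permeance model used here (a constant divided by the local air-gap, identical for all rotor teeth) is a deliberately crude placeholder rather than the Ostovic model used in the paper, so the numerical output only illustrates how the double sum is evaluated.

import numpy as np

def permeance_matrix(xr, yr, Rs=80.0e-3, Rr=79.5e-3, n_s=48, n_r=40, k=1.0e-5):
    # Placeholder air-gap permeance p_ij between stator tooth i and rotor tooth j,
    # taken as a constant divided by the local air-gap of Eq. (1) at the stator-tooth angle.
    theta = 2 * np.pi * np.arange(n_s) / n_s
    gap = np.sqrt(Rs**2 - 2 * Rs * (np.cos(theta) * xr + np.sin(theta) * yr)
                  + xr**2 + yr**2) - Rr
    return np.repeat((k / gap)[:, None], n_r, axis=1)     # shape (n_s, n_r)

def ump(flux, xr, yr, h=1.0e-7):
    # UMP components following Eqs. (2)-(3): 0.5 * sum(flux**2 / p**2 * dp/dx),
    # with dp/dx and dp/dy approximated by central finite differences.
    p0 = permeance_matrix(xr, yr)
    dp_dx = (permeance_matrix(xr + h, yr) - permeance_matrix(xr - h, yr)) / (2 * h)
    dp_dy = (permeance_matrix(xr, yr + h) - permeance_matrix(xr, yr - h)) / (2 * h)
    fx = 0.5 * np.sum(flux**2 / p0**2 * dp_dx)
    fy = 0.5 * np.sum(flux**2 / p0**2 * dp_dy)
    return fx, fy

# Illustrative call: the same MMF across every tooth pair (flux = permeance * MMF),
# so that the flux concentrates in the narrow gap, with a 10% eccentricity along x.
mmf = 50.0
xr = 0.1 * 0.5e-3
flux = mmf * permeance_matrix(xr, 0.0)
fx, fy = ump(flux, xr, 0.0)
print(fx, fy)    # fx > 0: net pull toward the narrowest gap; fy is close to zero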

3 Results and Discussions 3.1 Simulation Results of the Case with 10% Ee Static Eccentricity A 10% static eccentricity of the average air-gap distance is introduced along the horizontal axis (x axis) to perform a simulation in rated operation. In order to highlight the strong electro-magneto-mechanical interaction of this multi-physic model, called “Model EMM”, a similar model known as “Model EM” without mechanical coupling is established with a fixed rotor center.

3.1.1 Variation of UMP

It can be seen in Fig. 3 that, with an input static eccentricity, the two models generate two UMP components that fluctuate around two constant values. It is reasonable that


Fig. 3 Variation of UMP as a function of shaft position (Left: Model EMM; Right: Model EM)

the x component is bigger than the y component, because the eccentricity is set up in the x axis direction. However, the average value of the y component is not zero, which means that the UMP resultant force does not point toward the narrowest air-gap but rotates by a small angle from it. According to [10], this can be explained by the effects of the compensation currents induced in the rotor cage bars when facing the alternation of the large and small air-gap. If the two model results are compared, even though their average values are almost the same, the amplitudes from Model EM are bigger than those simulated with Model EMM. This illustrates that without a mechanical coupling, the UMP is overestimated.

3.2 Variation of UMP with Different Eccentricities

3.2.1 With a Relatively Small Eccentricity

The polar diagram of UMP is obtained by plotting the variation of F_y as a function of F_x in polar coordinates. As shown in Fig. 4, the arrows (solid lines) indicate the average values of (F_x, F_y), while the trajectory at the end of each arrow (dotted line) represents the fluctuation over the last revolution. The magnitude of the UMP increases proportionally with the input static eccentricity while its offset angle hardly changes. The fluctuation in the case of 40% Ee is larger than the others because of the strong modulation phenomenon.

3.2.2 With a Relatively Large Eccentricity

At rated operation, once the eccentricity exceeds 40% Ee, nonlinear effects appear in all the simulation results. As shown in Fig. 5, unlike the variation of UMP with 40% Ee, a strong pulsation emerges in the case with 50% Ee. The



Fig. 4 Polar diagram of UMP with different eccentricities in rated operation


Fig. 5 Variation of UMP as a function of shaft position (Left: 40% Ee; Right: 50% Ee)

global magnitude increases strongly even though the eccentricity is amplified by only a factor of 1.25. Meanwhile, the F_y component fluctuates at almost the same level as the F_x component. That is why, in Fig. 7, the rotor center orbit for 40% Ee converges to a small circle close to the initial rotor center position, whereas the rotor center orbit for 50% Ee rotates around the initial position with a relatively large radius. In the angular spectrum of the UMP magnitude given in Fig. 6, in addition to the even harmonics of the supply frequency, modulations between the harmonics of f_t and the supply frequency harmonics appear in the case of 50% Ee. The origin of f_t is not yet explained, but it increases with the rotation speed. This

Fig. 6 Angular spectrum of UMP magnitude (Left: 40% Ee; Right: 50% Ee)

Fig. 7 Rotor center orbit in xoy section

strong modulation phenomenon also appears in other simulation results with 50% Ee; for example, the electromagnetic torque fluctuates with a larger amplitude. This potentially increases the risk of instability of the whole system.

3.3 Vibration Analysis from the Support Acceleration

Based on the simulation with a 10% Ee input static eccentricity, the translation acceleration of support 1 from the mechanical part is analyzed along two directions. In order to study their periodicity, the angular spectra are compared in Fig. 8. In the three spectra, all the important characteristic frequencies can be identified, including 2f_θs = 2.032 ev/rev, N_r = 30 ev/rev, and the modulation N_r ± 2f_θs caused by the input eccentricity. Moreover, in the spectra of the UMP and of the acceleration along the x axis, “Accx1,” further frequencies such as the harmonics


Fig. 8 Angular spectrum of different results (Top: UMP magnitude; Middle: translation acceleration of support 1 along x; Bottom: translation acceleration of support 1 along y)

of 6f_θs = 6.096 ev/rev can be recognized, although the spectrum of the UMP is richer than that of Accx1, since Accx1 is an indirect signal excited by the UMP. Furthermore, by comparing the amplitude of the 30 ev/rev component between the spectra of Accx1 and Accy1, it is simple to identify in which direction the static eccentricity appears.
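For readers who wish to reproduce this kind of angular (order) spectrum, the following sketch assumes a signal resampled uniformly in the shaft-angle domain; the component frequencies and amplitudes in the example are placeholders, not the simulation results of this paper.

```python
import numpy as np

def angular_spectrum(signal, samples_per_rev):
    """Amplitude spectrum of a signal sampled uniformly in the shaft-angle domain.
    The frequency axis is expressed in events per revolution, as used in Fig. 8."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal - np.mean(signal))) * 2.0 / n
    orders = np.fft.rfftfreq(n, d=1.0 / samples_per_rev)   # events/revolution
    return orders, spec

# Hypothetical example: 512 samples per revolution over 100 revolutions,
# with components at 2.032 and 30 events/rev similar to the simulated spectra
spr, revs = 512, 100
theta = np.arange(spr * revs) / spr                         # shaft angle (revolutions)
acc = 0.3 * np.sin(2*np.pi*2.032*theta) + 1.3 * np.sin(2*np.pi*30*theta)
orders, amp = angular_spectrum(acc, spr)
```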

4 Conclusions

By adding the coupling of the UMP with an unfixed rotor center, a multi-physic induction machine model based on the angular approach is developed in terms of electro-magneto-mechanical interactions. The effective air-gap distance is defined as a function of the rotor center coordinates to calculate the UMP for different rotor eccentricities. The variation of UMP is more reasonable when compared with the


results from the model without mechanical coupling. An offset angle of the UMP resultant force from the direction of the narrowest air gap is detected; it does not change for relatively small eccentricities, while the UMP magnitude increases linearly with the eccentricity. In the case of a relatively large eccentricity, nonlinear effects appear in all the simulation results and additional harmonics show up in the angular spectra; the whole machine tends to become unstable. The spectra of the UMP and of the translation acceleration of support 1 demonstrate the possibility of detecting the rotor eccentricity from vibration analysis of the support. It should be noted that, owing to the tangential component of the UMP with respect to the narrowest air-gap direction, the vibrations can be detected in both directions.

Acknowledgements The authors gratefully acknowledge the support of the Association Nationale Recherche Technologie (ANRT) of France and of Nidec Leroy-Somer.

References

1. Fourati A et al (2017) Angular-based modeling of induction motors for monitoring. J Sound Vib 395:371–392
2. Guo D, Chu F, Chen D (2003) The unbalanced magnetic pull and its effects on vibration in a three-phase generator with eccentric rotor. J Sound Vib 254(2):297–312
3. Smith AC, Dorrell DG (1996) Calculation and measurement of unbalanced magnetic pull in cage induction motors with eccentric rotors. Part 1: analytical model. IEE Proc Electr Power Appl 143(3):193–201
4. Han X, Palazzolo A (2016) Unstable force analysis for induction motor eccentricity. J Sound Vib 370:230–258
5. Holopainen TP (2004) Electromechanical interaction in rotordynamics of cage induction motors. VTT Publications, Number 543, p 543
6. Mahyob A et al (2007) Induction machine modelling using permeance network method for dynamic simulation of air-gap eccentricity. In: 2007 European conference on power electronics and applications, EPE
7. Boy F, Hetzler H (2019) A co-energy based approach to model the rotordynamics of electrical machines. In: Mechanisms and machine science, vol 63
8. Xu X, Han Q, Chu F (2016) Nonlinear vibration of a generator rotor with unbalanced magnetic pull considering both dynamic and static eccentricities. Arch Appl Mech 86(8):1521–1536
9. Ostovic V (1989) Dynamics of saturated electric machines, 1st edn. Springer, New York
10. Saint-Michel J (2001) Influence of rotor eccentricity on IR motor C, 1st edn. Nidec Leroy-Somer, France

Bearings Fault Diagnosis in Electromechanical System Using Transient Motor Current Signature Analysis

Alok Verma, Haja Kuthubudeen, Viswanathan Vaiyapuri, Sivakumar Nadarajan, Narasimalu Srikanth, and Amit Gupta

Abstract Bearing faults may lead to expensive and catastrophic failures, which affect the reliability of the power drivetrain in an electromechanical system. To reduce the impact of bearing failures, accurate analysis and timely diagnosis are required to improve the reliability of electromechanical systems. In this work, an enhanced transient current signature analysis has been investigated for an aircraft application using a permanent magnet synchronous motor with bearing defects. The motor current under steady-state and transient conditions is acquired from an experimental test-rig with bearing defects at different loading and speed levels. The acquired signals are first investigated using frequency domain analysis and then compared with time–frequency domain analysis such as wavelet analysis. The discrete wavelet transform is used to analyze the calculated residual current for bearing fault diagnosis. The proposed time–frequency-based technique is able to provide useful features that characterize the condition of the bearings from the transient current. Further, a back-propagation neural network is applied to the calculated features and distinguishes the defective bearing from the healthy bearing with high accuracy.

Keywords Condition monitoring · Bearing defects · Transient analysis · Frequency domain analysis · Wavelets · Neural network

A. Verma (B) · H. Kuthubudeen Rolls-Royce @ NTU Corporate Lab, Nanyang Technological University, Singapore, Singapore e-mail: [email protected] V. Vaiyapuri · S. Nadarajan · A. Gupta Rolls-Royce Singapore Pte Ltd., Singapore, Singapore N. Srikanth Energy Research Institute @ NTU, Nanyang Technological University, Singapore, Singapore © Springer Nature Singapore Pte Ltd. 2021 L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_41


1 Introduction

Electrical machines and associated electromechanical components are becoming an integral part of aircraft engines for onboard electric power generation. Faults in the electromechanical system may lead to significant damage to the machine and the associated system. Techniques such as condition monitoring and fault diagnosis are used in modern industries to overcome these problems. Condition monitoring is key to improving the reliability and availability of electromechanical systems by detecting and diagnosing faults at an early stage to prevent failures. Faults in electromechanical systems can be classified as mechanical or electrical faults. According to IEEE Standard 493-1997 and the statistical study of EPRI, bearing defects are the most common failures in rotating machines, constituting more than 40–45% of all defects [1]. A large number of papers have been published in the field of fault diagnosis of rotating machines, and most of the work has been done on vibration monitoring of the machine to detect and classify the types of fault. Motor current signature analysis (MCSA) is a popular technique that uses current harmonics to detect and diagnose bearing faults, because the effect of faults is reflected in the harmonics of the stator current. MCSA is non-invasive and inexpensive, and it does not require extra sensors because the current is already monitored for protection and control purposes. Schoen et al. [2] first showed that bearing defects produce characteristic vibration frequencies and modulate the air-gap flux density, so that stator current frequencies are produced which can be used for fault detection. Later, a comparison of vibration and current monitoring techniques was given by Benbouzid et al. [3]. Blodt et al. [4] proposed a novel analytical model for rolling-element bearing faults of electrical motors and their influence on the stator current. Yazici and Kliman [5] presented a time–frequency domain method for the diagnosis of broken rotor bars and bearing faults using current, torque, and speed signals from a 3/4 HP induction motor. An experimental study on a permanent magnet synchronous motor (PMSM) also addressed the diagnosis of bearing faults using the short-time Fourier transform (STFT) and the Gabor spectrogram of current signals [6]. Generally, time domain, frequency domain, and time–frequency domain techniques are used in the literature for bearing fault diagnosis [7, 8]. Techniques based on information entropy and artificial intelligence have also been used for fault diagnosis [9, 10]. The transient current signal contains non-stationary fundamental components, so time-domain and frequency-domain analyses are not useful for such non-stationary waveforms [11, 12]. Therefore, techniques based on time–frequency domain analysis, such as the wavelet transform, can be used to analyze bearing defects from non-stationary current signals [13]. Inspired by this work, the present study focuses on the bearing fault diagnosis of rotating electromechanical systems using transient current signature analysis for an aircraft application. To the authors' knowledge, the presented work is the first of its kind in which the diagnosis algorithm is successfully implemented with experimental analysis for the diagnosis of bearing faults using transient current signature analysis.


2 Experimental Analysis

2.1 Experimental Setup

The experimental setup along with its major components is shown in Fig. 1. It consists of a three-phase PMSM as the test machine coupled to a three-phase induction machine (the load machine) through roller bearings, a shaft, gearboxes, and a flexible coupling. The gear ratios of the respective gearboxes were adjusted such that the speed of the input shaft is almost equal to that of the output shaft. A current sensor was connected to the stator winding cable in order to collect the current signals using a 32-channel data acquisition system, as shown in Fig. 2a, b. The components and their specifications used in this experimental setup are given in Table 1, and Table 2 shows the specifications of the bearings. The healthy bearing and the outer race defective bearing before and after the experiments are shown in Fig. 3.

Fig. 1 Experimental setup

Fig. 2 a Current sensors attached to stator wiring, b data acquisition system

Table 1 Components specifications for the experimental setup

Sensors/machine    Sensitivity/specification
Current sensor     10 mV/A
DAQ                32 channels
Speed drive        32–38 A, 380–500 V
PMSM               20 HP, 50 Hz, 5000 RPM
Induction motor    20 HP, 50 Hz, 5000 RPM

Table 2 Bearing specifications

Bearing type           Angular contact ball bearing
Number of balls (n)    13
Contact angle (Φ)      40°
Pitch diameter (PD)    38.5 mm
Ball diameter (BD)     8.9 mm

Fig. 3 a Healthy bearing, b bearing with outer race defect: before experiment and c bearing with outer race defect: after experiment

2.2 Experimentation

The startup transient three-phase current signals were collected in two sets. The first set of data was acquired from the healthy system (N1) with the speed and load ranging from 800 to 1500 rpm and from 0 to 75% of full load, respectively, at equal intervals of 50 rpm and 25% of full load. Then, the bearing with an outer race defect (O1) was installed in the experimental setup to acquire the second set of current signals under the same operating conditions as the first set. The healthy system was denoted as 0 and the system with the outer race defect as 1. Each data set consisted of 60 experimental responses for the transient condition, so the total number of experimental responses in the two sets is 120. The sampling rate for each signal was set to 10 kS/s. The key test cases for the experiments are given in Table 3.

Table 3 Controlling factors and levels for the experiments

    Controlling factor        Level
A   Rotational speed (rpm)    800–1500
B   Loading condition (%)     0–75
C   Bearing conditions        0 and 1

3 Methodology

The method used in this paper consists of five steps: experimental test-rig arrangement, data collection, data pre-processing, feature extraction, and classification. The acquired current signals are analyzed in the time–frequency domain using the discrete wavelet transform. The wavelet features of the different bearing conditions, operating under different speeds and loads, are fed as inputs to a back-propagation neural network (BPNN). A thorough description of the procedure is given in Sects. 3.1 and 3.2. The acquired signals are first investigated using frequency domain analysis and then compared with time–frequency domain analysis such as wavelet analysis. The fast Fourier transform (FFT) of steady-state and transient current signals for the healthy and outer race defective bearing conditions is shown in Fig. 4a–d. From Fig. 4a, b, it can be observed that Fig. 4b has a prominent 1X component and that f_O is clearly visible compared with Fig. 4a. f_O can be expressed as

$$f_O = \frac{n f_r}{2}\left(1 - \frac{BD}{PD}\cos\Phi\right) = 267.46\ \text{Hz} \qquad (1)$$

where PD is the pitch diameter, Φ is the contact angle, n is the number of balls, BD is the ball diameter, f_r is the rotating frequency, f_O is the outer race defect frequency, and 1X, 2X, … are frequency sidebands. The spectrum shown in Fig. 4b, with f_O present, clearly indicates that the system has a bearing with an outer race defect. However, Fig. 4c, d shows the spectra of the healthy and outer race defective bearings obtained from transient current signals; here f_O is not visible at all for the outer race defect, so the spectrum fails to differentiate between the healthy and defective bearings, as shown in Fig. 4d. Transient current signals contain non-stationary fundamental components, so traditional frequency domain analysis is unable to detect faults from them, as shown in Fig. 4d. Hence, time–frequency domain analysis needs to be used for the transient conditions.
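As a quick check of Eq. (1), the sketch below computes f_O from the bearing data of Table 2 and builds a toy steady-state current spectrum; the 50 Hz shaft rotation frequency is an assumption implied by the 267.46 Hz value quoted above, and the toy current signal is not measured data.

```python
import numpy as np

def outer_race_frequency(n_balls, f_r, ball_d, pitch_d, contact_angle_deg):
    """Outer race defect frequency f_O from Eq. (1):
    f_O = (n * f_r / 2) * (1 - (BD/PD) * cos(phi))."""
    phi = np.deg2rad(contact_angle_deg)
    return 0.5 * n_balls * f_r * (1.0 - (ball_d / pitch_d) * np.cos(phi))

# Bearing of Table 2 at an assumed 50 Hz shaft rotation frequency
f_o = outer_race_frequency(n_balls=13, f_r=50.0, ball_d=8.9, pitch_d=38.5,
                           contact_angle_deg=40.0)
print(f"f_O = {f_o:.2f} Hz")        # ~267 Hz, matching the value quoted in the text

# Toy steady-state current spectrum in which f_O could be searched for (10 kS/s)
fs = 10_000
t = np.arange(0, 2.0, 1.0 / fs)
current = np.sin(2*np.pi*50*t) + 0.02*np.sin(2*np.pi*f_o*t)   # synthetic signal
spectrum = np.abs(np.fft.rfft(current)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
```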

3.1 Wavelet Transform

The wavelet transform is a well-known signal processing technique for analyzing non-stationary signals with transient behavior [13]. The continuous wavelet transform (CWT) of f(t) is obtained by integrating the signal multiplied by the scaled and


Fig. 4 FFT results for different bearing conditions at 1000 RPM and no load: a current spectrum for healthy bearing at steady-state condition, b current spectrum for outer race defective bearing at steady-state condition, c current spectrum for healthy bearing at transient state condition and d current spectrum for outer race defective bearing at transient state condition

shifted form of the wavelet function ψ(t):

$$\text{CWT}(x, y) = \frac{1}{\sqrt{|x|}}\int_{-\infty}^{\infty} f(t)\,\psi\!\left(\frac{t-y}{x}\right)\,dt \qquad (2)$$

where x is a scale parameter, which is the inverse of frequency, and y is the translation parameter. Further, the discrete wavelet transform (DWT) is a discretization of the CWT and is expressed as

$$\text{DWT}(i, j) = \frac{1}{\sqrt{2^i}}\int_{-\infty}^{\infty} f(t)\,\psi\!\left(\frac{t-2^i j}{2^i}\right)\,dt \qquad (3)$$

where the parameters i, j = 1, 2, …. High frequencies are associated with wide bandwidths, while low frequencies are associated with narrow bandwidths. Using the DWT, the lower-bandwidth and higher-bandwidth parts of the signal can be separated, which helps in adjusting the resolution at high frequencies.
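A minimal sketch of the DWT-based feature extraction, assuming the PyWavelets package; the wavelet family, decomposition level, residual smoothing length (the moving-average preprocessing described in Sect. 4), and feature definitions are illustrative choices, not the settings used by the authors.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def dwt_features(current, smoothing=50, wavelet="db4", level=3):
    """Remove the slowly varying fundamental with a moving average (the residual
    step of Sect. 4) and compute simple statistics of the DWT detail coefficients."""
    kernel = np.ones(smoothing) / smoothing
    residual = current - np.convolve(current, kernel, mode="same")
    coeffs = pywt.wavedec(residual, wavelet, level=level)   # [cA3, cD3, cD2, cD1]
    feats = {}
    for k, d in zip(range(level, 0, -1), coeffs[1:]):
        feats[f"d{k}_rms"] = np.sqrt(np.mean(d**2))
        feats[f"d{k}_energy"] = np.sum(d**2)
    return feats
```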


3.2 Artificial Neural Network

In this work, a BPNN is considered for the diagnosis of bearing faults using the transient current signature. The BPNN consists of an input layer, a hidden layer, and an output layer. The input layer receives a set of inputs and forwards the signals to the hidden layer. The hidden layer processes this information and sends it to the output layer, which finally transfers the values to an external receiver. The number of nodes in a hidden layer and the number of hidden layers are variable and depend on the convergence of the result. The weight W_rs modifies the output signal of the hidden layer from the rth node to the sth node of the output layer. The summation of the modified signals is then passed through the sigmoid function and sent to the output layer. The result at the output layer can be calculated as

$$O_{ps} = f\left(\sum_{r=0}^{m} W_{rs}\, O_{pr}\right), \quad s = 1, 2, \ldots, n \qquad (4)$$

Here, f is the sigmoid transfer function. The predicted values are compared with the desired values during the training period, and the mean square error E_p is calculated as

$$E_p = \frac{1}{n}\sum_{k=1}^{n}\left(D_{pk} - O_{pk}\right)^2 \qquad (5)$$

Here, D_pk is the desired output value and O_pk is the calculated output for the pth pattern. The detailed classification results, with training, testing, and errors, are presented in the results section.
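A hedged sketch of the BPNN classifier built with scikit-learn's MLPRegressor rather than the authors' own implementation; the 3-13-1 layout follows Table 4, while the placeholder features, labels, solver, and iteration count are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # scikit-learn, assumed available

# Hypothetical feature matrix: three inputs per sample (as in the 3-13-1 network)
# and one output (0 = healthy, 1 = outer race defect).
rng = np.random.default_rng(0)
X = rng.random((120, 3))            # placeholder for the three BPNN inputs
y = (X[:, 0] > 0.5).astype(float)   # placeholder labels, not experimental data

# One hidden layer of 13 sigmoid nodes, mirroring the 3-13-1 architecture;
# solver and iteration count are illustrative, not the paper's settings.
bpnn = MLPRegressor(hidden_layer_sizes=(13,), activation="logistic",
                    solver="adam", max_iter=500, random_state=0)
bpnn.fit(X[:90], y[:90])
mse_test = np.mean((bpnn.predict(X[90:]) - y[90:])**2)
print(f"test MSE = {mse_test:.4f}")
```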

4 Results

In this work, transient stator current signals acquired from the permanent magnet synchronous motor (PMSM) are used to calculate features with the discrete wavelet transform applied to the residual current signal. The acquired transient current signals first need to be preprocessed to remove the fundamental components. A moving average filter was used for this task, and the residual current signals, obtained by subtracting the filtered signal from the original transient signal, were used for feature extraction using the DWT, as shown in Fig. 5. The moving average filter can be calculated as

$$b_s(n) = \frac{1}{s}\sum_{k=n}^{n+(s-1)} x_s(k) \qquad (6)$$


Fig. 5 Startup transient current analysis for different bearing conditions at 1000 RPM and different loading conditions: a smooth current signal for system with healthy bearing at no load, b smooth current signal for system with outer race defective bearing at no load, c wavelet decomposition at d3 level of a healthy system at 0–75% loading and d wavelet decomposition at d3 level of a system with outer race bearing defect at 0–75% loading

where b_s is the filtered (average) value, n is the data point index, s is the smoothing factor, and x_s is the actual data point. The smoothed transient current signals extracted with the moving average filter for the healthy and faulty bearings are shown in Fig. 5a, b. Further, wavelet decomposition was applied to the residual current signal for both the healthy and faulty bearings at different loading conditions, as shown in Fig. 5c, d. From Fig. 5d, significant changes in the DWT features can


Fig. 6 a Variation of neurons in hidden layer with mean square error and b variation of experimental value with the predicted value

Table 4 Accuracy of bearing fault diagnosis on the test set N1 ∪ O1

S. No.   Network architecture   No. of iterations   Mean square error (training)   Mean square error (testing)   Prediction error (%)
1        3-10-1                 893                 0.014725                       0.013559                      12.8577
2        3-13-1                 494                 0.010054                       0.008795                      10.12103
3        3-15-1                 1069                0.01287                        0.009499                      11.2014

be observed between 150 and 200 samples for the defective bearing at loading from 0 to 75%. The calculated features are then used with the back-propagation neural network (BPNN) as a novel method to diagnose bearing defects. In the presented work, the inputs to the BPNN are the bearing condition, load, and speed, whereas the output is the bearing fault diagnosis. The BPNN was first trained and then tested for different network architectures, and the architecture 3-13-1 was considered the best, as shown in Fig. 6. The best results for the trained and tested BPNN architectures are given in Table 4. It is noticed that the maximum error for the neural network architectures is approximately ±12.85% of the experimental value and the minimum error is ±10.12%. This observation clearly shows that the BPNN-based model can be trained to predict bearing defects from the startup transient current signal with reasonably good accuracy.

5 Conclusions

In this work, the combination of DWT and BPNN is suggested for bearing fault diagnosis in an electromechanical system using startup transient current signals. The results show that frequency domain analysis is not useful for the non-stationary waveform. Therefore, a technique based on time–frequency domain analysis, such


as the DWT, is used on the residual current to analyze the bearing defects. Significant differences between the healthy and defective bearings have been observed in the extracted DWT features. The prediction accuracy of the BPNN model is close to 90%, which shows that the combination of the DWT and the BPNN model using transient current signals can be used effectively.

Acknowledgements This work was performed at the Rolls-Royce@NTU Corporate Lab under the Corp Lab@University Scheme with support from Rolls-Royce Pte Ltd. Singapore and the National Research Foundation (NRF) Singapore.

References

1. Allbrecht PF, Appiarius JC, McCoy RM, Owen EL, Sharma DK (1986) Assessment of the reliability of motors in utility applications. IEEE Trans Energy Convers EC-1(1):39–46
2. Schoen RR, Habetler TG, Kamran F, Bartheld RG (1995) Motor bearing damage detection using stator current monitoring. IEEE Trans Ind Appl 31:1274–1279
3. Benbouzid MEH, Vieira M, Theys C (1999) Induction motors faults detection and localization using stator current advanced signal processing techniques. IEEE Trans Power Electron 14:14–22
4. Blodt M, Granjon P, Raison B, Roistaing G (2008) Models for bearing damage detection in induction motors using stator current monitoring. IEEE Trans Ind Electron 55:383–388
5. Yazıcı B, Kliman GB (1999) An adaptive statistical time-frequency method for detection of broken bars and bearing faults in motors using stator current. IEEE Trans Ind Appl 35:442–452
6. Rosero J, Cusido J, Espinosa AG, Orteg JA, Romeral L (2007) Broken bearings fault detection for a permanent magnet synchronous motor under non-constant working conditions by means of a joint time frequency analysis. In: IEEE international symposium on industrial electronics (ISIE), Spain, 4–7 June 2007, pp 3415–3419
7. Immovilli F, Bellini A, Rubini R, Tassoni C (2010) Diagnosis of bearing faults in induction machines by vibration or current signals: a critical comparison. IEEE Trans Ind Appl 46:1350–1359
8. Devaney MJ, Eren L (2004) Detecting motor bearing faults. IEEE Instrum Meas Mag 7:30–50
9. Verma A, Sarangi S, Kolekar M (2016) Misalignment faults detection in an induction motor based on multi-scale entropy and artificial neural network. Electr Power Compos Syst 44:916–927
10. Verma A, Sarangi S (2015) Fault diagnosis of broken rotor bars in induction motor using multiscale entropy and backpropagation neural network. In: Intelligent computing and applications, vol 343. Springer, New Delhi, pp 393–204
11. Douglas H, Pillay P (2005) The impact of wavelet selection on transient motor current signature analysis. In: IEEE international conference on electric machines and drives (IEMDC), San Antonio, TX, USA, May 2005, pp 80–85
12. Widodo A, Yang B-S, Gu D-S, Choi B-K (2009) Intelligent fault diagnosis system of induction motor based on transient current signal. Mechatronics 19:680–689
13. Douglas H, Pillay P, Ziarani AK (2004) A new algorithm for transient motor current signature analysis using wavelets. IEEE Trans Ind Appl 40:1361–1368

SHM/NDT Technologies and Applications

Assessment of RC Bridge Decks with Active/Passive NDT

T. Shiotani, H. Asaue, K. Hashimoto, and K. Watabe

Abstract Many infrastructures are now seriously damaged. Considering the restricted maintenance budget, the strategy of prolonging their service life as long as possible has generally been adopted. To maintain those structures reasonably, their current condition must be properly evaluated by suitable inspections, for example, using NDT. The authors have therefore been studying NDT based on elastic wave approaches to quantify the damage of infrastructures. In this paper, periodic monitoring and continuous monitoring with elastic wave approaches are demonstrated. Specifically, elastic wave tomography and AE tomography are applied to an existing concrete bridge deck to interpret the damage distribution through their characteristic parameters. For continuous monitoring, newly developed super acoustic (SA) sensors and a four-channel FPGA-based edge computing unit with event-driven technology are demonstrated. Essential issues for introducing NDT into in-situ maintenance programs are summarized. As a key technology for remote monitoring, several fundamental aspects of laser-induced elastic waves are experimentally studied.

Keywords Infrastructures · NDT · Elastic wave · Tomography · SA sensor · Laser induced AE

T. Shiotani (B) · H. Asaue · K. Hashimoto Department of Civil & Earth Resources Engineering, Kyoto University, C-Cluster, Nishikyo-Ku, Kyoto, Japan e-mail: [email protected] K. Watabe Corporate Research & Development Center, Toshiba Corporation, Tokyo, Japan © Springer Nature Singapore Pte Ltd. 2021 L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_42

1 Introduction

Deterioration processes of materials/structures can be classified into several stages, namely primary, secondary, and tertiary stages. Although ideal proactive maintenance should be conducted during the primary or, at the latest, the secondary stage, before the seriously deteriorated level of the tertiary stage is reached, most of the infrastructures constructed


intensively during the rapid economic growth era have been left without proper maintenance and are now reaching their original life-span of about 50 years. Considering also the current low economic growth, which restricts investment in existing infrastructures, replacement with new structures will not be realized; instead, a life-extension policy for the existing structures, sometimes with retrofitting, is generally adopted worldwide. Even for structures already in the tertiary stage of deterioration, there are still issues to solve, for example, how to decide the further treatment of a deteriorated structure, i.e., whether to leave it as it is, repair it, or replace it. For repair, how to prioritize structures within a repair program is still under study. Quantification of the repair work and estimation of the lifetime behavior based on structural integrity are issues in progress. The authors have thus been studying snapshot monitoring for periodic inspection and continuous monitoring to give an alert of critical conditions with NDT. In this paper, as applications of NDT, bridge decks made of reinforced concrete (RC) are taken as examples, since they are among the most budget-consuming members of road infrastructures in Japan [1].

2 Snap-Shot Monitoring

In Japan, periodic inspections with close visual observation at five-year intervals have been regulated by the Road Act since 2014. The health condition of the infrastructure is thus rated every five years with the naked eye. As these ratings are based on surface information, they only indicate the condition during the final phase of deterioration, i.e., they serve corrective/reactive maintenance. To realize quantitative evaluation even during the initial phase of deterioration, methods that can evaluate the interior of the infrastructure are in great demand. It appears difficult to directly measure whole infrastructures with their properties; however, elastic wave techniques have this potential, as they provide health information on the structures through such parameters as elastic wave velocity and energy attenuation. Accordingly, the authors have been studying elastic wave techniques that enable structure owners to obtain damage or deterioration information. Specifically, elastic wave tomography and acoustic emission (AE) tomography are the candidates to quantify the damage. The basic analytical procedure and the applicability of these techniques have already been reported for elastic wave tomography [2–4] and AE tomography [5–7]. A typical result of AE tomography is introduced in the following, together with verification using core samples [7]. Two RC panels in a real 40-year-old RC bridge deck, which exhibits deterioration such as rebar corrosion due to salt attack and fatigue failure due to mobile loads, were selected. One, referred to as Panel A, is seriously deteriorated, as cobweb-like cracks are readily confirmed on the bottom surface; the other, referred to as Panel B, has no remarkable cracks on the bottom surface. The AE measurement was carried out with 12 AE sensors of 60 kHz resonance set on the bottom side of the RC bridge deck, as shown in Fig. 1. Random excitations with a hammer with a φ11 mm curvature edge were applied on the top surface of the panel to excite elastic waves.
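As an illustration of the tomography principle (not the authors' algorithm, which additionally estimates the AE source locations), the following sketch reconstructs a cell-wise velocity map from straight-ray travel times by least squares; sensor/source coordinates, grid, and travel times are assumed inputs, and cells crossed by no ray receive unreliable values.

```python
import numpy as np

def travel_time_tomography(sources, receivers, times, grid_x, grid_y, n_steps=200):
    """Minimal straight-ray travel-time tomography sketch: build the path-length
    matrix L over a rectangular cell grid by sampling points along each ray,
    then solve L s = t for the slowness s of every cell in a least-squares sense.
    sources, receivers: (n_rays, 2) arrays of coordinates; times: (n_rays,)."""
    nx, ny = len(grid_x) - 1, len(grid_y) - 1
    L = np.zeros((len(times), nx * ny))
    for k, (src, rec) in enumerate(zip(sources, receivers)):
        pts = np.linspace(src, rec, n_steps)              # points along the ray
        seg = np.linalg.norm(rec - src) / (n_steps - 1)   # length per sample step
        ix = np.clip(np.searchsorted(grid_x, pts[:, 0]) - 1, 0, nx - 1)
        iy = np.clip(np.searchsorted(grid_y, pts[:, 1]) - 1, 0, ny - 1)
        np.add.at(L[k], ix * ny + iy, seg)                # accumulate cell lengths
    slowness, *_ = np.linalg.lstsq(L, times, rcond=None)
    # Cells not crossed by any ray yield meaningless (possibly zero) slowness
    return 1.0 / slowness.reshape(nx, ny)                 # velocity map (m/s)
```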


Fig. 1 AE sensor arrangement for AE tomography (unit: mm)

Figure 2a shows the velocity tomograms obtained from the AE tomography analysis and the AE sources induced by mobile loads in service before the artificial hammer excitation was applied. Two facts can be clarified: low-velocity areas suggesting serious deterioration are found in both the seriously deteriorated and the minor-deterioration cases, and more AE sources are found for the minor-deterioration case (Panel B) than for the serious case (Panel A). This implies that the internal damage is not always consistent with that observed on the surface. Although one would expect secondary AE sources to become more active in the more deteriorated case, the smaller number of detected secondary AE sources there, together with the larger attenuation during AE propagation in the deteriorated panel, may explain this observation. In Fig. 2b, photographs of the core samples corresponding to the

(a) AE sources and velocity tomogram (m/s)

(b) Core samples retrieved from the designated points of (a)

Fig. 2 Measurement results for Panel A (left column) and Panel B (right column)

Fig. 3 Damage assessment based on AE sources and elastic wave velocity [8] (quadrant chart of AE source density, sparse–dense, versus elastic wave velocity, low–high, classifying regions as sound, minor, moderate, or severe damage)

positions designated in Fig. 2a can be found. Indeed, the areas showing low elastic wave velocity correspond well to the core condition, including visible large lateral cracks. Accordingly, damage identification becomes possible when the AE sources and the velocity distribution are combined, as shown in Fig. 3.

3 Continuous Monitoring

As a national project under the sponsorship of NEDO, super acoustic (SA) sensors and a self-powered wireless AE signal processing module have been developed. The SA sensor is a MEMS device that extends the responsive frequency to an extremely wide band between Hz and MHz [9]. The SA sensor consists of four principal parts: a sensor chip, polydimethylsiloxane (PDMS), a printed circuit board (PCB), and an Au/Cr electrode, as shown in Fig. 4. A beam is formed at the center of a Si sensor chip. The beam vibrates when acoustic emission signals propagate into the sensor. A piezoresistive area is formed at the center of the beam, and the Au/Cr electrode is patterned to supply current to the piezoresistive area. On one side of the sensor chip, silicone oil is encapsulated with a parylene film. By taking advantage of the surface tension of the silicone oil and properly limiting the size of the gap between the beam and the surrounding chip, leakage of the oil is prevented. An appropriate amount of PDMS is mounted on the parylene film to form the fixed area that receives the lower-frequency vibration. The AE signal processing system involves a wireless edge device including the SA sensors, a concentrator, and an analysis server. Data detected by the SA sensors are transmitted to the server via the concentrator. The module is installed directly on the object of interest, and continuous monitoring is executed with the aid of event-driven technology (detailed later). The appearance of the module (100 × 70 × 40 mm) is depicted in Fig. 5. The module accepts inputs from four SA sensor channels and consists


Fig. 4 Configuration of SA sensor

Fig. 5 Appearance of the AE signal processing module (left) and internal components (right)

of an analog front-end processor, a signal processing circuit, a self-powered operation circuit supplied by solar panels, and a wake-up circuit. The signal processing circuit includes a field programmable gate array (FPGA) so that parametric features of the signal waveforms can be extracted on the FPGA. An industrial, scientific, and medical (ISM) frequency band is adopted to transfer the data. The module runs on solar-generated power; however, there is a risk of power shortage when enormous amounts of high-frequency data are detected simultaneously. Therefore, event-driven technology is introduced, in which excessive accelerations caused by heavy vehicles wake the system


Fig. 6 AE activity with event-driven technology (left) and resultant AE sources (right)

up. Reasonable AE monitoring is thus implemented by the event-driven technology without missing the effective data. The SA sensors and the module have been installed on a real highway bridge deck, and continuous monitoring has been performed for almost one year. The number of wake-ups and the number of vehicle passages are shown in Fig. 6. It is found that the system wakes when large vehicles pass, and the corresponding AE activity is detected as well.
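A toy illustration of the event-driven idea described above, assuming a buffered acceleration record: the system is considered awake only for windows whose RMS exceeds a threshold; the threshold, window length, and sampling rate are illustrative assumptions, not the module's actual firmware logic.

```python
import numpy as np

def wake_up_events(accel, fs, threshold_g, window_s=0.5):
    """Return indices of windows whose RMS acceleration exceeds the threshold,
    i.e. the instants at which an event-driven system would wake up and record AE."""
    n = int(window_s * fs)
    n_windows = len(accel) // n
    rms = np.sqrt(np.mean(accel[:n_windows * n].reshape(n_windows, n)**2, axis=1))
    return np.flatnonzero(rms > threshold_g)
```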

4 Future Technologies to Introduce the NDTs into the Maintenance Program of Infrastructures

There is still work to be done before the NDTs presented here can be introduced into the maintenance program of infrastructures. Although elastic wave velocities below 3000 m/s appear to suggest serious damage of RC bridge decks, the exact scale of damage or deterioration and the resulting elastic wave velocities have no decisive relation so far. In tomography, the sensor spacing and the obtainable resolution of the tomogram (velocity distribution) are almost clarified, i.e., half of the sensor spacing is the minimal resolution for the analysis [10]. However, several issues on frequency, namely the sensors' responsive frequency and the frequency of the AE source or of the propagated AE wave, which depend on the condition of the propagation medium such as concrete, and moreover the corresponding detectable scale of damage, are not well studied. Nevertheless, the most important fact is that the damage scale of interest for any structure should be decided by the structure owner in advance of the study; once the damage scale and degree of damage of interest have been set, the appropriate frequency and the sensors' type and spacing will be uniquely determined. It is noted that these discussions have only started recently. Other issues to be discussed concern the sensors' installation. The sensors are currently in physical contact with the object of interest and are connected to the system via cables. Remote sensing, for example, air-borne sensors or laser Doppler sensors, should be developed specifically for infrastructures.


Fig. 7 Experimental overviews (left) and configuration of specimen with artificial defect (right)

Besides, the excitation of elastic waves for active monitoring plays an important role in assessing structures with tomography approaches. There are so far several ways to excite the elastic waves: secondary AE activity from existing defects stimulated by traffic loads, artificial impacts by hammers, and naturally induced impacts such as precipitation. In AE tomography, since the excitation information is unknown and is identified by a conventional AE source location algorithm, the resultant tomograms, calculated from the arrival time differences between excitation and detection times, include errors caused by the uncertain source information. The authors are thus studying stable artificial impacts by laser, realizing excitations at designated positions without contact with the object. As a preliminary study, a concrete plate of 300 × 300 × 100 mm with an artificial defect of 10 mm thick Styrofoam was subjected to high-speed laser excitations [11]. The laser-excited waveforms propagating in the concrete are detected by conventional AE sensors of 60 kHz resonance. The system overview and the area of the artificial defect are shown in Fig. 7. Two laser focus conditions, namely 14 J/cm² and 82.5 J/cm² at 250 mJ per impact, are studied. As found in Fig. 8, no remarkable differences are observed between the propagation paths with and without the defect. An area of low velocity coincides with the area of the artificial defect, as shown in Fig. 9. It was found that elastic wave excitation by the laser and the tomography based on these excitations gave a quite reasonable result, implying that remote laser excitations could be used as sources of elastic waves for the tomography approaches.

5 Conclusions

For NDT to contribute to the maintenance program of infrastructures, two approaches should be studied: one for periodic monitoring, for example at a span of several years, and the other for continuous monitoring enabling a final alert to citizens. Active and passive elastic wave approaches are the candidates for these roles. Tomographic approaches provide invisible internal information on, for example, concrete slabs, while the SA sensors and the monitoring module give the failure evolution in real time. To realize these ideas in the in-situ maintenance flow, many remaining issues still have to be studied; since remote excitation for tomography approaches will be a key technology, several fundamental aspects of laser-induced elastic waves were experimentally studied in this paper.


Fig. 8 Detected waveforms for 14 J/cm2 (left) and 82.5 J/cm2 (right) by 250 mJ impact

Fig. 9 Wave paths (left) and velocity tomogram (right)

Acknowledgements A part of the study was implemented under the sponsorship of NEDO (New Energy Development Organization). Their financial support is greatly appreciated. The SA sensor was conceived by Prof. Shimoyama of Toyama Prefectural University (formerly with the University of Tokyo). His notable contribution to this project is highly applauded.

References

1. W-nexco.co.jp/Expressway Renewal Project, 30 Sept 2019, https://www.w-nexco.co.jp/renewalproject/
2. Kobayashi Y, Shiotani T, Shiojiri H (2006) Damage identification using seismic travel time tomography on the basis of evolutional wave velocity distribution model. Structural Faults and Repair 2006, CD-ROM, Edinburgh
3. Kobayashi Y et al (2007) Three-dimensional seismic tomography for existing concrete structures. In: Proceedings of second international operational analysis conference, vol 2, pp 595–600
4. Shiotani T, Aggelis DG, Momoki S (2009) Elastic wave validation of large concrete structures repaired by means of cement grouting. Constr Build Mater 23:2647–2652
5. Kobayashi Y, Shiotani T (2012) Seismic tomography with estimation of source location for concrete structure. Structural Faults and Repair 2012, CD-ROM, Edinburgh
6. Iwamoto K et al (2012) Application of AE tomography for fracture visualization of rock material. Prog Acoust Emiss XV:193–196
7. Asaue H et al (2016) Applicability of AE tomography for accurate damage evaluation in actual RC bridge deck. Structural Faults & Repair 2016, CD-ROM, Edinburgh
8. Watabe K et al (2017) Novel non-destructive technique of internal deterioration in concrete deck with elastic wave approaches. In: Proceedings of the 12th world congress on engineering asset management (WCEAM 2017), Paper 93, Brisbane
9. Khang-Quang N et al (2014) Multi-axis force sensor with dynamic range up to ultrasonic. In: Proceedings of 27th IEEE MEMS conference (MEMS2014), pp 769–772
10. Hashimoto K et al (2018) Simulation investigation of elastic wave velocity and attenuation tomography using rain-induced excitations. Structural Faults and Repair 2018, CD-ROM, Edinburgh
11. Shiotani T et al (2019) Application of non-contact laser impact with acoustic emission analysis and tomography technique for damage detection in concrete. AEWG 2019, Chicago (in print)

Estimation of In-Plane Strain Field Using Computer Vision to Capture Local Damages in Bridge Structures

T. J. Saravanan and M. Nishio

Abstract Monitoring the variation in the strain field near the girder ends and bearings of steel plate girder bridges to capture local damage is of paramount importance for condition assessment over their service life. It is well known that thickness reduction due to corrosion and failure at the bearings can degrade the necessary functional performance of the structure. A computer vision technique is therefore implemented for a typical bending test under quasi-static loading of an aluminum plate girder bridge model specimen at laboratory scale, to estimate the in-plane strain field variation near the end of the girder. In this research work, the experiments are carried out using a low-cost camera, and the two-dimensional digital image correlation (2D-DIC) technique is employed for the analysis. The experiment is repeated under two different loading conditions and changed boundary conditions to capture small responses with noise in them. The in-plane strain field patterns at different time steps corresponding to the applied quasi-static load can be clearly understood from the 2D-DIC results. The obtained results are in good agreement with the strain gauge measurements. The residual strain can also be captured in the unloading condition. The strain captured at the web section is between 15 and 35 µε, which is a very low strain value. Hence, the computer vision camera can capture the low strain variations that correspond to local damages in steel plate girder bridge structures through the 2D-DIC technique.

Keywords In-plane strain field · Computer vision · DIC · Quasi-static loading · Local damage · Condition monitoring

T. J. Saravanan (B) Institute of Advanced Sciences, Yokohama National University, Kanagawa 2408501, Japan e-mail: [email protected] M. Nishio Department of Engineering Mechanics and Energy, University of Tsukuba, Ibaraki 3058577, Japan © Springer Nature Singapore Pte Ltd. 2021 L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_43


1 Introduction

In Japan, many bridges constructed during the period of high economic growth have been in service for 50 years, and it is necessary to maintain and manage them appropriately and efficiently [1]. Damage due to corrosion at the girder ends and malfunction of the bearings cause serious issues in steel plate girder bridges [2]. Periodic inspection with close-proximity visual inspection has become mandatory. Existing steel plate girder bridges are subjected to various loading conditions; however, there are many uncertainties in the properties of existing structures due to deterioration, seismic loading histories, and other factors during their operation [3]. Because existing structures remain in service, structural parameters such as material properties and boundary conditions undergo deterioration (aging). Structural health monitoring (SHM) is a method in which various conventional sensors are attached to a structure and the overall state of the structure is grasped quantitatively and objectively from the acquired measurement data. Computer vision techniques based on low-cost, full-field, non-contact camera measurements are currently emerging as a new application in the SHM field for condition monitoring of structures at the local level [4]. Due to environmental effects on steel plate girder bridges, the boundary conditions are prone to deterioration and malfunction with high uncertainty. It is difficult to grasp this change and reflect it quantitatively in a numerical model. If there are uncertainties at the boundary conditions, it is difficult to independently evaluate the state of other members, such as bridge girders and floor slabs, because conventional monitoring acquires only a single type of data, such as acceleration vibration data or static strain data. Under these circumstances, digital image correlation (DIC) techniques can be used to acquire full-field strain/displacement patterns for damage localization at the local level [5].
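To illustrate the DIC principle referred to above, the sketch below tracks a single speckle subset between a reference and a deformed image using OpenCV's normalised cross-correlation; commercial packages such as GOM Correlate add sub-pixel interpolation and subset shape functions, so this is only a pixel-level approximation, and the subset and search sizes are arbitrary assumptions.

```python
import numpy as np
import cv2  # OpenCV, assumed available

def subset_displacement(ref_img, def_img, x, y, subset=31, search=20):
    """Pixel-level 2D-DIC building block: track one speckle subset centred at (x, y)
    from the reference image to the deformed image (both grayscale, away from edges)
    and return its integer-pixel displacement (u, v)."""
    h = subset // 2
    tmpl = ref_img[y - h:y + h + 1, x - h:x + h + 1]
    win = def_img[y - h - search:y + h + search + 1, x - h - search:x + h + search + 1]
    score = cv2.matchTemplate(win, tmpl, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(score)
    u = max_loc[0] - search          # displacement along x (pixels)
    v = max_loc[1] - search          # displacement along y (pixels)
    return u, v

# Strains are then obtained by differentiating the displacement field,
# e.g. eps_xx ≈ du/dx over neighbouring subset centres.
```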

2 Experimental Study For the experimental investigation, a simple and economical industrial camera (Model: DFK33UP5000, The Imaging Source Europe GmbH, Germany) with


a complementary metal oxide semiconductor (CMOS) sensor is used. The camera is fitted with a C-mount optical lens (Model: VS-5018H1, VS Technology, USA) with a focal length of 50 mm. The pixel resolution is 2592 (width) × 2048 (height), which corresponds to a pixel size of 4.8 µm × 4.8 µm, and the maximum frame rate is 60 frames per second. The computer vision software used for viewing is IC Capture—Image Acquisition (Version: 2.4.642.2631), and the acquired data are communicated to the control laptop (Panasonic CF-SV7) through USB 3.0. For in-plane strain field estimation, a plate girder bridge specimen is considered for the bending test by applying quasi-static loading. Figure 1a shows the fabricated beam specimen and its boundary conditions. The beam member is made of aluminum and has a total length of 1500 mm. Angle sections are fixed with bolts to form an I-shaped section with an outer dimension of 60 mm × 60 mm, top and bottom flange thicknesses of 3 mm, and a web thickness of 6 mm. The beam is installed on the pedestal via support members at both ends, namely bearings 'A' and 'B'. The strain measurements are taken near the end at bearing 'B', as shown in Fig. 1a. The bearing member is made with the same mechanism as a sliding steel bearing, allowing translation and rotation, with stoppers. The rotation function is ensured by placing the upper steel plate on the curved surface of the lower plate, and the translation function is ensured by allowing displacement in the direction of the bridge axis, as shown in Fig. 1b. It is difficult to evaluate the coefficient of friction at the contact surface between the upper and lower plates, so this type of bearing is rarely adopted at present; however, many existing bridges have this support system, and degradation occurs due to corrosion. It is installed at both ends of the beam member and used as a simple beam boundary condition. Conventional strain gauges are used together with a data logger (TDS-540, Tokyo Measuring Instruments Laboratory Co., Ltd., Japan) to measure the reference strain values on one side of the web section of the beam, as shown in Fig. 2a. The measurement layout of the strain gauges is illustrated in the schematic diagram in Fig. 2b. The other side of the web section is prepared with a random speckle pattern for the camera measurement, together with external illumination for better image clarity in the DIC post-processing, as shown in Fig. 2c. The camera is placed at a distance of 300 mm from the beam specimen. The quasi-static loading of


Fig. 1 Beam specimen: a strain measured at bearing ‘B’; b details of boundary conditions


Fig. 2 Strain gauge measurement set up: a beam specimen; b schematic layout; c camera measurement

76.37 kg is gradually applied to and removed from the beam specimen at two different locations, L/4 and L/2, using weight blocks, as shown in Fig. 3; only the results corresponding to the L/4 location are provided in this paper. The experiment is repeated for two different boundary conditions: a fixed support condition (stoppers are used to arrest translation and rotation at both ends) and a simply supported condition (translation is allowed at bearing 'B'). To analyze the obtained data, the GOM Correlate 2D-DIC software is used to estimate the full-field strain patterns. From Fig. 4a, b, the in-plane strain field patterns obtained at different time steps corresponding to the quasi-static loading applied at location L/4 can be clearly seen for the fixed and simply supported boundary conditions, respectively. At time t = 0 (no applied load), zero strain is measured, while at t = 150 s (load removed), the residual strain can be captured. The strain values obtained by the strain gauges are compared with the corresponding values from the 2D-DIC analysis, and the comparison plots for the strain values at the top and bottom of the web section near the end at bearing 'B' are shown in Fig. 5a, b, respectively, for the fixed support boundary condition. Similarly, the comparison plots for the strain values at the top and bottom of the web section near the end at bearing 'B' are shown in Fig. 6a, b, respectively, for the simply



Fig. 3 Experimental setup: a loading at L/4; b loading at L/2; c applied quasi-static load

supported boundary condition. The obtained 2D-DIC results are found to be in good agreement with the strain gauges. The strain variations captured at the web section near the end at bearing 'B' for quasi-static loading applied at location L/4 are in the ranges of 20–25 µε and 30–40 µε for the fixed support and simply supported conditions, respectively, which are very low strain values. Likewise, the experiment is repeated for the bending test on the beam specimen by applying the quasi-static load at location L/2. The strain variations are obtained and compared between 2D-DIC and the strain gauges, and the strain values range from 15 to 25 µε. Even so, the camera with the CMOS sensor can capture very small strain variations through 2D-DIC at the local level, which correspond to the damages near the girder end caused by corrosion and bearing failure.

3 Conclusion

In this paper, experimental investigations are carried out to estimate the in-plane strain field patterns using computer vision to capture local damages in bridge structures near the girder ends. A laboratory-scale plate girder bridge specimen is considered for the bending test by applying quasi-static loading at two different locations with different boundary conditions to capture the small strain variation near the

Fig. 4 In-plane strain field pattern obtained from 2D-DIC: a fixed boundary conditions; b simply supported boundary conditions

Fig. 5 Comparison plot for strain variation obtained from 2D-DIC and strain gauges for loading at L/4 and fixed support boundary condition: a at top of the web section; b at bottom of the web section

Fig. 6 Comparison plot for strain variation obtained from 2D-DIC and strain gauges for loading at L/4 and simply supported boundary condition: a at top of the web section; b at bottom of the web section

bearings. The obtained 2D-DIC results are in good agreement with the strain gauge values. Hence, it is shown that local in-plane strain field patterns can be obtained using readily available cameras. In the future, the computed deformation fields can be compared with an FE model of a similar plate girder bridge specimen, and the posterior distribution of the model parameters can be estimated by FE model updating, which helps in reducing the actual uncertainties of the model parameters.

Acknowledgements The authors are thankful to Tatsuya Suzuki, Yokohama National University, Japan, for his kind support during the experiments. The first author is grateful to the Japan Society for the Promotion of Science (JSPS), whose postdoctoral fellowship program supported his stay in Japan during this work.


Wireless Strain Sensors for Dynamic Stress Analysis of Rail Steel Structural Integrity X. P. Shi, Z. F. Liu, K. S. Tsang, H. J. Hoh, Y. Liu, Kelvin Tan Kaijun, Phua Song Yong, Kelvin, and J. H. L. Pang

Abstract Fatigue fractures were observed at the web-to-head and web-to-foot interface regions of rail steel thermite weld joints on curved tracks in a mass rapid transit tunnel. To investigate these failures, onsite wireless strain measurement was conducted to record the train-induced stress spectrum at a thermite weld joint, and the results are presented in this paper. Dynamic finite element (FE) analysis was then conducted using a thermite weld and rail steel structural model under train-induced loading conditions and was validated by the strain gage sensor measurements. The train-induced stress at critical locations of the thermite weld was investigated. By combining the strain measurements and the FE analysis results, the service load stress spectrum at the stress concentration site was obtained from a variable amplitude strain analysis, and the fatigue life at a critical location of the thermite weld was predicted using the cumulative damage summation method. This research provides a fatigue mechanics-based approach for rail steel asset life prediction.

Keywords Strain sensors · Fatigue · Finite element · Structural integrity · Thermite weld

X. P. Shi · Z. F. Liu · K. S. Tsang · H. J. Hoh · Y. Liu · J. H. L. Pang (B) SMRT-NTU Smart Urban Rail Corporate Laboratory, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore e-mail: [email protected] X. P. Shi · H. J. Hoh School of Electrical & Electronic Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore Z. F. Liu · K. S. Tsang · Y. Liu · J. H. L. Pang School of Mechanical & Aerospace Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore K. T. Kaijun · P. S. Yong · Kelvin SMRT Trains Pte. Ltd., Permanent Way, Singapore, Singapore © Springer Nature Singapore Pte Ltd. 2021 L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_44


1 Introduction

Rail steel thermite welds are widely used in railway track systems. However, due to the geometry irregularity introduced by a thermite weld, stress concentrations at the connection between the weld and the parent rail can lead to fatigue failures in the thermite weld region. Cracks may initiate from these stress concentration sites under the repeated loading induced by wheels rolling in and out, and these cracks may cause weld failure. Some typical weld failures are shown in Fig. 1. Most thermite weld failures occur at curved tracks, where fatigue occurs at the web-to-head and web-to-foot transition regions due to the complex in-plane and out-of-plane loading arising from the wheel forces applied to the rail track. To investigate these failures, strain measurements at these locations were conducted on a curved track during train operation hours. The measurements provided important information about the train loading, such as train velocity and rail track operational stresses. The rosette strain gages were installed a short distance from the thermite weld toe to parent metal rail transition region. Finite element (FE) modelling of the thermite weld was used to determine the stresses, and a stress scale factor relating the strain measurement locations to the weld failure locations was obtained. Combining the strain measurements and FE analysis results, the service load stress spectrum was constructed using variable amplitude strain analysis for fatigue life prediction. A rain-flow cycle counting analysis was performed, and the results were used in Miner's law, a cumulative damage summation fatigue failure assessment approach. The fatigue life of the critical region of the thermite weld was then predicted.

Fig. 1 Typical weld failure for rail thermite weld, a fracture initiates at head-web region in the field side, b fracture initiates at foot-web region in the field side, c fracture initiates at head-web region in the gauge side


2 Strain Measurement

Onsite strain measurement was conducted on a 300 m radius curved track on a concrete slab in a metro tunnel. The wireless strain measurement system is shown in Fig. 2. Strain gages are connected to a wireless transmitter with Wheatstone bridge circuits powered by rechargeable batteries. The transmitter was fixed on the wall of the tunnel, and the strain signal was transmitted to a wireless receiver placed beyond a safety distance from the track and connected to a laptop for data post-processing. The instrument set-up is shown in Fig. 3. As the centrifugal force acts mainly on the outside rail of the curved track, the strain measurement was conducted only on the outside rail. Four rosette strain gages were attached near the thermite weld at the web-to-head and web-to-foot transition locations. The rosette strain elements in the 0°, 90°, and 45° directions were recorded as shown in Fig. 4. For electrical shielding and EMI elimination, the strain gage areas were covered with aluminium foil-based tape. The instrument set-up was completed before train operating hours in the early morning engineering hours, and the system was then turned on before the first train

Fig. 2 Wireless strain measurement system

Fig. 3 Instrument set-up for strain measurement


Fig. 4 Strain measurement locations

Fig. 5 Strain measurement result for multiple trains (left) and one train (right); strain (με) versus time (s) for the head-web 45° strain, longitudinal strain, and vertical strain channels

starts. The strain signals were recorded automatically. A recording frequency of 1024 Hz was used in order to accurately capture the peak strains induced by passing trains. Typical recorded strain signals are shown in Fig. 5. On the left of Fig. 5, each peak represents one train passing the measurement location. The first peak is examined by zooming into the strain spectrum, as shown on the right of the figure. There are 24 axles in total for a 6-car train; hence, there are 24 strain peaks on each channel shown on the right of Fig. 5. The speed of the train can also be obtained from Fig. 5 (right) by dividing the physical distance from the first axle to the last axle of the train by the time difference between the first and the last peak of the strain signals. The train shown in this figure is running at a speed of 52 km/h, derived from the strain measurement results.
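As a rough illustration of this speed estimate, the sketch below picks the axle-induced strain peaks of a single train and converts the first-to-last peak spacing into a speed. It is a minimal example, not the processing actually used onsite; the peak threshold, the minimum peak spacing, and the first-to-last axle distance (axle_span_m) are hypothetical values that would have to be taken from the rolling-stock data.

```python
import numpy as np

def estimate_train_speed(time_s, strain_ue, axle_span_m,
                         threshold_ue=20.0, min_gap_s=0.05):
    """Estimate train speed (km/h) from one train's strain record.

    Each axle is assumed to produce one strain peak; the speed is the
    first-to-last axle distance divided by the time between the first
    and last detected peaks, as described in the text.
    """
    peak_times = []
    last_t = -np.inf
    for i in range(1, len(strain_ue) - 1):
        is_local_max = strain_ue[i] >= strain_ue[i - 1] and strain_ue[i] > strain_ue[i + 1]
        if is_local_max and strain_ue[i] > threshold_ue and time_s[i] - last_t > min_gap_s:
            peak_times.append(time_s[i])
            last_t = time_s[i]
    dt = peak_times[-1] - peak_times[0]      # time between first and last axle
    return axle_span_m / dt * 3.6            # m/s -> km/h

# Example (hypothetical numbers): a 6-car train with 24 axles spanning ~110 m
# speed_kmh = estimate_train_speed(t, strain, axle_span_m=110.0)
```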

3 Numerical Simulation

As shown in Fig. 4 in the previous section, the strain gages were attached some distance from the exact interface between the parent rail and the weld. A numerical model was therefore developed to determine a stress scale factor from the strain measurement locations to the weld fracture location.


Fig. 6 Numerical model of train-track system

Multibody dynamics methods [1, 2] and the finite element (FE) method [3–7] are commonly used to simulate the train-track system. The multibody dynamics method is able to simulate the dynamic effects due to the structural flexibility of the vehicle components and the track, and the computation scale can be quite large [8]. However, because of the rigid-body assumption used in this method, the stress distribution in the track and the thermite weld cannot be resolved. The FE method can solve complex structural stress problems, but the calculation cost is high when the model scale is large. In this research, an integrated approach was developed. The multibody dynamics analysis was first conducted with the commercial software Universal Mechanisms. All the components of the railway system are considered in this multibody dynamic model, which is the same as that used in Bai's work [9]. The interaction force between the wheels and the track was exported and used in the subsequent FE models. The FE simulation was conducted using the commercial software ABAQUS. The whole FE model consists of a wheel, a rail track with weld, base plates, rubber pads, clips, and sleepers, as shown in Fig. 6. The shape of the thermite weld was constructed based on a thermite weld specimen. The bottom of the sleepers as well as the longitudinal movement of the rail track ends were fixed. General contact in ABAQUS was used between the wheel and the rail track, and also between the rail track and the base plate, rubber pad, and clips; the friction factor for general contact was assumed to be 0.3. The track length in this research is 4.9 m, which minimizes the rail end effect while keeping the calculation time low; similar lengths were used by other researchers [5, 10, 11]. The wheel was modelled as a rigid body for simplification; a separate study showed that the stress distribution in the regions below the railhead obtained with this simplification is only 1% lower than that obtained with a deformable wheel. A bilinear elastic–plastic material model was used for the rail steel and linear elastic models for the other materials. The material parameters used in this simulation are listed in Table 1. To eliminate numerical oscillation and shock impact in the explicit simulation, an implicit–explicit sequential approach was used. The wheel was located 0.45 m away from one end of the rail, and small vertical and lateral displacements were applied to establish wheel–rail contact in the implicit analysis. The deformation and the stress components of the rail system were then imported into the input file for the explicit analysis. A similar implicit–explicit sequential approach has also been used in the literature [14, 15]. In the explicit analysis, the wheel–track reaction force obtained from UM was applied at the reference node of the wheel, and the wheel rolled in the moving direction


Table 1 Material parameters used in FE model

Parts                 Material          Density (kg/m3)   Elastic modulus (MPa)   Poisson's ratio   Yield stress (MPa)   Ultimate stress (MPa)
Rail                  R350 steel [12]   7850              210,000                 0.3               830                  1255
Sleeper               Concrete          2500              38,000                  0.15              –                    –
Clip and base plate   Steel             7850              210,000                 0.3               –                    –
Rubber pad            Rubber [13]       920               2.29                    0.47              –                    –

without slipping. The model was validated against the strain measurements. Figure 7 shows the comparison for the weld web-head location on the outside of the rail. The simulation shows the same trend as the test results in both the longitudinal and the vertical strain. In Fig. 7a, both the simulation and the test result show a tension peak at ~1.36 s. The same phenomenon has also been reported in other researchers' work [16–18], and this tension spike is considered the main cause of crack propagation at the thermite weld. The difference in peak strain value between the FEM and test results may be due to the irregular geometry of the test weld and the rail constraint conditions in the FE model. To determine the stress scale factor from the location of the strain measurement to the location of the weld fracture, the stress profiles of two elements at these two locations were compared; the scale factor for the weld fracture location at the head-web corner is 3 in this example.

Fig. 7 FEM results compared with test results: a longitudinal strain, b vertical strain (strain in με versus time in s)


4 Fatigue Life Prediction

The fatigue life of the thermite weld at the critical location was estimated by combining the strain measurements and the simulation results through rain-flow counting and a damage accumulation method. This section shows an example of the fatigue life estimation at the weld web-head location on the outside of the rail. Rain-flow counting was conducted using the commercial software DATS Analysis. The damage on the weld due to the train-induced stresses on the track can be calculated from the recorded strain measurement results and the stress-life (S–N) curve of the material. The whole fatigue life prediction process is shown in Fig. 8. A single-train signal, as shown in Fig. 5 (right), was used for this calculation. The rain-flow matrix is shown in Table 2. Using the rain-flow matrix, the accumulated damage can be calculated from the stress-life curve and Miner's rule. The Palmgren-Miner rule states that failure occurs when

$$\sum_{i=1}^{I} \frac{n_i}{N_i} = 1 \qquad (1)$$

where $n_i$ is the number of applied load cycles of type i and $N_i$ is the fatigue life under the corresponding stress range S. The S–N curve used in this paper is taken from the literature [19], where the crack growth life was predicted for an initial crack size of 0.025 mm to a final crack size of 5 mm. The related S–N equation is

$$N = \left(\frac{S}{10383}\right)^{-3.33} \qquad (2)$$

Fig. 8 Flowchart for fatigue life prediction at weld joint

Table 2 Damage accumulation (rain-flow matrix): after-scale stress ranges from 2.74E+07 to 9.62E+07 Pa with mean stresses from −3.64E+06 to 7.94E+06 Pa, the corresponding cycle counts n_i per train, the expected lives N_i from Eq. (2), and the damage fractions n_i/N_i, which sum to a total damage of 2.16782E−06 per train

The accumulated damage is then calculated as shown in Table 2. The total damage caused per train is 2.17e−6. The expected life can be calculated as 1/(total damage), which is 461,294 trains. Assuming there are 300 trains per day, the weld fatigue life is 1538 days (~4 years). It should be noted that, as only one S–N curve was used, the effect of mean stress was not considered in this section. This paper only presents an integrated approach to predict the fatigue life of the thermite weld; more accurate predictions can be achieved by considering the material properties in more detail.
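The damage summation above reduces to a few lines of arithmetic. The sketch below reproduces it under stated assumptions: the stress range S in Eq. (2) is taken to be in MPa (which gives expected lives of the same order as those in Table 2), and the rain-flow bins are illustrative placeholders rather than the actual counts extracted from the measured spectrum.

```python
def cycles_to_failure(stress_range_mpa):
    """S-N curve of Eq. (2); S is assumed to be the stress range in MPa."""
    return (stress_range_mpa / 10383.0) ** (-3.33)

def damage_per_train(rainflow_bins):
    """Miner's rule (Eq. (1)): sum of n_i / N_i over the rain-flow bins of one train."""
    return sum(n / cycles_to_failure(s) for s, n in rainflow_bins)

# Hypothetical rain-flow bins for one train: (stress range in MPa, cycle count n_i)
bins = [(27.4, 7), (42.7, 3), (58.0, 1), (73.3, 2), (96.2, 4)]

d = damage_per_train(bins)            # damage accumulated per train passage
life_trains = 1.0 / d                 # expected life in number of trains
life_days = life_trains / 300.0       # assuming 300 trains per day
print(f"damage/train = {d:.2e}; life = {life_trains:.0f} trains (~{life_days:.0f} days)")
```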

5 Conclusions

In this research, onsite wireless strain measurement was conducted on a curved track in a tunnel. The structural strain spectrum of the train-induced loading at the web-head and web-foot regions of a thermite weld was obtained. A numerical model was then developed to calculate the stress scale factor. Finally, the fatigue life of the thermite weld at critical locations was predicted using the rain-flow counting method and cumulative damage summation, combining the strain measurements and the simulation results.

Acknowledgements This research work was conducted in the SMRT-NTU Smart Urban Rail Corporate Laboratory with funding support from the National Research Foundation (NRF), SMRT and Nanyang Technological University, under the Corp Lab@University Scheme.

References

1. Sharma SK (2018) Multibody analysis of longitudinal train dynamics on the passenger ride performance due to brake application. Proc Inst Mech Eng Part K: J Multi-body Dyn 1–14
2. Ding GF, He Y, Zou YS, Li R, Ma XJ, Qin SF (2015) Multi-body dynamics-based sensitivity analysis for a railway vehicle. Proc Inst Mech Eng Part F: J Rail Rapid Transit 229(5):518–529
3. Ranjha SA, Mutton P, Kapoor A (2014) Fatigue analysis of the rail underhead radius under high axle load conditions. Adv Mater Res 891–892:1181–1187
4. Ranjha S, Kulkarni A, Ekambaram P, Mutton P, Kapoor A (2016) Finite element prediction of the stress state of a rail underhead radius under high axle load conditions. Int J Cond Monit Diagn Eng Manage 19(2):11–16
5. Salehi I, Kapoor A, Mutton P, Alserda J (2010) Improving the reliability of aluminothermic rail welds under high axle load conditions. In: CORE 2010, rail—rejuvenation and renaissance, conference on railway engineering, Wellington, New Zealand
6. Ranjha S, Ding K, Mutton P, Kapoor A (2012) Mechanical state of rail underhead region under heavy haul operations. In: CORE 2012, global perspectives, conference on railway engineering, Brisbane, Queensland
7. Salehi I, Kapoor A, Mutton P (2011) Multi-axial fatigue analysis of aluminothermic rail welds under high axle load conditions. Int J Fatigue 33(9):1324–1336
8. Shabana AA, Sany JR (2001) A survey of rail vehicle track simulations and flexible multibody dynamics. Nonlinear Dyn 26(2):179–210
9. Bai L, Li B, Shen F, Ng WY, Yusoff MFBM, Lim ST, Zhou K (2018) An integrated approach for predicting wear and fatigue damage of wheels and rails in SMRT systems. In: International conference on intelligent rail transportation, Singapore
10. Desimone H, Beretta S (2006) Mechanisms of mixed mode fatigue crack propagation at rail butt-welds. Int J Fatigue 28(5):635–642
11. Ranjha SA, Ding K, Mutton PJ, Kapoor A (2012) Finite element modelling of the rail gauge corner and underhead radius stresses under heavy axle load conditions. Proc Inst Mech Eng Part F: J Rail Rapid Transit 226(3):318–330
12. Motameni A, Eraslan AN (2016) Fracture and fatigue crack growth characterization of conventional and head hardened railway rail steels. J Sci Eng Res 3(3):367–376
13. Kaewunruen S, Lewandrowski T, Chamniprasart K (2018) Dynamic responses of interspersed railway tracks to moving train loads. Int J Struct Stab Dyn 18(01):1850011
14. Yang Z, Boogaard A, Wei ZL, Liu JZ, Dollevoet R, Li ZL (2018) Numerical study of wheel-rail impact contact solutions at an insulated rail joint. Int J Mech Sci 138:310–322
15. Zong N, Wexler D, Dhanasekar M (2013) Structural and material characterisation of insulated rail joints. Electron J Struct Eng 13(2):75–87
16. Eisenmann J (1967) Stresses acting on the rail—recent findings. International Railway Congress Association, Brussels, Belgium, pp 537–550
17. Sugiyama T, Yamazani T, Umeda S (1971) Measurement of local stress on outer rail head. Q Rep RTRI 11–13
18. Mutton PJ, Tan M, Bartle P, Kapoor A (2009) The effect of severe head wear on rolling contact fatigue in heavy haul operations
19. Lawrence FV, Ross ET, Barkan C (2004) Reliability of improved thermite welds

Purpose and Practical Capabilities of the Metal Magnetic Memory Method A. A. Dubov, S. M. Kolokolnikov, M. N. Baharin, M. F. M. Yunoh, and N. M. Roslin

Abstract This paper presents the purpose and practical capabilities of the metal magnetic memory (MMM) method. MMM is an advanced non-destructive testing (NDT) method based on recording the self-magnetic leakage field (SMLF) and analyzing its distribution in stress concentration zones (SCZs). SCZs are the main sources of damage mechanism development and can be detected in express-control mode using dedicated MMM sensors, scanning devices, and software. The inspection can be performed without surface preparation and without artificial magnetization, since it utilizes the residual magnetization formed naturally during product fabrication and throughout operation. When combined with other NDT methods such as ultrasound, the MMM method dramatically improves inspection efficiency. At present, MMM methods are widely employed in the inspection of base metal and weld joints. For inaccessible objects such as buried pipelines, non-contact magnetometric diagnostics (NCMD), which shares the same fundamentals as the MMM method, can be employed. It is therefore proposed that MMM is well suited for SCZ detection in the practical diagnosis of equipment and structures, and at the same time enables the assessment of the actual stress–strain state of all structural elements.

Keywords Non-destructive testing · Inspection · Metal magnetic memory · Stress concentration zones · Non-contact magnetometric diagnostic

1 Introduction

Metal magnetic memory (MMM) is an advanced NDT method based on recording the SMLF and analyzing its distribution in SCZs [1]. With reference to ISO 24497-1-2009, it is an after-effect that occurs as residual magnetization in A. A. Dubov · S. M. Kolokolnikov Energodiagnostica Co. Ltd., Moscow, Russia M. N. Baharin (B) · M. F. M. Yunoh · N. M. Roslin Technology Department, MR Technology Sdn. Bhd., 4809-1-30, Jalan Perdana 2, 63000 Cyberjaya, Selangor, Malaysia e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_45


the parent metal or welded joint, formed during fabrication or due to an irreversible change of the local magnetization state of the inspection object in zones of stress concentration or damage under loading. The SMLF, which reflects the natural residual magnetization formed during fabrication, is fundamentally different from the magnetic leakage field (MLF) that occurs at anomalies in an inspected object under the artificial magnetization applied in other magnetic testing methods such as magnetic particle inspection (MPI) [2]. The MMM method can be performed without surface preparation, such as insulation or coating removal, and without applying artificial magnetization to the inspected object, because it utilizes the naturally generated residual magnetization formed during manufacturing or while the object is under operating (pressurized) conditions. The main objective of MMM is to locate, in express-control mode and using dedicated scanning sensors and software, the SCZs on the inspected object that indicate the early stages of damage mechanisms.

2 Stress Concentration Zones in Metal Magnetic Memory Method

SCZs are not only the predefined areas in which external loads produce non-uniform stress distributions; they also arise as random areas caused by the initial heterogeneity of the metal combined with the additional working loads on large structural elements. The geometric characteristic of SCZ magnetic anomalies is that the distance between the extreme values of the self-magnetic field is a multiple of a standard product dimension (thickness, width, diameter). This distance corresponds to the minimum distance between adjacent glide planes or to the critical size of the shell, for instance at the loss of stability of a pipe. For quantitative evaluation of the level of stress concentration (the source of damage), the gradient of the normal (H_y) and/or tangential (H_x) SMLF components is used [2]:

$$K_{in} = \frac{|\Delta H_y|}{\Delta x} = \frac{dH}{dx}, \quad \Delta x \to 0 \qquad (1)$$

where Δx is the distance between adjacent inspection points [2]. In some cases, during inspection of the equipment stress–strain state (SSS), the gradient of the resultant SMLF is used instead [2]:

$$|H| = \sqrt{H_x^2 + H_y^2 + H_z^2} \qquad (2)$$
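A minimal numerical sketch of Eqs. (1) and (2) is given below: it computes the SMLF gradient K_in along a scan line and the resultant field magnitude from the three measured components. The function names, the spacing value, and the thresholding rule used to flag candidate SCZs are illustrative assumptions, not part of the MMM standard.

```python
import numpy as np

def smlf_gradient(h_y, dx):
    """K_in of Eq. (1): magnitude of the SMLF gradient along the scan line.

    h_y : normal SMLF component sampled at equally spaced inspection points (A/m)
    dx  : distance between adjacent inspection points (m)
    """
    return np.abs(np.gradient(h_y, dx))

def resultant_field(h_x, h_y, h_z):
    """|H| of Eq. (2): resultant of the three measured SMLF components."""
    return np.sqrt(h_x**2 + h_y**2 + h_z**2)

# Example: flag candidate SCZs where the gradient is unusually high
# (the 3-sigma rule here is purely illustrative).
# k = smlf_gradient(h_y_scan, dx=0.002)
# scz_candidates = np.where(k > k.mean() + 3 * k.std())[0]
```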

Numerous studies carried out at manufacturing plants have shown that all metallic objects of the same category, made of the same material grade and under the same manufacturing technology, have a similar Residual Magnetization (RM) distribution


where magnetic anomalies are detected during inspection only at residual stress concentration areas and at various structural distortions in individual metallic objects [3]. For example, during the magnetization of thermodynamic products in their production process, the internal pressure, rather than the weak external geomagnetic field, plays the decisive role. During product operation, the RM is first redistributed under the impact of the working load, and magnetic anomalies, governed by the geometry and the standard dimensions of the product, occur in the SCZs. If local SCZs do not occur in metallic objects of the same category under the influence of the working load, the RM distribution pattern is almost the same; establishing this, however, requires checking thousands of units and products of the same type. Based on established principles and extensive practice in examining numerous units of equipment and structures, the authors propose less standardized calibration methodologies and inspection approaches, as well as their corresponding metrology.

3 Integration of MMM Method with Other NDT Methods

It has been more than a century since welding was introduced into industry, and the most important factor in determining the reliability of weld joints is the distribution of residual stresses. There are various NDT methods used to examine the welding condition, and MMM is best utilized for comparative measurements of the relationship between residual stress and the properties of the material. Incorporating MMM inspection results with those of other NDT methods dramatically improves examination efficiency. Currently, the MMM method is widely employed in the multipart inspection of base metal and welded joints. Using the express-control mode of the MMM method, SCZs can be detected on the Inspection Object (IO) without any surface preparation, and they are then classified by the SMLF gradient and the design parameter m. For verification purposes, the identified SCZs are further inspected by other NDT methods such as Ultrasonic Testing (UT). For inspections conducted on specific equipment units/IOs using the MMM method, techniques for classifying magnetic anomalies by size on the surface and in the internal section of the metallic object are currently being developed. In many industries, the standards for unacceptable NDT anomalies differ for the same IOs; moreover, the sizes of acceptable and unacceptable anomalies in existing guidance documents cannot be physically justified from a mechanical point of view. In this regard, it should be understood that a significant attribute of the MMM technique is that it evaluates the severity of anomalies and estimates their direction and intensity using the magnetic parameters in the SCZs that already contain macro-cracks of unacceptable size. Given the increasing number of damage mechanisms in ageing metal equipment with unexpected fatigue properties, the proposed MMM method


primarily for early screening and analysis of anomalies, has obvious benefits for asset owners over other NDT methods.

4 Development Capabilities of the MMM Method

Today, more and more experts are focusing on the development of technical analysis, lifetime and risk assessment, and condition monitoring of equipment/IOs. Nonetheless, the difficulty is that, when evaluating the industrial condition of Hazardous Facilities (HIFs), experts face the fact that existing guidelines do not contain a clear definition of the relevant concepts from the fracture mechanics and materials science standpoint. Some research papers provide a contemporary fracture-mechanics perspective on such concepts as the "limiting state of the metal" within local SCZs and the "limiting state of the structure itself." In practice, according to the above concepts, the MMM method allows the actual condition of equipment and structures to be assessed [4]. In addition, the development of new standards and guidance documents on technical analysis, monitoring, lifetime and risk assessment of HIFs, grounded in the fracture mechanics and materials science of metals and structural elements, is also crucial. Where the MMM method is used to achieve 100% coverage of the inspection object and to recognize all potentially hazardous zones and suspected damage growth, lifecycle and risk assessments become more precise and predictable. From the authors' point of view, MMM is currently the most suitable and practical method for SCZ detection in structures. At the same time, this method enables the actual stress–strain state of all structural elements to be evaluated in express-control mode based on 100% inspection coverage. The Non-Contact Magnetometric Diagnostics (NCMD) capability for unpiggable pipeline sections is also unique. The fundamentals of NCMD are the same as those of the MMM method used in contact inspection: NCMD recognizes magnetic anomalies in the geomagnetic field distribution caused by variations in the magnetization of the metal pipeline in SCZs and in corrosion-fatigue growth areas. Inspection of buried (or underwater) pipelines using NCMD must be able to answer the question: "When and where should damage or an accident be expected?" To address this, NCMD is combined with additional pipeline inspection methods such as UT, eddy currents, etc., since at present it is not feasible to detect and classify anomalies in the pipeline based solely on NCMD results. Figure 1 shows an example of an SCZ location based on NCMD data analysis. Figure 2a shows an example of UT results at the location of an NCMD-detected SCZ. Longitudinal cracks were found at the longitudinal seam weld, as shown in Fig. 2b. The defect was found by performing Ultrasonic Test Flaw Detection (UTFD) at the identified SCZs. However, it should be borne in mind that if, according to the NCMD results, the follow-up inspections detect anomalies in only a few of the many SCZs detected,



Fig. 1 Example of SCZs location of NCMD signal

the outcome is still quite efficient, because it is considered that if even a single accident is prevented, for instance on a gas or oil pipeline, all the expenses of the NCMD survey are covered.

5 Conclusion

In conclusion, the mechanical express-control technique for detecting metal anomalies and local SCZs provides the following advantages to industrial users:
• provides early indication of corrosion-fatigue damage and lifespan evaluation of equipment and structures;
• detects anomalies/defects (lamination, casting defects, etc.) in the inner metal layers by using SMLF geometry parameters governed by dislocation glide in the SCZs;
• 100% coverage of the IO inspection to detect the local SCZs where sources of damage mechanisms are growing;
• improves the efficiency of the IO inspection program through the combination of the MMM method with other relevant NDT methods;
• reduces the cost of inspection, since it discards the artificial magnetization used in other magnetic testing methods and does not require surface preparation such as insulation removal.


Fig. 2 a Example of UT result based on SCZs location of NCMD data, b defect on the inspection object

Acknowledgements The authors would like to express their sincere thanks and gratitude to Energodiagnostica Co. Ltd., Moscow, and MR Technology Sdn. Bhd., and to everyone who contributed to accomplishing this paper.

References

1. Dubov A, Dobov A, Ladanyi P (2014) Tensile testing of steel specimens using the metal magnetic memory method. In: 11th European conference on non-destructive testing (ECNDT 2014), Prague, pp 1–6
2. Kolokolnikov S, Dobov A (2018) Physical basics, practical capabilities and purposes of the metal magnetic memory method application for technical diagnostic of critical industrial equipment. In: NDT for safety, Prague, pp 49–55
3. Dubov A, Vstovsky GV (2000) Physical base of the method of metal magnetic memory. In: 2nd international conference on metal magnetic memory, Moscow
4. Gorkunov ES, Efimov AG, Shubochkin AE, Artemyev BV (2016) On the question of magnetic NDT application to determine the stress-strain state of metal structures. World NDT 3:52–55

Development and Application of Inner Detector in Drilling Riser Auxiliary Lines Based on Magnetic Memory Testing Xiangyuan Liu, Jianchun Fan, Wei Zhou, and Shujie Liu

Abstract Drilling riser auxiliary lines are critical components of deep-water floating drilling platforms; they mainly consist of the choke line, kill line, hydraulic line, boost line, etc. This paper discusses an inner detector for drilling riser auxiliary lines based on metal magnetic memory testing. In contrast to the conventional detection methods currently employed, the prominent advantage of the detector is that it can scan the inner wall directly while being pulled by a winch, with no need to disassemble the buoyancy modules, so the time and cost of testing are reduced considerably. The detector is designed with two sensor arrays assembled in a staggered pattern; each array consists of six tunnel magnetoresistance probes arranged radially from the center of the shaft, much like spokes on a wagon wheel, ensuring 100% coverage of the inner wall. The raw data acquired from the front and rear sensor arrays can be aligned automatically by the alignment algorithms of the imaging software for further processing. To determine the detector characteristics, a laboratory test was conducted with 3.5-in. tubing modified with a series of prefabricated groove and dome-shaped artificial defects, and the results show that the detector can present the damage sharply. A field test also confirms that the design has a reasonable structure and stable performance.

Keywords Riser auxiliary lines · Inner detector · Imaging software · Magnetic memory

X. Liu · J. Fan (B) · W. Zhou College of Safety and Ocean Engineering, China University of Petroleum-Beijing, Changping, Beijing, China e-mail: [email protected] X. Liu e-mail: [email protected] S. Liu CNOOC Research Institute, Beijing, China © Springer Nature Singapore Pte Ltd. 2021 L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_46


1 Introduction

In deep-water floating drilling, the drilling riser system is an indispensable piece of equipment. It refers to the section of pipe from the subsea blowout preventer (BOP) to the moonpool. Its main functions are to isolate the seawater, guide the drilling tool, circulate the drilling fluid, and trip the subsea BOP. Drilling riser auxiliary lines are critical components of this system and mainly consist of the choke line, kill line, hydraulic line, boost line, and so on [1]. They enable the marine riser system to keep the pressure in the well balanced and to control the subsea blowout preventer properly. These pipelines are subjected to high pressure during operation, and the fluid medium inside them is corrosive. The inner wall tends to form cracks, corrosion pits, and the like under the combined action of corrosion, erosion, and stress concentration. According to statistics in the literature, the total downtime of the BOP for the 47 development wells drilled by a semi-submersible platform from 1987 to 1989 was 627 h due to failures of the choke line and kill line, accounting for 29% of the total downtime [2]. It can be seen that failure of the choke and kill lines on the drilling riser is one of the main reasons for the shutdown of floating drilling units. Therefore, regularly assessing the damage status of these auxiliary pipelines and ensuring their safe operation are key to successful drilling and completion quality. According to industry regulations, the riser is inspected annually. At present, conventional non-destructive testing techniques such as magnetic particle, penetrant, and ultrasonic testing are commonly used to detect crack defects and the wall thickness of the pipe body (as shown in Fig. 1). All of these methods require the removal of the buoyancy blocks, which brings problems such as a long operating cycle, low efficiency, and high cost. Moreover, since the inner diameter of the auxiliary pipeline is small and its length is up to 22.86 m, ultrasonic and penetrant testing have to be adopted in the form of sampling inspection at the oil tools base, so missed detections cannot be avoided. Therefore, it is of great significance to develop a detection device capable of performing full-line scanning of the riser auxiliary pipelines.

Fig. 1 Sample inspections on the base (ultrasonic testing left-hand side, penetrant testing right-hand side)


Magnetic flux leakage detection is a well-recognized detection method for long-distance oil and gas pipelines, but it requires specialized magnetization equipment to magnetize the pipeline [3], while the auxiliary pipeline has an inner diameter of only 2 to 4 in., making it very difficult to design a magnetization module for such a small inner diameter. The metal magnetic memory detection technology proposed in the 1990s does not face this problem, because what it detects is the residual magnetic field induced by stress and the geomagnetic field, with no need to magnetize the detected object, and it has unique advantages in the early diagnosis of fatigue damage [4]. This paper introduces magnetic memory detection into the inner detection of drilling riser auxiliary lines. An inner detector is designed to detect the pipeline directly, without removing the buoyancy blocks. The two-dimensional appearance of flaws can be presented intuitively by the imaging software.

2 Design of the Inner Detector

2.1 Analysis of Design Requirements

The magnetic memory method is a non-destructive testing method based on recording and analyzing the distribution of the self-leakage field generated in the stress concentration regions of a ferromagnetic member. The signal picked up during detection is the magnetic field distribution on the surface of the ferromagnetic member, which requires the magnetic memory probe to fit closely against the surface of the component during the inspection process. Each riser typically has 4 or 5 auxiliary lines, and the inner diameter specifications of these auxiliary lines vary with the drilling tool base, ranging from 2 to 4 in. Therefore, the detector should have a variable diameter to meet the needs of detecting different lines.

2.2 Details of the Design

The inner detector designed in this paper consists of four main modules: the central shaft, the sensor boxes, the screw-spring support frame, and the aviation plug. As shown in Fig. 2, the front end of the central shaft is designed with a hook for attaching to the winch wire rope; the detector completes the scan inside the line as it is pulled by the winch. Each sensor box is packaged with a giant magnetoresistive sensor, which can effectively pick up the tangential magnetic field signal on the surface of the ferromagnetic component. The six sensor boxes are arranged radially around the central shaft, forming a set of spoke-like probe arrays. The screw-spring frame is a key component of the equipment: by tightening or loosening the screw, the external


Fig. 2 Design drawing of the inner detector

diameter swept by the six sensor boxes can be adjusted within a certain range to adapt to the internal detection of lines of different diameters. A total of two sets of probe arrays, assembled in a staggered pattern, are mounted on the central shaft. Figure 3 shows the front view of the two probe arrays, which together comprise 12 sensor channels. Since the scanned portions of the two arrays complement each other, a sufficient magnetic field signal of the inner wall can be obtained in one test, thereby achieving full-coverage detection with high efficiency. The tail of the central shaft is designed with an aviation plug interface. The power and signal lines of the 12 sensors in the two probe arrays are routed through the wire slot of the central shaft to the aviation plug. The detector can be connected to the data acquisition box by means of the aviation plug to power the probes and transfer data. This wiring arrangement makes the detector easy to maintain.

Fig. 3 Front view of the two probe arrays comprising 12 sensor channels


Fig. 4 Etch pits (design drawing left-hand side, picture right-hand side)

Fig. 5 Crack defects (design of the groove artificial defects above, picture of tube with defects below)

3 Laboratory Test

3.1 Artificial Flaws

To determine the detector characteristics, two sections of 3.5-in. tube were modified with artificial defects for the laboratory test. Sample 1: a ball-shaped cutter with a diameter of 5 mm was used to mill a series of defects, in analogy to etch pits, to depths of 2 and 6 mm into the 3.5-in. tube. Sample 2: a 3.5-in. tube with 13 grooves of different depths, from 0.5 to 6.5 mm, produced by wire-electrode cutting in analogy to cracks. Figures 4 and 5 show the design drawings of the artificial defects and detailed pictures of the tubes with the artificial defects.

3.2 Data Processing and Results

The data are exported from the acquisition software and loaded into the data imaging software. After denoising, the software needs to align the data of the 12 channels. In order


to cover more detection area in one test, the detector uses two staggered front and rear sensor arrays, so the data detected by the front probe array are slightly ahead of the data of the rear probe array; therefore, the data of the two arrays must be aligned before further gradient processing. The signal detected by the sensors is the tangential magnetic field of the wall of the auxiliary line, and the magnetic field of the wall is not uniform in itself, so the original signal cannot characterize the defect well. The magnetic field therefore needs to be gradient-processed to eliminate the influence of the unevenness of the background field: the gradient becomes larger due to the magnetic field mutation near a defect, and this feature can be used to characterize the defect. The last step is to draw a two-dimensional cloud image: the gradient data of the 12 sensors are drawn in the form of a cloud image to realize data visualization, as sketched in the code below. Figure 6 shows that the six round-hole defects are detected by the test, and the two-dimensional contour distortion of the circular holes is within an acceptable range, which indicates that the inner detector can be applied to the detection of etch pits. Figure 7 shows the result for the section of 3.5-in. tube with groove defects: all 13 prefabricated groove artificial defects are clearly presented in the cloud image. For better visual effect, the cloud image can be mapped onto the surface of a cylinder, as Fig. 8 shows.
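The following is a minimal sketch of the alignment, gradient, and imaging steps described above, assuming the raw data arrive as a samples-by-12 array with the front-array channels first; the channel ordering, the axial lag between the arrays, and the sampling interval are illustrative assumptions, not values from the actual instrument.

```python
import numpy as np
import matplotlib.pyplot as plt

def build_cloud_image(raw, rear_lag_pts, dx):
    """Align the staggered probe arrays and compute the axial field gradient.

    raw          : (n_samples, 12) tangential-field readings; columns 0-5 are
                   assumed to be the front array and 6-11 the rear array
    rear_lag_pts : number of samples by which the rear array trails the front one
    dx           : axial sampling interval along the pull direction (m)
    """
    front, rear = raw[:, :6], raw[:, 6:]
    rear = np.roll(rear, -rear_lag_pts, axis=0)          # shift rear data forward
    aligned = np.hstack([front, rear])[:-rear_lag_pts]   # drop the wrapped-around tail
    # gradient along the pull direction suppresses the slowly varying background field
    return np.abs(np.gradient(aligned, dx, axis=0))

# grad = build_cloud_image(raw_data, rear_lag_pts=15, dx=0.002)
# plt.imshow(grad.T, aspect="auto", cmap="jet")   # unfolded inner-wall cloud image
# plt.colorbar(); plt.show()
```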

Fig. 6 Contrast between picture of tube with artificial defects and cloud image of detection


Fig. 7 Cloud image of the groove artificial defects’ detection

Fig. 8 The surface of a cylinder attached with cloud image

Laboratory test indicates that the inner detector can present the prefabricated groove and dome-shaped artificial defects effectively.

4 Industrial Application

To verify the reliability of the inner detector, an industrial application was carried out at a drilling base. The auxiliary lines at this base have inner diameters of 2.5 and 3.5 in. The onsite inspection requires two students cooperating with each other, as shown in Fig. 9: one is responsible for manipulating the winch and pulling the inner detector forward in the auxiliary line, while the other operates the computer at the other end to collect data and helps convey the cable connected to the back of the detector via the aviation plug. After the detection, the data can be exported, and the developed data imaging software can be used to generate a two-dimensional image of the inner wall condition, which is an unfolded view of the inner wall. Considering that the length of the auxiliary line is 22.86 m and its diameter is 3.5 in., the aspect ratio of the image should be


Fig. 9 Industrial application of the inner detector

Fig. 10 Data imaging result

kept at 22.86 m to 3.5 in. × π (the unrolled circumference) to ensure that the contour of the defect is not distorted. However, because this aspect ratio is very high, and to avoid the image being too narrow to view, the imaging software is designed to automatically cut the image into 4 segments. It can be seen from the data imaging result (Fig. 10) that there are several pit defects at about 3, 13, and 22 m along this auxiliary line. In fact, as shown in Fig. 11, the ends of the auxiliary lines at the base are seriously corroded. The results of the industrial application show that the inner detector has a reasonable structure and stable performance.

5 Conclusion and Perspective A novel inner detector based on magnetic memory testing has been developed for the detection of corrosion damage in drilling riser auxiliary lines. It can clearly display the 2D contour of the defect with the aid of supporting imaging software. Due to the high speed of data processing, it is suitable for full-line scanning of auxiliary

Development and Application of Inner Detector in Drilling Riser …

497

Fig. 11 Photographs of the auxiliary lines

lines at the drilling tools base. Its test results are instructive for the re-inspection of lines by other means of detection, which can transform the current random sampling detection method into targeted detection, enhancing the accuracy and efficiency of detection. The downside is that the advancement of the detector in the lines relies on the pulling of the winch: the wire rope has to be passed through the entire auxiliary line in advance, which is a heavy workload. What is worse, the section of cable connected to the back of the detector via the aviation plug becomes contaminated after passing through the inner wall of the auxiliary line, affecting the subsequent computer operation by the inspector. Therefore, wireless transmission should be considered in subsequent studies. In addition, an endoscope could be used to take pictures of damage, such as corrosion or cracks on the inner wall, and to match them with the two-dimensional contour map formed from the magnetic memory test data. After matching and confirmation, a damage map library can be established, which lays a foundation for future research on the intelligent identification of defects.

Acknowledgements This work was financially supported by the National Key Research and Development Program of China (Grant No. 2017YFC0804503).

References

1. Zhou S, Liu Q (2016) Introduction. In: Theory and application research of deepwater drilling riser system mechanical behavior. Science Press, Beijing, China, p 5
2. Montgomery ME (1998) Choke and kill lines—safety and downtime. In: Houston IADC deepwater conference, Brookshire, Texas
3. Shi Y et al (2015) Theory and application of magnetic flux leakage pipeline detection. Sensors 13(9):31036–31055
4. Yao K, Wu LB, Wang YS (2019) Nondestructive evaluation of contact damage of ferromagnetic materials based on metal magnetic memory method. Exp Tech 43(3):273–285

Defect Recognition and Classification Techniques for Multi-layer Tubular Structures in Oil and Gas Wells by Using Pulsed Eddy Current Testing Hang Jin, Hua Huang, Zhaohe Yang, Tianyu Ding, Pingjie Huang, Banteng Liu, and Dibo Hou Abstract Pulsed eddy current (PEC) method has attracted researchers’ attention in the detection of multi-layer tubular structure defects in oil and gas wells because of its advantages of non-destructive testing, not being affected by non-ferromagnetic interference. In this paper, the detection experiment of the casing with prefabricated defects is carried out, and the PEC signal analysis method for defect recognition and classification of multi-layer tubular structures is studied. To deal with the problem that it is difficult to extract time-domain features, a modified principal component analysis method is proposed. Combining the statistical features extracted by the method and frequency-domain features, the combined features are obtained, and the random forest algorithm is introduced to classify the defects. This paper demonstrates an effective method for the detection of multi-layer tubular structure defects in oil and gas wells. Keywords Pulsed eddy current testing · Multi-layer tubular structure · Combined feature · Random forest

1 Introduction In the field of oil exploitation, the well casings, which are responsible for protecting tubing and providing a safe acquisition environment, are typical multi-layer tubular structures [1]. However, in the actual production process, casing wall may suffer from perforation cracks, bending deformation, multi-surface extrusion and other defects H. Jin · Z. Yang · T. Ding · P. Huang (B) · D. Hou State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, Zhejiang, China e-mail: [email protected]; [email protected] H. Huang Zhongyuan Oilfield Company, SINOPEC, Puyang 457001, Henan, China B. Liu College of Information Technology, Zhejiang Shuren University, Hangzhou 310015, Zhejiang, China © Springer Nature Singapore Pte Ltd. 2021 L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_47


due to the long-term influence of the harsh working environment [2]. If casing defects are not dealt with in a timely and targeted manner, they will affect the normal production of oil and gas wells and cause huge economic losses. Pulsed eddy current testing is a non-destructive testing technology developed on the basis of traditional eddy current testing [3]. Because of its advantages of non-destructive, non-contact testing, immunity to non-ferromagnetic interference, and high detection efficiency, it has been applied to the detection of defects in multi-layer tubular structures in oil and gas wells. Zhang et al. realized the classification of casing outer wall corrosion and inner wall corrosion based on a stacked auto-encoder neural network [4]. Wang et al. conducted an experimental evaluation of the defects and ovality of coiled tubing and realized the localization, qualitative and quantitative analysis of defects [5]. Xu et al. proposed a composite non-destructive testing method based on the PEC method and the magnetic flux leakage method and realized longitudinal crack, internal defect and transverse crack detection of tubing [6]. At present, research on multi-layer tubular structures is mainly aimed at identifying specific defects, and there are few studies on the classification of the casing defects that may occur in actual situations. Due to the limitation of signal sampling in the detection process, it is difficult to extract typical time-domain features that effectively characterize the defect types [4]. Therefore, a method for defect recognition and classification of multi-layer tubular structures based on combined features is proposed in this paper. Firstly, the casing defects are recognized based on the statistical features extracted by the modified PCA method. Then the combined features are obtained by combining the statistical features with the signal's frequency-domain features. Based on the combined features, the random forest algorithm is used to construct a data model of the relationship between the PEC signals and defect types, and the defect classification of multi-layer tubular structures is realized.

2 Experiment System and Specimen

2.1 Experiment System

The experimental setup is shown in Fig. 1. In the experimental system, the excitation signal module generates a pulse wave signal and drives the excitation coil in the composite detection probe to generate the excitation magnetic field, which is sensed by the detection coil. The detection signal is transmitted to the signal processing center by the signal acquisition module for storage and processing. Figure 2 shows the structure of the PEC instrument, which mainly consists of the excitation signal module, the composite detection probe, and the signal acquisition module. The instrument is placed along the vertical center of the multi-layer tubular structure for detection. The composite detection probe is composed of the longitudinal probe A, the transverse probe B, and the transverse probe C. The three probes are perpendicular to


Fig. 1 PEC detection system

Fig. 2 Schematic diagram of PEC instrument

each other and the electromagnetic information of specimens in three-dimensional space can be obtained. The auxiliary module includes excitation signal generating circuit, detection signal processing circuit, coding transmission circuit, and so on, which is used for signal transmission and processing.

2.2 Tubular Structures Defect Specimens

In the experiment, according to the actual defects in an oil field, standard defects such as perforation cracks, bending deformation, and multi-surface extrusion are defined, and corresponding experimental specimens are machined from low carbon steel casing, as shown in Fig. 3. Figure 4 shows some of the processed experimental specimens. The nonlinear relationship between the mentioned casing defect types and the PEC signals, together with the difficulty of extracting typical time-domain features to characterize the defects, makes effective classification difficult [7]. Therefore, it is necessary


Fig. 3 Schematic diagram of some experimental specimens

Fig. 4 Partial processed experimental specimens

to introduce other methods to describe defect information more comprehensively and classify the defects [8].

3 Theory

3.1 PEC Detection Principle

PEC uses a square wave signal with constant period, amplitude, and duty cycle as the excitation signal to induce the primary magnetic field. The change of the magnetic field induces eddy currents in the specimen, and different defects affect the distribution and intensity of the eddy currents, ultimately affecting the total magnetic field in the space. By analyzing the changes in the detection signal, information about the specimen can be obtained to realize qualitative and quantitative analysis of defects [9]. The casing material used in this experiment is low carbon steel, which is a ferromagnetic material. Figure 5 shows a typical PEC response signal of a ferromagnetic material. The secondary field of a ferromagnetic material is in the same direction as the primary field, so there is no zero-crossing time. Affected by the "smoke ring effect" [10] caused by the relatively large dimensions of the casing, it is difficult for the detection to obtain typical time-domain features such as the peak time or the voltage peak.


Fig. 5 PEC response signal of ferromagnetic materials

3.2 Principal Component Analysis

Principal component analysis (PCA) is a dimension reduction technique based on statistical analysis. By converting multiple variables into a few integrated variables (principal components, PCs), where each PC is a linear combination of the original variables, the PCs retain most of the information of the original signal. In this paper, a modified PCA method is proposed: the features are extracted by PCA separately for each probe direction and then combined as the statistical features, as illustrated below.
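The sketch below illustrates this per-probe PCA feature extraction; it is a plausible implementation using scikit-learn, not the authors' code, and the number of retained components is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

def modified_pca_features(sig_a, sig_b, sig_c, n_pc=3):
    """Extract principal components per probe direction and concatenate them.

    sig_a, sig_b, sig_c : (n_samples, n_points) PEC responses of probes A, B, C
    Returns an (n_samples, 3 * n_pc) array of combined statistical features.
    """
    feats = []
    for sig in (sig_a, sig_b, sig_c):
        pca = PCA(n_components=n_pc)        # one PCA model per probe direction
        feats.append(pca.fit_transform(sig))
    return np.hstack(feats)
```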

3.3 Random Forest

The random forest (RF) algorithm is an ensemble decision tree method based on bagging. The basic idea is to construct a large number of trees so as to avoid the overfitting of a single decision tree. RF down-weights the results of trees that have little influence on the prediction, adopts the results of trees with strong influence, and obtains the final result by weighted fusion. Therefore, RF has strong anti-noise ability and good predictive ability for similar features. In this paper, combined features are obtained by combining the frequency-domain features of the PEC signals with the statistical features extracted by the modified PCA method. Based on the combined features, a model is constructed with the RF algorithm to classify the defects of multi-layer tubular structures. The overall technology diagram is shown in Fig. 6.
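A hedged sketch of how such a classifier could be trained and scored on the combined feature matrix; the split ratio, number of trees, and function names are illustrative assumptions rather than the authors' settings.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def train_rf(combined_features, labels, n_trees=200):
    """Fit a random forest on the combined features (one row per PEC signal)
    and report training/testing accuracy."""
    X_train, X_test, y_train, y_test = train_test_split(
        combined_features, labels, test_size=0.25, stratify=labels, random_state=0)
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    clf.fit(X_train, y_train)
    print("training accuracy:", clf.score(X_train, y_train))
    print("testing accuracy:", clf.score(X_test, y_test))
    return clf
```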


Fig. 6 Diagram for defect classification

4 Results and Analysis

For non-destructive testing and evaluation of the double-layer tubular structures commonly used in actual wells, namely the tubing + casing experimental scenario, casing defects are first recognized and then classified based on the defective response signals. A total of 1774 sets of PEC response signals were collected, covering six categories: defect-free, perforation cracks (Defect A), single-surface extrusion (Defect B), double-surface extrusion (Defect C), four-surface extrusion (Defect D), and bending deformation (Defect E). Detailed sample information is shown in Table 1.

4.1 Defect Recognition

To determine whether a sample contains a defect, features are obtained by the modified PCA method, which extracts the PCs of probes A, B, and C separately and integrates them into one set of features. Compared with traditional PCA, which extracts the PCs of the original signals directly, the modified PCA makes use of the probes' different detection capabilities for defects. Table 2 shows the results of using the statistical features to determine whether a defect exists. Figure 9 shows the comparison of feature extraction: (a) shows the features of the first three PCs extracted by traditional PCA, and (b) shows the features extracted by the modified PCA. The overlapping of features in (a) affects the accuracy of defect recognition; the overlap disappears with the modified PCA method, which gives a reliable explanation of the experimental results.

Table 1 PEC signal dataset

              Total dataset  Defect-free  Defect A  Defect B  Defect C  Defect D  Defect E
Training set  1332           703          110       139       135       102       143
Testing set   442            234          36        46        45        34        47
Total         1774           937          146       185       180       136       190


Table 2 Results of defect recognition

Feature extraction  Model  Training set accuracy (%)  Testing set accuracy (%)
Traditional PCA     SVM    97.14                      95.14
Modified PCA        SVM    97.47                      97.31
Traditional PCA     RF     100                        98.32
Modified PCA        RF     99.93                      99.55

Fig. 9 Comparison of feature extraction: a feature extraction based on traditional PCA, b feature extraction based on modified PCA

From the results in Table 2, the RF algorithm is more accurate than SVM. This is mainly due to the randomness of the many classifiers in RF, which gives better identification of similar features than SVM's single classifier. This also provides a reliable basis for the subsequent defect classification.

4.2 Defect Classification

The classification of defect types is the main target of PECT of tubular structures. Therefore, after the defect recognition described above, this paper analyzes the defective signals to classify the types of defects. The modified PCA is adopted to extract statistical features because of its better characterization of defects. Figure 10 shows the result of feature extraction. There is considerable feature overlap among the different defect types, which indicates that statistical features alone are not enough to classify defects accurately. The excitation signal of PEC is composed of sinusoidal waves at multiple harmonic frequencies and its response contains abundant spectral information, so effective frequency-domain feature extraction is helpful for defect classification. Figure 11 shows the amplitude spectrum and the low-frequency amplified spectrum of different defect signals after FFT. It is found that the FFT amplitude spectrum peak can be used as the frequency-domain feature to characterize different defect


Fig. 10 Modified PCA feature extraction

Fig. 11 FFT amplitude spectrum

types. Therefore, the statistical features extracted by the modified PCA and the FFT amplitude spectrum peak, used as a frequency-domain feature, are combined to form the combined features.
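A brief sketch of how such a combined feature matrix could be assembled, assuming the response signals are rows of a NumPy array; the spectral resolution and any exclusion of the DC component are not specified in the paper and are left open here.

```python
import numpy as np

def combined_features(pca_features, signals):
    """Append the FFT amplitude-spectrum peak of each PEC response signal
    to the statistical features extracted by the modified PCA."""
    spectra = np.abs(np.fft.rfft(signals, axis=1))  # amplitude spectrum per signal
    peak = spectra.max(axis=1, keepdims=True)       # one frequency-domain feature
    return np.hstack([pca_features, peak])
```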


Table 3 Defect classification accuracy with RF model

Feature extraction  Model  Training set accuracy (%)  Testing set accuracy (%)
Frequency-domain    RF     71.70                      64.42
PCA                 RF     92.05                      90.38
Modified PCA        RF     93.96                      93.27
Combined features   RF     99.68                      99.51

Table 3 shows the classification accuracy of the RF model with different feature extraction methods, from which the following conclusions can be drawn. Compared with the traditional PCA method, the modified PCA method improves the classification accuracy slightly. Compared with the single statistical feature, the combined features add only one feature dimension but improve the accuracy greatly: the classification accuracies of the training set and test set reach 99.68 and 99.51% with the modified PCA combined with the frequency-domain feature, indicating a high model accuracy. To analyze why the combined features improve defect classification, the confusion matrices of the testing set results based on the modified PCA and the combined features are shown in Fig. 12. From the results in Fig. 12, the combined features have an advantage in classifying defects such as single-surface extrusion and double-surface extrusion. This may be because the spectrum peak carries the dominant energy information of the defect signal, and the frequency-domain feature makes up for a shortcoming of PCA, namely that PCs with small contributions may still contain information that is important for distinguishing defect types. In order to verify the performance of the classifier in the proposed method, the SVM model commonly applied in PEC studies is used as a reference. Figure 13 shows the prediction results of the RF and SVM models with the combined features. Both models accurately classify the defects of multi-surface extrusion and perforation cracks, while the RF model has better performance for the defects of single-surface extrusion

Fig. 12 Testing set result with different feature extraction methods


Fig. 13 Confusion matrices of RF and SVM model with combined features

and bending deformation. This is attributed to the randomness of the RF model, which improves the anti-noise ability and the classification accuracy of similar features.

5 Conclusion

In this paper, defect recognition and classification techniques for multi-layer tubular structures using PECT are studied. Through detection experiments on casings with prefabricated defects, this paper adopts the modified PCA method to extract features for defect recognition and constructs a model with the RF algorithm based on the combined features for defect classification of multi-layer tubular structures. The following conclusions are obtained:
(1) The modified PCA with the RF model can realize accurate defect recognition of multi-layer tubular structures;
(2) Compared with a single statistical feature, the combined features obtained from the frequency-domain feature and the modified PCA can effectively improve the defect classification accuracy of multi-layer tubular structures;
(3) The RF algorithm, compared with SVM, has better identification of similar features, which improves the defect recognition and classification accuracy of multi-layer tubular structures.
From the foregoing, the defect recognition and classification techniques for multi-layer tubular structures presented in this paper have promising application prospects in PECT.

Acknowledgements This work is supported by the National Natural Science Foundation of China [Grant No. 61174005] and the Technology Major Project of China [Grant No. 2016ZX0517-003 and No. 31300028-18-ZC0613-0002].


References
1. Qi C, Yu R, Cheng Q et al (2013) Research and application of the online magnetic detection device for CT. J Nanchang Hangkong Univ Nat Sci 27(04):77–83
2. Zhang Z, Wu J, Luo Y (2002) Analysis and countermeasures of casing deformation causes in Gangxi oil field. J Jianghan Pet Inst (04):83–84+1
3. Sophian A, Tian GY, Taylor D et al (2001) Electromagnetic and eddy current NDT: a review. Insight-Non-Destr Test Cond Monit 43(5):302–306
4. Zhang Y, Li Y, Yan B et al (2018) Classification of subsurface corrosion in double-casing pipes via integrated electromagnetic testing with stacked auto-encoder artificial neural network. J Air Force Eng Univ (Nat Sci Ed) 19(01):72–78
5. Wang L, Song Z, Chang J et al (2015) Test analysis and application of coiled tubing electromagnetic lossless detection. Oil Field Equip 44(07):60–63
6. Xu X (2005) Nondestructive testing of oil pipe defects. Daqing Petroleum Institute
7. Tang W, Shi YW (1997) Detection and classification of welding defects in friction-welded joints using ultrasonic NDT methods. Insight 39(2):88–92
8. Huang P, Ding T, Luo Q et al (2019) Defect localisation and quantitative identification in multi-layer conductive structures based on projection pursuit algorithm. Nondestr Test Eval 34(1):70–86
9. Sophian A, Gui YT, Taylor D et al (2003) A feature extraction technique based on principal component analysis for pulsed Eddy current NDT. NDT E Int 36(1):37–41
10. Yang P, Qiu N (2015) Research on transient electromagnetic detection technology of Goaf water based on smoke ring effect. Zhongzhou Coal 12:93–96

Corrected Phase Reconstruction in Diffraction Phase Microscopy System V. P. Bui, S. Gorelik, Z. E. Ooi, Q. Wang, and C. E. Png

Abstract This paper presents an approach for correcting the phase reconstructed from the interferogram of a diffraction phase microscopy (DPM) system when noise produces artefacts around the defect features. By iteratively averaging the local region around the artefacts of the unwrapped phase using a sliding window, the correct phase information can be retrieved. The reconstructed phase images after the correction step demonstrate the ability of the proposed approach to extract the correct phase data from the system.

Keywords Diffraction phase microscopy · 3D reconstruction · Phase extraction · Phase unwrapping · Local phase approximation · Background removal

1 Introduction Optical interferometry has the ability to reveal nanoscale information because it can provide access to phase. Diffraction phase microscopy (DPM) is one of the most popular optical inspection systems that uses a common-path interferometer to obtain both the amplitude and phase information from a sample in order to achieve 3D information (shape and size) of an object. In the conventional DPM system, one stringent requirement is that the first order of diffraction beam needs to be produced with higher magnitude than zero order beam so that after going through the pinhole, which also requires high accuracy alignment, these two fields interfere at the final image plane on the CCD sensor to create the correct final interferogram with the optimal fringe visibility [1–4]. The 3D reconstruction of interferogram images obtained from DPM system using conventional algorithms such as Fourier transform-, or Hilbert transform-based phase extraction, and branch cuts-, or global V. P. Bui (B) · C. E. Png Institute of High Performance Computing, A*STAR, 1 Fusionopolis Way, Singapore 138632, Singapore e-mail: [email protected] S. Gorelik · Z. E. Ooi · Q. Wang Institute of Materials Research and Engineering, A*STAR, Singapore, Singapore © Springer Nature Singapore Pte Ltd. 2021 L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_48


optimization-based phase unwrapping often results in very good outcomes [5–16]. However, we have observed that, in some specific cases, the reconstructed images are distorted and lack clarity when the defect size is below the diffraction limit (sub-µm). This work derives an approach for correcting the phase reconstruction when artefacts exist around the defect features. Additionally, even though the noise level is inherently low in the DPM system, there is still a need to quantify such noise and remove it when dealing with subdiffraction-sized defects. As we aim to develop a quasi-real-time imaging system, the first step is to employ a GPU-based Hilbert transform to extract the phase information from the interferograms. After that, a fast quality-map-based algorithm is used to unwrap the extracted phase, which then goes through a pre-correction step that averages the region around the artefacts before dimension characterization (estimation of the shape and size of the defects). In this paper, we have also attempted to reconstruct the background intensity of the resulting unwrapped phase, and the constructed background is then subtracted before retrieving the correct phase information. The reconstructed phase images after the correction step demonstrate the ability of the proposed approach to extract the correct phase data from the DPM system.
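A minimal sketch of the background-removal idea mentioned above (fitting a low-order 2D polynomial surface to the unwrapped phase and subtracting it). The coordinate normalisation and least-squares solver are assumptions for illustration; the paper states only that a fourth-order 2D surface fit is used.

```python
import numpy as np

def remove_background(phase, order=4):
    """Fit a 2D polynomial surface of the given order to the unwrapped phase
    and subtract it, acting as a stand-in reference image."""
    h, w = phase.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = xx.ravel() / w
    y = yy.ravel() / h
    # design matrix containing every monomial x**i * y**j with i + j <= order
    cols = [(x ** i) * (y ** j)
            for i in range(order + 1)
            for j in range(order + 1 - i)]
    A = np.stack(cols, axis=1)
    coeff, *_ = np.linalg.lstsq(A, phase.ravel(), rcond=None)
    background = (A @ coeff).reshape(h, w)
    return phase - background, background
```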

2 Phase Reconstruction in DPM System

2.1 Conventional Phase Reconstruction Process

The ultimate limiting factor for defect detection is not resolution but noise, which decreases image contrast and reduces sensitivity. This can be relieved by the DPM system, a low-noise optical microscope imaging system with high sensitivity. A conventional DPM interferometer is illustrated in Fig. 1, where a 405 nm semiconductor laser coupled into a single-mode fibre is used as the illumination source. The light beam, after passing through a speckle reducer and optical components, is focused on the back focal plane of the objective lens for normal incidence on the test sample. The reflected beam is collected by the same objective lens and is incident on a diffraction grating (300 lines per mm) to produce multiple orders of beams. The first-order and zeroth-order beams then pass through a pinhole and a 4f system composed of two lenses with focal lengths of 75 mm and 400 mm, respectively. In the DPM system, one stringent requirement is that the first-order diffraction beam must have a higher magnitude than the zeroth-order beam so that, after going through the pinhole, which requires high-accuracy alignment, these two fields interfere at the final image plane on the CCD sensor, creating the correct interferogram image with optimal fringe visibility [1–4]. The algorithm for reconstruction of the phase image from the interferogram image obtained from the DPM system consists of two main steps. Firstly, a phase extraction


Fig. 1 A common-path diffraction phase microscope (DPM) for 3D imaging of defects

algorithm is used to extract the phase information from the raw interferogram image. Then, a phase unwrapping algorithm is employed to obtain the absolute phase image and its 3D information. In fact, the 3D phase reconstruction is usually performed using conventional algorithms such as Fourier transform- or Hilbert transform-based phase extraction [5–9], and branch cut- or global optimization-based phase unwrapping [10–16]. In addition, there is a need to develop a background removal algorithm in case no reference image can be provided. The background model is based on a 2D surface fit (fourth-order polynomial) to data extracted from the unwrapped phase. This is then used as the reference image for subtraction when retrieving the phase information [17, 18]. The reconstruction process often results in very good outcomes, as illustrated in Figs. 2, 3 and 4. The result in Fig. 2 is obtained from the interferogram image generated by Eq. (3), while the one in Fig. 3 is the phase unwrapped from the wrapped phase simulated using Eq. (4). We use measured metrics such as the peak signal-to-noise ratio (PSNR), defined in Eq. (1), where the mean-squared error (MSE) is

Fig. 2 Phase extraction from simulated interferogram image obtained by Eq. (3)


Fig. 3 Phase unwrap from simulated wrapped phase image obtained by Eq. (4)

Fig. 4 Reconstructed 2D and 3D images of SiO2 pillar with diameter of 60 µm, high of 100 nm on Si wafer

obtained from Eq. (2), to evaluate the accuracy of the most popular extraction and unwrapping algorithms for the results in Figs. 2 and 3, and the outcome is listed in Table 1. It can be seen that a combination of the Hilbert transform for phase extraction and the quality map for phase unwrapping can be considered the best for subsequent phase reconstruction. Figure 4 shows the reconstructed phase images of a 60 µm

Table 1 Accuracy level of conventional extraction and unwrapping algorithms

Extraction algorithm   Fourier      Hilbert      Derivative  Hilbert Huang
PSNR (dB)              35           57           34          46

Unwrapping algorithm   Branch cuts  Quality map  Lp-norm     Network flow
PSNR (dB)              36           36           35          36


diameter SiO2 pillar with a height of 100 nm standing on the Si wafer. Note that a CCD camera (1024-by-2048, pixel size of 5.3 µm) is used to capture the interferogram image. The results agree very well with the test sample's fabrication data (shape and size).

PSNR = 10 \log_{10}\left(\mathrm{peakval}^2 / \mathrm{MSE}\right)    (1)

\mathrm{MSE} = \frac{1}{NM} \sum_{i,j} \left(X(i,j) - Y(i,j)\right)^2    (2)

I(x,y) = 128\, e^{-(x^2+y^2)} + 127\, e^{-(x^2+y^2)} \cos\!\left(8\pi (x^2+y^2) + 32\pi x\right)    (3)

\varphi(x,y) = \mathrm{WrapToPi}\!\left(20\, e^{-(x^2+y^2)/2/0.2^2} + 10(x+y) + 0.5\,\mathrm{randn}(N)\right)    (4)
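A short illustration of how the simulated interferogram of Eq. (3) can be generated and how the PSNR of Eqs. (1)-(2) can be evaluated; the grid extent and image size below are assumptions, not values from the paper.

```python
import numpy as np

def psnr(reference, estimate, peakval=None):
    """PSNR between a reference image X and an estimate Y, per Eqs. (1)-(2)."""
    mse = np.mean((reference - estimate) ** 2)
    if peakval is None:
        peakval = reference.max()
    return 10.0 * np.log10(peakval ** 2 / mse)

# simulated interferogram of Eq. (3) on an assumed [-1, 1] x [-1, 1] grid
x, y = np.meshgrid(np.linspace(-1, 1, 512), np.linspace(-1, 1, 512))
r2 = x ** 2 + y ** 2
interferogram = 128 * np.exp(-r2) + 127 * np.exp(-r2) * np.cos(8 * np.pi * r2 + 32 * np.pi * x)
```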

2.2 Local Corrections of Unwrapped Phase

The DPM system is able to reduce noise considerably. However, because of system calibration errors, sample tilt, aberration, and non-uniform or time-varying illumination intensity, residual noise still causes artefacts during phase reconstruction; worse yet, a subdiffraction-limit defect signal buried in noise results in corrupted phase data. A correction procedure based on iterative smoothing is proposed to approximate the local phase of the unwrapped noisy data, as presented in Fig. 5. A local approximation with an average estimate in a sliding window is performed to remove the artefact phase wherever it occurs. The procedure

Algorithm: Corrected unwrapped phase
Input: unwrapped phase, sliding window (size), threshold
Output: corrected unwrapped phase
Procedure corrected_phase():
    N = number of sliding windows
    for i = 1 to N do
        P = peak_detection(): Diffp > threshold, Maxp = number of peaks (artefacts)
        NewPeakList.append(peak)
        while count > 0 do
            average_region(pixels, average parameters)
            check = Diffp vs. threshold
            count = number of remaining peaks (Maxp, check)
        end while
        Estimation = pixel value in sliding window
    end for
    return corrected phase
End procedure

Fig. 5 Pseudo-code of corrected unwrapped phase algorithm


Fig. 6 a Interferogram image, b Phase reconstruction, c corrected unwrapped phase, d 3D phase image with respect to (c)

is initiated from the phase estimates obtained for the neighbouring points and the artefacts contained in the sliding window, where an artefact is defined as a point whose phase-difference magnitude exceeds a certain threshold. This is repeated until no discrete peak (artefact) remains within the sliding window and the phase becomes smoother. This process not only removes the artefacts but also helps enhance the signal-to-noise ratio (SNR) [19–21].
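A compact sketch of the iterative sliding-window correction described above; the window size, threshold, and iteration cap are illustrative parameters, and the local mean is used as the average estimate.

```python
import numpy as np

def correct_unwrapped_phase(phase, window=7, threshold=1.0, max_iter=50):
    """Iteratively replace artefact pixels (local phase jumps above a threshold)
    with the mean of their sliding-window neighbourhood."""
    corrected = phase.astype(float).copy()
    half = window // 2
    for _ in range(max_iter):
        padded = np.pad(corrected, half, mode="edge")
        local_mean = np.zeros_like(corrected)
        for di in range(window):          # accumulate the window average
            for dj in range(window):
                local_mean += padded[di:di + corrected.shape[0],
                                     dj:dj + corrected.shape[1]]
        local_mean /= window * window
        diff = np.abs(corrected - local_mean)
        peaks = diff > threshold          # remaining artefact pixels
        if not peaks.any():
            break
        corrected[peaks] = local_mean[peaks]
    return corrected
```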

3 Results and Discussion

The experimental samples are captured by the CCD camera as interferogram images and processed using the conventional phase reconstruction algorithm, as shown in Fig. 6a, b, respectively. It can be observed that strong noise disrupts the phase information, causing artefacts around the defects' locations. Figure 6c shows the unwrapped phase corrected with the proposed procedure, and its 3D phase image is depicted in Fig. 6d. From this result, we can see the importance of implementing the proper correction procedure when strong noise exists in the DPM system; using the conventional phase reconstruction algorithm alone leads to an undesirable outcome.

4 Conclusions and Future Works

A pre-correction procedure for phase reconstruction in a common-path interferometer system for quantitative phase imaging is presented. The reconstructed phase image is corrected and provides more accurate 3D morphological information of the test sample. Further investigation on many more samples will be conducted to improve the robustness of the developed procedure.

Acknowledgements This study was supported by the Agency for Science, Technology and Research (A*STAR) of Singapore (Grant No. 182-243-0030).


References
1. Chowdhury S, Izatt J (2014) Structured illumination diffraction phase microscopy for broadband, subdiffraction resolution quantitative phase imaging. Opt Lett
2. Zhou R et al (2015) Semiconductor defect metrology using laser-based quantitative phase imaging. Proc SPIE
3. Pham H et al (2011) Off-axis quantitative phase imaging processing using CUDA: toward real-time applications. Biomed Opt Express
4. Bhaduri B et al (2014) Diffraction phase microscopy: principles and applications in materials and life sciences. Adv Opt Photonics
5. Bhaduri B, Popescu G (2012) Derivative method for phase retrieval in off-axis quantitative phase imaging. Opt Lett
6. Ikeda T et al (2005) Hilbert phase microscopy for investigating fast dynamics in transparent systems. Opt Lett
7. Takeda M, Ina H, Kobayashi S (1982) Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. J Opt Soc Am
8. Kemao Q (2004) Windowed Fourier transform for fringe pattern analysis. Appl Opt
9. Trusiak M et al (2016) Single and two-shot quantitative phase imaging using Hilbert-Huang Transform based fringe pattern analysis. Proc SPIE
10. Herráez M (2002) Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path. Appl Opt
11. Goldstein R et al (1988) Satellite radar interferometry: two-dimensional phase unwrapping. Radio Sci
12. Costantini M (1998) A novel phase unwrapping method based on network programming. IEEE Trans Geosci Remote Sens
13. Pritt MD, Shipman JS (1994) Least-squares two-dimensional phase unwrapping using FFT's. IEEE Trans Geosci Remote Sens
14. Ghiglia DC, Romero LA (1996) Minimum Lp-norm two-dimensional phase unwrapping. J Opt Soc Am A
15. Huang HYH et al (2012) Path-independent phase unwrapping using phase gradient and total-variation (TV) denoising. Opt Express
16. Flynn TJ (1997) Two-dimensional phase unwrapping with minimum weighted discontinuity. J Opt Soc Am A
17. Cheng C-Y, Hsieh C-L (2017) Background estimation and correction for high-precision localization microscopy. ACS Photonics
18. Zhou R et al (2013) Detecting 20 nm wide defects in large area nanopatterns using optical interferometric microscopy. Nano Lett
19. Katkovnik V et al (2008) Phase local approximation (PhaseLa) technique for phase unwrap from noisy data. IEEE Trans Image Process
20. Ghiglia DC, Romero LA (1994) Robust two-dimensional weighted and unweighted phase unwrapping that uses fast transforms and iterative methods. J Opt Soc Am A
21. Goldstein G, Creath K (2014) Quantitative phase microscopy: how to make phase data meaningful. Proc SPIE Int Soc Opt Eng

Study on Fatigue Damage of Automotive Aluminum Alloy Sheet Based on CT Scanning Ya li Yang, Hao Chen, Jie Shen, and Yongfang Li

Abstract Fatigue damage is the damage accumulation process under repetitive cyclic load, which is accompanied by degradation of material properties. It is of great significance to study the fatigue life of aluminum alloys in engineering applications. The damage characteristics of 6061 aluminum alloy are studied by three-dimensional reconstruction and the finite element method to estimate low-cycle fatigue damage based on X-ray computed tomography. The results show that the estimated damage variables are consistent with those measured experimentally. Moreover, calculating the material damage by combining the three-dimensional reconstruction method with finite element analysis can meet engineering requirements.

Keywords Damage characteristics · Fatigue test · X-ray computer tomography · Finite element analysis

1 Introduction

Fatigue failure is one of the most common failure modes of structures and is caused by gradual material degradation under cyclic loading. Material failure originates from microvoids that grow gradually and finally form macro-scale cracks [1–3]. Such damage evolution in components spans the meso-scale and the macro-scale. Therefore, in order to track the fatigue life along the whole evolution of damage, it is necessary to investigate the relationship between damage variables at different scales and to establish an approach to describe damage, thereby forming a multiscale correlation method to predict fatigue life. The theoretical research of macroscopic damage mechanics emphasizes the influence of damage on the macroscopic mechanical behavior of materials [4–7]; however,

Y. Yang (B) · H. Chen · Y. Li
School of Mechanical and Automotive Engineering, Shanghai University of Engineering Science, Long Teng Road 333, Shanghai, China
e-mail: [email protected]
J. Shen
Computer Science Department, University of Michigan, Dearborn, USA
© Springer Nature Singapore Pte Ltd. 2021
L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_49


the process of microscopic physical changes during damage evolution is not observed. Microscopic damage mechanics uses an averaging method to relate microscopic damage parameters to material damage and deformation through a combination of damage primitives (microcracks, microvoids, shear zones) [8–10]. To study the damage process of a structure, the problem cannot be solved simply from a macroscopic or microscopic view alone. Huang presented a model with spring elements to connect the damage evolution process at the mechanism level to material and component design at the continuum level [11]. Shelke extracted lower-scale features from the macro-scale wave signal using nonlocal elasticity theory to show the progressive evolution of damage [12]. In recent years, efforts have been made to study the effect of microstructural evolution on fatigue behavior with advanced nondestructive testing instruments. High-resolution X-ray tomography has been used to characterize fatigue mechanisms [13–16]. However, few authors have studied the evolution of the degradation mechanism from mesoscopic defects to macroscopic damage. Therefore, it makes sense to establish a theoretical method for multiscale damage evolution and the failure process in engineering practice. The objective of this study is to propose a multiscale correlation method to predict the fatigue life of aluminum alloy 6061. In this method, mesoscopic defects are visualized by X-ray computed tomography (CT), and macroscopic damage is acquired through finite element simulation based on three-dimensional reconstruction from the CT data. An integral system of fatigue life prediction is obtained. The validity of the present method is shown by comparing its results with experimental results.

2 Identification of Mesoscopic Defects

2.1 Material and Specimen

The composition of the material AL6061 is shown in Table 1. The specimen was prepared in accordance with ASTM E8/E8M-15a and combined with the requirements of the fatigue tester. The specimen thickness is 2 mm. The specimen shape is shown in Fig. 1.

Table 1 Property of aluminum alloy 6061

Chemical composition  Cu    Si   Fe   Mn    Mg    Zn    Cr    Ti    Other  AL
Scale factor          0.25  0.6  0.7  0.15  0.85  0.25  0.16  0.15  0.15   Margin


Fig. 1 Fatigue specimen geometry

2.2 Fatigue Test and Nondestructive Detection

To investigate the damage degradation of AL6061 under cyclic loading, an MTS810 material testing machine was used for the experiment (Fig. 2). For the low-cycle fatigue tests, a suitable fatigue load was first sought that gives a fatigue life of slightly over 20,000 cycles. The value of 3500 N was

Fig. 2 Material testing system (MTS) and deformation measurement system


Fig. 3 X-ray computed tomography system

obtained by using a large number of specimens subjected to different magnitudes of fatigue loading. Then, three repeated tests were conducted to rule out chance results. To observe the degradation of the microstructure of the specimen in each sub-cycle, a high-energy, high-resolution micro-focus X-ray computed tomography (XCT) system was used to measure all the voids and cracks in the material test specimens (Fig. 3). Radiographic images were acquired at a spatial resolution of 5 µm.

3 Results and Discussion

Three-dimensional reconstruction was conducted in the VG Studio system based on the two-dimensional initial data from XCT, and the FEA model was then generated in the ABAQUS software, as shown in Fig. 4. From the finite element analysis, the elastic strain energy, plastic strain energy, and residual strain energy can be output to characterize the damage degradation. In addition, the Young's modulus of the specimen can also be obtained. Figure 5 shows that there is a linear relationship between the effective Young's modulus and the fatigue damage, whereas the residual strain energy has no obvious relationship with the fatigue damage. This means that the effective Young's modulus is more suitable for depicting the fatigue damage evolution. Figure 6 shows the comparison between the simulated effective Young's modulus and the experimentally measured modulus. The result indicates that the degradation trend of the effective Young's modulus simulated by FEM is consistent with the experimental elastic modulus, and the simulation of Young's modulus based on three-dimensional reconstruction from CT images could be a promising method for describing fatigue damage evolution and estimating residual life.
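As an illustration of how a scalar damage indicator can be derived from the stiffness degradation discussed above, the sketch below uses the common continuum-damage definition D = 1 - E_eff / E0; this definition and the modulus values are assumptions for demonstration, not data from the paper.

```python
import numpy as np

def damage_from_modulus(e_effective, e_initial):
    """Scalar damage variable from stiffness degradation, D = 1 - E_eff / E0."""
    return 1.0 - np.asarray(e_effective, dtype=float) / e_initial

# hypothetical effective moduli (GPa) at increasing cycle counts
cycles = np.array([0, 5000, 10000, 15000, 20000])
e_eff = np.array([68.9, 67.1, 65.4, 63.6, 61.9])
damage = damage_from_modulus(e_eff, e_eff[0])
slope, intercept = np.polyfit(cycles, damage, 1)  # linear trend versus cycles
```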


Fig. 4 The damage model of specimen in ABAQUS

Fig. 5 The change of damage parameters in numerical computation: a the change of effective Young's modulus, b the change of residual strain energy

4 Conclusions

A multiscale methodology for fatigue life prediction of aluminum alloy was developed. In this technique, the effective Young's modulus, as a macroscopic material degradation parameter, is acquired through finite element simulation based on three-dimensional reconstruction of mesoscopic defects from X-ray computed tomography


Fig. 6 The change of damage parameters in numerical computation

(CT). The elastic modulus decreases linearly as the number of fatigue cycles increases. The three-dimensional reconstruction method combined with the finite element analysis method to calculate the material damage can meet the engineering requirements.

Acknowledgements This research was funded by the Natural Science Foundation of Shanghai (18ZR1416500) and the Development Fund for Shanghai Talents, and the "Shuguang Program" supported by Shanghai Education Development Foundation and Shanghai Municipal Education Commission.

References
1. Liu YB et al (2010) Advanced characterization of pores and fractures in coals by nuclear magnetic resonance and X-ray computed tomography. Sci China (Earth Sci) 53(6):854–862
2. Jordon JB et al (2010) Microstructural inclusion influence on fatigue of a cast A356 aluminum alloy. Metall Mater Trans A 41:356
3. Mu P et al (2014) Influence of casting defects on the fatigue behavior of cast aluminum AS7G06-T6. Int J Fatigue 63:97–109
4. Murakami S, Ohno N (1980) A continuum theory of creep and creep damage. Creep in structures, Leicester, pp 422–444
5. Talreja R (1985) A continuum mechanics characterization of damage in composite materials. R Soc 399(1817):195–210
6. Chaboche JL (1988) Continuum damage mechanics: part I-general concepts. J Appl Mech 55(1):59–64
7. Chow CL, Chen XF (1992) An anisotropic model of damage mechanics based on endochronic theory of plasticity. Int J Fract 55(2):115–130
8. Gurson AL (1977) Continuum theory of ductile rupture by void nucleation and growth: part I-yield criteria and flow rules for porous ductile media. J Eng Mater Technol 99(1):2–15
9. Ju JW, Lee X (1991) Micromechanical damage models for brittle solids. Part I: tensile loadings. J Eng Mech 117(7):1495–1514
10. Beeker W, Gross D (1988) A two-dimensional micromechanical model of anisotropic elastic-microplastic damage evolution. Ingenieur-Archiv 58(4):295–304
11. Huang Y, Gong XY, Suo Z (1997) A model of evolving damage bands in materials. Int J Solids Struct 34(30):3941–3951


12. Shelke A, Banerjee S (2011) Multi-scale damage state estimation in composites using nonlocal elastic kernel: an experimental validation. Int J Solids Struct 48(7–8):1219–1228
13. Marrow TJ, Buffiere JY (2004) High resolution X-ray tomography of short fatigue crack nucleation in austempered ductile cast iron. Int J Fatigue 26(7):717–725
14. Zhang H, Toda H (2007) Three-dimensional visualization of the interaction between fatigue crack and micropores in an aluminum alloy using synchrotron X-ray microtomography. Metall Mater Trans 38(8):1774–1785
15. Padilla E et al (2012) Quantifying the effect of porosity on the evolution of deformation and damage in Sn-based solder joints by X-ray microtomography and microstructure-based finite element modeling. Acta Mater 60(9):4017–4026
16. Nicoletto G, Konecna R, Fintova S (2012) Characterization of microshrinkage casting defects of Al-Si alloys by X-ray computed tomography and metallography. Int J Fatigue 41:39–46

IoT-Web-Based Integrated Wireless Sensory Framework for Non-destructive Monitoring and Evaluation of On-Site Concrete Conditions A. O. Olaniyi, T. Watanabe, K. Umeda, and C. Hashimoto

Abstract This research combines an IoT-web-based application with real-time monitoring of concrete conditions to evaluate the maturity of concrete samples from the fresh state, predict the strength of concrete among other properties, and provide remote access to the outputs in real time. The predicted strength of concrete was obtained using various maturity functions based on the temperature-time history of the concrete samples. Fresh concrete was prepared with two cement types, ordinary Portland cement (OPC) and blast furnace slag (BB) cement, at a water-cement ratio of 50%. Immediately after the fresh concrete samples were cast into molds and well vibrated, monitoring of the internal average temperature commenced; the specimens were kept under controlled curing conditions over a 28-day period, and the temperature data were logged directly into the database by the IoT sensors via a mobile internet hotspot connection. Graphical outputs of the temperature, concrete maturity and strength data for both concrete types were monitored remotely, presented on the programmed web-based application, and made accessible for download on any internet-connected device.

Keywords Concrete · Temperature · Maturity · Internet-of-things · Wireless · Strength

1 Introduction

Valuable information on the existing conditions of concrete structural components is enabled by continuous data-driven structural health monitoring (SHM) techniques, and this assists concrete professionals in making timely construction management decisions on and off site. However, conventional approaches can be tedious and are often time-consuming, especially when a large amount of SHM data has to be processed and presented as easy-to-understand, user-friendly outputs [1]. The proliferation of the

A. O. Olaniyi · T. Watanabe (B) · K. Umeda · C. Hashimoto
Civil Engineering Department, Tokushima University, Josanjima Campus, Tokushima City, Japan
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021
L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_50


internet of things (IoT) micro-controllers as well as cloud-based data storage platforms has greatly improved the practice of SHM over the past few decades. The temperature-time history of concrete, otherwise known as the maturity method, has been used to reliably predict the compressive strength of concrete since around 1950, when there were several cases of building collapse due to uninformed decisions about the early removal of formwork and props on site [2]. Extensive research from that time produced maturity functions and various equipment, several of which require proximity to the site, are quite bulky to use, and are sometimes expensive. Remotely monitoring the real-time strength condition of concrete with simple wireless techniques is therefore desirable. This project utilized IoT-based embedded temperature sensors and a web-based application to evaluate the real-time conditions of concrete: the maturity of concrete samples from the fresh state and the predicted strength of concrete, among other properties, with remote access to the outputs in real time. Graphical outputs of temperature, concrete maturity and strength data for two concrete types were evaluated and made accessible for download on any internet-connected device. Monitoring the real-time strength of concrete with simple techniques such as IoT-based sensors for on-site concrete properties is very relevant nowadays.

2 Research Background and Objectives

2.1 Research Objectives

• To develop simple IoT sensors and a web-based application that allow users to monitor concrete conditions (temperature, maturity and strength) remotely and in real time.
• To vary materials and curing conditions in order to understand how the properties of concrete are affected.

2.2 Maturity Concept of Concrete

Maturity of concrete is a quality control mechanism that predicts the strength of in-situ concrete under varying temperature conditions. According to ASTM, the maturity method can be defined as "a system for estimating concrete strength that is based on the assumption that if samples of a certain concrete mixture reach the same maturity index, such samples will attain equal strength" [3]. One of the earliest researchers of the maturity technique, A. G. Saul, defined maturity in very simple terms: "concrete of the same mix proportion at equal maturity has almost the same strength, at whatever combination of temperature and time to make up that maturity"


Fig. 1 Illustration of strength-maturity relationship [5]

[4]. This implies that there is a unique relationship between the maturity index and strength of a concrete mixture. If two samples of a concrete mixture have the same maturity, they will have the same strength even though they may have been exposed to different curing times and temperatures. At lower temperature, concrete samples take more time to reach maturity M1, whereas concrete exposed to higher temperature reaches maturity M2 in less time (Fig. 1). However, if M1 = M2, these two concrete mixes will have approximately the same strength, even though their curing times and temperatures are different. The effects of time and average temperature on concrete strength gain are quantified through maturity equations. Evaluating the strength of concrete at its early age is not only important for smart structural components but also in constructing structurally sound concrete structures faster [5]. The Nurse-Saul maturity function is one of the widely used models to monitor the temperature-time history of concrete:

M = \sum_{0}^{t} (T - T_0)\,\Delta t    (1)

where: M = maturity index (°C-h or °C-day); T = average temperature of the concrete (°C); T_0 = the datum temperature (often taken as 0 °C or −10 °C); Δt = time interval (in days or hours). The datum temperature employed in this equation depends on the type of cement used to prepare the concrete sample and the curing temperature range to which the sample will be subjected. For example, for concrete prepared with Ordinary Portland Cement (OPC) and exposed to a curing temperature range of 0–40 °C, a datum temperature of 0 °C or −10 °C is applied [5]. Based on (1), the equivalent age can be mathematically expressed as:


t_{en} = \frac{\sum_{0}^{t} (T - T_0)\,\Delta t}{T_r - T_0}    (2)

where: t_{en} = equivalent age of the concrete (days or hours); T_r = the reference temperature (°C). Equation (2) was, however, found to represent a linear relationship, which may not be valid for concrete samples at elevated temperatures above 60 °C. A new function was developed in 1977 by Freiesleben and Pedersen, based on the Arrhenius equation, which accounts for the effect of temperature on the rate of a chemical reaction [2]:

t_{ea} = \sum_{0}^{t} e^{-\frac{E}{R}\left(\frac{1}{T} - \frac{1}{T_r}\right)}\,\Delta t    (3)

where: t ea = equivalent age at ref. temp. (days/h). E = activation energy, J/mol. R = universal gas constant, 8.314 J/mol K.
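A minimal sketch of how Eqs. (1) and (3) can be evaluated from a logged temperature-time series; the datum temperature, activation energy, and reference temperature below are illustrative assumptions rather than values prescribed by the paper.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def nurse_saul_maturity(temps_c, dt_hours, datum_c=0.0):
    """Maturity index of Eq. (1), in degC-h, from logged temperatures."""
    return float(np.sum((np.asarray(temps_c) - datum_c) * dt_hours))

def arrhenius_equivalent_age(temps_c, dt_hours, e_act=33500.0, ref_c=20.0):
    """Equivalent age of Eq. (3), in hours, at the reference temperature.
    Temperatures are converted to kelvin; E is an assumed activation energy."""
    t_k = np.asarray(temps_c) + 273.15
    tr_k = ref_c + 273.15
    return float(np.sum(np.exp(-e_act / R * (1.0 / t_k - 1.0 / tr_k)) * dt_hours))
```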

2.3 Internet of Things (IoT) in Construction Practices

IoT involves the use of intelligently connected devices and systems to leverage data from embedded sensors and actuators in machines [6]. It can be stated that the large-scale collection and processing of data from sensors in civil structures is approaching an advanced stage. The application of embedded sensors within civil infrastructure is no longer in its infancy in civil engineering. Numerous instances of integrated physical/chemical sensors are being deployed in structural management, energy efficiency, or transportation. Nonetheless, embedded sensors in civil infrastructure are still far from ubiquitous, and their deployment within structures is limited to specific cases involving extensive experience with SHM techniques [7]. The application of temperature sensors in this project is another step in the utilisation of such devices for the many possibilities in the field of civil engineering, especially where remote, real-time knowledge of the condition of infrastructure is desired.


3 Research Methodology

3.1 Concrete Specimen Preparation

According to the mix design provisions in the Japanese Industrial Standard JIS A1101, and based on two cement types, OPC and blast furnace slag (BB) cement, two types of concrete mix were prepared in the laboratory [8]. The fine and coarse aggregates used for the experiment were washed thoroughly to remove dirt particles and air-dried to reduce the surface water content after washing. For each concrete type, trial mixes were carried out to determine the required properties of the fresh concrete, i.e., slump (SLP), air content and concrete temperature (CT) at the time of casting, in accordance with the JIS A1101 standard. Table 1 shows the mix proportions used in this experiment for the two types of cement. A total of thirty-eight cylindrical concrete specimens were cast and cured at 20 °C (19 samples for each concrete type) [3]. To determine the compressive strengths, fifteen samples of each type were subjected to uniaxial compressive strength tests at ages of 1, 3, 7, 14 and 28 days. The concrete cylinders were kept in their moulds and sealed with polythene bags to prevent loss of moisture to the atmosphere (Fig. 2).

3.2 Maturity Functions

Equation (4) is a derivative of (1) and can be used to evaluate the concrete strength F_c of concrete samples with maturity M, where a is the slope and b the intercept of the equation [9].

F_c = a \log M + b    (4)

(F_c)_t = \frac{t}{\alpha + \beta t}\,(f_c')_{28}    (5)

Substituting the curing age t with the equivalent age t_{ea} from (3), the modified ACI model shown in (5) was used to evaluate the strength of concrete. The 28-day strength (f_c')_{28} and the strength development factors α and β, taken as the constants 4.0 and 0.85, respectively, were applied in this study [10].
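A short, hedged example of evaluating the modified ACI model of Eq. (5) once an equivalent age is available; the 28-day strength used below is only a placeholder value.

```python
def predicted_strength(equivalent_age_days, fc28, alpha=4.0, beta=0.85):
    """Modified ACI model of Eq. (5): predicted strength at a given equivalent age."""
    t = equivalent_age_days
    return t / (alpha + beta * t) * fc28

# example with an assumed 28-day strength of 41 N/mm2
print(predicted_strength(14, 41.0))   # strength after an equivalent age of 14 days
print(predicted_strength(28, 41.0))   # roughly the full 28-day strength
```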

3.3 Temperature Sensors and IoT Micro-controllers The DS18b20 waterproof temperature probes and WiFi-enabled ESP32 Arduino micro-controllers were used in this experiment to measure the temperature of

Table 1 Concrete mix proportion

Type  Cement type  W/C (%)  S/A  W (kg)  C (kg/m3)  S (kg/m3)  G1 (kg/m3)  G2 (kg/m3)  SLP   Air  CT
I     OPC          50       42   7.0     14.0       28.68      23.76       15.76       12.3  4.0  20.6
II    BB           50       42   7.0     14.0       28.68      23.76       15.76       12.5  7.2  20.8



Fig. 2 Measurement of the temperature data of concrete cylindrical specimen in the laboratory

concrete samples, and the temperature data were logged in real time to the database. The Arduino integrated development environment (IDE) was used to write Arduino-based code for the ESP32 board, and the Application Programming Interface (API) of the database was provided within the code so that each sensor could log its temperature readings at intervals. The program was written such that the temperature data of each concrete specimen were logged at intervals of thirty (30) minutes for the first forty-eight (48) hours, and subsequent data were logged at intervals of one hour for the total period of the maturity test (i.e., 28 days). The SSID and password of a mobile hotspot (provided by a dedicated portable WiFi modem) were also written within the Arduino code, which enables the sensors to transmit their temperature data wirelessly to the database via the internet. To confirm the accuracy of the designed sensors, the temperatures of two extra concrete samples of each type were simultaneously monitored using the Keyence data-logging device (Fig. 3).
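The firmware itself was written in the Arduino IDE for the ESP32; purely as an illustration of the logging schedule and upload logic described above, the Python sketch below mimics the same behaviour against a hypothetical REST endpoint (the URL, sensor name, and payload fields are assumptions, not the project's actual API).

```python
import time
import requests  # hypothetical REST endpoint; the actual database API is not specified

API_URL = "https://example.com/api/temperature"  # placeholder endpoint
SENSOR_ID = "probe-01"                           # placeholder sensor name

def log_temperature(read_temp, total_days=28):
    """Log every 30 min for the first 48 h, then hourly until day 28."""
    start = time.time()
    while time.time() - start < total_days * 86400:
        elapsed_h = (time.time() - start) / 3600
        interval = 1800 if elapsed_h < 48 else 3600  # seconds between readings
        payload = {"sensor": SENSOR_ID,
                   "temp_c": read_temp(),
                   "elapsed_h": round(elapsed_h, 2)}
        requests.post(API_URL, json=payload, timeout=10)
        time.sleep(interval)
```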

Fig. 3 Keyence data-logging device and the developed wireless sensor schematic diagram

Fig. 4 Temperature data of both types of concrete specimen (temperature (°C) versus age (hour) for Type-I and Type-II)


3.4 Web Application Development A free and open source Python programming language-based module known as Django was used to develop the web-based application that allows users to view concrete maturity and strength results in real-time, irrespective of their location. Django is a high-level Python Web framework that enables swift development and pragmatic web-based application design. This highly sophisticated but easy-to-use framework takes care of much of the hassle of Web development, so that the engineer can focus on writing his conceptualised application without the need to reinvent the wheel [11].
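As a rough sketch of what such a Django back end might look like, the snippet below defines a model for temperature readings and a JSON view for retrieving one specimen's history; all names and fields are illustrative, since the paper does not describe its data model.

```python
# models.py / views.py sketch (illustrative only)
from django.db import models
from django.http import JsonResponse

class TemperatureReading(models.Model):
    sensor_id = models.CharField(max_length=32)
    specimen = models.CharField(max_length=32)      # e.g. "Type-I" or "Type-II"
    temperature_c = models.FloatField()
    logged_at = models.DateTimeField(auto_now_add=True)

def readings_json(request, specimen):
    """Return the logged temperature history of one specimen as JSON."""
    rows = (TemperatureReading.objects
            .filter(specimen=specimen)
            .order_by("logged_at")
            .values("logged_at", "temperature_c"))
    return JsonResponse(list(rows), safe=False)
```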

4 Results and Discussion

4.1 Concrete Temperature and Maturity Index

Both the data logger and the IoT device gave satisfactory results for the temperature-time history of each of the concrete samples, as shown in Fig. 4. The temperature of both concrete specimens increased significantly within the first 24 h and then returned to a nearly constant value just below the peak of the temperature rise. The temperature measurement procedure lasted for twenty-eight (28) days from the fresh state (Fig. 5).

4.2 Concrete Compressive Strength by Uniaxial and Maturity Methods

The compressive strengths of both concrete mixes were evaluated via the conventional uniaxial compressive strength test and the proposed maturity technique. The 28-day predicted strength of concrete type-I was 41.15 N/mm2, approximately 100.25% of the uniaxial compressive strength of 41.05 N/mm2 (Fig. 6). While for

Fig. 5 Maturity data of concrete types I and II specimens

Fig. 6 Compressive strength (N/mm2) versus maturity (°C-hr) of Type-I (OPC) and Type-II (BB) concrete specimens

Fig. 7 Uniaxial compressive and predicted strength of type-I and type-II concrete versus age (days)


concrete type-II specimen, the strength result showed approximately 99.75% of the uniaxial (28.69 N/mm2 ) strength; 28.63 N/mm2 (Fig. 7). The uniaxial result had approximately 100% of the design strength for both mixes. Uniaxial Type-I



4.3 Conclusion and Recommendations

The aim of this research was to utilize IoT micro-controllers and temperature probes, combined with computer programs based on the maturity method, to monitor and evaluate the properties of concrete samples. Two types of concrete specimen, based on cement type, were prepared and their temperature, maturity and compressive strength were evaluated. A web-based application was developed, and it provided a remote platform that enabled access to the various data on the concrete conditions in an interactive graphical format. This technique is applicable on-site, and several improvements are being worked on. The proposed technique offers solutions to some of the problems associated with conventional methods, which often require proximity to the construction site. It also supports informed decisions about the safe early removal of formwork, where required. Without having to worry about clusters of wires and other obstructions on-site, this solution can thus be regarded as a user-friendly and reliably safe technique for real-time condition monitoring of concrete structures.

References
1. Boddupalli C, Sadhu A, Azar ER, Pattyson S (2019) Improved visualization of infrastructure monitoring data using building information modeling. Struct Infrastruct Eng Maint Manage Life-Cycle Des Perform 15(9)
2. Topçu İB, Karakurt C (2018) Strength-maturity relations of concrete for different cement types. In: Fırat S, Kinuthia J, Abu-Tair A (eds) Proceedings of 3rd international sustainable buildings symposium (ISBS 2017). Lecture Notes in Civil Engineering, vol 6. Springer, Cham
3. American Society for Testing and Materials (ASTM) (2004) Standard practice for estimating concrete strength by the maturity method, ASTM C 1074-04. American Society for Testing and Materials, Philadelphia, PA
4. Anderson KW, Uhlmeyer JS, Kinne C, Pierce LM, Muench S (2009) Use of the maturity method in accelerated PCCP construction (Report No. WA-RD 698.1). Washington State Department of Transportation, Washington, DC
5. Kaburu A, Kaluli J, Kabubo C (2015) Application of the maturity method for concrete quality control. J Sustain Res Eng 2(4):127–138
6. Wu W et al (2015) Improving data center energy efficiency using a cyber-physical systems approach: integration of building information modeling and wireless sensor networks. Procedia Eng 118:1266–1273
7. Chacón R, Posada H, Toledo Á, Gouveia M (2018) Development of IoT applications in civil engineering classrooms using mobile devices. Comput Appl Eng Educ 26:1769–1781. https://doi.org/10.1002/cae.21985
8. Japan Industrial Standard (JIS A1101) (2018) Standard specifications for concrete structures 2018, 'Design' 2018 Edition
9. Eichi T, Asuo Y, Tetsuroo K, Isao U, Hideki O, Chikanori H (2002) Concrete engineering handbook. ISBN 978-4-254-26476-0. C 3351, p 77
10. Dan BE, Domingo CJ (1997) Prediction of creep, shrinkage, and temperature effects in concrete structures. ACI Committee Report 209R-92
11. 'DjangoProject' Documentation, 10 Oct 2018, https://www.djangoproject.com/

A Study of Atmospheric Corrosive Environment in the North East Line Tunnel L. T. Tan, S. M. Goh, H. Y. Gan, and Y. L. Foo

Abstract This project was conducted to study the effect of the tunnel atmosphere on uniform corrosion of metals. Metal coupons and environment monitoring systems (ECMs) were deployed at buffer areas of different stations along the North East Line to measure the rate of corrosion. Concurrently, temperature and relative humidity were also measured through the ECM. Results from one year of monitoring are presented in this paper. Keywords Corrosion · Train tunnel · Atmosphere · Metal · Sensor · Exposure

1 Introduction

Corrosion refers to the degradation of a material as a result of its chemical or electrochemical reaction with the environment. For corrosion of metals to take place, the following components must be present: an anode and a cathode on the metal surface in contact with an electrolyte (Fig. 1) [1]. General (or uniform) corrosion refers to corrosion that occurs uniformly across a surface area. For atmospheric corrosion, the general corrosion rate of a material depends on the corrosivity of the environment, which may be categorized into different levels. The ISO 9223 standard (2012) [2] identifies six categories of corrosivity for outdoor environments, ranging from C1 to C5, followed by the highest corrosivity category, CX (Table 1). The corrosivity levels can be determined directly through mass loss measurements of standard carbon steel, copper, zinc and aluminum coupons that have undergone corrosion for one year, or indirectly through environmental parameters consisting of relative humidity, temperature, SO2 deposition and Cl− deposition. Separately, ANSI/ISA Standard 71.04 (2013) [3] has developed four corrosivity levels specifically for the electronics industry (Table 2). The lowest severity level is G1, where corrosion is unlikely to be a factor in determining equipment reliability (ANSI/ISA 2013).

L. T. Tan (B) · S. M. Goh · H. Y. Gan · Y. L. Foo
Singapore Institute of Technology, 10, Dover Dr, Singapore, Singapore
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021
L. Gelman et al. (eds.), Advances in Condition Monitoring and Structural Health Monitoring, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-15-9199-0_51


Fig. 1 Four components for an electrochemical corrosion cell [1]

Table 1 Categories of corrosivity of outdoor atmosphere according to ISO 9223 (2012)

Category  Corrosivity
C1        Very low
C2        Low
C3        Medium
C4        High
C5        Very high
CX        Extreme

Table 2 Severity levels of corrosivity according to ANSI/ISA 71.04 (2013) standards. The overall ISA severity level is based on the higher of the two corrosion rates of copper and silver

Severity level                            G1 mild    G2 moderate    G3 harsh    GX severe
Copper reactivity level (in angstroms)*