Signal and Information Processing, Networking and Computers: Proceedings of the 8th International Conference on Signal and Information Processing, Networking and Computers (ICSINC)
ISBN: 9811933863, 9789811933868

This book collects selected papers from the 8th International Conference on Signal and Information Processing, Networking and Computers (ICSINC).


English · Pages 1533 [1534] · Year 2022

Table of contents:
Preface
Organization
Committee Members
International Steering Committee
General Chairs
Technical Program Committee Chairs
Publication Chairs
General Secretaries
Sponsors
Contents
Space Technology I
Averaged Equation of Satellite Relative Motion in an Elliptic Orbit
Abstract
1 Introduction
2 Relative Motions in Two-Body Orbit
2.1 Relative Motion Model and Analytical Equation
2.2 The Averaged Equation of Relative Motion
3 Averaged Equation of Relative Motion with Orbital Perturbation
4 Simulation
4.1 Simulation in Two-Body Orbit
4.2 Simulation Considering the Effect of J2 Perturbation and Atmosphere Perturbation
5 Conclusions
References
HDR Fusion Algorithm Based on PCNN and Overexposure Correction
Abstract
1 Introduction
2 Segmentation of Exposure Region
3 Solution of the Mapping Function
4 The Fusion Algorithm Based on Overexposure Correction
5 Experiment Results
6 Conclusion
References
A Dynamic Range Analysis of Remote Sensor Based on PcModWin
Abstract
1 Introduction
2 An Atmospheric Radiation Transfer Software, PcModWin
3 Radiative Transport Model of Optical Remote Sensor for Earth Observation
3.1 $L_{\lambda}^{su}$ Direct Reflected Radiation
3.2 $L_{\lambda}^{sd}$ Skylight
3.3 $L_{\lambda}^{sp}$ Haze Light
3.4 Total Solar Radiation $L_{\lambda}^{S}$ Received by Spatial Optical Remote Sensors
3.5 A Radiation Conversion Model of Optical Remote Sensor
4 Output Radiance Range of Integrating Sphere Light Source Calculated by PcModWin
5 Conclusion
References
The Analog Variable Observation Method for S3MPR Controller
Abstract
1 Introduction
2 The S3MPR Power Topology
3 The Variable Observation Method
4 Circuit Design for the MPPT Controller
4.1 The Analog Power Sampling Circuit
4.2 The Variable Comparator and the Reference Generator
5 Simulation Analysis
6 Conclusions
References
Analysis and Design of Aerospace Flight Control Knowledge Base
Abstract
1 Introduction
2 Aerospace Flight Control Knowledge Representation
2.1 Knowledge Representation
2.2 Knowledge Base Level
3 Standardization of Aerospace Flight Control Knowledge
4 Construction of Aerospace Flight Control Knowledge Base
4.1 Aerospace Flight Control Knowledge Construction Process
4.2 Aerospace Flight Control Knowledge Base Management
4.3 Analysis of Typical Examples of Aerospace Flight Control Knowledge Base
5 Conclusion
References
Study on Planning Strategy of TT&C Resource Scheduling Based on Data Mining
Abstract
1 Introduction
2 Type of Resource Scheduling Data
2.1 Input Data
2.2 Output Data
3 Requirements Planning of Resource
4 Planning Strategy and Implementation
4.1 Periodical Resource Requirements Forecasting
4.2 Dynamic Activation and Recovery of Peak Resources
5 Verification and Summary
5.1 Periodical Resource Requirements Forecasting
5.2 Dynamic Activation and Recovery of Peak Resources
6 Conclusion
References
Research on Product Assurance Parallel Work of Spacecraft Control System
Abstract
1 Introduction
2 Research on Spacecraft Control System Product Assurance
3 Product Assurance Work Evaluation of Spacecraft Control System Developed in Parallel
4 Conclusion
References
Research on Inter-satellite Ranging Technology Based on Proximity-1 Protocol
Abstract
1 Introduction
2 Inter-satellite Ranging Principle Based on the Proximity-1 Protocol
2.1 Short Range Link Time Service
2.1.1 Time Capture Method
2.1.2 Definition of Ranging Transmission Frame Format
2.2 Principle of Incoherent Bidirectional Ranging and Time Synchronization
2.3 Error Analysis
3 Experimental Verification
3.1 Design and Composition of Ground Experiment
3.2 Experimental Results and Analysis
4 Conclusion
References
SpaceWire Network Configuration Method Based on Digraph
Abstract
1 Introduction
2 Configuration Protocol
2.1 Path Addressing and Logical Addressing
2.2 Network Configuration Method and Protocol Involved
3 Configuration Method Based on Digraph
4 Implementation
5 Conclusion
References
LDPC Encoding and Decoding Algorithm of LEO Satellite Communication Based on Big Data
Abstract
1 Introduction
2 Method
2.1 Concept of Big Data
2.2 LDPC Encoding and Decoding Algorithm for LEO Satellite Communication
3 Experiments
3.1 Extraction of Experimental Objects
3.2 Experimental Analysis
4 Discussion
4.1 The Influence of Iterations on the Performance of LDPC Codes
4.2 Theoretical Analysis on the Performance of Mobile Satellite Communication Network Edge Computing Architecture
5 Conclusion
References
LDPC Coding and Decoding Algorithm of Satellite Communication in High Dynamic Environment
Abstract
1 Introduction
2 Method
2.1 High Dynamic Environment
2.2 LDPC Coding and Decoding Algorithm for Satellite Communication
2.3 Spread Spectrum Technology
3 Experiments
3.1 Extraction of Experimental Objects
3.2 Experimental Analysis
4 Discussion
4.1 Tracking Performance of Second Order DPLL
4.2 Resource Cost Comparison of Multi Rate LDPC Encoder
5 Conclusion
References
Analysis of Optimal Orbital Capture Maneuver Strategy to LEO Spacecraft Based on Space Debris Collision
Abstract
1 Introduction
2 Constrained Conditions in Engineering
3 Calculation Model
3.1 Velocity Impulse and Relative Trace Drift Model
3.2 The Closest Distance and the Closest Approach Moment
4 Initial Orbit Capture Maneuver Strategy Planning Process
5 Simulation and Result
5.1 Simulation Parameter Setting
5.2 Safety Analysis of Intra Group Collision Before Orbital Maneuver
5.3 Orbital Maneuver Sequence Planning
5.3.1 Collision Avoidance Analysis of First Maneuver Based on Error Estimation
5.3.2 Orbital Capture Maneuver Strategy
6 Summary
References
Research on a Centralized Thrust Deployment Method for Space Satellite Swarms
Abstract
1 Introduction
2 Design of Swarm Structure Assembly
3 Mother Satellite Initialization Based on Three-Axial-Stabilization
4 Establish the Space Geometric Construction of Swarm
5 Satellite Swarm Communication Strategy
6 Conclusion
References
Research on Navigation and Communication Fusion Technology Based on Integrated PNT System
Abstract
1 Introduction
2 Current Status at Home and Abroad
2.1 International Status
2.2 Domestic Current Situation
3 Navigation and Communication Integration Architecture
3.1 Navigation and Communication Information Fusion
3.2 Navigation and Communication Signal Fusion
4 Application Prospects
5 Conclusion
References
A Solar Array on Orbit Output Power Prediction Method for Satellite
Abstract
1 Introduction
2 Satellite Orbit and Solar Array Light Incidence Analysis
2.1 Orbit Analysis
2.2 Solar Arrays Light Incidence Analysis
3 Satellite Solar Array Output Power Prediction
3.1 Solar Cell Electric Generate Theory and Model
3.2 Solar Array on Orbit Output Power Prediction
4 Conclusion
Acknowledgement
References
Conception and Prospects of Exploration and Mining Robots on Lunar Base
Abstract
1 Introduction
2 Research Status at Home and Abroad
2.1 United States
2.2 Russia
2.3 Europe
2.4 China
2.5 Summary
3 Conception of the Lunar Base Exploration and Mining Robot
3.1 Engineering Goals
3.2 Conceptual Assumptions
3.3 Key Technologies and Suggestions
4 Conclusion
References
Analysis of Orbit Determination Accuracy of GEO Satellite Navigation Data
Abstract
1 Introduction
2 Orbit Determination of GEO Satellite Navigation Signal
2.1 Weighted Least Square Method with Additional Constraints
2.2 Solution Results
3 Orbit Determination of Heterogeneous Data Fusion Based on Space and Ground-Based Observation
4 Summary
References
Variable Bandwidth Digital Channelization Algorithm and Engineering Realization
Abstract
1 Introduction
2 Fundamental
2.1 Digital Channelization Synthesis and Efficient Implementation Structure
2.2 Digital Channelization Decomposition and Efficient Implementation Structure
3 Variable Bandwidth Digital Channelization
References
A Design Framework for Complex Spacecraft Systems with Integrated Reliability Using MBSE Methodology
Abstract
1 Introduction
2 Literature Review
3 Concrete Framework
3.1 Construction of SysML Profile
3.2 Model-Based FMEA
3.3 Model-Based FTA
4 Case Study
5 Conclusion
References
Intelligent Task Management of Infrared Remote Sensing Satellites Based on Onboard Processing Feedback
Abstract
1 Introduction
2 Requirements and Constraints of Remote Sensing Satellite Mission Management
3 Intelligent Task Management of Infrared Remote Sensing Satellites Based on Onboard Processing Feedback
3.1 Intelligent Task Management Solution
3.2 Key Technologies of Intelligent Task Management
4 Application and Development
5 Conclusion
References
Fragmentation Technology of Spacecraft TT&C Resources Under Multi-constraint Condition
Abstract
1 Introduction
2 Analysis of Current TT&C Equipment Allocation
3 Design of TT&C Strategy for Multi-constraint Fragmentation Assembly
4 Simulation Test
5 Conclusion
References
Key Technology Research and On-Orbit Verification of Beidou Short Message Communication System of LEO Satellite
Abstract
1 Introduction
2 Task Requirement Analysis
2.1 On-Board Intelligent Processing and Broadcast Distribution Requirements
2.2 Satellite Measurement and Control Requirements
2.3 Requirements for Multi-satellite Collaboration Missions to Key Targets
3 Key Technology Analysis and Design
3.1 High-Precision Doppler Compensation Technology
3.2 Analysis of Time Slot Allocation and Message Communication Delay
4 On-Orbit Verification
4.1 Verification of the Forecasting Technology of the Selected Chain Satellite
4.2 Coverage Detection of Global Short Message Communication Signal
5 Conclusion
References
Summary of Artificial Neural Network Effectiveness Evaluation Technology
Abstract
1 Effectiveness Evaluation
2 Artificial Neural Network
3 Artificial Neural Network Applied to Effectiveness Evaluation
4 The Pros and Cons of BP Neural Network Effectiveness Evaluation
5 Research Direction of Artificial Neural Network Effectiveness Evaluation
References
Experiment Study on the Impact Limit of Basalt/Aramid Stuffed Whipple Shields
Abstract
1 Introduction
2 Whipple Shields Stuffed with Basalt/Aramid and Impact Test
2.1 Protective Mechanism for Stuffed Whipple Shields
2.2 Ultrahigh Velocity Impact Test
2.3 Damage Characteristic Analysis of the Shield
3 Research on Impact Limit Equation and Parameters
3.1 NASA Nextel/Kevlar Stuffed Whipple Shields Equations
3.2 Parameter Modification of Impact Limit Equation
3.3 Impact Limit Velocity Threshold Correction
4 Conclusion
References
Application of Software Remote Synchronization Mode in Aerospace Products
Abstract
1 Introduction
2 Related Work
2.1 ESB
2.2 JMS
2.3 XML
3 Software Remote Synchronization Model
3.1 Interactive Mode of Heterogeneous Database
3.2 Heterogeneous Multi Subsystem Data Interaction Model
3.3 Remote Data Synchronization Mechanism
3.3.1 Real-time Synchronization Mechanism
3.3.2 Timing Synchronization Mechanism
3.3.3 Hybrid Synchronization Mechanism
3.4 Selection of Data Synchronization Mechanism
4 The Next Step of Research
References
Pointing Precision Modification Method for the Low Orbit Satellite in Ka-Band by the Triaxial Antenna
Abstract
1 Introduction
2 Analysis of Traditional Antenna Pointing Errors
3 Analysis of Pointing Error Model of the Rotary Antenna
3.1 The Triaxial Antenna Structure
3.2 Error Item of Triaxial Antenna for Oblique Turntable
4 Pointing Correction Method for the Triaxial Antenna of Oblique Turntable
4.1 Pointing Correction Model of the Triaxial Antenna Oblique Turntable
4.2 Pointing Correction Algorithm for the Triaxial Antenna of Oblique Turntable
5 Engineering Test Verification
5.1 Test Steps
5.2 Test Results
6 Conclusion
References
Theoretical Analysis and Experimental Study on Oxidant Depletion Cutoff Method of Satellite Dual Mode Propulsion System
Abstract
1 Introduction
2 Oxidant Depletion Cutoff Design
2.1 Composition of a Satellite Dual-Mode Propulsion System
2.2 Oxidant Depletion Cutoff Strategy
2.3 Theoretical Analysis of Oxidant Depletion Criterion
3 Oxidant Depletion Cutoff Test
3.1 Test System
3.2 Manual Cutoff Test
3.3 Oxidant Depletion Cutoff Test
4 Conclusion
References
Parameter Optimization Design and Application Analysis of Water-Based Propulsion System
Abstract
1 Introduction
2 Configuration of Water-Based Propulsion System
3 Calculation Model
3.1 Mission Background
3.2 Optimization Model
4 Parameter Matching Design and Application Research
4.1 Parameter Matching Design for Common Working Conditions
4.2 Research on the Relationship Between Parameters
5 Conclusions
References
A New Tracking Measurement Method Based on QPSK Spread Spectrum Communication Signal
Abstract
1 Introduction
2 A New Tracking Measurement Method
3 Conclusions
References
Design and Application of OBDH Subsystem for Gaofen-4 Satellite
Abstract
1 Introduction
2 Overview
3 Technical Features
3.1 Different Working Modes
3.2 CCSDS Telemetry Format
3.3 Telemetry Processing Based on Layer Architecture
3.4 Second Pulse Design
3.5 Autonomous Management Function
3.6 Autonomous Layout of Load Mission Design
4 Technical Innovation Points
5 On-Orbit Application
References
Autonomous Mission Planning Software Design and Testing of the Agile Earth Observation Satellite
Abstract
1 Introduction
2 Software Requirement Analysis
2.1 User Requirement Analysis
2.2 Software Running Environment Analysis
3 Software Design
3.1 The Operating System and Underlying Hardware Layer
3.2 The Target Task Abstraction Layer
3.3 The Core Service Layer
3.4 The Application Layer
3.5 Reliability Design
4 Software Test and Verification
5 Summary
References
Research for Non-cooperative Space Objects Detection Methods Based on Image
Abstract
1 Introduction
2 Classical Object Detection Methods
2.1 Methods Based on Kinematic Characteristics
2.2 Methods Based on Morphological Characteristics
3 Improvement and Development of Object Detection Methods
3.1 Improvement
3.2 Development
4 Conclusion
References
Research on a BSDiff-Based System-Level On-Orbit Maintenance Technology of Spacecraft
Abstract
1 Introduction
2 Differences Algorithm
2.1 BSDiff
2.2 BSPatch
3 System-Level Incremental On-Orbit Maintenance
3.1 Create Telecommands
3.2 Preprocessing on the Spacecraft
3.3 System Reconstruction
4 Simulations
5 Conclusions
References
Research on the Risk Identification Index System of Software Project
Abstract
1 Overview of Software Project Risk Management
2 Current Research Status of Software Project Risk Management at Home and Abroad
3 Construction of Risk Identification Index System for Software Projects
3.1 Determine the Main Risk Factors of the Software Project
3.2 Determine the Risk Decomposition Level of the Software Project
3.3 Determine the Risk Driving Factor of the Software Project
3.4 Establish a Software Project Risk Identification System
References
Research on Time-Frequency Location Algorithm of Launch Vehicle Vibration Data
Abstract
1 Introduction
2 Fundamental
3 Rocket Vibration Time-Frequency Positioning Algorithm
4 Time-Frequency Positioning Simulation
5 Examples of Time-Frequency Positioning of Rocket Vibration Data
6 Conclusion
References
Study on Disturbing Moment Compensation for the Satellite with Large-Scale Rotating Antennas
Abstract
1 Introduction
2 Angular Momentum Estimation of Antenna Rotation
3 Feedforward Compensation Method of Antenna Rotation Angular Momentum
3.1 Interference Torque Compensation During Antenna Fly-Back
3.2 Interference Torque Compensation of Antenna in Fly-Back Position
3.3 Simulation Verification
4 Conclusion
References
Electromagnetic Interference Posed by Wireless Devices Inside Airproof Cabin of Manned Spacecraft
Abstract
1 Introduction
2 Physical Modeling of Electromagnetic Compatibility Analysis for Typical Manned Spacecraft
2.1 Analysis Method
2.2 Establishment of Simulation Model
3 Simulation Analysis and Research
3.1 Coverage Analysis of Radiation Field in Capsule
3.2 Interference Coupling Analysis Between Radiation Field and RF Equipment in the Capsule
4 Conclusion
References
Security Enhancement for Noise Aggregation in DVB-S2 Systems
Abstract
1 Introduction
2 Security and Architecture of DVB-S2
2.1 Security Requirements and Status
2.2 Physical Layer Architecture
3 Theory and Simulation of Noise Aggregation Coding
3.1 Theory of Coding
3.2 Simulation
4 Applying Noise Aggregation to DVB-S2
4.1 System Analysis and Parameter Design
4.2 Simulation
5 Conclusion
References
A Pipelined Arithmetic Architecture for the Motion Planning of Multi-DOF Manipulator
Abstract
1 Introduction
2 Basic Algorithm
3 Hardware Architecture
3.1 Floating-Point Unit
3.2 CORDIC Unit
4 Simulation and Verification
5 Conclusion
References
Information Hiding in Space Data Link Protocols Based on Spare Value of Fields in Frame Header
Abstract
1 Introduction
2 Related Works
3 Method
3.1 Method Statement
3.2 Implementation Example
4 Analysis
4.1 Transparency
4.2 Robustness
4.3 Hidden Information Percentage
5 Conclusion
References
Space Technology II
Optimal Escape Strategy with a Single Impulse in Space Attack-Defense
Abstract
1 Introduction
2 Problem Description and Hypotheses
3 The Establishment of Calculation Model
3.1 Coordinate Frames Definition
3.2 Calculation Model
4 Simulation Verification
4.1 Global Search for the Optimal Impulse Direction
4.2 Different Magnitude of Impulse Vector and Different Flight Times
4.3 Analysis of the Impulse Application at Different Positions Along the Orbit
5 Conclusion
Foundation Item
References
A Method of Using Control Chart to Monitor Software Key Performance Evolution
Abstract
1 Introduction
2 Statistical Steady State Analysis of Evolution
2.1 Definition and Principle of Control Chart
2.2 The Monitoring Requirements of Software Performance Evolution
2.3 Statistical Steady State Analysis of the First Sampling
2.4 Statistical Steady State Analysis of the Second Sampling
3 Technical Steady-State Analysis of Evolution
3.1 Analysis of Sample Data Distribution
3.2 Comprehensive Judgment of Process Capability and Process Performance
3.3 Analysis of System Reliability Index
4 Reproducibility Analysis
4.1 The Concept of Reproducibility
4.2 Reproducibility Calculation
5 Summary
References
Accurate Pointing Control Algorithm for Space Based Movable Antenna Tracking Ground Station Using GPS
Abstract
1 Introduction
2 Model
3 Design of Algorithm
3.1 Calculate the Relative Position Vector on WGS84
3.2 WGS84 to J2000
3.3 J2000 to Orbital Coordinate System
3.4 Calculate Antenna Angles
4 Test Results
5 Conclusion
References
Telemetry Data Prediction Method Based on Time Series Decomposition
Abstract
1 Introduction
2 Theory Foundation
2.1 HP Filtering
2.2 Seasonal ARIMA Model
3 Modeling Method
4 Empirical Analysis
4.1 Trend Term Prediction
4.2 Fluctuation Term Prediction
4.3 Combination Prediction and Reliability Verification
5 Conclusion
References
A Query Optimization Method for Real-Time Embedded Database Based on Minimum Arborescence
Abstract
1 Introduction
2 Structure of Query Processing
3 Query Optimization Sequence Based on Directed Graph Structure
3.1 Representation of Multi-table Dependencies in a Join Query
3.2 Constructing Multi-table Dependencies Based on Directed Graphs
3.3 Storage and Calculation of Directed Graph Weights
4 SQLite Query Optimization Algorithm Based on Minimum Arborescence
5 Experiment and Analysis
6 Conclusion
References
Design of Reconfigurable Test System for Information Interface Between Space Station Combination with Multi-aircraft
Abstract
1 Introduction
2 System Framework
3 Hardware Composition
4 Software Design
4.1 Physical Channel Selection Module
4.2 Instrument Configuration Module
4.3 Instrument Trigger Setting Module
4.4 Data Analysis Module
4.5 Main Software Design
5 System Verification
6 Conclusion
References
A Dual Model Control of Sensorless High Frequency Permanent Magnet Synchronous Motor Based on the Space Vector Pulse Width Modulation and Six-Step Mode
Abstract
1 Introduction
2 Mathematical Model of High Frequency Permanent Magnet Synchronous Motor and Drive System
2.1 Mathematical Model of High Frequency Permanent Magnet Synchronous Motor
2.2 Sinusoidal-Wave Drive System Based on Vector Control Algorithm
2.3 Square-Wave Drive System Based on Six-Step Mode
3 Back Electromotive Force Zero Crossing Point Detection and Phase Shift Compensation
4 Simulation and Analysis
4.1 Simulation Setup
4.2 Simulation Results
5 Summary
References
Development Mode of Remote Sensing Satellite OBDH Software Based on Prototype Software
Abstract
1 Introduction
2 Analysis of Traditional Development Mode
2.1 Waterfall Model
2.2 Software Components
3 Development Mode Based on Prototype Software
3.1 Definition of Prototype Software
3.2 Development Mode
3.3 Restrictions
4 Satellite Implementation and Analysis
4.1 Satellite Implementation
4.2 Workload Analysis
5 Conclusion
References
Design of Closed-Loop Tracking Mode for Remote Sensing Satellite Data Transmission Antenna
Abstract
1 Introduction
2 Closed-Loop Tracking Scheme Design
2.1 Typical Data Transmission Mode
2.2 System Composition
2.3 Control Process
3 Ground Test Verification
4 Conclusion
References
High Accuracy Orbit Control Strategy of Chang'e-4 Relay Satellite
Abstract
1 Introduction
2 Orbit Control Task Design
3 High-Precision Orbit Control Strategy
3.1 Thruster Installation Layout Optimization
3.2 Control Cycle Sampling and Quantization Error Suppression
3.3 Thruster Aftereffect Suppression
3.4 Accelerometer Zero Deviation Suppression
3.5 Summary
4 On-Orbit Control Results
5 Conclusion
References
Observation Data Quality Evaluation of GPS Receiver on Agile Platform Based on Ground Simulation
Abstract
1 Background
2 Simulations
3 Simulation Data Quality Evaluation
3.1 Pseudo Range Observation Accuracy
3.2 Carrier Phase Accuracy
3.3 Simulation Orbit Determination Analysis
4 Conclusions
References
A Station Guidance Method Based on Combination Deviation Spacecraft Orbit Prediction
Abstract
1 Introduction
2 A Station Guidance Method Based on Combination Deviation Orbit Prediction
2.1 Deviation Orbit Prediction
2.2 Deviation Orbit Prediction
2.2.1 Calculation of the Theoretical Guidance Waiting Point
2.2.2 Calculate the Guidance Data of the Deviation Ephemeris
3 Calculation Results and Analysis
4 Conclusion
References
A Distributed Real-Time Orbit Determination Algorithm Based on Asynchronous Multi-sensor
Abstract
1 Introduction
2 Real-Time Orbit Determination Problem Formulation
2.1 Models
2.2 Distributed Sensor Network
2.3 Asynchronous Sampling System
3 Asynchronous Distributed Real-Time Filtering
3.1 Information Transmitted in the Sensor Network
3.2 Distributed Fusion Filtering
3.3 Distributed Smoothing
4 Actual Data Verification
5 Conclusion
References
An Equivalent Calculation Method of VLBI Instantaneous Time Delay Rate
Abstract
1 Introduction
2 Instantaneous Time Delay Rate Model
3 Equivalent Calculation Method of Instantaneous Time Delay Rate
3.1 Average Time Delay Rate Model
3.2 Parameter Selection of Equivalent Model
4 Actual Data Verification
5 Conclusion
References
Design and Research of Satellite Fault Real-Time Simulation and Verification Platform Based on FMI Standard Model
Abstract
1 Introduction
2 Platform Scheme Design
3 System Implementation
3.1 Design of Satellite Simulator Based on FMI Standard Model
3.2 Design of Fault Generator Based on FMI Standard Model
3.3 Design of Automatic Fault Manager Based on FMEA
3.4 Design of Platform Manager
4 Application and Verification
4.1 Platform Workflow
4.2 Typical Application Cases
4.3 Results of Analysis
5 Conclusion
References
A Method for Calculating Solar Pressure of TDRS with a Movable IOLA
Abstract
1 Foreword
2 Modeling of Solar Pressure
2.1 Discretization of Satellite Model
2.2 Calculation of the IOLA Tracking User Satellite Rotation
2.3 Calculation of Occlusion Relationship for Satellite Components
2.4 Calculation of Solar Pressure of the Whole Satellite
3 Simulation Example
4 Conclusion
References
Research on Platform Construction of Small Satellite GNC System
Abstract
1 Introduction
2 Current Status of Domestic Small Satellite Development
3 Platform Design and Development of Control System
3.1 Platform-Based Architecture Design
3.2 Improvement of Development Technical Process
3.3 Practice of Platform Application
4 Conclusion
References
The Storage Optimization of Test Cases and Test Results in Spacecraft Automatic Test Based on JSON
Abstract
1 Introduction
2 JSON Technology
3 NoSQL Technology
4 Storage Model Design
4.1 Storage Scheme Design of Traditional Relational Database
4.2 Design of Storage Scheme Based on JSON
4.2.1 Design of Test Case Storage Structure
4.2.2 Design of Test Result Storage Structure
5 Storage Model Validation
6 Conclusion
References
Research on the Modeling and Application of Intelligent Factory Facing the Complex Space Optical System
Abstract
1 Introduction
2 Problem Statement
2.1 State of the Art
2.2 Requirement Analysis
3 Intelligent Factory Modeling
3.1 Model Framework
3.2 Business Hierarchy Dimension
3.3 Development Lifecycle Dimension
4 Intelligent Factory Model Application
4.1 Information Integration Application
4.2 Product DT Model Application
5 Conclusion
References
A New Space-Based Optical Surveillance System
Abstract
1 Introduction
2 Feature of Space Visual-Monitor System
3 Visual-Monitor System Scheme
3.1 Camera Selection and Configuration
3.2 Camera Data Transmission Scheme
3.3 Camera Image Processing and Transmitting Scheme
3.4 System Work Flow
4 Visual-Monitor System Key Technology
4.1 Motion Image Detection and Recognition Technology
4.2 Motion Image Detection and Recognition Technology
5 Conclusion
References
The Application of MPLS VPN in Improving the Reliability of Test Network
Abstract
1 Introduction
2 MPLS VPN Technology
3 Application of MPLS VPN Technology in Test Network
4 Verification of Configuration Results
5 Conclusion
References
Performance Analysis of the Photon-Counting LIDAR Based on the Statistical Property
Abstract
1 Introduction
2 Theoretical Analysis
3 Data Validation
3.1 Photon Number of Signal and Background Noise
3.2 Photon Number of Signal and Background Noise
4 Conclusion
References
A Space Vector Transformation Based Dynamic Image Rotation Compensation Model
Abstract
1 Introduction
2 Vector Optical Transmission Model
3 Dynamic Image Rotation Compensation Model
4 Discussion
4.1 Paraxial Incidence
4.2 Large Angle Incidence
4.3 Incident in Different Coarse Pointing Mechanism Attitudes
5 Conclusion
References
Research on Segmented Weighted Multi-vector Attitude Determination Method Based on On-Orbit Accuracy of Star Sensors
Abstract
1 Introduction
2 Multi-vector Attitude Determination Method
3 Selection of Weighted Coefficients
4 Simulation
4.1 Simulation Example I
4.2 Simulation Example II
5 Conclusion
References
The Research of Micro-vibration Nonlinear Characteristic Produced by Spacecraft Mechanical Momentum Wheel
Abstract
1 Introduction
2 Mechanism of Nonlinear Micro Vibration
2.1 Amplitude Nonlinearity
2.2 Frequency Nonlinearity
3 Dynamic Model
4 Micro-vibration Control Method
5 Conclusion
References
Design of On-Orbit Monitoring Method for Small Satellite Payload Equipment Based on Grey Prediction
Abstract
1 Introduction
2 Analysis of Conventional Monitoring Methods of Payload Equipment
3 On-Orbit Monitoring Method of Payload Equipment with Grey Prediction
3.1 Grey Prediction
3.2 Self-adjustment Function
4 Test Results
5 Conclusion
References
Investigation on EMC Design of Spacecraft Under High-Power Microwave
Abstract
1 Introduction
2 HPM Effects on Spacecraft Through Front-Door Coupling and the Mitigation Suggestion
3 The Simulation of HPM Effects Through Back-Door Coupling and the Mitigation Suggestion
4 Conclusion and Future Works
References
Ground Experimental Verification of High-Stability Temperature Control System and Flight Data Analysis
Abstract
1 Foreword
2 High-Stability Temperature Control System Design
2.1 Multiple Level Thermal Insulation Design
2.2 Active Temperature Control System Design
3 Ground Experimental Verification of High-Stability Temperature Control System
3.1 Temperature Measurement PCB Assembling Standard Resistance Verification
3.2 Temperature Measurement PCB Assembling 4-Wire RTD Calibration
3.3 Ground Experimental Verification of High-Stability Temperature Control System
4 Flight Data Analysis
5 Summary
References
Experimental Verification of Discharge of Two Parallel Sets of Batteries in Mars Probe During EDL
Abstract
1 Introduction
2 Tasks and Power Supply Mode During EDL Stage
3 Parallel Discharge Experimental Verification of Two Sets of Batteries
3.1 Experimental Design
3.2 Experimental Verification
4 Conclusion
Acknowledgement
References
Reliability Design and Experimental Verification of Impact Sensor Signal Cable
Abstract
1 Introduction
2 The Design of the PCB Transfer Unit
2.1 The PCB Layout
2.2 The Shell Design
2.3 The Shielding Design
2.4 The Insulation Design
3 The Experimental Verification
3.1 The Mechanical Environment Experiments
3.2 The Thermal Environment Experiments
3.3 The Tensile Experiment
3.4 The Metallographic Experiment
4 Conclusions
References
Simulation and Analysis of Influence of Group Delay Distortion on Satellite-to-Ground and Satellite-to-Satellite Link Channel
Abstract
1 Introduction
2 Introduction of Group Delay and All Pass Filter
2.1 Group Delay
2.2 Allpass Filter
3 The Influence of Group Delay on Modulation Performance Simulation
3.1 Simulation of the Influence of Group Delay on Transmitted Signal
3.2 The Influence of Group Delay Analysis Based on the Whole Link
3.3 Analysis of Simulation Results
4 Conclusion
References
The Influence of Ka Band Satellite Transponder Nonlinearity on High-Speed Data Communication
Abstract
1 Introduction
2 Ka Band Transponder System
3 Nonlinear Characteristics of Ka Band Traveling Wave Tube Amplifiers
4 Test the Influence of Ka Frequency Transponder Nonlinearity on High-Speed Data Transmission
4.1 Test Plan
4.2 Test Results
5 Conclusion
References
GPS and Laser Altimeter Data Alignment for Snow Roughness Measurement in Antarctic
Abstract
1 Background Introduction
2 The Measurement System
2.1 The Measurement System Configuration
2.2 The Data and Format
2.3 The Field Around Concordia
2.4 GPS Data Process
3 Data Preparation and Alignment
3.1 Data Process Procedure
3.2 Data Validation
3.3 Basic Technique for Data Processing
4 Conclusion
References
Angles-Only Onboard Initial Relative Orbit Determination with Auxiliary Satellite
Abstract
1 Introduction
2 Relative Motion Dynamics and Measurement Models of Auxiliary Satellite
2.1 Relative Dynamics of Motion
2.2 Measurement Model of Auxiliary Satellite
3 IROD Algorithm Based on State Augmentation Least Square Method
4 Simulation Case Studies
4.1 Simulation Model and Parameters Setting
4.2 Convergence Analysis of State Estimation
5 Conclusions
Acknowledgement
References
Space-ROS: A Novel Real-Time Operating System Architecture and Implementation for Space Robot
Abstract
1 Introduction
2 Design and Implementation
2.1 Architecture
2.2 Dynamic Loading
2.3 Three-Level Scheduling Strategy
3 Evaluation
4 Conclusions
References
The Conception of the Construction of Remote Sensing Satellites Information Quality Evaluation Test Field
Abstract
1 Introduction
2 Demand Analysis of the Construction Remote Sensing Satellites Test Field
3 Evaluation Principles of Remote Sensing Satellites Information Quality
4 The Preliminary Construction Idea of Remote Sensing Satellites Test Field
5 Conclusion
References
Design and Optimization of Permanent Magnet Brushless DC Motor Control System for Flywheel
Abstract
1 Introduction
2 Working Principle and Mathematical Model of Permanent Magnet Brushless Direct Current Motor for Flywheel
3 The Composition and Working Principle of the Motor Speed Control System
4 Control Algorithm of Permanent Magnet Brushless DC Motor for Flywheel
5 Experimental Verification
6 Conclusion
References
On-orbit Research and Validation on the Lifetime Capability of Propulsion System
Abstract
1 Foreword
2 Catalytic Decomposition Test Method
2.1 Test Method
2.2 Research on Catalytic Decomposition Rate in Orbit
3 In-flight Validation
3.1 Mission Design
3.2 In-flight Catalytic Decomposition Validation
3.3 Propulsion System In-flight Test
4 Conclusions
References
Observer-Based Variable Structure Control for Spacecraft with Actuator Failure
Abstract
1 Introduction
2 Attitude Model and Control Problem Formulation
3 Observer-Based Variable Structure Controller Design
3.1 Design Fault Estimation Observer
3.2 Observer-Based Control Law Derivation
4 Numerical Simulation
4.1 Normal Mode
4.2 Failure Mode
5 Conclusion
References
Research on the Method of Satellite Energy Real-Time Dispatching
Abstract
1 Introduction
2 Disadvantages of Traditional Energy Balance Budgeting Method
3 Satellite Energy Real-Time Dispatching Method
3.1 Design Ideas
3.2 Calculation Method
4 Conclusion
References
System Error Registration Method of Bearings-Only Passive Location System for Surface Ships
Abstract
1 Introduction
2 Dynamic Models
3 Genetic Algorithm
4 Simulation Analyses
5 Conclusions
Acknowledgement
References
Research on Passive Location of Maneuvering Target Based on IMM-SRCKF Algorithm
Abstract
1 Introduction
2 Mathematical Model
2.1 Target Motion Model
2.2 Measurement Model
2.3 Augmented Model
3 IMM-SRCKF
4 Simulation Analyses
5 Conclusions
Acknowledgement
References
A Standard Driver Design of Onboard Control Computer
Abstract
1 Introduction
2 Standard Driver Design
2.1 A6017 Introduction
2.2 Standardization of Underlying Program
2.3 Anti-SEU Design
2.4 Configurable Design
2.5 Standardization of User Interface
3 Experiment
3.1 Functional Test
3.2 Anti-SEU Test
3.3 Configuration Test
4 Conclusion
References
Optimal Multiple-Impulse Noncoplanar Rendezvous Trajectory Planning Considering the J2 Perturbation Effects
Abstract
1 Introduction
2 J2 Perturbed Multi-Impulse Orbit Maneuver Problem
3 The J2 Perturbed Lambert Problem
4 Lambert-Based Multi-Impulse Orbit Planning Model
5 Simulation Example
5.1 Multi-Impulse Coplanar Rendezvous
5.2 Multi-Impulse Noncoplanar Rendezvous
6 Conclusion
References
PIC-MCC Simulation of the Temporal Characteristics of the Plasma in a Hall Thruster
Abstract
1 Introduction
2 Calculation Model
3 Numerical Method
4 Results and Discussion
4.1 Plasma Number Density
4.2 Plasma Velocity
4.3 Plasma Potential and Axial Electric Field
4.4 Channel Erosion
5 Conclusions
References
Research on Automatic Measurement of Aerospace Electrical Connector Wire
Abstract
1 Research Background
2 Principle and Design of Automatic Signal Measurement
2.1 Fifteen-Level 74LS148 and NAND Gates Constituting a 128-to-7 Line Priority Encoder
2.2 Design of Minimum System for Single Chip Microcomputer
2.3 Extension from 128 Wire Numbers to 256 Wire Numbers
2.4 Programming of Measurement Number
2.5 Design of Communication with Printer
2.6 Test Box Design
3 Conclusion
References
Correctness Verification of Aerospace Software Program Based on Hoare Logic
Abstract
1 Introduction
2 Rules of Program Verification Based on Hoare Logic
3 Verification of Program by Theorem Proving
4 Conclusion
References
Research on Image Restoration in Remote Sensing Quick-View System
Abstract
1 Introduction
2 Image Degradation and Restoration Principle
3 Wiener Filter Optimization and Parameter Estimation
3.1 Wiener Filter Method
3.2 Degradation Scale Estimating
3.3 Power Spectrum Ratio Estimating
3.4 Restraint of Ringing Artifacts
4 Experiments and Analysis
4.1 Simulations Verification
4.2 Experimental Verification
4.3 Result Analysis
5 Conclusions
References
An Image Denoising Algorithm Combining Global Clustering and Low-Rank Theory
Abstract
1 Related Work
1.1 Spectral Clustering Algorithm
1.2 Clustering of Similar Image Blocks Based on LSC Spectral Clustering Algorithm
2 Gaussian Noise Filtering Model Based on Weighted Low-Rank Sparseness (GWLS)
3 Experimental Results and Analysis
4 Summary
References
Constraint Self-updating Method for Autonomous Task Planning of Earth Observation Satellites Based on In-Orbit Data
Abstract
1 Introduction
2 Description of Satellite Autonomous Task Planning Constraints
3 The Constraints of Task Planning with Adjustable Parameters
3.1 Attitude Maneuver Constraints of Satellite Platform
3.2 Charging and Discharging Constraints of Satellite Power System
4 Auxiliary In-orbit Data Determination
5 Online Estimation of Constrained Adjustable Parameters Based on In-orbit Data
6 Simulation and Analysis for Algorithm
7 Conclusion
Acknowledgements
References
A Novel Message Communication Mechanism Based on Partition Name
Abstract
1 Introduction
2 Overview of Space-Time Partitioning OS
3 Message Communication Principle
4 Software Implementation
4.1 Data Structure Design
4.2 Message Interaction Interface Design
4.3 System Call Design
4.4 Partition Messaging Flow Design
5 Experiment
6 Conclusion
References
An Application of Image Denoising Technique Based on Convolutional Neural Network in Star Tracker
Abstract
1 Introduction
2 Star Maps Denoising Algorithm Based on Deep Convolution Neural Networks
2.1 Typical CMOS Noise Model
2.2 Generation of Training Dataset
2.3 A Modified Convolution Denoising Net Structure
2.4 Training with Synthetic and Real Noisy Star Images
3 Experimental Results
3.1 Improvement to the Number of Identified Stars
3.2 Improvement to the Angular Distance Error
4 Conclusion
References
A Time System Based on All Digital Simulation Platform with Precise Clock
Abstract
1 Introduction
2 Design of Digital Simulation Platform
2.1 Simulation Platform Architecture
2.2 Instruction Translation and Execution
3 Design and Implementation of Time System
3.1 Time System Design Factors
3.2 SPARC V8 Instruction Set Execution Time Consuming
3.3 Time System Algorithm
4 Experimental Verification Results
5 Conclusion
References
Medium Power Class-E Amplifier Design
Abstract
1 Scope
2 Class-E Power Amplifier Introduction
3 Design and Verification of Class-E Amplifier
3.1 Transistor Model Establishment
3.2 Amplifier Design
3.3 Measurement Results
4 Conclusion
References
Ka Band Satellite Communication Adaptive Coding and Modulation Technology Based on Artificial Intelligence LDPC Code
Abstract
1 Introduction
2 Ka Band Satellite Communication Adaptive Coding
3 Method and Process of Research on Ka band Satellite Communication Adaptive Coding
3.1 Adaptive Coding Parameter Selection
3.2 Data-Assisted Maximum Likelihood Signal-to-Noise Ratio (DA-ML) Estimation
3.3 Second-Order Moment-Fourth-Order Moment Signal-to-Noise Ratio (M2M4) Estimation
4 SNR Estimation Algorithm Performance Simulation
4.1 Analysis of the Performance Results of the Maximum Likelihood Algorithm and the Second Moment-Fourth Moment Algorithm
4.2 Analysis of Error and Variance Results of Maximum Likelihood Algorithm and Second Moment-Fourth Moment Algorithm
5 Conclusion
References
Encoding and Decoding Algorithm of LDPC for Low Orbit Satellite Communication Based on Artificial Intelligence
Abstract
1 Introduction
2 LDPC Encoding and Decoding Algorithm for Low-Orbit Satellite Communication Based on Artificial Intelligence
2.1 Implementation Structure of LDPC Encoding and Decoding Algorithm
2.2 Multivariate LDPC Decoding Algorithm
2.3 Three Ways to Improve the Decoding Algorithm
2.4 Low-Complexity LDPC Encoding and Decoding Algorithm Improvement
3 Analysis of LDPC Performance Simulation Results Under Low-Orbit Satellite Channel Environment
3.1 Bit Error Rate Performance of LDPCs Under Rice Channel
3.2 Time Complexity Analysis
4 Conclusion
References
Capacity Analysis of Relay Channel Based on Computer Technology and LDPC Coding Cooperation and Satellite Communication
Abstract
1 Introduction
2 Capacity Analysis of Relay Channel Based on Computer Technology and LDPC Coding Cooperation and Satellite Communication
2.1 The Development of Coding Theory
2.2 Introduction to LDPC Codes
2.3 Collaborative MIMO Technology
3 Analysis and Research on the Capacity of Relay Channel
3.1 Maximum Flow-Minimum Cut Theorem
3.2 Relay Channel and Its Channel Capacity
3.3 Cooperative Communication in Satellite Communication
4 Simulation and Performance Analysis
4.1 Simulation Analysis of Cooperative Communication Performance in Satellite Communication
4.2 Performance Simulation Analysis of LDPC Coded Cooperative Communication Scheme
5 Conclusions
References
Design and Verification of HY-2A Satellite Precision Orbit Determination Technology
Abstract
1 Introduction
2 Composition of HY-2A Precision Orbit Determination System
3 Key Technologies and Difficulties
3.1 Research on Precision Orbit Determination Algorithm
3.2 Optimization of Observation Data of Dual Frequency GPS Receiver
3.3 Dual-Frequency GPS Antenna Phase Center Stability
3.4 Laser Ranging Technology
4 Test Verification
4.1 SLR Satellite-To-Ground Test
4.2 HY-2A Precision Track Accuracy Verification
5 Results and Summary
References
Design of High-Reliability LDPC Code Algorithm for Relay Satellite Communication System
Abstract
1 Introduction
2 Relay Satellite LDPC Encoding and Decoding Algorithm Design
2.1 Design of LDPC Coding Algorithm for Relay Satellite
2.2 Design of Decoding Algorithm for Relay Satellite LDPC Codes
2.3 Simulation Analysis of the Performance of Relay Satellite LDPC Codes
3 Conclusion
References
Research on Real-Time Monitoring Method of Spot Beam Antenna Direction on Satellites
Abstract
1 Introduction
2 Research on Real-Time Monitoring Method
2.1 Problem Formulation
2.2 Real-Time Monitoring Method
2.3 Construction of the Real-Time Monitoring System
3 Example of GF-3 Satellite
3.1 Simulation System Construction
4 Conclusion
References
A Novel SOC Estimation Approach for the Lithium-Ion Battery Pack Used in Deep Space Landers
Abstract
1 Introduction
2 Model and Identification
2.1 Rint Model
2.2 Parameter Identification
3 SOC Estimation Method
4 Simulation and Verification
5 Results and Discussion
6 Conclusions
Acknowledgments
References
Analysis of Characteristics and Requirements of Large LEO Constellations Management
Abstract
1 Introduction
2 Large Constellation Development
2.1 Starlink
2.2 OneWeb
2.3 China’s Constellation
2.4 Summary
3 Characteristics of Large Constellation Management
3.1 Difficulty in Parallel Access
3.2 Difficulty in Long-Term Operation Management
3.3 Low Cost Introduces Risk
3.4 High Configuration Maintenance Requirement
4 Analysis of Large Constellation Management
4.1 Automation and Intelligence
4.2 Health Management
4.3 Configuration Maintenance
4.4 Constellation Autonomous Operation
5 Conclusion
References
Research on Space Project Systems Technological Risk Assessment Based on Multi-level Fuzzy Comprehensive Evaluation
Abstract
1 Introduction
2 Space Project Systemic Technological Risk Architecture and Characteristics
3 The Model and Method for Multi-level Fuzzy Comprehensive Evaluation of Technology Risk
4 Application Analysis
5 Conclusion
References
Research on the System Construction of Spaceborne Adaptive Software and Its Key Technologies
Abstract
1 Introduction
2 Overall Design of the Target System
3 Variability Modeling
3.1 Work Mode Modeling
3.2 Upstream Injection Modeling
4 Message Middleware Design
5 Parameter Control Center Design
5.1 Behavior Parameters Extraction
5.2 Structural Parameters Design
6 Adaptive Strategy Definition
7 Conclusion
References
Air-Space-Ground Integrated Information Network: Technology, Development and Prospect
Abstract
1 Introduction
2 Research Focus
3 Development Status
4 Future Prospects
References
High-Resolution Satellite Radiation Big Data All-Statistical Intelligent Screening Technology
Abstract
1 Introduction
2 The Principle of Big Data Full Statistical Radiation Calibration
3 Data Screening Strategy
3.1 Screening Strategy
4 Algorithm Strategy Effect
5 Conclusion
References
EMC System Control on XX-2 Satellite
Abstract
1 Introduction
2 EMC Work Planning
3 EMC Control and EMC Test Verification of Equipment
4 RF EMC System Analysis
5 RM Satellite Test
6 EMC System Test Verification
7 Conclusions
References
Transmission and Control of the Image Data from Remote Sensing Satellite to the Ground Station
Abstract
1 Introduction
2 Analysis of the Data Transmission
2.1 Quantity Analysis of the Sensor Data
2.2 Mode of the Data Transmission
2.3 Data Transmission System
3 The Pointing Control of the Antenna
3.1 Requirement of Antenna Control
3.2 Control Protocol
3.3 Safety Strategy
4 Application
5 Conclusion
References
Micro-vibration Attenuation Design, Analysis and Verification for Agile Remote Sensing Satellite
Abstract
1 Introduction
2 Micro-vibration Attenuation Technology Route
3 Micro-vibration Attenuation Design
3.1 Sensitive Payload Requirement Analysis
3.2 Micro-vibration Transfer Path
3.3 Disturbance Source Characteristics Research
3.4 Micro-vibration Attenuation Design
4 Micro-vibration Attenuation Verification at System Level
4.1 Ground Micro-vibration Test
4.2 On-Orbit Response Analysis
4.3 Flight Verification
5 Conclusion
References
Practice and Research on FMEA of Telecommunication Satellite System
Abstract
1 Introduction
2 FMEA Method on Satellite System
2.1 Equipment FMEA
2.2 System FMEA on Satellite
2.2.1 System FMEA Overview
2.2.2 System Failure and Risk Probability
3 System FMEA Practice on an International Commercial Telecommunication Satellite
3.1 Analysis Hypothesis
3.2 Typical Satellite System FMEA Process
3.3 Subsystem and Equipment FMEA
3.4 Practice on Telecommunication System FMEA
4 Problems When Studying System FMEA on Telecommunication Satellite Systems at Present
4.1 Absence on System FMEA Work Item
4.2 The Size of Failure Mode is Always Different
4.3 False Redundancy, True Single Point of Failure Mode
4.4 True Redundancy, But Still Insecure
4.5 Single Evaluation of the Impact of System Failure Modes
4.6 Problems that Cannot Be Solved by System FMEA
5 Conclusion
References
Characterization of On-Orbit Temperature and Pressure Performance for Hydrazine Propulsion System
Abstract
1 Introduction
2 Hydrazine Propulsion System
3 Preheating Process of the 5N Thruster
3.1 Preheating Under Vacuum Conditions
3.2 Preheating Under Atmospheric Conditions
3.3 Preheating During Ascent Stage
4 Pressure Change of the Propulsion System
4.1 Pressure Change During Ascent Stage
4.2 Pressure Change Induced by Water Hammer Effect
4.3 Pressure Change in Locked Pipeline
5 Conclusions
References
On-Orbit Calibration of a 7-DOF Space Arm
Abstract
1 Background
2 Modeling
3 Calibration Method
4 Simulation
5 Conclusion
References
Correlation Analysis and Forecast of Electric Vehicle Charging Stations Considering Seasonal Factors
Abstract
1 Introduction
2 Data Statistics of Large EV Charging Station
2.1 Data Statistics of Charging Habits
2.2 Data Statistics of DC Charging Pile
2.3 Data Statistics of AC Charging Pile
3 Queuing Model of EV Charging Pile
3.1 Build the Queueing Model
3.2 Data Analysis
4 Conclusion
References
Inversion Modeling Study of Dissolved Oxygen in Wangyu River Based on BP Neural Network
Abstract
1 Introduction
2 Overview of the Study Area
3 Data Source and Preprocessing
4 Research Results and Analysis
4.1 Correlation Analysis
4.2 The BP Neural Network Model
4.3 Comparison with a Linear Regression Model
5 Conclusion
References
Research on Satellite Energy Autonomous Security Strategy
Abstract
1 Introduction
2 Characteristics of Satellite Mission Requirements
3 Energy System Solution
3.1 Energy System Hardware Architecture
3.2 Energy System Software Architecture
4 Design of Energy Security Self-management Strategy
4.1 Independent Management Strategy Architecture Design
4.2 Hardware Independent Safety Function Design
4.3 Software Autonomous Safety Function Design
4.4 Energy Management Function Design
5 Conclusion
References
Research and Development of High Speed and Large Bandwidth MIMO Data Transmission Test System
Abstract
1 Introduction
2 System Model and Baseband Algorithm Design
2.1 Channel Estimation Technology
2.2 Signal Equalization Technology
3 Hardware Module and Software Module Design
3.1 Hardware Module Design
3.2 Software Module Design
4 Result
5 Conclusion
Acknowledgment
References
Multimedia and Communication
Continuous Evolution for Efficient Neural Architecture Search Based on Improved NSGA-III Algorithm
Abstract
1 Introduction
2 CARS
2.1 Non-dominated Sorting
2.2 pNSGA-III
3 I-CARS
3.1 PBI Distance
3.2 Selection-and-Elimination Operator
4 Experiments and Analysis
4.1 Experimental Setting
4.2 Analysis
5 Conclusions
Acknowledgment
References
Research on Individual Identification of Radar Emitter Based on Multi-domain Features
Abstract
1 Introduction
2 Related Work
3 Multi-domain Feature Extraction
3.1 Feature Extraction Based on Bispectrum
3.2 Feature Extraction Based on Wavelet Packet Decomposition and Time Domain Statistic
4 Experiment Analysis
5 Conclusion
References
Fast HEVC Intra CTU Partition Algorithm Based on Lightweight CNN
Abstract
1 Introduction
2 Proposed Method
2.1 Observations and Motivations
2.2 Three-Level Classifier
2.3 Neural Network Design
2.4 Database Construction
3 Experimental Results
3.1 Comparison of Model Size
3.2 Performance Evaluation
3.3 Ablation Study
4 Conclusions
References
A New Pulmonary Nodule Detection Based on Multiscale Convolutional Neural Network with Channel and Attention Mechanism
Abstract
1 Introduction
2 Method
2.1 Architecture
2.2 Attention Module
2.3 Multiscale Module
3 Experiments
3.1 Training and Test
3.2 Ablation Study
3.3 Results
4 Conclusion
References
A Novel Attention Mechanism for Scene Text Detection
Abstract
1 Introduction
2 Methodology
3 Experiments
3.1 Dataset
3.2 Evaluation Metrics
3.3 Ablation Experiment
3.4 Contrast Experiment
4 Conclusion
References
Epileptic Seizure Detection Based on Multi-synchrosqueezing Transform and Multi-label Classification
Abstract
1 Introduction
2 Material and Method
2.1 EEG Database
2.2 MSST Transform
2.3 MLKNN
2.4 Post-processing
3 Result
4 Conclusion
References
Mobility-Aware Task Offloading Scheme Based on Optimal System Benefit in Multi-access Edge Computing
Abstract
1 Introduction
2 Problem Formulation
2.1 Transmit and Receive Cost
2.2 Calculation Cost
2.3 Migration Cost
2.4 Idle Cost
2.5 Energy Consumption
2.6 System Benefit Function
3 Simulation Analysis
4 Conclusion
References
Design of Electronic Intelligent Code Lock
Abstract
1 Introduction
2 Total Design Scheme of Intelligent Electronic Code Lock
2.1 System Structure Design
2.2 Main Functions of the System
3 System Hardware Design
3.1 STM32 Main Control Circuit
3.2 Touch Screen Circuit Module
3.3 Radio Frequency Card Reading Circuit
3.4 WIFI Module Circuit
4 Software Design of Intelligent Electronic Password Lock
4.1 Overall Design
4.2 Password Judgment Module Design
4.3 RF Card Module Design
4.4 Communication Module Design
5 System Test
6 Conclusion
References
Integrating Ensemble Learning and Information Entropy for Diabetes Diagnosis
Abstract
1 Introduction
2 Data Preprocessing
2.1 Outlier Detection and Processing
2.2 Missing Value Processing
3 Feature Engineering
3.1 Feature Construction
3.2 Feature Selection
4 Model Building and Post-processing
5 Conclusion
References
The Optimal UDP Data Length in Ethernet
Abstract
1 Introduction
2 Experiments and Results
3 Analysis
4 Conclusion
References
An Attention-GRU Based Gas Price Prediction Model for Ethereum Transactions
Abstract
1 Introduction
2 Related Work
3 Influencing Factors and the Price Prediction Model
3.1 Influencing Factors of Gas Price
3.2 An Attention-GRU Based Gas Price Prediction Model
3.3 Evaluation Index
4 Experiments and Performance Evaluation
5 Conclusions
References
A Research on an Unknown Radar Signal Sorting with DBSCAN-BP Algorithm
Abstract
1 Introduction
2 DBSCAN Algorithm
2.1 Data Classification
2.2 Algorithm Flow
3 BP Neural Network Algorithm
4 DBSCAN-BP Algorithm
4.1 Radar Signal Data Preprocessing
4.2 DBSCAN-BP Radar Signal Sorting Algorithm
5 Simulation Analysis
5.1 Radar Big Data Acquisition
5.2 Experiment and Parameter Setting
5.3 Analysis of Sorting Results
6 Conclusions
References
An Industrial Internet Delay Optimization Scheme Based on Time Sensitive Network
Abstract
1 Introduction
2 System Model
2.1 EWRR Queue Scheduling Algorithm
2.2 TAS-EWRR Queue Scheduling Algorithm and Model
2.3 Delay-Based Dijkstra Algorithm
3 System Simulation
3.1 Single Switch Node Model
3.2 Network Model
4 Conclusion
Acknowledgement
References
The Key Technology of Ultra-high-Speed Laser Communication Principle Verification System
Abstract
1 Introduction
2 Basic Architecture of Equipment
3 Design of Data Flow Scheme
3.1 User Data Sending Mode
3.2 Error Rate Statistics Mode
4 Summary
References
Inappropriate Visual Content Detection Based on the Joint Training Strategy
Abstract
1 Introduction
2 Proposed Method
2.1 Network Sharing and Multi-scale Fusion
2.2 Joint Prediction Training Strategy
2.3 Focal Loss Improvement
3 Experiments
3.1 Dataset
3.2 Results
4 Conclusions
Acknowledgement
References
Spectral Optimization for Multi-carrier NOMA-Enabled Industrial Internet of Things
Abstract
1 Introduction
2 System Model and Problem Formulation
2.1 System Model
2.2 Spectral Efficiency Problem Formulation
2.3 Proposed Solutions
3 Simulation Results
4 Conclusion
Acknowledgement
References
Local Edge Structure Guided Depth Up-Sampling
Abstract
1 Introduction
2 Proposed Method
2.1 Image Preprocessing
2.2 Pixel Classification
2.3 Depth Refinement
3 Experimental Results
4 Conclusion
References
Deep Learning and XGBoost Based Prediction Algorithm for Esophageal Varices
Abstract
1 Introduction
2 Materials
3 Methods
3.1 Image Preprocessing
3.2 Feature Extraction
3.3 Feature Selection
3.4 Classification
4 Experimental Results
5 Conclusion
References
A Data-Driven Model for Bearing Remaining Useful Life Prediction with Multi-step Long Short-Term Memory Network
Abstract
1 Introduction
2 Proposed Method
2.1 HI Curve Construction
2.2 HI Curve Prediction via MS-LSTM
2.3 Calculation of RUL
3 Experiments and Results
3.1 Dataset
3.2 HI Curve Construction Experiment
3.3 Comparison Experiment of HI Prediction
4 Conclusions
References
State Estimation of Networked Control Systems with Packet Dropping and Unknown Input
1 Introduction
2 Problems Statement
3 LMMSE Estimation
4 Minimum Variance Unbiased Estimation
4.1 Input Estimation
4.2 State Estimation
5 Simulation Results
6 Conclusion
References
Self-attention Mechanism Fusion for Recommendation in Heterogeneous Information Network
Abstract
1 Introduction
2 Related Work
2.1 Recommendation Based on Attention Mechanism
2.2 Network Embedded Learning
2.3 The Proposed Model
2.4 Basic Recommendation Model Based on Attention Mechanism
2.5 Global Information Modeling Based on Self-attention
2.6 Joint Model
3 Experiments
3.1 Compared Methods
3.2 Detailed Implementation
3.3 Results
3.4 Conclusion
Acknowledgment
References
DAMNet: Hyperspectral Image Classification Method Based on Dual Attention Mechanism and Mix Convolutional Neural Network
Abstract
1 Introduction
2 Materials and Methods
3 Experimental Results
3.1 Experimental Settings and Assessment Criteria
3.2 Compare with Other Methods
4 Conclusions
References
Dual-Discriminator Based Multi-modal Medical Fusion
Abstract
1 Introduction
2 Method
2.1 Generative Adversarial Network with Dual Discriminator Conditions
2.2 Pre-trained Neural Network
3 Experiments
3.1 Experiment Preparation and Setup
3.2 Comparison with Other Image Fusion Methods
4 Conclusion
Acknowledgments
References
Topology Design of Aircraft Cargo Loading Control System
Abstract
1 Introduction
1.1 Typical Topology and Existing Problems
1.2 Load Rate of Communication Bus
1.3 Research Contents
2 Calculation Method of Bus Load Rate
3 Design of Communication Network Topology
3.1 Scheme 1
3.2 Scheme 2
3.3 Scheme 3
4 Simulation and Analysis of CAN Bus Topology
5 Conclusion
References
Information Bottleneck Attribution Based Retinal Disease Classification Using OCT Images
Abstract
1 Introduction
2 Related Work
2.1 Information Bottleneck for Attribution
3 Method
3.1 Per-sample Bottleneck
3.2 Readout Bottleneck
4 Experiments and Results
4.1 Datasets
4.2 Experimental Results
5 Discussion and Concluding Remarks
References
Distributed Consensus for Discrete Multi-agent Systems with Time Delay in the Transmission
1 Introduction
2 Preliminary
3 Problem Statement
4 Main Results
5 Numerical Example
6 Conclusion
References
Simulation Design of Space Gravitational Wave Detection Satellite Formation
Abstract
1 Introduction
2 Simulation System Design
3 Comprehensive Scene Display System Design
4 Attitude and Orbit Control Simulation System Design
5 Simulation Results
6 Conclusion
References
Scheme Design for Cargo Loading Control System of Cargo Plane
Abstract
1 Introduction
1.1 Composition of Automatic Loading System
1.2 Research Content
2 Functions and Requirements
3 Architecture Design for Controller
3.1 Design of CAN Communication Network
3.2 Design of CAN Communication Message
4 Implementation Scheme for Controller
4.1 Scheme Design of Hardware and Software
4.2 Scheme and Implementation of System Visualization
5 Conclusion
References
Chinese Word Segmentation of Ideological and Political Education Based on Unsupervised Learning
Abstract
1 Introduction
2 Theory
2.1 Character-Level N-Gram Language Model
2.2 Viterbi Algorithm
3 Proposed Method
3.1 The Construction of Corpus
3.2 Training and Application of Character-Level N-Gram Model
4 Experiment
4.1 Language Model Comparison Test
4.2 Algorithm Accuracy Test
5 Conclusion
References
Random Design and Quadratic Programming Detection of Cayley Code in Large MIMO System
Abstract
1 Introduction
2 Description of Cayley Code Transmission
3 Cayley Code Design Based on Random Complex Gaussian Matrices
4 Quadratic Programming Detection of Cayley Code
5 Simulation Results and Performance Analysis
6 Conclusion
References
The Design and Implementation of Mobile Operation Platform Based on Domain Driven Design
Abstract
1 Introduction
2 Scenario-Based Micro-application Design for Mobile Operation
2.1 Scenario-Based Micro-application Design Method
2.2 Scenario-Based Micro-application for Mobile Operations
3 Design of Domain-Driven Mobile Operation Middle-Ground
3.1 Design Method of Domain-Driven Mobile Operation Business Middle-Ground
3.2 The Mobile Operation Business Middle-Ground
4 Architecture of Platform Construction
5 Conclusion
Acknowledgement
References
Research on Ensemble Prediction Model of Apple Flowering Date
Abstract
1 Introduction
2 Data Description and Study Methodology
2.1 Data Description
2.2 Research Methodology
3 Results and Analysis
3.1 Data Pre-processing Results
3.2 Analysis of Variable Screening Results
3.3 Analysis of Prediction Results
4 Summary
Acknowledgements
References
Apple Leaf Disease Identification Method Based on Improved YoloV5
Abstract
1 Introduction
2 Materials and Methods
2.1 Dataset Acquisition and Image Augmentation
2.2 An Improved YoloV5s Object Detection Framework
3 Results and Analysis
3.1 Experimental Environment and Parameter Settings
3.2 Model Performance Evaluation Metrics
3.3 Comparison of Model Improvement Effects
4 Conclusion
Acknowledgements
References
Analysis of Apple Replant Disease Based on Three Machine Learning Methods
Abstract
1 Introduction
1.1 Apple Replant Disease Background
1.2 Influencing Factors of Apple Replant Disease
1.3 Apple Replant Disease Prevention and Control
2 Data and Methods
2.1 Data Set and Its Division
2.2 Data Preprocessing
2.3 Model Establishment
3 Results and Analysis
3.1 Experimental Results
4 Conclusion
Acknowledgements
References
Improved Federated Average Algorithm for Non-independent and Identically Distributed Data
Abstract
1 Introduction
2 Method
2.1 Improved FedAvg Algorithm
3 Experiments
3.1 Experimental Setup
3.2 Result
4 Conclusions and Future Work
Acknowledgements
References
A Monitoring Scheme for Pine Wood Nematode Disease Tree Based on Deep Learning and Ground Monitoring
Abstract
1 Introduction
2 Solution Design and Implementation
2.1 Experiment Location and Equipment
2.2 Obtaining Images
2.3 Datasets
3 Algorithm Research
3.1 YOLOv3 Network
3.2 SENet Attention Mechanism Module
3.3 Bottom-Up Multi-scale Feature Fusion
4 Analysis of Experimental Results
5 Conclusion
Acknowledgements
References
An Optical Detection Method with Noise Suppression for Focal Panel Arrays
Abstract
1 Introduction
2 System Overview and Signal Design
2.1 System Overview
2.2 Signal Design
3 Modeling and Algorithms
4 Application and Simulation
5 Conclusion
References
Orientation-Aware Vehicle Re-identification via Synthesis Data Orientation Regression
Abstract
1 Introduction
2 Related Work
2.1 Orientation-Aware Vehicle Re-ID
2.2 Utilization of Synthetic Data in Re-ID
3 Method
3.1 Synthesis Data Orientation Regression
3.2 Orientation Bias Elimination
4 Experiments
4.1 Datasets
4.2 Evaluation Metric
4.3 Implementation Details
4.4 Comparison with State-of-the-Art Methods
5 Conclusion
References
Opportunities and Challenges of Artificial Intelligence Education in Secondary Vocational Schools Under the Background of Emerging Engineering
Abstract
1 Introduction
2 Artificial Intelligence Education
3 The Development Background of Artificial Intelligence Education at Home and Abroad
3.1 The Development of Artificial Intelligence Education Abroad
3.2 The Development of Artificial Intelligence Education in China
4 The Development of Artificial Intelligence Education in China’s Secondary Vocational Schools
4.1 From the Perspective of the Number of Documents Published Over the Years
4.2 Based on the Number of Papers Published in Different Regions
5 The Necessity of Artificial Intelligence Education in Secondary Vocational Schools
6 Opportunities and Challenges of Artificial Intelligence Education in Secondary Vocational Schools
6.1 Opportunities for Artificial Intelligence Education in Secondary Vocational Schools
6.2 The Challenge of Developing Artificial Intelligence Education in Secondary Vocational Schools
7 System Construction of Artificial Intelligence Education in Secondary Vocational Schools
7.1 Improve the Artificial Intelligence Education Curriculum System
7.2 Speed up the Training of AI Teachers and Improve Teachers’ AI Literacy
7.3 Improve the Construction of Artificial Intelligence Education Base
7.4 Strengthen School-Enterprise Cooperation and Promote Enterprises to Participate in Running Schools
8 Conclusion
References
Research on the Hotspots and Trends of Student Portrait Based on Citespace
Abstract
1 Introduction
2 Data Source and Research Method
2.1 Data Source and Processing
2.2 Research Tool and Method
3 Student Portrait Literature in Time and Space Distribution
3.1 Time Distribution of Literature
3.2 Distribution Map of Literature Author
4 Student Portrait Research Hotspots Analysis
4.1 Keyword Statistical Analysis
4.2 Keyword Clustering Map and Research Hotspots Analysis
5 Research Trends of Student Portrait
6 Conclusions and Suggestions
References
An Overview on Digital Content Watermarking
Abstract
1 Introduction
2 Research Status of Digital Watermarking
3 Some Specific Algorithms
3.1 Typical Spatial Algorithm
3.2 Typical Transform Domain Algorithm
4 Attacks on Digital Watermarking
4.1 Signal Processing Attacks
4.2 Geometric Attacks
4.3 Protocol Attacks
4.4 Cryptographic Attacks
5 Possible Future Research Directions
6 Conclusion
References
Location-Independent Human Activity Recognition Using WiFi Signal
Abstract
1 Introduction
2 Related Work
2.1 Recent Advances in Human Activity Recognition
2.2 Data Acquisition and Data Description
3 System Design
3.1 System Overview
3.2 Pre-processing
3.3 Classifier Models for 1-Dimensional CNN
4 Experimental Results and Discussion
5 Conclusion
References
Design of Text Summarization System Based on Finetuned-BERT and TF-IDF
Abstract
1 Research Background
2 Introduction to the NLP Field
2.1 Text Preprocessing Technology
2.2 BERT-Based Method
2.3 Graph-Based Method
2.4 Method Based on Word Frequency Statistics
3 System Requirements and Framework Design
3.1 System Requirements
3.2 System Framework Design
4 System Realization and Effect Display
4.1 Effect Display for the .txt Type of Input
4.2 Effect Display for the .jpg Type of Input
5 Conclusion
References
Rogue WiFi Detection Based on CSI Multi-feature Fusion
Abstract
1 Introduction
2 CSI Acquisition and Phase Analysis
3 Data Preprocessing and Feature Extraction
3.1 Phase Unwrapping
3.2 Phase Filter
3.3 CFO Feature Extraction
3.4 Non-linear Phase Error Feature Extraction
4 Feature Evaluation and Fusion
4.1 Feature Evaluation
4.2 Feature Fusion
5 System Design and Simulation Analysis
5.1 System Design
5.2 Simulation Analysis
6 Conclusion
References
Panoramic Video Quality Assessment Based on Spatial-Temporal Convolutional Neural Networks
Abstract
1 Introduction
2 Background
2.1 Panoramic Video
2.2 Datasets
2.3 Convolutional Neural Networks
3 Proposed Model
3.1 Spatial-Temporal Convolutional Neural Networks
3.2 Weight Calculation
4 Experiment and Result
5 Conclusion
References
Fast CU Partition Algorithm for AVS3 Based on DCT Coefficients
Abstract
1 Introduction
2 Working Principle of CU Partition in AVS3
3 DCT and Implementation of Fast CU Partition Algorithm
3.1 Discrete Cosine Transform
3.2 Fast CU Partition Algorithm
4 Experimental Results
5 Conclusions
References
Research on Recognition Method of Maneuver and Formation Tactics in BVR Cooperative Combat Based on Dynamic Bayesian Network
Abstract
1 Background
2 Theoretical Basis
2.1 Dynamic Bayesian Network
2.2 Cloud Model
3 Recognition Model of BVR Maneuvers Based on DBN
3.1 Construction of Recognition Model of Typical Maneuvers of BVR
3.2 Tactical Identification Model Reasoning Process Based on DBN
4 Recognition Model of BVR Formation Tactics Based on DBN
4.1 Construction of Typical Formation Tactics Recognition Network of BVR
5 Simulation Experiment and Analysis
6 Conclusion
References
Fast Intra Prediction Algorithm for AVS3
Abstract
1 Introduction
2 Proposed Algorithms
2.1 Fast CU Partition Algorithm Based on the Best DT Partition
2.2 Fast Intra Mode Decision-Making Algorithm Based on the Best Prediction Mode Under 2Nx2N Partition Mode
3 Experimental Results
4 Conclusion
References
Big Data Workshop
Research on Interference Source Identification and Location Based on Big Data and Deep Learning
Abstract
1 Introduction
2 Programme Background
3 Algorithm Design
3.1 Interference Type Recognition Algorithm
3.2 External Interference Source Localization Algorithm
4 Output Result
4.1 External Interference Source Location Algorithm
4.2 List of Interference Sources
4.3 Uplink Interference Data
4.4 Page Data Rendering Results
5 Conclusions
References
Research on Automated Test and Analysis System for 5G VoNR Service
Abstract
1 Introduction
2 Key Technologies of 5G VoNR Solutions
2.1 5G Voice Solutions Evolution
2.2 5G VoNR Solution
3 Automated Test and Analysis System for 5G VoNR Service
3.1 Simulation Dial-and-Test Module
3.2 Quality Assessment Module
3.3 Automatic Early Warning Module
4 5GC Network Capability Application Solution
4.1 VoNR Service Guarantee During Network Updates
4.2 Routine Inspection of the Core Network
5 Conclusion
Acknowledgment
References
Mobile Network Monitoring Technology for City Freeway Based on Big Data Analysis
Abstract
1 Introduction
2 Architecture of Big Data Platform for Mobile Network Quality Control
3 Mobile Network Monitoring Scheme for City Freeway Based on Big Data Analysis
3.1 Related Definitions
3.2 Identification Model for City Freeway User
3.3 Network Monitor for City Freeways
3.3.1 Acquisition Data
3.3.2 Generate Freeway Users Perception Indices Bill
3.3.3 Attach Labels of Road Sections
3.3.4 Generate Application Table by Aggregation
3.3.5 Web Presentation
4 Conclusions
References
5G Uplink Coverage Enhancement Based on Coordinating NR TDD and NR FDD
Abstract
1 Introduction
2 Uplink and Downlink Decoupling SUL Technology
3 Uplink Enhancement Based on Coordinating NR TDD and NR FDD
3.1 NR TDD and NR FDD Timeslot Scheduling
3.2 Performance Comparison Analysis
4 LTE Load-Based SUL Uplink and Downlink Decoupling Mode
5 Conclusion
References
Application of Improved GRNN Algorithm for Task Man-Hours Prediction in Metro Project
Abstract
1 Introduction
2 Related Work
3 Materials and Methods
3.1 GRNN
3.2 The Method Proposed
4 Performance Analysis
4.1 Analysis of Influencing Factors of Man-Hours
4.2 Man-Hour Prediction and Analysis
5 Conclusion
Acknowledgements
References
Resident Area Determination Model Based on Big Data and Large Scale Matrix Calculation
Abstract
1 Introduction
2 Trace in Cell Level Modelling
3 Data Collection and Trace Construction
4 Resident Area Determination with Matrix
4.1 Rough Estimation of Resident Area
4.2 Resident Area Self-recognition
4.3 Calculation and Verification
Acknowledgement
References
Research of New Energy Vehicles User Portrait and Vehicle-Owner Matching Model Based on Multi-source Big Data
Abstract
1 Introduction
2 Research of User Profile Based on Multi-source Big Data
2.1 Network Behavior Preference
2.2 Distribution of Living and Working Areas
2.3 Geographical Distribution of Charging Stations
3 Research of Vehicle-Owner Matching Model
4 Conclusion
References
Cell Expansion Priority Recommendation Based on Prophet Algorithm
Abstract
1 Introduction
2 Research Theory and Method
2.1 Prophet Time Prediction Algorithm Introduction
2.2 Analysis of Key Factors Affecting Network Capacity
3 Overall Model Construction
3.1 Introduction to Expansion Priority Recommendation Process
3.2 Expansion Priority Recommendation Model
4 Model Experiment and Result Analysis
4.1 Data Preparation
4.2 Prediction of Cell Traffic Based on Prophet Model
4.2.1 Predictive Analysis of Each Component of Prophet
4.2.2 Analysis of Prediction Results of Cell Traffic
4.2.3 Comparative Analysis of Prediction Results Between Prophet and LSTM
4.3 Expansion Priority Recommendation Results and Analysis
5 Conclusion
References
Load and Energy Balance Strategy of SDN Control Plane in Mobility Scenarios
Abstract
1 Introduction
2 System Model
2.1 Architecture
2.2 Problem Formulation
3 Controller Placement
3.1 Position Prediction Algorithm
3.2 Controller Placement Algorithm
4 Evaluation
4.1 Simulation Environment
4.2 Prediction Accuracy
4.3 The Network Performance
5 Conclusion
References
A Method for Intelligently Evaluation the Integrity of 5G MR Data
Abstract
1 Introduction
2 The Overall Scheme Description
3 Machine Learning Algorithm Selection
3.1 Correlation Analysis
3.2 Multiple Linear Regression Analysis
4 Detailed Scheme Evaluation
4.1 Extract the Quantity of MR Sampling Points
4.2 Extract Relevant Performance Indicators
4.3 Correlation Analysis
4.4 Modeling of MR Sampling Evaluation Function
4.5 Accuracy Verification of Regression Model
4.6 Integrity Verification of MR Data
5 Conclusions
References
Internet of Vehicles Product Recommendation Algorithm and Implementation Based on Ensemble Learning
Abstract
1 Introduction
2 System Architecture
3 Implementation Process
4 Effect Evaluation
5 Conclusion
Acknowledgment
References
Research and Verification of Power Saving Technology in 5G Network
Abstract
1 Introduction
2 Power Saving Function of 5G Base Station
2.1 Subframe Shutdown
2.2 Channel Shutdown
2.3 Cell Shutdown
3 Detailed Strategy
3.1 Test Plan
3.2 Emergency Strategy
4 Effect Evaluation
4.1 Overall Situation
4.2 City Level Verification
4.3 Single Base Station Verification
4.4 Impact on Network Quality
4.5 Implementation Strategy
5 Conclusion
References
Application of XGBOOST Model on Potential 5G Mobile Users Forecast
Abstract
1 Introduction
1.1 5G Internet Development and Potential Profits
2 Data and Method
2.1 Data Collection and Feature Selection
2.2 Data Cleaning
2.2.1 Training Data
3 Model
3.1 Model Parameters
3.2 Model Validity
4 Model Results and Feature Importance
5 Conclusion
References
Research on Methodology of Symmetry Optimization with Backpropagation
Abstract
1 Introduction
2 Lagrangian Structure for Higgs Mechanism
2.1 The Principle of Least Action
2.2 Analysis of Conservation Laws Based on Noether’s Theorem
2.3 Maxwell’s Lagrangian
3 Analyses of Symmetry for Standard Model
3.1 Global Symmetry
3.2 Local Symmetry
3.3 Scalar Electrodynamics
4 Optimization for Symmetry Breaking
4.1 Global Symmetry Breaking
4.2 Local Symmetry Breaking
5 Conclusion
References
Author Index


Lecture Notes in Electrical Engineering 917

Jiande Sun, Yue Wang, Mengyao Huo, Lexi Xu, Editors

Signal and Information Processing, Networking and Computers Proceedings of the 8th International Conference on Signal and Information Processing, Networking and Computers (ICSINC)

Lecture Notes in Electrical Engineering Volume 917

Series Editors Leopoldo Angrisani, Department of Electrical and Information Technologies Engineering, University of Napoli Federico II, Naples, Italy Marco Arteaga, Departament de Control y Robótica, Universidad Nacional Autónoma de México, Coyoacán, Mexico Bijaya Ketan Panigrahi, Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, Delhi, India Samarjit Chakraborty, Fakultät für Elektrotechnik und Informationstechnik, TU München, Munich, Germany Jiming Chen, Zhejiang University, Hangzhou, Zhejiang, China Shanben Chen, Materials Science and Engineering, Shanghai Jiao Tong University, Shanghai, China Tan Kay Chen, Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore Rüdiger Dillmann, Humanoids and Intelligent Systems Laboratory, Karlsruhe Institute for Technology, Karlsruhe, Germany Haibin Duan, Beijing University of Aeronautics and Astronautics, Beijing, China Gianluigi Ferrari, Università di Parma, Parma, Italy Manuel Ferre, Centre for Automation and Robotics CAR (UPM-CSIC), Universidad Politécnica de Madrid, Madrid, Spain Sandra Hirche, Department of Electrical Engineering and Information Science, Technische Universität München, Munich, Germany Faryar Jabbari, Department of Mechanical and Aerospace Engineering, University of California, Irvine, CA, USA Limin Jia, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Alaa Khamis, German University in Egypt El Tagamoa El Khames, New Cairo City, Egypt Torsten Kroeger, Stanford University, Stanford, CA, USA Yong Li, Hunan University, Changsha, Hunan, China Qilian Liang, Department of Electrical Engineering, University of Texas at Arlington, Arlington, TX, USA Ferran Martín, Departament d’Enginyeria Electrònica, Universitat Autònoma de Barcelona, Bellaterra, Barcelona, Spain Tan Cher Ming, College of Engineering, Nanyang Technological University, Singapore, Singapore Wolfgang Minker, Institute of Information Technology, University of Ulm, Ulm, Germany Pradeep Misra, Department of Electrical Engineering, Wright State University, Dayton, OH, USA Sebastian Möller, Quality and Usability Laboratory, TU Berlin, Berlin, Germany Subhas Mukhopadhyay, School of Engineering & Advanced Technology, Massey University, Palmerston North, Manawatu-Wanganui, New Zealand Cun-Zheng Ning, Electrical Engineering, Arizona State University, Tempe, AZ, USA Toyoaki Nishida, Graduate School of Informatics, Kyoto University, Kyoto, Japan Luca Oneto, Department of Informatics, Bioengineering., Robotics, University of Genova, Genova, Genova, Italy Federica Pascucci, Dipartimento di Ingegneria, Università degli Studi “Roma Tre”, Rome, Italy Yong Qin, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China Gan Woon Seng, School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore, Singapore Joachim Speidel, Institute of Telecommunications, Universität Stuttgart, Stuttgart, Germany Germano Veiga, Campus da FEUP, INESC Porto, Porto, Portugal Haitao Wu, Academy of Opto-electronics, Chinese Academy of Sciences, Beijing, China Walter Zamboni, DIEM - Università degli studi di Salerno, Fisciano, Salerno, Italy Junjie James Zhang, Charlotte, NC, USA

The book series Lecture Notes in Electrical Engineering (LNEE) publishes the latest developments in Electrical Engineering - quickly, informally and in high quality. While original research reported in proceedings and monographs has traditionally formed the core of LNEE, we also encourage authors to submit books devoted to supporting student education and professional training in the various fields and application areas of electrical engineering. The series covers classical and emerging topics concerning:

• Communication Engineering, Information Theory and Networks
• Electronics Engineering and Microelectronics
• Signal, Image and Speech Processing
• Wireless and Mobile Communication
• Circuits and Systems
• Energy Systems, Power Electronics and Electrical Machines
• Electro-optical Engineering
• Instrumentation Engineering
• Avionics Engineering
• Control Systems
• Internet-of-Things and Cybersecurity
• Biomedical Devices, MEMS and NEMS

For general information about this book series, comments or suggestions, please contact leontina. [email protected]. To submit a proposal or request further information, please contact the Publishing Editor in your country:
China: Jasmine Dou, Editor ([email protected])
India, Japan, Rest of Asia: Swati Meherishi, Editorial Director ([email protected])
Southeast Asia, Australia, New Zealand: Ramesh Nath Premnath, Editor ([email protected])
USA, Canada: Michael Luby, Senior Editor ([email protected])
All other countries: Leontina Di Cecco, Senior Editor ([email protected])
** This series is indexed by EI Compendex and Scopus databases. **

More information about this series at https://link.springer.com/bookseries/7818

Jiande Sun, Yue Wang, Mengyao Huo, Lexi Xu, Editors

Signal and Information Processing, Networking and Computers Proceedings of the 8th International Conference on Signal and Information Processing, Networking and Computers (ICSINC)


Editors
Jiande Sun, School of Information Technology and Engineering, Shandong Normal University, Jinan, China
Yue Wang, China Academy of Space Technology, Beijing, China
Mengyao Huo, Beijing University of Posts and Telecommunications, Beijing, China
Lexi Xu, China Unicom, Haidian, Beijing, China

ISSN 1876-1100
ISSN 1876-1119 (electronic)
Lecture Notes in Electrical Engineering
ISBN 978-981-19-3386-8
ISBN 978-981-19-3387-5 (eBook)
https://doi.org/10.1007/978-981-19-3387-5

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

It is our great honor to welcome you to the 8th International Conference on Signal and Information Processing, Networking and Computers (ICSINC 2021). The ICSINC 2021 Committee has been monitoring the evolving COVID-19 pandemic and decided to delay the 2021 edition of the conference from May to December.

ICSINC 2021 provides a forum for researchers, engineers and industry experts to discuss recent developments, new ideas and breakthroughs in signal and information processing schemes, computer theory, space technologies, big data and related areas. ICSINC 2021 received 243 submissions, of which 178 papers were accepted and included in the final conference proceedings. The accepted papers will be presented and discussed in 27 regular technical sessions and two workshops.

On behalf of the ICSINC 2021 committee, we would like to express our sincere appreciation to the TPC members and reviewers for their tremendous efforts. We especially appreciate all the sponsors for their generous support and advice, including Springer, China Unicom, Shandong Normal University and the HuaCeXinTong company. Finally, we would also like to thank all the authors for their excellent work and cooperation.

Qingliang Zeng
Songlin Sun
Yue Wang
ICSINC 2021 General Chairs


Organization

Committee Members

International Steering Committee
Songlin Sun, Beijing University of Posts and Telecommunications, China
Ju Liu, Shandong University, China
Chenwei Wang, DOCOMO Innovations (DoCoMo USA Labs), USA
Jiaxun Zhang, China Academy of Space Technology, China
Takeo Fujii, The University of Electro-Communications, Japan
Martin Haardt, Ilmenau University of Technology, Germany
Cecilia Zanni-Merk, INSA Rouen Normandie, France
Yuanjie Zheng, Shandong Normal University

General Chairs
Qingliang Zeng, Shandong Normal University
Songlin Sun, Beijing University of Posts and Telecommunications, China
Yue Wang, China Academy of Space Technology, China

Technical Program Committee Chairs
Songlin Sun, Beijing University of Posts and Telecommunications, China
Gang Sun, China Academy of Space Technology, China
Jiande Sun, Shandong Normal University, China


Publication Chairs
Xinmin Song, Shandong Normal University, China
Jia Zhang, Shandong Normal University, China

General Secretaries
Jiaqi Zou, Beijing University of Posts and Telecommunications, China
Yufeng Yan, Beijing University of Posts and Telecommunications, China

Sponsors
Springer
China Unicom
HuaCeXinTong
Shandong Normal University

Contents

Space Technology I
Averaged Equation of Satellite Relative Motion in an Elliptic Orbit . . . 3
Rucai Che
HDR Fusion Algorithm Based on PCNN and over Exposure Correction . . . 12
Fang Dong and Jinyi Guo
A Dynamic Range Analysis of Remote Sensor Based on PcModWin . . . 21
Zhao Ye, Gaochao, Ye Xue, and Chao Wang
The Analog Variable Observation Method for S3MPR Controller . . . 35
Qifeng Xu, Luwei Liao, Junchao Song, Chengrong Cao, Xingjun Lu, and Guojun Wang
Analysis and Design of Aerospace Flight Control Knowledge Base . . . 44
Zhaoqiang Yang, Jiang Guo, Jian Yang, Ruihua Wang, Daoji Wang, and Yang Wu
Study on Planning Strategy of TT&C Resource Scheduling Based on Data Mining . . . 54
Ying Shen, Huili Gao, Zhaoqiang Yang, Jianjun Zheng, and Wei Zhao
Research on Product Assurance Parallel Work of Spacecraft Control System . . . 62
Xu Jingyu, Yu Songbai, Wang Hong, and Ji Ye
Research on Inter-satellite Ranging Technology Based on Proximity-1 Protocol . . . 68
Wenxia Xu, Lihua Zuo, and Jingshi Shen
SpaceWire Network Configuration Method Based on Digraph . . . 76
Yu Junhui, Niu Yuehua, and Mu Qiang


LDPC Encoding and Decoding Algorithm of LEO Satellite Communication Based on Big Data . . . 84
Nan Ye, Jianfeng Di, Jiapeng Zhang, Rui Zhang, Xiaopei Xu, and Ying Yan
LDPC Coding and Decoding Algorithm of Satellite Communication in High Dynamic Environment . . . 92
Xiaoni Wu, Xuexue Qin, Xiaodong Ma, Dayang Zhao, Dan Lin, and Yifeng He

Analysis of Optimal Orbital Capture Maneuver Strategy to LEO Spacecraft Based on Space Debris Collision . . . . . . . . . . . . . . . . . . . . . . 100 Hong Ma, Tao Xi, Shouming Sun, Jianmin Zhou, Qiancheng Wang, Zhenguo Xiao, and Wei Zhang Research on a Centralized Thrust Deployment Method for Space Satellite Swarms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111 Yuan An, Yaruixi Gao, Xu Yang, Ruolan Zhang, Minzhang Han, and Yongjie Zhu Research on Navigation and Communication Fusion Technology Based on Integrated PNT System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 Zhao Wenwen, Zhu Xiaofeng, and Hu Jintao A Solar Array on Orbit Output Power Prediction Method for Satellite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128 Jiang Dongsheng, Peng Mei, Yang Dong, Jing Yuanliang, and Du Qing Conception and Prospects of Exploration and Mining Robots on Lunar Base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 Zhihui Zhao, Wei Zhu, and Ning Xia Analysis of Orbit Determination Accuracy of GEO Satellite Navigation Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146 Xiusong Ye, Hong Ma, Yang Yang, Yuancang Cheng, and Xing Liu Variable Bandwidth Digital Channelization Algorithm and Engineering Realization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156 Chen Dan, Wang Xiaotao, Zhang Qi, Shi Min, and Wang Xin A Design Framework for Complex Spacecraft Systems with Integrated Reliability Using MBSE Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . 165 Jingyi Chong, Haocheng Zhou, Min Wang, and Yujun Chen Intelligent Task Management of Infrared Remote Sensing Satellites Based on Onboard Processing Feedback . . . . . . . . . . . . . . . . . . . . . . . . 174 Xinyu Yao, Yong Yuan, Fan Mo, Xinwei Zhang, and Shasha Zhang


Fragmentation Technology of Spacecraft TT&C Resources Under Multi-constraint Condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182 Jun Liu, Jun Wei, Yuan An, Yaruixi Gao, Feng Zhang, Ruolan Zhang, and Qifeng Sun Key Technology Research and On-Orbit Verification of Beidou Short Message Communication System of LEO Satellite . . . . . . . . . . . . 189 Ren Xiaohang, Shan Baotang, Xu Dinggen, Wang Po, and Wang Baobao Summary of Artificial Neural Network Effectiveness Evaluation Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199 Zhang Yue and Liang Min Experiment Study on the Impact Limit of Basalt/Aramid Stuffed Whipple Shields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206 Jiangkai Wu, Runqiang Chi, Guotong Sun, Zengyao Han, Baojun Pang, and Shigui Zheng Application of Software Remote Synchronization Mode in Aerospace Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216 ManLi Li, XiaoHong Liang, and XingLong Han Pointing Precision Modification Method for the Low Orbit Satellite in Ka-Band by the Triaxial Antenna . . . . . . . . . . . . . . . . . . . . . . . . . . . 223 Fei Gao, Mingnuan Qin, Min Liang, and Zhuo Zhang Theoretical Analysis and Experimental Study on Oxidant Depletion Cutoff Method of Satellite Dual Mode Propulsion System . . . . . . . . . . . 232 Zhen Lin, Mengjie Wang, and Xiangning Li Parameter Optimization Design and Application Analysis of Water-Based Propulsion System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241 Wenjuan Yin and Zhen Lin A New Tracking Measurement Method Based on QPSK Spread Spectrum Communication Signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250 Zhimei Yang, Hao Tang, Xuan Li, and Lixin Zhang Design and Application of OBDH Subsystem for Gaofen-4 Satellite . . . 258 Liu Xin, Wang Luyuan, Wei Yongquan, Yang Peiyao, and Zhao Shengdong Autonomous Mission Planning Software Design and Testing of the Agile Earth Observation Satellite . . . . . . . . . . . . . . . . . . . . . . . . . 265 Xiutao Fu, Xiaogang Dong, Xiaofeng Li, Fan Xu, and Shimin He Research for Non-cooperative Space Objects Detection Methods Based on Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274 Yunfan Lei, Hongjun Zhong, Long Wang, and Yanpeng Wu


Research on a BSDiff-Based System-Level On-Orbit Maintenance Technology of Spacecraft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283 Yang Peiyao, Zhang Yahang, Liu Xin, Yang Zhigang, Dong Zhenhui, and Zhao Fangyuan Research on the Risk Identification Index System of Software Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291 Xiaohong Liang, Manli Li, and Jingjing Zhao Research on Time-Frequency Location Algorithm of Launch Vehicle Vibration Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300 Bin Che, Wendong Zhong, Jue Wang, Zhenglei Yang, and Yangsongyi Su Study on Disturbing Moment Compensation for the Satellite with Large-Scale Rotating Antennas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310 Wei Fan, Yin Zhang, and Haibo Zeng Electromagnetic Interference Posed by Wireless Devices Inside Airproof Cabin of Manned Spacecraft . . . . . . . . . . . . . . . . . . . . . . . . . . 317 Yan Liu, Kang Yu, and Ruixun Chen Security Enhancement for Noise Aggregation in DVB-S2 Systems . . . . . 326 Ziqi Zhao, Na Zhou, Hanyu Zheng, Pengfei Qin, and Longteng Yi A Pipelined Arithmetic Architecture for the Motion Planning of Multi-DOF Manipulator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335 Yalong Pang, Shuai Jiang, Jiyang Yu, and Shenshen Luan Information Hiding in Space Data Link Protocols Based on Spare Value of Fields in Frame Header . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344 Guang Wu and Zhelei Sun Space Technology II Optimal Escape Strategy with a Single Impulse in Space Attack-Defense . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355 Zhengfei Peng, Yang Guo, Shaobo Wang, Yanhua Tao, and Dan Wei A Method of Using Control Chart to Monitor Software Key Performance Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365 Fan Zhang, Wei Guo, Liang Qin, Yanzhao Zhao, and Zexi Li Accurate Pointing Control Algorithm for Space Based Movable Antenna Tracking Ground Station Using GPS . . . . . . . . . . . . . . . . . . . . 376 Yong Zhou, Chao Ma, Xiaona Yuan, Jie Pang, and Chao Dong Telemetry Data Prediction Method Based on Time Series Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385 Zhiqiang Li, Hongfei Li, Xiangyan Zhang, and Huadong Tian


A Query Optimization Method for Real-Time Embedded Database Based on Minimum Arborescence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395 Xudong Li, Bo Liu, Jian Xu, Jianyu Yang, and Mengdan Cao Design of Reconfigurable Test System for Information Interface Between Space Station Combination with Multi-aircraft . . . . . . . . . . . . 404 Ying Peng, Jing Xu-zhen, Li Jichuan, and Tan Zheng A Dual Model Control of Sensorless High Frequency Permanent Magnet Synchronous Motor Based on the Space Vector Pulse Width Modulation and Six-Step Mode . . . . . . . . . . . . . . . . . . . . . . . . . . 411 Yanzhao He, Zhenyan Wang, Quanwu Wang, Qingrui Bu, and Jufeng Dai Development Mode of Remote Sensing Satellite OBDH Software Based on Prototype Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419 Dong Zhenhui, Zhang Yahang, Wang Xianghui, Yang Peiyao, Zhang Hongjun, Yuan Yong, and Liu Yiming Design of Closed-Loop Tracking Mode for Remote Sensing Satellite Data Transmission Antenna . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427 Yong Yuan, Yufei Huang, Xinyu Yao, Xinwei Zhang, and Zhenhui Dong High Accuracy Orbit Control Strategy of Change’4-Relay Satellite . . . . 435 Tao Zhang, Rongxiang Cao, Jianmin Zhou, Jianzhao Ding, Yifeng Guan, and Weiguang Liang Observation Data Quality Evaluation of GPS Receiver on Agile Platform Based on Ground Simulation . . . . . . . . . . . . . . . . . . . . . . . . . 443 Jinghua Wang, Xingchen Liu, Ning Liu, Wenwen Li, and Yilong Bai A Station Guidance Method Based on Combination Deviation Spacecraft Orbit Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452 He Yufan, Huang Jingqi, Li Junfeng, and Zhao Ning A Distributed Real-Time Orbit Determination Algorithm Based on Asynchronous Multi-sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458 Shanpeng Sun, Jingqi Huang, Jue Wang, Junying Hong, and Yangsongyi Su An Equivalent Calculation Method of VLBI Instantaneous Time Delay Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467 Huang Jingqi, Wang Fan, Ye Nan, Wang Yanrong, and Sun Shanpeng Design and Research of Satellite Fault Real-Time Simulation and Verification Platform Based on FMI Standard Model . . . . . . . . . . . 473 Liu Jian, Liu He, and Wang Huamao A Method for Calculating Solar Pressure of TDRS with a Movable IOLA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482 Yin Zhang and Dong Han


Research on Platform Construction of Small Satellite GNC System . . . . 490 Xin Zhang and Qingyuan Tang The Storage Optimization of Test Cases and Test Results in Spacecraft Automatic Test Based on JSON . . . . . . . . . . . . . . . . . . . . 498 Hongjiang Song, Haiyang Chu, Xin Wen, Wei Lv, Xiaoyu He, Haixiang Zhang, and He Gao Research on the Modeling and Application of Intelligent Factory Facing the Complex Space Optical System . . . . . . . . . . . . . . . . . . . . . . . 508 Ying Che, Hui-li Fan, Feng Guo, Yong-sheng Pei, and Li-ling Liu A New Space-Based Optical Surveillance System . . . . . . . . . . . . . . . . . . 517 Yang Sun, Ling-ling Chen, Xuan Li, Ye Yang, and Ling Liu The Application of MPLS VPN in Improving the Reliability of Test Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526 Hui Wang, Yuhua Cao, and Quanwu Wang Performance Analysis of the Photon-Counting LIDAR Based on the Statistical Property . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534 Jun Dai, Shaohui Li, Fei Gao, Haiyi Cao, Gaofeng Guo, Xigang Liu, and Qianying Wang A Space Vector Transformation Based Dynamic Image Rotation Compensation Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542 Bin Ren, Qingkun Xie, Jingying Bian, Qian Lu, Xiaoqi Liu, and Le Zhang Research on Segmented Weighted Multi-vector Attitude Determination Method Based on On-Orbit Accuracy of Star Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551 Wang Jiawei, Jing Quan, Li Shaohui, Gao Hongtao, Mo Fan, and Chen Xi The Research of Micro-vibration Nonlinear Characteristic Produced by Spacecraft Mechanical Momentum Wheel . . . . . . . . . . . . . . . . . . . . . 560 Quanwu Wang, Yiping Yao, Zekun Yang, and Xiufeng Zhong Design of On-Orbit Monitoring Method for Small Satellite Payload Equipment Based on Grey Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . 568 Yandong Han, Jian Shi, Guorui Yan, Da Lv, and Lei Ma Investigation on EMC Design of Spacecraft Under High-Power Microwave . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577 Ran Li, Dianjun Wang, and Hanlu Ground Experimental Verification of High-Stability Temperature Control System and Flight Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . 583 Ran Wei, Yupeng Zhou, Yelong Tong, Qing Yang, Ming Li, Lihua Zhang, and Xin Zhao


Experimental Verification of Discharge of Two Parallel Sets of Batteries in Mars Probe During EDL . . . . . . . . . . . . . . . . . . . . . . . . 592 Jing Wang, Haijin Li, Yihong Liu, Xinjun Liu, Hao Mu, Yi Yang, Dong Yang, Xuerong Qiao, and Yao Zhou Reliability Design and Experimental Verification of Impact Sensor Signal Cable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599 Yi Yang, Yuqi Qin, Jing Wang, Bingxin Zhao, Yihong Liu, and Yue Han Simulation and Analysis of Influence of Group Delay Distortion on Satellite-to-Ground and Satellite-to-Satellite Link Channel . . . . . . . . 607 Rui Wang, Yu Wang, and Xige Hu The Influence of Ka Band Satellite Transponder Nonlinearity on High-Speed Data Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . 616 Zhiyu Zhou, Junyi Zhao, Baoxiang Song, and Bingzhe He GPS and Laser Altimeter Data Alignment for Snow Roughness Measurement in Antarctic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621 Sheng Zhang Angles-Only Onboard Initial Relative Orbit Determination with Auxiliary Satellite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 629 Jiaxing Li, Li Yuan, Cong Zhang, and Sihang Zhang Space-ROS: A Novel Real-Time Operating System Architecture and Implementation for Space Robot . . . . . . . . . . . . . . . . . . . . . . . . . . . 639 Zhi Ma, Anhua Deng, Jingjing Jiang, Hongbiao Liu, and Xi Chen The Conception of the Construction of Remote Sensing Satellites Information Quality Evaluation Test Field . . . . . . . . . . . . . . . . . . . . . . . 647 Dalei Luo, Fengtian Bai, Chenming Bao, Qian Liu, and Chongying Lu Design and Optimization of Permanent Magnet Brushless DC Motor Control System for Flywheel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655 Chenming Bao, Bin Li, Ailin Kong, and Dalei Luo On-orbit Research and Validation on the Lifetime Capability of Propulsion System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664 Jialong Ji, Yuan Wang, Xin Miao, and Tao Chen Observer-Based Variable Structure Control for Spacecraft with Actuator Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 673 Haibin Qin, Jianxiang Xi, Jun Liu, Xu Yang, and Nengjian Tai Research on the Method of Satellite Energy Real-Time Dispatching . . . 682 Ming Qiao and Xiao Fei Li


System Error Registration Method of Bearings-Only Passive Location System for Surface Ships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 688 Zhi Guo, Chunyun Dong, Zhenyong Bo, Xu Yang, Yang Yang, Tao Xi, and Jian Zhang Research on Passive Location of Maneuvering Target Based on IMM-SRCKF Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696 Chunyun Dong, Zhi Guo, Xiaolong Chen, Xu Yang, and Yang Yang A Standard Driver Design of Onboard Control Computer . . . . . . . . . . 704 Chenlu Liu, Jian Xu, Zhi Ma, Fei Peng, Mengdan Cao, and Jianyu Yang Optimal Multiple-Impulse Noncoplanar Rendezvous Trajectory Planning Considering the J2 Perturbation Effects . . . . . . . . . . . . . . . . . 712 Guangde Xu, Zongbo He, Huiguang Zhao, Qiang Zhang, Yanjun Xing, and Cong Feng PIC-MCC Simulation of the Temporal Characteristics of the Plasma in a Hall Thruster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 721 Rui Chen, Li Wang, and Xingyue Duan Research on Automatic Measurement of Aerospace Electrical Connector Wire . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 730 Yan Li, Songyin Sui, Qiong Wu, MingHua Zhang, GuangJun Chen, and Jun Wang Correctness Verification of Aerospace Software Program Based on Hoare Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 737 Jian Xu, Hua Yang, Yanliang Tan, Yukui Zhou, and Xiaojing Zhang Research on Image Restoration in Remote Sensing Quick-View System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 744 Yunsen Wang, Chengzhi Ma, Weiguang Cui, Jiang Liu, and Yang Kang An Image Denoising Algorithm Combining Global Clustering and Low-Rank Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 753 Hongjian Guo, Yaruixi Gao, Nengjian Tai, Xu Yang, Shaoyu Zhang, and Xiaobo Hui Constraint Self-updating Method for Autonomous Task Planning of Earth Observation Satellites Based on In-Orbit Data . . . . . . . . . . . . . 762 Dong Sun, Cong Zhang, and Jiaxing Li A Novel Message Communication Mechanism Based on Partition Name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771 Peng Fei, Xu Na, and Ma Zhi An Application of Image Denoising Technique Based on Convolutional Neural Network in Star Tracker . . . . . . . . . . . . . . . . 779 Jingjin Li, Fei Peng, and Yizheng Wang


A Time System Based on All Digital Simulation Platform with Precise Clock . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 786 Ruijun Li, Tao Zhang, Hongjing Cheng, Yanfang Fan, and Jiali Zheng Medium Power Class-E Amplifier Design . . . . . . . . . . . . . . . . . . . . . . . . 794 Lin Pan, Xiaoran Li, and Gang Dong Ka Band Satellite Communication Adaptive Coding and Modulation Technology Based on Artificial Intelligence LDPC Code . . . . . . . . . . . . 802 Guozhi Rong, Yikai Wang, Gang Dong, and Yifeng He Encoding and Decoding Algorithm of LDPC for Low Orbit Satellite Communication Based on Artificial Intelligence . . . . . . . . . . . . . . . . . . . 810 Yifeng He, Yijian Zheng, Jingyi Zhang, Wenze Qi, and Dayang Zhao Capacity Analysis of Relay Channel Based on Computer Technology and LDPC Coding Cooperation and Satellite Communication . . . . . . . . 818 Rui Zhang, Shuanglan Mao, Nan Ye, Bo Zhang, Liang Peng, and Zhihang Mao Design and Verification of HY-2A Satellite Precision Orbit Determination Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827 Jie Liang, Yi Fang, and Shilin Dong Design of High-Reliability LDPC Code Algorithm for Relay Satellite Communication System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 833 Xiaoran Li, Pan Lin, and Yifeng He Research on Real-Time Monitoring Method of Spot Beam Antenna Direction on Satellites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841 Zhi Yuan, Lina Wu, and Yinghua Li A Novel SOC Estimation Approach for the Lithium-Ion Battery Pack Using in the Deep Space Landers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 849 Hao Mu, Zhigang Liu, Wang Jing, and Dong Yang Analysis of Characteristics and Requirements of Large LEO Constellations Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 857 Da Tang, Guoting Zhang, Tang Li, and Wanhong Hao Research on Space Project Systems Technological Risk Assessment Based on Multi-level Fuzzy Comprehensive Evaluation . . . . . . . . . . . . . 866 Lily Wang Research on the System Construction of Spaceborne Adaptive Software and Its Key Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 874 Xiaofeng Li, Qian Wu, and Xiaolong Yang Air-Space-Ground Integrated Information Network: Technology, Development and Prospect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 883 Gang Dong, Lin Pan, Yi Zhang, and Yanan Liu


High-Resolution Satellite Radiation Big Data All-Statistical Intelligent Screening Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 892 Huimin Zhong, Xi Wang, Wenyong Yu, Xiaoxiang Long, and Xiaoyan Wang EMC System Control on XX-2 Satellite . . . . . . . . . . . . . . . . . . . . . . . . . 899 Liping Zhou, Hua Zhang, Lili Cheng, and Yajie Zhang Transmission and Control of the Image Data from Remote Sensing Satellite to the Ground Station . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 904 Xiao-ming Wang, Xiao-dong Wu, and Bo Han Micro-vibration Attenuation Design, Analysis and Verification for Agile Remote Sensing Satellite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 913 Xingsu Gao, Guangyuan Wang, Yongsheng Wu, Yue Wang, and Yu Zhao Practice and Research on FMEA of Telecommunication Satellite System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 921 Hao Zhang, Zong Ren Wang, Xue Wang Wang, and Feng Chun Lin Characterization of On-Orbit Temperature and Pressure Performance for Hydrazine Propulsion System . . . . . . . . . . . . . . . . . . . 931 Yong Gao, Yuan Wang, Ning Yao, Jialong Ji, and Jiaai Yang On-Orbit Calibration of a 7-DOF Space Arm . . . . . . . . . . . . . . . . . . . . 939 Rui Liu, Shuai Xiao, and Chi Zhang Correlation Analysis and Forecast of Electric Vehicle Charging Stations Considering Seasonal Factors . . . . . . . . . . . . . . . . . . . . . . . . . . 945 Qianxu Yu, Jing Ren, and Shanshan Wei Inversion Modeling Study of Dissolved Oxygen in Wangyu River Based on BP Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 953 Min Jin, Xinhua Qian, Lei Wu, Yuhu Li, Bangguo Hu, and Wangmeng Chen Research on Satellite Energy Autonomous Security Strategy . . . . . . . . . 962 Xiaofei Li, Ming Qiao, and Bingxin Zhao Research and Development of High Speed and Large Bandwidth MIMO Data Transmission Test System . . . . . . . . . . . . . . . . . . . . . . . . . 969 Chai Yuan, Shuai Han, and Zhiqiang Li Multimedia and Communication Continuous Evolution for Efficient Neural Architecture Search Based on Improved NSGA-III Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 979 Ziyan Wang, Feng Qi, and Liming Zou


Research on Individual Identification of Radar Emitter Based on Multi-domain Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 987 Bo Li, Songlin Sun, ZhiKai Su, and Chenwei Wang Fast HEVC Intra CTU Partition Algorithm Based on Lightweight CNN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 996 Rifa Zhao, Hai Huang, Ronghui Zhang, and Xiaojun Jing A New Pulmonary Nodule Detection Based on Multiscale Convolutional Neural Network with Channel and Attention Mechanism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1004 Yingying Zhao, Jiaxin Wang, Xiaomin Wang, and Honglin Wan A Novel Attention Mechanism for Scene Text Detection . . . . . . . . . . . . 1011 Jiaxin Wang, Yingying Zhao, Xiaomin Wang, and Honglin Wan Epileptic Seizure Detection Based on Multi-synchrosqueezing Transform and Multi-label Classification . . . . . . . . . . . . . . . . . . . . . . . . 1017 Yuying Yang, Jiazheng Zhou, Jianping Liu, and Qi Yuan Mobility-Aware Task Offloading Scheme Based on Optimal System Benefit in Multi-access Edge Computing . . . . . . . . . . . . . . . . . . . . . . . . 1025 Chuanfen Feng and Jiande Sun Design of Electronic Intelligent Code Lock . . . . . . . . . . . . . . . . . . . . . . 1034 Qian Bao, Zixuan Wang, Xinpeng Wang, Zhixin Guo, Deyan Wang, Jia Guo, and Chunxing Wang Integrating Ensemble Learning and Information Entropy for Diabetes Diagnosis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1042 Cheng Li, Yiyang Xiong, Xuezhi Zhang, Ruitong Liu, and Xiaojun Jing The Optimal UDP Data Length in Ethernet . . . . . . . . . . . . . . . . . . . . . . 1050 Chunhao Li, Lingfeng Fang, Songlin Sun, Tingting An, and Chenwei Wang An Attention-GRU Based Gas Price Prediction Model for Ethereum Transactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1058 Yunxia Feng, Yuan Sun, and Jing Qu A Research on an Unknown Radar Signal Sorting with DBSCAN-BP Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1067 Zhikai Su, Songlin Sun, Yuhao Liu, and Chenwei Wang An Industrial Internet Delay Optimization Scheme Based on Time Sensitive Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1077 Lei Wu, Ju Liu, Tianfeng Xu, and Xueping Shao The Key Technology of Ultra-high-Speed Laser Communication Principle Verification System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1087 Shidong Lv, Yilong Bai, Xinghua Wang, and Jinghua Wang


Inappropriate Visual Content Detection Based on the Joint Training Strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1095 Xuejing Wang, Ju Liu, Xiaoxi Liu, Yafeng Li, and Luyue Yu Spectral Optimization for Multi-carrier NOMA-Enabled Industrial Internet of Things . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1105 Tianfeng Xu, Ju Liu, Zhichao Gao, Lei Wu, Shanshan Yu, and Xueping Shao Local Edge Structure Guided Depth Up-Sampling . . . . . . . . . . . . . . . . . 1113 Chunxing Wang, Zeying Xue, Lanjing Zu, Wenbo Wan, Zixuan Wang, Zhixin Guo, and Yannan Ren Deep Learning and XGBoost Based Prediction Algorithm for Esophageal Varices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1121 Xinyi Chen, Jiande Sun, Zhishun Wang, Yanling Fan, and Jianping Qiao A Data-Driven Model for Bearing Remaining Useful Life Prediction with Multi-step Long Short-Term Memory Network . . . . . . . . . . . . . . . 1129 Zhen Zhao, Zehou Du, Kaixuan Yang, Haoqin Sun, Jialong Wei, and Yang Liu State Estimation of Networked Control Systems with Packet Dropping and Unknown Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1139 Lei Tan, Xinmin Song, Guoming Liu, and Zhe Liu Self-attention Mechanism Fusion for Recommendation in Heterogeneous Information Network . . . . . . . . . . . . . . . . . . . . . . . . . 1147 Zhenyu Zang, Xiaohui Yang, and Zhiquan Feng DAMNet: Hyperspectral Image Classification Method Based on Dual Attention Mechanism and Mix Convolutional Neural Network . . . . . . . 1156 Zengzhao Sun, Huamei Xin, Bo Zhang, Jinwen Ren, Zhenye Luan, Hongzhen Li, Xingrun Wang, and Jingjing Wang Dual-Discriminator Based Multi-modal Medical Fusion . . . . . . . . . . . . . 1164 Haoran Wang, Zhen Hua, and Jinjiang Li Topology Design of Aircraft Cargo Loading Control System . . . . . . . . . 1173 Jie Deng, Chunyun Dong, Meng Nan, Siqing Wu, and Xiaolong Chen Information Bottleneck Attribution Based Retinal Disease Classification Using OCT Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1181 Sehrish Aslam, Yuanjie Zheng, Xiaojie Li, Junxia Wang, Muhammad Zakir Ullah, Gogo Dauda Kaizolu, and Neelam Gohar Distributed Consensus for Discrete Multi-agent Systems with Time Delay in the Transmission . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1189 Qipeng Hu, Yanli Zhu, Zhenhua Wang, and Liming Zou


Simulation Design of Space Gravitational Wave Detection Satellite Formation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1197 Wentao He, Xiande Wu, Yuheng Yang, Huibin Gao, Haiyue Yang, Yuqi Sun, and Songjing Ma Scheme Design for Cargo Loading Control System of Cargo Plane . . . . 1205 Siqing Wu, Chunyun Dong, Wenjuan Lei, Jie Deng, and Xiaolong Chen Chinese Word Segmentation of Ideological and Political Education Based on Unsupervised Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1213 Xinghai Yang, Wenjing Zang, Shun Meng, Jiafeng Liu, and Yulin Zhang Random Design and Quadratic Programming Detection of Cayley Code in Large MIMO System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1222 Da-Peng Zhang, Fu-Yun Ma, Ting Liu, Ning Liu, Li-Jun Bian, and Zhi-Min Zhang The Design and Implementation of Mobile Operation Platform Based on Domain Driven Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1230 Wenjuan Shi, Shulong Wang, Jing Ma, Junquan Zhang, Yuxi Liu, Zaiyu Jiang, and Yang Hu Research on Ensemble Prediction Model of Apple Flowering Date . . . . 1238 Fan Zhang, Fenggang Sun, Zhijun Wang, and Peng Lan Apple Leaf Disease Identification Method Based on Improved YoloV5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1246 Yunlu Wang, Fenggang Sun, Zhijun Wang, Zhongchang Zhou, and Peng Lan Analysis of Apple Replant Disease Based on Three Machine Learning Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1253 Guoen Chen, Li Xiang, Zhiquan Mao, Fenggang Sun, and Peng Lan Improved Federated Average Algorithm for Non-independent and Identically Distributed Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1261 Zhongchang Zhou, Fenggang Sun, Peng Lan, and Zhijun Wang A Monitoring Scheme for Pine Wood Nematode Disease Tree Based on Deep Learning and Ground Monitoring . . . . . . . . . . . . . . . . . . . . . . 1268 Chengkai Ge, Fengdi Li, Fenggang Sun, Zhijun Wang, and Peng Lan An Optical Detection Method with Noise Suppression for Focal Panel Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1276 Bin Wang, Yuning Wang, and Xiaoyan Wang Orientation-Aware Vehicle Re-identification via Synthesis Data Orientation Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1285 Hong Wang and Ziruo Sun

xxii

Contents

Opportunities and Challenges of Artificial Intelligence Education in Secondary Vocational Schools Under the Background of Emerging Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1293 Ying Qu, Liancheng Xu, Ran Lu, and Min Gao Research on the Hotspots and Trends of Student Portrait Based on Citespace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1302 Jie Lv, Ran Lu, and Zhe Liu An Overview on Digital Content Watermarking . . . . . . . . . . . . . . . . . . 1311 Wang Qi, Bei Yue, Chen Wangdu, Pan Xinghao, Cheng Zhipeng, Wang Shaokang, Wang Yizhao, and Wang Chenwei Location-Independent Human Activity Recognition Using WiFi Signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1319 Gogo Dauda Kiazolu, Sehrish Aslam, Muhammad Zakir Ullah, Mingda Han, Sonkarlay J. Y. Weamie, and Robert H. B. Miller Design of Text Summarization System Based on Finetuned-BERT and TF-IDF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1330 Yuyang Liu, Kevin Song, Jin He, Songlin Sun, and Rui Liu Rogue WiFi Detection Based on CSI Multi-feature Fusion . . . . . . . . . . . 1339 Zhaoyuan Mei, Songlin Sun, and Chenwei Wang Panoramic Video Quality Assessment Based on Spatial-Temporal Convolutional Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1348 Tingting An, Songlin Sun, and Rui Liu Fast CU Partition Algorithm for AVS3 Based on DCT Coefficients . . . . 1357 Xi Tao, Lei Chen, Fuyun Kang, Chenggang Xu, and Zihao Liu Research on Recognition Method of Maneuver and Formation Tactics in BVR Cooperative Combat Based on Dynamic Bayesian Network . . . 1365 Yan Zhang, Yuyang Liu, Yinjing Guo, Xiaodong Wang, Lei Feng, Guo Li, and Yangming Guo Fast Intra Prediction Algorithm for AVS3 . . . . . . . . . . . . . . . . . . . . . . . 1376 Lingfeng Fang, Songlin Sun, and Rui Liu Big Data Workshop Research on Interference Source Identification and Location Based on Big Data and Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1387 Hui Pan, ChengFan Zhang, Saibin Yao, JiuCheng Huang, YiDing Chen, and Long Zhang

Contents

xxiii

Research on Automated Test and Analysis System for 5G VoNR Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1396 Zhenqiao Zhao, Xiqing Liu, Jie Miao, Shiyu Zhou, Xinzhou Cheng, Lexi Xu, Xin He, and Shuohan Liu Mobile Network Monitoring Technology for City Freeway Based on Big Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1404 Baoyou Wang, Saibin Yao, Jiucheng Huang, Hui Pan, and Chao Liu 5G Uplink Coverage Enhancement Based on Coordinating NR TDD and NR FDD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1412 Jinge Guo, Yang Zhang, Bao Guo, Zhongsheng Fan, and Huangtao Song Application of Improved GRNN Algorithm for Task Man-Hours Prediction in Metro Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1421 Zhengyu Zhang, Shuying Wang, and Jianlin Fu Resident Area Determination Model Based on Big Data and Large Scale Matrix Calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1431 Chuntao Song, Yong Wang, Xinzhou Cheng, and Lexi Xu Research of New Energy Vehicles User Portrait and Vehicle-Owner Matching Model Based on Multi-source Big Data . . . . . . . . . . . . . . . . . 1440 Yuting Zheng, Tao Zhang, Guanghai Liu, Yi Li, Chen Cheng, Lexi Xu, Xinzhou Cheng, and Xiaodong Cao Cell Expansion Priority Recommendation Based on Prophet Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1449 Xiaomeng Zhu, Zijing Yang, Guanghai Liu, Yi Li, Lexi Xu, Xinzhou Cheng, and Runsha Dong Load and Energy Balance Strategy of SDN Control Plane in Mobility Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1458 Fuwei Zhang, Wenwen Ma, Dequan Xiao, and Hongyu Peng A Method for Intelligently Evaluation the Integrity of 5G MR Data . . . 1467 Yuan Fang, Qintian Wang, Ao Shen, Pengcheng Liu, Jinhu Shen, Bao Guo, Zetao Xu, and Jimin Ling Internet of Vehicles Product Recommendation Algorithm and Implementation Based on Ensemble Learning . . . . . . . . . . . . . . . . . 1475 Mingjing Zhang, Fei Su, Yanru Li, Guangtao Zhou, Liang Xin, and Tingting Ren Research and Verification of Power Saving Technology in 5G Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1484 Zetao Xu, Yuan Fang, Ao Shen, Yang Zhang, Pengcheng Liu, Jinhu Shen, Jimin Ling, and Qintian Wang

xxiv

Contents

Application of XGBOOST Model on Potential 5G Mobile Users Forecast . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1492 Xijuan Liu and Tianyi Wang Research on Methodology of Symmetry Optimization with Backpropagation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1501 Sitong Zhang, Youli Yu, and Chenyuan Ye Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1509

Space Technology I

Averaged Equation of Satellite Relative Motion in an Elliptic Orbit Rucai Che(&) Beijing Institute of Control Engineering, Beijing 100190, China [email protected]

Abstract. The precision model of relative motion is a necessity for satellite formation flying, but these models are complex for analysis and design, especially in elliptic orbit. For some satellite formation flying applications, the longterm formation maintenance and fuel-saving is more important for satellite life. Using the averaged analysis method over an orbit period, the averaged equation of satellite relative motion in an elliptic orbit is presented in this paper. Firstly, based on the homogeneous solutions of T-H equations, a simple averaged equation of relative motion is derived. Secondly, the improved averaged equation of relative motion which is described by the instantaneous orbit elements difference is developed for considering the orbital perturbation. The effectiveness of the proposed models is verified by four simulation cases which consider the orbit perturbation or not. The proposed model can eliminate the periodicity movement of satellite relative motion, and it is convenient for longterm formation flying designing and configuration controlling. Keywords: Elliptic orbit

 Satellite formation  Relative motion

1 Introduction In recent years, there has been a growing interest in satellite formation flying and there are many space applications of formation flying, such as stereo observation, SAR interferometry etc. [1]. The earliest and most prevalent model governing relative motion is given by the Clohessy-Wiltshire (C-W) equation [2] which assumes a circular reference orbit for target satellite, providing an analytic description of relative motion. The Tschauner-Hempel (T-H) equation [3] extend the C-W model to accommodate non-circular reference orbit, by expressing the equations of motion in terms of target orbit eccentricity and initial true anomaly [4]. Currently, there have been many scientific papers written on the subjects of high precision models and configuration designing for relative motion. Katsuhiko [5] developed expansion of T-H equation which considering J2 perturbation to improve the calculating precision. Mai [6] developed a graphical analysis for T-H equation based on the period orbit. Schaub [7] proposed a new presentation which is based on the orbital elements difference. For configuration controlling of relative motion, Vadali [8] designed a digital filter to

©

The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 J. Sun et al. (Eds.): Signal and Information Processing, Networking and Computers, LNEE 917, pp. 3–11, 2023. https://doi.org/10.1007/978-981-19-3387-5_1

4

R. Che

eliminate the period vibration which can reduce the fuel consumption. Balaji [9] has parameterized C-W equation in terms of mean value for configuration controlling. There is a wealth of literature on relative motion models. Nevertheless, the averaged equation over orbit period is useful for relative motion analysis, especially for long term prediction and formation configuration designing. Based on the average calculation in an orbit period, two different averaged relative motion equations are presented in this paper. Then, the characteristic of averaged models are also been developed. The analysis thought and calculation of this method is simple, and the effectiveness of the model was proved by digital simulations.

2 Relative Motions in Two-Body Orbit We take the orbit coordinate systems oxyz of the target satellite as the relative motion coordinate system. The origin is the center of the mass of the target satellite, and move in orbit with it. The x-axis coincides with geocentric vector r of the target satellite, and points from geocenter to target satellite. The y-axis is normal to x-axis in the orbit plane, and points to the direction of motion. The z-axis determined by right-hand rule, coincides with the angular momentum vector of the target satellite. The relative motion

Fig. 1. The oxyz and OXYZ coordinate.

coordinate system oxyz and geocentric inertial coordinate system OXYZ is shown in Fig. 1. 2.1

Relative Motion Model and Analytical Equation

Consider two satellites in elliptical orbits about a common gravitational source, as shown in Fig. 1. Employing the common designations associated with relative motions, tracking satellite denotes the second satellite, moving close to the target and with position rt . By form the relative position vector q ¼ ½x y zT . Based on the condition of q\\r, the relative motion equation in two-body orbit can be described by [4].

Averaged Equation of Satellite Relative Motion in an Elliptic Orbit

3 2 2e sin f x00 kf 6 x0 7 6 6 6 00 7 ¼ 6 1 4 y 5 4 2 y0 0 2



1þ 0

2 kf

32 3  2e ksin f x0 f 76 7 0 76 x 7 e cos f 74 0 5 5 y kf y 0

2 0

2e sin f kf

2e sin f kf

0

1

  2e sin f z00 kf ¼ z0 1

 k1 f 0

 0  z z

5

ð1Þ

ð2Þ

where kf ¼ 1 þ e cos f , e is the eccentricity of target satellite, f is true anomaly of target satellite, ðÞ0 ¼ dðÞ=df . In two body orbit, the analytical model of Eq. 1 can be written as follows [4]: xðf Þ yðf Þ

¼

zðf Þ

¼

x0 ðf Þ

¼

y0 ðf Þ

¼

0

z ðf Þ

¼

9 > = i> 1 1 2 d3 ð1 þ kf Þ sin f þ ½d1 e þ 2d2 e Hðf Þ cos f þ d1 þ d4 kf þ 2d2 eHðf Þ > > ; d k1 sin f þ d k1 cos f ½d1 e þ 2d2 e2 Hðf Þ sin f  ðd2 ek2 f þ d3 Þ cos f

¼

5 f

h

9 > ½d ek2 þ d3  sin f þ ½d1 e þ 2d2 e2 Hðf Þ cos f > = h2 f i h i 2 1 2 2 d3 ek2 sin f þ d ek  d e  2d e Hðf Þ sin f þ d ð1 þ k Þ þ 2d ek cos f 4 f 1 2 3 2 f f f > > ; ðd ek2 sin f  d k1 Þ sin f þ ðd k1 þ d ek2 sin f Þ cos f 5

f

ð3Þ

6 f

6 f

5 f

6

ð4Þ

f

where di ði ¼ 1; :::; 6Þ is the integral constant, Hðf Þ is integral expansion item. 2.2

The Averaged Equation of Relative Motion

It is well known that typical character of the T-H equations is periodicity of orbit because each equation contains trigonometric function. For applications to surveillance, the first goal is to survey the averaged equation in an orbit period. Because the integral expansion item Hðf Þ is very complex, and it is difficult to directly calculate the average equation. Nevertheless, the series of Hðf Þ can be used to express the equations. Based on the power series m km f ¼ ð1 þ e cos f Þ ¼ 1 þ me cos f þ    þ

mðm  1Þ    ðm  n þ 1Þ n e cosn f þ    n! ð5Þ

The integral expansion item Hðf Þ can be rewritten as Hðf Þ ¼ sin f  3e½

f sin2f sin3 f 3f 3 sin f cos3 f þ  þ 6e2 ½ sin f þ 10e3 ½ þ sin2f þ  þ     Hf0 2 4 3 8 16 4

ð6Þ

6

R. Che

where Hf0 ¼ sin f0  3e½

f0 sin2f0 sin3 f0 3f0 3 sin f0 cos3 f0 sin2f0 þ þ  þ 6e2 ½ sin f0 þ 10e3 ½ þ þ  16 2 4 3 8 4

ð7Þ

f0 is the true anomaly at initial time. For satellites in low earth orbit, e4 and higher items are very small which can be omitted in calculation. The Eq. 7 can be approximated as Hðf Þ  sin f  3e½

f sin2f sin3 f 3f 3 sin f cos3 f þ  þ 6e2 ½ sin f  10e3 ½ þ sin2f þ   Hf0 2 4 3 8 16 4

Then, the x-axis averaged motion in an orbit period can be calculated as Z o  1 2p n x¼ d1 e þ 2d2 e2 Hðf Þ sin f  ðd2 ek2 þ d Þ cos f df 3 f 2p 0

ð8Þ

ð9Þ

Substituting Eq. 8 into Eq. 9, omitting e4 and higher order items, we obtain x  d2p2 e

R 2p n h 2 f 2e sin f  3eð f sin 0 2 þ

sin f sin2f 4

i o Þ 2e sin fHf0  cos f ½1  2e cos f þ 3e2 cos2 f  df

¼ ð2e2 þ 3e3 Þd2

ð10Þ

In the same way the averaged relative position and averaged relative velocity in yaxis and z-axis can be obtained 9 x ¼ ð2e2 þ 3e3 Þd2 = 2 y ¼ d1  ð3e2 p þ 2eHf0 Þd2 þ ð1 þ e2 Þd4 ð11Þ ; e 3e3 z ¼ ð2 þ 8 Þd6 x0 y0 z0

9 ¼0 = ¼ ½3e2  6e3 d2 ; ¼0

ð12Þ

The averaged equations, Eq. 11 and Eq. 12, are simple and convenient for analysis and designing. In two-body elliptical orbit, if d2 6¼ 0, the relative motion in along-track direction is not closed. The averaged drift velocity in y-axis is y0 , which can be decided by eccentricity and initial constant d2 . The mean velocity in x-axis and z-axis is zero, which is correspond to the relative motion.

3 Averaged Equation of Relative Motion with Orbital Perturbation For LEO satellite, the earth’s non-spherical shape and atmosphere perturbation greatly affects the relative motion of satellite. The perturbations make the satellite rotate or drift, and the relative motion will not be closed. Equation 11 and Eq. 12 are not convenient to analyze the orbital perturbation, so the orbital elements will be considered in this section.

Averaged Equation of Satellite Relative Motion in an Elliptic Orbit

7

We denote r ¼ ða; e; i; x; X; MÞT

)

rt ¼ ðat ; et ; it ; xt ; Xt ; Mt ÞT

ð13Þ

where a is semi-major axis, e is eccentricity, i is inclination, x is perigee anomaly, X is right ascension of ascending node, M is mean anomaly. The orbital element differences between the target satellite and tracking satellite defined as dr ¼ rt  r. Based on the relations between the integral constant di and the orbital element difference dr0 [10] 9 2e2 e 1 1 sin x cot i cos x cot i > > > d d ; de ¼ d þ ; dx ¼ d  d þ d 2 0 2 3 0 4 5 6 = ag2 a ag2 ag2 ag2 g4 1 1 g cos x sin x sin x sin i cos x sin i > > dM0 ¼ ðd1  2eHf0 d2 Þ; di0 ¼ d5 þ d6 ; dX0 ¼ d5  d6 > ; a ag2 ag2 ag2 ag2 da0 ¼

where g ¼

ð14Þ

pffiffiffiffiffiffiffiffiffiffiffiffiffi 1  e2 , Eq. 11 can be rewritten as x y z

9 ¼ ð1 þ 32 eÞg4 da0 > = 4 2 e2 ¼ ag dM0  3pg da þ ag ð1 þ Þðdx þ cos idX Þ 0 0 0 2 2 > 3 ; ¼ ð2e þ 3e8 Þag2 ðsin xdi0  cos x sin idX0 Þ

ð15Þ

Considering the orbital perturbation, the orbital element difference would not be a constant, that means dr_ 6¼ 0. In each calculation step, the instantaneous orbital element of satellite can be calculated in high precision. So the orbital element difference can be obtained in real time, the averaged relative motion can also be calculated by following equations. 9 xðtÞ ¼ ð1 þ 32 eÞg4 da > = 3pg4 a 2 e2 yðtÞ ¼ g dM  2 da þ ag ð1 þ 2 Þðdx þ cos idXÞ > 3 ; zðtÞ ¼ ð2e þ 3e8 Þag2 ðsin xdi  cos x sin idXÞ

ð16Þ

The time derivative of Eq. 16 can be approximated as 9 _ xðtÞ  ð1 þ 32 eÞg4 da_ > = _ _  3pg4 da_ þ ag2 ð1 þ e2 Þðdx_ þ cos idXÞ _ yðtÞ  ag dM 2 2 > 3 ; _ z_ ðtÞ  ðe þ 3e Þag2 ðsin xd_i  cos x sin idXÞ 2

ð17Þ

8

Moreover, with the theory of the secular orbit perturbation, we can obtain the following conclusion:

8

R. Che

(1) Out of orbit plane: The averaged relative motion of the z-axis is decided by di and dX, which is affected by the J2 perturbation. In short-term, for example, in several orbit periods, z-axis drift is small that can be omitted. (2) In-orbit plane: The averaged relative motion in x-axis is only decided by da, which is mainly affected by the atmospheric perturbation. The averaged relative motion in y-axis is complex, which is decided by dM, da, dx and dX. If the crosssection mass ratio is different between the target satellite and tracking satellite, there will be a long-term drift in x-axis and y-axis.

4 Simulation 4.1

Simulation in Two-Body Orbit

In order to evaluate the accuracy of averaged equations which described above, the following test case are considered. The orbit elements of satellites listed in Table 1. Table 1. Initial orbit elements of satellites Orbit elements

Target satellite Chaser satellite Case-1 (d2 ¼ 0) Semi-major axis (km) 7000 7000 Eccentricity 0.03 0.030143 Right Arc. of the A.N.(°) 200° 200.000148° Perigee anomaly(°) 100° 100.00411° Inclination(°) 98.5° 98.4999° Mean anomaly(°) 0° 0.001848°

Chaser satellite Case-2 (d2 6¼ 0) 7000.2 0.0301 200.0001° 100.0001° 98.4999° 0.0001°

The initial relative conditions (integral constants of T-H equation) of chaser satellite in two test cases are. Case 1: d1 ¼ 225:9, d2 ¼ 0:0, d3 ¼ 1001:0, d4 ¼ 499:1, d5 ¼ 19:8, d6 ¼ 9:4 Case 2: d1 ¼ 12:2, d2 ¼ 110911:2, d3 ¼ 2630:3, d4 ¼ 10:4, d5 ¼ 14:1, d6 ¼ 10:2 Simulation duration corresponds to 25 orbit periods of target satellite. Both the true relative position and averaged relative position are shown in Fig. 2 and Fig. 3, coloured in blue and red respectively. Figure 2 shows the results of test case 1, the relative motion is closed or periodic, which means no drift in along-track (y-axis). Figure 3 shows the results of test case 2, if d2 6¼ 0 the averaged along track distance is significantly changed. It can be seen that simulation results are consistent with the analysis reported above.

Averaged Equation of Satellite Relative Motion in an Elliptic Orbit

9

Fig. 2. Relative track with no perturbation (Case-1:d2 ¼ 0)

Fig. 3. Relative track with no perturbation (Case-2:d2 6¼ 0)

4.2

Simulation Considering the Effect of J2 Perturbation and Atmosphere Perturbation

With the same initial orbit parameters of satellite which are listed in Table 1. Two different test cases are considered for comparing the effect of atmosphere perturbation. Case 3: The cross-section mass ratio of target satellite and chaser satellite are same, 0:01 m2 =kg.

10

R. Che

Case 4: The cross-section mass ratio of target satellite and chaser satellite are different, 0:01 m2 =kg and 0:02 m2 =kg respectively. Simulation duration corresponds to 25 orbit periods. Both the true relative position and mean relative position are shown in Fig. 4 and Fig. 5, coloured in blue and red respectively. Figure 4 shows the results of test case 3, with the same cross-section mass ratio, the averaged relative motion shown a linear drift in along-track (y-axis), which is mainly caused by J2 perturbation. Figure 5 shows the results of test case 4, the averaged relative motion shown a long-term drift in radial direction (x-axis) which is mainly caused by atmosphere perturbation. In other words, with different cross-section mass ratio, the orbit altitude of two satellites decreased in different velocity. There is an approximate parabola drift in along-track, which is an integrated affection of J2 perturbation and atmosphere perturbation.

Fig. 4. Relative track with J2 and atmosphere perturbation (Case-3:d2 ¼ 0)

Fig. 5. Relative track with J2 and atmosphere perturbation (Case-4:d2 6¼ 0)

Averaged Equation of Satellite Relative Motion in an Elliptic Orbit

11

Simulation results indicated that curves based on the averaged equation of relative motion are well consistent with the numerical calculation. Using the result of averaged equation, it is more intuitionistic and convenient to analysis the long-term movement trend, and that is also helpful for the formation designing and configuration maintaining.

5 Conclusions The averaged equations of satellite relative motion in elliptic orbit have been presented. Based on the homogeneous solutions of T-H equations, a simple averaged relative motion equation is derived over an orbit period. Then, the improved averaged relative motion equation which is described by the orbit elements difference is developed for considering the orbital perturbation. This representation of averaged relative motion is convenient to understand Earth oblateness and atmospheric perturbation effects on formation dynamics. Simulation results have verified the effectiveness of the proposed models. The averaged model can eliminate the periodicity movement of relative motion, and it is useful for formation designing and configuration maintaining.

References 1. Amico, S.D., Ardaens, J.S.: Spaceborne autonomous formation-flying experiment on the PRISMA mission. J. Guid. Control. Dyn. 35(3), 834–850 (2012) 2. Colhessy, W.H., Wiltshire, R.S.: Terminal guidance system for satellite rendezvous. J. Aerosp. Sci. 27(9), 653–658 (1960) 3. Tschauner, J.F.A., Hempel, P.R.: Rendezvous with a target in an elliptical orbit. Astronautica Acta 11(2), 104–109 (1965) 4. Inalhan, G., Tillerson, M., How, J.P.: Relative dynamics and control of spacecraft formations in eccentric orbits. J. Guid. Control. Dyn. 25(1), 48–59 (2002) 5. Katsuhiko, Y., Masaya, K.: New state transition matrix for formation flying in J2-perturbed elliptic orbits. J. Guid. Control. Dyn. 35(2), 536–547 (2012) 6. Mai, B., Akira. I.: Graphical generation of period orbits of Tschauner-Hempel equation. Journal of Guidance, Control and Dynamics 35(3), 1002–1007 (2012) 7. Schaub, H.: Analytical Mechanics of Space Systems. AIAA Education Series, Virginia (2003) 8. Vadali, S.R., Vaddi, S.S., Alfriend, K.T.: An intelligent control concept for formation flying satellite. Int. J. Robust Nonlinear Control 12(2–3), 97–115 (2002) 9. Balaji, S.K., Alfred, N.: A bang-bang control approach to maneuver spacecraft in a formation with differential drag. In: AIAA Guidance, Navigation and Control Conference and Exhibit, p. 6469 (2008) 10. Rucai, C.: Configuration design of formation flying of satellite in elliptic orbit. Aerospace Control 28(2), 36–41 (2010)

HDR Fusion Algorithm Based on PCNN and over Exposure Correction Fang Dong1(&) and Jinyi Guo2 1

2

Beijing Institute of Spacecraft System Engineering, Beijing, China [email protected] Beijing Institute of Space Mechanics and Electricity, Beijing, China

Abstract. For traditional HDR fusion algorithm based on mapping functions, the artificial determination of the over exposure region often ignores the spatial distribution and the relation of pixels of the image brightness. In this paper, a stimulated HDR fusion algorithm based on pulse coupled neural network is proposed. For the proposed algorithm, first obtain the external stimuli according to the maximum image brightness, calculate whether the pixel “fires” or not and determine the over exposure region. Then alter the value of external stimuli to perform iteration and construct judgment matrix to make sure that every pixel in the image “fires”. The image irradiance is linear with the illuminance and exposure time. Obtain the brightness mapping function using linear fitting, and synthetize HDR image with over exposure correction algorithm to eliminate artifacts. The algorithm mainly has the following advantages: 1. It fully considers the spatial distribution of the image brightness. 2. For the imaging of different scenes, it self-adapts to separate exposure regions. 3. It performs mapping according to the imaging principles and obtains the mapping function without the exposure time. 4. It reduces the quantitative error of synthetizing image, and the artifacts in the transition region. The algorithm proposed in this paper is experimented under multiple scenes, and the image entropy and spatial frequency are improved greatly. Keywords: Pulse coupled neural network dynamic range  Mapping function

 Over exposure correction  High

1 Introduction The dynamic range of image refers to the ratio of the maximum brightness to the minimum noise of an image. With the continuous development of digital image technology, image with high definition is becoming more and more popular. So the research on high dynamic range image can play a huge role in the development of digital image technology with larger amount of information and better visual effects [1–3]. For the HDR algorithm for multiple images, existing methods include obtaining HDR image using multiple images with different exposure times [4], image fusion algorithm based on optimal blocks [5], algorithm based on Laplace pyramid [6], algorithm based on discrete wavelet [7], and direct fusion algorithm based on Sigmoid function fitting [8]. However, these methods have very high demands for the collection equipment and are not suitable for dynamic scenes. For the HDR algorithm for single ©

The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 J. Sun et al. (Eds.): Signal and Information Processing, Networking and Computers, LNEE 917, pp. 12–20, 2023. https://doi.org/10.1007/978-981-19-3387-5_2

HDR Fusion Algorithm Based on PCNN and over Exposure Correction

13

image, existing methods include: transformation of single LDR image into HDR image [9], which has relatively high complexity; piecewise linear mapping method [10], but it leads to the segmentation between regions; anti-tone mapping HDR image algorithm based on bilateral filtering [11], however its performance on sky detail is not ideal; transformation of single LDR image into HDR image based on human visual system model [12], but it loses the details of large area of highlight regions and works not as good as the multiple images fusion algorithm; pseudo-exposure tone fusion algorithm based on local region adjustment [13], however it is prone to have ghost phenomenon on relatively bright and relatively dark regions. The proposed algorithm in this paper first utilizes PCNN algorithm to constantly repeat the firing process, and calculates the firing frequency of the corresponding pixel position of each neuron. The firing frequency can be a reference for HDR fusion. And then, the algorithm performs linear fitting for the well-exposed regions of the long exposure images and short exposure images, and obtains the mapping function. Finally, it calculates the number of well-exposed points and abnormally exposed points, and performs weighted fusion. The flow chart of the algorithm is as follows (Fig. 1):

Fig. 1. Flow chart.

2 Segmentation of Exposure Region Consider the brightness of each pixel of the image as the input of the PCNN, and construct the reception field F. First, the reception field F receives external stimuli S and its internal stimuli F(n-1), and L receives the stimuli from surrounding neurons. The modulation field obtains the internal activity term U by modulating the signal F and L of reception field. Then compare the internal activity term Uij and the dynamic threshold Eij at the corresponding position of the pulse generation region neuron. If Uij [ Eij , then the pulse generator fires the neuron, and the firing output Yij ¼ 1. Or, the firing output Yij ¼ 0. Update the counting matrix, record the “firing” point, and

14

F. Dong and J. Guo

repeat this process constantly until all the pixels are “firing”. Then the over-exposed region, under-exposed region and well-exposed region can be obtained. At the initial state, F = 0 and E = 0, choose the external stimuli as the maximum brightness of the image, and the neuron ij fires, then the following initial state formula can be obtained: 8 Fij ð0Þ ¼ Sij > > < Uij ð0Þ ¼ Fij ð0Þ E ð0Þ ¼ VE > > : ij Yij ð0Þ ¼ 1

ð1Þ

The formula for determining whether the nth pixel fires based on the brightness of the n−1th pixel is as follows: 8 Fij ¼ eaf Fij ðn  1Þ þ Sij > > > > < Uij ðnÞ ¼ Fij ðnÞ Eij ðnÞ ¼ eae Eij ðn  1Þ þ VE Vij ðn  1Þ > > 1 Uij ðnÞ [ Eij ðn  1Þ > > : Yij ðnÞ ¼ 0 otherwise

ð2Þ

For the formula above: af is the time attenuation coefficient of the reception field F, ae is the time attenuation coefficient of the dynamic threshold. vE is the amplifying coefficient of the dynamic threshold, Eij is the threshold, and Uij is the modulating signal. Without the impact of other neurons, the firing period of neuron is closely related to the magnitude of the external stimuli S. The larger the magnitude of S is, the smaller the period is. And vice versa. The choice of the number of iterations would affect the precision and efficiency of the algorithm. With the consideration that the separation of the exposure regions relies only a little on the detail of the edge information, in this paper a counter is established, to record the firing point each time. When all the pixels in the image have been initiated once, the iteration can stop.

3 Solution of the Mapping Function Camera response curve: R ¼ linear ðO  Dt  stg  gÞ

ð3Þ

I ¼ f ðRÞ ¼ f ðO  Dt  stg  gÞ

ð4Þ

For the formula above: is the image irradiance, is the illumination, is the integration time, is the gain, is the series, is the value of image brightness. From above, it can be obtained that the value of image brightness is proportional to the exposure time. Take the relationship of the brightness of each pixel of long exposure image and short exposure image into account. Since the brightness mapping function is

HDR Fusion Algorithm Based on PCNN and over Exposure Correction

15

independent of the pixel position and it is only dependent of the image brightness, the map can be obtained by calculating the average brightness of the according position in the long exposure image for every value in the short exposure image. In the normally exposed region, it is clearly a linear relationship. Selecting the well-exposed pixels, the map can be drawn (Figs. 2 and 3).

Fig. 2. Well-exposed brightness mapping.

Fig. 3. Brightness mapping function.

16

F. Dong and J. Guo

The brightness mapping function can be obtained by linear fitting: b I 1 ¼ by I 2 þ b

ð5Þ

For the formula above: is the pixel brightness of long exposure image, is the pixel brightness of short exposure image, and are the linear fitting coefficient.

4 The Fusion Algorithm Based on over Exposure Correction With the pollution of the over exposure pixels, artifacts would occur in the transition region if direct brightness mapping method is used. To solve this problem, the image fusion algorithm based on over exposure correction is proposed. The formulas are as follows: 8 n ¼ numðlongðx þ i; y þ 1ÞÞ < d ¼ n=½ð2b þ 1Þ  ð2b þ 1Þ  1 ð6Þ : Hðx; yÞ ¼ d  longðx; yÞ þ ð1  dÞ  c  shortðx; yÞ For the formula above: the ð2b þ 1Þ  ð2b þ 1Þ pixels around the (x, y) pixel are the surrounding pixels. long(x, y) is the long exposure pixel, short(x, y) is the short exposure pixel, n is the number of non-overexposure pixels around the pixel(x, y), and c is the brightness mapping function. The closer the d is to 1, the less the pollution there is, and the more credible the long exposure value is. The closer the d is to 0, the more the pollution there is, and the less credible the long exposure value is. Synthesize the data of long exposure and short exposure using weight function that is positive correlated to d.

5 Experiment Results In this paper, image entropy, spatial frequency and HDR-VDP are used to evaluate HDR algorithms. Image entropy and spatial frequency represent image information. The more the amount of information there is, the better the fusion is. HDR-VDP evaluates the similarity between 2 images based on human visual system. The reference images are normally exposed images. From the following table, the HDR fusion algorithm proposed in this paper not only takes brightness information into account, but also has better precision in saturation, contrast, and other aspects (Table 1).

HDR Fusion Algorithm Based on PCNN and over Exposure Correction

17

Table 1. HDR algorithms evaluation. Image

Exposure Image Time Entropy

Spatial Fre- HDRquency VDP2

0.333

5.4348

0.2283

~

1.000

4.6804

0.0013

~

Ours

HDR

5.7948

0.6663

53.6952

Rafael[11]

HDR

5.5364

0.5036

42.3625

Wang[13]

HDR

5.3654

0.5936

43.5265

Short Exposure Image

Long Exposure Image

Image Brightness

To compare the HDR algorithm proposed in these paper and other algorithms such as the Rafael algorithm, it can be seen from the following pictures that the algorithm proposed in this paper has better performance in showing details and it clearly mitigates the artifacts. For the following images, the left one is the result using the algorithm in this paper, and the right one is the result using existing algorithm in the reference. It is obvious that in the over exposure region (the windows), the algorithm in this paper can solve the exposure pollution problem, and has better detail information (Figs. 4, 5, 6, 7).

18

F. Dong and J. Guo

Fig. 4. HDR algorithm proposed in this paper.

Fig. 5. Rafael algorithm.

Fig. 6. (a) over-exposure image (b) short-exposure image (c) Wang [13] method (d) our method

HDR Fusion Algorithm Based on PCNN and over Exposure Correction

19

Fig. 7. (a) over-exposure image (b) short-exposure image (c) Wang [13] method (d) our method

6 Conclusion The traditional HDR fusion algorithms based on mapping function consider a pixel as over exposure region simply by comparing its brightness to a certain threshold, and it ignores the spatial information of the image. Therefore, in this paper the PCNN algorithm is introduced, and the value of firing frequency of the pixels can reflect whether it is over exposed region, under exposed region or well-exposed region. Also, to solve the artifacts on the boundary of image brightness mapping, an image fusion algorithm with weighted non-overexposure pixels is proposed. The HDR algorithm in this paper can provide more precise detail information, and has better brightness and contrast.

References 1. Neale, D., Al Abbas, T., Gyongy, I., et al.: High dynamic range imaging at the quantum limit with single photon avalanche diode-based image sensors. Sensors 18(4), 1166 (2018) 2. Serrano, A., Heide, F., Gutierrez, D., et al.: Convolutional sparse coding for high dynamic range imaging. Computer Graphics Forum 35(2), 153–163 (2016) 3. Azimi, M., Banitalebi-Dehkordi, A., Dong, Y., et al.: Evaluating the Performance of Existing Full-Reference Quality Metrics on High Dynamic Range (HDR) Video Content. (2018) 4. Larson, G.W., Rushmeier, H., Piatko, C.: A Visibility Matching Tone Reproduction Operator for High Dynamic Range Scenes. (1997) 5. Kalantari, N.K., Ramamoorthi, R.: Deep high dynamic range imaging of dynamic scenes. Acm Trans. Graph. 36(4), 1–12 (2017)

20

F. Dong and J. Guo

6. Peng, C., Chen, Y., Chen, Q., et al.: A new type of tri-axial accelerometers with high dynamic range MEMS for earthquake early warning. Comput. Geosci. 100(2017), 179–187 (2017) 7. Du, L., Sun, H., Wang, S., et al.: High dynamic range image fusion algorithm for moving targets. Acta Optica Sinica 37(4), 0410001 (2017) 8. Tang, Q., Liu, Y., Tsytsarev, V., et al.: High-dynamic-range fluorescence laminar optical tomography (HDR-FLOT). Biomed. Opt. Express 8(4), 2124 (2017) 9. Sen, P.: Overview of state-of-the-art algorithms for stack-based High-Dynamic Range (HDR) Imaging. Electronic Imaging 2018(5), 1–8 (2018) 10. Gao, Z., Zhou, Y., Xu, J., et al.: High dynamic range low distortion pwm readout method with self-adjusted reference for digital pixel sensors. J. Circuits Syst. Comput. 26(10), 1750148 (2017) 11. Pantoli, L., Stornelli, V., Leuzzi, G.: High dynamic range, low power, tunable, active filter for RF and microwave wireless applications. IET Microwaves Antennas Propag. 12(4), 595– 601 (2017) 12. Afreen, S., Tirmizi, A., Sarwar, M.: Pseudo-multiple-exposure-based tone fusion and visualsalience-based tone mapping for high dynamic range images: a review. Int. J. Comput. Appl. 127(8), 41–46 (2015) 13. Petro, A.B., Sbert, C., Morel, J.M.: Multiscale Retinex. Image Process. Line 4, 71-88 (2014)

A Dynamic Range Analysis of Remote Sensor Based on PcModWin Zhao Ye1(&), Gaochao2, Ye Xue1, and Chao Wang1 1

2

DFH Satellite Co., Ltd., Beijing 100094, China [email protected] Beijing Institute of Tracking and Telecommunications Technology, Beijing 100094, China

Abstract. Space optical remote sensor for earth observation, ground targets and the observation conditions are always changing, as at-sensor radiance changing. Dynamic range is an important factor that can be detected in determining targets, which is an important indicator for measuring imaging quality of space optical remote sensor and radiation properties. The optical sensor design condition is precondition by accurate radiance evaluation on the optical remote sensor. This paper started with an analysis of influence factors for Dynamic Range of The Space Remote Sensor based on radiative transfer and Photoelectric conversion. The Universal calculation of Entrance pupil radiant energy of optical remote sensor was derived from this paper. The calculation provided the basis for the design and evaluation of optical remote sensor. Then, this paper established the radiation transmission model of optical remote sensor for earth observation, using the PcModWin software to calculate each component energy distribution of radiation transfer model, comparing the calculation results with labsphere 1 m integrating sphere radiation data, this method was proved the authenticity and accuracy of the results. Keywords: Radiometric calibration sphere  Radiance calculation

 Optical remote sensor  Integrating

1 Introduction Radiometric calibration of optical sensor can provide quantitative physical quantities of remote sensing information pointed at surface substances in different electromagnetic waves. Only by these quantitative physical quantities, can remote sensing information be connected with geographical parameters and biological parameters through experiments and physical models, so that quantitative inversion can be carried out in a targeted way. At the same time, accurate estimation of surface biophysics from remote sensing data mainly depends on the quality of remote sensing data, especially the accuracy of radiometric calibration. Before launching a satellite, the radiation characteristics of the remote sensor must be comprehensively evaluated to ensure the authenticity and reliability of the acquired date and provide a benchmark for the application of remote sensing images.

©

The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 J. Sun et al. (Eds.): Signal and Information Processing, Networking and Computers, LNEE 917, pp. 21–34, 2023. https://doi.org/10.1007/978-981-19-3387-5_3

22

Z. Ye et al.

To determine the sensor’s dynamic range, entrance pupil energy assessment of conventional optical remote sensors were calculated directly by radiative transfer software such as Modtran and 6S. However, before the radiometric calibration of the optical remote sensor in the laboratory, the dynamic range of the output radiance of the integrating sphere light source must be beyond the dynamic range of the optical remote sensor, so that the total energy requirements of the integrating sphere light source can meet the calibration requirements. In this paper, we put forward a method for the dynamic range analysis of the optical remote sensor based on PcModWin, a software for radiation transfer.

2 An Atmospheric Radiation Transfer Software, PcModWin PcModWin was developed by an American company, Ontar, according to the MODTRAN MODEL. As a commercial software, this atmospheric radiation transfer software can be run under Windows. PcModWin can achieve the calculation of atmospheric transmittance, atmospheric background radiation, scattered solar and lunar radiation, and direct solar and lunar irradiance [2, 3]. The abundant data input methods in the software are very convenient for radiation transmission research and practical analysis. Its input and output parameters include the following parts: 1) Model selection: MODTRAN or LOWTRAN; 2) Atmospheric model selection: Atmospheric model such as tropical, mid-latitude summer, mid-latitude winter, subarctic summer, subarctic winter and other can be set. Users are also allowed to call the measured sounding data and the past China’s 30 years of meteorological statistical average data; 3) Path selection for calculation of radiation transfer mode: For single or multiple scattering in the direction of radiation flow, the horizontal path, two-height sloping path, ray sloping path, path length and path sloping can be set according to different situations; 4) Aerosol model: Aerosol spectral property profile selection can be set. Due to the influence of wind speed, rainfall and altitude on the aerosol spectrum distribution and refraction index, parameter values can be set separately and the default values of the system can be used; 5) Cloud mode input: The default value of cloud thickness varies for different atmospheric modes. 5 cloud modes are provided in PcModWin, and where can deal with haze to a certain extent; 6) Parameter input: The radiation flow analysis is carried out according to the azimuth of the time (solar position), the earth radius and the spectrum range of the object. The position of the sun can be represented by the solar zenith angle, which is a function of time, date and latitude, as shown in the formula below: cosh ¼ sinUsind þ cosUcosdcost where, U is the observer latitude, d is the solar latitude angle, t is the local time.

A Dynamic Range Analysis of Remote Sensor Based on PcModWin

23

3 Radiative Transport Model of Optical Remote Sensor for Earth Observation The imaging optical remote sensor for earth observation is a typical radiance observation system. The Earth is so far away from the Sun that solar radiation enters the Earth’s atmosphere almost in parallel [4]. The spectral irradiance of solar radiation reaching the top of the earth’s atmosphere is shown in Eq. (1): E 0k ¼

Mk AS  2 ðW  m2  lm1 Þ p ASE

ð1Þ

where, AS —solar spot area; ASE —distance between the sun and the earth; M k —spectral radiant exitance, concluded by Planck Equation: Mk ¼

C1 1 C  ðW  m2  lm1 Þ 5 2  1 exp k kT

ð2Þ

where, T—thermodynamic unit of the boldbody; C1 ¼ 3:74151  108 ðW  m2  lm4 Þ, C2 ¼ 1:43879  104 ðlm  KÞ; The solid line in Fig. 1 represents the radiation density curve of the top atmosphere, while the dashed line is the atmospheric irradiance curve drawn by PcModWin.

Fig. 1. Solar radiation of visible light to near-infrared spectroscopy

24

Z. Ye et al.

Figure 2 shows the main composition of solar radiation from visible light to nearinfrared spectrum region. The horizontal parallel dashed line group in the figure changes from dense to sparse from bottom to top, indicating that the density of the atmosphere gradually decreases with the increase of altitude. There are three main sources of radiation in the spectral region received by a remote sensor in an orbit: Lsu k directly reflected radiation: Solar radiation directly reflects a non-scattered part of the earth’s surface; Lsd k skylight: The part of solar radiation which is scattered downward through the atmosphere and reflected off the Earth’s surface; Lsp k haze light: Solar radiation through the atmosphere up scattering, directly reached the remote sensing part: sp sd LSk ¼ Lsu k þ L k þ Lk

ð3Þ

When the stray radiation received by the remote sensor, such as sky light and haze light, the difficulty of pixel radiometric correction will increase and the contrast of light and dark boundary can reduce, such as coastline and dam boundary. The influence of surface reflection and atmospheric scattering on the correct reception of radiation from ground objects by remote sensors is not negligible [5].

Fig. 2. Earth observation schematic of space optical remote sensor

3.1

Lsu k Direct Reflected Radiation

At present, the detection spectrum of high-resolution space optical remote sensor is mainly concentrated in the visible light band to the near infrared band. The influence of atmosphere on these bands is difficult to eliminate. The solar radiation will scatter and absorb radiation in the irradiation path from the sun to the earth and the observation path from the earth to the remote sensor. The part of the solar radiation that first reaches the earth surface is called ss ðkÞ, that is, the transmittance of solar radiation path (its value is between 0 and 1, dimensionless). Figure 3 shows the typical spectral transmittance curve of solar radiation (0.4–1.3 lm). As shown in the figure, the atmospheric transmittance gradually decreases at the transition spectral segment to blue light. The absorption of water vapor near band 0.9 and band 1.1 is relatively strong, which can prevent the narrow-band remote sensor from receiving Lsu k [6]. There are many

A Dynamic Range Analysis of Remote Sensor Based on PcModWin

25

suspended particles in the atmosphere (such as fog, dust, haze, smoke and so on), which make the atmosphere exist both Rayleigh scattering and Mie scattering. The atmosphere greatly affects the irradiance reaching the earth’s surface. On the earth’s surface, the irradiance perpendicular to the solar incident direction can be expressed by the following formula: Ek ¼ ss ðkÞ  E 0k

ð4Þ

Fig. 3. Typical transmittance curve of solar radiation spectroscopy

In Fig. 4 the solar irradiance in the visible to near-infrared spectrum (the solar altitude is 45°) was depicted.

Fig. 4. Solar radiation irradiance of visible light to near-infrared spectroscopy

The irradiance of the earth’s surface is related to the incident Angle of solar radiation. Equation (4) means that the maximum irradiance occurs when the angle of incidence is perpendicular to the surface. With the decrease of the angle of incidence,

26

Z. Ye et al.

the incident radiation gradually decreases. Therefore, the incident irradiance of the surface is expressed as follows: E k ðx; yÞ ¼ ss ðkÞ  E 0k cos½hðx; yÞ

ð5Þ

Within the observation angle of 20°–40°, the surface of the earth is considered to be approximately the surface of the Lambertian body, and the surface irradiance leaving the earth surface is expressed as follows: Lk ¼ qðx; y; kÞ

ss ðkÞE 0k cos½hðx; yÞ p

ð6Þ

where, q—spectral transmission reflectivity factor (its value is between 0 and 1, dimensionless); In an ideal lambert surfaces, q changes over wavelength and space position, but has nothing to do with direction of remote sensor observations. For there are certain deviation, actual object can be said with function of the angle of incidence and the angle of observation (BRDF). The geometry of the direct illumination of the sun reaching the earth’s surface is shown in Fig. 5.

Fig. 5. Geometric relationship of solar direct irradiance which comes in the Earth’s surface

The vector s points to the sun, the sun altitude angle is expressed with b, while the solar zenith angle is 90°−b, h is the incident angle of solar radiation, and u means the emergence angle of the remote sensor (excluding the influence of terrain on the observation angle of the remote sensor). In Fig. 6, the atmospheric transmittance curve along the solar zenith path and the path 40° deviation from the zenith position respectively are shown. The dotted line is the atmospheric transmittance curve when the observation angle is 40°. In the observation of flat ground scenery, the change of atmosphere has little effect on the remote sensor with small field angle, because the distance of the ground object through the atmosphere is almost equal to each pixel point. However, the image of the remote sensor is greatly affected by the atmosphere in high mountain and hilly areas because of the height difference between pixels. For large field of view space optical remote sensor, due to the large number of pixels in one

A Dynamic Range Analysis of Remote Sensor Based on PcModWin

27

direction, the difference of various terrain is obvious, the influence of the atmosphere can not be ignored. Therefore, the transmittance of observation path sm ðkÞ must be added to correct the radiation received by the remote sensor [7].

Fig. 6. Atmospheric transmissivity curve

To sum up, the direct reflection radiation Lsu k from the sun received by space optical remote sensor is: Lsu k ¼ sm ðkÞ  Lk ¼ qð x; y; kÞ

3.2

sm ðkÞss ðkÞE0k cos½hð x; yÞ p

ð7Þ

Lsd k Skylight

Lsd k is usually to explain the phenomenon that the shadow part of the image of the space optical remote sensor is not all black. In addition, Schott used topographic interference d factor F(x, y) (For flat ground, F = 1) to correct Lsd k , while E k Represents the irradiance of downward-scattered atmospheric radiation at the surface, so: Lsd k ¼ Fðx; yÞqðx; y; kÞ

3.3

sm ðkÞE dk p

ð8Þ

Lsp k Haze Light

Haze light is mainly caused by the Rayleigh scattering of atmospheric molecules and the Mie scattering caused by aerosols and air suspended particles. Rayleigh scattering is inversely proportional to the fourth power of the wavelength, whereas Mie scattering is wavelength independent. In relatively clean atmospheres, the radiation from Rayleigh scattering is proportional to k2 , and the radiation from Mie scattering is proportional to k0:7 . For a large field of view space optical remote sensor, the influence of Lsp k is difficult to eliminate. For a down-looking remote sensor with a relatively small field of view, Lsp k can be assumed to be a constant.

28

Z. Ye et al.

3.4

Total Solar Radiation LSk Received by Spatial Optical Remote Sensors

sp sd LSk ðx; yÞ ¼ Lsu k ðx; yÞ þ Lk ðx; yÞ þ Lk

sm ðkÞss ðkÞEk0 sm ðkÞEkd cos½hðx; yÞ þ F ðx; yÞqðx; y; kÞ þ Lsp k p p   s m ð kÞ ss ðkÞEk0 cos½hðx; yÞ þ F ðx; yÞEkd þ Lsp ¼ qðx; y; kÞ k p ¼ qðx; y; kÞ

3.5

ð9Þ

A Radiation Conversion Model of Optical Remote Sensor

The remote sensor converts the input radiation Lsp k into an image, which is the radiation value distributed by space. During this conversion process, the radiation performance, spatial characteristics and geometric features will change, resulting in attenuation of the total surface radiation obtained by the image. Figure 7 is a CCD (Charge Coupled Device) remote sensing model that simplifies the scanning process (linear array scanning, push-sweep and pendulum scanning). The Lsp k optical system received by the remote sensor is transformed into a continuous time-varying optical signal, and the CCD further converts the optical signal into an electronic signal, which is amplified and processed by the signal processing circuit. Finally, the signal time processed is sampled and quantified into the DN value of the image pixel by the analog/digital (A/D) converter [9].

Spectral filter

Incident radiation

Imaging optical system (opt)

Remote sensor

Image motion

CCD sensor (det)

Processing circuit (elec)

DN A/D

Fig. 7. The model of radiation conversion of CCD remote sensor

As is shown in Fig. 8, according to the relationship between object and projection, there are: Ad XI ¼ At XS Ad ¼ At m2

ð10Þ

where, XI , XS —solid angle of image and solid angle of object respectively; Ad —image area; At object area; m ¼ h=H, m is the geometric amplification factor from the ground to the focal plane of the remote sensor.

A Dynamic Range Analysis of Remote Sensor Based on PcModWin

29

Obtained by Gauss formula for ideal optical systems: 1 1 1 þ ¼ h H f

ð11Þ

where, f—the focal length of the optical system of a remote sensor, further: h ¼ f ðm þ 1Þ

ð12Þ

As for space optical remote sensor, H  h, therefore, m þ 1  1, f  h. In conclusion, the radiation flux from the total surface radiation to the focal plane of the sensor is: Ud ¼

Z

Ad Aa f ðm þ 1Þ 2

2

kmax

kmin

LS ðx; y; kÞso ðkÞdk

ð13Þ

Focal plane

Detector area Ad

Distance h

Image space

Height H

Object space

Optical system

IFOV

Ground object area At Earth surface GIFOV

Fig. 8. The schematic of object-image mapping of optical remote sensor

so ðkÞ is the spectral transmittance of optical system of space optical remote sensor, Aa is the aperture area of remote sensor. Figure 9 is the imaging schematic diagram of the off-axis three-reflection optical system. Different from the coaxial card optical system, there is no central blocking ratio.

30

Z. Ye et al.

Fig. 9. The object-image mapping of off-axis three-mirror optical system

D is the diameter of primary mirror diameter, Aa is deemed as its area, Eq. (13) can be equaled as: Ad pD2 Ud ¼ 4f 2

Z

kmax

kmin

LS ðx; y; kÞso ðkÞdk

pD2 4 ,

so the

ð14Þ

D⁄f is called the relative aperture; F# is the reciprocal of the aperture, that is, f⁄D.

4 Output Radiance Range of Integrating Sphere Light Source Calculated by PcModWin The radiative transfer model of the remote sensor established in Sect. 3 is used to estimate the radiative brightness range of the ground scene when the space optical remote sensor enters the pupil, and the radiance range of the output radiance of the integrating sphere is determined accordingly. The radiance at the entrance pupil of space optical remote sensor was simulated by using PCModwin. Assuming the weather is clear with the 1976 US standard atmospheric model, the atmospheric visibility is set as 5 km, and solar altitude h was adjusted from 0° to 80°, where 10° was added each time, and the remaining conditions were default. The wavelength range is suitable for the spectral response range of general CCD, and 450–800 nm can be selected to obtain the following data: 1) Radiation directly reaching the earth’s surface E k calculated by PcModWin in Eq. (7) is shown in Table 1: Table 1. Solar direct radiance Ek ðmW=cm2 Þ h 80° 70° 60° 50° 40° 30° 20° 10° 0° Ek 18.3 18 17 15.4 13.1 9.97 6.62 2.42 0.144

A Dynamic Range Analysis of Remote Sensor Based on PcModWin

31

2) Radiation from the sun that is scattered upward through the atmosphere to a remote sensor Lsp k calculated by PcModWin are shown Table 2: 2 Table 2. Haze light radiance Lsp k ðmW=cm  srÞ

h 80° 70° 60° 50° 40° 30° 20° 10° 0° Ek 1.642 1.628 1.583 1.509 1.405 1.265 1.08 0.832 0.488

3) In Eq. (8), solar radiation scatters downward through the atmosphere accepted by the earth’s surface radiation Edk calculated by PcModWin are shown in Table 3 below: Table 3. Sky Light radiance E dk ðmW=cm2 Þ h 80° 70° 60° 50° 40° 30° 20° 10° 0° Ek 7.56 7.48 7.23 6.83 6.23 5.4 4.32 2.93 1.36

4) In Eq. (9), the total radiation received by the remote sensor LSk ð x; yÞ is: LSk ð x; yÞ ¼ qð x; y; kÞ

 s m ð kÞ n ss ðkÞE 0k cos½hð x; yÞ þ F ð x; yÞEdk þ Lsp k p

where, qðx; y; kÞ—transmission spectrum reflectance factor, to simplify the calculation ignore the influence of wavelength and space position, features an average reflectivity q successive increased by 0.1 to 0.9 from 0.1. sm ðkÞ—atmospheric transmissivity, a simplified average transmittance sm ðkÞ can be obtained by PCModwin, sm ¼ 0:3; ss ðkÞ—value = 1; 5) Ignoring the influence of terrain and angle, Eq. (9) has the following form: LSk ¼ q

 sm  0 E k þ E dk þ Lsp k p

ð15Þ

The total radiation received by the remote sensor LSk are as shown in Table 4.

32

Z. Ye et al. Table 4. Remote sensor receiving total radiation LSk ðmW=cm2  srÞ h q 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9

80°

70°

60°

50°

40°

30°

20°

10°



1.8896 2.136 2.383 2.63 2.877 3.124 3.371 3.618 3.865

1.871 2.115 2.358 2.601 2.845 3.088 3.331 3.575 3.818

1.814 2.046 2.277 2.509 2.74 2.971 3.203 3.434 3.665

1.721 1.934 2.146 2.358 2.57 2.783 2.995 3.207 3.42

1.59 1.774 1.959 2.143 2.328 2.513 2.697 2.882 3.066

1.412 1.559 1.705 1.852 1.999 2.146 2.292 2.439 2.586

1.181 1.281 1.382 1.483 1.583 1.684 1.785 1.885 1.986

0.883 0.934 0.985 1.036 1.087 1.139 1.19 1.241 1.292

0.502 0.517 0.531 0.545 0.56 0.574 0.603 0.603 0.617

6) In Eq. (14), knowing the F# and transmittance of the optical system can obtain the amount of radiation received by the focal plane of the optical remote sensor Taking ALI (Advanced Land Imager) on board EO-1 (The Earth Obtaining One) launched by The United States in 2000 as an example, its F# = 7.5, s0 ¼ 0:9, the radiation E fp ALI on the focal plane of ALI was calculated, results are as shown in Table 5. 2 Table 5. ALI focal plane receiving radiation Efp ALI ðlW=cm Þ

h q 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9

80°

70°

60°

50°

40°

30°

20°

10°



23.7 26.8 29.9 33.1 36.2 39.3 42.4 45.5 48.6

23.5 26.6 29.6 32.7 35.8 38.8 41.9 44.9 48.0

22.8 25.7 28.6 31.5 34.4 37.3 40.3 43.2 46.1

21.6 24.3 27.0 29.6 32.3 35.0 37.6 40.3 43.0

20.0 22.3 24.6 26.9 29.3 31.6 33.9 36.2 38.5

17.7 19.6 21.4 23.3 25.1 27.0 28.8 30.6 32.5

14.8 16.1 17.4 18.6 19.9 21.2 22.4 23.7 25.0

11.1 11.7 12.4 13.0 13.7 14.3 15.0 15.6 16.2

6.3 6.5 6.7 6.8 7.0 7.2 7.4 7.6 7.8

The existing 1m integrating sphere radiometric calibration light source of LabSphere is used to measure the output radiance of the integrating sphere light source with a standard radiometer, as shown in Table 6. From the radiance data in Table 6, it can be found that the radiance range provided by the 1m integrating sphere radiometric calibration light source is larger than the calculation range in Table 4, which can meet the calibration requirements [10, 11].


Table 6. Measured radiance at the 450–800 nm integrating sphere opening
Radiance level   Actual value (mW/cm²·sr)
1                4.879
2                3.939
3                3.155
4                2.789
5                2.423
6                2.057
7                1.691
8                1.208
9                0.719
10               0.596
11               0.567
12               0.537
13               0.508
14               0.448
15               0.357
16               0.276
17               0.216
18               0.157
19               0.112
20               0.065
21               0.034
22               0.007

5 Conclusion

This paper introduces, with a worked example, the application of PcModWin to the laboratory calibration and analysis of a space optical remote sensor, including the establishment of the radiative transfer model of an earth-observing optical remote sensor, the calculation of the components of that model, and the analysis of the radiant energy at the focal plane of the remote sensor. The feasibility of the proposed method is demonstrated by comparing the calculated radiance range with the measured output radiance of the integrating sphere. In practice, the method can also be used for the correction of atmospheric effects in optical remote sensing, and it has broad application prospects in quantitative optical remote sensing. The radiative transfer model used in this method can be further improved.

References
1. Wan, Z., Ren, J.W., Li, X.S., et al.: Analysis of signal-to-noise ratio for remote sensing TDICCD camera based on radiative transfer model. Infrared and Laser Eng. 37(3), 497–500 (2008)
2. Li, X.M., Song, P.F., Ma, W.P.: PcModWin software application design in optical remote sensor. In: Chinese Society of Astronautics Space Remote Sensing Specialized Committee: Proceedings of the 2009 Academic Exchange Conference (2009)
3. Xie, D.J., Xu, Z.H., Feng, H.J., et al.: Exposure of space camera prediction with radiative transfer software PcModWin. Proc. SPIE 7654, 765411-1–765411-5 (2010)
4. Xu, W.Q., Zhan, J., Xu, Q.S., et al.: Development of measuring instrument for sky background brightness. Opt. Precis. Eng. 21(1), 46–52 (2013)
5. Xiao, X.G., Wang, Z.H., Bai, J.G., et al.: Influence of earth radiation on photoelectric detection system based on space. Acta Photonica Sinica 38(2), 375–381 (2009)


6. Luis, G., Rudolf, R., Hermann, K.: On the application of the MODTRAN4 atmospheric radiative transfer code to optical remote sensing. Int. J. Remote Sens. 30(6), 1407–1424 (2009)
7. Yang, C.Y., Zhang, J.P., Cao, L.H.: Influence radiation measurement based on proportional corrected atmospheric transmittance. Opt. Precis. Eng. 20(7), 1626–1634 (2012)
8. Schiller, S.J.: Technique for estimating uncertainties in top-of-atmosphere radiances derived by vicarious calibration. Proc. SPIE 5151, Earth Observing Systems VIII (2003)
9. Yu, H.C., Yi, Q.J., Jian, K.Z., et al.: Computation of signal-to-noise ratio of airborne hyperspectral imaging spectrometer. In: 2012 International Conference on Systems and Informatics, 6223191, pp. 1046–1049 (2012)
10. Li, B.Y., Ren, J.W., Wan, Z., et al.: Research of large aperture integrating sphere used in the radiative calibration for space remote sensor. J. Optoelectron. Laser 24(3), 464–469 (2013)
11. Liu, H.X., Ren, J.W., Li, X.S., et al.: Radiometric characteristics simulation of large aperture integrating sphere based on LightTools. Infrared and Laser Eng. 22(3), 836–840 (2013)

The Analog Variable Observation Method for S3MPR Controller

Qifeng Xu, Luwei Liao(&), Junchao Song, Chengrong Cao, Xingjun Lu, and Guojun Wang

Shanghai Institute of Space Power Sources, Shanghai 200245, China
[email protected]

Abstract. MPPT (maximum power point tracking) is applied in order to reduce the area of solar panels on spacecraft, improving the power density and reducing the cost. S3MPR (sequential switching shunt maximum power regulator) combines the S3R (sequential switching shunt regulator) power topology with an MPPT control strategy and is suitable for high-voltage, high-power platforms with solar arrays in parallel. In this paper, we propose a new analog MPPT control method (the variable observation method) and use the S3MPR as the power topology. In this newly designed method, the dynamic MPPT reference is generated by observing the logic relationship between the variations of the output voltage and power. Finally, a complete analog MPPT control circuit is designed to satisfy the requirements of space applications. A Saber simulation platform is built, and the simulation results confirm the principle.

Keywords: MPPT · S3MPR · Variable observation · Analog algorithm · Tracking accuracy

1 Introduction

With the continuous development of space technology, such as remote sensing satellites, space weapon platforms and deep space probes, the demand for electric energy keeps increasing [1, 2]. Researchers have had to enlarge the solar panel area, which increases weight and cost at the same time. By using an MPPT controller, the solar array can be kept operating at its MPP (maximum power point), so the output efficiency of the solar cells is greatly improved [3, 4]. The common MPPT power topology connects the solar array in series with a DC-DC converter. This topology has been applied to LANDSAT-4/5 of the US, RADARSAT-2 of Canada and AGILE of Italy [5, 6]. However, it is difficult to apply in high-power platforms with solar arrays in parallel, because every solar array must be equipped with its own DC-DC converter, which leads to a low power-to-mass ratio, low efficiency and slow dynamic response. To solve these problems, the S3MPR topology has been widely researched, and it was successfully applied to the Mercury probe in 2008 [7, 8]. In terms of MPPT control strategies, the most widely used methods are the conductance increment method and the disturbance observation method. Both methods need single-chip microcomputers or other high-speed digital control chips. Digital



control systems are not well suited to spacecraft because of the complex space environment. Reference [9] proposed an analog control circuit based on the staggered disturbance method, in which MPPT is realized by alternating control of the output voltage and current. Analog circuits are more stable and reliable in space applications, but the tracking accuracy is limited by the disturbance step width and is hard to improve. In this paper, we propose a new analog variable observation method that can be realized by either digital or analog circuits. An analog multiplier chip (the AD534 from ADI) performs the power sampling, and a variable observation comparator is designed. The MPPT reference is then obtained from the logic relationship between the variations of the solar array's output voltage and power.

2 The S3MPR Power Topology

The S3MPR power topology is based on the widely applied frequency-limited shunt circuit. Figure 1 depicts its basic structure. Compared with the S3R power topology, the only difference is that Vref in the S3MPR topology is regulated dynamically according to the operating conditions of the solar cells, whereas in the S3R topology Vref is constant or manually modified. Vfb is the sampled bus voltage. The operational amplifier amplifies the error between the sampled signal and the reference voltage to produce Vmea. Vmea is compared with the two threshold voltages of the Nth-stage hysteresis comparator to generate the driving signal; Vmea and VREFN together determine which stage of the solar array is modulating.

Fig. 1. Topology of S3MPR power controller
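A behavioural sketch of the stage-selection logic described above is given below; the threshold values, hysteresis width and the sign convention (higher Vmea connecting more stages) are illustrative assumptions, not the actual circuit parameters:

```python
def stage_states(v_mea, stage_thresholds, hysteresis=0.1):
    """Return, per sequential stage, whether it is connected, shunted or modulating.

    stage_thresholds: hypothetical VREF_N values, one per stage. A stage modulates
    when v_mea lies inside its hysteresis window; stages below the window are
    treated as fully connected and stages above it as fully shunted.
    """
    states = []
    for vref_n in stage_thresholds:
        if v_mea > vref_n + hysteresis:
            states.append("connected")    # this array feeds the bus continuously
        elif v_mea < vref_n - hysteresis:
            states.append("shunted")      # this array is short-circuited by its switch
        else:
            states.append("modulating")   # the hysteresis comparator switches this array
    return states

print(stage_states(v_mea=2.05, stage_thresholds=[1.0, 2.0, 3.0, 4.0]))
# ['connected', 'modulating', 'shunted', 'shunted']
```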

3 The Variable Observation Method

The conductance increment method needs to calculate the derivative of the output power. The disturbance observation method has to inject a disturbance into Vref and observe its influence on the output power. The proposed variable observation method builds on both of them. Figure 2 shows a typical power-voltage characteristic curve of a solar


array. On the left of the MPP, the power rises as the voltage rises; on the right of the MPP, the power falls as the voltage rises.

Fig. 2. Characteristics of a solar array

When the voltage and the power increase together (both variations > 0) or decrease together (both variations < 0), the operating point is on the left of the MPP and the voltage should be increased; otherwise, the voltage should be decreased. Therefore, the operating condition can be judged and the output voltage adjusted by observing the logic relationship between the variations of voltage and power. The relationship can be expressed by the logic expression

$$v_{ref} = v_s \odot p_s \qquad (1)$$

In Eq. (1), vref, vs and ps represent the logic values of the variations: logic value '1' means the variation is > 0 and '0' means it is < 0. The three logic variables satisfy the XNOR ("same-or") relation. The flow diagram of the variable observation method is depicted in Fig. 3. In order to realize the algorithm with analog circuits, we designed the analog variable comparator and the analog multiplier.
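A digital equivalent of one iteration of this method is sketched below; the fixed step size is a hypothetical tuning parameter, whereas the analog circuit of Sect. 4 replaces it with a continuous integrator:

```python
def variable_observation_step(v_now, v_prev, p_now, p_prev, v_ref, step=0.05):
    """One iteration of the variable observation method (digital sketch of Fig. 3).

    vs / ps are the logic values of the voltage and power variations: 1 when the
    variation is positive, 0 when it is negative. v_ref is increased when
    vs XNOR ps = 1 (operating point left of the MPP) and decreased otherwise.
    """
    vs = 1 if (v_now - v_prev) > 0 else 0
    ps = 1 if (p_now - p_prev) > 0 else 0
    same = 1 if vs == ps else 0          # Eq. (1): v_ref direction = vs XNOR ps
    return v_ref + step if same else v_ref - step

# Voltage and power both rose, so the reference keeps moving toward higher voltage.
print(variable_observation_step(v_now=95.0, v_prev=94.0, p_now=510.0, p_prev=500.0, v_ref=95.0))
```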


Fig. 3. The variable observation method flow diagram

4 Circuit Design for the MPPT Controller

4.1 The Analog Power Sampling Circuit

In this paper, we use the analog multiplier chip AD534 for power sampling. The AD534 is a precision analog multiplier IC with internal compensation; it has an excellent power-supply rejection ratio and a low temperature coefficient. When the AD534 is used as a multiplier, the output voltage satisfies

$$V_{out} = \frac{(X_1 - X_2)(Y_1 - Y_2)}{A_V} - (Z_1 - Z_2) \qquad (2)$$

The gain can be adjusted through the resistance connected to pin SF. In this design, the expression for Vps is (Fig. 4)

$$V_{ps} = \frac{V_{VS}\,V_{CS}}{K} \qquad (3)$$


Fig. 4. Power sampling using the AD534

4.2 The Variable Comparator and the Reference Generator

Two variable comparators, an XNOR ("same-or") logic gate and a low-pass filter constitute the reference generator. One variable comparator is made up of R9, R10, R11, C1 and a comparator. The voltage sampling signal Vvs is fed to both inputs of the comparator; the signal to the inverting input passes through the delay network formed by R9 and C1, so the inverting input lags behind the non-inverting input. When Vvs is increasing, the non-inverting input voltage is higher and the comparator outputs logic '1'; otherwise it outputs '0'. When the solar array operates on the left of the MPP, voltage and power rise and fall together, so the XNOR gate outputs a high level and Vref increases; otherwise the gate outputs a low level and Vref decreases. Once the control system has settled, the operating voltage oscillates around the MPP. Figure 6 gives the key-node waveforms when the system operates at the MPP. Because all the analog signals are continuous, the tracking accuracy is high (Fig. 5).
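A discrete-time sketch of one variable comparator follows, where a low-pass (delayed) copy of Vvs stands in for the R9–C1 network; the filter coefficient is a hypothetical stand-in for the RC time constant:

```python
def variable_comparator(samples, alpha=0.5):
    """Output '1' while the sampled signal is rising and '0' while it is falling.

    The inverting input sees a first-order low-pass (delayed) copy of the signal,
    so comparing the raw sample with the delayed copy gives the sign of the variation.
    """
    delayed = samples[0]
    outputs = []
    for x in samples:
        outputs.append(1 if x > delayed else 0)   # non-inverting input vs. delayed copy
        delayed += alpha * (x - delayed)          # RC delay network update
    return outputs

print(variable_comparator([1.0, 1.1, 1.2, 1.1, 1.0, 1.1]))  # -> [0, 1, 1, 0, 0, 1]
```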

Fig. 5. The MPPT reference generator


Fig. 6. Ideal waveforms at MPP

The complete S3MPR power control circuit block diagram is depicted in Fig. 7.

Fig. 7. The analog control system based on S3MPR


5 Simulation Analysis

We built a simulation platform in Saber to verify the principle. The simulation parameters are set according to the widely used 100 V satellite power source platform; Table 1 lists the parameters and simulation results, and the characteristic curve of the solar array is given in Fig. 2. The open-circuit voltage of a solar array is 115 V and the maximum output power is 523 W at 99.1 V. The tracking process is depicted in Fig. 8 and the key-node waveforms are shown in Fig. 9. V1 and V2 are the output voltage waveforms of the two solar array stages, and P1 is the output power waveform of the first stage. The load current is set to 7 A, so the first-stage array is in constant-voltage mode and the second-stage array is in switching mode. The output voltage of the first stage is around 100.2 V and the output power is 523.7 W. The power tracking accuracy is 99.9% and the voltage tracking accuracy is 98.9%.

Table 1. Simulation results
Parameters                  Value
Open circuit voltage        115 V
Short circuit current       5.5 A
MPPT voltage                99.1 V
Maximum power               523.74 W
Load                        7 A
Frequency                   2.4 kHz
Ripple wave                 650 mV
Operating voltage           100.2 V
Output power                523.7 W
Power tracking accuracy     99.9%
Voltage tracking accuracy   98.9%

Fig. 8. The tracking process simulation waveforms


Fig. 9. Simulation waveforms of key nodes

6 Conclusions

In this paper, we proposed the variable observation method for a high-voltage, high-power controller using the S3MPR topology and designed the corresponding analog control circuits. The S3MPR topology is well suited to high-power platforms with solar arrays in parallel. Power sampling is realized with the analog multiplier chip AD534, and a variable comparator is designed to judge whether the voltage is increasing or decreasing. Simulation results confirm that the principle of the variable observation method is valid and that the designed circuits achieve a power tracking accuracy of up to 99.9%. Physical verification will follow, with the aim of applying the method to the next-generation satellite power source platform.

References
1. Miao, D.: Improved Dynamic Response Power Density and Integrated Topology Research on Power Condition Unit to Spacecraft. Harbin Institute of Technology, Harbin (2011, in Chinese)
2. Lempereur, V., Jaquet, D., Liegeois, B.: Power conditioning and distribution unit of Globalstar-2 constellation. In: Proceedings of 8th European Space Power Conference, pp. 119–125. ESA, Paris (2008)
3. Veerachary, M.: Fourth-order buck converter for maximum power point tracking applications. IEEE Trans. Aerosp. Electron. Syst. 47(2), 896–911 (2011)


4. Tariq, A., Asghar, J.: Development of analog maximum power point tracker for photovoltaic panel. In: Proceedings of International Conference on Power Electronics & Devices Systems, pp. 251–255. New York (2005)
5. Ebale, G., Lamantia, A., La Bella, M.: Power control system for the AGILE satellite. In: Seventh European Space Power Conference, pp. 589–596. ESA, Paris (2005)
6. Brambilla, A., Gambarara, M., Torrente, G.: Perturb and observe digital maximum power point tracker for satellite applications. In: Proceedings of 6th European Space Power Conference, pp. 263–268. ESA, Paris (2002)
7. Garrigos, A., Blanes, J.M., Carrasco, J.A., et al.: The sequential switching shunt maximum power regulator and its application in the electric propulsion system of spacecraft. In: Power Electronics Specialists Conference, pp. 1374–1379. IEEE, New York (2007)
8. Qing, D., Cui, B., Zeng, Y., et al.: Research of space power system based on S3MPR topology and incremental conductance algorithm. Spacecraft Eng. 25(3), 74–79 (2016)
9. Xie, W., Jin, Y., Dong, B., et al.: Design and verification of high-voltage high-power controller using S3MPR. Spacecraft Eng. 29(2), 59–66 (2020)

Analysis and Design of Aerospace Flight Control Knowledge Base

Zhaoqiang Yang(&), Jiang Guo, Jian Yang, Ruihua Wang, Daoji Wang, and Yang Wu

State Key Laboratory of Astronautic Dynamics, Xi'an 710043, China
[email protected]

Abstract. Building a knowledge base of the aerospace flight control process from knowledge in the professional field of aerospace flight control can improve the execution efficiency of aerospace flight control procedures and accelerate the automation and intelligence of the aerospace flight control system. This paper proposes a representation method for aerospace flight control knowledge and its knowledge base hierarchy, uses knowledge classification theory and first-order logic grammar to standardize aerospace flight control knowledge, and puts forward an ontology-based aerospace flight control knowledge base scheme together with its management method.

Keywords: Knowledge base · Aerospace flight control process · Knowledge representation

1 Introduction

Aerospace is a concentrated expression of national will and comprehensive national strength, and is a strategic high ground in the competition between major countries. International competition for strategic space resources is becoming increasingly fierce, and the number of satellite launches worldwide over the next ten years is expected to exceed 2,000. In 2015, NASA released the Autonomous Horizons report [1], which further elaborated the development direction and approach of space-related autonomous systems. In recent years, China's aerospace industry has developed rapidly, and the number and functions of spacecraft have continued to increase, placing ever higher demands on spacecraft measurement and control technology. In the field of aerospace measurement and control, rapidly improving the ability to enter, exit and control space is an important guarantee for innovation in core aerospace technologies, and improving the efficiency of aerospace flight control procedures is the commanding height of this guarantee. Therefore, intelligent and autonomous technology adapted to the characteristics of space flight control must be developed vigorously [2]. During the command and execution of launch TT&C missions, equipment-level and system-level fault knowledge bases have already been established in China [3]. Although aerospace flight control organizations have a wealth of knowledge, most of it is scattered across personal computers and in people's minds and lacks effective management, which makes it difficult for individuals to acquire knowledge and leads to a low knowledge reuse rate. How to obtain information and data from multiple fields and integrate them has become an important problem to be solved.



Therefore, more and more organizations choose to establish their own knowledge management systems, which is of great significance for improving production efficiency, shortening work cycles and enhancing competitiveness. At present, knowledge base platforms fall into two categories: platforms customized for a particular system, which usually need to be closely tied to the specific development content of a professional organization; and relatively general platforms, which provide a more generic framework. Aimed at the specific needs of aerospace measurement and control, and considering that more and more knowledge content is created, released and used in a way that supports open access, as well as the need for scalability, this article analyzes and designs a dedicated aerospace flight control knowledge base.

2 Aerospace Flight Control Knowledge Representation

2.1 Knowledge Representation

The aerospace flight control process is a clear and orderly piece of system engineering, as shown on the left side of Fig. 1. The knowledge involved covers professional fields such as launch vehicle engineering, spacecraft applications, spacecraft orbits, the space physical environment, space mission geometry, spacecraft subsystems, spacecraft configuration and spacecraft measurement and control [4]. The aerospace system has high requirements for the effectiveness of the flight control process, and an error in any intermediate link may put the mission in jeopardy. How to analyze and express this information clearly and accurately amid complex and diverse situational knowledge, knowledge attributes and the intricate relationships between them directly determines the accuracy, completeness and effectiveness of the established model. In order to construct a flight control knowledge base, it is necessary to consider which functions of the flight control process this knowledge serves:

1) Formulation of the flight control plan;
2) Verification of the flight control plan;
3) Amendment of the flight control plan;
4) State processing and monitoring of the spacecraft during flight control;
5) Judging the state of the spacecraft during the flight control process;
6) The interaction of business, data and other content between the relevant departments of the aerospace system during the flight control process;
7) Abnormal handling during flight control.

There are many existing knowledge representation methods. According to the characteristics of the aerospace flight control process [5], space flight control knowledge can be represented with ontology-based, case-based and model-based knowledge representation methods, as shown on the right side of Fig. 1. Ontology expresses semantics in an explicit and formal way, which can improve the interoperability between heterogeneous systems. It has the characteristics of accurate expression, standardization and clear



Fig. 1. Knowledge representation method of aerospace flight control

structure, and has unique advantages in the reuse and sharing of knowledge [6]. An ontology is a clear, formal specification of a shared conceptual model; this definition carries four meanings: conceptual model, explicitness, formalization and sharing. Case knowledge representation is the main technical strategy for externalizing tacit knowledge. Cases are the standardized representation of problems encountered by the knowledge subject and their solutions [7]. A case is a model of a problem situation, covering the expression and solution of a problem that has occurred, and can describe and store the procedural and empirical knowledge of human cognitive activities in a structured form. Model knowledge representation uses professional knowledge classification to arrange knowledge according to logical conventions, and abstracts and modularizes predicate-logic knowledge, that is, knowledge about knowledge. Predicate logic representation is a symbolic method for describing facts or rules; it is precise, unambiguous and well suited to expressing procedural knowledge. Model-based knowledge representation has a higher level of abstraction, is suitable for expressing general knowledge, and can be applied to the productized aerospace flight control process to reduce unnecessary duplication in the use of knowledge, thereby improving implementation efficiency.


Fig. 2. Knowledge representation method and knowledge hierarchy

2.2 Knowledge Base Level

The knowledge in the knowledge base is layered. By level, it includes declarative knowledge, procedural knowledge and strategic knowledge. Declarative knowledge is usually "fact" knowledge and sits at the bottom of the knowledge base; procedural knowledge is usually expressed as rules, processes and so on, is used to control the "facts", and sits in the middle layer; strategic knowledge consists of the rules about the rules, takes the middle-level knowledge as its control object, and sits at the highest level of the knowledge base. Because aerospace flight control knowledge is vague, hierarchical, coupled, complex and diverse, traditional knowledge representation methods are difficult to apply to it directly. Research shows that ontology-based representation can completely and accurately express the representation requirements of this complex system, so the ontology method can be used for basic declarative knowledge. Procedural knowledge is strongly process-oriented and rich in cases, so case-based and model-based representation methods can be used. Top-level strategic knowledge is highly general and common, for which the model-based representation method is the most appropriate. The relationship between the three knowledge representation methods listed in this article and their correspondence with the knowledge levels are shown in Fig. 2. After the flight control organization has accumulated practice, its experience and summaries need to be standardized before they can enter the knowledge base as knowledge.


3 Standardization of Aerospace Flight Control Knowledge

Because the aerospace flight control process is highly coupled with the entire aerospace system, aerospace flight control knowledge must be expressed in order to be used effectively, and the classification of knowledge is the most critical step in ontology construction. The richer and more detailed the content of the knowledge base, the more convenient and sufficient the sharing and reuse of knowledge, but content redundancy then becomes inevitable, so aerospace flight control knowledge must be classified reasonably. Taking the flight control process as the main line, the knowledge is divided into process knowledge, substantive knowledge, empirical knowledge, regular knowledge and relational knowledge. Substantive knowledge includes documents, manuals, charts, data and so on; regular knowledge is divided into conceptual knowledge, formula knowledge and procedure knowledge; process knowledge refers to each activity in the flight control process; relational knowledge refers to the relationships between the other kinds of knowledge. Process knowledge exists throughout the aerospace flight control process. Core classes are extracted from process knowledge, and the whole class system is expanded on the basis of the core classes, which ensures the comprehensiveness and completeness of aerospace flight control knowledge. When classifying aerospace flight control knowledge, process knowledge is used as the basic category and relational knowledge is used as the relational attributes that connect the other kinds of knowledge, completing the classification. To describe the flight control process in detail, it can be divided into four knowledge bases: the aerospace flight control verification knowledge base, the ascending flight control knowledge base, the operation flight control knowledge base and the return flight control knowledge base. According to the requirements for knowledge reuse and sharing, the knowledge required and generated in the aerospace flight control process is divided into flight control plan knowledge, flight control software knowledge, data flow knowledge, measurement and control equipment knowledge, carrier knowledge, spacecraft knowledge, spacecraft orbital knowledge and spacecraft return knowledge. The description of these two classifications and the relationship between them are shown in Fig. 3. Once the knowledge classification is completed, the standardization of aerospace flight control knowledge can be better realized. First, the basic concepts of aerospace, that is, the declarative knowledge, must be standardized. First-order logic [8] has rich expressive power, can express a large amount of declarative knowledge, and its basic grammar supports checking logical integrity. Considering the particular semantics of the flight control process, type constraints are attached to all terms (including function terms, variables and constants) and atomic sentences on top of the basic grammar of the first-order logic language for grammatical checking. Function terms and predicate sentences with attached type constraints have the following form:



Fig. 3. Knowledge required and generated in the process of space flight control

Satellite 1(ascending section(yes), whether to be manned(no), constellation mode(no), carrier type(a rocket), orbit type(near earth), ...)    (1)

Using such function terms and predicate sentences, an aerospace flight control knowledge statement is formalized into a first-order logic formula: the nominal concepts in the statement are formalized as function terms, the procedural and verbal concepts are formalized as predicate sentences, and the formalized concepts are connected by logical connectives.
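As an illustration only (not the authors' implementation), a typed function term such as formula (1) can be represented as a data structure whose argument slots carry type constraints that are checked before the term is admitted into the knowledge base; the slot names and allowed values below are hypothetical:

```python
# Hypothetical type constraints for the argument slots of a "Satellite" function term.
SATELLITE_SLOTS = {
    "ascending_section": {"yes", "no"},
    "manned": {"yes", "no"},
    "constellation_mode": {"yes", "no"},
    "carrier_type": {"a rocket", "b rocket"},
    "orbit_type": {"near earth", "sun-synchronous", "geosynchronous"},
}

def make_typed_term(name, **slots):
    """Build a function term, rejecting values that violate the type constraints."""
    for slot, value in slots.items():
        allowed = SATELLITE_SLOTS.get(slot)
        if allowed is None or value not in allowed:
            raise ValueError(f"type constraint violated: {slot}={value!r}")
    return (name, slots)

term = make_typed_term("Satellite 1", ascending_section="yes", manned="no",
                       constellation_mode="no", carrier_type="a rocket",
                       orbit_type="near earth")
print(term)
```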

4 Construction of Aerospace Flight Control Knowledge Base

4.1 Aerospace Flight Control Knowledge Construction Process

Ontology knowledge construction takes concepts as the basic unit and builds knowledge by defining classes, class hierarchies, class attributes and attribute hierarchies, while also considering the relationships and relationship constraints between classes, as shown in Fig. 4. The construction of aerospace flight control knowledge is based on the ontology knowledge representation; it is also necessary to consider the creation of instances and the abstraction of models, combining case knowledge representation and



Fig. 4. Ontology knowledge construction


Fig. 5. Knowledge construction of aerospace flight control

model knowledge representation to construct aerospace flight control process knowledge. The detailed process, shown in Fig. 5, is as follows.

1) Determine the professional field and scope of the ontology, clarifying its purpose, coverage and users.
2) Examine the possibility of reusing existing ontologies by referring to the various ontologies established so far.
3) List the important terms in the field of space flight control, such as geosynchronous satellite, sun-synchronous satellite, low earth orbit, space-based measurement and control, manned spaceflight, launch vehicle type and other related concepts.
4) Determine the hierarchical relationships between terms, that is, the class hierarchy. For example, the low-Earth-orbit control class is a subclass of the orbit control class, and the space-based measurement and control channel class is a subclass of the measurement and control channel entity class. Top-down, bottom-up and combined methods can all be used, but whichever method is used, the definition of classes must come first.


5) Define the attributes and relationships of the classes to describe the internal structure of the related concepts. For example, the attributes of the orbit class include semi-major axis, inclination and eccentricity, and the attributes of the constellation satellite measurement and control class include the inter-satellite link class.
6) Define the facets of the attributes, including characteristics such as the type, value range, cardinality and formal semantics of the attribute values.
7) Create specific instances. To define an instance of a class, determine the class, create an instance of it, and fill in the attribute values.
8) Abstract the model. Simplify identical processes and similar instances, consider their commonalities, and abstract them into a model. (A minimal sketch of steps 5)–7) follows this list.)
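A minimal sketch of steps 5)–7) using plain Python dictionaries; the class names, attributes and the instance are illustrative and not the project's actual ontology:

```python
# Toy ontology store: classes with parents and attributes, plus instances.
ontology = {
    "classes": {
        "orbit control": {"parent": None, "attributes": ["semi-major axis", "inclination", "eccentricity"]},
        "low earth orbit control": {"parent": "orbit control", "attributes": []},
        "TT&C channel": {"parent": None, "attributes": ["band"]},
        "space-based TT&C channel": {"parent": "TT&C channel", "attributes": ["inter-satellite link"]},
    },
    "instances": {},
}

def create_instance(name, cls, attribute_values):
    """Step 7): create an instance of a class and attach its attribute values."""
    if cls not in ontology["classes"]:
        raise KeyError(f"unknown class: {cls}")
    ontology["instances"][name] = {"class": cls, "values": attribute_values}

create_instance("Satellite-1 orbit control", "low earth orbit control",
                {"semi-major axis": "7000 km", "inclination": "98 deg", "eccentricity": "0.001"})
print(ontology["instances"]["Satellite-1 orbit control"])
```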

4.2 Aerospace Flight Control Knowledge Base Management

Knowledge Retrieval. Knowledge is stored in the database as structured data, and users may enter any phrase when retrieving it. Knowledge retrieval applies natural language understanding technology for word segmentation, phrase recognition and synonym processing, which raises information retrieval from the keyword level to the knowledge level and thereby improves retrieval accuracy. Relevance and importance are considered when ranking search results, so that the ranking is more accurate, the documents most relevant to the user's intent are placed first, and search efficiency is improved.

Knowledge Maintenance. A good knowledge base must be iterated continuously and maintained regularly. Knowledge base maintenance involves coverage and accuracy: coverage is improved by adding new knowledge in real time, identifying and incorporating the latest aerospace technologies at home and abroad; accuracy is improved by optimizing the recognized knowledge and, according to new requirements, reasonably splitting, merging and deleting the original knowledge.

4.3 Analysis of Typical Examples of Aerospace Flight Control Knowledge Base

Take the flight control process of an ordinary near-Earth sun-synchronous orbiting spacecraft as an example. First, key information about the flight control process, such as the term "sun-synchronous", is retrieved, and the knowledge base returns the models or ontologies of several typical sun-synchronous orbiting spacecraft flight control processes. Second, the reusability of these models or ontologies is examined according to the carrier, measurement and control, and orbit types required for this process, and the conforming models or ontologies are retained to determine how the carrier data, measurement and control data, orbit determination and so on will be processed. Third, the important terms of this process are listed and their hierarchy and relationships determined, in order to describe the data flow, the detailed classification of measurement and control data, and the evaluation of orbit determination. Finally, add its


attribute values according to the special circumstances of this flight control process. The specific classes are as follows: Measurement and control (flight arc, flight procedure, measurement and control equipment (ground-based, space-based), flight control data (format, information, circulation…), troubleshooting (plan, method)); Orbit (orbit type, orbit determination, orbit control). In this example, the knowledge base model is compared with the traditional model. Describing the concepts and behaviors of the aerospace flight control process on a knowledge basis better realizes sharing and reuse; after adaptive modification of a reusable process, plug-and-play of typed modules can be achieved, which standardizes the flight control process, reduces its complexity, improves fault tolerance, and at the same time avoids a great deal of waste of measurement and control resources and human resources.

5 Conclusion

Knowledge is one of the important elements of the flight control process; the key technologies lie in its acquisition, expression and application, and their success directly affects the theoretical and practical level of space flight control. In view of the complex and diverse information elements involved in aerospace flight control, this article first summarizes the knowledge representation methods for the aerospace flight control process and, on this basis, describes the hierarchical structure of the aerospace flight control knowledge base in detail. It then classifies the knowledge with the flight control process as the main line and uses the first-order logic language to standardize the knowledge according to the characteristics of ontology knowledge representation, laying the foundation for the construction of the knowledge base. Finally, the construction process of an ontology-based aerospace flight control knowledge base is proposed. This work provides shared, clear and formal conceptual terminology for aerospace flight control knowledge modeling, can effectively promote the reuse and interoperability of the aerospace flight control knowledge base, and offers a useful reference for future research on the intelligence of space flight control.

References
1. Endsley, M.: Autonomous Horizons: System Autonomy in the Air Force - A Path to the Future (Volume I: Human-Autonomy Teaming). US Department of the Air Force, Washington (2015)
2. Jia-run, L., Qinghai, G., Wenjing, Z.: Intelligent autonomous system and application in aerospace. Flight Control Detect. 1(1), 059–062 (2018)
3. Wen-peng, X., Yong-hui, G., Jing-yan, W., Cheng, H.: Optimization and research on command automation process of aerospace measurement and control task in fault mode. Measur. Control Technol. 7, 52–57 (2019)
4. Liangxian, G., Chunlin, G.: Spacecraft Flight Design, pp. 361–383. Northwestern Polytechnical University Press, Xi'an (2016)
5. Shuangwei, X., Xiaoyue, W.: Mixed method for analyzing the reliability of TT&C mission. Syst. Eng. Electron. 35(3), 667–671 (2013)


6. Jing, L., Liansheng, M.: Comparison of seven approaches in constructing ontology. New Technol. Libr. Inform. Serv. 7, 17–22 (2004)
7. Jianhua, Z., Guo, M., Liu, X.: Study on the mechanism of case knowledge representation for knowledge management. J. Inform. 31(6) (2012)
8. Stuart, J.R., Peter, N.: Artificial Intelligence: A Modern Approach, 3rd edn. (Jianping, Y., et al., Translate), pp. 238–262. Tsinghua University Press, Beijing (2012)

Study on Planning Strategy of TT&C Resource Scheduling Based on Data Mining

Ying Shen1(&), Huili Gao1, Zhaoqiang Yang2, Jianjun Zheng1, and Wei Zhao1

1 Xi'an Satellite Control Center, Xi'an 710043, China
[email protected]
2 State Key Laboratory of Astronautic Dynamics, Xi'an 710043, China

Abstract. Currently, the scheduling mode used by most TT&C resource scheduling systems is based on the mission requirements offered by users. Once the scheduling plan is formulated, the resource redundancy is essentially fixed, with no advance planning of resources. This article analyzes the data of the scheduling system, dividing resource requirements into two types, "periodic" and "peak". By analyzing the historical data on the periodic use of resources, periodic resource demand is predicted and used to guide resource planning by time period and by region. By analyzing the allocation data of dynamic resource scheduling, peak resources are reserved and then enabled and recycled dynamically in actual operation. Finally, the strategy is verified by simulation, providing a different approach to improving the utilization rate of the entire network's resources.

Keywords: Data mining · Periodic resource · Peak resource

1 Introduction

TT&C resource scheduling solves the problem of allocating resources reasonably so that tasks can be completed under certain constraints. The constraints include the visibility between satellites and resources, the requirements of missions, the capabilities of resources, and so on. At present there is much research on resource scheduling algorithms, especially the introduction of intelligent algorithms to solve complex multi-constrained objective optimization problems [1–3]. By contrast, there is relatively little research that mines and analyzes the long-term accumulated data of the scheduling system to decide how resources should be planned and used, so there is still much room to improve resource utilization. This paper focuses on the analysis of static and dynamic scheduling data, dividing resource requirements into "periodic" and "peak" requirements [4]. Historical data and time series prediction models are used to analyze the regular use of resources, and the predicted periodic requirements are used to plan scheduling in advance. From the dynamic resource allocation results, the peak resource requirements are analyzed, and a real-time monitoring and balanced scheduling strategy is designed for enabling and recycling reserved resources. The resource scheduling planning realized by these strategies improves the flexibility of the scheduling system and the utilization rate of the entire network.



2 Type of Resource Scheduling Data

The data of the resource scheduling system can be divided into two categories, input data and output data. The input data mainly include the visibility forecasts between satellites and ground resources, the requirements of the satellites and the status of the resources. The output data are mainly the resource scheduling plans.

2.1 Input Data

The visibility forecasts between satellites and ground resources are related to the satellite orbit and the geographical location of the resources, and they determine the trackable arcs of the resources. The requirements of a satellite generally include the number of tracking passes required per day, the number of ascending and descending passes required, the length of the tracking arc, the time between passes, the entry and exit requirements, and so on. The resource status is generally online or offline: resources that can be allocated normally are online, and those that cannot be allocated because of mission testing, equipment maintenance and so on are offline.

2.2 Output Data

The resource scheduling algorithm allocates resources to satellites while considering constraints such as the input data, the capabilities of the resources and the attributes of the satellites, and the resource scheduling plan is then released. According to the stage at which it is formulated, the resource scheduling plan is generally divided into two types: the medium- or long-term plan, and the emergency plan that is dynamically adjusted according to temporary needs. According to the user, the scheduling plan can be divided into satellite work plans and equipment tracking plans.

3 Requirements Planning of Resources

Resource requirements must be grasped accurately to improve the effectiveness of resources and to realize advance planning and flexible scheduling that avoid waste. Based on long-term experience, resource requirements are divided into two categories: "periodical resource requirements", obtained by analyzing the data of the periodic plans and mainly meeting the periodic, regular resource requirements; and "peak resource requirements", obtained by analyzing the data of dynamic scheduling plans and mainly meeting the peak demand brought by temporary missions. The data-based resource scheduling and planning process is shown in Fig. 1.


Fig. 1. The resource scheduling with planning process based on data

4 Planning Strategy and Implementation

4.1 Periodical Resource Requirements Forecasting

The time series prediction method [5] is a commonly used quantitative prediction method. Its basic principle is to analyze past time series data, that is, to use historical data to build a model that combines causality to predict the future. Time series can be divided into stationary and non-stationary series. A stationary series is essentially trend-free, with observations fluctuating around a certain level; non-stationary series contain trends, seasonality or periodicity. The first step in time series modeling is to determine the type of time series, then find a suitable prediction model, and finally perform an analysis of variance on the prediction model to evaluate and determine the best prediction solution. Two commonly used models are simple exponential smoothing and Holt exponential smoothing. Studies [6] have shown that Holt exponential smoothing is more suitable when the time series has a significant trend. Preliminary analysis of the plan data shows that when the number of satellites and online resources stays roughly constant, the periodical resource demand forecast corresponds to a stationary sequence, for which an exponential smoothing prediction model is often used: the exponential smoothing value is calculated and matched to a time series prediction model to predict the future change of a variable. When long-term planning data are analyzed, the periodical resource demand forecast is a non-stationary sequence with a clear development trend, and the Holt exponential smoothing prediction model is often used for such problems. The applications of these two models to the forecasting of periodical resource requirements are introduced below.

Exponential Smoothing Prediction Model. The exponential smoothing method is often used for stationary series: it takes a weighted average of the historical observations, with weights that decrease exponentially the further back the observation lies. The prediction formula is

$$P_{n+1} = \alpha x_n + (1 - \alpha) S_n \qquad (1)$$

Take a set of equipment tracking plan data as an example: x_n is the nth historical tracking data value, S_n is the nth exponential smoothing value, and α is the smoothing factor (0 < α < 1). The prediction P_{n+1} for period n+1 is a linear combination of the observation and the exponential smoothing value in period n. The key to exponential smoothing prediction is the selection of the smoothing coefficient α, which can be assessed through the variance of the error; α can also be determined by trying several candidate values and choosing the one with the smallest prediction error. When the entire network maintains a roughly constant amount of resources, the data of the previous i (i ≥ 2) periods can be used to predict the data of period i+1. The predicted tracking values, together with the mined tracking conditions of equipment at different sites in different time periods, are then used to guide rational planning before the scheduling of period i+1.

Holt Exponential Smoothing Prediction Model. This model is suitable for time series with a clear trend. The Holt exponential smoothing prediction model is as follows:

$$\begin{cases} S_n = \alpha x_n + (1 - \alpha)(S_{n-1} + T_{n-1}) \\ T_n = \gamma (S_n - S_{n-1}) + (1 - \gamma) T_{n-1} \\ P_{n+j} = S_n + j T_n \end{cases} \qquad (2)$$

where α is the smoothing coefficient and γ is the smoothing parameter (0 < α, γ < 1). S_n and S_{n-1} are the exponential smoothing values of the nth and (n−1)th periods, respectively. T_{n-1} is the trend value of the (n−1)th period; it corrects the nth smoothing value so that the smoothed value stays as close as possible to the nth historical tracking value x_n. P_{n+j} is the predicted tracking value for period n+j; when j = 1, the prediction for period n+1 is the sum of the smoothing value S_n and the trend value T_n of the nth period. S_1 is the actual value of the first period, that is, S_1 = x_1, and the trend value of the first period is T_1 = x_2 − x_1. From the perspective of the long-term development of the entire network, resources show a certain growth trend. Using the predicted tracking values to mine the tracking situation of equipment at different sites provides data support for the reasonable allocation of resources [7, 8].
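A minimal numerical sketch of both prediction formulas follows (Python); the tracking counts are hypothetical, and in practice α and γ would be chosen by minimising the prediction error as described above:

```python
def exp_smoothing_forecast(x, alpha=0.3):
    """Eq. (1): P_{n+1} = alpha*x_n + (1-alpha)*S_n, with S_n the running smoothed value."""
    forecast = x[0]                      # initialise with the first observation
    for xi in x:
        forecast = alpha * xi + (1 - alpha) * forecast
    return forecast

def holt_forecast(x, alpha=0.3, gamma=0.2, j=1):
    """Eq. (2): Holt's method, P_{n+j} = S_n + j*T_n, with S_1 = x_1 and T_1 = x_2 - x_1."""
    s, t = x[0], x[1] - x[0]
    for xi in x[1:]:
        s_prev = s
        s = alpha * xi + (1 - alpha) * (s_prev + t)
        t = gamma * (s - s_prev) + (1 - gamma) * t
    return s + j * t

# Hypothetical daily tracking counts for one station over three weeks:
history = [42, 44, 43, 45, 47, 46, 48, 49, 48, 50, 52, 51, 53, 54, 53, 55, 56, 55, 57, 58, 57]
print(exp_smoothing_forecast(history), holt_forecast(history))
```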

4.2 Dynamic Activation and Recovery of Peak Resources

Peak resource requirements are obtained mainly by analyzing the dynamic resource scheduling data. Based on the transfer and preemption adjustment principles of the resource scheduling system, the satisfaction of temporary resource requirements can be analyzed and predicted statistically, and resources reserved by time period and by region are adopted to improve the satisfaction rate of dynamic scheduling. This section mainly introduces the implementation of the dynamic activation and recycling of reserved resources.


Because peak resource requirements have relatively strict real-time demands, two processes are run on the resource scheduling management system: Real-time Monitoring (RM) and Activation-Recycling (AR). RM is responsible for monitoring and collecting the resource tracking load and requirement-change information in real time and for periodically updating the corresponding information list. Based on the resource requirements and the scheduling equilibrium strategy, AR is responsible for allocating the reserved resources, whose specific sites are derived from data analysis; AR repeats these steps until a suitable resource is found for the waiting task. The realization of the activation and recycling of reserved resources is shown in Fig. 2.


Fig. 2. The realization of activation-recycling of reserved resources
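A toy sketch of the RM/AR cooperation described above is given below (Python); the load threshold, station names, polling interval and reserved-resource pool are illustrative, not values from the actual system:

```python
import time

reserved_pool = ["reserve-east-1", "reserve-east-2"]   # hypothetical reserved resources
active_reserved = []

def real_time_monitoring(load_by_station, threshold=0.85):
    """RM: collect the tracking load and return the stations whose load exceeds the threshold."""
    return [name for name, load in load_by_station.items() if load > threshold]

def activation_recycling(overloaded_stations):
    """AR: enable a reserved resource while any station is overloaded, recycle one otherwise."""
    if overloaded_stations and reserved_pool:
        active_reserved.append(reserved_pool.pop(0))
    elif not overloaded_stations and active_reserved:
        reserved_pool.append(active_reserved.pop())
    return list(active_reserved)

for load in [{"east": 0.92, "south": 0.60}, {"east": 0.95, "south": 0.70}, {"east": 0.55, "south": 0.50}]:
    print(activation_recycling(real_time_monitoring(load)))
    time.sleep(0.1)   # stands in for the periodic update of the information list
```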

The choice of reserved resources here mainly considers the guidance of peak resource requirements. Subsequent research on the transfer ability of the entire network resources can be combined with the scheduling algorithm.

5 Verification and Summary

5.1 Periodical Resource Requirements Forecasting

Firstly, periodical scheduling result data of the resource scheduling system were collected. Taking equipment from the eastern and the southern region as examples, data from three consecutive weeks are used as the basic data. Exponential smoothing prediction


calculations are performed to obtain the predicted value of the next period, and the prediction result is compared with the actual value. When analyzing the daily load, the day is divided into 12 periods based on experience. The statistic for the tracking count m in each period is given by Eq. (3). The comparisons between the predicted and actual tracking data of the equipment in the different regions are shown in Fig. 3 and Fig. 4.

$$m = \begin{cases} 1, & \Delta t > \frac{t_e - t_s}{2} \\ 0.5, & \Delta t = \frac{t_e - t_s}{2} \\ 0, & \Delta t < \frac{t_e - t_s}{2} \end{cases} \qquad (3)$$

where t_s and t_e are the start and end times of the tracking arc, respectively.
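A small sketch of the statistic in Eq. (3), under the assumption (an interpretation, not stated explicitly above) that Δt is the part of the tracking arc that falls inside the statistic period:

```python
def tracking_count(period_start, period_end, arc_start, arc_end):
    """Eq. (3): count an arc as 1, 0.5 or 0 for a statistic period, depending on whether
    the overlap dt exceeds, equals or falls below half the arc length (t_e - t_s)/2."""
    dt = max(0.0, min(period_end, arc_end) - max(period_start, arc_start))
    half_arc = (arc_end - arc_start) / 2
    if dt > half_arc:
        return 1.0
    if dt == half_arc:
        return 0.5
    return 0.0

# An arc from 6.5 h to 7.5 h counted against the 6-8 h statistic period:
print(tracking_count(6.0, 8.0, 6.5, 7.5))   # 1.0
```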

Fig. 3. The predicted and actual values of the equipment in eastern region

5.2 Dynamic Activation and Recovery of Peak Resources

Node control and status information collection were added to the information service node of the resource scheduling management system [9], and the task requirement list was placed in the UR. Based on resource usage, reserved resources are enabled or disabled, or resources are transferred, to complete the task requirements. In the verification environment, although resources can be started or shut down according to the needs of the task, resource scheduling must be combined with actual applications, from which reasonable start-up and shutdown frequencies can be derived for specific applications. The verification results show that mining the historical data of the resource scheduling system reveals the periodic characteristics and rules of resource usage, which can guide the system in planning resource scheduling; for example, the equipment in the eastern region carries a heavy tracking load in the 6–8 h, 12–14 h and 22–24 h periods. Time-divided planning of resources at such sites can be considered before executing the scheduling


Fig. 4. The predicted and actual values of the equipment in southern region

algorithm. When the dynamically monitored tracking load exceeds the threshold during operation, the reserved resources are enabled and then recovered after the task is completed, so that the resources are used reasonably and fully. Comparing the tracking load between the eastern and the southern region, resource planning should place its emphasis on the east.

6 Conclusion

This article takes the scheduling plan data as the object of mining and analysis, and explores methods to improve utilization through advance resource planning and reasonable scheduling. Time series prediction models are used to predict the periodic and peak resource requirements, bringing advance resource planning into the scheduling system. Combined with the activation and recovery of reserved resources in the dynamic scheduling process, resource efficiency can be improved in a targeted way, raising the utilization rate of the entire network.

References
1. Qingqing, Y., Huairong, S., Qiongling, S.: Research overview on modeling and solution of aerospace TT&C scheduling problem. J. Syst. Simul. 27(1), 1–12 (2015)
2. Haibo, W., Minqiang, X., Rixin, W., Yuqing, L.: Spacecraft TT&C resource scheduling based on improved pareto ant colony optimization algorithm. Syst. Eng. Electron. 34(4), 719–725 (2012)
3. Ning, K., Xiaoyue, W.: The scheduling model of TT&C resources of TDRSS and ground stations based on task begin time. J. Acad. Equipment Command Technol. 22(6), 97–101 (2011)


4. Songtao, P.: Resource scheduling for cloud data center based on data mining in smart grid. Telecommun. Sci., 142–147 (2015)
5. Meiying, Z., Jie, H.: Summary on time series forecasting model. Math. Pract. Theor. 41(18), 189–195 (2011)
6. Jia, L.: Technol. Econ. Guide 29(27), 174–175 (2019)
7. Guifeng, W., Xianghua, G.: Application of cloud technology in power grid resource dispatching system. Technol. Power Source 41(11), 1652–1659 (2017)
8. Weiwei, L., Deyu, Q.: Survey of resource scheduling in cloud computing. Comput. Sci. 39(10), 1–6 (2012)
9. Congfeng, L.: The construct and realization for tracking telemeter and command resource scheduling management system. Aeronaut. Comput. Tech. 35(3), 68–71 (2005)

Research on Product Assurance Parallel Work of Spacecraft Control System

Xu Jingyu1(&), Yu Songbai2, Wang Hong3, and Ji Ye1

1 Beijing Institute of Control Engineering, Beijing 100094, China
[email protected]
2 Institute of Remote Sensing Satellite, CAST, Beijing 100094, China
3 Beijing SunWise Space Technology Ltd., Beijing 100190, China

Abstract. This paper studies the practice of product assurance in the development of spacecraft control systems. Product assurance of the spacecraft control system has gone through a series of development processes, and users now place higher requirements on it. Based on modern quality management theory and the reality of aerospace engineering, this paper presents parallel research on product assurance for the spacecraft control system. The parallel research method is systematic, and its most basic working principle is collaborative iteration. A parallel working model for product assurance of the spacecraft control system is presented, and the product quality of the control system is improved through the parallel approach. The practice of parallel product assurance work in spacecraft control system development confirms the correctness and validity of the method.

Keywords: Control system · Product assurance · Parallel work

1 Introduction

Modern satellites usually use a series of product assurance methods to control satellite quality [1–3]. Normally, product assurance of the spacecraft control system goes through a serial development process of design, production, test and verification. With the continuous improvement of product assurance requirements, the output of control systems is increasing and the pace of development is accelerating, which places higher requirements on product assurance of the spacecraft control system. Concurrent engineering, used as a means of product assurance, is a systematic approach. The spacecraft control system manufacturing processes, including design, production, testing and assembly (the life cycle from design to on-orbit operation), are integrated in parallel with product assurance. The most basic principle of parallel product assurance of the spacecraft control system is the principle of "cooperative work" [4–6], and the concept of collaborative iteration is implemented throughout the process. Every product assurance process in the spacecraft control system uses parallel iteration, which helps eliminate risks during the development processes [7, 8].



In this paper, a mathematical model is used to describe the product assurance of spacecraft control system. The evaluation models of parallel work for spacecraft control system are given.

2 Research on Spacecraft Control System Product Assurance

In the serial development of the control system, product assurance work is carried out stage by stage: after the product assurance work of the design stage, the product assurance work of the production stage and then of the test verification stage is carried out. The serial engineering development model of product assurance is shown in Fig. 1 [9–11].

(Comprehensive quality requirements → Design process → Production process → Test verification process → Assembly integration process → On orbit operation process → Final product)

Fig. 1. Product assurance serial development model of spacecraft control system

The results of product assurance serial development may be open-loop. In the parallel development mode, the production process, test verification process, assembly integration process and on-orbit operation process all feed back to the design process. Factors such as feasibility in the production process, interface matching in the test verification process, simulated assembly and plume influence in the assembly integration process, and the flight control process and on-orbit fault plans in the on-orbit operation process are considered together in the design process. Problems and errors in the design are corrected and adjusted through simulation of the follow-up development processes, so that the design is improved. The product assurance quality of the spacecraft control system output is P, and the product assurance quality of the spacecraft control system in each process is P_i. The relationship is shown in Fig. 2; the design process is the first stage, the production process is the second stage, the test verification process is the third stage, and so on.

(The product assurance quality (P): Design process (P1) → Production process (P2) → Test verification process (P3) → Assembly integration process (P4) → On orbit operation process (P5))

Fig. 2. Parallel development model of spacecraft control system

The product assurance results of the test verification process, assembly integration process and on-orbit operation process feed back to the production process, and so on.


Figure 3 shows the simplified parallel product assurance development model of the spacecraft control system.

Fig. 3. Simplified parallel product assurance development model of spacecraft control system (each stage — design, production, test verification, assembly integration, on-orbit operation — is followed by iterations of all downstream stages)

The product assurance gain of the spacecraft control system at each stage is ΔP_i, and the feedback product assurance gain coefficient at each stage is r_i [11]:

$$r_i = \frac{P_i + \Delta P_i}{P_i}, \quad i = 1, 2, \ldots, 5 \qquad (1)$$

The parallel development process improves the overall quality of the design process to $K_{15} = P_1 \cdot r_1 \cdot r_2 \cdot r_3 \cdot r_4 \cdot r_5$, where $r_1 \cdot r_2 \cdot r_3 \cdot r_4 \cdot r_5 \ge 1$. The comprehensive quality improvement of the parallel process is $K_{nm}$, where n is the stage that needs to be modified through feedback and m is the stage providing the modification. After the parallel process, each later process is a comprehensive iteration of the previous process. The product assurance quality of each stage of the spacecraft control system is $K_i$. The product assurance quality of the design process of the spacecraft control system is:


$$K_1 = P_1 \cdot r_1^1 \cdot r_2^2 \cdot r_3^3 \cdot r_4^4 \cdot r_5^5 \qquad (2)$$

After synthesis, the gain of product assurance quality in the design stage of the parallel process is:

$$\Delta P_1 = P_1 \cdot r_1^1 \cdot r_2^2 \cdot r_3^3 \cdot r_4^4 \cdot r_5^5 - P_1 \qquad (3)$$

The gain coefficient of product assurance quality in the design stage of the parallel process is:

$$\varepsilon_1 = \frac{P_1 \cdot r_1^1 \cdot r_2^2 \cdot r_3^3 \cdot r_4^4 \cdot r_5^5 - P_1}{P_1} = r_1^1 \cdot r_2^2 \cdot r_3^3 \cdot r_4^4 \cdot r_5^5 - 1 \qquad (4)$$

The gain coefficient of product assurance quality in the production stage of the parallel process is:

$$\varepsilon_2 = \frac{P_2 \cdot r_2^1 \cdot r_3^2 \cdot r_4^3 \cdot r_5^4 - P_2}{P_2} = r_2^1 \cdot r_3^2 \cdot r_4^3 \cdot r_5^4 - 1 \qquad (5)$$

The gain coefficient of product assurance quality in the test verification stage of the parallel process is:

$$\varepsilon_3 = \frac{P_3 \cdot r_3^1 \cdot r_4^2 \cdot r_5^3 - P_3}{P_3} = r_3^1 \cdot r_4^2 \cdot r_5^3 - 1 \qquad (6)$$

The gain coefficient of product assurance quality in the assembly integration stage of the parallel process is:

$$\varepsilon_4 = \frac{P_4 \cdot r_4^1 \cdot r_5^2 - P_4}{P_4} = r_4^1 \cdot r_5^2 - 1 \qquad (7)$$

The gain coefficient of product assurance quality in the on-orbit operation stage of the parallel process is:

$$\varepsilon_5 = \frac{P_5 \cdot r_5^1 - P_5}{P_5} = r_5^1 - 1 \qquad (8)$$

After parallel engineering, the integrated product assurance quality improvement for the spacecraft control system is:

$$K = P \cdot (r_1^1 r_2^2 r_3^3 r_4^4 r_5^5)(r_2^1 r_3^2 r_4^3 r_5^4)(r_3^1 r_4^2 r_5^3)(r_4^1 r_5^2)(r_5^1) = P \cdot r_1^1 \cdot r_2^3 \cdot r_3^6 \cdot r_4^{10} \cdot r_5^{15} \qquad (9)$$

The gain coefficient of product assurance of the spacecraft control system in the parallel process is:

$$\varepsilon = \frac{P \cdot r_1^1 \cdot r_2^3 \cdot r_3^6 \cdot r_4^{10} \cdot r_5^{15} - P}{P} = r_1^1 \cdot r_2^3 \cdot r_3^6 \cdot r_4^{10} \cdot r_5^{15} - 1 \qquad (10)$$
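The gain model above is a simple product of per-stage coefficients, and the exponents of r_1..r_5 accumulated in Eqs. (9)–(10) are 1, 3, 6, 10 and 15. The following sketch only evaluates these reconstructed formulas numerically; the baseline quality P and the coefficients r_i are assumed illustrative values, not data from the paper.

```python
# Numerical sketch of Eqs. (9)-(10) under assumed per-stage gain coefficients r_i.

P = 1.0                                   # assumed baseline product assurance quality
r = [1.02, 1.03, 1.05, 1.04, 1.06]        # assumed r_1..r_5 (each >= 1)
exponents = [1, 3, 6, 10, 15]             # accumulated exponents from Eq. (9)

K = P
for ri, e in zip(r, exponents):
    K *= ri ** e                          # integrated quality, Eq. (9)

epsilon = K / P - 1                       # overall gain coefficient, Eq. (10)
print(f"integrated quality K = {K:.4f}, gain coefficient = {epsilon:.4f}")
```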

3 Product Assurance Work Evaluation of Spacecraft Control System Developed in Parallel

According to the experience of spacecraft model development, taking the control system of a scientific research satellite as an example, there are at least three iterations and optimizations in each stage, and each iteration affects the product development cycle. Parallel development can adopt generalization, modularization and serialization of hardware and software. It promotes more scientific engineering and systematization of spacecraft control system development, and provides the theoretical basis for improving product assurance of the spacecraft control system. The concurrent engineering simulation diagram is shown in Fig. 4.

Fig. 4. Feedback graph of concurrent engineering

4 Conclusion

Usually, the development of a spacecraft control system goes through the scheme stage, pattern stage, preliminary sample stage and normal sample stage. With the parallel development model for product assurance, the design process, production process, test verification process, assembly integration process and on-orbit operation process are iterated and optimized together. Through parallel development, the number of development stages can be reduced: the pattern stage and preliminary sample stage can be combined, so that development goes through the scheme stage, preliminary sample stage and normal sample stage. Under the condition of ensuring the quality of the spacecraft control system, the development cycle is shortened from 5 to 4 years. The advantages are that the cost of rework and repetition of the process is reduced, the development cycle is shortened and efficiency is greatly improved.

References

1. Chunling, F., He, Z., Xueying, W.: Practice of product assurance based on chang'e-4 probe mission characteristics. Spacecraft Eng. 28(4) (2019)
2. Yao, W., et al.: Practice of product assurance for HXMT satellite. Spacecraft Eng. 27(5) (2018)
3. Jianjun, W., Yan, L., Qingjun, Z., Yalin, Q.: Executive strategy and practice of product for GF-3 satellite. Spacecraft Eng. 26(6) (2017)
4. Estefan, J.A.: Survey of model-based system engineering methodologies. Incose MBSE Initiative, 1–70 (2008)
5. Lusen, H.: Systems engineering and systems engineering management in spacecraft. China Astronautics Press, Beijing (2007)
6. Yamada, T.: Attribute oriented modeling - another dimension for model-based systems engineering. In: Proceedings of the 2011 IEEE International System Conference, New York, pp. 66–71. IEEE (2011)
7. Yu, H., Hao, W., Jungang, Y.: Development schemes of spacecraft system engineering techniques. Spacecraft Eng. 18(1), 1–7 (2009)
8. Yiming, L., Jungang, Y.: System engineering concept, process and framework. Spacecraft Eng. 18(1), 8–12 (2009)
9. Hua, L., Chunlai, Q., Jia, L.: Research on distributed object and data management of concurrent engineering integration framework. Syst. Eng. Electron. 21(7), 1–3 (1999)
10. Shi, S., Libin, L., Hongchang, Z.: Application of aerospace product assurance and product management in implementing QJ9000 standard. Management of Scientific Research (4) (1999)
11. Shi, S., Hongchang, Z.: Study on concurrent engineering in the satellite product assurance. Syst. Eng. Electron. 23(1) (2001)

Research on Inter-satellite Ranging Technology Based on Proximity-1 Protocol

Wenxia Xu1,2(&), Lihua Zuo3, and Jingshi Shen4

1 National University of Defense Technology, Changsha 410073, Hunan, China
[email protected]
2 China Academy of Space Technology, Beijing 100080, China
3 DFH Satellite Co., Ltd., Beijing 100080, China
4 Shandong Institute of Space Electronic Technology, Yantai 264032, China

Abstract. The key technologies of satellite constellations and formation flying missions are information interaction and relative ranging/time synchronization between satellites. In this paper, using the time service given in the CCSDS Proximity-1 protocol, a non-coherent ranging and time synchronization method is proposed to achieve high-precision ranging between satellites while information is transmitted between them. The formula derivation of the algorithm and the analysis of the model error are given in detail. Theoretical analysis and simulation results show that the proposed method achieves high accuracy of ranging/time synchronization together with functional integration, and can provide powerful technical support for inter-satellite links.

Keywords: Proximity-1 protocol · Inter-satellite ranging · Incoherent bidirectional ranging · Time synchronization · Ranging accuracy

1 Introduction

In order to achieve accurate measurement, accurate tracking, full space coverage and real-time control of space targets, and to improve the survivability of the TT&C system, satellite formation flying technology has become a focus of current research. Relative ranging and time synchronization between satellites, together with the establishment of information interaction links [1], are important prerequisites for cooperative control and autonomous operation of satellites in constellations and formations, as well as for reducing the burden on the ground TT&C system and improving the autonomous survival and management capabilities of the satellites in a formation.

The traditional way of inter-satellite ranging and time synchronization is to use ground equipment to measure the satellite constellation, but this has problems such as high cost, low synchronization precision and complex equipment. With autonomous inter-satellite link technology, no ground TT&C equipment is needed: the satellites rely only on their own tracking and measurement equipment to complete ranging, time synchronization and information interaction. This reduces the workload on the ground and, at the same time, effectively avoids the measurement error caused by the atmosphere, which is conducive to improving measurement accuracy [2]. This paper is based on the time service provided in the CCSDS Proximity-1 protocol, and focuses on time calibration between the primary and secondary satellites and on ranging between the two satellites. By referring to the incoherent time-difference ranging method provided by the CCSDS Proximity-1 protocol, high-precision ranging between satellites can be achieved while information is transmitted between them. Under the condition of a specified signal system and clock stability, the design of the message as well as the capture and tracking algorithm is optimized to improve the ranging accuracy as much as possible.

2 Inter-satellite Ranging Principle Based on the Proximity-1 Protocol

In most missions, the satellite networking topology is not large in scale, and the distance between two satellites is generally within several hundred kilometers. The inter-satellite communication link is therefore well suited to the CCSDS Proximity-1 communication protocol for close-range space links. The frame structure adopts the Version-3 transmission frame defined by CCSDS Proximity-1 [2].

2.1 Short Range Link Time Service

Because of the short range and real-time communication of the protocol, both sides of the communication are required to achieve a high degree of time unity. The basis of the time service is that both sides use the same time mark capturing method, which requires both the transmitter and the receiver to track the receiving/transmitting time of the last bit of the attached synchronization markers (ASM) of all data frames.

2.1.1 Time Capture Method

The local controller sends the Set Control Parameters instruction to the transmitting terminal so that the terminal captures the local reference event and the sequence numbers of associated frames at a specified time interval. Once the command is received, the MAC layer sets the time-collection variable from "inactive" to "collecting data" to indicate the start of time collection. The transmitting terminal generates and transmits the Set Control Parameters instruction. When frames are sent within the specified time interval, the transmitter captures the time node and frame sequence number of each transmitted frame. When the collected data are applied, it is also necessary to know the delay of the internal signal path during transmission. Once the specified time interval has been reached, the MAC layer sets the time-collection variable to "collection complete", indicating that the timestamps and the corresponding frame sequence numbers (FSN) can be transmitted.

After receiving the Set Control Parameters instruction, the receiver confirms and decodes the instruction and captures the timestamp and FSN of each received frame within the prescribed period. The receiver also tracks the latency of each internal signal path until the collected data are read out and the time-collection variable is set back to "inactive". When the time collection process ends, both the sender and the receiver transmit the captured times and FSNs to their respective local controllers. The local controller generates a proximity temporal relationship packet containing the series of (timestamp, FSN) points received by the local transceiver within the specified time interval [3, 4]. In addition, the internal signal path delay of the transceiver link must be determined in advance.

2.1.2 Definition of Ranging Transmission Frame Format

Version-3 transmission frames are shown in Fig. 1. The length of a frame is defined as an integer multiple of 1K bits, because the local distance measurement value is extracted at the transmission time of the local frame ASM (0xFAF320). The frame includes the attached synchronization marker (24-bit ASM), data header, CRC check code and data field.

Fig. 1. Transfer frame structure of Proximity-1 protocol

In order to realize higher-precision ranging and time synchronization between satellites, three sequential fields are defined in the data field: the service field, the transmission mode field and the task information field. They are used, respectively, for high-precision ranging and time synchronization (as shown in Fig. 2), for defining the information transmission mode, and for storing valid data [5].

Fig. 2. Service field structure for ranging/time synchronization

2.2 Principle of Incoherent Bidirectional Ranging and Time Synchronization

The method of incoherent bidirectional ranging and time synchronization can be used to obtain the inter-satellite geometric distance and the synchronization time difference, and then perform synchronization time adjustment. Figure 3 shows the principle and time sequence relation of the measurement of inter-satellite distance and clock difference between two satellites (defined as A and B).

Fig. 3. Principle and time sequence relation of bidirectional incoherent ranging and clock difference measurement

In the figure, the primary satellite A and the secondary satellite B transmit ranging signals in their assigned slots based on their respective reference clocks. The ranging frames sent by the two sides have a time difference due to the inconsistency of their clocks. For the primary satellite A, the propagation delay obtained by capturing and tracking the ranging signal transmitted by satellite B includes the propagation delay of the electromagnetic wave between the antennas τ_BA, the delay of satellite B's transmitter t_B, the delay of satellite A's receiver r_A, the clock difference Δt and the measurement error ε_A. Their relationship is:

$$\rho_A = \Delta t + t_B + \tau_{BA} + r_A + \varepsilon_A \qquad (1)$$

Similarly, the time delay ρ_B measured by satellite B is:

$$\rho_B = -\Delta t + t_A + \tau_{AB} + r_B + \varepsilon_B \qquad (2)$$

The terms z_BA = t_B + r_A and z_AB = t_A + r_B are one-way combination zero values with a small range and can be calibrated accurately. ε_A and ε_B are the uncertainties caused by the accuracy of the satellite-borne frequency standard:

$$\varepsilon_A = \int_0^{t_B + \tau_{BA} + r_A} \frac{f_A(t) - f_0}{f_0} \, dt \qquad (3)$$

$$\varepsilon_B = \int_0^{t_A + \tau_{AB} + r_B} \frac{f_B(t) - f_0}{f_0} \, dt \qquad (4)$$

When decoupling the distance and clock difference by the bidirectional ranging method, the measurement error caused by satellite motion can be ignored if its influence over the two satellites is not large. Therefore, the inter-satellite distance and clock difference can be calculated simply by adding and subtracting (1) and (2):


$$\tau_{BA} = (\rho_A + \rho_B)/2 - (z_{AB} + z_{BA})/2 - (\varepsilon_A + \varepsilon_B)/2 \qquad (5)$$

$$\Delta t = (\rho_A - \rho_B)/2 - (z_{AB} - z_{BA})/2 - (\varepsilon_A - \varepsilon_B)/2 \qquad (6)$$
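The decoupling in Eqs. (5) and (6) is a simple add-and-subtract operation. The sketch below only illustrates it numerically; the minus sign of Δt in Eq. (2) is part of the reconstruction above, all numerical delays are invented for the example, and equal combination zero values are assumed so that the recovery is exact.

```python
# Illustrative sketch of the distance/clock decoupling of Eqs. (5)-(6).
C = 299_792_458.0                                 # speed of light, m/s

def decouple(rho_a, rho_b, z_ab, z_ba, eps_a=0.0, eps_b=0.0):
    """Return (tau_ba, delta_t): propagation delay and clock difference."""
    tau_ba = (rho_a + rho_b) / 2 - (z_ab + z_ba) / 2 - (eps_a + eps_b) / 2   # Eq. (5)
    delta_t = (rho_a - rho_b) / 2 - (z_ab - z_ba) / 2 - (eps_a - eps_b) / 2  # Eq. (6)
    return tau_ba, delta_t

# Assumed scenario: ~100 km separation, 200 ns clock offset, symmetric zero values
tau_true, dt_true = 100e3 / C, 200e-9
z_ab = z_ba = 50e-9                               # assumed equal combination zero values
rho_a = dt_true + z_ba + tau_true                 # per reconstructed Eq. (1), noise ignored
rho_b = -dt_true + z_ab + tau_true                # per reconstructed Eq. (2)
tau, dt = decouple(rho_a, rho_b, z_ab, z_ba)
print(f"distance = {tau * C:.2f} m, clock difference = {dt * 1e9:.2f} ns")
```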

If the relative motion between the two satellites is unknown, the error caused by satellite motion must be analyzed and modeled so that higher accuracy of inter-satellite ranging and time synchronization can be obtained through error correction. Let the relative velocity between the two satellites be v(t) and let c be the propagation velocity of the electromagnetic wave in vacuum. According to rigid-body kinematics, let:

$$\tau_{BA} - \tau_{AB} = \frac{1}{c} \int_{\Delta t} v(t) \, dt = \frac{v_{ave} \, \Delta t}{c} \qquad (7)$$

v_ave in Eq. (7) is the average speed of the relative motion of the two satellites, in a given inertial and time system, within the period of the clock difference Δt. From formulas (1) and (2), the following can be obtained:

$$\tau_{BA} = 0.5\,[\rho_A + \rho_B - (z_{AB} + z_{BA}) - (\varepsilon_A + \varepsilon_B)] - 0.5\,\frac{v_{ave}\,\Delta t}{c} \qquad (8)$$

$$\Delta t = 0.5\,[\rho_A - \rho_B - (z_{AB} - z_{BA}) - (\varepsilon_A - \varepsilon_B)] + 0.5\,\frac{v_{ave}\,\Delta t}{c} \qquad (9)$$

2.3 Error Analysis

Equations (8) and (9) give the formulas for decoupling the inter-satellite distance and clock difference. The error terms they contain are as follows:

(1) Errors of the combination zero values. z_BA and z_AB influence the inter-satellite measurement as system combination zero values. They can be accurately calibrated on the ground before being injected into the satellite-borne equipment, with calibration accuracy up to 0.1 ns.

(2) Errors of the satellite atomic clock. According to Eqs. (3) and (4), the influence of the error terms ε_A and ε_B is related to the accuracy of the satellite atomic clock and the inter-satellite propagation delay. By the inequality law, the error is bounded by:

$$\begin{cases} |\varepsilon_A - \varepsilon_B| \le 2\,\mu_{max}\,\tau_{max} \\ \mu_{max} = \max\limits_{t}\left\{ \dfrac{|f_A(t) - f_0|}{f_0}, \dfrac{|f_B(t) - f_0|}{f_0} \right\} \end{cases} \qquad (10)$$

Satellites use a highly stable atomic frequency standard as the reference, which generally has a frequency accuracy better than 10⁻¹¹ (μ_max ≤ 10⁻¹¹). Assuming a maximum inter-satellite propagation delay τ_max ≤ 1 ms (about 300 km), the atomic clock contribution to the inter-satellite measurement can be analyzed according to the second-order stability index. The maximum effect of this error term is on the order of picoseconds (equivalent to the millimeter level) and can be ignored.

(3) Relative motion error. The relative motion error is proportional to the average relative velocity v_ave and the clock difference of the two satellites. Generally, the relative velocity between satellites is not too large. Let k = |0.5 v_ave / c|; if |v_ave| ≤ 34 km/s, then k ≤ 6 × 10⁻⁵. The uncertain term in the calculation model of Eqs. (8) and (9) is therefore a small quantity, and if it is ignored as a model error, the relative error of the clock estimate caused by it is no more than 6 × 10⁻⁵, which meets the accuracy requirements of most tasks. For other inter-satellite measurement systems in which the satellite's dynamic error must be considered, the satellite radial velocity can be calculated from the stored ephemeris.

(4) Measurement error. The thermal noise error and dynamic stress error of the loop directly affect the pseudo-range measurement error. For the pseudo-code tracking loop (usually a delay-locked loop, DLL), the total tracking error (1σ) is

$$\sigma_{DLL} = \sigma_{tDLL} + \theta_e / 3 \qquad (11)$$

where σ_tDLL is the tracking error caused by thermal noise and θ_e is the dynamic stress error caused by relative motion. Carrier-assisted code-loop technology removes most of the dynamics, so the dynamic stress error can be ignored. The tracking error caused by thermal noise is

$$\sigma_{tDLL} = \frac{1}{f_{code}} \sqrt{\frac{2 d^2 B_L}{C/N_0} \left[ 2(1 - d) + \frac{4 d}{T \cdot C/N_0} \right]} \qquad (12)$$

where B_L is the equivalent noise bandwidth (Hz) of the loop, d is the correlator spacing, C/N_0 is the carrier-to-noise ratio of the received signal, T is the pre-detection integration time, and f_code is the pseudo-code rate.
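The following sketch simply evaluates the DLL error model as reconstructed in Eqs. (11)–(12). All parameter values are assumptions chosen only to exercise the formula; they are not the parameters used in the paper's experiment.

```python
# Hedged numerical sketch of Eqs. (11)-(12); parameter values are illustrative.
import math

def dll_thermal_noise(f_code, d, B_L, CN0_dBHz, T):
    """sigma_tDLL in seconds, per the reconstructed Eq. (12)."""
    cn0 = 10 ** (CN0_dBHz / 10)                         # carrier-to-noise ratio, linear
    inner = (2 * d**2 * B_L / cn0) * (2 * (1 - d) + 4 * d / (T * cn0))
    return math.sqrt(inner) / f_code

sigma_t = dll_thermal_noise(f_code=1.023e6, d=0.5, B_L=1.0, CN0_dBHz=56.0, T=1e-3)
theta_e = 0.0                                            # dynamics removed by carrier aiding
sigma_dll = sigma_t + theta_e / 3                        # Eq. (11)
print(f"thermal-noise error ~ {sigma_t * 3e8:.3f} m, total ~ {sigma_dll * 3e8:.3f} m")
```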

3 Experimental Verification

3.1 Design and Composition of Ground Experiment

In order to verify the correctness of the theoretical analysis, the ground verification platform includes two Inter-satellite networking communication/ranging terminals, which have the same functions as the on-board terminal tasks, and can simulate the distance attenuation and transmission delay of the wireless channel, channel noise and interference, and Doppler frequency shift caused by relative motion, etc.

3.2 Experimental Results and Analysis

During the test, a cable of known length (10.05 m) was used; the length obtained by measuring the cable delay was 10.17 m. Figures 4 and 5 show the measurement errors of inter-satellite distance and clock difference, respectively. Statistics over 1000 measurements show a ranging accuracy of 0.05 m (1σ) and a clock-difference accuracy of 0.16 ns (1σ), i.e. the two accuracies are at the same level.

Fig. 4. Measurement error of Inter-satellite distance

Fig. 5. Measurement error of clock difference

According to the transmit power, antenna gain and so on, the spatial propagation loss of the link can be calculated. Combined with the feeder loss, the carrier-to-noise ratio C/N_0 of the received signal is calculated to be about 56 dB. Substituting this into Eq. (12), the thermal noise measurement error of the loop is 0.043 m (1σ). Clearly the measurement error is dominated by thermal noise.

4 Conclusion

In this paper, an incoherent bidirectional ranging/time synchronization method is proposed based on the time service given in the CCSDS Proximity-1 protocol. The paper discusses the principle of ranging while communicating between satellites based on the Proximity-1 protocol, and gives the calculation formulas and the measurement error analysis. The experimental results confirm the correctness of the theoretical analysis and the good performance of the algorithm. The method provides a novel approach for inter-satellite link systems, realizing inter-satellite network communication, ranging, time synchronization, relative positioning and other functions.

References

1. Wang, J.: Design and software implementation of CCSDS Proximity-1 protocol. Nanjing University of Aeronautics and Astronautics (2016)
2. Zhao, H., He, X., Liu, C., Qiang, H.: Space Data System. Beijing Institute of Technology Press (2018)
3. Huang, B., Hu, X.: Inter-satellite ranging and time synchronization technique for BD2. Journal of Astronautics 32(6), 1271–1275 (2011)
4. Ren, F., Zhao, H., Chen, X.: Strategy study of dynamic variable frame length based on CCSDS Proximity-1 protocol. Spacecraft Engineering 22(4), 72–76 (2013)
5. CCSDS 211.0-B-4: Proximity-1 Space Link Protocol - Data Link Layer. CCSDS, Washington D.C., pp. 170–173 (2006)

SpaceWire Network Configuration Method Based on Digraph

Yu Junhui(&), Niu Yuehua, and Mu Qiang

Beijing Institute of Spacecraft System Engineering, Beijing 100094, China
[email protected]

Abstract. SpaceWire is becoming popular as a high-speed (2–400 Mbps) and low-cost network. A backup mechanism is often applied in onboard networks to ensure higher reliability. When one node in the network is switched to its backup device, the newly online node needs to be set up as soon as possible so that normal communication can start. A SpaceWire network configuration method based on a digraph is proposed in this article. The network topology is stored in onboard software in the form of a digraph, and the target SpaceWire address from the management node to each router is found by searching paths in the digraph, so that the configuration of the network can be accomplished automatically. The method helps improve both the reliability and the flexibility of onboard SpaceWire applications.

Keywords: Onboard software · SpaceWire · Digraph · Path addressing

1 Introduction

SpaceWire uses bidirectional, full-duplex links and is already widely used in spacecraft programs [1, 2]. A redundant backup mechanism is usually used in onboard SpaceWire networks to improve the reliability of the whole system, in which every node and connection in the network can be replaced when it fails [3, 4]. When path addressing is used, the target SpaceWire address at the front of the data packet encodes the output ports of each router chip along the data transfer path [5, 6]. Thus the target SpaceWire address from one node to another via the main connection differs from the one via the backup connection. The onboard software must therefore notice node switching and keep aware of the structure of the network, so that it can choose the proper path for data transfer.

There is usually only one management node (normally the central computer of the spacecraft) to accomplish network configuration in onboard applications, since not all nodes in the network are intelligent terminals. It is essential for this management node to obtain the target SpaceWire address of every node in the network based on an analysis of the network architecture. This article introduces a configuration method for SpaceWire networks with a backup mechanism, and further proposes an automatic way to obtain the routing path of each node dynamically: the network topology is transformed into a digraph stored in onboard software, so that the routing path can be obtained by searching for the node in the graph. The method is applied to an example to prove its integrity. Test results show that this method helps manage onboard SpaceWire networks with a backup mechanism by obtaining the routing path of each node automatically based on the network status, which improves the flexibility and reliability of onboard SpaceWire applications.

2 Configuration Protocol

2.1 Path Addressing and Logical Addressing

Data is transferred over SpaceWire networks in the form of packets, each consisting of a destination address, cargo and an end-of-packet marker (EOP/EEP), as shown in Fig. 1. The destination address is used to route the packet from the source node to the destination node. In SpaceWire networks, there are two forms of addressing: path addressing and logical addressing [2, 7, 8].

(Destination Address | Cargo | EOP/EEP)

Fig. 1. Graphical packet notation

The destination address is made of a series of chars suggesting the output ports of each router involved in the path of the data transfer when path addressing is applied. When a router receives a data packet, the first character of the packet leads to its output port. Then the router would delete the first character and have the rest of the packet transferred to the next router, and in the end the address would be totally deleted when the packet meets its destination. The data source node needs to know the exact path for the data transfer when applying path addressing, while the addressing paths through the main link and through the backup link are different in the onboard networks with backup mechanism. The SpaceWire network topology of a spacecraft is shown in Fig. 2, in which the main node and the backup node can be switched and only one of them is online at all time (device A is the main node and device B is the backup node). When Router Unit A is on, the routing path from SMU A to payload 1A is {01 07}, while the routing path is {02 07} when the Router Unit B is turned on. In such occasion, the software of the source node needs to be aware of all the routing connections in the network in order to organize the right routing path for the packet. When applying logical addressing, each destination node gets an identifier, which is a number in the range 32 to 255. To make data transfer by logical addressing possible, the routing tables of every router in the network, which indicate the output port corresponding to each logical address in the router, needs to be configured in advance.
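The byte-by-byte consumption of the path address described above can be sketched very simply. The routing dictionary below is a hypothetical fragment inspired by the {01 07}/{02 07} example in the text, not the actual router configuration of the spacecraft.

```python
# Minimal sketch of path addressing: each hop pops the leading address byte,
# uses it as the output port, and forwards the remainder of the packet.

def route(packet: bytes, routers: dict, start: str):
    node = start
    while packet and node in routers:
        port, packet = packet[0], packet[1:]          # strip the first character
        node = routers[node].get(port)                # next hop reached via that port
    return node, packet                               # destination node, remaining cargo

# Hypothetical fragment of the Fig. 2 topology: SMU -> router unit A/B -> payload 1
routers = {
    "SMU A": {0x01: "Router A1", 0x02: "Router B1"},
    "Router A1": {0x07: "Payload 1A"},
    "Router B1": {0x07: "Payload 1A"},
}
print(route(bytes([0x01, 0x07]) + b"cargo", routers, "SMU A"))   # via Router Unit A
print(route(bytes([0x02, 0x07]) + b"cargo", routers, "SMU A"))   # via Router Unit B
```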

Fig. 2. SpaceWire network topology of a spacecraft (Router Units A and B, each with three routers, connecting the SMU and payload nodes)

Each router decides the output port of a packet by looking up the logical address in its routing table. Take the network shown in Fig. 2 as an example: the same identifier is set for a main node and its corresponding backup node. The routing tables of each node should be configured as shown in Table 1 (port 9 is reserved for the storage district inside the node). The data source nodes then no longer need to understand the network topology and can accomplish data transfer simply by adding the logical address to the head of the packet.

Table 1. Logical address assignment and routing table configuration

Node          | Logical address | A1/B1 | A2/B2 | A3/B3 | C1/C2
SMU A/B       | 34              | 4,6   | 8     | 1     | 9
Payload 1 A/B | 40              | 3,7   | 8     | 1     | 1,2
Payload 2 A/B | 66              | 1     | 4,6   | 8     | 1,2
Payload 3 A/B | 72              | 1     | 3,7   | 8     | 1,2
Payload 4 A/B | 68              | 8     | 1     | 4,6   | 1,2
……            | ……              | ……    | ……    | ……    | ……
21/22         | 1,2             | 1,2   | 1,2   | 1,2   | 1,2

(The columns A1/B1, A2/B2, A3/B3 and C1/C2 give the routing table configuration, i.e. the output ports assigned to each logical address in the corresponding router.)

2.2 Network Configuration Method and Protocol Involved

As can be seen from Sect. 2.1, path addressing is more intuitive, but its shortcomings are also obvious: the extra characters in front of the packet reduce data transfer efficiency, and the software in the source node must hold the whole network connection map to accomplish data transfer, which makes the onboard software more complicated. With logical addressing, a single character identifies the destination node, but it requires that all routing tables be configured in advance. In onboard applications, the network is usually managed by a single node (normally the central computer of the spacecraft, the SMU in the network shown in Fig. 2), since not all nodes are intelligent terminals. The Remote Memory Access Protocol (RMAP) was published by ECSS in 2010 [9, 10] and standardizes the access protocol for remote nodes in SpaceWire networks. Current commercial router chips for astronautic applications, such as the SpW-10X AT7910E, embed an RMAP IP core. In this scenario, the central computer should first configure all the routing tables in every node of the network, using the RMAP-based write command (the write command format is shown in Fig. 3) with path addressing. After that, all the nodes in the network can make transactions by logical addressing. In this way, only the management node needs to be aware of the network construction and to notice the switching on and off of the nodes, so that the routing tables of a newly turned-on node can be configured in time, while the other nodes have no such concerns.

Fig. 3. Write Command Format (Target SpW Address(es), Target Logical Address, Protocol Identifier, Instruction, Key, Reply Address(es), Initiator Logical Address, Transaction Identifier (MS/LS), Extended Address, Address (MS–LS), Data Length, Header CRC, Data, Data CRC, EOP)

3 Configuration Method Based on Digraph

Based on the analysis in Sect. 2, the management node of the network needs to know the network construction and be aware of node switching, so that it can lay out the path to each node and then finish the network configuration.


In order to accomplish the network configuration with a single node, a solution is proposed in this chapter. The software in the management node stores the whole SpaceWire network topology in the form of a digraph. The management node notices node switching by watching the power status of each device. When a device is turned off, the corresponding node and its related paths are deleted from the digraph, so the software always holds the real-time connection state of the network. The path to each node can then be found by searching for the node in the digraph, and the RMAP write command can be organized to set up all the routing tables in the nodes. More specifically, the problem is solved by the following steps (a code sketch of these steps is given after the list):

1) Store the whole SpaceWire topology in the software: each router chip is a node in the digraph. If port x of router A is connected to port y of router B, the path lengths between A and B are recorded as A->B = x and B->A = y.
2) The management node keeps watching the power status of each node, deletes the nodes that are powered off, and leaves only the turned-on nodes in the digraph. The digraph then reflects the real-time network connection and is updated automatically when a device switch happens in the network.
3) Search for each node in the digraph of step 2) to get the target SpW address and reply address. The software can then set up each required routing table in every node one by one.
4) The management node keeps watching the power status of each node and repeats steps 2) and 3) whenever there is a change.
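The sketch below is one possible rendering of steps 1)–3): a digraph of ports, removal of powered-off nodes, and a depth-first search that yields the target and reply SpaceWire addresses. The edge/port numbers reproduce the C1/A1 example and the A-side routers of the paper; the code is an illustration, not the onboard C implementation.

```python
# Digraph of ports: graph[a][b] = output port of a leading to b (step 1).
graph = {
    "C1": {"A1": 1},
    "A1": {"C1": 6, "A2": 1, "A3": 8, "D1": 7},
    "A2": {"A1": 8, "E1": 6, "F1": 7},
    "A3": {"A1": 1, "11": 6, "21": 7},
}

def drop_powered_off(graph, powered_off):
    """Step 2: keep only turned-on nodes and their edges."""
    return {a: {b: p for b, p in nbrs.items() if b not in powered_off}
            for a, nbrs in graph.items() if a not in powered_off}

def find_path(graph, src, dst, seen=None):
    """Step 3: depth-first search; returns the node list from src to dst, or None."""
    seen = (seen or set()) | {src}
    if src == dst:
        return [src]
    for nxt in graph.get(src, {}):
        if nxt not in seen:
            sub = find_path(graph, nxt, dst, seen)
            if sub:
                return [src] + sub
    return None

def spw_addresses(graph, src, dst):
    path = find_path(graph, src, dst)
    target = [graph[a][b] for a, b in zip(path, path[1:])]        # forward output ports
    reply = [graph[b][a] for a, b in zip(path, path[1:])][::-1]   # backward ports, reversed
    return target, reply

# Example: C1 -> A2 with the backup routers assumed powered off
print(spw_addresses(drop_powered_off(graph, {"B1", "B2", "B3"}), "C1", "A2"))
```

For C1 -> A2 this prints ([1, 1], [8, 6]), matching the target address [1 1] and the tail of the reply address [0 0 8 6] in Table 2 of the paper; the leading zeros there correspond to RMAP's reply-address field being padded to a multiple of four bytes. Payload leaf nodes (D1, E1, ...) would be added to the graph in the same way to compute their reply ports.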

4 Implementation

We apply the method to the network topology in Fig. 2 to verify its integrity. The first step is to transform the network topology into a digraph. The connection between C1 and A1, for example, is transformed into the digraph shown in Fig. 4. Applying this transformation to all the connections gives the digraph representing the whole network.

(C1 --1--> A1, A1 --6--> C1)

Fig. 4. Digraph reflecting connections between A1 and C1

Secondly, delete the nodes that are powered off. If all the main devices are on and the backup devices are off, the digraph corresponding to the effective connections is shown in Fig. 5.

Fig. 5. Digraph containing effective nodes

Paths from the management node (C1) to the other nodes are then searched in the digraph. Applying depth-first search, the path to each node is found as follows: C1->A1; C1->A1->D1; C1->A1->A2; C1->A1->A3; C1->A1->A2->E1; C1->A1->A2->F1; C1->A1->A3->11; C1->A1->A3->21. Since all the paths in the digraph are bidirectional, the reply path is obtained by reversing the target path. The target SpaceWire address consists of the recorded port of each hop along the path. The target SpW addresses and reply addresses set in the RMAP write command are shown in Table 2. The target path might differ when a different search algorithm is applied, but the configuration can be accomplished as long as the path is effective.

Table 2. Target SpaceWire addresses and reply addresses

Chip ID | Target SpaceWire address | Reply address
A1      | [1]                      | [0 0 0 6]
A2      | [1 1]                    | [0 0 8 6]
A3      | [1 8]                    | [0 0 1 6]
D1      | [1 7]                    | [0 0 1 6]
E1      | [1 1 6]                  | [0 1 8 6]
F1      | [1 1 7]                  | [0 1 8 6]
11      | [1 8 6]                  | [0 1 1 6]
21      | [1 8 7]                  | [0 1 1 6]

In the end, the management node starts the routing table configuration one by one. Since the path to each node is already obtained, the RMAP write command for configuration can be organized. The data field of the command can be calculated according to Table 1. The flow chart of the configuration process is shown in Fig. 6.

Fig. 6. Flow chart of the configuration process

The software is implemented in the C language. The software function and protocol consistency are verified with onboard devices and a test machine. Test results show that the software can finish the network configuration automatically when there is device switching, which ensures the data transfer quality and increases the flexibility and reliability of SpaceWire onboard applications.

5 Conclusion

A backup mechanism is necessary in onboard SpaceWire applications. A configuration method and the protocols applied in networks with backup redundancy are introduced in this article. The target path as well as the reply path to each node can be obtained by searching for the node in the digraph transformed from the network topology, so that the configuration can be accomplished by a single management node when there is device switching in the network. The method shows its advantages when the network is complicated, and it has positive significance in improving the flexibility and reliability of SpaceWire onboard applications.

References

1. Parkes, S., Armbruster, P.: SpaceWire: spacecraft onboard data handling network. Acta Astronaut. 66, 88–95 (2010)
2. European Cooperation for Space Standardization: SpaceWire - Links, nodes, routers and networks. Standard ECSS-E-ST-50-12A (24 Jan 2003)
3. Zhang, H., Wu, J., Zhang, C.X., et al.: Design of backup fault tolerant protocol for SpaceWire on-board networks. Comput. Meas. Control 23(2), 633–636 (2015)
4. Niu, Y.H., Zhao, W.Y.: Design and analysis of a strong fault-tolerant on-board SpaceWire bus network. J. Comput. Appl. 34(9), 2497–2500 (2014)
5. Qu, X.P.: Research on Synchronization and Fault-Tolerant Technology of SpaceWire. Xidian University, Xi'an (2015)
6. Mao, C.J., Guan, Y., David, J.: Research and design of on-board dynamic SpaceWire router. J. Electron. Inf. Technol. 32(8), 1904–1909 (2010)
7. Mu, Q., Niu, Y.H., Yu, J.H., et al.: Design of spacecraft distributed storage system using SpaceWire network. Spacecraft Eng. 28(2), 78–84 (2019)
8. Yang, Z., Li, G.J., Yang, F., et al.: Design of communication protocol for SpaceWire onboard networks. Journal of Astronautics 33(2), 200–209 (2012)
9. European Cooperation for Space Standardization: SpaceWire - Remote memory access protocol. Standard ECSS-E-ST-50-52C (05 Feb 2010)
10. Yan, M.T., An, J.S., Gong, Q.M.: Design and implement of SpaceWire on-board high speed bus based on RMAP protocol. Application of Electronic Technique 42(1), 108–110, 114 (2016)

LDPC Encoding and Decoding Algorithm of LEO Satellite Communication Based on Big Data

Nan Ye(&), Jianfeng Di, Jiapeng Zhang, Rui Zhang, Xiaopei Xu, and Ying Yan

Xi'an Institute of Space Radio Technology, Xi'an 710100, China
[email protected]

Abstract. With the rapid development of science and technology, mankind has entered the era of digitization, and computer and big data technologies have penetrated all fields of society. This paper first studies two basic message-passing mechanisms for decoding LDPC codes and analyzes the performance of the decoding algorithm through experimental simulation. Applying the big data concept in satellite communication networks can bring a large performance improvement, and the LEO satellite communication network architecture is further designed. The results show that the BER performance of the system can be improved by increasing the number of iterations: with 10 iterations the BER performance is the worst; with 20 iterations it is greatly improved; with 30 iterations it improves further; however, at around 40 iterations the BER performance of the system tends to stabilize, and it becomes difficult to improve the performance further through iteration.

Keywords: Big data · LEO satellite communication · LDPC code · Encoding and decoding algorithm

1 Introduction

In a LEO satellite constellation system, due to the idle resources of the constellation and the high-speed movement of LEO satellites, the network topology is highly dynamic, which poses severe challenges for satellite communication from source to destination. This situation limits the applicability of traditional communication methods, which rely on the exchange of topology information when connections change or are set up. Therefore, in recent years, many communication algorithms and implementation strategies have been proposed for satellite constellation networks with inter-satellite links.

With the development of science and technology, many experts have studied LEO satellite communication algorithms. For example, some domestic teams have studied the accurate estimation of the posterior probability in the downlink, analyzed the estimation problem, and derived an improved posterior probability estimation algorithm using probability theory [1]. Some experts have studied multiple LDPC encoding and decoding algorithms, applied QC-LDPC codes to the channel, and proposed a new method to calculate the log-likelihood ratio (LLR) initialization of LLR-BP decoding in an APD receiving channel. A new solution has also been proposed for these computation-intensive algorithms: through parallel LDPC decoding based on ubiquitous multi-core architectures, high throughput comparable to dedicated VLSI systems is achieved. Systems, methods and other embodiments related to the architecture of an LDPC decoder have been described, and three different platforms have been built to simulate the BER performance of LDPC convolutional codes. Aiming at the error propagation caused by the transmission of extrinsic information between nodes in the iterative decoding of LDPC codes, a new method has been proposed [2]. In addition, some experts have studied algorithms in satellite optical networks and proposed a dynamic window decoding scheme for LDPC convolutional codes; compared with belief propagation decoding, this scheme can effectively reduce the decoding delay, and its BER performance has been verified by simulation. A construction method of LDPC codes with period-free parity check matrices has been proposed, together with a method to shorten the period of LDPC codes. To achieve efficient software LDPC decoding on mobile devices, the LDPC decoding function on the communication baseband chip needs to be replaced, to save cost and make the decoder easy to upgrade so that it is compatible with multiple access schemes. The applicability of CGR in LEO satellite DTN communication has been discussed through studies of earth observation and data mules, and the advantages of CGR in LEO satellite communication have been verified by running ION, the DTN bundle protocol and CGR developed by NASA on a Linux platform. In order to study optical modems and optical tracking mechanisms, an optical islanding simulator has been developed; using an erbium-doped fiber amplifier, a polarization filter and a narrow-band filter, a sensitivity of 56 photons/bit is obtained at a bit error rate of 10⁻⁹, and Coude-path, active-universal-joint and double-plane-mirror optical antenna mechanisms have been developed [3].

Although research on LEO satellite communication is fruitful, research on LDPC encoding and decoding algorithms based on big data is still insufficient. In order to study the LDPC encoding and decoding algorithm of LEO satellite communication based on big data, this paper studies big data and the LDPC encoding and decoding algorithms of satellite communication, and examines the interference analysis method of the satellite system. The results show that big data technology is conducive to the development of LDPC encoding and decoding algorithms in satellite communication.

2 Method

2.1 Concept of Big Data

Big data is a technology that analyzes and processes large amounts of data through mathematical analysis, computer operation and other methods, and mines valuable or potentially valuable information contained in the data [4]. Big data is an analysis and decision-making technology supported by mathematical tools; data mining absorbs ideas from statistics, mathematical modeling, artificial intelligence, machine learning and many other technical fields [5]. The nonlinear characteristics of big data ensure its fault tolerance and the amount of data it can handle. The characteristics of big data are determined not only by a single data item but also by the network composed of multiple data items [6]. Generally speaking, big data has a strong ability of autonomous learning, which can change the data through changes in the learned information. The iterative algorithm makes the whole big data model more suitable for fault diagnosis, but the explanatory power of big data is weak, and empirical knowledge cannot be read and applied directly; therefore, in practice it is often combined with fuzzy logic and other methods [7]. Big data mainly uses the concept of grouping: the whole data source is divided into multiple data source groups in a certain way, and through this grouping the categories of data can be distinguished [8].

2.2 LDPC Encoding and Decoding Algorithm for LEO Satellite Communication

(1) LEO satellite communication. Satellite communication refers to the use of artificial earth satellites as relay stations to transmit electromagnetic waves and realize communication between two or more ground stations. According to the working orbit, satellite communication systems are divided into LEO, MEO and GEO. Among them, low-orbit satellites have low orbit height, short transmission delay and small path loss, and can achieve real global coverage and frequency reuse; the disadvantage is that dozens of satellites are needed to form a global system, the satellites move quickly, the connection time is limited, and frequent switching between satellites or carriers is required [9]. The RF processing part is mainly responsible for sending and receiving data: after modulation, the data are loaded onto a frequency carrier, the power is amplified, and the signal carrying the data packet is sent to the satellite via a VHF antenna [10].

(2) LDPC code. LDPC codes are a subset of linear block codes, so the general coding method of linear block codes can be used directly: the parity check matrix is converted into a systematic generator matrix by Gaussian elimination, and the encoding is completed. In this method, Gaussian elimination transforms the check matrix into a lower triangular structure and the encoding is then carried out iteratively, which simplifies the encoding to some extent; however, the preprocessed matrix is no longer sparse, and the encoding complexity is still very high. The decoding algorithm of an LDPC code determines whether the code can achieve its best performance, and it also directly affects the hardware overhead and decoding delay. The best decoding algorithm is maximum-likelihood decoding (MLD), but it is too complex to be implemented.

(3) Encoding and decoding algorithm. The decoding algorithm improves the credibility of codewords by repeatedly passing extrinsic messages between variable nodes and check nodes, thus obtaining the best coding performance. If the messages are probability messages, it is a probability-domain decoding algorithm; if log-likelihood ratio messages are used, it is log-domain decoding.
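As a concrete illustration of the log-domain message passing just described, the sketch below performs one iteration of a min-sum decoder (the min-sum approximation of the check-node update is one common log-domain variant). The tiny parity-check matrix H and the channel LLRs are assumptions for demonstration, not the codes used in the paper.

```python
# Minimal log-domain min-sum sketch: one check-node and one variable-node update.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])              # 3 check nodes x 6 variable nodes

def min_sum_iteration(llr_channel, v2c):
    """One flooding iteration: check-node update (min-sum) then variable-node update."""
    c2v = np.zeros_like(v2c)
    for c in range(H.shape[0]):
        vs = np.nonzero(H[c])[0]
        for v in vs:
            others = [u for u in vs if u != v]
            sign = np.prod(np.sign(v2c[c, others]))
            c2v[c, v] = sign * np.min(np.abs(v2c[c, others]))     # min-sum approximation
    for v in range(H.shape[1]):
        cs = np.nonzero(H[:, v])[0]
        for c in cs:
            v2c[c, v] = llr_channel[v] + c2v[np.setdiff1d(cs, c), v].sum()
    posterior = llr_channel + np.array([c2v[np.nonzero(H[:, v])[0], v].sum()
                                        for v in range(H.shape[1])])
    return v2c, posterior

llr = np.array([2.1, -0.4, 1.3, 0.9, -1.7, 0.6])   # assumed channel LLRs
v2c = (H * llr).astype(float)                       # initialise variable-to-check messages
v2c, post = min_sum_iteration(llr, v2c)
print((post < 0).astype(int))                       # hard decision: LLR < 0 -> bit 1
```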


2.3 Satellite System Interference Analysis Method

Here, a change in the geographical distance between the Iridium system and the ground mobile communication system mainly changes the loss of the space signal. Free space loss refers to the attenuation of the electromagnetic wave along the transmission path. The calculation formula is as follows:

$$L = \lg F + \lg D \qquad (1)$$

The useful signal power (dBm) received by the satellite is calculated according to formula (2):

$$P_R = P_T + G_T + G_R - Loss \qquad (2)$$

In the above formula, P_T, G_T, G_R and Loss are, respectively, the transmission power, transmitter gain, receiver gain and transmission loss. The thermal noise power is calculated by formula (3):

$$P_N = \lg(kTB) \qquad (3)$$

where k is the Boltzmann constant, T is the noise temperature and B is the bandwidth. The dry load ratio is calculated by formula (4):

$$p = \lg(p_{indoor} + p_{outdoor}) \qquad (4)$$
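The link-budget style formulas (1)–(3) can be evaluated directly. The sketch below uses the standard dB expansion of the free-space loss, 32.44 + 20 lg F(MHz) + 20 lg D(km), as an assumed reading of the shorthand in Eq. (1); the frequency, distance, powers and gains are illustrative values only, not figures from the paper.

```python
# Illustrative evaluation of the interference-analysis formulas above.
import math

def free_space_loss_db(f_mhz, d_km):
    # Assumed standard dB form of the free-space loss corresponding to Eq. (1)
    return 32.44 + 20 * math.log10(f_mhz) + 20 * math.log10(d_km)

def received_power_dbm(pt_dbm, gt_dbi, gr_dbi, loss_db):
    return pt_dbm + gt_dbi + gr_dbi - loss_db             # Eq. (2)

def thermal_noise_dbm(t_kelvin, b_hz):
    k = 1.380649e-23                                       # Boltzmann constant
    return 10 * math.log10(k * t_kelvin * b_hz) + 30       # Eq. (3), W -> dBm

loss = free_space_loss_db(f_mhz=1626.5, d_km=780.0)       # assumed Iridium-like uplink
pr = received_power_dbm(pt_dbm=37.0, gt_dbi=3.0, gr_dbi=24.0, loss_db=loss)
pn = thermal_noise_dbm(t_kelvin=290.0, b_hz=31.5e3)
print(f"loss={loss:.1f} dB, Pr={pr:.1f} dBm, Pn={pn:.1f} dBm, SNR={pr - pn:.1f} dB")
```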

3 Experiment

3.1 Extraction of Experimental Objects

The main difference between visible light communication and traditional radio frequency communication lies in the transmission channel and the transceiver: the former uses an optical channel, with an LED as the transmitting device and a photodetector as the receiving device. The baseband information processing is exactly the same, so the simulation model of the visible light communication system can be designed by imitating the traditional RF communication simulation model. Considering the large amount of internal program code in the data terminal, the processor should have enough storage space to provide sufficient flexibility for the system software design. For the peripheral equipment of the data terminal, the processor should provide enough peripheral interfaces to ensure normal communication between the processor and the peripherals and the scalability of system functions.

3.2 Experimental Analysis

Firstly, the ROM for the basic information of the base matrix (the positions and shift values of the non-zero submatrices) and the shift value ROM are considered. Because the code length of the three codes is the same, as is the order of the submatrix, the range of the basic information is the same, so the bit width of the memory does not need to be changed; the basic information of the three base matrices can be stored by increasing the storage depth. Secondly, the RAM that stores message values during the decoding iterations is considered. The required bit width is V×6 and does not need to be changed for multiplexing. The storage depth is the number of non-zero submatrices in the parity check matrix; for the three code rates these numbers are 275, 296 and 294, respectively, so the memory depth is set to the maximum, 296. The last thing to consider is the FIFO used to store the messages and address information waiting for one row to be updated; its design depth is therefore the maximum number of rows.

4 Discussion

4.1 The Influence of Iterations on the Performance of LDPC Codes

The number of iterations is an important factor affecting the decoding quality. The influence of the iteration count on the BER performance of the improved min-sum decoding algorithm is studied by simulation. The more iterations, the better the BER performance of the system, but at the same time the decoding complexity increases, which is not conducive to hardware implementation; if the number of iterations is too small, the BER performance of the system may not meet the requirements. Therefore, while meeting the performance requirements, it is necessary to find an appropriate number of iterations to reduce the implementation complexity as much as possible. Iteration counts of 5, 10, 20, 30 and 50 are selected for the simulation experiment, and the simulation results are shown in Table 1.

Table 1. The influence of iterations on the BER performance of the system

Type     | LDPC code performance, Eb/N0 (dB)
Ite = 5  | 1.4
Ite = 10 | 0.5
Ite = 20 | 1.6
Ite = 30 | 2.8
Ite = 50 | 1.2

It can be seen from the above that when the number of iterations is 5, the LDPC code performance is 1.4 Eb/N0 (dB); when the number of iterations is 10, it is 0.5 Eb/N0 (dB); when the number of iterations is 20, it is 1.6 Eb/N0 (dB); when the number of iterations is 30, it is 2.8 Eb/N0 (dB); and when the number of iterations is 50, it is 1.2 Eb/N0 (dB). The results are shown in Fig. 1.

Fig. 1. The influence of iterations on the BER performance of the system

From the above results, it can be seen that increasing the number of iterations is beneficial to improve the BER performance of the system. When the number of iterations is 10, the BER performance is the worst. When the number of iterations is 20, the BER performance is greatly improved. When the number of iterations is 30, the performance is also greatly improved. However, when the number of iterations is 40, the BER performance of the system tends to be stable, and it is difficult to improve the performance of the visible light communication system through iterations. 4.2

Theoretical Analysis on the Performance of Mobile Satellite Communication Network Edge Computing Architecture

Generally speaking, the deployment of edge computing platform on the edge devices of mobile SC network can make the edge devices become edge computing nodes with computing and storage capacity, and then make the whole SC network have certain computing capacity, not just forwarding function. In SC networks with edge computing capability, edge computing nodes can be divided into ground-based edge computing nodes and space-based edge computing nodes according to the different deployment locations of edge computing platforms. Ground edge computing nodes mainly include base station and ground data center. Space based edge computing nodes mainly include GEO Satellite edge computing nodes and LEO satellite edge computing nodes. The scenario analysis results of three different performance indicators are shown in Table 2. Table 2. Scene performance analysis of different edge computing nodes Node type Foundation node LEO satellite node High orbit satellite node

Response delay (ms) Coverage (%) 32 20 34 53 16 37

90

N. Ye et al.

It can be seen from the above that the response delay of the ground node is 32 ms, and the coverage is 20%; the response delay of LEO satellite nodes is 34 MS, and the coverage is 53%; the response delay of high orbit satellite nodes is 16 ms, and the coverage is 37%. The results are shown in Fig. 2.

Fig. 2. Scene performance analysis of different edge computing nodes

It can be seen that although the response delay of LEO satellite edge computing nodes is long, the coverage is wide. It only needs three satellites to cover all regions of the world, and is not disturbed by complex terrain, weather and other natural conditions during the service period for ground users.

5 Conclusion The traditional mobile communication satellite is only used as an information transmission station in the communication process, which does not have the ability of data processing. Moreover, the traditional satellite network has the disadvantages of large propagation delay and expensive spectrum resources, which can not meet the development of time sensitive services. This paper summarizes and analyzes some classical DA in optical communication system, such as DA and its improved algorithm, and their performance analysis methods. On this basis, the key problems in the DA of optical communication system are summarized, which has a very important theoretical significance for future research.

LDPC Encoding and Decoding Algorithm of LEO Satellite Communication

91

References 1. Cornelius, H., Shao, Z., Oliveira, R.M., et al.: Knowledge-aided informed dynamic scheduling for LDPC decoding of short blocks. IET Commun. 12(9), 1094–1101 (2018) 2. Ao, J., Liang, J., Ma, C., et al.: Optimization of LDPC codes for PIN-based OOK FSO communication systems. IEEE Photonics Technol. Lett. 29(9), 727–730 (2017) 3. Kakde, S., Khobragade, A., Ambatkar, S., et al.: Implementation of decoding architecture for LDPC code using a layered Min-Sum algorithm. IIUM Eng. J. 18(2), 128–136 (2017) 4. Ye, C., Gu, F., Shi, W.: Research on the communication signal processing and parallel decoding algorithm based on neural network. Revista de la Facultad de Ingenieria 32(3), 255–263 (2017) 5. Krishnamoorthy, N.R., Ramadevi, R., Marshiana, M., et al.: Reduced decoding time of LDPC code using convolutional code with Vitrebi decoding in underwater communication. Int. J. Eng. Technol. 7(2), 167–169 (2018) 6. Wei, H., Banihashemi, A.H.: An iterative check polytope projection algorithm for ADMMbased LP decoding of LDPC codes. IEEE Commun. Lett. PP(99), 1 (2017) 7. Xia, Q., Lin, Y., Tang, S., et al.: A fast approximate check polytope projection algorithm for ADMM decoding of LDPC codes. IEEE Commun. Lett. 23(9), 1520–1523 (2019) 8. Wijekoon, V.B., Viterbo, E., Hong, Y., et al.: A novel graph expansion and a decoding algorithm for NB-LDPC codes. IEEE Trans. Commun. 68(3), 1358–1369 (2020) 9. Zhao, M., Bin, X., et al.: Serial decoding algorithm with continuous backtracking for LDPC convolutional codes. Wireless Personal Commun. 107(4), 1823–1833 (2019) 10. An Improved Reliability-Based Decoding Algorithm for NB-LDPC Codes. IEEE Commun. Lett. PP(99), 1 (2021) 11. Lin, Y., Xia, Q., et al.: A fast iterative check polytope projection algorithm for ADMM decoding of LDPC codes by bisection method. IEICE Trans. Fund. Electron. Commun. Comput. Sci. E102.A(10), 1406–1410 (2019) 12. Santini, P., Battaglioni, M., Baldi, M., et al.: Analysis of the error correction capability of LDPC and MDPC codes under parallel bit-flipping decoding and application to cryptography. IEEE Trans. Commun. PP(99), 1 (2020)

LDPC Coding and Decoding Algorithm of Satellite Communication in High Dynamic Environment Xiaoni Wu(&), Xuexue Qin, Xiaodong Ma, Dayang Zhao, Dan Lin, and Yifeng He Xi’an Institute of Space Radio Technology, Xi’an 710100, China [email protected]

Abstract. In this digital age, modern communication system plays an important role in almost all aspects of life, such as mobile network and satellite communication to the Internet and data transmission. This paper analyzes the concept of high dynamic environment, satellite communication system and LDPC algorithm. The results show that the resource cost of the rate editor of regs is higher than that of LUTS. Look up table LUTS, register regs and two kinds of encoders can be selected. Keywords: High dynamic environment Coding  Decoding algorithm

 Satellite communication  LDPC

1 Introduction In recent years, providing reliable broadband satellite communication in high mobility environment has attracted wide attention. One of the biggest challenges is the large Doppler shift caused by the rapid relative motion between the transmitter and the receiver. Usually, over time, data loss occurs. Current methods mainly focus on improving the performance of carrier synchronization to ensure that all data are received as much as possible. With the continuous development of science and technology, many experts have studied the LDPC Encoding and decoding algorithm of satellite communication. For example, some domestic research groups have studied the coding and decoding algorithms of LDPC codes and proposed a concatenated code model of high-order lowdensity parity check (LDPC) coding and modulation scheme. The model consists of an external LDPC encoder, an internal BDC encoder and a series puncture device. The joint parity check matrix of concatenated codes is derived, and the corresponding joint demodulation decoding algorithm is given, which is called concatenated code confidence propagation (CCBP). Simulation results show that, compared with bit interleaved coded modulation (BICM) scheme, LDPC coded modulation scheme based on CCBP algorithm has an effective improvement on higher-order modulation. In order to effectively reduce the impact of short period on different bit rates, a short period calculation algorithm and a graph correlation metric called external message degree (EMD) are proposed. The reweighted belief propagation algorithm (BP) is proposed to ©

The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 J. Sun et al. (Eds.): Signal and Information Processing, Networking and Computers, LNEE 917, pp. 92–99, 2023. https://doi.org/10.1007/978-981-19-3387-5_11

LDPC Coding and Decoding Algorithm of Satellite Communication

93

realize the effective decoding of LDPC codes through various design methods, that is, accurate signal reconstruction and low decoding delay. A variable factor occurrence probability algorithm (vfap BP) and an improved local optimization reweighting algorithm (low BP) are proposed. Both algorithms can significantly improve the decoding performance of regular and irregular LDPC codes. More importantly, the weighting parameters are only optimized in the off-line phase, so the real-time decoding process does not need additional computational complexity. Finally, two iterative detection and decoding receivers are proposed for spatial multiplexing MIMO systems. The proposed multiple feedback (MF) - QRD or variable m (VM) QRD detection algorithm and standard BP decoding algorithm are used in the QR decomposition (QRD) IDD receiver. The knowledge aided (KA) receiver is equipped with a simple soft parallel interference cancellation detector and the proposed reweighted BP decoder. In the case of non coding, the performance of the proposed MF QRD and VM QRD algorithms is close to the best, but the computational complexity needs to be reduced [1]. Some experts have studied the unequal error LDPC coding and decoding in satellite ATM network transmission, and proposed a method using BICM ID to alleviate the performance degradation caused by FTN. BICM ID is a method to calculate a new LLR according to the hard decision value of decoder output to improve the performance. In Gaussian channel, the performance of DVB-S2 system based on 8PSK modulation and FTN technology is better than that of DVB-S2 system based on 8PSK modulation and FTN technology. In order to achieve high dynamic carrier tracking, a new tracking algorithm is proposed. The regeneration bit clock determines the update time of carrier tracking loop and code tracking loop. Carrier tracking loop is a part of PLL and PLL in series. Theoretical analysis and simulation results verify the effectiveness of the algorithm. A decoding method and apparatus for improving soft error correction performance, an error correction method and apparatus, and a soft output method and apparatus are provided. In order to support different packet lengths, the pre-defined rules are applied to the parity check matrix of LDPC codes, and then the parity check matrix is selectively shortened. Then, if the information data bits are input, the input data information bits are encoded into LDPC codewords using parity check matrix according to the preset coding scheme, and the encoded LDPC codewords are sent after selectively applying pruning. In order to reduce the computational complexity, a N2 majority logic decoding algorithm for non binary LDPC codes is designed [2]. Some experts have studied the hybrid spread spectrum burst communication system, analyzed the minimum maximum decoding algorithm of non binary LDPC codes, and proposed the serial minimum maximum decoding algorithm. Then, combined with variable node message weighting, a non binary LDPC code serial min max decoding algorithm based on variable weighting is proposed. Finally, the characteristics of optical communication are analyzed in detail, and the simulation model of optical communication system is given. A design method of low density parity check (LDPC) codec for 40 Gb/s optical communication system is proposed. In order to get the best performance, the power of EDFA is further optimized to use with SMF and DCF to get the highest Q factor. 
At 40 Gb/s bit rate, the LDPC coding system (r = 0.8 LDPC coding, spa decoding) is compared with the non coding system. An improved quantization and product decoding algorithm for low density parity check (LDPC) codes is proposed. The algorithm is based on the quantization of soft channel

94

X. Wu et al.

information, which improves the complexity of the algorithm. According to the structural characteristics of Gallager low density parity check (SCG LDPC) codes, an effective hierarchical reliable belief propagation (hrbp) decoding algorithm is proposed. The serial and parallel encoding circuits are designed and implemented on xc4v-sx55 FPGA. The correctness of the coding algorithm is verified by the standard belief propagation decoding algorithm [3]. Although the research on LDPC Encoding and decoding algorithm of satellite communication is quite fruitful, there are still some deficiencies in the research on LDPC Encoding and decoding algorithm of satellite communication in high dynamic environment. In order to study the LDPC Encoding and decoding algorithm of satellite communication in high dynamic environment, this paper studies the LDPC Encoding and decoding algorithm of satellite communication in high dynamic environment, and finds the spread spectrum technology. The results show that the high dynamic environment is conducive to the research of LDPC Encoding and decoding algorithm in satellite communication.

2 Method 2.1

High Dynamic Environment

The larger the dynamic range of high dynamic communication relative motion is, the larger the carrier frequency offset is, and even the higher-order variation rate offset is, which brings a great test to carrier synchronization [4]. In order to stabilize the tracking signal, the most direct method is to increase the noise bandwidth of the loop filter [5]. However, the increase of the loop noise bandwidth improves the dynamic adaptability of the loop, but it is conducive to the stability of the loop, but at the cost of increasing the thermal noise error of the loop. The overall tracking performance is not very good [6]. It is proposed to add a lock loop to the PLL. Using the advantages of large dynamic range and high tracking accuracy of PLL, the synchronization of download wave in high dynamic environment is realized. Due to the particularity of high dynamic environment, only capture or tracking can achieve fast and accurate synchronization [7]. Therefore, in the actual communication system, open-loop carrier synchronization and closed-loop carrier synchronization are often needed to work together [8]. The high dynamic communication scene is processed according to the characteristics of high-speed tracking and real-time synchronization of closed-loop carrier. The acquisition part is responsible for the rough estimation of carrier phase deviation, frequency offset and frequency deviation change rate, and then roughly estimates the estimated parameters, and then compensates the estimated parameters to reduce the tracking range and tracking time of the next tracking module; the tracking part is responsible for tracking carrier residual phase deviation, frequency deviation and frequency deviation rate, realizing continuous and accurate tracking of unknown parameters and reliable demodulation of received signal. Therefore, in the future, it is necessary to combine turbo and LDPC coding methods and make use of the characteristics of encoding and decoding to achieve a high-speed, accurate and reliable carrier synchronization scheme.

LDPC Coding and Decoding Algorithm of Satellite Communication

2.2

95

LDPC Coding and Decoding Algorithm for Satellite Communication

Satellite communication is composed of one or more satellites, providing connections between users or between users and gateway stations. With the development of satellite payload, space segment not only has simple relay function, but also has on-board switching and on-board processing functions. Xingyuan station is mainly responsible for information transmission, exchange and management; Satellite control center is mainly used for monitoring communication satellites; Network control includes resource allocation, payload management and network performance monitoring. The user part includes handheld, vehicle, ship, airborne and other forms of user terminals. Kalman filter algorithm has been widely used in signal processing because its estimation accuracy is at least second-order term, it does not need to deduce and calculate Jacobian matrix, and its computational complexity is the same as that of extended Kalman filter (EKF). The autocorrelation algorithm based on time-domain delay is to calculate the autocorrelation between the received signal and the delayed signal, get the delayed autocorrelation function, and then carry out FFT transformation or other processing on the function to solve the frequency offset and the first-order rate of change. Due to the delayed autocorrelation processing, this algorithm will greatly reduce the signal-to-noise ratio, which makes the performance of the algorithm deteriorate in the case of low signal-to-noise ratio. LDPC code is based on the original information, according to certain rules to add a certain amount of redundant information. The receiver can use this redundant information to check and correct errors. At present, the commonly used channel codes are turbo code, Reed Solomon code, LDPC code and so on. These codes have their own characteristics in coding gain, decoding delay and hardware complexity, which are suitable for different scenarios. Among them, LDPC code is one of the most widely used and best performance coding technologies in the field of wireless communication. 2.3

Spread Spectrum Technology

Under the condition of Gaussian white noise, the channel capacity C is shown in Eq. (1), where B is the signal transmission bandwidth, s is the signal power, and N is the noise power:   S C ¼ Blog2 1 þ N

ð1Þ

According to Eq. (1), when the signal-to-noise ratio is constant, the channel capacity is proportional to the channel transmission bandwidth; at the same time, when the signal transmission bandwidth is fixed, there is a positive correlation between channel capacity and SNR. In addition, in the study of anti-jamming performance of wireless communication, kojelnikov defines the information transmission error probability, as shown in Eq. (2), where Powj is the error probability, 0n is the noise power spectral density, and E is the signal energy per bit:

96

X. Wu et al.

E Powj ¼ f ð Þ Nij

ð2Þ

In spread spectrum technology, there is an important concept, namely spread spectrum gain PG, which is an important parameter to measure the improvement of signal-to-noise ratio of spread spectrum technology. Its definition is shown in Eq. (3): Gp ¼

SNRout So ¼ SNRin Si

ð3Þ

The spread spectrum gain, that is, the improvement of signal-to-noise ratio, is directly proportional to the expansion multiple of signal bandwidth before and after spread spectrum. Therefore, the spread spectrum gain PG of DSSS can be expressed by Eq. (4): Gp ¼

Bc Bb

ð4Þ

3 Experience 3.1

Extraction of Experimental Objects

Considering the coherent demodulation of convolutional coded modulation system, Viterbi based maximum likelihood decoding algorithm is adopted. When Viterbi decoding is performed at the receiver, there is a surviving path in each state each time. When using PLL structure for carrier tracking, the traditional system structure only uses one PLL structure for carrier tracking, which can not meet the requirements of high-precision carrier parameter tracking. It is assumed that the length of the information symbol in the modulated data frame is n and the length of the pilot symbol is L. Convolutional code coding and QPSK modulation are used. After the signal enters the Gaussian white noise channel, the normalized carrier frequency offset and normalized frequency offset change rate are added. 3.2

Experimental Analysis

Carrier tracking of PSP technology is Viterbi decoding algorithm based on convolutional code. Firstly, the principle and characteristics of convolutional code and Viterbi demodulation are introduced, and their decoding characteristics are analyzed. A carrier tracking algorithm based on PSP technology is proposed. Block code is to divide the input information sequence into k blocks, and then add some redundant codes to form n, K sub block codes. The block coding is extended, so that the extra redundant codewords are not only related to the information codewords of the group, but also related to the information codewords of the previous group, so as to play a better role in verification. Convolutional code divides the input information sequence into k

LDPC Coding and Decoding Algorithm of Satellite Communication

97

segments, which is not enough to make up for 0. Then it is input into the discrete linear system with the maximum time delay of m through series parallel transformation. Then, the N codewords from the discrete linear system are sent to the channel through the parallel serial transformation to complete the channel coding. We call these codes n, K, m convolutional codes.

4 Discussion 4.1

Tracking Performance of Second Order DPLL

For different time values, the performance of DPLL frequency offset estimation is almost different. This shows that there is no necessary relationship between the parameter and the value in the proposed method, as shown in Table 1. Table 1. Influence of pre detection integration time on DPLL transient response T = 0.1 ms T = 0.5 ms True frequency offset Iterations 12 4 2 Frequency offset | Hz 18 8 3

It can be seen from the above that the iteration number T = 0.1 ms is 12 ms, t = 0.5 ms is 4 ms, and the real frequency offset is 2 ms; The estimated frequency offset t = 0.1 ms is 12 ms, t = 0.5 ms is 4 ms, and the real frequency offset is 2 ms. The results are shown in Fig. 1.

Fig. 1. Influence of pre detection integration time on DPLL transient response

It can be seen from the above that the time value of iteration times t = 0.1 ms, t = 0.5 ms, and the development trend of real frequency offset value are decreasing.

98

4.2

X. Wu et al.

Resource Cost Comparison of Multi Rate LDPC Encoder

The multi rate integrated architecture based on the maximum check matrix (minimum bit rate) can realize the compatible design of multiple codes only by using the hardware resources equivalent to a single encoder based on the minimum bit rate coding, and the external “encoding type” (ENC) can be switched online by using the type switch, which is flexible to use. The trend of resource cost comparison between lookup table and register is shown in Table 2. Table 2. Comparison of resource overhead of multi rate LDPC coder Bit rate1|6 Bit rate1|3 Bit rate2|3 Look up table LUTS 1000 1300 1500 Register regs 1100 1500 1800

From the above, the resource cost of the LUTS code rate 1  6 editor is 1000, the resource cost of the code rate 1  3 editor is 1300, and the resource cost of the 2 | 3 editor is 1500; the resource cost of register regs code rate 1  6 editor is 1100, and the resource cost of code rate 1  3 editor is 1500, and that of 2  3 editor is 1800. The specific results are shown in Fig. 2.

Fig. 2. Comparison of resource overhead of multi rate LDPC coder

It can be seen from the above that the resource cost of the rate editor of register regs is higher than that of LUTS. Look up table LUTS, register regs and two kinds of encoders can be selected.

LDPC Coding and Decoding Algorithm of Satellite Communication

99

5 Conclusion In satellite borne regenerative satellite communication, the satellite repeater is less burdened when the satellite processor does not decode and recode the channel. However, the classical LDPC MP algorithm can not accurately estimate the post probabilities of bits, which will lead to the performance of LDPC. The algorithm sets the initial channel likelihood information of the data loss position in the receiving sequence to 0. The iterative decoding algorithm is used to solve the problem of channel noise interference and data loss, and reduces the requirement of carrier synchronization.

References 1. Xu, Z., et al.: Joint shuffled scheduling decoding algorithm for DP-LDPC codes-based JSCC systems. IEEE Wireless Comm. Lett. 57(9), 99–105 (2019) 2. Ye, C., Gu, F., Shi, W.: Research on the communication signal processing and parallel decoding algorithm based on neural network. Revista de la Facultad de Ingenieria 32(3), 255– 263 (2017) 3. Sun, S., Lin, M.: High performance and hardware friendly NB_LDPC decoder of EMS algorithm. Inte. J. Innov. Compu. Info. Cont. Ijicic 13(3), 1047–1054 (2017) 4. Wei, H., Banihashemi, A.H.: An iterative check polytope projection algorithm for ADMMbased LP decoding of LDPC codes. IEEE Commun. Lett. 33(7), 97–102 (2017) 5. Xia, Q., et al.: A fast approximate check polytope projection algorithm for ADMM decoding of LDPC codes. IEEE Commun. Lett. 23(9), 1520–1523 (2019) 6. Wijekoon, V.B., et al.: A novel graph expansion and a decoding algorithm for NB-LDPC codes. IEEE Trans. Commun. 68(3), 1358–1369 (2020) 7. Ming, Z., et al.: Serial decoding algorithm with continuous backtracking for LDPC convolutional codes. Wireless Personal Commun. 107(4), 1823–1833 (2019) 8. Kakde, S., et al.: Implementation of decoding architecture for LDPC code using a layered Min-Sum algorithm. IIUM Eng. J. 18(2), 128–136 (2017)

Analysis of Optimal Orbital Capture Maneuver Strategy to LEO Spacecraft Based on Space Debris Collision Hong Ma1,3(&), Tao Xi1,3, Shouming Sun1,3, Jianmin Zhou4, Qiancheng Wang1,3, Zhenguo Xiao1,3, and Wei Zhang2,3 1

2

State Key Laboratory of Astronautic Dynamics, Xi’an 710043, China [email protected] State Key Laboratory of Spacecraft In-Orbit Fault Diagnosis and Maintenance, Xi’an 710043, China 3 Xi’an Satellite Control Center, Xi’an 710043, China 4 Beijing Institute of Control Engineering, No.104, Youyi Road, Beijing 100194, China

Abstract. With the increasing of multi-satellites launching mission, contradiction between operationally orbit maneuver deployment of spacecraft and collision avoidance of space target becomes prominently. In this article, a spacecraft maneuver strategy analysis for the design optimization of orbit capture with collision avoidance is presented. With the linear relationship between maneuvering pulse and spacecraft relative motion is formulated, the analytical model of trace drift nearest distance and crossing space debris time with velocity pulses is proposed. Considering the constraints of TT&C resources, orbit determination accuracy calibration, attitude and orbit control propulsion system calibration, maneuver time limit, space target avoidance and so on, the optimization process of optimal orbital maneuver pulse is designed based on synthetic error estimation. An orbit altitude capture maneuver of triple-LEO Spacecraft launching with one vehicle is designed, the simulation shows that the optimal first maneuver pulse and orbit acquisition strategy are presented, which has good engineering operability and applicability. Keywords: Optimal maneuver strategy Error estimation

 Orbital capture  Relative motion 

1 Introduction With the increase of human space activities has accelerated the deterioration of the outer space environment [1]. According to the data statistics, the number of space targets above 10 cm in-orbit has reached to 34000 [2, 3] by 2021. Optimization design of satellite and rocket separation [5] and safety analysis of launch window [6], which can ensure the safety of space collision after spacecraft launching, according to the relevant policies of space debris mitigation strategy [4], However, most spacecraft need to quickly carry out initial orbit acquisition maneuver after launching, with large amount of maneuver and short interval time. The space target recognition system ©

The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 J. Sun et al. (Eds.): Signal and Information Processing, Networking and Computers, LNEE 917, pp. 100–110, 2023. https://doi.org/10.1007/978-981-19-3387-5_12

Analysis of Optimal Orbital Capture Maneuver Strategy to LEO

101

cannot quickly distinguish the targets entering out space at the same time, and the catalog information update is slow, which brings some difficulties to discuss the acquisition maneuver strategy. At present, the research on collision avoidance mainly focuses on three aspects: one is the calculation method research, such as explicit analytical expression of collision probability [7], error transfer model [8, 9], etc. The second is transforming the collision avoidance problem to optimal control problem with complex constraints. Intelligent optimization methods such as artificial potential field [10], Gauss pseudo spectral method [11], convex optimization and distributed cooperative control [12, 13] are adopted. By optimizing energy transfer index and Hamilton function, the value and direction of maneuver speed increment under optimal conditions, i.e. optimal control rate, are given, so as to obtain active collision avoidance control strategy based on energy optimization [14]. The third is to study the motion law of spacecraft under the perturbation of earth’s non spherical gravity and atmospheric drag [15] and the evolution characteristics of constellation configuration [16]. Combined with engineering application experience, which focuses on shorten the formation configuration reconstruction time after evasive maneuver, and reduce the impact on the original constellation and group formation configuration design [17–20]. The above research results have made outstanding contributions to ensure the operation safety of in-orbit spacecraft, but there are few literatures focus on collision avoidance of multi batch maneuver pulse with the initial orbital capture maneuver strategy to LEO spacecraft [21]. Considering engineering application constraints of the spacecraft, for the optimization design of in-plane maneuvering pulse in this article, based on the analytical orbital maneuvering dynamics model, the analytical models of maneuvering speed increment, trace drift distance and collision time are established to quickly evaluate the safety of a maneuvering quantity. Based on the estimation of maneuver impulse increment error, the implementation steps and process of optimal orbit acquisition maneuver are presented. Especially, the selectivity of the first maneuver impulse in the maneuver strategy is analyzed, which can quickly calculate the multi impulse strategy to realize the initial orbit capture, and is conducive to the practical application of engineering.

2 Constrained Conditions in Engineering Generally, most of the initial orbit acquisition maneuvers of spacecraft in engineering implementation are in-plane maneuvers. In-plane maneuver is to change only the semi major axis, eccentricity and perigee angle by applying tangential impulse to the spacecraft. Due to the intersection between the maneuvering orbit and original orbit, the probability of collision with the spacecraft in the original orbit is very high. Figure 1 shows the problem of collision avoidance in-plane maneuver. One launching of three spacecraft named as A, B, C, respectively, enters the orbit, and separated in turn. After spacecraft A maneuver, it emerges four intersection points with B and C orbital plane, respectively, which makes it difficult for every maneuver to avoid path planning. In order to achieve initial orbit acquisition maneuver target and ensure the collision avoidance safety of spacecraft, the constraints should be considered as follows.

102

H. Ma et al.

(1) Pulse maneuver direction and time: The tangential or normal pulse should be used as far as possible with single maximum pulse time, radial pulse should not be used; C

Intersection Point 2 Intersection Point 1

B

Initial orbit A Orbital Maneuver Point A Entry Transfer Orbit

Intersection Point 3

Intersection Point 4 Transfer Orbit

Fig. 1. In-plane orbital maneuver and intersection point

(2) maneuvering position: perigee and apogee of orbit or using AEW dual pulse joint maneuver can capture the eccentricity or out-plane parameters of target orbit; the first maneuver or orbit engine switching and main and standby fuel tank switching, the maneuver should be implemented in the arc of measurement equipment tracking, so as to stop the orbit maneuver in case of emergency; (3) Efficiency calibration constraint of orbit engine: Reasonably arrange the times of TT & C to evaluate the maneuvering effect; (4) Total time of maneuver: completed as soon as possible after launch with maneuver orbit determination accuracy and constraints of TT & C conditions; (5) Parameters to be optimized: pulse maneuver time and pulse quantity and direction;

3 Calculation Model 3.1

Velocity Impulse and Relative Trace Drift Model

With the assumption of in-plane impulse thrust, the velocity increment direction is along the tangential direction, and the instantaneous geocentric distance remains unchanged. For small eccentricity orbits, Gauss equation can be simplified as: 

Da ¼ 2aDVU =V De ¼ Da=a ¼ 2DVU =V

ð1Þ

Analysis of Optimal Orbital Capture Maneuver Strategy to LEO

103

Suppose the radial distance as r1 , the velocity as V1 , impulse acceleration DVU is applied along the tangential direction, the semi major axis of the orbit after maneuver a2 , can be conclude by using the vis viva equation as follows. Where l ¼ 3.9860044e + 14, rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 2a1  r1 a2 ¼ r1 l=2l  r1 ð lð Þ þ DVU Þ2 r1 a1

ð2Þ

The variation of the semi major axis and the average running angular velocity after maneuver as follows. Da ¼ a2  a1

ð3Þ

Dn ¼ 1:5nDa=a1

ð4Þ

After t times elapses, the distance of the spacecraft relative to the maneuver point in the trace direction can be approximately expressed as: L ¼ a1 Dnt

ð5Þ

The relationship between tangential velocity increment DVU and drift distance L is as formula (6). rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi   pffiffiffiffiffiffiffiffiffiffi 2a1  r1 Þ þ DVU Þ2  a1 t L ¼ _l0  t ¼ 1:5nDat ¼ 1:5 l=a1 r1 l=2l  r1 ð lð r 1 a1 ð6Þ According to formula (1), the tangential velocity increment will also have a certain effect on the eccentricity, that is, in the tangential and radial relative motion plane of the target satellite, there will be a similar phenomenon of flying around. The radius of short and long half axis of the flying ellipse are RA and RB . RA ¼ aDe;

RB ¼ 2aDe ¼ 2RA

ð7Þ

Formula (7) characterizes the size of the flying ellipse in the tangential radial plane. As Fig. 2, although there is a height difference between collision target in the radial direction after the tangential maneuver, the short radius of the fly around ellipse and eccentricities are different. At the same time, if the initial eccentricities of two spacecraft is similar, the semi major axis of maneuvering spacecraft determines the radius of the fly around ellipse.

104

H. Ma et al. 4

x 10 2 1.5

relative motion direction of chaser spacecraft 1

target spacecraft

0.5

radial distance /m

0 -0.5 -1

A

similar fly optimal position of target spacecraft around orbit

-1.5 -1

-0.5

0

0.5

1

1.5

tangential distance/m

2

2.5

3 4

x 10

Fig. 2. Schematic diagram of relative motion trajectory

The relative movement speed and distance of spacecraft along the track after maneuver are formulas (8) and (9) respectively. L ¼ _l  t rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi   pffiffiffiffiffiffiffiffiffiffi _l ¼ 1:5 l=a1 r1 l=2l  r1 ð lð2a1  r1 Þ þ DVU Þ2  a1 r 1 a1 rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 2a1  r1  2RA DVU = lð Þ r 1 a1

ð8Þ

ð9Þ

According to the analytic formulas (8) and (9), the time for the chaser spacecraft to pass through the target spacecraft after maneuver can be quickly estimated, and the position of the chaser spacecraft in the middle of the relative trajectory is the safest, as shown in the green triangle in Fig. 2, which indicates that the maneuvering control quantity of the chaser spacecraft needs to be optimized. 3.2

The Closest Distance and the Closest Approach Moment

Suppose the current time as t0 ¼ 0, then at time t, the relative velocity vector between the two targets is vr , and the relative position vector is: qðtÞ ¼ q þ vr t

ð10Þ

where q is the distance between two targets, the square of the distance between two targets is q2 ðtÞ¼qðtÞ  qðtÞ, the time derivative is obtained, and the derivative is 0, then there is d 2 d ½q ðtÞ¼ ½ðq þ vr tÞ  ðq þ vr tÞ ¼ 2q  vr þ 2vr  vr t ¼ 0 dt dt

ð11Þ

Analysis of Optimal Orbital Capture Maneuver Strategy to LEO

105

The closest distance and point approach moment is as follows. tcpa ¼ q  vr =vr  vr

ð12Þ

In this case, the relative position vector of the two targets is: qðtcpa Þ ¼ q þ vr tcpa ¼ q þ ðq  vr =vr  vr Þvr

ð13Þ

By multiplying the relative velocity vector on both sides of formula (14), we can get qðtcpa Þ  vr ¼ q  vr q  vr ¼ 0

ð14Þ

The above formula shows that when the distance between two targets reaches the minimum, the point multiplication of their relative position vector q and relative velocity vector vr is 0, and the two vectors are perpendicular to each other [7].

4 Initial Orbit Capture Maneuver Strategy Planning Process On the basis of defining the orbit acquisition control target, the maneuver strategy planning is carried out according to the following process: Step 1. Orbit determination and calculating TT &C Resources; Step 2. Update space environment parameters, space catalogue target parameters, including space environment parameters of F107, AP and the atmospheric drag Cd and light pressure coefficient Cr retrieved from the orbit determination results of each tracking arc segment; Step 3. Carried out the collision safety assessment of space cataloging targets before orbital maneuver, according to the collision avoidance assessment criteria, and calculated the approach distance d and approach time; Step 4. Evaluated safety status of the group spacecraft, according to the orbit determination results. Analyzed the intra group collision risk caused by satellite rocket separation error. If it is safe, go to step 5; Otherwise, go to step 8 and execute the emergency avoidance process; Step 5. Planning maneuver pulse sequence and strategy. Specially for the first pulse, generated error trajectory according to estimated error of speed increment for each maneuver, carried out the collision safety analysis of space target in preset time, to ensure the collision safety in the first and second maneuvers at least. The subsequent collision risk can be adjusted after the previous maneuver effect evaluation. Step 6. Evaluate the impulse effect and collision safety according to the orbit determination results after maneuver. If it is safe, go to step 7; else, go to step 8; Step 7. Update maneuver strategy, adjust the error estimation range of the next sequence of maneuver according to the thrust calibration error after maneuver, and go to step 9 after collision safety calculation; Step 8. Apply for the emergency TT&C resources, adjust the maneuver strategy combined with maneuver target, and ensure the safe evasion. Step 9. Repeat Step 1*8 until the maneuver acquisition target was completed. Figure 3 shows the process diagram of each link in the implementation process of initial orbit capture maneuver.

106

H. Ma et al. Analysis of Maneuver Objectives and Constraints

Apply for TT & C Resources Maneuvering Impulse Planning Orbital maneuver strategy plan

Orbit Determination Maneuver Effect Evaluation

Initial orbit capture Maneuver

Space Safety Assessment Before Maneuver

Efficiency Calibration of Orbit Control Engine

First Maneuver Safety Assessment Based on Error Estimation Analysis of maneuvering safety

Safety Assessment of Residual Impulse Maneuver Nearest Distance

Approach time

Approach Angle

Collision Probability

Thrust error

Calibration results

Space Safety Assessment After Maneuver

Fig. 3. Implementation process diagram of initial spacecraft orbit capture maneuver

5 Simulation and Result 5.1

Simulation Parameter Setting

Suppose three spacecrafts launched with one rocket to complete the initial orbit acquisition maneuver task. Three spacecrafts are respectively recorded as ABC, with the same area mass ratio, tp is the orbital period of A, and the orbital maneuver is carried out at 1.5 days after launching. Collision warning safety threshold is relative distance between targets d > 2 km. The simulation conditions are set in Table 1. 5.2

Safety Analysis of Intra Group Collision Before Orbital Maneuver

After 1.5 days in orbit, before spacecraft A maneuver, the distance of AB is 73.4 km, the distance of AC is 325.8 km, the distance of BC is 252.1 km. The space distance assessment is safe and the initial orbit capture maneuver can be carried out normally. Table 1. Simulation conditions set Orbit acquisition target, Precision and maneuver time interval conditions B C A Uplift orbit altitude + 21.7 km

< v0x ¼ v cosðex Þ < rtx0 ¼ v00x t  gt2 =2 < v0tx ¼ v00x  gt v00y ¼ v0 cosðey Þ ; rty0 ¼ v00y t ; v0ty ¼ v00y : 0 > : 0 0 : 0 vtz ¼ v00z rtz ¼ v0z t v0z ¼ v0 cosðez Þ

ð11Þ

The position vector of the sub-satellite relative to its mother satellite in the geocentric equatorial rectangular coordinate system is: 2

3 rtx0 0 ~ rtm ¼ R3 R2 R1 4 rty0 5 rtz0

ð12Þ

Then the position vector of the sub-satellite in the geocentric equatorial rectangular coordinate system is:

116

Y. An et al. 0 0 ~ ¼~ rtg þ~ rtm rtg

ð13Þ

It is acquired that the position vector and velocity vector of the sub-satellite at t 0 and ~ v0tg . moment are respectively ~ rtg According to the orbital energy conservation integral equation 0 lrtg 2 1 v2 ¼ lð  Þ; a0 ¼ r a 2l  v0tg2

ð14Þ

0 rtg ~ v0tg , then the semi-parameter of the The moment of momentum vector ~ h0tg ¼ ~ elliptical orbit 0

p0tg

h2tg 0 ;e ¼ ¼ l

rffiffiffiffiffiffiffiffiffiffiffiffiffiffi p0tg 1 0 a

ð15Þ

It can be seen that the semi-parameter p0tg and semi-major axis a0 of the sub-satellite codetermine the orbital eccentricity. From the elliptic orbit equation r¼

p 0 ; e0 cos h0t ¼ p0tg =rtg 1 1 þ e cos h

ð16Þ

can be obtained, and by vector scalar product  sffiffiffiffiffiffi  0 0 rffiffiffi 0 ~ ~ v r p tg tg l tg ~ re sin h; e0 sin h0t ¼ r ~ v¼ 0 p rtg l

ð17Þ

can be obtained. The true anomaly ht’ can be uniquely determined by the above two formulas. According to the formula h0 tan t ¼ 2

rffiffiffiffiffiffiffiffiffiffiffiffi 1 þ e0 Et0 ; Mt0 ¼ Et0  e0 sin Et0 tan 1  e0 2

ð18Þ

Let the unit vectors of the geocentric equatorial rectangular coordinate system OXYZ along the axis X, axis Y and axis Z be ~ix ; ~iy ; ~iz respectively, the normal unit vector of 0 the sub-satellite orbit is ~i0h , and the moment of momentum vector ~ h0tg ¼ ~ rtg ~ v0tg points to the normal direction of the orbit, so: ~i0h ¼ ~ h0tg =h0tg ¼ sin i0 sin X0 i0x  sin i0 cos X0~ i0 y þ cos i0~ i0 z Then it can be obtained:

ð19Þ

Research on a Centralized Thrust Deployment Method for Space

! ~ h0tgx 0 i ¼ arccosð~ X ¼ arctan  h0tgz =h0tg Þ; 0  i0  p 0 ~ h 0

117

ð20Þ

tgy

Let the unit vector with the earth center pointing to the ascending node of the subsatellite orbit be ~i0X , then: ~i0X ¼ cos X0~ix þ sin X0~iy

ð21Þ

The argument of perigee x’ can be uniquely determined by   ~i0  ~   ~ r 0 tg r 0 tg i0 h ~i0X  ~ cos x0 þ h0t ¼ X 0 ; sin x0 þ h0t ¼ 0 rtg rtg

ð22Þ

5 Satellite Swarm Communication Strategy The “sub-mother” and “sub-sub” inter-satellite link communication transmission strategy are adopted to complete the TT&C of the entire swarm. In the swarm, the mother satellite can communicate with the sub-satellite, and the sub-satellite can communicate with another sub-satellite. The mother and the subsatellites set up satellite-ground TT&C channels respectively. The information of the sub-satellites distributed around the mother satellite can be sent to the main satellite through inter-satellite links, and the main satellite can download through the satelliteground TT&C link. Based on the establishment of the sub-satellite network topology, the relative fixed beam is used to establish the inter-satellite link, and effective dynamic routing is used to select the destination satellites to complete the inter-satellite measurement communication and the download of TT&C information.

6 Conclusion The deployment of distributed satellite swarm in the construction of “hitch-hiker satellite” is flexible and easy to be implemented because there is no restriction from serial launch. According to the optimized swarm construction, this paper put forward a centralized thrust deployment method for space satellite swarms, which establishes the space satellite swarm geometrical construction and assembly inter-satellite link transmission strategy through the special architecture design of the swarm assembly and attitude stability control calculation, so that to satisfy the future needs of TT&C for space satellite swarm deployment with high efficiency and convenience.

118

Y. An et al.

References 1. Wang, X.Y., et al.: Satellite constellation design with genetic algorithms based on system performance. J. Sys. Eng. Elec. 27(2), 379–385 (2016) 2. Inalhan, G., Tillerson, M., Jonathan, P.H.: Relative dynamics and control of spacecraft formation in eccentric orbits. J. Guid. Control. Dyn. 25(1), 48–59 (2002) 3. Liu, F., Lu, S., Sun, Y.: Guidance and control technology of spacecraft on elliptical orbit. Springer Science and Business Media LLC (2019) 4. Maral: Orbits and Related Issues. Satellite Communications Systems (2009)

Research on Navigation and Communication Fusion Technology Based on Integrated PNT System Zhao Wenwen(&), Zhu Xiaofeng, and Hu Jintao China Academy of Space Technology (Xi’an), Xi’an 710100, China [email protected] Abstract. The integrated development of navigation and communication systems has gradually become the mainstream of the development of space-based network information systems. Based on the comprehensive PNT (navigation, positioning, timing) thinking, this article investigates the current status of the integration and development of domestic and foreign communication systems, and summarizes the development status of my country’s communication and navigation systems. On this basis, the system architecture and several methods for the integration of navigation and communication systems are proposed, and combined with the construction of my country’s navigation and communication systems, the main problems and application prospects are analyzed, in order to promote the integration of conduction under the comprehensive PNT system in my country. The application provides development ideas. Keywords: Integrated PNT system  Navigation  Communication Navigation and communication fusion  Information network



1 Introduction In the context of the rapid development of a new generation of information technology, the development of the integration of space and earth navigation and communication remote integration has become the mainstream of the development of space-based network information systems [1]. The so-called “comprehensive PNT” means that multiple PNT information sources based on different principles are controlled by a cloud platform, multi-sensor high integration, and multi-source data fusion to generate a unified time and space reference, and has anti-interference, anti-deception, robustness, availability, Continuous and reliable PNT service information [2]. Based on the integrated PNT (navigation, positioning, timing) thinking, this paper investigates the current status of the integration and development of navigation and communication systems at home and abroad, and summarizes the development status of my country’s communication and navigation systems. On this basis, the system architecture and several methods for the integration of navigation and communication systems are proposed, and combined with the construction of my country’s navigation and communication systems, the main problems and application prospects are analyzed, in order to promote the integration of navigation and communication under the comprehensive PNT system in my country. The application provides development ideas. ©

The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 J. Sun et al. (Eds.): Signal and Information Processing, Networking and Computers, LNEE 917, pp. 119–127, 2023. https://doi.org/10.1007/978-981-19-3387-5_14

120

Z. Wenwen et al.

2 Current Status at Home and Abroad 2.1

International Status

The United States started early in the integration and application of navigation and communication systems. The United States is developing low-orbit communication satellites on a global scale, and plans to launch thousands of satellites to achieve global communications [3]. Space X’s Star-1ink project plans to launch 12,000 satellites to achieve global communications. In terms of navigation, the United States has always been in an advantageous position. It has established a global GPS navigation network and promoted GPS modernization to continue to provide global users with high precision and low cost. The cost of navigation services [4]. The U.S. military has formulated a national PNT system construction route before 2025, using communication, autonomous navigation and PNT integration as a way to build a military PNT system that meets the conditions of future confrontation. In terms of satellite communications, the European Space Agency has launched a special program of “Pre-Research on Communications Systems” (ARTES), which has strong competitiveness in the global market; in terms of satellite navigation, Europe has completed and initially completed the EGNOS and Galileo systems, and has global capabilities. Navigation service capabilities. Other countries have also carried out the construction of communication and navigation systems. In summary, the developed countries represented by the United States have all made certain achievements in the construction of navigation and communication satellite networks.Countries all over the world attach great importance to the integration and development of satellite networks such as communications and navigation to promote fast, efficient and reliable spatial information intelligent services on a global scale. 2.2

Domestic Current Situation

After more than 50 years of unremitting efforts, China has established a solid foundation for navigation and communication satellites and related application industries, and has basically completed the construction of a certain scale of communication and navigation constellations.With the rapid development of satellite communication technology, the unique advantages of the low-orbit satellite communication system have been highlighted [6], and my country has also begun to develop its own ``Hongyan'' low-orbit communication satellite system. In terms of navigation satellites, under the vigorous promotion of the country, the construction of a national comprehensive PNT system with the Beidou global navigation system as the core has seen initial results [2]. In July 2020, Beidou Global Navigation System officially realized global constellation networking and launched global navigation services. China’s comprehensive PNT integration should focus on Beidou navigation satellite information as much as possible, and be based on the coordinate reference and time reference corresponding to the Beidou global satellite navigation system. Navigation and communication integration has become a new opportunity for the development of satellite navigation systems and communication systems.

Research on Navigation and Communication Fusion Technology

121

3 Navigation and Communication Integration Architecture The key to solving the application problem of remote integration of navigation and communication is to build a network of Unicom’s existing satellite resources to achieve barrier-free and rapid transmission of various types of information. The “five in one” of air), medium orbit, low orbit, ground (sea) surface, underwater, etc. receives various communication and navigation information. The ground uses a number of ground information stations as the basic platform, through a unified standard agreement, undertakes Navigation and communication satellite data aggregation, management, processing and distribution, application and other functions, so as to build a space-toearth integrated navigation, communication and remote integration three-dimensional integrated information network composed of satellites with different orbit heights and several ground-based nodes on the ground. The Beidou global satellite navigation system can connect to the integrated network through the inter-satellite link, and use the coverage capability of the integrated information network to complete the forwarding of the navigation signal to achieve seamless coverage of the navigation signal; by carrying the navigation enhancement load on the low-orbit communication satellite, The global coverage of navigation enhancement information can be realized. Through the integrated information network to complete the deep integration of navigation and communication satellite networks and data resources, real-time services of various space-based information can be realized, and various users can obtain accurate information, high-precision positioning timing and multimedia communication services at any time and any place. From the technical level of navigation and communication fusion, it is divided into signal fusion, information fusion, and hardware fusion. Signal fusion and information fusion will also drive hardware fusion. Here we focus on information fusion and signal fusion. 3.1

Navigation and Communication Information Fusion

In the future navigation equipment, the GNSS (Global Navigation Satellite System) + inertial navigation combination will become a basic standard combination, and the information assistance of the communication system will further improve the performance (precision, sensitivity) of the deep combination. The auxiliary information provided by the inertial navigation and communication system includes: user position, speed, attitude, satellite position, speed, frequency, time and text. The navigation performance with the aid of communication inertial navigation is affected by both the accuracy level of the auxiliary information and the signal characteristics (Tables 1 and 2).

122

Z. Wenwen et al. Table 1. Communication-aided information accuracy level Auxiliary level Rough time assist

Font size and style Time accuracy: ms < Dt < 10 s Precision Ephemeris Initial position accuracy: < 150 km Precise time assist Time accuracy: < ms Precision Ephemeris Initial position accuracy: < 10 km Precise time assist Time accuracy: < ms Text assistance Text assistance Initial position accuracy: < 10 km Time accuracy: < ms Precise time assist Text assistance Text assistance Frequency synchronization Communication frequency synchronization assistance

Table 2. Information accuracy level assisted by inertial device Auxiliary level

Font size and style Accelerometer zero bias (lg) Gyro drift (deg/h) Consumer grade > 1000 > 10 Tactical level 50–1000 0.01–10 Strategic level 1–50 0.0001–0.1

Among the signal characteristics, the main parameters that affect the navigation limit receiving performance are: pseudo code rate, message rate and power ratio. Based on the Beidou global navigation satellite signals: B1C, B2A and B1I, the sensitivity improvement simulation analysis shows that the capture sensitivity can be increased by about 10 dB, and the tracking sensitivity can be increased by about 15 dB (Table 3). Table 3. Sensitivity improvement simulation analysis results Auxiliary level

Capture sensitivity (dBHz) B1C B2A B1I Unassisted 31.2 34.7 34.7 Promotion of Rough time assist 4.6 dB 1.1 dB 3.9 dB Promotion of 5.0 dB 2.2 dB 8.5 dB Precise time assist and Inertial navigation assist 6.2 dB 5. 2dB 9.7 dB Promotion of Precise time assist and Text assistance Promotion of Precise time assist and Text assistance Inertial navigation assist

Tracking sensitivity (dBHz) B1C B2A 27.5 27.5 11.2 dB 9.4 dB

B1I 27.5 1.8 dB

12.4 dB 12.4 dB 12.4 dB

15.5 dB 15.5 dB 15.5 dB

Research on Navigation and Communication Fusion Technology

3.2

123

Navigation and Communication Signal Fusion

In terms of performance indicators, navigation and communication differ considerably, and the performance requirements of both must be combined when designing integrated navigation and communication signals.

1) Iridium's STL signal. The Iridium satellite system is a global mobile personal communication satellite system based on low-orbit satellites, developed by Motorola in the United States, and supports global wireless digital communication. A low-orbit communication satellite system differs from a traditional GNSS system: besides the low satellite orbit, its greatest feature is the natural advantage of a communication satellite—strong communication capability and large data throughput. If a navigation signal is integrated into the communication satellite system, the time needed to receive the full navigation message is reduced, which speeds up the convergence of navigation and positioning; the powerful communication capability provides more auxiliary data and additional redundant observations, further improving positioning performance. Moreover, because of the low orbital altitude, the satellites move quickly and the satellite geometry changes rapidly, making the navigation signals easy to acquire and track; the signal reaches the ground with lower loss, so its received power is higher than that of traditional GNSS signals, and a stronger signal means stronger anti-jamming capability. Therefore, fusing navigation and communication signals on communication satellites can effectively improve satellite navigation performance, and the STL navigation-communication fusion signal is a typical representative [7]. The integration of satellite navigation and communication is of great significance. It places new design requirements on the signal system of the whole constellation, including the choice of carrier frequency band and modulation method, the layout of the fused navigation-communication message, the design of the spreading code, and the signal evaluation method, all of which require a degree of innovation and improvement. The overall design idea of an integrated satellite navigation and communication system is shown in Fig. 1.

Fig. 1. Navigation and communication fusion signal design


Iridium's STL (Satellite Time and Location) signal is a burst signal designed for navigation within the framework of a communication signal. A single STL burst has a total duration of 90 ms and a bandwidth of 31.5 kHz. In the time domain it contains continuous-wave markers and pseudo-random spreading-code segments: the continuous-wave markers are used for signal-type identification and coarse synchronization, while the pseudo-random spreading-code segments are used for accurate time-delay measurement. Bursts are broadcast once every 1.4 s (Fig. 2).

Fig. 2. STL burst signal

2) 5G PRS signal. The 5G standard defines a dedicated PRS (Positioning Reference Signal) to provide positioning and navigation services. In practice, however, its application is constrained by factors such as market demand, construction cost and achievable positioning accuracy, and it has not yet been widely deployed. The PRS broadcast on the 5G downlink for deeply coupled navigation-communication integration is used for delay measurement; its bandwidth (1.4, 3, 5, 10, 15, 20 MHz) and broadcast period (160, 320, 640, 1280 ms) can be configured by the system within a certain range. Once the system frequency and bandwidth are selected, the waveform fusion must be designed around the specific needs of navigation and communication. For the allocation of frequency bandwidth, time and power, three modes are possible (Fig. 3):
a) Time-division navigation signal: broadcast short navigation bursts periodically;
b) Frequency-division navigation signal: allocate an independent bandwidth for broadcasting the navigation signal;
c) Broadband power-controlled navigation signal: superimpose a wide-bandwidth, low-power navigation signal on the communication signal.

Fig. 3. Broadband power control navigation signal


The three waveform-fusion modes strike a balance between navigation performance and communication performance within a given frequency bandwidth, but each involves some compromise compared with a completely independent navigation signal (Table 4).

Table 4. Navigation and communication signal performance indicators

Indicator | Time-division navigation signal | Frequency-division navigation signal | Power-controlled navigation signal
Time-delay estimation accuracy | low | medium | high
Frequency estimation accuracy | medium | high | high
Carrier-to-noise ratio margin | high | high | medium
Observation type | pseudorange / Doppler (carrier phase requires a longer broadcast period) | pseudorange / Doppler / carrier phase | pseudorange / Doppler / carrier phase
Communication loss | low | medium | low

4 Application Prospects

The navigation and communication integration of Beidou + 5G realizes information fusion between the navigation and communication systems and promotes the formulation and optimization of the standard protocols of the information-fusion layer. As one of the main components of the communication network, 5G completes the transmission and application of spatio-temporal information and provides a radio solution for wide-area indoor positioning; developing unified system interfaces, protocols, base stations and terminal chips would also allow civil systems to be converted to military use in wartime. For indoor applications, the future technical trend is UWB [8] (a radio solution for indoor centimeter-level positioning) combined with vision and inertial devices. UWB positioning technology has been advancing rapidly—Apple intends to integrate UWB into its phone chips—and UWB is expected to become a basic positioning facility similar to Wi-Fi. The integrated development of navigation and communication systems has broad application prospects in many fields. With the advancement of the comprehensive PNT system, the accuracy and timeliness of space-based information will be greatly improved, and its importance in national security, emergency response and disaster relief, and mass-market applications will grow day by day. As space-based information becomes more open, seamless coverage of real-time communication, high-precision navigation and positioning, and other related services has broad room for development, providing real-time situational awareness and intelligent services for target-oriented dynamic monitoring, logistics monitoring, precision agriculture, integrated road-air transportation, smart cities and urban governance.

5 Conclusion

Navigation and communication integration is a complex problem spanning multiple application scenarios, requirements and systems, and it cannot be solved by a single solution. It is necessary to sort out the systems comprehensively, combine typical application scenarios and requirements, and clarify the transformation plan and development blueprint of each specific system. In military applications, navigation and communication fusion can effectively improve the anti-jamming and anti-spoofing capability of the navigation system in future confrontations with strong adversaries. Unlike traditional navigation systems, once navigation and communication are integrated, system security becomes prominent and must be treated as one of the core design elements. From the perspective of national demand and international technological development, the integrated space-ground development of navigation and communication has become the mainstream trend in the development of the comprehensive PNT system, and the construction of the relevant key technologies and infrastructure in China is progressing in an orderly manner. Guided by comprehensive PNT system thinking, and taking the construction of a space-ground integrated information network as an opportunity, the integration of navigation and communication should be promoted to meet future needs and to further accelerate the development of navigation and communication integrated application services under China's comprehensive PNT system.

References
1. Zheng, Z., Xue, Q., Qiu, L., Liu, G., Li, Q.: Discussion on the application of space-earth integrated navigation and communication remote fusion based on network information system thinking. Journal of China Academy of Electronics 15(8), 709–714 (2020)
2. Yang, Y.: Comprehensive PNT system and its key technologies. J. Survey. Mapp. 45(5), 505–510 (2016). https://doi.org/10.11947/j.AGCS.2016.20160127
3. Li, G., et al.: The development status of mainstream communication satellite platforms in the United States. China Aerospace (8), 16–21 (2017)
4. Chang, J., Wang, R., Hu, T.: Cooperative management mode and characteristics of major foreign satellite navigation systems. Satellite Applications 5, 56–59 (2019)
5. Li, D.: On the military-civilian deep integration of navigation, communication and remote integration of aerospace information real-time intelligent service system. Netcom JM Convergence 12, 12–15 (2018)


6. Ruming, C.: The first lecture on low- and mid-orbit satellite communications and its basic characteristics. Telecommunications Science 07, 44–48 (1997)
7. Xie, Z.: Research on Iridium STL Signal System and Performance. Huazhong University of Science and Technology (2019)
8. Xiong, Y.: Research and Application of Positioning Technology Based on Ultra-wideband. University of Electronic Science and Technology of China, Chengdu (2015)

A Solar Array on Orbit Output Power Prediction Method for Satellite

Jiang Dongsheng(&), Peng Mei, Yang Dong, Jing Yuanliang, and Du Qing

Beijing Institute of Spacecraft System Engineering of China Academy of Space Technology, Beijing 100094, China
[email protected]

Abstract. A satellite solar array power prediction tool and method are presented in this paper. The prediction tool includes a satellite orbit analysis module and a model for analyzing the incidence angle of sunlight on the solar panels. Using the satellite orbit parameters, the series-parallel layout of the solar cell strings and the degradation characteristics of the solar cells, the output power of the solar wings is predicted. The tool can support power and load balance analysis, mission scheduling and automated operations management.

Keywords: Solar array · Power prediction · On-orbit simulation

1 Introduction

Satellite solar array output power prediction is a complex calculation that requires multiple parameters, such as temperature, current and voltage degradation ratios and the light incidence angle, to build the governing equations. A solar array output power prediction method helps engineers size the solar array and schedule missions against the load working sequence. Computer-based analysis is more accurate than conventional manual calculation and saves a great deal of design and calculation time. From the satellite orbit parameters and attitude variables, the angle between the solar array and the sunlight can be obtained, and from it the solar array output power.

2 Satellite Orbit and Solar Array Light Incidence Analysis

2.1 Orbit Analysis

For the prediction of solar array output power, the basic elements of the calculation are the orbital parameters. The sunlight incidence angle on the solar array differs for satellites in different types of orbit, and even for one satellite in the same orbit it varies with time. It also affects other important parameters, such as the temperature of the solar panel, which influence the prediction accuracy. The software therefore contains algorithms for analyzing the satellite position and velocity, the solar panel temperature, the incidence angle of sunlight on the solar array, and the satellite attitude; these are the key technologies for satellite solar array output power prediction. A design example is used to verify the accuracy of the orbital dynamics sub-module; its results are compared with those of STK below.

(Author: Jiang Dongsheng (1974–), senior engineer, spacecraft electrical power systems.)

Test example:

• Simulation start time: 12:00 on July 1, 2013
• Semi-major axis: 6778.14e3 m
• Eccentricity: 0
• Orbital inclination: 42.5°
• RAAN: 0°
• Mean anomaly: 0
• Satellite attitude: earth-oriented, three-axis stabilized
• Solar array configuration: mounted on the Y axis; the solar array can track the sun (Table 1)

Table 1. Calculated deviation of the example

Deviation | Value
Satellite position maximum deviation on X axis | 250 m
Satellite position maximum deviation on Y axis | 200 m
Satellite position maximum deviation on Z axis | 150 m
Satellite velocity maximum deviation on X axis | 0.25 m/s
Satellite velocity maximum deviation on Y axis | 0.2 m/s
Satellite velocity maximum deviation on Z axis | 0.2 m/s
Intersection angle between sunlight and orbit plane | 0.01°
Intersection angle between sunlight and solar array | 0.037°
Intersection angle between solar panel and earth-satellite vector | 0.12°
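For readers reproducing the test case, the two-body relations below (a minimal Python sketch, not part of the paper's tool) give the orbital period and circular-orbit speed implied by the example's semi-major axis; μ is the Earth's gravitational parameter.

    import math

    MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2

    a = 6778.14e3               # semi-major axis from the test example, m
    period = 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH)   # Kepler's third law
    speed = math.sqrt(MU_EARTH / a)                        # circular-orbit speed (e = 0)

    print(f"orbital period ~ {period / 60:.1f} min")   # about 92.6 min
    print(f"orbital speed  ~ {speed / 1000:.2f} km/s") # about 7.67 km/s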

Fig. 1. Intersection angle between sunlight and orbit-plane (unit: degree)


The test results show that the deviations between the orbital dynamics module and the STK software are within the permissible range.

2.2 Solar Array Light Incidence Analysis

Given the satellite attitude and the position in orbit, the solar incidence angle can be computed directly as the angle between the sunlight vector and the normal of the solar panel, taking into account the mounting direction of the solar array and its sun-tracking mode. This method is simple and convenient for solar array power analysis (Figs. 1, 2 and 3).
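The calculation described above reduces to the angle between two unit vectors. The sketch below is a minimal Python illustration (the vector values are placeholders, not data from the paper): given the sunlight direction and the panel normal in the same coordinate frame, the incidence angle and the cosine power factor follow directly.

    import numpy as np

    def sun_incidence_angle(sun_vec, panel_normal):
        """Angle (deg) between the sunlight vector and the solar panel normal."""
        s = np.asarray(sun_vec, dtype=float)
        n = np.asarray(panel_normal, dtype=float)
        cos_theta = np.dot(s, n) / (np.linalg.norm(s) * np.linalg.norm(n))
        return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

    # Placeholder vectors expressed in the same (e.g. orbital) frame
    sun_vec = [0.60, 0.30, 0.74]        # direction from satellite to the Sun
    panel_normal = [0.58, 0.33, 0.75]   # solar panel normal

    theta = sun_incidence_angle(sun_vec, panel_normal)
    power_factor = max(np.cos(np.radians(theta)), 0.0)   # cos(theta) scaling of output
    print(f"incidence angle = {theta:.2f} deg, cos factor = {power_factor:.3f}")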

Fig. 2. Intersection angle between sunlight and solar array (unit: degree)

Fig. 3. Intersection angle between sunlight and earth-satellite vector (unit: degree)

3 Satellite Solar Array Output Power Prediction

3.1 Solar Cell Electricity Generation Theory and Model

The solar cell is a photovoltaic energy conversion device: it relies on the photovoltaic effect of the semiconductor to convert solar energy directly into electrical energy, and is therefore also called a photovoltaic cell (Fig. 4).


Fig. 4. Solar cell current generation principle

The current generated by an illuminated solar cell increases with the light intensity; at constant illumination the solar cell can be regarded as a constant-current source. Because of the contact resistance of the electrodes on the front and back surfaces and the resistivity of the material itself, the load current suffers losses when passing through them; in the DC equivalent circuit their combined effect is represented by a series resistance Rs. At the same time, leakage at the cell edges and through metal bridges formed in micro-cracks and scratches short-circuits part of the current that should flow through the load; this effect is represented by a parallel resistance Rsh. Accordingly, the solar cell equivalent circuit shown in Fig. 5 is usually used in practical applications.

Fig. 5. Solar cell equivalent circuit

In the figure, the solar cell equivalent circuit consists of a current source that generates the photocurrent IL, a forward-biased parallel diode, a series resistance Rs, a parallel resistance Rsh, and a load resistance RL. Rs is the series resistance, composed of the sheet resistance of the diffused top region, the bulk resistance of the cell, the contact resistance between the electrodes and the cell, and the resistance of the metal conductors. The I-V curve of a solar cell is determined by: 1. the photo-generated current IL; 2. the cell temperature T and terminal voltage V; 3. the current ID flowing through the ideal p-n junction inside the cell; 4. the current Ish flowing through the parallel resistance Rsh.


According to Kirchhoff's law and diffusion theory, the equivalent mathematical model based on the DC circuit model in Fig. 5 is shown in Eq. (1):

I = I_L - I_D - I_{sh} = I_L - I_d\left[\exp\!\left(\frac{q(V + I R_s)}{AKT}\right) - 1\right] - \frac{V + I R_s}{R_{sh}}   (1)

where I is the output current of the cell, IL is the photo-generated current, Id is the ideal p-n junction saturation current inside the cell, Rs is the equivalent series resistance, Rsh is the equivalent parallel resistance, V is the load voltage, q is the electron charge (C), K is Boltzmann's constant, T is the absolute temperature (K), and A is the diode factor. For an ideal solar cell the series resistance is zero and the parallel resistance is infinite, so the solar cell DC equivalent model can be simplified to:

I = I_L - I_d\left[\exp\!\left(\frac{q(V + I R_s)}{AKT}\right) - 1\right]   (2)
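Because Eq. (1) is implicit in I (the current appears inside the exponential through I·Rs), it is usually solved numerically. The following is a minimal Python sketch of such a solution by bisection, using assumed, illustrative cell parameters; it is not the paper's implementation.

    import math

    Q = 1.602176634e-19   # electron charge, C
    K = 1.380649e-23      # Boltzmann constant, J/K

    def cell_current(v, i_l, i_d, r_s, r_sh, a, t_kelvin):
        """Solve the implicit single-diode equation (Eq. 1) for I by bisection."""
        def residual(i):
            arg = Q * (v + i * r_s) / (a * K * t_kelvin)
            return i_l - i_d * (math.exp(arg) - 1.0) - (v + i * r_s) / r_sh - i

        lo, hi = -10.0 * i_l, i_l          # residual > 0 at lo, <= 0 at hi
        for _ in range(80):                # 80 halvings give ample precision
            mid = 0.5 * (lo + hi)
            if residual(mid) > 0.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Illustrative (assumed) parameters for a single silicon-like cell
    I_L, I_D = 0.50, 1e-9      # photo current and saturation current, A
    R_S, R_SH = 0.02, 300.0    # series and shunt resistance, ohm
    A, T = 1.3, 298.0          # diode factor and cell temperature, K

    for v in (0.0, 0.3, 0.5, 0.62):
        print(f"V = {v:4.2f} V -> I = {cell_current(v, I_L, I_D, R_S, R_SH, A, T):.4f} A")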

The satellite solar cell circuit adopts a series-parallel connection, as shown in Fig. 6. When building the solar cell circuit model, the voltage and current of the solar array model are accumulated through the series-parallel relationship of the solar cells.

Fig. 6. Solar cell circuit connection diagram

I_o = N_p I_L - N_p I_d\left[\exp\!\left(\frac{q(V_o/N_s + I_o R_s/N_p)}{AKT}\right) - 1\right] - \frac{N_p}{R_{sh}}\left(\frac{V_o}{N_s} + \frac{I_o R_s}{N_p}\right)

where Ns is the number of cells connected in series and Np the number of strings connected in parallel. The simplified equation is:

I_o = N_p I_L - N_p I_d\left[\exp\!\left(\frac{q(V_o/N_s + I_o R_s/N_p)}{AKT}\right) - 1\right]   (3)


According to the equation, the short-circuit current of an array of Ns × Np solar cells is Np·Isc and its open-circuit voltage is Ns·Voc; the maximum-power-point voltage and current can be approximated as Ns·Vmp and Np·Imp, respectively.

3.2 Solar Array On-Orbit Output Power Prediction

The solar array is made up of several solar cells in series-parallel connection, and its overall output characteristics are determined by the characteristics of the individual cells. In line with the requirements of the overall design, the macro-model approach is adopted: the entire solar array is represented by a single-cell model, which is convenient for studying the output performance of the solar array from the perspective of the satellite power system as a whole. The output current and power of the solar cell array can be calculated from the four characteristic parameters of the solar cell (short-circuit current Isc, open-circuit voltage Voc, maximum-power-point current Imp and maximum-power-point voltage Vmp), the bus voltage demand and the numbers of cells in series and parallel. The mathematical model of the solar array, i.e. the I-V curve of the solar cell, can be expressed as:

I = I_{sc}\left(1 - C_1\left\{\exp\!\left[\frac{V}{C_2 V_{oc}}\right] - 1\right\}\right)   (4)

where

C_1 = \left(1 - \frac{I_{mp}}{I_{sc}}\right)\exp\!\left[-\frac{V_{mp}}{C_2 V_{oc}}\right]   (5)

C_2 = \left(\frac{V_{mp}}{V_{oc}} - 1\right)\left[\ln\!\left(1 - \frac{I_{mp}}{I_{sc}}\right)\right]^{-1}   (6)

It can be seen from the above equation that only four characteristic parameters (short-circuit current Isc, open circuit voltage Voc, maximum power point current Imp and maximum power point voltage Vmp) can determine the functional relationship between solar cell output current and voltage (I-V curve). The output power of the satellite solar array is mainly affected by the sun incident angle, the distance factor between the sun and the earth, the temperature of the solar array, and the degradation factor of the solar array. The solar incidence angle and the distance factor between the sun and the earth can be calculated with the orbital factors. Combining the solar radiation energy incident on the solar cell array, the reflected energy of the earth to sunlight, the earth’s radiation energy and the thermal performance parameters of the solar cell array itself, and the thermal balance equation of the solar array, the temperature of the solar array can be estimated. Through the ground radiation test, short-circuit current, open circuit voltage, maximum power point current and maximum power point voltage of the solar cell after irradiation can be obtained. So, the I-V characteristic equation of the solar cell after radiation can be obtained, and the operating conditions of any operating point after radiation can be obtained.
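A minimal Python sketch of Eqs. (4)–(6) is given below; the four characteristic parameters used here are assumed, illustrative values rather than data from the paper.

    import math

    def iv_from_characteristics(v, isc, voc, imp, vmp):
        """Output current at voltage v from the four characteristic parameters, Eqs. (4)-(6)."""
        c2 = (vmp / voc - 1.0) / math.log(1.0 - imp / isc)     # Eq. (6)
        c1 = (1.0 - imp / isc) * math.exp(-vmp / (c2 * voc))   # Eq. (5)
        return isc * (1.0 - c1 * (math.exp(v / (c2 * voc)) - 1.0))  # Eq. (4)

    # Assumed characteristic parameters of one solar-cell string
    ISC, VOC, IMP, VMP = 0.52, 2.70, 0.50, 2.35   # A, V, A, V

    for v in (0.0, 1.0, 2.0, 2.35, 2.6):
        i = iv_from_characteristics(v, ISC, VOC, IMP, VMP)
        print(f"V = {v:4.2f} V -> I = {i:6.3f} A, P = {v * i:6.3f} W")

At V = Vmp the function returns Imp and at V = Voc it returns approximately zero, which is a quick consistency check on Eqs. (5) and (6).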


Combining the operating environment of the satellite, the displacement-damage curve of the solar cell is calculated and converted into voltage and current degradation factors (as shown in Fig. 7).

Fig. 7. The decay factor of current and voltage

It can be seen from the above formulas that the I-V curve of the solar cell array is fully determined by the values of Isc, Voc, Imp and Vmp; substituting the operating-point voltage into the equation gives the operating-point current, and hence the actual output power of the solar array. Assuming the parameters in the ideal AM0 state of the solar array are Isc0, Voc0, Imp0 and Vmp0, the actual on-orbit values Isc, Voc, Imp and Vmp are:

V_{OC} = \left[V_{OC0} + \beta_v (T - 25)\right] K_E^V K_A^V K_M^V K_{uv}^V \cdot N_S   (7)

V_{mp} = \left[V_{mp0} + \beta_v (T - 25)\right] K_E^V K_A^V K_M^V K_{uv}^V \cdot N_S   (8)

I_{SC} = \left[I_{SC0} + \beta_i (T - 25)\right] K_E^I K_A^I K_M^I K_{uv}^I \cos\theta \cdot N_P   (9)

I_{mp} = \left[I_{mp0} + \beta_i (T - 25)\right] K_E^I K_A^I K_M^I K_{uv}^I \cos\theta \cdot N_P   (10)

where βv and βi are the temperature coefficients of voltage and current, respectively; T is the operating temperature; K_E^V and K_E^I are the radiation degradation factors of voltage and current; K_A^V and K_A^I are the combined loss factors of voltage and current; K_M^V and K_M^I are the test error factors of voltage and current; K_uv^V and K_uv^I are the UV radiation degradation factors of voltage and current; and θ is the sun incidence angle. Substituting the values of Isc, Voc, Imp and Vmp into Eqs. (4), (5) and (6) yields the actual I-V curve I = f(V). The actual operating-point voltage of the solar cell array is obtained from:

V_{OP} = V_{bus} + V_{dio} + V_{line}   (11)

where Vop is the actual operating-point voltage of the solar cell array;


Vbus is the bus voltage, Vdio is the isolation diode voltage drop, and Vline is the cable voltage drop. Substituting Vop into I = f(V) gives the actual operating current Iop. Using the on-orbit working conditions of the satellite, the load working sequence, the satellite attitude, the temperature variation of the solar panel and the sunlight incidence angle, the output power and the working process of the solar array are simulated. The algorithm can analyze any period within the satellite's on-orbit lifetime, and the predicted output of the solar array is verified against the telemetry data from the satellite (Fig. 8).
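Putting Eqs. (7)–(11) together, the sketch below (Python; all factor values are assumed placeholders, not mission data) corrects the AM0 characteristic parameters for temperature, degradation and incidence angle, rebuilds the I-V curve of the array, and evaluates the output power at the operating-point voltage of Eq. (11).

    import math

    def iv_current(v, isc, voc, imp, vmp):
        """I-V model of Eqs. (4)-(6)."""
        c2 = (vmp / voc - 1.0) / math.log(1.0 - imp / isc)
        c1 = (1.0 - imp / isc) * math.exp(-vmp / (c2 * voc))
        return isc * (1.0 - c1 * (math.exp(v / (c2 * voc)) - 1.0))

    # AM0 parameters of a single cell (assumed values)
    ISC0, VOC0, IMP0, VMP0 = 0.52, 2.70, 0.50, 2.35
    BETA_I, BETA_V = 3.0e-4, -6.0e-3        # temperature coefficients, A/degC and V/degC
    T = 60.0                                # operating temperature, degC
    KV = 0.95 * 0.98 * 0.99 * 0.99          # product of voltage factors K_E^V K_A^V K_M^V K_uv^V
    KI = 0.97 * 0.98 * 0.99 * 0.99          # product of current factors K_E^I K_A^I K_M^I K_uv^I
    THETA = math.radians(10.0)              # sun incidence angle
    NS, NP = 30, 40                         # cells in series / strings in parallel

    # Eqs. (7)-(10): on-orbit characteristic parameters of the whole array
    voc = (VOC0 + BETA_V * (T - 25.0)) * KV * NS
    vmp = (VMP0 + BETA_V * (T - 25.0)) * KV * NS
    isc = (ISC0 + BETA_I * (T - 25.0)) * KI * math.cos(THETA) * NP
    imp = (IMP0 + BETA_I * (T - 25.0)) * KI * math.cos(THETA) * NP

    # Eq. (11): operating-point voltage seen by the array
    V_BUS, V_DIODE, V_LINE = 42.0, 0.8, 0.5
    v_op = V_BUS + V_DIODE + V_LINE

    i_op = iv_current(v_op, isc, voc, imp, vmp)
    print(f"array operating point: {v_op:.1f} V, {i_op:.2f} A, {v_op * i_op:.1f} W")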

Fig. 8. Analysis results of solar arrays power output (unit: A)

4 Conclusion

This paper presents a satellite solar array output power prediction method. By building a satellite orbital dynamics algorithm and a solar array light-incidence analysis model, and establishing the mathematical model of the solar array together with its design-constraint model, the method calculates the output power of the solar array. A calculation example compared against on-orbit telemetry data verifies the correctness of the analysis algorithm.

Acknowledgement. The method and simulation were sponsored by the national key development plan topic "Industry application and promotion of multidisciplinary system analysis CAE platform", under agreement number 2019YFB1706504. Thanks to team leader Dr. Liu Zhigang for his support.

References
1. Shijun, M.: Satellite Power Technology. China Astronautics Press, Beijing (2001). (in Chinese)
2. Tan, W., Hu, J.: Spacecraft System Engineering. China Science and Technology Press, Beijing (2009)


3. Guoxin, L.: Spacecraft Power System Technique Summary. China Astronautics Press, Beijing (2008). (in Chinese)
4. Patel, M.R.: Spacecraft Power System. Han, B., Chen, Q., Cui, X. (translators). China Astronautics Press, Beijing (2010). (in Chinese)
5. Rauschenbach, H.S.: Si Solar Array Design Handbook. Xi, Z.J. (translator). China Astronautics Press, Beijing (1987)
6. Wanjuan, Y.: Research of sun synchronization orbit satellite power system design computation method. Chin. Space Sci. Technol. 21(2), 19–25 (2011)
7. Zhang, Y., Jiang, D.S., Liu, Z.: Research on modeling and simulation of satellite electrical power system. Electr. Electron. Aerosp. Technol. Forum 343–348 (2014)

Conception and Prospects of Exploration and Mining Robots on Lunar Base

Zhihui Zhao1,2(&), Wei Zhu1, and Ning Xia1

1 Beijing Institute of Spacecraft System Engineering, No. 104, Youyi Road, Haidian District, Beijing 100094, China
[email protected]
2 Beihang University, No. 104, Youyi Road, Haidian District, Beijing 100094, China

Abstract. Today, the United States, Russia, Europe, and China have all put forward their own plans for lunar bases, and the prelude to the construction of lunar bases as transit stations for deep space exploration has slowly opened. This article surveys and analyzes the lunar base construction plans of various countries and the results obtained in the exploration and sampling of extraterrestrial celestial bodies. On this basis, an exploration and mining robot for the construction of lunar bases is proposed and briefly described, including its engineering goals, conceptual assumptions and key technologies.

Keywords: Deep space exploration · Lunar base · Exploration and sampling · Robot

1 Introduction

As the only natural satellite of the Earth, the Moon is the first stop for mankind's exploration of space. Since 1959, the United States and the Soviet Union have launched probes to explore the Moon, and since the beginning of the 21st century the Moon has remained one of the hotspots of deep space exploration for many countries because of its particular position. The scale of lunar exploration is growing, and various countries have put forward their own lunar base plans. The exploration of lunar resources is the prerequisite for the construction of lunar bases; the exploitation of lunar resources provides raw materials for the infrastructure construction systems and life support systems, and runs through the construction and operation of lunar bases. Up to now, humans have collected samples of extraterrestrial bodies only for scientific research on the regolith, and there is as yet no exploration and mining robot that serves lunar base construction by exploring and mining the lunar soil. This article surveys and analyzes the lunar base construction plans of various countries and the results obtained in the exploration and sampling of extraterrestrial celestial bodies. On this basis, an exploration and mining robot for the construction of lunar bases is proposed and briefly described, including its engineering goals, conceptual assumptions and key technologies.


2 Research Status at Home and Abroad

2.1 United States

As early as 1989, the United States proposed a "trilogy" for conquering space: constructing the "Freedom" space station, establishing a lunar base, and achieving a manned landing on Mars. In the early 1990s, Wendell Mendell of the Houston space center proposed to the White House that a lunar base be built, believing that it was extremely important for supporting further large-scale development in space. In December 2006, NASA announced the "Return to the Moon" plan, whose core goal was to establish a permanent base on the Moon, followed by rovers and a living area capable of providing power and allowing 4 astronauts to stay on the Moon for 180 days. In the exploration and sampling of extraterrestrial celestial bodies, the United States has very rich experience and accumulated technology; see Table 1.

Table 1. The geotechnical exploration of extraterrestrial bodies by the U.S.

Mission | Task type | Mobile system | Sampling type | Sampling method | Exploration load
Surveyor | Soft landing | / | Surface layer | Shovel | Camera, lunar soil hardness and bearing capacity measuring instrument, alpha scatterometer, etc.
Viking | Soft landing | / | Surface layer | Shovel | Cameras, gas chromatography mass spectrometers, X-ray fluorescence spectrometers, etc.
NEAR | Soft landing | / | / | / | Camera, X/a-ray spectrometer, etc.
Mars Pathfinder | Soft landing, patrol | Wheeled | / | / | Camera, alpha proton X-ray spectrometer, etc.
MPL | Soft landing | / | Surface layer | Shovel | Cameras, Martian volatiles and climate kits, etc.
MER | Soft landing, patrol | Wheeled | Rocks | Grind | Camera, alpha particle X-ray and Mössbauer spectrometers, microscopic imager, etc.
Phoenix | Soft landing | / | Surface layer | Shovel | Camera, microscopy/conductivity/electrochemistry analyzer, etc.
MSL | Soft landing, patrol | Wheeled | Surface layer | Shock | Cameras, alpha particle X-ray spectrometers, neutron dynamic reflection detectors, etc.
InSight | Soft landing | / | Deep layer | Drill through | Camera, internal structure seismic experiment, heat flow and physical properties package, etc.
Perseverance | Soft landing, patrol | Wheeled | Surface layer | Drill through | Camera, X-ray photochemical fluorescence spectrometer, etc.

2.2 Russia

Russia proposed its lunar base plan in 2019. It plans to launch Luna 25 for a small polar landing to verify the lunar soft-landing technology needed by subsequent missions and to carry out scientific exploration of the lunar surface regolith; to launch the "Luna-Waterdrop" orbiter (Luna 26) to map the lunar surface completely, probe the surface and subsurface structure, and study the surface chemical composition and hydrogen-rich regions; to launch the Luna 27 and Luna 28 landers to collect low-temperature soil samples at the lunar south pole and return them to Earth; to launch the Luna 29 lander to deliver a lunar rover; to launch the Luna 30 lander to deliver a reusable lunar exploration spacecraft to the surface in support of manned scientific research; to launch the Luna 31 lander to transport a heavy-duty rover of nearly 5 tons carrying the equipment needed for lunar resource development; to launch the Luna 32 lander to deliver a 6-ton heavy module for building a lunar test site; and to launch the Luna 33 orbiter to provide communication and navigation. Russia planned to start establishing its first lunar base in 2034. In the exploration and sampling of extraterrestrial celestial bodies, Russia has considerable experience and accumulated technology: the Luna series of probes launched in the 1970s verified and achieved soft landing on the Moon and carried out deep sampling of the lunar soil by drilling, with exploration payloads mainly including X-ray instruments, alpha ion emitters, and so on.

2.3 Europe

At the 31st Space Symposium, held April 13–16, 2015, ESA Director General Johann-Dietrich Wörner proposed building an international base on the Moon as a successor to the International Space Station, humanity's outpost in space. The plan is to send a lander to the Moon carrying all the equipment the agency would use to build an inflatable base, together with a robot equipped with a 3D printer. Once the inflatable habitat is in place, the robot-borne 3D printer would use lunar dust to build outer walls around the base; the walls would completely cover the base, protecting it and its future human residents from cosmic rays and meteorite impacts. For the exploration and sampling of extraterrestrial celestial bodies, ESA designed the Mars Express probe and developed soft-landing and clamp-type surface sampling techniques; its exploration payloads mainly include cameras, microscopes and X-ray spectrum analyzers.

2.4 China

In 2015, China proposed a plan for follow-up lunar missions to accomplish "reconnaissance, construction and utilization" by 2030: "reconnaissance" — survey the lunar environment and resources; "construction" — establish a long-term basic scientific research platform; "utilization" — verify technologies for exploiting available resources. In the preliminary timetable, China plans to carry out automatic sampling and return from the lunar south pole region with Chang'e-6 before 2030, a comprehensive survey of the environment and resources of the lunar polar region with Chang'e-7, and verification of "3D-printed" structures on the lunar surface with Chang'e-8, in order to build a comprehensive lunar robotic scientific research station for scientific exploration, research experiments and resource utilization technology verification, and to complete technical verification of lunar resource mining and utilization, bio-regenerative life support, and so on. With regard to the exploration and sampling of extraterrestrial celestial bodies, China has verified and realized a number of lunar soil exploration and sampling technologies through its lunar exploration program; see Table 2 for details.

Table 2. The geotechnical exploration of extraterrestrial bodies by China

Mission | Task type | Mobile system | Sampling type | Sampling method | Exploration load
Chang'e-3 | Soft landing, patrol | Wheeled | / | / | Camera, particle-excitation X-ray spectrometer, infrared imaging spectrometer, lunar radar, etc.
Chang'e-4 | Soft landing, patrol | Wheeled | / | / | Cameras, visible and near-infrared spectrometers, lunar radar, etc.
Chang'e-5 | Soft landing | / | Surface and deep layer | Shovel and drill | Cameras, lunar soil structure detector, etc.

2.5 Summary

Since the 1950s, countries led by the United States and the former Soviet Union have carried out exploration of solar system bodies and achieved a series of results in the exploration and sampling of extraterrestrial celestial bodies, laying a solid foundation for regolith exploration and mining and for the construction of lunar bases. From the technical achievements of the geotechnical exploration and sampling of extraterrestrial bodies in various countries, three points can be summarized:
1) Geotechnical exploration and mining robots for extraterrestrial celestial bodies mainly comprise landers for in-situ detection and rovers for mobile detection, with the rovers using wheeled structures.
2) Sampling of extraterrestrial soil is of two types, surface sampling and deep sampling; surface sampling mainly includes shoveling, clamping, percussion and grinding, while deep sampling is mainly drilling.
3) The exploration of extraterrestrial celestial bodies relies mainly on cameras, mass spectrometers, spectrometers and other instruments to detect and analyze the topography and composition of rocks and soils.


3 Conception of the Lunar Base Exploration and Mining Robot

The robot that completes the exploration and mining of lunar resources is called the lunar base exploration and mining robot. It has strong mobility and can explore and mine large quantities of lunar rock and soil according to the different needs of lunar base construction and operation; the collected rock and soil are used by the lunar infrastructure and life support systems. The lunar base exploration and mining robot is a system-integrated probe comprising a robot operating platform, a sampling system, an exploration payload system and other operating devices, and it has strong application value. The concept proposed in this article is intended to promote the development of spacecraft engineering in the field of deep space exploration and its integration with lunar base engineering, providing a new idea, and an engineering foundation, for the design of a new type of extraterrestrial rock and soil exploration robot. The following introduces the engineering goals, conceptual assumptions and key technologies of the lunar base exploration and mining robot.

3.1 Engineering Goals

The construction of the lunar base is divided into three phases. The first phase is unmanned—a robot base—carrying out the initial construction of the lunar base; in the second phase a lunar landing crew is dispatched and humans and robots jointly build the base until the infrastructure needed to sustain life is complete; in the third phase scientists are stationed on the Moon in rotation to carry out scientific research and to expand the base. The construction of the lunar base is a huge project consisting mainly of three parts: first the mining of lunar soil and rock, then infrastructure construction, and finally the establishment of a life support system. Robots play an indispensable role at every stage, mainly completing the three tasks of exploration, mining and construction. Among these, the exploration of lunar resources is the prerequisite for lunar base construction, and the exploitation of lunar resources provides the raw materials for the infrastructure construction and life support systems, running through the construction and operation of the base. Based on the lunar base requirements, current deep space exploration engineering experience and the technical characteristics of robots, the following engineering goals are identified:
1) Capable of high-reliability soft landing and high-performance mobility.
2) Realize fine exploration and large-capacity acquisition of lunar rock and soil.
3) Adapt to lunar geotechnical exploration and mining operations in a long-duration, high-intensity and harsh environment.

3.2 Conceptual Assumptions

The lunar base exploration and mining probe consists of an orbiter and the lunar base exploration and mining robot. The orbiter carries the robot from Earth orbit to lunar orbit via the Earth–Moon transfer orbit; the robot then separates from the orbiter, soft-lands on the lunar surface, and carries out the subsequent lunar geotechnical exploration and mining (Fig. 1).

Fig. 1. Conceptual assumptions of lunar base exploration robot

1) Soft landing. The lunar base exploration and mining robot soft-lands near the lunar base site. The solar wings deploy for charging; after energy storage is complete, the mobility mechanism is deployed and released to its working position, the landing buffer mechanism is retracted, and the mobility mechanism contacts the lunar surface, giving the robot the conditions for mobile driving.
2) Mobile driving. The robot moves to the predetermined rock-and-soil collection point. If several robots operate near the lunar base, they can communicate with each other, plan the collection area in advance according to ground instructions, and travel cooperatively as a group with unified planning of routes and sampling tasks.
3) Exploring. After arriving at the area to be sampled, the lunar soil exploration payload detects and analyzes the composition of the soil in the target area and confirms that the soil to be collected has the required composition, compactness and particle size, and that sampling is feasible and safe.
4) Geotechnical sampling. Before sampling, the robot makes preparations: the solar wing is retracted to a dust-proof state, the landing buffer mechanism is reused as a sampling support mechanism and extended to the lunar surface to protect and stabilize the robot during sampling, and the mobility mechanism is lifted off the surface to its sampling-avoidance position so that it remains in a safe, dust-proof state. After these preparations, the rock-and-soil sampling mechanism extends and begins excavation and sampling, dumping the samples into the rock-and-soil storage box.
5) Sample transportation. When collection is complete, the mobility mechanism is redeployed and the sampling support mechanism retracted so that the mobility mechanism again contacts the lunar surface and the robot can drive. The robot then travels to the designated lunar base construction area and hands the collected rock and soil over to the relevant engineering equipment for base construction.
6) Continued geotechnical sampling. After the collected material has been delivered to the designated location, the robot sets out again for the next collection task. If energy needs to be replenished, it can charge at a designated location at the lunar base, or deploy its solar wings and charge on the spot.

3.3 Key Technologies and Suggestions

For follow-up research on lunar base exploration and mining robots, the key technologies that need to be overcome include the following.
1) Design of the robot operating platform for the lunar base. The exploration and exploitation of lunar resources is the prerequisite for lunar base construction, and the first problem to be solved is the in-situ acquisition of lunar resources. The scientific goals of current deep space probes are the detection of extraterrestrial bodies and the verification of engineering technology. Drawing on current deep space exploration engineering experience, it is necessary to research and design a lunar base robot operating platform that is highly reliable, dust-proof, intelligently autonomous and supplied with high power, so as to adapt to complex lunar geotechnical exploration scenarios and meet the high-intensity, long-duration needs of lunar base construction.
2) Design of a high-reliability, large-capacity lunar geotechnical sampling device. In the extraterrestrial exploration missions implemented so far, the sampling methods have mainly been surface regolith scooping, deep regolith drilling, and rock grinding. Because previous deep space missions served scientific research, the required sample quantities were limited, so the sampling mechanisms were designed with small volumes and small sample capacities, which do not meet the needs of lunar base construction and operation. Chang'e-5 has successfully verified the lunar-surface excavation sampling, lofting and sample storage methods; on this basis it is recommended to design an excavation sampling device and a sample storage device suited to lunar base construction.


3) Integrated landing and mobility design. So far, the lunar and Mars probes launched by mankind have separated the lander and the rover: the lander completes the landing on the planetary surface, and the rover patrols and explores it, with the rover's mobility system adopting a wheeled structure, which has the advantages of simple mechanics and easy control. For lunar base exploration and mining robots, it is necessary to study and design an integrated operating platform with both landing and mobility functions, reusing the landing platform as the mobile platform, integrating the resources of the whole vehicle and freeing more resources for lunar geotechnical sampling operations. In addition, a wheel-leg mobility system is recommended: it can cope with complex terrain, offers high mobility, and can be folded during sampling to avoid the lunar dust raised by excavation and reduce its impact on the mechanisms.

4 Conclusion

The construction of lunar bases, and deep space exploration based on them, is one of the important development trends of future deep space exploration. The United States, Russia, Europe, and China have all proposed their own lunar base plans. The exploration and exploitation of in-situ lunar resources is the prerequisite for lunar base construction, and a lunar geotechnical exploration and mining robot suited to that construction is therefore key. Mankind already has rich experience in the detection of extraterrestrial bodies and in engineering technology verification, and a series of achievements in the exploration and sampling of extraterrestrial soil has laid a solid foundation for lunar soil exploration and mining and for lunar base construction. The construction of lunar bases and the design and engineering verification of the related exploration and mining robots are still in their infancy; further exploration and research, prototype development and test verification are needed. The concept of a lunar base exploration and mining robot proposed in this paper fully considers the needs of lunar base construction, proposes engineering goals and conceptual ideas, and sorts out key technologies and related suggestions, providing ideas and directions for subsequent studies; the research and development of the related technologies will play a significant role in the field of deep space exploration.

References
1. Official website of NASA [EB/OL]. https://www.nasa.gov/
2. Official website of ESA [EB/OL]. http://www.esa.int/
3. Official website of JAXA [EB/OL]. https://www.jaxa.jp/
4. Official website of the RFSA [EB/OL]. www.roscosmos.ru/


5. Ye, P., Zou, L.Y.: Development and prospects of China's deep space exploration field. Int. Space 478, 4–10 (2018)
6. Ding, L.Y., Xu, J.: Challenges and research progress of lunar construction engineering. Manned Space Flight 25(3), 277–285 (2019)
7. Zuo, Y., Qi, J.F.: Research progress of in-situ construction technology of lunar facilities. Aerosp. Overall Technol. 3(4), 63–70 (2019)
8. Zang, C.F., Liangliang, H.L.L.: A preliminary study on the lunar robot system for manned lunar exploration tasks. Manned Space Flight 25(5), 1–11 (2019)

Analysis of Orbit Determination Accuracy of GEO Satellite Navigation Data

Xiusong Ye1,2, Hong Ma1,2, Yang Yang1,2, Yuancang Cheng1,2, and Xing Liu1,2(&)

1 State Key Laboratory of Astronautic Dynamics, Xi'an 710043, China
[email protected]
2 Xi'an Satellite Control Center, Xi'an 710043, China

Abstract. To address the rank deficiency of the orbit determination equations caused by the weak navigation signals, poor signal geometry and small number of visible satellites at a GEO satellite, a weighted least-squares orbit determination algorithm with additional constraints is proposed based on the relatively geostationary character of a GEO satellite. The algorithm improves the success rate of orbit determination from GEO satellite navigation signals when fewer than 4 navigation satellites are visible and the measurement quality is poor, and shortens the receiver initialization time. Based on the strong radial constraint and weak lateral constraint, and the strong lateral and longitudinal constraints, of the ground-based orbit measurement data, the feasibility of the algorithm is verified by a space-ground heterogeneous data fusion orbit determination test. The results show that the two kinds of orbit determination give consistent accuracy: the root mean square error of total position in the overlapping track arc is better than 5 m, and that of total velocity is better than 2 mm/s. It is concluded that GNSS can effectively support orbit determination of high-orbit spacecraft, providing technical support for future GNSS orbit determination missions.

Keywords: GEO satellite · High-orbit GNSS orbit determination · Weighted least square method · Orbit determination accuracy

1 Introduction

The technology of low earth orbit (LEO) satellite orbit determination using high-precision GNSS measurement data is now mature, and the orbit determination accuracy can reach the centimeter level [1]. For geostationary orbit (GEO) and high earth orbit (HEO) satellites, using GNSS measurement data for orbit determination can simplify the ground equipment and improve the orbit determination accuracy. However, when the spacecraft orbit is higher than the GNSS satellites, the "leakage signal" of the GNSS transmitting antennas must be used for measurement. Since this leakage signal is weak and the operating distance is large, higher requirements are placed on the sensitivity, dynamics and visibility of the GNSS receiver. Therefore, the use of GNSS for GEO and HEO satellite navigation is still at the experimental stage [2, 3].


In recent years, NASA has carried out experiments with the AMSAT OSCAR-40 satellite in a highly elliptical orbit with a perigee of 1000 km and an apogee of 58800 km, verifying the feasibility of orbit determination with the Global Positioning System (GPS); the single-point positioning error was about 100 km [4, 5]. ESA participated in the development of the C/A-code receiver TOPSTAR 3000, with four antennas and 24 channels, and developed enhanced Kalman-filter-based software that can perform real-time orbit determination under poor observation geometry and rapidly changing clock error, supporting autonomous navigation in LEO, MEO and HEO [6]. ESA has also developed a simulation software kit for GEO and HEO satellite orbit determination based on GNSS measurements; simulation analysis shows that the position accuracy of GEO and HEO orbit determination using GNSS data can reach 15 m and 20 m, respectively [7]. At present, ground-based orbit determination remains the main method for GEO and HEO satellites because of its high reliability and stable measurement accuracy, but its accuracy is generally around 100 m, which increasingly fails to meet user requirements for GEO and HEO orbit accuracy. For example, the United States TDRSS requires its own orbit determination accuracy to be better than 10 m in order to track the meter-level orbits of LEO user satellites, and if an orbital deep space repeater station (ODSRS) were established in synchronous orbit, its positioning accuracy would need to be 2 m [8]. In principle, a space-based orbit determination system—in particular high-orbit GNSS—can provide a high-precision autonomous navigation service, effectively relax the requirements on station geometry, equipment performance and tracking arcs of the ground-based system, and serve as its backup while allowing fused data processing to further improve GEO orbit determination accuracy. What order of magnitude can the orbit determination accuracy of high-orbit GNSS reach, and how can it be evaluated in engineering applications where higher-precision measurement means are lacking? These are the questions studied in this article.

2 Orbit Determination of GEO Satellite Navigation Signal

Because the GNSS receiver on a GEO satellite receives the GNSS signals "leaked" past the Earth through the side lobes, the number of visible navigation satellites is small, their geometry is poor, and the received signals are weak, so both the quality of the navigation data and the stability of the navigation algorithm must be considered when GNSS is used for orbit determination. The accuracy of GNSS orbit determination depends not only on the measurement error but also on the geometry of the received signals: the influence of measurement error on the orbit determination results is magnified by poor satellite geometry. A further difficulty in this environment is that the signal quality changes rapidly as the receiver moves, so signals often become so weak that lock is lost and reacquisition is needed [9, 10].
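Since the influence of measurement error is scaled by the satellite geometry, a dilution-of-precision check is a natural companion to this discussion. The short Python sketch below (illustrative geometry only, not data from the paper) computes GDOP/PDOP from the same kind of geometry matrix used later in Eq. (3).

    import numpy as np

    def dops(unit_los):
        """Geometric/position dilution of precision from receiver->satellite unit vectors."""
        u = np.asarray(unit_los, dtype=float)
        G = np.hstack([-u, np.ones((u.shape[0], 1))])     # same geometry matrix form as Eq. (3)
        Q = np.linalg.inv(G.T @ G)                        # unweighted covariance factor
        gdop = float(np.sqrt(np.trace(Q)))
        pdop = float(np.sqrt(np.trace(Q[:3, :3])))
        return gdop, pdop

    # A GEO receiver tracking four side-lobe signals clustered around the Earth's limb
    clustered = [[0.99, 0.10, 0.05], [0.98, -0.12, 0.08],
                 [0.97, 0.05, -0.20], [0.99, -0.05, -0.10]]
    gdop, pdop = dops(clustered)
    print(f"clustered geometry: GDOP = {gdop:.1f}, PDOP = {pdop:.1f}")  # large values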


Because of the poor availability, quality and geometry of the signals, the normal equations are often ill-conditioned. In LEO satellite navigation the solution can be obtained by simple parameter estimation techniques such as ordinary least squares, without adjusting the measurement weights according to signal quality [11, 12]. For GEO and other high-orbit satellites, however, an appropriate measurement variance model and corresponding weights must be adopted in the navigation solution, with the measurement noise estimated individually for each observation. The weighted least square method with additional constraints avoids the ill-conditioning of the normal equations and can also be applied when fewer than 4 satellites are visible and the measurement quality is poor.

2.1 Weighted Least Square Method with Additional Constraints

Suppose the pseudo-range observation equation of the i-th GNSS satellite to the flight tester is

\rho^i = \tilde{\rho}^i + c\,\delta t - c\,\delta t^i + \delta\rho_{rel} + \delta\rho_{ant} + \delta\rho^i_{ant} + \varepsilon_\rho   (1)

where ρ^i is the pseudo-range measurement value, ρ̃^i is the geometric distance from the i-th GNSS satellite to the flight tester, δt is the clock error of the receiver, δt^i is the clock error of the i-th GNSS satellite, δρ_rel is the relativistic effect correction, δρ_ant is the phase-center correction of the receiver antenna, δρ^i_ant is the phase-center correction of the transmitter antenna, and ε_ρ is the pseudo-range observation noise. Since the Earth-side-lobe "leaky" GNSS signal is not affected by tropospheric and ionospheric refraction, neutral-atmosphere and ionospheric delays are not included on the right-hand side of (1). In addition, compared with a ground receiver, the high-orbit GNSS receiver is not affected by solid-earth-tide and ocean-tide station displacement, and the multipath effect is greatly weakened, so multipath is likewise not included in (1) [13, 14]. By simplifying formula (1), the pseudo-range measurement vector model can be expressed as

\Delta\boldsymbol{\rho} = \mathbf{G}\,\Delta\mathbf{x} + \mathbf{e}   (2)

where G is the design or geometric matrix obtained from the satellite direction vectors, and $\varepsilon$ is a vector containing the pseudo-range measurement errors. The vector $\Delta x = [\delta x \;\; \delta b]$ is the offset of the true position and time of the GEO satellite from the linearization point. Because $\Delta\rho$ is contaminated by the unknown random errors in $\varepsilon$, Eq. (2) can be regarded as a stochastic equation, and $\Delta x$ can be determined by parameter estimation techniques. When the weight matrix W is the inverse of the covariance matrix R (see formula (3)), the weighted least squares (WLS) estimate is the best linear unbiased estimate, i.e. it produces the lowest estimation error among all linear estimates. Assuming that $G^T R^{-1} G$ is nonsingular and that there are at least as many independent observations as unknowns, the WLS estimate of $\Delta x$ is

$$\Delta\tilde{x} = (G^T W G)^{-1} G^T W \Delta\rho = (G^T R^{-1} G)^{-1} G^T R^{-1} \Delta\rho \quad (3)$$

The weight matrix W in the WLS estimate of formula (3) weights the measurements according to their relative importance, i.e. their noise level. When W is nonsingular, so is $G^T W G$, provided the satellite direction vectors are independent. The weight matrix W can be expressed as

$$W = R^{-1} = \begin{bmatrix} \sigma_1^2 & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & \sigma_k^2 \end{bmatrix}^{-1} \quad (4)$$

where $\sigma_k^2$ is the variance of the k-th measurement. Classical navigation and positioning requires four or more GPS satellite signals, in which case the above model can be used directly for the positioning solution. In the orbit calculation of a high-orbit satellite, however, fewer than four satellite signals can sometimes be observed at the same time. Therefore, it is necessary to introduce orbit constraint information and use pseudo-acceleration measurements in the solution, which can improve the success rate of the solution and shorten the initialization time of the receiver. For the position calculation, the unknown variables are the three-dimensional coordinates P and the clock error b of the receiver; for the velocity calculation, the unknown variables are the receiver velocity V and the clock drift rate $\Delta f/f$. The pseudo-range and pseudo-range rate measurement models [13] are

$$P_d = \rho + b, \qquad P_v = \dot{\rho} + \frac{\Delta f}{f} \quad (5)$$

where $P_d$ is the measured pseudo-range, $P_v$ is the pseudo-range rate, and $\rho$ is the geometric distance, i.e. $\rho = \|r\| = \|P - P_e\|$; P is the three-dimensional position of the high-orbit receiver, $P_e$ is the position of the GPS satellite, and $\dot{\rho}$ is the derivative of $\rho$ with respect to time. Then

$$\dot{\rho} = u \cdot \dot{r} \quad (6)$$

where $u = \dfrac{P - P_e}{\|P - P_e\|}$, $\dot{r} = v - v_e$, and $v_e$ is the velocity of the GPS satellite.

For a normal orbit, the centripetal acceleration can be introduced using the dynamic information of satellite motion, i.e. the Keplerian motion of the satellite,

$$\ddot{P} = -\mu \frac{P}{\|P\|^3} \quad (7)$$


where $\mu$ is the geocentric gravitational constant. For a near-circular orbit, in which the semi-major axis a and the radius r are approximately equal, further constraint conditions can be introduced through the vis-viva formula,

$$\|v\| - \sqrt{\frac{\mu}{\|P\|}} = 0, \qquad P \cdot v = 0 \quad (8)$$

For GEO orbits, the constraints

$$\|P\| - 42164 \times 10^3 = 0, \qquad P_z = 0, \qquad v_z = 0 \quad (9)$$

can be introduced.
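To make the constrained estimation concrete, the following minimal Python sketch (not from the paper; the array shapes, the way constraints are weighted and the choice of a 4-parameter position/clock state are illustrative assumptions) appends linearized constraint rows such as those of (9) to the pseudo-range equations as heavily weighted pseudo-measurements and then solves the weighted normal equations of (3)-(4).

```python
import numpy as np

def constrained_wls(G, d_rho, sigma, C=None, d_c=None, sigma_c=1e-3):
    """Weighted least squares with additional constraints (sketch).

    G      : (n, 4) geometry matrix [unit line-of-sight components, 1] for n pseudo-ranges
    d_rho  : (n,)   pseudo-range residuals (observed minus computed)
    sigma  : (n,)   pseudo-range standard deviations
    C, d_c : optional linearized constraint rows and residuals, e.g. the position
             constraints ||P|| - 42164e3 = 0 and Pz = 0 of Eq. (9)
    sigma_c: small sigma gives a large weight, i.e. a nearly hard constraint
    """
    A, b, w = G, d_rho, 1.0 / sigma**2
    if C is not None:
        A = np.vstack([G, C])
        b = np.concatenate([d_rho, d_c])
        w = np.concatenate([w, np.full(len(d_c), 1.0 / sigma_c**2)])
    W = np.diag(w)                        # weight matrix, Eq. (4)
    N = A.T @ W @ A                       # normal matrix G^T W G
    return np.linalg.solve(N, A.T @ W @ b)  # WLS estimate, Eq. (3)
```

Appending constraint rows in this way keeps the normal matrix well conditioned even when fewer than four satellites are visible, which is the situation discussed below.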

2.2 Solving Effect

In order to verify the effectiveness of the above algorithm, this paper selects the measured data of three arc segments (20:00:00 on June 2, 2020 to 23:55:00 on June 3, 2020; 20:00:00 on June 3, 2020 to 23:55:00 on June 4, 2020; 20:00:00 on June 4, 2020 to 23:55:00 on June 5, 2020) of a geostationary satellite in a stable flight attitude, and applies the weighted least squares algorithm with additional constraints to the GNSS inter-satellite differential pseudo-range data, using overlapping arc segments of 0.5 days for comparison. The results show that the RMS values of the total position over the two overlapping arc segments are 2.3 m and 1.5 m respectively, and the RMS values of the total velocity are 2.2 mm/s and 1.5 mm/s respectively, as shown in Fig. 1 and Fig. 2.
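The overlap comparison itself is straightforward bookkeeping; a hedged sketch (the epoch alignment and column layout are assumptions, not the authors' tooling) that computes the total-position and total-velocity RMS and maximum differences over the common epochs of two orbit solutions could look like this.

```python
import numpy as np

def overlap_stats(sol_a, sol_b):
    """RMS and maximum of the 3D position and velocity differences of two orbit
    solutions evaluated at identical epochs inside the overlapping arc.

    sol_a, sol_b : arrays of shape (n, 6) holding [x, y, z, vx, vy, vz] per epoch.
    """
    diff = sol_a - sol_b
    pos_err = np.linalg.norm(diff[:, :3], axis=1)  # total position difference per epoch
    vel_err = np.linalg.norm(diff[:, 3:], axis=1)  # total velocity difference per epoch
    rms = lambda e: np.sqrt(np.mean(e**2))
    return rms(pos_err), pos_err.max(), rms(vel_err), vel_err.max()
```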

Fig. 1. Orbit determination residual error of GNSS inter satellite measurement data difference (overlapping arc 1)


Fig. 2. Orbit determination residual error of GNSS inter satellite measurement data difference (overlapping arc 2)


Fig. 3. Number of GNSS visible satellites in a geosynchronous orbit satellite operation cycle

It can be seen from Fig. 3 that in the GEO satellite orbit there are fewer than 4 visible navigation satellites in some time periods, that is, in about 8.7% of the time periods within one orbital cycle. Therefore, the ground weighted least squares algorithm without additional constraints is difficult to apply to GEO satellite navigation and positioning: the stability of the orbit determination is low and the success rate of the solution is relatively low. Using the weighted least squares algorithm with additional constraints, a solution can still be obtained with only two visible satellites when the circular-orbit constraints are applied, and for a GEO orbit a solution can be obtained with a single visible satellite.
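The 8.7% figure is simply the fraction of epochs with fewer than four usable satellites; a small sketch of that calculation is given below, where the visibility series is a made-up placeholder rather than flight data.

```python
import numpy as np

def fraction_below(n_visible, threshold=4):
    """Fraction of epochs in one orbital period with fewer than `threshold` visible satellites."""
    n_visible = np.asarray(n_visible)
    return np.count_nonzero(n_visible < threshold) / n_visible.size

# Placeholder visibility counts sampled over one GEO period (illustrative only)
counts = np.random.default_rng(0).integers(2, 9, size=2880)
print(f"epochs with fewer than 4 satellites: {fraction_below(counts):.1%}")
```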


3 Orbit Determination of Heterogeneous Data Fusion Based on Space and Ground-Based Observation

The ground-based ranging data impose a strong radial constraint but a weak lateral constraint on the orbit, while the GNSS inter-satellite differential pseudo-range data impose a weak radial constraint but a strong lateral constraint [8]. Therefore, by fusing GNSS pseudo-range and ground-based ranging data, the orbit determination accuracy of space- and ground-based data fusion is further analyzed. At the same time, considering the lack of higher-precision measurement means, the joint orbit determination results of the fused measurement data can be used as an external standard for verifying the orbit determination accuracy of the GNSS differential carrier phase measurement data. The heterogeneous data used in this work are ground-based measurement data and GNSS differential pseudo-range measurement data. The data arc is 8 days long, from 20:00:00 on June 2, 2020 to 20:00:00 on June 10, 2020. The sampling intervals of the GNSS differential pseudo-range measurement data and the ground-based measurement data are 32 s and 1 s respectively. The arc distribution of the ground-based measurement data is shown in Fig. 4, in which the vertical axis is the number of ground-based stations distributed in different areas and the horizontal axis is the arc length of the ground-based observations.

Fig. 4. Tracking arc length and distribution of a geosynchronous satellite tracked by traditional ground-based stations

For meter-level orbit determination accuracy, the precise orbit determination arc of a GEO satellite usually does not exceed 5 days, mainly because the relative magnitude of the ratio of the perturbing forces to the central gravity must be kept to about $10^{-9}$. The magnitude of the perturbing forces and their selection are closely related to the numerical integration of the equations of motion and the variational equations, as well as to the orbit computation capability; that is, the relative error of the perturbing forces and their cumulative effect over the orbit determination arc must be guaranteed to meet the relative orbit determination accuracy requirements [8]. Thus, this paper selects an orbit determination arc of 3 days; the specific dynamical model of the precise orbit determination is shown in Table 1 below.

Table 1. Multi source data fusion orbit determination mechanical model

Gravity field of the earth: JGM3 gravitational field model
Third body gravity: The sun, the moon
Earth tide: Considered
Tide: Considered
Solar pressure: Reflection coefficient 0.2, reflection area 27.4 m2, satellite mass 3000 kg
Relativistic effect: Considered
Atmospheric resistance: Unconsidered
Empirical force model: Two parameters in the ACR direction
Integrator: Step size 5 s, Adams order 7

The arc length of each orbit determination in the following tests is 72 h, and the overlapping (cross) arc segment used for comparison is 24 h. The comparison results of the cross arc segments are shown in Table 2. It can be seen that, when the quality of the measured data is normal, the quality of the orbit determination results depends strongly on the distribution of the measured data and on whether the tracking arcs of the stations are uniform. Since cross arc segments 1-2 and 2-3 are tracked continuously by only a single station, with the other two or three stations contributing only short tracking arcs within the orbit determination arc, the maximum position deviation reaches 161 m and the maximum velocity deviation 0.0115 m/s. Compared with orbit determination using only ground-based measurement data, when the ground-based data are fused with the GNSS data the RMS of the total position in the overlapping arc is less than 5 m and the RMS of the total velocity is less than 2 mm/s. The reasons why the fusion mode greatly improves the orbit determination accuracy are as follows: first, the accuracy of the GNSS inter-satellite differential pseudo-range measurements is relatively high; second, the GNSS inter-satellite differential pseudo-range data are continuous and evenly distributed; third, the fused measurements impose strong constraints on the orbit in both the radial and lateral directions. Comparing Table 2 with the results of Sect. 2, it can be seen that the accuracy of orbit determination using fused ground-based and GNSS inter-satellite differential pseudo-range measurement data is equivalent to that using GNSS inter-satellite differential carrier phase measurement data alone.

Table 2. Comparison results of one day overlapping arcs

Orbit determination data source | Overlapping arc | Position RMS (m) | Position Max (m) | Velocity RMS (m/s) | Velocity Max (m/s)
Ground-based | Arc 1-Arc 2 | 106.23 | 143.74 | 0.0077 | 0.0111
Ground-based | Arc 2-Arc 3 | 113.29 | 161.00 | 0.0082 | 0.0115
Ground-based | Arc 3-Arc 4 | 20.09 | 29.50 | 0.0013 | 0.0020
Ground-based | Arc 4-Arc 5 | 54.20 | 74.84 | 0.0039 | 0.0055
Ground-based | Arc 5-Arc 6 | 83.31 | 118.76 | 0.0059 | 0.0087
Ground-based + GNSS | Arc 1-Arc 2 | 2.33 | 3.75 | 0.000141 | 0.000245
Ground-based + GNSS | Arc 2-Arc 3 | 2.09 | 4.41 | 0.000173 | 0.000245
Ground-based + GNSS | Arc 3-Arc 4 | 3.26 | 6.03 | 0.000173 | 0.000332
Ground-based + GNSS | Arc 4-Arc 5 | 2.65 | 4.17 | 0.000173 | 0.000245
Ground-based + GNSS | Arc 5-Arc 6 | 2.33 | 3.75 | 0.000141 | 0.000245

4 Summary

Compared with the increasingly mature GNSS orbit determination technology for LEO spacecraft, GNSS orbit determination for high-orbit spacecraft still suffers from fewer visible satellites, a poor geometric dilution of precision and a low carrier-to-noise ratio of the received signals. To cope with these difficulties, this paper applies a weighted least squares algorithm with additional orbit constraints: the instantaneous pseudo-acceleration is obtained by short-time pseudo-range rate differencing, and the unknown position, velocity, clock error and clock drift are solved together, so as to improve the success rate of the solution and shorten the initialization time of the receiver. The results show that the root mean square errors of the total position over the two overlapping arcs are 2.3 m and 1.5 m respectively, and the root mean square errors of the total velocity are 2.2 mm/s and 1.5 mm/s respectively. This paper also fuses GNSS inter-satellite differential pseudo-range and ground-based ranging data, which further strengthens the constraints on the orbit and improves the orbit determination accuracy. The calculation results show that, in a one-day overlapping arc, the root mean square error of the total position is less than 5 m and the root mean square error of the total velocity is less than 2 mm/s. The accuracy of orbit determination using GNSS inter-satellite differential pseudo-range data fused with ground-based ranging data is similar to that using only inter-satellite single-frequency pseudo-range data with additional constraints. The effectiveness of GNSS in supporting orbit determination of high-orbit spacecraft is thus verified.

References

1. Peng, D.J., Wu, B.: The impact of GPS ephemeris on the accuracy of precise orbit determination for LEO using GPS. Acta Astron. Sinica 49(4), 434–443 (2008)
2. Meng, Y., Fan, S., Li, G., Gao, W.: Orbit determination of medium-high earth orbital satellite using GNSS crosslink ranging observations. Geomatics Inform. Sci. Wuhan Univ. 39(4), 445–449 (2014)
3. Wang, W., et al.: Research and simulation of orbit determination for geostationary satellite based on GNSS. Acta Geodaetica et Cartographica Sinica 40(5), 6–10 (2011)
4. Moreau, M., Bauer, F., Carpenter, J., Davis, E., Jackson, L.: Preliminary results of the GPS flight experiment on the high earth orbit AMSAT-OSCAR 40 spacecraft. In: 25th Annual AAS Guidance and Control Conference, pp. 1–15. Breckenridge, CO, 6–10 Feb 2002
5. Moreau, M., Davis, P., Carpenter, J.: Results from the GPS flight experiment on the high earth orbit AMSAT OSCAR-40 spacecraft. In: ION GPS 2002 Conference, pp. 1–12. Portland (2002)
6. Cristian, M., Laurichesse, D.: Real-time GEO orbit determination using TOPSTAR 3000 GPS receiver. In: ION GPS 2000, pp. 1985–1994. Salt Lake City (2000)
7. Lorga, J., et al.: GNSS sensor for autonomous orbit determination. In: 23rd International Technical Meeting of the Satellite Division of the Institute of Navigation, pp. 2717–2731. Portland, OR, 21–24 Sept 2010
8. Du, L.: A Study on the Precise Orbit Determination of Geostationary Satellites. The PLA Information Engineering University, ZhengZhou (2006)
9. Qin, H.L., Liang, M.M.: Research on positioning of high earth orbital satellite using GNSS. Chin. J. Space Sci. 28(4), 316–325 (2008)
10. Sun, Z.Y., Wang, X.L.: GNSS satellite visibility in high orbit environment and DOP analysis. Aero Weaponry 1, 18–27 (2017)
11. Lai, G.F., et al.: Weak GNSS signal tracking technology. J. Military Commun. Technol. 31(4), 77–93 (2010)
12. Zhao, Y.Z.: Positioning of high earth orbit satellite using GPS/Beidou combined system. Chin. J. Space Sci. 36(1), 77–82 (2016). (in Chinese)
13. Liu, H.Y., Wang, H.N.: Orbit determination of satellite on the middle-high earth orbit based on GPS. Chin. J. Space Sci. 25(4), 293–297 (2005)
14. Gao, Y., et al.: GNSS receiver techniques based on high earth orbit spacecraft. Chin. Space Sci. Technol. 37(3), 101–109 (2017)

Variable Bandwidth Digital Channelization Algorithm and Engineering Realization

Chen Dan1,2(&), Wang Xiaotao3, Zhang Qi1, Shi Min1, and Wang Xin1

1 State Key Laboratory of Astronomic Dynamics, Xi'an 710043, China
[email protected]
2 Xi'an Satellite Control Center, Xi'an 710043, China
3 The 38th Research Institute of China Electronics Technology Group Corporation, Hefei 230088, China

Abstract. According to the design requirements of multi-user, multi-rate satellite modems, this paper proposes a digital channelization synthesis and decomposition algorithm that adapts to the number of users and to signals of different bandwidths, and implements it on a hardware platform. By adjusting the working frequencies and decimating the DFT results, signals of a variable number of users and variable bandwidth can be decomposed; conversely, by zero-padding the DFT inputs, signals of a variable number of users and variable bandwidth can be synthesized. Keywords: Variable rate · Digital channelization · Digital channel decomposer

1 Introduction

Digital channel synthesizers and splitters are widely used in various terrestrial wireless communication systems and satellite communication systems. In a frequency-division multiplexing communication satellite system, each user works in a different frequency band of the satellite channel. After the satellite payload receives the multi-user signals, the baseband multi-carrier broadband signal is obtained through digital processing [1]. The multi-carrier signal is decomposed by digital channelization, the satellite exchanges the signals of the users after they have been moved to baseband, and the exchanged signals are then sent to the digital channel synthesizer, which forms a broadband multi-carrier signal and sends it to the ground users, realizing the information exchange between different users. However, some ASIC-based digital channel synthesizers and splitters have a fixed working mode and can only support a fixed number of users and fixed sub-band bandwidths, while digital channel synthesizers and splitters based on field programmable gate arrays (FPGA) can be flexibly configured but find it difficult to achieve high-throughput digital channelization given the device level of existing on-board FPGAs. The design presented here exploits the advantages of FFT and polyphase filtering structures to build a digital channelizer with variable bandwidth and a variable number of users; the channel synthesizers and decomposers only need to be configured with different clock frequencies during operation. This structure realizes the diversification of digital channel functions, and it can be used for the design of multi-purpose satellite digital channel synthesis and decomposition ASIC chips.



2 Fundamentals

2.1 Digital Channelization Synthesis and Efficient Implementation Structure

In order to realize the synthesis of K = 4, 16, 64 or more multi-user FDMA signals, a signal synthesis scheme based on digital channelization is adopted. The basic principle is shown in Fig. 1, where $h_{LP}(m)$ is a low-pass filter with bandwidth $2\pi/K$. The K complex baseband signals $x_i(t)\,(i = 1, 2, \ldots, K)$ are input. First, each signal is interpolated by a factor of K and filtered, so that the spectral bandwidth of each branch becomes $2\pi/K$. Then the baseband signals are shifted to their allocated positions by the frequency-shift factors $e^{j\omega_k n}$. Finally, the K signals at different frequencies are added to obtain the transmitted signal y(n).


Fig. 1. Basic principle of digital channelized signal synthesis

The synthesized signal is

$$y(n) = \sum_{i=0}^{K-1} y_i(n) = \sum_{i=0}^{K-1} \left[ x_i'(n) * h(n) \right] e^{j\omega_i n} = \sum_{i=0}^{K-1} \left[ \sum_{k=-\infty}^{+\infty} x_i'(k)\, h(n-k) \right] e^{j\omega_i n} \quad (1)$$

where $x_i'(k) = x_i(k)$ for $k = 0, K, 2K, \ldots$ In the implementation structure shown in Fig. 1, the low-pass filters are expensive to implement because they operate after high-factor data interpolation. Instead, the channelized signal synthesis method based on the polyphase structure in Fig. 2 can be adopted. In this structure the filter branches are obtained by integer-factor decimation of the low-pass prototype filter, and the DFT, phase rotation and filtering are completed before data interpolation, so the computational efficiency is high. The structure also has strong real-time processing capability, a simple architecture and is easy to implement.


Fig. 2. Channelized signal synthesis based on polyphase structure
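As a cross-check for the efficient structure, a direct (non-polyphase) implementation of the Fig. 1 principle is easy to write down. The sketch below is only illustrative: the prototype filter design, signal lengths and scaling are assumed choices, not the paper's parameters. It interpolates each user signal by K, low-pass filters it, shifts it to its sub-band centre and sums the branches as in Eq. (1).

```python
import numpy as np
from scipy.signal import firwin, lfilter

def synthesize_direct(users, K, num_taps=512):
    """Direct digital channelized synthesis following Fig. 1 / Eq. (1).

    users : list of K complex baseband sequences of equal length
    K     : number of sub-bands (amplitude scaling of the interpolation is ignored)
    """
    h_lp = firwin(num_taps, 1.0 / K)              # prototype low-pass, cutoff about pi/K
    n_out = len(users[0]) * K
    n = np.arange(n_out)
    y = np.zeros(n_out, dtype=complex)
    for i, x in enumerate(users):
        v = np.zeros(n_out, dtype=complex)
        v[::K] = x                                 # K-fold interpolation (zero stuffing)
        w_i = 2 * np.pi * i / K + np.pi / (2 * K)  # sub-band centre frequency
        y += lfilter(h_lp, 1.0, v) * np.exp(1j * w_i * n)  # filter, then frequency shift
    return y
```

The polyphase structure of Fig. 2 rearranges exactly this computation so that the filtering runs at the low (pre-interpolation) rate.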

2.2 Digital Channelization Decomposition and Efficient Implementation Structure

Digital channelized signal decomposition is the process of evenly dividing the wideband signal in the frequency domain and extracting the sub-bands, finally outputting a set of low-rate baseband signals. Its function is equivalent to moving the evenly divided frequency bands to baseband with a DFT-modulated filter bank followed by decimators. The number of channels K is an integer, usually a power of 2. Let the centre frequencies of the K subchannels be $\omega_k = 2\pi k/K + \pi/2K,\; k = 0, 1, \ldots, K-1$, and let the stop-band cutoff frequency of the prototype filter satisfy $\omega_s \le \pi/K$ to ensure that there is no spectrum aliasing between adjacent subchannels. The digital channelization decomposition model is shown in Fig. 3 [2].

Fig. 3. Prototype structure of digital channelized signal decomposition

Assume that the FDMA signal can be expressed as

$$x(n) = \sum_{k=1}^{K} x_k(n) \quad (2)$$

After digital channelization, the output signal of the i-th channel is

$$y_i(n) = \left[ x(n)\, e^{-j\omega_i n} \right] * h(n) = \sum_{m=-\infty}^{\infty} h(Mn - m)\, x(m)\, e^{-j\omega_i m} \quad (3)$$

where M is the decimation factor applied before the per-channel signal processing. Substituting (2) into (3) gives:

$$y_i(n) = \sum_{m=-\infty}^{\infty} \left\{ h(nM - m) \cdot \sum_{k=1}^{N} x_k(m)\, e^{-j\omega_i m} \right\}, \quad i = 1, 2, 3, \ldots, K \quad (4)$$

It is further decomposed into real and imaginary parts:

$$\mathrm{Re}[y_i(n)] = \sum_{m=-\infty}^{\infty} \left\{ h(nM - m) \cdot \sum_{k=1}^{N} x_k(m) \cos \omega_i m \right\}$$
$$\mathrm{Im}[y_i(n)] = -\sum_{m=-\infty}^{\infty} \left\{ h(nM - m) \cdot \sum_{k=1}^{N} x_k(m) \sin \omega_i m \right\} \quad (5)$$

When $\omega_i \approx \omega_k$, (5) is simplified as follows:

$$\mathrm{Re}[y_i(n)] = \tfrac{1}{2}\left[ I_i(n) \cos(\Delta\omega_i n + \phi_i) - Q_i(n) \sin(\Delta\omega_i n + \phi_i) \right]$$
$$\mathrm{Im}[y_i(n)] = \tfrac{1}{2}\left[ I_i(n) \sin(\Delta\omega_i n + \phi_i) + Q_i(n) \cos(\Delta\omega_i n + \phi_i) \right] \quad (6)$$

where $\Delta\omega_i = \omega_k - \omega_i$. When $\Delta\omega_i$ is zero or very small, coherent demodulation can be realized by (6) under a sufficient SNR. To greatly reduce the computational complexity, polyphase filtering can be used, performing decimation before filtering. Let $h_{LP}(n)$ be the impulse response of an N-order low-pass filter (N/K = L, L a positive integer); then the output of the decimated channel is

$$v_k(m) = \left\{ \left[ S(n)\, e^{-j\omega_k n} \right] * h_{LP}(n) \right\}\Big|_{n = mM} = \sum_{l=0}^{N-1} S(mM - l)\, e^{-j\omega_k (mM - l)}\, h_{LP}(l) = \sum_{p=0}^{K-1} \sum_{i=0}^{N/K - 1} S(mM - iK - p)\, e^{-j\omega_k (mM - iK - p)}\, h_{LP}(iK + p) \quad (7)$$

We set $K/M = F$, $F = \ldots, 2^{-1}, 2^{0}, 2^{1}, \ldots$, $S_p(m) = S(mM - p)$ and $h_p(i) = h_{LP}(iK + p)$ to get:

$$v_k(m) = \sum_{p=0}^{K-1} \sum_{i=0}^{L-1} S_p(m - iF)\, e^{-j\omega_k (m - iF) M}\, h_p(i)\, e^{-j\omega_k p} = \sum_{p=0}^{K-1} \left[ \sum_{l=0}^{(L-1)F} S_p(m - l)\, e^{-j\omega_k (m - l) M}\, h_p(l/F) \right] e^{-j\omega_k p} \quad (8)$$

Let $h_p'(l) = h_p(l/F)$. When $F > 1$, $h_p'(l)$ is an F-fold interpolation of $h_p(l)$; when $F < 1$, $h_p'(l)$ is a 1/F-fold decimation of $h_p(l)$. Then (8) can be written as

$$v_k(m) = \sum_{p=0}^{K-1} \left\{ \left[ S_p(m)\, e^{-j\omega_k m M} \right] * h_p'(m) \right\} e^{-j\omega_k p}, \quad k = 0, 1, 2, \ldots, K-1 \quad (9)$$

Let $x_p(m) = \left[ S_p(m)\, e^{-j\omega_k m M} \right] * h_p'(m)$. Substituting $x_p(m)$ and $\omega_k = 2\pi k/K + \pi/2K$ into (9) gives:

$$v_k(m) = \sum_{p=0}^{K-1} \left[ x_p(m)\, e^{-j\frac{\pi}{2K} p} \right] e^{-j\frac{2\pi}{K} k p} = \mathrm{DFT}\left[ x_p(m)\, e^{-j\frac{\pi}{2K} p} \right] \quad (10)$$


Fig. 4. Efficient implementation structure of digital channelization decomposition

According to the above derivation, the efficient structure of digital channelization can be obtained, as shown in Fig. 4. For the decomposition of a complex baseband FDMA signal, the extraction factor is M = K = N.
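For reference, the decomposition of Fig. 3 can also be written directly (mix down, filter, decimate); the polyphase/DFT structure of Fig. 4 is an efficient rearrangement of the same computation. The sketch below is a minimal direct version, with the prototype filter design and parameters chosen only for illustration rather than taken from the paper.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def channelize_direct(x, K, M=None, num_taps=512):
    """Direct digital channelized decomposition following Fig. 3 / Eq. (3).

    x : complex wideband multi-carrier signal
    K : number of sub-bands; M : decimation factor (M = K when critically sampled)
    Returns a (K, ...) array of baseband user signals.
    """
    M = K if M is None else M
    h_lp = firwin(num_taps, 1.0 / K)               # prototype low-pass, stop band near pi/K
    n = np.arange(len(x))
    out = []
    for k in range(K):
        w_k = 2 * np.pi * k / K + np.pi / (2 * K)      # sub-band centre frequency
        baseband = x * np.exp(-1j * w_k * n)           # shift sub-band k to baseband
        out.append(lfilter(h_lp, 1.0, baseband)[::M])  # filter, then decimate by M
    return np.array(out)
```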

3 Variable Bandwidth Digital Channelization

The variable bandwidth digital channel synthesizer shifts the narrowband signals of different users in frequency and outputs a synthesized multi-carrier wideband signal. It can adapt to the composition of signals of different bandwidths for different numbers of users according to the task requirements, and is based on a fixed-coefficient filter architecture and a fixed-size FFT. In the variable bandwidth digital channel synthesizer based on a K-point FFT structure, the multi-user narrowband signals are frequency-shifted by the FFT, the image components are then suppressed by low-pass filtering, and the results are finally synthesized. When the number of users changes: 1) all-zero data is input on the ports without data input; 2) the working clocks of the synthesizer are changed; 3) at the back-end output, decimation and parallel-to-serial conversion are carried out in the corresponding proportion. Completing these three steps enables the digital channel synthesizer to switch operating mode; the numbers of users that can be supported are K, K/2, K/4, … and


single user. According to the working mode configuration, the variable bandwidth digital channel resolver sends the received signal to each input port to complete multi-user broadband signal extraction and anti-aliasing filtering, and then sends it to a K-point FFT to complete the spectrum shift, so as to obtain the baseband signal of each user. Like the variable bandwidth digital channel synthesizer, the variable bandwidth digital channel resolver can also support K, K/2, K/4, … and single-user modes, and only the working clocks need to be changed when switching modes. The working principle of the variable bandwidth digital channel resolver is described below. The multi-carrier wideband signal received by the satellite is expressed as x(n) after digitization. Let x(n) include K sub-bands, corresponding to the user narrowband signals $y_0(m), y_1(m), \ldots, y_{K-1}(m)$, where K is the maximum number of sub-bands supported by the digital channel resolver and is a power of 2. Applying the polyphase transformation to the anti-aliasing filter $h_{LP}(n)$, we get $g_0(m), g_1(m), \ldots, g_{K-1}(m)$, where $g_q(m) = h_{LP}(kM + q),\; q = 0, 1, 2, \ldots, K-1$. The implementation structure of the variable bandwidth digital channel resolver is shown in Fig. 5.

Fig. 5. Efficient implementation structure of digital channelization decomposition

According to the requirements of the application scenarios, the working modes of the variable bandwidth digital channel decomposer are shown in Table 1. In the input port column, the unselected port inputs are all zero at the corresponding IDFT inputs in Fig. 1.

Table 1. Working mode configuration of variable bandwidth digital channel resolver

Subband number (user number) | Subband signal bandwidth | Input ports | Working clock | Output ports
K | B | x̂K−1(m), x̂K−2(m), …, x̂1(m) & x̂0(m) | Clk | ŷ0(m), ŷ1(m), …, ŷK−1(m)
K/2 | B·2 | x̂K−2(m), x̂K−4(m), …, x̂2(m) & x̂0(m) | Clk·2 | ŷ0(m), ŷ1(m), …, ŷK/2−1(m)
K/4 | B·4 | x̂K−4(m), x̂K−8(m), …, x̂4(m) & x̂0(m) | Clk·4 | ŷ0(m), ŷ1(m), …, ŷK/4−1(m)
… | … | … | … | …
2 | B·K/2 | x̂K/2(m), x̂0(m) | Clk·K/2 | ŷ0(m), ŷ1(m)
1 | B·K | x̂0(m) | Clk·K | ŷ0(m)


The implementation structure of the variable bandwidth digital channel synthesizer is shown in Fig. 6.

Fig. 6. Implementation structure of variable bandwidth digital channel synthesizer

The narrowband signals corresponding to the K users are $y_0(m), y_1(m), \ldots, y_{K-1}(m)$. After the DFT, polyphase anti-image filtering and phase rotation, the multi-user narrowband signals are synthesized into a wideband signal through parallel-to-serial conversion. The polyphase anti-image filter branches are $q_q(m) = h_{LP}(Km - q),\; q = 0, 1, 2, \ldots, K-1$. When the number of users becomes K, K/2, K/4, … or a single user, the input ports, working clocks and output branches of the variable bandwidth digital channel synthesizer shown in Fig. 6 are configured according to Table 2.

Table 2. Working mode configuration of variable bandwidth digital channel synthesizer

Subband number (user number) | Subband signal bandwidth | Input port | Working clock | Output port
K | B | ŷ0(m), ŷ1(m), …, ŷK−1(m) | Clk | x̂K−1(m), x̂K−2(m), …, x̂1(m) & x̂0(m)
K/2 | B·2 | ŷ0(m), ŷ1(m), …, ŷK/2−1(m) | Clk·2 | x̂K−2(m), x̂K−4(m), …, x̂2(m) & x̂0(m)
K/4 | B·4 | ŷ0(m), ŷ1(m), …, ŷK/4−1(m) | Clk·4 | x̂K−4(m), x̂K−8(m), …, x̂4(m) & x̂0(m)
… | … | … | … | …
2 | B·K/2 | ŷ0(m), ŷ1(m) | Clk·K/2 | x̂K/2(m), x̂0(m)
1 | B·K | ŷ0(m) | Clk·K | x̂0(m)
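The mode switching in Tables 1 and 2 amounts to choosing a clock multiplier and a set of active ports. A small helper of the kind sketched below could generate that configuration for a requested number of users; the port-numbering rule follows the tables, while everything else (names, return format) is an illustrative assumption rather than the paper's implementation.

```python
def mode_config(K, users):
    """Working-mode configuration per Tables 1 and 2 (sketch).

    K     : maximum number of sub-bands (a power of 2)
    users : requested number of users (K, K/2, K/4, ..., 1)
    """
    assert K % users == 0 and (users & (users - 1)) == 0, "users must divide K and be a power of 2"
    step = K // users
    return {
        "clock_multiplier": step,                       # working clock = Clk * (K / users)
        "subband_bandwidth": f"B*{step}",               # each sub-band is K/users times wider
        "synth_inputs": [f"y{q}(m)" for q in range(users)],
        "synth_outputs": [f"x{K - step * (q + 1)}(m)" for q in range(users - 1)] + ["x0(m)"],
    }

# Example: K = 64, 4 users -> clock x16, inputs y0..y3, outputs x48, x32, x16, x0
print(mode_config(64, 4))
```

For K = 64 and 4 users this reproduces the port assignment quoted later in the text (effective outputs x48, x32, x16, x0).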

Based on Fig. 6, the 4-user digital channel decomposer is realized by changing the working clocks, as shown in Fig. 7.


Fig. 7. Variable bandwidth digital channel decomposer for 4-user signal synthesis

The implementation of the variable bandwidth digital channel resolver is described here in combination with its engineering realization. According to Fig. 3, a variable bandwidth digital channel resolver with K = 64 points is constructed. The baseband symbol rate of each user is 3.90625 Msps and the modulation mode is QPSK. After shaping filtering, the bandwidth of each user is 5.27 MHz and the sampling rate is 7.8125 Msps. The symbol rate and sampling rate of the 64-channel composite signal are 250 Msps and 500 Msps respectively. The length of the anti-image and anti-aliasing filter is 512 taps, and the length of each branch filter is 8 after polyphase decomposition. After digital channel synthesis, the measured broadband signal spectrum is shown in Fig. 8.

Fig. 8. Signal spectrums of 64 users
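The figures quoted for the 64-user example are internally consistent; the short check below is pure arithmetic on the numbers given in the text (nothing here is an additional design parameter).

```python
K = 64
symbol_rate_user = 3.90625e6        # symbols/s per user
samples_per_symbol = 2              # 7.8125 Msps sampling / 3.90625 Msps symbols

total_symbol_rate = K * symbol_rate_user                      # 250 Msps composite symbol rate
total_sample_rate = total_symbol_rate * samples_per_symbol    # 500 Msps composite sample rate
taps_per_branch = 512 // K                                    # 8-tap polyphase branches

# 4-user mode: each user occupies K/4 = 16 of the original sub-bands
symbol_rate_4user = symbol_rate_user * 16                     # 62.5 Msps
sample_rate_4user = symbol_rate_4user * samples_per_symbol    # 125 Msps

print(total_symbol_rate, total_sample_rate, taps_per_branch,
      symbol_rate_4user, sample_rate_4user)
```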

By changing the working frequency of the channel resolver, a K/16 = 4 sub-band signal is decomposed. In this case, the baseband symbol rate of each user is 3.90625 × 16 = 62.5 Msps. After shaping filtering, the signal bandwidth of each user is 84.375 MHz and the sampling rate is 125 Msps. The DFT input ports $y_4(m), y_5(m), \ldots, y_{63}(m)$ are zero-padded, the effective output ports are $\hat{x}_{48}(m), \hat{x}_{32}(m), \hat{x}_{16}(m), \hat{x}_0(m)$, and the DFT and polyphase filter working clock is 125 MHz. The spectrum of the 4-user signal is shown in Fig. 9.

Fig. 9. Signal spectrum of 4 users

References

1. Wu, P.: Research on digital channelized filtering algorithm and FPGA implementation technology. Master's thesis, Xi'an University of Electronic Science and Technology, Xi'an (2010)
2. Liu, X.: Design and implementation of broadband digital channelized receiver. Master's thesis, Harbin Engineering University, Harbin (2016)

A Design Framework for Complex Spacecraft Systems with Integrated Reliability Using MBSE Methodology

Jingyi Chong(&), Haocheng Zhou(&), Min Wang, and Yujun Chen

China Academy of Space Technology, 104 Youyi Road, Beijing 100094, China
[email protected], [email protected]

Abstract. In the development of spacecraft systems, reliability analysis is often performed independently of the system design, and its input is usually a manual abstraction of the design scheme, such as function trees or block diagrams. This model gap between analysis and design leads to uncertainty in the results, and the conclusions are difficult to feed directly back into the system design. In addition, manual reliability analysis consumes a lot of time and effort while increasing the risk of errors. With the rapid growth in the size and complexity of spacecraft systems, reliability analysis is becoming increasingly challenging and its value is difficult to demonstrate. Model-based systems engineering (MBSE) has emerged to cope with these difficulties. However, MBSE focuses on requirement traceability and has no precise definition of reliability, so that model conversion is still needed. In order to truly exploit the advantages of MBSE, this paper proposes a system design framework with integrated reliability analysis for complex spacecraft systems using MBSE. Both the design and the reliability analysis are based on a single SysML model, from which FMEA and fault tree analysis can be generated directly. This framework ensures the consistency of the system model and the reliability model, eliminating model conversion and reducing the workload of manual analysis. The application process is also demonstrated on a simple case. Keywords: MBSE · Spacecraft systems · Reliability analysis · FMEA · FTA

1 Introduction

Spacecraft systems, as typical complex systems, contain numerous subsystems involving many disciplines. Due to the harsh environment, high cost and non-maintainability, spacecraft are required to have superior reliability. It is therefore necessary to conduct a comprehensive reliability analysis to identify potential weaknesses and make proper design improvements so that all possible failure effects are within the acceptable range. With the development of technology, spacecraft systems show a trend towards multi-functional, complex and highly integrated designs. Traditional document-based design and analysis are not able to support these complex requirements, and model-based systems engineering has become an important direction. However, under the model-based digital development of spacecraft, reliability analysis has not yet been integrated with design, so that the traditional limitations


are not substantially improved through the pure MBSE methodology. Furthermore, traditional reliability analysis requires engineers to extract and analyze information manually, which makes it difficult to reflect the value of reliability analysis for design guidance. Model-based reliability analysis ensures the model consistency of design and analysis, which increases the traceability of model elements. In addition, thanks to the strictly defined model associations, efficient automatic generation of the reliability analysis can be realized by digital means. In this paper, an integrated framework of MBSE-based system design and reliability analysis is proposed for complex spacecraft systems. Based on forward design and SysML modeling, reliability information and the system model are integrated through metamodel extension, while FMEA and FTA can be generated directly through the mapping and reasoning of system model elements. Section 2 describes the related research in this field, Sect. 3 presents the specific process proposed in this paper, Sect. 4 validates the feasibility on a simplified model, and Sect. 5 concludes the paper.

2 Literature Review

Reliability analysis of complex systems is tedious and error-prone, so formal tools such as AltaRica have been used to automate it. However, models in AltaRica still cannot resolve the uncertainty caused by model differences. It is therefore necessary to carry out the reliability analysis directly from the system model, which strictly ensures model consistency and facilitates rapid feedback. Pierre David proposed the MeDISIS method for reliability analysis of complex systems expressed in SysML [1–3]. It first captures each component and function from the SysML model to create a preliminary FMEA, which is finally supplemented by experts. Mhenni proposed SafeSysE, a framework integrating the automatic generation of FMEA and FTA, which completely describes system modeling integrated with safety analysis as well as dynamic behavior verification [4]. Alfredo Garro proposed RAMSAS, which integrates overall system design, reliability analysis and Simulink simulation based on SysML [5]. Building on the above research, the framework proposed in this paper integrates the characteristics of spacecraft reliability analysis at each level of the forward design process, and generates the analyses through the mapping and reasoning of model elements.

3 Concrete Framework

The integration of system design and reliability analysis starts from the requirement definition at an early stage. In the initial requirement analysis, functional requirements should be taken into consideration, and based on the subsequent reliability analysis, reliability requirements should be added to provide constraints for design changes. In MBSE, the functionality of the system can be identified from the use case scenarios. Functional decomposition proceeds from the top-level functionality down


through the hierarchy until the component level is reached. In SysML, the Block Definition Diagram (BDD) describes the functional hierarchy and the Activity Diagram (AD) or Internal Block Diagram (IBD) expresses the interaction between functions. The functional decomposition is the input to the reliability analysis at the functional level. According to the functional FMEA, the functional architecture is updated and a new iteration can be performed until the failure impact is within acceptable limits. This process effectively strengthens the integration between reliability analysis and system design, avoiding subsequent costly changes to the logical or even physical architecture. Through the iterations at the requirements and functional layers, a complete system functional architecture is formed. Based on this, components can be assigned, which results in the logical architecture. In this phase, components are described through BDDs and the internal interactions are described through IBDs. The logical architecture is the input to the component-level reliability analysis. Similarly, new reliability requirements should be added for unacceptable failure risks and fed back to the requirements layer for iteration. Based on the previous iterations of design and analysis, a fault tree analysis is needed in order to clarify the fault propagation of specific failure modes. The fault tree is generated from the logical architecture mentioned above: the IBD describes the interactions between the components, so it can be used for the identification of fault propagation. From the generated fault trees, qualitative analysis can specify the minimum cut sets for a particular failure and provide targeted guidance to eliminate or reduce the risk of failure. Design changes continue to be fed back to the requirements layer of the overall system design for further iteration until all the risks are reduced to an acceptable level. The components at the layer above are actually logical components; at the physical layer, a more detailed decomposition is performed.

3.1 Construction of SysML Profile

In the framework proposed, reliability information should first be integrated with the system model. However, SysML is domain-independent, so special semantics cannot be constructed directly. Here, a specific profile can be constructed so that the reliability information is explicitly defined in the system model. For the FMEA

Fig. 1 Spacecraft reliability analysis profile


and FTA in this study, a "Spacecraft Reliability Analysis Profile" can be created as shown in Fig. 1.

3.2 Model-Based FMEA

FMEA for spacecraft systems includes functional FMEA and component FMEA. Functional FMEA focuses on the functions in the mission and is mainly based on the functional architecture, while component FMEA focuses on the specific components after the logical architecture has been defined. A model is a structured collection of elements: when the required information is already carried in the model, extracting the model elements and creating the corresponding units can generate the preliminary FMEA easily. Therefore, in MagicDraw, a program can automatically generate the functional FMEA from the functional decomposition. The BDD, AD and IBD describe the functional decomposition created during system design. Functions are provided by a series of activities, and the hierarchical decomposition of system functions can be captured as well, so these models completely describe the system functions and their interactions. First, the program extracts all the functions and builds a list. Then, failure mode units are created for each function and populated with the generic failure modes defined in the profile. Next, the inputs and outputs of each functional module are populated into the cause unit as an aid for subsequent refinement. Finally, for the effect unit, the related upstream and downstream functions are populated, which plays an important supporting role in the refinement. Based on the final results, if unacceptable conditions are identified, changes need to be made to the functional architecture and a further iteration is needed. Component FMEA is nearly the same as functional FMEA, except that the failure causes and effects can be supported by a standard model library. On the basis of the functional architecture, one component is assigned to each function; if a function requires more than one component, the decomposition of the function is considered incomplete (except for backups of the same function). After that, the logical architecture can be constructed. These models then completely describe the components and their interactions. Similarly, the required information can be extracted from the system model by a program, and since the failure modes of components are relatively fixed, they can be extracted and populated from the model library as well.
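The extraction logic described above can be pictured with a small, self-contained sketch. The data structures below are illustrative stand-ins for the SysML model elements and the generic failure modes of the profile; the actual implementation runs inside MagicDraw against its model API, which is not reproduced here.

```python
GENERIC_FAILURE_MODES = ["loss of function", "degraded function", "untimely function"]  # illustrative

def functional_fmea(functions):
    """Build a preliminary functional FMEA from a functional decomposition.

    functions : dict mapping function name -> {"inputs": [...], "outputs": [...],
                "upstream": [...], "downstream": [...]}
    """
    rows = []
    for name, info in functions.items():
        for mode in GENERIC_FAILURE_MODES:
            rows.append({
                "function": name,
                "failure_mode": mode,
                # inputs/outputs are pre-filled as candidate causes for expert refinement
                "possible_causes": info["inputs"] + info["outputs"],
                # upstream/downstream functions are pre-filled as candidate effects
                "possible_effects": info["upstream"] + info["downstream"],
                "expert_supplement": "",
            })
    return rows

example = {"measure spin rate": {"inputs": ["rate sensor signal"], "outputs": ["rate estimate"],
                                 "upstream": [], "downstream": ["compare with rated range"]}}
for row in functional_fmea(example):
    print(row["function"], "-", row["failure_mode"])
```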

3.3 Model-Based FTA

Fault tree analysis for spacecraft needs to clarify the fault propagation, which is reflected in the interactions between components. As the IBD describes these interactions, the FTA is also obtained from the IBD at the logical layer. In order to capture all the fault propagation paths, the generation can be achieved by program traversal in MagicDraw. In this process, two main problems need to be solved: the first is to abstract the IBD graph into a tree structure, and the second is to determine the logical relationships. Since the current qualitative analysis of FTA is


limited to monotonically associated fault trees consisting of AND gates, OR gates and fault events, only "and" and "or" relationships are considered. Generating the tree structure from the IBD can be achieved with a directed graph. The IBD can be represented as a directed graph G = (V, E), where V is the set of vertices and E is the set of directed edges. The vertex set consists of the system components and the external ports, and the internal ports can be expressed as directed edges connecting the components. Therefore, a traversal algorithm can find the causes that are related by directed edges. When using depth-first traversal, it may happen that the upstream of a vertex has not been visited yet, so a subtree must be built before stitching. In contrast, when using breadth-first traversal, because the graph is processed level by level, the fault tree can be generated sequentially. Besides traversing the directed graph, the logical relationships must also be inferred according to certain rules. In SysML, an IBD is composed of three parts: "external input", "external output" and "intermediate execution" (Fig. 2).

Fig. 2. IBD diagram partition diagram

Structure 1: External Output. The failure of the external output is regarded as the top event. The adjacent vertex is component E, so the top event is caused by an internal fault of component E or by an input error on port P4, which are connected by an "or" gate. Structure 2: External Input. Component A1 in the figure has a port representing an external input and outputs to component B through the internal port P2. The external input port represents all the bottom events coming from outside, and these are connected with "component A1 internal fault" through an "or" gate to transfer the fault to all components connected with component A1. Structure 3: Intermediate Execution. The logical relationship of an intermediate execution needs to be judged according to the fault transmission. If the components connected to the input ports are assigned to the same function, the faults are considered to be connected by an "and" gate; if the components are assigned to different functions, they are connected by "or" gates.
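A minimal sketch of the generation rules is given below. The graph encoding, gate naming and the function-assignment lookup are assumptions made for illustration, and a recursive (depth-first) formulation is used for brevity even though the paper prefers breadth-first generation; it does not reproduce the authors' MagicDraw program.

```python
def build_fault_tree(edges, assigned_function, node):
    """Fault tree for the failure of `node`'s output (assumes an acyclic IBD graph).

    edges             : component -> list of upstream components feeding its input ports
    assigned_function : component -> function it is allocated to (selects AND vs OR)
    """
    inputs = edges.get(node, [])
    if not inputs:                                   # external input / boundary component
        return f"{node} internal fault"
    branches = [build_fault_tree(edges, assigned_function, i) for i in inputs]
    same_fn = len(inputs) > 1 and len({assigned_function.get(i) for i in inputs}) == 1
    combined = {"AND" if same_fn else "OR": branches}      # Structure 3: same function -> AND
    return {"OR": [f"{node} internal fault", combined]}    # Structures 1-2: internal fault ORs in

# Toy example: component E is fed by redundant components A1 and A2 (same function)
edges = {"E": ["A1", "A2"], "A1": [], "A2": []}
funcs = {"E": "control", "A1": "accelerate", "A2": "accelerate"}
print(build_fault_tree(edges, funcs, "E"))
```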


4 Case Study

This section verifies the feasibility of the framework through the design and reliability analysis of a simplified system, the rotational speed control system of a spin-stabilized satellite. First, at the requirements level, the system boundaries and operating modes need to be defined. The system is installed on the satellite and receives the control signal, which is the result of comparing the measurement with the rated range. The system controls the spin rate of the satellite through the propulsion system. The real system also interacts with other stakeholders, which are not listed here. Its operation mode is as follows: after separation of the satellite from the rocket, the satellite is spun up until the upper limit of the rated range is reached. During operation, the spin rate decreases due to external disturbances; when the rate falls below the lower limit, the system accelerates the satellite again, so as to keep the spin rate within the rated range. By analyzing the system context and life cycle, the initial requirements can be described in requirement diagrams and tables, and then the top-level use cases can be determined. At the functional level, the use case can be expanded through an AD with swim lanes, resulting in the functional decomposition architecture of the system, which is the input of the functional FMEA (Fig. 3). Finally, according to the method described above, the functional FMEA is output, with the expert supplements marked (Fig. 4).

Fig. 3. Functional decomposition

Judging from the results, some preventive measures should be taken, which could include redundancy, protection or fault-tolerant design as well as the isolation of faults. Thus, additional reliability requirements could be "when the upper limit is reached, acceleration shall be forcibly stopped even if the system sends an acceleration command" and "when the main function fails, the system shall still be able to complete the task". The functional decomposition after the design iteration is shown in Fig. 5, in which a backup of the main function and a protective function are added. Then, components are assigned at the logical layer, resulting in the logical architecture of the system (Fig. 6), and the interactions between components are clearly described in the IBD. Based on the component FMEA method, the final results are shown in Fig. 7, with the expert supplements marked.


Fig. 4. Final functional FMEA

Fig. 5. Updated functional decomposition

Fig. 6. Logical architecture and component interaction.

According to the results, the failure of the locking mechanism and the leakage of the valve have a serious impact on the system, so control measures are needed. At the component level, these can include redundant design, preferred components, and protection and isolation devices. In this case, the redundancy of the locking mechanism is increased and two-seat valves are used to reduce the possibility of leakage. The corresponding fault tree can then be generated according to the FTA method described above (Fig. 8).


Fig. 7. Final component FMEA

Fig. 8. Updated component interaction and generated fault tree.

5 Conclusion

This paper introduces a framework for integrating reliability analysis into the design of spacecraft systems, which ensures model consistency through model-based FMEA and FTA. It is shown that the proposed framework benefits the efficient iteration of system design and reliability analysis, and fully reflects the constraints and guidance that reliability analysis provides to the design. However, the process still contains some manual definition, so further research is needed to reduce human involvement and realize fully automated analysis based on system models.


References

1. David, P., Idasiak, V., Kratz, F.: Towards a better interaction between design and dependability analysis: FMEA derived from UML/SysML models. In: Proc. ESREL 2008 and 17th SRA-Europe Annual Conference, Valencia, Spain, Sept (2008)
2. David, P., Idasiak, V., Kratz, F.: Improving reliability studies with SysML. In: Reliability & Maintainability Symposium. IEEE (2009)
3. Cressent, R., Idasiak, V., Kratz, F., David, P.: Mastering safety and reliability in a model based process. In: Proceedings - Annual Reliability and Maintainability Symposium, IEEE, Lake Buena Vista, FL, USA (2011)
4. Mhenni, F., Nguyen, N., Choley, J.-Y.: Towards the integration of safety analysis in a model-based system engineering approach with SysML. In: Design and Modeling of Mechanical Systems. Springer, Berlin, Heidelberg (2013)
5. Garro, A., Tundis, A.: A model-based method for system reliability analysis. In: Simulation Series (2012)

Intelligent Task Management of Infrared Remote Sensing Satellites Based on Onboard Processing Feedback

Xinyu Yao(&), Yong Yuan, Fan Mo, Xinwei Zhang, and Shasha Zhang

Institute of Remote Sensing Satellite, China Academy of Space Technology, Beijing 100094, China
[email protected]

Abstract. In order to meet the needs of real-time abnormal-event early warning missions of infrared remote sensing satellites with multiple observation elements, an intelligent task management method for infrared remote sensing satellites based on onboard processing feedback is designed. Relying on the feedback suggestions from the onboard processing of the satellite-borne infrared images, the users' observation requirements, satellite design constraints, target area attributes, ground receiving capabilities and other information are used collaboratively to complete the autonomous mission planning and autonomous health management of the infrared remote sensing satellites. On the basis of on-orbit application and verification, recommendations for mission management under the background of future remote sensing satellite networks are given. Keywords: Onboard processing · Infrared remote sensing satellites · Intelligent task management · Target detection

1 Introduction

Remote sensing satellites realize the observation of ground targets through satellite platforms, and use on-board sensors to generate high-resolution images [1]. In recent years, with the increase of satellite payload spatial resolution and spectral resolution, the data size of the images acquired on the satellite has increased sharply, and multi-functional intelligent task management and multi-satellite collaborative task planning are urgently required. The onboard processing of infrared remote sensing images can perform intuitive temperature measurement and thermal state analysis of various targets, which improves the timeliness of effective image products and enables quick acquisition of and response to thermal abnormal events. The Hyperspectral Infrared Imager (HyspIRI) has an onboard intelligent data processing system for the on-orbit extraction of important information and is equipped with a ground broadcast channel to transmit event information to the ground at the fastest speed [2]. In order to meet the needs of real-time abnormal-event early warning missions of infrared remote sensing satellites with multiple observation elements, and to improve


the intelligence level of remote sensing satellites, the new generation of remote sensing satellites should have the ability to plan missions and perform target observations in orbit autonomously; integrated with onboard information extraction and distribution, this can support users' rapid applications to the greatest extent [3]. Based on the mission requirements of infrared remote sensing satellites, this paper presents an intelligent task management method based on the onboard processing feedback of remote sensing data. Compared with the traditional control method of "mission planning + command generation" completed entirely on the ground [4], the intelligent task management method relies on the feedback suggestions from the onboard processing of infrared images and synergistically considers multi-source information to complete the autonomous mission planning of the infrared remote sensing satellites. This paper also gives mission management recommendations under the background of future remote sensing satellite networking.

2 Requirements and Constraints of Remote Sensing Satellite Mission Management

According to the imaging observation and operation characteristics of remote sensing satellites, the requirements and constraints in each mission mode should be fully considered when designing mission planning and management strategies. Taking a high-resolution low-orbit infrared remote sensing satellite as an example, the main requirements and constraints of mission planning and management can be summarized as follows.

1. Constraint of Sensor Imaging Area
The passive optical imaging sensor passively receives and converts the spectral information transmitted or reflected by the surface target [5]. Optical payloads with different spectral bands have different requirements on the lighting conditions and cloud cover conditions of the photographed ground objects.

2. Satellite Resource Constraints
Satellite energy constraints: the contradiction between the payload energy consumption level and the onboard energy supply is becoming more and more prominent [6]. When autonomously planning tasks, it is necessary to consider energy balance constraints to prevent the onboard energy from being unable to support the completion of the mission or even affecting the safety of the satellite.
Storage resource constraints: the data obtained by imaging and recording activities occupy the storage space of the onboard memory, while the data playback activities when the satellite passes a ground station release the storage space. Therefore, when optimizing the task schedule, it is necessary to arrange as many imaging activities for important targets as possible under the premise of not overflowing the memory resources.

3. Time Window Constraints
According to the characteristics of low-orbit remote sensing satellites, the observation of a target or the transmission of data to a ground station must be


completed within a visible time window. Any mission planning is based on the optimized management of the time window constraints.

4. Users' Requirements for Satellite Data
With the development of remote sensing satellites, in addition to reception by a single fixed institution, there are also ground receiving methods such as mobile tactical distribution and commercial users receiving and using the data themselves. Facing users' different requirements on the observation targets, the level and timeliness of image products, and the receiving code rate and demodulation capabilities of the ground receiving systems, mission planning should be customized for special users in combination with the satellite's capabilities.

3 Intelligent Task Management of Infrared Remote Sensing Satellites Based on Onboard Processing Feedback

3.1 Intelligent Task Management Solution

According to the requirements and constraints of remote sensing satellite mission planning and management, an intelligent mission management design for infrared remote sensing satellites based on the onboard processing feedback of remote sensing data is presented. The design adopts the idea of collaborative use of multi-source information: it takes the satellite operation constraint model and the current state as inputs, combines the users' application requirements, relies on the onboard processing feedback suggestions derived from the infrared images to complete the intelligent planning of the tasks, and adopts appropriate data transmission methods to push products of different levels to the users. Figure 1 shows the flow diagram of the intelligent task management solution.


Fig. 1. Flow diagram of the intelligent task management solution

The specific process of task management based on onboard processing feedback of remote sensing data is described as follows.


1. The satellite receives and stores user requirements and user information uploaded through ground control, including direct task arrangements, observation requirements for specific areas or targets, product level requirements, and receiving-user information.
2. According to the satellite's own operating status, combined with the resource and capacity constraint models, the satellite executes, merges or deletes the task arrangements made directly by the user, or independently plans imaging tasks according to the pre-stored user-specified observation areas and the pre-judged characteristics of the target areas.
3. During imaging, according to the real-time shooting position and the pre-stored onboard image processing requirements, the data processing and target monitoring system is guided in time to perform the corresponding operations.
4. Relying on the satellite's high-precision orbit prediction capability, the future visible arcs of the receiving users can be obtained, and the transmission tasks of different levels of data products are planned independently based on the feedback of the data processing and target monitoring results.
5. The target detection result is used as feedback information for task management; the area where a target event occurs is listed as a real-time hotspot area, which is re-observed first and then observed regularly in subsequent task planning. A minimal sketch of this feedback loop is given below.
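The following sketch only illustrates the five-step feedback loop described above; all names (Task, MissionManager, plan, process_feedback) and the priority rule are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    target: str    # observation target or area identifier
    priority: int  # larger value = more important
    source: str    # "user_upload" or "autonomous"

@dataclass
class MissionManager:
    hotspots: list = field(default_factory=list)  # areas where events were detected

    def plan(self, user_tasks, candidate_areas):
        """Step 2: merge user-uploaded tasks with autonomously planned ones."""
        tasks = list(user_tasks)
        for area in candidate_areas + self.hotspots:
            tasks.append(Task(target=area, priority=2, source="autonomous"))
        # hotspot / high-priority targets are observed first
        return sorted(tasks, key=lambda t: -t.priority)

    def process_feedback(self, detections):
        """Step 5: detected event areas become real-time hotspot areas."""
        for area in detections:
            if area not in self.hotspots:
                self.hotspots.append(area)

# One planning cycle (steps 1-5) with made-up inputs.
mgr = MissionManager()
user_tasks = [Task("port_A", priority=3, source="user_upload")]
schedule = mgr.plan(user_tasks, candidate_areas=["forest_B"])
mgr.process_feedback(detections=["forest_B"])  # e.g. a fire point was found there
print([t.target for t in schedule], mgr.hotspots)
```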

3.2 Key Technologies of Intelligent Task Management

According to the planning work shared between the ground and the satellite during mission planning, and to the interface form of the user's uploaded tasks, satellite mission management can be divided into different levels. Ordered from low to high intelligence, the key technologies of infrared remote sensing satellite mission management based on onboard processing feedback are as follows.

Task Dynamic Processing Based on Satellite Status. At present, the working modes of remote sensing satellites are complex and flexible. By decoupling and simplifying the complex working modes, a simple task interface can be provided to users. The satellite then needs to dynamically optimize the payload control, data transmission control, antenna control and attitude control processes according to the distribution characteristics of the user's uploaded tasks, combined with the satellite operating status information.

Task Autonomous Planning Based on Target Area. Depending on the users served by infrared remote sensing satellites, there is a variety of observation target areas, such as water bodies, forests, urban areas and ports. When the visible arc allows and no conflict with the users' uploaded tasks arises, the target area is imaged autonomously. When planning autonomous missions based on the target area, the ground area can be divided into grids; each grid carries attribute information such as ground object type, observation priority, observation urgency index, whether it has been observed, and the last observation time.

Autonomous Observation and Expression of Hotspots Based on Target Detection. Traditional remote sensing satellite data pass through many ground processing links, and it is difficult to meet the timeliness requirements of emergency response. Equipping remote sensing satellites with onboard image processing capabilities is therefore the current development trend. For infrared remote sensing satellites that can reflect thermal


events, onboard real-time image data processing can quickly extract the typical events of interest to the user, express effective observation conclusions, and download the target detection results in real time. At the same time, the event area can be listed as a real-time hotspot area according to requirements, forming sensitive-target guidance information that is prioritized for re-observation and regular observation in mission planning. The key points of intelligent task planning based on onboard processing are as follows.

1. Onboard infrared image processing and target detection

For the onboard processing of infrared remote sensing images, the key technologies are the inversion of image temperature and the detection of abnormal events. A general onboard infrared image processing and target detection flow is shown in Fig. 2. After the infrared remote sensing image is preprocessed by radiometric calibration, and combined with the ground object classification reflected by the different spectral bands or with the attributes of the observation area provided during autonomous mission planning, the corresponding band is selected for brightness temperature calculation, and the split-window method is used to retrieve the feature temperature. Anomaly detection is then performed on the obtained temperature results; different judgment criteria are used to confirm abnormal points and to eliminate false detections according to the requirements, finally yielding the anomaly results.

Take the detection of anomalous fire points as an example. The basic principle is to identify abnormal points based on the fire point being hotter than its surroundings. The radiation at a wavelength of 4 µm (whose brightness temperature is denoted T4 here) is commonly used for detection, and the long-wave infrared band at 11 µm (brightness temperature T11) can be used to screen out bright background features. The following judgment criteria can be applied.

a. Absolute detection criterion: if the brightness temperature is abnormally high, the pixel is judged as a fire point, e.g. T4 > 360 K (330 K at night).
b. Difference judgment criterion: the difference between T4 and T11 is used to identify the fire point, e.g. the pixel satisfies T4 > 330 K (315 K at night) and the difference between the 4 µm and 11 µm bands is greater than 25 K (10 K at night).
c. Background judgment criterion: when the fire point temperature is not very high, or is affected by the threshold settings of the above two criteria, the fire point is separated from the background by analyzing the difference between them. The judgment condition can be T4 > mean(T4) + 3·std(T4) and T4 − T11 > median(T4 − T11) + 3·std(T4 − T11), where mean, std and median are the mean, standard deviation and median of the valid background pixels (cloud-free, land, non-fire) in the window centered on the pixel under test. An illustrative implementation of these criteria is sketched below.
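The sketch below only renders the three fire-point criteria on brightness-temperature arrays; the global (rather than windowed) background statistics, the mask handling and the night-time switch are simplifying assumptions, not the authors' onboard implementation.

```python
import numpy as np

def detect_fire_points(T4, T11, background_mask, night=False):
    """Flag fire-point pixels from brightness temperatures T4 (4 um) and T11 (11 um).

    background_mask: True where a pixel is valid background (cloud-free land, not fire).
    Thresholds follow the criteria quoted in the text; everything else is illustrative.
    """
    t_abs = 330.0 if night else 360.0   # a. absolute criterion
    t_min = 315.0 if night else 330.0   # b. difference criterion
    dt_min = 10.0 if night else 25.0

    crit_a = T4 > t_abs
    crit_b = (T4 > t_min) & ((T4 - T11) > dt_min)

    # c. background criterion, computed here over the whole masked scene for
    # simplicity (the paper uses a local window around each pixel).
    bg_T4 = T4[background_mask]
    bg_diff = (T4 - T11)[background_mask]
    crit_c = (T4 > bg_T4.mean() + 3 * bg_T4.std()) & \
             ((T4 - T11) > np.median(bg_diff) + 3 * bg_diff.std())

    return crit_a | crit_b | crit_c

# Tiny synthetic example: one hot pixel in a 5x5 scene.
T4 = np.full((5, 5), 300.0); T4[2, 2] = 370.0
T11 = np.full((5, 5), 295.0)
mask = np.ones((5, 5), bool); mask[2, 2] = False
print(detect_fire_points(T4, T11, mask).astype(int))
```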

2. High-efficiency expression of target observation conclusions

Onboard processing is carried out according to different user needs. The intelligence information is further processed into products of different levels to meet the



Fig. 2. General flow of onboard infrared image processing and target detection

specific user needs of different levels. Users' needs for the information products generated by the onboard intelligent processing of remote sensing data mainly include [7]:

a. Message-type information obtained for situation changes, which has the most demanding time-response requirement and should be sent to ground application personnel at the fastest possible speed.
b. Image slice information used for target recognition and confirmation, which supports further analysis of the target on the ground and is required to be sent to ground users as soon as possible.
c. Unprocessed (possibly only compressed) raw data, for which the response-time requirement is moderate; it is downlinked during passes according to the orbit characteristics.

Considering the requirement of all-time access, message intelligence can be transmitted to ground application terminals in the form of Beidou short messages. The advantage is that the Beidou system has high coverage, and the transmission is not restricted by the relative positions of the remote sensing satellite and the users, so it is time-efficient. For image slice information, onboard information broadcasting is considered in order to increase the number of served users and reduce the transmission difficulty; the satellite intelligent task management system independently plans the image-slice transmission tasks according to the users' future visibility. Figure 3 shows the high-efficiency multi-channel transmission of different levels of observational conclusions.


Fig. 3. High-efficiency multi-channel transmission of different levels of observational conclusions

4 Application and Development

At present, China's self-developed satellites already have autonomous task management capabilities at different levels. Examples include simplifying the task interface through autonomous analysis and design of payload tasks, and upgrading the timing design of the multiple actions in a working mode from manual design on the ground to onboard solving through a time constraint model, together with functions such as task inspection, task optimization, task analysis and task management. Satellites can also receive and manage user imaging requirements, forming a unit-task-driven agile autonomous task management mode.

The mission planning of remote sensing satellites is developing from extensive management to refined management, and should also follow the trend from single-satellite observation to multi-satellite observation. Multi-satellite autonomous mission planning, independent health management and collaborative observation are the key technologies of multi-satellite integrated intelligent dispatching. In that case, the information used for intelligent task management is derived not only from the mission information on the ground and the satellite itself, but also from various features and guidance information in the satellite network. By perceiving the orbital position, task arrangement and operational capability of each satellite node in the network, combining them with the feedback from onboard processing, and exchanging data through the interfaces between nodes, network resources can be comprehensively integrated, the optimal imaging observation plan for users' targets can be determined, and the imaging actions of each satellite node can be dynamically scheduled according to the real-time situation, ensuring the high-efficiency operation of the entire remote sensing satellite network.


5 Conclusion

This paper presents an intelligent mission management design for infrared remote sensing satellites based on onboard processing feedback of remote sensing data. The design combines satellite mission and execution-condition information from multiple sources, including mission information uploaded from the ground, pre-bound user observation requirements, satellite constraint models, satellite operating status, observation target area characteristics and data-receiving user information, and relies on the feedback suggestions of onboard infrared image processing to complete autonomous mission planning. The management model and the recommendations on future multi-satellite coordination tasks provide support for the high-efficiency development of remote sensing satellites.

References

1. Chen, Y., Bai, B., He, R., Li, J.: A survey on mission planning for remote sensing satellites. J. Spacecraft TT&C Technol. 27(5), 1–8 (2018)
2. Steve, C., Dorothy, S., Ashley, G.D., David, M.: Onboard instrument processing concepts for the HyspIRI mission. In: 2010 IEEE International Geoscience and Remote Sensing Symposium, pp. 3748–3751. IEEE, Honolulu (2010)
3. Bai, B., Chen, Y., Xing, L.: Scheduling architecture for multi-satellites mission planning. Comput. Eng. 34(15), 244–246 (2008)
4. He, R., Li, J., Yao, F., Xing, L.: Imaging Satellite Mission Planning Technology. Science Press, Beijing (2011)
5. Thomas, M.L., Ralph, W.K., Jonathan, W.C.: Remote Sensing and Image Interpretation, 7th edn. John Wiley & Sons Inc., Hoboken (2015)
6. Tian, Z., Li, X., Yang, Y., Wang, Y., Wang, H.: Usability constraints analysis and design strategies for remote sensing satellite operation. Spacecraft Eng. 27(1), 37–44 (2018)
7. Mo, F., et al.: Intelligent onboard processing and multichannel transmission technology for infrared remote sensing data. In: 2019 IEEE International Geoscience and Remote Sensing Symposium, pp. 9063–9066. IEEE, Kanagawa (2019)

Fragmentation Technology of Spacecraft TT&C Resources Under Multi-constraint Condition

Jun Liu, Jun Wei, Yuan An, Yaruixi Gao, Feng Zhang, Ruolan Zhang(&), and Qifeng Sun
State Key Laboratory of Astronautic Dynamics, Xi'an 710043, China
[email protected]

Abstract. With the development of the world space industry, the number of spacecraft will grow much faster than ground-based TT&C equipment and space-based TT&C capacity. In the case of SpaceX's Starlink constellation program, all 4409 satellites are expected to be operational by 2024, eventually forming a super-large Internet constellation of about 42,000 satellites. With the rapid improvement of spacecraft technology and quantity, there is therefore an urgent demand for space TT&C capability. How to speed up the reasonable allocation of TT&C resources, make efficient use of existing TT&C resources, and tap the potential of TT&C resource deployment is a practical and innovative task. Based on China's many years of space TT&C practice and the practical problems of ground-based space TT&C capability, this paper puts forward a fragmented reorganization algorithm for TT&C resources (FRRA) under multi-constraint conditions and studies the technology of deeply exploiting the ground TT&C capability for spacecraft. The results are of practical engineering significance to the existing mode and utilization efficiency of space TT&C in China.

Keywords: Space TT&C · Resource terminal · Resource deployment

1 Introduction

With China's launch missions increasing year by year, the number of satellites in orbit is growing rapidly, which leads to full-load operation of TT&C equipment, while TT&C stations are built more slowly than satellites are launched. Increasing the number of TT&C stations, adding manpower for satellite monitoring and control, and enhancing the automation of satellite management and control are straightforward ways to relieve the shortage of TT&C resources. However, from the perspective of construction economy and application timeliness under existing conditions, the relatively practical and efficient way is to exploit the potential of the existing TT&C equipment resources, arrange the deployment and application strategy of the equipment, establish an open equipment-access standard protocol, and realize rapid access of TT&C equipment resources. This paper analyzes the above contents from the following three aspects. First, based on the present situation of TT&C, it analyzes the constraints of the TT&C system from



the viewpoint of equipment usage. Second, a corresponding scheme and analysis are given regarding the potential exploitation of TT&C resources; through algorithm design, data simulation and experiment, the feasibility of the proposed method is demonstrated. Third, the paper has engineering application value for improving the efficiency of operation and management in space TT&C, and helps TT&C practitioners better understand TT&C equipment operation and management.

2 Analysis of Current TT&C Equipment Allocation

In the current spacecraft TT&C system of China, to realize resource scheduling for different spacecraft TT&C missions, the following four basic requirements must be satisfied:

1. the time requirements for spacecraft TT&C parameter testing, data downlink and telemetry;
2. the time requirements for orbit determination and data download;
3. the time requirements for parameter setting, equipment adjustment and tracking of the next satellite mission;
4. the requirements for the actions within a pass, such as the order of command sending, execution and effect verification.

The above four conditions are the essential conditions for conversion tracking of TT&C equipment, i.e. the minimum requirements, and must be met regardless of the algorithm or strategy used; their characteristics can therefore be analyzed. If the above data constitute a set of TT&C data, the degree of fragmentation of each pass depends on the requirements of the multi-pass tracking task A, which is multi-constrained. Suppose one space TT&C mission (a single-pass tracking) is A_i (i = 1, 2, 3, ..., n); then the space TT&C mission (multi-pass tracking) during a period of time is A = {A_1, A_2, ..., A_n}. A space TT&C mission is a continuous, overall piece of work, and almost always consists of multiple passes in order to complete the spacecraft activities. Therefore, the multi-constraint problem can reasonably be extended to the multi-pass spacecraft TT&C task A. These hypotheses have the following practical significance:

1. in a multi-pass mission, the downloaded data needed for orbit measurement are collected pass by pass, so orbit determination can be realized from the multi-pass fragmented orbital data;
2. commands of different passes are largely independent of each other and can be dynamically moved to other passes for sending;
3. in real tasks, the tracking time is longer than the time actually needed for commanding, orbit measurement and other work.

3 Design of TT&C Strategy for Multi-constraint Fragmentation Assembly

Suppose that task A needs 5 passes of tracking with 10 min per pass and 120 min between passes; the total set of TT&C activities in task A is TA = {Si, Hi, Oi, Mi}. Suppose task B needs 5 passes of tracking with 12 min per pass and 120 min between passes; the total set of TT&C activities in task B is TB = {Sn, Hn, On, Mn}. The problem to be solved in this


paper is to reasonably allocate the activities S, H, O and M of TA and TB over all tracking passes, so as to find the most reasonable tracking switch moments within the overlap time of TA and TB, realize the centralized use of TT&C resources, reduce the overall occupied time of TA and TB, increase the idle time of the equipment, and finally exploit the potential of TT&C resources in general.

Fig. 1. Time chart between TT&C task A and B of traditional and FRRA algorithm

Given the large number of satellites and the relatively limited station resources, tasks A and B can be represented by the time chart above based on the foregoing assumptions. In the traditional TT&C mode, as shown on the left of Fig. 1, the overlap regions of tasks A and B are [ta1–tb1], [ta2–tb2], [ta3–tb3], [ta4–tb4] and [ta5–tb5], so two sets of equipment must be arranged to track, telemeter and control tasks A and B respectively in order to satisfy the actual demands. The FRRA algorithm designed in this paper takes this situation into overall consideration, decomposes TT&C tasks A and B by fragmentation, and then reconstructs them according to the TT&C strategies. The right-hand diagram in Fig. 1 is generated after simulation adjustment using this algorithm: it shows the situation after all activities are assigned according to the overlap intervals of tasks A and B. Finally, five continuous intervals F1, F2, F3, F4 and F5 are obtained, i.e. the tracking time after the tracking target has been switched to a single set of equipment. A simplified sketch of this fragmentation-and-reassembly idea is given below.
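The sketch below is only an illustration of the fragmentation-and-reassembly idea: the per-pass activities of two overlapping tasks are interleaved onto a single antenna greedily, pass by pass. The names (Activity, interleave_on_single_antenna) and the greedy rule are assumptions made for illustration, not the authors' exact FRRA procedure.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    task: str        # "A" or "B"
    duration: float  # seconds of effective work needed in this pass

def interleave_on_single_antenna(acts_a, acts_b, pass_length):
    """Try to serve both tasks' per-pass activities with one antenna.

    Returns, for each pass, the fragments scheduled on the single antenna and
    any leftover work that would still need a second antenna.
    """
    schedule = []
    for a, b in zip(acts_a, acts_b):
        needed = a.duration + b.duration
        if needed <= pass_length:                 # both fit: fragment and reassemble
            schedule.append({"antenna_1": [a, b], "antenna_2": []})
        else:                                     # overflow goes to the second antenna
            fit_b = Activity("B", max(0.0, pass_length - a.duration))
            rest_b = Activity("B", needed - pass_length)
            schedule.append({"antenna_1": [a, fit_b], "antenna_2": [rest_b]})
    return schedule

# Per-pass effective working times taken from Table 1 (seconds).
A = [Activity("A", d) for d in (156, 240, 251, 210, 230)]
B = [Activity("B", d) for d in (240, 251, 320, 300, 236)]
for i, p in enumerate(interleave_on_single_antenna(A, B, pass_length=700), 1):
    print(i, [f"{x.task}:{x.duration:.0f}" for x in p["antenna_1"]],
          "overflow:", [f"{x.task}:{x.duration:.0f}" for x in p["antenna_2"]])
```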

4 Simulation Test

The simulation hypotheses of this paper are based on two conditions: first, the tracking time of each pass must be longer than the actual effective working time, which is consistent with the real situation in tracking tasks such as satellite monitoring, orbit determination and command issuing; second, the adjusted total relay duration shall not be greater than the total duration the equipment can track. The comparison tables of the simulation data before and after adjustment are given in Tables 1 and 2, and the result of the experiment is illustrated in Fig. 2.


Table 1. Traditional TT&C mode data (second)

Pass | TA effective working time | EA tracking time | TB effective working time | EB tracking time
1 | 156 | 700 | 240 | 680
2 | 240 | 700 | 251 | 680
3 | 251 | 700 | 320 | 680
4 | 210 | 700 | 300 | 680
5 | 230 | 700 | 236 | 680
Total time | 1087 | 3500 | 1347 | 3400
Utilization rate | – | 100% | – | 100%
Precision | – | Optimum | – | Optimum

Table 2. Data after the implementation of FRRA (second)

Pass | TA effective working time | EA tracking time | TB effective working time | EB tracking time
1 | 191 | 391 (F1) | 279 | 0
2 | 112 | 342 (F2) | 151 | 0
3 | 211 | 368 (F3) | 157 | 0
4 | 251 | 641 (F4) | 370 | 0
5 | 322 | 692 (F4) | 390 | 0
Total time | 1087 | 2334 | 1347 | 0
Utilization rate | – | 69.5% | – | 0%

Fig. 2. TT&C time change after the implementation of FRRA


According to the comparison of the above two figures, after the implementation of FRRA, the tracking task of each satellite can be completed efficiently on one set of existing equipment after reasonable planning and coordination, and the working time of the other set of tracking equipment is therefore saved almost entirely. The case above is only a sampling experiment, but it is also representative of real tasks: it saves a whole set of equipment and thus greatly exploits the tracking potential. On the basis of this sample analysis, and by analyzing actual tasks, the simulation further assumes that the adjustment ratio of the two tasks can lie between 20% and 40%; the resulting data are given in Tables 3, 4 and 5.

Table 3. Task A required effective working time after simulation (second)

Pass | 20% adjustment | 40% adjustment | Median adjustment
1 | 229.2 | 267.4 | 258.2
2 | 145.2 | 169.4 | 164.2
3 | 253.2 | 295.4 | 281.1
4 | 301.2 | 351.4 | 341.2
5 | 386.4 | 450.8 | 435.3

Table 4. Task B required effective working time after simulation (second)

Pass | 20% adjustment | 40% adjustment | Median adjustment
1 | 334.8 | 390.8 | 377.2
2 | 181.2 | 211.4 | 206.1
3 | 188.4 | 219.8 | 213.2
4 | 444 | 518 | 500
5 | 468 | 546 | 526.1

Table 5. Simulation results for dual task tracking (second)

Pass | Dual task needed working time | Equipment A tracking time | Equipment B tracking time
1 | 635.4 | 635.44 | 0
2 | 370.3 | 370.3 | 0
3 | 494.3 | 494.3 | 0
4 | 841.4 | 700 | 141.4
5 | 961.4 | 700 | 261

Here is the comparison before and after the simulation in terms of 20%, 40%, and median adjustment (Figs. 3, 4).


Fig. 3. The change of TT&C time requirement under the adjustable proportion of each task

Fig. 4. Equipment usage time in the simulation of the two tasks

Analysis of the above data shows that FRRA has a clear effect on exploiting the potential of the tracking equipment. Through reasonable adjustment of the passes of the satellite tasks, and by preferentially arranging one set of working equipment and dynamically invoking another set only when the conditions cannot be met, efficient use of the equipment is finally realized and its potential is exploited.

5 Conclusion

Aiming at the existing problems of space TT&C technology at the resource terminal, this paper puts forward a method of space TT&C resource allocation and optimization based on engineering practice, and makes reasonable simulation hypotheses through analysis of the actual engineering situation, which has guiding significance and practical value for innovating the subsequent working mode of China's space TT&C equipment. Through the allocation and exploitation of space TT&C resources, research achievements with guiding


significance and theoretical results of value for engineering improvement can be produced, which is of innovative benefit to subsequent space TT&C resource allocation, optimization and exploitation.


Key Technology Research and On-Orbit Verification of Beidou Short Message Communication System of LEO Satellite

Ren Xiaohang(&), Shan Baotang, Xu Dinggen, Wang Po, and Wang Baobao
Sergeant School of AeroSpace Engineering University, Beijing, China
[email protected]

Abstract. This paper introduces the typical mission requirements of Beidou short message communication technology for low-orbit remote sensing, studies the key technologies of a low-orbit satellite short message communication system, and verifies the correctness of the designed key technologies through analysis of data obtained from actual on-orbit tests.

Keywords: Beidou short message · Chain-building satellite forecast · Coverage verification

1 Introduction

With the successful global networking of the BeiDou Navigation Satellite System (BDS), the Beidou Short Message Communication System (BSMCS) can be used as a space-based TT&C resource to solve the problem of global full-time measurement and control of low-orbit spacecraft. Guan Xinfeng et al. of Space Star Technology Co., Ltd. proposed a space-based TT&C method based on Beidou-3 RDSS short messages [1] and analyzed the TT&C service capabilities and the available service resources of Beidou-3. Zhu Xiangpeng of the China Academy of Space Technology (Xi'an) proposed a design of a space-based TT&C system for LEO spacecraft based on the BSMCS [2], and analyzed the communication capability, telemetry and remote control timing, and access service capability of this type of system. The service capacity of short message communication has been expanded from the Asia-Pacific region to the whole globe, and both regional and global short message services can be provided (Fig. 1).



Fig. 1. Distribution of GEO satellites of BeiDou Navigation Satellite System (BDS)

2 Task Requirement Analysis

2.1 On-Board Intelligent Processing and Broadcast Distribution Requirements

The ability of high-resolution remote sensing satellites to acquire data is getting stronger, and the amount of data is getting larger, which puts tremendous pressure on the satellite's downlink data transmission and post-processing. Due to weather, areas in an image are often covered by clouds, making it impossible to obtain information on the surface underneath; this greatly reduces the data utilization rate and makes it difficult, and sometimes impossible, to ensure the accuracy of image classification and recognition. Therefore, real-time and effective cloud detection, selecting images with low cloud content (especially cloud-free images) and discarding images with large cloud coverage, is very important for alleviating the pressure of transmitting, processing and storing massive remote sensing image data and for improving data utilization. More importantly, in order to improve the application value of satellite tactical operations and to increase response speed and mission execution efficiency, it is necessary to further carry out on-orbit intelligent processing of the massive payload data to complete radiation correction, regional target extraction and key regional target detection; this function can be disabled or selected according to instruction requirements. Target recognition mainly completes the discovery and recognition of marine, land and air targets. However, the existing remote sensing satellite system still operates independently and lacks the necessary information interconnection and data fusion with other spacecraft or space-based networks, so the high-resolution payload data obtained by remote sensing satellites can only be transmitted in isolation through the ground/relay transmission channel to fixed ground stations; the timeliness is very poor, the task execution efficiency is low, and it is


difficult to support tactical and operational applications. In order to further improve the response speed and task execution efficiency, the identified intelligence information is quickly transmitted to the corresponding space-based forwarding node or ground user terminal through the intelligence distribution system, realizing rapid distribution of intelligence information, effectively shortening the information return link and providing users with information support directly.

2.2 Satellite Measurement and Control Requirements

High-Efficiency Measurement and Control Requirements. High-resolution earth observation satellites require prompt response to commands so that they can quickly perform imaging tasks and achieve real-time control; at the same time, the health of the satellites needs to be monitored in real time so that failures can be detected and handled quickly, which demands time-sensitive measurement and control capability. Traditional ground-based measurement and control is regional: a measurement and control channel is available only when the satellite enters the visible range of a domestic ground station, so it is difficult to meet the high-efficiency measurement and control requirements of high-resolution satellites [3, 4]. Relay measurement and control, on the other hand, requires applying for available arcs at least one week in advance, which is not suitable for a real-time task response mechanism, so its timeliness still cannot meet the demand.

Emergency Mission Measurement and Control Requirements. The emergency measurement and control capability of the measurement and control system is an important means to ensure the rapid mission response of high-resolution earth observation satellites in orbit and to enhance China's intelligence detection and support capabilities. On the one hand, to meet the requirements of emergency imaging, the ground system needs to adjust the reconnaissance plan of the low-orbit satellite in time to ensure the timeliness of intelligence acquisition; on the other hand, once a satellite in orbit fails, the monitoring and control system needs to download comprehensive and detailed telemetry data to the ground through the measurement and control channel to realize rapid fault identification, location and resolution. The satellite measurement and control system needs to meet both requirements. At present, remote sensing satellites use the ground or relay measurement and control link to complete instruction uplink; under the premise of meeting the link budget, the uplink/forward remote control rate should be further improved to support quick upload of operational instructions, updated procedures, etc.

2.3 Requirements for Multi-satellite Collaboration Missions to Key Targets

Due to the orbital limitations of a single low-orbit remote sensing satellite, its pass time over a key target is only about 5 to 15 min, which severely limits the continuous monitoring of key targets and the scheduling of emergency tasks. If the low-orbit remote sensing satellite is built to support communication data links with the high,


medium and low orbit remote sensing satellites within the system, with the Beidou navigation satellites outside the system, and with military communication satellite constellations, then a target found by a survey satellite can be handed over for single-satellite surveillance and detailed identification by an inspection satellite [5, 6], with friendly or networked satellites intelligently cooperating in relayed monitoring. On-orbit target identification and situational awareness, together with intelligent follow-up agreed over the space-based network, are a good solution to the bottleneck of multi-satellite cooperative detection of key targets and meet the need for long-term operational surveillance.

3 Key Technology Analysis and Design

In order to obtain reliable communication with the Beidou satellites, the key technologies are as follows.

3.1 High-Precision Doppler Compensation Technology

In a single observation arc, the short message transceiver first quickly completes high-dynamic acquisition, tracking and reception of the navigation satellite signal. The LEO orbit height of the user satellite is 500 km, and the linear velocity of its circular motion is 7.2 km/s. The maximum radial velocity relative to the Beidou satellites is 8 km/s, and the radial acceleration over the equator is 0.85 g. The Doppler frequency is

f_d = \frac{f}{c} \cdot v    (1)

The Doppler frequency change rate is

\dot{f}_d = \frac{f}{c} \cdot a    (2)

According to the above formulas, for the current MEO global-coverage constellation of the Beidou-3 system, the user terminal needs to adapt to the following dynamic characteristics [7]:

1) carrier Doppler frequency f_d: −43.3 kHz to +43.3 kHz;
2) carrier Doppler frequency change rate \dot{f}_d: −46.03 Hz/s to +46.03 Hz/s.

The clock drift of the local frequency source is obtained from the Doppler measurement and positioning solution of the return receiving beam, and real-time frequency compensation of the inbound signal transmitted in the forward direction is then performed. The short message transceiver carried this time adopts a Kalman-assisted third-order high-dynamic tracking algorithm, which keeps the frequency tracking error of the return receiving beam below 1 Hz, while the clock drift error of the positioning solution is also on the order of a few hertz (Fig. 2).
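As a quick sanity check of the figures quoted above, the sketch below evaluates Eqs. (1) and (2) for an assumed L-band carrier near 1.6 GHz; the exact carrier frequency is an assumption for illustration and is not stated in this part of the paper.

```python
C = 299_792_458.0          # speed of light, m/s
f_carrier = 1.6e9          # assumed carrier frequency, Hz (illustrative)

v_radial = 8_000.0         # max radial velocity, m/s (from the text)
a_radial = 0.85 * 9.81     # radial acceleration over the equator, m/s^2

f_d = f_carrier / C * v_radial        # Eq. (1): Doppler shift
f_d_rate = f_carrier / C * a_radial   # Eq. (2): Doppler rate

print(f"Doppler shift : {f_d / 1e3:.1f} kHz")   # ~42.7 kHz, close to the quoted 43.3 kHz
print(f"Doppler rate  : {f_d_rate:.1f} Hz/s")   # ~44.5 Hz/s, close to the quoted 46.03 Hz/s
print(f"Drift over a 2.5 s inbound burst: {f_d_rate * 2.5:.0f} Hz")
```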


Fig. 2. High dynamic third-order tracking loop block diagram

The frequency change rate of the inbound signal introduced by the satellite acceleration is calculated to be 46 Hz/s. Considering that the inbound signal of a forward transmission lasts up to 2.5 s, the resulting frequency error is about 115 Hz, which exceeds the requirement of ±100 Hz. Therefore, while transmitting the inbound signal, the transceiver needs to continuously measure the carrier Doppler of the return received signal and compensate the carrier and pseudo-code of the transmitted inbound signal in real time according to the measured value and the clock drift solution (Fig. 3).


Fig. 3. Block diagram of RF offset compensation algorithm

3.2 Analysis of Time Slot Allocation and Message Communication Delay

According to data provided for the Beidou-3 global system, any Beidou short message terminal in the world can send information to the central station through the Beidou constellation: the probability of completing the transmission through the Beidou inter-satellite network via a single satellite (one hop) is 60%, the probability of completing it within two hops is 95%, and in extreme cases three hops are required. A simple expected-hop-count estimate under this reading is sketched below.
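A minimal computation of the expected number of hops implied by these figures, assuming the 95% value is cumulative (one- or two-hop success) and that the remaining 5% of messages need three hops; this interpretation is an assumption, not a statement from the paper.

```python
# Hop-count distribution read from the text (assumed interpretation):
p_one_hop = 0.60
p_two_hops = 0.95 - p_one_hop   # incremental probability of exactly two hops
p_three_hops = 1.0 - 0.95       # extreme case

expected_hops = 1 * p_one_hop + 2 * p_two_hops + 3 * p_three_hops
print(expected_hops)            # 1.45 hops on average per message
```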


When the short message communication system is applied to a remote sensing satellite, since the user terminal is within the Chinese region, transmission from the user terminal to the ground central station can be guaranteed to be completed in one hop. Information passes through the Beidou short message link, and the information flow between the user terminal and the user satellite is divided into a forward link and a return link.

1) Short message forward link: User terminal -> L uplink -> Satellite A -> S downlink -> Central station -> L uplink -> Satellite B -> Ka inter-satellite link -> Satellite C -> L downlink -> User satellite
2) Short message backward link: User terminal

The pointing angles about the two axes are calculated by (Fig. 4):

\alpha = -\arctan\frac{r_y}{r_z}, \quad
\alpha \in [-\pi/2, 0] \ (r_y > 0, r_z > 0); \quad
\alpha \in [-\pi, -\pi/2] \ (r_y > 0, r_z < 0); \quad
\alpha \in [\pi/2, \pi] \ (r_y < 0, r_z < 0); \quad
\alpha \in [0, \pi/2] \ (r_y < 0, r_z > 0)    (21)

\beta = \arcsin\frac{r_x}{r}    (22)

where r = \sqrt{r_x^2 + r_y^2 + r_z^2}.
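The following is a small, self-contained sketch of Eqs. (21) and (22): it computes the two antenna pointing angles from a target position vector (r_x, r_y, r_z) expressed in the antenna coordinate frame. The quadrant handling via atan2 and the function name are implementation choices for illustration, not the paper's code.

```python
import math

def pointing_angles(rx, ry, rz):
    """Return (alpha, beta) in radians for a target at (rx, ry, rz) in antenna coordinates.

    alpha follows Eq. (21): alpha = -arctan(ry/rz), extended over the four quadrants;
    beta follows Eq. (22):  beta  =  arcsin(rx/r).
    """
    r = math.sqrt(rx * rx + ry * ry + rz * rz)
    # atan2(ry, rz) gives the quadrant-correct angle; negating it reproduces the
    # ranges listed in Eq. (21).
    alpha = -math.atan2(ry, rz)
    beta = math.asin(rx / r)
    return alpha, beta

# Example: target mostly along +Z with small +Y and +X offsets.
a, b = pointing_angles(rx=100.0, ry=200.0, rz=7000.0)
print(math.degrees(a), math.degrees(b))   # alpha slightly negative, beta slightly positive
```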


Fig. 3. Relative position of coordinates

Fig. 4. Definition of the antenna pointing angles

4 Test Results

In this section the algorithm of Sect. 2 is applied to the prediction of the antenna pointing to the ground station. In STK (Satellite Tool Kit), the orbit extrapolation model is HPOP and the gravity field model is WGS84_EGM96; atmospheric drag, solar radiation pressure and lunisolar gravitational perturbations are taken into account. STK produces the antenna-to-ground-station pointing angles (α, β), which are used to verify the results of the algorithm. The algorithm was verified with different LEO satellites and ground stations and performs well, as shown in Fig. 5; the computational errors are clearly less than 0.1°.


Fig. 5. a. Calculation error. b. Angles of antenna

A DSP hardware platform was chosen to run the algorithm. The TMS320C6701 DSP used here is a 32-bit floating-point processor with a 25 MHz clock. The running time of the algorithm is less than 600 µs. In addition, by using the double-precision capability of the processor, the computation precision is better than with single-precision floats, at the cost of more program code.

5 Conclusion

This paper described the prediction algorithm of antenna pointing to the ground station. The algorithm is simplified for application on LEO satellites and meets the requirements for precise pointing. It was tested on a DSP hardware platform using the orbital elements of many satellites, and the results show that the algorithm ensures computational precision while minimizing processing time. The comparison of test results shows that the algorithm is correct and reliable.


Telemetry Data Prediction Method Based on Time Series Decomposition

Zhiqiang Li(&), Hongfei Li, Xiangyan Zhang, and Huadong Tian
Beijing Institute of Spacecraft System Engineering, Beijing 100094, China
[email protected]

Abstract. In this paper, a prediction method for satellite telemetry data based on time series decomposition is proposed. HP (Hodrick-Prescott) filtering is introduced to telemetry data analysis and prediction for the first time. The HP filtering method decomposes the time series of a telemetry parameter into a trend term and a fluctuation term; these are modeled by polynomial fitting and a seasonal ARIMA model respectively, and the predicted values of the two terms are superimposed to obtain the final combined prediction. The accuracy and effectiveness of the prediction method are verified through an empirical analysis of the telemetry data of the output current of a satellite solar array; the results show that the relative error is less than 0.67% over a one-month prediction period. This method has significant engineering application value in satellite health assessment, fault diagnosis and early warning.

Keywords: HP filtering · ARIMA model · Data prediction

1 Introduction

Satellite telemetry data reflect the status of satellite devices in orbit, and ground personnel usually monitor and evaluate satellite operation through telemetry data. By analyzing the changes in, and making predictions of, telemetry data, we can not only detect the degree of performance degradation of equipment, but also perform early warning and fault diagnosis, so as to deal with unexpected abnormal changes and reduce the operational risk of the satellite; this is of great significance for ensuring long-term safe operation [1, 2]. The output current of the solar array is a key parameter measuring the power supply capacity of the satellite. It tends to attenuate gradually with time and is also affected by physical and external factors such as the solar incidence angle, the Sun-Earth distance and the attenuation factor. Existing current prediction methods are based on complex physical laws and on external factors with varying degrees of influence, so their prediction accuracy is not high enough and their prediction horizon is limited. At present, much research has been carried out on trend prediction of telemetry data. Common data prediction methods include curve fitting extrapolation, wavelet analysis, grey theory prediction, neural networks and so on [3, 4]. These methods are mostly suitable for short-term prediction of a single trend, and the seasonality and randomness of the data are not fully considered; for medium- and long-term forecasts, they suffer from insufficient accuracy or forecast divergence.


The HP (Hodrick Prescott) filtering is a commonly used time series analysis method [5, 6], which is widely used in the fields of industrial energy analysis, business cycle prediction and so on. However, there is no report on the application of HP filtering in the field of telemetry data. In this paper, the application of HP filtering is introduced into telemetry data field for the first time. Firstly, the theory foundation of HP filtering and seasonal ARIMA model are introduced; Secondly, the technical steps of the prediction method are described in detail; Finally, an empirical analysis is carried out to verify the effectiveness of the method through the prediction of the output current of the solar array, with the prediction results achieving high prediction accuracy within a one-month span. This method is different from the existing prediction methods, and provides a new technical approach for telemetry parameter analysis and early warning.

2 Theory Foundation

2.1 HP Filtering

The HP filtering method, a method for decomposing time series, was first proposed by Hodrick and Prescott in 1981 [5]. Its basic principle is to decompose the original series Y = {y_1, y_2, ..., y_n} into two components: the trend series G = {g_1, g_2, ..., g_n} and the fluctuation series C = {c_1, c_2, ..., c_n}. They are related to the original series by Eq. (1) [6, 7]:

y_t = g_t + c_t, \quad t = 1, 2, \ldots, n    (1)

The purpose of HP filtering is to extract the trend component from the non-smooth series. The separation must satisfy the minimum-loss-function principle:

\min \left\{ \sum_{t=1}^{n} (y_t - g_t)^2 + \lambda \sum_{t=2}^{n-1} \left[ (g_{t+1} - g_t) - (g_t - g_{t-1}) \right]^2 \right\}    (2)

where \lambda = \sigma_1^2 / \sigma_2^2, with \sigma_1^2 and \sigma_2^2 the variances of the sequences G and C respectively; \lambda is the smoothing parameter. An important issue is the choice of \lambda, which adjusts how closely the trend tracks the actual series and how smooth the trend series is: the larger the value, the smoother the estimated trend. HP filtering does not eliminate the trend, fluctuation and seasonality of the sequence itself; these properties are retained in the decomposed sequences G and C respectively.
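As an illustration of the decomposition in Eqs. (1)–(2), the sketch below applies the HP filter to a synthetic "trend plus seasonal fluctuation" series using statsmodels; the synthetic data and the choice of lamb are purely illustrative and are not the satellite telemetry used in Sect. 4.

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

# Synthetic daily series: slowly decaying trend plus a periodic fluctuation.
t = np.arange(1000)
y = 40.0 - 0.002 * t + 2.0 * np.sin(2 * np.pi * t / 330) + 0.1 * np.random.randn(1000)

# hpfilter returns (cycle, trend); lamb is the smoothing parameter lambda in Eq. (2).
fluctuation, trend = hpfilter(y, lamb=1e7)

# y_t = g_t + c_t (Eq. (1)) holds by construction of the decomposition.
print(np.allclose(y, trend + fluctuation))
```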

Seasonal ARIMA Model

ARIMA (autoregressive integrated moving average model) is a time series prediction model proposed by Box and Jenkins in the 1970s. It is a high precision time series prediction method. The general expression of ARIMA model is:

Telemetry Data Prediction Method Based on Time Series Decomposition

UðLÞDd yt ¼ HðLÞet

387

ð3Þ

where et is a white noise sequence, L is a lag operator, and Dd yt ¼ ð1  LÞd yt is a stationary time series obtained by difference; UðLÞ ¼ 1  u1 L  u2 L2      up Lp and HðLÞ ¼ 1 þ h1 L þ h2 L2 þ    þ hq Lq , are autoregressive operator and moving average operator polynomials. The seasonal ARIMA model is a time series model based on the traditional ARIMA, which takes into account the adjustment of seasonal factors [8]. /p ðLÞUP ðLs Þð1  LÞd ð1  Ls ÞD yt ¼ hq ðLÞHQ ðLs Þet

ð4Þ

where /p ðLÞ is a non-seasonal autoregressive operator (AR) polynomial, UP (Ls ) is the seasonal autoregressive operator (SAR) polynomial, hq ðLÞ is a non-seasonal moving average operator (MA) polynomial; HQ ðLs Þ is the seasonal moving average operator (SMA) polynomial. D, d are the difference times of season and non-season; P, Q, p, q are the maximum lag orders of seasonal, non-seasonal autoregressive and moving average operators respectively. The seasonal ARIMA (p, d, q)(P, D, Q Þs model can be established when the non-stationary time series with period S is transformed into stationary time series after D times of seasonal difference. There are several steps to build seasonal ARIMA model. 1) The original series needs to be tested for stationarity. If it is not, the difference transformation is performed until it is stable. 2) By analyzing the autocorrelation function (ACF) and partial autocorrelation function (PACF) of the original sequence or the sequence after difference, the order of ARIMA can be determined by the truncation and tailing of ACF and PACF, then the ARIMA model can be established. 3) Estimate the parameters of the model, and test whether the residual sequence of model estimation meets the requirements of randomness. 4) Optimize the model and choose the best model. 5) Predict the future data and the predicted value is obtained based on the principle of minimum mean square error.

3 Modeling Method By observing the characteristics of the two sequences after HP Filtering, the fluctuation term is obviously periodic and seasonal, so the seasonal ARIMA model is considered. The trend term has a significant tendency, and a curve fitting model is established. The specific steps are as follows. 1) According to HP filtering, decompose the original sequence Y to trend term G and fluctuation term C. 2) The sequence G is regarded as a function of time t. The error term from each sample to the  autoregressive curve is defined as e1 ; e2 ; en . According to the principle of min e21 þ e22 þ . . . þ e2n , determine the fitting multiple autoregressive polynomials.

388

Z. Li et al.

gt ¼ a0 þ a1 t þ ::: þ an tn þ Lðt  nÞ

ð5Þ

where L(t−n) is the function of lag term. 3) According to the fitting polynomial, the sequence G is predicted, and the predicted value is b gt. 4) Check the stationarity of the sequence C. If it is not stable, perform a difference operation until it is stable, and denote it as the sequence WC. 5) Perform correlation analysis on the sequence WC, determine the order of the seasonal ARIMA model. d 1 s D ct ¼ /1 hq ðLÞHQ ðLÞet p ðLÞUP ðLÞð1  LÞ ð1  L Þ

ð6Þ

6) Test the residual sequence fet g, judge the rationality of the model. 7) The sequence C is predicted and the predicted value is recorded as bc t . The final prediction result of the original sequence is obtained: ^yt ¼ ^g t þ ^ct

ð7Þ

8) Perform error analysis on the prediction results, and the error tested is relative error (RE): REt ¼

^yt  yt  100% yt

The prediction flow chart of the combined model is shown in Fig. 1.

Fig. 1. Model flow chart based on HP filtering

ð8Þ

Telemetry Data Prediction Method Based on Time Series Decomposition

389

4 Empirical Analysis In order to verify the effectiveness of the method, the in-orbit data of the output current of a satellite solar array from July 18 (2018) to April 23 (2021) is selected as the research sequence. The data from July 18 (2018) to March 24 (2021) is used as the reference data for modeling, while the data from March 25 to April 23 (2021) is used to compare with the predicted results based on the model and to measure the prediction error between actual values and predicted ones in the same period. MATLAB and Eviews are used for modeling and analyzing the sequence. The output current from 2018 to 2021 is shown in Fig. 2. The output current has a downward trend year by year, and the annual value has seasonal fluctuations. This is due to the attenuation of output current with the aging of solar array performance. Affected by the solar incidence angle and orbit conditions, it shows a certain periodic fluctuation, which is seasonal variation. The sequence is decomposed by the HP filtering method, as is shown in Fig. 2. Considering the smoothness of the trend and conducive to the fluctuation term ARIMA modeling, the k value is set to be 90002500. The sequence G curve represents a longterm smoothly downward trend. The sequence C curve represents the seasonal and periodic fluctuation items, with regular variability every year. 42 40 38

4

36

Trend term (A)

Original series Y Trend term G Fluctuation term C

Fluctuation term(A)

2 34 0

32

-2

-4 2018-9

2019-1

2019-5

2019-9

2020-1

2020-5

2020-9

2021-1

Time

Fig. 2. Series separation based on HP filter

4.1

Trend Term Prediction

The sequence G approximates a smooth curve. t (t = 1, 2, 3…) is defined as the cumulative time, the power and lag term are adjusted according to the result of polynomial curve fitting. Based on the principle of minimum sum of square error, the estimation coefficients of the model are obtained and included in Table 1. Here, R2 and adjusted R2 are used to explain the goodness of fit of the model. The larger the R2 is, the better the model fits. AIC (Akaike information criterion) and SC (Schwarz

390

Z. Li et al.

criterion) measure the performance of the model, the smaller the better. D-W Statistic is used to test the first-order autocorrelation of residuals. The prediction model of the sequence is established. Table 1. Estimation results of trend model Variable C T t2 t3 g(t−1) g(t−2) R-squared Adjusted R-squared D-W Statistic

Coefficient 0.010273 −7.60E-07 −1.04E-09 8.79E-13 1.998918 −0.999181 1.000 1.000 0.001002

t-Statistic Prob 88.53131 0.000 −48.05049 0.000 −34.79574 0.000 42.30121 0.000 11042.52 0.000 −5524.187 0.000 AIC −20.20581 SC −20.17586

From Table 1, the fitting effect of polynomial autoregressive curve is very good, the adjusted R2 reaches 1.000, and all parameters pass the significance test. The trend term is not only related to the time t, but also related to the second-order lag term.

35.52 Trend prediction Confidence interval line

Current (A)

35.51 35.50 35.49 35.48 35.47 35.46 35.45 3-27

3-31

4-4

Time

4-8

4-12

4-16

4-20

Fig. 3. Prediction value and confidence interval of trend term

Based on the model estimation results and Eq. (5), the sequence G from March 25 to April 23 (2021) is predicted, and the prediction curve is shown in Fig. 3. The prediction results are included in Table 3, and the confidence interval is very narrow. It

Telemetry Data Prediction Method Based on Time Series Decomposition

391

can be seen that when the polynomial autoregressive curve is used for forecasting the trend, the overall smooth trend prediction is very good. 4.2

Fluctuation Term Prediction

The seasonal ARIMA model is established to analyze and forecast the fluctuation sequence C. Analyzing the sequence C, in order to further eliminate the trend, the first-order difference of sequence C is carried out and then it is stable, so d = 1. C also has seasonal variation law, and the period is 11. In order to eliminate the seasonality, perform the seasonal difference processing with lag of 11 periods, so D = 1, S = 11. By observing the PACF and ACF of seasonal difference sequence, the identification of the model can be determined. We can see that p = 8, q = 10, P = 0 or 1, Q = 1. The optimal model needs to be selected. The model with the largest adjusted goodness of fit R2 is ARIMA (8, 1, 10) (0, 1, 1)11, whose AIC and SC are relatively small, so the more suitable model ARIMA (8, 1, 10) (0, 1, 1)11 is established. The estimated results of this model are shown in Table 2. Table 2. Estimation results of the model Variable AR(1) AR(2) AR(8) MA(1) MA(3) MA(10) SMA(11) R-squared Adjusted R-squared D-W Statistic

Coefficient 0.908967 0.266282 −0.18081 −1.51025 0.52659 −0.00857 −0.95501 0.676672 0.674639 1.983981

t-Statistic Prob 28.56085 0.0000 7.207283 0.0000 −32.9195 0.0000 −120.125 0.0000 33.85174 0.0000 −2.23603 0.0256 −130.922 0.0000 AIC −2.435469 SC −2.400010

It is necessary to test whether the residual of the model is white noise. After the ADF test, the residual is determined to be white noise. Therefore, ARIMA (8,1,10) (0,1,1)11 is an ideal prediction model.

392

Z. Li et al. 0,8 0,6

Current (A)

0,4 0,2 0,0 -0,2 -0,4 Fluctuation prediction Confidence interval line

-0,6 -0,8 3-27

3-31

4-4

4-8

4-12

4-16

4-20

Time Fig. 4. Prediction value and confidence interval of fluctuation term

According to the model and Eq. (6), the data of fluctuation term from March to April (2021) are predicted, with the predicted results included in Table 3 and the prediction curve shown in Fig. 4. It can be seen that the model can accurately predict the variability over time, and the prediction confidence interval is narrow. With the expansion of the prediction period, the prediction confidence interval also becomes larger, indicating that the prediction accuracy becomes worse in the future. 4.3

Combination Prediction and Reliability Verification

According to the principle of HP filtering, y_t = g_t + c_t, the prediction results from March to April 2021 are obtained and included in Table 3. The results are compared with the actual values and the reliability is verified. The existing grey theory model GM(1, 1) is used for prediction as a comparison, and its prediction results are also included in the table.

Table 3. Prediction results of current from 2021.03 to 2021.04

Time   Trend       Fluctuation  Combined     GM           Actual   Absolute   Combined         GM
       prediction  prediction   prediction   prediction   value    error      relative error   relative error
3–25   35.5150     0.5861       36.1011      36.3344      36.081   0.0201     0.056%           0.702%
3–26   35.5130     0.5728       36.0858      36.3628      36.081   0.0048     0.013%           0.781%
3–27   35.5109     0.5790       36.0899      36.3911      36.081   0.0089     0.025%           0.859%
……     ……          ……           ……           ……           ……       ……         ……               ……
4–22   35.4598     −0.1198      35.3400      37.1360      35.212   0.1280     0.363%           5.464%
4–23   35.4579     −0.1449      35.3130      37.1649      35.212   0.1010     0.287%           5.546%
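As an illustrative sketch of the combination principle y_t = g_t + c_t, the fragment below decomposes a hypothetical current series with an HP filter and adds a fluctuation forecast to a stand-in trend forecast; the file name, the smoothing parameter lamb and the naive trend hold are assumptions, not the paper's settings.

```python
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter
from statsmodels.tsa.statespace.sarimax import SARIMAX

# y: hypothetical telemetry current series
y = pd.read_csv("current.csv", index_col=0).squeeze("columns")  # placeholder data source

# HP decomposition: y_t = g_t + c_t (cycle = fluctuation, trend = smooth component)
c, g = hpfilter(y, lamb=1600)

# Fluctuation forecast from the seasonal ARIMA model fitted above
c_fc = SARIMAX(c, order=(8, 1, 10), seasonal_order=(0, 1, 1, 11),
               enforce_stationarity=False).fit(disp=False).get_forecast(30).predicted_mean

# The trend forecast g_fc would come from the polynomial autoregressive model (Eq. (5));
# here a naive hold of the last trend value stands in for it.
g_fc = pd.Series([g.iloc[-1]] * 30, index=c_fc.index)

combined = g_fc + c_fc   # combined prediction per y_t = g_t + c_t
print(combined.head())
```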


Fig. 5. Combined model prediction and relative error chart

From Table 3, the combined prediction results are very accurate: the difference between the predicted value and the actual value is very small, the absolute error lies between −0.0037 and 0.234, the relative error is within 0.665%, and the minimum relative error is only 0.01%, as shown in Fig. 5, which is a very high accuracy. As a contrast, in the grey theory prediction model the relative error reaches 5.5%, as shown in Table 3, which is higher than that of the proposed combination method. Therefore, the identification and verification results of the combined model are ideal, showing that the model can be used to accurately predict the output current data when the state of the solar array does not change suddenly.

In fact, the current data is calculated from the telemetry source code through a processing method. The minimum unit of change of the source code is one stratified value, which corresponds to a calculated current change of about 0.24. Therefore, the prediction accuracy of the model (about 0.234) has reached the level of predicting the smallest possible change, reflecting the underlying trend before it appears in the telemetry value. Because the output current is affected by many factors, including illumination angle, performance degradation and so on, prediction based on physical laws is often insufficiently accurate. The verification results show that it is not necessary to consider a variety of complex physical and external factors: a high-precision prediction, better than one stratified value over a long period of time, can be made relying only on the change law of the data itself.

The combined prediction method can also be extended to fault diagnosis and early warning. In the monitoring of satellites, this method can be used to model the historical telemetry data and to find the difference between the theoretical prediction and the actual value. When the actual value deviates greatly from the predicted value, a potential fault state of the equipment can be diagnosed. Meanwhile, this method can be used to evaluate the long-term performance of the equipment and to warn of abnormal problems in advance by setting a dynamic monitoring range, thereby realizing effective prediction of the operation life.

5 Conclusion

This paper presented a parameter prediction method based on time series decomposition. The HP filtering method was used to decompose the original series into a trend term and a fluctuation term, which weakened the mutual influence between the trend and the fluctuation, so that the combination forecasting model could be established more accurately. Through the empirical analysis of the solar array output current, the effectiveness of the prediction method was verified. The results showed that the method has the advantages of high fitting accuracy and high prediction accuracy, and can also ensure the accuracy of multi-step prediction over a long time span. This method has important practical significance in satellite operation. The HP filter idea was applied to the telemetry data field, and the mature tools and models of time series analysis were introduced to provide a new approach and technical guarantee for satellite status monitoring, health assessment and early fault warning, which has significant engineering application value.

References
1. Yan, Q., Cui, G.: Spacecraft telemetry data prediction algorithm based on time series. Comp. Measure. Cont. 25(5), 188–191 (2017)
2. Zhou, F., Pi, D.: Prediction algorithm for seasonal satellite parameters based on time series decomposition. Computer Science 43(2), 9–12 (2016)
3. Zhang, G., Zhai, J., Yang, H.: Research on telemetry data tendency prognosis for navigation satellite. Spacecraft Engineering 26(3), 70–77 (2017)
4. Guo, X., Xu, X., Zhao, S.: Satellite telemetry parameter trend forecast algorithm based on new information and applications. Journal of Astronautics 31(8), 1939–1943 (2010)
5. Hodrick, R.J., Prescott, E.C.: Postwar U.S. business cycles: an empirical investigation. Journal of Money, Credit, and Banking 29(1), 1–16 (1997)
6. Yongxiu, H., et al.: Correlation between Chinese and international energy prices based on a HP filter and time difference analysis. Energy Policy 62, 898–909 (2013)
7. Chen, W.: Research on the cycle of thermal coal price volatility in China: based on HP filter method. China Mining Magazine (z1), 63–66 (2013)
8. Dehong, A., et al.: Traffic modeling and predication using seasonal ARIMA models in power system. J. Tianjin Univ. 37(2), 184–187 (2004)

A Query Optimization Method for Real-Time Embedded Database Based on Minimum Arborescence

Xudong Li, Bo Liu(&), Jian Xu, Jianyu Yang, and Mengdan Cao

Beijing Institute of Control Engineering, Beijing 100190, China
[email protected]

Abstract. In this paper, a query optimization method based on minimum arborescence for real-time embedded database is proposed to solve the problems of memory limitation and real-time access performance limitation of spaceborne computer system. Based on the characteristics of SQL statements, this method applies the minimum arborescence algorithm to the real-time embedded query processing by relying on the relationship between the attributes of each table during the construction of SQL query. This method can improve the processing speed of multi-table complex query statements, reduce the system memory usage, and optimize query processing in embedded database. The verification on SQLite database shows that this method is an effective query optimization method for embedded database.

Keywords: Real-time embedded database · Minimum arborescence · Query optimization

1 Introduction

The embedded database is an important data management warehouse under resource-constrained conditions, and the spaceborne embedded real-time database is an important future direction in the field of aerospace technology. Given the memory resource limitations and real-time constraints of the spacecraft control system, and the increasing volume and diversity of spaceborne data, how to improve the data processing capability of the spaceborne embedded system and the real-time performance of SQL query processing in the embedded database is a topic worthy of continuous research.

In generic databases, join operations generate intermediate result sets and store them in temporary tables, and a large number of intermediate results leads to high memory overhead. The worst-case optimal join algorithm was pioneered to address this [1]. It is mainly dedicated to the problem that traditional methods can hardly avoid pairwise combinations of relations, which generate a large number of intermediate computations when processing a multi-table natural join query; it exploits the join characteristics of relational query statements to restrict the join scale of each table and thereby limit the overall scale. On this basis, a hybrid query optimizer based on hash tables and a hypergraph structure was proposed to handle two-way and multi-way joins more effectively [2], as well as subgraph query optimization based on the worst-case optimal join algorithm [3]. These methods reduce the size of the intermediate result set of the query statement from different angles. Although the above query processing algorithms are fairly universal in most relational databases, the query optimization of some embedded databases is an exception. Due to their limited memory resources, embedded databases often adopt different methods to avoid generating intermediate result sets. For example, SQLite query processing adopts a virtual machine that uses cursors to locate records from different tables according to the conditions of the WHERE clause, fetches them, and puts them in the result registers, so SQLite does not generate an intermediate result set. In this regard, further research on join statement optimization tailored to the SQL processing characteristics of these databases is needed [4, 5]. This article studies a method for optimizing query statement join processing that is related to these embedded database characteristics.

2 Structure of Query Processing

Query processing of SQL statements is mainly divided into the following four steps: API, query analysis, design of the execution plan, and implementation of the plan [6], as shown in Fig. 1.

Fig. 1. Steps of query processing

The SQL query statement is read by the API, and the query information after lexical analysis, syntax analysis, and semantic analysis is stored in a syntax tree, which contains the FROM clause, WHERE clause and other information of the SQL statement. Subsequently, using the method of statement flattening, the complex and diverse clauses such as NATURAL JOIN, ON, USING, etc. are uniformly transformed into WHERE clause statements between multi-table attributes for constraints.


According to the dependencies among each FROM clause and the attribute information of the clause itself (such as index, primary key, etc.), the path information corresponding to the dependencies is constructed. Each FROM clause in an SQL query has at least one corresponding path instance object. Each path object represents a potential way to implement the query statement. This object contains the estimated costs and dependencies that are prerequisites for the selected constructor. The execution plan of the query statement needs to construct a set of path objects including all the FROM clauses according to the dependencies and minimize the overall cost, and then implement it according to the path sequence with the help of the underlying methods [7].

3 Query Optimization Sequence Based on Directed Graph Structure

3.1 Representation of Multi-table Dependencies in a Join Query

It is not difficult to obtain the constraints of the query statement after the flattening processing. For the join relation R in the query statement, for the purpose of intuition and simplicity, we only consider the following NATURAL JOIN form of relationship:

R = R1 ⋈ R2 ⋈ ⋯ ⋈ Rn   (1)

where Ri is the FROM clause with attributes v1, …, vn. Please note that using NATURAL JOIN to describe the constraint relationship between tables does not lose generality: for a same-attribute constraint between tables, such as R1.va = R2.vb, it is more intuitive to use an attribute of the same name to represent two or more table attributes under the same constraint. Attributes not involved in the equality constraint are not shown, so we can use the following join statement as an example:

(2)

According to the SQLite multi-table attribute dependencies and construction rules, in the SQL query above we can get 2 path objects for the R1 clause (its own construction, and the constraint on the attribute with the same name in another clause). In the same way we can get 3 R2 objects and 2 R3 objects. Each object describes a potential way to implement the construction of the FROM clause and contains the estimated costs and the prerequisites for the selected constructor (expressed as a bitmask, namely the bitmask corresponding to the attribute of the same name in another FROM clause).

3.2 Constructing Multi-table Dependencies Based on Directed Graphs

Bitmasks are used to describe the prerequisites and the current clause. For the relational dependencies in the query, we take each FROM clause as a vertex, the constraint in the prerequisite of the clause as the starting point (0 means no prerequisite and cannot be referenced directly), the current clause as the end point, and the consumption as the weight, building up directed edges and finally constructing an "imperfect" graph Ge = (R, E), where R = {R1, R2, R3} and E = {E1, E2, …, E7}. Except for the LEFT JOIN and CROSS JOIN statements, there is no prerequisite (bitmask 0) for each FROM clause to construct the path object itself, that is, Ei = (0, Ri, val(Ri)), i = 1, 2, 3. For the dependencies between FROM clauses, such as the relationship between v1 in R1 and v1 in R2, we can get the corresponding constraint E4 = (R1.v1, R2.v1, val(R1.v1, R2.v1)) from the perspective of R2, and E5 = (R2.v1, R1.v1, val(R2.v1, R1.v1)) from the perspective of R1. The remaining E6 and E7 are not difficult to derive. Please note that the path consumptions val(Ra.vc, Rb.vc) and val(Rb.vc, Ra.vc) are often different here. The graph is called "imperfect" because the edge constructed by each FROM clause itself has no prerequisite, that is, it has an end point but no starting point, so it is not a directed graph in the strict sense. To some extent, the subgraph obtained after removing the constraints Ei (i = 1, 2, 3) generated by the FROM clauses themselves is a strongly connected component, which is shown in Fig. 2.

Fig. 2. "Imperfect" graph and strongly connected component

However, due to the absence of Ei (i = 1, 2, 3), such a strongly connected component cannot fully reflect the consumption of directly constructing the R1, R2, R3 clauses, and the "imperfect" directed graph is not as complete as expected. Therefore, we can regard the absence of constraints (bitmask 0) as a special prerequisite, i.e. the starting point of the query processing plan. Adding this starting point as a special vertex R0 to the above strongly connected component, we obtain a "perfect" directed graph, as shown in Fig. 3.

Fig. 3. A "perfect" directed graph of the dependency relationship

As shown in Fig. 3, R0 represents the starting point of the query processing plan, that is, the construction path of the FROM clauses without prerequisites, so that the problem of generating the processing plan becomes the problem of solving the minimum arborescence of the graph in Fig. 3.

3.3 Storage and Calculation of Directed Graph Weights

For the weights of the minimum arborescence, using SQLite as an example [4], a specific data type, LogEst, is used, which is short for "logarithmic estimation". This 16-bit logarithmic format is used for storing the estimated weights of query planning. For a quantity x and LogEst data E, the conversion is as follows:

E = 10 · log2(x)   (3)

For the calculation of weights, each path instance object has three different forms of consumption, namely the cost of a one-time setup rSetup (such as creating a transient index), the cost of each loop run rRun, and the estimated number of output rows nOut. Assuming that n paths have already been selected, and the three consumptions of the path E to be added are E.rSetup, E.rRun and E.nRow, the total consumption of the (n + 1) paths is calculated as follows:

val(n + 1) = LogEstAdd(val(n), E.rSetup, E.rRun + nRow(n))   (4)

nRow(n + 1) = nRow(n) + E.nRow   (5)

where val(n) represents the total consumption of the first n paths, LogEstAdd(a, b, c) is the function that calculates the sum of the LogEst data a, b and c, and nRow(n) represents the estimated output rows of the first n paths. As for the choice of path, in the actual calculation, except for the rRun of a path object instance generated by the clause itself and path nodes involving a primary key, all other instance nodes approximately satisfy Ek.nRow = El.nRow. Therefore, E.rSetup and E.rRun can be used as an approximate estimate of the incremental consumption at the next scale, which is convenient for the generation and calculation of the minimum arborescence:

val(n + 1) = LogEstAdd(val(n), E.rSetup, E.rRun)   (6)

E.val = Δval = val(n + 1) − val(n)   (7)

E.val = LogEstAdd(E.rSetup, E.rRun)   (8)

Then we can define the weight of the edge E according to E.rSetup and E.rRun. Based on this, the problem of the directed edge weights is solved.
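A minimal sketch of the logarithmic-estimate idea behind Eqs. (3) and (8) is shown below; it approximates LogEst addition by converting back to the linear scale rather than reproducing SQLite's internal table-driven routine, and the numeric costs are illustrative only.

```python
import math

def log_est(x):
    """Encode a quantity x as 10*log2(x), the LogEst storage form of Eq. (3)."""
    return round(10 * math.log2(x))

def log_est_add(*estimates):
    """Approximate the LogEst of a sum of quantities given their LogEst values
    (sketch: decode to linear scale, add, re-encode)."""
    total = sum(2 ** (e / 10) for e in estimates)
    return round(10 * math.log2(total))

# Edge weight of a candidate path per Eq. (8), with hypothetical setup/run costs
rSetup, rRun = log_est(30), log_est(1200)
edge_val = log_est_add(rSetup, rRun)
print(rSetup, rRun, edge_val)
```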

4 SQLite Query Optimization Algorithm Based on Minimum Arborescence

The minimum arborescence algorithm was first proposed by Yongjin Zhu and Zhenhong Liu in 1965. It can be used to greatly reduce the algorithmic complexity of query plan generation [8]. After flattening the query statement and constructing the WHERE clause dependency information, a path set containing prerequisites and path consumptions can be obtained. Firstly, the query information with nc FROM clauses and nd join dependencies can be represented by a directed graph G = {V, E} with nc vertices and nd edges. The FROM clauses correspond to the nodes V = {V0, V1, …, Vnc}, and the join dependency pairs correspond to the edges E = {E0, E1, …, End}, where for an edge Ei = {Va, Vb, val(Va, Vb)}, Va is the starting point of the directed edge Ei, Vb is its end point, and val(Va, Vb) is its weight. With the help of the minimum arborescence algorithm, we can get the set Eusing of the edges used to form the minimum arborescence. Construct the set of already connected nodes Valready = {V0} and the set of remaining nodes Vleft = V − {V0}, and start from the virtual root node {V0}. By selecting the least costly edge among the connected edges in Eusing as the next path, add the end node pointed to by this edge to the connected node set Valready, remove this node from the remaining node set Vleft, and remove the used edge from the edge set Eusing. At the same time, the construction cost of the generated path is accurately calculated, and the path is added to the node queue, that is, the generation plan of the query statement.
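The construction described above can be illustrated with the following sketch, which builds the rooted dependency graph and extracts a minimum arborescence with networkx; the clause names, the costs and the use of networkx are assumptions for illustration, not the SQLite-internal implementation.

```python
import networkx as nx

G = nx.DiGraph()
ROOT = "R0"  # virtual root: the "no prerequisite" starting point

# Self-construction edges from the virtual root (bitmask 0 prerequisites), hypothetical costs
for clause, cost in {"R1": 55, "R2": 60, "R3": 48}.items():
    G.add_edge(ROOT, clause, weight=cost)

# Dependency edges: constructing the target clause once its prerequisite clause is
# available; costs differ per direction, as noted for val(Ra.vc, Rb.vc) vs val(Rb.vc, Ra.vc).
for src, dst, cost in [("R1", "R2", 33), ("R2", "R1", 35),
                       ("R2", "R3", 30), ("R3", "R2", 31)]:
    G.add_edge(src, dst, weight=cost)

# Minimum spanning arborescence: every clause is reached exactly once at minimum total cost.
arb = nx.minimum_spanning_arborescence(G, attr="weight")
plan = [n for n in nx.topological_sort(arb) if n != ROOT]  # construction order of FROM clauses
print("join order:", plan)
print("edges:", sorted(arb.edges()))
```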


5 Experiment and Analysis

We used the SQLite database to test the read and write speed in different modes in an Intel Core i7-9750H 2.60 GHz Windows environment. The benchmark performance test results are shown in Table 1 below (taking N = 1000000 pieces of data as an example) [9]. SQLite using the minimum arborescence has a time advantage in many aspects.

Table 1. Performance under SQLite-Bench (microseconds/operation)

Measurement index | Traditional greedy algorithm | Minimum arborescence | Remark
fillseq           | 42.94    | 40.59    | write N values in sequential key order in async mode
fillseqsync       | 50895.30 | 43151.70 | write N/100 values in sequential key order in sync mode
fillseqbatch      | 10.05    | 9.59     | batch write N values in sequential key order in async mode
fillrandom        | 69.82    | 67.48    | write N values in random key order in async mode
fillrandsync      | 84121.20 | 76013.70 | write N/100 values in random key order in sync mode
fillrandbatch     | 40.49    | 39.86    | batch write N values in random key order in async mode
overwrite         | 240.62   | 231.71   | overwrite N values in random key order in async mode
overwritebatch    | 182.35   | 160.80   | batch overwrite N values in random key in async mode
readrandom        | 25.93    | 24.49    | read N times in random order
readseq           | 18.76    | 18.32    | read N times sequentially
fillrand100K      | 1071.00  | 700.00   | write N/1000 100K values in random order in async mode
fillseq100K       | 674.00   | 644.00   | write N/1000 100K values in sequential order in async mode
readrand100K      | 164.00   | 143.00   | read N/1000 100K values in sequential order in async mode

In order to compare the memory usage and running time of the original greedy algorithm and the minimum arborescence algorithm when processing SQL query statements with different numbers of FROM clauses, 6 tables with record counts of the same magnitude are taken as an example. The processing time of 30 query statements and the memory usage reflected by WorkingSetSize under 6 different FROM clause sizes were recorded. The average processing time of the query statements for the two algorithms is shown in Fig. 4. The experimental data show that before the FROM clause reaches a certain size, the processing time of the SQL statement increases with the number of clauses; after the FROM clause reaches a certain size, the processing time decreases significantly. This is because the query processing time consists of the construction time of the execution plan and the plan execution time. As mentioned above, the time complexities of the execution plan construction are O(V³) and O(V·E) respectively. The size of the execution plan increases with the clause size, leading to an increase of the overall time. When the size of the FROM clause increases to a certain extent, the size of the result set is significantly reduced due to the increase of constraints; the decrease of the plan execution time exceeds the increase of the construction time, resulting in a decrease of the overall time. In general, the running time of the SQLite query statement optimization algorithm based on the minimum arborescence is less than that of the traditional greedy algorithm under different FROM clause sizes.

[Figure: average query time (μs) versus FROM clause size (1 to 6) for the traditional algorithm and the minimum arborescence]

Fig. 4. Average processing time of two algorithms

As shown in Table 2 below, the memory usage under 6 different FROM clause sizes is compared. When the size of the FROM clause increases, the memory occupied by the construction of the execution plan increases, and consequently the total memory occupation increases; SQL statement processing using the minimum arborescence algorithm takes up slightly less memory than the traditional greedy algorithm. To sum up, the minimum arborescence algorithm is better than the traditional greedy algorithm in terms of both running time and memory usage.

Table 2. WorkingSetSize of two algorithms (Bytes)

FROM clause size       1        2        3        4        5        6
Traditional algorithm  843776   888832   884736   950272   954368   974848
Minimum arborescence   831488   876544   880640   937984   942080   954368

6 Conclusion

In this paper, we analyzed a query optimization method for embedded databases based on the minimum arborescence. The method constructs a directed graph structure equivalent to the FROM clauses and join dependencies of the query statement.


In this method, the minimum arborescence algorithm was applied to SQL statement query processing, which could improve the processing speed of multi-relational query statements and reduce the system memory usage. Finally, it was verified in SQLite database. Our future work will include more validation in other embedded databases and further complexity optimization of the logic of the minimum arborescence algorithm.

References
1. Ngo, H.Q., Porat, E., Ré, C., Rudra, A.: Worst-case optimal join algorithms. Journal of the ACM 65(3), 1–40 (2018)
2. Freitag, M., Bandle, M., Schmidt, T., et al.: Adopting worst-case optimal joins in relational database systems. Proceedings of the VLDB Endowment 13(12), 1891–1904 (2020)
3. Amine, M., Semih, S.: Optimizing subgraph queries by combining binary and worst-case optimal joins. Proceedings of the VLDB Endowment 12(11), 1692–1704 (2019)
4. Grant, A., Mike, O.: The Definitive Guide to SQLite. 2nd edn. Apress, America (2010)
5. Sibsankar, H.: Inside SQLite. 2nd edn. O'Reilly Media Inc, America (2007)
6. Lu, H., Xu, Z., Gao, Z.: Principle and Application of Embedded Database, 1st edn. Tsinghua University Press, China (2013)
7. Alex, P.: Database Internals: A Deep Dive into How Distributed Data Systems Work. 1st edn. O'Reilly Media Inc, America (2020)
8. Bahel, E., Trudeau, C.: Minimum incoming cost rules for arborescences. Soc. Choice Welfare 49(2), 287–314 (2017). https://doi.org/10.1007/s00355-017-1061-9
9. Sqlite-bench. https://github.com/ukontainer/sqlite-bench. Accessed 20 May 2021

Design of Reconfigurable Test System for Information Interface Between Space Station Combination with Multi-aircraft

Ying Peng(&), Jing Xu-zhen(&), Li Jichuan, and Tan Zheng

Institute of Spacecraft System Engineering, CAST, Beijing 100094, China
[email protected], [email protected]

Abstract. Aiming at the need for functional verification of the information interfaces during space station assembly flight, a reconfigurable test system design method is proposed. The hardware platform is built from configurable high-performance equipment and can be combined flexibly, and the configurable software system is developed with graphical virtual instrument programming. On the one hand, the system supports reconfigurable verification of multiple interfaces between different aircraft; on the other hand, in the absence of real aircraft docking, a newly developed aircraft can be effectively verified on the ground. Simulation tests show that the system can quickly realize simulation verification of the interfaces with a variety of manned spacecraft, which moves the test process forward, improves the parallelism of test and development, can effectively cover various failure modes, and provides a reliable system framework at a significantly reduced cost.

Keywords: Space station · Combination · Information interface · Reconfigurable

1 Introduction

One of the characteristics of the space station project is that multiple manned spacecraft perform rendezvous and docking in orbit. After forming a combination, it operates in orbit for a long time, so it is necessary to fully verify the interfaces between the aircraft and the functions of the combination on the ground. Traditional manned spacecraft interface verification test systems generally have the following problems: after construction is completed, it is difficult for ordinary users to change their relatively fixed functions, and each system is dedicated to a specific aircraft, so it cannot meet diverse test needs. Such test systems cannot share software and hardware components due to their strong specialization, incompatibility, poor scalability, and lack of generalization and modularity. This not only lowers development efficiency but also makes the development of a complex test system expensive.

Reconfigurability means that the hardware or software modules of a system can reconfigure their structure and algorithms according to changes in the data flow or control flow. Reconfigurable technology has the following advantages:


1) Reconfigurable technology can efficiently realize specific functions. Reconfigurable logic devices are hard-wired logic, and the function is changed by changing the configuration of the device.
2) Reconfigurable technology can dynamically change the device configuration and flexibly meet the needs of multiple functions.
3) Reconfigurable technology provides strong technical support for accelerating product development.
4) Reconfigurable technology can greatly reduce system costs: by using dynamic reconfiguration to implement different functions in different demand periods, "one machine with multiple uses" is achieved.

2 System Framework

The design of a reconfigurable test system includes two parts: hardware reconfigurability and software reconfigurability. Hardware reconfigurability means that data acquisition cards with different interfaces are interchangeable, and the multiple channels of a data acquisition card can be arbitrarily allocated to each instrument. Software reconfigurability refers to the reconstruction of the framework and its functions. The framework is a reusable, semi-finished application, and the final application is generated by customizing it [1]. To realize the reconfigurability of virtual instruments, the software must be modularized, the physical layer and the data layer must be separated, and a unified data interface must be defined to facilitate data transmission and calls between different modules. When adding new functions, as long as the interface definition is satisfied, new function modules can be added to the framework and existing modules can be improved without changing other settings. The design process of a typical reconfigurable test system is shown in Fig. 1. It involves system decomposition, function modeling, reconfigurable module design, data interface design, system testing, etc.
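As a minimal illustration of the unified data interface idea (new instrument modules can be registered without touching existing ones), the following Python sketch is given; the class and method names are hypothetical, since the actual system is built with graphical virtual-instrument software.

```python
from abc import ABC, abstractmethod
from typing import Dict, List

class InstrumentModule(ABC):
    """Unified data interface: every function module exchanges data the same way."""
    @abstractmethod
    def process(self, samples: List[float]) -> Dict[str, float]: ...

class Voltmeter(InstrumentModule):
    def process(self, samples):
        return {"mean_voltage": sum(samples) / len(samples)}

class Framework:
    """Semi-finished application: instrument modules are registered at run time."""
    def __init__(self):
        self._modules: Dict[str, InstrumentModule] = {}

    def register(self, instrument_id: str, module: InstrumentModule):
        # Reconfigure by adding a module without changing existing ones
        self._modules[instrument_id] = module

    def run(self, instrument_id: str, samples: List[float]):
        return self._modules[instrument_id].process(samples)

fw = Framework()
fw.register("VM0", Voltmeter())
print(fw.run("VM0", [4.98, 5.01, 5.00]))
```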

Fig. 1. Flow chart of the development of a reconfigurable test system

The composition of a typical reconfigurable test system is shown in Fig. 2. A complete reconfigurable test system should include the following functional modules: reconfigurable configuration module, real-time processing module, multi-source data acquisition, simulation excitation module, and data analysis module [2].


Fig. 2. Composition of a reconfigurable test system

3 Hardware Composition

The reconfigurable test system consists of four parts of hardware: the PXI platform, the CompactRIO module, the external interface and signal conditioning module, and the object under test (Fig. 3).

[Figure: PXI platform (LabVIEW), CompactRIO (LabVIEW RT), signal conditioning, aircraft under test]

Fig. 3. Hardware composition diagram

PXI is chosen as the core module of the manned spacecraft reconfigurable test system platform. The PXI platform (PCI eXtensions for Instrumentation) combines the electrical bus characteristics of PCI with the ruggedness, modularity and Eurocard mechanical packaging of CompactPCI, and has developed into a set of mechanical, electrical and software specifications suitable for test, measurement and data acquisition applications [3]. CompactRIO is chosen as the front-end module of the in-the-loop system. CompactRIO is a rugged, reconfigurable embedded system mainly composed of three parts: a real-time controller, a reconfigurable FPGA and industrial-grade I/O modules.

First, the transfer module and jumper module are designed according to the various connectors of the tested object, to facilitate plugging the device's proprietary connectors and carrying the electrical signals, and the corresponding signals are routed into the corresponding conditioning modules. The signal conditioning module is then designed: on the input side, it adjusts the analog voltage and current signals of the external device into the range that the sampling circuit can accept, and performs level conversion on the digital signals of the external device; on the output side, it amplifies and adjusts the output voltage and current signals so that they can drive the corresponding circuits of the external devices, and performs level conversion on the output digital signals in accordance with the external interface electrical protocol.


4 Software Design

In the overall design of the virtual instrument, the reconstruction function module is the core part of the virtual instrument. The reconstruction process of the entire instrument is displayed in the reconstruction module. All the functional instruments in the instrument function module can be added, modified and deleted in the instrument configuration module. There are sub-modules such as the physical channel selection module, instrument configuration module, instrument trigger setting module and data analysis module in the reconstruction function module.

4.1 Physical Channel Selection Module

In the physical channel selection module, the number of channels, channel names and channel IDs of the data acquisition card are determined, and the acquisition range, sampling frequency and buffer size of the data acquisition card are set.

4.2 Instrument Configuration Module

All instrument function modules must be configured in the instrument configuration module, where the parameters of each instrument used are set and the instrument ID and physical channel are assigned. For instruments with the same function, the parameter settings are independent of each other because the instrument IDs are different; therefore, different functions of the same instrument can be applied to different channels.

4.3 Instrument Trigger Setting Module

There are many similarities between the instrument trigger setting module and the instrument linkage setting module. The trigger module inherits from the linkage module and is reconstructed on its basis. In the instrument trigger setting module, different trigger combinations are distinguished by the trigger ID, and the instrument trigger condition, trigger initial value, trigger channel and trigger sequence are set.

4.4 Data Analysis Module

The instrument functions in the instrument function module are all independent of each other. The instrument function modules include the voltmeter module, ammeter module, ohmmeter module, frequency meter module, spectrum analyzer module, oscilloscope module, signal generator module and counter module. Data analysis displays the data to be analyzed in the form of curves.

4.5 Main Software Design

The PXI platform runs the LabVIEW application, which configures and monitors the data flow and test process of the entire system in the loop. VeriStand is a configuration-based real-time test software. Its internal engine is the execution core that controls the communication timing between the entire system, the execution host and the user interface; it is responsible for executing model operation, hardware I/O and the system definition file [4]. The system definition file contains the setting options for executing VeriStand engine tasks, such as target configuration, hardware I/O configuration, model import, interface mapping and custom device import. In the software, the start and end of the simulation test are controlled through shared variables, and the data in the lower computer is recorded with global variables and then uploaded to the upper computer through the network. All three parts of the program adopt the state-machine pattern, which facilitates software upgrade and maintenance. The LabVIEW monitoring software is mainly divided into two parts: simulation process monitoring and viewing of simulation data. Simulation process monitoring includes functions such as parameter loading, simulation control, real-time parameter monitoring and driver input during the simulation, and the simulation mode, shift strategy, simulation time, initial state, road adhesion, etc. can be configured, so that various situations can be simulated conveniently and flexibly (Fig. 4).

Fig. 4. Software structure diagram

CompactRIO runs the LabVIEW RT real-time operating system, and applications developed on LabVIEW RT can realize precise timing and respond to various events in time. The scheduling mechanism of the operating system ensures that high-priority tasks are executed first, and tasks are ordered by a combination of preemptive and time-sliced (round-robin) scheduling. When a low-priority thread is executing and a high-priority thread requires processor time, the low-priority thread stops running to ensure that the high-priority thread runs; when threads of the same priority execute, time-slice round-robin ordering allocates equal processing time to the threads. Task scheduling that mixes preemption and time-sliced loops ensures that LabVIEW RT is time deterministic and reduces timing jitter. Logic that requires parallel execution, repeated execution and data processing runs on the LabVIEW FPGA module, which ensures the parallelism of the various logic paths, achieves loop rates above the MHz level, and completes a large amount of data filtering, data framing and other functions.

5 System Verification

A test simulation system based on reconfigurable technology is used for the simulation verification of a certain aircraft information interface. During environment construction, the rear PXI platform is connected with the front CompactRIO and signal conditioning modules, and electrically connected with the real electronic equipment installed on the corresponding structure. The built-in power supply provides normal power for the tested equipment. According to the agreed electrical protocol, CompactRIO controls the corresponding module to send digital signals, completes the protocol handshake with the measured object, and starts data communication. The various analog and digital signals that indicate the state of the measured object are collected by the front module through the electrical interface, sent to CompactRIO for processing after signal conditioning and filtering, and the data is displayed and stored on the PXI platform. Simulated data injection is sent to the device under test and its response is observed through status monitoring; fault simulation and fault injection are performed and the response of the device is observed from the input interface, so that the real interface between the simulated device and the device under test is emulated. After testing and verification, the system can effectively cover the various interfaces of the tested object, cover various normal and fault conditions, comprehensively cover the information flow and status telemetry, and can be quickly reconfigured to support another aircraft's information interface.

6 Conclusion

This article discusses the reconfigurable design method for interface verification between aircraft from the two aspects of hardware and software. Through the reconfigurable design of hardware and software, a variety of verification combinations can be flexibly matched and the verification efficiency is high. Through system design and environment construction, a reconfigurable manned spacecraft interface verification system has been built. Simulation verification shows that the system can simulate real equipment, can be configured quickly, can simulate a variety of spacecraft interfaces, and allows failure modes to be set. The establishment of a unified interface reduces the difficulty of secondary development and significantly reduces the development cost, and solves the problems of high production and maintenance costs and wasted resources caused by the complexity of test objects, the multiplicity of test equipment, and the low utilization of test resources. The system has significant application value for the development of large-scale space rendezvous and docking spacecraft and the verification of assembly operation technology, and it can also be applied to the rapid verification of low-cost commercial aircraft.

References
1. Ya-nan, S., Xin-ying, T., Kai-heng, X., Zhi, L.: Spacecraft simulation and test integrated system. Spacecraft Eng. 18(1) (2009)
2. Sen, L.: Development of ATE reconfigurable instrument resources. Master's degree thesis of Harbin Institute of Technology (2013)
3. Ming-ze, G., Shao-hui, C.: PXIe bus reconfigurable test instrument design. Automation Instrumentation, no. 11 (2017)
4. Hong, L., Ling-song, H.: Dynamic reconfigurable virtual instrument technology. Instrument Technol. Sens. (2) (2012)
5. Jin-song, Y., Run-fa, Q., Xin-yu, Z.: Research on portable LXI field reconfigurable universal test platform. Comput. Measure. Control (4) (2014)
6. Lei, W.: Software design and implementation of reconfigurable acquisition equipment based on NIOS II. Master's degree thesis of University of Electronic Science and Technology of China (2017)
7. Da-yi, W., Yuan-yuan, T., Cheng-rui, L.: The connotation and research review of the reconfigurability of spacecraft control system. Acta Automatica Sinica 2017-10
8. Chao, P.: Development of PXI reconfigurable serial communication module. Master's thesis of Harbin Institute of Technology (2013)
9. Kun-ji, L.: FPGA dynamic reconfigurable technology and its application research. Master's degree thesis of Harbin Institute of Technology (2012)
10. Yuan, L.: Dynamic reconfigurable design of general-purpose bus based on FPGA. Master's thesis of Nanjing University of Aeronautics and Astronautics (2015)

A Dual Model Control of Sensorless High Frequency Permanent Magnet Synchronous Motor Based on the Space Vector Pulse Width Modulation and Six-Step Mode

Yanzhao He1,2(&), Zhenyan Wang3, Quanwu Wang1,2, Qingrui Bu1, and Jufeng Dai1,2

1 Beijing Sunwise Space Technology Ltd., Beijing 100190, China
[email protected]
2 Beijing Institute of Control Engineering, Beijing 100190, China
3 School of Electronic Information and Engineering, Taiyuan University of Science and Technology, Taiyuan 030024, China

Abstract. To effectively drive the sensorless high frequency permanent magnet synchronous motor (HF-PMSM) within the whole range of speed, the dual mode control of the ten pole pairs HF-PMSM is proposed. Based on the equivalent hardware configuration of the HF-PMSM drive, the typical space vector pulse width modulation for the sinusoidal-wave drive mode is utilized as the startup strategy. The sensorless variable voltage and variable frequency control algorithm is carried out simultaneously. The sensorless six-step square-wave drive mode based on three-phase inverter bridge pulse width modulation is utilized at the high frequency operation. The key rotor commutation position is detected with the phase back electromotive force method. The three phase voltages are obtained by the divider resistances and filter capacitances in the designed zero crossing point detection circuit. Then the commutation signals are detected by the designed detection circuit and software phase shift compensation. The simulation results show that the proposed control method is effective.

Keywords: Sensorless · High frequency permanent magnet synchronous motor (HF-PMSM) · Dual mode control

1 Introduction

The multipolar high frequency permanent magnet synchronous motor (HF-PMSM) is an important actuator motor in flywheels, control moment gyroscopes, military boosters and respirators, due to several distinct advantages [1]. The high commutation frequency and small resistance-inductance characteristics are not propitious to a sensorless multipolar HF-PMSM drive method over a wide speed range [2]. Consequently, sensorless multipolar HF-PMSM start-up control and high frequency operation is a challenging research topic.

To ensure that the sensorless motor can be started up successfully and operated reliably at high frequency, research on HF-PMSM sensorless control is necessary for further development of the HF-PMSM control algorithm. In the common three-step start-up method, the motor rotor position angle is obtained from the three-phase back electromotive force (BEMF) zero crossing point (ZCP) information [3]. Controlling the motor start-up current is beneficial to improving the motor start-up performance [4]. Based on the obtained commutation instant detectors, the six-step commutation mode was utilized to control the sensorless brushless DC motor, defined as the square-wave drive (SQUWD) method [5]. Generally, permanent magnet motors with nonideal BEMF are driven by the sinusoidal-wave drive (SINWD) method besides the SQUWD method in practical applications [6]. To start up the sensorless HF-PMSM smoothly and successfully, the SINWD can be studied. The field-oriented control (FOC) was adopted in the SINWD, and an enhanced startup performance was achieved in the low speed range [7]. To achieve reliable high frequency operation, a sensorless brushless DC motor position detection circuit was designed for the SQUWD commutation signals [8], and the compensation method was analyzed [9]. A trapezoidal and sinusoidal compound control strategy was utilized to drive the permanent magnet motor and a smooth transition scheme was proposed [10].

Inspired by the above valuable researches, a hybrid drive method combining the FOC and the six-step mode is studied to drive the sensorless HF-PMSM over the whole speed range. The phase BEMF zero crossing point detection circuit and the software phase shift compensation are designed for the six-step mode. The organization of the HF-PMSM control method study is as follows. The HF-PMSM mathematical model and drive system are described in Sect. 2, and the dual mode control method is analyzed in particular. In Sect. 3, the commutation point estimation based on the BEMF ZCP detection circuit and compensation algorithm is introduced. The simulation verification is reported in Sect. 4. Finally, Sect. 5 draws the conclusions of the proposed dual mode control method.

2 Mathematical Model of High Frequency Permanent Magnet Synchronous Motor and Drive System

2.1 Mathematical Model of High Frequency Permanent Magnet Synchronous Motor

For the surface-mounted HF-PMSM, the mathematical description is given by the following phase voltage model in the a-b-c reference frame:

\begin{bmatrix} U_a \\ U_b \\ U_c \end{bmatrix} =
\begin{bmatrix} R_s & 0 & 0 \\ 0 & R_s & 0 \\ 0 & 0 & R_s \end{bmatrix}
\begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix} +
\begin{bmatrix} L_s & 0 & 0 \\ 0 & L_s & 0 \\ 0 & 0 & L_s \end{bmatrix}
\frac{d}{dt}\begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix} +
\begin{bmatrix} e_a \\ e_b \\ e_c \end{bmatrix}   (1)

where Ua, Ub and Uc are the HF-PMSM a-b-c axis voltages, Rs is the stator resistance, ia, ib and ic are the a-b-c axis currents, Ls is the phase inductance, and ea, eb and ec are the BEMFs.

The two-level DC/AC power topology is selected to design the HF-PMSM drive system. The lithium ion battery pack is utilized as the HF-PMSM drive system dc-link voltage. The capacitor, inverter, HF-PMSM, commutation circuit, driving circuits, operational amplifier circuit and microprocessor are configured as the main circuits in Fig. 1.

[Figure: dc-link (42 V–58 V) with capacitor, three-phase inverter bridge Q1–Q6 with driving circuits, HF-PMSM phase model (Rs, Ls, BEMF), commutation (ZCP) circuit, operational amplifiers, and microprocessor with PWM/CAP/A-D/SCI interfaces]

Fig. 1. Block diagram of a three-phase HF-PMSM two-level drive system

2.2 Sinusoidal-Wave Drive System Based on Vector Control Algorithm

Based on the three-phase HF-PMSM two-level drive system, the current control and space vector pulse width modulation (SVPWM) strategy is employed for the SINWD as shown in Fig. 2.

[Figure: current controller in the d-q frame with reference currents id*, iq*, reference speed ωr* integrated to the angle θ, αβ/dq, dq/αβ and abc/αβ transformations, and an SVPWM generator producing the driving signals S1~S6]

Fig. 2. Block diagram of the SINWD based on current control and SVPWM

The phase currents iabc in the a-b-c coordinate system are transformed to the α-β reference frame currents (iα and iβ) and the d-q reference frame currents (id and iq). The current control loop is carried out in the d-q coordinate system with the given electrical angle position θ, which is obtained by integrating the reference velocity ωr*. The specified values of the current controller are id* and iq*, and its output variables are vd and vq. Through the coordinate transformation between the rotating reference frame and the stationary reference frame, vα and vβ are obtained. The metal-oxide-semiconductor field effect transistor (MOSFET) driving signals S1–S6 are generated based on the SVPWM algorithm.
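A minimal numeric sketch of the coordinate transformations described above (Clarke a-b-c to α-β and Park α-β to d-q) is given below, assuming the amplitude-invariant scaling; it is an illustration only, not the implementation used in the paper.

```python
import numpy as np

def clarke(ia, ib, ic):
    """a-b-c -> alpha-beta (amplitude-invariant form)."""
    i_alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    i_beta = (1.0 / np.sqrt(3.0)) * (ib - ic)
    return i_alpha, i_beta

def park(i_alpha, i_beta, theta):
    """alpha-beta -> d-q using the electrical angle theta (the integral of wr*)."""
    i_d = i_alpha * np.cos(theta) + i_beta * np.sin(theta)
    i_q = -i_alpha * np.sin(theta) + i_beta * np.cos(theta)
    return i_d, i_q

# Example: balanced sinusoidal phase currents sampled at electrical angle theta
theta = 0.7
ia, ib, ic = np.cos(theta), np.cos(theta - 2 * np.pi / 3), np.cos(theta + 2 * np.pi / 3)
print(park(*clarke(ia, ib, ic), theta))   # approximately (1.0, 0.0): pure d-axis current
```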

2.3 Square-wave Drive System Based on Six-Step Mode

For the SQUWD, the 120-degree commutation control, i.e., the six-step mode, is introduced to drive the motor. Meanwhile, the motor phase voltage is adjusted according to the speed error through the pulse width modulation (PWM) duty cycle. The inverter is a 3-phase full bridge, and both the upper and lower inverter bridge arms are driven by PWM, i.e., the so-called HPWM_LPWM modulation strategy. The conduction from the U phase to the V phase is expressed as U+V-. The conduction sequence is U+V-, U+W-, V+W-, V+U-, W+U-, and W+V-. The corresponding reversal sequence is W+V-, W+U-, V+U-, V+W-, U+W-, and U+V- under the same commutation signals' control.
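For illustration, the conduction sequence can be written as a lookup table indexed by the 60° sector, as sketched below; the mapping of U, V, W to the switch pairs Q1/Q4, Q3/Q6, Q5/Q2 follows the inverter layout of Fig. 1 and, like everything else in the sketch, is an assumption rather than the paper's code.

```python
# Six-step (120-degree) commutation sequence for forward rotation.
# Each entry: (conducting phases, upper switch, lower switch) under the HPWM_LPWM scheme,
# assuming U/V/W map to phases A/B/C with upper/lower switches Q1/Q4, Q3/Q6, Q5/Q2.
SIX_STEP_FORWARD = [
    ("U+V-", "Q1", "Q6"),
    ("U+W-", "Q1", "Q2"),
    ("V+W-", "Q3", "Q2"),
    ("V+U-", "Q3", "Q4"),
    ("W+U-", "Q5", "Q4"),
    ("W+V-", "Q5", "Q6"),
]

def commutate(sector: int, reverse: bool = False):
    """Return the conducting phases and switch pair for a 60-degree sector (0..5)."""
    seq = list(reversed(SIX_STEP_FORWARD)) if reverse else SIX_STEP_FORWARD
    return seq[sector % 6]

print(commutate(0))          # ('U+V-', 'Q1', 'Q6')
print(commutate(0, True))    # reversal sequence starts with W+V-
```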

3 Back Electromotive Force Zero Crossing Point Detection and Phase Shift Compensation

The back electromotive forces of the utilized HF-PMSM are designed as a nonideal trapezoidal waveform with a flat-top width close to 10°. The sensorless six-step commutation mode algorithm based on the detection of the BEMF ZCPs is therefore suitable for the utilized permanent magnet motor. The sensorless commutation can be realized by the ZCP detection circuit of the phase BEMFs, and the six phase BEMF ZCPs correspond to the six commutation points of the six-step mode. The diagram of the sensorless ZCP detection circuit of the phase BEMFs is shown in Fig. 3.

[Figure: per-phase divider resistors and low-pass filter capacitors, voltage comparators (LM139) referenced to the virtual neutral VN1, optocouplers, Schmitt inverters and a logic level translator feeding the ZCP1,2,3 signals to the DSP]

Fig. 3. Schematic diagram of the BEMF ZCPs detection circuit


The passive low-pass filter (LPF) circuit is utilized for the ZCP detection. The LPF and zero-crossing comparator circuits of the A-B-C three phases are designed with the same circuit topology and parameters. The divider resistors R11, R12 and R13 are used for the A phase voltage, and the filter capacitors C11 and C12 are chosen accordingly. The phase shift α1 is caused by the resistance-capacitance circuit:

α1 = tan⁻¹[(C11 + C12) · R13 · (R11 + R12) · ωr / (R13 + R11 + R12)]   (2)

As illustrated in Eq. (2), the phase shift angle is related to the motor velocity and to the resistance and capacitance parameters, so real-time compensation is carried out to achieve optimal commutation. The BEMF ZCP signals are captured by the digital signal processor (DSP) capture port, and the motor speed is observed by calculating the frequency of the captured signal. According to Eq. (2) and the 30° inherent phase shift between a ZCP and the corresponding commutation point, the optimal commutation instant is determined by the DSP timer; the 60° spacing between commutation points is given by the six-step commutation sequence. When the LPF cut-off frequency is low, 90° is used as the inherent phase shift, and a multiple of 60° is applied according to the parameters of the ZCP detection circuit.
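A minimal numeric sketch of this compensation is given below: it evaluates the filter lag of Eq. (2) at the current electrical angular velocity and converts the remaining angle (wrapped by the 60° commutation interval) into the delay loaded into the DSP timer; all component values and speeds are illustrative assumptions.

```python
import math

def lpf_phase_shift(r11, r12, r13, c11, c12, w_e):
    """Phase lag alpha_1 of the divider/filter network at electrical angular velocity w_e (Eq. (2))."""
    return math.atan((c11 + c12) * r13 * (r11 + r12) * w_e / (r13 + r11 + r12))

# Illustrative component values (ohms, farads); the real design values are not given in the text.
R11, R12, R13 = 20e3, 20e3, 10e3
C11, C12 = 2.2e-9, 2.2e-9

pole_pairs, speed_rpm = 10, 8000
w_e = 2 * math.pi * pole_pairs * speed_rpm / 60            # electrical rad/s
alpha1 = math.degrees(lpf_phase_shift(R11, R12, R13, C11, C12, w_e))

inherent_shift = 30.0                                       # deg between a ZCP and its commutation point
delay_deg = (inherent_shift - alpha1) % 60.0                # wrap by 60 deg if the filter lag exceeds it
delay_s = math.radians(delay_deg) / w_e                     # timer delay programmed in the DSP
print(f"alpha1 = {alpha1:.1f} deg, delay = {delay_s * 1e6:.1f} us")
```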

4 Simulation and Analysis

To verify the effectiveness of the proposed sensorless HF-PMSM dual model control method, simulations are implemented in this section.

4.1 Simulation Setup

The proposed dual model control method is simulated in Matlab (R2014a)/Simulink (version 8.3). The sensorless variable voltage and variable frequency control method is utilized for the HF-PMSM startup, the six-step mode 120° commutation is employed for high frequency operation, and the two sensorless methods are switched according to the transition logic. The measured HF-PMSM parameters and values are shown in Table 1.

Table 1. The tested HF-PMSM parameters and values.

Parameter          Symbol   Value   Unit
Rated speed        ω_N      8000    r/min
Bus voltage        V_bus    42–58   V
Pole pairs         P        10      /
Phase resistance   R_S      4.35    mΩ
Phase inductance   L_S      3.39    μH

4.2 Simulation Results

The simulation results for the sensorless HF-PMSM dual model method are illustrated in Figs. 4 and 5. The target motor speed is set as 8000 r/min, and the load is given as 2 Nm. Figure 4 depicts the dynamic transition during the acceleration from zero to 8000 r/min. The three phase current amplitudes are 50 A, and the transition occurs at approximately 0.18 s, where the sinusoidal current changes to a square current and the mode transition is carried out at about 1400 r/min. According to the simulation results, the transition response at the transient-state speed is satisfactory, with an excellent speed change.

[Figure: phase currents ia, ib, ic (60 A/div) and speed (2000 r/min/div) around the SINWD-to-SQUWD transition near 1400 r/min, time axis 200 ms/div]

Fig. 4. Phase current and speed change transient response of dynamic transition during the acceleration from the zero to the 8000 r/min

Before compensation, the commutation is leading. After the compensation is applied at 1.8 s, the phase current amplitude is reduced from 150 A to 130 A, as shown in Fig. 5. Meanwhile, the tested speed overshoot amplitude is about 70 r/min, and the response time is approximately 0.01 s. It can be noted that the proposed method achieves a high accuracy of the commutation signal adapted to the nonideal BEMF: the phase current is reduced at the same motor speed based on the compensated sensorless signals. During deceleration, a simple transition from the SQUWD to free coasting (parking) is utilized, which is not described in detail here.

[Figure: phase currents ia, ib, ic (60 A/div) and speed (2000 r/min/div) before and after compensation around 1.8 s, time axis 200 ms/div]

Fig. 5. Phase current and speed change transient response before and after compensation at the 8000 r/min

5 Summary

In this paper, the sensorless dual mode control method aimed to drive the ten pole pairs HF-PMSM effectively within the whole range of speed is proposed. The sensorless control method is executed based on the equivalent drive system hardware configuration. The sensorless variable voltage and variable frequency control and the typical SVPWM algorithm for the sinusoidal-wave drive mode are utilized to start up the HF-PMSM. A proper transition is carried out, and the sensorless six-step mode is used at high frequency operation. The phase BEMF ZCP detection circuit and software phase shift compensation algorithm are designed to obtain the commutation signals, with the divider resistances and filter capacitances as the key components in the designed ZCP detection circuit. The effectiveness of the HF-PMSM control method and sensorless method transition logic has been verified through Matlab simulation results. A simulation of the dual model control during the acceleration from zero to 8000 r/min is carried out, and the phase current amplitude is improved at the same motor speed with the compensated sensorless commutation signals. The HF-PMSM entire speed range drive is achieved effectively according to the proposed dual mode method. The proposed method in this paper is undoubtedly a meaningful attempt for sensorless HF-PMSM control, and would have a good application in practice.


References
1. He, Y.Z., Wang, Z.Y., Zheng, S.Q.: Decoupling control of high speed permanent magnet synchronous motor based on online least squares support vector machine inverse system method. Proc. CSEE 36(2), 5639–5646 (2016). (in Chinese)
2. He, Y.Z., Zheng, S.Q., Fang, J.C.: Start-up current adaptive control for sensorless high-speed brushless DC motor based on inverse system method and internal mode controller. Chin. J. Aeronaut. 30(1), 358–367 (2017)
3. Liu, G., Cui, C., Wang, K., et al.: Sensorless control for high-speed brushless dc motor based on the line-to-line back EMF. IEEE Trans. Power Electron. 31(7), 4669–4683 (2016)
4. Wang, Z.Y., He, Y.Z.: A new initial rotor angle position estimation method for high-speed brushless direct current motor using current injection and mathematical model. J. Dyn. Sys. Measure Con. Trans. ASME. 140(7), 1–8 (2018)
5. Zhang, L.H., Yang, T.T., Wang, R.G., et al.: Detection and correction of commutation position for sensorless BLDCM. Micromotors 52(10), 60–65 (2019). (in Chinese)
6. He, Y.Z., Wang, Z.Y.: High performance control of high-speed permanent magnet synchronous motor based on least squares support vector machine inverse system method. Electric Power Comp. Syst. 45(1), 99–110 (2017)
7. He, Y.Z., Wang, Z.Y., Wang, J.X., et al.: Speed observation for high-speed permanent magnet synchronous motor with model reference adaptive system. Electric Drive 50(10), 16–22 (2020). (in Chinese)
8. Zhou, X.Y., Chen, S.H., Song, Y.M., et al.: Analysis of position-sensorless detection circuit for brushless DC motor. Micromotors 52(2), 62–65 (2019). (in Chinese)
9. Zhao, Y.L., Ma, L.J., Sun, W.S., et al.: Research on non-position sensor commutation compensation for brushless DC motor. J. Beijing Inf. Sci. Technol. Univ. 35(3), 67–73 (2020). (in Chinese)
10. He, Y.Z., Wang, Z.Y., Wei, D.Z., et al.: A dual-model operation and smooth transition control of sensorless high-speed permanent magnet synchronous motor. In: Proceeding of the 36th Chinese control conference (CCC2017), pp. 1022–1027 (2017)

Development Mode of Remote Sensing Satellite OBDH Software Based on Prototype Software Dong Zhenhui1(&), Zhang Yahang1, Wang Xianghui1, Yang Peiyao1, Zhang Hongjun1, Yuan Yong2, and Liu Yiming1 1

Institute of Spacecraft System Engineering, Beijing, China [email protected] 2 Institute of Remote Sensing Satellite, Beijing, China

Abstract. In the field of remote sensing satellites, the common framework of OBDH software requirements is similar from satellite to satellite, but there are many subtle differences between them. Although some reusable components were accumulated earlier through software componentization and common requirements design, the overall software framework and the many modules that cannot be fully reused are still difficult to accumulate and inherit continuously as organizational assets. This paper analyzes the difficulties and problems of the current software development mode, puts forward a remote sensing satellite software development mode based on prototype software, designs the prototype software to meet the different needs of several satellites, and defines the reuse and improvement principles. Application on several satellites shows that this mode can effectively improve the development efficiency of common-function software in the remote sensing field.
Keywords: Prototype software · Remote sensing satellite · OBDH · Software

1 Introduction
Spacecraft OBDH software is a large embedded software system with high safety and reliability requirements, and it is the management core of the information flow and control flow of the whole satellite [1, 2]. During software development in the remote sensing field in China, a software version with basic functions (such as telemetry, telecommand and satellite data service) usually has to be delivered quickly for the early electrical testing of the platform subsystem, so the development cycle of OBDH software is often very tight. By defining common OBDH software requirements and designing software components, software reuse and development efficiency can be improved to a certain extent. However, because tasks differ, there are still many different requirements in the remote sensing field. Although the basic functions of the OBDH software of each satellite are similar, problems such as repeated design, lack of continuous improvement and difficulty in accumulating good designs persist because of the lack of a unified design protocol. At the same time, personal improvement efforts without organizational management can easily lead to over-design and over-complexity of the software, which leads to the problem of the “second-system



effect”. The software becomes more and more complex and bloated, and it is difficult for subsequent satellites to inherit; it can only be rewritten, which wastes a great deal of resources and seriously affects the progress and quality of OBDH software development. This paper analyzes the problems and causes of the current development mode, puts forward a localized development mode based on prototype software, implements and verifies it on several satellites, and analyzes the implementation effect.

2 Analysis of Traditional Development Mode
2.1 Waterfall Model

According to software engineering management requirements, the Waterfall Model is generally adopted in the life cycle of software in the remote sensing field [3]. There are generally two ways to quickly deliver the first version of basic-function software for satellite testing. The first applies to satellites with good inheritance: the software can be trimmed or modified from the inherited satellite. The second applies to completely new developments: the simplest software framework is built first, and then mature code is copied in (possibly with modifications) while new code is written at the same time. The common feature of both approaches is the need for a software object that can be compiled and debugged, but the first method inherits code that is unnecessary for the new satellite, and the second method spends extra time adding code that does not need to be changed at all. Software in the remote sensing field is relatively mature, and the division of functional modules seems clear. However, during development, detailed business logic inevitably becomes dispersed across the software, resulting in implicit coupling. The simple software framework lacks strict specification requirements; although this leaves high flexibility for software design, it causes design divergence among multiple satellites and fragments the program architecture. Without constant refactoring and adjustment, the software becomes increasingly complex and bloated. In development based on the Waterfall Model, reusable software components or excellent designs often appear, but these experiences are difficult to maintain and inherit as organizational assets, for several reasons. First, only the requirements of the current satellite are considered rather than the whole field, so a design that is excellent for one satellite is not fully reusable by others. Second, understanding the previous code design and accepting its design style is not mandatory, and some designers prefer to reinvent the wheel instead of refactoring continuously. Third, there is no tool carrier for collaborative development of software frameworks across satellites, so good designs are hard to accumulate across satellites, differences in requirements cannot be taken into account, and continuous unified design is difficult to carry out.

2.2 Software Components

Typical software components of the OBDH subsystem are implementations of the CCSDS telemetry and telecommand standards [4, 5]. There are two limitations in practice: first, components require designers to use them in strict accordance with the component specifications; second, the functions completed by components account for a low proportion of the whole. Beyond the recognized standard protocols, there are obstacles to increasing the number of components and their share of the total code. The fundamental reason is that differentiated requirements between satellites objectively exist. For example, it is difficult to unify the software requirements for autonomous thermal control management and autonomous energy management. The most common situation is that part of the software interfacing with the OBDH subsystem is mature inherited software, and unifying the OBDH software requirements would force that mature software to change accordingly. Therefore, software requirements of this kind, which have similar functions but whose differences between satellites cannot be forcibly eliminated, are difficult to turn into software components. In the process of unifying the requirements of the common basic functions, the software requirements standard does not have much binding force, so the actual software design often has to accommodate the non-unified requirements. Unified promotion of software requirements is difficult, and the difficulty of unifying the software design at the lower level is multiplied.

3 Development Mode Based on Prototype Software
3.1 Definition of Prototype Software

The characteristics of OBDH software in the remote sensing field are that there are many satellites, good software continuity and certain platform characteristics. From a macro point of view, there is little difference in the common basic functions of different satellites, but there are many changes in the details, and resources are wasted on repeated development. Therefore, there is great room for optimizing software development in the remote sensing field, and results are relatively easy to achieve. The prototype-software development mode is an attempt and exploration in remote sensing software development in recent years. The concept of the prototype software proposed in this paper is as follows: it is not a directly deliverable version of the software, and some functions still need to be customized and modified, but it is already a complete piece of software that can be compiled and run directly. The differences between prototype software and components are as follows. First, prototype software is a complete piece of software (including some already assembled components) that can be modified, compiled and run directly on a hardware device. In software engineering management, document writing and verification testing take up a large amount of work, so the prototype software contains not only code but also the development document template


and test case library. Both are managed synchronously as part of the prototype software. Second, the modification and maintenance of prototype software is similar to open-source code: the original intention is not to remain unchanged, but to encourage everyone to participate in modifying and improving the prototype code and to continuously absorb the excellent designs produced during the development of the various satellites. In addition, some functionality is bound to change, so the prototype software framework provides a friendly modification interface in the form of extracted configuration parameters.
3.2 Development Mode

The essence of the prototype-software development mode is to decompose the complexity of the problem domain and to solve the problems of the traditional development mode one by one, using the ideas of software platformization, process control and feedback improvement [6]. The development mode considers the following four aspects:
Standard: The common software requirements standard of the field is first developed according to previous satellite practice. This standard unifies the common software requirements at the organizational level, guides the design of software components, and constrains the software requirement design of each satellite.
Iteration: During the application of the prototype software, developers feed excellent or improved designs back to the organization, and the prototype software is used as the tool carrier for continuous accumulation, iteration and release, so that the main-line design remains optimal.
Reuse: Based on the high-quality prototype code framework, document template and test case library, the software of every satellite can reuse these organizational assets throughout the development cycle, reducing the workload of writing the same or similar functional code and documents.
Controlled: Additions, modifications and deletions to the prototype code framework, document template and test case library are verified and confirmed at the organizational level, and configuration management is carried out in accordance with software engineering requirements.
The flow chart of the development mode based on prototype software is shown in Fig. 1.
3.3 Restrictions

Restriction 1: The purpose of developing the common template is to control divergence and reduce branching. The inherited baseline of a new satellite is no longer the previous satellite but a version of the prototype software. The workload for the early satellites is relatively heavy, and the common software framework and document template need to be improved continuously.
Restriction 2: The selection of equipment for the OBDH subsystem affects the unification of software requirements and design. For example, it is found that the choice


Fig. 1. Flow chart of development mode based on prototype software

of RTU [7] has a great influence on the software. The prototype software needs to consider several different situations, such as a single RTU, DRTU together with SRTU, DRTU without SRTU, and so on.
Restriction 3: At present, the functions of the prototype software are limited to those of the OBDH subsystem. The requirements for thermal control management, health management and task planning are highly customized for each satellite, and a set of common design criteria cannot yet be abstracted at the system level, so it is difficult to integrate them into the prototype software for now.

4 Satellite Implementation and Analysis
4.1 Satellite Implementation

The prototype-software development mode for remote sensing satellites is divided into two phases.
Fast phase: common requirements writing, software programming and testing; this phase is completed quickly. Because the OBDH software is a necessary condition for spacecraft testing and also supports testing between subsystems, delivering the first version quickly is important.
Slow phase: specific requirements writing, coding and testing; this phase is completed slowly. The delivery of specific requirements is usually less urgent, determining the requirements takes a long time, and the corresponding spacecraft test time is relatively late.
In this paper, four satellites that have completed the software development process are selected for comparative analysis, as shown in Table 1.


Table 1. Four satellites selected for comparative analysis
Satellite    Type
Satellite 1  This mode is not applied
Satellite 2  The first satellite to use the prototype software
Satellite 3  Satellite for promotion and application
Satellite 4  Satellite for promotion and application

The total number of lines of code is denoted as A, the number of lines of code of specific requirements is denoted as B, and the number of lines of code required for mass configuration is denoted as C. The percentage of lines of common function was analyzed, and the statistical results are shown in Table 2.

Table 2. Percentage of lines of common function
Satellite    Total lines of code (A)  Code of specific function (B)  Code required for mass configuration (C)  Percentage of lines of common function
Satellite 1  38083                    6100                           6700                                       84%
Satellite 2  31763                    3500                           7000                                       89%
Satellite 3  35811                    6300                           6900                                       82%
Satellite 4  37553                    7000                           7300                                       81%

Two things should be considered in the statistical process. First, the percentage of lines of code is not equivalent to the percentage of workload: the actual workload must also take into account the cost of coordinating requirements and verifying tests, so the workload share of the specific functions is much higher than their share of code lines. Second, the code other than the specific functions is not only the logical code of the common functions (the common software framework and software components); the rest is the code required for mass configuration (such as macro definitions for telemetry parameters and commands).
4.2 Workload Analysis

Considering the above two points, calculating the percentage of the workload of the specific functions in the whole software requires some equivalent processing.


The term “common logic code” is defined as the workload reference, and the code required for mass configuration is multiplied by a coefficient (0.25, according to experience) and converted into common logic code. According to experience, the time taken to complete the requirement coordination, coding and test confirmation of software with specific requirements is about 4–6 times that of common logic code; after conversion into common logic code it is therefore multiplied by a coefficient η (η is determined by the requirement difficulty, the number of changes, the difficulty of testing, whether it is applied for the first time, and other factors). Let the equivalent total lines of code be NUMtotal and the equivalent lines of specific-requirement code be NUMspec. The calculation formulas are:

NUMtotal = A − C + 0.25·C − B + B·η = A − 0.75·C + B·(η − 1)   (1)

NUMspec = B·η   (2)
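As a quick illustration of Eqs. (1)–(2), the short Python sketch below (not part of the original paper) applies them to the Satellite 1 figures from Table 2 with the η value from Table 3; the function name and the printed rounding are illustrative only.

def equivalent_workload(A, B, C, eta):
    """Apply Eqs. (1)-(2): equivalent total lines and common-function workload share."""
    num_total = A - 0.75 * C + B * (eta - 1)   # Eq. (1)
    num_spec = B * eta                          # Eq. (2)
    common_share = (num_total - num_spec) / num_total
    return num_total, common_share

# Satellite 1 figures: A = 38083, B = 6100, C = 6700 (Table 2), eta = 2.5 (Table 3)
total, share = equivalent_workload(38083, 6100, 6700, 2.5)
print(total, round(share * 100))   # about 42208 lines and a common-function share of about 64%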

The percentage of workload should be the percentage of lines of code after equivalence. It has more practical reference value, as shown in Table 3.

Table 3. Percentage of workload of common function
Satellite    Coefficient η  NUMtotal  NUMspec  Percentage of workload of common function
Satellite 1  2.5            42208     26958    64%
Satellite 2  3.6            35613     23013    65%
Satellite 3  4.8            54576     24336    45%
Satellite 4  4.7            57978     25078    43%

According to the analysis in Fig. 2, the development of common requirements for Satellite 1 accounts for more than 60% of the workload. For Satellite 2, the first application of the prototype software, the proportion of common requirements development was similar to that of Satellite 1 because the prototype software framework itself had to be developed. For Satellite 3 and Satellite 4, the subsequent promotion and application satellites, the proportion of common requirements development decreased significantly and is no longer the main workload. It should be noted that the percentage of workload is not the same as the percentage of time in the software development cycle. Compared with specific requirements, the requirements of the common functions are relatively stable, and their development time is concentrated and controllable, so the development cycle is relatively short. The specific requirements, however, have a long design cycle and a less urgent satellite-test deadline, so their development cycle is longer and is concentrated mainly in the late stage of software development.


Fig. 2. Comparison of lines of code and workload of common function

5 Conclusion
Based on software development practice in the remote sensing field in recent years, this paper proposes a localized solution to practical problems. On the basis of satisfying the different requirements of different satellites, a development mode based on prototype software is proposed, which frees the development of OBDH common-function software from differences in detail and enables software designers to focus more on the specific requirements of each satellite. Satellite practice shows that this development mode can effectively improve the development efficiency of common-function software in the remote sensing field.

References
1. Zhenhui, D., Xianghui, W., Qiang, M., Li, P., Yongquan, W.: Multi-partition boot design and verification of GF-3 satellite CTU. Spacecraft Eng. 26(6), 126–131 (2017)
2. Zhang, Y.H., Yuan, J., Yu, J.H., Zheng, G.: Design of spacecraft OBDH software framework based on onboard route. Spacecraft Eng. 24(6), 70–74 (2015)
3. Jian, G., Lin, L., Junna, W.: Research on CMM application in spacecraft software development. Spacecraft Eng. 23(6), 75–79 (2014)
4. Zhang, Y.H., Guo, J., Yu, J.H.: Research of on-board software components based on non-object oriented language. Chin. Space Sci. Technol. 35(4), 37–45 (2015)
5. Zhang, Y.H., Yuan, J., Guo, J.: Design of reconfigurable general TM based on software components. Spacecraft Eng. 22(4), 62–67 (2013)
6. Fu, C.Y., Wen, Z.J., Sheng, H.Y., Zhong, D.G.: A paradigm of software development for mass customization. J. Comput. Res. Dev. 39(5), 593–598 (2002)
7. Zhenhui, D., Qiang, M., Liang, M., Hongjun, Z., Rui, Z., Yong, Y.: RAM fault tolerant design of 1553B bus chip for HXMT satellite. Spacecraft Eng. 27(5), 107–113 (2018)

Design of Closed-Loop Tracking Mode for Remote Sensing Satellite Data Transmission Antenna Yong Yuan1(&), Yufei Huang1, Xinyu Yao1, Xinwei Zhang1, and Zhenhui Dong2 1

2

Institute of Remote Sensing Satellite, CAST, Beijing, China [email protected] Institute of Spacecraft System Engineering, CAST, Beijing, China

Abstract. The remote sensing satellite data transmission antenna is used to track the ground receiving station in orbit and to transmit the radio-frequency signal carrying payload data to the ground. The tracking accuracy of the data transmission antenna with respect to the ground receiving station directly affects the smooth execution of the satellite mission. The imaging mission characteristics of remote sensing satellites are analyzed, and a closed-loop tracking mode of the data antenna based on angle and angular acceleration is designed. The data antenna controller receives the antenna pointing angle from the control computer, calculates the angular velocity needed to track the ground station according to the predetermined control algorithm, uses this angular velocity to control the antenna to track the ground station continuously, and periodically feeds the position angle and angular velocity of the antenna back to the control computer. Ground simulation results show that the tracking accuracy of the data transmission antenna using the closed-loop tracking mode meets the requirements of remote sensing satellites, laying a foundation for the design of inter-satellite data transmission links in the future.
Keywords: Data antenna · Closed-loop · Angle · Pointing · Tracking

1 Introduction
Remote sensing satellites are satellites that use remote sensing technology and remote sensing equipment to observe surface coverage and natural phenomena. With the development of practical Earth observation satellites in China in the 1980s, the use of satellite data antennas has also been studied. After decades of development, China has successfully launched Earth observation and remote sensing application satellites represented by Ziyuan No. 1, Ziyuan No. 3 and Haiyang No. 2, and has established a set of in-orbit application models for data antennas suited to the characteristics of remote sensing satellites. With the improvement of satellite image resolution, the raw data rate of satellite payload images is as high as several Gbit/s. The radiation characteristics of the data transmission antenna are therefore required to exhibit high gain



and narrow beam, and higher requirements are put forward for the accuracy of the antenna pointing and tracking the ground station.

2 Closed-Loop Tracking Scheme Design
Aiming at the flight attitude and working modes of remote sensing satellites, a data antenna tracking scheme with a two-dimensional tracking mechanism is designed. The two-dimensional tracking mechanism adopts an X–Y rotation system and is rotated according to the rotation angle information provided by the control computer, which allows the antenna beam to track the ground station.
2.1 Typical Data Transmission Mode

Remote sensing satellites mainly use the data transmission arc while passing over the ground station to download on-board image data. According to the working characteristics of the payload camera, remote sensing satellites generally have three data transmission modes.
(1) Real-time transmission mode: within the visible range of the ground receiving station, the satellite is in a normal or side-swing flight state, the payload camera is imaging, and the data transmission subsystem sends the received payload image data to the ground receiving station in real time.
(2) Playback mode: within the visible range of the ground receiving station, the satellite is in a normal flight state, the payload camera is not working, and the data transmission subsystem replays the image data stored in the solid-state memory to the ground receiving station.
(3) Memorizing and transmission mode: within the visible range of the ground receiving station, the satellite is in a normal or side-swing flight state, the payload camera is imaging, and the data transmission subsystem sends the received image data to the ground receiving station in real time while also sending it to the solid-state memory for storage.
2.2 System Composition

The data antenna closed-loop tracking system is composed of the satellite central computer, the control computer, the GPS receiver, the antenna controller and the data antenna. The central computer is responsible for dispatching and managing the flow of bus data during operation. The GPS receiver is responsible for providing satellite orbit data. The control computer calculates the rotation angle of the data antenna according to the


satellite orbit data and attitude data. The antenna controller drives the antenna to rotate based on the antenna rotation angle output by the control computer, and feeds the position angle of the antenna back to the control computer. Figure 1 shows the composition of the closed-loop tracking system of the data antenna.

Fig. 1. Block diagram of the closed-loop tracking system of the data transmission antenna

The control computer provides real-time satellite orbit data through the bus. According to the satellite's real-time attitude information, orbit information, data antenna status and ground station information, the X–Y axis position angle of the antenna for the end of the current beat is calculated within each antenna control cycle. The control computer can communicate with the two antenna controllers separately or simultaneously and sends the antenna rotation angle to the antenna controller. The antenna controller drives the antenna to rotate according to the target angle information and feeds the result of the antenna rotation back to the control computer. Asynchronous serial communication is used between the control computer and the antenna controller; the transmitted data include the frame head, rotation time, X-axis rotation and Y-axis rotation. To improve the reliability of transmission, check data can be added at the end of the transmitted data (Table 1).

Table 1. Data format for the control computer and the antenna controller
Frame head | Rotation time | X axis rotation | Y axis rotation | Check data

Control Process

When the control computer receives the command to enter the antenna tracking working mode, it starts a new task control cycle, and ends the task control until it receives the stop tracking command. The specific work flow is as follows: (1) The antenna controller is powered on before the control computer performs antenna control. When the central computer powers on the antenna controller or performs the


main backup setting, the corresponding identifier and the ground station information that needs to be tracked are sent to the control computer in the form of instructions; (2) The control computer immediately enters the tracking mode after receiving the command, and starts the control work with the data antenna; (3) The control computer calculates the antenna rotation angle information and sends it to the antenna controller periodically; (4) The antenna controller drives the antenna to rotate according to the angle of rotation, and returns the current state of the antenna to the control computer immediately after receiving the information of the angle of rotation of the antenna (information such as the position angle, the angle of rotation speed, etc.); (5) After the control computer receives the antenna status information, it calculates the antenna rotation angle for the next control period and sends it to the antenna controller; Antenna Control Command

The Central Computer Autonomously Calculates the Tracking Start Time According to the Instructions

N

Time to Start Tracking?

Y The Central Computer Forwards the T racking I nstructions to the Control Computer

Control the Computer to Start the Tracking Process

N

Time to Track?

Y The Central Computer Sends a Stop Command to the Control Computer and the Antenna Controller

End of Mission

Fig. 2. Closed-loop tracking process


(6) Before the end of the tracking task, the central computer sends tracking stop commands to the control computer and the antenna controller respectively. The control computer stops calculating and transmitting the antenna rotation angle after receiving the command, and the antenna controller stops the antenna rotation after receiving the command (Fig. 2).
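A minimal Python sketch of the periodic exchange in steps (2)–(5) is given below; the callable interfaces (compute_target_angles, send_angles, read_status, stop_requested) and the control-period value are hypothetical placeholders rather than the flight software interface.

import time

CONTROL_PERIOD_S = 0.5   # assumed antenna control period

def tracking_loop(compute_target_angles, send_angles, read_status, stop_requested):
    """One tracking task: command the antenna each control period until stopped."""
    status = None
    while not stop_requested():                        # step (6): stop command ends the task
        x_deg, y_deg = compute_target_angles(status)   # steps (3)/(5): target angle for this period
        send_angles(x_deg, y_deg)                      # step (3): send rotation command
        status = read_status()                         # step (4): position angle and rate feedback
        time.sleep(CONTROL_PERIOD_S)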

3 Ground Test Verification
According to the closed-loop tracking scheme of the data transmission antenna, different working conditions are designed to analyze the target angle calculation accuracy and the antenna tracking accuracy. The assumed working conditions are shown in Table 2.

Table 2. Closed-loop tracking conditions
Condition    Angle of roll  Ground station  Working mode
Condition 1  +32°           Sanya           Real-time transmission mode
Condition 2  −7°            Arctic          Playback mode
Condition 3  0°             Kashi           Memorizing and transmission mode

The orbital parameters used in the test are shown in Table 3.

Table 3. Instantaneous orbit parameters for testing
Coordinate system     J2000
Semi-major axis (km)  6880.0
Eccentricity          0.0
Inclination (°)       97.0

First, the tracking accuracy of the target angle is calculated and compared. In the process of comparison, STK is used to check the accuracy of the target angle. The theoretical point file of the GPS simulator is used as the external orbit input, and the theoretical attitude is used as the STK attitude input. The comparison results of target angle calculation accuracy are shown in Table 4.


Table 4. Comparison results of target angle calculation accuracy
Condition  Results
1          X-axis: 0.0097°, Y-axis: 0.0119°
2          X-axis: 0.0144°, Y-axis: 0.00516°
3          X-axis: 0.0441°, Y-axis: 0.1346°

By comparing the tracking angle output by the control computer with the actual rotation angle of the data antenna, the tracking accuracy of the data antenna is calculated and analyzed. The results are shown in Table 5.


Table 5. Results of tracking accuracy comparison
Condition  X/Y-axis tracking accuracy
1          X ≤ 0.01°, Y ≤ 0.025°
2          X ≤ 0.006°, Y ≤ 0.01°
3          X ≤ 0.011°, Y ≤ 0.011°

4 Conclusion
This paper describes the working process of data transmission antenna pointing and tracking during remote sensing satellite-to-ground data transmission. The satellite periodically calculates and controls the antenna rotation angle to realize continuous tracking of the ground station by the data transmission antenna during the pass. At the same time, a robust design is adopted for the antenna tracking mode to avoid task interruption or cable damage caused by communication failures while the antenna is in use. Ground tests have shown that the overall pointing accuracy of the data antenna toward the ground station in the master–slave tracking mode is better than 0.1°. While meeting the needs of remote sensing satellites, this also provides guidance for the design of inter-satellite links in the future.

References
1. Yunshang, Y.: Technical development of my country's remote sensing satellite data transmission antenna. In: Proceedings of the 2004 Academic Seminar of the Aircraft General Professional Committee of the Chinese Astronautical Society (2005)
2. Xiaodong, W., Jian, W., Yan, Z., Hongyu, Z., Xiaohu, W., et al.: Analysis and design of satellite-ground data balance of Ziyuan-1 02D satellite. Spacecraft Eng. 29(6) (2020)
3. Shasha, Z., Jin, H.: Research on autonomous mission planning and implementation strategies of low-orbit optical remote sensing satellites. In: Proceedings of the China Space Law Society 2012 Annual Conference (2012)
4. Li, L., Xiaosong, Z., Puming, H.: The development trend of satellite high-speed data processing and transmission technology. In: Proceedings of the 2011 Small Satellite Technology Exchange Conference, pp. 340–344. Aerospace Dongfanghong Satellite Co., Ltd., Beijing (2011)
5. Bo, P., Donghua, Z., Wenhua, S., et al.: Satellite antenna pointing mechanism pointing accuracy modeling analysis. Spacecraft Eng. 20(5), 49–54 (2011)
6. Chakravarthy, N., Xiao, J.: FPGA-based control system for miniature robots. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3399–3404 (2006)
7. Kovaleva, M., Zeb, B.A., Bulger, D., et al.: An extremely wideband Fabry-Perot cavity antenna for superfast wireless backhaul applications. In: Proceedings of the 17th International Symposium on Antenna Technology and Applied Electromagnetics (2016)
8. Chen, X.P., Wu, K., Han, L., He, F.: Low-cost high gain planar antenna array for 60-GHz band applications. IEEE Trans. Antennas Propag. 58(6), 2126–2129 (2010)
9. Yarr, N., Ceriotti, M.: Optimization of inter-satellite routing for real-time data download. IEEE Trans. Aerosp. Electron. Syst. 54(5), 2356–2369 (2018)
10. Tang, F., Zhang, H., Fu, L., et al.: Multipath cooperative routing with efficient acknowledgement for LEO satellite networks. IEEE Trans. Mob. Comput. 18(1), 179–182 (2019)
11. Zeng, H., Di Natale, M., Ghosal, A., Sangiovanni-Vincentelli, A.: Schedule optimization of time-triggered systems communicating over the FlexRay static segment. IEEE Trans. Industr. Inf. 7(1), 1–17 (2011)

High Accuracy Orbit Control Strategy of Chang'e-4 Relay Satellite Tao Zhang1(&), Rongxiang Cao1, Jianmin Zhou1, Jianzhao Ding1, Yifeng Guan1, and Weiguang Liang2 1

Beijing Institute of Control Engineering, Beijing 100190, China [email protected] 2 Beijing Aerospace Control Center, Beijing 100094, China

Abstract. In order to ensure the long-term stable operation of the satellite in orbit, the Chang'e-4 relay satellite places very high requirements on orbit control accuracy. Aiming at the velocity-increment error around the thrust direction, the quantization error of the sampling interval, the thrust error of the engine and the zero deviation of the accelerometer, this paper formulates strategies for thruster installation layout optimization, sampling quantization error suppression, thruster after-effect suppression and accelerometer zero-deviation suppression, respectively. The in-orbit control results show that the control strategy makes the orbit control accuracy exceed the target requirements and extends the satellite's in-orbit life at the same time.
Keywords: Chang'e-4 relay satellite · Halo orbit · High accuracy · Orbit control strategy

1 Introduction
The Chang'e-4 relay satellite belongs to the Chang'e-4 probe system [1, 2]. It is the world's first satellite orbiting the Earth–Moon L2 Halo orbit [3, 4]. The lander and rover of Chang'e-4 landed in the Aitken Basin near the south pole of the Moon, and the landing site is located on the far side of the Moon. Because the lunar rotation period is the same as its revolution period, a probe on the far side of the Moon can never communicate directly with a ground station. To solve this communication problem, the Chang'e-4 mission needed to launch a relay communication satellite orbiting the Earth–Moon L2 Halo orbit. On June 14, 2018, the relay satellite successfully entered the Halo mission orbit, laying a solid foundation for the success of the Chang'e-4 lunar exploration mission (Fig. 1). Because of the instability of the Halo orbit at the Earth–Moon L2 point [5–7], extremely high orbit control accuracy is required to ensure long-term stable operation in orbit: 1) In the midway correction and near-moon braking phases, when the orbit-control velocity increment is less than 10 m/s, the velocity-increment error along the thrust direction shall be no more than 0.1 m/s; when the increment is more than 10 m/s, the error shall be no more than 1% of the velocity increment. 2) In the L2 orbit maintenance phase, when the velocity increment is less than 10 m/s and after no fewer than two on-orbit calibrations, the thrust-direction




Fig. 1. Schematic diagram of Chang’e-4 relay satellite mission.

velocity-increment error shall be no more than 0.02 m/s. The Chang'e-4 relay satellite is equipped with 12 sets of 5 N thrusters and 4 sets of 20 N thrusters for orbit control; it also carries 6 accelerometers developed by the Beijing Institute of Control Engineering to monitor the acceleration during orbit control. In response to the mission requirements, the control subsystem designed a set of high-precision orbit control strategies covering four aspects: thruster installation layout optimization, sampling and quantization error suppression, thruster after-effect suppression, and accelerometer zero-deviation suppression. The actual on-orbit test results were compared with the mission requirements; the data show that the high-precision orbit control strategy adopted by the Chang'e-4 relay satellite is practical and that the orbit control accuracy meets the mission requirements.

2 Track Control Task Design
The entire orbital mission of the Chang'e-4 relay satellite is divided into four phases: the Earth–Moon transfer orbit, near-moon braking, capture into the Halo orbit, and the Halo mission orbit. Throughout the flight, the orbit control mission design of the Chang'e-4 relay satellite is as shown in Table 1. In the midway correction and Halo orbit capture phases, improving the orbit control accuracy can reduce the number of corrections and the fuel consumption. In the near-moon braking phase, the orbit control accuracy is related to the safety of the satellite and the success of the mission. The orbit maintenance control [8–11] of the Halo orbit around the L2 point adopts the quasi-Halo orbit method [10, 12, 13]. Improving the orbit control accuracy not only reduces the number of orbit control maneuvers, ensuring continuous relay communication services for the Chang'e-4 probe, but also extends the satellite's orbital life under limited propellant conditions.


Table 1. Chang'e-4 relay satellite track control task design.
Flight event                              Date
First halfway correction                  2018-5-21
Second halfway correction                 2018-5-22
Third halfway correction                  2018-5-24
Near-moon braking                         2018-5-25
Fourth halfway correction                 2018-5-26
Fifth halfway correction                  2018-5-27
First capture maneuver                    2018-5-30
First capture orbit correction            2018-6-1
Second capture maneuver                   2018-6-8
Second capture orbit correction           2018-6-10
Third capture maneuver                    2018-6-14
Third capture orbit correction            2018-6-16
Orbit maintenance of Halo mission orbit   Once every seven days

3 High-Precision Track Control Strategy
The orbit-control velocity increment error is composed of the error along the theoretical thrust direction and the error around the thrust direction, as shown in Fig. 2. The error along the theoretical thrust direction is usually determined by the sampling-interval quantization error, the engine thrust error and the accelerometer zero deviation, while the error around the thrust direction is usually determined by the thruster installation layout.


Fig. 2. Track control speed incremental error.


To reduce these four kinds of errors and improve the accuracy of track control, the Chang'e-4 relay satellite has formulated targeted strategies from four aspects: thruster installation layout optimization, control cycle sampling and quantization error suppression, thruster aftereffect suppression and accelerometer zero deviation suppression.
3.1 Thruster Installation Layout Optimization

The relay satellite has a total of twelve 5 N thrusters and four 20 N thrusters. In order to reduce the error around the thrust direction, the control system optimizes the thruster installation layout. The thrusters are installed in parallel and in force-couple mode, as shown in Fig. 3.

Fig. 3. Thrusters installation layout.

The 20 N thrusters (J7A/J7B/J8A/J8B) are installed on the −Z surface of the satellite; their thrust direction is strictly parallel to the Z axis and their installation positions are symmetrical about the Z axis. The 5 N thrusters (J1A/J1B/J2A/J2B/J3A/J3B/J4A/J4B) are also installed on the −Z surface of the satellite, using a force-couple layout. Thrusters J1 and J2 can generate torque about the X direction while generating thrust in the +Z direction; thrusters J3 and J4 can generate torque about the Y direction while generating thrust in the +Z direction. Thrusters J1/J2/J3/J4 can be used as a backup of the 20 N thrusters to complete precise orbit-control thrust. The 5 N thrusters (J5A/J5B/J6A/J6B) adopt a force-couple layout and mainly generate torque about the Z direction. The specific installation accuracy requirements are: the installation accuracy of each thruster is better than 0.1°; the symmetry tolerance of the reference axes of the main and backup nozzles is better than 0.2; the position deviation of each nozzle relative to the satellite centroid is better than 0.5%; and the symmetry tolerance of the main and backup nozzles is better than 0.5%.


Adopting the parallel and force-couple installation method reduces the disturbance torque around the engine thrust direction, which in turn reduces the attitude jets around the thrust direction and the velocity-increment error around the thrust direction. Even if jets around the thrust direction do occur, the thrusts of J5A/J5B/J6A/J6B cancel each other out owing to the force-couple layout.
3.2 Control Cycle Sampling and Quantization Error Suppression

In the traditional orbit control mode, the theoretical velocity increment is set to ΔV. The system integrates the measured value of the accelerometer to obtain the actual velocity increment ΔVS; when ΔVS ≥ ΔV, the thrusters shut down. This method introduces sampling and quantization errors over the control period: the control-period sampling quantization error is the error caused by reading the accelerometer integral only once per control-system sampling period, and the longer the sampling period, the greater the quantization error. To reduce this error, the control system pre-calculates an orbit-control shutdown velocity threshold. During the control process, the remaining velocity increment is compared with the shutdown threshold in real time; when the remaining increment is less than the threshold, a short timed shutdown is used to eliminate the tiny remaining velocity increment. The shutdown velocity threshold is denoted ΔVoff, and it is calculated as shown in Eq. (1):

ΔVoff = a·T·F / M   (1)

where a is a coefficient, T is the control period, F is the thruster thrust, and M is the satellite mass.
3.3 Thruster Aftereffect Suppression

The aftereffect of a thruster means that there is still a certain thrust after the thruster is turned off. Generally, the greater the thrust of the thruster, the greater the aftereffect error. Since the aftereffect of the 20 N thrusters is significantly greater than that of the 5 N thrusters, the 5 N thrusters must be used at the end of an orbit-control maneuver. The control system sets a critical velocity increment ΔVL: when the remaining velocity increment is less than ΔVL, the system autonomously switches to the 5 N thrusters to reduce the impact of the thruster aftereffect.
3.4 Accelerometer Zero Deviation Suppression

The zero deviation of the accelerometer is an inherent characteristic of the accelerometer. The longer the track control time, the greater the error caused by the zero deviation. In order to reduce the influence of the zero deviation of the accelerometer, it is necessary to reduce the orbit control time as much as possible.


Therefore, where permitted, the 20 N orbit-control engine with the larger thrust should be used for as long as possible.
3.5 Summary

The high-precision orbit control strategy of the Chang'e-4 relay satellite can be summarized as follows: (1) The thrusters are installed in parallel and in force-couple mode. (2) The theoretical velocity increment ΔV is determined from the orbit measurement result. (3) The velocity-increment threshold ΔVL for switching between the 20 N and 5 N thrusters and the early-shutdown velocity threshold ΔVoff are preset. (4) During orbit control, an accelerometer measures the acceleration in the thrust direction, the actual orbital velocity increment ΔVS is obtained by real-time integration, and ΔV minus ΔVS gives the remaining velocity increment ΔVR. (5) When ΔVR ≤ ΔVL, the system switches to the 5 N thrusters for orbit control. (6) When ΔVR ≤ ΔVoff, the 5 N thrusters use a short timed shutdown to eliminate the tiny remaining velocity increment.
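A compact Python sketch of steps (4)–(6) of this strategy is shown below; the thruster and accelerometer interfaces and the integration step are placeholders for illustration only, not the on-board implementation.

def orbit_control(delta_v_cmd, dv_switch, dv_off, read_accel, select_thruster, stop_thrust, dt=0.02):
    """Illustrative shutdown logic: integrate accelerometer output and switch/stop thrusters.

    delta_v_cmd : commanded velocity increment dV (m/s)
    dv_switch   : threshold dV_L for switching from 20 N to 5 N thrusters
    dv_off      : early-shutdown threshold dV_off = a*T*F/M from Eq. (1)
    """
    delta_v_actual = 0.0
    select_thruster("20N")
    while True:
        delta_v_actual += read_accel() * dt            # step (4): integrate thrust-direction acceleration
        remaining = delta_v_cmd - delta_v_actual       # step (4): remaining increment dV_R
        if remaining <= dv_off:                        # step (6): short timed shutdown on 5 N thrusters
            stop_thrust()
            break
        if remaining <= dv_switch:                     # step (5): switch to 5 N thrusters
            select_thruster("5N")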

4 On-Orbit Control Results
The actual on-orbit flight process of the Chang'e-4 relay satellite was as follows: (1) The Chang'e-4 relay satellite was launched into orbit on May 21, 2018. (2) Because of the high accuracy of the initial orbit and of the first midway correction, the second and third midway corrections were canceled. (3) The relay satellite performed near-moon braking on May 25, and the satellite entered the transfer orbit after the maneuver. (4) Because of the high accuracy of the near-moon braking, the fourth midway correction was canceled and only the fifth midway correction was performed. (5) The relay satellite arrived in the capture orbit on May 29. (6) Because of the high orbit control accuracy, only the second and third capture maneuvers were performed. (7) The satellite entered the mission orbit around the L2 point on June 14. (8) As designed, orbit maintenance is performed about once every 7 days in the mission orbit. The orbit control accuracy statistics of the Chang'e-4 relay satellite for near-moon braking, L2 capture and L2 orbit maintenance are shown in Table 2.


Table 2. Statistics of control accuracy of Chang'e-4 relay satellite.
Flight event                           Theoretical speed increment  Control error requirements  Actual speed incremental error
Near-moon braking                      203 m/s                      2000 mm/s
Halo orbit capture                     65 m/s
70 orbit maintenance of mission orbit  Average 0.3 m/s

T − TL = (2πJ/60)·(dn/dt) + (2πB/60)·n
E = ke·(2πn/60)
T = kt·i        (1)

Table 1. Parameters of permanent magnet brushless DC motor
Parameter           Value
Winding inductance  250 µH
Winding resistance  1 Ω
Moment factor       0.005 Nm/A
Moment of inertia   1.59155e-4 kg·m²


The flywheel uses a brushless direct current motor, and its parameters are as follows (Table 1).
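To show how Eq. (1) can be exercised with the Table 1 parameters, the Python sketch below integrates the mechanical equation with a simple forward-Euler step; the damping coefficient B and the drive-current profile are assumed values for the example and are not taken from the paper.

import math

J = 1.59155e-4   # moment of inertia, kg*m^2 (Table 1)
KT = 0.005       # moment factor, Nm/A (Table 1)
B = 1e-6         # assumed damping term used in the (2*pi*B/60)*n expression

def speed_step(n_rpm, current_a, load_torque_nm, dt):
    """One Euler step of Eq. (1): T - TL = (2*pi*J/60)*dn/dt + (2*pi*B/60)*n."""
    torque = KT * current_a
    dn_dt = (torque - load_torque_nm - (2 * math.pi * B / 60) * n_rpm) * 60 / (2 * math.pi * J)
    return n_rpm + dn_dt * dt

n = 0.0
for _ in range(1000):                       # 1 s of simulated acceleration at dt = 1 ms
    n = speed_step(n, current_a=0.1, load_torque_nm=0.0, dt=1e-3)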

3 The Composition and Working Principle of the Motor Speed Control System
The permanent magnet brushless DC motor control system is established according to the mathematical model of the motor and uses a double closed-loop structure, as shown in Fig. 1. It consists of two parts: the current loop and the speed loop. The speed loop is the outer control loop: the measured speed is fed to the speed controller to obtain the target current. The current loop is the inner control loop: it receives the current command from the speed loop, compares it with the actual current in the current controller to obtain a voltage value, compares that voltage with the triangle wave, and generates the PWM drive voltage [7]. After a logical operation with the preset driver commutation sequence, the result is sent to the three-phase six-arm inverter bridge drive circuit to drive the motor.


Fig. 1. Flywheel speed control system

4 Control Algorithm of Permanent Magnet Brushless DC Motor for Flywheel
According to the working states of the flywheel, the common speed range is 100 rpm to 4000 rpm. When the speed controller uses traditional PID control, a single set of kp and ki parameters cannot cover such a wide speed range, so the common method is segmented control: the operating speed range is divided into multiple sections, and each section uses different kp and ki values. The advantage of this method is that the principle is simple and easy to implement, but in actual debugging a single motor needs several sets of parameters to be tuned, which reduces efficiency. Compared with other control algorithms, a neural network algorithm builds an artificial neural network modeled on the brain, with the abilities of self-adaptation, self-organization and self-learning, and can approximate a given function. Therefore, the PID parameters can be fitted by the neural


network so that they adjust automatically with the motor speed and the speed error, and the control algorithm can be optimized [8]. The speed controller algorithm is replaced by an adaptive PID: the kp and ki parameters in the original PI control are modified and turned into functions related to the speed error:

kp = kp + η1·(n* − n)
ki = ki + η2·(n* − n)        (2)

where η1 and η2 are learning rates. In order to ensure the dynamic performance, a saturation function related to the target speed is selected. The original form of the saturation function is given in Eq. (3); since the changing quantity is the target speed, the input and output are multiplied by coefficients to adjust the range to the actual speed range, as shown in Eq. (4):

y = x / (1 + |x|)        (3)

η = k2·(n*/k1) / (1 + |n*|/k1)        (4)

X

en ðkÞ

ð5Þ

5 Experimental Verification According to the foregoing, the corresponding algorithm programs were written and the two algorithms were verified by experiments. Using the original PI speed controller, the lowest controllable speed is 50 rpm. Although it meets the accuracy requirement of 1 rpm, the fluctuation is large The Fig. 2 shows the speed response curve with a 0.5 rpm step signal at 50 rpm.

Design and Optimization of Permanent Magnet Brushless DC Motor Control System

659

Fig. 2. Speed curve of motor at 50 rpm when using PI control

The motor running speed is segmented, and the kp and ki parameter values of each speed segment are shown in Table 2.

Table 2. The kp and ki value of each speed section Speed (rpm) 0–200 200–400 400–1000 >1000

Kp 0.0001 0.0002 0.0003 0.00035

Ki 0.00015 0.00025 0.0003846 0.00023

Figure 3 (a) to (e) are the speed response curves of 100 rpm, 300 rpm, 600 rpm, 1000 rpm and 2000 rpm with 0.5 rpm step signal. After stabilization, the torque fluctuation is within the range of 0.1 rpm, which meets the 0.2 rpm torque fluctuation requirement. Using the optimized adaptive PI control algorithm, when the speed is less than 200 rpm, the parameters k1 and k2 in the η1 and η2 formulas are selected as 1000 and 1e-9 respectively. When the speed is greater than 200 rpm, the parameters k1 and k2 in η1 formulas are selected as 1000 and 1e-9 respectively. The parameters k1 and k2 in η2 formulas are respectively 1000 and 1.5e-9, and the initial value of ki is set to 0.00005.

660

C. Bao et al.

(a)100rpm

(b)300rpm

(c)600rpm

(d)1000rpm

(e)2000rpm

Fig. 3. The response curve of each speed to 0.5 rpm step command when using traditional PI control

Design and Optimization of Permanent Magnet Brushless DC Motor Control System

Fig. 4. Speed curve of 30 rpm using adaptive PI control


Fig. 5. Speed curve of 50 rpm using adaptive PI control

The minimum controllable speed is 30 rpm; the speed curve is shown in Fig. 4, and the fluctuation is 0.5 rpm, which meets the 1 rpm requirement. Figure 5 shows the speed response curve to a 0.5 rpm step signal at 50 rpm; the fluctuation is about 0.2 rpm, which fully meets the design requirements. Figures 6 (a) to (e) show the speed response curves at 100 rpm, 200 rpm, 400 rpm, 800 rpm and 1500 rpm with 0.5 rpm step signals. After stabilization, the fluctuation is within 0.1 rpm, which meets the 0.2 rpm fluctuation requirement. The response time to a given step signal is basically the same as with the traditional PI algorithm, but there is no need to tune the parameters in sections, which improves the control efficiency.



Fig. 6. The response curve of each speed to 0.5 rpm step command when using adaptive PI control


6 Conclusion
This article introduces the flywheel, an important actuator in the attitude and orbit control system of commercial optical remote sensing satellites, for which a permanent magnet brushless DC motor is selected. According to the characteristics and mathematical model of the BLDCM, the flywheel speed control system is designed. An adaptive PID is used for speed control to optimize the original PI speed controller, and a saturation function is introduced to update the parameters kp and ki in real time according to the target speed and the speed error. Finally, the feasibility of the controller is verified through experiments. The minimum controllable speed is reduced from 50 rpm to 30 rpm, meeting the 1 rpm error requirement below 100 rpm, and no sectioned parameter tuning is needed above 200 rpm. Wide-range speed control up to 2000 rpm is achieved directly while meeting the response-time requirements, which greatly improves debugging efficiency.

References
1. Yang, R., Wang, D., Gu, S., Chen, S.: Design of micro-vibration isolators of flywheels of high resolution optic satellites. Noise Vibr. Control 38(06), 178–183 (2018)
2. Ma, E.: Research on Control and Drive Strategy of Wide Speed and Low-Induction Brushless DC Motor for Flywheel. Harbin Institute of Technology (2020)
3. Shen, Y.: Study on Satellite Actuator Scheme for Large Angle Maneuver. Shanghai Jiaotong University (2019)
4. Narayan, S.S., Nair, P.S., Ghosal, A.: Dynamic interaction of rotating momentum wheels with spacecraft elements. J. Sound Vib. 315(4), 970–984 (2008)
5. Choi, U.M., Lee, K.B., Blaabjerg, F.: Diagnosis and tolerant strategy of an open-switch fault for T-type three-level inverter systems. IEEE Trans. Ind. Appl. 50(1), 495–550 (2014)
6. Doss, M.A., Premkumar, E., Kumar, G.R., Hussain, J.: Harmonics and torque ripple reduction of brushless DC motor (BLDCM) using cascaded H-bridge multilevel inverter. In: Proceedings of IEEE ICPEC, pp. 296–299 (2013)
7. Eggers, J., Suramlishvili, N.: Singularity theory of plane curves and its applications. Eur. J. Mech. B Fluids 65(5), 87–90 (2017)
8. Tren'kin, A.A., Karelin, V.I., Shibitov, Yu.M., et al.: Microstructure of the regions on a plane copper electrode surface affected by a spark discharge in air in the point–plane gap. Tech. Phys. 62(9), 45–47 (2017)
9. Cui, J.F., Liu, Y., Yan, H., et al.: Multi-motor neuron PID synchronous control based on fuzzy control. Mod. Mach. Tool Autom. Manuf. Tech. 2(2), 81–83, 87 (2013)
10. Devi, K.S., Dhanasekaran, R., Muthulakshmi, S.: Improvement of speed control performance in BLDC motor using fuzzy PID controller. In: International Conference on Advanced Communication Control & Computing Technologies, pp. 380–384. IEEE (2017)

On-orbit Research and Validation on the Lifetime Capability of Propulsion System Jialong Ji1,2(&), Yuan Wang1,2, Xin Miao3, and Tao Chen2 1

2

Beijing Institute of Control Engineering, Beijing 100190, China [email protected] Beijing Engineering Research Center of Efficient and Green Aerospace Propulsion Technology, Beijing 100190, China 3 Beijing SunWise Space Technology Ltd., Beijing 100190, China

Abstract. Nonmetallic material is widely used in the satellites’ propulsion systems. In order to analysis the lifetime of the propulsion systems, the compatibility especially the rubber material compatibility should be test. A method for researching the rubber compatibility of a propulsion system in orbit is proposed based on the fact that the compatibility test is carried out only under the standard condition nowadays. In this method, the conventional method on the standard condition is consulted and the system condition variety by catalytic decomposition under pressurized, high-pressure condition is considered to get the exact catalytic decomposition rate. At the same time, the rubber compatibility is performed based on the system pressure variety. According to the rubber compatibility research, we can know that the catalytic decomposition rate of the rubber at high pressure is obviously lower than that at standard pressure, which ensures the margin of the test result aground applied in orbit. At the same time, some in-flight orbital controlling test is carried out to validate the lifetime of propulsion system. And some suggestion of rubber application in propulsion system is proposed according to the catalytic decomposition rate and the system condition in orbit. Keywords: Propulsion system Lifetime

 Rubber  Compatibility  High-pressure 

1 Foreword The satellite is designed with the characteristic of long lifetime, high reliability, and so on. And in order to ensure the lifetime, the propulsion system is always configured in satellites to raise or lower down its orbit altitude. The metallic material is widely used in the components of the propulsion system as the structural support because of its high intensity [1], and the components used in the monopropulsion system can bear the pressure as high as 8 MPa. At the same time, some hard non-metallic material is also usually applied as the seal parts such as FEP [2]. And the rubber is always used as the propellant management device in the tanks [3]. In the propulsion system, the propellant is stored in the tank and pipes in orbit. The lifetime of the systems is a key specification. In order to ensure the specification, the ©

The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 J. Sun et al. (Eds.): Signal and Information Processing, Networking and Computers, LNEE 917, pp. 664–672, 2023. https://doi.org/10.1007/978-981-19-3387-5_78

On-orbit Research and Validation on the Lifetime Capability

665

non-metallic materials of the propulsion system components should have a good compatibility with the propellant. If not, some gas may be produced by the reaction between the materials and propellant, which raises the pressure of the system, and the materials will be corroded, which may lead to the breakage of the rubber and the leakage of the system. So, the compatibility experiment especially the rubber compatibility with the propellant is necessary because the management capability of the system depends on the holonomy of the rubber. The compatibility with the propellant includes the corrosive characteristics and catalytic decomposition. Catalytic decomposition means that some gas is produced by the chemical reaction between the materials and propellant. While, the catalytic decomposition experiment is carried out at the standard pressure, which is different from the applied high-pressure environment. The differences may lead to a different decomposition rate, and it leads to an inaccurate lifetime estimation, which may mislead the satellite design and application. So, an analysis of catalytic decomposition performance shoule be carried out based on the on-orbit condition of the system, and some test shoule be carried out to validate the lifetime capability.

2 Catalytic Decomposition Test Method 2.1

Test Method

The conventional test of the catalytic decomposition is carried out by testing the generation of gas, and the decomposition rate is calibrated by measuring the amount of produced gas. As shown in Fig. 1, during the test, some clean rubber samples of standard size and of unbroken structure are prepared, and these samples are put in the propellant to make sure that all the faces of sample contact the propellant. The bottle with propellant and rubber sample in it is put in the water of a constantly high temperature.. Always, the test will last for about 15 days. The volume of the gas generated by the catalytic decomposition is assumed as V, so V ¼ Vt  V0

ð1Þ

With: Vt : the gas volume with rubber sample V0 : the gas volume without sample, ml The generation rate of standard state Ss can be solved as follows (Fig. 1): Ss ¼ a  V  P=ðS  tÞ

With: a: conversion ratio P: the environment pressure S: the surface area of the sample t: test time

ð2Þ

666

J. Ji et al.

1. Water of constant temperature 2. Sample bottle 3. Rubber sample 4. Propellant 5. Linking pipe 6. Valve 7. Gas-gathered pipe 8. Balance pipe 9. Environment of constant temperature

Fig. 1. Test device configuration of gathering method

From the test equipment we can know that the material of the gas-gathering pipe is hard glass, and the pistons, pipes and other devices can’t bear the working pressure of propulsion system such as 1.7 MPa. So the test is always carried out at the standard pressure which is different from the actual state of the propulsion system. At the same time, the test is carried out at a high temperature such as 35 °C or 50 ° C. But because of the thermal controlling measures and the actual space environment of the satellites [4], the temperature of the system in orbit will not be very high for a long time, the test result cannot actually reflect the catalytic decomposition rate. From the method we can know that it is applied by testing the gas volume variation. If the gas is generated when the propellant is drained, the gas is stored in the tank, and the pressure of propulsion system will rise in orbit. According to the relation of the gas volume and pressure variation, the catalytic decomposition rate can be validated by the working mode of the propulsion system and the parameter such the pressure of tank, the temperature of tank, and so on. 2.2

Research on Catalytic Decomposition Rate in Orbit

The tanks in the satellites, rockets are used for storing the propellants and as the propellant management devices. And the diaphragm and rubber bag are widely used as the propellant management in the tanks. The diaphragm-tank and bladder-tank can work with a larger flow rate under satellite’s higher acceleration at any direction, overcoming the difficulty of liquid sloshing and gas-liquid separation [5]. So these kinds of tanks are usually used in the recoverable satellites [6, 7]. As shown in Fig. 2, when the rubber bag is applied in the tank, the propellant is stored in the bag, and either the inner surface or the outer surface of the rubber bag is touched by the propellant. According to the working mode of bladder-tank, the on-orbit data of system can be applied for the research on the catalytic decomposition.

On-orbit Research and Validation on the Lifetime Capability

667

Rubber bag

Inlet

Outlet

Propellant

Gas

Shell

Fig. 2. Configuration of tank with a rubber bag [8]

The monopropulsion system using hydrazine (called N2H4 for short) as the propellant is taken as the researching object here because that the diaphragm-tank is always used in the monopropulsion system [9]. The decomposition process of hydrazine is shown as follows where x is defined as the degree of the dissociation of NH3 generated from N2H4 decomposition [10]: 1 4 N2 H4 ! ð1 þ 2xÞN2 þ ð1  xÞNH3 þ 2xH2 3 3

ð3Þ

During the on-orbit time, if propulsion doesn’t work, when the rubber is touched with hydrazine, the produced gas will raise the pressure of the system. The research is performed based on some assumptions as follows: (1) The conversion ratio a is constant during the test time, and according to Formula (2), the: the generation rate Ss is constant too. (2) The solubility of propellant is constant, and the generated gas such as N2 and H2 does not dissolve in the propellant any more. In this research, the pressure variation is shown as follows: DP ¼ P2  P1

ð4Þ

With: P1: the pressure before catalytic decomposition P2: the pressure after the catalytic decomposition time During the process, the produced gas is stored in the tank, so the quantity of the produced gas depends on the pressure variation: N ¼ DP  V

ð5Þ

With: V: the volume of gas in the tank. Because the propellant is incompressible medium, V equals to the volume of tank except the volume of propellant.

668

J. Ji et al.

According to Formula (4) and (5), the relation of the pressure and the generation rate can be shown as follows: V þ Sp  S  t  P1 ¼ P2 V

ð6Þ

P2 Sp ¼ V  ð  1Þ=ðS  tÞ P1

ð7Þ

So,

During the orbital time, if the propulsion doesn’t work, the propellant will not be consumed, V is a constant. According to the design of tank, S is also a constant. If the orbital parameter such as P1, P2, t can be obtained, the catalytic decomposition rate Sp at high pressure can be obtained.

3 In-flight Validation 3.1

Mission Design

A bladder-tank is applied in the propulsion system [11], the 1L-tank is shown in Fig. 3. The bladder fixed in the left side of the tank is designed as a spheric form, and its volume is 0.5L. During the manufacture process, the material used for bladder is rubber of good compatibility with hydrazine, and the gas generation rate is well controlled. When the propulsion system is applied in the satellite, the propellant is drained into the bladder through the liquid draining-valve, and the high-pressure gas such as N2 is fed into the gas side of tank through the gas draining-valve. During the long-time storage stage, neither the composition of propellant nor that of rubber varies obviously. And during the orbital controlling process, when the propellant is consumed, the pressure goes down according to the ideal-gas equation. And the material integrity of bladder within the lifetime ensures that all the propellant is still stored in the bladder, which ensures the normal working condition of the propulsion system. Rubber Bag

Fig. 3. Configuration of tank

On-orbit Research and Validation on the Lifetime Capability

669

Other nonmetallic material is also used in the propulsion system, but their sizes are much smaller than that of bladder in the tank, and their gas generating rate is much smaller than that of rubber, either. So other nonmetallic material is ignored in this research. 3.2

In-flight Catalytic Decomposition Validation

The propulsion system has carried out the thermal vacuum test, and the test data is based on the in-flight time far away from the launching time, the conventional outgassing rate property of rubber can be ignored. And during the test time, the propulsion system doesn’t work in orbit [12]. Some assumption is performed in the catalytic decomposition validation as follows: (1) The bladder is considered as a closed ball, the liquid side area of tank is ignored, and the contact area of bladder and propellant is constant. (2) The on-orbit temperature of tank is constant. (3) All the propellant is stored in the bladder, the rubber material is intact and unbroken during the process. (4) The influence of other material in the tank and valves is ignored. The temperatures of tank is shown in Fig. 4 and the pressure variation of the test time is shown in Fig. 5. From Fig. 4 we can know that the temperatures vary from 18 ° C to 22 °C during the validation, which proves the correctness of assumption (2). And the working condition of the propulsion system in orbit is better than that of the ground test. From Fig. 5 we can know that the tank pressure rises from 1.684 MPa to 1.76 MPa during the process. And the catalytic decomposition rate Sp is 0.46  10– 3 ml/cm2d by Formula (7), which is much lower than 1.0 * 1.5  10–3 ml/cm2d obtained by the ground test [13].

T/

24

Tank1

23

Tank2

22

1.8 P/MPa 1.78 1.76 1.74

21

1.72

20

1.7

19

1.68

18

1.66

17

1.64

16

1.62 Day

15 0

50

100

150

Fig. 4. Temperatures variation of tank

Day

1.6 0

50

100

150

Fig. 5. Pressure variation of tank

At the same time, another group of the orbital data lasting for about 1.8y is taken to analysis the catalytic decomposition. During this process, the propulsion system doesn’t work, either. And the pressure variation is shown in Fig. 6, the pressure

670

J. Ji et al.

variation derives from the catalytic decomposition totally. The catalytic decomposition rate Sp of this time is 0.44  10–3 ml/cm2d, near the result of the former one. Both the results show the coherence of the catalytic decomposition rate at high pressure. On the other hand, although the rate is lower than that of imagined, the pressure of tank rises evidently for a long time. The catalytic decomposition effect of rubber should be considered when the rubber bag is applied in tanks, especially in tanks of the longlifetime satellites. 2.2 P/MPa 2.1 2.0 1.9 1.8 1.7 0

200

400

600

Day 800

Fig. 6. Pressure variety of tank

3.3

Propulsion System In-flight Test

The in-flight test of the propulsion system is carried out for several times. A test was performed 5 months after the propellant was drained in the tank. And other test are performed 3 years later, the pressure becomes higher as shown in Fig. 6. Because of the pressure variation by the catalytic decomposition, the common P-V-T method [14] is no longer fit for this system. The BK method [15] is applied for the orbital controlling model of this mission. During the orbital controlling process, the strategy is made according to pressure in orbit and the quality of the residual propellant. And the thrust ratio of each time is shown in Table 1. All the thrust ratio, whose indicator is 1 ± 5%, is about 1 by validation. From Table 1 we can know that the thrust ratio of the preliminary stage of the propulsion system consists with that of the final, overloading stage. And although the test time has already been over the application lifetime of rubber, the structure of the rubber bag is intact. And the lifetime capability of the propulsion system is beyond the index.

On-orbit Research and Validation on the Lifetime Capability

671

Table 1. Thrust ratios of propulsion system Test series Pressure (MPa) 1 2.165223 2 2.034816 3 1.932648 4 1.84462 5 1.76785 6 1.699987 7 1.661271 8 1.652093

Academic thrust (N) Thrust ratio 0.331 0.985 0.314 1.015 0.30 1.042 0.288 1.023 0.277 0.9965 0.267 1.001 0.261 1.025 0.260 1.008

4 Conclusions A method of analyzing the catalytic decomposition rate in-orbit is presented and test by on-orbit data, and the lifetime capability of the propulsion system is validated by orbital controlling. The conclusions are achieved as follows: a) The temperature environment in orbit of satellites is evidently better than that of the ground test, and the catalytic decomposition rate in orbit is lower than that of the ground test too. b) The gas generating rate of rubber should be considered when it is applied in tanks of the satellites especially in the long-lifetime satellites. c) The structure of the rubber material is intact in the longtime touch in spite of the gas generating phenomenon, the lifetime of rubber is longer than expected. d) Some further research such as the exact analysis of gas generating rate can be carried out by considering the accurate pressure in orbit.

References 1. Hinckel, J.N., Trava-Airoldi, V.J., Corat, E.J., et al.: Propulsion subsystem component development program for the MECB RSS satellite. In: 27th AIAA/SAE/ASME/ASEE Joint Propulsion Conference, Sacramento (1991) 2. Salim, A.A.: In-orbit performance of lockheed Martin’s electrical power subsystem for A2100 communication satellites. In: 35th Intersociety Energy Conversion Engineering Conference and Exhibit,Las Vegas (2000) 3. Kammerer, H., Hughes, J., Gribben, E.: Analytical and material advances in contoured metal diaghragms for positive expulsion tanks. In: 31st AIAA/SAE/ASME/ASEE Joint Propulsion Conference and Exhibit, San Diego (1995) 4. Cho, Y.H., Jae, H.Y., Kyun, H.L., et al.: Sensitivity analyses of satellite propulsion components with their thermal modelling. Adv. Space Res. 47, 466–479 (2011) 5. Weislogel, M.M., Sala, M.A., Collicott, S.H.: Analysis of tank PMD rewetting following thrust resettling. In: 40th AIAA Aerospace Sciences Meeting & Exhibit, Reno, NV (2002) 6. Rattan, T.M., Patil, K.C., et al.: Hydrazine and its inorganic derivatives. In: Inorganic Hydrazine Derivatives: Synthesis, Properties and Applications, pp. 1–36 (2014)

672

J. Ji et al.

7. Chen, Z., Li, Z., Wu, J.: Research on propellant tank composite diaphragm. J. Rocket Propul. 31(1), 21–23 (2005). (in Chinese) 8. Wang, X., Zhang, H., Wang, Z., et al.: Test research on the sloshing properties of the liquidfilled tank with a rubber bag. Struct. Environ. Eng. 45(2), 22–28 (2018). (in Chinese) 9. Sackheim, R.L.: Overview of United States space propulsion technology and associated space transportation systems. J. Propul. Power 22(6), 1310–1333 (2006) 10. Yuan, T., Huang, B., Tang, M., et al.: The integration and test of a 5-N hydrazine propulsion system. In: 45th AIAA/ASME/SAE/ASEE Joint Propulsion Conference & Exhibit, Denver, Colorado (2009) 11. Ji, J., Wang, Y., Ma, Y.: In-space demonstration of XW-2 propulsion system. Space Propul. 10(3), 8–11 (2016). (in Chinese) 12. Saito, K., Sato, Y., Inayoshi, S., et al.: Mesaurement system for low outgassing materials byswitching between two pumping paths. Vacuum 47(6), 749–752 (1996) 13. Mao, G., Tang, J.: Application of Spacecraft Propulsion System, 1st edn. Northwestern Polytechnic University Press, Xi’an (2009) 14. Duhon, D.D.: The shuttle orbital maneuvering system P-V-T propellant quantity gauging and leak detection allowance for four instrumentation conditions. NASA CR-150972 (1975) 15. Chang, Y.K., Young, M.B., Chae, Y.S.: Error analyses for spacecraft propellant gauging techniques. In: 30th AIAA/ASMW/SAE/ASEE Joint Propulsion Conference, Indianapolis (1994)

Observer-Based Variable Structure Control for Spacecraft with Actuator Failure Haibin Qin1,2, Jianxiang Xi1(&), Jun Liu2, Xu Yang2, and Nengjian Tai2 1

2

High-Tech Institute of Xi’an, Xi’an 710025, China [email protected] Xi’an Satellite Control Center, Xi’an 710043, China

Abstract. Aimed at the problem of actuator efficiency loss failure of spacecraft attitude control system, a variable structure fault-tolerant control method under the influence of interference is studied. Firstly, a sliding mode fault estimation observer is designed to realize effective estimation of actuator failure. Secondly, using the output of the observer and considering the suppression of external interference, an adaptive sliding mode control method is introduced to estimate the unknown parameters in real-time. Finally, based on Lyapunov theory to prove the stability of the controller, the simulation results show that the method of execution efficiency loss spacecraft failure has strong fault tolerance. Keywords: Actuator efficiency loss Sliding mode control

 Estimation observer  Fault-tolerant 

1 Introduction Due to the variety of space exploration missions and complex spacecraft systems, stability is particularly important for spacecraft control systems, where small changes can lead to significant performance degradation or even instability. To improve system reliability and security, the controller should be able to tolerate potential failures and maintain performance. Therefore, the development of spacecraft fault-tolerant control technology has very important practical significance. The actuator failure affects the accuracy of attitude control and even lead to system collapse. In recent year, the scholars has studied lots of fault tolerant control strategies for actuator failure [1–3], such as model reference adaptive control, backstepping control, variable structure control and so on. Ref [4] proposed an adaptive variable structure fault tolerance control method, based on adaptive technology to estimate minimum Fault. In Ref [5], the author combined adaptive control theory and fuzzy control theory to derive an adaptive attitude controller for system uncertainty identification. The passive fault-tolerant control law was derived in Ref [6] based on adaptive technology, which can complete attitude tracking without any fault diagnosis information in the case of actuator failure. Ref [7] studied a discrete adaptive control method to solve the problem of external interference and actuator failure of rigid spacecraft. Taking into account the maximum output capability limitation and external disturbance of actuator in Ref [8], the author derived an adaptive fault tolerant control method based on variable structure control theory. ©

The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 J. Sun et al. (Eds.): Signal and Information Processing, Networking and Computers, LNEE 917, pp. 673–681, 2023. https://doi.org/10.1007/978-981-19-3387-5_79

674

H. Qin et al.

Most adaptive control methods was studied under the assumption that the actuator failure has an upper bound. These lead to the conservative nature of the controller inevitably. If an observer is designed to observe the spacecraft fault in real time, to compensate and reconstruct the controller, the conservatism of the controller could be weakened. A sliding mode observer was designed to observe spacecraft failure [9], but it is only applied to linear model. Most systems are nonlinear in reality, spacecraft is a complex nonlinear system. Therefore, the paper designs a fault estimation observer to conduct real-time online estimation of the fault for the Lipschitz nonlinear attitude system. Once the actuator fails, the estimated value can be given in time and the controller will be reconstructed to achieve the purpose of fault-tolerant control.

2 Attitude Model and Control Problem Formulation Consider the rigid spacecraft [10], the attitude dynamics equation is Jx_ ¼ xX Jx þ u þ d

ð1Þ

where J 2 R3X3 is the total inertia matrix of the spacecraft, u ¼ ½ u1 u2 u3 T 2 R3 denotes the control torque that the actuator can actually impose, and d ¼ ½ d1 d2 d3 T 2 R3 denotes the external disturbance torque. Due to the singularity of the Euler angle kinematics equation, the article uses quaternion to describe the attitude motion of the spacecraft, where Q ¼ ½ q0 q1 q2 q3 T ¼ ½ q0 qv T and q20 þ q2v ¼ 1. Then the attitude kinematics equation of spacecraft can be described as:    1 X ðqv þ q0 IÞx q_ 0 2 _ Q¼ ¼  12 qTv x q_ v

ð2Þ

 T x ¼ x1 x2 x 3 , I is the third-order unit matrix, 3 0 q3 q2 4 q3 qX 0 q1 5. v ¼ q2 q1 0 Considering the failure of actuator efficiency loss, the failure attitude model is established in the form of a product factor, and the dynamic Eq. (1) can be rewritten as where

2

Jx_ ¼ xX Jx þ Eu þ d

ð3Þ

where E ¼ diag( e1 e2 e3 Þ 2 R3X3 denotes the validity factor for the actuators with 0\ei  1 ði ¼ 1; 2; 3Þ. The case when ei ¼ 1 means that the i th actuator works normally, and 0\ei \1 corresponds to the case in which the i th actuator partially loses its effectiveness. To satisfy the design of fault-tolerant controller, some necessary assumptions are introduced.

Observer-Based Variable Structure Control

675

Assumption 1: The inertia matrix is positive definite; Assumption 2: The external disturbance torque d is bounded, namely, it satisfies kdk  dmax , where dmax is the upper bound of the disturbance torque; Assumption 3: The nonlinear term xX Jx in [11] is globally Lipschitz for 8x 2 R3 , there exists a known positive scalar #, such that the following condition is satisfied:  X  x Jx  x ^ X J^ ^ k ¼ # kx ~k x  #kx  x In order to effectively solve the problem, we developed a fault-tolerant control strategy. The control problem formulation may be summarized as follows: Based on the above Assumptions, considering the cases of the actuator failure and external interference, one can design the control law u which make the systems (2) and (3) meet x ! 0; qv ! 0 and q0 ! 1 when t ! 1.

3 Observer-Based Variable Structure Controller Design 3.1

Design Fault Estimation Observer

In order to design a sliding mode fault estimation observer to realize the estimation of actuator failure, since E is a diagonal matrix, the Eu in Eq. (3) can be rewritten as: Eu ¼ Up

ð4Þ

where U ¼ diagð u1 u2 u3 Þ; p ¼ ð e1 e2 e3 ÞT . Therefore, the Eq. (3) can be rewritten as Jx_ ¼ xX Jx þ Up þ d

ð5Þ

Based on Eq. (5), we design a sliding mode fault estimation observer: :

^ ¼ x ^ X Jx ^ þ U^ ^ þ dmax signðx  xÞ ^ Jx p þ Cðx  xÞ :

^ ¼ k1 p ^ þ k2 UT ðx  xÞ ^ p

ð6Þ ð7Þ

^ and p ^ is respectively state observations and fault estimates, x ~ ¼xx ^ is the where x error variable of the observer, k1 [ 0, k2 [ 0, C [ 0 is a diagonal matrix. Establishing a dynamic equation of observation error by subtracting (5)–(6): :

~ ¼ ðxX Jx  x ^ X J^ ~ þ d  dmax signðxÞ ~ Jx xÞ þ U~p  Cx ^. where ~p ¼ p  p

ð8Þ

676

H. Qin et al.

Theorem 1: If the estimated observer of the attitude control system (2)–(4) achieves accurately the efficiency loss fault estimated value of the actuator, the observer satisfies kmin ðCÞ  # [ 0. Proof: Assumed the Lyapunov function is 1 T 1 T ~ Jx ~þ V1 ¼ x p~ ~ p 2 2k2 The derivation of the V1 is : : 1 ~ ~ TJ x ~ þ ~pT p V_ 1 ¼ x k2

Substituting the equation of observation error (7) into the above equation, it can be get : 1 T: ~ p ~ ~ TJ x ~ þ p V_ 1 ¼x k2

~ T ½  ðxX Jx  x ^ X J^ ~  ¼x xÞ þ U~p  C~ x þ d  dmax signðxÞ

1 T ~ k1 p ^p ~T UT ð~ p xÞ k2

By scaling the above equation, we can obtain: 1 T ~ k1 p ^ ~ k2 þ x ~ T ðd  dmax signðxÞÞ ~  p V_ 1   ðkmin ðCÞ  #Þkx k2 1 T ~ k1 p ^ ~ k2  p   ðkmin ðCÞ  #Þkx k2 ~T p ^¼p ~T p  p ~   12 p ~T p ~ þ 12 pT p, 0\pi  1. ~T p Due to p ^ ~ k2 ~ V_ 1   ðkmin ðCÞ  #Þkx pT k 1 p k ~Þ ~ k2  1 ðpT p  p ~T p   ðkmin ðCÞ  #Þkx 2k2 Then, V_ 1  0, if the observer satisfies the condition kmin ðCÞ  # [ 0, the designed sliding mode estimation observer can achieve the ultimate uniform boundlessness. Therefore, the estimation fault observer specified in Theorem 1 can estimate the fault information in real time and accurately, and send the fault information to the fault tolerant controller in time. 3.2

Observer-Based Control Law Derivation

The controller is designed using the sliding mode control theory. For the equations of (2) and (3), the sliding surface is designed as follows:

Observer-Based Variable Structure Control

677

s ¼ x þ kqv The derivation of the s is s_ ¼ J1 xX Jx þ J1 Eu þ J1 d þ

k X ðq þ q0 IÞx 2 v

The fast power sliding mode approach law is chosen in the form: s_ ¼ k3 s  k4 diagfjsjga signðsÞ where k3 [ 0, k4 [ 0; 0\a\1. Based the above, the following forms of control law and adaptive law are designed: u ¼E1 J½J1 xX Jx  J1 ^dmax

s k  ðqX þ q0 IÞx  k3 s  k4 diagfjsjga signðsÞ ks k 2 v

_ d^max ¼k5 ksk ð9Þ where k5 [ 0. Theorem 2: The fault-tolerant control law (9) ensures that the attitude system can reach a stable state under the condition of loss of actuator performance, that is x ! 0; qv ! 0 and q0 ! 1 when t ! 1. Proof: Select the Lyapunov function is 1 1 V2 ¼ sT s þ ðdmax  ^ dmax Þ2 2 2k5 The derivation of the V2 : 1 : V_ 2 ¼sT s þ d~max d~ max k5

: k X 1 ðq þ q0 IÞx þ d~max d~ max 2 v k5 s k  ðqX þ q0 IÞx  k3 s  k4 diagfjsjga signðsÞÞ ¼sT ½  J1 xX Jx þ J1 EðJ1 EÞ1 ðJ1 xX Jx  J1 d^max ks k 2 v : k 1 þ q0 IÞx þ d~max d~ þ J 1 d þ ðqX max 2 v k5 : s 1 ¼sT ½  J1 d^max þ J1 d  k3 s  k4 diagfjsjga signðsÞ  d~max d^ max ksk k5 s Þ  k 3 ksk2 k4 kska þ 1 d~max ksk  sT ½J1 ðd  d^max ks k

¼sT ½  J1 xX Jx þ J1 Eu þ J1 d þ

  k 3 ksk2 k4 kska þ 1 0

Then, the spacecraft attitude system is asymptotically stable under the designed controller.

678

H. Qin et al.

Remark1: To suppress the chattering effect of sliding mode control, let the function satðsÞ replace the function signðsÞ and use the function ksksþ m instead of the function kssk in the controller, where m is a normal number with a small value. 8 < 1 s[D satðsÞ ¼ ks jsj  D ; k ¼ 1=D : 1 s\D

4 Numerical Simulation In this section, numerical simulations are presented to demonstrate the validity of the designed fault-tolerant 2 3 controller. The rotation inertia matrix of the spacecraft is 20 0 0:9 J ¼ 4 0 17 0 5, the external disturbance torque 0:9 0 15 2 3 3 cosðw0 tÞ þ 1 d ¼ A0 4 1:5 sinðw0 tÞ þ 3 cosðw0 tÞ 5, where w0 is the orbit angular velocity (Table 1). 3 sinðw0 tÞ þ 1 Table 1. The main simulation parameters. Parameters Initial values of attitude and angular velocity Disturbance Observer parameters Controller parameters

Value Qð0Þ ¼ ½0:170.26 0:79  0.53T xð0Þ ¼ ½000T A0 ¼ 2:5e  5=ðN  mÞ; w0 ¼ 1:2e  4=ðrad=sÞ C ¼ 58:0I; k1 ¼ 3e  4I; k2 ¼ 2:7e3 k ¼ 0.8; k3 ¼ 0.6I,k4 ¼ 3e  3I, k5 ¼ 5e  3I; v ¼ 0:01

Under the above initial conditions, numerical simulation is carried out by distinguishing the normal actuator from the loss fault. Case 1 Normal mode, The actuator is working fine. Case 2 Failure mode, The actuator has the following form of efficiency loss failure:  8 1 0  t\5s > > < e1 ðtÞ ¼ 0:6 t  5s  > 1 0  t\8s > : e2 ðtÞ ¼ 0:8 t  8s

4.1

ð10Þ

Normal Mode

In the normally mode, the actuator works well. Using the action of control rate (9), the response of the attitude angle and the angular rate are shown in Fig. 1, the control torque and the estimated upper bound of the external interference is shown in Fig. 2.

Observer-Based Variable Structure Control 0.3

1.5

0.2 attitude angle rate (°/s)

1 four elements

679

0.5 0 q0 q1 q2 q3

-0.5 -1 0

5

10

15 time(s)

20

25

0.1 0 -0.1 -0.2

wx wy wz

-0.3 -0.4 0

30

5

10

15 time(s)

20

25

30

Fig. 1. The attitude four elements and the angular velocity

4

ux uy uz

2 0 -2 -4

x 10

-5

3.5 3 dmax(N.m)

control torque (N.m)

4

2.5 2 1.5 1

-6 -8 0

0.5

5

10

15 time(s)

20

25

30

0 0

5

10

15 time(s)

20

25

30

Fig. 2. The control torque and the last estimate curve of external interference

From the simulation results, the attitude system reaches steady state in about 20s and the control precision is better than 10-4. 4.2

Failure Mode

In this section, considering the actuator failure (10), the same controller parameters is simulated to verify the validity of the observer and the fault-tolerant capability of the fault-tolerant controller under the same initial states. The response of the attitude angle and the angular rate are shown in Fig. 3, the attitude system reaches state less than 20 s. It can be seen from the Figs. 4, 5, the observer can find the fault information of the actuator in time, and the designed controller can realize the effective compensation for the fault within 3 s, and realize the stable control of the attitude system. In short, the designed observer-based variable structure controller has good capability in both modes, and can meet the requirements of spacecraft control system. Especially in the failure mode, it has good performance of fault tolerance.

680

H. Qin et al. 0.3

1.5

0.2 attitude angle rate (°/s)

four elements

1 0.5 0

q0 q1 q2 q3

-0.5 -1 0

5

10

15 time(s)

20

25

0.1 0 -0.1 -0.2

wx wy wz

-0.3 -0.4 0

30

5

10

15 time(s)

20

25

30

Fig. 3. The attitude four elements and the angular velocity 1.1 1

0.9

efficiency y

0.95

0.8 0.7 0.6

0.85 0.8

0.5

0.75

0.4

0.7

0

5

10

15 time(s)

20

25

Estimated fault Actual fault

1

0.9

30

0.65 0

10

5

15 time(s)

Fig. 4. The attitude four elements and the angular velocity 4

ux uy uz

2 control torque (N.m)

efficiency x

1.05

Estimated fault Actual fault

0 -2 -4 -6 -8 0

5

10

15 time(s)

20

25

30

Fig. 5. The attitude four elements and the angular velocity

20

25

30

Observer-Based Variable Structure Control

681

5 Conclusion This paper studies the integrated fault estimation and fault-tolerant control design of rigid spacecraft attitude Lipschitz nonlinear system for external disturbance and partial failure of the actuator. An adaptive sliding mode observer with an upper disturbance limit is designed to obtain an estimate of the effect of unknown actuator failure. Additionally, a fault-tolerant control strategy is proposed using the method of the fast sliding mode reaching law. Furthermore, the Lyapunov method is used to prove the stability of the faulty closed-loop system. Finally, the simulation results demonstrate the feasible of the observer-based variable structure control scheme designed in this paper. The follow-up work will be to study the fault-tolerant control rate, realize the finitetime control of the attitude system and enhance the robustness of the controller.

References 1. Han, Y., Biggs, J., Cui, N.: Adaptive fault-tolerant control of spacecraft attitude dynamics with actuator failures. J. Guid. Control. Dyn. 38(10), 2033–2042 (2015) 2. Hu, H., Liu, L., Wang, Y.: Active fault-tolerant attitude tracking control with adaptive gain for spacecrafts. Aerosp. Sci. Technol. 98, 1–13 (2020) 3. Gu, C., Qu, W.: Nonsingular terminal sliding mode control for flexible spacecraft attitude control. J. Phys.: Conf. Ser. 1828(1), 012176 (2021) 4. Hu, Q., Zhang, A., Li, B.: Adaptive variable structure fault tolerant control of rigid spacecraft under thruster faults. Acta Aeronautics et Astronautica Sinica 34(4), 909–918 (2013) 5. Zou, A., Kumar, K.: Adaptive fuzzy fault-tolerant attitude control of spacecraft. Control. Eng. Pract. 19(1), 10–21 (2011) 6. Zou, A., Kumar, K.: Robust attitude coordination control for spacecraft formation flying under actuator failures. J. Guid. Control. Dyn. 35(4), 1247–1255 (2012) 7. Ma, Y., Jiang, B., Tao, G.: Actuator failure compensation and attitude control for rigid satellite by adaptive control using quaternion feedback. J. Franklin Inst. 351(1), 296–314 (2014) 8. Bustan, D., Sani, S., Pariz, N.: Adaptive fault-tolerant spacecraft attitude control design with transient response control. IEEE/ASME Trans. Mechatron. 19(4), 1404–1411 (2014) 9. Zhang, A., Hu, Q., Huo, X.: Fault reconstruction and fault tolerant attitude control for overactivated spacecraft under reaction wheel failure. J. Astronaut. 34(3), 369–376 (2013) 10. Li, S., Rong, S.: Attitude Dynamics and Control of Spacecraft, 1st edn. Harbin Institute of Technology Press, Harbin (2020) 11. Xiao, B., Hu, Q., Friswell, M.: Active fault-tolerant attitude control for flexible spacecraft with loss of actuator effectiveness. Int. J. Adapt. Control Signal Process. 27(11), 925–943 (2013)

Research on the Method of Satellite Energy Real-Time Dispatching Ming Qiao(&) and Xiao Fei Li Institute of Remote Sensing Satellite, Beijing 100094, China [email protected]

Abstract. The satellite energy management is great significance in dealing with power supply and distribution system failures and ensuring the reliable operation of spacecraft. The article proposes a real-time energy dispatching method on the satellite, which can judge the power supply capacity and load status of the energy system on the satellite, and accurately calculate the energy status, avoid the calculation deviation caused by the inaccurate collection of ground budget parameters. Autonomous calculation of on-board energy status in multi-change working mode and multiple combinations greatly improves on-board energy utilization and enhances the satellite’s business capabilities. Keywords: Satellite

 Energy  Real-time  Dispatching

1 Introduction Satellite energy management is a classic resource allocation problem [1–3]. The shortterm load distribution of the power supply and distribution system is an important part of the optimal operation. It is to maximize the use of electrical energy under the premise of meeting the requirements of the flight mission and various constraints. When the satellite power supply fails or the power supply capacity cannot meet the power demand of the normal working electrical load, it is necessary to automatically unload part of the electrical load to make the power consumed by the electrical load match the power supply. Ensure that the satellite completes the mission or the Satellite is basically operating normally in orbit. For a satellite energy system, the capacity of energy it can provide is limited. The main purpose of the power budget in the initial stage is to ensure that the energy provided by the energy system can meet the maximum energy requirements of the satellite during various periods of operation in orbit, and to ensure the completion of the flight mission. However, since the satellite is likely to decrease its power supply capacity due to external or its own reasons during the orbiting period, or the load power rises beyond the power budget due to potential faults. With the goal of achieving the maximum flight mission and ensuring the safety of the power supply and distribution system, the satellite is required to independently determine the power supply based on telemetry data, and the power load is dynamically scheduled.

©

The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 J. Sun et al. (Eds.): Signal and Information Processing, Networking and Computers, LNEE 917, pp. 682–687, 2023. https://doi.org/10.1007/978-981-19-3387-5_80

Research on the Method of Satellite Energy Real-Time Dispatching

683

2 Disadvantages of Traditional Energy Balance Budgeting Method At present, satellite energy management mainly adopts the method of energy balance budget [4–6]. That is, according to the input conditions such as the configuration of the energy system, the power load on the satellite, the main operating mode of the satellite, etc., the depth of discharge of the batteries, the number of cycles required for energy balance, and the working time limit in different operating modes of the satellite are precalculated on the ground. This method of energy budgeting has many drawbacks. (1) There is limitation in the applicable working mode. Satellite working modes are complicated. Due to the diversity of requirements, there are multiple combinations and variability of working modes. The energy balance budget can only give the envelope of a certain working mode, and it is impossible to determine the energy in real time based on the combination and change of the load working mode. (2) The calculation accuracy is average. The energy system parameters used in the budget are mainly based on simulation or test data, and there are deviations from the on-orbit data. Therefore, a conservative budget method is often adopted according to the worst conditions, and the energy utilization is reduced. (3) The energy situation cannot be monitored in real time. When the energy system fails and the power supply capacity is degraded, the results of the energy budget will not be applicable, and the satellite cannot realize energy dispatch independently. If it is used in accordance with the predetermined working mode, it will bring risks to the entire satellite energy.

3 Satellite Energy Real-Time Dispatching Method 3.1

Design Ideas

Use the computer resources on the satellite, collect energy system related parameters, compute the energy system power supply, and then determine the upcoming mode of operation, limit the operation mode execution time. From the perspective of load power supply, the energy status is divided into the normal power supply area, limited power supply area and the emergency power supply area. The power is reasonably allocated according to the priority when the energy is not enough. The power supply area is distinguished by batteries capacity. In normal power supply area, there is no limited on payload mode; in limited power supply area, payload working time is restricted; in emergency power supply area, payload is prohibited. 3.2

Calculation Method

The energy real-time dispatching process is shown in Fig. 1.

684

M. Qiao and X. F. Li

Fig. 1. Energy real-time dispatching process

Research on the Method of Satellite Energy Real-Time Dispatching

685

The procedure flow is as follows: (1) At the end of the eclipse, the depth of discharge (DOD) is calculated, and the step (2) is proceeding; Satellite Control System provides sunlight and eclipse flag. When eclipse flag is detected, the computer starts to calculate DOD. The calculation formula of DOD is: DOD ¼ Qd =Q

ð1Þ

In the formula, Qd is the current time batteries capacity, Q is batteries rated capacity. Qd is calculated from the batteries charge current Ic, the batteries discharge current Id, and the batteries rated capacity Q, the formula is as follows: Qd ¼ Q þ g  Qc  Qf Qc ¼ Qcl þ Ic  Mtc

ð2Þ

Qf ¼ Qfl þ Id  Mtf Qc is charging capacity in the charge operation state; Qf is discharging capacity in the discharge operation state; Qc and Qf are cleared after batteries is fully charged. Qcl is charging capacity at the time of last calculation, and Qfl is discharging capacity at the time of last calculation. △Tc and △Tf represent charge operating state and the discharge operating state calculation time interval. (2) Estimate the region of the DOD. When the DOD is less than the preset first threshold DOD1, the satellite is in the normal power supply area, and the step(3) is proceeds; when it is greater than the preset second threshold the DOD2, the satellite is in the emergency power supply area, and the step(4) is proceeds; Otherwise, the satellite is in the limited power supply area, and the step(5) is proceeds. (3) In the normal power supply area, there is no restraint of the working mode, and all the working modes are performed in turn until the end of the eclipse, return to step (1). (4) After entering the emergency power supply area, the energy system reserves are at a lower level, the satellite computer prohibits all the working modes and judge the batteries capacity. Until the DOD is more than DOD1, return to step (1) Restart execution. (5) Budget the DOD of performing current working mode. If the DOD is less than DOD2, the working mode is executed normally, and repeat this step when performing the next mode of operation, until all the working modes are executed, returns to step (1); If there is a working mode the DOD is greater than DOD2, the subsequent working mode is forbidden until the end of eclipse, return to step (1). Batteries discharge depth budget method is: If the current time is in the sunlight period, and the satellite is executing the working mode n, the following steps are performed:

686

M. Qiao and X. F. Li

(5.1a) Budget sunlight period charging capacity Qcn: Qcn ¼

n X

Ii  tgi þ ðTg 

n X

i¼1

tgi Þ  Ic0

ð3Þ

i¼1

In the formula, Ii is the charge or discharge current of the batteries, Tg is the sunlight period time, tgi is the charge or discharge time when the working mode i is executed, and Ic0 is the charging current when there is no working mode of operation. (5.2a) Budget eclipse period discharging capacity Qf: Qf ¼

If Vbus Td g

ð4Þ

Vd

In the formula, Ƞ is charging efficiency, If is no working mode load current, Vbus is main bus voltage, Vd is batteries discharge end voltage, Td is eclipse period time. (5.3a) Budget the DOD is as follows: DOD ¼

 2Qf gQcn Q

Qf =Q

Qf  g  Qcn Qf \g  Qcn

ð5Þ

If the current time is in the eclipse period, and the satellite is executing the working mode k, the following steps are performed: (5.1b) Budget eclipse period charging capacity Qfk:

Qfk ¼

k Ifj Vbus tdj X g j¼1

Vd

If  Vbus  ðTd  þ

k P j¼1

g Vbus

tdj Þ ð6Þ

In the formula, Ifj is load current in working mode j, tdj is the time of working mode j. (5.2b) Budget the DOD is as follows: DOD ¼

 2Qfk gQcn Q

QfK =Q

QfK  g  Qcn QfK \g  Qcn

ð7Þ

4 Conclusion For the energy real-time dispatching method, the power supply area is divided, and the batteries capacity is used as a judgment. According to the status of the satellite, the energy scheduling can be performed in time to prevent excessive energy consumption in case of failure in the event of energy system and abnormal increasing of electricity load, and the security of the energy system is improved; Multi-combined, multi-

Research on the Method of Satellite Energy Real-Time Dispatching

687

variation mode of energy state independent computation is carried out in the method, and the satellite energy utilization rate is greatly improved.

References 1. Tan, W., Hu, J.: Spacecraft System Engineering, p. 205. China Science and Technology Press, Beijing (2009) 2. Ma, S.: Satellite Power Technology, pp. 49–51. China Astronautics Press, Beijing (2001) 3. Li, X., Qiao, M., Chen, Q.: Optimum analysis method of power balance for inclined-orbit satellite. Spacecraft Eng. 23(4), 52–56 (2014) 4. Ren, X., Yu, X., Ma, X.: Design and implementation of real time energy balance analysis system on Shenzhou manned spaceship. Spacecraft Eng. 18(5), 48–53 (2009) 5. Qiao, M., Zhu, L., Li, X., Du, Q.: Design and simulation on power supply system of SAR satellite. Spacecraft Eng. 24(2), 45–50 (2015) 6. Jiang, D., Chen, Q., Zhang, P., Wu, L.: Study on intelligent management technology for spacecraft electrical power. Spacecraft Eng. 21(4), 100–105 (2012)

System Error Registration Method of Bearings-Only Passive Location System for Surface Ships Zhi Guo1, Chunyun Dong2(&), Zhenyong Bo3, Xu Yang1, Yang Yang1, Tao Xi1, and Jian Zhang1 1

State Key Laboratory of Astronautic Dynamics, Xi’an 710043, Shaanxi, China 2 Xi’an Key Laboratory of Intelligent Instrument and Packaging Test, Xidian University, Xi’an 710071, Shaanxi, China [email protected] 3 Xi’an Satellite Control Center, Xi’an 710043, Shaanxi, China

Abstract. Aiming at the problem of system error registration in the bearingsonly location system for surface ships, this paper proposes a registration method based on the genetic algorithm (GA). The presented method directly uses the pure angle observation information of multiple passive sensors to construct the optimal objective function according to the basic principle of error registration. The original problem is then transformed into a nonlinear parameter optimization model with constraints, and GA is used to solve it. The coding mode, fitness function, selection, crossover, mutation and other iterative processes of the GA are properly designed. Numerical simulation results demonstrate that the proposed method can achieve the error registration effectively with high accuracy and reliability. Keywords: Bearings-only tracking  Passive location Genetic algorithm  Marine reconnaissance

 Error registration 

1 Introduction Maritime ships can perform electronic reconnaissance on harbor targets or ships on the ocean for a longer period of time, which is conducive to early detection of enemy ship movements and gains an advantage in military operations. Traditional reconnaissance ships often use the active positioning method in the target recognition. It can obtain the exact position of the enemy ship using actively transmitting radar signals. However, the active positioning method is less concealed and is easy to be detected in the naval attack-defense confrontation. To this end, the passive location technology through the ship-borne reconnaissance equipment of multiple ships has been paid much more attention. Generally, the passive location technology has a certain over-the-horizon effect, and can detect the enemy ship before the enemy’s radar detects our ships, which allows us to attack the enemy first and strive for the initiative [1, 2]. Passive location technology mainly includes three methods: the direction-finding cross location, the time difference location and the frequency difference location. At ©

The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 J. Sun et al. (Eds.): Signal and Information Processing, Networking and Computers, LNEE 917, pp. 688–695, 2023. https://doi.org/10.1007/978-981-19-3387-5_81

System Error Registration Method

689

present, the time difference location and the frequency difference location methods face some technical problems and limitations in the engineering implementation. On the contrary, the direction-finding cross location method is relatively mature due to its low cost as well as low technical difficulty. It has been widely used in the modern naval warfare. The direction-finding cross location method is also known as the triangle location method. It is usually composed of several sensor platforms deployed in different positions. Due to the fact that each detection platform has limited observability, it is difficult to locate the target accurately independently. In general, multiple stations make a network formation for the joint direction finding, and a central station receives all the measurement data to implement the cross location. Therefore, the destroyer and frigate formation can use this method to measure the direction of the radiation source through high-precision passive electronic reconnaissance equipment. Then, according to the data measured by each ship and the relative position relation of them, the location of the radiation source can be determined using the geometric triangle operation. However, in the aspect of engineering implementation, the direction-finding cross location method still has the disadvantage of low positioning accuracy. In the networked measurement and positioning system, the measurement error of each sensor is one of the important factors affecting the positioning accuracy of the target [1, 2]. The measurement errors consist of two types, namely the random error and the system error. Among them, the random error can be reduced by filtering method, while the systematic error generally remains constant or changes slowly and is difficult to be eliminated by filtering method [3, 4]. The research shows that, due to the existence of errors in multi- sensor systems, the target positioning accuracy of a networked positioning system is sometimes worse than that of a single sensor, which seriously affects the performance of the target tracking [5]. Therefore, the systematic error of each sensor must be accurately estimated and corrected via certain methods. This process is called the error registration or spatial registration, which is the premise and key of multi-sensor information fusion. This paper discusses the passive positioning technology and the system error registration problem. The remainder of this paper is organized as follows. Section 2 constructs an optimization model of the space registration problem for passive location system. Section 3 gives the design of genetic algorithm. Section 4 is the simulation analysis. Section 5 is the final summary.

2 Dynamic Models In the passive positioning of the marine target, it is assumed that the enemy target and our ship formation can be set on the same two-dimensional plane. Consequently, the passive positioning of the target can be achieved by the passive direction-finding sensor network. In this paper, three passive sensors A, B and C are located at ðxA ; yA Þ, ðxB ; yB Þ and ðxC ; yC Þ, respectively. zm ðkÞ ¼ ½hA;m ðkÞ ; hB;m ðkÞ ; hC;m ðkÞT is the measurement vector which includes the azimuth angle of the three sensors at the time k. The target T moves in a certain law on the plane. The schematic diagram of the positioning system is shown in Fig. 1.

690

Z. Guo et al.

For the direction-finding cross passive positioning system, when there is no measurement error, all the direction-finding lines should intersect at a point, that is, the target point. On the premise that the navigation position of each sensor is known accurately, the target position can be located by applying the line of sight intersection of any two sensors. However, as shown in Fig. 1, due to the measurement error, the line of sight between the sensor and the target will deviate from the true line of sight of the target. The degree of deviation is related to the measurement systematic error and random error. At this time, all the direction-finding lines in the direction-finding cross positioning no longer converge to one point. The two-by-two intersections of the three lines of sight in the figure can determine three different target estimation points, which are denoted as T^AB , T^AC , and T^BC , respectively.

y

TˆAB TˆAC

Enemy Warship T TˆBC

Warship A

Warship B

Warship C

O

x

Fig. 1. Passive location of the target for reconnaissance ships

It is assumed that the measurements of all the three sensors contain additive systematic errors and random errors, then the measurement equation can be summarized as follows, i.e. 8 > < hA;m ðkÞ ¼ hA ðkÞ þ DhA ðkÞ þ nA ðkÞ hB;m ðkÞ ¼ hB ðkÞ þ DhB ðkÞ þ nB ðkÞ > : hC;m ðkÞ ¼ hC ðkÞ þ DhC ðkÞ þ nC ðkÞ

ð1Þ

where hi ðkÞ ði ¼ A; B; CÞ is the true azimuth angle of each sensor without error; Dhi ðkÞ ði ¼ A; B; CÞ is the system error of each sensor, that is, the registration deviation value to be estimated; ni ðkÞ ði ¼ A; B; CÞ is the measurement random error; k represents time. Generally, the random error is small relative to the system error. When the random error is ignored, the system error is the only reason that causes the direction-finding line to deviate from the real target line of sight. The goal of error registration at time k is to remove the system error, so that the target estimation points T^AB ðkÞ, T^AC ðkÞ and T^BC ðkÞ determined by the two-by-two intersections of the three sensors are overlapped with the target point TðkÞ, that is, the three direction-finding lines are registered to the real

System Error Registration Method

691

target. At this time, the sum of the two-by-two distances of the three target estimation points DðkÞ is 0. The formula of DðkÞ is as follows. DðkÞ ¼ dðT^AB ðkÞ; T^AC ðkÞÞ þ dðT^AB ðkÞ; T^BC ðkÞÞ þ dðT^AC ðkÞ; T^BC ðkÞÞ

ð2Þ

After a simple geometric derivation, the formula for calculating the slant distance from A to T^AB is dðA; T^AB Þ ¼

ðxA  xB Þ sin hB  ðyA  yB Þ cos hB sinðhA  hB Þ

ð3Þ

dðB; T^AB Þ ¼

ðxB  xA Þ sin hA  ðyB  yA Þ cos hA sinðhB  hA Þ

ð4Þ

Similarly,

Then, the two-dimensional coordinates of the point T^AB can be calculated by the following formula. 8 ðdðA; T^AB Þ cos hA þ dðB; T^AB Þ cos hB þ xA þ xB > > < xT^AB ¼ 2 ^ ^ > > : y ^ ¼ ðdðA; TAB Þ sin hA þ dðB; TAB Þ sin hB þ yA þ yB TAB 2

ð5Þ

From Eq. (1), a deformation can be obtained (

hA ¼ hA;m  DhA  nA hB ¼ hB;m  DhB  nB

ð6Þ

By substituting formulas (3), (4) and (6) into formula (5), the two-dimensional coordinates of the point T^AB can be obtained from the measurement azimuth angles of sensors A and B. T^AC and T^BC can also be calculated in the same way. At this time, the Euclidean distance between the two points T^AB and T^AC on the plane can be determined by the following formula. dðT^AB ; T^AC Þ ¼

qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ðxT^AB  xT^AC Þ2 þ ðyT^AB  yT^AC Þ2

ð7Þ

In the same way, the other two calculation formulas dðT^AB ; T^BC Þ and dðT^AC ; T^BC Þ can be obtained. By means of putting the three distance calculation results into Eq. (2), the pairwise distance sums of the three target estimation points at the moment, namely DðkÞ can be obtained. In addition to the three system error parameters to be estimated, DðkÞ also contains three random error terms. Since the random error is numerically small relative to the system error to be estimated, we ignore it here. Therefore, the sum of the distances of the three target

692

Z. Guo et al.

estimation points DðkÞ determined by Eq. (2) is no longer strictly equal to 0. The principle of system error registration is to find a set of system error parameters to make DðkÞ as small as possible. The optimization model of the registration error estimation at time k can be described as follows. Find a set of appropriate parameters ½DhA ðkÞ; DhB ðkÞ; DhC ðkÞT to minimize the objective function DðkÞ determined by Eq. (2) while satisfying the parameter value constraints, i.e. min8 DðkÞ > < DhA;min  DhA ðkÞ  DhA;max s:t: DhB;min  DhB ðkÞ  DhB;max > : DhC;min  DhC ðkÞ  DhC;max

ð8Þ

3 Genetic Algorithm

Aiming at the above optimization problem of registration error estimation, the steps of the genetic algorithm (GA) are designed as follows.

Coding. A 16-bit binary code is adopted, of which the highest bit is the sign bit, with value

$$S_i(16) = \begin{cases} 0, & \Delta\theta_i > 0 \\ 1, & \Delta\theta_i < 0 \end{cases} \qquad (9)$$

where $S_i\ (i=A,B,C)$ represents the binary code of $\Delta\theta_i$. The correspondence between $S_i$ and the real system error value is obtained by the following decoding formula:

$$\Delta\theta_i = \Delta\theta_{i,\min} + (\Delta\theta_{i,\max} - \Delta\theta_{i,\min})\,(S_{id} + 2^{15} - 1)/(2^{16} - 1) \qquad (10)$$

where $S_{id}\ (i=A,B,C)$ represents the decimal value corresponding to $S_i$.

Initial Population Generation. A set of system error parameter values $\{b_1, b_2, b_3, \dots, b_m\}$ is randomly generated as the initial population, where $b_i$ is the coded value of one set of system errors and $m$ is the size of the initial population.

Fitness Function. Taking the reciprocal of $D(k)$ in formula (2) as the fitness function, the fitness of the $i$-th individual in the population at time $k$ can be expressed as

$$g(b_i) = 1/D_i(k) \qquad (11)$$

When multiple observations are obtained, the following cumulative fitness function is used in the batch processing:

$$g(b_i) = 1 \Big/ \sum_{k=1}^{N} D_i(k) \qquad (12)$$

Selection. The probability of an individual being selected is the ratio of its fitness to the total fitness of all individuals in the population; that is, the probability of the $i$-th individual being selected for the next-generation population is

$$P(b_i) = g(b_i) \Big/ \sum_{j=1}^{m} g(b_j) \qquad (13)$$

Crossover. Whether to perform the crossover operation is determined according to the crossover probability $p_c$. When crossover is performed, a crossover position is randomly generated, two individuals are selected from the population without repetition, and the genes after the crossover position are exchanged to generate new individuals. Mutation. According to the mutation probability, a mutation operation is performed on a randomly selected gene locus of each individual. In order to increase the diversity of the population and accelerate convergence, a multi-point mutation strategy is applied in this paper.
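To make the estimation procedure concrete, the following Python sketch implements the intersection geometry of Eqs. (3)–(5), the batch objective of Eqs. (2) and (12), and a simple GA loop. It is only an illustrative sketch: for brevity it uses a real-valued encoding with arithmetic crossover and uniform mutation instead of the paper's 16-bit binary coding, and all names (intersection, D_of_k, ga_registration, meas_batch) are assumptions rather than the authors' code.

```python
import numpy as np

def intersection(pA, pB, thA, thB):
    """Intersection of two bearing lines via Eqs. (3)-(5); angles in radians."""
    (xA, yA), (xB, yB) = pA, pB
    dA = ((xA - xB) * np.sin(thB) - (yA - yB) * np.cos(thB)) / np.sin(thA - thB)
    dB = ((xB - xA) * np.sin(thA) - (yB - yA) * np.cos(thA)) / np.sin(thB - thA)
    x = (dA * np.cos(thA) + dB * np.cos(thB) + xA + xB) / 2.0
    y = (dA * np.sin(thA) + dB * np.sin(thB) + yA + yB) / 2.0
    return np.array([x, y])

def D_of_k(biases, sensors, meas):
    """Sum of pairwise distances of the three intersection points, Eq. (2)."""
    th = {s: meas[s] - b for s, b in zip("ABC", biases)}   # bias-corrected bearings
    pts = [intersection(sensors[a], sensors[b], th[a], th[b])
           for a, b in (("A", "B"), ("A", "C"), ("B", "C"))]
    return (np.linalg.norm(pts[0] - pts[1]) + np.linalg.norm(pts[0] - pts[2])
            + np.linalg.norm(pts[1] - pts[2]))

def ga_registration(sensors, meas_batch, bounds, pop=200, gens=150,
                    pc=0.65, pm=0.1, seed=0):
    """GA search for [dthA, dthB, dthC] maximising 1 / sum_k D(k), cf. Eq. (12)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T          # bounds: [(min, max)] * 3
    X = rng.uniform(lo, hi, size=(pop, 3))            # real-valued chromosomes
    def fit(x):
        return 1.0 / sum(D_of_k(x, sensors, m) for m in meas_batch)
    for _ in range(gens):
        f = np.array([fit(x) for x in X])
        X = X[rng.choice(pop, size=pop, p=f / f.sum())]        # roulette selection, Eq. (13)
        for i in range(0, pop - 1, 2):                         # arithmetic crossover
            if rng.random() < pc:
                a = rng.random()
                X[i], X[i + 1] = a * X[i] + (1 - a) * X[i + 1], a * X[i + 1] + (1 - a) * X[i]
        mut = rng.random(X.shape) < pm                         # uniform mutation in bounds
        U = rng.uniform(np.broadcast_to(lo, X.shape), np.broadcast_to(hi, X.shape))
        X[mut] = U[mut]
    return X[np.argmax([fit(x) for x in X])]
```

A call such as ga_registration(sensors, meas_batch, bounds=[(-0.1, 0.1)] * 3) would return the estimated bias triple for one batch of bearings.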

4 Simulation Analyses

To verify the feasibility of the proposed registration algorithm, numerical simulations are carried out in this section. It is assumed that the target moves in a straight line at constant speed on the sea plane. Its initial position coordinates are (58734 m, 19539 m), and its velocities in the x and y directions are 1323.1 m/s and 1472.4 m/s, respectively. The sensor platforms are located at (1220 m, 62920 m), (120 m, 520 m) and (30000 m, 30000 m), and stay relatively static during the observation period. The three sensors can only obtain the target's azimuth information, which contains both system and random errors. The random errors are $\xi_A = \xi_B = \xi_C = 0.001$ rad, and the system errors of the three sensors are maintained during the observation period, i.e., $\Delta\theta_A = 2.5°$, $\Delta\theta_B = 3°$, $\Delta\theta_C = 4.3°$, respectively. The observation sampling period is $T_s = 1$ s, and the number of sampling points is $N = 160$. The number of Monte Carlo simulations is 500, the initial population size is 1000, the chromosome length is 48, the registration errors are initialized to 0 rad, the crossover probability is 0.65, the mutation probability is 0.1, and the number of iterations is 400. The true values of the registration errors and the estimated values of the GA registration method for sensors A, B and C are shown in Figs. 2, 3 and 4, respectively. The average population fitness curve of the GA is shown in Fig. 5.

Fig. 2. Registration error estimation result (sensor A)

Fig. 3. Registration error estimation result (sensor B)

Fig. 4. Registration error estimation result (sensor C)

Fig. 5. Fitness value of GA

The simulation results show that the proposed algorithm in this paper can effectively estimate the registration deviation of the three sensors. From Fig. 2, 3, 4 and 5, it can be seen that the GA method can achieve high registration accuracy. During the iterative process, the average fitness of the population increases with the number of iterations, and the estimated registration error converges to the true value step by step. After the optimization, the registration error estimates of the three sensors fluctuate around the true value, and the steady-state estimation accuracy is within 0.03°.

5 Conclusions

In this paper, an optimization model for the spatial registration problem of a passive positioning system for surface ships is established. A genetic algorithm (GA) is designed to solve the transformed optimization problem, and numerical simulations are carried out to verify the feasibility of the proposed method. The GA-based registration method does not require prior information on the target motion or the registration error model; it achieves accurate registration using only the measurement information. The simulation results show that it is efficient and reliable. Acknowledgement. This work was financially supported by the Natural Science Foundation of Shaanxi Province (No. 2020JQ-318) and the Fundamental Research Funds for the Central Universities (No. JB210420 and JB210509). The authors gratefully acknowledge all of the support.


Research on Passive Location of Maneuvering Target Based on IMM-SRCKF Algorithm
Chunyun Dong1, Zhi Guo2, Xiaolong Chen1, Xu Yang2, and Yang Yang2
1 Xi'an Key Laboratory of Intelligent Instrument and Packaging Test, Xidian University, Xi'an 710071, Shaanxi, China
[email protected]
2 State Key Laboratory of Astronautic Dynamics, Xi'an 710043, Shaanxi, China

Abstract. To address the problem of three-dimensional non-cooperative maneuvering target tracking in the air, a multi-sensor cooperative passive positioning solution based on the Kalman filter framework is proposed. Firstly, the constant velocity (CV) model and the Singer model are involved to approximate the unknown kinetic characteristic of the non-cooperative target. Secondly, the network positioning is carried out by multiple passive sensors deployed in different areas on the ground, and a centralized observation equation is established. Then, the registration error of each sensor is extended to the target state vector considering its influence on the positioning accuracy, and a unified state space model is built up. Finally, the interacting multiple model (IMM) algorithm based on the square root cubature Kalman filter (SRCKF) is devised to realize the estimation of the target state as well as the registration error. Numerical simulations indicate that the presented algorithm can achieve the joint estimation of the target state and the sensor registration error in real time with high accuracy.

Keywords: Maneuvering target tracking · Passive location · Cubature Kalman filter · IMM

1 Introduction

In the attack-defense confrontation under modern warfare conditions, aerial moving targets, such as fighter aircraft, unmanned aerial vehicles and guided missile warheads, often have the characteristics of high flight speed, large spatial mobility and strong attack capability. Once an attacking target has been detected in time, ensuring stable real-time tracking is the basis for follow-up countermeasures against it. Traditionally, active tracking sensors, such as pulse radar, measure the state and characteristics of the target by actively transmitting electromagnetic waves. Since both the range and the angle measurements of the target can be obtained simultaneously, the estimation of the target state can be achieved using a single sensor. Nevertheless, an actively transmitting sensor is easily detected or even destroyed by the enemy in the case of confrontation. On the other hand, passive sensors, such as infrared and optical sensors, can achieve target positioning by passively receiving the target radiation information [1]. Although the target positioning


information cannot be obtained through a single sensor due to the lack of distance measurement, the passive sensor is highly concealed because it does not actively transmit power, making it easier to survive in battlefield conditions. It can also realize accurate positioning of the target through sensor networking. Therefore, the problem of target tracking through passive sensor networking has been a hot topic of theoretical research [2, 3]. For the above-mentioned problem, three issues need to be resolved [4]. The first is target motion modeling. Since the motion law of a non-cooperative target is difficult to perceive accurately in advance, a feasible solution is to adopt a combination of multiple general maneuvering models to represent the target motion law. Among the existing models [5], the CV model is simple and efficient and the Singer model can deal with target maneuvering effectively. Therefore, this paper chooses the CV and Singer models to approximately describe non-cooperative targets. The second problem is the measurement fusion method. For the measurement data of multiple sensors, the Kalman filter provides a good solution to fuse the measurements and obtain a better state estimate. Generally, it can obtain a fused estimate of the target based on the measurement data of multiple sensors with random noise. There are many kinds of Kalman filtering algorithms. Considering that we have adopted multiple models to characterize the target, we naturally choose the interacting multiple model (IMM) algorithm. For the internal filter of the IMM, the square root cubature Kalman filter (SRCKF), which shows high precision and stability, is applied in this paper [6]. The third problem is how to deal with the registration error, that is, the measurement system error. In the network measurement system, the system errors of multiple sensors will cause divergence of the estimation, which greatly affects the positioning precision of the network system. One feasible method is to calibrate the system errors in advance. However, due to their unknown characteristics, it is usually difficult to accomplish calibration in advance. In this paper, we take the registration errors as unknown terms to be estimated within the Kalman filter framework; it is a feasible solution to determine the registration errors while estimating the target state information.

2 Mathematical Model

2.1 Target Motion Model

Suppose that the target is non-cooperative, so its motion law cannot be accurately obtained. It can still be modeled by a general motion model: as long as the update period is short enough, the target motion can be described by an approximate model with accepted error. In addition, to deal with the model mismatch caused by complex motion or high mobility, we use a variety of models to model the target.

(1) Constant Velocity (CV) Model

Considering the influence of modeling noise, the state equation of the CV model is

$$X(k+1) = F(k)X(k) + W\omega(k) \qquad (1)$$

where $X(k) = [x_k\ \dot{x}_k\ y_k\ \dot{y}_k\ z_k\ \dot{z}_k]^T$ denotes the state vector. It contains the position and the velocity of the three-dimensional maneuvering target. The state transition matrix $F(k)$ is

$$F(k) = \begin{bmatrix} \Phi_x & 0 & 0 \\ 0 & \Phi_y & 0 \\ 0 & 0 & \Phi_z \end{bmatrix}, \quad \Phi_x = \Phi_y = \Phi_z = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix} \qquad (2)$$

Moreover, $W = [T^2/2,\ T,\ T^2/2,\ T,\ T^2/2,\ T]^T$ denotes the modeling noise propagation matrix, and $\omega(k)$ is white noise with mean 0 and variance $\sigma_x^2$.

(2) Singer Model

For convenience of description, the first-order time-correlated model of the second-order system, namely the Singer model, is given as follows [7]:

$$X(k+1) = \Phi X(k) + \omega(k) \qquad (3)$$

where

$$\Phi(T, n) = \begin{bmatrix} 1 & T & (nT - 1 + e^{-nT})/n^2 \\ 0 & 1 & (1 - e^{-nT})/n \\ 0 & 0 & e^{-nT} \end{bmatrix} \qquad (4)$$

When $n \to \infty$, $\Phi(T, n)$ can be simplified as $\Phi = \begin{bmatrix} 1 & T & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$; when $n \to 0$, $\Phi(T, n)$ can be simplified as $\Phi = \begin{bmatrix} 1 & T & 0.5T^2 \\ 0 & 1 & T \\ 0 & 0 & 1 \end{bmatrix}$.

The covariance matrix of $\omega(k)$ is

$$Q_R = 2 n \sigma_a^2 \begin{bmatrix} q_{11} & q_{12} & q_{13} \\ q_{12} & q_{22} & q_{23} \\ q_{13} & q_{23} & q_{33} \end{bmatrix} \qquad (5)$$

where

$$\begin{cases} q_{11} = \left(1 - e^{-2nT} + 2nT + 2n^3T^3/3 - 2n^2T^2 - 4nTe^{-nT}\right)/(2n^5) \\ q_{12} = \left(1 + e^{-2nT} - 2e^{-nT} + 2nTe^{-nT} - 2nT + n^2T^2\right)/(2n^4) \\ q_{13} = \left(1 - e^{-2nT} - 2nTe^{-nT}\right)/(2n^3), \quad q_{22} = \left(4e^{-nT} - 3 - e^{-2nT} + 2nT\right)/(2n^3) \\ q_{23} = \left(e^{-2nT} + 1 - 2e^{-nT}\right)/(2n^2), \quad q_{33} = \left(1 - e^{-2nT}\right)/(2n) \end{cases} \qquad (6)$$

2.2 Measurement Model

Assume that the earth is flat; then a unified Cartesian coordinate system can be established, with the coordinate origin located at a certain point such as the data processing center. Multiple sensors are distributed in a certain configuration on the ground. For passive sensors, only the azimuth angle and the elevation angle can be observed. In addition, the angle measurement contains not only random noise but also system error; the effects of both are considered in this paper. Suppose there are $N$ sensors and $Z = (\beta_1, \varepsilon_1, \beta_2, \varepsilon_2, \dots, \beta_N, \varepsilon_N)^T$ denotes the measurement vector of the passive monitoring system, in which $\beta$ is the azimuth angle and $\varepsilon$ is the elevation angle. For each sensor, the measurement equation can be expressed as

$$\begin{cases} \beta_i(k) = \arctan\left(y_i(k)/x_i(k)\right) + \Delta\beta_i + v_{\beta i}(k) \\ \varepsilon_i(k) = \arctan\left(z_i(k)\big/\sqrt{x_i(k)^2 + y_i(k)^2}\right) + \Delta\varepsilon_i + v_{\varepsilon i}(k) \end{cases} \qquad (7)$$

where $v_i(k)$ denotes the measurement noise vector at time $k$; it is white noise with mean 0 and variance $\sigma_i^2$. $\Delta\beta$ and $\Delta\varepsilon$ are the system errors of the azimuth and elevation measurements, respectively.

2.3 Augmented Model

Besides the position and velocity parameters, the registration errors $\Delta\beta_i$ and $\Delta\varepsilon_i$ are also unknown terms to be estimated in the target tracking problem. Therefore, they are appended to $X(k)$, forming the augmented state vector

$$X_A(k) = [X(k)^T, \Delta\beta_1, \Delta\varepsilon_1, \Delta\beta_2, \Delta\varepsilon_2, \dots, \Delta\beta_N, \Delta\varepsilon_N]^T \qquad (8)$$

Then, the state equation and the observation equation can be rewritten as

$$X_A(k+1) = F_A X_A(k) + \omega_A(k) \qquad (9)$$

$$Z_A(k) = H_A(X_A(k)) + v_z(k) \qquad (10)$$

where $F_A = \begin{bmatrix} \Phi_A & 0 \\ 0 & I_{2N\times 2N} \end{bmatrix}$ and $\omega_A(k) = [\Gamma\omega(k)\ 0\ 0 \cdots 0\ 0]^T$, and

$$H_A(X_A(k)) = \begin{bmatrix} \arctan\left(y_1(k)/x_1(k)\right) + \Delta\beta_1 \\ \arctan\left(z_1(k)\big/\sqrt{x_1(k)^2 + y_1(k)^2}\right) + \Delta\varepsilon_1 \\ \vdots \\ \arctan\left(y_N(k)/x_N(k)\right) + \Delta\beta_N \\ \arctan\left(z_N(k)\big/\sqrt{x_N(k)^2 + y_N(k)^2}\right) + \Delta\varepsilon_N \end{bmatrix} \qquad (11)$$

Furthermore, $v_z(k) = [v_{\beta 1}(k), v_{\varepsilon 1}(k), \dots, v_{\beta N}(k), v_{\varepsilon N}(k)]^T$ is the extended observation noise vector, and $H_A$ is the newly formed measurement function.

3 IMM-SRCKF

In this paper, an IMM algorithm based on the SRCKF (IMM-SRCKF) is applied. The steps of the IMM-SRCKF algorithm are as follows.

Step 1. Initial interaction. Suppose $\hat{x}^j(k-1|k-1)$ denotes the state estimate of model $j$ at time $k-1$, and $P^j(k-1|k-1)$ is the corresponding covariance matrix. It is assumed that the transitions between the models obey a Markov chain with transition probabilities $p_{ij}$, $i, j = 1, 2, \dots, r$, which satisfy $0 < p_{ij} < 1$ and $\sum_{j=1}^{r} p_{ij} = 1$. After the initial interaction, the state estimate and its covariance matrix are

$$\hat{x}^{0j}(k-1|k-1) = \sum_{i=1}^{r} \hat{x}^i(k-1|k-1)\,\mu_{ij}(k-1)$$
$$P^{0j}(k-1|k-1) = \sum_{i=1}^{r} \left[P^i(k-1|k-1) + \left(\hat{x}^i(k-1|k-1) - \hat{x}^{0j}(k-1|k-1)\right)\left(\hat{x}^i(k-1|k-1) - \hat{x}^{0j}(k-1|k-1)\right)^T\right]\mu_{ij}(k-1) \qquad (12)$$

where $\mu_{ij}(k-1) = p_{ij}\mu_i(k-1)/C_j$ and $C_j = \sum_{i=1}^{r} p_{ij}\mu_i(k-1)$.

Step 2. Filter update. According to $\hat{x}^{0j}(k-1|k-1)$ and $P^{0j}(k-1|k-1)$ obtained in Step 1, the optimal state estimate $\hat{x}^j(k|k)$ and the corresponding covariance matrix $P^j(k|k)$ of each model are calculated via the SRCKF [6], where $P^j(k|k) = S^j(k|k)\,S^j(k|k)^T$.

Step 3. Model probability update. Assume that the filter residuals of each model obey a Gaussian distribution; then the likelihood function of model $j$ is

$$\Lambda^j(k) = \exp\left\{-[v^j(k)]^T (S^j(k))^{-1} v^j(k)/2\right\} \Big/ \sqrt{\left|2\pi S^j(k)\right|} \qquad (13)$$

The updated model probability is

$$\mu_j(k) = \Lambda^j(k) C_j / C, \quad C = \sum_{i=1}^{r} \Lambda^i(k) C_i \qquad (14)$$

The filtering residual and its covariance are

$$v^j(k) = z(k) - \hat{z}^j(k|k-1), \quad S^j(k) = S_{zz}(k|k-1) S_{zz}^T(k|k-1) \qquad (15)$$

Step 4. Combination of state estimate and covariance.

$$\hat{x}(k|k) = \sum_{i=1}^{r} \hat{x}^i(k|k)\,\mu_i(k)$$
$$P(k|k) = \sum_{i=1}^{r} \mu_i(k)\left\{P^i(k|k) + [\hat{x}^i(k|k) - \hat{x}(k|k)][\hat{x}^i(k|k) - \hat{x}(k|k)]^T\right\} \qquad (16)$$
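The mixing and combination steps of the IMM cycle (Eqs. (12)–(14) and (16)) can be sketched in Python as follows; the per-model SRCKF prediction/update of Step 2 is not reproduced here, and the function names are illustrative assumptions.

```python
import numpy as np

def imm_mix(x_hats, P_hats, mu, p_trans):
    """Initial interaction, Eq. (12): mix the r model-conditioned estimates."""
    r = len(x_hats)
    c = p_trans.T @ mu                                 # C_j = sum_i p_ij * mu_i
    mu_ij = p_trans * mu[:, None] / c[None, :]         # mixing probabilities
    x0, P0 = [], []
    for j in range(r):
        xj = sum(mu_ij[i, j] * x_hats[i] for i in range(r))
        Pj = sum(mu_ij[i, j] * (P_hats[i] + np.outer(x_hats[i] - xj, x_hats[i] - xj))
                 for i in range(r))
        x0.append(xj)
        P0.append(Pj)
    return x0, P0, c

def imm_combine(x_hats, P_hats, likelihoods, c):
    """Model probability update (Eq. (14)) and output combination (Eq. (16))."""
    mu = likelihoods * c
    mu = mu / mu.sum()
    x = sum(m * xh for m, xh in zip(mu, x_hats))
    P = sum(m * (Ph + np.outer(xh - x, xh - x))
            for m, xh, Ph in zip(mu, x_hats, P_hats))
    return x, P, mu
```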

4 Simulation Analyses

It is assumed that there are 3 sensors located at (0 km, 15 km, 0 km), (−40 km, 0 km, 0 km), and (−40 km, 15 km, 0 km), respectively. The measurement random noise follows a Gaussian distribution with mean 0 and variance 0.01 rad. The system errors of the angle measurements are 0.1 rad for all three sensors. The target maneuvers in three dimensions with initial position (−30 km, 15 km, 30 km). The sampling period is 0.1 s, and 1000 Monte Carlo simulations are carried out to evaluate the algorithm performance. The actual flight trajectory and the estimated trajectory are shown in Fig. 1. The trend of the model transition probability over time is shown in Fig. 2. The tracking performance, including the RMSE [8] of the position and the velocity, is displayed in Fig. 3. The RMSE of the system error estimation for all three sensors is exhibited in Fig. 4. From Fig. 1, the target performs a maneuvering flight in three-dimensional space, and the estimated trajectory agrees well with the actual flight trajectory except for the initial few seconds, which proves the effectiveness of the proposed algorithm. From Fig. 2, the two models switch over time automatically to match the target maneuver characteristics. From Fig. 3, with the IMM-SRCKF method, the position and velocity of the non-cooperative target can be properly estimated.

Fig. 1. Target flight trajectory and the estimated trajectory

Fig. 2. Model transition probability

Fig. 3. RMSE of target state estimation

Fig. 4. RMSE of system error estimation

From Fig. 4, as the iterations proceed, the estimates of the registration errors all converge to their true values. All the results reveal that the presented IMM-SRCKF technique can properly estimate the target state and the registration errors simultaneously.

5 Conclusions

To solve the problem of passive tracking of a three-dimensional aerial target, this paper establishes a mathematical model under certain assumptions. A multi-sensor fusion IMM-SRCKF algorithm is proposed to obtain an effective estimate of the target state in real time while simultaneously obtaining the sensors' system errors. It is demonstrated that the algorithm runs stably and with high computational efficiency. Acknowledgement. This work was supported by the Natural Science Foundation of Shaanxi Province (No. 2020JQ-318) and the Fundamental Research Funds for the Central Universities (No. JB210420 and JB210509). The authors gratefully acknowledge all of the support.

References
1. Li, L.Q., Xie, W.W., Liu, Z.X.: Bearings-only maneuvering target tracking based on truncated quadrature Kalman filtering. AEU-Int. J. Electron. Commun. 69(1), 281–289 (2015)
2. Li, L.Q., Wang, X., Liu, Z.X., et al.: Auxiliary truncated unscented Kalman filtering for bearings-only maneuvering target tracking. Sensors 17(5), 972 (2017)
3. Mušicki, D.: Bearings only multi-sensor maneuvering target tracking. Syst. Control Lett. 57(3), 216–221 (2008)
4. Zhu, W., Wang, W., Yuan, G.: An improved interacting multiple model filtering algorithm based on the cubature Kalman filter for maneuvering target tracking. Sensors 16, 805 (2016)
5. He, H., Liu, G., Zhu, X., et al.: Interacting multiple model-based human pose estimation using a distributed 3D camera network. IEEE Sensors J. 19, 10584–10590 (2019)
6. Guo, Z., Dong, C.Y., Cai, Y.L., et al.: Time-varying transition probability based IMM-SRCKF algorithm for maneuvering target tracking. Syst. Eng. Electron. 37(1), 24–30 (2015)
7. Li, X.R., Jilkov, V.P.: Survey of maneuvering target tracking. Part I. Dynamic models. IEEE Trans. Aerosp. Electron. Syst. 39(4), 1333–1364 (2003)
8. Fang, F., Cai, Y.L.: Multi-sensor space registration for maneuvering target tracking. J. Solid Rocket Technol. 39, 574–579 (2016)

A Standard Driver Design of Onboard Control Computer
Chenlu Liu, Jian Xu, Zhi Ma, Fei Peng, Mengdan Cao, and Jianyu Yang
Beijing Institute of Control Engineering, Beijing 100190, China
[email protected]

Abstract. The device driver is a dedicated program for communication between the computer and its equipment, and an integral part of software development. In traditional satellite design, each computer product has its own functions; thus, the supporting software also needs to be developed separately, which leads to a heavy workload of repeated software development. At present, the functions of the onboard control computer are becoming more and more complex, software is hard to reuse, and the system risk increases. This paper establishes a standard software driver function library and integrates an anti-SEU design and a configurable design to further enhance the reliability of the software in orbit. At the same time, it enables rapid configuration for different products and improves the reuse rate of the software. The method has been successfully applied in scientific research and achieves good results.

Keywords: Onboard control computer · Embedded Linux · Driver design · Configurable · Anti-SEU

1 Introduction

The device driver is a communication program between the kernel and the hardware, and it directly determines whether the hardware can work properly. Generally, driver developers need to understand the specific principles and processing logic of the hardware devices, resulting in a massive workload of driver development. At present, research on driver models has become more in-depth. Taking the mainstream real-time embedded operating systems on the market as examples, the driver models are quite different. These systems can be divided into two classes. The first type is fully functional and general but overly complicated, such as RT_thread [1] and WinCE. The other type is simple in function but strong in real-time performance, such as VxWorks [2], ECOS [3], and QNX [4]. RT_thread is an embedded system independently developed in China (released under the GPLv2 license), including a kernel module, a TCP/IP protocol stack, a file system, and other modules. The system is characterized by solid real-time performance and easy expansion. In addition, the code is open source and can be obtained for free for teaching and production use. RT_thread uses an object-oriented mechanism, which treats devices as a class of objects: the upper application only needs to find the corresponding device name to obtain the device handle, and then uses the standard device interface to access the hardware device.


The VxWorks system uses a device linked list and a driver linked list to manage devices and drivers, respectively [5]. When a device is added to the system, it is first initialized, and then its driver is attached to the driver management linked list. To be more user-friendly, devices of the same type can share a driver. Most application programs access the device through the I/O system interface, and a few devices need to abstract the function interface further. Compared with industrial-grade driver management, the driver management of the onboard control computer is characterized by a wide variety of device types and a high degree of customization. The driver functions used by different products differ considerably, which is not conducive to modular design and portability of the software and results in a large amount of repetitive software development. Thus, it is hard to increase the efficiency of software development and difficult to verify the system. A practical problem faced in production is how to design a scalable driver model that can be configured quickly and enables code reuse among various products. The main contributions of this article include three aspects:
1. This article provides a standard software driver function library to standardize product driver design and reduce the workload of repetitive software development.
2. Based on the driver, this article adds a new anti-SEU design to enhance the reliability of the software.
3. This article provides configurable tools that can quickly develop customized products, saving developers time and costs.
This paper introduces the industrial implementation of the standard driver architecture used in onboard control computers [6, 7]. The proposed driver model contains an anti-SEU design based on the standard interface, which can significantly improve software reliability. Furthermore, to reduce the time cost of customized configurations, we develop a tool that performs error checks and generates header files, which can be used as part of the driver source code. The following takes the I/O chip A6017 used in production as an example.

2 Standard Driver Design

To improve the versatility and reliability of the software, this paper proposes a standard software driver function library for the onboard control computer, which decouples the application software from the hardware. This method provides a unified interface for the upper layer and uses a configuration table to meet the requirements of different products. In addition, considering the impact of single events in space, an anti-SEU design is added to the standard driver design, ensuring the reliability of the upper-layer software. The standard driver model design framework is shown in Fig. 1. It mainly consists of four parts: the hardware platform, the functional design, the anti-SEU design, and the configuration design. Finally, this model provides the standard driver interface to the application program. The standard driver model is detailed in the sections below.

Fig. 1. The standard drive model design framework

2.1 A6017 Introduction

A6017SRSC20MA (hereinafter referred to as A6017) is an I/O logic chip for aerospace applications, with many internal functions and complex configuration methods, which can meet the needs of most onboard control computers. The A6017 can be used to realize various functions, such as general-purpose input and output, eight-channel frequency acquisition, synchronous serial ports, asynchronous serial ports, a CAN controller [8], SPI [2], multi-channel SPI automatic data acquisition, time management, interrupt management, processor bus bridge conversion, bus ready control, etc. The general-purpose input and output design provides 80 GPIO channels, of which 16 channels are single-pulse latch, 32 channels are pulse count, and 32 channels are pulse output; notably, the 32 pulse-output channels support multiplexing. The eight-channel frequency acquisition function realizes frequency acquisition of 8-channel pulse signals within a given time range. The synchronous serial port function can be subdivided into the universal synchronous serial port, the CSB synchronous serial port, and the telemetry synchronous serial port. The A6017 realizes ten independent universal synchronous serial ports and two groups of independent CSB synchronous serial ports; the CSB bus and synchronous serial port pins are multiplexed, and the two groups of telemetry synchronous serial ports are effectively independent. The asynchronous serial port function realizes 16 independent serial port modules and 22 time-division multiplexed channels, and the time-division multiplexed serial ports share pins with the SPI module. The SPI function has seven independent SPI modules, which can complete 128 channels of automatic acquisition.


The CAN controller function realizes four independent CAN logic modules. The time management function mainly includes control cycle generation, periodic signal generation, second-pulse processing, and onboard time generation. The processor bus bridge extends to the standard parallel bus commonly used in the control computer, namely the address bus, data bus, and control bus; a bus-ready signal and the processor carry out the access handshake.

2.2 Standardization of Underlying Program

To increase the software reuse rate, the standard driver design adopts a scalable design scheme to meet the diverse needs of products. Because hardware design states vary, the realization principle of the same function can be very different; for example, either two pulses or one pulse may be used to turn on a switch. Thus, we add a glue layer to the functional design, which encapsulates the multiple possible states and shields the differences in the underlying hardware design. Finally, the proposed driver model provides a unified interface for the same function. According to the functions of the A6017 chip, the standard driver design provides a bit port module, a CAN-bus module, a CSB bus module, a pulse module, an asynchronous serial port module, a time module, a synchronous serial port module, and a digital-to-analog conversion module. The functions of each module are independent, and there is no nested calling relationship. In addition, the functional modules support the common use of multiple IO boards and have good scalability. When other resources are sufficient, the number of IO boards is not limited, which meets the requirements of complex products.

2.3 Anti-SEU Design

The anti-SEU design is an essential module in the onboard standard driver design [9, 10], and is mainly responsible for mitigating single event upsets on the A6017 chip. A single event upset is a problem that is difficult to avoid for orbiting spacecraft: a single high-energy particle is injected into the sensitive area of a semiconductor device, causing the logic state of the device to flip (from 0 to 1 or from 1 to 0). If the single event flip occurs in the storage area, the stored data may become wrong. If a register of the A6017 chip flips, a certain function may become abnormal and the hardware product may even be put into a more dangerous state. Under normal circumstances, a two-out-of-three redundancy strategy is adopted for the storage area; together with an error correction algorithm, the harm caused by single event flipping can be significantly reduced. As for the chip's registers, the standard driver function design proposed in this article mainly protects against single event flipping from two aspects: refreshing the register values and turning off the enable bits in time. At the same time, it performs single event flip statistics in idle tasks, which allows the chip's status to be observed. For a specific product, once the system is running, the internal registers of the A6017 chip can be divided into two categories: registers whose values will not change during the entire running process, and registers whose values change according to upper-level program operations.


Regarding the first type of register, we can periodically refresh the value of the register, so that it is restored to normal within a few control cycles after a single event flip. It is worth noting that this includes unused registers reserved by the hardware design: such registers need to be constantly refreshed to ensure that they hold safe values and cannot cause damage to the product. For the second type of register, whose value changes, this paper takes no refresh action to guarantee the correctness of the value. For many modules that generate actions, the enable bit plays a very critical role. Take the pulse module as an example: to send out a pulse, the enable register and the start register need to be set to 1 at the same time. However, there is a difference between the enable register and the start register. When the start register is set to 1, it automatically returns to 0 afterwards; when the enable register is set to 1, it keeps the set value until the next operation. Now let us analyze the situation. After the pulse is sent, if the other register values related to the pulse are correct, then once a single event flips the start register, a pulse will be generated because the enable bit is still in its previous high state, which may affect the control system. Therefore, the enable bit of the pulse should be turned off after the pulse is sent, which greatly reduces the impact of single event flips.

2.4 Configurable Design

This article uses a configuration table to record the information of the hardware devices, such as address information, register information, bit definitions, etc. When the design state of a device changes, the configuration table needs to be modified accordingly. The driver first obtains the device status by reading the information in the configuration table and then checks whether the information is illegal; if there is no error, it automatically generates the related header files. For different products, users only need to modify the configuration table, resulting in a high rate of software reuse. The configurable setting logic is as follows: for a product, the basic parameters of each IO board must be configured first, such as the base address, frequency, and control cycle generation method; then, users configure the related functions according to the specific design of each IO board. Many registers are configured in the configuration table, and they are scattered in different function files, while a register in the A6017 chip usually indicates the status of several channels. For example, the register with offset address 07H indicates the number of signal filtering times corresponding to GPIO [7:0]. Therefore, before generating the header file, an error-correction check is performed on the configuration table, which prevents a register value from being configured inconsistently. After the error-correction check is completed, the configurable tool automatically generates the header file as part of the source code. The automatic generation of header files breaks the traditional model of manually modifying header files, which greatly saves development time and reduces the probability of human error. Take the bit port input function as an example: the information in the configuration table includes six categories, namely base address, GPIO port number, reverse flag, mode, filter equivalent, and filter width. When the script tool runs, the specific information of each bit port input is extracted to generate the .h source code file; a sketch of this generation step is given below.
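A minimal Python sketch of such a generation tool is shown below, assuming the bit-port configuration table is stored as a CSV file; the column names, macro names and file names are illustrative assumptions, not the actual tool.

```python
import csv

def generate_bitport_header(table_csv, out_path="bitport_cfg.h"):
    """Read a bit-port input configuration table (CSV), run a simple
    consistency check, and emit #define lines as a generated header file."""
    rows = list(csv.DictReader(open(table_csv, newline="", encoding="utf-8")))
    # Error-correction check: the same GPIO port of one IO board must not be
    # configured twice with different settings.
    seen = {}
    for r in rows:
        key = (r["base_address"], r["gpio_port"])
        if key in seen and seen[key] != r:
            raise ValueError(f"conflicting configuration for GPIO port {key}")
        seen[key] = r
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(f"/* auto-generated from {table_csv} -- do not edit by hand */\n")
        for i, r in enumerate(rows):
            f.write(f"#define BITPORT{i}_BASE     ({r['base_address']})\n")
            f.write(f"#define BITPORT{i}_PORT     ({r['gpio_port']})\n")
            f.write(f"#define BITPORT{i}_REVERSE  ({r['reverse_flag']})\n")
            f.write(f"#define BITPORT{i}_MODE     ({r['mode']})\n")
            f.write(f"#define BITPORT{i}_FILT_EQ  ({r['filter_equivalent']})\n")
            f.write(f"#define BITPORT{i}_FILT_W   ({r['filter_width']})\n")
    return out_path
```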

2.5 Standardization of User Interface

The driver design draws on the design ideas of the VxWorks operating system driver, which uses a device list and a driver list to manage the drivers [11, 12]. Each device provides seven standard user interfaces for upper-layer users, as shown in Table 1.

Table 1. Standard user interfaces
Function name | Function description
creat()       | Create and open a new device
open()        | Open a new or existing device
read()        | Read from a device
write()       | Write to a device
ioctl()       | Other functions
close()       | Close the device
remove()      | Remove the device

The device list is realized by a doubly-linked list, which connects all the devices in the system in series. The hardware interface address and private data are stored in the device information block. For example, if there are multiple identical serial ports, multiple device blocks need to be created, and the hardware interface address of each device is different. The driver list maintains the entry function address of each driver. Each device occupies a row in the driver maintenance list, and the seven functions of the device form the seven columns of that row, as shown in Fig. 2. Each device connects to its driver through the driver registration function. When a new device is to join the system, it must first be added to the device list and its driver associated with the standard interface of the operating system; the user can then use its functions through the standard interface.

Fig. 2. The management of device list
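The following Python sketch illustrates the device-list/driver-list idea with the seven standard interfaces; it is a conceptual sketch under assumed names, not the onboard C implementation.

```python
class DeviceRegistry:
    """Minimal sketch of the device list / driver list of Sect. 2.5: each
    registered device maps the seven standard interfaces to driver entry functions."""
    STANDARD_IFACES = ("creat", "open", "read", "write", "ioctl", "close", "remove")

    def __init__(self):
        self.devices = {}            # device name -> device information block
        self.drivers = {}            # device name -> {interface: entry function}

    def add_device(self, name, base_addr, driver_entries):
        missing = set(self.STANDARD_IFACES) - set(driver_entries)
        if missing:
            raise ValueError(f"driver for {name} lacks interfaces: {missing}")
        self.devices[name] = {"base_addr": base_addr}
        self.drivers[name] = dict(driver_entries)

    def call(self, name, iface, *args):
        # Upper-layer software reaches the hardware only through the standard
        # interface; the driver entry hides the board-level differences.
        return self.drivers[name][iface](self.devices[name], *args)


# Hypothetical usage: register a serial-port device and open it.
def uart_open(dev):
    return f"open uart at {hex(dev['base_addr'])}"

def uart_read(dev, n):
    return b"\x00" * n

stub = lambda dev, *a: None
registry = DeviceRegistry()
registry.add_device("uart0", 0x40001000, {
    "creat": stub, "open": uart_open, "read": uart_read,
    "write": stub, "ioctl": stub, "close": stub, "remove": stub})
registry.call("uart0", "open")
```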


3 Experiment

The experiment mainly includes three parts: the functional test, the anti-SEU test, and the configuration test, which are introduced in detail below.

3.1 Functional Test

The functional test in this article is divided into two steps. First, the underlying hardware driver is tested: the underlying driver is called directly, and simulation tests are performed on each module. All functional modules work normally, proving that there is no coupling between the functional modules and that the implementation is correct. After that, the correctness of the driver connection is verified through access via the standard IO interface. Taking the serial port as an example, functions such as initialization, sending, and receiving are realized through the standard IO interface.

3.2 Anti-SEU Test

To simulate the actual situation in space, this paper injects faults into the hardware memory, which artificially creates the illusion that a bit in the hardware device has flipped. We can then verify the effectiveness of the anti-SEU strategy in the driver design. After several control cycles, the registers covered by the anti-SEU strategy return to normal, and the single event upset count is consistent with the expected value. The experiment proves that the anti-SEU design proposed in this paper is effective.

3.3 Configuration Test

We first enable all the functions in the configuration table and successfully generate the header file with the script tool. We then compile and run the driver program and test the functions one by one; the experiments show that all functional modules work normally. Finally, we select several functions in the configuration table to use and test the corresponding functional modules, repeating this several times for more comprehensive coverage. In addition, if a value in the configuration table is illegal, the script tool reports the error during the error-correction check.

4 Conclusion

This paper proposes a new standard driver design method for the onboard control computer, which adds an anti-SEU design and a configurable design on top of the standard driver. The driver model has good scalability and can be used easily by developers. At present, this method has been successfully used in actual software development with good results.


References
1. Liang, T., et al.: Home environment detection system based on RT-thread and OneNET cloud platform (2020)
2. Lu, H.: Design and realization of SPI driver based on VxWorks. In: 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). IEEE (2020)
3. Domahidi, A., Chu, E., Boyd, S.: ECOS: an SOCP solver for embedded systems. In: Control Conference. IEEE (2013)
4. Louyeh, R.M.: Features of the QNX operating system (2020)
5. Zhang, C.H., Peng, R.M.: Design of device driver in VxWorks. Appl. Mech. Mater. 380–384, 2038–2041 (2013)
6. Zhu, K., et al.: Device driver design based on three-axis acceleration sensor in Linux (2020)
7. Hongyu, L.I., Maoyue, L.I., Liu, X.: The development of PCI hardware device driver programs based on RTX system. Manuf. Technol. Mach. Tool (2019)
8. Yin, J., Tao, Z., Cui, K.: Design and optimization of CAN driver in VxWorks system. Comput. Eng. 46(511.03), 198–203 (2020)
9. Sun, H., et al.: Engineering realization of the anti-SEU techniques for the FPGAs. Electron. Technol. (2015)
10. Xue, Q., et al.: Flight experiments and failure analysis of FPGA for anti-SEU at aviation altitudes. High Power Laser Particle Beams 28(12) (2016)
11. Yuan, C.: Design technology on drivers reusable software based on VxWorks. Ind. Control Comput. (2019)
12. Hao, Z., et al.: A new communication model for multi-tasks in VxWorks. IEEE (2010)

Optimal Multiple-Impulse Noncoplanar Rendezvous Trajectory Planning Considering the J2 Perturbation Effects
Guangde Xu, Zongbo He, Huiguang Zhao, Qiang Zhang, Yanjun Xing, and Cong Feng
Beijing Institute of Spacecraft System Engineering, China Academy of Space Technology, Beijing 100094, China
[email protected]

Abstract. The minimum-fuel optimal multiple-impulse noncoplanar rendezvous trajectory planning problem considering the J2 perturbation effects is studied. The multi-impulse orbit dynamic model is constructed based on the dynamic equation considering J2 perturbation and the theory of impulsive orbit maneuver. To automatically satisfy the terminal condition, the Lambert algorithm considering J2 perturbation is introduced to solve the last two impulses. Taking the impulse times and delta velocities as optimization variables and the total velocity increment as the objective function, a multi-impulse orbital rendezvous planning model is established. The optimization is carried out by the particle swarm optimization (PSO) algorithm. The simulation results show that the proposed method is effective, using three-impulse and four-impulse orbit maneuvers to achieve rendezvous with noncoplanar targets.

Keywords: Multiple-impulse rendezvous trajectory planning · Noncoplanar rendezvous · Particle swarm optimization · J2 perturbation

1 Introduction

In order to achieve space transfer, rendezvous and other tasks, spacecraft need to carry out long-distance orbit maneuvers. For the problem of remote rendezvous planning for space targets, the multi-impulse rendezvous planning method is an important approach, and extensive research has been done in the literature. In [1–4], a multi-impulse rendezvous planning model is established based on the two-body model; the Lambert equation is used to automatically satisfy the rendezvous terminal condition, which improves the convergence of the solution, but the orbit perturbation effect is not taken into account. In [5], the sequential quadratic programming method is used to study the multiple-impulse noncoplanar rendezvous trajectory planning problem considering J2 perturbation. However, the iterative process cannot guarantee that all solutions satisfy the terminal condition, and the convergence speed is slow.


In this paper, J2 perturbation is considered to study the optimal multiple-impulse noncoplanar rendezvous trajectory planning problem. A multi-impulse orbital maneuver planning model considering J2 perturbation is established. The Lambert method considering J2 perturbation is used to solve the last two impulses to meet the terminal requirements, and the particle swarm optimization algorithm is used to optimize the impulse times and velocity increments. The optimal noncoplanar rendezvous trajectories with three impulses and four impulses are obtained.

2 J2 Perturbed Multi-Impulse Orbit Maneuver Problem

The starting conditions of the spacecraft rendezvous orbit are $t_0$, $v_0$ and $r_0$, and the terminal conditions are $t_f$, $v_f$, $r_f$. In the geocentric equatorial inertial coordinate system, only the most significant J2 effect on the spacecraft is considered. By taking the first-order approximation of the Earth's gravitational field model, the spacecraft orbit dynamic equation considering the Earth's J2 oblateness effect is obtained:

$$\dot{x} = v_x, \quad \dot{y} = v_y, \quad \dot{z} = v_z$$
$$\dot{v}_x = -\frac{\mu x}{r^3}\left[1 - J_2 \frac{3}{2}\left(\frac{R_e}{r}\right)^2 \left(5\frac{z^2}{r^2} - 1\right)\right]$$
$$\dot{v}_y = -\frac{\mu y}{r^3}\left[1 - J_2 \frac{3}{2}\left(\frac{R_e}{r}\right)^2 \left(5\frac{z^2}{r^2} - 1\right)\right]$$
$$\dot{v}_z = -\frac{\mu z}{r^3}\left[1 + J_2 \frac{3}{2}\left(\frac{R_e}{r}\right)^2 \left(3 - 5\frac{z^2}{r^2}\right)\right]$$

where $J_2 = 1.08264\times10^{-3}$, $\mu$ is the gravitational constant of the Earth, $R_e$ is the radius of the Earth, and $r$ is the geocentric distance of the spacecraft. When an orbit maneuver impulse is applied, the orbit state before application is denoted by the superscript "−" and the state after application by "+". For convenience of representation, set

$$r_i^+ = r_i^- = r_i, \quad t_i^+ = t_i^- = t_i, \quad \Delta v_i = v_i^+ - v_i^-$$

Let $r(t+\Delta t) = f(r(t), v(t), t, t+\Delta t)$ and $v(t+\Delta t) = g(r(t), v(t), t, t+\Delta t)$ be the solutions of the dynamic equation. For an intermediate impulse ($i \ne 1$, $i \ne n$, $n > 2$), the following conditions are satisfied:

$$r_i = f(r_{i-1}, v_{i-1}^+, t_{i-1}, t_i), \quad v_i^- = g(r_{i-1}, v_{i-1}^+, t_{i-1}, t_i)$$

The state before the first impulse satisfies

$$r_1 = f(r_0, v_0, t_0, t_1), \quad v_1^- = g(r_0, v_0, t_0, t_1)$$

where $t_1$ is the first impulse time. The terminal constraints are

$$r_n = f(r_f, v_f, t_f, t_n), \quad v_n^+ = g(r_f, v_f, t_f, t_n)$$

where $t_0$ and $t_f$ are the start and end times. The schematic diagram of multi-impulse orbital rendezvous is shown in Fig. 1. The purpose of orbital transfer planning is to find impulses that satisfy the following constraints:

$$r_1 = f(r_0, v_0, t_0, t_1), \quad v_1^- = g(r_0, v_0, t_0, t_1)$$
$$r_i = f(r_{i-1}, v_{i-1}^+, t_{i-1}, t_i), \quad v_i^- = g(r_{i-1}, v_{i-1}^+, t_{i-1}, t_i)$$
$$r_n = f(r_f, v_f, t_f, t_n), \quad v_n^+ = g(r_f, v_f, t_f, t_n)$$
$$t_0 \le t_1 < t_2 < \cdots < t_n \le t_f$$

Fig. 1. Schematic diagram of multi-impulse orbit maneuver
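A minimal Python sketch of the J2-perturbed propagation that plays the role of $f$ and $g$ in this section is given below; the constants and function names are illustrative, and SciPy's general-purpose integrator stands in for whatever integrator the authors actually use.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
RE = 6378137.0           # Earth's equatorial radius, m
J2 = 1.08264e-3

def j2_rhs(t, s):
    """Right-hand side of the J2-perturbed dynamics above; state s = [r; v] in metres."""
    x, y, z, vx, vy, vz = s
    r = np.sqrt(x * x + y * y + z * z)
    k = 1.5 * J2 * (RE / r) ** 2
    zr2 = 5.0 * z * z / r ** 2
    ax = -MU * x / r ** 3 * (1.0 - k * (zr2 - 1.0))
    ay = -MU * y / r ** 3 * (1.0 - k * (zr2 - 1.0))
    az = -MU * z / r ** 3 * (1.0 + k * (3.0 - zr2))
    return [vx, vy, vz, ax, ay, az]

def propagate(r0, v0, dt):
    """Propagate position and velocity over dt seconds (the f and g of the text)."""
    sol = solve_ivp(j2_rhs, (0.0, dt), np.concatenate([r0, v0]), rtol=1e-10, atol=1e-9)
    return sol.y[:3, -1], sol.y[3:, -1]
```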

3 The J2 Perturbed Lambert Problem

The Lambert problem is as follows: the position $\vec{r}_1$ and velocity $\vec{v}_{1i}$ of the chaser spacecraft at the initial moment are known, the position $\vec{r}_2$ and velocity $\vec{v}_{2f}$ of the target spacecraft at the rendezvous moment are known, and the flight time from $\vec{r}_1$ to $\vec{r}_2$ and the direction of motion of the chaser spacecraft are known. The velocity increments $\Delta\vec{v}_1$ and $\Delta\vec{v}_2$ of the chaser spacecraft are to be calculated. The Lambert transfer diagram is shown in Fig. 2.

Fig. 2. Schematic diagram of Lambert maneuver

This paper uses the universal variable method to solve the problem. The solution of the Lambert problem in terms of universal variables is

$$\begin{cases} \vec{v}_{1t} = (\vec{r}_2 - f\,\vec{r}_1)/g \\ \vec{v}_{2t} = (\dot{g}\,\vec{r}_2 - \vec{r}_1)/g \end{cases} \qquad (1)$$

wherein $f$, $g$ and $\dot{g}$ are functions of the universal variables. The Lambert time equation can be expressed as

$$\sqrt{\mu}\,\Delta t = x^3(z)\,S(z) + A\sqrt{y(z)} \qquad (2)$$

where $\mu$ is the gravitational constant of the Earth, $\Delta t$ is the Lambert transfer time, and the coefficient $A$ can be calculated from the geocentric distances $\vec{r}_1$ and $\vec{r}_2$ in Fig. 2 as

$$A = \frac{\sqrt{r_1 r_2}\,\sin\Delta\theta}{\sqrt{1 - \cos\Delta\theta}}$$

The process of solving the Lambert problem with the universal variable method is as follows: first, an initial value of the universal variable $z$ is selected and then refined by iterating Eq. (2); then $f$, $g$, $\dot{g}$ are computed from $z$; finally, $\vec{v}_{1t}$ and $\vec{v}_{2t}$ are obtained from Eq. (1), and the velocity increment required by the Lambert maneuver is obtained as

$$\Delta v = |\Delta\vec{v}_1| + |\Delta\vec{v}_2| = |\vec{v}_{1t} - \vec{v}_{1i}| + |\vec{v}_{2f} - \vec{v}_{2t}| \qquad (3)$$

The traditional solution of the Lambert problem only considers the two-body model. For the Lambert problem with perturbation considered, the state transition matrix based on the two-body model, i.e., the matrix of partial derivatives, is expressed as [6]

$$\Phi(t, t_0) = \begin{bmatrix} \Phi_{11} & \Phi_{12} \\ \Phi_{21} & \Phi_{22} \end{bmatrix} = \begin{bmatrix} \dfrac{\partial r}{\partial r_0} & \dfrac{\partial r}{\partial v_0} \\[2mm] \dfrac{\partial v}{\partial r_0} & \dfrac{\partial v}{\partial v_0} \end{bmatrix}$$

Assume the initial position is $r_1$ and the velocity is $v_1$; under the perturbation model, the position reached after time $\Delta t$ is $r_{2int}$, obtained by numerical integration. The position of the intersection point is obtained by numerical integration of the perturbation model, while the velocity at the orbit maneuver point is obtained by the Lambert algorithm based on the two-body model; the numerical integration is based on the Kepler equation, with the two-body model used to propagate the spacecraft motion state through the orbital elements. Due to the existence of the perturbation term, the reached position inevitably deviates from the desired intersection point position vector, so the velocity vector is corrected as

$$\delta v_{1t} = (\Phi_{12})^{-1}(r_2 - r_{2int})$$

The required velocity is updated as $v_{1t}(k+1) = v_{1t}(k) + \delta v_{1t}$. When $r_2 - r_{2int}$ is small enough, the iteration ends.

Optimal Multiple-Impulse Noncoplanar Rendezvous Trajectory

717

riþ ¼ r i

ri þ 1 ¼ f ðri ; viþ ; ti ; ti þ 1 Þ þ v i þ 1 ¼ gðri ; vi ; ti ; ti þ 1 Þ rn ¼ f ðrf ; vf ; tf ; tn Þ, vnþ ¼ gðrf ; vf ; tf ; tn Þ is calculated to determine rn and vnþ . The terminal condition of rendezvous is to solve a Lambert problem, rn1 ; rn and Dt ¼ tn  tn1 is automatically satisfied. The Dvn1 and Dvn can be obtained by solving the Lambert problem, and total Dv can be determined. The total orbital velocity increment was selected as the optimization target min J ¼

n X

Dvi

i¼1

The particle swarm optimization algorithm is taken as the optimization algorithm. See [3] for details.

5 Simulation example 5.1

Multi-impulse coplanar rendezvous

The initial and boundary conditions are the same as the calculation examples in the literature [2]. The initial and terminal orbits are coplanar circular orbits, and the orbit parameters are shown in the table below. Through the three-pulse orbit maneuver planning, the total velocity increment is 3770 m/s, and the orbit transfer trajectory is shown in the figure below. The planning result is the same as the Hohmann transfer optimal solution, which verifies the effectiveness of the algorithm (Fig. 3).

Target orbit Chaser initial orbit Tranfer orbit1 Tranfer orbit2

4

5

x 10

4 3 2 1 0 -1 -2 -3 -4 -5 -5

-4

-3

-2

-1

0

1

2

3

4

5 4

x 10

Fig. 3. Multi-pulse rendezvous Trajectory of Coplanar Target (Three-impulse)

5.2 Multi-impulse Noncoplanar Rendezvous

Simulation verification was further carried out for multi-pulse noncoplanar rendezvous. The initial orbit parameters of the chaser and the target are shown in the table below.


Simulations of three-pulse and four-pulse orbit planning were carried out respectively. The planned trajectories and the relative distance between the transferring chaser and the target are shown in Figs. 4 and 5. At the rendezvous moment, the relative distance between the chaser and the target is close to zero. The orbital maneuver planning results are listed in Tables 2 and 3. Under the same rendezvous time constraint, the total velocity increment of the four-pulse orbital maneuver is better than that of the three-pulse orbital maneuver (Figs. 4 and 5, Tables 1, 2 and 3).

Fig. 4. Orbit trajectory and relative distance variation of three pulses

Fig. 5. Orbit trajectory and relative distance variation of four pulses

Table 1. Orbit parameters
Orbit elements                     | Chaser spacecraft | Target spacecraft
Semimajor axis                     | 8371 km           | 26371 km
Eccentricity                       | 0.1               | 0.05
Orbital inclination                | 30°               | 55°
Right ascension of ascending node  | 45°               | 55°
Argument of perigee                | 20°               | 70°
True anomaly                       | 10°               | 115°


Table 2. The planning results of the three-pulse orbit maneuver
Time (s)  | ΔV component (m/s)            | Total ΔV (m/s)
919.1     | [−789.04, −745.34, 229.52]    | 4340.6
5938.6    | [−50.44, 364.23, −2146.23]    |
14938.1   | [98.30, 185.69, 1032.56]      |

Table 3. The planning results of the four-pulse orbit maneuver
Time (s)  | ΔV component (m/s)            | Total ΔV (m/s)
1172.9    | [−744.74, −802.38, 207.91]    | 4206.5
4772.9    | [−271.83, 415.10, −2000]      |
8372.9    | [8.85, 108.85, −129.22]       |
15572.9   | [205.87, 383.55, 744.47]      |

6 Conclusion

In this paper, a multiple-impulse noncoplanar rendezvous trajectory planning problem considering J2 perturbation is studied. Firstly, a multi-impulse orbital maneuver model considering J2 perturbation is built, and the Lambert algorithm is used to solve the last two impulses to automatically satisfy the terminal condition and reduce the amount of iterative calculation. A multi-impulse orbit planning model based on particle swarm optimization (PSO) is established, and the validity of the proposed method is verified by comparing the results of three-impulse coplanar target rendezvous with the Hohmann transfer. The simulation of three-impulse and four-impulse orbital rendezvous is further carried out. The simulation results show that the multi-impulse optimal rendezvous trajectory obtained by the nonlinear programming method can meet the terminal accuracy requirements of remote rendezvous, and the velocity increment of the four-impulse orbital maneuver is better than that of the three-impulse orbital maneuver.

References

1. Hughes, S.P., Mailhe, L.M., Guzman, J.J.: A comparison of trajectory optimization methods for the impulsive minimum fuel rendezvous problem. Advances in the Astronautical Sciences 113 (2003)
2. Abdelkhalik, O., Mortari, D.: N-impulse orbit transfer using genetic algorithms. J. Spacecraft and Rockets 44(2), 456–460 (2007)
3. Chen, Q., Yang, Z., Luo, Y.: Optimization of multi-impulse orbit transfer based on particle swarm optimization algorithm. Aerospace Control Appl. 40(5), 25–30 (2014)
4. Liu, Y., Xiao, Y.E., Hao, Y., et al.: Optimization of transfer orbit for multiple-pulse noncoplanar rendezvous and docking. Optics and Precision Eng. 25, 987–998 (2017)
5. Zhou, J., Chang, Y.: Optimal multiple-impulse rendezvous between non-coplanar elliptic orbits considering the J2 perturbation effects. Journal of Astronautics 29(2), 472–475 (2008)
6. Li, Z., Zhang, H., Zheng, W.: An analytical solution for Lambert problem with J2 perturbation based on state space perturbation method. Aerospace Control (2018)


7. Shen, H.J., Tsiotras, P.: Optimal two-impulse rendezvous using multiple-revolution Lambert solutions. Journal of Guidance, Control, and Dynamics 26(1), 50–61 (2003)
8. Shakouri, A., Pourtakdoust, S.H., Sayanjali, M.: Multiple-impulse orbital maneuver with limited observation window (2020)

PIC-MCC Simulation of the Temporal Characteristics of the Plasma in a Hall Thruster

Rui Chen1(&), Li Wang1, and Xingyue Duan2

1 Key Laboratory of Spacecraft In-Orbit Fault Diagnosis and Maintenance, China Xi'an Satellite Control Center (XSCC), Xi'an, China
[email protected]
2 College of Aerospace Science and Engineering, National University of Defense Technology, Changsha, China

Abstract. In order to investigate the temporal characteristics of the plasma in a Hall thruster with a magnetically shielded configuration, a 2-D cylindrical PIC-MCC model of the Hall thruster is established. The plasma parameters and the sputtering-erosion-related parameters are derived and obtained. The results indicate that the designed combination of the magnetic field and the channel walls maintains relatively excellent propulsion performance while achieving near-zero erosion of the channel walls when the anode voltage is 350 V. The results can be utilized for the design improvement of new Hall thrusters with optimized propulsion performance and prolonged lifetime.

Keywords: Hall thruster · PIC-MCC · Plasma · Sputtering erosion

1 Introduction

Hall thruster technology is advantageous for its very high exhaust speed, high specific impulse and relatively simple structure, and is therefore promising for orbit maneuvers and deep-space missions. The numerical simulation methods for the Hall thruster can be roughly divided into three categories: the fluid method, the particle-in-cell (PIC) method and the hybrid fluid-PIC method. The fluid simulation method is a complete dynamic description of the plasma [1]. Compared with the fluid method, the particle simulation method is more direct, and the plasma characteristics are the result of the self-consistency of the various physical processes, avoiding the computational distortion caused by the fluid approximation. The fluid simulation method can simulate most areas of the thruster discharge channel, but it is difficult to obtain accurate near-wall plasma parameters because the near-wall area does not satisfy the continuity assumption, so empirical expressions, such as the wall sheath, are needed for modeling. The particle method has high accuracy in the self-consistent simulation of the plasma in the thruster, but it requires a very short time step, which leads to a high computational cost. Therefore, to compromise between computational cost and speed, the fluid-particle hybrid method was developed by combining fluid simulation and particle simulation [2–6]. For the magnetically shielded Hall thruster, in order to fully and accurately reveal the



influence of the magnetic field configuration on the plasma parameters and behavior in the thruster, the model assumptions should be minimized. Moreover, there are few studies investigating the temporal characteristics of the plasma in a Hall thruster. Therefore, the purpose of this paper is to use the pure PIC method to simulate a Hall thruster with a magnetic configuration that is intended to shield the channel wall from the high-energy plasma, and thus prolong the life span of the Hall thruster system. In this paper, the temporal characteristics of the plasma in a Hall thruster are presented and analyzed, which is expected to provide support for the design improvement of the Hall thruster.

2 Calculation Model

The two-dimensional cylindrical model used in this paper is based on the model presented in reference [7]. Figure 1 shows the schematic diagram of the simulation domain. The cathode is located outside the thruster, and the gray area is the discharge channel (insulating wall boundary). The upper and lower sides are the magnetic pole parts, while the bottom end is the thruster centerline (symmetric boundary). The top end is a free boundary, and the right end is the electron emission boundary and a free boundary, which is set to −22 V. Particles leaving the free boundary are deleted, and electrons reaching the channel wall have a 50% probability of being converted into secondary electrons.

(Figure: simulation domain showing the anode, BN cover, inner and outer channel walls, magnetic poles, electron emitter, centerline and free-space boundaries; axes r and z in mm.)

Fig. 1. Two-dimensional model of the Hall thruster. [7]


In addition, to minimize the influence of simplification or acceleration measures on the simulation results and to obtain the plasma parameters accurately, the approach of reducing the geometric size is used in this paper. The axial and radial dimensions are 1.2 mm, and the plasma parameters obtained in the reduced-geometry model are not scaled back to the real geometry. To calculate the plasma parameters accurately, both the axial and radial magnetic fields are considered, and the detailed configuration is taken from reference [7]. Xenon neutral gas is assumed as the background gas; the density distribution is obtained from Reid's calculation results, and the neutral xenon number density of the propellant is assumed to be uniformly distributed in the radial direction. In the particle collision modeling, because the mean free path of atom collisions is much larger than the scale of the computational domain, atom-atom elastic scattering is ignored. In addition, due to the weak effect of ion-atom elastic scattering, it is also ignored. Electron-atom inelastic collisions [8], atom-ion inelastic excitation and charge exchange collisions [9], ionization [10] (primary ionization), and ion-atom charge exchange collisions are taken into consideration [11]. The cross-section data are calculated by the polynomial fitting formula of Szabo et al. [12].
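A generic Monte Carlo collision step of the kind used in PIC-MCC codes can be sketched as follows; the cross-section values, thresholds, gas density and time step below are placeholders, not the Szabo polynomial fits used in this work.

import math
import random

NEUTRAL_DENSITY = 1e19   # placeholder xenon neutral density, m^-3
DT = 1e-12               # placeholder time step, s

def cross_sections(energy_ev):
    """Placeholder electron-Xe cross sections (m^2) vs energy; the paper
    uses polynomial fits for elastic, excitation and ionization channels."""
    elastic = 2e-19
    excitation = 5e-20 if energy_ev > 8.3 else 0.0   # rough excitation threshold
    ionization = 3e-20 if energy_ev > 12.1 else 0.0  # rough ionization threshold
    return {"elastic": elastic, "excitation": excitation, "ionization": ionization}

def mcc_step(energy_ev, speed_m_s):
    """Return the collision type for one macro-electron in one time step, or None."""
    sigma = cross_sections(energy_ev)
    sigma_total = sum(sigma.values())
    p_collide = 1.0 - math.exp(-NEUTRAL_DENSITY * sigma_total * speed_m_s * DT)
    if random.random() >= p_collide:
        return None                      # no collision in this step
    r = random.random() * sigma_total    # pick a process in proportion to its cross section
    for name, s in sigma.items():
        if r < s:
            return name
        r -= s
    return "elastic"

print(mcc_step(15.0, 2.3e6))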

3 Numerical Method

The model is calculated using the open-source code XOOPIC [13]. XOOPIC is a two-dimensional code that can simulate the motion of plasma under external electric and magnetic fields, including various boundary conditions (insulator, conductor, symmetry, etc.). In addition, the magnetic field data and cross-section data can be imported through external files, which provides the prerequisite for simulating a Hall thruster with a detailed magnetic field configuration. XOOPIC can also model the sputtering induced by ion bombardment of wall materials, and the Yamamura model is used to calculate the sputtering yield [14]. However, the current XOOPIC code only supports sputtering-yield calculation for copper as the channel wall material.

4 Results and Discussion

The computational grid is 200 × 200, the macro-particle weight is 1.8 × 10⁴, and the time step is 10⁻¹⁴ s. The anode potential is set to 350 V. Figure 2 shows the time variation of the numbers of electrons, Xe+ ions and sputtered particles. At the end of the calculation, the thrust and the specific impulse are 42.4 mN and 1732.2 s, respectively.



Fig. 2. Time variation of the particle number

4.1 Plasma Number Density

Figure 3 shows the time variation of the number density of the electrons in the simulation domain and on the channel centerline, where the initial distribution of the electrons is relatively uniform after 2.22 × 10⁻¹⁰ s of cathode emission. Thereafter, the electrons accumulate, and the number increases because of the electron-atom collision ionization process.

(a) t = 2.22 × 10⁻¹⁰ s, (b) t = 7.58 × 10⁻⁹ s, (c) t = 1.4 × 10⁻⁷ s

Fig. 3. Time variation of the electrons in the simulation domain

Figure 4 shows the time variation of the number density of the ions in the simulation domain and on the channel centerline. In the beginning, the initial ions are in the channel near the exit plane. Then, the ion number density increases because of the electron-atom collision ionization process. At t = 1.4 × 10⁻⁷ s, it is clear that the ion number density on the thruster centerline is larger than that in the channel, which is consistent with the bright central core in the Hall thruster.

(a) t = 2.22 × 10⁻¹⁰ s, (b) t = 7.58 × 10⁻⁹ s, (c) t = 1.4 × 10⁻⁷ s

Fig. 4. Time variation of the ions in the simulation domain

4.2 Plasma Velocity

Figure 5 shows the time variation of the ion axial velocity. At t = 1.4 × 10⁻⁷ s, the ions diffuse thermally. After about 7.0 × 10⁻⁹ s, the ions are accelerated to 6000 m/s by the electrostatic field established by the plasma. Near the steady state, the maximum ion axial velocity reaches 20,000 m/s. Moreover, the ion axial velocity distribution is consistent with the LIF results measured in many Hall thrusters [15–17]. Another prominent phenomenon is that the axial velocity of the ions near the anode is negative, with a magnitude of several thousand m/s, driven by the thermal field rather than by the electrostatic field. In addition, three different groups of ions can be clearly identified from the ion distribution.

(a) t = 2.22 × 10⁻¹⁰ s, (b) t = 7.58 × 10⁻⁹ s, (c) t = 1.4 × 10⁻⁷ s (ion axial velocity uz in m/s versus z in mm)

Fig. 5. Time variation of the ion axial velocity in the simulation domain

4.3 Plasma Potential and Axial Electric Field

The electric potential is calculated from the charge density, and the electric field is the negative gradient of the electric potential. Figure 6 and Fig. 7 show the plasma potential and the axial electric field distribution in the Hall thruster, respectively. Initially, the potential and the electric field are axially distributed. With the development of plasma generation and acceleration, the potential and the electric field evolve: the potential near the inner and the outer pole pieces becomes negative, and the potential and electric field distributions are consistent with the probe and LIF measurements [15].


(a) t = 2.22 × 10⁻¹⁰ s, (b) t = 7.58 × 10⁻⁹ s, (c) t = 1.4 × 10⁻⁷ s

Fig. 6. Time variation of the electric potential

(a) t = 2.22 × 10⁻¹⁰ s, (b) t = 7.58 × 10⁻⁹ s, (c) t = 1.4 × 10⁻⁷ s

Fig. 7. Time variation of the axial electric field

(a) Number density of sputtered particles

(b) Velocity vector of sputtered particles

Fig. 8. Distribution of the number density and the velocity vector of the sputtered particle


In the discharge channel region, the plasma potential is similar to the anode potential. Sputtering erosion of the inner and outer magnetic poles is therefore expected when ions are accelerated toward the poles by this large potential difference.

4.4 Channel Erosion

The distribution of the number density and the velocity vector of the sputtered particles is shown in Fig. 8. It is apparent that the sputtered particles mainly concentrate on the inner magnetic poles [18], while the channel erosion can be ignored, which means the sputtering erosion of the inner magnetic pole pieces is dominant in this Hall thruster. Also, the sputtered particles are seen to leave the sputtered surface, with velocities of about several hundred m/s. As a comparison, the channel of the Hall thruster is changed to a straight channel (while the other parameters remain unchanged), different from Fig. 1, to demonstrate the magnetic shielding effect of the designed combination of the magnetic field and the channel profile. Figure 9 shows the distribution of the number density of the sputtered particles in this second configuration. It is clear that the sputtering erosion then mainly occurs on the channel inner and outer walls, while the erosion of the magnetic pole pieces can be neglected.

Fig. 9. Distribution of the number density of the sputtered particle

5 Conclusions

In this work, a 2-D cylindrical PIC-MCC model of the Hall thruster is established, and the plasma parameters and the sputtering-erosion-related parameters are derived and obtained. The results indicate that the designed combination of the magnetic field and the channel walls maintains relatively excellent propulsion performance while achieving near-zero erosion of the channel walls when the anode voltage is 350 V. However, the sputtering erosion of the inner magnetic pole pieces is a new problem to be solved, although it is not expected to cause severe failure. Finally, the conventional model is calculated as well, which shows that the sputtering erosion mainly occurs on the


channel inner and outer walls, while the erosion of the magnetic pole pieces can be neglected, and the peak number density of the sputtered particles is less than that in the first configuration, which is shown to be magnetically shielded. The above results can be used for the design improvement of new Hall thrusters with optimized propulsion performance and prolonged lifetime.

References

1. Hofer, R.R., Goebel, D.M., Mikellides, I.G., et al.: Design of a laboratory Hall thruster with magnetically shielded channel walls, Phase II: experiments. In: 48th AIAA/ASME/SAE/ASEE Joint Propulsion Conference & Exhibit, AIAA-2012-3788, Atlanta, Georgia (2012)
2. Smith, B.D., Boyd, I.D., Kamhawi, H.: Hybrid-PIC modeling of the transport of atomic boron in a Hall thruster. In: 34th International Electric Propulsion Conference, IEPC-2015-252, Hyogo-Kobe, Japan (2015)
3. Huismann, T.D.: Improving Hall Thruster Plume Simulation Through Refined Characterization of Near-Field Plasma Properties. PhD dissertation, University of Michigan, Ann Arbor (2011)
4. Brieda, L., Keidar, M.: Plasma-wall interaction in Hall thrusters with magnetic lens configuration. J. Applied Physics 111(12), 123302 (2012)
5. Bareilles, J., Hagelaar, G.J., Garrigues, L., et al.: Critical assessment of a two-dimensional hybrid Hall thruster model: comparisons with experiments. Phys. Plasmas 11(6), 3035–3046 (2004)
6. Koo, J., Boyd, I.: Computational modeling of stationary plasma thrusters. In: 39th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit, AIAA-2003-4705, Huntsville, Alabama (2003)
7. Duan, X.Y.: Investigation on the Interaction Mechanism Between the Plasma and the Channel Wall in the Hall Thrusters. PhD dissertation, National University of Defense Technology, Changsha, China, pp. 129–132 (2020)
8. Sullivan, R.: The Physics of High-Velocity Ions in the Hall Thruster Near-Field. PhD dissertation, California Institute of Technology, pp. 41–42 (2010)
9. Hayashi, M.: Determination of electron-xenon total excitation cross-sections, from threshold to 100 eV, from experimental values of Townsend's α. J. Physics D: Applied Phys. 16, 581–589 (1983)
10. Rapp, D., Englander-Golden, P.: Total cross sections for ionization and attachment in gases by electron impact. I. Positive ionization. The Journal of Chemical Physics 43(5), 1464–1479 (1965)
11. Rapp, D., Francis, W.E.: Charge exchange between gaseous ions and atoms. J. Chemical Phys. 37(11), 2631–2645 (1962)
12. Szabo, J., Jr.: Fully Kinetic Numerical Modeling of a Plasma Thruster. PhD dissertation, Massachusetts Institute of Technology, Cambridge, MA (2001)
13. Verboncoeur, J.P., Langdon, A.B., Gladd, N.T.: An object-oriented electromagnetic PIC code. Comp. Phys. Comm. 87, 199–211 (1995)
14. Stoltz, P.H., Verboncoeur, J.P., Cohen, R.H., et al.: Modeling ion-induced electrons in the High Current Experiment. Phys. Plasmas 13(5), 1–201 (2006)
15. Mazouffre, S., Kulaev, V., Luna, J.P.: Ion diagnostics of a discharge in crossed electric and magnetic fields for electric propulsion. Plasma Sources Science Technol. 18(3), 510–514 (2009)


16. Chaplin, V.H., Conversano, R.W., Ortega, A.L., et al.: Ion velocity measurements in the Magnetically Shielded Miniature (MaSMi) Hall thruster using laser-induced fluorescence. In: 36th International Electric Propulsion Conference, IEPC-2019-531, Vienna, Austria (2019)
17. Duan, X.Y., Cheng, M.S., Yang, X., et al.: Investigation on ion behavior in magnetically shielded and unshielded Hall thrusters by laser-induced fluorescence. J. Applied Phys. 127(9), 1–13 (2020)
18. Wensheng, H., Hani, K.: Counter-streaming ions at the inner pole of a magnetically shielded Hall thruster. J. Applied Phys. 129(4), 043305 (2021)

Research on Automatic Measurement of Aerospace Electrical Connector Wire

Yan Li(&), Songyin Sui, Qiong Wu, MingHua Zhang, GuangJun Chen, and Jun Wang

Beijing Spacecraft, China Academy of Space Technology, Beijing 10080, China
[email protected]

Abstract. Aerospace cable network products are characterized by large cable networks, difficult transportation, many kinds and types of cable plugs, small batches, and complex cable signal connections. In the early stage, the labeling of loose wires of low-frequency cable connectors relied mainly on manual operation, using common tools such as conductors and multimeters for inspection, and the test generally required at least two operators. The tests are carried out one by one according to the design drawings and the cable wiring table; the test time is long, and operators tire easily during the inspection, resulting in cable detection errors. Therefore, the conduction numbering of loose wires of aerospace cable connectors is a major factor hindering the production efficiency of cable networks. This paper designs a scheme for automatic wire-number measurement for different cable connectors, compares and optimizes the design ideas, outputs the required binary code through the cascade of multiple encoders, and completes the research on the principle of wire-number measurement. Signal conversion and display are realized by a single-chip microcomputer, and the measured wire label content is then transmitted to the label printer through a communication protocol. The label printer prints the label from the test result of the wire-number measuring tooling, which makes label printing convenient, improves the efficiency of labeling the whole wire bundle, and ensures the accuracy of wire bundle labeling.

Keywords: Electrical connector · Single chip microcomputer · Signal measurement

1 Research Background

At present, many industries such as aerospace, aviation, new energy and high-speed rail often use automatic test equipment to test cable products. For aerospace products, however, there is little R&D on batch production equipment; the main problem is that the equipment on the market can hardly meet the practical needs of aerospace cable network products. The common automatic test equipment on the market is mainly intended for products with simple structures produced in batches. Therefore, to realize automatic testing of aerospace cable products, we need to refine and quantify each process in the production of aerospace products and realize semi-automation or automation for each small process; then the overall aerospace cable



production will have a greater level of automation. Targeted transformation and customization must be carried out to make the equipment fully match the characteristics of aerospace products. According to the current production of the spacecraft cable network, aerospace cable network products have several main characteristics:
(1) Huge space cable network. The cable network used in aerospace is generally large, with a large number of cable branches and cable plugs.
(2) Difficult transportation of the aerospace cable network. Dozens, or even hundreds, of cables are used by satellites and rockets, and even many relatively straight cables have dozens of branches. The whole cable network is bulky and heavy, so it is difficult to transport and test the products.
(3) Many types of cable connectors. The electrical connector, also known as the "aviation plug", gets its name from its early application in aircraft manufacturing and other aviation fields. It is now widely used in the military, aerospace, navigation and other fields [1, 2]. In current aerospace and satellite products there are more than 400 kinds of cable plugs.
(4) Small batches of the aerospace cable network. The aerospace cable network is generally put into production with many types and small quantities; some types will not be used again after one set is produced, and there are few products produced in batches of tens of thousands. The existing test equipment on the market therefore cannot be adapted directly.
(5) Complex cable signal connections. The aerospace cable network has many branches in each cable, so the cable signals are complex, and there are even many short circuits between different points.
Through independent research and development, an automatic test system for the conductors of spacecraft distribution cable networks with aerospace characteristics has been preliminarily constructed. With this system, the common problems and difficulties in the automatic testing of low-frequency spacecraft cables can be basically solved, and automatic equipment is applied to the automatic wire-number measurement of low-frequency spacecraft cables. For example, Fig. 1 and Fig. 2 show a common electrical connector plug; each connector has 55 points. After the lead wires are welded at one end of the electrical connector, the independently developed automatic number-measuring system can be used to select any conductor and complete the number measurement and marking. It completely replaces the manual procedure in which two operators use a conductor to check continuity and mark the signal, so the problems of low efficiency and error-proneness are solved, the wire numbering is kept correct, and the quality of the electrical connector is improved.


Fig. 1. A cable connector plug

Fig. 2. Temporary label

2 Principle and Design of Automatic Signal Measurement

2.1 Fifteen 74LS148 Chips and NAND Gates Constitute a 128-7 Line Priority Encoder

When multiple 74LS148 chips are cascaded and there are multiple input data requests at the same time, the input signals are coded according to their priority level: a lower-priority input signal is coded only when the higher-priority inputs are not active [3–6]. We mainly use this principle and form the required signal coding output by cascading multiple 74LS148 chips. Following this logic, a 128-wire logic table is made to analyze the signal of each core number (0–127); the 128-wire logic table is shown in Table 1.

Table 1. Temporary attachments for cable electrical connectors

Chip output   Z6 Z5 Z4 Z3 Z2 Z1 Z0   Number of cores
GS0           0  0  0  0  0  0  0    0–7
GS0           0  0  0  0  1  1  1
⋮
GS15          1  1  1  1  0  0  0    120–127
GS15          1  1  1  1  1  1  1

GS is the coding output flag terminal, and the number after GS indicates the index of the 74LS148 chip. From this table, the logic signal principle can be determined.
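As an illustration of the coding logic in Table 1, the behavioural model below mimics a bank of 74LS148-like 8-input groups producing a 7-bit code; it is a software sketch for checking the table, not the hardware design, and the priority ordering is an assumption.

def encode_128(lines):
    """Behavioural model of the 128-to-7 priority encoder built from 16 groups
    of 8 inputs (74LS148-like). `lines` is a list of 128 booleans (active = True).
    Returns the 7-bit code of the highest-priority active line, or None."""
    assert len(lines) == 128
    for chip in range(15, -1, -1):            # assume GS15 has the highest priority
        group = lines[chip * 8: chip * 8 + 8]
        if any(group):
            # within one chip, the highest-numbered active input wins (like a 74LS148)
            low = max(i for i, active in enumerate(group) if active)
            return (chip << 3) | low          # Z6..Z3 = chip number, Z2..Z0 = input
    return None

code = encode_128([i == 37 for i in range(128)])
print(code, format(code, "07b"))              # 37 -> '0100101'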

2.2 Design of the Minimum System of the Single-Chip Microcomputer

An MCU is essentially a microcontroller: the CPU, RAM, ROM, input/output and other functions are integrated to form a microcomputer system that completes complex computation, logic control, communication and other functions [7]. The single-chip microcomputer (SCM) is widely used, mainly in navigation systems, intelligent equipment, automobiles, communication equipment, etc. It has small volume, high integration, 5 V low-voltage operation, low energy consumption and strong processing ability, and can be applied in all kinds


of environments and has good control ability. The application of the SCM is wide and convenient. The output logic signals are connected to the minimum system of the single-chip microcomputer, which converts the input binary code into decimal so that it can be displayed on an LCD. The minimum system of the single-chip microcomputer is shown in Fig. 3.

Fig. 3. Schematic diagram of minimum system of single chip microcomputer

2.3 Extension from 128 Wire Numbers to 256 Wire Numbers

At present, the 128-7 line priority encoder is composed of fifteen 74LS148 chips, which only covers a test range of 128 wire numbers. To test 256 wire numbers, two printed boards are connected; the expansion board simply omits the single-chip-microcomputer minimum system, which completes the extension. Two printed boards, board A and board B, are used to measure the number, realizing a 256-line display. Board A is connected to the upper 128 lines and board B to the lower 128 lines. The number-selection result of board A is transmitted to board B through a 7-bit binary data line. The MCU of board B performs priority processing and display according to the number-selection results of board A and board B: if board A outputs a wire number, the value of board A's wire number + 128 is displayed on the nixie tube; if board A has no output, board B's wire number is displayed; if neither board A nor board B has output, 0 is displayed. The communication between board A and board B uses 8 lines (7-bit data lines + GND), and the test range can be expanded further by the same method.
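The board A/board B priority rule described above amounts to a small piece of selection logic; a sketch of the behaviour, with hypothetical inputs, is given below.

def displayed_number(board_a_result, board_b_result):
    """Priority handling for the 256-line extension.
    board_a_result / board_b_result: wire number 1..128 selected on each board,
    or None if the board has no output. Board A covers the upper 128 lines,
    so its result is shifted by +128 on the display."""
    if board_a_result is not None:
        return board_a_result + 128   # board A output -> display A + 128
    if board_b_result is not None:
        return board_b_result         # otherwise show board B's number
    return 0                          # neither board selected anything

print(displayed_number(None, 42))     # -> 42
print(displayed_number(7, None))      # -> 135
print(displayed_number(None, None))   # -> 0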

Programming of Measurement Number

Hex file is an ASC II text file composed of text in hex file format. In the hex file, each item contains a hex data. Is based on a specific language or some data hexadecimal coded number composition. It mainly transfers the data and program in ROM and EPROM.


The signal transmission program is composed of hexadecimal coded digits; part of the intercepted program is shown below:

:030000000202F801
:0C02F800787FE4F6D8FD75813302002603
:1002D700C0F9A4B0999282F880908883C6A1868ECF
:0102E7000016
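Each record above follows the standard Intel HEX layout (byte count, address, record type, data, checksum). The small parser below decodes one record and verifies its checksum; it is an illustration only and is not part of the tooling's firmware.

def parse_hex_record(record: str):
    """Decode one Intel HEX record, e.g. ':0C02F800787FE4F6D8FD75813302002603'."""
    assert record.startswith(":")
    raw = bytes.fromhex(record[1:])
    count, addr, rtype = raw[0], (raw[1] << 8) | raw[2], raw[3]
    data, checksum = raw[4:4 + count], raw[-1]
    # two's-complement checksum: the sum of all bytes must be 0 modulo 256
    assert (sum(raw[:-1]) + checksum) & 0xFF == 0, "bad checksum"
    return {"length": count, "address": addr, "type": rtype, "data": data.hex()}

print(parse_hex_record(":030000000202F801"))
# {'length': 3, 'address': 0, 'type': 0, 'data': '0202f8'}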

2.5 Design of Communication with the Printer

The communication between the wire-number-measurement printed circuit board and the printer uses the Modbus protocol to connect the output signal of the measurement board with the printer, so that the printer prints the corresponding number while the board measures it. For connection through Ethernet there are several Modbus variants; this method does not need additional calculation and checking. The printed circuit board is connected with the printer through an RS485 interface; the printer is the host and the wire-number measurement system is the slave. The DB9-1 pin is RS485A and the DB9-2 pin is RS485B. The communication data format consists of the baud rate, start bit, data bits, stop bit and check code. The communication protocol adopts Modbus with the frame format "device address", "function code", "data" and "checksum", with query data frames and response data frames, using the general Modbus RTU protocol. A USB-to-RS232 serial line (9-pin to 9-pin, Type-C) is used to connect the printed circuit board assembly and the printer serial data line DB9 for serial communication. Finally, the correctness of the sent and received data is tested with a serial port assistant, and all 256 potential test numbers are verified to be correct: the data of each point is sent, and the received signal verification is displayed. The connection between the wire-number-measurement printed circuit board and the printer is thus completed; the connection diagram is shown in Fig. 4.

Fig. 4. Schematic diagram of connection between PCB assembly and printer
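For reference, a Modbus RTU frame consists of the device address, the function code and the data, followed by a CRC-16 (polynomial 0xA001, initial value 0xFFFF, transmitted low byte first). The helper below builds such a frame; the slave address and register values are made-up examples rather than the tooling's actual register map.

def crc16_modbus(frame: bytes) -> bytes:
    """CRC-16/Modbus, returned low byte first as it is sent on the wire."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return bytes([crc & 0xFF, crc >> 8])

def build_rtu_frame(address: int, function: int, data: bytes) -> bytes:
    """Frame layout: device address + function code + data + CRC-16."""
    body = bytes([address, function]) + data
    return body + crc16_modbus(body)

# Hypothetical query: read one holding register (function 0x03) starting at address 0x0000
print(build_rtu_frame(0x01, 0x03, bytes([0x00, 0x00, 0x00, 0x01])).hex(" "))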

2.6 Test Box Design

The cable end of the production line is used as the test socket for the tested cable plug; adapter cables are then adopted to meet the requirements of different test plugs. The shell is made of resin to ensure insulation between the cable socket and the shell and to prevent signal interference during the test. The points of each cable head are connected by wire, in sequence, to the 1–256 points reserved on PCB board A and board B, and each cable head is installed in the shell. Each cable head has 38 points and 8 cable heads are used, connected to the test number ranges 1–38, 39–76, …, 229–254, 255–256 respectively, realizing a test range of 1–256 points. The overall appearance of the test box is shown in Fig. 5.

Fig. 5. Appearance of the measuring box

3 Conclusion

This design is based on research into the problems in the actual cable production process. Aiming at the welding process of distribution appliances with many varieties, small batches and complex structures, a wiring test tooling matched to the characteristics of distribution production was developed. After the wire bundle is welded onto the electrical connector, any wire extracted from it can have its corresponding line number displayed by this tooling, and the label can then be printed and pasted on the wire automatically, which is convenient to use, improves the automation efficiency of wire labeling in the cable production process and ensures product quality. The main work and conclusions are as follows:
(1) Addressing the problems in the production of aerospace electrical connector cables, the method of automatic identification of wire numbers is studied, converting the manual wire labeling operation into an automatic wire labeling system;
(2) The measurement signal circuit composed of cascaded encoders is studied, realizing the cascaded coding of fifteen 74LS148 chips; the subsequent signals are converted through NAND gates into the required binary signals, completing the research on 256-wire number measurement;


(3) The binary signal of the measured number is connected to the printer through the Modbus communication protocol; after the measurement is completed, the label number is transmitted to the printer for printing;
(4) The functions of automatic wire numbering and label printing are completed, realizing the automatic numbering system for aerospace electrical connectors. The automatic wire numbering system has been applied to actual cable wire numbering, improving operating efficiency by 2–5 times, ensuring 100% labeling accuracy, and improving the reliability of wire labels.

References

1. Eischeid, J.K., Baker, C.B., Karl, T.R., et al.: The quality control of long-term climatological data using objective data analysis. J. Applied Meteorol. 34(12), 2787–2795 (1995)
2. De Cerio, J.M.: Quality management practices and operational performance: empirical evidence for Spanish industry. Int. J. Production Res. 41(12), 2763–2786 (2003)
3. Torin, H.: Digital Electronic Technology Tutorial. Shaanxi Normal University Press, Xi'an, pp. 76–82 (1995)
4. Qu, W.: Electronic Technology Tutorial. Posts and Telecommunications Press, Beijing, pp. 181–182 (2013)
5. Zheng-wei, Z.: Digital Circuit Logic Design. Tsinghua University Press, Beijing, pp. 70–80 (2013)
6. Shi, Y.: Digital Electronic Technology Basic Course Problem Solving, pp. 53–57. Tsinghua University Press, Beijing (2009)
7. Lee, M.: Spectrometer design based on single-chip incremental encoder. Univ. Physics Exp. 5, 53–58 (2015)

Correctness Verification of Aerospace Software Program Based on Hoare Logic

Jian Xu(&), Hua Yang, Yanliang Tan, Yukui Zhou, and Xiaojing Zhang

Beijing Institute of Control Engineering, Beijing 100190, China
[email protected]

Abstract. Spacecraft software requires increasingly high credibility, and the correctness of the program implementation is the basic requirement of software credibility. However, traditional testing methods cannot guarantee the correctness of the program implementation. Therefore, in order to guarantee the consistency between the design and the realization of the software, the method of verifying spacecraft software by formal methods was studied. This paper studies the feasibility of verifying the correctness of programs in theory by formal verification. Firstly, this paper introduces in detail the formal verification method, and the principles and reasoning rules of theorem proving based on Hoare logic for part of the C language syntax. Then, according to the Hoare logic triple and its reasoning rules, a module of an aerospace software system is verified by manual reasoning. Finally, it is concluded that the program verification method based on Hoare logic is feasible for the correctness verification of spacecraft software programs.

Keywords: Hoare logic · Spacecraft software · Formal verification · Correctness of program

1 Introduction

Highly trusted software has always been the goal of software developers, and the correctness of program implementation is the basic requirement of software trustworthiness. Therefore, how to improve the correctness of program implementation and thereby the credibility of software is an important research topic for aerospace software. It is difficult to find all the inconsistencies between program implementation and design by using traditional testing methods [1]. Formal verification is currently recognized as an effective means to ensure the credibility of software [2, 13]. Applying formal verification to the software development process can verify the correctness of program implementation in theory [9]. Formal verification describes the realization and properties of a software system through a formal description language, and analyzes and verifies whether the system meets the expected properties through mathematical methods. Formal methods include model checking and theorem proving [1]. Model checking is an automatic verification technique for finite-state systems, which searches the finite state space of the software system model to verify whether the



behavior of the system has the expected properties [3, 4]. Theorem proving is similar to theorem proving in mathematics [14]: it describes the software system and its properties with a logic system, and verifies by reasoning whether the realization of the software system meets the expected properties. Model checking is complementary to theorem proving. The important advantage of model checking technology is its high degree of automation; when the system does not have the expected property, it can give the corresponding counterexample path. Its disadvantage is that the state explosion of software systems restricts its scalability. The advantage of theorem proving is that it can deal with infinite state spaces based on induction over infinite domains. However, its degree of automation is far less than that of model checking, and it is difficult to provide readable counterexamples when a proof fails. At present, aerospace software is closely coupled with the hardware and has numerous system states, so there is a state explosion problem and it is difficult to apply model checking to verify the correctness of the software. Therefore, this paper studies the use of theorem proving to verify the correctness of aerospace software programs, in order to ensure that the software achieves the desired properties. This paper first introduces the principle and reasoning rules of theorem-proving-based program verification using Hoare logic, and then studies the feasibility of program verification by manual reasoning.

2 Rules of Program Verification Based on Hoare Logic

The application of Hoare logic to verify programs was put forward in the 1960s [5]. In Hoare logic, the correctness of a program is represented by the Hoare triple {P} S {Q}, where P and Q are the pre-condition and post-condition of the program respectively, and S is the program fragment [6, 7]. If the pre-condition P is satisfied before the execution of program S, then when the execution of S is completed, its end state must satisfy the post-condition Q. Generally, pre-conditions and post-conditions are represented by assertion expressions in Hoare logic. Hoare logic provides a set of axioms and reasoning rules for imperative languages, and these rules can be used to verify most of the C language syntax [10, 12]. The verifiable syntax is roughly expressed as follows:

C ::= skip | x := E | C1; C2 | if B then {C1} else {C2} | while B {C}    (1)

where C, C1 and C2 represent statements, x represents a variable, E represents a value or expression of the same type as x, and B represents a Boolean expression. The syntax is explained as follows: skip stands for the empty statement; x := E is an assignment statement; C1; C2 means that C1 and C2 are executed in sequence; if B then {C1} else {C2} is a conditional statement: if the value of the Boolean expression B is true, the C1


statement is executed; otherwise C2 is executed. while B {C} is a loop statement: if the value of the Boolean expression B is true, statement C is executed repeatedly until the value of B becomes false. The reasoning rules that Hoare logic provides for these syntaxes are as follows.

{P} skip {P}    (2)

This rule states that the predicate P is true before the skip statement is executed and remains true after skip is executed, i.e., the skip statement does not change the state of the system.

{P} C1 {Q}   {Q} C2 {R}
-----------------------
    {P} C1; C2 {R}    (3)

This rule is also called the "composition rule": if executing C1 changes the system state from P to Q, and executing C2 changes the system state from Q to R, then executing the sequence C1; C2 changes the state of the system directly from P to R.

{P[x/E]} x := E {P}    (4)

This is the "assignment rule": before the assignment operation, if the state of the system is P[x/E], then after the assignment operation the state of the system is P. P[x/E] denotes the predicate obtained by replacing all free occurrences of the variable x in predicate P with E.

{P ∧ B} C1 {Q}   {P ∧ ¬B} C2 {Q}
--------------------------------
  {P} if B then C1 else C2 {Q}    (5)

The rule is “conditional rule”, which means that if the pre-condition of C1 is fP ^ Bg, the post-condition is Q, and the pre-condition of C2 is {P^+B}, and the postcondition is Q, then the execution of if B then C1 else C2 will change the state of the system from P to Q. fP ^ BgCfPg fPgwhile B C fP ^ +Bg

ð6Þ

This rule is called the "loop rule", where P is the loop invariant, which remains unchanged before and after each execution of the while loop body [8].

P' → P   {P} C {Q}   Q → Q'
----------------------------
        {P'} C {Q'}    (7)

The rule is “implication rule”, which indicates that the execution of program C can change the state of the system from P to Q. If the pre-condition P can be deduced from


P', and the post-condition Q can deduce Q', then the execution of program C can change the state of the system from P' to Q'.
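As a small worked illustration (our example, not one from the paper) of how the assignment and implication rules combine, consider the triple {x = 1} y := x + 1 {y = 2}:

{x + 1 = 2} y := x + 1 {y = 2}    (by the assignment rule (4))
x = 1 → x + 1 = 2                 (arithmetic)
{x = 1} y := x + 1 {y = 2}        (by the implication rule (7))

The same pattern of substituting backwards through assignments and strengthening pre-conditions is what the manual verification in the next section carries out statement by statement.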

3 Verification of Program by Theorem Proving

This section studies how to use Hoare logic triples and the reasoning rules to verify a specific program manually. First, according to the design requirements, the pre-condition and post-condition of the program are described. The pre-condition refers to the state of the system before the program executes, such as the value of each variable. The post-condition is the description of the functions of the program, that is, the state of the system and how the values of the variables change after the program is executed. Then, the syntax of each statement is analyzed, and the specific change of the system state after each statement is executed is analyzed in combination with the reasoning rules for the C language syntax introduced above. Finally, after the program is executed, it is proved that the final state of the system, expressed as a logical expression, can deduce the post-condition of the system. If this can be proved, the implementation of the program meets the specification; otherwise it does not, and the program is modified (or the pre-conditions and post-conditions are checked) and proved again until the proof succeeds. In this paper, a function named SetBitMap in the scheduling module of the operating system is taken as an example to study the method of verifying a specific program through manual reasoning. The function of SetBitMap is to set the priority bit of a task in the ready table. The implementation of the SetBitMap program is as follows.

unsigned char OSTaskRdyIndex;
unsigned char OSTaskRdyBitmap[8];
unsigned char OSMap[8] = {0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80};
void SetBitMap(unsigned int prio)
{
    unsigned int x, y;
    y = prio >> 3;
    OSTaskRdyIndex |= 1

Q = { Q1, |D_L(x_h^L − 1, y^L) − D_L(x_h^L, y^L)| < T_1;
      Q2, |D_L(x_h^L + 1, y^L) − D_L(x_h^L, y^L)| < T_2;
      Q3, |D_L(x_h^L, y^L − 1) − D_L(x_h^L, y^L)| < T_3;
      Q4, |D_L(x_h^L, y^L + 1) − D_L(x_h^L, y^L)| < T_4;
      Q5, |D_L(x_h^L − 1, y^L − 1) − D_L(x_h^L, y^L)| < T_5;
      Q6, |D_L(x_h^L − 1, y^L + 1) − D_L(x_h^L, y^L)| < T_6;
      Q7, |D_L(x_h^L + 1, y^L − 1) − D_L(x_h^L, y^L)| < T_7;
      Q8, |D_L(x_h^L + 1, y^L + 1) − D_L(x_h^L, y^L)| < T_8 }    (2)

where T = {T_i, 1 ≤ i ≤ 8} is the threshold set determined by experiment.

2.3 Depth Refinement

Assume that a non-robust pixel in the HR depth map and its projection pixel in the LR depth map are (x, y) and (x^L, y^L) respectively. Among the 8 pixels around the pixel (x^L, y^L), the set of pixels with effective depth values is

Ω = { D_L(x^L + s, y^L + t), −1 ≤ s ≤ 1, −1 ≤ t ≤ 1 }    (3)

Non-robust pixels are corrected according to

D_H(x, y) = arg min_{d ∈ Ω} |D_0(x, y) − d|    (4)

where D_0 is the initial depth map and D_H is the HR depth map output for the smooth area. Assuming that pixel (x_h, y) has the largest gradient value in the HR space, the set of effective depth edge pixels is

Ω_G = { (x, y) | |G(x, y) − G(x_h, y)| < T_G, (x, y) ∈ P }    (5)


The difference between the pixel values of the pixels in this set and their neighboring pixels is calculated. The abscissa of the pixel with the largest difference from the pixel value of pixel (x_h, y) is defined as x_e:

x_e = arg max_{(x, y) ∈ Ω_G} { I(x, y) − I(x − 1, y) }    (6)

where the abscissa of the true edge point is defined as x_f, which is the mean of x_e and x_h. Assuming that the size of D_L is l × n, and the size of D_0 after being enlarged K times is L × N, the coordinate mapping between the coordinates (x^L, y^L) of any pixel in D_L and the coordinates (x, y) of any pixel in D_0 is

x^L = (x − 1)(l − 1)/(K·l − 1) + 1    (7)

y^L = (y − 1)(n − 1)/(K·n − 1) + 1    (8)

r_1 = (y − 1)(n − 1)/(K·n − 1) + 1 − y^L    (9)

r_2 = (x − 1)(l − 1)/(K·l − 1) + 1 − x^L    (10)

where r_2 is the horizontal distance difference and r_1 is the vertical distance difference. The influence factor in the x direction is

C_i(i) = (1/6) ( Σ_{m_1} (m_1 − i + r_2)^3 + (3/2) Σ_{m_2} (3 − i)(3 − i)(m_2 − i + r_2)^3 )    (11)

where m_1 takes the values 3 and 4, m_2 takes the values 1 and 2, and i = 1, 2, 3. In the same way, the influence factor in the y direction is C_j. In a 3 × 3 pixel block, the influence factor of any pixel is

a_{i,j} = C_i(i)·C_j(j),  i = 1, 2, 3; j = 1, 2, 3    (12)

where i and j are both 1, 2, and 3. The final HR depth map is obtained by

D_H(x, y) =
  for e_1 > e_m:
    Σ_{s_1,t_1} Σ_{i,j} a_{i,j} D_L(x_f^L − s_1, y^L − t_1),  −2 ≤ s_1 ≤ 0, −1 ≤ t_1 ≤ 1,  x_1 ≤ x ≤ x_f − 1
    Σ_{s_2,t_2} Σ_{i,j} a_{i,j} D_L(x_f^L + s_2, y^L + t_2),   0 ≤ s_2 ≤ 2, −1 ≤ t_2 ≤ 1,  x_f ≤ x ≤ x_m
  for e_1 < e_m:
    Σ_{s_1,t_1} Σ_{i,j} a_{i,j} D_L(x_f^L − s_1, y^L − t_1),  −2 ≤ s_1 ≤ 0, −1 ≤ t_1 ≤ 1,  x_1 ≤ x ≤ x_f − 1
    Σ_{s_2,t_2} Σ_{i,j} a_{i,j} D_L(x_f^L + s_2, y^L + t_2),   0 ≤ s_2 ≤ 2, −1 ≤ t_2 ≤ 1,  x_f ≤ x ≤ x_m    (13)

where e_1 and e_m respectively represent the gradient differences between the real edge point and the first and the last pixel in a given column.
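A minimal sketch of the non-robust-pixel correction of Eqs. (3) and (4) is given below; it assumes the non-robust mask and the LR projection coordinates have already been computed, and all variable names are ours.

import numpy as np

def correct_non_robust(d0, d_lr, mask, proj):
    """d0:   initial HR depth map (H, W)
    d_lr: LR depth map
    mask: boolean HR map, True where a pixel was marked non-robust
    proj: integer array (H, W, 2) giving the LR projection (xL, yL) of each HR pixel.
    Each non-robust pixel is replaced by the valid LR neighbour depth
    closest to its initial value, following Eq. (4)."""
    dh = d0.copy()
    h_lr, w_lr = d_lr.shape
    for x, y in zip(*np.nonzero(mask)):
        xl, yl = proj[x, y]
        # 3x3 neighbourhood in the LR map, clipped at the image border
        patch = d_lr[max(xl - 1, 0):min(xl + 2, h_lr),
                     max(yl - 1, 0):min(yl + 2, w_lr)]
        candidates = patch[patch > 0]               # keep only valid depth values
        if candidates.size:
            dh[x, y] = candidates[np.argmin(np.abs(candidates - d0[x, y]))]
    return dh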


3 Experimental Results

In this paper, Art, Books, Dolls, Laundry, Moebius, and Reindeer [8] are used to show the superiority of our algorithm. Bicubic interpolation [6] is used to down-sample the real depth images with scaling factors of 4 and 8 to obtain the LR depth images. We compare our algorithm with Bicubic [6], JBU [1], TGV [7], and EGDU [5]. The BPR (bad pixel rate) is used as a global objective image-quality evaluation index. It can be seen from Table 1 that our algorithm obtains the lowest BPR for scaling factors of 4 and 8, which proves the stability of the algorithm. Figure 3 and Fig. 4 show the experimental results with up-sampling factors of 4 and 8: columns (a–b) are the HR color map and the real depth map, and columns (c–g) are the HR depth maps output by Bicubic [6], JBU [1], TGV [7], EGDU [5] and our algorithm. Figure 5 and Fig. 6 show the subjective comparison of errors with up-sampling factors of 4 and 8: columns (a–f) are the HR color map and the error maps of Bicubic [6], JBU [1], TGV [7], EGDU [5] and our algorithm. It can be seen that our algorithm obtains the smallest error compared with the other algorithms.

Table 1. BPR experimental results of the algorithms

Sampling factor  Image     Bicubic [6]  JBU [1]  TGV [7]  EGDU [5]  Ours
4                Art       0.3255       0.3117   0.1847   0.0314    0.0299
                 Books     0.1815       0.1583   0.1549   0.0239    0.0163
                 Dolls     0.2226       0.2074   0.1477   0.0422    0.0354
                 Laundry   0.2650       0.2505   0.1320   0.0298    0.0235
                 Moebius   0.2053       0.2018   0.1446   0.0330    0.0312
                 Reindeer  0.1858       0.1827   0.1775   0.0208    0.0201
8                Art       0.5276       0.6113   0.2396   0.0606    0.0411
                 Books     0.3316       0.4312   0.1894   0.0519    0.0498
                 Dolls     0.4014       0.5893   0.1941   0.0897    0.0776
                 Laundry   0.4329       0.4895   0.1640   0.0681    0.0537
                 Moebius   0.3543       0.4099   0.1907   0.0677    0.0625
                 Reindeer  0.3267       0.5039   0.2025   0.0432    0.0416


Fig. 3. Algorithm subjective comparison, up-sampling factor is 4. (a) HR color image, (b) real HR depth image, (c) results of Bicubic, (d) results of JBU, (e) results of TGV, (f) results of EGDU and (g) results of Ours.

Fig. 4. Algorithm subjective comparison, up-sampling factor is 8. (a) HR color map, (b) real HR depth map, (c) results of Bicubic, (d) results of JBU, (e) results of TGV, (f) results of EGDU and (g) results of Ours.

Fig. 5. The error subjective comparison of the algorithms, the up-sampling factor is 4. (a) HR color map, (b) results of Bicubic, (c) results of JBU, (d) results of TGV, (e) results of EGDU and (f) results of Ours.


Fig. 6. The error subjective comparison of the algorithms, the up-sampling factor is 8. (a) HR color map, (b) results of Bicubic, (c) results of JBU, (d) results of TGV, (e) results of EGDU and (f) results of Ours.

4 Conclusion

In this paper, a new depth up-sampling method based on the local edge structure is proposed. Firstly, the non-robust pixels are marked and refined under the guidance of the LR edge map. Then, the pixel classification is completed based on the consistency of the edge structure between the depth map and the color map. Finally, the depth map refinement is completed using the influence factors between pixels and spatial constraints. Experimental results demonstrate that our algorithm outperforms the conventional interpolation algorithms quantitatively and visually.

References

1. Kopf, J., Cohen, M., Lischinski, D., Uyttendaele, M.: Joint bilateral upsampling. ACM Transactions on Graphics (ToG) 26(3), 96–100 (2007)
2. Ren, Y., Liu, J., Yuan, H., et al.: Depth up-sampling via pixel-classifying and joint bilateral filtering. KSII Trans. Internet Inf. Syst. 12(7) (2018)
3. Liu, M.-Y., Tuzel, O., Taguchi, Y.: Joint geodesic upsampling of depth images. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 169–176 (2013)
4. Qiao, Y., Jiao, L., Yang, S., et al.: A novel segmentation based depth map up-sampling. IEEE Trans. Multimed. 21(1), 1–14 (2018)
5. Ren, Y., Liu, J., Yuan, H., et al.: Edge-guided with gradient-assisted depth up-sampling. Electron. Lett. 53(21), 1400–1402 (2017)
6. Jung, C., Joo, S., Ji, S.M.: Depth map upsampling with image decomposition. Electron. Lett. 51(22), 1782–1784 (2015)
7. Ferstl, D., Reinbacher, C., Ranftl, R., Ruether, M., Bischof, H.: Image guided depth upsampling using anisotropic total generalized variation. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 993–1000 (2013)
8. Middlebury Datasets (2013). http://vision.middlebury.edu/stereo/data

Deep Learning and XGBoost Based Prediction Algorithm for Esophageal Varices

Xinyi Chen1, Jiande Sun2, Zhishun Wang3, Yanling Fan1, and Jianping Qiao1(&)

1 Shandong Province Key Laboratory of Medical Physics and Image Processing Technology, School of Physics and Electronics, Shandong Normal University, Jinan, China
[email protected]
2 School of Information Science and Engineering, Shandong Normal University, Jinan, China
3 Department of Psychiatry, Columbia University, New York, NY, USA

Abstract. The deep convolutional neural network has been extensively applied to clinical computer-aided diagnosis. In this study, we combine deep learning feature extraction and the eXtreme gradient boosting (XGBoost) classifier for predicting the risk of esophageal varices (EV). First, the quantitative deep learning features and radiomics features of the regions of interest, which include the spleen, liver and esophagus, are extracted and concatenated. Then, XGBoost and the Least Absolute Shrinkage and Selection Operator (LASSO) are applied for optimal predictive feature selection and prediction of EV risk: XGBoost is used to assess the significance of the extracted features and LASSO is used to select the distinctive features. Finally, random forest, XGBoost and support vector machine classification methods are applied for predicting low-risk and high-risk esophageal varices. We collected computed tomography images of cirrhotic patients in two hospitals as independent training and validation sets. Experimental results show that the features of the esophagus are more distinctive than those of the other organs. Moreover, the combination of deep learning and radiomics features based on the XGBoost algorithm achieves better classification performance in predicting the severity of EV than existing approaches.

Keywords: Esophageal varices · Deep learning · Radiomics · XGBoost

1 Introduction

Esophageal varices are abnormal, enlarged veins in the tube that connects the throat and stomach, and they often occur in patients with liver diseases [1]. Generally, the risk of esophageal variceal bleeding depends on the appearance of red color signs and the size of the varices. In clinical practice, the severity of EV (low-risk or high-risk) is examined by endoscopy, and patients with EV need regular endoscopy for disease monitoring. However, endoscopic examination is time-consuming and expensive, and some patients do not tolerate invasive endoscopy. Therefore, an effective non-invasive computer-aided diagnosis method is necessary to reduce the pain of patients.



Radiomics combined with magnetic resonance imaging (MRI) or computed tomography (CT) has been widely applied to the determination and status evaluation of many diseases. The basic idea is that numerous high-throughput quantitative features are extracted from medical images, which capture information about various aspects of the organ or tumor and have predictive power [2]. Han et al. used dual-energy CT scanning of splenic hemodynamics to assess the severity of esophageal varices in patients with cirrhosis [3]. Jhang et al. used magnetic resonance elastography (MRE) of the spleen and liver and diagnosed EV by splenic stiffness [4]. Liu et al. extracted radiomic features of the spleen and liver and constructed a radiomics-based hepatic venous pressure gradient (HVPG) model with the least absolute shrinkage and selection operator (LASSO) algorithm [5]. Li et al. proposed a multi-organ fusion and machine learning framework for prediction of the esophageal variceal risk in our previous work [6]. In addition, convolutional neural networks (CNNs) have been applied to medical images. Liu et al. used MR and CT images of the spleen and liver with CNNs for identifying portal hypertension patients [7]. Chen et al. trained CNNs to identify EV and discover risk factors for their rupture [8]. Hong et al. utilized portal vein diameter, spleen width, and platelet count in an artificial neural network (ANN) model for predicting the likelihood of variceal appearance [9]. However, these models require more manpower, and the classification performance may be suboptimal because of the limited datasets.
In this paper, a novel deep learning based radiomics method together with the eXtreme gradient boosting (XGBoost) algorithm is proposed to predict the risk of EV. First, the deep learning and radiomics local and global features of three organs in CT images, including the spleen, liver and esophagus, are extracted and concatenated. Then XGBoost and LASSO feature selection algorithms are applied to select the optimal subset of region-of-interest (ROI) features. Finally, random forest (RF), XGBoost and support vector machine (SVM) classifiers are used to build the classification model. Independent training and testing datasets are utilized to demonstrate the validity and generalization of the proposed deep learning based radiomics model.

2 Materials

The study enrolled 182 patients from Qilu Hospital of Shandong University and 63 patients from Jinan Central Hospital. The training set (127 patients) and internal validation set (55 patients) were collected from January 2018 to October 2020 in Qilu Hospital of Shandong University. The external validation set was collected from January 2020 to October 2020 in Jinan Central Hospital. The study was approved by the Medical Ethics Committee of the relevant institutions. The contrast-enhanced CT scans were acquired on Sensation 16 (Siemens), Brilliance iCT (Philips Healthcare), or Discovery CT750 HD (GE Healthcare) scanners with a tube voltage of 120 kVp, a tube current of 150–600 mA, a pitch of 1.375 and a slice thickness of 1.25 mm. The intravenous contrast agent was Ultravist (2.5 mL/kg, 300 mg/mL, injection rate 3 mL/s). The starting times of the arterial, venous and delayed phase scans were 30 s, 70 s and 120 s after injection.


3 Methods

The proposed XGBoost and deep learning-based prediction algorithm for EV consists of four modules: segmentation, feature extraction, feature selection and classification, as illustrated in Fig. 1. Specifically, the deep learning and radiomics methods extract the features of the three organs. The importance of each concatenated feature is evaluated using the XGBoost algorithm. The risk of EV is predicted based on the distinctive features and multiple classifiers.

Fig. 1. The framework of esophageal varices risk prediction based on XGBoost and deep learning.

3.1 Image Preprocessing

The spleen, liver and esophagus were manually and independently segmented by two experienced radiologists. The segmentation boundaries were drawn volume by volume for the three organs with the open-source software ITK-SNAP. The volumes of interest were then normalized for the following feature extraction.

3.2 Feature Extraction

Deep Feature Extraction. CNNs are utilized to extract the features of the regions of interest. First, the ResNet-50 CNN trained on ImageNet is used as the pre-trained model, which is then fine-tuned with stochastic gradient descent with momentum. ResNet-50 classifies the data in the ImageNet database into 1000 classes, while there are two classes in the current task. Therefore, an output layer with two neurons replaces the last fully-connected layer in the fine-tuned network. The value of momentum is adjusted from 0.90 to 0.95. The maximum number of epochs is increased to 150. The batch size is 128 and the initial learning rate is decreased from 0.001 to 0.0001. The L2 regularization parameter is set to 0.001. Second, the normalized ROIs of the spleen, liver and esophagus are put into the fine-tuned ResNet-50 convolutional neural network model. The output deep features are obtained from the average pooling layer before the last fully-connected layer. In total, 6144 (2048 × 3) deep features are extracted for each subject.

Radiomics Feature Extraction. The spleen, liver and esophagus were used for the extraction of the radiomic features with the open-source package pyradiomics, which follows the Image Biomarker Standardization Initiative recommendations. Three image types, including original, Laplacian of Gaussian (LoG) and wavelet, are applied to extract the non-texture and texture features shown in Tables 1 and 2. The LoG filter emphasizes regions of gray-level variation, where sigma defines the coarseness of the emphasized texture and is set to [1.0, 2.0, 3.0, 4.0, 5.0]. In this way, a total of 3864 (1288 × 3) concatenated radiomics features are obtained for each subject from the spleen, liver and esophagus.
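The deep-feature step described above can be sketched as follows with PyTorch/torchvision (a recent torchvision is assumed for the pre-trained weights API); names such as roi_loader are illustrative placeholders, not part of the original pipeline, and the snippet is a minimal sketch rather than the authors' implementation.

```python
# Sketch: fine-tune an ImageNet-pretrained ResNet-50 on 2-class EV labels,
# then read 2048-d deep features from the global average-pooling layer.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)          # 1000 classes -> 2 classes

# SGD with momentum 0.95, initial lr 1e-4, L2 regularization 1e-3 (as in the text)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4,
                            momentum=0.95, weight_decay=1e-3)
criterion = nn.CrossEntropyLoss()

def fine_tune(roi_loader, epochs=150):
    model.train()
    for _ in range(epochs):
        for images, labels in roi_loader:               # batch size 128 in the paper
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

# Everything except the final fully-connected layer ends in the average pooling,
# so its flattened output is the 2048-d deep feature of one organ ROI.
feature_extractor = nn.Sequential(*list(model.children())[:-1])

def deep_features(roi_batch):
    model.eval()
    with torch.no_grad():
        f = feature_extractor(roi_batch)                # (N, 2048, 1, 1)
    return f.flatten(1)                                 # (N, 2048)
```

Concatenating the three organ ROIs then gives the 3 × 2048 = 6144 deep features per subject mentioned above.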

Table 1. Non-texture radiomics features.
Shape: Maximum2DDiameterRow, Maximum2DDiameterColumn, Maximum3DDiameter, Maximum2DDiameterSlice, MajorAxisLength, LeastAxisLength, MinorAxisLength, SurfaceVolumeRatio, MeshVolume, Flatness, Elongation, Sphericity, VoxelVolume, SurfaceArea

Table 2. Texture radiomics features.
Firstorder: Median, Skewness, Energy, Entropy, Mean, Range, etc.
GLCM: Informational Measure of Correlation, Inverse Difference Moment, etc.
GLRLM: Gray Level Variance, Run Percentage, etc.
GLSZM: Small Area Emphasis, Zone Percentage, etc.
GLDM: Large Dependence High Gray Level Emphasis, etc.
NGTDM: Busyness, Complexity, Strength, Coarseness, etc.

3.3 Feature Selection

High-dimensional features with redundant information may degrade the predictive performance and are not conducive to EV diagnosis. Thus, feature selection is essential to reduce the feature dimension and identify the effective features. In this study, the radiomics and deep features of the three organs are sequentially concatenated. After that, the XGBoost algorithm is used to evaluate the importance of the concatenated features, which leads to the selection of features that are important for EV prediction. All regression coefficients that are not relevant are set to zero by the LASSO algorithm. Overfitting increases the probability that the model cannot correctly identify samples that meet the concept definition, and the generalization becomes worse. This problem can be alleviated by introducing regularization, i.e., adding a constraint term to the cost function. The cost function of LASSO regression is as follows:

R(x, y) = Σ_{z=1}^{n} (y_z − l_a(x_z))² + λ Σ_{z=1}^{n} |a_z|    (1)

where Σ_{z=1}^{n} (y_z − l_a(x_z))² is the cost function of the regression model, l_a(x_z) is the linear function of y with respect to x, and λ Σ_{z=1}^{n} |a_z| is the regularization term, which is the L1 norm (the sum of the absolute values of all parameters). λ is the regularization parameter; the larger λ is, the greater the penalty on a linear model with more variables.

Similarly, the XGBoost algorithm has an additional regularization term that limits the model complexity in order to avoid overfitting and improve the generalization capability. The objective function can be stated as:

Y(b) = X(b) + T(b)    (2)

where X(b) is the regularization term and T(b) is the error function. The objective function can then be solved additively:

ŷ_j^(r) = Σ_{Z=1}^{r} g_Z(x_j) = ŷ_j^(r−1) + g_r(x_j)    (3)

Y^(r) = Σ_{j=1}^{n} k(y_j, ŷ_j^(r−1) + g_r(x_j)) + Ω(g_r) + constant    (4)

where k is the loss function, Ω(g_r) is the regularization term, g(x) represents the prediction of a regression tree and y_j is the label value. A Taylor expansion is used to approximate the objective:

g(x + Δx) ≈ g(x) + g′(x)Δx + (1/2) g″(x)Δx²    (5)

Let

w_j = ∂_{ŷ^(r−1)} k(y_j, ŷ^(r−1))    (6)

q_j = ∂²_{ŷ^(r−1)} k(y_j, ŷ^(r−1))    (7)

then

Y^(r) ≈ Σ_{j=1}^{n} [ k(y_j, ŷ_j^(r−1)) + w_j g_r(x_j) + (1/2) q_j g_r²(x_j) ] + Ω(g_r) + constant    (8)

The second-order Taylor expansion of the objective function uses the first- and second-order derivatives at the same time; the second derivative enables a faster and more accurate descent of the gradient. After dropping the constant terms, the result is:

Y^(r) ≈ Σ_{j=1}^{n} [ w_j g_r(x_j) + (1/2) q_j g_r²(x_j) ] + Ω(g_r)    (9)

In this way, the XGBoost algorithm uses this as the splitting criterion to construct each tree [10].

3.4 Classification

The RF, XGBoost and SVM algorithms are applied to build the classification model with the selected features for EV risk prediction. Specifically, XGBoost is used both for feature selection and for classification. The kernel SVM is utilized to classify low-risk and high-risk EV with the optimal classification surface. When the RF algorithm is used as a classifier, the prediction of EV is accomplished by integrating the results of multiple classifiers whose basic unit is the decision tree. Therefore, there are six models in this study, combining two feature selection methods and three classifiers to achieve the prediction of EV risk.
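A minimal sketch of the "two selectors × three classifiers" combination, assuming the concatenated 6144 deep + 3864 radiomics features are already available as X_train/y_train; the xgboost and scikit-learn choices, hyperparameters and variable names are illustrative assumptions rather than the authors' exact configuration.

```python
# Sketch: XGBoost-importance and LASSO feature selection, each paired with
# RF, XGBoost and SVM classifiers, giving the six models described above.
from sklearn.base import clone
from sklearn.linear_model import Lasso
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier

selectors = {
    "xgb": SelectFromModel(XGBClassifier(n_estimators=200)),   # importance from Eq. (9) splits
    "lasso": SelectFromModel(Lasso(alpha=0.01, max_iter=10000)),  # coefficients driven to zero, Eq. (1)
}
classifiers = {
    "rf": RandomForestClassifier(n_estimators=200),
    "xgb": XGBClassifier(n_estimators=200),
    "svm": SVC(kernel="rbf", probability=True),
}

def build_models(X_train, y_train):
    models = {}
    for s_name, selector in selectors.items():
        X_sel = selector.fit_transform(X_train, y_train)        # keep only important features
        for c_name, clf in classifiers.items():
            fitted = clone(clf).fit(X_sel, y_train)              # fresh copy per combination
            models[(s_name, c_name)] = (selector, fitted)
    return models
```

At test time, each stored (selector, classifier) pair transforms the concatenated feature vector and outputs the EV risk class.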

4 Experimental Results

The independent training CT image set, internal validation set and external validation set collected from the two hospitals are utilized to demonstrate the validity and generalization of the proposed deep learning-based radiomics method. Four indicators, the area under the ROC curve (AUC), accuracy (ACC), sensitivity (SEN) and specificity (SPE), are used as evaluation criteria. The compared methods are also evaluated on the same datasets. The classification performance comparison of these methods on the internal validation set is illustrated in Fig. 2, and the results on the external validation set are illustrated in Fig. 3. It can be seen that the accuracy of predicting the risk of EV with esophageal features in both the internal and external validation sets is better than that with spleen or liver features. Moreover, the concatenated features of the three organs lead to a high accuracy because the esophagus contains more predictive information than the spleen and liver. Specifically, the classification accuracy of esophageal varices risk with esophageal features is the highest among all methods and all organs in the internal validation set. For example, the highest accuracy of 0.93 is reached in the esophagus, while the AUC is 0.96, the SPE is 0.93 and the SEN is 0.96. The classification accuracy with spleen features is lower than that of the esophagus and of the three-organ concatenated features. The accuracy with liver features is the worst. A slight difference in the external validation set is that the prediction accuracy of EV risk with the concatenated features of the three organs is the highest among all methods and all organs. The SPE and SEN also show better results compared with those of the esophagus. For example, the highest accuracy with the concatenated features of the three organs reaches 0.92, with an AUC of 0.92, SPE of 0.90 and SEN of 0.94. The accuracy with esophageal features follows closely behind with 0.91. The classification results with the spleen and liver features are consistent with the findings in the internal validation set, indicating the good generalization of the proposed method. In addition, the prediction model based on the XGBoost algorithm works better than the other methods in both the internal and the external validation set. In detail, in the internal validation set, the XGBoost algorithm is used to select significant features of the esophagus and build a prediction model, resulting in an ACC of 0.93 with an AUC of 0.96 (95% CI, 0.9033–1, P > 0.05). In the external validation set, the XGBoost algorithm is used to select effective features from the concatenated features of the three organs, resulting in an ACC of 0.92 and an AUC of 0.92 (95% CI, 0.8524–0.9883, P > 0.05).
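For reference, the four criteria can be computed as below; scikit-learn, the 0.5 decision threshold and the array names are assumptions made only for illustration.

```python
# Sketch: ACC, AUC, SEN and SPE from binary labels and positive-class probabilities.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score, confusion_matrix

def evaluate(y_true, y_prob, threshold=0.5):
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "ACC": accuracy_score(y_true, y_pred),
        "AUC": roc_auc_score(y_true, y_prob),
        "SEN": tp / (tp + fn),   # sensitivity: recall of the high-risk class
        "SPE": tn / (tn + fp),   # specificity: recall of the low-risk class
    }
```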

Fig. 2. The comparison of ACC, AUC, SPE and SEN in the internal validation set.

Fig. 3. The comparison of ACC, AUC, SPE and SEN in the external validation set.


5 Conclusion

In this study, we present an XGBoost-based EV risk prediction algorithm that integrates deep learning and radiomics features. Independent training and testing sets are utilized to enhance the generalization of the model. The ResNet-50 convolutional neural network pre-trained on the ImageNet dataset is used as the backbone, and the deep features are extracted with the fine-tuned ResNet-50 CNN. The deep learning and radiomics features are then combined to classify and predict EV. Experimental results demonstrate that the features of the esophagus are more effective than those of the spleen and liver, and that the prediction model with the XGBoost algorithm outperforms the other prediction models. Overall, the proposed method compares favorably with existing radiomics methods in the prediction of EV risk.

References
1. Lay, C.S.: Endoscopic variceal ligation in prophylaxis of first variceal bleeding in cirrhotic patients with high-risk esophageal varices. Gastrointest. Endosc. 47(2), 1346–1350 (1998)
2. Bourgier, C.: Definition and clinical development. Cancer/Radiothérapie 19(6), 532–537 (2015)
3. Han, X.: Noninvasive evaluation of esophageal varices in cirrhotic patients based on spleen hemodynamics: a dual-energy CT study. Eur. Radiol. 30(6), 3210–3216 (2020)
4. Jhang, Z.: Diagnostic value of spleen stiffness by magnetic resonance elastography for prediction of esophageal varices in cirrhotic patients. Abdom. Radiol. 46(2), 526–533 (2020)
5. Liu, F.: Development and validation of a radiomics signature for clinically significant portal hypertension in cirrhosis (CHESS1701): a prospective multicenter study. EBioMedicine 36, 151–158 (2018)
6. Li, L.: A multi-organ fusion and LightGBM based radiomics algorithm for high-risk esophageal varices prediction in cirrhotic patients. IEEE Access 9, 15041–15052 (2021)
7. Liu, Y.: Deep convolutional neural network-aided detection of portal hypertension in patients with cirrhosis. Clin. Gastroenterol. Hepatol. 18(13), 2998–3007 (2020)
8. Chen, M.: Automated and real-time validation of gastroesophageal varices under esophagogastroduodenoscopy using a deep convolutional neural network: a multicenter retrospective study (with video). Gastrointest. Endosc. 93(2), 422–432 (2021)
9. Hong, W.: Use of artificial neural network to predict esophageal varices in patients with HBV related cirrhosis. Hepatitis Monthly 11(7), 544–547 (2011)
10. Chen, T.: XGBoost: a scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, pp. 785–794. ACM (2016)

A Data-Driven Model for Bearing Remaining Useful Life Prediction with Multi-step Long Short-Term Memory Network Zhen Zhao, Zehou Du, Kaixuan Yang, Haoqin Sun, Jialong Wei, and Yang Liu(&) School of Information Science and Technology, Qingdao University of Science and Technology, Qingdao 266061, China [email protected]

Abstract. Accurate prediction of the remaining useful life of rolling bearings is of great significance for making reasonable maintenance strategies and reducing maintenance costs. In order to better extract features with time-sequence information for prediction, we propose a data-driven method using a multi-step long short-term memory (MS-LSTM) network for predicting the remaining useful life (RUL) of bearings. Firstly, a stacked denoising autoencoder (SDAE) and self-organizing maps (SOM) are combined to construct a one-dimensional health index (HI) curve from the original vibration signal, and then the HI curve is input into the MS-LSTM network to predict the long-term future trend. Finally, the remaining useful life is calculated according to the failure threshold. Compared with the existing advanced methods, the proposed method achieves an absolute improvement of RMSE by 13.63% in whole-life prediction and 7.43% in the last 840 steps.

Keywords: Remaining useful life prediction · Rolling bearing · RNN · LSTM · Feature extraction

1 Introduction

The running state of rolling bearings is closely related to the overall performance of mechanical equipment. Bearing performance degradation is inevitable under long-time operation and extreme environments. Serious safety accidents and unnecessary economic losses will occur when the degradation process cannot be monitored in time. Therefore, it is of great significance to carry out research on the remaining useful life (RUL) prediction of bearings.

In recent years, the theory of reliability evaluation and RUL prediction of key components of mechanical equipment has developed rapidly, forming a variety of prediction theories which have been proved in practice. RUL prediction methods can be categorized as model-driven methods and data-driven methods. Model-driven methods [1–3] utilize statistical or physical models to describe the degradation process of mechanical components, and the model parameters are estimated according to the monitoring data to characterize the degradation process. However, the working conditions and failure forms of the same component in different systems are not the same. Therefore, the applicability of model-driven methods is limited, and their prediction accuracy in complex mechanical systems is low [4].

With the rapid development of sensor technology [5–7], the combination of signal processing and the mobile Internet of Things (IoT) has become a focus of research in recent years [8–10], and data-driven methods for RUL prediction have been proposed to overcome the limitations of model-driven methods, including the continuous hidden Markov model (CHMM) [11], the multiple-output convolutional Gaussian process model [12] and the degradation model based on the Wiener process [13]. Data-driven models are usually more accurate, faster and more economical to deploy than traditional model-driven models. However, these models are based on statistics and require an accurate mathematical model to estimate the parameters. Therefore, the data-driven methods based on statistics are limited in practical applications. With the development of deep learning, which is very suitable for the analysis of diverse, nonlinear and high-dimensional health monitoring data, data-driven methods based on deep learning are gradually being applied in the field of degradation trend prediction [14]. Yuan et al. [15] used the standard LSTM network model for detecting fault location and RUL prediction. Gugulothu et al. [16] proposed an embedded RUL model to solve the problem of sensor data loss in life analysis. Cao et al. [17] realized the prediction of bearing life based on the Gated Recurrent Unit (GRU). However, when the running time of the test bearing is short, the above methods can only predict the subsequent complete life cycle based on a small amount of the original data, resulting in a large difference between the predicted results and the actual values.

In this paper, we propose a data-driven model for bearing RUL prediction with a multi-step long short-term memory (MS-LSTM) network. Firstly, a stacked denoising autoencoder (SDAE) and self-organizing maps (SOM) are used to extract the degradation features from the original data and construct a one-dimensional health index (HI) curve. Next, the HI curve is input into the MS-LSTM network, and a sliding window is used to add the predicted results into the next window for forward prediction. Finally, the RUL is calculated according to the predicted HI curve. The main contributions of this paper are as follows:
(1) A deep feature extraction method based on SDAE and SOM is proposed to extract the HI curve from the original vibration signal of the bearing.
(2) A multi-step LSTM network is proposed to predict the long-term trend of the HI value.
(3) Compared with the existing advanced methods, the proposed method achieves an absolute improvement of RMSE by 13.63% in whole-life prediction and 7.43% in the last 840 steps (Fig. 1).

Fig. 1. Overall structure of the proposed framework.

2 Proposed Method

2.1 HI Curve Construction

As shown in Fig. 2, the SDAE is used for adaptive feature extraction from the bearing vibration data because its multi-hidden-layer structure gives it a strong deep feature extraction ability, and the training process does not need labels, which greatly reduces manual participation. The SDAE consists of six denoising autoencoders, in which the first three constitute the encoding network and the last three constitute the decoding network; the output of the encoding network is the extracted feature. Because the original vibration signal of the bearing is high-dimensional, directly extracting a single feature cannot reflect the degradation of the bearing well. The self-organizing map neural network is based on unsupervised learning and can map high-dimensional data onto a two-dimensional topological network structure [18]. Therefore, the SOM network is used for feature reduction based on the extracted high-level features. The whole process needs neither expert experience in the bearing field nor complex signal processing methods.

Fig. 2. Structure of stacked denoising autoencoders.

2.2 HI Curve Prediction via MS-LSTM

The LSTM neural network solves the problem of long-term dependence and is suitable for processing and predicting important events with relatively long intervals and delays in time series. Each LSTM memory cell contains a forget gate, an input gate and an output gate. The calculation process of the LSTM is shown in Eqs. (1)–(6):

f_t = σ(W_f [h_{t−1}, x_t] + b_f)    (1)

i_t = σ(W_i [h_{t−1}, x_t] + b_i)    (2)

C̃_t = tanh(W_C [h_{t−1}, x_t] + b_C)    (3)

C_t = f_t ⊙ C_{t−1} + i_t ⊙ C̃_t    (4)

o_t = σ(W_o [h_{t−1}, x_t] + b_o)    (5)

h_t = o_t ⊙ tanh(C_t)    (6)

where x_t is the input at time t, h_{t−1} is the output at time t − 1, C_{t−1} is the cell state at time t − 1, C̃_t is the temporary cell state at time t, i_t, f_t and o_t are the outputs of the input gate, forget gate and output gate at time t, b is the bias term and W is the hidden-layer weight.

The input of traditional LSTM prediction is the feature data with a fixed time step, and the output is the feature data with a specified time step. When the required prediction horizon is too long, the prediction error will be large. The MS-LSTM prediction proposed in this paper is shown in Fig. 3. Let the current time be t and the LSTM time step be m; a fixed-length sliding window is set to add the predicted value of the current time, HI(t + 1), to the window of the next time step. Then the output HI(t + 2) of the next window is predicted, and so on. Finally, the time when the bearing is completely damaged is determined according to the failure threshold, denoted last. The advantage of this method is that it can use the prediction results to achieve multi-step prediction and thus obtain the trend of the predicted values. It is especially suitable for situations in which the test set has a short running time and few data, and the RUL needs to be calculated from the change of the future HI values.
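The rolling prediction described above can be sketched as follows; PyTorch, the network sizes and the variable names are illustrative assumptions, and the model is assumed to have been trained on fixed-length HI windows beforehand.

```python
# Sketch: single-output LSTM regressor plus a sliding-window multi-step
# prediction loop that feeds each predicted HI value back into the window.
import torch
import torch.nn as nn

class HIPredictor(nn.Module):
    def __init__(self, hidden_size=5, time_step=5):
        super().__init__()
        self.time_step = time_step
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, window):                  # window: (batch, m, 1)
        h, _ = self.lstm(window)
        return self.out(h[:, -1, :])            # predicted HI(t + 1)

def multi_step_predict(model, hi_history, failure_threshold=6.0, max_steps=3000):
    """Slide the window forward until the predicted HI crosses the failure
    threshold (the point named `last` in the text)."""
    window = list(hi_history[-model.time_step:])
    preds = []
    model.eval()
    with torch.no_grad():
        for _ in range(max_steps):
            x = torch.tensor(window, dtype=torch.float32).view(1, -1, 1)
            nxt = model(x).item()
            preds.append(nxt)
            window = window[1:] + [nxt]         # fixed-length sliding window
            if nxt >= failure_threshold:
                break
    return preds
```

The index of the step at which the loop stops corresponds to the sampling point P used in the RUL calculation of Sect. 2.3.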

Fig. 3. Structure of multi-step LSTM HI prediction.

2.3 Calculation of RUL

We judge whether the predicted HI value has exceeded the preset failure threshold. Suppose that the sampling point exceeding the failure threshold is P, the last sampling point of the current sliding window is T, the time interval between sampling points is I, and the total length of the current test set is L. Then the predicted remaining useful life at the current time t is

PreRul_t = (P − T) × I    (7)

The real RUL at the current time is

ActRul_t = (L − T) × I    (8)
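A direct transcription of Eqs. (7)–(8) under the definitions above; the helper names and the numbers in the usage line are illustrative only.

```python
# Sketch: RUL from the threshold-crossing sample P, window end T and interval I.
def predicted_rul(P, T, I):
    """Eq. (7): predicted RUL at the current time."""
    return (P - T) * I

def actual_rul(L, T, I):
    """Eq. (8): ground-truth RUL from the full run-to-failure record of length L."""
    return (L - T) * I

# Example: threshold crossed at sample 2300, window ends at sample 1800,
# 10 s between samples -> predicted RUL of 5000 s.
print(predicted_rul(2300, 1800, 10))
```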

3 Experiments and Results

3.1 Dataset

This paper uses the dataset of the IEEE 2012 PHM Prognostic Challenge [19], provided by the FEMTO-ST Institute, to construct the HI curve of the rolling bearing vibration signal. As shown in Fig. 4, the experiments were carried out on a laboratory experimental platform (PRONOSTIA) that enables accelerated degradation of bearings under constant and/or variable operating conditions while gathering online health monitoring data. The details are shown in Table 1.

Fig. 4. Overview of PRONOSTIA.

Table 1. Datasets of IEEE 2012 PHM Prognostic Challenge.

3.2 HI Curve Construction Experiment

In this paper, bearings 1_1 and 1_2 under condition 1 are taken as the training set, while bearings 1_3–1_7 are taken as the test set. The structure of the SDAE is 2560-1500-500-100-500-1500-2560, where 2560 corresponds to each sampling point of the bearing's original vibration signal and 100 corresponds to the number of features finally extracted. The number of pretraining epochs for a single autoencoder is 5, the noise rate is 0.1, the learning rate is 0.01, the number of global network fine-tuning epochs is 5, the number of SOM topology-layer nodes is 390, and the number of training epochs is 100. After the model training, the normalized bearing test data are input into the SDAE for adaptive feature extraction, the 100-dimensional features extracted from each sampling point are input into the trained SOM network to calculate the HI value, and the final HI curve is obtained, as shown in Fig. 5.
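A minimal sketch of the 2560-1500-500-100 encoder/decoder and a denoising training loop is given below; PyTorch, the masking-noise corruption and the omission of per-autoencoder layer-wise pretraining are assumptions made for brevity, not the authors' exact training procedure.

```python
# Sketch: SDAE with the 2560-1500-500-100-500-1500-2560 structure; the encoder
# output (100-d) is the feature later fed to the 390-node SOM to obtain HI.
import torch
import torch.nn as nn

class SDAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(2560, 1500), nn.Sigmoid(),
            nn.Linear(1500, 500), nn.Sigmoid(),
            nn.Linear(500, 100), nn.Sigmoid(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(100, 500), nn.Sigmoid(),
            nn.Linear(500, 1500), nn.Sigmoid(),
            nn.Linear(1500, 2560),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_sdae(model, batches, epochs=5, noise_rate=0.1, lr=0.01):
    """Global fine-tuning: reconstruct the clean signal from a corrupted copy
    in which a random fraction (noise_rate) of inputs is zeroed."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for x in batches:                       # x: (batch, 2560), normalized
            mask = (torch.rand_like(x) > noise_rate).float()
            loss = mse(model(x * mask), x)
            opt.zero_grad()
            loss.backward()
            opt.step()

# After training, model.encoder(x) gives the 100-d deep feature per sample.
```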

Fig. 5. HI curve of bearing 1-1 to 1-7.

3.3 Comparison Experiment of HI Prediction

The single-step prediction experiment is carried out under the same conditions as the HI curve construction experiment. The parameters are set as follows: the number of input-layer nodes is 5, the number of memory units is 5, the number of output nodes is 1, the error threshold is 0.01, the number of iterations is 200, and the initial learning rate is 0.01. The square loss function is selected as the loss function. The LSTM network is used to predict the HI value, and the result is shown in Fig. 6(a). Since the test set starts from 1000 steps and the time step is set to 20, there are few data available for prediction, and the subsequent complete life cycle can only be predicted from the data of one time step of the test set, resulting in a large difference between the predicted result and the actual value.

Fig. 6. Comparison between (a) traditional single-step RUL prediction and (b) proposed multi-step RUL prediction.

The parameter setting of the multi-step prediction in the LSTM network is the same as that of the single-step prediction. As can be seen from Fig. 6(b), as the test time increases, the prediction of the LSTM network deviates from the true value, but it can still track the true value well. According to the HI curve construction on all data sets, it can be found that when the value of the HI curve exceeds 6, the remaining life is less than 5% of the whole life cycle, so the failure threshold of the bearing HI curve is set to 6. The time lag of the multi-step HI prediction in reaching the failure threshold is mainly due to the accumulation of multi-step prediction errors. However, in the case of short running time and few data, the health status of key components can be grasped by multi-step prediction of the HI value, which provides a reference for the maintenance and replacement of parts.

In order to further reflect the superiority of the proposed model, life prediction models with other strategies are selected to compare and analyze the degradation trend on the bearing 2-3 data set. The other methods are the traditional LSTM model [15], the bidirectional LSTM [16] and the gated recurrent unit network model [17]. The optimal parameters of the three models are obtained by cross-validation and are listed in Table 2.

Table 2. Hyperparameters for comparison models.
Type     Batchsize  Nodes  Timestep
LSTM     128        64     5–10
BiLSTM   128        64     10–20
GRU      128        64     5–10

The RMSE of the error between the predicted life and the real life is used to evaluate the performance of the model. The test is carried out under two requirements: whole-process prediction and the last 840 steps. It can be seen from Table 3 that the RMSE of the proposed method is better than those of the previous advanced methods under both test conditions. Because whole-life prediction requires a longer prediction horizon, the proposed method shows a greater improvement than the traditional methods. In short-term forecasting such as the last 840 steps, the proposed method also performs well.

Table 3. Comparison of the performance of different models.
Type            RMSE    RMSE of last 840 steps
LSTM            0.4145  0.2324
BiLSTM          0.3683  0.1843
GRU             0.5202  0.2587
Multistep-LSTM  0.3181  0.1706

4 Conclusions

In this paper, a data-driven model based on MS-LSTM is proposed for bearing RUL prediction. The proposed model addresses the problems of few available data and large errors in long-term prediction. Through a series of comparative experiments on the PHM2012 dataset, the RMSE of the proposed method in bearing RUL prediction reaches a 13.63% absolute improvement, and in the last 840 steps the RMSE reaches a 7.43% absolute improvement compared with the existing advanced methods, which verifies the effectiveness of the method. Moreover, the proposed model greatly reduces the training time cost by constructing the HI curve after feature extraction from the original data set. In future work, we will study how to choose the first prediction time and how to reduce the cumulative error in long-term prediction.

References
1. Kacprzynski, G.J., Sarlashkar, A., Roemer, M.J., et al.: Predicting remaining life by fusing the physics of failure modeling with diagnostics. JOM: J. Miner. Metals Mater. Soc. 56(3), 29–35 (2004)
2. Liao, L.: Discovering prognostic features using genetic programming in remaining useful life prediction. IEEE Trans. Ind. Electron. 61(5), 2464–2472 (2013)
3. Wang, J., Gao, R.X., Yuan, Z., et al.: A joint particle filter and expectation maximization approach to machine condition prognosis. J. Intell. Manuf. 30, 1–17 (2017)
4. Kirubarajan, T.: Physically based diagnosis and prognosis of cracked rotor shafts. In: Proceedings of SPIE - The International Society for Optical Engineering, vol. 4733 (2002)
5. Chehade, A., Bonk, S., Liu, K.: Sensory-based failure threshold estimation for remaining useful life prediction. IEEE Trans. Rel. 66(3), 939–949 (2017)
6. Liu, K., Huang, S.: Integration of data fusion methodology and degradation modeling process to improve prognostics. IEEE Trans. Autom. Sci. Eng. 13(1), 344–354 (2014)
7. Chehade, A., Shi, Z.: Sensor fusion via statistical hypothesis testing for prognosis and degradation analysis. IEEE Trans. Autom. Sci. Eng. 16(4), 1774–1787 (2019)
8. Xu, L., Wang, J., Li, X., Cai, F., Tao, Y., Gulliver, T.A.: Performance analysis and prediction for mobile internet-of-things (IoT) networks: a CNN approach. IEEE Internet of Things J. 8(17), 13355–13366 (2021). https://doi.org/10.1109/JIOT.2021.3065368
9. Wang, J., et al.: A novel underwater acoustic signal denoising algorithm for Gaussian/Non-Gaussian impulsive noise. IEEE Trans. Veh. Technol. 70(1), 429–445 (2021)
10. Li, J., Wang, J., Wang, X., Qiao, G., Luo, H., Gulliver, T.A.: Optimal beamforming design for underwater acoustic communication with multiple unsteady Sub-Gaussian interferers. IEEE Trans. Veh. Technol. 68(12), 12381–12386 (2019)
11. Wang, M., Wang, J.: CHMM for tool condition monitoring and remaining useful life prediction. Int. J. Adv. Manuf. Technol. 59(5–8), 463–471 (2012). https://doi.org/10.1007/s00170-011-3536-7
12. Chehade, A., Hussein, A.A.: A multi-output convolved Gaussian process model for capacity estimation of electric vehicle Li-ion battery cells. In: 2019 IEEE Transportation Electrification Conference and Expo (ITEC), pp. 1–4 (2019)
13. Si, X., Wang, W., Hu, C., Chen, M., Zhou, D.: A wiener-process-based degradation model with a recursive filter algorithm for remaining useful life estimation. Mech. Syst. Signal Process. 35(1–2), 219–237 (2013)
14. Lei, Y., Jia, F., Kong, D., et al.: Opportunities and challenges of mechanical intelligent fault diagnosis under big data. J. Mech. Eng. 54(5), 94–104 (2018)
15. Yuan, M., Wu, Y., Lin, L.: Fault diagnosis and remaining useful life estimation of aero engine using LSTM neural network. In: 2016 IEEE International Conference on Aircraft Utility Systems (AUS), pp. 135–140 (2016)
16. Gugulothu, N., Tv, V., Malhotra, P., Vig, L., Agarwal, P., Shroff, G.: Predicting remaining useful life using time series embeddings based on recurrent neural networks (2017)
17. Cao, Y., Jia, M., Ding, P., Ding, Y.: Transfer learning for remaining useful life prediction of multi-conditions bearings based on bidirectional GRU network. Measurement 178, 109287 (2021)
18. Kohonen, T.: The self-organizing map. Neurocomputing 21(1–3), 1–6 (1990)
19. Sutrisno, E., Oh, H., Vasan, A.S.S., Pecht, M.: Estimation of remaining useful life of ball bearings using data driven methodologies. In: 2012 IEEE Conference on Prognostics and Health Management, pp. 1–7 (2012). https://doi.org/10.1109/ICPHM.2012.6299548

State Estimation of Networked Control Systems with Packet Dropping and Unknown Input Lei Tan1 , Xinmin Song1(B) , Guoming Liu1 , and Zhe Liu2 1

School of Information Science and Engineering, Shandong Normal University, Jinan 250358, People’s Republic of China [email protected], [email protected] 2 Sciences and Technology for Innovation, Yamaguchi University, 1677-1 Yoshida, Yamaguchi, Japan [email protected]

Abstract. This brief considers the problem of state estimation for networked control systems with malicious packet drops between controllers and actuators. We first take advantage of the statistical characteristics of the packet drops to estimate the system state. However, when it is difficult to obtain the statistical characteristics of the packet drops, we transform the uncertain malicious packet-drop estimation problem into an unknown input estimation problem; we then design a recursive estimator, in which the unknown input estimate is obtained from innovation analysis and the state estimate is derived by a procedure similar to the Kalman filter. Finally, we utilize simulation examples to show the effectiveness of the estimators.

Keywords: State estimation · Networked control systems · Malicious packet drops · Recursive estimator · Unknown input

1 Introduction

Networked control systems (NCSs) first appeared in Reference [1] from Macri University; the term generally means that the sensors, actuators and controllers in the control system exchange control and sensing information through a network. Reference [2] proposed a vector recovery method, which was applied to the state estimation of NCSs. Reference [3] designed Nash-optimal feedback controllers for NCSs. Reference [4] discussed an optimal control design problem for perturbed NCSs. In practical applications, researchers gradually take packet drops into account in the system model. One type can refer to [5]; the feature of this kind of model is that when a data packet is lost, the noise term of the measurement data is not lost. Further, Reference [6] considered the state estimation of a multiplicative-noise system based on the model of [5]. Another type of packet dropping loses all observation information, including noise. Reference [7] presented an analysis of the Kalman filter with intermittent observations. In many cases, there are disturbances in the system that cannot be measured; they are generally defined as unknown inputs. For systems with an unknown input only in the state equation, Reference [8] designed a recursive estimator in the minimum-variance unbiased sense. Reference [9] considered the asymptotic stability of the estimator developed in [8], and further proved the stability of the time-invariant system. For systems with an unknown input in both the state equation and the observation equation, Reference [10] considered a three-step recursive Kalman estimator. Motivated by the above-mentioned research findings, we consider the state estimation problem for NCSs with packet drops between the controller and actuators in this paper. When the statistical characteristics of the packet drops are quite difficult to obtain, we transform the packet-drop estimation problem between the controller and actuators into an unknown input estimation problem, which is the main contribution of this paper.

2 Problem Statement

We consider a discrete linear plant model:

x(l + 1) = A(l)x(l) + B(l)u_a(l) + w(l),
y(l) = C(l)x(l) + v(l),    (1)

where x(l) ∈ R^n is the system state, u_a(l) ∈ R^m is the value received by the actuators, and y(l) ∈ R^p is the output of the sensors. w(l) ∈ R^n and v(l) ∈ R^p are zero-mean white noises with covariances E{w(l)w^T(j)} = Qδ_{l,j} and E{v(l)v^T(j)} = Rδ_{l,j}, respectively. A(l), B(l), C(l) are coefficient matrices with rank B(l − 1) = rank(C(l)B(l − 1)) = m. The initial state x(0) has mean x̄(0) and covariance P(0). x(0), w(l) and v(l) are assumed to be mutually uncorrelated. We consider packet drops when the control signal u_c(l) is sent to the actuator; the resulting error is denoted e_a(l) = u_a(l) − u_c(l). We assume a secure connection between the sensor and the controller. We utilize Λ(l) = diag(λ_1(l), λ_2(l), ..., λ_p(l)) to model packet dropping: λ_i(l) = 1 represents that the actuator receives the packet correctly; otherwise, λ_i(l) = 0. Therefore, the signal received by the actuator is u_a(l) = Λ(l)u_c(l). In addition, λ_i(l) obeys the probability distribution Pr{λ_i(l) = 1} = α_i, Pr{λ_i(l) = 0} = 1 − α_i. Figure 1 shows the NCSs we studied.

Problem 1. We utilize the statistical characteristics of the packet drops to estimate the system state. The system under consideration is now:

x(l + 1) = A(l)x(l) + B(l)Λ(l)u_c(l) + w(l),
y(l) = C(l)x(l) + v(l).    (2)

The objective is to find the linear minimum mean square error (LMMSE) estimator x̂_1(l|l − 1) of the state x(l).


Fig. 1. The graph of the NCSs we studied

Problem 2. When it is difficult to obtain the statistical characteristics of the packet drops, we transform the packet-drop estimation problem into an unknown input estimation problem. We consider e_a(l) as an unknown input. The system can then be rewritten as (3), and we design the estimators (4)–(7):

x(l + 1) = A(l)x(l) + B(l)(u_c(l) + e_a(l)) + w(l),
y(l) = C(l)x(l) + v(l).    (3)

x̂_2(l|l − 1) = A(l − 1)x̂_2(l − 1|l − 1) + B(l − 1)u_c(l − 1),    (4)

ê_a(l − 1) = M_q(l)(y(l) − C(l)x̂_2(l|l − 1)),    (5)

x̂*(l|l) = x̂_2(l|l − 1) + B(l − 1)ê_a(l − 1),    (6)

x̂_2(l|l) = x̂*(l|l) + K_q(l)(y(l) − C(l)x̂*(l|l)).    (7)

The goal is to obtain the gain matrices M_q(l) ∈ R^{m×p} and K_q(l) ∈ R^{n×p} such that E{ẽ_a(l)ẽ_a^T(l)} and E{x̃_2(l)x̃_2^T(l)} are minimized, respectively.

3 LMMSE Estimation

We define the errors and the state error covariance matrix as follows: x̃_1(l|l − 1) ≜ x(l) − x̂_1(l|l − 1), P_1(l|l − 1) ≜ E{x̃_1(l|l − 1)x̃_1^T(l|l − 1)}, Π = diag{α_1(1 − α_1), ..., α_p(1 − α_p)}.

Lemma 1. For the given system (2), we obtain the LMMSE estimator as follows:

x̂_1(l + 1|l) = A(l)x̂_1(l|l) + B(l)Λ_g(l)u_c(l),    (8)

x̂_1(l|l) = x̂_1(l|l − 1) + K_p(l)[y(l) − C(l)x̂_1(l|l − 1)],    (9)

x̂_1(0|−1) = x̄(0),    (10)

where

K_p(l) = A(l)P_1(l|l − 1)C^T(l)[C(l)P_1(l|l − 1)C^T(l) + R]^{−1},    (11)

Λ_g(l) = diag(α_1, ..., α_p),    (12)

and we obtain P_1(l|l − 1) by the following Riccati equation:

P_1(l + 1|l) = [A(l) − K_p(l)C(l)]P_1(l|l − 1)[A(l) − K_p(l)C(l)]^T + B(l)[Π ∘ (u_c(l)u_c^T(l))]B^T(l) + Q + K_p(l)RK_p^T(l),    (13)

P_1(0|−1) = P(0).    (14)

Proof. From (2), (8) and (9), we derive

x̃_1(l + 1|l) = [A(l) − K_p(l)C(l)]x̃_1(l|l − 1) − K_p(l)v(l) + w(l) + B(l)[Λ(l) − Λ_g(l)]u_c(l).    (15)

Then we calculate the following formula based on Eq. (15):

P_1(l + 1|l) = A(l)P_1(l|l − 1)A^T(l) + Q + B(l)[Π ∘ (u_c(l)u_c^T(l))]B^T(l) + [K_p(l) − K_p*(l)](C(l)P_1(l|l − 1)C^T(l) + R)[K_p(l) − K_p*(l)]^T − K_p*(l)(C(l)P_1(l|l − 1)C^T(l) + R)(K_p*)^T(l).    (16)

It is obvious that P_1(l + 1|l) is minimized when K_p(l) = K_p*(l), i.e., K_p(l) = A(l)P_1(l|l − 1)C^T(l)[C(l)P_1(l|l − 1)C^T(l) + R]^{−1}. □

4 Minimum Variance Unbiased Estimation

4.1 Input Estimation

In this subsection, we obtain M_q(l) such that (5) is an unbiased estimator of e_a(l − 1) and E{ẽ_a(l)ẽ_a^T(l)} is minimized. Defining

ỹ(l) ≜ y(l) − C(l)x̂_2(l|l − 1),    (17)

from (4) and (5) we obtain

ỹ(l) = C(l)B(l − 1)e_a(l − 1) + d(l),    (18)

where

d(l) = C(l)(A(l − 1)x̃_2(l − 1) + w(l)) + v(l),    (19)

and x̃_2(l) = x(l) − x̂_2(l|l).

Theorem 1. (4) and (5) constitute an unbiased estimator of e_a(l − 1) if x̂_2(l − 1|l − 1) is unbiased and M_q(l) satisfies the following condition:

M_q(l)C(l)B(l − 1) = I_m.    (20)

Proof. According to (18), it yields

ẽ_a(l) = e_a(l − 1) − M_q(l)C(l)B(l − 1)e_a(l − 1) − M_q(l)d(l).

Thus the unbiasedness of ê_a(l − 1) requires M_q(l)C(l)B(l − 1) = I_m. The proof is finished. □

We define R̃(l) as the variance of d(l):

R̃(l) = E{d(l)d^T(l)} = C(l)(A(l − 1)P_2(l − 1|l − 1)A^T(l − 1) + Q)C^T(l) + R,    (21)

where P_2(l|l) = E(x̃_2(l)x̃_2^T(l)). Then we define P_2(l|l − 1) = A(l − 1)P_2(l − 1|l − 1)A^T(l − 1) + Q, and thus we obtain R̃(l) as follows: R̃(l) = C(l)P_2(l|l − 1)C^T(l) + R.

Theorem 2. Under the condition rank(C(l)B(l − 1)) = rank B(l − 1) = m, and with x̂_2(l − 1|l − 1) unbiased, we derive the unbiased minimum-variance input estimator and its gain matrix M_q(l):

ê_a(l − 1) = M_q(l)ỹ(l),
M_q(l) = [B^T(l − 1)C^T(l)R̃^{−1}(l)C(l)B(l − 1)]^{−1}B^T(l − 1)C^T(l)R̃^{−1}(l).    (22)

The proof is similar to that of Lemma 1 and is therefore omitted.

4.2 State Estimation

Let x̂_2(l − 1|l − 1) and ê_a(l − 1) be unbiased. We define x̃*(l) = x(l) − x̂*(l|l), and utilizing (3), (4) and (6), we derive

x̃*(l) = A(l − 1)x̃_2(l − 1) + B(l − 1)ẽ_a(l − 1) + w(l − 1).    (23)

We calculate an expression for ẽ_a(l − 1) by utilizing (5) and (18)–(19):

ẽ_a(l − 1) = (I_m − M_q(l)C(l)B(l − 1))e_a(l − 1) − M_q(l)d(l) = −M_q(l)d(l).    (24)

Substituting (24) into (23), we obtain

x̃*(l) = Ã(l − 1)x̃_2(l − 1) + w̃(l − 1),    (25)

Ã(l − 1) = (I_n − B(l − 1)M_q(l)C(l))A(l − 1),    (26)

w̃(l − 1) = (I_n − B(l − 1)M_q(l)C(l))w(l − 1) − B(l − 1)M_q(l)v(l).    (27)

Then, according to (25), (26) and (27), we compute P*(l|l) as follows:

P*(l|l) = (I_n − B(l − 1)M_q(l)C(l))P_2(l|l − 1)(I_n − B(l − 1)M_q(l)C(l))^T + B(l − 1)M_q(l)RM_q^T(l)B^T(l − 1).    (28)

Then, according to (3), (7) and (25), it yields

x̃_2(l) = (I_n − K_q(l)C(l))(Ã(l − 1)x̃_2(l − 1) + w̃(l − 1)) − K_q(l)v(l).    (29)

By applying (28) and (29), we can calculate P_2(l|l) as follows:

P_2(l|l) = K_q(l)R̃*(l)K_q^T(l) − V*(l)K_q^T(l) − K_q(l)(V*)^T(l) + P*(l|l),    (30)

where

R̃*(l) = C(l)P*(l|l)C^T(l) + R + C(l)S*(l) + (S*)^T(l)C^T(l),
V*(l) = P*(l|l)C^T(l) + S*(l),
S*(l) = −B(l − 1)M_q(l)R.    (31)

We define R̃*(l) = E(ỹ*(l)(ỹ*)^T(l)), where

ỹ*(l) ≜ y(l) − C(l)x̂*(l|l) = (I_p − C(l)B(l − 1)M_q(l))d(l).    (32)

Utilizing (32) and (21), R̃*(l) can be written as

R̃*(l) = (I_p − C(l)B(l − 1)M_q(l))R̃(l)(I_p − C(l)B(l − 1)M_q(l))^T.

It is easy to see that the matrix R̃*(l) is singular; thus K_q(l) is not unique. We therefore define the gain matrix as follows:

K_q(l) = K̄_q(l)ρ(l),    (33)

where ρ(l) ∈ R^{r×p} is a matrix satisfying that ρ(l)R̃*(l)ρ^T(l) has full rank.

(34)

˜ ∗ (l) are given by in (31). P2 (l|l) in (30) can be rewritten: where V ∗ (l) and R P2 (l|l) = P ∗ (l|l) − Kq (l)(P ∗ (l|l)C T (l) + S ∗ (l))T .

(35)

The proof process can refer to the proof of lemma 1, therefore we omit it.

5

Simulation Results

For Problem 1, we discuss the following discrete linear system     ⎧ 0.88 0.1 12 ⎪ ⎪ x(l) + Λ(k)uc (l) + w(l), ⎨ x(l + 1) = 0.4 0.3 21   ⎪ 0.3 0.6 ⎪ ⎩ y(l) = x(l) + v(l), 1.1 −1.1 with  x(0) =

           −2 1 10 0 11 10 , P (0) = sin(k). ,x ˆ(0) = ,Q = ,R = , uc (k) = 2 1 01 0 11 01

State Estimation of Networked Control Systems

1145

u_c(l) is the controller signal. w(l) and v(l) are zero-mean white noises with covariances Q and R, respectively. Λ(l) is the diagonal matrix used to model malicious packet dropping. We then consider the tracking performance of the LMMSE estimator x̂_1(l|l − 1) under the packet-dropping setting Λ_g(l) = diag{0.8, 0.9}. From Fig. 2 and Fig. 3, we can see clearly that the estimator x̂_1(l|l − 1) tracks the state x(l) well, which further shows that the estimator designed in Sect. 3 is effective.

Fig. 2. The x1(l) and its estimator.

Fig. 3. The x2(l) and its estimator.
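The Problem 1 simulation can be reproduced along the following lines with NumPy; Eqs. (8)–(9) are combined here into their one-step-predictor form, and the way u_c(l) = sin(l) is applied to both input channels, together with all variable names, is an illustrative assumption rather than the authors' exact setup.

```python
# Sketch: LMMSE predictor of Lemma 1 (Eqs. (8)-(14)) under Bernoulli packet drops.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.88, 0.1], [0.4, 0.3]])
B = np.array([[1.0, 2.0], [2.0, 1.0]])
C = np.array([[0.3, 0.6], [1.1, -1.1]])
Q = np.array([[1.0, 1.0], [1.0, 1.0]])
R = np.eye(2)
alpha = np.array([0.8, 0.9])                       # arrival probabilities (Lambda_g)
Pi = np.diag(alpha * (1 - alpha))

x = np.array([-2.0, 2.0])                          # true state x(0)
x_hat = np.array([1.0, 1.0])                       # x_hat_1(0|-1)
P = 10 * np.eye(2)                                 # P_1(0|-1)
errs = []

for l in range(100):
    u = np.array([np.sin(l), np.sin(l)])           # u_c(l) on both channels (assumed)
    lam = np.diag((rng.random(2) < alpha).astype(float))   # realized Lambda(l)
    w = rng.multivariate_normal(np.zeros(2), Q)
    v = rng.multivariate_normal(np.zeros(2), R)
    y = C @ x + v                                  # measurement of (2)
    # gain (11), then (8)-(9) merged into the one-step predictor
    Kp = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    x_hat = A @ x_hat + B @ np.diag(alpha) @ u + Kp @ (y - C @ x_hat)
    # Riccati recursion (13) with the Hadamard-product term
    AK = A - Kp @ C
    P = AK @ P @ AK.T + B @ (Pi * np.outer(u, u)) @ B.T + Q + Kp @ R @ Kp.T
    x = A @ x + B @ lam @ u + w                    # plant (2)
    errs.append(np.linalg.norm(x - x_hat))

print("mean one-step prediction error:", np.mean(errs))
```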

For Problem 2, the initial values x(0), P(0), x̂(0), Q and R are set as above, and

A = [0.2 0.5; 0.28 0.3], B = [8; 3], C = [0.3 0.6; 1.1 −1.1], u_c(l) = sin(l).

These values are introduced into the discrete-time system of Problem 2 in Sect. 2. Figures 4 and 5 demonstrate that the state estimator x̂_2(l|l) has a good tracking effect, which also verifies the effectiveness of the method in Sect. 4.

Fig. 4. The x1(l) and its estimator.

Fig. 5. The x2(l) and its estimator.
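For completeness, a sketch of the unknown-input recursion (4)–(7) with the gains of (22) and (34) is given below. Using the Moore–Penrose pseudo-inverse of R̃*(l) in place of an explicit ρ(l) is one admissible choice, and the artificial unknown-input signal and all names are assumptions made only so the sketch runs end to end.

```python
# Sketch: minimum-variance unbiased input and state estimation (Sect. 4) on the
# Problem 2 example, written with NumPy.
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.2, 0.5], [0.28, 0.3]])
B = np.array([[8.0], [3.0]])
C = np.array([[0.3, 0.6], [1.1, -1.1]])
Q = np.array([[1.0, 1.0], [1.0, 1.0]])
R = np.eye(2)

x = np.array([-2.0, 2.0])
x_hat = np.array([1.0, 1.0])                       # x_hat_2(0|0)
P2 = 10 * np.eye(2)                                # P_2(0|0)
n = 2

for l in range(1, 100):
    uc = np.array([np.sin(l - 1)])                 # known control u_c(l-1)
    ea = np.array([0.5 * np.sin(0.3 * l)])         # unknown actuator-side error (assumed)
    w = rng.multivariate_normal(np.zeros(2), Q)
    v = rng.multivariate_normal(np.zeros(2), R)
    x = A @ x + B @ (uc + ea) + w                  # plant (3)
    y = C @ x + v

    x_pred = A @ x_hat + B @ uc                    # (4): time update, known input only
    P_pred = A @ P2 @ A.T + Q
    Rt = C @ P_pred @ C.T + R                      # R~(l)
    G = B.T @ C.T @ np.linalg.inv(Rt)
    Mq = np.linalg.inv(G @ C @ B) @ G              # (22): input gain
    ea_hat = Mq @ (y - C @ x_pred)                 # (5): input estimate
    x_star = x_pred + B @ ea_hat                   # (6): compensated prediction
    IBMC = np.eye(n) - B @ Mq @ C
    P_star = IBMC @ P_pred @ IBMC.T + B @ Mq @ R @ Mq.T @ B.T    # (28)
    S_star = -B @ Mq @ R                                          # (31)
    R_star = C @ P_star @ C.T + R + C @ S_star + S_star.T @ C.T
    V_star = P_star @ C.T + S_star
    Kq = V_star @ np.linalg.pinv(R_star)           # (34) with a pseudo-inverse choice of rho
    x_hat = x_star + Kq @ (y - C @ x_star)         # (7): measurement update
    P2 = P_star - Kq @ (P_star @ C.T + S_star).T   # (35)
```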

6 Conclusion

We have considered the problem of state estimation for NCSs with packet drops between the controller and actuators. When the statistical characteristics of the packet drops can be obtained, we have derived the estimator based on a Riccati equation and the Hadamard product. When it is quite difficult to obtain the statistical characteristics of the packet drops, we have first transformed the packet-drop problem into an unknown input problem. Then, we have utilized innovation analysis to obtain the unknown input estimator, and the state estimate is derived by a procedure similar to the Kalman filter. Finally, we have shown the effectiveness of the estimators by numerical examples.

References
1. Walsh, G.C., Ye, H., Bushnell, L.G.: Stability analysis of networked control systems. IEEE Trans. Control Syst. Technol. 10(3), 438–446 (2002)
2. Zhang, M., Hui, S., Bell, M.R., et al.: Vector recovery for a linear system corrupted by unknown sparse error vectors with applications to secure state estimation. IEEE Control Syst. Lett. 3(4), 895–900 (2019)
3. Sun, Y., Xu, J., Zhang, H.: Nash equilibrium strategy for networked control systems with state-package dropouts. In: 2020 IEEE 16th International Conference on Control and Automation (ICCA), Singapore, pp. 1452–1456. IEEE Press (2020)
4. Bahraini, M., Zanon, M., Colombo, A., Falcone, P.: Optimal control design for perturbed constrained networked control systems. IEEE Control Syst. Lett. 5(2), 553–558 (2021)
5. Nahi, N.E.: Optimal recursive estimation with uncertain observation. IEEE Trans. Inf. Theory 15(4), 457–462 (1969)
6. Song, X., Park, J.H.: Linear optimal estimation for discrete-time measurement delay systems with multichannel multiplicative noise. IEEE Trans. Circ. Syst. II: Express Briefs 64(2), 156–160 (2017)
7. Sinopoli, B., Schenato, L., Franceschetti, M., et al.: Kalman filtering with intermittent observations. IEEE Trans. Autom. Control 49(9), 1453–1464 (2004)
8. Gillijns, S., Moor, B.D.: Unbiased minimum-variance input and state estimation for linear discrete-time systems. Automatica 43(1), 111–116 (2007)
9. Fang, H., De Callafon, R.A.: On the asymptotic stability of minimum-variance unbiased input and state estimation. Automatica 48(12), 3183–3186 (2012)
10. Gillijns, S., Moor, B.D.: Unbiased minimum-variance input and state estimation for linear discrete-time systems with direct feedthrough. Automatica 43(5), 934–937 (2007)

Self-attention Mechanism Fusion for Recommendation in Heterogeneous Information Network Zhenyu Zang1,2, Xiaohui Yang1,2(&), and Zhiquan Feng1,2 1

Shandong Provincial Key Laboratory of Network Based Intelligent Computing, University of Jinan, Jinan 250022, China [email protected] 2 School of Information Science and Engineering, University of Jinan, Jinan 250022, China

Abstract. In recent years, many recommendation methods based on heterogeneous information networks have been proposed. However, traditional heterogeneous information network methods mainly rely on meta-path similarity for recommendation, which cannot fully mine users' potential preference features. We propose a heterogeneous information network model using a self-attention mechanism for recommendation (HINS). In our method, local neighbor information is modeled for users and items respectively via a collaborative attention mechanism. Moreover, by optimizing the relation representation between nodes learned from a meta-path-based multi-label classification problem, a self-attention mechanism is added to the relation representation to personalize the model. Finally, the two parts are unified into one model for top-N recommendation. We conduct evaluation experiments on four different datasets, and our model achieves the best results.

Keywords: Heterogeneous information network · Self-attention mechanism · Recommendation · Meta-path

1 Introduction

As a new direction, heterogeneous information networks (HIN) have attracted a lot of attention [1]. Sun et al. [2] utilized the meta-path to mine potential information in heterogeneous information networks. Hu et al. [3] proposed a context-based meta-path and attention model for recommender systems. HINs are composed of different types of objects, and the connections between different objects represent different relationships [4]. The meta-path has been used for recommendation to measure the similarity between different objects [5]. Shi et al. [6] designed a collaborative filtering model which could integrate heterogeneous information for personalized recommendation.

Figure 1 illustrates an example of a HIN. Users connect to different items through their click history, and these items have different attributes (e.g., type, actor), which constitutes a heterogeneous graph. In a heterogeneous graph, different semantic relations along meta-paths can enhance the representations of users and items. For example, u1 can interact with m4 through the UMUM path (Fig. 1c). Network schemas emphasize the type constraints of object and relation sets, which make heterogeneous networks semi-structured, thus facilitating semantic exploration and pattern mining (Fig. 1b).

Fig. 1. An example of HIN.

Inspired by spatiotemporal feature gating [7], we add a self-attention mechanism to the recommendation model. In our method, an attention mechanism is used to process local information, and the global information is processed by a classification model. Based on the relation vector representation generated for users and items, the self-attention mechanism is used to mine users' own preferences when the relation vector is generated. We employ a classification loss to determine whether a meta-path exists, so that the global information can be used. These two parts are combined in a unified model to optimize the ranking recommendation. The main contributions of our research are as follows. Firstly, we propose a unified model which integrates different user information through a heterogeneous information network. Secondly, we add a self-attention mechanism to mine users' own preferences in the HIN, which makes the recommended items closer to users and the results more interpretable. Finally, we conduct extensive experiments on four different datasets, and our model outperforms existing methods.

2 Related Work

2.1 Recommendation Based on Attention Mechanism

Many research works have proposed methods that utilize deep learning to model recommendation tasks [8]. He et al. [9] proposed the neural collaborative filtering (NCF) model. On this basis, a series of collaborative filtering algorithms relying on deep learning have been proposed [10].


Chen et al. [11] introduced a model which applied attention mechanisms to make full use of review information. Wang et al. [12] proposed a neural attention network to model the different interaction features. Pablo et al. [13] proposed a learnable model which used an attention mechanism to learn the dynamic changes of user interest in product sequences. In order to make the recommendation model more explainable, Yu et al. [14] proposed an interpretable model which adopted an attention mechanism to extract meaningful text from reviews.

2.2 Network Embedded Learning

Network embedding is often used in data mining and has good performance in structural feature acquisition [15]. DeepWalk [16] learned network representations by using random walks with the skip-gram model. Grover et al. [17] proposed a network representation learning framework based on biased random walks. In order to mine the high-order proximity characteristics of network representations, Cao et al. [18] proposed the GraRep model. Unfortunately, most network embedding learning methods mainly address homogeneous networks. Recently, some studies have introduced network representation learning into heterogeneous information networks. Chang et al. [19] proposed a deep embedding learning model to capture the potential information in the interactions of different node types in a heterogeneous information network.

2.3 The Proposed Model

We propose our model to integrate all interaction information for recommendation in heterogeneous information networks, which is called HINS. We show the architecture in Fig. 2.

Fig. 2. The architecture of the proposed model


Our model uses neighbor information flexibly and applies a collaborative attention mechanism. On this basis, the relation vector representation is learned through the interactions between users and items. In addition, we use the learned relation representation to infer their interaction distribution. We adopt context feature gating (i.e., a self-attention mechanism) and model the problem as multi-label classification. Finally, we fuse these two parts into a unified model for recommendation.

2.4 Basic Recommendation Model Based on Attention Mechanism

Each item (user) is represented by other users (items). We use a model based on matrix factorization to sort these directly connected neighbors. We establish look-up layers that map each node to a low-dimensional vector. The local information is expressed as a matrix according to the following formula:

X_u = Look-up(P, P_u),  Y_v = Look-up(Q, Q_v)    (1)

Because different users have different preferences for different products, we need to distinguish their attention. Through the local information representation matrices of users and items, we can compute the collaborative attention matrix. The calculation formula is as follows:

M_{i,j} = F_u(X_u^i) M_att F_v(Y_v^j)    (2)

where M_att is an attention matrix, F_u(·) is the user's multilayer feedforward neural network function and F_v(·) is the item's multilayer feedforward neural network function. We pool (P) the rows and columns of M_{i,j}; the expressions are as follows:

M_u^i = P({M_{i,j}}_{j=1}^{k_2}),  M_v^i = P({M_{j,i}}_{j=1}^{k_1})    (3)

Next, we use a normalization function to integrate the local information and generate the final node embeddings using (4):

x_u = X_u M_u^i,  y_v = Y_v M_v^i    (4)

2.5 Global Information Modeling Based on Self-attention
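A toy sketch of the co-attention pooling in Eqs. (1)–(4) is shown below; PyTorch, the mean-pooling choice for P(·), the softmax normalization and the tensor shapes are assumptions made for illustration rather than the authors' exact design.

```python
# Sketch: bilinear co-attention over user/item neighbor embeddings, pooled and
# softmax-normalized to weight the neighbors into the final node embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttention(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.f_u = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())   # F_u
        self.f_v = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())   # F_v
        self.m_att = nn.Parameter(torch.randn(dim, dim))           # M_att

    def forward(self, X_u, Y_v):
        # X_u: (k2, dim) neighbor embeddings of the user
        # Y_v: (k1, dim) neighbor embeddings of the item
        M = self.f_u(X_u) @ self.m_att @ self.f_v(Y_v).T           # Eq. (2): (k2, k1)
        m_u = F.softmax(M.mean(dim=1), dim=0)                      # Eq. (3): pool over columns
        m_v = F.softmax(M.mean(dim=0), dim=0)                      # Eq. (3): pool over rows
        x_u = X_u.T @ m_u                                          # Eq. (4): (dim,)
        y_v = Y_v.T @ m_v
        return x_u, y_v

# Example with 100 sampled neighbors per node, as in the experiments.
attn = CoAttention(dim=64)
x_u, y_v = attn(torch.randn(100, 64), torch.randn(100, 64))
```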

We obtain the interaction information between different nodes to model the global information. We can get the interaction matrix based on meta-path as follows: H q ¼ CA1 A2  CA1 A2  . . .  CAl1 Al

ð5Þ

We use multi-layer perceptron to model the interaction between users and items. Then add self-attention mechanism to standardize. The input of multilayer sensor is s0 ¼ hu;v , which is a user-item pair. The specific function equation is as follows:

Self-attention Mechanism Fusion for Recommendation

  sj ¼ ReLU WjT sj1 þ bj  sj1

1151

ð6Þ

where W is weight matrix and b is bias vector of layer j. We can get the expression of the potential relationship between a user and an item as s ¼ sL . After obtaining the relationship representation, we can use the output vector to predict the interaction probability. The probabilities are calculated as follows: qs ¼ W 0 s þ b0

ð7Þ

qs is a prediction vector with the length of |p|. Then we encode the distribution of interaction into one-hot vector, denoted as yRjuj1 . We get the interaction matrix H q of each meta path q. We define the expression as y ¼ fH quv jqug. It means that there is an interaction based on meta-path between user and item. We use the following optimization objectives to solve this problem: Lmc ð yÞ ¼ qs  qs  y þ logð1 þ expðqs ÞÞ

2.6

ð8Þ

Joint Model

After obtaining the global information based on self-attention, we need to fuse this part of information into the representation of heterogeneous information network. We can fuse it by extending the score function: cðu; v; zÞ ¼ jjxu þ s  yv jj22

ð9Þ

For each positive sample pair, we select a negative sample pair to match it, and then learn the relationship representations of the positive sample $s^{+}$ and the negative sample $s^{-}$ respectively. The loss function is as follows:

$$L_{trans} = \max\left(0,\; \lambda + c(u^{+}, v^{+}, s^{+}) - c(u^{-}, v^{-}, s^{-})\right) \qquad (10)$$

where $\lambda > 0$ is a hyperparameter. Combining the objective functions, we propose a unified recommendation model. The overall optimization objective of our model is as follows:

$$L = L_{trans} + \alpha\left[L_{mc}(y^{+}) + L_{mc}(y^{-})\right] + \beta L_{reg} \qquad (11)$$
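A compact sketch of the translation-style score of Eq. (9), the margin loss of Eq. (10) and the joint objective of Eq. (11); the negative sampling procedure and the values of the margin λ, α and β are left as placeholders.

```python
import torch

def score(x_u, s, y_v):
    # Eq. (9): squared Euclidean distance of the translated user embedding to the item.
    return ((x_u + s - y_v) ** 2).sum(dim=-1)

def joint_loss(pos, neg, L_mc_pos, L_mc_neg, L_reg, margin=1.0, alpha=0.1, beta=1e-4):
    # pos/neg are (x_u, s, y_v) triples for a positive pair and a sampled negative pair.
    L_trans = torch.clamp(margin + score(*pos) - score(*neg), min=0.0).mean()  # Eq. (10)
    return L_trans + alpha * (L_mc_pos + L_mc_neg) + beta * L_reg              # Eq. (11)
```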


3 Experiments

3.1 Compared Methods

MF. Matrix factorization is the most widely used collaborative filtering model in recommender systems. With a cross-entropy loss function, matrix factorization can be used for ranking recommendation.

NeuMF. Different from the traditional collaborative filtering algorithm based on matrix factorization, the NeuMF framework introduces neural networks to learn the interaction information between users and items.

LGRec. By constructing a heterogeneous information network, meta-paths are used to mine potential information in the heterogeneous network, and all information is integrated for the recommendation task.

HINSnola. This is a variant of our model. In order to verify the effectiveness of the collaborative attention mechanism, we remove the collaborative attention mechanism from the model in this variant.

HINSnoga. This is the other variant of our model, in which we remove the self-attention mechanism.

HINS. This is our complete model, which integrates the self-attention mechanism into the heterogeneous information network.

Table 1. Statistics of the four datasets

Datasets  | Relations (A-B)   | Meta-paths | #A     | #B     | #A-B
Movielens | User-Movie        | UMUM       | 943    | 1,682  | 100,000
Movielens | User-Age          | UMGM       | 943    | 8      | 943
Movielens | User-Occupation   | UAUM       | 943    | 21     | 93
Movielens | Movie-Genre       | UOUM       | 1,682  | 18     | 2,861
Yelp      | User-Business     | UBUB       | 16,239 | 14,284 | 198,397
Yelp      | User-User         | UBCaB      | 16,239 | 16,239 | 158,590
Yelp      | Business-City     | UUB        | 14,267 | 47     | 14,267
Yelp      | Business-Category | UBCiB      | 14,180 | 511    | 40,009
LastFM    | User-Artist       | UATA       | 1,892  | 17,632 | 92,834
LastFM    | User-User         | UAUA       | 1,892  | 1,892  | 18,802
LastFM    | Artist-Artist     | UUUA       | 17,632 | 17,632 | 153,399
LastFM    | Artist-Tag        | UUA        | 17,632 | 11,945 | 184,941
Amazon    | User-Item         | UIUI       | 3,584  | 2,753  | 50,903
Amazon    | Item-View         | UIVI       | 2,753  | 3,857  | 5,694
Amazon    | Item-Brand        | UIBI       | 2,753  | 334    | 2,753
Amazon    | Item-Category     | UICI       | 2,753  | 22     | 5,508


3.2 Detailed Implementation

We test our model on four datasets: Movielens, LastFM, Yelp and Amazon. These datasets are often used in recommendation tasks; their details are given in Table 1. We use hit rate (HR@k) and normalized discounted cumulative gain (NDCG@k) to evaluate the recommendation performance of the model. We construct the heterogeneous information network from meta-path information. We extract 100 interactive items (users) for each user (item), and then use matrix factorization to generate their respective vectors. In the unified model, we set the learning rate to 0.002 and the batch size to 256. We set four hidden layers to mine users' potential information, and the initial embedding size of the hidden layer is 256.
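For reference, HR@k and NDCG@k under the common leave-one-out evaluation (a single held-out positive item ranked against sampled negatives) can be computed as below; the exact ranking protocol used here is our assumption.

```python
import math

def hit_rate_at_k(rank, k=10):
    # rank is the 1-based position of the held-out positive item in the ranked list.
    return 1.0 if rank <= k else 0.0

def ndcg_at_k(rank, k=10):
    # With a single relevant item, the ideal DCG is 1, so NDCG reduces to 1/log2(rank+1).
    return 1.0 / math.log2(rank + 1) if rank <= k else 0.0
```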

3.3 Results

In Table 2, we show the results of our model and the comparison methods on different datasets. The main results are summarized as follows. The proposed model outperforms the comparison methods on all four datasets. This observation verifies the effectiveness of our model in the top-N recommendation task: the model integrates neighbor information and global heterogeneous information more effectively to improve recommendation performance. Among the variants of HINS, the results satisfy HINS > HINSnola > HINSnoga, which shows that the attention mechanisms are effective. In most cases, the self-attention over the global information plays a crucial role in improving the performance.

Table 2. The comparison results of different models.

Model    | Movielens       | LastFM          | Yelp            | Amazon
         | Re@10   ND@10   | Re@10   ND@10   | Re@10   ND@10   | Re@10   ND@10
MF       | 0.6448  0.3534  | 0.7295  0.5296  | 0.5852  0.3560  | 0.4541  0.2473
NeuMF    | 0.6660  0.3752  | 0.7338  0.5555  | 0.6783  0.4225  | 0.4584  0.2513
LGRec    | 0.6893  0.4005  | 0.7683  0.5876  | 0.6785  0.4085  | 0.4645  0.2502
HINSnoga | 0.6776  0.4020  | 0.7699  0.5755  | 0.6478  0.3814  | 0.4541  0.2443
HINSnola | 0.6893  0.4036  | 0.7875  0.6060  | 0.6872  0.4227  | 0.4754  0.2589
HINS     | 0.6903  0.4048  | 0.7880  0.6068  | 0.6891  0.4243  | 0.4816  0.2654

3.4 Conclusion

In this paper, we propose a new deep neural network model for recommendation, which integrates an attention mechanism into the global information of a heterogeneous information network and combines it with local information. The model uses a collaborative attention mechanism to represent the node relationships of users (items) according to the adjacent items (users). In addition, our model optimizes a multi-label classification problem to learn the relationship representation. Combining these two factors, we design a unified optimization objective for recommendation.


Acknowledgment. This work was supported in part by the Shandong Provincial Natural Science Foundation (No. ZR2020LZH004), the Independent Innovation Team Project of Jinan City (No. 2019GXRC013), the Science and Technology Program of University of Jinan (No. XKY1803).

References 1. Shi, C., et al.: Heterogeneous information network embedding for recommendation. IEEE Trans. Knowl. Data Eng. 31, 357–370 (2017) 2. Sun, Y., Han, J., Yan, X., et al.: PathSim: meta path-based top-k similarity search in heterogeneous information networks. Proc. VLDB Endow. 4(11), 992–1003 (2011) 3. Hu, B., Shi, C., Zhao, W.X., et al.: Local and global information fusion for top-n recommendation in heterogeneous information network. In: The 27th ACM International Conference. ACM (2018) 4. Sun, Y., Han, J.: Mining heterogeneous information networks: a structural analysis approach. ACM SIGKDD Explor. Newsl. 14(2), 20–28 (2013) 5. Yu, X., et al.: Recommendation in heterogeneous information networks with implicit user feedback. In: Proceedings of the 7th ACM Conference on Recommender Systems, pp. 347– 350 (2013) 6. Shi, C., Zhang, Z., Ji, Y., et al.: SemRec: a personalized semantic recommendation method based on weighted heterogeneous information networks. World Wide Web 22(1), 153–184 (2019) 7. Xie, S., Sun, C., Huang, J., et al.: Rethinking spatiotemporal feature learning: speedaccuracy trade-offs in video classification (2017) 8. Wang, J., Yu, L., Zhang, W., et al.: IRGAN: a minimax game for unifying generative and discriminative information retrieval models. In: 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2017. ACM (2017) 9. Lu, C.T., He, L., Hao, D., et al.: Learning from multi-view multi-way data via structural factorization machines. In: WWW (2018) 10. He, X., Liao, L., Zhang, H., et al.: Neural collaborative filtering. In: WWW, pp. 173–182 (2017) 11. Chong, C., Min, Z., Liu, Y., et al.: Neural attentional rating regression with review-level explanations. In: The 2018 World Wide Web Conference (2018) 12. Xiao, J., Ye, H., He, X., et al.: Attentional factorization machines: learning the weight of feature inter-actions via attention networks (2017) 13. Loyola, P., Liu, C., Hirate, Y.: Modeling user session and intent with an attention-based encoder-decoder architecture. In: Eleventh ACM Conference, pp. 147–151. ACM (2017) 14. Seo, S., Huang, J., Yang, H., et al.: Interpretable convolutional neural networks with dual local andglobal attention for review rating prediction. In: Proceedings of the Eleventh ACM Conference on Recommender Systems, pp. 297–305 (2017) 15. Ho, P.D., Raftery, A.E., Handcock, M.S.: Latent space approaches to social network analysis. Publ. Am. Stat. Assoc. 97, 1090–1098 (2002) 16. Tu, C., Zhang, W., Liu, Z., et al.: Max-margin DeepWalk: discriminative learning of network representation. In: Proceedings of the 25th International Joint Conference on Artificial Intelligence, pp. 3889–3895 (2016) 17. Grover, A., Leskovec, J.: Node2vec: scalable feature learning for networks. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 855–864 (2016)


18. Cao, S., Lu, W., Xu, Q.: GraRep: learning graph representations with global structural information. In: Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pp. 891–900 (2015) 19. Chang, S., Han, W., Tang, J., et al.: Heterogeneous network embedding via deep architectures. In: Proceedings of the 21 the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 119–128 (2015)

DAMNet: Hyperspectral Image Classification Method Based on Dual Attention Mechanism and Mix Convolutional Neural Network Zengzhao Sun1, Huamei Xin1, Bo Zhang1, Jinwen Ren1, Zhenye Luan1, Hongzhen Li1, Xingrun Wang2, and Jingjing Wang1(&) 1

College of Physics and Electronic Science, Shandong Normal University, Jinan 250358, China [email protected] 2 International Institute of Next Generation Internet, Macao University of Science and Technology, Taipa, China

Abstract. Deep learning approaches have been broadly used in the study of hyperspectral images (HSI) with promising results in recent years. However, network complexity and feature extraction still have scope for improvement. We therefore propose a hybrid convolutional neural network (DAMNet) based on a 3D dual attention mechanism (spectral and spatial attention) to fully capture spectral and spatial features and improve classification precision. A spectral-spatial 3D-CNN is followed by a 2D-CNN: the 3D-CNN encodes joint spatial-spectral features of a stack of images, while the 2D-CNN learns more abstract spatial representations on top of it. Compared with using a 3D-CNN alone, the mixed CNN streamlines the model. Incorporating the 3D dual attention mechanism module into the network enhances the representation of image features in various dimensions, thereby increasing classification precision. This article improves the classification accuracy on common datasets with the DAMNet architecture, while also validating the superiority of the attention mechanism on the Xuzhou dataset.

Keywords: Convolutional neural network · Dual attention mechanism · Hyperspectral image classification

1 Introduction

As a three-dimensional data cube with two spatial dimensions and one spectral dimension, hyperspectral images contain a wealth of information, both spatial and spectral, and because of this characteristic, HSI classification is of practical significance in many fields. Examples include water pollution detection, crop growth prediction, land cover mapping, and contactless detection. HSI classification is a basic task that assigns a label to each pixel in the image, so pixel-level feature extraction is extremely important for classification.



In the initial research on HSI classification, the primary aim was to obtain spectral and spatial features for classification from prior knowledge. Traditional methods using handcrafted features do not effectively extract critical yet reliable information for HSI classification. At the beginning, many traditional machine learning classifiers were chosen to accomplish the task, such as decision trees [1], random forests [2] and support vector machines [3]. Nonetheless, none of these classifiers can achieve satisfactory classification results, and they require extensive background knowledge and experience in HSI, as the extraction of features is complicated and their integrity is not easily guaranteed. Recently, deep learning approaches have been widely used for HSI classification tasks with good results. Deep learning-based methods take advantage of the powerful feature learning capability of neural networks to achieve desirable classification results; examples include 2D-CNN [4], 3D-CNN [5], SSRN [6], DRNN [7] and other methods. Since attention mechanisms can focus on critical information, many visual-attention-based approaches have been developed in the past decade to obtain classification maps for HSI. Although the aforementioned approaches have significant benefits over conventional machine learning in the extraction of hyperspectral features, using neural networks to obtain spatial and spectral information remains a major challenge [8–11]. First, some networks are extremely complex, with a massive number of parameters; as a result, training them with the finite labeled HSI data is difficult. Second, because of the complexity of HSI, important attributes of each dimension are essential for categorization in addition to global information. To address these issues, this paper presents DAMNet for HSI categorization based on a dual-attention mechanism. Our major contributions are summarized below. (1) For spectral-spatial feature learning, we use a hybrid convolutional neural network, designed around the attributes of HSI to ensure that the network performs well. (2) A dual attention mechanism, consisting of a spectral attention mechanism and a spatial attention mechanism, is introduced, with which the network learns to focus on key information while reducing the effect of noise. (3) The method also has the advantage of being quick to train and test. In this paper, we put DAMNet to the test for HSI classification on common datasets. Extensive experiments show that our network outperforms other cutting-edge methods.

2 Materials and Methods

In this section, we give a detailed introduction to our model. It consists of two parts: the dual-attention mechanism, which suppresses noise interference, and the fusion of the dual-attention mechanism with a convolutional neural network, which enhances the accuracy of HSI classification. Figure 1 illustrates the framework of our DAMNet.


The dual attention mechanism is a straightforward and efficient attention block for feed-forward CNNs. Our goal is to increase representation power by concentrating on significant features and inhibiting unimportant ones using attentional mechanisms. The spectral attention module concentrates on 'what' is significant in an input hyperspectral image. Average-pooling is a popular method for aggregating spatial information, and combining max-pooling with average-pooling increased the network's expressiveness in previous studies. The module aggregates the spatial information of the feature map through pooling operations. Two different context descriptors $F_{avg}^{c}$ and $F_{max}^{c}$ are generated and then sent to a network composed of an MLP, which produces the spectral attention map $M_c$. In a nutshell, the formula for spectral attention is:

$$M_c(F) = \mathrm{Sigmoid}\left(\mathrm{MLP}(\mathrm{AvgPool}(F)) + \mathrm{MLP}(\mathrm{MaxPool}(F))\right) = \mathrm{Sigmoid}\left(W_1(W_0(F_{avg}^{c})) + W_1(W_0(F_{max}^{c}))\right) \qquad (1)$$

where $F \in \mathbb{R}^{C \times H \times W}$ denotes an intermediate feature map, $M_c \in \mathbb{R}^{C \times 1 \times 1}$ is a 1D spectral attention map, and $W_0 \in \mathbb{R}^{C/r \times C}$, $W_1 \in \mathbb{R}^{C \times C/r}$ are the MLP weights. The spatial attention module focuses on 'where' the informative part is. We use two pooling operations to aggregate spectral information from the feature map, producing two 2-dimensional maps $F_{avg}^{s} \in \mathbb{R}^{1 \times H \times W}$ and $F_{max}^{s} \in \mathbb{R}^{1 \times H \times W}$. They are then concatenated and mixed through a standard convolution layer to produce our 2-dimensional spatial attention map. The foregoing can be represented as:

$$M_s(F) = \mathrm{Sigmoid}\left(\mathrm{Conv}^{7\times 7}([\mathrm{AvgPool}(F); \mathrm{MaxPool}(F)])\right) = \mathrm{Sigmoid}\left(\mathrm{Conv}^{7\times 7}([F_{avg}^{s}; F_{max}^{s}])\right) \qquad (2)$$

where $\mathrm{Conv}^{7\times 7}$ indicates a convolution operation with a filter size of $7 \times 7$.
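A minimal PyTorch sketch of the spectral attention of Eq. (1) and the spatial attention of Eq. (2); the reduction ratio r, the shared MLP and the way the two maps are applied sequentially to the feature tensor follow a CBAM-style layout and are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    # Sketch of Eqs. (1)-(2): channel (spectral) attention followed by spatial attention.
    def __init__(self, channels, r=8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(channels, channels // r), nn.ReLU(),
                                 nn.Linear(channels // r, channels))        # W_0, W_1
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)               # Conv 7x7

    def forward(self, F_in):                        # F_in: (B, C, H, W)
        b, c, h, w = F_in.shape
        avg = F_in.mean(dim=(2, 3))                 # F_avg^c
        mx = F_in.amax(dim=(2, 3))                  # F_max^c
        M_c = torch.sigmoid(self.mlp(avg) + self.mlp(mx)).view(b, c, 1, 1)  # Eq. (1)
        F_c = F_in * M_c
        s_avg = F_c.mean(dim=1, keepdim=True)       # F_avg^s
        s_max = F_c.amax(dim=1, keepdim=True)       # F_max^s
        M_s = torch.sigmoid(self.conv(torch.cat([s_avg, s_max], dim=1)))    # Eq. (2)
        return F_c * M_s
```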

$a_{il} > 0$ if there is an edge between agents $i$ and $l$, otherwise $a_{il} = 0$. Let $D_i = \sum_{l=1}^{N} a_{il}$ be the degree of individual $i$; then the corresponding Laplacian matrix is defined as $L_G = D - A$, where $D = \mathrm{diag}\{D_1, \ldots, D_N\}$. If there are interconnected paths between any two different agents, the undirected graph G is called connected. Furthermore, if G is connected, all eigenvalues of matrix $L_G$ satisfy $0 = \lambda_1 < \lambda_2 \le \ldots \le \lambda_N$ [16].


3 Problem Statement

In this manuscript, N individuals in an undirected network are addressed, and the model of the i-th node is assumed to be described by

$$x_i(k+1) = A x_i(k) + B u_i(k), \qquad i = 1, \ldots, N, \qquad (1)$$

where $x_i(k) \in \mathbb{R}^n$ and $u_i(k) \in \mathbb{R}^m$ are the state and input, and $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times m}$ are system matrices. Suppose that a time-varying delay $d(k)$ postpones the relative state information; then $[x_j(k-d(k)) - x_i(k-d(k))]$ can be applied to design the protocol at time $k$. According to the above analysis, the protocol

$$u_i(k) = H \sum_{l=1}^{N} a_{il}\left[x_l(k-d(k)) - x_i(k-d(k))\right] \qquad (2)$$

can be devised, where $H \in \mathbb{R}^{m \times n}$ is a constant gain to be determined. Now, we give an exact definition of consensus.

Definition 3.1: If $\lim_{k \to \infty} \|x_l(k) - x_i(k)\| = 0$ holds for all $i, l = 1, \ldots, N$, system (1) is said to achieve consensus via protocol (2).

From the definition above and reference [13], we know that the stable poles of the agent dynamics have no effect on consensus. Additionally, to ensure that the convergence value of consensus does not tend to infinity exponentially, we list the following assumption.

Assumption 3.1: All eigenvalues of matrix A are located on the unit circle, in other words, $|\lambda_i(A)| = 1$ for $i = 1, 2, \ldots, n$, and $(A, B)$ is controllable.

Problem Statement: The purpose of this paper is to establish appropriate conditions that enable the multi-agent model (1) to reach consensus under the devised protocol (2).
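A minimal sketch of one step of dynamics (1) under protocol (2) with a time-varying delay; the graph, the system matrices and the gain H are placeholders to be supplied, and the buffering of past states is an implementation assumption.

```python
import numpy as np

def protocol_step(X_hist, k, A, B, H, adjacency, d_k):
    """One step of x_i(k+1) = A x_i(k) + B u_i(k) with u_i(k) from protocol (2).

    X_hist[k] is an (N, n) array of agent states at time k; d_k = d(k) is the delay;
    A is (n, n), B is (n, m), H is (m, n)."""
    X_now = X_hist[k]
    X_del = X_hist[max(k - d_k, 0)]                       # delayed states x(k - d(k))
    N = X_now.shape[0]
    X_next = np.empty_like(X_now)
    for i in range(N):
        rel = sum(adjacency[i, l] * (X_del[l] - X_del[i]) for l in range(N))
        u_i = H @ rel                                     # protocol (2)
        X_next[i] = A @ X_now[i] + B @ u_i
    return X_next
```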

4 Main Results

Before giving the primary results, two valuable lemmas are presented first.

Lemma 4.1: [14, 17] Under Assumption 3.1 and for any $\gamma \in (0, 1)$, the discrete-time parametric algebraic Riccati equation

$$A^{T} Q A + \gamma Q = A^{T} Q B (I + B^{T} Q B)^{-1} B^{T} Q A + Q \qquad (3)$$

admits a unique positive definite solution $Q$. Moreover, it holds that $H_\gamma^{T} R H_\gamma \le \frac{n\gamma}{(1-\gamma)^{n-1}} Q$ and $H_\gamma^{T} M H_\gamma \le \gamma^{3/2} Q$, where $M = B^{T} Q B$, $R = I + B^{T} Q B$ and $H_\gamma = R^{-1} B^{T} Q A$.


Lemma 4.2: [18] Suppose that $0 < \lambda_2 \le \ldots \le \lambda_N$ are $N-1$ constants. Then, for any $h > \frac{1}{2\lambda_2}$, the minimum value of $\max_{i=2,\ldots,N} \frac{h^2 \lambda_i^2}{2h\lambda_i - 1}$ is $\frac{(\lambda_2 + \lambda_N)^2}{4\lambda_2 \lambda_N} \triangleq R(\lambda_2, \lambda_N)$. In particular, the minimum value is attained if and only if $h = \frac{\lambda_2 + \lambda_N}{2\lambda_2 \lambda_N}$.

At present, the key results of our paper are offered.

Theorem 4.1: Under Assumption 3.1, suppose that the undirected graph G is connected. Design the control gain $H = h(I + B^{T} Q B)^{-1} B^{T} Q A$ with $h = \frac{\lambda_2 + \lambda_N}{2\lambda_2 \lambda_N}$ in protocol (2), where $Q$ is the solution of equation (3). Then consensus can be realized once the time-varying delay satisfies

$$0 \le d(k) \le d < \frac{1}{\sqrt{n\,R(\lambda_2, \lambda_N)\,(2n - 2\,\mathrm{tr}(A))}},$$

where $d \in \mathbb{Z}^{+}$ is some positive integer.

Proof: As the undirected graph G is connected, it follows from the second part that all eigenvalues of matrix $L_G$ satisfy $0 = \lambda_1 < \lambda_2 \le \ldots \le \lambda_N$. Applying protocol (2) to the agent dynamics (1), we get

$$x_i(k+1) = A x_i(k) + B H \sum_{l=1}^{N} a_{il}\left[x_l(k-d(k)) - x_i(k-d(k))\right].$$

Stacking the states $x_i(k)$ into the vector $x(k) = [x_1^{T}(k), \ldots, x_N^{T}(k)]^{T}$, the overall dynamics can be written as $x(k+1) = [I_N \otimes A]x(k) - [L_G \otimes BH]x(k-d(k))$. Using $\bar{X}(k) = \frac{1}{N}\sum_{i=1}^{N} x_i(k) = \frac{1}{N}[\mathbf{1}_N^{T} \otimes I_n]x(k)$ to represent the average state of all the agents at time $k$, we have $\bar{X}(k+1) = A\bar{X}(k)$. Denote the error state as $\zeta_i(k) = x_i(k) - \bar{X}(k)$ and construct the vector $\zeta(k) = [\zeta_1^{T}(k), \ldots, \zeta_N^{T}(k)]^{T}$; then the equation $\zeta(k+1) = [I_N \otimes A]\zeta(k) - [L_G \otimes BH]\zeta(k-d(k))$ can be obtained. Furthermore, since matrix $L_G$ is symmetric, there exists a unitary matrix $\Phi^{T} = \Phi^{-1} \in \mathbb{R}^{N \times N}$ such that $\Phi^{T} L_G \Phi = \mathrm{diag}\{0, \lambda_2, \ldots, \lambda_N\}$, where the first row of matrix $\Phi^{-1}$ is $\frac{1}{\sqrt{N}}\mathbf{1}^{T}$. Denote the state $\tilde{\zeta}(k) = [\Phi^{T} \otimes I_n]\zeta(k) = [\tilde{\zeta}_1^{T}(k), \ldots, \tilde{\zeta}_N^{T}(k)]^{T}$. Since matrix $\Phi$ is nonsingular, $\lim_{k\to\infty} \zeta(k) = 0$ is equivalent to $\lim_{k\to\infty} \tilde{\zeta}(k) = 0$. Additionally, due to $\tilde{\zeta}_1(k) = \frac{1}{\sqrt{N}}\sum_{i=1}^{N}\zeta_i(k) \equiv 0$, consensus is achieved if the following delayed subsystems $\tilde{\zeta}_i(k+1) = [A - \sigma_i B H_\gamma]\tilde{\zeta}_i(k) + \sigma_i B H_\gamma \Delta\tilde{\zeta}_i(k)$, $i = 2, \ldots, N$, are stable simultaneously, where $H_\gamma = (I + B^{T} Q B)^{-1} B^{T} Q A$, $\sigma_i = h\lambda_i$ and $\Delta\tilde{\zeta}_i(k) = \tilde{\zeta}_i(k) - \tilde{\zeta}_i(k-d(k))$.


Taking the Lyapunov function as V (ζ˜i (k)) = ζ˜iT (k)Qζ˜i (k) and applying Lemma 4.1, we can obtain ΔV (ζ˜i (k)) = V (ζ˜i (k + 1)) − V (ζ˜i (k)) T T T 2 T T T T = − γ ζ˜i (k)Qζ˜i (k) + (1 − 2σi )ζ˜i (k)Hγ RHγ ζ˜i (k) + σi ζ˜i (k)Hγ M Hγ ζ˜i (k) + 2σi Δζ˜i (k)Hγ RHγ ζ˜i (k)

−2σi Δζ˜i (k)Hγ M Hγ ζ˜i (k)+σi Δζ˜i (k)Hγ M Hγ Δζ˜i (k). 2

T

T

2

T

T

T  − γ ζ˜i (k)P ζ˜i (k) + W (ζ˜i (k)).

˜ − d(k)), it follows Making use of ζ˜i (k) = Δζ˜i (k) + ζ(k W (ζ˜i (k)) ≤ [1 +

(1 − σi )2 li

T T T T ]Δζ˜i (k)Hγ RHγ Δζ˜i (k) + (1 − 2σi + li )ζ˜i (k − d(k))Hγ RHγ ζ˜i (k − d(k))

2 T T + σi ζ˜i (k − d(k))Hγ M Hγ ζ˜i (k − d(k)),

where li > 0 is an arbitrary constant. Take li = 2σi − 1 > 0, then it yields 2 σ2 i) 1 + (1−σ = 2σii−1 . Thus, li ΔV (ζ˜i (k)) ≤ − γ ζ˜i (k)Qζ˜i (k) + T

σi2 T T 2 T T Δζ˜i (k)Hγ RHγ Δζ˜i (k) + σi ζ˜i (k − d)Hγ M Hγ ζ˜i (k − d). 2σi − 1

In addition, from the definition of Δζ˜i (k), it yields 

k−d(k)

Δζ˜i (k) =



k−d(k)

[ζ˜i (s + 1) − ζ˜i (s)] =

s=k−1

[(A − I)ζ˜i (s) − σi BHγ ζ˜i (s − d(s))].

s=k−1

Then, it yields from reference [19] that Δζ˜i (k)QΔζ˜i (k) T



k−d(k)

≤d(k)

s=k−1

1 2 T T T [(1 + r)ζ˜i (s)(A − I) Q(A − I)ζ˜i (s) + (1 + )σi ζ˜i (s − d(s))Hγ M Hγ ζ˜i (s − d(s))], r

(4) where r > 0 is an arbitrary constant. To use the Razumikhin stability Theorem [20], assume V (ζ˜i (k − ν)) < κV (ζ˜i (k)) for ν ∈ (0, 2d], where κ > 1 is a constant to be designed. It yields from (4), Lemma 4.1 and reference [13] that   ΔV (ζ˜i (k)) ≤ −γ 1 − κ

σi2 2σi − 1

×



nd2 (1 − γ)n−1

(1 + r)f (α, γ) + (1 +

From Lemma 4.2, function maxi=2,...,N λ2 +λN 2λ2 λN

. In this case, h= can be changed as

σi2



2 σN

=

σi2 2σi −1

(λ2 +λN )2 4λ22

1 r

2

3

)σi γ 2



2

1

+ σi γ 2

 V (ζi (k)).

(5)

= R(λ2 , λN ) if and only if

 Γ(λ2 , λN ). Thus, inequality (5)

 nd2 (1 + r)f (α, γ) (1 − γ)n−1 

3 1 1 + (1 + )Γ(λ2 , λN )γ 2 + Γ(λ2 , λN )γ 2 V (ζi (k)). r

  ΔV (ζ˜i (k)) ≤ − γ 1 − κ R(λ2 , λN ) ×

(6)


From the condition in the theorem, it follows Δω  1 − nR(λ2 , λN )d2 [2n − 2tr(A)] ∈ (0, 1). When γ → 0 and r → 0, it yields γ < (1 − γ)n → 1 and 1 + r → 1. Let α = γ, then  there exists parameters γ > 0 and  r > 0 such 3 nd2 1 that R(λ2 , λN ) × (1−γ)n−1 (1 + r)f (α, γ) + (1 + r )Γ(λ2 , λN )γ 2 < 1 − Δω 2 8 and Γ(λ2 , λN )γ 2 < Δω 4 . Design κ = 8−Δω , then it is easy to know κ > 1. Δω V (ζ˜i (k)), i = 2, . . . , N. Thus, Consequently, there holds ΔV (ζ˜i (k)) ≤ −γ × 8−Δω the consensus is assured from reference [20]. 2 For a special case, the following Corollary can be gained if all poles of agent dynamics are 1. 1

Corollary 4.1: Under the conditions in Theorem 4.1 and assuming all eigenvalues of matrix A are located at the complex plane of z = 1, consensus can be derived in the event of 0 ≤ d(k) ≤ d, where d ∈ Z+ is an any integer which is bounded. Proof: From the condition that all eigenvalues of matrix A are 1, it is easy to know 2n − 2tr(A) = 0. Let α = 12 (1 − γ)n > 0, then yields from [13] that

6n γ f (α, γ) = (1−γ) 2n . Follow the procedure of Theorem 4.1 and take r = 1, then inequality (6)   2 3  2 1 6n γ 2 nd can be changed as ΔV (ζ˜i (k)) ≤ −γ 1 − κγ 2 2R(λ2 , λN ) × (1−γ) n−1 (1−γ)2n + 

Γ(λ2 , λN )γ + Γ(λ2 , λN ) V (ζi (k)). For any time-varying delay 0 ≤ d(k) ≤ d ∈  1 Z+ , let κ = 2, then there exist parameter γ ∈ (0, 1) such that κγ 2 2R(λ2 , λN ) ×  2 3  2 6n γ 2 nd 1 ˜ (1−γ)n−1 (1−γ)2n + Γ(λ2 , λN )γ + Γ(λ2 , λN ) < 2 . Thus, it follows ΔV (ζi (k)) ≤ − γ2 V (ζ˜i (k)) for i = 2, . . . , N . Then the consensus can be ensured from [20], the proof is completed. 2 2

5 Numerical Example

In this part, we will expound and explain the availability of the proposed consequences by a simulation example. Suppose that there are four agents in an undirected network G. In addition, the adjacency matrix A and the Laplacian matrix $L_G$ are characterised as

$$A = \begin{bmatrix} 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 0 & 1 & 1 & 0 \end{bmatrix} \quad \text{and} \quad L_G = \begin{bmatrix} 2 & -1 & -1 & 0 \\ -1 & 3 & -1 & -1 \\ -1 & -1 & 3 & -1 \\ 0 & -1 & -1 & 2 \end{bmatrix}.$$

Clearly, G is connected and the eigenvalues of $L_G$ are $\lambda_1 = 0$, $\lambda_2 = 2$, $\lambda_3 = 2$ and $\lambda_4 = 4$. The agent model of the i-th individual is described as

$$x_i(k+1) = \begin{bmatrix} 0 & 1 \\ -1 & 1.872 \end{bmatrix} x_i(k) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u_i(k), \qquad i = 1, 2, 3, 4.$$

Distributed Consensus for Discrete Multi-agent Systems 40

1195

40

30

30

20

20

10

10

0

0

-10

-10

-20

-20

-30

-30

-40

-40

0

100

200

300

400

500

600

time t

700

800

0

100

200

300

400

500

600

700

800

time t

Fig. 1. Error states 1

Fig. 2. Error states 2

It is obvious that the above system is controllable and that its two poles, i.e., $0.936 \pm 0.352\mathrm{i}$ with $\mathrm{i}^2 = -1$, are on the unit circle. By calculation, the solution of Eq. (3) is

$$Q = \begin{bmatrix} \dfrac{2\gamma - \gamma^2}{1-\gamma} & \dfrac{-1.872\gamma}{1-\gamma} \\ \dfrac{-1.872\gamma}{1-\gamma} & \dfrac{2\gamma - \gamma^2}{(1-\gamma)^2} \end{bmatrix}.$$

From Theorem 4.1, the allowable condition for consensus can be calculated as $d(k) \le d < 1.3975$. Assume that the time-varying delay in protocol (2) is $d(k) = \lfloor \sin^2(k) \rfloor$, where $\lfloor \sin^2(k) \rfloor$ stands for the integer part of $\sin^2(k)$. From the proof of Theorem 4.1, select the parameters $h = \frac{\lambda_2 + \lambda_N}{2\lambda_2\lambda_N} = \frac{3}{8}$ and $\gamma = 0.009$; then Fig. 1 and Fig. 2 display that the error states tend to zero asymptotically, where $\bar{X}_i = \frac{1}{4}x_{1i} + \frac{1}{4}x_{2i} + \frac{1}{4}x_{3i} + \frac{1}{4}x_{4i}$ for $i = 1, 2$.
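The example can be reproduced with a short simulation such as the sketch below, which uses the closed-form Q with γ = 0.009, the gain h = 3/8 and the delay d(k) = ⌊sin²(k)⌋; the initial states and the simulation horizon are arbitrary choices, and the script is an illustration rather than the authors' code.

```python
import numpy as np

gamma, h = 0.009, 3.0 / 8.0
A = np.array([[0.0, 1.0], [-1.0, 1.872]])
B = np.array([[0.0], [1.0]])
adjacency = np.array([[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 0]], dtype=float)

# Closed-form solution of the Riccati equation (3) for this example.
Q = np.array([[(2 * gamma - gamma**2) / (1 - gamma), -1.872 * gamma / (1 - gamma)],
              [-1.872 * gamma / (1 - gamma), (2 * gamma - gamma**2) / (1 - gamma) ** 2]])
H = h * np.linalg.inv(np.eye(1) + B.T @ Q @ B) @ B.T @ Q @ A      # control gain in (2)

N, n, T = 4, 2, 800
X = np.zeros((T + 1, N, n))
X[0] = np.random.uniform(-20, 20, size=(N, n))                    # arbitrary initial states
for k in range(T):
    d_k = int(np.sin(k) ** 2)                                     # d(k) = floor(sin^2(k))
    X_del = X[max(k - d_k, 0)]
    for i in range(N):
        rel = sum(adjacency[i, l] * (X_del[l] - X_del[i]) for l in range(N))
        X[k + 1, i] = A @ X[k, i] + (B @ (H @ rel)).ravel()

# The error states x_i(k) - mean_j x_j(k) should decay to zero, as in Figs. 1 and 2.
err = X - X.mean(axis=1, keepdims=True)
```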

6 Conclusion

This paper addresses the consensus problem for discrete-time multi-agent dynamics that are critically unstable and subject to transmission delay. With a connected undirected graph, consensus criteria linked to the communication delay, the network topology and the agent structure are obtained. Future research will extend the presented results to systems whose eigenvalues are outside the unit circle or whose transmission channels are affected by noise.

References 1. Jeon, I., Lee, J., Tahk, M.: Homing guidance law for cooperative attack of multiple missiles. J. Guid. Control Dyn. 33(1), 275–280 (2010) 2. Lee, D., Spong, M.W.: Stable flocking of multiple inertial agents on balanced graphs. IEEE Trans. Autom. Control 52(8), 1469–1475 (2007) 3. Ren, W., Beard, R.W.: Consensus seeking in multi-agent systems under dynamically changing interaction topologies. IEEE Trans. Autom. Control 50(5), 655–661 (2005)


4. He, W., Cao, J.: Consensus control for high-order multiagent systems. IET Control Theor. Appl. 5(1), 231–238 (2011) 5. You, K., Xie, L.: Network topology and communication data rate for consensusability of discrete-tie multi-agent systems. IEEE Trans. Autom. Control 56(10), 2062–2075 (2011) 6. Ma, C., Zhang, J.: Necessary and sufficient conditions for consensusability of linear multi-agent systems. IEEE Trans. Autom. Control 55(5), 1263–1268 (2010). Neurocomputing 359, 122–129 (2019) 7. Papachristodoulou, A., Jadbabaie, A., M¨ unz, U.: Effects of delay in multi-agent consensus and oscillator synchronization. IEEE Trans. Autom. Control 55(6), 1471–1477 (2020). ISA Trans. 71, 10–20 (2017) 8. Tian, Y., Liu, C.: Consensus of multi-agent systems with diverse input and communication delays. IEEE Trans. Autom. Control 53(9), 2122–2128 (2008) 9. Xu, J., Zhang, H., Xie, L.: Input delay margin for consensusability of multi-agent systems. Automatica 49(6), 1816–1820 (2013) 10. Wang, Z., Zhang, H., Song, X., Zhang, H.: Consensus problems for discrete-time agents with communication delay. Int. J. Control Autom. Syst. 15(4), 1515–1523 (2017). https://doi.org/10.1007/s12555-015-0446-8 11. Lin, P., Jia, Y.: Consensus of second-order discrete-time multi-agent systems with nonuniform time-delays and dynamically changing topologies. Automatica 45, 2154–2158 (2009) 12. Wang, X., Saberi, A., Stoovogel, A.A., Grip, H., Yang, T.: Synchronization in a network of identical discretetime agents with uniform constant communication delay. Inter. J. Robust. Nonlin. Control 24(18), 3076–3091 (2013) 13. Zhou, B.: Truncated Predictor Feedback for Time-Delay Systems. Springer, Heidelberg (2014) 14. Wang, Z.: Consensus analysis for high-order discrete-time agents with time-varying delay. Int. J. Syst. Sci. 52(5), 1003–1013 (2021) 15. Xu, J., Zhang, H., Xie, L.: Consensusability of multi-agent systems with delay and packet dropout under predictor-like protocols. IEEE Trans. Autom. Control 64(8), 3506–3513 (2019) 16. Merris, R.: Laplacian matrices of graphs: a survey. Linear Algeb. Appl. 197, 143– 176 (1994) 17. Zhou, B., Lin, Z.: Parametric Lyapunov equation approach to stabilization of discrete-time systems with input delay and saturation. IEEE Trans. Circ. Syst. I: Regular Pap. 58(11), 2741–2754 (2011) 18. Wang, Z., Zhu, Y., Song, X., Zhang, H.: Distributed consensus for high-order agent dynamics with communication delay. Int. J. Control Autom. Syst. 18(8), 1975–1984 (2020) 19. Gu, K., Kharitonov, V., Chen, J.: Stability of Time-Delay Systems. Birkh¨ auser, Boston (2003) 20. Zhang, S., Chen, M.: A new Razumikhin Theorem for delay difference equations. Comput. Math. Appl. 36(10–12), 405–412 (1998)

Simulation Design of Space Gravitational Wave Detection Satellite Formation Wentao He, Xiande Wu(&), Yuheng Yang, Huibin Gao, Haiyue Yang, Yuqi Sun, and Songjing Ma College of Aerospace and Civil Engineering, Harbin Engineering University, Harbin 150001, China [email protected]

Abstract. Gravitational wave detection is one of the hottest topics in science today. However, due to terrain conditions and the limited detection arm length of ground-based detectors, countries around the world have turned to research on space gravitational wave detection. The cost of launching satellites is high and the space environment is complex, which requires extensive prior simulation experiments. This article designs a ground simulation method for space gravitational wave detection satellite formations: a comprehensive scene display computer is used for simulation control and for displaying the scene results, the dynamics and component simulation computer simulates the space environment and the state of the satellite, and the attitude and orbit control computer verifies the control algorithms of the satellite. The design in this article provides a stable and effective simulation verification platform for space gravitational wave detection.

Keywords: Gravitational wave detection · Satellite formation · Satellite simulation · Gravitational waves

1 Introduction

Scientists from various countries have long been committed to the detection of gravitational waves. On February 11, 2016, the LIGO Scientific Collaboration first reported the gravitational wave signal from the merger of two black holes. However, due to ground noise and terrain conditions, ground-based gravitational wave detection systems can only detect high-frequency gravitational wave signals. Therefore, in order to detect gravitational waves at more frequencies, the use of spacecraft formations to construct detection systems in space has become the first choice of scientists. The space gravitational wave detection systems currently under development include the LISA project led by European scientists [1], and the Tianqin project [2] and the Taiji project [3] led by Chinese scientists. Among the space gravitational wave detection projects under research, the LISA project, the Taiji project and the Tianqin project all use three-satellite formations for gravitational wave detection (see Fig. 1). For the LISA and Taiji projects, the arm length of the space laser interference gravitational wave antenna is 2.5 million km and 3 million



Fig. 1. Satellite formation configuration

km, respectively, and they all use heliocentric orbits, as shown in Fig. 2. Another Chinese project, the Tianqin Project, uses an arm length of 170,000 km and a geocentric orbit.

Fig. 2. Heliocentric orbit

In space gravitational wave detection, different arm lengths have different sensitive frequency bands for the detection wave source. Therefore, the detection targets of the LISA and Taiji projects are almost the same, and the arm length of the Tianqin project is much smaller than that of the LISA and Taiji projects, and the detection sensitive frequency band is higher than the former. In addition, in the geocentric orbit, the spacecraft is subject to greater disturbances such as lunar gravity and earth perturbation, while the heliocentric orbit is relatively small and stable. Therefore, the geocentric orbit has a more urgent need for control algorithm simulation. Therefore, this article refers to the plan of the Tianqin project and adopts the geocentric orbit (see Fig. 3).


Fig. 3. Geocentric orbit

The ground simulation experiments are needed to ensure the success rate and correctness of the satellite launch. The purpose of this paper is to design a simulation system for formation of space gravitational wave detection satellite. Through the simulation of space scene, typical satellite components and attitude orbit, the on-board algorithms can be verified.

2 Simulation System Design

The space gravitational wave detection satellite formation simulation system consists of four modules: the comprehensive scene display computer, the wireless router, the dynamics and component simulation computer, and the attitude and orbit control computer (see Fig. 4). Among them, the comprehensive scene display computer is the scene control and result display terminal of the detection simulation system; the user sets the various parameters of the simulation, controls the start and end times of the simulation according to the requirements, and views the results of the simulation. The wireless router simulates satellite-to-earth communication, transmitting the telemetry data of the dynamics and component simulation computer back to the ground or uploading ground control commands. The dynamics and component simulation computer simulates, through computational simulation technology, the attitude and orbit status information of the satellite in orbit and the working condition of typical satellite components. The attitude and orbit control computer simulates the control part of the satellite's on-board computer and plays the role of verifying the control algorithm.


Fig. 4. Composition of simulation system (scene display computer, wireless router, dynamics and component simulation computer, attitude and orbit control computer)

3 Comprehensive Scene Display System Design

The structure of the comprehensive scene display system software, written in the C# language, is shown in Fig. 5. The system software inputs the initial satellite information and mission information through the interactive interface, and receives the calculation results of the dynamics and component simulation computer and the attitude and orbit control computer through the router. The calculation results are displayed in the 3D scene display module, which shows the space gravitational wave detection satellite formation configuration; the information is stored in the database in the data management module and output through data reports. At the same time, data curves are displayed on the display module.

Fig. 5. Integrated scene display program structure diagram


4 Attitude and Orbit Control Simulation System Design

In order to meet the mission requirements of space gravitational wave detection, the international research projects mentioned above all use drag-free satellites, and each satellite contains 1–2 test masses. When the satellites are carrying out scientific experiments, the test mass is required to be in a suspended state subject only to conservative forces, which means that the satellite needs drag-free control: the thrusters of the satellite are used to offset the interference of other forces in space. The structure of the attitude and orbit control simulation system is shown in Fig. 6.

Fig. 6. Structure of control system
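As a rough illustration of the drag-free idea described above (and not of the actual controller in Fig. 6), a one-axis PD law that commands the thrusters to null the measured displacement between the spacecraft and the test mass could look as follows; the gains are placeholders.

```python
def drag_free_thrust(rel_pos, rel_vel, kp=0.2, kd=1.5):
    # One-axis PD command: push the spacecraft toward the free-falling test mass so that
    # the relative displacement measured by the inertial sensor stays near zero.
    return -(kp * rel_pos + kd * rel_vel)
```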

5 Simulation Results

The results of the simulation system for the space gravitational wave detection satellite formation are shown below. First, set the scene simulation time, that is, the start time and end time of the system simulation, as shown in Fig. 7.

Fig. 7. Time information


Then proceed to the setting of the satellite initialization information. Configure the satellite orbit and various parameters of sensors and actuators (Fig. 8).

Fig. 8. Satellite information

The detection task simulation is divided into the scientific experiment stage and the non-scientific experiment stage. The setting of the scientific experiment stage is shown in Fig. 9.

Fig. 9. Task information

After calculation, the satellite formation configuration is shown in Fig. 10. Three satellites form an equilateral triangle detection formation and orbit height is 80,000 km.


Fig. 10. Formation configuration

The final simulation result is shown in the Fig. 11.

Fig. 11. Simulation result


6 Conclusion

The space gravitational wave detection satellite formation simulation system designed in this paper uses a comprehensive scene display computer for simulation scene settings and for displaying the simulation results, simulates the space environment and satellite state through the dynamics and component simulation computer, and verifies the control algorithms through the attitude and orbit control computer. The simulation system reproduces the operating state of the satellite formation during space gravitational wave detection and has good practicability.

References 1. Hammesdahr, A.: LISA mission study overview. Class. Quantum Grav 18(19), 4045–4051 (2001) 2. Luo, J., Chen, L.S., Duan, H.Z., et al.: TianQin: a space-borne gravitational wave detector. Class. Quantum Gravity 33(3), 035010 (2016) 3. Hu, W.R., Wu, Y.L.: The Taiji program in space for gravi-tational wave physics and the nature of gravity. Natl. Sci. Rev. 4(5), 685–686 (2017) 4. Xue-Fei, G., Sheng-Nian, X.U., Ye-Fei, Y., et al.: Laser interferometric gravitational wave detection in space and structure formation in the early universe. Chin. Astron. Astrophys. 39 (4), 411–446 (2015) 5. Vallado, D.: Fundamentals of Astrodynamics and Applications, 4th edn. Fundamentals of Astrodynamics and Applications, UK (2013) 6. Ye, B.B., Zhang, X.F., Zhou, M.Y., et al.: Optimizing orbits for TianQin. Int. J. Mod. Phys. D 28(9), 1950121 (2019) 7. Wang, J.H., Meng, Y.H., Song, J.N., et al.: Review of numerical and hardware-in-the-loop simulation technology of space-borne gravitational wave detection system. Acta Sci. Nat. Univ. Sunyatseni 60(Z1), 233–238 (2021). (in Chinese) 8. Zhang, L.H., Li, M., Gao, Y.X., et al.: The spacecraft system and platform technologies for gravitational wave detection in space. Acta Sci. Natur. Univ. Sunyatseni 60(Z1), 129–137 (2021). (in Chinese)

Scheme Design for Cargo Loading Control System of Cargo Plane Siqing Wu1, Chunyun Dong1, Wenjuan Lei2, Jie Deng1, and Xiaolong Chen1(&) 1

Xi’an Key Laboratory of Intelligent Instrument and Packaging Test, Xidian University, Xi’an 710071, Shaanxi, China [email protected] 2 Qingan Group Co., Ltd., Xi’an 710077, Shaanxi, China

Abstract. Due to the late start of China's civil aviation, research on the cargo loading control system of air freighters is limited, and there is currently no domestic research and development system or technical capability, so research on the related technology of the loading control system is urgently needed. Aiming at the above problems, this paper studies the key technologies of CAN communication network design, software design and control hardware design of the controller, and puts forward a design scheme for the cargo loading system controller of a large cargo aircraft. Firstly, based on an analysis of the composition of the loading control system and the requirements of loading and unloading, the basic functional design requirements of the control system controller are proposed. Secondly, aiming at the characteristics of a large number of CAN communication nodes, a large amount of data transmission and high real-time requirements of the loading control system, a three-layer CAN communication network with sub-regional control is proposed to ensure the communication requirements and operation efficiency of the controller. On this basis, a dual-MCU hardware implementation scheme is proposed, and the application layer software design is based on an embedded real-time operating system. The visualization of PDU status and ULD real-time position is realized using a serial display. The scheme proposed in this paper has significant reference value for the automation and domestication of cargo loading control systems.

Keywords: Automatic loading system · Cargo plane · CAN bus · Controller

1 Introduction

The cargo loading system is an important part of the aircraft cargo hold, and foreign countries began to study the related technology as early as the 1960s [1]. Early cargo loading systems were mechanical: the unit load device (ULD) was pushed into the cargo hold manually. This loading method is reliable but inefficient, and the labor cost is high [2]. As the demand for efficient cargo loading systems became more and more urgent in the air cargo industry, automatic loading systems came into being and spread rapidly. The application of an automatic loading system greatly reduces the loading and unloading time of ULDs, improves efficiency, and reduces labor cost at the same time.



On the domestic front, the development of civil transport aircraft puts forward higher requirements for the cargo loading system. However, because China's civil aviation started late and has had less demand for cargo loading systems, there is a big gap between China and foreign countries in terms of system requirements, system design, system integration verification and after-sales service [3]. Therefore, it is of great significance to develop a domestic cargo loading system for cargo airplanes.

1.1 Composition of Automatic Loading System

The cargo loading system is mainly composed of decentralized power drive units, transmission stop devices and the control subsystem. The power drive units (PDU) are regularly distributed in the cargo hold to provide the driving force for the container unit. The ULD is driven by the PDUs and finally stays in the corresponding loading position, with its movement limited by the stop devices. Figure 1 shows the interior of a cargo hold.

Fig. 1. Interior image of a cargo plane

1. Power Drive Unit. The PDU is the power drive device for the automated cargo loading system and is embedded in the cargo hold floor [4]. According to function, PDUs can be divided into two types: linear PDUs and steering PDUs. The linear PDU is mainly responsible for the front-and-rear movement of the container unit. The steering PDU is responsible for steering large containers into and out of the doors.
2. Limiting device. The main function of the limiting device is to fix and limit the movement of the ULD and ensure that the unit is stably placed in the designated position; it consists of stops, lock blocks and various other blocks [5].
3. Control subsystem. The core of the automatic loading system is the control subsystem. The work staff control the movement of the power drive units through the handle on the control panel and thereby drive the ULD into the effective loading position. The control subsystem is


usually composed of a controller, control panels, the field bus and sensors. The scheme in this paper also includes a local control panel and an outside control panel.

1.2 Research Content

In the cargo loading control system, aiming at the large number of communication nodes in the communication network, a three-layer CAN communication network composed of the main controller, area controller and PDU is proposed. In view of the multiple functional requirements of the system, this paper puts forward a scheme that divides the functional modules into control function and maintain function, and then uses dual MCU in hardware. According to the characteristics of different functional modules, the software adopts different design architectures. Finally, the real-time visualization function of the important components of the system is realized by using the serial display screen.

2 Functions and Requirements

In the scheme of this paper, the cargo control and maintain unit (CCMU) is the core of the cargo loading control system; it uses fieldbus technology to transmit sensor data in real time. Through the analysis and processing of a large amount of data in the system, nearly 100 PDUs in the cargo hold are controlled and, finally, the cargo loading and unloading operation is completed. This system provides a complete automatic solution for cargo loading and greatly improves loading efficiency. Besides, the system status visualization function enables the work staff to understand the operating status of the system at any time during loading and to respond to faults in a more timely manner. The CCMU is the key to realizing the automation of the cargo loading system: it needs to process the information of all sensors and PDUs in the cargo hold and issue warnings in time when the system fails. The CCMU needs to realize the following functions.

1. The CCMU shall be able to receive feedback signals from all sensors (including cargo door status signals) in the system through the CAN bus, and process the signals to switch the corresponding indicator lights on the display and control panel on and off or to display information on the display screen.
2. The CCMU shall be able to power on the display screen after the power is turned on, receive the information sent by the local control panel (LCP), and display the status information of the PDUs and cargo locks on the display control panel or display screen.
3. The CCMU should be able to receive the status signals of the power drive units from the LCP (normal, overcurrent, overheating) through the CAN bus. After analysis and processing, the display information can be sent to the main panel through the bus, with a communication period of 40 ms.
4. In case of a PDU, LCP, bus or power supply line fault, the CCMU shall be able to locate the fault through its algorithm and issue a warning message.
5. The CCMU shall be able to display the real-time position of the ULD on the display screen by analyzing the information in the CAN messages.

1208

S. Wu et al.

3 Architecture Design for Controller

3.1 Design of CAN Communication Network

The CAN bus is one of the most widely used communication buses at present. Its biggest use is to realize communication in a distributed control system. The CAN bus communication protocol is simple and low cost, with high reliability, high real-time performance and ease of use, and it has gradually been applied in a variety of fields [6, 7]. In the communication of the cargo loading control system, the most significant feature is that there are many communication nodes and high real-time requirements, which means that the communication pressure of the system is high and the bus load rate is high. In order to meet the real-time requirement of 40 ms and reduce the bus load rate, the 86 PDUs are divided into 9 groups, and each group is equipped with an area controller (LCP) or an outside control panel (OCP). The PDUs transmit their information to the LCP through the bus, and the LCP then transmits the information to the CCMU. To sum up, the CAN bus is selected as the communication protocol of the cargo loading control system, and the nodes are grouped reasonably to meet the real-time requirements of the system. Figure 2 is the schematic diagram of the CAN communication network.

Fig. 2. Schematic diagram of CAN communication network

3.2 Design of CAN Communication Message

When designing the CAN messages transmitted between nodes, we should not only ensure the integrity of the information but also minimize the space it occupies, in order to reduce the bus load rate per unit time. Therefore, CAN message design is an important part of the design of the cargo loading system. Since the CAN communication network of the loading system includes the PDUs, LCPs and CCMU, it is necessary to design the PDU status message and the interaction messages between the CCMU and the LCP/OCP.

1. Design of the PDU status message. According to the requirement analysis, the transmission cycle of the PDU status message is 40 ms and the number of PDU nodes is 86, so most messages in the CAN communication network are PDU status messages. While ensuring that the information transmitted by the PDU can realize the required functions, redundant information is eliminated and the utilization of the CAN message data frame is improved.
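As an illustration of such a compact frame, the snippet below packs a hypothetical PDU status word (PDU id, running/over-current/over-heat/ULD-on-position flags and a motor-current reading) into four of the eight CAN data bytes; this field layout is invented for illustration and is not the frame format of the actual system.

```python
import struct

def pack_pdu_status(pdu_id, running, overcurrent, overheat, uld_on_position, current_da):
    # Hypothetical layout: byte0 = PDU id, byte1 = packed status bits, bytes2-3 = motor current.
    status = running | (overcurrent << 1) | (overheat << 2) | (uld_on_position << 3)
    return struct.pack("<BBH", pdu_id & 0xFF, status & 0xFF, current_da & 0xFFFF)

# Example: 4 of the 8 CAN data bytes for PDU 17, running with the ULD on position.
frame = pack_pdu_status(pdu_id=17, running=1, overcurrent=0, overheat=0,
                        uld_on_position=1, current_da=125)
```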


2. Design of the interactive messages between CCMU and LCP/OCP. In addition to uploading PDU status messages to the CCMU, the LCP/OCP also needs to upload cargo lock status messages and regional fault messages. Before commanding a PDU to start, the CCMU needs to determine whether the cargo lock bound to the loading position of that PDU is unlocked; therefore, the LCP/OCP needs to upload the cargo lock status message. In addition, when a fault occurs, the LCP/OCP needs to judge, through information processing, whether the fault in its region is caused by the line or by a PDU, and then send the fault message with higher priority to the CCMU.

4 Implementation Scheme for Controller

4.1 Scheme Design of Hardware and Software

As the core part of the cargo loading control system, the cargo control and maintain unit is responsible for processing most of the messages in the CAN communication network of the system. Therefore, the software and hardware design of the CCMU must meet the real-time and reliability requirements of the system. In order to facilitate the maintenance of the CCMU's software and hardware, the functional requirements of Sect. 2 are divided into two categories: control functions and maintain functions. Among the functions that the CCMU needs to realize, those related to control require high real-time performance. In order to ensure the reliability of the system, this paper proposes a dual-MCU control scheme, in which the control MCU is responsible for the start-up and stop control of the PDUs and related functions, and the maintain MCU is responsible for visualization and fault diagnosis. The hardware structure of the CCMU is shown in Fig. 3. The communication interfaces include a CAN interface, RS485 interface, SPI interface, UART interface, JTAG program download interface, etc. In addition, a CAN communication test interface is reserved for subsequent tests and function expansion. After the handshake of the HS1 and HS2 hardware-level signals, the two MCUs communicate through SPI.

Fig. 3. Hardware structure of CCMU

Because of the dual-MCU scheme in hardware, two sets of application programs need to be developed in software. The main characteristic of the functions that the control MCU


needs to realize is the large amount of information to be processed. Therefore, the control MCU uses a main program cycle and multiple interrupts in its program design. The cycle is used to continuously detect the user's behavior, including whether a key is pressed, while receiving CAN messages in interrupt mode ensures the real-time performance of the system to a certain extent. The characteristic of the functions that the maintain MCU needs to realize is the larger number of tasks; therefore, the FreeRTOS embedded real-time operating system is adopted. According to the characteristics of each task, its priority and sleep conditions are set, and the program then executes the tasks according to their priorities. For the control MCU, CAN message processing is a relatively complicated process: the received messages are divided into three types, and the three types of messages are interrelated and interdependent. After receiving a message, the control MCU should first determine which type the CAN message belongs to. If it is a higher-priority fault message, the MCU should respond immediately and issue a warning. If it is a status message of the cargo lock, the status array of the cargo lock needs to be updated. If it is a PDU status message, it is necessary to further determine whether the relevant cargo lock is released. The flow chart of the specific software design is shown in Fig. 4.

PDU status message Cargo lock message PDU fault message

PDU normal Update cargo lock status array

Not Sending overcurrent / overheat Message to maintenance MCU

Yes

Send Message to maintain MCU ULD on position Send Message to maintain MCU

Not

Yes

Send uld location Message to maintenance MCU

Cargo lock unlock?

Not

Yes

Sending PDU isolation message to LCP / OCP

Sending CAN message to pre start the next PDU

Fig. 4. The flow chart for software design of control MCU

The maintenance MCU adopts FreeRTOS embedded real-time operating system. In the running environment based on FreeRTOS, the priority of tasks determines the order


of using the CPU. Therefore, in the design of the application program, the priority of tasks should be determined according to the characteristics of the tasks.

4.2 Scheme and Implementation of System Visualization

System visualization is a very clear way of human-computer communication, which is convenient for operators to understand the operation status of the system timely and accurately, and make the fastest response to system abnormalities. In this paper, the visualization scheme uses the serial screen as the display device. The serial screen is a kind of LCD screen with a serial controller. By receiving the instructions sent by the controller, all the operations of drawing on the LCD are completed [8]. The design of the display control panel is shown in Fig. 5.

Fig. 5. Schematic diagram of display and control panel

The visualization content of the system mainly includes PDU status and real-time position of ULD. Figure 6 shows the graphic interface design and real effect for the visualization scheme of the serial screen. The upper part shows the status of the PDU, and the lower part shows the real-time position of the ULD.

Fig. 6. Graphic interface design of display screen and actual effect


5 Conclusion

In this paper, an automation scheme for the cargo loading system is proposed, and the visualization function of the system is realized. In order to reasonably control the 86 PDUs in the cargo hold and ensure the real-time performance of the system, the highly reliable CAN bus is used for communication among the nodes, and the nodes are controlled by groups and regions. The control and maintenance computer of the loading system adopts a dual-MCU scheme, and the visualization system adopts a serial display screen, which is friendly to the MCU. Finally, the software and hardware design of the system control and maintenance computer is described in detail, and the visualization of the PDU status and the ULD real-time position is realized. The system design in this paper is feasible and has reference significance for the localization of cargo loading systems.

References 1. Anonymous: Transportation Security Administration. TSA Announces Key Milestone in Cargo Screening on Passenger Aircraft, Bioterrorism Week (2010) 2. IATA: IATA ULD Technical Manual, 19th edn. Montreal, Canada (2010) 3. Jia, T., Nan, M., Zhu, P.: Summary of loading technology of containerized cargo for civil transport aircraft. Aviat. Sci. Technol. 29(05), 1–7 (2018) 4. Berlowitz, I.: Passenger airplane conversion to freighter. In: 29th Congress of the International Council of the Aeronautical Sciences (2014) 5. AIRBUS Product Safety Department. https://safetyfirst.airbus.com/a320-family-cargocontainers-pallets-movement/. Accessed 25 May 2021 6. Yushev, A., Barghash, M., Nguyen, M.P.: An experimental study of internet-grade end-to-end communication security for CAN networks. IFAC PapersOnLine 51(6) (2018) 7. van Osch, M., Smolka, S.A.: Finite-state analysis of the CAN bus protocol. In: Sixth IEEE International Symposium on High Assurance Systems Engineering. Special Topic: Impact of Networking, pp. 42–52 (2001) 8. DACAI Tech. http://www.gz-dc.com/article/id/735.html. Accessed 5 June 2021

Chinese Word Segmentation of Ideological and Political Education Based on Unsupervised Learning

Xinghai Yang1(&), Wenjing Zang2, Shun Meng1, Jiafeng Liu1, and Yulin Zhang2(&)

1 College of Information Science and Technology, Qingdao University of Science and Technology, Qingdao 266061, China [email protected]
2 School of Information Science and Engineering, University of Jinan, Jinan 250022, China [email protected]

Abstract. This paper proposes an unsupervised Chinese word segmentation algorithm for ideological and political education. The algorithm consists of two parts: a language-model generation algorithm and the Viterbi algorithm. The language-model generation algorithm estimates conditional probabilities from a large text collection by counting co-occurrences between adjacent characters, yielding a character-level N-gram language model. The Viterbi algorithm, based on dynamic programming, then uses this character-level language model to find the optimal segmentation path and thereby improve segmentation accuracy, completing Chinese word segmentation supported only by large unlabeled texts. Experiments show that the proposed algorithm has a good recognition rate for vocabulary in the field of ideological and political education. Being unsupervised, it saves substantial labeling cost and meets the word segmentation needs of this field.

Keywords: Chinese word segmentation · Natural language processing · Language model · Unsupervised learning



1 Introduction

This paper focuses on an unsupervised Chinese word segmentation algorithm for ideological and political education. The N-gram language model can capture language usage habits, and the Viterbi algorithm can compute the optimal segmentation path from such a model. This paper therefore combines the two: taking the characteristics of unsupervised learning into account, a character-level N-gram language model is established, and the text is then segmented with the Viterbi algorithm.

Reference [1] studies Chinese word segmentation based on statistics. Reference [2] uses an agricultural corpus and achieves unsupervised Chinese word segmentation through word combination based on word-frequency deviation. Reference [3] uses Long Short-Term Memory (LSTM) to build a two-way LSTM model, which remedies the inability of an ordinary recurrent neural network to rely on long-range information. Reference [4] focuses on using dictionary information for Chinese word segmentation when training data are insufficient. Reference [5] combines statistics and dictionaries and performs Chinese word segmentation with a second disambiguation step based on rules and a word table. Reference [6] addresses the difficulty of segmenting legal documents by combining local labeling with linear programming. The N-gram language model is reviewed in reference [7], where smoothing algorithms are introduced to solve the zero-probability problem.

2 Theory

2.1 Character-Level N-Gram Language Model

The traditional N-gram language model is mostly word-based and is mainly used in supervised learning. Reference [5] relies on domain dictionaries and optimizes them extensively. The field of ideological and political education, however, contains many professional terms, which poses a great challenge to such traditional language models. This paper therefore introduces a character-level N-gram language model, which predicts the n-th character from the preceding (n − 1) characters. For a sequence M consisting of n characters, the model estimates

P(w_n | w_{n-1}, w_{n-2}, \ldots, w_1)   (1)

where w_1, w_2, \ldots, w_{n-1} are the first (n − 1) characters of the sentence and w_n is the n-th character. According to the conditional probability formula

P(B | A) = P(A, B) / P(A)   (2)

we can write the probability of a segmentation path of the sequence M as

P(w_1 w_2 \ldots w_n) = P(w_1) \cdot P(w_2 | w_1) \cdots P(w_n | w_1 w_2 \ldots w_{n-1})   (3)

The length of the sequence is not fixed, so a Markov assumption is introduced: the probability of a character depends only on the preceding m characters. When m = 1 it is a 1-gram language model, and the probability of the segmentation path of the sequence M can be corrected to

P(w_1 w_2 \ldots w_n) = P(w_1) \cdot P(w_2 | w_1) \cdots P(w_n | w_{n-1})   (4)

where P(w_n | w_{n-1}) can be obtained by maximum likelihood estimation:

P(w_n | w_{n-1}) = Count(w_n, w_{n-1}) / Count(w_{n-1})   (5)

When m = 2 it is a 2-gram language model. The N-gram family includes 1-gram, 2-gram, 3-gram, 4-gram, 5-gram models and so on; n is the maximum length of the character sequences considered by the model, and a high-order N-gram language model contains the lower-order models.
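As a minimal illustration of Eq. (5), the sketch below (not the authors' implementation) estimates a character-level 2-gram model by maximum likelihood from unlabeled sentences; the `<s>`/`</s>` boundary tags and the function name are illustrative assumptions.

```python
from collections import Counter

def train_char_bigram(corpus_sentences):
    """Character-level 2-gram model by MLE:
    P(w_n | w_{n-1}) = Count(w_{n-1}, w_n) / Count(w_{n-1})."""
    unigrams, bigrams = Counter(), Counter()
    for sentence in corpus_sentences:
        chars = ["<s>"] + list(sentence) + ["</s>"]      # sentence boundary tags
        unigrams.update(chars)
        bigrams.update(zip(chars[:-1], chars[1:]))       # adjacent character pairs
    return {(prev, cur): cnt / unigrams[prev] for (prev, cur), cnt in bigrams.items()}

# Toy example: relative frequency of one character following another
model = train_char_bigram(["思想政治教育", "政治教育很重要"])
print(model[("政", "治")])
```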

2.2 Viterbi Algorithm

The Viterbi algorithm is a dynamic programming algorithm widely used in machine learning. It finds the Viterbi path, i.e., the hidden state sequence most likely to have produced a given sequence of observations. The hidden Markov model is shown in Fig. 1.

Fig. 1. Hidden Markov model

The prediction problem can be expressed as

(x_1, x_2, \ldots, x_N) = \arg\max P(x_1, x_2, \ldots, x_N | y_1, y_2, \ldots, y_N)   (6)

According to the Markov assumption, the above equation is equivalent to

\arg\max \prod_{i=1}^{N} P(x_i | y_i) P(y_i | y_{i-1})   (7)

The optimal solution path graph structure obtained by the Viterbi algorithm is shown in Fig. 2. In the Markov Chain, there may be more than one value at any time t. xij represents the j-th possible value of state xi . The above recessive state sequence is expanded by value to obtain a fence network structure. To record intermediate variables, we can introduce d and u and define the maximum probability path for all individual paths (i1 ,i2 , …,it ) as


Fig. 2. The optimal solution path graph structure

d_t(i) = \max P(i_t = i, i_{t-1}, \ldots, i_1, o_t, \ldots, o_1 | \lambda), \quad i = 1, 2, \ldots, N   (8)

where i_t denotes the state at time t on the path, o_t denotes the observed symbol, and \lambda denotes the model parameters. From the above formula, the recursion for d is

d_{t+1}(i) = \max_{1 \le j \le N} [ d_t(j) a_{ji} ] \, b_i(o_{t+1})   (9)

where i = 1, 2, \ldots, N and t = 1, 2, \ldots, T − 1. The (t − 1)-th node of the most probable partial path (i_1, i_2, \ldots, i_t) ending in state i at time t is defined as

u_t(i) = \arg\max_{1 \le j \le N} [ d_{t-1}(j) a_{ji} ]   (10)

According to the above formula, u_t(i) records the most probable predecessor state and is used for backtracking until the optimal hidden state sequence is found. The inputs are the model and the observation sequence

\lambda = (A, B, \pi), \quad O = (o_1, o_2, \ldots, o_T)   (11)

where A is the state transition matrix and B is the observation probability distribution matrix. The termination step is

P^* = \max_{1 \le i \le N} d_T(i)   (12)

i_T^* = \arg\max_{1 \le i \le N} [ d_T(i) ]   (13)

Backtracking along the recorded nodes gives

i_t^* = u_{t+1}(i_{t+1}^*)   (14)

and the output is the optimal path

I^* = (i_1^*, i_2^*, \ldots, i_T^*)   (15)

The Viterbi algorithm thus solves for the optimal path by dynamic programming: at each step it combines the previous state probabilities with the transition probabilities to keep only the highest-probability path into each state, and finally infers the hidden state sequence. This greatly reduces the amount of computation and improves efficiency.
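A compact log-domain sketch of the recursion and backtracking of Eqs. (8)–(15) is given below. It is a generic illustration under assumed matrix conventions, not the authors' code.

```python
import numpy as np

def viterbi(obs, states, log_pi, log_A, log_B):
    """Most probable hidden state path of an HMM.
    log_pi[i]: initial log-prob; log_A[i, j]: transition i -> j; log_B[i, o]: emission."""
    T, N = len(obs), len(states)
    delta = np.full((T, N), -np.inf)          # delta_t(i): best log-prob ending in state i
    psi = np.zeros((T, N), dtype=int)         # psi_t(i): best predecessor, for backtracking
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        for i in range(N):
            scores = delta[t - 1] + log_A[:, i]
            psi[t, i] = np.argmax(scores)
            delta[t, i] = scores[psi[t, i]] + log_B[i, obs[t]]
    path = [int(np.argmax(delta[-1]))]        # termination, Eqs. (12)-(13)
    for t in range(T - 1, 0, -1):             # backtracking, Eq. (14)
        path.append(int(psi[t, path[-1]]))
    return [states[i] for i in reversed(path)]
```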

3 Proposed Method

3.1 The Construction of Corpus

Given the characteristics of unsupervised learning and language models, a large and broadly covering corpus is needed for training, so we built one [8]. The corpus consists of three parts: papers in the field of ideological and political education, WeChat articles, and People's Daily news. The papers mainly provide professional vocabulary; WeChat articles related to ideological and political education mainly provide hot words and new words; and People's Daily news contains a large amount of professional vocabulary as well as hot topics, policies and other information. Because the learning is unsupervised, the corpus needs no manual labeling: sentences are separated by newlines and characters are separated by spaces. Some processed results are shown in Fig. 3.

Fig. 3. The format of processing corpus text
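A minimal preprocessing sketch of this corpus format follows; the sentence-splitting punctuation and the output file name are assumptions, not part of the original pipeline.

```python
import re

def prepare_corpus(raw_text, out_path="corpus.txt"):
    """One sentence per line, with a space between characters (no labels needed)."""
    sentences = [s for s in re.split(r"[。！？\n]", raw_text) if s.strip()]
    with open(out_path, "w", encoding="utf-8") as f:
        for s in sentences:
            f.write(" ".join(s.strip()) + "\n")   # "思想政治教育" -> "思 想 政 治 教 育"

prepare_corpus("思想政治教育立德树人。理论与实践相结合！")
```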

3.2 Training and Application of Character-Level N-Gram Model

After obtaining the data, we use a 4-gram language model. Start-of-sentence and end-of-sentence tags are first added to every sentence in the data set, which keeps sentences from affecting one another and makes it possible to estimate the probability of a character appearing at the beginning or end of a sentence. The next step maps each character to a unique numeric ID and sorts by these IDs to count adjacent characters. Data sparsity is then handled by discounting: part of the probability mass of frequently occurring n-grams is redistributed to n-grams that never occur. After discounting, back-off weights are computed; a weight measures how well one character can be followed by another, and the higher the weight, the more likely the character is to be the first character of a word. Finally, interpolation links the models of different orders, which improves the correctness of the model and yields the complete 4-gram language model. Part of the 4-gram model trained on the text of Fig. 3 is shown in Fig. 4.

Fig. 4. Part of the N-gram language model

The overall training procedure of the N-gram language model is summarized in Table 1; the trained model is then passed to the Viterbi algorithm.

Table 1. N-gram language model training algorithm

Input: Corpus
Output: N-gram language model
1. Marking. Mark the beginning and end of each sentence with start and end tags
2. Mapping. Map each character to a unique ID
3. Counting. Combine and sort identical items to obtain the raw counts of the model
4. Adjusting. Adjust the count of each character based on its ability to form words
5. Discounting. Reduce the conditional probabilities of common N-grams and assign the mass to N-grams that never appear
6. Normalization. Calculate the back-off weights
7. Interpolation. Establish the correlation between the models of different orders


After the model is trained, the N-gram model is input to the Viterbi algorithm, which segments the text. The specific steps are shown in Table 2.

Table 2. Viterbi algorithm

Input: N-gram language model, text
Output: Word segmentation result I*
1. Initialize the local states d_1(i) and u_1(i)
2. Compute d_t(i) and u_t(i) recursively from the local states
3. Calculate the termination values P* and i_T*
4. Backtrack along the path to obtain the segmentation result I*

4 Experiment

4.1 Language Model Comparison Test

This paper chooses the 4-gram language model. To verify that it is better suited than other orders to Chinese word segmentation in the field of ideological and political education, the experiment compares the segmentation processing time of language models of different orders. As shown in Fig. 5, the processing time changes smoothly between orders, with no exponential growth, so using the 4-gram model is feasible in practice. The next experiment tests whether the segmentation ability and accuracy of the algorithm meet the requirements.

Fig. 5. Comparison of Chinese word segmentation processing time of different order language models

4.2 Algorithm Accuracy Test

After the language model comparison, this paper compares the segmentation results of the proposed algorithm with those of traditional word segmentation tools to verify its advantage in the field of ideological and political education. The Jieba segmenter is used as the main baseline, and the 2-gram, 3-gram and 4-gram language models are also compared. The test results are shown in Table 3.

Table 3. Results of word segmentation

Algorithm | Precision | Recall | F
Jieba     | 83.87%    | 89.77% | 85.18%
2-gram    | 80.18%    | 90.82% | 85.16%
3-gram    | 85.37%    | 90.31% | 87.77%
4-gram    | 88.85%    | 89.87% | 89.35%

As a traditional word segmentation tool, Jieba does not perform ideally on texts in the field of ideological and political education, which may be caused by the special word-formation patterns of this domain's literature. The 4-gram model outperforms the lower-order language models; inspection of the segmentation results shows that, as the model order increases, the algorithm tends to produce longer words, which makes the 4-gram model better suited to ideological and political education literature. With further improvement of the corpus, the segmentation precision can be raised further.

5 Conclusion

This paper takes Chinese word segmentation for ideological and political education as its scenario. A character-level N-gram language model is trained on the corpus without supervision, and unsupervised Chinese word segmentation is then performed with the Viterbi algorithm combined with this model. The results show that the method segments domain vocabulary well, effectively reduces labeling cost, and improves segmentation efficiency.

References

1. Zou, J., Wen, H., Wang, T.: Research on Chinese word segmentation algorithm based on statistics. Comput. Knowl. Technol. 15(04) (2019)
2. Ni, W., Sun, H., Liu, W., Zeng, Q.: Automatic optimization of unsupervised Chinese word segmentation for domain literature. Data Anal. Knowl. Discov. 2(02), 96–104 (2018)
3. Jin, W., Li, W., Ji, C., Jin, X., Guo, Y.: The Chinese participle based on bidirectional LSTM neural network model. J. Chin. Inf. Process. 32(02), 29–37 (2018)
4. Liu, J., Wu, F., Wu, C., Huang, Y., Xie, X.: Neural Chinese word segmentation with dictionary. Neurocomputing 338, 46–54 (2019)
5. Cheng, Y.S., Shi, Y.T.: Chinese word segmentation method for professional field. Comput. Eng. Appl. 54(17), 30–34+109 (2018)
6. Yan, Q.: Research on Chinese word segmentation method for legal documents. Suzhou University (2018)
7. Yin, C., Wu, M.: Overview of N-gram model. Comput. Syst. 27(10), 33–38 (2018)
8. Ramos, F.P., Cerutti, G., Guzmán, D.: Building representative multi-genre corpora for legal and institutional translation research: the LETRINT approach to text categorization and stratified sampling. Transl. Spaces 8(1), 93–116 (2019)

Random Design and Quadratic Programming Detection of Cayley Code in Large MIMO System

Da-Peng Zhang1(&), Fu-Yun Ma1, Ting Liu2, Ning Liu1, Li-Jun Bian1, and Zhi-Min Zhang1

1 Beijing Toglobe Electronic Technology Co., Ltd., Beijing 100094, China [email protected]
2 Academy of Opto-Electronics, Chinese Academy of Sciences, Beijing 100094, China

Abstract. Cayley code can assure high-rate transmission without channel estimation in large multi-input multi-output (MIMO) systems, but it lacks a systematic design and a feasible detection method. In this paper, we propose a systematic random design and a convex optimization detection scheme for Cayley codes with arbitrary data rates and numbers of transmit and receive antennas. We employ complex Gaussian random matrices to generate the Hermitian base matrices for signal design. Furthermore, we regard the transmission system as an equivalent large MIMO system whose transmit signals are the base coefficients, and then use quadratic programming (QP) for detection. In large MIMO systems, where conventional non-coherent space-time codes are hard to design and the performance of conventional algorithms deteriorates, the proposed scheme still operates. Because the base matrices are randomly generated, the design is naturally highly secret.

Keywords: Large multi-input and multi-output · Space-time code · Cayley code · Signal design · Quadratic programming

1 Introduction

The capacity of a large multi-input multi-output (MIMO) system is much larger than that of a traditional MIMO system, as it can reduce the effect of additive noise and inter-cell interference on a large scale [1]. After the concept was put forward, it aroused wide concern all over the world [2–6]. In large MIMO systems, traditional space-time codes (STC), which need channel state information (CSI) at the receiver, are still applicable. However, they are sensitive to channel estimation errors, and the overhead of channel estimation in large MIMO systems is further increased, which makes them hard to use in fast-fading environments. Cayley code is a special kind of differential unitary space-time code with the advantages of a high transmission rate and relatively low detection complexity without CSI [7, 8]. Cayley code was the first practical non-coherent space-time code, as it solved the problems of limited data rate and overly complicated detection that exist in non-coherent space-time codes such as unitary space-time modulation (USTM) [9] and

differential unitary space-time modulation (DUSTM) [10]. Cayley code in large MIMO system can provide high speed wireless transmission without CSI, and has a broad application prospect. It can be found that there are two main problems in Cayley codes. First, although there has been theoretical bases for Cayley code design, there is no systematic design method as USTM, DUSTM, and traditional STC. Second, in existing detection methods, the error rate performance of zero forcing (ZF) and minimum mean square error (MMSE) detection is too poor, and the complexity of sphere decoding (SD) in large MIMO system is still too high. In practice, we need to consider the error rate and complexity comprehensively. In this paper, we propose systematic design and quadratic programming (QP) detection schemes for Cayley code in large MIMO systems. We get Hermitian matrices by complex Gaussian random matrices with Cayley transformation at the transmitter and reduce signal detection complexity using convex optimization theory at the receiver to make it more practical for high speed Cayley code. Due to the randomness of signal generation, the communication process has a high security in nature.

2 Description of Cayley Code Transmission

[Fig. 1 block diagram stages: base coefficient generation → Hermitian matrix generation → Cayley transform → differential encoding → channel → receiving → detection and decision]

Fig. 1. Block diagram of Cayley encoding and decoding procedures.

We employ a spatially independent and temporally continuous large MIMO Rayleigh fading channel model in which the numbers of transmit and receive antennas are M and N, respectively, as shown in Fig. 1. The relationship between the M × M transmit signal X and the M × N receive signal Y is

Y = \sqrt{\rho / M} \, X H + W   (1)

where \rho is the signal-to-noise ratio at the receiver. The entries of the M × N fading coefficient matrix H and of the additive noise matrix W follow the complex Gaussian distribution CN(0, 1); the entries of H are independent in space but continuous in time, while the entries of W are independent in both space and time.

At the transmitter, a source bit stream of length M·R is divided into Q sub-streams, and each sub-stream is mapped to a decimal number i. According to this number, an element a_q is selected from the coefficient set a of cardinality r, where q = 0, \ldots, Q − 1. Each base coefficient a_q has r possible values, r is an integer power of 2, and Q is an integer no larger than min(M, N)·[2M − min(M, N)]. The data rate is R = (Q/M)·log_2 r and the cardinality of the signal set is L = 2^{MR} = r^Q. The weighted sum of the coefficients a_q and the M × M Hermitian base matrices A_q gives a new Hermitian matrix A = \sum_{q=0}^{Q-1} a_q A_q, which carries the data. Via the Cayley transform we obtain an M × M unitary matrix U = (I + jA)^{-1}(I − jA). Using U and the current M × M transmit signal X_k for differential encoding, the transmit signal of the next time block is X_{k+1} = U X_k, which is sent to the channel through the M transmit antennas. At the receiver, the M × N signal Y_k is obtained at time block k through the N receive antennas. At the detection module, the maximum likelihood (ML) detection is

a_{q,ML} = \arg\min_{\{a_q\}} \left\| \left( I + j \sum_{q=0}^{Q-1} a_q A_q \right)^{-1} \left[ Y_{k+1} - Y_k + j \sum_{q=0}^{Q-1} a_q A_q (Y_{k+1} + Y_k) \right] \right\|_F^2   (2)

where \| \cdot \|_F denotes the Frobenius norm. From a_{q,ML} we can recover \hat{i}_q and finally the decided bits. Omitting the factor (I + j \sum_{q=0}^{Q-1} a_q A_q)^{-1} in (2) gives a linearized ML detection. However, the detection is still too complicated compared with traditional space-time codes that need CSI.
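For concreteness, the sketch below shows one differential encoding step built from the Cayley transform described above. It is illustrative only; the toy base matrix is an assumption.

```python
import numpy as np

def cayley_encode(coeffs, base_matrices, X_prev):
    """One encoding step: A = sum_q a_q A_q, U = (I + jA)^{-1}(I - jA), X_{k+1} = U X_k."""
    M = X_prev.shape[0]
    A = sum(a * Aq for a, Aq in zip(coeffs, base_matrices))        # Hermitian data matrix
    U = np.linalg.solve(np.eye(M) + 1j * A, np.eye(M) - 1j * A)    # Cayley transform (unitary)
    return U @ X_prev                                              # next transmit block

# Toy check with M = 2, Q = 1, a_0 = +1: the produced block should be unitary.
A0 = np.array([[0.3, 0.1 - 0.2j], [0.1 + 0.2j, -0.5]])             # Hermitian base matrix (assumed)
X1 = cayley_encode([1.0], [A0], np.eye(2, dtype=complex))
print(np.allclose(X1.conj().T @ X1, np.eye(2)))                    # True
```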

3 Cayley Code Design Based on Random Complex Gaussian Matrices

DUSTM directly maps baseband bit sequences to powers of square matrices that are transmitted differentially. Cayley code, in contrast, selects coefficients via the bit sequence and generates a Hermitian matrix as a weighted sum of base matrices. In theory, an exhaustive search should be used to find practical codes, but for large MIMO systems with high data rates the cardinality of the signal set is far too large for exhaustive search. Therefore, although Cayley code has a very high-speed transmission capability, there is at present no systematic generation method that fits all numbers of transmit antennas and all data rates. Here we propose a design method for Cayley code based on random complex Gaussian matrices; as the codes are randomly generated, they also give high security in transmission. At the transmitter, the base matrix search proceeds as follows.

1. Parameter initialization. Set M, N, Q, and r, and let D_0(A) be the design metric of an initial set of randomly generated base matrices.
2. Generate a set of M × M random complex Gaussian matrices Z_q, q ∈ {0, \ldots, Q − 1}.
3. Generate A_q = Z_q Z_q^H / \| Z_q Z_q^H \|_F^2 from Z_q.
4. Compute the new metric D_1(A) for the new set of base matrices.
5. If D_0(A) > D_1(A), keep D_0(A) unchanged; otherwise let D_0(A) = D_1(A).
6. Repeat steps 2 to 5. If D_0(A) no longer increases after a large number of iterations, the search stops.

The base matrices obtained in the first few iterations bring the largest improvement in error performance. As the number of iterations increases, the


improvement brought by new base matrices becomes smaller and smaller, and the error performance tends to a stable level. In traditional MIMO systems, ZF and MMSE detection can be used at the receiver, but as the number of antennas or the data rate increases, detection becomes noticeably slower and the error rate too high. In high-rate large MIMO systems, the proposed QP detection method improves both speed and reliability. With the above search scheme, useful signal representations for non-coherent, high-data-rate large MIMO transmission can be found quickly.
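A possible implementation of steps 2–3 of the search is sketched below. The Frobenius-norm normalization follows the reconstruction of step 3 above, and the design metric D(A) is deliberately left out because its exact form is not specified here.

```python
import numpy as np

def random_hermitian_bases(M, Q, rng=np.random.default_rng()):
    """Draw complex Gaussian Z_q and form normalized Hermitian bases
    A_q = Z_q Z_q^H / ||Z_q Z_q^H||_F^2 (normalization as assumed in step 3)."""
    bases = []
    for _ in range(Q):
        Z = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
        H = Z @ Z.conj().T                        # Hermitian by construction
        bases.append(H / np.linalg.norm(H) ** 2)  # Frobenius-norm normalization
    return bases

# Candidate sets would then be scored with the (problem-specific) metric D(A)
# and the best-scoring set kept, as in steps 4-6.
candidates = random_hermitian_bases(M=4, Q=6)
print(len(candidates), candidates[0].shape)       # 6 (4, 4)
```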

4 Quadratic Programming Detection of Cayley Code

Cayley code has a very high data-carrying capability, but its ML detection is more complex than that of DUSTM, which makes ML detection impractical. Convex optimization theory has been successfully applied to signal detection in MIMO systems, yet it is rarely used for non-coherent space-time codes because the convex-optimization detection results of USTM and DUSTM cannot be mapped back to a bit stream. For Cayley code, however, some equivalent processing of the signals makes QP detection possible. Let C = (Y_{k+1} − Y_k)^H and B = [j(Y_{k+1} + Y_k)]^H. Considering (1) and the differential relationship between adjacent transmit signals, we have

C = B A + V = \sum_{q=0}^{Q-1} a_q B A_q + V   (3)

where V = [(I_M + jA) W_{k+1} − (I_M − jA) W_k]^H. Define the row vectors c_n = [c_{n,0}, \ldots, c_{n,M-1}], b_n = [b_{n,0}, \ldots, b_{n,M-1}], and v_n = [v_{n,0}, \ldots, v_{n,M-1}], so that C, B, and V consist of the stacked rows c_0, \ldots, c_{N-1}, b_0, \ldots, b_{N-1}, and v_0, \ldots, v_{N-1}, respectively. Then (3) can be rewritten as

[c_0, \ldots, c_{N-1}] = \sum_{q=0}^{Q-1} a_q [b_0 A_q, \ldots, b_{N-1} A_q] + [v_0, \ldots, v_{N-1}]
                      = [a_0, \ldots, a_{Q-1}] \begin{bmatrix} b_0 A_0 & \cdots & b_{N-1} A_0 \\ \vdots & \ddots & \vdots \\ b_0 A_{Q-1} & \cdots & b_{N-1} A_{Q-1} \end{bmatrix} + [v_0, \ldots, v_{N-1}]

Denoting c = [c_0, \ldots, c_{N-1}], a = [a_0, \ldots, a_{Q-1}], v = [v_0, \ldots, v_{N-1}], and

D.-P. Zhang et al.

2

b0 A0 .. 6 G¼4 . b0 AQ1

 .. . 

3 bN1 A0 .. 7 5 . bN1 AQ1

then we can finally write the equivalent transmission form as c ¼ aG þ v

Mapping

ð4Þ

Rx

Tx

Detection

Fig. 2. Block diagram of equivalent transmission system based on Cayley code.

Expression (4) can be equivalently regarded as a Q  MN large MIMO system, as shown in Fig. 2. Source signal a can be regarded as a special real modulated signal. When r ¼ 2, it is just a common binary phase shift keying (BPSK) signal. It has been pointed out in [7] that when data rate remains unchanged, the performance of larger Q and smaller r is superior to that of smaller Q and larger r, but there is no explanation. We present an equivalent system model that can explain this phenomenon. When Q is smaller and r is larger, we have a system with a smaller number of transmit antennas and higher order modulation. Otherwise, the equivalent system is with a larger number of transmit antennas and lower order modulation. Lower order modulation usually gives a higher reliability. Source signal a is sent into a fading channel with coefficient G and is interfered by additive noise v. Then we finally get received signal c. All parameters in the equivalent system are real. a has L ¼ r Q possible values which are the same as the cardinality of the transmit signal set. Therefore, when we determine a detected value for a, we also determine a piece of source bits. With (3), we can get ML detection aML ¼ arg

min

l2f0;...;L1g

kal G  ck2F

ð5Þ

In high data rate large MIMO system, the ML detection scheme descripted in (5) cannot be realized, both SD and MMSE detection lose their speed advantages, and ZF shows a too high error rate. Convex optimization detection take into account both complexity and reliability. Assuming that P ¼ GGH , d ¼ cGH , and the maximum and minimum of aq with r possible values in set a are amax and amin , then the detection of Cayley code can be converted into a QP problem, namely

Random Design and Quadratic Programming Detection

 min aPaH  2Re adH s.t: 1Q1  amin 4 a 4 1Q1  amax

1227

ð6Þ

where 1Q1 is an all 1 column vector with Q elements. As a is a real row vector, (6) can be simplified into min aPR aT  2adTR s.t: 1Q1  amin 4 a 4 1Q1  amax

ð7Þ

where the subscript “R” denotes the real part of a matrix or a vector. With (7), we can finally get the optimal solution aF QP . As a ¼ sort½tanðh=2Þ, where hi ¼ ð2i  1Þp=r, i ¼ 1; . . .; r, and “sort” function sorts its arguments in ascending order. Therefore, with each element aF q;QP in optimal solution aF q;QP , we can uniquely determine a decided index value

rþ1 ^iq ¼ round r tan1 aF þ q;QP p 2

ð8Þ

where “round” function rounds the arguments to the nearest integer and ^iq 2 ½1; . . .; r. By mapping the Q values of ^iq  1 to the corresponding bit sequence whose lengths are log2 r and changing them from parallel to serial, we get the finally decided bits. Therefore, we finished the proposed QP detection scheme for Cayley code. QP detection gives a much lower complexity than ML, SD, and even MMSE in large MIMO system. When compared with ZF, QP detection shows a much higher reliability that is near to the performance of SD.

5 Simulation Results and Performance Analysis In this section, we present several examples to illustrate the performance of the proposed scheme via simulations. We employ Jakes channel model, where the fading parameters are independent in space and continuous in time. Assuming that symbol rate is fs ¼ 3  104 s1 , carrier frequency is 900 MHz, and the speed of mobile station is 25m=s, then we have Doppler frequency shift fD ¼ 75Hz. When searching for base matrices, we set a source block with MR  104 bits. Figure 3 shows the error rate performances of Cayley code with ZF, MMSE, and QP detection, where M ¼ 20, N ¼ 200, R ¼ 20, Q ¼ 400, r ¼ 2, and L ¼ 2MR  2:58  10120 . Traditional differential space-time codes cannot provide such a high data rate, the ML and SD detection of Cayley code is not practical because of high complexity, and ZF and MMSE only have a reference value because their error rates are too high. As the number of antennas increases, because the inverse operation of large scale matrix is difficult to achieve, MMSE scheme is also not practical. Then, QP detection is the only scheme that can present acceptable reliability and error rate.

1228

D.-P. Zhang et al.

Fig. 3. BER performance of Cayley code with ZF, MMSE, and QP for detection when M = 20, N = 200, and R = 20.

Fig. 4. transmit different and R =

Relationship between number of antennas M and time needed by detection methods when N = 200 20.

Therefore, we can realize high speed non-coherent wireless transmission in large MIMO system. Figure 4 shows the relationship between the number of transmit antennas M and the number of operations needed by all the mentioned detection methods, where N ¼ 200, R ¼ 2, and bit length is MR  200. As M increases, the complexity of both ML and linearized ML increases exponentially, and other methods increase as polynomial functions. When M increases, the complexity of QP detection increases the most slowly. When M  2, the complexity of all other detection is higher than convex optimization method, except for ZF. However, ZF detection loses practical value because its error rate is too high.

6 Conclusion In this paper, we proposed random design and convex optimization detection for Cayley code, which can realize reliable high rate communication in fast fading without channel estimation. At the transmitter, we generate Hermitian base matrices by random complex Gaussian matrices to construct Cayley code. We regarded the whole system as a new large MIMO system, where the transmitted signals are the base coefficients, and then we used QP detection at the receiver. Simulation results showed that the proposed schemes are feasible, realize high rate transmission, and can get a compromise between complexity and reliability in large MIMO system.

References 1. Marzetta, T.L.: Noncooperative cellular wireless with unlimited numbers of base station antennas. IEEE Trans. Wireless Commun. 9(11), 3590–3600 (2010) 2. Sah, A.K., Chaturvedi, A.K.: Sequential and global likelihood ascent search-based detection in large MIMO systems. IEEE Trans. Commun. 66(2), 713–725 (2018)

Random Design and Quadratic Programming Detection

1229

3. Wu, C.-Y., Huang, W.-J., Chung, W.-H.: Robust update algorithms for zero-forcing detection in uplink large-scale MIMO systems. IEEE Commun. Lett. 22(2), 424–427 (2018) 4. Arti, M.K.: A space-time transmission scheme for large MIMO systems. IEEE Wireless Commun. Lett. 7(1), 62–65 (2018) 5. Qi, Y., Qian, R.: Distance hardening in large MIMO systems. IEEE Trans. Commun. 66(4), 1452–1466 (2018) 6. Mann, P., Sah, A.K., Budhiraja, R., Chaturvedi, A.K.: Bit-level reduced neighborhood search for low-complexity detection in large MIMO systems. IEEE Wireless Commun. Lett. 7(2), 146–149 (2018) 7. Hassibi, B., Hochwald, B.M.: Cayley differential unitary space-time codes. IEEE Trans. Inform. Theory 48(6), 1485–1503 (2002) 8. Oggier, F., Hassibi, B.: Algebraic cayley differential space-time codes. IEEE Trans. Inform. Theory 53(5), 1911–1919 (2007) 9. Hochwald, B.M., Marzetta, T.L.: Unitary space-time modulation for multiple-antenna communications in Rayleigh flat fading. IEEE Trans. Inform. Theory 46(2), 543–564 (2000) 10. Hochwald, B.M., Sweldens, W.: Differential unitary space-time modulation. IEEE Trans. Commun. 48(12), 2041–2052 (2000)

The Design and Implementation of Mobile Operation Platform Based on Domain Driven Design Wenjuan Shi(&), Shulong Wang, Jing Ma, Junquan Zhang, Yuxi Liu, Zaiyu Jiang, and Yang Hu State Grid Information and Telecommunications Group Co., Ltd., Beijing 100192, China [email protected] Abstract. In order to actively promote the implementation of grass-roots burden reduction work of SGCC and improve the efficiency of power marketing field operation, this paper establishes a micro-application system for typical business scene by progressive six-step method, forms “building block” microapplication model for carrying all kinds of mobile operation business and application construction. On this basis, the business middle-ground quickly responded front-end micro-application for business changes and innovations is established by the domain driven design method, and constructs eight capability centers to support front-end micro-application. By this way, the architecture of “big platform, micro-application” is designed to support the fast iteration of new business, form shared micro service about common application and rule precipitation, and build the platform of “data feedback business” benign circle. Keywords: Domain driven design (DDD) middle-ground  Micro-application

 Mobile operation  Business

1 Introduction With the rapid development of Internet technology, there are more and more real-time delivery demands from both supply and demand parties. Power customers have put forward higher requirements on the service concept, service method, service content and service quality of the power grid, and we need to further enrich service channels, expand service content, change service model and improve service efficiency. It is imperative to build a service architecture with intensive management and control, flexible response, and iterative agility, to quickly respond to customer needs, and to capture market changes, and to enhance corporate competitiveness. Zhang Shengnan et al. uses domain-driven design to design the distribution and sales business model, introduces the whole process from demand analysis to business entity extraction and aggregation to form the business field, and provides guidance for the next step [1]. With the practice and wide application of the middle-ground architecture, the advantages of domain-driven design concepts, methods and adaptability are reflected, especially the middle-ground construction of large-scale projects with complex business [2]. Studies [3–5] introduce the design methods of the business middle-ground, ©

The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 J. Sun et al. (Eds.): Signal and Information Processing, Networking and Computers, LNEE 917, pp. 1230–1237, 2023. https://doi.org/10.1007/978-981-19-3387-5_147

The Design and Implementation of Mobile Operation Platform

1231

which are practiced in the fields of power trading and power marketing. Li wei et al. introduces the business blueprint and framework structure of business middle-ground and which is application in the telecommunications industry [6]. Studies [7–9] aimed at the problem of data interaction between systems, established a multi-channel service scene for power marketing, and developed an integrated platform based on microservices and middle-ground architecture. Aiming at the current situation of mobile operation applications, this paper proposes a hierarchical and progressive business requirement analysis method to form clusters of typical business scenes, and uses domain-driven design methods to build a business middle-ground to support mobile operation micro-application groups, this method realized the architecture design of “large middle-ground, micro-applications”, which can serve fast iterations and respond quickly to needs.

2 Scenario-Based Micro-application Design for Mobile Operation

2.1 Scenario-Based Micro-application Design Method

Centered on grassroots posts and based on grassroots job scenes, the scenario-based micro-application for business mobile operations is designed to adapt hierarchical progressive design, as is shown in Fig. 1. Sorting Out Business Scenes. The business coverage of mobile operations is clarified in view of the business scenes and on-site operation specifications of the actual work of mobile operations in the whole domain. Identifying Business Scenes. Job responsibilities and work processes are identified and sorted out in accordance with the actual job operating specifications on site, and the problems and needs are identified from mobile operation scenes. Filtering Business Scenes. For the mobile operation scenes that have been built or under construction, unsuitable scenes are removed, such as large and second-time information entry scenes for field operations, and non-necessary mobile business scenes where grassroots employees have no desire to demand. Splitting Business Scenes. That means to split mobile applications with miscellaneous features and to build the independent business scenes micro-applications. On-site mobile operation applications are designed with the granularity of scenes and with job positions as the center, taking into account the ease of use and integrity of the application. Aggregating Scenario-Based Micro-applications. It is required that a single independent work should be completed in a single micro-application, and similar operations are aggregated with job responsibilities as the core to avoid repeated operations or repeated data entry.

1232

W. Shi et al. Microapplication Business scenes

Sort out

Operation specifications

Content Operation specifications Porblem Redundant process

Aggregate

Filter

Split

Content Unsuitable scenes Undemand scenes Unneces sary applications

Principle Graininess suitable Multi-dimension Moderate split

Identify

Precipitate

Principle Profess ional aggregation Scene aggregation Function aggregation

Content Share data Common service General function

Fig. 1. Scenario-based micro-application design method

Precipitating Common Services. To support mobile operation business middleground, the common functions are precipitated into enterprise-level common services. 2.2

Scenario-Based Micro-application for Mobile Operations

Through the above methods, based on business scenes, mobile micro-applications are built for grassroots positions. Combining business types, five mobile operations and one basic application are formed. Among them, professional micro-applications are business expansion and installation, electricity bills and reminders, power supply inspection, metering and disassembly and customer service. Micro-application of Mobile Operations Micro-application Expansion and installation

Electricity fee reminder

Electricity inspection

Metering disassembly

On-site acceptance

On-site fee

Electricity check

Install the table

Business acceptance

On-Site survey

On-site reminder

Breach of contract

Change table

On-site customer service

Power outage/ recovery

Operation management

Basic application

Customer service

Basic tools

Log in

Registere

Message

Mine

Scan code

Bluetooth

Job overview

Work order task

Bind user

Unbind user

Take pictures

Record

Fig. 2. Scenario-based micro-application architecture for mobile operations

As is shown in Fig. 2, business expansion and installation micro-applications involve businesses like on-site surveys, intermediate inspections, completion acceptance, power transmission, meter installing and disassembly, etc., and thus include all kinds of on-site operations like new installations and expansions for low-voltage

The Design and Implementation of Mobile Operation Platform

1233

residents, new installations for low-voltage non-residents, new installations for highvoltage installations, transfers, capacity reductions, etc. The electricity bill reading and reminding category involves on-site ammeter reading, on-site reminder, power outage announcement, on-site power outage, on-site power restoration and other business links, supporting the signing of tasks (work orders), obtaining the list of arrears users, performing on-site work, confirming and uploading results, etc. Electricity inspection involves periodic inspections, special inspections and other business environments, supporting task work order downloads, on-site inspections, inspection results confirmation, on-site investigations and evidence collection and other on-site operations. The meter installation and disassembly category is on-site business of the meter/changing meter, involves new installations for low-voltage residents (increased capacity), new installations for low-voltage non-residential installations (increased capacity), new high-voltage installations (increased capacity), temporary electricity usage for installation of meters, reclassification, voltage change, capacity reduction (capacity reduction restoration), etc. The customer service category mainly supports the acceptance of applications for new installations for low-voltage residents, new installations for lowvoltage non-residents, new installations for high-voltage, temporary electricity installations, name changes, ownership transfers, capacity reduction, class changes, pressure changes, and household cancellation. Basic applications include eight parts of functions including user login management, work order tasks, messages, work overview, mine, etc., which meet the needs of field operations and management personnel, maintain business continuity and efficiency, and improve field service work quality and efficiency.

3 Design of Domain-Driven Mobile Operation Middle-Ground

3.1 Design Method of Domain-Driven Mobile Operation Business Middle-Ground

The design of mobile operation business middle-ground follows domain-driven design concepts and methods, by analyzing, abstracting, and categorizing business applications design to realize the reasonable configuration of competence centers. The construction process is divided into different phases, including domain analysis phase, service center planning phase, service architecture design phase, and service model design, service realization and governance phase. This article focuses on the first two stages, of which the service center planning stage is decomposed into three parts: service center planning, service center aggregation, and middle-ground architecture. In the domain analysis stage, the design results of the scenario-based microapplication are used as the input of the design of the business center. Semantic syntax analysis methods are used to analyze use cases, to determine the specific content of subject, predicate, object and definite, and adverbial, to extract business entities, attributes and association relationships from the perspective of business logic, and to extract the essence of the transaction, thus forming a business domain model.

1234

W. Shi et al.

In the service center planning stage, the business domain model is produced, and the domain model is summarized and divided in a business-related and similar responsibilities to form a business domain based on the principle of low coupling and high cohesion. The division of domain boundaries adopts the GRAPS principle, and forms a data center from a technical perspective and a business center from a business perspective according to the two dimensions of master data and business logic. In the service center aggregation stage, domain models such as aggregation and entities are aggregated to the service center according to service center aggregation rules. Data reading-based aggregation and entities belonging to the main data service center can be shared by multiple business centers, and can be technically separated from reading and writing. Business logic processing-based aggregation, entities belonging to the business logic service center can be called by multiple application sections, and technically can achieve transaction consistency. For aggregations and entities that cannot be attributed to any center, we generate new service centers by domain clustering method. In the formation stage of the middle-ground architecture, service centers are formed according to the above-mentioned design methods, the connotation of each center is defined, and rules for naming and numbering service centers are formulated. In other design stages, the three stages of service architecture design, service model design, and service realization and governance aim at system realization. The principles and methods of the service architecture design stage are used to achieve seamless connection with the service center planning stage with business interpretation as the goal. In the service architecture and model design stage, comprehensive consideration of business, technology, management and other dimensional factors, the services and interfaces are identified, divided boundaries and clustered, and then the definition, naming and numbering of interfaces are designed, following the principles of appropriate granularity, high aggregation and low coupling, rapid iteration, general capability sinking, etc. In the service realization and governance stage, to realize the purpose of agile response and rapid iteration, and to support the business optimization of the front-end micro-application group, the service release discovery, service authentication and authorization, service monitoring statistics, service failure prediction and other methods innovate and realize the “data feeding back business” model (Fig. 3).

Domain modeling

Input

Typical business scene

Use cases identify Identify use cases bas ed on business scenes

Use case analysis Through semantic syntax analysis, identify entities, attributes, and association relations hips

Determine business objects and relationships

Domain modeling

Service center planning Form a s hared service based on common business and stable data, plan the theme center with business logic and preliminary master data Subject data

Service center aggregation According to the domain model construction rules, the domain models such as aggregation and entities are divided into corresponding competence centers

Formation of business middle-ground structure The mobile station is divided according to the two dimensions of master data and business logic. Subject data

Business Center

Subject data1

Business Center1

Subject data2

Business Center2

Business logic

Work Order control center Trouble repair Center User Center Center1

Center2

Equipment Grid management center

Configuratio n Center

Fig. 3. The design method of business middle-ground based on DDD

The Design and Implementation of Mobile Operation Platform

3.2

1235

The Mobile Operation Business Middle-Ground

Through the above-mentioned methods, the capability-driven business middle-ground is constructed to provide front-end micro-applications with the ability to quickly respond to business changes and needs, including a business service center and a data service center. The results of data service center division are as follows: message center, configuration center, work order center, application center, monitoring center, role center, user center, and equipment center. Among them, the configuration center is responsible for the management of interface registration, interface logs, and system parameter configuration with other business systems. The work order center is responsible for the unified scheduling and management of work orders by building a work order resource pool, to enhances the service, interaction capabilities and efficiency of work orders for on-site operators. The application center is responsible for the release of various micro-applications after the completion of the research and development and user access management. The monitoring center realizes real-time monitoring and analysis of business data. The user center is responsible for the account management of mobile operation micro-application users, including login verification, account binding, etc. Equipment center is responsible for equipment acquisition, distribution, maintenance and other management functions. The message center realizes unified management of business work orders, notifications, and message logs. The control center cooperates with the configuration center to support the personalized design and management of micro-applications. The business service center provides support for the personalized back-office functions of each business field, provides support for the independent research and development of the back-end business logic of micro-applications, provides services for the large-scale innovation of micro-applications of various manufacturers, and realizes the plug-in deployment of a unified platform. Among them, the marketing mobile operation microservices realize the application functions of fault repair, grid management, electricity consumption inspection, and market expansion (Fig. 4).

Fig. 4. Business middle-ground architecture

1236

W. Shi et al.

4 Architecture of Platform Construction

The overall technical architecture of the marketing mobile operation platform follows the construction principle of "three unifications and one independence": a unified development platform, unified software release, unified application management, and independently organized micro-application research and development. Combined with the overall technical requirements, the business middle-ground adopts a Spring-Cloud-based microservice architecture that balances technological advancement and maturity, and uses the unified development platform SGUAP3.0 to improve the unity, flexibility, scalability, security and concurrent processing capability of the system. Operators, mobile terminals, the unified workbench and customer-service smart devices are managed in a unified way, achieving "unified terminals, unified entrances, unified authentication, unified procedures, and unified operations." By standardizing the management and access of marketing-service mobile operation micro-applications and optimizing the interactive service methods with customers, homogeneous operations and personalized service are achieved, promoting the overall improvement of marketing management and customer service quality, as shown in Fig. 5.

Fig. 5. Overall architecture diagram of mobile operating platform

5 Conclusion

The mobile operation platform designed in this paper serves frontline employees at the grassroots level and coordinates enterprise-level mobile operation application planning. To realize "building block" marketing mobile operation scene micro-applications, a six-step process of sorting, identifying, filtering, splitting, aggregating and precipitating is designed with the mobile operation business scene as the design object, realizing a reasonable layout and overall planning of the business scenes and forming five business-scene micro-application groups. Taking these results as input, a business middle-ground of eight data centers is designed with the domain-driven design method, which supports the mobile application platform to

The Design and Implementation of Mobile Operation Platform

1237

establish a responsive, agile and iterative technical architecture. This method realizes the offline “one terminal, one application” mobile operation working mode, effectively reduces the amount of mobile terminal data carried by grassroots employees during field operations, and improves the efficiency of field operations. Acknowledgement. This work was financially supported by the Science and Technology Project of State Grid Corporation of China (5700-202190173A-0-0-00 Research and application of SG-CIM auxiliary design based on semantic recognition technology).

References 1. Zhang, S., Liu, Y., Hu, W., Ye, G., Feng, H., Wang, J.: Design of business middle platform of power trading platform. Power Syst. Technol. (2), 1–7 (2021) 2. Zhou, H., Zeng, C., Chang, H.: Domain-driven enterprise mid-plane design research. Mod. Inf. Technol. (3), 20, 79–81(2019) 3. Zhao, G., Zhang, C., Ouyng, H., et al.: Research on architecture design of multi-channel operation support platform based on business middle-platform. Distrib. Utilization 36(6), 61, 67–71 (2019) 4. Zhou, G., Wang, J., Xv, D., Fang, X., et al.: Research on design method and support system of power marketing service business. China Manag. Inf. 23(1), 83–89 (2020) 5. Cheng, L., Wang, H., Gao, C.: Research on application of micro service in power transaction system. Power Syst. Technol. 42(2), 441–446 (2018) 6. Li, W., Xiong, W., Liu, P., Zhu, Z., Zhao, Y.: Research on the evolution and opening service capability of information system architecture based on business middle platform system. Telecom Eng. Tech. Stand. 33(11), 8–12 (2020) 7. Ouyang, H., Liu, X., Zhang, T., et al.: Research and design of mobile operation application system for electric power marketing service. Comput. Appl. Softw. 36(11), 14–19 (2019) 8. Liu, J., Yang, W., Zhu, P., et al.: Design of multi-channel micro-service architecture for electric power marketing. Distrib. Utilization 36(6), 78–82 (2019) 9. Lin, H., Fang, X., Yuan, B., et al.: Research and design of business platform strategy for multi-channel customer services in Internet of Things in electricity. Distrib. Utilization 36(6), 39–45 (2019)

Research on Ensemble Prediction Model of Apple Flowering Date Fan Zhang, Fenggang Sun, Zhijun Wang, and Peng Lan(&) College of Information Science and Engineering, Shandong Agricultural University, Taian 271000, China [email protected] Abstract. Apple flowering freeze is one of the major disasters affecting yield, and prediction of flowering date using meteorological factors is one of the important aids to reduce the impact of freeze damage. The prediction results of one single model are prone to fluctuations due to interannual and spatial changes. In this paper, 6 areas were selected from Shandong, Shaanxi, Henan, and Liaoning. By analyzing the meteorological data and apple flowering date in the last 10 years, key meteorological factors affecting flowering date were selected based on distance correlation coefficients. Later, the ensemble prediction model was constructed by using support vector machine regression, multiple linear regression, and decision tree regression as the base models. The results showed that the mean absolute deviation of the ensemble prediction model was in the range of 0.736–3.616, which showed good stability and prediction accuracy and could provide theoretical support for apple flowering date prediction. Keywords: Apple flowering date  Meteorological factors correlation coefficient  Ensemble prediction

 Distance

1 Introduction China is the world’s largest producer and consumer of apples. Blossom frost damage is sudden and has a great impact after the disaster, e.g., blossom frost damage in 2018 reduced production by 15–20% in Shandong. As the climate has warmed in recent years, apple flowering has advanced and the probability of apple flower freezing has increased, so it is important to make accurate predictions of flowering date based on meteorological data to assist farmers in preventing flowering freeze damage. Existing studies have shown that apple flowering date is significantly correlated with temperature [1], and in the context of global warming, apple phenology in spring is trending earlier, and extreme meteorological conditions are more likely to affect yields [2, 3]. Most of the existing apple flowering date prediction methods use single prediction methods such as partial least squares regression [4–6], multiple linear regression [7–9], and stepwise regression [10, 11], and temperature-related factors are often used as predictor variables in these methods. Different regions show large differences in apple flowering date, single regression models tend to achieve better results only in regions with similar meteorological and flowering date characteristics. ©


Simultaneously, ensemble prediction models have shown better prediction results than single prediction models. In this paper, three methods, namely support vector machine regression (SVR), multiple linear regression (MLR) and decision tree regression (DTR), were selected as the base models to predict the flowering date. Firstly, we use the distance correlation coefficient to measure the correlation between each meteorological factor and the flowering date, and further filter the variables based on the threshold formula for the number of predictor variables; secondly, we use the leave-one-out cross-validation method to create a data set, and determine the weights of the base models according to their errors to finally realize the ensemble prediction. To verify the model performance, Shandong Yiyuan, Shandong Yanzhou, Shandong Longkou, Shaanxi Luochuan, Henan Shangqiu, and Liaoning Wafangdian were selected as research objects in this paper, and the results showed that the proposed prediction model has good stability and prediction accuracy.

2 Data Description and Study Methodology

2.1 Data Description

The data used in this study included day-by-day meteorological data and apple flowering dates for the 6 locations mentioned above. Meteorological data were obtained from local weather stations, and 3 typical meteorological factors were selected: sunshine hours (h), temperature (°C), and precipitation (mm); the 4 °C cumulative temperature was calculated as the fourth factor. Sunshine hours and precipitation were the cumulative values of the day, and temperature was the average value of the day. The flowering data were obtained from local fruit bureaus and fruit farmers.

2.2 Research Methodology

Due to the large differences in flowering date characteristics in different regions, the prediction effect of a single model is often unstable. In this paper, we establish a prediction model based on ensemble prediction. The main ideas are: to establish a prediction model by analyzing the relationship between historical meteorological factors and apple flowering, to screen out key meteorological factors, to predict the flowering period of the current year from the key meteorological factors of the year to be predicted, and finally to verify the prediction effect of each model by using the leave-one-out cross-validation method. The methodology in this paper can be divided into the following steps. (1) Establishing a sample database using flowering data and four types of meteorological factors. (2) Selecting predictor variables from the sample database as the optimized sample set based on the distance correlation coefficient. (3) One year is sequentially selected from the data as the test set, and the data of the other years are used as the training set; SVR, MLR and DTR models are trained using the training set data, and the model parameters are calculated with the prediction results of the training set. (4) The cross-validation results are compared with the corresponding true values to find the relative


errors, determine the model weights, and use the weighted average method to ensemble the predictions to obtain the final prediction results.

Data Pre-processing
To ensure that the number of predictor variables is appropriate while using as much weather data as possible, this paper averages sunshine hours, precipitation, and temperature on a weekly basis, and accumulates the 4 °C cumulative temperature. In order to improve data comparability and weaken the impact of different data size on the prediction results, this study uses the Z-Score standardization method to standardize the sample meteorological data with the following formula:

$$x^{*} = (x - \bar{x}) / \sigma$$ (1)

where σ is the standard deviation of the original data and x̄ is the mean value of the original data. Moreover, this paper converted the time of bloom to the number of days in sequence from January 1 of the year; e.g., April 13, 2019 was converted to 103.

Variable Filtering
Too many variables in the modeling process can result in complex models and slow fitting. On the other hand, the inclusion of irrelevant or weakly correlated variables can reduce the interpretability of the model. Therefore, this paper first evaluates the variables using distance correlation coefficients, which have better test efficacy than Pearson correlation coefficients and rank correlation coefficients while also allowing the measurement of nonlinear correlations. Suppose u and v are two variables of any dimension, and let $\{(u_i, v_i),\ i = 1, 2, \ldots, n\}$ be a random sample of the population $(u, v)$. The distance correlation coefficient can be calculated as [12]

$$\widehat{dcorr}(u, v) = \frac{\widehat{dcov}(u, v)}{\sqrt{\widehat{dcov}(u, u)\,\widehat{dcov}(v, v)}}$$ (2)

where $\widehat{dcov}^{2}(u, v) = \hat{S}_1 + \hat{S}_2 - 2\hat{S}_3$, and $\hat{S}_1$, $\hat{S}_2$, $\hat{S}_3$ are respectively

$$\hat{S}_1 = \frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n}\|u_i - u_j\|_{d_u}\,\|v_i - v_j\|_{d_v},\qquad \hat{S}_2 = \frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n}\|u_i - u_j\|_{d_u}\cdot\frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n}\|v_i - v_j\|_{d_v},\qquad \hat{S}_3 = \frac{1}{n^{3}}\sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{l=1}^{n}\|u_i - u_l\|_{d_u}\,\|v_j - v_l\|_{d_v}$$ (3)

The larger $\widehat{dcorr}(u, v)$ is, the stronger the correlation between u and v. The number of predictor variables is determined based on the threshold formula for the number of predictor variables, which can be calculated as [12]

$$n = (N / \log N)^{4/5}$$ (4)

where n is the threshold for the number of variables and N is the total number of variables.
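As an illustration of Eqs. (2)-(4), the following is a minimal sketch (not the authors' code) of computing the distance correlation coefficient between one candidate meteorological factor and the flowering-date series, together with the variable-count threshold; the natural logarithm is assumed for log N, and the inputs are assumed to be scalar series.

```python
import numpy as np

def dist_corr(u, v):
    """Distance correlation of two scalar series, following Eqs. (2)-(3)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    a = np.abs(u[:, None] - u[None, :])      # pairwise |u_i - u_j|
    b = np.abs(v[:, None] - v[None, :])      # pairwise |v_i - v_j|

    def dcov2(a, b):
        n = a.shape[0]
        s1 = (a * b).mean()                  # S1
        s2 = a.mean() * b.mean()             # S2
        s3 = (a @ b).sum() / n ** 3          # S3 (triple sum with a shared index)
        return s1 + s2 - 2 * s3

    return float(np.sqrt(dcov2(a, b) / np.sqrt(dcov2(a, a) * dcov2(b, b))))

def variable_threshold(N):
    return (N / np.log(N)) ** 0.8            # Eq. (4), natural log assumed
```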

Ensemble Prediction
Due to the large variation in the prediction results of a single prediction model for different regions and years, this paper uses the weighted average method to integrate the prediction results of the three base models. This method determines the base model weights based on the performance of the models in the test set, reducing the extent to which models with poor performance in the training set affect the final prediction results. The model weight calculation formula is as follows [13]:

$$w_j = S_j^{-1} \Big/ \sum_{j=1}^{m} S_j^{-1}$$ (5)

where $w_j$ denotes the weight of the j-th model, $S_j$ denotes the absolute value of the average relative error of the j-th model, and m denotes the number of models. In order to get good prediction results in different regions, base models with different prediction principles should be selected. The SVR model maps low-dimensional data to a high-dimensional space to find a regression plane such that all samples are as close as possible to the plane, which is advantageous for problems with small data volumes [14]. The DTR model divides the feature space by analyzing the mapping relationship between object attributes and object values, and each leaf node represents a specific output for each feature-space area [15]. The MLR model has become a common regression analysis method because of its simplicity and intuitiveness. Partial least squares regression (PLSR) projects the predictor variables into a new space to find a linear regression model. Ridge regression (RR) introduces an insensitive loss function on the basis of PLSR. Compared with MLR, the advantage of the above models lies in reducing the influence of collinearity among variables. Therefore, under the premise of using the distance correlation coefficient to analyze the predictor variables, this paper chooses SVR, DTR and MLR as the base models.
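A simplified sketch (not the authors' code) of this ensemble scheme is given below, assuming X is the screened predictor matrix and y the flowering-date day numbers of one region, both as NumPy arrays; the weights of Eq. (5) are computed here from the leave-one-out errors of each base model.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

def loo_ensemble(X, y):
    models = {"SVR": SVR(), "MLR": LinearRegression(), "DTR": DecisionTreeRegressor()}
    preds = {name: np.zeros(len(y)) for name in models}
    # leave-one-out: each year in turn serves as the test set
    for train, test in LeaveOneOut().split(X):
        for name, mdl in models.items():
            preds[name][test] = mdl.fit(X[train], y[train]).predict(X[test])
    # Eq. (5): weight each base model by the inverse of its mean relative error
    err = {n: np.mean(np.abs((p - y) / y)) for n, p in preds.items()}
    w = {n: (1 / e) / sum(1 / v for v in err.values()) for n, e in err.items()}
    ensemble = sum(w[n] * preds[n] for n in models)
    return preds, w, ensemble
```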

3 Results and Analysis

In this paper, the prediction time is uniformly set to April 1 of each year, i.e., the day order is 92 in a leap year and 91 in a non-leap year. This section shows the analysis process with the examples of Shaanxi Luochuan and Shandong Longkou, and evaluates the prediction results for all regions.

3.1 Data Pre-processing Results

From the data in Table 1, it can be seen that the flowering date in the 2 regions fluctuates up and down within a certain range: the flowering date in the Luochuan region fluctuates in a smaller range, with the day number in the order of 102-110, i.e., between April 12 and April 19; the flowering date in the Longkou region fluctuates in a larger range, with the day number in the order of 103-125, i.e., between April 13 and May 5. The paper [16] showed that the eastern sea area warms up faster than the adjacent land area, and the spring phenological period in the sea area changes more significantly than that of the land area. Longkou is located in the Bohai Bay region of China, and its apple bloom period shows an obvious trend of advancement, while the Luochuan region is located in the inland area of China, where the influence of climate warming on the apple flowering date is smaller than that of regional microclimate conditions, and the apple flowering date did not show an obvious trend of change.


range, with the number of days in the order of 103–125, i.e., between April 13 and May 5. The paper [16] showed that the eastern sea area warms up faster than the adjacent land area, and the spring phenological period in the sea area changes significantly than the land area. Longkou is located in the Bohai Bay region of China, and the apple bloom period showed an obvious trend of advancement, while the Luochuan region is located in the inland area of China, and the rate of influence of climate warming on the apple flowering date is less than that of regional microclimate conditions, and the apple flowering date did not show an obvious trend of change. Table 1. Apple flowering date in the 2 regions Region Year Date Region Luochuan 2011 106 Longkou 2012 105 2013 102 2014 104 2015 103 2016 110 2017 109 2018 109 2019 106 2020 110

3.2

Year 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020

Date 117 118 122 103 112 107 107 106 115 108

Analysis of Variable Screening Results

In this paper, the distance correlation coefficients were solved based on Eq. (3) using the standardized weekly average meteorological data and the day numbers of the apple flowering dates in Table 1. The distance correlation coefficients of some variables are shown in Fig. 1, where PRE denotes precipitation, TEM denotes temperature, SSD denotes sunshine hours, and JW denotes the 4 °C cumulative temperature; the suffix number denotes the week counted from January 1, for example, "SSD13" means the average sunshine hours in the 13th week.
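A possible construction of these weekly predictor variables is sketched below (not the authors' code); the daily DataFrame `df`, its column names and the exact definition of the 4 °C accumulation are assumptions made for illustration.

```python
import pandas as pd

def weekly_features(df, year):
    """df: day-by-day data indexed by date, with columns 'SSD', 'TEM', 'PRE'."""
    d = df.loc[f"{year}-01-01":f"{year}-03-31"].copy()
    d["week"] = pd.Series(range(len(d)), index=d.index) // 7 + 1
    feats = {}
    for w, g in d.groupby("week"):
        feats[f"SSD{w}"] = g["SSD"].mean()   # weekly mean sunshine hours
        feats[f"TEM{w}"] = g["TEM"].mean()   # weekly mean temperature
        feats[f"PRE{w}"] = g["PRE"].mean()   # weekly mean precipitation
        # assumed 4 °C accumulation: sum of (T - 4) over days with T > 4, up to this week
        feats[f"JW{w}"] = (d.loc[:g.index[-1], "TEM"] - 4).clip(lower=0).sum()
    return feats
```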

Fig. 1. Distance correlation coefficients: Luochuan (left) and Longkou (right)


Combined with Fig. 1, it can be concluded that temperature has a significant effect on the apple flowering date in Longkou, which is consistent with the results of previous studies [1, 4, 6, 8]. In addition to temperature, sunshine hours also had a significant effect on the apple flowering date in Luochuan; a study [6] showed that the main factor affecting the apple flowering date in Longdong was temperature, followed by sunshine, and for every 1 h increase in sunshine, flowering started 0.07 days earlier [4]. Based on Eq. (4) and the distance correlation coefficients, the seven variables with the greatest correlation were finally selected. The selected variables were consistent with previous studies, so the screening results were consistent with expectations and could be used for modeling and prediction.

3.3 Analysis of Prediction Results

After screening the predictor variables, this section uses SVR, MLR and DTR for training and prediction, and then ensembles the predictions of the 3 models based on Eq. (5). The training is performed using the leave-one-out cross-validation method, i.e., one year is sequentially selected as the test set, and the data of the remaining years are used as the training set. Compared with dividing the training set and test set only once, this method improves data utilization efficiency and better reflects the model prediction effect. The prediction results of the three models in the 2 regions are shown in Table 2.

Table 2. Prediction results for each model

Region    Year  Date  SVR      MLR      DTR  Ensemble Prediction
Luochuan  2011  106   106.309  105.633  103  104.949
          2012  105   104.984  95.795   106  103.669
          2013  102   108.476  116.145  110  110.459
          2014  104   101.560  95.874   106  102.170
          2015  103   106.987  108.124  104  106.081
          2016  110   107.545  105.472  106  106.586
          2017  109   106.266  110.731  110  108.484
          2018  109   107.780  113.315  106  108.135
          2019  106   105.986  127.610  103  108.857
          2020  110   106.287  105.758  109  107.203
Longkou   2011  117   117.135  124.735  118  119.325
          2012  118   122.528  124.395  125  123.557
          2013  122   121.668  112.039  125  119.856
          2014  103   102.312  102.378  108  103.566
          2015  112   109.477  109.104  107  108.840
          2016  107   107.226  107.937  106  107.147
          2017  107   110.425  105.543  125  112.307
          2018  106   107.175  108.628  112  108.606
          2019  115   112.952  112.826  117  113.799
          2020  108   111.349  112.534  103  109.846


Further, the Mean Absolute Deviation (MAD) results of the cross-validation for the 6 areas are shown in Table 3.

Table 3. Mean absolute deviation of each model

Region    SVR    MLR    DTR    Ensemble Prediction
Longkou   2.043  3.762  5.091  2.559
Yiyuan    2.793  6.299  5.909  2.920
Yanzhou   1.249  2.000  1.727  1.229
Shangqiu  4.824  4.560  2.273  3.616
Luochuan  2.337  7.339  2.700  2.620
Average   2.649  4.792  3.540  2.589

From Table 3, it can be concluded that the 3 base models have unstable prediction effects in different regions: the SVR model has higher prediction accuracy than the other 3 methods in 3 regions, but the lowest accuracy in Shangqiu; the MLR model has higher prediction accuracy in 2 regions, but the MAD exceeds 5 days in Yiyuan and Luochuan; the DTR model has higher prediction accuracy in 4 regions, and the best prediction accuracy in Shangqiu, but the MAD exceeds 5 days in Longkou and Yiyuan. The ensemble prediction model is best in Wafangdian and Yanzhou, and the average MAD of the ensemble prediction method in the 6 regions is the smallest. The experimental areas were located in 4 provinces, which indicated that the ensemble prediction model had better prediction stability in time and space compared with the single prediction model, and the MAD of the ensemble prediction model was less than 3.616, which could be applied to the actual apple flowering date prediction.

4 Summary

This paper establishes an ensemble prediction model based on meteorological data to predict the flowering date of apples. The key predictors were screened out by solving distance correlation coefficients between the flowering period and the weekly average meteorological data and cumulative accumulated temperature, and an ensemble prediction model with SVR, MLR and DTR as the base models was established and cross-validated for 6 main production areas. The prediction results showed that the mean absolute deviation of the ensemble prediction model was between 0.736 and 3.616 in the six experimental regions, which shows higher prediction stability compared with the single prediction models. The proposed apple flowering date prediction method can be applied to predict the apple flowering date in other major producing counties.

Acknowledgements. This work is partly supported by the Major Scientific and Technological Innovation Project of Shandong Province (2019JZZY010706) and Shandong Provincial Key Research and Development Program of China (2019GNC106106), Shandong Provincial Natural Science Foundation of China (ZR2019MF026). The authors would like to thank the HPC center of Shandong Agricultural University for providing the computing support.


References 1. Mao, M., Liu, M., Jiang, C., et al.: A study on relationship between temperature and early blooming time of malus. Chin. J. Agrometeorol. 26(02), 123–124+128 (2005) 2. Yang, X., Jiang, G.: Responses of apple trees growth to climate change in typical stations of Longdong loess plateau. Chin. J. Agrometeorol. 31(01), 74–77 (2010) 3. Liu, L., Guo, L., Wang, J., et al.: Phenological responses of apple tree to climate warming in the main apple production areas in northern China. Chin. J. Appl. Ecol. 31(03), 845–852 (2020) 4. Liu, L., Wang, J., Fu, W., et al.: Relationship between apple’s first flower and climate factors in the main producing areas of the northern China. Chin. J. Agrometeorol. 41(01), 51–60 (2020) 5. Liu, H., Dang, X., Du, Q., et al.: Apple at initial flowering stage in Ansai mountainous area: prediction research. Chin. Agric. Sci. Bull. 35(10), 99–103 (2019) 6. Zhang, Y., Zhao, W., Gao, Q., et al.: The effect of climate change on the apple’s initial flowering date in the eastern Gansu province. J. Fruit Sci. 34(04), 427–434 (2017) 7. Li, M., Du, J., Li, X., et al.: Prediction model for beginning of apple flowering period in fruit growing areas of Shaanxi province. Chin. J. Agrometeorol. 30(03), 417–420 (2009) 8. Bai, Q., Wang, J., Qu, Z., et al.: The research on Shaanxi apple florescence prediction model. Chin. Agric. Sci. Bull. 29(19), 164–169 (2013) 9. Ding, X., Wang, B., Jiang, R., et al.: Study on predicting model of flowering period for Red Fuji apple in Yantai city. J. Shaanxi Meteorol. (03), 33–36 (2018) 10. Zang, X.: Research on apple flowering forecast model in Richeng county. Mod. Agric. Sci. Technol. (15), 94–96+101 (2018) 11. Yin, Z.-Q., Xue, T., Xin, B., et al.: Study on the predicting model of the apple initial flowering period at Baishui county. J. Shaanxi Meteorol. (02), 34–36 (2019) 12. Wang, L., Wu, X., Zhao, T., et al.: A scheme for rolling statistical forecasting of PM2.5 concentrations based on distance correlation coefficient and support vector regression. Acta Sci. Circumst. 37(04), 1268–1276 (2017) 13. Li, P., Wang, Q., Wu, J., et al.: Study on integrated prediction based on BP neural network, ARIMA and LS-SVM model—an empirical study of apple production in Shaanxi province from 1978 to 2017. Jiangsu Agric. Sci. 48(04), 294–300 (2020) 14. Wang, N., Xie, M., Deng, J., et al.: Mid-long term temperature-lowering load forecasting based on combination of support vector machine and multiple regression. Power Syst. Prot. Control 44(03), 92–97 (2016) 15. Li, H.: Statistical Learning Methods, 2nd edn., pp. 80–82. Tsinghua University Press, Beijing (2019) 16. Cai, R., Fu, D.: The pace of climate change and its impacts on phenology in eastern China. Chin. J. Atmos. Sci. 42(04), 729–740 (2018)

Apple Leaf Disease Identification Method Based on Improved YoloV5

Yunlu Wang, Fenggang Sun, Zhijun Wang, Zhongchang Zhou, and Peng Lan(&)

College of Information Science and Engineering, Shandong Agricultural University, Tai'an 271000, China
[email protected]

Abstract. In this paper, an improved YoloV5s based identification model, named GSE_YoloV5s, is proposed for apple leaf disease identification, which can efficiently address the problem of high storage and computational resource consumption of the YoloV5s model. Firstly, the GhostBottleneck module is used to replace the original CSPBottleneck module to reduce the parameters and computation of YoloV5s; meanwhile, by adding the channel attention module SE (Squeeze-and-Excitation), the model's detection performance for small target lesions is improved. The experimental results show that the improved GSE_YoloV5s model can reduce the number of parameters and the computational effort by 40% compared with the YoloV5s model, and the AP (Average Precision) achieves 83.4%, which can effectively detect apple leaf diseases.

Keywords: Apple leaf disease · GSE_YoloV5s · GhostBottleneck · SE module

1 Introduction

Timely and accurate identification and prevention of apple diseases has always been one of the hot issues of concern to orchard workers. Traditional apple disease identification mainly relies on personal experience and expert diagnosis, which is highly subjective and lagging, and can hardly meet the requirements of rapid and accurate apple disease identification. Early machine learning methods used image pre-processing, image segmentation and feature extraction techniques for artificial feature selection and design, followed by specific machine learning algorithms for classification of disease features [1-5]. Although machine learning based approaches can achieve high detection accuracy for specific plant disease recognition, the tedious feature selection and design work bring many difficulties to their popularization and application. Deep learning uses artificial neural networks to complete feature vector learning in a self-supervised manner, which can overcome the drawbacks of machine learning methods limited by manual feature selection, and has been used in many applications in the field of disease recognition. For example, Baranwal et al. [6] used GoogLeNet to achieve a recognition rate of 98.54% for 2561 apple diseases. Zhang et al. [7] used ResNet and the K-means algorithm to improve Faster RCNN, which can enhance the


average precision by 2.71% compared with the original Faster RCNN model. Zhong et al. [8] used the DenseNet-121 network to identify six apple leaf diseases with an accuracy of 93.71%. In practice, several kinds of disease spots may appear together on apple leaves, and lesion detection based on deep learning has practical application value in this scenario. However, object detection algorithms are more complex and take up more computational resources and memory space during training. To solve this problem, this paper proposes the GSE_YoloV5s algorithm based on a lightweight module and an attention mechanism. By using the lightweight GhostBottleneck instead of the CSPBottleneck, the parameters and calculation cost of the YoloV5s algorithm are reduced, and an attention mechanism module named SE is added to the backbone network to improve the recognition performance of the model.

2 Materials and Methods

2.1 Dataset Acquisition and Image Augmentation

Five common apple diseases were selected for the study: alternaria leaf spot, apple scab, black rot, cedar rust and mosaic, where apple scab, black rot and cedar rust come from the PlantVillage dataset at 256 × 256 pixels. The data of alternaria leaf spot and mosaic disease were collected in an apple orchard and photographed using mobile phones at 1080 × 1440 pixels, then adjusted to 416 × 416 pixels for training purposes. Through the LabelImg image processing tool, the location information and category information of the lesions in the apple disease images were written into the corresponding xml files. After that, the data were randomly divided at the ratio of training set : test set = 8 : 2 to form a data set in VOC format. To enhance the model's generalization ability, image data augmentation was performed on the data set: the 2013 original disease images of the training set were processed with rotation, random brightness enhancement, random chromaticity enhancement, random contrast enhancement and sharpening, and 16104 apple disease images were obtained as the training set after augmentation.
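A rough torchvision-based sketch of this kind of augmentation pipeline is shown below (not the authors' code); the parameter ranges are illustrative, and for detection data the bounding boxes would have to be transformed together with the images, which is omitted here.

```python
import torchvision.transforms as T

augment = T.Compose([
    T.RandomRotation(degrees=15),                                            # rotation
    T.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.05),   # brightness/chroma/contrast
    T.RandomAdjustSharpness(sharpness_factor=2, p=0.5),                      # sharpening
    T.Resize((416, 416)),                                                    # training resolution
])
```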

2.2 An Improved YoloV5s Object Detection Framework

The YoloV5 model has fast inference speed and high accuracy, which makes it suitable for the diagnosis of apple leaf diseases. Considering the need for a lightweight model, the YoloV5s variant is used to perform this task. YoloV5s consists of three main components: backbone, neck and prediction. The backbone is the functional component for image feature extraction and is mainly composed of CSPBottleneck modules, which divide the feature map of the base layer into two parts and then fuse the two parts through a cross-stage hierarchy. The neck uses FPN (Feature Pyramid Network) and PAN (Path Aggregation Network) structures to fuse features at different scales. Prediction, the detection subnetwork, is used to generate anchors to determine the target location and output the class probability of the target object. It consists of three detection branches, each of which is used to detect targets at a different scale.


Fig. 1. Improved YoloV5s structure diagram

In this paper, a lighter GhostBottleneck is used to replace the CSPBottleneck of the original YoloV5s model in order to reduce the model parameters and computational effort. Also, to compensate for the performance loss caused by the reduced model parameters, the attention module SE is added to the backbone to improve the detection accuracy of the model. The structure of the improved YoloV5s is shown in Fig. 1. GhostBottleneck is the component module of GhostNet [9] and is formed by stacking two Ghost modules. Among them, one Ghost module is used to expand the feature dimension and the number of channels; the other is used to lower the feature dimension to make it consistent with the directly connected (shortcut) feature. In the Ghost module, the convolved original feature maps generate multiple phantom feature maps through a series of simple linear transformations. These phantom feature maps, together with the original feature maps, are used as the output of the Ghost module, which reduces the parameters and computation of the model while keeping the necessary redundant features. The structure is shown in Fig. 2.

Fig. 2. Structure of GhostBottleneck
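For illustration, a compact PyTorch sketch of the Ghost module idea used inside GhostBottleneck is given below (after Han et al. [9]): a primary pointwise convolution produces part of the channels and a cheap depthwise convolution generates the remaining "phantom" feature maps. The channel ratio and kernel size are assumed hyper-parameters, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, in_ch, out_ch, ratio=2, dw_kernel=3):
        super().__init__()
        primary_ch = out_ch // ratio           # channels from the ordinary convolution
        cheap_ch = out_ch - primary_ch         # "phantom" channels from the cheap operation
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, 1, bias=False),
            nn.BatchNorm2d(primary_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(            # depthwise conv as the cheap linear transformation
            nn.Conv2d(primary_ch, cheap_ch, dw_kernel, padding=dw_kernel // 2,
                      groups=primary_ch, bias=False),
            nn.BatchNorm2d(cheap_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)   # original + phantom feature maps
```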


The SE module is the main component of the convolutional neural network SENet [10]. Its main function is to enhance the features that are beneficial to the model's performance and suppress the features that are ineffective, by learning the importance of each feature channel. The SE module consists of three main components: Squeeze, Excitation and Scale. In Squeeze, the h × w × c feature map is transformed into 1 × 1 × c by adaptive global pooling. Excitation obtains the importance of the different channels through fully connected layers. Scale performs a weighting operation and multiplies the sigmoid activation value of each channel onto the corresponding channel of the original feature map, completing the reallocation of the weights of the feature map. The structure is shown in Fig. 3.

Fig. 3. SE module structure
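A minimal PyTorch sketch of such an SE block (Squeeze, Excitation, Scale) is given below; the reduction ratio is an assumed hyper-parameter.

```python
import torch.nn as nn

class SELayer(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)   # h x w x c  ->  1 x 1 x c
        self.excitation = nn.Sequential(         # channel importance via fully connected layers
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.excitation(self.squeeze(x).view(b, c)).view(b, c, 1, 1)
        return x * w                              # Scale: re-weight each channel
```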

3 Results and Analysis

3.1 Experimental Environment and Parameter Settings

This study was conducted under the Ubuntu 18.04 system and the Pytorch deep learning framework. The computer processor is an Intel(R) Core(TM) i7-9750H, 2.60 GHz, with an NVIDIA GeForce RTX 2060 graphics card with 6 GB of video memory. To increase the training speed of the network, the GPU is used for acceleration; the version of cuda is 11.1.0 and the version of cudnn is 8.1.0. In the experiments, the SGD optimizer is used to optimize the neural network and speed up the training process. The initial learning rate of SGD is set to 0.01, the momentum parameter is set to 0.9, and the batch size is set to 8. During training, a warm-up strategy is used: a small learning rate is used in the first three epochs of training, and then the initial learning rate is restored. Under the data enhancement strategy, the model can converge quickly. 100 epochs were conducted in each round of the experiment, and three groups of repeated experiments were conducted.
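A sketch of these optimizer settings and the warm-up schedule in PyTorch is shown below; the network is a placeholder, and the linear warm-up form is an assumption, since the paper only states that a small learning rate is used for the first three epochs.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 16, 3)   # placeholder for the GSE_YoloV5s network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

def lr_for_epoch(epoch, base_lr=0.01, warmup_epochs=3):
    # small, increasing lr for the first 3 epochs, then the initial lr
    return base_lr * (epoch + 1) / warmup_epochs if epoch < warmup_epochs else base_lr

for epoch in range(100):
    for group in optimizer.param_groups:
        group["lr"] = lr_for_epoch(epoch)
    # ... one pass over the augmented training set with batch size 8 ...
```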

3.2 Model Performance Evaluation Metrics

The model uses AP (Average Precision), Parameters and GFLOPs (Giga Floating-point Operations Per Second) as evaluation metrics to test the model performance. AP is the integral of the PR curve plotted from Precision and Recall, reflecting the model's overall detection performance, and can be calculated as


$$P = \frac{TP}{TP + FP}$$ (1)

$$R = \frac{TP}{TP + FN}$$ (2)

$$AP = \int_{0}^{1} P(R)\, dR$$ (3)

where TP (True Positive) denotes positive samples that are correctly detected, FP (False Positive) denotes negative samples that are incorrectly detected as positive, and FN (False Negative) denotes positive samples that are missed. Parameters and GFLOPs are used to measure the complexity of the model: Parameters indicates the scale of the weight parameters contained in the model, that is, the size of the storage space occupied by the model file in bytes; GFLOPs indicates the amount of computation required when using the model, in units of 10⁹ floating-point operations per second.
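A small numerical sketch of Eqs. (1)-(3) is given below: precision and recall from detection counts, and AP approximated as the area under sampled points of the precision-recall curve; the counts and PR points are made-up illustration values.

```python
import numpy as np

def precision_recall(tp, fp, fn):
    return tp / (tp + fp), tp / (tp + fn)            # Eq. (1) and Eq. (2)

def average_precision(recalls, precisions):
    # Eq. (3): trapezoidal integration of precision over recall
    r = np.asarray(recalls, dtype=float)
    p = np.asarray(precisions, dtype=float)
    order = np.argsort(r)
    r, p = r[order], p[order]
    return float(np.sum(np.diff(r) * (p[1:] + p[:-1]) / 2))

prec, rec = precision_recall(tp=80, fp=10, fn=20)    # -> 0.889, 0.800
ap = average_precision([0.2, 0.5, 0.8], [0.95, 0.90, 0.70])
```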

3.3 Comparison of Model Improvement Effects

Under the parameter settings in Sect. 3.1, the YoloV5s algorithm using the GhostBottleneck module and the channel attention mechanism is evaluated to verify the improvement effect. The experimental models for comparison in this paper are: YoloV5s, G_YoloV5s (with the GhostBottleneck module) and GSE_YoloV5s (with the GhostBottleneck and the SE module). Table 1 shows the changes in model complexity and average precision under three sets of replicate trials, where AP is the mean value of the model AP over the 100 epochs. After using the GhostBottleneck instead of the CSPBottleneck, the parameters and computation of the model were reduced by about 40%, but there was a slight decrease in AP. This is because, although the GhostBottleneck module reduces the resources taken up by feature redundancy through a series of linear transformations, the reduction in the number of parameters still has an adverse effect on the generalization ability of the model. With the addition of the SE module, the model detection accuracy is improved with only a small increase in parameters and computation, indicating that the channel attention mechanism is effective in improving detection accuracy. Compared with YoloV5s, the resulting GSE_YoloV5s has a reduction of 2938032 bytes in parameters, a reduction of 7.2 × 10⁹/s in GFLOPs, and a 0.5% increase in AP.

Table 1. Comparison of the performance of different models

Model        Parameters (bytes)  GFLOPs (10⁹/s)  AP (Group 1)  AP (Group 2)  AP (Group 3)  AP (Average)
YoloV5s      7071633             16.4            0.829         0.829         0.831         0.829
G_YoloV5s    3994337             9.1             0.825         0.823         0.829         0.826
GSE_YoloV5s  4133601             9.2             0.836         0.832         0.835         0.834


Among the 100 epochs of each experiment, the model with the best AP value was taken as the result of that experiment. The average AP value for each disease category over the three groups was taken to represent the AP of that disease category under each model, and the results are shown in Table 2. Overall, the addition of the GhostBottleneck module did not have an obvious negative effect on the detection ability of the model. The improved GSE_YoloV5s model was more accurate for alternaria leaf spot, black rot, cedar rust and mosaic, but less accurate for apple scab, which was caused by the unclear contour of the spot area in apple scab.

Table 2. AP values by category

Category              YoloV5s  G_YoloV5s  GSE_YoloV5s
All                   0.865    0.863      0.866
Alternaria leaf spot  0.900    0.889      0.897
Apple scab            0.742    0.738      0.731
Black rot             0.911    0.907      0.917
Cedar rust            0.845    0.847      0.850
Mosaic                0.928    0.931      0.938

4 Conclusion

In this paper, the YoloV5s model is improved to reduce the computational resources required in the apple disease identification task. The parameters and computational effort of the model are reduced by using the lightweight module GhostBottleneck, and the SE module is added to enhance its ability to identify disease spots. Experiments show that the improved GSE_YoloV5s model reduces the parameters and computational effort by about 40% compared with YoloV5s, and the recognition accuracy is slightly improved, which demonstrates that the proposed method of this paper is effective.

Acknowledgements. This work is partly supported by the Major Scientific and Technological Innovation Project of Shandong Province (2019JZZY010706) and Shandong Provincial Key Research and Development Program of China (2019GNC106106), Shandong Provincial Natural Science Foundation of China (ZR2019MF026). The authors would like to thank the HPC center of Shandong Agricultural University for providing the computing support.

References 1. Loti, N., Noor, M., Chang, S.: Integrated analysis of machine learning and deep learning in chili pest and disease identification. J. Sci. Food Agric. 101, 3582–3594 (2020) 2. Sammany, M., Zagloul, K.: Support vector machine versus an optimized neural network for diagnosing plant disease. In: Proceeding of 2nd International Computer Engineering Conference, pp. 25–31 (2006)


3. Chuanlei, Z., Shanwen, Z., Jucheng, Y., et al.: Apple leaf disease identification using genetic algorithm and correlation based feature selection method. Int. J. Agric. Biol. Eng. 10(2), 74– 83 (2017) 4. Chakraborty, S., Paul, S., Rahat-uz-Zaman, M.: Prediction of apple leaf diseases using multiclass support vector machine. In: 2021 2nd International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST), pp. 147–151. IEEE (2021) 5. Islam, M., Dinh, A., Wahid, K., et al.: Detection of potato diseases using image segmentation and multiclass support vector machine. In: 2017 IEEE 30th Canadian Conference on Electrical and Computer Engineering (CCECE), pp. 1–4. IEEE (2017) 6. Baranwal, S., Khandelwal, S., Arora, A.: Deep learning convolutional neural network for apple leaves disease detection. In: Proceedings of International Conference on Sustainable Computing in Science, Technology and Management (SUSCOM), vol. 1, pp. 260–267. Amity University Rajasthan, Jaipur (2019) 7. Zhang, Y., Song, C., Zhang, D.: Deep learning-based object detection improvement for tomato disease. IEEE Access 8, 56607–56614 (2020) 8. Zhong, Y., Zhao, M.: Research on deep learning in apple leaf disease recognition. Comput. Electron. Agric. 168, 105146 (2020) 9. Han, K., Wang, Y., Tian, Q., et al.: GhostNet: more features from cheap operations. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1580–1589 (2020) 10. Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7132–7141 (2018) 11. Yan, B., Fan, P., Lei, X., et al.: A real-time apple targets detection method for picking robot based on improved YOLOv5. Remote Sens. 13, 1619 (2021) 12. Barbedo, J.G.A.: Plant disease identification from individual lesions and spots using deep learning. Biosyst. Eng. 180, 96–107 (2019) 13. Cheng, X., Zhang, Y., Chen, Y., et al.: Pest identification via deep residual learning in complex background. Comput. Electron. Agric. 141, 351–356 (2017) 14. Lu, Y., Yi, S., Zeng, N., et al.: Identification of rice diseases using deep convolutional neural networks. Neurocomputing 267, 378–384 (2017) 15. Geetharamani, G., Pandian, A.: Identification of plant leaf diseases using a nine-layer deep convolutional neural network. Comput. Electr. Eng. 76, 323–338 (2019)

Analysis of Apple Replant Disease Based on Three Machine Learning Methods

Guoen Chen1, Li Xiang2, Zhiquan Mao2, Fenggang Sun1, and Peng Lan1(&)

1 College of Information Science and Engineering, Shandong Agricultural University, Tai'an 271000, China
[email protected], [email protected]
2 State Key Laboratory of Crop Biology, College of Horticultural Science and Engineering, Shandong Agricultural University, Tai'an 271000, China

Abstract. Apple replant disease (ARD) has become the limiting factor of the sustainable development of the apple industry in China. The dry weight inhibition rate of apple tree rootstocks is one of the important indicators for judging the degree of apple replant disease. Calculating the dry weight inhibition rate requires measuring the dry weight of planted fruit tree rootstocks before and after soil sterilization, and the specific operation is rather troublesome. In response to this problem, this article proposes a method of using machine learning to predict the dry weight inhibition rate of fruit tree rootstocks. Using the existing measured data of the chemical composition, microbial community, and apple tree rhizosphere exudates in the orchard soil, the data are first preprocessed, including the division of the data set and data normalization; then the feature vectors are screened by the Distance Correlation Coefficient; and finally, three models, namely Multiple Linear Regression, Support Vector Regression and Random Forest, are used to predict the dry weight suppression rate respectively. The study found that the three machine learning methods are well suited to the analysis of the experimental data. All three methods obtained accurate analysis and prediction results, with R2_score above 0.8. This study is of great significance for evaluating the severity of apple replant disease.

Keywords: Apple replant disease · Machine learning · Distance correlation coefficient

1 Introduction

1.1 Apple Replant Disease Background

China is one of the most important countries in the production and consumption of apples, and the area and total yield of apples rank first in the world. There are 25 provinces in China that produce apples, and the main producing areas are concentrated in the Bohai Bay, the Northwest Loess Plateau, the Old Course of the Yellow River, the Southwest Highlands and Xinjiang [1]. The total annual apple yield in China has reached 42 million tons. There are many factors that affect apple yield, such as meteorological disasters, pests and diseases, and planting area.


In recent years, nearly 67,000 to 100,000 hm² of apple orchards in Shandong Province have been facing renewal. As there is not enough stubble land available for new orchards, apple replant disease will become the main limiting factor restricting the renewal of apple orchards. Apple replant disease seriously hinders the sustainable development of the apple industry, so research on apple replant disease and its prevention and control measures is becoming more and more important [2].

1.2 Influencing Factors of Apple Replant Disease

The Impact of Soil Physical and Chemical Properties. Soil is an important source of plant mineral nutrients. Due to the nutrient absorption preferences of plants, there may be an imbalance of nutrient elements in plots where the same crop is planted for a long time, which will cause apple replant disease [3].

The Influence of Root Exudates. Root exudates are an important medium for plant-soil-microbe communication. At present, researchers believe that most components in root exudates have autotoxicity [4], and their autotoxic effects on plants are closely related to their concentration. The mass accumulation of autotoxic substances should be the key to apple replant disease.

Effects of Rhizosphere Microorganisms. In addition to abiotic factors such as the soil environment and self-toxic substances, the imbalance of the microbial community [5] caused by environmental changes can also lead to apple replant disease.

1.3 Apple Replant Disease Prevention and Control

There are many ways to improve soil quality, such as intercropping wheat, green onions and other crops between rows in the first three years after planting apple trees. By enriching the variety of crops, using their rhizosphere exudates can effectively improve soil quality; or mix special bacterial fertilizer in the tree hole soil before planting to improve the microbial structure of the soil; or dig a planting trench before winter to remove waste and residual roots. These methods can alleviate apple replant disease to a certain extent, reduce the pressure on the management of old orchards, and create income-increasing benefits [2]. After the occurrence of apple replant disease in apple trees, the main manifestations are poor root growth and development, short plants, rosette-like leaves, small leaves, short shoots, late fruit, poor quality, and high seedling mortality. Once these phenomena appear, they will cause great losses. In order to detect, prevent and control apple replant disease early, the study will use machine learning to analyze the status of apple replant disease in the orchard. The dry weight inhibition rate is the inhibition of plant growth that occurs when apples are replanted in the soil of the old orchard. It can be regarded as the degree of inhibition of the growth of replanted plants, which is a good indicator for predicting continuous cropping obstacles.

$$R(\%) = \frac{H' - H}{H'}$$ (1)

where H' is the dry weight of apple rootstock planted in sterilized soil, and H is the dry weight of apple rootstock planted in untreated soil. The higher the dry weight inhibition rate, the more serious the apple replant disease. However, there are certain difficulties in manually calculating the dry weight inhibition rate: first, it takes a long time, more than three months; second, it is inconvenient and requires a large amount of soil to be collected from the old orchard. In order to obtain the dry weight inhibition rate of old apple trees more conveniently and quickly, this study used machine learning methods to predict the dry weight inhibition rate from the chemical composition, microbial community, rhizosphere exudates and other data of the orchard soil, and then to evaluate the degree of apple replant disease.

2 Data and Methods

2.1 Data Set and Its Division

Original Data Set. The data objects of the study are the soil composition, microbial content, tree condition and other data of apple orchards in 10 main apple producing areas. The data include Organic Content, Mortierella, Pseudeurotium, Guehomyces, Fusarium, Penicillium, Cryptococcus, Humicola, Ilyonectria, Kernia, Oidiodendron, Tetracladium, Hydrolyzed Nitrogen, Available Phosphorus, Available Potassium, pH, Powder, Sand, Clay and Phlorizin. The main apple producing areas are represented by MP (Muping), PL (Penglai), QX (Qixia), LK (Longkou), ZY (Zhaoyuan), LZ (Laizhou), DL (Dalian), YK (Yingkou), HLD (Huludao), and QHD (Qinhuangdao), and the apple orchards in the main producing areas are represented by different numbers; for example, MP1 represents the No. 1 apple orchard in Muping.

Data Set Division. The apple orchards are divided into 57 areas in total. The data of at most 1 area of each apple orchard are taken as the test set, and the others form the training set. The final training set contains 50 samples and the test set contains 7 samples.

2.2 Data Preprocessing

Data Normalization. Data normalization is a basic task of data analysis. After the original data are normalized, all indices are on a comparable scale, which is convenient for comprehensive comparison and evaluation and reduces the influence of dimensions on the results of data analysis. The study uses Z-Score normalization, which uses the mean and standard deviation of the original data to standardize the data. The normalized data conform to the normal distribution,

$$x^{*} = \frac{x - \bar{x}}{\sigma}$$ (2)

where x̄ and σ are the mean value and the standard deviation of the original data respectively.

Feature Vector Selection. If all the variables in the original data are used for training, it may cause information redundancy and reduce the interpretability of the model. Therefore, it is necessary to filter the variables. When selecting characteristic variables, the Pearson correlation coefficient is generally used to judge the correlation of variables, but it only describes the degree of linear correlation. For nonlinearly correlated data, the Pearson correlation coefficient is obviously not suitable. The Distance Correlation Coefficient (DC) [6] overcomes this weakness to a large extent: it can judge both linearly and nonlinearly related variables. The DC is used to study the independence of two variables p and q, denoted as $\widehat{dcorr}(p, q)$. When $\widehat{dcorr}(p, q) = 0$, p and q are independent of each other; the greater $\widehat{dcorr}(p, q)$, the stronger the correlation between p and q. The DC of p and q is

$$\widehat{dcorr}(p, q) = \frac{\widehat{dcov}(p, q)}{\sqrt{\widehat{dcov}(p, p)\,\widehat{dcov}(q, q)}}$$ (3)

In formula (3), $\widehat{dcov}^{2}(p, q) = \hat{D}_1 + \hat{D}_2 - 2\hat{D}_3$, where $\hat{D}_1$, $\hat{D}_2$ and $\hat{D}_3$ are:

$$\hat{D}_1 = \frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n}\|p_i - p_j\|_{d_p}\,\|q_i - q_j\|_{d_q}$$ (4)

$$\hat{D}_2 = \frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n}\|p_i - p_j\|_{d_p}\cdot\frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n}\|q_i - q_j\|_{d_q}$$ (5)

$$\hat{D}_3 = \frac{1}{n^{3}}\sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{l=1}^{n}\|p_i - p_j\|_{d_p}\,\|q_j - q_l\|_{d_q}$$ (6)

$\widehat{dcov}(p, p)$ and $\widehat{dcov}(q, q)$ are calculated in the same way.

Table 1. The 7 variables with the largest distance correlation coefficients

Variate               DC
Fusarium              0.665214
Mortierella           0.557098
Organic content       0.556954
Available Phosphorus  0.535400
pH                    0.360331
Sand                  0.339499
Phlorizin             0.325889

1257

The DC of the 7 largest variables measured in this study are shown in Table 1. According to the DC, two sets of data are screened for experiments, which are 5 and 7 feature vector data sets. 5 Feature variable data sets take the first 5 feature vectors in Table 1. 2.3

Model Establishment

The data set in the study belongs to the category of small data sets. When using machine learning methods to make predictions, it is necessary to pay special attention to the overfitting phenomenon of the prediction results. It is necessary to carry out reasonable preprocessing of the data set in the early stage, remove outliers, and choose simple and effective model methods as much as possible. Therefore, the study established three models suitable for prediction of small data sets, namely Multiple Linear Regression [7, 8], Support Vector Regression [9–11] and Random Forest [12, 13], and conducted comparative experiments. Multiple Linear Regression Model. When there are multiple independent variables in the data, a multiple linear regression model can be used. the general formula of the multiple linear regression model is as follows: y ¼ b0 þ b1 x1 þ . . . þ bk xk þ n;

ð7Þ

where y is a random dependent variable, x1 ,…, xk are non-random independent variables, b0 ,…, bk are regression coefficients, and n is a random error term. The 5-feature vector multivariate linear model and the 7-vector feature multivariate linear model established in this study are as follows: y =  10.22721x1  9.13795x2 þ 13.80544x3  5.42141x4 þ 3.75304x5 þ 87.58631; ð8Þ y =  10.16587x1  9.13586x2 þ 13.80888x3  5.45219x4 þ 3.84119x5  0.21601x6 þ 0.18551x7 þ 87.59103,

ð9Þ

where y is the inhibition rate of apple tree dry weight, $x_1$ is the organic matter content in the soil, $x_2$ is the Mortierella content, $x_3$ is the Fusarium content, $x_4$ is the available phosphorus content, $x_5$ is the soil pH value, $x_6$ is the sand content, and $x_7$ is the phlorizin content.

Support Vector Regression. Support Vector Regression (SVR) is a branch of the Support Vector Machine (SVM), which itself was proposed for binary classification problems; in SVR, all sample points ultimately belong to a single class. SVR maps the sample points from the original space to a high-dimensional space and finds a hyperplane in that space such that the deviation of all sample points from the hyperplane is minimal. The choice of kernel function is very important for SVR. In this study, four kernel functions (linear, poly, rbf, and sigmoid) were compared and tested, and the rbf kernel function was finally chosen as the kernel function for model training. The hyperplane model of SVR in the feature space can be expressed as:

$$f(x) = \omega^{T}\varphi(x) + b$$ (10)

In formula (10), x is the input sample and $\varphi(x)$ is the feature vector after mapping x. By introducing the kernel function, Lagrangian multipliers and derivation, formula (10) expands as follows:

$$f(x) = \omega^{T}\varphi(x) + b = \sum_{i=1}^{n}\alpha_i y_i \varphi(x_i)^{T}\varphi(x) + b = \sum_{i=1}^{n}\alpha_i y_i k(x_i, x) + b$$ (11)

where $k(x_i, x)$ is the kernel function.

Random Forest. Random Forest is a kind of ensemble learning and can be used for almost any kind of prediction problem. Random Forest has the advantages of high accuracy, little overfitting, and fast training speed. Random Forest does not use all the data in the data set when building the model, but samples randomly, and the data that are not selected become the out-of-bag data set. In this study, Random Forest modeling showed a higher goodness of fit, and the model with 7 eigenvectors is better than the model with 5 eigenvectors; that is, in this experiment, more input variables gave a better fitting effect for the prediction results.
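A minimal sketch (not the authors' code) of fitting the three models with scikit-learn and evaluating them with R2_score is given below; the arrays are random placeholders standing in for the 50-sample training set and 7-sample test set described in Sect. 2.1, and the Random Forest size is an assumed setting.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(50, 7)), rng.normal(size=50)   # placeholder data
X_test, y_test = rng.normal(size=(7, 7)), rng.normal(size=7)       # placeholder data

models = {
    "MLR": LinearRegression(),
    "SVR": SVR(kernel="rbf"),                                      # rbf kernel, as in the paper
    "RF": RandomForestRegressor(n_estimators=100, random_state=0),
}
for name, mdl in models.items():
    mdl.fit(X_train, y_train)
    print(name, r2_score(y_test, mdl.predict(X_test)))
```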

3 Results and Analysis

3.1 Experimental Results

This study first screened out 7 eigenvectors with greater correlation using DC, and then conducted two sets of experiments with Multiple Linear Regression models, Support Vector Regression models, and Random Forest models. One set of experiments selected 5 feature vectors, and one set of experiments selected 7 feature vectors, analyzed and predicted the dry weight inhibition rate of apple tree rootstocks. The fitting diagram of the prediction results of the two sets of experiments is shown below.

Fig. 1. Fitting results of three models under 5 eigenvectors (left) and 7 eigenvectors (right)


It can be seen from the fitting curves of the two sets of experiments that the three models all produce good prediction effects, and there is no over-fitting phenomenon. It can also be seen from Fig. 1 that the prediction results for the PL6 and DL9 areas have large errors. It has been verified that these two areas have deviations in the original data collection process: the collected value of organic matter content in the PL6 area is too high, and the collected value of sand content in the DL9 area is too high. However, this has little effect on the overall prediction, and the prediction results still meet the expected standard. There are many methods to evaluate the model, such as MSE, RMSE and R2_score. In this study, R2_score is selected to evaluate the model. Compared with MSE and RMSE, R2_score is not sensitive to the scale of the data set, so it is more convincing for measuring the effect of the model. The R2_score formula is shown in formula (12):

$$R^{2} = 1 - \frac{\sum_{i}(\hat{y}_i - y_i)^{2}}{\sum_{i}(\bar{y} - y_i)^{2}}$$ (12)

Table 2. The R2_score of the three models

Model                       5 eigenvectors     7 eigenvectors
Multiple Linear Regression  0.846232050255056  0.84746504853645
Support Vector Regression   0.816700224976132  0.843176450791243
Random Forest               0.840636307219591  0.856651810030612

where ŷ is the predicted value, y is the actual value, and ȳ is the mean of the actual values. The value of R2_score should be between 0 and 1; the closer to 1, the better the effect of the model. The R2_score comparison of the three models in this study is shown in Table 2. From the R2_scores of the three models in Table 2, it can be seen that the R2_score of each of the three models is above 0.8, and the models trained with 7 feature vectors are better than those trained with 5 feature vectors. The R2_score of the random forest model with 7 feature vectors reached 0.856, which is the best among the three models.

4 Conclusion

The apple replant disease of apple trees seriously hinders the sustainable development of apple production, and it is particularly important to prevent apple replant disease in old orchards. The dry weight inhibition rate of planted apple rootstocks is one of the important indicators for judging the degree of apple replant disease. This study analyzed and predicted the dry weight inhibition rate of apple rootstocks through three


machine learning methods: Multiple Linear Regression, Support Vector Regression, and Random Forest. The results showed that the machine learning method can accurately predict the dry weight inhibition rate. This has played an important reference role for the evaluation of apple replant disease. Acknowledgements. This work is partly supported by the Major Scientific and Technological Innovation Project of Shandong Province (2019JZZY010706) and Shandong Provincial Key Research and Development Program of China (2019GNC106106), Shandong Provincial Natural Science Foundation of China (ZR2019MF026). The authors would like to thank the HPC center of Shandong Agricultural University for providing the computing support.

References 1. Du, X., Gao, J., Wang, Q., Cai, H., Li, C., Wang, S., Yang, T.: Research progress in apple cropping obstacles. South China Fruit Tree 50(02), 191–195+204 (2021) 2. Yin, C., et al.: Progress in the study of continuous apple cropping obstacles. Acta Hortic. 44 (11), 2215–2230 (2017) 3. Zhang, Y., Liu, F., Wang, H.: The formation mechanism of fruit tree continuous cropping obstacles and solutions. China Fruit Tree (06), 6–11 (2020) 4. Chen, L., Dong, K., Yang, Z., Dong, Y., Tang, L., Zheng, Y.: The allelopathy self-toxic effect in continuous cropping disorder and the mitigation mechanism of intercropping. Chin. Agric. Sci. Bull. 33(08), 91–98 (2017) 5. Wang, X., Jiang, W., Yao, Y., Yin, C., Chen, X., Mao, Z.: Research progress on soil microorganisms that hinder continuous apple cropping. Chin. J. Hortic. 47(11), 2223–2237 (2020) 6. Wang, L., et al.: PM_(2.5) concentration rolling statistical forecasting scheme based on distance correlation coefficient and support vector machine regression. J. Environ. Sci. 37 (04), 1268–1276 (2017) 7. Leng, J., Gao, X., Zhu, J.: Application of multiple linear regression statistical forecasting model. Stat. Decis. (07), 82–85 (2016) 8. Lu, L.: Forecast of Jingdong revenue based on exponential smoothing and multiple linear regression. China Storage Transp. (06), 94–95 (2021) 9. Li, H.: Support vector machine regression algorithm and application research. South China University of Technology (2005) 10. Wang, D., Fang, T., Tang, Y., Ma, Y.: A review of support vector machine regression theory and control. Pattern Recogn. Artif. Intell. 16(02), 192–197 (2003) 11. Zhao, G., Tu, X., Wang, T., Xie, Y., Mo, X.: Drought prediction based on artificial neural network and support vector regression. People’s Pearl River 42(04), 1–9 (2021) 12. Cao, Z.: Research on optimization of random forest algorithm. Capital University of Economics and Business (2014) 13. Zhang, L., Wang, L., Zhang, X., Liu, S., Sun, P., Wang, T.: The basic idea of random forest algorithm and its application in ecology: Taking Yunnan pine distribution simulation as an example. Acta Ecol. Sin. 34(03), 650–659 (2014)

Improved Federated Average Algorithm for Non-independent and Identically Distributed Data

Zhongchang Zhou, Fenggang Sun, Peng Lan(&), and Zhijun Wang

College of Information Science and Engineering, Shandong Agricultural University, Tai'an 271000, China
[email protected]

Abstract. Federated learning is a distributed machine learning model that can protect data privacy, where a large number of clients and servers participate in the joint training procedure. However, the problem of heterogeneous data has been a potential obstacle for federated learning: traditional federated averaging algorithms suffer from low accuracy and slow convergence on non-IID (not independent and identically distributed) data. To this end, this paper proposes an improved FedAvg based algorithm called FedAvg-Z. We introduce penalty coefficients and an adaptive parameter averaging method in our algorithm, which adjusts the influence of clients on the global model so as to increase the convergence and stability of the model. We first demonstrate the reliability of our method on the dataset and then compare it experimentally with the traditional algorithm. The experiments show that the algorithm has higher accuracy and convergence speed in the non-IID case.

Keywords: Federated learning · FedAvg-Z · Non-IID · Distributed machine learning

1 Introduction

Federated Learning is a machine learning paradigm in which multiple clients learn a model under the arrangement of a central server [1]. Like existing machine learning methods, the first thing that federated learning has to face is the data problem. To deal with data heterogeneity, allowing local updates and low participation is a popular approach in federated learning. But unfortunately, if each edge client has a unique data distribution, the quality of model training will be reduced [2]. The FedAvg (Federated Averaging) [1] algorithm is the first algorithm proposed in federated learning for solving non-IID problems. In a subsequent study, Zhao et al. improved the training on non-IID data by creating a subset of data that was shared across all clients; with only 5% of the data shared globally, the accuracy was improved by 30% [3]. In the case of heterogeneous data distribution, Yao et al. proposed a local sequential training strategy, in which a small dataset is built on the central server to calculate the importance weights and continuously integrate them into the global model [4]. Meanwhile, Ruan et al. investigated convergence. They forced the



clients to build the final model by establishing convergence bounds [5]. Kopparapu et al. proposed FedCD, which dynamically groups clients with similar data by cloning and deleting models, and achieves higher accuracy and faster convergence with less computation [6]. Kopparapu et al. later proposed another approach called FedFMC: the clients are split to obtain different global models, which are then merged into a single global model, greatly improving earlier non-IID data processing methods [7]. Despite the excellent work above, the effect of each bad client on the global model is irreversible under non-IID data, and the traditional algorithm still needs improvement in terms of accuracy and convergence speed. In this paper, we add a penalty factor and adjust it during weight averaging to reduce the impact of bad clients on the global model in time. Depending on the size of the penalty factor, we can dynamically adjust the algorithm that we use. We conducted experiments on the MNIST [8] dataset and achieved better results in solving the problem of non-IID data distribution. If the number of training epochs is sufficient, the accuracy of the FedAvg-Z algorithm is generally about 95%, and the accuracy can be improved by 3% in the best case. Meanwhile, the proposed method improves convergence, reduces the number of communication rounds of federated training, and improves the overall efficiency.

2 Method

2.1 Improved FedAvg Algorithm

The federated averaging algorithm FedAvg evolved from SGD [9, 10]. The method first initializes the weights of the model on the global server and then distributes the weights to the participants for local training of the model. The whole process stops after a certain number of iterations [11].


Fig. 1. Federated learning overall framework

The overall structure of federated learning is shown in Fig. 1. Each participant trains the same model locally with its own data. After local training is completed, the parameters and other data are uploaded to the central server. The


central server returns the parameters to each client by the FedAvg-Z algorithm until the model converges or reaches the set number of iterations, ending the training. The traditional algorithm does not consider the effect of clients with bad results on the model when performing aggregation, so we improve it here. If the number of data samples of the $k$-th participant is $n_k$ and the model weight is $x$, then the total number of samples over all participants is $n = \sum_{k=1}^{K} n_k$. The objective function to be optimized is $\min_{x \in \mathbb{R}^d} f(x)$, which is defined as:

$$f(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x) \qquad (1)$$

where $f_i(x)$ is the prediction loss of an individual sample under model parameter $x$. However, this objective may drive each device toward its own local optimum while ignoring the global objective. Meanwhile, the adjustment of hyperparameters has a very large impact on the function, and because the computing power of each client differs, some clients may not be able to complete the entire training process. In order to avoid the influence of bad clients on the global model, we have to reduce their weight. Here we add the penalty factor $u$, defined as:

$$u = 1 - \mu \cdot \max(C \cdot K, 1) \qquad (2)$$

where $C$ is the fraction of clients, $\mu$ is the threshold value that is set, and $K$ is the number of clients. When updating the average parameters, we redefine the aggregation function. In the following formula, $t$ is the number of iterations, and the new parameters obtained in the $(t+1)$-th iteration are $x_{t+1}$, updated as:

$$x_{t+1} = \sum_{k=1}^{K} \frac{n_k}{n}\, x_{t+1}^{k} \cdot u \qquad (3)$$

The overall loss function of the federated model is:

$$f(x) = \sum_{k=1}^{K} \frac{n_k}{n} F_k(x) \qquad (4)$$

where $F_k$ is the loss function of the $k$-th participant. It is defined as:

$$F_k(x) = \frac{1}{n_k}\sum_{i=1}^{n_k} f_i(x) \qquad (5)$$

The learning rate is $\eta$, and the local update for each participant is:

$$x_{t+1}^{k} = x_{t}^{k} - \eta \nabla F_k\left(x_t^{k}\right) \qquad (6)$$


As each client passes its parameters back to the central server, the penalty factor is determined. If it is not 1, the parameters are attenuated before aggregation. The pseudo-code of the algorithm is shown below.
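The pseudo-code figure itself is not reproduced in this text. As an illustration only, the following Python sketch shows the server-side aggregation step implied by Eqs. (2) and (3); the function names, the NumPy representation of model weights, and the uniform application of a single penalty factor are our assumptions rather than the authors' exact implementation.

```python
import numpy as np

def penalty_factor(C, K, mu):
    # Eq. (2): u = 1 - mu * max(C*K, 1); u = 1 leaves the update unchanged.
    return 1.0 - mu * max(C * K, 1)

def fedavg_z_aggregate(global_weights, client_weights, client_sizes, C, K, mu):
    """One FedAvg-Z aggregation round (Eq. (3)), sketched with NumPy arrays.

    client_weights : list of per-client parameter dicts after local SGD (Eq. (6))
    client_sizes   : list of n_k, the local sample counts
    """
    n = float(sum(client_sizes))
    u = penalty_factor(C, K, mu)          # a per-client factor could be substituted here
    new_weights = {name: np.zeros_like(p) for name, p in global_weights.items()}
    for w_k, n_k in zip(client_weights, client_sizes):
        for name in new_weights:
            # each client contributes n_k / n of its parameters, attenuated by u
            new_weights[name] += (n_k / n) * w_k[name] * u
    return new_weights
```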

3 Experiments

3.1 Experimental Setup

In this experiment, we use the MNIST dataset to train a two-layer convolutional neural network and a fully connected network. We set the number of clients to 50 and 100, respectively, and used different parameters to carry out the experiments. In order to avoid confusion, we use the same symbols for the parameters as in the FedAvg algorithm: B is the local batch size, C is the fraction of clients, K is the number of clients and E is the number of local epochs. Convolutional Neural Networks (CNN) and Multi-Layer Perceptrons (MLP) are two widely used network types in machine learning, and with the rapid development of the field and the continuous improvement of computing equipment, CNNs have developed quickly. Considering the communication overhead and cost of transmitting a large model in federated learning, we use a two-layer CNN structure and an MLP structure to evaluate the algorithm. The network model we use is shown in Fig. 2; it is a simple convolutional neural network.

Fig. 2. Two-layer CNN (Conv → MaxPooling → Conv → MaxPooling → FC → FC)
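For reference, a PyTorch sketch of the two-layer CNN of Fig. 2 is given below; the channel counts, kernel sizes and hidden width are not specified in the paper and are chosen here only as plausible defaults for 1 × 28 × 28 MNIST input.

```python
import torch
import torch.nn as nn

class TwoLayerCNN(nn.Module):
    """Conv -> MaxPool -> Conv -> MaxPool -> FC -> FC, as in Fig. 2.
    Channel and kernel sizes are illustrative; the paper does not list them."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2),   # MNIST input: 1x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                               # -> 32x14x14
            nn.Conv2d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),                               # -> 64x7x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```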

3.2 Results

We deploy the central server on a real machine, use a program to simulate 50 and 100 participating clients, respectively, and randomly allocate the MNIST dataset to the clients. Some clients receive only tens of samples, while others receive hundreds. We simulate the real state of non-IID data in this way, and then run the two algorithms to conduct comparative experiments. We then tuned the parameters B and E to verify the performance of the FedAvg-Z algorithm in different situations. According to the settings above, the experimental results of FedAvg-Z on the MNIST dataset confirmed the experimental assumption. A comparison of FedAvg-Z and FedAvg in test accuracy and number of training epochs is shown as follows:

Table 1. Comparison of FedAvg-Z and FedAvg on the non-IID MNIST dataset

Algorithm   Model  B   E  Clients  Epochs  Accuracy%
FedAvg-Z    CNN    10  1  50       1000    96.46
FedAvg-Z    CNN    10  5  50       1000    95.09
FedAvg-Z    CNN    50  1  50       1000    91.65
FedAvg-Z    CNN    10  1  100      1000    97.58
FedAvg-Z    CNN    10  5  100      1000    98.49
FedAvg-Z    MLP    10  1  100      1000    96.9
FedAvg-Z    MLP    10  5  100      1000    96.74
FedAvg      CNN    10  1  50       1000    92.28
FedAvg      CNN    10  5  50       1000    95.4
FedAvg      CNN    10  1  100      1000    97.99
FedAvg      CNN    10  5  100      1000    98.07
FedAvg      CNN    50  1  100      1000    89.38
FedAvg      MLP    10  1  100      1000    93.71
FedAvg      MLP    10  5  100      1000    95.5

It can be seen from Table 1 and Fig. 3 that the FedAvg-Z algorithm converges faster than the FedAvg algorithm. In terms of accuracy, FedAvg-Z can be 3% higher than the traditional algorithm at its peak, which is not a small improvement, and allows federated learning to further improve accuracy on non-IID data. It can also be seen from the table that the proposed algorithm behaves differently depending on the number of participating clients; as far as this experiment is concerned, the result with 50 participating clients is significantly better than the result with 100 participating clients. Compared with the other parameter settings, the accuracy of the algorithm is higher and the convergence of the model is faster when B = 10 and E = 5.


Fig. 3. (a) is the accuracy and (b) is the training loss of the 2-layer CNN with 50 clients; (c) is the accuracy and (d) is the training loss of the MLP on MNIST with 100 clients.

The algorithm is currently only tested on the MNIST dataset and has not been evaluated on a real-world dataset. In the experiment, the central server was deployed on a real machine and each participating client was simulated with a program, and the current performance is good. In a real environment, each client will have different computing capabilities, so the training results of each participating client will also differ. If joint training is to be carried out in a real environment, the optimization problem of the participating clients still needs to be considered.

4 Conclusions and Future Work

This paper has improved the FedAvg algorithm by adding a penalty coefficient to minimize the impact of bad clients on the server. At the same time, when the average parameters are combined, the aggregation function is improved to further reduce this impact, and both the accuracy and the convergence speed are improved. Federated learning technology makes it possible to train distributed models and is a new technology for balancing data privacy protection with the development of artificial intelligence. Federated learning has received considerable attention in recent years. However, there are still many problems to be solved; for example, the hardware of each client is different, so the computing power will differ, which


may cause a waste of computing resources. The search for a universal and efficient federated learning method remains an open subject in this field.

Acknowledgements. This work is partly supported by the Shandong Agricultural Science and Technology Funds (Forestry Science and Technology Innovation) Project (2019LU003), the Major Scientific and Technological Innovation Project of Shandong Province (2019JZZY010706), the Shandong Provincial Natural Science Foundation (ZR2019MF026), and the Key R&D Projects of Shandong Province (2019GNC106106). The authors would like to thank the HPC center of Shandong Agricultural University for providing the computing support.

References 1. McMahan, B., Moore, E., Ramage, D., et al.: Communication-efficient learning of deep networks from decentralized data. In: Artificial Intelligence and Statistics, PMLR 2017, pp. 1273–1282 (2017) 2. Hsieh, K., Phanishayee, A., Mutlu, O., et al.: The non-iid data quagmire of decentralized machine learning. In: International Conference on Machine Learning, PMLR 2020, pp. 4387–4398 (2020) 3. Zhao, Y., Li, M., Lai, L., et al.: Federated learning with non-iid data. arXiv preprint arXiv: 1806.00582 (2018) 4. Yao, X., Sun, L.: Continual local training for better initialization of federated models. In: 2020 IEEE International Conference on Image Processing (ICIP), pp. 1736–1740. IEEE (2020) 5. Reddi, S., Charles, Z., Zaheer, M., et al.: Adaptive federated optimization. arXiv preprint arXiv:2003.00295 (2020) 6. Kopparapu, K., Lin, E., Zhao, J.: FedCD: Improving performance in non-IID federated learning. arXiv preprint arXiv:2006.09637 (2020) 7. Kopparapu, K., Lin, E.: FedFMC: sequential efficient federated learning on non-iid data. arXiv preprint arXiv:2006.10937 (2020) 8. LeCun, Y., Bottou, L., Bengio, Y., et al.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998) 9. Lin, T., Stich, S.U., Patel, K.K., et al.: Don’t use large mini-batches, use local SGD. arXiv preprint arXiv:1808.07217 (2018) 10. Wang, J., Joshi, G.: Cooperative SGD: a unified framework for the design and analysis of communication-efficient SGD algorithms. arXiv preprint arXiv:1808.07576 (2018) 11. Briggs, C., Fan, Z., Andras, P.: Federated learning with hierarchical clustering of local updates to improve training on non-IID data. In: 2020 International Joint Conference on Neural Networks (IJCNN), pp. 1–9. IEEE (2020)

A Monitoring Scheme for Pine Wood Nematode Disease Tree Based on Deep Learning and Ground Monitoring Chengkai Ge, Fengdi Li, Fenggang Sun, Zhijun Wang, and Peng Lan(&) College of Information Science and Engineering, Shandong Agricultural University, Taian 271000, China [email protected]

Abstract. The detection of pine nematode trees in forest areas is a very important task for preventing and controlling the pine wood nematode epidemic, and timely and accurate detection in large and complex scenarios is challenging. To address this issue, we propose a ground-based monitoring detection scheme to acquire large-scale images, and a single-stage deep neural network is applied to detect the infected trees. By installing monitoring cameras on a ground foundation, the cameras are controlled to collect images automatically according to predetermined parameters. We design an improved YOLOv3 based detection model, named YOLO-S. Specifically, an attention mechanism is added to the backbone feature extraction network and a bottom-up feature pyramid structure is added to the original feature pyramid to enhance the detection accuracy. Experimental results show that the proposed YOLO-S algorithm provides higher detection accuracy than other models.

Keywords: Ground-based monitoring · Pine nematode disease · Attention mechanism · Deep learning

1 Introduction

China's pine planting area is vast; pine resources account for one quarter of China's forestry resources [1]. Pine wood nematode (PWN) disease has caused devastating damage to pine trees. At present, the main prevention and control measure for pine wilt disease (PWD) is to remove infected trees in time, so finding infected trees in time is the key to epidemic prevention and control [2]. Since the PWN has threatened the pine forests of China, there is an urgent need for efficient, timely and large-scale monitoring of pine wood nematode disease [3]. Traditionally, monitoring mainly depends on manual investigation, which is inefficient, costly and prone to missed detections [4]. Satellite remote sensing is also a common monitoring method, but it suffers from atmospheric effects and low spatial resolution, so accurate detection and positioning of individual trees is not feasible [5]. UAVs (Unmanned Aerial Vehicles) are flexible and efficient and can obtain high-resolution images [6, 7], but due to the limitation of machine performance,



a UAV cannot work for a long time, and UAV flights are easily affected by the weather, making them difficult to operate. In recent years, monitoring technology has developed rapidly. With its ability to acquire image data over long periods, in all weather and stably, it has gradually been applied in the field of forestry protection [8]. By installing a camera on a foundation such as a tower, large-scale, long-term, stable and comprehensive monitoring can be realized over a forest area. Therefore, ground-based monitoring has become an effective way to obtain data on diseased forest trees and a powerful supplement to other methods. For diseased tree recognition, initial detection mainly relied on manual identification, which is inefficient. Machine learning technology was then introduced into the field of forest disease monitoring. Xuemin Zhang et al. used three support vector data description methods to identify dead pine trees [9], and Jincang Liu et al. designed a pine wilt disease monitoring method based on a multi-feature conditional random field (CRF) using UAV images [10]. Lightweight traditional machine learning methods have strong generalization ability, but their recognition accuracy is insufficient. With the emergence of deep learning, a variety of deep learning target detection algorithms have been applied in the field of forestry disease recognition. Li Jiaqi et al. obtained a deep learning based PWD detection model using UAV multispectral and hyperspectral remote sensing images, which proved that the model performed better than traditional techniques [11]. Fengdi Li et al. used an improved YOLOv3 to detect infected pine trees, and the accuracy rate was 98.88% [12]. However, there is little research on PWD detection from the perspective of ground monitoring. Due to varying forest density and complex backgrounds, the detection accuracy of traditional detection models is low, and improving the detection accuracy of infected trees in ground-based monitoring images is an urgent problem. This paper focuses on an infected tree detection method based on the combination of deep learning and ground-based monitoring. We propose a monitoring scheme based on ground-based monitoring to achieve long-duration, large-area and simple operation, and to improve the detection accuracy and enhance the detection effect, we propose an improved diseased tree detection method based on YOLOv3.

2 Solution Design and Implementation

2.1 Experiment Location and Equipment

The study area is located on the west side of Mount Tai in Tai'an, Shandong province (117.07378°–117.08039° E, 36.21288°–36.21307° N). The monitoring camera is an integrated pan-tilt camera that supports up to 53× optical zoom, 360° continuous rotation in the horizontal direction and +40° to −90° rotation in the vertical direction. A tower is selected as the foundation, with a total height of 35 m, and the camera is mounted at a height of about 30 m.

2.2 Obtaining Images

We developed special control software that connects to the camera through the network and controls it to rotate along an S-shaped route. The rotation step is controlled so that two consecutive pictures overlap, leaving no omissions. During the rotation, the camera selects different magnifications according to the distance, to ensure that the images contain complete and clear trees. The scanning range can be set in the control program according to the size of the mountain. After the control software starts to run, images are automatically acquired through the camera and saved locally. A sketch of this scanning procedure is given below.
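The control software is not described beyond the S-shaped route, the overlap requirement and the distance-dependent zoom, so the following Python sketch is purely illustrative: the `camera` object and its goto/set_zoom/capture/zoom_for methods are hypothetical placeholders for whatever pan-tilt-zoom API the deployed camera exposes.

```python
import os
import numpy as np

def s_scan(camera, pan_range, tilt_range, pan_step, tilt_step, save_dir):
    """Sweep a pan-tilt-zoom camera along an S-shaped route and save images.

    `camera` is a hypothetical controller exposing goto(pan, tilt), set_zoom(level)
    and capture(path); step sizes are chosen so that consecutive frames overlap,
    leaving no gaps in coverage.
    """
    os.makedirs(save_dir, exist_ok=True)
    pans = np.arange(pan_range[0], pan_range[1] + 1e-6, pan_step)
    tilts = np.arange(tilt_range[0], tilt_range[1] + 1e-6, tilt_step)
    for row, tilt in enumerate(tilts):
        row_pans = pans if row % 2 == 0 else pans[::-1]  # reverse every other row -> "S" path
        for pan in row_pans:
            camera.goto(pan, tilt)
            # pick a magnification for this direction; the mapping from viewing
            # distance to zoom level is site-specific and assumed to be given
            camera.set_zoom(camera.zoom_for(pan, tilt))
            camera.capture(os.path.join(save_dir, f"r{row:02d}_p{pan:.1f}_t{tilt:.1f}.jpg"))
```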

2.3 Datasets

In this experiment, 4194 images of infected trees were collected from the infected pine ground data provided by the Taishan Management Committee. The infected trees were labeled with the LabelImg image annotation tool, and a Pascal VOC format dataset was formed by dividing the data into a training set and a test set at a ratio of 9:1.
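A minimal sketch of the 9:1 split is shown below; the directory layout and file naming are assumptions, since only the ratio and the Pascal VOC format are stated in the paper.

```python
import random
import shutil
from pathlib import Path

def split_voc_dataset(image_dir, ann_dir, out_dir, train_ratio=0.9, seed=0):
    """Randomly split LabelImg/Pascal-VOC style images and XML annotations into
    train and test sets at a 9:1 ratio (paths and layout here are illustrative)."""
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)
    n_train = int(len(images) * train_ratio)
    for split, subset in (("train", images[:n_train]), ("test", images[n_train:])):
        dst = Path(out_dir) / split
        (dst / "JPEGImages").mkdir(parents=True, exist_ok=True)
        (dst / "Annotations").mkdir(parents=True, exist_ok=True)
        for img in subset:
            shutil.copy(img, dst / "JPEGImages" / img.name)
            xml = Path(ann_dir) / (img.stem + ".xml")
            if xml.exists():
                shutil.copy(xml, dst / "Annotations" / xml.name)
```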

3 Algorithm Research

3.1 YOLOv3 Network

Fig. 1. YOLOv3 network structure (Darknet53 backbone built from CBL1/CBL3 and shortcut blocks, followed by the YOLO layers with upsampling and output heads at 13 × 13 × 18, 26 × 26 × 18 and 52 × 52 × 18)


YOLOv3 mainly consists of Darknet53 and the YOLO layers, which are used for extracting image features and for multi-scale prediction, respectively; its network structure is shown in Fig. 1. The input image size is 1 × 416 × 416. CBL1 and CBL3 are convolutional layers with 1 × 1 and 3 × 3 kernels, respectively, followed by batch normalization, with LeakyReLU as the activation function. The shortcut layer consists of CBL1 and CBL3, and Shortcutn represents the use of n (CBL1 + CBL3) substructures.

3.2 SENet Attention Mechanism Module

SENet is a CNN structure composed of Squeeze, Excitation and Reweight operations. As shown in Fig. 2, it explicitly models the interdependence between feature channels: the feature map obtained by convolution is processed to obtain a one-dimensional vector with the same number of channels, which serves as an evaluation score for each channel, and the score is then applied to the corresponding channel to obtain the result.

Fig. 2. SENet structure
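The SE module described above can be written compactly in PyTorch as follows; the reduction ratio of the two fully connected layers is not given in the paper, so the common value of 16 is assumed.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global pooling -> two FC layers -> sigmoid gate,
    applied channel-wise to the input feature map (reduction ratio is illustrative)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)           # C x H x W -> C x 1 x 1
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)        # per-channel descriptor
        w = self.excite(w).view(b, c, 1, 1)   # per-channel score in (0, 1)
        return x * w                          # reweight the original feature map
```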

3.3 Bottom-Up Multi-scale Feature Fusion

In general, the bottom feature maps are rich in target location information but poor in semantic information, while the high-level feature maps have only rough location information but rich semantic information. The FPN module in the original YOLOv3 model adopts a top-down mode to transfer high-level semantic features, but transfers little of the target location information. In order to better transfer location information, a PAN (Path Aggregation Network) module, obtained by improving PANet, is added after the FPN module. The combined structure of the FPN and PAN modules is shown in Fig. 3. The PAN module uses a bottom-up mode to transmit positioning information through downsampling, complementing the FPN module. The combination of FPN and PAN improves the detection accuracy of the model by fusing features from different backbone feature layers.


Fig. 3. Structure diagram of FPN and PAN module
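The bottom-up path can be sketched as follows; the channel widths and the concatenate-then-1×1-convolution fusion are illustrative choices, since the paper only states that the finer FPN outputs are downsampled and fused with the coarser ones.

```python
import torch
import torch.nn as nn

class BottomUpFusion(nn.Module):
    """Bottom-up path added after the FPN: each finer map is downsampled and fused
    with the next coarser map so location information flows upward.
    Channel counts and the fusion operation are illustrative."""
    def __init__(self, channels=(128, 256, 512)):
        super().__init__()
        self.down = nn.ModuleList(
            nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1)
            for c_in, c_out in zip(channels[:-1], channels[1:])
        )
        self.fuse = nn.ModuleList(
            nn.Conv2d(2 * c, c, kernel_size=1) for c in channels[1:]
        )

    def forward(self, p_small, p_mid, p_large):
        # p_small/p_mid/p_large: FPN outputs at 52x52, 26x26, 13x13 (fine -> coarse)
        outs = [p_small]
        for feat, down, fuse in zip((p_mid, p_large), self.down, self.fuse):
            prev = down(outs[-1])                        # downsample the finer level
            outs.append(fuse(torch.cat([prev, feat], dim=1)))
        return outs                                      # fused maps fed to the YOLO heads
```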

The YOLO-S network structure proposed in this paper is shown in Fig. 4. SE denotes the SENet structure, which first applies global pooling to the input feature map to obtain a feature map of size C × 1 × 1 (C is the number of feature map channels), then passes it through two fully connected layers and a sigmoid activation to obtain a weight of size C × 1 × 1, which is multiplied with the original input feature map at the corresponding positions to produce the output. SEshortcut denotes a repeating unit consisting of a regular shortcut layer and an SE structure, and SEshortcutn represents the use of n (CBL1 + CBL3 + SE) substructures. FPN + PAN fuses the parameters of the three feature maps produced by the backbone feature network to improve the feature extraction capability of the model.

Fig. 4. YOLO-S structure


4 Analysis of Experimental Results

This paper chooses the deep learning framework PyTorch to build the model and conducts experiments under Ubuntu 20.04. The processor is an Intel(R) Xeon(R) Silver 4210R CPU @ 2.40 GHz × 40, the memory is 31 GB, and the graphics card is an NVIDIA GP106GL with 6 GB of video memory. In order to improve the training speed of the network, GPU acceleration is used, with CUDA version 10.0. Figure 5 shows the P-R curves of different algorithms on the dataset. We can see that YOLO-S obtains the best performance and a clear improvement in detection compared with YOLOv3, Faster-RCNN [13] and SSD [14], which shows that the improved network structure is effective.

Fig. 5. P-R curve

Table 1 gives the comparison of AP, Precision and Recall for the same dataset under a threshold of 0.5. AP is an important index for evaluating single-class detection performance and is obtained by calculating the area under the Precision-Recall curve. The average precision of our model is higher than that of the other three methods, with an AP value of 90.5%, which is a satisfactory result in terms of detection accuracy.

Table 1. Test results for different models

Model        Classification  AP@0.5   Precision  Recall
Faster-RCNN  Sicktree        79.71%   60.87%     87.50%
SSD300       Sicktree        81.30%   84.60%     78.30%
YOLOv3       Sicktree        85.30%   82.35%     87.50%
YOLO-S       Sicktree        90.05%   92.72%     80.94%
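For clarity, the AP computation referred to above (area under the precision-recall curve for a single class) can be sketched as follows; the all-point interpolation shown here is the common Pascal VOC convention and is an assumption about the exact evaluation protocol used.

```python
import numpy as np

def average_precision(scores, is_tp, num_gt):
    """AP for one class: rank detections by confidence (IoU matching against ground
    truth is assumed to have been done already, giving is_tp), build the
    precision-recall curve, and integrate the area under it."""
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_tp, dtype=float)[order]
    recall = np.cumsum(tp) / max(num_gt, 1)
    precision = np.cumsum(tp) / np.arange(1, len(tp) + 1)
    # add sentinels and take the precision envelope before integrating
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```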

Fig. 6 shows the detection results of the model on randomly selected test images. It can be seen that the model can detect the targets and pinpoint the locations of diseased trees even in complex background environments. The above results show that the proposed model has advantages in detection accuracy and localization precision in complex scenarios and is suitable for the detection of pine nematode diseased trees in forest areas.

Fig. 6. Part of the recognition results

5 Conclusion

To address the issue of PWD detection, we propose a ground-based monitoring detection scheme. In this scheme, a monitoring camera is installed on a ground foundation, and control software is developed to automatically acquire images over a large range and a long time, and to detect and locate infected trees. The YOLO-S detection method is proposed to realize efficient detection with high accuracy: an SENet module is added to the model, and a PAN module is added after the backbone feature network to fuse features from different backbone feature layers. After these improvements, the detection accuracy is improved and infected pine trees can be identified more accurately in complex scenes.

Acknowledgements. This work is partly supported by the Shandong Agricultural Science and Technology Funds (Forestry Science and Technology Innovation) Project (2019LU003) and the Shandong Provincial Key Research and Development Project (2019GNC106106). The authors would like to thank the HPC center of Shandong Agricultural University for providing the computing support.


References 1. Li, F., et al.: Area detection of pine wood nematode disease based on aerial data. Comput. Knowl. Technol. 29, 239–241 (2018) 2. Yue, M., et al.: A review of research on early diagnosis technology of pine wood nematode disease. J. Shandong Agric. Univ. (Nat. Sci. Ed.) 45(01), 158–160+76+153 (2014) 3. Han, Y., et al.: Prediction and analysis of pine wood nematode suitable areas in China based on Maxent niche model. J. Nanjing For. Univ. (Nat. Sci. Ed.) 39(01), 6–10 (2015) 4. Zhang, H., et al.: Review of domestic research on pine wood nematode disease monitoring by drone remote sensing technology. East China For. Manag. 31(03), 29–32 (2017) 5. Tao, H., et al.: Research progress on remote sensing monitoring of pine wood nematode diseased pine trees. For. Sci. Res. 33(03), 172–183 (2020) 6. Windrim, L., et al.: Forest Tree Detection and Segmentation Using High Resolution Airborne LiDAR (2018) 7. Aknc, A., et al.: Detection and mapping of pine processionary moth nests in UAV imagery of pine forests using semantic segmentation. In: Australasian Conference on Robotics and Automation (ACRA) (2019) 8. Wei, Y., et al.: Application of remote video digital monitoring technology in forest resource protection. Inner Mongolia For. Inv. Design 33(06), 74–76 (2010) 9. Zhang, X.: Research on the identification of diseased pine trees in remote sensing images based on support vector data description. Anhui University (2014) 10. Liu, J., et al.: Monitoring method of pine wood nematode disease in drone images based on multi-feature CRF. Bull. Surv. Mapp. 07, 78–82 (2019) 11. Li, J., et al.: Establishment of pine wood nematode monitoring model based on UAV spectral remote sensing and AI technology. Electron. Technol. Softw. Eng. 08, 91–94 (2021) 12. Li, F., et al.: Research on pine wood nematode disease tree detection method based on YOLOv3-CIoU. J. Shandong Agric. Univ. (Nat. Sci. Ed.) 52(02), 224–233 (2021) 13. Ren, S., et al.: Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 39(6), 1137–1149 (2017) 14. Liu, W., et al.: SSD: single shot multibox detector. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9905, pp. 21–37. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46448-0_2

An Optical Detection Method with Noise Suppression for Focal Panel Arrays Bin Wang1(&), Yuning Wang2, and Xiaoyan Wang3 1

Science and Technology on Communication Networks Laboratory, The 54th Research Institute of China Electronics Technology Group Corporation, 050081 Shijiazhuang, China [email protected] 2 College of Artificial Intelligence, Xi’an Jiaotong University, 28 West Xianning Street, Xi’an, Shaanxi, China 3 China Academy of Space Technology (Xi’an), 504 East Changan Street, Xi’an, Shaanxi, China

Abstract. In practical engineering applications, due to light reflection and scattering in wireless channels containing suspended particles, such as atmospheric or underwater channels, part of the emitted optical signal is reflected back to the focal plane array of the space optical terminal. At the same time, due to the low isolation between the transmitting and receiving optical paths in the terminal, the reflected optical signal interferes with the detection of the received optical signal from the far end, which seriously reduces the detection performance and the accuracy of the light spot centroid calculation. This paper proposes an increase-decrease statistical method for focal plane arrays, which can effectively eliminate the influence of the reflected optical signal and the noise on signal detection. In addition, this increase-decrease statistical algorithm is simulated and verified by co-simulation with two software tools.

Keywords: Optical communications · Focal plane array detection · Noise suppression

1 Introduction

High-speed optical communication technology has been developed for many years and has become more mature in recent years. Many countries have successfully demonstrated space laser links and promoted its development, and the space optical communication distance has been extended from near-earth to lunar distance. Optical communication technology is currently applied in various fields, such as underwater optical communication, deep space exploration, near-earth optical communication, and terrestrial free space optics [1]. Optical communication has many advantages, for example a higher data rate, stronger resistance to electromagnetic interference, and better transmission confidentiality [1].



In various applications, optical communication terminals use focal plane arrays (FPA) to detect the optical signals for the pointing, acquisition and tracking (PAT) function [1]. Each pixel of the FPA detects the incident optical signal and carries out photoelectric detection. The output of each FPA pixel represents the gray value of the received optical signal and can be modeled as a random variable in the theory of stochastic processes [2]. In practice, the FPA of the optical terminal is subject to various noises. The main noise sources are the following:

(a) Due to the low isolation of the transmitting and receiving paths of the optical terminal, the transmitted light that is reflected and detected by the FPA interferes with the detection of the received optical signal.
(b) In practice, due to dirt, dust or other unknown obstructions on the mirror surface of the telescope, part of the transmitted signal is reflected and results in FPA detection noise.
(c) Various particles float in the wireless channel and cause light scattering and reflection, especially in turbid underwater and atmospheric channels; the reflected optical signal results in noise.
(d) The device noises of FPAs include readout circuit noise, shot noise, thermal noise, dark current noise, etc. [3].

In addition, the transmitted power of the optical terminal can be very high, especially for telecommunication [3, 4], so the reflected optical signal can be a large noise contribution. At the same time, the device states in the optical terminal change gradually and continually; if the cleanliness of the telescope mirror degrades in use, the reflected optical signal causes detection noise that changes randomly and slowly compared with the transmitted data rate. The performance of received signal detection and the accuracy of spot centroid calculation can be seriously degraded by these random noises. In practice, in order to meet system requirements or to make optical networking convenient, the optical transmitter and receiver need to work in the same band and at closely adjacent working wavelengths. For example, the working wavelengths of two space optical terminals may both lie in the 1550 nm band with a wavelength interval of less than 1 nm. However, the optical terminals cannot achieve satisfactory isolation of the receiving and transmitting channels by traditional optical filtering alone, because the filter bandwidth cannot be made narrow enough in practice; for example, only a 2–3 nm filter bandwidth can be achieved with high transmission and low complexity [5]. The reflected optical signal transmitted by the optical terminal therefore seriously reduces the performance of signal detection and spot centroid calculation. This paper proposes the increase-decrease statistical algorithm (IDSA) to solve the interference problem of the reflected light in FPAs. In the IDSA method, the different transmitters use different transmitted signal formats. In addition, the receivers adopt integral control signals for the FPAs and use signal detection and processing based on the IDSA to eliminate the noise effectively. This IDSA method can effectively improve the sensitivity of received signal detection, the isolation of the transmitting and receiving channels and the accuracy of spot centroid calculation.
This paper analyzes the characteristics of the detected signals and noises by using the principles of statistical signal detections, establishes the statistical models of FPA

1278

B. Wang et al.

detections, and designs the IDSA implementation scheme. In addition, the cosimulation of the OptiSystem and MATLAB software is implemented, and fully proves the feasibility and performance of this IDSA scheme.

2 System Overview and Signal Design 2.1

System Overview

The implementation scheme of the IDSA method for FPAs includes the following three parts: (a) The different transmitters adopt the special designs of differently modulated signal formats. The emitted optical signals from the different optical terminals are modulated with different frequencies, and work in the same band and at the closely adjacent wavelength. (b) The receivers adopt the special design of the integral control signal for the FPA, so each pixel detector of a single FPA with the same one integral control signal can output different integral results for the two optical signals with the differently modulated frequencies. (c) The receivers use the IDSA to process the output of each pixel detector to eliminate the influences of the reflected optical signal and the device noises. The IDSA can improve the detection performance and the centroid calculation accuracy effectively. The system diagram is shown in Fig. 1.

Optical Terminal OT1

Integral Control Signal with

at 1553nm and Optical Terminal OT2

IDSA FPA Signal Processing and Light Spot Centroid

with

at 1552nm and

Fig. 1. System diagram

2.2

Signal Design

In the communication system as shown in Fig. 1, the transmitted optical signal from the terminal OT2 is at the wavelength of 1553 nm and is modulated by the signal Sm1 ðtÞ with the frequency f s1 and the equivalent average amplitude Am1 . The transmitted optical signal from the terminal OT1 is at the wavelength of 1552 nm and is modulated by the signal Sm2 ðtÞ with the frequency f s2 and the equivalent average amplitude Am2 .

An Optical Detection Method with Noise Suppression for Focal Panel Arrays

1279

The FPA output of each terminal has the frame frequencyf m ¼ 10  kfps, and  2M 1 þ 1 1 2 ¼ K 2M T m ¼ f ¼ 100 ls, which includes K time segments, and T m ¼ K f f m

s1

s2

for the signal Sm1 ðtÞ and Sm2 ðtÞ. The modulation signal Sm1 ðtÞ of the terminal OT2 is the On-off Keying (OOK) modulated signal with the frequency f s1 ¼ 1:7 MHz, the duty cycle 50% and the equivalent average amplitudes Am1 . The parameters for the terminal OT2 are K ¼ 10, ð2M 1 þ 1Þ ¼ 17, K ð2M 1 þ 1Þ ¼ 170, M 1 ¼ 8, ð2M 12þ 1Þ ¼ 8:5: The modulation signal Sm2 ðtÞ of the terminal OT1 is the OOK modulated signal with the frequency f s2 ¼ 1 MHz, the duty cycle 50% and the equivalent average amplitudes Am2 . The parameters for the terminal OT1 is K ¼ 10, ð2M 2 Þ ¼ 10, K ð2M 2 Þ ¼ 100, , ð2M2 2 Þ ¼ 5. The incident optical signals for the terminal OT1 include the reflected signal Sm2 ðtÞ and the received signal Sm1 ðtÞ that is transmitted by the remote terminal OT2. In this IDSA method, multi-frame joint design is adopted for the FPA integral control signal Sint ðtÞ, so the optical terminal can use the pixel detectors of a single FPA and the same one Sint ðtÞ to differentiate the two incident optical signals Sm1 ðtÞ and Sm2 ðtÞ, which are modulated by the two different frequencies f s1 and f s2 , in order to obtain their equivalent average amplitudes Am1 and Am2 respectively and more precisely. The integral control signal Sint ðtÞ is designed as the OOK modulation periodic signal with one period time of T cycle ¼ 200 ls, which includes 2 frames, namely the X 1 and X 2 frames, where X 1 and X 2 are the positive integers. Each frame has the frame time T m ¼ 100 ls, which includes K ¼ 10 time segments of 10 ls. The design of integral control signal Sint ðtÞ is given as follows: 8 1½X 1  100ls þ i  10ls  t\½X 1  100ls þ i  10ls þ 5lsi ¼ 0; 1. . .9 > > > > < 0½X 1  100ls þ i  10ls þ 5ls  t\½X 1  100ls þ ði þ 1Þ  10lsi ¼ 0; 1. . .9 Sint ðtÞ ¼ > 0½X 2  100ls þ i  10ls  t\½X 2  100ls þ i  10ls þ 5lsi ¼ 0; 1. . .9 > > > : 1½X 2  100ls þ i  10ls þ 5ls  t\½X 2  100ls þ ði þ 1Þ  10lsi ¼ 0; 1. . .9

ð1Þ

3 Modeling and Algorithms In the IDSA method, each pixel detector of the FPA outputs differently for the two incident signals, which are the reflected signal of the local transmitter and the received signal from the remote transmitter. The transmitted signals from the different optical terminals are modulated with the different signal formats, and the received signals are processed by using the multi-frame joint design of integral control signals, as well as the IDSA. The equivalent average amplitudes can be calculated and the noise can be suppressed, which improves the detection and centroiding performance.

1280

B. Wang et al.

It is assumed that the synchronization between the integral control signal and the incident signal can be achieved while the number of this incident signal periods for integrations is a positive non-integer. The synchronization method is not discussed in this paper. Because of the signal periodic properties, the synchronization does not impact the integral results while the number of this incident signal periods for integrations is a positive integer. (a) Modeling and analysis of the FPA signal detection The FPA integral control signal Sint ðtÞ is a periodic signal modulated as the OOK format and each Sint ðtÞ period includes 2 frames, which are represented as the X 1 and X 2 frames with the frame time T mh. During each frame time i T m ,hthere are K consecutive i 2 Þ ði þ 1Þð2M 2 Þ integral time segments of ið2Mf 1 þ 1Þ ; ði þ 1Þðf2M 1 þ 1Þ or ið2M and f s2 ; f s2 s1 s1     2 T m ¼ K 2Mf1 þ 1 ¼ K 2M , where i ¼ 0; 1; 2. . .ðK  1Þ. In each time segment of f s1

s2

the Sint ðtÞ, the period number of one incident modulated signal contained in the FPA integral output is a positive non-integer, and the period number of the other incident modulated signal contained in the FPA integral output is a positive integer. For both of the two incident modulated signals (e.g. Sm1 ðtÞ with the period ½0; T 1 ÞÞ, the integral output of one FPA pixel detector for the former part of one signal period (e.g. ½0; pT 1 Þ) is different from that for the latter part of one signal period (e.g. ½ pT 1 ; T 1 Þ). According to the above joint design of the integral control signal Sint ðtÞ, the signal Sm1 ðtÞ and Sm2 ðtÞ for the X 1 and X 2 frames, the Sm1 ðtÞ integral results of the time periods are represented as follows: 8 ( ðM 4 þ uÞAm1 > R ið2M1 þ f1s1Þ þ M3 þ p > for Sint ðtÞ > f s1 Sm1 ðtÞSint ðtÞdt ¼ > ið2M 1 þ 1Þ > > 0 for Sint ðtÞ > fs1 < ( 1 þ 1Þ 0 for Sint ðtÞ R ði þ 1Þðf2M s1 > S ðtÞSint ðtÞdt ¼ ðM 5 þ vÞAm1 > ði þ 1Þð2M 1 þ 1ÞM 3 p m1 > for Sint ðtÞ > f s1 f s1 > > > : ðM4 þ uÞAm1 6¼ ðM5 þ vÞAm1 fs1

¼ 1 and the X 1 frame ¼ 0 and the X 2 frame ¼ 0 and the X 1 frame ð2Þ ¼ 1 and the X 2 frame

fs1

The Sm2 ðtÞ integral results of the time periods are represented as follows: 8  M 6 Am2 > R ið2M1 þ f1s1Þ þ M3 þ p for Sint ðtÞ > > f s2 S ð t ÞS ð t Þdt ¼ > m2 int < ið2M1 þ 1Þ 0 for Sint ðtÞ fs1  1 þ 1Þ > R ði þ 1Þðf2M 0 for Sint ðtÞ > s1 > S ð t ÞS ð t Þdt ¼ > M 6 Am2 m2 int : ði þ 1Þð2M1 þ 1ÞM3 p for Sint ðtÞ f f s1

s2

¼ ¼ ¼ ¼

1 and 0 and 0 and 1 and

the X 1 the X 2 the X 1 the X 2

frame frame frame frame

ð3Þ

where M 1 , M 2 , M 3 , M 4 , M 5 and M 6 are the positive integers. u þ v ¼ 1, u 6¼ v, u 2 ½0; 1, v 2 ½0; 1, f s1 6¼ f s2 , p 2 ð0; 1Þ. u, v, p, Am1 and Am2 are the positive real numbers. Each pixel of the FPA outputs the integral electrical signal of the photoelectric detection at the frame frequency f m , which is expressed as the output sequence y½n. According to the design of the modulated signals and the integral control signal, and the analysis of the FPA noise sources, the integral output part of the signals Sm1 ðtÞ and Sm2 ðtÞ for each pixel is the deterministic variable and the noise output part for each

An Optical Detection Method with Noise Suppression for Focal Panel Arrays

1281

FPA pixel is the independent and identically distributed (i.i.d) Gaussian-distributed random variable. The total  output sequence y½n for each pixel is also the random variable Gaussian my ; ry 2 with the mean my and the variance ry 2 . Because each pixel detects the optical signal independently, the FPA output sequences are the independent Gaussian random variables. The observation vector Y is formed by the following expression: Y ¼ ½ Y1

Y2

Y3

Y4 

ð4Þ

P PL1 PL1 1 1 Y4 ¼ where Y 1 ¼ L1 L1 h¼0 y½4h þ 1, Y 2 ¼ L h¼0 y½4h þ 2, Y 3 ¼ L h¼0 y½4h þ 3, P L1 1 h¼0 y½4h þ 4. L is the positive integer. In each calculation, there are L periods in the L output sequence y½n and there are 4 elements in each period of y½n. By using the summation formula of Y 1 , Y 2 , Y 3 and Y 4 for the X 1 , X 2 , X 1 and X 2 frames, the corresponding elements with the same sequence number in each period are added, and the observation vector Y is formed finally. The diagram for the vector Y construction is shown in Fig. 2 below.

Fig. 2. Diagram of the observation vector Y

The elements of the observation vector Y are the Gaussian-distributed random variables according to the central limit theorem in statistics and are obtained as follows: KM  A 8 m1 4 m1 Y 1 ¼ uKA > 2f s1 þ 2 f s1 > > > KM  A > > 5 m1 m1 < Y 2 ¼ vKA 2f s1 þ 2 f s1 KM  A m1 4 m1 > Y 3 ¼ uKA > > 2f s1 þ 2 f s1 > > >  5 A : m1 m1 Y 4 ¼ vKA þ KM 2f 2 f s1

s1

þ þ þ þ

KM  A 6

2

m2

f s2

KM  A 6

2

m2

f s2

KM  A 6

m2

2

f s2

2

f s2

KM 6  A

m2

þ N b1 þ N b2 þ N b3

ð5Þ

þ N b4

where N b1 , N b2 , N b3 and N b4 are the output noise of the vector Y elements for each pixel and are the i.i.d Gaussian-distributed random variables, Gaussianðmb ; rb 2 Þ with the mean mb and the variance rb 2 . It is assumed that the non-uniformity of the FPA pixels is very small and can be ignored. The elements Y 1 , Y 2 , Y 3 and Y 4 are all the Gaussian-distributed random variables GaussianðmY ; rY 2 Þ with the mean mY and the r

2

r

2

variance rY 2 ¼ L Ly2 ¼ Ly . The decision variables D and F can be formed as the following expression:

1282

8
> > > > > ½E ðF Þ E ðDÞ½ðu þ vÞ þ ðM 4 þ M 5 Þ > > Am2 ¼ f s2KM  f s2KM > 6 6 ½ðuvÞ þ ðM 4 M 5 Þ > > > P > ZD < Dj EðDÞ ¼ Zj¼1D > > PZ F > > F > j¼1 j > > E ð F Þ ¼ > ZF > > > PZ m > > y0;j : mb ¼ Zj¼1m

ð7Þ

where Dj and F j are the measured sample data of the random variables D and F respectively for the jth measurement during a long measured time, and Z D and Z F are the numbers of the measured data of D and F respectively.

4 Application and Simulation

By using co-simulation of the OptiSystem and MATLAB software, the modulated signals $S_{m1}(t)$ and $S_{m2}(t)$ and the FPA integral control signal $S_{int}(t)$ are generated, and the FPA detection system is built and simulated. The FPA output signals are obtained, and finally the IDSA statistical algorithm is programmed and evaluated. The statistically calculated results of the equivalent average amplitudes $A_{m1}$ and $A_{m2}$ can be obtained clearly. The diagram of the optical detection simulation for the terminal OT1 is shown in Fig. 3.


Fig. 3. Diagram of optical comm. simulation system

A 6 × 6 element FPA is simulated, and the light spots of Gaussian beams from the terminals OT1 and OT2 are generated with a center power of −50 dBm, respectively. With the IDSA statistical algorithm, the equivalent average amplitude $A_{m1}$ can be obtained more precisely, as shown in Fig. 4.

Fig. 4. Diagram of simulation result of the equivalent average amplitude Am1


5 Conclusion

An IDSA method for FPA detection and the statistical models of the IDSA method are proposed in this paper. The calculation analysis is given, and a software co-simulation of the detection system is built. The statistical algorithms are implemented in the simulation and the results of the equivalent average amplitudes are obtained. The IDSA method can improve the FPA detection performance, the light spot centroid accuracy and the isolation of the transmitting and receiving paths in the optical terminal. The feasibility and validity of the IDSA method are verified and its advantages are analyzed.

References 1. Hemmati, H.: JPL. Wiley, Deep-Space Communications and Navigation Series (2006) 2. Hsu, H.P.: Schaum’s Outline of Theory and Problems of Probability, Random Variables, and Random Processes, ser. McGraw-Hill Companies, Inc, Schaum’s Outline (1997) 3. Srinivasan, M., Rogalin, R., Lay, N., et al.: Downlink receiver algorithms for deep space optical communications. In: Free-Space Laser Communication and Atmospheric Propagation XXIX, Proceedings of SPIE, vol. 10096 (2017). https://doi.org/10.1117/12.2254887 4. Biswas, A., Hemmati, H., Piazzolla, S., et al.: Deep-space optical terminals (DOT) systems engineering. NASA IPN Prog. Rep. 42, 183 (2010) 5. Baister, G., Dreischer, T., Tüchler, M., et al.: OPTEL terminal for deep space telemetry links. In: Free-Space Laser Communication Technologies XIX and Atmospheric Propagation of Electromagnetic Waves, Proceedings of SPIE, vol. 6457, 645706-3 (2007). https://doi.org/10. 1117/12.708770

Orientation-Aware Vehicle Re-identification via Synthesis Data Orientation Regression Hong Wang1 and Ziruo Sun2(&) 1

School of Journalism and Communication, Shandong Normal University, Jinan, China 2 School of Software, Shandong University, Jinan, China [email protected]

Abstract. Owing to the needs of smart city construction, vehicle re-identification (re-ID) has been widely used in the field of computer vision. Given a probe vehicle image, all images of the same vehicle need to be found in the gallery data. However, because of different camera shooting angles and vehicle driving directions, extreme viewpoint changes in vehicle images lead to dissimilarity in shape, resulting in visual differences that have a significant impact on accuracy. To address this issue, we propose a method for eliminating the bias between different viewpoints: orientation regression training is conducted on a freely generated synthetic dataset produced by the VehicleX engine, orientation-aware features are extracted using the trained network, and the similarities calculated from the original re-ID feature and the orientation-aware feature are then fused to obtain the final similarity, which effectively removes the orientation bias present in conventional re-ID features. Extensive experiments on two public datasets confirm the effectiveness of the proposed method.

Keywords: Vehicle re-identification · Synthetic data · Deep regression

1 Introduction

Vehicle re-ID is the task of retrieving vehicles across cameras. With the development of smart city construction, vehicle re-identification, as a basic technology, is an indispensable part of other tasks (such as vehicle tracking and counting); therefore, it has gradually attracted wide attention in the field of computer vision. Initially, re-ID was implemented using hardware sensors [1–3]; however, this approach is costly and not conducive to large-scale deployment. With the deployment of large-scale urban surveillance systems, vision-based approaches have become the main research direction, and they are divided into two types: traditional manual features and deep features [4–7]. With the high performance of convolutional neural networks in various computer vision tasks, deep feature based re-ID methods have become the mainstream [8–12]. However, some unfavorable factors affect the recognition accuracy, among which extreme orientation variation of vehicle images is an important one. Two photographs taken at different angles can differ vastly in the external shape of the


vehicle, thus causing significant visual differences. To solve this problem, the most direct way is to find the bias between images of different orientations, so that this bias can be taken into account when calculating the similarity. However, this requires the vehicle images to be labeled with orientation information, which increases the labeling cost, and specific angles are difficult to label manually. In this case, we use synthetic data, automatically generated by the VehicleX [13] engine, together with vehicle images produced by style transfer using SPGAN [14]. The engine can generate vehicle images with fixed angles and can provide vehicle angles accurate to one decimal place. Compared with manually labeling real images, the use of synthetic data greatly reduces the cost. After obtaining the synthetic dataset with precise orientation information, we use it to train an orientation regression model and apply the trained model to real data to extract orientation-aware features. We also train a general vehicle re-identification model to extract the normal re-ID features. Finally, we calculate two similarity matrices from the two kinds of features and fuse them to obtain the final matrix and the retrieval results. To summarize, the contributions of this study are as follows:

• We propose a method based on synthetic data orientation regression to eliminate the bias that exists between real-world images with different orientations.
• Extensive experiments on two public datasets (i.e., VeRi-776 [15, 16] and VehicleID [17]) confirm the effectiveness of our method and its advantage over previous state-of-the-art models.

2 Related Work

In this study, we focus on the problem of extreme orientation variation in vehicle re-identification, so we first present some of the current solutions to this problem. Subsequently, some methods for utilizing synthetic data are introduced.

2.1 Orientation-Aware Vehicle Re-ID

Extreme orientation variation is a hot research topic in vehicle re-ID tasks. Meng et al. [18] proposed a view-aware embedded network based on parsing to realize a feature alignment and enhancement of view-awareness for vehicle re-ID. In addition, Chen et al. [19] proposed a special semantically guided component attention network. During the training process, only image-level semantic tags were used to robustly predict the component attention masks of different vehicle views. Moreover, He et al. [20] proposed TransReID, wherein the SIE module encodes side information (i.e., orientation and camera labels) through learnable embeddings. Zhu et al. [21] also used the orientation and camera similarity as penalties to obtain a final similarity and reduce the influence of the background and shape.


Fig. 1. Schematic diagram of the training stage and inference stage of the adopted method, where the two feature extractors are trained in the training stage and the distance matrix calculated from the two features is fused in the inference stage to obtain the final result

2.2 Utilization of Synthetic Data in Re-ID

Sun et al. [22] proposed PersonX, which is a data synthesis engine based on Unity designed for pedestrian re-identification tasks, mainly for generating pedestrian images (e.g., different backgrounds, viewpoints, lighting, and poses). Yao et al. [13] introduced a large-scale synthetic dataset, VehicleX, and proposed an attribute descent approach to let VehicleX approximate the attributes in real-world datasets. Tang et al. [23] proposed a Pose-Aware Multi-Task Re-Identification (PAMTRI) framework, which creates a large-scale, highly randomized synthetic dataset with automatically annotated vehicle attributes for training and overcomes the viewpoint dependency by explicitly reasoning the vehicle pose and shape through key points, heatmaps, and segments from the pose estimation.

3 Method

In this section, we first introduce the synthetic data orientation regression, including the model architecture used and the processing of the orientation information, and then describe the extraction of orientation-aware vehicle features and how to eliminate the bias between vehicle images with different orientations.

3.1 Synthesis Data Orientation Regression

In the synthetic data, each vehicle image is labeled with orientation information, with angles ranging from 0° to 360°. When horizontal-flip data augmentation is applied to the images, there is no bias between horizontally symmetric orientations, so we convert the provided 0°–360° labels to 0°–180° labels. Different angle labels are assigned according to the orientation of the vehicle. The angles are then normalized and used as labels for the orientation regression training.
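A small PyTorch sketch of this label preparation and of the log-cosh regression loss of Eq. (1) is given below; the exact folding rule from 0°–360° to 0°–180° is not spelled out in the paper beyond the horizontal-flip symmetry, so a simple modulo-180 fold is assumed here.

```python
import torch

def orientation_label(angle_deg: float) -> torch.Tensor:
    """Fold a 0-360 degree orientation into 0-180 degrees (horizontal flipping makes
    symmetric orientations equivalent) and normalise it to [0, 1] for regression."""
    folded = angle_deg % 360.0
    if folded >= 180.0:
        folded -= 180.0
    return torch.tensor(folded / 180.0)

def log_cosh_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Eq. (1): sum of log(cosh(prediction - target)) over the batch."""
    return torch.log(torch.cosh(pred - target)).sum()
```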


We feed the synthetic image into a convolutional neural network with a fully connected layer and a sigmoid activation function in the head. The loss function is the log-cosh function commonly used in regression tasks, which can be written as:

$$L(y, y_p) = \sum_{i=1}^{n} \log\left(\cosh\left(y_p^{i} - y_i\right)\right) \qquad (1)$$

where $y$ and $y_p$ are the true and predicted values, respectively, and $\cosh(\cdot)$ is the hyperbolic cosine function. After training, a model that can predict the orientation angle of a vehicle is obtained. However, we do not use this model directly to predict the orientation angle of real-world images. In the inference stage, the head of the model is removed, and we directly use the feature vector after global pooling as the output of the model. It thus becomes a feature extractor that extracts the orientation-aware features of the vehicles, denoted as $F_o(\cdot)$. The inner product is used as the similarity metric, which can be expressed as:

$$S_o(v_1, v_2) = \frac{F_o(v_1) \cdot F_o(v_2)}{\|F_o(v_1)\| \cdot \|F_o(v_2)\|} \qquad (2)$$

where $v$ represents a vehicle image. When two vehicles are close in orientation, the similarity between their orientation-aware features is high; conversely, when the orientation angle difference is large, their similarity is low. This similarity is used to eliminate the orientation bias between real-world images, which is covered in the next subsection.

3.2 Orientation Bias Elimination

To extract the original re-ID features, we first train a general re-identification model, denoted as $F(\cdot)$. We use the strong baseline model commonly used in re-ID tasks, which adopts a multi-task learning mechanism combining metric learning and classification learning, as shown in Fig. 1. A deep convolutional neural network serves as the backbone, followed by a BN neck; the metric learning loss is calculated with the embedding features before the BN layer and the classification loss with the features after the BN layer, to ensure the simultaneous convergence of the two losses. Because this feature is not trained to be orientation-aware, images of vehicles with similar orientations tend to be more similar even if they do not belong to the same ID, whereas some vehicles with the same ID but large orientation differences are less similar. Based on this, we need to eliminate the orientation bias to optimize the retrieval ranking list, and the method we use is to appropriately reduce the distance between vehicles with large orientation differences. This requires a measure of the similarity of two vehicle images in terms of orientation, and the previously trained orientation-aware feature extractor $F_o$ can be used for this purpose. As shown in Fig. 1, during the inference stage, a real-world image is fed into the two feature extractors separately to extract the orientation-aware feature $F_o(v)$ and the


Table 1. The performance (%) comparison of SDOR on VeRi-776 and VehicleID datasets.

Method                    VeRi-776              VehicleID Small    VehicleID Medium   VehicleID Large
                          mAP    R1     R5      R1     R5          R1     R5          R1     R5
VSTM [11]                 58.3   83.5   90.0    –      –           –      –           –      –
OIFE [24]                 51.4   68.3   89.7    –      –           –      –           67.0   82.9
VAMI [8]                  61.3   85.9   91.8    63.1   83.3        52.9   75.1        47.3   70.3
PAMTRI [23]               71.9   92.9   97.0    –      –           –      –           –      –
CFVMNet [25]              77.0   95.3   98.4    81.4   94.1        77.3   90.4        74.7   88.7
PVEN [18]                 79.5   95.6   98.4    84.7   97.0        80.6   94.5        77.8   92.0
TransReID [20]            79.6   97.0   –       83.6   97.1        –      –           –      –
Baseline [26]             79.7   95.1   97.9    82.2   95.1        78.0   92.8        74.9   90.7
Baseline + VOC [21]       80.9   95.7   98.3    84.0   95.8        79.9   94.0        76.7   92.1
Baseline + SDOR (ours)    81.9   96.5   98.6    84.9   96.2        81.0   94.2        77.8   92.5

For two vehicle images, we use the two features to compute two similarities and then fuse them; specifically, we subtract the similarity calculated with the orientation-aware features from the original similarity, which can be expressed as follows:

S_r(v_1, v_2) = \frac{F(v_1) \cdot F(v_2)}{\|F(v_1)\| \, \|F(v_2)\|},   (3)

S(v_1, v_2) = S_r(v_1, v_2) - \lambda S_o(v_1, v_2),   (4)

where S_r is the similarity computed with the original re-identification features, S is the final similarity after removing the orientation bias, and \lambda is the weight of the orientation-aware similarity. After removing the orientation bias, when the orientation similarity of two images is high, a larger value is subtracted from the original similarity; conversely, when the orientation similarity of two images is low, a smaller value is subtracted. The ranking of images whose orientation differs from that of the query image is therefore effectively improved, which optimizes the whole retrieval ranking list and improves the retrieval performance.
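A minimal sketch of this fusion step under our own assumptions (PyTorch, pairwise cosine similarities over a batch of features; the value of lam is purely illustrative, as the paper does not fix \lambda in this excerpt):

```python
import torch
import torch.nn.functional as F

def debiased_similarity(reid_feats: torch.Tensor,
                        orient_feats: torch.Tensor,
                        lam: float = 0.1) -> torch.Tensor:
    """Eqs. (2)-(4): pairwise re-ID similarity minus weighted orientation similarity.

    reid_feats, orient_feats: (N, D) feature matrices for the same N images.
    Returns an (N, N) matrix of debiased similarities.
    """
    s_r = F.cosine_similarity(reid_feats.unsqueeze(1), reid_feats.unsqueeze(0), dim=-1)
    s_o = F.cosine_similarity(orient_feats.unsqueeze(1), orient_feats.unsqueeze(0), dim=-1)
    return s_r - lam * s_o  # similar orientation -> larger penalty, per Eq. (4)
```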

4 Experiments

4.1 Datasets

We used two large-scale re-ID datasets collected in real scenarios: VeRi-776 and VehicleID. VeRi-776 contains more than 50,000 images of 776 different vehicles captured by 20 surveillance cameras, and VehicleID contains 221,763 images of 26,267 vehicles captured during daylight hours by multiple real-world surveillance cameras.

4.2 Evaluation Metric

For the VeRi-776 dataset, we used the standard cumulative matching characteristics (CMC) and mean average precision (mAP) as the evaluation metrics. For the VehicleID dataset, only the CMC is used as the evaluation metric.

4.3 Implementation Details

All images were resized to 256 × 256, and ResNet-50 [27] pretrained on ImageNet [28] was used as the backbone of the network. For data augmentation, we used random erasing [29], random horizontal flipping, and random cropping.

Re-ID Model. We use a triplet loss [30, 31] with a soft margin as the loss function. For VeRi-776, the batch size was set to 48, with eight identities and six images per identity. For VehicleID, the batch size was set to 48, with 12 identities and 4 images per identity. We trained the model for 80 epochs using the SGD optimizer; the initial learning rate was set to 1e−3, linearly increased to 1e−2 during the first 10 epochs, and then decayed to 1e−3 and 1e−4 at the 40th and 70th epochs, respectively.

Orientation Regression Model. The orientation regression model is trained for 40 epochs with the SGD optimizer; the learning rate is linearly increased from 1e−3 to 1e−2 during the first 5 epochs and then decayed to 1e−3 and 1e−4 at the 20th and 30th epochs, and the batch size is set to 48.
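The warmup-and-step schedule for the re-ID model can be written out as a small function. This is only a sketch of the schedule described above; the per-epoch granularity of the linear warmup is our assumption:

```python
def reid_learning_rate(epoch: int) -> float:
    """Learning rate at a given epoch (0-indexed) for the 80-epoch re-ID training."""
    if epoch < 10:                            # linear warmup from 1e-3 to 1e-2
        return 1e-3 + (1e-2 - 1e-3) * epoch / 10
    if epoch < 40:                            # plateau at 1e-2
        return 1e-2
    if epoch < 70:                            # first decay step at epoch 40
        return 1e-3
    return 1e-4                               # second decay step at epoch 70
```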

4.4 Comparison with State-of-the-Art Methods

In this section, we compare our method with several state-of-the-art methods that focus on the orientation problem in the vehicle re-ID task. Some of these methods rely on manually labeled orientation information (e.g., CFVMNet and TransReID), key-point information (e.g., OIFE), or semantic segmentation information (e.g., PVEN). The additional information used in our method is a synthetic dataset generated by the VehicleX engine, which is produced automatically by the program without manual operations, making synthetic data less expensive to use than manual tagging. We also compared methods that use synthetic datasets (e.g., PAMTRI and VOC), and our method outperforms these approaches. As shown in Table 1, compared with the baseline model, our SDOR method improves the mAP by 2.2% and the rank-1 accuracy by 1.4% on the VeRi-776 dataset, and improves the rank-1 accuracy by 2.7%, 3.0%, and 2.9% on the small, medium, and large test sets of the VehicleID dataset, respectively.

5 Conclusion

In this study, we proposed a method for eliminating the orientation bias between vehicle images. To address the extreme variation in orientation in vehicle re-identification tasks and the lack of accurate orientation annotations in real-world data, we used free synthetic data to train an orientation regression model to extract orientation-aware features, eliminate the orientation bias present in ordinary re-ID features, and improve the retrieval performance. Extensive experiments on two large-scale datasets, VeRi-776 and VehicleID, demonstrate the effectiveness of the proposed method.

References
1. Klein, L.A., Kelley, M.R., Mills, M.K.: Evaluation of overhead and in-ground vehicle detector technologies for traffic flow measurement. J. Test. Eval. 25(2), 205–214 (1997)
2. Balke, K.N., Ullman, G.L., McCasland, W.R., Mountain, C.E., Dudek, C.L.: Benefits of real-time travel information in Houston, Texas. Technical report (1995)
3. Kuhne, R.D., Immes, S.: Freeway control systems for using section-related traffic variable detection. In: Proceedings Pacific Rim TransTech Conference, vol. 1. Seattle, Wash. (1993)
4. Woesler, R.: Fast extraction of traffic parameters and reidentification of vehicles from video data. In: Proceedings of the 2003 IEEE International Conference on Intelligent Transportation Systems, vol. 1, pp. 774–778. IEEE (2003)
5. Shan, Y., Sawhney, H.S., Kumar, R.: Vehicle identification between non-overlapping cameras without direct feature matching. In: Tenth IEEE International Conference on Computer Vision (ICCV 2005), vol. 1, pp. 378–385. IEEE (2005)
6. Guo, Y., Rao, C., Samarasekera, S., et al.: Matching vehicles under large pose transformations using approximate 3D models and piecewise MRF model. In: 2008 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8. IEEE (2008)
7. Watcharapinchai, N., Rujikietgumjorn, S.: Approximate license plate string matching for vehicle re-identification. In: 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 1–6. IEEE (2017)
8. Zhou, Y., Shao, L.: Aware attentive multi-view inference for vehicle re-identification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6489–6498 (2018)
9. Khorramshahi, P., Kumar, A., Peri, N., et al.: A dual-path model with adaptive attention for vehicle re-identification. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 6132–6141 (2019)
10. Shen, F., Zhu, J., Zhu, X., Xie, Y., Huang, J.: Exploring spatial significance via hybrid pyramidal graph network for vehicle re-identification. arXiv preprint arXiv:2005.14684 (2020)
11. Shen, Y., Xiao, T., Li, H., Yi, S., Wang, X.: Learning deep neural networks for vehicle re-ID with visual-spatio-temporal path proposals. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1900–1909 (2017)
12. He, S., Luo, H., Chen, W., et al.: Multi-domain learning and identity mining for vehicle re-identification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 582–583 (2020)
13. Yao, Y., Zheng, L., Yang, X., Naphade, M., Gedeon, T.: Simulating content consistent vehicle datasets with attribute descent. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12351, pp. 775–791. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58539-6_46
14. Deng, W., Zheng, L., Ye, Q., et al.: Image-image domain adaptation with preserved self-similarity and domain-dissimilarity for person re-identification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 994–1003 (2018)
15. Liu, X., Liu, W., Ma, H., Fu, H.: Large-scale vehicle re-identification in urban surveillance videos. In: IEEE International Conference on Multimedia and Expo (ICME) (2016)
16. Liu, X., Liu, W., Mei, T., Ma, H.: A deep learning-based approach to progressive vehicle re-identification for urban surveillance. In: European Conference on Computer Vision (2016)
17. Liu, H., Tian, Y., Yang, Y., Pang, L., Huang, T.: Deep relative distance learning: tell the difference between similar vehicles. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2167–2175 (2016)
18. Meng, D., Li, L., Liu, X., et al.: Parsing-based view-aware embedding network for vehicle re-identification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7103–7112 (2020)
19. Chen, T.S., Liu, C.T., Wu, C.W., Chien, S.Y.: Orientation-aware vehicle re-identification with semantics-guided part attention network. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12347. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58536-5_20
20. He, S., Luo, H., Wang, P., et al.: TransReID: transformer-based object re-identification. arXiv preprint arXiv:2102.04378 (2021)
21. Zhu, X., Luo, Z., Fu, P., Ji, X.: VOC-ReID: vehicle re-identification based on vehicle-orientation-camera. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 602–603 (2020)
22. Bai, S., Tang, P., Torr, P.H., Latecki, L.J.: Re-ranking via metric fusion for object retrieval and person re-identification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 740–749 (2019)
23. Tang, Z., Naphade, M., Birchfield, S., et al.: PAMTRI: pose-aware multi-task learning for vehicle re-identification using highly randomized synthetic data. In: The IEEE International Conference on Computer Vision (ICCV) (2019)
24. Wang, Z., Tang, L., Liu, X., et al.: Orientation invariant feature embedding and spatial temporal regularization for vehicle re-identification. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 379–387 (2017)
25. Sun, Z., Nie, X., Xi, X., Yin, Y.: CFVMNet: a multi-branch network for vehicle re-identification based on common field of view. In: Proceedings of the 28th ACM International Conference on Multimedia, pp. 3523–3531 (2020)
26. Luo, H., Gu, Y., Liao, X., Lai, S., Jiang, W.: Bag of tricks and a strong baseline for deep person re-identification. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (2019)
27. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
28. Deng, J., Dong, W., Socher, R., et al.: ImageNet: a large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. IEEE (2009)
29. Zhong, Z., Zheng, L., Kang, G., Li, S., Yang, Y.: Random erasing data augmentation. In: AAAI, pp. 13001–13008 (2020)
30. Schroff, F., Kalenichenko, D., Philbin, J.: FaceNet: a unified embedding for face recognition and clustering. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 815–823 (2015)
31. Hermans, A., Beyer, L., Leibe, B.: In defense of the triplet loss for person re-identification. arXiv preprint arXiv:1703.07737 (2017)

Opportunities and Challenges of Artificial Intelligence Education in Secondary Vocational Schools Under the Background of Emerging Engineering

Ying Qu1, Liancheng Xu1, Ran Lu1, and Min Gao2

1 Shandong Normal University, Jinan City, China
2 Casey Eye Institute, Oregon Health and Science University, Portland, OR, USA

Abstract. In order to cope with the new generation of scientific and technological revolution and industrial transformation, the construction of emerging engineering is in full swing. As the core technology of the new generation, artificial intelligence has had a disruptive impact on traditional industries and is an important part and module of the construction of emerging engineering. Secondary vocational schools are the cradle for cultivating applied technical talents and high-quality workers, so the goals and content of talent training should closely follow the development trends of social technology and industry. However, artificial intelligence education in secondary vocational schools is still in the exploratory stage. Based on the development process of artificial intelligence education, this paper studies the opportunities and challenges of artificial intelligence education in secondary vocational schools under the background of emerging engineering, in order to provide a theoretical reference for the development of artificial intelligence education in China's secondary vocational schools.

Keywords: Emerging engineering · Artificial intelligence education · Secondary vocational school · Vocational education



1 Introduction

With the rapid development of a new generation of science and technology and modern manufacturing technology, emerging industries such as big data, communication engineering, and intelligent manufacturing have been spawned. The development of emerging industries is inseparable from talents with knowledge of emerging disciplines. Therefore, in response to the technological revolution and industrial transformation, the Ministry of Education has actively promoted the construction of emerging engineering disciplines since February 2017 and accelerated the training of emerging engineering talents.



Due to the gradual maturity and improvement of artificial intelligence-related theories and knowledge and their deep integration with traditional industries, artificial intelligence has a profound impact on modern industries and is an important module in the construction of emerging engineering. Secondary vocational schools are the cradle for cultivating applied technical talents. Under the background of emerging engineering construction and facing the future trend of industrial intelligence, the goals and content of talent training in schools should meet the needs of future society. Schools should carry out artificial intelligence education and cultivate high-quality laborers and skilled talents with artificial intelligence knowledge.

2 Artificial Intelligence Education

There are many combinations of artificial intelligence and education, such as "artificial intelligence + education", educational artificial intelligence, and artificial intelligence education. "Artificial intelligence + education" refers to the deep integration and development of artificial intelligence and education [1]. "Educational artificial intelligence" refers to an intelligent education, teaching, and learning ecosystem that integrates teachers, content, and tools to provide learners with "private customization" [2]. "Artificial intelligence education" has two meanings. The first can be understood as artificial intelligence empowering education, that is, the related technologies and ideas of artificial intelligence are applied to the field of education, causing changes in that field. This article studies the second meaning of artificial intelligence education, in which artificial intelligence itself is taught as a discipline.

3 The Development Background of Artificial Intelligence Education at Home and Abroad

3.1 The Development of Artificial Intelligence Education Abroad

The combination of artificial intelligence and education has been explored for nearly a hundred years [3], but artificial intelligence education only began to gain attention in the United States in the 1980s [4]. Research on artificial intelligence education in the United States started earlier than in other countries, and great progress has been made in the strategic positioning of artificial intelligence education, curriculum development, funding, education scale, and policy formulation [5–7]. In recent years, other countries, such as the United Kingdom, Canada, and Japan, have also begun to realize the importance of cultivating artificial intelligence talents, focusing on promoting the development of artificial intelligence education, and have made certain progress in the construction of courses, majors, and talent training systems [8]. However, on the whole, most of the policies and funds invested by various countries are aimed at basic education or higher-level schools and research institutes, and there are few documents and policies related to artificial intelligence education for secondary vocational schools.

3.2 The Development of Artificial Intelligence Education in China

As of May 10, 2021, searching the CNKI database with "artificial intelligence education" as the theme word and excluding documents with low relevance such as newspapers, books, and achievement announcements, a total of 5,252 documents were retrieved. After manual screening, a total of 2,970 relevant documents in the field of artificial intelligence education in China were finally obtained. Based on these data, this article first visually analyzes the publication volume of literature related to artificial intelligence education in China over the years. It can be seen from Fig. 1 that China's research on artificial intelligence education began in 1984. Before 2002, research related to artificial intelligence education had not attracted the attention of scholars from various institutions. From 2002 to 2016, with the development of Internet-related emerging technologies, artificial intelligence education received a certain degree of attention in China. In 2017, the State Council issued the "New Generation Artificial Intelligence Development Plan", and the number of documents published increased sharply. In 2018, the country successively issued the "Education Informatization 2.0 Action Plan" and the "Artificial Intelligence Innovation Action Plan for Higher Education Institutions"; attention to artificial intelligence education increased to an unprecedented level, and the annual number of publications in 2020 was close to 1,000.

Fig. 1. The number and trend of artificial intelligence education papers published over the years

4 The Development of Artificial Intelligence Education in China's Secondary Vocational Schools

4.1 From the Perspective of the Number of Documents Published Over the Years

In this article, an advanced search was conducted in the CNKI database with the subject terms "artificial intelligence education" and "secondary vocational school". As of May 10, 2021, a total of 70 articles related to artificial intelligence education in secondary vocational schools were obtained.


As can be seen from Fig. 2, research on artificial intelligence education in secondary vocational schools started in 2017, which is consistent with the increasing attention paid to artificial intelligence education in China. In 2018, the Ministry of Education issued a policy clearly mentioning the need to cultivate talents in the field of artificial intelligence in vocational colleges, which attracted widespread attention from the academic community on this topic. Therefore, in 2019–2020, the number of publications increased sharply. The number of papers published in the first half of 2021 reached ten, and it is predicted that the number will continue to increase in the second half of the year.

Fig. 2. The number and trend of papers on artificial intelligence education published in secondary vocational education

4.2 Based on the Number of Papers Published in Different Regions

As shown in Table 1, the total volume of publications in East China and South China accounts for almost half of the national publication volume, which shows that vocational schools in East China and South China pay more attention to and develop artificial intelligence education in secondary vocational schools. The number of master and doctoral papers published in North China is the largest, which shows that scholars in North China have begun to pay attention to the development of artificial intelligence education in secondary vocational schools and have carried out certain research. The numbers in North and Southwest China are comparable and higher than those in Northeast, Central and Northwest China, indicating that secondary vocational schools in North and Southwest regions pay more attention to artificial intelligence education, while the Northeast, Central China and Northwest regions have not yet realized the importance of artificial intelligence education in secondary vocational schools, and their development has lagged behind.


Table 1. Cross-analysis table of document location and journal category

Area            Periodical   Master   Meeting   Total   Percentage
North China     6            3        0         9       12.7%
East China      21           1        1         23      32.4%
Northeast       3            0        0         3       4.2%
Central China   5            0        0         5       7.0%
South China     19           0        1         20      28.2%
Southwest       8            0        0         8       11.3%
Northwest       3            0        0         3       4.2%
Total           65           4        2         71      100%

5 The Necessity of Artificial Intelligence Education in Secondary Vocational Schools

The construction of emerging engineering clearly points out that it is necessary to pay attention to "new technologies such as artificial intelligence and the demand status and changing trends of 11 types of new industries such as intelligent manufacturing for outstanding engineering and scientific talents" [9]. The informatization of traditional industries is the development trend of modern industries, and the intellectualization of traditional engineering disciplines is also part of the construction of emerging engineering disciplines. Secondary vocational schools are an important part of China's education system and are responsible for cultivating hundreds of millions of high-quality laborers and practical talents for China's economic development. In the context of technological revolution and industrial transformation, secondary vocational schools should adapt to changes in the demand for social skills, adjust the goals and content of talent training in a timely manner, and vigorously develop artificial intelligence education.

6 Opportunities and Challenges of Artificial Intelligence Education in Secondary Vocational Schools

6.1 Opportunities for Artificial Intelligence Education in Secondary Vocational Schools

Policy Background. In 2018, the "Artificial Intelligence Innovation Action Plan for Higher Education Institutions" clearly mentioned the need to add artificial intelligence-related content to big data and information management majors in vocational colleges and to cultivate technically skilled talents in the field of artificial intelligence applications [10].


In addition, on August 11, 2015, the "Decision of the State Council on Accelerating the Development of Ethnic Education" emphasized that China should achieve a roughly equal ratio of general to vocational education and noted that free secondary vocational education has basically been achieved, which ensures that secondary vocational schools have a stable and large student population.

Technical Background. Relying on the support of national policies and the efforts of researchers, China's artificial intelligence has achieved considerable development. The number of scientific papers published and invention patents granted in artificial intelligence-related fields ranks second in the world. In the past ten years, machine learning, deep learning, speech recognition, image recognition, and natural language processing have also developed by leaps and bounds. Artificial intelligence-related technologies have been applied to many areas of society, such as national defense, medical care, social services, and education.

6.2 The Challenge of Developing Artificial Intelligence Education in Secondary Vocational Schools

Through the analysis of the literature retrieved in Sect. 3 above, the hot research issues in artificial intelligence education in secondary vocational schools are obtained (Table 2). Combined with an analysis of the status quo of artificial intelligence education in domestic secondary vocational schools, we can identify the specific challenges in developing artificial intelligence education in China's secondary vocational schools.

Table 2. Literature hot research issues and research frequency table

Research problem                        Number of studies
Curriculum system issues                59
Teacher problem                         32
Education base problem                  8
School-enterprise cooperation issues    15

The Artificial Intelligence Education Curriculum System is not Sound. From the perspective of the curriculum, there is still a lack of understanding of the concept, preparation, implementation, and evaluation of the curriculum. In terms of curriculum content, most schools in China conduct artificial intelligence education mainly by teaching information technology-related knowledge, ignoring knowledge from disciplines such as psychology and philosophy, so students cannot gain a comprehensive and in-depth understanding of artificial intelligence.

Weak Artificial Intelligence Teachers. On the one hand, most of the existing information technology teachers in secondary vocational schools are "specialized in technical professions" and lack an artificial intelligence-related knowledge background. On the other hand, the number of artificial intelligence talents produced by universities is very limited, and few of them teach artificial intelligence courses in secondary vocational schools; artificial intelligence technical talents in enterprises are well paid, and the possibility of their leaving industry to teach is extremely low.


The Construction of Artificial Intelligence Education Bases Needs to be Improved. The artificial intelligence education base is an important place for carrying out artificial intelligence education. However, how secondary vocational schools can use their limited funds to build, maintain, update, and use artificial intelligence education bases, and what specific equipment should be introduced into such bases, remain to be resolved.

Insufficient Motivation for Enterprises to Participate in Running Schools. Due to the shortcomings of the guarantee mechanism for enterprises participating in school running, such as excessive emphasis on enterprises' obligations and neglect of the protection of their rights, the motivation of enterprises to participate in vocational education is insufficient, which affects the development of artificial intelligence education in secondary vocational schools.

7 System Construction of Artificial Intelligence Education in Secondary Vocational Schools

7.1 Improve the Artificial Intelligence Education Curriculum System

Schools should clarify the ideas behind course development and the course goals, pay attention to improving artificial intelligence literacy and training operating skills, and ensure that students are trained as qualified applied technical personnel. The content of the course should not be limited to computer-related knowledge but should also involve comprehensive knowledge from psychology and philosophy. In addition, in line with the cognitive development laws and training goals of secondary vocational students, content preparation should mainly focus on the experience and practice of artificial intelligence technology.

7.2 Speed up the Training of AI Teachers and Improve Teachers' AI Literacy

On the one hand, digital education resources should be developed jointly with artificial intelligence experts, relevant university teachers, and technical talents so that students can carry out online learning. On the other hand, secondary vocational schools must offer certain preferential policies to attract college graduates and enterprise technical talents to form artificial intelligence course teaching and research teams, so as to ensure the continuous development and progress of artificial intelligence courses in secondary vocational schools.

7.3 Improve the Construction of Artificial Intelligence Education Base

First of all, secondary vocational schools can use computer rooms to carry out learning with digital education resources. Secondly, VR equipment can be introduced; using VR technology, students can experience and operate existing artificial intelligence-related technologies as if they were on the scene and experience artificial intelligence application scenarios. Finally, with the help of social resources, secondary vocational students can visit science and technology museums and artificial intelligence enterprises to carry out further learning and exploration.


7.4 Strengthen School-Enterprise Cooperation and Promote Enterprises to Participate in Running Schools

First of all, school-enterprise cooperation should be strengthened: enterprises cooperate with schools to develop artificial intelligence education and training systems in accordance with the required talent standards. In addition, the revision of relevant laws and policies should be accelerated to establish the specific status of enterprises participating in the running of schools, formulate specific implementation rules, provide concrete implementation paths for cooperative school running, and guide enterprises to participate in running schools.

8 Conclusion

This article reviews the development history of artificial intelligence education at home and abroad and the status quo of its development in China's secondary vocational schools. It analyzes in depth the necessity, opportunities, and challenges of artificial intelligence education in China's secondary vocational schools and, on this basis, gives a future development path, aiming to provide theoretical guidance for secondary vocational schools in carrying out artificial intelligence education. The continuous development of artificial intelligence technology has brought convenience to people's lives and has also continuously updated society's demand for labor. As the cradle of the country's new type of skilled talents, it is an inevitable trend for secondary vocational schools to carry out artificial intelligence education. Although there are still many difficulties and challenges, with the active adjustment and development of the country, society, and schools, artificial intelligence education in secondary vocational schools will become more mature and complete.

References
1. Wu, Y.H., Liu, B.W., Ma, X.L.: Constructing an ecosystem of "artificial intelligence + education". J. Distance Educ. 35(05), 27–39 (2017)
2. Yi, L.Y.: A Research on Internet Education and Education Revolution. Central China Normal University, China (2017)
3. Gong, W.Y.: Introduction to program teaching. J. Psychol. Sci. 1965(02), 19–24 (1965)
4. Han, Q.Q., Cai, L.Y.: Research on the planning goals and characteristics of artificial intelligence education in American primary & secondary schools. Stud. Foreign Educ. 48(01), 115–128 (2021)
5. Rasser, M., Lamberth, M., et al.: The American AI Century: A Blueprint for Action. Center for a New American Security 13, Washington DC (2019)
6. Patrick, T.: What the CIA's Tech Director Wants from AI. http://www.Defenseone.com/technology/2017/09/cia-technology-director-artificial-intelligence/140801/. Accessed 2 March 2020
7. The White House Report: Prepare for the future of artificial intelligence. Inf. Secur. Commun. Secrecy 2016(12), 9–14 (2019)
8. Japan Uchisekifu Integrated Innovation Strategy Promotion Council: AI Strategy 2019. People Industry Regions Government All AI [R/OL]. https://www.kantei.go.jp/jp/lsingi/ai_senryaku/pdf/aistratagy2019.pdf. Accessed 2 May 2021
9. Lin, J.: Emerging engineering disciplines construction: an updated version of "the plan for educating and training outstanding engineers" with a strong effort. Res. High. Educ. Eng. 2017(03), 7–14 (2017)
10. Notice of the Ministry of Education on the issuance of the "Innovation Action Plan for Artificial Intelligence in Higher Education Institutions". Bull. Ministry Educ. People's Repub. China 2018(04), 127–135 (2018)

Research on the Hotspots and Trends of Student Portrait Based on Citespace

Jie Lv1, Ran Lu1, and Zhe Liu2

1 Shandong Normal University, Jinan City, China
2 Graduate School of Sciences and Technology for Innovation, Yamaguchi University, 1677-1, Yoshida, Yamaguchi, Japan

Abstract. In order to better play the role of student portrait technology in the field of education, this article presents and analyzes the research hotspots and research trends in this field. Taking the literature related to student portraits in CNKI as the research object, the author cooperation map, keyword clustering map, and timeline map are drawn using CiteSpace. This paper analyzes the distribution of scientific research forces, research hotspots, and research trends in this field. The study shows that student portrait is an emerging field of research and that there is a lack of cooperation between research scholars; research hotspots in this field focus on big data, personalized education, student management, and other topics; smart campus, student behavior, big data, data mining, and other information technologies represent the future research trends in the field of student portrait. Based on the research conclusions, this paper gives suggestions for future research on student portrait.

Keywords: Student portrait · CiteSpace · Visualization research · Research hotspots · Research trends

1 Introduction

In the information age, one of the key tasks in the modernization of education is to promote educational informatization. It is necessary to promote big data, the Internet, and other information technologies to serve the whole process of education and teaching. At present, student portrait technology supported by student big data has become an important aspect of educational informatization. A student portrait is a labeled data model [1] constructed from students' data, and its concept is extended from the definition of user portrait. "User portrait" was first proposed by Cooper and was initially widely used in the commercial field; it is a consumer model established after deep mining of the data generated by consumers in the process of consumption [2]. With the widespread application of information technology in the field of education, a large amount of data related to students has been generated. Therefore, scholars have introduced the concept of user portrait into the field of education and built accurate student portraits based on educational big data, which is of great significance for understanding learners in depth, providing accurate teaching for learners, and realizing personalized education.



At present, scholars' research on student portraits mainly focuses on the construction process of student portraits, and there are few discussions on the development process of and visualization research on student portrait technology. Therefore, this paper takes the literature on student portraits collected in CNKI as the research object and uses CiteSpace to visually analyze the related documents, in order to explore the research hotspots and development trends in this field and to provide a reference for further research on student portrait technology.

2 Data Source and Research Method

2.1 Data Source and Processing

China National Knowledge Infrastructure (CNKI) is the authoritative Chinese journal literature database in China. It includes important scientific research results, and its academic literature is updated quickly, so it can reflect the latest scientific research achievements. Therefore, this paper chooses the academic literature on student portrait research in CNKI as the research object, which ensures the authority of the research data and results. Before the literature search, an advanced search was set up with the theme "student portrait", the search condition "accurate", and the time span "2010–2021". It was found that literature on student portraits only started to appear in 2015. After excluding non-research documents such as conference announcements and notices, a total of 131 qualifying articles were retrieved as the research data of this paper.

2.2 Research Tool and Method

CiteSpace, translated as "citation space" [3], is a software system developed by the scholar Chen Chaomei. The function of this software is to display research hotspots and development trends in a given academic field in a visual way by generating knowledge maps. Compared with purely descriptive analysis, CiteSpace uses methods from mathematics and statistics to map the development status of a research field along the dimensions of time and space [4]. In this paper, CiteSpace 5.7.R5 was installed, and Excel was used for the statistical and visual presentation of the annual literature published from 2015 to 2021.

3 Student Portrait Literature in Time and Space Distribution

3.1 Time Distribution of Literature

The annual published literature can show the progress of research on this topic along the dimension of time. As can be seen from Fig. 1, the first literature related to student portraits was published in 2015. From 2015 to 2019, the number of published articles was still small, but the overall number showed an increasing trend. By 2020, the number of published articles reached its maximum of 54 per year, and in the first five months of 2021, the number of articles on student portraits reached 21.


Based on Fig. 1, we can draw two conclusions. First, the study of student portraits is a relatively emerging research field, with research results appearing successively since 2015. Second, although research in this field started late, it is booming on the whole, and with the development of information technology, it will become an important research field in the future.

Fig. 1. Time distribution of the number of student portrait literature.

3.2 Distribution Map of Literature Authors

Through the co-occurrence map of authors, we can identify the core research strength of a research field and analyze the scientific cooperation between authors. Using CiteSpace, we obtain the author co-occurrence map shown in Fig. 2. As can be seen from the map, some authors formed research groups of four or five people, suggesting that some scholars have cooperative relationships in the study of student portraits. However, viewing the graph as a whole, most nodes are connected in groups of two or three and there are many isolated nodes, indicating that a core research team has not yet formed in this field: the cooperation scale is small, mostly 2 to 3 people, and many scholars are conducting independent research.

Fig. 2. Co-occurrence map of authors


4 Student Portrait Research Hotspots Analysis

4.1 Keyword Statistical Analysis

Keywords reflect the research theme of the literature and are a high-level summary of the research content [5]. Keyword frequency refers to the number of times a keyword appears in the literature. Centrality refers to the degree of association between a keyword and other keywords; the greater the centrality, the closer the connection between keywords. This paper uses CiteSpace to analyze the co-occurrence of keywords in the research literature and counts the frequency and centrality of the keywords. The statistical results are shown in Table 1. Since "student portrait" is the research topic and index keyword, it is of little significance to the analysis, while the keywords "big data", "user portrait", and "data mining" have high frequency, indicating that these topics are the focus of current research. The centrality of "data mining" is the largest at 1.05, indicating that it is most closely related to other keywords, which further indicates that this keyword is a basic term in the research field of student portraits and that relevant research should be carried out around this topic. In addition, keywords such as "smart campus", "hadoop", and "big data" are also closely related to other keywords.

Table 1. Frequency and centrality of keywords

Number   Keyword            Frequency   Centrality
1        Student portrait   45          0.79
2        Big data           38          0.37
3        User portrait      18          0.07
4        Data mining        15          1.05
5        Hadoop             6           0.44
6        Smart campus       6           0.75

4.2 Keyword Clustering Map and Research Hotspots Analysis

The keyword clustering map gathers related keywords into clusters and assigns a value to each keyword; the keyword with the largest value in each cluster becomes the name of the cluster. A keyword clustering map with 157 nodes and 250 connections can be generated with CiteSpace, as shown in Fig. 3. For the keyword cluster map, there are two important criteria for the generated clusters: Modularity Q > 0.3 means that the similarity between clusters is low and the theme of each cluster is distinct; Mean Silhouette > 0.5 means that keywords within each cluster are closely linked and the clustering has high reliability. The keyword cluster map generated in this study has Modularity Q = 0.8449 and Mean Silhouette = 0.9608, indicating that the map is significant and credible.


Fig. 3. Clustering map of keyword co-occurrence network

On this basis, the specific information of each cluster, including the cluster name and the number of keywords in the cluster, is exported from CiteSpace, as shown in Table 2, in order to further study the hot topics in the field of student portraits.

Table 2. Clustering table of keywords co-occurrence network.

Cluster ID   Cluster name                Cluster size   Keywords
0            Student portrait            24             Student portrait, comprehensive quality, accurate teaching, student management, recommendation algorithm
1            Big data                    19             Big data, big data in education, portrait technology, individualized education, employment guidance
2            Hadoop                      17             Hadoop, data warehouse, behavior analysis, student behavior portrait, big data platform
3            Data mining                 17             Data mining, smart campus, behavior portrait, big data algorithm, location service
4            Data analysis               12             Data analysis, student behavior, portrait of college student, campus big data, big data technology
5            Student management system   10             Data governance, data sharing, education decision, data center, data communication
6            Individualized teaching     9              Learner portrait, individualized teaching, data analysis, online education, ability training


According to the frequency and centrality of the keywords, the keywords contained in each cluster in the clustering map, and the relevant research literature, the hotspots of student portrait research can be summarized as follows.

Research on Student Portrait Based on Big Data. "Big data", "big data in education", and "campus big data" in clusters 1 and 4 reflect the research heat of big data in this field. Student data is the basis for constructing student portraits. Before constructing student portraits, researchers first obtain data about the students. For example, Xiao Jun et al. collected student data along the three dimensions of students' basic characteristics, behavioral characteristics, and learning environment characteristics [6]. Yu Minghua et al. collected and preprocessed data on high school students and their projects [7]. Sun Faqin et al. collected online learning behavior data of online course learners in a university [8], then carried out data cleaning and data mining on the collected data to determine the dimensions of the students' labels, and finally obtained accurate portraits based on the student data. Therefore, a student portrait is essentially a model based on student data, and the construction process of a student portrait is also a process of processing student data.

Student Management Research Based on Student Behavior Portrait. By tracking students' behavior trajectories and collecting students' behavior data, students' behavior portraits can be constructed. Effective management of students can be realized through these behavior portraits, and targeted educational measures can be taken when dynamic changes occur in students' studies. The keywords involved are "student management" and "student management system" in clusters 0 and 5. Yu Fang et al. constructed student behavior portraits from the data in the student one-card data system, so as to achieve the goal of student management and guidance [9]. Ge Suhui et al. constructed a student portrait based on a feature matrix in their research to track and manage students' thoughts and behaviors [10].

Individualized Education Research Based on Student Portrait. To realize individualized education for students, it is necessary to respect students' personality differences and understand their learning characteristics and learning styles. A student portrait is a data model that reflects students' learning characteristics. Therefore, scholars study how to promote students' personalized learning through student portraits. "Individualized education" and "individual teaching" in clusters 1 and 6 indicate the research heat of individualized education. Liu Haiou et al. mined students' personalized learning needs from student portraits and provided students with personalized service resources [11]. Chen Haijian et al. discussed how to realize personalized teaching based on learners' personality portraits [12]. Student portraits can reflect students' personalized characteristics, so how to use student portraits to meet students' personalized learning needs has become a research hotspot in this field.


5 Research Trends of Student Portrait

The keyword timeline map shows the development trend of the field by displaying the update and evolution of keywords over time. As can be seen from Fig. 4, scholars have been paying attention to student portrait technology since 2015. In 2017, there was a major breakthrough in the study of student portraits: "smart campus" began to attract scholars' attention. Li Guangyao [13], Deng Jiaming [14], and others conducted in-depth research on the topic and argued that the development of student portraits plays an important role in promoting the construction of smart campuses; related research on this topic continued until 2020. In addition, from 2017 to 2019, keywords such as "behavior portrait", "behavior trajectory analysis", and "behavior prediction" continued to appear, indicating that researchers focused on student behavior during this period. Keywords related to information technology, such as "big data", "data mining", and "hadoop", also began to appear in research in the field of student portraits in 2017; they have been research hotspots since their appearance, and the research enthusiasm has continued into 2021. This shows that since 2017, scholars have paid more attention to the application of information technology in the field of student portraits.

Fig. 4. Timeline map of keywords

6 Conclusions and Suggestions

From the perspective of the number of research documents, student portrait is an emerging research field. Scholars started research in this field relatively late, and the first relevant document appeared in 2015; however, the number of documents has shown a year-by-year increasing trend. According to the distribution map of researchers, the majority of researchers lack communication and cooperation.


Therefore, researchers should strengthen close cooperation and form a cohesive and stable core research group. Follow-up research requires scholars to continue to work hard, increase the frequency of publication, and produce more scientific research results. Researchers should participate in more research activities and academic conferences to increase the opportunities for cooperation with other scholars. From the perspective of the research hotspots and trends of student portraits, as research on student portraits has only arisen in recent years, recent research hotspots overlap with future research trends, and topics with high research popularity will remain important research directions in the next few years. Research on student portraits mainly focuses on big data, personalized education, student management, and other topics. Therefore, attention should be paid to the application of big data algorithms, data mining, and other technologies in the field of student portraits, and more efficient algorithms and models should be applied to the construction process of student portraits to improve their accuracy. Future research will focus on smart campus, student behavior, and the application of big data, data mining, and other information technologies in the field of student portraits. Therefore, it is necessary to make student portraits better serve the field of education and give play to their advantages in realizing personalized education, scientific and effective student management, and the construction of smart campuses.

References
1. Ren, H.J.: Accurate teaching based on big data: generation paths and implementation conditions. Heilongjiang Res. High. Educ. (9), 165–168 (2017)
2. Liu, H., Lu, H., Ruan, J.H., et al.: Research on precision marketing segmentation model based on mining "persona". J. Silk 52(12), 37–42+47 (2016)
3. Chen, C.M.: CiteSpace II: detecting and visualizing emerging trends and transient patterns in scientific literature. J. Am. Soc. Inf. Sci. Technol. 57(3), 359–377 (2006)
4. Shu, Y.Y., Li, H.F., Wang, L.L.: Current situation and development trend of Chinese psychobiography research: based on knowledge mapping of CiteSpace. J. Central China Normal Univ. (Humanit. Soc. Sci.) 58(4), 185–192 (2019)
5. Li, B.B., Xu, M.X., Gong, C., et al.: Hotspots and trends in international soil quality research. J. Nat. Resour. 32(11), 1983–1998 (2017)
6. Xiao, J., Qiao, H., Li, X.J.: Construction and empirical study of online learners' persona in the big data environment. Open Educ. Res. 25(4), 111–120 (2019)
7. Yu, M.H., Zhang, Z., Zhu, Z.T.: Research on the construction of student portrait in research-based learning based on visual learning analytics. China Educ. Technol. (12), 36–43 (2020)
8. Sun, F.Q., Dong, W.C.: Research on online learning user portrait based on learning analysis. Mod. Educ. Technol. 30(4), 5–11 (2020)
9. Yu, F., Liu, Y.S.: Big data portrait: an effective way to realize data-based governance in higher education. Jiangsu High. Educ. (3), 50–57 (2019)
10. Ge, S.H., Wan, Q., Bai, C.J.: Hadoop-based college student behavior warning decision system. Comput. Appl. Softw. 38(1), 6–12 (2021)
11. Liu, H.O., Liu, X., Yao, S.M., et al.: Precision service model for individual learning based on in-depth portrait of big data. Res. Libr. Sci. (15), 68–74 (2019)
12. Chen, H.J., Dai, Y.H., Han, D.M., et al.: Learner's portrait and individualized teaching in open education. Open Educ. Res. 23(3), 105–112 (2017)
13. Li, G.Y., Song, W.G., Xie, Y.Q., et al.: Research on student profiling method for smart campus. Mod. Electron. Tech. 41(12), 161–163+167 (2018)
14. Deng, J.M.: Research on creation ways of data image of students in intelligent campus. Mod. Educ. Technol. 42(21), 58–62 (2019)

An Overview on Digital Content Watermarking

Wang Qi1, Bei Yue1, Chen Wangdu1, Pan Xinghao1, Cheng Zhipeng1, Wang Shaokang2, Wang Yizhao2, and Wang Chenwei3

1 MIGU Video Co., Ltd., Beijing, China
2 Beijing University of Posts and Telecommunications, Beijing 100876, China
3 DOCOMO Innovations, Inc., Palo Alto, CA 94304, US

Abstract. With the rapid development of multimedia technology, balancing multimedia visual enjoyment and information security has become one of the hot spots of current research. Due to the complexity and cost of video content production, the corresponding video services face many security challenges, and higher requirements for the protection of video content copyright have also been put forward. Digital watermarking technology is a typical information hiding method, which covers text, image, and video. In the case of copyright disputes, the watermark has become an important way to confirm copyright and identity information. Digital watermarking technology uses the redundancy and randomness of media content data to embed the ciphertext of a copyright identifier into the protected data; except during data processing, it remains invisible, ensuring high security. This paper summarizes the current research progress of digital watermarking technology and digital watermarking algorithms, and then puts forward possible research directions for the future.

Keywords: Video · Digital watermark · Digital rights management · Attack · Robustness


1 Introduction

The modern information society puts forward higher requirements for the copyright protection of video data. The key technologies of DRM (digital rights management) include authentication technology, encryption technology, digital watermarking technology, tamper-proof hardware modules, and smart card technology. Among them, digital watermarking is a typical information hiding method, and its carrier includes digital data, text, images, audio, and video [1]. The application fields of digital watermarking are copyright protection and hidden identification. Regular algorithms can be used to effectively protect still images, audio, video, and other digital objects; this not only avoids damaging the original audio and video content but also helps establish copyright ownership and prevent copyright infringement. The embedded information is not visible during normal use; it can only be obtained with a detector and reader, and it travels together with the original data [2].



The characteristics of digital watermarking technology include the following. Security: the digital watermark is difficult to detect, erase, tamper with, or forge, and has a low false alarm rate [3]. Provability: the digital watermark should provide complete and reliable evidence of the ownership of the host data. Imperceptibility: the watermark is not perceptible, either to the senses or statistically, and legitimate users cannot perceive its existence during use. Robustness: the digital watermark should be difficult to erase, and any attempt to completely destroy the watermark should noticeably degrade the quality of the carrier (Fig. 1).

Fig. 1. Digital watermarking system.

In order to better study digital watermarking, this paper summarizes the research results of digital watermarking at this stage, discusses the current mainstream digital watermarking algorithms, compares several algorithms, analyzes the research progress of current digital watermarking technology, and finally puts forward the possible research direction in the future.

2 Research Status of Digital Watermarking

By visibility, digital watermarks are generally divided into visible and invisible watermarks. A visible watermark can be perceived by the human eye, while an invisible watermark hides the watermark information in the original carrier so that the human perceptual system cannot detect its presence. Because of its good concealment, the invisible watermark is usually used for covert communication and copyright protection. Invisible watermarks can be further divided into robust watermarks and fragile watermarks [5]. A good watermarking algorithm should be robust to signal processing, geometric deformation, malicious attacks and so on. By robustness, watermarks fall into two categories: robust watermarks, which robustly carry the owner's information and protect copyright against data processing such as lossy compression and resampling, and fragile watermarks, which carry content authentication information and generally do not need to consider malicious attacks. Robust and fragile watermarking are designed for these two scenarios respectively.


Robustness means that even if the video data is attacked or damaged, the embedded watermark can still be extracted and detected [6]. Robust watermarks are therefore often used for data copyright protection, while a fragile watermark changes accordingly after the video is processed and is mainly used to verify data integrity. Although commercial watermarking systems exist, watermarking research is still far from mature: robustness, authenticity identification, copyright certification, fast automatic network verification, audio and video watermarking and other problems still lack satisfactory solutions.

By the embedding domain, watermarks can be divided into spatial-domain and transform-domain watermarks. Spatial-domain watermarking embeds the watermark directly into the carrier without any transformation of the original data; the earliest spatial algorithm is the least significant bit algorithm [4]. Spatial algorithms are relatively simple and offer a relatively large embedding capacity, but their robustness is limited. Transform-domain watermarking first transforms the carrier, for example with DWT (discrete wavelet transform), DCT (discrete cosine transform) or DFT (discrete Fourier transform), converting it from the spatial domain to the frequency domain, and then embeds the watermark according to the watermarking algorithm.

By whether detection requires the original carrier, watermarks can be divided into blind, semi-blind and non-blind watermarks. A blind watermarking algorithm does not need the original carrier when extracting the watermark [7]; blind algorithms therefore have a wide application range and strong practicability. Semi-blind watermarking needs some of the original information to extract the watermark correctly. Non-blind watermarking requires both the original carrier image and the original watermark at extraction and detection time [8]; it offers good robustness, but because the original carrier is needed during extraction, the risk of leakage increases. For this reason, blind watermarking algorithms are the most commonly used in practice.

By whether the original carrier can be recovered losslessly, watermarks can be divided into reversible and irreversible watermarks. With a reversible watermark, the original carrier can still be recovered losslessly after the watermark is embedded and later extracted; with an irreversible watermark, the original carrier cannot be recovered losslessly after the watermark is extracted.

3 Some Specific Algorithms

3.1 Typical Spatial Algorithm

LSB (least significant bit) embedding works on the least important bits of the carrier: its principle is to replace the least significant bits of the carrier data directly with the watermark information. This is a typical spatial information hiding method. The biggest advantage of the algorithm is its large embedding capacity. However, because the implementation principle is simple and the embedding positions are strongly regular, the watermark added by this algorithm is generally a fragile one, and the effective information cannot be extracted after an attack. The algorithm is also easy to decode: if the exact embedding positions are known, the hidden information can easily be erased or tampered with (Fig. 2).

Fig. 2. Least significant bit (LSB) schemes.
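To make the mechanism concrete, the following minimal sketch (not from the original paper) embeds and recovers a bit string in the least significant bits of 8-bit pixels using NumPy; the random "image" and watermark are illustrative placeholders.

```python
import numpy as np

def lsb_embed(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Replace the least significant bit of the first len(bits) pixels."""
    flat = cover.flatten().astype(np.uint8)           # flatten() returns a copy
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # clear the LSB, then set it
    return flat.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the watermark back from the least significant bits."""
    return stego.flatten()[:n_bits] & 1

# Illustrative data: a random 8-bit "image" and a 64-bit watermark.
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
watermark = rng.integers(0, 2, size=64, dtype=np.uint8)

stego = lsb_embed(cover, watermark)
assert np.array_equal(lsb_extract(stego, watermark.size), watermark)
print("max pixel change:", np.abs(stego.astype(int) - cover.astype(int)).max())  # at most 1
```

Because each pixel changes by at most one intensity level the watermark is imperceptible, but a single re-quantization or lossy compression pass can wipe it out, which matches the fragility noted above.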

3.2 Typical Transform Domain Algorithm

The DCT approach superimposes the watermark on the first k coefficients with the largest amplitude in the DCT domain, usually the low-frequency components of the image. If the first k largest DCT coefficients are denoted D = {d_i}, i = 1, ..., k, and the watermark is a random real sequence W = {w_i}, i = 1, ..., k, drawn from a Gaussian distribution, the embedding rule is d_i' = d_i (1 + a·w_i), where the constant a is a scale factor controlling the embedding strength. The watermarked image I* is then obtained by inverse transformation with the new coefficients. The decoder computes the discrete cosine transform of the original image I and of the watermarked image I*, extracts the embedded watermark W*, and performs a correlation test to decide whether the watermark is present. The strength of this method is that a reliable watermark copy can still be extracted even when the watermarked image has been visibly deformed by common geometric deformations and signal processing operations. A simple improvement is to embed the watermark not in the low-frequency components of the DCT domain but in the intermediate-frequency components, balancing the trade-off between robustness and invisibility.

The Fourier transform is a very important algorithm in digital signal processing. DFT-based watermarking hides the watermark in the phase information of the transformed carrier, which gives the watermarked image good affine invariance. According to Hayers' theory that "from the perspective of image intelligibility, phase information is more important than amplitude information", Ruanaith et al. proposed a DFT-domain watermarking algorithm that embeds the watermark data into the phase of the transformed carrier image. The DFT decomposes the signal into phase and amplitude, which carries richer detail information; the embedded watermark can resist rotation, scaling and translation of the image, and this approach has found increasingly wide use.

Compared with the traditional DCT, the discrete wavelet transform is a variable-resolution analysis method that combines the time and frequency domains: the size of the time window adjusts automatically with frequency, which is more consistent with human visual characteristics. Wavelet analysis has good locality in both time and frequency, combining the strengths of traditional time-domain and frequency-domain analysis, and it is already widely used in digital image and video compression coding, computer vision, texture feature recognition and other fields. Because many properties of wavelet analysis in image processing can be exploited for information hiding, its application to information hiding and digital watermarking has attracted growing attention, and many typical watermarking algorithms based on the discrete wavelet transform exist.

The three transforms can be compared as follows. The main advantage of the DFT is that it decomposes the signal into phase and amplitude, and it handles most geometric attacks, such as scaling, translation and rotation, well; however, its image compression performance is worse than that of DCT and DWT and its visual quality is limited, so DFT-based watermarking is rarely used at present. The DCT offers high information packing efficiency, a low bit error rate, high computational efficiency and a high compression ratio, which has made it an important image coding technology; the algorithm is also simple and fast. The DWT can represent the local characteristics of a signal in both the frequency and time domains, handles compression attacks well and is friendly to human vision; it is a mature transform whose main drawback is the large amount of computation.
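As a hedged illustration of the spread-spectrum idea described above (d_i' = d_i(1 + a·w_i) on the k largest DCT coefficients, with detection by correlation against the original image), the sketch below uses SciPy's 2-D DCT; the choice of k, the strength a and the random stand-in image are assumptions made only for demonstration.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(x):  return dct(dct(x, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(x): return idct(idct(x, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed(image, watermark, k=1000, alpha=0.1):
    coeffs = dct2(image.astype(float))
    flat = coeffs.flatten()
    idx = np.argsort(np.abs(flat))[::-1][1:k + 1]   # k largest coefficients, skipping DC
    flat[idx] *= (1.0 + alpha * watermark)          # d_i' = d_i (1 + a * w_i)
    return idct2(flat.reshape(coeffs.shape)), idx

def detect(original, suspect, idx, alpha=0.1):
    d  = dct2(original.astype(float)).flatten()[idx]
    ds = dct2(suspect.astype(float)).flatten()[idx]
    return (ds - d) / (alpha * d)                   # estimate of the embedded sequence

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(128, 128)).astype(float)   # stand-in for a real image
w = rng.standard_normal(1000)                                # Gaussian watermark sequence
marked, idx = embed(img, w)
w_est = detect(img, marked, idx)
print(f"correlation with true watermark: {np.corrcoef(w, w_est)[0, 1]:.3f}")  # close to 1
```

Note that, as in the description above, this is a non-blind scheme: the original image is needed at detection time.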

4 Attacks on Digital Watermarking

In a digital watermarking system, any process that affects watermark detection or extraction, or that damages watermark transmission, can be called an attack [9]. Such an attack distorts the watermark content and may even make it impossible to extract normally. Because anyone can attack the watermark of digital media content at very low cost, the digital watermark must be robust enough. Attacks on digital watermarking usually include the following types: signal processing attacks, geometric attacks, protocol attacks and cryptographic attacks.

4.1 Signal Processing Attacks

Signal processing attacks target the watermarking algorithm and content directly. They mainly include active attacks, passive attacks and removal attacks. An active attack cracks and exploits the watermark encryption and embedding algorithms [10]. A passive attack only obtains the watermark content without processing it. A removal attack adds noise interference or lossy compression to the watermarked carrier in order to delete as much of the watermark content as possible.

4.2 Geometric Attacks

A geometric attack differs from the removal attack described above. A removal attack aims to remove the watermark content itself or the watermark-related information in the carrier. A geometric attack instead inserts additional content or distorts the digital media content, destroying the synchronization between watermark detection and the watermark extractor and causing extraction errors. The main means of geometric attack are image operations such as scaling, cropping, stretching and rotation, as well as inserting mosaics into the image.
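In practice, robustness against these operations is evaluated empirically by generating attacked copies and re-running the detector on each. A minimal sketch with Pillow is shown below; `embed`, `detect` and the file name are hypothetical placeholders for whatever watermarking scheme is under test.

```python
from PIL import Image

def geometric_attacks(img: Image.Image):
    """Yield (name, attacked_image) pairs for common geometric attacks."""
    w, h = img.size
    yield "scale_0.5x", img.resize((w // 2, h // 2)).resize((w, h))
    yield "rotate_5deg", img.rotate(5, expand=False)
    yield "crop_10pct", img.crop((w // 10, h // 10, w, h)).resize((w, h))
    yield "stretch", img.resize((int(w * 1.2), h)).resize((w, h))

# Hypothetical usage: `watermarked.png` and `detect` are not defined here.
# watermarked = Image.open("watermarked.png").convert("L")
# for name, attacked in geometric_attacks(watermarked):
#     print(name, "watermark found:", detect(attacked))
```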

4.3 Protocol Attacks

A protocol attack does not aim to destroy the digital watermark itself but to attack the watermarking application. Such attacks mainly include reversible attacks and replication attacks. Copyright protection applications may require irreversible watermarks that can never be removed from the watermarked document. In a reversible watermark attack, the attacker erases the watermark from the watermarked data and claims to be its owner, making the original ownership of the data ambiguous. In a replication attack, the attacker estimates the watermark from watermarked digital media and copies it into some other target data, so the attacker can claim to hold both the original image and the image after watermark insertion.

4.4 Cryptographic Attacks

A cryptographic attack uses methods from cryptanalysis to attack the digital watermark. Common examples are brute force attacks and Oracle attacks. A brute force attack uses exhaustive search to detect the embedded encrypted information. In an Oracle attack, the watermarked image is repeatedly processed with the help of a watermark detector or application until a watermark-free image is obtained. These attacks have high computational cost, so they are used less frequently.

5 Possible Future Research Directions

In recent years, artificial intelligence and deep learning have shown increasingly prominent advantages in video and image processing. Common attack methods can be learned through deep learning and the attacked carrier image can then be "repaired", enabling intelligent watermark embedding, attack detection and intelligent watermark extraction, which helps resist attacks on robustness.

In addition, with the development and innovation of blockchain technology, combining digital watermarking with blockchain may become an important topic for the future. Such a combination can build on the blockchain's decentralization, resistance to forgery and tampering, openness and transparency, traceability and distributed encrypted storage. Blockchain technology can fully record detailed information about works and their creators and make it public; all nodes can record and maintain it together; and a tamper-proof trusted timestamp provides copyright evidence with strong legal effect. The blockchain can also record traces of use and transactions and trace the whole process back to the copyright at the source, and what it records is irreversible and cannot be tampered with. All of this supports the security, reliability and robustness of digital watermarking applications.

6 Conclusion

Because of the interactive nature of multimedia data, information is easy to copy, which makes digital watermarking an important research field. Digital watermarking is an important tool for image authentication, integrity verification, tamper detection, copyright protection and image security. In this paper we reviewed the most important and advanced watermarking technologies and discussed the general situation, characteristics, techniques, attacks and applications of digital watermarking. Watermarking techniques can be divided into spatial-domain and frequency-domain approaches; in the frequency domain (DCT, DFT, DWT) the embedded information can spread over the whole image, so these techniques are more robust than spatial watermarking. A main purpose of watermarking research is to withstand geometric attacks and signal processing attacks, but no single watermarking technique can resist all attacks performed on digital media, so there is still much room for the development of robust watermarking. In the future, exploiting different robustness features and appropriate embedding techniques can further strengthen watermarking and lead to truly transparent, secure and reliable technology.

References 1. Lai, C., Tsai, C.: Digital image watermarking using discrete wavelet transform and singular value decomposition. IEEE Trans. Instrum. Measur. 59(11), 3060–3063 (2010) 2. Ernawan, F., Kabir, M.N.: A blind watermarking technique using redundant wavelet transform for copyright protection. In: IEEE 14th International Colloquium on Signal Processing & Its Applications (CSPA), pp. 221–226 (2018)


3. Panchal, U.H., Srivastava, R.: A comprehensive survey on digital image watermarking techniques. In: Fifth International Conference on Communication Systems and Network Technologies, pp. 591–595 (2016) 4. Ayangar, V.R., Talbar, S.N.: A novel DWT-SVD based watermarking scheme. In: International Conference on Multimedia Computing and Information Technology (MCIT), pp. 105–108 (2010) 5. Belet, P., Dams, T., Bardyn, D., Dooms, A.: Comparison of perceptual shaping techniques for digital image watermarking. In: 16th International Conference on Digital Signal Processing, pp. 1–6 (2009) 6. Jamal, A., Alkawaz, M.H., Fatima, M., Ab Yajid, M.S.: Digital watermarking techniques and its application towards digital halal certificate: a survey. In: IEEE 7th Conference on Systems, Process and Control (ICSPC), pp. 242–247 (2019) 7. Mathur, E., Mathuria, M.: Unbreakable digital watermarking using combination of LSB and DCT. In: International conference of Electronics, Communication and Aerospace Technology (ICECA), pp. 351–354 (2017) 8. Dixit, A., Dixit, R.: A review on digital image watermarking techniques. Int. J. Image Graph. Signal Process. 9(4), 56–66 (2017) 9. Begum, M., Uddin, M.S.: Digital image watermarking techniques: review. Information 11 (2), 110 (2020) 10. Mohanarathinam, A., Kamalraj, S., Prasanna Venkatesan, G.K.D., Ravi, R., Manikandababu, C.S.: Digital watermarking techniques for image security: a review. J. Ambient Intell. Humanized Comput. 11(8), 3221–3229 (2019). https://doi.org/10.1007/s12652-019-01500-1

Location-Independent Human Activity Recognition Using WiFi Signal

Gogo Dauda Kiazolu1, Sehrish Aslam1, Muhammad Zakir Ullah1, Mingda Han1, Sonkarlay J. Y. Weamie2(&), and Robert H. B. Miller3

1 School of Information Science and Engineering, Shandong Normal University, Jinan 250358, China
2 College of Computer Science and Electronic Engineering, Hunan University, Changsha 410082, Hunan, People's Republic of China
[email protected]
3 College of Safety and Environmental Engineering, Shandong University of Science and Technology, Qingdao 266590, People's Republic of China

Abstract. WiFi-based human activity recognition schemes are usually tied to the fixed position and orientation at which the training samples were collected, which drastically limits practical applications. In addition, the ubiquitous presence of people in environments filled with wireless signals makes it challenging to evaluate a location-independent method for recognizing human activity. In this research, a 1D CNN-LSTM model is built to comparatively analyze location-independent human activity recognition from WiFi CSI; the method reduces the impact of location on human activity recognition. The experimental results demonstrate that our method achieves over 94.9% accuracy for location-independent human activity recognition and over 90% accuracy for the CNN variant. The developed location-independent model can be used for practical applications in a wide range of situations, and because the data from each antenna is processed independently, it is applicable to any location or antenna setup. A variety of input sample lengths has been tested to overcome the sampling rate limitations of the measurement sensors. The results show that human activities can be recognized in real time from WiFi signals; CSI-based WiFi sensing is therefore a promising platform for recognizing human activities.

Keywords: Channel State Information · Deep learning · Human activity recognition · Location-independent sensing

1 Introduction

Wireless communication channels are becoming increasingly common and are frequently exploited in computational intelligence [11]. A further point of interest is Human Activity Recognition (HAR), which has attracted a great deal of attention because of its critical role in people's daily lives; it works by pervasively acquiring complex knowledge about individuals and their surrounding environments.


Virtual reality gaming and health care are just a few of the applications where this technology is becoming increasingly important [4]. The availability of Channel State Information (CSI) on WiFi devices has significantly expanded the sensing capabilities of the technology [5]. CSI-based sensing of human activity is a non-intrusive measurement method that does not require attaching any gear to the target body [6]. Wireless devices are becoming more diverse, and the need for spectrum keeps growing. Human activities such as hand waving, walking, sitting and standing can be detected from the CSI data. Activity tracking systems [8] are required for human-computer interaction: information gathered from target objects and their surrounding environment can be used to decide on the most appropriate action in a situation, and motion and activity analysis combines sensing and reasoning to produce contextually relevant data that can help an individual [8]. CSI characterizes the wireless propagation paths between transmitters and receivers, which has become increasingly important as the number of transmitters and receivers in wireless systems has grown considerably in recent years. The Doppler shift produced by body parts moving at different speeds and in different directions can be captured by using laptops or access points as transceivers acting as a central node [9]. WiFi's success is closely related to Multi-Input Multi-Output (MIMO) technology, which keeps expanding wireless data traffic capacity and channel capacity [10–12]; MIMO systems report Channel State Information (CSI) for each Orthogonal Frequency Division Multiplexing (OFDM) subcarrier and antenna pair, among other information. Pervasive modalities such as radio frequency (RF) signals, ambient light and sound, all present in our environment, can be used to build such systems. RF-based recognition measures, such as Channel State Information (CSI) and Received Signal Strength (RSS), take user presence into account [1]. Furthermore, system decisions can be based on sensor data from the performed activities. Recent research has considered deep learning techniques such as convolutional neural networks (CNN) and long short-term memory (LSTM) networks for recognizing human activities [16–18]. The human body is largely composed of water, and the joints, limbs, arms, legs, head and chest each move slightly differently in the sensing environment; the resulting changes in wireless channel metrics show different patterns depending on the physical parameters of the person and on how the user interacts with the environment. Machine learning techniques can learn the correlation between each user and their patterns of change and use it to recognize the user's routine. However, there is some uncertainty about how to design features that measure physical traits as well as features that are person-invariant.
It is challenging to extract stable features from CSI data because of the interaction of WiFi signals with the human body, dynamic background multipath propagation, reflection and CSI attenuation, among other factors.


This research investigates the physical characteristics of the superposed signals received at different antennas, with their corresponding sampling rates, at different locations in the environment, emphasizing the spatial layers of features contained in the signals. The main contributions of this research are as follows:
• The location-independent algorithm uses raw data without pre-processing and does not use feature engineering (feature extraction, feature ranking, feature selection, or antenna dependence).
• A location-independent model was developed, making it possible to use it for practical applications in any situation. Furthermore, since the data from each antenna is processed independently, the proposed method can be applied to any location or antenna setup.
• A variety of input sample lengths has been tested to overcome the sampling rate limitations of the measurement sensors, so the method can process data from devices with different sampling rates. As a result, we developed a model for segment-level human activity recognition that can recognize human activity in real time; the system is therefore well suited to situations where frequent monitoring is necessary.

2 Related Work

2.1 Recent Advances in Human Activity Recognition

Recent research in this field can be viewed from three angles: traditional methods, deep learning (data mining), and signal propagation models. Typical machine learning algorithms such as SVM, KNN, DT and NB, as well as deep learning algorithms (CNN, RNN, LSTM, BiLSTM, GAN and DML), can be used to examine the related work systematically. Numerous studies have examined the use of statistical measures of the signal data as inputs to machine learning algorithms, with promising findings. To recognize human activity from WiFi, researchers have also used multiple kernel representation learning [19, 20]. Our preliminary experiment focuses on applying a classifier to 1-s data samples. Two types of deep neural network architecture are investigated in this paper. When feeding information into a CNN, the input is represented as a multidimensional array; such models can deal with massive amounts of labelled data, assigning weights to each neuron according to the relevance of its receptive field, so that neurons can be ranked by their importance. Weighted kNN is the algorithm of choice for dealing with uncertain circumstances; the kernel function for this uncertainty should decrease as the distance between two points increases [21–23]. In light of these considerations, we chose the squared inverse distance weight kernel. Using the exhaustive neutrosophic set, which provides the decision criteria [24–26], uncertain outliers can be contained while dealing with incomplete and inconsistent information within the features.
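A small sketch of the weighted kNN choice mentioned above: scikit-learn's KNeighborsClassifier accepts a callable weight kernel, so the squared-inverse-distance weighting can be expressed directly. The feature matrix and labels here are synthetic placeholders, not the paper's dataset.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def squared_inverse_distance(distances: np.ndarray) -> np.ndarray:
    """Weight kernel that decays as 1/d^2; the epsilon avoids division by zero."""
    return 1.0 / (distances ** 2 + 1e-9)

rng = np.random.default_rng(0)
X = rng.standard_normal((432, 50))    # e.g. 432 CSI samples reduced to 50 features (assumed)
y = rng.integers(0, 9, size=432)      # nine activity classes, synthetic labels

clf = KNeighborsClassifier(n_neighbors=10, weights=squared_inverse_distance)
clf.fit(X[:300], y[:300])
print("toy accuracy:", clf.score(X[300:], y[300:]))
```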


Fig. 1. CSI sensing environment

The SNR [dB] is plotted as amplitude on the y-axis against packet index (time) on the x-axis for 30 subcarrier frequencies, as shown in Fig. 3. The large SNR fluctuations show a clear correlation with environmental events [3]. As in Fig. 1, the events correspond to interactions in the location-1 environment, which contains tables, chairs and sporting equipment, so the raw CSI includes noise.

2.2 Data Acquisition and Data Description

For the preliminary performance evaluation we randomly chose eight participants who freely agreed to take part in data collection. The data was collected in an office crammed with tables, seats and recreational equipment. The students' reading rooms and gymnastics settings are 6 × 8 m in size with an 8-m ceiling height. The Tx and Rx antennas are 4.8 m apart and both are mounted 1.2 m above floor level, as shown in Fig. 1. There are 3 × 3 × 30 subcarrier streams in total, and the data rate is one thousand packets per second. Subsampling the measured signal also allows the system's performance to be tested at low sampling rates [2]. We gathered data in two different environments: the first comprises a single activity location, yielding 36 files, while the second contains two further areas, each contributing 54 files, for another 108 files. The dataset therefore consists of 144 files, split across three transmitting antennas (144 × 3 = 432 samples). We used 70% of the 432 samples for training, 20% for validation and 10% for testing. Nine activities were recorded at three separate sites, totalling 176,050 packets.

3 System Design

3.1 System Overview

The location-independent activity recognition design consists of four distinct parts, as depicted in Fig. 4: data acquisition, sliding-window segmentation, an orientation processing module, and a location-independent processing module.


Fig. 2. Overview of the proposed location-independent classification scheme

Based on the primary collection of measured data, it is critical to reduce the dimensionality of the data in order to improve clarity and extract features effectively, using principal component analysis (PCA). The kNN algorithm then discriminates between data features based on the distance between a particular data point and each of its k neighbouring data points; kNN with Euclidean distance and ten nearest neighbours is chosen to keep the overall computation cost of the proposed classification algorithm low. The choice of the hyper-parameter k is one of the most important factors influencing the performance of the kNN classifier: choosing a very small k makes the classifier more sensitive to outliers. In this way, participants' activities can be adequately analyzed and seamlessly integrated into a system that does not depend on a specific location. In this study we used the third-party CSI tool developed by [3] to gather raw CSI sequences from two ThinkPad-T400 laptops, shown in Figs. 2 and 5, equipped with Intel 5300 NIC adapters operating at 5 GHz. The operating system is Ubuntu 14.04 LTS with Linux kernel 4.10. Three omnidirectional antennas (3 cm apart) are used at each Tx and Rx pair to generate three CSI streams, with the amplitude or phase of the subcarriers differing for each antenna.
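A compact sketch of this PCA-then-kNN stage (ten Euclidean neighbours, as stated above) is given below; the number of retained components and the synthetic data are assumptions for illustration only.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.standard_normal((432, 600))   # flattened 1-s CSI segments (placeholder values)
y = rng.integers(0, 9, size=432)      # nine activities

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(
    PCA(n_components=20),                                   # dimensionality reduction (size assumed)
    KNeighborsClassifier(n_neighbors=10, metric="euclidean"),
)
model.fit(X_tr, y_tr)
print("toy accuracy:", model.score(X_te, y_te))
```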

3.2 Pre-processing

Before any time-sequence signal can be analyzed, it must first go through preprocessing. Here, however, we rely on deep learning and machine learning algorithms to assess the human-activity data rather than on heavy preprocessing. Figure 5 shows two similar embedded pipelines: deep learning is used in variant 1 and machine learning in variant 2. In the first step of our research we characterize human behaviour from WiFi data using a deep learning model and feature classification. The procedure starts with segmenting the amplitude CSI data: each trial is a 1-s data segment. The database used in this work has a sampling rate of 20 Hz, so a single trial is a 20 × 30 × 3 matrix.


To use a one-dimensional data technique, each data segment must be flattened, which corresponds to creating a vector of 600 samples per antenna (20 × 30 = 600). After obtaining the one-dimensional data, further analysis is needed to extract useful features. We therefore developed a randomized approach to classify location-independent data: the feature space is split into subspaces with the suggested selection technique, which balances the features used to characterize each subspace; an SVM is then applied, and a CNN embeds the data in a high-dimensional space to capture the subcarrier and temporal dimensions of the data. A brief description of these classifier models is provided below.
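The segmentation and flattening step can be sketched as follows; the packet rate (20 Hz), 30 subcarriers and 3 antennas follow the description above, while the raw array itself is a random placeholder.

```python
import numpy as np

def segment_and_flatten(csi: np.ndarray, seg_len: int = 20) -> np.ndarray:
    """Cut a (time, 30 subcarriers, 3 antennas) CSI stream into 1-s segments and
    flatten each antenna's segment into a 600-dimensional vector (20 x 30)."""
    n_seg = csi.shape[0] // seg_len
    csi = csi[: n_seg * seg_len]
    segments = csi.reshape(n_seg, seg_len, 30, 3)            # (segments, time, subcarrier, antenna)
    return segments.transpose(0, 3, 1, 2).reshape(n_seg, 3, seg_len * 30)

stream = np.random.default_rng(0).standard_normal((2000, 30, 3))   # placeholder amplitudes
vectors = segment_and_flatten(stream)
print(vectors.shape)   # (100, 3, 600): 100 one-second segments, 3 antennas, 600 features each
```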

Fig. 3. The workflow of the location-independent system design

3.3 Classifier Models for 1-Dimensional CNN

A graphical representation of the CNN model used in this study is shown in Fig. 2. The model comprises three one-dimensional convolutional layers, two average pooling layers and three dense layers. All convolutional layers use an activation function with a parameter 'alpha' of 0.7; we experimented with values between 0 and 1, and 0.7 performed best. The convolutional layers have a kernel size of three. The network is trained with the 'binary_crossentropy' loss function, and the output of the dense layer is computed with the transfer function. The model has 74,767 parameters in total, all of them trainable. An LSTM-CNN feature fusion approach is used to enhance the learning capability: an LSTM layer is added in parallel to the CNN model to form a better learning framework with robust performance. RMSProp, the adaptive scaling scheme also used in the Adam optimizer, trains the network to uncover the hidden characteristics of the data; by combining momentum and scaling through RMSProp, a robust system for systematically categorizing the data can be developed.
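A hedged Keras sketch consistent with the description above follows: three 1-D convolutions with kernel size 3, two average-pooling layers, three dense layers, a leaky activation with alpha = 0.7, a parallel LSTM branch, binary cross-entropy loss and RMSprop. The filter counts, LSTM width and input shape are assumptions, so the parameter count will not match the 74,767 reported by the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_cnn_lstm(seg_len=20, n_subcarriers=30, n_classes=9):
    """Sketch of the parallel CNN + LSTM classifier described in the text.
    Only the overall structure follows the paper; layer widths are assumed."""
    inp = layers.Input(shape=(seg_len, n_subcarriers))

    # CNN branch: three 1-D convolutions (kernel size 3) with two average poolings.
    x = layers.Conv1D(32, 3, padding="same")(inp)
    x = layers.LeakyReLU(0.7)(x)                 # alpha = 0.7 as reported
    x = layers.AveragePooling1D(2)(x)
    x = layers.Conv1D(64, 3, padding="same")(x)
    x = layers.LeakyReLU(0.7)(x)
    x = layers.AveragePooling1D(2)(x)
    x = layers.Conv1D(64, 3, padding="same")(x)
    x = layers.LeakyReLU(0.7)(x)
    x = layers.Flatten()(x)

    # LSTM branch added in parallel, as described in the paper.
    l = layers.LSTM(32)(inp)

    z = layers.concatenate([x, l])
    z = layers.Dense(64, activation="relu")(z)
    z = layers.Dense(32, activation="relu")(z)
    out = layers.Dense(n_classes, activation="sigmoid")(z)

    model = Model(inp, out)
    # Binary cross-entropy and RMSprop follow the description in the text.
    model.compile(optimizer=tf.keras.optimizers.RMSprop(),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

build_cnn_lstm().summary()
```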


4 Experimental Results and Discussion

We investigated a range of classification algorithms to obtain an optimized model and demonstrate the effectiveness of the proposed method. Tables 1, 2, 3 and 4 summarize the classification performance on the test data for input segment lengths of 20, 40, 60 and 100 samples, respectively. Results are presented in terms of classification accuracy, F1 score, precision and recall. Figure 5 depicts the classification performance of the various machine learning algorithms for the different segment lengths; all of these segment lengths are integer multiples of the sampling frequency of the dataset. SVM shows the most promising results among the listed models across segment lengths. The classification performance of the QDA and NB classifiers is below 70% for all segment lengths, so QDA and NB are not recommended for this type of data. The proposed classification method is also investigated for different types of motion activity. Figure 4 shows the activity-wise classification accuracy obtained with an SVM classifier for different input segment lengths; the lowest results correspond to the 'Bend_Stand' activity at the 20-sample length, but the overall performance of the SVM classifier is good enough for it to be a valuable tool for CSI spectrum classification. Figure 6 depicts the same scenario for the DT classifier, which provides promising results for all activities at all listed segment lengths except 20 samples. Although SVM obtains the highest figures for 20-sample segments, it also produces promising results for the other segment lengths, while DT provides the best scores for all the listed segment lengths; the performance of the other classifier models is comparable. Based on the 90% accuracy of the CNN model on the antenna data, Fig. 6 illustrates that human activity recognition is highly accurate and location-independent for all activities (Fig. 7).

Table 1. Classification performance of the proposed methodology for test data of 20-sample input segments using different models (Antenna-1)

Classifier    Accuracy (%)  F1 (%)  Precision (%)  Recall (%)
kNN           92.06         72.27   59.34          92.40
SVM           97.98         72.05   58.60          93.51
QDA           85.43         68.19   58.35          82.02
NB            71.49         64.13   60.53          68.19
DT            54.61         64.32   68.36          60.72
CNN           96.41         96.77   96.80          96.76
CNN + LSTM    94.74         93.52   95.81          94.96


Table 2. Classification performance of the proposed methodology for test data of 40-sample input segments using different models (Antenna-2)

Classifier    Accuracy (%)  F1 (%)  Precision (%)  Recall (%)
kNN           92.32         71.85   58.88          92.15
SVM           97.88         72.00   58.55          93.48
QDA           85.69         69.46   59.53          83.35
NB            59.13         50.87   57.31          45.72
DT            53.12         62.19   72.82          54.27
CNN           96.54         96.93   96.89          97.01
CNN + LSTM    98.14         98.02   97.83          98.11

Table 3. Classification performance of the proposed methodology for test data of 60-sample input segments using different models (Antenna-3)

Classifier    Accuracy (%)  F1 (%)  Precision (%)  Recall (%)
kNN           92.37         73.16   60.24          93.14
SVM           97.99         73.18   59.97          93.84
QDA           86.70         71.77   61.02          87.11
NB            71.04         62.99   60.64          65.52
DT            55.77         65.95   65.98          65.92
CNN           96.77         96.98   97.16          96.85
CNN + LSTM    96.38         95.74   96.11          95.22

Table 4. Classification performance of the proposed methodology for test data of 100-sample input segments using different models (Antenna-4)

Classifier    Accuracy (%)  F1 (%)  Precision (%)  Recall (%)
kNN           85.53         82.37   73.63          93.47
SVM           77.83         81.10   72.71          91.68
QDA           52.60         79.24   88.57          71.68
NB            59.97         75.52   81.44          70.40
DT            55.20         85.85   85.54          86.17
CNN           94.00         93.27   93.50          93.18
CNN + LSTM    91.37         90.50   89.61          89.03


Fig. 4. Activity-wise classification accuracy of the SVM classifier for various input sample lengths

Fig. 5. Classification performance of the various machine learning algorithms for different segment lengths (all integer multiples of the sampling frequency)

Fig. 6. Activity-wise classification accuracy of the DT classifier for various input sample lengths

Fig. 7. The confusion matrix of location-independent HAR


5 Conclusion

This research evaluates various classification models and provides optimized methods that adapt to different location-independent settings. We segment the raw data and process it with PCA and machine learning techniques to build a system with low complexity, while the deep neural networks use the raw CSI spectrum directly without any feature engineering. Different input sample lengths were tested to obtain an optimal model. The proposed method outperformed the compared model methodologies, with a best classification accuracy of 94.9% for DT over groupings of three input segment sizes; the SVM classifiers were the most accurate overall, regardless of the input segment size. For CNN, all classification measures exceed the 90% objective. In contrast, QDA and NB failed to achieve satisfactory results and are therefore not recommended for classifying the CSI spectrum in location-independent circumstances.

References 1. Wu, D., Zeng, Y., Zhang, F., Zhang, D.: WiFi CSI-based device-free sensing: from Fresnel zone model to CSI-ratio model. CCF Trans. Pervasive Comput. Interact. 88, 102–15 (2021). https://doi.org/10.1007/s42486-021-00077-z 2. Abdollahi, A., Pradhan, B.: Integrated technique of segmentation and classification methods with connected components analysis for road extraction from orthophoto images. Expert Syst. Appl. 176, 114908 (2021) 3. Halperin, D., Hu, W., Sheth, A., Wetherall, D.: Tool release: gathering 802.11n traces with channel state information, SIGCOMM Comput. Commun. Rev., 41 53 (2011) 4. Akbulut, Y., Sengur, A., Guo, Y., Smarandache, F.: NS-k-NN: Neutrosophic set-based knearest neighbors classifier. Symmetry 9, 179 (2017) 5. Al-qaness, M.A.A.: Device-free human micro-activity recognition method using WiFi signals. Geo-spatial Inf. Sci. 22, 128–137 (2019) 6. Bui, N., Cesana, M., Hosseini, S.A., Liao, Q., Malanchini, I., Widmer, J.: A survey of anticipatory mobile networking: context-based classification, prediction methodologies, and optimization techniques. Commun. Surv. Tuts. 19, 1790–1821 (2017) 7. Franco, H., Cobo-Kroenke, C., Welch, S., Graciarena, M.: Wideband spectral monitoring using deep learning. In: Proceedings of the 2nd ACM Workshop on Wireless Security and Machine Learning, Association for Computing Machinery, Linz, Austria, pp. 19–24 (2020) 8. Gao, R., et al.: Towards position-independent sensing for gesture recognition with Wi-Fi. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 5(2), 1–28 (2021). Article 61 9. Guo, Y., Sengur, A.: A novel color image segmentation approach based on neutrosophic set and modified fuzzy c-Means. Circ. Syst. Sig. Process. 32(4), 1699–1723 (2012). https://doi. org/10.1007/s00034-012-9531-x 10. Hong, T., Zhang, G.-X.: Power allocation for reducing PAPR of artificial-noise-aided secure communication system. Mob. Inf. Syst. 2020, 6203079 (2020) 11. Jia, J., Deng, Y., Chen, J., Aghvami, A.H., Nallanathan, A.: Achieving high availability in heterogeneous cellular networks via spectrum aggregation. IEEE Trans. Veh. Technol. 66 (11), 10156–10169 (2017) 12. Jiang, N., Deng, Y., Kang, X., Nallanathan, A.:, A new spatio-temporal model for random access in massive IoT networks, (2017)


13. Jiang, Z., Chen, S., Molisch, A., Vannithamby, R., Zhou, S., Niu, Z.: Exploiting wireless channel state information structures beyond linear correlations: a deep learning approach. IEEE Commun. Mag. 57, 28–34 (2019) 14. Khan, M., Alghamdi, N.: A neutrosophic WPM-based machine learning model for device trust in industrial internet of things. J. Ambient Intell. Human. Comput. (2021). https://doi. org/10.1007/s12652-021-03431-2 15. Kulin, M., Kazaz, T., Moerman, I., De Poorter, E.: End-to-end learning from spectrum data a deep learning approach for wireless signal identification in spectrum monitoring applications. IEEE Access 6, 18484–18501 (2017) 16. Kulin, M., Kazaz, T., Moerman, I., Poorter, E.D.: End-to-end learning from spectrum data: a deep learning approach for wireless signal identification in spectrum monitoring applications. IEEE Access 6, 18484–18501 (2018) 17. Li, X., Chang, L., Fangfang Song, J., Wang, X., Tang, Z., Wang, Z.: CrossGR: accurate and low-cost cross-target gesture recognition using Wi-Fi. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 5(1), 1–23 (2021). Article 21 18. Lou, Z., Wang, L., Jiang, K., Wei, Z., Shen, G.: Reviews of wearable healthcare systems: materials, devices and system integration. Mater. Sci. Eng. R. Rep. 140, 100523 (2020) 19. Ma, Y., Zhou, G., Wang, S.: WiFi sensing with channel state information: a survey. ACM Comput. Surv. 52(3), 1–36 (2019). Article 46 20. Naeem, A., Rehmani, M.H., Saleem, Y., Rashid, I., Crespi, N.: Network coding in cognitive radio networks: a comprehensive survey. Commun. Surv. Tuts. 19, 1945–1973 (2017) 21. Umebayashi, K., Tamaki, Y., Lopez-Benitez, M., Lehtomdki, J.: Design of spectrum usage detection in wideband spectrum measurements. IEEE Access 7, 133725–133737 (2019) 22. Venkatnarayan, R. H., Page, G., Shahzad, M.: Multi-user gesture recognition using WiFi. In: Proceedings of the 16th Annual International Conference on Mobile Systems, Applications, and Services, Association for Computing Machinery, Munich, Germany, pp. 401–413 (2018) 23. Virmani, A., Shahzad, M.: Position and orientation agnostic gesture recognition using WiFi. In: Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services, Association for Computing Machinery, Niagara Falls, New York, USA, pp. 252–264 (2017) 24. Wang, W., Teh, K.C., Li, K.H.: Artificial noise aided physical layer security in multi-antenna small-cell networks. Trans. Info. For. Sec. 12, 1470–1482 (2017) 25. Wang, Z. et al.: A Survey of user authentication based on channel state information, Wireless Commun. Mob. Comput. 2021, 6636665 (2021) 26. Won, M., Zhang, S., Son, S.: WiTraffic: Low-cost and non-intrusive traffic monitoring system using WiFi (2017)

Design of Text Summarization System Based on Finetuned-BERT and TF-IDF

Yuyang Liu1,2(&), Kevin Song3, Jin He3, Songlin Sun1,2, and Rui Liu4

1 Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing University of Posts and Telecommunications, Beijing, China
[email protected]
2 School of Information and Communication Engineering, Beijing University of Posts and Telecommunication, Beijing, China
3 VMware, Beijing, China
4 San Jose State University, San Jose, CA 95192, US

Abstract. In today's world, information is exploding. Internet readers judge the readability of a text by reading its abstract when searching for information such as blogs and technical papers, but much current text lacks an abstract. The development of NLP (Natural Language Processing) technology is a great help in filling this gap. This paper proposes a text summarization system, which has been tested on a corpus including CLTS and some educational papers and shows good results on shorter samples. Our system provides two options for text without an abstract or with a poor original abstract, based on finetuned-BERT and TF-IDF respectively; the two strategies differ in processing speed and semantic fit. In addition, to cover more diverse scenarios, this article incorporates OCR (Optical Character Recognition) technology so that summaries can also be generated from text in images.

Keywords: NLP · Finetuned-BERT · TF-IDF

1 Research Background

With the development of science and technology and the advent of the Internet age, the Internet has brought great convenience to people's lives, and countless text messages now flood the world around us; with the help of artificial intelligence tools, their volume has exploded. For Internet users, reading the abstract is the fastest way to grasp the content of a full text, so how to quickly obtain the content of a wide range of texts that have no abstracts has become a big problem. Text summarization technology therefore came into being. A text summary is produced by extracting key information from one or more texts and reintegrating it. By construction method, text summarization can be divided into extractive summarization and generative summarization; at present, extractive summaries are more commonly used.


① An extractive summary is produced by an algorithm that selects the most important sentences in the original text and gathers them into an abstract. The advantage of this method is that every extracted sentence comes from the original text and is therefore well formed at the sentence level; the disadvantage is that the selected sentences are prone to incoherence and inconsistent themes, and the total number of words is hard to control. ② A generative summary is obtained by grammatical and semantic analysis of the source document after a model has been trained on prior knowledge; words from the original text may be recombined into new sentences and re-integrated with the original information, which is closer to how the human brain summarizes. The advantage of this method is that it takes semantic information into account, and the resulting sentences fit conventional expression more naturally; the disadvantage is that it requires huge training resources and struggles to understand the semantic content of long documents, so progress in generative summarization is still very limited. In view of this, this article chooses the extractive summarization method and uses the BERT pre-training model proposed by Google in 2019 (an epoch-making contribution to the NLP field that captures the semantic information in a sentence well), combined with a directed-graph-based method to fine-tune the use of the original information (especially for scientific papers, whose core content tends to be concentrated at the beginning of the article and of each paragraph), and finally selects the content that contributes most to the original theme to form the final summary. In addition, to enrich the usage scenarios of text summarization, this article also introduces PaddlePaddleOCR technology to support extracting summaries from text in image format.

2 Introduction to the NLP Field

2.1 Text Preprocessing Technology

Text preprocessing is an extremely important part of natural language processing; it largely determines the quality of the final information extraction. Its goal is to convert textual information into a structured form that the computer can recognize. In an NLP task, text preprocessing, as shown in Fig. 1, includes obtaining the original text, text segmentation (word segmentation, e.g. jieba segmentation for Chinese), information cleaning (removing stop words), content normalization, feature extraction and model input. Text segmentation splits the original input text, converting an article into sentences for easier processing, and is a key step towards vectorizing the text. Text cleaning removes redundant information (for example useless tags, special symbols and stop words) from the source text to reduce the complexity of subsequent processing. Normalization includes lemmatization and stemming; this step is mainly for English text and helps reduce the number of distinct words and the difficulty of processing. After preprocessing, the text can be converted into a vector space model for processing and summary extraction.
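The preprocessing chain for Chinese text can be sketched as below, using jieba for word segmentation as mentioned above; the stop-word list and the sample sentence are illustrative placeholders.

```python
import re
import jieba

STOP_WORDS = {"的", "了", "和", "是", "在"}          # illustrative subset of a stop-word list

def preprocess(text: str):
    """Split raw text into sentences, segment each into words, and drop stop words."""
    sentences = [s for s in re.split(r"[。！？!?\n]", text) if s.strip()]
    cleaned = []
    for sent in sentences:
        sent = re.sub(r"[^\w\u4e00-\u9fff]", "", sent)       # remove useless symbols
        words = [w for w in jieba.lcut(sent) if w not in STOP_WORDS]
        cleaned.append(words)
    return cleaned

print(preprocess("文本摘要技术是自然语言处理的重要方向。它可以帮助读者快速了解文章内容！"))
```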


Fig. 1. Text preprocessing process: raw data (web pages, newspapers, reports) → segmentation (sentence and word division) → cleaning (useless symbols, stop words) → normalization (case transformation) → feature extraction (Word2vec similarity) → model input (BERT, TextRank)

2.2 BERT-Based Method

Before introducing the BERT model, we need to understand the word vector, the basic unit of information processing in NLP. For the input data of an NLP model, the first step is to vectorize the text, i.e. to transform the textual information into a series of column vectors. The most basic representation is one-hot encoding: as the name implies, it stores N states of the text information in N positions, with a single position active. For example, for the sentence "I am a man", the genders "male" and "female" might be expressed as "01" and "10". However, because one-hot encoding is so simple, it easily produces excessively large and sparse representations. Later, the word2vec model used the CBOW and Skip-Gram methods to represent word vectors: CBOW predicts the middle word from its context, while Skip-Gram predicts the context from the middle word. The drawback of both is that the resulting vectors are static and cannot adapt to context. Unlike Word2Vec, the BERT model is based on bidirectional encoding, so a word's representation is formed from its context. The representation, shown in Fig. 2 below, is word vector = token embedding + segment embedding + position embedding, so that not only the sentence itself but also the relationship between sentences is encoded. [CLS] is placed at the beginning of the input to mark the start, and [SEP] is placed between the two sentences and at the end; solutions for Chinese have been cited in [1]. BERT [2] was proposed by the Google AI team in 2019. It is a bidirectional encoder model based on the Transformer: it uses Transformer blocks, is pre-trained on a large corpus, and exhibits a very high level of performance on 11 downstream tasks. It consists of two stages, pre-training and fine-tuning. Google has used its enormous computing power to complete the pre-training before developers get started, so the model has already learned semantic information; what developers need to do is fine-tune the BERT model for their particular downstream tasks.


Fig. 2. BERT input characteristics, including token embeddings, segment embeddings and position embeddings
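To make the [CLS]/[SEP] input format concrete, the sketch below builds a sentence-pair input with the Hugging Face tokenizer; the Chinese checkpoint name is a common choice and is an assumption, not something specified by the paper.

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertModel.from_pretrained("bert-base-chinese")

# Sentence pair -> [CLS] sent_a [SEP] sent_b [SEP]; token_type_ids carry the segment embedding.
enc = tokenizer("今天天气很好", "我们去公园散步", return_tensors="pt")
print(tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist()))
print(enc["token_type_ids"][0])          # 0 for the first segment, 1 for the second

with torch.no_grad():
    out = model(**enc)
# Contextual vectors produced from token + segment + position embeddings by the Transformer stack.
print(out.last_hidden_state.shape)       # (1, sequence_length, 768)
```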

2.3 Graph-Based Method

The graph-based method uses knowledge from graph theory to find the relationships between sentences in NLP. PageRank and TextRank are the most typical graph methods. The PageRank algorithm [3] was first used in web search: each web page is a node, and the importance of a page is judged by calculating its degree of relevance to other pages. TextRank is derived from PageRank and was developed by Rada Mihalcea et al. in [4]. Suppose the content to be summarized is abstracted as a weighted directed graph G = (V, E), where V is the set of vertices, E is the set of edges, In(V_i) is the set of vertices pointing to V_i, and Out(V_i) is the set of vertices that V_i points to. The score of each vertex V_i is computed as in Eq. (1):

WS(V_i) = (1 - d) + d \cdot \sum_{V_j \in In(V_i)} \frac{W_{ji}}{\sum_{V_k \in Out(V_j)} W_{jk}} \, WS(V_j)   (1)

WS(V_i) represents the weight of sentence i, and the sum on the right represents the contribution of each adjacent sentence to this sentence: W_{ji} is the similarity between the two sentences and WS(V_j) is the weight of sentence j from the previous iteration; d is the damping coefficient, generally 0.85. Graph-based text summarization is an unsupervised method that does not use deep learning, and it can be applied directly to any type of document, which gives it a certain universality. Unfortunately, due to the limitations of the algorithm itself, its effectiveness remains limited today.
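Equation (1) can be turned into a short iterative scorer, as sketched below with d = 0.85. The word-overlap similarity used here is only an assumption to make the example self-contained (PACSUM, used later in this paper, derives sentence similarities from BERT instead).

```python
import numpy as np

def textrank_scores(similarity: np.ndarray, d: float = 0.85, n_iter: int = 50) -> np.ndarray:
    """Iterate WS(Vi) = (1 - d) + d * sum_j [W_ji / sum_k W_jk] * WS(Vj)  (Eq. 1)."""
    sim = similarity.astype(float).copy()
    np.fill_diagonal(sim, 0.0)
    out_sum = sim.sum(axis=1, keepdims=True)     # sum_k W_jk for each sentence j
    out_sum[out_sum == 0] = 1.0
    transition = sim / out_sum                   # transition[j, i] = W_ji / sum_k W_jk
    scores = np.ones(sim.shape[0])
    for _ in range(n_iter):
        scores = (1 - d) + d * transition.T @ scores
    return scores

def overlap_similarity(sentences):
    """Toy similarity: shared-word count, used only to demonstrate the iteration."""
    sets = [set(s.split()) for s in sentences]
    n = len(sets)
    return np.array([[len(sets[i] & sets[j]) for j in range(n)] for i in range(n)], dtype=float)

docs = ["text summarization extracts key sentences",
        "key sentences represent the document",
        "the weather is nice today"]
scores = textrank_scores(overlap_similarity(docs))
print(sorted(zip(scores.round(3), docs), reverse=True)[0])   # highest-scoring sentence
```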

2.4 Method Based on Word Frequency Statistics

The method based on word frequency statistics is a purely mathematical approach. It does not take the semantic information of the article into account at all; it simply measures and estimates the importance of a word from the frequency with which the word appears in the article.


First of all, we assume that the importance of a word increases with its frequency of occurrence. The calculation of TF (term frequency) is given in Eq. (2):

TF_w = \frac{\text{number of occurrences of } w \text{ in the document}}{\text{total number of terms in the document}}   (2)

However, raw word frequency statistics can be misleading for common function words such as "的", "is" and "I". The way to avoid this problem is IDF (inverse document frequency), computed as in Eq. (3):

IDF_w = \log \frac{\text{total number of documents in the corpus}}{\text{number of documents containing } w + 1}   (3)

In conclusion, TF-IDF is the product TF × IDF, which combines the advantages of the two measures. It not only selects high-frequency words, ensuring that frequent terms are treated as important, but also avoids selecting words that are frequent everywhere and therefore carry no distinguishing information. This ensures that the selected sentences represent the core information of the text to a certain extent. Its advantage is that it needs neither the computing power of deep learning nor a complex model: it obtains summary information through purely statistical means.
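A direct implementation of Eqs. (2)-(3) is sketched below; the toy corpus is a placeholder, and in practice an equivalent computation is available from library routines such as scikit-learn's TfidfVectorizer.

```python
import math
from collections import Counter

def tf_idf(corpus):
    """corpus: list of documents, each a list of words. Returns per-document TF-IDF dicts."""
    n_docs = len(corpus)
    doc_freq = Counter(w for doc in corpus for w in set(doc))        # documents containing w
    results = []
    for doc in corpus:
        counts, total = Counter(doc), len(doc)
        results.append({
            w: (c / total) * math.log(n_docs / (doc_freq[w] + 1))    # TF * IDF, Eqs. (2)-(3)
            for w, c in counts.items()
        })
    return results

corpus = [["摘要", "提取", "文本", "的", "关键", "信息"],
          ["的", "信息", "爆炸", "时代"],
          ["文本", "摘要", "系统", "设计"]]
for doc_scores in tf_idf(corpus):
    print(sorted(doc_scores.items(), key=lambda kv: kv[1], reverse=True)[:3])
```

Note how the common particle "的" receives a zero or negative weight, illustrating how IDF suppresses frequent but uninformative words.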

3 System Requirements and Framework Design 3.1

System Requirements

For scientific research practitioners, extensive reading papers and blogs is the most effective way to obtain cutting-edge technology in a certain area, but different papers have different formats, and some abstracts even do not exist, which is bad for readers. It takes a lot of time to read the full text to grasp the core content in the article. Therefore, how to quickly and effectively extract key information from a large amount of information is what we focus on. The goals of our system is to be able to be integrated into a pdf reader. It is hoped that the functions can be a lightweight manner. Our design goals are: 1. When the user has text information as the format of.txt, we hope the length of the text summary can be adjusted to the author’s needs. In addition, two ways to extract summaries are provided. The one is a neural network-based slow text summaries with high semantic fit, and the other is a fast, statistically-based text summaries with low semantic fit. 2. What’s more, when the user does not have the document information in txt format, we can provide a pattern, in it the input is image format, and the extracting text summary can be achieved after the image recognition. At the same time, it provides the function of extracting tables and pictures in the figure to maximize the work efficiency of the paper readers.

Design of Text Summarization System Based on Finetuned-BERT and TF-IDF

3.2

1335

System Framework Design

The text summarization system designed in this paper is mainly composed of three parts, namely the input layer, the service layer and the output layer. The Fig. 3 shows the overall framework of the service designed by our text summarization system.

Input

Input In

In

Txt

Jpg OCR

Service Txt Base BERT

Output

Excel

Jpg

Base IF-IDF

Out

Out

Summary .txt

Summary .txt

Out

Output File

Fig. 3. System flow chart

First of all, for the input layer, we provide input in two different content formats, in .txt and .jpg formats. In the service layer, for the input in the.txt format, we use the Pacsum model (which is proposed by Hao Zheng and Mirella Lapata in [5]). The model has been optimized by us to support single-document and multi-document input. As we can see, we can choose to save the output summary content in the .txt format or directly output it on the command line (basic function is not shown in the figure). For the .jpg format input, we first perform the OCR screen content extraction in the first step. Then, according to the content information type, the OCR function will return the text, image and excel files, after that the txt’s content can be convert to the abstract extraction service module that directly processes text information. In the final output layer, the content of the text summary can be saved as a file or printed directly in the screen content. The text, table, and picture information after OCR can be packaged and stored in the final output folder.

4 System Realization and Effect Display 4.1

Effect Display for the .txt Type of Input

For the summarization method of text document information, the tool we use is based on the PACSUM model developed by Hao Zheng and Mirella Lapata [4]. In this model, we performed a process based on the characteristics of the corpus like [6]. And

1336

Y. Liu et al.

make some improvements in processing methods, such as modifying the processing method of text and adding functions such as control of the number of output sentences. The interface we show is implemented in Google’s Colab. By combining Google Drive, we use cloud GPU in the cloud server to achieve the corresponding functions. The following shows the renderings based on the Finetuned-BERT algorithm and the TF-IDF algorithm. The sample content used in the following two sections is obtained by extracting educational papers. In the initial experiment, we chose to use the CLTS Chinese data set produced by Xiaojun Liu et al. The following mainly shows the output effects of two different algorithms for text document input. Display of Text Summarization Effect Based on Finetuned-BERT As shown in the Fig. 4, our information is stored in the demo .txt file, because the sample is too long and will not be shown here. We can get the following multi-text summary result by specifying the file path and the number of sentences to be output. The figure below shows the result of summarizing 3 sentences for each of the three different data input. Because of the central directed graph method, the selection of abstract sentences is more inclined to the sentences in the head. Display of Text Summarization Effect Based on TF-IDF As shown in the Fig. 5, this is a screenshot in the Colab, it’s easy to see that we extract a 5-sentence summary of the source file using the TF-IDF algorithm. It can be seen in the Fig. 6, that the TF-IDF-based algorithm have relatively poor results, but the processing results obtained are relatively faster, the selection of sentences is more random, and the selection of later texts is easier. But in terms of effect, since it does not consider semantic information, its semantic coherence is not as good as the former.

Fig. 4. Effect display for the .txt using Finetuned-BERT

4.2

Effect Display for the .Jpg Type of Input

This section mainly shows the effect of inputting the picture type. In Fig. 7 shows the effect of image content text recognition using PaddlePaddleOCR [7]. By using the open source version of PaddlePaddleOCR, the results of text recognition, table recognition and picture recognition will be solved in the output file, shown as Fig. 7.

Design of Text Summarization System Based on Finetuned-BERT and TF-IDF

1337

Fig. 5. TF-IDF command line input

Fig. 6. TF-IDF output

Fig. 7. Creating a output file.

Through image recognition, we can perform the first step of processing the input content and extract the content information of the source text. After the content information is extracted, for the text information in the figure, as shown in the Fig. 3, you can choose to enter the text summary module for summary, or you can choose to keep the source content.

5 Conclusion This article mainly elaborates the application scenarios and development status of text summarization, expounds the framework of the system design and the needs to respond to it. Finally, this paper uses the two summarization methods of Finetuned-BERT and IT-IDF to realize the design of the text summarization system, and provides an image text recognition function based on PaddlePaddleOCR. At present, the system is not effective enough on recognizing the content of long text summaries. After experimentation, we believe adding text block function according to the subtitles in the text will allow the algorithm to have a better consideration of the full text semantic information. In addition, there is rare authoritative Chinese data set, which may block the chinese text summarization improvement.

1338

Y. Liu et al.

References 1. Zhu, W.: MVP-BERT: Redesigning vocabularies for Chinese BERT and multi-vocab pretraining ICLR 2021. arXiv:2011.08539 (2021) 2. Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint. arXiv:1810.04805 (2018) 3. Brin, S., Page, M.: Anatomy of a large-scale hypertextual Web search engine. In: Proceedings of the 7th Conference on World Wide Webpages, pp. 107–117, Brisbane, Australia (1998) 4. Mihalcea, R., Tarau, P.: Textrank: Bringing order into texts. In: Proceedings of EMNLP 2004, pp. 404–411, Barcelona, Spain (2004) 5. Zheng, H., Lapata, M.: Sentence centrality revisited for unsupervised summarization. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 6236–6247, Florence, Italy. Association for Computational Linguistics (2019) 6. Liu, X., Zhang, C., Chen, X., Cao, Y., Li, J.: CLTS: a new Chinese long text summarization dataset. In: Zhu, X., Zhang, M., Hong, Y., He, R. (eds.) Natural Language Processing and Chinese Computing. Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence), vol. 12430, pp. 531–542. Springer, Cham (2020). https://doi.org/10.1007/9783-030-60450-9_42 7. PaddlePaddle homepage. http://www.github.com/PaddlePaddle/PaddleOCR (2021)

Rogue WiFi Detection Based on CSI Multi-feature Fusion Zhaoyuan Mei1,2(&), Songlin Sun1,2, and Chenwei Wang3 1

Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing University of Posts and Telecommunications, Beijing, China [email protected] 2 School of Information and Communication Engineering, Beijing University of Posts and Telecommunication, Beijing, China 3 DOCOMO Innovations, Inc., Palo Alto, CA 94304, US

Abstract. At present, using WiFi to access the Internet has become a mainstream way. However, through research on wireless network security, it is found that attackers forge the mac address or SSID of legitimate users to obtain illegal information. This method is difficult to be detected by traditional network security mechanisms. In this article, we propose a new security mechanism that uses multiple features extracted from Channel State Information (CSI) which are irrelevant to the environment for fusion, and uses a minimum discriminant criterion for classification and discrimination to reject rogue WiFi connections. We found that the Carrier Frequency Offset (CFO) and nonlinear phase error extracted from CSI are stable in time and space. Experiments on a large number of WiFi devices show that this mechanism can reliably detect rogue WiFi connections, with an average recognition rate over 96%. At the same time, through comparing the recognition effect between before and after the feature fusion, we verify the superiority of the fusion method, and provides a reliable basis for the identification of rogue WiFi. Keywords: Rogue WiFi detection  Carrier frequency offset  Nonlinear phase error  Feature fusion

1 Introduction With the popularity of mobile devices and the emergence of the Internet of Things, wireless technology is becoming an important part of modern computing platforms and embedded systems to provide low-cost, anytime, anywhere connections [1]. Unfortunat-ely, many security threats have been found in the connection between WiFi and users. For example, a malicious access point can steal user privacy data by imitating the identifier of a legitimate device to deceive the user's connection. Currently existing wireless security protocols such as WEP, WPA, WPA2, etc. have been found to have large vulnerabilities [2]. Attackers can bypass AP verification through replay attacks and enter private WLANs for free. Therefore, it is very important to find a novel and low-complexity method to effectively detect malicious attacks. ©

The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 J. Sun et al. (Eds.): Signal and Information Processing, Networking and Computers, LNEE 917, pp. 1339–1347, 2023. https://doi.org/10.1007/978-981-19-3387-5_160

1340

Z. Mei et al.

In recent years, many scholars have carried out research on device fingerprint identification. Gao et al. estimated the packet interval time as a fingerprint feature [3], but it requires a large amount of data to be collected with high cost. Hua et al. extracted CFO estimates from CSI as device fingerprints [4], and achieved good performance in detecting devices, but when the device is in a mobile state, the method achieved poor results. Liu et al. used the nonlinear phase error extracted from the CSI to perform fingerprint recognition [5]. The fingerprint is stable in time and space, but the calculation of error parameters in their method has a certain ambiguity, which makes the fingerprint unable to accurately identify the device. Therefore, in this paper, the two features of CFO and nonlinear phase error extracted from the CSI are merged, which overcome the shortcomings of the single feature recognition method and improve the detection efficiency of illegal equipment.

2 CSI Acquisition and Phase Analysis CSI: describes the channel frequency response from the signal sender to the receiver, represents the signal amplitude and phase information of each OFDM symbol. WiFi technology usually uses multiple input multiple output (MIMO), so the channel between each transmitter-receiver (Tx-Rx) antenna pair is composed of multiple subcarriers [6]. Assuming that the number of antennas at the transmitting end is Ntx and the number of antennas at the receiving end is Nrx , the CSI data is expressed as: H ¼ ðHij ÞNtx Nrx

ð1Þ

Each antenna pair data contains Ns subcarriers, then Hij is specifically expressed as: Hij ¼ ðh1 ; h2 ;   ; hNS ÞT

ð2Þ

where the kth subcarrier of Hij can be expressed as: hk ¼ jjhk jjej\hk

ð3Þ

where jjhk jj represents the amplitude of the subcarrier k, \hk represents the pha-se of the subcarrier, and k contains the index of the subcarrier. Phase: According to the literature [7], the phase of subcarrier i measured at the receiving end can be expressed as: ;i ¼ hi  2p Nki d þ b þ  þ z

ð4Þ

where hi represents the true phase, N is the number of FFT points, according to IEEE 802.11a/g/n, N = 64, d represents the time offset of the receiving end, b represents the unknown phase offset, the phase offset of all subcarriers in the same data packet is the same,  represents the nonlinear phase error, it remains stable in different measured CSI data packets, z represents the measurement noise error.

Rogue WiFi Detection Based on CSI Multi-feature Fusion

1341

3 Data Preprocessing and Feature Extraction 3.1

Phase Unwrapping

We found that after solving the CSI phase matrix, there is a jump in the range of [−p,p], as shown in Fig. 1. In order to show the phase more intuitively, it is necessary to unwrap the phase. Take the phases of adjacent sub-carriers for difference processing to obtain D;. According to Eq. (5), the latter phase is processed as follows: ;latterphase ¼ ;latterphase  2p; if D; [ p

ð5Þ

The phase after unwrapping is shown in Fig. 2.

Fig. 1. Unwrapped phase

3.2

Fig. 2. Unwrapping phase

Phase Filter

In the actual environment, the measured CSI data is caused by the interference of multipath effects such as reflection and refraction and environmental noise, resulting in errors in the measured CSI phase. Therefore, in order to obtain a stable fingerprint, we need to design a phase filter to filter out noise interference.

Fig. 3. CSI phase in actual environment

Fig. 4. Filtered phase

1342

Z. Mei et al.

As shown in Fig. 3, the function of the filter is to extract data packets whose phases change smoothly. The design idea of the filter is as follows: Step1: Calculate the slope k of adjacent subcarriers, where ki ¼ ½k1 ; k2 ; :::; k28 ; k29 ; Step2: Calculate the mean square error vi of ki ; Step3: Set a threshold vmin , when vi < vmin , save the data packet. The filtered phase is shown in Fig. 4. 3.3

CFO Feature Extraction

CFO: due to the oscillator offset, it is a basic physical property. It remains fairly consistent over time, and it has great differences in different devices and can be used as a device fingerprint [4]. Analyze the cause of the phase error, the CSI phase can be expressed as: ;t;k ¼ xt;k þ ht;k þ dt;k þ ut þ 

ð6Þ

where xt;k represents the phase shift caused by the frame detection delay (FDD), ht;k represents the phase shift caused by the sampling frequency offset (SFO), the two can be described by the following expressions: xt;k ¼ 2paksd ht;k ¼ 2pbkss

ð7Þ

where a; b represent the constant coefficient, sd ; ss represent the time delay related to the error. dt;k The phase shift caused by the time of flight (TOF). ut represents the phase error caused by the carrier frequency offset, which can be expressed as: ut ¼ 2pDfc t

ð8Þ

where Dfc is the part of CFO that still exists after compensation. Next, calculate the new phase variable as follows: 0

;t ¼

;t;1 þ ;t;1 2

¼ 2pDfc t þ

dt;1 þ dt;1 2

ð9Þ

where ;t;1 ; ;t;1 represent the measured phase values of subcarriers −1 and 1. Then, for each adjacent frame, calculate its phase difference D; and time difference of arrival (TDOA) Dt. In order to remove the influence of TOF, we make the transmitter-receiver remain stationary during frame acquisition to ensure that its relative position remains unchanged. Therefore, the phase difference is expressed as follows: 0

0

D; ¼ ;t1  ;t2 ¼ 2pDfc Dt

ð10Þ

Afterwards, draw all the points on the X-Y plane, it can be seen that a series of periodic stripes are formed. We use the sliding window method and the least square method to estimate the slope of these fringes, as shown in Fig. 5.

Rogue WiFi Detection Based on CSI Multi-feature Fusion

(a) Stripe pattern

(b) Stripe extraction

1343

(c) Linear fit

Fig. 5. CFO feature

3.4

Non-linear Phase Error Feature Extraction

Aiming at the feature extraction of nonlinear phase error [8], we designed a new extraction method. Calculate the following variables: hn h1 2p 1 a ¼ K;nn ; K1 ¼ Kn K1  N d

ð11Þ

Do the following operations on ;i : 0

1 ;i ¼ ;i  aki ¼ hi  Khnn h K1 ki þ b þ z þ 

ð12Þ

where hi can be expressed as the sum of h and the component error e, the h, b, and z are unified as z , and e and  are unified as E, which is the nonlinear phase error. Next, sum the phases of subcarrier −1 and subcarrier 1: 0

0

;1 þ ;1 ¼ 2z þ E1 þ E1

ð13Þ

In some studies on nonlinear phase error, E1 þ E1 is almost equal to zero, so: z 

0

0

;1 þ ;1 2

ð14Þ

Based on this, we subtract z from the phase of each subcarrier to obtain the expression of the nonlinear phase error: 0

E i ¼ ;i  z  þ

hn h1 Kn K1 ki

00

¼ ;i þ lki

ð15Þ

In the study of nonlinear phase error, it is found that the closer the position of the center subcarrier, the smaller the difference between the nonlinear error values of adjacent subcarriers. With this idea, l can be calculated: 00

00

l  ;2  ;1

ð16Þ

1344

Z. Mei et al.

Fig. 6. Non-linear phase error feature

In order to better fit the data, we set E28 and E28 to 0 respectively, and the fingerprint features are shown in Fig. 6.

4 Feature Evaluation and Fusion 4.1

Feature Evaluation

In this section, we mainly demonstrate the stability of nonlinear phase error characteristics in time and space. With the help of vector stability research, we use cosine similarity to measure: cos h ¼ jjajab jjjbjj

ð17Þ

Time stability: Take the average of the fingerprints extracted by the device multiple times as the reference, and calculate the cosine similarity between the fingerprint vector and the reference vector in different days. The test results are shown in Fig. 7. Spatial stability: We set up six AP locations in the laboratory scene and looked for four volunteers at the same time, two of them are constantly walking indoors, and the other two are constantly opening or closing doors and windows to create a dynamic environment. The experimental results are shown in Fig. 7.

(a) Time stability

(b) Spatial stability

Fig. 7. Feature evaluation

Rogue WiFi Detection Based on CSI Multi-feature Fusion

4.2

1345

Feature Fusion

Feature fusion must be established on the basis of mutual independence and complementary information between each sub-feature, and the classification effect obtained after feature fusion should be better than the classification effect of each sub-feature. At present, the common feature fusion methods are additive fusion and multiplicative fusion [9]. Among them, the key to additive fusion lies in the selection of the weight of each feature. However, without prior knowledge, it can only be obtained through sample training. However, different data streams are likely to cause different optimal solutions for weights, in the end, only one fuzzy optimal solution can be obtained, and it cannot be popularized among all samples. Therefore, we choose the multiplicative fusion method and use the minimum distance criterion to design the classifier.

5 System Design and Simulation Analysis 5.1

System Design

The rogue WiFi detection system should have two important functions: one is that it can build a fingerprint database of legal devices; the other is that the system can reject the connection with the malicious WiFi after detecting the malicious WiFi. Therefore, the system we designed is shown in Fig. 8.

Collect CSI data

Data preprocessing

Feature extraction

Legal device fingerprint database

Legal certification

Classifier discrimination

Rogue detection

Fig. 8. System structure

5.2

Simulation Analysis

For the selected N devices, we extract M fingerprints for each device, and randomly select L fingerprints among them to take the average value to construct the fingerprint database S. Among the remaining K fingerprints, if one of the fingerprints is compared with the fingerprint of another device, it simulates malicious WiFi attack detection; if one of the fingerprints is compared with the fingerprints of the same type of device, the

1346

Z. Mei et al.

simulated one is legal Device authentication, so we define the attack detection rate Pd and the false alarm rate Pf as follows: P P Pd ¼

i2k

j2s

P P Pf ¼

matchði;jÞ

ð18Þ

ðMLÞðN1ÞN i2k

j2s

nonmatchði;jÞ

ð19Þ

ðMLÞN

In the simulation experiment, we first used the Intel5300 network card and the CSI Tool tool to collect CSI data [10], and then selected a total of 20 devices, including routers, network cards, mobile phones and smart watches. We set M to 15, L to 5, and K to 10, and conduct experiments in three scenarios. The detection rate results are shown in Table 1, and the false alarm rate results are shown in Table 2, (the nonlinear phase error feature denote as NL). It can be seen from the experimental results that after using the feature fusion algorithm, the recognition accuracy rate has been improved, and the misjudgment rate has decreased, which proves the advantages of the feature fusion algorithm.

Table 1. Attack detection rate Feature CFO NL CFO + NL

Laboratory 92.03% 93.28% 96.84%

Classroom 92.46% 94.07% 97.12%

Hall 95.24% 96.72% 98.74%

Table 2. False alarm rate Feature CFO NL CFO + NL

Laboratory 5.52% 3.02% 1.42%

Classroom 5.12% 2.83% 1.33%

Hall 2.26% 1.66% 0.89%

6 Conclusion Aiming at the current behavior of malicious devices attacking legitimate users in WiFi, a fusion algorithm based on CFO and nonlinear phase error is proposed. Experiments show that the average accuracy of the algorithm is above 96%, and the false alarm rate is below 2%, which provec the reliability of the malicious WiFi detection system. Later, we will study the WiFi fingerprint recognition problem in more complex and more AP scenarios.

Rogue WiFi Detection Based on CSI Multi-feature Fusion

1347

References 1. Gubbi, J., Buyya, R., Marusic, S., Palaniswami, M.: Internet of things (iot): a vision, architectural elements, and future directions. Futur. Gener. Comput. Syst. 29(7), 1645–1660 (2013) 2. Vanhoef, M., Piessens, F.: Key reinstallation attacks: Forcing nonce reuse in wpa2. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, ser. CCS ’17. New York, NY, USA: ACM, pp. 1313–1328 (2017) 3. Gao, K., Corbett, C., Beyah, R.: A passive approach to wireless device fingerprinting. In: 2010 IEEE/IFIP International Conference on Dependable Systems Networks (DSN), pp. 383–392 (2010) 4. Hua, J., Sun, H., Shen, Z., Qian, Z., Zhong, S.: Accurate and efficient wireless device fingerprinting using channel state information. In: IEEE INFOCOM 2018 - IEEE Conference on Computer Communications, pp. 1–9 (2018) 5. Liu, P., Yang, P., Song, W., Yan, Y., Li, X.: Real-time Identification of Rogue WiFi connections using environment-independent physical features. In: IEEE INFOCOM 2019 IEEE Conference on Computer Communications, pp. 190–198 (2019) 6. Yang, Z., Zhou, Z., Liu, Y.: From RSSI to CSI: Indoor localization via channel response. ACM Comput. Surv. 46(2), 1–32 (2013) 7. Zhu, H., Zhuo, Y., Liu, Q., Chang, S.: p-Splicer: perceiving accurate csi phases with commodity WiFi devices. IEEE Trans. Mob. Comput. 17(9), 2155–2165 (2018) 8. Zhuo, Y., Zhu, H., Xue, H.: Identifying a new non-linear CSI phase measurement error with commodity WiFi devices. In: 2016 IEEE 22nd International Conference on Parallel and Distributed Systems (ICPADS), pp. 72–79 (2016) 9. Tax, D., van Breukelen, M., Duin, R., Kittler, J.: Combining multiple classifiers by averaging or by multiplying? Pattern Recogn. 33(9), 1475–1485 (2000) 10. Xie, Y., Li, Z., Li, M.: Precise power delay profiling with commodity wifi. In: Proceedings of the 21st Annual International Conference on Mobile Computing and Networking. ACM, pp. 53–64 (2015)

Panoramic Video Quality Assessment Based on Spatial-Temporal Convolutional Neural Networks Tingting An1,2(&), Songlin Sun1,2, and Rui Liu3 1 School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing, China [email protected] 2 Trusted Distributed Computing and Services Ministry of Education Key Laboratory, Beijing, China 3 San Jose State University, San Jose, CA 95192, US

Abstract. The development of 5G technology and Ultra HD video provide the basis for panoramic video, namely virtual reality (VR). At present, the traditional VQA method is not effective on panoramic video. Therefore, it is crucial to design objective VQA models for the standardization of panoramic video industry. With the development of deep learning, excellent algorithms of VQA methods based on convolutional neural network have emerged. In this paper, we propose a full reference VQA model based on spatial-temporal 3D convolutional neural network, the feature extraction combined the time and spatial information. we verify and optimize the proposed VQA model based on VQAODV panoramic video database, its objective score has a higher correlation with subjective scores than that of traditional VQA methods. Keywords: Video quality assessment

 Panoramic video  CNN  FR-VQA

1 Introduction Panoramic video is made by 360 shooting with panoramic camera, is also called VR video. Panoramic video has attracted more and more attention in recent years. Users can watch panoramic video with head mounted display (HMD) or VR glasses, and can have interactive experience [1]. Viewers can control the viewport, including up and down, left and right, front and back. Panoramic video from production to terminal display goes through the process of shooting, splicing, projection, coding, transmission, decoding and display as Fig. 1 shows. For ease of transmission and encoding, the panoramic video is projected from the sphere onto the plane, which is the process of projection. And the panoramic video is through coding and decoded and then projected onto the sphere by a plane for display. The splicing and projection process will produce quality loss than traditional video and will have a great impact on the quality of user experience. How to evaluate the quality and user experience of panoramic video is of great significance to promote the development of the industry and improve the user experience. ©

The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 J. Sun et al. (Eds.): Signal and Information Processing, Networking and Computers, LNEE 917, pp. 1348–1356, 2023. https://doi.org/10.1007/978-981-19-3387-5_161

Panoramic Video Quality Assessment Based on Spatial-Temporal

1349

Fig. 1. Process from panoramic video production to terminal display

VQA methods are divided into subjective VQA methods and objective VQA methods. The most accurate way to assess the quality of videos is to collect the opinions of a large number of viewers in the form of opinion scores which is called subjective VQA method, and the average score is known as the mean-opinion-score (MOS). However, it costs much human and material resources, so we need to design more accurate and effective objective methods. Objective scores need to have a high correlation with MOS, the higher of correlation, the effective of the subjective model. The objective quality assessment methods are divided into three categories: fullreference video quality assessment (FR-VQA), reduced-reference video quality assessment (RR-VQA), and blind or no-reference video quality assessment (NR-VQA). FR-VQA models need both the information of reference video and distorted video. RRVQA models need some information of the reference video, and NR-VQA models have potentially much broader applicability than FR and RR VQA since they can predict quality scores without the reference video. Full-reference VQA is relatively mature with a solid theoretical foundation, and has a series of widely used VQA algorithms, such as MSE [2], PSNR [3], VQM [4], VIF [5], MAD [6], MOVIE [7], deep-VQA [14] etc. These are examples of successful FRVQA algorithms. MSE and PSNR do not consider human visual system (HVS), therefore SSIM, MS-SSIM [8] and FSIM [9] are proposed. VQM is a combination method of seven video characteristic, and VIF used NSS information. MAD and MOVIE consider the temporal and motion information of video to score. Kim et al. [14] proposed a deep-VQA method based on deep learning. It uses convolution to extract features in time domain and space domain to form feature vectors, and its performance is better than traditional VQA methods. All the above methods can be used for the VQA of panoramic video. However, due to the particularity of panoramic video production and format, it is necessary to design the panoramic VQA method according to the structural characteristics of panoramic video. Y. Sun [10] et al. proposed WS-PSNR, consider the projection weight to score the quality. Zakharchenko et al. [11] proposed CPP-PSNR, it converts the reference video and output video into CPP format, and then evaluates the video quality. In addition, Yu [12] et al. proposed S-PSNR, which calculates the peak signal-to-noise ratio of a group of uniform sampling points on the sphere. By introducing the interpolation algorithm, it can process the quality evaluation under different projections. SPSNR only considers the vertical observation differences. S.Yang [12] extended this algorithm and proposed a method based on multi-level quality factors. The Joint Video Experts Team (JVET) adopted the above three methods.

1350

T. An et al.

In addition, Netflix proposed a multi model fusion evaluation method (VMAF) in [13] 2016. It has been verified to be effective in panoramic VQA. With the remarkable success of deep learning model in the field of computer vision, two-dimensional and three-dimensional convolutional neural networks are also gradually applied to full reference video evaluation. Xu [18] et al. proposed a video quality prediction method based on CNN, which combines eye movement and head movement data into weights of different blocks, and uses CNN network to score the quality. At present, there are still a series of problems in panoramic VQA algorithm. For example, we have not enough panoramic video database for display and testing. And most VQA models do not extract texture deeply or ignore the motion information. Different projection formats have different effects on panoramic video images. Therefore, how to reduce the distortion of video images during projection is also important. In this paper, we proposed a FR-VQA model using 3D convolution neural network which extract both the temporal and spatial information effectively. we verify and optimize the proposed VQA model based on VQA-ODV panoramic video database.

2 Background 2.1

Panoramic Video

Panoramic video materials can be directly exported from the existing materials in the camera through the integrated panoramic camera shooting equipment. At present, the fisheye lens can directly capture the 360 scene, and then correct and splice the distortion to obtain the panoramic content. Another method is to combine the multi lens material taken by the camera lens. The combination camera can use either a rotary PTZ camera or a multi-directional camera to shoot synchronously.At present, the real scene shooting of panoramic video is limited by equipment, technology and site, and cannot be applied to any scene, which makes some lenses with high resolution requirements, difficult shooting or unable to shoot have to use matting synthesis or computer image production to generate three-dimensional panoramic video. In projection process, it will lead to deformation and distortion, as well as the spatial dimension difference between subjective and objective video evaluation methods. The traditional video evaluation methods are not fully suitable for panoramic video quality evaluation. There are many frequently used projection formats: ERP (Equirectangular) projection, CMP (Cubemap) projection, and EAC (Equi-Angular Cubemap) projection. Projection will inevitably distort the original pixel distribution and shape. Therefore, we need to reduce the distortion of video projection. The most commonly use format is ERP as the Fig. 2 shows, after projection, the original pixel distribution will be deformed, and the degree of distortion at different positions will be different. The closer to the two poles, the greater the distortion. The projection process of the panoramic video is shown in figure.

Panoramic Video Quality Assessment Based on Spatial-Temporal

1351

Fig. 2. Panoramic video ERP projection format

2.2

Datasets

Database is very important, which can provide evaluation criteria and unified reference for objective evaluation models. JVET [15] released 10 general panoramic video test sequences for special panoramic video research. Duan et al. [16] established an immersive video quality assessment database (IVQAD 2017). The original video in IVQAD 2017 was captured by insta 360 4K spherical VR camera. The distorted video is derived from the original video by using three degradation factors: bit rate, frame rate and resolution. Yang et al. [17] Established a VQA-TJU database for VR video evaluation, which consists of 13 reference VR, 10 symmetric distortion VR, 260 asymmetric distortion VR and related MOS. This paper selects the VQA-ODV video test set released by Xu [18] et al. in 2018. The data set collects 60 reference sequences and 540 damaged sequences, which are divided into ten groups, and publishes the subjective score and eye movement data of each sequence. The video resolution is between 3K and 8K. We choose 4K test sequence and corresponding ERP compression sequence in VQA-ODV video test set to train and test the objective model in this paper. 2.3

Convolutional Neural Networks

CNN [19] is developed from artificial neural network and is essentially a feedforward neural network with deep structure. CNN uses convolution to analyze spatial data and temporal data. It has strong representation learning ability and can extract high-order features of input information. As Fig. 3 shows, 2D-CNN algorithm convolutes each frame of video to obtain a frame feature map, which extracts spatial information very well. As Fig. 4 shows, 3DCNN [20] method can analyze the time-domain information of video and obtain multi frame feature map.

Fig. 3. 2D-CNN

Fig. 4. 3D-CNN

1352

T. An et al.

The formula of 2D-CNN is: f ði; jÞ ¼ AF

PP p

q

! xi þ p;j þ q wp;q þ b

ð1Þ

The formula of 3D-CNN is: f ði; j; k Þ ¼ AF

PPP p

q

r

! xi þ p;j þ q;k þ r wp;q;r þ b

ð2Þ

Activation formula is: f ðxÞ ¼ maxð0; xÞ

ð3Þ

3 Proposed Model 3.1

Spatial-Temporal Convolutional Neural Networks

As Fig. 5 shows, in this paper we proposed a method that can extract the video feature both in temporal domain and special domain, which both use 2D-CNN and 3D-CNN.

Fig. 5. Proposed algorithm framework

The input video consists of three RGB channels, and each frame is cut into patch of M  N. And the patch passes through the convolution layer, the excitation layer, the pooling layer and the full connection layer, and finally outputs the score of the video. These patches get the feature map after passing through the convolution layer, get the feature vector after passing through the first fully connected layer, and then get the score output after passing through the second fully connected layer. If we want to put an n  M  N. The pooled characteristic graph set of n is output as a 128-dimensional vector. Feature maps need to go through n  128 convolution layers.

Panoramic Video Quality Assessment Based on Spatial-Temporal

1353

Table 1. CNN layer design Conv2D-S1 Conv2D-S2 Conv3D-1 Conv3D-2 Conv3D-3 Conv3D-4 Conv3D-5 Conv3D-6

Kernel size Input channel Output channel Stride Padding 33 1 16 122 1 33 16 16 122 1 333 32 64 122 1 333 64 64 122 1 333 64 128 122 1 3  3  3 128 64 122 1 333 64 32 122 1 333 32 1 122 1

We first perform diff operation on the distorted video and the reference video to obtain the residual video, then extract the spatial features of the residual video and the distorted video respectively, and then perform 3D-CNN operation on the connected 32dimensional video features six times to form the feature vector combination. After investigation, we can know that the size of the convolution kernel is in all layers and uses 3  3  3. The small convolution kernel of 3 is the best. And following is our design of layers (Table 1). During training, give the input video an initial value of weight, and then input it to CNN network. After outputting the score, use the loss function to compare the deviation. If the deviation is reasonable, end the training. If the deviation is unreasonable, calculate the error, update the weight, input the new weight into CNN network and continue the training process. The loss function is defined as: l ¼ N1

N  P

Qsub  Qpre

2

ð4Þ

k¼1

L2 regularization is selected as the weight attenuation method to avoid over fitting of the model: L ¼ N1

N  2 P Qsub  Qpre þ k2 w2

ð5Þ

k¼1

In training process, the setting of learning rate alpha, batch size and cyclic training times epochs is very important for network training. The learning rate is too large (may exceed the optimal value) and the learning rate is too small (unable to converge for a long time). The basic idea of learning rate attenuation is that the learning rate decreases gradually with the progress of training. 3.2

Weight Calculation

The process of mapping panoramic video to ERP will cause more stretching of bipolar images [21], as shown in Fig. 6. Since the digital image is a sampling expression of

1354

T. An et al.

Fig. 6. Shape distortion

Fig. 7. Weight graph

continuous space, the area stretching ratio of image continuous space domain should also be discretized to obtain the weight map, as shown in Fig. 7. wði; jÞerp ¼ cos

ðj þ 0:5N2 Þp N

ð6Þ

M  N is the resolution of ERP, and M ¼ 2  N. The relation of ði; jÞ and ðx; yÞ is 2ði þ 0:5N2 Þ 2ðj þ 0:5N2 Þ and y ¼ . x¼ M M We can use this feature to make the objective score Q1 more effective, we design a formula to correct the objective result. For ERP projection format, the weight value decreases from the equator to the polar region Qf ¼ Qw1f wf ¼

PP i

j

wði; jÞerp

ð7Þ ð8Þ

4 Experiment and Result To measure the quality of an objective VQA method is to measure the correlation between the objective results obtained and the subjective results. We usually compare the results of objective model with the MOS value scored by the subjective method. The commonly used indicators to measure the performance of objective quality evaluation algorithms are PLCC [22], SROCC [23], KROCC [23] and RMSE [24] (Table 2).

Panoramic Video Quality Assessment Based on Spatial-Temporal

1355

Table 2. Experiment results Method PSNR SSIM WS-PSNR CPP-PSNR S-PSNR Proposed Model

PLCC 0.541 0.562 0.596 0.632 0.707 0.867

SROCC 0.512 0.547 0.556 0.575 0.637 0.837

KROCC RMSE 0.496 10.415 0.526 10.391 0.527 8.097 0.547 7.782 0.598 7.692 0.796 5.664

We can see that in VQA-ODV dataset, the PLCC, SROCC, and KROCC is higher than the traditional methods, and RMSE is lower than other methods, so we can make a conclusion that the method we proposed is better.

5 Conclusion We propose a method based on CNN, which can evaluate the quality of panoramic video end-to-end. We use both 2D-CNN and 3D-CNN to extract the temporal and spatial feature to achieve panoramic VQA effectively. In the field of full reference video evaluation, we verify the rationality of 3D-CNN, and verify that the effect of spatiotemporal joint evaluation model is better than other methods. In the future, we will try to improve the evaluation method by using deep learning on the non-reference evaluation model.

References 1. 《5G高新视频-沉浸式视频技术白皮书. 国家广播电视总局科技司 (2020) 2. Gaubatz, M., Hemami, S.: Visual Quality Assessment Package. http://foulard,ece.cornell. Edu/gaubatz/metrix_mux 3. Sheikh, H.R., Sabir, M.F., Bovik, A.C.: A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans. Image Process. 15(11), 3440–3451 (2006) 4. Pinson, M.H., Wolf, S.: A new standardized method for objectively measuring video quality. IEEE Trans. Broadcast. 50(3), 312–322 (2004) 5. Sheikh, H.R., Bovik, A.C.: A visual information fidelity approach to video quality assessment. In: Proceedings of the 1st International Workshop on Video Processing and Quality Metrics for Consumer Electronics, pp. 23–25 (2005) 6. Vu, P.V., Vu, C.T., Chandler, D.M.: A spatiotemporal most apparent-distortion model for video quality assessment. In: Proceedings of the 18th IEEE International Conference on Image Processing (ICIP), pp. 2505–2508 (2011) 7. Seshadrinathan, K., Bovik, A.C.: Motion tuned spatiotemporal quality assessment of natural videos. IEEE Trans. Image Process. 19(2), 335–350 (2010) 8. Wang, Z., Simoncelli, E.P., Bovik, A.C.: Multiscale structural similarity for image quality assessment. In: Proceedings of the 27th Asilomar Conference on Signals, Systems & Computers, vol. 2, pp. 1398–1402 (2003)

1356

T. An et al.

9. Zhang, L., Zhang, L., Mou, X., Zhang, D.: FSIM: a feature similarity index for image quality assessment. IEEE Trans. Image Process. 20(8), 2378–2386 (2011) 10. Zakharchenko, V., Choi, K.P., Park, J.H.: Quality metric for spherical panoramic video. In: Optics and Photonics for Information Processing X, vol. 9970. International Society for Optics and Photonics, p. 99700C (2016) 11. Yu, M., Lakshman, H., Girod, B.: A framework to evaluate omnidirectional video coding schemes. In: IEEE ISMAR. IEEE, pp. 31–36 (2015) 12. Yang, S. et al.: An objective assessment method based on multi-level factors for panoramic videos. In: 2017 IEEE Visual Communications and Image Processing (VCIP), St. Petersburg, FL, USA, pp. 1–4 (2017). https://doi.org/10.1109/VCIP.2017.8305133 13. Li, Z., Aaron, A., Katsavounidis, I., Moorthy, A., Manohara, M.: Toward a practical perceptual video quality metric. http://techblog.netflflix.com/2016/06/ toward practicalperceptual-video.html. Accessed 5 Dec 2017 14. Kim, W., Kim, J., Ahn, S., Kim, J., Lee, S.: Deep Video Quality Assessor: From SpatioTemporal Visual Sensitivity to a Convolutional Neural Aggregation Network. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11205, pp. 224– 241. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01246-5_14 15. Hanhart, P., Boyce, J., Choi, K.: JVET common test conditions and evaluation procedures for 360 video. In: Joint Video Exploration Team of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, JVET-K1012 (2018) 16. Duan, H., Zhai, G., Yang, X., et al.: IVQAD 2017: an immersive video quality assessment database. In: 2017 International Conference on Systems, Signals and Image Processing (IWSSIP), pp. 1–5 (2017) 17. Yang, J.C., Liu, T.L., Jiang, B., et al.: 3D panoramic virtual reality video quality assessment based on 3D convolutional neural networks. IEEE Access 6, 38669–38682 (2018) 18. Li, C., Xu, M., Du, X., Wang, Z.: Bridge the gap between VQA and human behavior on omnidirectional video: a large-scale dataset and a deep learning model. In: 2018 ACM Multimedia Conference (MM ’18), October 22–26, 2018, Seoul, Republic of Korea. ACM, New York, NY, USA, p. 9 (2018) 19. LeCun, Y., et al.: Backpropagation applied to handwritten zip code recognition. Neural Comput. 1(4), 541–551 (1989) 20. Tran, D., Bourdev, L., Fergus, R., Torresani, L., Paluri, M.:Learning spatiotemporal features with 3D convolutional networks. In: 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, pp.4489–4497 (2015) 21. 孙宇乐. 全景视频质量评价与编码研究. 浙江:浙江大学 (2020) 22. Pearson, K.: Note on regression and inheritance in the case of two parents. Proc. R. Soc. London 58(347–352), 240–242 (1895) 23. Yang, J.C., Lin, Y.C., Gao, Z.Q., Lv, Z.H., Wei, W., Song, H.B.: Quality index for stereoscopic images by separately evaluating adding and subtracting. PLoS ONE 10(12), e0145800 (2015) 24. Kim, S.J., Chae, C.B., Lee, J.S.: Subjective and objective quality assessment of videos in error-prone network environments. Multimedia Tools Appl. 75(12), 6849–6870 (2016). https://doi.org/10.1007/s11042-015-2613-6

Fast CU Partition Algorithm for AVS3 Based on DCT Coefficients Xi Tao(&), Lei Chen, Fuyun Kang, Chenggang Xu, and Zihao Liu Beijing University of Posts and Telecommunications, Beijing 100876, China [email protected]

Abstract. Audio Video coding Standard 3 (AVS3), which has been brought in many advanced algorithms and technologies on the basis of previous video coding standard, has already improved the coding efficiency greatly. However, with the performance improvement comes a huge increase in coding time. To facilitate commercialization, algorithms that reduce the time complexity of AVS3 standard must be developed. Based on the new code block (CU) partitioning rules of AVS3 standard, we present a new CU partition algorithm based on the discrete cosine transform (DCT) coefficients. According to the feature that DCT coefficient can reflect image texture, we find the relationship between it and CU partition, The DCT coefficients calculated by the fast DCT algorithm are used to limit and make decisions on CU partitioning in all-intra (AI) mode, so that the ergodic process of CU partitioning is greatly reduced, thus achieving the effect of reducing the time complexity. Experimental results show that the proposed algorithm can reduce the time complexity with a small BjøntegaardDelta rate (BD-rate) increase. Keywords: Fast CU partition algorithm  Discrete cosine transform coefficient  Audio Video coding Standard 3

1 Introduction Audio Video coding Standard 3 (AVS3) standard inherits the traditional hybrid coding framework of previous audio and video coding standards, which still includes code block (CU) partitioning, intra-frame prediction, inter-frame prediction, transformation and quantization, and entropy coding. Different from the previous video coding standards, many new algorithms and technologies are introduced into CU partition, intraframe prediction and inter-frame prediction, which greatly improve the coding efficiency. For example, CU partitioning uses binary tree (BT), quadtrees (QT), and extended quadtrees (EQT), and inter-frame prediction mode introduces a new technology, affine motion prediction and so on. These technologies have brought about an ideal improvement in Bjøntegaard-Delta rate (BD-rate) compared with previous standards. Compared with the previous generation of AVS standard, the BD-rate has decreased by about 10.56%, and compared with high efficiency video coding (HEVC) standard, it has decreased by 9.45%. However, the application of a large number of new algorithms and technologies has brought about a significant increase in the complexity of coding time. ©

The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 J. Sun et al. (Eds.): Signal and Information Processing, Networking and Computers, LNEE 917, pp. 1357–1364, 2023. https://doi.org/10.1007/978-981-19-3387-5_162

1358

X. Tao et al.

According to the previous fast intra-frame prediction algorithm and fast CU partition algorithm, most of the similar parts of the image texture can be divided into the same region, so many fast algorithms take the image texture value as the basis of judging CU partition to speed up the judgment. Therefore, the general idea is to extract the texture information of each frame image in the spatial domain and divide the CU with similar texture information together. Our method is based on the perspective of frequency domain, because according to our research, frequency domain information can also guide the content information of the image. Our work is based on the new block partitioning method of AVS3 standard. According to the features of BT and EQT, the block partitioning method is judged and constrained to reduce the traversal calculation rate distortion cost (RD-cost), so as to achieve coding time saving. In any previous video coding standard, the block partition is QT, so all CU units are square, ranging in size from 64  64 to 4  4. In the AVS3 standard, CU has a nonsquare shape, with both vertical and horizontal characteristics.

2 Working Principle of CU Partition in AVS3 AVS3 adopts the basic CU partition structure of QT + BT + EQT. In the previous generation standard (AVS2 and HEVC), QT partition is used to divide a CU into four sub-CUs. In addition to supporting QT partitioning, AVS3 also supports BT and EQT partitioning. Where, BT can divide a CU into left and right/upper and lower sub-CU. EQT can divide a CU into four sub-CU types, including horizontal and vertical Hshaped partitioning (Fig. 1).

Fig. 1. CU partition structures in AVS3 (QT + BT, two kinds of EQT from left to right)

In AVS3, the representation of QTBT + EQT basic CU partition structure in code stream is shown in the Fig. 2. If it is not QT, we need to further determine whether it is not divided. If it is not divided, we need to end the judgment. If it is divided, we need to determine whether it is EQT or BT. An important part of basic CU partitioning is partitioning constraints, which are divided into three types in AVS3. The first is the partition size constraint, which mainly refers to maximum/minimum CU size that QT, BT and EQT can use. The values in brackets are the default values set in AVS3 reference software. Other constraints

Fast CU Partition Algorithm for AVS3 Based on DCT Coefficients

1359

Fig. 2. CU partition structures in AVS3 (QT + BT, two kinds of EQT from left to right)

include the depth of division, the maximum depth of division in AVS3 is six layers. In addition, the maximum aspect ratio of coding unit in AVS3 is 1:8. QT partition takes precedence over BT/EQT partition. Once entering BT/EQT partition, QT partition cannot be carried out again. And the largest size of CU, which is named as LCU, enlarged from 64  64 to 128  128. In the process of coding, the RD-cost will always be calculated and traversed in accordance with the above rules, so as to realize the selection of the best CU partition. However, traversing the partitioning methods of various sizes and calculating the RDcost are the most time-consuming part in video coding. In previous studies, it is found that image texture and CU partition have obvious regularity. In general, image blocks with relatively flat textures are more easily divided into one CU, while image blocks with high texture complexity are often divided into multiple CUs. Image blocks with similar texture are more likely to be divided into the same CU, while image blocks with different textures are more likely to be split. Taking advantage of this feature, many fast CU partitioning algorithms guide CU partitioning by calculating texture features of images in spatial domain. In addition, some algorithms use the statistical law of CU partition under different QP values in AVS3 to constrain the way of CU partition. However, we find that the coefficient matrix after

1360

X. Tao et al.

DCT transformation has a certain relationship with the image texture, and can also affect the CU partition.

3 DCT and Implementation of Fast CU Partition Algorithm 3.1

Discrete Cosine Transform

DCT is a kind of transform coding and a necessary step in video compression coding. Transform coding is a coding method that carries out orthogonal transformation of pixel matrix in spatial domain and processes the obtained transform coefficients. In video coding, two-dimensional DCT is carried out for the prediction residuals, which can transform the time domain image into the energy distribution map in the frequency domain, and realize several kinds of image information energy, so as to remove the spatial redundancy. There are many algorithms in transformation coding, and the transformation with least mean square quantization error is Karhunen-Loeve (K-L) transformation. But the basis vector of the K-L transformation is related to and changes with the statistical properties of the encoded signal, so it cannot be implemented. The orthogonal transformations whose performance is close to K-L transform are cosine transform, sine transform, Hadamard transform, oblique transform and so on. A large number of experiments have proved that when the signal statistical characteristics conform to the first-order stationary Markov process, the performance of DCT is closest to K-L transform, and DCT has a fast algorithm, so it is widely used in image coding. M  1-dimensional DCT transform: F ð uÞ ¼

qffiffiffi M1 P þ 1Þup 2 f ðmÞcos ð2m2M M C ð uÞ

ð1Þ

m¼0

f ðm Þ ¼

qffiffiffi M1 P 2 M

u¼0

þ 1Þup C ðuÞF ðuÞcos ð2m2M

 C ð uÞ ¼

p1ffiffi ; 2

1;

u¼0 u 6¼ 0

ð2Þ ð3Þ

where f ðmÞ is the signal sample value and F ðuÞ is transformation coefficient. And both m and u are integers from 0 to M  1. The formula of two-dimensional DCT which has been used in video coding is as follow:

f ðm; nÞ ¼ N2

N1 P N1 P u¼0 v¼0

N1 P N1 P

þ 1Þup 1Þvp f ðm; nÞcos ð2m 2N cos ð2n þ 2N

ð4Þ

þ 1Þup 1Þvp C ðuÞCðvÞF ðu; vÞcos ð2m 2N cos ð2n þ 2N

ð5Þ

F ðu; vÞ ¼ N2 C ðuÞCðvÞ

m¼0 n¼0

Fast CU Partition Algorithm for AVS3 Based on DCT Coefficients

 C ðuÞ; C ðvÞ ¼

p1ffiffi ; 2

1;

u; v ¼ 0 u; v 6¼ 0

1361

ð6Þ

Parameter m; n; u; v are all integers from 0 to M  1. It can be obtained from (4) to (6) that each DCT coefficient is a linear combination of all pixel values in CU. The coefficient in the upper left corner of the obtained discrete DCT coefficient matrix is the direct current (DC) coefficient, representing the average brightness of the CU, and the other coefficients are alternating current (AC) coefficients. The value of each AC coefficient represents the change of gray value in certain direction. Taking 8  8 DCT coefficient calculation as an example, F1;0 ¼ C14C0

F1;0

7 P 7 P i¼0 j¼0

1Þp f ði; jÞcos ð2i þ 16

7  8 7 P P > p > cos f ð 0; j Þ  f ð 7; j Þ þ > 16 > > i¼0 i¼0 >   > > 7 7 P P > > 3p > f ð1; jÞ  f ð6; jÞ þ < cos 16 i¼0 i¼0  ¼ C14C0 7 7 P P > > 5p > f ð 2; j Þ  f ð 5; j Þ þ cos > 16 > > i¼0 i¼0 >   > 7 7 > P P > > : cos 7p f ð 3; j Þ  f ð4; jÞ 16 i¼0

ð7Þ 9 > > > > > > > > > > > = > > > > > > > > > > > ;

ð8Þ

i¼0

According to (8), the value of DCT coefficient depends on the intensity difference between the upper and lower parts of the input block in the vertical direction. Thus, when the values of DCT coefficients in horizontal, vertical and diagonal directions are large, the texture features of the corresponding image blocks also point to horizontal, vertical and diagonal directions. Based on the above, we can infer that the absolute value of the DCT coefficient in the first row represents the weight of the corresponding image vertical texture, while the absolute value of the DCT coefficient in the first column represents the weight of the image horizontal texture. Some studies have found that simple texture Angle and DCT coefficient have the following relationship, h represents the angle value of the texture direction and horizontal direction of the image, Ev ¼

N1 P

jF ð0; vÞj

ð9Þ

jF ðh; 0Þj

ð10Þ

v¼1

Eh ¼

N1 P h¼1

1362

X. Tao et al.

h ¼ tanh1

3.2

  Ev Eh

ð11Þ

Fast CU Partition Algorithm

Through DCT transform experiments, we obtained the 64 × 64 DCT coefficients of each sequence in intra mode and compared them with the CU partition results produced by the AVS3 standard algorithm in All Intra (AI) mode. It is found that, because most CU textures are complex, the texture direction of most CUs cannot be measured by a single angle value. However, for most CUs the partition direction is consistent with a comparison between the first three AC coefficients in the first row and the first three AC coefficients in the first column (after removing the DC component); CUs with complex content do not follow this rule. To account for the time complexity introduced by the DCT transform, we conducted DCT experiments on image blocks from 128 × 128 down to 4 × 4 and found that the 128 × 128 and 64 × 64 sizes introduce the lowest time overhead. At the same time, because CU partition in the AVS3 standard proceeds from large to small, a decision made at a high level avoids more traversal and calculation, while smaller CU sizes correspond to more complex textures. We therefore make the CU partition decision at the maximum CU size of 64 × 64 in AI mode. Combined with the rule derived from the DCT coefficients above, the specific process of our algorithm is as follows. In AI mode, a 64 × 64 CU is first DCT transformed to obtain a 64 × 64 coefficient matrix, and the sum of the absolute values of the 8 × 8 coefficients F_{u,v} (u, v = 1, …, 8), as in (14), is compared with 3000. If the sum exceeds 3000, the CU is treated as complex and the fast decision is skipped. Otherwise the CU is regarded as a simple CU and a fast judgment is made: A is the sum of the absolute values of the first three AC coefficients in the first row (excluding the DC component), and B is the sum of the absolute values of the first three AC coefficients in the first column. If A is more than twice B, only the no-split decision and the vertical BT and EQT partitions are evaluated; if B is more than twice A, only the no-split decision and the horizontal BT and EQT partitions are evaluated (Fig. 3).

A = |F_{0,1}| + |F_{0,2}| + |F_{0,3}|    (12)

B = |F_{1,0}| + |F_{2,0}| + |F_{3,0}|    (13)

\sum_{u=1}^{8}\sum_{v=1}^{8} \left| F_{u,v} \right|    (14)
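The decision rule above can be summarized in a few lines. The sketch below is an illustrative Python rendering of Eqs. (12)–(14) with placeholder partition-mode names; it is not HPM-6.0 code, and the 64 × 64 DCT matrix F is assumed to be computed beforehand.

```python
# Illustrative sketch of the fast CU partition decision based on Eqs. (12)-(14).
# F is the 64x64 DCT coefficient matrix of the CU; the function returns the
# partition candidates that remain to be evaluated (mode names are placeholders).
import numpy as np

ALL_MODES = {"NO_SPLIT", "QT", "HBT", "VBT", "HEQT", "VEQT"}

def fast_partition_candidates(F, threshold=3000.0):
    if np.abs(F[1:9, 1:9]).sum() > threshold:   # Eq. (14): low-frequency AC energy
        return ALL_MODES                         # complex CU: keep the full search
    A = np.abs(F[0, 1:4]).sum()                  # Eq. (12): first three AC terms in row 0
    B = np.abs(F[1:4, 0]).sum()                  # Eq. (13): first three AC terms in column 0
    if A > 2 * B:                                # vertical texture dominates
        return {"NO_SPLIT", "VBT", "VEQT"}
    if B > 2 * A:                                # horizontal texture dominates
        return {"NO_SPLIT", "HBT", "HEQT"}
    return ALL_MODES
```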


Fig. 3. Fast CU partition algorithm based on DCT coefficients

4 Experimental Results

This algorithm was embedded into the AVS3 official reference software HPM6.0 for testing and compared with the HPM6.0 standard algorithm in AI mode at three resolutions, with BD-rate as the performance reference and encoding time as the measure of time complexity. We chose one 2160p sequence, one 1080p sequence and four 720p sequences for the experiment; the results are shown in Table 1.

Table 1. Performance of the proposed algorithm relative to HPM6.0 in AI mode

Sequences   BD-rate Y (%)   BD-rate U (%)   BD-rate V (%)   ΔEncT (%)
Avg         0.71            0.99            0.95            −27
2160p       0.66            0.47            0.86            −30
1080p       0.67            1.18            0.83            −22
720p        0.81            1.33            1.17            −2

Through comparative experiments on video sequences of three different resolutions, we find that the average BD-rate increase of each component is within 1%, while the average time complexity decreases by 27%. The loss of the 720p sequences is relatively large, up to 1.33%; the loss of the 2160p sequence is relatively low, with a minimum of 0.47%. In terms of time complexity, the average time saving of the 2160p and 720p sequences can reach 30%, while the time saving of the 1080p sequence is slightly lower, at 22%.


5 Conclusions

Facing the increased complexity of the new generation of video coding standards, using new CU partition rules together with frequency-domain characteristics can effectively reduce the time complexity with little loss of coding performance, which helps the engineering application of the AVS3 video coding standard. In this paper, from the frequency-domain perspective, the DCT coefficients are used as the reference criterion for image texture to make a fast CU partition decision. The BD-rate of the Y component increases by only 0.71% on average, those of the U and V components slightly more, and the coding complexity is reduced by 27%.


Research on Recognition Method of Maneuver and Formation Tactics in BVR Cooperative Combat Based on Dynamic Bayesian Network Yan Zhang1, Yuyang Liu2, Yinjing Guo1, Xiaodong Wang1, Lei Feng1, Guo Li1, and Yangming Guo1(&) 1

School of Computer Science and Engineering, Northwestern Polytechnical University (NWPU), Xi’an 710072, Shanxi, China [email protected] 2 Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing University of Posts and Telecommunications, Beijing, China

Abstract. With breakthroughs in modern weaponry and equipment technology and coordinated formation flying, coordinated operations beyond visual range (BVR) have become the main form of modern air combat. Accurately identifying enemy maneuvers and formation tactics is of great significance for grasping battlefield dominance and the direction of the war. Traditional situation assessment research lacks pertinence to coordinated formation tactics and strategies on the BVR battlefield. This paper considers the influence of the characteristics of BVR fighters on tactical strategies, analyzes the causal relationship between flight parameter characteristics and the evolution of tactical maneuvers, and constructs a recognition model of typical beyond-visual-range maneuvers based on a dynamic Bayesian network; on this basis, a recognition model of cooperative formation tactics is constructed. Finally, simulation experiments verify the accuracy and effectiveness of the model's recognition. Further, this model can be used as a threat module of beyond-visual-range cooperative formation tactics for air combat situation assessment. Keywords: Coordinated operations beyond visual range · Maneuver recognition · Tactical identification · Bayesian network · Cloud model

1 Background

Although the performance gap between the two sides of beyond-visual-range air combat is a real basis, it does not completely determine the outcome of the battle [1]. Real air combat involves complex environments and human factors, which fundamentally drive the emergence of war tactics and strategies, leading scholars from various countries to continue in-depth research. Combining the research on cooperative combat modes in the literature [2]: in the past, limited by the development of data and information interaction technology, cooperative combat still mainly adopted the mode of coordination with the ground


station. At present, various countries are exploring cooperation between manned aircraft and drones: usually, a manned aircraft leads a formation of one or two drones to perform air target attack missions. At the same time, future beyond-visual-range fighters will inevitably exploit their stealth characteristics and flexible tactical maneuvers to achieve the best stealth effect against radar detection and tracking. Therefore, this article systematically analyzes reasonable maneuvers and typical two-aircraft cooperative tactical strategies of BVR combat based on existing public information. To sum up, in view of the problem that most threat assessment models simplify or even ignore the impact of coordinated tactics in coordinated combat situations, this paper uses dynamic Bayesian inference networks to build, in succession, a coordinated combat maneuver recognition model and a cooperative formation tactics recognition model under beyond-visual-range conditions. The recognition result of the former is used as a child node of the latter. An overview of the contents of this chapter is shown in Fig. 1.


Fig. 1. BVR combats cooperative formation tactical threat module

2 Theoretical Basis

2.1 Dynamic Bayesian Network

A Bayesian Network (BN) is a graphical model that describes the dependence between data variables. Introducing the concept of time slices into it yields a new stochastic model for processing time-series data, the Dynamic Bayesian Network (DBN). A DBN can effectively handle uncertainty and correlation and realize nonlinear modeling of BVR air combat [3]. Dynamic Bayesian network definition: let X be the set of node variables in the network; the joint probability distribution of X is defined as

P(x_t, X_t^1, X_t^2, \ldots, X_t^j) = P(x_t)\prod_{m=1}^{j} P(X_t^m \mid x_t)    (1.1)

where t represents the current moment, t ∈ [1, T]; x_t is the state value of the root node; X_t^m is the state value of the m-th child node; P(x_t) is the prior probability distribution of the root node; and P(X_t^m | x_t) is the conditional probability distribution of the child nodes.


(1) Bayesian network parameter learning. Bayesian network parameters include prior and conditional probabilities. The prior probability can be determined directly from expert experience or from the frequency values of sample statistics. The conditional probability often uses a Bayesian estimation method combined with expert experience: the posterior probability is used as the prior probability of the next round for iterative learning, and finally the maximum likelihood estimate \hat{\theta} = \arg\max_{\theta} L(\theta) is taken as the conditional probability.

(2) General modeling process based on dynamic Bayesian networks. Dynamic Bayesian network modeling is mainly divided into two stages: network construction (including network structure and parameter learning) and network chain reasoning, as shown in Fig. 2.

Fig. 2. Schematic diagram of the construction and reasoning process of DBN

2.2 Cloud Model

The cloud model can realize the uncertainty conversion between qualitative concepts and quantitative values. Definition 1 (forward cloud generator): the forward cloud generator is a process of generating cloud droplets based on the digital characteristics of the cloud (Ex, En, He); the description of a cloud comprises the three elements Ex, En and He. The index approximation method is usually used to determine the characteristic parameters of the cloud model [4].
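To make Definition 1 concrete, the following sketch implements the standard forward normal cloud generator; it is an illustration rather than the paper's code, and the example call reuses the (0, 1, 0.001) characteristics assigned later in Sect. 3.1 to the "hold" concept of the heading-angle change.

```python
# Illustrative forward normal cloud generator: given the digital characteristics
# (Ex, En, He), generate cloud drops x_i and their certainty degrees mu_i.
import numpy as np

def forward_cloud(Ex, En, He, n_drops, seed=None):
    rng = np.random.default_rng(seed)
    En_i = rng.normal(En, He, n_drops)                 # per-drop entropy sample
    x = rng.normal(Ex, np.abs(En_i))                   # cloud drop positions
    mu = np.exp(-(x - Ex) ** 2 / (2.0 * En_i ** 2))    # certainty degree of each drop
    return x, mu

# Example: 50000 drops for the "hold" concept of the heading-angle change.
drops, certainty = forward_cloud(Ex=0.0, En=1.0, He=0.001, n_drops=50000)
```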

3 Recognition Model of BVR Maneuvers Based on DBN

According to the needs of the beyond-visual-range air combat battlefield, reasonable maneuver-related parameters are selected, the knowledge base of common flight actions of military fighters is analyzed, a maneuver recognition model based on a dynamic Bayesian network is constructed, and the recognition process is simulated and analyzed.

3.1 Construction of the Recognition Model of Typical BVR Maneuvers

(1) Analysis of the characteristics of BVR combat maneuvers. The common flight actions of traditional fighters are divided into six categories such as circling, lifting, turning, diving and jumping [5]; complex aerobatics can also be decomposed into these categories. This article divides typical maneuvers into three basic types: maneuvers in the horizontal plane, such as uniform speed, acceleration/deceleration and circling; maneuvers in the vertical plane, such as sharp jumps and dives; and three-dimensional maneuvers in space, such as snake maneuvers and combat turns. Complex actions can be recognized through DBN reasoning over successive time slices. Table 1 uses horizontal maneuvering as an example to describe the characteristic parameters.

Table 1. Typical maneuvers of BVR fighters

Maneuver         Description                             Altitude   Altitude change rate   Heading angle   Heading angle change rate   Speed
Circling left    Approximately uniform circular flight   −          −                      ↓ (Δφ −)        −                           −
Circling right   Approximately uniform circular flight   −          −                      ↑ (Δφ +)        −                           −

(2) Preprocessing of the raw data of continuous observation node variables. Based on a reasonable analysis of fighter maneuverability, the change ranges of the five parameters in Table 1 are divided into intervals. Taking the change of the circling heading angle as an example, the results of the range division are shown in Fig. 3.

Fig. 3. Schematic diagram of the change range division for the circling maneuver height index

(3) Discretization with the cloud model. According to Fig. 3, the digital characteristics are set to (−6, −1, 0.002), (0, 1, 0.001) and (6, 1, 0.002), respectively. The clouds generated from the heading-angle change variable for the qualitative concepts are shown in Fig. 4 below:


Fig. 4. Heading angle changes generate clouds

Figure 4(a) shows that for the observation value Δφ = 3.2, the probabilities of belonging to the "constant" and "larger" heading-angle concepts are P_t(hold) = 0.23294 and P_t(increase) = 0.76706; Fig. 4(b) is interpreted in the same way for the observation value Δφ = −3.1.

(4) Parameter settings of the typical maneuver recognition network model. Five observation nodes are selected: height change, height change rate, heading angle, heading angle change rate and speed. The height classification result and the heading angle classification result are two intermediate nodes. A typical BVR maneuver recognition network model is constructed in the Netica software; its four-layer structure is shown in Fig. 5:

Fig. 5. Network model structure of typical maneuver recognition

(5) DBN parameter settings for typical maneuver recognition under BVR.
A. Prior probability. In the initialization phase, the probability of each state attribute of a node is equal, so the prior probability is determined by the dimension of the state space.


B. Conditional probability table. The conditional probability table is a bridge between the prior probability and the posterior probability, and it implies the causal relationship between nodes. Because air combat data are not publicly available, this article mainly relies on prior knowledge [5] combined with the existing maneuver knowledge base [6] to fit the relationship between the parameter changes of typical maneuvers under BVR, and a small amount of simulated sample data is used to learn the CPT. Taking the height classification intermediate node as an example, P(Height_i | HC_{j=i}) = 68 and P(Height_i | HC_{j≠i}) = 8, where i represents the different states of the corresponding nodes.
C. Conditional transition probability. The "dynamic" of the dynamic Bayesian network means that it establishes the transition relationship between the current moment and the next moment based on the conditional transition probability. Owing to limited space, only the circling maneuvers are taken as an example; the conditional transition probabilities are given in Table 2.

Table 2. Conditional transition probability table (%)

P(MD[t] | MD[t+1])   MDi (i = 1, 2, 3, 4, 5, 6, 8)   MD7    MD9    MD12
Circling left        2.92                            58.3   2.92   12.5
Circling right       2.92                            2.92   58.3   12.5

Among them, MD_i corresponds to the 12 maneuvers of the root node.

3.2 Tactical Identification Model Reasoning Process Based on DBN

The chain reasoning process of the DBN is summarized as follows. Step 1: initialize the network structure, prior probability table, conditional probability table and state transition probability table, and use formula (1.1) to calculate the joint probability distribution P(Height_t, HCR_t, CA_t, CAR_t, Speed_t). Step 2: in the initial state, P(MD_t | e_t) is uniform and determined by the dimension of the root node's state space. Then, substituting the joint probability distribution into formula (1.2) yields the conditional probability distribution with which the observed child nodes infer the maneuver recognition result at the root node [7]:

P(e_t \mid MD_t) = \frac{P(MD_t \mid e_t)\, P(Height_t, HCR_t, CA_t, CAR_t, Speed_t)}{P(MD_t)}    (1.2)

Here e_t represents the evidence information observed at the current moment. Step 3: update the conditional probability of the root node P(MD_t | e_{1:t}):

P(MD_t \mid e_{1:t}) = \frac{P(e_t \mid MD_t)\sum_{MD_{t-1}} P(MD_t \mid MD_{t-1})\, P(MD_{t-1} \mid e_{1:t-1})}{\sum_{MD_t} P(e_t \mid MD_t)\sum_{MD_{t-1}} P(MD_t \mid MD_{t-1})\, P(MD_{t-1} \mid e_{1:t-1})}    (1.3)

In formula (1.3), the denominator can be regarded as a normalization term; P(MD_t | MD_{t-1}) represents the transition probability from the state at the previous moment to the current moment, and P(MD_{t-1} | e_{1:t-1}) represents the probability of the recognition result at the previous moment. Generally, when this probability tends to stabilize, the network has converged.
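A compact numerical sketch of the chain reasoning of Steps 1–3 is given below; the three-state transition matrix and likelihoods are toy stand-ins for the 12-maneuver model, and the function only illustrates the recursive update of Eq. (1.3).

```python
# Illustrative recursive DBN update (Eq. (1.3)): posterior over root-node states
# given the evidence observed up to the current time slice.
import numpy as np

def dbn_step(prior, transition, likelihood):
    """prior:      P(MD_{t-1} | e_{1:t-1}), shape (S,)
       transition: P(MD_t | MD_{t-1}),      shape (S, S), rows sum to 1
       likelihood: P(e_t | MD_t),           shape (S,)
       returns     P(MD_t | e_{1:t})"""
    predicted = prior @ transition            # sum over MD_{t-1}
    unnormalized = likelihood * predicted
    return unnormalized / unnormalized.sum()  # normalization (denominator of (1.3))

# Toy example: 3 maneuver states, uniform initial prior (Step 2).
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
belief = np.full(3, 1.0 / 3.0)
for lik in ([0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.5, 0.4, 0.1]):
    belief = dbn_step(belief, T, np.array(lik))
print(belief)  # converges toward the maneuver best supported by the evidence
```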

4 Recognition Model of BVR Formation Tactics Based on DBN

The composition of the two-aircraft formation tactical strategy can be considered from four dimensions: the vertical dimension of the height difference, the horizontal dimension of the azimuth relationship, the time dimension of maneuvering while approaching or moving away from us, and the performance dimension of the leader/wingman attributes. Owing to space limitations, the steps that are the same as above, including discretization, network parameter setting and the model reasoning process, are not repeated here.

Table 3. Description of the tactical strategy of a typical BVR dual-aircraft formation [8]

Tactical name: head attack tactics.
Leader: altitude: under us, under the wingman; position: in front of us, wingman side; orientation: fly towards; maneuver intention: attack directly.
Wingman: altitude: above or at the same height as us, above the leader; position: in front of us, leader front/side; orientation: fly towards; maneuver intention: turning circle to lure.

4.1 Construction of the Typical BVR Formation Tactics Recognition Network

(1) Analysis of a typical two-aircraft formation strategy on the BVR battlefield. Taking the head attack as an example, the analysis is shown in Table 3. (2) Construction of a typical dual-aircraft formation recognition network. Eight child nodes are determined: the relative height, azimuth angle, entry angle and maneuver-model output of the leader/wingman; two intermediate nodes are determined


Fig. 6. Network model structure of typical two-aircraft formation tactical recognition

for the leader-aircraft attack intention and the wingman attack-cover intention; specifically, the layer structure is shown in Fig. 6.

5 Simulation Experiment and Analysis

Assume that in the BVR air combat simulation scenario the two sides fight at altitudes of 4 km−10 km, starting from a distance of about 30 km, and that the enemy No. 1 fighter (the lead aircraft) is taken as the observation target performing maneuvers within 60 s. The evolution of the formation tactics is identified and analyzed. The initial state information of both sides is shown in Table 4.

Table 4. Initial state information of both sides

Participating aircraft   X (m)      Y (m)       Altitude (m)   Speed (m/s)   Course (°)
Our fighter              4231.00    0           4000.00        233           45
Enemy fighter 1          2060.45    34337.43    4015.42        210           −25.15
Enemy fighter 2          4308.22    32306.61    4137.86        186           −102.34

The continuous observation state data of the enemy No. 1 fighter during 0−60 s are input into the cloud model, the number of cloud drops is set to 50000, and the confidence range of the membership degree is set for the five indicator factors. Taking the height deviation per second as an example, the observation probability result of the evidence node is


shown in Fig. 7(a): in the figure, the blue area represents the probability that the enemy's height deviation remains unchanged, and the yellow area represents the probability that it increases.

Fig. 7. Recognition model results for the enemy No. 1 fighter: (a) probability result of the cloud model; (b) probability result of maneuver recognition

Corresponding to the simulation scenario, the enemy No. 1 fighter first climbs and then holds its altitude. Further, the discretized values are used as observation evidence probabilities in the inference model to obtain the probability distribution of the maneuvers recognized for the enemy No. 1 fighter during 1−21 s, as shown in Fig. 7(b). It can be seen from Fig. 7(b) that the recognition result for 1−5 s is a jump (climb) and that for 6−21 s is a right circling, whose recognition probability gradually rises to 99.8%. Compared with the simulation scenario, after the enemy No. 1 fighter observed us, in order to prevent us from opening up an altitude difference, the heading angle of the two aircraft changed suddenly once it climbed to a certain position; after 21 s it entered a stable right circling under the guidance of the formation tactics (omitted in the figure). The enemy No. 2 fighter can be identified in the same way.

Fig. 8. Probability distributions of the cloud model results: (a) target azimuth angle; (b) target entry angle


The cloud model is further used to calculate the probability distributions of the three sets of observation evidence: the target azimuth angle, the target entry angle and the relative height. Taking the enemy No. 1 fighter as an example, the probability distribution of the observation information cloud model is shown in Fig. 8. The green, blue and yellow areas in Fig. 8(a) indicate that the enemy flies sideways, toward us and away from us, respectively. The green and blue areas in Fig. 8(b)

Fig. 9. Probability distribution of tactical recognition results (1−30 s)

indicate that the enemy is flying towards our left side and our front side, respectively; the enemy No. 2 fighter can be analyzed in the same way. Finally, the tactical recognition result of the beyond-visual-range two-aircraft cooperative formation is shown in Fig. 9. It can be seen that during 0−5 s the enemy leader and wingman both fly straight toward us and the three aircraft are at the same altitude; at this time the formation intention is not obvious, matching a preparatory posture for a side attack or a pincer attack. During 5−10 s the recognition result for the leader is circling to the right with the wingman circling along with it, and the probability of the side-attack tactic increases. During 10−13 s, although the leader and wingman have entered circling maneuvers, the azimuth and entry angles keep changing so that the lead aircraft flies straight toward us, and the conditions for a head-on attack are met. During 13−17 s the side-attack posture is restored; during 17−30 s the leader and wingman enter the intermediate stage of circling, and the tactical recognition result is stable as a horizontally spread-out formation.

6 Conclusion

This article mainly studies cooperative combat from the aspect of formation tactical threat. First, it briefly explains the relevant theories and methods involved in the model construction process; then it analyzes the characteristics of typical maneuvers in BVR air combat, constructs a maneuver recognition model based on dynamic Bayesian networks, sets the network parameters in detail, and explains the reasoning process mathematically. Further, using the maneuver recognition results as nodes, it analyzes in detail the common two-aircraft formation characteristic


strategies and their cooperative tactics beyond visual range, builds a formation tactics recognition model based on dynamic Bayesian networks, and briefly describes its reasoning process. Finally, the effectiveness of the model constructed in this paper is verified by simulation experiments.

References
1. Sun, S., Meng, C., Hou, Y., Cai, X.: Research on manned/unmanned aerial vehicle cooperative combat modes and key technologies. Aero Weaponry 28(5), 33–37 (2021)
2. Fu, L., Liu, J., Chang, F., Meng, G.: Situation assessment for beyond-visual-range air combat based on dynamic Bayesian network. In: Proceedings of the 25th Chinese Control and Decision Conference, pp. 592–595 (2013)
3. Qu, W., Xu, Z., Zhang, B., Liu, Y.: Target damage assessment method based on Bayesian network cloud model. Acta Armamentarii 37(11), 2075–2084 (2016)
4. Meng, G., Zhang, H., Piao, H., Liang, X., Zhou, M.: Fighter maneuver recognition in automated flight training evaluation. Journal of Beijing University of Aeronautics and Astronautics 46(7), 1267–1274 (2020)
5. Ni, S., Shi, Z., Xie, C., Wang, Y.: Establishment of a knowledge base for maneuver recognition of military fighters. Computer Simulation (4), 23–26 (2005)
6. Sun, H., Xie, X., Sun, T., Zhang, L.: Threat assessment method of warships formation air defense based on DBN under the condition of small sample data missing. Syst. Eng. Electron. 41(6), 1300–1308 (2019)
7. Yang, A., Li, Z., Xu, A., Xi, Z., Chang, Y.: Air combat target threat assessment based on weighted dynamic cloud Bayesian network. Flight Dynamics 38(4), 87–94 (2020). https://doi.org/10.13645/j.cnki.f.d.20200602.002
8. Meng, G., Zhou, M., Piao, H., Zhang, H.: Threat assessment method for two-aircraft formations based on cooperative tactical recognition. Systems Engineering and Electronics 42(10), 2285–2293 (2020)

Fast Intra Prediction Algorithm for AVS3 Lingfeng Fang1,2,3(&), Songlin Sun1,2,3, and Rui Liu4 1 National Engineering Laboratory for Mobile Network Security, Beijing University of Posts and Telecommunications, Beijing, China [email protected] 2 Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing University of Posts and Telecommunications, Beijing, China 3 School of Information and Communication Engineering, Beijing University of Posts and Telecommunication, Beijing, China 4 San Jose State University, San Jose, California 95192, USA

Abstract. With the development of 5G and multimedia technologies, applications such as ultra-high-definition video, VR and high-dynamic-range video have become more popular. This has led to an increasing amount of video data, which brings great challenges to the storage and transmission of video data. Therefore, a video encoder with better compression performance is needed to remove the temporal, spatial and statistical redundancy in the image sequences of a video and greatly compress the raw video data under the premise of ensuring image quality. The Audio Video Coding Standard Workgroup of China has formulated the AVS video coding standard, and the latest AVS3 standard is currently under research and formulation. In order to improve the compression performance of the encoder, AVS3 introduces a large number of new technologies on the basis of the AVS2 standard, resulting in a 266% increase in encoding complexity compared to AVS2. This paper optimizes the coding unit partition process and the intra mode decision process of AVS3. Results show that the proposed algorithms reduce about 46.82% and 4.49% of the intra encoding time on average, with only 1.22% and 0.01% increases in terms of BDBR. Keywords: AVS3 · DT · CU partition · Intra prediction mode

1 Introduction

With the development of ultra-high-definition video, the AVS workgroup of China is developing a new generation of video coding standard called AVS3. Compared with AVS2 and HEVC (High Efficiency Video Coding), the coding performance of AVS3 has increased by 30% and 25% on average [1], but at the cost of increased encoding complexity. Intra prediction uses the neighboring blocks that have already been coded and reconstructed to predict the current block, removing the spatial redundancy within a frame based on the spatial correlation of pixels. In recent years, fast intra prediction algorithms have mainly optimized the CU partition process and the intra mode decision process. Fan [2] proposed an early termination algorithm for the CU partition process based on texture features: a CU with a pixel variance less than a threshold T1 is classified


as a homogeneous CU, and its partition process is terminated early. In addition, the Sobel operator is used to calculate the horizontal and vertical gradients of the CU; if the ratio of the two gradients is less than a threshold T2, the CU is considered monotonous and is split directly by quadtree (QT) without processing multi-type tree (MT) partitions. Lee [3] proposed a fast CU partition algorithm based on frame complexity and adaptive depth prediction, which uses the Bayesian criterion and quadratic discriminant analysis to terminate the CU partition process early. Heindel [4] uses the reference pixels of the current prediction unit (PU) to exclude some impossible intra prediction modes in advance, reducing the number of intra modes to be traversed. Zhang [5] divides the angular prediction modes of HEVC into multiple groups, determines which group the best intra prediction mode may fall into according to the ratio of the horizontal and vertical gradients of the PU, and constructs a candidate list through three thresholds. Compared with AVS2, AVS3 has five CU partition structures, including QT, binary tree (BT) and extended quadtree (EQT) [6], which are shown in Fig. 1. When performing intra prediction, a CU is further split into PUs by the derived tree (DT) [7] partition process. AVS3 has two rounds of intra prediction mode decision-making, and the number of intra prediction modes has been expanded from 33 to 66 [8]. This paper proposes fast algorithms adapted to the new characteristics of AVS3 to optimize the CU partition process and the intra mode decision-making process. The results show that the algorithms effectively reduce the complexity of intra-frame coding while having only a small impact on the coding performance of the encoder.

Fig. 1. AVS3 CU partition structures: no split, quad-tree, horizontal BT, vertical BT, horizontal EQT, vertical EQT

2 Proposed Algorithms

2.1 Fast CU Partition Algorithm Based on the Best DT Partition

During the intra prediction and transformation process, AVS3 divides CU into PU and TU according to DT partition structure shown in Fig. 2. According to the partition direction, DT partition structures can be divided into horizontal and vertical. The vertical DT partition structures include 2NxhN, 2NxnU and 2NxnD, and the horizontal DT partition structures contain hNx2N, nLx2N and nRx2N. The direction of the DT partition can reflect the texture direction of CU [7]. During the CU partition process, no split mode is tried first, then the 5 partition modes will be tried, and the mode with the smallest RD cost will be selected as the best CU partition mode. Therefore, the prediction information obtained under no split mode prediction can be used to guide the


subsequent CU partition process. To verify that the optimal DT partition mode M of the TU selected during the no-split prediction can be used to guide CU partition, statistical experiments were conducted on the test sequences officially provided by the AVS3 workgroup. The results are shown in Fig. 3. It is found that if M is the horizontal mode 2NxhN, the probability that the finally selected optimal CU partition mode T is in the horizontal direction is 10 times that in the vertical direction; if M is the vertical mode hNx2N, the probability that T is in the vertical direction is 9 times that in the horizontal direction.


Fig. 2. DT partition structure of AVS3

Fig. 3. The distribution of the optimal CU partition

According to the statistical results, this paper proposes a fast CU partition algorithm based on the best DT partition mode selected under no split prediction, and the algorithm flow chart is shown in Fig. 4. For a CU to be coded, the no split mode is


performed first and the optimal DT partition mode M of TU is obtained. If M is mode 2NxhN, it means the texture direction of CU is horizontal, and the subsequent attempts for VEQT and VBT are skipped. If M is mode hNx2N, it means the texture direction of CU is vertical, and the subsequent attempts for HEQT and HBT are skipped.
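For illustration only, the skip rule just described can be expressed as the following sketch; the mode names are placeholders and this is not HPM-6.0 source code.

```python
# Illustrative pruning of CU partition candidates from the best DT mode M
# obtained during the no-split prediction (mode names are placeholders).
CU_PARTITIONS = {"NO_SPLIT", "QT", "HBT", "VBT", "HEQT", "VEQT"}

def prune_cu_partitions(best_dt_mode):
    candidates = set(CU_PARTITIONS)
    if best_dt_mode == "2NxhN":      # horizontal texture: skip vertical BT/EQT
        candidates -= {"VBT", "VEQT"}
    elif best_dt_mode == "hNx2N":    # vertical texture: skip horizontal BT/EQT
        candidates -= {"HBT", "HEQT"}
    return candidates
```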


Fig. 4. The flowchart of fast CU partition algorithm

2.2 Fast Intra Mode Decision-Making Algorithm Based on the Best Prediction Mode Under the 2Nx2N Partition Mode

AVS3 has 2 rounds of intra prediction mode decision-making process, which is shown in Fig. 5. Each round of mode decision-making process contains 4 sub-processes: build the most probable mode (MPM) candidate list, rough mode decision (RMD), rate distortion optimization (RDO) and chroma mode-decision making. In the first round of mode-decision making process, the intra prediction filter (IPF) [9] technology is turned off, and the DT partition technology is used to divide CU into PU for prediction. In the second round of mode-decision making process, IPF technology is turned on, and CU is directly used as PU for prediction. Finally, compare the RD cost of the two rounds of prediction to select the best intra prediction mode and decide whether to enable the IPF. During the first round of mode-decision making process, 2Nx2N mode is tried first, then the other DT partition modes will be tried in turn, and the mode with the smallest RD cost will be selected as the best DT partition mode. The direction of the intra prediction mode can reflect the texture direction of CU, therefore the best intra prediction mode selected under 2Nx2N partition mode can be used to guide the


subsequent DT partition process. Figure 6 shows the directions of 66 intra prediction modes of AVS3. Modes 0, 1 and 2 are special modes with no directivity. Mode 12 is vertical mode, and its adjacent modes are mode 43 and mode 44. Mode 24 is horizontal mode, and its adjacent modes are mode 57 and mode 58. In order to verify that the best intra prediction mode P selected under 2Nx2N mode prediction process can be used to guide DT partition, statistical experiments are conducted on the test sequence provided by AVS3 workgroup officially. The results are shown in Fig. 7. Obviously, if P is mode 12, 43 or 44, the probability that the best DT partition mode of TU is hNx2N is much greater than that of 2NxhN. If P is mode 24, 57, or 58, the probability that the best DT partition mode of TU is 2NxhN is much greater than that of hNx2N.


Fig. 5. The intra mode decision-making process for AVS3

Fig. 6. 66 intra prediction modes of AVS3


Fig. 7. The distribution of the optimal DT partition

Based on the experimental results, this paper proposes a fast intra mode decision-making algorithm, whose flow chart is shown in Fig. 8. In the first round of the intra mode decision-making process, the prediction under the 2Nx2N mode is performed first and the best intra prediction mode P is obtained. If P is mode 12, 43 or 44, the texture direction of the CU is vertical, and the horizontal DT partition attempts are skipped. If P is mode 24, 57 or 58, the texture direction of the CU is horizontal, and the vertical DT partition attempts are skipped.
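Analogously to the sketch in Sect. 2.1, the mode-guided DT pruning can be written as follows; again the names are illustrative placeholders, not encoder code.

```python
# Illustrative pruning of DT partition candidates in the first decision round,
# guided by the best intra prediction mode P found under 2Nx2N.
HORIZONTAL_DT = {"2NxhN", "2NxnU", "2NxnD"}
VERTICAL_DT = {"hNx2N", "nLx2N", "nRx2N"}
VERTICAL_MODES = {12, 43, 44}      # vertical mode and its neighbours
HORIZONTAL_MODES = {24, 57, 58}    # horizontal mode and its neighbours

def prune_dt_partitions(best_mode_p):
    candidates = {"2Nx2N"} | HORIZONTAL_DT | VERTICAL_DT
    if best_mode_p in VERTICAL_MODES:        # vertical texture: skip horizontal DT
        candidates -= HORIZONTAL_DT
    elif best_mode_p in HORIZONTAL_MODES:    # horizontal texture: skip vertical DT
        candidates -= VERTICAL_DT
    return candidates
```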


Fig. 8. The flow chart of fast intra mode decision-making algorithm


3 Experimental Results

The performance of the algorithms is evaluated on the AVS3 reference software HPM-6.0. The 12 test sequences used in the experiment are the general test sequences officially provided by the AVS3 workgroup. The detailed information of each sequence is shown in Table 1.

Table 1. Detailed information of the AVS3 test sequences

Resolution            Sequence          Frame rate (frames/s)   Bit depth (bit)   Frame number
4K (3840 × 2160)      ParkRunning3      50                      10                500
                      Tango2            60                      10                294
                      Campfire          30                      10                300
                      DaylightRoad2     60                      10                600
1080p (1920 × 1080)   BasketballDrive   50                      8                 501
                      Cactus            50                      8                 500
                      MarketPlace       60                      10                600
                      RitualDance       60                      10                600
720p (1280 × 720)     Crew              60                      8                 600
                      City              60                      8                 900
                      vidyo1            60                      8                 600
                      vidyo3            60                      8                 600

The experimental configuration follows the common test condition of AVS3 [10]. The quantization parameter (QP) is set to 27, 32, 38 and 45, respectively, and the encoding mode is set to all-intra. BDBR [11] is used to evaluate the coding performance of the encoder, and the encoding time saving (TS), defined in formula (1), is used to evaluate the change in encoder complexity. Table 2 shows the experimental results of the fast CU partition algorithm. Compared with HPM-6.0, the proposed algorithm achieves 46.82% time saving with only a 1.22% BDBR-Y increase.

TS = \frac{T_{original} - T_{proposed}}{T_{original}}    (1)


Table 2. Experimental results of the fast CU partition algorithm

Sequence          BDBR-Y (%)   BDBR-U (%)   BDBR-V (%)   TS (%)
ParkRunning3      0.46         0.43         0.47         31.19
Tango2            1.05         1.27         1.95         34.68
Campfire          1.27         2.24         2.45         45.85
DaylightRoad2     1.67         2.92         2.62         46.91
BasketballDrive   1.18         2.34         1.70         51.31
Cactus            1.08         1.52         1.97         50.08
MarketPlace       0.85         1.07         1.43         40.85
RitualDance       1.52         2.80         3.22         52.17
Crew              1.52         6.58         3.02         51.66
City              0.98         3.12         3.16         53.19
vidyo1            1.67         1.28         2.16         52.92
vidyo3            1.34         2.22         4.04         50.99
Average           1.22         2.32         2.35         46.82

Table 3 shows the experimental results of the fast intra mode decision-making algorithm. Compared with HPM-6.0, this proposed algorithm achieves 4.49% time saving with 0.01% BDBR-Y increase, which means the impact of this algorithm on the encoding performance of the encoder is negligible.

Table 3. Experimental results of the fast intra mode decision-making algorithm

Sequence          BDBR-Y (%)   BDBR-U (%)   BDBR-V (%)   TS (%)
ParkRunning3      0.06         0.00         0.05         4.89
Tango2            −0.07        −0.37        −0.29        2.84
Campfire          −0.05        −0.13        0.28         4.61
DaylightRoad2     0.01         −0.14        −0.10        5.23
BasketballDrive   0.21         0.16         −0.14        4.12
Cactus            0.04         −0.19        0.26         3.37
MarketPlace       0.02         −0.31        −0.29        3.14
RitualDance       −0.02        0.02         0.35         4.57
Crew              0.08         1.22         1.60         6.39
City              0.06         0.14         0.55         3.47
vidyo1            −0.12        −1.51        0.57         4.75
vidyo3            −0.17        −0.46        −0.57        6.51
Average           0.01         −0.13        0.19         4.49


4 Conclusion

In this paper, we propose a fast CU partition algorithm and a fast intra mode decision-making algorithm. For the fast CU partition algorithm, we verified through statistical experiments that the direction of the DT partition mode can reflect the texture direction of the CU. Based on this conclusion, we use the optimal DT partition mode obtained under the no-split prediction to skip the prediction calculations for the vertical or horizontal BT/EQT partitions. For the fast intra mode decision-making algorithm, we found that the direction of the best intra prediction mode can reflect the texture direction of the CU, so the best intra prediction mode selected under the 2Nx2N mode is used to guide the DT partition process. The two proposed algorithms effectively reduce the intra-frame coding complexity with little impact on the encoding performance of the encoder.

References 1. Li, J.C.J.: The software maintenance report of HPM-9.0 and HPM-9.1, AVS M5991 (2020) 2. Fan, Y., Chen, J., Sun, H., et al.: A fast QTMT partition decision strategy for VVC intra prediction, (99), 1 (2020) 3. Lee, D., Jeong, J.: Fast intra coding unit decision for high efficiency video coding based on statistical information. Signal Process. Image Commun. 55, 121–129 (2017) 4. Heindel, A., Pylinski, C., Kaup, A.: Two-stage exclusion of angular intra prediction modes for fast mode decision in HEVC. In: IEEE International Conference on Image Processing (2016) 5. Zhang, T., Sun, M.T., Zhao, D., et al.: Fast intra-mode and CU size decision for HEVC (2017) 6. Wang, J.L.M.: Extended quad-tree partitions, AVS M4507 (2018) 7. Wang, B.N.L.: Derived tree mode, AVS4540 (2018) 8. Lei, F.L.M.: AVS3 intra prediction mode extension, AVS M4780 (2019) 9. Xu, K.F.G., Wang, R.: Intra dual prediction method, AVS M4609 (2018) 10. Chen, J.: AVS3 common test condition, AVS N2983 (2021) 11. Bjontegaard, G.: Calculation of average PSNR differences between RD-curves (2001)

Big Data Workshop

Research on Interference Source Identification and Location Based on Big Data and Deep Learning Hui Pan(&), ChengFan Zhang, Saibin Yao, JiuCheng Huang, YiDing Chen, and Long Zhang Network Optimization Center, China Unicom Shanghai Branch, Shanghai, China [email protected]

Abstract. This paper is mainly based on the multi-dimensional evaluation of base station interference sources by China Unicom Shanghai Branch. Based on value mining of multiple data sources such as KPI/XDR/MDT, it closely integrates the traditional interference source investigation process with big data and machine learning technology. Combined with a BPNN neural network, the interference source and interference type can be identified intelligently, and by using the BFL boundary-point fitting algorithm, interference sources can be detected, classified and given simulated locations. An overall interference source optimization scheme is also given. Keywords: Interference source · MDT · Troubleshoot · Classification · Simulation location

1 Introduction

China Unicom Shanghai Branch has built interference source detection, classification and simulated localization solutions based on multiple data sources and big data/artificial intelligence technology. The solutions focus on the difficulty, high cost and slow pace of interference source troubleshooting, and a corresponding set of interference source solutions is given based on the actual results. This work is based on the interference optimization requirements obtained through repeated communication with back-end network optimization staff and on-site interference optimization personnel, and on a continuous investigation of existing interference optimization solutions and the latest interference source identification and localization algorithms in each field. Finally, according to a summary and induction of the business, an automatic interference module is designed to assist network optimizers in identifying the interference type and locating the interference source of an interfered cell.


2 Programme Background

Interference optimization is a key link and a long-term demand of LTE network optimization, and it is also the key to improving network quality and ensuring service perception. On the one hand, due to radio management, frequency planning and other legacy issues, the LTE uplink has long faced interference from outside the system. On the other hand, the number of 4G users has soared, and internal interference caused by the network structure and other reasons has gradually emerged. Therefore, a set of tools is needed to assist optimizers, on the basis of big data, in interference identification and in locating interference sources.

Interference type identification: LTE network interference is mainly divided into intra-system interference and out-of-system interference, and a secondary classification can be made according to the cause of the interference. It is hoped that the identification of interference types can be realized with the help of big data, so as to help optimizers carry out interference optimization. The specific interference types are shown in Fig. 1 below: intra-system interference includes GPS clock unlock, incorrect frame offset configuration, inconsistent subframe ratio, over-distance TDD, overlapping coverage and hardware malfunction; out-of-system interference includes spurious emission, barrage jamming, intermodulation, harmonic disturbance, radio and TV MMDS, pseudo base station, narrow-band shielding and broad-band shielding.

Fig. 1. Interference type classifications

Interference source localization: interference source localization is mainly aimed at interference external to the system. The goal is to find interfered cells that cluster together and, at the same time, output the approximate location of the interference source, so as to help field interference optimizers carry out interference screening. Output of interference optimization data and solutions: according to the results of interference type identification and interference source positioning, recommended optimization solutions are output for different scenarios. For example, the data of interfered cells can be presented from various dimensions, or the number of


cells affected by the same interference source can be sorted and priority recommendations for interference optimization given.

3 Algorithm Design

3.1 Interference Type Recognition Algorithm

At present, LTE interference is mainly uplink interference. After screening the PRB uplink interference of all cells of the entire network in different scenarios, a full set of interfered cells is first obtained. Then, according to the known types of internal interference, internal interference is filtered out first. After filtering out the common types of internal interference, the PRB spectra of the remaining cells are compared with those of known interference sources, so that cells with known external interference types can be found [1]. At the same time, density clustering is carried out on the remaining cells to find interfered cells with clustering characteristics in the regional dimension [2]. After a cluster of interfered cells is found, it is necessary to find, within the cluster, the set of cells sharing the same interference source. Similarity analysis in the frequency domain and the time domain is carried out for the cells within the cluster, looking for cells whose interfered waveforms have the same amplitude in the frequency domain and whose interference occurs in the same time period. If the number of cells in such a set is greater than one, they are considered to be cells with the same external interference source [3]. The final interference type identification process is shown in Fig. 2.
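Two of the steps above, spectrum matching against known interference sources and density clustering of the remaining interfered cells, can be illustrated with the sketch below. DBSCAN and Pearson correlation are used here as stand-ins, and all thresholds, feature shapes and function names are assumptions rather than the production parameters.

```python
# Illustrative sketch of (a) matching a cell's PRB interference spectrum against
# known external-interference templates and (b) density-clustering the remaining
# interfered cells by geographic position.
import numpy as np
from sklearn.cluster import DBSCAN

def match_known_source(prb_spectrum, templates, min_corr=0.8):
    """Return the best-matching known interference type, or None if no template
    correlates above min_corr."""
    best_type, best_corr = None, min_corr
    for name, template in templates.items():
        corr = np.corrcoef(prb_spectrum, template)[0, 1]
        if corr > best_corr:
            best_type, best_corr = name, corr
    return best_type

def cluster_cells(lat_lon_deg, eps_km=1.0, min_cells=3):
    """Group interfered cells into spatial clusters; label -1 marks noise points."""
    coords = np.radians(np.asarray(lat_lon_deg))       # (lat, lon) pairs in radians
    return DBSCAN(eps=eps_km / 6371.0, min_samples=min_cells,
                  metric="haversine", algorithm="ball_tree").fit_predict(coords)
```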


Fig. 2. Flow chart of interference type identification

In addition to the identification of interference types based on the traditional process of interference detection and identification, the BPNN neural network is also introduced for auxiliary type judgment.


Firstly, multiple data sources are associated and the data are processed into time-series data in the cell dimension; 7-day rolling data are then used to build and continuously update the model. The model used is a BPNN neural network, which is suitable for processing time-series data; it performs unsupervised anomaly detection and thereby establishes the anomaly detection model. Then, according to historical troubleshooting results, a supervised classifier model is trained on the interference types to establish the classifier model. After the models are established, the anomaly detection model is first used to detect base stations in an abnormal state. The classifier model is then used to classify and predict the base stations in abnormal states and to filter out the list of base stations with their interference categories, which is the result we need.
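A heavily simplified sketch of this two-stage idea is shown below; scikit-learn MLP models stand in for the BPNN, and the feature layout, threshold and labels are illustrative assumptions only.

```python
# Illustrative two-stage pipeline: flag cells whose hourly PRB interference series
# looks abnormal, then classify the interference type of the flagged cells.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

def fit_anomaly_detector(normal_series):
    """Autoencoder-style detector trained on series assumed to be normal."""
    ae = MLPRegressor(hidden_layer_sizes=(16, 4, 16), max_iter=3000, random_state=0)
    ae.fit(normal_series, normal_series)
    err = ((ae.predict(normal_series) - normal_series) ** 2).mean(axis=1)
    return ae, err.mean() + 3 * err.std()        # reconstruction-error threshold

def detect_and_classify(ae, threshold, clf, series):
    err = ((ae.predict(series) - series) ** 2).mean(axis=1)
    labels = np.full(len(series), "normal", dtype=object)
    abnormal = err > threshold
    if abnormal.any():
        labels[abnormal] = clf.predict(series[abnormal])
    return labels

# clf is a supervised type classifier fitted on historically troubleshot cells, e.g.
# clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X_hist, y_types)
```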

3.2 External Interference Source Localization Algorithm

Based on the interference type recognition algorithm, the set of all external-interference cells is taken out first. Under ideal circumstances, external interference sources are mainly divided into two types: omnidirectional antenna interference sources and unidirectional antenna interference sources [4]. An omnidirectional antenna interference source radiates interference in all directions: with the interference source as the center of a circle, the interference intensity decreases toward the periphery. Therefore, under the same interference source, the interfered cells show clustering characteristics, and the interference intensity decreases outward from the center. A unidirectional antenna interference source radiates sector-shaped interference: the interference source is the center of the circle, the radiation angle of the interference source is the central angle, and interference is radiated into the sector-shaped area. Therefore, under the same interference source, the interfered cells present a fan-shaped regional distribution, and the interfered intensity decreases in one direction. Once the interference characteristics of the source are basically determined, the algorithm can be designed based on these characteristics. Firstly, according to the regional distribution of all interfered cells, density clustering is carried out to find clusters [5]. Then, a similarity analysis in the time domain is performed to judge which cells share the same interference source and are true external-interference cells. According to the regional variation of the PRB uplink interference amplitude of each cell in the cluster in the frequency domain, unidirectional and omnidirectional interference cells can be distinguished, and the interference source location results can then be output (see Fig. 3).


Fig. 3. Flow chart of external interference cell location

For locating omnidirectional antenna interference sources, the CL, WCL, VFIL and BFL algorithms were compared, and the single interference source localization algorithm based on boundary point fitting (BFL) [6] was adopted. It is based on network topology nodes: the higher the node density, the more accurate the result, which makes it more suitable for finding omnidirectional antenna interference sources (Fig. 4).

Fig. 4. Simulation diagram of the location of external interference sources of omnidirectional antenna
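The details of the BFL algorithm are given in [6] and not reproduced here. As a simpler point of comparison, the sketch below shows the weighted-centroid (WCL) baseline mentioned above for an omnidirectional source; the weight definition and the example coordinates and interference levels are illustrative assumptions.

```python
# Illustrative weighted-centroid (WCL) baseline: estimate an omnidirectional
# interference source position from the interfered cells' coordinates, weighted
# by their uplink interference level (this is the baseline, not the adopted BFL).
import numpy as np

def wcl_locate(cell_lon, cell_lat, avg_prb_dbm):
    w = 10.0 ** (np.asarray(avg_prb_dbm, dtype=float) / 10.0)  # dBm -> linear weight
    w /= w.sum()
    return float(np.dot(w, cell_lon)), float(np.dot(w, cell_lat))

# Example with two interfered cells (values assumed for illustration):
lon_hat, lat_hat = wcl_locate([121.40, 121.41], [31.20, 31.21], [-114.8, -115.3])
```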

For unidirectional antenna interference sources, the location of the interference source is suggested mainly based on the regional distribution characteristics of the PRB uplink interference. In practice, this type of interference source tends to be far from the interfered cells, difficult to locate and wide in radiation range. Therefore, when using the geographical distribution characteristics of the PRB uplink interference, the recommendations are mainly based on the attenuation direction of the PRB uplink interference: the direction in which the interference source may exist and the regional beam characteristics of the interfered cells' PRB uplink interference are given, together with the beam angle of the predicted interference source position (see Fig. 5 below).


Fig. 5. Simulation diagram of external interference source location of single-wire antenna

4 Output Result

4.1 External Interference Source Location Algorithm

Output the full list of interfered cells. Double-click the interfering cell to render the cell situation on the map and render reference data in the time domain/frequency domain (Table 1).

Table 1. List of interfered cells

Downlink frequency   Base station ID   Cell ID   Internal/external interference   Type of interference                   Total upstream and downstream traffic (GB)   Average PRB
1650                 48716             21        Internal interference            GPS lost lock                          3.74                                          −118.9
1650                 53088             31        Internal interference            Improper frame offset configuration    7.72                                          −115.1
450                  47508             11        Internal interference            Hardware failure interference          0.04                                          −119
450                  40716             22        Internal interference            Improper frame offset configuration    51.12                                         −113.9
3770                 953659            38        External interference            Omnidirectional antenna interference   0.73                                          −114.8
3770                 725281            38        External interference            Omnidirectional antenna interference   0.69                                          −115.3

4.2 List of Interference Sources

Output a full list of interference sources. Double-click the list to render the source and the disturbed cell on the map (Table 2).

Table 2. List of interference sources

Cluster ID   Number of affected cells   Influence cell enbid   Type of interference source        Interference source longitude   Interference source latitude   Interference radius (m)   Interference source direction
123          2                          53088;7211             Single wire antenna interference   121.13                          31.24                          422                       39
2            3                          53088;7211             Full antenna interference          121.14                          31.25                          333                       180
2            3                          53088;7211             Single wire antenna interference   121.17                          31.28                          663                       123
213          2                          53088;7211             Single wire antenna interference   121.18                          31.29                          412                       341
70           8                          53088;7211             Full antenna interference          121.19                          31.3                           321                       211
33           6                          53088;7211             Single wire antenna interference   121.2                           31.31                          631                       33

4.3 Uplink Interference Data

Refer to the table below for the uplink interference data. This table directly provides cell-level uplink PRB data for reference and download (Table 3).

Table 3. Uplink interference data

enbID    cell ID   Average PRB   PRB0     PRB1     PRB2     PRB3     PRB4
48716    21        −118.9        −117.9   117.9    −117.9   −117.9   −118
53088    31        −115.1        −112.2   −114.3   −114.3   −108.7   −113
47508    11        −119          −118     −118     −118     −118     −118
40716    22        −113.9        −114     −114.5   −113.9   −112     −113.5
52997    21        −108          −113.4   −113.6   −114.1   −106.5   −111
48771    23        −116.5        −117.6   −117.6   −117.9   −117.9   −118
725271   38        −115.3        −113.1   −115     −115.3   −115.5   −115.6

4.4 Page Data Rendering Results

After the basic data are output, in order to facilitate use by the relevant staff, not only is a direct export and download function for the interference data provided, but a data rendering function is also provided on the integrated network optimization platform. In the automatic interference module, the interference data can be filtered by conditions such as time and frequency, and the three-dimensional diagnostic data of the interfering base station can then be rendered on the map, including a 100PRB*24H interference waterfall chart, an hourly per-cell 100RB curve chart, an hourly interference mean value graph and an overall waveform graph. At the same time, at the bottom of the updated interfered cell list, the relevant cell can be double-clicked to jump to that cell, and the 3D diagnostic analysis rendering is updated with that cell's data. The various rendered images, combined with frequency-domain, time-domain and geographic data, complete the three-dimensional diagnostic analysis, helping staff evaluate the interference situation of interfered cells in multiple dimensions, so that interference optimization work orders can be issued more quickly and the best solution provided to the on-site optimizer. The page rendering is shown in Fig. 6 below.

Fig. 6. The page rendering

5 Conclusions

Using big data to assist and upgrade traditional network optimization work is an inevitable requirement of building an information society. Among these efforts, the identification and positioning of interference sources, once put into use, is expected to reduce the workload and labor costs of the relevant personnel in many respects and to improve the efficiency and accuracy of interference source investigation. By arming every network optimization staff member with big data weapons, we believe that network optimization work can be completed better in the more complex and changeable network environment of the 5G future.


References 1. Gao, J., et al.: An interference management algorithm using big data analytics in LTE cellular networks. In: 16th IEEE International Symposium on Communications and Information Technologies, pp. 246–251. IEEE Press, Qingdao (2016) 2. Gao, J., et al.: A coverage self-optimization algorithm using big data analytics in WCDMA cellular networks. In: 1st International Conference on Signal and Information Processing, Networking and Computers, pp. 27–46. CRC Press Taylor & Francis Group, Beijing (2015) 3. Xu, L., et al.: Telecom big data assisted BS resource analysis for LTE/5G systems. In: 18th IEEE International Conference on Ubiquitous Computing and Communications, pp. 81–88. IEEE Press, Shenyang (2019) 4. Xu, L., et al.: User perception aware telecom data mining and network management for LTE/LTE-advanced networks. In: Sun, S. (ed.) ICSINC 2018. LNEE, vol. 494, pp. 237–245. Springer, Singapore (2019). https://doi.org/10.1007/978-981-13-1733-0_29 5. Xu, L., et al.: Architecture and technology of multi-source heterogeneous data system for telecom operator. In: Wang, Y., Xu, L., Yan, Y., Zou, J. (eds.) Signal and Information Processing, Networking and Computers. LNEE, vol. 677, pp. 1000–1009. Springer, Singapore (2021). https://doi.org/10.1007/978-981-33-4102-9_120 6. Eiza, M., Cao, Y., Xu, L.: Toward Sustainable and Economic Smart Mobility: Shaping the Future of Smart Cities, 1st edn. World Scientific, London (2020) 7. Zhu, C., Cheng, X., et al.: A novel base station analysis scheme based on telecom big data. In: 20th International Conference on High Performance Computing and Communications, pp. 1076–1081. IEEE Press, Exeter (2018) 8. Liu, Y., et al.: A novel power control mechanism based on interference estimation in LTE cellular networks. In: 16th IEEE International Symposium on Communications and Information Technologies, pp. 397–401. IEEE Press, Qingdao (2016)

Research on Automated Test and Analysis System for 5G VoNR Service Zhenqiao Zhao1(&), Xiqing Liu1, Jie Miao2, Shiyu Zhou1, Xinzhou Cheng1, Lexi Xu1, Xin He1, and Shuohan Liu3 1

Research Institute, China United Network Communications Corporation, Beijing, China {zhaozq99,liuxq94,zhousy60,chengxz11, xulx29,hex39}@chinaunicom.cn 2 China United Network Communications Corporation, Beijing, China [email protected] 3 School of Computing and Communications, Lancaster University, Lancaster, UK [email protected]

Abstract. Automated testing technology can effectively save human resources, material resources and time costs. The introduction of automated testing technology into the communication network testing can achieve accurate testing of designated networks and designated functions. To ensure the user perception of the 5G VoNR service, an inherent advantage of the operator, under the condition of rapid iteration and frequent upgrades of the 5G cloud core network, an automated test and analysis system for 5G VoNR service is studied in this article. This system has three primary functions: simulation dial tests, quality evaluation and automatic early warning. Finally, the application results of this system in the live network are introduced, which proves its practicability. Keywords: 5G VoNR  Automated testing technology  Performance analysis

1 Introduction
With the freezing of 3GPP R16, the research and commercialization of 5G have achieved initial success. As an inherent advantage service of operators, voice service plays a decisive role in user experience. Its service perception requirements are highly stringent, such as an end-to-end delay of less than 5 s and a user drop rate of no more than 1%. Voice over New Radio (VoNR) is an essential solution for operators' voice services in the 5G era, so ensuring user perception of the VoNR service remains one of the paramount issues for operators. There is an urgent need to achieve VoNR service performance monitoring, quality evaluation and prediction in advance. However, this task currently faces many challenges. The 5G network is complex and changes frequently [1]. Therefore, it is difficult to accurately determine the root cause of a problem in a short time, and many dial tests have to be conducted frequently, which consumes a lot of human resources and time. Meanwhile, 5G adopts a northbound-interface network management solution [2]. The network management



passively receives and aggregates network element indicators. Indicator statistics therefore lag behind, which prevents early warning of VoNR service performance. Automated testing technology transforms the traditional manual testing method into testing with computer programs or other testing tools, which has three advantages over manual testing [3]. First, it can perform test tasks quickly and automatically. Second, it has strong reusability and can significantly improve test efficiency, accelerate function upgrades, reduce the workload of testers and the cost of equipment resources, avoid human error and improve the accuracy of test results [4]. Third, it can realize traversal testing of all scenarios and can simulate the concurrent operation of a large number of users at the same time to test the stress resistance of the network. To solve the difficulties faced by VoNR service guarantee, a 5G VoNR service automated test and analysis system is developed in this paper.

2 Key Technologies of 5G VoNR Solutions
2.1 5G Voice Solutions Evolution

For 5G networks, two network architectures were proposed in 3GPP R15: Non-Standalone (NSA) and Standalone (SA), and two solutions were suggested for 5G voice services: Evolved Packet System Fallback (EPS FB) and VoNR [5, 6]. In the early stage of 5G network construction, the NSA architecture was adopted to upgrade the original Fourth Generation Mobile Communication System (4G) Evolved Packet Core (EPC) architecture. In SA mode, the 5G Core Network (5GC) is independent, and EPS FB is adopted as the initial transitional scheme for voice service. As the SA network continues to receive functional upgrades and expands its coverage, voice services can be realized entirely on the 5G network, that is, the VoNR solution.

2.2 5G VoNR Solution

This section mainly introduces the signalling process of the 5G VoNR solution, which can be summarized in five steps: 5G initial registration, IMS domain registration, voice establishment, IMS domain bearer establishment, and resource release after the call is over. Taking the Mobile Origination Call (MO) as an example, the specific process is as follows:
5G Initial Registration. After the UE is powered on, it needs to attach to the 5G network to obtain network services. The UE exchanges information with the network. The network performs identity verification and access control on the UE and performs user context acquisition, authentication, control policy issuance, etc. Among these parameters, the critical parameter "Whether to support IMS voice through PS session" determines whether terminal voice can be realized by the VoNR solution.
IMS Domain Registration. The purpose of IMS domain registration is to authenticate and authorize the UE to the IMS network before the UE uses the IMS service.
Voice Establishment. a) The UE sends an INVITE request to the IMS domain through the 5G New Generation Radio Access Network (NG RAN) [7] to initiate the


calling IMS voice session, and the IMS system triggers the QoS establishment process. b) 5GC sends Protocol Data Unit (PDU) session modification request to NG RAN. c) NG RAN comprehensively considers UE capabilities, network support for VoNR services, current wireless conditions and other factors, decides to provide VoNR services for UEs, accepts PDU session modification requests, and reconfigures user plane information. d) NG RAN returns a PDU session modification success response message and informs the 5GC domain and IMS domain to establish the corresponding QoS flow/bearer. e) The call continues and is sent to the IMS domain. IMS Domain Bearer Establishment. After the voice QoS Flow is established, the IMS domain bearer establishment process starts. Resources Release After the Call is Completed. After receiving the MO’s on-hook request, the called user releases the called bearer, sends a response message to the calling side, and the calling side releases the calling bearer. Up to this point, a call is completed and the entire 5G VoNR service signalling process ends.

3 Automated Test and Analysis System for 5G VoNR Service
This section studies the 5G VoNR service performance automated test and analysis system in detail. The system realizes three functions: simulation dial tests, quality evaluation and automatic early warning. It mainly selects relevant indicators covering the accessibility, retention and integrity of voice services for testing and service perception analysis. The parameters involved in this system are defined as follows.
Successful call: after the MO sends the INVITE message, the MO receives the 180 response sent by NG RAN within 12 s. Call setup delay: the time difference between the MO sending the INVITE message and the MO receiving the 180 response message sent by NG RAN. Dropped call: after the MO and MT start to talk, one of them receives a SIP BYE message with reason value 503 within the time specified by the system. Call establishment: the MO receives the 200 (INVITE) message sent by NG RAN.
$CC_{total}$, $CC_{test}$, $CC_{callsuccess}$, $CC_{callestablish}$ and $CC_{calldrop}$ represent the number of total calls, tested calls, successful calls, established calls and dropped calls, respectively. $T_e$, $T_u$ and $T_d$ represent the call establishment time, the set call duration and the real call duration.
The call success rate is the ratio of the number of successful calls to the total number of calls, expressed as $R_{callsuccess}$ and calculated as (1):

$$R_{callsuccess} = CC_{callsuccess} / CC_{total} \qquad (1)$$

The average call setup delay is the ratio of the sum of call setup delays to the number of successful calls, expressed as $T$ and calculated as (2):

$$T = T_{total} / CC_{callsuccess} \qquad (2)$$

where $T_{total}$ represents the sum of the call setup delays of all successful calls.
The call drop rate is the ratio of the number of dropped calls to the number of established calls, expressed as $R_{calldrop}$ and calculated as (3):

$$R_{calldrop} = CC_{calldrop} / CC_{callestablish} \qquad (3)$$
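For illustration, a minimal sketch of how these three indicators could be computed from a batch of dial-test records is given below; the record layout and field names are assumptions made here for the example, not part of the system described above.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DialTestRecord:
    """One simulated VoNR call attempt (hypothetical record layout)."""
    got_180_within_12s: bool        # successful call per the definition above
    setup_delay_s: Optional[float]  # INVITE -> 180 delay, None if no 180 received
    established: bool               # MO received 200 (INVITE)
    dropped: bool                   # SIP BYE with reason value 503 during the call

def vonr_kpis(records: List[DialTestRecord]):
    cc_total = len(records)
    cc_success = sum(r.got_180_within_12s for r in records)
    cc_establish = sum(r.established for r in records)
    cc_drop = sum(r.dropped for r in records)

    call_success_rate = cc_success / cc_total if cc_total else 0.0       # Eq. (1)
    delays = [r.setup_delay_s for r in records if r.got_180_within_12s]
    avg_setup_delay = sum(delays) / cc_success if cc_success else 0.0    # Eq. (2)
    call_drop_rate = cc_drop / cc_establish if cc_establish else 0.0     # Eq. (3)
    return call_success_rate, avg_setup_delay, call_drop_rate
```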

Fig. 1. The system diagram of the automated test and analysis system (the simulation dial-and-test module, quality assessment module and automatic early warning module, with the emulated UE and emulated NG RAN connecting to the AMF, UPF and IMS of the actual 5GC network)

As shown in Fig. 1, the system includes three modules: the simulation dial-and-test module, the quality assessment module and the automatic early warning module. The function of each module and its implementation are described in the following sections. The previous module provides input data for the next module.

3.1 Simulation Dial-and-Test Module

The simulation dial-and-test module simulates the functions of the terminal and the base station and connects to the AMF and UPF network elements of the actual network. It can run uninterruptedly to perform dial test tasks and complete the automatic test of the VoNR service of the actual network.

Fig. 2. The workflow of the simulation dial-and-test module (set initial parameters; simulate UE and NG RAN and start the test; count call successes and setup delay; count call drops; if the test is incomplete, repeat; otherwise export data and restore the system)

The workflow of this module is shown in Fig. 2 and the algorithm implementation is as follows:
Set the Initial Parameters of the Dial Test Module. According to the needs of the test task, set $CC_{total}$, $T_u$, $CC_{test} = 0$, $T_e = 0$ and $T_d = 0$.


Perform Automatic Dial Test Tasks.
a) The simulation dial-test module simulates the VoNR service functions of the UE and NG RAN, connects to the AMF and UPF network elements of the existing network through protocol-stack simulation, and uninterruptedly performs automatic dial tests to launch the 5G VoNR service. When the MO sends an INVITE message, the timer $T_e$ starts.
b) Determine whether the call is successful: if the MO receives a 180 response message from NG RAN within 12 s, the call is considered successful, $CC_{callsuccess}$ is increased by 1, $T_e$ is stopped and its value is recorded, and a call success record is output. Otherwise, go to step f).
c) Determine whether the MT responds: if the MO receives a 200 (INVITE) message, the MT is considered to have responded; go to step d). Otherwise, go to step f).
d) $CC_{callestablish}$ is increased by 1 and the timer $T_d$ starts. When $T_d$ reaches the time specified by the timer $T_u$, the MT hangs up and the resources are released.
e) Determine whether the call is dropped: if the MO/MT receives a SIP BYE message with a reason value of 503 within the time specified by $T_u$, it is recorded as a call drop, $CC_{calldrop}$ is increased by 1, and a call drop record is output. Otherwise, go to step f).
f) Determine whether the set dial test task is completed: if $CC_{test}$ is not equal to $CC_{total} - 1$, $CC_{test}$ is increased by 1, $T_e$ and $T_d$ are reset to 0, and the procedure returns to step a). Otherwise, all data are output, all timers and counters are cleared, and resources are released.
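A compact sketch of the counting logic in steps a)–f) is given below; the `simulate_call()` helper and its result fields are hypothetical placeholders for the emulated UE/NG RAN call procedure.

```python
def run_dial_tests(cc_total: int, simulate_call):
    """Run cc_total simulated calls and count successes, delays and drops (steps a-f)."""
    cc_success = cc_establish = cc_drop = 0
    setup_delays = []
    for _ in range(cc_total):                 # step f): loop until CC_test reaches CC_total
        result = simulate_call()              # step a): emulated UE/NG RAN places one call
        if result.got_180_within_12s:         # step b): 180 received within 12 s
            cc_success += 1
            setup_delays.append(result.setup_delay_s)
        else:
            continue
        if result.got_200_invite:             # steps c)-d): MT answered, hold for T_u
            cc_establish += 1
            if result.dropped_with_503:       # step e): SIP BYE with reason 503
                cc_drop += 1
    return cc_success, cc_establish, cc_drop, setup_delays
```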

3.2 Quality Assessment Module

In this module, based on the network test results obtained from the simulation dial-and-test module, an evaluation model is established on three typical indexes, $R_{callsuccess}$, $T$ and $R_{calldrop}$, to evaluate the service quality under test. The comprehensive score of the tested network's 5G VoNR service performance is denoted as $Q$ and calculated as (4):

$$Q = aC_x + bC_y + cC_z \qquad (4)$$

where $a + b + c = 100\%$; $a$, $b$ and $c$ can be set and default to 50%, 20% and 30%, respectively. $C_x$, $C_y$ and $C_z$ respectively represent the scores of $R_{callsuccess}$, $T$ and $R_{calldrop}$, and the scoring rules are shown in (5), (6) and (7). All values in (5), (6) and (7) can be modified as needed.

$$C_x = \begin{cases} 100, & R_{callsuccess} \ge 98\% \\ 60, & 93\% \le R_{callsuccess} < 98\% \\ 0, & R_{callsuccess} < 93\% \end{cases} \qquad (5)$$

$$C_y = \begin{cases} 100, & T \le 4\,\mathrm{s} \\ 60, & 4\,\mathrm{s} < T \le 8\,\mathrm{s} \\ 0, & T > 8\,\mathrm{s} \end{cases} \qquad (6)$$

$$C_z = \begin{cases} 100, & R_{calldrop} \le 1\% \\ 60, & 1\% < R_{calldrop} \le 3\% \\ 0, & R_{calldrop} > 3\% \end{cases} \qquad (7)$$

If the comprehensive score Q of the tested network is not less than 90, the service quality of the network will be considered excellent; if Q is less than 90 and not less than 60, the service quality of the network will be considered qualified; if Q is less than 60, the service quality of the network will be considered poor.
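The scoring rules above can be summarized in a short sketch; the function below is an illustrative rendering of Eqs. (4)–(7) with the default weights, not the system's actual implementation.

```python
def vonr_quality_score(r_success, avg_delay_s, r_drop, a=0.5, b=0.2, c=0.3):
    """Comprehensive score Q per Eqs. (4)-(7); rates are fractions (0.98 means 98%)."""
    cx = 100 if r_success >= 0.98 else 60 if r_success >= 0.93 else 0    # Eq. (5)
    cy = 100 if avg_delay_s <= 4 else 60 if avg_delay_s <= 8 else 0      # Eq. (6)
    cz = 100 if r_drop <= 0.01 else 60 if r_drop <= 0.03 else 0          # Eq. (7)
    q = a * cx + b * cy + c * cz                                          # Eq. (4)
    grade = "excellent" if q >= 90 else "qualified" if q >= 60 else "poor"
    return q, grade
```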

3.3 Automatic Early Warning Module

This module stores a static table of contact information for the operation and maintenance personnel of different network interfaces, and multiple alarm conditions can be set according to different business or task needs. Once the service in a certain area becomes abnormal enough to meet an alarm condition, this module automatically generates an alarm message and sends it to the corresponding operation and maintenance personnel. Two alarm scenarios can be set: one is when the comprehensive VoNR service performance score Q in a certain area is less than the set value; the other is when the score of a single index of the network is lower than the specified score, for example, when the call success rate score $C_x$ is lower than a set value. Through this module, automatic real-time monitoring of VoNR service quality is realized, the effort of operation and maintenance personnel is saved, and it becomes easier for them to follow up on the service guarantee situation. When alarm information appears, process analysis can be carried out with the help of the dial-test records and signalling information stored in the test system to realize fault determination and delimitation analysis. Compared with the current practice on the live network, this module does not require a second dial test after a user complaint to trace the source of the problem, which is more rapid, effective and simple.
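A minimal sketch of the two configurable alarm scenarios is given below; the area names, threshold values and message format are illustrative assumptions, not the live system's configuration.

```python
def check_alarms(area, q, index_scores, q_threshold=60, index_thresholds=None):
    """Return alarm messages when Q or any single-index score falls below its threshold."""
    index_thresholds = index_thresholds or {}
    alarms = []
    if q < q_threshold:                              # scenario 1: overall score too low
        alarms.append(f"[{area}] VoNR comprehensive score Q={q:.1f} below {q_threshold}")
    for name, score in index_scores.items():         # scenario 2: a single index too low
        limit = index_thresholds.get(name)
        if limit is not None and score < limit:
            alarms.append(f"[{area}] index {name} score {score} below {limit}")
    return alarms
```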

4 5GC Network Capability Application Solution
The deployment plan of this system on the live network is shown in Fig. 3. A simulation test execution machine is deployed in each subarea of the network to be tested; it simulates the functions of the UE and NG RAN to interface with the AMF and UPF network elements of the 5G core network. A test analysis control platform deployed in the control center can connect to and manage all test execution machines through the VPN transmission network, which is convenient for system management, maintenance and upgrade. The system can be applied to various scenarios where VoNR service quality needs to be tested, such as service guarantee during network updates and upgrades, daily inspections, quality assessment, and network fault delimitation and location analysis. It can also output data and results through the capability opening interface to support other analysis platforms. The following briefly introduces the first two application scenarios of this system.

4.1 VoNR Service Guarantee During Network Updates

Fig. 3. Deployment architecture of the automated test and analysis system (the test analysis control platform, with test task management, test data storage, test data analysis, automatic early warning management and capability open modules, manages simulation test execution machines A–D, each interfacing with the AMF and UPF of networks A–D)

5G network functions are upgraded and iterated frequently, and each functional upgrade requires functional testing of the core network to ensure the quality of existing services. There are 45 network elements related to the VoNR service in the 5GC network of area A on the existing network. To verify the impact of new functions on the VoNR service, it is necessary to traverse and test these network elements. Each network element needs to be tested 30 times before and 30 times after the change, each dial test takes 3 min, and ten function points were upgraded that month. Based on each person working 8 h a day and 22 days a month, this network function upgrade test saves about 7.67 man-months with the help of this system.
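For reference, the 7.67 man-month figure can be reproduced as follows, assuming the 30 dial tests are run once before and once after each change:

$$
\frac{45 \times (30 + 30) \times 3\,\text{min} \times 10}{8\,\text{h/day} \times 22\,\text{days} \times 60\,\text{min/h}}
= \frac{81\,000\,\text{min}}{10\,560\,\text{min/man-month}} \approx 7.67\ \text{man-months}
$$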

4.2 Routine Inspection of the Core Network

There are two common methods for operators to routinely inspect VoNR services. One is triggered by user complaints, which seriously affects the user experience, is delayed, and makes it difficult to analyze the cause of the network problem. The other is to arrange test operation and maintenance personnel to conduct drive tests and fixed-point dial tests regularly, which is time-consuming and labor-intensive. With the help of this system, the test operation and maintenance personnel only need to set the initial parameters of the system to realize the daily inspection of VoNR services and output the calculated service performance parameters, which is simple and convenient. In area B of the current network, two routine inspections of core network voice services were conducted in March, with 2,000 dial tests per day and each inspection lasting 7 days, for a total of 28,000 dial tests. Each dial test takes 3 min, so this system saves about 7.95 man-months.


5 Conclusion
In this paper, an automated test and analysis system for the 5G VoNR service is studied and analyzed to realize simulation and measurement, quality evaluation and automatic early warning, which effectively saves labor and time costs and lays an important technical foundation for the rapid implementation of 5G VoNR service quality evaluation and problem positioning. With the continuous growth of 5G scale and the continuous enrichment of 5G services, it is urgent to promote the development of automated testing technology for 5G services in order to facilitate and standardize their rapid renewal and iteration. At the same time, as the virtualization degree of network element functions advances continuously, test automation is bound to become the general trend. Automated test research for other services will be carried out in the future to promote digital and intelligent network operations.
Acknowledgment. This work was supported in part by the National Key R&D Program of China under Grant 2018YFB1800800.

References 1. Xu, L., Cheng, X., et al.: Telecom big data assisted BS resource analysis for LTE/5G systems. In: 18th IEEE International Conferences on Ubiquitous Computing and Communications, pp. 81–88. IEEE Press, Shenyang (2019) 2. Tangudu, N.D., et al.: Common framework for 5G northbound APIs. In: 3rd 5G World Forum, pp. 275–280. IEEE Press, Bangalore (2020) 3. Sun, Z., et al.: A web testing platform based on hybrid automated testing framework. In: 4th Advanced Information Technology, Electronic and Automation Control Conference, pp. 689– 692. IEEE, Chengdu (2019) 4. Zun, D., et al.: Research on automated testing framework for multi-platform mobile applications. In: 4th International Conference on Cloud Computing and Intelligence Systems, pp. 82–87. IEEE, Beijing (2016) 5. Li, X., et al.: Research on 5G SA network shared voice solution. In: 2nd Information Communication Technologies Conference, pp. 65–69. IEEE Press, Nanjing (2021) 6. Liu, G., et al.: 5G deployment: standalone vs non-standalone from the operator perspective. IEEE Commun. Mag. 58(11), 83–89 (2020) 7. Kumar, N., et al.: Multi-band all-digital transmission for 5G NG-RAN communication. In: 3rd 5G World Forum, pp. 497–501. IEEE Press, Bangalore (2020)

Mobile Network Monitoring Technology for City Freeway Based on Big Data Analysis Baoyou Wang(&), Saibin Yao, Jiucheng Huang, Hui Pan, and Chao Liu Shanghai Branch of China Unicom, Shanghai 200080, China [email protected]

Abstract. In the scene of the city freeway, the driving speed is relatively fast, the flow of people is denser, and customers have higher requirements for communication experience. Firstly, the architecture of the big data platform for mobile network quality control is introduced. Then, a mobile network monitoring scheme for city freeways based on big data analysis is proposed. The scheme is oriented to group perception that approaches real experience, so as to effectively improve the evaluation ability and frequency for the mobile network of the city freeway. It automatically identifies users who are driving on the freeway from the XDR data lake by a clustering algorithm. Through 3W matching association, the signalling data pool of freeway users is generated, which greatly reduces the data volume and the consumption of computing resources. Through road-section analysis of freeway user trajectories, KPI monitoring and warning of abnormal road sections are realized.

Keywords: Network optimization · Monitor · Big data

1 Introduction
In order to effectively alleviate urban traffic congestion, many cities choose to build urban viaducts and establish a three-dimensional urban traffic network. Take Shanghai as an example: it has 14 viaducts, including the Yan'an viaduct, North-South viaduct and Inner Ring viaduct, and the average daily traffic flow is 2.01 million vehicles. In the city freeway scenario, the user's driving route is relatively fixed, the driving speed is relatively fast, the flow of people is denser, and customers have higher requirements for communication experience. The mobility, retention and access quality of the wireless network play an important role in customer perception. How to accurately identify users driving on the expressway, use the real multi-dimensional perception indices of these group users to find network problems, and transform the optimization mode from the traditional offline mode to an online one, so as to greatly improve the efficiency of network optimization, is an urgent practical demand with great practical value. As is well known, operators possess a wealth of massive data, among which the measurement report (MR) and XDR signalling are the two most important types of data in the OSS domain. MR data can reach several TB per day, and XDR original signalling can reach dozens of TB per day [1, 2, 4, 8]. In the face of such a huge amount of



data, we must rely on a big data support platform for processing and analysis. A network monitoring scheme is proposed that automatically identifies users driving on the freeway from the XDR data lake by a clustering algorithm. Through 3W matching association, the signalling data pool of freeway users is generated, which greatly reduces the data volume and the consumption of computing resources. Through road-section analysis of freeway user trajectories, KPI monitoring and warning of abnormal road sections are realized [3, 9–13]. Furthermore, the scheme also provides query and analysis of the perception indices of a single user or a single road section to assist complaint handling and problem positioning. Through platform-based automatic analysis and processing, it realizes extensive and comprehensive testing, focuses on problem road sections quickly and approaches the user's real perception experience, thus improving the efficiency and accuracy of network detection and achieving the purpose of "saving energy and improving efficiency".

2 Architecture of Big Data Platform for Mobile Network Quality Control
Figure 1 shows the architecture of the big data platform for mobile network quality control [2, 4, 8]. The lower three layers are technical levels, and the upper two layers are business levels. The bottom layer is the operation data store layer, which collects and processes near-source data, including MRO file decompression, parsing, preprocessing, geolocation and IMSI backfilling; XDR file decompression and parsing; and OMC performance management data and base station parameters, etc. It uses stream computing frameworks such as Flume and Kafka. The next level is data lake storage and data mart computing. The data lake is stored in a Hadoop cluster in HDFS format and provides query interfaces through Hive and HBase components. The intermediate results are aggregated and stored in the MPP database (including basic data and summary data) using the Spark Streaming quasi-real-time computing tool. Then, through data mining and SQL calculation, value data and coarse-grained summary data are generated and stored in the RDB data mart to facilitate external access. The next level up is the query engine layer, which performs business processing and data visualization through framework components and front-end tools such as Hibernate, Spring Boot, CAS, ExtJS, AJAX, Vue and WebSocket. Above that is the model and dictionary layer, which provides the calculation models (including user models, business models and index models) and data dictionaries (such as grid, road, scene and user). The top level is the application portal, which provides workbenches for network optimization (automatic drive test, automatic assessment, automatic dispatch, intelligent parameters, etc.) and network maintenance (comprehensive monitoring, intelligent fault diagnosis, etc.).


Fig. 1. Architecture of the big data platform for mobile network quality control

3 Mobile Network Monitoring Scheme for City Freeway Based on Big Data Analysis
3.1 Related Definitions

Definition 1: Road Section. In order to effectively alleviate urban traffic congestion, many cities choose to build urban viaducts and establish a three-dimensional urban traffic network. Vehicles enter or leave the viaduct through up and down ramps. The viaduct section between two adjacent ramps is called a road section. The set of road sections is denoted as $P = \{S_1, S_2, \ldots, S_k\}$, where $k$ is the total number of road sections. For example, according to the distribution of viaduct ramps in Shanghai, we divide the 14 viaducts into 168 road sections.
Definition 2: Fingerprint of Road Section. Each road section on a viaduct is usually covered by several main control cells, which are called the fingerprint of the road section. Suppose $S_i$ is the $i$-th road section, $S_i = \{C_1, C_2, \ldots, C_n\}$, where $n$ means that the road section is covered by $n$ main control cells. The fingerprint database of road sections is constructed based on DT results and base station parameters, and the road sections are further subdivided


according to the actual coverage distance. An example of road section fingerprint is shown roughly in Fig. 2.

Fig. 2. An example of road section fingerprint

Definition 3: Motion Feature User. Based on the mobility principle, a user moves from one location area to another in the process of movement. If a user has multiple location area updates in a short time, the user is defined as a motion feature user.
Definition 4: City Freeway User. A user driving on the city freeway over a period of time is defined as a city freeway user.

3.2 Identification Model for City Freeway User

The main steps are described as follows:
Step 1: Slice the main information from the XDR data to form a quintuple, denoted as Tuple $\langle Event_{type}, Time_{begin}, UID, Cell_{uk}, Net_{type}, \ldots \rangle$. Then, order it by user and timestamp.
Step 2: According to the motion sequence and site updates, motion feature modelling is carried out and the user trajectory is generated.
Step 3: After calculating the motion trajectory of each user, compare and match it with the fingerprint database to determine whether the user has passed a road section, how many road sections have been passed through, and hence whether the user is a freeway user; meanwhile, determine which viaducts the motion trajectory has passed through. Finally, by filtering and screening with certain conditions or thresholds, the list of freeway users can be customized automatically, which provides a way to focus on wireless network quality perception problems.
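As an illustration of Step 3, the following sketch matches each user's cell sequence against the road-section fingerprint database; the data structures and the `min_sections` screening threshold are simplified assumptions, and the actual system applies richer conditions.

```python
def identify_freeway_users(trajectories, fingerprints, min_sections=2):
    """
    trajectories: {user_id: [cell_uk, cell_uk, ...]} ordered cell sequence from XDR quintuples
    fingerprints: {section_id: set_of_cell_uk} road-section fingerprint database
    Returns users whose trajectory crosses at least min_sections road sections.
    """
    freeway_users = {}
    for user, cells in trajectories.items():
        visited = [sid for sid, cell_set in fingerprints.items()
                   if any(c in cell_set for c in cells)]   # sections whose cells appear
        if len(visited) >= min_sections:                   # threshold-based screening
            freeway_users[user] = visited
    return freeway_users
```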

3.3 Network Monitor for City Freeways

The data processing flow of the mobile network monitoring scheme for the city freeway based on big data analysis is shown in Fig. 3.


Fig. 3. Data processing flow of network monitor for city freeway

The main procedures include five steps:
1) Acquisition data
2) Generate freeway users perception indices bill
3) Attach labels of road sections
4) Generate application table by aggregation
5) Web presentation

3.3.1 Acquisition Data
1) Through the freeway user identification algorithm, the movement trajectories of the users are generated, denoted as Utrips_freeway. Then, order them by user and start time.
2) Collect some of the important fields from the slice MOS bill. Also order them by user and start time.
3) Gather some of the important fields from the SIP interface bill. Similarly, order them by user and start time.


3.3.2 Generate Freeway Users Perception Indices Bill
The freeway users and DPI signalling bills are matched and associated by IMSI and timestamps, in other words, matched from the aspects of "who, when, where" (abbreviated as 3W), which ensures that the signalling exchange occurred while the user was on the freeway. The fields of signalling and service required for analysis are then backfilled, generating the freeway users signalling data pool, which includes the MOS bill, SIP bill, GxRx interface bill, etc. [1–3, 5–7]. The specific associated fields are shown in Fig. 4. Then the index statistics of this data pool are carried out, generating the perception indices table of freeway users, denoted as _imsi_ci_highway. The index calculation formulas are relatively simple; for example, the "bearing direction" field in the MOS bill is divided into uplink and downlink, according to which the following statistics can be made:

$$\text{Uplink packet loss rate} = \frac{\text{Packet loss in uplink}}{\text{Expected received packets during uplink}} \qquad (1)$$

$$\text{Proportion of MOS greater than 3} = \frac{\text{Number of slice IDs with MOS} > 3.0}{\text{Total number of slice IDs of the MOS bill}} \times 100 \qquad (2)$$

Fig. 4. Freeway users and DPI signaling bill associated fields
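A minimal sketch of how the per-user indices of Eqs. (1) and (2) could be computed from a MOS bill with pandas is given below; the column names are illustrative only and do not correspond to the actual bill fields.

```python
import pandas as pd

def mos_bill_indices(mos_bill: pd.DataFrame) -> pd.DataFrame:
    """Per-user indices from the MOS bill.
    Assumed columns: imsi, direction ('UL'/'DL'), lost_pkts, expected_pkts, mos."""
    ul = mos_bill[mos_bill["direction"] == "UL"].groupby("imsi")
    loss = ul["lost_pkts"].sum() / ul["expected_pkts"].sum()          # Eq. (1)
    mos3 = mos_bill.groupby("imsi")["mos"].apply(
        lambda s: (s > 3.0).mean() * 100)                             # Eq. (2)
    return pd.DataFrame({"ul_packet_loss_rate": loss,
                         "mos_gt3_ratio": mos3}).reset_index()
```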

3.3.3 Attach Labels of Road Sections
Through the base station cell unique code (Cell_UK), the freeway users perception indices table (_imsi_ci_highway) can be joined with the fingerprint database of road sections, generating the freeway users table attached with labels of road sections, denoted as _imsi_ci_highway_segment. The table can be used for single-user perception index query and statistics, and can also be used for drill-down analysis of the upper aggregation table.
3.3.4 Generate Application Table by Aggregation
Aggregating the _imsi_ci_highway_segment table by the road section field and setting thresholds for each index, the anomaly tags are added to generate the application table, denoted as _highway_segment (a sketch of this step is given after Sect. 3.3.5). For example, set abnormal thresholds for the connection success rate, drop call rate, MOS value, MOS > 3.0 ratio, uplink packet loss rate and downlink packet loss rate, respectively. When an index is higher (or lower) than its abnormal threshold, the index label is set as abnormal.
3.3.5 Web Presentation
In summary, the application table, named _highway_segment, is obtained through XDR big data analysis and processing. Based on this table, the optimization mode is transformed from traditional offline to online through web visualization, which greatly improves the efficiency of network optimization and achieves the purpose of energy saving and efficiency increase. Shown on a large visual screen, it can realize KPI monitoring of road sections and alarm prompts for abnormal road sections. Furthermore, it also provides query and analysis of the perception indices of a single user or a single road section, which helps diagnose and locate complaint problems. The portal of the visual monitoring system is shown in Fig. 5.
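The aggregation and anomaly-labelling step of Sect. 3.3.4 can be sketched as follows; the threshold values and column names are illustrative assumptions, not the thresholds used on the live network.

```python
import pandas as pd

# Illustrative thresholds; the real system lets engineers configure them per index.
THRESHOLDS = {"drop_call_rate": ("gt", 0.02), "mos": ("lt", 3.0),
              "ul_packet_loss_rate": ("gt", 0.05)}

def aggregate_by_section(per_user: pd.DataFrame) -> pd.DataFrame:
    """Aggregate an _imsi_ci_highway_segment-style table by road section and tag anomalies."""
    agg = per_user.groupby("section_id").mean(numeric_only=True).reset_index()
    for col, (direction, limit) in THRESHOLDS.items():
        if col in agg:
            bad = agg[col] > limit if direction == "gt" else agg[col] < limit
            agg[col + "_abnormal"] = bad      # anomaly tag per index
    return agg
```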

Fig. 5. Portal of visual monitoring system

4 Conclusions
The mobile network monitoring technology for the city freeway based on big data analysis effectively improves the evaluation ability and frequency for the mobile network of the city freeway, changing from the traditional monthly evaluation to daily evaluation.


From the perspective of practical application effect, it has been widely used in expressway scenarios with fixed driving tracks. The visual monitoring system automatically outputs road sections with high drop-call rates or poor quality every day, which are included in the daily optimization control table. The detection timeliness rate of hidden perception problems, such as poor quality and high drop calls, has increased from 30% to 95%. The efficiency of network optimization is improved by 90%, the coverage rate of daily optimization problems is more than 90%, and the completion rate of automatic drive tests is more than 70%, which saves operation and maintenance costs and improves optimization efficiency.

References 1. Tao, L., Tao, W., Bin, W.: Association and application in XDR and MR data. Telecommun. Sci. 35(04), 120–130 (2019) 2. Yao, S., Huang, J., Wang, B., et al.: Design and application of network optimizing integrated platform based on rasterized big data. In: Wang, Y., Fu, M., Xu, L., Zou, J. (eds.) Signal and Information Processing, Networking and Computers. LNEE, vol. 628, pp. 789–799. Springer, Singapore (2020). https://doi.org/10.1007/978-981-15-4163-6_94 3. Xu, L., et al.: Architecture and technology of multi-source heterogeneous data system for telecom operator. In: Wang, Y., Xu, L., Yan, Y., Zou, J. (eds.) Signal and Information Processing, Networking and Computers. LNEE, vol. 677, pp. 1000–1009. Springer, Singapore (2021). https://doi.org/10.1007/978-981-33-4102-9_120 4. Baoyou, W., Jian, Y., et al.: Signaling collection and monitoring technology based on FKS. Telecommun. Sci. 034(003), 145–155 (2018) 5. Yusun, F., Genke, Y.: Application of artificial intelligence in mobile communication: challenge and practice. J. Commun. 41(9), 190–201 (2020) 6. Ying, W., Zhuang, S.: Survey of mobility prediction in wireless network. J. Commun. 40(8), 157–168 (2019) 7. Xu, L., Cao, Y., Yang, H., Sun, C., Zhang, T., et al.: Research on telecom big data platform of LTE/5G mobile networks. In: 18th IEEE International Conferences on Ubiquitous Computing and Communications, pp. 756–761. IEEE press, Shenyang (2019) 8. Wang, B., Qian, J., Yuan, S.: Research and implementation on acquisition scheme of telecom big data based on Hadoop. Telecommun. Sci. 033(001), 135–142 (2017) 9. Xu, J., Zheng, K., Chi, M., et al.: Trajectory big data: data, applications and techniques. J. Commun. 36(12), 97–105 (2015) 10. Xu, L., Luan, Y., Cheng, X., et al.: Telecom big data based user offloading self-optimisation in heterogeneous relay cellular systems. Int. J. Distrib. Syst. Technol. 8(2), 27–46 (2017) 11. Jinkang, Z., Mingyang, C., Wuyang, Z.: Three-three-three network architecture and learning optimization mechanism for B5G/6G. J. Commun. 42(4), 62–75 (2021) 12. Xu, L., Zhao, X., Luan, Y., et al.: User perception aware telecom data mining and network management for LTE/LTE-advanced networks. In: Sun, S. (ed.) Signal and Information Processing, Networking and Computers ICSINC 2018. Lecture Notes in Electrical Engineering, vol. 494, pp. 237–245. Springer, Singapore (2019). https://doi.org/10.1007/ 978-981-13-1733-0_29 13. Yafei, T., Jinwu, W., Yunyong, Z.: Research on big data based signaling monitoring platform. Design. Tech. Posts Telecommun. (7), 47–52 (2014)

5G Uplink Coverage Enhancement Based on Coordinating NR TDD and NR FDD Jinge Guo1(&), Yang Zhang2, Bao Guo3, Zhongsheng Fan4, and Huangtao Song4 1

Beijing Institute of Technology, Beijing, China [email protected] 2 China Mobile Communication Co., Ltd., Beijing, China 3 China Mobile Group CMPak Ltd., Islamabad, Pakistan 4 Huawei Technologies Pakistan Ltd., Islamabad, Pakistan

Abstract. With the development of the 5G vertical industry, significantly improving the uplink experience and shortening latency without changing the downlink experience is a new requirement and challenge for networks. The uplink coverage of the 5G network differs from the downlink coverage due to different antenna quantity settings; meanwhile, the 5G network can better compensate on the base station side for the large loss caused by high frequency bands. As a result, the downlink coverage performance is far greater than the uplink coverage in the 5G network. This paper proposes an uplink and downlink decoupling SUL technology for enhancing the uplink coverage performance of the 5G network and an uplink enhancement technology coordinating NR TDD and NR FDD, so that uplink data is transmitted on the SUL spectrum or the NR FDD spectrum in a time-sharing manner. This greatly increases the available uplink time-frequency resources for 5G UEs.
Keywords: Supplementary uplink · Uplink enhancement · Carrier aggregation

1 Introduction
In the 4G era, the network capability mainly serves consumers, and traffic is mainly downlink. In the 5G Internet of Everything era, unlike with mass consumers, massive data will be generated from the bottom up. In addition to the traditional large downlink bandwidth, new requirements on uplink bandwidth and latency are raised. With the development of 5G vertical industries, significantly improving the uplink experience and shortening latency without changing the downlink experience is a new requirement and challenge for networks. The frequency bands used in the 5G era are increasingly high. Currently, among the 5G spectrum released in China, the frequency bands above 6 GHz are mainly millimeter-wave spectrum, and the frequency bands below 6 GHz are divided into three bands: 2.6 GHz, 3.5 GHz and 4.9 GHz. The 3.5 GHz and 4.9 GHz bands are called the C band (i.e., the frequency range below 6 GHz and above 3 GHz). Currently, the uplink and downlink coverage of the 5G C band spectrum is inconsistent in tests on the 5G trial network. Due to reasons such as channel design and large-scale array



antenna gain on the base station side, the 5G network can better compensate on the base station side for the large loss caused by high frequency bands. In addition, the transmit power on the 5G base station side can reach more than a hundred watts (50 dBm), while the maximum transmit power of a mobile phone is 26 dBm. As a result, the downlink coverage is far greater than the uplink coverage [1]. This paper proposes an uplink and downlink decoupling (Supplementary Uplink, SUL) technology for enhancing the uplink coverage performance of a 5G network, and an uplink enhancement technology for NR TDD and NR FDD coordination. The uplink data is transmitted on the SUL spectrum or the NR FDD spectrum in time division, greatly increasing the available uplink time-frequency resources of 5G UEs [2].

2 Uplink and Downlink Decoupling SUL Technology
The 5G network mainly uses TDD networking, that is, the uplink and downlink time-division multiplex the same spectrum resources. Therefore, the actual uplink time-frequency resources are limited, which leads to poor uplink user experience. The principle of uplink and downlink decoupling SUL technology is as follows: in an uplink timeslot of the NR TDD spectrum, the NR TDD spectrum is used to transmit uplink data; in an NR TDD downlink timeslot, the idle SUL spectrum is used as a supplement to transmit uplink data, so that uplink data can be transmitted in all timeslots [3]. SUL breaks the traditional restriction that the uplink and downlink are bound to the same frequency band. In addition to the large downlink capacity of the 5G C band spectrum, SUL transmits the 5G uplink through a lower frequency band (1.8 GHz), implementing co-site deployment of the 5G C band spectrum and the LTE 1.8 GHz spectrum. This feature greatly improves network coverage, maximizes network resource utilization efficiency, and effectively reduces the imbalance between uplink and downlink. Uplink and downlink decoupling decouples the spectrum used by the 5G uplink and downlink: the frequencies used by the uplink and downlink are no longer fixed in the original association relationship, and a relatively low frequency is allowed to be configured for the uplink, so as to resolve or reduce the problem of limited uplink coverage. The 5G terminal can use high and low frequency bands at the same time: the downlink uses the high frequency band, while the uplink can be carried on the high or the low frequency band, so as to achieve a coverage radius that is the same as or similar to that of the high-band downlink and thus balance the uplink and downlink radio links. When uplink-downlink decoupling is deployed, the 5G NR UL generally still needs to be configured; for example, in the NR TDD duplex mode, the UE needs to send an SRS on the NR UL so that the gNB can perform NR DL downlink channel estimation [4]. Take the 3.5 GHz 5G NR frequency band as an example. The uplink and downlink coverage of the 3.5 GHz frequency band is unbalanced due to uneven NR uplink and downlink slot assignment and the large uplink and downlink power difference between the UE and the gNB. During link simulation calculation, the UE power is set to 23 dBm, the gNB power (5G base station) is set to 50.8 dBm (120 W), the NR bandwidth is 100 MHz, and the uplink-downlink timeslot configuration is DL:UL = 4:1.


The subcarrier spacing is set to 30 kHz and a 64T64R antenna is used. According to the link budget, the uplink coverage capability of 3.5 GHz is 13.7 dB less than that of the downlink. Uplink and downlink decoupling defines a new spectrum pairing mode: some low-frequency uplink bands are allocated as dedicated frequencies for uplink and downlink decoupling. For example, the 3.5 GHz frequency band is used for 5G downlink data, while the 1.8 GHz frequency band can be used for uplink data, which gives full play to the strong uplink coverage capability of the 1.8 GHz low frequency band. Compared with the 3.5 GHz uplink, the uplink coverage is enhanced by 11.4 dB (see Fig. 1).

Fig. 1. Enhanced uplink coverage by SUL uplink and downlink decoupling

Table 1 below summarizes the frequency bands planned for NR SUL. Currently, multiple frequency bands can be used for NR SUL. Compared with the C band, 1800 MHz has certain frequency band advantages. In addition, 1800 MHz has a total bandwidth of 75 MHz, larger than that of 900 MHz, so it has a large bandwidth advantage [5].

Table 1. Frequency bands planned for 3GPP NR SUL
Duplex mode  Band  Frequency band  Frequency range
NR SUL       n80   1800M           1710–1785 MHz
NR SUL       n81   900M            880–915 MHz
NR SUL       n82   800M            832–862 MHz
NR SUL       n83   700M            703–748 MHz
NR SUL       n84   2100M           1920–1980 MHz


The co-site deployment of an NR base station and an LTE base station is typical. The 5G NR DL frequency is F2, and the 5G NR and LTE UL frequency is F1. The uplink frequency F1 is referred to as a supplementary uplink frequency (Supplementary Uplink Frequency) for NR networking. When the NR terminal is in the near coverage area, the uplink and downlink of the NR terminal work on F2, for example, 3.5 GHz. When the NR terminal leaves the base station and moves toward the cell edge, the uplink bearer of the NR terminal is switched to F1 as the uplink coverage decreases. For example, 1.8 GHz is used, and the downlink frequency remains at the original F2 frequency, for example, 3.5 GHz, thereby effectively expanding the coverage area of the uplink and downlink decoupled scenario [6].

3 Uplink Enhancement Based on Coordinating NR TDD and NR FDD
The uplink enhancement technology of NR TDD and NR FDD collaboration means that uplink data is sent on the NR TDD spectrum in an uplink timeslot of the NR TDD spectrum, while in an NR TDD downlink timeslot an idle NR FDD spectrum is used as a supplement, so that uplink data can be sent in all timeslots. This solution requires co-site deployment of NR TDD and NR FDD base stations [7]. The SUL link for uplink data transmission is provided by the NR FDD cell, that is, the SUL and NR FDD share the same cell. This solution applies when an operator has spectrum that supports both NR FDD and SUL and an NR FDD cell has already been established.

3.1 NR TDD and NR FDD Timeslot Scheduling

When the uplink enhancement technology is deployed using the SUL spectrum provided by an NR FDD cell and the uplink and downlink decoupling function is disabled, the rsrp-ThresholdSSB-SUL IE in the SIB1 message is set to the minimum value (–156 dBm). Because –156 dBm is less than the camping threshold NRDUCellSelConfig.MinimumRxLevel of the cell, the UE performs random access only on NR TDD. When a UE in idle mode triggers random access, the downlink RSRP of the SSB of the NR TDD carrier measured by the UE is greater than rsrp-ThresholdSSB-SUL, so the UE initiates random access on the NR TDD carrier. When a UE supporting super uplink is handed over, if only super uplink is enabled in the target cell, the gNodeB includes the NR TDD carrier on which the UE is to perform random access in the RRCReconfiguration message [8]. After the uplink enhancement technology of NR TDD and NR FDD coordination takes effect, downlink data is carried on the NR TDD carrier, and uplink data is carried on the NR TDD and SUL carriers. The SUL carrier is used only in the timeslots corresponding to the downlink timeslots of the NR TDD carrier [9]; in this case, the uplink symbols of the self-contained timeslot on the NR TDD carrier do not transmit data. In the NR TDD downlink timeslots, the idle SUL spectrum is used as a supplement to transmit uplink data, so that uplink data can be transmitted in all timeslots (see Fig. 2). The subcarrier spacing of NR TDD is 30 kHz, and the subcarrier spacing of SUL is 15 kHz. Therefore, scheduling in different time sequences needs to be considered


Fig. 2. NR TDD downlink timeslots using idle SUL spectrum supplementation

during scheduling. The scheduling timing of super uplink UEs and non-super uplink UEs on NR TDD is different. To allow super uplink UEs and non-super uplink UEs to be scheduled simultaneously on the same air interface resources with MU-MIMO enabled, the scheduling times of super uplink UEs and non-super uplink UEs must be aligned [10].
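A minimal sketch of the idle-mode uplink carrier selection described in this subsection, assuming the decision reduces to a single RSRP comparison against the broadcast threshold:

```python
def select_uplink_carrier(ssb_rsrp_dbm: float,
                          rsrp_threshold_ssb_sul_dbm: float = -156.0) -> str:
    """Idle-mode random-access carrier choice: when the measured SSB RSRP of the NR TDD
    carrier is above rsrp-ThresholdSSB-SUL, the UE accesses on NR TDD, otherwise on SUL."""
    return "NR TDD" if ssb_rsrp_dbm > rsrp_threshold_ssb_sul_dbm else "SUL"
```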

3.2 Performance Comparison Analysis

Compared with the 3.5 GHz band alone, the uplink enhancement technology coordinating NR TDD and NR FDD provides an additional FDD uplink channel. When TDD 3.5 GHz transmits downlink data, FDD 2.1 GHz transmits uplink data; when TDD 3.5 GHz transmits uplink data, FDD 2.1 GHz does not transmit uplink data [11]. As shown in Table 2, this differs from uplink CA (carrier aggregation) and uplink and downlink decoupling (SUL): when uplink data is transmitted on the 3.5 GHz frequency band, FDD does not transmit uplink data. In this way, the uplink throughput can be improved by taking full advantage of the 3.5 GHz 100 MHz bandwidth and the dual-channel transmission of the terminal. In addition, the maximum transmit power of each channel can reach 23 dBm, improving the coverage by 3 dB. The uplink peak rate of 3.5 GHz 64QAM is 280 Mbit/s and that of 2.1 GHz 64QAM is 90 Mbit/s. After super uplink is enabled, the theoretical uplink peak rate can reach 343 Mbit/s (280 + 0.7 × 90), an increase of about 20%. The average delay decreases by 29.6% from 2.7 s to 1.9 s [12]. The uplink enhancement technology coordinating NR TDD and NR FDD can significantly increase the uplink data rate: compared with the 3.5 GHz band, the uplink peak data rate increases by 18.75% from 280 Mbit/s to 332.5 Mbit/s, and the uplink edge rate increases from 0.2 Mbit/s to 1.8 Mbit/s, an eight-fold increase. It can also significantly reduce the average uplink one-way delay: compared with 3.5 GHz alone, the super uplink (2.1 GHz + 3.5 GHz) technology reduces the average uplink one-way delay from 2.7 s to 1.9 s, a decrease of 29.6%. In addition, the super uplink technology achieves higher uplink peak rates and lower average uplink one-way delay than the CA and SUL technologies.


Table 2. Benchmark between NR SUL and CA (peak and edge rates in Mbit/s, one-way delay in ms)
Super uplink (3.5G + 2.1G): uplink peak rate 332.5; UL edge rate 1.8; UL one-way average delay 1.9; coverage capability: co-coverage with 2.1 GHz
Carrier aggregation (3.5 GHz + 2.1 GHz): uplink peak rate 215; UL edge rate 1.8; UL one-way average delay 2.7
Uplink and downlink decoupling (SUL) 3.5G: uplink peak rate 280; UL one-way average delay 2.7
3.5G: uplink peak rate 280; UL edge rate 0.2; UL one-way average delay 2.7; coverage capability: 50% less than 2.1G
Pure 2.6G: uplink peak rate 185; UL edge rate 1.09; UL one-way average delay 3.9

4 LTE Load-Based SUL Uplink and Downlink Decoupling Mode
In the existing SUL uplink-downlink decoupling mode, the uplink spectrum of the 1800 MHz frequency band is shared by NR and FDD LTE. According to the initial design, spectrum resources of different bandwidths may be flexibly used by NR and LTE in different scheduling periods (TTIs) (see Fig. 3). In the SUL uplink and downlink decoupling mode on the existing network, the NR uses the SUL uplink spectrum, which is separated from the LTE uplink spectrum by segment [13]. Take the NR SUL n80 frequency band (1800 MHz) as an example. This band is the uplink frequency band of an FDD spectrum and covers 1710 MHz–1785 MHz. The traditional uplink spectrum allocation mode has obvious disadvantages. Assume that the available NR SUL n80 spectrum is 1710 MHz–1725 MHz, a total of 15 MHz. Currently, NR SUL n80 (1800 MHz) and FDD LTE do not share uplink spectrum but are manually allocated: NR uses the 10 MHz of 1710 MHz–1720 MHz, and FDD LTE uses the 5 MHz of 1720 MHz–1725 MHz. With the NR SUL n80 frequency band divided manually, the 5 MHz frequency band cannot meet the uplink capacity requirements of FDD LTE, while the fixed 10 MHz frequency band allocated to NR is idle at the initial stage of 5G deployment. On a live network, a method for NR and LTE to share uplink spectrum in SUL uplink-downlink decoupling mode based on the LTE network load may be considered: calculate the number of PRBs that can be allocated according to the spectrum allocated to FDD LTE; divide the FDD LTE cell into two virtual cells, where the FDD LTE cell is the primary cell, the NR SUL cell is the secondary cell, and the two virtual cells share all uplink and downlink PRB resources; calculate the number of uplink PRBs


Fig. 3. NR and LTE can flexibly use spectrum resources with different bandwidths

occupied by LTE FDD, set the number of uplink PRBs that can be allocated to the LTE FDD primary cell, calculate the start and end PRB identifiers, and determine the uplink PRB resource allocation of the primary cell; and allocate the remaining uplink spectrum to the NR SUL SCell to avoid wasting the downlink spectrum resources of the FDD LTE cell due to SUL planning.
(1) Calculate the number of PRBs that can be allocated based on the spectrum allocated to FDD LTE. According to the spectrum owned by the operator, the operator has 2 × 25 MHz in 3GPP Band 3 (uplink 1710 MHz–1785 MHz and downlink 1805 MHz–1880 MHz), that is, uplink 1710 MHz–1735 MHz and downlink 1805 MHz–1830 MHz. The shared frequency allocated to NR and FDD LTE is 2 × 15 MHz, and the 15 MHz bandwidth can use 75 RBs.
(2) Divide the FDD LTE cell into two virtual cells. The FDD LTE cell is the primary cell and the NR SUL cell is the secondary cell, and the two virtual cells share all uplink and downlink PRB resources. According to the calculation, the FDD LTE primary cell and the NR SUL secondary cell share all 75 uplink RB resources and 75 downlink RB resources.
(3) Calculate the number of uplink PRBs occupied by FDD LTE. For example, the uplink PRB usage of an LTE FDD cell on the live network is 15%. The available RB resources are 75 RBs (using the 15 MHz bandwidth), so the number of uplink PRBs occupied by the LTE FDD primary cell is 11.25 RBs (75 RBs × 15%), rounded up to 12 RBs; a certain redundancy can also be considered. If the number of uplink PRBs allocated to the FDD LTE primary cell is set to 15 RBs, the available PRBs start and end at (1–15) PRBs. Allocate the remaining uplink spectrum to the NR SUL, calculate the available uplink spectrum range, and determine the uplink PRB resource allocation of the


SCell. For example, if the number of uplink PRBs allocated to the FDD LTE primary cell is set to 15 RBs, the usable PRB start and end identifiers are (1–15) PRBs, and the number of uplink PRBs allocated to the NR SUL secondary cell is set to 60 RBs, with PRB start and end identifiers of (16–75) PRBs. LTE FDD uses all PRB resources in the downlink, which avoids wasting downlink spectrum resources in LTE FDD cells due to SUL planning.
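A small sketch of the load-based uplink PRB split described in steps (1)–(3); the redundancy margin is an illustrative assumption chosen here so that the function reproduces the 15-PRB example above.

```python
import math

def split_uplink_prbs(total_prbs: int, lte_prb_usage: float, margin_prbs: int = 3):
    """Load-based uplink PRB split between the FDD LTE primary cell and the NR SUL
    secondary cell, following the worked example above (75 PRBs, 15% usage)."""
    lte_prbs = math.ceil(total_prbs * lte_prb_usage) + margin_prbs   # e.g. 12 + 3 = 15 PRBs
    lte_range = (1, lte_prbs)                                        # PRBs 1..15 for LTE FDD
    sul_range = (lte_prbs + 1, total_prbs)                           # PRBs 16..75 for NR SUL
    return lte_range, sul_range

# split_uplink_prbs(75, 0.15) -> ((1, 15), (16, 75))
```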

5 Conclusion
In SUL uplink and downlink decoupling mode, NR and LTE share the uplink spectrum, which ensures that uplink coverage enhancement can be implemented in scenarios where NR uplink coverage is limited. With SUL uplink and downlink decoupling based on the actual load of the live network, when FDD LTE and NR share the uplink spectrum, FDD LTE cells can use sufficient and continuous PRB resources, ensuring the experience of existing 4G UEs. In addition, FDD LTE uses all of the downlink spectrum instead of only the portion manually allocated in the earlier period.

References 1. Xu, L., et al.: Telecom big data assisted bs resource analysis for LTE/5G systems. In: 18th IEEE International Conferences on Ubiquitous Computing and Communications, pp. 81–88. IEEE Press, Shenyang (2019) 2. Xu, L., Chen, Y., Chai, K.K., Schormans, J., Cuthbert, L.: Self-organising cluster-based cooperative load balancing in OFDMA cellular networks. Wiley Wirel. Commun. Mob. Comput. 15(7), 1171–1187 (2015) 3. Eiza, M., Cao, Y., Xu, L.: Toward Sustainable and Economic Smart Mobility: Shaping the Future of Smart Cities, 1st edn. World Scientific, London (2020) 4. Xu, L., Zhao, X., Yu Y., Luan, Y., Zhao, L., et al.: A comprehensive operation and revenue analysis algorithm for LTE/5G wireless system based on telecom operator data. In: 19th IEEE International Conference on Scalable Computing and Communications, pp.1521– 1524. IEEE Press, Leicester (2019) 5. Xu, L., Cao, Y., Yang, H., Sun, C., Zhang, T., et al.: Research on telecom big data platform of LTE/5G mobile networks. In: 18th IEEE International Conferences on Ubiquitous Computing and Communications, pp. 756–761. IEEE Press, Shenyang (2019) 6. Xu, L., Zhao, X., Luan, Y., et al.: User perception aware telecom data mining and network management for LTE/LTE-advanced networks. In: Sun, S. (ed.) Signal and Information Processing, Networking and Computers ICSINC 2018. Lecture Notes in Electrical Engineering, vol. 494, pp. 237–245. Springer, Singapore (2019). https://doi.org/10.1007/ 978-981-13-1733-0_29 7. Xu, L., et al.: Self-optimised joint traffic offloading in heterogeneous cellular networks. In: 16th IEEE International Symposium on Communications and Information Technologies, pp. 263–267. IEEE Press, Qingdao (2016)


8. Xu, L., et al.: Data mining for base station evaluation in LTE cellular systems. In: Sun, S., Chen, N., Tian, T. (eds.) ICSINC 2017. LNEE, vol. 473, pp. 356–364. Springer, Singapore (2018). https://doi.org/10.1007/978-981-10-7521-6_43 9. Xu, L., Luan, Y., Cheng, X., et al.: WCDMA data based LTE site selection scheme in LTE deployment. In: 1st International Conference on Signal and Information Processing, Networking and Computers, pp. 249–260. CRC Press Taylor & Francis Group, Beijing (2015) 10. Xu, L., Luan, Y., Cheng, X., et al.: Telecom big data based user offloading self-optimisation in heterogeneous relay cellular systems. Int. J. Distrib. Syst. Technol. 8(2), 27–46 (2017) 11. Xu, L., et al.: Channel-aware optimised traffic shifting in LTE-advanced relay networks. In: 25th IEEE International Symposium on Personal Indoor and Mobile Radio Communications, pp. 1597–1602. IEEE Press, Washington (2014) 12. Xu, L., Cheng, X., et al.: Mobility load balancing aware radio resource allocation scheme for LTE-advanced cellular networks. In: 16th IEEE International Conference on Communication Technology, pp.806–812. IEEE Press, Hangzhou (2015) 13. Xu, L., Cheng, X., Chen, Y., Chao, K., Liu, D., Xing, H.: Self-optimised coordinated traffic shifting scheme for LTE cellular systems. In: Sun, S., Li, J. (eds.) ICSON 2015. LNICSSITE, vol. 149, pp. 67–75. Springer, Cham (2015). https://doi.org/10.1007/978-3319-19746-3_9

Application of Improved GRNN Algorithm for Task Man-Hours Prediction in Metro Project
Zhengyu Zhang, Shuying Wang, and Jianlin Fu
Southwest Jiaotong University, Chengdu 611756, China
[email protected], [email protected]

Abstract. In order to improve the accuracy and efficiency of subway project task man-hours prediction, a generalized regression neural network (GRNN) prediction model based on an improved Sparrow Search Algorithm is proposed. A prediction study was carried out on the task man-hours of subway projects, and the factors affecting design task man-hours were analyzed, taking the architectural profession as an example, before applying the proposed algorithm for prediction. The results show that the improved GRNN algorithm proposed in this paper has smaller error and higher accuracy, and it has great potential for application in the prediction of task man-hours of subway projects. Keywords: Task man-hours prediction · Generalized regression neural network (GRNN) · Sparrow search algorithm (SSA) · Combinatorial optimization model

1 Introduction Task man-hours refer to the duration from the start of a task to its end. The success of subway engineering design projects is crucial to the development and future of design companies. Design task man-hours, as the core of design project cost, are of great value in engineering design and affect key factors of project implementation such as project quotation, human resource arrangement, and project schedule. Unreasonable man-hour estimates may cause project delays and even losses, so forecasting design task man-hours plays an important role in planning and improving work efficiency. Quantitative prediction of man-hours is currently carried out mainly by analyzing the relationship between predictor variables and related variables in historical data. At present, the main algorithms include artificial neural networks [1], low-rank sparse decomposition [2], support vector machines [3], Gaussian process regression [4], generalized regression neural networks [5] and so on. The General Regression Neural Network (GRNN) is a non-linear regression feedforward neural network that belongs to the family of Radial Basis Function (RBF) networks, and its structure is similar to an RBF network. The calculation of GRNN is based on mathematical statistics, and the implicit relationship of the data is approximated from experimental samples.


The convergence speed of the GRNN model is relatively fast, the amount of calculation in the process is small, and it can handle situations with few training samples [6]. It has been applied in fields such as missing data management [7], industrial data prediction [8], quality evaluation [9], and ozone forecasting [10].

2 Related Work The traditional GRNN has one important parameter, the smoothing parameter σ. The value of this parameter greatly affects the prediction accuracy of the model. The traditional GRNN generally adopts manual parameter tuning, which has low efficiency and poor precision. At present, many scholars at home and abroad have improved the algorithm in view of this limitation. Meng et al. [11] use the global search capability of the simulated annealing algorithm to optimize the GRNN smoothing factor, and establish a mathematical model for estimating the equilibrium solubility of a ternary water-salt system over a wide temperature range. Azim et al. [12] introduce the gray wolf optimization algorithm to optimize the smoothing factor to obtain an ideal prediction model and use it for CO2 emission prediction. Han et al. [13] use the fruit fly optimization algorithm to optimize the GRNN parameter to establish a fund net value prediction model; the results show that it is less likely to fall into a local optimum and the prediction accuracy is improved. Therefore, many intelligent optimization algorithms are widely used to optimize the parameter of GRNN. The sparrow search algorithm (SSA) is an intelligent optimization algorithm proposed by Xue and Shen in 2020 [15], inspired by sparrow foraging behavior. Compared with other algorithms, SSA has the advantages of higher stability, better convergence accuracy, and avoiding falling into local optimal solutions to some extent. However, SSA can still converge prematurely and fall into a local optimal solution. In this paper, the PCA-ISSA-GRNN algorithm is proposed. First, principal component analysis is performed on the various factors of the sample data, because in engineering practice the system is a gray system with complex correlations among factors, and this correlation causes information duplication that harms the accuracy of the neural network and the training speed. The dimensionality-reduced data is then trained through the GRNN model optimized by the improved sparrow search algorithm. The experimental results confirm the feasibility and superiority of the improved algorithm. The rest of the paper is organized as follows. In Sect. 3, the implementation of ISSA and the detailed modeling process of PCA-ISSA-GRNN are described. In Sect. 4, the prediction experiments of PCA-ISSA-GRNN on the metro dataset are presented to verify its validity. Section 5 gives conclusions.

Application of Improved GRNN Algorithm

1423

3 Materials and Methods
3.1 GRNN

Its structure is shown in Fig. 1, and detailed calculations of the algorithm can be found in [8]. After the data is input into the network, the input layer, pattern layer, summation layer, and output layer are computed in turn to obtain the output results.

Fig. 1. The structure of GRNN.
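To make this layered calculation concrete, the following is a minimal NumPy sketch of GRNN regression with a single smoothing factor sigma; the function and variable names are illustrative and not taken from the paper's implementation.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """GRNN forward pass: input -> pattern -> summation -> output layer."""
    X_train = np.atleast_2d(X_train)
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)                  # squared distances to training samples
        k = np.exp(-d2 / (2.0 * sigma ** 2))                     # pattern layer (Gaussian kernels)
        preds.append(np.dot(k, y_train) / (np.sum(k) + 1e-12))   # summation / output layer
    return np.array(preds)

# toy usage on synthetic data
rng = np.random.default_rng(1)
X = rng.random((30, 5))
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 1.0])
print(grnn_predict(X, y, X[:3], sigma=0.3))
```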

3.2 The Method Proposed

This section first briefly discusses the Improved Sparrow Search Algorithm (ISSA) and then introduces the hybrid GRNN model (PCA-ISSA-GRNN).
Our Proposed Sparrow Search Algorithm
Sparrow Search Algorithm. In nature, sparrows, as social birds, are intelligent, have a strong memory, and show an obvious division of labor within the population. Some sparrows are responsible for finding food and providing the foraging area and direction for the whole population, while the rest follow them to obtain food. Meanwhile, when a sparrow perceives danger, it sends out an alarm signal and the whole population immediately exhibits anti-predation behavior. Each sparrow has only one location attribute, representing the location of the food it finds, and three possible behaviors:
• continue to search for food as a finder,
• follow a finder for food as a follower,
• observe and investigate.
The algorithm was proposed in 2020; detailed calculations can be found in [15].
Chaos Initialization of Population.


For a swarm intelligence algorithm, the more individuals in the initial population and the more uniform their distribution, the faster the algorithm can converge to the optimal solution. An initial population with good distribution characteristics improves the optimization performance. The standard SSA generates the initial population randomly, so it is difficult to ensure its diversity. To solve this problem, chaotic mapping is introduced to initialize the population. Chaotic mapping generates chaotic sequences, which are random-like sequences produced by a simple deterministic system. Chaotic sequences are non-linear, ergodic and random, so using them to initialize the population achieves better results than pseudo-random numbers. The Tent map is a piecewise linear mapping function with a simple structure, relatively uniform distribution density, and good ergodicity. Its expression is:

x_{n+1} = \begin{cases} 2x_n, & 0 \le x_n < 0.5 \\ 2(1 - x_n), & 0.5 \le x_n < 1 \end{cases}    (1)

This is the expression when the parameter a = 0.5. The general mathematical expression of the Tent map is:

x_{n+1} = \begin{cases} \dfrac{x_n}{a}, & 0 < x_n < a \\ \dfrac{1 - x_n}{1 - a}, & a \le x_n \le 1 \end{cases}    (2)
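For illustration, below is a small sketch of Tent-map-based population initialization following (1); the parameter names (pop_size, dim, lower, upper) and the starting value x0 are assumptions for the example, not values from the paper.

```python
import numpy as np

def tent_sequence(length, x0=0.37):
    """Chaotic sequence generated by the Tent map of Eq. (1)."""
    seq = np.empty(length)
    x = x0
    for i in range(length):
        x = 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)
        if x in (0.0, 0.5):          # avoid the degenerate fixed points
            x += 1e-6
        seq[i] = x
    return seq

def tent_init_population(pop_size, dim, lower, upper):
    """Map a Tent chaotic sequence into the search space to seed the SSA population."""
    chaos = tent_sequence(pop_size * dim).reshape(pop_size, dim)
    return lower + chaos * (upper - lower)

# e.g. 50 candidate smoothing factors in an assumed search range [0.01, 2.0]
population = tent_init_population(pop_size=50, dim=1, lower=0.01, upper=2.0)
```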

Dynamic ST Value Strategy. The ST value of the standard SSA is fixed, whereas a dynamic ST value strategy can balance global and local search capabilities. When the ST value is large, most sparrows remain within the safety range, which expands the search range to find positions with better fitness and ensures global search ability. A smaller ST value reduces population diversity and increases the probability of anti-predation behavior, which strengthens the search around the current optimal solution and improves the accuracy of local search. In this paper, a nonlinear decreasing strategy is proposed so that ST takes a large value in the early iterations and a small value in the later iterations. Its adjustment formula is:

ST = ST_{min} + \dfrac{\cos\left(\pi \sin\left(\frac{\pi}{2} \cdot \frac{t}{T}\right)\right) + 1}{2}\,(ST_{max} - ST_{min})    (3)

In the above formula, ST_min is the minimum value of ST, ST_max is the maximum value of ST, t is the current iteration number, and T is the total number of iterations.
Our Proposed Hybrid GRNN Model. Figure 2 is the flow chart of the improved GRNN algorithm proposed in this paper. The steps are summarized as follows: first, the data is standardized, PCA is used to reduce its dimension, and principal components are extracted to eliminate the correlation between variables; then the range of the smoothing factor is set, and the optimal parameter σ is found by the improved sparrow search algorithm.


The optimal GRNN prediction model is then established using the obtained optimal value of σ; finally, the test set is evaluated and the prediction results are output.

[Fig. 2 flowchart: data standardization → extract the principal components → form the new data set; Tent chaos initializes the population → divide the population into finders and followers → dynamically update the safety value ST → update finder, follower and watchman positions → repeat until the end condition is met → take the global optimal solution as the smoothing factor → establish the optimal prediction model → the GRNN model outputs the predicted results.]

Fig. 2. The process of the proposed model.
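To make the flow of Fig. 2 concrete, the following is a simplified sketch of the PCA-ISSA-GRNN pipeline under stated assumptions: PCA is computed via SVD, the ST schedule follows Eq. (3), and a heavily reduced random-perturbation search stands in for the full finder/follower/watchman update of ISSA; all names and ranges are illustrative.

```python
import numpy as np

def dynamic_st(t, T, st_min=0.4, st_max=0.6):
    """Nonlinear decreasing safety threshold of Eq. (3)."""
    return st_min + (np.cos(np.pi * np.sin(np.pi / 2 * t / T)) + 1) / 2 * (st_max - st_min)

def pca_reduce(X, n_components):
    """Project centered data onto its first principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T

def grnn_rmse(sigma, X_tr, y_tr, X_va, y_va):
    """Fitness: RMSE of a GRNN with smoothing factor sigma on validation data."""
    preds = []
    for x in X_va:
        k = np.exp(-np.sum((X_tr - x) ** 2, axis=1) / (2 * sigma ** 2))
        preds.append(k @ y_tr / (k.sum() + 1e-12))
    return np.sqrt(np.mean((np.array(preds) - y_va) ** 2))

def search_sigma(X_tr, y_tr, X_va, y_va, iters=50, pop=20, lo=0.01, hi=2.0, seed=0):
    """Reduced stand-in for ISSA: keep the best sigma, perturb less as ST shrinks."""
    rng = np.random.default_rng(seed)
    cand = rng.uniform(lo, hi, pop)
    best = min(cand, key=lambda s: grnn_rmse(s, X_tr, y_tr, X_va, y_va))
    for t in range(iters):
        st = dynamic_st(t, iters)                                  # wide search early, narrow late
        cand = np.clip(best + rng.normal(0, st, pop), lo, hi)
        trial = min(cand, key=lambda s: grnn_rmse(s, X_tr, y_tr, X_va, y_va))
        if grnn_rmse(trial, X_tr, y_tr, X_va, y_va) < grnn_rmse(best, X_tr, y_tr, X_va, y_va):
            best = trial
    return best

# toy end-to-end run on synthetic data
X = np.random.rand(80, 8)
y = X[:, 0] * 3 + X[:, 1] - X[:, 2] + 0.05 * np.random.randn(80)
Z = pca_reduce(X, n_components=4)
sigma_opt = search_sigma(Z[:56], y[:56], Z[56:], y[56:])
print("selected smoothing factor:", round(float(sigma_opt), 3))
```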

4 Performance Analysis
4.1 Analysis of Influencing Factors of Man-Hours

Engineering projects are often completed jointly by many design disciplines, such as electrical engineering, civil engineering, heating and ventilation, and water supply and drainage. The factors affecting the man-hours of different disciplines differ, and the influencing conditions are complicated: man-hours result from the joint action of many factors and cannot be predicted from a single factor.


Moreover, different factors affect design man-hours in different and complex ways, showing both linear and nonlinear correlations. To keep the research focused, this paper selects architectural design as the specific direction for man-hour prediction analysis. In engineering design projects, the architectural discipline is involved in almost all reconstruction, expansion, or new construction projects, so it is representative and its research results can be readily promoted and applied. Considering the characteristics of architectural design and the general scale of such projects, and after consulting relevant literature and experts, the station type, station layer, construction area, station operating characteristics, traffic, number of entrances and exits, number of wind pavilions, and number of designers are finally selected as the input variables for predicting the man-hours of the construction work. To verify the performance of the algorithm in subway project man-hour prediction, 108 historical task completion time records were obtained from the BIM collaborative platform of urban rail transit to train and test the model.

4.2 Man-Hour Prediction and Analysis

Before the variables are input into the model, the data need to be normalized to avoid the impact of the different dimensions of the factors on the predicted results. The normalization formula is:

\bar{x} = \dfrac{x_{max} - x}{x_{max} - x_{min}}    (4)

Here x is the data to be normalized, \bar{x} is the normalized input sample of the model, and x_{max} and x_{min} are the maximum and minimum values of this factor, respectively. After the predicted results are obtained by the improved GRNN model, the performance of the algorithm must be evaluated. In this study, three indexes were used to measure the accuracy of the prediction results: MAE (Mean Absolute Error), MAPE (Mean Absolute Percentage Error) and RMSE (Root Mean Squared Error). The calculation formulas are:

MAE = \frac{1}{n}\sum_{i=1}^{n} |a_i - p_i|    (5)

MAPE = \frac{1}{n}\sum_{i=1}^{n} \frac{|a_i - p_i|}{a_i}    (6)

RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (a_i - p_i)^2}    (7)

In the above formulas, n is the number of data samples, a_i is the true value, and p_i is the predicted value.
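A short sketch of how the three indexes of (5)-(7) can be computed; the sample values in the usage line reuse the first rows of Table 1.

```python
import numpy as np

def evaluate(actual, predicted):
    """Compute the three accuracy indexes of Eqs. (5)-(7)."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    mae = np.mean(np.abs(a - p))              # Eq. (5)
    mape = np.mean(np.abs(a - p) / a)         # Eq. (6)
    rmse = np.sqrt(np.mean((a - p) ** 2))     # Eq. (7)
    return mae, mape, rmse

# e.g. the first three rows of Table 1 (GRNN column)
print(evaluate([29, 32, 31], [28.999, 30.408, 30.405]))
```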


The PCA-ISSA-GRNN hybrid model was constructed in the MATLAB R2020B environment. Some parameters of ISSA are set as follows: the population size is 50, the maximum number of iterations is 200, the proportion of finders in the population Pd is 0.7, and the proportion of vigilantes Sd is 0.2. The safety value is dynamically updated according to Eq. (3), where STmax is 0.6 and STmin is 0.4.

Fig. 3. Convergence curve of sparrow search algorithm

Figure 3 shows the convergence curve of the sparrow population. After 102 iterations of optimization, the population extremum tends to flatten, and the optimal smoothing factor value is 0.664. To verify the effectiveness of the proposed algorithm, it was compared with the improved GRNN model (PSO-GRNN) proposed in [16] and the traditional GRNN algorithm. The ratio of the training set to the test set is 7:3, and the pros and cons of these models were compared through the predicted results. The experimental results are shown in Table 1.

Table 1. Comparison of model prediction results
Index  Actual value  GRNN pred.  GRNN AE  PSO-GRNN pred.  PSO-GRNN AE  PCA-ISSA-GRNN pred.  PCA-ISSA-GRNN AE
1      29            28.999      0.001    27.670          1.330        28.903               0.097
2      32            30.408      1.592    32.213          0.213        31.791               0.209
3      31            30.405      0.595    31.637          1.232        31.489               0.489
4      34            31.420      2.580    32.042          1.958        33.384               0.616
5      36            32.682      3.318    33.237          2.763        35.173               0.827
6      27            30.524      3.524    25.021          1.979        27.762               0.762
7      33            32.575      0.425    33.999          0.999        33.055               0.055
…      …             …           …        …               …            …                    …
33     30            31.527      1.527    30.001          0.001        30.065               0.065


In the experiment, support vector regression and BP neural network were constructed for prediction. Comparison of performance indexes of different algorithms is shown in Table 2.

Table 2. Comparison of performance indexes
Model           RMSE     MAE      MAPE
SVR             3.0482   2.5781   0.0934
BPNN            2.8717   2.2689   0.0750
GRNN            2.6703   2.2119   0.0712
PSO-GRNN        2.5297   1.9112   0.0577
PCA-ISSA-GRNN   2.4541   1.7021   0.0536

As can be seen from Table 2, the support vector regression algorithm has the lowest accuracy and is prone to large deviations in the prediction process, showing great instability. Considering RMSE, MAE and MAPE together, the prediction accuracy and stability of the GRNN model are higher than those of the BP neural network and support vector regression. Compared with the standard GRNN model and the PSO-GRNN model, the model in this paper has better prediction performance. This is because the improvement still maintains population diversity in the late stage of iteration, avoids falling into local optima, enlarges the search space and increases the probability of finding the global optimum, so the prediction precision is higher than that of the PSO-GRNN and GRNN models. We plotted the results on the test set: the prediction results of the GRNN, BPNN, PSO-GRNN and PCA-ISSA-GRNN models were compared with the original data, and the PCA-ISSA-GRNN model fits the real values best, while the original GRNN model was unstable and produced large prediction errors (Fig. 4).

Fig. 4. Comparison of daily forecast results


5 Conclusion
• In the regression prediction of engineering problems, the traditional GRNN algorithm uses a single smoothing factor and the input variables are strongly correlated, which limits prediction accuracy and training speed. This paper therefore proposes an improved hybrid GRNN algorithm: the correlation of the input variables is first reduced by principal component analysis, and the smoothing factor of the GRNN model is then optimized by the improved sparrow search algorithm. Comparison and analysis with other algorithms show that the improved GRNN algorithm has smaller prediction error and higher prediction accuracy.
• The basic sparrow search algorithm, which is prone to falling into local minima, is improved by introducing Tent chaotic mapping instead of random initialization and a nonlinearly decreasing ST value strategy.
• The design man-hour prediction of metro projects is studied. Taking the architectural discipline as an example, the factors affecting task man-hours are analyzed. Comparing the proposed algorithm with the traditional algorithm and with improved GRNN algorithms from other scholars shows that the prediction error of the model in this paper is smaller, so it has great application potential in subway project design task man-hour prediction.
Acknowledgements. Fund project: Science and Technology Project of Sichuan Province (Applied Basic Research), 2020YJ0215; Research on Production Self-organization and Self-regulation in Digital Twin Workshop, 2020.1-2021.12.

References 1. Wang, C., Zhang, X., Chen, X., et al.: Vessel traffic flow forecasting based on BP neural network and residual analysis. In: 2017 4th International Conference on Information, Cybernetics and Computational Social Systems (ICCSS). IEEE, Dalian (2017) 2. Liu, R., Chen, J., Liu, Z., et al.: Vessel traffic flow separation-prediction using low-rank and sparse decomposition. In: 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC). IEEE, Yokohama (2017) 3. Wang, H., Wang, Y.: Vessel traffic flow forecasting with the combined model based on support vector machine. In: 2015 International Conference on Transportation Information and Safety. IEEE (2015) 4. Renaldy, D., Shi, C., et al.: Running-in real-time wear generation under vary working condition based on Gaussian process regression approximation. Measurement (2021) 5. Ivan, I., Roman, T., et al.:A GRNN-based approach towards prediction from small datasets in medical application. Proc. Comput. Sci. (2021) 6. Erhan, D., Bengio, Y., Courville, A., et al.: Why dose unsupervised pre-training help deep learning? J. Mach. Learn. Res. (2010) 7. Ivan, I., Roman, T., et al.: An approach towards missing data management using improved GRNN-SGTM ensemble method. Eng. Sci. Technol. Int. J. (2021)


8. Cao, W., Zhang, C.: An effective parallel integrated neural network system for industrial data prediction. Appl. Soft Comput. (2021) 9. Cheng, J., Xiong, Y.: The quality evaluation of classroom teaching based on FOA-GRNN. Proc. Comput. Sci. (2017) 10. Zhu, S., Wang, X., et al.: CEEMD-subset-OASVR-GRNN for ozone forecasting: Xiamen and Harbin as cases. Atmos. Pollut. Res. (2020) 11. Meng, X., Fu, Y., et al.: Estimating solubilities of ternary water-salt systems using simulated annealing algorithm based generalized regression neural network. Fluid Phase Equilibria (2020) 12. Azim, H., Davide, A., et al.: Renewable energies generation and carbon dioxide emission forecasting in microgrids and national grids using GRNN-GWO methodology. Energy Proc. (2019) 13. Han, S., Huang, L., et al.: Mixed chaotic FOA with GRNN to construction of a mutual fund forecasting model. Cogn. Syst. Res. (2018) 14. Specht, D.F., et al.: A general regression neural network. IEEE Press (1991) 15. Xue, J., Shen, B.: A novel swarm intelligence optimization approach: sparrow search algorithm. Syst. Sci. Control Eng. (2020) 16. Bendu, H., Deepak, B.B.V.L., Murugan, S.: Multi-objective optimization of ethanol fuelled, HCCI engine performance using hybrid GRNN–PSO. Appl. Energy (2017)

Resident Area Determination Model Based on Big Data and Large Scale Matrix Calculation
Chuntao Song1, Yong Wang2, Xinzhou Cheng1, and Lexi Xu1
1 China Unicom Network Technology Research Institute, Beijing, People’s Republic of China
{songct2,chenxz11,xulx29}@chinaunicom.cn
2 China United Network Communications Group Corporation, Beijing, People’s Republic of China
[email protected]

Abstract. The user’s resident area is very important for mobile network planning and customer maintenance, but traditional methods, which are usually based on traffic calculation without considering neighbor cells or the user’s spatio-temporal trace, are no longer sufficient. In this paper we present a new method based on wide data sources, including traffic usage records and signaling records, combined with big data processing and matrix calculation. In contrast to traditional methods, the combined traffic usage and signaling records describe the user’s trace in detail, and the determined resident area can be one cell or a cluster of cells. Experiments show that the method yields effective and accurate results. The method performs cell-level location determination and strikes a balance among calculation complexity, resource cost and real-time performance. A big data calculation module based on it is under development in one operator’s data middle platform. Keywords: Resident area · Matrix calculation · Big data · Data fusion · Spatio-temporal trace

1 Introduction In the work of mobile network planning and customer maintenance, mobile operators need to know the user’s resident area, but current approaches are neither accurate nor efficient enough. Resident area determination has been a typical task for operators for years; with the development of network technology and the blooming of traffic, the traditional methods, which are usually based on traffic calculation without considering neighbor cells, are no longer sufficient [1, 2]. There are two typical traditional methods. The first determines the cell with the maximum traffic throughput of one user as his/her resident area. It can only return one cell rather than an area or a region, and it cannot recognize the heavy-traffic area when the user is in a WLAN scenario. The second method sketches the trace from accurate and massive GPS location data, but it cannot recognize the serving mobile network cell of each location, and it incurs massive data volume, processing resource and time consumption [5, 6].


For these reasons, we need to build a new method to recognize the user’s resident area with higher accuracy, lower cost and complexity, and simpler implementation. In this paper, we introduce broader data sources containing user cell information, minimal data processing, and matrix theory to construct a resident area self-recognition method with lower resource cost, higher computational efficiency and easier implementation.

2 Trace in Cell Level Modelling The time-varying wireless channel and the unpredictable movement of the mobile terminal, as well as switching between the mobile network and WLAN, result in an irregular spatio-temporal trace for each user. A sketch of the time and space trace with cell connections for one user is shown in Fig. 1.


Fig. 1. A user’s Spatio-temporal trace based on time and corresponding network cells.

The user’s moving trace consists of the time points and the related mobile cells, which is the basic data used by the method in this paper. There are four steps to calculate the user’s resident area, as shown in Fig. 2. First, we collect as much of the user’s time and space information as possible; then we fuse and sort all the time and space data in time order. Because of data quality and information redundancy, the data needs to be cleansed and compressed. After that, we construct a model for a rough determination of the resident area, and an accurate determination finally outputs the result [4].
[Fig. 2 flowchart: Spatio-temporal Data Collection → Time Sorting by User Partition → Rough Estimation for Resident Area → Resident Area Self-Recognition]

Fig. 2. The resident area determination diagram of processing steps.


3 Data Collection and Trace Construction For the acquisition of the users’ spatio-temporal information, there are no restrictions on the data source, data type, platform, system or interface. All data that at least includes a timestamp, the user’s ID and the mobile network Cell ID can be used by the model. The proposed data list includes, but is not limited to, voice call detail records, data traffic detail records, signaling records, measurement reports and call traces [1]. Voice and data traffic records depend on the user’s traffic activity, but signaling records, as well as MRs (Measurement Reports) and call traces [7, 8], can be collected all the time, even when the user is in idle state. All the multi-source heterogeneous data with time, user ID and Cell ID is fused for the next process [9]. After that, we obtain a data set containing all the user IDs, times and Cell IDs, and we sort the data by time order and partition it by user, as shown in Table 1.

Table 1. The whole spatio-temporal data list by order of time for user_x.
User ID   Time    Cell ID
User_x    t1      Cell1
User_x    t2      Cell1
User_x    t3      Cell1
User_x    …       …
User_x    ti−1    Cell1
User_x    ti      Cell2
User_x    ti+1    Cell2
User_x    …       …

The introduction of multi-source heterogeneous data results in a richer and more accurate user trace, but one user may have more than one record with the same Cell ID in a consecutive time interval. The data therefore needs to be compressed further to reduce redundancy. Between two different Cell IDs, all records with the same Cell ID are deleted except the first one, until the Cell ID changes. The deletion is shown in Table 2. The compressed data, which retains the cell-change information, is sufficient to portray the user’s spatio-temporal trace.


Table 2. The reduced spatio-temporal data list by order of time for user_x.
User ID   Time    Cell ID   Abandon or not
User_x    t1      Cell1     Reserve
User_x    t2      Cell1     Abandon
User_x    t3      Cell1     Abandon
User_x    …       …         Abandon
User_x    ti−1    Cell1     Abandon
User_x    ti      Cell2     Reserve
User_x    ti+1    Cell2     Abandon
User_x    …       …         …
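A minimal sketch of the compression rule illustrated in Table 2 (only the first record of each consecutive run of the same Cell ID is reserved); the record format is illustrative.

```python
def compress_trace(records):
    """Keep only the first record of each consecutive run of the same Cell ID.

    `records` is a list of (user_id, time, cell_id) tuples already sorted by time.
    """
    kept = []
    last_cell = None
    for user_id, t, cell_id in records:
        if cell_id != last_cell:        # cell changed -> reserve this record
            kept.append((user_id, t, cell_id))
            last_cell = cell_id
        # otherwise the record is abandoned (same cell as the previous one)
    return kept

trace = [("user_x", "t1", "Cell1"), ("user_x", "t2", "Cell1"),
         ("user_x", "t3", "Cell1"), ("user_x", "t4", "Cell2"),
         ("user_x", "t5", "Cell2")]
print(compress_trace(trace))   # [('user_x', 't1', 'Cell1'), ('user_x', 't4', 'Cell2')]
```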

4 Resident Area Determination with Matrix
4.1 Rough Estimation of Resident Area

The location information, presented as longitude and latitude, can be used to calculate the distance between the cell pair in every change, and the time interval is calculated as well. The reduced spatio-temporal data list with cell location and time interval is shown in Table 3.

Table 3. The reduced spatio-temporal data list with cell location and time interval.
User ID   Time    Cell ID    Longitude   Latitude   Time interval   Cell distance
User_x    t1      Cell1      Lon_1       Lat_1      –               –
User_x    t2      Cell2      Lon_2       Lat_2      Δt2,1           Δd2,1
User_x    t3      Cell3      Lon_3       Lat_3      Δt3,2           Δd3,2
User_x    …       …          …           …          …               …
User_x    ti−1    Cellk−1    Lon_k−1     Lat_k−1    Δti−1,i−2       Δdk−1,k−2
User_x    ti      Cellk      Lon_k       Lat_k      Δti,i−1         Δdk,k−1
User_x    ti+1    Cellk+1    Lon_k+1     Lat_k+1    Δti+1,i         Δdk+1,k
User_x    …       …          …           …          …               …

The spatio-temporal trace of one user can be presented as follows. User’s time points: ti, i ∈ [1, N]. User’s related cell at ti: Cellk, k ∈ [1, M], M ≤ N. Time interval of one cell change: Δti,i−1. Distance between the two cells of one change: Δdk,k−1. The determination of the resident area is done in two steps: the first step is a rough judgment of the resident area, and the second step makes an accurate decision. The rough judgment aims to preliminarily determine a large-scale set of cells where the user stays or passes.


Table 4. The rough determination based on time interval and cell distance.
User ID   Time    Cell ID    Time interval   Cell distance   Rough determination
User_x    t1      Cell1      –               –               Region 1
User_x    t2      Cell2      Δt2,1           Δd2,1
User_x    t3      Cell3      Δt3,2           Δd3,2
User_x    …       …          …               …
User_x    ti−1    Cellk−1    Δti−1,i−2       Δdk−1,k−2
User_x    ti      Cellk      Δti,i−1         Δdk,k−1         Region 2
User_x    ti+1    Cellk+1    Δti+1,i         Δdk+1,k
User_x    …       …          …               …

The criteria used for the rough judgment are based on the time interval and the cell distance, and the condition is expressed as (1):

\{\,\Delta t_{i,i-1} > T \;\; \text{or} \;\; \Delta d_{k,k-1} > D\,\}    (1)

At the time of a cell change ti, when the time interval is greater than T or the distance between the changed cells is greater than D, the cell after the change is considered the start of a new region; otherwise the cell still belongs to the previous region. That is, when a user stays in one cell for longer than T, or the moving distance exceeds D, the user has left the old region and moved into a new region. T and D can be set and adjusted according to the scenario on demand, as shown in Table 4. The rough determination of the resident area ensures the continuity of time and space within one rough region for a user, and ensures the isolation between two regions.
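A small sketch of the rough-region rule in (1): a new region starts whenever the dwell time exceeds T or the moved distance exceeds D; the field names and threshold values are illustrative.

```python
def rough_regions(rows, T, D):
    """Assign a rough region id to each cell-change record.

    `rows` is a list of dicts with keys 'dt' (time interval, hours) and
    'dd' (distance to the previous cell, km); the first row has dt = dd = None.
    """
    region = 1
    labels = []
    for i, r in enumerate(rows):
        if i > 0 and (r["dt"] > T or r["dd"] > D):   # condition (1): open a new region
            region += 1
        labels.append(region)
    return labels

rows = [{"dt": None, "dd": None}, {"dt": 0.2, "dd": 1.1},
        {"dt": 6.0, "dd": 0.8},   {"dt": 0.1, "dd": 0.5}]
print(rough_regions(rows, T=5, D=5))   # [1, 1, 2, 2]
```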

4.2 Resident Area Self-recognition

The rough determination of the resident area represents the probable regions of one user, but the user’s movement and wireless-environment handovers partly affect the judgment and produce some pseudo resident cells. Therefore, a further calculation is used to eliminate these pseudo cells within each rough region and keep the real resident cells as the resident area. For one rough region of one user, we can build an N-order determinant over all the cells in this region, filled with the transition counts between different cells [3]. The matrix is shown in (2):

A = \begin{vmatrix} 0 & a_{12} & \cdots & a_{1N} \\ a_{21} & 0 & \cdots & a_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ a_{N1} & a_{N2} & \cdots & 0 \end{vmatrix}    (2)

a_{jk} is the number of transitions Cell_j → Cell_k, as expressed in (3). There is no self-transition for one cell, so the diagonal value is set to 0, that is, a_{jk} = 0 when j = k.


a_{jk} = \sum \big( \text{IF exists } Cell_j \rightarrow Cell_k \text{ THEN } 1 \text{ ELSE } 0 \big)    (3)

In this determinant, the transition direction is not important for this paper (it is used in other work). A threshold R is set for the transition count without considering direction: if the transition count of a cell pair is larger than R, the pair is marked as resident cells; otherwise it is not. In this way, every cell is marked as a resident cell or not. In this method, three thresholds (the transition time interval T, the transition distance D, and the transition count R) are set according to the requirements and scenarios of different front-end and back-end departments.
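As an illustration, the following sketch builds the transition-count matrix of (2)/(3) for the cells of one rough region and marks resident cells with the threshold R; the cell names and the threshold value are illustrative.

```python
import numpy as np

def resident_cells(cell_sequence, R):
    """Count transitions between consecutive cells and mark resident cells.

    A cell pair is marked as resident when its transition count (ignoring
    direction) exceeds the threshold R; returns the set of resident cells.
    """
    cells = sorted(set(cell_sequence))
    idx = {c: i for i, c in enumerate(cells)}
    A = np.zeros((len(cells), len(cells)), dtype=int)
    for prev, curr in zip(cell_sequence, cell_sequence[1:]):
        if prev != curr:                      # no self-transition, a_jk = 0 for j = k
            A[idx[prev], idx[curr]] += 1
    resident = set()
    for j in range(len(cells)):
        for k in range(j + 1, len(cells)):
            if A[j, k] + A[k, j] > R:         # direction is ignored for the threshold
                resident.update((cells[j], cells[k]))
    return resident

seq = ["Cell_1", "Cell_2", "Cell_1", "Cell_2", "Cell_1", "Cell_2",
       "Cell_3", "Cell_1", "Cell_2", "Cell_1", "Cell_2", "Cell_1"]
print(resident_cells(seq, R=5))   # {'Cell_1', 'Cell_2'}
```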

4.3 Calculation and Verification

The calculation result for one user’s resident area, which contains a set of mobile network cells, is shown in Fig. 3 below.

Fig. 3. A fragment of one user’s resident cells calculation result under different conditions.

In this result, region_id_5,5,5 is a resident cell set obtained with the thresholds of time interval T = 5 h, transition distance D = 5 km, and transition count R = 5. The label region_id_5,5,5 is used for the rough determination, while the label range_id-5,5,5 is the final determination of the resident area. When the transition-count condition is met, the cell is labeled with a number; otherwise it is labeled as Not.


Comparing the model determination results with manual verification, the model shows high effectiveness at the cell level for recognizing the user’s accurate trace and resident area. After an experiment with more than twenty volunteer users, the method proved general and easy to implement, and it has now been promoted and applied in departments such as marketing and network operation. With this method, we calculated seven rough regions for one volunteer user’s trace. This user passed 58 cells in one day with 214 trace records, but only six cells were determined as his resident area cells after calculation, as shown in Fig. 4.

Fig. 4. The roughly determined resident regions for one test user.

For Region_1_5,5,5, there are more than 8 cells, but only Cell_1 and Cell_2 are determined as resident cells for this user, who stayed here for at least 4 h and 3 min; the two cells are neighbors next to each other, as shown in Fig. 5. Under this method, this user’s resident cells are Cell_1 and Cell_2 in region_1_5,5,5, Cell_40, Cell_42 and Cell_43 in region_4_5,5,5, and Cell_58 in region_7_5,5,5. This determination basically matches the user’s real spatio-temporal situation.


Fig. 5. The detailed cells in region 1 for the test user, in which cell_1 and cell_2 are resident cells.

For the model verification, we recruited more than fifty volunteers in three different cities, and the model achieved good performance for all the users. Acknowledgement. The user’s resident area recognition is very important in mobile network operation, marketing and customer service. The resident area should not be only one cell, nor should it be determined only by the data traffic or voice duration. The method in this paper presents a brand new solution that considers the user trace, stay time interval and moved distance, as well as the computing resource cost and complexity. This method is based on a wide range of data such as signaling records, voice call records, data traffic records, and all other data containing the user’s time and cell information. The method performs cell-level location determination and finds a balance between the calculation complexity, resource cost and real-time performance.

References 1. Xu, L., et al.: Architecture and technology of multi-source heterogeneous data system for telecom operator. In: Wang, Y., Xu, L., Yan, Y., Zou, J. (eds.) Signal and Information Processing, Networking and Computers. LNEE, vol. 677, pp. 1000–1009. Springer, Singapore (2021). https://doi.org/10.1007/978-981-33-4102-9_120 2. Zhu, C., Cheng, X., et al.: A novel base station analysis scheme based on telecom big data. In: 20th International Conference on High Performance Computing and Communications, pp. 1076–1081. IEEE Press, Exeter (2018) 3. Fang, B., Zhou, J., Li, Y., et al.: Matrix Thoery, 1st edn., pp. 28–120. Tsinghua University Press, Beijing (2004) 4. Zhang, T., et al.: A novel LTE network deployment scheme using telecom big data. In: 1st International Conference on Signal and Information Processing, Networking and Computers, pp. 261–270. CRC Press Taylor & Francis Group, Beijing (2015) 5. Xu, L., et al.: Data mining and evaluation for Base Station deployment. In: Proceedings of the ICSINC, Chongqing, China, pp. 356–364, September 2017


6. Xu, L., et al.: Data mining for base station evaluation in LTE cellular systems. In: Sun, S., Chen, N., Tian, T. (eds.) ICSINC 2017. LNEE, vol. 473, pp. 356–364. Springer, Singapore (2018). https://doi.org/10.1007/978-981-10-7521-6_43 7. TDD/FDD LTE Digital Cell Mobile Communications Network OMC-R Measurement Report Technical Specification, pp. 7–15. China Unicom, Beijing (2016) 8. 3GPP TS 36.214(2021-03), Evolved Universal Terrestrial Radio Access (E-UTRA) Physical layer Measurements (Release 16). www.3gpp.org 9. Chao, K., et al.: A novel big data based telecom user value evaluation method. In: 1st International Conference on Signal and Information Processing, Networking and Computers, CRC Press Taylor & Francis Group, Beijing (2015)

Research of New Energy Vehicles User Portrait and Vehicle-Owner Matching Model Based on Multi-source Big Data
Yuting Zheng, Tao Zhang, Guanghai Liu, Yi Li, Chen Cheng, Lexi Xu, Xinzhou Cheng, and Xiaodong Cao
Research Institute, China United Network Communications Corporation, Beijing 100048, China
[email protected]

Abstract. Due to the national policy of energy conservation and emission reduction, the new energy vehicular industry is experiencing fast development in China, and the sales of new energy vehicles have grown rapidly in the past two years. China Unicom provides new energy vehicles with the full gamut of network services and has collected a large volume of network interaction data with potential value. By constructing a model system for mining this potential information value, this paper proposes innovative methods for dissecting the characteristics of new energy vehicle consumers and building user portraits, as well as matching the relationship between car owners and vehicles based on multi-source big data. This will effectively assist the precision marketing of the new energy vehicular industry and the landing of innovative cooperation scenarios between China Unicom and the new energy vehicular industry. Keywords: Multi-source big data · New energy vehicles · User portraits · Vehicle-owner matching

1 Introduction In the past decades, information technology has undergone fast development [1–3]. Telecom operators have built 4G LTE networks and 5G networks worldwide in order to provide services to users and companies [4–6]. Meanwhile, both vehicular technology and the vehicular industry face the challenge of environmental protection [7, 8]. More specifically, due to the severe global energy shortage and environmental pollution, the traditional vehicular industry is accelerating its development towards new energy vehicles [9]. Meanwhile, due to the national policy of energy conservation and emission reduction, the new energy vehicular industry is experiencing fast development in China, and the sales of new energy vehicles have grown rapidly in the past two years. China Unicom provides new energy vehicles with the full gamut of network services (e.g., navigation, remote monitoring, multimedia, Internet browsing, and communications) and has collected a large volume of network interaction data with potential value. Based on the multi-source big data of X Detail Records (XDR), the cell configuration table and the traffic-service table, this paper conducts an in-depth research and analysis of new energy vehicle user portraits and vehicle-owner matching models.


To verify the model and algorithm, this paper takes the data of a well-known new energy car company as an example and identifies its car owners in Beijing through Deep Packet Inspection (DPI), the HOST and URI accessed by the owner, and the recognition model algorithm.

2 Research of User Profile Based on Multi-source Big Data
2.1 Network Behavior Preference

As people rely more and more on the Internet, network behavior data has become one of the most valuable parts of big data. Network behavior preference, that is, the preference for using certain types of mobile phone application services, reflects the daily needs and behavior habits of users. This article highlights the network behavior preferences of new energy vehicle owners through a comparative analysis of the network behavior data of new energy vehicle owners and of users across the entire network. The service type code reported in the XDR can be associated with the corresponding APPs and services; by statistically comparing the numbers of new energy vehicle owners and of entire-network users using each APP and service type, and by designing and computing a suitable algorithm, a numerical value reflecting the owners’ preference for each APP and service type can be obtained. The details of the algorithm design and calculation are as follows. This paper assumes that the total number of users of the entire network is denoted as A, the total number of car owners is denoted as T, and the numbers of effective users of the entire network and of car owners are AU and TU, respectively (users excluding mobile advertising push and other non-active behaviors are counted as effective users). The number of service types is n, and the numbers of effective users of each service type for the entire network and for car owners are AU_f and TU_f, as expressed in (1) and (2) respectively. The number of APPs is m, and the numbers of effective users of each APP for the entire network and for car owners are AU_p and TU_p, as expressed in (3) and (4) respectively.

AU_f = (AU_{f1}, AU_{f2}, \ldots, AU_{fn})    (1)

TU_f = (TU_{f1}, TU_{f2}, \ldots, TU_{fn})    (2)

AU_p = (AU_{p1}, AU_{p2}, \ldots, AU_{pm})    (3)

TU_p = (TU_{p1}, TU_{p2}, \ldots, TU_{pm})    (4)

R_{fi} = (TU_{fi} / T) \times (A / AU_{fi})    (5)

After repeated scrutiny of the algorithm, as well as multiple tests and verifications on the data, this paper uses (5) to analyze the service type preferences of new energy vehicle car owners.


More specifically, R_{fi} represents the preference value of car owners, compared to users of the entire network, for the i-th service type. The larger the preference value, the more obvious the owners’ preference for that service type. This algorithm comprehensively reflects the preference of car owners relative to users of the entire network. Taking the data of the above-mentioned well-known new energy car company as an example and calculating its preference value R_{fi}, the result shows that the top service types include group buying, business office, car-hailing and vehicle services, cloud disk, traveling, takeout and restaurant review, etc. (see Fig. 1).

Fig. 1. Top 25 service types in the list of preference value Rfi

Fig. 2. Top 25 APPs in the list of preference value Rpi

R_{pi} = (TU_{pi} / T) \times (A / AU_{pi})    (6)

We employ (6) to analyze the APP preferences of new energy vehicle car owners. TU_{pi}/T reflects the preference of car owners for the i-th APP: the higher the percentage of car owners who use the i-th APP among all car owners, the greater TU_{pi}/T. A/AU_{pi} reflects the preference of entire-network users for the i-th APP: the fewer entire-network users who use the i-th APP, the greater A/AU_{pi}. Therefore, R_{pi} represents the preference value of car owners, compared to entire-network users, for the i-th APP, and the larger the preference value, the more obvious the owners’ preference for the APP. When the proportion of car owners who use the i-th APP is high but the proportion of entire-network users who use it is low, car owners have a more obvious preference for this APP than the entire network, and R_{pi} is large. However, when both proportions are high, the general usage rate of the APP is high but it does not reflect a specific preference of car owners; in this case R_{pi} is reduced by the A/AU_{pi} term. The APPs with outstanding performance in the calculation results for the well-known new energy car company’s owners include Dianping, Meituan, Sina Weibo, Didi Chuxing, QQ Mail, China Merchants Bank, Tmall, Baidu Netdisk, Dingding, Ctrip and QQ, as shown in Fig. 2. The results show that new energy vehicle car owners tend to have a higher preference for service types and APPs that are popular with young people.
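A small sketch of the preference value defined in (5) and (6); the counts in the usage example are illustrative, not measured values.

```python
def preference(owner_users, owner_total, network_users, network_total):
    """Preference value of car owners for one service type or APP (Eqs. (5)/(6)).

    owner_users / owner_total:     effective owner users of the item and total owners (T)
    network_users / network_total: effective network users of the item and total users (A)
    """
    return (owner_users / owner_total) * (network_total / network_users)

# illustrative counts: 30k of 100k owners vs. 5M of 300M network users use the item
print(round(preference(30_000, 100_000, 5_000_000, 300_000_000), 1))   # 18.0
```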


2.2 Distribution of Living and Working Areas

The identification and positioning of living and working areas helps to analyze the user’s quality of living and occupational attributes. Based on the association and integration of XDR and cell configuration data, the start and end time of each service of a new energy vehicle owner in different dwelling cells can be summarized, and the staying time of the owner in each dwelling cell can be sorted. By dividing the day into time periods, the cells in which the owner stays the longest during the day and at night can be found, and the specific geographic location of the user can then be located. In the statistical analysis of this paper, the cell with the longest staying time from 9 AM to 5 PM is set as the daytime dwelling cell, and the cell with the longest staying time from 11 PM to 5 AM is set as the night dwelling cell. Through the identification of the daytime dwelling cell and its geographic location, the working place of the car owner can be inferred; similarly, the living area can be inferred from the night dwelling cell.

This paper takes the data of the well-known new energy car company to verify the model of identifying working and living areas and analyzes its results. According to the heat map of Fig. 3, the working places of this company’s car owners are mainly concentrated in the office areas of large financial enterprises and Internet companies, such as China Beijing World Towers, Wangjing, Financial Street, Zhongguancun, Xibei Wang SOHO, Yizhuang Cultural Park, etc. Among the top 25 specific working places ranked by the number of car owners, most are the offices of large Internet companies (e.g., Baidu, Tencent and Xiaomi) and business centers (e.g., Financial Street, Wangjing SOHO, Jinhui Building). Most of the living areas inferred from the night dwelling cells and their geographic locations diverge from the periphery of the daytime working places. In addition, there is basically no obvious gathering within the Beijing Fourth Ring Road; most of the gathering areas are outside the Beijing Fifth Ring Road, as shown in Fig. 4. Among the top 25 specific living places ranked by the number of car owners, there is no lack of new high-quality residential communities. According to Lianjia data from November 2020, the average house price of these communities is about 81,287.2 RMB per square meter, which is 1.3 times the average house price of 61,304 RMB per square meter in Beijing during the same period.
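The day/night dwelling-cell rule described above can be sketched as follows; the record format, the fractional-hour representation and the sample stays are assumptions for the example.

```python
from collections import defaultdict

def dwelling_cell(records, start_hour, end_hour):
    """Cell with the longest accumulated stay inside [start_hour, end_hour).

    `records` is a list of (cell_id, begin_hour, end_hour) stays using fractional
    hours of the day; a window crossing midnight (e.g. 23 -> 5) is handled by wrap-around.
    """
    stay = defaultdict(float)
    for cell, b, e in records:
        if start_hour < end_hour:
            overlap = max(0.0, min(e, end_hour) - max(b, start_hour))
        else:  # window wraps midnight: [start_hour, 24) plus [0, end_hour)
            overlap = (max(0.0, min(e, 24.0) - max(b, start_hour))
                       + max(0.0, min(e, end_hour) - max(b, 0.0)))
        stay[cell] += overlap
    return max(stay, key=stay.get) if stay else None

stays = [("cell_A", 9.5, 12.0), ("cell_B", 13.0, 18.0),
         ("cell_C", 23.5, 24.0), ("cell_C", 0.0, 5.0)]
print(dwelling_cell(stays, 9, 17))   # daytime dwelling cell -> cell_B
print(dwelling_cell(stays, 23, 5))   # night dwelling cell   -> cell_C
```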

Fig. 3. Locations of daytime dwelling cells

Fig. 4. Locations of night dwelling cells


Based on the above analysis of the working and living areas of this company’s consumers, it can be concluded that the working places of car owners contain many office areas of high-paid industries such as Internet and finance companies, and that the living areas of car owners are concentrated around the periphery of the working areas, tend to gather outside the Beijing Fifth Ring Road, and are of relatively high housing quality.

2.3 Geographical Distribution of Charging Stations

The convenience of charging is one of the aspects new energy vehicle owners care about most, so the geographical layout of charging stations is also a popular research topic. The geographical distribution analysis of charging stations can infer the car owners’ resident areas, which relates to the living and working areas discussed in Sect. 2.2, through the concentration trend of the charging stations. Further, through statistical analysis of the distance between the owners’ living/working areas and the nearest charging station, the rationality of the geographical locations of the charging stations can be evaluated. New energy vehicle owners can query the location of charging stations and the number of available charging stations through the vehicle navigation system, so it can be concluded that the charging stations are also equipped with IoT cards to transmit their status information. Since we do not have an attribute identification of the IoT card that indicates whether the card is used in a vehicle or in a charging station, this paper designs an algorithm model to identify the type of IoT card. Compared with vehicles, the most obvious feature of charging stations is that they are geographically fixed; therefore, the model recognizes a card whose number of dwelling cells is less than a threshold as a charging station, and verifies the result with multi-day data. Taking the data of the above-mentioned well-known new energy car company as an example, a total of 58 charging stations in Beijing were identified through the algorithmic calculations. The result shows that the number of charging stations in the eastern and northern parts of Beijing is significantly larger than in the western and southern parts, which shows a certain correlation with Beijing’s regional development characteristics: the eastern part is represented by the CBD business district, and the northern part by the headquarters of many Internet companies and science and technology parks. If Tiananmen Square is taken as the center point and Beijing is divided into four quadrants, the number of charging stations in each area is as shown in Table 1. It can also be seen from Table 1 that the proportional distribution of charging stations shows a certain correlation with the geographical distribution of the owners’ resident areas.

Table 1. Number of charging stations in each region.
Region         Number of charging stations   Proportion
Northeastern   20                            34.5%
Southeastern   17                            29.3%
Northwestern   14                            24.1%
Southwestern   7                             12.1%
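A minimal sketch of the IoT-card classification rule described above, treating a card whose number of distinct dwelling cells stays below a threshold on every observed day as a charging station; the card IDs and the threshold are illustrative.

```python
def classify_iot_cards(daily_cells, max_cells=2):
    """Label an IoT card as a charging station when it is geographically fixed.

    `daily_cells` maps card_id -> list of per-day sets of observed cell IDs; the
    card is a charging station if every day it appears in at most `max_cells` cells.
    """
    labels = {}
    for card, days in daily_cells.items():
        fixed = all(len(cells) <= max_cells for cells in days)
        labels[card] = "charging_station" if fixed else "vehicle"
    return labels

observations = {
    "card_A": [{"c1"}, {"c1"}, {"c1", "c2"}],             # always in the same place
    "card_B": [{"c3", "c7", "c9"}, {"c2", "c5", "c8"}],    # moves across many cells
}
print(classify_iot_cards(observations))   # {'card_A': 'charging_station', 'card_B': 'vehicle'}
```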


In order to evaluate the rationality of the charging station locations, this paper conducts a statistical analysis of the distance between the car owners’ working/living areas, which were analyzed in Sect. 2.2, and the nearest charging station. The statistical results are shown in Fig. 5.

Fig. 5. Distance distribution between the owner’s working/living areas and the nearest charging station.

From the perspective of distance distribution, it is generally more convenient for car owners to charge near their working places than near their living areas. The proportion of owners who can find a charging station within 2 km of their working place is more than 50%, while near living areas it is about 37%; the proportion of owners who can find a charging station within 5 km of their working place is about 77%, while near living areas it is about 70%.

3 Research of Vehicle-Owner Matching Model Many application scenarios of new energy vehicles rely on the owner's control and information interaction through the vehicle APP system. The traces of this interactive information create the conditions for associating and matching owners and vehicles. For telecom operators, the IoT cards on new energy vehicles and the mobile cards of the vehicle owners belong to different business categories (the IoT service and the public service, respectively), so it is impossible to associate them internally through static attribute tags. This paper proposes an innovative vehicle-owner matching algorithm, which connects the vehicle-owner relationship through the coincidence of trajectories. The vehicle-owner matching model essentially compares and matches vehicles and owners by evaluating the similarity between the owners’ and the vehicles’ trajectory time series. In the analysis process, a large number of vehicle trajectory time series are compared with owner trajectory time series. Taking the owner’s trajectory time series as the matching reference, the cumulative Euclidean distance between the trajectory points of the vehicle’s time series and the owner’s time series is calculated as the trajectory matching distance. Several vehicles with the smallest trajectory matching distances are regarded as a suspected matching set. In order to eliminate interfering vehicles, it is necessary to observe and analyze the above model output day by day.


Based on the summary of the multi-day model outputs, the number of occurrences of suspected matching vehicles and the average trajectory matching distance are counted, and the one-to-one matching relationship between vehicles and owners is finally analyzed. This paper assumes that the time series of the owner and the vehicle are Trace_u and Trace_v, as expressed in (7) and (8), with trajectory point lengths n and m respectively; u_1, u_2, …, u_n are the trajectory points of the car owner’s time series, and v_1, v_2, …, v_m are the trajectory points of the vehicle’s time series.

Trace_u = (u_1, u_2, \ldots, u_n)    (7)

Trace_v = (v_1, v_2, \ldots, v_m)    (8)

Each trajectory point u_i or v_i contains four types of information and can be expressed as a four-tuple (t_start, t_end, x, y), where t_start is the start time, t_end is the end time, x represents longitude, and y represents latitude. Table 2 shows an example of the trajectory time series.

Table 2. An example of the trajectory time series.
MSISDN        tstart    tend      x           y
186********   7:26:49   7:26:56   116.316     39.9315
186********   7:27:20   7:27:32   116.31665   39.93064
186********   7:29:36   7:29:37   116.32431   39.93778
186********   7:31:14   7:31:27   116.33132   39.9373

d_i(u_i, v_j) = \left\| u_i - v_j \right\|^2    (9)

d(u_i, v_j) = \mathrm{MIN} \sum_{i=1}^{n} \sum_{j=1}^{m} \left( \left\| u_i - v_j \right\|^2 \right)^{1/2}    (10)

The duration is calculated based on t_start and t_end, and the trajectory points in the owner’s time series with a duration of less than 5 min are marked as moving-state points; these moving-state points are used as the benchmark for matching. For each u_i, find the v_j that is closest in time in the vehicle’s time series. If the time difference is within 1 min, calculate the Euclidean distance d_i(u_i, v_j) between the two points according to (9); if there is no matching trajectory point within 1 min, no distance is calculated. Finally, calculate the cumulative Euclidean distance d(u_i, v_j) over all matching trajectory points in the time series according to (10), which is taken as the trajectory matching distance.
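A short sketch of the trajectory matching distance of (9) and (10) with the 1-minute pairing rule described above; the point format (minutes of the day, longitude, latitude) and the sample coordinates are illustrative.

```python
import numpy as np

def matching_distance(owner_pts, vehicle_pts, max_dt_min=1.0):
    """Cumulative Euclidean distance between time-matched trajectory points.

    Each point is (time_in_minutes, longitude, latitude). For every owner point,
    the vehicle point closest in time is paired if their time difference is within
    `max_dt_min`; unmatched points contribute nothing.
    """
    total = 0.0
    v_times = np.array([p[0] for p in vehicle_pts])
    for t, x, y in owner_pts:
        j = int(np.argmin(np.abs(v_times - t)))            # closest vehicle point in time
        if abs(v_times[j] - t) <= max_dt_min:
            vx, vy = vehicle_pts[j][1], vehicle_pts[j][2]
            total += np.sqrt((x - vx) ** 2 + (y - vy) ** 2)  # point distance of (9), accumulated as in (10)
        # else: no matching point within 1 minute, skip
    return total

owner = [(446.8, 116.3160, 39.9315), (449.6, 116.3243, 39.9378)]
vehicle = [(446.9, 116.3166, 39.9306), (449.5, 116.3245, 39.9377), (455.0, 116.3313, 39.9373)]
print(round(matching_distance(owner, vehicle), 5))
```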


Table 3. An example of the output summary.
MSISDN (vehicle)   Trajectory matching distance set       Number of matches   Average trajectory matching distance
186********        [1.14, 1.37, 0.57, 1.09, 0.25, 0.36]   6                   0.80
186********        [7.64, 4.27, 5.85, 5.32, 6.68]         5                   5.95
186********        [1.89]                                 1                   1.89
186********        [2.89, 3.74, 5.62]                     3                   4.08

Based on the calculation result of the cumulative Euclidean distance, several vehicles with the smallest trajectory matching distance are identified as the suspected matching set. In order to improve the accuracy of the vehicle-owner matching results and eliminate vehicles with interference, the above-mentioned analysis process will be calculated and analyzed for multiple days. An example of the output summary is exemplified in Table 3. This example takes the model outputs for 7 days, and the number of matches is the number of times that the vehicle appeared in the suspected matching set within 7 days, and the trajectory matching distance set is the set of the matching distances of each trajectory of the suspected matching vehicle when it appears in the suspected matching set. Finally, a one-to-one matching relationship between the owner and vehicle can be identified based on the results of statistical analysis.

4 Conclusion This paper employs the integration of network multi-source big data to mine the potential information value and builds a mining system. On the one hand, this paper constructs user portraits by analyzing the characteristics of consumers, and accurately matches potential consumers of new energy vehicles according to the common tags, so as to achieve precise marketing and targeted marketing. On the other hand, it realizes connection of the vehicle-owner relationship by comparing the coincidences of trajectories and completes the binding of owners and vehicles. And it will effectively help the precision marketing of the new energy vehicular industry and the landing of innovative cooperation scenarios between China Unicom and the new energy vehicular industry.

References

1. Xu, Y., Yin, F., et al.: Wireless traffic prediction with scalable Gaussian process: framework, algorithms, and verification. IEEE J. Sel. Areas Commun. 37(6), 1291–1306 (2019)
2. Yin, F., et al.: FedLoc: federated learning framework for data-driven cooperative localization and location data processing. IEEE Open J. Sign. Process. 1, 187–215 (2020)


3. Xu, L., Chen, Y., et al.: Self-organising cluster-based cooperative load balancing in OFDMA cellular networks. Wiley Wirel. Commun. Mob. Comput. 15(7), 1171–1187 (2015) 4. Xu, L., Cao, Y., et al.: Research on telecom big data platform of LTE/5G mobile networks. In: 18th IEEE International Conferences on Ubiquitous Computing and Communications, pp. 756–761. IEEE press, Shenyang (2019) 5. Yin, F., Pan, L., Theodoridis, S., et al.: Linear multiple low-rank kernel based stationary gaussian processes regression for time series. IEEE Trans. Sign. Process. 68(1), 5260–5275 (2020) 6. Xu, L., Luan, Y., Cheng, X., et al.: Telecom big data based user offloading self-optimisation in heterogeneous relay cellular systems. Int. J. Distrib. Sys. Technol. 8(2), 27–46 (2017) 7. Eiza, M., Cao, Y., et al.: Toward Sustainable and Economic Smart Mobility: Shaping the Future of Smart Cities, 1st edn. World Scientific, London (2020) 8. Zhang, H., Xu, L., Cheng, X., Chen, W., Zhao, X.: Big data research on driving behavior model and auto insurance pricing factors based on UBI. In: Sun, S., Chen, N., Tian, T. (eds.) ICSINC 2017. LNEE, vol. 473, pp. 404–411. Springer, Singapore (2018). https://doi.org/10. 1007/978-981-10-7521-6_49 9. Zhang, X., Cao, Y., et al.: Towards efficient battery swapping service operation under battery heterogeneity. IEEE Trans. Veh. Technol. 69(6), 6107–6118 (2020)

Cell Expansion Priority Recommendation Based on Prophet Algorithm

Xiaomeng Zhu(&), Zijing Yang, Guanghai Liu, Yi Li, Lexi Xu, Xinzhou Cheng, and Runsha Dong

China Unicom Research Institute, Beijing 100048, China
[email protected]

Abstract. In recent years, both LTE and 5G have experienced fast development and construction worldwide. Under explosive business demand and massive mobile terminal connections, guaranteeing network load is of vital importance. This paper proposes a recommendation model for cell expansion priority. Initially, it uses the Prophet algorithm to predict cell traffic trends based on user behavior characteristics in different scenarios. Then it defines expansion warning thresholds according to different expansion types. Finally, combining cell traffic trends and expansion thresholds, a recommendation model is established. The proposed model can assist network optimizers in more accurately grasping the current network capacity situation and future capacity trend, and in monitoring the indexes of network expansion dynamically. Besides, this model follows the expansion principles of timeliness and predictability, which makes rational use of expansion investment and guarantees user perception.

Keywords: Prophet · Network capacity · Cell expansion · Priority · LSTM

1 Introduction

With the development of communication technology, mobile communications have penetrated into many aspects of people's daily lives [1–3]. Their rapid development has attracted a large number of new users [4, 5]. Meanwhile, artificial intelligence (AI) empowered application scenarios (e.g., Internet of Vehicles, smart cities, etc.) have swept across the related industries [6–8]. The differentiated vertical industry applications have transformed the demand from low traffic to high traffic and large bandwidth [9]. In the communication network, the network capacity can best reflect users' perception of the network [10–12]. However, traditional network capacity assessment methods mainly rely on expert experience and lack systematic assessment tools to serve as a basis for network optimizers to expand cells [13–15]. The research of this paper includes two aspects. First, this paper researches cell traffic prediction based on the Prophet algorithm: it selects a number of cells that meet the expansion standards in a specific scenario and uses past cell-level traffic data to predict the traffic trend in the future. Second, we analyze the key factors that affect the network capacity, combine these key factors with the traffic forecasts, and further construct a cell expansion priority recommendation model, realizing the intelligent recommendation of


the priority of the cells to be expanded. When network resources are in short supply, priority can be given to areas with heavy network load and traffic hot spots, which assists the network optimizer in providing differentiated guarantees.

2 Research Theory and Method

2.1 Prophet Time Prediction Algorithm Introduction

The Prophet algorithm is an additive model open-sourced by Facebook specifically for large-scale time series analysis. The model turns the time series forecasting problem into a curve-fitting exercise, in which the dependent variable is the fitted sum of the trend, periodic, and holiday effects. The Prophet model divides a time series into three parts: trend item, periodic item, and holiday effect. The model is composed as (1):

y(t) = g(t) + s(t) + h(t) + \epsilon(t)    (1)

where g(t) represents the trend item, i.e., the non-periodic trend of the time series; s(t) represents the periodic (seasonal) item; h(t) represents the effect of holidays with non-fixed periodicity; and \epsilon(t) is the error term, representing fluctuations not captured by the model. The Prophet algorithm fits these parts and accumulates them to obtain the predicted value of the time series.

1. Trend item: the trend function based on logistic growth is as (2):

g(t) = \frac{C(t)}{1 + \exp\left(-\left(k + \mathbf{a}(t)^{\mathsf{T}}\boldsymbol{\delta}\right)\left(t - \left(m + \mathbf{a}(t)^{\mathsf{T}}\boldsymbol{\gamma}\right)\right)\right)}    (2)

where C(t) represents the carrying capacity, which limits the maximum value that can be reached; k represents the growth rate; the model defines the points at which the growth rate k changes as change points; and m represents the offset.

2. Periodic item: in the periodic term of the model, a Fourier series is used to model the periodicity of the time series as (3), where P is the period, N is the number of Fourier terms used in the model, and a_n and b_n are smoothing parameters.

s(t) = \sum_{n=1}^{N} \left( a_n \cos\frac{2\pi n t}{P} + b_n \sin\frac{2\pi n t}{P} \right)    (3)

3. Holiday effect: Prophet can set different window values before and after each holiday to characterize the span of time periods affected by that holiday.
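The decomposition above can be fitted with the open-source prophet package (formerly fbprophet); the sketch below shows one possible setup. The ds/y column names, holidays table format, make_future_dataframe, predict and plot_components calls follow the library's documented interface, while the synthetic traffic series, the holiday dates and the window lengths are illustrative assumptions rather than the paper's actual configuration.

```python
import numpy as np
import pandas as pd
from prophet import Prophet

# synthetic day-level traffic for one cell (stand-in for the real KPI export)
days = pd.date_range("2021-01-17", "2021-04-20", freq="D")
traffic = 20 + 0.05 * np.arange(len(days)) + 3 * np.sin(2 * np.pi * np.arange(len(days)) / 7)
cell_traffic = pd.DataFrame({"ds": days, "y": traffic})

# holiday table for the holiday effect h(t); dates and windows are illustrative
holidays = pd.DataFrame({
    "holiday": "spring_festival",
    "ds": pd.to_datetime(["2021-02-11"]),
    "lower_window": 0,
    "upper_window": 6,
})

model = Prophet(holidays=holidays, weekly_seasonality=True, yearly_seasonality=False)
model.fit(cell_traffic)

future = model.make_future_dataframe(periods=10)   # forecast the next 10 days
forecast = model.predict(future)                   # yhat, yhat_lower, yhat_upper
fig = model.plot_components(forecast)              # trend, weekly and holiday items
```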

2.2 Analysis of Key Factors Affecting Network Capacity

The indicators affecting network capacity include: 1) the utilization rate of RRC connection license users, calculated via (4); 2) the utilization rate of downlink


radio resources; 3) the uplink and downlink service traffic of the air interface; and 4) the average number of RRC connected users.

Utilization rate of RRC connection license users = \frac{\text{maximum number of RRC connected users}}{\text{RRC connection license number}}    (4)

If the RRC license of the cell is restricted, license expansion needs to be considered. The utilization rate of downlink radio resources can intuitively reflect the size of the cell traffic. The average number of RRC connected users is also an important indicator of the size of the cell network capacity.

3 Overall Model Construction

3.1 Introduction to Expansion Priority Recommendation Process

According to network capacity influencing factors, the expansion is divided into two categories, namely license expansion and carrier expansion. Figure 1 shows the overall process of the cell expansion priority recommendation.

Fig. 1. The process of the cell expansion priority recommendation.

First, expansion thresholds are defined for different types of expansion as follows. Whether a cell reaches the corresponding threshold is monitored daily, and its counter is incremented by one each time the threshold is reached. 1) License Expansion: the number of user-license-limited events exceeds 1000 on more than 4 days in a week or on both weekend days. 2) Single-carrier to Dual-carrier Expansion: on 4 days in a week or on both weekend days, the data of the busiest period of cell resources meets: 1) downlink PRB utilization > 60%; 2) average number of RRC connected users in a 20 MHz bandwidth cell > 50 (cells with other bandwidths are scaled); 3) 20 MHz bandwidth cell traffic > 10 GB (cells with other bandwidths are scaled).


3) Dual-carrier to Three-carrier Expansion: 1) carriers 1 and 2 in the sector meet the above-mentioned single-carrier expansion conditions at the same time; 2) upstream noise floor 9. Secondly, for the cells to be expanded, the strategy of "optimizing first, then expanding the carrier" is adopted. If the optimization method fails, the expansion priority recommendation model is used to obtain the expansion priority of each cell.

3.2 Expansion Priority Recommendation Model

The proposed expansion priority recommendation model consists of two influencing factors, namely the expansion threshold counter and the change rate of cell traffic. The recommendation model of expansion priority is as (5):

\text{Expansion priority} = \text{Expansion threshold counter} \times \left(1 + \text{Change rate of cell traffic}\right)    (5)

1) Expansion threshold counter: it refers to the sum of the counter counts calculated according to the expansion threshold of each type in the past period of time. The larger the number, the more times the expansion threshold is reached, and the higher the expansion priority. 2) Change rate of cell traffic: it refers to the change rate of cell traffic based on the Prophet algorithm at a certain point in the future relative to the current time point. The higher the rate, the higher the expansion priority.
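A minimal sketch of (5) is given below, assuming the threshold counter has been accumulated elsewhere and the change rate is derived from the current and Prophet-forecast traffic values; all names and the sample numbers are illustrative.

```python
def expansion_priority(threshold_counter: int,
                       traffic_now: float,
                       traffic_future: float) -> float:
    """Expansion priority per (5): counter weighted by forecast traffic growth."""
    change_rate = (traffic_future - traffic_now) / traffic_now if traffic_now else 0.0
    return threshold_counter * (1.0 + change_rate)

# example: a cell that hit its thresholds 12 times and is forecast to grow by 15%
print(expansion_priority(12, traffic_now=8.0, traffic_future=9.2))  # 13.8
```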

4 Model Experiment and Result Analysis

4.1 Data Preparation

This paper selects 70 subway-scenario cells with high network load in the main urban area of a city as the experimental cells to be expanded. We extract all the data used in the model from January 17 to April 20, 2021.

4.2 Prediction of Cell Traffic Based on Prophet Model

In the Prophet prediction process, the processed cell traffic data are fed into the Prophet model for training; the model is fitted component-wise on the trend item, periodic item, and holiday item, tuned by comparing its output with the real values, and then used to predict future day-level traffic. This paper obtains the Prophet prediction results and compares them with LSTM. The model is trained on the daily traffic of a cell from January 17 to April 20, 2021, to predict the cell traffic in the next 10 days, and the overall forecast trend is decomposed into its components. Finally, we compare the accuracy of the prediction results of the Prophet and LSTM algorithms.


4.2.1 Predictive Analysis of Each Component of Prophet

Figure 2 shows the trend item of the model, and it can be seen that the daily-level traffic of the cell shows an overall upward trend.

Fig. 2. The non-periodical trend of cell traffic

Figure 3 shows the holiday effect of the model. It can be seen that statutory holidays have a significant impact on the fluctuation of cell traffic. The two fluctuations in the traffic reduction are the weakening effects of the Spring Festival and Qingming Festival on the traffic of the cell.

Fig. 3. The influence of holiday effect on the cell traffic

Figure 4 shows the periodic items of cell traffic on a weekly basis. Because the extracted data does not cover a whole year, the seasonal change trend based on the year cannot be shown here.

Fig. 4. Periodic items of cell traffic in weeks


4.2.2 Analysis of Prediction Results of Cell Traffic

Figure 5 shows the prediction results of the cell traffic in the next 10 days based on the trained Prophet model. The prediction range represents the upper and lower limits of the confidence interval of the prediction result. It can be seen from the prediction results that the model fits the historical data well and the holiday effect has an obvious regulating effect. The confidence interval of the prediction results contains most of the true values, and the model has good accuracy for cell traffic prediction.

Fig. 5. Cell traffic forecast results

4.2.3 Comparative Analysis of Prediction Results Between Prophet and LSTM

Figure 6 shows the comparison of the cell traffic prediction between Prophet and LSTM. Due to the characteristics of the LSTM model, a sliding time window of the previous 7 days of data is used to predict the current cell traffic, so the LSTM model curve starts on January 24. The effect of the May 1st holiday caused the actual value on April 25 to be higher than the predicted result. Figure 6 shows that the prediction result of Prophet is more accurate than that of LSTM and is closer to the actual values.

Fig. 6. Comparison of prediction results between Prophet and LSTM


Then, we use evaluation indexes to compare the results of the two algorithms. The selected forecast evaluation indicators are RMSE, MAE, and MAPE. Table 1 shows that the Prophet algorithm is more accurate than LSTM in predicting daily cell traffic.

Table 1. Comparison of evaluation indexes of Prophet and LSTM algorithms

Evaluation index   Prophet   LSTM
RMSE               4.40      6.54
MAE                2.32      4.77
MAPE               7.22      14.42
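For reference, the three evaluation indexes can be computed as in the short numpy sketch below (MAPE is expressed in percent, matching the scale of Table 1); this is a generic implementation, not code from the paper.

```python
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)
```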

4.3 Expansion Priority Recommendation Results and Analysis

We select April 30 as the time point for the expansion priority prediction. We apply the Prophet algorithm to the 70 cells to be expanded to predict their traffic, and calculate the expansion threshold counter values of the 70 cells from January 17 to April 20. In this way, the expansion priorities of the 70 cells are obtained. Figure 7 shows the priority of cell expansion: each color block represents a cell; the higher the expansion priority, the larger the color block area; and the number in each color block is the priority score. For each pre-expansion cell, based on the priority score, the optimizer can analyze the contribution of each impact factor to the priority and hence develop personalized guarantee strategies for different types of scenarios.

Fig. 7. Tree diagram of cell expansion priority sorting


5 Conclusion

This paper proposes a priority recommendation model for cells to be expanded. The model uses the Prophet algorithm to predict the cell traffic and combines the expansion warning thresholds to calculate the expansion priority ranking of the cells, thus considering the impact of both past and future trends on cell expansion. In addition, the component-wise fitting of the Prophet algorithm allows the model to be applied to cell expansion priority recommendation in different types of scenarios. The model can be used by optimizers to evaluate the network load; while effectively guaranteeing the load of the entire network, it can also meet differentiated scenario guarantee requirements.

References 1. Xu, Y., Yin, F., Xu, W., Lin, J., Cui, S.: Wireless traffic prediction with scalable Gaussian process: framework, algorithms, and verification. IEEE J. Sel. Areas Commun. 37(6), 1291– 1306 (2019) 2. Yan, W., Jin, D., Lin, Z., Yin, F.: Graph neural network for large-scale network localization. In: 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5250–5254, Toronto (2021) 3. Xu, L., Dai, Y., Zhang, J., Zhang, C., Yin, F.: Exact O(N2) hyper-parameter optimization for Gaussian process regression. In: 30th IEEE International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1–6 (2020) 4. Yin, F., Pan, L., Theodoridis, S., et al.: Linear multiple low-rank kernel based stationary Gaussian processes regression for time series. IEEE Trans. Signal Process. 68(1), 5260–5275 (2020) 5. Kong, Q., et al.: Privacy-preserving aggregation for federated learning-based navigation in vehicular fog. IEEE Trans. Ind. Inform. https://doi.org/10.1109/TII.2021.3075683 6. Yin, F., et al.: FedLoc: federated learning framework for data-driven cooperative localization and location data processing. IEEE Open J. Signal Process. 1, 187–215 (2020) 7. Xu, L., Chen, X., Zhao, et al.: Telecom big data assisted bs resource analysis for LTE/5G systems. In: 18th IEEE International Conferences on Ubiquitous Computing and Communications, pp. 81–88. IEEE Press, Shenyang (2019) 8. Zhao, Y., Fritsche, C., Hendeby, G., Yin, F., et al.: Cramér-Rao bounds for filtering based on Gaussian process state-space models. IEEE Trans. Signal Process. 67(23), 5936–5951 (2019) 9. Hong, T., Fan, S.: Probabilistic electric load forecasting. Int. J. Forecast. J. 32(3), 914–938 (2016) 10. Xu, L., Luan, Y., Cheng, X., et al.: Telecom big data based user offloading self-optimisation in heterogeneous relay cellular systems. Int. J. Distrib. Syst. Technol. 8(2), 27–46 (2017) 11. Xue, M., Wu, L., Zhang, Q.P., et al.: Research on load forecasting of charging station based on XGBoost and LSTM model. J. Phys. Conf. Ser. 1757(1), 012145 (2021) 12. Xu, L., et al.: User perception aware telecom data mining and network management for LTE/LTE-advanced networks. In: Sun, S. (eds.) Signal and Information Processing, Networking and Computers. ICSINC 2018. LNEE, vol. 494, pp. 237−245. Springer, Singapore (2019). https://doi.org/10.1007/978-981-13-1733-0_29


13. Cui, C., et al.: Research on power load forecasting method based on LSTM model. In: 2020 IEEE 5th Information Technology and Mechatronics Engineering Conference (ITOEC), pp. 1657–1660 (2020) 14. Fang, Q., Zhong, Y., Xie, C., Zhang, H., Li, S.: Research on PCA-LSTM-based Short-term Load Forecasting Method. In: IOP Conference Series: Earth and Environmental Science, vol. 495, no. 1, p. 012015 (2020) 15. Quan, R., Zhu, L., Wu, Y., Yang, Y.: Holistic LSTM for pedestrian trajectory prediction. IEEE Trans. Image Process. 30, 3229−3239 (2021)

Load and Energy Balance Strategy of SDN Control Plane in Mobility Scenarios

Fuwei Zhang1(&), Wenwen Ma1, Dequan Xiao1, and Hongyu Peng2

1 Graduate School of Tangshan, Southwest Jiaotong University, TangShan, China
[email protected]
2 Department of Computer Science, TangShan University, TangShan, China
[email protected]

Abstract. In this paper, we propose a solution for dynamic placement of controllers in software-defined networking (SDN), with the aim of solving the problem of unbalanced load and energy in the SDN control plane. The SDN architecture separates the control and forwarding of the network, provides the advantages of network programmability and easy expansion, and is of great significance for Internet of Things applications. Due to the dynamics of network load and device location, the SDN control plane faces the problem of unbalanced load and energy: the load of some controllers is significantly higher than that of other nodes, resulting in a decrease in network performance, and in scenarios where the energy of the control nodes is limited, some key nodes may run out of energy faster and shorten the life of the network. Dynamic placement of the controllers is an effective way to solve this problem. This study uses an order-3 Markov predictor to predict the future access point location of each device and then uses the branch and bound method to dynamically place the controllers. The simulation results show that this method can effectively reduce the peak traffic of the SDN control plane and balance the historical load values of the nodes.

Keywords: SDN · IoT · Load balancing · Energy balancing · Controller placement

1 Introduction

In recent years, the development of emerging network technologies has been leading us into a new era of smart cities [1]. Software-defined networking (SDN) separates network control and forwarding and brings network programmability by means of centralized control of the network. Compared with the traditional network architecture, SDN has a logically centralized control plane which keeps the topology information of the entire network [2]. As a new type of network paradigm, SDN breaks the vertical integration of the network and promotes the development of the network [3]. In mobility scenarios, the SDN control plane faces the problem of uneven load and energy consumption [4]. The load in the network is dynamic and unevenly distributed [5, 6]. When devices move, some control nodes may become congested while others sit idle, resulting in a waste of network performance [7]. In addition, in scenarios where the energy of the controller nodes is limited, because the


energy consumption of the nodes is not uniform, some nodes may run out of energy faster, resulting in a decrease in network lifetime [8]. Dynamic deployment of network nodes can improve network performance [9]. SDN was initially controlled by a single controller, but the processing capacity of a single controller is limited [10]. A multi-controller architecture is conducive to expanding the scope of the network, but introduces the controller placement problem (CPP). Research by Heller et al. showed that random controller placement can cause five times the delay in the network [11]. There have been some research results on CPP. Hock et al. [12] placed controllers based on the maximum control link delay and the number of switches; Huque et al. [13] comprehensively considered the delay and traffic load of the switches and predicted the flow of the data plane to propose LiDy+; Maity et al. [14] proposed the CORE model, which is based on the different needs of device and switch traffic; Bari et al. [15] proposed two heuristic algorithms, DCP-GK and DCP-SA, which use the greedy algorithm and the simulated annealing algorithm, respectively, to place the controllers. Building on the above research, we take into account the current peak load in the network and the historical load of the nodes and propose a controller load and energy balance strategy. First, we use a prediction algorithm based on a Markov predictor to predict the future access point location of each device, and then use the branch and bound method to place the controllers according to the result. The rest of this article is organized as follows. Section 2 introduces the system model. Section 3 presents the controller placement strategy. The simulation results are given in Section 4, and Section 5 concludes the paper.

2 System Model

2.1 Architecture

Figure 1 shows the network architecture composed of controllers, switches, and IoT devices. Several logically centralized and physically distributed controllers form the controller plane. The access point performs data forwarding according to the flow rules determined by the SDN controller. One switch may be connected to multiple controllers, but there is only one master controller at the same timeslot.

Fig. 1. The network architecture


We assume that IoT devices are mobile and can migrate from one switch to another, but a device can only be connected to one switch in a given timeslot. In addition, the activation mode of a device can be either periodic activation or random activation [16]. A periodically activated device activates and sends traffic after a fixed period, and the probability of a randomly activated device being activated at a certain moment obeys a beta distribution.

2.2 Problem Formulation

We formulate the optimization goal of the algorithm. Let C denote the control plane, S the data plane, and H the set of devices, with controllers c_i \in C, switches s_j \in S, and devices h_k \in H. We define several 0-1 variables to represent the connections in the network: f_{ij}(t) represents whether c_i is the master controller of s_j at timeslot t, and z_{jk}(t) represents whether s_j and h_k are connected at timeslot t. The connection constraints in the network can then be expressed as (1) and (2).

\sum_{i=0}^{|C|} f_{ij}(t) = 1, \quad \forall s_j \in S    (1)

\sum_{j=0}^{|S|} z_{jk}(t) = 1, \quad \forall h_k \in H    (2)

Let t_n be the current time, t_0 the moment of last activation, and h the length of the timeslot. The activation condition of a periodically activated device can then be expressed as (3).

p_k(t_n) = \begin{cases} 1, & t_n - t_0 = \tau,\; t_n < h \\ 0, & \text{otherwise} \end{cases}    (3)

The activation probability of a randomly activated device obeys a beta distribution with parameters \alpha and \beta, which can be expressed as (4).

p_k(t_n) = \frac{\left(\frac{t_n}{h}\right)^{\alpha-1}\left(1-\frac{t_n}{h}\right)^{\beta-1}}{h\int_0^1 x^{\alpha-1}(1-x)^{\beta-1}\,dx}, \quad 0 \le t_n \le h    (4)

The size of the traffic packet generated when h_k is activated is denoted Q_k, and the traffic generated in timeslot t can be expressed as (5).

F_k(t) = \sum_{t_n=0}^{h} p_k(t_n)\, Q_k \, \mathrm{d}t_n    (5)
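A small sketch of how the activation models (3)-(4) and the per-timeslot traffic (5) could be simulated is shown below, using one-second steps within a slot of length h and scipy's beta density; the parameter values follow the simulation table later in the paper, but the discretisation and names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import beta

def periodic_active(t_n, t_0, tau, h):
    """Periodic activation per (3): active once every tau seconds within the slot."""
    return 1 if (t_n - t_0) == tau and t_n < h else 0

def random_activation_prob(t_n, h, a=3.0, b=4.0):
    """Random activation per (4): beta-distributed activation time scaled to [0, h]."""
    return beta.pdf(t_n / h, a, b) / h

def slot_traffic(h, Q_k, a=3.0, b=4.0):
    """Approximate expected traffic of a randomly activated device in one slot, cf. (5)."""
    steps = np.arange(1, h)                       # discretised instants in the slot
    return float(np.sum(random_activation_prob(steps, h, a, b) * Q_k))
```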

The communication cost of the controller ci can be divided into two parts: controller-switch cost and controller-controller cost. The controller-switch communication cost refers to the traffic intensity generated by the equipment under the switch connected to the controller, and the controller-controller communication cost includes


the synchronization traffic caused by controller migration and device location changes. These costs can be expressed as (6) and (7).

G_i(t) = \sum_{j=0}^{|S|} \sum_{k=0}^{|H|} z_{jk}(t)\, f_{ij}(t)\, F_k(t)    (6)

K_i(t) = \sum_{j=0}^{|S|} \sum_{k=0}^{|H|} \left( \left| f_{ij}(t) - f_{ij}(t-1) \right| + x_{ik}(t) \right)    (7)

The total cost of controller c_i can be expressed as the sum of the above costs, as shown in (8).

cost_i(t) = G_i(t) + K_i(t)    (8)

Let his_i(t) represent the sum of the historical load values of controller c_i. The final optimization objective of this study is the weighted sum of the current maximum traffic load and the maximum historical node load.

d(t) = \alpha_1\, cost_{\max}(t) + \alpha_2\, his_{\max}(t)    (9)

(9) is the optimization objective function, where \alpha_1 and \alpha_2 are weight values that can be adjusted according to different network conditions and optimization goals.
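As a rough illustration, the objective (9) for one candidate switch-to-controller assignment could be evaluated as below, assuming per-switch traffic demands and per-controller historical loads are available as dictionaries; the weights, the update of the history with the current slot, and all names are illustrative assumptions.

```python
def objective(assignment, switch_traffic, history, a1=0.7, a2=0.3):
    """d(t) = a1 * cost_max(t) + a2 * his_max(t) for one candidate placement."""
    load = {c: 0.0 for c in history}                 # traffic handled per controller
    for switch, controller in assignment.items():
        load[controller] = load.get(controller, 0.0) + switch_traffic[switch]
    cost_max = max(load.values())                    # heaviest controller this slot
    his_max = max(history[c] + load.get(c, 0.0) for c in history)
    return a1 * cost_max + a2 * his_max
```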

3 Controller Placement

In this study, we use two algorithms to achieve load and energy balance of the control plane: a position prediction algorithm and a controller placement algorithm.

3.1 Position Prediction Algorithm

Table 1. Position prediction algorithm

Algorithm: order-k position prediction
1  while k > 0 do
2      extract u = U(n-k+1, n) from U
3      match u in U
4      if no match is found then
5          k ← k - 1
6      else
7          break
8      end if
9  end while

As shown in Table 1, the Markov predictor predicts a device's next access location from the device's historical access locations. The input of the algorithm is


U = (x_1 x_2 \ldots x_n), a series of historical access point positions of the device. The output of the algorithm is the prediction of the node's next access point position x_{n+1}. Let U_{(i,j)} = (x_i x_{i+1} \ldots x_j) denote a historical subsequence of U. The order-k Markov predictor extracts the latest position sequence of length k as u = U_{(n-k+1,n)}, then matches the sequence u against the historical positions to find the possible transfer positions r. The position with the highest transition probability is taken as the predicted position, which can be expressed as

x_{n+1} = \arg\max_{r} \frac{N(ur, U)}{N(u, U)}

where N(s, U) is the number of occurrences of the subsequence s in U. If the order-k search fails, the order-(k-1) sequence is searched. The order-3 Markov predictor is used in this paper (Table 2).
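A compact sketch of the order-k Markov predictor with fallback, following the description above, is given below; the history is assumed to be a list of access point identifiers, and the function returns None when even the order-1 search fails.

```python
from collections import Counter

def predict_next(history, k=3):
    """Order-k Markov prediction with fallback to order-(k-1) on failure."""
    while k > 0:
        u = tuple(history[-k:])                      # latest length-k context
        counts = Counter(
            history[i + k]                           # position following the context
            for i in range(len(history) - k)
            if tuple(history[i:i + k]) == u
        )
        if counts:
            # most frequent successor == highest transition probability N(ur,U)/N(u,U)
            return counts.most_common(1)[0][0]
        k -= 1                                       # order-k failed, try order-(k-1)
    return None

print(predict_next(["A", "B", "C", "A", "B", "C", "A", "B"]))  # -> "C"
```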

3.2 Controller Placement Algorithm

Table 2. Algorithm of controller placement

The controller placement problem is NP-hard [14], and the branch and bound method is one of the general ways to solve it. A solution space tree is constructed over all solutions of the controller placement problem, where node k_{ij} represents that the master controller of switch s_i is c_j. The algorithm first puts the root node k_{00} into the activation node stack P. Each time, the algorithm pops a node k_{ij} from the stack, obtains its child nodes k_{(i+1)z}, calculates their current objective function values d(t), sorts all the child nodes, and then pushes them into the stack. Nodes are filtered by the bounding function C(k_{ij}) when they are pushed into the stack: the bounding function C(k_{ij}) calculates the possible optimal cost_{\max}(t) and his_{\max}(t) values of the remaining nodes, and the possible optimal result of the current branch is \rho_{low}(k_{ij}) = C(k_{ij}) + d(t). If \rho_{low}(k_{ij}) is greater than the boundary value \rho, the node is discarded. The initial boundary value is \rho = \rho_{low}(k_0); if a solution is found such that d(t) < \rho, then \rho is updated and the solution is recorded as the current optimal solution X. When the stack is empty, the optimal solution that minimizes the objective function has been found.
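A simplified sketch of the stack-based branch and bound described above is shown below. It is self-contained and uses the partial objective itself as a (deliberately loose) lower bound, which is valid because adding switches can only increase the objective; it therefore prunes less aggressively than the bounding function C(k_ij) in the paper, and all names are illustrative.

```python
def branch_and_bound(switches, controllers, switch_traffic, history, a1=0.7, a2=0.3):
    """Stack-based branch and bound over switch -> controller assignments."""

    def value(assignment):
        # objective d(t) of a (possibly partial) assignment; it can only grow as
        # more switches are assigned, so it also serves as a loose lower bound
        load = {c: 0.0 for c in controllers}
        for sw, c in assignment.items():
            load[c] += switch_traffic[sw]
        cost_max = max(load.values())
        his_max = max(history.get(c, 0.0) + l for c, l in load.items())
        return a1 * cost_max + a2 * his_max

    best_val, best_assignment = float("inf"), None
    stack = [dict()]                                  # activation node stack P
    while stack:
        partial = stack.pop()
        if len(partial) == len(switches):             # leaf node: complete placement
            v = value(partial)
            if v < best_val:
                best_val, best_assignment = v, partial
            continue
        sw = switches[len(partial)]                    # next switch to assign
        children = [{**partial, sw: c} for c in controllers]
        # push the most promising child last so it is explored first
        for child in sorted(children, key=value, reverse=True):
            if value(child) < best_val:                # prune dominated branches
                stack.append(child)
    return best_assignment, best_val
```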

4 Evaluation

4.1 Simulation Environment

This study uses Mininet to simulate the mobile Internet of Things environment; the simulation parameters are shown in Table 3.

Table 3. Simulation parameters

Parameter                   Value
Simulation area             100 m * 100 m
Number of switches |S|      20
Number of controllers |C|   5
Number of devices |H|       200-800
Mobility model of devices   Random Way-Point [17, 18]
Speed of devices            0-2 m/s
Shape parameters α, β       3, 4 [19]
Average packet size         100 bytes

4.2 Prediction Accuracy

Fig. 2. The prediction accuracy (y-axis: prediction accuracy; x-axis: number of iterations, 100-1000; curves for 200, 400, and 800 devices)

The order-3 Markov predictor is used to predict the devices' access point positions, and the devices' trajectories are simulated using a random waypoint model [20]. Figure 2 shows the prediction accuracy when there are 200, 400, and 800 devices in the network. The results show that the prediction algorithm performs well and that the prediction accuracy has little to do with the device scale. In addition, the


prediction accuracy decreases as the number of iterations increases, but the average prediction accuracy over the final 1000 iterations still reaches 65%.

4.3 The Network Performance

Fig. 3. The network performance: (a) peak traffic (kbytes) of a single controller vs. timeslots; (b) highest history load (10^3 kbytes) vs. timeslots; curves for DCP-GK, CORE, and the target algorithm

The simulation result of the network performance is shown in Fig. 3. Figure 3a shows the peak load of the network. The horizontal axis represents the timeslots, and the vertical axis represents the peak traffic of a single controller in the network. The simulation results show that compared with CORE and DCP-GK, the peak flow rate is reduced by 9.5% and 4.6%. It can be seen that compared with the CORE and DCP-GK algorithms, the target algorithm can reduce the peak traffic of the control plane. Figure 3b shows the highest historical load in the network. Let hismax represent the node that consumes the most. The vertical axis represents the consumption value of hismax and the horizontal axis represents the number of timeslots. The simulation results show that compared with CORE and DCP-GK, the consumption value of hismax in this method is reduced by 14.3% and 10.8%. This means that the target algorithm can delay the energy exhaustion time of this node, thereby increasing the life of the network.

5 Conclusion

In this paper, we use mobility prediction to solve the SDN controller placement problem, so as to balance the load and energy of the SDN controllers. We use an order-3 Markov predictor to predict the devices' future access positions and then use the branch and bound method to dynamically place the controllers, thereby improving the performance of the network and extending its lifetime. The simulation results show that, compared with the CORE and DCP-GK solutions, the peak control plane traffic is reduced by 9.5% and 4.6%, and the consumption of the most-consuming node is reduced by 14.3% and 10.8%. Future work will introduce the distance factor between controllers and switches into the controller allocation problem and select a suitable algorithm to solve it.


References 1. Eiza, M., Cao, Y., Xu, L.: Toward Sustainable and Economic Smart Mobility: Shaping the Future of Smart Cities, 1st edn. World Scientific, London (2020) 2. Wu, D., et al.: Towards distributed SDN: mobility management and flow scheduling in software defined urban IoT. IEEE Trans. Parallel Distrib. Syst. 6(31), 1400–1418 (2020) 3. Din, S., Rathore, M.M., Ahmad, A., Paul, A., Khan, M.: SDIoT: software defined internet of thing to analyze big data in smart cities. In: 2017 IEEE 42nd Conference on Local Computer Networks Workshops (LCN Workshops), pp. 175–182, IEEE press, Singapore (2017) 4. Bera, S., Misra, S., Obaidat, M.S.: Mobi-Flow: mobility-aware adaptive flow-rule placement in software-defined access network. IEEE Trans. Mob. Comput. 8(18), 1831–1842 (2019) 5. Xu, L., et al.: Telecom big data assisted BS resource analysis for LTE/5G systems. In: 18th IEEE International Conferences on Ubiquitous Computing and Communications, pp. 81–88. IEEE Press, Shenyang (2019) 6. Xu, L., et al.: Data mining for base station evaluation in LTE cellular systems. In: Sun, S., Chen, N.A., Tian, T. (eds.) ICSINC 2017. LNEE, vol. 473, pp. 356–364. Springer, Singapore (2018). https://doi.org/10.1007/978-981-10-7521-6_43 7. Xu, L., Cheng, X., et al.: Mobility load balancing aware radio resource allocation scheme for LTE-advanced cellular networks. In: 16th IEEE International Conference on Communication Technology, pp. 806–812. IEEE Press, Hangzhou (2015) 8. Pandey, O.J., Hegde, R.M.: Low-latency and energy-balanced data transmission over cognitive small world WSN. IEEE Trans. Veh. Technol. 8(67), 7719–7733 (2018) 9. Xu, L., et al.: WCDMA data based LTE site selection scheme in LTE deployment. In: 1st International Conference on Signal and Information Processing, Networking and Computers, pp. 249–260. CRC Press Taylor & Francis Group, Beijing (2015) 10. Rafique, W., Qi, L., Yaqoob, I., Imran, M., Rasool, R.U., Dou, W.: Complementing IoT services through software defined networking and edge computing: a comprehensive survey. IEEE Commun. Surv. Tutor. 3(22), 1761–1804 (2020) 11. Heller, B., Sherwood, R., Mckeown, N.: The controller placement problem. In: Workshop on Hot Topics in Software Defined Networks, pp. 7–12. Association for Computing Machinery, Helsinki, Finland (2012) 12. Hock, D., Hartmann, M., Gebert, S., Jarschel, M., Zinner, T., Tran-Gia, P.: Pareto-optimal resilient controller placement in SDN-based core networks. In: Proceedings of the 2013 25th International Teletraffic Congress (ITC), pp. 1–9. IEEE press. Shanghai, China (2013) 13. ul Huque, M.T.I., Si, W., Jourjon, G., Gramoli, V.: Large-scale dynamic controller placement. IEEE Trans. Netw. Serv. Manag. 1(14), 63–76 (2017) 14. Maity, I., Misra, S., Mandal, C.: CORE: prediction-based control plane load reduction in software-defined IoT networks. IEEE Trans. Commun. 3(69), 1835–1844 (2021) 15. Bari, M.F., et al.: Dynamic controller provisioning in software defined networks. In: Proceedings of the 9th International Conference on Network and Service Management (CNSM 2013), pp. 18–25. IEEE Press, Zurich, Switzerland (2013) 16. Sivanathan, A., et al.: Classifying IoT devices in smart environments using network traffic characteristics. IEEE Trans. Mob. Comput. 8(18), 1745–1759 (2019) 17. Song, L., Deshpande, U., Kozat, U.C., Kotz, D., Jain, R.: Predictability of WLAN mobility and its effects on bandwidth provisioning. In: Proceedings IEEE INFOCOM 2006. 25TH IEEE International Conference on Computer Communications, pp. 1–13. 
IEEE press, Barcelona, Spain (2006)


18. Lin, G., Noubir, G., Rajaraman, R.: Mobility models for ad hoc network simulation. In: IEEE INFOCOM 2004, p. 463. IEEE Press, Hong Kong, China (2004) 19. Mozaffari, M., Saad, W., Bennis, M., Debbah, M.: Mobile unmanned aerial vehicles (UAVs) for energy-efficient internet of things communications. IEEE Trans. Wirel. Commun. 11(16), 7574–7589 (2017) 20. Camp, T., Boleng. J., Davies, V.: A survey of mobility models for ad hoc network research. Wirel. Commun. Mob. Comput. 2(5), 483–502 (2010)

A Method for Intelligently Evaluation the Integrity of 5G MR Data

Yuan Fang1(&), Qintian Wang1, Ao Shen1, Pengcheng Liu1, Jinhu Shen1, Bao Guo2, Zetao Xu1,2, and Jimin Ling1

1 China Mobile Group Design Institute Co., Ltd., Beijing, China
[email protected]
2 China Mobile Group CMPak Ltd., Islamabad, Pakistan

Abstract. Measurement Reports (MRs) come from all actual user equipment using the service; they have the characteristics of comprehensive data and authentic sources and have been widely used in network optimization. This paper proposes a method for intelligently evaluating the integrity of 5G MR data based on machine learning algorithms: standard sample data are obtained by modeling the relationship function between the number of MRO samples and wireless performance, and this benchmark is used to verify the integrity of the MR data reported on the existing network, thereby effectively improving MR data quality.

Keywords: 5G · Measurement report · Integrity · Machine learning

1 Introduction

Since the 4G era, MR has become the main input data source of network optimization platforms due to its advantages of low cost, massive data volume, and reliability. Relying on MR data for refined analysis of network problems has become the main method of network optimization. For operators, however, the implementation process of MR is an invisible black box. If the MR output by the wireless equipment is incomplete, it cannot represent the actual data of all users or reflect the overall situation of the network, which brings uncertainty to any MR-based evaluation of overall network quality [1]. Since the time users stay in a service state is random, we cannot judge from the MR data alone whether MRs are fully reported by all users, so a method is needed to check the integrity of the MR data.

2 The Overall Scheme Description

According to the 3GPP protocols for physical layer measurement, when Radio Resource Control (RRC) is in the connected state, 5G NR equipment periodically or in an event-triggered manner reports the MRs of the serving cell and neighbor cells it has measured to the network side [2]. The MR generated by the User Equipment (UE) is reported to the base station through RRC signaling and, together with the measurement information generated by the base station, is directly reported to the Operations Maintenance Center-Radio (OMC-R) in


cell granularity and stored in the periodic MR data file, which is called MRO [3]. China Mobile's standard definition of the measurement sampling period is 5120 ms by default; that is, when the UE is in the connected state, MR data are reported every 5120 ms and recorded in the MRO file under normal conditions, and the file is then extracted and used by the operator's upper-layer applications through the northbound interface [4].

As described above, the total number of MR samples in a cell is related to the number of active users and the duration of each user's RRC connected state in the cell. However, the duration of each user in the connected state cannot be obtained directly; it can only be analyzed through statistical modeling of related performance indicators, such as the number of active users in the cell and the site traffic of the base station. This paper proposes a model, built from a large number of fully reported MRO cell data in the existing network, that reflects the relationship function between the number of MR samples and the performance indicators in the same time period, so as to obtain MR sampling benchmarks for the cells in an area. When inspecting the integrity of the MRs reported by cells in the area, given the known performance indicators, the model yields the number of MR sampling points that the cells should reach when the full MR reporting mechanism is turned on. The actual number of MR sampling points collected from the network is then compared with this theoretical value to verify the integrity of the MR in the existing network.

3 Machine Learning Algorithm Selection

3.1 Correlation Analysis

This paper selects several indicators that can characterize the RRC connection time of cell users and the number of online users in the cell, and extracts the MRO sample counts and performance indicators of several cells over one day at hourly granularity. The preliminarily selected indicators are Physical Resource Block (PRB) utilization, the number of online RRC users, cell throughput, the average number of RRC connections, and the maximum number of RRC connections. The Pearson correlation coefficient is then used to find which indicators are highly correlated with the number of MRO sampling points. The Pearson correlation coefficient measures the linear correlation between two variables X and Y; the value of \rho_{X,Y} lies between -1 and 1 and is defined as the ratio of the covariance of the two variables to the product of their standard deviations (1):

\rho_{X,Y} = \frac{\mathrm{COV}(X, Y)}{\sigma_X \sigma_Y} = \frac{E[(X - \mu_X)(Y - \mu_Y)]}{\sigma_X \sigma_Y}    (1)


which equals (2):

\rho_{X,Y} = \frac{E(XY) - E(X)E(Y)}{\sqrt{E(X^2) - (E(X))^2}\,\sqrt{E(Y^2) - (E(Y))^2}}    (2)

For the sample Pearson correlation coefficient (3):

r_{xy} = \frac{\sum x_i y_i - n\bar{x}\bar{y}}{(n-1)\, s_x s_y} = \frac{n\sum x_i y_i - \sum x_i \sum y_i}{\sqrt{n\sum x_i^2 - \left(\sum x_i\right)^2}\,\sqrt{n\sum y_i^2 - \left(\sum y_i\right)^2}}    (3)

This paper calculates the correlation between the number of MRO sampling points and each of the selected indicators through the above Pearson correlation function.
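A sketch of this screening step with pandas is shown below, assuming a DataFrame kpi with one row per cell-hour, a column mro_samples for the MRO sample count and one column per candidate indicator; the 0.8 threshold and all names are illustrative assumptions.

```python
import pandas as pd

def select_high_correlation(kpi: pd.DataFrame, target: str = "mro_samples",
                            threshold: float = 0.8) -> pd.Series:
    """Pearson correlation of every indicator with the MRO sample count."""
    corr = kpi.corr(method="pearson")[target].drop(target)
    return corr[corr.abs() >= threshold].sort_values(ascending=False)
```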

3.2 Multiple Linear Regression Analysis

The paper puts the MRO sample data and the selected high-correlation performance indicators together for multiple linear regression analysis and prediction, and builds the resulting formula model. Unary linear regression studies the regression relationship between one dependent variable and one independent variable; when multiple independent variables affect the dependent variable, it becomes multiple regression analysis. Assuming that y is the dependent variable, x are the independent variables, and there is a linear relationship between them, the multiple linear regression model is (4):

y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_k x_k    (4)

where \beta_0 is a constant term and \beta_1 to \beta_k are regression coefficients. Let the MRO sample count be the dependent variable y and its corresponding set of high-correlation performance indicators be x. The above regression algorithm is used to obtain the regression coefficients from the MRO sample counts and performance indicators extracted from the existing network, yielding the multiple linear function of the MRO sample count. The paper uses this model to predict the MRO sample count, compares it with the actual value collected from the existing network, and thus verifies the integrity of the MRO sampling data output by the wireless devices [5].
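This regression step can be sketched with scikit-learn as below, where X_train holds the selected high-correlation indicators and y_train the MRO sample counts of fully reported cells; the variable names are illustrative and predicting on fresh performance data then yields the benchmark sample count used later in the integrity check.

```python
from sklearn.linear_model import LinearRegression

def fit_mro_model(X_train, y_train) -> LinearRegression:
    """Fit y = b0 + b1*x1 + ... + bk*xk on historical, fully reported cells."""
    model = LinearRegression()
    model.fit(X_train, y_train)
    return model

# model.intercept_ is b0 and model.coef_ holds b1..bk;
# model.predict(X_new) gives the benchmark MRO sample count for new cells
```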

4 Detailed Scheme Evaluation

This paper builds a model of the relationship function between the number of MR samples and the performance indicators in the same time period, using a large number of fully reported MRO cell data from the existing network; this model provides the MR sampling benchmarks for the cells in the area. When verifying the integrity of the MRs reported by cells in the area, the performance indicators are known, and the established model function is used to predict by regression the standard MR sampling quantity in the


corresponding time period. The actual number of MR samples collected from the existing network cells is then compared with this standard value to verify the integrity of MR activation in the existing network [6].

4.1 Extract the Quantity of MR Sampling Points

First, extract the MRO data of a cell at a certain time granularity (for example, 1 h) and within a certain time period (for example, 24 h of a day) in one area of the existing network. Then calculate the number of MR sampling points of the cell in each time granularity.

4.2 Extract Relevant Performance Indicators

The quantity of MR sampling points is positively correlated with the number of users and their online duration [7]. Therefore, it is necessary to extract relevant performance indicators that can characterize the number of users and online duration in the same time period and at the same time granularity as in step 1, for example: the average number of RRC connections, the maximum number of RRC connections, the number of successfully established RRC connections, the number of successful handovers in, the number of successful inter-frequency handovers out, the number of successful intra-frequency handovers out, uplink packets of the cell, downlink packets of the cell, uplink PUSCH PRB occupancy, downlink PDSCH PRB occupancy, and other performance indicators.

4.3 Correlation Analysis

Because some performance indicators of the same type have similar effects on the number of sampling points, such as the average number of RRC connections, the maximum number of RRC connections, and the number of successfully established RRC connections, the correlation function is used to find out which of these indicators is highly relevant to the quantity of MRO sampling points. This decreases the number of variables and reduces the complexity of function modeling.

4.4 Modeling of MR Sampling Evaluation Function

The MR sampling points are modeled with the high-correlation indexes obtained by the correlation analysis in step 3, to obtain the multiple linear regression analysis and prediction model function. Set y as the dependent variable, representing the number of sampling points, and x as the independent variables, representing the performance indexes with high correlation. When there is a linear relationship between the independent variables and the dependent variable, the multiple linear regression model is (5), where \beta_0 is a constant term and \beta_1 to \beta_k are regression coefficients.

y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_k x_k + \epsilon    (5)


4.5 Accuracy Verification of Regression Model

Using the linear regression model established in step 4 from the MR sampling points and the high-correlation performance indexes, the actual performance indexes are plugged into the function to calculate the predicted MR sampling points. Comparing these with the actual MR sampling points gives the error ratio (predicted MR sampling points / actual MR sampling points), and the proportion of samples whose error ratio is within 10% is then computed. If this proportion is greater than 90%, the model is judged accurate and can be used.
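A sketch of this acceptance test is given below, assuming arrays of predicted and actual sample counts; here the "error ratio within 10%" criterion is read as a relative deviation |predicted − actual| / actual of at most 10%, which is one possible interpretation of the rule above.

```python
import numpy as np

def model_is_accurate(predicted, actual, tolerance=0.10, required_share=0.90):
    """Accept the model if >= 90% of samples have relative error within 10%."""
    predicted, actual = np.asarray(predicted, float), np.asarray(actual, float)
    relative_error = np.abs(predicted - actual) / actual
    share_within = float(np.mean(relative_error <= tolerance))
    return share_within >= required_share, share_within
```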

4.6 Integrity Verification of MR Data

The performance index data are obtained for a certain area and a certain period of time. Based on the model obtained in step 5, the model function gives the number of MR sampling points that should theoretically be reached when the full MR reporting mechanism of the area is on; if this theoretical value differs greatly from the actual MR sampling statistics in the MRO file, the corresponding cells have integrity problems and need further verification.

Here is a practical example to help understand the whole scheme. MR and performance index data were extracted at hourly granularity for one day from 1,000 cells of the actual network, and a total of 20,000 sample records were used as model input. Each sample record includes the MR sampling points and the performance indexes at hourly granularity [8] (Table 1).

Table 1. MR sampling points and performance index

MR sampling points
Average RRC connections
Maximum RRC connections
Successful handoff out times under same frequency
The number of packets transmitted uplink in cell
The number of packets transmitted downlink in cell
RRC connection successful established times
Uplink PUSCH PRB occupancy rate
Successful handoff in times
Successful handoff out times under pilot frequency
Downlink PUSCH PRB occupancy rate

After correlation analysis, indicators of the same type are removed, yielding the performance indexes most closely related to the dependent variable (MR sampling points): average RRC connections, successful handoff in times, successful handoff out times under pilot frequency, and successful handoff out times under same frequency. After multiple linear regression, R equals 0.979 and R² is 0.958; R² is the coefficient of determination, which gives the percentage of the total variance explained by the independent variables, and the closer it is to 1 (its range is 0 to 1), the better the fitting effect of the model.



An R² of 0.958 means that 95.8% of the variation can be explained by this model, and the standard error of the estimate is 2449.64360. The analysis of the model coefficients is shown in Table 2.

Table 2. Model coefficients analysis

The model                                             Non-standardized coefficient      t          Sig.
                                                      B          Standard error
(Constant)                                            565.971    22.780                24.845     .000
RRC connection successful established times           718.751    1.632                 440.330    0.000
Successful handoff in times                           -.858      .069                  -12.398    .000
Successful handoff out times under pilot frequency    1.261      .068                  18.648     .000
Successful handoff out times under same frequency     .890       .074                  12.090     .000

According to the model coefficients, the regression function can be obtained: Predicted MR sampling points = 565.971 + 718.751 × Average RRC connections − 0.858 × Successful handoff in times + 1.261 × Successful handoff out times under pilot frequency + 0.89 × Successful handoff out times under same frequency. The linear regression model is then verified (Table 3):