Intelligent Computing and Optimization: Proceedings of the 3rd International Conference on Intelligent Computing and Optimization 2020 (ICO 2020)
ISBN 303068153X, 9783030681531

This book presents the proceedings of the 3rd International Conference on Intelligent Computing and Optimization 2020 (ICO 2020), published by Springer in the Advances in Intelligent Systems and Computing series.


Language: English. Pages: 1332 [1341]. Year: 2021.


Table of contents:
Preface
Contents
About the Editors
Sustainable Clean Energy System
Adaptive Neuro-Fuzzy Inference Based Modeling of Wind Energy Harvesting System for Remote Areas
Abstract
1 Introduction
2 Motivation and Objective of Research
3 Related Work
4 Wind Energy Harvesting System
4.1 Modelling of Wind Turbine
4.2 Modeling of the PMSG
5 Adaptive Neuro-Fuzzy Inference System
6 Simulation Results and Discussion
6.1 Limitation of Research
7 Conclusion and Future Scope
Acknowledgments
References
Features of Distributed Energy Integration in Agriculture
Abstract
1 Introduction
2 Problems of Integration of Distributed Energy Projects in Agriculture
3 Discussion and Results
4 Conclusions
References
Concept of Multi-contact Switching System
Abstract
1 Introduction
2 MCSS
3 Discussion
4 Conclusions
References
The Design of Optimum Modes of Grain Drying in Microwave–Convective Effect
Abstract
1 Introduction
2 Main Part
2.1 Research Method
2.2 Experiment
2.3 Results and Discussion
3 Conclusions
References
Isolated Agroecosystems as a Way to Solve the Problems of Feed, Ecology and Energy Supply of Livestock Farming
Abstract
1 Introduction
2 Materials and Methods
3 Conclusions
References
Laboratory-Scale Implementation of Ethereum Based Decentralized Application for Solar Energy Trading
Abstract
1 Introduction
2 Implementation of DApp Using the Ethereum Open-Source Platform
2.1 Set Up the Ethereum Environment
2.2 Set Up React
2.3 Set Up Identities with Ganache and Wallets with Metamask for Participants
3 Implementation of ERC20 Token
4 Implementation of Energy Trading
4.1 The User Uses ET Token to Do Both Sell Orders and Buy Orders
4.2 Trading
5 Deployment and Experimental Result
6 Conclusion
References
Solar Module with Photoreceiver Combined with Concentrator
Abstract
1 Introduction
2 Creation of the Geometry of Working Profile of the Paraboloid Type Solar Radiation Concentrator
3 Results of Calculating the Profile of the Concentrator Paraboloid Type
4 Conclusion
Acknowledgment
References
Modeling of Bilateral Photoreceiver of the Concentrator Photovoltaic Thermal Module
Abstract
1 Introduction
2 Three-Dimensional Modeling of Bilateral Photoreceivers of Photovoltaic Thermal Solar Modules
3 Modeling and Visualization of the Thermal State of the Bilateral Photoreceiver of the Concentrator Photovoltaic Thermal Module
4 Conclusion
Acknowledgment
References
Formation of Surface of the Paraboloid Type Concentrator of Solar Radiation by the Method of Orthogonal Parquetting
Abstract
1 Introduction
2 Paraboloid Type Concentrators of the Solar Radiation
3 Method of the Orthogonal Parquetting of Surface of the Paraboloid Type Concentrator of the Solar Radiation
4 Paraboloid Type Concentrators of the Solar Radiation for Various Photoreceivers
5 Conclusion
Acknowledgment
References
Determination of the Efficiency of Photovoltaic Converters Adequate to Solar Radiation by Using Their Spectral Characteristics
Abstract
1 Introduction
2 Comparison of the Spectral Densities of Current Photoresponses of a Reference Silicon Photovoltaic Converter Under Various Exposure Conditions
3 Averaged Spectral Densities of Power of the Photoactive Part of the Spectrum for Silicon
4 Comparison of the Spectral Densities of Current Photoresponses of the Silicon Photovoltaic Converters
5 Dependence of the Efficiency of the Silicon Photovoltaic Converter Versus the Wavelength
6 Conclusion
Acknowledgment
References
Modeling of the Thermal State of Systems of Solar-Thermal Regeneration of Adsorbents
Abstract
1 Introduction
2 Research Problem Statement
3 Research Results
4 Conclusion
References
Economic Aspects and Factors of Solar Energy Development in Ukraine
Abstract
1 Conclusions
References
A Method for Ensuring Technical Feasibility of Distributed Balancing in Power Systems, Considering Peer-to-Peer Balancing Energy Trade
Abstract
1 Introduction
2 Notion of Power Flow and Its Feasibility
3 Proposed Method
3.1 Optimal Power Flow Problem (OPF)
3.2 Formulation (2) – Identification of Most Problematic Unit
3.2.1 Addition of New Constraint
3.3 Formulation (3) – Operator’s Corrective Actions
4 Assessment of the Method
4.1 Measure of Market Effectiveness
4.2 Simulation Results
4.2.1 Numerical Results
5 Conclusions and Discussion
References
Sustainable Optimization, Metaheuristics and Computing for Expert System
The Results of a Compromise Solution, Which Were Obtained on the Basis of the Method of Uncertain Lagrange Multipliers to Determine the Influence of Design Factors of the Elastic-Damping Mechanism in the Tractor Transmission
Abstract
1 Introduction
2 Purpose of Research
3 Materials and Methods
4 Discussion Results
5 Conclusions
Acknowledgments
References
Multiobjective Lévy-Flight Firefly Algorithm for Multiobjective Optimization
Abstract
1 Introduction
2 Multiobjective Lévy-Flight Firefly Algorithm
2.1 Original FA Algorithm
2.2 LFFA Algorithm
2.3 mLFFA Algorithm
3 Multiobjective Test Functions
4 Results and Discussions
5 Conclusions
References
Cooperative FPA-ATS Algorithm for Global Optimization
Abstract
1 Introduction
2 Cooperative FPA-ATS Algorithm
2.1 FPA Algorithm
2.2 ATS Algorithm
2.3 Cooperative FPA-ATS Algorithm
3 Benchmark Functions
4 Results and Discussions
5 Conclusions
References
Bayesian Optimization for Reverse Stress Testing
Abstract
1 Introduction
1.1 Problem Statement
2 Supporting Literature
2.1 Other Optimization Methods
3 Bayesian Optimization: Overview
3.1 The LCB Acquisition Function
4 Changes to LCB Acquisition
4.1 Rationalization of the ZERO Acquisition Function
4.2 Definition: Run Length
5 Results
5.1 Discussion
6 Conclusion and Future Scope
Appendix A
References
Modified Flower Pollination Algorithm for Function Optimization
Abstract
1 Introduction
2 FPA and MoFPA Algorithms
2.1 Original FPA Algorithm
2.2 Proposed MoFPA Algorithm
3 Selected Benchmark Functions
4 Results and Discussions
5 Conclusions
References
Improved Nature-Inspired Algorithms for Numeric Association Rule Mining
1 Introduction
2 Background Information
2.1 Association Rule Mining
2.2 Differential Evolution for ARM Using Mixed Attributes
3 Improved DE for NARM
4 Experiments and Results
4.1 Discussion
5 Conclusion
References
Verification of the Adequacy of the Topological Optimization Method of the Connecting Rod Shaping by the BESO Method in ANSYS APDL System
Abstract
1 Introduction
2 Analysis of Publications
3 Description of the Developed Optimization Method
4 Assessment of the Adequacy of the Developed Method
5 Conclusion
References
Method for Optimizing the Maintenance Process of Complex Technical Systems of the Railway Transport
Abstract
1 Introduction
2 Block Diagram of the Method for Optimizing the Parameters of the Maintenance System
3 Monitoring of the Technical Condition of the Technical System
4 Automatic Identification System “PALM”
5 Automation of Obtaining the Necessary Initial Data to Optimize Maintenance Periods
6 Set of Operations for Maintenance
7 Conclusion
References
Optimization of Power Supply System of Agricultural Enterprise with Solar Distributed Generation
Abstract
1 Introduction
2 Materials and Methods
3 Results and Their Discussion
4 Conclusion
References
Crack Detection of Iron and Steel Bar Using Natural Frequencies: A CFD Approach
Abstract
1 Introduction
2 Mathematical Modeling
2.1 Governing Equation
2.2 Equation of Free Vibration
2.3 Boundary Conditions
3 Computational Domain and Mesh Generation
4 Numerical Results and Discussion
5 Conclusion
6 Future Study
Acknowledgement
References
Belief Rule-Based Expert System to Identify the Crime Zones
Abstract
1 Introduction
2 Related Works
3 Methodology
3.1 Crime Zone Prediction
3.2 Crime Zone
3.3 Purpose of Identifying Crime Zone
3.4 Types of Crime Zone
3.5 Crime Zone Prediction Techniques
3.6 Factors Related to Crime Zone
3.7 Design of BRB for Identifying Crime Zone
4 Implementation of Expert System
4.1 Initial Rule Base for X
4.2 User Interface
5 Results and Discussion
6 Conclusion
References
Parameter Tuning of Nature-Inspired Meta-Heuristic Algorithms for PID Control of a Stabilized Gimbal
1 Introduction
2 Description of the Control System
2.1 The Plant
2.2 Fitness Function Used for Meta-Heuristic Algorithms
3 Nature-Inspired Algorithms
3.1 Flower Pollination Algorithm
3.2 Teaching-Learning-Based Optimization Algorithm
4 Computational Experimental Design
5 Results and Discussions
5.1 Parameter Optimization
5.2 Robustness Analysis
5.3 Suggestions for Future Practitioners
6 Conclusion
References
Solving an Integer Program by Using the Nonfeasible Basis Method Combined with the Cutting Plane Method
Abstract
1 Introduction
2 Preliminaries
2.1 Cutting Plane Method
2.2 Nonfeasible Basis Method
3 Nonfeasible-Basis Cutting Plane Method
4 Computational Results
4.1 Generated Problem
4.2 Computational Results
4.3 Discussion
5 Conclusion
References
A New Technique for Solving a 2-Dimensional Linear Program by Considering the Coefficient of Constraints
Abstract
1 Introduction
2 The Proposed Approach
3 The Illustrative Examples
4 The Computational Results
5 Conclusions
References
A New Integer Programming Model for Solving a School Bus Routing Problem with the Student Assignment
Abstract
1 Introduction
2 Problem Description and the Proposed Model
3 Experimental Results
4 Conclusion
Acknowledgement
References
Distributed Optimisation of Perfect Preventive Maintenance and Component Replacement Schedules Using SPEA2
Abstract
1 Introduction
1.1 Challenges with Sequential Evolutionary Algorithm
2 Related Work
3 The Distributed Solution
3.1 The Proposed Architecture
3.2 The Proposed Distributed Algorithm and Flowcharts
4 MoHPBPSPEA2 Implementation
5 Results
5.1 Discussion
6 Conclusion
References
A Framework for Traffic Sign Detection Based on Fuzzy Image Processing and Hu Features
1 Introduction
2 Related Works
3 Detection Framework
3.1 Preprocessing
3.2 Transformation Using Fuzzy Inference
3.3 Segmentation
3.4 Morphological Closing and Filtering
3.5 Shape Verification
3.6 Detection
4 Result and Analysis
4.1 Experiment Setup
4.2 Detection Results
4.3 Performance Evaluation
4.4 Comparisons of Classifiers in Shape Verification
5 Conclusion
References
Developing a Framework for Vehicle Detection, Tracking and Classification in Traffic Video Surveillance
Abstract
1 Introduction
2 Related Work
3 System Architecture and Design
3.1 Dataset
3.2 Sample Process
3.3 Histogram of Oriented Gradient (HOG) Feature Extraction
3.4 Support Vector Machine (SVM) Classifier
3.5 Non-linear Suppression
3.6 Kalman Filter for Tracking
3.7 Haar-Like Feature Extraction
3.8 Load Classifier
3.9 Length Calculation
3.10 Classification
4 Implementation and Experiments
4.1 Experimental Setup
4.2 Implementation
4.3 Performance Evaluation
5 Conclusion and Future Work
References
Advances in Algorithms, Modeling and Simulation for Intelligent Systems
Modeling and Simulation of Rectangular Sheet Membrane Using Computational Fluid Dynamics (CFD)
Abstract
1 Introduction
2 Experiment Descriptions
3 Computational Fluid Dynamics (CFD)
3.1 Assumptions
3.2 Governing Equations
3.3 Boundary Conditions
3.4 Convergence and Grid Independency Test
4 Results and Discussions
4.1 Selection of Optimum Grid Size
4.2 CFD Analysis
4.3 Error Analysis
4.4 Model Validation
5 Conclusions
References
End-to-End Supply Chain Costs Optimization Based on Material Touches Reduction
Abstract
1 Introduction
1.1 Motivation and Objective
1.2 Methodology
1.3 Contribution
2 Literature Review
3 Touches-Based Cost Optimization Methodology
3.1 Analysis of a SC Levels Using a Touches-Approach
3.2 Incidence Matrix Generation and Numerical Example
3.3 Touches Subsets Cost Calculation
3.4 Generation of the Mathematical Model in Its “Set Covering” Form
4 Results Analysis
5 Conclusions and Future Research
References
Computer Modeling Selection of Optimal Width of Rod Grip Header to the Combine Harvester
Abstract
1 Introduction
2 Materials and Methods
3 Discussion Results
4 Conclusions
References
An Integrated CNN-LSTM Model for Micro Hand Gesture Recognition
Abstract
1 Introduction
2 Related Work
3 Methodology
3.1 Data Collection
3.2 Data Pre-processing
3.3 CNN-LSTM Integrated Model
3.4 System Implementation
3.5 Cross Validation
4 Result and Discussion
4.1 CNN-LSTM Model on Standard Dataset
4.2 CNN-LSTM Model on Local Dataset
4.3 Comparison of Result
4.4 Cross Validation
4.5 Real Time Validation
5 Conclusion and Future Work
References
Analysis of the Cost of Varying Levels of User Perceived Quality for Internet Access
1 Introduction
2 Related Work
2.1 Abbreviations and Acronyms
3 Method
3.1 Determining Parameters Value for Experimental Analysis
3.2 Determining Formula for Bandwidth and Bandwidth Cost by Curve Fitting
3.3 Implementing MOS and Cost Relationship by Eliminating Bandwidth from Bandwidth vs Cost Formula
3.4 MOS and Bandwidth Cost
4 Results and Testing
4.1 Plotting MOS vs Bandwidth (Bps) with Different Packet Buffer Lengths
4.2 Plotting of MOS vs Bandwidth (Mbps) with Constant Packet Buffer Length Output
4.3 Plotting of Bandwidth vs Bandwidth Cost Relationship
4.4 Plotting of MOS vs Bandwidth Cost Relationship Output
4.5 Plotting of MOS and Bandwidth Cost Output with Different TCP Sources
5 Discussion and Further Work
References
Application of Customized Term Frequency-Inverse Document Frequency for Vietnamese Document Classification in Place of Lemmatization
Abstract
1 Introduction
2 Methodology
3 Data Collection and Pre-processing
3.1 Data Collection
3.2 Data Processing
4 Data Training
5 Conclusion and Perspectives
References
A New Topological Sorting Algorithm with Reduced Time Complexity
Abstract
1 Introduction
2 Related Works
3 Overview of the Proposed Algorithm
4 Algorithm
4.1 Pseudocode
4.2 Example
5 Analysis
5.1 Time Complexity
5.2 Space Complexity
6 Comparison with Related Works
7 Conclusions and Future Recommendation
References
Modeling and Analysis of Framework for the Implementation of a Virtual Workplace in Nigerian Universities Using Coloured Petri Nets
Abstract
1 Introduction
2 Framework for the Implementation of a Virtual Workplace in Nigerian Universities
3 Top-Level Module of the Virtual Workplace CPN Model
3.1 Conceptual Phase Module of the Virtual Workplace Model
3.2 Pre-implementation Phase Module of the Virtual Workplace CPN Model
3.3 Implementation Phase Module of the Virtual Workplace Model
3.4 Post-implementation Phase Module of the Virtual Workplace
4 Simulation and State Space Analysis
5 Conclusions
References
Modeling and Experimental Verification of Air-Thermal and Microwave-Convective Presowing Seed Treatment
Abstract
1 Introduction
2 Materials and Methods
3 Results and Discussion
3.1 Air-Thermal Treatment of Seeds
3.2 Dependence of Operation Modes of Seed Presowing Treatment on Parameters of Convective-Microwave Zone
3.3 Experimental Tests of Seed Presowing Treatment Modes
4 Conclusion
References
Modeling of Aluminum Profile Extrusion Yield: Pre-cut Billet Sizes
Abstract
1 Introduction
2 Literature Review
3 Research Methodology
3.1 Problem Formulation
3.2 Solution Procedure
4 Case Studies
4.1 Case Study 1
4.2 Case Study 2
5 Conclusion
References
Models for Forming Knowledge Databases for Decision Support Systems for Recognizing Cyberattacks
Abstract
1 Introduction
2 Review and Analysis of Previous Research
3 Objectives of the Study
4 Methods and Models
5 Computational Experiments
6 Discussion of the Results of the Computational Experiment
7 Further Research
8 Conclusions
References
Developing an Intelligent System for Recommending Products
Abstract
1 Introduction
2 Literature Review
3 Proposed Methodology
3.1 Dataset Description
3.2 Pre-processing
3.2.1 Initial Database Based Pre-processing
3.2.2 Cleaning Text
3.2.3 Final Pre-processing and Training
3.3 Training
3.4 Testing and Rating Prediction
3.5 Product Recommendation
4 Experimental Results
5 Conclusion
References
Branch Cut and Free Algorithm for the General Linear Integer Problem
Abstract
1 Introduction
2 The General LIP
3 Variable Integral Restriction Freeing Theorem
4 Continuous Optimal Solution
5 Surrogate Constraint
6 Measuring Complexity Reduction
6.1 Numerical Illustration 1
6.1.1 Freeing the Selected Variable
7 Branch and Cut Algorithm
7.1 Proposed Branch, Cut and Free Algorithm
7.2 Verification of Optimality
7.2.1 Searching for the Optimal Integer Point
7.2.2 Optimality Verification
7.3 Numerical Illustration 2 [18]
8 Computational Experience
9 Conclusions
Acknowledgments
References
Resilience in Healthcare Supply Chains
1 Introduction
2 Literature Review
3 Health Care Supply Chain Design
4 Mathematical Model
4.1 Parameters
4.2 Decision Variables
5 Case Study
6 Computational Results
7 Conclusions
References
A Comprehensive Evaluation of Environmental Projects Through a Multiparadigm Modeling Approach
Abstract
1 Introduction
2 Methodology
2.1 Discrete Event Simulation
2.2 Dynamic Systems Modeling
2.3 Real Options
2.4 Cost-Effectiveness Evaluation
2.5 Multiparadigm Evaluation Proposal
3 Case Study
4 Conclusions
References
Plant Leaf Disease Recognition Using Histogram Based Gradient Boosting Classifier
1 Introduction
2 Related Work
3 Proposed Framework for Recognizing Plant Leaf Diseases
3.1 Dataset
3.2 Enhancing Image
3.3 Clustering by Adaptive Centroid Based Segmentation
3.4 Selecting Features Using HLTP
3.5 Classifying Using Histogram-Based Gradient Boosting Classifier
4 Results and Observations
5 Conclusion and Future Work
References
Exploring the Machine Learning Algorithms to Find the Best Features for Predicting the Breast Cancer and Its Recurrence
1 Introduction
2 Literature Review
3 Methodology
3.1 Data Acquisition
3.2 Data Pre-processing
3.3 Model Building and Analysis
4 Discussion
5 Conclusion
References
Exploring the Machine Learning Algorithms to Find the Best Features for Predicting the Risk of Cardiovascular Diseases
1 Introduction
2 Literature Review
3 Methodology
3.1 Data Acquisition
3.2 Data Pre-processing
3.3 Analyzing ML Algorithms
4 Discussion
5 Conclusion
References
Searching Process Using Boyer Moore Algorithm in Digital Library
Abstract
1 Introduction
2 Related Work
3 Background Theory
3.1 Boyer Moore (BM) Algorithm
3.1.1 Bad-Symbol Shift
3.1.2 Good-Suffix Shift
3.1.3 Algorithm Steps
3.2 Horspool’s Algorithm
4 Discussion and Result
4.1 Searching Process with the BM Algorithm
4.2 Searching Process with the Horspool’s Algorithm
4.3 Comparison Between BM and Horspool Algorithms
5 Conclusion
Acknowledgments
References
Application of Machine Learning and Artificial Intelligence Technology
Gender Classification from Inertial Sensor-Based Gait Dataset
Abstract
1 Introduction
2 Related Work
3 Dataset Description
4 Methodology
4.1 Feature Extraction
4.2 Feature Selection
4.3 Classification of Gender
4.4 Model Training
5 Result and Discussion
6 Conclusion and Future Work
References
Lévy-Flight Intensified Current Search for Multimodal Function Minimization
Abstract
1 Introduction
2 ICuS and LFICuS Algorithms
2.1 ICuS Algorithm
2.2 Proposed LFICuS Algorithm
3 Benchmark Functions
4 Results and Discussions
5 Conclusions
References
Cancer Cell Segmentation Based on Unsupervised Clustering and Deep Learning
Abstract
1 Introduction
2 Literature Review
3 Proposed Methodology
3.1 Preprocessing
3.2 Segmentation
3.3 Median Filtering
3.4 Classification
3.5 Trace and Detect Region
3.6 Area Calculation
4 Conclusions
References
Automated Student Attendance Monitoring System Using Face Recognition
Abstract
1 Introduction
2 Proposed Methodology
2.1 Hog Features Extraction
2.2 Facial Landmark Estimation
2.3 Face Encoding Using CNN
2.4 Face Recognition Using SVM
3 Result and Discussion
4 Conclusion and Future Scopes
References
Machine Learning Approach to Predict the Second-Life Capacity of Discarded EV Batteries for Microgrid Applications
Abstract
1 Introduction
2 Battery Data Collection
3 Input, Output, and Pre-processing of Data
3.1 Case 1
3.2 Case 2
4 Machine Learning Model and Activation Functions Used
5 System Configuration and Software Used
6 Result Analysis
6.1 Result Analysis for Case 1
6.2 Result Analysis for Case 2
6.3 Comparative Analysis Between Case 1 and Case 2
7 Conclusion
References
Classification of Cultural Heritage Mosque of Bangladesh Using CNN and Keras Model
Abstract
1 Introduction
2 Problem Statement
2.1 Image Classification
2.2 Deep Learning
2.3 Convolutional Neural Network and Its Characteristics
2.4 Hyperparameter Description and Optimization
2.5 Activation Function
3 Materials and Methods
3.1 Dataset
3.2 RStudio
3.3 Methodology
3.4 Keras Sequential Model with CNN
4 Result and Discussion
4.1 Result
4.2 Discussion
5 Advantages and Limitations
6 Conclusion
7 Future Scope
References
Classification of Orthopedic Patients Using Supervised Machine Learning Techniques
Abstract
1 Introduction
2 Background
3 Literature Review
4 Methodology
4.1 Learning Technique
4.2 Machine Learning Algorithms
4.3 Evaluation Metrics
4.4 Dataset
5 Results
6 Conclusion
References
Long Short-Term Memory Networks for Driver Drowsiness and Stress Prediction
Abstract
1 Introduction
1.1 Literature Review
1.2 Motivation and Contributions
2 Proposed LSTM Network for Driver Drowsiness and Stress Prediction
2.1 Driver Drowsiness and Stress Databases
2.2 Data Pre-processing and ECG Beat Segmentation
2.3 LSTM Network Based Driver Drowsiness and Stress Prediction
3 Analysis and Results
3.1 Performance Evaluation Under Varying Times of Advance Prediction
3.2 Performance Comparison Between Proposed Work and Existing Works
3.3 Future Research Directions
4 Conclusion
Acknowledgments
References
Optimal Generation Mix of Hybrid Renewable Energy System Employing Hybrid Optimization Algorithm
1 Introduction
1.1 Background
2 Methodology
2.1 System Model of HRES
2.2 Formulation of the Optimization Problem
3 The Hybrid Algorithm
4 Result and Discussion
4.1 Descriptive Analysis of the Algorithms
4.2 Normality Tests of the Algorithms
4.3 Optimum Configuration of the HRES
5 Conclusion and Future Work
References
Activity Identification from Natural Images Using Deep CNN
Abstract
1 Introduction
2 Literature Review
3 Methods and Proposed Methodologies
3.1 Dataset
3.2 Preprocessing
3.3 Convolutional Neural Network
3.4 Model 1 (One Layer CNN)
3.5 Model 2 (Proposed One Layer CNN)
3.6 Model 3 (Traditional 4 Layer CNN)
3.7 Model 4 (Proposed 4 Layer CNN)
4 Experimental Results
5 Conclusion
References
Learning Success Prediction Model for Early Age Children Using Educational Games and Advanced Data Analytics
Abstract
1 Introduction
2 Research Preparation
3 Model Development
4 Conclusion
References
Advanced Analytics Techniques for Customer Activation and Retention in Online Retail
Abstract
1 Introduction
2 Customer Segmentation and Analytics
2.1 Customer Lifetime Value (Scoring)
3 Research
3.1 Parameters Selection
3.2 Churn Prevention Process and Customer Reactivation Process
3.3 Forecasting Scenarios
4 Results
5 Conclusion
References
An Approach for Detecting Pneumonia from Chest X-Ray Image Using Convolution Neural Network
Abstract
1 Introduction
2 Related Work
3 Proposed Methods
3.1 Dataset
3.2 Proposed Model
4 Experimental Result
5 Conclusion and Future Work
References
An Analytical Study of Influencing Factors on Consumers' Behaviors in Facebook Using ANN and RF
1 Introduction
2 Related Works
3 Methodology
3.1 Sample Collection and Dataset
3.2 Random Forest Algorithm
3.3 Artificial Neural Network
4 Experiment with Models
5 Results and Observation
6 Future Research Directions and Conclusion
References
Autism Spectrum Disorder Prognosis Using Machine Learning Algorithms: A Comparative Study
1 Introduction
2 Literature Review
3 Proposed Framework
3.1 Dataset Overview
3.2 Data Pre-processing
3.3 Data Analysis
3.4 Prediction Model Development
3.5 Evaluating the Prediction Model
4 Model Progression
4.1 Prediction Model Based on DNN
4.2 Prediction Model Based on SVM
4.3 Prediction Model Based on KNN
5 Evaluating the Performance and Result
6 Conclusion
References
Multidimensional Failure Analysis Based on Data Fusion from Various Sources Using TextMining Techniques
Abstract
1 Introduction
2 Description of Problems
3 Methodology
3.1 Data Preparation
3.2 Lemmatization
3.3 Feedback Loop
3.4 Categorization
3.5 Grouping
4 Conclusion
References
Road Quality Classification Adaptive to Vehicle Speed Based on Driving Data from Heavy Duty Mining Vehicles
Abstract
1 Introduction
2 Input Data
3 Algorithm Assumptions
4 Methodology
4.1 Data Pre-processing
4.2 Idle Detection
4.3 Data Preparation for Classification
4.4 Road Quality Classifier Creation
4.5 Road Quality Classification
5 Experimental Results
6 Summary
Acknowledgements
References
Fabric Defect Detection System
Abstract
1 Introduction
2 Defects in Fabric
3 Proposed Methodology for Defect Detection
3.1 Image Segmentation
3.2 Feature Extraction
3.3 PCA (Principal Component Analysis)
3.4 Image Classification
4 Application of the System and Analysis
4.1 Proposed Methodology
5 Conclusion and Future Work
References
Alzheimer’s Disease Detection Using CNN Based on Effective Dimensionality Reduction Approach
Abstract
1 Introduction
2 Dataset
3 Proposed Methodology
3.1 Preprocessing
3.2 Feature Reduction
3.3 Classification
3.4 Performance Evaluation
4 Experimental Result
5 Conclusions and Future Scope
References
An Analytical Intelligence Model to Discontinue Products in a Transnational Company
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Conclusions
References
Graph Neural Networks in Cheminformatics
Abstract
1 Introduction
2 Cheminformatics in Drug Discovery
2.1 Fundamental Concepts in Cheminformatics
2.2 Machine Learning Approaches in Cheminformatics
3 Graph Neural Networks
3.1 Basic Concepts of Graphs
3.2 Vanilla Graph Neural Networks
3.3 Graph Convolutional Neural Networks
3.4 Graph Recurrent Neural Networks
3.5 Graph Autoencoders
4 Graph Neural Networks Applications in Cheminformatics
5 Discussion
6 Conclusion
Acknowledgement
References
Academic and Uncertainty Attributes in Predicting Student Performance
Abstract
1 Introduction
2 Related Works
3 Methodology
3.1 Student Attributes
3.2 Academic Attributes
3.3 Uncertainty Attributes
3.4 Dataset
3.5 Prediction Model
3.6 Dispositional Learning Analytics
3.7 Long Short-Term Memory and Bidirectional (LSTM)
3.8 Bidirectional Long Short Term Memory Networks (BLSTM)
3.9 Proposed Application
4 Implementation
5 Conclusion
Acknowledgment
References
Captivating Profitable Applications of Artificial Intelligence in Agriculture Management
Abstract
1 Introduction
2 How AI Works in Agriculture
3 Artificial Intelligence in Agriculture
4 Applications of Artificial Intelligence in Agriculture
4.1 Automated Irrigation System (Smart Irrigation)
4.2 Drones for Spraying
4.3 Remote Sensing-Based Crop Health Monitoring
4.4 Autonomous Early Warning System for Fruit-Fly
4.5 Driverless Tractor
4.6 Face Recognition System for Domestic Cattle
4.7 AI Paired IoT Device
4.8 Chatbots
4.9 Stethoscope
4.10 Intello Labs
4.11 AI-Based Precision Agriculture
4.12 Crop Phenotyping and Analysis
4.13 Remote Sensing-Based Water Resources Mapping
4.14 AI Technology in Aquaculture
4.15 AI in Food Engineering and Cold Chain Logistics
4.16 Big Data and Cloud Computing
4.17 Automatic Navigation and Self-driving Technology
4.18 Other Applications of AI in Agriculture
5 Future of AI in Agriculture
6 Conclusion
References
Holistic IoT, Deep Learning and Information Technology
Mosquito Classification Using Convolutional Neural Network with Data Augmentation
Abstract
1 Introduction
2 Related Work
3 Methodology
4 Dataset and Data Augmentation
4.1 Data Acquisition
4.2 Characteristics of Aedes, Anopheles and Culex
4.3 Data Augmentation
5 Implementation Process
6 Result and Discussion
7 Conclusion and Future Works
References
Recommendation System for E-commerce Using Alternating Least Squares (ALS) on Apache Spark
Abstract
1 Introduction
2 Related Work
3 Alternating Least Squares (ALS)
4 Predictive Accuracy Metrics
5 System Prototype
6 System Architecture
7 Experimental Result and Discussion
8 Conclusion and Future Work
References
An Interactive Computer System with Gesture-Based Mouse and Keyboard
Abstract
1 Introduction
2 Literature Review
2.1 Virtual Keyboard
2.2 Virtual Mouse
3 Methodology
3.1 Problem Description
3.2 Virtual Keyboard
3.3 Virtual Mouse
4 Results and Discussions
4.1 Virtual Keyboard
4.2 Virtual Mouse
5 Conclusion and Future Work
References
Surface Water Quality Assessment and Determination of Drinking Water Quality Index by Adopting Multi Criteria Decision Analysis Techniques
Abstract
1 Introduction
1.1 Motivation
1.2 Objectives of the Study
2 Study Area
3 Methodology
4 Result and Discussion
5 Sensitivity Analysis of MCDA Based WAWQI
6 Conclusion
References
An Approach for Multi-human Pose Recognition and Classification Using Multiclass SVM
Abstract
1 Introduction
2 Related Work
3 Model
3.1 Steps
3.2 Proposed Method
4 Discussion
5 Conclusion and Future Work
References
Privacy Violation Issues in Re-publication of Modification Datasets
Abstract
1 Introduction
2 Computer Security Versus Privacy Preservation
3 Anonymization Model
3.1 l-Diversity [2]
4 Privacy Violation Scenarios in Re-publication of Modification Datasets
4.1 Privacy Violation Issues in Released Datasets Based on the Difference of Quasi-Identifier Values
4.2 Privacy Violation Issues in Released Datasets Based on the Difference Sensitive Values
5 Proposed Privacy Preservation Model
5.1 β-Diversity
5.2 Data Utility
5.3 Proposed Algorithm
6 Experiment
6.1 Experiment Setup
6.2 Experimental Results and Discussion
6.2.1 Effectiveness
6.2.2 Efficiency
7 Conclusion
References
Using Non-straight Line Updates in Shuffled Frog Leaping Algorithm
Abstract
1 Introduction
2 Materials and Methods
2.1 Shuffled Frog Leaping Algorithm (SFLA)
3 The Drunken SFLA
3.1 The Improved Drunken SFLA
3.2 The Drift Angle (θ) is the First Parameter of the Proposed Method
3.3 The Standard Deviation (σ) as the Second Parameter
4 Results and Discussion
4.1 The Drift Angle (θ) During 0 to 90
4.2 The Standard Deviation (σ) During 0.1 to 0.9
4.3 The Setting of Experimental Parameters
4.4 The Testing Function
5 Experiment Method and Results
6 Conclusion
References
Efficient Advertisement Slogan Detection and Classification Using a Hierarchical BERT and BiLSTM-BERT Ensemble Model
1 Introduction
2 Literature Review
3 Slogan Detection and Context Classification Method
3.1 Data Collection and Preprocessing
3.2 Slogan Detection
3.3 Slogan Feature Analysis for Context Classification
3.4 Context Classification
4 Experiment and Result Analysis
4.1 Dataset and Evaluation Metrics
4.2 Hyper Parameters Setting
4.3 Result Analysis
5 Conclusion
References
Chronic Kidney Disease (CKD) Prediction Using Data Mining Techniques
Abstract
1 Introduction
2 Literature Review
3 Methodology
3.1 Chronic Kidney Disease (CKD)
3.2 The Source of the CKD Dataset
3.3 Matrix and Research Hypothesis
3.4 Screenshots of the Dataset
4 Results and Discussion
5 Conclusion and Future Work
References
Multi-classification of Brain Tumor Images Based on Hybrid Feature Extraction Method
Abstract
1 Introduction
2 Proposed Approach for Multi-class Tumor Classification
2.1 Brain Tumor Dataset
2.2 Image Preprocessing Based on Min-Max Normalization
2.3 SWT and GLCM Based Feature Extraction
2.3.1 Stationary Wavelet Transform
2.3.2 Gray Level Co-occurrence Matrix
2.4 Image Classification Using ANN
3 Results Analysis
4 Conclusion
References
An Evolutionary Population Census Application Through Mobile Crowdsourcing
Abstract
1 Introduction
2 Terminologies Related to MCS
3 Methodology
3.1 Proposed System
3.2 Census Procedure
3.3 Database Design
4 Implementation Details
4.1 Implementation of Application Front-End
4.1.1 Phone Verification Module
4.1.2 Registration Module
4.1.3 Login Module
4.1.4 Dashboard and User Location Module
4.1.5 Information Collection and Uploading Module
4.1.6 User Information Checking Module
4.1.7 Region Information Checking Module
5 Experimental Result and Analysis
5.1 Information and Calculation
5.1.1 Calculation Process
5.2 Survey Chart
6 Conclusion
6.1 Future Recommendations
References
IoT-Enabled Lifelogging Architecture Model to Leverage Healthcare Systems
1 Introduction
2 Background and Present State of the System
3 System Description
3.1 System Architecture
3.2 Logic Diagram
3.3 Communication Between Server and Client
3.4 Set up the Value of the Sensors
4 System Implementation
4.1 Hardware Implementation
4.2 Steps of Implementation
5 Experimental Results and Evaluation
6 Conclusion
References
An Improved Boolean Load Matrix-Based Frequent Pattern Mining
Abstract
1 Introduction
2 Related Work
3 Proposed Algorithm
4 Discussion of the Improved Algorithm Operation Process
5 Performance Evaluation and Complexity Analysis
6 Conclusion
References
Exploring CTC Based End-To-End Techniques for Myanmar Speech Recognition
Abstract
1 Introduction
2 Related Work
3 End-To-End CTC Model
3.1 Deep Convolutional LSTM Network
3.2 Connectionist Temporal Classification (CTC)
4 Experimental Setup
4.1 Description for Myanmar Speech Corpus
4.2 Training Setup
5 Experimental Results
5.1 Effect of Using Different Label Encoding Methods
5.2 Effect of Using Convolutional Layers
5.3 Effect of Using Different Numbers of BLSTM Layers
6 Conclusion
Acknowledgments
References
IoT Based Bidirectional Speed Control and Monitoring of Single Phase Induction Motors
Abstract
1 Introduction
2 Proposed System Design
3 Implementation of the Proposed System
3.1 Speed Control Scheme for Induction Motor
3.2 ON/OFF and Bidirectional Rotation Control
3.3 Speed Measurement and Motor Status Monitoring
3.4 IoT Based Communication Platform
4 Result and Discussion
5 Conclusion
References
Missing Image Data Reconstruction Based on Least-Squares Approach with Randomized SVD
Abstract
1 Introduction
2 Background Knowledge
2.1 Singular Value Decomposition
2.2 Randomized Singular Value Decomposition
3 Missing Image Data Reconstruction Approach
4 Numerical Experiments
5 Conclusions
Acknowledgments
References
An Automated Candidate Selection System Using Bangla Language Processing
1 Introduction
2 Related Work
3 Methodology
3.1 Data Preparation Module
3.2 Score Generation Module
4 Implementation and Experimental Results
4.1 Experimental Setup
4.2 Implementation and Performance Evaluation
5 Conclusion
References
AutoMove: An End-to-End Deep Learning System for Self-driving Vehicles
Abstract
1 Introduction
1.1 Background of the Work
2 Literature Review
2.1 Image Recognition
2.1.1 Traffic Sign Recognition
2.2 Deep Learning Neural Network
2.2.1 Convolutional Neural Network (CNN)
2.3 Object Detection
2.3.1 Faster R-CNN
2.3.2 YOLO
3 Methodology
3.1 System Architecture
4 Implementation
4.1 Lane Detection
4.2 Speed Detection
4.3 Road Angle Detection
4.4 Dataset
4.5 Autonomous Navigation
5 Conclusion
Acknowledgement
References
An Efficient Machine Learning-Based Decision-Level Fusion Model to Predict Cardiovascular Disease
1 Introduction
2 Related Study
3 Proposed Solution
3.1 UCI Heart Diseases Dataset
3.2 Data Pre-processing
3.3 Feature Scaling
3.4 Feature Selection
3.5 Training Phase
3.6 Decision Level Fusion
3.7 Evaluation Metrics
4 Performance Evaluation
5 Performance Comparison with Previous Work
6 Conclusion
References
Towards POS Tagging Methods for Bengali Language: A Comparative Analysis
1 Introduction
2 Related Work
3 Methodology
3.1 Tagged Corpus
3.2 Tokenization
3.3 Train/Test Corpus Creation
3.4 POS Tagging Model Generation
3.5 Predicted Tagged Sentence Generation
4 Evaluation Results and Analysis
5 Conclusion
References
BEmoD: Development of Bengali Emotion Dataset for Classifying Expressions of Emotion in Texts
1 Introduction
2 Related Work
3 Properties of Emotion Expressions in Bengali
4 Dataset Development Processes
4.1 Data Crawling
4.2 Preprocessing
4.3 Data Annotation
4.4 Label Verification
5 Analysis of BEmoD
6 Evaluation of BEmoD
6.1 Comparison with Existing Bengali Emotion Datasets
7 Conclusion
References
Advances in Engineering and Technology
Study of the Distribution Uniformity Coefficient of Microwave Field of 6 Sources in the Area of Microwave-Convective Impact
Abstract
1 Introduction
2 Main Part
2.1 Research Method
2.2 Modelling
2.3 Experimental Research
2.4 Results and Discussion
3 Conclusions
References
Floor-Mounted Heating of Piglets with the Use of Thermoelectricity
Abstract
1 Introduction
2 Materials and Methods
3 Results and Discussion
4 Conclusions
References
The Rationale for Using Improved Flame Cultivator for Weed Control
Abstract
1 Introduction
2 Main Part
3 Conclusion
References
The Lighting Plan: From a Sector-Specific Urbanistic Instrument to an Opportunity of Enhancement of the Urban Space for Improving Quality of Life
Abstract
1 Introduction
2 The Lighting Plan
3 Proposal for an Experimental Methodological Analysis
4 Two Case Studies in Comparison: Florence and Brussels
5 Results of the Research and Future Visions
References
PID Controller Design for BLDC Motor Speed Control System by Lévy-Flight Intensified Current Search
Abstract
1 Introduction
2 BLDC Motor Control Loop
2.1 BLDC Motor Model
2.2 PID Controller
3 Problem Formulation
4 Results and Discussions
5 Conclusions
References
Intellectualized Control System of Technological Processes of an Experimental Biogas Plant with Improved System for Preliminary Preparation of Initial Waste
Abstract
1 Introduction
2 Background
3 Results of Investigations
4 Conclusion
5 Acknowledgment
References
Way for Intensifying the Process of Anaerobic Bioconversion by Preliminary Hydrolysis and Increasing Solid Retention Time
Abstract
1 Introduction
2 Background
3 Results of Investigations
4 Conclusion
Acknowledgment
References
Evaluation of Technical Damage Caused by Failures of Electric Motors
Abstract
1 Introduction
2 Technical Damages
3 Results and Discussion
4 Conclusions
References
Development of a Prototype Dry Heat Sterilizer for Pharmaceuticals Industry
Abstract
1 Introduction
2 Proposed DHS System
3 Implementation of DHS
4 Performance Analysis of Implemented DHS
4.1 Alarm Check of the DHS
5 Conclusion
6 Recommendation for Future Work
References
Optimization of Parameters of Pre-sowing Seed Treatment in Magnetic Field
Abstract
1 Introduction
2 Background
3 Results of the Research
4 Conclusion
References
Development of a Fast Response Combustion Performance Monitoring, Prediction, and Optimization Tool for Power Plants
Abstract
1 Introduction
2 Methodology
2.1 Overview of the Proposed Application Architecture
2.2 Computational Fluid Dynamics (CFD) Setup
2.3 Reduced Order Modelling (ROM)
2.4 Machine Learning
3 Results and Discussion
3.1 CFD ROM Capabilities
3.2 Machine Learning Capabilities
4 Conclusion
References
Industry 4.0 Approaches for Supply Chains Facing COVID-19: A Brief Literature Review
Abstract
1 Introduction
2 Current Enabling Technologies
3 Supply Chain Characteristics
3.1 Humanitarian Supply Chains
3.2 Commercial Supply Chains
4 Solution Characteristics
4.1 Supply Chain Agility and Resilience
4.2 Supply Chain Productivity
5 Direct Supply Chain Impacts & Research Strategies
5.1 Vaccine Development and Distribution
5.2 Personal Protective Equipment (PPE)
6 Conclusion
References
Ontological Aspects of Developing Robust Control Systems for Technological Objects
Abstract
1 Introduction
2 Design of the Ontological System
2.1 Concept of an Ontological System
2.2 Engineering Features
2.3 Model of the Ontological System
3 Results and Discussion
4 Conclusion
References
A New Initial Basis for Solving the Blending Problem Without Using Artificial Variables
Abstract
1 Introduction
2 Blending Problem
3 The Proposed Method
4 An Illustrative Example
5 Conclusion
Acknowledgment
References
Review of the Information that is Previously Needed to Include Traceability in a Global Supply Chain
Abstract
1 Introduction
2 Background
3 Methodology
3.1 Identification of Variables and Critical Points
3.2 Supply Chain Modeling
3.3 Technology for Traceability
4 Conclusions
References
Online Technology: Effective Contributor to Academic Writing
Abstract
1 Introduction
2 Literature Review
3 Theoretical Motivation
4 Methodology and Research Plan
4.1 Present Study
4.2 Mini Experiment
4.3 Techniques and Tools of Variables Selection, and Data Collection
5 Results and Discussion
5.1 Descriptive Statistics of the Variables in the Model
5.2 Results of Models
5.3 Result of the Correlation Matrix
6 Conclusion and Policy Implications
Acknowledgments
References
A Secured Electronic Voting System Using Blockchain
1 Introduction
2 Related Works
3 Methodology
3.1 Blockchain Structure
3.2 Voter Verification and Authentication
3.3 Blockchain Transactions
3.4 Hashing the Blocks
3.5 Proposed Mechanism for Consensus
3.6 Conflicts in the Network
4 Implementation and Experiment
4.1 Comparative Analysis of Using Proposed Mechanism
5 Conclusion
References
Preconditions for Optimizing Primary Milk Processing
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusions
References
Optimization of Compost Production Technology
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusions
References
Author Index


Advances in Intelligent Systems and Computing 1324

Pandian Vasant, Ivan Zelinka, Gerhard-Wilhelm Weber (Editors)

Intelligent Computing and Optimization: Proceedings of the 3rd International Conference on Intelligent Computing and Optimization 2020 (ICO 2020)

Advances in Intelligent Systems and Computing Volume 1324

Series Editor: Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors:
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing, Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University, Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro, Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management, Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong

The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, life science are covered. The list of topics spans all the areas of modern intelligent systems and computing such as: computational intelligence, soft computing including neural networks, fuzzy systems, evolutionary computing and the fusion of these paradigms, social intelligence, ambient intelligence, computational neuroscience, artificial life, virtual worlds and society, cognitive science and systems, Perception and Vision, DNA and immune based systems, self-organizing and adaptive systems, e-Learning and teaching, human-centered and human-centric computing, recommender systems, intelligent control, robotics and mechatronics including human-machine teaming, knowledge-based paradigms, learning paradigms, machine ethics, intelligent data analysis, knowledge management, intelligent agents, intelligent decision making and support, intelligent network security, trust management, interactive entertainment, Web intelligence and multimedia.

The publications within “Advances in Intelligent Systems and Computing” are primarily proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results.

Indexed by SCOPUS, DBLP, EI Compendex, INSPEC, WTI Frankfurt eG, zbMATH, Japanese Science and Technology Agency (JST), SCImago. All books published in the series are submitted for consideration in Web of Science.

More information about this series at http://www.springer.com/series/11156

Pandian Vasant, Ivan Zelinka, Gerhard-Wilhelm Weber



Editors

Intelligent Computing and Optimization: Proceedings of the 3rd International Conference on Intelligent Computing and Optimization 2020 (ICO 2020)


Editors
Pandian Vasant, Department of Fundamental and Applied Sciences, Universiti Teknologi Petronas, Tronoh, Perak, Malaysia

Ivan Zelinka, Faculty of Electrical Engineering and Computer Science, VŠB TU Ostrava, Ostrava, Czech Republic

Gerhard-Wilhelm Weber, Faculty of Engineering Management, Poznan University of Technology, Poznan, Poland

ISSN 2194-5357 ISSN 2194-5365 (electronic)
Advances in Intelligent Systems and Computing
ISBN 978-3-030-68153-1 ISBN 978-3-030-68154-8 (eBook)
https://doi.org/10.1007/978-3-030-68154-8

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

The third edition of the International Conference on Intelligent Computing and Optimization (ICO), ICO’2020, was held via an online platform due to the COVID-19 pandemic; the physical conference will be held at G Hua Hin Resort & Mall, Hua Hin, Thailand, once the COVID-19 pandemic has subsided. The objective of the international conference is to bring together research leaders, experts, and scientists in the research areas of Intelligent Computing and Optimization from all over the globe to share their knowledge and experience of current research achievements in these fields. The conference provides a golden opportunity for the global research community to interact and share novel research results, findings, and innovative discoveries among colleagues and friends. The proceedings of ICO’2020 are published by Springer in Advances in Intelligent Systems and Computing.

Almost 100 authors submitted their full papers for ICO’2020. They represent more than 50 countries, such as Bangladesh, Canada, China, Croatia, France, Greece, Hong Kong, Italy, India, Indonesia, Iraq, Japan, Malaysia, Mauritius, Mexico, Myanmar, Namibia, Nigeria, Oman, Poland, Russia, Slovenia, South Africa, Sweden, Taiwan, Thailand, Turkmenistan, Ukraine, USA, UK, Vietnam, and others. This worldwide representation clearly demonstrates the growing interest of the research community in our conference. For this edition, the conference proceedings cover the innovative, original, and creative research areas of sustainability, smart cities, metaheuristic optimization, cybersecurity, blockchain, big data analytics, IoT, renewable energy, artificial intelligence, Industry 4.0, modeling, and simulation.

The organizing committee would like to sincerely thank all the authors and reviewers for their wonderful contributions to this conference. The best, high-quality papers were selected and reviewed by the International Program Committee for publication in Advances in Intelligent Systems and Computing by Springer. ICO’2020 presents enlightening contributions for research scholars across the planet in the areas of innovative computing, novel optimization techniques, and cutting-edge methodologies and applications.

This conference could not have been organized without the strong support and help from the committee members of ICO’2020. We would like to sincerely thank Prof. Igor Litvinchev (Nuevo Leon State University (UANL), Mexico), Prof. Rustem Popa (Dunarea de Jos University in Galati, Romania), Prof. Jose Antonio Marmolejo (Universidad Panamericana, Mexico), and Dr. J. Joshua Thomas (UOW Malaysia KDU Penang University College, Malaysia) for their great help and support in organizing the conference. We also appreciate the valuable guidance and great contributions from Dr. J. Joshua Thomas (UOW Malaysia KDU Penang University College, Malaysia), Prof. Gerhard-Wilhelm Weber (Poznan University of Technology, Poland; Middle East Technical University, Turkey), Prof. Rustem Popa (“Dunarea de Jos” University in Galati, Romania), Prof. Valeriy Kharchenko (Federal Scientific Agro-engineering Center VIM, Russia), Dr. Vladimir Panchenko (Russian University of Transport, Russia), Prof. Ivan Zelinka (VSB-TU Ostrava, Czech Republic), Prof. Jose Antonio Marmolejo (Universidad Anahuac Mexico Norte, Mexico), Prof. Roman Rodriguez-Aguilar (Universidad Panamericana, Mexico), Prof. Ugo Fiore (Federico II University, Italy), Dr. Mukhdeep Singh Manshahia (Punjabi University Patiala, India), Mr. K. C. Choo (CO2 Networks, Malaysia), Prof. Celso C. Ribeiro (Brazilian Academy of Sciences, Brazil), Prof. Sergei Senkevich (Federal Scientific Agro-engineering Center VIM, Russia), Prof. Mingcong Deng (Tokyo University of Agriculture and Technology, Japan), Dr. Kwok Tai Chui (Open University of Hong Kong, Hong Kong), Prof. Hui Ming Wee (Chung Yuan Christian University, Taiwan), Prof. Elias Munapo (North West University, South Africa), Prof. M. Moshiul Hoque (Chittagong University of Engineering & Technology, Bangladesh), and Prof. Mohammad Shamsul Arefin (Chittagong University of Engineering and Technology, Bangladesh). Finally, we would like to convey our sincerest thanks to Prof. Dr. Janusz Kacprzyk, Dr. Thomas Ditzinger, and Ms. Jayarani Premkumar of Springer Nature for their wonderful help and support in publishing the ICO’2020 conference proceedings in Advances in Intelligent Systems and Computing.

December 2020

Pandian Vasant, Gerhard-Wilhelm Weber, Ivan Zelinka

Contents

Sustainable Clean Energy System

Adaptive Neuro-Fuzzy Inference Based Modeling of Wind Energy Harvesting System for Remote Areas . . . 3
Tigilu Mitiku and Mukhdeep Singh Manshahia

Features of Distributed Energy Integration in Agriculture . . . 19
Alexander V. Vinogradov, Dmitry A. Tikhomirov, Vadim E. Bolshev, Alina V. Vinogradova, Nikolay S. Sorokin, Maksim V. Borodin, Vadim A. Chernishov, Igor O. Golikov, and Alexey V. Bukreev

Concept of Multi-contact Switching System . . . 28
Alexander V. Vinogradov, Dmitry A. Tikhomirov, Alina V. Vinogradova, Alexander A. Lansberg, Nikolay S. Sorokin, Roman P. Belikov, Vadim E. Bolshev, Igor O. Golikov, and Maksim V. Borodin

The Design of Optimum Modes of Grain Drying in Microwave–Convective Effect . . . 36
Dmitry Budnikov

Isolated Agroecosystems as a Way to Solve the Problems of Feed, Ecology and Energy Supply of Livestock Farming . . . 43
Aleksey N. Vasiliev, Gennady N. Samarin, Aleksey Al. Vasiliev, and Aleksandr A. Belov

Laboratory-Scale Implementation of Ethereum Based Decentralized Application for Solar Energy Trading . . . 53
Patiphan Thupphae and Weerakorn Ongsakul

Solar Module with Photoreceiver Combined with Concentrator . . . 63
Vladimir Panchenko and Andrey Kovalev


Modeling of Bilateral Photoreceiver of the Concentrator Photovoltaic Thermal Module . . . 73
Vladimir Panchenko, Sergey Chirskiy, Andrey Kovalev, and Anirban Banik

Formation of Surface of the Paraboloid Type Concentrator of Solar Radiation by the Method of Orthogonal Parquetting . . . 84
Vladimir Panchenko and Sergey Sinitsyn

Determination of the Efficiency of Photovoltaic Converters Adequate to Solar Radiation by Using Their Spectral Characteristics . . . 95
Valeriy Kharchenko, Boris Nikitin, Vladimir Panchenko, Shavkat Klychev, and Baba Babaev

Modeling of the Thermal State of Systems of Solar-Thermal Regeneration of Adsorbents . . . 103
Gulom Uzakov, Saydulla Khujakulov, Valeriy Kharchenko, Zokir Pardayev, and Vladimir Panchenko

Economic Aspects and Factors of Solar Energy Development in Ukraine . . . 111
Volodymyr Kozyrsky, Svitlana Makarevych, Semen Voloshyn, Tetiana Kozyrska, Vitaliy Savchenko, Anton Vorushylo, and Diana Sobolenko

A Method for Ensuring Technical Feasibility of Distributed Balancing in Power Systems, Considering Peer-to-Peer Balancing Energy Trade . . . 118
Mariusz Drabecki

Sustainable Optimization, Metaheuristics and Computing for Expert System

The Results of a Compromise Solution, Which Were Obtained on the Basis of the Method of Uncertain Lagrange Multipliers to Determine the Influence of Design Factors of the Elastic-Damping Mechanism in the Tractor Transmission . . . 135
Sergey Senkevich, Ekaterina Ilchenko, Aleksandr Prilukov, and Mikhail Chaplygin

Multiobjective Lévy-Flight Firefly Algorithm for Multiobjective Optimization . . . 145
Somchai Sumpunsri, Chaiyo Thammarat, and Deacha Puangdownreong

Cooperative FPA-ATS Algorithm for Global Optimization . . . 154
Thitipong Niyomsat, Sarot Hlangnamthip, and Deacha Puangdownreong


Bayesian Optimization for Reverse Stress Testing . . . 164
Peter Mitic

Modified Flower Pollination Algorithm for Function Optimization . . . 176
Noppadol Pringsakul and Deacha Puangdownreong

Improved Nature-Inspired Algorithms for Numeric Association Rule Mining . . . 187
Iztok Fister Jr., Vili Podgorelec, and Iztok Fister

Verification of the Adequacy of the Topological Optimization Method of the Connecting Rod Shaping by the BESO Method in ANSYS APDL System . . . 196
Sergey Chirskiy and Vladimir Panchenko

Method for Optimizing the Maintenance Process of Complex Technical Systems of the Railway Transport . . . 205
Vladimir Apatsev, Victor Bugreev, Evgeniy Novikov, Vladimir Panchenko, Anton Chekhov, and Pavel Chekhov

Optimization of Power Supply System of Agricultural Enterprise with Solar Distributed Generation . . . 216
Yu. V. Daus, I. V. Yudaev, V. V. Kharchenko, and V. A. Panchenko

Crack Detection of Iron and Steel Bar Using Natural Frequencies: A CFD Approach . . . 224
Rajib Karmaker and Ujjwal Kumar Deb

Belief Rule-Based Expert System to Identify the Crime Zones . . . 237
Abhijit Pathak, Abrar Hossain Tasin, Sanjida Nusrat Sania, Md. Adil, and Ashibur Rahman Munna

Parameter Tuning of Nature-Inspired Meta-Heuristic Algorithms for PID Control of a Stabilized Gimbal . . . 250
S. Baartman and L. Cheng

Solving an Integer Program by Using the Nonfeasible Basis Method Combined with the Cutting Plane Method . . . 263
Kasitinart Sangngern and Aua-aree Boonperm

A New Technique for Solving a 2-Dimensional Linear Program by Considering the Coefficient of Constraints . . . 276
Panthira Jamrunroj and Aua-aree Boonperm

A New Integer Programming Model for Solving a School Bus Routing Problem with the Student Assignment . . . 287
Anthika Lekburapa, Aua-aree Boonperm, and Wutiphol Sintunavarat

Distributed Optimisation of Perfect Preventive Maintenance and Component Replacement Schedules Using SPEA2 . . . 297
Anthony O. Ikechukwu, Shawulu H. Nggada, and José G. Quenum


A Framework for Traffic Sign Detection Based on Fuzzy Image Processing and Hu Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311 Zainal Abedin and Kaushik Deb Developing a Framework for Vehicle Detection, Tracking and Classification in Traffic Video Surveillance . . . . . . . . . . . . . . . . . . . 326 Rumi Saha, Tanusree Debi, and Mohammad Shamsul Arefin Advances in Algorithms, Modeling and Simulation for Intelligent Systems Modeling and Simulation of Rectangular Sheet Membrane Using Computational Fluid Dynamics (CFD) . . . . . . . . . . . . . . . . . . . . . . . . . . 345 Anirban Banik, Sushant Kumar Biswal, Tarun Kanti Bandyopadhyay, Vladimir Panchenko, and J. Joshua Thomas End-to-End Supply Chain Costs Optimization Based on Material Touches Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357 César Pedrero-Izquierdo, Víctor Manuel López-Sánchez, and José Antonio Marmolejo-Saucedo Computer Modeling Selection of Optimal Width of Rod Grip Header to the Combine Harvester . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369 Mikhail Chaplygin, Sergey Senkevich, and Aleksandr Prilukov An Integrated CNN-LSTM Model for Micro Hand Gesture Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379 Nanziba Basnin, Lutfun Nahar, and Mohammad Shahada Hossain Analysis of the Cost of Varying Levels of User Perceived Quality for Internet Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393 Ali Adib Arnab, Sheikh Md. Razibul Hasan Raj, John Schormans, Sultana Jahan Mukta, and Nafi Ahmad Application of Customized Term Frequency-Inverse Document Frequency for Vietnamese Document Classification in Place of Lemmatization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406 Do Viet Quan and Phan Duy Hung A New Topological Sorting Algorithm with Reduced Time Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418 Tanzin Ahammad, Mohammad Hasan, and Md. Zahid Hassan Modeling and Analysis of Framework for the Implementation of a Virtual Workplace in Nigerian Universities Using Coloured Petri Nets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430 James Okpor and Simon T. Apeh


Modeling and Experimental Verification of Air - Thermal and Microwave - Convective Presowing Seed Treatment . . . . . . . . . . . . 440 Alexey A. Vasiliev, Alexey N. Vasiliev, Dmitry A. Budnikov, and Anton A. Sharko Modeling of Aluminum Profile Extrusion Yield: Pre-cut Billet Sizes . . . 452 Jaramporn Hassamontr and Theera Leephaicharoen Models for Forming Knowledge Databases for Decision Support Systems for Recognizing Cyberattacks . . . . . . . . . . . . . . . . . . . . . . . . . . 463 Valery Lakhno, Bakhytzhan Akhmetov, Moldyr Ydyryshbayeva, Bohdan Bebeshko, Alona Desiatko, and Karyna Khorolska Developing an Intelligent System for Recommending Products . . . . . . . 476 Md. Shariful Islam, Md. Shafiul Alam Forhad, Md. Ashraf Uddin, Mohammad Shamsul Arefin, Syed Md. Galib, and Md. Akib Khan Branch Cut and Free Algorithm for the General Linear Integer Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491 Elias Munapo Resilience in Healthcare Supply Chains . . . . . . . . . . . . . . . . . . . . . . . . . 506 Jose Antonio Marmolejo-Saucedo and Mariana Scarlett Hartmann-González A Comprehensive Evaluation of Environmental Projects Through a Multiparadigm Modeling Approach . . . . . . . . . . . . . . . . . . . . . . . . . . 520 Roman Rodriguez-Aguilar, Luz María Adriana Reyes Ortega, and Jose-Antonio Marmolejo-Saucedo Plant Leaf Disease Recognition Using Histogram Based Gradient Boosting Classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530 Syed Md. Minhaz Hossain and Kaushik Deb Exploring the Machine Learning Algorithms to Find the Best Features for Predicting the Breast Cancer and Its Recurrence . . . . . . . 546 Anika Islam Aishwarja, Nusrat Jahan Eva, Shakira Mushtary, Zarin Tasnim, Nafiz Imtiaz Khan, and Muhammad Nazrul Islam Exploring the Machine Learning Algorithms to Find the Best Features for Predicting the Risk of Cardiovascular Diseases . . . . . . . . . 559 Mostafa Mohiuddin Jalal, Zarin Tasnim, and Muhammad Nazrul Islam Searching Process Using Boyer Moore Algorithm in Digital Library . . . 570 Laet Laet Lin and Myat Thuzar Soe


Application of Machine Learning and Artificial Intelligence Technology Gender Classification from Inertial Sensor-Based Gait Dataset . . . . . . . 583 Refat Khan Pathan, Mohammad Amaz Uddin, Nazmun Nahar, Ferdous Ara, Mohammad Shahadat Hossain, and Karl Andersson Lévy-Flight Intensified Current Search for Multimodal Function Minimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597 Wattanawong Romsai, Prarot Leeart, and Auttarat Nawikavatan Cancer Cell Segmentation Based on Unsupervised Clustering and Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607 Juel Sikder, Utpol Kanti Das, and A. M. Shahed Anwar Automated Student Attendance Monitoring System Using Face Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621 Bakul Chandra Roy, Imran Hossen, Md. Golam Rashed, and Dipankar Das Machine Learning Approach to Predict the Second-Life Capacity of Discarded EV Batteries for Microgrid Applications . . . . . . . . . . . . . . 633 Ankit Bhatt, Weerakorn Ongsakul, and Nimal Madhu Classification of Cultural Heritage Mosque of Bangladesh Using CNN and Keras Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647 Mohammad Amir Saadat, Mohammad Shahadat Hossain, Rezaul Karim, and Rashed Mustafa Classification of Orthopedic Patients Using Supervised Machine Learning Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659 Nasrin Jahan, Rashed Mustafa, Rezaul Karim, and Mohammad Shahadat Hossain Long Short-Term Memory Networks for Driver Drowsiness and Stress Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670 Kwok Tai Chui, Mingbo Zhao, and Brij B. Gupta Optimal Generation Mix of Hybrid Renewable Energy System Employing Hybrid Optimization Algorithm . . . . . . . . . . . . . . . . . . . . . . 681 Md. Arif Hossain, Saad Mohammad Abdullah, Ashik Ahmed, Quazi Nafees Ul Islam, and S. R. Tito Activity Identification from Natural Images Using Deep CNN . . . . . . . . 693 Md. Anwar Hossain and Mirza A. F. M. Rashidul Hasan Learning Success Prediction Model for Early Age Children Using Educational Games and Advanced Data Analytics . . . . . . . . . . . . . . . . . 708 Antonio Tolic, Leo Mrsic, and Hrvoje Jerkovic


Advanced Analytics Techniques for Customer Activation and Retention in Online Retail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 720 Igor Matic, Leo Mrsic, and Joachim Keppler An Approach for Detecting Pneumonia from Chest X-Ray Image Using Convolution Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735 Susmita Kar, Nasim Akhtar, and Mostafijur Rahman An Analytical Study of Influencing Factors on Consumers’ Behaviors in Facebook Using ANN and RF . . . . . . . . . . . . . . . . . . . . . . 744 Shahadat Hossain, Md. Manzurul Hasan, and Tanvir Hossain Autism Spectrum Disorder Prognosis Using Machine Learning Algorithms: A Comparative Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 754 Oishi Jyoti, Nazmin Islam, Md. Omaer Faruq, Md. Abu Ismail Siddique, and Md. Habibur Rahaman Multidimensional Failure Analysis Based on Data Fusion from Various Sources Using TextMining Techniques . . . . . . . . . . . . . . . 766 Maria Stachowiak, Artur Skoczylas, Paweł Stefaniak, and Paweł Śliwiński Road Quality Classification Adaptive to Vehicle Speed Based on Driving Data from Heavy Duty Mining Vehicles . . . . . . . . . . . . . . . . 777 Artur Skoczylas, Paweł Stefaniak, Sergii Anufriiev, and Bartosz Jachnik Fabric Defect Detection System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 788 Tanjim Mahmud, Juel Sikder, Rana Jyoti Chakma, and Jannat Fardoush Alzheimer’s Disease Detection Using CNN Based on Effective Dimensionality Reduction Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . 801 Abu Saleh Musa Miah, Md. Mamunur Rashid, Md. Redwanur Rahman, Md. Tofayel Hossain, Md. Shahidujjaman Sujon, Nafisa Nawal, Mohammad Hasan, and Jungpil Shin An Analytical Intelligence Model to Discontinue Products in a Transnational Company . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 812 Gabriel Loy-García, Román Rodríguez-Aguilar, and Jose-Antonio Marmolejo-Saucedo Graph Neural Networks in Cheminformatics . . . . . . . . . . . . . . . . . . . . . 823 H. N. Tran Tran, J. Joshua Thomas, Nurul Hashimah Ahamed Hassain Malim, Abdalla M. Ali, and Son Bach Huynh Academic and Uncertainty Attributes in Predicting Student Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 838 Abdalla M. Ali, J. Joshua Thomas, and Gomesh Nair


Captivating Profitable Applications of Artificial Intelligence in Agriculture Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 848 R. Sivarethinamohan, D. Yuvaraj, S. Shanmuga Priya, and S. Sujatha Holistic IoT, Deep Learning and Information Technology Mosquito Classification Using Convolutional Neural Network with Data Augmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 865 Mehenika Akter, Mohammad Shahadat Hossain, Tawsin Uddin Ahmed, and Karl Andersson Recommendation System for E-commerce Using Alternating Least Squares (ALS) on Apache Spark . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 880 Subasish Gosh, Nazmun Nahar, Mohammad Abdul Wahab, Munmun Biswas, Mohammad Shahadat Hossain, and Karl Andersson An Interactive Computer System with Gesture-Based Mouse and Keyboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 894 Dipankar Gupta, Emam Hossain, Mohammed Sazzad Hossain, Mohammad Shahadat Hossain, and Karl Andersson Surface Water Quality Assessment and Determination of Drinking Water Quality Index by Adopting Multi Criteria Decision Analysis Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 907 Deepjyoti Deb, Mrinmoy Majumder, Tilottama Chakraborty, Prachi D. Khobragade, and Khakachang Tripura An Approach for Multi-human Pose Recognition and Classification Using Multiclass SVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 922 Sheikh Md. Razibul Hasan Raj, Sultana Jahan Mukta, Tapan Kumar Godder, and Md. Zahidul Islam Privacy Violation Issues in Re-publication of Modification Datasets . . . 938 Noppamas Riyana, Surapon Riyana, Srikul Nanthachumphu, Suphannika Sittisung, and Dussadee Duangban Using Non-straight Line Updates in Shuffled Frog Leaping Algorithm . . . Kanchana Daoden and Trasapong Thaiupathump

954

Efficient Advertisement Slogan Detection and Classification Using a Hierarchical BERT and BiLSTM-BERT Ensemble Model . . . . . . . . . 964 Md. Akib Zabed Khan, Saif Mahmud Parvez, Md. Mahbubur Rahman, and Md Musfique Anwar Chronic Kidney Disease (CKD) Prediction Using Data Mining Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 976 Abhijit Pathak, Most. Asma Gani, Abrar Hossain Tasin, Sanjida Nusrat Sania, Md. Adil, and Suraiya Akter


Multi-classification of Brain Tumor Images Based on Hybrid Feature Extraction Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 989 Khaleda Akhter Sathi and Md. Saiful Islam An Evolutionary Population Census Application Through Mobile Crowdsourcing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1000 Ismail Hossain Mukul, Mohammad Hasan, and Md. Zahid Hassan IoT-Enabled Lifelogging Architecture Model to Leverage Healthcare Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1011 Saika Zaman, Ahmed Imteaj, Muhammad Kamal Hossen, and Mohammad Shamsul Arefin An Improved Boolean Load Matrix-Based Frequent Pattern Mining . . . 1026 Shaishab Roy, Mohammad Nasim Akhtar, and Mostafijur Rahman Exploring CTC Based End-To-End Techniques for Myanmar Speech Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1038 Khin Me Me Chit and Laet Laet Lin IoT Based Bidirectional Speed Control and Monitoring of Single Phase Induction Motors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1047 Ataur Rahman, Mohammad Rubaiyat Tanvir Hossain, and Md. Saifullah Siddiquee Missing Image Data Reconstruction Based on Least-Squares Approach with Randomized SVD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1059 Siriwan Intawichai and Saifon Chaturantabut An Automated Candidate Selection System Using Bangla Language Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1071 Md. Moinul Islam, Farzana Yasmin, Mohammad Shamsul Arefin, Zaber Al Hassan Ayon, and Rony Chowdhury Ripan AutoMove: An End-to-End Deep Learning System for Self-driving Vehicles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1082 Sriram Ramasamy and J. Joshua Thomas An Efficient Machine Learning-Based Decision-Level Fusion Model to Predict Cardiovascular Disease . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1097 Hafsa Binte Kibria and Abdul Matin Towards POS Tagging Methods for Bengali Language: A Comparative Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1111 Fatima Jahara, Adrita Barua, MD. Asif Iqbal, Avishek Das, Omar Sharif, Mohammed Moshiul Hoque, and Iqbal H. Sarker


BEmoD: Development of Bengali Emotion Dataset for Classifying Expressions of Emotion in Texts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1124 Avishek Das, MD. Asif Iqbal, Omar Sharif, and Mohammed Moshiul Hoque Advances in Engineering and Technology Study of the Distribution Uniformity Coefficient of Microwave Field of 6 Sources in the Area of Microwave-Convective Impact . . . . . . . . . . 1139 Dmitry Budnikov, Alexey N. Vasilyev, and Alexey A. Vasilyev Floor-Mounted Heating of Piglets with the Use of Thermoelectricity . . . 1146 Dmitry Tikhomirov, Stanislav Trunov, Alexey Kuzmichev, Sergey Rastimeshin, and Victoria Ukhanova The Rationale for Using Improved Flame Cultivator for Weed Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1156 Mavludin Abdulgalimov, Fakhretdin Magomedov, Izzet Melikov, Sergey Senkevich, Hasan Dogeev, Shamil Minatullaev, Batyr Dzhaparov, and Aleksandr Prilukov The Lighting Plan: From a Sector-Specific Urbanistic Instrument to an Opportunity of Enhancement of the Urban Space for Improving Quality of Life . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1168 Cinzia B. Bellone and Riccardo Ottavi PID Controller Design for BLDC Motor Speed Control System by Lévy-Flight Intensified Current Search . . . . . . . . . . . . . . . . . . . . . . . 1176 Prarot Leeart, Wattanawong Romsai, and Auttarat Nawikavatan Intellectualized Control System of Technological Processes of an Experimental Biogas Plant with Improved System for Preliminary Preparation of Initial Waste . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1186 Andrey Kovalev, Dmitriy Kovalev, Vladimir Panchenko, Valeriy Kharchenko, and Pandian Vasant Way for Intensifying the Process of Anaerobic Bioconversion by Preliminary Hydrolysis and Increasing Solid Retention Time . . . . . . 1195 Andrey Kovalev, Dmitriy Kovalev, Vladimir Panchenko, Valeriy Kharchenko, and Pandian Vasant Evaluation of Technical Damage Caused by Failures of Electric Motors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1204 Anton Nekrasov, Alexey Nekrasov, and Vladimir Panchenko Development of a Prototype Dry Heat Sterilizer for Pharmaceuticals Industry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1213 Md. Raju Ahmed, Md. Niaz Marshed, and Ashish Kumar Karmaker


Optimization of Parameters of Pre-sowing Seed Treatment in Magnetic Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1222 Volodymyr Kozyrsky, Vitaliy Savchenko, Oleksandr Sinyavsky, Andriy Nesvidomin, and Vasyl Bunko Development of a Fast Response Combustion Performance Monitoring, Prediction, and Optimization Tool for Power Plants . . . . . 1232 Mohammad Nurizat Rahman, Noor Akma Watie Binti Mohd Noor, Ahmad Zulazlan Shah b. Zulkifli, and Mohd Shiraz Aris Industry 4.0 Approaches for Supply Chains Facing COVID-19: A Brief Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1242 Samuel Reong, Hui-Ming Wee, Yu-Lin Hsiao, and Chin Yee Whah Ontological Aspects of Developing Robust Control Systems for Technological Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1252 Nataliia Lutskaya, Lidiia Vlasenko, Nataliia Zaiets, and Volodimir Shtepa A New Initial Basis for Solving the Blending Problem Without Using Artificial Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1262 Chinchet Boonmalert, Aua-aree Boonperm, and Wutiphol Sintunavarat Review of the Information that is Previously Needed to Include Traceability in a Global Supply Chain . . . . . . . . . . . . . . . . . . . . . . . . . . 1272 Zayra M. Reyna Guevara, Jania A. Saucedo Martínez, and José A. Marmolejo Online Technology: Effective Contributor to Academic Writing . . . . . . 1281 Md. Hafiz Iqbal, Md Masumur Rahaman, Tanusree Debi, and Mohammad Shamsul Arefin A Secured Electronic Voting System Using Blockchain . . . . . . . . . . . . . 1295 Md. Rashadur Rahman, Md. Billal Hossain, Mohammad Shamsul Arefin, and Mohammad Ibrahim Khan Preconditions for Optimizing Primary Milk Processing . . . . . . . . . . . . . 1310 Gennady N. Samarin, Alexander A. Kudryavtsev, Alexander G. Khristenko, Dmitry N. Ignatenko, and Egor A. Krishtanov Optimization of Compost Production Technology . . . . . . . . . . . . . . . . . 1319 Gennady N. Samarin, Irina V. Kokunova, Alexey N. Vasilyev, Alexander A. Kudryavtsev, and Dmitry A. Normov Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1329

About the Editors

Pandian Vasant is a senior lecturer at University of Technology Petronas, Malaysia, and an editor-in-chief of International Journal of Energy Optimization and Engineering (IJEOE). He holds PhD in Computational Intelligence (UNEM, Costa Rica), MSc (University Malaysia Sabah, Malaysia, Engineering Mathematics), and BSc (Hons, Second Class Upper) in Mathematics (University of Malaya, Malaysia). His research interests include soft computing, hybrid optimization, innovative computing and applications. He has co-authored research articles in journals, conference proceedings, presentation, special issues guest editor, book chapters (257 publications indexed in SCOPUS), and General Chair of EAI International Conference on Computer Science and Engineering in Penang, Malaysia (2016), and Bangkok, Thailand (2018). In the year 2009 and 2015, Dr. Pandian Vasant was awarded top reviewer and outstanding reviewer for the journal Applied Soft Computing (Elsevier). He has 30 years of working experiences at the universities. Currently, he is an editor-in-chief of International Journal of Energy Optimization & Engineering, and Member of AMS (USA), NAVY Research Group (TUO, Czech Republic) and MERLIN Research Group (TDTU, Vietnam). H-Index Google Scholar = 33; i-10-index = 107. Ivan Zelinka is currently working at the Technical University of Ostrava (VSB-TU), Faculty of Electrical Engineering and Computer Science. He graduated consequently at Technical University in Brno (1995 – MSc.), UTB in Zlin (2001 – PhD) and again at Technical University in Brno (2004 – assoc. prof.) and VSB-TU (2010 - professor). Before academic career, he was an employed like TELECOM technician, computer specialist (HW+SW), and Commercial Bank (computer and LAN supervisor). During his career at UTB, he proposed and opened seven different lectures. He also has been invited for lectures at numerous universities in different EU countries plus the role of the keynote speaker at the Global Conference on Power, Control and Optimization in Bali, Indonesia (2009), Interdisciplinary Symposium on Complex Systems (2011), Halkidiki, Greece, and IWCFTA 2012, Dalian China. The field of his expertise is mainly on unconventional algorithms and cybersecurity. He is and was responsible supervisor of three grants of fundamental xix


research of Czech grant agency GAČR, co-supervisor of grant FRVŠ - Laboratory of parallel computing. He was also working on numerous grants and two EU projects like a member of the team (FP5 - RESTORM) and supervisor (FP7 PROMOEVO) of the Czech team and supervisor of international research (founded by TACR agency) focused on the security of mobile devices (Czech - Vietnam). Currently, he is a professor at the Department of Computer Science, and in total, he has been the supervisor of more than 40 MSc. and 25 Bc. diploma thesis. Ivan Zelinka is also supervisor of doctoral students including students from the abroad. He was awarded by Siemens Award for his PhD thesis, as well as by journal Software news for his book about artificial intelligence. Ivan Zelinka is a member of British Computer Society, editor-in-chief of Springer book series: Emergence, Complexity and Computation (http://www.springer.com/series/10624), Editorial board of Saint Petersburg State University Studies in Mathematics, a few international program committees of various conferences and international journals. He is the author of journal articles as well as of books in Czech and English language and one of the three founders of TC IEEE on big data http://ieeesmc.org/about-smcs/ history/2014-archives/44-about-smcs/history/2014/technical-committees/204-bigdata-computing/. He is also head of research group NAVY http://navy.cs.vsb.cz. G.-W. Weber is a professor at Poznan University of Technology, Poznan, Poland, at Faculty of Engineering Management, Chair of Marketing and Economic Engineering. His research is on OR, financial mathematics, optimization and control, neuro- and bio-sciences, data mining, education and development; he is involved in the organization of scientific life internationally. He received his Diploma and Doctorate in mathematics, and economics/business administration, at RWTH Aachen, and his Habilitation at TU Darmstadt. He held Professorships by proxy at University of Cologne, and TU Chemnitz, Germany. At IAM, METU, Ankara, Turkey, he was a professor in the programs of Financial Mathematics and Scientific Computing, and Assistant to the Director, and he has been a member of further graduate schools, institutes, and departments of METU. Further, he has affiliations at the universities of Siegen, Ballarat, Aveiro, North Sumatra, and Malaysia University of Technology, and he is “Advisor to EURO Conferences”.

Sustainable Clean Energy System

Adaptive Neuro-Fuzzy Inference Based Modeling of Wind Energy Harvesting System for Remote Areas

Tigilu Mitiku1 and Mukhdeep Singh Manshahia2

1 Department of Mathematics, Bule Hora University, Bule Hora, Ethiopia
[email protected]
2 Department of Mathematics, Punjabi University Patiala, Patiala, Punjab, India
[email protected]

Abstract. The wind speed has a great impact on the overall performance of the wind energy harvesting system. Due to the variable nature of wind speed, the system is controlled to work only in a specified range of wind speeds to protect both the generator and turbine from damage. This article presents an adaptive neuro-fuzzy inference system-based control scheme for operation of the system between the cut-in and rated wind speed. By controlling the generator speed to its optimum value, the generator power and speed fluctuation can be reduced. A Matlab/Simulink tool is used for the simulation and analysis of the system. The obtained results indicate that the adaptive neuro-fuzzy inference system is an effective method to control the rotor speed.

Keywords: Wind energy harvesting system · Adaptive neuro-fuzzy inference system · Wind speed

1 Introduction

Wind energy technology has shown rapid growth among renewable energy sources in most parts of the world due to the depletion of fossil fuel reserves, rising pollution levels and worrying changes in the global climate caused by conventional energy sources [1]. According to the Global Wind Energy Council (GWEC) report, around 50 GW of newly installed wind power was added in 2018, slightly less than in 2017, bringing the global total wind power generation capacity to 597 GW [2]. The report indicates that 2018 was the second year in a row with a growing number of new installations for energy generation. A Wind Energy Harvesting System (WEHS) can operate in both fixed and variable speed modes of operation. In fixed speed wind turbines, the generator rotates at the almost constant speed and frequency for which it is designed, regardless of variation in wind speed [1]. As the turbine is forced to operate at constant speed, it must be extremely robust to withstand the mechanical stress created by the fluctuation of wind speeds. On the other hand, in variable speed wind turbines, the rotor of the generator is allowed to rotate freely at any speed over a wide range of wind speeds. The generator is directly

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 3–18, 2021. https://doi.org/10.1007/978-3-030-68154-8_1


connected to the grid in fixed speed operation whereas connected with the help of power electronic equipment in variable speed system [3]. Thus, it is possible to control the rotor speed by means of power electronics to maintain the optimum tip speed ratio at all the times under fluctuation of wind speeds to produce maximum power. Different types of AC generators can be used along with the modern wind turbine. Researchers classified them into synchronous and asynchronous (induction) generators. Some are Squirrel-Cage Rotor Induction Generator, WoundRotor Induction Generator, Doubly-Fed Induction Generator, Synchronous Generator (With external field excitation) and Permanent Magnet Synchronous Generator [3]. Variable Speed Wind Turbine with Permanent Magnet Synchronous Generator (VSWT-PMSG) is an attractive choice for villages, settlements and remote areas found at very far distance from grid [4]. VSWT-PMSG can operate close to optimal speed using maximum power point tracking (MPPT) for various wind-speeds. This paper has seven sections: the first section contains general introduction about WEHS and the purpose of this work is mentioned in Sect. 2. Section 3 presents literature of the control methods applied on WEHS. Section 4 describes the details of WEHS. Section 5 describes MPPT controller’s used in our work. The model of the system, simulation results and discussions are given in the Sect. 6 before the conclusion and future scope which is the final part of this paper.

2 Motivation and Objective of Research The control system is used in WEHS to produce maximum power and supply the load with a constant voltage and frequency under change in wind speed and load. WEHS are controlled to work only in a specified range of wind speeds limited by cut-in and cutout speeds. Outside of these limits, the turbine should be stopped to protect both generator and turbine from damage [5, 6]. The recent advancements in power electronics and control strategies have made it possible to regulate the voltage of the PMSG in many different ways. The main factor that affects the performance of the system is the variation in wind speed which affects power system stability and power quality. The machine-side a torque/speed controller is applied to improve the performance of the system at variable wind speed between cut-in and rated speed. Grid-side inverter is also controlled to keep the DC-link voltage at a constant value and the current injected to the grid at unity power factor to achieve maximum power delivery to grid as a desired operating condition. Conventional control methods such as proportional integral (PI) have proposed by different researchers for the control of wind energy system [7]. However, these methods need exact mathematical knowledge of dynamics system, which is very often difficult to derive for complex systems. Nowadays to overcome such problem researchers are tending to use intelligent control methods like fuzzy logic control (FLC), neural networks (NN), adaptive neuro fuzzy interfacing system (ANFIS) and genetic algorithm (GA) to control wind speed fluctuations. In this paper, ANFIS controller is proposed to control rotor speed of WEHS. The objective of ANFIS controller is to maximize energy production and ensure a continuous supply of energy to the grid by regulating the turbine speed in such a way that the optimal tip speed ratio is maintained [8]. Modelling and simulation of the system is


developed using Matlab/Simulink tool to enhancement the performance of the system with the proposed speed controller. In this paper, the VSWT-PMSG system is equipped with AC/DC/AC power electronic converters in which the active and reactive power produced by PMSG can be controlled. The present research analyzes the model of a variable speed wind turbine equipped with 1.5 MW PMSG.

3 Related Work The studies related to modelling of WEHS using Adaptive MPPT network is limited due to complexity of the system and advancement of the control techniques. Researchers applied different MPPT techniques for control applications. Boobalan et al. [9] proposed Fuzzy-PI control strategy to model WEHS using PMSG to provide optimum power to the gird. The proposed vector control provides the desired voltage at the output of the generator side convertor so as to control the generator speed. With the vector control scheme, the magnitude, phase voltage and the frequency of the generator currents are controlled and enable acceptable results in steady state with variation in wind speed. However, if the wind speed changes significantly from time to time, this method may require a long searching time to locate the maximum power point. Sarvi et al. [10] have proposed a maximum power point tracking control algorithm based on particle swarm optimization (PSO) and Fuzzy logic control for variable speed PMSG based WEHS. The PSO algorithm is used to determine optimum rotor speed of turbine generator and its maximum power according to different wind speeds whereas FLC is used to adjust the duty cycle of boost converter. Ahmed et al. [11] applied fuzzy-PID controller based MPPT to track the maximum power available from the WEHS and obtain AC voltage with constant amplitude and frequency. The error between actual rotor speed of PMSG and estimated value of rotor speed which depends on values of DC current and voltage at the terminal of boost-converter and parameters of PMSG are used to obtain maximum power. The rotor speed error is given as input to the fuzzyPID controller that controls the duty cycle of pulse width modulation (PWM) generator and its output is connected to the boost converter. Wafa Torki et al. [12] have applied vector control strategy using PI controller to model and control the direct-drive WEHS with PMSG. The actual speed of the generator is compared to its reference value which is obtained by MPPT method using the optimal tip speed ratio control and gives the reference current. Marmouh et al. [13] have applied two controllers to produce maximum active power of WEHS equipped by PMSG whose stator is connected to the grid through a back-to-back AC-DC-AC converter. The first one is FLC based MPPT control algorithm of stator side control by using a hysteresis control and an optimal generator speed reference which is estimated from different wind speeds for the generation of maximum active power and the second FLC was applied to grid side control to ensure a smooth DC voltage between the two converters to its reference value. Aamer Bilal Asghar et al. [5] have proposed hybrid intelligent learning based ANFIS for online estimation of effective wind speed from instantaneous values of wind turbine TSR, rotor speed and mechanical power. The estimated wind speed is used to design the optimal rotor speed estimator for MPPT of VSWT. Gencer [14] demonstrated fuzzy logic control system of variable speed WEHS to investigate power flow efficiency of


PMSG. A fuzzy logic controller was structured for MPPT to drive the WEHS at the optimum speed corresponding to maximum power at any wind speed. Kesraoui et al. [15] examined FLC of the aerodynamic power of a variable-speed wind turbine at high wind speeds, in order to manage the excess power generated above rated wind speed by a PMSG connected to the grid through a back-to-back power converter. The pitch angle was also controlled using fuzzy logic to limit the aerodynamic power and regulate the DC bus voltage. Nadoushan et al. [16] presented optimal torque control (OTC) of a stand-alone variable-speed small-scale wind turbine equipped with a PMSG and a switch-mode rectifier. In that work, the control method is used for a variable speed WEHS to extract optimal power from the wind, keep the voltage at 400 V with 50 Hz frequency and produce maximum power output. The simulation model of the system was developed in MATLAB/SIMULINK.

4 Wind Energy Harvesting System

The output voltage of the generator is not acceptable due to the variation in amplitude and frequency caused by the fluctuation of wind speed throughout the day. The system considered in this paper consists of a 1.5 MW wind turbine and a PMSG connected to an AC/DC converter and a DC/AC converter modelled by voltage sources. The AC/DC converter is used to convert the AC voltage with variable amplitude and frequency at the generator side to DC voltage at the DC link. The DC-link voltage should be constant for direct use, storage and for conversion from DC to AC by using an inverter. The obtained DC voltage is then converted back to AC voltage with constant amplitude and frequency at the load side for electrical utilization. The back-to-back PWM converter based power electronic interface is a suitable option for wind-power applications with PMSG.

4.1 Modelling of Wind Turbine

The mechanical power captured by the wind turbine is given by [17–19] (Fig. 1):

P_m = \frac{1}{2} C_P(\lambda, \beta)\, \rho \pi R^2 V_w^3        (1)

The mechanical torque generated from wind power by the wind turbine, which is transferred through the generator shaft to the rotor of the generator, is given by

T_m = \frac{C_P(\lambda, \beta)\, \rho \pi R^2 V_w^3}{2 \omega_m}        (2)

where ρ is the density of air, A is the area swept by the blades, V_w is the wind speed, C_P is the power extraction efficiency coefficient of the wind turbine, β is the pitch angle of the blade and λ is the tip speed ratio.


Fig. 1. PMSG based wind turbine model [21, 24]

The power coefficient C_P is a non-linear function of the tip speed ratio λ, which depends on the wind velocity and the rotation speed of the shaft, ω_m in rad/s. It is given by [20–22]

C_p(\lambda, \beta) = 0.5716 \left( \frac{116}{\lambda_i} - 0.4\beta - 5 \right) e^{\frac{-21}{\lambda_i}} + 0.0068\,\lambda        (3)

with

\frac{1}{\lambda_i} = \frac{1}{\lambda + 0.008\,\beta} - \frac{0.035}{\beta^2 + 1}        (4)

The tip speed ratio λ is defined by

\lambda = \frac{R \omega_m}{V_w} = \frac{\pi R n}{30 V_w}        (5)

where R is the blade radius and n is the wind turbine rotor speed in revolutions per minute (rpm). Since there is no gearbox, the shaft speed of the rotor and the mechanical generator speed are the same; likewise, the mechanical torque transferred to the generator is equal to the aerodynamic torque. The maximum value of C_P is 0.59, which means that the power extracted from the wind is at all times less than 59% (Betz's limit); this is because of the various aerodynamic losses depending on rotor construction [3]. For a VSWT the pitch angle is nearly 0; therefore, at β = 0° the maximum power coefficient is 0.4412.
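To make the aerodynamic model concrete, the following Python sketch evaluates Eqs. (1) and (3)–(5) as reconstructed above. The air density ρ and the blade radius R are illustrative assumptions (the excerpt does not list them); the rated wind speed of 11 m/s is taken from Table 1. This is only a numerical illustration of the model, not the authors' Matlab/Simulink implementation.

import math

RHO = 1.225   # air density rho in kg/m^3 (standard sea-level value; assumed)
R = 35.0      # blade radius in m (illustrative assumption; not given in the excerpt)

def lambda_i(lam, beta):
    # Eq. (4): 1/lambda_i = 1/(lambda + 0.008*beta) - 0.035/(beta^2 + 1)
    return 1.0 / (1.0 / (lam + 0.008 * beta) - 0.035 / (beta ** 2 + 1.0))

def power_coefficient(lam, beta):
    # Eq. (3): Cp = 0.5716*(116/lambda_i - 0.4*beta - 5)*exp(-21/lambda_i) + 0.0068*lambda
    li = lambda_i(lam, beta)
    return 0.5716 * (116.0 / li - 0.4 * beta - 5.0) * math.exp(-21.0 / li) + 0.0068 * lam

def mechanical_power(omega_m, v_w, beta=0.0):
    # Eqs. (1) and (5): Pm = 0.5*Cp(lambda, beta)*rho*pi*R^2*Vw^3 with lambda = R*omega_m/Vw
    lam = R * omega_m / v_w
    return 0.5 * power_coefficient(lam, beta) * RHO * math.pi * R ** 2 * v_w ** 3

if __name__ == "__main__":
    v_w = 11.0  # rated wind speed from Table 1, m/s
    curve = [(mechanical_power(0.1 * k, v_w), 0.1 * k) for k in range(5, 60)]
    p_max, w_opt = max(curve)
    print("P_max = %.0f W at omega_m = %.2f rad/s" % (p_max, w_opt))

Sweeping the rotor speed at a fixed wind speed in this way reproduces the single-peaked power curve that the MPPT controller must track.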


To maintain the tip speed ratio at its optimum value, ω_m changes with the wind speed. Therefore, to extract the maximum power from the wind, the tip speed ratio should be maintained at its optimum value at any wind speed [23]. Hence, the wind turbine can produce maximum power when the turbine operates at the optimum value of C_P, which is C_P-opt. So it is necessary to adjust the rotor speed to the optimum value of the tip speed ratio λ_opt as shown in Fig. 2 below.

Fig. 2. C_P(λ, β) characteristic for different values of the pitch angle β [10]

4.2 Modeling of the PMSG

For the dynamic modeling of the PMSG, the magnetomotive force (mmf) is considered to be sinusoidally distributed and the hysteresis and eddy current effects are neglected. Transforming the three-phase stator voltages into the d-q reference frame aligned with the electrical rotor position θ, the stator voltage equations of the salient pole PMSG are given by Eqs. (6) and (7) [21, 22]:

\frac{d}{dt} i_d = \frac{1}{L_d} V_d - \frac{R_s}{L_d} i_d + \frac{L_q}{L_d} P \omega_m i_q        (6)

\frac{d}{dt} i_q = \frac{1}{L_q} V_q - \frac{R_s}{L_q} i_q - P \omega_m \left( \frac{L_d}{L_q} i_d + \frac{1}{L_q} \psi_f \right)        (7)

where L_q and L_d are the q- and d-axis components of the stator inductance of the generator, R_s is the resistance of the stator windings, i_q, i_d and V_q, V_d are the q- and d-axis components of the stator current and voltage respectively, ψ_f is the flux linkage induced by the permanent magnets and P is the number of pole pairs.


The generator produces an electrical torque, and the difference between the mechanical torque and the electrical torque determines whether the mechanical system accelerates, decelerates, or remains at constant speed. The electric torque produced by the generator is given by [21, 25, 26]

T_e = \frac{3}{2} P \left[ \psi_f i_q + (L_d - L_q) i_d i_q \right]        (8)

For a surface-mounted PMSG, L_q = L_d and Eq. (8) becomes

T_e = \frac{3}{2} P \psi_f i_q        (9)

The active and reactive powers of the PMSG in steady state are given by

P_s = V_d i_d + V_q i_q        (10)

Q_s = V_q i_d - V_d i_q        (11)

Since the wind turbine and generator shafts are directly coupled without a gearbox, there is only one mechanical state variable. The mechanical equation of the PMSG and the wind turbine is given by

\frac{d\omega_m}{dt} = \frac{1}{J} \left( T_e - T_m - f \omega_m \right)        (12)

\omega_e = P \omega_m, \qquad \frac{d\theta}{dt} = \omega_m        (13)

where T_e is the electromagnetic torque of the generator in Nm, J is the inertia of the rotor and generator in kg·m², f is the coefficient of viscous friction that can be neglected in a small-scale wind turbine, ω_e is the electrical rotational speed of the rotor in rpm and θ is the rotor (electrical) angle, which is required for the abc ↔ d-q transformation [21].
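A minimal numerical sketch of this machine model, Eqs. (6), (7), (9), (12) and (13), is given below in Python using forward-Euler integration. The electrical machine parameters are illustrative placeholders, not values reported in the paper; only the number of pole pairs comes from Table 1, and the paper itself implements the model in Matlab/Simulink.

# Forward-Euler sketch of the PMSG model, Eqs. (6), (7), (9), (12) and (13).
# Machine parameters below are illustrative placeholders, not values from the paper;
# only the number of pole pairs (44) comes from Table 1.
Rs, Ld, Lq = 0.006, 0.0014, 0.0014   # stator resistance [ohm], d/q inductances [H] (assumed)
psi_f, P = 1.48, 44                  # PM flux linkage [Wb] (assumed), pole pairs (Table 1)
J, f_v = 35000.0, 0.0                # turbine + rotor inertia [kg*m^2] (assumed), friction neglected

def pmsg_step(state, vd, vq, Tm, dt):
    """One Euler step of (i_d, i_q, omega_m, theta) for given stator voltages and turbine torque."""
    i_d, i_q, w_m, theta = state
    did = vd / Ld - (Rs / Ld) * i_d + (Lq / Ld) * P * w_m * i_q                   # Eq. (6)
    diq = vq / Lq - (Rs / Lq) * i_q - P * w_m * ((Ld / Lq) * i_d + psi_f / Lq)    # Eq. (7)
    Te = 1.5 * P * psi_f * i_q                                                    # Eq. (9)
    dwm = (Te - Tm - f_v * w_m) / J                                               # Eq. (12) as printed
    return (i_d + dt * did, i_q + dt * diq, w_m + dt * dwm, theta + dt * w_m)     # Eq. (13)

# Single illustrative step starting from zero currents at omega_m = 2 rad/s
print(pmsg_step((0.0, 0.0, 2.0, 0.0), vd=0.0, vq=-400.0, Tm=8.0e5, dt=1e-4))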

5 Adaptive Neuro-Fuzzy Inference System Currently fuzzy logic control has played significant role in the development and design of many real-time control applications [4]. FLC consists of three important functional blocks i.e., fuzzification which assign fuzzy variables to the crisp data using membership function; inference engine that creates fuzzy rules by mapping from input to output through the use of knowledge base in it and defuzzification which is the reverse process of fuzzification provides the final output to the plant to be controlled [18]. Different adaptive techniques can easily be implemented in it to increase the performance of the network. The ability of the network depends on the quality of the signals used for training and the performance of the training algorithms and their parameters. ANFIS is one of the neuro fuzzy networks that give best result in control of wind energy system. ANFIS is a fuzzy inference system (FIS) whose membership functions


and rule base are appropriately tuned (adjusted) by ANN. It takes advantage from both, the learning capability of ANNs and human knowledge based decision making power of FIS. It uses a combination of least squares estimation and back propagation for membership function parameter estimation.

Fig. 3. (a) A first-order Sugeno fuzzy model with two-input and two rules; (b) Equivalent ANFIS architecture [16]


The Sugeno fuzzy logic model and the corresponding ANFIS architecture are shown in Fig. 3. Assume the FIS under consideration has two inputs x, y and one output f_i as shown in Fig. 3 above. A square node indicates an adaptive node, whereas a circle node indicates a fixed (non-adaptive) node. For a first-order Sugeno fuzzy model, a common rule set with two fuzzy if-then rules is:

Rule 1: If x is A_1 and y is B_1, then f_1 = p_1 x + q_1 y + r_1        (14)

Rule 2: If x is A_2 and y is B_2, then f_2 = p_2 x + q_2 y + r_2        (15)

Here O_i^n denotes the output of the i-th node (neuron) in layer n, and p_i, q_i and r_i are the consequent parameters of the first-order polynomial, which are updated during the learning process in the forward pass by the least squares method [27].

Layer 1. Every node i in this layer is an adaptive node and consists of two inputs as variables with a node function

O_i^1 = \mu_{A_i}(x) \text{ for } i = 1, 2 \quad \text{and} \quad O_i^1 = \mu_{B_{i-2}}(y) \text{ for } i = 3, 4        (16)

where A_i is the linguistic label (like big, very small, large, etc.) associated with this node function. μ_{A_i}(x) is the membership function of A_i and it specifies the degree to which the given x satisfies the quantifier A_i. It is chosen to be a triangular or bell-shaped membership function. A bell-shaped membership function with maximum equal to 1 and minimum equal to 0, namely the generalized bell-shaped function, is selected for our study.

Layer 2. Every node in this layer is a non-adaptive node labeled Π, which multiplies the incoming signals and sends the product out to the next layer. For instance,

O_i^2 = w_i = \mu_{A_i}(x) \cdot \mu_{B_i}(y) \text{ for } i = 1, 2        (17)

The obtained result represents the firing strength of a rule. Other T-norm operators that perform generalized AND can be used as the node function in this layer.

Layer 3. The neurons in this layer are non-adaptive, labeled by a circle node N, and compute the normalized firing strength, which is the ratio of the firing strength of a rule to the sum of the firing strengths of all rules. They compute the activation level of each rule as

O_i^3 = \bar{w}_i = \frac{w_i}{w_1 + w_2}, \quad i = 1, 2        (18)

For convenience, outputs of this layer will be called normalized firing strengths.


Layer 4. Every node i in this layer is an adaptive square node which multiplies the normalized firing strength of a rule with the corresponding first-order polynomial function, producing a crisp output:

O_i^4 = \bar{w}_i f_i = \bar{w}_i (p_i x + q_i y + r_i)        (19)

where \bar{w}_i is the output of layer 3, and p_i, q_i and r_i are known as the consequent parameters. This layer is called the defuzzification layer.

Layer 5. The single node in this layer is a fixed node labeled Σ that computes the overall output as the summation of all incoming signals and transforms the fuzzy classification results into a single crisp output; it is called the output layer, i.e.,

O_i^5 = f = \sum_i \bar{w}_i f_i = \frac{w_1}{w_1 + w_2} f_1 + \frac{w_2}{w_1 + w_2} f_2        (20)

Therefore, when the values of the premise parameters in Layer 1 are fixed, the overall output can be expressed as a linear combination of the consequent parameters in Layer 4. An adaptive network functionally equivalent to a first-order Sugeno fuzzy model is constructed in this way. The ANFIS learning or training algorithm is used to change all the adjustable parameters so that the ANFIS output matches the training data and the parameters of the network are identified. Each training period of the network is divided into two phases. In the first phase (forward pass), functional signals go forward up to layer 4 and the consequent parameters are adjusted with the least-squares method. In the second phase (backward pass), the error rates propagate backwards and the premise parameters are updated with the gradient descent (back propagation) method [28]. If these parameters are fixed, the ANFIS output is expressed as the summation of all incoming signals to produce a single crisp output. Thus, a combination of gradient descent and least squares methods can easily define optimal values for the consequent parameters p_i, q_i, and r_i. The ANFIS based MPPT controller computes the optimum speed for the maximum power point using information on the magnitude and direction of the change in power output due to the change in command speed.
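The forward pass of the two-input, two-rule first-order Sugeno ANFIS described by Eqs. (14)–(20) can be sketched in Python as follows. All premise and consequent parameters here are arbitrary illustrative values, and the hybrid least-squares/back-propagation training described above is not shown; in the paper the inputs would be the turbine quantities used by the MPPT controller.

# Forward pass of the two-input, two-rule first-order Sugeno ANFIS of Eqs. (14)-(20).
# All membership and consequent parameters are arbitrary illustrative values.

def gbell(x, a, b, c):
    # Generalized bell membership function used in Layer 1: 1 / (1 + |(x - c)/a|^(2b))
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

premise = {  # (a, b, c) of A1, A2 (input x) and B1, B2 (input y)
    "A1": (2.0, 2.0, 0.0), "A2": (2.0, 2.0, 5.0),
    "B1": (3.0, 2.0, 0.0), "B2": (3.0, 2.0, 8.0),
}
consequent = [(0.4, 0.1, 1.0), (0.9, -0.2, 3.0)]  # (p_i, q_i, r_i) of the two rules

def anfis_forward(x, y):
    # Layer 1: membership degrees, Eq. (16)
    muA = [gbell(x, *premise["A1"]), gbell(x, *premise["A2"])]
    muB = [gbell(y, *premise["B1"]), gbell(y, *premise["B2"])]
    # Layer 2: firing strengths w_i = muA_i(x) * muB_i(y), Eq. (17)
    w = [muA[0] * muB[0], muA[1] * muB[1]]
    # Layer 3: normalized firing strengths, Eq. (18)
    wn = [wi / (w[0] + w[1]) for wi in w]
    # Layers 4 and 5: weighted first-order rule outputs and their sum, Eqs. (19)-(20)
    f = [p * x + q * y + r for (p, q, r) in consequent]
    return sum(wni * fi for wni, fi in zip(wn, f))

print(anfis_forward(1.2, 3.4))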

6 Simulation Results and Discussion

The generator-side converter is controlled to capture the maximum power from the available wind power. According to Eq. (9), in order to obtain maximal electromagnetic torque T_e with minimum current, this study controls only the q-axis current i_qs, with the assumption that the d-axis current i_ds = 0. Moreover, according to [21, 29], to produce maximum power the optimum value of the rotation speed is adjusted using the fuzzy logic


control technique. The relation between the blade angular velocity reference ω_mref and the wind speed V_w for constant R and λ_opt is given by

\omega_{mref} = \frac{\lambda_{opt} V_w}{R}        (21)

First the wind speed is estimated by the proposed ANFIS based MPPT controller to generate the reference speed ω_mref for the speed control loop of the rotor-side converter control, which tracks the maximum power points of the system by dynamically changing the turbine torque so as to operate at λ_opt, as shown in Fig. 4. The PI controller drives the actual rotor speed to the desired value by varying the switching ratio of the PWM inverter. In the speed control loop, the actual speed of the generator is compared to its reference value ω_mref, which is obtained by the ANFIS based MPPT control using λ_opt as defined above. Then, the speed controller outputs the reference q-axis current i_qref. The control target of the inverter is the output power delivered to the load [30, 31].
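A possible sketch of this outer speed loop is shown below in Python, assuming the ANFIS block supplies an estimated wind speed: Eq. (21) yields the rotor speed reference and a PI controller produces the q-axis current reference. The values of λ_opt, R and the PI gains are assumptions for illustration only, not tuned values from the paper.

# Outer speed loop of Fig. 4: an estimated wind speed is turned into a rotor speed
# reference via Eq. (21), and a PI controller produces the q-axis current reference.
LAMBDA_OPT = 8.1        # optimal tip speed ratio (illustrative assumption)
R = 35.0                # blade radius in m (illustrative assumption)
KP, KI = 800.0, 120.0   # PI gains (assumed, untuned)

class SpeedLoop:
    def __init__(self):
        self.integral = 0.0

    def step(self, v_w_est, omega_m, dt):
        omega_ref = LAMBDA_OPT * v_w_est / R   # Eq. (21)
        err = omega_ref - omega_m
        self.integral += err * dt
        return KP * err + KI * self.integral   # i_q reference handed to the current loop

loop = SpeedLoop()
iq_ref = loop.step(v_w_est=9.0, omega_m=1.8, dt=1e-3)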

Fig. 4. Speed control block diagram

The block diagram of the ANFIS-based MPPT controller module is shown in Fig. 5. The inputs to the ANFIS network are the mechanical power and the speed of the turbine. The network estimates the effective wind speed, which is used to find the reference speed of the rotor. Table 1 shows the parameters of the wind turbine and generator used for the model.
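One way such a wind-speed estimator could be trained is to generate (mechanical power, rotor speed) → wind speed samples from the aerodynamic model of Sect. 4.1, as in the hypothetical Python sketch below. The rotor radius, air density and speed ranges are assumptions, and this is not necessarily the data-collection procedure used by the authors (Sect. 6.1 states that the data were collected from an operational 1.5 MW turbine).

import math

def p_mech(omega_m, v_w, R=35.0, rho=1.225, beta=0.0):
    # Aerodynamic model of Eqs. (1), (3)-(5); R and rho are illustrative assumptions
    lam = R * omega_m / v_w
    li = 1.0 / (1.0 / (lam + 0.008 * beta) - 0.035 / (beta ** 2 + 1.0))
    cp = 0.5716 * (116.0 / li - 0.4 * beta - 5.0) * math.exp(-21.0 / li) + 0.0068 * lam
    return 0.5 * cp * rho * math.pi * R ** 2 * v_w ** 3

samples = []
for v in range(30, 111, 5):       # wind speeds 3.0-11.0 m/s (cut-in to rated, Table 1)
    for w in range(5, 40, 2):     # rotor speeds 0.5-3.9 rad/s (assumed operating range)
        p = p_mech(w / 10.0, v / 10.0)
        if p > 0:                 # keep only physically meaningful operating points
            samples.append(((p, w / 10.0), v / 10.0))   # ((P_m, omega_m), target V_w)
# 'samples' could then be split into training and checking sets for the hybrid learning rule.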


Fig. 5. ANFIS-based MPPT control module of turbine rotor speed.

The simulated results of the generator output voltage at an average wind speed of 12 m/s are given in Fig. 6 below. The voltage is close to the rated voltage of the generator and the speed fluctuations are reduced. The obtained voltage is purely sinusoidal.

Fig. 6. Inverter output voltage

The advantages of ANFIS over the two parts of this hybrid system are as follows. ANFIS uses the neural network's ability to classify data and find patterns. It then develops a fuzzy expert system that is more transparent to the user and also less likely to produce memorization errors than a neural network. ANFIS removes (or at least reduces) the need for an expert. Furthermore, ANFIS has the ability to divide the data into groups and adapt these groups to arrange the best membership functions that cluster the data and deduce the desired output with a minimum number of epochs [32]. The learning mechanism fine-tunes the underlying fuzzy inference system. Using a given input/output data set,


ANFIS constructs a fuzzy inference system (FIS) whose membership function parameters are tuned (adjusted) using either the back propagation algorithm alone or in combination with a least squares type of method. This allows the fuzzy system to learn from the data [33, 34]. However, the restrictions of ANFIS are that only the Sugeno-type decision method is available, there can be only one output, and the defuzzification method is the weighted mean value. ANFIS can replace the anemometer for a small wind energy system and reduce the size of the turbine as well as the cost. Moreover, it increases the production of power from the system. Sensorless estimation with the help of an ANFIS network offers very good opportunities for small and medium wind energy generation systems [35, 36].

Table 1. Parameters of wind turbine and PMSG

Wind turbine                          PMSG
Rated power           1.5 MW          Rated power               1.5 MW
Cut-in wind speed     3 m/s           Rated rotational speed    17.3 rpm
Rated wind speed      11 m/s          Number of pole pairs P    44
Cut-out wind speed    22 m/s          Frequency                 50 Hz
                                      Rated voltage             200 V

6.1 Limitation of Research

The results presented in this paper have some limitations. The data set used in this study was collected from a 1.5 MW variable-speed PMSG based wind turbine. It would have been better if the results were also supported by laboratory experiments. However, the actual system is very expensive as well as large in size and is not usually available in laboratories, except those which are highly equipped and solely dedicated to wind energy research. The only option is to collect the data samples from an operational wind turbine system and then use the collected data to design the estimation and control mechanism.

7 Conclusion and Future Scope

As wind speed changes throughout the day, variable speed based wind power generation is useful for optimizing the power output of a wind energy harvesting system using MPPT methods. This paper presents modeling and simulation of a variable speed PMSG based system using an ANFIS network. The network is applied to control the speed of the rotor and adjust it to its optimum value to give maximum power output. Results have shown that the voltage is close to the rated voltage of the generator and speed fluctuations are reduced. Future work is to control the pitch angle for wind speeds above the rated speed in order to produce rated power.


Acknowledgments. The authors acknowledge Punjabi University Patiala for providing the internet facilities and the necessary library resources.

References 1. Zhang, J., Xu, S.: Application of fuzzy logic control for grid-connected wind energy conversion system. In: Dadios, E.P. (ed.) Fuzzy Logic-Tool for Getting Accurate Solutions. pp. 50–77. IntechOpen (2015) DOI: https://doi.org/10.5772/59923 2. WWEA: Wind power capacity worldwide reaches 597 GW, 50,1 GW added in 2018. World Wind Energy Association, Brazil (2019) 3. Husain, M.A., Tariq, A.: Modeling and study of a standalone PMSG wind generation system using MATLAB/SIMULINK. Univ. J. Electr. Electron. Eng. 2(7), 270–277 (2014) 4. Ali, A., Moussa, A., Abdelatif, K., Eissa, M., Wasfy, S., Malik, O.P.: ANFIS based controller for rectifier of PMSG wind energy conversion system energy conversion system. In: Proceedings of Electrical Power and Energy Conference (EPEC), pp. 99–103. IEEE, Calgary (2014) 5. Asghar, A.B., Liu, X.: Adaptive neuro-fuzzy algorithm to estimate effective wind speed and optimal rotor speed for variable-speed wind turbine. Neurocomputing 272, 495–504 (2017) 6. Pindoriya, R.M., Usman, A., Rajpurohit, B.S., Srivastava, K.N.: PMSG based wind energy generation system: energy maximization and its control. In: 7th International Conference on Power Systems (ICPS), pp. 376–381, IEEE, Pune (2017) 7. Khaing, T.Z., Kyin, L.Z.: Control scheme of stand-alone wind power supply system with battery energy storage system. Int. J. Electr. Electron. Data Commun. 3(2), 19–25 (2015) 8. El-Tamaly, H.H., Nassef, A.Y.: Tip speed ratio and pitch angle control based on ANN for putting variable speed WTG on MPP. In: 18th International Middle-East Power Systems Conference (MEPCON), pp. 625–632, IEEE Power & Energy Society, Cairo (2016) 9. Boobalan, M., Vijayalakshmi, S., Brindha, R.: A fuzzy-PI based power control of wind energy conversion system PMSG. In: International Conference on Energy Efficient Technologies for Sustainability, pp. 577–583, IEEE, Nagercoil (2013) 10. Sarvi, M., Abdi, S., Ahmadi, S.: A new method for rapid maximum power point tracking of PMSG wind generator using PSO_fuzzy logic. Tech. J. Eng. Appl. Sci. 3(17), 1984–1995 (2013) 11. Ahmed, O.A., Ahmed, A.A.: Control of Wind Turbine for variable speed based on fuzzyPID controller. J. Eng. Comput. Sci. (JECS) 18(1), 40–51 (2017) 12. Torki, W., Grouz, F., Sbita, L.: Vector control of a PMSG direct-drive wind turbine. In: International Conference on Green Energy Conversion Systems (GECS), pp. 1–6. IEEE, Hammamet (2017) 13. Marmouh, S., Boutoubat, M., Mokrani, L.: MPPT fuzzy logic controller of a wind energy conversion system based on a PMSG. In: 8th International Conference on Modelling, Identification and Control, Algiers, pp. 296–302 (2016) 14. Gencer, A.: Modelling of operation PMSG based on fuzzy logic control under different load conditions. In: 10th International Symposium on Advanced Topics in Electrical Engineering (ATEE), pp. 736–739, IEEE, Bucharest (2017) 15. Kesraoui, M., Lagraf, S.A., Chaib, A.: Aerodynamic power control of wind turbine using fuzzy logic. In: 3rd International Renewable and Sustainable Energy Conference (IRSEC), Algeria, pp. 1–6 (2015)


16. Nadoushan, M.H.J., Akhbari, M.: Optimal torque control of PMSG-based stand-alone wind turbine with energy storage system. J. Electr. Power Energy Convers. Syst. 1(2), 52–59 (2016) 17. Aymen, J., Ons, Z., Nejib, M.M.: Performance assessment of a wind turbine with variable speed wind using artificial neural network and neuro-fuzzy controllers. Int. J. Syst. Appl. Eng. Dev. 11(3), 167–172 (2017) 18. Sahoo, S., Subudhi, B., Panda, G.: Torque and pitch angle control of a wind turbine using multiple adaptive neuro-fuzzy control. Wind Eng. 44(2), 125–141 (2019) 19. Mitiku, T., Manshahia, M.S.: Fuzzy logic controller for modeling of wind energy harvesting system for remote areas. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing and Optimization, ICO 2019. Advances in Intelligent Systems and Computing, vol. 1072. Springer, Cham (2019). https://doi-org-443.webvpn.fjmu.edu.cn/10.1007/978-3030-33585-4_4 20. Heidari, M.: Maximum wind energy extraction by using neural network estimation and predictive control of boost converter. Int. J. Ind. Electron. Control Optim. 1(2), 115–120 (2018) 21. Slah, H., Mehdi, D., Lassaad, S.: Advanced control of a PMSG wind turbine. Int. J. Mod. Nonlinear Theory Appl. 5, 1–10 (2016) 22. Medjber, A., Guessoumb, A., Belmili, H., Mellit, A.: New neural network and fuzzy logic controllers to monitor maximum power for wind energy conversion system. Energy 106, 137–146 (2016) 23. Sahu, S., Panda, G., Yadav, S.P.: Dynamic modelling and control of PMSG based standalone wind energy conversion system. In: Recent Advances on Engineering, Technology and Computational Sciences (RAETCS), pp. 1–6, IEEE, Allahabad (2018) 24. Jain, A., Shankar, S., Vanitha, V.: Power generation using permanent magnet synchronous generator (PMSG) based variable speed wind energy conversion system (WECS): an overview. J. Green Eng. 7(4), 477–504 (2018) 25. Elbeji, O., Mouna, B.H., Lassaad, S.: Modeling and control of a variable speed wind turbine. In: The Fifth International Renewable Energy Congress, pp. 425–429. IEEE, Hammamet (2014) 26. Sagiraju, D.K.V., Obulesu, Y.P., Choppavarapu, S.B.: Dynamic performance improvement of standalone battery integrated PMSG wind energy system using proportional resonant controller. Int. J. Eng. Sci. Technol. 20(4), 1–3 (2017) 27. Petkovic, D., Shamshirband, S.: Soft methodology selection of wind turbine parameters to large affect. Electr. Power Energy Syst. 69, 98–103 (2015) 28. Oguz, Y., Guney, I.: Adaptive neuro-fuzzy inference system to improve the power quality of variable-speed wind power generation system. Turk. J. Electr. Eng. Comput. Sci. 18(4), 625–645 (2010) 29. Farh, H.M., Eltamaly, A.M.: Fuzzy logic control of wind energy conversion system. J. Renew. Sustain. Energy 5(2), 1–3 (2013) 30. Sekhar, V.: Modified fuzzy logic based control strategy for grid connected wind energy conversion system. J. Green Eng. 6(4), 369–384 (2016) 31. Gupta, J., Kumar, A.: Fixed pitch wind turbine-based permanent magnet synchronous machine model for wind energy conversion systems. J. Eng. Technol. 2(1), 52–62 (2012) 32. Thongam, J.S., Bouchard, P., Ezzaidi, H., Ouhrouche, M.: Artificial neural network-based maximum power point tracking control for variable speed wind energy conversion systems. In: 18th IEEE International Conference on Control Applications, Saint Petersburg (2009) 33. Vasant, P., Zelinka, I., Weber, G.W.: Intelligent Computing & Optimization. Conference Proceedings ICO 2018. Springer, Cham (2018). ISBN 978-3-030-00978-6


34. Vasant, P., Zelinka, I., Weber, G.W.: Intelligent computing & optimization. In: Proceedings of the 2nd International Conference on Intelligent Computing and Optimization. Springer (2019). ISBN 978-3-030-33585-4
35. Mitiku, T., Manshahia, M.S.: Artificial neural networks based green energy harvesting for smart world. In: Somani, A., Shekhawat, R., Mundra, A., Srivastava, S., Verma, V. (eds.) Smart Systems and IoT: Innovations in Computing. Smart Innovation, Systems and Technologies, vol. 141. Springer, Singapore. https://doi.org/10.1007/978-981-13-8406-6_4
36. Mitiku, T., Manshahia, M.S.: Fuzzy inference based green energy harvesting for smart world. In: IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), pp. 1–4. IEEE, Madurai (2018)

Features of Distributed Energy Integration in Agriculture

Alexander V. Vinogradov1, Dmitry A. Tikhomirov1, Vadim E. Bolshev1, Alina V. Vinogradova1, Nikolay S. Sorokin2, Maksim V. Borodin2, Vadim A. Chernishov3, Igor O. Golikov2, and Alexey V. Bukreev1

1 Federal Scientific Agroengineering Center VIM, 1-st Institutsky proezd, 5, 109428 Moscow, Russia
[email protected], [email protected], [email protected], [email protected], [email protected]
2 Orel State Agrarian University named after N.V. Parahin, Generala Rodina St., 69, 302019 Orel, Russia
[email protected], [email protected], [email protected]
3 Orel State University named after I.S. Turgenev, Komsomolskaya St., 95, 302026 Orel, Russia
[email protected]

Abstract. Agriculture is a guarantor of the sustainable development of the state, as it supplies the population with basic necessities. Agricultural holdings and similar agricultural associations are distributed agricultural enterprises, each part of which requires a reliable supply of the necessary types of energy. The article explains what agroholdings are and what problems exist in the implementation of distributed energy projects in agriculture. It also considers options for the structure of small generation facilities, using biogas plants as an example, together with the related ownership issues. Various promising options for the use of individual and local small generation facilities in agroholdings are given, listing the facility types, the scope of their perspective application in the agroholding structure and the expected key results.
Keywords: Agricultural enterprises · Agricultural holdings · Distributed generation · Energy consumption analysis · Energy resources · Waste recycling

1 Introduction

An agroholding is a group of legal entities engaged in agricultural activities and the sale of agricultural products [1]. Thus, it is actually a distributed agricultural enterprise that can be engaged in different types of agricultural production in different territories. One holding company can contain units engaged in crop production, livestock raising and agricultural product processing. The energy needs of an agricultural holding are formed depending on the specialization of its divisions, production volumes and the production technologies applied.


Usually, electricity for all structural units of holdings is supplied from the electric grids (which are on the balance of power grid companies), while heat is supplied by their own boiler houses and boiler rooms (which are on the balance of heat supply companies) or, in some cases, by centralized heat networks. The main problems of energy supply to agricultural holdings, as to all agricultural enterprises, especially those with facilities far from developed infrastructure, are:
• low reliability of electricity supply, including supply to heat supply facilities (boiler houses), which leads to a decrease in product quality, underproduction, livestock losses and interruptions of heat supply to agricultural processes in the event of electricity interruptions at boiler rooms [2–4];
• low energy efficiency of the boiler houses used, which increases the cost of the holding's production;
• the lack of sustainable communication channels for organizing systems for monitoring the operation of energy supply facilities and energy consumption, which does not allow efficient monitoring of energy consumption by facilities [5–7];
• the lack of qualified specialists who can effectively organize interaction with power sales and power grid companies. First of all, the low qualification of electrical engineering and engineering personnel makes it difficult to forecast consumption and to meet the requirements of energy sales companies for using more favorable price categories, which in turn does not allow agricultural enterprises within agricultural holdings to reduce the cost of electricity and hence the cost of products [8].

2 Problems of Integration of Distributed Energy Projects in Agriculture

The features of distributed generation make it possible, in certain cases, to solve a number of these problems. Advanced options have been developed for installations producing energy from renewable sources [9–11]. Small generation offers an opportunity to increase independence from centralized energy supply systems and a possibility to utilize enterprise waste, which can be used as raw material for small generation sources or heat generators. The need for distributed generation in agroholdings exists objectively and logically, but its justification faces a number of constraining factors given below.
The lack of effective methods for assessing the potential of using small generation facilities (SGF) in the structure of agricultural holdings covers a full range of issues, from the availability of land for their placement to the availability of personnel, raw materials, etc. The existing methods focus only on the effects of replacing electricity and heat and on the effect of waste processing, and do not explore all the issues relevant to the prospects of using small generation facilities.
There is an insufficient supply of ready-made integrated solutions on the SGF market, including the ownership variant in which the SGF is on the balance of the relevant companies. Basically, the proposals concern only the SGF itself and do not consider its effective integration into the infrastructure of the enterprise, taking into account the enterprise's features. In combination with the absence of a developed system of SGF operational services, this makes it difficult to decide on the use of distributed generation.
Significant investments are needed in the pre-project preparation of SGF implementation in agroholdings, while the effectiveness of SGF use is not guaranteed. Understanding this, enterprises are reluctant to carry out such work. In addition, the very small number of examples of successful application of SGF in the agricultural sector also “scares” agroholdings away from their use.
There is a problem of consistency of legislation on SGF use in terms of connecting them to general-use electrical networks and using them for power supply to third-party consumers. The processes of connection and tariff determination are overly bureaucratic. This hinders the effective use of SGF generating capacities: it is necessary either to unreasonably overestimate the power of the generators or to use the SGF to cover only part of the enterprise's capacity. Transmitting electricity to third-party consumers requires either creating the holding's own network and distribution company or involving them from outside, which increases the cost of the electricity sold. It is also necessary to coordinate legislation and regulatory documents regarding the use of renewable energy sources and installations for processing agricultural waste.
There is instability in the supply of raw materials for use as fuel for SGF. This primarily concerns biogas plants and plants for the incineration of crop waste, forest industry waste, etc. The market dictates the need to adapt production to it; therefore, the structure and volume of production wastes may change and become insufficient for the SGF. The use of biomass specifically cultivated for energy needs as a raw material requires justification, since it occupies farmland for these purposes.

3 Discussion and Results

It should be noted that SGF options can vary greatly depending on the type, quantity and quality of raw materials as well as the needs of the customer. Consider the options for the SGF structure using the example of biogas plants (BGP) as one of the most promising SGF options [12, 13].
Individual BGP. An individual BGP is designed to partially or fully meet the needs of a small agricultural enterprise for electricity, fertilizer and cheap heat, and in some cases to provide complete energy autonomy. The structural diagram of an individual BGP is presented in Fig. 1.


Fig. 1. Structural diagram of the use of individual BGP

In this case, the ownership of the BGP has two options:
• the BGP is owned by the enterprise and supplies it with: a) gas and fertilizers; b) electricity, heat and fertilizers; or c) gas, electricity, heat and fertilizers;
• the BGP is an independent enterprise buying raw materials and selling the same products in any combination.
Local BGP. A local BGP is a biogas plant designed to fully or partially cover the required capacities of several enterprises connected to a common energy network. The structural diagram of the local BGP is presented in Fig. 2. The end products of such a BGP are electricity, biogas and bio-fertilizers used by the agricultural enterprises for technological purposes.

Fig. 2. Structural diagram of the use of local BGP


In this case, the ownership of the BGP has two options:
• the BGP is jointly owned by the enterprises and supplies them with the products of biomass processing in the combination they need, taking into account the characteristics of the enterprises;
• the BGP is an independent enterprise that buys raw materials and sells its products to the supplier enterprises.
Network BGP. A network BGP is a biogas plant intended for the sale of energy resources (gas/electricity/fertilizers/heat) to an external network. Raw materials for the network BGP are supplied by one or more agricultural enterprises, and the final product of such a BGP is gas (or electricity). The block diagram of the network BGP is presented in Fig. 3.

Fig. 3. Structural diagram of the use of network BGP

In this case, the ownership of the network BGP has two options:
• the BGP is jointly owned by the enterprises and supplies its products to the network in the form of electricity, heat, fertilizer or gas;
• the BGP is an independent enterprise purchasing raw materials from the enterprises and selling its products to the appropriate networks.
A BGP can be specialized and sell only gas to the gas network, which significantly reduces the equipment required for the BGP and, accordingly, the cost of its products. An auxiliary product in this case can be fertilizer. In the case of an agricultural holding, the BGP can also become one of its structural units, aimed at supplying the divisions with the products they need.


As for the whole range of SGF, not only BGP, it is first of all necessary to conduct research aimed at determining the scope of application of the different SGF types. For example, if the structure of an agricultural holding includes enterprises engaged in pasture cattle breeding, it is rational to use mobile technological centers (shepherds' homes on wheels, milking points, shearing points, etc.) equipped with solar batteries as an energy source. Pumping installations on pastures, or stationary ones at greenhouse complexes and livestock farms, can be equipped with both solar and wind power plants. There are also promising SGF options for enterprises with access to a natural gas network that use gas to generate heat and electricity. All these options, however, require careful scientific study with the development of practical recommendations on the required SGF capacities and their choice for different climatic zones, different farm specializations and other conditions. Various promising options for the use of individual and local SGF in agroholdings are listed in Table 1.

Table 1. Options for the use of individual and local SGF in agroholdings

SGF type: Wind power plant
Scope of perspective application in agroholding structure:
• power supply and (or) mechanization of pumping stations;
• power supply of individual objects remote from the infrastructure.
Expected key results:
• reducing the cost of pumping water;
• reducing the cost of power supply to individual objects (including by reducing the cost of connecting to the network), for example, at stationary points on remote pastures.

SGF type: Biogas plant
Scope of perspective application in agroholding structure:
• gas supply to facilities of different purposes;
• heating of livestock facilities and nearby buildings of various purposes;
• recycling of enterprise waste;
• in some cases, power supply of individual objects.
Expected key results:
• reducing the cost of gas supply;
• reducing heating costs;
• reducing the cost of power supply to individual objects (including by reducing the cost of connecting to the networks);
• reducing the cost of fertilizers;
• effective use of fallow lands (when growing energy plants) or use of land for BGP raw materials in crop rotation;
• reduction of fines for environmental impact.

SGF type: Solar power plant
Scope of perspective application in agroholding structure:
• power supply to pumping stations;
• power supply (electricity and heat supply) to individual objects remote from the infrastructure;
• power supply to mobile technological points (housing for shepherds on wheels, mobile milking stations, etc.);
• power supply to individual loads in enterprises (emergency lighting, back-up power for responsible electrical receivers, etc.);
• drying of grain, energy supply to storage facilities.
Expected key results:
• reducing the cost of pumping water;
• reducing the cost of electricity supply to individual objects (including by reducing the cost of connecting to the network), for example, at stationary points on remote pastures;
• improving the comfort of workplaces at mobile points;
• reducing the cost of creating backup power lines.

SGF type: Power plant on natural gas (gas turbine, gas piston, etc.)
Scope of perspective application in agroholding structure:
• heating of enterprise facilities and nearby buildings of various purposes;
• power supply to the enterprise facilities;
• the use of heat in technological processes.
Expected key results:
• reducing the cost of thermal energy;
• reducing the cost of electricity supply to individual objects (including by reducing the cost of connecting to the network).

SGF type: Heat generator on the enterprise waste
Scope of perspective application in agroholding structure:
• heating of livestock facilities;
• the use of heat in technological processes.
Expected key results:
• reducing the cost of thermal energy.

In addition to those indicated in Table 1, there can be other SGF types as well as other effects of their use. In any case, the final assessment should take into account the characteristics of a particular farm. At the national level, it is rational to formulate a program to investigate the potential of SGF use in the structure of agriculture. It should provide for the solution of legislative and regulatory issues as well as technical solutions, solutions for the SGF service infrastructure, the organization of advisory services and other aspects of SGF application, for example, the use of the principles of intelligent electrical networks and microgrids described in [14–16] when creating distributed generation projects.

4 Conclusions

1. Agricultural holdings are objectively interested in the implementation of distributed generation projects provided that these projects deliver results, namely a reduction in the cost of production.


2. For the implementation of distributed generation projects in agroholdings, it is necessary to carry out pre-project preparation aimed at determining the potential for using particular types of SGF. For this, it is necessary to create methodologies for assessing this potential, covering all implementation issues from land acquisition to the effects obtained, and to ensure the availability of an effective SGF service system.
3. It is rational to formulate a program to study the potential of SGF application in the structure of agriculture, providing for the solution of legislative and regulatory issues as well as technical solutions, decisions regarding the infrastructure of SGF service and maintenance, the organization of an advisory service and other aspects of SGF application.

References
1. Epshtein, D., Hahlbrock, K., Wandel, J.: Why are agroholdings so pervasive in Russia's Belgorod oblast'? Evidence from case studies and farm-level data. Post-Communist Econ. 25(1), 59–81 (2013)
2. Vinogradov, A., Vinogradova, A., Bolshev, V., Psarev, A.I.: Sectionalizing and redundancy of the 0.38 kV ring electrical network: mathematical modeling schematic solutions. Int. J. Energy Optim. Eng. (IJEOE) 8(4), 15–38 (2019)
3. Vinogradov, A., Vasiliev, A., Bolshev, V., Vinogradova, A., Kudinova, T., Sorokin, N., Hruntovich, N.: Methods of reducing the power supply outage time of rural consumers. In: Kharchenko, V., Vasant, P. (eds.) Renewable Energy and Power Supply Challenges for Rural Regions, pp. 370–392. IGI Global (2019)
4. Tikhomirov, D.A., Kopylov, S.I.: An energy-efficient electric plant for hot steam and water supply of agricultural enterprises. Russ. Electr. Eng. 89(7), 437–440 (2018)
5. Bolshev, V.E., Vinogradov, A.V.: Obzor zarubezhnyh istochnikov po infrastrukture intellektual'nyh schyotchikov [Overview of foreign sources on the infrastructure of smart meters]. Bull. South Ural State Univ. Ser. Energy 18(3), 5–13 (2018)
6. Sharma, K., Saini, L.M.: Performance analysis of smart metering for smart grid: an overview. Renew. Sustain. Energy Rev. 49, 720–735 (2015)
7. Kabalci, Y.: A survey on smart metering and smart grid communication. Renew. Sustain. Energy Rev. 57, 302–318 (2016)
8. Vinogradov, A.V., Anikutina, A.V.: Features in calculations for electric energy of consumers with a maximum power over 670 kW and a computer program for selecting the optimal price category [Osobennosti v raschetah za elektroenergiyu potrebitelej s maksimal'noj moshchnost'yu svyshe 670 kVt i komp'yuternaya programma dlya vybora optimal'noj cenovoj kategorii]. Innov. Agric. 2, 161–169 (2016)
9. Litti, Y., Kovalev, D., Kovalev, A., Katraeva, I., Russkova, Y., Nozhevnikova, A.: Increasing the efficiency of organic waste conversion into biogas by mechanical pretreatment in an electromagnetic mill. In: Journal of Physics: Conference Series, vol. 1111, no. 1 (2018)
10. Panchenko, V., Kharchenko, V., Vasant, P.: Modeling of solar photovoltaic thermal modules. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing & Optimization. ICO 2018. Advances in Intelligent Systems and Computing, vol. 866, pp. 108–116. Springer, Cham (2019)
11. Daus, Y., Kharchenko, V.V., Yudaev, I.V.: Managing spatial orientation of photovoltaic module to obtain the maximum of electric power generation at preset point of time. Appl. Solar Energy 54(6), 400–405 (2018)


12. Gladchenko, M.A., Kovalev, D.A., Kovalev, A.A., Litti, Y.V., Nozhevnikova, A.N.: Methane production by anaerobic digestion of organic waste from vegetable processing facilities. Appl. Biochem. Microbiol. 53(2), 242–249 (2017)
13. Kovalev, A., Kovalev, D., Panchenko, V., Kharchenko, V., Vasant, P.: System of optimization of the combustion process of biogas for the biogas plant heat supply. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing and Optimization. ICO 2019. Advances in Intelligent Systems and Computing, vol. 1072. Springer, Cham (2019)
14. Kharchenko, V., Gusarov, V., Bolshev, V.: Reliable electricity generation in RES-based microgrids. In: Alhelou, H.H., Hayek, G. (eds.) Handbook of Research on Smart Power System Operation and Control, pp. 162–187. IGI Global (2019)
15. Vinogradov, A., Bolshev, V., Vinogradova, A., Kudinova, T., Borodin, M., Selesneva, A., Sorokin, N.: A system for monitoring the number and duration of power outages and power quality in 0.38 kV electrical networks. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing & Optimization. ICO 2018. Advances in Intelligent Systems and Computing, vol. 866, p. 10. Springer, Cham (2019)
16. Vinogradov, A., Vasiliev, A., Bolshev, V., Semenov, A., Borodin, M.: Time factor for determination of power supply system efficiency of rural consumers. In: Kharchenko, V., Vasant, P. (eds.) Handbook of Research on Renewable Energy and Electric Resources for Sustainable Rural Development, pp. 394–420. IGI Global (2018)

Concept of Multi-contact Switching System

Alexander V. Vinogradov1, Dmitry A. Tikhomirov1, Alina V. Vinogradova1, Alexander A. Lansberg2, Nikolay S. Sorokin2, Roman P. Belikov2, Vadim E. Bolshev1, Igor O. Golikov2, and Maksim V. Borodin2

1 Federal Scientific Agroengineering Center VIM, 1-st Institutsky proezd, 5, 109428 Moscow, Russia
[email protected], [email protected], [email protected], [email protected]
2 Orel State Agrarian University named after N.V. Parahin, Generala Rodina St., 69, 302019 Orel, Russia
[email protected], [email protected], [email protected], [email protected]@yandex.ru, [email protected]

Abstract. The paper proposes a new approach to the construction of intelligent electrical networks. It is based on the use of multi-contact switching systems: switching devices having 3 or more contact groups, each of which is controlled independently. The paper also states the main provisions of this approach. An example is given of a distribution electrical network based on multi-contact switching systems and containing a transformer substation, several renewable energy sources and various types of switching equipment. The basic functions of such a network are listed, based on the processing of the received data and the interaction of the network equipment with the systems of monitoring, control, accounting and management.
Keywords: Agricultural enterprises · Agricultural holdings · Distributed generation · Energy consumption analysis · Energy resources · Waste recycling · Smart electric networks · Multicontact switching systems · Sectionalizing · Redundancy

1 Introduction

At present, the development of “smart power grids” is a global trend related to the fact that existing 0.4 kV distribution power grids are characterized by low reliability, significant energy losses and insufficiently high power quality. The main reasons for this are the excessive length of power lines, the insufficient degree of network automation and the construction of networks using radial schemes, which does not allow for backup power supply to consumers. The most effective way to increase power supply system reliability is to connect distributed generation, for example, biogas plants, solar panels, etc. [1–3], enabling these networks to work as microgrids [4]. The joint work of distributed generation sources within the microgrid requires complex automation. It consists in equipping the electric network with intelligent devices that allow analyzing the operating modes and automatically reconfiguring the network to localize the place of damage and restore power to consumers connected to undamaged network sections.
Distributed automation of 0.4 kV electric networks should be carried out taking into account the fact that these networks can have a large number of power lines outgoing from the main transmission lines and that small generation sources can be connected directly to 0.4 kV networks. In accordance with this, the concept of the development of electric networks using multi-contact switching systems (MCSS) is proposed in this paper.

2 MCSS

A new approach to the construction of intelligent electrical networks is proposed. Its feature is the use of multi-contact switching systems in these networks. Multi-contact switching systems (MCSS) are switching devices having 3 or more contact groups, each of which is controlled independently [5, 6]. The use of MCSS in electrical networks makes it possible to automatically change the network configuration when the situation changes or on the instructions of the operator. To this end, the MCSS are equipped with monitoring, accounting and control devices that allow data exchange with a single network information center, which in turn makes it possible to implement the principles of SMART GRID.
The developed concept of intelligent electrical networks contains the following main provisions:
• Application of distribution electric networks containing multi-contact switching systems, equipped with monitoring, accounting and control devices that allow data exchange with a single network information center.
• Equipping electrical networks with means of redundancy and partitioning.
• Equipping electrical networks and power line systems [7] with:
– a control system of network equipment, designed to monitor the actual state of network elements such as switching devices and power elements; the system is based on SCADA systems and can be combined with the monitoring and accounting systems;
– a monitoring system that performs the following functions: monitoring the technical condition of all network elements (the state of power lines and their elements, for example, supports, the overgrowing of transmission line routes, the state of equipment insulation, etc.); monitoring the operation modes of the network and its individual elements (loading, direction of energy flows, etc.); monitoring the fulfillment of contractual obligations by consumers and energy supplying companies (power grid, retail, generating, etc.); monitoring the reliability of power supply in terms of the number of power outages and damage to network elements; monitoring the power quality; monitoring the energy efficiency of the network (energy losses and other energy efficiency indicators); and monitoring other parameters of the network and the overall power supply system, depending on the capabilities. The monitoring system can be combined with the control and accounting systems;
– an accounting system that performs the following functions: accounting for the amount of electricity; adjusting the electricity cost depending on its quality or on the reliability indicators of the power supply (for example, if the contractual terms for reliability are violated, the cost decreases); accounting for the number and duration of power supply interruptions; accounting for the operation of the network equipment; and accounting of other data. The accounting system can be combined with the control and monitoring systems;
– a control system that manages all network elements depending on the operating modes, the specified switching requirements and the data received from the control, monitoring and accounting systems, in order to improve the energy efficiency of the network, increase the power supply reliability, etc.
• Creation of database and information processing systems comprising:
– a database of consumers connected to the network, with their characteristics and operating mode parameters;
– a database of equipment installed in the network, with its characteristics and operating mode parameters;
– a database of power supply reliability, indicating the characteristics of power supply interruptions and equipment damageability;
– a database of power quality, indicating the quality parameters at different points of the network;
– a database of technical connections, indicating the connection characteristics, terms of implementation, etc.
The information collected in the databases should serve as the basis for forecasts of electricity consumption, accident rates, etc. It is also used by the control systems that change the network configuration in various modes, switch equipment on and off, etc.
• Ensuring the possibility of using various energy sources, including renewable ones (RES), and energy storage devices, both in parallel with centralized systems and without being switched on for parallel operation.
• Providing the ability to automatically change the network scheme in the event of various situations.
• The possibility of organizing microgrids with different energy sources, operating in parallel or in isolation from each other, for different consumers.
Figure 1 shows an example of a distribution smart grid containing a transformer substation (TS), several renewable energy sources (RES) and various types of switching equipment. The action of each device installed in the grid should be monitored by the monitoring, accounting and control systems, and each unit must be remotely controlled.


Communication channels in the network can be organized with different data transfer technologies (GPS, GPRS, radio frequencies, PLC modems, etc. [8–10]). The following monitoring, accounting and control systems are installed at transformer substations and renewable energy sources:
• systems (sets of equipment) for determining the places of damage in power transmission lines and transformer substations – DDP [11];
• systems (sets of equipment) for regulating the power quality indexes (including means of adaptive automatic voltage regulation, for example [12]) – RPQI;
• systems (sets of equipment) of substation automation (automatic load transfer, automatic reclosing, automatic frequency load shedding, relay protection, etc.) – SAU (substation automation unit) [13];
• systems (sets of equipment) of automatic reactive power compensation (regulation) – RPC [14, 15];
• advanced metering infrastructure – AMI. The AMI of the TS registers the operation of the substation automation systems, the operating modes and the transformer loading, and carries out electricity metering for each outgoing line and at the transformer inputs on the high and low sides [16–18];
• other systems as they are developed.
Information from all the specified and prospective systems should be transmitted to information processing and control units (IPCU) having a communication channel between themselves, as well as to a control center (SCC), which can be the dispatching office of the electric grid company. This allows remote monitoring of the TS and RES and, if necessary, remote control. The inputs of all consumers are equipped with an AMI system that makes it possible to determine electric power losses and interruptions in power supply, monitor the operation of protective equipment and automation of the consumer network, detect unauthorized electrical equipment (for example, a welding transformer), detect unauthorized connection of generators without compliance with safety rules, etc. In addition, the consumer AMI allows the electricity cost to be adjusted depending on its quality, on the number and duration of interruptions in power supply and on other factors [19]. Prepayment for electricity and other promising functions must also be possible.
Power lines, regardless of their design (cable or overhead), are equipped with remote monitoring systems for technical condition (insulation condition, overgrowth, inclination of supports, etc.) transmitting data on the state of the power transmission line elements to the IPCU and, accordingly, to the SCC. In the event of an emergency at one of the transmission line sections, automatic control of the network switching equipment is based on the data received from the control, monitoring and accounting systems. That is, the network configuration is changed by switching the contacts of the MCSS, sectionalizing points (SP), universal sectionalizing points (USP) and points of automatic switching on the reserve (ASR) in such a way as to isolate the damaged area and provide power to consumers in the backed-up areas. The example shown in Fig. 1 does not exhaust all the possibilities of using the presented systems depending on the required reliability parameters. The principles of network construction and the types of switching devices presented in Fig. 1 can also be used in networks of other voltage classes.
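To make the described reconfiguration behaviour concrete, a minimal graph-based sketch is given below. It is purely illustrative: the bus and source names, the topology and the restoration rule are assumptions for the example, not material from the paper; a real MCSS controller would act on the monitoring, accounting and control data described above.

```python
# Illustrative sketch of MCSS-based network reconfiguration (assumed topology).
# Buses are nodes; line sections controlled by switching devices are edges.
from collections import deque

# Normally closed line sections and normally open reserve ties (assumed names).
closed_sections = {("TS", "B1"), ("B1", "B2"), ("B2", "B3")}
reserve_ties = {("RES1", "B3"), ("RES2", "B2")}  # open in normal operation


def energized(sections, sources):
    """Return the set of buses reachable from any source over closed sections."""
    adjacency = {}
    for a, b in sections:
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)
    seen, queue = set(sources), deque(sources)
    while queue:
        node = queue.popleft()
        for nxt in adjacency.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen


def reconfigure(faulted_section, sources=("TS", "RES1", "RES2")):
    """Isolate the faulted section, then close reserve ties that restore supply."""
    healthy = set(closed_sections) - {faulted_section}   # open the faulted section
    for tie in sorted(reserve_ties):
        supplied = energized(healthy, sources)
        a, b = tie
        if a not in supplied or b not in supplied:        # the tie reaches a dead area
            healthy.add(tie)                              # close this reserve tie
    return healthy


# A fault on section B1-B2: it is isolated, and B2/B3 are re-supplied via a tie.
print(reconfigure(("B1", "B2")))
```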


Fig. 1. An example of the use of multi-contact switching systems for the development of microgrids containing renewable energy sources.
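The defining property of an MCSS used throughout this network — three or more contact groups, each switched independently of the others — can also be fixed in a minimal data model. The sketch below is illustrative only; the class and group names are assumptions, not an implementation from the paper.

```python
class MCSS:
    """A switching device with three or more independently controlled contact groups."""

    def __init__(self, name, n_groups=3):
        if n_groups < 3:
            raise ValueError("an MCSS has at least 3 contact groups")
        self.name = name
        self.groups = {i: "open" for i in range(1, n_groups + 1)}

    def switch(self, group, state):
        """Open or close a single contact group without touching the others."""
        if state not in ("open", "closed"):
            raise ValueError("state must be 'open' or 'closed'")
        self.groups[group] = state

    def state(self):
        return dict(self.groups)


mcss = MCSS("MCSS-1", n_groups=4)
mcss.switch(1, "closed")   # e.g. feed one line section from the main supply
mcss.switch(3, "closed")   # e.g. connect a reserve tie on another group
print(mcss.state())        # groups 2 and 4 remain open
```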

3 Discussion

The intelligent electrical network based on the use of multi-contact switching systems makes it possible to perform the following basic functions:
• to implement switching algorithms and automatic switching commands in networks (changing the network configuration, isolating damaged areas, etc.);
• to “see” electricity losses in real time, ranking them as technological or commercial and, for power transformers, separating no-load and short-circuit losses;
• to develop recommendations for reducing losses and for the optimal network configuration;
• to automatically change the electricity cost depending on its quality, power supply interruption time and other parameters (a schematic example of such an adjustment is given at the end of this section);
• to develop recommendations for changing the technological connection conditions (change in capacity, change in the scheme, etc.);
• to obtain the calculated parameters of power lines and equipment and build mathematical models of the network;
• to obtain aggregate load schedules (daily, monthly, annual, etc.);
• to develop recommendations on the volumes and parameters of electricity storage and on the operation modes of balancing power plants;
• to obtain reliability indicators of the network and of the equipment installed in it, with an analysis of the most and least reliable network elements, brands and types of equipment, and to issue recommendations for selection, maintenance and repair;
• to develop recommendations for the selection and configuration of protection and automation;
• to obtain a mapping of the state of switching devices and recommendations for the development and installation of new devices;
• to obtain diagnostic parameters of the equipment, recommendations on the timing of maintenance, repair and replacement, and accident forecasts;
• to obtain other results based on the system capabilities.
The implementation of such electrical network intellectualization systems will make it possible to use technical and economic mechanisms to improve the efficiency of power networks and power supply systems in general. It becomes possible to automate the justification of the use of microgrids and networks with varying degrees of automation, to form multi-year databases on all parameters of power supply system functioning, and to predict the operation modes of electrical networks and generating facilities as well as current trends in the development of networks and the electricity market. All this allows power supply systems and intelligent electrical networks to be integrated into the digital economy, creating new markets for equipment, communication facilities and software for intelligent electrical networks and microgrids, as well as markets for electricity services. The creation of new Internet equipment and Internet services based on the capabilities of managing network equipment, regulating power quality and power supply reliability, storing electricity and using renewable energy sources are also elements of the digital economy involving the capabilities of intelligent electrical networks.
One of the main advantages of such a concept of building autonomous networks and networks connected to centralized systems is the possibility of flexible changes in the power supply scheme. This means that if there is a shortage of power from one of the energy sources or if one of the network sections is damaged, it is possible to automatically switch to another power source. It is also possible to transfer surplus energy to the centralized network at the peaks of energy production from renewable energy sources [4].

Solutions have been developed for coding situations in the electrical network, for partitioning and backing up electrical networks [20], and for improving the power quality and regulating the voltage in electrical networks [21]. In addition, the implementation of these systems will significantly improve electrical safety in the operation of electrical networks. Receiving information about the state of the power supply system and about emergency conditions in it makes it possible to quickly and remotely localize the accident site or a section of the network where there is a danger of electric shock, for example, in the case of a broken wire.
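The cost-adjustment function mentioned in the list above can be illustrated schematically as follows; the thresholds and discount factors are assumptions chosen for the example and are not taken from the paper or from any tariff regulation.

```python
def adjusted_tariff(base_tariff,
                    outage_hours, max_outage_hours,
                    voltage_deviation_pct, max_deviation_pct,
                    discount_per_violation=0.05, max_discount=0.30):
    """Reduce the billed tariff when contractual reliability/quality limits are violated."""
    violations = 0
    if outage_hours > max_outage_hours:            # reliability clause violated
        violations += 1
    if voltage_deviation_pct > max_deviation_pct:  # power quality clause violated
        violations += 1
    discount = min(discount_per_violation * violations, max_discount)
    return base_tariff * (1.0 - discount)


# 6 h of outages against a contractual limit of 4 h, voltage within limits:
# the billed tariff is reduced by 5%.
print(adjusted_tariff(5.0, outage_hours=6, max_outage_hours=4,
                      voltage_deviation_pct=8, max_deviation_pct=10))
```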

4 Conclusions

The developed concept of building intelligent electrical networks on the basis of multi-contact switching systems makes it possible to automatically change the network configuration when different energy sources are used, to monitor the network situation, including the parameters of the technical condition of the equipment and the parameters of the network operating modes, highlighting emergency modes, and to manage the switching equipment. This makes it possible to use several power sources in the network, including renewable ones, which can work both in parallel and in isolation from each other.

References
1. Litti, Y., Kovalev, D., Kovalev, A., Katraeva, I., Russkova, Y., Nozhevnikova, A.: Increasing the efficiency of organic waste conversion into biogas by mechanical pretreatment in an electromagnetic mill. In: Journal of Physics: Conference Series, vol. 1111, no. 1 (2018)
2. Panchenko, V., Kharchenko, V., Vasant, P.: Modeling of solar photovoltaic thermal modules. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing & Optimization. ICO 2018. Advances in Intelligent Systems and Computing, vol. 866, pp. 108–116. Springer, Cham (2019)
3. Daus, Y., Kharchenko, V.V., Yudaev, I.V.: Managing spatial orientation of photovoltaic module to obtain the maximum of electric power generation at preset point of time. Appl. Solar Energy 54(6), 400–405 (2018)
4. Kharchenko, V., Gusarov, V., Bolshev, V.: Reliable electricity generation in RES-based microgrids. In: Alhelou, H.H., Hayek, G. (eds.) Handbook of Research on Smart Power System Operation and Control, pp. 162–187. IGI Global (2019)
5. Vinogradov, A.V., Vinogradova, A.V., Bolshev, V.Ye., Lansberg, A.A.: Sposob kodirovaniya situacij v elektricheskoj seti, soderzhashchej mul'tikontaktnye kommutacionnye sistemy i vozobnovlyaemye istochniki energii [A way of coding situations in an electric network containing multi-contact switching systems and renewable energy sources]. Bull. Agric. Sci. Don 2(46), 68–76 (2019)
6. Vinogradov, A.V., Vinogradova, A.V., Marin, A.A.: Primenenie mul'tikontaktnyh kommutacionnyh sistem s mostovoj skhemoj i chetyr'mya vyvodami v skhemah elektrosnabzheniya potrebitelej i kodirovanie voznikayushchih pri etom situacij [Application of multi-contact switching systems with a bridge circuit and four outputs in consumer power supply circuits and coding of situations arising from this]. Bull. NIIEI 3(94), 41–50 (2019)


7. Vinogradov, A., Vasiliev, A., Bolshev, V., Vinogradova, A., Kudinova, T., Sorokin, N., Hruntovich, N.: Methods of reducing the power supply outage time of rural consumers. In: Kharchenko, V., Vasant, P. (eds.) Renewable Energy and Power Supply Challenges for Rural Regions, pp. 370–392. IGI Global (2019)
8. Bolshev, V.E., Vinogradov, A.V.: Perspektivnye kommunikacionnye tekhnologii dlya avtomatizacii setej elektrosnabzheniya [Promising communication technologies for automation of power supply networks]. Bull. Kazan State Power Eng. Univ. 11(2), 65–82 (2019)
9. Ancillotti, E., Bruno, R., Conti, M.: The role of communication systems in smart grids: architectures, technical solutions and research challenges. Commun. Technol. 36, 1665–1697 (2013)
10. Khan, R.H., Khan, J.Y.: A comprehensive review of the application characteristics and traffic requirements of a smart grid communications network. Comput. Netw. 57, 825–845 (2013)
11. Vinogradov, A., Bolshev, V., Vinogradova, A., Kudinova, T., Borodin, M., Selesneva, A., Sorokin, N.: A system for monitoring the number and duration of power outages and power quality in 0.38 kV electrical networks. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing & Optimization. ICO 2018. Advances in Intelligent Systems and Computing, vol. 866, p. 10. Springer, Cham (2019)
12. Vinogradov, A., Vasiliev, A., Bolshev, V., Semenov, A., Borodin, M.: Time factor for determination of power supply system efficiency of rural consumers. In: Kharchenko, V., Vasant, P. (eds.) Handbook of Research on Renewable Energy and Electric Resources for Sustainable Rural Development, pp. 394–420. IGI Global (2018)
13. Abiri-Jahromi, A., Fotuhi-Firuzabad, M., Parvania, M., Mosleh, M.: Optimized sectionalizing switch placement strategy in distribution systems. IEEE Trans. Power Deliv. 27(1), 362–370 (2011)
14. Singh, B., Saha, R., Chandra, A., Al-Haddad, K.: Static synchronous compensators (STATCOM): a review. IET Power Electron. 2(4), 297–324 (2009)
15. Haque, M.H.: Compensation of distribution system voltage sag by DVR and D-STATCOM. In: 2001 IEEE Porto Power Tech Proceedings (Cat. No. 01EX502), vol. 1. IEEE (2001)
16. Bolshev, V.E., Vinogradov, A.V.: Obzor zarubezhnyh istochnikov po infrastrukture intellektual'nyh schyotchikov [Overview of foreign sources on the infrastructure of smart meters]. Bull. South Ural State Univ. Ser. Energy 18(3), 5–13 (2018)
17. Sharma, K., Saini, L.M.: Performance analysis of smart metering for smart grid: an overview. Renew. Sustain. Energy Rev. 49, 720–735 (2015)
18. Kabalci, Y.: A survey on smart metering and smart grid communication. Renew. Sustain. Energy Rev. 57, 302–318 (2016)
19. Vinogradov, A., Borodin, M., Bolshev, V., Makhiyanova, N., Hruntovich, N.: Improving the power quality of rural consumers by means of electricity cost adjustment. In: Kharchenko, V., Vasant, P. (eds.) Renewable Energy and Power Supply Challenges for Rural Regions, pp. 31–341. IGI Global (2019)
20. Vinogradov, A., Vinogradova, A., Bolshev, V., Psarev, A.I.: Sectionalizing and redundancy of the 0.38 kV ring electrical network: mathematical modeling schematic solutions. Int. J. Energy Optim. Eng. (IJEOE) 8(4), 15–38 (2019)
21. Vinogradov, A., Vinogradova, A., Golikov, I., Bolshev, V.: Adaptive automatic voltage regulation in rural 0.38 kV electrical networks. Int. J. Emerg. Electric Power Syst. 20(3) (2019)

The Design of Optimum Modes of Grain Drying in Microwave–Convective Effect

Dmitry Budnikov

Federal State Budgetary Scientific Institution “Federal Scientific Agroengineering Center VIM” (FSAC VIM), 1-st Institutskij 5, Moscow 109428, Russia
[email protected]

Abstract. The development of processing modes using electrical technologies and electromagnetic fields can reduce the energy intensity and cost of grain heat treatment processes. During development, it is necessary to consider the technological requirements of the processed material, the types of equipment used and the mode of operation of the mixing equipment (continuous, pulsed, etc.). This paper presents the results of experimental studies on the basis of which systems for the optimal control of post-harvest grain processing equipment using electrophysical effects can be built. Energy consumption can thereby be reduced by 20–30% compared to classic shaft grain dryers, and the process can be intensified by 35–40%.
Keywords: Electrophysical effects · Post-harvest treatment · Microwave field · Optimum modes

1 Introduction

The high energy intensity of post-harvest grain processing dictates the need to develop new equipment that reduces the energy costs of post-harvest processing [2, 5, 7]. Many researchers note the possibility of reducing these energy costs through the use of electrophysical factors [1, 3, 4, 6, 8, 9]. These factors include ultrasound, ozone, aeroions, infrared fields, microwave exposure and others. In order to reduce costs while ensuring the quality indicators, it is also necessary to develop modes and equipment for the optimal management of post-harvest processing plants. At the same time, it is worth considering the possibility of control according to the criteria of minimum energy consumption and minimum processing time.

2 Main Part

2.1 Research Method

A number of experimental studies are required to obtain the desired data. In this work, the research was carried out on installations of two types. In the first case, the installation contained one microwave power source (a 900 W magnetron) acting on a stationary grain layer with a volume of 0.015 m³. In the second case, the installation contained six similar sources located at three height levels, with two opposite magnetrons at each level. In this case, the grain layer was moved vertically, and the volume of simultaneously processed material was 0.1 m³.
At this stage, the planning of the screening experiment is aimed at searching for energy-efficient grain drying modes using electrophysical factors, such as the microwave field, for different states of the grain layer, taking into account the form of moisture binding in the grain. It is obvious that natural drying modes have the lowest energy consumption, but in this case the final material may be damaged because of the limited time of safe storage. Modes of constant and pulsed microwave exposure are used. Table 1 shows the factors of the screening experiment and their variation levels. The response function is the consumption of electrical energy for the drying process.

Table 1. Values of factors for obtaining drying curves.

Pos.  Density, ρ, kg/m³  Initial moisture, W, %  Microwave operation mode  Air speed, v, m/s  Air temperature, T, °C
1     800                17.2                    1                         1.0                20
2     600                17.2                    1                         1.0                20
3     400                16.9                    1                         1.0                20
4     800                17.1                    2/3                       1.0                20
5     600                16.7                    2/3                       1.0                20
6     400                17.0                    2/3                       1.0                20
7     800                16.9                    1/3                       1.0                20
8     600                16.8                    1/3                       1.0                20
9     400                16.8                    1/3                       1.0                20

Since the screening experiment was carried out on an installation that processes only a small volume of grain, it makes sense to consider only the relative deviations of the energy intensity from the assumed initial value. According to the results of the screening experiment, the mode with a thin layer of grain and constant microwave exposure has the lowest energy consumption. The highest energy intensity is observed in the mode with pulsed microwave exposure of the suspended layer, which is 2.4 times higher than the consumption in the least energy-intensive mode. It should be taken into account, however, that a significant amount of energy (about 50%) was spent on creating the fluidized layer. Thus, in the subsequent stages, it is worth separating the total energy spent on the process from the energy spent on the acting factor. In addition, the volume of the processed material must be increased while maintaining the uniformity of the field distribution in the processed material. Measuring grain moisture with rapid assessment devices during microwave exposure gives false readings associated with changes in the form of moisture binding and in the properties of the material. Therefore, the reduction in moisture is determined by weighing the processed volume of the material.
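A minimal sketch of this moisture-by-weighing calculation is given below (wet-basis moisture; the dry-matter mass is assumed constant during drying, and the numbers are illustrative rather than experimental values from the paper):

```python
# Current grain moisture (wet basis) recovered from sample weighing.
# The dry-matter mass does not change during drying, so it is fixed from the
# initial mass and initial moisture; the current moisture then follows from
# the current mass. Numbers below are illustrative.

def moisture_from_mass(m_current, m_initial, w_initial_pct):
    m_dry = m_initial * (1.0 - w_initial_pct / 100.0)     # dry matter, kg
    return 100.0 * (m_current - m_dry) / m_current        # current moisture, % w.b.


m0, w0 = 10.0, 20.0   # 10 kg of grain at 20% initial moisture
m_now = 9.3           # mass after part of the water has evaporated
print(round(moisture_from_mass(m_now, m0, w0), 1))        # about 14.0%
```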


The drying curves were taken to obtain the required data. After the data are approximated by a standard parametric model or a user-defined model, the approximation quality can be evaluated both graphically and using various goodness-of-fit criteria: SSE (sum of squared errors), R-square, Adjusted R-square and RMSE (root mean square error). In addition, one can calculate confidence intervals for the found values of the model parameters corresponding to different probability levels, and confidence bands for the approximation and the data, also corresponding to different probability levels. The dependence of the grain moisture, W, %, on the drying time, t, min, can be represented by the following equation:

W = a·e^(–b·t) + c,    (1)

where a, b, c are proportionality coefficients.
Table 2 shows the results of statistical processing of the drying curves.

Table 2. Experimental data processing results.

Pos.  a      b        c      R²
1     3.438  0.0911   13.9   0.9819
2     3.769  0.07893  13.48  0.9836
3     3.293  0.09729  13.57  0.9996
4     3.035  0.05973  14.07  0.9911
5     3.013  0.04815  13.79  0.9864
6     3.332  0.07748  13.81  0.9745
7     2.959  0.02602  13.95  0.9879
8     2.99   0.04361  13.96  0.9727
9     2.963  0.05021  13.92  0.9667
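As an illustration of how the coefficients of Eq. (1) and the R² values in Table 2 can be obtained, a short Python sketch using scipy.optimize.curve_fit is given below; the data points are synthetic, generated only to resemble a drying curve, and are not the paper's measurements.

```python
# Fitting the drying model W(t) = a*exp(-b*t) + c to a drying curve and
# computing R^2. The data points are synthetic, for illustration only.
import numpy as np
from scipy.optimize import curve_fit


def drying_model(t, a, b, c):
    return a * np.exp(-b * t) + c


# Synthetic "measured" drying curve, roughly resembling mode 1 of Table 2.
t = np.array([0, 10, 20, 30, 40, 60, 80, 100], dtype=float)       # min
w = np.array([17.3, 15.3, 14.5, 14.1, 13.95, 13.9, 13.9, 13.9])   # % moisture

popt, _ = curve_fit(drying_model, t, w, p0=(3.5, 0.09, 14.0))
a, b, c = popt

residuals = w - drying_model(t, *popt)
r_squared = 1 - np.sum(residuals**2) / np.sum((w - w.mean())**2)
print(f"a={a:.3f}, b={b:.4f}, c={c:.2f}, R^2={r_squared:.4f}")
```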

The data logged during the experiments were processed directly in the SCADA system. In addition, data downloaded from log files can be loaded into application software packages such as Statistica, Matlab, Excel, etc., for further statistical and regression processing. Table 3 partially presents the results of experimental studies of the efficiency of a laboratory installation containing a single source of microwave power, according to the selected optimality criteria.

Table 3. The results of experimental studies.

#   Culture    Moisture, W, %  Moisture removal, ΔW, %  q, MJ/kg of evaporated moisture  Optimality criterion
1   Wheat      16              1.5                      7.3                              Maximum performance
2   Wheat      20              2                        4.6                              Maximum performance
3   Wheat      16              1                        6.2                              Minimum energy consumption
4   Wheat      20              1                        3.7                              Minimum energy consumption
5   Barley     16              1.5                      7.6                              Maximum performance
6   Barley     20              2                        4.8                              Maximum performance
7   Barley     16              1                        6.3                              Minimum energy consumption
8   Barley     20              1                        3.7                              Minimum energy consumption
9   Sunflower  9               1.5                      5.8                              Maximum performance
10  Sunflower  12              2                        3.9                              Maximum performance
11  Sunflower  9               1                        5.2                              Minimum energy consumption
12  Sunflower  12              1                        3.6                              Minimum energy consumption
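The specific energy consumption q in Table 3 relates the metered electrical energy to the mass of evaporated water. A small illustrative sketch of this bookkeeping is given below; the sample mass, moisture levels and metered energy are assumed numbers, not the paper's data.

```python
# Specific energy consumption of drying, q, in MJ per kg of evaporated water.
# The evaporated mass follows from the wet-basis moisture drop; the energy is
# the metered consumption of the unit. Numbers are illustrative.

def evaporated_water_kg(m_initial_kg, w1_pct, w2_pct):
    return m_initial_kg * (w1_pct - w2_pct) / (100.0 - w2_pct)


def specific_energy_mj_per_kg(energy_kwh, m_initial_kg, w1_pct, w2_pct):
    energy_mj = energy_kwh * 3.6   # 1 kWh = 3.6 MJ
    return energy_mj / evaporated_water_kg(m_initial_kg, w1_pct, w2_pct)


# Example: 80 kg of wheat dried from 16% to 14.5% moisture, 2.8 kWh metered.
print(round(specific_energy_mj_per_kg(2.8, 80.0, 16.0, 14.5), 1))  # ~7.2 MJ/kg
```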

Analysis of the experimental results shows a decrease in energy costs of up to 30% relative to drying in shaft grain dryers, although the costs remain higher than the corresponding indicators for active ventilation bins. The best indicators of reduced energy intensity and intensified drying were obtained when processing sunflower; this is due to its lower standard moisture content and the associated values of the dielectric parameters.
A new module and equipment structure were designed after evaluating the equipment structure and the temperature field parameters in the microwave grain processing chamber. The modified design reduces the uneven heating of grain during processing, which will increase productivity due to the intensification of heat and moisture exchange. In addition, modes with pulsed switching of the microwave power sources were considered. At the next stage, research was conducted on a unit containing six microwave power sources.

2.2 Experiment

The modes presented in Table 4 were considered in the experimental work on the microwave–convective module. The operating modes of the microwave power sources are: 0 – without using the microwave field; ½ – the microwave power sources work intermittently (implemented as 10 s with microwave treatment and 10 s without it); ¼ – the microwave power sources work intermittently (implemented as 5 s with microwave treatment and 15 s without it). In the course of the experimental studies, wheat drying curves were taken from an initial moisture content of 20% [10, 11]. Further, according to these data, the dependence of the rate of moisture removal on the tested operating mode of the unit was studied. Figure 1 shows the drying curves obtained as a result of the experimental studies. The current rate of moisture removal was also calculated from the current wheat moisture for each tested operating mode of the plant. As a result, the dependences of the energy consumption for drying (evaporation of 1 kg of moisture) of wheat on the implemented heat treatment mode were obtained.

Table 4. Operating modes of the microwave–convective unit.

Pos. (Fig. 1–4)            1    2    3    4    5    6    7    8    9
Microwave operation mode   0    0    0    1/4  1/4  1/4  1/2  1/2  1/2
Air temperature, Tair, °C  20   30   40   20   30   40   20   30   40

Fig. 1. Drying curves (moisture, W, %, versus drying time, τ, min).

Both the rate of moisture removal and the energy costs depend significantly on the operating mode of the equipment and on the current moisture content.

2.3 Results and Discussion

Although the current values of the energy consumption for moisture removal in the modes using microwave power may exceed those of the modes using heated air as a drying agent, the total energy consumption of these modes over the entire drying process is lower. Table 5 shows the average energy consumption for drying wheat from 20% to 14% moisture, obtained as a result of laboratory tests under these conditions. It is also worth taking into account that the use of pulsed modes of microwave exposure is comparable in effect to heating the drying agent.

Table 5. Average energy consumption for drying wheat from 20 to 14%.

Mode  Energy consumption for evaporation of 1 kg moisture, MJ/kg
1     6.17
2     6.8
3     8.57
4     6.1
5     7.59
6     4.64
7     4.36
8     3.74
9     3.6


These results make it possible to refine the developed models of equipment management and to implement processing control according to the specified optimality criteria.
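As a schematic illustration of such control, the averaged values of Table 5 can be used to select the operating mode under the minimum-energy criterion; ranking by the maximum-performance criterion would analogously use drying-time data, which are not reproduced here.

```python
# Choosing the operating mode of the microwave-convective unit by the
# "minimum energy consumption" criterion, using the averaged values of Table 5.

energy_mj_per_kg = {   # mode -> MJ per kg of evaporated moisture (Table 5)
    1: 6.17, 2: 6.80, 3: 8.57, 4: 6.10, 5: 7.59,
    6: 4.64, 7: 4.36, 8: 3.74, 9: 3.60,
}

best_mode = min(energy_mj_per_kg, key=energy_mj_per_kg.get)
print(best_mode, energy_mj_per_kg[best_mode])   # mode 9, 3.6 MJ/kg
```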

3 Conclusions

The following conclusions can be drawn from the results of the experiment:
1. The highest energy intensity is observed in the drying mode with pulsed microwave exposure of the suspended grain layer; it is 2.4 times higher than the consumption in the least energy-intensive mode.
2. It should be taken into account that drying in a fluidized layer has very limited application in the processing of cereals, and a significant amount of energy (about 50%) is spent on creating the fluidized layer.
3. When considering the energy intensity of drying from the point of view of profitability, it is necessary to take into account the cost of energy carriers and of the sources of generation (conversion) of energy.
4. The use of microwave fields makes it possible to intensify the drying process by 3–4 times in the moisture range close to the standard one.
5. It is advisable to apply this technology for drying grain at moisture levels close to the standard one (from 17% to 14% for wheat).

References
1. Agrawal, S., Raigar, R.K., Mishra, H.N.: Effect of combined microwave, hot air, and vacuum treatments on cooking characteristics of rice. J. Food Process Eng. e13038 (2019). https://doi.org/10.1111/jfpe.13038
2. Ames, N., Storsley, J., Thandapilly, S.J.: Functionality of beta-glucan from oat and barley and its relation with human health. In: Beta, T., Camire, M.E. (eds.) Cereal Grain-Based Functional Foods, pp. 141–166. Royal Society of Chemistry, Cambridge (2019)
3. Basak, T., Bhattacharya, M., Panda, S.: A generalized approach on microwave processing for the lateral and radial irradiations of various groups of food materials. Innov. Food Sci. Emerg. Technol. 33, 333–347 (2016)
4. Dueck, C., Cenkowski, S., Izydorczyk, M.S.: Effects of drying methods (hot air, microwave, and superheated steam) on physicochemical and nutritional properties of bulgur prepared from high-amylose and waxy hull-less barley. Cereal Chem. 97, 483–495 (2020). https://doi.org/10.1002/cche.10263
5. Izydorczyk, M.S.: Dietary arabinoxylans in grains and grain products. In: Beta, T., Camire, M.E. (eds.) Cereal Grain-Based Functional Foods, pp. 167–203. Royal Society of Chemistry, Cambridge (2019)
6. Pallai-Varsányi, E., Neményi, M., Kovács, A.J., Szijjártó, E.: Selective heating of different grain parts of wheat by microwave energy. In: Advances in Microwave and Radio Frequency Processing, pp. 312–320 (2007)
7. Ranjbaran, M., Zare, D.: Simulation of energetic- and exergetic performance of microwave-assisted fluidized bed drying of soybeans. Energy 59, 484–493 (2013). https://doi.org/10.1016/j.energy.2013.06.057


8. Smith, D.L., Atungulu, G.G., Sadaka, S., Rogers, S.: Implications of microwave drying using 915 MHz frequency on rice physicochemical properties. Cereal Chem. 95, 211–225 (2018). https://doi.org/10.1002/cche.10012
9. Intelligent Computing and Optimization. Proceedings of the 2nd International Conference on Intelligent Computing and Optimization 2019 (ICO 2019). Springer (2019). ISBN 978-3-030-33585-4
10. Vasilev, A.N., Budnikov, D.A., Ospanov, A.B., Karmanov, D.K., Karmanova, G.K., Shalginbayev, D.B., Vasilev, A.A.: Controlling reactions of biological objects of agricultural production with the use of electrotechnology. Int. J. Pharm. Technol. (IJPT) 8(4), 26855–26869 (2016)
11. Vasiliev, A.N., Goryachkina, V.P., Budnikov, D.: Research methodology for microwave-convective processing of grain. Int. J. Energy Optim. Eng. (IJEOE) 9(2), Article 1, 11 p. (2020). https://doi.org/10.4018/IJEOE.2020040101

Isolated Agroecosystems as a Way to Solve the Problems of Feed, Ecology and Energy Supply of Livestock Farming Aleksey N. Vasiliev(&) , Gennady N. Samarin , Aleksey Al. Vasiliev , and Aleksandr A. Belov Federal Scientific Agroengineering Center VIM, 1-st Institutsky passage, 5., Moscow, Russian Federation [email protected], [email protected], [email protected], [email protected]

Abstract. Livestock farming is one of the main energy consumers in agriculture, accounting for about 20% of all energy consumption, and half of this consumption (50%) falls on cattle farms. The energy efficiency of livestock production is about 16%. More than 75% of the energy is consumed to produce feed and is subsequently concentrated in animal waste. Improving the energy efficiency of livestock farming is therefore an important scientific and economic issue. The intensification of food production results in the development of monocultures, and this applies to all spheres of human economic activity. To resolve the contradiction between agricultural production and the laws of nature, great attention has been paid to the organization of agroecosystems. Maintaining the permanent functioning of agroecosystems requires significant energy consumption. The study of energy consumption in agroecosystems is one of the main research methods in ecology; therefore, the present paper considers the energy and environmental problems of livestock farming from this point of view. The study, carried out on the basis of the energy balance of a livestock complex, has shown that the energy efficiency of livestock production is not more than 16%. This factor greatly aggravates environmental issues. It is proposed to reduce the "energy capacity" of animal waste in order to increase the environmental safety and energy independence of livestock farming.

Keywords: Agroecological system · Ecology · Energy efficiency · Energy consumption · Livestock farming

1 Introduction

Livestock farming is one of the main energy consumers in agriculture, accounting for about 20% of all energy consumption; half of this consumption (50%) falls on cattle farms. According to Mindrin's study [1], 48 cal of total energy were consumed to produce 100 cal of product in 1928, 57 cal in 1950, 70 cal in 1960, and 86 cal in 1990. Over the century, the energy consumption required to produce a unit of product has almost doubled.


The current study considers the efficiency of this increase in energy consumption and possible ways to reduce it. These issues are analyzed and variants of production organization are considered.

2 Materials and Methods

When assessing the energy efficiency of production and determining the directions of its improvement, it is expedient to use the methodology of energy evaluation of products, i.e. to apply a universal physical unit whose exchange rate is constant, stable and clear. Many researchers agree that this is the energy unit. Energy is the only objective and universal measure of the value of any type of product, manufactured not only by man but also by nature, and this measure depends neither on supply and demand nor on price. The energy approach to the study of natural and production processes has a number of significant advantages and wider possibilities compared to other methods [2]. The approach is based on the need to take into account the objective laws of energy conversion within the system; energy acts as a universal measure that allows us to estimate both the value of material objects and the efficiency of production processes. Without the use of energy analysis, effective production management is considered impossible [3, 4].

One of the ways to implement this methodology is the environmental and energy analysis of farm functioning. The energy analysis of the functioning of geosystems has been studied since the beginning of the new century. In particular, an algorithm has been developed to study the economic activity of agroecosystems according to their energy assessment [5]. Other researchers have carried out thorough work in this direction [6], among them N.P. Mishurov [7]. In the work of F.S. Sibagatullin [8], a methodology is presented for calculating the energy content of each element included in livestock production. The energy content (E) is calculated on the basis of consumption in physical units, taking into account its energy equivalent, according to the formula [8]:

E = R_n \cdot I_n,   (1)

where R_n is the consumption of the resource in natural units (kg, hwt of feed units, person·h, kWt·h) and I_n is the energy equivalent of the resource [8]. An example of the calculation is given in Table 1. From the presented data it is clear that the largest amount of energy is consumed for feed production.


Table 1. The energy consumption to produce a unit of product [8]. The last three columns give the total energy consumption per unit of product (per hwt), MJ.

Index | Energy equivalent of a resource, MJ | Milk | Cattle live weight | Pig live weight
Feed, hwt of feed units | from 717 to 12995 | 1540.8 | 19226.0 | 17160.0
Electroenergy, kWt·h | 8.7 | 256.6 | 3241.6 | 358.4
Fuel, kg | 10 | 49.0 | 850.0 | 937.0
Specific quantity of equipment, kg | 105 | 367.5 | 2520.0 | 2940.0
Litter, hwt | 17.1 | 102.6 | 81.9 | –
Consumption of labour, person·h | 43.3 | 73.6 | 1095.5 | 982.9
Total | – | 2390.1 | 27015.0 | 22378.4
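To illustrate how formula (1) and Table 1 fit together (a sketch of ours based only on the tabulated totals; the variable names are not from the paper), the share of feed energy in the total energy per unit of product can be recomputed as follows:

// Feed energy and total energy per hwt of product, MJ (Table 1)
const perProduct = {
  milk: { feed: 1540.8, total: 2390.1 },
  cattleLiveWeight: { feed: 19226.0, total: 27015.0 },
  pigLiveWeight: { feed: 17160.0, total: 22378.4 },
};

for (const [product, e] of Object.entries(perProduct)) {
  console.log(product, (100 * e.feed / e.total).toFixed(1) + '%');
}
// prints about 64.5%, 71.2% and 76.7%, the feed shares discussed in the text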

An analysis of the share of energy consumed to produce the various types of products shows that, in milk production, 64.5% of the total energy consumption is accounted for by feed. According to [9], the energy consumed directly in the milking process is about 24% of the total energy consumption. For the production of cattle live weight, the share of feed energy is 71.2% of the total, and for pig live weight it rises to 76.6%. At the same time, direct consumption of electricity and fuel amounts to 5.8% of the total energy consumption for pig weight gain and 15.1% for cattle weight gain. Feed production is thus the main consumer of energy. In the work of T. Sch. Fuzella [10], energy efficiency is defined as the ratio of the energy flow at the system output to the energy flow at the system input. Using this approach [10], the energy consumption of the technological processes used to produce various products was estimated for the SPK "Nelyubino". Some results of this calculation are presented in Table 2; the energy flow at the system output was determined on the basis of energy equivalents. As in Table 1, the energy consumption structure is dominated by feed production, which accounts for 72.5% of the total energy.

Table 2. The estimation results of energy efficiency of livestock farming on the example of SPK "Nelyubino", TJ (10^10 J).

Initial data (type of consumption): 1991 | 1999 | 2004
Feed: 60.2 | 45.5 | 42.6
Infrastructure: 8.1 | 2.1 | 8.8
Electroenergy: 12.3 | 11.3 | 12.3
Agromachinery: 2.0 | 1.2 | 1.4
Fuel: 0.5 | 0.6 | 0.7
Total: 83.1 | 60.7 | 65.8

Output (type of products): 1991 | 1999 | 2004
Milk: 8.1 | 6.6 | 6.5
Beef: 4.8 | 3.2 | 2.7
Total: 12.9 | 9.8 | 9.2
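The energy efficiency figures quoted below follow directly from Table 2 as the ratio of the output to the input energy flow; a small sketch (ours, using only the tabulated values) makes the calculation explicit:

// Input and output energy flows of SPK "Nelyubino" (Table 2)
const years = {
  1991: { input: 83.1, output: 12.9 },
  1999: { input: 60.7, output: 9.8 },
  2004: { input: 65.8, output: 9.2 },
};

for (const [year, flows] of Object.entries(years)) {
  console.log(year, (flows.output / flows.input).toFixed(2)); // 0.16, 0.16, 0.14
}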


In feed production, the main share of energy is consumed in producing crop products, and this energy is automatically transferred to livestock products [11]. According to the data presented in the table, in 1991 there were 0.16 units of product in energy equivalent per unit of consumed energy; in 1999, 0.16 units; and in 2004, 0.14 units. The calculation results show that the highest energy efficiency of livestock production was 16%. For beef production, the energy efficiency was not higher than 6%; only 6% of the energy used for beef production was consumed efficiently, and 94% was lost. The problem of this lost energy is far more pressing than a simple calculation of energy consumption figures. Consuming 16% of the energy to produce livestock products, the farmer uses almost 84% of the energy inefficiently. The energy conservation principle must be taken into account in all sectors of agricultural activity, and it is necessary to find out where this energy is wasted. At the present stage of technological development, this part of the energy begins to transform the natural environment into which it is released, and the issue of energy inefficiency in production automatically becomes an environmental concern [12]. The issue is becoming acute for the management of global agricultural production.

Among the requirements that nature imposes on biological objects, ensuring maximum stability is of great importance. The community most stable to environmental deviations is a biological community with a maximum variety of individual characteristics. This requirement is practically inconsistent with the task of maximum productivity: maximum productivity leads to minimum diversity. The best way to achieve maximum productivity is to cultivate a monoculture; however, when a monoculture is cultivated, diversity is at zero and the monoculture is absolutely unstable. To ensure the stability of a monoculture, people must make additional efforts, i.e. increase energy consumption. The natural development and modification of biological objects in nature results in an increase in their diversity, which in turn results in energy dissipation. The organized use of a biological community, the purpose of which is to seize the obtained product, reduces dissipation and hence the stability of the biological system. Thus we can conclude that any human activity that damages the balance of diversity conflicts with the laws of nature and results in environmental problems.

The development of agroecosystems [13] has turned out to be one of the ways to resolve this global contradiction between humanity and the biological community. An agroecosystem is a territory organized by man in which a balance is to be ensured between obtaining products and returning them to the fields, providing an organized circulation of organic substances. Rationally organized agroecosystems necessarily include pastures, livestock complexes and arable land, though even a perfectly organized agroecosystem cannot provide a complete cycle of substances, because part of the mineral substances is removed with the yield. The imbalance is eliminated by applying the necessary fertilizers. Because of this, agroecosystems are extremely unstable and incapable of self-regulation, and the additional energy resources introduced by man must ensure the stability of the agroecosystem. Experience has shown that agroecosystems in which grain crops prevail cannot exist for more than one year. In the case of a monoculture or perennial grasses, the agroecosystem can break down after 3–4 years.


There are several ways to organize livestock agroecosystems. With free-range grazing of cattle, an extensive option is implemented in which the consumption of anthropogenic energy is at a minimum level; energy is consumed only to support the cattlemen and for the primary processing of livestock products. An intensive way to implement an agroecosystem involves production at large complexes, while feed is obtained through high energy input in the fields of the agroecosystem. Part of the manure is brought to the fields, but before application it must be processed, which requires additional energy. The amount of manure obtained can be larger than is necessary for incorporation into the soil, and the environmental problem again remains unsolved.

Results and discussion. Using livestock farming as an example, we consider the management of such a system and the effect of the components of the management system on the environmental friendliness of production [14, 15]. Figure 1 presents a structural diagram that allows this issue to be discussed in detail.

Fig. 1. Structural management scheme of agroecosystem ‘livestock farming’.

The main object of management is the livestock complex. The species of animal is not important, since the control scheme has the same structure. Several parameters are taken as controlled quantities: the volume of output (P), in tons; the cost of production (C), in dollars per ton; the volume of production in energy equivalent (Ee), J; and the production waste (W), in units of energy, J. The use of the energy equivalent Ee as a controlled quantity is logical, since the study of energy flows in ecosystems is one of the main research methods in ecology. To implement the technological production processes (Fig. 1), complexes are used that need a general energy supply (Et) and an energy supply for feed production (Ef). The sources of energy I provide the common energy supply, which is divided into indirect (Ei) and direct (Ed) sources. Indirect energy supply is the energy, given in energy units, consumed on buildings and structures, equipment and machinery.


Direct energy consumption includes the cost of fuel and electricity used directly in the technological processes to produce finished products. The sources of energy II are represented by feed (Efe), which is obtained in the second element of the agroecosystem, i.e. plant growing. Initially, the volume of livestock production fully corresponds to the existing arable land and is provided with a balanced crop rotation. The system of mechanisms changes the energy flow (Efl) to implement the technological processes in the livestock complex. Production results are evaluated using a monitoring system, and the obtained data are compared with a given indicator (Pp). Depending on the result of this comparison (ΔP), the control actions are corrected and updated. Deviations of the controlled values from the given values can occur due to such disturbing effects as changes in climate parameters, feeding rations, animal age and weight, and other factors affecting the physiological state and productivity of the animals.

The livestock agroecosystem presented in the form of a structural diagram is unstable, like any agroecosystem. From the point of view of automatic control theory, a system is considered unstable if, due to disturbing effects, it deviates from the given value and cannot return to its original state by itself. We have considered the consequences of a loss of stability of this system for each controlled value. For the volume of production (P), a loss of stability means an irrevocable loss of animal productivity. This can occur due to a significant disturbing effect (e.g. an epidemic), as well as a lack of regulatory measures (Efl), i.e. when appropriate sanitary measures and vaccinations are not provided during the epidemic. To ensure the stability of the cost price (C), restrictions on energy consumption are required. According to the data in Table 1 and Table 2, it is necessary to reduce the energy consumption for feed production (Efe); however, with present-day technologies this can result in a decrease in the production volume (P). Therefore, at present it is only possible to introduce restrictions on the amount of consumed energy [16].

It might seem that the stability of any controlled quantity is fully ensured by the energy supply of the technological processes, but even for such a controlled quantity as the volume of production this thesis is not correct. The current study pays particular attention to the analysis of the system through equivalent energy consumption. For the equivalent energy consumption (Ee), the energy efficiency of the system should strive for 1. The control system must reduce the energy flows Rt and Ee while simultaneously increasing the volume of output and its energy "cost"; in this case the "energy cycle" occurs without significant, uncontrolled accumulation of energy in production waste. The innovative plans that are being actively discussed and adopted take the exact opposite position: indirect energy consumption is to be significantly increased to produce products (e.g. by developing and using satellite systems for precision farming, robotic systems, etc.), and the increase in such expenses can be justified only by improved product quality, with a reduced content of chemicals and impurities in the final product. At the same time, direct energy consumption is expected to decrease.


As a result, another contradiction is created: to solve environmental problems, the indirect costs of production are increased and the energy efficiency of production is reduced, which increases the amount of energy waste and makes the environmental problems even more urgent. Thus, on the one hand, the problem of the sustainability of the livestock agroecological system is being solved by increasing the energy of regulatory effects; on the other hand, serious prerequisites for an environmental disaster are simultaneously being created.

One of the ways to address this concern is to use waste as an energy carrier. There is so much energy in agricultural waste that, if it is recovered in the form of heat, fuel and electricity, it will be more than enough for several cycles of animal production [17]. Energy recovery from organic and solid domestic waste is currently being successfully implemented in various types of bioenergy equipment (biogas plants, generators operating on mixed fuels obtained from organic waste, pyrolysis units, etc.), and this issue has been successfully studied by a wide range of researchers worldwide [18]. Technologies and equipment are being improved, which allows us to state that bioenergy is able to ensure the energy independence of the agro-industrial sector. Bioenergy is commonly considered an element of renewable energy, but its main task is to prevent an environmental disaster caused by the energy stored in waste. It should be noted that simply burning waste to release thermal energy does not improve the environmental situation: this energy remains unused, only its form changes, and it does not return to the livestock production process. The use of energy stored in waste reduces the share of hydrocarbons and gas in the direct energy consumed to produce livestock products and thus increases the stability of the livestock agroecosystem with respect to cost price. However, the energy efficiency of the system can increase by only 10–15%; more than 80% of the energy consumed to produce livestock products is still wasted. The need to increase livestock production will require new energy costs for feed, which will result in an even greater decrease in energy efficiency and will lead to instability of the system with respect to cost price. The issue can be solved by the use of non-standard solutions in the energy supply of livestock farming.

Another way to increase the energy efficiency of livestock farming is to change the structure of the agroecosystem. As Fig. 1 shows, "the sources of energy I" and "the sources of energy II" are external to the agroecosystem 'livestock farming'. Livestock farming largely (more than 75%) depends on the 'external' energy of crop production, and current trends in agriculture (precision farming) show that the share of indirect energy consumption for feed production will grow constantly and significantly. It is at this point that the energy cycle must be broken: the share of energy consumption for feed production must be reduced. Therefore, technologies that provide feed for the livestock agroecosystem from the plant-farming agroecosystem should not be developed further. This is possible by establishing so-called "isolated" agroecosystems, in which the external energy influx into the agroecosystem is minimal and animal husbandry meets the maximum of its feed requirements from its own resources. Most of the necessary technologies already exist and are undergoing serious production testing. One such technology is the production and use of microalgae feed [19]. Microalgae are actively used as feed additives in livestock and poultry farming, as they increase animal immunity, weight, fertility and the survival of young animals.


In poultry, this results in an increase in egg production and egg size. To implement this technology, farms specializing in cattle and poultry farming provide themselves with algal water pools in which livestock waste is utilized; as a result, 40% of the nitrogen from the wastewater re-enters the algae biomass and is eaten by the animals. Another technology that realizes the potential of livestock farming to produce feed is the growing of fly larvae on organic waste, as they multiply quickly: the larvae's weight increases by 300–500 times within a week. The biomass from a pair of flies and their progeny, with full realization of their genetic potential, will exceed 87 tons by the end of a year, i.e. it will be equal to the weight of six elephants [20]. The biomass of housefly larvae is a complete protein feed for pigs, calves, poultry, fur animals and fish. It contains 48–52% protein, 7–14% fat, 7–10% fiber, 7% BEV, 11–17% ash, as well as such biologically active substances as vitamins, ecdysone, etc. The processing of organic waste that is environmentally unfriendly and unsuitable for use in crop cultivation technologies can produce fertile humus, whose application as a soil for growing feed under protected ground will meet the needs of livestock farming. Organic waste of livestock farming should therefore not be considered an energy raw material: the use of waste as an energy carrier would reduce the energy intensity of livestock farming and increase its energy efficiency by no more than 15%. The issue can be radically solved by using organic materials to produce highly nutritious feed in closed systems combined into an isolated agroecological system. In this case, only one restriction must be fulfilled: the volume of the livestock and its productivity (manufactured products and production waste) should provide the maximum possible amount of the feed needed by the livestock. Ideally, the chain 'feed → animals → technological process of production → waste → feed' should require minimal energy supply from the outside. Renewable energy sources acquire a special role as sources of direct energy supply (Ed), as shown in Fig. 1; their use can increase the share of capital costs but significantly reduces the environmental problems of energy supply.

3 Conclusions

Analyzing production efficiency through energy supply and consumption allows us to identify and study the structural and functional relations between the components of agricultural systems, as well as to study the dynamics of the effect of various energy sources on the operation of these systems. The energy efficiency of livestock production is about 16%; more than 75% of the energy is consumed to produce feed and is subsequently concentrated in animal waste. One of the main ways to increase the energy efficiency of livestock farming is to change the structure of the agroecosystem. This should be implemented by establishing "isolated" agroecosystems, in which technologies for processing the organic waste of livestock production compensate for the energy consumed to produce feed for livestock farming. It is inexpedient to use the organic waste of livestock farming as a raw material for energy.


The use of microalgae production technologies and of fly larvae for processing livestock waste can be efficiently applied to produce high-protein animal feed and to reduce the energy consumption of livestock products. It is reasonable to use renewable energy sources to compensate for the direct energy costs in the production of livestock products.

References
1. Mindrin, A.S.: Energoekonomicheskaya ocenka sel'skohozyajstvennoj produkcii [Energy-economic assessment of agricultural products], 187 p. (1987)
2. Bulatkin, G.A.: Analiz potokov energii v agroekosistemah [Analysis of energy flows in agroecosystems]. Vestnik Rossijskoj Akademii Nauk 82(8), 1–9 (2012)
3. Perez-Neira, D., Soler-Montiel, M., Gutierrez-Pena, R., et al.: Energy assessment of pastoral dairy goat husbandry from an agroecological economics perspective. A case study in Andalusia (Spain). Sustainability 10 (2018). Article number: 2838
4. Guzman, G.I., de Gonzalez, M.G.: Energy efficiency in agrarian systems from an agroecological perspective. Agroecology Sustain. Food Syst. 39, 924–952 (2015)
5. Pozdnyakov, A.V., Semenova, K.A., Fuzella, T.Sh.: Energeticheskij analiz funkcionirovaniya agroekosistem v usloviyah estestvennogo nasyshcheniya – pervye rezul'taty [Energy analysis of the functioning of agroecosystems in conditions of natural saturation – first results]. Uspekhi sovremennogo estestvoznaniya (2), 124–128 (2018)
6. da Silva, N.F., da Costa, A.O., Henriques, R.M., Pereira, M.G., Vasconcelos, M.A.F.: Energy planning: Brazilian potential of generation of electric power from urban solid wastes – under "waste production liturgy" point of view. Energy Power Eng. 7(5), 193 (2015)
7. Mishurov, N.P.: Metodologicheskie osnovy energeticheskoj ocenki proizvodstva moloka [Methodological foundations of energy assessment of milk production]. Tekhnika i oborudovanie dlya sela (5), 16–19 (2017)
8. Sibagatullin, F.S., Sharafutdinov, G.S., Shajdullin, R.R., Moskvichyova, A.B.: Bioenergeticheskaya ocenka i osnovnye puti snizheniya energoemkosti proizvodstva produkcii zhivotnovodstva [Bioenergy assessment and main ways to reduce energy intensity of livestock production]. Uchenye zapiski Kazanskoj gosudarstvennoj akademii veterinarnoj mediciny im. N.E. Baumana 216, 295–302 (2013)
9. Rajaniemi, M., Jokiniemi, T., Alakukku, L., et al.: Electric energy consumption of milking process on some Finnish dairy farms. Agric. Food Sci. 26, 160–172 (2017)
10. Fuzella, T.Sh.: Energeticheskaya ocenka funkcionirovaniya agroekosistemy (na primere SPK "Nelyubino") [Energy assessment of the functioning of the agroecosystem (on the example of the SPK "Nelyubino")]. Vestnik Tomskogo gosudarstvennogo universiteta (326), 203–207 (2009)
11. Stavi, I., Lal, R.: Agriculture and greenhouse gases, a common tragedy. A review. Agron. Sustain. Dev. 33, 275–289 (2013)
12. Ghosh, S., Das, T.K., Sharma, D., Gupta, K., et al.: Potential of conservation agriculture for ecosystem services. Indian J. Agric. Sci. 89, 1572–1579 (2019)
13. Marks-Bielska, R., Marks, M., Babuchowska, K., et al.: Influence of progress in sciences and technology on agroecosystems. In: Geographic Information Systems Conference and Exhibition (GIS Odyssey), Trento, Italy, 4–8 September 2017, pp. 254–263 (2017)
14. Lachuga, Yu.F., Vasilyev, A.N.: Napravleniya issledovanij v bioenergetike [Research areas in bioenergy]. Vestnik Rossijskoj sel'skohozyajstvennoj nauki (2), 4–7 (2015)


15. Vasilyev, A.N.: Reshenie energo-ekologicheskih problem zhivotnovodcheskoj agroekosistemy [Solution of energy and environmental problems of livestock agroecosystem]. Tekhnologii i tekhnicheskie sredstva mekhanizirovannogo proizvodstva produkcii rastenievodstva i zhivotnovodstva (88), 19–25 (2016)
16. Lehtonen, H.: Evaluating adaptation and the production development of Finnish agriculture in climate and global change. Agric. Food Sci. 24, 219–234 (2015)
17. Mancini, F.N., Milano, J., de Araujo, J.G., et al.: Energy potential of waste in the state of Parana (Brazil). Brazilian Archives of Biology and Technology 62 (2019)
18. Aberilla, J.M., Gallego-Schmid, A., Adisa, A.: Environmental sustainability of small-scale biomass power technologies for agricultural communities in developing countries. Renew. Energy 141, 493–506 (2019)
19. Sui, Y., Vlaeminck, S.E.: Dunaliella microalgae for nutritional protein: an undervalued asset. Trends Biotechnol. 38, 10–12 (2020)
20. Kavran, M., Cupina, A.I., Zgomba, M., et al.: Edible insects – safe food for humans and livestock. In: Scientific Meeting on Ecological and Economic Significance of Fauna of Serbia, Belgrade, Serbia, 17 November 2016, vol. 171, pp. 251–300. Serbian Academy of Sciences and Arts (2018)

Laboratory-Scale Implementation of Ethereum Based Decentralized Application for Solar Energy Trading Patiphan Thupphae and Weerakorn Ongsakul(&) Department of Energy, Environment and Climate Change, Asian Institute of Technology, Khlong Nueng, Thailand [email protected]

Abstract. A decentralized application (DApp) is an application whose back-end runs on a distributed system of computing nodes. DApps are built on decentralized technology such as blockchain, and their advantages are security, transparency, and reliability. There are several use cases of DApps in many areas, such as prediction of potential trading gains on Augur, a sharing economy of computing power on Golem, and browsing, chatting, and payment on Status. However, these DApps are deployed without any details on how to implement them. This paper addresses this issue by presenting the implementation of solar energy trading. The Ethereum Blockchain, an open-source platform for DApps, is proposed and applied to solar energy trading. The token used for trading is created with the ERC20 token standard, and the wallet is deployed with Metamask. Transactions, assets, and participants are created with Ganache and tested with Truffle. Moreover, the trading algorithm is shown to check the correctness of the exchange between seller and buyer within the smart contract written in Solidity. Lastly, React, a JavaScript library for building user interfaces, is deployed as a front-end to let users interact with solar energy trading.

Keywords: Blockchain · Decentralized application · Solar energy trading

1 Introduction

Blockchain is one of the most progressive distributed technologies today. A blockchain is a decentralized digital database stored across the nodes of a network, known as Distributed Ledger Technology (DLT). The most popular example that uses a blockchain as its back-end operating system is Bitcoin, an innovative payment network and a new kind of money [1]. Moreover, blockchain can be applied in many areas, for example in hospitals for information records and payment [2]. In the financial system, blockchain is applied to make cross-border payments. Blockchain also has further applications such as supply chains, voting, and energy supply [3]. The decentralized application (DApp) plays an important role when applied together with blockchain technology. A DApp overcomes the limitations of a locally running program, whose performance is limited and which cannot respond to the requirements of many applications [4].


A DApp has four characteristic categories: open source, internal cryptocurrency support, decentralized consensus, and no central point of failure [4]. This study presents the implementation of a DApp using the Ethereum open-source platform in Sect. 2. Section 3 presents the implementation of the ERC20 token, Sect. 4 the implementation of energy trading, and Sect. 5 the deployment and experimental results.

2 Implement of DApp Using the Ethereum Open-Source Platform

The proposed energy trading was implemented using the Ethereum blockchain framework [5]. It includes the following three steps: (1) set up the Ethereum environment; (2) set up React; and (3) set up identities with Ganache and wallets with Metamask for the participants.

2.1 Set Up the Ethereum Environment

In this study, the Ethereum platform was selected to implement the experiment. Ethereum has many learning resources, such as CryptoZombies [6], Ethernaut [7], Remix [8], etc. A computer running the macOS 10.14.4 operating system with a 2.3 GHz Intel Core i5 was used. Prerequisites such as Node.js, node-gyp, Python 2.7.x, the Truffle framework, the Ganache truffle suite, Metamask (a crypto wallet), Web3.js, git, the Chrome web browser, and create-react-app were installed. Several packages were installed inside package.json, including 'apexcharts' (to render a candlestick chart), 'babel-polyfill' (to emulate a full ES2015+ environment), 'babel-preset-env' (to use the latest JavaScript without micromanaging syntax transforms), 'babel-preset-es2015', 'babel-preset-stage-2', 'babel-preset-stage-3' (arrays of Babel plugins), 'babel-register' (to automatically compile files on the fly), 'bootstrap' (front-end open-source toolkit), 'chai' (test-driven/behavior-driven development assertion library for Node), 'chai-as-promised' (asserting facts about promises), 'chai-bignumber' (assertions for comparing arbitrary-precision decimals), 'dotenv' (zero-dependency module that loads environment variables from a .env file into process.env), 'lodash' (taking the hassle out of working with arrays, numbers, etc.), 'moment' (parse, validate, manipulate, and display dates and times in JavaScript), 'openzeppelin-solidity' (library for secure smart contract development), 'react' (JavaScript library for building user interfaces), 'react-apexcharts' (create charts in React), 'react-bootstrap' (front-end framework rebuilt for React), 'react-dom' (provides DOM-specific methods), 'react-redux' (bindings for the predictable state container for JavaScript applications), 'react-scripts' (includes scripts and configuration), 'redux' (predictable state container for JS apps), 'redux-logger' (can replay problems in the app), 'reselect' (compute derived data), 'solidity-coverage', 'truffle' (testing framework and asset pipeline), 'truffle-flattener' (verify contracts developed on Truffle), 'truffle-hdwallet-provider' (to sign transactions) and 'truffle-hdwallet-provider-privkey' (to sign transactions for addresses derived from a raw private key string). Finally, 'web3' is a library that allows interacting with local or remote Ethereum nodes.
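The paper does not reproduce the project configuration itself; a minimal truffle-config.js that points the Truffle framework at the local Ganache chain on port 7545 (the port used throughout this study) might look like the sketch below. The directory layout and compiler version are our assumptions, not values taken from the paper.

// truffle-config.js – minimal sketch for a local Ganache setup
module.exports = {
  networks: {
    development: {
      host: '127.0.0.1', // Ganache GUI default host
      port: 7545,        // Ganache GUI default port, as used in this study
      network_id: '*',   // match any network id
    },
  },
  contracts_build_directory: './src/abis/', // assumption: keep the ABIs where React can import them
  compilers: {
    solc: {
      version: '0.5.0', // assumption: whichever solc version the contracts were written for
    },
  },
};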

2.2 Set Up React

This section shows an example of how to create a React app, how to connect React with the smart contract, and how to test it by printing to the console of the Chrome web browser. To set up React, all the dependencies listed in Sect. 2.1 in the package.json file were installed with ($ npm install). After that, the React project is created with $ create-react-app followed by the name of the project, for example ($ create-react-app dapp-project); then make sure Ganache is running. To start React, use ($ npm start), which automatically runs on port 3000 in the Chrome web browser. After that, go to index.js and import 'bootstrap/dist/css/bootstrap.css' from package.json. Users can experiment with App.js to build their own DApp. In this study, the DApp was designed using cards [9] and navs [10] from Bootstrap and the layout module from CSS flexbox [11]. Inside App.js, Web3.js [12] is used heavily, for example to connect to the blockchain, load the account, load the smart contract, load the smart contract account, and call smart contract functions. Inside App.js (see Fig. 1), web3.js is instantiated with ($ const web3 = new Web3(Web3.givenProvider || 'http://localhost:7545')) to connect to the given provider; in this study Metamask talks to the local blockchain on port 7545. Moreover, the function componentWillMount [13] is used to load the blockchain data. This function comes with the React component, which is an extension of the Component class of the App class. componentWillMount is a React lifecycle method, and the lifecycle diagram can be used as a cheat sheet.
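For reference, the entry point of a create-react-app project that imports the Bootstrap styles mentioned above and mounts the App component usually looks like the following sketch (standard boilerplate, not code reproduced from the paper):

// src/index.js – standard create-react-app entry point
import React from 'react';
import ReactDOM from 'react-dom';
import 'bootstrap/dist/css/bootstrap.css'; // Bootstrap styles imported in this study
import App from './App';

ReactDOM.render(<App />, document.getElementById('root'));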

class App extends Component {
  componentWillMount() {
    this.loadBlockchainData()
  }

  async loadBlockchainData() {
    // connect web3 to Metamask's injected provider, or fall back to the local Ganache node
    const web3 = new Web3(Web3.givenProvider || 'http://localhost:7545')
  }
}

Fig. 1. The code used to connect to the local blockchain.

After completing the step above, check that web3 has loaded by printing it to the console with ($ console.log("web3", web3)). The result is shown in Fig. 2.

Fig. 2. The result from loading web3 inside console.


The next step is to detect the network we are connected to by using ($ web3.eth.net.getNetworkType()) [14] (see Fig. 3), and to check the loaded value in the console with ($ console.log("network", network)). The result is shown in Fig. 4; in addition, you can switch to other networks in Metamask. To fetch the accounts of the network we are connected to, go to web3.eth.getAccounts [15] and use ($ web3.eth.getAccounts()) (see Fig. 3). The result returns an array when checked with ($ console.log("accounts", accounts)); in this study the account at index [0] is shown in the console. The token is loaded by importing the Token contract artifact and fetching it from the blockchain with ($ web3.eth.Contract) [16]. The network id is obtained with ($ web3.eth.net.getId()) [17] (see Fig. 3). The result of ($ console.log("token", token)) returns both the abi and the address (see Fig. 4). The next step is to use ($ methods.myMethod.call) [18] to get the total supply (see Fig. 5).

class App extends Component {
  componentWillMount() {
    this.loadBlockchainData()
  }

  async loadBlockchainData() {
    const web3 = new Web3(Web3.givenProvider || 'http://localhost:7545')
    const network = await web3.eth.net.getNetworkType()
    const accounts = await web3.eth.getAccounts()
    const networkId = await web3.eth.net.getId()
    // load the deployed Token contract from its ABI and its address on the current network
    const token = new web3.eth.Contract(Token.abi, Token.networks[networkId].address)
  }
}

Fig. 3. The code used to detect the network, accounts and network id, and to import the token.

Fig. 4. The console output from the code in Fig. 3.


class App extends Component {
  componentWillMount() {
    this.loadBlockchainData()
  }

  async loadBlockchainData() {
    const web3 = new Web3(Web3.givenProvider || 'http://localhost:7545')
    const network = await web3.eth.net.getNetworkType()
    const accounts = await web3.eth.getAccounts()
    const networkId = await web3.eth.net.getId()
    const token = new web3.eth.Contract(Token.abi, Token.networks[networkId].address)
    // read the token's total supply from the smart contract
    const totalSupply = await token.methods.totalSupply().call()
  }
}

Fig. 5. The code used to fetch the totalSupply.

Then the result should return the total supply, which we defined as 10^18 Wei, as shown in Fig. 6.

Fig. 6. Return of total supply.
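Because the contract stores its supply in Wei-like base units (10^18 per token), the raw value returned in Fig. 6 can be converted back to whole ET tokens with web3.utils.fromWei; the helper below is our illustrative sketch, not code from the paper:

// Convert the raw totalSupply (base units) into whole ET tokens
async function logTotalSupply(web3, token) {
  const raw = await token.methods.totalSupply().call();    // value in base units, as in Fig. 5
  console.log(web3.utils.fromWei(raw.toString(), 'ether')); // supply expressed in whole tokens
}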

2.3 Set Up Identities with Ganache and Wallets with Metamask for Participants

Ganache is a personal blockchain [19] that provides public keys, private keys, and test Ethereum coins. To set up Ganache, go to [19], download it and set it up. On the MNEMONIC tab there is a seed phrase that serves as a password; this mnemonic is used to fill in Metamask and log the local blockchain into the Metamask wallet. The Metamask wallet is an HD wallet that holds the Ethereum accounts and the amount of cryptocurrency. Metamask [20] is also a wallet interface that allows the user to manage multiple accounts. To set up Metamask, users can go to [20] in the Chrome web browser, download it and add it as a Chrome extension. Metamask works as an extension of web browsers such as Chrome and Firefox. When Metamask is running, the user can create a private network or connect to a public network, and can also import accounts into the Metamask wallet.
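When Metamask is installed, the page can also ask it explicitly for access to the user's accounts through the provider it injects into the browser. The sketch below uses the standard Metamask request (eth_requestAccounts) and is our illustration, not code from the paper:

// Ask Metamask to expose the user's accounts to the DApp
async function connectWallet() {
  if (window.ethereum) { // provider injected by Metamask
    const accounts = await window.ethereum.request({ method: 'eth_requestAccounts' });
    console.log('connected account:', accounts[0]);
    return new Web3(window.ethereum); // hand the injected provider to web3.js
  }
  window.alert('Metamask is not installed');
}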


3 Implement of ERC20 Token

The ERC20 token implements a standard API for use in smart contracts [21]. The functions of an ERC20 token are name, symbol, decimals, totalSupply, balanceOf, transfer, transferFrom, approve, and allowance. This study focuses on implementing these functions in the smart contract of the DApp. The flowchart in Fig. 7 represents the flow of the smart contract used to create the ERC20 token and to transfer it. To make the ERC20 token, the name, symbol, decimals, and totalSupply of the token first need to be set. In this study, Name = Energy Token, Symbol = ET, Decimals = 18 (1 Ether = 10^18 Wei) and totalSupply = 1,000,000. Then the balance of the deployer is set equal to totalSupply in order to test the transfer function. To execute the transfer function, its checks must return true: the function checks that the balance of the deployer is not less than the transfer value and that the receiver is not an invalid address. If these checks return true, the transfer is executed: the balance of the deployer is decreased and the balance of the receiver is increased by the transfer value. The last step of this function is to emit the information about the transfer.

[Flowchart of Fig. 7: Start → Set Name: Energy Token, Symbol: ET, Decimals: 18, TotalSupply: 1,000,000 tokens → BalanceOf: deployer = 1,000,000 tokens, receiver = 0 tokens → Transfer: check that balanceOfDeployer >= _value and that the recipient is valid; if true, balanceOfDeployer is decreased and balanceOfReceiver is increased by _value, and the transfer information (_from, _to, _value) is emitted → End]

Fig. 7. Flowchart of transfer function.
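The paper mentions that the contracts are tested with Truffle; a minimal Truffle test for the transfer flow of Fig. 7 could look like the sketch below (the contract name Token and the transferred amount are our assumptions based on Sect. 3):

// test/token.test.js – run with `truffle test` against the local Ganache chain
const Token = artifacts.require('Token'); // assumption: the ERC20 contract is named Token

contract('Token', ([deployer, receiver]) => {
  it('transfers ET tokens and updates both balances', async () => {
    const token = await Token.new();                 // the deployer receives the whole supply
    const amount = web3.utils.toWei('100', 'ether'); // 100 ET expressed in base units

    await token.transfer(receiver, amount, { from: deployer });

    const balance = await token.balanceOf(receiver);
    assert.equal(balance.toString(), amount);        // the receiver is credited with 100 ET
  });
});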


4 Implement Energy Trading

4.1 The User Uses ET Token to Do Both Sell Orders and Buy Orders

The user sets the amount of energy to trade on the new-order tab in terms of kilowatt-hours (kWh) and then sets the price of each kWh of energy; the DApp calculates the total price and shows it below. Figure 8 presents the functions used to create sell and buy orders. The variable tokenGet is the token for a buy order, while amountGet converts the amount to Wei [22], the smallest denomination of ether. The variable tokenGive is ether for a buy order, and amountGive is the variable that calculates the price in Wei. The function makeBuyOrder then calls the makeOrder function in the smart contract. The function returns an error if one of the variables is wrong and shows the pop-up 'There was an error!'. The makeSellOrder function works the opposite way: tokenGet is the ether for a sell order, amountGet calculates the price in Wei, tokenGive is the token for a sell order, and amountGive converts the amount to Wei.

export const makeBuyOrder = (dispatch, exchange, token, web3, order, account) => {
  const tokenGet = token.options.address
  const amountGet = web3.utils.toWei(order.amount, 'ether')
  const tokenGive = ETHER_ADDRESS
  const amountGive = web3.utils.toWei((order.amount * order.price).toString(), 'ether')
  exchange.methods.makeOrder(tokenGet, amountGet, tokenGive, amountGive).send({ from: account })
    .on('transactionHash', (hash) => {
      dispatch(buyOrderMaking())
    })
    .on('error', (error) => {
      console.error(error)
      window.alert(`There was an error!`)
    })
}

export const makeSellOrder = (dispatch, exchange, token, web3, order, account) => {
  const tokenGet = ETHER_ADDRESS
  const amountGet = web3.utils.toWei((order.amount * order.price).toString(), 'ether')
  const tokenGive = token.options.address
  const amountGive = web3.utils.toWei(order.amount, 'ether')
  exchange.methods.makeOrder(tokenGet, amountGet, tokenGive, amountGive).send({ from: account })
    .on('transactionHash', (hash) => {
      dispatch(sellOrderMaking())
    })
    .on('error', (error) => {
      console.error(error)
      window.alert(`There was an error!`)
    })
}

Fig. 8. The functions that create sell and buy orders.

4.2 Trading

In the smart contract, the trading function, which is an internal function, lives inside the fillOrder function. Here, for _tokenGet, the balance of msg.sender (the person who fills the order) is set equal to their balance minus amountGet, while for the user (the person who created the order) the balance is set to their balance plus amountGet. For _tokenGive, the balance of the user who created the order is reduced by amountGive, and this amountGive is added to msg.sender, the person who fills the order. Moreover, the fee amount is paid by the person who fills the order and is taken out of the amountGet, in this case from msg.sender.


In this work, the fee percent is set to 10%, and the fee is added to the amountGet charged to msg.sender. The feeAccount balance is updated with the feeAmount (see Fig. 9).

function _trade(uint256 _orderId, address _user, address _tokenGet, uint256 _amountGet, address _tokenGive, uint256 _amountGive) internal {
    // fee paid by the order filler (msg.sender), computed as feePercent of _amountGive
    uint256 _feeAmount = _amountGive.mul(feePercent).div(100);
    tokens[_tokenGet][msg.sender] = tokens[_tokenGet][msg.sender].sub(_amountGet.add(_feeAmount));
    tokens[_tokenGet][_user] = tokens[_tokenGet][_user].add(_amountGet);
    tokens[_tokenGet][feeAccount] = tokens[_tokenGet][feeAccount].add(_feeAmount);
    tokens[_tokenGive][_user] = tokens[_tokenGive][_user].sub(_amountGive);
    tokens[_tokenGive][msg.sender] = tokens[_tokenGive][msg.sender].add(_amountGive);
    emit Trade(_orderId, _user, _tokenGet, _amountGet, _tokenGive, _amountGive, msg.sender, now);
}

Fig. 9. Trade function in smart contract.

5 Deployment and Experimental Result

The front-end of the solar energy trading DApp has been designed as shown in Fig. 10. This user interface has been deployed on localhost:3000. In this study, suppose user 1 makes a buy order with a total payment amount of 1 ether and sends it to user 2. Figure 11 represents the transfer of the total payment amount from user 1 to user 2 in the terminal interface (with 1 Ether = 1 × 10^18 tokens). Figure 12 shows the transaction between users 1 and 2 on the Ganache personal Ethereum blockchain. User 1 and user 2 have the addresses 0xb877dCcB80F27b83E4f863c41f050f18FfAEcb9b and 0x12e622A7c90CEfF482Fc79ADe96a3AcD17C9F282 respectively. The exchange contract address is 0x0FDA0BA4c75c3552A42B9877c9b48fC6fddc022D.

Fig. 10. User interface of Solar energy trading.


Fig. 11. Transferring the total payment amount from user 1 to user 2.

Fig. 12. Transaction from user 1 on Ganache personal Ethereum blockchain.
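The ether movement shown in Figs. 11 and 12 can also be checked programmatically against the Ganache chain. The sketch below (the account addresses are those reported above; everything else is our illustration) simply prints the current ether balance of both users:

// Print the ether balances of the two experiment accounts on Ganache
const Web3 = require('web3');
const web3 = new Web3('http://localhost:7545');

const users = [
  '0xb877dCcB80F27b83E4f863c41f050f18FfAEcb9b', // user 1
  '0x12e622A7c90CEfF482Fc79ADe96a3AcD17C9F282', // user 2
];

(async () => {
  for (const address of users) {
    const wei = await web3.eth.getBalance(address);
    console.log(address, web3.utils.fromWei(wei, 'ether'), 'ETH');
  }
})();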

6 Conclusion

The increase in solar PV and the progress of blockchain technology play an important role in the energy sector. The future market will become a peer-to-peer market in the microgrid, and excess energy will generate profit for the prosumer. This paper presents a decentralized application based on the blockchain technology framework. Technical details are provided on setting up the Ethereum blockchain, creating the token using the ERC20 standard, the dependencies useful for setting up the DApp, and part of the trading logic implemented in the smart contract.

References
1. Bitcoin Project: Bitcoin is an innovative payment network and a new kind of money. MIT license (2009–2020). https://bitcoin.org/en/
2. Tasatanattakool, P.: Blockchain: challenges and applications. In: International Conference on Information Networking (ICOIN), Thailand (2018)
3. Xu, X.: Architecture for Blockchain Applications. Springer (2019)
4. Cai, W., et al.: Decentralized applications: the blockchain-empowered software system. IEEE Access 6, 53019–53033 (2018)
5. Ethdotorg: Ethereum.org, Stiftung Ethereum. https://ethereum.org/en/
6. Cleverflare: Learn to Code Blockchain DApps By Building Simple Games, Loom. https://CryptoZombies.io
7. Santander, A.: Hello Ethernaut, OpenZeppelin. https://ethernaut.openzeppelin.com
8. Remix – Ethereum IDE. https://remix.ethereum.org
9. Team, B.: Bootstrap, MIT. https://getbootstrap.com/docs/4.0/components/card
10. Team, B.: Bootstrap, MIT. https://getbootstrap.com/docs/4.0/components/navs
11. W3Schools: W3schools.com, W3.CSS. https://www.w3schools.com/Css/css3_flexbox.asp
12. Sphinx: web3.js, Core team (2016). https://web3js.readthedocs.io/en/v1.2.11/


13. OpenSource, F.: React.Component, Facebook Inc. (2020). https://reactjs.org/docs/react-component.html
14. Sphinx: Web3.js, Core team (2016). https://web3js.readthedocs.io/en/v1.2.9/web3-eth.html#net
15. Sphinx: Web3.js, Core team (2016). https://web3js.readthedocs.io/en/v1.2.9/web3-eth.html#getaccounts
16. Sphinx: Web3.js, Core team (2016). https://web3js.readthedocs.io/en/v1.2.9/web3-eth-contract.html#eth-contract
17. Sphinx: Web3.js, Core team (2016). https://web3js.readthedocs.io/en/v1.2.9/web3-eth-net.html?highlight=getId
18. Sphinx: Web3.js, Core team (2016). https://web3js.readthedocs.io/en/v1.2.9/web3-eth-contract.html?highlight=method#methods-mymethod-call
19. McVay, W.: Truffle suite, Truffle Blockchain group (2020). https://www.trufflesuite.com/ganache
20. Davis, A.: METAMASK, ConsenSys Formation (2020). https://metamask.io
21. Buterin, V., Vogelsteller, F.: Ethereum improvement proposals, EIPs (2015). https://eips.ethereum.org/EIPS/eip-20
22. Sphinx: Ethereum Homestead, Core team (2016). https://ethdocs.org/en/latest/ether.html

Solar Module with Photoreceiver Combined with Concentrator Vladimir Panchenko1,2(&) and Andrey Kovalev2 1

Russian University of Transport, Obraztsova st. 9, 127994 Moscow, Russia [email protected] 2 Federal Scientific Agroengineering Center VIM, 1st Institutsky passage 5, 109428 Moscow, Russia [email protected]

Abstract. The paper considers the design of a solar photovoltaic air-cooled module with a paraboloid type concentrator. The photovoltaic converters are located on the surface of the concentrator, which ensures their cooling through a metal radiator whose functions are performed by the solar concentrator itself. At the same time, the profile of the solar radiation concentrator provides uniform illumination of the photovoltaic cells, which favorably affects their efficiency. It is advisable to use high-voltage matrix silicon photovoltaic solar cells as the photovoltaic converters, since they have high electrical efficiency and maintain it at a high concentration of solar radiation and under heating by concentrated solar radiation.

Keywords: Solar energy · Solar concentrator · Silicon photovoltaic converters · Air heat sink · Uniform illumination · Profile · Efficiency

1 Introduction

The use of solar radiation concentrators can reduce the number of photovoltaic converters, which favorably affects the cost of the installation and of the electricity it produces [1–3]. However, when photovoltaic cells convert concentrated solar radiation they heat up significantly, as a result of which their electrical efficiency decreases [4]; this indicates the need for cooling [5–10]. When planar photovoltaic converters operate in a concentrated solar flux, their current-voltage characteristic acquires a triangular shape, which indicates a significant decrease in efficiency [11]. All planar solar cells have this property, but matrix high-voltage silicon photovoltaic converters do not lose electrical efficiency in a concentrated solar flux, owing to their structure [12, 13]. The use of a paraboloid type concentrator in the form of an air radiator, with such matrix photovoltaic converters located on its surface in its focal region, eliminates the need for an active cooling system. However, for their stable operation it is necessary to ensure uniform illumination of the entire surface of the photovoltaic converters, which requires a special geometric approach and design methods [14] for the profile of the solar radiation concentrator.



2 Creation of the Geometry of Working Profile of the Paraboloid Type Solar Radiation Concentrator

The developed method is proposed for calculating the profile of a paraboloid type concentrator that provides uniform illumination of the photovoltaic converters at a relatively high degree of solar radiation concentration and ensures stable electricity generation. On the surface of the developed solar radiation concentrator there are photovoltaic converters in thermal contact with its metal surface, which allows the heat generated by the solar radiation to be removed through the entire surface of the metal concentrator (Fig. 1). Thanks to this efficient heat sink, the photovoltaic converters do not overheat and operate in the nominal operating mode without losing their electrical efficiency.

Fig. 1. The design of the solar photovoltaic module, where the concentrator is also an air-cooled radiator for photovoltaic converters

Thanks to the use of silicon high-voltage matrix photovoltaic converters (Fig. 2), it becomes possible to obtain a high-voltage direct current (1000 V or more) at the output of a single solar module, as well as an increase in the efficiency of electrical conversion of solar radiation and, accordingly, a reduction in the cost of the module (per unit of electric power) and of the generated electric energy. Owing to the technology of sealing the photovoltaic converters with a two-component polysiloxane compound, the period during which the photovoltaic converters retain their rated power also increases.


Fig. 2. Silicon matrix high-voltage photovoltaic converters with a voltage of more than 1000 V

The developed method allows obtaining the necessary concentration of solar radiation on the surface of photovoltaic converters due to the calculation of the working profile of the solar radiation concentrator. Solar photovoltaic module (Fig. 3) consists of the paraboloid type concentrator 1, which in turn consists of different zones a – b, b – c, c – d. The zones under consideration provide a cylindrical focal region of concentrated solar radiation with sufficiently uniform illumination in the focal region – on the surface of the photovoltaic receiver 2 (silicon matrix high-voltage photovoltaic converters). Photovoltaic receiver 2 is made in the form of a truncated cone from commutated silicon high-voltage matrix photovoltaic converters of height d0, which are located on the reflective (front) surface of the concentrator in the zone b – c. Paraboloid type solar radiation concentrator for silicon high-voltage matrix photovoltaic converters is also the radiator of passive air cooling.

Fig. 3. Principle of operation and the scheme of the concentrator photovoltaic solar module


Solar radiation incident on the reflecting surface of the solar concentrator 1 is reflected from the surface of the concentrator of zones a – b and c – d in such a way that a sufficiently uniform illumination of the photoelectric receiver 2 is provided by concentrated solar radiation in the focal region, which in turn is located on the profile of the concentrator in the zone b – c. The profile of the reflecting surface of the solar concentrator X (Y) under consideration is determined by a system of various equations corresponding to the illuminance condition of the surface of the photoelectric receiver (silicon matrix high-voltage photovoltaic converters). The values of the X and Y coordinates of the working profile of the solar radiation concentrator in the zone a – b are determined by the following system of equations: 

X = 2 f \cdot \frac{1 + \tan\beta}{\cos\beta},   (1)

Y = \frac{X^2}{4 f},   (2)

(2 R - X)^2 = 4 f \cdot Y,   (3)

(2 R - X_c) = (2 R - X_b) + d_0 \sin\alpha_0,   (4)

X_b = 2 f \tan\beta_0 \left\{ 1 + \left[ 1 - \frac{Y_0}{f \tan^2\beta_0} \right]^{1/2} \right\},   (5)

Y_b = \frac{X_b^2}{4 f},   (6)

X_c = X_b - d_0 \sin\alpha_0,   (7)

Y_c = Y_b - h,   (8)

h = d_0 \sin\alpha_0,   (9)

\frac{d_0}{\sin\alpha} = \frac{Y_c - f}{\sin\delta \cdot \sin\varphi_0}.   (10)

The focal length of the solar concentrator f is calculated by the formula:

f = R \cdot \frac{1 - \tan\beta_0}{\cos\beta_0},   (11)

where β₀ is the angle between the sunbeam reflected from the surface of the concentrator profile in the zone a – b at the point with coordinates X_a and Y_a, coming into the focal region of the concentrator (on the surface of the photovoltaic photoreceiver) at the point with coordinates (2R − X_b), Y_b, and the parabola focal length f;


the angle β = β₀ · n/N varies from 0° to β₀, where the values of the coefficient n are selected from the series of integers n = 0, 1, 2, …, N; γ₀ is the angle between the sunbeam reflected from the surface of the concentrator profile in the zone a – b at the point with coordinates X_b and Y_b, coming into the focal region of the concentrator (on the surface of the photovoltaic photoreceiver) at the point with coordinates (2R − X_c), Y_c, and the parabola focal length f; α₀ is the angle of inclination of the working surface of the profile of the concentrator in the zone b – c; R is the maximum radius of the solar concentrator; the angle δ is determined by the expression δ = π/2 + α₀ − β; the angles α₀ and β₀ are selected in accordance with the boundary conditions. The distribution of the concentration of solar radiation K_{ab} on the surface of the photovoltaic photoreceiver, reflected from the upper surface of the working profile of the solar radiation concentrator (zone a – b), is calculated according to the formulas:

K_{ab} = \frac{R_{n+1}^2 - R_n^2}{\Delta d_n \cdot (R_{bc,n+1} + R_{bc,n})},   (12)

\Delta d_n = d_{n+1} - d_n,   (13)

R_n = X_n - R,   (14)

R_{bc,n} = X_{bc,n} - R.   (15)
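To make the use of these relations concrete, the following minimal sketch (Python; not part of the original paper) evaluates the focal length (11), the zone a – b profile (1)–(2) for β = β₀·n/N and the ring-by-ring concentration (12)–(15). The values of R, β₀ and N, as well as all variable and function names, are illustrative assumptions, and the formulas are used exactly as reconstructed above.

```python
import numpy as np

R = 0.2                    # maximum concentrator radius, m (assumed value)
beta0 = np.radians(30.0)   # boundary angle beta_0 (assumed value)
N = 200                    # number of steps of the angle beta

f = R * (1.0 - np.tan(beta0)) / np.cos(beta0)        # focal length, Eq. (11)

beta = beta0 * np.arange(N + 1) / N                  # beta = beta_0 * n / N
X = 2.0 * f * (1.0 + np.tan(beta)) / np.cos(beta)    # zone a-b profile, Eq. (1)
Y = X**2 / (4.0 * f)                                 # parabola, Eq. (2)

def concentration_ab(R_n, R_bc, d):
    """Ring-by-ring concentration K_ab on the receiver, Eqs. (12)-(15).

    R_n  -- radii X_n - R of the reflecting rings on the concentrator, Eq. (14)
    R_bc -- radii X_bc,n - R of the illuminated rings on the receiver, Eq. (15)
    d    -- coordinates d_n of the reflected rays on the receiver surface
    """
    dd = np.diff(d)                                  # delta d_n, Eq. (13)
    return (R_n[1:]**2 - R_n[:-1]**2) / (dd * (R_bc[1:] + R_bc[:-1]))
```

The sweep of β gives the profile points (X, Y) of zone a – b directly, while d_n and X_bc,n would follow from the ray geometry of the receiver.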

The values of the X and Y coordinates in the zone of the working profile of the concentrator c – d are determined using the system of equations:

Y_m = \frac{\dfrac{Y_c}{\tan\alpha_{m-1}} - \dfrac{Y_b}{\tan\alpha_m} - (X_b - X_c)}{1 - \dfrac{\tan\alpha_m}{\tan\alpha_{m-1}}},   (16)

X_m = X_c - \frac{Y_c - Y_m}{\tan\alpha_{0m}},   (17)

\tan\alpha_{m-1} = \frac{Y_b - Y_{m-1}}{X_{m-1} - (2R - X_c)},   (18)

\tan\alpha_m = \frac{Y_b - Y_m}{X_m - (2R - X_c)},   (19)

X_m = (2R - X_c) + \frac{Y_c - Y_m}{\tan\alpha_{m-1}},   (20)

\alpha_{0m} = \alpha_0 + \alpha_m,   (21)


X_d = 2f \tan\beta_0 \left\{ \left[ 1 + \frac{Y_0}{f \tan^2\beta_0} \right]^{1/2} - 1 \right\},   (22)

Y_h = Y_d - h, \qquad Y_d = \frac{X_d^2}{4f},   (23)

where α_{0m} is the angle of inclination of the working surface of the profile of the concentrator in the zone c – d; α_m is the angle between the sunbeam reflected from the surface of the profile of the concentrator in the zone c – d at the point with coordinates X_m and Y_m, coming to the focal region of the concentrator (on the surface of the photovoltaic receiver) at the point with coordinates X_b and Y_b, and the level of the coordinate Y_m; the values of the coefficient m are selected from the series of integers m = 0, 1, …, M. The distribution of the concentration of solar radiation K_{cd} on the surface of the photovoltaic receiver is calculated according to the formulas:

K_{cd} = \sum_{m=0}^{M} K_m = \sum_{m=0}^{M} \frac{R_{mn}^2 - R_{m(n+1)}^2}{\Delta d_n \cdot (R_{bc(n+1)} + R_{bc,n})},   (24)

R_{mn} = R_m - \Delta X_{mn},   (25)

R_m = X_m - R,   (26)

\Delta X_{mn} = \Delta X_m \cdot d_n,   (27)

\Delta X_m = X_m - X_{m-1}.   (28)

The presented systems of equations make it possible to calculate the coordinates of the working surface of the profile of the solar radiation concentrator in its various zones.
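As a complementary illustration of the bookkeeping in (24)–(28), the summation over m can be sketched as follows. This is only a sketch under the reconstructed relations above, with d_n treated as the ray coordinate on the receiver; all names are assumptions.

```python
import numpy as np

def concentration_cd(R_m, dX_m, d, R_bc):
    """Concentration K_cd summed over the zone c-d points, Eqs. (24)-(28)."""
    dd = np.diff(d)                        # delta d_n, Eq. (13)
    dX_mn = dX_m[:, None] * d[None, :]     # delta X_mn = delta X_m * d_n, Eq. (27)
    R_mn = R_m[:, None] - dX_mn            # Eq. (25), with R_m = X_m - R, Eq. (26)
    K_m = (R_mn[:, :-1]**2 - R_mn[:, 1:]**2) / (dd * (R_bc[1:] + R_bc[:-1]))
    return K_m.sum(axis=0)                 # Eq. (24): sum of K_m over m
```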

3 Results of Calculating the Profile of the Concentrator Paraboloid Type

Using the formulas presented above, the coordinates of the working profile of the paraboloid type solar concentrator were calculated, which ensures uniform illumination of the focal region (silicon matrix high-voltage photovoltaic converters) with concentrated solar radiation. Figure 4 shows the profile of the reflecting surface of the paraboloid type solar radiation concentrator with a maximum diameter of about 400 mm. Three different zones can be distinguished on the profile of the solar radiation concentrator, one of which has a flat shape and represents the landing surface of the photovoltaic converters. Photovoltaic solar modules of this kind can be installed together on one frame with a system for constant tracking of the position of the Sun.


Fig. 4. Working surface of the concentrator profile of the paraboloid type

For a better layout and optimal filling of the frame space, the concentrators of the solar modules can be squared and closely mounted to each other. Figure 5 on the left shows a three-dimensional model of the solar module with the profile of the solar concentrator calculated according to the developed method and providing a uniform distribution of concentrated solar radiation over the surface of the photoreceiver (silicon matrix high-voltage photovoltaic converters) in the focal region, which is presented in Fig. 5 on the right.

Fig. 5. Three-dimensional model of the solar module (on the left) and distribution of concentrated solar radiation over the surface of photoreceiver (on the right)

The distribution of illumination by concentrated solar radiation over the surface of the photoreceiver is relatively uniform: the concentration varies from 7 to 11 and averages about 9. The presented distribution will favorably affect the operation of the photovoltaic converters, providing illumination of the entire surface of the photovoltaic converters with


uniform concentrated solar radiation, due to which the output electric power of the solar module will remain at the nominal level. Thanks to the calculated profile of the paraboloid type solar concentrator, it becomes possible to create its three-dimensional solid-state model in a computer-aided design system. Figure 6 shows such a paraboloid type concentrator of solar radiation and its single component (petal), into which it can be divided for subsequent manufacture from reflective metal material.

Fig. 6. Three-dimensional model of a paraboloid type solar concentrator and its single component made of reflective metal

Based on the developed three-dimensional model, the subsequent manufacture of a paraboloid type solar concentrator from reflective metal material with the help of single components is possible. The number of single components may vary depending on the required manufacturing accuracy and the optical efficiency of the solar concentrator itself. The more single elements make up the solar radiation concentrator, the more accurate its manufacture and the greater its optical efficiency. However, with a large number of components the complexity of manufacturing also increases, in view of which other methods of manufacturing solar concentrators may be more relevant (the centrifugal method, the method of electroforming, the method of glass bending, the manufacture of a reflecting surface from flat curved mirrors) [14].

4 Conclusion

A photovoltaic solar module with a paraboloid type concentrator and a photoreceiver based on silicon high-voltage matrix photovoltaic converters located directly on the surface of the concentrator, which serves as a passive air-cooled radiator for them, has been developed. The solar radiation concentrator provides a fairly uniform illumination of the surface of the photovoltaic receiver. The use of silicon high-voltage matrix


photovoltaic converters makes it possible to increase the electrical efficiency in the conversion of concentrated solar radiation even when they are overheated. Excess heat is removed from the photovoltaic converters through the entire area of the solar radiation concentrator. The use of a two-component polysiloxane compound in the manufacturing process of the photovoltaic receiver allows extending the period of the rated power of the photovoltaic converters in the concentrated solar radiation flux. The concentrators of solar radiation of the solar modules can be squared to optimally fill the frame of the system for constant tracking of the position of the Sun, and can be manufactured using various methods and manufacturing technologies.

Acknowledgment. The research was carried out on the basis of financing the state assignments of the All-Russian Institute of the Electrification of Agriculture and the Federal Scientific Agroengineering Center VIM, on the basis of funding of the grant “Young lecturer of RUT” of the Russian University of Transport, on the basis of funding of the Scholarship of the President of the Russian Federation for young scientists and graduate students engaged in advanced research and development in priority areas of modernization of the Russian economy, direction of modernization: “Energy efficiency and energy saving, including the development of new fuels”, subject of scientific research: “Development and research of solar photovoltaic thermal modules of planar and concentrator structures for stationary and mobile power generation”.

References

1. Rosell, J.I., Vallverdu, X., Lechon, M.A., Ibanez, M.: Design and simulation of a low concentrating photovoltaic/thermal system. Energy Convers. Manage. 46, 3034–3046 (2005)
2. Nesterenkov, P., Kharchenko, V.: Thermo physical principles of cogeneration technology with concentration of solar radiation. Adv. Intell. Syst. Comput. 866, 117–128 (2019). https://doi.org/10.1007/978-3-030-00979-3_12
3. Kemmoku, Y., Araki, K., Oke, S.: Long-term performance estimation of a 500X concentrator photovoltaic system. In: 30th ISES Biennial Solar World Congress 2011, pp. 710–716 (2011)
4. Kharchenko, V., Nikitin, B., Tikhonov, P., Panchenko, V., Vasant, P.: Evaluation of the silicon solar cell modules. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 866, pp. 328–336 (2019). https://doi.org/10.1007/978-3-030-00979-3_34
5. Sevela, P., Olesen, B.W.: Development and benefits of using PVT compared to PV. Sustain. Build. Technol. 90–97 (2013)
6. Ibrahim, A., Othman, M.Y., Ruslan, M.H., Mat, S., Sopian, K.: Recent advances in flat plate photovoltaic/thermal (PV/T) solar collectors. Renew. Sustain. Energy Rev. 15, 352–365 (2011)
7. Kharchenko, V., Nikitin, B., Tikhonov, P., Gusarov, V.: Investigation of experimental flat PV thermal module parameters in natural conditions. In: Proceedings of 5th International Conference TAE 2013, pp. 309–313 (2013)
8. Hosseini, R., Hosseini, N., Khorasanizadeh, H.: An experimental study of combining a photovoltaic system with a heating system. In: World Renewable Energy Congress, pp. 2993–3000 (2011)


9. Panchenko, V., Kharchenko, V., Vasant, P.: Modeling of solar photovoltaic thermal modules. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 866, pp. 108–116 (2019). https://doi.org/10.1007/978-3-030-00979-3_11
10. Panchenko, V.A.: Solar roof panels for electric and thermal generation. Appl. Solar Energy 54(5), 350–353 (2018). https://doi.org/10.3103/S0003701X18050146
11. Kharchenko, V., Panchenko, V., Tikhonov, P., Vasant, P.: Cogenerative PV thermal modules of different design for autonomous heat and electricity supply. In: Handbook of Research on Renewable Energy and Electric Resources for Sustainable Rural Development, pp. 86–119 (2018). https://doi.org/10.4018/978-1-5225-3867-7.ch004
12. Panchenko, V.: Photovoltaic solar modules for autonomous heat and power supply. IOP Conf. Ser. Earth Environ. Sci. 317 (2019). 9 p. https://doi.org/10.1088/1755-1315/317/1/012002
13. Panchenko, V., Izmailov, A., Kharchenko, V., Lobachevskiy, Y.: Photovoltaic solar modules of different types and designs for energy supply. Int. J. Energy Optim. Eng. 9(2), 74–94 (2020). https://doi.org/10.4018/IJEOE.2020040106
14. Sinitsyn, S., Panchenko, V., Kharchenko, V., Vasant, P.: Optimization of parquetting of the concentrator of photovoltaic thermal module. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 1072, pp. 160–169 (2020). https://doi.org/10.1007/978-3-030-33585-4_16

Modeling of Bilateral Photoreceiver of the Concentrator Photovoltaic Thermal Module Vladimir Panchenko1,2(&), Sergey Chirskiy3, Andrey Kovalev2, and Anirban Banik4 1

Russian University of Transport, Obraztsova st. 9, 127994 Moscow, Russia [email protected] 2 Federal Scientific Agroengineering Center VIM, 1st Institutsky passage 5, 109428 Moscow, Russia [email protected] 3 Bauman Moscow State Technical University, 2nd Baumanskaya st. 5, 105005 Moscow, Russia [email protected] 4 Department of Civil Engineering, National Institute of Technology Agartala, Jirania 799046, Tripura (W), India [email protected]

Abstract. The paper describes the modeling of the thermal state and visualization of the operating mode of the bilateral photoreceiver of the concentrator photovoltaic thermal solar module. Based on the results obtained during the simulation, the geometric parameters of the components of the photoreceiver of the solar module are substantiated and selected. The developed design of the solar module will provide high electrical and thermal efficiency of the solar module. These types of concentrator photovoltaic thermal solar modules can operate autonomously or in parallel with the existing power grid to power consumers. Keywords: Solar energy · Solar concentrator · High-voltage matrix photovoltaic converters · Photovoltaic thermal solar module · Power supply

1 Introduction

Recently solar energy converters have been developing at a pace that outstrips the development of converters of other renewable energy sources [1–6]. Along with increasing the efficiency of photovoltaic converters, as the technology of their manufacture is being improved [7–9], the cogeneration method is also used to increase the overall efficiency of the solar module – when, along with electricity, the consumer receives thermal energy. Such solar modules can be of either a planar design [10] or a concentrator design [11], when various concentrators and reflectors of solar energy are used. Such devices must be manufactured with high accuracy, and their design is an important and difficult task [12]. Concentrator thermal photovoltaic solar plants generate electrical and thermal energy, while the temperature of the coolant can reach


higher temperatures compared to planar photovoltaic thermal modules. The design of such modules is complicated by the need to simulate the thermal state of the photoreceiver located in the focus of the concentrator, since at its high temperature the electrical efficiency of the photovoltaic converters decreases. Along with the design and manufacture of solar concentrators, an important role is played by the design and simulation of the photoreceivers of solar modules in both planar and concentrator designs [13]. In addition to the method for designing such modules, methods of modeling and visualizing the processes occurring in the photoreceivers of such solar photovoltaic thermal modules are also necessary.

2 Three-Dimensional Modeling of Bilateral Photoreceivers of Photovoltaic Thermal Solar Modules

The developed method for creating three-dimensional models of photoreceivers of solar photovoltaic thermal modules allows creating photoreceivers with unilateral and bilateral photoelectric converters, as well as with front, rear and two-sided heat removal (Fig. 1).

Fig. 1. Components of solar thermal photovoltaic modules of various designs and the model of the bilateral photoreceiver “Model 4”


Bilateral photovoltaic converters make it possible to create concentrator photovoltaic thermal solar modules where the photovoltaic converters are illuminated from two sides; in this case the number of photovoltaic converters used is halved, and given that the solar concentrator is also part of the installation, the savings are even more significant when compared with planar solar modules. Two-sided heat removal allows more efficient cooling of the photovoltaic converters, and the resulting heat increases the overall efficiency of the solar module. When creating a photoreceiver with bilateral photovoltaic converters and two-sided heat removal, the components of “Model 4” are used (Fig. 1), which consists of 7 different components. In concentrator solar photovoltaic thermal modules it is advisable to use not standard silicon planar photovoltaic converters, but silicon high-voltage matrix ones, which were originally developed for concentrator modules. The efficiency of high-voltage matrix photovoltaic modules increases when working in a concentrated solar flux, and its high value is maintained even when the concentration of solar radiation is more than 100 times. Moreover, the high electrical efficiency of the matrix photovoltaic converters is maintained with increasing temperature, while the efficiency of planar photovoltaic converters decreases significantly with increasing temperature. Silicon photovoltaic converters of this kind are used in “Model 4” as the bilateral electrogenerating component. By scaling the length of the high-voltage matrix photovoltaic converters, it is possible to achieve higher module voltages, and an increase in the concentration of solar radiation can proportionally increase the electric current and electric power of the entire module.

3 Modeling and Visualization of the Thermal State of the Bilateral Photoreceiver of the Concentrator Photovoltaic Thermal Module

After creating a three-dimensional model of the photoreceiver of the solar photovoltaic thermal module in the computer-aided design system, it is advisable to study this model in order to determine its thermal state in the Ansys finite element analysis system. As a result of modeling the thermal state, it is possible to analyze and draw a conclusion regarding the operating mode of the photoreceiver and then perform layer-by-layer optimization of the model components in order to achieve the maximum overall module efficiency (maximum thermal efficiency or maximum electric efficiency). As a result of modeling according to the developed method, it becomes possible to determine the temperature fields of the model and visualize the coolant flow [14]. As an example of modeling the thermal state of the bilateral photoreceiver of the concentrator photovoltaic module, we consider a receiver with matrix photovoltaic converters located in the focus of the concentrator and cooled by nitrogen (Fig. 2). The model of the photoreceiver in the Ansys finite element analysis system is presented in Fig. 2. Half of the photoreceiver is presented in view of its symmetry, to accelerate the modeling process in the finite element analysis system.


Fig. 2. Model of the bilateral photoreceiver and coolant movement

For calculation in the finite element analysis system, the photoreceiver model was divided into finite elements, as well as broken down into components representing various physical bodies. When modeling the thermal state in the finite element analysis system, the following parameters and module characteristics were adopted: module length 600 mm; thermophysical properties of substances: silicon – thermal conductivity 120 W/(m·K), heat capacity 800 J/(kg·K), density 2300 kg/m³; glass – thermal conductivity 0,46 W/(m·K), heat capacity 800 J/(kg·K), density 2400 kg/m³; rubber – thermal conductivity 0,15 W/(m·K), heat capacity 1800 J/(kg·K), density 1200 kg/m³; nitrogen (at atmospheric pressure 101300 Pa and 20 °C) – dynamic viscosity 17·10⁻⁶ Pa·s, thermal conductivity 0,026 W/(m·K), heat capacity at constant pressure 1041 J/(kg·K), density 1,182 kg/m³. At the next stage of modeling, the following parameters were adopted: specific heat release on the photovoltaic converter 2 W/cm²; gas temperature at the inlet 15 °C; gas inlet speed 10 m/s; gas pressure 10 atm; total gas flow: mass 0,066 kg/s, volumetric at operating pressure and temperature 5,50 l/s, volumetric at atmospheric pressure and a temperature of 20 °C 56,12 l/s. The thermal state of the photovoltaic converter is shown in Fig. 3 on the left; the temperature of the coolant and other structural components is shown in Fig. 3 on the right.
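For orientation, a simple bulk energy balance (not from the paper) relates these inputs to the expected mean heating of the nitrogen flow; the width of the heated strip is an assumed value, since it is not stated explicitly in the text.

```python
# Rough bulk energy balance for the nitrogen cooling regime described above.
q = 2.0e4        # specific heat release, W/m^2 (2 W/cm^2, from the text)
length = 0.6     # module length, m (from the text)
width = 0.03     # width of the heated strip, m (assumed for illustration)
m_dot = 0.066    # total nitrogen mass flow, kg/s (from the text)
cp = 1041.0      # nitrogen heat capacity at constant pressure, J/(kg*K)

Q = q * length * width       # heat to be removed, W
dT = Q / (m_dot * cp)        # mean temperature rise of the whole gas flow, K
print(f"Q = {Q:.0f} W, bulk coolant temperature rise ~ {dT:.1f} K")
```

A bulk rise of only a few kelvin is consistent with the simulation result below, where mainly the gas layer adjacent to the photovoltaic converter heats up while the core flow stays near the inlet temperature.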

Fig. 3. Thermal state of the photovoltaic converter (on the left) and temperature of the coolant and other structural components (on the right)

The temperature of the silicon photovoltaic converter ranged from 44 °C at the edges of the cell to 53 °C in the center of the cell. The temperature of the coolant in the central section of the model ranged from 15 °C (did not differ from the inlet coolant temperature) to 34 °C, which indicates the heating of the coolant layer adjacent to the photovoltaic converter and the removal of heat from it. By varying various parameters, such


as the concentration value (specific heat release on the photovoltaic converter), the heat carrier velocity and the temperature of the heat carrier at the inlet, the necessary temperature values of the photovoltaic converter and the heat carrier can be obtained. Thanks to the developed modeling method [14], a study was made of the bilateral photoreceiver of the solar photovoltaic thermal module (“Model 4”), as a result of which the temperature distributions, flows and coolant velocities were obtained, which made it possible to analyze and optimize the layer-by-layer structure of the model. High-voltage bilateral matrix photovoltaic converters are located in the focal region of the concentrator of the module. The bilateral photovoltaic thermal photoreceiver located in the focus of a solar radiation concentrator was subjected to simulation. The bilateral photoreceiver of the solar photovoltaic thermal module consists of 7 different components (Fig. 1), the layered structure of which is presented in Table 1. Modeling the thermal state in a finite element analysis system with obtaining an array of temperatures allows getting a more detailed three-dimensional picture of the thermal state of the module, in contrast to two-dimensional analytical modeling, and allows comparing the thermal state of all components together and of each separately for different parameters.

Table 1. Parameters of the components of the bilateral photoreceiver of the solar photovoltaic thermal module (“Model 4”). Columns: thickness, mm; component material; thermal conductivity, W/(m·K); density, kg/m³; dynamic viscosity, Pa·s; kinematic viscosity, m²/s; heat capacity, J/(kg·K); coefficient of thermal expansion, 1/K.

1. Bilateral electrogenerating – 0,2 mm; silicon; 148; 2330; –; –; 714; 2,54·10⁻⁶
2. Transparent sealing – 0,2 mm; polysiloxane; 0,167; 950; –; –; 1175; 100·10⁻⁶
3. Transparent insulating – 0,1 mm; polyethylene; 0,14; 1330; –; –; 1030; 60·10⁻⁶
4. Transparent heat removal – 5 mm; water; 0,569; 1000; 1788·10⁻⁶; 1,78·10⁻⁶; 4182; –
5. Transparent insulating heat sink – 0,3 mm; polyethylene; 0,14; 1330; –; –; 1030; 60·10⁻⁶
6. Transparent heat-insulating – 3; 5; 7 mm; air; 0,0244; 1,293; 17,2·10⁻⁶; 13,2·10⁻⁶; 1005; –
7. Transparent protective – 4 mm; tempered glass optiwhite; 0,937; 2530; –; –; 750; 8,9·10⁻⁶

Since the model of the photoreceiver is symmetric, the axis of symmetry is the middle of the bilateral electrogenerating component (silicon matrix high-voltage photoelectric converter) (Fig. 4 above), shown in longitudinal section (Fig. 4 below). The thicknesses of the components in the drawing are taken as preliminary.


Fig. 4. Bilateral electrogenerating component (silicon matrix high-voltage photoelectric converter) (above), model of bilateral module and module drawing shown in section (below)

When modeling the thermal state of the model of the photoreceiver, the influence of the concentration of solar radiation (3; 6 and 9 times), transparent heat-insulating component (the air gap) (3; 5 and 7 mm), as well as the flow rate of the coolant (0,5; 5 and 50 g/s) on the thermal state of the coolant is considered. The temperature of the bilateral electrogenerating component (silicon matrix high-voltage photoelectric converter) should be below 60 ºC, since when the photovoltaic converters are heated, their electrical efficiency gradually decreases. As a preliminary calculation, a part of the bilateral electrogenerating component is considered: a strip 10 mm wide, conventionally located in the middle part of the photovoltaic thermal photoreceiver, the length of this part being equal to the length of the entire module. Since the module is symmetrical with respect to the average horizontal plane, only its upper half is considered. The flow of the cooling agent is considered uniform throughout the cross section of the cooling cavity. Three variants of the module design are considered, differing in the height of the transparent heat-insulating component (air layer thickness) – 3 mm, 5 mm and 7 mm (Fig. 5).

Fig. 5. Three variants of the module design (height of the transparent heat-insulating component 3 mm, 5 mm and 7 mm)

The influence of the concentration of solar radiation on the thermal state of the module was studied – three cases were considered: a concentration of 3, 6 and 9 times for each side of the module. Taking into account the fraction of solar radiation converted into electrical energy, the heat flux is taken to be 2400 W/m2, 4800 W/m2 and 7200 W/m2.
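The quoted heat fluxes can be related to the concentration by a simple check (the direct irradiance and the electrically converted fraction used here are assumed values, not given in the paper):

```python
# Heat flux per illuminated side as a function of the concentration ratio.
E0 = 1000.0    # direct solar irradiance, W/m^2 (assumed)
eta_el = 0.2   # fraction of radiation converted to electricity (assumed)

for C in (3, 6, 9):
    q_heat = C * E0 * (1.0 - eta_el)
    print(C, q_heat)   # -> 2400.0, 4800.0, 7200.0 W/m^2, as used in the simulation
```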


For the case of water cooling with the temperature of 293 K, the thermal state was simulated for mass flow rates of 0,05 kg/s, 0,005 kg/s and 0,0005 kg/s. It was noted that at the mass flow rate of water of 0,05 kg/s, the water layer (transparent heat removal component) does not warm up to the entire thickness, therefore, the calculation for this flow rate was performed only for a design variant with a heat insulator thickness of 3 mm. To assess the effect of non-uniformity in the supply of the cooling agent (transparent heat removal component), a solid-state model of a quarter of the photoreceiver is constructed, shown in the Fig. 6. A design variant with a thickness of the transparent heat-insulating component (air layer thickness) equal to 3 mm is considered. The layers of transparent heat-insulating component and transparent heat removal component are divided by thickness into 10 layers of elements (Fig. 6). The components of the model with a small thickness are divided by thickness into 3 layers of elements.

Fig. 6. Solid-state model of a quarter of the photoreceiver

The symmetry planes of the module are the average vertical and horizontal planes. In the Fig. 7 on the left, the symmetry planes are marked in green. The coolant enters and exits through the central holes marked in green in the Fig. 7 on the right. The flow rate of the cooling agent is set at 0,205 kg/s and 0,0205 kg/s, which is equivalent to a flow rate of 0,05 kg/s and 0,005 kg/s for a part of the module with a width of 10 mm.

Fig. 7. Symmetry planes of the model (on the left) and coolant inlet (on the right)

As a result of the simulation, the temperature distributions of the components of the solar module model are obtained, as well as the velocity and flow line of the coolant (Figs. 8 and 9).


Fig. 8. Temperature distribution of the components of the model from the side of the inlet and outlet of the coolant (above) and the velocity of the coolant from the side of the inlet and outlet of the coolant (below)

Fig. 9. Temperature distribution of the components of the model along the entire length – top view (above) and coolant velocity along the entire length – top view (below)

Based on the temperature distributions, the temperature values of the various components of the model were determined and entered in Table 2 for further optimization of the thicknesses of the components. The quality of washing of the transparent radiator (transparent insulating component) and the uniformity of the flow lines of the coolant (transparent heat removal component) at various flow rates were also analyzed, as were the places of overheating of the photovoltaic converters (bilateral electrogenerating component) and of underheating of the coolant and its stagnation in the module. The presented figures (the result of three-dimensional modeling of the thermal state) carry the necessary information for the analysis and optimization of the thermal state of the components of the module and the quality of its cooling. As a result of the analysis of the temperature array of the “Model 4” components obtained in the process of modeling the thermal state of the three-dimensional model using the developed method, optimization was carried out according to the criterion of the change of the coolant temperature (heat removal component). With the same change in the main criterion (coolant temperature), the change in the total criterion is calculated. As a result of the design optimization, an air gap (transparent heat-insulating component) of 7 mm, a concentration of 6 times and a coolant flow rate of 0,5 g/s for the selected part of the module were selected. The number of models can be expanded with an increase in the ranges of the geometric characteristics of the other components and of the coolant flow.


Table 2. Temperature characteristics of the photoreceiver components and component optimization criteria

Air gap (transparent heat-insulating component), mm: 3 | 3 | 5 | 5 | 7 | 7
Mass flow rate of coolant (transparent heat removal component), g/s: 0,5 | 5 | 0,5 | 5 | 0,5 | 5
Glass temperature at the coolant inlet (transparent protective component), °C: 20 | 20 | 20 | 20 | 20 | 20
Glass temperature at the coolant outlet (transparent protective component), °C: 29 | 20 | 29 | 20 | 29 | 20
Change in criterion, % (priority of the third category – optimum minimum): 45 | 0 | 45 | 0 | 45 | 0
Air temperature in the gap at the coolant inlet (transparent heat-insulating component), °C: 20 | 20 | 20 | 20 | 20 | 20
Air temperature in the gap at the coolant outlet (transparent heat-insulating component), °C: 35 | 20 | 34 | 20 | 34 | 20
Change in criterion, % (priority of the fourth category – optimum minimum): 75 | 0 | 70 | 0 | 70 | 0
Temperature of photovoltaic converters at the coolant inlet (bilateral electrogenerating component), °C: 26 | 26 | 26 | 25 | 26 | 26
Temperature of photovoltaic converters at the coolant outlet (bilateral electrogenerating component), °C: 44 | 32 | 44 | 32 | 44 | 32
Change in criterion, % (priority of the second category – optimum minimum): 69 | 23 | 69 | 28 | 69 | 23
Temperature of the coolant at the outlet of the module (transparent heat removal component), °C: 38 | 22 | 38 | 23 | 38 | 21
Change in criterion, % (priority of the first (main) category – optimum maximum): 90 | 10 | 90 | 15 | 90 | 5
Change in the total criterion (maximum priority): −60,6 | −57,1 | −57,1

The optimization of the air gap and the flow rate of the coolant was carried out according to the criterion of the change of the coolant temperature (priority of the first category). The remaining criteria, by which the optimization did not occur, were of secondary importance, but also had a priority depending on their contribution to the heating of the coolant (the second category – coefficient 0,9, the third – 0,8, the fourth – 0,7 and the fifth – 0,6) (Table 2). Table 2 also indicates, for the changes in the criteria, whether the desired optimum is a minimum or a maximum, which is taken into account when summing up the secondary criteria: the smaller their change, the better the heating of the coolant.


If a change in the value of a secondary criterion reduces the main criterion (the priority of the first category – heating of the coolant, relative to the value of which a decision is made on the appropriateness of a particular module design), then when summing up, the magnitude of the change of this criterion is given with a negative sign. With the same change in the main criterion (coolant temperature), the change in the total criterion is calculated, which includes the changes in all secondary criteria, and preference is given to the maximum value of the total criterion:

\sum K = 1 \cdot K_1 - 0{,}9 \cdot K_2 - 0{,}8 \cdot K_3 - 0{,}7 \cdot K_4 - 0{,}6 \cdot K_5.

The total criterion reflects a change not only in the parameter of the main criterion (change in the temperature of the coolant), but also takes into account changes in other criteria (change in the temperatures of the remaining components), which plays an important role in the optimization process in view of the level of their contribution to the thermal state of the module and heating of the coolant in particular.
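A minimal sketch of this summation, assuming the weights and sign convention described above: the K_i are the “change in criterion” values (in %) from Table 2, and the fifth-category change is taken as zero because it does not appear in the table (an assumption consistent with the reported totals).

```python
def total_criterion(K1, K2, K3, K4, K5=0.0):
    """Total criterion: the main criterion with weight 1, the secondary criteria
    subtracted with the priority coefficients 0.9, 0.8, 0.7 and 0.6."""
    return 1.0 * K1 - 0.9 * K2 - 0.8 * K3 - 0.7 * K4 - 0.6 * K5

print(total_criterion(90, 69, 45, 75))   # 3 mm air gap -> about -60.6
print(total_criterion(90, 69, 45, 70))   # 5 mm air gap -> about -57.1
print(total_criterion(90, 69, 45, 70))   # 7 mm air gap -> about -57.1
```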

4 Conclusion

As a result of the developed method for designing solar photovoltaic thermal modules, the designer can develop solar cogeneration modules of various designs, which can then be tested in the finite element analysis system using the developed method for the modeling and visualization of thermal processes. As a result of the optimization of the geometric parameters of the bilateral photoreceiver of a concentrator solar photovoltaic thermal module, a module design is proposed that allows obtaining high values of the overall efficiency of the solar module, when the photovoltaic converters do not overheat and the coolant heats up to high values. Solar photovoltaic thermal modules of this kind can serve as cogeneration modules for the simultaneous generation of electric and thermal energy for the consumer's own needs, and can operate both autonomously and in parallel with existing energy networks.

Acknowledgment. The research was carried out on the basis of financing the state assignments of the All-Russian Institute of the Electrification of Agriculture and the Federal Scientific Agroengineering Center VIM, on the basis of funding of the grant “Young lecturer of RUT” of the Russian University of Transport, on the basis of funding of the Scholarship of the President of the Russian Federation for young scientists and graduate students engaged in advanced research and development in priority areas of modernization of the Russian economy, direction of modernization: “Energy efficiency and energy saving, including the development of new fuels”, subject of scientific research: “Development and research of solar photovoltaic thermal modules of planar and concentrator structures for stationary and mobile power generation”.


References

1. Buonomano, A., Calise, F., Vicidimini, M.: Design, simulation and experimental investigation of a solar system based on PV panels and PVT collectors. Energies 9, 497 (2016)
2. Ibrahim, A., Othman, M.Y., Ruslan, M.H., Mat, S., Sopian, K.: Recent advances in flat plate photovoltaic/thermal (PV/T) solar collectors. Renew. Sustain. Energy Rev. 15, 352–365 (2011)
3. Kemmoku, Y., Araki, K., Oke, S.: Long-term performance estimation of a 500X concentrator photovoltaic system. In: 30th ISES Biennial Solar World Congress 2011, pp. 710–716 (2011)
4. Nesterenkov, P., Kharchenko, V.: Thermo physical principles of cogeneration technology with concentration of solar radiation. Adv. Intell. Syst. Comput. 866, 117–128 (2019). https://doi.org/10.1007/978-3-030-00979-3_12
5. Rawat, P., Debbarma, M., Mehrotra, S., et al.: Design, development and experimental investigation of solar photovoltaic/thermal (PV/T) water collector system. Int. J. Sci. Environ. Technol. 3(3), 1173–1183 (2014)
6. Sevela, P., Olesen, B.W.: Development and benefits of using PVT compared to PV. Sustain. Build. Technol. 4, 90–97 (2013)
7. Kharchenko, V., Nikitin, B., Tikhonov, P., Panchenko, V., Vasant, P.: Evaluation of the silicon solar cell modules. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 866, pp. 328–336 (2019). https://doi.org/10.1007/978-3-030-00979-3_34
8. Panchenko, V.: Photovoltaic solar modules for autonomous heat and power supply. IOP Conf. Ser. Earth Environ. Sci. 317 (2019). 9 p. https://doi.org/10.1088/1755-1315/317/1/012002
9. Panchenko, V., Izmailov, A., Kharchenko, V., Lobachevskiy, Y.: Photovoltaic solar modules of different types and designs for energy supply. Int. J. Energy Optim. Eng. 9(2), 74–94 (2020). https://doi.org/10.4018/IJEOE.2020040106
10. Panchenko, V.A.: Solar roof panels for electric and thermal generation. Appl. Solar Energy 54(5), 350–353 (2018). https://doi.org/10.3103/S0003701X18050146
11. Kharchenko, V., Panchenko, V., Tikhonov, P., Vasant, P.: Cogenerative PV thermal modules of different design for autonomous heat and electricity supply. In: Handbook of Research on Renewable Energy and Electric Resources for Sustainable Rural Development, pp. 86–119 (2018). https://doi.org/10.4018/978-1-5225-3867-7.ch004
12. Sinitsyn, S., Panchenko, V., Kharchenko, V., Vasant, P.: Optimization of parquetting of the concentrator of photovoltaic thermal module. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 1072, pp. 160–169 (2020). https://doi.org/10.1007/978-3-030-33585-4_16
13. Panchenko, V., Kharchenko, V., Vasant, P.: Modeling of solar photovoltaic thermal modules. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 866, pp. 108–116 (2019). https://doi.org/10.1007/978-3-030-00979-3_11
14. Panchenko, V., Chirskiy, S., Kharchenko, V.V.: Application of the software system of finite element analysis for the simulation and design optimization of solar photovoltaic thermal modules. In: Handbook of Research on Smart Computing for Renewable Energy and Agro-Engineering, pp. 106–131. https://doi.org/10.4018/978-1-7998-1216-6.ch005

Formation of Surface of the Paraboloid Type Concentrator of Solar Radiation by the Method of Orthogonal Parquetting Vladimir Panchenko1,2(&) and Sergey Sinitsyn1 1

Russian University of Transport, Obraztsova st. 9, 127994 Moscow, Russia [email protected] 2 Federal Scientific Agroengineering Center VIM, 1st Institutskiy passage 5, 109428 Moscow, Russia

Abstract. The paper discusses the geometric aspects of the surface design of solar radiation concentrators by composing their surface of single elements. The proposed method of orthogonal parquetting allows optimizing the accuracy of manufacturing of a solar radiation concentrator and the smoothness of its working profile. The geometric characteristics of paraboloid type concentrators are considered. The article also discusses various methods of manufacturing solar radiation concentrators, including the fan-shaped method of surface parquetting, which is related to the orthogonal method of parquetting. The optimization of the number of elements used in the developed method occurs either by the least number of curved components of the concentrator or by the maximum number of flat elements. As an example of the implementation of the developed method, two solar radiation concentrators designed for photovoltaic and photovoltaic thermal photoreceivers are presented. The concentrators of solar radiation provide uniform illumination in the focal areas. Keywords: Solar energy · Concentrator of solar radiation · Paraboloid · Optimization · Orthogonal parquetting · Concentrator solar module



1 Introduction

Solar plants are the fastest growing stations based on renewable energy converters [1–5]. Most solar stations consist of silicon planar photovoltaic modules that generate exclusively electrical energy. To increase the overall efficiency of solar modules, reduce their nominal power and accelerate the payback period, photovoltaic thermal solar modules are relevant, which, along with electrical energy, also generate thermal energy in the form of a heated coolant. Such photovoltaic thermal solar modules are divided according to the type of construction used: planar [6–9] and concentrator (with different concentrators of solar radiation) [10–12]. Since concentrator photovoltaic thermal modules can work with bilateral photovoltaic converters, including silicon matrix high-voltage ones [4, 5, 12] (which reduces the number of photovoltaic converters used), and can provide a higher temperature coolant at the output, which increases the overall efficiency of a solar installation, the concentrators of such installations require more complex calculations, related both to the development of a profile for uniform


illumination of the focal region where the photovoltaic converters are located and to methods for its manufacture according to geometrically specified requirements. Methods for calculating the thermal state of concentrator solar photovoltaic thermal modules are also necessary [13], since as the temperature of the photovoltaic converters increases, their electrical efficiency decreases [14], which implies the need for their effective cooling. The main types of concentrators for use in concentrator photovoltaic thermal installations are paraboloid type solar concentrators, however, methods for calculating their working profile and its manufacture are also relevant and important for research.

2 Paraboloid Type Concentrators of the Solar Radiation

An ideal parabolic concentrator of solar radiation (Fig. 1) focuses parallel sunrays to a point, which corresponds to an infinite degree of concentration [15]; this does not allow evaluating the capabilities of the concentrator of solar radiation, since the Sun has finite dimensions.

Fig. 1. Scheme of the formation of the focal region of the paraboloid concentrator of solar radiation

Figure 1 shows the diagram of the formation of the focal spot of the paraboloid concentrator of solar radiation: an elementary sunray with an angular size of 2 · 0,004654 rad is reflected from the surface of the concentrator of solar radiation and falls on the focal plane, where the trace of this ray is an elementary ellipse with semi-axes a = p \cdot \varphi_0 / [(1 + \cos\varphi) \cdot \cos\varphi] and b = p \cdot \varphi_0 / (1 + \cos\varphi), where p = 2f is the focal parameter of the parabola and f is the focal length [15].


From different radial zones of the concentrator of solar radiation (with different angles φ), the ellipses have different sizes and, overlapping each other, form the density of the focal radiation. An approximate estimate of the maximum radiation density in the focus is given by the formula [15]:

E_F = \rho \cdot \frac{\sin^2\varphi_m}{\varphi_0^2} \cdot E_0,   (1)

where ρ is the reflection coefficient of the concentrator of solar radiation; φ₀ is the opening angle of the elementary sunray; φ_m is the largest opening angle of the paraboloid to the side; E₀ is the density of the solar radiation. The imperfection of the reflecting surface of the concentrator of solar radiation leads to blurring of the spot due to the mismatch of the centers of the elementary ellipses according to a random law. The focal spot illumination is best described by the normal Gaussian distribution curve [15]:

E_T = E_{max} \cdot e^{-c r^2},   (2)

E_{max} = \left( \frac{180}{\pi} \right)^2 E_0 \cdot \rho \cdot h^2 \cdot \sin^2\varphi,   (3)

c = 3{,}283 \cdot 10^{3} \left( \frac{h}{p} \right)^2 (1 + \cos\varphi)^2,   (4)

where r is the radius in the focal plane and h is the measure of the accuracy of the concentrator of solar radiation. In the calculations, the focal length f and the half-opening angle φ_m are considered known, from where, using the representation of the parabola equation in the polar coordinate system, the diameter of the concentrator of solar radiation is obtained [15]:

D = \frac{4 f \sin\varphi_m}{1 + \cos\varphi_m}.   (5)

The optimal value of φ_m, at which the average coefficient of concentration of solar radiation is maximal, is equal to 45° at the achieved concentration K_{max} equal to 11300 [16], and the concentration of solar radiation at the focus of the ideal concentrator of solar radiation takes the form [17]:

E_f^{id} = R_s \cdot \frac{1{,}2 \cdot \sin^2\varphi_m}{\varphi_0^2} \cdot E_0,   (6)

where R_s is the integral transmittance of the system; φ₀ is the angular radius of the Sun, equal to 0,004654 rad; E₀ is the density of direct solar radiation. The curves of the distribution of the concentration of solar radiation in the focal plane for concentrators of solar radiation of different shapes (spherical, quasiparabolic and parabolotoric) with the same diameters and focal lengths (500 mm each) are shown in Fig. 2 [18].
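As a quick illustration of how Eqs. (1) and (5) are applied (the reflection coefficient and the irradiance are assumed example values; φ₀ is the angular radius of the Sun quoted above, and f = 0,5 m corresponds to the concentrators of Fig. 2):

```python
import numpy as np

rho = 0.85                 # reflection coefficient of the concentrator (assumed)
phi_m = np.radians(45.0)   # half-opening (rim) angle, the optimum quoted above
phi0 = 0.004654            # angular radius of the Sun, rad
E0 = 850.0                 # density of direct solar radiation, W/m^2 (assumed)
f = 0.5                    # focal length, m (as for the concentrators in Fig. 2)

E_F = rho * np.sin(phi_m)**2 / phi0**2 * E0            # Eq. (1): peak focal density
D = 4.0 * f * np.sin(phi_m) / (1.0 + np.cos(phi_m))    # Eq. (5): concentrator diameter
print(f"E_F ~ {E_F:.2e} W/m^2, D ~ {D:.2f} m")
```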


Fig. 2. Distribution of concentration of solar radiation in the focal plane of spherical, quasiparabolic and parabolotoric concentrators of solar radiation

Solar radiation concentrators whose concentration distribution is shown in Fig. 2 can be successfully used in conjunction with solar radiation photoreceivers; however, a uniform distribution of concentrated solar radiation over the focal region is necessary for solar photovoltaic converters. Therefore, when designing the working profile of paraboloid type concentrators, it is necessary to pay considerable attention to the distribution of illumination in the focal region of the concentrator. For the manufacture of paraboloid type solar radiation concentrators, the following methods are mainly used [15]: the centrifugal method; the method of electroforming; the method of glass bending; the manufacture of a reflective surface from flat curved mirrors. Along with the methods considered, parabolic concentrators with low manufacturing accuracy for cooking and heating water are manufactured in southern and low-income countries, which are presented in Fig. 3 [19–21]. Of great interest is the fan-shaped method of parquetting of the surface of the paraboloid type concentrator of solar radiation [22], with the help of which the concentrators in Fig. 3 on the right are manufactured.

Fig. 3. Paraboloid type solar radiation concentrators, the base of which is made of reed, guide ribs, satellite dishes and using the fan-shaped method of parqueting of surface


The term “parquetting” refers to the task of approximation of surfaces, which allows simplifying the technological process of their production while respecting the differential geometric characteristics of the finished products. The task of selecting the shape of the parquet elements of the shell and their dimensions is solved in two ways: by dividing the structure into small flat elements or by dividing the shell into elements of a curvilinear outline. Figures whose planned projections are bounded by straight lines are considered as parquet elements.

3 Method of the Orthogonal Parqueting of Surface of the Paraboloid Type Concentrator of the Solar Radiation

As the initial data for parquetting, a planned projection of the surface of the paraboloid is set (Fig. 4). The method for solving the task is based on the breakdown of the projection into elements with the subsequent finding on the surface of the third coordinates corresponding to the planned breakdown.

Fig. 4. The scheme of orthogonal parqueting of the surface of the paraboloid of the concentrator

According to the accepted classification of parquet types: a) the planned projection of the elements has the form of closed n-gons with the inclusion of circular arcs; b) the internal and external surfaces are arranged equidistantly [22].


When solving, it is taken into account that the mathematical model of the inner surface is given by the equation x² + y² = 2pz. The surface of the paraboloid S is dissected by one-parameter families of projecting planes (Fig. 5). In this case, the planned projection is divided by a network whose cell shape depends on the relative position of the projecting planes. It is most advisable to split the surface into elements with a planned shape of rectangles or arbitrary quadrangles.

Fig. 5. The sectional diagram of the surface of the paraboloid by families of projecting planes

The cell parameters of the surface partition network are determined taking into account the accuracy of the surface approximation (Δn ≤ Δ̄n), where the numerical value of the limiting error parameter is Δ̄n = 0,15%. If the surface of a paraboloid is represented by an approximated set of elements S_i having the shape of equal squares in plan (Fig. 4), then their angles on the surface S correspond to the set of points of the skeleton, approximately taken as regular. To determine the side of the square of the unit cell of the partition, a square with a side b is inscribed in the circle of the trace of the paraboloid on the plane X0Y (Fig. 4). The parameter b is calculated from the area of a circle of radius r:

b = 1{,}4 \cdot \sqrt{\frac{S_{cir}}{\pi}}.   (7)

The constructed square S is a trace of the corresponding part of the surface of the paraboloid.


Based on the relation:

\bar{\Delta}_n = 4 \sqrt{ \exp\left\{ 2\left[ \ln \bar{S} - \ln\left( (1 + N_{sm}) \, N_{sm} \, (\sqrt{N} - 1) \, C_{N_{sm}} \right) \right] \right\} + 2\Delta_a^2 },   (8)

for the regular point skeleton the dimension parameter N is calculated:

N = f\left( \bar{S},\; N_{sm},\; \bar{\Delta}_n,\; C_{N_{sm}} \right) \quad \text{at } \Delta_a = 0,   (9)

where N_{sm} = 1 is the smoothness order of the approximating contour of the surface composed of parquet elements; Δ̄n = 0,15%; C_{N_{sm}} is a statistical parameter. A point frame of dimension N = 680 is placed in the grid nodes of a square with a side b, including the segments of its sides, so the total number of unit cells of the partition is determined by the formula:

N^{*} = \left( \sqrt{N} - 1 \right)^2 = 625.   (10)

So, for example, for an area S = 250 m² of a piece of the surface of a paraboloid having a square trace on the plane X0Y, one cell has:

S_i = \frac{S}{N^{*}} = \frac{250}{625} = 0{,}4 \ \text{m}^2;

therefore, the square cell of the partition of the planned projection has a side l_i equal to:

l_i = \sqrt{S_i} = \sqrt{0{,}4} = 0{,}63.

Given the approximations, we finally obtain l_i^{*} = 0,9 · l_i ≈ 0,57 m. So, the surface of the paraboloid is dissected by a family of horizontally projecting planes k_i (i = 1, 2, …, k + 1) perpendicular to a straight line a selected on the plane X0Y, and the distance between the planes is l_i = 0,57 m. The line a is defined on the plane by two parameters: the angle α and the distance R from the center of coordinates. The set of elements located between adjacent planes is conventionally called a row or a belt. Of further interest are the cells that partially or fully cover the surface projection. Technological information about the nodal points of all elements of the parquet, including those located outside the “square b”, is presented in the form of a matrix:

\| S \| = \begin{Vmatrix} y^{1}_{min} & y^{1}_{max} & XN_{1,1} & XK_{1,1} & \dots & XN_{1,K_1} & XK_{1,K_1} \\ y^{2}_{min} & y^{2}_{max} & XN_{2,1} & XK_{2,1} & \dots & XN_{2,K_2} & XK_{2,K_2} \\ \dots \\ y^{n}_{min} & y^{n}_{max} & XN_{n,1} & XK_{n,1} & \dots & XN_{n,K_n} & XK_{n,K_n} \end{Vmatrix},   (11)


where K_i is the number of elements in the i-th row (Fig. 4); n is the number of rows; XN and XK are the designations of the X1 coordinates of the boundaries of the planned projection of the parquet element. So, for example, the selected element of the plan projection S_i (Fig. 4) is located in the i-th row with borders (y^{i}_{min}, y^{i}_{max}) and in the j-th column with the X1 border coordinates XN_{j,i} and XK_{j,i}. Given the equal width of all belts, y_{j+1} = y_j + l_i, and without taking into account the gaps between the elements of the parquet, the matrix (11) is simplified and takes the form:

\| S \| = \begin{Vmatrix} y_{1} & XN_{1,1} & XN_{1,2} & \dots & XK_{1,K_1} \\ y_{2} & XN_{2,1} & XN_{2,2} & \dots & XK_{2,K_2} \\ \dots \\ y_{n} & XN_{n,1} & XN_{n,2} & \dots & XK_{n,K_n} \end{Vmatrix}.   (12)

Based on the approximated surface model x² + y² = 2pz, the third coordinates of the corner points of the outer surface of the parquet elements are calculated, taking into account which a complete information model of the parquet is built, which is used in the preparation of control programs for the technological cycle of manufacturing the parquet elements.
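The worked example above can be restated numerically as follows (a sketch in Python; the focal parameter p is an assumed value, and the variable names are ours):

```python
import math

S = 250.0    # area of the paraboloid piece with a square trace, m^2 (from the text)
N = 680      # dimension of the regular point skeleton (from the text)
p = 2.0      # focal parameter p = 2f of the paraboloid, m (assumed)

N_cells = (math.isqrt(N) - 1) ** 2   # Eq. (10), taking the integer part of sqrt(N)
S_i = S / N_cells                    # area of one unit cell, m^2
l_i = math.sqrt(S_i)                 # side of the square cell in plan, m
l_i_corr = 0.9 * l_i                 # corrected spacing of the cutting planes, m

def z_on_paraboloid(x, y):
    """Third coordinate of a parquet node from the surface model x^2 + y^2 = 2*p*z."""
    return (x**2 + y**2) / (2.0 * p)

print(N_cells, round(S_i, 2), round(l_i, 2), round(l_i_corr, 2))   # 625 0.4 0.63 0.57
```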

4 Paraboloid Type Concentrators of the Solar Radiation for Various Photoreceivers

The developed method for calculating the surface of a paraboloid profile using orthogonal parquetting can be applied to the manufacture of the working surface of paraboloid type solar radiation concentrators. Similar solar radiation concentrators are used in solar modules, where photovoltaic converters (including high-voltage matrix ones), thermal photoreceivers and also combined photovoltaic thermal photoreceivers can be used as photoreceivers of concentrated solar radiation. In concentrator solar photovoltaic thermal modules, along with electrical energy, the consumer receives at the output thermal energy in the form of a heated coolant. The profiles of such paraboloid type solar radiation concentrators, providing uniform illumination in the focal region on the surface of the photoreceivers, are presented in Fig. 6 on the left, where the solar radiation concentrator is intended for photovoltaic converters located on the solar concentrator itself, which is an air-cooled radiator for the photovoltaic converters. Figure 6 below shows the profile of a paraboloid type solar radiation concentrator that provides uniform illumination in the focal region of the photovoltaic thermal cylindrical photoreceiver, in which the side surface converts concentrated solar radiation into electrical energy and the upper end surface converts concentrated solar radiation into thermal energy.


Fig. 6. Profiles of working surfaces of paraboloid type solar radiation concentrators (on the left), partition of their surfaces into unit elements (in the middle) and individual elements of solar radiation concentrators themselves (on the right)

Concentrators of solar radiation can be made of reflective metal sheet material. Breakdown of the working surface of solar radiation concentrators into individual elements by projecting planes can occur on cells with a side of 20 mm (Fig. 6 in the middle) (or with other sizes depending on the manufacturing requirements), as a result of which unit cells of the surfaces of solar radiation concentrators are formed (Fig. 6 on right).

5 Conclusion

Thus, it should be noted that the planned development of solar plants will occur at a steadily increasing pace, and the share of use of hybrid photovoltaic thermal installations will increase, which will significantly increase the need for the design, manufacture and study of solar radiation concentrators, in particular of the paraboloid type. The presented method of orthogonal parquetting of the surface of paraboloid type solar concentrators allows optimizing the manufacturing process of the concentrators already at the design stage and makes it possible to obtain almost any working profile of paraboloid type concentrators. The orthogonal scheme of parquetting surfaces of rotation, in particular of a paraboloid, is easy to implement in terms of data preparation. The presented method will also help to control and set the manufacturing error at the design stage of this kind of parabolic concentrators in order to ensure the expected characteristics of the focal region of the concentrator. The developed method of parquetting surfaces allows designing and manufacturing surfaces of rotation, including solar radiation concentrators, with specified differential-geometric requirements, which will ensure the expected distribution of the solar radiation illumination in the focal area. Such solar radiation concentrators provide uniform illumination in the focal region of various types of photoreceivers (photovoltaic, thermal, photovoltaic thermal), which will positively affect the efficiency of solar modules and their specific power.


Acknowledgment. The research was carried out on the basis of financing the state assignments of the All-Russian Institute of the Electrification of Agriculture and the Federal Scientific Agroengineering Center VIM, on the basis of funding of the grant “Young lecturer of RUT” of the Russian University of Transport, on the basis of funding of the Scholarship of the President of the Russian Federation for young scientists and graduate students engaged in advanced research and development in priority areas of modernization of the Russian economy, direction of modernization: “Energy efficiency and energy saving, including the development of new fuels”, subject of scientific research: “Development and research of solar photovoltaic thermal modules of planar and concentrator structures for stationary and mobile power generation”.

References

1. Ibrahim, A., Othman, M.Y., Ruslan, M.H., Mat, S., Sopian, K.: Recent advances in flat plate photovoltaic/thermal (PV/T) solar collectors. Renew. Sustain. Energy Rev. 15, 352–365 (2011)
2. Nesterenkov, P., Kharchenko, V.: Thermo physical principles of cogeneration technology with concentration of solar radiation. Adv. Intell. Syst. Comput. 866, 117–128 (2019). https://doi.org/10.1007/978-3-030-00979-3_12
3. Buonomano, A., Calise, F., Vicidimini, M.: Design, simulation and experimental investigation of a solar system based on PV panels and PVT collectors. Energies 9, 497 (2016)
4. Panchenko, V.: Photovoltaic solar modules for autonomous heat and power supply. IOP Conf. Ser. Earth Environ. Sci. 317 (2019). 9 p. https://doi.org/10.1088/1755-1315/317/1/012002
5. Panchenko, V., Izmailov, A., Kharchenko, V., Lobachevskiy, Y.: Photovoltaic solar modules of different types and designs for energy supply. Int. J. Energy Optim. Eng. 9(2), 74–94 (2020). https://doi.org/10.4018/IJEOE.2020040106
6. Rawat, P., Debbarma, M., Mehrotra, S., et al.: Design, development and experimental investigation of solar photovoltaic/thermal (PV/T) water collector system. Int. J. Sci. Environ. Technol. 3(3), 1173–1183 (2014)
7. Sevela, P., Olesen, B.W.: Development and benefits of using PVT compared to PV. Sustain. Build. Technol. 90–97 (2013)
8. Kharchenko, V., Nikitin, B., Tikhonov, P., Gusarov, V.: Investigation of experimental flat PV thermal module parameters in natural conditions. In: Proceedings of 5th International Conference TAE 2013, pp. 309–313 (2013)
9. Hosseini, R., Hosseini, N., Khorasanizadeh, H.: An experimental study of combining a photovoltaic system with a heating system. In: World Renewable Energy Congress, pp. 2993–3000 (2011)
10. Kemmoku, Y., Araki, K., Oke, S.: Long-term performance estimation of a 500X concentrator photovoltaic system. In: 30th ISES Biennial Solar World Congress 2011, pp. 710–716 (2011)
11. Rosell, J.I., Vallverdu, X., Lechon, M.A., Ibanez, M.: Design and simulation of a low concentrating photovoltaic/thermal system. Energy Convers. Manage. 46, 3034–3046 (2005)
12. Kharchenko, V., Panchenko, V., Tikhonov, P.V., Vasant, P.: Cogenerative PV thermal modules of different design for autonomous heat and electricity supply. In: Handbook of Research on Renewable Energy and Electric Resources for Sustainable Rural Development, pp. 86–119 (2018). https://doi.org/10.4018/978-1-5225-3867-7.ch004

94

V. Panchenko and S. Sinitsyn

13. Panchenko, V., Kharchenko, V., Vasant, P.: Modeling of solar photovoltaic thermal modules. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 866, pp. 108–116 (2019). https://doi.org/10.1007/978-3-030-00979-3_11 14. Kharchenko, V., Nikitin, B., Tikhonov, P., Panchenko V., Vasant, P.: Evaluation of the silicon solar cell modules. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 866, pp. 328–336 (2019). https://doi.org/10.1007/978-3-03000979-3_34 15. Strebkov, D.S., Tveryanovich, E.V.: Koncentratory solnechnogo izlucheniya (Concentrators of the Solar Radiation), Moskva, GNU VIESKH [Moscow, GNU VIESH], 12–30 (2007). (in Russian) 16. Andreev, V.M., Grilikhes, V.A., Rumyantsev, V.D.: Fotoehlektricheskoe preobrazovanie koncentrirovannogo solnechnogo izlucheniya (Photoelectric conversion of concentrated solar radiation), Leningrad, Nauka (Leningrad, Science) (1989). 310 p. (in Russian) 17. Zahidov, R.A., Umarov, G.Y., Weiner, A.A.: Teoriya i raschyot geliotekhnicheskih koncentriruyushchih system (Theory and calculation of solar concentrating systems), Tashkent, FAN (Tashkent, Science) (1977). 144 p. (in Russian) 18. Alimov, A.K., Alavutdinov, J.N., et al.: Opyt sozdaniya koncentratorov dlya modul'nyh fotoehlektricheskih ustanovok (Experience in creating concentrators for modular photovoltaic plants), Koncentratory solnechnogo izlucheniya dlya fotoehlektricheskih ustanovok (Concentrators of solar radiation for photovoltaic plants) 17–18 (1986). (in Russian) 19. Hassen, A.A., Amibe, D.A.: Design, manufacture and experimental investigation of low cost parabolic solar cooker. ISES Solar World Congress 28 August–2 September 2011 (2011). 12 p. https://doi.org/10.18086/swc.2011.19.16 20. Chandak, A., Somani, S., Chandak, A.: Development Prince-40 solar concentrator as do it yourself (DIY) kit. In: ISES Solar World Congress August–2 Sept. 2011 (2011). 8 p. https:// doi.org/10.18086/swc.2011.23.02 21. Diz-Bugarin, J.: Design and construction of a low cost offset parabolic solar concentrator for solar cooking in rural areas. In: ISES Solar World Congress August–2 Septe 2011 (2011). 8 p. https://doi.org/10.18086/swc.2011.30.05 22. Sinitsyn, S., Panchenko, V., Kharchenko, V., Vasant, P.: Optimization of parquetting of the concentrator of photovoltaic thermal module. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 1072, pp. 160–169 (2020). https://doi. org/10.1007/978-3-030-33585-4_16

Determination of the Efficiency of Photovoltaic Converters Adequate to Solar Radiation by Using Their Spectral Characteristics

Valeriy Kharchenko1, Boris Nikitin1, Vladimir Panchenko2,1, Shavkat Klychev3, and Baba Babaev4

1 Federal Scientific Agroengineering Center VIM, 1st Institutskiy Passage 5, 109428 Moscow, Russia, [email protected]
2 Russian University of Transport, Obraztsova st. 9, 127994 Moscow, Russia, [email protected]
3 Scientific and Technical Center with a Design Bureau and Pilot Production, Academy of Sciences of the Republic of Uzbekistan, 100125 Tashkent, Uzbekistan, [email protected]
4 Dagestan State University, Gadzhieva str., 43-a, 367000 Makhachkala, Russia, [email protected]

Abstract. The paper presents experimental studies of silicon photovoltaic solar radiation converters. An experimental evaluation of the photoconversion efficiency is considered, the spectral dependences of the current photoresponses of the photoelectric converter are determined and the spectral densities of the current photoresponses of various photovoltaic converters from an artificial light source are compared. Based on the results obtained, it becomes possible to determine the efficiency of the photovoltaic converter for a given wavelength of monochromatic radiation, as well as to calculate the efficiency of the photovoltaic converter for a given radiation spectrum.

Keywords: Solar energy · Efficiency · Photovoltaic converters · Spectral density · Wavelength · Spectrum of solar radiation · Monochromatic radiation

1 Introduction

The experimental assessment of the spectral values of the efficiencies of photovoltaic converters is a very urgent task. From the nature of this dependence, it is possible to obtain information on the corresponding values of the efficiency of the studied converter for certain wavelengths of monochromatic radiation and also to calculate the real value of the efficiency of the photoconversion for a predetermined radiation spectrum, including the standard solar radiation AM 1.5 [1–11]. The solution of the posed tasks is possible by comparing the spectral current photoresponses (obtained using specially selected light filters) of the reference and studied photoconverters. A reference photoconverter is understood as a semiconductor converter calibrated over the photoactive part of the spectrum for a given standard solar radiation, in the form of the spectral density of the short circuit currents of the converter.


2 Comparison of the Spectral Densities of Current Photoresponses of a Reference Silicon Photovoltaic Converter Under Various Exposure Conditions

A prerequisite for the planned analysis is the adjustment of the equipment used to ensure the equality of the integral (without the use of light filters) short circuit currents of the reference converter from standard solar radiation and from the laboratory light source. A comparison of the spectral (with the help of light filters) current photoresponses of the reference converter from the two compared light sources gives an idea of the values of the spectrally averaged (over the bandwidth of the selected filter) power of the monochromatic lines of the used laboratory light source. Figure 1 shows the spectral dependences of the current photoresponses of a reference silicon photoconverter when illuminated with standard solar radiation AM 1.5 (according to the calibration data of this converter) and when illuminated from a laboratory light source (halogen incandescent lamp).

Fig. 1. Comparison of the spectral densities of current photoresponses of the reference silicon photovoltaic converter when exposed to standard solar radiation AM 1.5 (sunlight exposure) and a laboratory light source (halogen lamp with a water filter h = 42 mm.) for equal integral short circuit currents

From the analysis of these dependencies it follows that the spectrum of the laboratory light source is depleted in intensity in its short-wave part but has increased intensity in the long-wave part. The areas bounded by these dependencies are equal, since the corresponding integral values of the short circuit currents are equal.


3 Averaged Spectral Densities of Power of the Photoactive Part of the Spectrum for Silicon

Figure 2 presents the averaged spectral densities of power of the light bands (50 nm wide, in accordance with the calibration of the reference converter) of the photoactive (for Si) part of the standard solar radiation AM 1.5 spectrum, together with the numerical values of the power of the light flux of each such band. These ΔP_0λi AM1.5 values of the standard solar radiation AM 1.5 were obtained by dividing the photoactive part of its spectrum for silicon into bands with a width of 50 nm.

Fig. 2. Averaged spectral densities of power of the bandwidths of 50 nm wide of the standard solar radiation AM 1.5 of the photoactive part of the spectrum for silicon (numerical spectral densities of power of the light bands are shown above the graph and have a dimension of W/m2)

The data in Fig. 1 and Fig. 2 make it possible to determine the power of the light bands of the used laboratory light source for each selected wavelength.
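As a minimal numerical sketch of this step, the snippet below scales the AM 1.5 band powers by the ratio of the reference converter's photoresponses to the two sources; all numerical values are placeholders and not the measured data of Fig. 1 and Fig. 2.

```python
# Hypothetical illustration: estimating the power of each 50 nm band of the
# laboratory light source from data of the kind shown in Fig. 1 and Fig. 2.
# The numerical values below are placeholders, not the measured data.

wavelengths_nm = [450, 500, 550, 600, 650]           # band centres, 50 nm wide
dP_am15 = {450: 70.0, 500: 77.0, 550: 75.0,           # band power density of AM 1.5, W/m^2
           600: 72.0, 650: 68.0}
i_ref_am15 = {450: 0.9, 500: 1.3, 550: 1.6,           # spectral SC-current density of the
              600: 1.9, 650: 2.1}                     # reference converter under AM 1.5
i_ref_lab = {450: 0.5, 500: 0.9, 550: 1.4,            # the same under the laboratory source
             600: 2.0, 650: 2.5}

# The lab-source band power is the AM 1.5 band power scaled by the ratio of the
# reference converter's photoresponses to the two sources at that wavelength.
dP_lab = {w: dP_am15[w] * i_ref_lab[w] / i_ref_am15[w] for w in wavelengths_nm}

for w in wavelengths_nm:
    print(f"{w} nm band: AM1.5 {dP_am15[w]:.1f} W/m^2 -> lab source {dP_lab[w]:.1f} W/m^2")
```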

4 Comparison of the Spectral Densities of Current Photoresponses of the Silicon Photovoltaic Converters

Figure 3 shows the experimentally recorded spectral dependences of the current photoresponses of two different photoconverters (the reference photoconverter and the studied one) at the same level and spectrum of illumination from the laboratory light source. From the analysis of the figure it follows that, due to the individual design features and the manufacturing technology of the photoconverters, these dependencies are not identical.


Comparison of the photoresponses of the converters makes it possible to determine, for the studied photoconverter sample, the current photoresponse per unit power of the light flux at a given wavelength of the spectrum.

Fig. 3. Comparison of the spectral densities of the current photoresponses of the silicon reference and investigated photoconverters under exposure from a laboratory light source

The volt-ampere characteristic of the investigated converter, taken by the traditional method under illumination from the used laboratory light source (naturally differing in spectrum from standard solar radiation), contains important data for further calculations:

1) The integral short circuit current of the converter, I0, which can be represented as the sum of the contributions of all monochromatic lines of the used laboratory light source. The same can be said about the standard solar radiation, since the short circuit currents are equal in accordance with the above.

2) The maximum power of the photoelectric converter, denoted as Pmax.
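A small sketch of how these two quantities can be read off a recorded volt-ampere characteristic is given below; the curve itself is synthetic and only illustrates the procedure.

```python
import numpy as np

# Hypothetical I-V curve of the investigated converter under the laboratory source;
# the numbers are placeholders, not the measured characteristic.
V = np.linspace(0.0, 0.6, 61)                       # terminal voltage, V
I = 3.0 * (1.0 - (np.exp(V / 0.035) - 1.0) / (np.exp(0.6 / 0.035) - 1.0))  # current, A

I0_sc = I[0]                                        # short circuit current I0 (current at V = 0)
P = V * I                                           # power at each point of the characteristic
P_max = P.max()                                     # maximum power Pmax taken from the curve

print(f"I0 = {I0_sc:.2f} A, Pmax = {P_max:.2f} W")
```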

5 Dependence of the Efficiency of the Silicon Photovoltaic Converter Versus the Wavelength

According to the well-known definition, the efficiency of the photovoltaic converter is equal to the ratio of the useful power to the total consumed power of the light flux. In our case, the useful power of the photoconverter should be taken as the maximum power drawn from the converter according to its volt-ampere characteristic.


It should be noted that the nature (form) of the volt-ampere characteristic of the converter should not change if the light fluxes of some lines are notionally replaced by lines of other wavelengths that are equivalent in terms of current photoresponse; therefore, the position of the operating point on the volt-ampere characteristic will not change. This circumstance is due to the fact that mobile charge carriers (primarily electrons) that acquire kinetic energy by absorbing high-energy photons almost instantly thermalize, according to the thesis described in [3]. This makes these mobile charge carriers indistinguishable by the history of their origin. In accordance with the foregoing, the power of monochromatic radiation with a wavelength λi generating the total short circuit current I0 is determined by the expression:

P_λi = ΔP_0λi AM1.5 · S_PVC · (i_0λi lab.ref. / i_0λi AM1.5 cal.) · (I_0 / (i_0λi lab.PVC · Δλ_cal.))   (1)

where: ΔP_0λi AM1.5 is the density of power of the light fluxes of standard solar radiation AM 1.5 at a wavelength λi and a bandwidth of 50 nm; S_PVC is the area of the investigated photovoltaic converter; i_0λi lab.ref. is the spectral density of the short circuit current of the reference photovoltaic converter from the laboratory light source at a wavelength λi; i_0λi AM1.5 cal. is the spectral density of the short circuit current of the reference converter from the solar radiation AM 1.5 at a wavelength λi; I_0 is the short circuit current of the investigated photovoltaic converter according to its volt-ampere characteristic; i_0λi lab.PVC is the spectral density of the short circuit current of the investigated photovoltaic converter under illumination from the laboratory light source at a wavelength λi; Δλ_cal. is the step of the calibration of the reference photovoltaic converter (usually 50 nm).

The spectral values of the coefficient of performance of the investigated photovoltaic converter will be determined according to the expression:

COP_λi = P_max PVC / P_λi   (2)

where P_max PVC is the maximum power of the photovoltaic converter, calculated from its volt-ampere characteristic taken under illumination from the laboratory light source tuned to standard solar radiation by the integral value of the short circuit current I0 using the reference photovoltaic converter; P_λi is the power of monochromatic radiation with a wavelength λi which causes the same value of the short circuit current I0 of the photovoltaic converter under study, corresponding to the volt-ampere characteristic. The value of such power is defined by expression (1).


Figure 4 shows the dependence of the spectral values of efficiency of the investigated serial silicon photovoltaic converter [12, 13] versus wavelength. From the analysis of the obtained dependence it follows that COP_λi gradually increases with wavelength from zero (at λi = 0.4 μm) to 40% at λi = 0.95 μm. An almost zero value of COP_λi is observed under conditions when photons of the corresponding (small) wavelength are completely absorbed by the doped layer of the converter without separation of the electron-hole pairs by the p-n junction.

Fig. 4. The characteristic form of the spectral values of the efficiency of silicon photovoltaic converters

Based on the obtained experimentally recorded spectral dependence of the efficiency of the photoconverter, it becomes possible to determine the generalized efficiency of the photovoltaic converter for a predetermined emission spectrum. The generalized efficiency of the photoconverter can be represented as the sum of the partial contributions of COP_λi for each band of the given spectrum, taking into account the fraction of these bands in the power of the total light flux of the radiation. The generalized efficiency of the photoconverter, as adequate for a given radiation spectrum, is described by the expression:

COP_gen = Σ K_λi · COP_λi   (3)

where COP_λi are the spectral values of the efficiency of the investigated photoconverter obtained according to expression (2); K_λi is the fraction of the luminous flux power of the wavelength λi in the total luminous flux power of the given spectrum.
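The following sketch illustrates expressions (2) and (3) with placeholder numbers: spectral efficiencies are obtained from P_max PVC and the monochromatic-equivalent powers P_λi, and the generalized efficiency is their K_λi-weighted sum.

```python
# Illustrative sketch of expressions (2) and (3); all numbers are placeholders.

P_max_pvc = 1.2   # W, maximum power from the I-V characteristic of the studied converter

# P_lambda: power of monochromatic radiation at each wavelength (um) that would
# produce the same total short circuit current I0 (expression (1)).
P_lambda = {0.5: 9.0, 0.7: 5.0, 0.9: 3.2}            # W, placeholder values

# Expression (2): spectral efficiency for each wavelength.
cop_lambda = {lam: P_max_pvc / p for lam, p in P_lambda.items()}

# K_lambda: fraction of each band in the total power of a given spectrum (sums to 1).
K_lambda = {0.5: 0.35, 0.7: 0.40, 0.9: 0.25}

# Expression (3): generalized efficiency for that spectrum.
cop_gen = sum(K_lambda[lam] * cop_lambda[lam] for lam in P_lambda)

print({lam: round(c, 3) for lam, c in cop_lambda.items()}, round(cop_gen, 3))
```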


The efficiency of the photoconverter adequate to the standard solar radiation AM 1.5 (1000 W/m2), calculated according to expression (3), is shown in Fig. 4 as a large dot with a value of about 13%, located on the spectral curve of the dependence of the efficiency versus wavelength.

6 Conclusion Thanks to the considered method, it becomes possible to experimentally estimate the spectral values of the efficiencies of photovoltaic converters. Using the considered provisions, one can obtain information on the corresponding values of the efficiencies of photovoltaic converters for specific wavelengths of monochromatic radiation. It is also possible to calculate the efficiency of the photovoltaic converter for a given emission spectrum. Acknowledgment. The research was carried out on the basis of financing the state assignments of the All-Russian Institute of the Electrification of Agriculture and the Federal Scientific Agroengineering Center VIM, on the basis of funding of the grant “Young lecturer of RUT” of the Russian University of Transport, on the basis of funding of the Scholarship of the President of the Russian Federation for young scientists and graduate students engaged in advanced research and development in priority areas of modernization of the Russian economy, direction of modernization: “Energy efficiency and energy saving, including the development of new fuels”, subject of scientific research: “Development and research of solar photovoltaic thermal modules of planar and concentrator structures for stationary and mobile power generation”.

References 1. Bird, R.E., Hulstrom, R.L., Lewis, L.J.: Terrestrial solar spectral, data sets. Sol. Energy 30 (6), 563–573 (1983) 2. Kharchenko, V., Nikitin, B., Tikhonov, P., Panchenko, V., Vasant, P.: Evaluation of the silicon solar cell modules. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 866, pp. 328–336 (2019). https://doi.org/10.1007/978-3-03000979-3_34 3. Arbuzov, Y.D., Yevdokimov, V.M.: Osnovy fotoelektrichestva (Fundamentals of Photoelectricity), Moscva, GNU VIESKH (Moscow, GNU VIESH) (2007). 292 p. (in Russian) 4. Kharchenko, V.V., Nikitin, B.A., Tikhonov, P.V.: Theoretical method of estimation and prediction of PV cells parameters. Int. Sci. J. Alt. Energy Ecol. 4(108), 74–78 (2012) 5. Kharchenko, V.V., Nikitin, B.A., Tikhonov, P.V.: Estimation and forecasting of PV cells and modules parameters on the basis of the analysis of interaction of a sunlight with a solar cell material. In: Conference Proceeding - 4th International Conference, TAE 2010, pp. 307– 310 (2010) 6. Nikitin, B.A., Gusarov, V.A.: Analiz standartnogo spektra nazemnogo solnechnogo izlucheniya intensivnost'yu 1000 Vt/m2 i ocenka na ego osnove ozhidaemyh harakteristik kremnievyh fotoelektricheskih preobrazovatelej (Analysis of the standard spectrum of ground-based solar radiation with an intensity of 1000 W/m2 and assessment based on it of the expected characteristics of silicon photovoltaic converters). Avtonomnaya energetika: tekhnicheskij progress i ekonomika (Auton. Power Eng. Tech. Progress Econ. no. 24–25, 50–60 (2009). [in Russian]


7. Strebkov, D.S., Nikitin, B.A., Gusarov V.A.: K voprosu ocenki effektivnosti raboty fotopreobrazovatelya pri malyh i povyshennyh urovnyah osveshchennosti (On the issue of evaluating the efficiency of the photoconverter at low and high levels of illumination). Vestnik VIESKH (Bulletin VIESH), 119 (2012). (in Russian) 8. Nikitin, B.A., Mayorov, V.A., Kharchenko, V.V.: Issledovanie spektral'nyh harakteristik solnechnogo izlucheniya dlya razlichnyh velichin atmosfernyh mass (Investigation of the spectral characteristics of solar radiation for various atmospheric masses). Vestnik VIESKH [Bulletin of VIESH], 4(21), 95–105 (2015). (in Russian) 9. Nikitin, B.A., Mayorov, V.A., Kharchenko V.V.: Vliyanie velichiny atmosfernoj massy na spektral'nuyu intensivnost' solnechnogo izlucheniya (The effect of atmospheric mass on the spectral intensity of solar radiation). Energetika i avtomatika (Energy Autom.) 4(26), pp. 54– 65 (2015). (in Russian) 10. Kharchenko, V., Nikitin, B., Tikhonov, P., Adomavicius, V.: Utmost efficiency coefficient of solar cells versus forbidden gap of used semiconductor. In; Proceedings of the 5th International Conference on Electrical and Control Technologies ECT-2010, pp. 289–294 (2010) 11. Strebkov, D.S., Nikitin, B.A., Kharchenko, V.V., Arbuzov, Yu.D., Yevdokimov, V.M., Gusarov V.A., Tikhonov, P.V.: Metodika analiza spektra issleduemogo istochnika sveta posredstvom tarirovannogo fotopreobrazovatelya i komplekta svetofil'trov (Method of spectrum analysis of the studied light source by means of a calibrated photoconverter and a set of light filters). Vestnik VIESKH (Bulletin of VIESH), 4(9), 54–57 (2012). (in Russian) 12. Panchenko, V.: Photovoltaic solar modules for autonomous heat and power supply. IOP Conf. Ser. Earth Environ. Sci. 317, 9 p. (2019). https://doi.org/10.1088/1755-1315/317/1/ 012002 13. Panchenko, V., Izmailov, A., Kharchenko, V., Lobachevskiy, Ya.: Photovoltaic solar modules of different types and designs for energy supply. Int. J. Energy Optim. Eng. 9(2), 74–94 (2020). https://doi.org/10.4018/IJEOE.2020040106

Modeling of the Thermal State of Systems of Solar-Thermal Regeneration of Adsorbents

Gulom Uzakov1, Saydulla Khujakulov1, Valeriy Kharchenko2, Zokir Pardayev1, and Vladimir Panchenko3,2

1 Karshi Engineering - Economics Institute, Mustakillik str. 225, Karshi, Uzbekistan, [email protected]
2 Federal Scientific Agroengineering Center VIM, 1st Institutskiy passage 5, 109428 Moscow, Russia, [email protected]
3 Russian University of Transport, Obraztsova st. 9, 127994 Moscow, Russia, [email protected]

Abstract. Energy supply of fruit and vegetable storages using solar energy, especially in places remote from centralized energy supply, is especially relevant. The authors of the paper propose systems for the thermal regeneration of adsorbents based on the use of solar energy. The paper presents studies of heat transfer and the thermal regime of the developed system of solar-thermal regeneration of the adsorbent (activated carbon) in non-stationary mode. The paper also offers a mathematical model of the temperature field of the adsorbent layer during solar heating using a solar air collector. The proposed mathematical model of the thermal regime of the solar adsorption installation allows qualitatively controlling the technological process of thermal regeneration of adsorbents and significantly reducing the cost of traditional energy resources.

Keywords: Solar air heater · Air temperature · Thermal efficiency · Solar energy · Adsorbent · Temperature · Air flow

1 Introduction

The leadership of Uzbekistan has set tasks to reduce the energy and resource intensity of the economy, to widely introduce energy-saving technologies in production, and to expand the use of renewable energy sources. The implementation of these provisions, including increasing the efficiency of the use of solar energy in the thermotechnological processes of fruit and vegetable storages, is considered one of the most important tasks. Based on our studies, we developed a solar air-heating system for thermal regeneration of adsorbents and active ventilation of fruit and vegetable chambers [1–3]. Studies have shown that the thermal regime of thermal regeneration of adsorbents depends on the intensity of convective heat transfer between the surface of the adsorbent and the hot air flow. The main heat engineering parameter of convective heat transfer is the heat transfer coefficient, which depends on many factors [4–8].


2 Research Problem Statement

The aim of this work is to study the heat transfer and thermal regime of the developed system of solar-thermal adsorbent regeneration in non-stationary mode. Convective heat transfer during forced air movement through a fixed adsorbent layer in the solar regeneration mode has a pronounced unsteady character; that is, the air and adsorbent temperatures change both in time and in space. In order to calculate the cycle of adsorption plants, determine the duration of the solar regeneration regime, and select the optimal thermotechnical parameters for their implementation, it is necessary to calculate the temperature field, i.e. the change in the temperature of the adsorbent along the length of the adsorber at a given time.

3 Research Results

The heat transfer coefficient in the granular layer of activated carbon in the processes of thermal regeneration is determined by the method of modeling convective heat transfer using criteria-based similarity equations [9, 10]. The experimental results were processed using the following similarity equations, that is, according to the formula of V.N. Timofeev:

at 20 < Re_liq < 200:  Nu_liq = 0.106 · Re_liq   (1)

at Re_liq > 200:  Nu_liq = 0.61 · Re_liq^0.67   (2)

The results of studies to determine the heat transfer coefficient from air to adsorbent are shown in the Table 1.

Table 1. The results of studies of heat transfer in a granular adsorbent layer during thermal regeneration by atmospheric air

№ | W, m/s | Re_liq | Nu_liq | α, W/(m²·K)
1 | 0.2 | 141.2 | 14.95 | 35
2 | 0.3 | 211.7 | 22 | 51.15
3 | 0.4 | 282.3 | 26.7 | 62.1
4 | 0.5 | 352.9 | 31 | 72
5 | 0.6 | 423.5 | 35.1 | 81.6
6 | 1.0 | 705.9 | 49.4 | 114.6
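For reference, the short check below reproduces the Nu_liq column of Table 1 from correlations (1)–(2); computing the heat transfer coefficient α itself would additionally require the air thermal conductivity and the grain diameter, which are not listed here.

```python
# Reproducing the Nu_liq column of Table 1 from correlations (1) and (2).
Re_values = [141.2, 211.7, 282.3, 352.9, 423.5, 705.9]

def nu_granular(re):
    # V.N. Timofeev correlation for a granular layer (Eqs. (1)-(2) above)
    if 20 < re < 200:
        return 0.106 * re
    return 0.61 * re ** 0.67

for re in Re_values:
    print(f"Re = {re:7.1f}  ->  Nu = {nu_granular(re):5.1f}")
# Output matches Table 1 after rounding: 15.0, 22.0, 26.7, 31.0, 35.1, 49.4
```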

The process of unsteady heat transfer in the main apparatus of the adsorption unit – the adsorber – allows presenting the physical model of the process as follows. The process is one-dimensional, there is no heat exchange with the environment through the side surface of the adsorber, at any time the temperature of the adsorbent particle can be considered constant throughout its volume, convective heat transfer between the air


flow and the adsorbent layer is decisive, heat transfer by conduction through the air and the layer in the axial direction and heat transfer by radiation are small and can be neglected, the air pressure during movement through the layer remains unchanged, the mass flow rate of air is constant, and all the thermal properties of the air and the layer are considered constant and independent of temperature [11–13]. When considering non-stationary temperature fields in adsorbers in heat transfer processes, in addition to the above assumptions, it is assumed that the adsorption properties of the layer do not affect the heat transfer process: the specific mass air flow in any section remains constant, and the thermal effects associated with adsorption and desorption are insignificant and can be neglected. To derive the basic equations for the air and the particles enclosed in an elementary volume of the layer, heat balance equations are compiled. For the air: the change in the enthalpy of the air over the considered period of time plus the heat introduced by the flow is equal to the amount of heat transferred to the layer during convective heat transfer. For the particles of the layer: the change in the enthalpy of the particles over the considered period of time is equal to the amount of heat transferred to the layer during convective heat transfer [14–18].

Fig. 1. Design scheme of unsteady heat transfer during heating of the adsorbent layer

The desired functions are the air temperature t_a and the temperature of the particles of the layer t_l; these are functions of two independent variables (Fig. 1). The temperature of the adsorbent layer is determined by the following data: m_ad – the mass of the adsorbent, kg; c_l – the specific heat capacity of the adsorbent, J/(kg·°C); t_0 – the initial temperature of the adsorbent layer, °C; t_l = t(τ) – the temperature of the adsorbent at the time τ (τ ≥ 0).


The amount of accumulated heat in the adsorbent grain layer at time τ is determined by:

Q = c_l · m_ad · t(τ) = c_l · m_ad · t,   (3)

and at time τ = 0:

Q_0 = c_l · m_ad · t_0.   (4)

Within the time dτ, the amount of heat received by the adsorbent layer will increase by:

dQ = c_l · m_ad · dt(τ).   (5)

This amount of heat is transferred to the adsorbent by air at the constant temperature during convective heat transfer between the air and the adsorbent:

dQ = α · (t_a − t_l) · dτ,   (6)

where α – the heat transfer coefficient, W/(m²·°C); t_a – air temperature, °C. Equating (5) and (6), the following heat balance equation is obtained:

c_l · m_ad · dt = α · (t_a − t_l) · dτ.   (7)

By separating the variables as follows:

dt / (t_a − t_l) = (α / (c_l · m_ad)) · dτ   (8)

and integrating Eq. (8):

∫ dt / (t_a − t_l) = (α / (c_l · m_ad)) · ∫ dτ,   (9)

the following expression is obtained:

∫ dt / (t_a − t_l) = (α / (c_l · m_ad)) · τ + C_1,   (10)

where C_1 – an arbitrary integration constant. To calculate the integral on the left side of Eq. (10), the following substitution is introduced:

t_a − t_l = x;   (11)

it turns out d(t_a − t_l) = dx and, since t_a = const,

dt_l = −dx.   (12)

Taking into account expressions (11), (12) and the presence of the integration constant C_1 in Eq. (10), after integration we obtain:

∫ (−dx / x) = −ln x.   (13)

Replacing the left side of Eq. (10) and taking into account (12):

−ln x = (α / (c_l · m_ad)) · τ + ln C_1.   (14)

The resulting Eq. (13), after the conversion and taking into account (10), gives the following form:

t = t_a − (1 / C_1) · e^(−(α / (c_l · m_ad)) · τ).   (15)

Under the initial condition τ = 0, t(0) = t_0, from (15) we get:

t_0 = t_a − (1 / C_1) · e^0 = t_a − 1 / C_1, or 1 / C_1 = t_a − t_0.   (16)

Further, for the convenience of calculations, we introduce the following notation:

β = α / (c_l · m_ad);   (17)

from (15), taking into account (17), we obtain the final equation:

t = t_a − (t_a − t_0) · e^(−β·τ).   (18)

The obtained Eq. (18) allows us to determine the temperature of heating of the adsorbent in the solar air-heating installation, taking into account the environmental parameters and thermal characteristics of the adsorbent itself (a, cl, mad). From the analysis of the obtained dependence, we can conclude that the heating temperature of the adsorbent in the adsorber is determined by its mass and heat capacity, initial temperature, heat transfer coefficient, and duration of heating. The resulting Eq. (18) will solve the following problems:


1) Given the maximum temperature for heating the adsorbent, it is possible to determine the maximum duration of its heat treatment with air through a solar air heater (the duration of the thermal regeneration of adsorbents).

2) Given the maximum duration of heat treatment of the adsorbent, we determine the possible temperature of its heating.

The resulting mathematical Eq. (18) is convenient for practical engineering calculations of thermal regeneration systems of adsorbents and does not require a large amount of initial and experimental data. We will calculate based on the following source data: W = 0.2 m/s; α = 35 W/(m²·°C); c_l = 0.84 kJ/(kg·°C); m_ad = 28.8 kg; t_a = 60 °C; t_0 = 18 °C;

β = α / (c_l · m_ad) = 35 / (0.84·10³ · 28.8) = 1.44·10⁻³
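A minimal sketch reproducing this numerical example and the heating curve of Eq. (18) is given below; the time points are chosen only for illustration.

```python
import math

# Source data of the worked example above
alpha = 35.0            # heat transfer coefficient, W/(m^2*K)
c_l = 0.84e3            # specific heat of the adsorbent, J/(kg*K)
m_ad = 28.8             # adsorbent mass, kg
t_a, t_0 = 60.0, 18.0   # hot air and initial adsorbent temperatures, deg C

beta = alpha / (c_l * m_ad)                                  # ~1.44e-3, as in the text
t = lambda tau: t_a - (t_a - t_0) * math.exp(-beta * tau)    # Eq. (18), tau in seconds

for tau_min in (0, 10, 20, 30, 60):
    print(f"tau = {tau_min:3d} min -> t = {t(tau_min * 60):.1f} deg C")
```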

Based on the calculation results, a graph of the temperature change of heating the adsorbent in the adsorber was constructed (Fig. 2).

Fig. 2. The graph of the temperature of heating the adsorbent (activated carbon) in the adsorber of a solar air-heating installation

4 Conclusion

1. The proposed mathematical model of the temperature field of the adsorbent layer during solar heating allows one to determine the change in the temperature of the adsorbent along the length of the adsorber and in time and also takes into account the influence of the thermal characteristics of the adsorbent itself.

2. At the maximum heating temperature of the adsorbent, it is possible to determine the duration of heat treatment with hot air heated in a solar air-heating installation.


3. As can be seen from Fig. 2, with an increasing heat transfer coefficient (α) from the air to the adsorbent, the heating intensity of the adsorbent layer increases.

4. Thus, the obtained results of the study of the thermal regime of solar-thermal adsorbent regeneration and the mathematical model of the temperature field of the adsorbent layer make it possible to qualitatively control the process of solar-thermal regeneration of the adsorbent and to choose the optimal technological parameters of the gas medium regulation systems in fruit storage.

References 1. Khuzhakulov, S.M., Uzakov, G.N., Vardiyashvili, A.B.: Modelirovanie i issledovanie teplomasso- i gazoobmennyh processov v uglublennyh plodoovoshchekhranilishchah (Modeling and investigation of heat and mass and gas exchange processes in in-depth fruit and vegetable storages). Problemy informatiki i energetiki (Prob. Inf. Energy), no. 6, 52–57 (2010). (in Russian) 2. Khujakulov, S.M., Uzakov, G.N., Vardiyashvili, A.B.: Effectiveness of solar heating systems for the regeneration of adsorbents in recessed fruit and vegetable storages. Appl. Solar Energy 49(4), 257–260 (2013) 3. Khujakulov, S.M., Uzakov, G.N.: Research of thermo moisten mode in underground vegetable storehouses in the conditions of hot-arid climate. Eur. Sci. Rev. 11–12, 164–166 (2017) 4. Uzakov, G.N., Khuzhakulov, S.M.: Geliovozduhonagrevatel'naya ustanovka s solnechnotermicheskoj regeneraciej adsorbentov (Helio-air-heating installation with solar-thermal regeneration of adsorbents). Tekhnika. Tekhnologii. Inzheneriya [Technics. Technol. Eng.], no. 2, 7–10 (2016). https://moluch.ru/th/8/archive/40/1339/. (in Russian) 5. Uzakov, G.N., Khuzhakulov, S.M.: Issledovanie teploobmennyh processov v sistemah solnechno-termicheskoj regeneracii adsorbentov (Study of heat exchange processes in systems of solar-thermal regeneration of adsorbents). Tekhnika. Tekhnologii. Inzheneriya (Tech. Technol. Eng.) (2), 10–13 (2016). https://moluch.ru/th/8/archive/40/1340/. (in Russian) 6. Uzakov, G.N., Khuzhakulov, S.M.: Issledovanie temperaturnyh rezhimov geliovozduhonagrevatel'noj ustanovki dlya sistem termicheskoj regeneracii adsorbentov (Study of the temperature conditions of a solar air heating installation for thermal regeneration of adsorbents). Geliotekhnika (Solar Eng.) (1), 40–43 (2017). (in Russian) 7. Abbasov, E.S., Umurzakova, M.A., Boltaboeva, M.P.: Effektivnost’ solnechnyh vozduhonagrevatelej (Efficiency of solar air heaters). Geliotekhnika (Solar Eng.) (2), 13–16 (2016). (in Russian) 8. Klychev, Sh.I., Bakhramov, S.A., Ismanzhanov, A.I.: Raspredelennaya nestacionarnaya teplovaya model’ dvuhkanal’nogo solnechnogo vozduhonagrevatelya (Distributed nonstationary thermal model of a two-channel solar air heater). Geliotekhnika (Solar Eng.) (3), 77–79 (2011). (in Russian) 9. Akulich, P.V.: Raschety sushil'nyh i teploobmennyh ustanovok (Calculations of drying and heat exchange plants). Minsk, Belarus. navuka (Minsk, Belarus. Navuka) (2010). 443 p. (in Russian) 10. Mikheev, M.A., Mikheeva, I.M.: Osnovy teploperedachi (Fundamentals of heat transfer). Moskva, Energiya (Moscow, Energy) (1977). 320 p. (in Russian)


11. Yanyuk, V.Ya., Bondarev, V.I.: Holodil'nye kamery dlya hraneniya fruktov i ovoshchej v reguliruemoj gazovoj srede (Refrigerators for storing fruits and vegetables in a controlled gas environment). Moskva, Legkaya i pishchevaya promyshlennost’ (Moscow, Light and food industry) (1984). 128 p. (in Russian) 12. Kharitonov, V.P.: Adsorbciya v kondicionirovanii na holodil'nikah dlya plodov i ovoshchej (Adsorption in conditioning on refrigerators for fruits and vegetables). Moskva, Pishchevaya promyshlennost’ (Moscow, Food industry) (1978). 192 p. (in Russian) 13. Chabane, F.: Design, developing and testing of a solar air collector experimental and review the system with longitudinal fins. Int. J. Environ. Eng. Res. 2(I. 1), 18–26 (2013) 14. Henden, L., Rekstad, J., Meir, M.: Thermal performance of combined solar systems with different collector efficiencies. Sol. Energy 72(4), 299–305 (2002) 15. Kolb, A., Winter, E.R.F., Viskanta, R.: Experimental studies on a solar air collector with metal matrix absorber. Sol. Energy 65(2), 91–98 (1999) 16. Kurtas, I., Turgut, E.: Experimental investigation of solar air heater with free and fixed fins: efficiency and exergy loss. Int. J. Sci. Technol. 1(1), 75–82 (2006) 17. Garg, H.P., Choundghury, C., Datta, G.: Theoretical analysis of a new finned type solar collector. Energy 16, 1231–1238 (1991) 18. Kartashov, A.L., Safonov, E.F., Kartashova, M.A.: Issledovanie skhem, konstrukcij, tekhnicheskih reshenij ploskih solnechnyh termal’nyh kollektorov (Study of circuits, structures, technical solutions of flat solar thermal collectors). Vestnik YUzhno-Ural’skogo gosudarstvennogo universiteta (Bull. South Ural State Univ.), (16), 4–10 (2012). (in Russian)

Economic Aspects and Factors of Solar Energy Development in Ukraine

Volodymyr Kozyrsky, Svitlana Makarevych, Semen Voloshyn, Tetiana Kozyrska, Vitaliy Savchenko, Anton Vorushylo, and Diana Sobolenko

National University of Life and Environmental Sciences of Ukraine, St. Heroiv Oborony, 15, Kyiv 03041, Ukraine
{epafort1,t.kozyrska}@ukr.net, [email protected], [email protected], [email protected], [email protected]

Abstract. The paper is devoted to the development of renewable energy in Ukraine and to particular directions of state stimulation of this branch. A functional diagram of the interrelations of the factors influencing the efficiency and mass use of renewable sources is given (on the example of solar power plants). The concept of MicroGrid is considered as a power supply system for remote territories. The concept of a “dynamic tariff” is proposed as an integrated indicator of the current cost of electricity at the input of the consumer. It is formed on the basis of the real cost of electricity from sources in the MicroGrid system, the cost of electricity losses during transportation, taxes, planned profits and a number of functional factors that determine the management of the balance of electricity generation and consumption and the impact of consumers and the MicroGrid system on electricity quality.

Keywords: Renewable energy sources · Solar electricity · Solar energy station · Electricity storage · MicroGrid system · Dynamic tariff · Reclosers

Renewable energy sources (RES) are one of the priorities of energy policy and one of the instruments to reduce carbon emissions. Efforts by countries to address this issue under the Kyoto Protocol are not yielding the expected effect. The first steps of the world community to solve this problem began at the 18th Conference of the Parties to the UN Framework Convention and the 8th meeting of the Parties to the Kyoto Protocol, which took place from November 26 to December 7, 2012 in Doha (Qatar) [1]. Under the second period of the Kyoto Protocol (2013–2020), Ukraine has committed itself to reducing greenhouse gas emissions by 20% (from 1990 levels) and has announced a long-term goal for 2050: to reduce emissions by 50% compared to 1990. Why is the number of RES projects growing in Ukraine, despite the fact that tariffs are falling? It turns out that several positive factors that determine the development of alternative energy acted at the same time. The first factor is regulatory. In addition to reducing green energy tariffs, the state guarantees that support will last long enough to recoup investment in energy facilities. As in some European countries, where the tariff decreases as the EU targets are met (20% of energy obtained from renewable sources by 2020), Ukraine also has


mechanisms in place to reduce it. The green tariff will be reduced from the 2009 base level by 20% in 2020 and by another 30% in 2025, and abolished in 2030, as it is a temporary incentive for the development of RES.

The second factor stimulating the construction of stations is that the conditions for the construction of new facilities have been liberalized. In particular, the Verkhovna Rada changed the rule on the mandatory local component in the equipment. Instead, an incentive mechanism was introduced: the more domestic components in the station, the higher the rate of tariff increase.

Another important factor is the rapid decline in the cost of technology worldwide. In recent years, capital expenditures for the construction of SES have decreased significantly, as the cost of equipment has decreased. As of 2017, investments in 1 MW of capacity in Ukraine fluctuate at the level of 0.75–1.05 million euros. Payback periods of projects are 6–7 years. China intends to put into operation 15–20 GW of solar capacity annually; by 2020, the total capacity should triple. Such a plan envisages investments in China's energy sector of $368 billion, and this amount of investment in one of the main manufacturers of solar energy station (SES) components will help reduce the cost of SES.

A further factor in reducing the cost of SES is the increasing efficiency of SES elements (photovoltaic panels (FEP), batteries, etc.). To date, the production of most commercial solar cell modules is based on crystalline Si (I generation) and large-area amorphous thin-film solar cells with η ≈ 5–8% (II generation). The concept of the III generation is the use of nano- and microstructures (microwires). The main characteristic of a FEP is the efficiency of photoelectric conversion, which for currently available industrial FEP is in the range from 7% to 18% [1–7], and in laboratory developments reaches 39–43% [4, 8]. Thus, the improvement of technologies and the efficiency of SES elements has made solar generation one of the leaders in terms of capacity growth not only in Ukraine but also in the world.

One of the main elements of SES are batteries for electricity storage. Electricity storage technologies are also developing rapidly: lithium-ion batteries, hydrogen energy storage technologies, supercapacitors. These innovative developments have higher productivity and lower cost. Unfortunately, the current electricity storage technologies in Ukraine have a high cost: one kW of storage capacity costs from $500 to $3,000. It is expected that within 3–5 years the price will drop by a third or more.

The factor that negatively affects the domestic segment of the RES market is the poor technical condition of electrical networks. For example, due to the high density of buildings in the Kyiv region, private homes have problems due to insufficient capacity of electrical networks. Importantly, the capacity of installed solar panels increased almost sevenfold: from 2.2 MW (2015) to 16.7 MW at the end of 2016. That is, households began to install more powerful panels. For all those who installed solar panels in 2017, there will be a fairly high green tariff of 19 cents per 1 kWh. Gradually, this tariff will be reduced on the same principle that works for industrial stations, down to 14 cents by 2030.

An important factor that encourages private individuals to install panels is the reduction in the cost of small SES technologies. Ekotechnik Ukraine Group forecasts a


5–10% drop in prices for solar power plant equipment for private households. In 2017, the average price of the entire set of equipment is about 900–950 euros per 1 kW of SES.

The Ukrainian startup SolarGaps has offered a new solution: the world's first solar blinds. The project has already attracted its first investments and is preparing to enter various markets around the world. Equipping a usual apartment window with such blinds, according to the startup, will cost $300. SolarGaps blinds in a three-room apartment with windows facing south will be able to produce up to 600 W·h per hour, or about 4 kW·h per day (100 kW·h per month).

Thus, solar energy is gradually becoming the cheapest source of energy in many countries in the long run, and Ukraine is no exception. Solar activity in Ukraine is enough to ensure a return on investment in 6–7 years using the green tariff and in 13–15 years without it. This term can be compared with the payback of a classic thermal power plant. Ukraine has all the natural and regulatory prerequisites for the development of RES. The share of green generation, while maintaining favorable factors, may reach 30% by 2035 due to the construction of 15 GW of new SES and wind farms. Investments in RES and Ukraine's economy will create tens of thousands of jobs, which is an important social factor.

In determining the conditions for the development of SES, it is important to establish the degree of influence of various factors. Fig. 1 presents a functional diagram of the relationship of factors for two types of SES, with a capacity of P > 30 kW and P < 30 kW (private sector). The analytical relationship between the function of SES development and the factors is given by:

(1)

(2)

Determination of the coefficients can be performed by the method of expert assessments.

The direction of Smart Grid technologies is considered to be a promising concept for the development of energy supply systems, and for remote areas, such as rural areas, the creation of MicroGrid systems [8]. A number of pilot projects of similar systems have already been implemented in the world [9–12]. However, this concept creates a number of scientific, technical and economic problems that need to be solved to ensure the successful operation of MicroGrid systems. The economic ones include, for example, the formation of a tariff policy in a closed MicroGrid system.

A MicroGrid system is a local power system containing two or more, homogeneous or heterogeneous, power sources, means of energy storage, elements of transportation, distribution of electricity and switching, power consumers, and systems for receiving, transmitting and analyzing information. The system is considered as a single integrated


and controlled unit of the subsystems “generation-accumulation-transportation-distribution-consumption” of electricity with an intelligent control system. The MicroGrid system can operate autonomously or as an integrated subsystem in a centralized power system.

(Figure 1 relates two groups of solar plants, SES with P > 30 kW and SES with P < 30 kW, to the following factors: state regulation (tariff) X1; state regulation (non-tariff) X2; state regulation (liberalization of construction) X3; reducing the cost of technology X4; increasing the efficiency of the elements X5; the level of reliability and bandwidth of electrical networks X6; social factor X7; logistics X8; climatic factor X9; environmental factor X10.)

Fig. 1. Functional diagram of the relationship between the factors of development of solar electricity.


Since electricity is a commodity and the cost of its generation and transportation changes over time, it is advisable to consider a new principle: the formation of the tariff for each consumer in the dynamics (during the period Δt, while the components of the tariff are constant). This tariff can be called dynamic. The dynamic tariff (DT) is an integrated indicator of the current cost of electricity at the input of the consumer, which is formed on the basis of the actual cost of electricity from sources in the MicroGrid system (see Fig. 1), the cost of electricity losses during transportation, taxes, planned profits and a number of functional factors, which determine the management of the balance of generation and consumption of electricity and the impact of consumers and the MicroGrid system on the quality of electricity.

Fig. 2. Example of electrical network diagram in MicroGrid system

The scheme of Fig. 2 shows the main elements of the MicroGrid system: consumers – various objects of the settlement (utility and industrial spheres); SES - solar power plants (can be with electricity storage); reclosers – switching devices with automation and remote control; MicroGrid system control center – a server with a computer center and an intelligent system control algorithm; information transmission system from each element of the MicroGrid system to the control center (elements –


SES, electricity storage, consumer meters, reclosers – equipped with means of transmitting and receiving information). During operation of the MicroGrid system the electrical load of consumers changes throughout the day; as the loads in the nodes of the circuit of Fig. 2 change, part of the reclosers will be switched on and the other part switched off, so that the circuit configuration is optimized by the criterion of minimum total power losses in the MicroGrid system, ΣΔP_min. Thus, the scheme of electricity transportation from the SES can change, and the consumer can receive electricity from different sources during the day. The functional for determining the dynamic tariff at the input of the electricity consumer is:

ð3Þ

where the components of the electricity tariff: T1(C1) – the dependence of the tariff for a particular consumer on the cost of electricity produced; T2(C2) – dependence of the tariff on the cost of electricity losses for transportation; T3(C3) – the dependence of the tariff on the amount of taxes; T4(C4) – a component determined by the planned profit of the MicroGrid system; T5(C5) – a component determined by the sale of electricity to the centralized power supply system at a “green tariff”; K1(S) – coefficient that takes into account the management of the balance of generation and consumption of electricity; K2(I1) – coefficient that takes into account the negative impact of the consumer on the quality of electricity; K3(I2) – is a factor that takes into account the negative impact of the MicroGrid system on the quality of electricity. The coefficient K1(S) is determined by the total electrical load S of the MicroGrid system and motivates consumers to use electricity during hours of minimum electrical load of the system and can take values within n  K1(S)  m, for example – n > 0, and m < 2 and can be determined on a contractual basis between electricity consumers, if they are co-owners of the MicroGrid system, or consumers and the owner of the system. The coefficients K2(U1) and K3(U2) are determined by the deviation of electricity quality parameters from the norms and determine the motivation of the consumer and the MicroGrid system not to reduce the quality of electricity. Component T1(C1) determines the dependence of the tariff on the cost of electricity produced by the generating installation, from which the consumer receives electricity during the time Dt. In the case of a closed mode of operation of electrical networks (see Fig. 2) of the MicroGrid system, the daily change of electrical loads of consumers will be a switching change of the network. In this case, in order to optimize the mode of operation of the network by the criterion RDPMIH (minimum total power losses during the time period Dt) there is a change in the configuration of the circuit and the consumer can receive electricity from one or more generating units. Since the cost of electricity produced from different generating units will be different, the dynamics of this component of the tariff for a particular consumer will change. Component T2(C2) determines the dependence of the tariff on the cost of electricity losses for its transportation to the consumer. Based on the conditions of formation in the MicroGrid system of component T1(C1), and periodic changes in the configuration time of the electrical network, it can be concluded that the component of tariff T2(C2) should be calculated for the consumer for each time period Dt.

Economic Aspects and Factors of Solar Energy Development in Ukraine

117

Component T3(C3) determines the dependence of the tariff on the amount of taxes and is a constant value at constant tax rates. Component T4(C4) is determined by the planned profit of the MicroGrid system and can be determined on a contractual basis by consumers as owners, or consumers and owners of the MicroGrid system. Component T5(C5) is determined by the sale of electricity to the centralized power supply system at a “green tariff” and can be included in the formation of the tariff on a contractual basis by consumers, owners of the MicroGrid system.

1 Conclusions 1. The most promising concept for the development of human energy supply systems is the direction of Smart Grid technology, and for remote areas, such as rural areas the creation of MicroGrid systems. 2. Modern technical means allow to create a system of online accounting of the cost of electricity input to the consumer (in real time), which requires the solution of a number of economic problems. 3. Introduction of the concept of dynamic tariff - an integral indicator of the current cost of electricity at the input of the consumer will reduce electricity losses for its transportation, which will significantly affect the cost of electricity consumed by consumers.

References 1. Introduction to Microgrids – What is a Microgrid. ABB. https://new.abb.com/distributedenergy-microgrids/introduction-to-microgrids 2. International Energy Agency. https://www.iea.org/ 3. European Commission. https://ec.europa.eu/ 4. Bondarenko, S.A., Zerkina, O.O.: Smart Grid as a basis for innovative transformations in the electricity market. BUSINESSINFORM, no. 4, pp. 105–111 (2019) 5. Cheremisin, M.M., Cherkashina, V.V., Popadchenko, S.A.: Features of introduction of smart grid technologies in the electric power industry of Ukraine. Sci. J. Sci. Rise /4/2(9), pp. 27– 31 (2015) 6. Kozyrsky, V.V., Guy, O.V.: SMART GRID technologies in energy supply systems: monograph. Comprint, Kyiv (2015). 336 p. 7. LAW of Ukraine on Voluntary Association of Territorial Communities (Vidomosti Verkhovnoi Rady (VVR)), no. 13, p. 9 (2015) 8. Forst, M.: Germany’s module industry poised for growth. SUN Wind Energy 5, 256–263 (2011) 9. Bekirov, E.A., Khimich, A.P.: Computer modeling of complex power systems with solar energy concentrators. Renew. Energy 1(24), 74–81 (2011) 10. Solar energy: industry review: based on materials from Nitol Solar Limited. - World Wide Web access mode: https://nitolsolar.com/rusolarenergy/ 11. Eckart von Malsen. Opportunities for large-scale projects. SUN Wind Energy 5, 254–255 (2011) 12. Solar energy/Wikipedia. [Free Internet Encyclopedia]. – World Wide Web access mode: https://en.wikipedia.org/wiki/Solar_energy

A Method for Ensuring Technical Feasibility of Distributed Balancing in Power Systems, Considering Peer-to-Peer Balancing Energy Trade Mariusz Drabecki(&) Institute of Control and Computation Engineering, Warsaw University of Technology, 15/19 Nowowiejska Street, Warsaw, Poland [email protected] Abstract. In this paper a method for ensuring network feasibility of power flow (in terms of all network constraints), when energy in power system is balanced via peer-to-peer contracts, is proposed and analyzed. The method considers subjective benefits (utility functions) of market participants. The method is based on two optimization problems originated from Optimal Power Flow standard formulations, which can be solved by system’s operator. It possibly can be used in power systems with high penetration of distributed energy resources (DERs) to give incentive to build and actively control those resources by market participants. The method was tested on 9-bus test system, under three different preference scenarios. Keywords: Balancing energy market flow  Network constraints

 Peer-to-peer energy trade  Power

1 Introduction Currently, electrical energy is traded on specially designed markets. Their basic architecture is similar worldwide. Depending on when the trade happens, as opposed to the actual delivery date, one can distinguish following markets: long-term contracts market, where energy is traded bilaterally long before the delivery, day-ahead market where trade happens for the next day, intraday market where participants trade for the same day, at least one hour prior to delivery and the balancing market, being the real (or nearly real) time market. Through the balancing market, system’s operator assures that supplied energy exactly equals its consumption (transmission losses included) and that the power system operates safely and securely – i.e. that all technical constraints are met. This is assured by the fact that the operator participates in every buy/sell transaction and, by consequence that the operator is responsible for final dispatch of generating units [1]. However, it is believed that due to such a centralized balancing scheme, operations on the market are not optimal from each of the market participants’ subjective perspective. This is true even despite that the balancing is performed by the operator © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 118–131, 2021. https://doi.org/10.1007/978-3-030-68154-8_13

Ensuring feasibility of power flow

119

aiming to maximize the social welfare function, which usually means minimizing the overall generation cost, subject to all power flow technical constraints. Although being least costly for the aggregated set of energy consumers, this approach completely neglects individual preferences of market participants and of bilateral, peer-to-peer, agreements made between them. I argue that allowing them in the balancing process would add more freedom to the market itself. Such a liberated way of energy balancing is especially interesting when considering market mechanisms for power systems integrating many distributed energy resources (DERs). This energy balancing scheme is important to systems with high DERs penetration, as it directly gives incentives to market participants to both build DERs and actively control their power output, to maximize participants’ subjective profits. Distributed power systems, where energy is traded on a peer-to-peer basis are gaining recognition both theoretically and practically in multiple integration trials performed worldwide [2]. Yet, as described in the following section, such a distributed self-balancing and unit self-commitment may cause problems in terms of technical feasibility of power flow, resulting from so-dispatched units. Some other problems may also arise while trying to optimally balance demand and supply of energy in the system. Addressing these technical issues lay in the scope of this paper and form its contribution. Some interesting research in this field has already been conducted. Authors of [2] outlined that depending on whether network constraints are taken into account or not, and whether individual actions of peers are controlled centrally multiple ways of peerto-peer transactions can be identified, with their associated control models. One more approach towards optimizing distributed p2p trade on the market was given in [3]. Yet, authors of the latter neglected all network constraint and arising technical problems. These were addressed in [4], yet limited to active power only. However, both of these papers, together with [5] addressed the issue of maximizing subjective benefits of market participants. Authors of [6] have pointed that p2p trading might be even more beneficial when consumers optimize their power consumption and later trade negawatts. Some other examples of related research on relevant models of trade optimization are given in [7–9]. However, the role of system’s operator in these works remains unclear and possibly suppressed. According to authors of [2, 7, 10], the technical feasibility issues (network constraints) might be addressed by special control systems at generation/load bus level – Energy Management Systems. They will need to have capabilities of limiting possible generation/demand of a given market participant to ensure network feasibility of the dispatch. However, these systems would require correct integration and control knowing the full picture of current situation in the grid. Therefore, it is reasonable to assume that, at least in the period of transition from centralized to fully distributed systems architecture, the system’s operator is a relevant entity to guard security and stability of power supply to its customers, as it is the operator who knows all technical issues of the grid. This paper addresses the problem of possible infeasibility of power flow (both active and reactive) when energy balancing in the system is accomplished through peer-to-peer trading. 
In other words, the paper proposes a method for finding feasible generating units dispatch when peer-to-peer balancing energy contracts are made.


For this, a multi-step method based on optimization models is proposed. As an assumption, a typical scenario is considered in which the power system under consideration may be a local grid, or a wider-area sub-network, managed by the system operator and striving for self-balancing.

2 Notion of Power Flow and Its Feasibility

The main goal of any electrical power system is to safely and securely provide the demanded amounts of electrical power to its consumers at each time instant. This consists of two main tasks, namely generation of the power at the system's generation buses and its transmission to the load buses. For the first task it is necessary to produce an amount of apparent power which equals exactly the demanded power plus transmission losses. Transmission of the power through the system is referred to as the power flow. It results from the combination of generating unit setpoints (both for active and reactive power), the demand attached to every load bus, and the current technical parameters of the power grid itself. Normally, the power flow is estimated by numerically solving a set of nonlinear equations, as shown in [11]. The power flow can only be technically attainable (in this paper referred to as feasible) if all the variables, i.e. generating unit setpoints, branch power flows, nodal voltages and nodal voltage angles, fall within their technical limits. As shown in [12], network feasibility of the power flow, considering both active and reactive flow, depends highly on the grid model used for determining the power dispatch of generating units. Thus, depending on the model used to dispatch the units, a non-feasible power flow may be obtained even though all generation setpoints lie within the generating units' capabilities.
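For reference, the nodal injections appearing in these equations follow the standard polar-form AC power-flow relations. The sketch below is a minimal numpy illustration of those textbook formulas; the three-bus admittance matrix and operating state are made-up placeholders, not any system discussed in this paper.

import numpy as np

def bus_injections(Y, V, theta):
    """Active/reactive power injections at every bus from the polar-form equations:
       P_i = V_i * sum_j V_j (G_ij cos(th_i - th_j) + B_ij sin(th_i - th_j))
       Q_i = V_i * sum_j V_j (G_ij sin(th_i - th_j) - B_ij cos(th_i - th_j))"""
    G, B = Y.real, Y.imag
    dth = theta[:, None] - theta[None, :]          # matrix of angle differences
    P = V * ((G * np.cos(dth) + B * np.sin(dth)) @ V)
    Q = V * ((G * np.sin(dth) - B * np.cos(dth)) @ V)
    return P, Q

# Illustrative 3-bus example (admittances, voltages and angles are invented):
Y = np.array([[ 5-15j, -2+6j, -3+9j],
              [-2+6j,  4-12j, -2+6j],
              [-3+9j, -2+6j,  5-15j]])
V = np.array([1.02, 1.00, 0.98])                   # voltage magnitudes [p.u.]
theta = np.radians([0.0, -2.0, -4.0])              # voltage angles [rad]
P_inj, Q_inj = bus_injections(Y, V, theta)
print(P_inj, Q_inj)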

3 Proposed Method

In this section, the proposed method for ensuring technical feasibility of the power flow when the generating units dispatch is obtained via p2p balancing energy trade is described in detail. It is assumed that the operator is only responsible for checking whether the generating units dispatch resulting from the agreed contracts yields a feasible power flow, and not for any a priori dispatch of generating units. This check is performed at the moment of accepting bilateral contracts. If the resulting power flow is not feasible, the operator proposes a direction in which the market participants should adjust their contractual positions, so as to get as close to a feasible power flow as possible. This is obtained by applying an additional constraint on accepted contracts. We can thus say that the operator serves only as a feasibility guard of the power flow, without imposing anything on the market participants for as long as possible. What is more, it is assumed that only active power balance is the subject of trade between peers. The method is based on specially designed optimization problems. As contracts are signed directly between market players based on their subjective goals, the overall generation cost (social welfare function) is unknown and is of little interest to the operator.


The optimization goal is thus to minimize the violations of the agreed balancing contractual positions, subject to all power flow technical constraints. Two network flow optimization sub-problems, formulated further in this paper, are considered. The first sub-problem (Formulation (2)) allows for identifying the unit whose total amount of contracted power should be adjusted the most in order to make the power flow feasible. This unit is considered the most problematic in terms of the contracts taken and should be the first to change its contractual position in the search for a network-feasible power dispatch. The second formulation (Formulation (3)), however, arbitrarily imposes changes on the generating units' contractual positions to obtain a feasible power flow while maximizing the use of bilateral contracts between suppliers and consumers. I propose that the operator use this formulation when market participants cannot reach a network-feasible dispatch through negotiations in a considerable amount of time/rounds while (2) is applied. Formulation (3) was first proposed in [13] and is cited in this paper as part of the now-proposed method. Both proposed formulations are restrictions of the standard Optimal Power Flow (OPF) problem [14]. Thus, any feasible solution of one of these two sub-problems yields a network-feasible power flow in terms of satisfying all grid technical constraints.

The method can be summarized in the following generic steps:

0. Accept contracts between suppliers and consumers that lie within all technical limits of the generating units, i.e. which do not exceed their technical maxima/minima. The architectural design of an appropriate IT platform is beyond the scope of this article and will not be elaborated.
1. Check the network feasibility of the power flow that results from the accepted contracts between suppliers and consumers. In case of infeasibility, apply Formulation (2) to identify the most problematic unit and to estimate transmission losses.
2. Calculate the new constraint to be added.
3. Return information on the most problematic unit and on the new constraint to the market participants.
4. Have the market participants adjust their market positions through bilateral negotiations, following the information issued in step 3 of this method and respecting the additional constraint. After they agree on a new contracts distribution, go to step 0.
5. Use Formulation (3) to impose changes on the market positions of the participants if no feasible solution is found through negotiations in a considerable amount of time (or rounds of negotiations).

3.1 Optimal Power Flow Problem (OPF)

The OPF problems are well-known and widely used nonlinear, non-convex optimization problems, solved by system operators for determining a feasible active and reactive generating units dispatch. Usually, OPF is a problem of minimizing the total generation cost with respect to all system constraints, such as technical maxima/minima of generating units, line flow constraints, voltage level and angle constraints, and power balance constraints. However, other cost functions can also be used, such as minimization of transmission losses or re-dispatch of reactive power for enhancing the level of the system's stability, as in [15]. Below, in (1), I cite a simplified formulation of the OPF problem as given in [14], with the standard cost function, i.e. minimization of the overall generation cost:

min f_P                                                   (1a)

subject to:

P_i^inj − P_i + P_i^D = 0,        ∀i ∈ N                  (1b)
Q_i^inj − Q_i + Q_i^D = 0,        ∀i ∈ N                  (1c)
P_i^min ≤ P_i ≤ P_i^max,          ∀i ∈ N^G                (1d)
Q_i^min ≤ Q_i ≤ Q_i^max,          ∀i ∈ N^G                (1e)
U_i^min ≤ U_i ≤ U_i^max,          ∀i ∈ N                  (1f)
Θ_i^min ≤ Θ_i ≤ Θ_i^max,          ∀i ∈ N                  (1g)
0 ≤ S_l ≤ S_l^max,                ∀l ∈ N^f                (1h)

where: f_P – the total cost of generation and transmission; N – set of indices of all buses in the system; N^G – set of indices of all generating units; N^f – set of indices of all branches in the system; P_i^inj/Q_i^inj – active/reactive power injection at bus i, calculated using the standard, highly nonlinear power flow equations [11]; P_i^D/Q_i^D – active/reactive power demand at bus i; P_i/Q_i – active/reactive output of unit i; P_i^min/max, Q_i^min/max – generation limits of unit i; U_i – voltage magnitude at bus i; U_i^min/max – limits on the voltage magnitude of bus i; Θ_i – voltage angle at bus i; Θ_i^min/max – limits on the voltage angle of bus i; S_l – apparent power flow through line l; S_l^max – maximum value of the apparent power flow through line l.

The above standard OPF problem formulation provides the basis for the proposed optimization sub-problem Formulations (2) and (3) described below.
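As a small illustration of the box-type limit checks (1d)–(1h), the following sketch verifies whether a candidate operating point lies inside its bounds; all numerical limits and values are illustrative assumptions, not data taken from this paper.

import numpy as np

def within_limits(values, lower, upper):
    values, lower, upper = map(np.asarray, (values, lower, upper))
    return bool(np.all((lower <= values) & (values <= upper)))

P = [89.8, 134.3, 94.2];  P_min = [10, 10, 10];  P_max = [250, 300, 270]   # like (1d)
U = [1.02, 0.99, 1.01];   U_min = [0.95] * 3;    U_max = [1.05] * 3        # like (1f)
S = [120.0, 80.0, 60.0];  S_max = [250.0, 150.0, 300.0]                    # like (1h)

feasible = (within_limits(P, P_min, P_max)
            and within_limits(U, U_min, U_max)
            and within_limits(S, [0.0] * len(S), S_max))
print(feasible)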

3.2 Formulation (2) – Identification of Most Problematic Unit

In step 1 of the method presented in this paper, I propose to identify the unit whose total contractual position (total amount of output power) should be adjusted the most to make the power flow network-feasible. To obtain a Pareto-optimal solution, deviations from the other units' contractual positions should also be minimized. The model is presented below in (2). It assumes that reactive power is not a subject of trade and that market players contract the supply of active power only. As the feasible set of the presented problem is a restriction of the standard OPF's feasible set shown in (1), once a feasible solution to (2) is found, the resulting power flow is guaranteed to be technically feasible.

min  c_1 T + c_2 Σ_{i∈N^G} (s_{P,i}^{G−/+}),   where c_1 ≫ c_2            (2a)

subject to:

s_{P,i}^{G−} ≤ T,                                             ∀i ∈ N^G    (2b)
s_{P,i}^{G+} ≤ T,                                             ∀i ∈ N^G    (2c)
Σ_{k∈CN,i} P_{C,i}^k − s_{P,i}^{G−} ≤ P_i ≤ Σ_{k∈CN,i} P_{C,i}^k + s_{P,i}^{G+},   ∀i ∈ N^G    (2d)
P_i^min ≤ P_i ≤ P_i^max,                                      ∀i ∈ N^G    (2e)
s_{P,i}^{G−/+} ≥ 0,                                           ∀i ∈ N^G    (2f)
constraints (1b)–(1h)                                                     (2g)

where: c_1, c_2 – arbitrarily chosen positive costs of contract violation; s_{P,i}^{G+/−} – slack variables making violation of the limits possible for active power; CN,i – set of contracts signed with generating unit i; P_{C,i}^k – contracted volume of active/reactive power for generating unit i under contract k.

This formulation allows for identification of the most problematic generating unit. Once it is identified, one or more new constraints shall be formulated to limit the set of contracts acceptable to the operator, as proposed for step 4 of the method. This procedure is explained in Sect. 3.2.1.

3.2.1 Addition of New Constraint
In step 4 of the method it is proposed to add a new constraint (calculated in step 2) after each round of negotiations between market participants. This is to ensure that the power flow obtained in each round is closer to technical network feasibility. The constraint is not added directly to the optimization problem, but is imposed on the market participants' negotiations: they simply need to take it into account while agreeing on new contractual positions. Contracts violating this constraint shall not be accepted by the operator.

In the proposed method the constraint is formulated based on the identified most problematic unit, i.e. on the result of problem (2). It is thus known by how much the active power output setpoint needs to deviate from the desired contractual position in order to make the power flow network-feasible. The easiest way to get this exact information is to calculate the difference d(r) between the contractual position and the corresponding calculated optimal solution for the most problematic unit p in round r of negotiations:

d(r) = Σ_{k∈CN,p} P_{C,p}^k(r) − P_p(r).

Next, if d(r) ≠ 0, a new constraint on the accepted contracts in the next round of iterations (r+1) shall be added for the most problematic unit. If d(r) > 0, the constraint takes the form Σ_{k∈CN,p} P_{C,p}^k(r+1) ≤ P_p(r) − d(r), and the form Σ_{k∈CN,p} P_{C,p}^k(r+1) ≥ P_p(r) + d(r) when d(r) < 0. Since it is guaranteed that the deviation for the most problematic unit is minimal, constraining its accepted contracts by d drives the dispatch closer to feasibility. Once the constraint is added, the method returns to the end of step 4 – negotiations between peers.
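A minimal sketch of this bookkeeping, following the constraint form as reconstructed above (the reading of the bound is uncertain in the source); the contract volumes and optimal output in the example are hypothetical numbers, not values from the paper's test system.

def next_round_bound(contracts_p, P_p):
    """contracts_p: contracted volumes P^k_{C,p}(r) of the most problematic unit [MW];
       P_p: its optimal output P_p(r) returned by problem (2) [MW]."""
    d = sum(contracts_p) - P_p            # d(r) = sum_k P^k_{C,p}(r) - P_p(r)
    if d > 0:                             # over-contracted: upper bound for round r+1
        return ("<=", P_p - d)
    if d < 0:                             # under-contracted: lower bound for round r+1
        return (">=", P_p + d)
    return None                           # d(r) = 0: no additional constraint needed

print(next_round_bound([40.0, 25.0, 15.0], 71.3))   # illustrative call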

3.3 Formulation (3) – Operator's Corrective Actions

Despite adopting a method that helps market participants agree on a dispatch resulting in a technically feasible power flow (as in Sects. 3.1–3.2), it is still possible that the market participants will be unable to find one in a considerable amount of time or number of negotiation rounds. Thus, in this section the optimization problem from [13], which allows the operator to arbitrarily change the generating units' setpoints to achieve network feasibility of the power flow while maximizing the use of bilateral balancing contracts, is cited. Similarly to Sect. 3.2, it is assumed that market participants trade and contract the supply of active power only. What is more, it is assumed that each generating unit can change its contractual position, either by generating more or by reducing its output within its technical limits. Yet, we assume that such an action can be accomplished at a certain unit cost. This cost is known to the operator through offers for changing contractual positions, submitted by the power suppliers prior to the balancing energy delivery. The discussed formulation is given in (3).

min  Σ_{i∈N^G} c_{P,i}^{G+/−} s_{P,i}^{G+/−}                              (3a)

subject to:

Σ_{k∈CN,i} P_{C,i}^k − s_{P,i}^{G−} ≤ P_i ≤ Σ_{k∈CN,i} P_{C,i}^k + s_{P,i}^{G+},   ∀i ∈ N^G    (3b)
P_i^min ≤ P_i ≤ P_i^max,                                      ∀i ∈ N^G    (3c)
s_{P,i}^{G−/+} ≥ 0,                                           ∀i ∈ N^G    (3d)
constraints (1b)–(1h)                                                     (3e)

where: c_{P,i}^{G+/−} – positive cost (price) of violation of the upper/lower limits on generation of unit i for active power.


4 Assessment of the Method

In this paper I propose to add more freedom to the balancing energy market. As a result, this should add economic efficiency to the market, and this efficiency should be as high as possible. Yet, the freedom is achieved through rounds of bilateral negotiations, whose number should be as small as possible. Thus, given the above, I propose to assess the performance of the method by considering two criteria: a measure of market efficiency and the number of iterations it takes to come to a consensus.

4.1 Measure of Market Effectiveness

The number of rounds of negotiations is fairly straightforward to calculate. However, things are different when it comes to market efficiency. One should keep in mind that efficiency is very subjective for each participant and differs from the simple social welfare function, as discussed previously. Thus, for the sake of assessing the performance, let me formulate an effectiveness measure f_E that combines the subjective goals of the participants with the objective total cost of generation, and which should undergo maximization. The measure is given in (4):

f_E = 2 f_{S/L} − c_G − c_A,                                              (4)

where f_{S/L} is the total amount of subjective benefits from making contracts between suppliers and consumers (sometimes referred to as a utility function), c_G is the total cost of generation, and c_A is the total cost of adjustments of the units' operating points when Formulation (3) is used. Assuming that the subjective benefit of consumer j from making a bilateral contract with generating unit i equals the subjective benefit of i from making a contract with j, f_{S/L} is formulated as

f_{S/L} = Σ_{j∈N^L} Σ_{i∈N^G} a_{ji} P_{ji}^D = Σ_{j∈N^L} Σ_{i∈N^G} a_{ij} P_{ji}^D,      (5)

where a_{ji} – factor of benefit for delivering balancing power from i to j, quantifying how important the benefit is; P_{ji}^D – amount of delivered balancing power from i to j; N^L – set of indices of consumers. For the assessment, it is assumed that the total generation cost of each unit is well known. Normally it can be of any form, yet most likely a quadratic formulation is used, as in (6), where a_i, b_i, c_i are cost coefficients and P_i is the active power output of unit i:

c_G = Σ_{i∈N^G} (a_i P_i² + b_i P_i + c_i).                               (6)
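Once the benefit factors, delivered volumes and cost coefficients are known, the measure (4)–(6) can be evaluated directly; a minimal sketch follows, with purely illustrative input numbers that do not attempt to reproduce the paper's scenarios.

def market_effectiveness(a, P_D, gen_cost_coeff, P, c_adjust=0.0):
    """f_E = 2*f_S/L - c_G - c_A, with
       f_S/L = sum_j sum_i a[j][i] * P_D[j][i]      (eq. 5)
       c_G   = sum_i (a_i*P_i^2 + b_i*P_i + c_i)    (eq. 6)"""
    f_SL = sum(a[j][i] * P_D[j][i]
               for j in range(len(a)) for i in range(len(a[0])))
    c_G = sum(ai * Pi ** 2 + bi * Pi + ci
              for (ai, bi, ci), Pi in zip(gen_cost_coeff, P))
    return 2.0 * f_SL - c_G - c_adjust

# benefit factors a_ji [USD/MW] and delivered balancing power P^D_ji [MW] (invented)
a    = [[20.0, 70.0, 10.0], [30.0, 60.0, 10.0], [10.0, 80.0, 10.0]]
P_D  = [[3.3, 83.3, 3.3],   [3.3, 93.3, 3.3],   [3.3, 118.3, 3.3]]
cost = [(0.11, 5.0, 150.0), (0.085, 1.2, 600.0), (0.1225, 1.0, 335.0)]
P    = [sum(row[i] for row in P_D) for i in range(3)]    # per-unit delivered totals
print(market_effectiveness(a, P_D, cost, P))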

4.2 Simulation Results

The proposed approach is tested on a nine-bus test system (presented in Fig. 1 and proposed in [16]) over three scenarios. The test system is deliberately chosen to be small, so that the results are more evident and interpretable than on a larger test system.


In the tests we look at the measures described previously, i.e. market efficiency (f_E) and the number of negotiation rounds (max. Round). The resulting value of f_E is benchmarked against the result of the standard OPF problem (1), to check performance against currently used dispatching methods. All simulations were coded in Matlab and solved with the MATPOWER MIPS solver [17].

Fig. 1. Test system topology

The test system comprises nine buses, three of which are generator buses and three of which are load buses. All generating units have technical minima equal to 10 MW; the technical maxima of units 1, 2 and 3 are 250 MW, 300 MW and 270 MW respectively, giving a total generating capacity of 820 MW. The load is assumed constant and attached to buses 5, 7 and 9. The load at each of these buses, given as pairs (active demand, reactive demand), is: bus 5 – (90 MW, 30 MVAR); bus 7 – (100 MW, 35 MVAR); bus 9 – (125 MW, 50 MVAR), giving a total demand of (315 MW, 115 MVAR). The system also consists of nine transmission lines; their ratings (maximum flow capacities) can be found in [16]. For the simulations it is assumed that the system topology, the locations and values of the load, and the costs of generation are constant. It is also assumed that the demanded load and the technical limits of the generating units are known and guarded in step 0 of the method. What is more, I assume that the subjective benefit of unit i from making a contract with consumer j is symmetrical and equal to the benefit of consumer j, i.e. a_ij = a_ji. Apart from this, I also assume that after estimation of the transmission losses, consumers are forced to contract their compensation evenly, i.e. in the discussed test system each consumer contracts exactly one third of the losses. The generation costs for each of the generating units are presented in Table 1; they correspond to the quadratic formulation given in (6). For simplicity it is assumed here that generating units sell energy at its cost, meaning that the sell price equals the generation cost. The benefit coefficients are assumed as given in Table 2. These reflect both the benefit of the consumers and of the suppliers, meaning that the benefit should be summed only once into f_E.


The previous statements imply that the economic surplus of the market is produced only through the benefits. It is assumed here that each consumer wants to contract as much demand with its preferred unit as possible. From the generating unit's perspective, the unit accepts those contracts which bring the most benefit to it, until its maximum capacity is reached. What is more, it is also assumed that the operator's IT system guards that the sum of contracts made for each unit falls within this unit's technical capacities, from minimum to maximum. For all of the scenarios, with the load as given previously, the objective of the standard OPF problem (1) is equal to 5,296.69 USD, which in terms of the benefit function equals f_E = −5,296.69 USD. This is the cost value for the optimal solution: P_1 = 89.7986 MW, P_2 = 134.3207 MW, P_3 = 94.1874 MW.

Table 1. Generation cost coefficients

              a_i [USD/MW²]   b_i [USD/MW]   c_i [USD]
Gen. Unit 1   0.1100          5.0000         150.0000
Gen. Unit 2   0.0850          1.2000         600.0000
Gen. Unit 3   0.1225          1.0000         335.0000

Table 2. Benefit coefficients assumed in test cases (coefficients in [100 USD/MW])

           Consumer 1 (bus 5)          Consumer 2 (bus 7)          Consumer 3 (bus 9)
Scenario   Unit 1   Unit 2   Unit 3    Unit 1   Unit 2   Unit 3    Unit 1   Unit 2   Unit 3
S1         0.2      0.7      0.1       0.3      0.6      0.1       0.1      0.8      0.1
S2         0.2      0.1      0.7       0.3      0.1      0.6       0.1      0.1      0.8
S3         1        0        0         1        0        0         1        0        0

4.2.1 Numerical Results
In this section a brief description of each scenario is given, with the most important results presented in Table 3. In Scenario 1, each of the consumers finds the most benefit in making a bilateral balancing contract with generating unit 2, yet each to a slightly different extent. Thus, it is possible for the units to maximize their benefits and accept those contracts from which they can benefit the most. At first, most of the power was to be supplied by unit 2; yet, this highly violated the maximum flow capacity of the line adjacent to it. Therefore, significant changes in the dispatch were required. In this scenario the market participants agreed on a network-feasible solution in the third round of negotiations. During the course of the method two different units were identified as most problematic: unit 2 in round 1 and unit 1 in round 2. To guide the search, new constraints were imposed on their acceptable contracts. The results are shown in Table 3.


Scenario 2, however, is slightly different from the previous one. Here the preferred unit for all consumers was unit 3. Similarly to the previously considered example, the benefit coefficients differed, making it easy for the generating unit to decide which contracts to accept. However, this time most of the problem with network feasibility resulted from transmission losses, which were not considered by the participants in the first round of balancing. They were estimated by solving (2), and an additional constraint was added on the most problematic unit's acceptable contracts. The market participants managed to find a consensus in two rounds of negotiations. The results are presented in Table 3.

Scenario 3 differs significantly from both previously presented cases. This time all consumers wish to contract all energy supply from unit 1, without any will to compromise. After a considerable number of negotiation rounds (here assumed to be 10), the market participants did not come to a conclusion on a dispatch yielding a network-feasible power flow. Therefore, the operator used Formulation (3) to arbitrarily adjust the operating points of the generating units. It was assumed that the prices c_{P,i}^{G+/−} were all equal to 100 for all units. They form the cost c_A, which is considered while calculating the value of f_E.

Table 3. Results of numerical examples (contracts in [MW])

Scenario 1, f_E = 12,484.40 USD, max. Round = 3
                Gen. Unit 1               Gen. Unit 2                Gen. Unit 3
                Original    Feasible      Original     Feasible      Original    Feasible
Customer 1      3.3400      4.8505        83.3200      83.0896       3.3400      4.8505
Customer 2      3.3300      61.5815       93.3400      38.1096       3.3300      3.0996
Customer 3      3.3300      4.8405        118.3400     118.1096      3.3300      4.8405
Total [MW]      10.0000     71.2726       295.0000     239.3087      10.0000     12.7906
Constraint?     Σ_k P_{C,1}^k ≤ 71.28     Σ_k P_{C,2}^k ≤ 249        No
                (in round 2)              (in round 1)

Scenario 2, f_E = 9,901.26 USD, max. Round = 2
Customer 1      3.3400      4.9968        3.3400       4.9968        83.3200     83.3200
Customer 2      28.3300     29.9868       3.3300       4.9868        68.3400     68.3400
Customer 3      3.3300      4.9868        3.3300       4.9868        118.3400    118.3400
Total [MW]      35.0000     39.9704       10.0000      14.9704       270.0000    270.0000
Constraint?     No                        Σ_k P_{C,2}^k ≥ 14         No
                                          (in round 2)

Scenario 3, f_E = 7,820.06 USD, max. Round = 10+
Customer 1      3.3400      20.5085       83.3200      83.1551       3.3400      3.3333
Customer 2      3.3300      20.5085       93.3400      83.1551       3.3300      3.3333
Customer 3      3.3300      20.5085       118.3400     83.1551       3.3300      3.3333
Total [MW]      10.0000     61.5255       295.0000     249.4652      10.0000     10.0000
Constraint?     No                        Yes, without               No
                                          satisfactory results


The results show that, under the assumptions made, adding more freedom to the balancing energy market might bring a significant benefit compared to the standard dispatching method, e.g. solving the regular Optimal Power Flow. As shown in Table 3, this is the case for all scenarios. Not surprisingly, it is more beneficial to the market when participants manage to decide on contracts by themselves (as in Scenarios 1 and 2) rather than when these contracts are arbitrarily changed by the operator, as in Scenario 3. What is more, the results show that using the proposed method it is possible to find a network-feasible units dispatch for every discussed scenario which actually maximizes the use of the bilateral balancing energy contracts made between participants.

5 Conclusions and Discussion

Power systems with high penetration of distributed energy resources are nowadays becoming of interest to both academia and industry. Balancing of energy in such a system should ideally be performed via peer-to-peer trading between energy market participants, as the participants should be given direct incentives both to develop DER infrastructure and to actively control it to maximize their subjective profit. Therefore, in this paper a multi-step method for assuring network feasibility of the power flow resulting from the generating unit dispatch obtained via bilateral, peer-to-peer balancing contracts is presented and analyzed.

The proposed method allows market participants to make bilateral contracts between consumers and suppliers for the provision of the necessary amounts of balancing energy. The method assumes that market participants do not have sufficient knowledge of the technical situation in the power system and thus do not know a priori what dispatch to agree on. Therefore, it is assumed that it is the system operator's responsibility to guide participants in the search for a network-feasible dispatch. The method returns the necessary information on the direction of this search as a result of optimization problem (2), with heuristically added constraints on accepted bilateral contracts. If, despite having this information, market participants are unable to agree on technically feasible contracts, the method foresees the ability of the operator to arbitrarily adjust those contracts using optimization problem (3). Yet, when such a situation arises, suppliers are to be paid for adjusting their contractual positions. The basis of the proposed method is formed by the two optimization problems given in Formulations (2) and (3), which aim to maximize the use of such contracts while always guaranteeing feasibility of the resulting power flow.

The proposed approach was tested in simulations over three scenarios. All tests showed that allowing free trade on the balancing energy market may bring more benefits to the market than currently used centralized balancing options, namely those where the system's operator is always a party in any transaction made on the balancing market. What is more, it has been shown in the paper that it is possible to derive methods which can satisfactorily help to achieve technical feasibility of the power flow resulting from the so-obtained dispatch of generating units.


Some perspectives for further research can be identified. First of all, one may consider designing the architecture of an operator's IT system for accepting contracts, such as the one assumed in step 0 of the described method. Another perspective may be the development of new market efficiency measures or new models for assuring technical feasibility of the power flow. These could be developed based on different optimization, modelling and simulation problems, possibly including the development of agent systems. When all of the above issues are deeply considered and followed up by academic communities worldwide, it will hopefully be possible to establish highly effective bilateral balancing energy markets that may give incentives for the transition towards high penetration of green energy. The specific design of these markets shall also be given thorough consideration, to make room for all their participants regardless of their size and power supply capabilities.

References 1. Wang, Q., et al.: Review of real-time electricity markets for integrating distributed energy resources and demand response. Appl. Energy 138, 695–706 (2015) 2. Guerrero, J., et al.: Towards a transactive energy system for integration of distributed energy resources: Home energy management, distributed optimal power flow, and peer-to-peer energy trading. Renew. Sustain. Energy Rev. 132, 110000 (2020) 3. Lee, W., et al.: Optimal operation strategy for community-based prosumers through cooperative P2P trading. In: 2019 IEEE Milan PowerTech. IEEE (2019) 4. Guerrero, J., Archie, C., Verbič, G.: Decentralized P2P energy trading under network constraints in a low-voltage network. IEEE Trans. Smart Grid 10(5), 5163–5173 (2018) 5. Pasha, A.M., et al.: A utility maximized demand-side management for autonomous microgrid. In: 2018 IEEE Electrical Power and Energy Conference (EPEC). IEEE (2018) 6. Okawa, Y., Toru, N.: Distributed optimal power management via negawatt trading in realtime electricity market. IEEE Trans. Smart Grid 8(6), 3009–3019 (2017) 7. Zhang, Y., Chow, M.Y.: Distributed optimal generation dispatch considering transmission losses. In: 2015 North American Power Symposium (NAPS), Charlotte (2015) 8. Kar, S., Hug, G.: Distributed robust economic dispatch: a consensus + innovations approach. In: 2012 IEEE Power and Energy Society General Meeting, San Diego (2012) 9. Lin, C., Lin, S.: Distributed optimal power flow with discrete control variables of large distributed power systems. IEEE Trans. Power Syst. 23(3) (2008) 10. Garrity T.F.: Innovation and trends for future electric power systems. In: 2009 Power Systems Conference, Clemson (2009) 11. Machowski, J., Lubosny, Z., Bialek, J.W., Bumby, J.R.: Power System Dynamics: Stability and Control. Wiley (2020) 12. Drabecki, M., Toczyłowski, E.: Comparison of three approaches to the security constrained unit commitment problem. Zeszyty Naukowe Wydziału Elektrotechniki i Automatyki Politechniki Gdańskiej (62) (2019). http://yadda.icm.edu.pl/baztech/element/bwmeta1.ele ment.baztech-0345a0bb-f574-41c1-be5d-71f50b8e060c?q=5838bcf6-a5ee-44ec-93f8-ea79d 2cd037b$1&qt=IN_PAGE 13. Drabecki, M., Toczyłowski, E.: Obtaining feasibility of power flows in the deregulated electricity market environment. Przegląd Elektrotechniczny 95 (2019)


14. Zhu, J.: Optimization of Power System Operation. Wiley, Hoboken (2015) 15. Drabecki, M.: A method for enhancing power system’s steady-state voltage stability level by considering active power optimal dispatch with linear grid models. In: Zeszyty Naukowe Wydziału Elektrotechniki i Automatyki Politechniki Gdańskiej (62) (2019). http://yadda. icm.edu.pl/baztech/element/bwmeta1.element.baztech-2eb9eb53-193d-4970-827d-4dba236e 0936?q=5838bcf6-a5ee-44ec-93f8-ea79d2cd037b$2&qt=IN_PAGE 16. Chow, J.H. (ed.): Time-Scale Modelling of Dynamic Networks with Applications to Power Systems. Lecture Notes in Control and Information Sciences vol. 26, pp. 59–93. Springer. Berlin (1982) 17. Zimmerman R.D., Murillo-Sanchez C.E., Thomas R.J.: MATPOWER: steady-state operations, planning and analysis tools for power systems research and education. IEEE Trans. Power Syst. 26(1) (2011)

Sustainable Optimization, Metaheuristics and Computing for Expert System

The Results of a Compromise Solution, Which Were Obtained on the Basis of the Method of Uncertain Lagrange Multipliers to Determine the Influence of Design Factors of the Elastic-Damping Mechanism in the Tractor Transmission Sergey Senkevich(&), Ekaterina Ilchenko, Aleksandr Prilukov, and Mikhail Chaplygin Federal Scientific Agroengineering Center VIM, 1st Institute pas. 5, Moscow 109428, Russia [email protected], [email protected], [email protected], [email protected]

Abstract. The article is devoted to the research of a compromise solution for finding the optimal parameters of the Elastic Damping Mechanism (EDM) in the 14 kN class tractor transmission. The tractor was part of three different machine-tractor units and performed the main agricultural operations: plowing, cultivation and sowing. The task was to define a single function (a compromise solution) which can be used to describe the processes in the transmission when performing these operations. The Lagrange multiplier method was used to obtain the compromise solution. It was necessary to create one general mathematical model out of three mathematical models, which should correctly reflect the nature of the ongoing processes; for this purpose it was necessary to determine the Lagrange multipliers λ1, λ2, …, λm. All calculations were made in the Maple and MATLAB software environments. A compromise solution of the problem was found. The extremum of the «transmission transparency degree» function is found based on the Lagrange multipliers. A compromise model is obtained that expresses the influence of the main EDM parameters on the «transmission transparency degree». The values of the factors included in the resulting Lagrange function are found.

Keywords: Method of uncertain Lagrange multipliers · Elastic damping mechanism · Transmission · Optimal parameters

1 Introduction

Optimizing the operation of certain elements and replacing outdated mechanisms with new developments can significantly improve performance. A large number of studies exist in the field of improving mobile machinery.



One of the significant elements requiring optimization is the transmission of mobile power equipment. Improvements here can significantly influence the efficiency of resource use and the precision of control. These questions are being investigated all over the world. For example, some researchers have created a dynamic transmission model for improved gear shifting, analyzed gear-shifting processes, and developed coordinated strategies for managing these processes with the engagement of multiple clutches while the tractor is operating in the field [1]. Other scientists developed a numerical model of the dynamic characteristics of a new transmission with multistage face gears as the main component [2]. A simulation model of a synchronous transmission unit was created, which differs from other models in its simplicity and quality and can be computed in real time [3]. A large number of analyses of different transmission types exist, for example the analysis of the control of a hybrid automated manual transmission and a hybrid dual-clutch transmission [4].

The dynamic analysis of a vehicle with three different suspension configurations is also of interest [5]: with a spring damper; with a spring damper in combination with stabilizer bars; and with a spring damper in combination with a hydraulically interconnected roll-plane suspension. Not long ago, linear stability analyses of groups of vehicles were conducted to reduce fuel consumption and increase the fuel reserve [6]. Analysis and modeling of various systems provide the basis for creating new transmissions, for example the compact 18-speed epicyclic transmission for a small vehicle presented in [7]. Using process modeling, the authors of [8] showed that their optimal power distribution algorithms and algorithms for determining the optimal mode are effective. Other authors presented a new roll-resistant hydraulically interconnected suspension with two accumulators for each fluid line [9]. Correct computation of a roll-resistant hydraulically interconnected suspension is key to a high-quality control system [10]. Reducing dynamic loads on drive shafts has a positive effect on system reliability [11]. The effective interaction of the chassis with the ground also helps to increase productivity and reduce fuel consumption [12, 13].

This review shows the relevance of research on the optimization of mobile machine transmissions. These questions, which are investigated all over the world, allow us to significantly improve the parameters of a mobile tractor.

2 Purpose of Research

Our previous works [14–17] were aimed at finding the optimal parameters of the Elastic Damping Mechanism (EDM) in the 14 kN class tractor transmission; the conducted research is described in detail in those papers. The indicator «P» («transmission transparency degree») is proposed to assess the protective qualities of the mechanism, defined as the ratio of the current amplitude of the engine speed oscillations to its maximum value [14, 15, 18]. If P = 1, the reduction gear is absolutely «transparent» and the engine is not protected from fluctuations of the traction load (this happens in serial transmissions). If P = 0, the reduction gear is absolutely «not transparent» and will


completely absorb the vibrations transmitted to the engine. The research has been conducted for the following agricultural operations: plowing, cultivation and sowing.

The purpose of this research is to find the conditional extremum of the «transmission transparency degree» function with respect to the restrictions, as a result of a compromise solution based on Lagrange multipliers. The following tasks were solved to achieve this goal:

1. Obtaining a compromise model that expresses the influence of the main EDM parameters on the «transmission transparency degree»;
2. Finding the values of the factors included in the resulting Lagrange function.

3 Materials and Methods

The main restrictions for conducting this research were chosen based on our previous research [14, 15, 17, 18]. The names and designations of the elastic damping mechanism factors are given in Table 1.

Table 1. Name and designation of factors of the elastic damping mechanism installed in the tractor transmission.

N   Factor name                                       Factor designation   Code mark
1   The throttle cross-sectional area                 S_th, m²             x1
2   Volume of the hydropneumatic accumulator (HPA)    V_hpa, m³            x2
3   The air pressure in the HPA                       P_a, Pa              x3
4   Inertia moment of the additional load             J_th, kg·m²          x4
5   The oscillation frequency of the traction load    f, Hz                x5

The method of Lagrange multipliers was used to create a compromise solution based on the three previously obtained models [14, 15, 18]. The method consists of the following.

First, the Lagrange function must be composed to find a conditional extremum. In general, for a function of n variables f(x1, x2, …, xn) and m coupling equations (with n > m), it has the form (1) [19]:

F(x1, x2, …, xn, λ1, λ2, …, λm) = f + λ1·φ1 + λ2·φ2 + … + λm·φm,          (1)

where λ1, λ2, …, λm are the Lagrange multipliers; f is the function selected as the main one; φ1, φ2, …, φm are the functions selected as constraints.


In our study the restrictions are presented as equalities; however, the Lagrangian method also allows constraints given by inequalities and other functions.

Second, the necessary conditions for the extremum are set by the system of equations (2), consisting of the partial derivatives of Eq. (1) and the constraints, equated to zero; the stationary points are determined from this system of equations [20]:

∂F/∂x_i = 0,  (i = 1 … n);     φ_j = 0,  (j = 1 … m).                     (2)

The presence of a conditional extremum can be determined based on this. The sign of d²F is a sufficient condition that can be used to find out the nature of the extremum: if d²F > 0 at a stationary point, the function f(x1, x2, …, xn) has a conditional minimum at this point; if d²F < 0, the function has a conditional maximum. A compromise solution had to be found in this study: to create one general mathematical model out of three mathematical models, which should correctly reflect the nature of the ongoing processes. It was necessary to determine the Lagrange multipliers λ1, λ2, …, λm for this purpose.
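As a minimal illustration of composing (1) and solving the stationarity system (2), the sketch below uses sympy on a small two-variable example with one equality constraint; the example function and constraint are invented for illustration and are not the transmission models of this paper.

import sympy as sp

# Tiny illustration of (1)-(2): F = f + lam*phi, then dF/dx_i = 0 and phi = 0.
x1, x2, lam = sp.symbols('x1 x2 lam')
f   = x1**2 + 2*x2**2            # function selected as the main one (made up)
phi = x1 + x2 - 1                # coupling equation / constraint (made up)

F = f + lam * phi                                          # Lagrange function, eq. (1)
stationarity = [sp.diff(F, v) for v in (x1, x2)] + [phi]   # system (2)
solution = sp.solve(stationarity, (x1, x2, lam), dict=True)
print(solution)                  # conditional extremum of f subject to phi = 0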

4 Results and Discussion

All models are provided below. Model (3) was taken as the main function f(x1, x2, …, x5):

y1(x1, x2, x3, x4, x5) = 0.578 + 2.29·10⁻³·x1 − 3.72·10⁻³·x2 + 0.053·x3 + 4.51·10⁻³·x4 + 0.021·x5 − 1.34·10⁻³·x1·x2 − 3.14·10⁻³·x1·x3 − 2.41·10⁻³·x1·x4 + 3.32·10⁻⁴·x1·x5 − 1.25·10⁻³·x2·x3 + 1.37·10⁻³·x2·x4 − 0.044·10⁻³·x2·x5 − 6.04·10⁻⁵·x3·x4 − 0.035·x3·x5 + 8.09·10⁻³·x4·x5 + 0.03·x1² + 0.03·x2² + 0.031·x3² + 0.03·x4² + 0.095·x5².      (3)

Models (4) and (5) were taken as the conditions and restrictions:

y2(x1, x2, x3, x4, x5) = 0.722 − 6.325·10⁻³·x1 − 6.657·10⁻³·x2 + 0.033·x3 − 5.526·10⁻³·x4 − 3.072·10⁻³·x5 + 6.981·10⁻³·x1·x2 + 6.851·10⁻³·x1·x3 + 5.709·10⁻³·x1·x4 − 7.045·10⁻³·x1·x5 + 8.462·10⁻³·x2·x3 + 7.245·10⁻³·x2·x4 − 5.554·10⁻³·x2·x5 + 7.115·10⁻³·x3·x4 + 2.725·10⁻³·x3·x5 − 6.826·10⁻³·x4·x5 + 2.905·10⁻³·x1² + 2.466·10⁻³·x2² − 7.031·10⁻³·x3² + 3.445·10⁻³·x4² + 0.038·x5².      (4)

y3(x1, x2, x3, x4, x5) = 0.6212 + 0.06538·x3 + 0.06613·x5 + 0.04968·x3² − 0.06994·x1·x2 + 0.04868·x2·x4 − 0.03494·x2·x5 − 0.04581·x3·x5.      (5)

All further calculations were made in the Maple and MATLAB software environments for convenience. In this case the Lagrange function in general form is Eq. (6):

L(x1, x2, …, x5, λ2, λ3) = y1(x1…x5) + λ2·y2(x1…x5) + λ3·y3(x1…x5),       (6)

which takes the form (7) when Eqs. (3), (4) and (5) are substituted into it:

L := 0.578 + 2.29·10⁻³·x1 − 3.72·10⁻³·x2 + 0.053·x3 + 4.51·10⁻³·x4 + 0.021·x5 − 1.34·10⁻³·x1·x2 − 3.14·10⁻³·x1·x3 − 2.41·10⁻³·x1·x4 + 3.32·10⁻⁴·x1·x5 − 1.25·10⁻³·x2·x3 + 1.37·10⁻³·x2·x4 − 0.044·10⁻³·x2·x5 − 6.04·10⁻⁵·x3·x4 − 0.035·x3·x5 + 8.09·10⁻³·x4·x5 + 0.03·x1² + 0.03·x2² + 0.031·x3² + 0.03·x4² + 0.095·x5²
+ λ2·(0.722 − 6.325·10⁻³·x1 − 6.657·10⁻³·x2 + 0.033·x3 − 5.526·10⁻³·x4 − 3.072·10⁻³·x5 + 6.981·10⁻³·x1·x2 + 6.851·10⁻³·x1·x3 + 5.709·10⁻³·x1·x4 − 7.045·10⁻³·x1·x5 + 8.462·10⁻³·x2·x3 + 7.245·10⁻³·x2·x4 − 5.554·10⁻³·x2·x5 + 7.115·10⁻³·x3·x4 + 2.725·10⁻³·x3·x5 − 6.826·10⁻³·x4·x5 + 2.905·10⁻³·x1² + 2.466·10⁻³·x2² − 7.031·10⁻³·x3² + 3.445·10⁻³·x4² + 0.038·x5²)
+ λ3·(0.6212 + 0.06538·x3 + 0.06613·x5 + 0.04968·x3² − 0.06994·x1·x2 + 0.04868·x2·x4 − 0.03494·x2·x5 − 0.04581·x3·x5).      (7)

Partial derivatives of Eq. (7) with respect to the variables x1, x2, x3, x4, x5 were taken to find the extrema of the Lagrange function, and then equated to zero. The resulting equations together with the constraint Eqs. (4) and (5), also equated to zero, constitute a system of equations; by solving it, all the variables x (x1, x2, x3, x4, x5) were expressed in terms of λ2 and λ3 (denoted L2 and L3 in the program text [21]). Next, the values x1…x5 were inserted into the main equation and the constraint equations, and then into the Lagrange function. This part of the calculations was performed in the Maple 5 software environment (Fig. 1).


Fig. 1. Maple software environment.

Two small programs were written to find Lagrange coefficients in the MATLAB software environment (Fig. 2).

Fig. 2. MATLAB software environment.


The results of the calculations of the variables x1…x5, the coefficients λ2 and λ3 and the resulting value of the Lagrange function are presented in Table 2.

Table 2. Results of calculating the variables included in the general Lagrange function.

N   L      x1        x2       x3        x4        x5        λ2    λ3
1   0.35   −0.0872   0.0528   −1.2677   −0.0067   −1.6901   −1    1
2   0.68   −0.1258   0.1496   0.4461    0.0435    0.1184    1     −1

Analysis of Table 2 shows that the best value of the Lagrange function is given in the first row of the table. However, these values are not applicable, because the factors x3 and x5 are outside the scope of the research. Therefore, the values of row 2 are taken as the optimal values. The Lagrange compromise model (8) was obtained by substituting the optimal values:

L = 0.032905·x1² + 0.032466·x2² − 0.025711·x3² + 0.033445·x4² + 0.133·x5² − 0.004035·x1 + 0.075585·x1·x2 + 0.003715·x1·x3 + 0.003304·x1·x4 − 0.0067131·x1·x5 + 0.007213·x2·x3 − 0.040062·x2·x4 + 0.073386·x2·x5 + 0.00977963·x3·x4 + 0.01081·x3·x5 + 0.001259·x4·x5 + 0.6788 − 0.010373·x2 + 0.02062·x3 − 0.001013·x4 − 0.048202·x5.      (8)

Graphical dependencies of the Lagrange function are shown in Fig. 3. They are constructed using Eq. (9); the factors not shown on the graph are fixed at zero in the function (8). Figure 3 shows the surface plot of the Lagrange function depending on the variables x1 and x5, which is obtained using Eq. (9):

Fig. 3. Graph of the Lagrange function dependence on variables x1 and x5


L(x1, x5) = 0.6788 − 0.004035·x1 − 0.048202·x5 + 0.032905·x1² + 0.133·x5² − 0.0067131·x1·x5.      (9)
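Since all coefficients of (9) are given, the surface shown in Fig. 3 can be re-evaluated numerically; the sketch below does so over the coded factor range [−1, 1] (the grid resolution is an arbitrary choice, and a plotting call such as matplotlib's plot_surface could then draw a figure similar to Fig. 3).

import numpy as np

def L(x1, x5):
    # Compromise function L(x1, x5) from Eq. (9)
    return (0.6788 - 0.004035 * x1 - 0.048202 * x5
            + 0.032905 * x1**2 + 0.133 * x5**2 - 0.0067131 * x1 * x5)

x1, x5 = np.meshgrid(np.linspace(-1, 1, 41), np.linspace(-1, 1, 41))
Z = L(x1, x5)
print(Z.min(), Z.max())          # range of L over the studied coded region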

5 Conclusions

A compromise solution to this problem was found in this study. The extremum of the «transmission transparency degree» function was found using Lagrange multipliers. A compromise model was obtained that expresses the influence of the main EDM parameters on the «transmission transparency degree», and the values of the factors included in the resulting Lagrange function were found. The value of the desired function obtained as the solution of the compromise problem is L = 0.68; the corresponding factor values included in the Lagrange function are x1 = −0.1258, x2 = 0.1496, x3 = 0.4461, x4 = 0.0435, x5 = 0.1184. The natural values are found using the formulas described in detail in [14]; they are: S_th = 2.285·10⁻⁴ m², V_hpa = 4.030·10⁻³ m³, P_a = 4.446·10⁵ Pa, J_th = 4.740·10⁻³ kg·m², f = 0.940 Hz.

Acknowledgments. The team of authors expresses its gratitude for the organization of the conference ICO'2020, Thailand, and personally to Dr. Pandian Vasant. The authors are grateful to the anonymous referees for their helpful comments.

References 1. Li, B., Sun, D., Hu, M., Zhou, X., Liu, J., Wang, D.: Coordinated control of gear shifting process with multiple clutches for power-shift transmission. Mech. Mach. Theory 140, 274– 291 (2019). https://doi.org/10.1016/j.mechmachtheory.2019.06.009 2. Chen, X., Hu, Q., Xu, Z., Zhu, C.: Numerical modeling and dynamic characteristics study of coupling vibration of multistage face gearsplanetary transmission. Mech. Sci. 10, 475–495 (2019). https://doi.org/10.5194/ms-10-475-2019 3. Kirchner, M., Eberhard, P.: Simulation model of a gear synchronisation unit for application in a real-time HiL environment. Veh. Syst. Dyn. 55(5), 668–680 (2017). https://doi.org/10. 1080/00423114.2016.1277025 4. Guercioni, G.R., Vigliani, A.: Gearshift control strategies for hybrid electric vehicles: a comparison of powertrains equipped with automated manual transmissions and dual-clutch transmissions. Proc. Inst. Mech. Eng. Part D J. Autom. Eng. 233(11), 2761–2779 (2019). https://doi.org/10.1177/0954407018804120 5. Zhu, S., Xu, G., Tkachev, A., Wang, L., Zhang, N.: Comparison of the road-holding abilities of a roll-plane hydraulically interconnected suspension system and an anti-roll bar system. Proc. Inst. Mech. Eng. Part D J. Autom. Eng. 231(11), 1540–1557 (2016). https://doi.org/10. 1177/0954407016675995


6. Sau, J., Monteil, J., Bouroche, M.: State-space linear stability analysis of platoons of cooperative vehicles. Transportmetrica B Transp. Dyn. 1–26 (2017).https://doi.org/10.1080/ 21680566.2017.1308846 7. Kim, J.: Design of a compact 18-speed epicyclic transmission for a personal mobility vehicle. Int. J. Automot. Technol. 17(6), 977–982 (2016). https://doi.org/10.1007/s12239016-0095-9 8. Park, T., Lee, H.: Optimal supervisory control strategy for a transmission-mounted electric drive hybrid electric vehicle. Int. J. Automot. Technol. 20(4), 663–677 (2019). https://doi. org/10.1007/s12239-019-0063-2 9. Chen, S., Zhang, B., Li, B., Zhang, N.: Dynamic characteristics analysis of vehicle incorporating hydraulically interconnected suspension system with dual accumulators. Shock Vib. 2018, 1–5 (2018). https://doi.org/10.1155/2018/6901423 10. Ding, F., Zhang, N., Liu, J., Han, X.: Dynamics analysis and design methodology of rollresistant hydraulically interconnected suspensions for tri-axle straight trucks. J. Franklin Inst. 353(17), 4620–4651 (2016). https://doi.org/10.1016/j.jfranklin.2016.08.016 11. Kuznetsov, N.K., Iov, I.A., Iov, A.A.: Reducing of dynamic loads of excavator actuators. In: Journal of Physics: Conference Series, vol. 1210, no. 1, p. 012075. IOP Publishing (2019). https://doi.org/10.1088/1742-6596/1210/1/012075 12. Ziyadi, M., Ozer, H., Kang, S., Al-Qadi, I.L.: Vehicle energy consumption and an environmental impact calculation model for the transportation infrastructure systems. J. Clean. Prod. 174, 424–436 (2018). https://doi.org/10.1016/j.jclepro.2017.10.292 13. Melikov, I., Kravchenko, V., Senkevich, S., Hasanova, E., Kravchenko, L.: Traction and energy efficiency tests of oligomeric tires for category 3 tractors. In: IOP Conference Series: Earth and Environmental Science, vol. 403, p. 012126 (2019). https://doi.org/10.1088/17551315/403/1/012126 14. Senkevich, S., Kravchenko, V., Duriagina, V., Senkevich, A., Vasilev, E.: Optimization of the parameters of the elastic damping mechanism in class 1, 4 tractor transmission for work in the main agricultural operations. In: Advances in Intelligent Systems and Computing, pp. 168–177 (2018). https://doi.org/10.1007/978-3-030-00979-3_17 15. Senkevich, S.E., Sergeev, N.V., Vasilev, E.K., Godzhaev, Z.A., Babayev, V.: Use of an elastic-damping mechanism in the tractor transmission of a small class of traction (14 kN): Theoretical and Experimental Substantiation. In: Handbook of Advanced Agro-Engineering Technologies for Rural Business Development, pp. 149–179. IGI Global, Hershey (2019). https://doi.org/10.4018/978-1-5225-7573-3.ch006 16. Senkevich, S., Duriagina, V., Kravchenko, V., Gamolina, I., Pavkin, D.: Improvement of the numerical simulation of the machine-tractor unit functioning with an elastic-damping mechanism in the tractor transmission of a small class of traction (14 kN). In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing and Optimization. ICO 2019. Advances in Intelligent Systems and Computing, vol. 1072, pp. 204–213. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-33585-4_20 17. Senkevich, S.E., Lavrukhin, P.V., Senkevich, A.A., Ivanov, P.A., Sergeev, N.V.: Improvement of traction and coupling properties of the small class tractor for grain crop sowing by means of the hydropneumatic damping device. In: Kharchenko, V., Vasant, P. (eds.) Handbook of Research on Energy-Saving Technologies for EnvironmentallyFriendly Agricultural Development, pp. 1–27. IGI Global, Hershey (2020). 
https://doi.org/10.4018/978-1-5225-9420-8.ch001


18. Senkevich, S., Kravchenko, V., Lavrukhin, P., Ivanov, P., Senkevich, A.: Theoretical study of the effect of an elastic-damping mechanism in the tractor transmission on a machinetractor unit performance while sowing. In: Handbook of Research on Smart Computing for Renewable Energy and Agro-Engineering, pp. 423–463. IGI Global, Hershey (2020). https:// doi.org/10.4018/978-1-7998-1216-6.ch017 19. Nocedal, J., Wright, S.: Numerical Optimization, p. 664. Springer, Heidelberg (2006) 20. Härdle, W., Simar, L.: Applied Multivariate Statistical Analysis, p. 580. Springer, Berlin (2015) 21. Malthe-Sorenssen, A.: Elementary Mechanics Using Matlab: A Modern Course Combining Analytical and Numerical Techniques, p. 590. Springer, Heidelberg (2015)

Multiobjective Lévy-Flight Firefly Algorithm for Multiobjective Optimization Somchai Sumpunsri, Chaiyo Thammarat, and Deacha Puangdownreong(&) Department of Electrical Engineering, Faculty of Engineering, Southeast Asia University, 19/1 Petchkasem Road, Nongkhangphlu, Nonghkaem, Bangkok 10160, Thailand [email protected], {chaiyot,deachap}@sau.ac.th

Abstract. The firefly algorithm (FA) was first proposed during 2008–2009 as one of the powerful population-based metaheuristic optimization techniques for solving continuous and combinatorial optimization problems. The FA has been proved and applied to various real-world problems, mostly in a single-objective optimization manner. However, many real-world problems are typically formulated as multiobjective optimization problems with complex constraints. In this paper, the multiobjective Lévy-flight firefly algorithm (mLFFA) is developed for multiobjective optimization. The proposed mLFFA is validated against four standard multiobjective test functions to demonstrate its effectiveness. The simulation results show that the proposed mLFFA algorithm is more efficient than well-known algorithms from the literature, including the vector evaluated genetic algorithm (VEGA), the non-dominated sorting genetic algorithm II (NSGA-II), differential evolution for multiobjective optimization (DEMO) and the multiobjective multipath adaptive tabu search (mMATS).

Keywords: Lévy-flight firefly algorithm · Metaheuristic optimization search techniques · Multiobjective optimization

1 Introduction

In the optimization context, many real-world optimization problems usually consist of many objectives which conflict with each other [1, 2]. This leads to a trade-off phenomenon among the objectives of interest. It also makes multiobjective problems much more difficult and complex than single-objective problems. A multiobjective problem often possesses multiple optimal solutions (or non-dominated solutions) forming the so-called Pareto front [1, 2]. The challenge is how to obtain a smooth Pareto front containing a set of optimal solutions for all objective functions. Following the literature, multiobjective optimization problems can be successfully solved by some powerful metaheuristic optimization search techniques, for example the vector evaluated genetic algorithm (VEGA) [3], the non-dominated sorting genetic algorithm II (NSGA-II) [4] and differential evolution for multiobjective optimization (DEMO) [5].



During 2008–2009, the firefly algorithm (FA) was first proposed by Yang [6], based on the flashing behavior of fireflies and on random numbers drawn from the uniform distribution for generating feasible solutions. The FA has been widely applied to solving real-world problems, such as industrial optimization, image processing, antenna design, business optimization, civil engineering, robotics, the semantic web, chemistry, meteorology, wireless sensor networks and so on [7, 8]. Afterwards, in 2010, the Lévy-flight firefly algorithm (LFFA) was proposed by Yang [9]. The LFFA utilizes random numbers drawn from the Lévy distribution to generate new solutions. The LFFA was tested and its results outperformed the genetic algorithm (GA) and particle swarm optimization (PSO). The state of the art and applications of the original FA and LFFA algorithms are reported in [7–9]. In early 2020, the multiobjective Lévy-flight firefly algorithm (mLFFA) was proposed for solving multiobjective optimization problems in particular [10]. In this paper, the mLFFA is applied to solve well-known standard multiobjective optimization problems. After the introduction given in Sect. 1, the original FA and LFFA algorithms are briefly described and the proposed mLFFA algorithm is illustrated in Sect. 2. The four standard multiobjective test functions used in this paper are detailed in Sect. 3. Results and discussions of the performance evaluation are provided in Sect. 4. Conclusions follow in Sect. 5.

2 Multiobjective Lévy-Flight Firefly Algorithm

In this section, the original FA and LFFA algorithms are briefly described, and the proposed mLFFA algorithm is then illustrated.

2.1 Original FA Algorithm

The development of the original FA algorithm by Yang [6] is based on the flashing behavior of fireflies, generated by a process of bioluminescence for attracting partners and prey. The FA is developed from the three following rules.

Rule-1: All fireflies are assumed to be unisex; a firefly will be attracted to others regardless of their sex.
Rule-2: Attractiveness is proportional to brightness, and both attractiveness and brightness decrease as distance increases. The less bright firefly will move towards the brighter one; if there is no brighter one, it will move randomly.
Rule-3: The brightness of each firefly is evaluated by the objective function of interest.

The light intensity I of the FA will vary according to the inverse square law of the distance r, I = Is/r2, where Is is the intensity at the source. For a fixed light absorption coefficient, the light intensity I varies with the distance r as stated in (1), where I0 is the original light intensity. The attractiveness b will vary according to the distance r as stated in (2), where b0 is the attractiveness at r = 0. From (1) and (2), the scaling factor

Multiobjective Lévy-Flight Firefly Algorithm

147

$\gamma$ is defined as the light absorption coefficient. The distance $r_{ij}$ between two fireflies $i$ and $j$ at their locations $x_i$ and $x_j$ can be calculated by (3). In the FA algorithm, a new solution $x^{t+1}$ is obtained from an old solution $x^t$ as stated in (4), where $\varepsilon_i$ is a vector of random numbers drawn from a uniform distribution. The randomization parameter $\alpha_t$ can be adjusted as stated in (5), where $\alpha_0$ is the initial randomness scaling factor. The original FA algorithm can be described by the pseudo code shown in Fig. 1.

$I = I_0 e^{-\gamma r^2}$  (1)

$\beta = \beta_0 e^{-\gamma r^2}$  (2)

$r_{ij} = \| x_i - x_j \| = \sqrt{\sum_{k=1}^{d} (x_{i,k} - x_{j,k})^2}$  (3)

$x_i^{t+1} = x_i^t + \beta_0 e^{-\gamma r_{ij}^2} (x_j^t - x_i^t) + \alpha_t \varepsilon_i^t$  (4)

$\alpha_t = \alpha_0 \delta^t, \quad 0 < \delta < 1$  (5)

Fig. 1. Pseudo code of the original FA algorithm:
  Initialize: the objective function f(x), x = (x1,…,xd)T
  Generate initial population of fireflies xi (i = 1, 2,…,n)
  Light intensity Ii at xi is determined by f(x)
  Define light absorption coefficient γ
  while (Gen ≤ Max_Generation)
    for i = 1 : n (all n fireflies)
      for j = 1 : i (all n fireflies)
        if (Ij > Ii)
          Move firefly i towards j in d-dimension via Lévy-flight distributed random
        end if
        Attractiveness varies with distance r via exp[-γr]
        Evaluate new solutions and update light intensity
      end for j
    end for i
    Rank the fireflies and find the current best x*
  end while
  Report the best solution found x*
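As a minimal illustration of the movement rule in (4), the Python sketch below moves one firefly towards a brighter one; it is an assumption-level illustration (the function and parameter names `fa_move`, `beta0`, `gamma`, `alpha` are ours, not from the paper), not the authors' implementation.

```python
import numpy as np

def fa_move(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.25, rng=None):
    """One firefly move following Eq. (4): attraction towards a brighter
    firefly j plus a uniformly distributed random perturbation."""
    rng = np.random.default_rng() if rng is None else rng
    r2 = np.sum((x_i - x_j) ** 2)                 # squared distance r_ij^2, Eq. (3)
    beta = beta0 * np.exp(-gamma * r2)            # attractiveness, Eq. (2)
    eps = rng.uniform(-0.5, 0.5, size=x_i.shape)  # random vector epsilon_i
    return x_i + beta * (x_j - x_i) + alpha * eps
```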

2.2 LFFA Algorithm

The LFFA is one of the modified versions of the original FA [9]. The movement of a firefly in (4) is changed to (6) by using random numbers drawn from the Lévy distribution, where the product ⊕ stands for entrywise multiplication and sign[rand − 1/2], with rand ∈ [0, 1], provides a random direction. In (6), the symbol Lévy(λ) represents the Lévy distribution as stated in (7). The step length $s$ can be calculated by (8), where $u$ and $v$ are drawn from the normal distributions in (9). The standard deviations of $u$ and $v$ are expressed in (10), where $\Gamma$ is the standard Gamma function.

$x_i^{t+1} = x_i^t + \alpha\,\mathrm{sign}[\mathrm{rand} - 1/2] \oplus \text{Lévy}(\lambda)$  (6)

$\text{Lévy} \sim u = t^{-\lambda}, \quad 1 < \lambda \le 3$  (7)

$s = u / |v|^{1/\beta}$  (8)

$u \sim N(0, \sigma_u^2), \quad v \sim N(0, \sigma_v^2)$  (9)

$\sigma_u = \left[ \dfrac{\Gamma(1+\beta)\,\sin(\pi\beta/2)}{\Gamma((1+\beta)/2)\,\beta\,2^{(\beta-1)/2}} \right]^{1/\beta}, \quad \sigma_v = 1$  (10)
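The step length in (8)–(10) is commonly generated with Mantegna's algorithm; the following Python sketch is a hedged illustration of that standard construction (the function name and defaults are ours, not from the paper).

```python
import math
import numpy as np

def levy_step(beta=1.5, size=1, rng=None):
    """Draw Levy-distributed step lengths s = u / |v|^(1/beta), Eqs. (8)-(10)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)   # u ~ N(0, sigma_u^2), Eq. (9)
    v = rng.normal(0.0, 1.0, size)       # v ~ N(0, 1), i.e. sigma_v = 1, Eq. (10)
    return u / np.abs(v) ** (1 / beta)
```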

2.3 mLFFA Algorithm

The mLFFA is represented by the pseudo code shown in Fig. 2 [10]. The multiobjective function F(x) is formulated in (11), where $f_1(x), f_2(x), \ldots, f_n(x)$ are the objective functions, $g_j(x)$ are the inequality constraints and $h_k(x)$ are the equality constraints. F(x) in (11) is simultaneously minimized by the mLFFA subject to $g_j(x)$ and $h_k(x)$. The best solution is updated in every iteration; if it is a non-dominated solution, it is sorted and stored in the Pareto optimal set P* as stated in (12). After the search terminates, the solutions stored in P* are used to form the Pareto front PF* as expressed in (13), where $s$ denotes the solutions stored in the set $S$ and $s^*$ the non-dominated solutions. All non-dominated solutions appearing on PF* are the optimal solutions.

$\min F(x) = \{ f_1(x), f_2(x), \ldots, f_n(x) \}$, subject to $g_j(x) \le 0,\ j = 1, \ldots, m$, and $h_k(x) = 0,\ k = 1, \ldots, p$  (11)

$P^* = \{ x^* \in F \mid \nexists\, x \in F : F(x) \preceq F(x^*) \}$  (12)

$PF^* = \{ s^* \in S \mid \nexists\, s \in S : s \preceq s^* \}$  (13)
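A small Python sketch of the non-dominated filtering implied by (12)–(13) for a minimization problem (the helper names are illustrative assumptions, not part of the paper):

```python
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization)."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def pareto_set(objective_values):
    """Indices of non-dominated solutions, i.e. the set P* of Eq. (12)."""
    keep = []
    for i, fi in enumerate(objective_values):
        if not any(dominates(fj, fi)
                   for j, fj in enumerate(objective_values) if j != i):
            keep.append(i)
    return keep
```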


Fig. 2. Pseudo code of the mLFFA algorithm:
  Multiobjective function: F(x) = {f1(x), f2(x),…,fn(x)}, x = (x1,…,xd)T
  Initialize LFFA1,…,LFFAk and the Pareto optimal set P*
  Generate initial population of fireflies xi (i = 1, 2,…,n)
  Light intensity Ii at xi is determined by F(xi)
  Define light absorption coefficient γ
  while (Gen ≤ Max_Generation)
    for z = 1 : k (all k LFFAs)
      for i = 1 : n (all n fireflies)
        for j = 1 : i (all n fireflies)
          if (Ij > Ii)
            Move firefly i towards j in d-dimension via Lévy-flight distributed random
          end if
          Attractiveness varies with distance r via exp[-γr]
          Evaluate new solutions and update light intensity
        end for j
      end for i
    end for z
    Rank the fireflies and find the current best x*
    Sort and find the current Pareto optimal solutions
  end while
  Pareto optimal solutions are found and the Pareto front PF* is formed.

3 Multiobjective Test Functions

To demonstrate its effectiveness, the mLFFA is evaluated against several multiobjective test functions. In this work, four widely used standard multiobjective functions, ZDT1–ZDT4, providing a wide range of properties in terms of Pareto front and Pareto optimal set, are employed [11, 12]. ZDT1 has a convex front as stated in (14), where d is the number of dimensions. ZDT2, stated in (15), has a non-convex front, while ZDT3, with a discontinuous front, is expressed in (16); g and $x_i$ in ZDT2 and ZDT3 are the same as in ZDT1. ZDT4, stated in (17), also has a convex front but is more challenging. In the evaluation process, the error $E_f$ between the estimated Pareto front $PF_e$ and its corresponding true front $PF_t$ is defined in (18), where N is the number of solution points.

ZDT1: $f_1(x) = x_1$, $f_2(x) = g\,(1 - \sqrt{f_1/g})$, $g = 1 + 9\sum_{i=2}^{d} x_i/(d-1)$, $x_i \in [0, 1]$, $i = 1, \ldots, 30$  (14)


ZDT2: $f_1(x) = x_1$, $f_2(x) = g\,(1 - (f_1/g)^2)$  (15)

ZDT3: $f_1(x) = x_1$, $f_2(x) = g\,(1 - \sqrt{f_1/g} - (f_1/g)\sin(10\pi f_1))$  (16)

ZDT4: $f_1(x) = x_1$, $f_2(x) = g\,(1 - \sqrt{f_1/g})$, $g = 1 + 10(d-1) + \sum_{i=2}^{d} (x_i^2 - 10\cos(4\pi x_i))$, $x_i \in [0, 1]$, $i = 1, \ldots, 30$  (17)

$E_f = \| PF_e - PF_t \| = \sum_{j=1}^{N} (PF_{e_j} - PF_t)^2$  (18)
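To make the test-function definitions concrete, here is a hedged Python sketch of ZDT1 from (14) and of the front error in (18), interpreting $PF_t$ in (18) as the nearest true-front point for each estimated point; the function names are ours, not from the paper.

```python
import numpy as np

def zdt1(x):
    """ZDT1, Eq. (14): returns (f1, f2) for a decision vector x in [0, 1]^d."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * np.sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return f1, f2

def front_error(pf_estimated, pf_true):
    """Error Ef of Eq. (18): sum of squared deviations between estimated
    front points and their closest points on the true front."""
    pf_e, pf_t = np.asarray(pf_estimated), np.asarray(pf_true)
    d2 = ((pf_e[:, None, :] - pf_t[None, :, :]) ** 2).sum(axis=2)
    return float(d2.min(axis=1).sum())
```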

4 Results and Discussions

The proposed mLFFA algorithm was coded in MATLAB R2018b and run on an Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz with 4.0 GB RAM. The search parameters of each LFFA in the mLFFA are set according to Yang's recommendations [6, 9], i.e. the number of fireflies n = 30, α0 = 0.25, β0 = 1, λ = 1.50 and γ = 1. These parameter values are sufficient for most optimization problems because the LFFA algorithm is very robust (not very sensitive to parameter adjustment) [6, 9]. The termination criterion (TC) can be either a given tolerance or a fixed number of generations. From the simulations, a fixed number of generations was found not only easy to implement but also suitable for comparing the closeness of the Pareto fronts of the test functions. Therefore, Max_Gen = 2,000 is set as the TC for all test functions.

For comparison, the results obtained by the proposed mLFFA on all four standard multiobjective test functions are compared with those obtained by well-known algorithms from the literature, i.e. VEGA [3], NSGA-II [4], DEMO [5] and the multiobjective multipath adaptive tabu search (mMATS) [13]. The performance of all algorithms is measured via the error Ef stated in (18), and for all algorithms a fixed number of generations/iterations of 2,000 (Max_Gen) is set as the TC. The results obtained for all test functions are summarized in Tables 1 and 2, and the estimated Pareto fronts and the true fronts of ZDT1–ZDT4 are depicted in Figs. 3, 4, 5 and 6, respectively. Figures 3, 4, 5 and 6 show that the mLFFA can satisfactorily provide a Pareto front containing Pareto optimal solutions very close to the true front of each multiobjective test function. Referring to Tables 1 and 2, the mLFFA shows results superior to the other algorithms, with lower Ef and less search time consumed.

Fig. 3. Pareto front of ZDT1 (estimated front from the mLFFA against the true Pareto front; f2(x) versus f1(x)).

Fig. 4. Pareto front of ZDT2 (estimated front from the mLFFA against the true Pareto front; f2(x) versus f1(x)).

Table 1. Error Ef between PFe and PFt.

Algorithms  ZDT1      ZDT2      ZDT3      ZDT4
VEGA        2.79e-02  2.37e-03  3.29e-01  4.87e-01
NSGA-II     3.33e-02  7.24e-02  1.14e-01  3.38e-01
DEMO        2.08e-03  7.55e-04  2.18e-03  2.96e-01
mMATS       1.24e-03  2.52e-04  1.07e-03  1.02e-01
mLFFA       1.20e-03  2.48e-04  1.01e-03  1.01e-01

Fig. 5. Pareto front of ZDT3 (estimated front from the mLFFA against the true Pareto front; f2(x) versus f1(x)).

Fig. 6. Pareto front of ZDT4 (estimated front from the mLFFA against the true Pareto front; f2(x) versus f1(x)).

Table 2. Search time consumed (sec.).

Algorithms  ZDT1    ZDT2    ZDT3    ZDT4
VEGA        125.45  132.18  121.40  122.24
NSGA-II     126.82  145.63  158.27  165.51
DEMO        89.31   98.44   102.32  120.86
mMATS       65.54   72.33   82.47   78.52
mLFFA       52.42   65.18   71.53   64.78


5 Conclusions

The multiobjective Lévy-flight firefly algorithm (mLFFA) has been presented in this paper for global optimization. Based on the original FA and LFFA, the mLFFA has been tested against four standard multiobjective test functions to demonstrate its effectiveness for multiobjective optimization, and details of the performance evaluation have been reported. The simulation results show that the mLFFA performs more efficiently than the well-known VEGA, NSGA-II, DEMO and mMATS algorithms. This indicates that the mLFFA is one of the most effective metaheuristic optimization search techniques for solving global optimization problems. In future research, the proposed mLFFA algorithm will be applied to various real-world engineering optimization problems.

References
1. Zakian, V.: Control Systems Design: A New Framework. Springer, London (2005)
2. Talbi, E.G.: Metaheuristics: From Design to Implementation. Wiley, Hoboken (2009)
3. Schaffer, J.D.: Multiple objective optimization with vector evaluated genetic algorithms. In: The 1st International Conference on Genetic Algorithms, pp. 93–100 (1985)
4. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6, 182–197 (2002)
5. Robič, T., Filipič, B.: DEMO: differential evolution for multiobjective optimization. Lecture Notes in Computer Science, vol. 3410, pp. 520–533 (2005)
6. Yang, X.S.: Firefly algorithms for multimodal optimization. In: Stochastic Algorithms, Foundations and Applications. Lecture Notes in Computer Science, vol. 5792, pp. 169–178 (2009)
7. Fister, I., Fister Jr., I., Yang, X.S., Brest, J.: A comprehensive review of firefly algorithms. Swarm Evol. Comput. 13, 34–46 (2013)
8. Fister, I., Yang, X.S., Fister, D., Fister Jr., I.: Firefly algorithm: a brief review of the expanding literature. In: Yang, X.S. (ed.) Cuckoo Search and Firefly Algorithm, pp. 347–360 (2014)
9. Yang, X.S.: Firefly algorithm, Lévy flights and global optimization. In: Research and Development in Intelligent Systems XXVI, pp. 209–218. Springer, Heidelberg (2010)
10. Sumpunsri, S., Puangdownreong, D.: Multiobjective Lévy-flight firefly algorithm for optimal PIDA controller design. Int. J. Innov. Comput. Inf. Control 16(1), 173–187 (2020)
11. Zitzler, E., Thiele, L.: Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans. Evol. Comput. 3, 257–271 (1999)
12. Zitzler, E., Deb, K., Thiele, L.: Comparison of multiobjective evolutionary algorithms: empirical results. Evol. Comput. 8, 173–195 (2000)
13. Puangdownreong, D.: Multiobjective multipath adaptive tabu search for optimal PID controller design. Int. J. Intell. Syst. Appl. 7(8), 51–58 (2015)

Cooperative FPA-ATS Algorithm for Global Optimization

Thitipong Niyomsat, Sarot Hlangnamthip, and Deacha Puangdownreong

Department of Electrical Engineering, Faculty of Engineering, Southeast Asia University, 19/1 Petchkasem Road, Nongkhangphlu, Nonghkaem, Bangkok 10160, Thailand
[email protected], {sarotl,deachap}@sau.ac.th

Abstract. This paper presents a novel cooperative metaheuristic algorithm combining the flower pollination algorithm (FPA) and the adaptive tabu search (ATS), named the cooperative FPA-ATS. The proposed cooperative FPA-ATS has two stages. First, it searches for feasible solutions over the entire search space by using the FPA's explorative property. Second, it searches for the global solution by using the ATS's exploitative property. The cooperative FPA-ATS is tested against ten multimodal benchmark functions for global minimum finding in order to assess its search performance. By comparison with the FPA and the ATS, it was found that the cooperative FPA-ATS is significantly more efficient than either algorithm alone.

Keywords: Cooperative FPA-ATS algorithm · Flower pollination algorithm · Adaptive tabu search · Global optimization

1 Introduction

Metaheuristic algorithms have been continually developed to solve combinatorial and numeric optimization problems for over five decades [1]. Metaheuristic optimization search techniques can be characterized by two main properties, i.e. exploration (or diversification) and exploitation (or intensification) [1, 2]. The explorative property is the ability to generate diverse solutions that explore the search space on the global scale, whereas the exploitative property is the ability to generate intensive solutions on the local scale of the search space. With these two properties, metaheuristics are divided into two types, i.e. population-based and single-solution-based metaheuristic algorithms [2]. Population-based metaheuristic algorithms have a strong explorative property, whereas single-solution-based metaheuristic algorithms have a strong exploitative property [1, 2].

From the literature, one of the most efficient population-based metaheuristic algorithms is the flower pollination algorithm (FPA) proposed by Yang in 2012 [3]. The FPA has been successfully applied to many real-world problems such as process control, image processing, antenna design, wireless networks, robotics and automatic control [3, 4]. Moreover, the FPA algorithm has been proven to possess global convergence properties [5]. On the other hand, one of the most powerful single-solution-based metaheuristic algorithms is the


adaptive tabu search (ATS) proposed by Sujitjorn, Kulworawanichpong, Puangdownreong and Areerak in 2006 [6]. The ATS was developed from the original TS proposed by Glover in 1989 [7, 8]. The ATS has been widely used to solve several real-world optimization problems such as model identification, control system design, power system design and signal processing [6]. In addition, the ATS algorithm has been proven to converge globally [6, 9, 10]. To combine the two main properties, a novel cooperative metaheuristic algorithm called the cooperative FPA-ATS is proposed in this paper. The proposed cooperative FPA-ATS combines the FPA and the ATS in a cooperative search for global optimization.

This paper consists of five sections. After the introduction in Sect. 1, the original FPA and ATS algorithms are briefly described and the proposed cooperative FPA-ATS algorithm is illustrated in Sect. 2. The ten standard multimodal benchmark functions used in this paper are detailed in Sect. 3. Results and discussions of the performance tests of the FPA, ATS and cooperative FPA-ATS algorithms against the ten selected multimodal benchmark functions are given in Sect. 4. Finally, conclusions are drawn in Sect. 5.

2 Cooperative FPA-ATS Algorithm

In this section, the original FPA and ATS algorithms are briefly described. Then, the proposed cooperative FPA-ATS algorithm is illustrated as follows.

2.1 FPA Algorithm

The FPA algorithm mimics the pollination of flowering plants in nature, which can be divided into cross (or global) pollination and self (or local) pollination [3]. For cross-pollination, pollen is transferred by a biotic pollinator. In this case, the new position (or solution) $x^{t+1}$ is calculated by (1), where $x^t$ is the current position, $g^*$ is the current best solution, $L$ is a random number drawn from the Lévy flight distribution stated in (2) and $\Gamma(\lambda)$ is the standard gamma function. For self-pollination, pollen is transferred by an abiotic pollinator. In this case, $x^{t+1}$ is calculated by (3), where $\varepsilon$ is a random number drawn from the uniform distribution expressed in (4). The proximity probability $p$ is used for switching between cross-pollination and self-pollination. The FPA algorithm can be described by the flow diagram shown in Fig. 1, where TC stands for the termination criteria.

$x_i^{t+1} = x_i^t + L\,(x_i^t - g^*)$  (1)

$L \sim \dfrac{\lambda\,\Gamma(\lambda)\,\sin(\pi\lambda/2)}{\pi}\,\dfrac{1}{s^{1+\lambda}}, \quad (s \gg s_0 > 0)$  (2)

$x_i^{t+1} = x_i^t + \varepsilon\,(x_j^t - x_k^t)$  (3)

$\varepsilon(q) = \begin{cases} 1/(b-a), & a \le q \le b \\ 0, & q < a \ \text{or}\ q > b \end{cases}$  (4)
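A compact Python sketch of one FPA generation built from (1)–(4) is shown below; it is a hedged illustration, not the authors' implementation, and it assumes a unit-interval uniform draw for ε and a Mantegna-style approximation for the Lévy step in (2).

```python
import math
import numpy as np

def levy(lam, size, rng):
    """Mantegna-style draw approximating the Levy distribution of Eq. (2)."""
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
             / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / lam)

def fpa_update(pop, g_best, p=0.8, lam=1.5, rng=None):
    """One FPA generation: cross-pollination, Eq. (1), or self-pollination,
    Eq. (3), applied to each pollen grain, switched by the probability p."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        if rng.random() < p:                       # cross (global) pollination
            L = levy(lam, d, rng)
            new_pop[i] = pop[i] + L * (pop[i] - g_best)
        else:                                      # self (local) pollination
            j, k = rng.choice(n, size=2, replace=False)
            eps = rng.random()                     # epsilon ~ uniform, Eq. (4)
            new_pop[i] = pop[i] + eps * (pop[j] - pop[k])
    return new_pop
```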

2.2 ATS Algorithm

The ATS method is based on an iterative neighborhood search approach [6]. It consists of the tabu list (TL), the adaptive-radius (AR) mechanism and the back-tracking (BT) mechanism. The TL records the list of visited solutions. The AR mechanism speeds up the search process by reducing the search radius. Once local entrapment occurs, the BT mechanism is activated, using some solutions collected in the TL to escape from the entrapment and move to a new region of the search space. The ATS algorithm is represented by the flow diagram in Fig. 2; a simplified sketch of one adaptive-radius neighborhood step is given below.
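The sketch below is our own simplified Python illustration of the TL/AR mechanics described above (function and parameter names are assumptions, and the back-tracking step is only indicated in a comment); it is not the authors' code.

```python
import numpy as np

def ats_neighbourhood_step(x_best, f, radius, tabu_list,
                           n_neighbours=20, shrink=0.5, rng=None):
    """Generate neighbours of x_best within the current search radius, skip
    solutions already recorded in the tabu list (TL), and shrink the radius
    (AR) when a better neighbour is found."""
    rng = np.random.default_rng() if rng is None else rng
    candidates = x_best + rng.uniform(-radius, radius,
                                      size=(n_neighbours, x_best.size))
    candidates = [c for c in candidates
                  if not any(np.allclose(c, t) for t in tabu_list)]
    best = min(candidates, key=f, default=x_best)
    if f(best) < f(x_best):
        tabu_list.append(x_best)          # record the visited solution in TL
        return best, radius * shrink      # AR: reduce the radius to intensify
    return x_best, radius                 # on entrapment, BT would reuse TL entries
```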

Fig. 1. Flow diagram of the FPA algorithm (initialize the objective function f(x), x = (x1, x2,…, xd), a population of n flowers (pollen gametes) with random solutions, the best solution g* among the initial population via f(x), and the proximity probability p ∈ [0, 1]).

Fig. 2. Flow diagram of the ATS algorithm (initialize the objective function f(x), x = (x1, x2,…, xd), a search radius R and a weighting search radius α, 0 < α < 1).

Static and Dynamic Switch Probability. The static switch probability p is set to 0.8 because this value works best in most applications [17]. The dynamic switch probability $p_{n+1}$ in iteration n + 1 is adapted from [14]:

$p_{n+1} = p_n - \lambda\left(\dfrac{N_{iter} - n}{N_{iter}}\right)$,  (3)

where $p_n$ is the value of the switch probability in the previous iteration, $N_{iter}$ is the maximum number of iterations, n is the current iteration, and the scaling factor λ = 1.5. The value of the dynamic switch probability begins at 0.9 and decreases according to Eq. (3). Algorithm 1 shows the flow of the FPA.


Algorithm 1 FPA pseudo code
1: Initialize popsize N
2: Initialize iteration number maxIT
3: Define objective function f(x)
4: Find global best solution g*
5: Define dynamic switch probability function
6: while t < maxIT do
7:   Evaluate dynamic switch probability function
8:   Obtain switch probability p
9:   for i = 1 : N do
10:    if rand < p then
11:      Draw a d-dimensional step vector L
12:      Global pollination: x_i^{t+1} = x_i^t + L(x_i^t − g*)
13:    else
14:      Draw ε from a uniform distribution in [0, 1]
15:      Randomly choose j and k among solutions
16:      Local pollination: x_i^{t+1} = x_i^t + ε(x_j^t − x_k^t)
17:    Evaluate new solutions
18:    Compare with previous solution
19:    Update new solutions if they are better
20: Find the current best solution g*

3.2 Teaching-Learning-Based Optimization Algorithm

The Teaching-Learning-Based Optimization (TLBO) algorithm comprises two phases: the teacher phase, where the learners gain knowledge from the teacher, and the learner phase, where the learners interact and gain knowledge from each other. The advantage of the TLBO is that it requires only non-algorithm-specific parameters (i.e. common parameters) such as the population size and the number of generations [19]. In this particular study, each learner of the population has four subjects, namely the three PID gains and the filter coefficient N. Algorithm 2 shows how the TLBO was implemented in this study. Note that in Algorithm 2, r_i, T_f and M_i represent a random number between 0 and 1, a teaching factor which can either be 0 or 1, and the mean result of the learners in that particular iteration, respectively.

4 Computational Experimental Design

This study used a basic design of experiments (DOE) [20] methodology, namely the 2^k factorial design technique, to compare the effect of two values for each parameter. Table 1 shows these two parameters along with the two values of each.


Algorithm 2 TLBO pseudo code
1: Initialize population N
2: Choose iteration number maxIT
3: Define objective function f(X)
4: Evaluate population and find best learner k*
5: while t < maxIT do
6:   for i = 1 : N do
7:     procedure Teacher Phase
8:       Find difference mean: Δ̄_{k,i} = r_i (X_{k*,i} − T_f M_i)
9:       New solution: X'_{k,i} = X_{k,i} + Δ̄_{k,i}
10:    procedure Learner Phase
11:      Randomly choose learners P and Q s.t. X'_{P,i} ≠ X'_{Q,i}
12:      if X'_{P,i} < X'_{Q,i} then
13:        Update P: X''_{P,i} = X'_{P,i} + r_i (X'_{P,i} − X'_{Q,i})
14:      if X'_{Q,i} < X'_{P,i} then
15:        Update Q: X''_{P,i} = X'_{P,i} + r_i (X'_{Q,i} − X'_{P,i})
16:    Evaluate new solutions
17:    Update new solutions if they are better
18: Find the current best learner k*
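The following Python sketch is a hedged illustration of the two TLBO phases summarised in Algorithm 2 for a minimization problem; it draws the teaching factor from {1, 2}, which is the usual TLBO convention and an assumption here (the paper lists 0 or 1), and all names are ours.

```python
import numpy as np

def tlbo_generation(pop, fitness, rng=None):
    """One TLBO generation (minimization): teacher phase then learner phase."""
    rng = np.random.default_rng() if rng is None else rng
    scores = np.array([fitness(x) for x in pop])
    teacher = pop[np.argmin(scores)]
    mean = pop.mean(axis=0)

    new_pop = pop.copy()
    for i in range(len(pop)):
        # Teacher phase: move towards the teacher relative to the class mean
        tf = rng.integers(1, 3)                      # teaching factor in {1, 2}
        r = rng.random(pop.shape[1])
        candidate = pop[i] + r * (teacher - tf * mean)
        if fitness(candidate) < fitness(new_pop[i]):
            new_pop[i] = candidate

        # Learner phase: learn from a randomly chosen different learner
        j = rng.choice([k for k in range(len(pop)) if k != i])
        r = rng.random(pop.shape[1])
        if fitness(new_pop[i]) < fitness(pop[j]):
            candidate = new_pop[i] + r * (new_pop[i] - pop[j])
        else:
            candidate = new_pop[i] + r * (pop[j] - new_pop[i])
        if fitness(candidate) < fitness(new_pop[i]):
            new_pop[i] = candidate
    return new_pop
```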

These parameter values were chosen considering various factors. A low population size may leave the algorithm with too few population members to find the optimal solution, while a high population size wastes computational resources. A low search space bound may exclude the optimal solution from the search space, while a high search space bound may reduce the probability of the algorithm finding the optimal solution. The detail of each algorithm design is shown in Table 2.

Table 1. 2^k factorial design bounds

Factor                    Level 1  Level 2
Search space upper bound  100      500
Population size           20       60

Meta-Heuristic Algorithms in PID Tuning

255

Table 2. Factorial design AD Population size Search space

5 5.1

1

20

100

2

60

100

3

20

500

4

60

500

Results and Discussions Parameter Optimization

Table 3 shows that the TLBO algorithm, implemented with algorithm design 4 (AD = 4), obtained the lowest fitness value. Furthermore, the lowest mean fitness value was obtained by AD = 4 for all the algorithms, showing that AD = 4 is the best performing for this problem instance when observing the fitness value. Table 3. Fitness value statistics AD Best (Min) Mean

Max

Std Dev

Dynamic 1 2 FPA 3 4

34.7076 34.7114 14.8154 14.8167

35.0040 34.7304 25.4771 21.2140

35.6922 34.7682 34.7096 34.4665

0.343004 0.0139906 9.19568 6.71812

Static FPA

1 2 3 4

34.7128 34.7207 15.4595 16.0833

35.1717 34.8006 29.3366 24.4707

36.8325 34.9912 35.2087 34.5326

0.6085149 0.0871953 8.25672 6.35065

TLBO

1 2 3 4

34.6111 34.5891 14.2111 13.9728

34.6611 34.6242 23.1669 21.0753

34.6753 34.6699 34.4206 34.4200

0.017996 0.0311257 9.247853 8.781703

The results on Table 3 also show that the dynamic FPA generally obtained lower fitness values compared to the static FPA, i.e., the problem benefits from fine-tuning and exploitation in later iterations rather than a constant higher probability of global pollination throughout all iterations. This could also be the reason why the TLBO algorithm obtained the lowest fitness value, as this algorithm works by fine-tuning the solutions at each iteration. Both the FPA and TLBO resulted in a PI controller solution, as the FPA resulted in a low derivative gain, and the TLBO resulted in a low filter coefficient

256

Table 4. Optimal PID parameters and results

Algorithm        AD   KP       KI       KD         N          tR (s)  tS (s)  %OS
Dynamic FPA      1    94.7108  45.0245  0          82.7842    0.0260  0.0389  0.0295
Dynamic FPA      2    99.5786  46.3494  0.0239934  72.0288    0.0258  0.0389  0.0368
Dynamic FPA      3    25.0692  219.082  0          368.749    0.0445  0.3159  13.3413
Dynamic FPA      4    24.9139  278.149  0          500        0.0415  0.2687  16.2584
Static FPA       1    98.8912  47.1230  0          100        0.0258  0.0380  0.0382
Static FPA       2    99.6802  46.182   0          84.5793    0.0257  0.0378  0.0339
Static FPA       3    23.0263  175.111  0          500        0.0497  0.3559  12.4789
Static FPA       4    21.7208  143.351  0          409.259    0.0544  0.3959  11.4223
TLBO             1    95.7066  48.0759  97.7518    0.0138523  0.0259  0.0384  0.0425
TLBO             2    99.2521  53.2571  91.8365    0.287085   0.0251  0.0341  0.0303
TLBO             3    25.1757  25.7884  0          462.389    0.0668  0.1197  0.0135
TLBO             4    26.5995  23.8218  0          455.577    0.0634  0.1163  0.0030
Journal results  N/A  4.5      3.6      0.05       100        0.4422  0.8526  0

1

A D1 A D2

1

A D3

A D2 A D3 A D4

0.9

journal input

Step Response

Step Response

A D4

0.8

A D1

0.95

0.6

0.4

journal input

0.85 0.8 0.75 0.7

0.2 0.65 0

0

0.5

1

Time (s)

(a) Steady-State Response

1.5

0.05

0.1

0.15

0.2

0.25

0.3

0.35

Time (s)

(b) Zoomed in Figure of 1a

Fig. 1. Step response comparing the algorithm design performance for the TLBO

as seen in Table 4. The PI controller solution means that the system is sufficiently damped, as the purpose of the derivative controller is to reduce overshoot and oscillations, i.e. to increase damping. The best performing algorithm design is AD = 2, as seen in Fig. 1 for the TLBO algorithm and Fig. 2 for the FPA, with the static FPA performing best. To find the overall best performing algorithm and algorithm design, the best performing FPA variant (implemented with its best algorithm design) was compared with the TLBO algorithm, to observe whether this problem instance prefers algorithms with algorithm-specific parameters. The algorithm design chosen as best performing for all the algorithms is AD = 2, as it showed a good compromise between the most favourable response and a reasonable fitness value. Since the static FPA (implemented using AD = 2) resulted in the lowest rise and settling times and followed the commanded input most closely, this algorithm was chosen for comparison with the TLBO algorithm. Figure 3 shows that the TLBO algorithm reaches the commanded input faster.

Meta-Heuristic Algorithms in PID Tuning

257

1.2 1.02

D-AD1

D-AD1

D-AD2

1

D-AD2

D-AD3

D-AD3

1

D-AD4

S-A D1

0.8

S-A D2

Step Response

Step Response

D-AD4

S-A D3 S-A D4

0.6

journal input

0.4

S-A D1

0.98

S-A D2 S-A D3

0.96

S-A D4 journal input

0.94 0.92

0.2

0

0.9

0

0.5

1

1.5

0.045

Time (s)

0.05

0.055

0.06

0.065

0.07

0.075

Time (s)

(a) Steady-State Response

(b) Zoomed in Figure of 2a

Fig. 2. Step response comparing the algorithm design performance for the FPA 1.2

TLBO Static FPA input

0.99 0.98 0.97

0.8

Step Response

Step Response

1

TLBO Static FPA input

1

0.6

0.4

0.96 0.95 0.94 0.93 0.92 0.91

0.2

0.9 0

0

0.5

1

Time (s)

(a) Steady-State Response

1.5

0.03 0.035 0.04 0.045 0.05 0.055 0.06 0.065 0.07 0.075

Time (s)

(b) Zoomed-in view of Figure 3a

Fig. 3. The comparison of the step response of the algorithm performance

Thus, the TLBO algorithm, using AD = 2 was chosen to be the best performing for this problem instance. 5.2

Robustness Analysis

Table 5 shows that the increase in population size from AD = 1 to AD = 2 resulted in a decrease in the fitness value for all the algorithms. However, increasing the search space bound, as done from AD = 2 to AD = 3, caused a significant decrease in the fitness value. This is because of the increase in the integral controller gain solutions (KI ) obtained, as seen on Table 6. An increase in this term shows its importance when the input is changed from step to ramp. The lowest fitness values were obtained using AD = 4 for all the algorithms, with the lowest obtained from the TLBO algorithm. Figure 4 shows that the increase in the integral gain in the algorithm designs reduced this steady-state error which resulted in a decreased fitness value.

258

S. Baartman and L. Cheng Table 5. Fitness value statistics AD Best

Mean

Max

Std Dev

Adaptive 1

5.9056

6.0110

6.1584

0.079769

2

5.8152

5.9067

5.9618

0.04944

3

1.8489

2.4535

4.4354

0.7344

4

1.7788 1.8828

2.077

0.07754

Static

1

5.9162

6.04216

6.1049

0.05763

FPA

2

5.86

5.9424

6.0185

0.05194

3

1.7962 2.6258

4.3594

0.88795

4

1.8326

1

5.8078

6.0329

6.2369

0.1189

2

5.7446

5.8962

6.1151

0.09516

3

1.7793

1.7851

1.791

0.003700

4

1.759

1.7761

1.7883 0.007449

FPA

TLBO

1.91394 2.1113 0.08125

Table 6. Algorithm solutions AD KP

KI

Kd

N

Adaptive 1

68.4011 34.2160 13.2017 100

FPA

2

97.7425 0

13.5612 94.1247

3

22.6570 500

0

36.0330

0

165.944

4

25.2946 500

Static

1

59.0788 55.6679 14.3738 89.4286

FPA

2

37.3178 79.6967 13.6054 99.8358

3

25.2989 500

0

84.1660

4

24.0282 500

0

415.586

1

42.9984 66.3752 13.491

2

43.2900 83.3955 13.4509 100

3

24.806

4

25.0514 499.932 0

TLBO

500

0

99.8793 500 358.493

This problem favoured the dynamic FPA instead of the static FPA, as the dynamic FPA gave the lowest fitness value from the two. This means this problem instance is not multi-modal and requires fine-tuning of a specific region in the search space. This experiment resulted in AD = 4 performing best with the lowest mean fitness value and best response behaviour for all the algorithms. Because the dynamic FPA gave the lowest fitness function value when compared to the static FPA, the dynamic FPA was chosen for comparison, along with the TLBO using the algorithms ramp response shown in Fig. 6. This Figure illustrates that though the TLBO algorithm resulted in the lowest fitness value, the best performing algorithm is the dynamic FPA with it following the ramp input closest. The low fitness value from the TLBO algorithm, when compared to the dynamic FPA, may be a result of the dynamic FPA taking slightly longer to settle. Thus, there is no best performing algorithm in this instance as the

Meta-Heuristic Algorithms in PID Tuning 1.6

0.068

A D1 A D2

1.4

A D4

Step Response

Step Response

1 0.8 0.6

0.06

0.056 0.054

1

A D4

0.058

0.2

0.5

A D3 input

0.062

0.4

0

A D2

0.064

input

0

A D1

0.066

A D3

1.2

259

1.5

0.052 0.054 0.056 0.058 0.06 0.062 0.064 0.066 0.068 0.07 0.072

Time (s)

Time (s)

(a) Steady-State Response

(b) Zoomed in Figure of 4a

Fig. 4. Ramp response comparing the algorithm design performance for the TLBO 1.6

0.075

D-AD1 D-AD2

1.4

D-AD3 D-AD4

1.2

Step Response

Step Response

S-A D2 S-A D3 S-A D4

0.8

D-AD2 D-AD3 D-AD4

0.065

S-A D1

1

D-AD1

0.07

input

0.6

S-A D1 S-A D2

0.06

S-A D3 S-A D4

0.055

input

0.05 0.4 0.045 0.2 0.04 0

0

0.5

1

1.5

0.055

0.06

0.065

Time (s)

0.07

0.075

0.08

0.085

Time (s)

(a) Steady-State Response

(b) Zoomed in Figure of 5a

Fig. 5. Ramp response comparing the algorithm design performance for the FPA 1.6

0.887

input TLBO Dynamic FPA

1.4

input TLBO Dynamic FPA

0.886

1

Step Response

Step Response

1.2

0.8 0.6

0.885

0.884

0.883

0.4 0.882

0.2 0

0

0.5

1

Time (s)

(a) Steady-State Response

1.5

0.881 0.8825

0.883

0.8835

0.884

0.8845

0.885

0.8855

0.886

Time (s)

(b) Zoomed-in view of Figure 6a

Fig. 6. The comparison of the ramp response of the algorithm performance

260

S. Baartman and L. Cheng

TLBO algorithm gave the lowest fitness value, but the dynamic FPA follows the input the fastest, though it results in a longer time to settle. 5.3

Suggestions for Future Practitioners

– If the target is already moving with a constant velocity (step input), the proportional gain should be high since the gimbal is starting from zero velocity and needs to reach the target velocity. If the target begins from zero as well and moves with constant acceleration (ramp input), the integral gain is significant which shows the error over time because the gimbal moves with the target rather than attempting to reach the target motion. The search space bound of the algorithm must be adjusted accordingly to ensure that optimal results are within the search space. – Ensure that the algorithm-specific parameter values chosen give a higher probability for exploitation rather than a constant exploration in the iteration. – Since the best solutions for all the algorithms resulted in higher PI controller gains for both the problem instances but with lower derivative controller gains or filter coefficients, one must reduce the search space upper bound of the derivative controller so that this may increase the number of viable solutions (Fig. 5).

6

Conclusion

The tuning of PID controllers is imperative to ensuring a good performance as classical tuning methods have proven to only produce sub-optimal results and the need for better tuning method remains. Nature-inspired meta-heuristic algorithms have shown to be a good alternative solution to PID controller tuning. However, these algorithms have their own (meta) parameters which need to be tuned in order to ensure good algorithm performance. This study made use of the 2k factorial method for tuning non-algorithm specific parameters, and compared self-tuning methods with static parameter values for tuning algorithm-specific parameters. The performance of the algorithms were then compared on two different simulation conditions which represent two different target motions. It was found that the TLBO algorithm resulted in the lowest (best) fitness value and reached the near optimal solution with the lowest iteration number. Most gimbal stabilization systems have two, three or even more axes. Future work should consider how these algorithms perform when optimising PID systems for multi-axis gimbal systems that include Coriolis-force cross-coupling disturbance torque between the axes.

References ˇ 1. Jan, J.A., Sulc, B.: Evolutionary computing methods for optimising virtual reality process models. In: International Carpathian Control Conference, May 2002

Meta-Heuristic Algorithms in PID Tuning

261

2. Memari, A., Ahmad, R., Rahim, A.R.A.: Metaheuristic algorithms: guidelines for implementation. J. Soft Comput. Decis. Support Syst. 4(7), 1–6 (2017) 3. Hilkert, J.M.: Inertially stabilized platform technology concepts and principles. IEEE Control Syst. Mag. 28, 26–46 (2008) 4. Bujela, B.W.: Investigation into the robust modelling, control and simulation of a two-dof gimbal platform for airborne applications. Msc. Thesis, University of the Witwatersrand, Johannesburg, SA (2013) 5. Ribeiro, J.M.S., Santos, M.F., Carmo, M.J., Silva, M.F.: Comparison of PID controller tuning methods: analytical/classical techniques versus optimization algorithms. In: 2017 18th International Carpathian Control Conference (ICCC), pp. 533–538, May 2017 6. Jalilvand, A., Kimiyaghalam, A., Ashouri, A., Kord, H.: Optimal tuning of pid controller parameters on a DC motor based on advanced particle swarm optimization algorithm, vol. 3, no. 4, p. 9 (2011) 7. Salem, A., Hassan, M.A.M., Ammar, M.E.: Tuning PID controllers using artificial intelligence techniques applied to DC-motor and AVR system. Asian J. Eng. Technol. 02(02), 11 (2014) 8. Rajesh, R.J., Kavitha, P.: Camera gimbal stabilization using conventional PID controller and evolutionary algorithms. In: 2015 International Conference on Computer Communication and Control (IC4), (Indore), IEEE, September 2015. https://doi.org/10.1109/IC4.2015.7375580 9. Dash, P., Saikia, L.C., Sinha, N.: Flower pollination algorithm optimized PI-PD cascade controller in automatic generation control of a multi-area power system. Int. J. Electr. Power Energy Syst. 82, 19–28 (2016) 10. Jagatheesan, K., Anand, B., Samanta, S., Dey, N., Santhi, V., Ashour, A.S., Balas, V.E.: Application of flower pollination algorithm in load frequency control of multiarea interconnected power system with nonlinearity. Neural Comput. Appl. 28, 475–488 (2017) ˇ 11. Fister, D., Fister Jr., I., Fister, I., Safariˇ c, R.: Parameter tuning of PID controller with reactive nature-inspired algorithms. Robot. Autonom. Syst. 84, 64–75 (2016) 12. Sabir, M.M., Khan, J.A.: Optimal design of PID controller for the speed control of DC motor by using metaheuristic techniques. Adv. Artif. Neural Sys. 2014, 1–8 (2014). https://doi.org/10.1155/2014/126317 13. Rajinikanth, V., Satapathy, S.C.: Design of controller for automatic voltage regulator using teaching learning based optimization. Procedia Technol. 21, 295–302 (2015) 14. Li, W., He, Z., Zheng, J., Hu, Z.: Improved flower pollination algorithm and its application in user identification across social networks. IEEE Access 7, 44359– 44371 (2019) 15. Jia, R., Nandikolla, V., Haggart, G., Volk, C., Tazartes, D.: System performance of an inertially stabilized gimbal platform with friction, resonance, and vibration effects. J. Nonlinear Dyn. 2017, 1–20 (2017). https://doi.org/10.1155/2017/ 6594861 16. Pan, F., Liu, L.: Research on different integral performance indices applied on fractional-order systems. In: 2016 Chinese Control and Decision Conference (CCDC), pp. 324–328, May 2016 17. Yang, X.-S.: Flower pollination algorithm for global optimization. In: DurandLose, J., Jonoska, N. (eds.) Unconventional Computation and Natural Computation vol. 7445, pp. 240–249. Springer, Heidelberg (2012)

262

S. Baartman and L. Cheng

18. Alyasseri, Z.A.A., Khader, A.T., Al-Betar, M.A., Awadallah, M.A., Yang, X.-S.: Variants of the flower pollination algorithm: a review. In: Yang, X.-S. (ed.) NatureInspired Algorithms and Applied Optimization, vol. 744, pp. 91–118. Springer, Cham (2018) 19. Rao, R.V.: Teaching-Learning-Based Optimization and Its Engineering Applications. Springer, Cham (2016) 20. Adenso-D´ıaz, B., Laguna, M.: Fine-tuning of algorithms using fractional experimental designs and local search. Oper. Res. 54, 99–114 (2005) 21. Eiben, A.E., Jelasity, M.: A critical note on experimental research methodology in EC. In: Proceedings of the 2002 Congress on Evolutionary Computation, CEC 2002, vol. 1, pp. 582–587. IEEE, Honolulu, 12 May 2002

Solving an Integer Program by Using the Nonfeasible Basis Method Combined with the Cutting Plane Method Kasitinart Sangngern and Aua-aree Boonperm(&) Department of Mathematics and Statistics, Faculty of Science and Technology, Thammasat University, Bangkok, Thailand [email protected], [email protected]

Abstract. For solving an integer linear programming problem, a linear programming (LP) relaxation problem is solved first. If the optimal solution to the LP relaxation problem is not integer, then some techniques to find the optimal integer solution is used. For solving the LP relaxation problem, the simplex method is performed. So, the artificial variables may be required which it means that the problem size will be expanded and wasted more computational time. In this paper, the artificial-free technique, called the nonfeasible basis method (NFB), is used combined with the cutting plane method for solving an integer linear programming problem, called the nonfeasible-basis cutting plane method (NBC). It performs in condensed table which is the smaller simplex tableau. From the computational results, we found that the computational time can be reduced. Keywords: Integer linear programming basis method  Artificial-free technique

 Cutting plane method  Nonfeasible

1 Introduction In many real-world problems, there are a lot of problems which require integer solutions such as travelling salesman problems [1], packing problems [2], knapsack problems [3], assignment problems [4], hub location problems [5] etc. To solve these problems, integer linear programming problems are formulated which there are many methods such as branch and bound method [6], cutting plane method [7, 8] and branch and cut method [9] can be used to find the optimality. The steps of these methods are solving an LP relaxation problem which the integer condition is dropped, then linear programming subproblems are solved for finding the integer solution. One of the most widely used to solve the integer programming problem is branch and bound method [6] which is a divide-and-conquer strategy. This method starts by considering the fractional optimal solution to the LP relaxation problem. Then, it is branched on generating sub-problems and bounded by the integer solution, infeasible solution and a fractional solution that gives the worse objective value than the current

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 263–275, 2021. https://doi.org/10.1007/978-3-030-68154-8_26

264

K. Sangngern and A. Boonperm

best solution. However, for solving a large problem, the branch and bound method has a lot of sub-problems for solving which this leads to the expensive computational time. In 1958, Gomory [7] introduced the technique named the cutting plane method to solve an integer linear programming problem. This technique starts by solving the LP relaxation problem. If an optimal solution to the LP relaxation problem is not an integer, then the cut is added to find the integer solution. The construction of an efficient added cut is one of the interesting issues to improve the cutting plane method. The well-known cut is the Gomory’s cut which is obtained from the optimal simplex tableau after the LP relaxation problem is solved. This cut is constructed by the coefficients of nonbasic variables in the row that has a fractional solution. Later in 2005, Glover and Sherali [10] proposed the improvement of the Gomory’s cut, named Chvatal-Gomory-tier cut. In 2011, Wesselmann et al. [11] developed the Gomory’s cut enhance the performance of the Gomory mixed integer cut. Additionally, the Gomory’s cut was improved by many researchers [8, 10]. From the above researches, each method which is used to solve the optimal integer solution requires the algorithm to solve subproblems which are linear programming problems. Therefore, the efficient algorithm for solving linear programming problems has been investigated. The well-known method for solving a linear programming problem is the simplex method which is the traditional iterative method introduced by Dantzig in 1947 [12]. However, the simplex method is not the best method since Klee and Minty [13] demonstrated the simplex method has poor worst-case performance. Many issues that researchers attempt to improve the simplex method are proposed, for example, proposed the technique for removing the redundant constraints [14], introduced the new method for solving the specific linear programming [15, 16] and developed the artificial-free techniques [17–24]. Before the simplex method starts, the linear programming problem in the standard form is required. So, the slack or surplus variables are added. Then, a coefficient matrix of constraints is separated into two sub matrices called the basic matrix (the basis) and the nonbasic matrix. The variables corresponding to the basis are the basic variables while the remaining variables are the nonbasic variables. The simplex method starts by choosing an initial basis to construct a basic feasible solution, then the basis is updated until the optimal solution is found. However, it is hard to construct an initial feasible basis by choosing basic variables among decision variables. In the case that surplus variables are added, the basic variables should consist of slack variables and artificial variables. It leads to the enlarged problem size. To solve this issue, some researchers develop a new method for solving linear programming problems which start by infeasible initial basis. In 1969, Zionts [17] introduced the artificial-free technique named criss-cross technique. This technique focuses on both of primal and dual problems. It alternates between the (primal) simplex method and the dual simplex method until a feasible solution of a primal problem or a dual problem is found. Later in 2000, Pan [18] proposed two novel perturbation simplex variants combined with the dual pivot rules for solving a linear programming problem without using the artificial variable.

Solving an Integer Program by Using the Nonfeasible Basis Method

265

In 2003, Paparrizos et al. [19] introduced a method for solving a linear programming by using the infeasible basis. The chosen basis is generated by two artificial variables associated with the additional constraint with a big-M number. Furthermore, this infeasible basis returns the dual feasible solution. In 2009, Nabli [20] described the artificial-free technique name the nonfeasible basis method. The nonfeasible basis method starts when the initial basis gives the primal infeasible solution. It performs on the condensed tableau which is the briefly version of the simplex tableau. In the condensed tableau, the columns of nonbasic variables are only considered because the column of basic variables for each iteration is not changed. By the operation of the nonfeasible basis method, its performance is better than two-phase method. Later in 2015, NFB was improved by Nabli and Chahdoura [21] by using the new pivot rule for choosing an entering variable. In 2020, Sangngern and Boonperm [22] proposed the method for constructing an initial basis which closes to the optimal solution by considering the angles between the objective function and constraints. If the current basic is infeasible, then the nonfeasible method will be performed. In 2014, Boonperm and Sinapiromsaran [23, 24] introduced the artificial-free techniques by solving the relaxation problem which is constructed by considering the angle between the objective function and constraints. Due to the above researches, we found that one of techniques for solving an integer program and one of techniques for solving a linear programming problem are compatible. That is the nonfeasible basis method and the cutting plane method with the Gomory’s cut. Since the added Gomory’s cut is generated by only the coefficients of nonbasic variables like the nonfeasible basis method, which is performed on the condensed tableau, we will use this compatibility to develop the algorithm for solving an integer linear programming problem by the combination of the nonfeasible basis method and the cutting plane method. Then, it is named the nonfeasible-basis cutting plane method (NBC). In this paper, it is organized into five sections. First section is the introduction. Second, all tools that are used for the proposed method are described. Third, the proposed method is detailed. The computational result is shown in the fourth section. Finally, the last section is the conclusion.

2 Preliminaries In this section, the cutting plane method with the Gomory’s cut for solving an integer linear programming problem and the artificial-free technique named the Nonfeasible Basis method are described. We begin with the cutting plane method.

266

2.1

K. Sangngern and A. Boonperm

Cutting Plane Method

Consider a (pure) integer linear programming problem: ðILPÞ

max s:t:

cT x Ax  b x  0 and integer;

where A 2 Rmn ; b 2 Rm ; c 2 Rn and x 2 Rn . Before an integer linear programming problem is solved, the solution to the LP relaxation is required. The LP relaxation problem (LPR) of ILP is its integer condition dropped which it can be written as follows: ðLPRÞ

max s:t:

cT x Ax  b x  0:

To solve LPR, if the origin point is a feasible point, then the simplex method can start. Otherwise, the two-phase method is performed.  T   Let A ¼ ½ B N ; x ¼ xTB xTN and cT ¼ cTB cTN . The standard form of LPR associated to the basis B, the nonbasic matrix N, the basic variable xB and the nonbasic variable xN is written as follows: max s:t:

  0T xB þ cTN  cTB B1 N xN ¼ cTB B1 b I m xB þ B1 NxN ¼ B1 b xB ; xN  0

For the cutting plane method, after LPR is solved and its optimal solution is not an integer solution, then the cuts are added to the LP relaxation problem for cutting of a non-integer solution until an optimal integer solution is found. In this paper, we are interested in the Gomory’s cut which can be defined below. Let B be the optimal basis of LPR. The associated tableau is as follows:

Im

B 1N

0

cTN cTB B 1 N xN

xB

B 1b

xB

cTB B 1b

Suppose that there is at least one row of the optimal tableau, called the ith row, with ðB1 bÞi is fraction. This row is corresponding to the equation xBi þ

X

aij xj ¼ bi ;

ð1Þ

j2IN

where aij ¼ ðB1 N Þij ; bj ¼ ðB1 bÞj and IN is the set of indices of nonbasic variables.

Solving an Integer Program by Using the Nonfeasible Basis Method

The Gomory’s cut generated by the ith row is defined by X      aij  aij xj þ xn þ 1 ¼  bi   bi ;

267

ð2Þ

j2IN

where xn þ 1 is the non-negative slack variable. Algorithm 1 : Cutting Plane method (with Gomory’s cut) Input Optimal tableau of LPR * Let x be the optimal solution to LPR *

While x is not integer do Add the Gomory’s cut into the optimal tableau Perform the dual simplex method If the optimal solution is found then The optimal integer solution is found Else The optimal integer solution does not exist End End
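To illustrate how the cut in (2) is generated in practice, the short Python sketch below builds the Gomory cut coefficients from one fractional tableau row; the function and variable names are ours, not from the paper, and the example row is hypothetical.

```python
import numpy as np

def gomory_cut(row_coeffs, rhs):
    """Build the Gomory cut of Eq. (2) from the i-th tableau row:
    sum_j (floor(a_ij) - a_ij) * x_j + slack = floor(b_i) - b_i."""
    a = np.asarray(row_coeffs, dtype=float)   # coefficients of the nonbasic variables
    cut_coeffs = np.floor(a) - a              # each coefficient is <= 0
    cut_rhs = np.floor(rhs) - rhs             # negative when b_i is fractional
    return cut_coeffs, cut_rhs

# Example: a row with fractional right-hand side 3.5
coeffs, rhs = gomory_cut([0.25, -1.75, 2.0], 3.5)
# coeffs == [-0.25, -0.25, 0.0], rhs == -0.5; this row plus a slack column is
# appended to the (condensed) tableau before re-optimizing with the dual simplex.
```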

The step of the cutting plane method can be described as follows: Before the cutting plane method is performed, LPR must be solved by the simplex method or the two-phase method. If the two-phase method is chosen to perform, then artificial variables are required. Consequently, the size of problem must be expanded, and the computational time will be huge. To avoid the usage of artificial variables, the artificial-free technique is introduced. This technique can deal with the artificial variable and reduce the computational time. 2.2

Nonfeasible Basis Method

The nonfeasible basis method (NFB) is one of the artificial-free techniques performed on the condensed tableau which is described below. Consider the following the original simplex tableau:

Im

0 xB

B 1N T N

c

T B

B 1b 1

c B N xN

T B

xB

1

c B b

For all iterations, the column matrix of basic variables ðxB Þ is the identity matrix. Thus, the condensed tableau is introduced instead of the original simplex tableau by removing the column of basic variables. It can be written as follows:

268

K. Sangngern and A. Boonperm

B 1N T N

c

T B

B 1b 1

c B N xN

T B

xB

1

c B b

For using the simplex method with the condensed tableau, the updated condensed tableau for each iteration is defined by the theorem in [9]. NFB was introduced to construct the feasible basis without using artificial variables and deals with the condensed tableau. So, the size of a matrix of each problem handled by this method is smaller than the original simplex method. The NFB method starts when the chosen basis B is an infeasible basis (i.e., there exists the ith row that ðB1 bÞi \0). Finally, this method returns a basic feasible solution. The detail of the algorithm is shown as follows: Algorithm 2 : Nonfeasible Basis method (NFB) Input Infeasible basis B

Bb

min

While Bb

k

0 do

arg min

and

If K then Exit /*the feasible domain is empty*/ Else

s

arg min

r

arg max

basic

nonbasic

End Apply pivoting End The current basis is feasible. Apply the simplex method by the desired pivot rule.

Solving an Integer Program by Using the Nonfeasible Basis Method

269

3 Nonfeasible-Basis Cutting Plane Method Consider the following integer linear programming problem: max ðILPÞ s:t:

cT x Ax  b x  0 and integer;

ð3Þ

where A 2 Rmn ; b 2 Rm ; c 2 Rn and x 2 Rn . Then, the standard form of the LP relaxation problem of ILP is as follows: max s:t:

cT x Ax þ Im s ¼ b x; s  0:

ð4Þ

If b  0, then the simplex method can be performed for solving the problem (4). Otherwise, the origin point is not a feasible point and artificial variables are required. From the previous section, the nonfeasible basis method is better than the twophase method because it performs on the smaller tableau and it is not involved the artificial variable. Consequently, NFB is used for solving the problem (4). After the problem (4) is solved, the optimal integer solution to the problem (3) can be found by the following cases: 1. If the optimal solution to the LP relaxation problem is found and it is integer, then it is the optimal solution to the problem (3). 2. If the optimal solution to the LP relaxation problem is not integer, then the cutting plane method is performed in the condensed tableau. 3. If the LP relaxation problem is unbounded, then the problem (3) is unbounded. 4. If the LP relaxation problem is infeasible, then the problem (3) is infeasible. From the problem (4), the optimal condensed tableau is written as seen below:

B 1N

cTN cTB B 1 N xN

B 1b

xB

cTB B 1b

From the above tableau, the reduced cost cTN  cTB B1 N  0 is obtained, that is the dual solution is feasible. Suppose that there exists the fractional solution. Then, the cutting plane method is preferred for finding the integer solution.

270

K. Sangngern and A. Boonperm

If the Gomory’s cut is used, then one of rows in the optimal tableau, say the kth row, which has a fractional solution is selected to generate an added cut. Then, the added cut is   ðbak c  ak ÞxN þ xm þ n þ 1 ¼ bk   bk

ð5Þ

k ¼ where ak is the k th row of matrix B1 N, bak c ¼ ½ b a1k c    b amk c  and b ðB1 bÞk . From the added cut (5), if xm þ n þ 1 is chosen to be a basic variable, then this cut can add all coefficients of nonbasic variables to the optimal condensed tableau which is suitable for the cutting plane method. Hence, the condensed tableau with the added cut can be written as follows:

B 1N ak ak

cTN cTB B 1 N xN

xB

B 1b

bk

bk

xm

n 1

cTB B 1b

  Since bk  bk cannot be positive, the dual simplex method will be performed for solving the condensed tableau with the added cut. After the LP relaxation problem is solved, if the obtained solution is not integer, then repeat steps by adding a new cut one by one until the optimal integer solution is found. The algorithm of this method can be summarized as follows: Algorithm 3 : Nonfeasible-Basis Cutting Plane method (NBC) Input A, b and c

Generate the LP relaxation problem If the origin point is feasible do Perform the simplex method in the condensed tableau Else

Perform NFB End While the solution is not integer do Perform the cutting plane method on the optimal condensed tableau If the optimal integer solution is found then The optimal integer solution is found Else There is no integer feasible solution End End

Solving an Integer Program by Using the Nonfeasible Basis Method

271

4 Computational Results In Sect. 3, the combination method for solving an integer linear programming problem without using artificial variables was designed. To show the performance of NBC, the computational time for solving random generated integer programs between the NBC method and the traditional method (the simplex method or the two-phase method combined with the cutting plane method) is compared. Both methods are implemented through Python. Finally, the computational results will be shown and discussed. 4.1

Generated Problem

Consider the following integer linear programming problem max s:t:

cT x Ax  b x  0 and integer;

ð6Þ

where A 2 Rmn ; b 2 Rm ; c 2 Rn and x 2 Rn . Generated problems are in the form of problem (6) and satisfy the following conditions:   1. A coefficient matrix A ¼ aij , aij is randomly generated in an interval ½9; 9.  T 2. A vector cT ¼ cj and vector x ¼ ½xi , cj and xi are randomly generated in an interval ½0; 9. 3. After a coefficient matrix A and vector x are generated, a right-hand-side vector b is computed by Ax ¼ b. The different sizes of the number of variants ðnÞ and the number of constraints ðmÞ are tested and summarized in the following table (Table 1): Table 1. The summarized data of the generated problems n 21 41 61 81 101 201 301 401 501 601 m 10 20 30 40 50 100 150 200 250 300 Size Small Large

All methods are implemented by using Python coding on the Google Colaboratory, and these tests were run on Intel® Core™ i7-5500U 8 GB 2.4 GHz. For the traditional method, the two-phase method is used for solving the LP relaxation problem, and then the original cutting plane method is used for finding the optimal integer solution and they are performed on the full simplex tableau.

272

4.2

K. Sangngern and A. Boonperm

Computational Results

According to the experimental design, we will show the comparison of the average of the computational time before the cutting plane method is performed, the average of the total computational time and bar charts of the average of the total computational time for each size of problem with the standard deviation. Table 2. The average of the computational time (sec.) Size (row, col.) Small 21,10 41,20 61,30 81,40 101,50 201,100 Large 301,150 401,200 501,250 601,300 Total average

TRAD BC 0.14 0.56 1.83 4.15 7.75 70.12 274.85 687.05 1408.01 2972.74 5427.2

TC 0.15 0.58 1.87 4.22 7.85 70.50 275.75 688.82 1410.14 2976.51 5436.39

NBC BC TC 0.03 0.04 0.16 0.19 0.41 0.49 0.94 1.19 1.92 3.08 32.23 32.33 153.88 154.48 234.65 258.05 307.72 628.82 845.10 1334.19 1577.04 2412.86

Table 2 exhibits the average computational time before the cutting plane method is performed and the average total computational time. In Table 2, the main column named TRAD and NBC are represented the computational time for solving problem by using the traditional method and the NBC method, respectively, and the sub column named BC and TC are represented the average computational time before the cutting plane method is performed and the average total computational time, respectively. Note that the bold face numbers indicate the smallest average computational time. From Table 2, the average computational time before the cutting plane method is performed and the average total computational time of NBC are less than TRAD. Since NBC is not involved the artificial variable and is performed on the condensed tableau while TRAD requires artificial variables and the original cutting plane method, NBC is faster than TRAD. From the computational result, we found that the average total computational time of NBC is 2412:86 5436:39  0:4438 or approximate 44.38% of the average total computational time of TRAD. It means that the proposed method can reduce the computational time approximately 55.62% from the computational time of the traditional time (Fig. 1).


Fig. 1. The average of the total computational time

4.3 Discussion

From the computational results, we found that the average total computational time of the traditional method is greater than that of the NBC method. There are two main reasons why the NBC method is more effective than the traditional method. First, the size of the tableau each method works on is different: the traditional method performs on the full simplex tableau, which is bigger than the condensed tableau used in the NBC method. To compare the sizes of the two tableaux, if one cut is added, then the full simplex tableau is expanded by one row and one column

$$
\begin{array}{c|ccc|c}
 & x_B & x_N & x_{m+n+1} & \\\hline
x_B & I_m & B^{-1}N & 0 & B^{-1}b \\
x_{m+n+1} & 0 & a_{s_N}^{T}-a_{s_B}^{T}B^{-1}N & 1 & b_s-a_{s_B}^{T}B^{-1}b \\\hline
z & 0 & c_B B^{-1}N-c_N & 0 & c_B B^{-1}b
\end{array}
$$

while the condensed tableau will be expanded with only one row.

$$
\begin{array}{c|c|c}
 & x_N & \\\hline
x_B & B^{-1}N & B^{-1}b \\
x_{m+n+1} & a_{s_N}^{T}-a_{s_B}^{T}B^{-1}N & b_s-a_{s_B}^{T}B^{-1}b \\\hline
z & c_B B^{-1}N-c_N & c_B B^{-1}b
\end{array}
$$


Moreover, if k cuts are added, then the full simplex tableau must be expanded by k rows and k columns, while the condensed tableau is expanded by only k rows. Therefore, the tableau handled by NBC is smaller than that of the original simplex method, which is one reason NBC reduces the computational time. The remaining reason concerns the use of artificial variables. The traditional method involves artificial variables, so the number of variables in its system is larger than in the NBC method, which is an artificial-free technique.
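As a rough illustration of this difference (a sketch under the assumption that the relaxation has m constraints and n variables, so that the condensed tableau keeps only the n - m nonbasic columns plus the right-hand side; the exact bookkeeping may differ slightly from the implementation used in the experiments), the growth of the two tableaux with k cuts can be written as

$$(m+1)\times(n+1) \;\longrightarrow\; (m+k+1)\times(n+k+1) \quad \text{(full tableau)},$$
$$(m+1)\times(n-m+1) \;\longrightarrow\; (m+k+1)\times(n-m+1) \quad \text{(condensed tableau)},$$

so the number of entries handled per pivot grows in both dimensions for the full tableau but only in the row dimension for the condensed tableau.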

5 Conclusion

There are two types of methods for solving an integer linear programming problem: exact methods and heuristic methods. In this paper, we focus on an exact method. The traditional exact method consists of a method for solving the LP relaxation and a technique for finding the integer solution. For solving the LP relaxation, the simplex method or the two-phase method is preferred. If the two-phase method is chosen, then artificial variables are added, which means the problem must be expanded. To avoid artificial variables, an artificial-free technique is required. The artificial-free technique that we choose for dealing with the artificial variables is the nonfeasible basis method (NFB). NFB is not only an artificial-free technique, but it also performs on a small simplex tableau called the condensed tableau. The nonfeasible-basis cutting plane method (NBC) is a combination of the nonfeasible basis method and the cutting plane method performed on the condensed tableau. NBC starts with NFB to solve the LP relaxation, and the cutting plane method is then used to find the integer solution. Since NBC does not use artificial variables and performs on the condensed tableau, its computational time is less than that of the traditional method. From the computational results, we found that the computational time of NBC is approximately 44% of the computational time of the traditional method.


A New Technique for Solving a 2-Dimensional Linear Program by Considering the Coefficient of Constraints Panthira Jamrunroj and Aua-aree Boonperm(&) Department of Mathematics and Statistics, Faculty of Science and Technology, Thammasat University, Pathum Thani 12120, Thailand [email protected], [email protected]

Abstract. Popular methods for solving a 2-dimensional linear program are the graphical method and the simplex method. However, if the problem has many constraints, they take more time to solve it. In this paper, a new method for solving a 2-dimensional linear program by considering only the coefficients of the constraints is proposed. If the problem satisfies the following three conditions: the vector of the objective function is nonnegative, the right-hand-side values are nonnegative, and the decision variables are nonnegative, then the optimal solution can be obtained immediately if there exists a variable with a positive coefficient vector. Moreover, this technique can be applied for choosing the entering variable in the simplex method.

Keywords: 2-dimensional linear program · Simplex method · Double pivot simplex method

1 Introduction

A linear program is an optimization technique with a linear objective function subject to linear equality or inequality constraints. The aim of this technique is to find a solution that satisfies the constraints and gives the minimum or maximum objective value. In real-world situations, linear programming is widely used for solving many industrial problems in which the best outcome is required, such as production planning, the traveling salesman problem, assignment problems, transportation problems, etc. For solving a linear program, the graphical method is an easy method for a 2- or 3-dimensional problem. However, for a high-dimensional problem it is not practical. The first practical method for solving a linear program, the simplex method, was presented in 1947 by George Dantzig [1]. However, in 1972, Klee and Minty [2] presented the worst-case computational time of the simplex method. Since then, many researchers have presented efficient methods to overcome this problem. Issues that researchers have been interested in include proposing new methods [3–10] and improving the simplex method with new pivot rules [11–21].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 276–286, 2021. https://doi.org/10.1007/978-3-030-68154-8_27


In 1976, Shamos and Hoey [22] proposed an algorithm for solving a 2-dimensional linear program. This algorithm works with the intersection of geometric objects in the plane. It starts by considering n line segments in the plane and finding all intersecting pairs. An O(n log n) algorithm is then used to determine whether any two of them intersect, and this is used to find out whether two simple plane polygons intersect. They proved that a linear program with 2 variables and n constraints can be solved in O(n log n) time, while the simplex method requires O(n²) time. In 1983, Megiddo [23] introduced a linear-time algorithm to solve a linear program in R². This method searches for the smallest circle enclosing n given points in the plane. Moreover, it disproves the conjecture by Shamos and Hoey that such problems require O(n log n) time. In addition, a closely related problem of a 2-dimensional linear program, called the separability problem in R², is constructed and solved in O(n) time. That is, this algorithm corrects the claim in Shamos and Hoey's paper that the problem can only be solved in O(n log n) time. In 1984, Dyer [24] also proposed an algorithm that requires O(n) time to solve a 2-dimensional linear program. The concept of this algorithm is the utilization of convexity, a well-known idea for creating linear-time algorithms. The algorithm starts at any feasible point, and the coefficient values of the objective function, c1 and c2, are considered. If c1 = c2 = 0, then this feasible point is optimal. Otherwise, the problem is transformed to provide a gradient direction of the objective function. After that, the optimal solution is obtained by performing a pairing algorithm. Each step of Dyer's algorithm requires O(n) time. Recently, Vitor and Easton [25] proposed a new method, called the slope algorithm, to solve a 2-dimensional linear program in which the problem must satisfy the following conditions: the coefficient values of the objective function are positive, the right-hand-side values of all constraints are also positive, and both variables are nonnegative. Not only can this algorithm solve the 2-dimensional problem, it can also be applied to identify the optimal basis for the simplex method. They apply the slope algorithm within the simplex framework for solving a general linear program. The simplex method is improved by exchanging two variables into the basis, identified by the slope algorithm, and this method is called the double pivot simplex method. In each iteration, the method requires the optimal basis of a 2-dimensional linear program to indicate the leaving variables for the original problem; the slope algorithm is used to identify such variables. After the slope algorithm is performed, the simplex tableau is updated corresponding to the new basis. The steps are repeated until the optimal solution to the original problem is obtained. From the computational results, this algorithm can reduce the number of iterations of the simplex method. Although the slope algorithm can identify the basis and reduce the iterations of the simplex method, it requires more computation in each iteration. So, in this paper, we propose a new method for solving a 2-dimensional linear program. It can improve the double pivot simplex method by avoiding the computation of the slope algorithm in the case where there exists only one pivot. We specifically consider the 2-dimensional linear program in which the problem satisfies the following conditions: the cost coefficient values are positive, the right-hand-side values are nonnegative, and both variables are also nonnegative. This approach can identify the optimal solution by


considering only the coefficient values of the constraints. Additionally, it can be applied to choose an effective pivot element for the simplex method. The details of the proposed approach are given in the next section.

2 The Proposed Approach

Consider the following special 2-dimensional linear program:

$$\begin{aligned}
\text{Maximize}\quad & z = c_1 x_1 + c_2 x_2\\
\text{Subject to}\quad & a_{11}x_1 + a_{12}x_2 \le b_1\\
& \qquad\vdots\\
& a_{m1}x_1 + a_{m2}x_2 \le b_m\\
& x_1, x_2 \ge 0,
\end{aligned}\tag{1}$$

where $c_1, c_2 > 0$ and $b_i \ge 0$ for all $i = 1,\ldots,m$. Since the origin is feasible, problem (1) either has an optimal solution or is unbounded (see Fig. 1).

Fig. 1. The possible cases of a solution to the problem (1)

The motivation of our proposed method is to identify a solution to such a problem by considering only the coefficient values of the constraints. For the case $a_{i2} \le 0$ for all $i = 1,\ldots,m$ (or $a_{i1} \le 0$ for all $i$), we see that $x_2$ (or $x_1$) can increase infinitely. This implies that the problem is unbounded (see Fig. 2). For the other cases, the following theorems identify the optimal solution when the coefficients of the constraints satisfy the stated conditions.


Fig. 2. The coefficient values $a_{i2} \le 0$ for all $i = 1,\ldots,m$

Theorem 1. Consider the 2-dimensional linear program (1) with $a_{i2} > 0$ for all $i = 1,\ldots,m$. Let
$$k = \arg\min_i\left\{\frac{b_i}{a_{i1}} \,\middle|\, a_{i1} > 0,\ i = 1,\ldots,m\right\}
\quad\text{and}\quad
r = \arg\min_i\left\{\frac{b_i}{a_{i2}} \,\middle|\, a_{i2} > 0,\ i = 1,\ldots,m\right\}.$$

i) If $a_{k1} < a_{k2}(c_1/c_2)$, then $x^{*} = (x_1^{*}, 0)$ is the optimal solution to problem (1), where $x_1^{*} = b_k/a_{k1}$.
ii) If $a_{r1} > a_{r2}(c_1/c_2)$, then $x^{*} = (0, x_2^{*})$ is the optimal solution to problem (1), where $x_2^{*} = b_r/a_{r2}$.

Proof. i) Suppose that $a_{k1} < a_{k2}(c_1/c_2)$. Let $x_1^{*} = b_k/a_{k1}$. We need to show that $x^{*} = (x_1^{*}, 0)$ is the optimal solution by using the optimality conditions. First, we show that $(x_1^{*}, 0)$ is a feasible solution of the primal problem. Consider

$$Ax^{*} = \begin{bmatrix} a_{11}x_1^{*} + a_{12}\cdot 0\\ \vdots\\ a_{k1}x_1^{*} + a_{k2}\cdot 0\\ \vdots\\ a_{m1}x_1^{*} + a_{m2}\cdot 0 \end{bmatrix}
= \begin{bmatrix} a_{11}x_1^{*}\\ \vdots\\ a_{k1}x_1^{*}\\ \vdots\\ a_{m1}x_1^{*} \end{bmatrix}.$$

Since $b_i \ge 0$ for all $i = 1,\ldots,m$, if $a_{i1} \le 0$, then $a_{i1}x_1^{*} \le b_i$, and if $a_{i1} > 0$, then $a_{i1}x_1^{*} = a_{i1}\frac{b_k}{a_{k1}} \le a_{i1}\frac{b_i}{a_{i1}} = b_i$. Therefore, $(x_1^{*}, 0)$ is a feasible solution. Next, we will show that there exists a dual feasible solution. Consider the dual problem:

$$\begin{aligned}
\text{Minimize}\quad & b_1 w_1 + b_2 w_2 + \cdots + b_m w_m = b^{T}w\\
\text{Subject to}\quad & a_{11}w_1 + a_{21}w_2 + \cdots + a_{k1}w_k + \cdots + a_{m1}w_m \ge c_1\\
& a_{12}w_1 + a_{22}w_2 + \cdots + a_{k2}w_k + \cdots + a_{m2}w_m \ge c_2\\
& w_1, \ldots, w_m \ge 0.
\end{aligned}$$

Choose $w^{*} = [\,0\ \cdots\ c_1/a_{k1}\ \cdots\ 0\,]^{T}$. Since $a_{k1} > 0$ and $c_1 > 0$, $c_1/a_{k1} > 0$.


Consider the following system:
$$\begin{aligned}
a_{11}w_1^{*} + a_{21}w_2^{*} + \cdots + a_{k1}w_k^{*} + \cdots + a_{m1}w_m^{*} &= c_1\\
a_{12}w_1^{*} + a_{22}w_2^{*} + \cdots + a_{k2}w_k^{*} + \cdots + a_{m2}w_m^{*} &= a_{k2}(c_1/a_{k1}).
\end{aligned}$$

Since $a_{k1} < a_{k2}(c_1/c_2)$, we have $c_2 < a_{k2}(c_1/a_{k1})$. Hence, $w^{*} = [\,0\ \cdots\ c_1/a_{k1}\ \cdots\ 0\,]^{T}$ is a dual feasible solution. Finally, consider the objective values of the primal and dual problems. We get $c^{T}x^{*} = c_1(b_k/a_{k1})$ and $b^{T}w^{*} = b_k(c_1/a_{k1})$. Since $x^{*}$ and $w^{*}$ are a primal feasible solution and a dual feasible solution, respectively, and $c^{T}x^{*} = b^{T}w^{*}$, $x^{*}$ is the optimal solution to problem (1).

ii) Suppose that $a_{r1} > a_{r2}(c_1/c_2)$. Let $x_2^{*} = b_r/a_{r2}$. We need to show that $x^{*} = (0, x_2^{*})$ is the optimal solution by using the optimality conditions. First, we will show that $(0, x_2^{*})$ is a primal feasible solution. Consider

$$Ax^{*} = \begin{bmatrix} a_{11}\cdot 0 + a_{12}x_2^{*}\\ \vdots\\ a_{r1}\cdot 0 + a_{r2}x_2^{*}\\ \vdots\\ a_{m1}\cdot 0 + a_{m2}x_2^{*} \end{bmatrix}
= \begin{bmatrix} a_{12}x_2^{*}\\ \vdots\\ a_{r2}x_2^{*}\\ \vdots\\ a_{m2}x_2^{*} \end{bmatrix}.$$

Since $b_i \ge 0$ for all $i = 1,\ldots,m$, if $a_{i2} \le 0$, then $a_{i2}x_2^{*} \le b_i$, and if $a_{i2} > 0$, then $a_{i2}x_2^{*} = a_{i2}\frac{b_r}{a_{r2}} \le a_{i2}\frac{b_i}{a_{i2}} = b_i$. Therefore, the primal problem is feasible. Next, we will show that there exists a dual feasible solution. Consider the dual problem:

$$\begin{aligned}
\text{Minimize}\quad & b_1 w_1 + b_2 w_2 + \cdots + b_m w_m = b^{T}w\\
\text{Subject to}\quad & a_{11}w_1 + a_{21}w_2 + \cdots + a_{r1}w_r + \cdots + a_{m1}w_m \ge c_1\\
& a_{12}w_1 + a_{22}w_2 + \cdots + a_{r2}w_r + \cdots + a_{m2}w_m \ge c_2\\
& w_1, \ldots, w_m \ge 0.
\end{aligned}$$

Choose $w^{*} = [\,0\ \cdots\ c_2/a_{r2}\ \cdots\ 0\,]^{T}$. Since $a_{r2} > 0$ and $c_2 > 0$, $c_2/a_{r2} > 0$. Consider the following system:
$$\begin{aligned}
a_{11}w_1^{*} + a_{21}w_2^{*} + \cdots + a_{r1}w_r^{*} + \cdots + a_{m1}w_m^{*} &= a_{r1}(c_2/a_{r2})\\
a_{12}w_1^{*} + a_{22}w_2^{*} + \cdots + a_{r2}w_r^{*} + \cdots + a_{m2}w_m^{*} &= c_2.
\end{aligned}$$

Since $a_{r1} > a_{r2}(c_1/c_2)$, we have $c_1 < a_{r1}(c_2/a_{r2})$. Hence, $w^{*} = [\,0\ \cdots\ c_2/a_{r2}\ \cdots\ 0\,]^{T}$ is a dual feasible solution. Finally, we will prove that the objective values of the primal and dual problems are equal. Consider $c^{T}x^{*} = c_2(b_r/a_{r2})$ and $b^{T}w^{*} = b_r(c_2/a_{r2})$.


Since $x^{*}$ and $w^{*}$ are a primal feasible solution and a dual feasible solution, respectively, and $c^{T}x^{*} = b^{T}w^{*}$, $x^{*}$ is the optimal solution to problem (1).

Theorem 2. Consider the 2-dimensional linear program (1) with $a_{i1} > 0$ for all $i = 1,\ldots,m$. Let
$$k = \arg\min_i\left\{\frac{b_i}{a_{i1}} \,\middle|\, a_{i1} > 0,\ i = 1,\ldots,m\right\}
\quad\text{and}\quad
r = \arg\min_i\left\{\frac{b_i}{a_{i2}} \,\middle|\, a_{i2} > 0,\ i = 1,\ldots,m\right\}.$$

i) If $a_{r2} < a_{r1}(c_2/c_1)$, then $x^{*} = (0, x_2^{*})$ is the optimal solution to problem (1), where $x_2^{*} = b_r/a_{r2}$.
ii) If $a_{k2} > a_{k1}(c_2/c_1)$, then $x^{*} = (x_1^{*}, 0)$ is the optimal solution to problem (1), where $x_1^{*} = b_k/a_{k1}$.

Proof. The proof is similar to that of Theorem 1.

From Theorem 1 and Theorem 2, a solution to problem (1) is obtained by considering only the coefficient values of the constraints. Therefore, the 2-dimensional linear program can be solved easily when it satisfies the conditions of these theorems, and this technique reduces the computation needed to solve a 2-dimensional problem. Examples of the proposed approach are shown in the next section.
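The conditions of Theorems 1 and 2 translate directly into a few comparisons of the constraint coefficients. The following Python sketch is our own illustration (the function name and the use of NumPy are not from the paper): it returns an optimal vertex when one of the conditions holds, reports unboundedness when a variable has no positive coefficient, and returns None when the theorems do not apply, in which case a method such as the slope algorithm or the simplex method is still needed.

import numpy as np

def solve_2d_lp(c, A, b):
    # Conditions of Theorems 1-2 for: max c1*x1 + c2*x2, A x <= b, x >= 0,
    # with c1, c2 > 0 and b >= 0. Returns an optimal vertex, the string
    # "unbounded", or None when the conditions do not cover the instance.
    c1, c2 = c
    col1, col2 = A[:, 0], A[:, 1]
    if np.all(col1 <= 0) or np.all(col2 <= 0):
        return "unbounded"            # x1 or x2 can increase without bound
    rows1 = np.where(col1 > 0)[0]
    rows2 = np.where(col2 > 0)[0]
    k = rows1[np.argmin(b[rows1] / col1[rows1])]   # arg min b_i/a_i1 over a_i1 > 0
    r = rows2[np.argmin(b[rows2] / col2[rows2])]   # arg min b_i/a_i2 over a_i2 > 0
    if np.all(col2 > 0):              # Theorem 1
        if A[k, 0] < A[k, 1] * c1 / c2:
            return (b[k] / A[k, 0], 0.0)
        if A[r, 0] > A[r, 1] * c1 / c2:
            return (0.0, b[r] / A[r, 1])
    if np.all(col1 > 0):              # Theorem 2
        if A[r, 1] < A[r, 0] * c2 / c1:
            return (0.0, b[r] / A[r, 1])
        if A[k, 1] > A[k, 0] * c2 / c1:
            return (b[k] / A[k, 0], 0.0)
    return None                       # fall back to the slope algorithm or simplex

if __name__ == "__main__":
    A2 = np.array([[-3, 5], [-2, 1], [-1, 4], [1, 7], [1, 2]], dtype=float)
    b2 = np.array([15, 22, 16, 30, 11], dtype=float)
    print(solve_2d_lp((3.0, 4.0), A2, b2))   # data of problem (2): vertex (11.0, 0.0)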

3 The Illustrative Examples

In this section, we give two examples of solving 2-dimensional linear problems and one example of applying the technique within the simplex method. The first and second examples show 2-dimensional linear programs whose optimal solutions are obtained directly. In the third example, we apply the proposed approach to a general linear program for selecting an entering variable.

Example 1. Consider the following 2-dimensional linear program:

$$\begin{aligned}
\text{Maximize}\quad & z = 3x_1 + 4x_2\\
\text{Subject to}\quad & -3x_1 + 5x_2 \le 15\\
& -2x_1 + x_2 \le 22\\
& -x_1 + 4x_2 \le 16\\
& x_1 + 7x_2 \le 30\\
& x_1 + 2x_2 \le 11\\
& x_1, x_2 \ge 0.
\end{aligned}\tag{2}$$

Since $c_1 = 3,\ c_2 = 4 > 0$ and $b_i \ge 0$ for $i = 1,\ldots,m$, we compute
$$k = \arg\min_i\left\{\frac{b_i}{a_{i1}} \,\middle|\, a_{i1} > 0,\ i = 1,\ldots,m\right\} = \arg\min\left\{\frac{b_4}{a_{41}} = 30,\ \frac{b_5}{a_{51}} = 11\right\} = 5.$$

Since $a_{i2} > 0$ for $i = 1,\ldots,m$ and $a_{k1} = 1 < a_{k2}(c_1/c_2) = 2(3/4) = 3/2$, the optimal solution is $(x_1^{*}, x_2^{*}) = (b_k/a_{k1}, 0) = (11, 0)$ with $z^{*} = 33$.


Solving this problem by the simplex method requires 4 iterations, while our method identifies the optimal solution by considering only the coefficients.

z = 3x1 + 2 x2

Subject to

5 x1 + x2 − x1 + x2 2 x1 + x2 5 x1 + 4 x2

≤ 5 ≤ 5 ≤ 4 ≤ 25

x1 , x2

≥ 0.

ð3Þ

Since c1 ¼ 3; c2 ¼ 2 [ 0, bi  0 for i ¼ 1; . . .; m, we compute  r ¼ arg min

b1 b2 b3 b4 25 ¼ 5; ¼ 5; ¼ 4; ¼ 4 a12 a22 a32 a42

 ¼ 3:

Since  ai2 [  0 for i ¼ 1; . . .; m and ar1 ¼ 2 [ ar2 ðc1 =c2 Þ ¼ 3=2, the optimal solution is x1 ; x2 ¼ ð0; br =ar2 Þ ¼ ð0; 4Þ with z ¼ 8. The next example, we will apply our algorithm to solve a general linear program for identifying an entering variable. Example 3. Consider the following linear program: Maximize

z = 5 x1 + 3x2 + x3 + 2 x4

Subject to

2 x1 + x2 + 2 x3 + x4 + 2 x4 7 x1 + 4 x2 6 x1 + 3x2 + 3x3 3x1 + x2 + x3 + 3x4 9 x1 + 2 x2 + 4 x3 + x4 + 2 x4 3x1 + x2 x1 , x2 , x3 , x4

≤ ≤ ≤ ≤ ≤ ≤ ≥

10 12 15 8 9 5 0.

ð4Þ

First, construct the initial tableau by adding the slack variables and they are set to form a basis. The initial tableau can be obtained below.

z x5 x6 x7 x8 x9 x10

x1 −5 2 7 6 3 9 3

x2 −3 1 4 3 1 2 1

x3 −1 2 0 3 1 4 0

x4 −2 1 2 0 3 1 2

x5 0 1 0 0 0 0 0

x6 0 0 1 0 0 0 0

x7 0 0 0 1 0 0 0

x8 0 0 0 0 1 0 0

x9 0 0 0 0 0 1 0

x10 0 0 0 0 0 0 1

RHS 0 10 12 15 8 9 5


Next, consider the coefficients of $x_1$ and $x_2$. The 2-dimensional linear program associated with $x_1$ and $x_2$ can be constructed as follows:

$$\begin{aligned}
\text{Maximize}\quad & z = 5x_1 + 3x_2\\
\text{Subject to}\quad & 2x_1 + x_2 \le 10\\
& 7x_1 + 4x_2 \le 12\\
& 6x_1 + 3x_2 \le 15\\
& 3x_1 + x_2 \le 8\\
& 9x_1 + 2x_2 \le 9\\
& 3x_1 + x_2 \le 5\\
& x_1, x_2 \ge 0.
\end{aligned}\tag{5}$$

Using the proposed method, we get that $x^{*} = (0, 3)$ is the optimal solution to (5). So, $x_2$ is selected as the entering variable. The variable $x_6$ is chosen as the leaving variable because its corresponding constraint gives the minimum value of $b_i/a_{i2}$ over all $i = 1,\ldots,m$. The simplex tableau is updated as the following tableau.

     | x1   | x2 | x3 | x4   | x5 | x6    | x7 | x8 | x9 | x10 | RHS
z    | 0.25 | 0  | -1 | -0.5 | 0  | 0.75  | 0  | 0  | 0  | 0   | 9
x5   | 0.25 | 0  | 2  | 0.5  | 1  | -0.25 | 0  | 0  | 0  | 0   | 7
x2   | 1.75 | 1  | 0  | 0.5  | 0  | 0.25  | 0  | 0  | 0  | 0   | 3
x7   | 0.75 | 0  | 3  | -1.5 | 0  | -0.75 | 1  | 0  | 0  | 0   | 6
x8   | 2.25 | 0  | 1  | 2.5  | 0  | -0.25 | 0  | 1  | 0  | 0   | 5
x9   | 5.5  | 0  | 4  | 0    | 0  | -0.5  | 0  | 0  | 1  | 0   | 3
x10  | 1.25 | 0  | 0  | 1.5  | 0  | -0.25 | 0  | 0  | 0  | 1   | 2

From the above tableau, $x_3$ and $x_4$ are considered. Since neither variable has a positive coefficient vector, the proposed method cannot determine the entering and leaving variables. We therefore return to the slope algorithm, which selects $x_3$ and $x_4$ as entering variables and $x_9$ and $x_{10}$ as leaving variables. The updated simplex tableau is as follows:

     | x1     | x2 | x3 | x4 | x5 | x6     | x7 | x8 | x9    | x10    | RHS
z    | 2.042  | 0  | 0  | 0  | 0  | 0.542  | 0  | 0  | 0.25  | 0.333  | 10.417
x5   | -2.917 | 0  | 0  | 0  | 1  | 0.083  | 0  | 0  | -0.5  | -0.333 | 4.833
x2   | 1.333  | 1  | 0  | 0  | 0  | 0.333  | 0  | 0  | 0     | -0.333 | 2.333
x7   | -2.125 | 0  | 0  | 0  | 0  | -0.625 | 1  | 0  | -0.75 | 1      | 5.75
x8   | -1.208 | 0  | 0  | 0  | 0  | 0.292  | 0  | 1  | -0.25 | -1.667 | 0.917
x3   | 1.375  | 0  | 1  | 0  | 0  | -0.125 | 0  | 0  | 0.25  | 0      | 0.75
x4   | 0.833  | 0  | 0  | 1  | 0  | -0.167 | 0  | 0  | 0     | 0.667  | 1.333

Since all reduced costs are nonnegative, the optimal solution is obtained at this tableau, which is $(x_1^{*}, x_2^{*}, x_3^{*}, x_4^{*}) = (0, 2.333, 0.75, 1.333)$ with $z^{*} = 10.417$.


From Example 3, at the first iteration the proposed method selects the variable with the second most negative reduced cost as the entering variable, while the standard simplex method would choose the most negative reduced cost. At the second iteration, the proposed method performs a double pivot, while the simplex method performs only a single pivot. In summary, we can reduce the number of iterations in this case: the proposed method needs only 2 iterations to solve the problem in Example 3, whereas the simplex method needs 4 iterations. In addition, at the first iteration we reduce computation because the entering variable is identified without using the slope algorithm.
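For instance, reusing the solve_2d_lp sketch given after Sect. 2 (our illustration, not the authors' code), the entering variable at the first iteration of Example 3 can be obtained from the columns of x1 and x2 in the initial tableau:

import numpy as np

A5 = np.array([[2, 1], [7, 4], [6, 3], [3, 1], [9, 2], [3, 1]], dtype=float)
b5 = np.array([10, 12, 15, 8, 9, 5], dtype=float)
print(solve_2d_lp((5.0, 3.0), A5, b5))   # problem (5): vertex (0.0, 3.0), so x2 enters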

4 The Computational Results

In this section, we report the computational results by comparing the number of iterations between the simplex method with the proposed technique and the original simplex method, solving canonical linear programs of the form

$$\begin{aligned}
\text{Maximize}\quad & z = c^{T}x\\
\text{Subject to}\quad & Ax \le b\\
& x \ge 0.
\end{aligned}\tag{6}$$

The tested problems have six different sizes of the coefficient matrix, namely $m \times n = 6\times4,\ 6\times8,\ 6\times10,\ 8\times6,\ 8\times10$ and $8\times12$. In addition, most of the tested problems are generated so that, at the first iteration, the variable with the second most negative reduced cost is required as the entering variable. The computational results are shown in Table 1. The average number of iterations of the simplex method with the proposed technique is less than that of the original simplex method, so the number of iterations is reduced when a problem is solved by the simplex method with the proposed technique.

Table 1. The computational results of the tested problems.

Size of problem | Original simplex method (avg. iterations) | Simplex method with the proposed technique (avg. iterations)
6 × 4           | 3.4   | 1.8
6 × 8           | 5.4   | 2.2
6 × 10          | 3.4   | 1.9
8 × 6           | 3.0   | 1.6
8 × 10          | 3.2   | 1.5
8 × 12          | 3.6   | 1.7
Average         | 3.667 | 1.783


5 Conclusions

In this paper, a new method for solving a 2-dimensional linear program by considering only the coefficients of the constraints is proposed. If there exists a variable with a positive coefficient vector, then the optimal solution can be identified immediately. Moreover, this technique can handle one case of the double pivot simplex method, namely the case of having only one entering variable. In that case, the double pivot simplex method runs the slope algorithm on the relaxed 2-dimensional linear program to identify an entering variable, which requires more computation, while the proposed method considers only the constraint coefficients to specify the entering variable. For this reason, we can reduce the computation in this case. If the relaxed 2-dimensional linear program does not satisfy the conditions, it must still be solved by the slope algorithm. At present, we can handle only the case of one entering variable for the double pivot simplex method. In future work, we aim to find a method that can handle the case of two entering variables.

References 1. Dantzig, G.B.: Linear Programming and Extensions. RAND Corporation, Santa Monica (1963) 2. Klee, V., Minty, G.: How Good is the Simplex Algorithm. In Equalities. Academic Press, New York (1972) 3. Terlaky, T.: A finite crisscross method for oriented matroids. J. Comb. Theory B 42(3), 319– 327 (1987) 4. Pan, P.: A new perturbation simplex algorithm for linear programming. J. Comput. Math. 17 (3), 233–242 (1999) 5. Pan, P.: Primal perturbation simplex algorithms for linear programming. J. Comput. Math. 18(6), 587–596 (2000) 6. Elhallaoui, I., Villeneuve, D., Soumis, F., Desaulniers, G.: Dynamic aggregation of setpartitioning constraints in column generation. Oper. Res. 53(4), 632–645 (2005) 7. Elhallaoui, I., Desaulniers, G., Metrane, A., Soumis, F.: Bi-dynamic constraint aggregation and subproblem reduction. Comput. Oper. Res. 35(5), 1713–1724 (2008) 8. Elhallaoui, I., Metrane, A., Soumis, F., Desaulniers, G.: Multi-phase dynamic constraint aggregation for set partitioning type problems. Math. Program. 123(2), 345–370 (2010) 9. Elhallaoui, I., Metrane, A., Desaulniers, G., Soumis, F.: An improved primal simplex algorithm for degenerate linear programs. INFORMS J. Comput. 23(4), 569–577 (2010) 10. Raymond, V., Soumis, F., Orban, D.: A new version of the improved primal simplex for degenerate linear programs. Comput. Oper. Res. 37(1), 91–98 (2010) 11. Jeroslow, R.G.: The simplex algorithm with the pivot rule of maximizing criterion improvement. Discr. Math. 4, 367–377 (1973) 12. Bland, R.G.: New finite pivoting rules for the simplex method. Math. Oper. Res. 2(2), 103– 107 (1977) 13. Pan, P.: Practical finite pivoting rules for the simplex method. OR Spectrum 12, 219–225 (1990) 14. Bixby, R.E.: Solving real-world linear programs: a decade and more of progress. Oper. Res. 50(1), 3–15 (2002)


15. Pan, P.: A largest-distance pivot rule for the simplex algorithm. Eur. J. Oper. Res. 187(2), 393–402 (2008) 16. Pan, P.: A fast simplex algorithm for linear programming. J. Comput. Math. 28(6), 837–847 (2010) 17. Csizmadia, Z., Illés, T., Nagy, A.: The s-monotone index selection rules for pivot algorithms of linear programming. Eur. J. Oper. Res. 221(3), 491–500 (2012) 18. Liao, Y.: The improvement on R. G. Blands method. In: Qi, E., Shen, J., Dou, R. (eds.) The 19th International Conference on Industrial Engineering and Engineering Management, Changsha, pp. 799–803 (2013) 19. Pan, P.: Linear Programming Computation, 1st edn. Springer, Heidelberg (2014) 20. Etoa, J.: New optimal pivot rule for the simplex algorithm. Adv. Pure Math. 6, 647–658 (2016) 21. Ploskas, N., Samaras, N.: Linear Programming Using MATLAB®, volume 127 of Springer Optimization and Its Applications, 1st edn. Springer, Cham (2017) 22. Shamos, M., Dan Hoey: Geometric intersection problems. In: 17th Annual Symposium on Foundations of Computer Science (SFCS 1976), pp. 208—215 (1976) 23. Megiddo, N.: Linear-time algorithms for linear programming in R3 and related problems. SIAM J. Comput. 12(4), 759–776 (1983) 24. Dyer, M.: Linear time algorithms for two- and three-variable linear programs. SIAM J. Comput. 13(1), 31–45 (1984) 25. Vitor, F., Easton, T.: The double pivot simplex method. Math. Meth. Oper. Res. 87, 109–137 (2018)

A New Integer Programming Model for Solving a School Bus Routing Problem with the Student Assignment Anthika Lekburapa, Aua-aree Boonperm(&), and Wutiphol Sintunavarat Department of Mathematics and Statistics, Faculty of Science and Technology, Thammasat University, Pathum Thani 12120, Thailand [email protected], {aua-aree,wutiphol}@mathstat.sci.tu.ac.th

Abstract. The aim of this research is to present an integer linear programming model for solving the school bus routing problem that accounts for the travel distance of all buses and the assignment of students to bus stops, considered from two points of view. Students' satisfaction is prioritized over the total travel distance of the buses in the first model, and vice versa in the second model.

Keywords: Capacity constraint · Integer linear programming · School bus routing problem

1 Introduction

One of the four elements of the marketing mix is distribution, which is the process of selling and delivering products and services from a manufacturer to customers. Distribution is also an essential part of manufacturers' operations, since poor distribution leads to a loss of trust of customers, retailers and suppliers. Therefore, it becomes important to improve product distribution to ensure that customers are pleased. There are many ways to improve the efficiency of distribution. One of them is finding a route for distributing products that maximizes the satisfaction of the manufacturer and the customers. For instance, in 2019, Mpeta et al. [1] introduced a binary programming model for solving a municipal solid waste collection problem; the obtained solution is an optimal route that reduces waste collection time and costs. The vehicle routing problem (VRP) is the problem of finding routes to distribute products from a central depot to customers in different locations and return to the depot. The VRP is commonly investigated by companies for product distribution. There are many factors involved in seeking routes in the VRP, such as the capacity of vehicles, the number of vehicles, the driver cost, the travel distance, etc. In 1959, the VRP was first studied by Dantzig et al. [2]. The purpose of their work was to find the shortest route for delivering gasoline between a bulk terminal and many stations according to the demand of each station.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 287–296, 2021. https://doi.org/10.1007/978-3-030-68154-8_28


The VRP has been extended to a more general problem named the open vehicle routing problem (OVRP). The OVRP relaxes the condition of the VRP that the vehicles must return to the central depot, and it is a more precise model for many businesses than the VRP with closed routes. There are several algorithms for solving the OVRP. For instance, Tarantilis et al. [3] presented a threshold-accepting algorithm called list based threshold accepting (LBTA) to solve the OVRP on artificial cases with 50–199 depots. The school bus routing problem (SBRP) is a type of OVRP which is designed for planning routes to pick up students from bus stops to a school, or from a school to bus stops. The SBRP was first considered by Newton and Thomas [4], who solved the problem by generating routes using a computer; the case study of that research was in Indiana with 1500 students. Afterward, many researchers improved models for solving the SBRP. In 1997, Braca et al. [5] attempted to minimize the number of buses with many constraints involving the capacity of buses, riding time, the school time window, the walking distance of students from home to bus stops and the number of students. They studied a case in New York with 838 bus stops and 73 schools and, more importantly, solved the problems of every school in one state together. In 2012, Jalel and Rafaa [6] developed a hybrid evolutionary computation based on an artificial ant colony with a variable neighborhood local search algorithm for the Tunisian school bus routing problem. In 2013, Kim et al. [7] formulated a mixed-integer programming problem for the SBRP and developed a heuristic algorithm based on harmony search to find its solution. The solutions obtained from the developed heuristic algorithm were compared with the exact solutions from CPLEX; the solutions of both methods are the same, but the developed heuristic method is faster. In 2016, Hashi et al. [8] used Clarke and Wright's algorithm for solving the school bus routing and scheduling problem with time windows, tested on the Scholastica School in Dhaka, Bangladesh. Other methods for solving the SBRP can be found in Park and Kim [9], Ellegood et al. [10], Bodin and Berman [11], Schittekat et al. [12], Eldrandaly and Abdallah [13] and the references therein. We now discuss the research on the SBRP by Bektas et al. [14], which is the main motivation for this paper. In that research, a model to minimize the number of buses with capacity and distance constraints was presented; both constraints prevent sub-tours. They used this model for solving the SBRP for a real case in Turkey. In their model, the SBRP is solved under the assumption that the number of students at each bus stop is known. However, the satisfaction of the students with their assigned bus stops is not considered in their results. This leads to the inspiration for investigating and improving the process for solving the SBRP from this point of view.
In addition, we provide a comparison of results providing from our models between two situations based on two distinct objectives including to maximize the total satisfaction of students and to minimize the total travel distance of all buses.


2 Problem Description and the Proposed Model

In this paper, the SBRP with student assignment depending on the satisfaction score of the students at each bus stop is investigated. For this problem, all students are assigned to bus stops under two objectives: maximizing the total score of all students, or minimizing the total travel distance of all buses. Each bus starts from some bus stop and picks up students on its way to the school before school starts, and it returns along the same route to drop the students off. The number of students in each bus must not exceed the capacity of the bus. Each bus travels a single path, and the total distance of each path is limited by a specified bound. An example of a school, bus stops and students' homes is illustrated in Fig. 1.

Fig. 1. An example of the nodes of the school, the bus stops and the students' homes

For the proposed model, the sets, parameters and decision variables are as follows.

Sets:
I is the set of all bus stops.
V is the set of all bus stops and the school, that is, $V = I \cup \{0\}$, where 0 is the school node.
$V' = V \cup \{d\}$, where d is a dummy node which is the first node of each path.
L is the set of all students.

Parameters:
$d_{ij}$ is the distance between nodes i and j for all $i, j \in I$, with
$$d_{dj} = \begin{cases} 0, & j \in I,\\ M, & j = 0, \end{cases}$$
where M is a large number.
$s_{il}$ is the satisfaction score of student l waiting for a school bus at bus stop i.


k is the number of vehicles.
Q is the capacity of a school bus.
T is the limit on the total length of each path.

Decision variables:
$$x_{ij} = \begin{cases} 1, & \text{if arc } (i, j) \text{ is traveled by a bus},\\ 0, & \text{otherwise}, \end{cases}
\qquad
y_{il} = \begin{cases} 1, & \text{if student } l \text{ is picked up at bus stop } i,\\ 0, & \text{otherwise}. \end{cases}$$

Before developing the capacity constraint of Bektas et al. [14], we recall its formulation:
$$\begin{aligned}
u_i - u_j + Qx_{ij} + (Q - q_i - q_j)x_{ji} &\le Q - q_j && \forall i, j \in I \text{ with } i \ne j,\\
u_i &\ge q_i && \forall i \in I,\\
u_i - q_i x_{0i} + Qx_{0i} &\le Q && \forall i \in I,
\end{aligned}$$

where $u_i$ is the total number of students on the bus after pick-up at bus stop i, and $q_i$ is the number of students waiting at bus stop i. In our problem, the number of students at each bus stop is unknown, because it results from the assignment chosen by the model. So, the quantity $q_i$ is replaced by $\sum_{l \in L} y_{il}$. We also change the index of the first node to make the formulation easier to read. The constraints become:
$$\begin{aligned}
u_i - u_j + Qx_{ij} + \Big(Q - \sum_{l \in L} y_{il} - \sum_{l \in L} y_{jl}\Big)x_{ji} &\le Q - \sum_{l \in L} y_{jl} && \forall i, j \in I \text{ with } i \ne j,\\
u_i &\ge \sum_{l \in L} y_{il} && \forall i \in I,\\
u_i - \sum_{l \in L} y_{il}\, x_{di} + Qx_{di} &\le Q && \forall i \in I.
\end{aligned}$$

We can see that the first and third constraints are nonlinear. A linearization is therefore applied to make the model easier to solve. To linearize, we introduce new decision variables $z_{ji} = \big(Q - \sum_{l \in L} y_{il} - \sum_{l \in L} y_{jl}\big)x_{ji}$ and $p_i = \sum_{l \in L} y_{il}\, x_{di}$. Then the capacity constraints can be rewritten as follows:

$$\begin{aligned}
u_i - u_j + Qx_{ij} + z_{ji} &\le Q - \sum_{l \in L} y_{jl} && \forall i, j \in I \text{ with } i \ne j,\\
z_{ji} &\le Qx_{ji} && \forall i, j \in I \text{ with } i \ne j,\\
z_{ji} &\le Q - \sum_{l \in L} y_{il} - \sum_{l \in L} y_{jl} && \forall i, j \in I \text{ with } i \ne j,\\
Q - \sum_{l \in L} y_{il} - \sum_{l \in L} y_{jl} - (1 - x_{ji})Q &\le z_{ji} && \forall i, j \in I \text{ with } i \ne j,\\
u_i &\ge \sum_{l \in L} y_{il} && \forall i \in I,\\
u_i - p_i + Qx_{di} &\le Q && \forall i \in I,\\
p_i &\le Qx_{di} && \forall i \in I,\\
p_i &\le \sum_{l \in L} y_{il} && \forall i \in I,\\
\sum_{l \in L} y_{il} - (1 - x_{di})Q &\le p_i && \forall i \in I.
\end{aligned}$$
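The four inequalities used for each product above follow the standard linearization of a product of a binary variable and a bounded term (a generic identity stated here for reference, not taken from the paper): if $x \in \{0,1\}$ and $0 \le w \le Q$, then $z = wx$ is represented exactly by

$$z \le Qx, \qquad z \le w, \qquad w - (1-x)Q \le z, \qquad z \ge 0,$$

with $w = Q - \sum_{l \in L} y_{il} - \sum_{l \in L} y_{jl}$ for $z_{ji}$ and $w = \sum_{l \in L} y_{il}$ for $p_i$; the nonnegativity of $z_{ji}$ and $p_i$ appears later as constraints (21)–(22).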

Next, we present the model that maximizes the total student satisfaction with the improved constraints:

$$\begin{aligned}
\text{Maximize}\quad & \sum_{i \in I}\sum_{l \in L} s_{il}\, y_{il} && (1)\\
\text{s.t.}\quad & \sum_{i \in I} x_{i0} \le k && (2)\\
& \sum_{i \in I} x_{di} \le k && (3)\\
& \sum_{i \in I} y_{il} = 1 \quad \forall l \in L && (4)\\
& \sum_{j \in I \cup \{0\}} x_{ij} = 1 \quad \forall i \in I && (5)\\
& \sum_{i \in I \cup \{d\}} x_{ij} = 1 \quad \forall j \in I && (6)\\
& u_i - u_j + Qx_{ij} + z_{ji} \le Q - \sum_{l \in L} y_{jl} \quad \forall i, j \in I \text{ with } i \ne j && (7)\\
& z_{ji} \le Qx_{ji} \quad \forall i, j \in I \text{ with } i \ne j && (8)\\
& z_{ji} \le Q - \sum_{l \in L} y_{il} - \sum_{l \in L} y_{jl} \quad \forall i, j \in I \text{ with } i \ne j && (9)\\
& Q - \sum_{l \in L} y_{il} - \sum_{l \in L} y_{jl} - (1 - x_{ji})Q \le z_{ji} \quad \forall i, j \in I \text{ with } i \ne j && (10)\\
& u_i \ge \sum_{l \in L} y_{il} \quad \forall i \in I && (11)\\
& u_i - p_i + Qx_{di} \le Q \quad \forall i \in I && (12)\\
& p_i \le Qx_{di} \quad \forall i \in I && (13)\\
& p_i \le \sum_{l \in L} y_{il} \quad \forall i \in I && (14)\\
& \sum_{l \in L} y_{il} - (1 - x_{di})Q \le p_i \quad \forall i \in I && (15)\\
& v_i - v_j + (T - d_{di} - d_{j0} + d_{ij})x_{ij} + (T - d_{di} - d_{j0} - d_{ji})x_{ji} \le T - d_{di} - d_{j0}(x_{ij} + x_{ji}) \quad \forall i, j \in I \text{ with } i \ne j && (16)\\
& v_i - d_{i0}x_{i0} \ge 0 \quad \forall i \in I && (17)\\
& v_i - d_{i0}x_{i0} + Tx_{i0} \le T \quad \forall i \in I && (18)\\
& x_{ij} \in \{0, 1\} \quad \forall i, j \in V' && (19)\\
& y_{il} \in \{0, 1\} \quad \forall i \in I,\ l \in L && (20)\\
& z_{ji} \ge 0 \quad \forall i, j \in I && (21)\\
& p_i \ge 0 \quad \forall i \in I. && (22)
\end{aligned}$$

The objective function (1) maximizes the total student satisfaction. Constraints (2) and (3) ensure that the number of vehicles used does not exceed the number of vehicles available. Constraint (4) ensures that each student boards at exactly one bus stop. Constraints (5) and (6) ensure that a vehicle visiting bus stop i must also leave bus stop i. Constraints (7)–(15) prevent sub-tours on each path through the number of students and the capacity of the buses. Constraints (16)–(18) ensure that the cumulative distance at a bus stop i, denoted by $v_i$, does not exceed the limit T. Constraints (19)–(20) state that the routing and assignment variables are binary, and constraints (21)–(22) state that the linearization variables are nonnegative.

In addition, we consider a second model obtained by changing the objective function (1) to reflect a different view of the problem, namely minimizing the total travel distance of all buses. In this case, we use the objective function
$$\text{Minimize}\quad \sum_{i \in V'}\sum_{j \in V'} d_{ij}\, x_{ij} \tag{23}$$

with the same constraints (2)–(22).
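As an illustration of how the assignment part of this model can be written in code, the sketch below uses PuLP (an open-source modelling layer with the CBC solver) as a stand-in for the CPLEX setup used in the experiments; the function and variable names are ours, and only the objective (1), the fleet-size constraints (2)–(3), the assignment constraint (4) and the degree constraints (5)–(6) are shown.

import pulp

def build_assignment_model(stops, students, score, k):
    # stops: list of bus-stop ids (set I); students: list of student ids (set L)
    # score[i][l]: satisfaction s_il; k: number of buses available
    nodes = stops + ['0', 'd']                      # school node '0', dummy start 'd'
    prob = pulp.LpProblem("SBRP_satisfaction", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", [(i, j) for i in nodes for j in nodes if i != j],
                              cat="Binary")
    y = pulp.LpVariable.dicts("y", [(i, l) for i in stops for l in students],
                              cat="Binary")
    # (1) maximise the total satisfaction of the student assignment
    prob += pulp.lpSum(score[i][l] * y[(i, l)] for i in stops for l in students)
    # (2)-(3) at most k buses enter the school and leave the dummy node
    prob += pulp.lpSum(x[(i, '0')] for i in stops) <= k
    prob += pulp.lpSum(x[('d', i)] for i in stops) <= k
    # (4) every student boards at exactly one bus stop
    for l in students:
        prob += pulp.lpSum(y[(i, l)] for i in stops) == 1
    # (5)-(6) each bus stop has exactly one successor and one predecessor
    for i in stops:
        prob += pulp.lpSum(x[(i, j)] for j in stops + ['0'] if j != i) == 1
        prob += pulp.lpSum(x[(j, i)] for j in stops + ['d'] if j != i) == 1
    # The capacity constraints (7)-(15) and distance constraints (16)-(18) would
    # be added here with the u, v, z and p variables in the same style.
    return prob

# Usage sketch: prob = build_assignment_model(stops, students, score, k); prob.solve()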

3 Experimental Results

In this section, an experiment is presented to show the use of our models. The experiment solves the SBRP for a kindergarten with 70 students using the school bus service. Each school bus has 13 seats, and the kindergarten has 8 school buses.


There are 12 bus stops in our experiment. We solve the problem for the two objectives using CPLEX. Results for the first and the second objective functions are presented in Table 1 and Table 2, respectively. In these tables, the first column is the bus number and the second column is the path of each bus for picking up students. The number of students picked up by each bus is shown in the third column, and the satisfaction (%) of the students waiting at the stops on that path is shown in the fourth column. The final column gives the total travel distance of each bus for picking up students along its path to the school.

Table 1. Results from the proposed model with the objective function (1).

Bus | Bus route | Number of students | Total satisfaction score (%) | Total distance (km)
1   | d-3-5-0   | 11 | 100   | 20.5
2   | d-4-2-0   | 12 | 100   | 37.1
3   | d-6-9-0   | 11 | 95.46 | 14
4   | d-7-8-0   | 11 | 97.23 | 14.3
5   | d-11-1-0  | 12 | 99.17 | 34.3
6   | d-12-10-0 | 13 | 100   | 13.7
Overall         |    | 98.71 | 133.9

Table 2. Results from the proposed model with the objective function (23).

Bus | Bus route | Number of students | Total satisfaction score (%) | Total distance (km)
1   | 1-2-0     | 12 | 73.33 | 22.3
2   | 3-8-0     | 12 | 84.17 | 15
3   | 5-10-0    | 10 | 90    | 13.35
4   | 7-4-0     | 12 | 76.67 | 8.9
5   | 9-6-0     | 11 | 83.64 | 11
6   | 12-11-0   | 13 | 78.46 | 12.5
Overall         |    | 80.71 | 83.05

In Table 1, the total distance traveled by the buses is 133.9 km and the optimal objective value is 691 points, which corresponds to 98.71%. This implies that the students are highly satisfied with the school bus service. Table 2 shows the solution for the second objective function: the total distance of the buses is 83.05 km and the total satisfaction score is 565. A comparison of the satisfaction score and the total distance for the first and the second objective functions is given in Table 3. The two objective functions lead to different solutions, and the choice between them depends on the purpose of the user.

Table 3. Comparison of the first and the second models.

Model                     | Total satisfaction score (%) | Total distance (km)
The first proposed model  | 98.71 | 133.9
The second proposed model | 80.71 | 83.05

Figure 2 shows an example route from the first model. It can be seen that each student travels to the designated bus stop; the walking distance of each student varies depending on the satisfaction scores the student provides.

Fig. 2. An example route from the first model.

Fig. 3. An example route from the second model.


The path of each school bus is quite complicated because we do not take the shortest route into account; we only require that the distance of each route does not exceed the specified limit. Figure 3 shows an example route from the second model. The satisfaction of the students with their assigned bus stops is 80.71%. Since the shortest total distance is found, travel time and expenses are reduced. It follows that the school can save money, so the school bus price for students can be reduced. As traveling time is reduced, students also have more time for homework or family activities.

4 Conclusion

In this paper, we presented two integer linear programming models for solving the SBRP with capacity and distance constraints. In these two models, students are assigned to bus stops based on the choice of objective function. The first model maximizes the total satisfaction score of the students, and the second model minimizes the total travel distance of the buses. The selection of the model depends on the needs of the user: if the user has the students' needs in mind, the first objective function should be chosen; if the user is concerned with the distance, time or cost of travel, then the second objective function should be chosen.

Acknowledgement. This work was supported by Thammasat University Research Unit in Fixed Points and Optimization.

References 1. Mpeta, K.N., Munapo, E., Ngwato, M.: Route optimization for residential solid waste collection: Mmabatho case study. In: Intelligent Computing and Optimization 2019. Advances in Intelligent Systems and Computing, vol. 1072, pp. 506–520 (2020) 2. Dantzig, G.B., Ramser, J.H.: The Truck dispatching problem. INFORMS. Manage. Sci. 6(1), 80–91 (1959) 3. Tarantilis, C.D., Ioannou, G., Kiranoudis, C.T., Prastacos, G.P.: Solving the open vehicle routing problem via a single parameter metaheuristic algorithm. J. Oper. Res. Soc. 56(5), 588–596 (2005) 4. Newton, R.M., Thomas, W.H.: Design of school bus routes by computer. Socio-Econ. Plan. Sci. 3(1), 75–85 (1969) 5. Braca, J., Bramel, J., Posner, B., Simchi-Levi, D.: A computerized approach to the New York City school bus routing problem. IIE Trans. 29, 693–702 (1997) 6. Jalel, E., Rafaa, M.: The urban bus routing problem in the Tunisian case by the hybrid artificial ant colony algorithm. Swarm Evol. Comput. 2, 15–24 (2012) 7. Kim, T., Park, B.J.: Model and algorithm for solving school bus problem. J. Emerg. Trends Comput. Inf. Sci. 4(8), 596–600 (2013) 8. Hashi. E.K., Hasan, M.R., Zaman, M.S.: GIS based heuristic solution of the vehicle routing problem to optimize the school bus routing and scheduling. In: Computer and Information Technology (ICCIT), 2016 19th International Conference, pp. 56–60. IEEE (2016) 9. Park, J., Kim, B.I.: The school bus routing problem: a review. Eur. J. Oper. Res. 202, 311– 319 (2010)


10. Ellegood, W.A., Solomon, S., North, J., Campbell, J.F.: School bus routing problem: contemporary trends and research directions. Omega 95, 102056 (2020) 11. Bodin, L.D., Berman, L.: Routing and scheduling of school buses by computer. Transp. Sci. 13(2), 113–129 (1979) 12. Schittekat, P., Sevaux, M., Sorensen, K.: A mathematical formulation for a school bus routing problem. In: Proceedings of the IEEE 2006 International Conference on Service Systems and Service Management, Troyes, France (2006) 13. Eldrandaly, K.A., Abdallah, A.F.: A novel GIS-based decision-making framework for the school bus routing problem. Geo-spat. Inform. Sci. 15(1), 51–59 (2012) 14. Bektas, T., Elmastas, S.: Solving school bus routing problems through integer programming. J. Oper. Res. Soc. 58(12), 1599–1604 (2007)

Distributed Optimisation of Perfect Preventive Maintenance and Component Replacement Schedules Using SPEA2

Anthony O. Ikechukwu1(&), Shawulu H. Nggada2, and José G. Quenum1

1 Namibia University of Science and Technology, Windhoek, Namibia
[email protected], [email protected]
2 Higher Colleges of Technologies, Ras Al Khaimah Women's Campus, Ras Al Khaimah, United Arab Emirates
[email protected]

Abstract. The upsurge of technological opportunities has brought about the speedy growth of industrial processes and machinery. The increased size and complexity of these systems, together with a high dependence on them, has necessitated intensified maintenance processes, requiring a more effective maintenance scheduling approach to minimise the number of failure occurrences. This can be realised through well-articulated perfect preventive maintenance with component replacement (PPMR) schedules, from the infancy stage through to the completion stage. Using the Strength Pareto Evolutionary Algorithm 2 (SPEA2), we devise a multi-objective optimisation approach that uses dependability and cost as objective functions. Typically, due to the large scale of the problems, a SPEA2 implementation on a single node proves to be computationally challenging, taking up to 1 h 20 min to run an instance of SPEA2 PPMR scheduling optimisation; the search time, the quality of the solution and its accuracy all suffer. We address this limitation by proposing a distributed architecture based on MapReduce, which we superimpose on SPEA2. The evaluation of our approach in a case study presents the following results: (1) our approach offers an effective optimisation modelling mechanism for PPMR; (2) the distributed implementation tremendously improves performance, reducing the computational time of the optimisation by 79% across 4 nodes (1 master node and 3 worker nodes), and improves the quality and accuracy of the solution without introducing much overhead.

Keywords: Maintenance scheduling · Component replacement · Hierarchical · Population balancing · Parallel · Strength Pareto Evolutionary Algorithm 2 (SPEA2) · MapReduce · Optimisation

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 297–310, 2021. https://doi.org/10.1007/978-3-030-68154-8_29

1 Introduction

Maintenance and appropriate preventive maintenance schedules have been shown to elongate component useful life [1, 19] and improve reliability and availability. However, at a certain point the component becomes unmaintainable or its



reliability falls below a predefined threshold. Consequently, the work in (Ikechukwu et al. 2019) considered component replacement as an improvement factor, in the form of an optimisation parameter, to evaluate and determine [2] when a component's reliability has dropped below a defined acceptable level, which informs whether the component will be maintained or replaced. At this stage, the cost of maintenance is computed so as to either include the cost of component replacement or not, when only a maintenance activity is carried out. We established mathematical models for reliability, unavailability and cost under perfect preventive maintenance using the Weibull distribution and an age reduction model. We then developed a variant of SPEA2 and defined an optimisation approach for optimising PPMR schedules. The PPMR schedules of a Fuel Oil Service System (FOSS) model case study are optimised with respect to unavailability and cost using the developed approach. SPEA2 is an extension of SPEA; it is an enhanced elitist multi-objective, multi-directional optimisation algorithm that mitigates some of the drawbacks of SPEA. SPEA2 employs a density estimation method to differentiate among individuals possessing similar raw fitness values. It uses an archive update process in which the archive size remains the same throughout, and its truncation method avoids removing boundary solutions. SPEA2 outperforms most other evolutionary algorithms (EAs) on almost all test problems [20], at the expense of computational time when processed step by step (sequentially), because of its huge iterative processes. Evolutionary algorithms like SPEA2 are generally slow, especially when the evaluation function is computationally intensive. It takes a long time for an evolutionary algorithm to find near-optimal solutions when processed sequentially; therefore, techniques are needed to speed up this process. It is our view that distributing the optimisation process will help achieve the needed speed-up. With data-parallel programming models, data mapping and task assignment to the processors can be achieved; the data-parallel model adopted in this work is MapReduce. The MapReduce programming model [18] is efficient in solving large-scale data problems in a parallel and distributed environment, and the introduction of parallelism in a distributed architecture for solving complex problems can increase performance [3]. This paper also adopts other methods, such as hierarchical decomposition and population balancing, in the distributed processing to further reduce computational time. The remainder of the paper is organised as follows. Section 2 discusses related work. Section 3 discusses the distributed solution, which includes the MoHPBPSPEA2 candidate solution algorithm, architecture and flowcharts, while Sect. 4 presents the MoHPBPSPEA2 implementation. Section 5 presents and discusses the results. Section 6 draws some conclusions and sheds light on future work. Section 7 contains the references.


1.1 Challenges with Sequential Evolutionary Algorithm



Because an EA executes its steps sequentially, with crossover and mutation carried out on pairs of individuals, the computational time grows for large populations as the number of steps increases. Another aspect that further increases execution time is the number of objective functions to be met and the complexity of the system model under consideration. A further challenge with a sequential implementation is its memory consumption for storing a huge population: the memory requirements are high because the entire population must be available in memory for the EA operations. Even though an EA still converges towards optimal solutions, a sequential EA may sometimes only partially explore the search space, and thus output local optima instead of the global optima it would have found had it completed its full run. Consequently, real-world problems that require fast solutions are often not considered for EA computation. This is because EAs do not find solutions merely by using a mathematical formulation of the problem, but are stochastic, non-linear, discrete and multidimensional. The usefulness of evolutionary algorithms can be improved by reducing their computational time so that solutions are reached faster. This paper proposes a technique to reduce the computational time of evolutionary algorithms.

2 Related Work

In the first International Conference on Genetic Algorithms (ICGA) there were no papers about parallel GAs at all. This changed at the second ICGA in 1987, where six papers were published; from then on there has been a steady flow of papers in conferences and journals on GAs and parallel computation. To reduce the computational load of genetic search, many methods for searching in a parallel and distributed manner have been proposed [4–6]. Abramson and Abela [7] presented a paper discussing the application of an EA to the school timetabling problem and showed that the execution time can be reduced by using a commercial shared-memory multiprocessor. The program was written in Pascal and run on an Encore Multimax shared-memory multiprocessor. The times were taken for a fixed number of generations (100), so that the effects of relative solution quality could be ignored, and showed that a speedup was attained by a parallel implementation of the program. Although much work has been reported on PEA models (and implementations on different parallel architectures) involving single-objective optimization problems (SOPs), PEA models can also be applied to multiobjective optimization problems (MOPs) [8]. MOPs normally have several (usually conflicting) objectives that must be satisfied at the same time. The first multi-objective GA, called the Vector Evaluated Genetic Algorithm (VEGA), was proposed by Schaffer [9]. Afterward, several major multi-objective evolutionary algorithms were developed, such as the Multi-objective Genetic Algorithm (MOGA) [10], the Niched Pareto Genetic Algorithm [11], the Random Weighted Genetic Algorithm (RWGA) [12], the Nondominated Sorting Genetic Algorithm (NSGA) [13] and the Strength Pareto Evolutionary Algorithm (SPEA) [14].



Muhammad Ali Ismail [15] presented work using a master-slave paradigm on a Beowulf Linux cluster with the MPI programming library. He gives pseudo-code that first initialises the base population and the number of nodes present in the cluster, then assigns the fitness evaluation to the slave nodes in the cluster. The slaves compute the fitness objective and perform mutation, and then send their results back to the master, which assembles the final result. That paper presented a view of the implementation and realisation of such algorithms on a parallel architecture. Lim, Ong, Jin, Sendhoff and Lee proposed a Hierarchical Parallel Genetic Algorithm framework (GE-HPGA) [16] based on standard grid technologies. The framework offers a comprehensive solution for efficient parallel evolutionary design of problems with computationally expensive fitness functions, by providing novel features that conceal the complexity of a grid environment through an extended GridRPC API and a metascheduler for automatic resource discovery (Fig. 1).

Multiobjective Hierarchical Population Balanced and Parallel SPEA2 using MapReduce (MoHPBPSPEA2)
1. Accept user input objectives (tasks).
2. Generate a population, P, at random (the solution space).
3. Decompose the problem based on population and objectives (by the server).
   3.1 Divide P into subp1, subp2, ..., subpN subpopulations (N is the number of nodes used to solve the problem).
4. Parallel execution on each node (initialise parallel processing on the nodes).
5. For subpi, i = 1 ... N, execute the next steps in parallel on the nodes (N).
6. On each node:
   6.1 Receive the subpopulation subpi and the objectives (tasks).
   6.2 Perform parallel SPEA2 on the workers:
       6.2.2 For subpw, w = 1 ... W, execute the next steps in parallel on the available working threads (W).
           6.2.2.1 Apply the selection mechanism and the genetic operators using MapReduce.
           6.2.2.2 If the termination criteria are not satisfied, return to 6.2.2.
   6.3 Combine the results from the nodes.
   6.4 Send the results to the server.
7. If the termination criteria are not met, return to 5.
8. Execute a local EA on the server for the best results.
9. Display the best results (by the server).

Fig. 1. The proposed MoHPBPSPEA2 Algorithm


3 The Distributed Solution

It is evident from the literature that a range of EA variants, particularly hierarchical and parallel EAs, perform better than sequential EAs [16, 17]. Consequently, we consider this kind of EA and propose applying a hierarchical and parallel EA (MoHPBPSPEA2) to the multiobjective optimisation problem of perfect preventive maintenance with component replacement schedules, using a MapReduce model as the construct for the parallel design. MapReduce [18] is a software framework that provides distributed computing on large-scale problems over a cluster of nodes. The following section presents an architecture showing how a multiobjective real-world problem can be solved using this framework.

3.1 The Proposed Architecture

Figure 2 shows the proposed system architecture. The setup consists of a server and a set of nodes connected together as a cluster, such that information flows from the server to the nodes, between the nodes, and from the nodes back to the server. Each node contains one or more processors with threads, and each thread computes a process; a process consists of a task (objective) and a subpopulation. We propose to apply the objective (task) and population (data) decomposition strategies together to decompose SPEA2. The decomposition construct is hierarchical, and all nodes in a cluster are offered an equal share of the population to work on with the same task. The global decomposition level (the server) decomposes the SPEA2 tasks (objectives) and assigns them to the local decomposition level (the nodes). Each local level (node) then takes responsibility for decomposing the given population and task and runs the tasks in parallel on the cluster. The MapReduce model offers a parallel design arrangement for streamlining MapReduce use in a distributed architecture: it splits a large task into smaller tasks and parallelises their execution. The proposed system architecture shows the global and local levels of MoHPBPSPEA2 decomposition; a small illustrative sketch of this two-level decomposition is given after Fig. 2.

Fig. 2. Proposed system architecture
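To make the hierarchical decomposition concrete, the following is a minimal, illustrative Python sketch (not the authors' Julia implementation) of the two-level split: the server divides the population P into N nearly equal subpopulations, and each node further divides its share among W worker threads that evaluate the objectives in parallel. The objective functions, node count and thread count used here are placeholder assumptions.

import random
from concurrent.futures import ThreadPoolExecutor

def split_evenly(items, parts):
    # Population balancing: split a list into `parts` nearly equal chunks (the P/N rule).
    k, m = divmod(len(items), parts)
    return [items[i * k + min(i, m):(i + 1) * k + min(i + 1, m)] for i in range(parts)]

def node_level(subpopulation, objectives, num_threads=4):
    # Local decomposition: a node splits its subpopulation among worker threads,
    # and each thread evaluates every objective on its chunk.
    chunks = split_evenly(subpopulation, num_threads)
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        results = pool.map(lambda chunk: [[f(ind) for f in objectives] for ind in chunk], chunks)
    return [fitness for chunk in results for fitness in chunk]

if __name__ == "__main__":
    objectives = [sum, lambda ind: max(ind) - min(ind)]          # placeholder objectives
    population = [[random.random() for _ in range(5)] for _ in range(500)]
    num_nodes = 4                                                # global decomposition level (server)
    per_node = split_evenly(population, num_nodes)
    # In the real system each node_level call would run on a separate cluster node.
    all_fitness = [node_level(sub, objectives) for sub in per_node]
    print(sum(len(f) for f in all_fitness), "fitness evaluations across", num_nodes, "nodes")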


3.2 The Proposed Distributed Algorithm and Flowcharts

We propose the MoHPBPSPEA2 candidate solution given in Fig. 1, which we superimpose on the SPEA2 variant developed in [2] for the system architecture described above. The MoHPBPSPEA2 approach of distributing jobs (tasks and population) across nodes and threads has two major benefits. First, it gives MoHPBPSPEA2 the construct to accommodate different computer specifications, that is, to run on a node with many threads or on a node with only one thread, and hence on various hardware architectures. Second, it creates the opportunity to study variables that may affect the performance of MoHPBPSPEA2, such as communication between nodes of the same or different computer types and communication between threads of the same or different nodes, so as to offer a better understanding of MoHPBPSPEA2 and enable its improvement.

4 MoHPBPSPEA2 Implementation

In the proposed approach the problem is first decomposed by the number of objectives (tasks) and the population P across the individual nodes Ni. The second stage deals with population balancing through parallel decomposition of the population on each node: each node is offered P/N individuals, where P is the population size and N is the number of nodes used to solve the problem. The resulting subpopulation, which forms part of the jobs or tasks, is assigned to each node, and each node further distributes its workload among its working threads so that the jobs are performed in parallel across the cluster. This dual-phase distribution also contributes to the efficiency and effectiveness of the developed MoHPBPSPEA2. The dataset is first processed and stored in the datastore, and almost all the time-consuming computations and evaluations (population generation, fitness assignment, selection and the objective functions) are moved to the mapper, whose output is passed to the reducer as input for crossover and mutation; finally, the reducer outputs the results. These strategies avoid repeated dataset processing (input-output) at different stages of the computation, thereby saving both computation time and communication time among the nodes. The results from all participating nodes in the cluster are then combined and sent to the server, which executes a local EA to obtain the best results and displays them.
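The mapper/reducer split described above can be sketched as follows. This is a hedged, in-memory Python illustration rather than the authors' MapReduce deployment: the map phase performs objective evaluation and selection, the reduce phase performs crossover and mutation, and the genetic operators and objectives are simplified placeholders.

import random

def map_phase(subpopulation, objectives):
    # Mapper: evaluate the objective functions and emit selected parents
    # (binary tournament on the summed objective values, assuming minimisation).
    scored = [(ind, [f(ind) for f in objectives]) for ind in subpopulation]
    parents = []
    for _ in range(len(scored)):
        (ind_a, fit_a), (ind_b, fit_b) = random.sample(scored, 2)
        parents.append(ind_a if sum(fit_a) <= sum(fit_b) else ind_b)
    return [("parents", p) for p in parents]

def reduce_phase(pairs, mutation_rate=0.05):
    # Reducer: pair up the selected parents, apply one-point crossover and Gaussian mutation.
    parents = [p for _, p in pairs]
    offspring = []
    for i in range(0, len(parents) - 1, 2):
        cut = random.randrange(1, len(parents[i]))
        child = parents[i][:cut] + parents[i + 1][cut:]
        offspring.append([g + random.gauss(0, 0.1) if random.random() < mutation_rate else g
                          for g in child])
    return offspring

if __name__ == "__main__":
    objectives = [sum, lambda ind: max(ind) - min(ind)]      # placeholder objectives
    pop = [[random.random() for _ in range(6)] for _ in range(100)]
    next_generation = reduce_phase(map_phase(pop, objectives))
    print(len(next_generation), "offspring produced by the reduce phase")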


Julia is adopted to implement the developed distributed algorithm. The reliability threshold tr used is 0.8, the shortest PM interval T is 180, the population and archive sizes are 500, and the run is stopped after 500 generations; the MoHPBPSPEA2 algorithm is deployed on the cluster of nodes. The proposed approach is also generic, offering the flexibility to expand to more than one local cluster, in which case the server is connected to the n local clusters and all the nodes on each cluster run in parallel. The algorithms and flowcharts presented illustrate the processes executed to implement MoHPBPSPEA2. A simple model of the fuel oil service system of a ship (FOSS), the case study described in [2], is used to demonstrate the developed approach. The constituent components of the FOSS are marked with failure-behaviour data and their short names as described in [1] (Figs. 3 and 4).

Fig. 3. Flowchart showing process flow on the server


Fig. 4. Flowchart showing process flow on the nodes

5 Results

A set of optimal PPMR schedules for the fuel oil system was obtained for (i) serial processing and (ii) parallel and distributed processing. A set of trade-off PM schedules is also produced, shown in the Pareto frontiers of Table 1 and Fig. 6 and of Table 2 and Fig. 5. Each solution, or PM schedule, represents an optimal trade-off between unavailability and cost.

5.1 Discussion

A total of 53 optimal solutions were obtained for the FOSS PPMR scheduling optimisation via the serial process and 64 optimal solutions via the parallel and distributed process. The first and last five optimal PPM schedules for the serial and the parallel and distributed processes are presented in Table 2 and Table 1 respectively, with their graphical representation in Figs. 5 and 6 respectively. A comparison using statistical tools is shown in Table 3.


Table 1. Optimal PPMR schedules – parallel and distributed optimisation via NUST Cluster of Servers

Table 2. Optimal PPMR schedules – serial optimisation on a NUST Server


Fig. 5. Pareto frontier of PPMR schedules via serial optimisation on a NUST Server

Fig. 6. Pareto frontier of PPMR schedules via parallel and distributed optimisation on NUST Cluster of Servers


Table 3. Statistical tool comparison for serial on server versus parallel and distributed optimisation of FOSS PPMR scheduling

From Table 3, the minimum unavailability and minimum cost values are the same for the serial and the parallel and distributed processes. The maxima, however, differ: the parallel and distributed process has an unavailability maximum 0.012276 greater than that of the serial process, so its unavailability range is also 0.012276 wider, and its cost maximum is 166000 greater than that of the serial process, so its cost range is also 166000 wider. This implies that the parallel and distributed process explored a wider space of the feasible region than the serial process and also obtained more optimal PPMR schedules. The unavailability standard deviation of the parallel and distributed process is 0.00151 smaller than that of the serial process, whereas its cost standard deviation is 12026 greater. Similarly, its unavailability mean is 0.014682 smaller than that of the serial process, while its cost mean is 16347 greater. To obtain a clearer picture of the optimisation of the FOSS PPMR schedules, however, the distances between the objective-function values of the two processes need to be examined further. From Table 2 (serial), the lowest optimal schedule has an unavailability of 0.094808367 and a cost of 4161191, found in generation 500, while the highest optimal schedule, also found in generation 500, has an unavailability of 0.37103334 and a cost of 2741967. In Table 1 (parallel and distributed), the lowest optimal schedule has an unavailability of 0.094808367 and a cost of 4373191, found at generation 497, and the highest optimal schedule, obtained at generation 499, has an unavailability of 0.383308865 and a cost of 2704967. The unavailability and cost variances for the serial process are 0.276225 and 1419224 respectively, and for the parallel and distributed process 0.2885 and 1668224 respectively. Comparing these variance values, it is seen that the distances (spread) between the optimal solutions obtained by the parallel and distributed process shrank more than those of the serial process. This means that the optimal PPMR schedules


obtained via the parallel and distributed process are closer to one another than those of the serial process, satisfying one of the goals of multiobjective optimisation: the smaller the distances in objective-function values between one non-dominated solution and the others, the more diverse the set of non-dominated solutions. Interestingly, from Fig. 5 (serial), the first and last optimal solutions of the serial Pareto frontier have objective-function values of approximately 0.38 unavailability and 2.8 × 10^6 cost, and 0.08 unavailability and 4.3 × 10^6 cost respectively, giving objective-space differences of 0.30 in unavailability and 1.5 × 10^6 in cost. In the same vein, from Fig. 6 (parallel and distributed), the first and last optimal solutions of the parallel and distributed Pareto frontier have objective-function values of approximately 0.39 unavailability and 2.8 × 10^6 cost, and 0.07 unavailability and 4.4 × 10^6 cost respectively, giving objective-space differences of 0.32 and 1.6 × 10^6. From this analysis, the parallel and distributed process strikes a better balance between the two objectives than the serial process; that is, the optimal schedules (solutions) obtained via the parallel and distributed process give a better trade-off between unavailability and cost. It can be seen from Fig. 5, Fig. 6, Table 1 and Table 2 that the PPMR schedules obtained via the parallel and distributed process are superior to those obtained via the serial process. The computational time for optimisation via the parallel and distributed process is 3792 s lower than the serial time, as shown in Table 3; in other words, the parallel and distributed process improved the optimisation run time by 79%.

6 Conclusion

The developed scalable, hierarchical and distributed SPEA2 algorithm (MoHPBPSPEA2) for optimising perfect preventive maintenance with component replacement schedules (PPMR) provides a wider contribution to knowledge in the field of AI application. The results obtained from subjecting the FOSS constituent components to the developed optimisation approach met expectations: optimising PPMR schedules in a distributed process can reduce computational time, enhance cost effectiveness, inject more diversity into the solution space and produce better design solutions. There are many possibilities for future work. The work in this paper can be extended to: (1) investigate and demonstrate the effect on the design solution of using an island model with SPEA2 for perfect preventive maintenance with replacement in a distributed environment, since an island model could further enhance solution quality; (2) investigate optimising imperfect preventive maintenance schedules under a replacement policy using SPEA2 in a distributed process; (3) investigate a network model to track the fitness of system components between PM times and to offer data that might be useful in recommending preventive measures should the system require attention before the next PM time.


(4) Investigate and demonstrate the effect of executing the PPMR schedules on more than four nodes or on more than one cluster; this is expected to further reduce the computational time significantly.

References
1. Nggada, S.H.: Reducing component time to dispose through gained life. Int. J. Adv. Sci. Technol. 35(4), 103–118 (2011)
2. Ikechukwu, O.A., Nggada, S.H., Quenum, J.G., Samara, K.: Optimisation of perfect preventive maintenance and component replacement schedules (PPMCRS) using SPEA2. In: Proceedings of ITT 2019, 6th International Conference on Emerging Technologies on Blockchain and Internet of Things, HCT Women Campus, Ras Al Khaimah, UAE, 20–21 November 2019. IEEE Inspec accession number 19569157 (2019). https://doi.org/10.1109/itt48889.2019.9075124. ISBN 978-1-7281-5061-1
3. Pawar, R.S.: Parallel and distributed evolutionary algorithms. Seminar Computational Intelligence in Multi-Agent Systems, 10 July 2018
4. Cantú-Paz, E.: A survey of parallel genetic algorithms. Calculateurs Paralleles, Reseaux et Systems Repartis 10(2), 141–171 (1998)
5. Cantú-Paz, E., Goldberg, D.E.: Are multiple runs of genetic algorithms better than one? In: Proceedings of the 2003 International Conference on Genetic and Evolutionary Computation (GECCO 2003) (2003)
6. Hiroyasu, T., Kaneko, M., Hatanaka, K.: A parallel genetic algorithm with distributed environment scheme. In: Proceedings of the 1999 IEEE International Conference on Systems, Man, and Cybernetics (SMC 1999) (1999)
7. Abramson, D., Abela, J.: A parallel genetic algorithm for solving the school timetabling problem. In: Proceedings of the 15th Australian Computer Science Conference, Hobart, pp. 1–11, February 1992
8. De Toro, F., Ortega, J., Fernandez, J., Diaz, A.: PSFGA: a parallel genetic algorithm for multiobjective optimization. In: Proceedings of the 10th Euromicro Workshop on Parallel, Distributed and Network-based Processing (2002)
9. Schaffer, J.D.: Multiple objective optimization with vector evaluated genetic algorithms. In: Proceedings of the International Conference on Genetic Algorithms and Their Applications (1985)
10. Fonseca, C.M., Fleming, P.J.: Multiobjective genetic algorithms. In: IEEE Colloquium on Genetic Algorithms for Control Systems Engineering (Digest No. 1993/130) (1993)
11. Horn, J., Nafpliotis, N., Goldberg, D.E.: A niched Pareto genetic algorithm for multiobjective optimization. In: Proceedings of the 1st IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence (1994)
12. Murata, T., Ishibuchi, H.: MOGA: multi-objective genetic algorithms. In: Proceedings of the 1995 IEEE International Conference on Evolutionary Computation, Perth (1995)
13. Srinivas, N., Deb, K.: Multiobjective optimization using nondominated sorting in genetic algorithms. J. Evol. Comput. 2(3), 221–248 (1994)
14. Zitzler, E., Thiele, L.: Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans. Evol. Comput. 3(4), 257–271 (1999)
15. Ismail, M.A.: Parallel Genetic Algorithms (PGAs): master slave paradigm approach using MPI, ETech (2004)


16. Lim, D., Ong, Y., Jin, Y., Sendhoff, B., Lee, B.: Efficient hierarchical parallel genetic algorithms using grid computing. Future Gener. Comput. Syst. 23(4), 658–670 (2007)
17. Sefrioui, M., Srinivas, K., Periaux, J.: Aerodynamic shape optimization using a hierarchical genetic algorithm. In: European Conference on Computational Methods in Applied Science and Engineering (ECCOMAS 2000) (2000)
18. Verma, A., Llora, X., Goldberg, D.E., Campbell, R.H.: Scaling genetic algorithms using MapReduce. In: Proceedings of the 9th International Conference on Intelligent Systems Design and Applications, ISDA 2009 (2009)
19. Nggada, S.H.: Multi-objective system optimisation with respect to availability, maintainability and cost. Ph.D. dissertation, The University of Hull, vol. 3 (2012)
20. Zitzler, E., Deb, K., Thiele, L.: Comparison of multiobjective evolutionary algorithms: empirical results (revised version). Technical Report 70, Computer Engineering and Networks Laboratory (TIK), Swiss Federal Institute of Technology (ETH) Zurich, Gloriastrasse 35, CH-8092 Zurich, Switzerland (1999)

A Framework for Traffic Sign Detection Based on Fuzzy Image Processing and Hu Features

Zainal Abedin and Kaushik Deb

Chittagong University of Engineering and Technology (CUET), Chattogram, Bangladesh
[email protected], [email protected]

Abstract. The Traffic Sign Recognition (TSR) system is an essential part of traffic management, serving as a support system for drivers and intelligent vehicles. Traffic Sign Detection (TSD) is the prerequisite step of an automatic TSR system. This paper proposes a TSD framework that exploits fuzzy image processing and invariant geometric moments. In this framework, a fuzzy inference system is applied to convert the HSV image into a gray-tone image, and a statistical threshold is then used for segmentation. After shape verification of every connected component using Hu moments and a Quadratic Discriminant Analysis (QDA) model, the candidate signs are detected by referencing bounding-box parameters. The framework is simulated in different complex scenarios in both day and night modes. Experimental results show that its performance is satisfactory and comparable with the state of the art. The proposed framework yields a 94.86% F-measure on Bangladesh road signs and 93.01% on German road signs.

Keywords: Traffic sign detection · Fuzzy inference system · Hu moments · Quadratic discriminant analysis

1 Introduction

Traffic signs are an indispensable component for assisting traffic police and providing a safety mechanism, controlling the flow of traffic and guiding pedestrians and drivers in every aspect of the road environment. Automatic traffic sign recognition using computer vision attracts special interest in real-time applications such as intelligent vehicles and traffic maintenance. In an automatic TSR system, an on-board module is used to detect and return the information of traffic signs for proper navigation of the vehicle. Regarding transportation maintenance, well-positioned signs with accurate and exact information definitely enhance road security and safety. In a word, TSR is an important part of the Intelligent Transportation System (ITS) [1,2,4]. Traffic signs are designed using distinct colours and shapes, which are salient, visible and easily discriminated from the environment by humans. Most of


the signs are made using red, blue and yellow colors, and their shapes are triangular, circular, square and rectangular. A sign is made distinctive by three things: border color, pictogram and background. The signs are installed alongside the road so that drivers can easily detect them [11]. The shape and colors also determine the categories of signs. The properties of Bangladesh road signs are outlined in Table 1 and some examples of traffic signs are illustrated in Fig. 1.

Table 1. Properties of Bangladesh traffic signs [5].

Color                            | Shape               | Type
Red border with white background | Circular            | Regulatory
Red border with white background | Triangular          | Warning
Blue background                  | Circular            | Regulatory
Blue background                  | Square or Rectangle | Information

Fig. 1. Examples of Bangladesh traffic signs [5].

Uncertainty is a prevailing matter in image processing: visual patterns are characterised by inherent ambiguity, digital images are noised while being captured, objects are not presented crisply in practical scenarios, the knowledge used for object detection is vague, and the color properties of a pixel are linguistic in nature (such as dark, bright or highly luminant). Fuzzy logic, in the field of soft computing, is a tool for handling these sorts of uncertainty, and its application is therefore a promising direction for developing vision-based applications [3]. As in [9,10], a large amount of research has been done on the traffic signs of countries such as Germany, Belgium, Sweden, France, the US, the UK and China. Another important fact from the literature review of [10] is that traffic signs do not look alike across the globe. In spite of the clear definition of laws by the Vienna Treaty, dissimilarities in sign structure still prevail among the countries that are signatories to the agreements, and in some scenarios noteworthy variation in traffic sign appearance can remain within a country. Such variations are managed easily by humans, but in an automatic detection and recognition system they are challenging issues that need to be addressed. Figure 2 illustrates this variation for the stop sign of Bangladesh and some


countries. Another important research gap is that most of the research did not consider the scenario of night mode sign detection [10].

Fig. 2. Variation in sign appearances

In the light of the above situation, the aim of this research is to propose a framework for detecting Bangladesh traffic signs in which, along with the common challenges of detection algorithms, night-mode scenarios are emphasised. The key contributions of the framework are an intensity transformation using fuzzy inference, which can handle the uncertainties of image processing and provides a computing tool for incorporating human knowledge; segmentation by a statistical threshold; and shape verification using geometric invariant Hu moments and a QDA model. The rest of the paper is structured as follows: Sect. 2 presents related research, Sect. 3 presents the proposed framework, and the results and analysis are demonstrated in Sect. 4. Finally, the conclusion is drawn in Sect. 5.

2 Related Works

Detecting and localising the RoIs in an image is called Traffic Sign Detection (TSD), the first step of TSR. According to the comprehensive reviews of TSD by [9,10], the major categories of TSD algorithms are color based, shape based and machine-learning based. In color-based methods the images are segmented using a threshold chosen on the basis of color information, while in shape-based methods the shapes of the signs are detected using popular edge-detection algorithms such as the Hough transform. Recently, machine-learning-based sign detection has become very popular due to its better accuracy; in this approach, pixels are separated from the background using different features and classification models. In [11], a color-based detection algorithm was proposed for Spanish traffic signs, where the threshold was determined by observing the hue and saturation histograms of HSI images of manually cropped traffic signs; the algorithm also used Distance-to-Borders (DtBs) vectors and an SVM model to verify the shapes of the signs. A similar segmentation method was applied by the authors of [4], where shape verification was implemented with Hu features: the cross-correlation values between the detected shapes and the ground-truth shapes were calculated, and shapes within an empirical threshold were considered


positive shapes. Similarly, in [8], the authors proposed a detection algorithm for Bangladeshi road signs using the YCbCr color space, with statistical threshold values for segmentation and DtBs vectors for shape verification. In [12], a shape- and color-based detection algorithm was proposed with an F-measure of 0.93: potential regions of interest were searched in the scene using Maximally Stable Extremal Regions (MSERs), an HSV threshold was then used to detect text-based sign regions, and the detected regions were further filtered using temporal information across consecutive images. Yang et al. proposed a computationally efficient detection algorithm that can detect signs at a rate of six frames per second on images of size 1360 × 800 [13]. They developed probability models for the red and blue channels to increase the visibility of traffic colors using a Gaussian distribution and the Ohta space; MSERs were then applied to locate potential RoIs, which were classified as signs or not using HOG features and an SVM classifier. The authors of [2] presented a machine-learning approach in which positive colors are converted to foreground and all other pixels are mapped to black; a mean-shift clustering algorithm based on color information groups the pixels into regions, the centroid of each cluster is classified as sign or non-sign by a random forest, and a shape verification based on the log-polar transform of the detected blob and its correlation with the ground truth is included to discard false alarms. The authors of [15] proposed an efficient detection algorithm using AdaBoost and Support Vector Regression (SVR) to discriminate sign areas from the background: a saliency model was designed using specific sign features (shape, color and spatial information), a feature pyramid employing this saliency information was used to train an AdaBoost model for detecting sign candidates, and an SVR model trained on Bag-of-Words (BoW) histograms classified the true traffic signs among the candidates. In [1], chromatic pixels were clustered with the k-NN algorithm using the a and b values of the L*a*b space and the clusters were classified with a Support Vector Machine; achromatic segmentation was implemented by thresholding in HSI space, and shape classification was done with Fourier descriptors of the regions and an SVM model. Most machine-learning methods are based on handcrafted features to identify signs. Recently, deep learning has become very popular in computer vision research for object detection due to its capability of representing features directly from the raw pixels of the image. The detection algorithm proposed in [19] used dark-area-sensitive tone mapping to enhance the illumination of dark regions of the image and an optimised version of YOLOv3 to detect signs. The authors of [20] detected signs using a Convolutional Neural Network (CNN), applying a modified version of YOLOv2. An image segmentation by fuzzy rule-based methods in the YCbCr color space was presented in [18], where a triangular membership function and a weighted-average defuzzification approach were selected.


3 Detection Framework

In general, TSR comprises two steps: sign detection from an image or video frame and classification of the detected signs. This research concentrates on the detection of signs from an image as the mandatory step before sign recognition. The proposed framework for traffic sign detection is illustrated in Fig. 3. According to the framework, the image is captured by a camera and some pre-processing is applied, namely resizing the image to 400 × 300 pixels and mapping the color space from RGB to HSV. After pre-processing, the image is transformed into a gray-level image using fuzzy inference, where the inputs are the crisp hue and saturation values of a pixel. The gray image is then segmented by applying a statistical threshold. The segmented image undergoes further processing, namely morphological closing and filtering by area and aspect ratio, to discard unwanted regions. Next, the shapes of the remaining regions are verified by extracting Hu features and applying a trained Quadratic Discriminant Analysis (QDA) model; regions not classified as circles, triangles or rectangles are discarded at this stage. After shape verification, the connected components remaining in the binary image are considered Regions of Interest (RoIs). Finally, the RoIs are detected as the potential traffic signs in the input image by their bounding-box parameters.

Fig. 3. Steps of Traffic sign detection framework.

3.1 Preprocessing

A nonlinear color space mapping from RGB to Hue Saturation Value (HSV) is applied to the input image to reduce the illumination sensitivity of the RGB model. The hue and saturation channels of the HSV image are isolated as the crisp inputs to the fuzzy inference system. A minimal sketch of this step is given below.
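A minimal sketch of this preprocessing step, assuming OpenCV and a BGR input frame as returned by cv2.imread; the 400 × 300 target size follows the framework description.

import cv2

def preprocess(frame):
    # Resize to the working resolution and move to HSV to reduce illumination sensitivity.
    resized = cv2.resize(frame, (400, 300))              # (width, height)
    hsv = cv2.cvtColor(resized, cv2.COLOR_BGR2HSV)
    hue, saturation, _value = cv2.split(hsv)
    return hue, saturation                               # crisp inputs for the fuzzy inference system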

3.2 Transformation Using Fuzzy Inference

In this work, we use a Mamdani Fuzzy Inference System (FIS) to map the hue and saturation values of a pixel to a gray value. This inference consists mainly of three steps: fuzzification, inference using a fuzzy knowledge base, and defuzzification [3,7],


with two inputs and one output. The inputs are the hue and saturation values of a pixel and the output is the gray value of the pixel.

Fuzzification: this converts crisp sets into fuzzy sets by assigning degrees of membership. There are many techniques for fuzzification; here it is based on the intuition of an expert. A fuzzy set is defined by a linguistic variable whose degree of membership is assigned in the range (0, 1) by a characteristic function. Intuition is used to construct the membership functions, and the domain knowledge of an expert is utilised to develop that intuition. For the universe of 'hue' we consider five linguistic labels (reddish-1, reddish-2, bluish, noise-1 and noise-2); for 'saturation' they are reddish and bluish; and for the output they are reddish, bluish and black. Triangular and trapezoidal functions are used to assign the membership values of each linguistic variable. Table 2 presents the parameter values of each membership function.

Table 2. Membership parameters of the fuzzy sets.

Linguistic variable  | Membership function | Parameters
Reddish-1 (hue)      | Triangular          | a = 0, m = 0, b = 22
Reddish-2 (hue)      | Triangular          | a = 240, m = 255, c = 255
Bluish (hue)         | Trapezoidal         | a = 115, m = 140, n = 175, b = 190
Noise-1 (hue)        | Trapezoidal         | a = 150, m = 210, n = 255, b = 255
Noise-2 (hue)        | Triangular          | a = 0, m = 0, b = 22
Reddish (saturation) | Trapezoidal         | a = 96, m = 102, n = 255, b = 255
Bluish (saturation)  | Trapezoidal         | a = 150, m = 210, n = 255, b = 255
Reddish (output)     | Triangular          | a = 220, n = 235, b = 252
Bluish (output)      | Triangular          | a = 110, n = 150, b = 255
Black (output)       | Triangular          | a = 0, m = 15, b = 35

Inference: this is the main step of the FIS, in which the fuzzy knowledge base is used to apply a set of rules to the input fuzzy sets. In fuzzy logic systems the knowledge base is built from rules and linguistic variables based on fuzzy set theory, and it is constructed as a set of If-Then rules. The structure of the rules in a standard Mamdani FIS is as follows: the kth rule is $R^k$: IF $x_1$ is $F_1^k$ and ... and $x_p$ is $F_p^k$, THEN $y$ is $G^k$, where p is the number of inputs. An example of a rule is R1 = 'If hue is reddish-1 AND saturation is reddish, THEN output is reddish'. Our knowledge base incorporates five rules. The inference is done in four steps. Step 1: the inputs are fuzzified; here the crisp hue and saturation values of a pixel are converted to fuzzy variables. Step 2: all the antecedents of a rule are combined into a single fuzzy value using the fuzzy AND or OR operation, depending on whether the parts are connected by ANDs or by ORs. Step 3: the output of each rule is inferred by an implication method; we use the AND operator for implication, which performs the min operation. Step 4: the outputs of all rules are aggregated into a single fuzzy output using the fuzzy max operator.


Defuzzification: in this step the aggregated fuzzy output is defuzzified into a scalar value by applying the centroid defuzzification method. Figure 4 shows three examples of gray images after the fuzzy intensity transformation.

Fig. 4. Gray image after fuzzy inference.
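To make the transformation concrete, the following is a compact NumPy sketch of the Mamdani pipeline just described (triangular/trapezoidal fuzzification, min implication, max aggregation, centroid defuzzification). The membership parameters are taken from Table 2, but only three of the paper's five rules are shown and the discretised output universe is an assumption; this is an illustration, not the authors' Matlab implementation.

import numpy as np

def trimf(x, a, m, b):
    # Triangular membership of x with parameters (a, m, b).
    x = np.asarray(x, dtype=float)
    left = np.where(m > a, (x - a) / (m - a + 1e-9), (x >= m).astype(float))
    right = np.where(b > m, (b - x) / (b - m + 1e-9), (x <= m).astype(float))
    return np.clip(np.minimum(left, right), 0.0, 1.0)

def trapmf(x, a, m, n, b):
    # Trapezoidal membership of x with parameters (a, m, n, b).
    x = np.asarray(x, dtype=float)
    up = (x - a) / (m - a + 1e-9)
    down = (b - x) / (b - n + 1e-9)
    return np.clip(np.minimum(np.minimum(up, 1.0), down), 0.0, 1.0)

def fuzzy_gray(hue, sat):
    # Map one pixel's (hue, saturation) to a gray value via Mamdani inference.
    y = np.arange(0, 256)                                  # discretised output universe (assumption)
    out_reddish = trimf(y, 220, 235, 252)                  # output memberships from Table 2
    out_bluish = trimf(y, 110, 150, 255)
    # Rule strengths (AND = min); only three illustrative rules of the paper's five are shown,
    # so pixels that fire no rule default to 0 (black) here.
    r1 = min(trimf(hue, 0, 0, 22), trapmf(sat, 96, 102, 255, 255))              # reddish-1 AND reddish
    r2 = min(trimf(hue, 240, 255, 255), trapmf(sat, 96, 102, 255, 255))         # reddish-2 AND reddish
    r3 = min(trapmf(hue, 115, 140, 175, 190), trapmf(sat, 150, 210, 255, 255))  # bluish AND bluish
    # Min implication per rule, max aggregation across rules, centroid defuzzification.
    agg = np.maximum.reduce([np.minimum(r1, out_reddish),
                             np.minimum(r2, out_reddish),
                             np.minimum(r3, out_bluish)])
    return float(agg @ y) / (float(agg.sum()) + 1e-9)

print(int(fuzzy_gray(10, 200)))            # a strongly red pixel maps near the reddish output peak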

3.3 Segmentation

The gray image obtained by fuzzy inference is converted to a binary image using the statistical threshold given in Eq. 1, which is derived from the statistical parameters of the image:

Binary(x, y) = \begin{cases} 1, & \text{if } I(x, y) > \mu + \sigma \\ 0, & \text{otherwise} \end{cases}   (1)

where \mu is the mean and \sigma is the standard deviation of the gray image I(x, y).
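Equation 1 translates directly into a short NumPy sketch:

import numpy as np

def segment(gray):
    # Eq. 1: pixels brighter than mean + one standard deviation become foreground.
    mu, sigma = float(gray.mean()), float(gray.std())
    return (gray > mu + sigma).astype(np.uint8)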

3.4 Morphological Closing and Filtering

The goal of this step is threefold: to merge regions that belong to the same sign but were fragmented during segmentation, to discard regions that are very small or very large, and to eliminate regions that do not comply with the aspect ratios of the signs. To obtain the desired output, morphological closing with a circular structuring element and then area and aspect-ratio filtering are applied to every connected component of the segmented image. The filtering criteria, derived empirically, are listed in Table 3; a short sketch of this post-processing follows the table.

Table 3. Filtering criteria.

Parameter    | Candidate region
Aspect ratio | 0.66 to 0.71
Area         | 300 to 850
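A hedged sketch of this post-processing step, assuming OpenCV and scikit-image are available; the paper does not state whether the aspect ratio is width/height, so width/height is assumed here.

import cv2
import numpy as np
from skimage import measure

def postprocess(binary, area_range=(300, 850), ar_range=(0.66, 0.71)):
    # Morphological closing with a circular (elliptical) structuring element.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # Keep only connected components whose area and aspect ratio fall in the Table 3 ranges.
    keep = np.zeros_like(closed)
    for region in measure.regionprops(measure.label(closed)):
        min_r, min_c, max_r, max_c = region.bbox
        aspect = (max_c - min_c) / float(max_r - min_r + 1e-9)      # width / height (assumed)
        if area_range[0] <= region.area <= area_range[1] and ar_range[0] <= aspect <= ar_range[1]:
            keep[min_r:max_r, min_c:max_c] = closed[min_r:max_r, min_c:max_c]
    return keep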


3.5 Shape Verification

Fig. 5. Steps in shape verification.

In this step, the regions remaining after post-processing are verified in order to filter out false alarms. To verify the regions, the geometric invariant Hu moments are extracted for every region and a Quadratic Discriminant Analysis (QDA) classifier is used to verify the shapes. The steps of shape classification are depicted in Fig. 5; after shape verification, regions that are not triangular, circular or rectangular are discarded. According to [3,17], the 2D moment of order (p + q) of a digital image f(x, y) of size M × N is defined as

m_{pq} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} x^p y^q f(x, y)   (2)

where p = 0, 1, 2, ... and q = 0, 1, 2, ... are integers. The corresponding central moment of order (p + q) is defined as

\mu_{pq} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} (x - \bar{x})^p (y - \bar{y})^q f(x, y)   (3)

for p = 0, 1, 2, ... and q = 0, 1, 2, ..., where \bar{x} = m_{10}/m_{00} and \bar{y} = m_{01}/m_{00}. The normalized central moments, denoted by \eta_{pq}, are defined as

\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\gamma}}   (4)

where \gamma = \frac{p+q}{2} + 1 for p + q = 2, 3, 4, .... A set of seven invariant moments is calculated, where h_1 to h_6 are orthogonal moments (independent of size, position and orientation) and h_7 is an invariant skew moment (good for processing mirror images) [17]:

h_1 = \eta_{20} + \eta_{02}   (5)

h_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2   (6)

h_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2   (7)

h_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2   (8)

h_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]   (9)

h_6 = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})   (10)

h_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{12} - \eta_{30})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]   (11)

The classification model, discriminant analysis, is defined by Eq. 12,

G(x) = \arg\max_{k} \delta_k(x)   (12)

where \delta_k(x) is the discriminant function given by Eq. 13,

\delta_k(x) = -\frac{1}{2}\log|\Sigma_k| - \frac{1}{2}(x - \mu_k)^T \Sigma_k^{-1}(x - \mu_k) + \log \pi_k   (13)

where \Sigma_k is the covariance matrix and \pi_k is the prior probability of class k.
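The shape-verification step can be sketched as follows, computing the seven Hu moments of a binary region with OpenCV (Eqs. 2-11) and classifying the shape with a quadratic discriminant model (Eqs. 12-13). The log-scaling of the Hu values and the training interface are illustrative additions, not part of the paper.

import cv2
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def hu_features(region_mask):
    # Seven Hu invariant moments of a binary region (Eqs. 2-11), log-scaled for numeric
    # stability (a common practice; the paper uses the raw values).
    hu = cv2.HuMoments(cv2.moments(region_mask.astype(np.uint8))).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def train_shape_verifier(masks, labels):
    # labels: e.g. 0 = circle, 1 = triangle, 2 = rectangle (assumed encoding).
    X = np.array([hu_features(m) for m in masks])
    return QuadraticDiscriminantAnalysis().fit(X, labels)   # Eqs. 12-13 under Gaussian class models

def verify_shape(model, region_mask):
    return int(model.predict(hu_features(region_mask).reshape(1, -1))[0])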

3.6 Detection

After shape verification, the potential sign-region candidates are finalised. The bounding-box parameters of the connected components remaining in the binary image after shape filtering are determined, and using these parameters the traffic sign area is detected in the input frame and can easily be cropped for recognition of the sign.

4 Result and Analysis

4.1 Experiment Setup

The proposed detection framework is simulated in Matlab R2018a. The machine has Pentium5 quad-core processors with 8 GB of RAM and no GPU.

4.2 Detection Results

The presented traffic sign detection framework is evaluated on two data sets: the publicly available German Traffic Sign Detection (GTSD) data set and a Bangladesh Traffic Sign Detection (BTSD) data set created by the authors. For BTSD, images were captured in many scenarios (rural, city and highway) during daytime, dusk and night, focusing on conditions such as illumination variation, shadows, different orientations and complex backgrounds. The images were down-sampled to a dimension of 400 × 300 pixels. Figures 6, 7 and 8 demonstrate every step of the detection framework, Fig. 9 shows some samples of the detection results in different scenarios, and Fig. 10 shows the detection results in night mode. The incorporation of night-mode scenarios is unique, as previous research did not test detection frameworks at night time. Table 4 tabulates the average computational cost of the detection framework per input image, without any code optimisation or dynamic algorithm.


Fig. 6. Steps of detection framework (image from Bangladeshi road scene): a) input image b) HSV image c) fuzzified image d) binary image e) after morphological processing f) after filtering g) after shape verification h) detection of sign.

Fig. 7. Steps of detection framework (image from GTSD): a) input image b) HSV image c) fuzzified image d) binary image e) after morphological processing f) after filtering g) after shape verification h) detection of sign.


Fig. 8. Steps of detection framework (night mode): a) input image b) HSV image c) fuzzified image d) binary image e) after morphological processing f) after filtering g) after shape verification h) detection of sign.

Fig. 9. Some examples of road sign detection.

Fig. 10. Some examples of detection results in night mode scenarios.


4.3 Performance Evaluation

Pascal's measures are used to evaluate detection performance [6,16]. A detection is regarded as a true positive if the overlap between the predicted bounding box and the ground truth is more than fifty percent. The Intersection over Union (IoU) is defined by Eq. 14, where BB_{dt} is the bounding box of the detection and BB_{gt} is that of the ground truth:

IoU = \frac{area(BB_{dt} \cap BB_{gt})}{area(BB_{dt} \cup BB_{gt})}   (14)

By computing the IoU score for each detection, the detection is classified as a True Positive (TP), False Negative (FN) or False Positive (FP), and from these values the precision, recall and F-measure of the detector are calculated. An illustration is presented in Fig. 11. The precision and recall of this detection framework are calculated for both the German data set and the Bangladeshi road scenes, and the performance in both cases is quite remarkable and comparable. On the Bangladesh data the detection algorithm shows 95.58% recall and 94.15% precision, and on GTSDB the recall is 92.8% and the precision is 93.23%. To assess the efficiency of the framework, a comparative analysis against five existing detection algorithms is summarised in Table 5, which shows that the proposed framework achieves an F-measure of 0.93 for GTSDB and 0.94 for BTSD.

Fig. 11. Illustration of IoU (the red BB is the ground truth and the blue BB is the detection, with TP = 1, FN = 0 and FP = 0).
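A small sketch of the Pascal-style check in Eq. 14, with boxes given as (x_min, y_min, x_max, y_max) tuples; the 0.5 threshold corresponds to the fifty-percent overlap criterion above.

def iou(bb_dt, bb_gt):
    # Intersection over Union of two axis-aligned boxes (Eq. 14).
    x1, y1 = max(bb_dt[0], bb_gt[0]), max(bb_dt[1], bb_gt[1])
    x2, y2 = min(bb_dt[2], bb_gt[2]), min(bb_dt[3], bb_gt[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((bb_dt[2] - bb_dt[0]) * (bb_dt[3] - bb_dt[1])
             + (bb_gt[2] - bb_gt[0]) * (bb_gt[3] - bb_gt[1]) - inter)
    return inter / union if union > 0 else 0.0

def is_true_positive(bb_dt, bb_gt, threshold=0.5):
    return iou(bb_dt, bb_gt) > threshold

print(is_true_positive((10, 10, 60, 60), (15, 15, 65, 65)))   # heavily overlapping boxes -> True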

4.4 Comparisons of Classifiers in Shape Verification

A data set of 900 shapes (circles, triangles and rectangles) was formed, where the shapes are taken from the segmented road images. The seven Hu features are extracted for every shape and a feature vector of size 900 × 8 is created. The models are then trained with 10-fold cross-validation. Figure 12 illustrates the accuracy of the different models; the QDA model is better than the others, with 96% accuracy. Hence this model is deployed in our detection framework for shape verification.
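The 10-fold cross-validation comparison can be sketched with scikit-learn as below; X and y are assumed to be the Hu-feature matrix and shape labels described above, and the competing models are typical choices that may differ from the exact set in Fig. 12.

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def compare_classifiers(X, y):
    # 10-fold cross-validated accuracy of several shape classifiers on the Hu-feature matrix.
    models = {
        "QDA": QuadraticDiscriminantAnalysis(),
        "LDA": LinearDiscriminantAnalysis(),
        "SVM (RBF)": SVC(),
        "k-NN": KNeighborsClassifier(),
    }
    return {name: cross_val_score(model, X, y, cv=10).mean() for name, model in models.items()}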


Table 4. Average computational cost of the steps of the detection framework.

Process                                        | Average simulated computational cost (s)
Image acquisition                              | 0.016
Preprocessing                                  | 0.02
Intensity transformation using fuzzy inference | 0.22
Segmentation                                   | 0.002
Morphological processing                       | 0.035
Filtering                                      | 0.08
Shape verification                             | 0.1
Detection using bounding box                   | 0.015
Total                                          | 0.49

Table 5. Comparative analysis for the detection framework.

Methods            | Recall (%) | Precision (%) | F-measure (%) | Data sets
[4]                | 91.07      | 90.13         | 90.60         | GTSDB
[2]                | 92.98      | 94.03         | 93.50         | GTSDB
[20]               | 90.37      | 95.31         | 92.77         | GTSDB
[21]               | 87.84      | 89.65         | 88.74         | GTSDB
[14]               | 87.84      | 89.6          | 88.71         | GTSDB
Proposed framework | 92.8       | 93.23         | 93.01         | GTSDB
Proposed framework | 95.58      | 94.15         | 94.86         | Bangladesh data set

Fig. 12. Comparison of accuracy in shape verification.


5 Conclusion

In this paper, a detection framework is presented that utilises fuzzy image processing: the crisp hue and saturation sets are converted into fuzzy sets, and a Mamdani fuzzy inference scheme transforms them into gray values. A statistical threshold is applied to segment the fuzzified image and, after some filtering to reduce false alarms, shape classification is done by a QDA model trained on Hu features. In addition, an analysis comparing the accuracy of several popular models is presented, in which QDA outperforms the others. The framework is evaluated by calculating IoU scores, with average recall and precision rates of 94.19% and 93.69% respectively. Future work aims at recognising the detected traffic signs using a deep learning model.

References
1. Lillo, C., Mora, J., Figuera, P., Rojo, Á.: Traffic sign segmentation and classification using statistical learning methods. Neurocomputing 15(3), 286–299 (2015)
2. Ellahyani, A., Ansari, M.: Mean shift and log-polar transform for road sign detection. Multimed. Tools Appl. 76, 24495–24513 (2017)
3. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 3rd edn. Pearson Education, Chennai (2009)
4. Ayoub, E., Mohamed, E.A., Ilyas, E.J.: Traffic sign detection and recognition based on random forests. Appl. Soft Comput. 46, 805–815 (2015)
5. Road Sign Manual Volume-1.pdf. www.rhd.gov.bd/documents/ConvDocs/. Accessed 4 Nov 2020
6. Hung, P.D., Kien, N.N.: SSD-Mobilenet implementation for classifying fish species. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing and Optimization. ICO 2019, Advances in Intelligent Systems and Computing, vol. 1072, pp. 399–408. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-33585-4_40
7. Borse, K., Agnihotri, P.G.: Prediction of crop yields based on fuzzy rule-based system (FRBS) using the Takagi Sugeno-Kang approach. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing and Optimization. ICO 2018, Advances in Intelligent Systems and Computing, vol. 866, pp. 438–447. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00979-3_46
8. Soumen, C., Kaushik, D.: Bangladeshi road sign detection based on YCbCr color model and DtBs vector. In: 2015 International Conference on Computer and Information Engineering (ICCIE), Rajshahi, Bangladesh, pp. 158–161. IEEE (2015)
9. Safat, B.W., Majid, A.A., Mahammad, A.H., Aini, H., Salina, A.S., Pin, J.K., Muhamad, B.M.: Vision based traffic sign detection and recognition systems: current trends and challenges. Sensors 19(9), 2093 (2019)
10. Chunsheng, L., Shuang, L., Faliang, C., Yinhai, W.: Machine vision based traffic sign detection methods: review, analyses and perspectives. IEEE Access 7, 86578–86596 (2019)
11. Maldonado, S., Lafuente, S., Gil, P., Gomez, H., Lopez, F.: Road sign detection and recognition based on support vector machines. IEEE Trans. Intell. Transp. Syst. 8(2), 264–278 (2007)
12. Greenhalgh, J., Mirmehdi, M.: Recognizing text-based traffic signs. IEEE Trans. Intell. Transp. Syst. 16(3), 1360–1369 (2015)


13. Yang, Y., Luo, H., Xu, H., Wu, F.: Towards real-time traffic sign detection and classification. IEEE Trans. Intell. Transp. Syst. 17(7), 2022–2031 (2016)
14. Cao, J., Song, C., Peng, S., Xiao, F., Song, S.: Improved traffic sign detection and recognition algorithm for intelligent vehicles. Sensors 19(18), 4021 (2019)
15. Tao, C., Shijian, L.: Accurate and efficient traffic sign detection using discriminative AdaBoost and support vector regression. IEEE Trans. Veh. Technol. 65(6), 4006–4015 (2016)
16. Everingham, M., Van Gool, L., Williams, C.K.I.: The pascal visual object classes (VOC) challenge. Int. J. Comput. Vis. 88, 303–338 (2010)
17. Hu, M.K.: Visual pattern recognition by moment invariants. IRE Trans. Inf. Theory 8(2), 179–187 (1962)
18. Alvaro, A.R., Jose, A.M., Felipe, G.C., Sergio, G.G.: Image segmentation using fuzzy inference system on YCbCr color model. Adv. Sci. Technol. Eng. Syst. J. 2(3), 460–468 (2017)
19. Jameel, A.K., Donghoon, Y., Hyunchul, S.: New dark area sensitive tone mapping for deep learning based traffic sign recognition. Sensors 18(11), 3776 (2018)
20. Zhang, J., Huang, M., Jin, X., Li, X.: A real-time Chinese traffic sign detection algorithm based on modified YOLOv2. Algorithms 10(4), 127 (2017)
21. Xue, Y., Jiaqi, G., Xiaoli, H., Houjin, C.: Traffic sign detection via graph-based ranking and segmentation algorithms. IEEE Trans. Syst. Man Cybern. Syst. 45(12), 1509–1521 (2015)

Developing a Framework for Vehicle Detection, Tracking and Classification in Traffic Video Surveillance

Rumi Saha, Tanusree Debi, and Mohammad Shamsul Arefin

Department of CSE, CUET, Chittagong, Bangladesh
[email protected], [email protected], [email protected]

Abstract. Intelligent Transportation Systems and driver-assistance safety systems are significant research topics in the field of transport and traffic management, and the detection, tracking and classification of moving vehicles is among the most challenging problems in the Intelligent Transportation System (ITS) and smart-vehicle sector. This paper provides a vision-based method for tracking and classifying different classes of vehicles from a video surveillance system. Several verification techniques based on template matching and image classification were investigated. The paper focuses on improving the performance of a single-camera vehicle detection, tracking and classification system and proposes a method based on the Histogram of Oriented Gradients (HOG), one of the most discriminative features, to extract object features, which are then trained and classified with a linear Support Vector Machine (SVM) classifier. Vehicles are also categorized by shape- or dimension-based features with a cascade-based AdaBoost classifier, whose high predictive accuracy and low storage cost affirm its suitability for real-time vehicle classification. In the final stage, a Kalman filter is used to track the moving vehicles in the video frames in order to minimize the number of missed vehicles. The proposed system is tested on different videos and performs well with acceptable processing time; the experimental results show the efficiency of the algorithm.

Keywords: Vehicle detection · Histograms of Oriented Gradients (HOG) · Support Vector Machine (SVM) · Occlusion · Kalman filter · Vehicle tracking · Vehicle classification

1 Introduction

Vehicle detection and classification systems have attracted many researchers from both academia and industry in the field of Intelligent Transportation Systems (ITS). In this paper we propose a new method to track and classify vehicles effectively and robustly, and we compare our improved system with various existing detection and tracking algorithms. The suggested approach is evaluated using self-collected video sequences under motorway driving conditions, and the experimental results show how effective and feasible our approach is.



In recent years the growth in the number of vehicles has caused significant transport problems. Traffic jams in cities are a major problem and put people's health at risk, so road-traffic video monitoring is becoming crucial and work on Intelligent Transportation Systems (ITS) is attracting considerable attention. The area of automated video surveillance systems is of immense concern because of its safety impact. The objective of this work is vehicle detection, tracking and classification from high-quality video sequences. The paper pursues the following goals: detection and tracking of multiple moving vehicles in traffic video sequences, and classification of vehicles into types based on the dimensions of the objects. Identifying objects frame by frame in video is an important and difficult job; it is a vital part of a smart surveillance system, because without monitoring objects the system cannot obtain coherent temporal information about them or provide measures for higher-level behavioral analysis. The first step in video processing is the identification of moving objects, which serves several areas such as video monitoring and traffic control. The remainder of this paper is structured as follows: related work is discussed in Sect. 2, the system architecture and design is presented in Sect. 3, and Sects. 4 and 5 present the experimental results and conclusions.

2 Related Work

Several different methods have been suggested in the literature for vehicle detection, tracking and classification. Detection and tracking of vehicles are major computer vision problems that serve as the basis for further study of target motion. Vehicle detection takes advantage of technologies such as digital image processing to distinguish the vehicle from its context [15], multimedia automation [16], statistics, and so on. In [11] an efficient algorithm to track multiple people is presented. Matthews et al. [12] used PCA for feature extraction and neural networks for identification. A method of measuring traffic parameters in real time is also described, using a feature-based approach together with occlusion logic to track vehicles in congested traffic scenes; instead of monitoring whole vehicles, sub-features of the vehicles are monitored to handle occlusions, but this approach is computationally very expensive [13]. For vehicle identification and spatio-temporal surveillance, Chengcu et al. [1] suggested an adaptive background-learning system in which unsupervised vehicle detection and spatio-temporal tracking are used to analyze the image/video segmentation process with SPCPE; the difficulty of this work is that it lacks user interaction and does not know a priori which class a pixel belongs to. Gupte et al. [2] suggested a method in which processing is carried out at three levels (raw images, the region level and the vehicle level) with classification into two types of object; the limitation of this work is that it cannot classify a large number of object categories and errors occur under occlusion, and the experiments show a highway scene while the method can fail for narrow-road video sequences.


Peng et al. [3] implemented a system using data-mining techniques for vehicle type classification, in which the front of the vehicle was precisely determined using the position of the license plate and a background-subtraction technique; vehicle type probabilities were extracted from eigenvectors rather than deciding the vehicle type explicitly, and an SVM was used for classification. The limitation of this method is that not all vehicle license plates can be found, which causes errors in object detection, and the handling is complex. Sun et al. [4] proposed a real-time precrash vehicle detection method using two kinds of algorithms, multiscale-driven hypothesis generation and appearance-based hypothesis verification, with an SVM algorithm for classification; the shortcoming of this work is that the parameters fail in some environments, causing errors in moving-object detection. Jayasudha et al. [5] set out a summary of data mining for road traffic and accident analysis, providing an overview of data-mining techniques and applications in accident analysis that work with different road-safety databases and require a data discriminator comparing the target class with one or more other sets. Chouhan et al. [6] suggested a technique for image retrieval using data mining and image processing that introduces the size, texture and dominant-color factors of an image, using a weighted Euclidean distance when the shape and texture are the same. Chen et al. [7] developed a PCA-based vehicle classification framework in which individual vehicles are segmented with the SPCPE algorithm for feature extraction and two algorithms, Eigenvehicle and PCA-SVM, are used for classification; the weakness of this method is that vehicle detection has to rely on the unsupervised SPCPE algorithm. Y. Zakaria et al. [9] presented an on-road vehicle detection technique that trains linear and nonlinear SVM classifiers on modified HOG-based features, using the KITTI dataset for training. Vehicle detection, tracking and classification is complicated in any traffic surveillance system, due to interference from illumination, blurriness and the congestion of narrow rural roads, and is exposed to real-world problems; this makes the task very challenging under the requirement of increasing accuracy. In this paper we develop a robust vehicle detection, tracking and classification method by extracting HOG features and Haar-like features, with a Support Vector Machine (SVM) and a cascade-based classifier for classifying vehicle types, after which a Kalman filter technique is used. The system increases tracking and classification accuracy by using a suitable algorithm with low computational complexity and reduces the processing time.

3 System Architecture and Design

The previous section discussed related work on vehicle detection, tracking and classification procedures and noted that their limitations keep those methods from being fully efficient frameworks for Intelligent Transportation Systems. To address these limitations we propose an efficient framework for this field, shown in Fig. 6, which reduces the limitations of the background work. The system architecture of the vehicle detection,


tracking and classification system comprises some basic modules: dataset initialization, feature extraction, Haar-like features, the classifier, tracking and length calculation. The function of dataset initialization is to set up the database with positive and negative samples. The feature-extraction module generates the important features from the positive samples. The classifier module trains on the samples to separate vehicles from non-vehicles. The tracking module detects all moving vehicles and also reduces the number of missed objects. Finally, the most important part of the system is the classification module, which classifies vehicle types for better recognition based on the dimensions of the objects. Algorithm 3 shows the whole evaluation procedure for vehicle detection and tracking.

3.1 Dataset

Samples were collected in various scenes (city highways, wide roads, narrow roads, etc.) to evaluate the presented algorithm. In the first stage, 7325 samples were collected for training and testing, including 3425 vehicle samples (positive samples) and 3900 non-vehicle samples (negative samples). The vehicle samples include various vehicle types such as cars, trucks and buses captured from various directions (front, rear, left and right). In addition, the vehicle samples include vehicles both near the camera-mounted vehicle and far away from it; the non-vehicle samples include road background, traffic signs, etc. Figures 1 and 2 show examples of the vehicle and non-vehicle training images. The training samples were resized to 64 × 64 RGB images.

Fig. 1. Samples for vehicle image

Fig. 2. Samples for non-vehicle images

3.2 Sample Process

The collected samples are pre-processed prior to feature extraction. Each sample is resized to 64 × 64 pixels (64 pixels high and 64 pixels wide). The height is selected based on the sample's original size, and the width is then determined so that the sample retains the average aspect ratio. The next step is to calculate the different variations of the HOG feature for each sample; the cell size changes with the sample size so that all samples have the same feature vector length.

3.3 Histogram of Oriented Gradient (HOG) Feature Extraction

Histogram of Oriented Gradients (HOG) feature extraction is the main component of the vehicle detection process. The original HOG computation is performed in five steps. First, the image goes through color normalization and gamma correction. The picture is then divided into a grid of cells. The cells are grouped into larger overlapping blocks, which allows a cell to belong to more than one block. For example, an image may be divided into 16 × 16-pixel cells, where each cell has 256 pixels, with 2 × 2-cell blocks, indicating that each block contains 2 cells in each direction. The blocks then have a 50% overlap ratio, where half of a block's cells are shared with the neighboring block. Cell size and block size are parameters that the user must set according to the size of the image and the amount of detail that needs to be captured (Fig. 3).

Algorithm 1: HOG calculation
1: input I: a gradient orientation image
2: initialize H ← 0
3: for all points (p, q) inside the image do
4:   i ← I(p, q)
5:   k ← the small region including (p, q)
6:   for all offsets (x, y) representing the neighbors do
7:     if (p + x, q + y) is inside the image then
8:       j ← I(p + x, q + y)
9:       H(k, i, j, x, y) ← H(k, i, j, x, y) + 1
10:    end if
11:  end for
12: end for
Fig. 3. Algorithm for HOG implementation

The key elements of HOG feature extraction for object detection are as follows. The input images are converted to grayscale. A gamma correction procedure normalizes the color space of the input image; the aim is to adjust the contrast of the image, reduce local shadows and the effect of illumination changes, and restrain interference from noise.


The gradient is then measured, primarily to collect contour details and reduce light interference, and each pixel's gradient is projected onto the orientation bins of its cell. All cells within a block are normalized; normalization compresses light, shadow and edges, and after normalization the block descriptor is called the HOG descriptor. Finally, HOG features are collected from all blocks in the detection window and merged into the final classification feature vector. Figures 4 and 5 show the HOG features extracted from a vehicle and a non-vehicle image.

Fig. 4. HOG feature for vehicle image

Fig. 5. HOG feature for non-vehicle image

The variants of the HOG feature use the same extraction technique but vary in the following parameters: 1. the number of orientation angles; 2. the number of histogram bins; 3. the method by which the pixel gradient is determined. Optionally, a color transformation can be applied, and binned color characteristics as well as color histograms can be appended to the HOG feature vector.
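As a rough illustration of how such a HOG variant can be computed in Python (the language the system was developed in), the sketch below uses scikit-image's hog function on a 64 × 64 sample; the specific parameter values (9 orientations, 8 × 8-pixel cells, 2 × 2-cell blocks) and file name are illustrative assumptions, not the exact configuration used in this work.

import cv2
from skimage.feature import hog

def extract_hog(image_path):
    # Resize to 64 x 64 and convert to grayscale, as described in Sect. 3.2/3.3.
    img = cv2.resize(cv2.imread(image_path), (64, 64))
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    features = hog(gray,
                   orientations=9,          # number of orientation bins (example value)
                   pixels_per_cell=(8, 8),  # cell size (example value)
                   cells_per_block=(2, 2),  # block size; blocks overlap by one cell
                   block_norm='L2-Hys')     # block normalization
    return features

features = extract_hog('vehicle_sample.jpg')  # hypothetical sample file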

3.4 Support Vector Machine (SVM) Classifier

A Support Vector Machine (SVM) is an algorithm based on structural risk theory. An SVM can be used for binary classification when the data contain exactly two classes. It classifies data by seeking the best hyperplane that separates all data points of one class from those of the other; for an SVM, the best hyperplane is the one with the largest margin between the two classes. Both linear and nonlinear SVM classifiers were designed to detect objects, and a Gaussian kernel was used as the nonlinear SVM kernel. The classifier performs a dense multi-scale scan at each location of the test image, producing preliminary object decisions. HOG features combined with an SVM classifier have been widely used in image recognition and are particularly effective for object detection. The goal is to differentiate the two groups, vehicles and non-vehicles, by features induced from the available examples. Comparing accuracies across classifiers shows that the HOG-SVM combination outperforms the other classifiers.
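A minimal sketch of training such a classifier on HOG feature vectors with scikit-learn is shown below; the RBF (Gaussian) kernel mirrors the kernel mentioned above, while the file names, hyperparameters and train/test split are assumptions for illustration only.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: HOG feature vectors, y: labels (1 = vehicle, 0 = non-vehicle), assumed to have
# been built from the positive/negative samples of Sect. 3.1 (file names hypothetical).
X = np.load('hog_features.npy')
y = np.load('labels.npy')

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

clf = SVC(kernel='rbf', C=1.0, gamma='scale')  # Gaussian-kernel SVM
clf.fit(X_train, y_train)
print('Test accuracy:', accuracy_score(y_test, clf.predict(X_test)))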

3.5 Non-linear Suppression

A non-linear suppression process is performed to reduce overlapping detected windows to a single window. The suppression is carried out according to the confidence level of each classifier window: the window with the highest confidence is kept, while windows that overlap it by a percentage greater than a certain threshold are suppressed.
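The suppression step can be sketched as follows. This is a generic confidence-based overlap suppression; the box format and the 0.5 overlap threshold are assumptions, not the authors' exact implementation.

import numpy as np

def suppress_windows(boxes, scores, overlap_thresh=0.5):
    # boxes: Nx4 array of [x1, y1, x2, y2]; scores: classifier confidences.
    order = np.argsort(scores)[::-1]          # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the best box with the remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        overlap = inter / np.minimum(area_i, area_r)   # percentage overlap
        order = order[1:][overlap <= overlap_thresh]   # suppress heavy overlaps
    return keep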

3.6 Kalman Filter for Tracking

The next step after detecting vehicles is to track the moving objects from frame to frame. Because of the detector's relatively high cost, we are interested in an algorithm with lower tracking complexity; we therefore use the Kalman filter for tracking objects in our system. It helps the system improve automatic tracking accuracy and reduce frame losses. A Kalman filter typically has two stages, prediction and update. Prediction: the Kalman filter is used to predict the position in the subsequent frame. The vehicle speed is determined from the distance between blobs. The vehicle position in the current frame is calculated using the vehicle velocity, the location in the previous frame and the time elapsed since the last frame. The prediction is state dependent and is used to obtain estimates of the state vector and the error covariance.


Fig. 6. System Architecture of the proposed framework.

Calculating Vehicle Positions: We use a heuristic approach in which each vehicle patch is shifted around its current position to cover as many of the vehicle-related blobs as possible; this is taken as the real vehicle location. Estimation: A measurement is a position in image coordinates, as determined in the previous section. It updates the prediction parameters to reduce the error between the vehicle's predicted and measured locations. After predicting the vehicle's location in the next frame, the next step is to correct the errors of the prediction stage. We use a feature-based correction step, adding two constraints, on color and on size, for the corresponding point. The first condition compares blobs in two consecutive color frames: the color of a blob in frame n must match the color of the corresponding blob in frame n + 1. The second condition concerns the size of the blobs in the two frames. Since the distance between the vehicle and the camera varies, the size of the vehicle changes; however, the size variation ratio from one frame to the next is small, so the size difference between two blobs in consecutive frames must be low. Therefore, if two blobs match in color, the difference in size between them is lower than the threshold and the predicted choice satisfies these two conditions, we consider the prediction to be true and set the predicted position to the blob's center in the new frame. If two vehicles are similar to each other, the criteria are evaluated for each vehicle to choose the acceptable match (Fig. 7).


Algorithm 2: Kalman filter tracking
1: if (time == 0) {
2:   Consider a new track for any vehicle detected
3: }
4: else {
5:   for (all current tracks) {
6:     Predict the new position and track size with the Kalman filter
7:     Overlap the new position with all blobs found
8:     if (Overlap(track(a), blob(b)) != 0)
9:       Mark match_matrix[a][b];
10:    Analyze(match_matrix); }
11: }
Fig. 7. Algorithm for Kalman filter tracking
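A minimal numeric sketch of the predict/update cycle described above is given below for one tracked blob centre. The constant-velocity state layout, time step and noise values are assumptions for illustration; the full tracker additionally applies the colour and size constraints described in the estimation step.

import numpy as np

class BlobKalman:
    """Constant-velocity Kalman filter for a blob centre with state (x, y, vx, vy)."""
    def __init__(self, x, y, dt=1.0):
        self.x = np.array([x, y, 0.0, 0.0], dtype=float)   # state vector
        self.P = np.eye(4) * 10.0                           # state covariance
        self.F = np.array([[1, 0, dt, 0],                   # state transition model
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],                    # only position is measured
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01                           # process noise (assumed)
        self.R = np.eye(2) * 1.0                            # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                                   # predicted blob position

    def update(self, zx, zy):
        z = np.array([zx, zy], dtype=float)
        y = z - self.H @ self.x                             # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)            # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P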

3.7 Haar-Like Feature Extraction

Many machine learning approaches have the benefit of being computationally more effective for object detection. Haar classification is a tree-based technique in which a statistically boosted rejection cascade is created during the training process. First, a classifier (namely a cascade of boosted classifiers working with Haar-like features) is trained with a few hundred positive samples scaled to the same size and with negative samples, which may be random images, also scaled to the same size as the positive images. After resizing, these images are listed in a text file and a .vec file is created (using the OpenCV createsamples utility). Training of the samples then starts with the cascade classifier, and the trained classifier is saved as an .xml file. Following the above steps, Haar-like features are extracted by training the boosted cascade classifier.

3.8 Load Classifier

Training a standard classifier is the most important task of the work done in the previous section. The trained cascade classifier (.xml file) is now loaded, having been trained on the given positive images according to their vehicle types.
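Loading the trained cascade and running it on a frame can be sketched with OpenCV as below; the file names and the detectMultiScale parameters are placeholders, not tuned settings from this work.

import cv2

# 'vehicle_cascade.xml' stands for the cascade trained in Sect. 3.7 (name hypothetical).
cascade = cv2.CascadeClassifier('vehicle_cascade.xml')

frame = cv2.imread('frame.jpg')                       # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# scaleFactor / minNeighbors are example values only.
vehicles = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
for (x, y, w, h) in vehicles:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)  # red detection box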

3.9 Length Calculation

In shape-based feature extraction, the dimensions of the object, such as height, width and length, are essential. The dimensions of each detected vehicle are calculated using the formula below (Fig. 8):

$D = \sqrt{(X_2 - X_1)^2 + (Y_2 - Y_1)^2}$   (1)


Algorithm 3: Vehicle detection and tracking
1: Input: the video sequence being monitored for the specified target
2: Output: the tracking result
3: for t ← 1 to N do
4:   if it has been 5 seconds since the last detection then
5:     use HOG and SVM to detect the vehicle;
6:     if objects of interest (ROI) are found then
7:       track the object in the bounding box using the Kalman filter tracking algorithm
8:       if it has been 10 seconds since the last detection then
9:         use the normal SVM and HOG to judge whether the object in the bounding box is still a vehicle
10:      else
11:        go back to Kalman filter tracking
12:      end
13:    else
14:      go back to the previous detection step
15:    end
16:  else
17:    go back to the previous detection step
18:  end
Fig. 8. Vehicle detection and tracking algorithm

3.10 Classification

The classification of vehicle types has recently attracted a great deal of research interest. It follows two main directions, based on either the vehicle's shape or its appearance. Classification is achieved by categorizing vehicles according to their size into three groups: large, medium and small. Since the length of the vector is easy to obtain, it was taken as the parameter for classifying vehicles by size. When a vehicle enters our region of interest, its length is computed using the length calculation equation, and the vehicle is classified according to the length requirements it satisfies (Fig. 9).


Vehicle type   Vehicle names
Large          Bus, Truck
Medium         Car, Van
Small          Motorbike, Bicycle

Fig. 9. Vehicle type classification
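Since classification reduces to thresholding the computed length, it can be sketched as the function below; the two threshold values are hypothetical and would have to be calibrated for the camera set-up used.

import math

def vehicle_class(x1, y1, x2, y2, small_max=120.0, medium_max=250.0):
    # Length of the bounding-box diagonal, Eq. 1; thresholds are illustrative only.
    length = math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
    if length <= small_max:
        return 'small'      # motorbike, bicycle
    elif length <= medium_max:
        return 'medium'     # car, van
    return 'large'          # bus, truck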

4 Implementation and Experiments

This section presents the implementation procedure and performance review of the developed framework.

4.1 Experimental Setup

The developed vehicle detection, tracking and classification system was implemented on a machine running Windows 10 with a 2.50 GHz Core i3-4100 processor and 4 GB RAM. The system was developed in Python 3.6.7.

4.2 Implementation

Figure 10(a) shows detection results only, where there are some false positives. The false positive rate is reduced after applying the tracking algorithm, as shown in Fig. 10(b), (c) and (d), which also tracks multiple vehicles. The detection boxes are drawn in red and the tracking boxes in blue.


Fig. 10. Vehicle detection and tracking example in different scenes such as (b) urban road with high light, (c) rural congested road, (d) low light.


Vehicle tracking in town areas is much more difficult because objects are close to each other, while trees or other background objects may cast shadows on both the road and the vehicles. Figure 10(b)-(d) shows tracking results under different scenarios where our proposed system works very satisfactorily. Traffic sequences from different times and scenes were used to check the Kalman-filter-based tracking system. The system tracks well in both videos and images, and it can also track fast-moving objects. When a vehicle enters the scene, a new tracking object is detected, a new identifier is assigned and a tracking window is configured for the new vehicle. We tested the system on highway image sequences, and most vehicles were successfully tracked and classified. Different videos and images were chosen to test different types of vehicles, covering three classes: buses, cars and motorbikes. By plugging in the Haar-cascade-based classifier, which uses the extracted vehicle features and dimensions, we loop through the detections and draw a rectangle around each one in every frame of the video. After this processing, we obtain an integrated framework that can automatically extract and classify vehicles in different traffic video scenes.


Fig. 11. Vehicle classification example in different types such as (e)-(f) motorbike types of vehicle, (g) car type of vehicle, (h) Bus type of vehicle

4.3 Performance Evaluation

To evaluate the efficiency of the proposed solution, we split the data into training and test sets, retaining 20% of the vehicle images and 20% of the vehicle sub-images. A fixed set of 231 sub-images of vehicles and non-vehicles, collected separately, was used as the test data set.


Correct correspondences are the points which the proposed method detects and tracks correctly, and correspondences are the total number of matched points.

$\text{Accuracy} = \frac{\text{Number of correct correspondences}}{\text{Number of correspondences}} \times 100\%$

(1) Performance of Kalman Filter Tracking. Table 1 summarizes the traffic sequence videos we tested, including frame height and width, number of frames and total processing time. These tracking results give an indication of the accuracy and effectiveness of our proposed system.

Table 1. Execution time analysis for vehicle detection and tracking

Parameter        Video 1   Video 2
Frame height     320       720
Frame width      320       1280
Video length     14 s      30 s
No. of frames    256       918
Processing time  2.5 min   6 min

As shown in Table 2, we used traffic video sequences to check the Kalman-filter-based tracking system, with the tracking results listed below. The system has good tracking results and can also track fast-moving objects such as vehicles. When a new vehicle enters the scene, a new tracking object is created, a new identifier is assigned and a tracking window is initialized for that vehicle. For fast-moving targets such as vehicles, this gives a strong detection and tracking effect, and vehicles that join the monitored scene at random can be detected and tracked. Since we assume that good tracking performance depends on good observations, we put more computational effort into detection; this approach follows a main stream of current research called 'Detection-by-Tracking' or 'Tracking-Based Detection'.

Table 2. Performance analysis for videos

Test video         Tracked vehicles   Actual vehicles   Accuracy
Video 1            62                 68                91%
Video 2            35                 36                97%
Video 3            5                  5                 100%
Video 4            17                 17                100%
Video 5            22                 25                88%
Video 6            12                 12                100%
Video 7            2                  2                 100%
Video 8            13                 15                86%
Average accuracy                                        95.25%


Table 3. Comparison of some existing methods with our proposed method

Method                    Accuracy   False rate
Gabor + SVM               94.5%      5.4%
PCA + NN                  85%        7.5%
Wavelet + Gabor + NN      89%        6.2%
LDA                       88.8%      5.5%
HOG + SVM + Mean-shift    94%        3.7%
HOG + PCA + SVM + EKF     95%        3.5%
Our proposed method       95.25%     3.7%

(2) Comparison with Other Existing Methods. Table 3 compares other existing vehicle detection and tracking methods with our proposed method. Wavelet feature extraction [17] combined with a dimensionality reduction method achieves higher accuracy than several other methods, but its false rate also increases. The existing HOG method [16] with SVM and a mean-shift tracker gains less accuracy than other tracking methods and fails to track objects in bad weather and congested traffic scenes, but applying a Kalman filter tracker together with the mean-shift tracker increases accuracy while decreasing the false rate. From the accuracy results, the average detection and tracking accuracy of our method is 95.25%, and its false rate is also low, at 3.4%. As the size of the video increases, tracking becomes more difficult to handle.
(3) Performance of Classification. We achieved a 90% correct classification rate at a frame rate of 15 fps. Classification accuracy is an important measure for evaluating a classifier in machine learning; it is defined as the number of correctly categorized vehicles divided by the total number of vehicles. Since the project's success depends on a proper position line for the camera view, it was necessary to place the camera on an overhead bridge directly above the traffic flow to reduce vehicle occlusion. Some of our system's outcomes are shown in Fig. 11(e)-(h). Classification errors were mainly due to the slight separation between vehicle classes; since we used only the scale as our metric, not all vehicles can be categorized properly, and more features will need to be explored to further improve the classification rate. Our data were collected at different times of day, and we intend to further test the system in more complex scenes and under a wider range of lighting and weather conditions. The actual number and correct classification rate for each class are shown in Table 4.


Table 4. Performance analysis of vehicle classification results

Vehicle type   Number of vehicles   Number of classified vehicles   Success rate
Bus            6                    5                               83.3%
Car            7                    7                               100%
Motorbike      8                    7                               87.5%
Total          18                   15                              90.26%

5 Conclusion and Future Work

Robust and accurate vehicle detection, tracking and classification from frames collected by a moving vehicle is a significant issue for autonomous self-guided vehicle applications. This paper provides a summary study of the proposed techniques, applied to traffic videos and images with very high tracking and classification accuracy. We have given a comprehensive analysis of the state-of-the-art literature on computer vision techniques used in video-based traffic surveillance and monitoring systems. An integrated HOG and Haar-like feature-based vehicle classification architecture is proposed that combines several main components relevant to an Intelligent Transportation Surveillance System. The idea of using histograms of oriented gradients to extract features at different scales and orientations is key to our approach. A Kalman filter is used to track multiple objects in video. Classification is based on the extracted Haar-like features with a cascade-based classifier, which categorizes vehicles according to threshold values. The goal of the presented approach is to reduce computation time. Experimental findings show that the system is accurate and efficient and that a high performance rate for vehicles can be achieved.

References
1. Zhang, C., Chen, S., Shy, M., Peeta, S.: Adaptive background learning for vehicle detection and spatio-temporal tracking. In: Information, Communications and Signal Processing 2003 and the Fourth Pacific Rim Conference on Multimedia, pp. 797–801. IEEE (2003). https://doi.org/10.1109/icics.2003.1292566
2. Gupte, S., Masud, O., Martin, R.F.K.: Detection and classification of vehicles. IEEE Trans. Intell. Transp. Syst. 3(1), 37–47 (2002)
3. Peng, Y., Jin, J.S., Luo, S., Xu, M., Au, S., Zhang, Z., Cui, Y.: Vehicle type classification using data mining techniques. In: Jin, J.S., et al. (eds.) The Era of Interactive Media, pp. 325–335. Springer, Cham (2013). https://doi.org/10.1007/978-1-4614-3501-3_27
4. Sun, Z., Miller, R., Bebis, G., DiMeo, D.: A real-time precrash vehicle detection system. In: Proceedings of the Sixth Workshop on Applications of Computer Vision (WACV 2002), pp. 253–259. IEEE (2002)
5. Jayasudha, K., Chandrasekar, C.: An overview of data mining in road traffic and accident analysis. J. Comput. Appl. 4(4), 32–37 (2009)
6. Chouhan, P., Tiwari, M.: Image retrieval using data mining and image processing techniques, pp. 53–58 (2015). https://doi.org/10.17148/IJIREEICE.31212


7. Zhang, C., Chen, X., Chen, W.: A PCA-based vehicle classification framework. In: Proceedings of the 22nd International Conference on Data Engineering Workshops (ICDEW 2006), pp. 451–458. IEEE (2006)
8. Sun, Z., Bebis, G., Miller, R.: On-road vehicle detection using Gabor filters and support vector machines, pp. 1019–1022. IEEE (2002)
9. Zakaria, Y., Abd El Munim, H.E., Ghoneima, M., Hammad, S.: Modified HOG based on-road vehicle detection method. Int. J. Pure Appl. Math. 118(18), 3277–3285 (2018)
10. Tian, Q., Zhang, L., Wei, Y., Zhao, W., Fei, W.: Vehicle detection and tracking at night in video surveillance. iJOE 9(6), 60–64 (2013)
11. Tao, H., Sawhney, H., Kumar, R.: A sampling algorithm for tracking multiple objects. In: Proceedings of the International Workshop on Vision Algorithms (1999). https://doi.org/10.1007/3-540-44480-7_4
12. Kul, S., Eken, S., Sayar, A.: A concise review on vehicle detection and classification. In: 2017 International Conference on Engineering and Technology (ICET), Antalya, Turkey. IEEE (2017). https://doi.org/10.1109/IcengTechnol.2017.8308199
13. Cho, H., Rybski, P.E., Zhang, W.: Vision-based bicycle detection and tracking using a deformable part model and an EKF algorithm. In: 13th International Conference on Intelligent Transportation Systems, Funchal, Portugal. IEEE (2010). https://doi.org/10.1109/ITSC.2010.5624993
14. Kong, J., Qiu, M., Zhang, K.: Authoring multimedia documents through grammatical specification. In: IEEE ICME 2003, Baltimore, USA, pp. 629–632 (2003)
15. Qiu, M.K., Song, G.L., Kong, J., Zhang, K.: Spatial graph grammars for web information transformation. In: IEEE Symposium on Visual/Multimedia Languages (VL), Auckland, New Zealand, pp. 84–91 (2003)
16. Xu, W., Qiu, M., Chen, Z., Su, H.: Intelligent vehicle detection and tracking for highway driving. In: IEEE International Conference on Multimedia and Expo Workshops, pp. 67–72 (2012). https://doi.org/10.1109/ICMEW.2012.19
17. Chitaliya, N.G., Trivedi, A.I.: Feature extraction using Wavelet-PCA and neural network for application of object classification and face recognition. In: Second International Conference on Computer Engineering and Applications, pp. 510–514. IEEE (2010). https://doi.org/10.1109/ICCEA.2010.104

Advances in Algorithms, Modeling and Simulation for Intelligent Systems

Modeling and Simulation of Rectangular Sheet Membrane Using Computational Fluid Dynamics (CFD)

Anirban Banik1(&), Sushant Kumar Biswal1, Tarun Kanti Bandyopadhyay2(&), Vladimir Panchenko3, and J. Joshua Thomas4

1 Department of Civil Engineering, National Institute of Technology, Jirania, Agartala 799046, Tripura, India [email protected], [email protected]
2 Department of Chemical Engineering, National Institute of Technology, Jirania, Agartala 799046, Tripura, India [email protected]
3 Russian University of Transport, Obraztsova St., 127994 Moscow, Russia [email protected]
4 Department of Computing, School of Engineering, Computing and Built Environment, UOW Malaysia, KDU Penang University College, George Town, Malaysia [email protected]

Abstract. The study demonstrates the modeling and simulation of the flow phenomena inside a rectangular sheet-shaped membrane module using a Computational Fluid Dynamics (CFD) based solver. The module was implemented to enhance the quality of effluent generated by the rubber industry. Commercially available CFD software (ANSYS) was implemented to mimic the flow inside the porous membrane, and the meshing of the developed model was done using Gambit software. The grid independency study reports that a grid size of 375000 was the best grid for the simulation procedure. To mimic the flow pattern inside the membrane, a second-order laminar model was considered. The accuracy of the simulation process is evaluated using error analysis, for which percent bias (PBIAS), Nash-Sutcliffe efficiency (NSE) and the ratio of RMSE to observation standard deviation (RSR) are selected. The assessed estimations of PBIAS, NSE and RSR are close to their ideal values, justifying the adequacy of the simulation. The model validation demonstrates that the CFD-predicted values follow the experimental values with high precision.

Keywords: CFD · Membrane separation and filtration process · Modeling · Simulation · Wastewater treatment


1 Introduction

Over the last decades, membrane separation processes have played a crucial role in industrial separation, and numerous studies focus on the optimal way of using membrane separation and filtration processes. Computational fluid dynamics (CFD) techniques provide much information for the development of membrane processes, and numerous advancements in membrane technology have made the selection of a suitable membrane for different procedures easy and quick. Membrane filtration has been used in a broad range of applications such as wastewater treatment [1, 2], protein separation [3] and the food processing industry [4]. In membrane separation and filtration, the hydrodynamics of the fluid flow inside the membrane bed is crucial. A membrane separation and filtration process combines free flow inside the membrane module with flow through the porous membrane. The free flow inside the module is easy to simulate using the Navier-Stokes equations, but the most complex part to reproduce is the flow of the fluid through the porous membrane bed, which can be modeled by coupling Darcy's law with the Navier-Stokes equations; the validity of Darcy's law for simulating incompressible flow through a porous zone with small porosity has been found acceptable. The important part is to ensure that the continuity of the flow-field variables through the interface between the laminar flow region and the porous flow region is adequately maintained [5]. Initial simulations of flow inside a membrane used laminar conditions along with porous walls [6], and many authors have used CFD for optimizing membrane processes [7]. Rahimi et al. [8] used a hydrophilic polyvinylidene fluoride (PVDF) membrane for preliminary work with cross-flow velocities of 0.5, 1 and 1.3 m/s; membrane fouling was analyzed with the help of photographs and scanning electron microscope micrographs, and 3D CFD modeling of the porous membrane bed was carried out using Fluent 6.2 to describe the shear stress distribution on the membrane and explain the fouling process, while the particle deposition pattern on the membrane surface was described using a discrete phase model. Ghadiri et al. [9] developed a model based on mass and momentum transfer equations for the solute in all phases, including feed, solvent and membrane; the flow behavior was simulated with the Navier-Stokes equations by the finite element method under steady-state conditions, and the CFD-predicted results were compared with experimental data to evaluate the accuracy of the model. A 2D mathematical model was developed to illustrate seawater purification using a direct contact membrane desalination (DCMD) system [10]; the model couples the conservation equations for water molecules in the three domains of the module, and the governing equations are solved using the finite element method. Rezakazemi [10] found that increasing the channel length, the temperature of the concentrate stream and the e/(T.d) ratio, or reducing the inlet velocity of the concentrated solution, enhances salt rejection. Such a detailed CFD-based study of a rectangular sheet-shaped membrane for enhancing the quality of rubber industrial effluent has not been reported earlier.


In the present paper, the numerical finite volume method (FVM) is used to solve the three-dimensional convective diffusion equations for solute transport under laminar flow conditions over a porous zone in a stationary rectangular sheet membrane. CFD simulation is used to predict the distribution of pressure drop, velocity profile and wall shear stress over the static rectangular sheet membrane, and the CFD-simulated data are validated against the experimental data.

2 Experiment Descriptions

A cellulose acetate rectangular sheet membrane was utilized in the laboratory with the sole objective of improving the effluent quality. Figure 1 is a schematic illustration of the experimental setup, which consists of a neutralizing tank, a feed tank, a centrifugal pump and a permeate tank. The effluent was passed through the neutralizing tank to maintain the pH of the feed sample, as any deviation from the operating pH of the membrane may affect the lifespan of the membrane module. An optimum dose of soda ash was used to maintain the pH, as raw rubber industrial effluents are acidic in nature. After the neutralizing tank, the feed flowed into the feed tank, from which it was pumped through the module by the centrifugal pump. The centrifugal pump maintained the trans-membrane pressure and facilitated the movement of the feed across the membrane. The permeate tank collected the permeate flux, and the rejects of the membrane module were returned to the

Fig. 1. Schematic diagram of experimental setup.


feed tank. Table 1 shows the characterization of the raw feed stream collected from the rubber industry of Tripura.

Table 1. Feed stream characterization

Sl. No.   Parameter                    Units   Value
1         Total suspended solids       mg/L    398
2         Total dissolved solids       mg/L    3030
3         pH                           –       5.2
4         Biochemical oxygen demand    mg/L    12.3
5         Oil and grease               mg/L    1080
6         Sulfide                      mg/L    23.5

3 Computational Fluid Dynamics (CFD)

With the advancement of computers and growing computational power, computational fluid dynamics (CFD) has become a widely used computational tool for predicting solutions to fluid-related problems [11–14]. The mathematical model was developed using a 3D hexahedral grid geometry in Gambit software, and the continuum and boundary conditions of the model were defined for the simulation. The mesh was examined to evaluate its skewness, which was found to be less than 0.5 and considered acceptable. The developed geometry was exported to the pressure-based solver Fluent for predicting the pressure distribution, wall shear stress and concentration profile over the membrane while effluent from the rubber industry flows through it. SIMPLE pressure-velocity coupling with a second-order upwind scheme and under-relaxation was implemented for the simulation. The flux through the membrane bed is considered laminar, axisymmetric and isothermal; the flow through the membrane bed was considered laminar as the Reynolds number was less than 2300.

3.1 Assumptions

The following assumptions were considered in building the CFD model describing effluent flow through the membrane bed [11]:

I. The effluent generated by the rubber industry was considered a Newtonian fluid because of its dilution in water.
II. The effluent from the rubber industry was considered isothermal and incompressible.
III. The mathematical model was assumed to be a laminar, single-phase, pressure-based model with SIMPLE pressure-velocity coupling.
IV. The membrane was considered an isotropic and homogeneous porous zone throughout the geometry.
V. According to the necessity of the simulation work, the under-relaxation factors were reduced to 0.7–0.3 or lower.
VI. Hexahedral grids were considered for meshing the model for computational simplicity.
VII. Large gradients of pressure and swirl velocities were resolved by using refined grids.
VIII. The mathematical model developed for the simulation was confined to the flow model only.

3.2 Governing Equations

The flow through the membrane is governed by the continuity, momentum, Darcy and solute transfer equations [12]. The governing equations for the porous membrane bed are defined below.

Continuity equation. The continuity equation for effluent flow through the membrane bed is given by Eq. 1,

$\left( \frac{\partial}{\partial x}\hat{i} + \frac{\partial}{\partial y}\hat{j} + \frac{\partial}{\partial z}\hat{k} \right) \cdot \vec{V} = 0$   (1)

Darcy's law. Darcy's law for the flow of effluent through the membrane bed is given by Eq. 2,

$\nabla P = -\frac{\mu}{\alpha} \vec{V}$   (2)

Momentum equation. The axial momentum equation for effluent flow through the membrane bed is given by Eq. 3,

$\nabla \cdot \left( \rho u \vec{V} \right) = -\frac{\partial P}{\partial x} + \mu \nabla^2 u$   (3)

The radial momentum equation for the rubber industrial effluent flow through the membrane is given by Eq. 4,

$\nabla \cdot \left( \rho u \vec{V} \right) = -\frac{\partial P}{\partial r} + \mu \nabla^2 u$   (4)

Mass transfer (solute transfer) equation. The solute transfer through the membrane is given by Eq. 5,

$\nabla \cdot \left( \rho \vec{V} C \right) = \rho D \nabla^2 C$   (5)
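As a small illustration, Eq. 2 can be evaluated directly to estimate the pressure gradient across the porous bed; the viscosity, permeability and velocity values below are placeholders for illustration only, not the experimental conditions of this study.

def darcy_pressure_gradient(mu, alpha, velocity):
    # Pressure gradient (Pa/m) from Darcy's law, Eq. 2: grad(P) = -(mu/alpha) * V.
    return -(mu / alpha) * velocity

# Illustrative values only: water-like viscosity, assumed permeability, and the
# 2.1 m/s inlet velocity mentioned in Sect. 4.1.
mu = 1.0e-3       # dynamic viscosity, Pa*s
alpha = 1.0e-12   # membrane permeability, m^2 (assumed)
v = 2.1           # velocity, m/s
print(darcy_pressure_gradient(mu, alpha, v), 'Pa/m')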

3.3 Boundary Conditions

The following boundary conditions were considered for the modeling and simulation of the rectangular sheet membrane [11]:

1. The inlet was assumed to be a mass-flow inlet.
2. The outlet was considered a pressure outlet with gauge pressure equal to zero.
3. The membrane was assumed to be a porous zone.
4. A no-slip condition was considered near the wall, where the fluid velocity tends to zero.

3.4 Convergence and Grid Independency Test

The default convergence criterion for all equations was set to 10⁻⁵, except for the transport equation, for which 10⁻³ was used. A computational domain was used for calibrating the fully developed flow results obtained for the rectangular sheet membrane. From the results of the study, it was observed that the final predicted results depend on the mesh/grid geometry. Gradual increments and decrements of the mesh/grid resolution by 50% were applied to evaluate whether the employed resolution was adequate to obtain results with a minimum error percentage. It was found that with a 50% decrease in mesh/grid resolution, the pressure profile deviates by 8–15% from that of the currently employed mesh/grid, whereas with a 50% increase in resolution, the deviation is 1–5%. From these results, it was concluded that the current mesh/grid resolution is sufficient for obtaining a grid-independent solution for the proposed model of the rectangular sheet membrane.

4 Results and Discussions

4.1 Selection of Optimum Grid Size

The grid independency test was implemented to select the optimum grid for the simulation process. Table 2 shows the grid selection procedure, where parameters such as computational time and deviation are used to select the optimum grid. The grid selection was conducted assuming a membrane pore size of 0.2 µm and an inlet velocity of 2.1 m/s, with the aim of obtaining a grid-independent solution. Three grid sizes (coarse, 81000; fine, 375000; finer, 648000) were compared. No significant change in the results was found between the fine and the finer grid. The fine grid was therefore selected as the optimum one because of its lower computational time and cost and smaller deviation; the fine grid of 375000 cells is used for the simulation, as further increases in grid size do not show any significant change in the pressure profile inside the rectangular sheet membrane.


Table 2. Optimum grid selection

Sl. No.   Mesh nature   Mesh size   No. of nodes   No. of faces   Time (min)   Exp. (Pa)   CFD (Pa)   Deviation
1         Coarse        81000       249300         87451          43           761.7       742        19.8
2         Fine          375000      1142500        392751         167          761.7       756        5.8
3         Finer         648000      1969200        673501         240          761.7       756        5.8

4.2 CFD Analysis

Figure 2 shows the concentration profile at the inlet and outlet sections of the membrane and demonstrates the quality of the permeate flux produced during the separation process. Red and blue in the plot indicate the maximum and minimum effluent concentrations, respectively. The figure shows that the mass concentration is higher at the inlet than at the outlet of the membrane, because of the high resistance offered by the membrane bed, which selectively permits only particles with diameters smaller than the membrane pore size. Figure 3 is a graphical illustration of the wall shear stress (Pa) of the rectangular sheet membrane. The deformation of the fluid is high in the vicinity of the wall and low at the center of the membrane bed, because the adhesive force dominating close to the wall is stronger than the weak cohesive force in the center. The plot of mass imbalance (kg/s) with respect to time (min) for the rectangular sheet membrane is shown in Fig. 4. The permeate flux of the membrane gradually decreases as filtration proceeds because of the cake layer forming over the surface of the membrane, which partially or totally blocks the membrane pores and obstructs the flow, thus reducing the permeate flux. The membrane shows a filtration time of 160 min when it is implemented to improve the effluent quality.

Fig. 2. Contour of concentration profile.


Fig. 3. Graphical illustration of wall shear stress (Pa).

Fig. 4. Plot of mass imbalance (kg/s) with respect to time (min).

4.3 Error Analysis

The error analysis is implemented to assess the accuracy of the CFD model. Percent bias (PBIAS), Nash-Sutcliffe efficiency (NSE) and the RMSE-observations standard deviation ratio (RSR) are used to conduct the error analysis [15–21]. The PBIAS, NSE and RSR values are estimated using Eqs. 6–8:

$\text{PBIAS} = \left[ \frac{\sum_{i=1}^{n} \left( Y_i - Y_i^* \right) \times 100}{\sum_{i=1}^{n} Y_i} \right]$   (6)

$\text{NSE} = 1 - \left[ \frac{\sum_{i=1}^{n} \left( Y_i^* - Y_i \right)^2}{\sum_{i=1}^{n} \left( Y_i - Y_{mean} \right)^2} \right]$   (7)

$\text{RSR} = \frac{\sqrt{\sum_{i=1}^{n} \left( Y_i - Y_i^* \right)^2}}{\sqrt{\sum_{i=1}^{n} \left( Y_i - Y_{mean} \right)^2}}$   (8)

In Eqs. 6–8, Yi, Yi* and Ymean represent the actual dataset, the predicted dataset and the mean of the actual dataset, respectively, and n is the total number of observations in the actual dataset. The values of PBIAS, NSE and RSR are considered best if they are close to 0, 1 and 0, respectively. Table 3 shows the error analysis of the developed CFD model; the results for PBIAS (0.317), NSE (0.99) and RSR (0.022) are close to the ideal values. The error analysis demonstrates the accuracy of the simulation process and justifies the use of CFD to describe the flow through the membrane.

Table 3. Error analysis of the developed CFD model

Sl. No.   Method   Estimated value   Best value
1         PBIAS    0.317             0
2         NSE      0.99              1
3         RSR      0.022             0
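The three error measures of Eqs. 6–8 can be computed directly from the observed and CFD-predicted arrays, as in the sketch below; the function and variable names are illustrative only.

import numpy as np

def pbias(obs, pred):
    # Percent bias, Eq. 6: optimal value 0.
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 100.0 * np.sum(obs - pred) / np.sum(obs)

def nse(obs, pred):
    # Nash-Sutcliffe efficiency, Eq. 7: optimal value 1.
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 1.0 - np.sum((pred - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rsr(obs, pred):
    # RMSE-observations standard deviation ratio, Eq. 8: optimal value 0.
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return np.sqrt(np.sum((obs - pred) ** 2)) / np.sqrt(np.sum((obs - obs.mean()) ** 2))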

4.4 Model Validation

Figure 5 shows the plot of pressure drop (Pa) against feed velocity (m/s) for the rectangular sheet membrane, used for validation. It is observed that an increase in inlet velocity affects the momentum of the fluid: since the momentum of the liquid is a function of velocity, an increase in speed increases the momentum of the fluid, which then collides with the membrane wall and pore walls at a higher rate and loses kinetic energy. This kinetic energy loss due to the increased fluid velocity is converted into a pressure head, which appears as an increased pressure drop. The CFD-predicted results were validated against the experimental values, and the validation plot shows that the results predicted by CFD are in good agreement with the experimental results.


Fig. 5. Validation plot between pressure drop and inlet velocity.

5 Conclusions

The model of the rectangular membrane was developed using Gambit, and the effluent flow pattern inside the membrane was simulated using Fluent. To produce results with the least error percentage, a hexahedral mesh requiring the least computer memory was used for meshing. From the grid independence test, a mesh size of 375000 was selected for the simulation work, as any further refinement of the grid does not change the pressure profile of the rectangular sheet membrane. The mass imbalance (kg/s) study shows that the membrane bed has high separation efficiency. The CFD-predicted results were validated against experimental data; the CFD results follow the experiments with high precision, and the error percentage typically varies in the range of 1–5%. The CFD simulation provides insight into hydrodynamic properties such as wall shear stress and the concentration profile over the membrane. Error analysis was implemented to evaluate the accuracy of the simulation process using PBIAS, NSE and RSR; the determined values of NSE, PBIAS and RSR were 0.99, 0.317 and 0.022, respectively, which are close to the ideal values and justify the accuracy of the simulation. The results obtained from the study can be used to develop a cost-effective membrane separation technique to treat rubber industrial effluent.


References
1. Chen, Z., Luo, J., Hang, X., Wan, Y.: Physicochemical characterization of tight nanofiltration membranes for dairy wastewater treatment. J. Memb. Sci. 547, 51–63 (2018). https://doi.org/10.1016/j.memsci.2017.10.037
2. Noor, S.F.M., Ahmad, N., Khattak, M.A., et al.: Application of Sayong ball clay membrane filtration for Ni (II) removal from industrial wastewater. J. Taibah Univ. Sci. 11, 949–954 (2017). https://doi.org/10.1016/j.jtusci.2016.11.005
3. Emin, C., Kurnia, E., Katalia, I., Ulbricht, M.: Polyarylsulfone-based blend ultrafiltration membranes with combined size and charge selectivity for protein separation. Sep. Purif. Technol. 193, 127–138 (2018). https://doi.org/10.1016/j.seppur.2017.11.008
4. Nath, K., Dave, H.K., Patel, T.M.: Revisiting the recent applications of nanofiltration in food processing industries: progress and prognosis. Trends Food Sci. Technol. 73, 12–24 (2018). https://doi.org/10.1016/j.tifs.2018.01.001
5. Pak, A., Mohammadi, T., Hosseinalipour, S.M., Allahdini, V.: CFD modeling of porous membranes. Desalination 222, 482–488 (2008). https://doi.org/10.1016/j.desal.2007.01.152
6. Berman, A.S.: Laminar flow in channels with porous walls. J. Appl. Phys. 24, 1232–1235 (1953). https://doi.org/10.1063/1.1721476
7. Karode, S.K.: Laminar flow in channels with porous walls, revisited. J. Memb. Sci. 191, 237–241 (2001). https://doi.org/10.1016/S0376-7388(01)00546-4
8. Rahimi, M., Madaeni, S.S., Abolhasani, M., Alsairafi, A.A.: CFD and experimental studies of fouling of a microfiltration membrane. Chem. Eng. Process. Process Intensif. 48, 1405–1413 (2009). https://doi.org/10.1016/j.cep.2009.07.008
9. Ghadiri, M., Asadollahzadeh, M., Hemmati, A.: CFD simulation for separation of ion from wastewater in a membrane contactor. J. Water Process. Eng. 6, 144–150 (2015). https://doi.org/10.1016/j.jwpe.2015.04.002
10. Rezakazemi, M.: CFD simulation of seawater purification using direct contact membrane desalination (DCMD) system. Desalination 443, 323–332 (2018). https://doi.org/10.1016/j.desal.2017.12.048
11. Banik, A., Bandyopadhyay, T.K., Biswal, S.K.: Computational fluid dynamics simulation of disc membrane used for improving the quality of effluent produced by the rubber industry. Int. J. Fluid Mech. Res. 44, 499–512 (2017). https://doi.org/10.1615/InterJFluidMechRes.2017018630
12. Banik, A., Biswal, S.K., Bandyopadhyay, T.K.: Predicting the optimum operating parameters and hydrodynamic behavior of rectangular sheet membrane using response surface methodology coupled with computational fluid dynamics. Chem. Papers 74(9), 2977–2990 (2020). https://doi.org/10.1007/s11696-020-01136-y
13. Myagkov, L., Chirskiy, S., Panchenko, V., et al.: Application of the topological optimization method of a connecting rod forming by the BESO technique in ANSYS APDL. In: Vasant, P., Zelinka, I., Weber, G. (eds.) Advances in Intelligent Systems and Computing. Springer, Cham (2020)
14. Vasant, P., Zelinka, I., Weber, G.-W.: Intelligent computing and optimization. In: Proceedings of the 2nd International Conference on Intelligent Computing and Optimization 2019 (ICO 2019), 1st edn. Springer, Cham (2020)
15. Banik, A., Biswal, S.K., Majumder, M., Bandyopadhyay, T.K.: Development of an adaptive non-parametric model for estimating maximum efficiency of disc membrane. Int. J. Converg. Comput. 3, 3–19 (2018)


16. Panchenko, V.I., Kharchenko, A., Valeriy Lobachevskiy, Y.: Photovoltaic solar modules of different types and designs for energy supply. Int. J. Energy Optim. Eng. 9, 74–94 (2020). https://doi.org/10.4018/IJEOE.2020040106
17. Panchenko, V.A.: Solar roof panels for electric and thermal generation. Appl. Sol. Energy (English Transl. Geliotekhnika) 54, 350–353 (2018). https://doi.org/10.3103/S0003701X18050146
18. Banik, A., Dutta, S., Bandyopadhyay, T.K., Biswal, S.K.: Prediction of maximum permeate flux (%) of disc membrane using response surface methodology (RSM). Can. J. Civ. Eng. 46, 299–307 (2019). https://doi.org/10.1139/cjce-2018-0007
19. Kalaycı, B., Özmen, A., Weber, G.-W.: Mutual relevance of investor sentiment and finance by modeling coupled stochastic systems with MARS. Ann. Oper. Res. 295(1), 183–206 (2020). https://doi.org/10.1007/s10479-020-03757-8
20. Kuter, S., Akyurek, Z., Weber, G.W.: Retrieval of fractional snow covered area from MODIS data by multivariate adaptive regression splines. Remote Sens. Environ. 205, 236–252 (2018). https://doi.org/10.1016/j.rse.2017.11.021
21. Vasant, P., Zelinka, I., Weber, G.-W.: Intelligent computing & optimization. In: Conference Proceedings ICO 2018. Springer, Cham (2019)

End-to-End Supply Chain Costs Optimization Based on Material Touches Reduction

César Pedrero-Izquierdo1(&), Víctor Manuel López-Sánchez1, and José Antonio Marmolejo-Saucedo2

1 Universidad Anáhuac, Av. Universidad Anáhuac 46, Lomas Anáhuac, 50130 Huixquilucan, Edo. México, Mexico [email protected]
2 Facultad de Ingeniería, Universidad Panamericana, Augusto Rodin 498, 03920 Ciudad de México, Mexico

Abstract. The global manufacturing industry requires high standards of productivity, quality, delivery and flexibility. This is especially true in the trucking industry, which has gained high efficiency by adopting lean manufacturing tools. Nevertheless, it is crucial to look into the supply chain to reach higher efficiency and sustainable competitive advantages. Much research on supply chain cost optimization treats the chain as a collection of interrelated and indivisible levels whose internal operations can be neglected, which prevents us from spotting non-value and wasteful activities inside these levels. To fix this drawback, this research proposes a different cost optimization strategy that takes advantage of those internal operations. The procedure breaks the supply chain levels down into basic elements to generate new sets of operations that collectively satisfy the original supply chain. The solution to this combinatorial problem, which is inspired by the IP "crew-pairing problem", provides a new and optimized supply chain that minimizes costs.

Keywords: Supply chain · Set covering · End to end · Material touches · Trucks

1 Introduction

1.1 Motivation and Objective

The global manufacturing industry requires the highest standards in productivity, quality, delivery and flexibility. This is especially true for the truck manufacturing industry, which is a relevant example due to the technical complexity of its assembly process, as well as its tendency toward mass personalization, creating a very complex supply chain (Fogliatto 2003). There has been vast research on supply chain optimization; nevertheless, most of it relates to lean manufacturing concepts (Monden 2011, Krajewski 2010) or to the minimum cost flow problem, or MCFP (Hammamia and Frein 2013). When related to lean concepts, it is normally applied to limited procedures or steps in the supply chain (or SC); and when related to the MCFP, the SC is conceptualized as a net of indivisible levels whose internal operations are neglected. This prevents us from realizing that inside these levels there are redundant activities


that do not add any value to further processes (Closs 2010). In this particular research, those internal activities are referred to as "material touches" or just "touches", meaning every action related to handling, moving or interacting in general with the product flowing along the SC. Based on the background above, the general objective of this research is to develop a different cost optimization methodology for a SC of i − 1 stages and i levels serving a truck assembly company, from suppliers to the final assembly process. This methodology must be capable of generating cost improvements within the internal processes of the i levels of the SC by fragmenting them into j basic touches and neglecting the traditional division between them.

1.2 Methodology

This research introduces a four-stage methodology. The first stage analyzes and segments the original i levels of the SC into individual j touches that are subsequently cataloged according to specific attributes. The result of this stage, plus the selection of touches proposed to be externally outsourced, gives rise to the next step, which consists of generating new sets of touches according to individual characteristics. These subsets will not satisfy the original SC touches individually; however, a combination of them will satisfy them collectively. The third stage calculates the costs of the brand-new touch subsets. The last stage is dedicated to formulating and solving the mathematical model as a "set covering problem". The case studies used to validate this methodology come from a collaborating truck assembly company, which is the largest in the world and the one with the highest offer of customization in Mexico and Latin America.

1.3 Contribution

The main objective and contribution to the state of the art of this research was to develop a new cost optimization strategy dedicated to the SC of the truck assembly industry that can visualize improvement opportunities within the internal steps of the SC. Out of the vast research dedicated to this topic, where lean concepts and the MCFP stand out, this research produced the following outcomes that had not been considered within these two branches:
• The concept of the material touch is introduced, as well as its mathematical description, as the basis of a new methodology dedicated to SC cost optimization.
• An iterative process was developed to segment the entire supply chain into touches based on individual attributes.
• A practical application of the "end to end" concept related to the SC was presented as part of the cost optimization methodology.
• A mathematically strict methodology was provided in order to merge both strengths and weaknesses of different logistics operators to optimize total costs.
• A four-stage methodology inspired by the aircrew-assignment problem "crew pairing" was developed as an IP in a "set covering" format.


2 Literature Review

Kumar and Nambirajan (2014) suggested crossing borders between client and provider to expand capabilities. Arif-Uz-Zaman and Ahsan (2014) suggested that the SC should be treated as companies linked to each other. Stavrulaki and Davis (2010) suggested that a SC improves overall effectiveness when every stage is aligned. Krajewski et al. (2010) integrated the design of the SC under a "lean manufacturing system". Bezuidenhout (2015) added the agile concept to the existing lean concepts to describe what he called a more realistic "lean-agile" process. As an additional tool to optimize a SC, Walters and Lancaster (2000) proposed the third-party logistics (3PL) figure as part of this process. Arias-Aranda et al. (2011) verified the correlation between 3PL services and flexibility levels within a company. Wu et al. (2012) suggest flexibility and specialization improvements through 3PL services. Dolguia and Proth (2013) considered that 3PL services can reduce non-essential activities and reduce costs. Agrawal (2014) mentions that using 3PL services enhances flexibility and innovation. Lee (2010) used the concept "end to end" to suggest the elimination of redundant SC processes. Closs et al. (2010) also suggested migrating to a collaborative culture to eliminate waste by eliminating silos along the SC. Trang (2016), Kram et al. (2015) and Ageron (2013) propose the elimination of redundant activities through shared SC design. González-Benito (2013) adopted the term "co-makership" for shared responsibility in SC design. Ashley (2014) affirmed that SC co-responsibility must go from the birth of the product to the end of its cycle. Regarding optimization methodologies, Castillo-Villar (2014) added quality costs as an optimization criterion. Hammamia and Frein (2013) added delivery time as an optimization criterion. Sakalli (2017) included stochastic parameters under uncertainty. Cheng and Ye (2011) adapted the classic MCFP model to handle parallel suppliers. Fahimniaa et al. (2013) introduced production planning and SC parameters together. Ding (2009) proposed an optimization method using simultaneous production and distribution parameters. Paksoy and Ozceylan (2011) included operation-balancing parameters in the MCFP. Paksoy and Ozceylan (2012) particularized their previous work by including U-shaped production balancing. Hamtaa (2015) proposed an SC optimization problem by adding production with uncertain demand parameters. Minoux (2006) solved the crew assignment problem by using heuristic methods. Souai and Teghem (2009) proposed reassigning routes by using a genetic algorithm. Deng and Lin (2010) proposed an ant colony algorithm to solve the crew scheduling problem. The same problem, also considering legal requirements, is solved by Deveci and Demirel (2015) using a genetic algorithm. Tekiner et al. (2008) proposed a column generation method incorporating disruptive events. Later, Muter et al. (2010) proposed a two-step column generation methodology that first solves the crew pairing and then uses these data to solve the crew rostering, while incorporating possible disruptions from the planning stage of the problem, which they called "robust planning". Özener et al. (2016) solved the aircraft allocation and flight sequence assignment by using exact and metaheuristic methods.


3 Touches-Based Cost Optimization Methodology

3.1 Analysis of SC Levels Using a Touches Approach

This methodology starts by segmenting the i levels of a SC into j internal and independent basic steps for every component k. The segmentation is carried out through an empirical process that follows the knowledge dynamics described by Bratianu (2018). These individual actions, called “material touches”, are generically denoted as Tijk, which describes touch j belonging to level i acting on component k; this concept is represented in Fig. 1. Every touch Tijk is particularized depending on the type of activity and cost it generates. Nine types of touches are proposed to describe every activity in the SC. These are listed below:

• Internal manual touch (TMNijk): parts manipulation by physical labor force.
• Internal machinery touch (TQNijk): parts movements by devices and mechanisms.
• Internal manual touch implying packaging (TMENijk): packaging by labor force.
• Internal machinery touch implying packaging (TQENijk): packaging using devices.
• External manual touch (TMXijk), external machinery touch (TQXijk), external manual touch that implies packaging (TMEXijk) and external machinery touch that implies packaging (TQEXijk): similar concepts but provided by a third party.
• Transportation touch (TTRijk): parts transportation by motor-vehicles of any kind.

Fig. 1. The concept of material touches is represented in this diagram, where level i = 2 is fragmented into j = 8 material touches for component k.

3.2 Incidence Matrix Generation and Numerical Example

The incidence matrix is the registration of every touch subset that will collectively satisfy the activities of the original SC. Let A be an m × n matrix, where m represents every touch of the original SC and n the number of internal touch subsets plus the subsets proposed to be outsourced. The subsets are formed by consecutive and inseparable operations, determined by the process interrelation they keep with their immediately previous activity. The three proposed interrelations are: initial (IN), used only to tag the first touch; consecutive (C), intended for touches that are technically not separable from the previous process; and non-consecutive (NC), used for touches that can be performed independently from the previous process.

The original m touches are recorded in the first columns of A strictly in the order and position of the original SC, as described next. Let M = {touches of the original SC} and let P = {touches proposed to be outsourced}, where P ⊆ M. Column n = 1 corresponds to Pᶜ, assigning a value of 1 to the positions where a touch exists and 0 to those where it does not. Thereafter, the elements of subset P are recorded in the subsequent columns. P is divided into subsets depending on the interrelation of its touches as follows: subset P is registered in column (n + 1) starting with the initial touch IN, followed by the subsequent touches classified as consecutive C. Once a non-consecutive touch NC is reached, it is assigned to the next column, followed by the subsequent touches C, until another NC touch appears. This iterative process is performed for the remaining elements of P. Let r be the quantity of external offers to outsource selected subsets of P, and let Pr be the subset outsourced by offer r. Each Pr is recorded in an individual column for every r.

As a numerical example, consider a SC of i = 4 levels with i − 1 = 3 stages, where each level contains j = 2 touches for k = 1 component. Let us propose six touches within the SC: T111, T121, T211, T221, T311 and T321. Then, set M would be M = {T111, T121, T211, T221, T311, T321}. Let us suppose that touches T111, T211, T221 can be outsourced and that there are three external proposals, therefore r = 3. Set P is then formed by the touches P = {T111, T211, T221}. Let us propose the three external offers as P1 = {T211, T221}, P2 = {T111, T211, T221} and P3 = {T211, T221}, where every Pr ⊆ P; therefore the complement set is Pᶜ = {T121, T311, T321}. Each element belonging to Pᶜ is recorded in column n = 1 in the rigorous order and position of the original SC. Next, the elements of set P are recorded starting at column n = 2 depending on the interrelation of their touches (see Table 1), as described above. Finally, the r = 3 proposals P1, P2, P3 to be outsourced are recorded in independent columns. The resulting incidence matrix is shown in Fig. 2.
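To make the column-construction rule concrete, the following Python sketch builds the incidence matrix of the numerical example. It is an illustrative reading of the procedure, not the authors' implementation; in particular, the interrelation tags assigned to T111, T211 and T221 are assumptions standing in for Table 1, which is not reproduced here.

```python
# Minimal sketch (not the authors' implementation): building the incidence
# matrix A for the numerical example of Sect. 3.2.
M = ["T111", "T121", "T211", "T221", "T311", "T321"]       # touches of the original SC
P = ["T111", "T211", "T221"]                                # touches proposed for outsourcing
interrelation = {"T111": "IN", "T211": "NC", "T221": "C"}   # assumed tags of the elements of P
offers = [["T211", "T221"], ["T111", "T211", "T221"], ["T211", "T221"]]  # P1, P2, P3

columns = []

# Column n = 1: the complement P^c, in the order and position of the original SC.
Pc = [t for t in M if t not in P]
columns.append([1 if t in Pc else 0 for t in M])

# Internal columns for P: start a new column at an IN or NC touch, extend it with C touches.
current = []
for t in [t for t in M if t in P]:
    if interrelation[t] in ("IN", "NC") and current:
        columns.append([1 if u in current else 0 for u in M])
        current = []
    current.append(t)
if current:
    columns.append([1 if u in current else 0 for u in M])

# One column per external offer P_r.
for Pr in offers:
    columns.append([1 if t in Pr else 0 for t in M])

# A is m x n with rows ordered as in M (compare Fig. 2).
A = [[col[i] for col in columns] for i in range(len(M))]
for touch, row in zip(M, A):
    print(touch, row)
```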

Table 1. Interrelation of set P from numerical example presented in Sect. 3.2.


Fig. 2. Incidence matrix: internal touches registered in columns n = 1–3 and external in n = 4–6.

3.3 Touches Subsets Cost Calculation

The next step is to obtain the costs of both internal and external touch subsets. The calculation parameters include Te, which represents the total amount of touches per day, and Tt, the amount of touches corresponding to each component k. For both Te and Tt, the dimensional units are [touches/day]. Both parameters must be particularized to individually represent the corresponding types of touches. The daily demand is represented by d [units/day]. The quantity of parts that component k needs to form one unit of product is Uijk [pieces/unit], and Qijk [pieces] is the quantity of k components in a standard package. Let us develop the formulation of Te and Tt for the first type of touch, TMNijk; the equations for the other eight types of touches are calculated the same way. For all of them the dimensional analysis is ((units/day × pieces/unit) / pieces) × touches = touches/day. The equations for TeTMNi and TtTMNijk are:

$$TeTMN_i = \sum_{j=1}^{J}\sum_{k=1}^{K}\frac{d \cdot U_{ijk}}{Q_{ijk}}\, TMN_{ijk} \qquad \forall i = 1,2,\ldots,I \tag{1}$$

$$TtTMN_{ijk} = \frac{d \cdot U_{ijk}}{Q_{ijk}}\, TMN_{ijk} \qquad \forall i = 1,\ldots,I;\ \forall j = 1,\ldots,J;\ \forall k = 1,\ldots,K \tag{2}$$

where

$$T_{ijk} = \begin{cases}1 & \text{if the specific touch type } j \text{ exists in level } i \text{ for component } k\\ 0 & \text{in any other case}\end{cases} \tag{3}$$
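As a small illustration of Eqs. (1)–(2), the snippet below counts daily touches for the internal manual touch type; the demand, usage and packaging figures are invented placeholders.

```python
# Illustrative only: daily touch counts for the internal manual touch type TMN.
touches = [(120, 4, 10, 1), (120, 2, 25, 1)]     # (d, Uijk, Qijk, TMNijk) placeholders
Tt = [d * U / Q * t for d, U, Q, t in touches]   # Eq. (2): touches/day of each touch
Te = sum(Tt)                                     # Eq. (1): level total, touches/day
print(Tt, Te)                                    # [48.0, 9.6] and 57.6
```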

Note that in (3) Tijk represents each of the nine types of touches: TMNijk, TQNijk, TMENijk, TQENijk, TMXijk, TQXijk, TMEXijk, TQEXijk and TTRijk. Once Te and Tt are calculated, the cost for each type of touch can be obtained. These costs are listed below; the dimensional units for all of them are [monetary unit/day].

• CTMNi: internal manual touch costs.
• CTQNi: internal machinery touch costs.
• CTMENi: internal manual touch that implies packaging costs.
• CTQENi: internal machinery touch that implies packaging costs.
• CTMXijk: external manual touch costs.
• CTQXijk: external machinery touch costs.
• CTMEXijk: external manual touch that implies packaging costs.
• CTQEXijk: external machinery touch that implies packaging costs.
• CTTRi: transportation costs.

Calculation of costs for internal and external touch subsets follows two completely different forms of logic. First, let us develop the internal cost calculation for the first type of touch, TMNijk; the formulation for the other eight types is done the same way. Let CTMNi be the total cost related to TMNijk activities in level i. Let CuTMNi be the unitary cost of touch TMNijk of level i, which is the equitable partition, by touch, of the resources dedicated to the execution of this type of touch. The dimensional units for CuTMNi are [monetary unit/touch]:

$$CuTMN_i = \frac{CTMN_i}{TeTMN_i} \qquad \forall i = 1,2,\ldots,I \tag{4}$$

Once the unitary costs per touch type are obtained for level i, the total cost for each individual touch is calculated by multiplying the unitary cost by the total touches Tt. This calculation is done for each touch belonging to each subset in column n, and its sum represents the total cost of every touch existing in that subset. The internal subset cost CIntn [monetary unit/day] is the sum over every element belonging to column n:

$$CInt_n = \sum_{ijk}\Big(CuTMN_i\,TtTMN_{ijk} + CuTQN_i\,TtTQN_{ijk} + CuTMEN_i\,TtTMEN_{ijk} + CuTQEN_i\,TtTQEN_{ijk} + CuTMX_i\,TtTMX_{ijk} + CuTQX_i\,TtTQX_{ijk} + CuTMEX_i\,TtTMEX_{ijk} + CuTQEX_i\,TtTQEX_{ijk} + CuTTR_i\,TtTTR_{ijk}\Big) \tag{5}$$

Calculation of costs for the subsets to be outsourced follows a different logic. These costs depend on two factors. The first one, CextTr, refers to the commercial proposal from third parties. The second one, defined as the approaching cost CTextTrijk, is the relocation cost of the components to the point where they will continue to be processed. This cost makes external proposals cost-comparable with the internal ones, since it compensates for the fact that the outsourced processes take place in another facility and the components later return to the starting point to continue the original flow. The total cost CExtr of subset Pr is the sum of both costs, CextTr and CTextTrijk [monetary unit/day]:

$$CExt_r = CextT_r + \sum_{ijk} CTextT_{rijk} \qquad \forall r \tag{6}$$
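A hedged numerical sketch of the cost logic of Eqs. (4)–(6) follows; the unit costs, touch counts and third-party quote are placeholders rather than case-study data.

```python
# Sketch of the cost logic of Eqs. (4)-(6) with placeholder figures.
CTMN_level = 5000.0                         # total daily cost of manual touches at level i
Te_TMN_level = 480.0                        # total manual touches/day at level i
CuTMN = CTMN_level / Te_TMN_level           # Eq. (4): unitary cost per touch

# Internal subset cost (Eq. 5): unitary cost x touch count, summed over the
# touches registered in column n (only the manual touch type shown here).
Tt_in_column = [48.0, 96.0, 24.0]           # touches/day of the subset's elements
CInt_n = sum(CuTMN * Tt for Tt in Tt_in_column)

# External subset cost (Eq. 6): third-party quote plus approaching costs.
CextT_r = 1200.0                            # commercial proposal, monetary unit/day
CTextT_r = [150.0, 90.0]                    # relocation (approaching) costs per touch
CExt_r = CextT_r + sum(CTextT_r)

print(round(CInt_n, 2), round(CExt_r, 2))   # 1750.0 and 1440.0
```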

The resulting cost matrix is shown in Table 2, where the touches have already been illustratively particularized.

Table 2. Cost matrix of internal and external touch subsets.

3.4 Generation of the Mathematical Model in Its “Set Covering” Form

The mathematical model is formulated as a combinatorial optimization problem in the form of “set covering”. The solution to the particular application presented here can be obtained by using exact methods, due to the limited size of the problems found in the industrial practice under study. Each subset is represented by a decision variable PA1, PA2, PA3, …, PAn. The coefficients of the objective function are CIntn and CExt(n−ni), in which the quantity of external proposals is r = n − ni. The decision variables are binary:

$$PA_n = \begin{cases}1 & \text{if the subset is selected}\\ 0 & \text{otherwise}\end{cases}$$

Internal and external touch subsets are recorded in the incidence matrix A, where:

$$a_{mn} = \begin{cases}1 & \text{if the element exists in the subset registered in column } n\\ 0 & \text{otherwise}\end{cases}$$

The objective function minimizes the total cost by selecting a combination of subsets PA1, PA2, …, PAn that collectively satisfies the original SC touches. The resulting mathematical model is:

$$\text{Minimize } Z = \sum_{n=1}^{ni} CInt_n \cdot PA_n + \sum_{n=ni+1}^{N} CExt_{n-ni} \cdot PA_n \tag{7}$$

$$\text{Subject to: } \sum_{n=1}^{N} a_{mn} \cdot PA_n \ge 1 \qquad \forall m = 1,2,\ldots,M \tag{8}$$

$$PA_n \in \{0,1\}, \qquad n = 1,2,\ldots,N$$
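For prototyping purposes, model (7)–(8) can be fed to an off-the-shelf solver. The sketch below uses the open-source PuLP package (an assumption; any IP solver would do) with the six-touch example of Sect. 3.2 and invented subset costs.

```python
# Set-covering prototype of model (7)-(8) using PuLP; costs are illustrative,
# not case-study data.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

touches = ["T111", "T121", "T211", "T221", "T311", "T321"]
# Columns of the incidence matrix A (internal subsets first, then external offers).
subsets = {
    "PA1": ["T121", "T311", "T321"],   # internal complement P^c
    "PA2": ["T111"],                   # internal
    "PA3": ["T211", "T221"],           # internal
    "PA4": ["T211", "T221"],           # external offer P1
    "PA5": ["T111", "T211", "T221"],   # external offer P2
    "PA6": ["T211", "T221"],           # external offer P3
}
cost = {"PA1": 900, "PA2": 300, "PA3": 450, "PA4": 420, "PA5": 650, "PA6": 480}

model = LpProblem("touches_set_covering", LpMinimize)
x = {n: LpVariable(n, cat=LpBinary) for n in subsets}
model += lpSum(cost[n] * x[n] for n in subsets)                    # objective (7)
for t in touches:                                                  # covering constraints (8)
    model += lpSum(x[n] for n in subsets if t in subsets[n]) >= 1
model.solve()
print({n: int(value(x[n])) for n in subsets}, value(model.objective))
```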


4 Results Analysis

The methodology was evaluated in two different ways. The first validation consisted in applying the proposed methodology to four designed instances under extreme conditions; the purpose was to verify the correct functioning of the methodology even under unusual circumstances. The second validation consisted in empirical tests, where three real documented cases previously solved for the collaborating company were solved again with the new methodology. Table 3 shows the seven instances.

Table 3. Description of the instances used to validate the proposed methodology.

Validation IC1 verified the correct segmentation of the SC based on the interrelation of touches. Validation of instances IC2 and IC3 consisted of proposing atypical parameters in combination with a random interrelation assignment. For IC2, very high cost coefficients were selected for the external subsets, while for IC3 equal parameters were proposed for all decision variables. As expected, for IC2 the result was the rejection of the external subsets, and for IC3 the problem yielded multiple solutions. The IC4 validation consisted in using the longest and most complex internal supply chain found in the collaborating company. The empirical validation consisted in solving three real cases found in the collaborating industry and then comparing the results with the real solutions observed in industrial practice. For the empirical instances IR1 and IR2, the results obtained were consistent with the real observed solutions. However, IR3 presented a discrepancy when compared to the real solution observed: the new methodology recommended rejecting the outsourcing offers (shown as zero improvement in the results table), whereas in practice external services were engaged. When comparing the data of the business case in detail, it was confirmed that IR3 would be costlier if performed externally; however, in the business case IR3 was part of a package of services that, on the whole, was financially convenient. The methodology results are shown in Table 4.

Table 4. Improvements achieved by using the new touches-based methodology.

The proposed methodology offers improvements over the common cost reduction methodologies used in industrial practice, since it allows merging, substituting or eliminating sections of the SC in the most cost-effective combination. This new methodology is capable of finding cost reduction opportunities in the internal processes of the SC levels that other methodologies cannot reach. In addition, it allows combining touches belonging to one or more levels, since the entire SC is separated into its basic elements following the “E2E” lean principle. It therefore allows companies to search for cost opportunities that remain hidden from other methodologies.

5 Conclusions and Future Research

This methodology is suitable as a mathematically based tool to support the decision-making process of the trucking industry, as well as of other industries whose SC can be divided into individual touches. Studying the truck manufacturing industry's SC from a touches and end-to-end perspective allows visualizing redundant activities that do not add value or would turn out to be a waste for future processes. The presented SC segmentation strategy does not only apply to the trucking industry; it is readily applicable to any human activity that can be broken down into individual activities that can be performed independently from one another. The growing trend for companies to set aside strict commercial divisions between levels of the SC is a fundamental aspect of the proposed methodology. Companies that share this progressive vision for both customer and supplier will have competitive advantages over companies with isolationist policies.

Future research that could expand and strengthen the present investigation need not focus only on costs, but also on other equally important objectives in optimizing a supply chain. For instance, the aforementioned methodology could be applied to optimize quality levels, service satisfaction, labor in manufacturing processes, etc. Additionally, this methodology could be expanded to study two or more parallel supply chains, carrying out the optimization by combining touches from the involved supply chains so as to take advantage of the strengths of some, amend the weaknesses of others and therefore obtain a benefit for the complete cluster. A third line of future research is the design of a greenfield project using the proposed methodology from its early stages; the intention is to select the best combination of operations at the design stage of the SC and its processes, instead of modifying them once in operation.


References

Dolgui, A., Proth, J.M.: Outsourcing: definitions and analysis. Int. J. Prod. Res. 51(23–24), 6769–6777 (2013). Accessed January 2020, from the ProQuest database
Agrawal, A., De Meyer, A., Van Wassenhove, L.N.: Managing value in supply chains: case studies on the sourcing hub concept. California Management Review, University of California, Berkeley, vol. 56, pp. 22–54 (2014)
David, A.: Differentiating through supply chain innovation. Bus. Econ. Market. Purchas. 1, 1–4 (2014)
Fahimnia, B., Farahani, R.Z., Sarkis, J.: Integrated aggregate supply chain planning using memetic algorithm – a performance analysis case study. Int. J. Prod. Res. 51(18), 5354–5373 (2013). http://dx.doi.org/10.1080/00207543.2013.774492
Bratianu, C., Vătămănescu, E.-M., Anagnoste, S.: The influence of knowledge dynamics on the managerial decision-making process. In: Proceedings of the European Conference on Knowledge Management, vol. 1, pp. 104–111 (2018). Accessed from http://search.ebscohost.com/login.aspx?direct=true&db=lih&AN=132145882&lang=es&site=ehost-live&custid=s9884035
Kumar, C.G., Nambirajan, T.: Direct and indirect effects: SCM components. SCMS J. Ind. Manage. 1, 51–65 (2014)
Bezuidenhout, C.N.: Quantifying the degree of leanness and agility at any point within a supply chain. School of Engineering, University of KwaZulu-Natal, Scottsville, South Africa and SCION, Rotorua, New Zealand, vol. 118, no. 1, pp. 60–69, 16 September 2015
Arias-Aranda, D., Bustinza, O.F., Barrales-Molina, V.: Operations flexibility and outsourcing benefits: an empirical study in service firms. Routledge Taylor & Francis Group, vol. 31, no. 11, pp. 1849–1870 (2011). Accessed January 2020, from the EBSCO database
Closs, D.J., Speier, C., Meacham, N.: Sustainability to support end-to-end value chains: the role of supply chain management. Acad. Market. Sci. 39, 101–116 (2010)
Walters, D., Lancaster, G.: Implementing value strategy through the value chain. Manag. Decis. 38(3), 160–178 (2000)
Zeghal, F.M., Minoux, M.: Modeling and solving a crew assignment problem. Eur. J. Oper. Res. 175, 187–209 (2006). From the ProQuest database
Cheng, F., Ye, F.: A two objective optimisation model for order splitting among parallel suppliers. Int. J. Prod. Res. 49, 2759–2769 (2011). From the EBSCO database
Fogliatto, F.S., Da Silveira, G.J.C., Royer, R.: Int. J. Prod. Res. 41(8), 1811–1829 (2003)
Deng, G.F., Lin, W.T.: Ant colony optimization-based algorithm for airline crew scheduling problem. Expert Syst. Appl. 38, 5787–5793 (2011). From the EBSCO database
Tekiner, H., Birbil, S.I., Bubul, K.: Robust crew pairing for managing extra flights. Manuf. Syst. Ind. Eng. Sabanci Univ. 1, 1–30 (2008). From the EBSCO database
Lee, H.L.: Don't tweak your supply chain – rethink it end to end. Harvard Bus. Rev. I, 62–69 (2010)
Ding, H., Benyoucef, L., Xie, X.: Stochastic multi-objective production-distribution network design using simulation-based optimization. Int. J. Prod. Res. 47(2), 479–505 (2009). From the ProQuest database
Muter, I., Birbil, S.I., Bulbul, K., Sahin, G., Yenigun, H.: Solving a robust airline crew pairing problem with column generation. Alg. Optim. 40, 1–26 (2013). From the EBSCO database
González-Benito, J., Lannelongue, G., Alfaro-Tanco, J.A.: Study of supply-chain management in the automotive industry: a bibliometric analysis. Int. J. Prod. Res. 51(13), 3849–3863 (2013)


Wu, J.-Z., Chien, C.-F., Gen, M.: Coordinating strategic outsourcing decisions for semiconductor assembly using a bi-objective genetic algorithm. Int. J. Prod. Res. 50(1), 235–260 (2012)
Arif-Uz-Zaman, K., Ahsan, A.M.M.N.: Lean supply chain performance measurement. Int. J. Product. Perform. Manage. 63(5), 588–612 (2014)
Krafcik, J.F.: Triumph of the lean production system. Sloan Manag. Rev. 30(1), 41–52 (1988)
Krajewski, L.J., Ritzman, L.P., Malhotra, M.K.: Operations Management – Processes and Supply Chains, 9th edn. Pearson Education Inc, Upper Saddle River (2010)
Castillo-Villar, K.K., Smith, N.R., Herbert-Acero, J.F.: Design and optimization of capacitated supply chain networks including quality measures. Hindawi Publishing Corporation, Mathematical Problems in Engineering, pp. 1–17 (2014). Article ID 218913. From the ProQuest database
Kram, M., Tošanović, N., Hegedić, M.: Kaizen approach to supply chain management: first step for transforming supply chain into lean supply chain. Ann. Faculty Eng. Hunedoara – Int. J. Eng., Tome XIII, Fascicule 13(1), 161–164 (2015)
Deveci, M., Demirel, N.C.: Airline crew pairing problem: a literature review. In: 11th International Scientific Conference on Economic and Social Development, vol. 1, 17 December 2015. Accessed July 2019, from the EBSCO database
Souai, N., Teghem, J.: Genetic algorithm based approach for the integrated airline crew-pairing and rostering problem. Eur. J. Oper. Res. 199, 674–683 (2009). From the ProQuest database
Trang, N.T.X.: Design an ideal supply chain strategy. Advances in Management 9(4), 20–27 (April 2016)
Hamta, N., Shirazi, M.A., Fatemi Ghomi, S.M.T., Behdad, S.: Supply chain network optimization considering assembly line balancing and demand uncertainty. Int. J. Prod. Res. 53(10), 2970–2994 (2015). From the ProQuest database
Özener, O.Ö., Matoğlu, M.Ö., Erdoğan, G.: Solving a large-scale integrated fleet assignment and crew pairing problem. Ann. Oper. Res. 253, 477–500 (2016). From the ProQuest database
Paksoy, T., Ozceylan, E., Gokcen, H.: Supply chain optimization with assembly line balancing. Int. J. Prod. Res. 50, 3115 (2011). https://doi.org/10.1080/00207543.2011.593052
Hammami, R., Frein, Y.: An optimisation model for the design of global multi-echelon supply chains under lead time constraints. Int. J. Prod. Res. 51(9), 2760–2775 (2013). From the EBSCO database
Rubin, J.: A technique for the solution of massive set covering problems, with application to airline crew scheduling. Transp. Sci. 7(1), 15–34 (1973)
Rushton, A., Croucher, P., Baker, P.: Logistics and Distribution Management, 4th edn. Kogan Page, London, 636 p. (2010). ISBN 978-0-7494-5714-3
Stavrulaki, E., Davis, M.: Aligning products with supply chain processes and strategy. Int. J. Logist. Manage. 21(1), 127–151 (2010)
Paksoy, T., Özceylan, E.: Supply chain optimisation with U-type assembly line balancing. Int. J. Prod. Res. 50(18), 5085–5105 (2012). From the EBSCO database
Sakalli, U.S.: Optimization of production-distribution problem in supply chain management under stochastic and fuzzy uncertainties. Hindawi, Mathematical Problems in Engineering, pp. 1–29 (2017). Article ID 4389064. From the EBSCO database
Versión 2018 • Agenda Automotriz: Diálogo con la Industria Automotriz 2018–2024 (2018). http://www.amia.com.mx/boletin/dlg20182024.pdf. Accessed 20 Jan 2019
Monden, Y.: Toyota Production System: An Integrated Approach to Just-In-Time, 4th edn. CRC Press, New York (2011)
Ageron, B., Lavastre, O., Spalanzani, A.: Innovative supply chain practices: the state of French companies. Supply Chain Management Int. J. 18(3), 256–276 (2013)

Computer Modeling Selection of Optimal Width of Rod Grip Header to the Combine Harvester

Mikhail Chaplygin, Sergey Senkevich, and Aleksandr Prilukov

Federal Scientific Agroengineering Center VIM, 1st Institute Pas. 5, Moscow 109428, Russia
[email protected], [email protected], [email protected]

Abstract. An economic and mathematical model and a computer program that allow simulating the choice of the optimal header size for a combine harvester are presented in this article. Direct harvesting expenses, taking into account the quality of combine harvester work in farm conditions at different speeds, are accepted as the optimization criterion.

Keywords: Combine harvester · Economic efficiency · Header · Computer optimization model · PC software

1 Introduction

The results of the conducted research [1–5] on the rationale for combine harvester parameters show that the main factors determining the effectiveness of choosing a combine harvester fleet, with justification of the optimal header width, are the qualitative and quantitative composition of the fleet, the standard header size and the optimal duration of the harvesting period. The optimal ratio of these parameters can be established only by studying the operation of the combine harvester fleet with mathematical modeling methods, because natural experiments require a lot of time and material resources. The operating experience and the results of comparative combine harvester tests in different areas of the country show that the main factor determining the underuse of their capacity in direct combine harvesting is insufficient header width. Optimizing the header width will allow the harvester to be used effectively at different crop yields. Research and testing of domestic and foreign combine harvesters in recent years have allowed a refined economic and mathematical model for optimizing the header size for a combine harvester to be developed. The crop yield in the Russian regions varies widely, from 10 to 80 c/Ha (centners per hectare), so farms face the question of which header to purchase in order to realize the nominal combine harvester productivity and ensure minimum cash expenditure for grain harvesting.


Increasing the header width leads to a number of positive aspects:
– the working strokes coefficient increases, so shift performance increases too;
– the harvester operates at lower speeds, which reduces grain losses behind the header and the thresher; it should also be noted that many scientists suggest installing control systems and other technological and technical solutions on the header and combine harvester that help reduce grain losses [6–12];
– the cost of power and fuel for moving heavy harvesting units decreases significantly at low speeds while the nominal productivity is realized;
– the machine operator's working environment improves.

All this leads to an improvement in economic efficiency indicators. On the other hand, increasing the header width leads to a number of disadvantages:
– the price of the header and of the combine harvester as a whole becomes higher, which makes harvesting more expensive;
– the header weight increases, which requires additional power for self-movement of the harvesting unit;
– soil compaction by heavy harvesting units increases.

2 Materials and Methods

The methods used were theoretical (system analysis, methods for optimizing the combine harvester fleet composition, optimal header width calculation), logical (comparison, generalization and analysis of the scientific literature on the research problem) and statistical (processing of experimental data). Mathematical modeling, with the development of a deterministic economic and mathematical model and calculation of the optimal header width for the combine harvester, was used to choose the header width.

The research objective is to ensure effective operational and technological indicators of the combine harvester through a scientifically justified header width. The research tasks are:
– to develop an economic and mathematical model for choosing the header width for a combine harvester and PC software for the calculation, in relation to two levels of winter wheat productivity;
– to conduct field research of a combine harvester with 12 kg/s throughput that can work with headers of different capture widths, with the definition of conditions and operational and technological study indicators;
– to perform PC calculations for choosing the optimal header width for a combine harvester with 12 kg/s throughput.

3 Discussion and Results

The optimal header width is selected according to the thresher throughput and the combine harvester engine power in specific zonal conditions. In this regard, a refined economic and mathematical optimization model for the header size is proposed.


The structural scheme of the refined economic and mathematical model for optimizing the header size for the combine harvester is shown in Fig. 1.

Fig. 1. Structural diagram of the algorithm for calculating and selecting the header type size for the combine harvester.

The model consists of interconnected blocks that determine the sequence of calculations for choosing the optimal combine harvester header size according to formulas (1)−(17), which are given in the text below. The general variant of winter wheat harvesting organization using direct combining technology with grinding of the non-grain part of the crop was adopted in order to make the calculations as close as possible to the real economic task.


The investigated variable parameter is the working header width Bh, expressed as the function

$$B_h = f\left(V,\ G_{ch}^{op},\ K_2,\ N_{mov},\ W_{sh}\right) \tag{1}$$

The following parameters change depending on the header width used in the combine unit:
– working movement speed V, km/h;
– operating combine harvester weight Gch^op, kg;
– combine harvester working moves coefficient K2;
– power for self-movement Nmov, h.p.;
– shift combine unit productivity Wsh, ton/h.

The combine harvester working movement speed with different header widths, km/h, is determined by the formula

$$V = \frac{W_n^0}{0.1 \cdot B_h \cdot U_g} \tag{2}$$

where W_n^0 is the nominal combine harvester productivity, ton/h; Bh is the header width, m; Ug is the grain crop productivity, ton/Ha.

The combine harvester operating weight, kg, is determined by

$$G_{ch}^{op} = G_{ch} + G_h + G_g \tag{3}$$

where Gch is the combine harvester weight without header, kg; Gh is the header weight, kg; Gg is the grain weight (in the grain tank), kg.

The combine harvester working moves coefficient is determined by the formula

$$K_2 = \left(1 + \frac{10^3 \cdot T_2 \cdot W_n^0}{6 \cdot L_r \cdot B_h \cdot U_g}\right)^{-1} \tag{4}$$

where T2 is the turn time, h; W_n^0 is the nominal combine harvester productivity, ton/h; Lr is the rut length, m; Bh is the header width, m; Ug is the crop productivity, ton/Ha.

The power for self-movement, h.p., is calculated by the formula

$$N_{mov} = \frac{G_{ch}^{op} \cdot f_{st} \cdot V}{3.6 \cdot 75} \tag{5}$$

where Gch^op is the combine harvester operating weight, kg; fst is the rolling-over-stubble coefficient, 0.13; V is the operating speed, km/h;


3.6 is the conversion coefficient from km/h to m/s; 75 is the conversion coefficient from kW to h.p.

The combine harvester shift productivity, ton/h, is determined by the formula

$$W_{sh} = W_W^0\left(\frac{1}{K_{sh}} + \frac{1}{K_2} - 1\right)^{-1} \tag{6}$$

where W_W^0 is the productivity per main-time hour, ton/h; Nen is the engine power, h.p.; Kr is the power reserve coefficient; Nmov is the power for self-movement, h.p.; N_sp^thr is the specific power for threshing, h.p.·h/ton; Ksh is the normative shift-time utilization coefficient.

The grain weight in the combine harvester grain tank, kg, is determined by the formula

$$G_g = V_{gt}\cdot\gamma_g \tag{7}$$

where Vgt is the grain tank volume, m³; γg is the grain density, kg/m³.

The change in grain losses behind the header depending on the working speed, %, obtained as a result of research [3], is described by the formula

$$H_h = a\cdot V \tag{8}$$

where V is the working speed, km/h; a is the coefficient of the linear dependence of grain losses behind the header on the working speed, established experimentally.

The power for threshing, h.p., is determined by the formula

$$N_{thr} = N_{en}\cdot K_r - N_{mov} \tag{9}$$

where Nen is the combine harvester engine power, h.p.; Kr is the power reserve coefficient; Nmov is the power for self-movement, h.p.

Total harvesting costs, RUB/ton, taking into account the quality of header work at different speeds, are taken as the optimization criterion:

$$C_h = S + F + R + D + L_h \tag{10}$$

where S is the machine operators' salary, RUB/ton; F is the fuel cost, RUB/ton; R is the repair cost, RUB/ton; D is the depreciation cost, RUB/ton; Lh is the wastage due to grain losses behind the header, RUB/ton.

The machine operators' salary, RUB/ton, is determined by the formula




$$S = \frac{T\cdot s}{W_{sh}} \tag{11}$$

where s is the hourly machine operator's wage, RUB/h; T is the number of machine operators; Wsh is the combine harvester shift productivity, ton/h.

The specific fuel consumption for self-movement, kg/ton, is

$$q_{mov} = \frac{N_{mov}\cdot K_2}{W_W^0} \tag{12}$$

where Nmov is the power for movement, h.p.; W_W^0 is the productivity per hour, ton/h; K2 is the combine harvester working strokes coefficient.

The fuel cost, RUB/ton, is determined by

$$F = (q_{thr} + q_{mov})\cdot P_f \tag{13}$$

where qthr is the specific fuel consumption for threshing the bread mass, kg/ton; qmov is the specific fuel consumption for self-movement, kg/ton;

$$q_{thr} = \frac{q_{mov}\cdot N_{mov}}{W_{sh}} \tag{14}$$

Pf is the diesel fuel price, RUB/kg.

The repair cost, RUB/ton, is determined by the formula

$$R = \frac{(P_{ch} + P_h)\cdot s_r}{W_{sh}\cdot T_{zon}} \tag{15}$$

where Pch is the cost of the combine harvester without header, thous. RUB; Ph is the header cost, thous. RUB; Tzon is the zonal combine harvester workload at harvesting, h; sr is the repair cost rate, %.

The depreciation cost, RUB/ton, is determined by

$$D = \frac{(P_{ch} + P_h)\cdot d}{W_{sh}\cdot T_{zon}} \tag{16}$$

where Pch is the combine harvester cost, RUB; Ph is the header cost, RUB; d is the depreciation rate, %; Wsh is the shift productivity, ton/h; Tzon is the zonal combine harvester workload at harvesting, h.

The wastage due to grain losses behind the header, RUB/ton, is determined by the formula

$$L_h = \frac{U_g\cdot H_h\cdot P_g}{100} \tag{17}$$

where Hh is the grain losses behind the header, %; Pg is the commercial grain selling price, RUB/ton.

Let us assume that some farm has a combine harvester that works in the set operating mode with specified normative indicators, and that the manufacturer produces headers of different widths for this combine harvester model. We accept the following condition: winter wheat productivity is considered at two levels, the achieved 5.0 ton/Ha and the predicted 7.0 ton/Ha. The indicators are calculated taking into account certain harvesting characteristics and combine harvesting productivity; after that the shift productivity is determined and the harvesting cost is calculated for the different header width variants. This constitutes the solution of the first task. However, answering the question of how many combine harvesters are required for a farm with a certain winter wheat area and productivity requires the calculation sequence of the last block.

An economic and mathematical model and PC software named «ECO-Adapter» were developed to solve this task [13, 14]. The software is used to calculate the economic efficiency of combine harvester operation with different header widths at grain harvesting and to perform a comparative analysis of the different header widths; up to four variants can be included in the comparative analysis. The software includes two catalogues, a header catalogue and a combine catalogue, which contain information about the technical tools, costs and other characteristics of combine harvesters and headers. The command «Task – New» in the main window menu creates a new task; the «Add new task» window then appears, where the following additional source data must be entered (a simplified computational sketch of the cost chain of Eqs. (2)–(17) is given after the list):

– fuel cost, RUB/kg;
– crop productivity, ton/Ha;
– grain density, kg/m³;
– typical rut length, m;
– turn time, min;
– rolling over stubble coefficient;
– shift time utilization coefficient;
– hourly machine operator's wage, RUB/h;
– repair cost, %;
– depreciation cost, %;
– commercial grain selling price, RUB/ton.
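The simplified Python sketch below strings together the cost chain of Eqs. (2)–(17) for several header widths. Every input value is an illustrative placeholder, the threshing fuel consumption is kept as a constant instead of Eq. (14), the shift-productivity term follows the reconstructed form of Eq. (6), and the figures it prints are not the ECO-Adapter results of Table 1.

```python
# Simplified sketch of the cost chain in Eqs. (2)-(17); every input value is an
# illustrative placeholder and this is NOT the ECO-Adapter program.
def direct_costs(Bh, Ug, W0n=14.0, Gch=16500.0, Gh=2500.0, Gg=7000.0,
                 T2=0.025, Lr=1500.0, fst=0.13, Ksh=0.7, a=0.12,
                 s=450.0, T=1, qthr=1.3, Pf=55.0, Pch=12e6, Ph=1.5e6,
                 sr=0.10, dep=0.12, Tzon=300.0, Pg=12000.0):
    V = W0n / (0.1 * Bh * Ug)                                  # Eq. (2), km/h
    Gop = Gch + Gh + Gg                                        # Eq. (3), kg
    K2 = 1.0 / (1.0 + (1e3 * T2 * W0n) / (6 * Lr * Bh * Ug))   # Eq. (4)
    Nmov = Gop * fst * V / (3.6 * 75)                          # Eq. (5), h.p.
    Wsh = W0n / (1 / Ksh + 1 / K2 - 1)                         # Eq. (6) as reconstructed, ton/h
    Hh = a * V                                                 # Eq. (8), % losses behind header
    S = T * s / Wsh                                            # Eq. (11), RUB/ton
    qmov = Nmov * K2 / W0n                                     # Eq. (12), kg/ton
    F = (qthr + qmov) * Pf                                     # Eq. (13); qthr kept constant here
    R = (Pch + Ph) * sr / (Wsh * Tzon)                         # Eq. (15), RUB/ton
    D = (Pch + Ph) * dep / (Wsh * Tzon)                        # Eq. (16), RUB/ton
    Lh = Ug * Hh * Pg / 100                                    # Eq. (17), RUB/ton
    return S + F + R + D + Lh                                  # Eq. (10): total costs, RUB/ton

for width in (6, 7, 9):
    print(width, "m header:", round(direct_costs(Bh=width, Ug=5.0), 1), "RUB/ton")
```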

The task report is generated and displayed in Excel; it can be formed and printed with the menu command «Report – Task» in the main form. Tasks with equal crop productivity values and different header widths aggregated with the same combine harvester are selected for the comparative analysis. Selected tasks are named «VARIANT 1», «VARIANT 2», «VARIANT 3» or «VARIANT 4» by right-clicking the required task line and choosing the variant number from the context menu. The selected variants are displayed in a yellow rectangle at the lower part of the main window. The comparative analysis report is


formed with the menu command «Report – Comparative analysis», after which it is displayed in Excel and is available for editing, saving and printing.

The header standard size optimization was made for the TORUM 740 combine harvester manufactured by Rostselmash Combine Plant, LLC, for header widths of 6, 7 and 9 m and winter wheat productivities of 5.0 and 7.0 ton/Ha. The optimization results are given in Table 1.

Table 1. Results of optimization.

No | Width of cut of header, m | Direct costs, RUB/ton (productivity 5.0 ton/Ha) | Direct costs, RUB/ton (productivity 7.0 ton/Ha)
1  | 6                         | 877.3                                           | 923.7
2  | 7                         | 829.5                                           | 870.1
3  | 9                         | 740.3                                           | 783.8

The calculation results can be presented as a graph taking into account the entered optimal working speed range limit of 3 to 7 km/h (Fig. 2). The choice of the optimal working speed range limit is based on the research results reviewed in [15].

Fig. 2. Selection of the optimal width of the header, taking into account operating speed limits.

The graph shows that an optimal combine thresher workload with minimal grain loss and crushing values is provided by choosing the optimal header width in the crop productivity range 2.5−9.5 ton/Ha, subject to the entered working speed range limit of 3−7 km/h. With the working speed limit, the thresher workload will be optimal for the 6 m header at a crop productivity of 4.6−9.8 ton/Ha, for the 7 m header at 4.2−9.0 ton/Ha, and for the 9 m header at 3.0−7.6 ton/Ha. All this indicates that the 7 m and 9 m headers cover a bigger crop productivity range. A forward-looking calculation for a 12 m header shows that its use in conjunction with a combine harvester of 12 kg/s capacity will be most optimal, under the working-speed-limited workload, at a crop productivity of 2.5−5.6 ton/Ha; under these conditions the combine harvester fully realizes its throughput potential. To work with a larger productivity range, the engine power of the unit must be increased.

4 Conclusions

Analysis of the obtained results allows the following conclusions to be drawn. A high-efficiency combine harvester of the TORUM 740 type should be equipped with a 9 m header for grain harvesting by direct combining at crop productivities of 5.0 ton/Ha and 7.0 ton/Ha. This will decrease harvesting costs, in comparison with the 6.0 m and 7.0 m headers, by 11−16% at a crop productivity of 5.0 ton/Ha and by 15% at 7.0 ton/Ha.

The economic and mathematical model for choosing the optimal header width differs from previously developed ones in that:
– operational and economic indicators are taken into account when choosing the optimal header width;
– the power for self-movement, which has a significant effect on bread mass threshing, is taken into account in the calculations;
– grain losses behind the header are taken into account.

Multi-year harvesting operation records for a number of Southern Federal District farms show that costs are in the range from 748.4 to 890.5 RUB/ton, so the average calculation error does not exceed 15%. This confirms the correctness of the optimization model and its suitability for modeling the optimal header width at winter wheat harvesting for two crop productivity levels, 5 and 7 ton/Ha. Combine harvesting with the optimal header width will decrease grain losses behind the header, the grain injury rate, and the power and fuel costs of heavy harvesting units used for realizing the nominal productivity; it will also improve the machine operator's working environment.

The use of the proposed economic and mathematical model for choosing the optimal header width will help to rationally choose a fleet of harvesting machines from the range of headers of different capture widths offered by manufacturers, taking into account the features and needs of the farm. Further studies will be aimed at studying the effect of the header width on grain losses and on the spread width of crushed straw across the width of the pass, as well as at the development of new computer programs and test equipment.


References

1. Zhalnin, E.V.: Matematicheskoye modelirovaniye protsessov zemledel'cheskoy mekhaniki [Mathematical modeling of agricultural mechanics processes]. Tractors Agric. Mach. 1, 20–23 (2000). (in Russian)
2. Lipkovich, E.I.: Osnovy matematicheskogo modelirovaniya sistemy mashin [Fundamentals of mathematical modeling of a machine system]. Povysheniye effektivnosti uborochnykh rabot. VNIPTIMESKH, Zernograd, pp. 13–22 (1984) (in Russian)
3. Tabashnikov, A.T.: Optimizatsiya uborki zernovykh i kormovykh kul'tur [Optimization of harvesting of grain and forage crops]. Agropromizdat, Moscow, 159 p. (1985) (in Russian)
4. Kavka, M., Mimra, M., Kumhala, F.: Sensitivity analysis of key operating parameters of combine harvesters. Res. Agric. Eng. 62(3), 113–121 (2016). https://doi.org/10.17221/48/2015-rae
5. Badretdinov, I., Mudarisov, S., Lukmanov, R., Permyakov, V.: Mathematical modeling and research of the work of the grain combine harvester cleaning system. Comput. Electron. Agric. 165, 104966 (2019). https://doi.org/10.1016/j.compag.2019.104966
6. Šotnar, M., Pospíšil, J., Mareček, J., Dokukilová, T., Novotný, V.: Influence of the combine harvester parameter settings on harvest losses. Acta Technol. Agric. 21(3), 105–108 (2018). https://doi.org/10.2478/ata-2018-0019
7. Chen, J., Wang, S., Lian, Y.: Design and test of header parameter keys electric control adjusting device for rice and wheat combined harvester. Trans. Chin. Soc. Agric. Eng. 34(16), 19–26 (2018). https://doi.org/10.11975/j.issn.1002-6819.2018.16.003
8. Liu, H., Reibman, A.R., Ault, A.C., Krogmeier, J.V.: Video-based prediction for header-height control of a combine harvester. In: IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), San Jose, CA, USA, pp. 310–315 (2019). https://doi.org/10.1109/mipr.2019.00062
9. Zhang, K., Shen, H., Wang, H., et al.: Automatic monitoring system for threshing and cleaning of combine harvester. IOP Conference Series: Materials Science and Engineering 452(4), 042124 (2018). https://doi.org/10.1088/1757-899X/452/4/042124
10. Shepelev, S.D., Shepelev, V.D., Almetova, Z.V., et al.: Modeling the technological process for harvesting of agricultural produce. IOP Conference Series: Earth and Environmental Science 115(1), 012053 (2018). https://doi.org/10.1088/1755-1315/115/1/012053
11. Almosawi, A.A.: Combine harvester header losses as affected by reel and cutting indices. Plant Archives 19, 203–207 (2019). http://www.plantarchives.org/PDF%20SUPPLEMENT%202019/33.pdf
12. Zhang, Y., Chen, D., Yin, Y., Wang, X., Wang, S.: Experimental study of feed rate related factors of combine harvester based on grey correlation. IFAC-PapersOnLine 51(17), 402–407 (2018). https://doi.org/10.1016/j.ifacol.2018.08.188
13. Reshettseva, I.A., Tabashnikov, A.T., Chaplygin, M.E.: Certificate of state registration of the computer program "ECO-Adapter" No. 2015613469. Registered in the Program Registry 17 March 2015 (in Russian)
14. Chaplygin, M.Y.: Ekonomiko-matematicheskaya model' optimizatsii tiporazmera khedera k zernouborochnomu kombaynu [Economic-mathematical model for optimizing the size of a header to a combine harvester]. Machinery and Equipment for Rural Area (2), pp. 23–24 (2012) (in Russian)
15. Chaplygin, M.Y.: Povishenie effektivnosti ispolzovaniya zernouborochnogo kombaina putem obosnovaniya optimalnoi shirini zahvata jatki dlya uslovii yuga Rossii [Improving the efficiency of the combine harvester by justification of the optimal header width for the conditions in southern Russia]. Dissertation, Candidate of Technical Sciences. Volgograd State Agrarian University, Moscow (2015) (in Russian)

An Integrated CNN-LSTM Model for Micro Hand Gesture Recognition

Nanziba Basnin 1, Lutfun Nahar 1 and Mohammad Shahada Hossain 2

1 International Islamic University Chittagong, Chittagong, Bangladesh
2 University of Chittagong, Chittagong, Bangladesh
[email protected]

Abstract. Vision-based micro gesture recognition systems enable the development of HCI (Human Computer Interaction) interfaces that mirror real-world experiences. It is unlikely that a single gesture recognition method will be suitable for every application, as each gesture recognition system relies on the user's cultural background and application domain. This research is an attempt to develop a micro gesture recognition system suitable for the Asian cultural context. However, hands vary in shape and size, while gestures vary in orientation and motion. For accurate feature extraction, deep learning approaches are considered. Here, an integrated CNN-LSTM (Convolutional Neural Network – Long Short-Term Memory) model is proposed for building a micro gesture recognition system. To demonstrate the applicability of the system, two micro hand-gesture datasets, a standard and a local one, each consisting of ten significant classes, are used. Moreover, the model is tested against both augmented and unaugmented datasets. The accuracy achieved for the standard data with augmentation is 99.0%, while the accuracy achieved for the local data with augmentation is 96.1% when applying the CNN-LSTM model. For both datasets, the proposed CNN-LSTM model performs better than other pre-trained CNN models, including ResNet, MobileNet, VGG16 and VGG9, as well as a CNN excluding LSTM.

Keywords: CNN-LSTM model · Augmented dataset · Unaugmented dataset · Micro hand gesture · Standard dataset · Local dataset

1 Introduction

A gesture symbolizes a posturized instance of the body through which some information is conveyed [3]. Gestures are usually categorized as macro and micro, where a macro gesture demonstrates the perpetuating motion of the hand in coordination with the body, while a micro gesture pictures the relative position of the fingers of the hand. This research makes use of micro gestures with static images. Hand gesture recognition systems are generally used to narrow the bridge between human and computer interactions (HCI). Human interactions through hand gestures sometimes involve one hand or both hands. An interactive machine which successfully mimics the natural way of human hand interaction can be developed [16]. The research presented in this paper is an attempt to


develop a classification tool enabling the accurate recognition of both single- and double-hand gestures. Thus, this paper focuses on building an effective tool for the recognition of hand gestures by introducing an integrated [17] CNN-LSTM (Convolutional Neural Network – Long Short-Term Memory) model. The CNN model is considered [13] because it can handle a large amount of raw data with comparatively little pre-processing effort [2], while the LSTM can better optimize the back-propagation. Therefore, the integration of CNN with LSTM [4] plays an important role in achieving better model accuracy [1]. This research is further enhanced by investigating the impact of data augmentation on the proposed CNN-LSTM model. To do this, two different datasets, namely standard and local, with augmentation as well as without augmentation, have been used to train the CNN-LSTM model. It is important to note that the local dataset contains classes of gestures of both hands, such as 'dua', 'namaste', 'prayer' and 'handshake', which are unique in the context of the Indian subcontinent. The accuracy of the proposed CNN-LSTM model has been compared with a CNN model excluding LSTM as well as with state-of-the-art pre-trained CNN models, including ResNet, MobileNet, VGG9 and VGG16, taking into account both augmented and unaugmented datasets. Real-time validation against the local dataset has also been carried out, which is discussed in the results section. To check the overfitting and underfitting aspects of the CNN-LSTM model, four-fold cross-validation has also been carried out.

The remainder of the paper is structured as follows. Section 2 analyses the different works related to hand gesture recognition. Section 3 outlines the methodology undertaken to develop the model. Section 4 presents the results, while Sect. 5 concludes the paper with a discussion on future research.

2 Related Work

There exists a plethora of research works in the area of hand gesture recognition systems. In [14], different classes of hand gestures were classified using CNN on augmented and unaugmented datasets. It was seen that the augmented dataset produced an accuracy of 97.18%, while the unaugmented dataset produced an accuracy of 92.91%. However, the recognition of gestures made with both hands was not considered in that research. In [19], Artificial Neural Networks (ANN) were used to classify ten different categories of hand gestures with an average accuracy of 98%. Due to the low computational cost of identifying the different categories of hand gestures, this method became a good candidate for real-time execution. However, the use of ANN requires pre-processing of the raw data and hence becomes computationally expensive. Another effective method [20] was proposed to recognize American Sign Language hand gestures for 26 different classes. Principal Component Analysis (PCA) was used to extract features from the dataset, and the extracted features were later fed into an ANN for training. This method produced an accuracy of 94.3%. In spite of this accuracy, the feature extraction technique applied in that paper was unable to extract features based on depth and hand direction. In [21], twenty-six different classes of the American Sign Language hand gesture dataset were trained using three different CNN models, each with one more hidden layer than the previous one. It was observed that increasing the number of hidden layers decreases the recognition rate of the CNN model while increasing its run time; consequently, testing accuracy appears to fall steeply from 91% to 83%. In another approach [24], a self-organizing and self-growing neural gas network along with a YCbCr color-space method for hand region detection was proposed. The data used in that paper consist of six different classes of gestures, and the classification produces an accuracy of 90.45%. Although it is fast to compute, the hand region detection method sometimes produces wrong feature detections. In [27], two different datasets were utilized, namely a self-constructed one and the Cambridge Hand Gesture Dataset. Each dataset consists of 1500 images, categorized into five different classes. The self-constructed dataset was acquired from images of both left and right hands separately, and was pre-processed using the Canny edge method, which removed the effect of illumination. In that study, a CNN architecture of five convolution layers was used to train each dataset. The method produced an accuracy of 94% and was found suitable for recognizing complex gestures.

It can be observed that none of the studies discussed above considered the classification of gestures made with both hands or context-based gestures such as 'dua', 'namaste', 'handshake' and 'prayer', resulting in inefficient nonverbal human–machine interaction. In addition, the use of CNN models produces better accuracy than other machine learning models such as ANN [19]. However, stacking CNN layers introduces problems with saturated neurons and vanishing gradients, which can correspond to poor testing accuracy [21]. Such a drawback of CNN models can be overcome by integrating them with a model like LSTM, because it reduces saturation of neurons and overcomes the vanishing gradient problem by optimizing back-propagation. Therefore, in this research an integrated CNN-LSTM model is proposed, which is described in the following section.

3 Methodology

Figure 1 illustrates the main components of the methodology used in this research to identify the different classes of hand gestures as mentioned previously. A brief discussion of each of the components is presented below.

Fig. 1. System Framework


3.1 Data Collection

As mentioned in Sect. 1, this research utilizes two datasets, namely a standard and a local one. The standard dataset is collected from a publicly available domain [10], as shown in Fig. 2. It is used as a benchmark dataset and requires no pre-processing. Two datasets are introduced in order to highlight a comparison between the standard dataset and the locally collected dataset. Moreover, this not only helps to deduce how the accuracy varies between double-handed and single-handed gestures but also shows how effective data pre-processing on the local dataset is in terms of accuracy. The size of the standard dataset is therefore kept similar to the size of the local dataset. It consists of 10,000 images, which are divided into ten different classes of hand gestures, namely fist, thumb, palm, palm moved, okay, index, fist moved, down, C and L. Each class consists of 1000 images. The model uses 7000 images for training and 3000 images for testing.

The local dataset was collected using a webcam. Ten people, with ethical approval, were used as subjects to build the dataset. Likewise, the local dataset consists of 10,000 images. The sample of the local dataset consists of ten gesture classes, of which five are associated with double-hand gestures, as illustrated in Fig. 3. These five classes, namely dua, handshake, namaste, pray and heart, are gestures used in the Indian subcontinent. The remaining five classes are single-hand gestures and consist of palm, fist, one, thumb and okay, as can be seen from Fig. 3. In this way, the local dataset demonstrates a heterogeneous combination of hand gesture classes which cannot be found in the standard dataset. Each class consists of 1000 images. As with the standard dataset, the model uses 7000 images from the local dataset for training and 3000 images for testing.

Fig. 2. Sample of Standard Dataset

Fig. 3. Sample of Local Dataset

3.2 Data Pre-processing

Since the standard dataset is already pre-processed, it can be passed directly into the CNN-LSTM model. To pre-process the local dataset, the steps shown in Fig. 4 are followed. Firstly, in order to extract the foreground, the background of the image is subtracted. Secondly, gray-scale conversion is applied to the image; the single-channel property of a gray-scale image allows the CNN model to attain a faster learning rate [9]. Thirdly, morphological erosion is applied [12]. Fourthly, the median filtering method reduces the noise in the image [28]. Finally, for the convenience of the CNN-LSTM model, the images are resized to 60 × 60 pixels.

Fig. 4. Steps of Data Pre-Processing
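A minimal OpenCV sketch of the five pre-processing steps of Fig. 4 is given below, assuming a stored background frame; the file names and kernel sizes are illustrative choices rather than the authors' exact settings.

```python
# Sketch of the pre-processing chain of Fig. 4 using OpenCV; the background
# frame and file paths are hypothetical.
import cv2
import numpy as np

def preprocess(frame_bgr, background_bgr):
    # 1. Background subtraction to isolate the hand in the foreground.
    fg = cv2.absdiff(frame_bgr, background_bgr)
    # 2. Gray-scale conversion (a single channel speeds up learning).
    gray = cv2.cvtColor(fg, cv2.COLOR_BGR2GRAY)
    # 3. Morphological erosion to remove small foreground specks.
    eroded = cv2.erode(gray, np.ones((3, 3), np.uint8), iterations=1)
    # 4. Median filtering to lower the remaining noise.
    denoised = cv2.medianBlur(eroded, 5)
    # 5. Resize to the 60 x 60 input expected by the CNN-LSTM model.
    return cv2.resize(denoised, (60, 60))

# Example usage with hypothetical image files:
# background = cv2.imread("background.jpg")
# frame = cv2.imread("gesture.jpg")
# sample = preprocess(frame, background)
```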

3.3 CNN-LSTM Integrated Model

The proposed CNN-LSTM model comprises four convolution layers, two max pooling layers, a time-distributed flattening layer, one hidden LSTM layer and an output layer. After every two convolution layers a dropout layer is added, and another dropout layer follows the LSTM layer. The use of dropout layers helps to address the over-fitting issue [23] by deactivating nodes; for example, a dropout value of 0.25 deactivates 25% of the nodes. The first input layer consists of 32 filters with a kernel size of 5 × 5. Zero padding is added to this input layer in order to attach a border of zeros to the input volume. The convolution layer generates a feature map which is fed into the next layer. In each convolution layer of the network, the Rectified Linear Unit (ReLU) activation function is used. ReLU is chosen because it not only outperforms other activation functions, including sigmoid and tanh, but also avoids the vanishing gradient problem; it introduces non-linearity into the otherwise linear process of convolution. A max pooling layer is added after every two convolution layers in the model. This layer extracts the maximum features from the feature map and fits each feature into a reduced window of 2 × 2, which not only narrows down the parameters required to train the model but also retains the most important features in the feature map. Next, a flattening layer is introduced after the convolution layers; it flattens the image into a linear array so that it can easily be fed into the neural network. The CNN layers are integrated with an LSTM layer of 200 nodes. While the CNN layers extract features from an image, the LSTM layer supports them by interpreting those features over distributed time steps. These interpretations give the model exposure to some complex temporal dynamics of imaging [6]; as a result, the perceptual representation of images in the convolutions becomes well defined. All the layers are connected to the output layer, which consists of ten nodes indicating the ten different classes of hand gestures used to train the model. A SoftMax activation function is included in this layer to obtain a probabilistic value for each of the classes; SoftMax is used in the very last layer of the network to provide a non-linear variant of multinomial logistic regression. Stochastic Gradient Descent (SGD) with a learning rate of 0.01 is used as the optimizer to compile the model. SGD improves optimization when working with large datasets, as it is computationally faster [7] and hence reduces the model loss. Table 1 summarizes the various components of the CNN-LSTM model.

Table 1. CNN-LSTM integrated configuration

Model content            | Details
1st convolution layer 2D | Input size 60 × 60, 32 filters of kernel size 5 × 5, zero padding, ReLU
2nd convolution layer 2D | 64 filters of kernel 3 × 3, ReLU
1st max pooling layer    | Pool size 2 × 2
Dropout layer            | Randomly deactivates 25% of neurons
3rd convolution layer 2D | 64 filters of kernel 3 × 3, same padding, ReLU
4th convolution layer 2D | 64 filters of kernel 3 × 3, ReLU
2nd max pooling layer    | Pool size 2 × 2
Dropout layer            | Randomly deactivates 25% of neurons
Flattening layer         | Time distributed
LSTM layer               | 200 nodes
Dropout layer            | Randomly deactivates 25% of neurons
Output layer             | 10 nodes, SoftMax
Optimization function    | Stochastic Gradient Descent (SGD)
Learning rate            | 0.01
Metrics                  | Loss, Accuracy

System Implementation

The data collection and real-time validation of the model are carried out in the Jupyter Notebook IDE. The training module is developed in Google Colab. This platform facilitates the execution of programs at run time and supports deep learning libraries by providing access to a powerful GPU (Graphics Processing Unit) and TPU (Tensor Processing Unit); moreover, the TPU comes with faster throughput. Python is supported in both the Jupyter and Google Colab environments. The libraries required for implementation include TensorFlow, Keras, OpenCV, scikit-learn, Matplotlib, Tkinter, PIL, Pandas and NumPy. Here, TensorFlow acts as the backend of the system, while Keras is used to build the CNN-LSTM classifier since it provides built-in functions for layering, optimization and activation; further, Keras contains advanced tools for feature extraction. OpenCV is required for image processing [18]. Scikit-learn gives access to many supervised and unsupervised algorithms. Matplotlib is used to generate the confusion matrix. Tkinter supports easy configuration of the user interface required for real-time data collection and validation. PIL is an integration tool for image processing, while NumPy is used to carry out operations on arrays [8]. Callbacks are used to train the model: they not only prevent the overfitting that occurs as a result of too many epochs but also avoid underfit models [11]. The callbacks implemented in this model include checkpoints, early stopping and reducing the learning rate on plateau. Checkpoints allow saving the best models by monitoring the validation loss. Once the model stops performing well on the validation dataset, early stopping is applied to stop the training epochs. Reducing the learning rate on plateau is used when the validation loss stops improving. Data augmentation is applied directly to the CNN-LSTM model through the Keras built-in data augmentation API, which enables generation of the dataset while training is carried out on the model [5, 29]. Parameters such as shifting, rotation, normalization and zoom are applied to induce robustness of the dataset. The augmented dataset expands the number of data points, reducing the distance between the training and testing datasets and thus decreasing overfitting on the training dataset.
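A minimal sketch of how these callbacks and the Keras augmentation API might be wired together is shown below; the monitored quantity, patience values, file name and augmentation parameters are illustrative assumptions rather than the authors' exact settings.

from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
from tensorflow.keras.preprocessing.image import ImageDataGenerator

callbacks = [
    ModelCheckpoint("best_model.h5", monitor="val_loss", save_best_only=True),  # checkpoints
    EarlyStopping(monitor="val_loss", patience=10, restore_best_weights=True),  # early stopping
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=5),               # reduce LR on plateau
]

augmenter = ImageDataGenerator(
    rotation_range=10,       # rotation
    width_shift_range=0.1,   # shifting
    height_shift_range=0.1,
    zoom_range=0.1,          # zoom
    rescale=1.0 / 255,       # normalization
)
# The augmented batches would still need a time axis added before being fed to the
# time-distributed CNN-LSTM model, e.g. images reshaped to (batch, timesteps, 60, 60, 1).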

3.5 Cross Validation

Cross-validation is carried out to assess the quality of the CNN-LSTM model, whose accuracy is greater than or equal to 99.0%. Four-fold cross-validation is performed: the dataset is divided into four equal folds (k-1, k-2, k-3, k-4), and in each experiment one fold of 2,500 images is used for testing while the remaining 7,500 images are used for training. For instance, in experiment 1 of Fig. 5, the first fold consists of 2,500 images for testing while the remaining three folds consist of 7,500 images for training.

Fig. 5. Demonstration of Cross Validation
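A hedged sketch of this 4-fold procedure with scikit-learn is given below; build_cnn_lstm refers to the hypothetical constructor sketched in Sect. 3.3, and the epoch count and expected array shapes (images with a time axis, one-hot labels) are assumptions.

import numpy as np
from sklearn.model_selection import KFold

def cross_validate(x, y, n_splits=4, epochs=50):
    # x: array of shape (n_samples, timesteps, 60, 60, 1); y: one-hot labels
    scores = []
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True).split(x):
        model = build_cnn_lstm(timesteps=x.shape[1])
        model.fit(x[train_idx], y[train_idx], epochs=epochs, verbose=0)
        _, acc = model.evaluate(x[test_idx], y[test_idx], verbose=0)
        scores.append(acc)
    return float(np.mean(scores))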


4 Result and Discussion

This section investigates the application of the developed CNN-LSTM model, taking account of both the standard and local datasets introduced in Sect. 3.

4.1 CNN-LSTM Model on Standard Dataset

Fig. 6. Confusion Matrix

Fig. 7. Model Accuracy

Fig. 8. Model Loss

Figure 6 illustrates the confusion matrix generated by applying the CNN-LSTM model to the augmented standard dataset. Taking account of the results presented in this figure, the accuracy of the hand gesture class named 'Palm Moved' is calculated as 99.2% by applying the equation mentioned in [26]. Figures 7 and 8 illustrate the model accuracy and model loss of the augmented standard dataset under the CNN-LSTM model. It can be seen from both curves that an early stop at 68 epochs has been achieved. The training and testing accuracies reach 99.1% and 99.2%, respectively. Hence, it can be concluded that the model is well fit.

4.2 CNN-LSTM Model on Local Dataset

Fig. 9. Confusion Matrix

Fig. 10. Model Accuracy

Fig. 11. Model Loss

Figure 9 illustrates the confusion matrix generated by applying the CNN-LSTM model to the augmented local dataset. Taking account of the results presented in this figure, the accuracies of the hand gesture classes named 'dua', 'namaste', 'heart' and 'handshake' are calculated as 90.0%, 99.3%, 93.8% and 99.8%, respectively, by applying the equation mentioned in [26]. Figures 10 and 11 illustrate the model accuracy and model loss of the augmented local dataset under the CNN-LSTM model. It can be seen from both curves that an early stop at 54 epochs has been achieved. The training and testing accuracies achieved are 95.0% and 96.1%, respectively. Hence, it can be concluded that the model is well fit.

4.3 Comparison of Results

Table 2. Classification results of the CNN-LSTM model on the augmented datasets
Performance measure | Standard dataset | Local dataset
Precision | 0.9909 | 0.8444
Recall | 0.9907 | 0.8110
F1-score | 0.9908 | 0.8224

Table 3. Classification results of the CNN-LSTM model on the unaugmented datasets
Performance measure | Standard dataset | Local dataset
Precision | 0.7298 | 0.6894
Recall | 0.6559 | 0.6133
F1-score | 0.6909 | 0.6492

To evaluate each model used in this research, performance metrics such as precision, recall, F1-score and accuracy are used. These metrics are computed from quantities such as TP (true positives), TN (true negatives), FP (false positives) and FN (false negatives): precision penalizes false positive predictions, recall penalizes false negatives, the F1-score balances both, and accuracy focuses on the true positive and true negative values in order to measure how often the classes are identified correctly (their definitions are recalled just below). Table 2 provides the classification results of the CNN-LSTM model for the augmented data of both the standard and local datasets, while Table 3 provides the results for the unaugmented data. From both tables it can be observed that the evaluation metric values for augmented data are much higher than those for unaugmented data. Table 4 illustrates a comparison between the proposed CNN-LSTM model and four pre-trained CNN models, namely ResNet, VGG9, VGG16 and MobileNet, on the augmented local dataset; in addition, it compares the CNN-LSTM model with the CNN model excluding LSTM on the same dataset. It can be observed from the evaluation metric values that ResNet performs poorly. It can also be observed that none of the pre-trained CNN models are well fit: the testing accuracies of VGG9 and VGG16 are much higher than their training accuracies, meaning they are underfitted, while the testing accuracy of MobileNet is much lower than its training accuracy, meaning it is overfitted. The CNN model excluding LSTM also appears overfitted, since its testing accuracy is much lower than its training accuracy. Thus, it can be concluded that none of these models are well fit. On the other hand, the proposed CNN-LSTM model is well fit since its testing and training accuracies are very close. Table 5 illustrates the same comparison on the augmented standard dataset. ResNet demonstrates poor performance in comparison to the other CNN models because of its poor training accuracy. Both MobileNet and the plain CNN suffer from overfitting because their testing accuracies are much lower than their training accuracies. In contrast, VGG9 and VGG16 are well-fit models; however, the training accuracy of VGG9 is only 64%, and although VGG16 is well fit, its accuracy is lower than that of the proposed CNN-LSTM model. Therefore, it can be concluded that the proposed CNN-LSTM model is well fit and its performance in terms of accuracy is better than that of the other models.
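For reference, these metrics are defined from the confusion-matrix counts as:

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1-score = 2 · Precision · Recall / (Precision + Recall)
Accuracy = (TP + TN) / (TP + TN + FP + FN)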

Table 4. Comparison between the CNN-LSTM model and other CNN models on the augmented local dataset
CNN model | Precision | Recall | F1-score | Train Acc. | Test Acc.
ResNet | 0.03 | 0.12 | 0.04 | 0.10 | 0.12
VGG9 | 0.11 | 0.10 | 0.10 | 0.27 | 0.72
MobileNet | 0.44 | 0.28 | 0.24 | 0.40 | 0.28
CNN | 0.80 | 0.80 | 0.81 | 0.89 | 0.75
VGG16 | 0.87 | 0.86 | 0.86 | 0.87 | 0.97
CNN-LSTM | 0.83 | 0.81 | 0.82 | 0.96 | 0.95

Table 5. Comparison between the CNN-LSTM model and other CNN models on the augmented standard dataset
CNN model | Precision | Recall | F1-score | Train Acc. | Test Acc.
ResNet | 0.02 | 0.11 | 0.03 | 0.10 | 0.11
VGG9 | 0.09 | 0.09 | 0.08 | 0.64 | 0.64
MobileNet | 0.57 | 0.36 | 0.32 | 0.43 | 0.36
CNN | 0.98 | 0.98 | 0.98 | 0.98 | 0.88
VGG16 | 0.98 | 0.97 | 0.98 | 0.98 | 0.96
CNN-LSTM | 0.99 | 0.99 | 0.99 | 0.99 | 0.99

4.4 Cross Validation

Cross-validation is carried out on the augmented standard dataset in order to justify the 99.0% accuracy. The average training accuracy is 96.0%, while the average testing accuracy is around 98.0%; the testing accuracy is therefore higher than the training accuracy, which means our proposed CNN-LSTM model is well fit. The average testing loss is 19.0% while the average training loss is around 26.0%, so the testing loss is lower than the training loss (Table 6).

Table 6. Cross-validation results of testing versus training accuracy and loss
Dataset | Test accuracy | Train accuracy | Test loss | Train loss
k-1 | 0.96 | 0.97 | 0.30 | 0.15
k-2 | 0.99 | 0.98 | 0.10 | 0.20
k-3 | 0.98 | 0.96 | 0.20 | 0.40
k-4 | 0.97 | 0.92 | 0.15 | 0.30
Average | 0.98 | 0.96 | 0.19 | 0.26

4.5 Real Time Validation

The CNN-LSTM model has been validated against real-world data [25]. In doing so, real-time unknown images of hand gestures were fed into the model. From Fig. 12 it can be observed that the model can accurately identify one of the classes of hand gesture, named 'dua'.

Fig. 12. Sample Clip of Real Time Validation Program


5 Conclusion and Future Work

In this research, two datasets were used for vision-based micro hand gesture recognition, namely a standard and a local dataset. The local dataset is a heterogeneous dataset which includes five double-hand gesture classes out of the ten classes. These five classes include gestures used in the Indian sub-continent; thus, a socio-physical approach is taken into account. The CNN-LSTM model proposed in this study can accurately classify the micro gestures used in both datasets. Besides, the CNN-LSTM model has been compared with four pre-trained models, namely ResNet, MobileNet, VGG9 and VGG16, and with the CNN model excluding LSTM. It has been demonstrated in the result section that the CNN-LSTM model outperforms the other pre-trained models in terms of accuracy for both the standard and local datasets. Therefore, the system recognizes not only one-hand gestures but also two-hand gestures. In future, this research aims to build a dynamic dataset by involving a greater number of micro gesture classes in the context of Asia, as well as collecting images from a greater number of subjects. Environmental factors such as the intensity of light, a variety of image backgrounds and skin tone can be considered while building the dataset. Such a robust dataset will add complexity to the recognition task; this complexity can be overcome if the ensemble learning methodology proposed in [22] is considered, which will not only result in better accuracy but also help in selecting the optimal learning algorithm.

References 1. Ahmed, T.U., Hossain, M.S., Alam, M.J., Andersson, K.: An integrated cnn-rnn framework to assess road crack. In: 22nd International Conference on Computer and Information Technology (ICCIT), pp. 1–6. IEEE (2019) 2. Ahmed, T.U., Hossain, S., Hossain, M.S., ul Islam, R., Andersson, K.: Facial expression recognition using convolutional neural network with data augmentation. In: Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), pp. 336–341. IEEE (2019) 3. Akoum, A., Al Mawla, N., et al.: Hand gesture recognition approach for asl language using hand extraction algorithm. J. Softw. Eng. Appl. 8(08), 419 (2015) 4. Basnin, N., Hossain, M.S., Nahar, L.: An integrated cnn-lstm model for bangla lexical sign language recognition. In: Proceedings of 2nd International Conference on Trends in Computational and Cognitive Engineering (TCCE-2020) Springer Joint 8th International Conference on Informatics (2020) 5. Chowdhury, R.R., Hossain, M.S., ul Islam, R., Andersson, K., Hossain, S.: Bangla handwritten character recognition using convolutional neural network with data augmentation. In: Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), pp. 318–323. IEEE (2019) 6. Donahue, J., Anne Hendricks, L., Guadarrama, S., Rohrbach, M., Venugopalan, S., Saenko, K., Darrell, T.: Long-term recurrent convolutional networks for visual recognition and description. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2625–2634 (2015)


7. Goyal, P., Dollar, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y., He, K.: Accurate, large minibatch sgd: training imagenet in 1 hour. arXiv preprint arXiv:1706.02677 (2017) 8. Greenfield, P., Miller, J.T., Hsu, J., White, R.L.: Numarray: a new scientific array package for python. PyCon DC (2003) 9. Grundland, M., Dodgson, N.A.: Decolorize: fast, contrast enhancing, color to grayscale conversion. Pattern Recogn. 40(11), 2891–2896 (2007) 10. Gti: Hand gesture recognition database (2018), https://www.kaggle.com/gtiupm/ leapgestrecog 11. Gulli, A., Pal, S.: Deep learning with Keras. Packt Publishing Ltd (2017) 12. Haralick, R.M., Sternberg, S.R., Zhuang, X.: Image analysis using mathematical morphology. IEEE Trans. Pattern Anal. Mach. Intell. 4, 532–550 (1987) 13. Hossain, M.S., Amin, S.U., Alsulaiman, M., Muhammad, G.: Applying deep learning for epilepsy seizure detection and brain mapping visualization. ACM Trans. Multimedia Comput. Commun. Appl. (TOMM) 15(1s), 1–17 (2019) 14. Islam, M.Z., Hossain, M.S., ul Islam, R., Andersson, K.: Static hand gesture recognition using convolutional neural network with data augmentation. In: Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), pp. 324–329. IEEE (2019) 15. Islam, R.U., Hossain, M.S., Andersson, K.: A deep learning inspired belief rulebased expert system. IEEE Access 8, 190637–190651 (2020) 16. Jalab, H.A.: Static hand gesture recognition for human computer interaction. Inf. Technol. J. 11(9), 1265 (2012) 17. Kabir, S., Islam, R.U., Hossain, M.S., Andersson, K.: An integrated approach of belief rule base and deep learning to predict air pollution. Sensors 20(7), 1956 (2020) 18. Nandagopalan, S., Kumar, P.K.: Deep convolutional network based saliency prediction for retrieval of natural images. In: International Conference on Intelligent Computing & Optimization. pp. 487–496. Springer (2018) 19. Nguyen, T.N., Huynh, H.H., Meunier, J.: Static hand gesture recognition using artificial neural network. J. Image Graph. 1(1), 34–38 (2013) 20. Nguyen, T.N., Huynh, H.H., Meunier, J.: Static hand gesture recognition using principal component analysis combined with artificial neural network. J. Autom. Control Eng. 3(1), 40–45 (2015) 21. Oyedotun, O.K., Khashman, A.: Deep learning in vision-based static hand gesture recognition. Neural Comput. Appl. 28(12), 3941–3951 (2017) 22. Oz¨o˘g¨ur-Aky¨uz, S., Otar, B.C., Atas, P.K.: Ensemble cluster pruning via convex- concave programming. Comput. Intell. 36(1), 297–319 (2020) 23. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929– 1958 (2014) 24. Stergiopoulou, E., Papamarkos, N.: Hand gesture recognition using a neural network shape fitting technique. Eng. Appl. Artif. Intell. 22(8), 1141–1158 (2009) 25. Uddin Ahmed, T., Jamil, M.N., Hossain, M.S., Andersson, K., Hossain, M.S.: An integrated real-time deep learning and belief rule base intelligent system to assess facial expression under uncertainty. In: 9th International Conference on Informatics, Electronics & Vision (ICIEV). IEEE Computer Society (2020) 26. Wang, W., Yang, J., Xiao, J., Li, S., Zhou, D.: Face recognition based on deep learning. In: International Conference on Human Centered Computing. pp. 812– 820. Springer (2014)


27. Yingxin, X., Jinghua, L., Lichun, W., Dehui, K.: A robust hand gesture recognition method via convolutional neural network. In: 6th International Conference on Digital Home (ICDH), pp. 64–67. IEEE (2016) 28. Zhu, Y., Huang, C.: An improved median filtering algorithm for image noise reduction. Phys. Procedia 25, 609–616 (2012) 29. Zisad, S.N., Hossain, M.S., Andersson, K.: Speech emotion recognition in neurological disorders using convolutional neural network. In: International Conference on Brain Informatics. pp. 287–296. Springer (2020) 30. Zivkovic, Z., Van Der Heijden, F.: Efficient adaptive density estimation per image pixel for the task of background subtraction. Pattern Recogn. Lett. 27(7), 773–780 (2006)

Analysis of the Cost of Varying Levels of User Perceived Quality for Internet Access

Ali Adib Arnab1, Sheikh Md. Razibul Hasan Raj1, John Schormans2, Sultana Jahan Mukta3, and Nafi Ahmad2

1 University of Global Village, Barisal, Bangladesh
[email protected]
2 Queen Mary University of London, London, UK
3 Islamic University, Kushtia, Bangladesh

Abstract. Quality of Service (QoS) metrics deal with network quantities, e.g. latency and loss, whereas Quality of Experience (QoE) provides a proxy metric for end-user experience. Many papers in the literature have proposed mappings between various QoS metrics and QoE. This paper goes further in providing analysis for QoE versus bandwidth cost. We measure QoE using the widely accepted Mean Opinion Score (MOS) rating. Our results naturally show that increasing bandwidth increases MOS. However, we extend this understanding by providing analysis for internet access scenarios, using TCP, and varying the number of TCP sources multiplexed together. For these target scenarios our analysis indicates what MOS increase we get by further expenditure on bandwidth. We anticipate that this will be of considerable value to commercial organizations responsible for bandwidth purchase and allocation. Keywords: Mean Opinion Score (MOS) · Quality of Experience (QoE) · Bandwidth · Bandwidth cost · Quality of Service (QoS)

1 Introduction

Quality of Experience (QoE) has a significant but complex relationship with Quality of Service (QoS) and its underlying factors [1]. The Mean Opinion Score (MOS) ranges from 1 to 5 (Bad to Excellent) and represents QoE [2]. Although considerable work has been carried out in papers like [3–5], in this paper we consider QoE within a budget, with numerous metrics such as PLP, bandwidth, round trip time, the TCP sending rate factor, packet buffer length and packet bottleneck capacity. From curve fitting, we obtained an analytical expression relating bandwidth and bandwidth cost. The goodness of fit is assessed with SSE, R-square, Adjusted R-square and RMSE. Consequently, we found one equation with the variables MOS and bandwidth and another with the variables bandwidth and bandwidth cost. The analysis has been performed multiple times for varying numbers of TCP sources. The major objective of this research is to identify the mathematical relationship between MOS and bandwidth cost.

2 Related Work

Well-grounded bandwidth anticipated by the Internet Protocol is a pivotal requirement for advanced services, which also include the reliability amplification arranged by TCP [6]. That work identifies QoS (Quality of Service) as a widely considered issue worldwide: to offer more reliable bandwidth, TCP-based network traffic keeps increasing its sending rate, and packet loss can result from this, which is analyzed during QoS model improvement [7]. The author implies that it is also possible for bottleneck links to have high utilization without any packet loss. Roshan et al. [8] evaluate the relationship between the QoS metric packet loss probability (PLP) and the user-perceived Quality of Experience (QoE) for video on demand (VoD). QoS has major implications from a policy standpoint [9]. The authors describe a present situation where users acquire internet service at a fixed price; a simple monthly subscription fee, or a price according to connection time, is common [10]. Dial-up modem, Ethernet, cable Ethernet or DSL are utilized to provide bandwidth, and it is clear from the authors' analysis that either no bandwidth guarantee or only statistical bandwidth guarantees can be offered. Conventional Internet applications mainly utilize TCP as their transport layer protocol [11], where packet loss is used as a sign of congestion: as soon as the sender experiences network congestion, TCP responds quickly by reducing its transmission rate multiplicatively [12]. The UK is ranked third in terms of the services and metrics included in the analysis in [13], followed by France and Germany. In that report, the UK occupies third place in a table of the weighted average and lowest available basket prices across the covered services, which is used here in analyzing bandwidth cost at varying levels. The USA is the most expensive regarding both average and lowest available prices across all services, while France is the least expensive. The UK's highest ranking is for the mobile phone service price and the triple-play bundle price, in which category the UK is ranked second, followed by France. There are numerous analyses of QoE, QoS and their relationship with bandwidth, whereas our paper provides the relationship between QoE and bandwidth cost. This can cover the business aspects of many organizations, and it also gives a clear idea of how much QoE can be achieved within an allocated budget, which has not yet been discussed in the papers mentioned.

2.1 Abbreviations and Acronyms

Mean Opinion Score and Its Indicator. Mean Opinion Score is broadly known as MOS, and has become a widespread perceived media quality indicator [12]. MOS (also redundantly known as the MOS score) is one way of classifying the quality of a phone call. The score is set on a five-point scale, shown in Table 1 below. A MOS of 4.0 or higher is toll quality. Once within the building, enterprise voice quality patrons generally expect constant quality while using their telephone [14].

Table 1. MOS Score vs Performance
MOS Score | Performance
5 | Excellent
4 | Good
3 | Fair
2 | Poor
1 | Bad

3 Method

Our first requirement is to determine PLP and MOS for different numbers of TCP sources, round trip times, bottleneck capacities in packets and buffer lengths. We take buffer length values from 10 to 1000 [8]. We then obtain a MOS vs bandwidth graph for different settings of buffer length. Different countries pay different amounts of money for internet access; it largely depends on the internet providers, facilities, availability of internet access, networking and telecommunication products, infrastructure, etc. For example, someone in North America may not be paying the same as someone in Africa. We considered values for the UK, which give us a range of bandwidths and the estimated average cost for that bandwidth. From our analysis we can obtain a graph of bandwidth versus bandwidth cost. Our goal is to relate the MOS score and bandwidth cost analytically. The next step was to use the existing formulas for PLP and MOS again. Initially we used bottleneck capacity as one of the parameters to determine PLP. We obtained a formula for MOS and bandwidth cost, plotted various bandwidth cost values and evaluated MOS scores against those values, obtaining a curve similar to the one we previously obtained for MOS vs bandwidth. If we want to differentiate between big organizations and small family houses we need to modify our formula for MOS by changing the number of TCP sources: in big organizations the number of TCP sources will be large, while for a small household it can be much smaller. We modify the number of TCP sources in our initial formula for MOS and obtain one graph of MOS vs bandwidth cost for a larger number of TCP sources and one for a smaller number of TCP sources.

3.1 Determining Parameter Values for Experimental Analysis

An existing formula for PLP in TCP has been used to determine the Mean Opinion Score (MOS) [15]. MOS was plotted against bandwidth, and bandwidth has been plotted against cost. The relationship of MOS and cost has been determined from the MOS vs bandwidth and bandwidth vs cost plots.


To determine the values of MOS we require PLP values. According to the analytical expression and performance evaluation of TCP in [15], the packet loss probability is given by:

Pi = 32N^2 / (3b(m + 1)^2 (C·RTT + Q)^2)    (1)

where
N = number of TCP sources = 50
C = bottleneck capacity in packets per second = 12500
b = number of packets acknowledged by an ACK packet = 1
m = factor by which the TCP sending rate is reduced = 1/2
RTT = round trip time = 0.1 s
Q = packet buffer length

According to [10] we can obtain MOS from PLP as shown in Eq. (2) below:

MOS = 1.46 ∗ exp(−44 ∗ PLP) + 4.14 ∗ exp(−2.9 ∗ PLP)    (2)

Using different buffer lengths we get values for PLP and MOS (see Table 2); the Q value has been taken from 10 to 1000. With the same packet buffer length and the bandwidth increasing by 15 Mbps each time, from 15 to 120 Mbps, we evaluate PLP and MOS in Table 3. Bandwidth and cost pricing differ worldwide. We can get an estimate of bandwidth vs cost for the United Kingdom, which is used as sample data here as the unit of analysis [13]:

bandwidth = [10 30 50 100 200 400 600 800 1000] Mbps
Cost = [20 37 40 42 43 45 46 46 46] Dollars
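A short Python sketch of how Eqs. (1) and (2) can be evaluated is shown below; NumPy is an assumed tooling choice here (the authors report using MATLAB), and the default arguments simply mirror the parameter list above.

import numpy as np

def plp(N=50, C=12500, b=1, m=0.5, rtt=0.1, Q=10):
    # Packet loss probability, Eq. (1)
    return (32 * N**2) / (3 * b * (m + 1)**2 * (C * rtt + Q)**2)

def mos(p):
    # Mean Opinion Score from PLP, Eq. (2)
    return 1.46 * np.exp(-44 * p) + 4.14 * np.exp(-2.9 * p)

for Q in (10, 100, 200, 400, 600, 800, 1000):
    p = plp(Q=Q)
    print(Q, round(p, 6), round(mos(p), 6))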

Table 2. Q, PLP and MOS values for different Q
Q | PLP | MOS
10 | 7.465263e−01 | 4.750997e−01
100 | 6.503074e−01 | 6.280123e−01
200 | 5.637028e−01 | 8.073142e−01
400 | 4.353297e−01 | 1.171447e+00
600 | 3.462922e−01 | 1.516566e+00
800 | 2.820191e−01 | 1.827308e+00
1000 | 2.341107e−01 | 2.099708e+00

Determining Formula for Bandwidth and Bandwidth Cost by Curve Fitting

To obtain bandwidth vs cost curve, we need a specific formula for bandwidth and cost which is obtained by curve fitting. f (x) = axb

(3)

Analysis of the Cost

397

Table 3. Q, bandwidth, PLP and MOS value with same Q Q Bandwidth PLP

MOS

10 15

6.503074e−01 6.280123e-01

10 30

1.753233e−01 2.490591e+00

10 45

7.995852e−02 3.326486e+00

10 60

4.556652e−02 3.824152e+00

10 75

2.939265e−02 4.202314e+00

10 90

2.051913e−02 4.492741e+00

10 105

1.513212e−02 4.712481e+00

10 120

1.161832e−02 4.878501e+00

Coefficients (with 95% confidence bounds) a= 27.13 (22.65, 31.61) b= 0.0986 (0.06944, 0.1279) The values of a or b were provided by the curve fitting as the best fit for the graph we got for bandwidth vs cost. Confidence bounds value are: Goodness of fit: SSE: 38.48 R-square 0.9589 Adjusted R-square 0.953 RMSE 2.345 See Table 4 below where goodness of fit parameters are shown. From Table 5 and 6 we know the method named in curve fitting is NonlinearleastSquares which is taken automatically by the characteristics, shape and number of squares present in the curve. Among Interpolant, Linear fitting, Polynomial, Rational, Sum of shine, Smoothing Spline, Weibull, Exponential, Gaussian, Fourier we selected Power1 since it gives better visualization and better goodness of fit prediction. The prediction includes calculation of SSE, R-square, Adjacent Rsquare, RMSE, coefficient with confidence bound. The robust option is enabled and Least Absolute Residuals (LAR) shows more accurate results than Bi Square. LAR mainly focuses on the contrast between residuals rather than squares and Bi-Square reduces weight of the sum of the squares which is necessary for our curve’s case. Table 4. Goodness of fit parameters for curve fitting Fit name Data R-square SSE Adj R-sq RMSE Fit 1 38.4806 0.9530

Fit type DFE Coeff

Cost vs bandwidth Power1 0.9589 7 2.3446 2

398

A. A. Arnab et al. Table 5. Method and Algorithm for curve fitting Method

NonlinearleastSquares

Robust

LAR

Algorithm

Trust Region

DiffMinChange 1.0e-8 DiffMaxChange 0.1 MaxFunEvals

600

Maxlter

400

TolFun

1.0e-6

TolX

1.0e-6

Table 6. Method and Algorithm for curve fitting Coefficient StartPoint Lower Upper a

9.2708

−lnf

lnf

b

0.3019

−lnf

lnf

We will take bandwidth as ‘x’ which lies X axis and bandwidth cost as ‘f(x)’ which lies in Y axis. We can acquire values for ‘a’ and ‘b’ from curve fitting tool which is visible in Fig. 1. After that by implementing the Eq. (3) we obtain Eq. (4) below: (4) Cost = 27.13 ∗ bandwidth0.0986

Fig. 1. Setup for applying Curve fitting formula from Cost vs bandwidth to obtain a formula for MOS vs bandwidth cost

Analysis of the Cost

3.3

399

Implementing MOS and Cost Relationship by Eliminating Bandwidth from Bandwidth vs Cost Formula

By taking different values of cost we can get different values of bandwidth and if we replace bandwidth from Eq. (6) we can get different values for cost and MOS. The MATLAB code for MOS vs Bandwidth cost is shown in Fig. 2. Bottleneck capacity is given by Eq. (5) below:

Fig. 2. MOS vs bandwidth cost relationship

C=

bandwidth ∗ 1000000 12000

(5)

MOS and Bandwidth. From Eq. (1), (2) and (5) we obtained relationship between MOS and bandwidth which is Eq. (6): −44∗

11851.851 0.01( BW.1000 )2 +2( BW.1000 )+100 12 12

−2.9∗

11851.851 0.01( BW.1000 )2 +2( BW.1000 )+100 12 12

M OS = 1.46 ∗ e + 4.14 ∗ e

(6)

MOS and Bandwidth Cost. If we put bandwidth cost in the place of bandwidth by help of Eq. (4) we obtain the following relationship which is Eq. (7):

400

A. A. Arnab et al.

M OS = 1.46 ∗ e + 4.14 ∗ e 3.4

−44∗

−2.9∗

√ Cost1185151.1

√ Cost

√ Cost1185151.1

√ Cost

0.01(83.33 0.0986

0.01(83.33 0.0986

0.0986 2 27.13 ) +2(83.33

27.13

)2 +2(83.33 0.0986

27.13 )+100

(7)

27.13 )+100

MOS and Bandwidth Cost

To evaluate MOS with different numbers of TCP sources we changed the values for N. We took 2 sample values of N which are 80 and 500. 80 TCP sources mainly represent a small building or bandwidth use for family purposes. 500 TCP sources represents bigger companies and organizations. To get MOS and cost relationship, we took a value of N=80 and 500 instead of 50 which provided us a different PLP and a different MOS. The bandwidth and cost relationship remains same as before because it is here seen to have nothing to do with the number of TCP sources. We were able to obtain different MOS and bandwidth formula and different MOS and Cost formula and get output for different number of TCP sources. For N=80, MOS and Cost relationship is obtained in Eq. (8):

M OS = 1.46 ∗ e + 4.14 ∗ e

−44∗

−2.9∗

√ Cost30340.740

√ Cost

√ Cost1185151.1

√ Cost

0.01(83.33 0.0986

0.01(83.33 0.0986

0.0986 2 27.13 ) +2(83.33

27.13

)2 +2(83.33 0.0986

27.13 )+100

(8)

27.13 )+100

Similarly for N=500, we can obtain Eq. (9):

M OS = 1.46 ∗ e + 4.14 ∗ e 4 4.1

−44∗

−2.9∗

√ Cost11851.51

√ Cost

√ Cost1185151.1

√ Cost

0.01(83.33 0.0986

0.01(83.33 0.0986

0.0986 2 27.13 ) +2(83.33 0.0986 2 27.13 ) +2(83.33

27.13 )+100

(9)

27.13 )+100

Results and Testing Plotting MOS vs Bandwidth (Bps) with Different Packet Buffer Lengths

Our initial formula for PLP provides a relationship between packet loss probability and various parameters including the number of TCP sources, bottleneck capacity, round trip time, number of packets acknowledged by an ACK packet, factor by which TCP sending rate is reduced and packet buffer length. We can determine the PLP by taking sample data for this parameter. The MATLAB code and formula for PLP is discussed in Sect. 3.1. Then we calculated MOS from PLP. Figure 3 is MOS vs bandwidth (Mbps) with different packet buffer lengths.

Analysis of the Cost

401

Fig. 3. MOS vs bandwidth (Bps) with different packet buffer lengths

4.2

Plotting of MOS vs Bandwidth (Mbps) with Constant Packet Buffer Length Output

If we keep changing buffer length it is very difficult to evaluate MOS change with the effects of bandwidth change. So by keeping the same buffer length, Q which is 10, we can obtain a MOS vs bandwidth curve. So it is quite evident from Fig. 4 that when we increase bandwidth, MOS is also increasing proportionally bandwidth. As a sample when bandwidth is 50 Mbps, MOS is 3.5 (approximately), when bandwidth is 100 Mbps MOS is 4.5 (approximately) and when bandwidth is 150 Mbps MOS is close to 5. 4.3

Plotting of Bandwidth vs Bandwidth Cost Relationship

We took some different parameters for bandwidth cost in the UK. If we look at the Fig. 5 we can see initially the cost increases within a limited range of bandwidth. In this case even for 20 Mbps the bandwidth price is somewhere close to 35£ per month. The rate increases until 100 Mbps, from the graph customer has to pay nearly 43£ for 100 Mbps per month. Beyond 100 Mbps the cost increment is very slow. From 100 Mbps to 1000 Mbps cost only increases from 43£ to 46£ per month which is incredibly low. So we can draw conclusions about how much it is worth spending on bandwidth before we run into a law of diminishing returns.

402

A. A. Arnab et al.

Fig. 4. MOS vs bandwidth (Mbps) with same packet buffer length

Fig. 5. Bandwidth vs bandwidth cost relationship

Analysis of the Cost

403

Fig. 6. MOS vs bandwidth cost relationship

4.4

Plotting of MOS vs Bandwidth Cost Relationship Output

Initially MoS is very low with low cost. That indicates there is a certain amount of money a customer needs to pay initially to get internet access. A customer paying 38£ can obtain quality of experience of 2 MoS. Where a customer paying 45£ is receiving far better quality of experience, MoS is close to 5 according to Fig. 6. The experiment makes sense if we compare broadband prices in the UK. For example, Broadband provider named ‘Now Broadband’ offers 11 Mbps in 18£ per month in the ‘Brilliant Broadband’ plan. The Same broadband provider is offering 36 Mbps speed in 24£ per month in the ‘Fab Fibre’ plan. So the bandwidth is increasing more than 3 times while cost is only increasing by £ 6 [16]. 4.5

Plotting of MOS and Bandwidth Cost Output with Different TCP Sources

Initially we took the number of TCP sources to be 50. If we change the TCP sources we obtain different outputs and results. There are 2 output graphs in Fig. 7. The first one in Fig. 7 is calculated by taking 80 TCP sources and second one is calculated taking 500 TCP sources. If we take more TCP sources the quality increases more rapidly with the price increase compared to taking fewer TCP sources. Big organizations usually have TCP sources and so that they have the luxury of getting better quality of experience within the same cost but after crossing initial price barrier. In small household fewer TCP sources are most

404

A. A. Arnab et al.

Fig. 7. MOS vs bandwidth Cost Output with different TCP sources.

likely used which also experiences a rise in MOS which provides good quality within a price range but that is less than that seen in big organizations.

5

Discussion and Further Work

Prior work in the field has shown that QoE has a complex relationship with QoS factors like packet loss probability (PLP), delay or delay jitter. Furthermore, contemporary analyses have indicated that the relationship between these QoS factors and QoE can significantly vary from application to application. In this paper we take prior analyses of the relationship between the key QoS metric of packet loss probability and QoE and target an internet access scenario. We use this relationship to show how QoE (measured as MOS) varies as more money is spent on the bandwidth of the internet access link. Our results target two different scenarios – a small number of multiplexed TCP sources and a relatively large number of multiplexed TCP sources. We show that increase in MOS is not a linear with increasing spent on bandwidth at all, and considering these two different we are able to resolve the highly non-linear fashion in which MOS does increase with spend on bandwidth.

References 1. Quality of Experience Paradigm in Multimedia Services. ScienceDirect (2017) 2. Streijl, R.C., Winkler, S., Hands, D.S.: Mean opinion score (MOS) revisited: methods and applications, limitations and alternatives. SpringerLink (2014) 3. Ahmad, N., Schormans, J., Wahab, A.: Variation in QoE of passive gaming video streaming for different packet loss ratios. In: QoMEX 2020- The 12th International Conference on Quality of Multimedia Experience, Athlone (2020) 4. Ahmad, N., Wahab, A., Schormans, J.: Importance of cross-correlation of QoS metrics in network emulators to evaluate QoE of video streaming applications. Bordeaux (2020)

Analysis of the Cost

405

5. Vasant, P., Zelinka, I., Weber, G. W.: Intelligent computing & optimization. in SpringerLink (2018) 6. Camp, L.J., Gideon, Carolyn.: Limits to Certainty in QoS Pricing and Bandwidth. 22 July 2002. https://dspace.mit.edu/handle/1721.1/1512. Accessed 2 May 2020 7. Xiao, X.: Technical, commercial and regulatory challenges of QoS. Amsterdam [u.a.]: Elsevier/Morgan Kaufmann, p.30. Zarki, M. (2019). QoS and QoE (2008) 8. Roshan, M., Schormans, J., Ogilvie, R.: Video-on-demand QoE evaluation across different age-groups and its significance for network capacity. EAI Endorsed Transactions on Mobile Communications and Applications, no. EAI (2018) 9. Gideon, C. (2002). Limits To Certainty in QoS Pricing and bandwidth. http://hdl. handle.net/1721.1/1512. Accessed 11 Aug 2019 10. Internet Cost Structures and Interconnection Agreements. The journal of electronic publishing (1995) 11. Joutsensalo, J., H¨ am¨ al¨ ainen, T.D., Siltanen, J., Luostarinen, K.: Delay guarantee and bandwidth allocation for network services. In: Next Generation Internet Networks, 2005 (2005) 12. Aragon, J.C.: Analysis of the correlation between packet loss and network delay and their impact in the performance of surgical training applications. In: Semantic Scholar (2006) 13. Ofcom (2017). International Communications Market Report 2017. London. https://www.ofcom.org.uk/research-and-data/multi-sector-research/cmr/cmr2017/international. Accessed 7 Aug 2019 14. Expert System (2017). What is Machine Learning? A definition - Expert System. Expertsystem.com. https://www.expertsystem.com/machine-learning-definition/. Accessed 6 Aug 2019 15. Bisio, I., Marchese, M.: Analytical expression and performance evaluation of TCP packet loss probability over geostationary satellite. IEEE Commun. Lett. 8(4), 232–234 (2004) 16. Cable.co.uk (2019). Best Broadband Deals August 2019 — Compare Broadband Offers - Cable.co.uk. Cable. https://www.cable.co.uk/broadband/. Accessed 16 Aug 2019

Application of Customized Term Frequency-Inverse Document Frequency for Vietnamese Document Classification in Place of Lemmatization Do Viet Quan and Phan Duy Hung(&) FPT University, Hanoi, Vietnam [email protected], [email protected]

Abstract. Natural language processing (NLP) is a problem which attracts lots of attention from researchers. This study analyzes and compares a different method to classify text sentences or paragraphs in Vietnamese into different categories. The work utilizes a sequence of techniques for data-preprocessing, customize learning model and methods before using Term Frequency-Inverse Document Frequency (TF-IDF) for model training. This classification model could contribute positively to many Vietnamese text-analyzing based businesses, such as social network, e-commerce, or data mining in general. This problem’s challenge relies on two main aspects: the Vietnamese language itself and current NLP researches for the Vietnamese language. The paper utilizes the pros of many different classification methods to provide better accuracy in text classification. Keywords: NLP Lemmatization

 Text-classification  Vietnamese  IF-TDF  POS-tag 

1 Introduction Vietnamese is a special language whether how it was formed and developed, or how it is expressed and used. The language itself is ambiguous, but we can find various reasons as why Vietnamese is not a simple language to be processed by a typical NLP model such as deep learning model [1], thus we need customization. As a historical result, Vietnamese vocabulary is majorly structured from words derived from Chinese, notably words in scientific or politic domains, namely Han-Viet (Vietnamese words taken from Chinese since the Han-Dynasty era), which existence takes roughly 70% [2]. Moreover, because of French colonization, Vietnamese had taken in a numerous loanword from French language in multiple form. Recently, many words from English have joined the Vietnamese vocabulary family, either being translated word-by-word or being used directly as a part of Vietnamese [2]. NLP researches in Vietnam have been greatly delayed compare to what it could have reached, as they have been made without a common, academic direction, there are some notable projects but still lack the systematic approach to the some important issues (text summarization, context based intelligent search, etc.). The lack of © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 406–417, 2021. https://doi.org/10.1007/978-3-030-68154-8_37

Application of Customized Term Frequency-Inverse Document Frequency

407

inheritable researches also hinders the followers to continue researching. Although some methods have been implemented for Vietnamese, NLP in general and text classification in specific still face several issues and can mostly work in formal document situations. Inheriting from this vocabulary issue, we have stated the following reasons that make Vietnamese unable to efficiently utilize typical NLP pre-processing methods [3] and they are available in Table 1. Table 1. Reasons why Vietnamese are relatively easy for human to learn, but cause difficulties for Machine Learning. Short words

No gender No plural, no cases No articles No verb conjugation

For human Easy to learn

Easy to Easy to listen Easy to listen Easy to

learn speak, hard to speak, hard to learn

Simple verb tenses

Easy to speak, hard to comprehend

Tenses are optional

Easy to speak, hard to comprehend

No agreement

Faster to learn, less knowledge to remember Easy to understand compound words if one understands each element Hard to distinguish

Single syllable words

Homophones: due to singlesyllable characteristics, in Vietnamese, homophones also mean words that are written identically

For NLP efficiency Repetitive structure, hence, many synonyms and homophones Inefficient lemmatization No lemmatization needed, thus no lemmatization No lemmatization needed Difficult to differentiate between tenses or nouns’ genres Shift the complicating task(s) onto the listening side instead of the speaking side, thus increasing NLP difficulties Shift the complicating task(s) onto the listening side instead of the speaking side, thus increasing NLP difficulties No explicit relationship in terms of data between words in a sentence Increase the difficulties in distinguishing if a word is a compound word or it is multiple separated words Hard to distinguish

These characteristics effectively reduce the typical Stemming and Lemmatization steps’ methods, and this will also affect the accuracy of POS-tagging task. Many

408

D. V. Quan and P. D. Hung

Vietnamese NLP experts have been working to address these steps with different approaches. This project does not have the ambition to overcome all these challenges by itself, we will implement some works of other experts before adding our contribution. Our work concern in the differentiating potential of synonyms and classifiers in Vietnamese. Ho Le et al. conducted a systematic research using Vectorized IF-TDF. The authors prepared their dataset for training and archived an average impressive result of 86,7%, and peak result of 92.8%. We follow their team’s footsteps [4] while conducting this paper. Using several different libraries, our average result provides comparable performance with faster processing speed, and minor accuracy rate improvement. The following sections are ordered as: Sect. 2 describes our methodology, Sect. 3 explains and describes the pre-processing step; we provide some case studies in Sect. 4; and finally, Sect. 5 is reserved for conclusions and what to do next.

2 Methodology As we mentioned in Sect. 1, Vietnamese lemmatization is not as effective using the same method of lemmatization for Indo-European languages. The Vietnamese language consists of mostly single-syllable words, which have meaning on their own, depending on contexts. Additionally, when the Vietnamese language needs to express a more complicated meaning, it uses compound words, but, by any mean, still being written by two or more single-syllables words next to each other (e.g.: “anh em” is compounded from “anh” – big brother and “em” – little brother, which means “brothers”) [5]. This explains why, homophones in Vietnamese are more common than Indo-European languages, because Vietnamese syllables consist of 5 components: initial sounds, rhymes (main sounds, ending sounds and accompaniments) and tones [2]. However, unlike the Indo-European languages, since Vietnamese does not transform, there is only one type of homonym which make homophones to be written and spoken identically. This phenomenon makes Vietnamese having a high number of words which have different meaning, but are written identically (for single-syllables) or partially identically (for unit(s) in a compound word), just like “giáp” may mean “12 years”, “armor”, or “beside” depending on context, or “tiền tuyến” (frontline) has one common unit with “tiền bạc” (money) [5]. With the rule of combining phonetics, theoretically Vietnamese can create over 20,000 different syllables, but in practice, it only uses about 6000. Digging deeper into the study of Ho Le et al. [4], we reckon that more than 20% of Vietnamese vocabulary is occupied by homophones, among which, 73.6% are single-syllable words, 25.9% are compound words which are created using 2 single-syllable words, effectively occupy 99.56% of all cases in Vietnamese Dictionary, edition of 2006 as stated in Table 2.

Application of Customized Term Frequency-Inverse Document Frequency

409

Table 2. Number of Homophone cases in Vietnamese (2006). Statistic of Homophone in Vietnamese, according to Vietnamese Dictionary [5] Single-syllable word Compound words Coincident Same From 2 units From 3 units From 4 units origin 1913 807 Coincident Same Coincident Same Coincident Same origin origin origin 282 673 04 03 02 07 2720 cases 955 cases 07 cases 09 cases 3691 cases

Looking further into the characteristic of these cases in [2] Sect. 2.1.2, we gathered the statistic of these cases by Parts of Speech (POS) as presented in Table 3.

Table 3. Number and percentage of Homophone distributed onto POS. Same POS Different POS Noun Verb Adjective 2 POS Number 111 38 53 Noun Noun Verb Adjective 202 279 245 Percentage 22.69% 31.35% 27.53%

3 POS Adjective Verb 114 12.81%

Others Total 36 14 4.04% 1.57%

Moreover, most of Vietnamese homophones consist a small number of different meanings (01−06 meanings). Among these, 77.31% (rounded number) are Homophones with different POS, meaning if a word is detected to have different POS in different contexts, it will highly occur that they have a different meaning. This effectively replaces traditional lemmatizing methods, specifically in text classification, as 2 words can be synonyms but are used in 2 different genres of text (e.g.: “sư phụ” as master in ancient time, and “thầy giáo” as teacher in more modern time can have a similar meaning, but exist and be used in two totally different text genre) [6]. Following these conclusions, we decide to combine a word with its POS to give it a unique ID, thus effectively distinguish between homophones while preserving the difference between synonyms. The execution of this theory will be presented in the following sections.

410

D. V. Quan and P. D. Hung

3 Data Collection and Pre-processing 3.1

Data Collection

To perform the experiment, we have prepared a dataset (modest size) using several books and multiple collected articles, taken from various fields as data source, as following: • • • • • • •

Lord of the ring story [7] A collection of Grim fairy tales [8] Diary of a Cricket [9] A collection of 109 Vietnamese fables [10]. A collection of 80 articles about the world’s politic, written in 2019 and 2020 [11, 12]. A collection of 80 articles about Vietnam’s showbiz, written in 2019 and 2020 [12]. A collection of 80 articles about Vietnam’s economics, written in 2019 and 2020 [11, 13].

Each of these categories consists of 18,000 to 22,000 sentences. Selected data are versatile enough to contain different writing styles, purposes, and vocabularies. Moreover, the chosen articles and stories all talk about the developing stories of one or more subjects (instead of a more specific matter such as scientific documents) so that it can show us the ability to distinguish between different contexts of our method. 3.2

Data Processing

The dataset is manually prepared, and trim lined using the following steps to clean up and for consistency sake: The dataset is copied and process semi-manually, going through some preprocessing tasks. First step, we filter and delete unnecessary special characters, unprintable characters. Since the data come from various sources (including from websites, pdf files, etc.), we needed to filter all the unprintable characters such as pilcrows, spaces, non-breaking spaces, tabs, etc. Next, we remove all formatting and unnecessary line chunking, since there are some parts where the dialog gets displayed as a separate line and it is not consistent. Into the next steps, we fix the content if needed. Throughout Vietnamese language history, there are multiple “popular” way of writing and it is correct in their own era. For example, we had two ways of writing words with phonetics: “hòa bình” and “hoà bình” (peace). We needed to unify how words are written across all documents to ensure accuracy. Along the way, we also fix all detected typos. The expectation from this step is to make all words under the same set of standards, which reduce the risk of mis-interpreting words meaning or its nature. Next step, we remove redundant spaces, dots, and commas. Afterward, we modify the dataset to standardize keep-able special characters (such as: “…” for dialogs and [1] for annotations). It is important to note that all this step is done automatically instead of manually. These special characters are kept distinguishing between different categories of document, for example, scientific documents often use “[…]” for references, but this does not appear as much in others.

Application of Customized Term Frequency-Inverse Document Frequency

411

Lastly, we reordered these data onto files to represent the following classes: • • • • • • •

Western fantasy novel (Lord of the ring) Western fairy tale (Grimm stories) Vietnamese fairy tale and stories for kids (Diary of a Cricket, Vietnamese fables) World’s politic Vietnam’s showbiz Vietnam’s economics Others

It would be acceptable if in our test run, we find out that a text can exist in more than one category, for example, Vietnam’s economics matters may coerce with World’s politic event(s) in one way or another, and a predict in either one of these categories can be considered acceptable.

4 Data Training To train the collected data, first we perform sentences tokenizing, using the sentenceboundary-detection based on Naïve Bayes algorithm by Vuong Quoc Binh and Vu Anh [4]. The result of this step is one text file for each document, with each sentence in a separated line, for example, with the story “Diary of a Cricket” we have a chunk of result as sampled in Table 4.

Table 4. Sample output of Sentence Tokenizing step performed on “Diary of a Cricket”. … tôi là em út, bé nhất nên được mẹ tôi sau khi dắt vào hang, lại bỏ theo một ít ngọn cỏ non trước cửa, để tôi nếu có bỡ ngỡ, thì đã có ít thức ăn sẵn trong vài ngày rồi mẹ tôi trở về tôi cũng không buồn trái lại, còn thấy làm khoan khoái vì được ở một mình nơi thoáng đãng, mát mẻ tôi vừa thầm cảm ơn mẹ, vừa sạo sục thăm tất cả các hang mẹ đưa đến ở khi đã xem xét cẩn thận rồi, tôi ra đứng ở ngoài cửa và ngửng mặt lên trời qua những ngọn cỏ ấu nhọn và sắc, tôi thấy màu trời trong xanh tôi dọn giọng, vỗ đôi cánh nhỏ tới nách, rồi cao hứng gáy lên mấy tiếng rõ to từ đây, tôi bắt đầu vào cuộc đời của tôi …

Afterward, we perform words tokenizing step combining with POS-tagging, we loop through each sentence and execute the POS-tagging method of Vu Anh et al. [4] utilize Conditional Random Fields. The output of POS-tagging is a python array where each element contains a pair of tokenized word (single or compound) with its respectively POS-tag, available in Table 5.

412

D. V. Quan and P. D. Hung Table 5. Sample output of Word Tokenizing step performed on “Diary of a Cricket”.

… [(‘tôi’, ‘P’), (‘là’, ‘V’), (‘em út’, ‘N’), (‘,’, ‘CH’), (‘bé’, ‘N’), (‘nhất’, ‘A’), (‘nên’, ‘C’), (‘được’, ‘V’), (‘mẹ’, ‘N’), (‘tôi’, ‘P’), (‘sau’, ‘N’), (‘khi’, ‘N’), (‘dắt’, ‘V’), (‘vào’, ‘E’), (‘hang’, ‘N’), (‘,’, ‘CH’), (‘lại’, ‘R’), (‘bỏ’, ‘V’), (‘theo’, ‘V’), (‘một ít’, ‘L’), (‘ngọn’, ‘Nc’), (‘cỏ’, ‘N’), (‘non’, ‘A’), (‘trước’, ‘E’), (‘cửa’, ‘N’), (‘,’, ‘CH’), (‘để’, ‘E’), (‘tôi’, ‘P’), (‘nếu’, ‘C’), (‘có’, ‘V’), (‘bỡ ngỡ’, ‘A’), (‘,’, ‘CH’), (‘thì’, ‘C’), (‘đã’, ‘R’), (‘có’, ‘V’), (‘ít’, ‘A’), (‘thức ăn’, ‘N’), (‘sẵn’, ‘A’), (‘trong’, ‘E’), (‘vài’, ‘L’), (‘ngày’, ‘N’), (‘.’, ‘CH’)] [(‘rồi’, ‘C’), (‘mẹ’, ‘N’), (‘tôi’, ‘P’), (‘trở về’, ‘V’), (‘.’, ‘CH’)] [(‘tôi’, ‘P’), (‘cũng’, ‘R’), (‘không’, ‘R’), (‘buồn’, ‘V’), (‘.’, ‘CH’)] [(‘trái lại’, ‘N’), (‘,’, ‘CH’), (‘còn’, ‘C’), (‘thấy’, ‘V’), (‘làm’, ‘V’), (‘khoan khoái’, ‘N’), (‘vì’, ‘E’), (‘được’, ‘V’), (‘ở’, ‘V’), (‘một mình’, ‘X’), (‘nơi’, ‘N’), (‘thoáng đãng’, ‘V’), (‘,’, ‘CH’), (‘mát mẻ’, ‘N’), (‘.’, ‘CH’)] [(‘tôi’, ‘P’), (‘vừa’, ‘R’), (‘thầm’, ‘A’), (‘cảm ơn’, ‘V’), (‘mẹ’, ‘N’), (‘,’, ‘CH’), (‘vừa’, ‘R’), (‘sạo sục’, ‘V’), (‘thăm’, ‘V’), (‘tất cả’, ‘P’), (‘các’, ‘L’), (‘hang’, ‘N’), (‘mẹ’, ‘N’), (‘đưa’, ‘V’), (‘đến’, ‘V’), (‘ở’, ‘V’), (‘.’, ‘CH’)] [(‘khi’, ‘N’), (‘đã’, ‘R’), (‘xem xét’, ‘V’), (‘cẩn thận’, ‘A’), (‘rồi’, ‘T’), (‘,’, ‘CH’), (‘tôi’, ‘P’), (‘ra’, ‘V’), (‘đứng’, ‘V’), (‘ở’, ‘E’), (‘ngoài’, ‘E’), (‘cửa’, ‘N’), (‘và’, ‘C’), (‘ngửng mặt’, ‘V’), (‘lên’, ‘V’), (‘trời’, ‘N’), (‘.’, ‘CH’)] [(‘qua’, ‘V’), (‘những’, ‘L’), (‘ngọn’, ‘Nc’), (‘cỏ’, ‘N’), (‘ấu’, ‘N’), (‘nhọn’, ‘A’), (‘và’, ‘C’), (‘sắc’, ‘V’), (‘,’, ‘CH’), (‘tôi’, ‘P’), (‘thấy’, ‘V’), (‘màu’, ‘N’), (‘trời’, ‘N’), (‘trong’, ‘E’), (‘xanh’, ‘A’), (‘.’, ‘CH’)] [(‘tôi’, ‘P’), (‘dọn giọng’, ‘V’), (‘,’, ‘CH’), (‘vỗ’, ‘V’), (‘đôi’, ‘M’), (‘cánh’, ‘N’), (‘nhỏ’, ‘A’), (‘tới’, ‘E’), (‘nách’, ‘N’), (‘,’, ‘CH’), (‘rồi’, ‘C’), (‘cao hứng’, ‘V’), (‘gáy’, ‘V’), (‘lên’, ‘V’), (‘mấy’, ‘L’), (‘tiếng’, ‘N’), (‘rõ’, ‘A’), (‘to’, ‘A’), (‘.’, ‘CH’)] [(‘từ’, ‘E’), (‘đây’, ‘P’), (‘,’, ‘CH’), (‘tôi’, ‘P’), (‘bắt đầu’, ‘V’), (‘vào’, ‘E’), (‘cuộc đời’, ‘N’), (‘của’, ‘E’), (‘tôi’, ‘P’), (‘.’, ‘CH’)] …

Note that we perform this step for all words, including proper nouns referring to an individual person, place, or organization. These are valuable data that help with document classification; for example, the names of several presidents help to conclude that a given document is politics-related. Moreover, special characters are also POS-tagged: dialogues keep their respective quotation marks, and the end of a sentence can be a period or a quotation mark depending on the nature of the sentence itself. Preserving this information helps to indicate whether a certain category includes a lot of dialogue (by counting quotation marks), or that brackets are mostly used in scientific articles and books and only sometimes in novels. Next, we transform these data into a combined state, where each item consists of a word and its POS-tag, which we call a CWP (combined word with POS-tag). Following what we have been studying, this combined item differentiates itself from its homophones with an effectiveness of 77.31% or better. The output of this step should look like the sample presented in Table 6.


Table 6. Sample output of POS-tag and Word combining step performed on “Diary of a Cricket”. … tôi__P là__V em_út__N,__CH bé__N nhất__A nên__C được__V mẹ__N tôi__P sau__N khi__N dắt__V vào__E hang__N,__CH lại__R bỏ__V theo__V một_ít__L ngọn__Nc cỏ__N non__A trước__E cửa__N,__CH để__E tôi__P nếu__C có__V bỡ_ngỡ__A,__CH thì__C đã__R có__V ít__A thức_ăn__N sẵn__A trong__E vài__L ngày__N.__CH rồi__C mẹ__N tôi__P trở_về__V.__CH tôi__P cũng__R không__R buồn__V.__CH trái_lại__N,__CH còn__C thấy__V làm__V khoan_khoái__N vì__E được__V ở__V một_mình__X nơi__N thoáng_đãng__V,__CH mát_mẻ__N.__CH tôi__P vừa__R thầm__A cảm_ơn__V mẹ__N,__CH vừa__R sạo_sục__V thăm__V tất_cả__P các__L hang__N mẹ__N đưa__V đến__V ở__V.__CH khi__N đã__R xem_xét__V cẩn_thận__A rồi__T,__CH tôi__P ra__V đứng__V ở__E ngoài__E cửa__N và__C ngửng_mặt__V lên__V trời__N.__CH qua__V những__L ngọn__Nc cỏ__N ấu__N nhọn__A và__C sắc__V,__CH tôi__P thấy__V màu__N trời__N trong__E xanh__A.__CH tôi__P dọn_giọng__V,__CH vỗ__V đôi__M cánh__N nhỏ__A tới__E nách__N,__CH rồi__C cao_hứng__V gáy__V lên__V mấy__L tiếng__N rõ__A to__A.__CH từ__E đây__P,__CH tôi__P bắt_đầu__V vào__E cuộc_đời__N của__E tôi__P.__CH …

Since Vietnamese compound words are multiple single-syllable words written next to each other, we use the underscore character “_” to join their units and a double underscore to attach the POS-tag. Thus, the noun “thức ăn” (food) is presented as “thức_ăn__N”. The final output of this step is a list of all tokenized sentences, each with a label stating which category it belongs to. For the sake of simplicity, we write these data to a separate file for each book or article.

The fourth step is where we merge the output files of the third step into two large datasets: one with CWP and one without. This way we can compare the difference between bare-bone training and the CWP method. These datasets are written to separate files with the same structure: a csv file with two columns, Encoded Label and Processed sentence. Encoded Labels are used to differentiate between the classes to be classified and to improve processing performance. We have used the sample Encoded Labels set in Table 7.

Table 7. Encoded labels.
Category                                      Label
Vietnamese fairy tale and stories for kids    1
Vietnamese History books                      2
Western fantasy novel                         3
Vietnamese news articles                      4
World news articles                           5
Others                                        100
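A minimal sketch of the CWP construction and dataset-merging steps just described; the helper names and the output file name are hypothetical, while the csv layout (Encoded Label, Processed sentence) follows the description in the text.

```python
import csv

def to_cwp(word, tag):
    """Join a (possibly compound) word with its POS-tag, e.g. ('thức ăn', 'N') -> 'thức_ăn__N'."""
    return word.replace(" ", "_") + "__" + tag

def sentence_to_cwp(tagged_sentence):
    """Turn a list of (word, tag) pairs into one space-separated CWP string."""
    return " ".join(to_cwp(w, t) for w, t in tagged_sentence)

def write_dataset(rows, path):
    """rows: iterable of (encoded_label, processed_sentence) pairs; writes the merged csv dataset."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["Encoded Label", "Processed sentence"])
        writer.writerows(rows)

# Example: label 1 = Vietnamese fairy tale and stories for kids (Table 7).
tagged = [("tôi", "P"), ("là", "V"), ("em út", "N")]
write_dataset([(1, sentence_to_cwp(tagged))], "dataset_with_cwp.csv")  # hypothetical file name
```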


Step five, we perform TD-IDF vectorizer by building up weight matrix. This step evaluates the relevance of each CWP toward its sentence and its Encoded Label. We will perform this step on top of the training dataset, consisting 80% number of rows in the combined dataset in step four. The output of this step is presented in Table 8. Table 8. Sample output of Sentence Tokenizing step performed on “Diary of a Cricket”. … ‘tôi__p’: 4068, ‘sống__v’: 3289, ‘từ__e’: 4223, ‘bé__n’: 218, ‘__ch’: 48, ‘một__m’: 2327, ‘sự__n’: 3331, ‘đáng__v’: 4694, ‘suốt__a’: 3209, ‘đời__n’: 4933, ‘ấy__p’: 4973, ‘là__v’: 1935, ‘lâu_đời__a’: 1966, ‘trong__e’: 3807, ‘họ__p’: 1560, ‘nhà__n’: 2562, ‘chúng_tôi__p’: 559, ‘vả_lại__c’: 4378, ‘mẹ__n’: 2301, ‘thường__r’: 3556, ‘bảo__v’: 281, ‘rằng__c’: 3117, ‘phải__v’: 2877, ‘như__c’: 2603, ‘thế__p’: 3617, ‘để__e’: 4858, ‘các__l’: 715, ‘con__n’: 673, ‘biết__v’: 138, ‘một_mình__x’: 2331, ‘cho__e’: 467, ‘quen__v’: 2946, ‘đi__v’: 4644, ‘con_cái__n’: 678, ‘mà__c’: 2194, ‘cứ__r’: 894, ‘vào__e’: 4307, ‘bố_mẹ__n’: 358, ‘thì__c’: 3510, ‘chỉ__r’: 623, ‘sinh__v’: 3180, ‘ra__v’: 3037, ‘tính__v’: 4056, ‘xấu__a’: 4522, ‘lắm__r’: 2075, ‘rồi__c’: 3128, ‘ra_đời__v’: 3044, ‘không__r’: 1720, ‘gì__p’: 1269, ‘đâu__p’: 4711, ‘bởi__e’: 383, ‘nào__p’: 2685, ‘cũng__r’: 786, ‘vậy__p’: 4393, ‘đẻ__v’: 4841, ‘xong__v’: 4474, ‘là__c’: 1933, ‘có__v’: 746, ‘ba__m’: 88, ‘anh_em__n’: 69, ‘ở__v’: 4993, ‘với__e’: 4426, ‘hôm__n’: 1482, ‘tới__e’: 4203, ‘thứ__n’: 3687, ‘trước__n’: 3885, ‘đứa__nc’: 4942, ‘nửa__n’: 2781, ‘lo__v’: 1896, ‘vui__a’: 4294, ‘theo__v’: 3379, ‘sau__n’: 3164, ‘dẫn__v’: 1033, ‘và__c’: 4301, ‘đem__v’: 4635, ‘đặt__v’: 4836, ‘mỗi__l’: 2322, ‘cái__nc’: 720, ‘hang__n’: 1318, ‘đất__n’: 4807, ‘ở__e’: 4992, ‘bờ__n’: 382, ‘ruộng__n’: 3072, ‘phía__n’: 2844, ‘bên__n’: 223, ‘kia__p’: 1766, ‘chỗ__n’: 641, ‘trông__v’: 3864, ‘đầm__n’: 4815, ‘nước__n’: 2722, ‘đã__r’: 4718, ‘đắp__v’: 4830, ‘thành__v’: 3452, ‘bao_giờ__p’: 103, ‘nhất__a’: 2618, ‘nên__c’: 2697, ‘được__v’: 4775, ‘khi__n’: 1658, ‘dắt__v’: 1041, ‘lại__r’: 2044, ‘bỏ__v’: 352, ‘một_ít__l’: 2336, ‘ngọn__nc’: 2517, ‘cỏ__n’: 862, ‘non__a’: 2679, ‘trước__e’: 3884, ‘cửa__n’: 902, ‘nếu__c’: 2750, ‘ít__a’: 4596, ‘thức_ăn__n’: 3693, ‘sẵn__a’: 3281, ‘vài__l’: 4302, ‘ngày__n’: 2449, ‘trở_về__v’: 3955, ‘buồn__v’: 167, ‘còn__c’: 743, ‘thấy__v’: 3588, ‘làm__v’: 1936, ‘vì__e’: 4319, ‘nơi__n’: 2721, ‘vừa__r’: 4437, ‘thầm__a’: 3589, ‘cảm_ơn__v’: 821, ‘thăm__v’: 3544, ‘tất_cả__p’: 4133, ‘đưa__v’: 4764, ‘đến__v’: 4848, ‘xem_xét__v’: 4462, …
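As a rough sketch of this vectorization step, assuming scikit-learn's TfidfVectorizer and the 80/20 split described above; the paper does not name a specific library, so the estimator, file name, and parameters here are illustrative.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split

# Hypothetical merged dataset from the fourth step.
data = pd.read_csv("dataset_with_cwp.csv")

# 80% of the rows form the training dataset used to build the weight matrix.
train_x, test_x, train_y, test_y = train_test_split(
    data["Processed sentence"], data["Encoded Label"], test_size=0.2)

vectorizer = TfidfVectorizer()
train_matrix = vectorizer.fit_transform(train_x)   # TF-IDF weight matrix over CWP terms
test_matrix = vectorizer.transform(test_x)

print(list(vectorizer.vocabulary_.items())[:5])    # CWP term -> column index (cf. Table 8)
print(train_matrix[:2])                            # (row, column) -> weight entries (cf. Table 9)
```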

The TF-IDF vocabulary can now be accessed; its top rows are shown in Table 9 as sample data. We can now look at some cases where CWP effectively differentiates homophones, as sampled in Table 10.


Table 9. TF-IDF vocabulary.
(0, 4644)  0.168109877554555
(0, 4254)  0.4531296121077332
(0, 2872)  0.3682155978026012
(0, 2595)  0.35653421293953347
(0, 2383)  0.36033217381957333
(0, 1496)  0.21122947912465437
(0, 804)   0.34709010848646565
(0, 703)   0.43111976142881975
(0, 48)    0.15139449063208063
(1, 4718)  0.18776376846938878
(1, 3367)  0.39076469627344607
(1, 3102)  0.3703815706378775
(1, 1784)  0.33743975469057436
(1, 1749)  0.3143520631846064
(1, 1720)  0.16931995401726685
(1, 1235)  0.4196143541973651
(1, 745)   0.30462437923547336
(1, 185)   0.40110986375766133
…

Table 10. Sampling processed data – Homophone differentiation.
#      Homophone   Words      Notes
218    bé          bé__n      A noun (a baby)
217    bé          bé__a      An adjective (small)
1183   giáp        giáp__v    A verb (being adjacent to)
1182   giáp        giáp__n    A noun (armor)
1181   giáp        giáp__m    An article

Afterward, we train and test with a Support Vector Machine (SVM). The reason for this choice is that it is a very universal learner and is independent of the dimensionality of the feature space, according to Thorsten Joachims [14]. It is proven to be effective and generalizes well even with many features involved, which is ideal in our case study. In practice, with some tuning, we achieve the results in Table 11.
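A compact sketch of the training and evaluation loop, assuming scikit-learn's LinearSVC and MultinomialNB as stand-ins for the SVM and Naïve-Bayes classifiers reported below; the paper does not name a specific library, so the estimators, file name, and parameter grid here are illustrative only (the grid mirrors the "Random seed" and "Max feature" columns of Tables 11 and 12).

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

# Hypothetical merged dataset from the fourth step (columns as described in the text).
data = pd.read_csv("dataset_with_cwp.csv")
sentences, labels = data["Processed sentence"], data["Encoded Label"]

def evaluate(random_seed, max_features):
    """Train an SVM and a Naïve-Bayes classifier on TF-IDF features; return accuracies in %."""
    x_train, x_test, y_train, y_test = train_test_split(
        sentences, labels, test_size=0.2, random_state=random_seed)
    vec = TfidfVectorizer(max_features=max_features)
    xtr, xte = vec.fit_transform(x_train), vec.transform(x_test)

    scores = {}
    for name, model in (("SVM", LinearSVC()), ("Naive Bayes", MultinomialNB())):
        model.fit(xtr, y_train)
        scores[name] = 100 * accuracy_score(y_test, model.predict(xte))
    return scores

# Illustrative tuning grid in the spirit of Tables 11 and 12.
for seed, max_feat in [(500, 5), (100, 5000), (200, 5000)]:
    print(seed, max_feat, evaluate(seed, max_feat))
```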

Table 11. SVM accuracy rate.
Random seed  Max feature  With CWP  Without CWP, with combined words
500          5            63.295    63.173
100          5            63.341    63.296
100          50           67.940    69.892
500          50           68.189    69.781
1000         100          72.029    72.631
5000         100          72.282    72.788
5000         500          82.917    81.510
500          5000         90.734    90.144
200          5000         91.075    90.140
100          5000         90.573    90.179
50           5000         90.799    90.106


The highest accuracy rate we received is 91.075% with CWP and 90.179% without CWP. Based on these data, we conclude that CWP does not always give better results, but with tuning it can give a significant improvement over training without it. For comparison purposes, we also train a Naïve-Bayes Classifier model (arguably one of the fastest out of the box in terms of processing speed), with a slight improvement as stated in Table 12.

Table 12. Naïve-Bayes classifier accuracy rate.
Random seed  Max feature  With CWP  Without CWP, with combined words
500          5            63.295    63.173
100          5            63.341    63.296
100          50           66.916    67.272
500          50           66.532    67.107
1000         100          70.805    70.643
5000         100          70.847    70.417
5000         500          80.041    78.990
500          5000         87.980    87.248
200          5000         88.329    87.336
100          5000         88.179    87.551
50           5000         88.064    86.999

The best we were able to achieve without overfitting is 88.329% with CWP and 87.551% without CWP. The trend held, as proper tuning was needed for this result. It is also safe to conclude that, in general, even though SVM leads in accuracy in most cases, the Naïve-Bayes Classifier still processes faster with a reasonable accuracy rate. Comparing with the results of the “underthesea group”, who have been developing and maintaining the popular NLP Python toolkit for Vietnamese language processing and have published their results on their official website [2], quoted in Table 13, we conclude that even after skipping the lemmatizing step, CWP provides a modestly comparable result. This shows that CWP not only saves processing power but also remains competitive on the accuracy front.

Table 13. underthesea group’s work result – version 1.1.17 - 2020 [2].
TfidfVectorizer(ngram_range = (1, 2), max_df = 0.5)   92.8
CountVectorizer(ngram_range = (1, 3), max_df = 0.7)   89.3
TfidfVectorizer(max_df = 0.8)                         89.0
CountVectorizer(ngram_range = (1, 3))                 88.9
TfidfVectorizer(ngram_range = (1, 3))                 86.8
CountVectorizer(max_df = 0.7)                         85.5
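The rows of Table 13 read as scikit-learn vectorizer configurations; a short sketch of the two strongest baselines, under that reading of the table (this is not the underthesea group's published code), is:

```python
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer

# Top two baseline configurations quoted in Table 13.
baseline_tfidf = TfidfVectorizer(ngram_range=(1, 2), max_df=0.5)   # reported 92.8
baseline_count = CountVectorizer(ngram_range=(1, 3), max_df=0.7)   # reported 89.3
```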

5 Conclusion and Perspectives

This paper proposes combining a word with its POS to give it a unique ID, thus effectively distinguishing between homophones while preserving the difference between synonyms. The dataset is collected and (semi-)manually processed from various fields, and then a sequence of techniques for customizing and training the learning model is used. The classification model could contribute positively to many Vietnamese text-analyzing businesses, such as social networks, e-commerce, etc. The work results prove that


CWP not only saves processing power but also competes for accuracy with the best-in-class results in the same field. For future work, we shall apply the studied method to sentiment analysis of social-network posts and comments and to e-commerce review analysis for data-mining purposes. The study can also serve as a reference for other fields; in general, for any text-processing task that requires preserving the difference between synonyms, such as social network analysis and sentiment analysis [15, 16].

References
1. Cai, J., Li, J., Li, W., Wang, J.: Deeplearning model used in text classification. In: Proceedings 15th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), Chengdu, China, pp. 123–126 (2018)
2. Le, H., Hoang, T., Toan, D.M.: Homophone and polysemous words in Vietnamese (in comparison with modern Han language) - Đồng âm và đa nghĩa trong tiếng Việt (Đối chiếu với tiếng Hán hiện đại) (Vietnamese) (2011)
3. Halpern, J.: Is Vietnamese a hard language? http://www.kanji.org/kanji/jack/vietnamese/is_VN_hard_sum_EN_VN.pdf
4. Le, H., et al.: Vietnamese NLP Toolkit. https://underthesea.readthedocs.io
5. Institute of Linguistics of Vietnam: Vietnamese dictionary, republished 12th edn. (2006)
6. Thuat, D.T.: Vietnamese phonetic (Ngữ âm tiếng Việt (Vietnamese)), Hanoi, p. 89 (1977)
7. Tolkien, J.R.R.: Lord of the Ring - Book 1: The Fellowship of the Ring, Vietnamese Literature publisher, translated by Yen, N.T.T., Viet, D.T. (2013)
8. A collection of Grim fairy tales, Dan Tri publisher, translated by various members (2008)
9. Hoai, T.: Diary of a Cricket, Kim Dong publisher (2014)
10. 109 modern Vietnamese fables – Various artists, Hong Duc publisher (2018)
11. Various newspaper articles published during 2019–2020 at https://vnexpress.net
12. Various newspaper articles published during 2019–2020 at https://thanhnien.vn
13. Various newspaper articles published during 2019–2020 at https://dantri.com.vn
14. Joachims, T.: Text Categorization with Support Vector Machines: Learning with Many Relevant Features (2005)
15. Hung, P.D., Giang, T.M., Nam, L.H., Duong, P.M., Van Thang, H., Diep, V.T.: Smarthome control unit using Vietnamese speech command. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing and Optimization. ICO 2019. Advances in Intelligent Systems and Computing, vol. 1072. Springer, Cham (2019)
16. Tram, N.N., Hung, P.D.: Analyzing hot Facebook users posts’ sentiment. In: Emerging Technologies in Data Mining and Information Security, Proceedings of IEMIS (2020)

A New Topological Sorting Algorithm with Reduced Time Complexity

Tanzin Ahammad1, Mohammad Hasan1,2(&), and Md. Zahid Hassan1,2

1 Chittagong University of Engineering and Technology, Chattogram, Bangladesh
[email protected]
2 Department of CSE, Bangladesh Army University of Science and Technology (BAUST), Saidpur, Bangladesh

Abstract. To find the topological ordering of a directed acyclic graph (DAG), Kahn’s and Depth First Search (DFS) topological sorting algorithms are used. The time complexity of both of these algorithms is O(|V| + |E|). Here a completely new topological sorting algorithm is proposed that reduces the time complexity of the previous algorithms. By separating the vertices having outgoing edges from the vertices having no outgoing edges and then removing outgoing edges step by step, we can find a topological ordering of any DAG. With the proposed algorithm, the time complexity reduces to O(Σ_{i=1}^{|V|} |(NE)_i|) for both the average case and the worst case, and to O(|V|) for the best case; here |V| is the number of vertices and |(NE)_i| is the number of vertices that contain at least one outgoing edge at step i. The algorithm can also detect a cycle in a graph. It can be used for resolving dependencies, scheduling systems, planning, and in many graph algorithms.

Keywords: Topological sorting · Directed acyclic graph · Complexity · Set · Vertex

1 Introduction

Topological sorting of a directed acyclic graph (DAG) means a linear ordering of the vertices of the graph such that for every directed edge (u, v) from vertex u to vertex v, u comes before v. If each vertex represents a task to be performed and each edge represents the constraint that one task must be performed after another, then a valid sequence for the tasks is a topological ordering of the tasks. Topological ordering is possible only for DAGs: if at least one cycle exists in a graph, then no topological ordering of that graph exists, while every DAG has at least one topological ordering.

The first topological sorting algorithm was given by Kahn (1962). In Kahn’s algorithm, one first takes the vertices of the graph that have no incoming edges; if the graph is acyclic, this set must contain at least one such vertex. Graph traversal begins from this set of vertices. The outgoing edges of these vertices are found and removed, and each vertex reached in this way is checked again for remaining incoming edges; if it has none, it is inserted into the set of sorted elements. Thus, Kahn’s algorithm uses a breadth-first search technique: all vertices and edges are checked starting from the start vertices, and the elements are checked over again at every step, so the overall time complexity of Kahn’s algorithm is O(|V| + |E|). Another popular topological sorting algorithm is the DFS topological sorting algorithm, which is based on the DFS graph traversal technique. In the DFS topological sorting algorithm, traversal starts from a start vertex and goes downward from it; when a vertex with further incoming vertices is found, a previous-level vertex is traversed by backtracking. The time complexity of the DFS topological sorting algorithm is also O(|V| + |E|).

In the proposed algorithm, traversal of the full graph is not needed. We simply list the empty vertices (vertices with no outgoing edge) in one set and the non-empty vertices (vertices with at least one outgoing edge) in another set, then order the vertices by removing the empty vertices from the non-empty vertex set. In the proposed algorithm the time complexity is reduced to O(Σ_{i=1}^{|V|} |(NE)_i|), because two sets are used and the number of elements decreases at each step. The space complexity of the proposed algorithm is O(|V| + |E|). So, the proposed algorithm is a completely new algorithm that finds the topological ordering of a graph in optimal time, and it can also detect a cycle in a graph.

This is a completely new algorithm for topological sorting which reduces the time complexity of topological sorting and avoids using any graph traversal algorithm such as BFS or DFS. It is a new approach that is simple to implement. The algorithm returns false if any cycle is present in the graph, in which case no topological sorting is possible; hence this new topological sorting algorithm can also be used for detecting cycles in graphs.

Topological sorting is a popular algorithm for scheduling tasks. For a long period of time researchers have continued research on this topic, improving the algorithm and studying its applications, so there is a large space for research on this algorithm. The applications of topological sorting are numerous: it can be used for complex database tables that have dependencies, for ordering objects in a machine, for course prerequisite systems, and much more. Topological ordering is mostly used for planning and scheduling.

2 Related Works

A depth-first discovery algorithm (DFDA) for finding a topological sorting is proposed in paper [1]. To reduce the temporary space for a large DAG, the authors used depth-first search; DFDA is more efficient and simpler than the Discovery Algorithm (DA). Three algorithms for generating all topological sortings are described in paper [2]; from these three algorithms the authors improved Well’s topological sorting algorithm. A new sorting algorithm that reduces the time complexity of topological sorting is implemented in paper [3] using a Matlab script; it is more efficient than Kahn’s and the DFS topological sorting algorithm and requires less storage than DFS, but for complex and large systems it increases the complexity. In a food supply chain, a new algorithm for checking adulterant nodes is proposed in paper [4]. A novel optimization algorithm is proposed in paper [5] to find many diverse topological sorts, which imply many cut-sets, for a directed acyclic graph; for enhancing grid observation, the proposed algorithm is very effective. For large graphs, another topological sorting algorithm is given in paper [6]: an I/O-efficient algorithm named IterTS, which is, however, inefficient in the worst case. A new algorithm is given in paper [7] that produces the topological sorting of sparse digraphs with better time complexity than all previous algorithms; this topological sorting is maintained dynamically, and the authors give an experimental comparison of topological sorting for large randomly generated directed acyclic graphs. In paper [8], P. Woelfel solved topological sorting with O(log² |V|) OBDD operations; for a fundamental graph problem, it is the first true runtime analysis of a symbolic OBDD algorithm. In paper [9], the authors propose algorithms to solve single-source shortest paths, computing a directed ear decomposition, and topological sorting of a planar directed acyclic graph in O(sort(N)) I/Os, where sort(N) is the number of I/Os needed to sort N elements. All topological orderings of a directed acyclic graph are found in paper [10]; the authors propose an extended algorithm capable of finding all topological solutions of a DAG, implemented using backtracking, iteration in place of recursion, suitable data structures, and more. In paper [11], a new algorithm using a parallel computation approach is proposed; this algorithm for finding a topological sorting of a directed acyclic graph is very simple. The nodes are traversed in parallel from a node and marked as visited; after visiting all the source nodes, all other nodes in the graph are marked as visited, and for an acyclic graph the parallel traversal terminates. The implementation of this algorithm on an SIMD machine is discussed in that paper, and the time complexity of the proposed parallel algorithm is of the order of the longest distance between a source and a sink node of the directed acyclic graph. Topological sorting of large networks is given in paper [12]. General topological algorithms are not efficient at finding a topological ordering for a large network, so A. B. Kahn proposed this procedure to find the topological ordering of large networks efficiently. Using this algorithm, a PERT (Program Evaluation Review Technique) network containing 30,000 activities has been maintained and can be ordered in less than one hour of machine time. The method is efficient because the location of any item is known, so there is no need to search for an item, and only a single iteration is needed to obtain the correct sequence of the network. The procedure terminates when all events are traversed or any cycle is found in the network, and it is more efficient than all other topological algorithms for large networks. In paper [13], T. Ahammad et al. proposed a new dynamic programming algorithm to solve the job sequencing problem that reduces its time complexity from O(n²) to O(mn); they used the tabulation method of dynamic programming and showed a step-by-step simulation of the algorithm’s working procedure with experimental analysis. In paper [14], J.F. Beetem proposed an efficient algorithm for hierarchical topological sorting that is also applicable in the presence of apparent loops; its application is also described in that paper.


3 Overview of the Proposed Algorithm

In the proposed algorithm, we put the empty vertices of a graph, i.e. those vertices that contain no outgoing edges, in a set E, and the non-empty vertices, i.e. those that contain at least one outgoing edge, in another set NE. In each step, we take an empty vertex, remove it from all non-empty vertices, and also remove it from set E. If any vertex becomes empty during this procedure, we put it in set E. The topological ordering is the set T, the sequence in which we choose the empty vertices from the set E. When the set E becomes empty, the algorithm terminates; if T contains all vertices of the graph, then no cycle exists, the graph is a DAG, and T is the correct topological order. But if T does not contain all vertices of the graph, then the graph contains a cycle, the graph is not a DAG, and no topological ordering is possible (Fig. 1).

Fig. 1. Flow Chart of topological sorting algorithm.


4 Algorithm

4.1 Pseudocode

Here G is the set of vertices of the graph, which is divided into two sets: E (containing the empty vertices) and NE (containing the non-empty vertices). An empty vertex contains no outgoing edges and a non-empty vertex contains outgoing edges. When the set of empty vertices E becomes empty, the ordering is completed. If the size of T is not equal to the size of G, then the graph has at least one cycle, so the graph is not a DAG and topological sorting of this graph is not possible. Otherwise, if the size of T is equal to the size of G, then a topological ordering is possible and the elements of set T in reverse order are in topological order. From this algorithm, we get the correct topological ordering of a graph in set T after reversing T. If topological ordering is not possible, i.e. the algorithm detects a cycle, then the algorithm returns false; otherwise the algorithm returns the set T, the correct topological ordering of graph G. By this procedure, the algorithm also detects cycles in a graph.

1. G: a set of vertices of a graph
2. T: an empty set that will be filled in topological order
3. Procedure Topological Sort(G, T)
4. E …

The algorithm is illustrated on a graph with vertices 0–13 (Fig. 3); a chunk of its adjacency list is
…
2->0
3->2
4->5, 6, 7
5->
6->7
7->8
8->
9->4
10->0, 1, 12
11->13
12->0, 11
13->5
Since the graph is a directed acyclic graph (DAG), at least one topological ordering of the graph exists. Here the empty vertices (vertices with no outgoing edges) are (0, 5, 8). We can take these empty vertices in any order; each order is valid for a particular step. After taking these empty vertices, the procedure begins. If the graph is a DAG, then in the first step at least one empty vertex exists. The adjacency-list simulation for finding a topological ordering of this graph is shown in Fig. 3: the first table represents the adjacency list of the graph, and its size reduces step by step as the empty vertices are removed.
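Before the step-by-step walk-through, a minimal Python sketch of the procedure described above may help; this is an illustrative reimplementation of the described steps (two sets, repeated removal of empty vertices, reversal of T, cycle detection), not the authors' original listing, and the small demo graph at the end is hypothetical.

```python
def topological_sort(graph):
    """graph: dict mapping each vertex to the list of vertices its outgoing edges point to.

    Returns the topological ordering as a list, or False if the graph contains a cycle.
    """
    # NE: adjacency (as sets) of vertices that still have outgoing edges; E: empty vertices.
    ne = {v: set(out) for v, out in graph.items() if out}
    e = set(graph) - set(ne)
    t = []                                   # vertices in the order they become empty

    while e:
        v = e.pop()
        t.append(v)
        for u, out in list(ne.items()):      # remove v from every non-empty vertex
            out.discard(v)
            if not out:                      # u has just become empty
                del ne[u]
                e.add(u)

    if len(t) != len(graph):                 # some vertices never became empty -> cycle
        return False
    t.reverse()                              # reverse T to obtain the topological order
    return t

# Small illustrative graph (not the example of Fig. 3): a -> b -> c and a -> c.
print(topological_sort({"a": ["b", "c"], "b": ["c"], "c": []}))  # ['a', 'b', 'c']
```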


Fig. 3. Step by step visualization of proposed algorithm working procedure with an example.

Step 1: Find the empty vertices (0, 5, 8) and insert these vertices in a set T (0, 5, 8). Then remove these vertices from the adjacency list and also remove them from all other vertices. Now the empty vertices are (2, 7, 13).
Step 2: Find the empty vertices (2, 7, 13) and insert these vertices in the set T (0, 5, 8, 2, 7, 13). Then remove these vertices from the adjacency list and also remove them from all other vertices. Now the empty vertices are (3, 6, 11).
Step 3: Find the empty vertices (3, 6, 11) and insert these vertices in the set T (0, 5, 8, 2, 7, 13, 3, 6, 11). Then remove these vertices from the adjacency list and also remove them from all other vertices. Now the empty vertices are (4, 12).
Step 4: Find the empty vertices (4, 12) and insert these vertices in the set T (0, 5, 8, 2, 7, 13, 3, 6, 11, 4, 12). Then remove these vertices from the adjacency list and also remove them from all other vertices. Now the only empty vertex is 9.
Step 5: Find the empty vertex 9 and insert this vertex in the set T (0, 5, 8, 2, 7, 13, 3, 6, 11, 4, 12, 9). Then remove this vertex from the adjacency list and also remove it from all other vertices. Now the only empty vertex is 1.
Step 6: Find the empty vertex 1 and insert this vertex in the set T (0, 5, 8, 2, 7, 13, 3, 6, 11, 4, 12, 9, 1). Then remove this vertex from the adjacency list and also remove it from all other vertices. Now the only empty vertex is 10.


Step 7: Insert the last empty vertex 10 in the set T (0, 5, 8, 2, 7, 13, 3, 6, 11, 4, 12, 9, 1, 10). Then remove this vertex from the adjacency list. Now there is no empty vertex remaining. After reversing T, we get the topological ordering of the graph.

The topological ordering of the graph by the proposed algorithm is
(10->1->9->12->4->11->6->3->13->7->2->8->5->0)
By Kahn’s algorithm, the topological ordering is
(3->10->2->1->12->9->0->11->4->13->6->5->7->8)
By the DFS toposort algorithm, the topological ordering is
(10->12->11->13->3->2->1->9->4->6->7->8->5->0)
The proposed topological sorting algorithm is less complex than Kahn’s and the DFS topological sorting algorithms.

5 Analysis

The time complexity of the proposed algorithm is O(Σ_{i=1}^{|V|} |(NE)_i|) and the space complexity of the proposed algorithm is O(|V| + |E|). The calculation of the time and space complexity is given in the next sections.

5.1 Time Complexity

In the proposed algorithm the outer loop executes |V| times and the inner loop executes |NE| times. But in this algorithm, for each vertex i = 1 to |V| the inner-loop value |NE| is different:

i = 1 … |(NE)_1|
i = 2 … |(NE)_2|
i = 3 … |(NE)_3|
i = 4 … |(NE)_4|
i = 5 … |(NE)_5|
i = 6 … |(NE)_6|
…
i = |V| … |(NE)_{|V|}|

The total complexity is counted as the summation of the numbers of non-empty vertex elements of each step. So, for i = 1 to |V|:

Complexity = |(NE)_1| + |(NE)_2| + |(NE)_3| + |(NE)_4| + |(NE)_5| + |(NE)_6| + … + |(NE)_{|V|}|
           = Σ_{i=1}^{|V|} |(NE)_i|, where i = 1, 2, 3, 4, 5, 6, …, |V|          (1)

So, the general time complexity is Σ_{i=1}^{|V|} |(NE)_i|. The time complexity for the best case, average case and worst case depends on the |(NE)_i| term.

Best Case Analysis
From Sect. 1, the general time complexity is Σ_{i=1}^{|V|} |(NE)_i|. If the graph has no edges, then the empty vertex set E will be full and the non-empty vertex set NE will be empty, so only the outer loop executes, |V| times. Here, |(NE)_i| is the total number of elements in the non-empty vertex set, where i is the step number. The value of |(NE)_i| will be 1 for each step, i.e. |(NE)_1| = |(NE)_2| = |(NE)_3| = |(NE)_4| = |(NE)_5| = |(NE)_6| = 1, and Eq. (1) becomes

Total complexity = 1 + 1 + 1 + … + 1 = |V|

So, the time complexity for the best case of this algorithm is Ω(|V|), where |V| is the total number of vertices of the graph.

Average Case Analysis
From Sect. 1, the general time complexity is Σ_{i=1}^{|V|} |(NE)_i|. The time complexity for the average case depends on the |(NE)_i| term. The outer loop of the algorithm executes |V| times and the inner loop executes |NE| times. Equation (1) becomes

Complexity = |(NE)_1| + |(NE)_2| + |(NE)_3| + |(NE)_4| + |(NE)_5| + |(NE)_6| + … + |(NE)_{|V|}|
           = Σ_{i=1}^{|V|} |(NE)_i|, where i = 1, 2, 3, 4, 5, 6, …, |V|

So, the average case of this algorithm is Θ(Σ_{i=1}^{|V|} |(NE)_i|), which depends on the |(NE)_i| term; |(NE)_i| is the total number of elements in the non-empty vertex set at each step i.

Worst Case Analysis
From Sect. 1, the general time complexity is Σ_{i=1}^{|V|} |(NE)_i|. Here, the complexity depends on the |(NE)_i| term; in the worst case, |NE| will be greater than in the average case. Here also the outer loop of the algorithm executes |V| times and the inner loop executes |NE| times. Equation (1) becomes

Complexity = |(NE)_1| + |(NE)_2| + |(NE)_3| + |(NE)_4| + |(NE)_5| + |(NE)_6| + … + |(NE)_{|V|}|
           = Σ_{i=1}^{|V|} |(NE)_i|, where i = 1, 2, 3, 4, 5, 6, …, |V|

So, the time complexity for the worst case of this algorithm is O(Σ_{i=1}^{|V|} |(NE)_i|), which depends on the |(NE)_i| term; |(NE)_i| is the total number of elements in the non-empty vertex set at each step i.

5.2 Space Complexity

The space complexity of this algorithm is O(|V| + |E|), where |V| is the number of vertices and |E| is the number of edges. In this algorithm, an adjacency list is used, so the sum of |NE| in the worst case will be O(|V| + |E|). Therefore, the space complexity of the proposed algorithm is O(|V| + |E|).


6 Comparison with Related Works

Table 1. Comparison of time and space complexity of popular algorithms.
Algorithm Name/Paper No   Time complexity (best case)   Time complexity (average case)   Time complexity (worst case)   Space complexity
Kahn’s Algorithm          Ω(|V| + |E|)                  Θ(|V| + |E|)                     O(|V| + |E|)                   O(|V|)
DFS Algorithm             Ω(|V| + |E|)                  Θ(|V| + |E|)                     O(|V| + |E|)                   O(|V|)
Paper [1]                 Ω(|A|k log k)                 Θ(|A|k log k)                    O(|A|k log k)                  O(|A| + ||K||)
Paper [2]                 Ω(m + n)                      Θ(m + n)                         O(m + n)                       O(n)
Proposed Algorithm        Ω(|V|)                        Θ(Σ_{i=1}^{|V|} |(NE)_i|)        O(Σ_{i=1}^{|V|} |(NE)_i|)      O(|V| + |E|)

It is clear from Table 1 that the proposed algorithm’s time complexity is lower than that of all the other topological sorting algorithms.

7 Conclusions and Future Recommendation

In this paper, we proposed a completely new algorithm, implemented in the simplest way, in which no graph traversal technique is needed and the complexity is reduced. The time complexity for the best case becomes Ω(|V|), where |V| is the total number of vertices of the graph. The time complexity for the average case is Θ(Σ_{i=1}^{|V|} |(NE)_i|), where |NE| is the number of non-empty vertices, and the time complexity for the worst case is O(Σ_{i=1}^{|V|} |(NE)_i|). Normally the time complexity of a general topological sorting algorithm is O(|V| + |E|). In this paper, we have shown that our algorithm is better, giving results faster than any other topological sorting algorithm. In the future, we will try to further reduce the average-case and worst-case time complexity, and we will also carry out further research on the applications of the proposed algorithm.

References
1. Zhou, J., Müller, M.: Depth-first discovery algorithm for incremental topological sorting of directed acyclic graphs. Inf. Process. Lett. 88, 195–200 (2003)
2. Kalvin, A.D., Varol, Y.L.: On the generation of all topological sorting. J. Algorithm 4, 150–162 (1983)
3. Liu, R.: A low complexity topological sorting algorithm for directed acyclic graph. Int. J. Mach. Learn. Comput. 4(2) (2014)


4. Barman, A., Namtirtha, A., Dutta, A., Dutta, B.: Food safety network for detecting adulteration in unsealed food products using topological ordering. Intell. Inf. Database Syst. 12034, 451–463 (2020)
5. Beiranvand, A., Cuffe, P.: A topological sorting approach to identify coherent cut-sets within power grids. IEEE Trans. Power Syst. 35(1), 721–730 (2019)
6. Ajwani, D., Lozano, A.C., Zeh, N.: A topological sorting algorithm for large graphs. ACM J. Exp. Algorithmic 17(3) (2012)
7. Pearce, D.J., Kelly, P.H.J.: A dynamic topological sort algorithm for directed acyclic graphs. ACM J. Exp. Algorithmic 11(1.7) (2006)
8. Woelfel, P.: Symbolic topological sorting with OBDDs. J. Discrete Algorithms 4(1), 51–71 (2006)
9. Arge, L., Toma, L., Zeh, N.: I/O-efficient topological sorting of planar DAGs. In: Proceedings of the Fifteenth Annual ACM Symposium on Parallel Algorithms and Architectures, pp. 85–93 (2003)
10. Knuth, D.E., Szwarcfiter, J.L.: A structured program to generate all topological sorting arrangements. Inf. Process. Lett. 2(6), 153–157 (1974)
11. Er, M.C.: A parallel computation approach to topological sorting. Comput. J. 26(4) (1983)
12. Kahn, A.B.: Topological sorting of large networks. Commun. ACM 5, 558–562 (1962)
13. Ahammad, T., Hasan, M., Hasan, M., Sabir Hossain, M., Hoque, A., Rashid, M.M.: A new approach to solve job sequencing problem using dynamic programming with reduced time complexity. In: Chaubey, N., Parikh, S., Amin, K. (eds.) Computing Science, Communication and Security. COMS2 2020. Communications in Computer and Information Science, vol. 1235. Springer, Singapore (2020). https://doi.org/10.1007/978-981-15-6648-6_25
14. Beetem, J.F.: Hierarchical topological sorting of apparent loops via partitioning. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 11(5), 607–619 (1992)
15. Intelligent Computing & Optimization, Conference proceedings ICO 2018, Springer, Cham, ISBN 978-3-030-00978-6
16. Intelligent Computing and Optimization, Proceedings of the 2nd International Conference on Intelligent Computing and Optimization 2019 (ICO 2019), Springer International Publishing, ISBN 978-3-030-33585-4

Modeling and Analysis of Framework for the Implementation of a Virtual Workplace in Nigerian Universities Using Coloured Petri Nets

James Okpor1(&) and Simon T. Apeh2

1 Department of Computer Science, Federal University Wukari, Wukari, Taraba State, Nigeria
[email protected]
2 Department of Computer Engineering, University of Benin, Benin City, Edo State, Nigeria

Abstract. This paper presents a Hierarchical Coloured Petri Nets model specifically designed to support the implementation process of a virtual workplace in Nigerian Universities. CPN Tools 4.0 was used to capture the various phases and activities of the framework for the implementation of a virtual workplace into a Hierarchical Coloured Petri Nets (CPN) model. The developed virtual workplace CPN model has been analyzed using simulation and state space analysis methods. The developed CPN model, being a graphical and also an executable representation of the implementation framework, will assist those empowered with the responsibility of implementing a virtual workplace to have a better understanding of the implementation process.

Keywords: Coloured petri nets · Framework · Nigerian universities · Virtual workplace

1 Introduction

The sudden emergence of the COVID-19 pandemic has impacted the workplace. Several nations of the world have introduced lockdown measures and social distancing in a bid to curtail the spread of the novel coronavirus. This has led to the mass implementation of the virtual workplace by organizations in an attempt to ensure business continuity [1]. In Nigeria, only a few organizations had adopted the virtual workplace before the global outbreak of the COVID-19 pandemic. Unfortunately, Nigerian universities are among those that are yet to embrace the concept of the virtual workplace [2], even though recent technological advancements in information and communication technology have altered the workspace and ushered in a new form of work known as the virtual workplace [3, 4]. Although the impacts of COVID-19 on Nigerian universities are enormous due to the closure of schools [2, 5], this crisis offers a great opportunity for policymakers in the educational sector to explore ways to reposition Nigerian universities by adopting the technologies needed for e-learning and also creating a virtual workplace for staff.

In a virtual workplace, employees work remotely away from the central office and communicate with colleagues and managers through electronic means [6]. Research suggests that the implementation of a virtual workplace comes with many benefits, such as enabling employees to work wherever it is convenient for them to carry out their work [4], reduction in traffic congestion and vehicular emissions [7–10], a greater talent pool, fewer unnecessary meetings, reduction in commuting time, increased employee productivity [3], reduction in vehicle miles traveled [11], opportunities for people with disabilities to work from home, savings to the organization in terms of the cost of real estate [3, 12], decreased energy consumption, and ultimately a significant reduction in air pollution [10]. Despite the enormous benefits that can be derived from the implementation of a virtual workplace, there is currently little theoretical and empirical research on the development of a framework for the implementation of a virtual workplace in developing countries, and more specifically Nigeria. Although there are a few publications and blog posts about how best to implement a virtual workplace, many of these materials are not research-based [13]. According to [14], organizations still lack much-needed theoretical and empirical support regarding the planning and implementation of a virtual workplace.

The objective of this study was to transform the phases and activities of the framework for the implementation of a virtual workplace in Nigerian Universities into a CPN model and to analyze the behaviour of the model. Interestingly, Coloured Petri Nets have been used by several authors in modeling workflow systems and business processes, manufacturing systems, computer networks, and embedded systems [15–17, 19]. This motivated us to use Coloured Petri Nets to capture the various phases and activities of the framework for the implementation of a virtual workplace in Nigerian Universities in a CPN model. The advantage of using Coloured Petri Nets to model the implementation framework is that, being a graphical model and also an executable representation [16, 18], the CPN model helps to provide a better understanding of the implementation framework than a textual description. Also, simulation of the CPN model provides a more accurate view of the framework for the implementation of a virtual workplace and could be used to demonstrate to management and other stakeholders the best approach to implementing a virtual workplace. The choice of Coloured Petri Nets was based on the fact that it is a graphical and executable modeling language [19] and on its analysis methods [15, 19].

The rest of this paper is organized as follows: Sect. 2 shows the proposed implementation framework for the virtual workplace. Section 3 presents the developed CPN model of the virtual workplace. Section 4 describes the state space analysis. Section 5 concludes the paper.


2 Framework for the Implementation of a Virtual Workplace in Nigerian Universities

This section gives an overview of the developed framework for the implementation of a virtual workplace in Nigerian universities. The framework was developed based on a comprehensive literature review and on data obtained from selected organizations in Nigeria that have successfully implemented a virtual workplace. Although the framework was developed for Nigerian universities, it can be adopted by other organizations in Nigeria that intend to implement the concept of a virtual workplace. The developed framework, when adopted, will assist universities and other organizations in the successful implementation of a virtual workplace in Nigeria. The developed virtual workplace framework is composed of four phases, namely the conceptual phase, pre-implementation phase, implementation phase, and post-implementation phase, as shown in Fig. 1.

Fig. 1. Proposed Virtual Workplace implementation framework for Nigerian Universities

In the conceptual phase, the university sets up a committee that is empowered with the responsibility of identifying the benefits, barriers, and risks associated with virtual workplace implementation. The implementation of a virtual workplace should only be considered after an adequate evaluation of the university's needs for the virtual workplace, identification of job suitability, cost-benefit analysis, determination of impact (in terms of structure, people, task, and culture), and identification of the critical success factors have been carried out successfully, as indicated in the pre-implementation phase. Immediately after the pre-implementation phase has been concluded, the next stage (the implementation phase) involves four sets of activities: planning, adoption of the implementation strategy, training, and program launch. Finally, the last phase (post-implementation) involves program evaluation.

3 Top-Level Module of the Virtual Workplace CPN Model

The Hierarchical CPN model of the virtual workplace was developed based on the framework for the implementation of a virtual workplace in Fig. 1: the various phases and activities involved in the implementation framework were used to develop the CPN model of the virtual workplace shown in Fig. 2. The top-level module of the virtual workplace CPN model in Fig. 2 consists of 10 places and 4 substitution transitions (drawn as rectangular boxes with double-line borders) representing the conceptual phase, pre-implementation phase, implementation phase, and post-implementation phase. Each of the substitution transitions has a substitution tag (submodule) beneath it that models in more detail the behaviour of the conceptual phase, pre-implementation phase, implementation phase, and post-implementation phase.

Fig. 2. The Top-level module of the virtual workplace CPN model

The relationship between the submodules and their functions is described as follows. The conceptual phase module in Fig. 3 represents the initial phase of the virtual workplace implementation process: the benefits, barriers, and risks are identified in this module, and thereafter a decision is made either to accept or abandon the virtual workplace implementation. Once a decision is made to accept a virtual workplace, the next phase (the pre-implementation phase) involves a series of activities such as the identification of university needs for the virtual workplace, job suitability, cost analysis, determination of the impact, and the identification of success factors, as indicated in the pre-implementation phase module in Fig. 4. Also, some critical decisions, such as the selection of appropriate implementation methods, the selection of interested staff/participants, training, and the launch of the virtual workplace program, are made in the implementation phase module in Fig. 5. Finally, the post-implementation phase module in Fig. 7 models two sets of activities: assessing the program implementation and the subsequent analysis of the assessment result.

3.1 Conceptual Phase Module of the Virtual Workplace Model

Figure 3 shows the conceptual phase module, which is the submodule of the conceptual phase substitution transition in Fig. 2. The essence of the conceptual phase is to ensure that the university identifies the benefits, barriers, and risks associated with the implementation of a virtual workplace. Port place university management is given the marking “1`Management” and signifies the first step of the conceptual phase. If a valid token as specified by the arc expression mgt is presented in the input port place University Management, then the transition setup committee will be enabled. When the transition setup committee is fired, a committee will be set up in place VW committee. Transition Assess becomes enabled immediately place VW committee receives its desired token(s). Place VWorkplace is connected to transition Assess and is declared as colour set VWrkplace, which is a record data type with two tokens of benefits, barriers, and risk assessment. When the transition Assess occurs it extracts tokens from place VWorkplace and placed each of these tokens on the corresponding output places VW benefits identified, VW barriers identified, and Asset Identified, as specified by the output arc expressions #Ben(V_Wrk), #Barr(V_Wrk), and #Risk_Ass(V_Wrk). When the place Asset Identified has received its required tokens, transition Identify Threat Vuln Likehd Imp becomes enabled and ready to fire. Transition Identify Threat Vuln Imp has two input places. The first place Threat Vuln and Iikehd is defined as product colour set Thrt_Vuln_LHd containing two tokens of threat, vulnerabilities, and threat likelihood. While the second place Impact is defined as an enumerated colour set Imp and contains three tokens of High_Impact, Medium_Impact, and Low_Impact. So when the transition Identify Threat Vuln Likehd Imp fires, token(s) is moved to place Risk Determination, and either transition Low Risk, Medium Risk, or High Risk becomes eligible for execution. A guard is placed on each of the three transitions to ensure that only the required threat likelihood and threat impact passed through the transitions. Place Risk level received the token(s) immediately after the execution of transition Low Risk, Medium Risk, and High Risk, and trigger transition Make Decision. When transition Make decision is triggered, function Acpt(RL1,RL2,VW_Ben, VW_Barr), and Rej(RL3,VW_Ben,VW_Barr) ensure that place Accept, and Abandon meet the requirement of the output arc expression. 3.2

Pre-implementation Phase Module of the Virtual Workplace CPN Model

The pre-implementation phase module shown in Fig. 4 consists of five sets of activities: the identification of needs, job suitability, cost analysis, determination of impact, and the identification of success factors. Two of these activities (job suitability and cost analysis) are modeled with substitution transitions, and the remaining three activities (identification of needs, determination of impact, and identification of success factors) are modeled with elementary transitions. The port place Accept connects the conceptual phase module to the pre-implementation phase module. The transition Identify needs is enabled when the input port place Accept receives the desired token. When the transition Identify needs is fired, the place identified needs receives the tokens specified by the output arc expression Nds,deci. The substitution transitions job suitability and cost analysis, together with their associated submodules, model in more detail the activities involved in each submodule.

Fig. 3. Conceptual Phase module of the virtual workplace CPN Model

Fig. 4. Pre-implementation Phase module of the virtual workplace model


Port place Suitable Job serves as an interface between the Job suitability module and the Cost Analysis module. The cost analysis module estimates the total cost and the expected savings. The transition Determine Impact models the expected impact of the implementation of a virtual workplace in the university, while the transition Identify success factor identifies the critical success factors for a successful virtual workplace implementation, as shown in Fig. 4.

3.3 Implementation Phase Module of the Virtual Workplace Model

Figure 5 shows the submodule associated with the implementation phase substitution transition. There are four sets of activities in this phase: planning, implementation strategy, and training, modeled by three substitution transitions, and program launch, modeled with an elementary transition.

Fig. 5. Implementation phase module of the virtual workplace model

The submodule of the planning substitution transition is shown in Fig. 6. Place Planning is defined as a record colour set containing “Policy_Devt”, “Eva_ICT_infra”, “Sele_Remote_Access”. When the transition identify planning area fires, place Policy Development, Evaluate ICT Infra., and Selection of remote Access will receive the desired token(s) according to output arc expressions. Once the tokens are received, the transition Evaluate becomes enabled and ready to fire. When the transition Evaluate fires, tokens are moved from places Existing ICT Infra., Policy Guidelines, and Remote Acess Technique according to the function defined by the output expression DPG(PD, PG), EICT(EVAII,EIIE), and RATECH(SRA,rt) to place Developed Policy Guide, ICT Evaluation, and selected remote access respectively. The transition finalize plan with output arc expression Plan(PG,EIIE,rt) ensures that every part of the plan is captured. Place VW plan serves as a connection between the planning module, and the implementation strategy module. The implementation phase module model all the activities required in the implementation phase. While on the other hand, the planning module handles all the planning activities. The CPN model of the virtual workplace cannot be presented in full in this paper due to page limitations.


Fig. 6. Planning submodule

3.4 Post-implementation Phase Module of the Virtual Workplace

The submodule of the post-implementation phase substitution transition is shown in Fig. 7. It contains two elementary transitions: conduct IMP Assmnt and Analyze Assmnt Result. This module models the program evaluation.

Fig. 7. Post-implementation phase module of virtual workplace model

Transition conduct IMP Assmnt models all activities relating to the conduct of the implementation assessment. When transition conduct IMP Assmnt is fired, the places Imp Assment Pilot, Imp Assment Phased, and Imp Assment full receive tokens concurrently for the conduct of the implementation assessment, as defined by the output arc expressions with the functions PI_Assmnt(Assmnt,SSP), PH_Assmnt(Assmnt,GS), and FU_Assmnt(Assmnt,CAES). Transition Analyze Assmnt result immediately becomes enabled once the places Imp Assment Pilot, Imp Assment Phased, and Imp Assment full receive a valid token from transition Conduct Imp Assmnt. Transition Analyze Assmnt result ensures that the strengths and weaknesses of the virtual workplace program are critically examined.


4 Simulation and State Space Analysis

Simulation and state space analysis are the two methods of analysis provided by CPN Tools [19]. We conducted several simulations on the modules and submodules of the developed virtual workplace CPN model, using CPN Tools installed on an Intel(R) Pentium(R) CPU P6200 @ 2.13 GHz PC, to ascertain whether the modules and submodules work as expected. The simulations show that tokens are received/consumed in the correct order and that the model always terminates in the desired state. State space analysis was then applied to explore the standard behavioural properties of the virtual workplace CPN model. The analysis of the state space was carried out using the state space report, which revealed important information about the boundedness, home, liveness, and fairness properties of the CPN model. The state space report for the virtual workplace CPN model shows that there are 12889 nodes and 30400 arcs, generated in 36 s, while the SCC graph has 12889 nodes and 30400 arcs, generated in 2 s. The state space report shows that the SCC graph and the state space have the same number of nodes and arcs; it is therefore obvious from the report that no infinite occurrence sequence exists, and hence we can conclude that the model terminates appropriately. The liveness properties show that there are 24 dead markings in the developed CPN model. A query was used to investigate these dead states and to determine whether these dead markings represent terminal states. The query result proves that these dead markings are end states in the model, that is, states at which the CPN model terminates. The fact that these dead markings in the CPN model are end states shows that the virtual workplace CPN model produces the desired result when it terminates. The state space report also shows that there are no dead transitions in the CPN model; a transition is dead if there are no reachable markings in which it is enabled [19].

5 Conclusions

In this paper, we have presented a framework designed for the implementation of a virtual workplace in Nigerian Universities. A CPN model was developed based on the framework for the implementation of a virtual workplace using CPN Tools. The CPN model covers the four phases and the activities involved in the virtual workplace implementation framework. The developed virtual workplace CPN model was verified using model simulation and state space analysis methods. The results from the analysis of the developed virtual workplace CPN model show that the CPN model works as expected. Therefore, the CPN model, being an executable model and also a graphical representation of the developed framework, will help to provide a better understanding of the virtual workplace implementation framework. Future work may be geared towards the development of an animation that will be interfaced with the virtual workplace CPN model.


References
1. International Labour Organization: Teleworking during the COVID-19 pandemic and beyond: A Practical Guide, pp. 1–48 (2020)
2. Pauline, E.O., Abiodun, A., Hindu, J.A.: Telecommuting: a panacea to COVID-19 spread in Nigerian Universities. Int. J. Innov. Econ. Devt. 6(1), 47–60 (2020)
3. PwC S.A.: HR Quarterly, 1–16 (2015)
4. Richard, E., Carl, J.K., Christopher, G.P., Antony, I.S.: The new workplace: are you ready? How to capture business value. IBM Global Serv. 1–12 (2011)
5. Ogunode, N.J., Abigeal, I., Lydia, A.E.: Impact of COVID-19 on the higher institutions development in Nigeria. Elect. Res. J. Soc. Sci. Hum. 2(2), 126–135 (2020)
6. Cascio, W.F.: Managing a virtual workplace. Acad. Mgt. Exec. 14(3), 81–90 (2000)
7. Caldow, J.: Working outside the box: a study of the growing momentum in telework. Inst. for Elect. Govt., IBM Corp., 1–14 (2009)
8. Obisi, C.: The empirical validity of the adjustment to virtual work arrangement by business organisations in Anambra State, Nigeria. Int. J. Sci. Res. Edu. 9(3), 173–181 (2016)
9. Koemg, B.E., Henderson, D.K., Mokhtanan, P.L.: The travel and emissions impacts of telecommuting for the state of California telecommuting pilot project. Transpn. Res-C. 4(1), 13–32 (1996)
10. Choo, S., Mokhtarian, L.P., Ilan, S.: Does telecommuting reduce vehicle-miles traveled? An aggregate time series analysis for the U.S. Transportation 32(1), 37–64 (2005)
11. Henderson, K.D., Koening, E.B., Mokhtarian, L.P.: Travel diary-based emissions analysis of telecommuting for the puget sound demonstration project. Res. Report UCD-ITS-RR-94-26, 1–54 (1994)
12. Thompson, C., Caputo, P.: The Reality of Virtual Work: is your organization ready?, AON Consulting, 1–12 (2009)
13. Onpoint Consulting: Success factors of top performing virtual teams. Res. report, 1–14 (2015)
14. Madsen, S.R.: The benefits, challenges, and implications of teleworking: a literature review. Cult. Relig. J. 1(1), 148–158 (2011)
15. van der Aalst, W.M.P.: Modelling and analysis of production systems using a Petri Net based approach. In: Boucher, T.O., Jafari, M.A., Elsayed, E.A. (eds.) Proceedings of the Conference on Computer Integrated Manufacturing in the Process Industry, pp. 179–193 (1994)
16. Kristensen, L.M., Mitchell, B., Zhang, L., Billington, J.: Modelling and initial analysis of operational planning processes using coloured Petri nets. In: Proceedings of Workshop on Formal Methods Applied to Defence Systems, Australian Computer Society, Conferences in Research and Practice in Information Technology (12), 105–114 (2002)
17. Hafilah, D.L., Cakravastia, A., Lafdail, Y., Rakoto, N.: Modeling and simulation of Air France baggage handling system with colored Petri Nets. IFAC PapersOnLine, 2443–2448 (2019)
18. Jensen, K., Kristensen, L.M., Wells, L.: Coloured Petri Nets and CPN tools for modelling and validation of concurrent systems. Int. J. Softw. Tools Technol. Transf. (STTT) 9(3–4), 213–254 (2007)
19. Jensen, K., Kristensen, L.M.: Coloured Petri Nets: Modelling and Validation of Concurrent Systems. Springer-Verlag (2009)

Modeling and Experimental Verification of Air-Thermal and Microwave-Convective Presowing Seed Treatment

Alexey A. Vasiliev(&), Alexey N. Vasiliev, Dmitry A. Budnikov, and Anton A. Sharko

Federal Agroengineering Scientific Center VIM, Moscow, Russian Federation
[email protected]

Abstract. The use of electrophysical effects for presowing treatment of seeds is among the most effective methods of improving their sowing quality. However, the application of such methods is limited by the fact that specialized new-type technological equipment is normally required for their implementation in seed processing lines. This problem can be solved in a simpler way if processing lines designed for forced ventilation are applied for presowing treatment of grain crops. This work deals with the issues related to the use of aerated bins and convection-type microwave grain dryers for presowing processing of seeds. In the course of processing, the requirement for homogeneity of the external field distribution over a dense grain layer has to be met, and it is necessary to ensure both the preset values of temperature and the dynamics of its change. Computer simulations were carried out to check whether the applied processing facilities are physically capable of maintaining the required operation modes during these seed treatment procedures. For the purposes of modeling, the entire dense grain layer was divided into three sections, and the heat-and-moisture exchange processes developing during treatment of seeds were simulated in each section. The curves of change of temperature and moisture content in seeds during processing were calculated. Simulation results for heat exchange in the course of air-thermal treatment and for convective-microwave processing have proved the implementability of the required operation modes of presowing treatment. The results of field experiments on seed presowing processing in aerated bins and in convective-microwave grain dryers made it possible to evaluate the advantages and effectiveness of each method. Thus, air-thermal treatment stimulates the development of the secondary root system and intensive growth of the green mass of plants. Application of convective-microwave processing of seeds makes it possible to increase the number of productive culms, as well as the number of grain heads (ears) per plant. Therefore, specific modes of electrophysical exposure in the course of seed presowing treatment can be selected depending on the goals that have to be achieved in each particular case.

Keywords: Air-thermal treatment · Convective-microwave processing · Computer simulations · Presowing treatment · Seeds





1 Introduction

Presowing treatment of seeds is an essential link in the chain of technological operations within the entire seeding cycle. Prior to chemical treatment with disinfecting solutions, seeds have to be calibrated and culled. Depending on the plant species, seeds are also vernalized and hardened off. All these operations are designed to improve the sowing quality of seeds and, therefore, to increase yields [1]. Electrophysical methods of seed treatment play an important role in presowing processing technologies [2–4]. Wide-scale experimental research [5–8] has proved the effectiveness of various methods of electrophysical treatment. However, their application at grain-growing farms is limited by the technological performance of existing processing equipment. A newly designed processing plant or treatment technology must integrate smoothly into the presowing treatment processes already adhered to at a particular agricultural enterprise. This requirement is, to a major extent, fulfilled by grain dryers, including aerated ones, as well as by plants for drying and disinfection of grain with the use of microwave fields [9–11]. If presowing treatment lines are used to implement convective-microwave processing of seeds, one has to be sure that the requirements for the processing modes are complied with and no damage to seeds occurs. The objective of this research is to study, with the help of computer simulations, the possibility of applying aerated bins and convective-microwave grain dryers for seed presowing processing and to evaluate the effectiveness of such presowing treatment.

2 Materials and Methods

In order to study the implementability of grain presowing treatment in such processing lines, the air-thermal and convective-microwave seed treatment processes were modeled. The basis of the simulation model was the following system of equations and transfer functions, which makes it possible to calculate heat-and-moisture exchange in an elementary grain layer [12, 13]:

T(p) = T_0(p)\,\frac{1}{p}\left(1 + p e^{-p\tau_1} - e^{-p\tau_1}\right) - A_1 \theta(p)\,p - A_2 W(p)\,p,   (1)

D(p) = D_0(p)\,\frac{1}{p}\left(1 + p e^{-p\tau_1} - e^{-p\tau_1}\right) - \frac{1}{A_3} W(p)\,p,   (2)

\theta(p) = T(p)\,\frac{1}{A_4} - T_0(p)\,\frac{1}{p A_4}\left((1 - A_4)(1 - e^{-p\tau_1}) + p e^{-p\tau_1}\right),   (3)

\theta(p) = (\theta_0 - T(p))\,\frac{1}{T_1 p + 1}\,\frac{1}{p} + \theta_0 + A_5 W(p)\,p + A_5 W_0 + A_6 Q_v,   (3.1)

W(p) = W_{eq}\,\frac{K}{p + K},   (4)

W_{eq} = \left(\frac{-\ln(1 - F)}{5.47\cdot 10^{-6}\,(T + 273)}\right)^{0.435},   (5)

K = 7.4897 - 0.1022T - 0.6438W + 0.0134\,T V + 0.0148\,T W + 0.0029\,V W - 0.0026T^2 + 0.0071W^2,   (6)

F = \frac{745 D}{(622 + D)\,e^{\left(0.622 + \frac{7.5T}{238 + T}\right)}},   (7)

where A_1 = \frac{c_g \gamma_g}{\varepsilon c_a \gamma_a};  A_2 = \frac{\gamma_g r'}{100\,c_a \gamma_a \varepsilon};  A_3 = \frac{\varepsilon \gamma_a 10^3}{\gamma_g};  A_4 = \frac{\alpha s_v}{\varepsilon c_a \gamma_a};  A_5 = \frac{r'}{c_g \varepsilon};  A_6 = \frac{1}{c_g \rho_g}.

Here T is air temperature, °C; D is air moisture content, g/kg; W is grain moisture content, %; F is air relative humidity, a.u.; W_eq is the equilibrium moisture content of grain, %; K is the drying coefficient, 1/h; θ is grain temperature, °C; Q_v is the specific microwave power dissipated in the dielectric medium, W/m³; V is air velocity, m/s; c_a is the specific heat capacity of air, kJ·kg⁻¹·°C⁻¹; ρ_g is the volumetric density of grain on a dry basis, kg/m³; ε is the pore volume of the grain layer; c_g is the specific heat capacity of grain, kJ·kg⁻¹·°C⁻¹; r' is the specific heat of evaporation of water, kJ/kg; γ_g is the grain bulk weight, kg/m³; γ_a is the air volumetric density, kg/m³; τ is time, h.

Equation (3) was used in the computer model for air-thermal treatment, while Eq. (3.1) was applied to convective-microwave presowing treatment of seeds. The developed computer model is built from models of the corresponding elementary grain layers. The thickness of an elementary layer equals the size of one seed. Therefore, the sequence of models for all elementary layers makes it possible to describe the heat-and-moisture exchange process in a dense grain layer of any thickness, with the output parameters of each preceding layer serving as the input parameters of the next one.

In the process of air-thermal presowing treatment it is important not only to maintain the air parameters within the required range but also to ensure the required grain temperature and exposure time over the entire grain layer. During air-thermal presowing treatment, seeds are exposed to heated air; heating of seeds with microwave fields is not provided for, so conventional electric heaters are used. Therefore, electric heater control has to be considered when modeling the process.

In this work, the applicability of aerated bins with radial air distribution for air-thermal presowing treatment was studied. A grain layer 1.2 m thick is held in such a bin in a stable manner and is blown through with air directed from the central air duct towards the perforated outer wall. For the modeling purposes, the entire grain layer was divided into three zones of equal thickness. However, the air velocity differs in each zone as a result of the radial direction of the air flux from the center of the bin: the cross-section area of the grain layer increases while the aggregate air flow remains constant. In the model, the average air velocity was assumed to be 0.7 m/s, 0.5 m/s and 0.7 m/s for the first, second and third zone, respectively. Simulink software was used for developing the computer model. The diagram of the developed model is shown in Fig. 1.
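As a quick numerical illustration of how the moisture-exchange relations feed the layer model, the sketch below evaluates the equilibrium moisture content from Eq. (5) and the drying coefficient from Eq. (6). The constants are used as reconstructed above, and the example values of T, F, W and V are illustrative assumptions rather than data from the chapter.

import math

def equilibrium_moisture(F, T):
    # Eq. (5): equilibrium grain moisture content W_eq, %.
    # F is relative humidity (0..1), T is air temperature, deg C.
    return (-math.log(1.0 - F) / (5.47e-6 * (T + 273.0))) ** 0.435

def drying_coefficient(T, W, V):
    # Eq. (6): drying coefficient K, 1/h.
    # T in deg C, W is grain moisture content in %, V is air velocity in m/s.
    return (7.4897 - 0.1022 * T - 0.6438 * W + 0.0134 * T * V
            + 0.0148 * T * W + 0.0029 * V * W
            - 0.0026 * T ** 2 + 0.0071 * W ** 2)

# Assumed conditions: 20 deg C air at 70 % RH, grain at 14 % moisture, 0.7 m/s
print(round(equilibrium_moisture(0.70, 20.0), 1))    # ~17.8 % equilibrium moisture
print(round(drying_coefficient(20.0, 14.0, 0.7), 2)) # ~1.14 1/h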

Fig. 1. Simulink model for air-thermal presowing treatment of seeds in bins with radial air distribution.

The ‘Interpreted MATLAB Fcn’ units [14] (F = f(tau), Ta = f(tau)) define the change profiles of ambient air temperature and relative humidity in correspondence with statistically processed data from meteorological services. Unit ‘Q0’ sets the initial value of grain temperature. The dependence of the relative humidity of the air fed into the grain layer on the electric heater capacity is set with the ‘Interpreted MATLAB Fcn’ unit (F = f(Pk)). Unit ‘C’ sets the temperature level to which ambient air has to be heated in order to implement the required seed presowing processing algorithm. The oscillograph units (F, W, Q, T) make it possible to monitor the change of seed and air parameters over the layer thickness in the course of treatment.

When modeling presowing treatment with heated air, the requirements of the process control algorithm for aerated bins were taken into account [15]. In accordance with this algorithm the following operations have to be performed:
– measure the initial value of grain temperature;
– heat the inlet ambient air to 1.3 times the initial grain temperature with the electric heater;
– blow the heated air through the grain layer for 1.1 h;
– cool the seeds with ambient air for 1.1 h;
– blow air heated to 1.8 times the initial grain temperature through the layer for 1.1 h;
– cool the seeds with ambient air.

It is assumed that the initial grain temperature is 20 °C. The results of modeling make it possible to monitor the change of seed temperature, the temperature of the air blowing through the grain layer, and the moisture content of seeds. These modeling results are presented in Figs. 2, 3 and 4.
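The staged control algorithm above can also be expressed outside Simulink as a simple time-based setpoint schedule. The sketch below is only an illustration: the stage durations and the 1.3/1.8 heating factors come from the list above, while everything else (function name, ambient temperature handling) is an assumption.

def air_setpoint(time_h, theta0=20.0, ambient=20.0):
    # Air temperature setpoint (deg C) for air-thermal presowing treatment:
    # heat to 1.3*theta0 for 1.1 h, cool 1.1 h, heat to 1.8*theta0 for 1.1 h, then cool.
    stages = [
        (1.1, 1.3 * theta0),   # blow air heated to 1.3 x initial grain temperature
        (1.1, ambient),        # cool with ambient air
        (1.1, 1.8 * theta0),   # blow air heated to 1.8 x initial grain temperature
    ]
    t = 0.0
    for duration, setpoint in stages:
        if time_h < t + duration:
            return setpoint
        t += duration
    return ambient             # final cooling with ambient air

# Example setpoints during the cycle (theta0 = 20 deg C, as assumed above)
print([air_setpoint(x) for x in (0.5, 1.5, 2.5, 4.0)])   # [26.0, 20.0, 36.0, 20.0]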


3 Results and Discussion

3.1 Air-Thermal Treatment of Seeds

Heated air is the major external factor in this kind of presowing treatment of seeds. Therefore, it is important to ensure conditions in which the air blowing through the grain layer can heat the seeds to the specific required temperature. The analysis of the diagrams shown in Fig. 2 leads to the conclusion that the slight temperature delay inside the grain layer (about 0.11 h) and the temperature change over the grain layer, compared to the temperature at its input, do not considerably affect air temperature variations within the grain layer.

Fig. 2. Temperature change of air blowing through the grain layer: a) temperature in the input of the grain layer, b) temperature in the output of Zone 1, c) temperature in the output of Zone 2, d) temperature in the output of Zone 3.

It means that the requirements for the temperature mode of seed presowing treatment are complied with. A short-term air temperature drop within the inter-seed spaces can be explained by water transfer from the areas of the grain layer located close to the air inlet towards its downstream areas. This can be seen from the time dependences of grain moisture content presented in Fig. 3. Variations of the moisture content of seeds do not exceed 3%. Besides, the values of their moisture content upon completing the presowing process are identical to those at the beginning. This is an essential feature, since seeds must not be overdried, which would result in loss of their internal water. Modeling results for the temperature change in seeds under presowing treatment are presented in Fig. 4. The diagrams shown in Fig. 4 give reason to conclude that the requirements of the control algorithm for seed presowing treatment in aerated bins are fully complied with, and such processing lines can be applied for the purposes studied here.


Fig. 3. Results of modeling the change of moisture content in the grain layer during presowing treatment with heated air: a) moisture content in Zone 1, b) moisture content in Zone 2, c) moisture content in Zone 3.

Fig. 4. Temperature change of seeds during air-thermal presowing treatment in aerated bins: a) grain temperature in Zone 1, b) grain temperature in Zone 2, c) grain temperature in Zone 3.

3.2 Dependence of Operation Modes of Seed Presowing Treatment on Parameters of the Convective-Microwave Zone

Microwave fields have the advantage of producing a combined effect on seeds in the course of presowing processing, including both thermal and electromagnetic effects. Combining the microwave method with thermal treatment enables various seed processing options that may yield specific useful reactions during plant vegetation. A seed presowing treatment method of this kind is considered promising; however, its implementation requires special equipment. The analysis of electrophysical methods applicable to presowing treatment technologies showed that one of the critical aspects is providing a homogeneous effect over the entire depth of the grain layer under processing. That is why it is advisable to model the process of air-thermal treatment of seeds with the use of microwave fields in order to specify the basic requirements for the processing line design. The equations described above were applied for modeling heat-and-moisture exchange in the grain layer. The thickness of the grain layer exposed to processing in the convective-microwave plant is 15 cm to 20 cm; therefore, the average value of 18 cm was chosen for the simulations. For descriptive reasons, this conditionally thick grain layer was divided into three sections of 6 cm each. The computer model for convective-microwave presowing seed treatment designed in Simulink is presented in Fig. 5.

Fig. 5. Block diagram of computer model for convective-microwave presowing seed treatment

This computer model makes it possible to set the values of the input air parameters (temperature and relative humidity); model units Tvh and Fvh serve this purpose. Magnetron operation is controlled with a two-step (on/off) action controller represented by the unit Relay. The control is organized so that the magnetron is switched off when the grain temperature in the first zone reaches 45 °C. This temperature value was selected in order to avoid overheating of seeds in the course of their thermal treatment. The graphs of seed temperature in each layer (unit ‘Q’), grain moisture content (unit ‘W’), air relative humidity at the output of each layer (Fvih) and drying agent temperature at the output of each layer (Tvih) are displayed on the oscillographs. The results of modeling are shown in Figs. 6 and 7.
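The two-step (relay) magnetron control described above can be sketched as follows. Only the 45 °C switch-off threshold comes from the text; the switch-on threshold (hysteresis band) and the temperature trace are assumptions made for illustration.

def relay_magnetron_control(grain_temps_zone1, t_off=45.0, t_on=42.0):
    # Two-step (on/off) control: switch the magnetron off when the grain temperature
    # in the first zone reaches t_off, switch it back on once it falls below t_on.
    magnetron_on = True
    states = []
    for temp in grain_temps_zone1:
        if magnetron_on and temp >= t_off:
            magnetron_on = False
        elif not magnetron_on and temp < t_on:
            magnetron_on = True
        states.append(magnetron_on)
    return states

# Illustrative (assumed) Zone 1 temperature trace during convective-microwave treatment
trace = [30, 38, 44, 45, 46, 44, 43, 41, 40, 43, 45]
print(relay_magnetron_control(trace))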

Fig. 6. Change of grain temperature under microwave heating, in three sublayers

The diagrams presented in Fig. 6 show that different layers are heated to different temperatures. However, this heterogeneity disappears and the grain temperature becomes uniform over the thick layer as the grain passes through the dryer, because while moving inside the dryer the grain is mixed and exposed to microwave fields of different intensities. Magnetron operation control ensures the required modes of presowing grain treatment without exceeding the set values of seed temperature. The diagrams of seed moisture content in each layer (see Fig. 7) show that no water loss occurs in seeds in the process of presowing treatment. Variations of grain moisture content during treatment do not exceed 0.2% and will not affect the sowing quality of seeds.


Fig. 7. Change of grain moisture content in the process of presowing treatment with the use of microwave fields: a) moisture content in Layer 1, b) moisture content in Layer 2, c) moisture content in Layer 3.

3.3 Experimental Tests of Seed Presowing Treatment Modes

Experimental tests of seed presowing treatment modes were performed on seeds of winter barley of grade ‘Elita’. At the first step of the tests, presowing treatment was carried out using air-thermal processing: seeds were treated in aerated bins with radial air distribution, adhering to the processing modes described above.

Fig. 8. Convective-microwave unit for presowing treatment of seeds: 1 – receiving container, 2 – microwave active chamber, 3 – magnetrons, 4 – airduct of the magnetron cooling system, 5 – input airduct.


The convective-microwave processing unit whose layout is shown in Fig. 8 was applied. Grain is fed into the module from the receiving container (1). Waveguides link the microwave active chamber (2) with the magnetron unit (3). Air for cooling the magnetron power supplies is blown through airduct (4), and air for cooling the seeds enters the grain layer through airduct (5). Two different physical methods were studied in the course of presowing treatment. The first, called ‘air-thermal treatment’, heats grain in a dense layer by blowing hot air through it. The second, called ‘convective-microwave treatment’, heats grain with microwave fields with simultaneous cooling by air. Field experiments were carried out for adequate interpretation of the results of presowing treatment. For this purpose, experimental working plots were sown with grain treated after a two-year lying period [16]. The results of treatment were evaluated from the day of seedling emergence and during the whole cycle of plant vegetation. In accordance with accepted agricultural practice, all necessary measurements were made during the whole period of plant vegetation, including the formation of stems and ears. Productivity indicators such as grain weight per ear and the number of grains per ear were evaluated after harvesting [17]. The results of observations are presented in Table 1.

Table 1. Experimental data on the results of pre-sowing seed treatment

Indicators for evaluating the effectiveness of pre-sowing treatment | Without processing | Air-thermal treatment | Microwave field treatment
Seedling density | 516 | 491 | 486
Number of secondary roots | 2.7 | 5.4 | 3.5
Plant length | 18.9 | 26.0 | 21.2
Number of leaves | 6.8 | 9.9 | 7.9
Weight of 20 green plants | 32.5 | 71.4 | 44.3
Number of plants per m² | 488 | 480 | 476
Number of stems per m² | 1030 | 871 | 984
Number of productive stems per m² | 809 | 788 | 877
Number of ears per 1 plant | 1.58 | 1.60 | 1.79
Plant height, cm | 76.7 | 77.9 | 81.9
Ear length, cm | 6.9 | 7.6 | 7.4
Number of grains per ear, pcs | 15.0 | 16.1 | 15.4
Weight of 1000 grains, g | 45.5 | 48.4 | 46.0
Grain weight per ear, g | 0.72 | 0.82 | 0.77
Productivity, c/ha | 57.5 | 64.0 | 67.0

The data presented in Table 1 make it possible to evaluate the effectiveness of each presowing treatment technology for particular purposes. For instance, air-thermal treatment is preferable for winter crop cultivation because it ensures better conditions for secondary root development. In agricultural technologies oriented towards production of crops for green fodder it is also advisable to apply air-thermal presowing treatment. The best crop yield was obtained with convective-microwave treatment of seeds; this treatment technology made it possible to increase the number of ears per plant, and this particular indicator probably accounts for the productivity growth. It has to be taken into account that the results of experiments may differ depending on microclimate variations. Therefore, additional research has to be carried out to study possible effects of climatic factors.

4 Conclusion

The results of both computer simulations and experimental research give strong grounds to claim that technological equipment designed for grain drying with forced ventilation can be effectively applied in presowing seed treatment technologies. Aerated bins make it possible to implement the air-thermal presowing processing technique, and the required conditions of the electrophysical processes are implementable in convective-microwave grain dryers as well. The results of the field experiments have shown that applying various external factors in the course of presowing seed treatment is an effective tool to control the structure of the grain yield. For example, the air-thermal method may be advisable for winter crops, for which the development of secondary roots is essentially important, while the convective-microwave technique makes it possible to stimulate the development of multiple productive stems and ears per plant.

References
1. Isaeva, A.O., Kirilin, G.M., Iskakov, A.Y., Giniyatulina, A.K.: Effects of vernalizing and seeding method on output yield of carrot. In: Scientific Society of the 21st Century Students. Technical Science: Collected Papers Based on Materials of the 67th International Students Scientific-Practical Conference, pp. 119–123 (2018). (Russ.)
2. Shabanov, N.I., Ksenz, N.V., Gazalov, V.S., et al.: The substantiation of dose for presowing treatment of cereal seeds in electromagnetic field of industrial frequency. Ambient Sci. 5(2), 20–24 (2018)
3. Dukic, V., Miladinov, Z., Dozet, G., et al.: Pulsed electro-magnetic field as a cultivation practice used to increase soybean seed germination and yield. Zemdirbyste Agric. 104(4), 345–352 (2017)
4. Badridze, G., Kacharava, N., Chkhubianishvili, E., et al.: Effect of UV radiation and artificial acid rain on productivity of wheat. Russ. J. Ecol. 47(2), 158–166 (2016)
5. Gilani, M.M., Irfan, A., Farooq, T.H., et al.: Effects of pre-sowing treatments on seed germination and morphological growth of Acacia nilotica and Faidherbia albida. Scientia Forestalis 47(122), 374–382 (2019)
6. Luna, B., Chamorro, D., Perez, B.: Effect of heat on seed germination and viability in species of Cistaceae. Plant Ecol. Divers. 12(2), 151–158 (2019)
7. Mildaziene, V., Aleknaviciute, V., Zukiene, R., et al.: Treatment of common sunflower (Helianthus annuus L.) seeds with radio-frequency electromagnetic field and cold plasma induces changes in seed phytohormone balance, seedling development and leaf protein expression. Sci. Rep. 9(1), 1–12 (2019). Article No. 6437
8. Nurieva, K.O.: Electrophysical factors for treating seeds. In: The Youth: Education, Science, Creating – 2019: Collected Papers Based on Materials of the Regional Scientific-Practical Conference, pp. 122–124 (2019). (Russ.)
9. Budnikov, D.A., Vasiliev, A.N., Vasilyev, A.A., Morenko, K.S., Mohamed, S.I., Belov, F.: Application of electrophysical effects in the processing of agricultural materials. In: Kharchenko, V., Vasant, P. (eds.) Advanced Agro-Engineering Technologies for Rural Business Development, pp. 1–27 (2019). https://doi.org/10.4018/978-1-5225-7573-3.ch001
10. Vasiliev, A.N., Ospanov, A.B., Budnikov, D.A.: Controlling reactions of biological objects of agricultural production with the use of electrotechnology. Int. J. Pharm. Technol. 8(4), 26855–26869 (2016)
11. Wang, S., Wang, J., Guo, Y.: Microwave irradiation enhances the germination rate of tartary buckwheat and content of some compounds in its sprouts. Pol. J. Food Nutr. Sci. 68(3), 195–205 (2018)
12. Ospanov, F.B., Vasilyev, A.N., Budnikov, D.A., Karmanov, D.K., Vasilyev, A.A., et al.: Improvement of Grain Drying and Disinfection Process in Microwave Field. Nur-Print, Almaty, 155 p. (2017)
13. Vasilyev, A.A., Vasilyev, A.N., Dzhanibekov, A.K., Samarin, G.N., Normov, D.A.: Theoretical and experimental research on pre-sowing seed treatment. IOP Conf. Ser.: Mater. Sci. Eng. 791(1), 012078 (2020). https://doi.org/10.1088/1757-899X/791/1/012078
14. Dabney, J., Harman, T.L.: Mastering Simulink 4. Prentice-Hall, Upper Saddle River, 412 p. (2001)
15. Udintsova, N.M.: Mathematical substantiation of electric heater capacity for presowing treatment of seeds. Deposited manuscript, All-Russian Institute of Scientific and Technical Information, No. 990-B2004, 10.06.2004. (Russ.)
16. Sabirov, D.C.: Improvement of seed presowing treatment effectiveness. In: Electric Equipment and Technologies in Agriculture: Collected Papers Based on Materials of the 4th International Scientific-Practical Conference, pp. 222–225 (2019). (Russ.)
17. Burlaka, G.A., Pertseva, E.V.: Effect of presowing treatment of seeds on germination readiness and germination ability of spring wheat. In: Innovative Activities of Science and Education in Agricultural Production: Proceedings of the International Scientific-Practical Conference, pp. 301–305 (2019). (Russ.)

Modeling of Aluminum Profile Extrusion Yield: Pre-cut Billet Sizes

Jaramporn Hassamontr1(&) and Theera Leephaicharoen2

1 King Mongkut’s University of Technology North Bangkok, Bangkok 11000, Thailand
[email protected]
2 Thai Metal Aluminum Company Limited, Samutprakarn 10280, Thailand

Abstract. In some manufacturing industries, starting raw material size is critical to determining how much material will be scrapped by the end of a process. Process planners working in aluminum profile extrusion industry, for example, need to select appropriate aluminum billet sizes to be extruded to meet specific customer orders while minimizing resulting scraps. In this research, extrusion process is classified according to how billets are extruded, namely multiple extrusions per billet and multiple billets per extrusion. Mass balance equations for each configuration are used to formulate a yield optimization problem where billets are pre-cut to specific sizes and kept in stock. Both models are non-linear and discrete. The solution procedure is developed using an enumeration technique to identify the optimal solution. It is validated with extrusion data from an industrial company. Its effectiveness is demonstrated using various case studies.

Keywords: Aluminum profile extrusion · Yield optimization · Mass balance equation · Billet selection problem · Scrap reduction

1 Introduction

Most manufacturing processes or operations can be optimized to save time and cost. Yet theory and practice are sometimes far apart: most practitioners in industry deem optimization too idealistic to be done in practice, so many manufacturing processes are not yet optimized in the most effective manner. In this research an attempt has been made to apply an optimization technique to the aluminum profile extrusion process. The aluminum extrusion process is used to manufacture long aluminum profiles that are widely used in construction and in mechanical and electrical appliances; obvious examples are window frames, heat sinks and steps on SUVs. The global aluminum extrusion market value was $78.9 billion in 2018 [1] and is still growing. The aluminum extrusion process typically involves three main steps. First, aluminum billets are cut into specific sizes, heated and then loaded into a container, one billet at a time. Then each billet is pushed through a die by a hydraulic press, and the aluminum profile is formed at the other side of the die. Once the profile reaches the prespecified extrusion length, it is cut by the saw in front of the die. The profiles are then straightened by a stretcher and cut to customer-specified lengths. Figure 1 illustrates an example of an extrusion plan where one billet is extruded three times, i.e. three identical extrusion lengths. Due to stretcher capacity, this extrusion length is constrained to within a certain range, such as from Rmin to Rmax.

Fig. 1. Aluminum extrusion process: 3 extrusions per billet

Given a customer order, consisting of product length L and order quantity n, the process planner needs to select available pre-cut billet sizes bi and extrusion lengths LE1 to meet the individual demand whilst resulting in minimal scrap. Many short billets need to be extruded consecutively until the extrusion length exceeds Rmin before being cut by the saw; this type of configuration will be called multiple billets per extrusion in this paper. On the other hand, a large billet can be extruded and cut by the saw more than once, with each extrusion length between Rmin and Rmax. This will be called multiple extrusions per billet, though a single extrusion per billet is also included in this category. So far, in the literature, there is no explicit modeling that considers both extrusion setups simultaneously. The objective of this research is to formulate a yield optimization model for the extrusion process using mass balance equations. Available pre-cut billet sizes, process setup and customer order requirements are taken into account in the model. It is validated with actual data from an aluminum extrusion company. The model is then used to illustrate possible yield improvements through case studies. Specifically, it is shown how two objective functions, namely minimizing total billet weight used versus maximizing yield, provide different optimal solutions. The effect of extrusion die design on yield is also studied.
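For readers who prefer to see the planning inputs in code, the data just described can be captured in a couple of simple records. This is only an illustrative sketch: the field names are assumptions rather than notation from the paper, and the numbers come from the case study discussed later.

from dataclasses import dataclass

@dataclass
class CustomerOrder:
    length_m: float              # product length L
    quantity: int                # order quantity n

@dataclass
class ExtrusionPlan:
    first_billet_cm: float       # first billet, expressed as a pre-cut length
    consecutive_billet_cm: float # consecutive billets
    pieces_per_profile: int      # q1
    pushes_per_billet: int       # C1
    extrusion_length_m: float    # LE1, must lie within [Rmin, Rmax]
    consecutive_billets: int     # nb2

order = CustomerOrder(length_m=3.1, quantity=600)
print(order)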


2 Literature Review

Aluminum extrusion yield was first investigated by Tabucanon [2], who proposed a dynamic programming model to identify an optimal set of billets to be stocked that will minimize overall scraps. Masri and Warburton [3, 4] formulated yield optimization as a billet selection problem. Using past customer orders’ data, the goal was to identify optimal billet sizes that should be stocked. Their model considered a global problem: multiple profile cross sections and lengths. Multiple pushes within the same billet were not considered due to longer cycle time. Wastes from each billet size-customer order pair were calculated a priori, though not explicitly shown. Hajeeh [5] formulated a mathematical model to optimize cutting aluminum logs into billets to fill customer orders. The scraps considered were from log cutting, butt scrap and profile cutting, and the goal was to minimize overall scrap. Again, wastes from each billet assignment were calculated before optimization. Another approach to scrap reduction was introduced by researchers in the mechanical engineering discipline. Reggiani, Segatori, Donati and Tomesani [6] used finite element simulation to predict charge welds between two consecutive billets that must be scrapped. Oza and Gotowala [7] used HyperXtrude, a commercial CAD software for extrusion, to study the effect of ram speed, billet and die temperatures on traverse weld length. Ferras, Almeida, Silva, Correia and Silva [8] conducted an empirical study on which process parameters contribute the most to material scrap. In this research, an alternative approach to extrusion yield optimization is proposed. Instead of considering a global problem where multiple customer orders are optimized simultaneously, the problem size can be greatly reduced by considering one customer order at a time. Secondly, mass balance equations are used explicitly in the model, allowing a clear indication of how billets are extruded in the optimal solution. Thirdly, both general extrusion setups, including multiple pushes, are considered. As will be shown later, there are cases where multiple pushes can lead to lower scrap. Last, it is possible to identify the impact of improving hardware constraints, for example, how die design can improve the optimal solution.

3 Research Methodology

3.1 Problem Formulation

There are three types of material scrap in the aluminum extrusion process. The first is incurred from cutting the aluminum log into specific billet lengths. The second comes from in-process extrusion, denoted by the billet backend e, in kg, and the aluminum material remaining in the die pot, P, also in kg, after the die is removed from the extrusion press. These are denoted in Fig. 1. The third type of waste is induced by profile cutting and comes in several forms, as follows.
• The aluminum portion between the die and the cutting saw, Ls, is usually too short for customer usage and is therefore discarded.
• The part of the profile gripped by the pullers for stretching is usually deformed and later cut off. These are denoted by head loss h and tail loss t in Fig. 1. Generally, the portion of the profile that first comes out and is gripped by the puller, h, is less than Ls. Thus, the scrap h is already included in Ls and can be considered zero in most cases.
• An allowance, d, shown in Fig. 2, is used to avoid irregularities within the profiles. These irregularities may come from stop marks while the profile is being cut, or where material from two different billets merges within the same profile. Such irregularities may lead to profile failures under large loads, so manufacturers cut this portion out.
• Another allowance, though not shown in either figure, is the material scrap remaining after the extruded profile is cut into a pre-determined number of pieces, say q1, of the customer order’s length L.
Other input variables for the planning include billet weight per length, b, the number of die openings (how many profiles can be extruded at one time), a, and the extruded profile’s average weight per length, w. In the case of multiple extrusions per billet, the process planner must determine the billet sizes to be used as the first and consecutive billets, B1 and B2 respectively; for each billet, how many extrusions C1 and C2 are performed; and, in each extrusion, how many pieces of product with length L are made within each profile, q1 and q2. The extrusion lengths LE1 and LE2 from each billet should be constant for simplicity in profile cutting. All C’s are at least 1. Figure 1, for example, demonstrates the case of three extrusions per billet (C1 = 3); consecutive billets are not shown.

Fig. 2. Multiple billets per extrusion (3 billets per extrusion).

Schematics of multiple billets per extrusion are shown in Fig. 2. Here at most three different billet sizes, B1, B2 and B3, can be used to form a set of billets that results in a single extrusion. The second billet B2 can be used D2 times within each billet set, while the first billet B1 and the last billet B3 are used only once. Each billet results in q1, q2 and q3 pieces of product within each profile, respectively. As will be shown in the solution procedure, the first billet in the next set, B1′, can be smaller than B1. The extrusion length LE can be computed for each billet set. The assumptions for the problem formulation are summarized as follows.
• Billets are pre-cut to specific sizes and available in stock in unlimited quantity.
• The extrusion length must be between Rmin and Rmax.
• The saw used for profile cutting on the runout table is at a fixed position Ls from the die.
• Material losses due to the cutting saw are negligible.
• The desired product length L and the minimum required quantity n are known input variables for billet selection.
• The aluminum density w along the profile is constant.
• Process parameters such as extrusion speed, temperature, etc. are pre-determined. No significant adjustment is made during extrusion.
• For the multiple-pushes-per-billet case, the extrusion length is uniform for one billet size. At most two billet sizes are used to meet a customer order.
• For the multiple-billets-per-extrusion case, the extrusion length should be uniform for each set of billets. At most three billet sizes are used to meet a customer order.

3.2 Solution Procedure

Based on mass balance equations from both extrusion setups, it is possible to devise a divide-and-conquer approach.

Step 1. Classify available billet sizes into two categories. For each billet size bi, the maximum extrudable length can be calculated from

L_{max,i} = \frac{b_i - e - P}{a w} - L_s.   (1)
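A small sketch of Step 1, using Eq. (1) with the case-study parameters of Table 1 in Sect. 4 (reused here purely for illustration), shows how the available billet sizes split into the two sets U and V defined in the next paragraph.

# Case-study parameters (Table 1, Sect. 4), reused here for illustration only
Ls, e, P = 1.5, 1.75, 1.25        # m, kg, kg
a, w = 2, 0.545                   # die openings, profile weight per length (kg/m)
b_per_cm = 0.668                  # billet weight per length (kg/cm)
Rmin = 25.0                       # minimum extrusion length (m)
billet_sizes_cm = [35, 40, 59, 64, 74, 78]

def max_extrudable_length(billet_cm):
    # Eq. (1): maximum extrudable length for a pre-cut billet of the given length
    b = billet_cm * b_per_cm
    return (b - e - P) / (a * w) - Ls

U = [s for s in billet_sizes_cm if max_extrudable_length(s) >= Rmin]
V = [s for s in billet_sizes_cm if max_extrudable_length(s) < Rmin]
print("U:", U)   # long enough for multiple extrusions per billet -> [59, 64, 74, 78]
print("V:", V)   # short billets combined into one extrusion -> [35, 40]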

Let U be the set of billets bu whose maximum extrudable length Lmax,u ≥ Rmin, and V the set of billets bv whose maximum extrudable length Lmax,v < Rmin.

Step 2. For billets in U, solve the following optimization model for the multiple-extrusions-per-billet setup. First, a few more decision variables are needed. Let
X1u = 1 if billet u is used as the first billet and zero otherwise, ∀u;
X2u = 1 if billet u is used as a consecutive billet and zero otherwise, ∀u;
nb2 = number of consecutive billets used.
The objective function is to minimize the total billet weight used. Equations (2) and (5) are mass balance equations that can be derived from Fig. 1. Equations (3), (4), (6) and (7) ensure that only one billet size is used as the first billet and only one for the consecutive billets. Equations (8)–(11) impose constraints on the extrusion lengths for the first and consecutive billets, respectively. The number of consecutive billets, nb2, is calculated from (12). The remaining constraints on the decision variables are given in (13)–(15).


minimize  B_1 + nb_2 B_2

subject to

B_1 \ge e + P + a w \{ C_1 (L_s + L q_1 + t) + L_s + h \},   (2)

B_1 = \sum_u X_{1u} b_u,   (3)

\sum_u X_{1u} = 1,   (4)

B_2 \ge e + a w \{ C_2 (t + L_s + L q_2) + h \},   (5)

B_2 = \sum_u X_{2u} b_u,   (6)

\sum_u X_{2u} = 1,   (7)

L_{E1} = L_s + t + L q_1 + h,   (8)

R_{min} \le L_{E1} \le R_{max},   (9)

L_{E2} = L_s + t + L q_2 + h,   (10)

R_{min} \le L_{E2} \le R_{max},   (11)

a C_2 q_2 nb_2 \ge n - a C_1 q_1,   (12)

C_1, C_2 \ge 1, integer,   (13)

q_1, q_2, X_{1u}, X_{2u}, nb_2 \ge 0, integer,   (14)

B_1, B_2, L_{E1}, L_{E2} \ge 0.   (15)

Step 3. For billets in V, solve the following optimization model for the multiple-billets-per-extrusion case. A few more variables are needed. Let
X1v = 1 if billet v is used as the first billet and 0 otherwise, ∀v;
X2v = 1 if billet v is used as a consecutive billet and 0 otherwise, ∀v;
X3v = 1 if billet v is used as the last billet and 0 otherwise, ∀v;
Y1v = 1 if billet v is used as the first billet of the next set and 0 otherwise, ∀v;
ns = number of consecutive billet sets used.
The objective is to minimize the total billet weight, which consists of the first billet set and the consecutive billet sets. Equations (16), (19), (22) and (25) represent mass balance equations that can be derived from Fig. 2. Equations (17), (18), (20), (21), (23), (24), (26) and (27) make sure that only one billet size is used for each billet type. The extrusion length for each billet set is computed in (28), while its limits are imposed by (29). The total number of consecutive billet sets is calculated by (30). Constraints on the decision variables are summarized in (31) and (32).

minimize  (B_1 + D_2 B_2 + B_3) + n_s (B_1' + D_2 B_2 + B_3)

subject to

B_1 \ge e + P + a w \{ L q_1 + L_s + h \},   (16)

B_1 = \sum_v X_{1v} b_v,   (17)

\sum_v X_{1v} = 1,   (18)

B_2 \ge e + P + a w \left\{ L q_2 + d - \frac{P}{a w} \right\},   (19)

B_2 = \sum_v X_{2v} b_v,   (20)

\sum_v X_{2v} = 1,   (21)

B_3 \ge e + P + a w \left\{ L_s + t + L q_3 + d - \frac{P}{a w} \right\},   (22)

B_3 = \sum_v X_{3v} b_v,   (23)

\sum_v X_{3v} = 1,   (24)

B_1' \ge e + P + a w \left\{ L q_1 + d - \frac{P}{a w} \right\},   (25)

B_1' = \sum_v Y_{1v} b_v,   (26)

\sum_v Y_{1v} = 1,   (27)

L_E = (L_s + L q_1) + D_2 (d + L q_2) + (d + L q_3 + t),   (28)

R_{min} \le L_E \le R_{max},   (29)

a (q_1 + D_2 q_2 + q_3) n_s \ge n - a (q_1 + D_2 q_2 + q_3),   (30)

B_1, B_1', B_2, B_3 \ge 0,   (31)

D_2, q_1, q_2, q_3, X_{1v}, X_{2v}, X_{3v}, Y_{1v}, n_s \ge 0, integer.   (32)

Step 4. Show the results from Steps 2 and 3 so that the better solution can be selected. Even though both optimization models are non-linear, all independent variables are discrete. A computer program can therefore be developed to enumerate all possible values of these variables and identify the optimal solution, if any, in each model. The solution procedure is implemented using Visual Basic for Applications in Microsoft Excel.
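A condensed Python sketch of the enumeration idea for the multiple-extrusions-per-billet model (Eqs. (2)–(15)) is shown below. The authors' program was written in Visual Basic for Applications; this version, its search ranges and the parameter values (taken from Table 1 of Sect. 4) are only an illustration of the procedure, not the original implementation.

import math
from itertools import product

# Case-study parameters from Table 1 (Sect. 4), used here as illustrative inputs
Ls, h, t = 1.5, 0.0, 1.0
e, P = 1.75, 1.25
Rmin, Rmax = 25.0, 47.0
b_per_cm = 0.668                     # billet weight per length, kg/cm
billets = [59, 64, 74, 78]           # set U for a = 2 (see Step 1)
a, w = 2, 0.545                      # die openings, profile weight per length (kg/m)
L, n = 3.1, 600                      # product length (m) and order quantity (pcs)

def enumerate_multiple_extrusions_per_billet(max_pushes=3, max_pcs=15):
    best = None
    for b1_cm, b2_cm in product(billets, repeat=2):    # first / consecutive billet sizes
        B1, B2 = b1_cm * b_per_cm, b2_cm * b_per_cm
        for C1, q1, C2, q2 in product(range(1, max_pushes + 1), range(1, max_pcs + 1),
                                      range(1, max_pushes + 1), range(1, max_pcs + 1)):
            LE1 = Ls + t + L * q1 + h                  # Eq. (8)
            LE2 = Ls + t + L * q2 + h                  # Eq. (10)
            if not (Rmin <= LE1 <= Rmax and Rmin <= LE2 <= Rmax):
                continue
            if B1 < e + P + a * w * (C1 * (Ls + L * q1 + t) + Ls + h):   # Eq. (2)
                continue
            if B2 < e + a * w * (C2 * (t + Ls + L * q2) + h):            # Eq. (5)
                continue
            nb2 = max(0, math.ceil((n - a * C1 * q1) / (a * C2 * q2)))   # Eq. (12)
            weight = B1 + nb2 * B2                                       # objective
            if best is None or weight < best[0]:
                best = (weight, b1_cm, C1, q1, b2_cm, C2, q2, nb2)
    return best

# Reproduces the min-weight plan of case study 1: 59 cm first billet, 78 cm consecutive
# billets, 21 consecutive billets, roughly 1133.6 kg in total.
print(enumerate_multiple_extrusions_per_billet())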


4 Case Studies

4.1 Case Study 1

Actual operating data from an aluminum extrusion company is used to validate the model. The input data, shown in Table 1, is used for all cases unless specified otherwise. In the company the case data is taken from, a human process planner decides which billet sizes are used. Table 2 illustrates how this conventional extrusion data compares with results from the enumeration program developed. To make 600 pieces of 3.1 m-long profile, the first billet used is 64 cm in length, whereas consecutive billets are smaller at 59 cm. Each billet is extruded once (a single push per billet) to make 10 pieces of product in each profile extruded. Two profiles are extruded at the same time (a = 2), so each billet generates 20 pieces of product. With 29 consecutive billets, exactly 600 pieces of product are made. The total weight of all billets used is 1185.7 kg. Since material yield is defined as the ratio of total product weight to total billet weight, the resulting yield is 85.8%.

Table 1. Parameters used in case studies.
Extrusion press setup | Ls = 1.5 m, h = 0 m, t = 1 m, d = 0 m, e = 1.75 kg
Billet | Rmin = 25 m, Rmax = 47 m, b = 0.668 kg/cm
Available billet sizes | 35, 40, 59, 64, 74, 78 cm
Die | P = 1.25 kg, a = 2
Product | w = 0.545 kg/m, L = 3.1 m, n = 600 pcs

Table 2. Conventional plan versus results from the enumeration program.

 | Result from industry | Min. billet weight | Min. billet weight | Max yield
Setup | Multiple billets/ext | Multiple billets/ext | Multiple ext/billet | Multiple ext/billet
First billet size (cm) | 64 | 40/35 | 59 | 78
pcs/profile × pushes/billet q1 × C1 | 10 × 1 | 6 | 9 × 1 | 13 × 1
Extruded length LE1 (m) | 33.5 | – | 30.40 | 42.80
Consecutive billet size (cm) | 59 | – | 78 | 78
pcs/profile × pushes/billet q2 × C2 | 10 × 1 | – | 14 × 1 | 14 × 1
Extruded length LE2 (m) | 33.5 | – | 45.90 | 45.90
Number of consecutive billets nb2 | 29 | – | 21 | 21
Last billet size (cm) | – | 40 | – | –
pcs/billet q3 | – | 6 | – | –
Extruded length LE3 (m) | – | 39.70 | – | –
Number of sets ns | – | 24 | – | –
Total no. of workpieces made (pcs) | 600 | 600 | 606 | 614
Total billet weight (kg) | 1185.7 | 1255.84 | 1133.60 | 1146.29
Yield (%) | 85.80 | 81.02 | 90.65 | 90.83


With the enumeration program, the first two billet sizes, 35 and 40 cm, are put into set V, to be considered for multiple billets per extrusion. The program selects a first billet set consisting of a 40-cm billet as the first and the last billet, with no billets extruded in between. Both billets result in 6 pieces per profile, or 24 pieces per billet set. The second set is similar to the first, except that the first billet size can be reduced to 35 cm while still producing the same number of products per billet set. The extrusion length for each billet set is 39.7 m. Making 600 pieces of product requires another 24 sets of billets besides the first set. Certainly, this extrusion setup requires more billet weight than the conventional case; the resulting yield is 81.02%. In this particular case, multiple billets per extrusion could not perform better than multiple extrusions per billet.

As the program considers billets in U for multiple (or single) pushes per billet, it selects 59 cm for the first billet size and 78 cm for the consecutive billet size. The first billet creates 9 pieces of product per profile and the consecutive billets 14 pieces each. It should be noted that the extrusion length from the first billet is not the same as that from the consecutive billets. Here, only 21 consecutive billets are needed. The total product quantity made is 606 pieces, while the total billet weight required is less than in the conventional case. The resulting yield, 90.65%, is significantly better than that of the conventional solution.

The last column in Table 2 demonstrates how the objective function influences the optimal solution. If the objective function is changed from minimizing total billet weight to maximizing yield, the optimal solution will look to add more pieces of product, the numerator of yield, while keeping the total billet weight as low as possible. Thus, the optimal solution for the maximizing-yield objective is to use the largest billet sizes for the first and consecutive billets. The resulting yield is 90.83%, with more products made. Selecting which objective function to use will depend on whether customers allow over-shipment or whether there is a need to make extra pieces, for example, to compensate for product defects.

4.2 Case Study 2

As a common practice in industry, extrusion companies try to improve productivity by increasing the number of die openings, denoted by a in this paper, to as high as possible. For example, if the original number of die openings is 2 and a third opening can be added, the number of profiles extruded increases by 50% immediately. In this case study, the impact of increasing a is investigated. As a increases, the amount of material remaining in the die pot, P, also increases. There are some other drawbacks to increasing a: notably, more material scrap is introduced in each profile, and it is generally more difficult to control the extrusion process (Table 3).


Table 3. Effects of increasing the number of die openings a within a die.

 | a = 1 | a = 2 | a = 3 | a = 4 | a = 5
First billet size (cm) | 40 | 59 | 78 | 40/35 | 35/35
pcs/profile × pushes/billet q1 × C1 | 13 × 1 | 9 × 1 | 8 × 1 | 3 | 2
Extruded length LE1 (m) | 42.80 | 30.40 | 27.30 | – | –
Consecutive billet size (cm) | 78 | 78 | 78 | 40 | 78
pcs/profile × pushes/billet q2 × C2 | 14 × 2 | 14 × 1 | 9 × 1 | 4 | 6
Extruded length LE2 (m) | 45.90 | 45.90 | 30.40 | – | –
Number of consecutive billets nb2 | 21 | 21 | 22 | 2 | 1
Last billet size (cm) | – | – | – | 40 | 59
pcs/billet q3 | – | – | – | 3 | 4
Extruded length LE3 (m) | – | – | – | 45.90 | 39.70
Number of sets ns | – | – | – | 10 | 9
Total no. of workpieces made (pcs) | 601 | 606 | 618 | 616 | 600
Total billet weight (kg) | 1120.9 | 1133.6 | 1198.4 | 1142.4 | 1149
Yield (%) | 90.92 | 90.65 | 87.45 | 91.44 | 88.55

The enumeration program is used to investigate the extrusion process of case study 1 with various numbers of die openings, from a = 1 to 5, assuming that the extrusion press capacity is sufficient and the process is under control. Here, only the objective of minimizing total billet weight is used, and only the better solution of the two extrusion setups is shown. When a = 1, 2 or 3, the optimal solutions come from multiple extrusions per billet, and as a increases, the optimal yield decreases. The optimal solutions for a = 1 and a = 2 are very close to each other, as the main difference comes only from the first billet size used; the consecutive billets, when a = 1 or 2, create 28 pieces of product. When a = 4 or 5, the optimal solution comes from multiple billets per extrusion, with only slightly lower yield. At a = 5, all billet sizes are in V. Thus, multiple billets per extrusion can be beneficial when companies look to improve both yield and productivity at the same time.

5 Conclusion

In this research, an alternative yield optimization model is proposed for the aluminum profile extrusion process using mass balance equations. The model takes into account the two general ways to extrude aluminum profiles, namely multiple billets per extrusion and multiple extrusions per billet. The proposed model has three distinct advantages. First, the problem size is drastically reduced since customer orders are considered one at a time. Secondly, all possible ways to extrude billets are explored to find the optimal solution. And last, it allows process planners to investigate the effects of process parameters, such as the allowable runout length.


Based on limited case studies from the industry, preliminary conclusions can be drawn.
• The choice of objective function will have profound effects on the optimal solution. Minimizing total billet weight seems to be a better objective than maximizing yield, at least from the overproduction quantity standpoint.
• Multiple pushes, such as multiple extrusions per billet and multiple billets per extrusion, can be beneficial for material utilization. Both setups should be considered unless there are some physical constraints on the extrusion press.
More work can be done to extend the model to online billet cutting, where billets can be cut to lengths and extruded directly. This should provide even better yield.

References
1. Grand View Research. https://www.grandviewresearch.com/industry-analysis/aluminumextrusion-market
2. Tabucanon, M.T., Treewannakul, T.: Scrap reduction in the extrusion process: the case of an aluminium production system. Appl. Math. Modelling 11, 141–145 (1987). https://doi.org/10.1016/0307-904X(87)90158-2
3. Masri, K., Warburton, A.: Optimizing the yield of an extrusion process in the aluminum industry. In: Tamiz, M. (ed.) Multi-Objective Programming and Goal Programming. LNE, vol. 432, pp. 107–115. Springer, Heidelberg (1996). https://doi.org/10.1007/978-3-642-87561-8_9
4. Masri, K., Warburton, A.: Using optimization to improve the yield of an aluminum extrusion plant. J. Oper. Res. Soc. 49(11), 1111–1116 (1998). https://doi.org/10.1057/palgrave.jors.2600616
5. Hajeeh, M.A.: Optimizing an aluminum extrusion process. J. Math. Stat. 9(2), 77–83 (2013). https://doi.org/10.3844/jmssp.2013.77.83
6. Reggiani, B., Segatori, A., Donati, L., Tomesani, L.: Prediction of charge welds in hollow profiles extrusion by FEM simulations and experimental validation. Int. J. Adv. Manuf. Technol. 69, 1855–1872 (2013). https://doi.org/10.1007/s00170-013-5143-2
7. Oza, V.G., Gotowala, B.: Analysis and optimization of extrusion process using Hyperworks. Int. J. Sci. Res. Dev. 2(8), 441–444 (2014)
8. Ferras, A.F., De Almeida, F., Silva, E.C., Correia, A., Silva, F.J.G.: Scrap production of extruded aluminum alloys by direct extrusion. Procedia Manufacturing 38, 1731–1740 (2019). https://doi.org/10.1016/j.promfg.2020.01.100

Models for Forming Knowledge Databases for Decision Support Systems for Recognizing Cyberattacks

Valery Lakhno1, Bakhytzhan Akhmetov2, Moldyr Ydyryshbayeva3, Bohdan Bebeshko4, Alona Desiatko4, and Karyna Khorolska4(&)

1 National University of Life and Environmental Sciences of Ukraine, Kiev, Ukraine
[email protected]
2 Abai Kazakh National Pedagogical University, Almaty, Kazakhstan
[email protected]
3 Al Farabi Kazakh National University, Almaty, Kazakhstan
[email protected]
4 Kyiv National University of Trade and Economics, Kioto Street 19, Kyiv, Ukraine
{b.bebeshko,desyatko,k.khorolska}@knute.edu.ua

Abstract. Patterns of Bayesian networks have been developed for the computing core of the decision support system in the course of threats prediction and stages of intrusion into information and communication networks of informatization objects. The proposed Bayesian networks templates allow one to operate with a variety of random variables and determine the probability of a cyber threat or a specific stage of an invasion under given conditions. Probabilistic models for detecting network intrusions based on the use of dynamic Bayesian networks have been added. The training of Bayesian networks parameters based on the EM-algorithm was carried out. In contrast to existing solutions, the proposed approach makes it possible not only to take into account the main stages of intrusions but also to make more reasonable decisions based on the use of both typical intrusion patterns and newly synthesized patterns. All templates and models make up the decision support system computing core for intrusion detection. The effectiveness of the developed models was tested on test samples that were not previously used in training.

Keywords: Decision support system · Intrusion recognition · Bayesian networks · Models

1 Introduction

It is possible to resist the constant growth of the complexity of illegitimate impacts on objects of informatization (OBI), in particular, using systems for intelligent recognition of anomalies and cyber-attacks [1]. In the face of the increasing complexity of attack scenarios, many companies that develop intrusion detection systems have begun to integrate intelligent decision support systems (DSS) into their products. Note that the basis of a modern DSS is formed by various models and methods, which together form its computational core [2, 3]. In difficult situations of guaranteeing the information security of various objects of informatization, the decision-making process takes place under the condition of active interaction of information security means and cybersecurity systems with experts. The above circumstances determine the relevance of our research in the field of synthesis of models for the computational core of the decision support system as part of an intrusion detection system.

2 Review and Analysis of Previous Research

As noted in [4], a promising direction of research in this area has become the development of methods, models, and software systems for DSS [5] and expert systems (ES) [6] in the field of information security. In studies [6, 7, 24], Data Mining technologies for information security problems were considered. The emphasis in these studies was placed on the task of identifying the pattern of evolution of the situation associated with providing the information security of informatization objects. The considered works had no practical implementation in the form of a software system. The studies [8, 9] analyzed the methodology of intelligent modeling for problems of informatization objects information security. The methodology proposed by the authors is intended to provide analysis and decision-making under different scenarios of intrusion into informatization objects. However, these studies have not been carried through to a hardware or software implementation. The appearance of new classes of attacks makes it difficult to analyze and support decision-making in information security tasks, because the tasks of providing information security are themselves poorly amenable to formalization and structuring [10, 11]. In such cases, the parameters of the informatization object's information security state can be represented only by qualitative indicators, which is not always advisable. According to the authors of [12, 13], the analysis of the security level of informatization objects and the development of plans to counter targeted cyber-attacks should be preceded by the stage of identifying the main threats. The authors of [14] point out that it is problematic to solve such a problem qualitatively without an appropriate DSS; however, they did not describe the practical results of their development. In studies [15, 16], it was shown that aspects of building information security means can be taken into account in an approach based on applied Bayesian networks (hereinafter BN). Compared to existing data analysis methods, they provide a clear and intuitive explanation of their findings. Also, a BN (often a dynamic BN, DBN) allows a logical interpretation and modification of the structure of the relations among the variables of the problem. The representation of the BN in the form of a graph makes it a convenient tool for solving the problem of assessing the probabilities of possible cyber threat occurrence, for example, for the information and communication networks of informatization objects.


The computing core of the developed decision support system is based on probabilistic models that make up the knowledge base. Such probabilistic models are capable of describing processes under conditions when the data are poorly structured or insufficient for statistical analysis. Problems solved on the basis of knowledge-base probabilistic models should also use probabilistic inference procedures. Various processes take place in the information and communication networks of informatization objects, each generating its own data sequences; accordingly, these data sequences should be reflected in the description of the course of each process. The above analysis showed that research in the field of developing new models for the formation of decision support system knowledge bases in the process of recognizing cyber-attacks is relevant.

3 Objectives of the Study

The research aims to develop models based on Bayesian networks for the knowledge base of the computing core of the decision support system in the course of identifying complex cybernetic attacks.

4 Methods and Models

It should be noted that unauthorized actions of an intruder in information and communication networks, as a rule, generate corresponding anomalies in their operation. In this context, an information and communication network is not completely isolated but is located in a certain external environment (which includes personnel, competitors, legislative and regulatory bodies, etc.). Such an environment is usually rather weakly formalized, since the connections between its components are not always clearly defined, and to detect the cyber-attacks that have generated anomalies in this environment, appropriate tools and methods are needed that rely on a variety of characteristic features and parameters. The main criterion for selecting intrusion signs and their parameters is the level of statistical significance. Since the KDD99 set [18] describes 41 signs of network intrusions (see Table 1 [17, 18]), a stage of minimizing this set down to the most informative signs is necessary. As was shown in [19, 20], the number of informative features can indeed be minimized; such minimization allows, as in the previous example, building a Bayesian network that simulates the intrusion detection logic of intrusion detection systems. Thus, after the procedure of minimizing the total number of parameters and selecting the most informative characteristics, the number of parameters decreased from 41 to 8. Table 1 shows the 8 rows with the features that were recognized as the most informative for detecting network intrusions [20]. These 8 features make it possible to recognize the presence of typical network attacks such as Probe, U2R, R2L and Dos/DDos with an accuracy of 99%. Note that some of the selected informative features (in particular, lines 23 and 28 of Table 1) are dynamic and change their value in the time interval 0-2 s. Therefore, to


design a DSS and fill its knowledge base with appropriate Bayesian network patterns, it is better in this subtask to use a dynamic Bayesian network (DBN). Dynamic Bayesian networks are ordinary Bayesian networks whose variables are related across adjacent time steps [21]; in fact, a dynamic Bayesian network is a first-order Markov process. The designed dynamic Bayesian network consists of two networks: let us designate the first BN as (B1) and the second, the transit network, as (B2). The original network (B1) defines the prior distributions of the available model, p(z(1)). The transit model (B2) determines the probabilities of transitions between time slices.

Table 1. Parameters of network traffic for building a BN (according to [18, 19])

№    Parameter                  Description
2    protocol_type              Protocol type (TCP, UDP, etc.)
3    service                    Attacked service
4    src_bytes                  Number of bytes from source to destination
23   count                      Number of connections per host in the current session for the last 2 s
26   same_srv_rate              % of connections with the same service
27   diff_srv_rate              % of connections for various services
28   srv_count                  Number of connections for the same service in the last 2 s
34   dst_host_same_srv_rate     % of connections to localhost established by the remote side and using the same service

Taking the above into account, one can build dynamic Bayesian networks for the variables marked in lines 2-4, 23, 26-28 and 34 of Table 1. One can also add a variable to the DBN that indicates the moment of transition between networks B1 and B2. The GeNIe Modeler editor was used for the design of the dynamic Bayesian networks. The probabilistic model of the designed dynamic Bayesian network for transitions between the vertices of the joint distribution graph of the available models is as follows:

p(Z_t \mid Z_{t-1}) = \prod_{i=1}^{N} p\left(Z_t^i \mid Pa\left(Z_t^i\right)\right),    (1)

p(Z_{1:t}) = \prod_{t=1}^{N} \prod_{i=1}^{N} p\left(Z_t^i \mid Pa\left(Z_t^i\right)\right),    (2)

where Z_t is the BN slice at time point t; Z_t^i is the i-th BN node at time t; Pa(Z_t^i) is the set of parent nodes of that node; and N is the number of nodes in a BN slice. Expression (2) describes the probabilities of transitions between BN nodes.
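As a minimal illustration of how the factorizations (1)-(2) can be evaluated by the computing core, the sketch below multiplies tabulated conditional probabilities over slices and nodes. The Python code and the toy probability tables are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch (hypothetical CPTs): evaluating the DBN factorization (1)-(2)
# for the hidden stages s, e, b over discrete time slices.

# Prior distribution p(z(1)) encoded by the original network B1 (toy numbers).
prior = {"s": {0: 0.9, 1: 0.1}, "e": {0: 0.95, 1: 0.05}, "b": {0: 0.99, 1: 0.01}}

# Transition model of the transit network B2: p(node_t | node_{t-1}) (toy numbers).
transition = {
    "s": {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}},
    "e": {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}},
    "b": {0: {0: 0.97, 1: 0.03}, 1: {0: 0.5, 1: 0.5}},
}

def joint_probability(slices):
    """p(Z_{1:t}) as the product over slices and nodes of p(Z_t^i | Pa(Z_t^i))."""
    p = 1.0
    for t, z in enumerate(slices):
        for node, value in z.items():
            if t == 0:                       # first slice: prior network B1
                p *= prior[node][value]
            else:                            # later slices: transit network B2
                prev = slices[t - 1][node]
                p *= transition[node][prev][value]
    return p

# Example: two slices in which the scanning stage s switches on at t = 2.
print(joint_probability([{"s": 0, "e": 0, "b": 0}, {"s": 1, "e": 0, "b": 0}]))
```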


If the model is divided into a set of non-observable variables (V_t) and a set of recorded variables (U_t), then expression (1) should be supplemented by expression (2), which specifies not only the initial state model but also the initial observation model. The redundancy problem in analyzing the traffic properties of the designed test network was solved by analyzing the informative characteristics of the traffic. In the course of this analysis, based on publications [17-20], it was proposed to apply the information gain criterion I(V, U). According to this criterion, the information gain I(V, U) of one attribute with respect to another (e.g., line 3 as attribute V and line 23 as attribute U) indicates how much the uncertainty about the values of U decreases when the values of V are known. Hence, I(V, U) = H(U) - H(U|V), where H(U) and H(U|V) are the entropy of attribute U and the conditional entropy of U given V, respectively. Since U and V are both discrete and take values from the sets {u_1, ..., u_k} and {v_1, ..., v_l}, respectively, the entropy of attribute U was determined as follows:

H(U) = -\sum_{i=1}^{k} P(U = u_i) \cdot \log_2 P(U = u_i).    (3)

The conditional entropy can be found as:

H(U \mid V) = \sum_{j=1}^{l} P(V = v_j) \cdot H(U \mid V = v_j).    (4)

Because the calculation of I(V, U) is performed for discrete values, the values from the sets {u_1, ..., u_k} and {v_1, ..., v_l} had to be discretized. In the test experiments, the method of equal-frequency intervals was used for discretization: the value space of an attribute is divided into an arbitrary number of partitions, each containing the same number of data points. The application of this method allowed us to achieve a better classification of data values. The information gain I(V, U) depends on the number of values of the corresponding attributes, which can bias the selection towards attributes with many values. More informative attributes can therefore be selected using the gain ratio [23]:

G(U \mid V) = \frac{I(U, V)}{H(V)} = \frac{H(U) - H(U \mid V)}{H(V)}.    (5)

The gain ratio G(U|V) takes into account not only the amount of information needed to represent the result in the DSS but also H(V), the information needed to separate the data by the current attribute (V). Consider the following example of designing a dynamic Bayesian network for the knowledge base of the decision support system. We will proceed with the following
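The following sketch shows one way the quantities in (3)-(5) could be computed for two discretized traffic attributes; the helper functions and the sample attribute values are hypothetical and serve only to illustrate the criterion.

```python
import math
from collections import Counter

def entropy(values):
    """H(U) = -sum P(U=u) log2 P(U=u), Eq. (3)."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def conditional_entropy(u, v):
    """H(U|V) = sum_j P(V=v_j) * H(U | V=v_j), Eq. (4)."""
    n = len(v)
    h = 0.0
    for value, count in Counter(v).items():
        subset = [ui for ui, vi in zip(u, v) if vi == value]
        h += (count / n) * entropy(subset)
    return h

def gain_ratio(u, v):
    """G(U|V) = I(U,V) / H(V) = (H(U) - H(U|V)) / H(V), Eq. (5)."""
    return (entropy(u) - conditional_entropy(u, v)) / entropy(v)

# Toy discretized attributes (e.g. V ~ 'service', U ~ 'count' after binning).
v = ["http", "http", "ftp", "ftp", "smtp", "http"]
u = ["low", "low", "high", "high", "low", "high"]
print(gain_ratio(u, v))
```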


assumptions. Typically, an intrusion into the network is implemented as a three-stage scheme. In the first stage, the attacker performs a network scan (S). In the second stage, the impact on vulnerabilities of the information and communication network occurs (E). At the final, third stage, the attacker seeks to gain access to the information and communication network, in particular through a backdoor (B). Modern attacks most often occur remotely, and attackers do not have complete information about the attacked network; the attacker therefore needs to collect as much data as possible about the target, since otherwise all known vulnerabilities would have to be tried, which is a rather lengthy procedure. The network scanning process certainly leaves its mark on the network traffic. Once the attacker has gained this information, they are able to focus more specifically on the potential vulnerabilities of network devices, services, operating systems and application software; in this case of impact on the network, the traffic changes accordingly. The sequence of attacker actions has already been described in sufficient detail elsewhere, so without delving into the techniques of the invasion we focus on the construction of a dynamic Bayesian network that describes a probabilistic model of network states at the different stages of the invasion. Note that if the stages (S), (E), (B) were not detected by protection services, they are considered hidden. Fig. 1 shows a general DBN model for the invasion stages (S), (E), (B); the model shows two slices that correspond to the invasion process.

Fig. 1. General DBN model for the invasion of S, E, B

Fig. 2. An example of the relationship between the observed parameters (lines 3–6, 12, 25, 29, 30, 38, 39 of Table 1)


The invasion stages are hidden. Accordingly, it is necessary to collect statistics on the observed network traffic parameters (Table 1). An example of the relationship between the observed parameters (lines 3-6, 12, 25, 26, 29, 30, 38, 39 of Table 1) is shown in Fig. 2. Having a state graph of the attacked network, one can describe it by the following model:

P(V(n) \mid Pa(V(n))) = \prod_{i=1}^{3} P(v_i(n) \mid Pa(v_i(n))),    (6)

\prod_{i=1}^{3} P(v_i(n) \mid Pa(v_i(n))) = P(s(n) \mid s(n-1)) \cdot P(e(n) \mid e(n-1), s(n-1)) \cdot P(b(n) \mid b(n-1), e(n-1)).    (7)

For example, for the Bayesian network variant shown in Fig. 2, the model that describes the relationship between the observed variables of the network traffic and the probabilities of transition between the states of the graph vertices looks as follows:

P(U(n) \mid Pa(U(n))) = \prod_{j=1}^{11} P\left(u_j^N(n) \mid Pa\left(u_j^N(n)\right)\right),    (8)

\prod_{j=1}^{11} P\left(u_j^N(n) \mid Pa\left(u_j^N(n)\right)\right) = \prod_{j=1}^{4} P\left(u_j^N(n) \mid Pa(s(n))\right) \cdot \prod_{j=5}^{8} P\left(u_j^N(n) \mid Pa(e(n))\right) \cdot \prod_{j=9}^{11} P\left(u_j^N(n) \mid Pa(b(n))\right),    (9)

\prod_{j=1}^{4} P\left(u_j^N(n) \mid Pa(s(n))\right) = P\left(u_1^{3}(n) \mid s(n)\right) \cdot P\left(u_2^{4}(n) \mid s(n)\right) \cdot P\left(u_3^{5}(n) \mid s(n)\right) \cdot P\left(u_4^{6}(n) \mid s(n)\right),    (10)

\prod_{j=5}^{8} P\left(u_j^N(n) \mid Pa(e(n))\right) = P\left(u_5^{25}(n) \mid e(n)\right) \cdot P\left(u_6^{26}(n) \mid e(n)\right) \cdot P\left(u_7^{38}(n) \mid e(n)\right) \cdot P\left(u_8^{39}(n) \mid e(n)\right),    (11)

\prod_{j=9}^{11} P\left(u_j^N(n) \mid Pa(b(n))\right) = P\left(u_9^{12}(n) \mid b(n)\right) \cdot P\left(u_{10}^{29}(n) \mid b(n)\right) \cdot P\left(u_{11}^{30}(n) \mid b(n)\right),    (12)

470

V. Lakhno et al.

where j = 1, ..., 11 is the index of the variables observed for the network state graph (see Fig. 2) and N = 1, ..., 41 is the number of the corresponding property (traffic parameter) in Table 1. For the previously selected informative signs of a DDoS attack (lines 2-4, 23, 26-28, 34 of Table 1), the dynamic Bayesian network therefore looks as shown in Fig. 3.

Fig. 3. An example of the relationship between the observed parameters (lines 2 - 4, 23, 26–28, 34 of Table 1)

Accordingly, for the given attack pattern, the model that describes the relationship between the observed variables of the network traffic and the probabilities of transition between the states of the graph vertices looks as follows:

P(U(n) \mid Pa(U(n))) = \prod_{j=1}^{8} P\left(u_j^N(n) \mid Pa\left(u_j^N(n)\right)\right),    (13)

\prod_{j=1}^{8} P\left(u_j^N(n) \mid Pa\left(u_j^N(n)\right)\right) = \prod_{j=1}^{3} P\left(u_j^N(n) \mid Pa(s(n))\right) \cdot \prod_{j=4}^{7} P\left(u_j^N(n) \mid Pa(e(n))\right) \cdot P\left(u_8^{34}(n) \mid Pa(b(n))\right),    (14)

\prod_{j=1}^{3} P\left(u_j^N(n) \mid Pa(s(n))\right) = P\left(u_1^{2}(n) \mid s(n)\right) \cdot P\left(u_2^{3}(n) \mid s(n)\right) \cdot P\left(u_3^{4}(n) \mid s(n)\right),    (15)

\prod_{j=4}^{7} P\left(u_j^N(n) \mid Pa(e(n))\right) = P\left(u_4^{23}(n) \mid e(n)\right) \cdot P\left(u_5^{26}(n) \mid e(n)\right) \cdot P\left(u_6^{27}(n) \mid e(n)\right) \cdot P\left(u_7^{28}(n) \mid e(n)\right),    (16)

P\left(u_8^{N}(n) \mid Pa(b(n))\right) = P\left(u_8^{34}(n) \mid b(n)\right).    (17)

Based on the above calculations, it became possible to compose Bayesian network templates and corresponding models that describe the relationships of the observable network traffic variables for the typical attacks Probe, U2R, R2L and Dos/DDos, and to simulate the probabilities of transition between the states of the graph vertices. These Bayesian network templates and the corresponding models form the basis of the decision support system knowledge base.
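To make the use of such a template concrete, the sketch below evaluates the observation model of the DDoS template in the spirit of Eqs. (13)-(17): each of the eight selected features contributes one factor conditioned on the hidden stage it is attached to. The likelihood tables, feature values and stage assignment are invented for illustration and are not taken from the paper's knowledge base.

```python
# Hypothetical discretized likelihoods P(u_j = value | stage active / inactive).
likelihood = {
    "protocol_type": {"tcp": (0.6, 0.4)},
    "service":       {"http": (0.5, 0.45)},
    "src_bytes":     {"high": (0.7, 0.2)},
    "count":         {"high": (0.8, 0.1)},
    "same_srv_rate": {"high": (0.75, 0.3)},
    "diff_srv_rate": {"low": (0.7, 0.4)},
    "srv_count":     {"high": (0.8, 0.15)},
    "dst_host_same_srv_rate": {"high": (0.7, 0.2)},
}
# Stage each feature is attached to in the template: scan (s), exploit (e), backdoor (b).
stage_of = {"protocol_type": "s", "service": "s", "src_bytes": "s",
            "count": "e", "same_srv_rate": "e", "diff_srv_rate": "e",
            "srv_count": "e", "dst_host_same_srv_rate": "b"}

def observation_probability(observed, stages):
    """P(U(n) | Pa(U(n))) as the product over the 8 selected features."""
    p = 1.0
    for feature, value in observed.items():
        p_active, p_inactive = likelihood[feature][value]
        p *= p_active if stages[stage_of[feature]] else p_inactive
    return p

observed = {"protocol_type": "tcp", "service": "http", "src_bytes": "high",
            "count": "high", "same_srv_rate": "high", "diff_srv_rate": "low",
            "srv_count": "high", "dst_host_same_srv_rate": "high"}
print(observation_probability(observed, {"s": 1, "e": 1, "b": 0}))
```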

5 Computational Experiments

Below we consider the results of testing the Bayesian network templates for the decision support system knowledge base (see Fig. 2 and 3). The test samples included 30 entries for each template, and testing was carried out on a test network (see Fig. 1). The PC and EM algorithms [15, 16, 21] were used to train the networks. The experimental results are shown in Fig. 4 and 5. The graphs show the results of modeling the probabilities of correct detection and interpretation of Dos/DDos attacks (errors of the 1st and 2nd kind, respectively) for networks trained with the PC algorithm (line 1) and the EM algorithm (line 2). The analysis of the simulation results for the obtained Bayesian networks (see Fig. 2, 3) was carried out by estimating errors of the 1st and 2nd kind. Of the 60 records that were used to support decision-making and did not take part in training the BN, the first 30 records are correct data used to check whether a test attack was missed (type 2 error); the remaining 30 entries were used to check for false positives (type 1 error).
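A minimal sketch of how the type 1 and type 2 error rates could be estimated from such a 30 + 30 hold-out set is given below; the labels and predictions are hypothetical.

```python
# Sketch (hypothetical labels/predictions): estimating type 1 and type 2 error
# rates for a BN template from the held-out records described above.

def error_rates(y_true, y_pred):
    """y_true/y_pred are 1 for 'attack', 0 for 'normal traffic'."""
    false_positives = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    false_negatives = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    type1 = false_positives / max(1, sum(1 for t in y_true if t == 0))
    type2 = false_negatives / max(1, sum(1 for t in y_true if t == 1))
    return type1, type2

# 30 attack records followed by 30 normal records (toy predictions).
y_true = [1] * 30 + [0] * 30
y_pred = [1] * 29 + [0] + [0] * 28 + [1, 1]
print(error_rates(y_true, y_pred))
```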


Fig. 4. Probability of correct determination of a type 1 error when interpreting a DDoS attack for BNs trained using various algorithms (1 - EM algorithm; 2 - PC algorithm)

Fig. 5. Probability of correctly detecting a DDoS-type attack (type 2 errors) for BNs trained using various algorithms (1 - EM algorithm; 2 - PC algorithm)

6 Discussion of the Results of the Computational Experiment

Figures 4 and 5 show errors of the 1st and 2nd kind, respectively. The PC and EM algorithms [15, 16, 21] confirmed their effectiveness: BN testing showed that the developed templates correctly interpret the attack with a probability of 95-96%. Like the previous series of experiments described above, these experiments confirmed that Bayesian reasoning makes it possible to determine the likely value of each component in the diagram. The laboriousness of composing templates is compensated by the possibility of reusing them without significant modification once all the developed templates become part of the decision support system knowledge base. Hypothetically, a Bayesian network can be constructed by simple enumeration of the sets of all possible acyclic models. However, as shown in [15] and [16], this approach is not optimal: with more than 7 vertices, a full search requires significant computational resources and takes a long time. Therefore, for the developed decision support system and its knowledge base, the preferable option is the preliminary creation of attack patterns, minimization of hidden variables, and elimination of uninformative variables that do not have a determining effect on the accuracy of intrusion detection. Training the Bayesian networks consists in adjusting the parameters of individual nodes for a specific task, for example, identifying a certain type of network attack. The prospect for the development of this research is the software implementation of the decision support system in high-level programming languages and the subsequent testing of the decision support system and its knowledge base on segments of real informatization object networks.


7 Further Research

Further development of the proposed method has two standalone directions. The first direction will focus on the development of specific threat simulation tools that work in an adjustable environment with realistic behavior. Since the proposed model is an artificial emulation, it cannot fully mirror the psychology and unpredictability of real threats. However, the development of these tools will provide an unlimited pool of threats that could be launched against the system and thereby provide synthetic datasets on which the models can be trained to detect those threats during their lifecycle without the system actually being under real attack. The second direction is to create a hardware platform for a threat detection system that can work in conjunction with the prediction system and will be able to block suspicious threats. The hardware will be installed right after the main gateway, passing all the data through itself. As a result of these two directions, we plan to implement the system on a real working network.

8 Conclusions

Bayesian network patterns have been developed for the computational core of the decision support system used in predicting threats and the stages of intrusion into the information and communication networks of informatization objects. The constructed BN templates allow one to operate with a variety of random variables and to determine the probability of a cyber threat or a specific stage of an invasion under given conditions. To improve the efficiency of intrusion forecasting, the network parameters were trained using the EM algorithm and the PC algorithm, as well as the available statistics for the test network. Probabilistic models for detecting network intrusions based on the use of BN are described. In contrast to existing models, the proposed approach makes it possible not only to take into account the main stages of intrusions but also to make better-grounded decisions based on both typical intrusion patterns and newly synthesized patterns. All templates and models make up the computing core of the decision support system for intrusion detection. The effectiveness of the developed models was tested on test samples that were not used in training. The results obtained indicate the feasibility of using the EM algorithm to obtain high-quality recognition of cyber threats to information and communication networks.

References

1. Elshoush, H.T., Osman, I.M.: Alert correlation in collaborative intelligent intrusion detection systems - a survey. Appl. Soft Comput. 11(7), 4349–4365 (2011) 2. Shenfield, A., Day, D., Ayesh, A.: Intelligent intrusion detection systems using artificial neural networks. ICT Express 4(2), 95–99 (2018)


3. Rees, L.P., Deane, J.K., Rakes, T.R., Baker, W.H.: Decision support for Cybersecurity risk planning. Decis. Support Syst. 51(3), 493–505 (2011) 4. Akhmetov, B., Lakhno, V., Boiko, Y., & Mishchenko, A.: Designing a decision support system for the weakly formalized problems in the provision of cybersecurity. Eastern-Eur. J. Enterp. Technol. (1(2)), 4–15 (2017) 5. Fielder, A., Panaousis, E., Malacaria, P., Hankin, C., Smeraldi, F.: Decision support approaches for cybersecurity investment. Decis. Support Syst. 86, 13–23 (2016) 6. Atymtayeva, L., Kozhakhmet, K., Bortsova, G.: Building a knowledge base for expert system in information security. In: Chapter Soft Computing in Artificial Intelligence of the series Advances in Intelligent Systems and Computing, vol. 270, pp. 57–76 (2014) 7. Dua S., Du, X.: Data Mining and Machine Learning in Cybersecurity, p. 225. CRC Press (2016) 8. Buczak, A.L., Guven, E.: A Survey of data mining and machine learning methods for cyber security intrusion detection. IEEE Commun. Surv. Tutor. 18(2), 1153–1176 (2016) 9. Zhang, L., Yao, Y., Peng, J., Chen, H., Du, Y.: Intelligent information security risk assessment based on a decision tree algorithm. J. Tsinghua Univ. Sci. Technol. 51(10), 1236–1239 (2011) 10. Ben-Asher, N., Gonzalez, C.: Effects of cybersecurity knowledge on attack detection. Comput. Hum. Behav. 48, 51–61 (2015) 11. Goztepe, K.: Designing fuzzy rule based expert system for cyber security. Int. J. Inf. Secur. Sci. 1(1), 13–19 (2012) 12. Gamal, M.M., Hasan, B., Hegazy, A.F.: A Security analysis framework powered by an expert system. Int. J. Comput. Sci. Secur. (IJCSS) 4(6), 505–527 (2011) 13. Chang, L.-Y., Lee, Z.-J.: Applying fuzzy expert system to information security risk Assessment – a case study on an attendance system. In: International Conference on Fuzzy Theory and Its Applications (iFUZZY), pp. 346–351 (2013) 14. Kanatov, M., Atymtayeva, L., Yagaliyeva, B.: Expert systems for information security management and audit, Implementation phase issues, Soft Computing and Intelligent Systems (SCIS). In: Joint 7th International Conference on and Advanced Intelligent Systems (ISIS), pp. 896–900 (2014) 15. Lakhno, V.A., Lakhno, M.V., Sauanova, K.T., Sagyndykova, S.N., Adilzhanova, S.A.: Decision support system on optimization of information protection tools placement. Int. J. Adv. Trends Comput. Sci. Eng. 9(4), 4457–4464 (2020) 16. Xie, P., Li, J. H., Ou, X., Liu, P., Levy, R.: Using Bayesian networks for cybersecurity analysis. In: 2010 IEEE/IFIP International Conference on Dependable Systems & Networks (DSN), pp. 211–220. IEEE, June 2010 17. Shin, J., Son, H., Heo, G.: Development of a cybersecurity risk model using Bayesian networks. Reliab. Eng. Syst. Saf. 134, 208–217 (2015) 18. Özgür, A., Erdem, H.: A review of KDD99 dataset usage in intrusion detection and machine learning between 2010 and 2015. PeerJ Preprints, 4, e1954v1 (2016) 19. Elkan, C.: Results of the KDD’99 classifier learning. ACM SIGKDD Explorat. Newsl. 1(2), 63–64 (2000) 20. Lakhno, V.A., Kravchuk, P.U., Malyukov, V.P., Domrachev, V.N., Myrutenko, L.V., Piven, O.S.: Developing of the cybersecurity system based on clustering and formation of control deviation signs. J. Theor. Appl. Inf. Technol. 95(21), 5778–5786 (2017)


21. Lakhno, V.A., Hrabariev, A.V., Petrov, O.S., Ivanchenko, Y.V., Beketova, G.S.: Improving of information transport security under the conditions of destructive influence on the information-communication system. J. Theor. Appl. Inf. Technol. 89(2), 352–361 (2016) 22. Heckerman, D.: A tutorial on learning with Bayesian networks. Technical report, Microsoft Research, Redmond (1995). 58 p. 23. Raileanu, L.E., Stoffel, K.: Theoretical comparison between the Gini index and information gain criteria. Ann. Math. Artif. Intell. 41(1), 77–93 (2004) 24. Alhendawi, K.M., Al-Janabi, A.A.: An intelligent expert system for management information system failure diagnosis. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing & Optimization. ICO 2018. Advances in Intelligent Systems and Computing, vol. 866. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-00979-3_26

Developing an Intelligent System for Recommending Products

Md. Shariful Islam1, Md. Shafiul Alam Forhad1(&), Md. Ashraf Uddin2, Mohammad Shamsul Arefin1, Syed Md. Galib3, and Md. Akib Khan4

1 Department of CSE, CUET, Chattogram, Bangladesh
[email protected], {forhad0904063,sarefin}@cuet.ac.bd
2 Information and Communication Technology Division, Dhaka, Bangladesh
[email protected]
3 Department of CSE, JUST, Jessore, Bangladesh
[email protected]
4 Department of EEE, AUST, Dhaka, Bangladesh
[email protected]

Abstract. When it comes to deciding which product to buy, knowing the overall opinion of other users is very helpful. Evaluating this from user ratings is simple, and although a machine can generate recommendations merely by aggregating user ratings, it is sometimes difficult to obtain accurate and efficient results in this way. Evaluating users' comments, on the other hand, usually means assigning humans to read all the comments one by one and then decide how useful the product seems; this is a tedious process that wastes valuable time and resources when there is no way of automating it. Likewise, selecting the most valuable product from an enormous number of reviews is a hectic task for consumers. Considering all of the above, we have developed a machine learning based intelligent system which not only evaluates ratings from users' reviews but also indicates which products are popular simply by analyzing those reviews.

Keywords: Sentiment analysis · Machine learning · Random forest classifier · K-nearest neighbors · Support vector machine · Deep learning

1 Introduction

With the ever-increasing growth of the internet, there are a huge number of websites containing continuously growing amounts of user-generated text from all around the world. Depending on the type of website, a large share of those texts are reviews of various types of products. If processed properly, these texts can be used to understand public opinion on different products so that manufacturing companies can adjust their products accordingly. They can also help customers make better decisions when buying products. While it is easy to gauge the usefulness of a product from its average rating, it becomes


difficult when ratings are not provided on the website. Finding the best product then involves reading a large number of reviews on several items before finally making a decision. In either case, performing sentiment analysis on online reviews of various products can save both companies and customers valuable time. Supervised learning requires suitable labels to produce efficient results; however, the polarity of an individual text review is highly unlikely to be available with the text itself. Even when such polarity labels are available, they are either machine generated or human assigned, and in both cases they may not reflect the reviewer's original intention. The rating provided with a review, by contrast, can be considered a valid label, since a rating is the reviewer's own quantitative representation of the review. Hence, in this paper we propose an intelligent system that predicts ratings for different products and also evaluates which product is best depending on the users' reviews. After predicting ratings with different machine learning models, various performance measurements were computed to compare and evaluate their overall efficiency. The rest of the paper is organized as follows: Sect. 2 presents the literature review related to our work; Sect. 3 gives a detailed description of our proposed system; Sect. 4 presents the experimental results; finally, a conclusion is provided in Sect. 5.

2 Literature Review

Some research has previously been done on similar topics. For example, Ahmed et al. proposed a deep learning framework in their paper [1] to predict ratings from online reviews in the Yelp and Amazon datasets. They managed to achieve very good performance using various models; however, their analysis of the review text appears to make little use of linguistic rules. Due to the ever-growing availability of online reviews, it becomes difficult to draw chronological insights from them. For this reason, chronological sentiment analysis using dynamic and temporal clustering was presented in the paper [1] by Murtadha et al., who also considered consistency in order to assess the performance of their algorithms and utilized a unified feature set of window sequential clustering to enhance the ensemble learning. Murtadha et al. also proposed, in another paper [2], an automated unsupervised method of sentiment analysis involving contextual analysis and unsupervised ensemble learning, with SentiWordNet deployed in both phases. The research datasets include Australian Airlines and HomeBuilders reviews for domain performance, and their algorithm seems to be the most accurate for some datasets even though Support Vector Machine (SVM) is the most accurate on average; applying deeper contextual analysis might have resulted in a higher accuracy score. Long Mai and Bac Le took a slightly different approach towards rating prediction of product reviews: in their paper [3], they proposed a framework for the automatic collection and processing of YouTube comments on various products, which are then used for sentiment analysis of those products. Various models


have been used in their work, with accuracy ranging from 83% to 92%; however, the domain of their dataset needs to be expanded to make the models more effective in the real world. In [4], a novel approach using aspect-level sentiment detection was presented. The authors tested their model on Amazon customer reviews; their cross-validation score appears to be around 87%, which is about 10% less than the score claimed in the paper, and more challenging areas such as spam and sarcasm have not been addressed in their research. Rishith et al. introduce in their paper [5] a sentiment analysis model that uses Long Short Term Memory (LSTM) for analyzing movie reviews. Their model achieves very high accuracy on both the training and the test data, but it might perform even better if applied to more varied datasets. The paper [6] reviews recent studies that have used deep learning to address sentiment analysis problems such as sentiment polarity; it reports that combining deep learning techniques with word embeddings performs better than combining them with Term Frequency-Inverse Document Frequency (TF-IDF), and a comparative study is conducted on the results obtained for the different models and input features. In [7], the authors applied machine learning algorithms to detect the polarity of reviews; their models achieve very high accuracy with Naïve Bayes, but the algorithm cannot detect from a review why people like or dislike a product. In [8], Isolation Forest was used for sentiment analysis on data collected from Amazon for products of different brands; while the project flow diagram shows the sentiment detection process, no prediction measurement score is reported in the paper. The use of convolutional neural networks is also common in text-based prediction: in [9], an approach was proposed for predicting the helpfulness of online reviews in the form of a helpfulness score using a Convolutional Neural Network (CNN). However, the dataset is limited to Amazon and Snapdeal only, and the authors do not appear to use any model for extracting metadata information either. Mamtesh et al. analyzed customer reviews for a number of movies using K-Nearest Neighbors (KNN), Logistic Regression and Naïve Bayes in their paper [10]; they were able to predict the sentiment of the reviews with very high accuracy and might achieve even higher accuracy with a hybrid approach. In [11], the author analyzes user sentiments of Windows Phone applications and classifies their polarity using a Naïve Bayes model; the model achieves high accuracy but can classify into only 2 categories, positive and negative. Rintyarna et al. proposed a method for extracting features from texts by focusing on word semantics in their paper [12], using an extended Word Sense Disambiguation method; their experiments indicate that the proposed method boosts the performance of the ML algorithms considerably. In [13], a data pre-processing method was proposed together with three machine learning techniques, Support Vector Machine (SVM), Naïve Bayes Multinomial (NBM) and C4.5, for sentiment analysis of both English and Turkish texts.


They concluded that although SVM worked well for the English language, NBM performed better for Turkish. In [14], an appropriate and accurate algorithm for classifying movie reviews was identified from among several machine learning based algorithms. In [15], a hybrid sentiment analysis method was proposed for analyzing Amazon Canon camera reviews in order to classify them into 2 polarity classes, positive and negative; compared to SVM, the proposed approach achieves very high scores on several performance measurements, including accuracy, precision and recall. In [16], Haque et al. used supervised learning methods on large-scale Amazon product reviews; their models achieve performance of around 86% to 94%, varying with the model and the dataset, and their main limitation is the scarcity of a standard dataset for 10-fold cross-validation. Sentiment analysis is a demanding task comprising natural language processing, data mining, and machine learning, and deep learning models are often combined with these to tackle the challenge; in [17], the execution of different DL techniques for sentiment analysis is emphasized. In [18], the assessment was carried out using 10-fold cross-validation, and precision was estimated with a confusion matrix and the Receiver Operating Characteristic (ROC) curve; the results showed an increase in SVM accuracy from 82.00% to 94.50%, from which it can be concluded that Support Vector Machine based Particle Swarm Optimization provides more accurate classification of smartphone product reviews.

3 Proposed Methodology

The proposed methodology consists of the following steps for polarity prediction through sentiment analysis. Figure 1 shows a block diagram of the entire process.

3.1 Dataset Description

Two different datasets have been used. The first dataset contains over 400,000 reviews of unlocked phones for sale on the Amazon website; 40,000 of these reviews have been used for this paper. This dataset has 6 columns: Product Name, Brand Name, Price, Rating, Reviews and Review Votes. It was collected from [19] in CSV format. The second dataset has more than 34,000 reviews provided by Datafiniti on Amazon products such as Kindle E-readers and the Echo smart speaker. This dataset contains 24 columns, including id, dateAdded, dateUpdated, primaryCategories, imageURLs, keys, reviews.id, reviews.rating, reviews.text, reviews.title, etc. However, it does not contain the type of the products, which has been added to the dataset manually. As its reviews are mostly on batteries and tablets, only these two product types have been used for this paper. This dataset [20] was also acquired in CSV format.

3.2 Pre-processing

This step involves preparing the data from the dataset for model training. It consists of several sub-steps.

3.2.1 Initial Database Based Pre-processing
Since the first dataset is very large (over 400,000 reviews), only its first 40,000 reviews have been selected, while 26,467 reviews were selected from the second. Because the product pages are written by the sellers, almost all of the product names in the first dataset include the product description, so the same product ended up with multiple different unique product names. For this reason, the names of products with such issues were replaced by the common name of the product in the first dataset, which allows better product suggestion. Moreover, some products have a very small number of user reviews, indicating very low popularity among users. With such a low number of reviews, a handful of high ratings (either because only the satisfied customers ended up reviewing the products or because of fake reviews with high ratings) would give these products higher average ratings than most of the actually popular products. This is why products with fewer than 25 reviews were filtered out. For the second dataset, an extra column was added before filtering in order to identify which product belongs to which product type; since this dataset is smaller, products with fewer than 15 reviews were filtered out. Next, only the reviews of the most common product types were selected for machine learning. The next part is text cleaning.

3.2.2 Cleaning Text
This step transforms the text into a much simpler form so that it can easily be turned into numbers. It involves the following sub-steps.

3.2.2.1 Punctuation Removal
All punctuation symbols are removed from the texts, since they usually do not have any significant impact on the final rating.

3.2.2.2 Normalization
All uppercase letters in the texts are replaced by their lowercase counterparts for further simplicity.

3.2.2.3 Stop-Word Removal
Some words in every language are so common that their presence in a sentence does not influence its meaning very much; articles, prepositions and conjunctions are typical stop-words. Since their influence on the meaning of a sentence is very small, removing them results in a more condensed text.


3.2.2.4 Lemmatization
Lemmatization resolves words to their dictionary form; a dictionary search is usually used to lemmatize words properly into their dictionary-accurate base forms. The purpose of lemmatization is to make sure that the algorithm can recognize a word in its different forms. This also reduces the word index of the tokenizer and hence makes the processing more efficient.
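A possible implementation of the cleaning steps of Sect. 3.2.2 using NLTK is sketched below; the paper does not name the libraries it uses, so this is only an illustrative assumption.

```python
import string
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords", quiet=True)
nltk.download("wordnet", quiet=True)

stop_words = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def clean_review(text):
    text = text.translate(str.maketrans("", "", string.punctuation))  # 3.2.2.1
    tokens = text.lower().split()                                     # 3.2.2.2
    tokens = [w for w in tokens if w not in stop_words]               # 3.2.2.3
    return " ".join(lemmatizer.lemmatize(w) for w in tokens)          # 3.2.2.4

print(clean_review("The batteries were working perfectly, highly recommended!"))
```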

3.2.3 Final Pre-processing and Training
In this step, the labels are adjusted and the dataset is split in the ratio of 3:1. The training data is then used to fit the tokenizer, and all out-of-vocabulary words are represented by the "<OOV>" symbol. Next, the texts of both the training and test data are converted to sequences according to the tokenizer. Finally, all the reviews are padded in order to ensure equal length.
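A sketch of this final pre-processing, assuming a Keras tokenizer and scikit-learn for the split, is shown below; the maximum sequence length and the toy reviews are illustrative.

```python
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

texts = ["great phone", "battery died quickly", "tablet screen is sharp"]
ratings = [5, 1, 4]

# 3:1 split before fitting the tokenizer, so the vocabulary comes from training data only.
X_train, X_test, y_train, y_test = train_test_split(
    texts, ratings, test_size=0.25, random_state=42)

tokenizer = Tokenizer(oov_token="<OOV>")
tokenizer.fit_on_texts(X_train)

train_seq = pad_sequences(tokenizer.texts_to_sequences(X_train),
                          maxlen=30, padding="post", truncating="post")
test_seq = pad_sequences(tokenizer.texts_to_sequences(X_test),
                         maxlen=30, padding="post", truncating="post")
print(train_seq.shape, test_seq.shape)
```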

3.3 Training

In this step, a K-Nearest Neighbors (KNN) classifier, a Random Forest Classifier (RFC) and a Support Vector Machine (SVM) classifier are trained using the padded training data as features and the ratings as labels. A deep learning classifier is also developed for the same purpose. For the second dataset, rather than being trained separately, the Machine Learning (ML) models are trained on 75% of the entire combined dataset of battery reviews and tablet reviews for better generalization.
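A sketch of how the three classical classifiers could be trained on padded review sequences is given below; the random data stands in for the tokenized reviews and the hyperparameters are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.integers(0, 1000, size=(300, 30))   # 300 padded reviews, length 30
y_train = rng.integers(1, 6, size=300)            # star ratings 1..5 as labels

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "RFC": RandomForestClassifier(n_estimators=100, random_state=42),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "train accuracy:", model.score(X_train, y_train))
```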

3.4 Testing and Rating Prediction

When training is complete, the classifiers are tested against the padded test data for performance measurement. Even though the ML models are trained on 75% of the combined dataset, they are tested individually against the battery and tablet test sets. The values predicted by the classifiers are stored as the predicted ratings.

3.5 Product Recommendation

This step is performed once for each classifier after its training session is complete. The average predicted rating for each product is calculated separately, and these average ratings are used for recommending products.
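The sketch below illustrates this recommendation step with pandas; the column names and sample predictions are assumed.

```python
import pandas as pd

predictions = pd.DataFrame({
    "product_name": ["Phone A", "Phone A", "Phone B", "Phone B", "Phone C"],
    "predicted_rating": [5, 4, 3, 4, 5],
})
# Average the predicted ratings per product and rank products by that average.
top_products = (predictions.groupby("product_name")["predicted_rating"]
                .mean()
                .sort_values(ascending=False)
                .head(5))
print(top_products)
```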

[Block diagram of Fig. 1: the Storage Module (Unlocked Phone Reviews, Amazon Product Reviews) feeds Dataset Splitting (Training Set, Test Set); Pre-Processing (Eliminating Unnecessary Reviews, Sentence Splitting, Stopword Removal, Special Character Removal, Normalization, Lemmatization) is followed by Tokenization, Text Sequencing and Padding, and then by Training, Testing, Rating Prediction and Recommendation Generation.]

Fig. 1. An intelligent system for recommending products

4 Experimental Results

The datasets and the number of reviews used for machine learning for the different types of products are shown in Table 1.

Table 1. Review count for different types of products

Product types      Dataset no.   Number of reviews
Unlocked Phones    1             40000
Batteries          2             12071
Tablets            2             14396


Table 2. Accuracy, precision and recall for several classifiers on different products

Classifiers   Score       Unlocked Phones   Batteries   Tablets
KNN           Accuracy    82.75%            69.18%      84.7%
              Precision   78.67%            47.26%      67.54%
              Recall      76.37%            41.12%      73.3%
RFC           Accuracy    86.63%            77.5%       88.86%
              Precision   90.35%            77.92%      95.81%
              Recall      77.44%            40.12%      73.57%
SVM           Accuracy    81.80%            78.66%      89.52%
              Precision   91.93%            89.48%      97.26%
              Recall      72.27%            36.17%      74.57%
DL Model      Accuracy    86.90%            77.80%      87.21%
              Precision   80.86%            53.28%      78.34%
              Recall      79.54%            47.22%      77.38%

From Table 1, we can see that 40,000 unlocked phone reviews from the first dataset have been used, while the battery and tablet reviews both come from the second dataset and number 12,071 and 14,396 respectively. The scores used for measuring the performance of the different classifiers are accuracy, precision, and recall:

Accuracy = (Number of Correct Predictions) / (Total Number of Predictions),

Precision = TP / (TP + FP),

Recall = TP / (TP + FN).

Here, TP, FP & FN denote True Positive, False Positive & False Negative respectively. The overall precision for a multi-class classification is the average of the precision scores of every individual class. The same goes for the overall recall score as well. Table 2 shows different performance scores for certain classifiers on several types of products. Figure 2 shows a graphical representation of accuracy, precision and recall for several classifiers on different product types. From this figure, it can be seen that Deep Learning based classifier has both the highest accuracy and the highest recall score for unlocked phone reviews, with an accuracy score of 86.90% and a recall score of 79.54%. However, Support Vector Machine (SVM) has the highest precision score of 91.93% for the same reviews.
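These scores can be computed, for example, with scikit-learn, where macro averaging matches the per-class averaging described above; the toy labels below are illustrative.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [5, 4, 1, 3, 5, 2, 4, 5]        # actual ratings (toy values)
y_pred = [5, 4, 2, 3, 5, 2, 5, 5]        # ratings predicted by a classifier

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, average="macro", zero_division=0))
print("Recall   :", recall_score(y_true, y_pred, average="macro", zero_division=0))
```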


Fig. 2. Graphical presentation of performance measurement scores of different classifiers

The second product type is batteries. For these reviews, Support Vector Machine (SVM) has both the highest accuracy and the highest precision, with an accuracy of 78.66% and a precision of 89.48%, while the Deep Learning (DL) based classifier has the highest recall value of 47.22%. Finally, tablets are the last product type. For the selected tablet reviews, Support Vector Machine again has the highest accuracy and precision scores, of 89.52% and 97.26% respectively, while the Deep Learning (DL) based model again has the highest recall score, 77.38%.


Table 3 shows the training time for all 4 classifiers. From it, the classifiers sorted in ascending order of training time are KNN, RFC, SVM and DL for either dataset, with KNN's training time being less than 1 second in both cases and the DL model's training time being greater than 300 seconds in both cases. It can be noted that the training time is higher on the 2nd dataset for all classifiers despite the 2nd dataset being smaller; this is because the 1st dataset is very large and, compared to the 2nd dataset, less training was required for good generalization.

Table 3. Training time (in seconds) for different classifiers

Classifier   Dataset 1 (Unlocked Phones)   Dataset 2 (Batteries and Tablets)
KNN          0.0568621                     0.0903794
RFC          0.8510487                     1.2277246
SVM          41.0242428                    52.5817226
DL Model     372.0272711                   519.7536614

Figure 3 shows the top 5 phones according to the average of their individual ratings. This can be used as a benchmark for comparing the effectiveness of the recommendation systems implemented with the ML models.

Fig. 3. Graphical presentation of the average of the actual ratings of Top 5 phones

From Fig. 4, we can see that the DL (Deep Learning) model managed to predict 4 items from the top 5 Phones based on the average of the average ratings (Fig. 3) with mostly the same order.


Fig. 4. Graphical presentation of average of predicted scores for Top 5 phones recommended by different machine learning models

Similar to Fig. 3, Fig. 5 also shows the average of the actual ratings, but for the batteries. From Fig. 6, we can see that all four machine learning models predicted the order correctly. However, both KNN and DL models were able to predict the average close to the actual values.


Fig. 5. Graphical presentation of the average of the actual ratings of the types of batteries

Fig. 6. Graphical presentation of average of predicted scores for the types of batteries

Figure 7 shows a graphical representation of top 5 tablets based on the average of their actual ratings.


Fig. 7. Graphical presentation of the average of the actual ratings of Top 5 tablets

From Fig. 8, it can be seen that KNN model was able to predict 4 items from Fig. 7, with the order and the values also being mostly accurate.

Fig. 8. Graphical presentation of average of predicted scores for top 5 tablets recommended by different machine learning models


5 Conclusion

In this paper, we have developed an intelligent system which predicts ratings from product reviews and provides recommendations to customers. From the experimental results, SVM gives the best results in terms of the performance scores. However, considering the training time of each classifier, Random Forest (RF) is more efficient than SVM, with performance very close to SVM for the three product types. Overall, the performance scores of all four classifiers are quite good. The Bag of Words representation tends to increase the dimensionality of the feature matrix as the vocabulary grows, and since the context of the words cannot be recovered from it, semantic meaning is often not preserved. In order to limit these issues, we decided to post-truncate and post-pad the review texts with a maximum word length of 15 and 30 for Dataset 1 and Dataset 2, respectively. In future work, we will try to improve the performance of our system by using rule-based pre-processing techniques.

References 1. Al-sharuee, M.T.: Sentiment analysis : dynamic and temporal clustering of product reviews (2020) 2. Al-sharuee, M.T., Liu, F., Pratama, M.: Sentiment analysis: an automatic contextual analysis and ensemble clustering approach and comparison. Data Knowl. Eng. (2018) 3. Mai, L., Le, B.: Joint sentence and aspect-level sentiment analysis of product comments. Ann. Oper. Res. (2020) 4. Nandal, N., Tanwar, R., Pruthi, J.: Machine learning based aspect level sentiment analysis for Amazon products. Spat. Inf. Res. (2020) 5. Rishith, U.T., Rangaraju, G.: Usense: a user sentiment analysis model for movie reviews by applying LSTM. Int. J. Res. Eng. Appl. Manag. 01, 369–372 (2020) 6. Dang, N.C., Moreno-García, M.N., De la Prieta, F.: Sentiment analysis based on deep learning: a comparative study. Electron. 9(3) (2020) 7. Jagdale, R.S., Shirsat, V.S., Deshmukh, S.N.: Sentiment Analysis on Product Reviews Using Machine Learning Techniques : Proceeding of CISC 2017 Sentiment Analysis on Product Reviews Using Machine Learning Techniques, no. January. Springer Singapore (2019) 8. Salmiah, S., Sudrajat, D., Nasrul, N., Agustin, T., Harani, N.H., Nguyen, P.T.: Sentiment Analysis for Amazon Products using Isolation Forest (6), 894–897 (2019) 9. Saumya, S., Singh, J.P., Dwivedi, Y.K.: Predicting the helpfulness score of online reviews using convolutional neural network. Soft Comput., no. BrightLocal 2016 (2019) 10. Mamtesh, M., Mehla, S.: Sentiment analysis of movie reviews using machine learning classifiers. Int. J. Comput. Appl. 182(50), 25–28 (2019) 11. Normah, N.: Naïve Bayes algorithm for sentiment analysis windows phone store application reviews. SinkrOn 3(2), 13 (2019) 12. Rintyarna, B.S., Sarno, R., Fatichah, C.: Semantic features for optimizing supervised approach of sentiment analysis on product reviews. Computers 8(3), 55 (2019) 13. Parlar, T., Özel, S.A., Song, F.: Analysis of data pre-processing methods for sentiment analysis of reviews. Comput. Sci. 20(1), 123–141 (2019) 14. Dwivedi, S., Patel, H., Sharma, S.: Movie reviews classification using sentiment analysis. Indian J. Sci. Technol. 12(41), 1–6 (2019)


15. Chhabra, I.K., Prajapati, G.L.: Sentiment analysis of Amazon canon camera review using hybrid method. Int. J. Comput. Appl. 182(5), 25–28 (2018) 16. Haque, T.U., Saber, N.N., Shah, F.M.: Sentiment analysis on large scale amazon product reviews. In: 2018 IEEE International Conference Innovation Research Deviation, no. May, pp. 1–6 (2018) 17. Kalaivani, A., Thenmozhi, D.: Sentimental Analysis using Deep Learning Techniques, pp. 600–606 (2018) 18. Wahyudi, M., Kristiyanti, D.A.: Sentiment analysis of smartphone product review using support vector machine algorithm-based particle swarm optimization. J. Theor. Appl. Inf. Technol. 91(1), 189–201 (2016) 19. Amazon Reviews: Unlocked Mobile Phones | Kaggle. https://www.kaggle.com/ PromptCloudHQ/amazon-reviews-unlocked-mobile-phones 20. Consumer Reviews of Amazon Products | Kaggle. https://www.kaggle.com/datafiniti/ consumer-reviews-of-amazon-products

Branch Cut and Free Algorithm for the General Linear Integer Problem

Elias Munapo(&)

Department of Statistics and Operations Research, School of Economic Sciences, North West University, Mafikeng, South Africa
[email protected]

Abstract. The paper presents a branch, cut and free algorithm for the general linear integer problem. Like most other exact algorithms for the general linear integer problem, the proposed algorithm relies on the usual strategy of relaxing the linear integer problem and solving it to obtain a continuous optimal solution. If the solution is integer, then the optimal solution has been found; otherwise the largest basic variable in the continuous optimal solution is selected and freed of integral restrictions, and the branch and cut algorithm is then used to search for the optimal integer solution. The main and obvious challenge with branch and bound related algorithms is that the number of nodes generated to verify the optimal solution can sometimes explode to unmanageable levels. Freeing a selected variable of integral restrictions, as proposed in this paper, can significantly reduce the complexity of the general linear integer problem.

Keywords: Linear integer problem · Continuous optimal solution · Variable freeing · Branch and cut algorithm · Computational complexity · Optimality verification

1 Introduction

The general linear integer programming (LIP) problem has many important applications. These include interactive multimedia systems [1], home energy management [15], cognitive radio networks [7], mining operations [22], relay selection in secure cooperative wireless communication [11], electrical power allocation management [18, 20], production planning [4], selection of renovation actions [2], waste management, formulation and solution methods for tour conducting, and optimization of content delivery networks [16]. The main challenge with branch and bound related algorithms is that the number of nodes required to verify the optimal solution can sometimes explode to unmanageable levels. Freeing a variable of all its integral restrictions, as proposed in this paper, can significantly reduce the complexity of the general linear integer problem. In fact, the LIP problem is NP-hard; it is very difficult to solve, and heuristics are still used to approximate optimal solutions in reasonable times. An efficient, consistent exact method for the general LIP is still at large, and the coefficient matrix of the general linear integer problem is not unimodular [14]. A branch, cut and free algorithm which combines the branch and cut algorithm and


freeing of a selected variable is proposed in this paper. This algorithm is similar to [17] but differs in that it can be used for any linear integer model with any number of variables and linear constraints. On the other hand the method proposed in [17] requires calculation of variable sum limits before solving and it works for a single constrained linear integer problem and no variables are freed in the solving process.

2 The General LIP

Maximize Z = c_1 x_1 + c_2 x_2 + ... + c_n x_n,
such that:

a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n \le b_1,
a_{21} x_1 + a_{22} x_2 + ... + a_{2n} x_n \le b_2,
...
a_{m1} x_1 + a_{m2} x_2 + ... + a_{mn} x_n \le b_m.    (1a)

Minimize Z = c_1 x_1 + c_2 x_2 + ... + c_n x_n,
such that:

a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n \ge b_1,
a_{21} x_1 + a_{22} x_2 + ... + a_{2n} x_n \ge b_2,
...
a_{m1} x_1 + a_{m2} x_2 + ... + a_{mn} x_n \ge b_m.    (1b)

where a_{ij}, b_i and c_j are constants and x_j is integer for all i = 1, 2, ..., m and j = 1, 2, ..., n.
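A minimal sketch of setting up and solving a small instance of the maximization form (1a) is given below. The paper's "automated branch and bound algorithm" is not named, so PuLP with the CBC solver and the toy coefficients here are only stand-ins.

```python
import pulp

c = [5, 4]                       # objective coefficients c_j (invented)
A = [[6, 4], [1, 2]]             # constraint coefficients a_ij (invented)
b = [24, 6]                      # right-hand sides b_i (invented)

x = [pulp.LpVariable(f"x{j+1}", lowBound=0, cat="Integer") for j in range(len(c))]
prob = pulp.LpProblem("general_LIP", pulp.LpMaximize)
prob += pulp.lpSum(cj * xj for cj, xj in zip(c, x))          # objective
for row, bi in zip(A, b):
    prob += pulp.lpSum(aij * xj for aij, xj in zip(row, x)) <= bi

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([v.value() for v in x], pulp.value(prob.objective))
```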

3 Variable Integral Restriction Freeing Theorem

Given n variables in an equality constraint such as (2), one of the n variables can be freed of integral restrictions; in other words, one of the n variables does not need to be restricted to integer values:

a_1 x_1 + a_2 x_2 + ... + a_j x_j + ... + a_n x_n = b,    (2)

where a_j and b are integral constants and the unknown variables are integer for all j = 1, 2, ..., n. If all the other variables are integer, then one variable (x_j) out of the n variables does not need to be restricted to integer values.


Proof. From (2) the variable (x_j) can be made the subject, as given in (3):

a_j x_j = b - (a_1 x_1 + ... + a_{j-1} x_{j-1} + a_{j+1} x_{j+1} + ... + a_n x_n).    (3)

If a sum of integers is subtracted from an integer, then the difference is also an integer. Hence, by constraint (2), the variable x_j may be treated as a free variable.

4 Continuous Optimal Solution

The general continuous optimal tableau is given in Table 1. The arrangement of the variables is done in this way just for convenience, as any optimal continuous tableau can be arranged in many ways.

Table 1. Continuous optimal tableau.

Basic variables (x)    Non-basic variables (s)             rhs
0 0 0 … 0              ω1   ω2   ω3   …   ωm               γ
1 0 0 … 0              α11  α12  α13  …   α1m              β1
0 1 0 … 0              α21  α22  α23  …   α2m              β2
0 0 1 … 0              α31  α32  α33  …   α3m              β3
…                      …                                   …
0 0 0 … 1              αm1  αm2  αm3  …   αmm              βm

Since this is the continuous optimal tableau, (4) and (5) hold:

x_1, x_2, x_3, ..., x_m \ge 0,    (4)

\beta_1, \beta_2, \beta_3, ..., \beta_m \ge 0.    (5)

In addition, γ is the objective value and α_{ij} is a constant; both γ and α_{ij} can be either negative or positive. In this paper Z_{copt} denotes the continuous optimal solution. Other examples of general continuous optimal solutions are given in Table 2 and Table 3, and the specific continuous optimal solutions for the numerical illustrations are given in Table 4 and Table 5.

5 Surrogate Constraint

The surrogate constraint, or clique constraint, is obtained by adding all the rows of the original variables that are basic at optimality, as given in (6).

Table 2. Sub-Problem 1.

Table 3. Sub-Problem 2.

x_1 + \alpha_{11} s_1 + \alpha_{12} s_2 + \alpha_{13} s_3 + ... + \alpha_{1m} s_m = \beta_1,
x_2 + \alpha_{21} s_1 + \alpha_{22} s_2 + \alpha_{23} s_3 + ... + \alpha_{2m} s_m = \beta_2,
x_3 + \alpha_{31} s_1 + \alpha_{32} s_2 + \alpha_{33} s_3 + ... + \alpha_{3m} s_m = \beta_3,
...
x_m + \alpha_{m1} s_1 + \alpha_{m2} s_2 + \alpha_{m3} s_3 + ... + \alpha_{mm} s_m = \beta_m.    (6)

This simplifies to (7):

x_1 + x_2 + x_3 + ... + x_m + k_1 s_1 + k_2 s_2 + k_3 s_3 + ... + k_m s_m = b_c.    (7)


where

k_j = \alpha_{1j} + \alpha_{2j} + \alpha_{3j} + ... + \alpha_{mj}, \quad j = 1, 2, 3, ..., m,    (8)

b_c = \beta_1 + \beta_2 + \beta_3 + ... + \beta_m.    (9)

Since b_c is not necessarily integer, we write

b_c = I + f,    (10)

where I is the integer part and f is the fractional part. Since at optimality the non-basic variables are zero, as given in (11), i.e.

s_1 = s_2 = s_3 = ... = s_m = 0,    (11)

then (12) and (13) are valid.

Sub-Problem a:

x_1 + x_2 + x_3 + ... + x_m \le I.    (12)

Sub-Problem b:

x_1 + x_2 + x_3 + ... + x_m \ge I + 1.    (13)

In other words, we add (14) to Sub-Problem a instead of (12):

x_1 + x_2 + x_3 + ... + x_j + ... + x_m + x_{n+1} = I.    (14)

Similarly, (13) becomes (15):

x_1 + x_2 + x_3 + ... + x_j + ... + x_m - x_{n+2} = I + 1.    (15)

As a diagram, the original problem and the two sub-problems are related as shown in Fig. 1.

Fig. 1. The two initial sub-problems of an LIP (node 0: original problem; node a: x_1 + x_2 + x_3 + ... + x_j + ... + x_m + x_{n+1} = I; node b: x_1 + x_2 + x_3 + ... + x_j + ... + x_m - x_{n+2} = I + 1).


Where Node 0 is the original problem, Node a is given by Table 2, which is the original problem after adding constraint (14), and Node b is given by Table 3, which is obtained after adding constraint (15) to the original problem. The variable to be left unrestricted is selected as the basic variable satisfying (16).

β_ℓ = Max{β_1, β_2, β_3, ..., β_m}.  (16)

Justification: the larger the range of a variable, the more branches it is likely to generate, so it makes sense not to subject such a variable to an integral restriction. The equality constraint (14) is added to the continuous optimal tableau to obtain Sub-Problem a, and similarly (15) is added to the continuous optimal tableau to obtain Sub-Problem b, where the variable x_j is now unrestricted by the variable integral restriction freeing theorem. In this paper Zopt denotes the optimal integer solution.
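Using the tableau container sketched earlier (again an assumption rather than the paper's notation), rule (16) amounts to a one-line selection.

def variable_to_free(tab):
    """Rule (16): free the basic variable with the largest value beta_l."""
    return max(range(len(tab.beta)), key=lambda i: tab.beta[i])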

6 Measuring Complexity Reduction

The percentage complexity reduction (q) when using the proposed algorithm is given by (17).

q = ((R − r) / R) × 100%.  (17)

Where R is the number of branch and bound nodes required before using the proposed algorithm and r is the number of nodes required after freeing a selected variable.

6.1 Numerical Illustration 1

Maximize Z = 111x_1 + 211x_2 + 171x_3 + 251x_4 + 151x_5,
Such that:

110x_1 + 210x_2 + 115x_3 + 112x_4 − 31x_5 ≤ 7189,
50x_1 + 183x_2 + 261x_3 − 79x_4 + 259x_5 ≤ 6780,
142x_1 + 244x_2 − 140x_3 + 139x_4 + 153x_5 ≤ 2695,  (18)
224x_1 − 87x_2 + 128x_3 + 129x_4 + 133x_5 ≤ 12562,
155x_1 + 252x_2 + 258x_3 + 156x_4 + 157x_5 ≥ 2533.

Where x_1, x_2, x_3, x_4, x_5 ≥ 0 and integer. The continuous optimal solution for Numerical Illustration 1 is presented in Table 4.


Table 4. Continuous optimal tableau (18).

Where s_1, s_2, s_3, s_4, s_5 ≥ 0 are the slack and surplus variables satisfying (19).

110x_1 + 210x_2 + 115x_3 + 112x_4 − 31x_5 + s_1 = 7189,
50x_1 + 183x_2 + 261x_3 − 79x_4 + 259x_5 + s_2 = 6780,
142x_1 + 244x_2 − 140x_3 + 139x_4 + 153x_5 + s_3 = 2695,  (19)
224x_1 − 87x_2 + 128x_3 + 129x_4 + 133x_5 + s_4 = 12562,
155x_1 + 252x_2 + 258x_3 + 156x_4 + 157x_5 − s_5 = 2533.

Solving (18) directly by the automated branch and bound algorithm takes 459 nodes to verify the optimal solution given in (20).

x_1 = 31, x_2 = 0, x_3 = 22, x_4 = 9, x_5 = 0, Zopt = 9462.  (20)

Using the simplex method, the continuous optimal solution is given in (21).

x_1 = 31.5636, x_2 = 0, x_3 = 22.7229, x_4 = 9.8910, x_5 = 0.1264, Zcopt = 9890.8977.  (21)
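The printed solutions can be checked arithmetically. The Python sketch below is a verification aid under the constraint senses reconstructed in (18) (not code from the paper); it confirms that the point in (20) is feasible with objective 9462 and that the point in (21) gives an objective of about 9890.90.

c = [111, 211, 171, 251, 151]
A = [
    [110, 210, 115, 112, -31],
    [50, 183, 261, -79, 259],
    [142, 244, -140, 139, 153],
    [224, -87, 128, 129, 133],
    [155, 252, 258, 156, 157],
]
b = [7189, 6780, 2695, 12562, 2533]
sense = ["<=", "<=", "<=", "<=", ">="]   # senses as reconstructed in (18)

def check(x, tol=0.05):
    """Feasibility (tolerance absorbs the four-decimal rounding in (21)) and objective."""
    lhs = [sum(aij * xj for aij, xj in zip(row, x)) for row in A]
    ok = all(l <= r + tol if s == "<=" else l >= r - tol
             for l, r, s in zip(lhs, b, sense))
    return ok, sum(cj * xj for cj, xj in zip(c, x))

print(check([31, 0, 22, 9, 0]))                        # (True, 9462), the point in (20)
print(check([31.5636, 0, 22.7229, 9.8910, 0.1264]))    # (True, ~9890.90), cf. (21)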

6.1.1 Freeing the Selected Variable

The surrogate constraint, or clique equality, becomes (22).

x_1 + x_3 + x_4 + x_5 = 64.3039.  (22)

The variable to be freed of its integer restriction is determined by (23).

β_ℓ = Max{31.5636, 0, 22.7229, 9.8910, 0.1264} = 31.5636.  (23)

The largest value, 31.5636, comes from variable x_1, which implies that this variable is to be freed of the integral restriction.


Sub-Problem a - additional constraint:

x_1 + x_3 + x_4 + x_5 + x_6 = 64.  (24)

Sub-Problem b - additional constraint:

x_1 + x_3 + x_4 + x_5 − x_7 = 65.  (25)

Note that we consider only the original variables (x_1, x_3, x_4 and x_5) that are in the optimal basis; x_6 and x_7 are additional variables which are also restricted to be integer. In diagram form, the original problem and the two sub-problems (a) and (b) are related as shown in Fig. 2.

Fig. 2. The two initial sub-problems of the LIP for Numerical Illustration 1: Node 0 (the original problem) branches into Node a, with x_1 + x_3 + x_4 + x_5 + x_6 = 64, and Node b, with x_1 + x_3 + x_4 + x_5 − x_7 = 65.

Solving Sub-Problem a using the automated branch and bound algorithm takes 21 nodes to verify the optimal solution already given in (20) and repeated in (26).

x_1 = 31, x_2 = 0, x_3 = 22, x_4 = 9, x_5 = 0, Z = 9462.  (26)

Solving Sub-Problem b using the automated branch and bound algorithm takes 1 node to verify infeasibility. The complexity of this problem is therefore reduced from 459 nodes to just (21 + 1) = 22 nodes, i.e. the complexity reduction is 95.2%, as given in (27).

q = ((459 − 22) / 459) × 100% = 95.2%.  (27)

Mere addition of the cut given in (28) to Sub-Problem a and of the cut given in (29) to Sub-Problem b does not reduce the total number of nodes required to verify an optimal solution with the automated branch and bound algorithm.

x_1 + x_3 + x_4 + x_5 ≤ 64.  (28)

x_1 + x_3 + x_4 + x_5 ≥ 65.  (29)

In fact it takes 461 nodes for Sub-Problem a and 1 node for Sub-Problem b, which gives a total of 462 nodes to verify optimality.


A set of 100 randomly generated linear integer problems has shown that complexity decreases significantly if the largest basic variable in the continuous optimal solution is freed of its integral restriction. As a result, freeing of variables is combined with cuts to form what is proposed in this paper as the branch, cut and free algorithm for the general linear integer problem.

7 Branch and Cut Algorithm

Cuts have previously been used to enhance the performance of the branch and bound algorithm [3, 17, 22], forming what is now known as the branch and cut algorithm [6, 8, 15, 19]. In addition, pricing has been used in a branch and bound setting to produce the well-known branch and price algorithm [5, 21]. Branch and cut and branch and price have also been combined into a hybrid now called branch, cut and price [9, 10]. Even with all these impressive ideas available, the general linear integer problem remains NP-hard and very difficult to solve, and heuristics are still used to approximate optimal solutions to this difficult problem [11–13]. Freeing a variable from its integral restriction is now used in a branch and cut setting to give rise to the branch, cut and free algorithm for the general linear integer problem.

7.1 Proposed Branch, Cut and Free Algorithm

The proposed algorithm consists of the following steps.

Step 1: Relax the given LIP and use linear programming techniques to obtain a continuous optimal solution. If the continuous optimal solution is integer then it is also optimal to the original problem; otherwise go to Step 2.
Step 2: Use the continuous optimal tableau to construct Sub-Problem a and Sub-Problem b, and determine the variable x_j to be freed of its integral restriction.
Step 3: From Sub-Problems a and b, use branch and cut to search for the smallest i that satisfies (30) and for the smallest i that satisfies (31).

x_1 + x_2 + x_3 + ... + x_j + ... + x_m = I − i, i = 0, 1, 2, ...  (30)

x_1 + x_2 + x_3 + ... + x_j + ... + x_m = I + 1 + i, i = 0, 1, 2, ...  (31)

Where x_j is the freed variable and the remaining variables are restricted to be integer.
Step 4: Call the integer solution obtained from (30) Za and that obtained from (31) Zb. The optimal solution (Zopt) is given by (32) for a maximization problem and by (33) for a minimization problem.

Zopt = Max[Za, Zb].  (32)

Zopt = Min[Za, Zb].  (33)

Step 5: Verify optimality to determine the actual optimal integer solution.
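The control flow of Steps 1-5 can be summarised in the Python sketch below. The helpers solve_lp_relaxation and branch_and_cut, and the returned solution objects with an objective attribute, are hypothetical stand-ins (the paper prescribes no implementation); the sketch uses the maximization convention of (32).

import math

def branch_cut_and_free(problem, solve_lp_relaxation, branch_and_cut, max_levels=100):
    """Control-flow sketch of Steps 1-5 for a maximization LIP."""
    # Step 1: solve the continuous relaxation.
    x, z_copt, basis = solve_lp_relaxation(problem)
    if all(abs(v - round(v)) < 1e-9 for v in x):
        return x, z_copt                                  # already integer, hence optimal

    # Step 2: surrogate level I from (10) and the variable to free, rule (16).
    basic_values = [x[j] for j in basis]
    I = math.floor(sum(basic_values))
    freed = basis[max(range(len(basis)), key=lambda p: basic_values[p])]

    # Step 3: search Sub-Problem a (levels I, I-1, ...) and Sub-Problem b
    # (levels I+1, I+2, ...) for the smallest i giving an integer point, cf. (30)/(31).
    candidates = []
    for start, step in ((I, -1), (I + 1, +1)):
        for i in range(max_levels):
            sol = branch_and_cut(problem, cut_level=start + step * i, free_var=freed)
            if sol is not None:
                candidates.append(sol)
                break

    # Step 4: keep the better of Z_a and Z_b, as in (32). Step 5, the verification
    # via the constraint sets (34) and (35), is omitted from this sketch.
    return max(candidates, key=lambda s: s.objective) if candidates else None

For a minimization problem the candidate with the smallest objective would be kept instead, as in (33).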


7.2 Verification of Optimality

We assume the continuous optimal solution and integer optimal solution are known as given in Fig. 3.

Fig. 3. Proof of optimality

Where:
I. ℓcopt is the position of the hyperplane x_1 + x_2 + x_3 + ... + x_j + ... + x_m at Zcopt,
II. ℓopt is the position of the hyperplane x_1 + x_2 + x_3 + ... + x_j + ... + x_m at Zopt,
III. ℓs is the position at which the hyperplane x_1 + x_2 + x_3 + ... + x_j + ... + x_m has its smallest value,
IV. ℓm is the position at which the hyperplane x_1 + x_2 + x_3 + ... + x_j + ... + x_m has its largest value,
V. c_1x_1 + c_2x_2 + ... + c_nx_n is the objective plane (shown in green in Fig. 3), which moves from the continuous optimal point (Zcopt) to the optimal integer point (Zopt).

From Fig. 3 it can be noted that there are no other integer points in the shaded region, i.e. in the region between the continuous optimal point (Zcopt) and the optimal integer point (Zopt), besides the optimal integer point itself.

7.2.1 Searching for the Optimal Integer Point

Armed with these ideas we can now develop the branch, cut and free algorithm. The first stage is to assume that Zopt is not known. From the diagram, the optimal integer point (Zopt) can lie on either the left-hand side or the right-hand side of ℓcopt. Since Zopt is not known, ℓs and ℓm are also not known. However, ℓs and ℓm are not needed in order to reach ℓopt: the search starts from ℓcopt and proceeds in both directions until ℓopt is obtained. After obtaining ℓopt there is a need to verify this value for optimality.


7.2.2 Optimality Verification

Let Zopta be the optimal solution from Sub-Problem a. Optimality can be verified by adding the constraint set (34) to Sub-Problem a.

x_1 + x_2 + x_3 + ... + x_j + ... + x_m ≤ ℓopta − 1,
c_1x_1 + c_2x_2 + ... + c_nx_n ≤ Zopta − 1.  (34)

Let Zoptb be the optimal solution coming from Sub-Problem b. Optimality in this case can be verified by adding the constraint set (35) to Sub-Problem b.

x_1 + x_2 + x_3 + ... + x_j + ... + x_m ≥ ℓoptb + 1,
c_1x_1 + c_2x_2 + ... + c_nx_n ≤ Zoptb − 1.  (35)

(The objective cuts in (34) and (35) are written for a minimization problem, as in Numerical Illustration 2; for a maximization problem they become c_1x_1 + c_2x_2 + ... + c_nx_n ≥ Zopta + 1 and c_1x_1 + c_2x_2 + ... + c_nx_n ≥ Zoptb + 1.)

The integer solution Zopt is optimal to the original LIP if both (34) and (35) are infeasible.

Mixed Linear Integer Problems. In the case of a mixed linear integer problem, only those variables that are required to be integer in the original mixed linear integer problem can be freed of their integral restriction.

7.3 Numerical Illustration 2 [18]

Minimize Z = 162x_1 + 38x_2 + 26x_3 + 301x_4 + 87x_5 + 5x_6 + 137x_7,
Such that:

165x_1 + 45x_2 + 33x_3 + 279x_4 + 69x_5 + 6x_6 + 122x_7 ≥ 18773.  (36)

Where x_1, x_2, ..., x_7 ≥ 0 and are integers. The continuous optimal solution for Numerical Illustration 2 is given in Table 5.

Table 5. Continuous optimal tableau (36).


The variable x_3 is the only original variable in the optimal basis, so this is the variable to be freed of its integral restriction. The continuous optimal solution for Numerical Illustration 2 is given in (37).

x_1 = x_2 = x_4 = x_5 = x_6 = x_7 = 0, x_3 = 568.8788, Zcopt = 14790.8495.  (37)

β_ℓ = Max{0, 0, 568.8788, 0, 0, 0, 0} = 568.8788.  (38)

The largest value, 568.8788, comes from variable x_3, which confirms that this variable is to be freed of the integral restriction.
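Because (36) has a single covering constraint, the continuous optimum can be checked directly: the basic variable is the one with the smallest cost-to-coefficient ratio. The Python sketch below assumes the right-hand side 18773 reconstructed in (36).

c = [162, 38, 26, 301, 87, 5, 137]    # objective coefficients of (36)
a = [165, 45, 33, 279, 69, 6, 122]    # constraint coefficients of (36)
b = 18773                             # right-hand side as reconstructed in (36)

ratios = [cj / aj for cj, aj in zip(c, a)]
j = min(range(len(c)), key=lambda i: ratios[i])   # cheapest cover of the rhs
print(j + 1, round(b / a[j], 4), round(c[j] * b / a[j], 2))
# prints: 3 568.8788 14790.85 -> x3 basic, matching (37) up to rounding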

Fig. 4. The two initial sub-problems of the LIP in Numerical Illustration 2: Node 0 branches into Node a (x_3 ≤ 568), with continuous solution x_3 = 568, x_6 = 4.8333, Z = 14792.1667, and Node b (x_3 ≥ 569), with solution x_3 = 569, Z = 14794 (fathomed).

The branch and cut algorithm is now used to search Sub-Problem a for integer points, i.e. x_3 = 568 − i, i = 0, 1, 2, ...

i = 0: add x_3 = 568 to the problem, where x_3 is a free variable. Solving by branch and cut gives (39).

x_3 = 568, x_6 = 5.0000, Z = 14793.  (39)

There is no need to search Sub-Problem b since it is already fathomed, i.e.

x_3 = 569, Z = 14794.  (40)

Zopt = Min[14793, 14794] = 14793.  (41)


This solution can be verified by adding (42) to Sub-Problem a.

x_3 ≤ 568 − 1 = 567,
162x_1 + 38x_2 + 26x_3 + 301x_4 + 87x_5 + 5x_6 + 137x_7 ≤ 14792.  (42)

Solving Sub-Problem a with the branch and cut algorithm, with x_3 as a free variable, yields an infeasible problem, which shows that there are no other integer points on the left-hand side of ℓcopt besides the one in (39). Similarly, we add (43) to Sub-Problem b.

x_3 ≥ 569 + 1 = 570,
162x_1 + 38x_2 + 26x_3 + 301x_4 + 87x_5 + 5x_6 + 137x_7 ≤ 14792.  (43)

Adding (43) to Sub-Problem b results in infeasibility and this verifies that (39) is optimal.
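Under the data reconstructed in (36), both infeasibility claims can also be double-checked with simple lower bounds (an illustrative sketch, not the author's verification): since x_3 is the cheapest variable per unit of the constraint, capping it at 567 already forces the objective above 14792, and pushing it to 570 does so immediately.

c = [162, 38, 26, 301, 87, 5, 137]
a = [165, 45, 33, 279, 69, 6, 122]
b = 18773                                       # rhs as reconstructed in (36)

# Check of (42): with x3 capped at 567 the shortfall of the covering constraint
# must be met by other variables; the cheapest of these per unit is x6 at 5/6.
# The bound 26*x3 + (5/6)*(b - 33*x3) decreases in x3, so x3 = 567 is the worst case.
shortfall = b - 33 * 567                        # 62
cheapest_other = min(ci / ai for i, (ci, ai) in enumerate(zip(c, a)) if i != 2)
print(26 * 567 + cheapest_other * shortfall)    # about 14793.67 > 14792 -> infeasible

# Check of (43): x3 >= 570 alone already costs at least 26 * 570.
print(26 * 570)                                 # 14820 > 14792 -> infeasible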

8 Computational Experience

One hundred randomly generated pure LIPs, with between 10 and 110 variables, were used in the computational analysis. The branch and cut algorithm was compared with the proposed branch, cut and free algorithm, using the same number of cuts for each of the two algorithms. The computations showed that freeing a basic variable within a branch and cut algorithm is more effective for solving pure LIPs than the plain branch and cut algorithm.

9 Conclusions

In this paper we presented a way of selecting the variable to be freed of its integral restriction, together with a way of dividing the problem into simpler parts as given in (30) and (31). It is easier to search the separate divisions (i = 0, 1, 2, ...) of Sub-Problem a or (i = 1, 2, 3, ...) of Sub-Problem b than the original LIP as a whole. In addition, we presented a verification of optimality for the optimal integer solution (Zopt). This is a new avenue for research that is expected to attract the attention of many researchers.