Advances in Intelligent Computing and Communication: Proceedings of ICAC 2021
ISBN 9811908249, 9789811908248

The book presents high-quality research papers presented at the 4th International Conference on Intelligent Computing and Communication (ICAC 2021), held at Siksha ‘O’ Anusandhan (Deemed to be University), Bhubaneswar, Odisha, India, on November 25–26, 2021.


Language: English · Pages: 599 [570] · Year: 2022


Table of contents :
Preface
Contents
Editors and Contributors
An Unsupervised Learning Approach Towards Credit Risk Modelling Using DFT Features and Gaussian Mixture Models
1 Introduction
2 Previous Works
3 Methodology
4 Experiments, Results, and Discussion
5 Conclusion
References
Human Activity Detection-Based Upon CNN with Pruning and Edge Detection
1 Introduction
2 Literature Survey
3 Problem Definition
4 Methodology of Work
5 Performance Analysis and Results
6 Conclusion and Future Scope
References
Improvement in Breast Cancer Detection Using Deep Learning
1 Introduction
2 Literature Review
3 Method Used
4 Proposed Methodology
5 Data
5.1 Channels
5.2 Initialization
6 Results
7 Conclusion
References
Measure to Tackle Forest Fire at Early Stage Using Applications of IoT
1 Introduction
2 Literature Survey
3 Problem Definition
4 Proposed Model
4.1 Data Accumulation Layer
4.2 Data Pre-processing Layer
4.3 Categorization Layer
4.4 Cloud Layer
4.5 Event Classification
5 Performance Analysis and Results
6 Conclusion and Future Scope
References
Start and Stop Policy for Smart Vehicles Using Application of IoT
1 Introduction
2 Literature Survey
3 Methodology
4 Results and Discussion
5 Conclusion and Future Scope
References
Fake News Detection Using Lightweight Machine Learning Models
1 Introduction
2 Related Work
3 Experiment and Result Analysis
3.1 Performance of SVM Model
3.2 Performance of Logistic Regression Model
3.3 Performance of Decision Tree Model
3.4 Performance of Neural Network
3.5 Result
4 Conclusions
References
A Comparative Analysis of Regression Approaches for Prediction of COVID-19 Active, Recovered, and Death Cases in India
1 Introduction
2 Related Work
3 Related Concepts
3.1 Random Forest Regression
3.2 Multiple Regression
4 Experimental Setup
4.1 Preparation of Dataset
4.2 Data Preprocessing
4.3 Visualization
4.4 Performance Calculation
5 Results
6 Conclusion
References
Energy Configuration Management Framework Using Automated Data Mining Algorithm
1 Introduction
1.1 Energy Conservation
1.2 Introduction to Data Mining and K-Means Clustering and R Programming
2 Literature Review
3 Present Works
3.1 Description of the System
4 Methodology and Results
5 Conclusions
References
Lecture Notes in Computer Science: Pathological Voice Recognition Based on Acoustic Phonatory Features
1 Introduction
1.1 Types of Voice Disorders
2 Related Work
2.1 Research Gaps Identified
3 Methodology
3.1 Preprocessing
3.2 Feature Extraction
3.3 Classification
3.4 Steps in Voice Pathology Recognition Process
4 Results and Discussions
5 Conclusion
References
Analog/RF Performance Analysis of Downscaled Cylindrical Gate Junctionless Graded Channel MOSFET
1 Introduction
2 Device Architecture and Simulation Model
3 Result Analysis
4 Conclusion
References
Process Design to Self-extract Text from Images for Similarity Check
1 Introduction
2 Proposed Document Processing Approach
2.1 Proposed Approach
3 Experimental Results and Discussions
4 Conclusion
References
Enactment Assessment of Machine Learning Model for Chronic Disease (Dysrhythmia) Using an Consistent Attributes and KNN Algorithm
1 Introduction
1.1 Narration of the Dataset
1.2 Data Preprocessing
1.3 Splitting the Data
2 Related Works
3 Proposed Techniques
3.1 Selection of Dataset
3.2 Methodology
3.3 Estimation of ML Algorithms
4 Result and Analysis
5 Conclusion
References
Data Classification by Ensemble Methods in Machine Learning
1 Introduction
2 Review of Literature
3 Overview of Algorithms
3.1 K-NN Algorithm
3.2 Decision Tree Algorithm
4 Experimental Setup and Result Analysis
5 Conclusion
References
Image Caption Generator Using Machine Learning and Deep Neural Networks
1 Introduction
2 Literature Survey
3 Proposed Image Caption Generator
3.1 System Architecture
3.2 Object Detection
3.3 Sentence Generation
3.4 Working
4 Methodology
4.1 Feature Extraction
4.2 Encoder
4.3 Decoder
5 Experimental Results
6 Conclusion
References
Power Quality Issues Mitigation in an AC Microgrid Through Bayesian Regularization Algorithm-Trained Artificial Neural Network
1 Introduction
2 System Modeling
2.1 Dstatcom
3 Results and Discussion
4 Conclusion
References
Rectified Adam Optimizer-Based CNN Model for Speaker Identification
1 Introduction
2 Related Work
3 Proposed Model
4 Experimental Setups
4.1 Dataset Description
4.2 Training Details
5 Experiments and Analysis of the Results
6 Conclusion
References
Fault Classification in Transmission Line Using Empirical Mode Decomposition and Support Vector Machine
1 Introduction
2 Empirical Mode Decomposition
3 Feature Extraction
4 Support Vector Machine
5 System Under Study
6 Simulation and Results
7 Conclusion
References
Adaptive Grey wolf Optimization Algorithm with Gaussian Mutation
1 Introduction
2 Proposed Adaptive Grey Wolf Optimization Algorithm (AGWO)
2.1 Encircling Prey
2.2 Hunting
2.3 Attacking Prey
2.4 Gaussian Mutation
3 Discussion on Experimental Finding
4 Conclusion
References
A Study on Apodization Profiles of Fiber Bragg Gratings
1 Introduction
2 Theory
3 Modeling
4 Discussions
5 Conclusion
References
Analysis on Polycystic Ovarian Syndrome and Comparative Study of Different Machine Learning Algorithms
1 Introduction
2 Literature Survey
3 Research Methodology
3.1 KNN (K-Nearest Neighbors Algorithm):
3.2 Linear Regression
3.3 K-means Clustering
3.4 Support Vector Machine (SVM)
4 Result
5 Conclusion
References
Area and Energy Optimized QCA-Based Binary to Gray Code Converters
1 Introduction
2 Related Work
3 Basics of QCA
4 Design Theory and Implementation
5 Results and Discussion
6 Conclusion and Future Scope
References
Sentiment Analysis on COVID-19 Tweeter Dataset
1 Introduction
2 Literature Review
3 Dataset and Data Preprocessing
4 Result and Discussion
5 Conclusion
References
Developing Arithmetic Optimization Algorithm for Travelling Salesman Problem
1 Introduction
2 Literature Survey
3 Arithmetic Optimization Algorithm
4 Objective Function
5 Results and Discussions
5.1 Experimental Setup
5.2 Convergence Analysis
5.3 Performance Analysis of AOA
5.4 Computational Complexity
6 Conclusion
References
COVID-19 Chest X-ray Image Generation Using ResNet-DCGAN Model
1 Introduction
2 Related Work
3 Proposed Model
4 Experimental Setups
4.1 Dataset Description
4.2 Training Details
5 Experiments and Analysis of the Results
6 Conclusion
References
A Framework for Segmentation of Characters and Words from In-Air Handwritten Assamese Text
Abstract
1 Introduction
2 Proposed Approach
2.1 Text Segmentation
2.1.1 Spotting of Boundary Points of Text Components in an IAHT Sequence
2.1.2 Extraction of Valid Text Components and Elimination of External Ligatures
2.1.3 Segregation of Valid Components into Characters and Words
2.2 Post-Processing
3 Experimental Results and Discussion
3.1 Spotting of Boundary Points and Extraction of Valid Components from an IAHT Sequence
3.2 Segregation of Character and Word Components and Post-Processing
3.3 Overall Segmentation Performance
4 Conclusion
References
Cybersecurity in Digital Transformations
1 Introduction
2 Problem Statement
3 Proposed Solution
4 Typesetting of Your Paper at Springer Case Studies
5 Conclusion
References
Comparison of Different Shapes for Micro-strip Antenna Design
1 Introduction
2 Design
3 Results and Discussion
4 Conclusion
References
Time and Frequency Domain Analysis of a Fractional-Order All-Pass Filter
1 Introduction
2 Theoretical Background of Fractional-Order Filter
3 Use of the Oustaloup Approximation Method to Analyze a Fractional-Order All-Pass Filter
3.1 Time Domain Analysis
3.2 Frequency Domain Analysis
4 Discussions
5 Conclusions
References
Cohort Selection using Mini-batch K-means Clustering for Ear Recognition
1 Introduction
2 Methods
3 Database Description and Experiment
4 Experimental Result and Analysis
5 Advantages and Disadvantages
6 Conclusion and Future Scope
References
A Hybrid Connected Approach of Technologies to Enhance Academic Performance
1 Introduction
2 Related Works
3 Proposed Methods and Materials
3.1 A Hybrid LMS System
3.2 Effect of Environment on Academic Performance
3.3 Block Chain Supported IoT-Based Attendance Monitoring System
4 Conclusion
References
Estimation of Path Loss in Wireless Underground Sensor Network for Soil with Chemical Fertilizers
1 Introduction
2 Classification of WUSNs and Deployment Strategies
3 Path Loss Estimation
4 Results and Discussion
5 Conclusions
References
Molecular Communication via Diffusion—An Experimental Setup using Alcohol Molecule
1 Introduction
2 Hardware Implementation of Molecular Communication System
2.1 Transmitter
2.2 Receiver
3 Concentration-based Modulation
3.1 Encoding
4 Performance Analysis with Distance between Transmitter and Receiver
5 Decoding
6 Conclusion
References
Optimal Pilot Contamination Mitigation-Based Channel Estimation for Massive MIMO System Using Hybrid Machine Learning Technique
1 Introduction
2 System Model
3 Optimal Channel Estimation Using Hybrid Machine Learning Technique (OCE-HML)
3.1 Reduce Pilot Contamination Using Discrete Bacterial Optimization-Based Clustering
3.2 Optimal Channel Estimation Using Capsule Learning-Based Convolutional Neural Network
3.3 Hybrid Detector for Multi-cell Massive MIMO
4 Numerical Results
4.1 NMSE Analysis of Proposed and Existing Techniques
4.2 Average Data Rate Analysis of Proposed and Existing Techniques
5 Conclusion
References
Concentration Measurement of Urea in Blood using Photonic Crystal Fiber
1 Introduction
2 Mathematical Approach
3 Result
4 Conclusion
References
Automated Detection of Myocardial Infarction with Multi-lead ECG Signals using Mixture of Features
1 Introduction
2 Materials and Method
2.1 Dataset
2.2 Pre-processing and Detection of R-peaks
2.3 Feature Extraction
2.4 Classification
3 Results
4 Conclusion
References
Metamaterial CSRR Loaded T-Junction Phase Shifting Power Divider Operating at 2.4 GHz
1 Introduction
2 Design Details of Power Divider
3 Fabricated Design of the Power Divider
4 Experimental Results
5 Conclusion
References
An Adaptive Levy Spiral Flight Sine Cosine Optimizer for Techno-Economic Enhancement of Power Distribution Networks Using Dispatchable DGs
1 Introduction
2 Problem Formulation
2.1 Minimization of Total Active Power Loss (PLoss)
2.2 Minimization of AEL
2.3 Operational Constraints
3 Proposed Algorithm
3.1 Conventional SCA
3.2 Proposed ALSFSCO
4 Application of Proposed Algorithm to Optimal DG Allocation Problem
5 Results and Discussions
6 Conclusion
References
Design of RAMF for Impulsive Noise Cancelation from Chest X-Ray Image
1 Introduction
1.1 Median Filter
2 Literature Survey
3 Method
4 Significance Measures
4.1 PSNR
5 Result
6 Conclusion
References
Smart Street Lightning Using Solar Energy
1 Introduction
1.1 Problem Definition
1.2 Objectives
1.3 Methodology
2 Implementation
3 Simulation and Results
3.1 Simulation
3.2 Results
4 Conclusion and Future Work
References
Fine-Tuning of a BERT-Based Uncased Model for Unbalanced Text Classification
1 Introduction
2 BERT-Based Uncased Model for Unbalanced Text Classification
3 Results and Analysis
3.1 The Data Collection
3.2 Evaluation Criteria
3.3 Analysis from the Outcomes
4 Conclusion and Future Work
References
Segmentation of the Heart Images Using Deep Learning to Assess the Risk Level of Cardiovascular Diseases
1 Introduction
2 Literature Review
3 Deep Learning
4 Results and Discussion
4.1 Program Code
5 Conclusion
References
Integrative System of Remote Accessing Without Internet Through SMS
1 Introduction
2 Related Work
3 Proposed System
3.1 Android Application
3.2 Fetch the Contact
3.3 Profile Migration
3.4 Location Tracking
3.5 Lock the Phone
4 Results and Discussion
5 Conclusion
References
Detect Fire in Uncertain Environment using Convolutional Neural Network
1 Introduction
2 Existing System
3 Proposed System
4 Work Flow
5 Conclusion
References
A Joint Optimization Approach for Security and Insurance Management on the Cloud
1 Introduction
2 Related Work
2.1 Inference from Existing System
3 Proposed System
4 Module Description
4.1 Purchase the Security Services
4.2 Cloud Service
4.3 Protecting Data Traffic
4.4 Claim Insurance
5 Conclusion
References
Improving the Efficiency of E-Healthcare System Based on Cloud
1 Introduction
2 Related Work
3 Existing System
4 Proposed System
4.1 Advantages of Proposed System
5 Modules Description
5.1 User Module
5.2 Registration Module
5.3 Creation Storage and Instance
5.4 Data Protection
5.5 Data Recovery Module
6 Conclusion
References
Analysis on Online Teaching Learning Methodology and Its Impact on Academics Amidst Pandemic
1 Introduction
2 Related Works
2.1 Before Pandemic
2.2 After Pandemic
3 Online Teaching Platforms and Tools
3.1 Teaching Platforms
3.2 Assessment Tools
3.3 Learning Management Tools
3.4 Online Learning Platforms
4 Survey: Results and Discussions
5 Conclusion
References
Real Estate Price Prediction Using Machine Learning Algorithm
1 Introduction
2 Related Work
3 Proposed Work
3.1 Data Collection
3.2 Data Preprocessing
3.3 Data Analysis
3.4 Prediction Models
4 Experimental Results
5 Conclusion
References
An Efficient Algorithm for Traffic Congestion Control
1 Introduction
2 Related Works
3 Suggested Method
3.1 Parking Space Detection Stage
3.2 Feature Extraction and Classification
4 Experimental Results
5 Conclusion
References
LSTM-Based Epileptic Seizure Detection by Analyzing EEG Signal
1 Introduction
2 Algorithm Used
2.1 LSTM
2.2 Spot-Check Algorithms
2.3 Support Vector Machines (SVMs)
3 System Design
4 Database
5 Implementation
5.1 Processing the Data
5.2 Defining the Label
5.3 Fitting Data into the Model
6 Result
7 Conclusions
References
Machine Learning for Peer to Peer Content Dispersal for Spontaneously Combined Finger Prints
1 Introduction
2 Literature Review
3 Proposed Work
3.1 Overview
3.2 System Architecture
3.3 Signature Message Block Download
4 Simulation Results
4.1 Node Login with Peer Process
4.2 Large Compressed Small Hash-Cdn
4.3 Receiving Packet with Packet Strength
5 Conclusion
References
Recognition of APSK Digital Modulation Signal Based on Wavelet Scattering Transform
1 Introduction
2 Material and Methods
2.1 Signal Model
2.2 WST Algorithm
2.3 Classification
3 Results and Discussion
3.1 Setup of Experiment
3.2 Simulation Results
4 Conclusion
References
Novel Method to Choose a Certain Wind Turbine for Al. Hai Site in Iraq
1 Introduction
2 Materials and Methodology
2.1 Weibull Distribution
2.2 Capacity Factor
2.3 Normalized Power
2.4 Turbine Performance Index (TPI)
3 Data Analysis and Interpretation
4 Conclusion
References
Analysis of Pressure and Temperature Sensitivity Based on Coated Cascade FBG-LPFG Sensor
1 Introduction
2 Work Principle of Fiber Grating Sensor
3 Fabrication of FBG-LPFG Sensor
4 TiO2 Coating Technique
5 Experimental Work
6 Result and Discussion
7 Conclusion
References
High Gain of Rectangular Microstrip Patch Array in Wireless Microphones Applications
1 Introduction
2 Analysis of Rectangular Microstrip Antenna
3 The Designed Antennas
3.1 Single Rectangular Microstrip Patch Antenna
3.2 1*2 Rectangular Micro-strip Patch Array Antenna
3.3 2*2 Rectangular Microstrip Patch Array Antenna
3.4 2*4 Rectangular Microstrip Patch Array Antenna
4 Simulation Results and Discussion Using Advanced Design System (ADS)
5 Conclusion
References
Assessment Online Platforms During COVID-19 Pandemic
1 Introduction
2 Problem Statement
3 Questionnaire Contents
4 The Aims of the Study
5 Live Webinar Platforms
6 Method and Procedure
6.1 The Study Community
6.2 Study Tool
6.3 Study Tool Application
6.4 Data Analysis
7 Results and Discussion
8 Conclusion
References
Fabrication and Analysis of Heterojunction System’s Electrical Properties Made of Compound Sb2O3:In2O3Si(N,p) Films by Spin Coating
1 Introduction
2 Experimental
3 Results and Discussion
3.1 The Electrical Properties of (Sb2O3:In2O3) Thin Film
4 Conclusions
4.1 Electrical Properties (Hall Effect)
4.2 V Characteristics for Si(p,n)/Sb2O3:In2O3 Heterojunction
References
Generation of Interactive Fractals Using Generalized IFS
1 Introduction
2 Random Fractals
3 Generation of Fractals Using IFS
4 Experimental Setup and Analysis
4.1 Case 1: Fern
4.2 Case 2: Sierpinski Triangle
4.3 Case 3: Koch Curve
4.4 Case 4: Twin Christmas Tree
4.5 Case 5: Castle and Herb Generation
5 Research Challenges
6 Conclusion
References
Real-Time CPU Burst Time Prediction Approach for Processes in the Computational Grid Using ML
1 Introduction
2 Literature Review
3 Proposed Approach
4 Data Set Description
5 Experiments and Results
6 Results and Discussion
7 Conclusion
References
EMG-Based Arm Exoskeleton
1 Introduction
2 Methodology
3 Results and Discussion
4 Conclusion
References
Author Index


Lecture Notes in Networks and Systems 430

Mihir Narayan Mohanty · Swagatam Das, Editors

Advances in Intelligent Computing and Communication Proceedings of ICAC 2021

Lecture Notes in Networks and Systems Volume 430

Series Editor Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Advisory Editors Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas— UNICAMP, São Paulo, Brazil Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Turkey Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA Institute of Automation, Chinese Academy of Sciences, Beijing, China Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus Imre J. Rudas, Óbuda University, Budapest, Hungary Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong

The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them. Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science. For proposals from Asia please contact Aninda Bose ([email protected]).

More information about this series at https://link.springer.com/bookseries/15179

Mihir Narayan Mohanty · Swagatam Das Editors

Advances in Intelligent Computing and Communication Proceedings of ICAC 2021

Editors Mihir Narayan Mohanty Department of Electronics and Communication Engineering Institute of Technical Education and Research (ITER) Siksha ‘O’ Anusandhan Deemed to be University Bhubaneswar, Odisha, India

Swagatam Das Electronics and Communication Sciences Unit Indian Statistical Institute Kolkata, West Bengal, India

ISSN 2367-3370 ISSN 2367-3389 (electronic) Lecture Notes in Networks and Systems ISBN 978-981-19-0824-8 ISBN 978-981-19-0825-5 (eBook) https://doi.org/10.1007/978-981-19-0825-5 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

This issue of Lecture Notes in Networks and Systems is dedicated to the 4th International Conference on Intelligent Computing and Communication (ICAC 2021), held at the campus of the Institute of Technical Education and Research (Faculty of Engineering and Technology), Siksha ‘O’ Anusandhan (Deemed to be University), Bhubaneswar, Odisha, India, from November 25 to 26, 2021. The conference was organized by the Department of Electronics and Communication Engineering of the Institute of Technical Education and Research (Faculty of Engineering and Technology). It had three tracks, namely advances in communication systems, intelligent systems and signal processing. Of the 131 papers received from authors around the globe, 58 were selected for inclusion in the conference proceedings, and each paper was peer reviewed by at least two reviewers. Due to the COVID-19 pandemic, the conference was organized in virtual mode this time.

The objective of the conference was to bring together experts from academic institutions, industries, research organizations and professional engineers to share knowledge, expertise and experience in emerging trends related to computer, communication and electrical topics. The aim of this international conference is to cover all these issues on a single platform and to provide an international forum where researchers can discuss real-time problems and solutions, exchange their valuable ideas and showcase ongoing work that may lay the foundation for futuristic engineering. The conference mainly addresses advanced communication protocols, database security and privacy, advanced computing systems, energy saving and related topics through several up-to-date techniques. It also offers a platform focused on inventive information and computing, toward the investigation of cognitive mechanisms and processes of human information processing and the development of next-generation engineering and advanced technological systems.

Siksha ‘O’ Anusandhan is a deemed to be university located in Bhubaneswar, Odisha, India. It was originally founded as the Institute of Technical Education and Research (ITER) in the year 1996 and has been at the forefront of nourishing a learning ambience and encouraging academics, research and innovation since attaining university status in 2007. The university is composed of nine degree-granting schools with 10,000 students. Many of SOAU’s programs are nationally accredited for meeting high standards of academic quality, including engineering, medicine, pharmacy, business, nursing, biotechnology, humanities, environment, nanotechnology, agriculture and law. The university was ranked 20th by the National Institutional Ranking Framework (NIRF) under the aegis of the Ministry of Human Resource Development, Government of India, and has been awarded an ‘A’ grade by NAAC. It has established 12 research centers and 39 research laboratories to fulfill the needs of faculty and students. The Faculty of Engineering and Technology is a constituent of SOAU with thirteen departments and more than four hundred faculty members. The Department of ECE works continually to provide quality research output in the areas of signal and image processing, communication engineering and microelectronic devices.

The editors thank the authors for extending their fullest cooperation in preparing the manuscripts according to the Springer Lecture Notes guidelines and in taking aboard the additional review comments. The editors would also like to convey their heartfelt thanks to Prof. M. R. Nayak, President, Siksha ‘O’ Anusandhan (Deemed to be University), Prof. Ashok Kumar Mohapatra, Vice-Chancellor, Siksha ‘O’ Anusandhan (Deemed to be University), Prof. M. K. Mallick, Director, ITER (FET), Siksha ‘O’ Anusandhan (Deemed to be University), Prof. J. K. Nath, Dean (R&D), Siksha ‘O’ Anusandhan (Deemed to be University), Prof. P. K. Nanda, Pro VC, Siksha ‘O’ Anusandhan (Deemed to be University) and Prof. P. K. Sahoo, Dean, ITER, Siksha ‘O’ Anusandhan (Deemed to be University) for their constant inspiration and motivation in all stages of the conference.

Dr. Mihir Narayan Mohanty, Bhubaneswar, India
Dr. Swagatam Das, Kolkata, India

Contents

An Unsupervised Learning Approach Towards Credit Risk Modelling Using DFT Features and Gaussian Mixture Models . . . 1
Amit Kant Pandit, Ashutosh Vashishtha, Shubam Sumbria, and Shubham Mahajan

Human Activity Detection-Based Upon CNN with Pruning and Edge Detection . . . 9
Marvi Sharma and Dinesh Kumar Garg

Improvement in Breast Cancer Detection Using Deep Learning . . . 17
Manvi Gupta, Shubham Mahajan, Anil Kumar Bhardwaj, and Amit Kant Pandit

Measure to Tackle Forest Fire at Early Stage Using Applications of IoT . . . 33
Ankita Sharma and Er. Sorab Kumar

Start and Stop Policy for Smart Vehicles Using Application of IoT . . . 43
Nisha Thakur and Er. Deepak Kumar

Fake News Detection Using Lightweight Machine Learning Models . . . 53
Satakshee Mishra, Mallika Srivastava, Manish Raj, Sukant Kishoro Bisoy, and Rasmi Ranjan Khansama

A Comparative Analysis of Regression Approaches for Prediction of COVID-19 Active, Recovered, and Death Cases in India . . . 63
Binita Kumari and Sipra Sahoo

Energy Configuration Management Framework Using Automated Data Mining Algorithm . . . 79
Nidhi Sharma, Binu Kuriakose Vargis, Kamal Upreti, Rituraj Jain, and Arvind Kumar Sharma

Lecture Notes in Computer Science: Pathological Voice Recognition Based on Acoustic Phonatory Features . . . 89
P. Deepa and Rashmita Khilar

Analog/RF Performance Analysis of Downscaled Cylindrical Gate Junctionless Graded Channel MOSFET . . . 101
S. Misra, K. P. Swain, S. M. Biswal, S. K. Pati, and J. K. Das

Process Design to Self-extract Text from Images for Similarity Check . . . 109
Adiba Sharmeen, Nidhi Agarwal, Arpit Suman, Sumanshu Agarwal, and Kundan Kumar

Enactment Assessment of Machine Learning Model for Chronic Disease (Dysrhythmia) Using an Consistent Attributes and KNN Algorithm . . . 119
K. Kiruthika and Rashmita Khilar

Data Classification by Ensemble Methods in Machine Learning . . . 127
G. Jagadeeswara Rao, A. Siva Prasad, S. Sai Srinivas, K. Sivaparvathi, and Nibedan Panda

Image Caption Generator Using Machine Learning and Deep Neural Networks . . . 137
Mogula Yeshasvi and T. Subetha

Power Quality Issues Mitigation in an AC Microgrid Through Bayesian Regularization Algorithm-Trained Artificial Neural Network . . . 145
Subhashhree Choudhury, Devi Prasad Acharya, and Niranjan Nayak

Rectified Adam Optimizer-Based CNN Model for Speaker Identification . . . 155
Avirup Mazumder, Subhayu Ghosh, Swarup Roy, Sandipan Dhar, and Nanda Dulal Jana

Fault Classification in Transmission Line Using Empirical Mode Decomposition and Support Vector Machine . . . 163
Shitya Ranjan Das, Ranjan Kumar Mallick, Pravati Nayak, and Sairam mishra

Adaptive Grey wolf Optimization Algorithm with Gaussian Mutation . . . 173
Bibekananda Jena, Manoj Kumar Naik, Aneesh Wunnava, and Rutuparna Panda

A Study on Apodization Profiles of Fiber Bragg Gratings . . . 183
Manish Mishra and Prasant Kumar Sahu

Analysis on Polycystic Ovarian Syndrome and Comparative Study of Different Machine Learning Algorithms . . . 191
Gino Sinthia, T. Poovizhi, and Rashmita Khilar

Area and Energy Optimized QCA-Based Binary to Gray Code Converters . . . 197
K. J. Nikhil and B. S. Premananda

Sentiment Analysis on COVID-19 Tweeter Dataset . . . 207
Anubhav Kumar, Kyongsik Yun, Destalem Negusse, Haile Misgna, and Moges Ahmed

Developing Arithmetic Optimization Algorithm for Travelling Salesman Problem . . . 217
Madugula Murali Krishna and Santosh Kumar Majhi

COVID-19 Chest X-ray Image Generation Using ResNet-DCGAN Model . . . 227
Sukonya Phukan, Jyoti Singh, Rajlakshmi Gogoi, Sandipan Dhar, and Nanda Dulal Jana

A Framework for Segmentation of Characters and Words from In-Air Handwritten Assamese Text . . . 235
Ananya Choudhury and Kandarpa Kumar Sarma

Cybersecurity in Digital Transformations . . . 247
A. Swain, K. P. Swain, S. K. Pattnaik, S. R. Samal, and J. K. Das

Comparison of Different Shapes for Micro-strip Antenna Design . . . 253
Hirak Keshari Behera and Laxmi Prasad Mishra

Time and Frequency Domain Analysis of a Fractional-Order All-Pass Filter . . . 263
Tapaswini Sahu, Kumar Biswal, and Madhab Chandra Tripathy

Cohort Selection using Mini-batch K-means Clustering for Ear Recognition . . . 273
Ravishankar Mehta, Jogendra Garain, and Koushlendra Kumar Singh

A Hybrid Connected Approach of Technologies to Enhance Academic Performance . . . 281
Sushil Kumar Mahapatra, Binod Kumar Pattanayak, and Bibudhendu Pati

Estimation of Path Loss in Wireless Underground Sensor Network for Soil with Chemical Fertilizers . . . 291
Amitabh Satpathy, Manoranjan Das, and Benudhar Sahu

Molecular Communication via Diffusion—An Experimental Setup using Alcohol Molecule . . . 299
Meera Dash and Trilochan Panigrahi

Optimal Pilot Contamination Mitigation-Based Channel Estimation for Massive MIMO System Using Hybrid Machine Learning Technique . . . 309
Lipsa Dash and Anand Sreekantan Thampy

Concentration Measurement of Urea in Blood using Photonic Crystal Fiber . . . 323
Bhukya Arun Kumar, Sanjay Kumar Sahu, and Gopinath Palai

Automated Detection of Myocardial Infarction with Multi-lead ECG Signals using Mixture of Features . . . 329
Santanu Sahoo, Gyana Ranjan Patra, Monalisa Mohanty, and Sunita Samanta

Metamaterial CSRR Loaded T-Junction Phase Shifting Power Divider Operating at 2.4 GHz . . . 339
Kumaresh Sarmah, Roktim Konch, and Sivaranjan Goswami

An Adaptive Levy Spiral Flight Sine Cosine Optimizer for Techno-Economic Enhancement of Power Distribution Networks Using Dispatchable DGs . . . 347
Usharani Raut, Sivkumar Mishra, Subrat Kumar Dash, Sanjaya Kumar Jena, and Alivarani Mohapatra

Design of RAMF for Impulsive Noise Cancelation from Chest X-Ray Image . . . 357
Jaya Bijaya Arjun Das, Archana Sarangi, Debahuti Mishra, and Mihir Narayan Mohanty

Smart Street Lightning Using Solar Energy . . . 367
Priya Seema Miranda, S. Adarsh Rag, and K. P. Jayalakshmi

Fine-Tuning of a BERT-Based Uncased Model for Unbalanced Text Classification . . . 377
Santosh Kumar Behera and Rajashree Dash

Segmentation of the Heart Images Using Deep Learning to Assess the Risk Level of Cardiovascular Diseases . . . 385
Shafqat Ul Ahsaan, Vinod Kumar, and Ashish Kumar Mourya

Integrative System of Remote Accessing Without Internet Through SMS . . . 393
L. Sujihelen, M. Dinesh, Sai Shiva Shankar, S. Jancy, M. D. Antopraveena, and G. Nagarajan

Detect Fire in Uncertain Environment using Convolutional Neural Network . . . 399
L. K. Joshila Grace, P. Asha, J. Refonaa, S. L. Jany Shabu, and A. Viji Amutha Mary

A Joint Optimization Approach for Security and Insurance Management on the Cloud . . . 405
L. K. Joshila Grace, S. Vigneshwari, R. Sathya Bama Krishna, B. Ankayarkanni, and A. Mary Posonia

Improving the Efficiency of E-Healthcare System Based on Cloud . . . 415
L. Sujihelen, S. T. Nikhil Sidharth, Miryala Sai Kiran, M. D. Antopraveena, M. S. Roobini, and G. Nagarajan

Analysis on Online Teaching Learning Methodology and Its Impact on Academics Amidst Pandemic . . . 421
R. Yogitha, R. Aishwarya, G. Kalaiarasi, and L. Lakshmanan

Real Estate Price Prediction Using Machine Learning Algorithm . . . 431
D. Vathana, Rohan Patel, and Mohit Bargoti

An Efficient Algorithm for Traffic Congestion Control . . . 441
A. Mary Posonia, B. Ankayarkanni, D. Usha Nandhini, J. Albert Mayan, and G. Nagarajan

LSTM-Based Epileptic Seizure Detection by Analyzing EEG Signal . . . 449
Shashank Thakur, Aditi Anupam Shukla, R. I. Minu, and Bhasi Sukumaran

Machine Learning for Peer to Peer Content Dispersal for Spontaneously Combined Finger Prints . . . 459
T. R. Saravanan and G. Nagarajan

Recognition of APSK Digital Modulation Signal Based on Wavelet Scattering Transform . . . 469
Mustafa R. Ismael, Haider J. Abd, and Mohammed Taih Gatte

Novel Method to Choose a Certain Wind Turbine for Al. Hai Site in Iraq . . . 479
Sura T. Nassir, Mohammed O. Kadhim, and Ahmed B. Khamees

Analysis of Pressure and Temperature Sensitivity Based on Coated Cascade FBG-LPFG Sensor . . . 491
Zahraa S. Alshaikhli and Wasan A. Hekmat

High Gain of Rectangular Microstrip Patch Array in Wireless Microphones Applications . . . 503
Hiba A. Alsawaf

Assessment Online Platforms During COVID-19 Pandemic . . . 519
Zinah Abdulridha Abutiheen, Ashwan A. Abdulmunem, and Zahraa A. Harjan

Fabrication and Analysis of Heterojunction System’s Electrical Properties Made of Compound Sb2O3:In2O3Si(N,p) Films by Spin Coating . . . 529
Ali J. Khalaf, Abeer S. Alfayhan, Raheem G. K. Hussein, and Mohammed Hadi Shinen

Generation of Interactive Fractals Using Generalized IFS . . . 541
Sukanta Kumar Das, Jibitesh Mishra, and Soumya Ranjan Nayak

Real-Time CPU Burst Time Prediction Approach for Processes in the Computational Grid Using ML . . . 551
Amiya Ranjan Panda, Shashank Sirmour, and Pradeeep Kumar Mallick

EMG-Based Arm Exoskeleton . . . 563
K. P. Jayalakshmi, S. Adarsh Rag, and J. Cyril Robinson Azariah

Author Index . . . 571

Editors and Contributors

About the Editors

Dr. Mihir Narayan Mohanty is presently working as Professor in the Department of Electronics and Communication Engineering, Institute of Technical Education and Research, Siksha ‘O’ Anusandhan (Deemed to be University), Bhubaneswar, Odisha, India. He has published over 500 papers in international/national journals, chapters and conferences, along with approximately 25 years of teaching experience at UG and PG levels. He is an active member of many professional societies such as IEEE, IET, ISTE, IRED, IETI, EMC and EMI Engineers India, ISCA, ACEEE and IAEng, and is a Fellow of IE (I) and IETE and a Senior Member of IEEE. He received his M.Tech. degree in Communication System Engineering from Sambalpur University, Sambalpur, Odisha, and carried out his Ph.D. work in Applied Signal Processing. He previously worked as Associate Professor and Head in the Department of Electronics and Instrumentation Engineering, Institute of Technical Education and Research, Siksha ‘O’ Anusandhan Deemed to be University, Bhubaneswar, Odisha. His research interests include applied signal and image processing, digital signal/image processing, biomedical signal processing, and microwave communication engineering. He has worked as a guest lecturer at many universities and has given many invited talks at conferences, webinars, workshops, and FDPs. He has reviewed many Springer and IEEE conference papers as well as international journal papers. He has also received many national and international awards, such as the Cosmic International Awards, Green Thinkerz Award, Green Warriors Award, Rajiv Gandhi Sadbhavana Award, the Institution Award from the Institute of Engineers (India), and many best paper awards at international conferences.

Dr. Swagatam Das received his B.E. Tel. E., M.E. Tel. E. (Control Engineering specialization), and Ph.D. degrees, all from Jadavpur University, India, in 2003, 2005, and 2009, respectively. Currently serving as Associate Professor at the Electronics and Communication Sciences Unit of the Indian Statistical Institute, Kolkata, India, his research interests include evolutionary computing, pattern recognition, multiagent systems, and wireless communication. Dr. Das has published more than 300 research articles in peer-reviewed journals and international conference proceedings. He is Founding Co-Editor-in-Chief of Swarm and Evolutionary Computation, an international journal from Elsevier. He has also served as or is serving as an Associate Editor of various journals, including Pattern Recognition, Neurocomputing, Information Sciences, and IEEE Access. He is an Editorial Board Member of numerous other journals, including Progress in Artificial Intelligence, Applied Soft Computing, and Artificial Intelligence Review. Dr. Das is a recipient of the 2012 Young Engineer Award from the Indian National Academy of Engineering (INAE) and of the 2015 Thomson Reuters Research Excellence India Citation Award for the highest cited researcher from India in the Engineering and Computer Science category for the period 2010–2014.

Contributors Haider J. Abd College of Engineering, Electrical Engineering Department, University of Babylon, Babel, Iraq Ashwan A. Abdulmunem College of Computer Science Information Technology, Department of Computer Science, University of Kerbala, Kerbala, Iraq Zinah Abdulridha Abutiheen College of Computer Science Information Technology, Department of Computer Science, University of Kerbala, Kerbala, Iraq Devi Prasad Acharya Department of Electrical and Electronics Engineering, Siksha ‘O’ Anusandhan (Deemed to be University), Bhubaneswar, India Nidhi Agarwal Department of Electronics and Communication Engineering, ITER, Siksha ‘O’ Anusandhan (Deemed To Be University), Bhubaneswar, India Sumanshu Agarwal Department of Electronics and Communication Engineering, ITER, Siksha ‘O’ Anusandhan (Deemed To Be University), Bhubaneswar, India Moges Ahmed Faculty School of Computing, EIT-M, Mekelle University, Mekelle, Ethiopia Shafqat Ul Ahsaan Department of Computer Science, NIMS University, Jaipur, Rajasthan, India R. Aishwarya Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India J. Albert Mayan Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India Abeer S. Alfayhan Department of Physics, College of Science, University of Babylon, Babylon, Iraq


Hiba A. Alsawaf Ninevah University, Mosul, Iraq Zahraa S. Alshaikhli Laser and Optoelectronics Engineering Department, University of Technology—Iraq, Baghdad, Iraq B. Ankayarkanni Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India M. D. Antopraveena Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India P. Asha Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India J. Cyril Robinson Azariah Department of Nanotechnology, Institute of Electronics and Communication Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, Tamilnadu, India Mohit Bargoti Department of Computing Technologies, SRM Institute of Science and Technology, Chennai, India Hirak Keshari Behera Department of ECE, S’O’A Deemed to be University, Bhubaneswar, India Santosh Kumar Behera Department of Computer Science and Engineering, Siksha ‘O’ Anusandhan Deemed to be University, Bhubaneswar, Odisha, India Anil Kumar Bhardwaj School of Electronics and Communication, Shri Mata Vaishno Devi University, Katra, India Sukant Kishoro Bisoy C.V. Raman Global University, Bhubaneswar, Odisha, India Kumar Biswal School of Electronics Engineering, Kalinga Institute of Industrial Technology, Deemed to be University, Bhubaneswar, India S. M. Biswal Silicon Institute of Technology, Bhubaneswar, India Ananya Choudhury Department of Electronics and Communication Engineering, Gauhati University, Guwahati, India Subhashhree Choudhury Department of Electrical and Electronics Engineering, Siksha ‘O’ Anusandhan (Deemed to be University), Bhubaneswar, India J. K. Das Silicon Institute of Technology, Bhubaneswar, India; School of Electronics Engineering, KIIT University, Bhubaneswar, India Jaya Bijaya Arjun Das Department of Computer Science and Engineering, ITER, Siksha ‘O’ Anusandhan (Deemed to be University), Bhubaneswar, Odisha, India Manoranjan Das Institute of Technical Education and Research, SOA (Deemed to be University), Bhubaneswar, India


Shitya Ranjan Das Siksha ‘O’ Anushandhan Deemed to be University, Bhubaneswar, India Sukanta Kumar Das Department of Computer Science and Application, Biju Patnaik University of Technology, Rourkela, Odisha, India Lipsa Dash School of Electronics Engineering, Vellore Institute of Technology, Vellore, India Meera Dash Department of ECE, ITER, SOA, Bhubaneswar, India Rajashree Dash Department of Computer Science and Engineering, Siksha ‘O’ Anusandhan Deemed to be University, Bhubaneswar, Odisha, India Subrat Kumar Dash Government College of Engineering, Kalahandi, India P. Deepa Department of CSE, Panimalar Engineering College, Chennai, Tamil Nadu, India Er. Deepak Kumar Department of Computer Science and Engineering, Sri Sai College of Engineering and Technology, Pathankot, India Sandipan Dhar Department of Computer Science and Engineering, National Institute of Technology Durgapur, Durgapur, India M. Dinesh Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India Jogendra Garain Department of Computer Science and Engineering, Siksha ‘O’ Anusandhan Deemed to be University, Bhubaneswar, Odisha, India Dinesh Kumar Garg Department of Computer Science and Engineering, Sri Sai College of Engineering and Technology, Pathankot, India Mohammed Taih Gatte College of Engineering, Electrical Engineering Department, University of Babylon, Babel, Iraq Subhayu Ghosh Department of Computer Science and Engineering, National Institute of Technology Durgapur, Durgapur, India Rajlakshmi Gogoi Jorhat Engineering College, Jorhat, India Sivaranjan Goswami Department of Electronics and Communication Technology, Gauhati University, Guwahati, Assam, India Manvi Gupta School of Electronics and Communication, Shri Mata Vaishno Devi University, Katra, India Zahraa A. Harjan College of Computer Science Information Technology, Department of Computer Science, University of Kerbala, Kerbala, Iraq Wasan A. Hekmat Laser and Optoelectronics Engineering Department, University of Technology—Iraq, Baghdad, Iraq


Raheem G. K. Hussein Department of Physics, College of Science, University of Babylon, Babylon, Iraq Mustafa R. Ismael College of Engineering, Electrical Engineering Department, University of Babylon, Babel, Iraq G. Jagadeeswara Rao Department of Information Technology, Aditya Institute of Technology and Management, Tekkali, Andhra Pradesh, India Rituraj Jain Department of Electrical and Computer Engineering, Wollega University, Nekemte, Ethiopia Nanda Dulal Jana Department of Computer Science and Engineering, National Institute of Technology Durgapur, Durgapur, India S. Jancy Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India S. L. Jany Shabu Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India K. P. Jayalakshmi Department of ECE, St. Joseph Engineering College, Vamanjoor, Mangalore, India Bibekananda Jena Department of Electronics and Communication Engineering, Anil Neerukonda Institute of Technology & Science, Sangivalasa, Visakhapatnam, Andhra Pradesh, India Sanjaya Kumar Jena Institute of Technical Education and Research, SOA University, Bhubaneswar, India L. K. Joshila Grace Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India

Mohammed O. Kadhim Renewable Energy Department, Environment and Energy Sciences College, Al. Karkh University of Sciences, Baghdad, Iraq G. Kalaiarasi Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India Ali J. Khalaf Radiology Techniques Department, College of Medical Technology, The Islamic University, Najaf, Iraq Ahmed B. Khamees Renewable Energy Research Center, Renewable Energy Directorate, Ministry of Science and Technology, Baghdad, Iraq Rasmi Ranjan Khansama Government Degree College (A), Tuni, Andhra Pradesh, India Rashmita Khilar Department of IT, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Thandalam, Chennai, Tamil Nadu, India


Miryala Sai Kiran Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India

K. Kiruthika Panimalar Engineering College, Poonamalle, Chennai, Tamil Nadu, India Roktim Konch Department of Electronics and Communication Technology, Gauhati University, Guwahati, Assam, India Madugula Murali Krishna Department of Computer Science and Engineering, Veer Surendra Sai University of Technology, Burla, Odisha, India Anubhav Kumar Faculty of Engineering and Technology, Mandsaur University, Mandsaur, India Bhukya Arun Kumar School of Electronics and Electrical Engineering, Lovely Professional University, Phagwara, India Kundan Kumar Department of Electronics and Communication Engineering, ITER, Siksha ‘O’ Anusandhan (Deemed To Be University), Bhubaneswar, India Vinod Kumar Department of Computer Science, NIMS University, Jaipur, Rajasthan, India Binita Kumari Department of Computer Science and Engineering, Siksha ‘O’ Anusandhan Deemed To Be University, Bhubaneswar, Odisha, India L. Lakshmanan Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India Shubham Mahajan School of Electronics and Communication, Shri Mata Vaishno Devi University, Katra, Jammu & Kashmir, India Sushil Kumar Mahapatra Department of Computer Science and Engineering, Siksha O Anusandhan Deemed to be University, Bhubaneswar, Odisha, India Santosh Kumar Majhi Department of Computer Science and Engineering, Veer Surendra Sai University of Technology, Burla, Odisha, India Pradeeep Kumar Mallick School of Computer Engineering, KIIT Deemed To Be University, Bhubaneswar, Odisha, India Ranjan Kumar Mallick Siksha ‘O’ Anushandhan Deemed to be University, Bhubaneswar, India A. Mary Posonia Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India Avirup Mazumder Department of Computer Science and Engineering, National Institute of Technology Durgapur, Durgapur, India Ravishankar Mehta Department of Computer Science and Engineering, National Institute of Technology, Jamshedpur, Jharkhand, India


R. I. Minu SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India Priya Seema Miranda St. Joseph Engineering College, Vamnjoor, Mangaluru, India Haile Misgna Faculty School of Computing, EIT-M, Mekelle University, Mekelle, Ethiopia Debahuti Mishra Department of Computer Science and Engineering, ITER, Siksha ‘O’ Anusandhan (Deemed to be University), Bhubaneswar, Odisha, India Jibitesh Mishra Department of Computer Science and Application, Odisha University of Technology and Research, Bhubaneswar, Odisha, India Laxmi Prasad Mishra Department of ECE, S’O’A Deemed to be University, Bhubaneswar, India Manish Mishra Indian Institute of Technology, Argul, Bhubaneswar, Odisha, India Satakshee Mishra C.V. Raman Global University, Bhubaneswar, Odisha, India Sivkumar Mishra Centre for Advanced Post Graduate Studies, Biju Pattnaik University of Technology, Rourkela, India Sairam mishra Siksha ‘O’ Anushandhan Deemed to be University, Bhubaneswar, India S. Misra Gandhi Institute of Technological Advancement, Bhubaneswar, India Mihir Narayan Mohanty Department of Electronics and Communication Engineering, ITER, Siksha ‘O’ Anusandhan (Deemed to be University), Bhubaneswar, Odisha, India Monalisa Mohanty Department of ECE, Siksha O Anusandhan, Bhubaneswar, India Alivarani Mohapatra KIIT Deemed to be University, Bhubaneswar, India Ashish Kumar Mourya Department of Computer Science and Engineering, School of ICT, Gautam Buddha University, Noida, India G. Nagarajan Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Kattankulathur, Chennai, Tamil Nadu, India Manoj Kumar Naik Faculty of Engineering and Technology, Siksha O Anusandhan, Bhubaneswar, Odisha, India Sura T. Nassir Renewable Energy Department, Environment and Energy Sciences College, Al. Karkh University of Sciences, Baghdad, Iraq Niranjan Nayak Department of Electrical and Electronics Engineering, Siksha ‘O’ Anusandhan (Deemed to be University), Bhubaneswar, India


Pravati Nayak Siksha ‘O’ Anushandhan Deemed to be University, Bhubaneswar, India Soumya Ranjan Nayak Amity School of Engineering and Technology, Amity University Uttar Pradesh, Noida, India Destalem Negusse Faculty School of Computing, EIT-M, Mekelle University, Mekelle, Ethiopia S. T. Nikhil Sidharth Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India K. J. Nikhil Department of ETE, RV College of Engineering, Bengaluru, India Gopinath Palai Gandhi Institute for Technological Advancement, Bhubaneswar, Orissa, India Amiya Ranjan Panda School of Computer Engineering, KIIT Deemed To Be University, Bhubaneswar, Odisha, India Nibedan Panda Department of CSE, Presidency University, Bengaluru, Karnataka, India Rutuparna Panda Department of Electronics and Telecommunication Engineering, Veer Surendra Sai University of Technology, Burla, Odisha, India Amit Kant Pandit School of Electronics and Communication, Shri Mata Vaishno Devi University, Katra, Jammu & Kashmir, India Trilochan Panigrahi Department of ECE, NIT Goa, Ponda, India Rohan Patel Department of Computing Technologies, SRM Institute of Science and Technology, Chennai, India Bibudhendu Pati Department of Computer Science, Ramadevi Women’s University, Bhubaneswar, Odisha, India S. K. Pati Silicon Institute of Technology, Bhubaneswar, India Gyana Ranjan Patra Department of ECE, Siksha O Anusandhan, Bhubaneswar, India Binod Kumar Pattanayak Department of Computer Science and Engineering, Siksha O Anusandhan Deemed to be University, Bhubaneswar, Odisha, India S. K. Pattnaik School of Electronics Engineering, KIIT University, Bhubaneswar, India Sukonya Phukan Jorhat Engineering College, Jorhat, India T. Poovizhi Saveetha School of Engineering, Chennai, Tamil Nadu, India B. S. Premananda Department of ETE, RV College of Engineering, Bengaluru, India


S. Adarsh Rag Department of Nanotechnology, Institute of Electronics and Communication Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, Tamilnadu, India Manish Raj C.V. Raman Global University, Bhubaneswar, Odisha, India Usharani Raut International Institute of Information Technology, Bhubaneswar, India J. Refonaa Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India M. S. Roobini Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India Swarup Roy Department of Computer Science and Engineering, National Institute of Technology Durgapur, Durgapur, India Santanu Sahoo Department of ECE, Siksha O Anusandhan, Bhubaneswar, India Sipra Sahoo Department of Computer Science and Engineering, Siksha ‘O’ Anusandhan Deemed To Be University, Bhubaneswar, Odisha, India Benudhar Sahu Institute of Technical Education and Research, SOA (Deemed to be University), Bhubaneswar, India Prasant Kumar Sahu Indian Institute of Technology, Argul, Bhubaneswar, Odisha, India Sanjay Kumar Sahu School of Electronics and Electrical Engineering, Lovely Professional University, Phagwara, India Tapaswini Sahu BPUT, Rourkela, Odisha, India S. Sai Srinivas Department of Information Technology, Aditya Institute of Technology and Management, Tekkali, Andhra Pradesh, India S. R. Samal Silicon Institute of Technology, Bhubaneswar, India Sunita Samanta Department of ECE, Siksha O Anusandhan, Bhubaneswar, India Archana Sarangi Department of Computer Science and Engineering, ITER, Siksha ‘O’ Anusandhan (Deemed to be University), Bhubaneswar, Odisha, India T. R. Saravanan Department of Computational Intelligence, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India Kandarpa Kumar Sarma Department of Electronics and Communication Engineering, Gauhati University, Guwahati, India Kumaresh Sarmah Department of Electronics and Communication Technology, Gauhati University, Guwahati, Assam, India


R. Sathya Bama Krishna Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India Amitabh Satpathy Institute of Technical Education and Research, SOA (Deemed to be University), Bhubaneswar, India Sai Shiva Shankar Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India

Ankita Sharma Department of Computer Science and Engineering, Sri Sai College of Engineering and Technology, Pathankot, India Arvind Kumar Sharma Department of Computer Science and Engineering, MIT Kota, Kota, India Marvi Sharma Department of Computer Science and Engineering, Sri Sai College of Engineering and Technology, Pathankot, India Nidhi Sharma Department of Computer Science and Engineering, NIMS University, Jaipur, India Adiba Sharmeen Department of Electronics and Communication Engineering, ITER, Siksha ‘O’ Anusandhan (Deemed To Be University), Bhubaneswar, India Mohammed Hadi Shinen Department of Physics, College of Science, University of Babylon, Babylon, Iraq Aditi Anupam Shukla SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India Jyoti Singh Jorhat Engineering College, Jorhat, India Koushlendra Kumar Singh Department of Computer Science and Engineering, National Institute of Technology, Jamshedpur, Jharkhand, India Gino Sinthia Saveetha School of Engineering, Chennai, Tamil Nadu, India Shashank Sirmour School of Computer Engineering, KIIT Deemed To Be University, Bhubaneswar, Odisha, India A. Siva Prasad Department of Computer Science, Govt. Degree College, Tekkali, Andhra Pradesh, India K. Sivaparvathi Department of Information Technology, Aditya Institute of Technology and Management, Tekkali, Andhra Pradesh, India Er. Sorab Kumar Department of Computer Science and Engineering, Sri Sai College of Engineering and Technology, Pathankot, India Mallika Srivastava C.V. Raman Global University, Bhubaneswar, Odisha, India T. Subetha Departmemt of IT, BVRIT HYDERABAD College of Engineering for Women, Hyderabad, India


L. Sujihelen Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India Bhasi Sukumaran SRM Medical College Hospital & Research Centre, Kattankulathur, Tamil Nadu, India Arpit Suman Department of Electronics and Communication Engineering, ITER, Siksha ‘O’ Anusandhan (Deemed To Be University), Bhubaneswar, India Shubam Sumbria Shri Mata Vaishno Devi University, Katra, Jammu & Kashmir, India A. Swain System Engineer, Tata Consultancy Services, Bangalore, India K. P. Swain Gandhi Institute of Technological Advancement, Bhubaneswar, India Nisha Thakur Department of Computer Science and Engineering, Sri Sai College of Engineering and Technology, Pathankot, India Shashank Thakur SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India Anand Sreekantan Thampy Centre for Nanotechnology Research, Vellore Institute of Technology, Vellore, India Madhab Chandra Tripathy Department of Instrumentation and Electronics Engineering, CET, BPUT, Rourkela, Odisha, India Kamal Upreti Department of Computer Science & Engineering, Dr. Akhilesh Das Gupta Institute of Technology & Management, Delhi, India D. Usha Nandhini Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India Binu Kuriakose Vargis Department of Information Technology, Inderprastha Engineering College, Ghaziabad, India Ashutosh Vashishtha Shri Mata Vaishno Devi University, Katra, Jammu & Kashmir, India D. Vathana Department of Computing Technologies, SRM Institute of Science and Technology, Chennai, India S. Vigneshwari Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India A. Viji Amutha Mary Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India Aneesh Wunnava Faculty of Engineering and Technology, Siksha O Anusandhan, Bhubaneswar, Odisha, India Mogula Yeshasvi Departmemt of IT, BVRIT HYDERABAD College of Engineering for Women, Hyderabad, India


R. Yogitha Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India Kyongsik Yun California Institute of Technology, Pasadena, CA, USA

An Unsupervised Learning Approach Towards Credit Risk Modelling Using DFT Features and Gaussian Mixture Models Amit Kant Pandit, Ashutosh Vashishtha, Shubam Sumbria, and Shubham Mahajan Abstract One of the most important problems of the present time is to estimate the risk involved in lending financial resources relative to their returns. Credit risk can harm a lender by increasing collection costs and causing cash flow inconsistency. Lenders use credit risk modelling to assess the amount of credit risk associated with lending to borrowers. Financial statement analysis, default likelihood estimation, and machine learning are the main options for credit risk analysis models, and solving this kind of problem using machine learning techniques is known as credit risk modelling. In this process, data containing many features describing a person's financial condition is fitted into a model that classifies the borrower as a defaulter or non-defaulter. In this study, we used an unsupervised machine learning technique for this task. First, we applied two feature selection methods, namely Pearson's correlation coefficient and the chi-square test, to identify and remove features that are less informative for the task. Feature selection is a standard pre-processing step in designing advanced solutions because it not only reduces the dataset's dimensionality but also improves a model's performance measures. We also applied the fast Fourier transform (FFT) algorithm to obtain the discrete Fourier transform (DFT) of all the selected features, used as supplementary, artificial feature vectors for the model. To deal with class imbalance, we used a specific variant of the synthetic minority oversampling technique (SMOTE), namely Gaussian-SMOTE. Finally, we applied a Gaussian mixture model (GMM) with distinct values for its parameters, such as the 'covariance type'. With this and the optimally selected parameters, our methodology was able to achieve a classification accuracy of 85.48%. Keywords Credit risk modelling · Unsupervised machine learning · Gaussian mixture model · Discrete Fourier transform · Gaussian-SMOTE · Feature selection

A. K. Pandit (B) · A. Vashishtha · S. Sumbria · S. Mahajan Shri Mata Vaishno Devi University, Katra, Jammu & Kashmir 182320, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_1


1 Introduction It has been an ages-long trend to lend financial or other resources within most communities around the globe. Many studies have proven that this plays a critical role in the balance of the financial growth of the respective group. But, along with this, there also co-exists a problem of risk in same, which refers to the fact that the shuffling and unsteady certainty in returns of such resources after the lends are made exists as a hindrance to this process. Hence, it must be an undeniable need of humankind to resolve the issue. With this said, AI researchers have also made efforts in the same for many years using numerous tools and techniques. Among these are some very highly and competitively performing methodologies making it easier to trust upon technologies based on AI. Unsupervised learning approaches have already established themselves as a reliable set of methodologies. In this regard, Gaussian mixture models have proven to be one of the most popular strategies among AI researchers worldwide in a variety of applications. Some of the applications of GMM are anomaly detection, language identification, tracking the target in the video clip, and genre classification of songs. It is also commonly used in signal processing related tasks. Gaussian mixture models allocate data points to Gaussian distributions using the soft clustering method. With this established, our study proposes a robust methodology for the task of credit risk modelling using various pre-processing and data enhancement techniques like feature selection, up-sampling, feature modification, cleaning, etc., and used the unsupervised machine learning technique, the GMM. The results indicate the high potential of the proposed methodology and its robustness. The main objectives and contributions of this study are— • To develop a robust and efficient methodology for the task of credit risk modelling. • To study the effect of various parameters of several techniques within the pipeline of the implementation. • To study the performance of unsupervised learning-based ML techniques for the task, which is not explored much in previous studies. This paper is further divided into five sections. Introduction is explained in Sect. 1 followed by literature review in Sect. 2. Methodology is explained in Sect. 3. Results are discussed in Sect. 4, followed by conclusion in Sect. 5.
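For reference, the soft-clustering behaviour mentioned above can be stated compactly. The following is the standard Gaussian mixture formulation (a textbook identity, not something specific to this paper), with K = 2 components in our setting; it gives the mixture density and the posterior "responsibility" that softly assigns a data point x to component k:

$$p(\mathbf{x}) = \sum_{k=1}^{K} \pi_k\, \mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k), \qquad \sum_{k=1}^{K} \pi_k = 1,$$

$$\gamma_k(\mathbf{x}) = \frac{\pi_k\, \mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)}{\sum_{j=1}^{K} \pi_j\, \mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}_j, \boldsymbol{\Sigma}_j)}.$$

The covariance matrices Σ_k are exactly the quantities whose structure is controlled by the 'covariance type' parameter discussed later in the paper.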

2 Previous Works With the combination of the two very different fields that are finance and data science, extensive research on credit risk modelling has been carried out. The primary task of credit risk analysis is to predict whether a loanee is a defaulter or not. Some of the past studies proposed conventional approaches, whereas others enforced some extremely complicated but also better models. Besides model implementation, data pre-processing techniques such as statistical data conversion and feature selection


techniques also play an important role. Only useful features can score highly for the target category classification, and having numerous highly correlated features in the data is not very useful for classification models. Thus, it is highly recommended to use data pre-processing techniques to obtain appropriate results. Arora and Kaur [1] proposed a Bolasso-based consistent feature selection enabled random forest (RF) algorithm for classifying the target class. They also applied various feature selection methods such as chi-square, gain ratio, and ReliefF, and compared other Bolasso-enabled classification algorithms such as naive Bayes (NB), support vector machine (SVM), and K-nearest neighbour (KNN). In 2020, Wang et al. [2] carried out a comparative analysis of different ML classification algorithms such as NB, logistic regression, random forest, decision tree, and KNN. By measuring performance in terms of accuracy, precision, recall, and AUC, they conclude that RF performs best among all the classifiers. Shen et al. [3] proposed an improved SMOTE algorithm that reduces the drawbacks of previous versions in dealing with imbalanced datasets; long short-term memory (LSTM) and adaptive boosting (AdaBoost) are combined into a deep learning ensemble classifier, and several test methods along with the area under the curve (AUC) are used to evaluate and improve the model's performance. Jadwal et al. [4] in 2020 introduced an integrated classification and clustering algorithm: they first group the data into clusters of similar nature and then apply various classification algorithms to these clusters. In this way, noise in the data is reduced and the results show an improvement in the performance of the model. Munkhdalai et al. [5] in 2021 introduced a state-of-the-art model called the partially interpretable adaptive Softmax (PIA-Soft) regression model. This model comprises two parts, linear and nonlinear, to capture the linear and nonlinear relations between input features and the target class, respectively. Suhaimi and Abas [6] in 2020 selected 61 different studies based on selection criteria and data extraction and compiled their results. They conclude that supervised machine learning (SML) is most commonly used for the classification of spam and text as well as in healthcare and medical research, and that among all SML methods, artificial neural networks (ANN) and SVM are the two best-performing classification algorithms. Also, Bhatore et al. [7] surveyed 136 studies from 1993 to 2019 and observed that SVM and hybrid neural networks are more accurate for credit risk scoring. One important conclusion is that there is a lack of public datasets for this research domain, which is a point of concern for researchers.

3 Methodology We focussed our experimentation primarily on three different open-source datasets, which are considered a benchmark for credit risk modelling. These are Taiwan [8], the Australian [9], and the Japanese [10]. Also, it can be observed that Taiwan consists of a much larger number of records than the other two and provides better insight into the model for predictions (Table 1).


Table 1 Summary of the datasets used

Dataset      Total samples   Default/non-default samples   Number of attributes
Taiwan       30,000          6636/23364                    23 (X1–X23)
Australian   688             307/381                       14 (X1–X14)
Japanese     690             307/383                       15 (X1–X15)

Table 2 Variables removed by feature selection methods

Feature selection method   Dataset      Removed variables
Pearson's correlation      Taiwan       X7, X8, X9, X10, X11
Pearson's correlation      Australian   X10
Pearson's correlation      Japanese     X11, X5
Chi-square test            Taiwan       X4, X14
Chi-square test            Australian   X1, X4, X10, X11, X12
Chi-square test            Japanese     X1, X4, X7, X12, X13

Some of the features are also likely to be redundant or far less informative to be worth including in the model's computations. Hence, as the first part of our implementation, we applied feature selection to all three datasets. For this, we analysed two different algorithms, one using Pearson's correlation coefficient and one using the chi-square test, over all datasets. The two methods led to the removal of different variables from the datasets, as can be read from Table 2. It has been shown by several researchers that the Fourier-space representation of a series often carries distinct representational information, so it can reasonably be assumed that this representation might add information otherwise hidden in the input space. Thus, we applied the FFT algorithm to compute the DFT of the selected attributes in the datasets and appended the real part of the result to the original dataset. Since the model is not very computationally expensive, this addition should not affect performance considerably. Another observation is the highly imbalanced classes, which might prove to be a hurdle to the efficient working of any machine learning-based model. Hence, as the next step in our implementation, we applied the well-known oversampling algorithm SMOTE [11], or more specifically Gaussian-SMOTE [12], with the parameters set so that the minority sample population reaches about 0.6 times the size of the majority samples. It has been experimentally recommended to scale the data points before applying SMOTE for effective performance; thus, we also applied standard scaling to all the datasets. Figure 1 shows the class sizes for each label in all datasets before and after the application of Gaussian-SMOTE. In probabilistic machine learning and economics, latent variables have long been recognized as having high potential. The GMM is an unsupervised method that uses the cluster index as a latent variable, much like k-means clustering but without hard


Fig. 1 Minority class up-sampling using Gaussian-SMOTE

Fig. 2 Flow chart of the methodology implemented

cluster assignment, along with other refinements. Figure 2 represents the flow of the pipeline of the methodology implemented in this study.
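A minimal sketch of the pre-processing pipeline described above is given below for illustration. It is not the authors' code: the correlation threshold, the number of features kept by the chi-square selector, the axis along which the DFT is taken (here, across each record's selected features), and the use of imbalanced-learn's plain SMOTE as a stand-in for Gaussian-SMOTE are all assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from imblearn.over_sampling import SMOTE

def select_features(X_df, y, method="pearson", corr_threshold=0.9, k=15):
    """Reduce a pandas DataFrame of features with one of the two selection
    methods compared in the paper (threshold/k are illustrative values)."""
    if method == "pearson":
        corr = X_df.corr().abs()
        upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
        drop = [c for c in upper.columns if (upper[c] > corr_threshold).any()]
        return X_df.drop(columns=drop).values
    # chi-square test: inputs must be non-negative, so rescale first
    X_pos = MinMaxScaler().fit_transform(X_df)
    mask = SelectKBest(chi2, k=min(k, X_df.shape[1])).fit(X_pos, y).get_support()
    return X_df.values[:, mask]

def build_inputs(X_df, y, method="pearson"):
    X_sel = select_features(X_df, y, method)
    # append the real part of the DFT of each record's selected features (via FFT)
    X_aug = np.hstack([X_sel, np.real(np.fft.fft(X_sel, axis=1))])
    # scale, then oversample the minority class to ~0.6x the majority size;
    # plain SMOTE stands in here for the Gaussian-SMOTE variant used in the paper
    X_std = StandardScaler().fit_transform(X_aug)
    return SMOTE(sampling_strategy=0.6, random_state=0).fit_resample(X_std, y)
```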

4 Experiments, Results, and Discussion We implemented the GMM as available in Python's Scikit-Learn module, with the number of components set to 2 and all other parameters left at their default values except for 'covariance type', which offers four options. The type 'full' allows each component to have its own general covariance matrix, whereas with 'tied' all components share one general covariance matrix. Similarly, 'diag' assigns a diagonal covariance matrix to each component, while with 'spherical' each component gets its own single variance.
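A minimal sketch of the covariance-type sweep follows, using X_bal and y_bal as produced by the earlier pre-processing sketch. The mapping from unlabelled clusters to classes is an assumption: since the GMM does not know which component corresponds to defaulters, the better of the two possible assignments is reported.

```python
from sklearn.mixture import GaussianMixture
from sklearn.metrics import accuracy_score

def gmm_accuracy(X, y, covariance_type):
    gmm = GaussianMixture(n_components=2, covariance_type=covariance_type,
                          random_state=0).fit(X)
    pred = gmm.predict(X)
    # clusters are unlabelled: try both cluster-to-class assignments, keep the better
    return max(accuracy_score(y, pred), accuracy_score(y, 1 - pred))

for cov in ("full", "tied", "diag", "spherical"):
    print(cov, round(gmm_accuracy(X_bal, y_bal, cov), 4))
```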


To set the optimal value, we did an extensive search with all parameters, and the accuracy values so obtained for all of the datasets are shown in Table 3. From Table 3, we can observe that the 'full' covariance type led to the highest accuracy measure for the Japanese and Australian, whereas for the Taiwan dataset, we achieved maximum accuracy of 83.43% with the covariance type set as 'tied' with Pearson's correlation feature selection method. The 'diag' covariance type resulted in the worst model performance given the data and hyper-parameters with accuracy measures 47.65%, 46.55%, and 51.91% in the Taiwan, Australian, Japanese dataset, respectively (Fig. 3).

Table 3 Accuracy corresponding to different parameters

Dataset      Feature selection method   Covariance type   Accuracy
Taiwan       Pearson's correlation      Full              0.8343
Taiwan       Pearson's correlation      Tied              0.5192
Taiwan       Pearson's correlation      Diag              0.8142
Taiwan       Pearson's correlation      Spherical         0.4765
Taiwan       Chi-square test            Full              0.8343
Taiwan       Chi-square test            Tied              0.4765
Taiwan       Chi-square test            Diag              0.4765
Taiwan       Chi-square test            Spherical         0.7837
Australian   Pearson's correlation      Full              0.4808
Australian   Pearson's correlation      Tied              0.8548
Australian   Pearson's correlation      Diag              0.4655
Australian   Pearson's correlation      Spherical         0.4808
Australian   Chi-square test            Full              0.4808
Australian   Chi-square test            Tied              0.7342
Australian   Chi-square test            Diag              0.5781
Australian   Chi-square test            Spherical         0.5192
Japanese     Pearson's correlation      Full              0.4655
Japanese     Pearson's correlation      Tied              0.5191
Japanese     Pearson's correlation      Diag              0.5191
Japanese     Pearson's correlation      Spherical         0.5191
Japanese     Chi-square test            Full              0.5845
Japanese     Chi-square test            Tied              0.7207
Japanese     Chi-square test            Diag              0.5191
Japanese     Chi-square test            Spherical         0.5191


Fig. 3 Accuracy corresponding to different parameters

5 Conclusion Credit risk modelling is in high demand and of high value; most banks incur major losses each year because of loan defaults. This issue has been dealt with for a long time from several directions, and there have been several efforts on behalf of the AI community to demonstrate the possibility of developing an ideal approach for the domain. The purpose of this study is to find an optimal model for credit risk detection, and to get successful outcomes it presents a formal approach to the problem. We analysed all three datasets to gain useful insights and performed pre-processing on all of them. We then applied feature selection methods to identify features that are less informative for the task and removed them before the next step in our model pipeline. We then applied the fast Fourier transform to obtain the discrete Fourier transform of all the selected features as additional, artificial vectors for the model. Also, we used Gaussian-SMOTE to solve the class imbalance issue of our datasets. At the end of the pipeline, we implemented the GMM with various values for its 'covariance type' parameter. Finally, through its productive success on three separate datasets, we demonstrated the credibility of our methodology. As a result, it is yet another validation that AI-based techniques have immense potential and important practical aspects in resolving problems such as credit risk modelling. Funding Acknowledgement This research was supported by the IMPRESS Grants of the Indian Council of Social Science Research, Government of India.


References
1. Arora N, Kaur PD (2020) A Bolasso based consistent feature selection enabled random forest classification algorithm: an application to credit risk assessment. Appl Soft Comput 86:105936
2. Wang Y et al (2020) A comparative assessment of credit risk model based on machine learning - a case study of bank loan data. Procedia Comput Sci 174:141–149
3. Shen F et al (2021) A new deep learning ensemble credit risk evaluation model with an improved synthetic minority oversampling technique. Appl Soft Comput 98:106852
4. Jadwal PK, Jain S, Agarwal B (2020) Financial credit risk evaluation model using machine learning-based approach. World Rev Entrepreneurship Manag Sustain Dev 16(6):576–589
5. Munkhdalai L et al (2021) A partially interpretable adaptive Softmax regression for credit scoring. Appl Sci 11(7):3227
6. Suhaimi NAD, Abas H (2020) A systematic literature review on supervised machine learning algorithms. PERINTIS eJ 10(1):1–24
7. Bhatore S, Mohan RYR (2020) Machine learning techniques for credit risk evaluation: a systematic literature review. J Bank Financ Technol 4(1):111–138
8. https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients
9. https://archive.ics.uci.edu/ml/datasets/Statlog+%28Australian+Credit+Approval%29
10. https://archive.ics.uci.edu/ml/datasets/Japanese+Credit+Screening
11. Chawla NV et al (2002) SMOTE: synthetic minority over-sampling technique. J Artif Intell Res 16:321–357
12. Lee H, Kim J, Kim S (2017) Gaussian-based SMOTE algorithm for solving skewed class distributions. Int J Fuzzy Logic Intell Syst 17(4):229–234

Human Activity Detection-Based Upon CNN with Pruning and Edge Detection Marvi Sharma and Dinesh Kumar Garg

Abstract Human activity detection is a basic requirement, especially within smart environments like smart homes. The primary requirement of a smart environment is energy conservation. To achieve this, the first target is to detect human activities accurately using a neural network-based approach. This paper presents a unique combination of CNN with pruning and an edge detection mechanism to accurately detect human activities. The entire process of human activity detection is partitioned into a set of phases. In the first phase, data acquisition is performed; the CNN with pruning and edge detection utilized the KTH dataset derived from Kaggle. In the second phase, pre-processing is performed to eliminate noise from the image frames. In the third phase, edge detection and feature extraction are carried out. In the last phase, classification is performed. The result of the human activity detection mechanism is expressed in the form of classification accuracy, and a high classification accuracy of over 95% is observed. Keywords Human activity · CNN · Pruning · Feature extraction · Classification accuracy

1 Introduction Human activity detection using the machine learning mechanism offers advantages. To accomplish this, wearable device is employed. Wearable devices contain sensors to monitor the activities performed by the user. Li et al. [1] discussed the detection of human activity using wearable sensors. Wearable sensors detect the activities and record them onto the dataset. The dataset is then analysed using the machine learning-based approach. LSTM-based mechanism yields accurate results but is slow M. Sharma (B) · D. K. Garg Department of Computer Science and Engineering, Sri Sai College of Engineering and Technology, Pathankot, India D. K. Garg e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_2


Fig. 1 Process of human activity detection: data acquisition (Kaggle), pre-processing (contrast slicing, back elimination), segmentation (pruning, edge detection, feature extraction), and classification (feature selection, CNN)

for larger datasets. The result of the detection is expressed in the form of classification accuracy. Khelalef et al. [2] provide the mechanism of human activity detection by the use of deep learning. The deep learning approach requires larger dataset. Feature extraction from such approaches is slow but accurate. To avoid complexity, entire process of extracting features is portioned into layers. In the initial phase, dataset is presented to the input layer, the processing layer extracts the features, and output layer classifies the result. Overall classification accuracy will be high in case preprocessing is successfully performed. The overall process of detection of human activity is given in Fig. 1. The rest of the paper is organized as under Sect. 2 gives the literature survey of the techniques used for human activity detection, Sect. 3 gives the problems discovered from the literature, Sect. 4 gives the proposed mechanism, Sect. 5 gives the performance analysis, and Sect. 6 gives the conclusion.

2 Literature Survey Oukrich et al. [3] detect human activity using an ontology-based mechanism. This mechanism is based on fuzzy logic, and human activities are detected with a pre-trained model; a hold-out ratio of 0.1 was used, and a classification accuracy of 95% was obtained. Marinho et al. [4] discussed a human activity recognition mechanism using machine learning techniques. A layered approach is followed for the detection of human activities, and a classification accuracy of over 90% was achieved. Bharathi [5] detects human activities from the Kaggle dataset using both deep and machine learning mechanisms; both were used since machine learning operates on smaller datasets and deep learning operates on larger datasets, and the result of the approach was presented in the form of classification accuracy. Xu et al. [6] presented a convolutional neural


network-based mechanism for the detection. This human activity detection mechanism follows a set of steps including pre-processing, segmentation, and classification, with the classification phase producing a classification accuracy of 94%. Sun et al. [7] proposed a machine learning and deep neural network-based mechanism for the detection of human activity; the activity recognition resulted in better detection of human activity in terms of classification accuracy. Sarma et al. [8] use an LSTM network for human activity detection. Pre-processing is used to transform the raw data into a useful and efficient format, and the result of the approach was presented through a classification accuracy in the range of 92%.

3 Problem Definition The approach discussed in [8–12] extracts the features from the image-based or text-based dataset. However, in case video dataset is presented, all the described techniques leads to lower classification accuracy or may not operate at all. The extracted problems are listed as under • Image frame extraction from the video dataset is missing. • Classification accuracy calculated though CNN without edge detection and pruning is low. • Pre-trained model is not used; hence, detection and classification are slow.

4 Methodology of Work The proposed mechanism is based upon the CNN with pruning and edge detection. The dataset used is extracted from KTH dataset. First, image frames are extracted from the video-based dataset. After extraction of image frames, noise from the image frames is eliminated using pruning and edge detection. After the noise from the image is removed, feature extraction and selection take place. The model for this is known as convolution neural network. The process of feature extraction and selection takes place through iterative approach. The methodology of the proposed work is given in Fig. 2.


Fig. 2 Proposed methodology: data acquisition (KTH dataset, video selection), pre-processing (contrast slicing, back elimination), segmentation (necessary parts computed using edge detection, pruning for feature point selection), and classification (CNN, result reported as classification accuracy)

Algorithm Human_Activity_Detection
1. KTH data acquisition
2. Video to image frame conversion:
   Image_i = Video2Image(KTH_Dataset_i)
3. Pre-processing
   Contrast slicing (each colour channel is doubled):
     Red_Component(Image_i) * 2
     Green_Component(Image_i) * 2
     Blue_Component(Image_i) * 2
   Back elimination:
     Image_i = 255 - Background_i
4. Segmentation (edge detection and feature point selection)
   Retain the image components whose weight factors exceed the threshold:
     if (Weight_i > Threshold)
       Image_i = Image_i + Image_(i+1)
     end if
5. Classification
   Compare the extracted features with the pre-trained model for result prediction.
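The algorithm above can be mirrored in code roughly as follows. This is an illustrative sketch only, not the authors' implementation: the frame sampling step, the Canny thresholds, the 64 x 64 input size, the choice of a reference background frame, and the tiny CNN are placeholder choices.

```python
import cv2
import numpy as np
import torch.nn as nn

def video_to_frames(path, step=10):
    """Extract every `step`-th frame from a KTH video file."""
    cap, frames, i = cv2.VideoCapture(path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            frames.append(frame)
        i += 1
    cap.release()
    return frames

def preprocess(frame, background):
    """Contrast stretch, subtract a reference background frame, keep edges."""
    boosted = cv2.convertScaleAbs(frame, alpha=2.0)   # contrast slicing (x2)
    fg = cv2.absdiff(boosted, background)             # back elimination
    gray = cv2.cvtColor(fg, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                 # edge detection
    return cv2.resize(edges, (64, 64)).astype(np.float32) / 255.0

class SmallCNN(nn.Module):
    """Placeholder classifier, e.g. for skating/running/walking as in Table 1."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                             # x: (N, 1, 64, 64)
        return self.head(self.features(x).flatten(1))
```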

5 Performance Analysis and Results The result of the proposed mechanism is presented in the form of classification accuracy, detection rate, specificity, and sensitivity. The performance analysis is given in Table 1. The plot to clearly visualize the result is given in Fig. 3. Clearly, the result from


Table 1 Classification accuracy result

Activity detected with KTH dataset   Classification accuracy without edge detection and pruning (%)   Classification accuracy with edge detection and pruning (%)
Skating                              80                                                                92
Running                              82                                                                93
Walking                              85                                                                94

Fig. 3 Classification accuracy comparison


the KTH dataset when walking is performed is higher than for skating and running. The detection rate is another significant attribute determining the worth of the CNN with edge detection and pruning-based mechanism, and the detection rate was also improved using this mechanism. The result obtained with the discussed approach is given in Table 2, and the visualization of the result for the detection of human activity is presented in Fig. 4.

Table 2 Recognition rate

Activity detected with KTH dataset   Recognition rate without edge detection and pruning (%)   Recognition rate with edge detection and pruning (%)
Skating                              69                                                         90
Running                              69                                                         91
Walking                              72                                                         92


Fig. 4 Recognition rate with existing and proposed mechanism


The recognition rate is improved by a margin of approximately 16%, so the mechanism clearly shows improvement over the existing CNN-based mechanism. The last comparison of results is in the form of specificity and sensitivity; both parameters enhance the classification accuracy. The result of specificity and sensitivity is given in Table 3, and the plot for Table 3 is given in Fig. 5. The specificity and sensitivity obtained from the edge detection and pruning-based CNN are much better as compared to the existing approach, proving the worth of the study.

Table 3 Specificity and sensitivity with proposed and existing work

Activity detected with KTH dataset   Specificity without edge detection and pruning (%)   Sensitivity without edge detection and pruning (%)   Specificity with edge detection and pruning (%)   Sensitivity with edge detection and pruning (%)
Skating                              62                                                    70                                                   70                                                 84
Running                              65                                                    71                                                   75                                                 85
Walking                              70                                                    72                                                   79                                                 89

Fig. 5 Specificity and sensitivity for the proposed and existing mechanism



6 Conclusion and Future Scope The approach used for the detection of human activities in the proposed work is CNN with pruning and edge detection. The dataset used is the KTH dataset derived from Kaggle. The approach first converts the video-based dataset into image frames and then lists the image frames for pre-processing. Within pre-processing, contrast slicing and back elimination are performed. In the segmentation-based mechanism, edge detection and elimination of unnecessary image areas are performed. At the end, a pre-trained CNN model is used for performing classification. Overall, the results in terms of classification accuracy, recognition rate, specificity, and sensitivity were improved by a significant margin. The proposed model was implemented on a small-scale dataset, and in future, a large dataset can be tested on the proposed model.

References
1. Li F, Shirahama K, Nisar MA, Köping L, Grzegorzek M (2018) Comparison of feature learning methods for human activity recognition using wearable sensors. Sensors (Switzerland) 18(2). https://doi.org/10.3390/s18020679
2. Khelalef A, Ababsa F, Benoudjit N (2019) An efficient human activity recognition technique based on deep learning 29(4):702–715. https://doi.org/10.1134/S1054661819040084
3. Oukrich N, Cherraqi EB, Maach A (2018) Human daily activity recognition using neural networks and ontology-based activity representation. Lect Notes Networks Syst 37:622–633. https://doi.org/10.1007/978-3-319-74500-8_57
4. Marinho LB, de Souza Junior AH, Filho PPR (2017) A new approach to human activity recognition using machine learning techniques. Adv Intell Syst Comput 557:529–538. https://doi.org/10.1007/978-3-319-53480-0_52
5. Bharathi B, Bhuvana J (2020) Human activity recognition using deep and machine learning algorithms. Int J Innov Technol Explor Eng 9(4):2460–2466. https://doi.org/10.35940/ijitee.c8835.029420
6. Xu W, Pang Y, Yang Y (2018) Human activity recognition based on convolutional neural network. In: 2018 24th international conference pattern recognition, pp 165–170
7. Sun J, Fu Y, Li S, He J, Xu C, Tan L (2018) Sequential human activity recognition based on deep convolutional network and extreme learning machine using wearable sensors, vol 2018, no 1
8. Sarma N, Chakraborty S, Banerjee DS (2019) Learning and annotating activities for home automation using LSTM. In: 2019 11th international conference on communication systems and networks, COMSNETS 2019, May 2019, pp 631–636. https://doi.org/10.1109/COMSNETS.2019.8711433
9. Bevilacqua A, Macdonald K, Rangarej A (2018) Human activity recognition with convolutional neural networks, September 2018
10. Baradel F, Neverova N, Wolf C, Mille J, Mori G (2018) Object level visual reasoning in videos. In: Lecture notes computer science (including subseries lecture notes artificial intelligent lecture notes bioinformatics), LNCS, vol 11217, pp 106–122. https://doi.org/10.1007/978-3-030-01261-8_7
11. Soliman M, Abiodun T, Hamouda T, Zhou J, Lung C (2013) Smart home: integrating internet of things with web services and cloud computing, pp 317–320. https://doi.org/10.1109/CloudCom.2013.155
12. Ieee F, Ieee M (2017) Multidimensional optical sensing and imaging systems (MOSIS): from macro to micro scales, pp 1–25

Improvement in Breast Cancer Detection Using Deep Learning Manvi Gupta, Shubham Mahajan, Anil Kumar Bhardwaj, and Amit Kant Pandit

Abstract It is essential to recognize breast cancer as early as possible. Breast malignancy is the most common cancer in women around the world, with nearly 1.68 million new cases diagnosed in 2012, representing around 24.8% of all tumours in women. A growing global adoption of Western lifestyles and practices has been associated with an increase in worldwide cancer rates. In 2020, an estimated 276,500 new cases of invasive breast cancer were diagnosed in women in the USA, as well as 48,525 new cases of non-invasive (in situ) breast cancer. This cancer can be investigated by various tests, including mammogram, radiology scan, biopsy, and MRI. A mammogram is an X-ray test of the breast, and mammography is the method used here. Mammography is the process of using low-energy X-rays (generally around 30 kVp) to examine the human breast for diagnosis and screening. A deep convolutional neural network (DCNN) is utilized for feature extraction. Further, patch extraction and pixel-based extraction are used to detect breast cancer at an early stage. In patch-level abstraction, a 'patch-wise' network acts as an auto-encoder that extracts the most salient features of image patches, and in 'pixel-level' abstraction, the presence of cancer is detected using pixel values; this is a higher-level morphological idea that can be applied to images and is particularly useful for images such as diagrams and maps, whereas thinning for the purpose of text recognition appears less appropriate.

1 Introduction Breast cancer is a common cause of death among women around the world. It has resulted in more deaths than preventable diseases like tuberculosis M. Gupta (B) · S. Mahajan · A. K. Bhardwaj · A. K. Pandit School of Electronics and Communication, Shri Mata Vaishno Devi University, Katra 182320, India A. K. Bhardwaj e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_3


or malaria. The International Agency for Research on Cancer (IARC), which is the research agency of the WHO (World Health Organization), and the American Cancer Society reported that over 17 million new cancer cases were registered worldwide in 2018. Breast cancer is among the most common cancers in women throughout the world, alongside lung, bowel (including anal), and stomach cancers. The vast majority of the 12–15% of women asked to return after an inconclusive screening mammogram undergo a second mammogram and/or ultrasound for clarification. After the additional scans, many of these findings are determined to be benign and only 9–18% of women are advised to undergo a needle biopsy for additional information; among these, only 15–40% yield a diagnosis of cancer. There is an unmet need to shift the balance of routine breast cancer screening towards more benefit and less harm [1]. In 2020, 2.3 million women were diagnosed with breast cancer and there were 685,000 deaths globally. As of the end of 2020, 7.8 million women were alive who had been diagnosed with breast cancer in the previous five years, making it the world's most prevalent cancer. Women lose more disability-adjusted life years (DALYs) to breast cancer worldwide than to any other type of cancer. Breast cancer occurs in every country of the world, in women at any age after puberty, but with increasing rates in later life. Breast cancer mortality changed little from the 1930s through to the 1970s; improvements in survival began in the 1980s in countries with early detection programmes combined with different modes of treatment to eradicate invasive disease [2–4]. Breast cancer can be diagnosed through the following methods:

• Mammogram/mammography
• Ultrasound
• MRI
• Biopsy

A mammogram is an X-ray of the breast, and in this research we focus on mammograms. Mammograms can be read with computer-aided detection (CAD) to detect or diagnose breast cancer [5]. CAD for mammography is used to interpret mammographic images and check for the presence of breast malignancy: the CAD framework converts a mammogram into digital form, after which computer software searches for abnormal areas of density, mass, or calcification. Mammography is the process of using low-energy X-rays (generally around 30 kVp) to examine the human breast for diagnosis and screening. This discussion is split into two main parts. The first part gives a short introduction to the different steps of a conventional machine learning technique, which includes enhancement, feature extraction, and classification; the second part covers deep learning procedures, with emphasis on multiview, that is craniocaudal (CC) and mediolateral oblique (MLO), mammographic data. Current deep learning pipelines can be trained for breast density classification, detection, and classification


of lesions within multiview automated mammographic data [3, 4, 6].

2 Literature Review Wu et al. [1] present a deep convolutional neural network for breast cancer screening exam classification, trained and evaluated on more than 200,000 exams (more than 1,000,000 images). Their network achieves an AUC of 0.895 in predicting the presence of cancer in the breast when tested on the screening population, and they attribute the high accuracy to a few technological advances [1]. This section gives information about the related work that has already been done. Fundamentally, two families of techniques are used to identify breast cancer: machine learning and deep learning. Much research has been conducted with machine learning, yet machine learning methods have some issues that are removed through deep learning, and this section covers both. The vast majority of breast cancers occur in women who are not at elevated risk, so excluding them from screening and screening only high-risk women would withhold the benefits of early detection from most women who develop breast cancer. Guideline panels should not make decisions that exclude women from screening; women should be provided with accurate information, so that they can make informed choices and have unrestricted access to screening if that is their preference [6]. Recent work has shown that convolutional networks can be substantially deeper, more accurate, and more efficient to train if they contain shorter connections between layers close to the input and those close to the output. Building on this observation, the dense convolutional network (DenseNet) connects each layer to every other layer in a feed-forward fashion: whereas CNNs with L layers have L connections, one between each layer and its subsequent layer, DenseNet has L(L + 1)/2 direct connections [6–8]. The DIB-MG method is a deep convolutional neural network, a deep learning network that specializes in image recognition. Each sample (case) is an exam consisting of four images, and the cancer probabilities from DIB-MG are compared with the per-case ground-truth label for each sample in a training set [9].

3 Method Used In this research work, we have used particularly two methods to find out the presence of cancer in the breast; the two methods are as follows: • Patch-level abstraction method • Pixel-level abstraction method


• Patch-level abstraction method: In patch-level abstraction, a 'patch-wise' network acts as an auto-encoder that extracts the most salient features of image patches [3].
• Pixel-level abstraction method: In 'pixel-level' abstraction, the presence of cancer is detected using pixel values. This is a higher-level morphological idea that can be applied to images and is particularly useful for images such as diagrams and maps; thinning for the purpose of text recognition appears less appropriate, since a solid circle thins down to a single dot. This abstraction uses labels indicating the location of biopsied malignant and benign findings. The deep learning strategy is the process of identifying breast cancer; it comprises many hidden layers to deliver the most appropriate outputs. This research shows how to detect breast cancer at an early stage using this algorithm, which mostly draws on computer vision, image processing, clinical diagnosis, and natural language processing [10].

4 Proposed Methodology For each breast, we assign two binary labels: the absence/presence of malignant findings in the breast, and the absence/presence of benign findings in the breast. With left and right breasts, every exam has a total of four binary labels, and our goal is to produce four predictions corresponding to the four labels for every exam. As input, we take high-resolution images corresponding to the four standard screening views. For each image, we resized the input to a fixed size of 4096 × 3661 pixels for CC views and 4393 × 4258 pixels for MLO views, and we adjusted the pixel values to obtain an accurate and clear input that can increase the accuracy of the result. We trained a deep multi-view CNN. The overall network comprises two core modules: four view-specific columns, each based on the ResNet design, that yield a fixed-dimension hidden representation for each scan view, and two fully connected layers that map from the computed hidden representations to the output predictions. We used four ResNet-22 columns (ResNet-22 refers to our adaptation of a 22-layer ResNet, with additional alterations such as a larger kernel in the first convolutional layer) to compute a 256-dimensional hidden representation for each view. Weights are shared between the columns related to the L-CC/R-CC views [1–4], and weights are likewise shared between the columns applicable to L-MLO/R-MLO. To produce estimates for the various outputs, we concatenate both representations into a 512-dimensional vector and apply two fully connected layers. For the L-MLO and R-MLO


views, we do the same thing. To get our final predictions, we combine the probabilities predicted by the CC and MLO parts of the model. Here, we have also used auxiliary patch-level classification models. The requirements used in this research are as follows (a simplified sketch of this multi-view arrangement is given after the list):

• Python
• Pytorch
• Torchvision
• NumPy
• SciPy
• H5py
• Imageio
• Pandas
• Tqdm
• Opencv-python
• Test zone
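For illustration, a much-simplified sketch of the four-view arrangement described above is given here. It is not the authors' implementation: torchvision's ResNet-18 stands in for the ResNet-22 columns, and averaging the CC and MLO outputs is just one simple way of combining them; the 256-dimensional view representations, the weight sharing between CC views and between MLO views, and the two fully connected layers follow the description in the text, but every layer size is an assumption.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18  # torchvision >= 0.13

class ViewEncoder(nn.Module):
    """One column: grayscale mammogram -> 256-dimensional hidden representation."""
    def __init__(self, dim=256):
        super().__init__()
        trunk = resnet18(weights=None)
        trunk.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        trunk.fc = nn.Linear(trunk.fc.in_features, dim)
        self.trunk = trunk

    def forward(self, x):
        return self.trunk(x)

class MultiViewNet(nn.Module):
    """Weights shared across L-CC/R-CC and across L-MLO/R-MLO; the CC and MLO
    predictions (4 labels: benign/malignant per breast) are averaged."""
    def __init__(self, n_outputs=4):
        super().__init__()
        self.cc_enc, self.mlo_enc = ViewEncoder(), ViewEncoder()
        def head():  # two fully connected layers per branch
            return nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, n_outputs))
        self.cc_head, self.mlo_head = head(), head()

    def forward(self, l_cc, r_cc, l_mlo, r_mlo):
        cc = torch.cat([self.cc_enc(l_cc), self.cc_enc(r_cc)], dim=1)       # 512-d
        mlo = torch.cat([self.mlo_enc(l_mlo), self.mlo_enc(r_mlo)], dim=1)  # 512-d
        logits = 0.5 * (self.cc_head(cc) + self.mlo_head(mlo))
        return torch.sigmoid(logits)
```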

5 Data To utilize one of the pre-trained models, the input is required to consist of at least four images, at least one for each view. The original 12-bit X-rays are stored as resized 16-bit images to preserve the upsampled pixel intensities while still representing them accurately. A list of exam data is included in the dataset as an exam list prior to actual cropping. Every exam is represented as a dictionary with the following organization. The sample data that we utilized is taken from reference [1].
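For illustration only, an exam record of the kind described (a dictionary with at least one image per view) might look like the following; the field names and file names are hypothetical, not necessarily the exact schema expected by the pre-trained models.

```python
# One exam: at least one cropped image file per standard view.
exam = {
    "L-CC":  ["0_L_CC.png"],
    "R-CC":  ["0_R_CC.png"],
    "L-MLO": ["0_L_MLO.png"],
    "R-MLO": ["0_R_MLO.png"],
}
exam_list = [exam]  # the data set is a list of such dictionaries
```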

5.1 Channels The channel comprises four phases:

• Crop X-rays
• Calculate optimal centres
• Create graphs
• Run classifiers


5.2 Initialization Run the following commands to crop mammograms and compute information about the augmentation windows. Cropping mammograms trims each mammogram around the breast and discards the background to speed up image loading and the segmentation calculation, and saves each cropped image. In addition, it records extra data for each image and creates a new image list [11].
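A rough stand-in for the cropping step is sketched below: it trims a mammogram to the bounding box of the breast and discards the background. The threshold and margin values are arbitrary, and this is not the actual command-line tool referred to above.

```python
import numpy as np
import imageio

def crop_to_breast(path, out_path, threshold=10, margin=50):
    """Keep the bounding box of non-background pixels, with a small margin."""
    img = imageio.imread(path)                       # 16-bit grayscale mammogram
    mask = img > threshold                           # background is (near) zero
    rows, cols = np.any(mask, axis=1), np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    r0, c0 = max(r0 - margin, 0), max(c0 - margin, 0)
    r1 = min(r1 + margin, img.shape[0] - 1)
    c1 = min(c1 + margin, img.shape[1] - 1)
    imageio.imwrite(out_path, img[r0:r1 + 1, c0:c1 + 1])
    return (r0, r1, c0, c1)                          # extra data recorded per image
```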

6 Results Earlier works approach the task of breast cancer screening exam classification in two paradigms. In one paradigm, only exam-level, breast-level, or image-level labels are available: a CNN is first applied to each of the four standard views, and the resulting feature vectors are combined to produce a final prediction. Some of these works directly aggregate outputs from a patch-level classifier to form an image-level prediction; a significant limitation of such models is that information outside the annotated regions of interest is discarded. Other works apply the patch-level classifier as a stacked component, and the whole model is then optimized together. A drawback of this kind of design is the requirement for the entire model to fit in GPU memory for training, which restricts the size of the minibatch used (typically to one), the depth of the patch-level model, and how densely the patch-level model is applied. Here, we share the results obtained after performing the above work (Figs. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 and 16). Fig. 1 1-L-CC


Fig. 2 1-L-MLO

Fig. 3 1-R-CC

The figures above show the mammograms, while Figs. 17, 18, 19 and 20 show the predictions: approximate values for the images alone and for the images combined with heatmaps.


Fig. 4 1-R-MLO

Fig. 5 2-L-CC

7 Conclusion By using a large training set with breast-level and pixel-level labels, we built a neural network that can accurately classify breast cancer screening exams. It is hard to train this model in a completely end-to-end style with currently available hardware [12]. Despite the fact that our outcomes are encouraging, we recognize that the test set used in the experiments is fairly small, and our results require additional clinical validation. Typically, a screening exam is only the initial phase in a diagnostic pipeline, with the radiologist settling on a final determination and a decision about surgery only after reviewing supplementary diagnostic mammogram images and possible further


Fig. 6 2-L-MLO

Fig. 7 2-R-CC

imaging. Nevertheless, in our experiments, a hybrid model including both a neural network and expert radiologists outperformed either one alone, suggesting that the use of such a model could improve radiologists' sensitivity for breast cancer detection. Moreover, the design of our model is, by and large, simple; more complex and more careful models are feasible [13]. In addition, the task we investigated in this work, forecasting whether or not the patient had visible disease at the time of the screening mammography exam, is the least difficult among the various tasks of interest. As well as testing the utility of this model in the routine reading of screening mammography, an obvious next step would be to predict the future development of breast cancer before it is even visible to a trained human expert.

Fig. 8 2-R-MLO

Fig. 9 3-L-CC


Fig. 10 3-L-MLO

Fig. 11 3-R-CC


Fig. 12 3-R-MLO

Fig. 13 4-L-CC


Fig. 14 4-L-MLO

Fig. 15 4-R-CC


Fig. 16 4-R-MLO

Fig. 17 Graph of Image_predictions

Fig. 18 Data of image_predictions

Fig. 19 Graph of imageheatmaps_predictions


Fig. 20 Data of imageheatmaps_predictions

References 1. Wu N et al (2019) Deep neural networks improve radiologists’ performance in breast cancer screening. IEEE Trans Med Imaging 39(4):1184–1194 2. Nazeri K, Aminpour A, Ebrahimi M (2018) Two-stage convolutional neural network for breast cancer histology image classification. In: International conference image analysis and recognition, Springer, Cham 3. Shen L et al (2019) Deep learning to improve breast cancer detection on screening mammography. Sci Rep 9(1):1–12 4. Rakhlin A et al (2018) Deep convolutional neural networks for breast cancer histology image analysis. In: International conference image analysis and recognition, Springer, Cham 5. Agarap AFM (2018) On breast cancer detection: an application of machine learning algorithms on the wisconsin diagnostic dataset. In: Proceedings of the 2nd international conference on machine learning and soft computing 6. Geras KJ et al (2017) High-resolution breast cancer screening with multi-view deep convolutional neural networks. arXiv preprint arXiv: 1703.07047 7. Févry T et al (2019) Improving localization-based approaches for breast cancer screening exam classification. arXiv preprint arXiv: 1908.00615 8. Wu N et al (2018) Breast density classification with deep convolutional neural networks. In: 2018 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE 9. Kyono T, Gilbert FJ, van der Schaar M (2018) MAMMO: a deep learning solution for facilitating radiologist-machine collaboration in breast cancer diagnosis. arXiv preprint arXiv: 1811.02661 10. Calisto FM, Nunes N, Nascimento JC (2020) BreastScreening: on the use of multi-modality in medical imaging diagnosis. In: Proceedings of the international conference on advanced visual interfaces 11. Leland M et al (2018) UMAP: uniform manifold approximation and projection. J Open Sour Softw 3(29):861 12. Kooi T, Karssemeijer N (2017) Classifying symmetrical differences and temporal change for the detection of malignant masses in mammography using deep neural networks. J Med Imaging 4.4:044501 13. Becker AS, Marcon M, Ghafoor S, Wurnig MC, Frauenfelder T, Boss A (2017) Deep learning in mammography: diagnostic accuracy of a multipurpose image analysis software in the detection of breast cancer. Invest Radiol 52(7):434–440 14. Lotter W, Sorensen G, Cox D (2017) A multi-scale CNN and curriculum learning strategy for mammogram classification. Deep learning in medical image analysis and multimodal learning for clinical decision support. Springer, Cham, pp 169–177 15. Ribli D et al (2018) Detecting and classifying lesions in mammograms with deep learning. Sci Rep 8(1):1–7 16. Zhu W et al (2017) Deep multi-instance networks with sparse label assignment for whole mammogram classification. In: International conference on medical image computing and computer-assisted intervention. Springer, Cham 17. Shen L (2017) End-to-end training for whole image breast cancer diagnosis using an all convolutional design. arXiv preprint arXiv: 1711.05775


18. Teare P et al (2017) Malignancy detection on mammography using dual deep convolutional neural networks and genetically discovered false color input enhancement. J Digital Imaging 30(4):499–505 19. Gao Y et al (2019) New frontiers: an update on computer-aided diagnosis for breast imaging in the age of artificial intelligence. Am J Roentgenol 212(2):300–307 20. Harvey H et al (2019) The role of deep learning in breast screening. Curr Breast Cancer Rep 11(1):17–22 21. Rodriguez-Ruiz A, Lång K, Gubern-Merida A, Broeders M, Gennaro G, Clauser P, Sechopoulos I et al (2019) Stand-alone artificial intelligence for breast cancer detection in mammography: comparison with 101 radiologists. JNCI J Nat Cancer Inst 111(9):916–922

Measure to Tackle Forest Fire at Early Stage Using Applications of IoT Ankita Sharma and Er. Sorab Kumar

Abstract IoT provides a sensor-driven approach for accessing data from wearable devices. Abnormal situations can be tackled using a set of interconnected components where the present infrastructure may not be able to predict the situation. This paper provides insight into the techniques and frameworks used to analyse abnormal situations such as fires or floods. It presents a unique mechanism using a neural network together with sensors, such as heat and smoke sensors, to determine a fire situation within a forest area. In case detection does take place, the fire alarm system is deployed and the information is conveyed to the control centre. A comparative analysis of techniques is also provided to extract the best possible approach in terms of parameters. These parameters include energy efficiency, which is critical in the case of sensors, and fault tolerance for improving the reliability of the detection process. Keywords Fault tolerance · Energy conservation · Internet of things · Sensors

1 Introduction

Internet of things (IoT) consists of sensors that extract information from wearable devices and transfer it to the nearest cluster heads [1]. With the help of IoT, it is possible to integrate the physical world into computer systems, and with this integration it becomes possible to access and extract meaningful information using data mining. A neural network is another, layer-based approach for processing information. In [2], patient record monitoring with IoT was suggested. These records can be large, which means that with an IoT- and mining-based approach it becomes possible to extract useful patterns and conclude the disease. In [3, 4], proximity-based sensors attached to the human body extract useful patterns from the patient's body to determine the disease. In [5, 6], fire and smoke sensors are placed over a commercial area, and threshold limits are programmed into the control circuits. If heat or smoke exceeds the threshold levels, the heat and smoke sensors convey this information to the control circuit, which sounds the alarm when smoke or heat is beyond the dangerous levels.

A. Sharma (B) · Er. Sorab Kumar Department of Computer Science and Engineering, Sri Sai College of Engineering and Technology, Pathankot, India
Er. Sorab Kumar e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_4

2 Literature Survey The existing literature provides frameworks for tackling the issues associated with fire detection over remote areas using sensors deployed over the given premises. A wireless sensor-based fire detection system is proposed by Reddy et al. [7]. In this work the nodes are distributed, and the sensor nodes with the highest energy are appointed as cluster heads, each common to the nodes within a threshold distance; nodes thus transmit their packets toward the selected cluster head and onward to the base station. The fire detection system detects abnormal conditions in terms of heat and flames and then signals the alarm system to indicate a critical situation. Fire detection with the help of the Zigbee approach is presented in [8], where forest fire detection using Zigbee and GPRS is handled by an algorithm that detects humidity and temperature change. Detection through this approach is fast; however, due to the lack of energy efficiency, the lifetime of the network is poor. An intelligent fire detection system is proposed by Mobin et al. [9]. This work presents a system in which dissipative fire sources such as citrates and welding smoke are eliminated using a fusion algorithm. As soon as the abnormal situation in terms of fire or smoke exceeds the threshold levels, the fire alarm is activated. The application of IoT is employed in this case for the detection and prediction of fire at an early stage.

3 Problem Definition From the literature, the fire alarm systems used in existing work do not use any energy conservation mechanism, even though such a mechanism can ensure a better detection process. The problems are listed as under. • Secure transmission of data from the controller is missing. • A sensor energy conservation mechanism is missing. • The lifetime of the sensors is limited; hence, crucial time could be lost and the forest could be completely burnt. • Idle sensor identification is missing.


4 Proposed Model The proposed model uses the applications of IoT with cloud to detect the forest fire. The forest fire detection becomes critical due to increasing awareness toward green computing.

4.1 Data Accumulation Layer Data accumulation layer is critical in the processing of information. The accumulation layer ensures that information that is similar in nature is grouped within same cluster. To accomplish this, Euclidean distance-based mechanism is applied. The information is obtained from the heat and smoke sensors. The threshold values are provided and fed within the control circuits. The information that is gathered from the nodes will be transmitted toward the destination nodes also termed as sink node or base station.

4.2 Data Pre-processing Layer Data pre-processing is a data mining technique which converts raw data into usable information so that accurate information is used. The tasks performed in this model are collecting data through sensors, pre-processing and filtering the collected data, communicating with the cloud while sending only the necessary data, and monitoring the power consumption of the IoT devices. Sometimes the data may be incorrect, incomplete, or inconsistent, and may contain errors; these issues are resolved by data pre-processing, in which the data are cleansed: missing values are filled in, inconsistencies are removed, and noisy data are smoothed.

4.3 Categorization Layer The categorization layer is used for collecting data from sensors. The collected data includes fire related dataset, environmental dataset, location dataset, topological dataset and historical dataset.


Fig. 1 Hexagonal map

4.4 Cloud Layer The cloud consists of physical and logical resources. The physical resources are termed data centers, and the logical resources are known as cloudlets. The cloud provides resources on a pay-per-use basis. Information about an abnormal situation is provided to the control centers, and the decision is made through threshold values. The hexagonal map structure of the forest used for fire detection is given in Fig. 1.

4.5 Event Classification The classification of information as normal or abnormal is accomplished with the help of a fuzzy clustering mechanism, ensured through a nearest neighbor-based approach. The classes are divided into four categories denoted by Y = {Extreme (y1), High (y2), Intermediate (y3), Low (y4)}. Each obtained data value is independent of the other data values and may belong to one defined cluster. A threshold value is associated with each class forming the cluster. If Th1 is the minimum value and Th2 is the maximum value of the class Extreme (Y1), then a data value d lies in class Y1 if the following inequality is satisfied:

if Th1 ≤ d ≤ Th2 then class Y1

Similar conditions can be defined for Y2, Y3, and Y4 as

if Th3 ≤ d ≤ Th4 then class Y2


if Th5 ≤ d ≤ Th6 then class Y3
if Th7 ≤ d ≤ Th8 then class Y4

All of the above criteria are defined to determine the severity of the situation; class Y4 is at the low end and class Y1 at the high end of severity. Let X = {x1, x2, x3, …, xn} be the set of data points obtained through the sensors detecting heat and fire, and let V = {v1, v2, v3, …, vc} be the set of cluster centers. The FCM algorithm used for fire detection is given as under (Figs. 2, 3 and 4) (Algorithms 1 and 2).
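As an illustration of the class test above, the following is a minimal Python sketch; the numeric threshold values Th1–Th8 used here are placeholders, not the limits programmed into the authors' control circuit.

```python
# Hypothetical (min, max) thresholds per severity class; the real limits
# are programmed into the control circuit from the fire/historical datasets.
SEVERITY_CLASSES = {
    "Extreme (y1)":      (90.0, 120.0),
    "High (y2)":         (70.0, 90.0),
    "Intermediate (y3)": (50.0, 70.0),
    "Low (y4)":          (0.0, 50.0),
}

def classify_reading(d: float) -> str:
    """Return the severity class whose [min, max] interval contains the reading d."""
    for label, (th_min, th_max) in SEVERITY_CLASSES.items():
        if th_min <= d <= th_max:
            return label
    return "Unclassified"

print(classify_reading(95.0))  # -> Extreme (y1)
```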

Fig. 2 Process of communication

Fig. 3 Neural network layer for the detection of extreme conditions of fire (inputs A, B, C: fire, topological, and historical data sets; hidden nodes H11, H21, H12, H22; output O: heat emitted)

Fig. 4 Data pre-processing and other layers of proposed model

Algorithm 1 Fuzzy C-means algorithm
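The body of Algorithm 1 appears only as an image in the source. The sketch below is a generic fuzzy c-means update loop (fuzzifier m = 2, Euclidean distances), offered as a stand-in for the missing listing rather than the authors' exact algorithm; the sample readings are invented.

```python
import numpy as np

def fuzzy_c_means(X, c=4, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means: X is (n_samples, n_features), c clusters."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]   # update cluster centers
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        inv = 1.0 / d ** p
        U_new = inv / inv.sum(axis=1, keepdims=True)   # update memberships
        if np.linalg.norm(U_new - U) < tol:
            U = U_new
            break
        U = U_new
    return V, U

# Example: cluster heat/smoke readings into four severity groups
readings = np.array([[20.0], [22.0], [55.0], [60.0], [85.0], [88.0], [110.0], [115.0]])
centers, memberships = fuzzy_c_means(readings, c=4)
```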


Algorithm 2 • Get the values from the IOT sensors and store it within appropriate variables to be inputted to the input layer • Initialize Heat = 0 • Compare heat [i + 1] with the heat emitted at previous instance • Present heat [i + 1] to the input layer for deciding membership • Compare fire dataset values with the obtained result to decide criticality of the situation • Compare historical dataset values with the obtained result to decide criticality of the situation • Compare topological dataset values with the obtained result to decide criticality of the situation • Communicate the information to the cloud

5 Performance Analysis and Results

The proposed system generates results in terms of classification accuracy and execution time. The mechanism applied ensures fast detection of fire, and classification accuracy is also improved. The pre-processing mechanism removes any noise from the real-time dataset. The result in terms of execution time is given in Table 1, and the comparison of the existing and proposed models is plotted in Fig. 5. The classification accuracy of the proposed system is also improved; this parameter indicates how accurate the system is in detecting heat and smoke. The result in terms of classification accuracy is given in Fig. 6 and Table 2. The improved results in terms of classification accuracy and execution time prove the worth of the study.

Table 1 Execution time comparison
Dataset                   Execution time (ms) existing   Execution time (ms) proposed
Heat and smoke offline    8                              6
Real-time dataset         10                             6
Random                    9                              5


Fig. 5 Comparison of execution time (existing vs. proposed, for the heat and smoke offline, real-time, and random datasets)

Fig. 6 Classification accuracy comparison

Table 2 Classification accuracy of existing and proposed system
Dataset                   Classification accuracy existing   Classification accuracy proposed
Heat and smoke offline    85                                 90
Real-time dataset         86                                 91
Random                    84                                 91


6 Conclusion and Future Scope The neural network-based approach, along with missing-value handling, achieves higher classification accuracy. The existing approach does not achieve fire and smoke detection at an early stage since the hexagonal system was not employed. The proposed mechanism achieves classification accuracy in the range of 90%, whereas the existing system achieves accuracy in the range of 85%. The execution time of the proposed approach is also the lowest. Future work on the proposed system could include large real-time datasets and a mode-based pre-processing mechanism.

References
1. Thomas BD, Mcpherson R, Paul G, Irvine J (2016) Consumption of Wi-Fi for IoT devices. IEEE Access 92–100
2. How the internet of things is revolutionizing healthcare—white paper—IOTREVHEALCARWP.pdf (Online). Available https://cache.freescale.com/files/corporate/doc/white_paper/IOTREVHEALCARWP.pdf. Accessed 23 May 2016
3. Abdelwahab S, Hamdaoui B, Guizani M, Znati T (2015) REPLISOM: disciplined tiny memory replication for massive IoT devices in LTE edge cloud, vol 4662. https://doi.org/10.1109/JIOT.2015.2497263
4. Xu W et al (2014) A context detection approach using GPS module and emerging sensors in smartphone platform. In: 2014 ubiquitous positioning indoor navigation and location based service (UPINLBS), pp 156–163. https://doi.org/10.1109/UPINLBS.2014.7033723
5. Guo H-W, Huang Y-S, Lin C-H, Chien J-C, Haraikawa K, Shieh J-S (2016) Heart rate variability signal features for emotion recognition by using principal component analysis and support vectors machine. In: 2016 IEEE 16th international conference on bioinformatics and bioengineering (BIBE), pp 274–277. https://doi.org/10.1109/BIBE.2016.40
6. Gubbi J, Buyya R, Marusic S. Internet of things (IoT): a vision, architectural elements, and future directions. ACM 1:1–19
7. Reddy PNN, Basarkod PI, Manvi SS (2011) Wireless sensor network based fire monitoring and extinguishing system in real time environment. IEEE Access 1075:1070–1075
8. Kiran D, Kishore G, Suresh TV (2017) Fire monitoring system for fire detection using ZigBee and GPRS system 12(1):23–27. https://doi.org/10.9790/2834-1201032327
9. Mobin I, Islam N, Hasan R (2016) An intelligent fire detection and mitigation system safe from fire (SFF) 133(6):1–7

Start and Stop Policy for Smart Vehicles Using Application of IoT Nisha Thakur and Er. Deepak Kumar

Abstract It is close to impossible to imagine life without vehicles. As the population increases, the need for vehicles also increases rapidly, and the increase in on-road vehicles has also led to an increase in accidents. Existing accident-avoidance mechanisms are old and may be ineffective, which means no proper accident detection and prevention mechanism is in place. Thus, safety concerns must be considered while increasing the number of on-road vehicles. This paper aims to provide a prototype of smart vehicles that are able to stop automatically by detecting neighboring objects, such as traffic lights. This means traffic light violations could be reduced using this model, and hence roadside safety increases. The vehicles will automatically stop when the light is red and will move when the green light appears. Traffic can be controlled greatly and accidents reduced by using this approach. Keywords Internet of Things · Smart vehicles · Sensors

1 Introduction This research is directed toward the detection of traffic lights by smart vehicles. The vehicles are integrated with sensor-driven processors. A vehicle will automatically stop when the light is red and move when the green light appears. The driver can also control the vehicle manually, but traffic can be controlled greatly and accidents can be reduced by the application of this approach. Smart vehicles contain sensors attached to the dashboard camera. The recording is fed into the controller, where the color of the lights is determined, and as the color of the light is sensed, a command regarding braking or moving is generated by the controller.
N. Thakur (B) · Er. Deepak Kumar Department of Computer Science and Engineering, Sri Sai College of Engineering and Technology, Pathankot, India
Er. Deepak Kumar e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_5


Fig. 1 Smart vehicles

The extracted information must be accurately interpreted so that the brakes are not applied at the wrong time; otherwise accidents can occur. Vehicles equipped with sensors and automatic decision-making are known as smart vehicles (Fig. 1). The sensors used within smart vehicles are radar, LIDAR, ultrasonic, and vision-based sensors, which are capable of detecting motion and the color of the lights. The sensors, however, have limited energy associated with them; energy dissipation is managed by attaching a battery to the controller sensor. The Internet of Things is the desired technology here: a collection of sensors arranged in such a way as to convey useful information to the controller nodes. IoT sensors have limited energy, which is consumed in the transmission of packets, so an energy conservation mechanism must be applied to devise smart vehicles. IoT works on the principle of transferring information to neighboring sensors, and sensor energy conservation is compulsory while using IoT applications. The inter-clustering mechanism within IoT allows the transfer of packets from source toward destination, but transmission also causes the packet drop ratio to increase when the energy of sensors is completely depleted. To overcome this issue, energy conservation mechanisms are deployed, and the assignment of sensors within clusters depends upon the energy associated with them. The IoT used within the proposed mechanism ensures that accurate information is transmitted from the cluster head to the base station. A cluster head becomes dead when its energy is completely depleted; to ensure smooth operation, small batteries are associated with the sensors, and as a sensor's energy goes below the threshold level it is recharged while another sensor from the cluster is recruited as cluster head. The final sink node selected for receiving the data is known as the base station, but in the proposed work it is termed the control center.


2 Literature Survey This section provides the analysis of existing techniques used in the smart vehicles for achieving the prescribed objectives. Zhang et al. [10] proposed a smart vehicle detection to control the traffic signals. In order to accomplish this, reinforcement learning mechanism is used. In this case, the learning mechanism follows the rules of both supervised and unsupervised mechanism. In case the data appear from old source and matches with the existing data, then supervised learning is used, and otherwise, unsupervised learning is used. The mechanism ensures that traffic signals should not be violated. The only problem is the preprocessing mechanism which is missing in this case. Li et al. [5] proposed a red-light prevention system for avoiding any on road accident. The artificial neural network is used for achieving this. The neural network-based approach ensures that automatic decision is made based on the dashboard camera. Alsrehin et al. [1] intelligent transportation system is proposed using the application of data mining and machine learning. The direct approach is followed, but iterative approach could yield better and accurate results. Dangi et al. [3] proposed a density-based traffic control system; the traffic control system works well during the heavy traffic and may not work well during least traffic. The iterative approach is neglected, and hence, classification accuracy is poor in this case. Srinivasan et al. [6] proposed a neural network-based traffic control system. This system is capable of making the decision by itself, but classification accuracy will be hampered in case noise prone video is generated by the dashboard camera. Vellampalli [8] described five techniques that utilized supervised learning to perform human activity detection. The techniques which are described are decision tree, artificial neural network, multinomial logistic regression, and k-nearest neighbor that perform classification. It gives better classification accuracy of ANN algorithm. But, Naïve Bayes algorithm is not efficient. Bharathi and Bhuvana [2] proposed deep learning approach that helps to detect human activity without any time delay. It utilizes sensors to observe human activities and implement deep learning mechanism to recognize activities of human. It uses time domain features like mean, min, max, variance, and range for classifying human actions. The machine learning approaches give better accuracy. But, it does not work on large dataset. Gulzar et al. [4] compares various techniques KNN, neural network, and random forest for human activity detection. It uses orange tool for comparing feature extracted using various techniques. The classification accuracy is good, but neural network is not efficient. Xu et al. [9] proposed convolution neural network based for the detection of traffic and diverting the vehicles to other roads. This greatly reduce the accidents and saves life. Sun et al. [7] proposed a


LSTM-based approach to extract patterns from the dataset to generate the predictions. The predictions through LSTM were highly accurate. The issue however within this approach is increased execution time while prediction. This means in case large dataset appears, it can take up to hours for generating prediction. This model thus is not suitable for larger datasets. The problems extracted from the existing paper are listed as under: • Preprocessing mechanism to increase clarity is missing. • No optimization mechanism is involved, and hence, classification accuracy is poor. • The waiting time is also high since large video can consume more time. • Sensitivity is also low due to low classification accuracy.

3 Methodology

The methodology specifies the steps required to achieve the desired objectives. First, the dashboard camera is used to record videos. Frames are extracted from the video and presented to the controller. The controller applies a Gaussian filter to reduce any noise in the video. Once the noise is reduced, a feed-forward neural network is applied to form the decision regarding stoppage or movement of the vehicle. The mechanism is given in Fig. 2.

Fig. 2 Proposed methodology (receive video from dashboard → extract video frame → apply pre-processing (Gaussian filter) → training using feed-forward approach → predict result (stop or move))

The proposed system first extracts the data from the IoT-driven approach using the NetBeans and CloudSim toolkits. The extracted data are passed to the filter

known as the Gaussian filter to eliminate the side views containing environmental clutter from the scene. The color components are then extracted using the feed-forward approach. If a red-light component is extracted, this information is conveyed to the controller nodes, and the controller node passes it on to the smart vehicles. The decision to move or stop depends upon the color of the light extracted through the feed-forward network; an extracted green color initiates the routing process.
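A minimal sketch of this pre-processing and color-extraction step, assuming OpenCV is available; the Gaussian kernel size and the HSV ranges for red and green are illustrative placeholders, not calibrated values from the paper.

```python
import cv2

def decide_from_frame(frame_bgr):
    """Denoise a dashboard frame and report whether a red or green light dominates."""
    blurred = cv2.GaussianBlur(frame_bgr, (5, 5), 1.5)        # noise reduction
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)

    # Illustrative HSV ranges for traffic-light red and green
    red_mask = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
    green_mask = cv2.inRange(hsv, (45, 100, 100), (90, 255, 255))

    if red_mask.sum() > green_mask.sum():
        return "STOP"      # red light detected -> apply brakes
    return "MOVE"          # green (or no red) -> continue and start routing
```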

Once a green light is detected, the Bellman–Ford algorithm is applied to determine the shortest path toward the destination without violating the traffic constraints; the algorithm for the proposed system is given as under. The shortest-path approach is used for detecting the best possible route toward the destination, and the execution time is reduced considerably using the proposed mechanism. The structure of this approach is given in Fig. 3.

Fig. 3 Bellman–Ford approach
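The Bellman–Ford listing referred to below is garbled in the source; the following is a generic sketch of the algorithm on a weighted edge list, with an invented example road graph, not the authors' exact pseudocode.

```python
def bellman_ford(num_vertices, edges, source):
    """edges: list of (u, v, weight); returns shortest distances from source."""
    INF = float("inf")
    distance = [INF] * num_vertices
    distance[source] = 0
    for _ in range(num_vertices - 1):              # relax all edges |V|-1 times
        for u, v, w in edges:
            if distance[u] + w < distance[v]:
                distance[v] = distance[u] + w
    for u, v, w in edges:                          # negative-cycle check
        if distance[u] + w < distance[v]:
            raise ValueError("Graph contains a negative-weight cycle")
    return distance

# Example road graph: 0 -> 1 -> 3 and 0 -> 2 -> 3
roads = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1), (2, 3, 5)]
print(bellman_ford(4, roads, source=0))   # [0, 3, 1, 4]
```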


Bellman–Ford Algorithm
• for each vertex V in distance[V]

wci, Vi is designated as character, else it is considered as word component.

2.2 Post-Processing

The character and/or word components segmented out from an IAHT line may contain some internal ligatures. So, in this section, we implement a key stroke detection (KSD) methodology by introducing a normalized chord angle (NCA) skewness parameter in order to detect the predominant strokes of the extracted character and word components. The procedure of KSD is described below.


Methodology:
1. The trajectory points of the extracted character or word component are resampled so that they are equally spaced, by replacing the trajectory with a new sequence of points having the same spatial distance.
2. The normalized chord angle (NCA) values (θc) of the resampled trajectory are computed by taking the inverse tangents of the line segments connecting consecutive trajectory points, normalized by the maximum chord angle.
3. For each point of the resampled trajectory, we consider the previous five and succeeding five NCA values (i.e., from θ_{c_{j-5}} to θ_{c_{j+5}}) and determine the NCA skewness, denoted by M_{c3}, using [11]

M_{c3_j} = \frac{1}{10}\,\frac{1}{M_{c2}^{3}} \sum_{i=j-5}^{j+5} \left(\theta_{c_i} - M_{c1}\right)^{3}    (5)

where M_{c1} = \frac{1}{10}\sum_{i=j-5}^{j+5} \theta_{c_i} and M_{c2} = \sqrt{\frac{1}{10}\sum_{i=j-5}^{j+5} \left(\theta_{c_i} - M_{c1}\right)^{2}} are the chord angle mean and standard deviation, respectively.
4. In the last step, the M_{c3_j} value for a particular point is compared with a threshold value of NCA skewness (Ts, which is empirically considered as 0.01). If M_{c3_j} < Ts, the point under consideration is designated as an internal ligature point, since ligatures hypothetically have moderate skewness as compared to primary HW strokes. The residual points in the trajectory after removing the ligature points collectively form the key strokes, which represent the desired character or word in the IAHT sequence (a numerical sketch of this computation follows this list).
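A minimal numerical sketch of steps 2–4, assuming the trajectory has already been resampled to equal spacing; the windowed statistics follow Eq. (5) in spirit, but the normalization and the absolute-value test used here are implementation choices, not the authors' code.

```python
import numpy as np

def key_strokes(points, ts=0.01, half_window=5):
    """points: (N, 2) equally resampled trajectory. Returns points kept as key strokes."""
    seg = np.diff(points, axis=0)
    theta = np.arctan2(seg[:, 1], seg[:, 0])          # chord angles of consecutive segments
    theta = theta / (np.abs(theta).max() + 1e-12)     # normalized chord angles (NCA)

    keep = np.ones(len(theta), dtype=bool)
    for j in range(half_window, len(theta) - half_window):
        win = theta[j - half_window: j + half_window + 1]
        m1 = win.mean()                               # chord angle mean
        m2 = win.std() + 1e-12                        # chord angle standard deviation
        m3 = ((win - m1) ** 3).mean() / m2 ** 3       # NCA skewness
        if abs(m3) < ts:                              # moderate skewness -> ligature point
            keep[j] = False
    return points[1:][keep]                           # residual points form the key strokes
```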

3 Experimental Results and Discussion In this section, we present the results derived from different modules of the IAHT segmentation framework. To evaluate our proposed approach, we have formed a moderate-sized experimental data set comprising 14 air-written Assamese sentences, repeated 10 times by four writers resulting in a total of 560 recordings. The IAHT sequences involve different types of combinations such as sequence of characters, words, or combinations of both. The number of character and/or word components in a sentence varies from two to three, although it can be increased as per requirement. Here, it is restricted to three, in order to keep the AW sequence within the field of view of the webcam. So, considering the characters and words from all the recorded sentences, the data set consists of a total of 1280 characters and word components.

3.1 Spotting of Boundary Points and Extraction of Valid Components from an IAHT Sequence

Figure 2a shows a few instances of HW trajectory traced while air-writing a text sequence “ও-মা” along with spotted S and E points obtained using MAP-MRF analysis. As observed, we obtain three sets of S and E points, and as such three text components as highlighted in Fig. 2b–d. The valid text components of the IAHT sequence are then detected using the KL divergence methodology. Figure 3 shows the chord angle distributions of the three spotted text components w.r.t a reference distribution (represented by black dashed curve), based on which the DKL values are computed. Table 1 illustrates the DKL values determined for all the three text components of the IAHT sequence “ও-মা”, along with the spotted frame boundaries. As inferred from Fig. 3 and Table 1, the second component (represented by blue curve) has comparatively lesser deviations from the reference distribution and hence has lesser DKL value compared to the first and third components. Thus, the first and third components are attributed as valid components.

Fig. 2 Spotting of boundary points of text components from a continuous IAHT sequence “ও-মা”: a frame showing the spotted S and E points (b) (c) (d) extracted HW components

Fig. 3 Chord angle distributions of the spotted text components from the IAHT line “ও-মা”

3.2 Segregation of Character and Word Components and Post-Processing

A continuous line of in-air handwritten text is segregated into character and word components using the CH-based space-width estimation technique. Table 2 shows the categorization of the detected valid components of the IAHT “ও-মা” based on the proposed technique. As inferred from the results, the ws between the CHs of the two components is greater than the first component size but less than the size of the next component, and accordingly they are attributed as character and word, respectively. The segregated character and word components of an air-written sentence may however contain some intrinsic redundant ligatures, which are detected and extricated using the KSD methodology. Figure 4 shows the post-processing results of an IAHT sequence “নাম-ক” consisting of a word and a character component. The overall air-written sentence as depicted in Fig. 4a, b shows the segmented word and character components obtained after applying the three-stage heuristic-based methodology, and Fig. 4c depicts the final word and character trajectories obtained after elimination of internal ligatures using KSD technique.

Table 1 Extraction of valid text components using DKL metric
Spotted text components   Spotted boundaries   Ground truth   DKL    Decision
First component           7–57                 6–58           0.48   Valid
Second component          85–109               82–109         0.16   External ligature
Third component           137–232              136–233        1.0    Valid

Table 2 Discrimination of character and word components for the text sequence "ও-মা"
Valid component    wc     ws (between components)   Decision
First component    103    119                       Character
Third component    182                              Word

Fig. 4 Post-processing results of an IAHT sequence “নাম-ক”: a input air-writing trajectory, b segmented text components, c KSD and elimination of internal ligatures

3.3 Overall Segmentation Performance

We evaluate the performance of our proposed IAHT segmentation model by calculating the overall segmentation rate (SR) which is calculated as the fraction of character or word components correctly segmented (NS) out of the total number character and word components (NT) in the sentences. Table 3 illustrates the text segmentation results for different combinations of air-written Assamese sentences using our proposed framework. An average segmentation accuracy of around 95% is recorded using our proposed method, thus establishing its effectiveness in extracting out character and word components from varied types of air-written sentences. In order to provide a fair evaluation, we have compared the performance of our proposed model with the work of Agarwal et al. [6] as mentioned in Table 4, which addressed segmentation of words from IAHT lines. As the results connote, our proposed approach provides better segmentation performance as compared to the aforementioned method which has utilized a single determining heuristic, i.e., directional displacement between consecutive components in comparison with specified thresholds for text segmentation. Further, they have dealt with air-written sentences comprising only word combinations. Our method, in contrast, provides effective results for different compositions of text components.
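As a worked instance of this metric, using the totals reported in Table 3:

SR = (NS / NT) × 100% = (1217 / 1280) × 100% ≈ 95.07%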

Table 3 Summary of text segmentation results for our proposed framework
Text sequence     No. of words or characters (NT)   Correctly segmented components (NS)   SR (%)
অ-ক               2 × 4 × 10 (80)                    79                                    98.75
ঐ-ৰ               2 × 4 × 10 (80)                    79                                    98.75
ও-মা              2 × 4 × 10 (80)                    78                                    97.50
নাম-ক             2 × 4 × 10 (80)                    77                                    96.25
এক-দিন            2 × 4 × 10 (80)                    76                                    95.00
হাত-মুখ            2 × 4 × 10 (80)                    77                                    96.25
অ-মই-যাও          3 × 4 × 10 (120)                   111                                   92.50
মই-কল-খাম         3 × 4 × 10 (120)                   113                                   94.17
মই-ঘৰ-যাও         3 × 4 × 10 (120)                   112                                   93.33
চাহ-পাত            2 × 4 × 10 (80)                    78                                    97.50
বগা-বগলী           2 × 4 × 10 (80)                    76                                    95.00
মাছ-ভাত-খাম        3 × 4 × 10 (120)                   110                                   91.67
ডাঙৰ-গছ            2 × 4 × 10 (80)                    75                                    93.75
গৰম-ঠাই            2 × 4 × 10 (80)                    76                                    95.00
Total             1280                               1217                                  95.07


Table 4 Comparative in-air handwritten text segmentation performance
Ref.                 Segmentation approach                                Segmentation accuracy (%)
Agarwal et al. [6]   Directional displacement between successive words    80.4
Ours                 Proposed three-phase heuristic analysis              95

4 Conclusion In this paper, we have presented a multilevel heuristical framework for segmentation of characters and words from continuous air-written text lines. In our formulated approach, the detection of legitimate text components is accomplished by applying a MAP-based MRF labeling technique followed by a KL divergence measurement scheme, while the discrimination of character and word components is achieved using a CH-based space width analyzing method. Experimental evaluation on Assamese IAHT sequences reveals that our proposed approach can effectively extract out characters and words from different sentence compositions with an accuracy of around 95%. The utilization of essential heuristics acquired linearly over time makes the system feasible for practical implementation. Although the data set considered in this study is sufficient to test the viability of our developed method, in future work, the data set may be extended to include some other forms of character and word combinations and also by increasing the length of the text sequences.

References
1. Chen M, AlRegib G, Juang BH (2016) Air-writing recognition—Part II: detection and recognition of writing activity in continuous stream of motion data. IEEE Trans Human-Mach Syst 46(3):436–444
2. Jin XJ, Wang QF, Hou X, Liu CL (2013) Visual gesture character string recognition by classification-based segmentation with stroke deletion. In: 2013 2nd IAPR Asian conference on pattern recognition, pp 120–124. IEEE
3. Chiang CC, Wang RH, Chen BR (2017) Recognizing arbitrarily connected and superimposed handwritten numerals in intangible writing interfaces. Pattern Recogn 61:15–28
4. Vikram S, Li L, Russell S (2013) Handwriting and gestures in the air, recognizing on the fly. In: Proceedings of the CHI, vol 13, pp 1179–1184
5. Yin F, Pai Liu P, Lin Huang L, Liu CL (2015) Lexicon-driven recognition of one-stroke character strings in visual gesture. In: 2015 13th international conference on document analysis and recognition (ICDAR), pp 421–425. IEEE
6. Agarwal C, Dogra DP, Saini R, Roy PP (2015) Segmentation and recognition of text written in 3d using leap motion interface. In: 2015 3rd IAPR Asian conference on pattern recognition (ACPR), pp 539–543. IEEE
7. Kumar P, Saini R, Roy PP, Dogra DP (2017) Study of text segmentation and recognition using leap motion sensor. IEEE Sens J 17(5):1293–1301
8. Choudhury A, Sarma KK (2021) A CNN-LSTM based ensemble framework for in-air handwritten Assamese character recognition. Multimedia Tools Appl 1–36
9. Choudhury A, Sarma KK (2021) A vision-based framework for spotting and segmentation of gesture-based Assamese characters written in the air. J Inf Technol Res (JITR) 14(1):70–91
10. Theodoridis S (2015) Probability and stochastic processes. Machine learning: a Bayesian and optimization perspective, pp 9–51
11. Rangayyan RM, El-Faramawy NM, Desautels JL, Alim OA (1997) Measures of acutance and shape for classification of breast tumors. IEEE Trans Med Imaging 16(6):799–810

Cybersecurity in Digital Transformations A. Swain, K. P. Swain, S. K. Pattnaik, S. R. Samal, and J. K. Das

Abstract The cyberspace plays a major role in the rapid digital transformation of business due to its extensive interconnectivity and cyber potential. With the integration of automation and information technology, the quality, productivity and optimization of enterprise workflow are achieved. Due to rapid growth in digital technologies such as cloud computing, Internet of things, big data, artificial intelligence and machine learning, managing and processing a huge amount of digital data has been a complex process. This has increased the risk in data security and challenges in digital transformation. This paper provides a comprehensive state-of-the-art summary of cybersecurity in the digital transformation and aims at identifying security risks associated with these emerging technologies along with providing solution to overcome these challenges. Keywords Cybersecurity · Cyber threats · Artificial intelligence (AI) · Machine learning (ML)

A. Swain System Engineer, Tata Consultancy Services, Bangalore, India K. P. Swain (B) Gandhi Institute for Technological Advancement, Bhubaneswar, India S. K. Pattnaik · J. K. Das School of Electronics Engineering, KIIT University, Bhubaneswar, India e-mail: [email protected] J. K. Das e-mail: [email protected] S. R. Samal Silicon Institute of Technology, Bhubaneswar, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_26


1 Introduction Cybersecurity is an important business priority in this digital era [1, 2]. In this digital business transformation, apart from security in applications and processes, the user experience, performance, speed, automation, connectivity and agility also matter. However, cybersecurity comes in the way of digital transformation of business, since it acts like a layer that slows down the digital transformation initiatives and prevents optimization of processes. Cybersecurity is all about rules and regulations, protection of computer systems and networks from unauthorized exploitation, training and awareness within an organization [3]. As per a research conducted by “Dell and Dimensional Research”, it is found that even if various security solutions like cloudbased solutions do not affect the user experiences and other intermediary goals, still they tend to be of less priority in digital transformation projects, as it is believed that the digital transformation efforts might get blocked by the intervention of cybersecurity. However, digital transformation can only be accelerated by incorporating security with other parameters in the early stage of transformation. There are several other reasons why cybersecurity development and strategies are lagging behind in digital transformation projects, some of which are stated below [4]: • Lack of strategy in integrating security with systems by prioritizing critical processes, potential sources of attacks and vulnerabilities in a system, especially in an IoT-based system. • Difficulty in making business cases for security in terms of customer experience and digital workplace due to unpredictable risk factors, resulting in security failures. • Difficulty in finding the right security professionals for specific applications, especially in projects where a huge amount of data and new technologies are involved. The threats of cybercrimes have been increasing with the growth and evolution of the businesses [5, 6]. Organizations have to pay a very hefty price for the security breaches and unauthorized access to the networks and applications. Losses and financial damage worth more than $50 million have been reported by around 20% organizations due to security breaches. Around 42% have reported security incidents through time-sensitive applications where impacts can be severe if threats are not resolved quickly. Over 35% of firms have reported impacts in critical operations and applications due to cyber-attacks, leading to shutdown of the system and production lines for days. Cyber-attacks have been reportedly increasing in the latest digital mediums that increase the attack surface for exploitation by hackers. Around 49% organization have stated that security incidents through cloud services have increased by 17%, and around 42% have reported an average increase of 16% in the security incidents through IoT devices. It is evident from these facts that the rise in latest digital technologies has led to an increase in the number of cyber-attacks as well.


2 Problem Statement Data and information are the source of profits and new business models in an organization. Due to the growing connectivity of technologies and processes, expansion of networks and clouds, mobility, a huge amount of data is generated every day [7]. Due to rapid increase in the global business Internet traffic, cyber analysts face numerous challenges in effectively monitoring the present levels of data volume, variations and velocity across the firewalls. With the handling of large volume of data, the privacy and security of the systems have been a major concern, as data are stored or transferred over an open network called Internet. With the advancements in information technology, the cyberspace is being used by criminals and hackers to commit cybercrimes, which leads to significant disruption in the cyber society. In order to avoid cybercrimes, safe and proper transmission of data over a network is achieved through cybersecurity. Signature-based cybersecurity solutions are found to be ineffective [8, 9] against detecting new attack vendors. Since new malware attacks are being discovered every day, it is quite difficult to keep up with the new virus updates regularly. At times, cybersecurity can pose the greatest challenge to an organization due to the rise of different emerging technologies, data, apps and different endpoints. It is difficult for the organizations to safeguard their assets and clients’ privacy. This necessitates several proactive measures for early detection of threats in order to avoid them or reduce their impact. The global information security expenses are estimated to reach $170 billion by 2022. Therefore, it is necessary to implement more effective, robust and cost-saving mechanisms in cybersecurity that can be more promising and efficient than the existing infrastructure.

3 Proposed Solution Traditional antivirus software operates well for only previously encountered threats through its public signature, and it cannot detect new threats. According to the Norton Research, the average cost of recovery from a data breach is expected to be $3.86 million, and it can take organizations around 196 days on average to detect a data breach in their system. This is where AI plays an essential role in providing insights that help organizations to analyze threats and reduce the response times while ensuring best security practices. While most traditional software can achieve only 90% threat detection rate, AI-enhanced antivirus software is able to achieve a threat detection rate of 95% or above. Moreover, this software does not need malware signature updates, it rather identifies threats by learning from the existing patterns of the malicious programs. Both AI and cybersecurity can be used in organizations to ensure early detection of cyber threats and mitigate risks. AI technologies facilitate in early detection of malicious activities and immediately responding to the threats by using the pattern


from previous cyber-attacks and determining the best strategy to prevent them from happening again. The most important use cases of AI and machine learning in information security are network security, data security, mobile endpoint security, email monitoring, threat analysis, malware detection, AI-based threat mitigation, security analyst augmentation and response time reduction. Due to the increasing use of hybrid networks, extensive network security is achieved by AI that monitors all the incoming and outgoing traffic in a network for any suspicious behavior. AI can also help in detecting and classifying malware or suspicious code before opening malicious files by using well-trained ML algorithms. For this purpose, millions of labeled sets of data have been extracted from both malware and normal applications. A comprehensive approach to the core data helps in effectively deploying machine learning. It enables systems to quickly scan large amounts of data, learn and analyze the pattern of attacks using statistical techniques like predictive modeling in order to prevent them from occurring again. Predictive modeling uses machine learning and data mining to detect irregularities in data and predict the future outcomes for generating security threat alerts before the attack occurs. Some of the most widely used predictive models are neural networks, decision trees, clustering algorithms, time series algorithms, support vector machines, regression and ensemble models. The malware detection approach using a predictive model is illustrated in Fig. 1. In order to manage the volume of potential threat vectors, augmented security tools can be used that combine machine learning and cyber experts for faster analysis, visualization and configuration of specific solutions based on the problems in higherorder threats. AI can also monitor and optimize various data center processes like temperature, power consumption and bandwidth usage and provide solutions that can improve the security system and effectiveness of the infrastructure in the data center. Machine learning can help cybersecurity systems to respond to the changing behavior in real time, thus preventing possible threats and enabling organizations to reduce the amount of time and resources spent on routine tasks, making cybersecurity more proactive and efficient.

Fig. 1 Malware detection using predictive analytics
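A minimal sketch of the predictive-modeling idea shown in Fig. 1, using a random-forest classifier from scikit-learn; the feature set and the synthetic data are placeholders and do not correspond to any particular vendor's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for labeled telemetry: rows = samples, columns = features
# (e.g., API-call counts, entropy, packet rates); label 1 = malicious, 0 = benign.
rng = np.random.default_rng(42)
benign = rng.normal(loc=0.0, scale=1.0, size=(500, 8))
malicious = rng.normal(loc=1.5, scale=1.2, size=(500, 8))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Score new events; a high predicted probability would raise a threat alert.
pred = model.predict(X_test)
print("hold-out accuracy:", accuracy_score(y_test, pred))
```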


4 Case Studies Organizations have been adopting AI-based techniques in cybersecurity at a rapid pace due to their various use cases. As per a recent survey conducted by the Capgemini Research Institute, around 69% of organizations have acknowledged that they will not be able to deliver the required performance in identifying and responding to threats without the help of artificial intelligence. By 2019, around 28% of firms were using security products embedded with AI, 30% were using proprietary AI-based solutions, and 42% were using both embedded products and proprietary AI algorithms. By the year 2020, almost 63% of organizations were reported to be employing AI in cybersecurity, and the budget for AI in cybersecurity increased by an average of 29% in the financial year 2020. There are several case studies on the impact of AI-based cybersecurity techniques in business that have led to widespread adoption of this technology. Some of these are stated below: • "Energy Saving Trust" is an organization that uses an ML-based platform, "Darktrace's Enterprise Immune System", to detect anomalies as soon as they occur and alert the company in real time, thus preventing sophisticated cyber-attacks. This platform learns specific patterns from the behavioral model of every device, network and user and identifies anomalous behavior automatically. • Upon facing numerous cyber threats and advanced attacks, a global bank's security team enhanced their threat detection and response capabilities by deploying "Paladon's AI-based Managed Detection and Response Service", a threat hunting service based on ML techniques. This covered ransomware, malware, encrypted attacks, data exfiltration, etc. • After experiencing a security incident several years ago, a commodities trading company, "ED&F Man Holdings", improved its cybersecurity tools and methods by adopting Vectra's AI-based threat detection and response platform called "Cognito", which gathers network metadata and stores it enriched with unique security attributes. It then uses this metadata with ML techniques to identify and prioritize attacks in real time. Cognito later discovered a command-and-control malware that had been hiding in the system for years. • Google uses AI to analyze mobile endpoint threats in order to protect the growing number of personal mobile devices. It was reported in 2016 that Google achieved a 15% reduction in power consumption and around a 40% reduction in cooling expenses in their data centers after employing AI technology. • Siemens, a long-established global tech leader, handles around 60,000 cyber threats per second using AWS machine learning.


5 Conclusion Inspired by the growing importance of AI technologies in cybersecurity, in this paper, we have discussed how AI can transform normal security systems to data-driven intelligent cybersecurity systems. We have also discussed how machine learning techniques can solve various security challenges that traditional security solutions fail to overcome. The purpose of this paper was to provide a comprehensive overview of cybersecurity in digital transformation and identify and solve the security challenges in this digital era.

References
1. Angelo C, Mariangela L, Marianna L (2020) Cybersecurity in the context of industry 4.0: a structured classification of critical assets and business impacts. Comput Ind 114:103165
2. Mariia P, Deniz K, Rob L (2022) Digital business model configurations in the travel industry. Tour Manage 88:104408
3. Paul F, Michael W, Deborah R (2021) A principlist framework for cybersecurity ethics. Comput Secur 109:102382
4. Russell SJ, Norvig P (1995) Artificial intelligence: a modern approach. Prentice Hall, New Jersey 0-13-103805-2
5. Giuseppe C, Damian AT, Willem-JanVan DH (2021) Cybercrime threat intelligence: a systematic multi-vocal literature review. Comput Sec 105:102258
6. Giuseppe C, Damian AT, Willem-JanVan DH (2021) Cybercrime threat intelligence: a systematic multi-vocal literature review. Comput Sec 105:102258
7. Nayan C, Chitvan M, Singh AS (2017) Risk for big data in the cloud. In: International conference on computing, communication and automation (ICCCA), IEEE, Greater Noida, India, 5–6 May 2017
8. Nuno M, Jose MC, Tiago C, Pedro HA (2020) Adversarial machine learning applied to intrusion and malware scenarios: a systematic review. IEEE Access 8:35403–35419
9. Toya A, Ishan K, Annamalai A, Chouikha MF (2021) Efficacy of machine learning-based classifiers for binary and multi-class network intrusion detection. In: International conference on automatic control and intelligent systems (I2CACIS), IEEE, Shah Alam, Malaysia, 26–26 June 2021

Comparison of Different Shapes for Micro-strip Antenna Design Hirak Keshari Behera and Laxmi Prasad Mishra

Abstract The micro-strip patch antenna is the most common form of antenna, in which the patch can be of any size and shape; some common shapes are rectangular, circular ring, and polygonal. Different types of micro-strip antennas have been rolled out as variations of these basic structures. Micro-strip antennas are usually designed very thin, which makes them exceptionally valuable components for application purposes. Micro-strip patch antennas have a low-profile configuration and are capable of dual- and triple-frequency operation. Due to these advantages, these antennas are most suitable for aerospace and mobile applications. This paper presents the design and simulation of different shapes of micro-strip patch antenna, such as rectangular, square, circular, elliptical, and hexagonal, operating at different frequencies for wireless communication. All simulations and results were analyzed using Ansoft Ansys HFSS. Keywords Micro-strip patch · Resonating frequency · Substrate · Ground plane · Dielectric

1 Introduction

An antenna is a device used for the transformation of an RF signal into an electromagnetic wave in free space. Owing to their ability to integrate with other systems and their compact size, such devices are widely needed in the world of wireless communication. For such applications, the micro-strip patch antenna is a good choice due to its easy fabrication, low manufacturing cost, and light weight [1]. A micro-strip patch antenna is made up of three parts: the first is the "patch", which consists of a metal such as gold or copper; the second is the "ground plane", which is also made of metal and is larger in size; and the third is the "substrate", which is embedded between the patch and the ground plane [2]. In recent years, researchers have paid attention to the UWB range, which spans 3.1 to 10.6 GHz. The reasons for this interest are high data-rate transmission, low manufacturing cost, a simple fabrication process, low power dissipation, reduced hardware complexity, compact size, low price, and omni-directional radiation characteristics [3–5]. In practice, patch antennas are mainly used for microwave frequency applications, in which the patches are conveniently small since the wavelengths are short enough. Owing to their simplicity, their uses include portable wireless gadgets such as cell phones and handset devices [6]. Some common and regular shapes of micro-strip patch antennas are rectangular, square, circular, and elliptical [7]. The radiation of a micro-strip patch antenna occurs due to the fringing fields between the edges of the patch and the ground plane. The selection of the substrate material is highly important: the height (h) of the substrate plays a vital role in the bandwidth and resonant frequency of the antenna. With an increase in substrate height (h) and thickness, the bandwidth of the micro-strip antenna increases, but only up to a certain limit, beyond which the antenna stops resonating [8] (Fig. 1).

H. K. Behera (B) · L. P. Mishra Department of ECE, S'O'A Deemed to be University, Bhubaneswar, India
L. P. Mishra e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_27

Fig. 1 Micro-strip rectangular patch

2 Design

The expression for the effective permittivity εreff is given by Eq. (1) [9]:

\varepsilon_{reff} = \frac{\varepsilon_r + 1}{2} + \frac{\varepsilon_r - 1}{2}\left(1 + 12\,\frac{h}{W}\right)^{-1/2}    (1)

The change in length is given by

\frac{\Delta L}{h} = 0.412\;\frac{(\varepsilon_{reff} + 0.3)\left(\frac{W}{h} + 0.264\right)}{(\varepsilon_{reff} - 0.258)\left(\frac{W}{h} + 0.8\right)}    (2)

Fig. 2 Rectangular patch antenna

The effective length (L_{eff}) becomes

L_{eff} = L + 2\,\Delta L    (3)

The effective length, for a given resonance frequency f_o, is given by

L_{eff} = \frac{C}{2 f_o \sqrt{\varepsilon_{reff}}}    (4)

The width W is given by

W = \frac{C}{2 f_o \sqrt{\frac{\varepsilon_r + 1}{2}}}    (5)

From the above expressions, by fixing the value of resonating frequency to be 10 GHz, we found out the value of length and width for a rectangular patch antenna which resulted the value of L = 7.6 mm and W = 13.2 mm. Here, the substrate height (h) = 1.6 mm and the material used is FR4 epoxy. Similarly, for the square patch antenna, we found out the value of side of a square to be 6.58 mm, and the height of the substrate is 1.6 mm; here also, the substrate chosen is FR4 epoxy. For circular-shaped patch antenna, the radius comes out to be 5.465 mm, and substrate’s height is 1.6 mm; and material used is FR4 Epoxy. For elliptical patch antenna, the major axis radius and minor axis radius are 9.9 mm and 7.07 mm, respectively. The substrate height is taken as 1.6 mm, and the material used is FR4 epoxy. For the hexagonal-shaped patch antenna, the side of the hexagon is taken to be 24 mm, and substrate height is 1.6 mm; material used as the substrate is FR4 epoxy (Figs. 2, 3, 4, 5 and 6).
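A small sketch that evaluates Eqs. (1)–(5) for the rectangular patch; the FR4 relative permittivity (εr ≈ 4.4) is an assumed value, and the dimensions it returns will not necessarily match those quoted above, which depend on the authors' chosen parameters.

```python
import math

def rectangular_patch(f0_hz, eps_r, h_m):
    """Standard micro-strip patch design equations (1)-(5); returns W and L in metres."""
    c = 3e8
    W = c / (2 * f0_hz * math.sqrt((eps_r + 1) / 2))                           # Eq. (5)
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h_m / W) ** -0.5   # Eq. (1)
    dL = 0.412 * h_m * (((eps_eff + 0.3) * (W / h_m + 0.264))
                        / ((eps_eff - 0.258) * (W / h_m + 0.8)))               # Eq. (2)
    L_eff = c / (2 * f0_hz * math.sqrt(eps_eff))                               # Eq. (4)
    L = L_eff - 2 * dL                                                         # from Eq. (3)
    return W, L

W, L = rectangular_patch(10e9, eps_r=4.4, h_m=1.6e-3)   # assumed FR4 epoxy at 10 GHz
print(f"W = {W*1e3:.2f} mm, L = {L*1e3:.2f} mm")
```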

3 Results and Discussion See Figs. 7, 8, 9, 10, 11, 12, 13, 14, 15, and 16.



Fig. 3 Square patch antenna

Fig. 4 Circular patch antenna

Fig. 5 Elliptical patch antenna

Fig. 6 Hexagonal patch antenna

The return loss graphs and radiation patterns of the different patches have been simulated and analyzed using the HFSS software. From the outputs, taking 10 GHz as the resonating frequency and computing the corresponding parametric values (length, width, and radius) for the different patches, it can be seen that the rectangular patch has a return loss of −26.5 dB, close to the best value of −30 dB obtained by the hexagonal patch. In terms of overall gain, however, the rectangular patch has the best value, and its VSWR is also

Comparison of Different Shapes for Micro-strip Antenna Design Fig. 7 S11 plot of rectangular patch

Fig. 8 Radiation pattern of rectangular patch

Fig. 9 S11 plot of square patch



Fig. 11 S11 plot of circular patch

Fig. 12 Radiation pattern of circular patch




Fig. 13 S11 plot of elliptical patch

Fig. 14 Radiation pattern of elliptical patch

Fig. 15 S11 plot of hexagonal patch

better than that of any other patch considered in this paper. So for this frequency range, we can say that the rectangular patch is the best patch, giving suitable outputs (Table 1).



Fig. 16 Radiation pattern of hexagonal patch

Table 1 Comparison table for different patches
Type of patch   Resonating frequency (GHz)   Return loss (dB)   Bandwidth (GHz)   Gain (dB) E-plane   Gain (dB) H-plane   VSWR
Rectangular     10                           −26.5              8.68–10.34        3.197               −0.32               1.1
Square          10                           −13                9.69–10.33        0.355               −8.21               1.6
Circular        10                           −18.5              9.35–10.68        1.189               −0.414              1.3
Elliptical      10                           −13.6              9.66–10.28        0.355               −6.744              1.5
Hexagonal       10                           −30                9.59–10.23        −0.328              −0.5203             1.2


4 Conclusion From the above comparison table, we can conclude that the patch having low return loss, low VSWR, and high gain will result in a better performance. Here, from the above table, we can observe that the rectangular micro-strip patch antenna is giving all types of satisfactory values when compared to other types of patches. It can also be observed that the hexagonal patch is also giving a good return loss (−30 dB), but if compared in terms of VSWR and gain, then rectangular patch is having better results. So from the above data, we can draw a conclusion that for the 10 GHz frequency range, rectangular patch is the best to be analyzed and further can be simulated for the fabrication process.



References
1. Sahoo S, Mohanty MN, Mishra LP (2018) Bandwidth improvement of compact planar antenna for UWB application with dual notch band performance using parasitic resonant structure. Prog Electromagn Res M 66:29–39
2. Balanis CA (1997) Antennas theory—analysis and design, 3rd edn. John Wiley & Sons, Inc
3. Ali F, Hassani HR, Sajad MAN (2012) Small UWB planar monopole antenna with added GPS/GSM/WLAN bands. IEEE Trans Ant Propag 60(6):2987–2992
4. Foad F, Somayyeh C, Seyed Abdullah M (2012) Systematic design of UWB monopole antennas with stable omnidirectional radiation pattern. IEEE Ant Wirel Propag Lett 11:752–755
5. Samal PB, Soh PJ, Vandenbosch G (2013) A systematic design procedure for microstrip based unidirectional UWB antennas. Prog Electromagn Res 143:105–130
6. Rathod JM (2010) Comparative study of micro strip patch antenna for wireless communication application. Int J Inno Manage Technol 1(2)
7. Sharma S, Bhusan B (2013) Performance comparison of micro strip antennas with different shapes. Int J u- and v-Service, Sci Technol 6(3)
8. Krishna CR, Ganesh N, Prasad DD (2018) Design of elliptical shaped micro-strip patch antenna for Ka band. Int J Res Anal Rev 5(3)
9. Sahoo S, Mishra LP, Mohanty MN, Mishra RK (2018) Design of compact UWB monopole planar antenna with modified partial ground plane. Microw Opt Technol Lett 60:578–583

Time and Frequency Domain Analysis of a Fractional-Order All-Pass Filter Tapaswini Sahu, Kumar Biswal, and Madhab Chandra Tripathy

Abstract In this research article, integer-order and fractional-order all-pass filters are compared through simulation results. The time domain performance of the all-pass filter is analyzed with respect to the variation in rise time, settling time, peak time, and percent overshoot. The fractional Laplacian operator is used in the filter analysis directly at the transfer-function level, without deriving the time domain representation of the underlying partial differential equation. The frequency domain analysis is also performed to observe the variation in the filter response. Approximation methods used in these two analyses for studying the behavioural response of the proposed filter are explained. By comparing the parameters of the integer- and non-integer-order cases, it is concluded that the fractional-order filter performs better than its integer-order counterpart. Keywords Laplacian operator · Oustaloup recursive approximation (ORA) · Fractional-order all-pass filter (FOAPF) · Transfer function

T. Sahu (B) BPUT, Rourkela, Odisha, India
K. Biswal School of Electronics Engineering, Kalinga Institute of Industrial Technology, Deemed to be University, Bhubaneswar, India, e-mail: [email protected]
M. C. Tripathy Department of Instrumentation and Electronics Engineering, CET, BPUT, Rourkela, Odisha, India, e-mail: [email protected]

1 Introduction
Many phenomena in engineering and other research areas have been successfully explained by models based on the mathematical tools of fractional calculus. Fractional calculus is now a potent and widely utilized technique for improved modeling in a wide range of science and engineering disciplines [1, 2]. Fractional calculus is known for describing systems more accurately than traditional integer-order models, since many systems are of non-integer order by nature [3–5]. Fractional calculus refers to an order which is not a whole number (fractional). It generalizes integration and differentiation [6–8] to arbitrary order, expressed through a single primary operator ${}_aD_t^{\alpha}$, $\alpha \in \mathbb{R}$, where $a$ and $t$ are the limits of the operation and $\alpha$ is the operating order:

$${}_aD_t^{\alpha} = \begin{cases} \dfrac{d^{\alpha}}{dt^{\alpha}}, & \Re(\alpha) > 0 \\ 1, & \Re(\alpha) = 0 \\ \displaystyle\int_a^t (d\tau)^{-\alpha}, & \Re(\alpha) < 0 \end{cases} \qquad (1)$$

Fractional calculus uses this operator to design FO circuits in which integration and differentiation of non-integer order can both be represented [9, 10]. Nowadays, fractional calculus has numerous applications in many areas, and it is based on the definition of the fractional integral

$$D^{-v} f(t) = \frac{1}{\Gamma(v)} \int_0^t (t-\xi)^{v-1} f(\xi)\, d\xi \qquad (2)$$

where $\Gamma(v)$ denotes the gamma function. In terms of the fractional integral, the fractional derivative of order $\mu > 0$ can be defined as

$$D^{\mu} f(t) = D^{m}\left[D^{-(m-\mu)} f(t)\right] \qquad (3)$$

where $m$ is an integer $\geq \lceil \mu \rceil$; different definitions of fractional derivatives and fractional integrals are discussed in [11]. Fractional differentiation and integration provide a powerful tool for modelling memory in various systems. There are two main families of definitions: the Grunwald-Letnikov definition follows the pattern of Eq. (2), while the Riemann-Liouville and Caputo definitions are likewise based on fractional integration. The Riemann-Liouville, Grunwald-Letnikov, and Caputo definitions are the key definitions of fractional-order (FO) integration and differentiation used in this work [12]. The fractional Laplacian operator is particularly useful in the design of filters, since it allows the transfer function to be handled directly without the complexity of time-domain fractional representations. However, no commercial off-the-shelf components are available to physically realize these filters in the fractional domain [13, 14]. The operator of the fractional Laplacian is defined in the Laplace domain by the transfer function $H(s) = s^{\alpha}$, where $\alpha$ is the fractional order. Rational approximation methods for it are provided by Carlson, Matsuda, Oustaloup, and the continued fraction expansion (CFE), each with its own characteristics [15, 16], with the common impedance representation $Z(j\omega) = (j\omega)^{\alpha}$. Here $\omega$ is the angular frequency, and the exponent $\alpha$ takes the values $-1$, $0$, and $1$ for capacitance, resistance, and inductance, respectively [17].



It should be self-evident that for $\alpha = 1$, $Z$ is an inductor; for $\alpha = 0$, $Z$ is a resistor; and for $\alpha = -1$, $Z$ is a capacitor. The inductor, resistor, and capacitor have impedance phase angles of $\pi/2$, $0$, and $-\pi/2$, respectively [18]. Accordingly, a fractional-order element (FOE), as a generalization of the familiar circuit elements, can be regarded as an ideal circuit element that provides a constant phase angle between these values. Suppose a voltage $v(t) = V\,u(t)$ is applied to a capacitor carrying no pre-charge. The device then responds in the form

$$i(t) = \frac{v}{h\,t^{\alpha}} \quad \text{for } t > 0,\; 0 < \alpha < 1 \qquad (4)$$

Here, $h$ and $v$ are real-valued parameters. Such behaviour is governed by differential equations of non-integer order and, in a fractional-order electrical circuit, represents a fractional capacitor. The mathematical representation of the fractional capacitor is given by

$$i_c(t) = c_{\alpha}\, \frac{d^{\alpha} v_c(t)}{dt^{\alpha}} \qquad (5)$$

In the s-domain, this relationship becomes

$$v_c(s) = \frac{I(s)}{c\,s^{\alpha}} \qquad (6)$$

This relationship introduces a phase shift between current and voltage; for order $\alpha$, the phase shift becomes $\alpha\pi/2$ (reducing to $\pi/2$ for an ordinary capacitor). Compared with the standard elements, the fractional element is a function of both the parameter (C or L) and the fractional order (FO) $\alpha$, allowing for greater design and application flexibility.
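As a quick numerical illustration of the fractional integral definition in Eq. (2), the sketch below evaluates it for the simple case f(t) = t and compares the result with the known closed form D^{-v} t = t^{1+v}/Γ(2+v). The substitution s = (t − ξ)^v removes the endpoint singularity so that an ordinary trapezoidal rule suffices; the particular v and t used here are arbitrary example values, not values from the paper.

```python
import numpy as np
from scipy.special import gamma

def rl_fractional_integral(f, t, v, n=10_000):
    """Riemann-Liouville integral D^{-v} f(t) = 1/Gamma(v) * int_0^t (t-xi)^(v-1) f(xi) dxi.

    With the substitution s = (t - xi)^v the integrand becomes (1/v) * f(t - s^(1/v)),
    which is bounded on [0, t^v], so a plain trapezoidal rule can be used.
    """
    s = np.linspace(0.0, t ** v, n)
    integrand = f(t - s ** (1.0 / v)) / v
    ds = s[1] - s[0]
    trapezoid = np.sum((integrand[1:] + integrand[:-1]) * ds / 2.0)
    return trapezoid / gamma(v)

if __name__ == "__main__":
    v, t = 0.5, 2.0                                  # example order and time
    numeric = rl_fractional_integral(lambda x: x, t, v)
    analytic = t ** (1 + v) / gamma(2 + v)           # closed form for f(t) = t
    print(f"numeric  = {numeric:.6f}")
    print(f"analytic = {analytic:.6f}")
```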

2 Theoretical Background of Fractional-Order Filter
The fundamental advantage of FO systems over integer-order (IO) systems is that fractional-order systems have unlimited memory, whereas IO systems have finite memory [16, 17]. The use of fractional calculus also brings greater flexibility and freedom to the design. The word "fractional" here refers broadly to any non-integer order (fractional, irrational, or even complex values), so "non-integer-order calculus" would be the more precise name. Integer-order filters compute faster than their fractional counterparts and were therefore often preferred at high orders or when only a slow CPU was available. Compared with a standard integer-order filter, however, fractional-order filters provide more versatility. The fractional-order filter (FOF) satisfies design criteria that a regular integer-order filter (IOF) would not be able to meet.

Table 1 Comparison between integer-order and fractional-order filters

Features | FOF | IOF
Flexibility | More | Less
Memory | Infinite | Finite
Cut-off frequency | Sharper | Normal
Stability | More stable | Less stable
Speed | High | Low
Degree of freedom | More | Less
Accuracy | More | Less

We also note that an integer-order filter cannot always satisfy such design requirements. The magnitude roll-off can be customized to a specific slope, which can be obtained using fractional-order systems [18, 19]. Choosing the proper fractional order is an important factor in meeting performance specifications such as cut-off frequency, quality factor, and bandwidth. Consequently, the realization of a fractional-order filter requires fractional capacitors, fractional resistors, and fractional inductors (Table 1). In general, parameters such as the transfer function, magnitude, phase shift, upper cut-off frequency ($f_h$), lower cut-off frequency ($f_l$), bandwidth (BW), quality factor ($Q$), and peak frequency ($\omega_m$) are the quantities measured for integer-order filters. The filter's transfer function $T(S)$ is the ratio of the output signal $Y(S)$ to the input signal $X(S)$ as a function of the complex frequency $S$, i.e.,

$$T(S) = \frac{Y(S)}{X(S)} = \frac{\mathcal{L}\{y(t)\}}{\mathcal{L}\{x(t)\}} \qquad (7)$$

The order of the transfer function, with $s = \sigma + j\omega$, is determined by the largest power of $S$ encountered in either the numerator or the denominator. The low-pass filter and the high-pass filter are described by the transfer functions

$$T(S) = \frac{a}{s + a} \qquad (8)$$

and

$$T(S) = \frac{as}{bs + 1} \qquad (9)$$

respectively. In practical applications, non-integer-order systems are particularly appealing because they allow controllers of great robustness and design flexibility to be created. The remarkable performance of these controllers, however, comes with some drawbacks. Because of the infinite memory requirements, FO systems cannot be constructed



directly using fractional calculus. The standard method, developed by Oustaloup, is based on a rational approximation of the fractional order over a selected frequency band. The order of a fractional-order filter is determined by the number of fractional capacitors used in the circuit. If one fractional capacitor is used, the order of the filter lies between 0 and 1; if more than one fractional capacitor is present in the filter circuit, the order of the filter varies accordingly.

3 Use of the Oustaloup Approximation Method to Analyze a Fractional-Order All-Pass Filter
Different methods for approximating a fractional-order transfer function by rational transfer functions are available in the literature. One of the most popular is the Oustaloup recursive approximation (ORA) of fractional-order operators in the frequency domain [1, 2], which is used later in this work for simulation purposes. The approximation is defined as

$$s^{\alpha} \approx \prod_{n=1}^{N} \frac{1 + \dfrac{s}{\omega_{z,n}}}{1 + \dfrac{s}{\omega_{p,n}}} \qquad (10)$$
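To show how Eq. (10) can be realized numerically, the sketch below builds the Oustaloup recursive filter in its commonly quoted pole/zero form over a chosen band [ω_b, ω_h]; the normalization differs slightly from the way Eq. (10) is written, but the fitted frequencies are the same. This is the standard ORA recipe from the literature rather than code taken from the paper, and the band limits, the number of sections N, and the order α used here are assumed example values.

```python
import numpy as np
from scipy import signal

def oustaloup(alpha, wb=1e-2, wh=1e2, N=5):
    """Rational (pole/zero) approximation of s**alpha over [wb, wh] rad/s.

    Standard Oustaloup recursive filter:
      s**alpha ~ K * prod_k (s + w_zk) / (s + w_pk),
    with w_zk = wb*eta**((2k-1-alpha)/(2N)), w_pk = wb*eta**((2k-1+alpha)/(2N)),
    eta = wh/wb and K = wh**alpha.
    """
    eta = wh / wb
    k = np.arange(1, N + 1)
    w_z = wb * eta ** ((2 * k - 1 - alpha) / (2 * N))
    w_p = wb * eta ** ((2 * k - 1 + alpha) / (2 * N))
    K = wh ** alpha
    # zeros at -w_z, poles at -w_p, overall gain K
    num, den = signal.zpk2tf(-w_z, -w_p, K)
    return num, den

if __name__ == "__main__":
    alpha = 0.5                                   # assumed example order
    num, den = oustaloup(alpha)
    w = np.logspace(-1.5, 1.5, 300)               # frequencies inside the band
    _, H = signal.freqs(num, den, worN=w)
    # the approximation should track |(jw)**alpha| = w**alpha inside the band
    err = np.max(np.abs(20 * np.log10(np.abs(H)) - 20 * alpha * np.log10(w)))
    print(f"max magnitude error inside the band: {err:.2f} dB")
```

The resulting rational transfer function can then be substituted for s^α in any filter expression and analyzed with ordinary LTI tools.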

3.1 Time Domain Analysis Modern control theory applications are increasingly focusing on non-integer-order systems. They are particularly appealing in terms of potential applications, enabling for the building of controllers with increased robustness and design flexibility. Regrettably, there are certain issues with the efficient implementation of these controllers. Because of the infinite memory requirements, FO regulators cannot be achieved directly using FO differentiators and integrators. As a result, efficient approximation is required for implementation. Oustaloup’s conventional approach, this is based on predicting the frequency domain FO system for the given range. It restores the order “α” FO system with a high IO system of order N, with exactness increasing as N increases. Unfortunately, as the order of approximation and frequency band increase, transfer functions that can only be estimated in digital contexts become unstable. Physical processes are usually represented by vast and complex mathematical models. Because simpler models are often used in control, design, and analysis, the order of complex systems must be reduced using the Oustaloup filter approximation methodology. The analysis was done in the frequency domain and time domain using



MATLAB software to compare the reduction approaches with the optimum response (Fig. 1). The fractional-order all-pass filter is represented by the transfer function

$$T_{\mathrm{FAPF}}(s) = \frac{b\,(s^{\alpha} - a)}{s^{\alpha} + a} \qquad (11)$$

The value of α is held in the range 0.1 ≤ α ≤ 1.0 (Fig. 2). All-pass filters are commonly employed in phase-shifting applications in analog signal processing systems, where they pass all frequencies with a predictable, electronically controllable phase shift [19]. When generalized to the fractional domain, the all-pass filter gains additional freedom in shaping its response. The time-domain response of the FO all-pass filter for orders 0.5–0.9 is shown and is close to the theoretical results (Table 2). Simulating the fractional-order filter in MATLAB yields the time domain analysis, and the filter's performance is evaluated for various values of α.
Fig. 1 Circuit diagram of an all-pass filter

Fig. 2 Time domain response of fractional all-pass filter when 0.1 ≤ α ≤ 1



Table 2 Specification of response parameters for various values of α

α | Rise time (s) | Settling time (s) | % overshoot | Peak time (s)
0.1 | 2.3065 | 7.0211 | 4.8534 | 5.4839
0.2 | 2.5498 | 13.3112 | 13.3946 | 5.4279
0.3 | 2.6387 | 17.5234 | 24.4328 | 5.8299
0.4 | 2.7938 | 17.7927 | 37.6998 | 6.5874
0.5 | 3.0422 | 14.3741 | 53.4641 | 7.3643
0.6 | 3.3009 | 20.9886 | 72.0299 | 8.2009
0.7 | 3.4842 | 33.4776 | 93.4987 | 9.3091
0.8 | 3.6997 | 56.9563 | 118.7174 | 9.8997
0.9 | 3.9878 | 108.1285 | 146.3289 | 11.5987
1.0 | 4.1732 | 463.2057 | 180.3028 | 12.4384

The simulations are carried out in MATLAB, and the results indicate that fractional-order filters offer greater freedom in shaping the filter response.
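As a rough cross-check of the kind of time-domain behaviour reported in Table 2, the fractional all-pass transfer function of Eq. (11) can also be simulated directly with a Grunwald-Letnikov discretization of s^α, instead of the Oustaloup/MATLAB route used in the paper. The sketch below is only illustrative: the coefficients a and b, the step size, and the simulation horizon are assumed values, so its numbers are not expected to reproduce Table 2 exactly.

```python
import numpy as np

def foapf_step_response(alpha, a=1.0, b=1.0, h=0.01, t_end=20.0):
    """Unit-step response of T(s) = b*(s**alpha - a)/(s**alpha + a).

    The fractional derivative is discretized with Grunwald-Letnikov weights:
      D^alpha x(t_n) ~ h**(-alpha) * sum_j w_j * x_{n-j},
      w_0 = 1,  w_j = w_{j-1} * (1 - (alpha + 1)/j).
    """
    n = int(t_end / h) + 1
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)

    u = np.ones(n)          # unit step input
    y = np.zeros(n)
    c = h ** (-alpha)
    for k in range(n):
        # GL history sums (excluding the current output sample)
        hist_y = np.dot(w[1:k + 1], y[k - 1::-1]) if k > 0 else 0.0
        hist_u = np.dot(w[:k + 1], u[k::-1])
        # (D^alpha + a) y = b (D^alpha - a) u, solved for y[k]
        y[k] = (b * (c * hist_u - a * u[k]) - c * hist_y) / (c + a)
    t = np.arange(n) * h
    return t, y

if __name__ == "__main__":
    for alpha in (0.5, 0.9, 1.0):
        t, y = foapf_step_response(alpha)
        print(f"alpha={alpha}: final value ~ {y[-1]:.3f}, peak {y.max():.3f}")
```

The response arrays returned here can be post-processed with the usual definitions to extract rise time, settling time, overshoot, and peak time.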

3.2 Frequency Domain Analysis
The frequency domain analysis of the fractional-order all-pass filter is illustrated using the magnitude and phase obtained from its transfer function. The transfer function of the FAPF is given in (11), and its magnitude and phase angle are obtained by putting $s = j\omega$:

$$\mathrm{Mag}_{\mathrm{FAPF}}(j\omega) = b\sqrt{\frac{\omega^{2\alpha} - 2a\,\omega^{\alpha}\cos\frac{\alpha\pi}{2} + a^{2}}{\omega^{2\alpha} + 2a\,\omega^{\alpha}\cos\frac{\alpha\pi}{2} + a^{2}}} \qquad (12)$$

$$\mathrm{Phase}_{\mathrm{FAPF}}(j\omega) = \tan^{-1}\!\left(\frac{\omega^{\alpha}\sin\frac{\alpha\pi}{2}}{\omega^{\alpha}\cos\frac{\alpha\pi}{2} - a}\right) - \tan^{-1}\!\left(\frac{\omega^{\alpha}\sin\frac{\alpha\pi}{2}}{\omega^{\alpha}\cos\frac{\alpha\pi}{2} + a}\right) \qquad (13)$$
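Since (jω)^α = ω^α(cos(απ/2) + j sin(απ/2)), Eqs. (12) and (13) can be evaluated directly. The short sketch below sweeps ω for a few orders α; the coefficients a and b are assumed example values, not values taken from the paper.

```python
import numpy as np

def foapf_response(w, alpha, a=1.0, b=1.0):
    """Frequency response of T(jw) = b*((jw)**alpha - a)/((jw)**alpha + a)."""
    theta = alpha * np.pi / 2.0
    s_alpha = w ** alpha * (np.cos(theta) + 1j * np.sin(theta))  # (jw)**alpha
    T = b * (s_alpha - a) / (s_alpha + a)
    return np.abs(T), np.angle(T, deg=True)

if __name__ == "__main__":
    w = np.logspace(-2, 2, 200)
    for alpha in (0.3, 0.6, 1.0):
        mag, phase = foapf_response(w, alpha)
        # for alpha = 1 the magnitude is flat (a true all-pass); for alpha < 1
        # a dip appears around w = a**(1/alpha), the kind of behaviour shown in Fig. 3
        print(f"alpha={alpha}: |T| ranges from {mag.min():.3f} to {mag.max():.3f}")
```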

From Fig. 3, it is evident that the bandwidth increases with decrease in filter order (α), which results decrease in quality factor (Q). However, the height of the frequency increases using high frequency filtering. The magnitude and phase response of allpass filter through the value of α = 1 are shown in Fig. 3, and the magnitude response indicates slope of 40α. It is clear that as α value is increased the filter bandwidth decreases by the same parameter, whereas the notch appears with a higher order α. Therefore, α value incorporates independent control over the bandwidth of the filter, thus increasing the design level of freedom. The magnitude response has a low deviation value when the α value is low. It is observed that the fractional-order filter with higher value of exponent has a lower drop rate both in magnitude and phase response.



Fig. 3 Magnitude and phase response of FAPF with 0.1 ≤ α ≤ 1

Simulating the fractional-order filter in MATLAB yields the time and frequency domain responses. The performance of the FOF was evaluated for various values of the order α. As a result, it is observed that, with an FOF, more flexibility is achieved in designing a filter for a particular specification.

4 Discussions
From the time domain results, it is observed that the rise time, settling time, percent overshoot, and peak time of the fractional-order all-pass filter increase with increasing order α and attain their maximum values at α = 1. The proposed filter is therefore found to perform better when α is fractional. In the frequency domain, it is found that the bandwidth increases as the order α decreases, which results in a decrease in the quality factor Q. The magnitude and phase responses of the all-pass filter over the range 0 < α ≤ 1 are plotted. It is clear that as α increases the bandwidth of the filter decreases; α therefore controls the bandwidth of the filter, providing an additional degree of design freedom. The deviation in the magnitude response is low when α is small.

5 Conclusions
The time and frequency domain analyses of the FO all-pass filter were studied in this article. The all-pass filter, approximated with the Oustaloup method, was examined for various values of α. In the time domain, different values of α were used and their effect on rise time, peak time, settling time, and overshoot was observed; it was found that as the order α increases, the corresponding response times increase accordingly. In the frequency domain, the magnitude and phase responses were obtained for different values of α. It is noteworthy that for small α the asymptotic approximation becomes inaccurate, whereas with increasing order α the high-frequency side of the asymptotic plot becomes steeper, so the cut-off becomes sharper and the phase variation is confined to a smaller frequency range. The performance parameters of the all-pass filter were obtained in MATLAB for different values of α. The results show that the system response differs significantly between integer and non-integer orders, and that the fractional-order filter provides a progressive change in the output response as α is varied from 0 to 1. Using the Oustaloup approach, the all-pass filter has been quantitatively analyzed in the time and frequency domains, and the effect of varying the exponent of the fractional-order all-pass filter has been investigated. The response speed improves when the order of the filter is fractional; as a result, the fractional-order filter outperforms the integer-order filter in terms of performance. Compared with the integer-order filter, however, the response shows some deviations and becomes less stable when α is fractional.

References 1. Debnath L (2003) Recent applications of fractional calculus to science and engineering. Int J Math Math Sci 54:3413–3442 2. Swain S, Biswal K, Tripathy MC, Kar SK (2020) Performance analysis of fractional order Sallen-Key high-pass filter using fractional capacitors. In: 2020 International conference on computational intelligence for smart power system and sustainable energy (CISPSSE), Keonjhar, Odisha, India, vol 1, pp 1–5 3. Tripathy MC, Monda D, Biswas K, Sen S (2013) Design and performance study of phaselocked loop by using fractional-order loop filter. Int J Circ Theory Appl 43(6):776–792, Nov 2013 4. Tripathy MC, Mondal D, Biswas K, Sen S (2014) Experimental studies on realization of fractional inductors and fractional-order bandpass filter. Int J Circuit Theory Appl 43(9):1183– 1196 5. Tripathy MC, Biswas K, Sen S (2013) A design example of a fractional-order KerwinHuelsman-Newcomb Biquad filters with two fractional capacitors of different order 6. Kumar S, Tanwar R (2013) Analysis and design of fractional-order filters. Int J Inno Res Electr Electron Instrum Control Eng 1(3):112–113 7. Ali AS, Radwan AG, Soliman AM (2013) Fractional order butterworth filter: active and passive realizations. IEEE J Eng Sel Top Circ Syst 3(3), Sept 2013 8. Elawakil S, Radwan AG, Psychalions C, Maundy BJ (2018) Approximation of fractional-order laplacian operator as a weighted sum of first-order high-pass filters. IEEE Trans Circ Syst II 65(8) 9. Biswal K, Tripathy MC, Kar SK (2020) Performance analysis of fractional order low-pass filter. In: Lecture notes in network and systems (ICAC), vol 109. Springer, pp 224–231, Feb 2020 10. Biswal K, Tripathy MC, Kar SK (2020) Performance analysis of fractional order high-pass filter. In: Lecture notes in electrical engineering (IEPCCT), vol 630. Springer, pp 511–519, Mar 2020



11. Zahra WK, Hikal MM, Bahnasy TA (2017) Solutions of fractional order electrical circuits via Laplace transform and nonstandard finite difference method. J Egypt Math Soc 25:252–261 12. Sierociuk D, Podlubny I, Petráš I (2013) Experimental evidence of variable-order behavior of ladders and nested ladders. IEEE Trans Control Syst Technol 21:459–466 13. Gómez-Aguilar JF, Escobar-Jiménez RF, Olivares-Peregrino VH, Taneco-Hernández MA, Guerrero-Ramírez GV (2017) Electrical circuits RC and RL involving fractional operators with bi-order. Adv Mech Eng 9:1–10 14. Garrappa R (2015) Numerical evaluation of two and three parameter Mittag-Leffler functions. SIAM J Num Anal 53(3):1350–1369 15. Oustaloup A, Levron F, Mathieu B, Nanot FM (2000) Frequency band complex non-integer differentiator: characterization and synthesis. IEEE Trans Circ Syst-I: Fund Theory Appl 47(1):25–39 16. El-Khazali R, Tawalbeh N (2012) Realization of fractional-order capacitors and inductors. In: Proceedings of the 5th workshop on fractional differentiation and its applications, Hohai University, Nanjing, China 17. Biswal K, Swain S, Tripathy MC and Kar SK (2021) Modeling and performance improvement of fractional-order band-pass filter using fractional elements. IETE J Res https://doi.org/10. 1080/03772063.2021.1906334. Apr, 2021 (in press) 18. Ahmadi P, Maundy B, Elwakil AS, Belostotski L (2012) High-quality factor asymmetric-slope band-pass filters: a fractional-order capacitor approach. IET Circuits Devices Syst 6:187–197 19. Sahu T, Tripathy MC, Biswal K, Kar SK (2021) Performance analysis of fractional order filter using fractional order elements. In: Lecture notes in network and systems (LNNS), vol 202. Springer, pp. 401–408, May 2021

Cohort Selection using Mini-batch K-means Clustering for Ear Recognition Ravishankar Mehta, Jogendra Garain, and Koushlendra Kumar Singh

Abstract 2D ear recognition plays a vital role among biometric identification techniques in a fast-paced, networked, and security-conscious society because the data are easy to acquire. However, due to scarcity of data, a recognition system sometimes fails to achieve the expected accuracy. This is why cohort selection is an ongoing research topic in biometric systems: it allows a traditional biometric system to reach a high performance rate with minimum complexity and cost. The proposed idea presents a technique for selecting the cohort using the mini-batch k-means clustering algorithm. It makes use of the oriented FAST and rotated BRIEF (ORB) feature extraction technique for ear features and generates matching scores between images for each ear pair. The T-norm normalization technique is applied to calculate the normalized cohort score. For the experiments, the annotated Web ears (AWE) dataset is used, on which this cohort selection method achieves an excellent result and shows its superiority over the non-cohort recognition system. Keywords Cohort selection · Mini-batch k-means · Oriented FAST and rotated BRIEF (ORB) · Normalization

R. Mehta · K. K. Singh Department of Computer Science and Engineering, National Institute of Technology, Jamshedpur, Jharkhand, India, e-mail: [email protected]
J. Garain (B) Department of Computer Science and Engineering, Siksha 'O' Anusandhan Deemed to be University, Bhubaneswar, Odisha, India, e-mail: [email protected]

1 Introduction
Ear recognition is a sought-after technology in the field of biometric identification. These systems comprise two phases: ear detection followed by ear recognition. Over more than twenty years of dedicated research, many papers have been presented in journals and conferences in this field [1, 2], but still we cannot say




that AI systems can assess to individual performance. Because of variation in rotation, occlusion, environmental factor, sensing device, and poor illumination factor, the act of ear recognition system shows a discrepancy. That is why, it is quite necessary to minimize the effect of these factor so that it can reduce the non-matching errors and increase the effectiveness of the system. Ear recognition is a demanding yet interesting problem that it has attracted researchers with dissimilar background: augmented intelligence, psychology, pattern detection and recognition, computer vision, neural network, computer graphics. To take a photograph of human ear does not require his or her intervention since it can be done from a distance and it also does not need very costly device. Tistarelli et al. [3] use recognition-based system. The lacuna of the feature extraction techniques and the lack of sufficient images in the datasets throw a challenge to the researcher towards building a robust ear recognition system. In such a scenario, use of cohort images may be a solution [4–6]. Cohort images are the non-matched images which provide complimentary information to the system against each probe images. Therefore, it is paramount importance to design a cohort selection algorithm which is efficient enough to select a moderate number of cohort images for each individual. In this article, the authors use mini-batch k-means clustering algorithm to fulfil the purpose efficiently. We can also apply geometric features for mini-batch k-means clustering algorithm for this purpose [8, 9].

2 Methods
Cohort selection is mainly based on two kinds of comparison: (a) the similarity between the probe ear and the claimed enrolled template and (b) the similarity between the probe ear and the remaining templates in the database. The first gives the genuine (actual) score, and the second gives the cohort scores. The collection of images excluding the claimed image is called the cohort set; this set is kept separate from the enrolled and probe images. Figure 1 illustrates the concept of genuine score and cohort score. Matching the first and second images provides a genuine score, since they belong to the same person. The second and third images belong to different persons; therefore, the third image, which is considered a cohort image, provides a cohort score when matched with the second (query) image. There are mainly three steps in cohort selection: feature extraction, match score calculation, and cohort image selection for each enrolled subject. All three steps are described below one by one. Here, $x$ and $y$ are pixel positions and $I(x, y)$ is the grayscale intensity of the corresponding pixel. With the help of the image moments $m_{pq}$, the centroid $C$ of the pixel cluster is found as shown in Eq. (1):

$$C = \left(\frac{m_{10}}{m_{00}}, \frac{m_{01}}{m_{00}}\right) \qquad (1)$$



Fig. 1 Measuring of real score and cohort score from ear images

Finally, with the help of $O$ and $C$ (the geometric centre of the patch and the intensity centroid, respectively) of the pixel cluster, the orientation vector $\overrightarrow{OC}$ is obtained, and its direction is given by Eq. (2):

$$\theta = \tan^{-1}\!\left(\frac{m_{01}}{m_{10}}\right) \qquad (2)$$

Generate feature point descriptors: ORB uses the enhanced binary robust independent elementary features (BRIEF) algorithm [7], after retrieving the features from the accelerated segment test (FAST) keypoint detector with orientation, to compute a descriptor for each point. BRIEF is a binary (0 and 1) descriptor given by Eq. (3):

$$\tau(p; x, y) = \begin{cases} 1, & p(x) < p(y) \\ 0, & p(x) \geq p(y) \end{cases} \qquad (3)$$

where $p(x)$ and $p(y)$ represent the grey levels at locations $x$ and $y$, respectively, within the image patch around the feature point. To decrease the impact of noise, Gaussian filtering is first applied to the image. Select a cohort cluster using mini-batch k-means clustering: the following steps explain, step by step, how to prepare a set of cohort profiles relevant to a user identity (a minimal code sketch is given after the step list).
Step 1: Extract the intended features for each stored profile.
Step 2: Measure the similarity scores for each pair of profiles.



Fig. 2 Proposed frameworks

Step 3: Construct clusters from the obtained scores by applying the mini-batch k-means clustering algorithm.
Step 4: Find the centroid of each cluster.
Step 5: Finally, choose the most relevant cluster based on its centroid value. The choice depends on the application and the biometric profiles used.
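A minimal sketch of these steps is given below using OpenCV's ORB implementation, scikit-learn's MiniBatchKMeans, and a T-norm style score normalization. It is illustrative only: the image paths, the similarity score (number of cross-checked Hamming matches), the number of clusters, and the rule for picking the "most relevant" cluster (highest centroid) are all assumptions made for the example, not details fixed by the paper.

```python
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def orb_descriptors(path):
    """ORB keypoints/descriptors for one grayscale ear image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=500)
    _, desc = orb.detectAndCompute(img, None)
    return desc

def match_score(d1, d2):
    """Similarity score: number of cross-checked Hamming matches (assumed metric)."""
    if d1 is None or d2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return len(matcher.match(d1, d2))

def select_cohort(scores, n_clusters=3):
    """Steps 3-5: cluster one subject's non-match scores and keep one cluster."""
    x = np.asarray(scores, dtype=float).reshape(-1, 1)
    km = MiniBatchKMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(x)
    best = int(np.argmax(km.cluster_centers_.ravel()))   # assumed selection rule
    return [i for i, lab in enumerate(km.labels_) if lab == best]

def t_norm(genuine, cohort_scores):
    """T-norm normalization of a genuine score against the selected cohort."""
    mu, sigma = np.mean(cohort_scores), np.std(cohort_scores) + 1e-9
    return (genuine - mu) / sigma

if __name__ == "__main__":
    # hypothetical enrolled templates, one image per subject
    enrolled = [f"awe/subject_{i:03d}.png" for i in range(1, 11)]
    descs = [orb_descriptors(p) for p in enrolled]
    # Step 2: pairwise similarity scores for subject 0 against all others
    scores = [match_score(descs[0], d) for d in descs[1:]]
    cohort_idx = select_cohort(scores)
    print("cohort templates for subject 0:", cohort_idx)
    print("t-normalized self score:",
          t_norm(match_score(descs[0], descs[0]), [scores[i] for i in cohort_idx]))
```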

3 Database Description and Experiment
ORB features are used in the proposed cohort-based ear recognition system for evaluation and validation purposes. The clustering algorithm operates on batches: the dataset is first divided into small batches, and the clustering algorithm is then applied. The experiments are conducted partially on the AWE database, which consists of 1000 ear images of famous people taken in unconstrained environments, across different ethnicities, genders, and ages. In the proposed framework, shown in Fig. 2, the feature set is derived from the ORB feature descriptor, and a similarity measure is then calculated between feature sets; here, the Euclidean norm is used as the similarity measure. All similarity scores are then processed further for recognition.

4 Experimental Result and Analysis
Table 1 shows the quantitative results in the form of false accept rate (FAR), false reject rate (FRR), and equal error rate (EER) for the non-cohort system as well as the proposed system. For any biometric system, the correct recognition rate, computed from the FAR and FRR, is considered the most important parameter.



Table 1 Performance metrics of the cohort-based and non-cohort methods

Method used | FAR (%) | FRR (%) | EER (%) | Accuracy (%)
Non-cohort | 9.42 | 13.34 | 11.38 | 88.62
Proposed method [with cohort] | 6.64 | 8.56 | 7.60 | 92.40

The tabulated values show that the cohort-based method is superior to the non-cohort method, and this gain in performance is due to the use of the cohort profiles.

5 Advantages and Disadvantages
Since the original BRIEF descriptor has no rotation invariance, information is easily lost when the image is rotated. Hence, ORB augments the basic BRIEF computation with the orientation estimated for each feature point. On the other hand, the mini-batch k-means clustering algorithm is guaranteed to converge and is simple to implement. The count of selected cohorts for every subject is plotted in Fig. 3. This clustering method also accommodates new samples easily and efficiently. However, since the clustering mechanism is based on the k-means algorithm, it has a few bottlenecks, such as the dependency on the initial 'k' value and difficulty in clustering data of variable size. Another problem with this method is selecting the initial

Fig. 3 Number of cohort selected using ORB for AWE database



'k' value. It is also very difficult to decide at what point the algorithm should be considered to have converged. However, if these few drawbacks can be overcome, the method retains all the advantages of using a cohort. A deep learning approach can also be used for this purpose [10]. With the help of the non-match templates, the accuracy can be increased remarkably. Another advantage is that capturing a picture of a person's ear does not require his or her cooperation, since it can be done from a distance with a low-priced device.

6 Conclusion and Future Scope A biometric system which uses specific cohort selection method based on ORB feature by using mini-batch k-means clustering is presented in this paper. The work has been carried out on AWE [11] ear database having various challenges. The result shows the improvement in accuracy after applying cohort images over the traditional non-cohort ear recognition system. The proposed method can also be used with other feature extraction technique. In the proposed work, for one subject, we have considered only one template for constructing the cohort pool. More than one sample per subject can also be taken initially in the cohort set. Though, we have focussed only on ear biometric, but it can also be applied in other biometric traits without much modification.

References 1. Ramesh KP, Rao KN (2009) Pattern extraction methods for ear biometrics-a survey. In: 2009 World congress on nature and biologically inspired computing (NaBIC). IEEE 2. Yuan L, Mu Z, Xu Z (2005) Using ear biometrics for personal recognition. In: International workshop on biometric person authentication. Springer, Berlin, Heidelberg 3. Tistarelli M, Sun Y, Poh N (2014) On the use of discriminative cohort score normalization for unconstrained face recognition. IEEE Trans Inf Foren Security 9(12):2063–2075 4. Garain J, Kumar RK, Kisku DR, Sanyal G (2019) Addressing facial dynamics using k-medoids cohort selection algorithm for face recognition. Multimedia Tools Appl 78(13):18443–18474 5. Aggarwal G, Ratha NK, Bolle RM (2006) Biometric verification: looking beyond raw similarity scores. In: 2006 Conference on computer vision and pattern recognition workshop (CVPRW’06). IEEE, pp 31–31 6. Garain J, Kumar RK, Kisku DR, Sanyal G (2016) Selection of user-dependent cohorts using Bezier curve for person identification. In: International conference image analysis and recognition. Springer International Publishing, pp 566–572 7. Calonder M, Lepetit V, Strecha C, Brief FP (2010) Binary robust independent elementary features. In: Proceedings of the European conference on computer vision, pp. 778–792 8. Rahman M, Sadi MS, Islam MR (2014) Human ear recognition using geometric features. In: 2013 International conference on electrical information and communication technology (EICT). IEEE 9. Albiol A, Monzo D, Martin A, Sastre J, Albiol A (2008) Face recognition using HOG–EBGM. Patt Recogn Lett 29(10):1537–1543



10. Priyadharshini RA, Arivazhagan S, Arun M (2021) A deep learning approach for person identification using ear biometrics. Appl Intell 51(4):2161–2172 11. Emeršiˇc Ž, Štruc V, Peer P (2017) Ear recognition: more than a survey. Neurocomputing 255:26–39

A Hybrid Connected Approach of Technologies to Enhance Academic Performance Sushil Kumar Mahapatra, Binod Kumar Pattanayak, and Bibudhendu Pati

Abstract Technological advancement in every field has a great impact on the education system and modernizes the teaching and learning approach. The traditional teaching and learning system fails to cope with this advancement of technology. As technology is changing rapidly, a dynamic educational system that can satisfy the need for real-time learning scenarios is highly necessary. Our proposed learning model is a hybrid of recently popular learning methodologies, i.e., case-based learning, flip learning, and gamification of courseware. This hybrid model can satisfy students' expectations by driving them to face real-world problems with the help of cutting-edge technologies such as IoT, blockchain, and machine learning. The proposed learning methodology is presented within a learning management system (LMS) by exploiting the features of the Internet of things (IoT). IoT in integration with the LMS enhances the real-time learning scenario for students; as a result, students can work through different case studies using critical and innovative thinking. This paper also reveals, with the help of IoT, the factors that influence students' learning capability, and it presents a real-time framework for enhancing the learning and teaching methodology along with the different features of the model and the benefits of its utilization. Keywords Educational system · LMS · IoT · Blockchain · Flip learning · CBL · Gamification · Academic performance · Air quality index · Air pollutant

S. K. Mahapatra (B) · B. K. Pattanayak, Department of Computer Science and Engineering, Siksha O Anusandhan Deemed to be University, Bhubaneswar, Odisha, India (B. K. Pattanayak e-mail: [email protected])
B. Pati, Department of Computer Science, Ramadevi Women's University, Bhubaneswar, Odisha, India




1 Introduction Since the evolution of Internet in late 90s, some researchers have speculated the potential of Internet. They suggested that the Internet was not only meant for the data sharing, but it can also be utilized for the services. Later in 1999, Kevin Aston, a businessman and researcher in the field of supply chain optimization, first initiated the term “Internet of Things” which can provide data sharing and services by connecting things of the Internet [1]. Therefore, depending upon the potential of utilization of IoT, it is defined by various names by different researchers and global IoT leading companies as Internet of Everythings (IoE), Internet of Services (IoS), Internet of Data (IoD), Internet of Anything (IoA), Internet of People (IoP), Internet of Signs (IoSs), and Industrial Internet of Things (IIoT) etc. According to CISCO, a global networking company, IoE has the capability to bring all together, i.e., people, services, data, and connected things to give a new means to the Internet [2]. This new means to the Internet is not only capable of creating new business opportunities but also capable of turning the information into an intelligent actionable services. Nowadays, the demand of automation in every field is increasing exponentially. So the IoT along with some cutting-edge cloud services like Microsoft Azure, IBM Node Red, and Amazon Web Services comes into picture, and the demand of these technologies is increasing rapidly [3]. One of the researchers from the USA named as Gartner who is leading a research and advisory company predicts that at around 27 billion devices with around 2 zettabytes of data storage will be connected to the Internet cloud by the end of the 2025 [4]. Therefore, the key challenges for implementing and maintaining such a huge number of IoT devices are its privacy and security. Along with these challenges, some other challenges are scalability, trust management, reliability, mobility, and interoperability [5]. On the other hand, pandemic-like events have recently compelled educational policymakers to shift the educational system from face-to-face (f2f) to the virtual mode of instruction. However, virtual courses face several problems such as a lack of network infrastructure, network connectivity, data security and integrity, attendance and class records management, and taking feedback, to name a few. Aside from all of these elements, environmental, social, and economic indicators are also important. The Internet of things (IoT) combined with blockchain (BC) in a learning management system (LMS) would be a superior answer to all of the difficulties listed above. Many academics have recently suggested that because IoT devices are small, portable, and inexpensive, they may be used to collect data on the environment, social, and economic backgrounds of students in various geographical locations. The blockchain, on the other hand, maybe used in LMS to ensure data security and integrity. LMS may also be utilized to help with IoT and BC. When combined with IoT and BC, several instructional learning techniques such as flip learning, blended learning, and gamification of course components can improve academic achievement in LMS. The proposed work is to deliver an effective LMS with cutting-edge technology to enhance academic performance.



Many studies have been carried out to determine the relationship between environmental conditions and academic achievement [6]. Environmental factors such as air pollution concentrations, temperature, humidity, and so on. An air pollutant in a specific region is the primary cause of respiratory-related health issues. The influence of these contaminants, on the other hand, is producing cognitive problems in children, which is reducing their attention levels [7]. The goal of this study is to see how air pollution affects a student’s educational results by utilizing the Internet of Everything (IoE) to measure their attentiveness in class. Because the influence of ambient air quality differs from place to place, the research was carried out in several parts of Odisha, India. This study found that high concentrations of PM10 , PM2.5 , NO2 , and SO2 can negatively influence a student’s cognitive response, but O3 had no effect on cognitive response and, hence, academic achievement. Keeping track of student attendance at VC is a challenging endeavor. Because the students are not there in person, there are several technical options for manipulating attendance. IoT is utilized in this suggested project to collect real-time data from students and to track their actions during the VC. While, BC is used to prevent data tampering as well as to ensure the integrity and dependability of the data stored [8]. The traditional teaching and learning system is unable to keep up with technological advancements. Due to the rapid evolution of technology, a hybrid LMS is presented that incorporates lately popular learning methodologies such as casebased learning, flip learning, and gamification [9, 10]. This hybrid system can meet students’ expectations by forcing them to confront a dynamic challenge using cuttingedge technology. The suggested hybrid LMS system makes use of IoT characteristics to boost student motivation and learning capacity.

2 Related Works This section describes about the recent advancement of the educational methodology using IoT. As the technological advancement is in flow, the IoT recently gets more attention due to its diversity in every field. Nowadays, IoT also plays a significant role in the field of education by developing the infrastructure as well as by improving the teaching and learning methodology. It is transforming the traditional learning system to an interactive and intelligent learning system. IoT helps every entity of the educational system starting from the administrator to the student as end user. It has a significant impact on all the stakeholders of the educational system. Some researcher has presented a model to correct the pronunciation and shape of the mouth while learning English language using IoT gadgets [11]. The authors in [12] use the IoT to deliver the concepts of programming language. Some researcher has developed a learning management system (LMS) to analyze the student’s learning method using analytics [13]. They presented a real-time case analysis model to use cloud computing and IoT in the structure of educational resource. The authors in [14] deliver a comparison of utilizing the IoT, cloud computing, data mining, and tripleplay for the distance mode of education. Vo et al. presented a qualitative study to



evaluate the instructors designed blended course by conducting a semi-structured interview. Here, they have presented a content construction communication platform as an evaluation tool [15]. In another approach, Ozqur et al. show a relationship between different ways of thinking and computational thinking skill along with the demographic variables. This is to predict the student’s skill in accordance with the some variables by the help of structure equation model [16]. Magiera et al. explore the students and teachers’ ability to explain the solution of different problems and also express the critiques to others solutions [17]. A study was conducted by Seage et al. to evaluate the impact of traditional science teaching and the innovative blended learning approach over the elementary school students of lower socioeconomic class [18]. Recently, so many researchers have been addressed to improve the attendance system using different embedded technologies such as RFID, GPS, and Bluetooth. All these technologies have been tested in monitoring the student’s attendance in traditional classes for its efficiency and reliability [19]. Initially, some researcher presented a RFID-based attendance system [20]. But the main disadvantage of RFID is that it can be used by any person, and this system is not tamper-proof. A locationbased attendance system is proposed in [21] using GSM module to validate whether an employee is present at the designated place or not. Many works have been carried out by using face recognition system, but it requires a very precise algorithm to record and detect the faces [22]. However, the authors in [23] introduce an iris recognition system to record the attendance. Abubakar et al. have presented a system which uses a finger print module based on IoT [24]. On the other hand, an effective attendance monitoring system and a good learning management system are not enough to improve the student’s academic capability. Some authors have identified some external factors such as environmental factors that are influencing the student’s academic performance. Ham has investigated in the area of California and found the inter-relation between the different air pollutants with the test score [25]. He also revealed a close inter-dependence of AQI with the academic performance. Carroll shows that the continuous exposure to the different air pollutants beyond the permissible limit of the WHO could create severe health issues and, hence, increases absenteeism in the class [26]. Hence, this would decrease the academic performance.

3 Proposed Methods and Materials
3.1 A Hybrid LMS System
Nowadays, flip learning in conjunction with CBL draws the attention of many researchers, and it has gained further popularity when used together with IoT. The proposed LMS is presented in Fig. 1. In the flip learning system [9], the course materials are provided to the students in the form of video lectures and online lecture notes along



Fig. 1 Proposed hybrid learning management system using flip learning, gamification, and CBL

with some questionnaires. The students first came across these online course materials in the first stage. After completion of introductory part, the domain expert will assign some task to the students. Then, each individual student or a group of student will try to solve it depending upon their prior knowledge. In the third stage, classroom discussion will be carried out in either online mode or offline mode with the domain expert. The domain expert clarifies all the questionnaires raised by the student. In the fourth stage, the students give their assessment report to the expert. The expert evaluates the report and gives his feedback to the student. Meanwhile, the students are also asked to submit their feedback about the online course materials and classroom experience. All these workflows are governed by the administrator through an online portal. The assessing policy for the expert and students is designed by the administrator. The administrator is held responsible for the data security and privacy of the students as well as domain expert. Different application delivery approaches are employed here such as public, private, and hybrid. For example, the course material application assess is kept under public assess policy. The content of course material preparation, updating, evaluation process, and feedback to the student application is coming under hybrid policy for the expert and the administrator. The overall maintenance and monitoring application policy is kept for the administrator assesses policy. In this approach, the student will have exposure to the real-world problems. Here, their problem-solving capability will improve which leads them to solve their domain complex problem by creative thinking [27]. In this modern era, students are getting attracted more to the laptops, palmtops, and mobile phones. They spend more time in these gadgets round the clock rather than the text book. So this shifts the learning trend from the text book to the mobile phone. Here, by introducing the gamification learning mode in the learning methodology, the educational institutions can attract the students and create more interest among



students by transforming the course module into a gamified course module [28]. This method enhances each student's thinking process and develops their skills for a given task. As the traditional teaching method fails to engage students in this era of rapid technological change, the gamification learning approach not only creates interest but also improves critical thinking. Gamification facilitates knowledge transfer, micro-learning, assessment, and competition among the students [29]. It also engages the students at their maximum level of concentration through reward points at the different levels of the gamification process.

3.2 Effect of Environment on Academic Performance
Nowadays, air pollutant levels are increasing in many areas, whether metro cities or rural areas. These air pollutants not only cause respiratory diseases but are also responsible for cardiac problems. Constant exposure to pollutants such as PM10, PM2.5, CO2, SO2, and NO2 can degrade the cognitive response of a child and hence degrade academic performance [30]. Thus, this study aims to find the inter-relationship between air pollutants and the academic performance of a student. This work also reveals the variation of test scores with different levels of air pollutants in different regions as well as in different seasons. The proposed air pollution measurement system consists of a network of different low-cost sensors measuring pollutants such as PM10, NO2, and SO2, as shown in Fig. 2. The sensor network is connected to IoT devices that send the collected data to the IoT cloud, and it is deployed across a specified area, generally a state or a country. The collected data are further analyzed by machine learning algorithms such as SVM, KNN, and NN to categorize the students' attentiveness at a given AQI level. Furthermore, at that AQI level, the test scores of the students are collected and analyzed to see the influence of the different air pollutants on the students' test scores.

Fig. 2 Air pollutant measurement system using IoT
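A minimal sketch of the analysis step described above is given below, assuming the sensor readings and attentiveness labels have already been exported to a CSV file. The file name, column names, and the choice of an SVM classifier with standard scaling are illustrative assumptions, not details prescribed by the paper.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# hypothetical export of the IoT-cloud data: one row per observation window
df = pd.read_csv("aqi_attentiveness.csv")
features = ["PM10", "PM2_5", "NO2", "SO2", "O3", "temperature", "humidity"]
X, y = df[features], df["attentiveness_level"]       # e.g. low / medium / high

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=42, stratify=y)

# SVM is one of the classifiers named in the text; KNN or a neural network
# could be dropped in here instead
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_tr, y_tr)
print("attentiveness classification accuracy:",
      round(accuracy_score(y_te, model.predict(X_te)), 3))
```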



3.3 Blockchain-Supported IoT-Based Attendance Monitoring System
Throughout the modern educational system, attendance plays an important role in achieving academic goals. In recent years, as virtual classes have become an effective tool for delivering class content, monitoring student attendance in the virtual mode of class has become an interesting issue. Some research has been carried out on monitoring attendance without human intervention by utilizing RFID tags, face recognition, biometric sensors, etc. [31]. However, all of these attendance monitoring systems assume the face-to-face mode of class. Furthermore, they are silent about the privacy and integrity of the data collected from these devices: after the data are collected from a device, they can be manipulated at the server end, and hence the reported academic outcomes may differ from the actual ones. To resolve this issue, a hybrid approach is presented that utilizes blockchain along with IoT for the security and integrity of student data. In the proposed approach, face recognition and fingerprint algorithms are utilized to monitor attendance, and alongside these algorithms a Python script monitors the network status; whenever there is a network issue, it is logged in a file and sent to the server. The continuous attendance monitoring system is presented in Fig. 3. At the server end, all attendance records of a particular course for a specified period are logged, and a block is created within the blockchain. This private blockchain is integrated within the LMS. All attendance records, along with the activity of a particular student, can be monitored by the faculty or by any academic council member.

Fig. 3 IoT-based continuous attendance monitoring system
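To make the tamper-evidence idea concrete, the sketch below chains per-session attendance records with SHA-256 hashes, so that altering any stored record invalidates every later block. It is a minimal illustration written for this description, not the paper's implementation; a real deployment would sit on a proper permissioned blockchain platform integrated with the LMS.

```python
import hashlib
import json
import time

def block_hash(block):
    """Deterministic SHA-256 over the block contents (excluding its own hash)."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def add_block(chain, course_id, attendance):
    """Append one class session's attendance records to the private chain."""
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "course_id": course_id,
        "attendance": attendance,                 # e.g. {"roll_01": "present", ...}
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = block_hash(block)
    chain.append(block)
    return block

def verify(chain):
    """True only if no stored block has been altered after it was appended."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

if __name__ == "__main__":
    chain = []
    add_block(chain, "CSE101", {"roll_01": "present", "roll_02": "absent"})
    add_block(chain, "CSE101", {"roll_01": "present", "roll_02": "present"})
    print("chain valid:", verify(chain))
    chain[0]["attendance"]["roll_02"] = "present"  # tamper with an old record
    print("after tampering:", verify(chain))
```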



4 Conclusion
This work emphasizes different approaches to improving academic outcomes and reveals the different factors that influence academic performance. To improve academic performance, a hybrid teaching-learning method is presented. Furthermore, to effectively monitor the attendance as well as the performance of a student, a secure LMS utilizing blockchain with IoT is presented. This work further investigates how the pollution level in different areas affects academic performance, and the diversified utilization of IoT in the field of education to improve its performance.

References 1. Ashton K (2009) That ‘internet of things’ thing. RFID J 22(7):97–114 2. Selinger M, Sepulveda A, Buchan J (2013) Education and the internet of everything: how ubiquitous connectedness can help transform pedagogy. White Paper, Cisco, San Jose, CA 3. Atlam HF, Alenezi A, Alharthi A, Walters RJ, Wills GB (2017) Integration of cloud computing with internet of things: challenges and open issues. In: 2017 IEEE International conference on Internet of Things (iThings) and IEEE green computing and communications (GreenCom) and IEEE cyber, physical and social computing (CPSCom) and IEEE smart data (SmartData). IEEE, pp 670–675 4. Gartner Inc. (2017) IT glossary—Internet of Things. http://www.gartner.com/it-glossary/int ernet-of-things/ ˇ 5. Colakovi´ c A, Hadžiali´c M (2018) Internet of Things (IoT): a review of enabling technologies, challenges, and open research issues. Comput Netw 144:17–39 6. Miller S, Vela M. The effects of air pollution on educational outcomes: evidence from Chile 7. Suglia SF, Gryparis A, Wright RO, Schwartz J, Wright RJ (2008) Association of black carbon with cognition among children in a prospective birth cohort study. Am J Epidemiol 167(3):280– 286 8. Meyliana YUC, Cadelina Cassandra S, Erick Fernando HAEW. Recording of student attendance with blockchain technology to avoid fake presence data in teaching learning process 9. Zhamanov A, Sakhiyeva Z, Zhaparov M (2018) Implementation and evaluation of flipped classroom as IoT element into learning process of computer network education. Int J Inf Commun Technol Educ (IJICTE) 14(2):30–47 10. Vu P, Feinstein S (2017) An exploratory multiple case study about using game-based learning in STEM classrooms. Int J Res Educ Sci (IJRES) 3(2):582–588. https://doi.org/10.21890/ijres. 328087 11. Wang Y (2010, October). English interactive teaching model which based upon Internet of Things. In: 2010 International conference on computer application and system modeling (ICCASM 2010), vol 13. IEEE, pp V13–587 12. Chin J, Callaghan V (2013) Educational living labs: a novel internet-of-things based approach to teaching and research. In: 2013 9th International conference on intelligent environments. IEEE, pp 92–99 13. Cheng HC, Liao WW (2012) Establishing an lifelong learning environment using IOT and learning analytics. In: 2012 14th International conference on advanced communication technology (ICACT). IEEE, pp 1178–1183 14. Castellani AP, Bui N, Casari P, Rossi M, Shelby Z, Zorzi M (2010) Architecture and protocols for the internet of things: a case study. In: 2010 8th IEEE International conference on pervasive computing and communications workshops (PERCOM workshops). IEEE, pp 678–683



15. Vo MH, Zhu C, Diep AN (2019) Examining blended learning implementation in hard and soft sciences: a qualitative analysis 16. Özgür H (2020) Relationships between computational thinking skills, ways of thinking and demographic variables: a structural equation modeling. Int J Res Educ Sci (IJRES) 6(2):299– 314 17. Magiera MT, Zambak VS (2020) Exploring prospective teachers’ ability to generate and analyze evidence-based explanatory arguments. Int J Res Educ Sci (IJRES) 6(2):327–346 18. Seage SJ, Türegün M (2020) The effects of blended learning on STEM achievement of elementary school students. Int J Res Educ Science (IJRES) 6(1):133–140 19. Gowri CSR, Kiran V, Rama Krishna G (2016) Automated intelligence system for attendance monitoring with open CV based on internet of things (IoT). Int J Sci Eng Technol Res (IJSETR) 5(4):905–913 20. Uddin MS, Allayear SM, Das NC, Talukder FA (2014) A location based time and attendance system. Int J Comput Theor Eng 6(1):1–2 21. Shoewu O, Olaniyi OM, Lawson A (2011) Embedded computer-based lecture attendance management system. Afr J Comput ICT 4(3):27–36 22. Mani Kumar B, Praveen Kumar M, Rangareddy (2015) RFID based attendance monitoring system using IOT with TI CC3200 launchpad. Int J Mag Eng Technol Manag Res 2(7):1465– 1467 23. Kadry S, Smaili M (2013) Wireless attendance management system based on iris recognition. Scien Res Essays 5(12):1428–1435 24. Abubakar I, Kpochi PK, Eiyike JS (2018) Design and implementation of a smart attendance register. Int J Adv Eng Res Dev 5(2) 25. Zweig JS, Ham JC, Avol EL (2009) Air pollution and academic performance: evidence from California schools. National Institute of Environmental Health Sciences, 1–35 26. Carroll HC (2010) The effect of pupil absenteeism on literacy and numeracy in the primary school. Sch Psychol Int 31(2):115–130 27. Herreid CF, Schiller NA (2013) Case studies and the flipped classroom. J Coll Sci Teach 42(5):62–66 28. Kapp KM (2012) The gamification of learning and instruction: game-based methods and strategies for training and education. John Wiley & Sons 29. Muntean CI (2011) Raising engagement in e-learning through gamification. In: Proceedings 6th international conference on virtual learning ICVL, vol 1, pp 323–329 30. Hoek G, Krishnan RM, Beelen R, Peters A, Ostro B, Brunekreef B, Kaufman JD (2013) Long-term air pollution exposure and cardio-respiratory mortality: a review. Environ Health 12(1):1–6 31. Ahmed A, Olaniyi OM, Kolo JG, Durugo C (2016) A multifactor student attendance management system using fingerprint biometrics and RFID techniques. In: International conference on information and communication technology and its applications (ICTA 2016), pp 69–74, Minna, Nigeria, November 2016

Estimation of Path Loss in Wireless Underground Sensor Network for Soil with Chemical Fertilizers Amitabh Satpathy, Manoranjan Das, and Benudhar Sahu

Abstract The wireless underground sensor network (WUSN) is one of the promising application areas of the recently emerging wireless sensor network (WSN) techniques. Although WUSN is treated as an extension of WSN applications to the underground (UG) environment, soil forms a major portion of the communication path and makes the implementation challenging. The attenuation loss caused by the inherent characteristics of soil is a major contributor to the total path loss experienced by a signal travelling in a WUSN. More specifically, in agricultural applications of WUSN, besides moisture, the presence of chemical fertilizer also contributes significantly to the loss. In this work, the path loss is computed considering soil mixed with chemical fertilizers. The findings of this work show that the signal path loss is maximum when the fertilizer content is high. Keywords Wireless underground sensor network · Path loss · Dielectric constant · Chemical fertilizers

1 Introduction

In the current era, the applications of WSN are widespread. Such networks work well through the coordination amongst sensor nodes deployed in a predefined area. Temperature, humidity, stress, and other environmental parameters are monitored by sensors, and the respective data are transferred to a central location via the sensor network. Most WSNs are analyzed on the basis of communication in free space. Exploration of applications of such networks is now not limited to communication in free space only, leading to terms such as wireless underground sensor networks (WUSNs). In recent years, WUSN has come up as a research area since it is different from terrestrial WSN concerning the communication medium. To be more specific, a WUSN is structured with the wireless sensor nodes


being placed underneath the ground (i.e. inside the soil) to sense the parameters in the UG region. Further, the UG sensed information is transmitted wirelessly, using the UG soil as the communication medium, to the sink node which may be installed in a nearby aboveground (AG) region [1]. Hence, it can be stated that WUSNs are the extension of WSN applications to the underground environment [2, 3]. Though a WUSN can be treated as such an extension, an acute difference lies between WUSNs and terrestrial WSNs due to the communicating medium. In case of terrestrial WSNs, the electromagnetic signal from the field node(s) reaches the sink node travelling through the free space communication medium only. Accordingly, the signal at the sink node may suffer from distance-dependent path loss, multipath fading, scattering, etc., depending on the geographical area where the WSN is structured. On the other side, in case of WUSN, the sensed data from the field node(s) have to travel through a mixed communication path (i.e. soil + air) to reach the aboveground sink node. As compared to air, the soil is a dense medium that absorbs and attenuates electromagnetic waves strongly. Therefore, besides the free space path loss, the transmitted signal in WUSN suffers an additional loss during its travel inside the soil medium itself, which is termed soil attenuation. This makes the use of WUSNs more difficult than WSNs in the AG environment because of complicated and uncertain aspects of the subterranean environment such as limited communication range and difficult energy replenishment. It is well known that the soil at the UG level has an uneven composition of elements like sand, clay, and silt based on geographical location. Besides, the water concentration in the soil also plays an important role in the attenuation loss offered to the electromagnetic signal. Hence, whilst computing the path loss in WUSN communication, the attenuation loss is also included. Some of the literature focussing on WUSN communication and computation of path loss is mentioned below. In [4], the authors have discussed the techniques adopted for transmitting a signal between sensor nodes in WUSN along with the channel characteristics. A broad study has been made by the authors in [5] to derive the path loss in UG channel propagation. A similar study has been made in [6] to analyze the path loss in WUSN considering both UG to AG and AG to UG communication. Specific focus on soil attenuation is given in [7] as a part of the losses offered to the electromagnetic signal in WUSN. As discussed in the works of literature above, it is seen that in WUSN, the dielectric loss offered due to the water content in the soil is considered as a major part of soil attenuation. However, in certain applications such as agriculture, chemical fertilizers are usually applied in the soil to improve fertility. This addition of chemical fertilizers leads to a change in the electrical characteristics (e.g. dielectric behaviour) of UG soils. Thus, it is worth pointing out that, being a part of the UG soil environment, the chemical fertilizer also affects the transmitted signal. Accordingly, in case of a WUSN-based agricultural process, it is also important to compute the path loss in the presence of chemical fertilizer. In this work, to compute the path loss in UG to AG


communication, we considered the effect of two commonly used chemical fertilizers, namely urea and potash. The remainder of this paper is organized as follows. Section 2 gives a glimpse of WUSN classification. In Sect. 3, the analytical methodology to compute the path loss for UG to AG communication is explained. Simulation results obtained from the evaluation of the analytical expressions are presented in Sect. 4. The conclusion is presented in Sect. 5.

2 Classification of WUSNs and Deployment Strategies

The categorization of WUSN is completely based on the soil region in which the UG sensor nodes are placed. In general, the UG soil region can be classified into two types based on the distance from the ground surface, which show a difference in soil composition. These are (1) topsoil, which refers to an approximate distance up to 30 cm from the ground surface, and (2) subsoil, which is considered as the region below topsoil, usually in the 30–100 cm zone [8]. Considering the agricultural application, it has been observed that in some cases the plant growth is limited to the topsoil region (e.g. vegetable plants) and in some other cases the root growth may extend to the subsoil region (e.g. fruit plants). Hence, subject to the area of interest, the WUSN can be configured in the topsoil or subsoil region. However, in both cases, the sink node must be in the nearby AG area. If the WUSN is structured in the topsoil zone, it is called a topsoil WUSN, and if it is installed in the subsoil region, it is called a subsoil WUSN.

3 Path Loss Estimation

Considering a single communication link in WUSN from UG to AG (i.e. between a UG sensing node and the aboveground sink node), the signal from the source (i.e. the UG sensing node) is transmitted in wireless mode. Thus, the power associated with the signal received at the destination (i.e. the sink node) is computed using the standard Friis equation expressed in (1) below [8].

\[ P_r = P_t(\text{dBm}) + G_t(\text{dB}) + G_r(\text{dB}) - L_{ph}(\text{dB}) \tag{1} \]

where P_t is the power used to transmit the signal, and G_t and G_r are the gains of the transmitter and receiver antennas, respectively. The last term in (1) is the total path loss (L_ph) experienced by the electromagnetic signal and is further expressed as in (2) below [5].

\[ L_{ph}(\text{dB}) = L_0(\text{dB}) + L_c(\text{dB}) + L_a(\text{dB}) \tag{2} \]


As indicated in (2), the total path loss is the accumulation of three differently experienced path losses, which are briefly described here. The first term in (2) defines the path loss that occurred due to travel of distance ‘d’ between transmitter and receiver node and is evaluated using the mathematical expression given in (3) [8]. 

\[ L_0(\text{dB}) = 20 \log\!\left(\frac{4\pi d f}{c}\right) \tag{3} \]

where L 0 is the path loss estimation in the air, f is the frequency of operation used by wireless sensor nodes and c is the velocity with which the electromagnetic signal travels. Secondly, whilst travelling from UG soil towards the AG sink node, the electromagnetic signal experiences a change in the transmission medium, which leads to developing the loss ‘L c ’. This kind of loss is computed using the analytical expression (4) given below [5]. 

\[ L_c(\text{dB}) = 20 \log\!\left(\frac{\lambda_0}{\lambda}\right) \tag{4} \]

where λ0 is the wavelength of the signal from source to destination, calculated by λ0 = c/f, and λ is called the wave factor, given by λ = 2π/β. Here, β is the phase shift constant and is calculated using the expression (5) given below [8].

\[ \beta = \omega \sqrt{\frac{\mu \varepsilon'}{2}\left[\sqrt{1 + \left(\frac{\varepsilon''}{\varepsilon'}\right)^{2}} + 1\right]} \tag{5} \]

As indicated in (5), the terms ε′ and ε″ are defined as the dielectric permittivity and the dielectric loss factor associated with the UG soil, respectively. The other terms are μ (i.e. the magnetic permeability of the UG soil composition) and ω (i.e. the angular frequency). Lastly, the third term in (2) defines the loss L_a, which arises due to the attenuation offered when the electromagnetic signal travels in UG soil. This kind of loss is represented mathematically using the expression (6) given below [5].

\[ L_a(\text{dB}) = 8.69\,\alpha d \tag{6} \]

where α is the attenuation constant and is calculated using the analytical expression (7) as written below [8].

\[ \alpha = \omega \sqrt{\frac{\mu \varepsilon'}{2}\left[\sqrt{1 + \left(\frac{\varepsilon''}{\varepsilon'}\right)^{2}} - 1\right]} \tag{7} \]

Hence, as presented above, the path loss in WUSN for UG to AG wireless communication depends on two basic parameters (i.e. α and β), which in turn are based on the soil dielectric characteristics. Moreover, the values of the two respective dielectric components (i.e. ε′ and ε″) for mixed soil compositions can be obtained through experimental approaches as proposed in [9–11] to compute the path loss.
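As a minimal illustration of expressions (1)–(7), the following Python sketch computes the three loss components and the total path loss. The frequency, distance, and the relative dielectric values are illustrative placeholders, not the measured data used in this paper.

```python
import numpy as np

MU0 = 4e-7 * np.pi      # free-space permeability (H/m); soil assumed non-magnetic
EPS0 = 8.854e-12        # free-space permittivity (F/m)
C = 3e8                 # speed of light (m/s)

def path_loss_components(f_hz, d_m, eps_r_real, eps_r_imag):
    """Return (L0, Lc, La, L_total) in dB following Eqs. (2)-(7); d_m is the link distance."""
    omega = 2 * np.pi * f_hz
    eps_p = eps_r_real * EPS0            # dielectric permittivity  (eps')
    eps_pp = eps_r_imag * EPS0           # dielectric loss factor   (eps'')
    common = MU0 * eps_p / 2.0
    ratio = (eps_pp / eps_p) ** 2
    beta = omega * np.sqrt(common * (np.sqrt(1 + ratio) + 1))   # Eq. (5)
    alpha = omega * np.sqrt(common * (np.sqrt(1 + ratio) - 1))  # Eq. (7)
    L0 = 20 * np.log10(4 * np.pi * d_m * f_hz / C)              # Eq. (3)
    lam0 = C / f_hz
    lam = 2 * np.pi / beta
    Lc = 20 * np.log10(lam0 / lam)                              # Eq. (4)
    La = 8.69 * alpha * d_m                                     # Eq. (6), in-soil travel assumed d_m
    return L0, Lc, La, L0 + Lc + La                             # Eq. (2)

# Illustrative values only: 2.44 GHz, 3 m link, assumed relative permittivity 6 - j1.2
print(path_loss_components(2.44e9, 3.0, 6.0, 1.2))
```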

4 Results and Discussion The numerical results obtained from the evaluation of the analytical expressions discussed in Sect. 3 are presented and analyzed in this section. Further, the estimation of path loss is carried out for the operating frequency 2.44 GHz (used for WSN) assuming the transmitter–receiver distance of 3 m. To analyze the effect of chemical fertilizer on path loss, we consider a mixture of soil and fertilizer in a fixed proportion. The two commonly used chemical fertilizers (i.e. urea and potash) are used for the analysis with different percentages of concentration being mixed with the soil independently. It is also assumed that the soil is free of magnetic materials. The results obtained through the experimental method for the dielectric components help to determine the respective values of the two basic influencing parameters (i.e. attenuation and phase shift constant) [9]. Correspondingly, the analysis is carried out for the impact of both urea and potash on individual losses: (1) due to attenuation, (2) due to change in transmission medium and also total path loss experienced during travel. Figure 1 presents the variation in path loss due to soil attenuation for both urea and potash mixed with the soil with 4 different concentrations. It is observed from the plotted results that increase in fertilizer concentration leads to an increase in path loss. This is because an increase in percentage concentration of fertilizers makes the soil more porous. Further, change in the type of fertilizer also makes a noticeable difference in path loss at a higher percentage of fertilizer in the soil mixture. This is because of the difference in their dielectric characteristics towards electromagnetic signals. It is known that during travel, due to changes in transmission medium, there is an effect on the electromagnetic signal strength. Here, the change in transmission medium (i.e. soil to air) also has an effect on path loss which is presented in Fig. 2. As depicted from the figure, this kind of path loss also increases due to an increase in the amount of fertilizer component in the soil mixture. The trend remains the same for both urea and potash. However, as compared to the soil attenuation effect, the loss incurred because of phase shift is less. Further, it is also noticed from both Figs. 1 and 2 that for a fixed amount of change in fertilizer concentration, the amount of


Fig. 1 Path loss due to soil attenuation versus fertilizer concentration

Fig. 2 Path loss due to phase shift versus fertilizer concentration

change in attenuation loss is less in comparison with the loss due to change in the transmission medium. Lastly, the total path loss experienced by the electromagnetic signal as an effect of fertilizer concentration is plotted in Fig. 3. As expressed in (2), the total path loss is the sum of three different kinds of path loss suffered by the signal during its travel from UG source to AG destination. Hence, as expected, the total path loss follows an increasing trend with an increase in fertilizer concentrations. Further, it is also noticed in all three plotted results that at zero percentage concentration of fertilizer, the soil itself produces a precise amount of path loss. Also, depending on the burial depth of the UG sensor node, the amount of distance travelled by the signal changes, which creates a noticed contribution in path loss.


Fig. 3 Total path loss versus fertilizer concentration

5 Conclusions

In WUSN, it is known that the field sensor nodes are placed below the ground surface. So, because of the UG soil composition, the electromagnetic signal suffers from path loss. Since, in the agriculture application, chemical fertilizer is a major component of the UG soil composition, it is observed that the fertilizer concentration greatly affects the signal strength during its travel, specifically in the soil region itself. On the other side, the concentration of fertilizer directly affects the fertility level of the soil. Hence, care must be taken in deciding the proper concentration of fertilizer to be mixed with the soil so as to achieve a good signal strength without hampering the fertility level too much. It is also observed from the formulation that the loss incurred is related to the operating frequency. Though the present work is carried out at 2.44 GHz only, the study can be extended to other frequencies meant for WSN applications to see the effect of chemical fertilizer on signal strength and to choose a proper operating frequency for practical implementation.

References

1. Akyildiz IF, Stuntebeck EP (2006) Wireless underground sensor networks: research challenges. Ad Hoc Netw 4:669–686
2. Akyildiz IF, Su W, Sankarasubramaniam Y, Cayirci E (2002) Wireless sensor networks: a survey. Comput Netw 38(4):393–422
3. Vidhya J, Danvarsha B (2021) Design and implementation of underground soil statistics transmission gadget utilizing WUSN. ICCES, India
4. Sun Z, Akyildiz IF (2010) Key communication techniques for underground sensor networks. Found Trends Netw 5(4):283–420
5. Akyildiz IF, Sun Z, Vuran MC (2009) Signal propagation techniques for wireless underground communication networks. Phys Commun J 2(3):167–183
6. Xiaoya H, Chao G, Bingwen W, Wei X (2011) Channel modeling for wireless underground sensor networks. In: 35th IEEE annual computer software and applications conference workshops
7. Bogena HR, Huisman JA, Meier H, Rosenbaum U, Weuthen A (2009) Hybrid wireless underground sensor networks: quantification of signal attenuation in soil. Vadose Zone J 8(3):755–761
8. Yu X, Han W, Zhang Z (2016) Path loss estimation for wireless underground sensor network in agricultural application. Agric Res 6:97–102
9. Rajesh Mohan R, Mridula S, Mohanan P (2015) Study and analysis of dielectric behavior of fertilized soil at microwave frequency. Eur J Adv Eng Technol 2(2):73–79
10. Ahire V, Ahire DV, Chaudhari PR (2015) Effect of chemical fertilizers on dielectric properties of soils at microwave frequency. Int J Sci Res Publ 5(5)
11. Navarkhele VV, Kapde KE, Shaikh AA (2015) Dielectric properties of black soil with chemical fertilizers at X band. Indian J Radio Space Phys 44:102–105

Molecular Communication via Diffusion—An Experimental Setup using Alcohol Molecule Meera Dash and Trilochan Panigrahi

Abstract Molecular communication systems are used to transmit digital information through a physical medium with the help of molecules. The receiver detects the presence or absence of the transmitted molecule to decode the digitally encoded messages. The biocompatible molecules are spread into a medium such as air or water for transmission. The system requires less energy and does not require antennas that are constrained to a specific wavelength of the signal. In the present experiment, alcohol molecules are used as the carrier of information. Alcohol is evaporated at the transmitter as per the data given to it. An alcohol detector is used at the receiver to detect the evaporated alcohol molecules present in the air. When alcohol molecules are spread into the air, their presence is detected by the sensor, which provides an appropriate voltage value. On the other hand, in the absence of the alcohol molecules, the detector gives a very low voltage. This can be further extended to data transmission (ASCII-based) and modulation techniques based on concentration. In order to minimize the transmission time and to increase the channel capacity, Morse code is incorporated successfully. Keywords Molecular communication · Diffusion · Concentration-based modulation

1 Introduction The nano-network is a new paradigm of communication where various communication methods are used to transmit information between very tiny machines of micro and nanoscale [1]. Molecular communication is one of the promising methods in nano and microscale as an alternative to conventional approaches based on electromagnetic or acoustic waves [2].


Fig. 1 Architecture of molecular communication [6]

In molecular communication, information is conveyed between transmitter and receiver via molecules. In the literature, different communication methods for molecular communication systems have been proposed. These approaches are inspired by cell-level biological communication systems [2]. In the present work, we focus on short- and medium-range communication via diffusion (CvD) in nanoscale networks. Here, the information is encoded on the concentration waves of messenger (i.e. organic) molecules. The transmitter generates a particular concentration level of messenger molecules based on the instantaneous bit in the data sequence. These organic molecules, known as messenger molecules, propagate in the environment as a broadcast. At the receiver, some messenger molecules are received by forming chemical bonds at the receptor level, as indicated in Fig. 1. Depending on the concentration of the messenger molecules in the medium, a number of chemical bonds are formed, and that triggers a series of events at the receiver to decode the information which was transmitted. In the literature, several aspects of the molecular communication system are studied. In particular, various types of channel models have been developed, and the channel capacity has also been evaluated in [2–4]. In the present work, we are not modeling any physical channel for molecular communication. We use air as the diffusion medium, and organic molecules are the messengers of information based on their concentration. Communication via diffusion (CvD) is a popular molecular communication system. Here, the information is sent by the transmitter using a sequence of symbols that spread over sequential time slots instead of transmitting the binary bits. The symbol sent by the transmitter is known as the "intended symbol", and the received symbol is called the "received symbol". The alcohol sensor MQ3 is used as the receiver, which gives appropriate digital and analog outputs for the concentration of alcohol molecules observed by the sensor [5]. Different modulation techniques are used to map between the messenger molecule and the received symbol, which helps in symbol detection. The symbols are modulated


over various “messenger molecule arrival properties” at the receiver, e.g., concentration, frequency, phase, molecule type, to form a signal. In the present scenario, concentration-based modulation scheme is realized first. In the concentration-based modulation, the concentration of messenger molecules is used to differentiate symbols. Different concentration values at the receiver represent different symbols. The concentration level is divided by proper thresholds to avoid error in detection. This method of transmission and detection of information is called concentration-based modulation.

2 Hardware Implementation of Molecular Communication System

The molecular communication system is realized by designing a transmitter to spread the alcohol molecules as per the input data and the corresponding Morse code. The receiver is designed to detect the alcohol molecules. The detailed description is as follows.

2.1 Transmitter

In the transmitter part, we have used the following components:

1. Electric portable spray painting machine: input AC voltage requirement is 220 V, 50 Hz.
2. Arduino Uno: to encode the input data into appropriate voltage values.
3. Relay: AC devices can be controlled by the DC input which is given to the relay. Here, the output of the Arduino is given as input to the relay, and it controls the electric portable spray painting machine and the fan.

The hardware setup for the transmitter is shown in Fig. 2a.

Fig. 2 Experimental setup for molecular communication: (a) transmitter, (b) receiver


2.2 Receiver

The present molecular system is based on the alcohol molecule. Therefore, the receiver is designed to detect the transmitted alcohol molecules. In order to reuse the alcohol detector, a DC fan is also used to dry up the detector. In the receiver part, the following components are used:

1. Alcohol sensor (MQ3) [8]: an alcohol sensor detects the concentration of alcohol gas in the air and gives an analog voltage as the output reading.
2. Arduino Uno: to decode the observed analog output across the sensor.
3. Fan: to disperse the alcohol molecules which settle over the alcohol sensor while zeros are being transmitted.

The setup for the receiver is shown in Fig. 2b.

3 Concentration-Based Modulation

In concentration-based modulation, the information is modulated onto different values of concentration. It is quite similar to amplitude shift keying (ASK), where ASK represents the digital data as variations in the amplitude of a carrier wave. The concentration observed near the receiver is directly proportional to the pulse width at the transmitter. By keeping the transmitter on for a few seconds (lower pulse width), the concentration observed near the receiver will be low. On increasing the on-time of the transmitter, the concentration observed will be more. Based on the concentration of the molecules, the sensor will give the appropriate analog output, and it can be decoded by designing a proper receiver.

3.1 Encoding

Here, we are going to send characters using the Morse code scheme [7], where the characters (alphabets or numbers) are defined as a series of dots (.) and dashes (_). Morse code is defined in such a way that the characters of higher probability get the minimum codeword length (http://www.codebug.org.uk/learn/step/540/morse-code-alphabet/). For example, characters of higher probability like A and E have the codewords "._" and ".", while characters of lower probability like Z and X have the codewords "__.." and "_.._". Using this scheme, we encode dots (.) with a lower pulse width and dashes (_) with a higher pulse width, each followed by a low input for some time duration. We keep a low input at the transmitter after each and every dot and dash because, after each dot and dash, molecules will still be present near the sensor. One should allow it to


Fig. 3 Varying pulse-width input to the transmitter

get free from the molecules for some duration. By keeping the low inputs, we can observe almost the same analog output value for all dots, and the same for dashes as well. In our proposed experiment, we have defined a pulse width of three seconds for a dot (.), five seconds for a dash (_), four and a half seconds of low input after each dot and dash, and five seconds of low input after each character. Based on these conditions, the waveform observed at the input of the transmitter for the word "NITG" is shown in Fig. 3, where "N" = "_ .", "I" = ". .", "T" = "_", and "G" = "_ _ .". First, each character is decoded into dots and dashes with the help of Morse code. The logic to turn the transmitter on is based on the dot, the dash, and the gap between them; a sketch of this encoding logic is given below. We also added a delay of five seconds between two consecutive characters. In molecular communication, the organic molecules diffuse into the physical medium. One of the important performance parameters is the distance between the transmitter and the receiver sensor. Therefore, in the following section, we discuss the receiver characteristics at different distances from the transmitter.
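The following Python sketch illustrates the encoding just described. It is only a host-side simulation of the timing (the actual transmitter is driven by an Arduino and a relay), and the Morse table here covers just the characters needed for the example word.

```python
# Illustrative sketch: map characters to Morse symbols and to relay on/off timing.
MORSE = {"N": "-.", "I": "..", "T": "-", "G": "--."}   # subset used in the experiment

DOT_ON, DASH_ON = 3.0, 5.0          # seconds the sprayer stays on
SYMBOL_GAP, CHAR_GAP = 4.5, 5.0     # seconds of low input after symbols / characters

def encode_word(word):
    """Return a list of (state, duration_s) pairs for the transmitter relay."""
    schedule = []
    for ch in word.upper():
        for symbol in MORSE[ch]:
            schedule.append(("ON", DOT_ON if symbol == "." else DASH_ON))
            schedule.append(("OFF", SYMBOL_GAP))
        schedule.append(("OFF", CHAR_GAP))   # extra gap between characters
    return schedule

for state, duration in encode_word("NITG"):
    print(state, duration)
```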

4 Performance Analysis with Distance Between Transmitter and Receiver

As the distance between the transmitter and receiver increases, the molecule concentration observed near the sensor will decrease. Then, the analog output of the receiver alcohol sensor also reduces. Thus, it is very important to fix a proper distance between transmitter and receiver in order to decode the transmitted information without error.


Fig. 4 Analog output observed at different distances

Based on the observation, selecting the proper distance between the transmitter and the receiver should follow these criteria:

1. One should be able to distinguish the observed peak values of dashes and dots.
2. The analog output should not increase or stay constant when zeros (no input) are transmitted.

The observations from Fig. 4 are given as follows:

• At lower distances (in this case d = 50 cm), the peak values will be higher for both dots and dashes as compared to the higher distances.
• At lower distances, there is uncertainty in the output when zeros are being transmitted, so it is not desired. In this respect, the higher-distance (d = 150 cm) output would be desired.
• In order to meet the above two criteria, an intermediate distance is chosen.
• A distance of 100 cm is chosen because it satisfies both of the above conditions.
• One more reason to choose 100 cm is that the difference between the peak values of the dot "." and the dash "_" is higher at this distance, as can be observed from Fig. 4.

5 Decoding

Decoding the information from the received analog output of the receiver sensor is an important task in a molecular communication system. The appropriate analog output can be observed from the sensor (MQ3) for the varying concentrations due to the different pulse widths given at the transmitter end. For dots, the analog output is smaller, and for the dash, the analog output is higher. When a low input was given at


Fig. 5 Output after removing unwanted peaks

the transmitter end, the output of the sensor reduces and tries to get back to its original state. While decoding, we first find all the peaks present in this output. A few of them are not related to the transmitted information: during this process, small peaks are observed within the waveform due to the non-ideal characteristics of the sensor. These peaks are removed by selecting only peaks having a minimum peak width of 3. The output after this step is shown in Fig. 5. Now, we have to find out which outputs correspond to dots and which to dashes. Usually, the higher outputs will be dashes, but dots which come after successive dashes will have higher outputs than normal dots. So we compare each output with the previous one in order to decide whether it is a dash or a dot. The conditions used for the comparison are as follows:

1. If the difference between the previous output and the present one is less than −29, then the previous output is a dash (_) and the present one is a dot (.).
2. If the difference between the previous output and the present one is greater than 29, then the previous output is a dot (.) and the present one is a dash (_).
3. If the difference between the previous output and the present one lies between −29 and 29, then the present output is of the same type as the previous one.

Based on these conditions, the decoded output is shown in Fig. 6. In this way, the Morse code encoded at the transmitter is decoded at the receiver. Here, amplitude 2 denotes dashes and amplitude 1 denotes dots. By converting these Morse codes back into characters, we can get back the word which was transmitted, i.e., "NITG".
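A minimal sketch of this decision rule is given below. The peak values are illustrative placeholders rather than recorded sensor data, the first-peak handling is a simplifying assumption, and the threshold of 29 is taken directly from the conditions above.

```python
import statistics

THRESHOLD = 29

def classify_peaks(peaks):
    """Classify each detected peak value as '.' (dot) or '_' (dash) using the rule above."""
    symbols = []
    median = statistics.median(peaks)
    for i, value in enumerate(peaks):
        if i == 0:
            # No previous peak: fall back to comparing against the overall median level.
            symbols.append("_" if value > median else ".")
            continue
        diff = value - peaks[i - 1]          # present output minus previous output
        if diff > THRESHOLD:                 # clearly higher  -> dash
            symbols.append("_")
        elif diff < -THRESHOLD:              # clearly lower   -> dot
            symbols.append(".")
        else:                                # comparable level -> same symbol as before
            symbols.append(symbols[-1])
    return symbols

# Placeholder peak readings (illustrative, not measured) for the symbol stream of "NITG"
example_peaks = [745, 700, 702, 701, 744, 746, 748, 703]
print("".join(classify_peaks(example_peaks)))   # expected symbol stream: "_...___."
```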


Fig. 6 Decoded output, where dashes = 2 and dots = 1

6 Conclusion

In this paper, a working model of a molecular communication system is presented. The concentration-based modulation scheme gives better results when compared to the traditional way of sending data using an ASCII code scheme. The characters are encoded with dashes and dots by using Morse code. Major problems in diffusion-based molecular communication are intersymbol interference (ISI) and low data rates. Exploring different modulation schemes and encoding techniques will help in achieving error-free communication and an increase in data rate.

References

1. Pierobon M, Akyildiz IF (2010) A physical end-to-end model for molecular communication in nanonetworks. IEEE J Sel Areas Commun 28(4)
2. Kuran MS, Yilmaz HB, Tugcu T, Akyildiz IF (2012) Interference effects on modulation techniques in diffusion based nanonetworks. Nano Commun Netw 3(1):65–73
3. Nakano T, Suda T, Moore M, Egashira R, Enomoto A, Arima K (2005) Molecular communication for nanomachines using intercellular calcium signaling. In: 5th IEEE conference on nanotechnology. IEEE, pp 478–481
4. Kuran M, Tugcu T, Edis B (2012) Calcium signaling: overview and research directions of a molecular communication paradigm. IEEE Wirel Commun 19(5)
5. Guo W, Asyhari T, Farsad N, Yilmaz HB, Li B, Eckford A, Chae C-B (2016) Molecular communications: channel model and physical layer techniques. IEEE Wirel Commun 23(4):120–127
6. Yilmaz HB, Chae C-B (2014) Simulation study of molecular communication systems with an absorbing receiver: modulation and ISI mitigation techniques. Simul Model Pract Theory 49:136–150
7. Liu Q, Yang K, He P (2013) Channel capacity analysis for molecular communication with continuous molecule emission. In: 2013 International conference on wireless communications and signal processing (WCSP). IEEE, pp 1–6
8. Hanwei Electronics (2010) Technical data MQ-3 gas sensor, v2.2, pp 1–6. Available: http://rogerbit.com/wprb/wp-content/uploads/2017/01/mq3-1.pdf. Accessed Aug 2018

Optimal Pilot Contamination Mitigation-Based Channel Estimation for Massive MIMO System Using Hybrid Machine Learning Technique Lipsa Dash and Anand Sreekantan Thampy

Abstract Massive multiple-input multiple-output (MIMO) plays an important role in future generation communication, improving both energy and spectral efficiency. Non-orthogonal pilot sequences cause pilot contamination, affecting the performance of the system. Moreover, a larger number of base station antennas increases the channel matrix dimension, which affects the pilot overhead and the channel state information (CSI). To solve these problems, we propose an optimal channel estimation technique for multi-cell massive MIMO systems using a hybrid machine learning technique (OCE-HML). At first, we introduce a discrete bacterial optimization-based clustering (DBO) algorithm used to classify users into cell-edge and cell-centre groups depending on their receiver-side SINR, network throughput, and a distance threshold. Secondly, we illustrate a capsule learning-based convolutional neural network (CCNN) to select the best detection method for channel estimation based on the cell-edge users' SINRs. Cell-edge users with low SINR use a Lanczos algorithm-based signal detection method with different pilots to avoid interference, and users with high SINR use the minimum mean square error (MMSE) detection method to improve pilot efficiency by reusing the same pilots. The simulation results show the proposed detectors perform much more effectively compared to existing state-of-the-art detection techniques in terms of different performance metrics. Keywords Massive MIMO · Channel estimation · Hybrid machine learning · Convolutional neural network · Lanczos detector · MMSE detector

1 Introduction

Channel estimation is one of the major issues in a mobile communication system which directly affects the speed and accuracy of the channel rating. In general, a


channel can be evaluated with an introductory or pilot carrier known to the transmitter and receiver, using a variety of interpolation techniques to evaluate the response of sub-channels between pilot tones [1, 2]. Generally, a data signal, a training signal, or both can be used to evaluate a channel. MIMO is a new futuristic network architecture which has the potential to increase spectral and energy efficiency to meet the growing demand for wireless services [3, 4]. Given the limited bandwidth for wireless communication, the main drawback of training-based channel evaluation methods is the high cost of sending training data. The spatial and temporal sparsity of large MIMO channels motivates the use of compressed sensing (CS) theory and of feedback channels with significantly less overhead [5, 6]. An accurate channel estimation system is required to ensure optimal system performance [7, 8]. For further enhancement, an optimal pilot contamination mitigation-based channel estimation is proposed for the massive MIMO system using a hybrid machine learning technique (OCE-HML). Firstly, a discrete bacterial optimization-based clustering (DBO) algorithm is used to classify users into cell-centre and cell-edge groups. Further, a capsule learning-based convolutional neural network (CCNN) is used to select the best detection method for channel estimation based on the cell-edge users' SINRs. Finally, we combine the Lanczos and MMSE detectors (i.e. a hybrid detector) for channel estimation at low and high SINR, respectively, to avoid interference and enhance pilot efficiency.

2 System Model

We consider a massive MIMO cellular system with L hexagonal cells where each cell comprises one base station (BS) with M antennas and k users. A multi-cell massive MIMO system with T transmit antennas and R receiver antennas is denoted as follows:

\[ y = Hx + N \tag{1} \]

where x denotes the complex transmitted symbol vector, i.e. x = (x₁, x₂, …, x_T)ᵀ, where each element is drawn independently from a complex M-QAM constellation. y = (y₁, y₂, …, y_R)ᵀ represents the complex received symbol vector, H is the equivalent baseband channel matrix, and N is assumed to be an AWGN vector with zero mean and known covariance matrix [9].
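As a small illustration of this system model (a sketch with assumed dimensions and SNR, not the simulation setup used in the paper), the received vector of Eq. (1) can be generated as follows.

```python
import numpy as np

rng = np.random.default_rng(0)

def qam4_symbols(n):
    """Draw n unit-energy 4-QAM (QPSK) symbols."""
    bits = rng.integers(0, 2, size=(n, 2))
    return ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

T, R, snr_db = 8, 64, 10                 # assumed antenna counts and SNR
x = qam4_symbols(T)                      # transmitted symbol vector
H = (rng.standard_normal((R, T)) + 1j * rng.standard_normal((R, T))) / np.sqrt(2)
noise_var = 10 ** (-snr_db / 10)
N = np.sqrt(noise_var / 2) * (rng.standard_normal(R) + 1j * rng.standard_normal(R))
y = H @ x + N                            # Eq. (1): y = Hx + N
print(y.shape)
```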


3 Optimal Channel Estimation Using Hybrid Machine Learning Technique (OCE-HML)

3.1 Reduce Pilot Contamination Using Discrete Bacterial Optimization-Based Clustering

Many parameters of the transmitter and receiver antennas significantly affect the performance of MIMO. Separating multiple connected users in multi-user MIMO (MU-MIMO) systems using a spatial connection (SC) will reduce losses [10, 11]. For that, we use the discrete bacterial optimization (DBO) algorithm to classify the users. The path loss used in the SINR computation is modelled by the following equation:

\[ \text{Path loss (dB)} = 127.0 + 30 \log_{10}(S) \tag{2} \]

where S represents the distance from the FBS to the FUE in kilometres. Below is the equation for calculating the obtained interference (J):

\[ J = \sum_{i=1,\, i \neq j}^{N} \sum_{a=0}^{C-1} F_i^{\,j}\, q_i^{\,s_i(b)}\, \delta_{s_j(b),\, s_i(a)} \tag{3} \]

The number of FBSs and the number of RBs used in each FBS are N and C, respectively. Selecting an FBS with a strong signal and access to low-interference RBs achieves a high SINR, so that the data rate or throughput (H) given by the Shannon–Hartley theorem is high:

\[ H = A \log_2(1 + \text{SINR}) \tag{4} \]

Chemotaxis is the movement of bacteria to find the most fertile place and to avoid harmful places. The chemotaxis motion is generated using this equation:

\[ \theta^{\,j}(T+1) = \theta^{\,j}(T) + H(j)\,\frac{\Delta(j)}{\sqrt{\Delta^{t}(j)\,\Delta(j)}} \tag{5} \]

The positions of a bacterium before and after a step are denoted by θ^j(T) and θ^j(T+1), respectively. Since the search space here is discrete, it is necessary to modify the chemotaxis motion by rounding, as indicated in the following formula:

\[ \theta^{\,j}(T+1) = \theta^{\,j}(T) + \mathrm{Round}\!\left( H(j)\,\frac{\Delta(j)}{\sqrt{\Delta^{t}(j)\,\Delta(j)}} \right) \tag{6} \]

 (6)

312

L. Dash and A. S. Thampy

T IHealth =

Mh 

I j (T )

(7)

T =1

\[ \begin{aligned} \mu_{x(UL)} &= \max\!\big(\mu_{x(UL1)}, \mu_{x(UL2)}\big), & \mu_{x(L)} &= \mu_{x(L)},\\ \mu_{x(N)} &= \max\!\big(\mu_{x(N1)}, \mu_{x(N2)}\big), & \mu_{x(C)} &= \mu_{x(C)},\\ \mu_{x(UC)} &= \max\!\big(\mu_{x(UC1)}, \mu_{x(UC2)}\big) & & \end{aligned} \tag{8} \]

The crisp output is obtained based on the centre-of-mean defuzzification given in the following equation:

\[ x^{*} = \frac{\sum_{j=1}^{M} \mu_j x_j}{\sum_{j=1}^{M} \mu_j} \tag{9} \]









\[ x^{*} = \frac{\mu_{x(UL)} \times 0 + \mu_{x(L)} \times 1 + \mu_{x(N)} \times 2 + \mu_{x(C)} \times 3 + \mu_{x(UC)} \times 4}{\mu_{x(UL)} + \mu_{x(L)} + \mu_{x(N)} + \mu_{x(C)} + \mu_{x(UC)}} \tag{10} \]

The choice of cell and RB is a difficult, discrete-domain problem. In this case, the DBO search is completed during the chemotaxis phase. Therefore, improving the reproduction events improves the search in the most promising areas.
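A toy sketch of how the clustering stage could use Eqs. (2) and (4) to label users is given below; the SINR computation, thresholds, transmit power, and user positions are illustrative assumptions for this sketch, not the DBO procedure itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def path_loss_db(distance_km):
    return 127.0 + 30.0 * np.log10(distance_km)          # Eq. (2)

def throughput(bandwidth_hz, sinr_linear):
    return bandwidth_hz * np.log2(1.0 + sinr_linear)     # Eq. (4)

# Illustrative drop of 20 users between 10 m and 500 m from the base station
distances_km = rng.uniform(0.01, 0.5, size=20)
tx_power_dbm, noise_dbm = 23.0, -96.0                    # assumed values
rx_power_dbm = tx_power_dbm - path_loss_db(distances_km)
sinr_db = rx_power_dbm - noise_dbm                       # interference ignored in this sketch

# Assumed SINR / distance thresholds used only for illustration
cell_edge = (sinr_db < 20.0) | (distances_km > 0.35)
for d, s, edge in zip(distances_km, sinr_db, cell_edge):
    label = "cell-edge" if edge else "cell-centre"
    rate_mbps = throughput(1e6, 10 ** (s / 10)) / 1e6
    print(f"d={d*1000:6.1f} m  SINR={s:5.1f} dB  rate={rate_mbps:6.2f} Mbit/s  {label}")
```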

3.2 Optimal Channel Estimation Using Capsule Learning-Based Convolutional Neural Network

The transmission speed of each user is determined based on the SINR. The SINRs of the best cell-edge users need to be detected by the CCNN algorithm on the basis of the pilot levels, the large-scale fading, and the neighbouring cells' range. The CCNN can capture the internal characteristics of a large channel data matrix and enable more accurate channel estimation. A capsule learning unit aims to determine the existence and characteristics of an object in a particular area. It combines all the important information about the state of the feature that it finds into vector format. The length of the output vector indicates the probability of finding a feature; if the detected feature moves or changes its position, this probability (the length of the vector) stays the same. The output vector of a capsule is expressed as

\[ U_i = \frac{\lVert r_i \rVert^{2}}{1 + \lVert r_i \rVert^{2}}\,\frac{r_i}{\lVert r_i \rVert} \tag{11} \]

The margin loss is used when training the capsule neural network:

\[ K_l = t_l \,\max\!\big(0,\, N^{+} - \lVert U_l \rVert\big)^{2} + \lambda (1 - t_l)\,\max\!\big(0,\, \lVert U_l \rVert - n^{-}\big)^{2} \tag{12} \]
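For illustration, a minimal NumPy sketch of the capsule squashing function (11) and the margin loss (12) is given below; m⁺ = 0.9, m⁻ = 0.1, and λ = 0.5 are values commonly used for capsule networks and are assumptions here, not values reported in this paper.

```python
import numpy as np

def squash(r, axis=-1, eps=1e-9):
    """Capsule squashing non-linearity of Eq. (11)."""
    norm2 = np.sum(r ** 2, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * r / np.sqrt(norm2 + eps)

def margin_loss(u, t, m_plus=0.9, m_minus=0.1, lam=0.5):
    """Margin loss of Eq. (12); u: capsule outputs (classes, dim), t: one-hot targets."""
    lengths = np.linalg.norm(u, axis=-1)
    present = t * np.maximum(0.0, m_plus - lengths) ** 2
    absent = lam * (1.0 - t) * np.maximum(0.0, lengths - m_minus) ** 2
    return np.sum(present + absent)

u = squash(np.random.randn(4, 8))       # 4 output capsules of dimension 8
t = np.array([0.0, 1.0, 0.0, 0.0])      # target class indicator
print(margin_loss(u, t))
```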

Although the extension to the case P > 2 is similar, presenting the channel estimation procedure for the nearest P (= 2) sub-lines l₀ and l₀ + 1 facilitates the discussion. Without loss of generality, let us assume that Z_l = Z, E_l = E and Y_l = √(Q J) for l ∈ {l₀, l₀ + 1}, where Q denotes the transmit power. The pilot signal matrix X_l then becomes, as in Eq. (13),

\[ X_l = \sqrt{Q}\, Z^{G} G_l H + M_l \tag{13} \]

In the tentative estimation, X_l is processed using the matrices H_K and H_S. The output of the estimation process is

\[ S_l = H_K X_l H_S = \sqrt{Q}\, H_K Z^{G} G_l H_S + H_K M_l H_S \tag{14} \]

where

\[ H_K = \begin{cases} Z, & N_S < M_S,\\ \big(Z Z^{G}\big)^{-1} Z, & N_S \ge M_S, \end{cases} \tag{15} \]

\[ H_S = \begin{cases} E^{G}, & N_t < M_t,\\ E^{G} \big(E E^{G}\big)^{-1}, & N_t \ge M_t. \end{cases} \tag{16} \]

The tentatively estimated matrices S_{l₀} and S_{l₀+1} are fed simultaneously into the CCNN, which produces the estimated channel matrices Ĝ_{l₀} and Ĝ_{l₀+1} through the mapping

\[ \big(\hat{G}_{l_0}, \hat{G}_{l_0+1}\big) = e_{\varphi}\big(S_{l_0}, S_{l_0+1}; \varphi\big) \tag{17} \]

where φ indicates the CCNN parameter set. The objective of the CCNN training is to reduce the MSE loss:

\[ \mathrm{MSE}_{\mathrm{LOSS}} = \frac{1}{2 M_{TS} D} \sum_{j=0}^{M_{TS}} \sum_{p=1}^{2} \mathrm{E}\!\left[ \left\lVert G^{\,j}_{l_0+p-1} - \hat{G}^{\,j}_{l_0+p-1} \right\rVert^{2} \right] \tag{18} \]


For the CCNN-based channel estimation, the computational complexity is given by Eq. (19):

\[ C_{\mathrm{CCNN}} \approx O\!\left( P M_t M_S (M_t + M_S) + M_t M_S \sum_{k=1}^{K_d} E_T^{2} M_{T-1} M_k \right) \tag{19} \]

3.3 Hybrid Detector for Multi-cell Massive MIMO

After signal detection, the data are classified based on the different pilots. To avoid interference, the Lanczos algorithm is used for the users assigned different pilots. Consider a large-scale MIMO system in which a base station with many antennas serves single-antenna users. It is common for the number of base station antennas to exceed the number of user devices, for example L = 16 users and M = 128 antennas. From the L users, the bit streams are first broadcast. The values are taken from a modulated constellation, and the data are converted into constellation symbols. Here, r denotes the L × 1 transmitted signal vector which includes the data of the L different users. The flat Rayleigh fading channel matrix is represented as G ∈ D^{M×L}; all its components are independent and identically distributed with zero mean and unit variance. At the base station, x refers to the M × 1 received signal vector:

\[ x = G r + m \tag{20} \]

where m indicates the Gaussian noise vector with m ∼ N(0, σ²). The detection problem can be written as

\[ \hat{r} = \arg\min_{r} \lVert x - G r \rVert^{2} \tag{21} \]

Since the noise vector m is unknown, it is impossible to recover the transmitted vector directly, and an estimate is required. The minimum mean square error (MMSE) estimate, which has a very small error compared to the real value, is obtained by solving Eq. (21):

\[ \hat{r} = \big(G^{G} G + \sigma^{2} J_L\big)^{-1} G^{G} x = Z^{-1} \hat{x} \tag{22} \]

\[ Z = H + \sigma^{2} J_L, \qquad \hat{x} = G^{G} x \tag{23} \]

Here, G^G indicates the conjugate transpose of G, H = G^G G denotes the Gram matrix, and J_L is the L × L identity matrix. Then, rearranging Eq. (22) gives Eq. (24):

\[ \hat{x} = Z \hat{r} \tag{24} \]
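A compact NumPy sketch of the MMSE detection step of Eqs. (20)–(24) is given below; the antenna counts and noise level are assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
M, L, sigma2 = 128, 16, 0.1            # assumed antennas, users, noise variance

G = (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))) / np.sqrt(2)
r = np.sign(rng.standard_normal(L)) + 1j * np.sign(rng.standard_normal(L))  # QPSK-like symbols
m = np.sqrt(sigma2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
x = G @ r + m                           # Eq. (20)

H = G.conj().T @ G                      # Gram matrix
Z = H + sigma2 * np.eye(L)              # Eq. (23)
x_hat = G.conj().T @ x                  # matched-filter output, Eq. (23)
r_hat = np.linalg.solve(Z, x_hat)       # MMSE estimate, Eqs. (22)/(24)
print(np.round(r_hat[:4], 2))
```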

From Eq. (23), Z is a symmetric positive definite matrix. Thus, the problem is to obtain the solution of the linear system in Eq. (24), which can be handled by the Lanczos-based method. Let us assume a symmetric positive definite matrix B ∈ D^{L×L} and a non-zero vector a ∈ D^{L×1}, and consider solving the linear equation By = a. Equation (25) is the associated quadratic function:

\[ \varphi(y) = \frac{1}{2} y^{t} B y - y^{t} a \tag{25} \]

The approximate minimiser of φ is the near-optimal solution of By = a. Then,

\[ y_l = y_0 + P_l x_l \tag{26} \]

where P_l = [p₁, p₂, …, p_l]. If l = m, x_m is the least-squares solution of the equation set

\[ P_l^{t} B P_l x_l = P_l^{t} (a - B y_0) \tag{27} \]

Then, y_m = y_0 + P_m x_m is the solution of B y_m = a. Two issues need to be addressed for the process to work effectively. First, Eq. (27) must be easy to solve. The other is that the calculation of y_l from p₁, p₂, …, p_l should require little data storage. If the p_l are chosen as Lanczos vectors, both difficulties can be overcome:

\[ B P_l = P_l t_l + s_l f_l^{t} \tag{28} \]

Equation (28) is obtained when the l-th step is completed; after l iterations, s_l indicates the residual error vector, and

\[ t_l = P_l^{t} B P_l = \begin{bmatrix} \alpha_1 & \beta_1 & & & 0\\ \beta_1 & \alpha_2 & \ddots & & \\ & \ddots & \ddots & \ddots & \\ & & \ddots & \ddots & \beta_{l-1}\\ 0 & & & \beta_{l-1} & \alpha_l \end{bmatrix} \tag{29} \]

where t_l denotes a symmetric positive definite tridiagonal matrix.




\[ K_l = \begin{bmatrix} 1 & & & 0\\ \mu_1 & 1 & & \\ & \ddots & \ddots & \\ 0 & & \mu_{l-1} & 1 \end{bmatrix}, \qquad C_l = \begin{bmatrix} c_1 & & & 0\\ & c_2 & & \\ & & \ddots & \\ 0 & & & c_l \end{bmatrix} \]

The μ_l and c_l are evaluated from the factorization t_l = K_l C_l K_l^{t}:

\[ \mu_{l-1} = \frac{\beta_{l-1}}{c_{l-1}}, \qquad c_l = \alpha_l - \beta_{l-1} \mu_{l-1} \tag{30} \]

To solve for y_l, a vector q_l ∈ S^L and a matrix D_l ∈ S^{M×L} are introduced such that

\[ D_l K_l^{t} = P_l, \qquad K_l C_l q_l = P_l^{t}(a - B y_0) \tag{31} \]

\[ y_l = y_0 + P_l \big(K_l C_l K_l^{t}\big)^{-1} P_l^{t} s_0 = y_0 + D_l q_l \tag{32} \]

Therefore, partitioning D_l and q_l and updating the values in Eq. (31), the columns of D_l are obtained recursively as

\[ d_l = p_l - \mu_{l-1} d_{l-1} \tag{33} \]

Hence,

\[ \begin{bmatrix} K_{l-1} C_{l-1} & \mathbf{0}\\ \begin{matrix} 0 & \cdots & \mu_{l-1} c_{l-1} \end{matrix} & c_l \end{bmatrix} \begin{bmatrix} \rho_1\\ \rho_2\\ \vdots\\ \rho_{l-1}\\ \rho_l \end{bmatrix} = \begin{bmatrix} p_1^{t} s_0\\ p_2^{t} s_0\\ \vdots\\ p_{l-1}^{t} s_0\\ p_l^{t} s_0 \end{bmatrix} \tag{34} \]

Finally, the iteration equation (35) is deduced:

\[ y_l = y_0 + D_{l-1} q_{l-1} + \rho_l d_l = y_{l-1} + \rho_l d_l \tag{35} \]
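The Lanczos-based detection essentially solves the symmetric positive definite system of Eq. (24) iteratively instead of inverting Z. The sketch below uses a conjugate-gradient style recursion as a stand-in for Eqs. (25)–(35); it is illustrative only and is not the authors' exact implementation.

```python
import numpy as np

def iterative_detect(Z, x_hat, iters=8):
    """Approximate r_hat = Z^{-1} x_hat for symmetric (Hermitian) positive definite Z."""
    y = np.zeros_like(x_hat)
    s = x_hat - Z @ y                               # residual s_0 = a - B y_0
    p = s.copy()                                    # first search direction
    for _ in range(iters):
        Zp = Z @ p
        rho = (s.conj() @ s) / (p.conj() @ Zp)      # step length (rho_l)
        y = y + rho * p                             # y_l = y_{l-1} + rho_l d_l, cf. Eq. (35)
        s_new = s - rho * Zp
        mu = (s_new.conj() @ s_new) / (s.conj() @ s)
        p = s_new + mu * p                          # new direction, cf. Eq. (33)
        s = s_new
    return y

# Small example with an assumed 16-user MMSE system matrix
rng = np.random.default_rng(3)
A = rng.standard_normal((64, 16)) + 1j * rng.standard_normal((64, 16))
Z = A.conj().T @ A + 0.1 * np.eye(16)
x_hat = rng.standard_normal(16) + 1j * rng.standard_normal(16)
print(np.linalg.norm(iterative_detect(Z, x_hat, iters=16) - np.linalg.solve(Z, x_hat)))
```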


4 Numerical Results In this section, we evaluate the performance of proposed OCE-HML technique with different simulation scenarios. In our simulation, we consider 19 cellular networks with 256 antennas per PS and 50 pilots per cell. Using the specified OCE-HML method, the number of users per service should not exceed 20, and all users should be divided into cell kernel types as the same test as neighbouring cell users—using useful functions. The largest multi-cell MIMO system with an M = 200 antenna BS and multiple users is considered. Number of points in the angle channel matrix is set to N = 256. The uplink channel calculated at the uplink stage is less than the angle limit. After receiving the channel reference, assume that the S columns of the channel matrix have zero coefficients at the corners. Simulation and uplink channel support are similar and randomly selected in all simulations.

4.1 NMSE Analysis of Proposed and Existing Techniques

Here, we analyse the proposed CCNN technique using the normalized mean square error (NMSE) performance metric. NMSE is defined as follows:

\[ E_{\mathrm{NMSE}} = \frac{1}{S} \sum_{k=1}^{S} \frac{\lVert H_k - \hat{H}_k \rVert^{2}}{\lVert H_k \rVert^{2}} \tag{36} \]
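A direct NumPy transcription of the NMSE metric (36) is given below; the channel matrices here are random placeholders, not the simulated channels of this paper.

```python
import numpy as np

def nmse(true_channels, estimated_channels):
    """Normalized mean square error over S channel realizations, Eq. (36)."""
    errors = [np.linalg.norm(H - H_hat) ** 2 / np.linalg.norm(H) ** 2
              for H, H_hat in zip(true_channels, estimated_channels)]
    return np.mean(errors)

rng = np.random.default_rng(4)
H_true = [rng.standard_normal((64, 16)) for _ in range(10)]
H_est = [H + 0.05 * rng.standard_normal(H.shape) for H in H_true]
print(nmse(H_true, H_est))
```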

Figure 1 shows that the NMSE performance of the proposed technique is 12.5% and 17.97% better at low pilot length and 21.5% and 23.41% better at high pilot length than the existing ZF + MRT technique and the M-JOMP technique, respectively. In Fig. 2, the NMSE performance of the proposed technique is 4.98% and 12.79% better

Fig. 1 Comparative analysis of NMSE with pilot length


Fig. 2 Comparative analysis of NMSE with SNR

Fig. 3 Comparative analysis of NMSE with number of users

at low SNR and 19.23% and 20.17% better at high SNR than the existing ZF + MRT technique and the M-JOMP technique, respectively. Figure 3 shows that the NMSE performance of the proposed technique is 14.08% and 19.17% better with a smaller number of users and 23.78% and 31.17% better with a larger number of users than the existing ZF + MRT and M-JOMP techniques, respectively.

4.2 Average Data Rate Analysis of Proposed and Existing Techniques

Figure 4a, b shows that, when the pilot sequence is 1, the average data rate performance is 8.2% and 9.3% better with the Lanczos detector and 6.23% and 8.63% better with the MMSE detector than with the existing ZF and MRT detectors, respectively. Similarly, when the pilot sequence is 3, the average data rate performance is 7.2% and 10.34% better with the Lanczos


Fig. 4 Average data rate with number of BS antennas a pilot sequence = 1 and b pilot sequence =2

detector and 9.3% and 11.63% better with the MMSE detector than with the existing ZF and MRT detectors, respectively. Figure 5 shows that, when the pilot sequence is 8, the average data rate performance is 18.92% and 19.63% better with the Lanczos detector and 16.34% and 18.39% better with the MMSE detector than with the existing ZF and MRT detectors, respectively.

Fig. 5 Average data rate with number of pilot sequences


Fig. 6 Average data rate with SNR for pilot sequence =3

5 Conclusion

We have proposed an optimal channel estimation technique for the multi-cell massive MIMO system using a hybrid machine learning technique (OCE-HML). The simulation results showed that the proposed detectors perform better than the existing state-of-the-art detection techniques in terms of different performance metrics such as NMSE and average data rate.

References

1. Gao Z, Dai L, Dai W, Wang Z (2015) Block compressive channel estimation and feedback for FDD massive MIMO. In: 2015 IEEE conference on computer communications workshops (INFOCOM WKSHPS). IEEE, pp 49–50
2. Qian C, Fu X, Sidiropoulos ND (2019) Algebraic channel estimation algorithms for FDD massive MIMO systems. IEEE J Sel Top Signal Process 13(5):961–973
3. Wang X, Wan L, Huang M, Shen C, Zhang K (2019) Polarization channel estimation for circular and non-circular signals in massive MIMO systems. IEEE J Sel Top Signal Process 13(5):1001–1016
4. Rao S, Mezghani A, Swindlehurst AL (2019) Channel estimation in one-bit massive MIMO systems: angular versus unstructured models. IEEE J Sel Top Signal Process 13(5):1017–1031
5. Sure P, Bhuma CM (2015) A pilot aided channel estimator using DFT based time interpolator for massive MIMO-OFDM systems. AEU-Int J Electron Commun 69(1):321–327
6. Dong P, Zhang H, Li GY, Gaspar IS, NaderiAlizadeh N (2019) Deep CNN-based channel estimation for mmWave massive MIMO systems. IEEE J Sel Top Signal Process 13(5):989–1000
7. Gu Y, Zhang YD (2019) Information-theoretic pilot design for downlink channel estimation in FDD massive MIMO systems. IEEE Trans Signal Process 67(9):2334–2346
8. Han Y, Jin S, Wen CK, Ma X (2020) Channel estimation for extremely large-scale massive MIMO systems. IEEE Wirel Commun Lett 9(5):633–637
9. Zaib A, Masood M, Ali A, Xu W, Al-Naffouri TY (2016) Distributed channel estimation and pilot contamination analysis for massive MIMO-OFDM systems. IEEE Trans Commun 64(11):4607–4621
10. Alshammari A, Albdran S, Ahad MAR, Matin M (2016) Impact of angular spread on massive MIMO channel estimation. In: 2016 19th International conference on computer and information technology (ICCIT). IEEE, pp 84–87
11. Mangqing G, Gang X, Jinchun G, Yuan'an L (2015) Enhanced EVD based channel estimation and pilot decontamination for massive MIMO networks. J China Univ Posts Telecommun 22(6):72–77

Concentration Measurement of Urea in Blood using Photonic Crystal Fiber Bhukya Arun Kumar, Sanjay Kumar Sahu, and Gopinath Palai

Abstract We propose a photonic crystal fiber with circular air pores of 400 nm in dimension to evaluate the urea concentration in human blood. When the structure is excited with a signal of 590 nm, the refractive index changes with the blood sample, which in turn exhibits a varying electric field at the output. This variation decides the amount of urea content in the sample. The plane wave expansion (PWE) method is explicitly adopted for finding the required scattering field in the photonic crystal structure. It is interesting to note that the transmitted light varies linearly as a function of the concentration of urea filled within the air gaps. Keywords PCF · Plane wave expansion · Transmittance

1 Introduction

In the recent era, a good amount of research has progressed on photonic crystal fiber (PCF) technology because of its excellent optical properties. This is one of the reasons why the selectivity of the electromagnetic field which can spread over such structures increases day by day, fetching many exciting features. The principle of operation of the PCF is based on the photonic band gap. The permittivity keeps changing inside the structure, which may invite a photonic bandgap in an optical fiber. The periodic fluctuation of this parameter as a function of position determines the extent of the photonic bandgap in an optical fiber, which assists in evaluating many relevant consequences. The great number of publications demonstrating the use of optical fiber for analyte sensing has likely concealed the most important goal: obtaining practical, simple, easy-to-handle, and cost-effective biosensors [1]. Because of its optical strengths, photonic crystal fiber (PCF) technology has received a lot of consideration in the last decade. When there is a change in the refractive index of the


medium, light will sense the changes, the propagation constant is going to change and can measure those changes. The resonant wavelength depends on the refractive index. The ability to distinguish the electromagnetic field that may propagate across such structures are a fascinating feature to observe [2]; in other words, the ability to generate certain frequencies is restricted. The operating principle of photonic crystal fiber is photonic band gap. The photonic band gap of the fiber is determined by the periodic variation in permittivity as a work of sensing. As the light propagating in the medium, it senses the changes in the refractive index of the medium. Photonic band gap also depends on another parameter such as defect accessible within the PCF. The distribution of electric field inside the fiber is very much controlled by the photonic band gap, and this is more pronounce in case of PCF with defects. As per application point is concerned, these are used practically in all fields related to sensor, delivering high power as well as in the field of nonlinear optics [3–5]. To measure the concentration of salt, sugar, alcohol, and CyGel, newly, optical sensor has been developed using 2D photonic crystal structure [6, 7]. In this article, we have taken an initiation to design a photonic crystal fiber which measures the concentration of urea especially in urine or blood. The 2D photonic crystal biosensor developed for measurement of urea in urine is exploited by refractive index method. With variation in the concentration of urea in urine, the output signal strength and the resonant frequency get changed and make a small swing. Two different biosensors with different structures are designed with high quality factor [8]. In recent years, sensing applications using photonic crystal had developed an interesting area of research. Improved precision in detecting many parameters as per as sensor is concern got to be an exciting area in research. Photonic crystal features are considered to be unusual than optical fiber sensors due to its geometric. By proposing the peculiarity in the geometry, PCF can have potential to improve the sensitivity as per the desired application [9]. A FDTD and plane wave expansion method are adopted to characterized sensors’ performances. We obtained advanced sensitivity and excellence figures in recent study with moderate light dependent photonic crystal arrangements with bending cell, as previously discussed. In one of the works, a novel compact biosensor is exemplified where the quality factor increased from 570 to 723. Similarly, the sensitivity factor expanded from 550 to 650 nm/RIU in reference [10]. There are some other techniques like surface plasmon resonance, localized surface plasmon resonance (LSPR), and many others are used to detect urea content in a sample. Fiber optic bio-detecting test is simple to design and cost effective and quite competent of remote detection. Gas molecules that they absorb certain frequency, molecules vibrate at certain frequency, when light of identical frequency, these molecules will absorb, then the amount of light at detector decreases depending on absorption. Moreover, it works well within the pathophysiological process to find urea in human blood and subsequently can be useful for measuring the urea concentration in therapeutic diagnostics as well [11]. In this case, the linear variation of electric field with concentration is the basis for concentration measurement. 
By using the plane wave expansion (PWE) method, the electric field distribution is estimated. The graphic diagram of the PCF is shown in Fig. 1.


Fig. 1 Schematic diagram for photonic crystal fiber

Figure 1 shows a square cross-section PCF structure having a defect at the middle, with silicon taken as the background material. The air holes in this fiber form the required lattice and hold the urea samples of different concentrations. The lattice constant and the diameter of the air holes are taken as 1 µm and 0.4 µm, respectively.

2 Mathematical Approach The Helmholtz equation is used to calculate the electric field distribution in the photonic crystal fiber, and is given by

$$\frac{1}{\varepsilon(r)} \nabla \times \{\nabla \times E(r)\} = \frac{\omega^{2}}{c^{2}} E(r)$$

$$E(r) = A_{j} e^{i n_{j} k x_{j}} + B_{j} e^{-i n_{j} k x_{j}}$$

where A and B are the amplitudes of the forward and backward waves. By solving E_j(x), which is a second-order differential equation, with suitable initial and final boundary conditions, we can find the wavelength of maximum reflectance. The solution of the equation is

$$E(r) = E_{k,r}(r) \cdot e^{ikr}$$

where E_{k,r} is a periodic function with the lattice periodicity. Figure 2 shows the experimental setup used to evaluate the transmittance for an input signal of 590 nm. In this diagram, along with the light source, a photodetector and a power meter are used.


Fig. 2 Experiment setup for determining urea concentration

3 Result We have designed a photonic structure that contains 25 holes, including a defected one. After completing the design, we performed the simulation to obtain the voltage and transmittance at 590 nm; sodium light is used as the input source to the PCF. Interestingly, the defect plays the key role in determining the transmittance and the corresponding voltage used to find the urea concentration in the blood sample. The simulation result is tabulated (Table 1) and used to plot the graph shown in Fig. 3, in which potential is taken along the abscissa and concentration along the ordinate. It is observed from the graph that the concentration varies linearly as a function of potential. The output obtained in terms of potential is derived from the transmittance result taken in volt/metre [electric field]. Table 1 Simulation result of concentration with refractive index

| Refractive index | V | Transmittance | Concentration (gm/ml) |
| 1.333  | 0.7076 | 35.23 | 0.1 |
| 1.3345 | 0.7075 | 35.22 | 0.2 |
| 1.3355 | 0.7073 | 35.22 | 0.3 |
| 1.3363 | 0.7072 | 35.21 | 0.4 |
| 1.3371 | 0.707  | 35.2  | 0.5 |
| 1.338  | 0.7068 | 35.19 | 0.6 |
| 1.3387 | 0.7067 | 35.18 | 0.7 |
| 1.3397 | 0.7065 | 35.18 | 0.8 |
| 1.3405 | 0.7064 | 35.17 | 0.9 |
| 1.3413 | 0.7062 | 35.16 | 1.0 |
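The linear trend in Table 1 is what makes the structure usable as a calibration curve. The following minimal Python sketch (not part of the original work) fits a straight line to the potential and concentration values listed above and uses it to map a measured potential back to a urea concentration:

```python
import numpy as np

# Values taken from Table 1: output potential V (volt) and urea
# concentration (gm/ml) for each simulated refractive index.
potential = np.array([0.7076, 0.7075, 0.7073, 0.7072, 0.7070,
                      0.7068, 0.7067, 0.7065, 0.7064, 0.7062])
concentration = np.array([0.1, 0.2, 0.3, 0.4, 0.5,
                          0.6, 0.7, 0.8, 0.9, 1.0])

# Least-squares straight line: concentration = slope * potential + intercept.
slope, intercept = np.polyfit(potential, concentration, 1)

def estimate_concentration(v):
    """Estimate urea concentration (gm/ml) from a measured potential (V)."""
    return slope * v + intercept

print(f"slope = {slope:.1f} gm/ml per volt, intercept = {intercept:.1f} gm/ml")
print(f"V = 0.7070 V -> {estimate_concentration(0.7070):.2f} gm/ml")
```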


Fig. 3 Potential (V ) versus concentration (gm/ml)

4 Conclusion This paper presents a photonic crystal fiber structure with circular air pores of 400 nm in diameter. The purpose of this work is to evaluate the urea concentration in human blood. The concentration of urea is obtained through the refractive index, followed by the electric field strength resulting from the application of a sodium light of about 590 nm. The transmitted electric field plays the key role in measuring the urea content in human blood. The plane wave expansion (PWE) method is utilized to simulate the field distribution in the photonic crystal structure. From the results, a linear relationship is obtained between the transmitted light and the concentration of urea present in the blood filling the air holes. In conclusion, the structure along with the results may suffice for applicability in the biomedical field as a sensor.

References 1. Socorro-Leránoz AB, Santano D, Del Villar I, Matias IR (2019) Trends in the design of wavelength-based optical fiber biosensors (2008–2018). In: Biosensors and bioelectronics: X, vol 1, p 100015. ISSN 2590-1370 2. Knight JC, Broeng J, Birks TA, Russell PSJ (1998) Photonic band gap guidance in optical fibers. Science 282(5393):1476–1478. https://doi.org/10.1126/science.282.5393.1476 3. Knight J, Birks T, Mangan B, Russell PSJ (2002) Photonic crystal fibers: new solutions in fiber optics. Optics Photon News 13(3):26–30


4. Fini J, Bise R (2004) Progress in fabrication and modeling of micro structured optical fibers. Jpn J Appl Phys 43:5717–5730. https://doi.org/10.1143/JJAP.43.5717 5. Birks TA, Knight JC, Russell PSJ (1997) Endlessly single-mode photonic crystal fiber. Optics Lett 22(13):961–963 6. Palai G, Tripathy SK (2012) A novel method for measurement of concentration using twodimensional photonic crystal structures. Optics Commun 285(10):2765–2768 7. Palai G, Tripathy SK, Muduli N, Patnaik D, Patnaik SK (2012) A novel method to measure the strength of cygel by using two-dimensional photonic crystal structures. AIP Conf Proc 1461:383–386. https://doi.org/10.1063/1.4736926 8. Gharsallah Z, Najjar M, Suthar B, Janyani V (2018) High sensitivity and ultra-compact optical biosensor for detection of UREA concentration. Opt Quan Electron 50:1–10 9. Ramamoorthy H, Revathi S (2020) Photonic crystal fiber for sensing application. Int J Eng Adv Technol 9. https://doi.org/10.35940/ijeat.E9613.069520 10. Gharsallah Z, Najjar M, Suthar B, Janyani V (2019) Slow light enhanced bio sensing properties of silicon sensors. Opt Quan Electron 11. Gupta SV, Tejavath K, Verma RK (2020) Urea detection using bio-synthesized gold nanoparticles: an SPR/LSPR based sensing approach realized on optical fiber. Opt Quan Electron 52:1–14 12. Sukhoivanov IA, Guryev IV (2009) Photonic crystals: physics and practical Modelling. Springer, Berlin, Heidelberg, p 242p

Automated Detection of Myocardial Infarction with Multi-lead ECG Signals using Mixture of Features Santanu Sahoo, Gyana Ranjan Patra, Monalisa Mohanty, and Sunita Samanta

Abstract Myocardial infarction (MI) is an alarming sign of heart attack in which the heart muscle gets damaged, which may lead to death. Early diagnosis and detection of the symptoms of myocardial infarction are extremely necessary to reduce the probability of death of the patient. The main objective of this work is to develop a classification framework for electrocardiogram (ECG) signals using morphological, time domain, and empirical mode decomposition (EMD) features to classify between MI and healthy control (HC). The PTBDB database was used to test the whole experiment. A logistic model trees (LMT) classifier is proposed to classify the MI and HC samples. As per the experimental study for detection of MI and HC, the classifier achieves an accuracy of 98.75% over the fusion of features. Keywords Myocardial infarction (MI) · Empirical mode decomposition (EMD) · Electrocardiogram (ECG) · Logistic model tree (LMT)

1 Introduction As per studies conducted by the World Health Organization (WHO), cardiovascular disease is the foremost cause of almost 17.9 million deaths per year [1] and can be termed as one of the leading causes of death. Sudden cardiac arrest accounts for more than half of all coronary heart disease deaths, with ventricular tachyarrhythmias accounting for around 80% of these. As a result, ventricular tachyarrhythmias cause around seven million sudden cardiac deaths each year. Identifying people at the greatest risk of CVDs and ensuring that they receive adequate treatment can help S. Sahoo (B) · G. R. Patra · M. Mohanty · S. Samanta Department of ECE, Siksha O Anusandhan, Bhubaneswar, India e-mail: [email protected] G. R. Patra e-mail: [email protected] M. Mohanty e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_35


to reduce early deaths [2]. Electrocardiogram (ECG or EKG) is a bioelectric signal representing the periodic contraction and relaxation of the human heart and can be termed a non-invasive technique for diagnosing cardiovascular anomalies [3]. It detects abnormalities and other related cardiovascular problems. Myocardial infarction (MI) is a heart anomaly in which the heart cells get damaged due to the reduced blood supply to them. The different attributes of the ECG signal change in the form of the Q wave, ST deviation, and T wave inversion. All these result in variation of the heart rate, which can be detected by cardiologists using the ECG waveforms [4]. Proper analysis of ECG beats with informative features can significantly correlate the pathological information for automatic detection and classification of ECG patterns. Several research efforts have been made so far to detect myocardial infarction. Using QRS measurements and neural networks to classify MI, the authors reported a classification accuracy of 79% and a specificity of 97%, respectively [5]. Diker et al. proposed a technique that combines morphological, time domain, and DWT features for the classification of MI samples using an SVM classifier; the authors claim an accuracy, sensitivity, and specificity of 87.8%, 86.97%, and 88.67%, respectively [6]. A detection accuracy of 95.30% was claimed using mode-n singular values (MSVs) and normalized multiscale wavelet energy (NMWE) features for the detection of MI [7]. PCA-based feature reduction techniques are used for the detection of MI using an SVM classifier, which gives a classification accuracy of 96.96% [8]. In another study, Baloglu et al. propose a CNN for diagnosis of MI which gives an accuracy and sensitivity of 99% on all-lead ECG signals [9]. In this work, we analyze ECG signals using empirical mode decomposition (EMD) techniques. The data are collected from the PhysioNet PTBDB database and are enhanced in the pre-processing stage. The EMD method is adopted to get accurate R-peak detection. Different features such as the RR interval, time-frequency, and other morphological features are extracted from the recorded ECG signal and are applied to the logistic model trees (LMT) classifier to classify myocardial infarction (MI) and healthy control (HC). The detailed flowgraph of the undertaken study is presented in Fig. 1.

2 Materials and Method 2.1 Dataset In this study, Physikalisch-Technische Bundesanstalt Diagnostic Database (PTBDB) [10] is used which comprises of 549 records taken from 290 subjects (each subject has 1–5 records). Each record is diagnosed and verified by cardiologist. The database signals are sampled at 1 kHz with 16-bit resolution over a range of ±16.384 mV with 2000 A/D units per mV. Some subjects (124, 132, 134, and 161) are missing in the database. There are 12 leads of each signal (I, II, III, aVR, aVL, aVF, V1, V2, V3, V4,


Fig. 1 Block diagram of the proposed model

V5, V6) with three frank leads (VX , VY , VZ ). The database contains different healthy and disease category subjects such as myocardial infarction, cardiomyopathy/heart failure, bundle branch block, dysrhythmia, myocardial hypertrophy, valvular heart disease, myocarditis. Also, 148 MI recordings have been labelled as abnormal and 52 ‘healthy control’ recordings were labelled as normal in the PTB database.

2.2 Pre-processing and Detection of R-peaks In this work, we have taken 12 lead ECG signals. A 1 Hz cut-off high-pass filter is used to reduce the low-frequency noise, and a 30 Hz cut-off Butterworth low-pass filter is used for the high frequency noise. The denoised lead I and lead II are added


to form a single ECG signal. The composite lead signal is then applied to EMD, where the signal gets decomposed into a series of intrinsic mode function (IMF) signals ordered from high to low frequency. The IMF signals occupy narrow frequency bands, resulting in a time-frequency spectrum known as the Hilbert-Huang (HH) spectrum. The number of extrema and zero crossings present in the data is required to be either equal or differ at most by one, and at the same time the mean of the upper and lower envelopes of the signal needs to be zero. For the detection of R-peaks, we follow our previously published work [11].
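The pre-processing chain described above (1 Hz high-pass, 30 Hz low-pass, lead I + lead II summation) can be illustrated with the following minimal Python sketch; the cut-off values come from the text, while the filter order and the use of zero-phase filtering are assumptions made here for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # PTBDB signals are sampled at 1 kHz

def denoise(lead, fs=FS, order=4):
    """Remove baseline wander (<1 Hz) and high-frequency noise (>30 Hz)."""
    b_hp, a_hp = butter(order, 1.0, btype="highpass", fs=fs)
    b_lp, a_lp = butter(order, 30.0, btype="lowpass", fs=fs)
    x = filtfilt(b_hp, a_hp, lead)   # zero-phase high-pass
    return filtfilt(b_lp, a_lp, x)   # zero-phase low-pass

def composite_lead(lead_i, lead_ii):
    """Denoise lead I and lead II and add them to form a single signal."""
    return denoise(np.asarray(lead_i)) + denoise(np.asarray(lead_ii))
```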

2.3 Feature Extraction For accurate classification, the crucial step in successfully characterizing the signal is the feature extraction process. The features can be extracted from the signal either in the time domain or in the frequency domain and contain maximum information about the signal. In this study, we have collected 30 healthy control (HC) and 30 myocardial infarction (MI) records, each of 1 min duration, from the PTBDB database. We have added the lead I and lead II signals for our work. Each signal is then subdivided into 30 frames of 2 s each, so a total of 900 frames are formed for each group. After pre-processing, the signals are passed through the empirical mode decomposition process to get accurate R-peak detection. Here, we decompose the signal up to 10 levels. We consider the first 5 intrinsic mode functions (IMFs), from which the mean, standard deviation, skewness, and kurtosis of each IMF of each framed signal are calculated separately, which forms a total of 25 statistical features. A set of 3 other heartbeat features, namely the pre-RR interval, post-RR interval, and RR-average, is also calculated from each signal. The signal (1 min duration) can be represented by

$$x(t) = [x_1(t)\; x_2(t)\; x_3(t)\; x_4(t) \ldots x_{30}(t)]_{30\times 30} \tag{1}$$

which is then applied to EMD up to the 10th mode of decomposition, which can be expressed as

$$x(t) = \sum_{i=1}^{L} c_L(t) + \mathrm{res}_L(t) \tag{2}$$

where $L$ is the number of IMFs, $c_L(t)$ is the $L$th IMF, and $\mathrm{res}_L(t)$ is the residue. The analytical representation of $c_L(t)$ is

$$w(t) = c_L(t) + H\{c_L(t)\} \tag{3}$$


where $H\{c_L(t)\}$ is the Hilbert transform of $c_L(t)$, the $L$th IMF extracted from each framed signal. The following temporal features are extracted from each IMF:

$$\mu(t) = \frac{1}{S}\sum_{i=1}^{S} w_i \tag{4}$$

$$\sigma(t) = \frac{1}{S}\sum_{i=1}^{S} \left(w_i - \mu(t)\right)^2 \tag{5}$$

$$S_q(t) = \frac{1}{S}\sum_{i=1}^{S} \left(\frac{w_i - \mu(t)}{\sigma(t)}\right)^3 \tag{6}$$

$$K(t) = \frac{1}{S}\sum_{i=1}^{S} \left(\frac{w_i - \mu(t)}{\sigma(t)}\right)^4 \tag{7}$$

where $S$ is the number of samples in the IMF, $\mu(t)$ is the mean, $\sigma(t)$ is the standard deviation, and $S_q(t)$ and $K(t)$ are the skewness and kurtosis of the respective IMFs. The spectral analysis of the signal can also be done using the EMD; these spectral features can give information related to some physiological changes in the ECG signal. The frequency features [12] include the power spectral density, which describes the variation of power over frequency, and the spectral centroid, which is a measure characterizing the spectrum:

$$C_s = \frac{\sum_k f(k) P(k)}{\sum_k P(k)} \tag{8}$$

where $f(k)$ is the $k$th frequency and $P(k)$ is the spectral value. A set of twenty-eight RR interval, temporal, and spectral features is combined and fed as the input of the LMT classifier to differentiate the MI and HC classes.
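As an illustration of Eqs. (4)-(8), the sketch below computes the four statistical moments of an IMF and the spectral centroid of a frame; it is a minimal re-implementation for clarity, not the authors' code, and the periodogram-based PSD is an assumption:

```python
import numpy as np
from scipy.stats import skew, kurtosis
from scipy.signal import periodogram

def imf_moments(imf):
    """Mean, standard deviation, skewness and (non-excess) kurtosis of one IMF (Eqs. 4-7)."""
    return np.mean(imf), np.std(imf), skew(imf), kurtosis(imf, fisher=False)

def spectral_centroid(frame, fs=1000):
    """Spectral centroid of a 2 s ECG frame (Eq. 8), using a periodogram PSD."""
    f, p = periodogram(frame, fs=fs)
    return np.sum(f * p) / np.sum(p)

def frame_features(imfs, frame, fs=1000):
    """Concatenate the per-IMF moments (first 5 IMFs) with the spectral centroid."""
    feats = [m for imf in imfs[:5] for m in imf_moments(imf)]
    feats.append(spectral_centroid(frame, fs))
    return np.array(feats)
```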

2.4 Classification Logistic model tree (LMT) classifier is a combined function of two of the most popular machine learning techniques, namely decision tree learning and logistic regression (LR) models [13]. At the leaves of the tree, the logistic regression function is present and contains two child nodes which are branched left or right depending on the underlying threshold function. Logistic regression models have the advantages


of being less prone to overfitting the data. LMT has improved detection of linear relationships and is able to combine them to generate an equation which accepts the independent variables and arrives at a particular outcome for the dependent variable.
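The LMT classifier used in this work is available in the Weka toolkit; as a rough stand-in, the following Python sketch shows the tenfold cross-validation protocol applied later in Sect. 3 with an ordinary logistic regression model (the classifier choice here is only a placeholder, not the LMT implementation used by the authors):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, roc_auc_score

def tenfold_evaluation(X, y, seed=0):
    """Stratified tenfold cross-validation returning per-fold accuracy and ROC area."""
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        prob = clf.predict_proba(X[test_idx])[:, 1]
        scores.append((accuracy_score(y[test_idx], pred),
                       roc_auc_score(y[test_idx], prob)))
    return np.array(scores)  # one (accuracy, AUC) row per fold
```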

3 Results The experimental results are evaluated using the described method. The PTBDB database signals are pre-processed as explained in Sect. 2.2; the detailed pre-processing of the MI and HC signals is shown in Fig. 2. For assessing classification performance, the receiver operating characteristic (ROC) curve is an important measure: the larger the area under the ROC plot, the better the classifier performance, as presented in Fig. 3. The reliability of the classifier is tested by means of a tenfold cross-validation procedure. Table 1 summarizes the performance of the LMT classifier representing

Fig. 2 Pre-processed HC signal a noisy HC signal and its time frequency domain representation, b HC signal and its time frequency representation after noise removal, c noisy MI signal and its time frequency domain representation, d MI signal and its time frequency representation after noise removal, e HC R-peak detected signal, f MI R-peak detected signal



Fig. 3 ROC plot of the classifier

the classification accuracy (CA), mean absolute error (MAE), root mean square error (RMSE), F-measure, MCC, and ROC area. The performance table indicates that at fold 4 the classifier achieves its best accuracy of 98.75% for the classification of MI elements from the signal.


Table 1 Experimental result of LMT classifier

| Fold | Accuracy (%) | MAE | RMSE | F-Measure | MCC | ROC area |
| 2  | 98.51   | 0.0173 | 0.1091 | 0.985 | 0.970 | 0.997 |
| 3  | 98.51   | 0.0164 | 0.1118 | 0.985 | 0.970 | 0.997 |
| 4  | 98.7532 | 0.0141 | 0.1045 | 0.988 | 0.975 | 0.997 |
| 5  | 98.6715 | 0.0135 | 0.11   | 0.987 | 0.973 | 0.997 |
| 6  | 98.7328 | 0.0138 | 0.105  | 0.987 | 0.975 | 0.997 |
| 7  | 98.5488 | 0.016  | 0.1112 | 0.985 | 0.971 | 0.997 |
| 8  | 98.6783 | 0.0138 | 0.106  | 0.987 | 0.974 | 0.997 |
| 9  | 98.4058 | 0.0194 | 0.1175 | 0.984 | 0.968 | 0.997 |
| 10 | 98.5829 | 0.0144 | 0.1093 | 0.986 | 0.972 | 0.997 |

4 Conclusion In this work, EMD has been used to decompose the ECG signals collected from the PTBDB database. First, the raw signals are denoised and framed, each frame of 2 s duration. For this work, we have added lead I and lead II to extract different features from the signal. From the selected intrinsic mode functions (IMF1-IMF5), a group of temporal, spectral, and heartbeat interval features is extracted. A fusion of twenty-eight features is applied as the input to the LMT classifier. A tenfold cross-validation procedure has been used in this work to ensure a reliable classification process. A top accuracy of 98.75% has been obtained in fold 4. Also, 99.7% of the area under the ROC is obtained, indicating that the LMT classifier achieves the best results when a mixture of features is used. The main benefit of the LMT classifier lies in the fact that it can automatically detect MI with higher accuracy and is thus able to minimize man-made errors, reducing the workload of cardiologists and related healthcare professionals.

References 1. Roger VL, Go AS et al (2012) Heart disease and stroke statistics-2012 update. Circulation 125(1):e2–e220 2. Mehra R (2007) Global public health problem of sudden cardiac death. J Electrocardiol 40(6):S118–S122 3. Haykin S (2002) Neural networks. Pearson Education Asia, New Delhi 4. Schamroth L (2009) An Introduction to electrocardiography, 7th edn. Wiley, New York, NY, USA 5. Reddy MRSE, Svensson L, Haisty J, Pahlm WK (1992) Neural network versus electrocardiographer and conventional computer criteria in diagnosing anterior infarct from the ECG. In: Proceedings of computers in cardiology, pp 667–670


6. Diker ZC, Avci E, Velappan S (2018) Intelligent system based on genetic algorithm and support vector machine for detection of myocardial infarction from ECG signals. In: 2018 26th Signal processing and communications applications conference (SIU), pp 1–4. https:// doi.org/10.1109/SIU.2018.8404299 7. Padhy S, Dandapat S (2017) Third-order tensor based analysis of multilead ECG for classification of myocardial infarction. Biomed Sig Process Control 31:71–78. ISSN 1746-8094 8. Dohare AK, Kumar V, Kumar R (2018) Detection of myocardial infarction in 12 lead ECG using support vector machine. Appl Soft Comput 64:138–147 9. Baloglu UB, Talo M, Yildirim O, Tan RS, Acharya UR (2019) Classification of myocardial infarction with multi-lead ecg signals and deep cnn Pattern Recognit. Lett 122:23–30. https:// doi.org/10.1016/j.patrec.2019.02.016 10. Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PC, Mark RG, Mietus JE, Moody GB, Peng CK, Stanley HE (2000) PhysioBank, PhysioToolkit, and PhysioNet—components of a new research resource for complex physiologicsignals. Circulation 101:e215–e220 11. Sahoo S, Mohanty M, Behera S, Sabut SK (2017) ECG beat classification using empirical mode decomposition and mixture of features. J Med Eng Technol 41(8):652–661 12. Riaz F, Hassan A, Rehman S, Niazi IK, Dremstrup K (2016) EMD based temporal and spectral features for the classification of EEG signals using supervised learning. IEEE Trans Neural Syst Rehabil Eng 24(1):28–35 13. Landwehr N, Hall M, Frank E (2005) Logistic model trees. Mach Learn 59(1/2):161–205

Metamaterial CSRR Loaded T-Junction Phase Shifting Power Divider Operating at 2.4 GHz Kumaresh Sarmah, Roktim Konch, and Sivaranjan Goswami

Abstract A power divider T-network requires dissimilar lengths of the two output branches if a phase difference is desired between the two output ports. This often leads to an asymmetrical structure of the power divider. In this paper, a phase difference between the two output ports of a power divider is achieved with the help of a complementary split-ring resonator (CSRR) structure. The T-junction divider is a three-port network. The middle port is fed by a 50 Ω microstrip line, and the other two ports are considered as output ports. A CSRR is etched from the ground plane below one of the output branches. It is observed that, by adjusting the position of the CSRR structure with respect to the microstrip line, it is possible to modify the phase difference between the output branches, and the divider additionally exhibits low power loss. A design prototype is fabricated on an FR4 substrate and tested at a frequency of 2.4 GHz for a phase modification of about 45°. The proposed power divider is compact, simple in design, and has low power losses, so it is easily integrated with microstrip antennas and convenient for phased array antenna design. Keywords Power divider · Phase shift · Metamaterial · CSRR

1 Introduction Microstrip power divider is a passive microwave device which is commonly used as a power dividing, power combining, or feeding network for a microstrip antenna array. The T-junction power divider is typically a three-port network, and it can be implemented and coupled in any kind of transmission line medium. There has been research worldwide investigating different technologies to achieve compact microstrip power dividers with various phase shifting properties [1–5]. Microstrip phase shifters consisting of parallel stubs and ground slots, which yield phase variations of 45° and 22.5°, are discussed in [6]. In [7], the authors have K. Sarmah (B) · R. Konch · S. Goswami Department of Electronics and Communication Technology, Gauhati University, Guwahati, Assam, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_36


demonstrated an electrical size reduction of about 35% and about 32% in a microstrip power divider using I-shaped and split-ring-shaped defected ground structures (DGS). In recent years, metamaterial-based approaches for the design of improved microstrip power dividers have been gaining popularity amongst researchers [8]. Metamaterials are artificial material structures that can provide negative values of effective permittivity or permeability or both [9]. Metamaterials attract wide research interest because of their negative permittivity and permeability, and they have received a lot of application attention during the last few years [3, 4]. Originally, dielectric media exhibiting simultaneously negative permittivity and permeability were achieved by designing split-ring resonators (SRRs) and wires. A novel type of metamaterial-inspired T-junction power divider has been reported with a 70% reduction of the electrical length of the impedance transformer [10]. In another work, a slow-wave transit time effect is introduced to achieve electrical size reduction and third harmonic suppression by embedding an open complementary split-ring resonator in the microstrip transmission line [8]. In [7], a dual-band power divider using artificial lines based on complementary resonators is demonstrated. Using a CSRR, an equal-length multi-way T-junction arbitrary differential phase shifter was reported and mathematically explained for the first time in [1]. In a conventional differential phase shifter (DPS), to generate arbitrary phase shifts, the reference and main microstrip lines are made of different physical lengths. To achieve a certain phase shift in a particular port, the length of the strip line in the corresponding port has to be increased; due to the increased length, the overall structure of the divider becomes bulky. Size reduction is a major concern for antenna designers. In this paper, we propose a CSRR-aided adjustable design of a microstrip power divider operating at 2.4 GHz, which is capable of tuning the phase difference between the two output branches. This power divider exhibits equal power splitting characteristics with a 45° phase offset in the specified branch. The proposed power divider is designed and simulated using Ansys HFSS.

2 Design Details of Power Divider The geometrical structure of a simple T-junction power divider is illustrated in Fig. 1. For a 50 Ω input line, a 3-dB (equal split) power divider can be designed by using two 100 Ω output lines. A quarter wave transformer must be incorporated to bring the output line impedances back to the desired levels. To minimize the return loss, it is necessary to match the output line properly. The width (W) and length (L) of the divider are 30 mm and 87 mm, respectively. A CSRR was reported for the first time in [11], which proposed the CSRR structure as a means of realizing a notch filter. To fit the convention of the transmission line model, the CSRR is often modelled as an LC tank circuit [12], as shown in Fig. 2. A CSRR structure is placed in the ground plane below one of the output branches of the T-junction power divider as shown in Fig. 3. The


Fig. 1 Design details of a T-junction power divider

Fig. 2 CSRR and its equivalent circuit model

Fig. 3 Geometry of the proposed power divider with CSRR at ground plane

combination of the strip line and the CSRR introduces a series inductance and a shunt capacitance which lead to a slow-wave effect [13] (Table 1). In order to study the effect of the CSRR structure on the power divider network, the CSRR is first etched from the ground plane in such a way that the centre of the CSRR structure lies exactly below the junction of the three lines of the power divider network. It is then gradually shifted towards one of the output ports along the corresponding microstrip line.

Table 1 Dimensions of the power divider

| Symbol | Characteristic impedance (ohm) | Length (mm) | Width (mm) |
| d | 50   | 17.02 | 2.91 |
| q | 70.7 | 17.50 | 1.51 |
| g | 100  | 18.02 | 0.62 |
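The 70.7 Ω entry for branch q in Table 1 is the classic quarter-wave transformer value for matching the 100 Ω output lines to the 50 Ω feed. The short sketch below (illustrative only; the effective permittivity value is an assumed figure for an FR4 microstrip, not taken from the paper) reproduces that impedance and estimates the physical quarter-wave length at 2.4 GHz:

```python
import math

Z0, ZL = 50.0, 100.0          # feed impedance and output-line impedance (ohm)
FREQ = 2.4e9                  # operating frequency (Hz)
EPS_EFF = 3.3                 # assumed effective permittivity of the FR4 microstrip
C0 = 299_792_458.0            # speed of light (m/s)

# Quarter-wave transformer impedance: geometric mean of the two impedances.
z_quarter = math.sqrt(Z0 * ZL)          # about 70.7 ohm, matching Table 1

# Guided wavelength and quarter-wave physical length.
lambda_g = C0 / (FREQ * math.sqrt(EPS_EFF))
quarter_len_mm = lambda_g / 4 * 1e3

print(f"Z_quarter = {z_quarter:.1f} ohm")
print(f"quarter-wave length is roughly {quarter_len_mm:.1f} mm at 2.4 GHz")
```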

3 Fabricated Design of the Power Divider The fabricated power divider is shown in Fig. 4. Here, initially, the CSRR structure below the power divider is etched from the ground plane in such a way that its centre is exactly below the junction at (x = 0 mm, y = 0 mm). The CSRR structure is gradually moved towards the end of a particular port along the x-axis whilst keeping the y-position fixed. The phase of the S23 parameter is obtained from simulation using Ansys HFSS for each step. The effect of the CSRR is the same along the positive or negative x-axis due to its symmetric structure. The final design is fabricated on a PCB board, and the performance is measured using a Rohde and Schwarz ZNB20 vector network analyser (VNA). Figure 4 shows the photograph of the fabricated phase shifter prototype with the CSRR positioned 10 mm from the centre of the T-junction.

4 Experimental Results Figure 5 shows the variation of the phases of the S12 and S13 parameters with the position of the centre of the CSRR structure along the x-axis. Since the CSRR structure is below the microstrip line towards port 3, more variation in the phase is observed for the S13 parameter. When the centre of the CSRR structure is at x = 10 mm, the phase difference between port 1 and port 3 is almost equal to zero; this is the reason why this point is considered for fabrication of the power divider structure. There is a phase difference of 45.2° between port 2 and port 3 at this point. Figure 6 shows the magnitudes of the S11, S12, and S13 parameters for the final design. It is observed that the return loss (S11) is approximately −25 dB, which indicates good impedance matching. The magnitude of the S13 parameter is slightly less than that of the S12 parameter, which can be attributed to the loss due to the presence of the CSRR structure below this microstrip line.


Fig. 4 Fabricated power divider a top view, b bottom view

Fig. 5 Variation of the phases of S12 and S13 with position of the CSRR structure along x-axis at 2.4 GHz frequency


Fig. 6 Magnitude (dB) of the S-parameters of the fabricated power divider

5 Conclusion A metamaterial-aided compact phase-shifting power divider is proposed in this study. To achieve the desired phase shift, a CSRR is used below the quarter-wavelength line in the respective branch. The phase shifting behaviour of the CSRR at different positions is illustrated, and the design procedure is described. A quarter-wave microstrip line with a CSRR operating at 2.4 GHz was implemented for verification. The proposed design demonstrates the length reduction capability of the power dividing branch as compared to the conventional method. A key advantage of this approach is the symmetry of the structure. Further, since the CSRR structure is fabricated by etching from the ground plane below the power divider network, it is possible to obtain different phase shifts without altering the primary design of the power divider network. This is advantageous for better placement optimization in larger microstrip networks. The network may be used in phased arrays where a fixed phase difference is required between two antenna elements.

References 1. Qamar Z, Zheng SY, Chan WS, Ho D (2016) An equal-length multiway differential metamaterial phase shifter. IEEE Trans Microw Theory Tech 65(1):136–146 2. Gil M, Bonache J, Selga J, Garcia-Garcia J, Martín F (2007) Broadband resonant-type metamaterial transmission lines. IEEE Microw Wirel Compon Lett 17(2):97–99 3. Smith DR, Padilla WJ, Vier DC, Nemat-Nasser SC, Schultz S (2000) Composite medium with simultaneously negative permeability and permittivity. Phys Rev Lett 84(18):4184 4. Shelby RA, Smith DR, Schultz S (2001) Experimental verification of a negative index of refraction. Science 292(5514):77–79


5. Lim JS, Lee SW, Kim CS, Park JS, Ahn D, Nam S (2001) A 4.1 unequal Wilkinson power divider. IEEE Microw Wirel Comp Lett 11(3):124–126 6. Sis G, Bonache J, Martn F (2008) Dual-band Y-junction power dividers implemented through artificial lines based on complementary resonators. In: IEEE MTT-S international microwave symposium digest, pp 663–666 7. Packiaraj D, Bhargavi A, Ramesh M, Kalghatgi AT (2008) Compact power divider using defected ground structure for wireless applications. In: 2008 International conference on signal processing, communications and networking, pp 25–29 8. Karthikeyan SS, Kshetrimayum RS (2001) Compact, harmonic suppressed power divider using open complementary split-ring resonator. Microw Opt Technol Lett 53(12):2897–2899 9. Bialkowski M, Wang Y (2011) Broadband microstrip phase shifters employing parallel stubs and ground slots. Microw Opt Technol Lett 53(4):723–728 10. Saenz E, Cantora A, Ederra I, Gonzalo R, De Maagt P (2007) A metamaterial T-junction power divider. IEEE Microwave Wirel Compon Lett 17(3):172–174 11. Falcone F, Lopetegi T, Baena JD, Marqués R, Martin F, Sorolla M (2004) Effective negative epsilon microstrip lines based on complementary split ring resonators. IEEE Microwave Wirel Compon Lett 14(6):280–282 12. Baena JD, Bonache J, Martin F, Sillero RM, Falcone F, Lopetegi T, Laso MA, Garcia-Garcia J, Gil I, Portillo MF, Sorolla M (2005) Equivalent-circuit models for split-ring resonators and complementary split-ring resonators coupled to planar transmission lines. IEEE Trans Microw Theory Tech 53(4):1451–1461 13. Liu H, Li Z, Sun X (2005) Compact defected ground structure in microstrip technology. Electron Lett 41(3):132–134

An Adaptive Levy Spiral Flight Sine Cosine Optimizer for Techno-Economic Enhancement of Power Distribution Networks Using Dispatchable DGs Usharani Raut, Sivkumar Mishra, Subrat Kumar Dash, Sanjaya Kumar Jena, and Alivarani Mohapatra Abstract This paper proposes an adaptive levy spiral flight sine cosine optimizer (ALSFSCA) to maximize the benefits of power utilities in terms of power loss minimization (PLM) and economic benefits maximization (EBM) by incorporating dispatchable DGs. The optimal location and sizing of the DGs are determined simultaneously with consideration of different single and multiple objectives optimization cases without violating the system operating constraints. The effectiveness of the algorithm is verified on a 33-bus test distribution system considering various practical load models like constant power (CP), constant current (CC) and constant impedance (CI) models under three different loading conditions like light, nominal and heavy load. The effect of DG type is also investigated to find the best possible solution. The result analysis demonstrates the superiority of the proposed model over existing models available in the literature. Keywords Distributed generation · Power loss · Techno-economic benefits · Adaptive sine cosine optimizer

U. Raut (B) International Institute of Information Technology, Bhubaneswar, India e-mail: [email protected] S. Mishra Centre for Advanced Post Graduate Studies,Biju Pattnaik University of Technology, Rourkela, India S. K. Dash Government College of Engineering, Kalahandi, India S. K. Jena Institute of Technical Education and Research,SOA University, Bhubaneswar, India e-mail: [email protected] A. Mohapatra KIIT Deemed to be University, Bhubaneswar, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_37


1 Introduction The rapid development of DG technologies in various forms and capacities has drastically altered the traditional planning of power distribution networks (PDNs). Optimal DG planning is a challenging issue with regard to the constraints arising from the nature of smart grids. Optimal DG allocation (ODGA) can deliver the expected power loss reduction, which leads to improved voltage profile, reliability and power quality. Numerous conventional and metaheuristics-based methods have been developed in the literature to address various issues of ODGA considering the primary concerns of the DG planner. Over the last few years, optimal location and sizing of DGs have been considered an active research area. Yammani et al. [17] formulated a shuffled bat algorithm-based multi-objective technique for ODGA to address different technical issues related to DG penetration under various load conditions. In 2015, Viral and Khatod [16] presented an analytical technique to determine the ODGA in a balanced radial DN by considering loss minimization. Mohandas et al. [7] developed a multi-objective performance index to determine the ODGA for enhancing voltage stability (VSE) under different practical load models. In [8], different sensitivity-based approaches for ODGA are presented and compared. Sultana and Roy [15] presented an oppositional Krill Herd algorithm-based method to minimize yearly energy loss using renewable DGs. Several approaches are based on voltage stability and loadability enhancement. In [3], power loss, operating cost and voltage stability are simultaneously considered for ODGA using the bacterial foraging optimization algorithm (BFOA). In [11], the authors proposed a stochastic fractal search algorithm-based approach considering PLM, voltage profile improvement (VPI) and VSE. In [12], a novel VSI-based renewable DG model is developed with load growth. In [2, 13], techno-economic and environmental benefits are maximized by using renewable DGs. Several research works are focused on Pareto-based solutions. Nekooei et al. [9] developed an enhanced harmony search algorithm (EHSA)-based Pareto solution considering PLM and VPI. In [18], the authors presented a multi-objective PSO (MOPSO)-based approach for allocation of both DGs and capacitors with simultaneous consideration of current balancing, voltage stability and PLM. Behera and Panigrahi [1] presented a multi-objective DE-based ODGA algorithm to maximize the profit of DG integration. In [5], a Pareto-based quasi-oppositional grey wolf optimizer-based model is developed for ODGA with different load models and loading conditions. Several approaches attempted to enhance the reliability of the network by using DGs and capacitor banks in a reconfigured network [10, 14]. The literature review reveals that, from an optimization perspective, it is still a challenge for researchers to handle a complex nonlinear problem like ODGA in the presence of multiple objectives, which provides ample opportunity for further research. In this paper, an adaptive levy spiral flight sine cosine optimizer-based technique is applied for the first time to address the techno-economic issues of ODGA in a multi-objective environment. The problem formulation is described in Sect. 2. The


proposed methodology is presented in Sect. 3. Section 4 includes the application of proposed approach to ODGA problem. In Sect. 5, the simulation results and discussion are presented. Section 6 includes the overall conclusion.

2 Problem Formulation In this work, a multi-objective function is developed with consideration of total power loss (PLoss) and yearly economic loss (AEL) to maximize the techno-economic benefits, subjected to various equality and inequality constraints. F(x) = Min[w1 × PLoss + w2 × AEL]

(1)

w1 + w2 = 1

(2)

In this work, the weight factors w1 and w2 are selected on the basis of their individual impact on the system performance. Accordingly, the values of w1 and w2 are set at 0.7 and 0.3, respectively.

2.1 Minimization of Total Active Power Loss (PLoss) The PLoss is evaluated as per Eq. (3):

$$P_{Loss} = \sum_{i=1}^{N_{br}} |I_{br,i}|^{2} \times R_{br,i} \tag{3}$$

where $N_{br}$ denotes the total branches and $I_{br,i}$ and $R_{br,i}$ are the branch current and the branch resistance of the $i$th branch.
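Equation (3) is a straightforward sum over branch currents and resistances; a minimal sketch (with illustrative values, not from the paper) is:

```python
import numpy as np

def total_active_power_loss(branch_currents, branch_resistances):
    """Eq. (3): sum of |I|^2 * R over all branches (currents in A, resistances in ohm)."""
    i = np.abs(np.asarray(branch_currents))
    r = np.asarray(branch_resistances)
    return float(np.sum(i**2 * r))

# Example with made-up values for three branches.
print(total_active_power_loss([12.0, 8.5, 4.2], [0.09, 0.15, 0.21]))  # watts
```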

2.2 Minimization of AEL The cost of AEL without DG (AELwoDG) is calculated using Eq. (4) as per [4]:

$$AEL_{woDG} = PLoss_{woDG} \times ER \times T \tag{4}$$

where $PLoss_{woDG}$ is the PLoss in the absence of DG, $T$ (hour) is the DG operation period, and $ER$ ($/kWh) is the energy rate. The AEL with DG (AELwDG) is evaluated as per Eq. (5):

$$AEL_{wDG} = PLoss_{wDG} \times ER \times T + CDGP \tag{5}$$


where CDGP is the cost of DG generated power, which is determined as per Eq. (6):

$$CDGP = Cost_{total} \times \sum_{n=1}^{n_{DG}} P_{DG,n} \tag{6}$$

In Eq. (6), $Cost_{total}$ includes the installation, operation and maintenance cost of the DG, and $n_{DG}$ is the total number of DGs to be installed. The net annual saving (AS) is calculated using Eq. (7), assuming the DG life to be 10 years:

$$AS = AEL_{woDG} - (AEL_{wDG} + CDGP/10) \tag{7}$$
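Equations (4)-(7) define the economic side of the objective. The sketch below strings them together as written; the numeric inputs are placeholders, not values from the paper:

```python
def annual_energy_loss(p_loss_kw, energy_rate, hours=8760):
    """Eq. (4): yearly cost of energy losses, in $ (energy_rate in $/kWh)."""
    return p_loss_kw * energy_rate * hours

def annual_saving(p_loss_wo_dg, p_loss_w_dg, dg_kw_total, cost_per_kw,
                  energy_rate=0.06, dg_life_years=10):
    """Eq. (7): net annual saving after adding DGs (all costs in $)."""
    ael_wo = annual_energy_loss(p_loss_wo_dg, energy_rate)
    cdgp = cost_per_kw * dg_kw_total                         # Eq. (6)
    ael_w = annual_energy_loss(p_loss_w_dg, energy_rate) + cdgp   # Eq. (5)
    return ael_wo - (ael_w + cdgp / dg_life_years)           # Eq. (7)

# Placeholder example: losses drop from 210 kW to 73 kW after installing 2.9 MW of DG.
print(round(annual_saving(210.0, 73.0, 2900, 30.0), 1))
```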

2.3 Operational Constraints

• Power balance constraint:

$$PLoss_{sub} + \sum_{i=1}^{n_{DG}} P_{DG,i} - \sum_{m=2}^{n_{Bus}} P_{Lm} - PLoss_{DG} = 0 \tag{8}$$

• Voltage constraint:

$$0.95 \le V_i \le 1.05, \quad i = 1 \text{ to } n_{Bus} \tag{9}$$

• DG location constraint:

$$2 \le DG\ location \le n_{Bus} \tag{10}$$

• DG size constraint:

$$0.1 \sum_{k=2}^{n_{Bus}} P_{Lk} \le \sum_{k=1}^{n_{DG}} P_{DG,k} \le 0.8 \sum_{k=2}^{n_{Bus}} P_{Lk} \tag{11}$$

where $P_{Lk}$ is the load demand at node $k$.


3 Proposed Algorithm In this work, an improved version of SCA, namely adaptive levy spiral flight SCA, is proposed to address the issues of ODGA.

3.1 Conventional SCA SCA is a stochastic search-based metaheuristic algorithm, where incoming solutions are updated using sine and cosine functions as per Eq. (12) [6]:

$$X_{m,n}^{itr+1} = \begin{cases} X_{m,n}^{itr} + Rn_1 \times \sin(Rn_2) \times |Rn_3 \times X_{best,n}^{itr} - X_{m,n}^{itr}|, & \text{if } Rn_4 < 0.5 \\ X_{m,n}^{itr} + Rn_1 \times \cos(Rn_2) \times |Rn_3 \times X_{best,n}^{itr} - X_{m,n}^{itr}|, & \text{otherwise} \end{cases} \tag{12}$$

where $n$, $m$ and $itr$ represent the dimension, population size and current iteration, respectively. $Rn_1$ is the conversion parameter, which is adjusted as per Eq. (13):

$$Rn_1 = a_1 \left(\frac{max_{itr} - itr}{max_{itr}}\right) \tag{13}$$

where $a_1$ is a constant and $max_{itr}$ is the maximum iteration number. $Rn_2$ and $Rn_3$ are random numbers in the range of [0, 2π] and [0, 2], respectively. $Rn_4$ is a switching parameter in the range of [0, 1].
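A minimal NumPy sketch of the standard SCA update in Eqs. (12)-(13) is shown below (vectorized over the population; parameter names mirror the equations, and the population and bounds themselves are illustrative assumptions):

```python
import numpy as np

def sca_update(X, X_best, itr, max_itr, a1=2.0, rng=np.random.default_rng()):
    """One SCA iteration: Eq. (13) for Rn1, then the sine/cosine move of Eq. (12)."""
    npop, ndim = X.shape
    rn1 = a1 * (max_itr - itr) / max_itr              # Eq. (13), linearly decreasing
    rn2 = rng.uniform(0.0, 2.0 * np.pi, (npop, ndim))
    rn3 = rng.uniform(0.0, 2.0, (npop, ndim))
    rn4 = rng.uniform(0.0, 1.0, (npop, ndim))
    dist = np.abs(rn3 * X_best - X)                   # |Rn3 * X_best - X|
    step = np.where(rn4 < 0.5, np.sin(rn2), np.cos(rn2)) * dist
    return X + rn1 * step                             # Eq. (12)
```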

3.2 Proposed ALSFSCA The proposed ALSFSCA improves on the conventional SCA in the following aspects:

• Inclusion of an exponential variation of the conversion parameter:

$$Rn_1(t) = a_1 e^{\frac{itr}{max_{itr}}} \tag{14}$$

• A Levy flight distribution for enhancing the exploration process of the algorithm:

$$P_{best,k}^{itr+1} = P_{best,k}^{itr} + levy \times \theta(k) \times P_{best,k}^{itr} \tag{15}$$

$$levy = 0.01 \times \frac{u_1 \times \sigma}{v_1^{1/a}} \tag{16}$$

where $u_1$ and $v_1$ are randomly selected between 0 and 1. The value of $\sigma$ is obtained as per Eq. (17):

$$\sigma = \left[\frac{\Gamma(1+a) \times \sin\left(\frac{\pi a}{2}\right)}{\Gamma\left(\frac{1+a}{2}\right) \times a \times 2^{\left(\frac{a-1}{2}\right)}}\right]^{1/a} \tag{17}$$

where

$$\Gamma(n) = (n-1)! \tag{18}$$

The control coefficient $\theta$ is obtained as per Eq. (19):

$$\theta(j) = e^{\left(\frac{-\varepsilon \times itr}{max_{itr}}\right)\left(1 - \frac{r(j)}{r_{max}(j)}\right)} \tag{19}$$

where

$$r(j) = \left| P_{best,j}^{itr} - \frac{1}{npop}\sum_{i=1}^{npop} X_{i,j}^{itr} \right| \tag{20}$$

$$r_{max}(j) = \max(P_j^{itr}) - \min(P_j^{itr}) \tag{21}$$
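For reference, the Levy step size defined by Eqs. (16)-(18) can be computed directly with the standard gamma function; the sketch below only illustrates those formulas (the exponent a = 1.5 follows the parameter value quoted in Sect. 5):

```python
import math
import random

def levy_sigma(a=1.5):
    """Eq. (17): scale factor of the Levy-distributed step."""
    num = math.gamma(1 + a) * math.sin(math.pi * a / 2)
    den = math.gamma((1 + a) / 2) * a * 2 ** ((a - 1) / 2)
    return (num / den) ** (1 / a)

def levy_step(a=1.5):
    """Eq. (16): levy = 0.01 * u * sigma / v**(1/a), with u, v drawn in (0, 1)."""
    u = random.random()
    v = max(random.random(), 1e-12)   # guard against a zero draw
    return 0.01 * u * levy_sigma(a) / (v ** (1 / a))

print(levy_step())
```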

4 Application of Proposed Algorithm to Optimal DG Allocation Problem The ODGA is a complex mixed integer problem, where the control variables are the DG position, size and power factor (pf). The solution vector for the problem can be formulated as per Eq. (22):

$$pop_i = [LOC_{DG1}\, LOC_{DG2}, \ldots, LOC_{DGnDG},\; P_{DG1}\, P_{DG2}, \ldots, P_{DGnDG},\; pf_{DG1}, pf_{DG2}, \ldots, pf_{DGnDG}], \quad i = 1, 2, \ldots, npop \tag{22}$$

Each individual of the population is initialized according to Eqs. (23)-(25):

$$LOC_{DG,i} = \mathrm{round}[LOC_{DG,min,i} + rand \times (LOC_{DG,max,i} - LOC_{DG,min,i})] \tag{23}$$

$$P_{DG,i} = \mathrm{round}[P_{DG,min,i} + rand \times (P_{DG,max,i} - P_{DG,min,i})] \tag{24}$$

$$Pf_{DG,i} = \mathrm{round}[Pf_{DG,min,i} + rand \times (Pf_{DG,max,i} - Pf_{DG,min,i})] \tag{25}$$

The pseudocode of proposed ALSFSCA to get optimal solution to ODGA problem is given in Algorithm 1. Equations (26) and (27) are used to handle the discrete and continuous variables, respectively:

$$Y_{new} = \mathrm{mod}(\mathrm{uint16}(Y), Y_{Ulimit}) \tag{26}$$

$$\text{If } Y_i < LO_{limit} \text{ then } Y_i = LO_{limit}; \quad \text{If } Y_i > UPP_{limit} \text{ then } Y_i = UPP_{limit} \tag{27}$$
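Equations (26)-(27) act as simple repair operators: discrete variables (bus numbers) are wrapped with a modulo operation and continuous variables (sizes, power factors) are clipped to their bounds. A hedged Python sketch of this handling is:

```python
import numpy as np

def repair_discrete(y, upper_limit):
    """Eq. (26): wrap a candidate bus index into the valid range with a modulo."""
    return int(np.uint16(round(y)) % upper_limit)

def repair_continuous(y, lower_limit, upper_limit):
    """Eq. (27): clip a continuous variable (DG size or power factor) to its bounds."""
    return float(np.clip(y, lower_limit, upper_limit))

# Example: a 33-bus feeder, DG size limited to [100, 2000] kW (illustrative bounds).
print(repair_discrete(41.7, 33), repair_continuous(2350.0, 100.0, 2000.0))
```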

Algorithm 1 Pseudocode of proposed ALSFSCA for DG Allocation
Input: Line and load data, objective functions
Output: Global optimal solution, Gbest
1: Input the line and load data of the distribution system
2: Initialize the parameters of ALSFSCA: maxitr, npop, a1, ε and β
3: Initialize the population X
4: Xi = [X1, X2, ..., Xnpop]
5: Xi = [LocDG, SizeDG, pfDG]
6: Calculate fobj(Xi) and obtain Xbest
7: itr = 1
8: while itr ≤ maxitr do
9:   Set Gbest = Xbest
10:  Update Rn2, Rn3 and Rn4
11:  Update X using Eq. (12)
12:  Check all the constraints and generate Xupdated
13:  Evaluate fobj(Xupdated) and obtain the best solution Xbest,updated
14:  Use Eqs. (14) and (15) to mutate Xbest,updated
15:  if fobj(Xbest,updated) < fobj(Xbest) then
16:    Xbest = Xbest,updated
17:    fobj(Xbest) = fobj(Xbest,updated)
18:  end if
19:  itr = itr + 1
20: End
21: end while
22: Return Gbest

5 Results and Discussions The effectiveness of the algorithm is tested on a 33-bus test RDN. The performance of the algorithm is tested at three distinct load levels: LL (0.5), NL (1.0) and HL (1.6). The simulations are carried out using MATLAB R2014a. The parameters of ALSFSCA used in the simulation are maxitr = 100, population size npop = 30, a1 = 2, ε = 30 and a = 1.5. A comparative performance analysis for CP, CC and CI load models is studied, and the findings are reported in Table 1. From the table, it is seen that the improvement in terms of PLoss (kW), QLoss (kVAr), AEL ($) and AS ($) is noticeable for all load types with DGs. The superiority of the proposed approach is validated by comparing the results obtained from ALSFSCA with other methods, namely OCDE, KHA and SFSA, which is presented in Figs. 1 and 2.


Table 1 Comparative performance analysis considering multi-objectives for different load models

Light load, LL (0.5):
| Parameter | CP WODG | CP WDG | CC WODG | CC WDG | CI WODG | CI WDG |
| DG location/size (kW) | - | 14/396, 24/520, 30/540 | - | 13/360, 24/485, 30/580 | - | 14/385, 24/450, 30/600 |
| Total DG size (kW) | - | 1456 | - | 1425 | - | 1435 |
| Vmin/Bus no. | 0.95/18 | 0.9850/33 | 0.9556/18 | 0.9867/33 | 0.9572/18 | 0.9849/18 |
| PLoss (kW) | 48.8 | 17.6 | 45.57 | 17.49 | 42.8 | 16.98 |
| QLoss (kVAr) | 33 | 12.01 | 30.84 | 11.91 | 28.92 | 11.56 |
| AEL ($) | 21368 | 12097 | 19,964.6 | 11966 | 18744.3 | 11715 |
| AS ($) | - | 9263 | - | 7998.6 | - | 7029.3 |

Nominal load, NL (1.0):
| Parameter | CP WODG | CP WDG | CC WODG | CC WDG | CI WODG | CI WDG |
| DG location/size (kW) | - | 14/801, 24/1080, 30/1052 | - | 13/830, 24/1063, 30/1100 | - | 13/730, 24/1048, 30/923 |
| Total DG size (kW) | - | 2933 | - | 2993 | - | 2701 |
| Vmin/Bus no. | 0.904/18 | 0.9690/33 | 0.9115/18 | 0.9713/33 | 0.9175/18 | 0.9655/18 |
| PLoss (kW) | 210.98 | 72.8 | 182.48 | 69.5 | 161.19 | 65.65 |
| QLoss (kVAr) | 143.12 | 50.66 | 123.45 | 48.43 | 108.79 | 45.53 |
| AEL ($) | 92413 | 40697 | 79,926 | 39463 | 70599.46 | 36860 |
| AS ($) | - | 51716 | - | 40463 | - | 33739 |

Heavy load, HL (1.6):
| Parameter | CP WODG | CP WDG | CC WODG | CC WDG | CI WODG | CI WDG |
| DG location/size (kW) | - | 14/1055, 24/1250, 30/1424 | - | 14/1007, 24/1217, 30/1320 | - | 14/944, 24/1198, 30/1207 |
| Total DG size (kW) | - | 3729 | - | 3544 | - | 3349 |
| Vmin/Bus no. | 0.8822/18 | 0.9680/33 | 0.8937/18 | 0.966/33 | 0.902/18 | 0.965/33 |
| PLoss (kW) | 314.5 | 108 | 262.87 | 102 | 226.72 | 95.4 |
| QLoss (kVAr) | 213.5 | 75 | 177.83 | 71 | 152.95 | 66.4 |
| AEL ($) | 137745 | 58543 | 115133.6 | 55191 | 99301 | 51829 |
| AS ($) | - | 79202 | - | 59943 | - | 47472 |

6 Conclusion In this paper, an effective implementation of the adaptive levy spiral flight sine cosine optimizer (ALSFSCA) for optimum DG allocation in a multi-objective framework has been presented. The proposed model is validated considering four different scenarios under three distinct load conditions. The proposed algorithm is found to give significant improvement in terms of PLoss, QLoss, AEL and Vmin with DG inclusion for CP, CC and CI loads. This reveals the superiority of ALSFSCA in handling both single and multi-objective cases with increased load levels.


Fig. 1 Effects of Sine and Cosine functions

[Bar chart comparing the total power loss (kW) obtained by ALSFSCA, OCDE, KHA and SFSA for 3 DGs at unity pf and for 3 DGs at 0.95 lagging pf]

Fig. 2 Comparative performance analysis of different algorithms

References 1. Behera SR, Panigrahi B (2018) A multi objective approach for placement of multiple DGs in the radial distribution system. Int J Mach Learn Cybern 1–15 2. Farh HM, Al-Shaalan AM, Eltamaly AM, Al-Shamma’A AA (2020) A novel crow search algorithm auto-drive pso for optimal allocation and sizing of renewable distributed generation. IEEE Access 8:27807–27820 3. Kowsalya M et al (2014) Optimal size and siting of multiple distributed generators in distribution system using bacterial foraging optimization. Swarm Evol Comput 15:58–65 4. Kumar S, Mandal KK, Chakraborty N (2019) Optimal dg placement by multi-objective opposition based chaotic differential evolution for techno-economic analysis. App Soft Comput 78:70–83 5. Kumar S, Mandal KK, Chakraborty N (2021) Optimal placement of different types of dg units considering various load models using novel multiobjective quasi-oppositional grey wolf optimizer. Soft Comput 25(6):4845–4864


6. Mirjalili S (2016) SCA: a sine cosine algorithm for solving optimization problems. KnowledgeBased Syst 96:120–133 7. Mohandas N, Balamurugan R, Lakshminarasimman L (2015) Optimal location and sizing of real power DG units to improve the voltage stability in the distribution system using ABC algorithm united with chaos. Int J Electr Power Energy Syst 66:41–52 8. Murthy V, Kumar A (2013) Comparison of optimal DG allocation methods in radial distribution systems based on sensitivity approaches. Int J Electr Power Energy Syst 53:450–467 9. Nekooei K, Farsangi MM, Nezamabadi-Pour H, Lee KY (2013) An improved multi-objective harmony search for optimal placement of DGs in distribution systems. IEEE Trans Smart Grid 4(1):557–567 10. Nguyen TP, Nguyen TA, Phan TVH, Vo DN (2021) A comprehensive analysis for multiobjective distributed generations and capacitor banks placement in radial distribution networks using hybrid neural network algorithm. Knowl-Based Syst 231:107387 11. Nguyen TP, Vo DN (2018) A novel stochastic fractal search algorithm for optimal allocation of distributed generators in radial distribution systems. Appl Soft Comput 70:773–796 12. Parihar SS, Malik N (2020) Optimal allocation of renewable DGs in a radial distribution system based on new voltage stability index. Int Trans Electr Energy Syst 30(4):e12295 13. Raut U, Mishra S (2020) A new pareto multi-objective sine cosine algorithm for performance enhancement of radial distribution network by optimal allocation of distributed generators. Evol Intell 1–22 14. Raut U, Mishra S (2021) Enhanced sine-cosine algorithm for optimal planning of distribution network by incorporating network reconfiguration and distributed generation. Arab J Sci Eng 46(2):1029–1051 15. Sultana S, Roy PK (2015) Oppositional krill herd algorithm for optimal location of distributed generator in radial distribution system. Int J Electr Power Energy Syst 73:182–191 16. Viral R, Khatod DK (2015) An analytical approach for sizing and siting of DGs in balanced radial distribution networks for loss minimization. Int J Electr Power Energy Syst 67:191–201 17. Yammani C, Sydulu M, Matam SK (2015) Optimal placement and sizing of DGs at various load conditions using shuffled bat algorithm. In: 2015 IEEE power and energy conference at illinois (PECI). IEEE, pp 1–5 18. Zeinalzadeh A, Mohammadi Y, Moradi MH (2015) Optimal multi objective placement and sizing of multiple DGs and shunt capacitor banks simultaneously considering load uncertainty via MOPSO approach. Int J Electr Power Energy Syst 67:336–349

Design of RAMF for Impulsive Noise Cancelation from Chest X-Ray Image Jaya Bijaya Arjun Das, Archana Sarangi, Debahuti Mishra, and Mihir Narayan Mohanty

Abstract Denoising of medical images is an important pre-processing step for the analysis, diagnosis, and treatment of various diseases. Images are normally affected by impulse noise when being transmitted through communication channels or because of noisy sensors. The most common noise that occurs in electronic communication is impulse noise, specifically salt-and-pepper noise. The median filter is typically used to reduce the presence of such noise; however, it works well only for images with low noise density. So, in order to get better image restoration, we can use another image restoration technique, adaptive median filtering, which works well for any density of noise. The adaptive median filter is frequently used in image processing to improve or restore data by eliminating undesirable noise without severely affecting the image's structures. This method works as a two-step process. We tested images containing noise levels ranging from 10 to 50% and calculated the PSNR value. Keywords Adaptive median filter · Image processing · Salt-and-pepper noise · PSNR

1 Introduction Medical images, such as magnetic resonance imaging (MRI), ultrasound, computed tomography (CT), and mammography, are confronted with a variety of levels of J. B. A. Das · A. Sarangi (B) · D. Mishra Department of Computer Science and Engineering, ITER, Siksha ‘O’ Anusandhan (Deemed to be University), Bhubaneswar, Odisha, India e-mail: [email protected] D. Mishra e-mail: [email protected] M. N. Mohanty Department of Electronics and Communication Engineering, ITER, Siksha ‘O’ Anusandhan (Deemed to be University), Bhubaneswar, Odisha, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_38


noise that may arise during transmission and processing. There are many influencing aspects which are responsible for the corruption of images during transmission or acquisition in the arena of image processing. The image's content is degraded due to the presence of noise in the image. To recover the original information from the image and remove the noise, certain denoising techniques must be used. These denoising approaches utilize a kernel that is convolved over the image, yielding a noise-free image as a result of the convolution. The kernel (window) size used varies, as does the desired result. Median filtering is one such widely used approach. When the image noise density is low, median filtering works properly, but when the density of noise in the image is high, it starts to fail. In order to overcome this problem, we can make use of a spatial filtering technique. The adaptive median filter is one such filtering method; it is better than the median filter as it is a two-step filtering technique. The adaptive median filter's major benefit is that its behavior changes depending on the features of the image being filtered. Another main feature of the adaptive filter is that it works well not only for impulse noise but also for speckle noise and Gaussian noise.

1.1 Median Filter It is the most commonly used filter and is a nonlinear method of filtering. A kernel of size n × n is made to convolve or slide over an m × m corrupted image. While performing this operation, the median value of the n × n kernel on the image is obtained, and then the value of the particular pixel is replaced with the median value of the n × n kernel. Drawbacks of Median Filter The median filter does not work efficiently when the spatial density of noise is high. Even if the pixels under consideration are uncorrupted (other than 0 or 255), they are replaced by the median of the window, which damages the overall visual quality of the image. For large kernel sizes, there is no proper smoothening of the image; instead, valuable information in the image gets blurred. There is no error propagation. In this paper, we use an improved adaptive median filter for the removal of salt-and-pepper noise from medical images which contain unwanted noise. This method is better than the standard median filter, and it gives better results at high noise density. The filtering of the medical image is done as a two-step process. The paper is organized as follows. Section 2 describes the state-of-the-art methods. In Sect. 3, the proposed adaptive median filtering method is described. Section 5 presents the results obtained, and Sect. 6 concludes the paper.


2 Literature Survey There are different techniques used to denoise the medical images for better diagnosis of the patients. Different median filtering techniques have been proposed and discussed. Most of the techniques are applicable to images with less noise density present in them. The comparison of various filtered images is based on mean square error (MSE) and peak signal-to-noise ratio (PSNR). The standard median filter is the most basic of all the median filters. In this method, the center pixel is filtered using a square window of size 2k + 1, where k ranges from 1 to N. The pixels in the window are sorted first, and then, the center pixel is adjusted to the sorted sequence’s median value. This is the most basic method, and it has been used for a long time. The removal of impulsive noise from the image has been accomplished using the decision-based approach. The primary goal of this technique is to maintain image features while suppressing impulsive noise [1]. Adaptive decision-based median filtering, rather than decision-based median filtering, might provide greater visual clarity and PSNR values. For noise densities up to 50%, adaptive median filtering also provides good visual clarity while denoising impulsive noise [1]. In [2], a selfadaptive median filter for eliminating salt-and-pepper noise based on local pixel distribution information is described. A threshold and the standard median are used in [3] to identify noise and replace the original pixel value with one that is closer to or equal to the standard median. Two new methods for removing high-density impulse noises, i.e., ranked order-based adaptive median filter (RAMF) and impulse size-based adaptive median filter (SAMF), were introduced in [4] based on the kinds of impulse noise damaged image models. These feature a changeable window size for impulse reduction while maintaining sharpness [4]. The method described in [5] is a spatial domain approach that filters the signal using an overlapping window based on the selection of an effective median per window. This method has been applied to images corrupted by impulse noise and has a relatively small number of distortions [5].

3 Method

The adaptive median filtering method is proposed to overcome the limitations of the standard median filter. The main advantage of this method is that the size of the kernel surrounding the corrupted pixel is variable, which produces better output results. The other main advantage of the adaptive filter is that, unlike the median filter, it does not replace all the pixel values with the median value. The working of the adaptive filter is a two-step process. In the first step, it finds the median value for the kernel, and in the second step, it checks whether the current pixel value is an impulse (salt-and-pepper noise) or not. The following two assumptions are commonly used to identify impulses: first, a noise-free image is made up of locally smoothly changing regions separated by edges, and second, a noisy pixel has a propensity


to have a very high or extremely low gray value in comparison with its neighbors. The standard median is a spatially based median that uses a windowing method with a 3 × 3 filter size. Normally, the dimensions of both filter sizes are odd. Given an input picture I, the filtered image f is defined by

f(x, y) = median_{(s,t) ∈ S_xy} { I(s, t) }    (1)

In the above equation, (x, y) are the coordinates of the pixel in the contextual region S_xy specified by the 3 × 3 window, and (s, t) are the coordinates of the pixels in that region. The method follows these steps:

• Step 1: Begin by using the 3 × 3 filtering window as a starting point.
• Step 2: In the current filtering window, check how many pixels are identified as noisy and noise free.
• Step 3: Check whether the center pixel is corrupted or not.
• Step 4: If the number of uncorrupted pixels in the filtering window is less than half of the total number of pixels in the filtering window, then go to Step 5.
• Step 5: Iteratively expand the window size outward by one pixel on all four sides of the window.

The above steps are repeated if the conditions are not satisfied. As a result, a pixel designated as noisy will not participate in the filtering process. In the median filtering procedure, only pixels identified as noise free in the filtering window will be used. As a consequence, superior filtering results with less distortion are obtained (Fig. 1).

Fig. 1 Block diagram of the method (input image → check whether the image is corrupted → adaptive median filtering if corrupted, otherwise no filtering → denoised image)


Algorithm: Implementation of the adaptive median filtering method

Wxy = window with the center (x, y)
W = current window size, initially W = 3
Wmax = maximum allowed window size
Gmin = value of the lowest gray level present in Wxy
Gmax = value of the highest gray level present in Wxy
Gmed = median of the gray-level values in Wxy
Gxy = gray-level value at the center pixel (x, y)

The adaptive median filter is divided into two levels, Level 1 and Level 2, as shown below:

Level 1:
P1 = Gmed − Gmin
P2 = Gmed − Gmax
If (P1 > 0 and P2 < 0), then proceed to Level 2
Else increase the window size
If (window size ≤ Wmax), repeat Level 1; else output Gmed

Level 2:
Q1 = Gxy − Gmin
Q2 = Gxy − Gmax
If (Q1 > 0 and Q2 < 0), output Gxy
Else output Gmed
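The two-level procedure above can be sketched in Python as follows. This is a minimal illustrative implementation of the standard adaptive median filter logic described in the text (variable names follow the paper's Gmin/Gmed/Gmax); the maximum window size of 7 and the edge padding are assumptions, not values stated by the authors.

import numpy as np

def adaptive_median_filter(image, w_max=7):
    """Two-level adaptive median filter for salt-and-pepper (impulse) noise."""
    pad = w_max // 2
    padded = np.pad(image, pad, mode='edge')
    out = image.copy().astype(np.float64)
    rows, cols = image.shape
    for i in range(rows):
        for j in range(cols):
            w = 3
            while True:
                half = w // 2
                win = padded[i + pad - half:i + pad + half + 1,
                             j + pad - half:j + pad + half + 1]
                g_min, g_med, g_max = win.min(), np.median(win), win.max()
                # Level 1: is the window median itself an impulse?
                if g_min < g_med < g_max:
                    # Level 2: keep the original pixel if it is not an impulse
                    g_xy = image[i, j]
                    out[i, j] = g_xy if g_min < g_xy < g_max else g_med
                    break
                w += 2                          # grow the window and try again
                if w > w_max:
                    out[i, j] = g_med           # window limit reached: output the median
                    break
    return out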

4 Significance Measures

4.1 PSNR

The peak signal-to-noise ratio is one of the parameters that, among others, provides the best comparative analysis, since the peak signal-to-noise ratio should be as high as possible, implying that the signal content in the output is high and the noise is low. We tested for noise levels ranging from 10 to 50%. The mean square error (MSE) and peak signal-to-noise ratio (PSNR) measurements were used to analyze the experimental observations, as shown below:

PSNR = 10 log10( max^2 / MSE )    (2)

Here, “max” is the image's absolute maximum pixel value, which in the case of a grayscale image is 255. In this equation, “MSE” refers to the mean square error, which should be as low as possible, indicating that the pixel values of the input and output images are as close as feasible.

MSE = (1 / (m n)) Σ_{i=1}^{m} Σ_{j=1}^{n} [A(i, j) − B(i, j)]^2    (3)


where “A” is the original image and “B” represents the denoised image with m × n resolution.
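A small Python sketch of Eqs. (2) and (3), assuming 8-bit grayscale images stored as NumPy arrays (the function names are illustrative, not from the paper):

import numpy as np

def mse(original, denoised):
    """Mean square error between the original image A and the denoised image B."""
    diff = original.astype(np.float64) - denoised.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, denoised, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means better restoration."""
    m = mse(original, denoised)
    return float('inf') if m == 0 else 10 * np.log10(max_val ** 2 / m)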

5 Result

The adaptive median filter has been implemented using Python 3 on a system with a Windows OS. The peak signal-to-noise ratio (PSNR) is calculated to understand the efficiency of the filter. Basically, the higher the PSNR value, the better the noise reduction by the filter. The following figures show the results of applying the adaptive median filter on the image with different noise densities, from 10 to 50%. In Fig. 2, the first image shows the original X-ray image of the chest. The second image shows the noisy image containing minimum noise, i.e., 10% noise, and the third image shows the result obtained after applying the adaptive median filter over the noisy image. The result shows that it is quite similar to the original image. Likewise, in Fig. 3, as we can see, the original image, along with the noisy image, contains 20% noise, and the


Fig. 2 The original image of the chest X-ray is represented in column (i). Column (ii) shows the image with 10% noise, while column (iii) shows the image after using the adaptive median filter to denoise it


Fig. 3 The original image of the chest X-ray is represented in column (i). Column (ii) shows the image with 20% noise, while column (iii) shows the image after using the adaptive median filter to denoise it


denoised image after applying the filtering technique over it. Similarly, Figs. 4, 5, and 6 show the results obtained after applying the filtering technique. Basically, the noise which we are removing from the noisy image is the salt-and-pepper noise. The images are given noise densities ranging from 10 to 50%, and the MSE and PSNR are calculated. Equations 2 and 3 are used to determine the PSNR and MSE


Fig. 4 The original image of the chest X-ray is represented in column (i). Column (ii) shows the image with 30% noise, while column (iii) shows the image after using the adaptive median filter to denoise it


Fig. 5 The original image of the chest X-ray is represented in column (i). Column (ii) shows the image with 40% noise, while column (iii) shows the image after using the adaptive median filter to denoise it


Fig. 6 The original image of the chest X-ray is represented in column (i). Column (ii) shows the image with 50% noise, while column (iii) shows the image after using the adaptive median filter to denoise it

Table 1 PSNR values for the medical image at different noise densities

Noise percentage (%)    Adaptive median filter (PSNR) (dB)
10                      33.6507
20                      32.7506
30                      32.3798
40                      31.9462
50                      31.7087

value. It should be noted that the filtering approach is better when the PSNR is higher and the MSE is lower. The results are given in Table 1.

6 Conclusion

Here, we can observe that the adaptive median filter works accurately for impulse noise, that is, salt-and-pepper noise. Adaptive filtering is a more effective approach than standard median filtering because it applies filtering only to the image's damaged pixels while leaving the uncorrupted pixels alone. During the filtering process, adaptive filtering is utilized to minimize the number of noisy pixels. In the case of high-density impulse noise, the advantage of using an adaptive filter is that it keeps the edge information. The adaptive filter is found to capture finer features in the image, and the restored images have greater visual quality.

References 1. Shrestha S (2014) Image denoising using new adaptive based median filters. arXiv preprint arXiv:1410.2175 2. Gao Z (2018) An adaptive median filtering of salt and pepper noise based on local pixel distribution. In: 2018 International conference on transportation and logistics, information and communication, smart city (TLICSC 2018) 3. Chang C-C, Hsiao J-Y, Hsieh C-P (2008) An adaptive median filter for image denoising. In: 2008 Second international symposium on intelligent information technology application, vol 2. IEEE, pp 346–350 4. Hwang H, Haddad RA (1995) Adaptive median filters: new algorithms and results. IEEE Trans Image Process 4(4):499–502 5. Boateng KO, Asubam BW, Laar DS (2012) Improving the effectiveness of the median filter 6. Ibrahim H, Kong NSP, Ng TF (2008) Simple adaptive median filter for the removal of impulse noise from highly corrupted images. IEEE Trans Consum Electron 54(4):1920–1927 7. Mehta R, Aggarwal NK (2014) Comparative analysis of median filter and adaptive filter for impulse noise a review. Int J Comput Appl 975:8887 8. Soni H, Sankhe D (2019) Image restoration using adaptive median filtering. Image 6(10) 9. Dwivedy P, Potnis A, Mishra M (2017) Performance assessment of several filters for removing salt and pepper noise, Gaussian noise, Rayleigh noise and uniform noise. Empirical Research Press Ltd., pp 176


10. Sathesh A, Rasitha K (2010) A nonlinear adaptive median filtering-based noise removal algorithm. In: Proceedings of first international conference on modeling, control, automation and communication (ICMCAC-2010), pp 108–113 11. Panda B, Nayak SK, Mohanty MN (2021) Noise suppression in nonstationary signals using adaptive techniques. In: Advances in electronics, communication and computing. Springer, Singapore, pp 261–270 12. Kar P, Mohanty MN (2020) An intelligent approach for noise elimination from brain image. In: Advanced computing and intelligent engineering. Springer, Singapore, pp 391–400 13. Jyoti A, Mohanty MN, Kar SK, Biswal BN (2015) Optimized clustering method for CT brain image segmentation. In: Advances in intelligent systems and computing, vol 327. Springer International Publishing Switzerland, pp 317–324 14. Dehuri A, Sanyena S, Dash RR, Mohanty MN (2015) A comparative analysis of filtering techniques on application in image denoising. In: IEEE conference CGVIS-2015, KIIT, Bhubaneswar, Odisha

Smart Street Lightning Using Solar Energy Priya Seema Miranda, S. Adarsh Rag, and K. P. Jayalakshmi

Abstract A solar lamp is a lighting system which generally consists of solar panels to gather energy, a rechargeable battery to store the charge, and LEDs or halogen lamps to provide illumination. Solar-powered lamps produce no pollution, unlike traditional sources of light. Most solar lamps turn ON or OFF based on external light conditions. In one of the existing projects, solar panels were the only source of energy for the battery, which created an issue if the battery ran out of charge. This project nullifies this problem by introducing the main power grid as a backup in case of insufficient charge in the battery. In another existing project, the solar panel is directly connected to the LED, which it powers. The problem with this concept is that the LEDs can only be switched on when ambient sunlight shines on the solar panel. In the proposed system, the brightness of the street lights is kept in a dim state, and the system increases the brightness of the LEDs in the presence of an object; they go back to the default state after a certain delay. The proposed system is self-sufficient, eco-friendly, financially beneficial, and is especially viable in countries with high levels of poverty and limited or no access to electricity. In the future, the proposed system can use a battery with higher storage capacity and more efficient solar panels. The proposed method can also include an automated self-cleaning apparatus for the solar panel. Keywords Solar panel · Street lightning · LED

1 Introduction

There is a huge problem of an increasing shortage of non-renewable sources in our country. With most of the countries already depending on renewable sources for


electricity and with the problem of overpopulation, we need to cut down on our energy needs. Street lights are a huge part of our roads since they are integral to our safety. The importance of street lights cannot be ignored since, according to the UN World Health Organization, India has the highest road fatality rate in the world. Solar energy is used as the main source to power these lights. There already exists an optimal way to convert solar energy to DC current [1], and this was used as one of the foundations for this project. There already exist some concepts of solar power being used as a backup power source for street lights [2]. Research has also been done on using only solar power as the source [3], but due to the varying weather conditions in India this is not feasible. Since present-day street lights have issues of inadequate dimming control, high energy usage, and low efficiency, there was a need for a more efficient street lighting system. The aim of the smart street lighting system is to control the LEDs to turn on only when needed and to remain in a dim state otherwise. LEDs have been proven to be more energy efficient as well as a great way to fight against climate change [4]. The system integrates the use of an STM32 microcontroller, LEDs, an LDR and a solar panel along with other components. The smart street light uses the solar panel to detect the intensity of the sunlight. During the daytime the LEDs are turned off, and during the night time the street lights are switched on and the light intensity is adjusted according to the conditions. The LDR sensor and a laser are used in tandem to check for the presence of vehicles in the vicinity of the smart system. IR sensors had been similarly used in previous projects [5], but due to the chance of them not detecting movement in some cases, an LDR sensor is used. The LDR sensor and the solar panel are linked, in that during the night time, when the intensity of sunlight is zero and there is no presence of vehicles, the brightness of the LEDs is kept at a minimum to offer visibility for pedestrians. An LDR sensor and a laser are used to detect objects. When the sensor detects the presence of vehicles, the LED brightness increases to full potential. The entire system is powered by a battery which is charged by a solar panel, thereby decreasing dependence on traditional power grids. This system uses a very small portion of the energy expended by normal street lamps and saves the money and energy required to power these street lights, thus reducing the dependence on non-renewable sources by a huge margin.

1.1 Problem Definition The problem of energy shortage in India is severe. According to Forbes, the problem is even more severe in rural areas some of which have limited access to electricity or none at all. We aim to solve this problem by using solar panels to decrease the dependency on power grids which primarily run on non-renewable power sources such as coal, diesel, and petrol. The proposed method also aims to cut government expenditure by a huge margin as the project depends on solar energy instead of the power grids, saving a huge amount of money. The spending will only be on the components, eliminating the need for constant power bills. The system also makes


use of LEDs instead of traditional halogen lamps. LEDs use 66% less power than traditional halogen lamps at full potential. The usage of LED lamps and solar panels is also very eco-friendly.

1.2 Objectives

• The main objective of the project is to provide an energy efficient and cost effective alternative to the halogen street lamps and the dependence on non-renewable sources of power currently used worldwide.
• The project intends to design a smart street light system which only requires an initial spending in comparison to the constant power bills required to operate traditional halogen lamps.
• The LDR sensor and laser work in tandem to detect the presence of vehicles, thereby switching the LED lamps between the dim and bright states accordingly.
• This entire system is powered by a solar panel which stores the charge in a lead acid battery.
• During monsoon seasons, or during weather conditions when the amount of sunlight is at a minimum, the project makes use of the power grid, which serves as a backup source of power.

1.3 Methodology

• The solar panel is connected to the battery.
• A switch is used for the purpose of switching between automatic and manual control of the system.
• A potential divider is connected to the solar panel to limit the amount of voltage. The voltage sensor connected to this potential divider gives us the voltage of the solar panel. The battery is a 12 V battery that gets charged by the solar panel. Similar to the solar panel, the battery is also connected to a voltage sensor and a potential divider which is used to detect the charge in the battery.
• Both the sensors are connected to the STM32 to display the voltage.
• The second voltage sensor is used by the STM32 to switch to the main supply when the voltage goes below a certain threshold, which in this case is 5 V.
• A regulated power supply is connected to the battery for the purpose of supplying the 5 V necessary for the STM32 as well as for the various other components.
• A relay is used for this purpose. The normally open port of the relay is connected to the main supply and the normally closed port is connected to the battery.
• An AC adapter is used to convert the 220 V AC voltage into the 12 V required for the system.
• The common port of the relay is connected to the motor driver which is connected to the street light.


Fig. 1 Block diagram

• The motor driver is used for the purpose of adjusting the brightness by limiting the amount of current.
• Laser and LDR sensors are used for the detection of vehicles. The laser is powered using a 5 V supply.
• The LDR sensor is connected to the STM32. The LDR sensor sends signals to the STM32 during the movement of vehicles or pedestrians.
• A 16 × 2 LCD is used to display the voltage of the battery, the solar panel and also the current count of the street light (Fig. 1).

2 Implementation

• Solar panels are going to be used as the main source of power for the LED street lamps with the power grid functioning as a backup power source.
• A solar panel charges the battery, which in turn is connected to a relay alongside the main grid.


• According to the solar light conditions, the microcontroller decides which power source to use for the system.
• When the battery is not sufficiently charged, the microcontroller switches the relay to work with the grid current to power the street lights.
• The battery is connected to the microcontroller through a potential divider and a voltage sensor. The potential divider helps to decrease the voltage to its required value, and the voltage sensor is used to display the voltage.
• An LCD is connected to the microcontroller to display the voltage values.
• Multiple LDR sensors are placed alongside the street lamps.
• During the day, the LED street lamps are OFF. During the night, when the electricity generated by the solar panel is below a certain threshold, the microcontroller gives a command to switch on all street lights in a dim condition. During this time the LDR sensors start detecting vehicles, and the street lamps change their state from the dim state to the bright state.
• The LED lamp stays in the bright state for a certain time and goes back to the dim state after some given delay.
• In order to switch between automatic and manual modes, a Windows Bluetooth application is used (Fig. 2). A simplified sketch of this control logic is given below.
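The following Python-style pseudocode sketches the control loop described in the implementation steps above. The actual firmware runs on an STM32, so this is only an illustration; the read_*/set_* functions and the daylight threshold and hold time are hypothetical placeholders, while the 5 V battery threshold and the 20% dim level are taken from the text.

import time

# Hardware-dependent placeholders: on the real board these would read ADC channels
# and drive the relay / motor-driver PWM (illustrative stubs only).
def read_battery_voltage(): return 12.0
def read_solar_voltage(): return 0.5
def object_detected(): return False
def set_relay(source): print("power source:", source)
def set_led_brightness(percent): print("brightness:", percent)

BATTERY_THRESHOLD_V = 5.0   # below this, switch to the main grid (from the methodology)
DAYLIGHT_THRESHOLD_V = 3.0  # assumed solar-panel voltage that indicates daytime
DIM_LEVEL = 20              # dim state: 20% of maximum brightness (from the text)
BRIGHT_LEVEL = 100          # bright state when a vehicle or pedestrian is detected
BRIGHT_HOLD_S = 10          # assumed hold time before returning to the dim state

def control_loop():
    while True:
        # Choose the power source: battery if sufficiently charged, else the main grid.
        set_relay('battery' if read_battery_voltage() >= BATTERY_THRESHOLD_V else 'grid')

        if read_solar_voltage() > DAYLIGHT_THRESHOLD_V:
            set_led_brightness(0)             # daytime: LEDs off
        elif object_detected():
            set_led_brightness(BRIGHT_LEVEL)  # vehicle/pedestrian detected: full brightness
            time.sleep(BRIGHT_HOLD_S)
        else:
            set_led_brightness(DIM_LEVEL)     # night, no object: stay dim
        time.sleep(0.1)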

3 Simulation and Results

3.1 Simulation

1. Alternating brightness of the LED: The proposed method alternates the LEDs between two states, dim and bright, as shown in Fig. 3, depending on the following condition: when an object is detected, the LEDs switch to the bright state, and when the object moves out of the sensor's vicinity, the LEDs go back to the dim state after a certain delay. In this case, the dim brightness is set to 20% of the LEDs' maximum brightness. This method of alternating between two states saves a lot of energy which would otherwise go to waste when there is no object in the vicinity of the sensors. In order to keep the streets illuminated even when there is no object, the LEDs are kept in a dim state, which provides enough illumination for visibility for safety reasons.

2. Controlling application: The system consists of a Bluetooth module which can be used to control the system remotely using an application, i.e., to turn the lights ON or OFF. There are two modes: (1) Manual Mode and (2) Automatic Mode. The manual mode is shown in Fig. 4, where the street lights can be controlled remotely, while in the automatic mode the STM32 decides to change the LED's brightness based on the input.


Fig. 2 Flowchart

3. Alternating supply of power to the system: The proposed system has a failsafe for when the battery runs out of charge; when the charge in the battery falls below a certain threshold (in this case 4 V), the system connects to the main power grid for energy, as shown in Fig. 5.


Fig. 3 LED at max. and min. brightness

Fig. 4 Manual mode

Fig. 5 LEDs connected to battery and mains

3.2 Results

1. When no object is detected: When no object obstructs or crosses the path of the first laser, the LDR sensor sends an input to the STM32 and the first LED stays in the dim state, as shown in Fig. 6.


Fig. 6 No object detected

Fig. 7 Object detected at first LED

2. When an object is detected at the first LDR sensor: Figure 7 shows the case when an object is detected at an LDR sensor. In this case the light connected to that particular sensor switches to a bright condition and stays in that state for some time before it switches back to a dim state. If both sensors detect movement, both the lights turn on and go back to a dim state after a particular time has passed.

3. When an object is detected at the second LDR sensor: In the same instance of the object being detected at the first sensor, the second LED switches to the bright state and goes back to the dim state after a certain delay, as shown in Fig. 8. The switching of the LED to a bright state is instant.

4 Conclusion and Future Work

In the proposed method, a wide variety of fairly inexpensive components are brought together to provide illumination. This method involved the use of LEDs with bright and dim states to save energy, and solar panels to make the system eco-friendly and to reduce the dependence on traditional power grids. Costs are cut here by using cheaper components, a simpler design, and a solar panel that heavily cuts down the periodic


Fig. 8 Object detected at second LED

costs of power due to a main grid. The proposed method tackles the problem of lack of illumination in today's streets in less developed countries. LED lamps are chosen due to their increased illumination and higher lifespan compared to traditional halogen lamps. The proposed method also keeps the LEDs in a dim state at night for security reasons. A future scope for the proposed system can involve the use of more efficient and higher-storage-capacity batteries. There is also scope for more efficient LEDs which consume less power. There is also a need for more efficient solar panels. This way, we can eliminate the dependence on the main power grid for backup. The proposed method can also include an automated self-cleaning apparatus for the solar panel.

References
1. Liu K, Makaran J (2009) Design of a solar powered battery charger. In: 2009 IEEE electrical power and energy conference (EPEC). IEEE, pp 1–5
2. Al-Mamun A, Sundaraj K, Ahmed N, Ahamed N, Rahman S, Ahmad R, Kabir MH (2013) Design and development of a low cost solar energy system for the rural area. In: 2013 IEEE conference on systems, process and control (ICSPC). IEEE, pp 31–35
3. El-Faouri FS, Sharaiha M, Bargouth D, Faza A (2016) A smart street lighting system using solar energy. In: 2016 IEEE PES innovative smart grid technologies conference Europe (ISGT-Europe). IEEE, pp 1–6
4. Garcia RB, Angulo GV, Gonzalez JR, Tavizón EF, Cardozo JIH (2014) LED street lighting as a strategy for climate change mitigation at local government level. In: IEEE global humanitarian technology conference (GHTC 2014). IEEE, pp 345–349
5. Kim D, Lee J, Jang Y, Cha J (2011) Smart LED lighting system implementation using human tracking US/IR sensor. In: ICTC 2011. IEEE, pp 290–293

Fine-Tuning of a BERT-Based Uncased Model for Unbalanced Text Classification Santosh Kumar Behera and Rajashree Dash

Abstract Unbalanced datasets make it hard for text classifiers to learn well. Having limited information in minority classes makes it difficult to classify the unbalanced texts. In this study, a BERT-based uncased model is developed and fine-tuned to address the unbalanced text classification problem. We use the BERT model with 12 layers and 110 M parameters in order to classify an unbalanced dataset, taking into account both the majority and minority groups. The model is being fine-tuned by varying the learning rate and the maximum token length. Precision, recall, and f-measure are used to measure the performance of the BERT-based uncased model to classify text. Keywords BERT · Precision · Recall · f-measure

1 Introduction

With the faster growth in information technology, emails, news articles, Web pages, and digital libraries are present in the form of a huge amount of electronic text documents. Text classification (TC) has become a crucial technology for the discovery and classification of text documents to manage such huge information. The text classification problem is to automatically label unlabeled text documents with one or more pre-defined categories based on the content of the documents [1]. It has been seen that the TC problem is successfully applied in many applications such as document categorization [2], spam detection [3], email classification [4] and so on. Documents are usually represented as vectors in the vector space in the process of text classification, where each word is represented as a feature. The features are generally represented as a term frequency score or a term frequency-inverse document frequency score. When a dataset is very large, manual text classification is time-consuming; therefore, automatic text classification methods have been increasingly used in


various applications [5]. After preprocessing, each individual token is converted into a vector using the tf-idf method. The term frequency-inverse document frequency (tf-idf) is used to check how relevant a term or feature is in a document within a corpus [6, 7]. Term frequency (tf) says how many times a term appears in a document, whereas inverse document frequency (idf) gives importance to rare terms which are able to differentiate the documents [8]. SVM and ULMFiT can be combined to get good results in text classification. ULMFiT and its fine-tuning technique are similar to a pre-trained BERT model [9]. In natural language processing (NLP), the BERT model has outperformed other models and is also helpful in optimizing search results [10, 11]. A multilingual-cased model of BERT is used for multilingual text classification in [12]. An array-based fine-tuning method is provided for fine-tuning different parameters of BERT for text classification, question answering, and sentiment classification. The pre-trained model bidirectional encoder representations from transformers (BERT) was developed by Google for natural language processing (NLP). Unlike other machine learning models, BERT is a deeply bidirectional model. Text in BERT can be represented in both directions, left to right or right to left. BERT is pre-trained on a large amount of unlabeled text corpora, including Wikipedia and a book corpus, which contain 2500 million and 800 million words, respectively. BERT mainly uses two training strategies. The first is masked language prediction (masked LM), in which a small portion of masked words is supplied to the BERT model for prediction [11]. The BERT model uses the context of the preceding and subsequent unmasked words to forecast the masked word. The second is next sentence prediction (NSP), in which at the beginning of each sentence a [CLS] token is inserted, and at the end of every sentence a [SEP] token is inserted. In NSP, two sentences s1 and s2 are input to BERT, and it is able to distinguish whether s1 is followed by s2 or not. BERT contains a stack of transformer architecture with 12 layers, which contains a large feedforward network of 768 hidden units and 12 attention heads. There are different parameters through which BERT can be fine-tuned, like the token length, the number of epochs, the training batch size, the testing batch size and, with the Adam optimizer, the values of the learning rate, β1 and β2. By fine-tuning these parameters, the performance of the model can be increased. In our experiment, we have taken the token length and learning rate parameters for fine-tuning. As there is limited work on BERT in unbalanced text classification, we have applied a BERT-based uncased model for our experiment. The major contributions of this paper are as follows: (1) fine-tuning the BERT-based uncased model on the basis of the maximum token length; (2) fine-tuning the BERT-based uncased model on the basis of the learning rate; (3) analyzing the BERT model performance under unbalanced text classification. The rest of the paper is as follows: Sect. 2 gives an overview of the related work using BERT and unbalanced class distribution, the data collection and preprocessing are covered in Sect. 3.1, and the detailed BERT model and its fine-tuning are presented in Sect. 3.3.
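As a concrete illustration of the tf-idf representation mentioned above, the following is a minimal sketch using scikit-learn; the paper does not name a library, and the toy sentences are invented for illustration only.

from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "oil prices rise on supply fears",          # toy documents, not from the Reuters corpus
    "company reports first quarter earnings",
    "earnings rise as oil prices fall",
]

vectorizer = TfidfVectorizer()                  # tf * idf weighting with default tokenization
X = vectorizer.fit_transform(docs)              # sparse document-term matrix
print(vectorizer.get_feature_names_out())       # the vocabulary (features)
print(X.toarray().round(2))                     # tf-idf weight of each term in each document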


2 BERT-Based Uncased Model for Unbalanced Text Classification

The BERT-based uncased model does not make any difference between lowercase, uppercase, or initcap text. Initially, the uncased model ignores all the “numbers,” “punctuation marks,” and “special characters.” All the uppercase letters are converted to lowercase letters. In the process of tokenization, the input raw text is converted into tokens using a vocabulary of 30,522 words. The WordPiece tokenizer helps to stem a word from its prefix as well as its suffix to get the root word, e.g., playing, played, and plays are all stemmed to the root word play. A snapshot of the raw text (Text[0]), the integer representation (input_ids[0]), and the attention mask (attention_mask[0]) is shown in Fig. 1. As shown in Fig. 2, BERT contains 12 transformer layers, an output dimension of 768, 12 multi-masked attention heads, and 110 million parameters. Sentences embedded with [CLS] and [SEP] tokens are given as input to the masked multi self-attention, which converts each word into a vector. After tokenization, the input is passed through the different layers of BERT. The BERT model contains 12 layers, each of which is termed a transformer. The architecture of the transformer is shown in Fig. 3. In the text and position embedding step, the tokens are passed to the token embeddings layer. The [CLS] and [SEP] tokens are added to the token sentence. These tokens play a key role in the classification process. The masked and multi-set attentions are used to batch sequences or inputs of the same length together. A snapshot of the attention mask is shown in Fig. 1. Layer normalization helps to normalize the distribution among different layers, which helps in smooth gradients, faster training, and better accuracy. After normalization, the output is fed to the feedforward layer. After proper weights are initialized, it is again transformed by layer normalization.

Text[4]: SEES 1ST QTR NET UP SIGNIFICANTLY Rogers Corp said first quarter earnings will be up significantly from earnings of 114,000 dlrs or four cts share for the same...
input_ids[4]: [[101, 1050, 1012, 1062, 1012, 6202, 2924, 12816, 3930, 9466, 3621, 2047, 3414, 1005, 1055, 6202, 2924, 12348, 2135, 4551, 1999, 2285, 1998, 2410, 1012, 2385, 4551, 1999, 2254, 3069, 1012, 102, 0, 0, …, 0]]
attention_mask[4]: [[1, 1, 1, …, 1, 0, 0, …, 0]]

Fig. 1 Tokenization
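The input_ids/attention_mask pairs shown in Fig. 1 can be reproduced with the Hugging Face transformers tokenizer. The paper does not name the library it used, so this is only an illustrative sketch, and the padding length of 80 is chosen for display rather than taken from the experiments.

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")   # 30,522-word WordPiece vocabulary

text = "SEES 1ST QTR NET UP SIGNIFICANTLY Rogers Corp said first quarter earnings will be up"
enc = tokenizer(text, padding="max_length", truncation=True, max_length=80)

print(enc["input_ids"])        # integer token ids, with [CLS]=101 and [SEP]=102 added
print(enc["attention_mask"])   # 1 for real tokens, 0 for padding positions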


Fig. 2 Basic BERT language model for NLP

3 Results and Analysis

3.1 The Data Collection

By collecting all the text documents, a corpus is formed with some pre-defined categories. In this paper, the standard Reuters dataset is used. The Reuters dataset is a multiclass (containing 90 classes) and multi-label (one document may lie in more than one category) dataset. It contains 7769 training documents and 3019 testing documents. It is the ModApte (R(90)) subset of the Reuters-21578 benchmark [13]. Table 1 presents the top and bottom five classes of the Reuters dataset with their numbers of training and testing documents, which is a clear indication of the unbalanced class distribution of the dataset.


Fig. 3 Transformer architecture

Table 1 Document distribution among the different classes of the Reuters dataset

Top 5 classes:
Class name    Number of training documents    Number of testing documents
Earn          2877                            1087
acq           1650                            719
Money-fx      538                             179
Grain         433                             149
Crude         389                             189

Bottom 5 classes:
Class name    Number of training documents    Number of testing documents
Sun-meal      1                               1
Lin oil       1                               1
Coconut-oil   1                               2
Rye           1                               1
nkr           1                               2

3.2 Evaluation Criteria The evaluation metrics precision (P), recall (R), f-measure (F), and Hamming loss (HL) are used to evaluate the performance of the BERT-based uncased model for text


classification [7]. Precision measures the fraction of the documents predicted for a class that are relevant; it is the ratio of true positives to the sum of true positives and false positives. Recall refers to the fraction of relevant documents that are correctly classified. The harmonic mean of precision and recall is called the f-measure; it shows the balance between precision and recall. The Hamming loss is used to measure the percentage of labels that are incorrectly predicted; it is the ratio of wrongly predicted labels to the total number of labels. The optimal value of the Hamming loss function is zero. The micro-averaged precision, recall, f-measure, and Hamming loss are used in our experiments.

P_micro = Σ_{i=1}^{C} tp_i / Σ_{i=1}^{C} (tp_i + fp_i)    (1)

R_micro = Σ_{i=1}^{C} tp_i / Σ_{i=1}^{C} (tp_i + fn_i)    (2)

F_micro = (2 · P_micro · R_micro) / (P_micro + R_micro)    (3)

HL = (1 / (N L)) Σ_{i=1}^{N} Σ_{j=1}^{L} XOR(target_ij, predicted_ij)    (4)
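In practice, these micro-averaged scores and the Hamming loss can be computed with scikit-learn. The following minimal sketch uses toy multi-label indicator matrices invented for illustration; it is not the authors' evaluation code.

import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score, hamming_loss

# Toy multi-label ground truth and predictions (rows = documents, columns = labels).
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0]])

print("P_micro:", precision_score(y_true, y_pred, average="micro"))
print("R_micro:", recall_score(y_true, y_pred, average="micro"))
print("F_micro:", f1_score(y_true, y_pred, average="micro"))
print("Hamming loss:", hamming_loss(y_true, y_pred))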

3.3 Analysis from the Outcomes

In our experiment, we fixed the training batch size to 32 and the testing batch size to 16. The number of epochs is fixed to 2. Two parameters, the maximum sequence length and the learning rate, have been varied for fine-tuning the BERT-based uncased model. Table 2 shows the precision, recall, f-measure, and Hamming loss obtained by varying the maximum sequence length (token length) with a fixed learning rate.

Table 2 Performance of the BERT-based uncased model with different “Maximum_sequence length”

Learning rate    Maximum_sequence length    Precision    Recall    f-measure    Hamming loss
1e−05            100                        0.92         0.61      0.73         0.006
1e−05            200                        0.94         0.59      0.73         0.006
1e−05            300                        0.93         0.59      0.72         0.006
1e−05            350                        0.92         0.60      0.72         0.006
1e−05            400                        0.93         0.61      0.74         0.006
1e−05            450                        0.92         0.57      0.71         0.006
1e−05            512                        0.93         0.60      0.73         0.006


Table 3 Performance of the BERT-based uncased model with differing “Learning rate”

Maximum_sequence length    Learning rate    Precision    Recall    f-measure    Hamming loss
400                        1e−05            0.93         0.61      0.74         0.006
400                        2e−05            0.93         0.63      0.75         0.006
400                        3e−05            0.92         0.63      0.75         0.006
400                        4e−05            1.00         0.25      0.40         0.010

It has been observed from the experiment that when the token length is 200, the highest precision value of 0.94 is achieved, but the recall and f-measure do not perform well. The highest recall value of 0.61 is achieved when the token length is 100 or 400. Similarly, the highest f-measure of 0.74 is achieved when the token length is 400. A Hamming loss of 0.006 is achieved, and it is constant across the different token lengths. So, by considering all the highest values of precision, recall, and f-measure, the token length of 400 has been fixed for the next level of fine-tuning. Table 3 shows the precision, recall, f-measure, and Hamming loss obtained by varying the learning rate with a fixed token length. The token length is fixed to 400 as per our analysis from Table 2. With a fixed token length of 400, we vary the learning rate from 1e−05 to 4e−05. The highest precision value of 1.00 is achieved when the learning rate is 4e−05, but the recall and f-measure decrease drastically to 0.25 and 0.40, respectively, and the Hamming loss also increases to 0.010. The next highest precision value of 0.93 is achieved with learning rates of 1e−05 and 2e−05. When we tune our model to a learning rate of 2e−05, we get this precision of 0.93 together with the highest recall of 0.63 and the highest f-measure of 0.75. We also get a constant Hamming loss of 0.006 with all learning rates except 4e−05, as per Table 3. So, from all the analysis, we achieve the highest values of precision, recall, and f-measure with the learning rate of 2e−05.
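A sketch of the final fine-tuning configuration (maximum token length 400, learning rate 2e−05, 90 Reuters classes, multi-label output) using the Hugging Face transformers and PyTorch APIs is given below. The paper does not state which framework it used, so the class names, the problem_type setting, and the encode helper are assumptions for illustration only.

from transformers import BertForSequenceClassification, BertTokenizer
from torch.optim import AdamW

NUM_LABELS = 90          # the Reuters ModApte (R(90)) subset has 90 classes

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=NUM_LABELS,
    problem_type="multi_label_classification",   # multi-label: sigmoid outputs with BCE loss
)

optimizer = AdamW(model.parameters(), lr=2e-5)   # best learning rate found in Table 3

def encode(texts):
    # Tokenize with the best maximum sequence length found in Table 2.
    return tokenizer(texts, padding="max_length", truncation=True,
                     max_length=400, return_tensors="pt")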

4 Conclusion and Future Work

In this research, a BERT-based uncased model is fine-tuned by varying the token length and the learning rate. From the analysis, it is clearly observed that by fine-tuning the maximum token length to 400 and the learning rate to 2e−05, the model is able to attain a precision of 0.93, a recall of 0.63, an f-measure of 0.75, and a Hamming loss of 0.006 for the unbalanced Reuters dataset. As future work, parameters like the training and validation sizes and the number of epochs, in all combinations with the maximum sequence length and learning parameters, will be explored for tuning with different threshold values.


References 1. Adeva JJ, Atxa JM (2007) Intrusion detection in web applications using text mining. Eng Appl Artif Intell 20(4):555–566 2. Jiang L, Li C, Wang S, Zhang L (2016) Deep feature weighting for Naive Bayes and its application to text classification. Eng Appl Artif Intell 1(52):26–39 3. Abu Hammad AS. An approach for detecting spam in Arabic opinion reviews 4. Nikhath AK, Subrahmanyam K, Vasavi R (2016) Building a k-nearest neighbor classifier for text categorization. Int J Comput Sci Inf Technol 7(1):254–256 5. Perikos I, Hatzilygeroudis I (2016) Recognizing emotions in text using ensemble of classifiers. Eng Appl Artif Intell 1(51):191–201 6. Harish BS, Revanasiddappa MB (2017) A comprehensive survey on various feature selection methods to categorize text documents. Int J Comput Appl 164(8):1–7 7. Behera SK, Dash R (2021) Performance of ELM using max-min document frequency-based feature selection in multilabeled text classification. In: Intelligent and cloud computing 2021. Springer, Singapore, pp 425–433 8. Sabbah T, Selamat A, Selamat MH, Al-Anzi FS, Viedma EH, Krejcar O, Fujita H (2017) Modified frequency-based term weighting schemes for text classification. Appl Soft Comput 58:193–206 9. Hepburn J (2018) Universal language model fine-tuning for patent classification. In: Proceedings of the Australasian language technology association workshop 2018 Dec, pp 93–96 10. Elmazoglu Z, Goker B, Bek ZA, Aktekin CN, Bitik B, Karasu C (2020) Standardized extract of Olea europaea leaves suppresses pro-inflammatory cytokines through regulating redox signaling pathways in human chondrocytes. Osteoarthritis Cartilage 1(28):S105–S106 11. Geetha MP, Renuka DK (2021) Improving the performance of aspect based sentiment analysis using fine-tuned Bert base uncased model. Int J Intell Netw 1(2):64–69 12. Mao J, Liu W (2019) Factuality classification using the pre-trained language representation model BERT. In: IberLEF@SEPLN 2019, pp 126–131 13. Thaoma M (2017) The reuters dataset. https://martin-thoma.com/nlp-reuters/

Segmentation of the Heart Images Using Deep Learning to Assess the Risk Level of Cardiovascular Diseases Shafqat Ul Ahsaan, Vinod Kumar, and Ashish Kumar Mourya

Abstract Deep learning techniques are used in the field of the health care sector for predictive analysis and disease diagnosis. This technique is applied widely for diagnosis and prediction. Today's lifestyle contributes to an epidemic of cardiovascular disease that causes morbidity and death. Evidence of congestive heart failure should be determined minutely and competently to ensure that the diagnosis is correct. In 2015, the United Nations Health Organization (UNHO) estimated that one-third of all deaths worldwide were caused by cardiovascular diseases. Scientists have been using several machine learning techniques to help the health care sector in the diagnosis of acute myocardial infarction. Deep learning, however, can reduce the amount of testing required. To reduce death rates from cardiovascular diseases, there must be a fast and competent detection technique. This study proposes a deep learning-based model for the diagnosis of coronary diseases. Using the proposed method, one can predict the health risk factors and the health risk level of an individual based on any medical dataset. Our proposed method has achieved the highest overall accuracy, identifying the risk level of patients efficiently. Keywords Cardiovascular diseases · Machine learning · Convolution neural network (CNN) · Healthcare · Electronic health records (EHR) · Classification · Medical imaging

1 Introduction

The health care requisites of the world's population are expected to go through remarkable switching as a result of the continuing demographic transition. Noncommunicable diseases, like cancer, depression, diabetes, and heart disease, are


expeditiously ousting epidemic diseases and malnourishment as the most important elements of disorder and early death. Heart diseases or cardiovascular diseases (CVD) are caused by the contraction or blocking of blood vessels, which can result in heart attacks, chest pain (angina), and strokes. A couple of additional conditions cause the heart's muscle and valves to become altered, which can have serious consequences for its normal functioning. Among the most basic causes that lead to heart failure are an increase in blood pressure, coronary problems, and an increase in sugar levels [1]. World Health Organization data shows that around 17.9 million people worldwide died of cardiovascular disease in 2019. The mortality rate because of cardiovascular diseases corresponds to 32% of total deaths overall. It is anticipated that the number of deaths that occur because of coronary heart disease reaches 7.4 million (13% of deaths overall), and the striking point is that 6.7 million deaths are because of stroke. Cardiovascular disease remains one of the greatest killers on the planet and one of the most common causes of death [2]. CVD is not a single illness; rather, it is a set of ailments and damages that influence the routine functioning of the cardiovascular system (the heart and its related blood vessels). Heart and blood vessel diseases linked to the brain and heart are the most prevalent types of CVDs. CVD mainly attacks people over the age of 40 years, even though, as per leading cardiologists, symptoms of heart diseases may be encountered after the age of 35 years. Deep convolutional nets have achieved remarkable breakthroughs in the processing of speech, audio, images, and video, while recurrent nets have shown breakthroughs in handling and processing sequential information, such as speech and text. Convolutional neural networks, or ConvNets, are intended to deal with data that comes in a composite form. It is believed that ConvNet's core concepts include sharing the weights, pooling the connections, and using many layers, which exploit the properties of natural signals. The structural design of a distinctive ConvNet is composed of several stages that form a chain-like structure. The primitive phase consists of two kinds of layers: convolutional layers and pooling layers. The component units of the convolutional layer are organized into feature maps, where every unit is linked to local patches of the feature maps of the earlier layers using a set of weights known as a filter bank. This locally weighted sum is then passed through a nonlinearity, such as a rectified linear unit (ReLU), and the outcome is then compared and analyzed. A single filter bank is used for all elements of a feature map, while different feature maps in a layer use different filter banks [3].
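To illustrate the ConvNet building blocks just described (convolution with a filter bank, ReLU nonlinearity, pooling, and stacked stages), the following is a minimal Keras sketch; the input size, filter counts, and layer sizes are illustrative assumptions and not the architecture used in this study.

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

cnn = Sequential([
    # Stage 1: filter bank + ReLU nonlinearity + pooling
    Conv2D(16, (3, 3), activation="relu", input_shape=(128, 128, 1)),
    MaxPooling2D((2, 2)),
    # Stage 2: a second convolution/pooling stage
    Conv2D(32, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation="relu"),
    Dense(1, activation="sigmoid"),     # binary output, e.g., disease risk present or not
])
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])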

2 Literature Review

There is a wide range of decision support systems, in addition to the conventional methods, that use computational techniques like fuzzy logic, neuro-fuzzy systems, machine learning, and artificial neural networks (ANN) to diagnose heart ailments. Polat et al. [4] contributed to the medical field and made use of fuzzy


weighted preprocessing and an artificial immune recognition system to analyze diseases related to the heart. Das et al. [5] detected heart-related ailments by developing software based on SAS. Başçiftçi et al. [6] analyzed sickness regarding the heart with the help of a web-based decision support system with the characteristic of minimization of Boolean functions. Bengio et al. [7] used CNNs that are trained end-to-end. LeNet (LeCun et al. [8]) and Krizhevsky et al. [9] set up convolutional neural network models having two to five convolutional layers and used kernels with large receptive fields in layers close to the input and smaller kernels nearer to the output. A fuzzy rule-based decision support system with weights was developed by Anooj [10]. Amiri and Armano [11] made use of classification and regression trees to classify heart sounds. In their study, Nahar et al. applied computational intelligence methods to detect heart diseases. Using association rule mining, Nahar et al. [12] investigated the heart diseases of men and women. Simonyan and Zisserman [13] explored deeper networks and used small, fixed-size kernels in all the layers; VGG19, a model consisting of 19 layers, won the 2014 challenge in the field of ImageNet. Szegedy et al. [14] presented a network consisting of 22 layers called GoogLeNet, which works on inception blocks [15], a unit that substitutes the mapping with a set of convolutions of diverse sizes. Bouktif et al. [16] combined different types of classifiers, like Bayesian classifiers, and applied optimization techniques like ant colony optimization for the detection of heart diseases and made predictions based on cardiography. Kim et al. [17] investigated heart-related diseases employing a prediction system that works on fuzzy logic. Hedeshi and Abadeh [18] classified diseases of the artery through a PSO algorithm together with a boosting technique. Shao et al. [19] classified heart diseases using a hybrid model that employs logistic regression and multivariate adaptive regression. Alsalamah et al. [20] evaluated data related to heart problems and presented a radial basis function network with a Gaussian-function classifier. Olaniyi et al. [21], with the combination of backpropagation and a feedforward neural network, introduced a model based on a multilayer neural network. Jabbar et al. [22] focused on the accuracy of the Naïve Bayes classification method and made a decent advancement in accuracy by using feature selection methods. Paul et al. [23] diagnosed heart diseases based on some critical attributes while introducing a DMS-PSO system. Ronneberger et al. [24] increased the image size by using the U-Net architecture, which consists of an upgraded version of a regular CNN. Paul et al. [25] employed genetic algorithms in combination with a fuzzy-based decision support system to detect risk levels of heart diseases. Using the adaptive boosting algorithm, Miao et al. [26] presented an ensemble of machine learning algorithms for diagnosing cardiac illnesses. According to Samuel et al. [27], ANN and fuzzy AHP can be used to aid in decision-making regarding the probability of cardiac failure. Uyar [28] diagnosed heart disease using genetic algorithm-based trained recurrent fuzzy neural networks (RFNN), while Sagir and Sathasivam [29] diagnosed cardiovascular disease using an ANFIS-based model.


3 Deep Learning

Deep learning algorithms also have the ability to learn through unsupervised learning in certain conditions, because most accessible imaging data sets are initially unrelated to the definite output of interest. Pretraining on a large collection of unlabeled data can sometimes have a positive impact on the performance of an algorithm as compared to using only a labeled data set that is significantly smaller in size [30]. Carrying out unsupervised feature generation using deep learning is attractive for a couple of reasons, like the proficient and flawless possible application to approximately any field and problem, although the major negative aspect is the interpretation of any newly exposed imaging feature. Deep learning-built features derived from imaging or non-imaging data files are usually unintuitive and complicated to figure out [6–9]. The goal of the research is to break through this limitation, and specifically to provide neural networks with the inspiration to notice the most triggering characteristics of an image [31]. It is feasible that the most fruitful methods to acquire knowledge from a huge amount of imaging data will combine deep learning characteristics with additional helpful information. Technologies that allow combining different sources that provide a vast amount of information can probably facilitate the gathering of all existing information linked to a certain medical image, with data related to image acquisition, text from the original and confirmatory cardiovascular imaging and radiological reading reports, and data from medical or clinical notes, corresponding diagnostics, and any extra records that are interrelated to the imaged patient. An analysis of images and allied data sources can lead to enhanced clustering, classification, and, in turn, accuracy in finding the physical composition [32]. The discovery of any unique imaging biomarker will still need further confirmation and comprehensive investigation before it is encountered in practice. Despite innovative advancements in neural networks, it is not an easy task to develop algorithms for a single deep learning network to interpret images from different types and sources of data in the same manner as humans can when performing a clinical or research evaluation. Soon, such multifarious and complex tasks may be carried out by a group of networks, where each network is allotted to a different type of data set [12, 13, 33]. Transfer learning is another striking option, where knowledge acquired from one task is applied to address another task, and this approach might prove fruitful for data sets with the same set of attributes or characteristics [34–36]. CNNs trained on a massive and arduously hand-labeled data set can be repurposed to perform relatively well on interrelated smaller-sized data sets, so that a model that is trained on one large original task can be leveraged to be successful for another task [1]. A CNN can be trained, for instance, to forecast 1-year cardiovascular death risks from massive datasets of echocardiograms. Later on, this same CNN can be repurposed for a different data set to analyze the probability of hospital readmission directly from ultrasound images, even if the new data set may only be one-twentieth or one-thirtieth the size of the original data set.


4 Results and Discussion

This research has used a dataset of various patients, which is available at the UCI repository website [37] through an open-access policy. This dataset contains about 14 columns with different parameters. The correlation plot of the dataset is given below; it also shows the significance levels along with the correlation values (Figs. 1 and 2). Additionally, the dataset has been normalized using the entire feature engineering pipeline. The authors have also used decision trees, random forest, and support vector machines to classify the processed data. In the proposed model, Adam serves as the optimizer function, while ReLU is used as the activation function. Additionally, the data has been trained with a batch size of 9 over 130 epochs.

Fig. 1 Correlation of dataset with the help of R studio


Fig. 2 Comparison of models (accuracy)

4.1 Program Code

from keras.models import Sequential
from keras.layers import Dense

# Feedforward network: 13 input attributes -> two hidden ReLU layers -> sigmoid risk output
classifier = Sequential()
classifier.add(Dense(activation="relu", input_dim=13, units=8, kernel_initializer="uniform"))
classifier.add(Dense(activation="relu", units=14, kernel_initializer="uniform"))
classifier.add(Dense(activation="sigmoid", units=1, kernel_initializer="uniform"))
classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
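Training with the batch size and epoch count stated in the results section would then look roughly like the following; X_train and y_train are hypothetical variable names for the prepared attributes and risk labels, not taken from the paper.

# Assumed training call: batch size 9 and 130 epochs, as stated in the text.
classifier.fit(X_train, y_train, batch_size=9, epochs=130)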

5 Conclusion Machine learning encompasses a set of algorithms known as neural networks, which are fundamental to deep learning. A high-level generalization of data processing is proposed with multiple layers stacked on top of each other, with alternately linear and nonlinear transformations. The emergence of deep neural networks, which consist of thousands of layers piled on top of each other, led to major progress in speech, image, and text processing. There are currently many different areas where deep learning is being used, including medical, defense, and manufacturing. Deep learning was used to predict and process DICOM images for analysis with segmentation. This analysis has also been compared to results obtained with three other approaches. The proposed model showed the highest accuracy in comparison to other models.


References 1. Henglin M, Stein G, Hushcha PV, Snoek J, Wiltschko AB, Cheng S (2017) Machine learning approaches in cardiovascular imaging. Circulation: Cardiovasc Imag 10(10):e005614 2. Xu S, Ilyas I, Little PJ, Li H, Kamato D, Zheng X, … Weng J (2021) Endothelial dysfunction in atherosclerotic cardiovascular diseases and beyond: from mechanism to pharmacotherapies. Pharmacol Rev 73(3):924–967 3. Alankar B, Yousf N, Ahsaan SU (2019) Predictive analytics for weather forecasting using back propagation and resilient back propagation neural. In: New paradigm in decision science and management: proceedings of ICDSM 2018, 1005, 99 4. Polat K, Güne¸s S, Tosun S (2006) Diagnosis of heart disease using artificial immune recognition system and fuzzy weighted pre-processing. Pattern Recogn 39(11):2186–2193 5. Das R, Turkoglu I, Sengur A (2009) Effective diagnosis of heart disease through neural networks ensembles. Expert Syst Appl 36(4):7675–7680 6. Ba¸sçiftçi F, ˙Incekara H (2011) Web based medical decision support system application of coronary heart disease diagnosis with Boolean functions minimization method. Expert Syst Appl 38(11):14037–14043 7. Bengio Y, Courville A, Vincent P (2013) Representation learning: a review and new perspectives. IEEE Trans Pattern Anal Mach Intell 35(8):1798–1828 8. LeCun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278–2324 9. Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems, pp 1097–1105 10. Anooj PK (2013) Implementing decision tree fuzzy rules in clinical decision support system after comparing with fuzzy based and neural network based systems. In: 2013 International conference on IT convergence and security (ICITCS). IEEE, pp 1–6 11. Amiri AM, Armano GI (2014) A decision support system to diagnose heart diseases in newborns. In: Proceedings of the 2014 3rd international conference on health science and biomedical systems (HSBS 2014) NANU, Florence, Italy, pp 22–24 12. Nahar J, Imam T, Tickle KS, Chen YPP (2013) Association rule mining to detect factors which contribute to heart disease in males and females. Expert Syst Appl 40(4):1086–1093 13. Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 14. Szegedy C, Reed S, Erhan D, Anguelov D, Ioffe S (2014) Scalable, high-quality object detection. arXiv preprint arXiv:1412.1441 15. Lin M, Chen Q, Yan S (2013) Network in network. arXiv preprint arXiv:1312.4400 16. Bouktif S, Fiaz A, Ouni A, Serhani MA (2018) Optimal deep learning LSTM model for electric load forecasting using feature selection and genetic algorithm: comparison with machine learning approaches. Energies 11(7):1636 17. Kim S, Kim D, Cho SW, Kim J, Kim JS (2014) Highly efficient RNA-guided genome editing in human cells via delivery of purified Cas9 ribonucleoproteins. Genome Res 24(6):1012–1019 18. Hedeshi N, Saniee Abadeh M (2014) Coronary artery disease detection using a fuzzy-boosting PSO approach. Comput Intell Neurosci 19. Shao YE, Hou CD, Chiu CC (2014) Hybrid intelligent modeling schemes for heart disease classification. Appl Soft Comput 14:47–52 20. Mourya AK, Singhal N (2014) Managing congestion control in mobile ad-hoc network using mobile agents. arXiv preprint arXiv:1401.4844 21. Olaniyi EO, Oyedotun OK, Adnan K (2015) Heart diseases diagnosis using neural networks arbitration. 
Int J Intell Syst Appl 7(12):72 22. Jabbar MA, Deekshatulu BL, Chandra P (2016) Intelligent heart disease prediction system using random forest and evolutionary approach. J Netw Innov Comput 4(2016):175–184 23. Mourya AK, Alankar B, Kaur H (2021) Blockchain technology and its implementation challenges with IoT for healthcare industries. In: Advances in intelligent computing and communication. Springer, Singapore, pp 221–229

392

S. U. Ahsaan et al.

24. Ronneberger O, Fischer P, Brox T (2015) U-net: convolutional networks for biomedical image segmentation. In: International conference on medical image computing and computer-assisted intervention. Springer, Cham, pp 234–241 25. Kumar P, Bhatnagar A, Jameel R, Mourya AK (2021) Machine learning algorithms for breast cancer detection and prediction. In: Das S, Mohanty MN (eds) Advances in intelligent computing and communication. Lecture notes in networks and systems, vol 202. Springer, Singapore. https://doi.org/10.1007/978-981-16-0695-3_14 26. Miao KH, Miao JH, Miao GJ (2016) Diagnosing coronary heart disease using ensemble machine learning. Int J Adv Comput Sci Appl 7(10):1–12 27. Mourya AK, Kaur H, Uddin M (2020) A novel approach to heterogeneous multi-class SVM classification. In: New paradigm in decision science and management. Springer, Singapore, pp 39–47 28. Uyar K, ˙Ilhan A (2017) Diagnosis of heart disease using genetic algorithm based trained recurrent fuzzy neural networks. Procedia Comput Sci 120:588–593 29. Sagir AM, Sathasivam S (2017) A novel adaptive neuro fuzzy inference system based classification model for heart disease prediction. Pertanika J Sci Technol 25(1) ˇ 30. Mikolov T, Deoras A, Povey D, Burget L, Cernocký J (2011) Strategies for training large scale neural network language models. In: 2011 IEEE workshop on automatic speech recognition and understanding (ASRU). IEEE, pp 196–201 31. Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg AC (2015) Imagenet large scale visual recognition challenge. Int J Comput Vision 115(3):211–252 32. Oakden-Rayner L, Carneiro G, Bessen T, Nascimento JC, Bradley AP, Palmer LJ (2017) Precision radiology: predicting longevity using feature engineering and deep learning methods in a radiomics framework. Sci Rep 7(1):1648 33. Nahar J, Imam T, Tickle KS, Chen YPP (2013) Computational intelligence for heart disease diagnosis: a medical knowledge driven approach. Expert Syst Appl 40(1):96–104 34. Raina R, Battle A, Lee H, Packer B, Ng AY (2007) Self-taught learning: transfer learning from unlabeled data. In: Proceedings of the 24th international conference on Machine learning. ACM, pp 759–766 35. Anavi Y, Kogan I, Gelbart E, Geva O, Greenspan H (2016) Visualizing and enhancing a deep learning framework using patients age and gender for chest x-ray image retrieval. In: Medical imaging 2016: computer-aided diagnosis, vol 9785, p 978510. International Society for Optics and Photonics 36. Angermueller C, Pärnamaa T, Parts L, Stegle O (2016) Deep learning for computational biology. Mol Syst Biol 12(7):878 37. Blake C, Keogh E, Merz CJ (1998) UCI repository of machine learning databases

Integrative System of Remote Accessing Without Internet Through SMS L. Sujihelen, M. Dinesh, Sai Shiva Shankar, S. Jancy, M. D. Antopraveena, and G. Nagarajan

Abstract A mobile phone is a device that is used to make and receive calls from anywhere in the world. One of the major problems with these phones is losing them. The proposed system focuses on the problem of finding a lost cell phone. Observations compiled from existing mobile applications show that most applications for keeping track of lost phones focus on three functions, namely, client–server communication, recovery of SIM card data and the activation event. The major drawback of these existing applications is that they require the Internet, which may not be available at all times. The proposed system therefore develops an application that helps to track a lost mobile phone without the help of the Internet, with little communication and little memory storage. Keywords Mobile tracking · Application · Internet · Confidential information · Profile migration · Communication · Contact retrieving

1 Introduction Mobile phones play a very important role in today's world; they are used by more than 90% of people worldwide [1]. Many people have become so dependent on their mobiles that they hardly complete any work without them. A mobile phone stores a lot of confidential information, including personal and sensitive data that should be kept secure. Mobile theft is a major issue faced these days [2, 3], and the sensitive information on a stolen phone must be protected from misuse. There are existing applications that help track a device using the Internet [4]. As the Internet may not be available at all times, this is an important drawback that needs to be addressed [5]. The proposed system overcomes this issue by tracking the mobile without the help of the Internet. The main objectives are to track the phone when it is lost, automatically lock the phone, and convert silent L. Sujihelen (B) · M. Dinesh · S. S. Shankar · S. Jancy · M. D. Antopraveena · G. Nagarajan Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_42


mode to general mode and finally make a call by fetching the contact number from remote mobile all through SMS without the use of the Internet [6, 7].

2 Related Work Several researchers have proposed applications for finding a lost mobile using the Internet; the drawbacks of these existing applications are discussed in this section. The work in [8] implements an application prototype as a proof of concept for the designed architecture. Observations collected from existing mobile applications show that most applications for keeping track of lost phones focus on three functions, namely, client–server communication, SIM card data recovery and the activation event [9]. One important characteristic of a communication system is finding a person's position, which is done via the Internet or via mobile devices. The ability to locate the receiver regardless of the commercial mobile network available is required in an emergency. In some cases, mobile networks may be suspended due to emergency conditions, and they may also not be available in remote areas where extreme sports or explorations take place. During an emergency evacuation, tracking the location of the receiver can be very helpful for the rescue team to locate the victim. One study analyses the architecture of the OpenBTS system and the AirProbe GSM receiver and proposes a way to achieve convenient positioning for mobile phones with these open-source platforms; the goal is to build a cost-effective prototype capable of detecting the position of the device more precisely [10–12]. The rest of this paper is organized as follows: Sect. 1 gives the introduction, Sect. 2 the related work, Sect. 3 the proposed work, Sect. 4 the results and discussion and Sect. 5 the conclusion [13, 14].

3 Proposed System The proposed system detects a stolen mobile without the Internet. By sending an SMS with a contact name to the lost phone, the user receives the corresponding number back as an SMS so that a call can be made. We also deploy automatic profile changing from silent to general mode when the phone cannot be found, and the screen can be locked immediately from a remote place through SMS (Fig. 1).

3.1 Android Application The mobile application is created using Android Studio. The first page contains the user registration process. The application is built with specific tools that are used to create Android


Fig. 1 Architecture of the proposed system

applications. Once completed, the mobile application is packaged as an Android Package Kit (APK) file. This APK is installed on the user's mobile phone.

3.2 Fetch the Contact In this module, if the user misplaces the phone or leaves it at home and wants a particular number, they can send an SMS specifying the contact person's name, with the proper spelling, to the misplaced phone. The application installed on the lost phone automatically scans every SMS received on that phone. Once it receives the keyword to fetch a contact along with the name, it automatically searches the contacts and sends the required number back to the user.

3.3 Profile Migration In this module, we implement the concept of profile migration, which automatically switches the profile mode of the phone, for example from silent to normal (and back from normal to silent), on request.


3.4 Location Tracking This module describes location tracking of the mobile. When the user loses the phone or someone steals it, this application can obtain the location of the phone without the Internet. The user sends an SMS to that mobile through the registered alternate mobile number, and the application then replies with the location of that mobile after reading the keyword that was sent.

3.5 Lock the Phone In this module, we implement phone locking by sending an SMS to the mobile. People keep a lot of sensitive information and applications on their mobiles, and if a mobile is stolen other people can misuse it. To protect it, the user sends the locking keyword as an SMS to the mobile, and the application reads the SMS and locks the mobile.
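The SMS command handling behind Sects. 3.2–3.5 can be sketched as follows. This is a minimal, illustrative Python sketch rather than the Android implementation: the keyword names, the contact book and the reply, profile and lock hooks are all assumptions made for the example.

```python
# Illustrative sketch (not the Android implementation) of parsing an incoming
# SMS into the commands described in Sects. 3.2-3.5. Keywords, the contact
# book and the reply text are assumed values for demonstration only.
CONTACTS = {"ravi": "+91-98400-00000", "amma": "+91-98400-11111"}

def handle_sms(sender: str, body: str) -> str:
    """Return the action taken for one incoming SMS from the registered number."""
    parts = body.strip().split(maxsplit=1)
    keyword = parts[0].upper() if parts else ""
    arg = parts[1].strip().lower() if len(parts) > 1 else ""

    if keyword == "FETCH":                 # Sect. 3.2: reply with a contact number
        number = CONTACTS.get(arg)
        return f"reply to {sender}: {number}" if number else "reply: contact not found"
    if keyword == "RING":                  # Sect. 3.3: switch silent profile to general
        return "profile switched to general and ringer sounded"
    if keyword == "LOCATE":                # Sect. 3.4: send the last known location
        return f"reply to {sender}: last known location 13.0827N, 80.2707E"
    if keyword == "LOCK":                  # Sect. 3.5: lock the screen
        return "device locked"
    return "ignored"                       # not a tracking command

print(handle_sms("+91-90000-22222", "FETCH ravi"))
```

In the real application, the same dispatch logic would run inside an SMS receiver and trigger the corresponding Android APIs for contacts, audio profile, location and device locking.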

4 Results and Discussion The proposed system detects the stolen mobile quickly. Figures 2 and 3 show the two-way security measures on the mobile. Figure 4 shows the detection accuracy of the proposed system.

5 Conclusion In this work, we deploy an integrative approach to track the phone when it is stolen, automatically lock the phone, convert silent mode to general mode and finally make a call by fetching the contact number from the remote phone, all through SMS and without using the Internet.

Fig. 2 Sample output 1

Fig. 3 Sample output


Fig. 4 Detection accuracy

References 1. Martinez JJL, Widjaya N (2012) Architecture for lost mobile tracker application. Int J e-Educ e-Bus e-Manag e-Learn 2(3):197–201 2. Bhatia S, Hilal S (2012) Determination of mobile phone tracking using various software. Int J Comp Appl:0975–8887 3. Arigela LS, Veerendra PA, Anvesh S, Hanuman KSS (2013)Mobile phone tracking and positioning. Int J Inno Res Sci Eng Technol 2(4) 4. Hunag J-S, Cheng R, Harwahyu R (2015) Study of low-cost mobile phone tracking system. In: International symposium on next generation electronics (ISNE) 5. Adenubi OS, Opeoluwa AS, Moriyike LA (2012) Mobile phones and adult education in Nigeria. Int Inst Sci Technol Educ (IISTE) 8:1–6 6. Shan Khan AU, Qureshi MN, Abdul Qadeer M (2012) Anti-theft application for android based devices. In: IEEE International advance computing conference (IACC) 7. Kinage R, Kumari J, Zalke P, Kulkarni M (2013) Mobile tracking application. Int J Inno Res Sci Eng Technol 2(3):2319–8753 8. Sujihelen L, Jayakumar C, Senthilsingh C (2018) SEC approach for detecting node replication attacks in static wireless sensor networks. J Electr Eng Technol 13(6):2447–2455 9. Kiran PM, Reddy PN, SujiHelen L (2020) Malware detection in smartphone using SVM. In: 2020 4th International conference on trends in electronics and informatics (ICOEI) (48184), pp 344–347. https://doi.org/10.1109/ICOEI48184.2020.9142880 10. Nagarajan G, Minu RI (2015) Fuzzy ontology based multi-modal semantic information retrieval. Procedia Comp Sci 48:101–106 11. Nagarajan G, Minu RI (2018) Wireless soil monitoring sensor for sprinkler irrigation automation system. Wirel Pers Commun 98(2):1835–1851 12. Nagarajan G, Minu RI, Jayanthila Devi A (2020) Optimal nonparametric Bayesian model-based multimodal BoVW creation using multilayer pLSA. Circ Syst Sig Process 39(2):1123–1132 13. Nagarajan G, Minu RI, Jayanthiladevi A (2019) Brain computer interface for smart hardware device. Int J RF Technol 10(3–4):131–139 14. Simpson SV, Nagarajan G (2021) A fuzzy based co-operative blackmailing attack detection scheme for edge computing nodes in MANET-IOT environment. Future Gener Comput Syst 125:544–563

Detect Fire in Uncertain Environment using Convolutional Neural Network L. K. Joshila Grace, P. Asha, J. Refonaa, S. L. Jany Shabu, and A. Viji Amutha Mary

Abstract The tactile Internet can realize intelligence through mobile edge computing and data transfer over 5G networks, thereby integrating more technologies. More recently, many methods based on convolutional neural networks (CNNs) designed for cutting-edge platforms have been used to detect a fire with the right accuracy and response time. However, these methods cannot identify a fire in an uncertain Internet of things (IoT) zone with smoke, fog and snow. Furthermore, on constrained hardware it is a challenge to obtain good accuracy while shortening the running time and reducing the model size. Hence, in this article, we propose an effective CNN-based fire detection framework for videos taken in uncertain surveillance conditions. Our technique uses a deep lightweight neural network without fully connected layers, which keeps its computation costs low. Experiments were performed on a set of fire data, and the results show that our method performs better than the most recent techniques. We believe it is well suited to fire detection in surveillance and embedded vision applications in an uncertain IoT environment. Keywords Neural networks · Lightweight neural network · CNN · IoT · Fire detection

1 Introduction The tactile Internet can realize intelligence through mobile edge computing and data transmission over 5G networks, thereby integrating more technologies [1, 2]. Recently, many convolutional neural network (CNN) methods using edge intelligence have been used to detect fires in a region accurately and in good time. However, these methods fail to detect fire under smoke, fog and snow in uncertain IoT zones. Moreover, for resource-constrained equipment, it is L. K. Joshila Grace (B) · P. Asha · J. Refonaa · S. L. Jany Shabu · A. Viji Amutha Mary Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_43


also a challenge to obtain good accuracy while reducing the running time and the model size.

2 Existing System To find the fire source, researchers have developed the two most common fire source detection strategies. In the literature, conventional techniques use colour or motion cues to detect flames; for example, by testing different colour models such as HSI, YUV, YCbCr and RGB, the colour characteristics of fire are identified. The problem most closely associated with these techniques is their high rate of false alarms [3]. Several attempts have been made to address this problem by combining fuzzy methods with motion, flame analysis and colour variation.

3 Proposed System We propose a CNN-based framework for fire detection in videos from hazardous surveillance conditions. Our method uses a low-complexity neural network without a fully connected layer, which makes it computationally cheaper [4, 5]. Tested on a fire data set, the results show that our method performs better than current techniques.
Algorithm Used:
• CNN: Convolutional Neural Network
Architecture Diagram: The flow of the entire framework is shown in the diagram below. The process begins with the webcam videos. These videos are streamed into the convolutional neural network, which is the algorithm used in our framework. The MobileNet architecture is used as the next layer. Finally, classification of the images is performed in order to detect a fire occurring in a location (Fig. 1).

4 Work Flow
• CNN-based fire detection
• Details of the proposed architecture for fire detection
• Motivations of using MobileNet for fire detection


Fig. 1 Architecture diagram

CNN-based Fire Detection: When a fire is to be recognized, the structure of the CNN is often altered so that the last fully connected layer has two outputs, namely fire and non-fire [6, 7]. The CNN takes the prepared input and output data as expected; during training, the weights of a large number of neurons are adjusted so that the network learns to separate fire from non-fire and can then be deployed [8].
Details of the Proposed Architecture for Fire Detection: The MobileNet (v2) model is ahead of other models such as AlexNet, GoogLeNet and SqueezeNet. Therefore, we adopt the same design as MobileNet and modify it according to the problem of finding a fire in an unattended monitoring area [9]. For this reason, the number of neurons in the last layer of the structure is set to 2 instead of 1000, so that frames can be classified as fire or non-fire.
Motivations of using MobileNet for Fire Detection: MobileNet is used because it performs well on memory- and bandwidth-constrained hardware [10, 11]. The detection result is passed to the user through the cloud so that an alert can be raised [12–14] (Figs. 2, 3, and 4).
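The two-class head described above can be made concrete with a short sketch. The following is a minimal example, assuming a TensorFlow/Keras environment, an ImageNet-pretrained MobileNetV2 backbone, and an illustrative image size and directory layout; it is not the authors' exact training configuration.

```python
# Minimal sketch of a two-class fire / non-fire classifier on MobileNetV2.
# Image size, optimizer, epochs and data directory are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

# MobileNetV2 backbone pretrained on ImageNet, without its 1000-way head
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the backbone; fine-tune later if needed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),         # no large fully connected layer
    layers.Dense(2, activation="softmax"),   # fire vs. non-fire
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Assumed directory layout: data/train/fire and data/train/non_fire
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=5)
```

Freezing the backbone and training only the small two-way head keeps the computation cost low, in line with the lightweight design discussed above.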


Fig. 2 Fire detection

Fig. 3 Firebase console

5 Conclusion In this work, we provide a lightweight neural network ("FireNet") built from scratch and trained on very different data sets. The ultimate objective of the integrated project is to reduce false alarms, delays in alerting firefighters and related issues. The presented neural network can run well on low-cost embedded devices like the Raspberry


Fig. 4 Notification

Pi 3B at a frame rate of 24 frames per second. The model was evaluated on standard fire databases and customized test data sets (test images with genuine flames and non-fire scenes, with picture quality similar to that of a Raspberry Pi camera) in terms of accuracy, false alarms and real-time response, and it raises alerts in the event of a fire emergency. In future work, we intend to improve the performance of the model on the different data sets.

References 1. Maier M, Chowdhury M, Rimal BP, Van DP (2016) The tactile internet: vision, recent progress, and open challenges. IEEE Commun Mag 54(5):138–145 2. Simsek M, Aijaz A, Dohler M, Sachs J, Fettweis G (2016) 5G-enabled tactile internet. IEEE J Sel Areas Commun 34(4):460–473


3. Muhammad K, Hamza R, Ahmad J, Lloret J, Wang HHG, Baik SW (2018) Secure surveillance framework for IoT systems using probabilistic image encryption. IEEE Trans Ind Inform 14(8):3679–3689 4. Pogkas N, Karastergios GE, Antonopoulos CP, Koubias S, Papadopoulos G (2007) Architecture design and implementation of an ad-hoc network for disaster relief operations. IEEE Trans Ind Inform 3(1):63–72 5. Filonenko A, Hernández DC, Jo K-H (2018) Fast smoke detection for video surveillance using CUDA. IEEE Trans Ind Inform 14(4):725–733 6. Puvvadi UL, Di Benedetto K, Patil A, Kang K-D, Park Y (2015) Cost effective security support in real-time video surveillance. IEEE Trans Ind Inform 11(6):1457–1465 7. Mumtaz S, Bo A, Al-Dulaimi A, Tsang K-F (2018) 5G and beyond mobile technologies and applications for industrial IoT (IIoT). IEEE Trans Ind Inform 14(6):2588–2591 8. Dhanalakshmi A, Nagarajan G (2020) Convolutional neural network-based deblocking filter for SHVC in H. 265. SIViP 14:1635–1645 9. Guha-Sapir D, Hoyois P (2015) Estimating populations affected by disasters: a review of methodological issues and research gaps. Centre Res Epidemiol Disast. Inst. Health Soc., Univ. Catholique de Louvain, Brussels, Belgium 10. Foggia P, Saggese A, Vento M (2015) Real-time fire detection for video surveillance applications using a combination of experts based on color, shape, and motion. IEEE Trans Circuits Syst Video Technol 25(9):1545–1556 11. Töreyin BU, Dede˘oglu Y, Güdükbay U, Cetin AE (2006) Computer vision based method for real-time fire and flame detection. Pattern Recognit Lett 27:49–58 12. Nagarajan G, Minu RI, Jayanthiladevi A (2019) Brain computer interface for smart hardware device. Int J RF Technol 10(3–4):131–139 13. Nirmalraj S, Nagarajan G (2019) An adaptive fusion of infrared and visible image based on learning of sparse fuzzy cognitive maps on compressive sensing. J Ambient Intell Humanized Comput: 1–11 14. Nirmalraj S, Nagarajan G (2021) Fusion of visible and infrared image via compressive sensing using convolutional sparse representation. ICT Express 7(3):350–354

A Joint Optimization Approach for Security and Insurance Management on the Cloud L. K. Joshila Grace, S. Vigneshwari, R. Sathya Bama Krishna, B. Ankayarkanni, and A. Mary Posonia

Abstract Cloud computing is a technology that allows access to various applications and data generated anywhere in the world, from any device, whether a handheld mobile device or a desktop, that can access the Internet. Energy-efficient deployment brings savings in the cost of implementation and in accessing the resources needed to develop customized industrial applications. Corporate sectors keep close track of security in the cloud, making it even more secure for end users. Security-as-a-Service (SECaaS), one of the paradigms of cloud computing, allows users who access the cloud to outsource the task of applying the necessary security to their information in the cloud in exchange for a subscription fee. However, no implemented security system is impenetrable, so an attacker can still compromise the system, and no one is then responsible for the loss of data. To compensate for such losses, users can insure their data; in case an attack occurs, the loss can be recovered through the subscription made on the website. Keywords Security-as-a-Service · SECaaS · Intrusion detection system · Subscription management process · Insurance management process

1 Introduction Computing services are increasingly cloud-based, and resources are invested in cloud-based security measures. Security-as-a-Service (SECaaS) permits clients to provision security for the cloud through a subscription charge. However, no security framework is impenetrable, so one successful attack can bring about the loss of data and revenue worth millions of dollars. To compensate for the loss, clients may also buy cyber insurance [1]. Our approach uses a L. K. Joshila Grace (B) · S. Vigneshwari · R. Sathya Bama Krishna · B. Ankayarkanni · A. Mary Posonia Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_44


stochastic optimization model to optimally provision security and insurance services in the cloud. The exact estimation of the damage caused by cyber attacks is one of the key challenges in cyber insurance. The planned model is a mixed-integer problem; we also present a partial Lagrange multiplier algorithm that exploits the total unimodularity property to relax the solution in polynomial time. We consider a client system hosted on a cloud service such as Amazon. Applications receive data packets according to their working purpose, for example, email data or financial transactions [2]. Legitimate packets are called safe packets, while packets used in cyber attacks are called dangerous packets. Dangerous packets are regarded as handled if they are correctly recognized by security services, or unhandled if they are not effectively processed. These unhandled packets cause damage, which results in costs to the client; the client claims the amount from the insurance firm, and the insurance management process (IMP) then refunds the exact data cost to the client. As with all security models, it is difficult to guarantee mitigation of every imaginable threat. Unexpected threats can bring about losses of data and expensive costs due to those losses. For instance, the estimated cost of the Sony PlayStation Network outage in April 2011 caused by an intrusion [3] was $171 million. To manage unforeseen losses, organizations have invested in cyber insurance by transferring the risks of cyber threats to an insurance company (a cyber insurer) for an amount of insurance coverage [4]. AIG [5] and Chubb [6] are examples of cyber insurers. Cyber insurance policies can cover first-party risks, e.g., the insured's data corrupted by malware, and third-party risks, e.g., customers' credit card data leaked from the insured's applications [7]. Significant difficulties of security service allocation and insurance management for a client (i.e., the insured) are due to uncertainties, for example, (1) unpredicted demand for security service usage, (2) uncertain prices of security services and insurance premiums, and (3) the random number of threats unhandled by the allocated security services. Without a priori information about such uncertainties, inefficient allocation of security services and poor management of cyber insurance can happen, bringing about overheads, wastage, and overprovisioning or underprovisioning of SECaaS services and insurance policies. To address these issues, we propose a framework for the SECaaS client to manage the allocation of security services and cyber insurance. Specifically, this framework applies solutions obtained from solving an optimization model derived by stochastic programming with three-stage recourse [8]. The goal is to minimize the total cost of buying security services and a cyber insurance policy, subject to the constraint of meeting the uncertain security demand. We use simulation to evaluate the proposed framework and compare it with baseline models, e.g., a mean value problem. The simulation results clearly show that our proposed optimization model can greatly reduce the total cost incurred by the client, e.g., up to 8–10% compared with the mean value problem. In addition, the results show that our proposed framework can achieve a solution close to the ideal optimal solution obtained from the optimization model with perfect information about the uncertainties, for


example, with only a 1–3.76% difference. The proposed framework will be valuable for chief security officers (CSOs) to make plans and to allocate the budget for security service and policy deployment and execution in their organizations. This work deals with a joint approach to security and cyber insurance provisioning in the cloud. Using stochastic optimization, we present a strategy for optimally provisioning the two services despite uncertainty regarding future pricing, incoming traffic and cyber attacks. Hence, an application may prepare for attacks by provisioning security services from providers such as Avast and Trend Micro. These services may take various forms, for example, secure data storage, identity and access management (IAM), and intrusion detection services for incoming traffic. Insurance is then used to give explicit cover if malicious activity leads to financial loss. Insurance coverage may be first- or third-party, covering, for example, theft of money and digital assets, business interruption, cyber extortion, security breaches and loss of third-party data.
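To make the flavour of such a joint provisioning decision concrete, the toy sketch below enumerates combinations of security capacity and insurance coverage and picks the pair with the lowest expected cost over a few demand scenarios. All prices, scenario probabilities and the brute-force search are illustrative assumptions; this is not the paper's stochastic programming model or its Lagrange multiplier algorithm.

```python
# Toy illustration of choosing security-service capacity and insurance
# coverage to minimize expected cost under uncertain attack volume.
# All numbers are made-up assumptions for demonstration only.
import itertools

scenarios = [(100, 0.5), (300, 0.3), (600, 0.2)]  # (dangerous packets, probability)
unit_price = 2.0          # cost per unit of security capacity (handles 1 packet)
premium_per_cover = 0.5   # premium per unit of purchased coverage
damage_per_packet = 10.0  # loss caused by each unhandled packet

best = None
for capacity, coverage in itertools.product(range(0, 601, 50), range(0, 6001, 500)):
    expected = capacity * unit_price + coverage * premium_per_cover
    for packets, prob in scenarios:
        unhandled = max(0, packets - capacity)        # packets the service misses
        loss = unhandled * damage_per_packet
        expected += prob * max(0.0, loss - coverage)  # insurer reimburses up to coverage
    if best is None or expected < best[0]:
        best = (expected, capacity, coverage)

print("expected cost %.1f with capacity %d and coverage %d" % best)
```

A real deployment would replace this enumeration with the mixed-integer stochastic program discussed above.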

2 Related Work Various approaches to the allocation of security services have been proposed in the literature. Without considering uncertainties and optimization, a framework adopting security benchmarks was developed in [9] for choosing a trusted cloud provider offering services that meet the client's security requirements. Security-aware scheduling algorithms that consider security services for allocating secured resources to jobs were developed in [10] and [11]. Without considering uncertainties, a joint security- and budget-aware workflow scheduling algorithm was proposed in [12] for reducing the makespan of workflow execution while the required security services are assigned to the workflow. Security software deployment using stochastic programming with two-stage recourse was proposed in [13] for deploying security software in a multihop wireless network while the uncertainty of attacks is considered. However, these works did not consider cyber insurance. The allocation of security services may mitigate some threats but cannot keep all threats from attacking


information systems. Consequently, we recommend that security service allocation should be done jointly with cyber insurance management, since the insurance can cover losses and liabilities from unhandled threats [4]. In [3], game theory was applied to analyse the effect of cyber insurance on Internet security, revealing that insurance can be a risk management tool and can enhance the security of the Internet. In [7], a risk management framework was proposed for choosing fitting cyber insurance policies. The framework presents an insurance selection procedure comprising four steps, namely information security risk review, current insurance coverage appraisal, insurance policy assessment and insurance policy selection. However, the framework did not guarantee an optimal choice when purchasing the insurance policies. To the best of our knowledge, the stochastic optimization problem that jointly minimizes the costs of both cloud-based security service allocation and cyber insurance management has not been studied in the literature. There are two aspects to the framework model proposed in this paper: one is security service assignment, and the second is cyber insurance provisioning. Recent work on security, in the paper titled "Will Cyber Insurance Improve Network Security?", has shown that solutions aimed only at the detection and elimination of security threats are unlikely to result in a robust Internet. As a complementary approach to mitigating security issues, some have pursued the use of cyber insurance as a fitting risk management strategy. Such an approach can potentially align the incentives of security vendors, cyber insurers, cloud providers and network users, progressively paving the way for comprehensive and robust cyber security mechanisms. To this end, that work was motivated by the following significant question [14]: can cyber insurance really improve the security of a network? To address this question, a market-based methodology is adopted, which also stresses the need for designing mechanisms that incentivize the insurer to remain a permanent part of the market. With the deployment of cloud platforms in mobile infrastructure, the paper titled "Security in Mobile Cloud Computing" by Prashant Pranav notes that the storage of mass data by consumers has become simpler. IT industries also exploit the benefits of cloud computing by developing more and more smartphones that take advantage of cloud features. Since the use of smartphones is expanding quickly, the difficulty of securing cloud computing in a mobile computing environment has risen, along with the most significant challenges in this respect. Security in mobile cloud computing is regularly addressed at three levels, viz. the mobile terminal, mobile network security and cloud storage. Although numerous attempts have been made to build a model that guarantees the privacy and security of data in a mobile cloud framework, no model is free from malicious attacks. That survey concentrated on a few models aimed at providing security and privacy of data in the mobile cloud. A comparative study of different cloud services, cloud security issues and cloud providers helps in picking the correct cloud service. Cloud computing is the fastest-rising field in computing. In that study, the cloud


services along with the cloud security issues have been analysed [15]. Likewise, a comparison of three major cloud service providers, namely Amazon AWS, Windows Azure and Google App Engine, has been carried out in terms of security and privacy issues. This will help consumers of cloud services to pick the correct cloud provider according to their needs and requirements.

2.1 Inference from Existing System Previous frameworks maintain only confidentiality services, whereas the service should also be securing the client system and data. At times an attacker may attack the client's resources and information, leading to incidents and disasters such as data breach, data corruption and business interruption. One successful attack can bring about the loss of data and revenue worth millions of dollars. To be compensated for such a loss, clients may also buy cyber insurance. The loss itself is uncertain, and we cannot assume that the damage can be precisely determined; it may take different forms, for example, a ransom paid. It is important to balance the retention of risk and insurance even when future expenses and threats are uncertain. A key challenge in cyber insurance is the accurate estimation of the damage caused by cyber attacks.

3 Proposed System We present SECaaS in a firewall style to provide security policy enforcement and a monitoring infrastructure for network traffic [16]. The system centres on network traffic examination, such as an Intrusion Detection System (IDS), to detect attack behaviour, and on the connection between cyber insurance and SECaaS provisioning. It contains a client that uses applications which receive data traffic as packets. These packets are examined by services from SECaaS providers, provisioned by a subscription management process (SMP). If harmful packets evade security, cyber insurers, subscribed to through an insurance management process (IMP), give compensation for the damage caused. We assume the client system to be Internet accessible on a cloud service (Fig. 1). Applications receive data packets according to their working purpose, for example, email data or financial transactions. Legitimate packets are called safe packets, while packets used in cyber attacks are called dangerous packets. Dangerous packets are considered handled if they are effectively identified by security services, or unhandled if they are not effectively processed (for instance, if they are unidentified). These unhandled packets will cause damage, which brings about costs to the client, and


Fig. 1 Overview of the proposed system

the client will claim the amount from the insurance firm; the IMP will then refund the exact data cost to the client.

4 Module Description 4.1 Purchase the Security Services The user first registers on the cloud site by giving their details (name, password, email, mobile number, DOB) and then logs in with credentials such as username and password. When the username and password are valid, the user profile screen is displayed. After login, the user purchases the outsourced security service in the cloud. Each security service has its own controls, price and validity. Users pick services based on the performance of our framework and then immediately move to add security management. Once the service is obtained, it protects the user's application and system for the specified time period.

4.2 Cloud Service In this module, users register for the cloud service with their credential details and then log in to the cloud resource. Once they have entered the cloud site or application, they can use it. For example, the application may be a


Fig. 2 Experimental result

social network where users share posts and chat with friends. Users upload their photos to the social networking site and tag the pictures while uploading. The security framework protects the application on every request made to the cloud and thereby provides a way of securing cloud-based data.

4.3 Protecting Data Traffic In this security model, the service is managed according to the client's usage patterns; it monitors the traffic flow and protects incoming data packets according to their working purpose, for example, email data, financial transactions or web pages. Legitimate packets are called safe packets, while packets used in cyber attacks are called dangerous packets. Dangerous packets are deemed handled if they are effectively identified by security services, or unhandled if they are not effectively processed (for instance, if they are unidentified). These unhandled packets cause damage, which increases costs to clients. SECaaS records the affected client packets and, at the same time, the service redirects to the insurance management process (IMP) (Fig. 2).
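A small sketch of the handled/unhandled bookkeeping described above is shown below. It is illustrative only: the verdict labels and the per-packet tallying are assumptions for the example, and the unhandled count is what the insurance management process would later use for a claim.

```python
# Illustrative tally of IDS verdicts per packet; label names are assumptions.
from collections import Counter

def summarize_verdicts(verdicts):
    """verdicts: iterable of 'safe', 'handled' or 'unhandled', one per packet."""
    counts = Counter(verdicts)
    return {
        "safe": counts["safe"],
        "handled": counts["handled"],      # dangerous but caught by the service
        "unhandled": counts["unhandled"],  # dangerous and missed -> causes damage
    }

print(summarize_verdicts(["safe", "handled", "unhandled", "safe", "unhandled"]))
# {'safe': 2, 'handled': 1, 'unhandled': 2}
```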

4.4 Claim Insurance This module checks whether the client is authorized, examines the client's present premium data and assesses the currently unhandled data, calculating for each packet the price, the duration and the maximum number of packets


affected. We present a partial Lagrange multiplier algorithm to find the optimal setting of the parameters and compute the amount corresponding to the data size, and that amount is then refunded to the specific client. After a claim, if the client's current premium is low, a new future premium is set based on the incoming dangerous packets. The price of insurance bought in advance is charged at a rate known as a 'future premium'. The IMP buys insurance policies, which include the premium, the kinds of threats covered, the indemnity value and the policy length.
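The refund and premium update described above can be illustrated with a short sketch. The numbers and the linear loading factor are made-up assumptions and do not reflect the paper's pricing model; the only point shown is that the refund is proportional to the breached data but capped by the purchased coverage.

```python
# Illustrative sketch (made-up numbers) of the refund for unhandled packets,
# capped by coverage, and a simple future-premium update.
def claim_refund(unhandled_packets, damage_per_packet, coverage):
    """Refund paid by the insurer for one claim period."""
    loss = unhandled_packets * damage_per_packet
    return min(loss, coverage)

def next_premium(current_premium, unhandled_packets, loading=0.05):
    """Toy future-premium update: grows with observed dangerous traffic."""
    return current_premium * (1.0 + loading * unhandled_packets)

if __name__ == "__main__":
    print(claim_refund(unhandled_packets=40, damage_per_packet=10.0, coverage=300.0))  # 300.0
    print(round(next_premium(current_premium=100.0, unhandled_packets=40), 2))         # 300.0
```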

5 Conclusion This paper builds trust between the client and the insurance management system: the compensation amount, according to the insurance subscribed to by the client, is given to the client in proportion to the specific data that has been breached, and in a short time. Although modern technology has improved greatly, security is still a concern for the cloud. Even though security threats through the cloud network are feasible and common, cloud service providers are more concerned about security risks from external attacks than from internal attacks. This work has presented a cloud framework design in which the present security issues are removed and cloud security is enhanced.

References 1. Delamore B, Ko RKL (2015) Security as a Service (SecaaS)—an overview. In: The cloud security ecosystem. Elsevier, pp 187–203 2. Ko RKL (2014) Data accountability in cloud systems. Security, privacy and trust in cloud systems. Springer, Berlin, pp 211–238 3. PCMag Digital Group (2015) PlayStation hack to cost Sony $171M; Quake costs far higher [Online]. Available: http://www.pcmag.com/article2/0817,2385790,00.asp 4. Majuca RP, Yurcik W, Kesan JP (2006) The evolution of cyber insurance. arXiv preprint cs/0601020 5. AIG CyberEdge (2015) http://www.aig.com/CyberEdge3171417963.html 6. CyberSecurity by Chubb (2015) http://www.chubb.com/businesses/csi/chubb822.html 7. Gordon LA, Loeb MP, Sohail T (2003) A framework for using insurance for cyber-risk management. Commun ACM 46(3):81–85 8. Asha P, Jesudoss A, Prince Mary S, Sai Sandeep KV, Harsha Vardhan5 K (2021) An efficient hybrid machine learning classifier for rainfall prediction. J Phys Conf Series 1770 (2021):012012 9. Prince Mary S, Ankayarkanni UN, Sathyabama AS (2020) A survey on image segmentation using deep learning. Journal of physics: conference series, volume 1712, international conference on computational physics in emerging technologies (ICCPET) 2020, 1 Aug 2020, Mangalore, India 10. Gokilavani N, Bharathi B (2021) Multi-objective based test case selection and prioritization for distributed cloud environment. Microprocess Microsyst 82:103964 11. Nagarajan G, Minu RI, Jayanthiladevi A (2019) Brain computer interface for smart hardware device. Int J RF Technol 10(3–4):131–139


12. Nirmalraj S, Nagarajan G (2019) An adaptive fusion of infrared and visible image based on learning of sparse fuzzy cognitive maps on compressive sensing. J Ambient Intell Humanized Comput:1–11 13. Pravin A, Prem Jacob T, Nagarajan G (2019) Robust technique for data security in multicloud storage using dynamic slicing with hybrid cryptographic technique. J Ambient Intell Humanized Comput:1–8 14. Gartner Press Release (2013) Gartner says cloud-based security services market to reach $2.1 Billion in 2013. Available: http://www.gartner.com/newsroom/id/2616115 15. McAfee (2015) McAfee Security-as-a-Service—Solution Brief [Online]. Available: http:// www.mcafee.com/us/resources/solution-briefs/sb-saas.pdf 16. Trend Micro (2015) Deep Security as a Service Online Available: http://www.trendmicro.com/ us/business/saas/deep-security-as-a-service/

Improving the Efficiency of E-Healthcare System Based on Cloud L. Sujihelen, S. T. Nikhil Sidharth, Miryala Sai Kiran, M. D. Antopraveena, M. S. Roobini, and G. Nagarajan

Abstract A healthcare provider may be moving from paper records to electronic health records (EHRs) or may be using EHRs already. EHRs allow providers to use information more effectively to improve the quality and efficiency of care; however, EHRs do not change the privacy protections or security safeguards that apply to health data. This project focuses on creating a secure cloud structure for building and accessing trusted computing services at all levels of the public cloud deployment model. It thus eliminates both internal and external security threats. This results in achieving data confidentiality, data integrity, authentication and authorization, eliminating both active and passive attacks from the cloud network environment. The aim is to build a secure cloud structure for obtaining trusted computing and storage services at all levels of the public cloud deployment model. Keywords E-Healthcare · EHR · Attribute-based encryption · Patient's data

1 Introduction E-Health is intended for the communication and exchange of health data [1, 2], and the EHR plays a significant role in this. Storing information on paper is highly inefficient and only increases storage and retrieval complexity [3]. The reason cloud-based systems should be implemented is that any medical institution can share a patient's information instantly with others, wherever they may be located [4]. The main precautions one must consider when deploying a cloud-based system are privacy and protection from threats. Security administrators working in the IT department must be trained to handle the kinds of threats encountered. There has been continuous growth of data breaches L. Sujihelen (B) · S. T. Nikhil Sidharth · M. S. Kiran · M. D. Antopraveena · M. S. Roobini · G. Nagarajan Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_45


in healthcare throughout the last 2–3 years, which is the reason that prompted this research [5].

2 Related Work To tackle the previously mentioned issue, Guo et al. [3] considered the distributed nature of the e-Health framework when designing a privacy-preserving authentication system. Instead of letting centralized institutions handle authentication, the two end users (patients and doctors) perform the verification procedure [6, 7]. Specifically, users are permitted to verify each other without revealing their attributes and identities, which takes care of the problem of maintaining the privacy and variability of every user's attributes. Missah et al. [8] identified dangers in the health records framework by recognizing vulnerabilities of the system and areas which are prone to attack; different solutions were discussed by comparing journals and articles on unauthorized access in PEHR. Pai et al. [9] first indicate the advantages of cloud storage, which include storage capacity, efficiency, simple data retrieval, scalability, atomicity and so on; third-party cloud service providers own the infrastructure, clients/patients use their services, and a solution model is proposed using data fragmentation without much overhead. Chiuchisan et al. [10] discussed encryption techniques to add security in the cloud; they did a case study on an eye clinic and talked with medical staff about the difficulties they face in handling the data [11, 12].

3 Existing System Cloud computing security depends on a set of control-based technologies. Data-level security handles data in a safe manner, while platform-level security provides a secure platform to process the data. A secure structure demonstrates a trusted environment to the user, yet existing systems lack high-level and multi-level security and pay less attention to insider risk and to active and passive attacks.

4 Proposed System EHRs allow providers to use information more effectively to improve the quality and efficiency of care, yet EHRs do not change the privacy protections or security safeguards that apply to health data. This work centres on creating a secure cloud structure for promoting and accessing trusted computing


Fig. 1 Overview of the proposed system

services at all levels of the public cloud deployment model. In this way, it eliminates both internal and external security threats. This results in achieving data confidentiality, data integrity, authentication and authorization, removing both active and passive attacks from the cloud network environment, and in building a secure cloud structure for obtaining trusted computing and storage services at all levels of the public cloud deployment model, as shown in Fig. 1.

4.1 Advantages of Proposed System
• Provides data integrity.
• Data privacy.
• Authentication and authorization.
• Eliminates both internal and external security threats.
• Avoids both active and passive attacks in the cloud network environment.

5 Modules Description 5.1 User Module User module is to register the users for storing the data in the cloud.


5.2 Registration Module Registration module is to register all the users who all want to register in the cloud.

5.3 Creation Storage and Instance The data owner has no authority over the data after it is uploaded to the cloud. In this module, the original data is split into two distinct slices. The data in each slice can be encrypted using a different cryptographic algorithm and encryption key before storing them in the cloud.

5.4 Data Protection In this module, the procedure is to store data in a proper, secure and safe way so as to avoid intrusions and data attacks, while at the same time reducing the cost and time needed to store the encrypted data in the cloud storage.

5.5 Data Recovery Module In this module, the data can be recovered from the cloud server using various kinds of strategies; a simple sketch of decrypting and reassembling the encrypted slices follows.
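The following is a minimal sketch of the slicing idea in Sects. 5.3–5.5, assuming the Python cryptography library and symmetric (Fernet) keys held by the data owner. It illustrates the concept only; it is not the authors' implementation, and the record content and key handling are assumptions.

```python
# Minimal sketch: split a record into two slices, encrypt each with its own
# key (Sect. 5.3-5.4), and recover by decrypting and reassembling (Sect. 5.5).
from cryptography.fernet import Fernet

def protect(record: bytes):
    """Split a record into two slices and encrypt each with a separate key."""
    mid = len(record) // 2
    slices = [record[:mid], record[mid:]]
    keys = [Fernet.generate_key() for _ in slices]
    tokens = [Fernet(k).encrypt(s) for k, s in zip(keys, slices)]
    return keys, tokens   # tokens go to cloud storage, keys stay with the owner

def recover(keys, tokens) -> bytes:
    """Decrypt both slices and reassemble the original record."""
    return b"".join(Fernet(k).decrypt(t) for k, t in zip(keys, tokens))

if __name__ == "__main__":
    ehr = b"patient-id:42;diagnosis:hypertension"
    keys, tokens = protect(ehr)
    assert recover(keys, tokens) == ehr
```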

6 Conclusion This work focuses on creating a secure cloud structure for promoting and accessing trusted computing services at all levels of the public cloud deployment model. In this way, it removes both internal and external security threats. This results in achieving data confidentiality, data integrity, authentication and authorization, eliminating both active and passive attacks from the cloud network environment, and in building a secure cloud structure for obtaining trusted computing and storage services at all levels of the public cloud deployment model.


References 1. Barua M et al (2011) PEACE: an efficient and secure patient-centric access control scheme for eHealth care system. In: IEEE conference in computer communications workshops (INFOCOM WKSHPS) 2. Evans RS (2016) Electronic health records: then, now, and in the future. Yearbook Med Inform 25(S 01):S48–S61 3. Guo L et al (2012) Paas: a privacy-preserving attribute-based authentication system for ehealth networks. In: IEEE 32nd international conference on cloud computing systems (ICDCS) 4. Prakash C, Dasgupta S (2016) Cloud computing security analysis: challenges and possible solutions. In: 2016 international conference on electrical, electronics, and optimization techniques (ICEEOT). IEEE, pp 54–57 5. Rana ME, Kubbo M, Jayabalan M (2017) Privacy and security challenges towards cloud-based access control. Asian J Inform Technol 16(2–5):274–281 6. Els F, Cilliers L (2017) Improving the information security ofpersonal electronic health records to protect a patient’s health information. In: 2017 conference on information communication technology and society (ICTAS). IEEE, pp 1–6 7. Pai T, Aithal PS (2017) Cloud computing security issues-challenges and opportunities 8. Missah YM, Dighe P, Miller MG, Wall K (2013) Implementation of electronic medical records—a case study of an eye hospital. South Asian J Bus Manag Cases 2(1):97–113 9. Choi M, Paderes REO (2015) Biometric application for healthcare records using cloud technology. In: 2015 8th international conference on bioscience and bio-technology (BSBT). IEEE, pp 27–30 10. Chiuchisan I, Balan DG, Geman O, Chiuchisan I, Gordin I (2017) A security approach for health care information systems. In: 2017 E-health and bioengineering conference (EHB). IEEE, pp 721–724 11. Cherian S, Singh CS, Manikandan M (2014) Implementation of real time moving object detection using background subtraction in FPGA. In: 2014 international conference on communication and signal processing, Melmaruvathur, pp 867–871 12. Sujihelen L, Jancy S, Roobini MS, Mary AVA, MP Selvan, Automated nutrition monitoring system in IoT. J Critical Rev 7(19):5071–5075 13. Nagarajan G, Minu RI, Jayanthiladevi A (2019) Brain computer interface for smart hardware device. Int J RF Technol 10(3–4):131–139

Analysis on Online Teaching Learning Methodology and Its Impact on Academics Amidst Pandemic R. Yogitha, R. Aishwarya, G. Kalaiarasi, and L. Lakshmanan

Abstract Twenty years back, pursuing education online "anywhere–anytime" was still a very distant dream. But change is inevitable, and the pandemic situation forced all of us to upgrade our teaching learning methodology. Amidst the lockdown prevailing all over the country, the work of academicians did not stop at any point because of the enormous technological innovations that have been made throughout the decade. This work focusses on the teaching learning techniques that have been practised before and after the pandemic, the significance of online education, the various technological tools that help in continuing education successfully from different parts of the world by connecting all of us together, and the assessment tools available to assess students taking up online education. We also conducted a survey with a questionnaire on three types of teaching learning methodology: blackboard teaching learning, online teaching learning and classroom digital teaching learning. The questionnaire helps in assessing the effectiveness of the three methodologies from both the student's and the teacher's point of view. It also covers the merits and demerits of online education and the key lessons that the pandemic has taught each of us as a student and as a teacher. The result shows that both the teaching fraternity and the student community prefer classroom digital teaching learning, which blends traditional and online teaching learning techniques. Keywords Pandemic · Online education · Blackboard teaching learning · Online teaching learning · Classroom digital teaching learning

1 Introduction This year 2020 has been an unexpected year for every individual across the universe. In spite of many hardships faced by many people, a student has become a responsible citizen in their education journey. This happened because of the never R. Yogitha (B) · R. Aishwarya · G. Kalaiarasi · L. Lakshmanan Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_46


One of the worst affected sectors is education, from kindergarten to graduate level. The education system was stunned when most schools, colleges and universities faced a sudden overnight lockdown, which disrupted the learning process worldwide. Not only the learning process: almost the entire world faced a "what-to-do" situation. The economy declined for many people, leading to emotional and psychological stress in many families. Many of us survived the new challenge of staying at home and continued to work from home. Amid all this, the education system had to adopt a new way of learning. Necessity drives invention, and the necessity for a change in education escalated it to online learning, while abiding by the new government regulations on safety and staying at home. Thus, most educational institutions offered online classes to help their students continue their learning journey. A virtual classroom engaged the students and relieved their stress; they continued to learn despite the lockdown. The learners became accustomed to online classes, homework and assessments and groomed themselves into committed and disciplined individuals, and almost all students became more responsible. These changes were possible only because of the technological advancements made in the meantime. The teaching strategies used before and after the pandemic are listed in Table 1.

Table 1 Teaching strategies: before versus after pandemic
Teaching strategies used before | Teaching strategies used after pandemic
Traditional blackboard teaching | Virtual teaching
Handwritten notes | e-notes like PDFs, PPTs, videos
Using OHP (overhead projector) | PPT (PowerPoint presentation)
PPT, brown bag method, flipped classroom | Flipped classroom
Quiz | Online quiz, quiz using game-based learning platforms
Assignment submission on A4 sheets | Online assignment submission using the latest methods, thus improving creativity
Traditional assessment method (submitted on paper) | Assessment methods: quiz, online submission of exam papers
Less interaction between students and faculty | More interaction: just a message makes students more interactive

Online teaching has been favoured after the pandemic, and this has been achieved through the virtual classroom. Virtual classrooms are easy to access because the requirements are minimal, and they avoid transport; usually, reaching the educational institution on time is a great achievement in most students' lives. The virtual classroom requires only devices and connects students with their friends and teachers. Besides avoiding travel, it saves time, with the added advantage that classes are recorded and students can revisit a lesson at their convenience; all recordings are made available in the cloud. Thus, students' learning is made easy and their level of understanding is enhanced. The virtual classroom also provides a convenient way of contacting teachers by message or chat; this option allows students to interact with their teachers at any time to clear doubts or ask questions, and teachers are also satisfied with such interactive sessions. The virtual classroom paves the way for students to explore and learn new digital tools. Students can interact with their friends using online tools and always feel connected; this sense of connection kept them from stress and depression during the lockdown. Since students no longer have to travel to attend class, they stay physically fit and maintain good attendance. There is a significant improvement in participation and interaction in class, as many students feel safer interacting with teachers from behind a screen. Student performance can be maintained well in online classes, and students can easily track records such as attendance and assessment and assignment scores. Using these data, their improvement can be visualized as a graph or curve, based on which teachers can create courses and topics that match the students' learning patterns and meet their needs. To make sessions interesting, fun-filled activities such as gaming quizzes and timed quizzes are conducted, which also lets learners pick up new technological tools. Thus, accommodating to online classes is an interesting and convenient change for them, especially since, with many modes of teaching available such as presentations, activities and cartoon videos, even the most tedious topic can be made exciting for the students. Online education is a learning technique that depends entirely on electronic devices that use the Internet. It is used by many organizations, industries, institutions and so on, mainly for conducting conferences, web-based training, courses, interviews and assessments, and for sharing study materials and exchanging knowledge virtually in the form of PPT, audio, video, etc. Educational theory, also known as learning theory, is a concept with which the instructor can understand knowledge transfer, the knowledge process, the knowledge receiver and the improvement of student performance, as shown in Table 2.
Table 2 Online learning theorem

Principles of online education | Education theorem
Instructor and learner/listener relation | Behaviour, cognitive and connective
Icebreaker session | Behaviour and cognitive
Monitoring learners individually | Constructive and connective
Time on task | Behaviour and constructive
Data/material security | Cognitive and connective
Instructor and student relation | Behaviour, cognitive, constructive and connective


2 Related Works 2.1 Before Pandemic In [1], Mikel Perales et al. stress the importance of online education for practical work rather than theory. They also describe a Rich Tool Kit (RTK) for all types of practical sessions, such as face-to-face online education for live demonstrations, simulated practical laboratories for engineering, and remote desktops/laboratories for streams such as mechanical, electrical, civil and computer engineering; the same can be applied during practical examinations. In [2], Jiugen et al. discuss the importance of online education for research based on large amounts of data and cloud computing techniques. They divide the research concept into three phases, namely database, data mining and data standardization layers, to improve research conducted online in an efficient manner by combining cloud computing and big data. Zhang et al. [3] highlight the importance of a big data platform, given the vast amount of data generated online every day. This platform facilitates all kinds of functions and also provides a bridge between learners and job providers to improve learners' careers. In [4], Kebritchi shows how issues appearing during online courses can be avoided through Cooper's framework. The issues are categorized into three groups: learner issues, content issues and instructor issues. To overcome them, institutions should provide talent and professional development for instructors, proper training for learners and highly engaged technical support for content development.

2.2 After Pandemic In a traditional blackboard-based classroom, teachers interact with students, whereas in online education the interaction between teachers and students is much less. When following online education becomes compulsory, it is very important to design quality, pedagogically effective lectures [5]. A well-designed learning environment can be given to learners by adding technological innovations to the traditional teaching learning system [6]. In online education, students miss the physical presence of the teacher, lack a strong bond with the teacher and lose a lively environment [7]. Certain principles to be followed for online education are as follows [8]: (a) design of the curriculum, (b) effective delivery of the designed curriculum, (c) adequate support to students based on the curriculum, (d) participation/interaction by students and (e) a plan to handle interruptions on online education platforms. Mentoring activities should be introduced [9] in both traditional and online education systems. Learners have experienced safe and secure online learning [10], but the gaps between online learning and traditional classroom-based learning still need to be filled. To improve interaction in online education, tele-instruction was proposed in [11] to enable connectivity between the peers and the instructor of a course whenever needed. The introduction of Zoom, Slack, the EduPage platform, TV School, Microsoft Teams and Google Meet to support the online education system proved that online education is successful and that teachers and learners can adapt to whatever nature imposes [12].

3 Online Teaching Platforms and Tools Online education offers a number of tools for various categories. Some of them include teaching platforms, assessment tools, learning management tools and online learning platforms.

3.1 Teaching Platforms Online teaching platforms played a major role during the pandemic (most recently COVID-19) in providing uninterrupted education all over the world.

3.2 Assessment Tools The successful outcome of the online education for each course will be decided only by the assessment result of learners. The most popular five assessment tools are Google Forms, Kahoot, Mentimeter, Poll Everywhere and Socrative.

3.3 Learning Management Tools Learning management system is used to monitor and keep track of all the online courses provided by any particular organization or institution. LMS is used in almost all the places such as government companies, private and public educational institution and corporate companies.

3.4 Online Learning Platforms These are the services provided by the instructor to the learner through tools, information and some resources to enhance the quality content delivery. The online learning platforms are divided based on the contents provided and category of students involved.


4 Survey: Results and Discussions The pandemic forced the whole country to take up online education as the mainstream. It gave rise to varied opinions and to comparisons of the virtual teaching learning methodology with the existing traditional methodologies. As the saying goes, "Change is the only thing that remains unchanged", and a step towards online education is a change that is necessary and inevitable. A survey was taken from students and teachers in the ratio 2:1, collecting opinions on three types of teaching learning practice. The questionnaire starts with a poll to choose the most preferred methodology among the three techniques: blackboard teaching learning (traditional method), online teaching learning (currently used method) and classroom digital teaching learning (technique with future scope), followed by a set of questions covering various parameters to assess each methodology. The bar graphs showing the questionnaire results for both teachers and students are given in Figs. 1 and 2, and the questions are listed in Table 3. The numbers in the table clearly depict the preference of teachers and students between the two methodologies. The questionnaire covered only the two main teaching learning processes (blackboard and online), since these are the ones mostly in practice at present, whereas classroom digital teaching learning, which is practised only in a few international schools and universities, was kept only as a choice at the beginning and end of the survey. This is because a technique cannot

Fig. 1 Blackboard teaching learning methodology—teacher’s and student’s opinions on questionnaire


Fig. 2 Online teaching learning methodology—teacher’s and student’s opinion on questionnaire

be assessed by any individual unless they have practised it; hence, classroom digital teaching learning has been proposed as a future methodology for teaching learning practice. Coming back to the questionnaire in Table 3, it is evident that the majority of students and teachers still prefer the blackboard teaching learning methodology over the online technique. Especially when it comes to interaction, be it between teacher and learner or between fellow learners, the online technique fails to reach the required standard, although it does allow a variety of teaching learning aids and tools that eventually make the whole process more interesting and effective. However, it does not fully satisfy two important criteria of the teaching learning process: the learner's level of attention throughout the session and the level of retention of what the learner has learnt throughout the course. The real point of taking up a course is not that the teacher has completed the syllabus or that the learner has passed the assessment conducted by the organization; rather, it is that the learner, after taking up the course, has satisfied all the course outcomes specified beforehand. To achieve all the course outcomes, the learner must pay attention to the ongoing classes, which in turn helps retain the knowledge gained by listening to the class. The main shortcoming of the online teaching learning technique is that the learner's attention and retention rates are very low due to factors such as the lack of proper monitoring of students' activities while they attend classes from home, bandwidth and connectivity problems, and a high level of external disturbance.

[Table 3 Questionnaire to assess the acceptance and compatibility of teaching learning methodologies currently in practice. For each of nine questions (teacher–learner interaction; peer interaction with fellow classmates and friends; use of technology-based teaching tools such as PPT, PDF, videos, audios and animated GIFs; discussion of the current day's topic with a lively introduction; time management (10 min summary of the previous class + 30 min teaching + 10 min doubt clarification + 10 min general interaction); level of attention throughout the class; external disturbance; level of retention of what is learnt during class; overall feedback), the table reports the percentage of High/Medium/Low responses from the teachers' poll and the students' poll, for blackboard teaching learning and for online teaching learning.]

That is why, when it comes to the overall feedback, the blackboard teaching learning methodology is chosen by more than fifty per cent of both teachers and students on average. Every method and technique has its own pros and cons: the pros make the technique preferable to the community that uses it, while the cons drive new innovations on the existing technique to overcome them. The next part of the survey is the main opinion poll among students and teachers that assesses the compatibility of and preference for the online teaching learning platform.

5 Conclusion The survey clearly depicts the incompatibility that both students and teachers face with online teaching learning platforms. This might be due to two major reasons. One is the sudden mainstream change in the teaching learning process, which is not easy to accept, since people find it difficult to adapt to new changes after practising certain techniques for a very long time. The other is the undeniable set of disadvantages that the online platform carries. Nevertheless, no matter how good a practice is, it has to be updated according to the current situation and necessity. The classroom digital teaching learning process allows us to keep the traditional method alive while also using modern technological innovations that make the whole process interesting and worthwhile.

References 1. Perales M, Pedraza L, Moreno-Ger P (2019) Work-in-progress: improving online higher education with virtual and remote labs. In: 2019 IEEE global engineering education conference (EDUCON). IEEE, pp 1136–1139 2. Jiugen Y, Ruonan X, Rongrong K (2017) Research on interactive application of online education based on cloud computing and large data. In: 2017 IEEE 2nd international conference on big data analysis (ICBDA). IEEE, pp 593–596 3. Zhang G, Yang Y, Zhai X, Yao Q, Wang J (2016) Online education big data platform. In: 2016 11th international conference on computer science and education (ICCSE). IEEE, pp 58–63 4. Kebritchi M, Lipschuetz A, Santiague L (2017) Issues and challenges for teaching successful online courses in higher education: a literature review. J Educ Technol Syst 46(1):4–29 5. Srivastava S, Lamba S, Prabhakar TV (2020) Lecture breakup-a strategy for designing pedagogically effective lectures for online education systems. In: 2020 IEEE 20th international conference on advanced learning technologies (ICALT). IEEE, pp 27–28 6. Shukla T, Dosaya D, Nirban VS, Vavilala MP (2020) Factors extraction of effective teachinglearning in online and conventional classrooms. Int J Inform Educ Technol 10(6) 7. Raza SA, Khan KA, Rafi ST (2020) Online education & MOOCs: teacher self-disclosure in online education and a mediating role of social presence. South Asian J Manag 14(1):142–158 8. Bao W (2020) COVID-19 and online teaching in higher education: a case study of Peking University. Hum Behav Emerg Technol 2(2):113–115


9. Nugumanova LA, Shaykhutdinova G, Yakovenko TV (2020) Mentoring as an effective practice of teacher professional development managing in the continuous pedagogical education system. ARPHA Proc 3:1829 10. Allo MDG (2020) Is the online learning good in the midst of Covid-19 pandemic? The case of EFL learners. Jurnal Sinestesia 10(1):1–10 11. Derakhshandeh Z, Esmaeili B (2020) Active-learning in the online environment. arXiv preprint arXiv:2004.08373 12. Basilaia G, Kvavadze D (2020) Transition to online education in schools during a SARS-CoV-2 coronavirus (COVID-19) pandemic in Georgia. Pedagogical Res 5(4):1–9

Real Estate Price Prediction Using Machine Learning Algorithm D. Vathana, Rohan Patel, and Mohit Bargoti

Abstract Real estate is one of the leading business types, referring to the buying and selling of property, and this makes an effective house price prediction model crucial. Such a model lets investors obtain a realistic house price directly from estate owners without third-party agents. For this purpose, machine learning is proposed. In this paper, various machine learning models are proposed for automatically predicting the house sale price. The sale price depends on innumerable aspects such as property area, house site, construction material, property age, number of bedrooms and garages, and so on. Here, machine learning prediction models such as decision tree, support vector regression, logistic regression and Lasso regression are used, with housing data for 3000 properties. Our prediction models, namely logistic regression, Lasso regression, decision tree and random forest, showed R-squared values of 0.99, 0.82, 0.96 and 0.98, respectively. Additionally, the above algorithms are compared on parameters such as accuracy, RMSE, MAE and MSE. This paper likewise indicates the importance of our method and technique. Keywords House price prediction · Logistic and Lasso regression · SVM · Random forest · Machine learning · Decision tree

D. Vathana (B) · R. Patel · M. Bargoti Department of Computing Technologies, SRM Institute of Science and Technology, Chennai, India e-mail: [email protected] R. Patel e-mail: [email protected] M. Bargoti e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_47


1 Introduction Real estate property is important in every human life; it earns a person respect in society and represents their wealth and prestige. Generally, property value does not fall rapidly, so investment in real estate is profitable for investors. But real estate prices can sometimes change, and when they do, household investors, policy makers and bankers are all affected; hence, predicting real estate value is economically very important. In 2011, India had 24.67 crore households, the second largest number in the world. Initially, house price prediction was a manual process, but this was unsatisfactory and inconvenient for buyers and sellers. To solve this problem, we introduce an automatic model for predicting real estate value, i.e. the house sale price. This model is very useful for unaware customers, letting them obtain the correct house price directly without third-party agents. Nowadays, property prices in the economy and society are decided by various parameters, whereas older analyses mainly depended on property size and location [1, 2]. Numerous inherent constraints such as area and construction material, external parameters such as property location, proximity and upcoming projects [3, 4], and the number of bedrooms are also analysed. The sale price is then predicted using a couple of machine learning algorithms, namely support vector regression and a linear regression model, based on these constraints, and finally the two models are compared using performance metrics such as accuracy, RMSE, etc. In this paper, we propose four models, i.e. Lasso regression, support vector regression, decision tree and linear regression, for predicting the house sale price; we find their accuracy and compare them using error metrics such as the R-squared value, Mean Absolute Error (MAE), Mean Squared Error (MSE) and Root Mean Squared Error (RMSE).

2 Related Work In [5], the authors collected a housing dataset from Centris.ca and duProprio.com. This dataset has 25,000 instances and 130 attributes. Around 70 features were extracted from the above websites and real estate agencies such as Sutton, RE/MAX, Century 21, etc.; an additional 60 features were socio-demographic, based on where the property is located. In that paper, Principal Component Analysis and four regression models are implemented to predict the property price. First, PCA is explored to reduce the high dimensionality of the dataset. Then four regression models, namely K-nearest neighbours (KNN), support vector machine, linear regression and random forest, together with a collaborative approach, are used to predict the property price; the combination of KNN and random forest is called the collaborative approach. This novel ensemble approach predicted the property prices


with an error of 0.0985; however, the PCA model was not effective in reducing the prediction error. In [6], the author proposed two comparison models, an ANN model and a hedonic price model, for predicting property values. Hedonic price models are generally used to predict the worth of an item as a function of its internal and external characteristics. The neural network is first trained, and the weights and the biases of the nodes and edges, respectively, are tuned using a trial-and-error method. In that article, the hedonic model gave a lower R-squared value and a higher error rate than the neural network model, so it is concluded that the neural network performs better than the hedonic model. Classifiers are used to predict house property costs in [7, 8]. The authors collected data from the Multiple Listing Service (MLS), historical mortgage rates and public-school ratings; the Metropolitan Regional Information Systems (MRIS) database is used for real estate data collection. The main theme of that article is to predict the property cost and to find whether the predicted cost is higher or lower than the original cost, so the authors used four machine learning models, namely AdaBoost, RIPPER, C4.5 and Naive Bayes, for predicting the cost. In [9], the author proposed a linear regression technique to predict stock market prices, using the TCS stock database. They also proposed RBF and polynomial regression models for comparison with the linear regression model and finally found that it is superior to the other models [10].

3 Proposed Work The proposed system presents machine learning models to automatically predict the house sale price. The following steps are used for prediction (Fig. 1).
1. The dataset was collected from an open-source database, and housing data for 3000 properties are used in our work.
2. Data preprocessing is employed to make the data values in the dataset uniform by removing missing values.
3. After preprocessing, data analysis is carried out to discard redundant parameters in the dataset and increase the prediction accuracy.
4. Finally, machine learning models such as decision tree, logistic regression, Lasso regression and support vector regression are used to build a predictive model, and the different machine learning procedures are compared on parameters such as accuracy, MSE, MAE and RMSE.

Fig. 1 Overall proposed work flow


Fig. 2 Proposed prediction model

The proposed system improves accuracy compared with existing work.

3.1 Data Collection In our work, we collected the dataset from Kaggle Inc. [11]; it is an open-source dataset containing 3000 records with 80 attributes that affect the property value. We have chosen only 37 of the 80 parameters [12–14]; the parameters used include house area, overall quality, location of the house, year the house was built, counts of bedrooms and bathrooms, garage area and swimming pool area (Fig. 2).

3.2 Data Preprocessing This module converts raw and complex data into understandable data for easy systematic processing. It works by finding missing data and redundant data in the original dataset; NaN values are deleted from the whole dataset so that the values in the dataset become uniform.
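A rough sketch of this cleaning step with pandas is shown below; the CSV file name is an assumption made for illustration only.

```python
import pandas as pd

# Load the raw housing data (file name assumed for illustration).
df = pd.read_csv("house_prices.csv")

# Drop duplicate records and rows containing NaN values so that the
# remaining dataset has uniform, complete values.
df = df.drop_duplicates()
df = df.dropna()

print(df.shape)
```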


3.3 Data Analysis In data analysis, we examine the characteristics of the preprocessed dataset. That means we study its limitations, examine the relationships between parameters and also find the outliers that occur in the dataset.
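A minimal sketch of such an exploratory pass is given below; it assumes the cleaned dataset is already loaded in a pandas DataFrame named df and that the target column is called "SalePrice" (both names are assumptions for illustration).

```python
import pandas as pd

df = pd.read_csv("house_prices_clean.csv")  # assumed output of the preprocessing step
num = df.select_dtypes("number")

# Summary statistics and relationships between parameters and the target.
print(num.describe())
print(num.corr()["SalePrice"].sort_values(ascending=False))

# Simple IQR rule to flag outliers in the target column.
q1, q3 = num["SalePrice"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = num[(num["SalePrice"] < q1 - 1.5 * iqr) | (num["SalePrice"] > q3 + 1.5 * iqr)]
print(len(outliers), "potential outliers")
```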

3.4 Prediction Models Four machine learning models are used to build the prediction. First, the preprocessed and analysed dataset is split into training and testing data. The four machine learning models are then trained one by one on the training data, and the testing data are predicted with each trained model. Finally, we obtain a predicted sale price that is higher than, lower than or close to the original price.
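The sketch below illustrates this split-train-evaluate loop with scikit-learn, assuming df is the cleaned dataset and "SalePrice" is the target column (an assumed name); LinearRegression, SVR, Lasso and DecisionTreeRegressor are used here as stand-ins for the four models discussed in the following subsections.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# `df` is assumed from the preprocessing step; "SalePrice" is an assumed target name.
X = df.select_dtypes("number").drop(columns=["SalePrice"])
y = df["SalePrice"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "Linear regression": LinearRegression(),
    "SVR": SVR(),
    "Lasso": Lasso(alpha=1.0),
    "Decision tree": DecisionTreeRegressor(random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    mse = mean_squared_error(y_test, pred)
    print(f"{name}: R2={r2_score(y_test, pred):.3f} "
          f"MAE={mean_absolute_error(y_test, pred):.1f} "
          f"MSE={mse:.1f} RMSE={np.sqrt(mse):.1f}")
```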

3.4.1 Logistic Regression

We use logistic regression to predict the house value target; the logistic function is defined by Eq. (1):

\[ \mathrm{LR}_k = \frac{1}{1 + e^{-Y_k \beta + \varepsilon_k}} \quad (1) \]

where \(\mathrm{LR}_k\) denotes the continuous dependent variable and \(Y_k\) denotes the independent data. The above function produces binary outputs, i.e. zero and one, so it is transformed into a simple linear regression model by Eq. (2):

\[ \log\left(\frac{\mathrm{LR}_k}{1-\mathrm{LR}_k}\right) = -Y_k \beta + \varepsilon_k \quad (2) \]

Finally, the actual value \(\mathrm{LR}_k\) is replaced by another variable \(\mathrm{LR}^{*}_k\); thus, we obtain the final logistic regression in Eq. (3):

\[ \log\left(\frac{\mathrm{LR}^{*}_k}{1-\mathrm{LR}^{*}_k}\right) = -Y_k \beta + u_k \quad (3) \]

where \(u_k\) is the error term. The logistic-regression-based prediction result is given in Fig. 3.


Fig. 3 Prediction result by logistic regression

3.4.2 Support Vector Regression

This model is also a regression model and is analogous to the support vector machine classifier. To find the normal vector \(v \in \mathbb{R}^{M}\) of the linear function, support vector regression solves the convex minimization problem in Eq. (4):

\[ \min \;\; \frac{1}{2}\lVert v \rVert^{2} + C \sum_{k=1}^{K} \left(\gamma_k + \gamma_k^{*}\right) \quad (4) \]

The support-vector-regression-based prediction result is given in Fig. 4.

Fig. 4 Prediction result by support vector regression


Fig. 5 Prediction result by Lasso regression

3.4.3 Lasso Regression

Lasso is an influential regression technique. It works by penalizing the magnitude of the feature coefficients while reducing the error between the predicted targets and the true values, and it is also known as the L1 regularization model. Lasso minimizes a cost function of the form Cost(V) = RSS(V) + α (sum of the absolute values of the weights), where RSS stands for the Residual Sum of Squares, i.e. the sum of squared errors between the predicted values and the actual data in the training set, and the coefficient α can take any value; three cases are considered, namely α = 0, α = ∞ and 0 < α < ∞. In Eq. (5), the Lasso cost function is given in mathematical form:

\[ \mathrm{cost}(v) = \sum_{k=1}^{N}\left(u_k - \sum_{l=0}^{n} v_l w_{kl}\right)^{2} + \alpha \sum_{l=0}^{n} \lvert v_l \rvert \quad (5) \]

The Lasso-regression-based prediction result is given in Fig. 5.
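The small synthetic example below illustrates the effect of the three α cases: α = 0 reduces to ordinary least squares, a moderate α shrinks some coefficients exactly to zero, and a very large α drives all coefficients towards zero. The data are randomly generated and are not related to the housing dataset.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([3.0, 0.0, -2.0, 0.0, 0.5]) + rng.normal(scale=0.1, size=200)

# alpha = 0 corresponds to ordinary least squares (no penalty).
print("OLS coefficients:  ", LinearRegression().fit(X, y).coef_.round(2))

# A moderate alpha shrinks coefficients and drives some exactly to zero.
print("Lasso(alpha=0.1):  ", Lasso(alpha=0.1).fit(X, y).coef_.round(2))

# A very large alpha pushes all coefficients towards zero.
print("Lasso(alpha=100):  ", Lasso(alpha=100).fit(X, y).coef_.round(2))
```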

3.4.4 Decision Tree

The decision tree is one of the most widely used supervised learning algorithms. This model has the capacity to produce prediction results with high accuracy and a minimum error rate, and it can handle both regression and classification problems. The binary tree concept is used in this model, and the classification decision is assigned at the terminal nodes.


Fig. 6 Prediction result by decision tree

The decision-tree-based prediction result is given in Fig. 6.

4 Experimental Results In this part, we show the prediction results from the various prediction models. Different parameters are used to compare the models, namely Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), accuracy, R-squared value and Mean Squared Error (MSE). Table 1 shows the comparison of the four models. From the prediction results in Table 1, we identify that the decision tree model gives the highest R-squared value, the lowest error rate and the highest accuracy (Fig. 7). In the graph, the decision tree gives an improved accuracy of 84.64% over the other models, and the Lasso model gives the lowest accuracy of 60.32% (Fig. 8). The decision tree also obtained the lowest RMSE, while Lasso had the highest. Finally, based on the predicted results, we conclude that the decision tree is the best model and gives higher accuracy than the other models.

Table 1 Quantitative evaluation with different models
Method   | LR         | SVR      | Lasso   | DT
Accuracy | 0.7281     | 0.6781   | 0.6032  | 0.8464
R-Square | 0.987      | 0.968    | 0.81    | 0.99
RMSE     | 8922       | 14,101   | 34,275  | 217
MAE      | 6118       | 76,429   | 21,058  | 5.68
MSE      | 79,604,145 | 1.99e+08 | 1.7e+08 | 47,184.93


Fig. 7 Accuracy analysis with different models

Fig. 8 RMSE analysis with different models

5 Conclusion In this paper, various machine learning models are proposed for predicting the house sale price without any manual process. We mainly followed four steps, namely data collection, data preprocessing, data analysis and prediction modelling. First, the data were collected; second, NaN values were removed in the data preprocessing step; third, the data characteristics were analysed; last, the analysed data were given to four prediction models for predicting the house sale price automatically. Different performance metrics are calculated for the four models, and the models are compared on that basis. Finally, based on the performance, we concluded that the decision tree gives the highest accuracy of 84.64% and a high R-squared value of 0.99 with a low error rate.

References 1. Belsley D, Kuh E, Welsch R (1980) Regression diagnostics: identifying influential data and source of collinearity. Wiley, NewYork 2. Quinlan JR (1993) Combining instance-based and model-based learning. Morgan Kaufmann, pp 236–243


3. Bourassa SC, Cantoni E, Hoesli M (2010) Predicting house prices with spatial dependence: a comparison of alternative methods. J Real Estate Res 32(2):139–160 [Online]. Available: http://EconPapers.repec.org/RePEc:jre:issued:v:32:n:2:2010:p:139-160 4. Bourassa SC, Cantoni E, Hoesli ME (2007) Spatial dependence, housing submarkets and house price prediction. Eng 330:332/658, ID: unige:5737 [Online]. Available: http://archive-ouverte. unige.ch/unige:5737 5. Nissan P, Janulewicz E, Liu L (2014) Applied machine learning project 4 prediction of real estate property prices in Montréal 6. Smola AJ, Schölkopf B (2004) A tutorial on support vector regression. Stat Comput 14(3):199– 222 7. Park B, Bae JK (2015) Using machine learning algorithms for housing price prediction: the case of Fairfax County, Virginia housing data. Expert Syst Appl 42(6):2928–2934 8. Minu R, Nagarajan G, Suresh A, Devi JA (2016) Cognitive computational semantic for high resolution image interpretation using artificial neural network. Biomed Res-India 27:S306– S309 9. Bhuriya D et al (2017) Stock market predication using a linear regression. In: 2017 International conference on electronics, communication and aerospace technology (ICECA), vol 2. IEEE 10. Vasanth K, Elanangai V, Saravanan S, Nagarajan (2016) FSM-based VLSI architecture for the 3 × 3 window-based DBUTMPF algorithm. In: Proceedings of the international conference on soft computing systems. Springer, New Delhi, pp 235–247 11. https://www.kaggle.com/ohmets/feature-selection-forregression/data 12. Nagarajan G, Minu RI, Jayanthiladevi A (2019) Brain computer interface for smart hardware device. Int J RF Technol 10(3–4):131–139 13. Nirmalraj S, Nagarajan G (2019) An adaptive fusion of infrared and visible image based on learning of sparse fuzzy cognitive maps on compressive sensing. J Ambient Intell Humanized Comput 1–11 14. Nirmalraj S, Nagarajan G (2021) Biomedical image compression using fuzzy transform and deterministic binary compressive sensing matrix. J Ambient Intell Humanized Comput 12(6):5733–5741

An Efficient Algorithm for Traffic Congestion Control A. Mary Posonia, B. Ankayarkanni, D. Usha Nandhini, J. Albert Mayan, and G. Nagarajan

Abstract Recently, finding a car parking space has become one of the hard tasks of modern-day life, and the management and supervision of vehicle parking areas is now a required area of study. Searching for a vacant parking space in heavy traffic is time-consuming, and the currently available parking-space recognition techniques are not robust or general for images taken from different camera viewpoints. Finding a suitable parking space in a busy city is thus a challenging problem that people face on a daily basis. The main purpose of this study is to discuss previous research on vehicle parking-space recognition in detail and to compare it from different aspects. To overcome these problems, we propose a plane-based method that adopts an effective 3D parking-lot model consisting of abundant planar surfaces. The plane-based 3D scene model plays a key role in handling inter-object occlusion and perspective distortion. To relieve the interference of unpredictable illumination variations and shadows, we propose a plane-based classification scheme. Furthermore, by providing a Bayesian classification framework that integrates the 3D model with the plane-based classification, we systematically infer the space occupancy. Finally, to overcome the insufficient lighting at night, we also present a preprocessing step to improve image quality. The experimental results show that the proposed framework can achieve robust detection of parking spaces both in the daytime and at night. Keywords Bayesian · Histogram · Classification · Parking space detection

1 Introduction A Wireless Sensor Network (WSN) is a system composed of units with the capability of measuring (sensing), computing and communicating, which provides
A. Mary Posonia (B) · B. Ankayarkanni · D. Usha Nandhini · J. Albert Mayan · G. Nagarajan Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_48


an administrative entity with the means to gather, process and respond to events in a particular territory or habitat. Any governmental, industrial, civil or commercial organization can act as the administrative entity. The habitat could be a biological setup, an Information Technology (IT) framework or the physical world. Classical applications of WSNs include monitoring, data collection, surveillance and telemetry in the biological and medical fields [1–3]. Sensor nodes are distributed in a special environment named the sensor field. Each sensor node can observe data from the sensor field, analyze them and forward the data to a central sink for further processing. Generally, data packets are routed from the source nodes to the sink in a multi-hop fashion. The sink may be static or mobile, and it may provide communication between the WSN and the outside world through the Internet, where end users have permission to access the data observed from the sensor field (Fig. 1). The processor in the processing unit of each sensor implements the methods each sensor node uses to interact with the other sensor nodes and cooperatively perform the assigned task. The processor is also responsible for other computational tasks, such as data aggregation and filtering, on the data perceived from the sensing environment and routed from the other nodes. The processing unit in the sensor node includes both Random Access Memory (RAM) and Read Only Memory (ROM) for storage purposes. The memory unit is further organized into instruction memory and data memory: instruction memory contains program instructions, and data memory holds raw and computed data [4–7]. The plane-based 3D scene model plays a key role in handling inter-object occlusion and perspective distortion. However, to relieve the interference of unpredictable illumination variations and shadows, we propose

Fig. 1 Scenario including currently available automated parking systems


a plane-based classification scheme. Furthermore, by providing a Bayesian classification framework that integrates the 3D model with the plane-based classification technique, we systematically infer the space occupancy. Finally, to overcome the insufficient lighting at night, we additionally present a preprocessing step to improve image quality [8–11].

2 Related Works Recently, the performance of WSNs has achieved a worldwide consideration. Emergence of wireless sensors has a tendency to substitute the wired devices which have been employed to observe various parameters such as pressure, temperature, etc. Generally, sensor nodes forward the data periodically, and in certain cases, it is continuously transmitted to the sink. In WSNs, the sensor nodes can be deployed in a random or in a pre-determined manner. These different deployment strategies make node communication a tedious task to be realized. Congestion and interference are the two major problems that may occur at any location in WSNs. The many-to-one convergent nature of data transmission from the sensor nodes to the sink in upstream direction leads to congestion. Since congestion, which results in increased transmission delay and packet drops, has direct influence on QoS of the application and energy efficiency, it has to be effectively controlled. Data transmission in WSNs leads to congestion that severely affects the performance of the entire network. The objective of this research work is to design congestion control approaches that effectively predict and alleviate congestion and thus improve detection reliability of the application, network performance in terms of throughput, end-to-end latency and network lifetime and also ensure on-time delivery of data. Furthermore, the approaches support service discrimination which is a key requirement for applications of Wireless Multimedia Sensor Networks (WMSNs).

3 Suggested Method The Probabilistic QoS Aware Congestion Control (PQACC) approach for Wireless Multimedia Sensor Networks is a traffic control approach. This approach considers the characteristics of Wireless Multimedia Sensor Networks (WMSNs) in controlling congestion. PQACC predicts congestion prior to its occurrence using a congestion prediction algorithm and mitigates it by regulating the data rate based on the importance of the data perceived by the sensors, the deployed location of the sensors and the predicted intensity of congestion. PQACC provides service discrimination, which is a key requirement for WMSNs. The Efficient Resource Control Mechanism using Multi-Criteria Decision-Making Techniques (ERCMCDM) takes advantage of the dense deployment of sensor nodes in the network to build alternative paths to divert excess traffic from the congested


Fig. 2 Suggested techniques

area. Two popular MCDM methods, namely the Analytic Hierarchy Process (AHP) (Saaty 1980, 2008) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) (Hwang and Yoon 1981), are used to select the best alternative path for data transmission. Another version of the ERCMCDM approach is also suggested, in which PROMETHEE (Brans and Vincke 1985), another popular MCDM technique, is used instead of TOPSIS to select the best alternative path (Fig. 2).
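As an illustration of the AHP weighting step, the sketch below derives criterion weights from the principal eigenvector of a pairwise comparison matrix over the three criteria used later (remaining buffer occupancy, remaining power and hop count to the sink); the Saaty-scale judgements in the matrix are assumptions made for illustration, not values taken from this work.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty scale) for the criteria:
# remaining buffer occupancy, remaining power, hop count to the sink.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# AHP weights = normalized principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
idx = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, idx].real)
w /= w.sum()
print("criterion weights:", w.round(3))

# Consistency check (random index for n = 3 is 0.58).
ci = (eigvals.real[idx] - 3) / (3 - 1)
print("consistency ratio:", round(ci / 0.58, 3))
```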

3.1 Parking Space Detection Stage The proposed ERCMCDM with PROMETHEE (ERCMCDM-P) is also a resource-control-based congestion mitigation approach. The working of ERCMCDM-P is similar to ERCMCDM; however, instead of TOPSIS (Hwang and Yoon 1981), PROMETHEE (Brans and Vincke 1985), a popular outranking method, is applied to evaluate and rank the alternatives in the process of alternative path creation. In the suggested work, the weights of the criteria were determined using the AHP method. The proposed parking-spot detection strategy covers only the area to one side of the vehicle.


Fig. 3 Characteristics of parking space markings

3.2 Feature Extraction and Classification As shown in Fig. 3, ERCMCDM was evaluated on various performance metrics such as throughput, end-to-end delay and percentage of successfully received packets. The results obtained by ERCMCDM were compared with the DAlPaS algorithm (Sergiou et al. 2014). Simulation results showed that ERCMCDM achieves lower end-to-end delay and a better throughput and percentage of received packets than DAlPaS. ERCMCDM achieves this improvement through proper weight assessment using the AHP method (Saaty 1980, 2008) and efficient evaluation and selection. The suggested approaches can be used in several applications such as traffic and video surveillance, inventory management, weather monitoring, healthcare services and home monitoring (Fig. 4).

4 Experimental Results The ERCMCDM approach introduced a resource control strategy to mitigate congestion in WSNs. In ERCMCDM, two popular MCDM techniques, namely the Analytic Hierarchy Process (AHP) (Saaty 1980, 2008) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) (Hwang and Yoon 1981), are used. ERCMCDM decides its alternative path choice on the basis of three parameters, namely remaining buffer occupancy, remaining power and the number of hops to reach the sink. The AHP technique was applied to determine the weight for each criterion.


Fig. 4 Feature detection results

TOPSIS was applied to ranks’ the decision alternatives’ and too choose the better alternative path to divert the excess traffic from the congested area toward the sink (Fig. 5).

5 Conclusion This study suggested a parking-space detection technique based on an AVM device and related vacancy analysis algorithms that facilitate both indoor parking, where several different light sources are present, and outdoor parking, where environmental variations can be large. We propose a plane-based method that adopts an effective 3D parking-lot model consisting of abundant planar surfaces. The plane-based 3D scene model plays a key role in handling inter-object occlusion and perspective distortion. To relieve the interference of unpredictable illumination variations and shadows, we propose a plane-based classification scheme. Furthermore, by providing a Bayesian classification framework that integrates the 3D model with the plane-based classification technique, we systematically infer the space occupancy. Finally, to overcome the insufficient lighting at night, we also present a preprocessing step to improve image quality.


Fig. 5 Experimental process

References 1. Groh BH, Friedl M, Linarth AG, Angelopoulou E (2014) Advanced real-time indoor parking localization based on semi-static objects. In: Proceedings of the 2014 17th international conference on information fusion (FUSION), Salamanca, Spain, 7–10 July 2014, pp. 1–7 2. Ata KM, Soh AC, Ishak AJ, Jaafar H, Khairuddin NA (2019) Smart indoor parking system based on Dijkstra’s algorithm. IJEEAS 2:13–20 3. Jeong SH, Choi CG, Oh JN, Yoon PJ, Kim BS, Kim M, Lee KH (2010) Low cost design of parallel parking assist system based on an ultrasonic sensor. Int J Autom Technol 11:409–416 4. Pohl J, Sethsson M, Degerman P, Larsson J (2006)A semi-automated parallel parking system for passenger cars. Proc Inst Mech Eng D J Automob Eng 220:53–65 5. Jung HG, Cho YH, Yoon PJ, Kim J (2008) Scanning laser radar-based target position designation for parking aid system. IEEE Trans Intell Transp Syst 9:406–424 6. Suhr JK, Jung HG, Bae K, Kim J (2010) Automatic free parking space detection by using motion stereo-based 3D reconstruction. Mach Vis Appl 21:163–176 7. Jung HG, Kim DS, Yoon PJ, Kim J (2006) Parking slot markings recognition for automatic parking assist system. In: Proceedings of the IEEE intelligent vehicles symposium, Tokyo, Japan, 13–15 June 2006, pp 106–113 8. Lee S, Seo SW (2016) Available parking slot recognition based on slot context analysis. IET Intell Transp Syst 10:594–604 9. Suhr JK, Jung HG (2013) Full-automatic recognition of various parking slot markings using a hierarchical tree structure. Opt Eng 52:037203


10. Sevillano X, Màrmol E, Fernandez-Arguedas V (2014) Towards smart traffic management systems: vehicle on-street parking spot detection based on video analytics. In: Proceedings of the 17th international conference on information fusion, Salamanca, Spain, 7–10 July 2014 11. Nagarajan G, Minu RI, Jayanthiladevi A (2019) Brain computer interface for smart hardware device. Int J RF Technol 10(3–4):131–139 12. Lee, Grace KL, Edwin HW Chan (2008) The analytic hierarchy process (AHP) approach for assessment of urban renewal proposals. Social Indic Res 89(1):155–168

LSTM-Based Epileptic Seizure Detection by Analyzing EEG Signal Shashank Thakur, Aditi Anupam Shukla, R. I. Minu, and Bhasi Sukumaran

Abstract Seizures are getting more widespread as the population continues to expand at a rapid rate. In India, almost a million instances are reported each year, with another 50 million worldwide. A seizure is a curable disorder, and the cure rate can be improved by detecting it early and treating it gradually with appropriate methods and medication. A seizure is triggered by a breakdown in nerve cells, which can happen as a result of a condition such as epilepsy. It can be discovered by examining the EEG data with the long short-term memory (LSTM) model. Keywords Epileptic · Seizure · Epileptic detection · LSTM · Nerve cells · EEG

1 Introduction Seizures are common in today's environment; in India alone, more than one million cases are reported every year. A seizure is a curable condition that demands continual monitoring and treatment. It is triggered by a breakdown in nerve cells, which can happen as a result of a condition such as epilepsy, the most prevalent condition in the Central Nervous System (CNS) group [1]. The brain deviates from its normal behavior during a seizure; Fig. 1 shows the difference in the EEG. By disrupting the neurological system, a seizure creates disturbance throughout the body, which can be recorded as an Electroencephalogram (EEG) signal and analyzed individually by neurologists. The long short-term memory (LSTM) model [2] is used to make this work easier and to predict any future occurrence in a person. The spectrogram approach is employed in this work to convert EEG data [3, 27] for an LSTM-based algorithm that predicts the future possibilities in a patient's life connected to epilepsy

S. Thakur · A. A. Shukla · R. I. Minu (B) SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India e-mail: [email protected] B. Sukumaran SRM Medical College Hospital & Research Centre, Kattankulathur, Tamil Nadu, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_49




Fig. 1 Normal waves versus sharp seizure waves [5]

and thereby categorize the different seizures [4]. An overall review of different researchers' findings is shown in Table 1.

2 Algorithm Used 2.1 LSTM The algorithm implemented in the proposed system is a CNN along with LSTM, as shown in Fig. 2. The LSTM algorithm is utilized in the classifier stage of the architecture diagram. On comparing the various classifiers used in the reference papers, such as KNN and SVM, it was evident that the time consumption of the LSTM module was considerably lower [13–15]. LSTM consists of multiple fully connected layers and has the ability to selectively retain relevant data for a longer period of time, which is then used in sequential prediction [16–18]. LSTM is a subset of recurrent neural networks that can learn order dependencies in sequence prediction, and hence LSTM is also a complex branch of deep learning.
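A minimal Keras sketch of such a CNN + LSTM classifier for one 178-sample EEG window is given below; the layer sizes and the binary seizure/non-seizure output are illustrative assumptions, not the exact architecture of this work.

```python
import tensorflow as tf

# A minimal CNN + LSTM stack for EEG windows of 178 samples,
# reshaped to 178 timesteps x 1 channel; layer sizes are illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(178, 1)),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # seizure vs non-seizure
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```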



Table 1 Review on different approach Ref

Dataset and feature component method used

Classification method

Performance metrics used

[6]

Twenty-one intracranialictal recordings acquired from the database of University Hospital of Freiburg

Discrete wavelet transform (DWT) support vector machines (SVM)

Average sensitivity = 93% Average specificity = 99%

[7]

CHB-MIT

Independently recurrent neural network (IndRNN)

Cross-validation

[8]

50 pairs of focal/non-focal EEG signal

Empirical wavelet transform technique least squares SVM (LS-SVM)

Accuracy = 90% Sensitivity = 88% Specificity = 92%

[9]

CHB-MIT

LSTM-RNN

Sensitivity and low false prediction rates

[3]

CHB-MIT

Genetic algorithm (GA) and particle swarm optimization (PSO) support vector machines (SVM)

Accuracy = 79.07% F1-score = 77.04%

[10]

CHB-MIT

Convolutional neural network (CNN) Artificial neural network (ANN)

Accuracy of CNN = 99.07% Accuracy of ANN = 98.62%

[11]

University of Bonn, Germany weighted visibility graph entropy (WVGE)

SVM, K-nearest neighbor (KNN), decision tree (DT)

Average accuracy = 99.20% for 2-class classification

[2]

University of Bonn, Germany Weighted visibility graph entropy (WVGE)

Continuous wavelet transform Classification (CWT) - PCA-SVM accuracies (more than 95%

[12]

Freiburg database

Signal energy and discrete Accuracy = 88.67%, wavelet transform multi-class specificity = 90.00%, support vector machines sensitivity = 95.00% (SVM)

Fig. 2 LSTM



2.2 Spot-Check Algorithms It is impossible to claim in advance which algorithm will perform best on a particular dataset; if everything were already known, there would be no need for machine learning, and no single algorithm outperforms all others on every problem. Spot-checking compares leading approaches [11] to find which performs better and which does not; instance-based approaches such as support vector machines (SVM) and k-nearest neighbors (KNN) are evaluated here [19, 20]. It aids in discovering the optimum method for our datasets by estimating how each algorithm will perform, and it can also be a manual way of comparing two or more algorithms on the merit of their accuracy.
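A small spot-checking sketch with scikit-learn is shown below; X and y are assumed to hold the 178-sample EEG feature windows and the binary labels prepared later in the paper.

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

# X, y are assumed to hold the EEG windows and binary seizure labels.
candidates = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "LogReg": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}
for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```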

2.3 Support Vector Machines (SVMs) Support vector machines are an excellent and adaptable supervised approach for regression and classification, and the concept behind constructing them can also be applied to clustering problems. To proceed with Bayesian classification [6], a model representing the distribution of each batch in the dataset is determined [21], and the result obtained is used to choose labels for new points, as shown in Fig. 3.

3 System Design Figure 4 shows the overall system design of the proposed system. The seizure can be recognized by analyzing the data using the LSTM machine learning method. Because of its inherent structure and properties, LSTM is a good choice for processing
Fig. 3 Support vector machine



Fig. 4 System architecture

visual input. The dataset is processed using Google Colab, and the results can be further extended by presenting them in graph form using Python libraries [8].

4 Database The dataset [9] is a clinical EEG recording of various patients, accessible at the UCI machine learning repository. The database, a subset comprising the epilepsy details of each patient as identified by neurologists, is updated at regular intervals. Patients' consent is obtained, and data are gathered in collaboration with partners, including the National Institutes of Health. The original dataset was a compilation of 5 folders of 100 files each, where each file belongs to a particular patient. Each file records EEG signals from the



Fig. 5 Database

brain for about 23.6 s, sampled as 4097 data points, each representing a different point in the time series. There are 179 columns in total, i.e. {x1, x2, …, x178} and the resultant column y. The y column holds 5 categorical values as follows, where classes 2–5 are signals taken from patients without an epileptic seizure and only class 1 is seizure activity:
5 - EEG signal recorded with the patient's eyes kept open
4 - EEG signal recorded with the patient's eyes closed
3 - EEG signal recorded where the region of the tumor was successfully located in the patient's brain
2 - EEG signal from the tumor region of the patient's brain
1 - EEG signal of seizure activity
The 4097 data points were divided into 23 chunks of 178 data points each. A snapshot of the database is shown in Fig. 5. Since there are 500 patients with 23 chunks each, this gives a total of 11,500 rows of information.
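A minimal loading-and-binarization sketch is shown below; the CSV file name and the exact column names (X1..X178 and y) are assumptions based on the public version of this dataset.

```python
import pandas as pd

# UCI "Epileptic Seizure Recognition" data (file name assumed); columns X1..X178
# hold the samples and y holds the label in {1..5}; only y == 1 is seizure activity.
df = pd.read_csv("Epileptic_Seizure_Recognition.csv")

X = df.filter(regex=r"^X\d+$").values        # 11,500 rows x 178 samples
y = (df["y"] == 1).astype(int).values        # 1 = seizure, 0 = non-seizure

print(X.shape, "seizure fraction:", y.mean())
```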

5 Implementation The activity and reaction of the human brain during a seizure are recorded using electrophysiological analysis. This is done by placing an EEG cap on the patient's head and recording the signal graphically. The collected EEG data [22] are then transformed into a spectrogram, which provides a visual depiction (a sketch of this transformation is given after the list below). The following three processes are followed in order to examine the generated data:
i. Processing the data
ii. Defining the label
iii. Fitting data into the model
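The sketch below shows the spectrogram transformation of a single 178-sample window using SciPy; the sampling rate and the window parameters are assumptions made for illustration.

```python
import numpy as np
from scipy import signal

# One EEG window: 178 samples covering roughly one second (fs assumed ~178 Hz).
fs = 178
eeg = np.random.randn(178)   # placeholder; a real row of the dataset goes here

# A short-time Fourier transform turns the 1-D signal into a time-frequency image.
f, t, Sxx = signal.spectrogram(eeg, fs=fs, nperseg=64, noverlap=32)
print(Sxx.shape)  # (frequency bins, time frames)
```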

5.1 Processing the Data Two procedures are performed in the initial step of this study, namely eliminating redundant data and noisy data [23]. This is accomplished through hardware or software



solutions, such as the built-in Google Colab tools, with the help of Python libraries for data cleaning.

5.2 Defining the Label The data acquired after processing are used in the Colab model in the second phase of this experiment. The transformed dataset is run with the numerical data's time and frequency signals using the TensorFlow API on Colab [24].

5.3 Fitting Data into the Model In the third and final part of this analysis, the supplied dataset is divided into two groups, a training set and a test set, where the test set is 20% of the data and the remainder is the training set [12]. The samples are then distributed between the two sets, and after processing, the dataset is checked for quality to ensure that no samples are incorrectly assigned to a class. The threshold value is set to 0.5, and each sample is then classified as positive (seizure) or negative (non-seizure).
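A minimal sketch of this split-fit-threshold step is given below, assuming X, y and the compiled model from the earlier sketches; the epoch and batch settings are illustrative only.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# X, y and `model` are assumed from the earlier sketches
# (178-sample windows, binary labels, compiled CNN + LSTM network).
X = X.reshape(-1, 178, 1).astype("float32")
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model.fit(X_train, y_train, epochs=20, batch_size=64, validation_split=0.1)

# Apply the 0.5 decision threshold on the sigmoid output.
prob = model.predict(X_test).ravel()
pred = (prob >= 0.5).astype(int)
print("test accuracy:", (pred == y_test).mean())
```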

6 Result Candidate models, such as the R2-based regression model, KNN, Bayesian network and random forest [25, 26, 28–30], are examined to choose the best-suited one. The results of the classification models indicated above are shown with the help of graphs, and the one with the best accuracy is chosen for further processing. Several models for identifying different types of seizures are now available. Using the numerical database, the model constructed with LSTM gave us an accuracy of 87.34% in identifying seizures. Nevertheless, the LSTM model's future prospects suggest that, using image processing techniques, one may pinpoint the location of a seizure in a specific section of the brain with an accuracy greater than 87.34%. Figures 6 and 7 show the implementation results. Fig. 6 Accuracy computation


Fig. 7 Seizure accuracy using LSTM

7 Conclusions Currently, the identification of seizures from EEG signals is performed using numerical data. Automatic seizure categorization has a number of benefits, including real-time seizure surveillance, a higher accuracy rate, cost savings, and the utilization of the LSTM sample structure. The LSTM, or long short-term memory, allows quick development of innovative algorithms that achieve adequate prediction performance by extracting features from the waveform of the EEG data. Despite these results, the scope of this approximation is restricted by the sophistication of the attributes. This creates issues of computing efficiency, system correctness, database feature extraction, and signal noise filtration, so new strategies for minimizing these problems in all phases of analysis are necessary. In random testing, the spectrogram provided poor resolution, which affects the output, considering that high resolution requires more computing labor and training time. To overcome these varied obstacles, a variety of feature extraction and selection approaches are used, or a genetic algorithm (GA) can be used instead, given the high dimension of the data vector. In general, existing approaches look at a single EEG signal to see whether it exhibits seizure or non-seizure activity at a specific time, whereas the goal here is to categorize many EEG signals at once. When two datasets are classified, their accuracy and techniques are compared, resulting in a single dataset with findings from one study. Future work includes automating and efficiently handling seizures by developing an online tool and forecasting the result.

References 1. Raghu S, Sriraam N, Temel Y, Rao SV, Kubben PL (2020) EEG based multi-class seizure type classification using convolutional neural network and transfer learning. Neural Netw 124:202–212


2. Peachap AB, Tchiotsop D (2019) Epileptic seizures detection based on some new laguerre polynomial wavelets, artificial neural networks and support vector machines. Inf Med Unlocked 16:100209 3. Subasi A, Kevric J, Canbaz MA (2019) Epileptic seizure detection using hybrid machine learning methods. Neural Comput Appl 31(1):317–325 4. Zhou M, Tian C, Cao R, Wang B, Niu Y, Hu T, Xiang J (2018) Epileptic seizure detection based on EEG signals and CNN. Front Neuroinform 12:95 5. https://www.mayoclinic.org/diseases-conditions/epilepsy/diagnosis-treatment/drc-20350098 6. Tzimourta KD, Tzallas AT, Giannakeas N, Astrakas LG, Tsalikakis DG, Tsipouras MG (2018) Epileptic seizures classification based on long-term EEG signal wavelet analysis. In Precision medicine powered by pHealth and connected health. Springer, Singapore, pp 165–169 7. Yaswanth P, Pranav AS, Minu RI (2020) Automatic seizure classification using CNN. In: 2020 International conference on communication and signal processing (ICCSP), July 2020. IEEE, pp 0743–0745 8. Bhattacharyya A, Sharma M, Pachori RB, Sircar P, Acharya UR (2018) A novel approach for automated detection of focal EEG signals using empirical wavelet transform. Neural Comput Appl 29(8):47–57 9. Tsiouris KM, Pezoulas VC, Zervakis M, Konitsiotis S, Koutsouris DD, Fotiadis DI (2018) A Long Short-Term Memory deep learning network for the prediction of epileptic seizures using EEG signals. Comput Biol Med 99:24–37 10. Boonyakitanont P, Lek-uthai A, Chomtho K, Songsiri J (2019) A comparison of deep neural networks for seizure detection in EEG signals. bioRxiv, 702654 11. Bayram KS, Kızrak MA, Bolat B (2013) Classification of EEG signals by using support vector machines. In: 2013 IEEE INISTA. IEEE, pp 1–3 12. Acharya UR, Oh SL, Hagiwara Y, Tan JH, Adeli H (2018) Deep convolutional neural network for the automated detection and diagnosis of seizure using EEG signals. Comput Biol Med 100:270–278 13. Nagarajan G, Minu RI (2016) Multimodal fuzzy ontology creation and knowledge information retrieval. In: Proceedings of the International conference on soft computing systems. Springer, New Delhi, pp 697–706 14. Nagarajan G, Minu RI, Muthukumar B, Vedanarayanan V, Sundarsingh SD (2016) Hybrid genetic algorithm for medical image feature extraction and selection. Proc Comput Sci 85:455– 462 15. Simpson SV, Nagarajan G (2021) A table based attack detection (TBAD) scheme for internet of things: an approach for smart city environment. In: 2021 International conference on emerging smart computing and informatics (ESCI). IEEE, pp 696–701 16. Rajasekaran Indra M, Govindan N, Divakarla Naga Satya RK, Somasundram David Thanasingh SJ (2020) Fuzzy rule based ontology reasoning. J Ambient Intell Human Comput 1–7 17. Dhanalakshmi A, Nagarajan G (2020) Convolutional neural network-based deblocking filter for SHVC in H. 265. SIViP 14:1635–1645 18. Sajith PJ, Nagarajan G (2021) Optimized intrusion detection system using computational intelligent algorithm. In: Advances in electronics, communication and computing. Springer, Singapore, pp 633–639 19. Nirmalraj S, Nagarajan G (2020) Fusion of visible and infrared image via compressive sensing using convolutional sparse representation. ICT Express 20. Govidan N, Rajasekaran Indra M (2018) Smart fuzzy-based energy-saving photovoltaic burp charging system. Int J Ambient Energy 39(7):671–677 21. Yamada T, Meng E (2009) Practical guide for clinical neurophysiologic testing: EEG. Lippincott Williams & Wilkins, Philadelphia, PA 22. 
Ramos-Aguilar R, Olvera-López JA, Olmos-Pineda I (2017) Analysis of EEG signal processing techniques based on spectrograms. Res Comput Sci 145:151–162 23. Elahian B, Yeasin M, Mudigoudar B, Wheless JW, BabajaniFeremi A (2017) Identifying seizure onset zone from electrocorticographic recordings: a machine learning approach based on phase locking value. Seizure 51:35–42


24. Moghim N, Corne DW (2014) Predicting epileptic seizures in advance. PloS one 9(6):e99334 25. Page A, Shea C, Mohsenin T (2016) Wearable seizure detection using convolutional neural networks with transfer learning. In: 2016 IEEE international symposium on circuits and systems (ISCAS). IEEE, pp 1086–1089 26. Ansari AH, Cherian PJ, Caicedo A, Naulaers G, De Vos M, Van Huffel S (2019) Neonatal seizure detection using deep convolutional neural networks. Int J Neural Syst 29(04):1850011 27. Tzallas AT, Tsipouras MG, Fotiadis DI (2009) Epileptic seizure detection in EEGs using time–frequency analysis. IEEE Trans Inf Technol Biomed 13(5):703–710 28. Aloysius N, Geetha M (2017) A review on deep convolutional neural networks. In: 2017 International conference on communication and signal processing (ICCSP). IEEE, pp 0588– 0592 29. Gajic D, Djurovic Z, Di Gennaro S, Gustafsson F (2014) Classification of EEG signals for detection of epileptic seizures based on wavelets and statistical pattern recognition. Biomed Eng: Appl Basis Commun 26(02):1450021 30. Yao X, Cheng Q, Zhang G-Q (2019) A novel independent rnn approach to classification of seizures against non-seizures. arXiv preprint arXiv:1903.09326

Machine Learning for Peer to Peer Content Dispersal for Spontaneously Combined Finger Prints T. R. Saravanan and G. Nagarajan

Abstract Network security begins with authenticating the client, normally with a username and a password. Earlier work presented a code construction algorithm that runs in polynomial time in the number of edges, the number of sinks, and the minimum size of the min-cut. It was also noted that, although the results show that network coding does not improve the achievable transmission rate when all nodes except the source are sinks, finding the optimal multicast rate without coding is NP-hard. Random network coding was proposed as an approach to guarantee the reliability of the network in a distributed setting where the nodes have no knowledge of the network topology, which may change over time. To overcome these challenges, a machine learning-based efficient fingerprint system is developed which outperforms the existing work. Keywords Wormhole detection · Trusted authority · Delegation authority

1 Introduction Peer-to-peer authentication blocks a remote computer from connecting to a client computer until the client computer has verified that remote computer. Since this requires only one credential, confirming the user name and password, it is one-factor authentication. With two-factor authentication, something such as mobile number-based calls or text messages is also used. For three-factor verification [1], the client additionally uses a retinal scan or fingerprint. In the existing framework, integrity is not available. The proposed work is an approach to maintain the integrity of content data [2] and to distribute the data with highly T. R. Saravanan (B) Department of Computational Intelligence, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India G. Nagarajan Department of CSE, Sathyabama Institute of Science and Technology, Kattankulathur, Tamil Nadu, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_50


efficient security. With random linear network coding [3], the method succeeds with probability close to one for network nodes with full coordination between the storage area and the downloader. A signature scheme built on the hash-and-sign paradigm hashes the message and then signs the hash, symbolically σ(H(M)). Recently, a new on-the-fly verification method has been proposed which utilizes a classical homomorphic hash function. However, this strategy is hard to apply to network coding because of its high computational and communication overhead. Network coding [4, 5] is a strategy for improving the flow of digital information in a network by transmitting coded evidence about messages. The technique is used to improve network throughput efficiency and scalability as well as robustness against attacks and packet dropping. Attackers can modify the data, make the system unstable, and corrupt its processes. Therefore, content verification is an important and practical issue when network coding is used. Content distribution [6] operates in a fully distributed environment.

2 Literature Review Wang et al. [7] proposed a high-throughput overlay infrastructure in which network coding, recently proposed in information theory as a new ingredient of the information multicast problem, achieves the optimal transmission rate or cost. End hosts in overlay networks are natural candidates to perform network coding, owing to their available computational capabilities. In that paper, the authors sought to bring theoretical advances in network coding to the practice of high-throughput multicast in overlay networks. They completed the first real implementation of network coding in end hosts, as well as decentralized algorithms to construct the routing strategies and to perform random code assignment. Their experience suggests that approaching maximum throughput with network coding is not only theoretically sound but also practically promising. They also present a number of new challenges in designing and realizing coded data dissemination, and corresponding solution techniques to address them. Krohn et al. [8] proposed an efficient content distribution scheme, observing that the quality of peer-to-peer content delivery can suffer when malicious participants intentionally corrupt content. Systems using simple block-by-block downloading can verify blocks with conventional cryptographic signatures and hashes, but these techniques do not apply well to more elegant systems that use rateless erasure codes for efficient multicast transfers. That paper presents a practical scheme, based on homomorphic hashing, that enables a downloader to perform on-the-fly verification of erasure-encoded blocks. Fuentes et al. [9] proposed efficient secure content distribution in peer-to-peer systems. The work describes a fault-tolerant collaborative architecture for storing and distributing multimedia files based on the information dispersal algorithm (IDA). The proposed architecture relies on the P2P paradigm, so that the nodes can take both the server and client roles simultaneously. In this way, a


file can be distributed efficiently to many peers by using the information dispersal algorithm, and only a subset of these peers is needed to rebuild the original file. Recent advances in communication networks and the computing capabilities of PCs have opened up new opportunities for content storage and distribution, where distributed storage and media delivery over the Internet have gained great popularity. Ariza-Garzón et al. [10] proposed a machine learning model for peer-to-peer networks. Typical machine learning algorithms offer high prediction performance, but most of them lack explanatory power. This shortcoming can, however, be addressed with the explainability tools proposed in the last few years, for example the SHAP values. In that work, the authors review the well-known logistic regression model and several machine learning algorithms for credit scoring in P2P lending. The comparison reveals that the machine learning alternative is superior not only in terms of classification performance but also in explainability. More precisely, the SHAP values reveal that machine learning algorithms can capture dispersion, nonlinearity, and structural breaks in the relationships between each feature and the target variable.

3 Proposed Work 3.1 Overview Finding the optimal multicast [11] rate without coding is NP-hard. When network nodes are not discoverable, the achievable transmission rate is not improved and the approach is the slowest. In the proposed framework, a new on-the-fly verification scheme is proposed based on a faster homomorphic hash function, and its security is demonstrated. We also consider the computation and communication cost incurred during the content distribution process. We identify the different sources of this cost and examine approaches to eliminate or reduce it. Specifically, we propose a sparse variant of traditional random linear network coding, where only a small constant number of blocks are combined each time. Moreover, we examine some possible improvements under certain parameter settings, and ways to trade off the different costs [12].

3.2 System Architecture We consider the non-streaming transfer of very large files over erasure channels such as the Internet. Normally, a file F is divided into n evenly sized blocks. In the proposed architecture, a single content item is partitioned into a number of blocks; the larger file is split into a number of parts, and sending these parts reduces the bandwidth consumed. Figure 1 clearly


Fig. 1 System architecture

explains the system architecture. We also make the encoded blocks safer by applying hashing to the data: if the data are sent together with their hash values, it is hard for malware to inject or falsify data, so the integrity of the data is better protected by the hash function. The encoded blocks are sent over the peer-to-peer network, with each piece of data transmitted by different nodes; in this situation it is also hard for malware and attackers to identify the peer and track the data in transit. At the destination, the hash values are downloaded within the flow of network coding. After downloading the data, verification and reconstruction are performed so that the destination can use the data.


Fig. 2 Signature message block download

3.3 Signature Message Block Download The entire signature block downloading process comprises a login page on the peer network, the compression of the large file into small hashes (hash-CDN), and finally the download of the reconstructed data. Figure 2 clearly illustrates the signature-based message block download process. Because a modest number of files accounts for a sizable share of all transfers, this is a novel construction that allows receivers to verify the integrity of check blocks immediately, before consuming a lot of bandwidth or polluting their download caches. In our scheme, a file F is compressed down to a smaller hash value, H(F), with which the receiver can verify the integrity of any possible check block. A minimal per-block hashing sketch is given below.
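The paper does not specify the hash construction beyond H(F), so the following is only a minimal sketch of splitting a file into blocks and verifying each received block against a list of per-block digests; the block size and the use of SHA-256 are assumptions, not the homomorphic hash of the cited scheme.

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # assumed block size

def split_blocks(data: bytes, size: int = BLOCK_SIZE):
    return [data[i:i + size] for i in range(0, len(data), size)]

def block_digests(blocks):
    # One digest per block; the publisher distributes this small list ahead of time.
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def verify_block(block: bytes, index: int, digests) -> bool:
    # A downloader can check each block on the fly, before storing or re-sharing it.
    return hashlib.sha256(block).hexdigest() == digests[index]

# Example usage with an in-memory "file".
data = b"example content " * 10000
blocks = split_blocks(data)
digests = block_digests(blocks)
print(all(verify_block(b, i, digests) for i, b in enumerate(blocks)))  # True
```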

4 Simulation Results 4.1 Node Login with Peer Process Application users log in through the user interface; the GUI is the medium that connects the user with the media database. On the login screen, the client enters his or her user name and password, which are checked against the database; if the username and password are valid, the user can access the database. Figure 3 explains the peer process. In the peer network module, the content is split into a number of named packets. By splitting into packets, only less bandwidth is consumed. The split content is sent via the peer network, and this activity is carried out by first creating the network, that is, peer creation. A distributed network


Fig. 3 Node login with peer process

is simply one node interlinked with other nodes; a single request receiving responses from multiple nodes is what is called a peer-to-peer network.

4.2 Large Compressed Small Hash-CDN In the large compressed small hash-CDN module, the content is split into a number of named packets. By splitting into packets, only less bandwidth is consumed. A random linear coding algorithm is used in this content distribution, and the content of the data is sent as packets. The original X is divided into n blocks x1, x2, …, xn, and every node computes and forwards some random linear combination to each of its downstream nodes. Figure 4 depicts the key generation process. The hash value h(X) is signed using a digital signature scheme S with a signing key k. The signature Sk(h(X)) is then used to verify the received data Y. When nodes can perform coding rather than simply forwarding data, multicast sessions can achieve their maximum network flow simultaneously; this procedure is referred to as network coding. In the signature message block download, a modest number of files accounts for a sizable share of total transfers; the construction allows receivers to verify the integrity of check blocks quickly, before consuming a lot of bandwidth or polluting their download caches. A minimal coding sketch is given after this paragraph.
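The field and coefficients are not specified in the text, so the following is only a minimal sketch of forming random linear combinations of blocks over a small prime field; practical systems typically use GF(2^8), and the block and generation sizes here are illustrative assumptions.

```python
import random

P = 257  # small prime field for illustration; practical systems use GF(2^8)

def encode(blocks, rng=random):
    """Return (coefficients, coded_block): a random linear combination of the blocks."""
    coeffs = [rng.randrange(P) for _ in blocks]
    coded = [0] * len(blocks[0])
    for c, block in zip(coeffs, blocks):
        for i, symbol in enumerate(block):
            coded[i] = (coded[i] + c * symbol) % P
    return coeffs, coded

# Original content X split into n blocks of equal length (symbols in [0, 255]).
blocks = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]

# Each node forwards fresh random combinations; n independent ones suffice to decode
# (decoding is Gaussian elimination on the coefficient vectors, omitted here).
for _ in range(3):
    coeffs, coded = encode(blocks)
    print(coeffs, coded)
```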

4.3 Receiving Packet with Packet Strength When the decoder receives blocks whose degree is greater than one, a similar kind of process applies. In the encoding process, auxiliary blocks act like message blocks; in the decoding process, they act like check blocks. When the decoder


Fig. 4 Key generation process

recovers an auxiliary block, it then adds it to the pool of unrecovered check blocks. Figure 5 clearly explains the data strength process. When the decoder recovers a message block, it simply writes the block out to a file in the appropriate location. Distributed P2P-CDNs consist of nodes that work simultaneously as publishers and downloaders of content. Nodes transfer content by sending contiguous file pieces over point-to-point links.

5 Conclusion In this paper, the problem of on-the-fly verification of the integrity of data in transit is considered. Although a previous scheme based on homomorphic hash functions is applicable, experiments are conducted to analyze the efficiency of the proposed hash function, as well as the efficiency of the proposed sparse random linear network coding. The results show that the new hash function can achieve reasonable speed, and the sparse variant performs as well as plain random network coding using typical parameters. In future work, further experiments will examine the efficiency of the proposed hash function and the effectiveness of the proposed sparse random linear network coding. We will analyze the data received by every node and decide whether it is sufficient for the node to reconstruct the original data.


Fig. 5 Packet strength receiving process

Acknowledgements The authors thank everyone who supported this work.

References 1. Li SR, Yeung RW, Cai N (2003) Linear network coding. IEEE Trans Inf Theory 49(2):371–381 2. Jaggi S, Sanders P, Chou PA, Effros M, Egner S, Jain K, Tolhuizen LM (2000) Polynomial time algorithms for multicast network code construction. IEEE Trans Inf Theory 51(6):1973–1982 3. Acedanski S, Deb S, Medard M, Koetter R (2005) How good is random linear coding based distributed networked storage. In: Proceedings of workshop network coding, theory and applications 4. Chou PA, Wu Y, Jain K (2005) Practical network coding. In: Proceedings of Allerton conference on communication, control, and computing 5. Zhu Y, Li B, Guo J (2004) Multicast with network coding in application-layer overlay networks. IEEE J Selected Areas Comm 22(1):107–120 6. Gkantsidis C, Rodriguez PR (2005) Network coding for large scale content distribution. In: Proceedings of IEEE INFOCOM, pp 2235–2245 7. Wang M, Li Z, Li B (2005) A high-throughput overlay multicast infrastructure with network coding. In: Proceedings of international workshop quality of service (IWQoS) 8. Krohn M, Freedman M, Mazières D (2004) On-the-fly verification of rateless erasure codes for efficient content distribution. In: Proceedings of IEEE symposium on security and privacy


9. Fuentes FAL, Almanza JM, Marcelín-Jiménez R, Velázquez-Méndez B (2019) Efficient content distribution and storage P2P system based on information dispersal. In: 6th international conference on control, decision and information technologies (CoDIT) 10. Ariza-Garzón MJ, Arroyo J, Caparrini A, Segovia-Vargas MJ (2020) Explainability of a machine learning granting scoring model in peer-to-peer lending. IEEE Access 8:64873–64890 11. Ahlswede R, Cai N, Li S-YR, Yeung RW (2003) Network information flow. IEEE Trans Inf Theory 46(4):1204–1216 12. Koetter R, Médard M (2003) An algebraic approach to network coding. IEEE/ACM Trans Netw 11(5):782–795

Recognition of APSK Digital Modulation Signal Based on Wavelet Scattering Transform Mustafa R. Ismael, Haider J. Abd, and Mohammed Taih Gatte

Abstract Amplitude phase shift keying (APSK) is employed as an efficient digital modulation scheme in modern satellite communication. Many researchers in the field of automatic modulation recognition have been drawn to the automatic recognition of APSK signals. This paper presents a new machine learning-based algorithm for APSK modulation recognition using the wavelet scattering transform. The algorithm handles three orders of APSK signals: 16APSK, 32APSK, and 64APSK. The scattering framework is performed with two filter banks to extract the scattering coefficients (SC), which are then applied to three well-known machine learning techniques: support vector machine (SVM), k-nearest neighbors (KNN), and Naïve Bayes (NB). The proposed method attained high accuracies at low signal-to-noise ratio (SNR) and outperformed existing works. Keywords Wavelet scattering transform · Digital modulation recognition · APSK · DVB-S2

1 Introduction In wireless communication, automatic modulation recognition (AMR) is defined as the identification of the modulation format of the received noisy signal [1]; this step is conducted before the detection and demodulation of the signal [2]. AMR has several applications in the military and civilian fields [3], such as satellite communications, surveillance, and threat analysis [4]. In satellite communication systems, there is an improvement in services such as TV broadcasting, voice calls, the Internet, and high-quality video streaming. This improvement increases the data traffic and the number of orbits, and therefore a robust and reliable automatic modulation classification system is a must [5]. Satellite broadcasting implements APSK modulation due to its powerful spectral efficiency and strength against nonlinear satellite channels [5–7]. Digital video broadcasting of satellite-2nd generation (DVB-S2) M. R. Ismael (B) · H. J. Abd · M. T. Gatte College of Engineering, Electrical Engineering Department, University of Babylon, Babel, Iraq e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_51


adopts high-order modulation formats, such as 16APSK and 32APSK, which are built to suit nonlinear satellite channels. Automatic modulation recognition is an important step in digital satellite receivers before signal detection and demodulation [5]. Generally, two approaches are widely used in the recognition of digital modulation schemes: maximum likelihood and feature-based methods [2, 3, 8, 9]. The drawbacks of the first method are the requirement of prior knowledge about the channel and the complexity of implementation [2]. On the other hand, feature-based (FB) algorithms take advantage of pattern recognition techniques to recognize different modulation schemes using specific features of the signal. Due to their low complexity and substantial performance, they are widely used in the field of AMR [9]. Usually, the FB method is implemented in two steps: feature extraction and classification [2]. Information entropy along with ensemble learning has been proposed in [10, 11]. Higher-order statistics have been efficiently applied to classify different kinds of digital modulation schemes [12]. A decision tree classifier has been employed to determine the relationship between higher-order cumulants [13]. Higher-order statistics (cumulants) were used alongside the SNR estimated from the baseband signal [14]. In the presence of additive white Gaussian noise (AWGN), higher-order cumulant features have been extracted from the raw signal [8]. The wavelet transform (WT) is considered a powerful tool for analyzing signals at different frequencies and resolutions [1]. Histogram peaks of the detailed wavelet coefficients were adopted for distinguishing between quadrature phase shift keying (QPSK) and Gaussian minimum shift keying (GMSK) signals [15]. A template matching method has been used to classify multilevel APSK signals based on the Kullback–Leibler divergence [16]. An identification model of modulation based on WT and statistical parameters was proposed in [17]. Higher-order statistical moments have been extracted as features from the continuous wavelet transform [18]. The mean value, variance, and central moments have been used to construct the feature set from the continuous wavelet transform [19]. The focus of this paper is on the use of the wavelet scattering transform (WST) to generate a feature set from three orders of APSK signals. Three classifiers are implemented, including the support vector machine (SVM), k-nearest neighbor (KNN), and Naïve Bayes (NB), to classify the extracted feature set. The remaining sections of the paper are outlined as follows: Sect. 2 specifies the suggested algorithm. Section 3 discusses the material used in this study, the setup of the experiment, and the obtained results with discussion. Section 4 is the conclusion and future work.

2 Material and Methods The proposed method is implemented in two stages, training and testing as shown in Fig. 1. WST adopts for the feature extraction of APSK signals in each stage. Then, the classification algorithms SVM, KNN, and NB have been implemented in the detection of APSK signal type.

Fig. 1 Scheme of proposed method model

2.1 Signal Model The expression of the received signal can be written as follows: s(t) = x(t) + n(t)

(1)

where n(t) acts as a noise signal, considered here to be AWGN, and x(t) represents the modulated signal, which depends on the modulation type. The APSK signal is modeled as follows [7]:

x(t) = \begin{cases} r_1\, e^{\,j\left(\frac{2\pi}{m_1}k + \theta_1\right)}, & k = 0,\dots,m_1-1 \;(\text{ring } l = 1)\\ r_2\, e^{\,j\left(\frac{2\pi}{m_2}k + \theta_2\right)}, & k = 0,\dots,m_2-1 \;(\text{ring } l = 2)\\ \;\;\vdots & \\ r_{n_r}\, e^{\,j\left(\frac{2\pi}{m_{n_r}}k + \theta_{n_r}\right)}, & k = 0,\dots,m_{n_r}-1 \;(\text{ring } l = n_r) \end{cases} \qquad (2)

where m_l, r_l, and θ_l are the number of points, the radius, and the relative phase shift of ring l, respectively. The received signal is the input signal after passing through an AWGN fading channel. An illustrative constellation sketch follows.
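As an illustration of Eq. (2), the following is a minimal sketch that generates the constellation points of a two-ring APSK signal with NumPy; the ring radii, point counts, and phase offsets are illustrative assumptions, not the DVB-S2 values.

```python
import numpy as np

def apsk_constellation(points_per_ring, radii, phases):
    """Return the complex constellation points of Eq. (2), ring by ring."""
    symbols = []
    for m_l, r_l, theta_l in zip(points_per_ring, radii, phases):
        k = np.arange(m_l)
        symbols.append(r_l * np.exp(1j * (2 * np.pi * k / m_l + theta_l)))
    return np.concatenate(symbols)

# Illustrative 16-point example with two rings (4 inner, 12 outer points).
const = apsk_constellation(points_per_ring=[4, 12],
                           radii=[1.0, 2.6],
                           phases=[np.pi / 4, 0.0])
print(const.shape)  # (16,)
```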

2.2 WST Algorithm The wavelet scattering is a framework that uses wavelet and scaling filters to produce low-variance features to utilize in machine learning algorithms [20]. WST is similar to deep learning architectures in three properties: linearization of hierarchical symmetries, multiscale contractions, and sparse representation. The difference between wavelet scattering and deep learning is the implementation of filters in the former one without learning [20]. The framework of the wavelet scattering shown in Fig. 2

Fig. 2 Wavelet scattering framework

is produced in three main operations: convolution, nonlinearity, and averaging. The result of these operations is two types of coefficients: scalogram coefficients and scattering coefficients. The averaging operation is performed by convolving the input signal X with an averaging function (ϕ(t)) that represents a low-pass filter: S[0] = X ∗ ϕ(t)

(3)

where S[0] is the zero-order SC that maintains the signal energy and is similar to the original signal [21]. The first-order scalogram coefficients are obtained by taking the modulus filtered signal using dilated wavelet in the first filter bank [21, 22]: U [1] = |X ∗ ψλ |

(4)

where U[1] denotes the first-order scalogram coefficients. Convolving the moduli with the scaling filter ϕ(t) produces the first-order SC as follows: S[1] = |X ∗ ψλ| ∗ ϕ(t)

(5)

where λ is the center frequency of the wavelet filter ψ [20, 22, 23]. The low-pass filter removes the high-frequency information from the signal, which is recovered using the scalogram coefficients in the next stage [19, 21]. For an input signal X(t) of duration T, sampling rate N/T, and N samples, the number of first-order SC is Q1 log2 N, where Q1 is the number of wavelet filters per octave in the first filter bank. Furthermore, the number of second-order coefficients obtained from the second filter bank is Q1 Q2 (log2 N)^2 / 2 [22]; Q1 and Q2 (also


called the quality factor) are the number of wavelet filters per octave in the first filter bank and the second filter bank, respectively.
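The authors compute the scattering coefficients in MATLAB; purely as an analogous illustration, the following sketch uses the Kymatio library in Python, with the signal length, averaging scale J, and Q chosen as assumptions rather than the paper's settings.

```python
import numpy as np
from kymatio.numpy import Scattering1D

N = 1024              # signal length used in the experiments
J = 6                 # assumed averaging scale (2**J samples)
Q = 8                 # wavelets per octave in the first filter bank

scattering = Scattering1D(J=J, shape=N, Q=Q)

# A toy real-valued signal; a real pipeline would feed the received APSK samples.
x = np.random.randn(N).astype(np.float32)
Sx = scattering(x)    # array of scattering coefficients, one row per scattering path
print(Sx.shape)       # (paths, N / 2**J)
```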

2.3 Classification In this algorithm, three classification techniques were used for digital modulation recognition: the SVM, KNN, and NB classifiers. SVM maps the input feature vector to a new feature space by producing a hyperplane that separates the training data with minimal error. In SVM classification, it is necessary to select a suitable kernel function to avoid insufficient classifier performance [10]. Two kernel functions have been employed in this method: the polynomial kernel function and the Gaussian radial basis function. K-nearest neighbor (KNN) is an instance-based classification technique that is widely used in pattern recognition applications due to its simplicity. The fundamental idea of the KNN classifier is to calculate the distance between a sample and the labeled instances using a distance similarity function. The algorithm looks for the (k) targets that best fit the sample to be classified, and the distance measure determines the range between targets after quantization. The similarity between samples is inversely proportional to the difference between them, and the most commonly used distance measure is the Euclidean distance [10]. Naïve Bayes is a classification algorithm that applies Bayes' theorem under the assumption that the features are statistically independent. It assigns observations to the most probable class by applying density estimation to the input samples. The posterior probability is predicted for each class, and the observation is assigned to the class with the maximum posterior probability [24–27]. A training sketch for the three classifiers is given below.
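As an illustration of this step, the following is a minimal scikit-learn sketch that trains the three classifiers on a scattering-feature matrix; the feature matrix X_feat and labels y (0/1/2 for 16/32/64APSK) are placeholders, and the hyperparameters are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

# Placeholder data: 3000 samples, one flattened scattering-coefficient vector each.
X_feat = np.random.randn(3000, 200)
y = np.repeat([0, 1, 2], 1000)          # 0: 16APSK, 1: 32APSK, 2: 64APSK

X_tr, X_te, y_tr, y_te = train_test_split(X_feat, y, test_size=0.5, stratify=y)

classifiers = {
    "SVM (RBF)": SVC(kernel="rbf"),      # a polynomial kernel is the other option used
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))
```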

3 Results and Discussion 3.1 Setup of Experiment In this section, three digital modulation signals, 16APSK, 32APSK, and 64APSK, are utilized for the simulation experiment. The signal parameters are set as follows: carrier frequency fc = 10 MHz, symbol rate = 100 kHz, samples per symbol = 8, and signal length N = 1024. The baseband signal is generated using a random code with 128 symbols. The SNR ranges between −6 and 20 dB with a 2 dB step length. The digital signal is constructed using a rectangular pulse with a 0.5 roll-off factor. The dataset includes two parts, a training set and a test set; both sets have 3000 samples. The parameters of the fading channel are as follows: average path gains of 0, −0.9, −4.9, −8, −7.8, and −23.9 dB, path delays of (0, 2, 8, 12, 23, 37) *1e−9 s, and maximum


Doppler shift of 50 Hz. For WST, four values of quality factors are tested for the first filter bank, while the quality factor for the second filter bank is set to 1.

3.2 Simulation Results The proposed algorithm is implemented using three types of APSK signals (16APSK, 32APSK, and 64APSK); the wavelet scattering framework is designed with Q1 = 8 and Q2 = 1; the number of samples is 3000 (1000 for each class) for both training and testing sets. Testing the classification techniques using the extracted features results in an overall accuracy of 100% for each classifier indicating a powerful performance of the proposed algorithm. The SNR has a significant effect on the classification accuracy for the signal of digital modulation; therefore, the performance of the algorithm should be tested with a reasonable range of SNR values. In Fig. 3, the proposed algorithm is implemented with different ranges of SNR from −6 dB to 20 dB in a step of 2 dB and four values of the quality factor of the scattering framework (2, 4, 8, and 16). It is clear from Fig. 3 that the KNN classifier has the worst accuracy as compared with

Fig. 3 SNR versus accuracy with four different values of quality factors


SVM and NB classifiers. The reason for this performance is the optimal nature of SVM and NB classifiers and their reliability. Besides, KNN classifies data based on the distance metric which spends a lot of time and memory to measure all the distances. The proposed method shows high accuracies with low SNR which is a requirement for practical application in DVB-S2 satellite application. Despite its effect on the number of scattering features, the quality factor of the wavelet scattering transform exhibits a slight difference in classification accuracies. Therefore, the scattering transform can generate separable features that are easily implemented with classification techniques; besides, it exhibits flexibility with the quality factor settings. Selecting the number of training samples affects the classification accuracy; therefore, the algorithm is tested with a different number of training samples (500, 1000, 1500, 2000, and 2500). Figure 4 exhibits the accuracies obtained using SVM, KNN, and NB classifiers, respectively.

Fig. 4 Proposed method accuracy according to various values of training samples using a SVM classifier, b KNN classifier, and c NB classifier


Table 1 Comparison with related works in terms of classification accuracy

SNR                       0 dB     5 dB    >10 dB
Method in [13]            88%      99.5    100
Method in [16]            70%      99      100
The proposed algorithm    91.47    100     100

Increasing the number of training samples above 1000 does not have a significant effect on the performance of the classification techniques used. This is a result of using wavelet scattering features, which allow feature extraction with a small training dataset. The performance of the proposed algorithm is compared with that of existing algorithms based on the same modulation schemes, as depicted in Table 1. The proposed method using wavelet scattering features outperforms the template matching method [16] and the higher-order cumulant features proposed by [5] at low SNR values. The superior performance comes from the ability of the scattering transform to extract low-variance features to be used with a machine learning algorithm such as an SVM classifier. Therefore, the extracted features minimize differences within a class and maintain discriminability across classes.

4 Conclusion This paper presents an implementation of the WST for feature extraction from three types of APSK signals (16APSK, 32APSK, and 64APSK) to be used in the proposed AMR system. In the classification step, SVM, KNN, and NB are employed as classification techniques to recognize the considered APSK signals. Simulation results show the effect of using the scattering features on the accuracy of the proposed method, especially at low SNR. The proposed algorithm achieved high accuracies for various values of the quality factors and numbers of training samples, which gives it superior performance. As a result, wavelet scattering features can be applied to other digital modulation schemes such as quadrature amplitude modulation (QAM). Other classification techniques such as ensemble learning are suggested for further study in the field.

References 1. Al-Nuaimi DH, Hashim IA, Zainal Abidin IS, Salman LB, Mat Isa NA (2019) Performance of feature-based techniques for automatic digital modulation recognition and classification—a review. Electronics 8(12):1407 2. Jajoo G, Kumar Y, Yadav SK, Adhikari B, Kumar A (2017) Blind signal modulation recognition through clustering analysis of constellation signature. Expert Syst Appl 90:13–22


3. Li X, Dong F, Zhang S, Guo W (2019) A survey on deep learning techniques in wireless signal recognition. Wirel Commun Mob Comput 2019 4. Shakra MM, Shaheen EM, Bakr HA, Abdel-Latif MS (2015) C3. Automatic digital modulation recognition of satellite communication signals. Natl Radio Sci Conf NRSC, Proc 2015:118– 126. https://doi.org/10.1109/NRSC.2015.7117822 5. Ali AK, Erçelebi E (2020) Automatic modulation recognition of DVB-S2X standard-specific with an APSK-based neural network classifier. Measurement 151:107257 6. Anedda M, Meloni A, Murroni M (2015) 64-APSK constellation and mapping optimization for satellite broadcasting using genetic algorithms. IEEE Trans Broadcast 62(1):1–9 7. Liolis KP, De Gaudenzi R, Alagha N, Martinez A, i Fàbregas AG, De Rango F (2010) Amplitude phase shift keying constellation design and its applications to satellite digital video broadcasting. Digit video 1:425–452 8. Ali AK, Erçelebi E (2019) An M-QAM signal modulation recognition algorithm in AWGN channel. Sci Program 2019 9. Xu W, Wang Y, Wang F, Chen X (2017) “PSK/QAM modulation recognition by convolutional neural network. IEEE/CIC international conference on communications in China (ICCC) 2017:1–5 10. Zhang Z et al (2018) Modulation signal recognition based on information entropy and ensemble learning. Entropy 20(3):198 11. Liu T, Guan Y, Lin Y (2017) Research on modulation recognition with ensemble learning. EURASIP J Wirel Commun Netw 2017(1):1–10 12. Han Y, Wei GH, Song C, Lai LJ (2012) Hierarchical digital modulation recognition based on higher-order cumulants. In: Proceedings of 2012 2nd international conference on instrumentation, measurement, computer, communication and control IMCCC 2012, pp 1645–1648. https://doi.org/10.1109/IMCCC.2012.398 13. Ali AK, Erçelebi E (2020) Algorithm for automatic recognition of PSK and QAM with unique classifier based on features and threshold levels. ISA Trans 102:173–192 14. Smith A, Evans M, Downey J (2017) Modulation classification of satellite communication signals using cumulants and neural networks. Cognitive communications for aerospace applications workshop (CCAA) 2017:1–8 15. Prakasam P, Madheswaran M (2007) Automatic modulation identification of QPSK and GMSK using wavelet transform for adaptive demodulator in SDR. In: 2007 international conference on signal processing, communications and networking, pp 507–511 16. Deng Y, Wang Z (2014) Modulation recognition of MAPSK signals using template matching. Electron Lett 50(25):1986–1988 17. Prakasam P, Madheswaran M (2008) Digital modulation identification model using wavelet transform and statistical parameters. J Comput Syst Netw Commun 2008 18. Hassan K, Dayoub I, Hamouda W, Berbineau M (2010) Automatic modulation recognition using wavelet transform and neural networks in wireless systems. EURASIP J Adv Signal Process 2010:1–13 19. Walenczykowska M, Kawalec A (2016) Type of modulation identification using Wavelet transform and neural network. Bull Polish Acad Sci Tech Sci 64(1) 20. Misiti M, Misiti Y, Oppenheim G, Poggi J-M (2015) Wavelet toolboxTM user’s guide, p 700 21. Soro B, Lee C (2019) A wavelet scattering feature extraction approach for deep neural network based indoor fingerprinting localization. Sensors 19(8):1790 22. Andén J, Mallat S (2014) Deep scattering spectrum. IEEE Trans Signal Process 62(16):4114– 4128 23. Lostanlen V, Scattering.m—A MATLAB toolbox for wavelet scattering [Online]. Available: https://github.com/lostanlen/scattering.m 24. 
Matlab (2015) Statistics and machine learning toolbox, p 9954, [Online]. Available: https://se. mathworks.com/help/stats/index.html


25. Hastie T, Tibshirani R, Friedman J (2009) Random forests. In: The elements of statistical learning. Springer, pp 587–604 26. Theodoridis K, Koutroumbas S (2009) Pattern recognition. Elsevier 27. Jeyalaksshmi S et al (2021) J Phys Conf Ser 1963 012145

Novel Method to Choose a Certain Wind Turbine for Al. Hai Site in Iraq Sura T. Nassir, Mohammed O. Kadhim, and Ahmed B. Khamees

Abstract This study develops a new and simplified method for matching wind turbines to a candidate site, using normalized capacity factor and power curves. The electrical energy generated from wind is influenced by the physical characteristics of the wind site and the parameters of the wind turbine; thus, matching the turbine to the site depends on determining the optimum speed parameters of the turbine, which are estimated from the performance index (PI) curve. This indicator is a new rating parameter, obtained from the maxima of the normalized power and capacity factor curves. The relationship between the three indices is plotted against the normalized rated wind speed for a specific value of the Weibull shape parameter of the location. A more accurate method, called the equivalent energy method (EEM), was used to evaluate the Weibull parameters. Keywords Weibull distribution function · Capacity factor · Normalized power · Performance index

1 Introduction The relationship between energy consumption and environmental pollution has become clear due to the exacerbation of broad negative consequences such as climate change, which includes global warming and humidity in the atmosphere, and the spread of destructive floods, hurricanes, and other changes that are difficult to control [1]. Therefore, it is necessary to switch from the consumption of fossil fuels to alternative or renewable energy [2]. One of the most affordable and most suitable alternative energy sources for the production of electric energy is wind energy, which has the S. T. Nassir (B) · M. O. Kadhim Renewable Energy Department, Environment and Energy Sciences College, Al. Karkh University of Sciences, Baghdad, Iraq e-mail: [email protected] A. B. Khamees Renewable Energy Research Center, Renewable Energy Directorate, Ministry of Science and Technology, Baghdad, Iraq © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_52


greatest impact on mankind worldwide [3]. It is important to choose the appropriate site for generating electricity from wind, so knowledge of the wind speed available in the area is required when studying a specific site to estimate the wind speed and hence its energy [4]. Several factors must be considered, including identifying all areas where wind energy resources are available, accurately determining the optimum locations for wind turbines, and short-term (hours) or medium-term (a few days) prediction of how much wind energy is possible at a site; these and other factors influence energy development and, as a result, the increase in electric energy production from wind turbines [5]. Electric power production at a given location depends on many factors, significant among them the characteristics of the turbine itself, namely the cut-in speed (V_c), the cut-out speed (V_f), the rated speed (V_r), as well as the hub height [6]. If the rated speed is set too high, it will be hard for the turbine to operate at its rated capacity, whereas it will forfeit much of the available energy if the rated speed is set too low. This explains why the rated speed must be specified so that the turbine produces the highest power. Generally speaking, (V_c) is chosen to be approximately one-half of (V_r) and (V_f) about two times (V_r) [7]. The correlation between them used in this investigation is V_c = 0.259 V_r and V_f = 1.95 V_r [8]. The ratio of the expected power over a specified period of time to the rated power of the turbine is called the capacity factor [9]. The performance index (PI) is a recently introduced term. There is a separate PI graph for each location, from which the speed parameters are determined, and the optimum location can also be obtained for the selected turbine. The PI graph is obtained from the normalized capacity factor and power curves and is plotted on the common axis of normalized rated speed [10]. Commercially, there are several types of wind turbines imported from several international origins. Hence, identifying the turbines suited to a specific location is needed to obtain optimum electrical energy from wind. These key aspects are important in studying the energy potential of a given position and selecting suitable turbines [11].

2 Materials and Methodology 2.1 Weibull Distribution Since the wind speed is variable, the power depends on the probability density function (PDF) and not just the arithmetic mean; the wind speed is described by the Weibull distribution through the following relationship [12]:

f(v) = \frac{k}{c}\left(\frac{v}{c}\right)^{k-1}\exp\left[-\left(\frac{v}{c}\right)^{k}\right]

The cumulative distribution function is calculated for the velocity (V) by [13]:

(1)

F(V) = 1 - \exp\left[-\left(\frac{v}{c}\right)^{k}\right]

(2)

where (c) is the scale factor (with units of velocity) and (k) is the dimensionless shape factor. The equivalent energy method (EEM) is a new and simple method that was recently derived to calculate the parameters of the Weibull distribution. This technique is based on the energy content of the wind and was adopted in this research to improve the accuracy of the estimated parameters [14]. The method determines the Weibull parameters that best fit both the measured wind speed distribution and its cube. Based on the Weibull distribution, the probability that the wind speed is greater than or equal to a specified value V is given by [15]:

P(v) = \exp\left[-\left(\frac{v}{c}\right)^{k}\right]

(3)

The probability of wind speeds greater than or equal to (V − 1) and less than V, i.e., (V − 1) ≤ v < V, is:

p(v) = P(V - 1) - P(V)

(4)

p(v) = \exp\left[-\left(\frac{v-1}{c}\right)^{k}\right] - \exp\left[-\left(\frac{v}{c}\right)^{k}\right]

(5)

The random variable (P_V), characterized by the probability function (P), is written in the following form []:

P_V = p(v) + \varepsilon = \exp\left[-\left(\frac{v-1}{c}\right)^{k}\right] - \exp\left[-\left(\frac{v}{c}\right)^{k}\right] + \varepsilon

(6)

where (ε) is a random error term. By imposing equality of the energy content between the recorded wind speeds and the Weibull distribution, the scale factor (c) can be expressed through the mean cube of the wind speed as follows []:

c = \left[\frac{\overline{v^{3}}}{\Gamma(1 + 3/k)}\right]^{1/3}

(7)

Substituting Eq. 7 in Eq. 6 yields:

p_{V} = \exp\left[-\left(\frac{(V-1)\,\Gamma(1+3/k)^{1/3}}{\left(\overline{v^{3}}\right)^{1/3}}\right)^{k}\right] - \exp\left[-\left(\frac{V\,\Gamma(1+3/k)^{1/3}}{\left(\overline{v^{3}}\right)^{1/3}}\right)^{k}\right] + \varepsilon

(8)

The Weibull shape factor, k, can be obtained using the following least squares criterion:

\sum_{i=1}^{n}(\varepsilon_i)^{2} = \sum_{i=1}^{n}\left[\, p_{V_i} - \exp\left(-\left(\frac{(v_i-1)\,\Gamma(1+3/k)^{1/3}}{\left(\overline{v^{3}}\right)^{1/3}}\right)^{k}\right) + \exp\left(-\left(\frac{v_i\,\Gamma(1+3/k)^{1/3}}{\left(\overline{v^{3}}\right)^{1/3}}\right)^{k}\right)\right]^{2}

(9)

where (P_{V_i}) is the probability for the ith bin; (n) is the number of bins in the histogram; (v_i) is the upper limit of the ith bin; and (\overline{v^{3}}) is the observed mean cube of the wind speed. After calculating (k) by minimizing Eq. 9, the scale factor is calculated from Eq. 7 [16]. A numerical fitting sketch is given below.
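As an illustration of the EEM fit, the following is a minimal Python sketch that builds 1 m/s bin probabilities from observed speeds, minimizes the squared error of Eq. 9 over k, and recovers c from Eq. 7; the bin width, search range, and synthetic data are assumptions for demonstration only.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import minimize_scalar

def eem_fit(speeds, v_max=None):
    """Return Weibull (k, c) by the equivalent energy method (Eqs. 7-9)."""
    mean_cube = np.mean(speeds ** 3)
    v_max = int(np.ceil(speeds.max())) if v_max is None else v_max
    # Observed probability of each 1 m/s bin [v-1, v).
    p_obs = np.array([np.mean((speeds >= v - 1) & (speeds < v))
                      for v in range(1, v_max + 1)])

    def sse(k):
        c = (mean_cube / gamma(1 + 3 / k)) ** (1 / 3)        # Eq. 7
        v = np.arange(1, v_max + 1)
        p_model = np.exp(-((v - 1) / c) ** k) - np.exp(-(v / c) ** k)
        return np.sum((p_obs - p_model) ** 2)                 # Eq. 9

    k = minimize_scalar(sse, bounds=(0.5, 6.0), method="bounded").x
    c = (mean_cube / gamma(1 + 3 / k)) ** (1 / 3)
    return k, c

# Synthetic example: draw speeds from a known Weibull and recover its parameters.
rng = np.random.default_rng(0)
speeds = 5.0 * rng.weibull(1.6, size=20000)
print(eem_fit(speeds))   # approximately (1.6, 5.0)
```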

2.2 Capacity Factor The capacity factor is obtained from the following equation [17]:

C_f = \frac{\exp\left[-(V_c/c)^{k}\right] - \exp\left[-(V_r/c)^{k}\right]}{(V_r/c)^{k} - (V_c/c)^{k}} - \exp\left[-(V_f/c)^{k}\right]

(10)

where (C_f) is the capacity factor, V_c, V_r, and V_f are as defined previously, and (V_r/c) is the normalized rated speed. The capacity factor is clearly a function of the main turbine parameters (V_c, V_r, and V_f) and the site factors c and k [18]. A short computational sketch follows.
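For illustration, the following is a small Python function implementing Eq. 10; the example speeds use the V_c = 0.259 V_r and V_f = 1.95 V_r ratios quoted in the introduction, while the site parameters are placeholder values.

```python
import math

def capacity_factor(v_c, v_r, v_f, c, k):
    """Capacity factor of Eq. 10 for turbine speeds (v_c, v_r, v_f) and Weibull (c, k)."""
    a = math.exp(-(v_c / c) ** k) - math.exp(-(v_r / c) ** k)
    b = (v_r / c) ** k - (v_c / c) ** k
    return a / b - math.exp(-(v_f / c) ** k)

# Placeholder site parameters; rated speed with the V_c and V_f ratios from the text.
c, k = 5.4, 1.58
v_r = 8.0
print(capacity_factor(0.259 * v_r, v_r, 1.95 * v_r, c, k))
```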

2.3 Normalized Power The power produced by a turbine is calculated using the following equation [19]:

P_e(V) = 0.5\, C_p\, \eta_m\, \eta_e\, \rho\, A\, V^{3}

(11)

where (η_m) is the efficiency of the mechanical power transmission system; (η_e) is the efficiency of the electric power generation system; (ρ) is the air density; (A) is the rotor swept area; and (C_p) is the turbine power coefficient. The model relies on three distinct speeds, V_c, V_r, and V_f, and on the rated power P_r; the rising part of the curve between V_c and V_r can be conveniently approximated by a quadratic model. Consequently, P_e(V) may be written as [20]:

P_e(V) = P_r \begin{cases} 0, & V < V_c \text{ or } V > V_f\\ \dfrac{V^{2} - V_c^{2}}{V_r^{2} - V_c^{2}}, & V_c \le V \le V_r\\ 1, & V_r \le V \le V_f \end{cases}

(12)


where P_r = 0.5 C_{pr} η_{mr} η_{er} ρ A V_r^3, (C_{pr}) is the power coefficient at the rated wind speed V_r, (η_{mr}) is the efficiency of the mechanical drive train at rated wind power, and (η_{er}) is the efficiency of the generator at rated power. The average power produced by a wind turbine can be calculated by combining Eq. 12 and Eq. 1 [21]:

P_{e,av} = \int_{0}^{\infty} P_e(V)\, f(V)\, dV \qquad (13)

where f(V) is the probability density function of the wind speed. Using the expression for (P_e),

P_{e,av} = P_r\left\{\int_{V_c}^{V_r} \frac{V^{2} - V_c^{2}}{V_r^{2} - V_c^{2}}\, \frac{k}{c}\left(\frac{V}{c}\right)^{k-1} \exp\left[-\left(\frac{V}{c}\right)^{k}\right] dV + \int_{V_r}^{V_f} \frac{k}{c}\left(\frac{V}{c}\right)^{k-1} \exp\left[-\left(\frac{V}{c}\right)^{k}\right] dV \right\} \qquad (14)

Normalizing the average electrical power (P_{e,av}), the normalized power is defined as:

P_N = \frac{P_{e,av}}{0.5\, C_{pr}\, \eta_{mr}\, \eta_{er}\, \rho\, A\, c^{3}} = C_f \left(\frac{V_r}{c}\right)^{3} \qquad (15)

where (P_N) is the normalized power.

2.4 Turbine Performance Index (TPI) The normalized power at a given value of (V_r/c) varies in the opposite direction to the capacity factor and vice versa [22]. The value of (V_r/c) at which (P_N · C_f) is maximum lies between the (V_r/c) values corresponding to (P_{N,max}) and (C_{f,max}). Consequently, a value of (V_r/c) can be selected for a given location at which the output energy is highest while remaining close to the maximum capacity factor. From this value of (V_r/c), the corresponding optimum cut-in and cut-out speeds can be calculated by V_c = X1·V_r and V_f = X2·V_r. Thus, a performance index (PI) is defined as [23, 26]:

PI = \frac{P_N\, C_f}{P_{N,max}\, C_{f,max}} \qquad (16)

The turbine performance index is defined as a function of the normalized rated speed and is plotted together with the normalized capacity factor and power curves; the value of (V_r/c) corresponding to the


maximum value of TPI is then selected. The normalized power, capacity factor, and turbine performance index curves are presented to determine the optimum parameters of the wind turbine for the site [26]. A short sketch of computing these curves is given below.
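As an illustration of how the three curves can be generated, the following sketch evaluates C_f from Eq. 10, P_N from Eq. 15, and PI from Eq. 16 over a range of normalized rated speeds for a fixed shape factor; the V_c and V_f ratios are those quoted in the introduction, and the grid of (V_r/c) values is an assumption.

```python
import numpy as np

X1, X2 = 0.259, 1.95          # cut-in and cut-out ratios used in the text

def curves(k, vr_over_c):
    """Return (C_f, P_N, PI) arrays over a grid of normalized rated speeds (Eqs. 10, 15, 16)."""
    vc, vr, vf = X1 * vr_over_c, vr_over_c, X2 * vr_over_c   # all already divided by c
    cf = ((np.exp(-vc ** k) - np.exp(-vr ** k)) / (vr ** k - vc ** k)
          - np.exp(-vf ** k))
    pn = cf * vr_over_c ** 3
    pi = (pn * cf) / (pn.max() * cf.max())
    return cf, pn, pi

vr_over_c = np.linspace(0.1, 3.94, 200)
cf, pn, pi = curves(k=1.58, vr_over_c=vr_over_c)
print("Vr/c at PI max:", vr_over_c[np.argmax(pi)])
```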

3 Data Analysis and Interpretation Figure 1 shows the wind rose information: the percentage of time the wind blows from a specified direction, the wind speed in each direction, and the percentage of time combined with the cube of the wind speed. This provides a way of determining the energy contributed by the different wind directions at a given place. Figure 2 presents the histogram and the fitted function (in red) for the 12 direction sectors; this figure illustrates the wind behavior and how much potential energy each sector has. The Al-Hai site in Iraq, with a mean wind speed of 4.8 m/s at a height of 10 m, has been used in this study, and the site data were collected using a data logger installed on-site. The normalized power and capacity factor are computed for combinations of k and (V_r/c). Figure 3 shows two sets of curves plotted against the same x-axis: the first set represents the capacity factor and the second represents the normalized power. For both, k varies from 1 to 2.6 in steps of 0.4, and (V_r/c) varies from 0.0 to 3.94 in steps of 0.3. This range of k and (V_r/c) values is chosen to cover the various possible site and turbine combinations. By substituting the relationships (V_c = X1·V_r) and (V_f = X2·V_r) in Eq. 10, the normalized power can be expressed entirely in terms of the normalized rated speed (V_r/c) and the Weibull shape factor (k), which is displayed in Fig. 4 for a height of 10 m. Figure 4 shows the wind frequency (histogram bars) and the Weibull distribution (continuous line) over all sectors, in addition to the Weibull parameter values, namely the scale

Fig. 1 Wind rose as frequency rose, velocity rose, and energy rose

Fig. 2 Wind speed distributions and Weibull fitted function for 12 sectors


Fig. 3 Normalized power and capacity factor curves at height 10 m

Fig. 4 Weibull distribution of wind speed for Al. Hai site

and shape factors, the average speed, and the potential power. The normalized rated speed corresponding to any point on the normalized power graph can then be obtained. With the Weibull scale factor (c) known for the site, the rated wind speed of the turbine can be calculated. The other turbine parameters, such as the cut-in and cut-out speeds, are estimated


Table 1 Turbine speed parameter estimation for a site

Parameters    PN,MAX    PN at r = 0.90    PN at r = 0.83    At PIMAX
V r /C        03.46     02.21             02.61             01.71
Vr            05.03     03.22             03.81             02.52
Vc            18.34     11.65             11.89             09.12
Vf            33.94     21.56             21.69             16.85
PN            02.65     02.13             02.44             01.54
Cf            00.52     00.21             00.14             00.21

by using the relationships (V_c = X1·V_r) and (V_f = X2·V_r), where X1 and X2 are based on the turbine manufacturer's specifications. The values of X1 and X2 for the Al. Hai site are taken to be 0.259 and 1.95, respectively. Consequently, the parameters that would give the maximum power can be determined. In a similar manner, the speed parameters of a turbine that would yield the highest capacity factor can be obtained, as displayed in Table 1. Figure 5 emphasizes that the C_f and P_N curves reach their maxima at different normalized speeds, which means that at full normalized power the capacity factor is very low, while at the full capacity factor the average power production is small, and vice versa. The higher rated speed that corresponds to the maximum power is accompanied by a lower capacity factor. The resulting increase in (P_{e,av}) for a given turbine leads to an increase in the cost of the generators, transformers, circuit breakers, switches, and the required distribution lines, while the decrease in C_f means that these elements are utilized for only a short time. Hence, a reasonable design procedure is to choose (V_r/c) where the normalized power is a specified fraction r of P_{N,max} (r·P_N with r above 0.5) for a given wind system, so that it produces a total power output close to the maximum with a much better capacity factor. The normalized power, capacity factor, and performance index curves are plotted for the site, whose shape parameter is 1.58, as shown in Fig. 5.

Fig. 5 Normalized power, capacity factor, and turbine performance index curve for Al. Hai site


The normalized rated wind speeds (Vr/c) at PN,max, r·PN,max, and PImax are obtained from the curve. Once the scale parameter is known, the rated speeds of the turbine corresponding to PN,max, r·PN,max, and PImax can be determined, and the cut-in and cut-out speed parameters of the turbine follow from the relationships stated above.
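To make the construction of these curves concrete, the following is a minimal Python sketch (not from the paper) of the Weibull-based capacity-factor calculation. Since Eq. 10 itself is not reproduced in this excerpt, the sketch uses the standard capacity-factor expression from the capacity-factor literature this method builds on, and it assumes the Al-Hai relations Vc = X1·Vr and Vf = X2·Vr with X1 = 0.259 and X2 = 1.95.

    import numpy as np

    def capacity_factor(vr_c, k, x1=0.259, x2=1.95):
        """Capacity factor versus normalized rated speed Vr/c for a Weibull wind model
        (standard expression used in the capacity-factor literature)."""
        vc_c, vf_c = x1 * vr_c, x2 * vr_c      # normalized cut-in and cut-out speeds
        return (np.exp(-vc_c**k) - np.exp(-vr_c**k)) / (vr_c**k - vc_c**k) - np.exp(-vf_c**k)

    vr_c = np.arange(0.3, 3.95, 0.3)           # range of Vr/c used for the curves
    for k in (1.0, 1.4, 1.8, 2.2, 2.6):        # Weibull shape factors used for the curves
        print(f"k = {k:.1f}:", np.round(capacity_factor(vr_c, k), 3))

Evaluating such curves at the site shape factor (k = 1.58) reproduces the kind of trade-off between capacity factor and normalized power discussed above.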

4 Conclusion A new, simplified method has been devised to match a wind turbine to the location where a wind farm is to be constructed, using normalized capacity-factor and power curves. The method relies on determining the optimum turbine speed parameters from the performance index curve, obtained from the normalized curves, so that more power is produced while the capacity factor is increased. The wind speed is characterized by the equivalent-energy method and by statistical modeling with the Weibull probability distribution function. An equation expressing the normalized power and the capacity factor in terms of the normalized rated speed is derived, and the performance index of the wind turbine is defined as a new parameter. The normalized power, capacity factor, and performance index are plotted against the normalized rated wind speed for the value of the Weibull shape parameter of the location.


Analysis of Pressure and Temperature Sensitivity Based on Coated Cascade FBG-LPFG Sensor Zahraa S. Alshaikhli and Wasan A. Hekmat

Abstract Recently, optical fiber sensor (OFS) technology has attracted significant research attention owing to its advantages over conventional sensor technologies. The fiber Bragg grating (FBG) and the long-period fiber grating (LPFG) are the most important OFS technologies, and enhancing sensor sensitivity and performance remains a challenge. This work analyzes the performance of a new TiO2-coated cascade FBG-LPFG sensor configuration and compares it with bare FBG-LPFG and single FBG sensors. The results show an enhancement of the sensitivity, repeatability, and stability of the new configuration. For temperature, the sensitivity was 2.5 nm/°C and 3.4 nm/°C higher than that of the bare FBG-LPFG and single FBG sensors, respectively. For pressure, the sensitivity was 3.1 nm/Pa and 2.8 nm/Pa higher than that of the bare FBG-LPFG and single FBG sensors, respectively. Thus, the TiO2-coated FBG-LPFG could be regarded as an ideal candidate for high-sensitivity temperature and pressure measurements. Keywords LPFG · FBG · Coated grating sensor · TiO2 · Pressure sensor · Temperature sensor

1 Introduction Temperature and pressure are crucial quantities in many fields, for example health monitoring, harsh environments, and biomedical and industrial applications, and using OFS technologies to sense these parameters is a vital area of research. Owing to the advantages of OFS, such as light weight, immunity to electromagnetic fields, ease of fabrication, low cost, and high sensitivity, optical fiber sensors have become more reliable than electrical sensors [1]. The FBG and the LPFG are two such technologies. Both involve exposing the fiber core to high laser power, such as that of an excimer laser, in order to induce a refractive index modulation so that a grating is written in the fiber core [2, 3]. The grating period Λ is the parameter that distinguishes the FBG from the LPFG: the FBG has a grating period on the nanometer scale, whereas the LPFG has a grating period on the micrometer scale. Several studies have addressed temperature and pressure sensing with different OFS configurations, and many have aimed at enhancing the sensor sensitivity, for example by coating the sensor with different materials and thicknesses. For instance, Wang et al. fabricated a polymer-coated LPFG sensor based on polyethylene glycol (PEG)/polyvinyl alcohol (PVA) composite films [4]; they reported high sensitivity for a coating thickness of 906.3 μm, and their sensor showed good linearity and reversibility over different operating ranges. Zhao et al. wrote their LPFG on a thinned cladding, and the sensor exhibited high sensitivity [5]. Zou et al. proposed a technique for enhancing the temperature sensitivity of an LPFG by depositing a layer of Al2O3 on the LPFG [6], achieving a temperature sensitivity of 0.77 nm/°C. A dual LPFG coated with a thin layer of gold was presented by Saurabh et al. [7], yielding a cost-effective, reliable, and rapid sensor. Wei et al. deposited an Ag layer and a thin graphene layer on a surface plasmon resonance (SPR) LPFG [8]; they stated that this method significantly maximizes the evanescent-field intensity at the SPR, so that the sensor achieved high sensitivity and linearity. Shaymaa et al. presented a single FBG sensor coated with a TiO2 thin layer [9]; they fabricated the sensor by the phase mask method and coated the D-shaped FBG with 312 μm of TiO2 so as to obtain propagation within the core with less loss, and their sensor achieved a fourfold enhancement over the sensitivity of the uncoated sensor. More broadly, several studies have reported different coating and fabrication techniques for sensors in particular and for optoelectronic applications in general [10–12], and most of them explain different methods of TiO2 coating [13–15]. This paper presents, in the following sections, a new configuration of an FBG-LPFG sensor coated with a TiO2 thin layer. The aim of the paper is to analyze the sensor sensitivity to temperature and pressure and to compare the results with bare FBG-LPFG and single FBG sensors. The operating principle, fabrication, and coating technique therefore need to be described first.

2 Work Principle of Fiber Grating Sensor The basic working principle of a fiber grating sensor is a wavelength shift, or a modification of the transmission or reflection spectrum. Equation (1) expresses the basic operating principle of the FBG [16]:

\lambda_{FBG} = 2\, n_{eff}\, \Lambda_{FBG}   (1)

where λFBG is the Bragg wavelength of the FBG, neff is the effective refractive index of the fiber core, and ΛFBG is the grating period of the FBG, which is on the order of hundreds of nanometers. The basic operating principle of the LPFG can be expressed by Eq. (2) [17]:

\lambda_{LPFG} = (n_{core} - n_{cladding})\, \Lambda_{LPFG}   (2)

where λLPFG is the resonance wavelength of the LPFG, ncore and ncladding are the refractive indices of the fiber core and cladding, and ΛLPFG is the grating period of the LPFG, which is on the order of hundreds of micrometers. The term (ncore − ncladding) can be denoted by nd. The overall temperature response of the sensor is then [17, 18]:

\Delta\lambda_{FBG+LPFG} = \lambda_{FBG+LPFG}\,(n_d\, \alpha_{core}\, \alpha_{TiO_2} + 2\xi)\, \Delta T_{FBG+LPFG}   (3)

where αcore and αTiO2 are the thermal expansion coefficients of the fiber core and the TiO2 layer, respectively, and ξ is the thermo-optic coefficient of the fiber core. The overall pressure response of the sensor is:

\frac{\Delta\lambda_{FBG+LPFG}}{\Delta P_{FBG+LPFG}} = \lambda_{FBG+LPFG}\left[-\frac{(1 - 2\sigma_{core})}{E_{core}} + \frac{n_{core}^2}{2 E_{core}}(1 - 2\sigma_{core})(2 p_{12} + p_{11}) - \frac{2\,\sigma_{TiO_2}\, r_{TiO_2}^2}{E_{TiO_2}\,(r_{TiO_2}^2 - r_{core}^2)}\right]   (4)

where σcore, rcore, and Ecore are the axial stress, radius, and Young's modulus of the core, respectively, p11 and p12 are the Pockels (strain-optic) coefficients, and ETiO2, σTiO2, and rTiO2 are the Young's modulus, axial stress, and radius of the TiO2 layer, respectively. This equation shows that the mechanical strength can be improved by choosing an appropriate coating material, and hence that the measured sensitivity can be improved, as mentioned in Sect. 1. This paper therefore presents the analysis of a new FBG-LPFG sensor coated with TiO2, with the aim of improving its performance.
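As a quick numerical illustration of Eqs. (1) and (2) — not taken from the paper — the following sketch evaluates the two resonance wavelengths for assumed, illustrative values of the effective indices and grating periods:

    def bragg_wavelength(n_eff, period_fbg):
        """Eq. (1): lambda_FBG = 2 * n_eff * Lambda_FBG (all lengths in metres)."""
        return 2 * n_eff * period_fbg

    def lpfg_wavelength(n_core, n_cladding, period_lpfg):
        """Eq. (2): lambda_LPFG = (n_core - n_cladding) * Lambda_LPFG."""
        return (n_core - n_cladding) * period_lpfg

    # Illustrative values only, not the fabricated gratings of this work
    print(f"{bragg_wavelength(1.447, 530e-9) * 1e9:.0f} nm")          # ~1534 nm
    print(f"{lpfg_wavelength(1.4500, 1.4461, 394e-6) * 1e9:.0f} nm")  # ~1537 nm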

3 Fabrication of FBG-LPFG Sensor A commercial single-mode optical fiber (SMOF) with an 8 μm core diameter and a 126 μm cladding diameter was used in this work. For the grating inscription, a simple and common technique was used: the phase mask and amplitude mask methods were employed for the FBG and LPFG fabrication, respectively, within the core of the SMOF, by exposing the fiber to UV light from an excimer laser through the corresponding mask. The SMOF was first exposed to pure hydrogen for one week in order to maximize the photosensitivity of the fiber and increase the grating inscription rate. For the FBG fabrication, the fiber core was exposed to UV light from a KrF excimer laser (248 nm wavelength, 16 mJ energy) through the phase mask.




Fig. 1 Fiber grating sensor

The light was thereby modulated and diffracted, creating an interference pattern. This pattern induced a refractive index modulation along the fiber core, leading to FBG inscription with a 350.21 nm period. Secondly, for the LPFG fabrication, the fiber core was exposed to the same UV light through the amplitude mask; this time the light was shaped into bright and dark fringes on the fiber core, and these fringes formed an LPFG with a 394 μm period. The lengths of the LPFG and FBG were 12 mm each, with a separation of 4 mm.

4 TiO2 Coating Technique The sol–gel method was used to prepare the TiO2 thin film. The procedure involves the hydrolysis of titanium alkoxides, which can be illustrated by the following expression [9, 19]: Ti{OCH(CH3)2}4 + 2 H2O → TiO2 + 4 (CH3)2CHOH. First, 0.5 g of titanium isopropoxide was mixed with 7 g of ethanol for half an hour. Then, a solution of 1 g of deionized water and 1 g of nitric acid (HNO3) was added to the Ti solution and stirred for three hours. Finally, 0.5 g of tetraethyl orthosilicate was added and the mixture was thermally dried. After evaporating the solvent and annealing at 500 °C for another three hours, a TiO2 thin film was obtained and deposited on the fiber with a thickness of 25 μm. Figure 1 shows a simple sketch of the optical fiber used in this study.

5 Experimental Work In this work, the SMOF carrying the coated FBG-LPFG sensor was embedded in the setups shown in Figs. 2 and 3. For the temperature analysis, the setup shown in Fig. 2 was used. The SMOF was glued onto two fixed stages. One end of the coated sensor was connected to the data acquisition system, a SmartScan FBG interrogator; the other end was connected to an optical spectrum analyzer (OSA MS9740C, three microns, 1500–3400), and a PC was connected to the OSA for monitoring and recording the data. The coated sensor was centered inside an iron tube connected to a heat-controller system. The iron tube was wrapped with a thermal heating foil connected to a power supply in order to adjust and increase the applied temperature. Thermocouple ends were inserted at the center and on the walls of the tube and connected to a PicoLog recorder linked to another PC so that the temperature values could be recorded and tracked over time. By increasing the power-supply voltage from 0 to 25 V, the foil heater heats the tube and the temperature rises accordingly. The applied temperatures ranged from room temperature (25 °C) up to 125 °C, and the sensor response was tracked and monitored at selected temperature steps.

Fig. 2 Temperature sensor setup

Fig. 3 Pressure sensor setup


For the pressure analysis, the setup shown in Fig. 3 was used. This time, the pressure values were obtained by converting the force applied to the sensing region: the sensor was placed on a precision balance so that the applied mass (for example, of water in a container) could be read, and the force exerted by the water-filled container was then converted into pressure. The applied pressure ranged from 100 to 1000 kPa. Again, one end of the SMOF was connected to the SmartScan FBG interrogator and the other end to the OSA, and any change in the data was followed and recorded through a PC. Figure 3 illustrates the connection of these components, with a zoomed view of the sensing region of the SMOF. To confirm that the coated FBG-LPFG sensor outperforms the single uncoated FBG and bare FBG-LPFG sensors, both of these sensors were tested in the same setups to monitor the wavelength shift caused by temperature and pressure, and the new results were also compared with previously published studies.

6 Result and Discussion In this study, the FBG-LPFG sensor configuration, based on the fabrication of an FBG followed by an LPFG and coated with a thin layer of TiO2, was investigated. The spacing between the FBG and the LPFG is about 4 mm, close enough to obtain a transmission spectrum with two peaks near each other, which ensures the widest operating ranges of both temperature and pressure. This spectral characteristic is influenced by any change in the surrounding temperature or pressure. Figure 4 shows the transmission spectrum of the fabricated FBG-LPFG before coating and without any external effects. This spectrum verifies that both the FBG and the LPFG were written successfully on the fiber core; in other words, it indicates that there is a refractive index modulation within the fiber core.

Fig. 4 Transmission spectrum of the dual uncoated FBG-LPFG sensor (λFBG = 1529 nm, λLPFG = 1536 nm)

The spectrum has two peaks, at 1529 nm (λFBG) and 1536 nm (λLPFG), which ensures coverage of the widest operating range of the applied temperatures and pressures, so the analysis and evaluation of the sensing performance of the new sensor concentrate on these two peaks. An external temperature or pressure change shifts both wavelengths and thus modifies the transmission spectrum. Depositing a TiO2 layer on the FBG-LPFG sensor surface was, again, the aim of this study, in order to enhance the sensitivity of the sensor. Figure 5a, b shows the response of the coated sensor at different values of temperature and pressure, respectively. In Fig. 5a, as the temperature increases, the transmission spectrum is modified both in the Bragg wavelength and in the optical transmitted power: the FBG wavelength shift and the LPFG transmission amplitude both follow the temperature change, so the transmission spectrum is altered. This thermal analysis was obtained with the setup illustrated in Fig. 2. To evaluate the effect of the TiO2 layer, the same test was performed on the uncoated FBG-LPFG sensor and on the single FBG sensor. The sensitivity of the TiO2-coated sensor was estimated to be 2.5 nm/°C higher than the sensitivity without coating, and 3.4 nm/°C higher than the sensitivity of the single FBG sensor reported in [20–22]. A similar behaviour appears in Fig. 5b for the applied pressure, which also manifests itself as a wavelength shift and peak modification. Again, the same test was performed on the bare FBG-LPFG sensor and the single FBG sensor: the pressure sensitivity of the TiO2-coated FBG-LPFG was estimated to be 3.1 nm/Pa and 2.8 nm/Pa higher than that of the bare FBG-LPFG and single FBG sensors, respectively. It can therefore be confirmed that the influence of the physical parameters on the sensor performance is enhanced, in terms of sensitivity, by coating the sensor surface with a TiO2 layer, which increases the intensity of the evanescent field on the fiber surface. Table 1 summarizes the performance of the three sensors; the results clearly indicate that the TiO2-coated sensor is the most sensitive to both temperature and pressure changes, as shown by the magnitudes of the wavelength shifts.


Fig. 5 Transmission spectra of the dual-coated FBG-LPFG sensor under applied (a) temperatures and (b) pressures


Table 1 Selected wavelength shift results of three sensors for temperature and pressure effect

                         Wavelength shift (nm)
                         Applied temperature (°C)       Applied pressure (kPa)
Sensor                   55     85     105    125       200    500    800    1000
Single FBG sensor        1509   1515   1519   1523      1512   1516   1519   1521
Bare FBG-LPFG sensor     1517   1520   1522   1524      1512   1518   1524   1582
Coated FBG-LPFG sensor   1515   1530   1540   1550      1505   1520   1535   1545

Further comparisons were carried out by extracting the wavelength shifts from Fig. 5 as functions of the temperature and pressure changes; Fig. 6, plotted from the extracted data, illustrates the performance of the three sensors. Although Fig. 6a, b shows good linearity for all sensors, the most linear is the coated FBG-LPFG, with correlation coefficients of 0.998 and 0.997 for the temperature and pressure sensitivities, respectively. The response characteristics of the coated FBG-LPFG were further investigated for applied temperature and pressure. Figure 7 shows the outcomes of three different tests performed under the same conditions, with a one-week interval between tests; the points at the different temperature and pressure values coincide closely, which verifies the repeatability of the three tests. In addition, the stability of the sensor was assessed for both temperature and pressure, as shown in Fig. 8. The test was carried out over 90 min, during which the wavelength values remained approximately constant for two applied temperatures (75 and 125 °C) and two applied pressures (500 and 1000 kPa). In summary, the TiO2-coated FBG-LPFG sensor exhibited good repeatability and stability and could be regarded as an ideal candidate for high temperature and pressure sensitivities.


Fig. 6 Performance of the three sensors (coated TiO2 cascade FBG-LPFG, uncoated cascade FBG-LPFG, and single FBG) under applied (a) temperatures and (b) pressures


Fig. 7 Repeatability of the sensor performance over three tests for applied (a) temperatures and (b) pressures


Fig. 8 Stability of the sensor performance at fixed values of (a) temperature (75 and 125 °C) and (b) pressure (500 and 1000 kPa)

Moreover, the response time of the sensor, which is associated with the coating material type and its thickness, was about 600 ns in this study, which is faster than the value reported in [20].
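The sensitivities and correlation coefficients quoted above follow from linear fits of the wavelength against the applied temperature or pressure. A minimal sketch of such a fit, using hypothetical readings of the same form as the Fig. 6 data (not the actual measured values), is:

    import numpy as np
    from scipy import stats

    # Hypothetical readings in the style of Fig. 6a (temperature in degC, wavelength in nm)
    temperature = np.array([40, 55, 70, 85, 100, 115, 125])
    wavelength = np.array([1512.0, 1517.5, 1523.0, 1528.5, 1534.0, 1539.0, 1542.5])

    fit = stats.linregress(temperature, wavelength)
    print(f"sensitivity = {fit.slope:.3f} nm/degC")   # slope of the straight-line fit
    print(f"correlation = {fit.rvalue:.4f}")          # linearity of the response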

7 Conclusion The fabrication and the analysis of the sensitivity of a TiO2-coated cascaded FBG-LPFG sensor to temperature and pressure were presented in this study. The fabricated sensor has two peaks close to each other, which confirms that the sensor covers the widest operating ranges of both temperature and pressure. Furthermore, a comparison between the coated sensor, the single FBG sensor, and the bare FBG-LPFG sensor was carried out. It can be concluded that the TiO2-coated cascaded FBG-LPFG sensor has the best performance in terms of sensitivity to both temperature and pressure changes, and it also exhibited the best linearity compared with the other two sensors.


Finally, repeatability and stability tests were performed, and the results reflect the efficient performance of the coated sensor; thus, it could be regarded as an ideal candidate for high temperature and pressure sensitivities. Further considerations related to the coating material type and its thickness need to be taken into account in order to obtain a sensor that responds even more strongly to external physical parameters.

References 1. Hung-Ying C, Chien Y, Chuan H, Ming F, Chi-Wai C, Wen L (2018) In-fiber long-period grating and fiber Bragg grating-based sensor for simultaneously monitoring remote temperature and stress. Sens Mater 30(1):23–32 2. Dongxue F, Ya-nan Z, Aozhuo Z, Bo H, Qilu W, Yong Z (2019) Novel fiber grating for sensing applications. Phys Status Solidi A 1800820 3. Francesco C, Francesco B, Sara T, Cosimo T, Ambra G (2017) Biosensing with optical fiber gratings. Nanophotonics 6(4):663–679 4. Wang Y, Liu Y, Zou F, Jiang C, Mou C, Wang T (2019) Humidity sensor based on a long-period fiber grating coated with polymer composite film. Sensors 19:2263 5. Wang H, Feng CD, Sun SL, Segre CU, Stetter JR (2016) Comparison of conductometric humidity-sensing polymers. Sens Actuators B Chem 40:211–216 6. Zhao Y, Liu Y, Zhou C, Guo Q, Wang T (2016) Sensing characteristics of long-period fiber gratings written in thinned cladding fiber. IEEE Sens J 16:1217–1223 7. Saurabh MT, Krishnendu D, Wojtek JB, Predrag M, Jonathan P, Balasubramanian S (2019) Gold coated dual-resonance long-period fiber gratings (DR-LPFG) based aptasensor for cyanobacterial toxin detection. Sens Bio-Sens Res 25:100289 8. Wei W, Jinpeng N, Guiwen Z, Linlong T, Xiao J, Na C, Suqin L, Guilian L, Yong Z (2016) Graphene-based long-period fiber grating surface Plasmon resonance sensor for high-sensitivity gas sensing. Sensors 17:2 9. Shaymaa RT, Rong ZC, Sheng H, Khalil IH, Kevin PC (2017) Fabrication of Fiber Bragg grating coating with TiO2 nanostructured metal oxide for refractive index sensor. J Nanotechnol 2791282 10. Fakhri MA, Numan NH, Alshakhl ZSi, Dawood MA, Abdulwahhab AW, Khalid FG, Hashim U, Salim ET (2018) Physical investigations of nano and micro lithium-niobate deposited by spray pyrolysis technique. In: AIP conference proceedings, 020015, 2045 11. Khalid FG, Raheema AQ, Alshakhli ZS, Fakhri MA (2020) Preparation of nano indium oxide for optoelectronics application. In: AIP conference proceedings, 020229, 2213 12. Changxu L, Wenlong Y, Min W, Xiaoyang Y, Jianying F, Yanling X, Yuqiang Y, Linjun L (2020) A review of coating materials used to improve the performance of optical fiber sensors. Sensors 20:4215 13. Uday MN, Kadhim AH, Zahraa JA (2016) Characterisation of TiO2 nanoparticles on porous silicon for optoelectronics application. Mater Technol Adv Funct Mater 31:14 14. Uday MN, Kadhim AH, Zahraa JA (2016) Ultraviolet photodetector based on TiO2 nanoparticles/porous silicon hetrojunction. Optik 127:2806–2810 15. Hubeatir KA, Kamil F, Al-Amiery AA, Kadhum AAH, Mohamad AB (2016) Polymer solar cells with enhanced power conversion efficiency using nanomaterials and laser techniques. Mater Technol. https://doi.org/10.1080/10667857.2016.1215080 16. Alshaikhli ZS (2018) Simultaneous measurements of temperature and strain sensitivity using two Fiber Bragg grating sensors. Int J Comput Appl Sci 5(2) 17. Alejandro M, David M, Ismael T, Guillermo S (2012) Long period fibre gratings, fiber optic sensors. In: Dr Moh Yasin (ed) ISBN: 978-953-307-922-6, InTech, Available from: http://www. intechopen.com/books/fiber-optic-sensors/long-period-fibre-gratings


18. Sankhyabrata B, Palas B, Francesco C, Tanoy K, Nandini B, Cosimo T, Ambra G, Sara T, Francesco B, Somnath B (2017) Long-period fiber grating: a specific design for biosensing applications. Appl Optics 56:35 19. Kamil F, Hubiter KA, Abed TK, Al-Amiery AA (2016) Synthesis of aluminum and titanium oxides nanoparticles via sol-gel method: optimization for the minimum size. J Nanosci Technol 2(1):37–39 20. Daniel A, Alberto R, Maria TMR (2012) Hybrid FBG–LPG sensor for surrounding refractive index and temperature simultaneous discrimination. Optics Laser Technol 44:981–986 21. Berrettoni C, Trono C, Vignoli V and Baldini F (2015), Fibre tip sensor with embedded FBGLPG for temperature and refractive index determination by means of the simple measurement of the FBG characteristics. J Sens 491391 22. Jeyalaksshmi S et al (2021) J Phys Conf Ser 1963:012145

High Gain of Rectangular Microstrip Patch Array in Wireless Microphones Applications Hiba A. Alsawaf

Abstract Currently, the microstrip antenna is one of the fastest-growing antennas in the modern telecommunications market, and several recent studies have sought to increase the efficiency and performance of patch antennas. For wireless communication applications in particular, this study presents several designs of rectangular microstrip patch array antennas. Advanced Design System (ADS) 2020 is used to design and simulate 2 * 4, 2 * 2, and 1 * 2 arrays and a single element, and the performance of the rectangular antennas is compared in terms of gain and directivity as well as radiated power and losses. Gain and directivity increase as the number of antenna elements is increased, so the 2 * 4 array gives the best performance. These antennas, with a resonance frequency of 2.4 GHz, are appropriate for some digital wireless microphones. Keywords Microstrip patch antenna · Array antenna · Gain · ADS

1 Introduction High-performance antennas are important in wireless communication systems, as the performance of the antenna directly affects the quality of wireless communication [1–5]. The transfer of data such as music, images, video, and other information through electrical and optical impulses is referred to as communication; over shorter distances light waves are employed, whereas telecommunication is used for larger distances. These data are transmitted via a key component, the antenna, which is the most practical radiator in various communication systems [6–14]. The multiple benefits of the microstrip antenna, including its light weight, small size, and simplicity of manufacture using printed-circuit techniques, have led to the development of numerous variants for diverse purposes [15–17], and the microstrip antenna has risen to prominence as a result of growing demands for personal and mobile communications, such as smaller, less visible antennas [6]. The microstrip antenna has a radiating patch on one side of an insulating substrate and a ground plane on the other side. Patch conductors are often made of gold or copper and may take any shape, although conventional shapes are frequently employed to make analysis and prediction easier; photo-etched feed lines on the insulating substrate are commonly used to feed the radiating elements. Energy from ambient RF sources is collected at a relatively low rate, so a single patch antenna is insufficient to raise the power level and an antenna array is required to boost the received power [7]. Low-frequency bands have very low antenna gain because the electromagnetic wavelengths are very long, often on the scale of several miles and considerably longer than the dimensions of the antennas, and the antenna gain is directly proportional to its size relative to the wavelength; therefore the antenna gain is relatively weak at these frequencies [7]. To build a small microstrip antenna, a higher dielectric constant must be used, which is less efficient and results in a narrower bandwidth [8]. WLAN has become popular because of its ease of installation for home and small-area users [11, 15]; portable antenna technology has therefore grown along with portable and cellular technologies, and a small, appropriate antenna improves transmission and reception and reduces power consumption [15, 16]. One earlier work concentrated on the construction of 2 * 1 and 4 * 1 array patch antennas with a 2.45 GHz operating frequency; the properties of the proposed antennas were investigated, and the 4 * 1 array was shown to perform better in terms of radiation parameters than the 2 * 1 array using HFSS and CST [17]. Another study aimed to achieve multi-band resonance in a flexible antenna for WLAN use, with an antenna size of 100 mm by 90 mm and resonances at 2.4 and 4.38 GHz, designed and analyzed with full-wave modeling software [18]. A further work presented a 4 × 1 element circular phased array of inset-fed rectangular patch antennas operating in the millimeter-wave range (24.81–33 GHz) [19]. Another study presented and built a rectangular microstrip antenna at 2.4 GHz with a defected ground structure, using an FR4 substrate with a thickness of 0.8 mm, dimensions of 60 mm × 60 mm, and a permittivity of 4.4 [20]. The design and analysis of a capacitive-fed slotted I-patch microstrip antenna have also been described and compared with previously published coplanar capacitive-fed slotted patch microstrip antennas suspended above the ground plane. In [21], it was found that, compared with 4 × 1 and 4 × 2 matrix patch antennas, the simulated 4 × 3 matrix patch antenna reached a compact size with a dimension reduction of up to 26%. The goal of the present study is to use an array to increase the gain: a microstrip antenna array with eight elements is introduced, and simulation and measurement results are presented and evaluated. The array's maximum gain and directivity are 11.1879 dB and 14.1481 dB, respectively, and the array has been designed for 2.4 GHz WLAN applications. The remainder of the paper is laid out as follows. The analysis of a rectangular microstrip antenna is presented in Sect. 2. Section 3 discusses the designed antennas, the array geometry, and the array size.


The Advanced Design System (ADS) simulation results and discussion are presented in Sect. 4. Finally, Sect. 5 concludes the work.

2 Analysis of Rectangular Microstrip Antenna In a rectangular microstrip antenna, a rectangular patch of width (W) and length (L) is positioned above a ground plane on a substrate of thickness (h) and dielectric constant (εr). A variety of substrates may be used to fabricate microstrip antennas, with dielectric constants typically ranging from 2.2 to 12. Thicker substrates with lower dielectric constants are the most desirable for antenna performance: they provide better efficiency and larger bandwidth, because the fields radiate more freely into space, but at the cost of a larger element size [8, 10]. In the microstrip patch antenna design, the substrate dielectric constant (εr), the substrate height (h), and the resonant frequency (fo) are assumed to be known [5]; once εr, fo, and h are defined, the width (W) and length (L) can be computed as follows. The width (W) is calculated for good radiation efficiency:

W = \frac{v_o}{2 f_o \sqrt{\frac{\varepsilon_r + 1}{2}}}   (1)

where vo is the speed of light in free space. The effective dielectric constant (εreff) of the microstrip antenna is computed by:

\varepsilon_{reff} = \frac{\varepsilon_r + 1}{2} + \frac{\varepsilon_r - 1}{2}\left(1 + 12\,\frac{h}{W}\right)^{-1/2}   (2)

The length extension (ΔL) is given by:

\Delta L = 0.412\, h\, \frac{(\varepsilon_{reff} + 0.3)\left(\frac{W}{h} + 0.264\right)}{(\varepsilon_{reff} - 0.258)\left(\frac{W}{h} + 0.8\right)}   (3)

The actual length L is then calculated, along with the breadth and length of the ground plane:

L = \frac{v_o}{2 f_o \sqrt{\varepsilon_{reff}}} - 2\,\Delta L   (4)

The patch and the inset feed are separated by a gap (gf) of 1 mm. Table 1 lists the computed microstrip patch antenna parameters.
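As a check on these design equations, the following Python sketch (not part of the ADS workflow) evaluates Eqs. (1)–(4) for the values used here (fo = 2.4 GHz, εr = 4.6, h = 1.6 mm). It gives a patch width of about 37.3 mm and a patch length of about 28.8 mm, close to the Table 1 values, with the small difference attributable to rounding.

    import math

    def patch_dimensions(f0, eps_r, h, v0=3e8):
        """Rectangular patch design from Eqs. (1)-(4); all lengths in metres."""
        W = v0 / (2 * f0 * math.sqrt((eps_r + 1) / 2))                           # Eq. (1)
        eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5   # Eq. (2)
        dL = 0.412 * h * (eps_eff + 0.3) * (W / h + 0.264) / \
             ((eps_eff - 0.258) * (W / h + 0.8))                                 # Eq. (3)
        L = v0 / (2 * f0 * math.sqrt(eps_eff)) - 2 * dL                          # Eq. (4)
        return W, L, eps_eff

    W, L, eps_eff = patch_dimensions(2.4e9, 4.6, 1.6e-3)
    print(f"W = {W * 1e3:.2f} mm, L = {L * 1e3:.2f} mm, eps_eff = {eps_eff:.3f}")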


Table 1 Calculated parameters of the microstrip patch antenna

Parameter                      Value
Frequency (fo)                 2.4 GHz
Wavelength (λ)                 125 mm
Substrate                      FR4
Relative permittivity (εr)     4.6
Patch width (W)                37.35 mm
Patch length (L)               29.05 mm
Feed-line length (Lf)          15.136 mm
Feed-line width (Wf)           2.957 mm
Inset distance (d)             8.3 mm
Substrate height (h)           1.6 mm
Input impedance                50 Ω

3 The Designed Antennas 3.1 Single Rectangular Microstrip Patch Antenna The values of the patch width and length, the feed-line length and width, and the feed input impedance are given in Table 1. Figure 1 illustrates the layout and schematic of the single rectangular microstrip antenna.


Fig. 1 Layout and schematic of single rectangular microstrip antenna



Fig. 2 Layout and schematic of 1 * 2 rectangular microstrip array antenna

3.2 1*2 Rectangular Micro-strip Patch Array Antenna To enhance the antenna’s performance, two elements were utilized, each with the same dimensions. There is a 61.906-mm gap between each center of the feed line. Layout and schematic of 1 * 2 rectangular microstrip array antenna are shown in Fig. 2.

3.3 2*2 Rectangular Microstrip Patch Array Antenna Four elements were employed, each with the same dimensions as those shown above in order to enhance the antenna’s performance. Figure 3 illustrates a 2 * 2 rectangular microstrip array antenna.

3.4 2*4 Rectangular Microstrip Patch Array Antenna In this case, eight elements were used, all of which had the same dimensions as those listed in Table 1. Layout and schematic of 2 * 4 rectangular microstrip array antenna may be seen in Fig. 4.


Fig. 3 Layout and schematic of 2 * 2 rectangular microstrip array antenna


Fig. 4 Layout and schematic of 2 * 4 rectangular microstrip array antenna

4 Simulation Results and Discussion Using Advanced Design System (ADS) The return-loss curves for the single, 1 * 2, 2 * 2, and 2 * 4 microstrip patch antennas are shown in Figs. 5, 6, 7, and 8, respectively; the single-element antenna shows the deepest return-loss dip. The 3D radiation patterns (E_theta) of the single antenna and of the 1 * 2, 2 * 2, and 2 * 4 microstrip patch arrays are shown in Figs. 9, 10, 11, and 12, with the side lobes increasing as the number of antenna elements grows.
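For reference, the return-loss markers quoted in Figs. 5–8 can be converted into a reflection coefficient and VSWR with a few lines of Python (this conversion is standard and is not part of the ADS output itself):

    def s11_to_vswr(s11_db):
        """Convert a return-loss reading in dB to |Gamma| and VSWR."""
        gamma = 10 ** (s11_db / 20)
        return gamma, (1 + gamma) / (1 - gamma)

    for label, s11 in [("single", -24.536), ("1*2", -17.607), ("2*2", -18.652), ("2*4", -18.806)]:
        gamma, vswr = s11_to_vswr(s11)
        print(f"{label:>6}: |Gamma| = {gamma:.3f}, VSWR = {vswr:.2f}")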

Fig. 5 Magnitude of return loss for the single microstrip rectangular patch antenna (marker m1: f = 2.413 GHz, S11 = −24.536 dB)

Fig. 6 Magnitude of return loss for the 1 * 2 microstrip rectangular patch array antenna (marker m2: f = 2.420 GHz, S11 = −17.607 dB)

The gain and directivity of the antennas are shown in Figs. 13, 14, 15, and 16; their values increase as the number of antenna elements is increased. Figures 17, 18, 19, and 20 show the radiated power of the single rectangular microstrip patch antenna and of the 1 * 2, 2 * 2, and 2 * 4 arrays, respectively, while Figs. 21, 22, 23, and 24 show the antennas' radiation efficiency.
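The growth of gain and directivity with the number of elements can be illustrated with the ideal uniform-array factor. The sketch below is only an idealized estimate (it ignores the element pattern, feed losses, and mutual coupling, and is not the ADS simulation); it uses the 61.906 mm element spacing of the designed arrays at 2.4 GHz.

    import numpy as np

    c, f0 = 3e8, 2.4e9
    lam = c / f0                      # ~125 mm
    d = 61.906e-3                     # element spacing of the arrays (~lambda/2)

    theta = np.radians(np.linspace(-90, 90, 3601))
    psi = 2 * np.pi * d / lam * np.sin(theta)

    for n in (1, 2, 4, 8):            # element counts of the four designs
        af = np.abs(np.sum(np.exp(1j * np.outer(np.arange(n), psi)), axis=0))
        af_db = 20 * np.log10(af / af.max() + 1e-12)
        hpbw = np.degrees(theta[af_db >= -3].ptp())   # -3 dB beamwidth of the array factor
        print(f"N = {n}: ideal array gain factor = {10 * np.log10(n):.1f} dB, HPBW ~ {hpbw:.0f} deg")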

Fig. 7 Magnitude of return loss for the 2 * 2 microstrip rectangular patch array antenna (marker m1: f = 2.422 GHz, S11 = −18.652 dB)

Fig. 8 Magnitude of return loss for the 2 * 4 microstrip rectangular patch array antenna (marker m1: f = 2.431 GHz, S11 = −18.806 dB)

Fig. 9 3D radiation pattern (E_Theta) for the single microstrip patch antenna


Fig. 10 3D radiation pattern (E_Theta) for 1 * 2 microstrip patch array antenna


Fig. 11 3D radiation pattern (E_Theta) of 2 * 2 microstrip patch array antenna

Fig. 12 3D radiation pattern (E_Theta) of 2 * 4 microstrip patch array antenna


5 Conclusion This paper presents designs of microstrip patch array antennas for WLAN applications, simulated with ADS 2020, using a single patch and 1 * 2, 2 * 2, and 2 * 4 microstrip patch arrays. The 2 * 4 antenna array has a directivity of 14.1481 dB and a gain of 11.1879 dB and therefore gives the best performance in terms of gain and directivity. The radiated power values are similar for all the designed antennas, but the 1 * 2 antenna is the best in terms of radiated power and radiation efficiency.


Fig. 13 Gain, directivity for single microstrip rectangular patch antenna


Fig. 14 Gain, directivity for (1 * 2) microstrip rectangular patch array antenna

The depth of the return-loss dip is known to decrease as the number of antenna elements is increased; therefore, the single-element antenna is the best in terms of return loss, since its S11 values are negative and a larger magnitude corresponds to a better match.


Fig. 15 Gain, directivity for (2 * 2) micro-strip rectangular patch array antenna


Fig. 16 Gain, directivity for (2 * 4) micro-strip rectangular patch array antenna

Fig. 17 Radiation power of the single microstrip rectangular patch antenna

Fig. 18 Radiation power of the 1 * 2 microstrip rectangular patch array antenna

Fig. 19 Radiation power of the 2 * 2 microstrip rectangular patch array antenna

Fig. 20 Radiation power of the 2 * 4 microstrip rectangular patch array antenna

Fig. 21 Radiation efficiency (%) for single microstrip rectangular patch antenna


Fig. 22 Radiation efficiency (%) of (1 * 2) micro-strip rectangular patch array antenna


Fig. 23 Radiation efficiency (%) of 2 * 2 microstrip rectangular patch array antenna


Fig. 24 Radiation efficiency (%) of 2 * 4 microstrip rectangular patch array antenna


Acknowledgments I would like to thank my parents for everything they have done for me, and the College of Electronics Engineering, which helped me complete this work.

References 1. Vinayak S, Swarna P, Bansi L (2014) Designing and optimization of inset fed rectangular microstrip patch antenna (RMPA) for varying inset gap and inset length. Int J Electron Electr Eng 7(9):1007–1013 2. Bekimetov A, Zaripov F (2016) Feed line calculations of microstrip antenna. Int J Res Appl Sci Eng Technol 4(2321–9653):73–79 3. Taher K, Norsuzlin MS, Ramli N, Islam MT (2019) Circularly polarized microstrip patch antenna array for GPS application. Indonesian J Electr Eng Comput Sci (IJEECS) 15(2):920– 926


4. Xiong L, Gao P (2012) Compact dual-band printed diversity antenna for WIMAX/WLAN applications. Progress Electromagnet Res 32:151–165 5. Adamu YI, Mohamad RH, Mohamad KAR, Mohd FMY (2019) Wideband frequency reconfigurable metamaterial antenna employing SRR and CSRR for WLAN application. Indonesian J Electr Eng Comput Sci 15(3):1436–1442 6. Rashid AS, Sabira Kh (2005) Design of microstrip antenna for WLAN. J Appl Sci 5(1):47– 51.https://doi.org/10.3923/jas.2005.47.51 7. Monijit M (2004) Microwave engineering. Dhanpat Rai and Company(P)Ltd. 8. Kin-Lu W (2002) Compact and broadband. Microstrip antenna. Willy, pp 1–12 9. Garg R, Bhartia P, Bahl I, Ittipiboon (2001) Microstrip antenna design handbook. Artech House 10. Jayant GJ, Shyam SP, Swapna D, Mohan RL (2011) Bandwidth enhancement and size reduction of microstrip patch antenna by magneto inductive waveguide loading. Wirel Eng Technol 2(2):37–44 11. Ram K, Vijay L (2015) Design of microstrip antenna for wireless local area network. Int J Comput Sci Mobile Comput 4(4):361–365 12. Ashish S, Mohammad A, Kamakshi (2017) Analysis of microstrip line fed patch antenna for wireless communications. Open Eng 7(1):279–286. https://doi.org/10.1515/eng-2017-0034 13. Abil FM, Yulisdin M (2019) Aperture coupling rectangular slotted circular ring microstrip patch antenna. Indonesian J Electr Eng Comput Sci (IJEECS) 15(3):1419–1427 14. Shashi K, Suganthi S (2017) Performance analysis of optimized corporate-fed microstrip array for ISM band applications. IEEE WiSPNET 15. Anil KP (2019) Practical microstrip and printed antenna design 16. Ouazzani O, Bennani SD, Jorio M (2017) Design and simulation of 2*1 and 4*1 array antenna for detection system of objects or living things in motion. IEEE. https://doi.org/10.1109/WITS. 2017.7934640 17. Adelina GN, Cristina C, Alexandra CB, Alexandru MG (2021) Nanomaterials synthesis through microfluidic methods: an updated overview. Nanomaterials 11(4):864 18. Md Farid S, Aheibam DS (2020) Design and analysis of microstrip patch antenna arrays for millimeter wave wireless communication. International journal of engineering and advanced technology (IJEAT). ISSN: 2249-8958, vol 9 (Issue-3) 19. Salah BSA, Anil K, Arvind K (2021) Theoretical analysis of defected ground multiband rectangular shape microstrip patch antenna. Lecture notes in electrical engineering. Springer 20. Santosh KG, Sangaraju V (2021) Slotted I-patch with capacitive probe fed microstrip antenna for wideband applications. Lecture notes in electrical engineering. Springer 21. Norfishah A, Zuhani IK, Suzi SS (2020) Microstrip array antenna with inset-fed for WLAN application. Indonesian J Electr Eng Comput Sci 17(1):340–346

Assessment Online Platforms During COVID-19 Pandemic Zinah Abdulridha Abutiheen, Ashwan A. Abdulmunem, and Zahraa A. Harjan

Abstract Because of the corona pandemic, most people were forced into quarantine to prevent the disease from spreading. To keep work going and not stop the wheel of life, many organizations turned to training workshops to develop employees' skills and began holding webinar meetings and conferences. Many platforms were therefore used to conduct these workshops and scientific meetings. This study was carried out to evaluate the performance of these platforms based on data collected through an electronic questionnaire designed to be answered by academic professors. The answers to the questionnaire express the point of view of both the participants in the courses or e-learning and the lecturers. The platforms included in this study are those most used at present: Free Conference Call (FCC), Zoom, Webex, and Google Meet. The study addresses which platform is most appropriate from the point of view of the lecturer or the attendee. Keywords Assessment training courses · Online platforms · Webinars · Skill developments · COVID-19

1 Introduction The electronic revolution has swept the entire world from its broadest gates, as it is nowadays a necessity that does not include a home, an institution, a ministry or an educational edifice, and it has entered into all economic, political, commercial, service, and even educational fields where education took another course in the exploitation of the Internet and the Web sites in the learning process [1]. Therefore, the Internet offers great opportunities for developing educational and training systems, as it provided many possibilities for innovation in teaching and learning, training, teaching methods, means of communication, and its forms. The Internet has become Z. A. Abutiheen (B) · A. A. Abdulmunem · Z. A. Harjan College of Computer Science Information Technology, Department of Computer Science, University of Kerbala, Kerbala, Iraq e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_55


a major role because of its importance in helping the trained person acquire the general foundations of knowledge [2–4]. Training and knowledge are among the foundations of contemporary society's life, along with other social services such as health, housing, and others [5]. Traditional education was the pioneer, but due to the corona pandemic (COVID-19) and the resulting home quarantine, e-learning has become the pioneer, carried out through a number of electronic platforms such as Zoom, FCC, Google Meet, etc. There has been an enormous increase in the use of online platforms to organize electronic conferences and electronic workshops. Most trainees have resorted to these events for many reasons, such as annual evaluation or increased knowledge, which greatly increased the use of these electronic educational platforms. There are several studies on e-learning platforms, such as [6], which showed that Moodle outperforms the ATutor, Dokeos, dotLRN, ILIAS, LON-CAPA, OpenUSS, Sakai, and Spaghetti Learning platforms. Sinclair et al. [7] conducted a literature review examining MOOCs and how far they live up to expectations; concerns arising from that review and unrealistic expectations for novice learners are discussed. In Alseelawi et al. [8], a strategic plan was developed to build an e-learning platform using a multi-layered architecture; each layer is created in ASP.NET using an N-tier architecture. Petković and Denić [9] built an e-learning platform based on an object-oriented approach. Achtemeier et al. [10] categorized principles gathered from an extensive literature focusing on current best practices for effective teaching and learning in online courses. Sampson and Zervas [11] presented evaluation results from the application of the e-Access2Learn framework to provide services that facilitate the design, production, and sharing of accessible e-training resources and courses with the potential to be interchanged between different e-training platforms and programs. In this research, the electronic platforms used for communication in e-learning are evaluated by offering an electronic questionnaire to the trainees and lecturers of the electronic courses and workshops. The structure of this paper is organized as follows: Sect. 2 explains the problem statement, Sect. 3 shows the nature of the collected data, Sect. 4 states the aims of the paper, and Sect. 5 presents the common online platforms used in the study. The last three sections give a detailed explanation of the data and the tools used to analyze the results.

2 Problem Statement The problem of this study is crystallized in the need to identify the reality of the electronic courses and workshops that have spread widely in societies, and especially in Iraqi society, during the corona pandemic, as seen by faculty members in Iraqi universities, and to determine the best platform used for courses and electronic workshops. Accordingly, an electronic questionnaire was prepared containing a number of questions aimed at knowing the reality of the courses and electronic workshops in Iraq.


3 Questionnaire Contents The questionnaire is designed to involve three categories. The first is personal information, such as age and gender. The second part covers professional information such as workplace/university, college name, department, specialization, educational qualification, academic rank, and participation role (lecturer or attendee). The last part of the questionnaire evaluates the performance of the workshops from different aspects, which are as follows:

(a) The best organization/organizations of the course?
(b) The number of certificates obtained in the courses in which they participated
(c) Number of courses/workshops without a certificate of participation
(d) Which platforms are the best in delivering courses and workshops from your point of view?
(e) The goal of delivering or participating in the course and the rate of benefit from the courses and the participating workshops
(f) Workshop topics and courses
(g) To what degree are the topics of the workshops and courses duplicated?
(h) How satisfied are you with the workshops or online courses in their current form?

4 The Aims of the Study This study aims to learn about the effectiveness of employing the Internet in elearning, as a learning tool in the way to solve problems under the coronavirus pandemic. Moreover, providing one of the modern technologies that help to speed the spread of information and means of communication, given that the Internet is effective in the educational process. In addition, identify if there are differences between the electronic platforms and identify the best and worst ones.

5 Live Webinar Platforms Webinar marketing is a vital strategy for B2B businesses, and many consumer brands are also turning to it for their B2C marketing efforts. Currently, as a result of this pandemic, the need has become very urgent, and personal activities and sales conferences have moved to a digital environment. Webinars give you the chance to build a more personal relationship with your audience, delve deeper into the subjects that challenge them, and establish your brand as a place people can come to for important information. We are not here to discuss what makes a great webinar today, though. Instead, we run through some of the finest webinar platforms

Table 1 Sample of study

Row labels     Female   Male   Grand total
BA                  8     13            21
PhD                63     51           114
MSC                97     58           155
Grand total       171    125           296

available right now, to compare the best webinars that can be used for hosting training courses to develop employees' skills. We walk through the features of each platform with a brief clarification of the purpose for which it is best suited, to help select the most suitable webinar platform for one's needs. Many live webinar platforms were used in the COVID-19 crisis, which dramatically multiplied the use of these platforms. The platforms can be divided into two categories, non-free and free, based on the number of people that the webinar can host. Non-free platforms include Demio, WebinarNinja, JetWebinar, GoToWebinar, Webex, GetResponse, ClickMeeting, Livestream, and Webinars OnAir. Most of these platforms require payment to host a meeting of more than 50 persons. In contrast, free hosting platforms for more than 50 persons are Zoom, Google Meet, and Google Hangouts. Most platforms allow people to sign up once for an ongoing series of webinars. Users have the option to send private messages to event coordinators or public messages visible to everyone [11, 12]. In this study, we focus on the most popular platforms used in presenting e-workshops: Zoom, Google Meet, Google Hangouts, Google Classroom, and Free Conference Call (FCC).

6 Method and Procedure 6.1 The Study Community The sample of the study consisted of the total participants in the courses and electronic workshops who are a group of faculty members in Iraqi universities and some employees and graduates in the State of Iraq. The number of respondents to the study tool reached 302 employees and academics. Table 1 shows the distribution of study personnel with specialization, gender, and age, as follows.

6.2 Study Tool To achieve the goal of the study, which is represented in identifying the reality of the e-workshops and e-courses for a sample of faculty members and a number of


employees, and their purposes of participating in it and to find out the mechanism of its organization, an electronic questionnaire was built through Google forms as a tool for collecting data and information that contains two parts: The first contains personal information such as age, gender, specialization, academic qualification, and academic rank. The second section of them is related to e-courses and e-workshops; they were established by relying on a review of previous literature by reviewing the theoretical background related to the subject of the current study and the role of previous data.

6.3 Study Tool Application The study tool was applied in its final form, after ensuring its validity and consistency, to the study sample members through its electronic distribution on social media sites (Telegram and Facebook) during the corona pandemic; the answers were then analyzed to determine the desired goals of this study. It took only about one week to publish the study tool while verifying its authenticity.

6.4 Data Analysis The data collected were analyzed through the study tool in Excel using the pivot tables to obtain the statistics for the purpose of the study.
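The same kind of cross-tabulation can also be reproduced outside Excel. The short sketch below assumes the questionnaire responses have been exported to a CSV file with hypothetical column names (gender, qualification, platform); it uses pandas to build the kind of pivot table reported in Table 1 and is only an illustration, not the authors' actual workflow.

```python
# Sketch only: reproduces an Excel-style pivot table with pandas.
# The file name and column names (gender, qualification, platform) are
# assumed placeholders, not the actual questionnaire export schema.
import pandas as pd

responses = pd.read_csv("questionnaire_responses.csv")  # hypothetical file

# Count of respondents by academic qualification and gender (cf. Table 1).
sample_pivot = pd.pivot_table(
    responses,
    index="qualification",     # e.g. BA, MSc, PhD
    columns="gender",
    values="platform",         # any non-null column works for counting
    aggfunc="count",
    margins=True,              # adds the "Grand total" row and column
    margins_name="Grand total",
)
print(sample_pivot)
```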

7 Results and Discussion In this section, we present the answers to the questions as follows: Q1: The best organization/organizations of the course? There are many organizations considered the best in their work; according to the opinion of the study sample they number 56 organizations and universities. We eliminate all organizations whose frequency is less than 10 and keep the rest; Fig. 1 shows the best organizations, with frequencies in the range 11–61, such as the universities of Mosul, Tusi, Baghdad, Kufa, Al-Mustansirya, Diyala, and Tikrit and the Al-Aayan foundation. Q2: Some criteria used in this study. Several criteria are considered in this study, as explained in Table 2. Q3: Which platforms are the best/worst in delivering courses and workshops from your point of view? This question was chosen to determine which platform is most beneficial for organizing workshops. The answers to this question are depicted in Fig. 2.


Fig. 1 Best organization (s) for the e-workshops or e-courses

Table 2 Some criteria used in this study

Criteria                                                                                   Standard deviation      Mean
The number of e-courses/e-workshops participated in                                                 11.02518   8.986711
The number of participations without a certificate                                                  14.12946  13.86047
The number of certificates obtained in the e-courses/e-workshops in which they participated          20.1261  22.84718
Total                                                                                               16.58445  15.23145

Fig. 2 Best platform for the e-workshop or e-course

The figure shows that the FCC platform is the preferred one, while YouTube is the least preferred. Q4: The goal of delivering or participating in the course and the rate of benefit from the courses and the participating workshops.


The goal of this question is to know why participants attend these online workshops. The most common answer was to develop skills, while others attend just to collect certificates. Figure 3 shows the responses to this question. In this question, the participants also reply on how much they benefit from these workshops in terms of skill development and annual evaluations. Q5: Workshop topics and courses. Workshop topics are also considered in order to evaluate and assess the e-learning skill development process. Figure 4 illustrates the rate of answers on whether the subjects of the workshops are duplicated or not, while Table 3 gives the rating of some questions about the workshop topics.

Fig. 3 Rate to benefit from the courses and the participating workshops

Fig. 4 Topics of the e-workshops and e-courses are duplicated to a degree


Table 3 Workshops topics and courses

Questions                                                                              Excellent  Very good  Good  Median  Accept  Weak  Total
How satisfied are you with the e-workshops and the e-courses in their current form           32         59    59      23       2     4    179
E-workshop topics and e-courses                                                               67        128    86      14       6     X    301
The utilization rate of the e-courses and the e-workshops participating in them              52        129    90      26       4     X    301

8 Conclusion The conclusions of this study can be drawn as follows. Firstly, it gives a complete vision to the officials in continuing education about the reality of e-courses and e-workshops, to be taken into consideration when making decisions regarding e-learning. Secondly, the study may give practical indicators that contribute to developing and organizing them, and it suggests some solutions that increase the efficiency of e-learning by demonstrating the most efficient one of the electronic platforms with the best experience. This study was limited to a sample of Iraqi university professors. It also includes a specific set of statistical analyses of the extent to which faculty members benefit from workshops and electronic courses in scientific promotions, annual evaluation, skills development, and overcoming obstacles in the electronic educational process, and it determines FCC as the best platform for electronic training. Acknowledgments We are happy to thank the University of Kerbala for supporting our work.

References 1. Samir Abou El-Seoud M, Taj-Eddin IATF, Seddiek N, El-Khouly MM, Nosseir A (2014) E-learning and students’ motivation: a research study on the effect of e-learning on higher education. Int J Emerg Technol Learn 9(4):20–26 2. Ngang TK, Chan TC, a/p Vetriveilmany UD (2015) Critical issues of soft skills development in teaching professional training: educators’ perspectives. Procedia Soc Behav Sci 205:128–133 3. Tang KN (2020) The importance of soft skills acquisition by teachers in higher education institutions. Kasetsart J Soc Sci 41(1):22–27 4. Ariratana W, Sirisookslip S, Ngang TK (2015) Development of leadership soft skills among educational administrators. Procedia Soc Behav Sci 186:331–336 5. Rebele JE, St Pierre EK (2019) A commentary on learning objectives for accounting education programs: the importance of soft skills and technical knowledge. J Account Educ 48:71–79 6. Graf S, List B (2005) An evaluation of open source e-learning platforms stressing adaptation issues. In: Proceedings—5th IEEE international conference on advanced learning technology ICALT 2005, vol 2005, pp 163–165


7. Sinclair J, Boyatt R, Rocks C, Joy M (2015) Massive open online courses: a review of usage and evaluation. Int J Learn Technol 10(1):71–93 8. Alseelawi NS, Adnan EK, Hazim HT, Alrikabi HTS, Nasser KW (2020) Design and implementation of an E-learning platform using N-tier architecture. Int J Interact Mob Technol 14(06):171 9. Petković D, Denić N (2020) Neuro-fuzzy assessment of pupil performance based on e-learning platform implementation. J Inst Electron Comput 2(1):12–27 10. Achtemeier SD, Morris LV, Finnegan CL (2003) Considerations for developing evaluations of online courses. J Asynchronous Learn Netw 7(1) 11. Sampson DG, Zervas P (2010) Technology-enhanced training for all: evaluation results from the use of the e-Access2Learn framework. In: Proceedings—10th IEEE international conference on advanced learning technology ICALT 2010, pp 718–719 12. Yagyanath R et al (2021) J Phys Conf Ser 1804:012054

Fabrication and Analysis of Heterojunction System's Electrical Properties Made of Compound Sb2O3:In2O3/Si(n,p) Films by Spin Coating Ali J. Khalaf, Abeer S. Alfayhan, Raheem G. K. Hussein, and Mohammed Hadi Shinen

Abstract Thin films of Sb2O3:In2O3/Si(n,p) were prepared with different weights; the films were prepared by spin coating at rotation speeds of (2000, 3000, 4000, 5000, 6000) rpm, respectively, with a rotation time of 7 s. The electrical properties (Hall effect) were studied. The results show that the prepared film of 0.1 g Sb2O3 is p-type and becomes n-type when the doping ratio is increased. The current–voltage characteristics in the dark and under illumination showed that the compound films are sensitive to light: we notice a decrease, with some disparity, in the current values in the dark state and an increase in the sensitivity of the samples under illumination in both forward and reverse bias. The best heterojunction efficiency, with a value of 3.5714286%, was obtained when depositing the n-type film of the Sb2O3:In2O3 sample on the Si(p) substrate at the weight ratio Sb2O3 60%:In2O3 40%. The efficiency of the prepared heterojunction increases with the weight ratio on Si(p) and then starts to decrease when the (n) film is deposited on the n-type silicon substrate. The highest value of open-circuit voltage, 90 mV, was found for the Si(p) sample with a p-type film deposited from the Sb2O3:In2O3 sample at weight ratios of 80% Sb2O3 and 20% In2O3. There is a variation in the values (Isc, Voc, Imax, Vmax, FF) when depositing isotype (p/p), (n/n) and anisotype (n/p), (p/n) junctions. Keywords Heterojunction · Electrical · Hall Effect · I–V characteristics · Heterojunction conductivity · Spin coating

A. J. Khalaf (B) Radiology Techniques Department, College of Medical Technology, The Islamic University, Najaf, Iraq e-mail: [email protected] A. S. Alfayhan · R. G. K. Hussein · M. H. Shinen Department of Physics, College of Science, University of Babylon, Babylon, Iraq e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_56


1 Introduction Nanocomposites and composite materials are the group of engineering materials produced by adding certain weight or volume ratios of one or more materials (reinforcement materials) to a base material (matrix material), so that the supporting materials are well combined and mixed with the matrix material, ensuring a homogeneous overlap in which the particles of the supporting materials are ideally distributed. The purpose of producing composite materials is to add certain properties to the matrix material or to add traits that were not inherent to it [1, 2]. Thin films are one of the important branches of solid-state physics and have crystallized into a branch in themselves. This branch deals with micro-devices, all of which have a very small thickness of less than 1 μm [3]. The properties of materials in the form of thin films have attracted the attention of physicists since the second half of the seventeenth century, when much theoretical research was conducted in this area; the study of the practical side then developed at the beginning of the nineteenth century when semiconductors entered into practice [4, 5]. Because of their small thickness, these films are deposited on substrates of different materials, such as glass, quartz, silicon, and aluminum, depending on the nature of the use and study [6, 7]. Biswas et al. [8] reported an oxide solar cell with an electrochemically deposited Cu2O absorber layer and a Zn1−xMgxO buffer layer/FTO fabricated by spin coating/electrochemical deposition; they examined and improved technologies for making solar cells that are both cost-effective and simple to make. Using a simple chemical reaction and spin coating, Hernández-Arteaga et al. [9] investigated the structural and electrical properties of Gd-doped CeO2 thin films. Doping of TiO2 thin films enhanced the sol–gel precursor's viscosity, minimizing mass delivery, and annealing the ion-implanted films twice improved densification using sol–gel spin coating [10]. Hassan and Ali [11] fabricated a vertical P–I–N thin-film heterojunction diode-based UVC detector in which the layers of NiO, BaTiO3, and ZnO were deposited by the sol–gel spin-coating technique. The application of p-CuO/n-ZnO heterojunctions for the growth of efficient and low-cost optoelectronic devices, especially photodetectors, solar cells, and gas sensors, was highlighted in the study of Prabhu et al. [12].

2 Experimental The thin films of Sb2O3:In2O3 were prepared by spin coating at rotation speeds of (2000, 3000, 4000, 5000, 6000) rpm, respectively, with a rotation time of 7 s. 0.1 g of Sb2O3 was dissolved in 5 ml of ionic water with 0.5 g of PVA added (PVA was used as a binder only). Figure 1 exhibits a sketch of the circuit design for measuring the current–voltage characteristics. The samples were doped with In2O3 and then applied on a glass slide at room temperature. All substrates were


Fig. 1 Circuit diagram for measuring current–voltage characteristics

washed thoroughly using distilled water and then left to dry. An interdigitated electrode (IDE) substrate was used to study some electrical properties of Sb2O3:In2O3 (Hall effect), the heterojunction solar cell, and the photosensor.

3 Results and Discussion

3.1 The Electrical Properties of (Sb2O3:In2O3) Thin Films

3.1.1 Hall Effect

The Hall coefficient (RH) is important for the electrical properties; RH depends on the value of the magnetic field incident on the film perpendicular to the current direction. Hall effect measurements of the Sb2O3:In2O3 films were studied at different weight ratios, determining the type of charge carriers, their concentration, and the values of the Hall coefficient, resistivity, and mobility at room temperature. It was found that the carriers in the Sb2O3:In2O3 films with a positive Hall coefficient (RH) are of positive type (p-type). As the weight ratio increases, the pure Sb2O3 films are converted from p-type to n-type; the reason is that the increase in the weight ratios leads to an increase in the size of the granules that absorb the energy of the incident photons, so that electrons become the primary carriers and holes the secondary carriers in the film. Using a computer program, the values of conductivity (σ), resistivity (ρ), concentration (nH), and Hall coefficient (RH) were calculated; these values are disparate for the n-type carriers, but the mobility values decrease as the weight ratio increases (Table 1).
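For reference, the quantities listed in Table 1 are related by the standard Hall-effect expressions. The sketch below is a generic illustration of how carrier type, concentration, mobility, and resistivity would be obtained from a measured Hall coefficient and conductivity; it is not the authors' program, and the numerical inputs are placeholders only.

```python
# Generic Hall-effect relations (illustration only, not the authors' code).
ELEMENTARY_CHARGE = 1.602e-19  # C

def hall_parameters(r_hall_cm3_per_C, sigma_per_ohm_cm):
    """Return carrier type, concentration (cm^-3), mobility (cm^2/V.s), resistivity (ohm.cm)."""
    carrier_type = "p-type" if r_hall_cm3_per_C > 0 else "n-type"
    concentration = 1.0 / (abs(r_hall_cm3_per_C) * ELEMENTARY_CHARGE)  # n = 1 / (|R_H| e)
    mobility = abs(r_hall_cm3_per_C) * sigma_per_ohm_cm                # mu = |R_H| * sigma
    resistivity = 1.0 / sigma_per_ohm_cm                               # rho = 1 / sigma
    return carrier_type, concentration, mobility, resistivity

# Placeholder example values (order of magnitude only).
print(hall_parameters(r_hall_cm3_per_C=1.8e7, sigma_per_ohm_cm=9.3e-6))
```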

Table 1 Hall parameters for the (Sb2O3:In2O3) films at different weight ratios: carrier concentration nH (cm−3), conductivity σ (Ω cm)−1, resistivity ρ (Ω cm), mobility μH (cm2/V s) × 102, Hall coefficient RH (cm3/C), and type of carriers for the samples Sb2O3 (0.1 g), Sb2O3 (90%):In2O3 (10%), Sb2O3 (80%):In2O3 (20%), Sb2O3 (70%):In2O3 (30%), and Sb2O3 (60%):In2O3 (40%). The pure Sb2O3 (0.1 g) and 80%:20% films are p-type, while the 90%:10%, 70%:30%, and 60%:40% films are n-type.




Fig. 2 I–V characteristics in dark and light for film Si (n)/Sb2 O3 0.1 gm (p)

I–V Characteristics for Si(p,n)/Sb2O3:In2O3 Heterojunction in Light and Dark. The study of the current–voltage properties in the dark and under illumination is important because it is a clear indication of the possibility of using the Sb2O3:In2O3 thin films as a light-sensitive material (solar cell). The effect of light is evident in the change of the current and voltage values of the Sb2O3:In2O3 thin-film samples prepared at different weight ratios (0.1 g, 20%, 30%, 40%). Figure 2 shows that the current values in the dark state are lower than under illumination in forward bias for the Si(p)-type sample, and the Sb2O3 p-type semiconductor transitions to n-type when the doping is increased; we note divergent current values in forward and reverse bias at the In2O3 weight ratios (10%, 20%, 30%), as in Figs. 3, 4 and 5, in both forward and reverse bias, while the response of the samples is higher in the dark state (Fig. 6). This is due to the change in the grain sizes and grain aggregations on the Si(n) substrates. Figures 7, 8, 9 and 10 show that the electrical current of the Si(p) sample does not flow at the weight ratios (20%, 30%, 40%) in the dark state because of the high potential barrier formed at the (p–n) junction of the samples, which increases with the weight ratio; however, the increase is evident when the (n) film is deposited on the Si(n) substrate due to a decrease in the potential barrier between (n–n) and the difference in the carrier weight ratio, as in Figs. 4 and 5. As for the efficiency of the prepared heterojunction, it is noted that it increases up to the weight ratio Si(p)/Sb2O3 60%:In2O3 40% (n); then it starts to decrease when the (n) film is deposited on the (n) substrate, because the electron density increases when the weight ratio increases. Table 2 lists the open-circuit voltage (Voc), short-circuit current (Isc), maximum voltage and current, fill factor, and efficiency of the Si/Sb2O3:In2O3 heterojunction samples. Using a computer program, it was found that these values are disparate and do not indicate an increasing or decreasing pattern, while it was noticed that the highest value



Fig. 3 I–V characteristics in dark and light for film Si (n)/Sb2 O3 90% :In2 O3 10% (n)


Fig. 4 I–V characteristics in dark and light for film Si (n)/Sb2 O3 80% :In2 O3 20% (p)

of open-circuit voltage is 40 mV for the Si(p) sample with an (n)-type film. It was noted that the best efficiency, with a value of 3.5714286%, is obtained when depositing the (n)-type film of the Sb2O3:In2O3 sample on the Si(p) substrate at the weight ratio of 60% Sb2O3 and 40% In2O3 (6, 7) (Fig. 11).
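The fill factor and efficiency quoted in Table 2 follow the usual photovoltaic definitions, FF = (Vmax·Imax)/(Voc·Isc) and η = Pmax/Pin. The sketch below shows the arithmetic for one row of the table; the incident power Pin is not stated in the paper, so the value used here is only an assumption.

```python
# Standard solar-cell figure-of-merit arithmetic (illustrative only).
def fill_factor(v_oc_mV, i_sc_mA, v_max_mV, i_max_mA):
    return (v_max_mV * i_max_mA) / (v_oc_mV * i_sc_mA)

def efficiency_percent(v_max_mV, i_max_mA, p_in_microW):
    p_max_microW = v_max_mV * i_max_mA  # mV * mA = microwatt
    return 100.0 * p_max_microW / p_in_microW

# Example with the last row of Table 2; the incident power P_in is an assumed value.
ff = fill_factor(v_oc_mV=60.0, i_sc_mA=0.15, v_max_mV=30.0, i_max_mA=0.07)
eta = efficiency_percent(v_max_mV=30.0, i_max_mA=0.07, p_in_microW=58.8)
print(round(ff, 2), round(eta, 2))  # about 0.23 and 3.57, as tabulated
```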



Fig. 5 I–V characteristics in dark and light for film Si (n)/Sb2 O3 (70%) :In2 O3 (30%) (n)


Fig. 6 I–V characteristics in dark and light for film Si (n)/Sb2 O3 60% :In2 O3 40% (n)

4 Conclusions

4.1 Electrical Properties (Hall Effect)

The carrier type of the Sb2O3:In2O3 compound films changes from p-type to n-type when the weight ratio increases; the density of carriers, conductivity, and resistivity are disparate, while the mobility decreases with increasing weight ratio.


Fig. 7 I–V characteristics in dark and light for film Si (p)/Sb2 O3 (0.1)


Fig. 8 I–V characteristics in dark and light for film Si (P)/Sb2 O3 (90%) :In2 O3 (10%) (n)

4.2 I–V Characteristics for Si(p,n)/Sb2O3:In2O3 Heterojunction

1. We note a decrease, with some disparity, of the current values in the dark state and an increase in the sensitivity of the samples under illumination in forward and reverse bias.
2. It was noted that the best efficiency of the heterojunction, with a value of 3.5714286%, is obtained when depositing the (n)-type film of the Sb2O3:In2O3 sample on the Si(p) substrate at the weight ratio 60% Sb2O3:40% In2O3.



Fig. 9 I–V characteristics in dark and light for film Si (P)/Sb2 O3 (80%) :In2 O3 (20%) (P)


Fig. 10 I–V characteristics in dark and light for film Si (P)/Sb2 O3 (60%) :In2 O3 (40%) (n)

3. As for the efficiency of the prepared heterojunction, it is noted that it increases with the weight ratio and then starts to decrease when the (n) film is deposited on the (n) substrate, because the electron density increases with the weight ratio.
4. It was noticed that the highest value of open-circuit voltage is 40 mV for the Si(p) sample with an (n)-type film deposited from the Sb2O3:In2O3 sample at a weight ratio of 70% Sb2O3 and 30% In2O3. There is a variation in the values (Isc, Voc, Imax, Vmax, FF) when depositing isotype (p/p), (n/n) and anisotype (n/p), (p/n) junctions.
5. It is noticed that the highest value of open-circuit voltage is 40 mV for the Si(p) sample with an (n)-type film deposited from the Sb2O3:In2O3 sample at a


Table 2 Values of open-circuit voltage, short-circuit current, maximum voltage and current, fill factor, and efficiency for samples of the Si(p,n)/Sb2O3:In2O3 heterojunction at different weight ratios

Sample name — Voc (mV), Isc (mA), Vmax (mV), Imax (mA), F.F, η%
Si (n)/Sb2O3 0.1 gm (p): 55.0000, 0.0005, 2.5, 0.0003, 0.03, 0.0012755
Si (n)/Sb2O3 (90%):In2O3 (10%) (n): 590.0000, 0.0023, 3, 0.0012, 0.00, 0.0061224
Si (n)/Sb2O3 (80%):In2O3 (20%) (p): 85.0000, 0.038, 38, 0.02, 0.24, 1.2925170
Si (n)/Sb2O3 (70%):In2O3 (30%) (n): 850.0000, 0.005, 45.00, 0.003, 0.04, 0.3020945
Si (n)/Sb2O3 (60%):In2O3 (40%) (n): 32.0000, 0.03, 15, 0.018, 0.06, 0.1037215
Si (p)/Sb2O3 (0.1): 48.0000, 0.08, 30, 0.03, 0.23, 1.5306122
Si (P)/Sb2O3 90%:In2O3 10% (n): 28.0000, 0.042, 12, 0.024, 0.24, 0.4897959
Si (P)/Sb2O3 80%:In2O3 20% (P): 980.00, 0.0009, 90, 0.00073, 0.07, 0.1117347
Si (P)/Sb2O3 70%:In2O3 30% (n): 89.00, 0.03, 40, 0.015, 0.22, 1.0204082
Si (P)/Sb2O3 (60%):In2O3 (40%) (n): 60.00, 0.15, 30, 0.07, 0.23, 3.5714286


Fig. 11 I–V characteristics in dark and light for film Si (P)/Sb2 O3( 60%) :In2 O3 (40%) (n)

weight ratio of 70% Sb2O3 and 30% In2O3. There is a variation in the values (Isc, Voc, Imax, Vmax, FF) when depositing isotype (p/p), (n/n) and anisotype (n/p), (p/n) junctions.


References 1. Hench L (1991) Bioceramics: from concept to clinic. J Am Ceram Soc 74(7):1487–1510 2. Bengisu M (2013) Engineering ceramics. Springer Science & Business Media, Berlin 3. Usha K, Sivakumar R, Sanjeeviraja C (2013) Optical constants and dispersion energy parameters of NiO thin films prepared by radio frequency magnetron sputtering technique. J Appl Phys 114(12):123501 4. Eckcrtova L (1977) Physics of thin film. Plenum press, New York 5. Ali R et al (2020) Performance analysis of photovoltaic cells at varying environmental parameters and solar cell precise algorithm. J Phys Conf Ser 1530:012156 6. López R, Gómez R (2012) Band-gap energy estimation from diffuse reflectance measurements on sol–gel and commercial TiO2 : a comparative study. J Sol-Gel Sci Technol 61(1):1–7 7. Singh R, Yadav L, Shweta T (2019) Effect of annealing time on the structural and optical properties of n-CuO thin films deposited by sol-gel spin coating technique and its application in n-CuO/p-Si heterojunction diode. Thin Solid Films 685:195–203 8. Biswas I, Roy P, Maity U, Sinha P, Chakraborty A (2020) Effects of Mg% on open circuit voltage and short circuit current density of Zn1-xMgxO/Cu2 O heterojunction thin film solar cells, processed using electrochemical deposition and spin coating. Thin Solid Films 711:138301 9. Hernández-Arteaga J, Moreno-García H, Rodríguez A (2021) Low concentration (x < 0.01) Gd doping of CeO2 thin films for n-type layers deposited by spin coating. Thin Solid Films 724:138602 10. Chen A, Chen W, Majidi T, Pudadera B, Atanacio A, Manohar M, Koshy P (2021) Mo-doped, Cr-doped, and Mo–Cr codoped TiO2 thin-film photocatalysts by comparative sol-gel spin coating and ion implantation. Int J Hydrogen Energy 46:12961–12980 11. Hassan A, Ali G (2020) Thin film NiO/BaTiO3 /ZnO heterojunction diode-based UVC photodetectors. Superlattices Microstruct 147:106690 12. Prabhu R, Saritha A, Shijeesh M, Jayaraj M (2017) Fabrication of p-CuO/n-ZnO heterojunction diode via sol-gel spin coating technique. Mater Sci Eng, B 220:82–90

Generation of Interactive Fractals Using Generalized IFS Sukanta Kumar Das, Jibitesh Mishra, and Soumya Ranjan Nayak

Abstract An iterated function system (IFS) is a set of contraction affine transformations that express relationships among the portions of an image. It is made up of m affine transformations W1, W2, …, Wm, each with its own probability. The probabilities have an impact on the rate at which the various regions and characteristics of the image are filled in. Conventional objects like lines, squares, and cubes are very easy to design, but unconventional objects like trees, mountains, and clouds are very difficult to design. By using affine transformations, we can generate such complex objects. The generation of self-similar objects basically uses fixed techniques, whereas unconventional objects require random variations in the design process. An attempt is made to generate various fractal sets, and beautiful images within those fractal sets, using a generalized IFS with different probabilities, totalling up to 1, assigned to each affine transformation. Keywords Fractals · Iterated function system · Affine transformation · Conventional objects

1 Introduction A fractal is a complex pattern, when you break down fractals into smaller sections; each smaller portion looks exactly like the original fractal, which is known as its self-similarity property. In mathematics, fractals are a type of set that exhibits selfsimilarity patterns. Fractals in nature are so complex and irregular structure having S. K. Das Department of Computer Science and Application, Biju Patnaik University of Technology, Rourkela, Odisha, India J. Mishra Department of Computer Science and Application, Odisha University of Technology and Research, Bhubaneswar, Odisha, India e-mail: [email protected] S. R. Nayak (B) Amity School of Engineering and Technology, Amity University Uttar Pradesh, Noida, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_57


different size, shape, and dimension. These complex objects are not rendered with convincing fidelity due to the difficulties in defining and rendering their geometry, which is why classical geometry fails to analyze them. In order to measure the complexity of these objects, the fractal dimension comes into existence. Downscaled, twisted, and tilted reproductions of themselves are embedded in the structure of fractals. Since its inception, fractal geometry has evolved into a wide range of diverse shapes and sizes. It is this principle of self-similarity that underpins fractal geometry. Creating a fractal can be as simple as iteratively or recursively creating a repeating pattern. There are no smooth barks or straight lines of lightning in the sky according to Mandelbrot [1], who also claims that mountains, coastlines, and even clouds are not spherical. Barnsley et al. [2] proposed the collage theorem, which gave the solution to the inverse problem of creating an IFS from images. He gave generalized affine transformations for creating a fern, but the same cannot be extended to other classes of fractals. Subsequently, Jacquin [3] gave a domain-to-range mapping process to create a fractal coding technique for any image. The onset of the IFS is characterized as follows:

IFS_xy = ∫_H z^x dμ(W), where x = 0, 1, 2, …    (1)

where W represents the points generated by the affine transformations ωi. Florindo and Bruno [4] gave fractal descriptors for different classes of fractals. An attempt has been made in this paper to find IFSs for generalized fractal sets such as the Cantor set, the Koch curve, and Sierpinski's gasket. This paper also shows how to design fractal objects with random variations in parameters and probabilities, and how to design fractal objects by passing parameters interactively. Fractal images can be generated from a modified MRCM by including inverse functions in the unrepresented areas and making a complete mapping of functions in the entire region of the target image; beautiful images have been created using a modified MRCM [5, 6]. Chang [7] modified some IFSs to generate new 2D fractal sets that can be arbitrary affine transformations of the original fractal sets. Contraction mapping principles are applied suitably for a particular class of fractal in order to generate any fractal image in that set. The rest of this article is organized as follows: Sect. 2 presents random fractals, Sect. 3 describes IFS-based fractal generation, Sect. 4 covers the experimental setup and analysis, and Sects. 5 and 6 give the result discussion and conclusion, respectively.

2 Random Fractals Random fractal objects are basically statistically self-similar objects. Randomness can be applied to an object to generate a new object. A random fractal contains


Fig. 1 Koch curve with random effects [2]

objects having irregular patterns. Fractals can be self-similar, self-affine, or random in type. In a self-similar fractal, the scaled-down parts are similar to the original image. In the self-affine case, the scaled-down image varies with respect to the original image along the X, Y, and Z axes. Random fractals have infinite detail at all X, Y, Z scales. This method is used to generate irregular patterns like clouds, mountains, trees, etc. Fractals with irregular patterns are very complex to design; a fractal object is a never-ending pattern. Randomness can be applied to the Koch curve, as shown in Fig. 1.

Algorithm 1: To generate random fractals
Step 1: Take an IFS ⟨Ti, Pi⟩, i = 1, 2, …, n, which contains transformations Ti and probabilities Pi such that Σ_{i=1}^{n} Pi = 1
Step 2: Start with a point which will act as the origin
Step 3: Randomly choose a transformation Ti according to its probability Pi
Step 4: Transform the point using Ti and plot it
Step 5: Go to Step 3
Step 6: Continue this process for a number of iterations
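A minimal implementation of Algorithm 1 (often called the "chaos game") might look like the sketch below. The (a, b, c, d, e, f, p) rows shown are the fern coefficients listed later in Table 1 of Case 1; any other IFS with probabilities summing to 1 could be substituted.

```python
# Sketch of Algorithm 1 (random-iteration / "chaos game") for an IFS.
import random

IFS = [
    #  a      b      c     d     e    f     p
    ( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
    ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
    ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
    (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
]

def chaos_game(ifs, iterations=50_000):
    weights = [row[6] for row in ifs]     # the probabilities Pi
    x, y = 0.0, 0.0                       # Step 2: start at the origin
    points = []
    for _ in range(iterations):           # Step 6: repeat many times
        a, b, c, d, e, f, _p = random.choices(ifs, weights=weights)[0]  # Step 3
        x, y = a * x + b * y + e, c * x + d * y + f                     # Step 4
        points.append((x, y))              # plot/collect the point
    return points

points = chaos_game(IFS)
```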

3 Generation of Fractals Using IFS

Fractals can be generated by using an iterated function system (IFS). Fractal objects are generated by repeatedly applying a specified transformation function. Using Q0 = (X0, Y0, Z0) as the starting position, a transformation function F iterates to generate increasing levels of detail through the calculations:

Q1 = F(Q0), Q2 = F(Q1), Q3 = F(Q2)    (2)

In simple terms,

Qn+1 = F(Qn)    (3)


An IFS [8] is a finite set of contraction mappings on a complete metric space. It is a set of transforms that change an object's size, shape, and position, using fixed geometric replacement rules that may be stochastic or deterministic. Scaling, rotation, shearing, and translation of points are all examples of affine transformations [4]. There is a set of contraction mappings Wk on the complete 2D metric space (X, d), where each affine map has its own contraction factor for k = 1, 2, 3, …, m. The IFS is defined by these contraction mappings Wk: X → X with Sk ≤ 1 for each affine map. A few examples are the Koch snowflake, the Sierpinski gasket, and the Cantor set. The IFS is made up of W1, W2, …, Wm, each of which has an associated probability. As the image is being filled in, its many regions and properties are affected by the probabilities. The mathematical formulation can be described as follows:

Fi([x; y]) = [a b; c d][x; y] + [e; f] = (ax + by + e, cx + dy + f)    (4)

where Fi is the affine transformation, i = 1, 2, …, n. Each fractal object generation needs multiple affine transformations. The coefficients a, b, c, d represent rotation/dilation for each transformation, and e and f represent the translation. Each affine transformation F1, F2, …, Fn has a probability value P1, P2, …, Pn, and the sum of P1, P2, …, Pn must equal 1. The appropriate steps for generating fractals with an IFS are described in Algorithm 2.

Algorithm 2: To generate fractals using IFS
Step 1: Design the initial pattern
Step 2: Create a set of transformations
Step 3: Apply the transformations to the initial pattern
Step 4: Apply the transformations to the newly created pattern
Step 5: Repeat Step 4
Step 6: Stop
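Algorithm 2 can also be run deterministically: at every iteration, each map of Eq. (4) is applied to the whole current pattern and the results are merged. The sketch below illustrates this with an assumed starting pattern and Sierpinski-gasket-like coefficients; it is only an illustration of the idea, not the authors' implementation.

```python
# Sketch of Algorithm 2: deterministic iteration of all affine maps of Eq. (4).
# Starting pattern and map coefficients are illustrative assumptions.
def apply_map(coeffs, point):
    a, b, c, d, e, f = coeffs
    x, y = point
    return (a * x + b * y + e, c * x + d * y + f)

def deterministic_ifs(maps, initial_pattern, iterations=6):
    pattern = list(initial_pattern)
    for _ in range(iterations):
        # Each iteration replaces the pattern by the union of all transformed copies.
        pattern = [apply_map(m, p) for m in maps for p in pattern]
    return pattern

# Sierpinski-gasket-like maps (a, b, c, d, e, f); probabilities are not needed here.
SIERPINSKI = [
    (0.5, 0.0, 0.0, 0.5, 0.0, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.5, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.0, 0.5),
]
attractor = deterministic_ifs(SIERPINSKI, initial_pattern=[(0.0, 0.0)])
```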

4 Experimental Setup and Analysis Initially, Barnsley and Demko [8] introduced IFS for generating fractals. The authors have generated various fractal patterns interactively by using generalized IFS. By changing different parameters like rotation/dilation, translation and probability values different fractal patterns can be generated. In this set of experiments, we tried to generate different IFS patterns by using single IFS algorithm with different coefficient values which are discussed in different cases.



4.1 Case 1: Fern The generalized IFS for the Fern with differing parameters and probabilities as per the collage theorem given by Barnsley has generated by using different coefficients tabulated Tables 1 and 2, respectively. Form this experiment, it shows that the different images may be generated in terms of connected and disconnected patterns based on probability value depicted in Fig. 2a and b, respectively. Table 1 Fern generation using different probability value a

b

c

d

e

f

p

F1

0

0

0

0.16

0

0

0.01

F2

0.85

0.04

−0.04

0.85

0

1.60

0.85

F3

0.20

−0.26

0.23

0.22

0

1.60

0.07

F4

−0.15

0.28

0.26

024

0

0.44

0.07

Table 2 Fern generation using same probability value a

b

c

d

e

f

p

F1

0.849

F2

0.197

0.037

−0.037

0.849

0.075

0.183

0.25

−0.226

0.226

0.197

0.4

0.049

F3

0.25

−0.15

0.283

0.26

0.237

0.575

−0.084

0.25

F4

0

0

0

0.16

0.5

0

0.25

Fig. 2 Generation of fern images. a Different probability. b Same probability

546

S. K. Das et al.

4.2 Case 2: Sierpinski Triangle In like to Case 1, here also we perform same experiment to generate Sierpinski triangle by considering different coefficient value with similar and dissimilar probability values tabulated in both Tables 3 and 4, respectively. There corresponding graphical representation also presented in Fig. 3a and b, respectively. Table 3 Fern generation using same probability with different coefficient value a

b

c

d

e

f

p

F1

0.5

0.0

0.0

0.5

0.0

0.0

0.25

F2

0.5

0.0

0.0

0.5

0.5

0.0

0.25

F3

0.5

0.0

0.0

0.5

0.0

0.5

0.25

F4

0.5

0.0

0.0

0.5

0.0

0.5

0.25

Table 4 Sierpinski triangle generation using different probability with different coefficient value a

b

c

d

e

f

p

F1

0.5

0

0

0.5

0

0

0.33

F2

0.5

0

0

0.5

1

0

0.33

F3

0.5

0

0

0.5

0.5

0.5

0.34

F4

0

0

0

0

0

0

0

Fig. 3 Sierpinski triangle generation. a Using same probability. b Using different probability

Generation of Interactive Fractals Using Generalized IFS

547

Table 5 Koch curve generation using same probability with different coefficient value a

b

c

d

e

f

P

0.0000

0.3333

0.0000

0.0000

0.25

−0.2887

0.2887

0.1667

0.3333

0.0000

0.25

0.2887

−0.2887

0.1667

0.5000

0.2887

0.25

0.0000

0.0000

0.3333

0.6667

0.0000

0.25

F1

0.3333

0.0000

F2

0.1667

F3

0.1667

F4

0.3333

Fig. 4 Fractal curve and tree generation. a Koach curve generation. b Twin Christmas generation

4.3 Case 3: Koch Curve In Case 3, we performed the experiment to generate Koch curve by assigning different parameter values by keeping same probability. The outcome of this case is tabulated in Table 5, and its graphical representation have shown in Fig. 4a.

4.4 Case 4: Twin Christmas Tree In this experiment, the authors have generated the twin Christmas tree fractal images by using inverse replica. The same also achieved by implementing same parameters with same probability values which are reflected in Table 6 and Fig. 4b, respectively.

4.5 Case 5: Castle and Herb Generation In this case, we have tried to generate both castle and herb fractal images by using different parameters with same probability values. The tabulated value presented in

548

S. K. Das et al.

Table 6 Twin Christmas tree generation using inverse replica a

b

c

d

e

f

p

F1

0

−0.5

0.5

0

0.5

0

0.25

F2

0

0.5

−0.5

0

0.5

0.5

0.25

F3

0.5

0

0

0.5

0.25

0.5

0.25

F4

0.5

0

0

0.5

0.25

0.5

0.25

Table 7 Castle pattern generation using similar probability values a

b

c

d

e

f

P

F1

0.5

0

0

0.5

0

0

0.25

F2

0.5

0

0

0.5

2

0

0.25

F3

0.4

0

0

0.4

0

1

0.25

F4

0.5

0

0

0.5

2

1

0.25

Table 7 represents castle generation, where Table 8 represents the value for herb generation. There corresponding fractal patterns are represented in Fig. 5a and b, Table 8 Herb pattern generation using similar probability values a

b

c

d

e

f

P

F1

0.5

0

0

0.75

0.25

0

0.25

F2

0.25

−0.2

0.1

0.3

0.25

0.5

0.25

F3

0.25

0.2

−0.1

0.3

0.5

0.4

0.25

F4

0.2

0

0

0.3

0.4

0.55

0.25

Fig. 5 Fractal image generation. a Castle generation. b Herb generation



respectively.

5 Research Challenges The generalized IFS adapted here from Barnsley and Demko [8] has been studied. Five different cases are considered by varying the coefficient values, with similar and dissimilar probability values, over four affine transformations. Initially, fern generation was carried out by considering different parameters along with the probability values shown in Table 1, and the output is displayed in Fig. 2a. By changing the coefficient values and using an equal probability value (Table 2), we generated Fig. 2b, which is a modified fern. In Case 2, we generated the Sierpinski triangle using four affine functions shown in Table 3 (Fig. 3a) and three functions shown in Table 4 (Fig. 3b). In Case 3, we took four functions to generate the Koch curve, shown in Table 5 (Fig. 4a). In Case 4, the twin Christmas tree was generated, and its corresponding affine transformations with equal probability values are presented in Table 6 and Fig. 4b, respectively. The four functions used for the Koch curve in this research are as follows:

F1(x) = [1/3 0; 0 1/3] x    (5)
F2(x) = [1/6 −√3/6; √3/6 1/6] x + [1/3; 0]    (6)
F3(x) = [1/6 √3/6; −√3/6 1/6] x + [1/2; √3/6]    (7)
F4(x) = [1/3 0; 0 1/3] x + [2/3; 0]    (8)

However, we can increase the number of functions, like the one used by Jacquin [3], to generate any binary image. In Case 5, the authors generated the castle shown in Fig. 5a. This figure is generated by passing the data from Table 7. Here, we jump 2 units in the translation of the F2 and F4 functions, which generates a 3D illusion.



6 Conclusion The experiment shows that different types of fractal images can be generated by using generalized IFS. In order to generate different kind of fractal image, we have to use multiple functions having different parameters. The IFS is an important method to design fractal objects but it needs a detailed step by step designing approach. The probability distribution method also helps to generate a complete image. A set of contraction mapping principles can be used with IFS to generate beautiful images. Also, color can be added to improve their beauty.

References 1. Mandelbrot BB (1982) The fractal geometry of nature. Freeman, San Francisco 2. Barnsley MF, Ervin V, Hardin D, Lancaster J (1986) Solution of an inverse problem for fractals and other sets. Proc Natl Acad Sci USA 83(7):1975 3. Jacquin AE (1993) Fractal image coding: a review. Proc IEEE 81(10):1451–1465 4. Florindo JB, Bruno OM (2014) Fractal descriptors based on the probability dimension: a texture analysis and classification approach. Pattern Recogn Lett 42:107–114 5. Bisoi AK, Mishra J (1999) Fractal images with inverse replicas. Mach Graph Vis 8(1):77–82 6. Bisoi AK, Mishra J (1999) Enhancing the beauty of fractals. In: Proceedings third international conference on computational intelligence and multimedia applications ICCIMA'99 (Cat. No. PR00300), IEEE, pp 454–458 7. Chang HT (2004) Arbitrary affine transformation and their composition effects for two-dimensional fractal sets. Image Vis Comput 22(13):1117–1127 8. Barnsley MF, Demko S (1985) Iterated function systems and the global construction of fractals. Proc R Soc Lond A 399(1817):243–275

Real-Time CPU Burst Time Prediction Approach for Processes in the Computational Grid Using ML Amiya Ranjan Panda, Shashank Sirmour, and Pradeeep Kumar Mallick

Abstract Shortest-Job-First (SJF) and other CPU scheduling techniques are achieved by analysing the duration of the CPU bursts in the ready queue processes. Static and dynamic approaches can estimate the time of CPU bursts, although they may not provide accurate or dependable predictions. This research proposes a method based on machine learning (ML) to evaluate the CPU bursts of processes. Feature selection approaches are employed to identify and estimate CPU burst times for grid processes in real-time without spending many computational resources and processing time. The suggested method is tested and evaluated using a grid workload data set known as “GWA-T-4 AuverGrid”, utilising ML approaches such as linear regression and decision trees regression. We conducted an experiment that found a linear correlation between CPU burst strength and process properties to test this. Furthermore, in nearly all cases, we strive to design an algorithm that predicts burst times in real-time with minimal time and space complexity to be implemented in the real world. Keywords CPU burst · CPU scheduling algorithm · Feature selection · Machine learning

1 Introduction To identify which process has control of the CPU, CPU scheduling determines which process will run while another is put on hold. To ensure that the operating system picks at least one of the processes available for execution while the CPU is idle, A. R. Panda · S. Sirmour · P. K. Mallick (B) School of Computer Engineering, KIIT Deemed To Be University, Bhubaneswar, Odisha, India e-mail: [email protected] A. R. Panda e-mail: [email protected] S. Sirmour e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_58


CPU scheduling plays an important role. The CPU scheduler takes care of this: it selects a ready-to-execute memory process from the available options. In such algorithms, various methods are used to choose a process from the ready queue and allocate it to a processor [1]. CPU scheduling techniques include FCFS, SJF, SRTF, priority scheduling, round-robin, and more. The CPU burst is a prerequisite for scheduling methods like SJF and SRTF that use this information. It is essential for SJF and SRTF to finish the short burst-time cycles as quickly as possible so that the system may be freed up for larger bursts. A process's CPU burst time is determined by the amount of time it spends on the CPU. Before implementing SJF and SRTF, we need to know how long the CPU burst period will last; this is the primary issue with the implementation. Determining the CPU burst time of a process is not a simple task and requires additional effort and computation. Traditionally, there are two methods by which we can predict a process's burst time: the static method and the dynamic method. The traditional static method depends on the process type or size. The traditional dynamic method of estimating the duration of the next CPU burst relies on using the previous history to predict the length of the CPU burst time. In grid scheduling, the first phase involves discovering available resources; the second phase, the most critical, is selecting and transferring jobs to feasible resources; the third is the execution of the jobs. In the second phase of computational grid (CG) scheduling, work is assigned to a feasible resource, usually a machine in the grid [2]. Implementing CG scheduling algorithms that rely on the CPU burst requires that the dedicated machine predict the duration of the CPU burst time. We have proposed an ML-based approach in this paper to predict CPU burst time in a real-time scenario for processes to be executed in the CG on machines showing similar attribute properties. To accurately predict the next CPU burst, the suggested solution benefits from analysing the most critical attributes of a process. The paper is organized as follows. Section 2 provides an overview of the relevant literature. Section 3 outlines the suggested methodology and its accompanying modelling framework. Section 4 describes the grid data set used in detail and analyses the related attributes of the process. Section 5 offers the tests, while the results are discussed in Sect. 6. Conclusions and future research work are included in Sect. 7.
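The classical dynamic estimate referred to above is usually an exponential average of the measured bursts, τ_{n+1} = α·t_n + (1 − α)·τ_n, where t_n is the most recent measured burst and τ_n the previous estimate. The short sketch below illustrates that baseline predictor for comparison with the ML approach proposed later; the numeric values are illustrative only.

```python
# Classical exponential-average predictor for the next CPU burst
# (the traditional "dynamic method"); alpha is a tunable weight in [0, 1].
def predict_next_burst(measured_burst, previous_estimate, alpha=0.5):
    return alpha * measured_burst + (1.0 - alpha) * previous_estimate

estimate = 10.0                      # initial guess tau_0 (assumed)
for burst in [6.0, 4.0, 6.0, 13.0]:  # observed bursts t_n (illustrative values)
    estimate = predict_next_burst(burst, estimate)
    print(round(estimate, 2))        # running prediction of the next burst
```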

2 Literature Review Traditionally, the CPU burst length is estimated by the dynamic method in order to apply scheduling algorithms such as SRTF and SJF. In the dynamic method, the length of a CPU burst cycle is approximated from the previous executions. Mahesh Kumar et al. [3] proposed a way of predicting the CPU burst time in order to learn the length of the next burst in the SJF scheduling algorithm.


A “Dual-Simplex Optimization method (DSOM), based on Linear Programming Models (LPMDSA), was suggested, which begins with double feasibility limitations to find primary feasibility while preserving optimal functions” [4]. This approach is based on a dual-simplex optimisation method (LPMDSA). They begin their work with the traditional SJF scheduling algorithm and calculate the waiting time and turnaround time of each process. The burst time, waiting time and turnaround time are transformed into a linear programming model, and the DSOM is then applied. Another method of calculating and forecasting the length of the next CPU burst for the SJF algorithm was provided by Pourali et al. [5]. The approach used a fuzzy system as a knowledge-based rule system [6]. This method applies when a process's historical behaviour can predict its next burst: the previously consumed CPU burst times are the input, and an estimate of the next CPU burst time is the output. Smith et al. demonstrate that historical data can forecast the time needed to complete a process [7]. They proposed a technique to estimate the performance of parallel jobs from the runtimes of similar previous jobs. Their study used two search approaches (greedy search and a genetic algorithm) to determine which application attributes best define similarity for prediction. According to the results, genetic search is more efficient than greedy search for any workload. Their work also provides insight into which job attributes are essential for identifying similar jobs, the name of the submitting user and the application being the most important. Predicting how applications will use resources is an appealing idea that has been adopted by several previous studies employing ML technology. Matsunaga et al. [8] carried out a comparative assessment of the suitability of several ML systems to forecast spatiotemporal resource usage by applications; they predicted the execution time, storage and disk requirements of two bioinformatics applications. “Tarek Helmy, Sadam Al-Azani and Omar Bin-Obaidellah presented another ML-based approach as models of regression, SVM and K-NN were used in their study” [9]. However, we found that the proposed algorithm has considerable time and space complexity. It is something of a paradox: we propose an algorithm for optimisation, yet computing that algorithm creates more and more work for the very processes we are trying to optimise. We need an ML-based approach that yields something like a weight vector to predict the burst time directly in real time; therefore, linear regression and decision tree regression are used as the regression models in this paper.

3 Proposed Approach
It is possible to estimate the CPU burst time of a process by analysing the essential process characteristics. It is common practice in computer science and engineering to employ machine learning (ML) techniques when developing computer programmes with the goal of continuously improving their performance over time. A subset of


the data is used as a training set, while another is used as a test set to verify the attributes learnt from the training set. Two general ML approaches, decision tree regression and linear regression, were utilised to forecast burst times in real time. We chose these techniques because time and space complexity matter a great deal in production: we need a low-latency system that calculates the burst time as quickly as possible without spending too much power and time on the prediction itself. The steps of the approach (Fig. 1) are summarised as follows; a minimal sketch of the model-generation step appears after the list.

1. Preparing the Dataset: After gathering the required attributes for each data point, we obtain a data set for future use.
2. Attributes Filtering: Here, we filter the essential features from the process attributes contained in the process control block (PCB).
3. Generating the Model: In this step, we apply ML techniques such as linear regression, which yields a weight vector, or a decision tree, which yields split thresholds, to predict the burst time of future unseen data.
4. Retraining the Model: As user requirements or interests generally change over time, we retrain the model once enough new data points are available so that it keeps performing well on future unseen data.
5. Predicting the CPU Burst Time of the Process: The generated weight vector or thresholds are used to predict the CPU burst time of a new process in the input queue.
6. Applying for Scheduling: The task is added to the ready queue with its predicted CPU burst time and scheduled by a scheduler such as the SJF scheduler.
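The following is a minimal sketch of the model-generation step, using scikit-learn; the data here is synthetic (generated only so the snippet runs on its own), and apart from the maximum tree depth of six reported later in the experiments, all settings are assumptions.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for the filtered PCB attributes (X) and observed burst times (y).
rng = np.random.default_rng(0)
X = rng.random((1000, 8))                                       # 8 selected process attributes
y = 30.0 * X[:, 0] + 5.0 * X[:, 3] + rng.normal(0.0, 1.0, 1000)

lr = LinearRegression().fit(X, y)                   # yields a weight vector and intercept
dt = DecisionTreeRegressor(max_depth=6).fit(X, y)   # yields split thresholds

# Predicting the burst time of a newly arrived process is a single dot product
# (or a short tree traversal), which keeps the scheduler's prediction overhead low.
x_new = rng.random((1, 8))
burst_lr = float(lr.predict(x_new)[0])              # same as lr.intercept_ + lr.coef_ @ x_new[0]
burst_dt = float(dt.predict(x_new)[0])
print(burst_lr, burst_dt)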

4 Data Set Description
We utilised the “GWA-T-4 AuverGrid” grid workload stated in the previous section to test the suggested model. AuverGrid is a production grid platform with five clusters; Scientific Linux is installed on dual 3 GHz Pentium-IV Xeons in each cluster. AuverGrid has 475 processors spread over the five clusters, which were located in the Auvergne region of France. The grid workload contains 404,176 tasks, each of which has 29 characteristics. All 404,176 grid-workload operations with a positive or zero burst time were selected. Attributes 19 through 29 of the data set were then evaluated and found to carry no information or values; these 11 characteristics were therefore left out, since they are either empty or hold a single constant value.
Fig. 1 The proposed approach

Among the remaining attributes, 1 to 18, there is no data loss, and the run-time property gives the CPU's real burst time (Table 1). [Link: http://gwa.ewi.tudelft.nl/datasets/gwa-t-4-auvergrid]
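The filtering just described can be reproduced in a few lines of pandas; the file name, the whitespace separator and the placeholder column labels below are assumptions about how a local copy of the GWA-T-4 trace is stored, not part of the published workload format.

import pandas as pd

# Assumed local export of the AuverGrid trace with the 29 columns listed in Table 1.
df = pd.read_csv("gwa_t4_auvergrid.txt", sep=r"\s+", comment="#", header=None)
df.columns = [f"attr_{i}" for i in range(1, 30)]             # attr_4 corresponds to Running_Time

df = df[df["attr_4"] >= 0]                                   # keep jobs with a zero or positive burst time
df = df.drop(columns=[f"attr_{i}" for i in range(19, 30)])   # attributes 19-29 carry no information
print(len(df), "jobs and", df.shape[1], "attributes retained")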

5 Experiments and Results
As user behaviours generally change over time, we need to build a model that predicts future unseen data well. We therefore sort the data points in ascending order of ‘Submit_Time’ and remove all data points whose runtime is less than zero, since the runtime data was lost there. We then take the first 80% of the data as training data and the remaining 20% as test data, and apply our ML algorithms and the traditional dynamic simple-average technique.
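Continuing the pandas sketch above, the time-ordered 80/20 split described in this section might look as follows (the column labels remain placeholders):

df = df.sort_values("attr_2")                 # attr_2 corresponds to Submit_Time, ascending
cut = int(0.8 * len(df))                      # first 80% of the timeline for training
train_df, test_df = df.iloc[:cut], df.iloc[cut:]
print(len(train_df), "training jobs,", len(test_df), "test jobs")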


Table 1 Data set features description

#   Features name           Description
1   Job_ID                  Unique job identification
2   Submit_Time             Process submit time
3   Waiting_Time            Waiting time of the process in seconds
4   Running_Time            Running time or burst time in seconds
5   N-Procs                 Number of assigned processors
6   Average_CPU_Time_Used   Average CPU time over all assigned processors
7   Used_Memory             Average memory used per processor in kilobytes
8   Required_N_Procs        Required number of processors
9   Requested_Time          Requested time in seconds
10  Requested_Memory        Requested memory in kilobytes (average per processor)
11  Status                  Job completed = 1, job failed = 0, job cancelled = 5
12  User_ID                 A string identifier for a user
13  Group_ID                A string identifier for the group the user belongs to
14  Executable_ID           Executable (application) name, a natural number between one and the number of different applications that appear in the workload
15  Queue_ID                An identifier for the queue
16  Partition_ID            An identifier for the partition
17  Orig_Site_ID            An identifier for the submission site
18  LastRunSite_ID          An identifier for the execution site
19  Job_Structure           Single job = UNITARY, composite job = BoT
20  JobStructure_Params     If Job_Structure = BoT, contains the batch identifier
21  Used_Network            Used network resources in KB/s
22  Used_LocalDiskSpace     Used local disk space in MB
23  Used_Resources          Comma-separated generic resources
24  Req_Platform            The requested platform, i.e. CPU architecture, OS, OS version
25  Req_Network             The requested network in KB/s
26  Req_LocalDiskSpace      The requested local disk space in megabytes
27  Req_Resources           Comma-separated generic resources
28  VO_ID                   An identifier for the virtual organisation
29  Project_ID              Project identifier

In the best-performing model, we use the ensemble techniques bagging and boosting for model enhancement, and we report results obtained by selecting the best features (Table 2). We used the ML techniques stated above to predict the CPU burst time. These techniques were implemented in a Jupyter notebook with Python as the language. We used linear regression and decision trees as the models in our experiments.


Table 2 Selected features for experiment

Attribute        Reason for selection
Submit_Time      Shows the process arrival time
Used_Memory      Shows the memory size required in advance by the process
Req_N_Procs      Shows how the process behaves depending on the number of processors
Req_Time         Shows the initially assessed time to complete the task
Req_Memory       Shows the amount of memory the process will need in addition to the used memory
User_ID          Shows the user type and its importance
Group_ID         Shows the type of the group and its priority
Queue_ID         Shows the type of the queue and its priority
Partition_ID     Locates the machine in a cluster
OrigSite_ID      Locates the original site of the process
Wait_Time        Shows the gap between the submit time of the process and the time it started running
CPUTime_Used     Shows the average CPU time over all allocated processors
N-Procs          Shows the number of processors the process uses
Status           Categorises whether the process completed the job: 1 if it did, 0 if it failed
Executable_ID    Shows the type of the application and its importance
LastRunSite_ID   Shows the site where the process finished its work

Linear regression was used with its default values in scikit-learn. Decision trees were used with a maximum depth of six, and for the remaining parameters we used the scikit-learn defaults. We used all the jobs for training and testing, with 80% as training and cross-validation data and the remaining 20% as testing data. The attributes were examined in two settings: without feature selection and with feature selection. We conducted experiments in both settings to see whether using only the best attributes would suffice, thereby saving the data storage space kept in our proposed architecture for future retraining of the model. We assessed the effectiveness of the proposed model in terms of the correlation coefficient (CC) and the relative absolute error (RAE); a short sketch of how these scores are computed follows Table 3. The results achieved are shown in Table 3.

Table 3 The CC and RAE of the process attributes without feature engineering

Exp1         CC     RAE
LR           0.808  0.199
DT           0.862  0.129
Simple Avg   0.037  1.023
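For reference, the two scores used in Tables 3–7 can be computed as below; this is a generic sketch (the standard Pearson correlation and the usual definition of relative absolute error), assuming y_true and y_pred are NumPy arrays of the actual and predicted burst times.

import numpy as np

def correlation_coefficient(y_true, y_pred):
    """Pearson correlation between actual and predicted burst times."""
    return float(np.corrcoef(y_true, y_pred)[0, 1])

def relative_absolute_error(y_true, y_pred):
    """Total absolute error relative to always predicting the mean burst time."""
    return float(np.sum(np.abs(y_true - y_pred)) / np.sum(np.abs(y_true - y_true.mean())))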

Table 4 Training and test time

Exp1         Training time in seconds   Test time per process in microseconds
LR           16.93                      3.28
DT           5.481                      4.059
Simple Avg   0.037                      1.023

Table 5 The CC and RAE of the process attributes with feature engineering

Exp1         CC     RAE
LR           0.809  0.189
DT           0.869  0.125
Simple Avg   0.037  1.023

We want to build a low-latency model that is practically implementable in real life and consumes little computation power and storage for future retraining, so we chose low-latency algorithms that give a weight vector or threshold for future prediction (Table 4). We used sequential feature selection (SFS) to select the most significant attributes, using mlxtend's sequential feature selector in the Jupyter notebook. SFS deletes or adds one feature at a time based on the cross-validated performance until a feature subset of the desired size k is reached. The selected top 8 features are ‘Job_ID’, ‘Used_CPUTime’, ‘Used_Memory’, ‘Status’, ‘Req_Time’, ‘LastRunSite_ID’, ‘Group_ID’ and ‘OrigSite_ID’ (Table 5). Our experiments show that DT performs well both with and without feature selection. To enhance the model further, ensembling techniques can be used both in experiments and in a practical implementation; here, to observe the enhancement, we applied bootstrap aggregation through random forest (RF) and boosting through gradient boosted decision trees (GBDT). The results achieved are shown in Tables 6 and 7.
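The feature selection and ensemble enhancement mentioned in this paragraph could be reproduced roughly as follows; X_train and y_train are assumed to be the attribute matrix and burst-time vector from the 80% training split, k_features=8 matches the eight attributes listed above, and the remaining estimator settings are defaults and therefore assumptions.

from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

# Forward sequential feature selection down to the 8 most significant attributes.
sfs = SFS(DecisionTreeRegressor(max_depth=6), k_features=8, forward=True,
          scoring="r2", cv=3)
sfs = sfs.fit(X_train, y_train)          # X_train, y_train: attributes and burst times (80% split)
X_train_sel = sfs.transform(X_train)

# Ensemble enhancement: bagging via random forest, boosting via GBDT.
rf = RandomForestRegressor().fit(X_train_sel, y_train)
gbdt = GradientBoostingRegressor().fit(X_train_sel, y_train)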

Table 6 The CC and RAE of the process attributes without feature engineering

Exp1   CC     RAE
DT     0.862  0.129
RF     0.871  0.120
GBDT   0.877  0.157

Table 7 The CC and RAE of the process attributes with feature engineering

Exp1   CC     RAE
DT     0.862  0.129
RF     0.871  0.120
GBDT   0.877  0.157


Fig. 2 CC of process attributes in different ML and traditional technique

6 Results and Discussion
ML techniques can be applied to burst-time prediction with better results than the traditional approach, and with little computation power and storage, by using techniques such as DT or linear regression, which yield a threshold value or weight vector that predicts the result in real time. From Figs. 2 and 3, we can conclude that the decision tree performs better with respect to RAE and CC, so we can use DT as the model and reduce the number of stored data points kept for future retraining. We can also apply feature selection without affecting model performance: the RAE and CC were approximately equal in both cases. Ensemble techniques can additionally be applied to enhance model performance if enough training time is available. The corresponding results are shown in Figs. 4 and 5.

7 Conclusion
To predict the burst time of processes waiting in the ready queue, we proposed using ML approaches such as LR and DT in this paper. The recommended solution applies attribute-selection techniques to the grid workload data set “GWA-T-4 AuverGrid” and reports the experimental findings. The experimental results show a significant correlation between process characteristics and CPU burst time for DT and its ensemble models RF and GBDT, in terms of CC and RAE, compared with the other ML approaches.


Fig. 3 RAE of process attributes in different ML and traditional technique

Fig. 4 CC of the process attributes in different ensemble techniques


Fig. 5 RAE of the process attributes in other ensemble techniques

Space and time complexity are reduced by using attribute selection together with these ML approaches. SJF and SRTF can be implemented in CGs in the proposed manner. Future unseen data will be predicted more accurately if the model is retrained regularly, using a time-based split of training and test data with more data points.

References
1. Silberschatz A, Galvin PB, Gagne G (2013) Operating system concepts, vol 8. Wiley, NY
2. Fernández-Baca D (1989) Allocating modules to processors in a distributed system. IEEE Trans Software Eng 15(11):1427–1436
3. Mahesh Kumar MR, Renuka Rajendra B, Niranjan CK, Sreenatha M (2014) Prediction of the length of the next CPU burst in SJF scheduling algorithm using the dual simplex method. In: 2014 2nd international conference on current trends in engineering and technology (ICCTET), IEEE, pp 248–252
4. Kasana HS, Kumar KD (eds) (2004) Introductory operations research: theory and applications. Springer Science & Business Media, Berlin
5. Pourali A, Rahmani AM (2009) A fuzzy-based scheduling algorithm for prediction of next CPU-burst time to implement shortest process next. In: Computer science and information technology-spring conference, IACSITSC’09, IEEE, pp 217–220
6. Kosko B (1991) Neural networks and fuzzy systems: a dynamical systems approach to machine intelligence. Prentice-Hall Inc., Englewood Cliffs
7. Smith W, Foster I, Taylor V (2004) Predicting application runtimes with historical information. J Parallel Distrib Comput 64(9):1007–1016


8. Matsunaga A, Fortes JAB (2010) On the use of machine learning to predict the time and resources consumed by applications. In: Proceedings of the 2010 10th IEEE/ACM international conference on cluster, cloud and grid computing, IEEE Computer Society, pp 495–504
9. Helmy T, Al-Azani S, Bin-Obaidellah O (2015) A machine learning-based approach to estimate the CPU-burst time for processes in the computational grid. In: 2015 third international conference on artificial intelligence, modelling and simulation
10. Mehmood Shah SN, Mahmood AKB, Oxley A (2010) Analysis and evaluation of grid scheduling algorithms using real workload traces. In: Proceedings of the international conference on management of emergent digital ecosystems, ACM, pp 234–239
11. Kohavi R, John GH (1997) Wrappers for feature subset selection. Artif Intell 97(1–2):273–324. Retrieved from http://linkinghub.elsevier.com/retrieve/pii/S000437029700043X
12. Mitchell T (1997) Machine learning, 1st edn. The McGraw-Hill Company, Inc., International Edition, pp 52–75, 154–183, 230–244
13. Amur HR, Shenoy GR, Sarma D, Vaddagiri S. Plimsoll: a DVS algorithm hierarchy

EMG-Based Arm Exoskeleton

K. P. Jayalakshmi, S. Adarsh Rag, and J. Cyril Robinson Azariah

Abstract Although the era of automation is slowly taking over, manual labor has not lost its importance in the industrial sector. The demanded productivity puts a great deal of stress and strain on the muscles, causing them to become weaker and thus eventually reducing mobility. Primary prevention is the best method, so the exoskeleton plays a vital role, as it reduces the stress concentrated on the arm muscles. The exoskeleton belongs to the branch of orthotics, where an external brace or frame is used to provide support and strength to the bone and muscle area. The proposed project of implementing an EMG-based arm exoskeleton aims to reduce the stress and strain on muscles faced daily in industry. After assessing the various studies made by researchers, their implementation gaps and noteworthy findings, a product has been designed keeping all of these aspects in mind. The exoskeleton is powered by cost-effective pneumatic artificial muscles which are triggered by EMG signals from the muscles. The exoskeleton concentrates the weight on the PAM muscles, which transfer the weight to the lower body using arm belts.

Keywords Exoskeleton · EMG · PAM · McKibben

K. P. Jayalakshmi
Department of ECE, St. Joseph Engineering College, Vamanjoor, Mangalore, India
S. A. Rag (B) · J. C. R. Azariah
Department of Nanotechnology, Institute of Electronics and Communication Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, Tamilnadu 602105, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
M. N. Mohanty and S. Das (eds.), Advances in Intelligent Computing and Communication, Lecture Notes in Networks and Systems 430, https://doi.org/10.1007/978-981-19-0825-5_59

1 Introduction
An exoskeleton arm is an upper-body assistive arm that instantly increases human strength. It is a wearable, external mechanical structure that enhances the physical strength of a person. The power unit of the exoskeleton can be electric, hydraulic or pneumatic. Hence, exoskeletons can have potential advances in reducing the underlying factors associated with developing work-related musculoskeletal injuries.


Fig. 1 Exoskeleton arm

There are two types of exoskeletons: active and passive. The major difference between an active exoskeleton and a passive exoskeleton is that a passive exoskeleton does not require a system of electric motors, pneumatics, levers, hydraulics or a combination of these technologies [1]. The structure of this model is shown in Fig. 1. To tackle these injuries, and for rehabilitation as well as strength-enhancement purposes, people have often resorted to exoskeletons. The development of exoskeletons began in the twentieth century itself: in 1965, General Electric developed the Hardiman, a large full-body exoskeleton specifically designed to amplify the user's strength for lifting heavy objects. Manual labor has always been an integral part of the industrial sector [2]. Due to the demanding productivity and ageing, there is a lot of stress and strain on the muscles, causing them to become weaker and thus reducing mobility. In this work, an exoskeleton arm has been designed and developed that provides physical assistance to the arm while carrying heavy loads of up to 25 kg in industries. This work focuses on reducing stress and strain on the muscles by concentrating the weight on the pneumatic artificial muscle.

2 Methodology
An exoskeleton frame material is chosen keeping in mind the properties expected of it during operation; a metal with good tensile strength and density is chosen. The frame is designed, and a sketch is made which is used as a reference for fabrication. Self-made McKibben muscles are used. These muscles, referred to as pneumatic artificial muscles (PAMs), serve as the actuators and are the main lifting elements of the exoskeleton. They mainly consist of an inner latex rubber tube, also called a bladder, an outer braided mesh clamped at the ends, and a PU tube which is used as an inlet for air from the air compressor [3–5].


When air from the compressor enters the inner bladder, the PAM expands and lifting is achieved. The geometry of the mesh acts as a linkage and translates this radial expansion into linear contraction. The compressor pressure is chosen based on calculations which help in effectively lifting the load without causing any damage to the PAM. An EMG sensor is used to capture the muscle signals, and the received signals are processed to remove noise with the help of a microcontroller. A suitable noise-removal technique is implemented for processing the EMG signal, and thresholding is then performed to give the input to the control valve which controls the flow of compressed air into the PAM; here, the control valve is a directional solenoid valve. Failure mode and effect analysis (FMEA) is carried out to identify potential failure modes, along with their causes, and to take preventive measures to reduce the severity of these failures. FMEA helps to frame the design constraints and requirements for the exoskeleton. To ensure rigidity, welding was performed, and thus the exoskeleton frame was ready. Later, the inner part of the frame was lined with foam to prevent cuts and bruises on the skin and also to slightly improve the fit of the frame. Hinges are used on the two loops near the wrist to make the frame adjustable.
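The control logic described above (rectify and smooth the EMG signal, compare it with a threshold, and switch the directional solenoid valve accordingly) was implemented by the authors on a microcontroller through the Energia IDE; the Python-style sketch below only illustrates the idea, and every name in it (read_emg_sample, set_valve, the threshold and window values) is a hypothetical placeholder rather than the actual firmware API.

THRESHOLD = 0.5       # placeholder; the real value comes from the MATLAB analysis in Sect. 3
WINDOW = 80           # number of samples rectified and averaged before each decision

def valve_control_loop(read_emg_sample, set_valve):
    """Open the valve (inflate the PAM) while the smoothed EMG level exceeds the threshold."""
    samples = []
    while True:
        samples.append(abs(read_emg_sample()))    # rectified EMG amplitude
        if len(samples) == WINDOW:
            level = sum(samples) / WINDOW          # moving-average smoothing
            set_valve(level > THRESHOLD)           # energise the solenoid only when flexed
            samples.clear()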

3 Results and Discussion
EMG electrodes were placed on the arm, i.e., the reference electrode was placed on a non-muscle area and the other two electrodes on the muscle area. Proper skin preparation is necessary to get accurate results. EMG sensor performance was observed, and the signal analysis was done as follows. Initially, the EMG sensor is interfaced with Analog Discovery 2. Raw EMG signals are extracted from the muscles and are viewed in the ‘Waveforms’ application through the ‘Analog Discovery 2’ kit for analysis. EMG signal amplitudes of the relaxed and flexed positions are taken to calculate the threshold. Real-time values, i.e., amplitude values of the EMG signal of a muscle that is alternately relaxed and flexed, are considered as the input. Code for the signal analysis was written using MATLAB. We exported three sets of values obtained from the EMG signals through the Analog Discovery 2 kit and obtained 8000 samples for each of the cases at a 4 kHz sampling frequency. The three sets of values are the amplitude values when the muscle was at rest, flexed, and alternately relaxed and flexed [6–9]. The first two sets were used to calculate the boundary values, and the third set was used as input. From the boundary values, we calculate the threshold as mentioned in the program. The raw data is rectified by taking its absolute values and averaged over 80 samples at first; the result is then averaged again, considering 800 samples at a time. We plot the results and observe that, after thresholding, the graph closely matches the third set of values, where the muscle was alternately flexed and relaxed.
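The MATLAB analysis summarised above (8000 samples per recording at 4 kHz, rectification, block averaging over 80 and then 800 samples, followed by thresholding) can be approximated with the NumPy sketch below. The synthetic signals and the choice of placing the threshold midway between the smoothed relaxed and flexed levels are assumptions made only to keep the example self-contained; the text does not state exactly how the two window sizes interact, so both block averages are simply applied to the rectified signal.

import numpy as np

FS, N = 4000, 8000                        # 4 kHz sampling, 8000 samples per recording
rng = np.random.default_rng(1)
relaxed = 0.05 * rng.standard_normal(N)   # stand-ins for the exported recordings
flexed = 0.60 * rng.standard_normal(N)
mixed = np.concatenate([relaxed[: N // 2], flexed[: N // 2]])   # alternating test input

def block_average(x, size):
    """Rectify the signal and average consecutive blocks of `size` samples."""
    x = np.abs(x)
    return x[: len(x) // size * size].reshape(-1, size).mean(axis=1)

fine = block_average(mixed, 80)           # first-stage smoothing
coarse = block_average(mixed, 800)        # coarser second-stage view

# Threshold derived from the boundary (relaxed/flexed) recordings; here simply their midpoint.
threshold = 0.5 * (block_average(relaxed, 80).mean() + block_average(flexed, 80).mean())
flex_windows = fine > threshold
print(round(threshold, 3), int(flex_windows.sum()), "of", len(fine), "windows above threshold")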


Figure 2 shows the amplitude values of the EMG signal when the arm muscle is in a relaxed state (top) and a flexed state (bottom). We can observe that when the muscle is relaxed, the EMG signal amplitudes are low, but when the muscle is flexed, the amplitude rises significantly. Figure 3 shows the raw data, which is close to the real scenario wherein repetitive lifting involves arm movements that cause the muscles to flex and relax. The sampled data is the averaged/filtered version of the raw data, and we can infer that the threshold plot is very close to the sampled data plot. Initially, the artificial muscle with the specifications 14 mm braided mesh, 6 mm PU tube and 10 mm latex tube was tested with an air compressor, and poor results were obtained, as the diameter of the braided mesh was restricting the muscle from expanding radially or contracting axially. This would also limit the weightlifting capability of the exoskeleton. Hence, the 14 mm braided mesh was replaced with a 20 mm one, which resulted in better radial expansion, and the effective length of the muscle reduced significantly, which would, in turn, assist in the lifting process. A 140 mm-long McKibben muscle contracted to an approximate length of 100 mm; that is, a nearly 40 mm reduction in length, or roughly 28.5% contraction, was obtained. Code was developed on the Energia IDE to control the solenoid valve. The threshold value obtained when the EMG signals were analysed using MATLAB was used in the code, and satisfactory results were obtained. On testing the exoskeleton under loaded conditions with three different weights, it was found that the effectiveness of the exoskeleton was poor for 15 and 20 kg. However, for the weight of 10 kg, the performance of the exoskeleton was satisfactory, as it assisted in lifting the load without dropping it.
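As a quick check of the contraction figure quoted above (the lengths are the ones reported in the text):

initial_length, contracted_length = 140, 100            # McKibben muscle lengths in mm
reduction = (initial_length - contracted_length) / initial_length
print(f"{reduction:.1%}")    # about 28.6%, matching the roughly 28.5% reduction reported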

Fig. 2 Plot of EMG output for the flexed and relaxed state


Fig. 3 Thresholding performed on EMG output

The preferred 90° angle was obtained for the 10 kg load, as illustrated in Fig. 4. The main observation to note is that, by using this exoskeleton arm, the subject was able to hold the load for a longer duration compared with the scenario in which the subject was not wearing the exoskeleton arm. The exoskeleton was then integrated with all its subsystems, and its working was found to be satisfactory and according to standards. However, feedback from the subjects revealed that the weight on the wrist was still a major concern, and it was found that adding a metal member below the wrist as support would solve this issue and improve performance. It was also noted that attaching additional muscles would drastically improve system performance, provided the weight is properly distributed to the body.

4 Conclusion
The project ‘EMG-based arm exoskeleton’ presents a solution to the problems faced by individuals, specifically in industries where long-term periodic lifting results in muscle injuries and pain.


Fig. 4 Loaded condition of the arm

The frame was designed to meet standards of ergonomic comfort and to provide proper support during the weightlifting action. The failure mode and effect analysis identified critical areas to be kept in mind during the design. Proper padding and support ensured the exoskeleton was stable during working conditions. On testing under loaded conditions, the designed exoskeleton frame was ineffective for weights of 15 and 20 kg, for identified reasons such as the lack of a metal member in the wrist region and insufficient muscles and weight distribution. For 10 kg, performance is good and the lifting assistance is satisfactory. Overall, most of the project objectives were achieved, except that lifting weights in the range of 15–25 kg was not accomplished.

References
1. de Looze MP, Bosch T, Krause F, Stadler KS, O’Sullivan LW (2015) Exoskeletons for industrial application and their potential effects on physical workload. Ergonomics 59:671–681
2. Pawar MV, Ohol SS, Patil A (2018) Modelling and development of compressed air powered human exoskeleton suit. In: 2018 7th international conference on reliability, infocom technologies and optimization (trends and future directions) (ICRITO)
3. Najmuddin WSWA, Mustaffa MT (2017) A study on contraction of pneumatic artificial muscle (PAM) for load-lifting. J Phys: Conf Ser 908(1), article id. 012036
4. Firoozabadi F (2020) Modelling and control of pneumatic artificial muscles: a tutorial
5. Nazmi N, Rahman MAA, Mazlan SA, Zamzuri H (2015) Electromyography (EMG) based signal analysis for physiological device application in lower limb rehabilitation. In: 2015 2nd international conference on biomedical engineering (ICoBE), Penang, 30–31 March 2015
6. Majid MSH, Khairunizam W, Shahriman AB, Zunaidi I, Sahyudi BN, Zuradzman MR (2018) EMG feature extractions for upper-limb functional movement during rehabilitation. In: 2018 international conference on intelligent informatics and biomedical sciences (ICIIBMS)


7. Pavana Kumara B, Pais KA, Pereira RM, Sahil MA, Mallya VS (2017) Design and fabrication of a pneumatically powered human exoskeleton arm. J Mech Eng Autom 7(3):85–88
8. Asokan A, Vigneshwar M (2019) Design and control of an EMG-based low-cost exoskeleton for stroke rehabilitation. In: 2019 fifth Indian control conference (ICC)
9. Gopal Krishna UB, Prajwal Hosmutt HR, Nyamagoud BM, Patil MV, Hunnur O (2018) Design and fabrication of pneumatic powered exoskeleton suit for arms. Int Res J Eng Technol (IRJET) 05(04), e-ISSN: 2395-0056

Author Index

A Abd, Haider J, 469 Abdulmunem, Ashwan A., 519 Abutiheen, Zinah Abdulridha, 519 Adarsh Rag, S., 367, 563 Adiba Sharmeen, 109 Aditi Anupam Shukla, 449 Ahmed, Moges, 207 Aishwarya, R., 421 Albert Mayan, J., 441 Alfayhan, Abeer S., 529 Alivarani Mohapatra, 347 Alsawaf, Hiba A., 503 Alshaikhli, Zahraa S., 491 Amitabh Satpathy, 291 Amit Kant Pandit, 1, 17 Amiya Ranjan Panda, 551 Anand Sreekantan Thampy, 309 Ananya Choudhury, 235 Aneesh Wunnava, 173 Anil Kumar Bhardwaj, 17 Ankayarkanni, B., 405, 441 Ankita Sharma, 33 Antopraveena, M. D., 393, 415 Anubhav Kumar, 207 Archana Sarangi, 357 Arpit Suman, 109 Arvind Kumar Sharma, 79 Asha, P., 399 Ashish Kumar Mourya, 385 Ashutosh Vashishtha, 1 Avirup Mazumder, 155

B Benudhar Sahu, 291 Bhasi Sukumaran, 449 Bhukya Arun Kumar, 323 Bibekananda Jena, 173 Bibudhendu Pati, 281 Binita Kumari, 63 Binod Kumar Pattanayak, 281 Binu Kuriakose Vargis, 79 Biswal, S. M., 101

C Cyril Robinson Azariah, J., 563

D Das, J. K., 101, 247 Debahuti Mishra, 357 Deepa, P., 89 Devi Prasad Acharya, 145 Dinesh Kumar Garg, 9 Dinesh, M., 393

E Er. Deepak Kumar, 43 Er. Sorab Kumar, 33

G Gatte, Mohammed Taih, 469 Gino Sinthia, 191 Gopinath Palai, 323 Gyana Ranjan Patra, 329



H Harjan, Zahraa A., 519 Hekmat, Wasan A., 491 Hirak Keshari Behera, 253 Hussein, Raheem G. K., 529

I Ismael, Mustafa R., 469

J Jagadeeswara Rao, G., 127 Jancy, S., 393 Jany Shabu, S. L., 399 Jaya Bijaya Arjun Das, 357 Jayalakshmi, K.P., 367, 563 Jibitesh Mishra, 541 Jogendra Garain, 273 Joshila Grace, L. K., 399, 405 Jyoti Singh, 227

K Kadhim, Mohammed O., 479 Kalaiarasi, G., 421 Kamal Upreti, 79 Kandarpa Kumar Sarma, 235 Khalaf, Ali J., 529 Khamees, Ahmed B., 479 Kiruthika, K., 119 Koushlendra Kumar Singh, 273 Kumar Biswal, 263 Kumaresh Sarmah, 339 Kundan Kumar, 109

L Lakshmanan, L., 421 Laxmi Prasad Mishra, 253 Lipsa Dash, 309

M Madhab Chandra Tripathy, 263 Madugula Murali Krishna, 217 Mallika Srivastava, 53 Manish Mishra, 183 Manish Raj, 53 Manoj Kumar Naik, 173 Manoranjan Das, 291 Manvi Gupta, 17 Marvi Sharma, 9 Mary Posonia, A., 405, 441

Author Index Meera Dash, 299 Mihir Narayan Mohanty, 357 Minu, R. I., 449 Miryala Sai Kiran, 415 Misgna, Haile, 207 Misra, S., 101 Mogula Yeshasvi, 137 Mohit Bargoti, 431 Monalisa Mohanty, 329

N Nagarajan, G., 393, 415, 441, 459 Nanda Dulal Jana, 155, 227 Nassir, Sura T., 479 Negusse, Destalem, 207 Nibedan Panda, 127 Nidhi Agarwal, 109 Nidhi Sharma, 79 Nikhil, K. J., 197 Nikhil Sidharth, S. T., 415 Niranjan Nayak, 145 Nisha Thakur, 43

P Pati, S. K., 101 Pattnaik, S. K., 247 Poovizhi, T., 191 Pradeeep Kumar Mallick, 551 Prasant Kumar Sahu, 183 Pravati Nayak, 163 Premananda, B. S., 197 Priya Seema Miranda, 367

R Rajashree Dash, 377 Rajlakshmi Gogoi, 227 Ranjan Kumar Mallick, 163 Rashmita Khilar, 89, 119, 191 Rasmi Ranjan Khansama, 53 Ravishankar Mehta, 273 Refonaa, J., 399 Rituraj Jain, 79 Rohan Patel, 431 Roktim Konch, 339 Roobini, M. S., 415 Rutuparna Panda, 173

S Sairam mishra, 163 Sai Shiva Shankar, 393

Author Index Sai Srinivas, S., 127 Samal, S. R., 247 Sandipan Dhar, 155, 227 Sanjaya Kumar Jena, 347 Sanjay Kumar Sahu, 323 Santanu Sahoo, 329 Santosh Kumar Behera, 377 Santosh Kumar Majhi, 217 Saravanan, T. R., 459 Satakshee Mishra, 53 Sathya Bama Krishna, R., 405 Shafqat Ul Ahsaan, 385 Shashank Sirmour, 551 Shashank Thakur, 449 Shitya Ranjan Das, 163 Shubam Sumbria, 1 Shubham Mahajan, 1, 17 Shinen, Mohammed Hadi, 529 Sipra Sahoo, 63 Sivaparvathi, K., 127 Siva Prasad, A., 127 Sivaranjan Goswami, 339 Sivkumar Mishra, 347 Soumya Ranjan Nayak, 541 Subetha, T., 137 Subhashhree Choudhury, 145 Subhayu Ghosh, 155 Subrat Kumar Dash, 347 Sujihelen, L., 393, 415 Sukanta Kumar Das, 541

573 Sukant Kishoro Bisoy, 53 Sukonya Phukan, 227 Sumanshu Agarwal, 109 Sunita Samanta, 329 Sushil Kumar Mahapatra, 281 Swain, A., 247 Swain, K. P., 101, 247 Swarup Roy, 155

T Tapaswini Sahu, 263 Trilochan Panigrahi, 299

U Usha Nandhini, D., 441 Usharani Raut, 347

V Vathana, D., 431 Vigneshwari, S., 405 Viji Amutha Mary, A., 399 Vinod Kumar, 385

Y Yogitha, R., 421 Yun, Kyongsik, 207