Proceedings of Second International Conference on Smart Energy and Communication: ICSEC 2020 9811567069, 9789811567063

This book gathers selected papers presented at the 2nd International Conference on Smart Energy and Communication (ICSEC 2020).


Language: English · Pages: 713 · Year: 2021


Table of contents :
Preface
Contents
About the Editors
An Evaluation into Deep Learning Capabilities, Functions and Its Analysis
1 Introduction
2 Motivation
3 Study
4 Techniques
5 Applications
6 Challenges
7 Conclusion
References
Mathematical Modeling and Simulation of DC-DC Converters Using State-Space Approach
1 Introduction
2 Mathematical Modeling of DC-DC Converters
2.1 Buck Converter
2.2 Boost Converter
2.3 Buck-Boost Converter
3 DC-DC Converter Simulink Modeling
3.1 Buck Converter
3.2 Boost Converter
3.3 Buck-Boost Converter
4 Result Discussion
5 Conclusion
References
Detection of Forgery in the JPEG Images Using Forward Quantization Noise Method
1 Introduction
2 JPEG Compression-Forgery Detection in an Image
3 Simulation Results
4 Conclusion
References
Error-Controlling Technique in Wireless Communication
1 Introduction
2 Golay Code
3 Proposed Methodology
3.1 Encoder: One Example on Generation of Check Bit Is Discussed Here
3.2 Architecture
3.3 Unit of Weight Measurement
4 Simulation Results
4.1 CRC Technique
4.2 Code Converter
5 Conclusion
References
Cloud Computing: The New World of Technology
1 Introduction
2 Cloud Service Models
2.1 Infrastructure as a Service (IaaS)
2.2 Platform as a Service (PaaS)
2.3 Software as a Service (SaaS)
3 Cloud Deployment Models
3.1 Private Cloud
4 Community Cloud
4.1 Public Cloud
4.2 Hybrid Cloud
5 Future of Cloud Computing
6 Conclusion
References
Questions Generation for Reading Comprehension Using Coherence Relations
1 Introduction
1.1 Background Overview
2 Related Work
3 Proposed Question Generation Method
3.1 Data Set Preparation
3.2 Paragraph Selection
3.3 Apply Rhetorical Structure Theory for Coreference Detection
3.4 Steps 4 Text Span Identification
3.5 Steps 5 Text Spans Syntax Transformations
3.6 Steps 6 Question Generation
4 Experimental Results
4.1 Grammatical Correctness of Questions
4.2 Semantic Correctness of Questions
5 Results Discussion
6 Conclusion and Future Scope
References
Segmentation of Brain Tumor from MRI Images Using Modified Morphological Novel Approach
1 Introduction
2 Proposed Methodology
2.1 MRI Database and Image Preprocessing
2.2 MRI Image Segmentation
3 Feature Extraction and Appropriate Reasoning
4 Experimental Result
5 Conclusion and Future Work
References
Implementation and Use of ERP System in Organization and Educational Institution
1 Introduction
2 Methodology
2.1 Drivers of Project
2.2 Project Management
2.3 Project Resources
2.4 Management Reviews
2.5 Delivery of Product
3 ERP Trends and Perspective
4 Implementation
References
WebGIS Concept of River Pollution Monitoring System—A Case Study of Yamuna River
1 Introduction
2 Research Methodology
2.1 WebGIS System Structure
3 Discussion
4 Conclusions
References
Data Mining in Crime Analysis
1 Introduction
2 Literature Review
3 Conclusion
References
Thermophotovoltaic Cells: Electrical Power Generation at Night
1 Introduction
2 Thermoradiative Photovoltaics
3 Integration
4 Conclusion
References
Insights of Kinship Verification for Facial Images—A Review
1 Introduction
2 Related Literature
3 Results
4 Findings
5 Conclusion and Future Work
References
Development of a Novel Approach for Classification of MRI Brain Images Using DWT by Integrating PCA, KSVM and GRB Kernel
1 Introduction
2 Existing Methods
3 Proposed Methodology
3.1 Feature Extraction Scheme Using DWT
3.2 Feature Reduction Using PCA
3.3 Kernel SVM
4 Performance Metrics
5 Simulation Results
5.1 Performance Evaluation
5.2 Time Analysis
6 Conclusion and Future Scope
References
Optimization of Sustainable Performance: Housing Project, Bahir Dar, Ethiopia
1 Introduction
1.1 Site Analysis
2 Research Aim
3 The Research Framework
3.1 Sustainable Architecture
3.2 Energy and Environment
3.3 Construction in Architecture
4 Research Approach
5 Case Study
6 Result and Findings
7 Analysis
8 Discussion
8.1 Sustainable Architectural Practice
8.2 Energy Environment Construction and Architecture
9 Conclusion and Recommendations
References
3D Image Conversion of a Scene from Multiple 2D Images with Background Depth Profile
1 Introduction
2 Literature Review
3 Methodology
3.1 Depth Hue Separation
3.2 Estimation of the Foreground
3.3 Classification of Background Pixels
3.4 The Combined Depth Estimation
4 Post Processing of Resulted Images
4.1 Calculation of Background Profile
4.2 Enhancement of the Color Pixels
5 Results and Discussion
6 Conclusion
References
Optimization of a Stand-Alone Hybrid Renewable Energy System Using Demand-Side Management for a Remote Rural Area in India
1 Introduction
2 Study Area
3 Demand and Resource Estimation
3.1 Demand Estimation
3.2 Resource Estimation
4 Demand-Side Management
4.1 DSM Strategy
5 Modeling and Input Parameters
6 Results and Discussion
6.1 HRES with and Without DSM
6.2 Effect of Batteries on System Performance
6.3 Effect of DSM
7 Conclusion
References
Advance Security and Challenges with Intelligent IoT Devices
1 Introduction
1.1 Internet of Things (IoT)
1.2 Internet of Everything (IoE)
1.3 Internet of Nano Things (IoNT)
2 Related Work
2.1 IoT Security and Challenges
2.2 IoT Security with Artificial Intelligence
2.3 IoT Security with Machine Learning and Deep Learning
3 Proposed Methodology
3.1 Proposed Steps for Provide Security to ATM Arena
3.2 Advantages
4 Open Issues
5 Future Work
6 Conclusion
References
Systematic Assessment and Overview of Wearable Devices and Sensors
1 Introduction
2 Brief Categorization of Wearable Technology
3 Sensors Modality
3.1 Body Worn Sensors
3.2 Object Sensors
3.3 Ambient Sensors
3.4 Hybrid Sensors
4 Deep Models
4.1 Deep Neural Network
4.2 Convolutional Neural Network
5 Discussions
6 Conclusion
References
An Optimal Design of 16 Bit ALU
1 Introduction
2 Architectures of Various Digital Circuits
2.1 Carry Look Ahead Adder
2.2 Carry Skip Adder
2.3 Kogge–Stone Adder
2.4 Booth Multiplier
2.5 Vedic Multiplier
2.6 Barrel Shifter
2.7 Binary Division
2.8 Block Diagram of ALU
3 Results and Discussion
4 Conclusion
References
Preventing SSRF (Server-Side Request Forgery) and CSRF (Cross-Site Request Forgery) Using Extended Visual Cryptography and QR Code
1 Introduction
2 Survey Study
3 Methodology
3.1 Generating OTP
3.2 Forming OTP from QR Code
3.3 Applying EVC on QR and Generating Shares
3.4 Applying Steganography on Share A
3.5 Merging Back Shares into One QR Code Using De-steganography
3.6 QR Code Conversion to OTP
4 Result Comparison
5 Conclusion
References
IoT: Security Attacks and Countermeasures
1 Introduction
1.1 IoT Enabling Technologies
1.2 Structure of IoT
1.3 IoT Main Communication Protocols
2 Attacks in IoT
2.1 Vulnerabilities in Embedded Systems
2.2 Vulnerabilities in Firmware, Software and Applications
2.3 Vulnerabilities in Radiocommunications
3 Security Countermeasures
3.1 Countermeasures Against Embedded System Attacks
3.2 Countermeasures Against Firmware, Software, and Applications
3.3 Countermeasures Against Radio Communications
4 Conclusion
References
Intelligent Street Light System Using ARM Cortex M0+
1 Introduction
2 Intelligent Street Light System
2.1 Street Light
2.2 Real-Time Clock with ARM Cortex M0+ Processor
2.3 Controlled Sensors with Object Movement
3 Proposed System
3.1 Mode 1 of Operation
3.2 Mode 2 of Operation
4 Conclusion and Future Scope
References
Image Segmentation
1 Introduction
2 Related Work
3 Network Architecture
3.1 Mask R-CNN
3.2 PSP-Net
4 Implementation
4.1 Mask R-CNN
4.2 PSP-Net
4.3 Spectral Matting
5 Results
6 Conclusions
7 Future Scope
References
Application of Bio Sensor in Carpal Tunnel Syndrome
1 Introduction
2 Biosensors
3 Types of Bio Sensors
3.1 Electrochemical Sensors
3.2 Piezoelectric Sensors
3.3 Surface Plasmon Resonance (SPR)
4 Diagnosis of Carpal Tunnel Syndrome
5 Design of Pressure Transducer for Diagnosis of Carpal Tunnel Syndrome
6 Data Acquisition and Signal Processing
6.1 Pressure Transducer
6.2 Analog Signal Processing (ASP)
6.3 Data Acquisition
6.4 Processor
7 Conclusion
References
Applications of Artificial Intelligence Techniques for Cognitive Networks
1 Introduction
1.1 Artificial Intelligence
1.2 AI for Cognitive Radio Networks
2 Attacks on CRN
2.1 Primary User Emulation Attack
2.2 Objective Function Attack
2.3 Jamming Attack
2.4 Spectrum Sensing Data Falsification Attack
2.5 Hello Attack
2.6 Sinkhole Attack
2.7 Key Depletion Attack
2.8 Lion Attack
3 AI Techniques for Cognitive Radio
3.1 Artificial Neural Networks (ANNs)
3.2 Metaheuristic Algorithms
3.3 Rule-Based System (RBS)
3.4 Ontology-Based Systems (OBS)
3.5 Case-Based System (CBS)
4 Applications of AI in Cognitive Radio Networks
4.1 Military
4.2 Healthcare
4.3 Transportation
4.4 Emergency and Public Safety
4.5 Automotive Industry
4.6 Robotics
4.7 Bandwidth-Intensive Applications
4.8 Real-Time Surveillance Applications
4.9 Indoor Applications
References
Design of Pitch Attitude Hold Mode for Commercial Aircraft Using Extended State Observer
1 Introduction
1.1 Pitch Attitude Hold Mode
1.2 State Observer Design
2 Result Analysis
2.1 Tuning, Design and Response of ADRC
2.2 Disturbance Rejection Characteristics with ESO
2.3 Comparison of Disturbance Rejection Between PID and ESO
2.4 Comparison of Disturbance Rejection Between PID and ESO
3 Conclusion and Future Work
References
UAV—A Boon Towards Agriculture
1 Introduction
2 Working of UAV
3 Advantages
4 Challenges
5 Future Scope
6 Conclusion
References
Modelling and Design of 5T, 6T and 7T SRAM Cell Using Deep Submicron CMOS Technology
1 Introduction
2 SRAM Cell Structure and Working
3 Simulation Results of 5T, 6T and 7T SRAM Cell
4 Conclusion
References
Machine Learning Approach Towards Road Accident Analysis in India
1 Introduction
2 Related Work
3 Dataset Analysis
4 Methodology
5 Result and Analysis
6 Conclusion
References
IoT-Based Big Data Storage Systems in Cloud Computing
1 Introduction
1.1 Notable Attributes of IoT-Based Information in Cloud Stages Are
2 Related Work
3 The Operational Framework of Cloud-Based IoT Applications
4 Structure and Objections
4.1 Data Acquisition and Integration Module
4.2 Data Storage Module
4.3 Data Management Module
4.4 Data Processing Module
4.5 Data Mining Module
5 Future Scope and Conclusion
References
Optimized Hybrid Electricity Generation
1 Introduction
2 Proposed Work
3 The Material and Method
4 Results Discussion
5 Result Analysis
6 Conclusion
References
Access Control of Door and Home Security System
1 Introduction
1.1 Proposed System
1.2 Implementation of the IoT-Based Security System
2 Methodology
2.1 Hardware Implementation
2.2 Software Implementation
3 Practical Implementation
3.1 Results and Discussion
4 Advantages
5 Limitations
6 Conclusion
7 Applications
References
Bluetooth-Based Smart Sensor Networks
1 Introduction
1.1 Technology Overview
2 Bluetooth-Based Sensor Network
2.1 Bluetooth Hardware Architecture
3 A Wireless Sensor Network
3.1 Sensor Network Implementation
3.2 Smart Sensor Nodes Discovery
4 Advantages
5 Limitations
6 Conclusion
7 Applications
References
Behaviour of Hollow Core Concrete Slabs
1 Introduction
2 Literature Review
3 Research Significance
4 Conclusions
References
Business Process Reengineering: Issues and Challenges
1 Introduction
1.1 Categories in Software Engineering
1.2 Aspects of Software Project Management
1.3 Software Reengineering
1.4 Software Engineering Techniques
2 Overview of Business Process Reengineering
2.1 Concept of Business Process Reengineering
3 Business Process Reengineering Methodology
4 Business Process Reengineering Life Cycle
4.1 In the Implementation of BPR Methods, Some Guiding Principles Include
5 Conclusion
References
Creating a Biological Intranet with the Help of Medical Sciences and Li-Fi
1 Introduction
2 Light Fidelity
3 Bionic Eye
4 Human Eye
5 Li-Fi for Eye
6 Proposed Working
7 Proposed Architecture
8 Creating an Intranet Using Li-Fi and Bionic Eye
9 Real-Life Applications
10 Present Restrictions
11 Challenges and Difficulties
12 Conclusion
References
Optimization of Low Power LNA Using PSO for UWB Application
1 Introduction
2 Mathematical Model of PSO for Optimization of Analog Circuit
3 Analysis of LNA Circuit
4 Results and Discussion
4.1 S-Parameters
4.2 Noise Figure
5 Conclusion
References
Automatic Segregation and Supervision of Waste Material Using Industrial Control Devices
1 Introduction
2 Methodology
3 Project Requirement
3.1 Software Requirement
3.2 Hardware Requirement
4 Results
5 Conclusion
References
An Improved Model for Breast Cancer Classification Using Random Forest with Grid Search Method
1 Introduction
2 Literature Survey
3 Machine Learning
3.1 Random Forest
3.2 Grid Search Method
4 Proposed Methodology
4.1 Data Collection
4.2 Indices of Performance Measure
5 Experimental Results
6 Conclusion and Future Scope
Appendix
References
A Review on Low-Noise Amplifier for Wideband Applications
1 Introduction
2 Topologies
3 Conclusion
References
Effects of Single and Double Wide Slots on Microstrip Patch Antennas Characteristics Using Direct Contact Probe Feed Excitation with Broadsided Radiation
1 Introduction
2 Design of Slotted Patch Antennas
3 Measured and Simulated Results
3.1 Bandwidth of Conventional Rectangular Microstrip Antenna (C-RMA)
3.2 Bandwidth of Single Wide Slot-RMA (SWS-RMA)
3.3 Bandwidth of Double Wide Slot-RMA (DWS-RMA)
4 Conclusion
References
Solar Roadways: A Road Toward Betterment
1 Introduction
2 Working
3 Advantages
4 Challenges
5 Future Scope
6 Conclusion
References
A Review on Radiomic Analysis for Medical Imaging
1 Introduction
2 Tumors
3 Why Radiomics?
4 Process
4.1 Image Acquisition and Reconstruction
4.2 Image Segmentation
4.3 Feature Extraction and Qualifications
4.4 Database and Data Sharing
4.5 Ad Hoc Informatic Analysis
5 Application
6 Drawbacks
7 Future Scope
References
Design of CMOS 6T and 8T SRAM for Memory Applications
1 Introduction
2 Review of Existing SRAM Architectures
2.1 Conventional 6T SRAM
2.2 Conventional 8T RAM
2.3 10T Single-Ended SRAM
2.4 8T SRAM Cell with Virtual Ground
2.5 Proposed 6T Single-Ended SRAM Cell
3 Proposed SRAM
3.1 Proposed SRAM Reversible Cell with Read and Write Signals
4 Simulation Results
5 Conclusion
References
New Concept for Solar Efficiency Improvement
1 Introduction
2 Solar Tree
3 Related Work
4 Proposed Work
5 Conclusion
References
Sustainable Smart Cities and Their ICT Practices
1 Introduction
2 Review of Literature
3 Sustainable Smart Cities and ICT—Handshake
3.1 The Development of Latest Cities Badging Themselves as Smart
3.2 The Development of Older Cities Regenerating Themselves as Smart
3.3 The Development of Science Parks, Tech Cities, and Techno-poles Focused on High Technologies
3.4 The Development of Urban Services Using Contemporary ICT
3.5 The Use of ICT to Develop New Urban Intelligence Functions
3.6 The Development of Online and Mobile Sorts of Participation
4 Alternative Models that Ensure IT Initiatives’ Viability
5 Conclusion
References
Development of Hybrid Energy System for a Rural Area
1 Introduction
2 Methodology
2.1 Study Area
2.2 Load Demand
2.3 Resource Assessment
2.4 Modeling of HES Components
3 Results and Discussion
3.1 Feasible Configurations
3.2 Most Optimal Configuration
3.3 Sensitivity Analysis
4 Conclusion
References
Face Recognition with Inception-Based CNN Models
1 Introduction
2 Deep Learning Models
3 The Simulation Results
4 Conclusion
References
Design of Protocol for Handwriting Recognition Using FPGA
1 Introduction
1.1 Online Handwriting
1.2 Isolated Handwriting
1.3 Unconstrained Handwriting
1.4 Writer Independent
2 Hidden Markov Model
3 Artificial Neural Network
3.1 Artificial Neural Network BP
4 Canny Edge Detection
5 Probabilistic Patch-Based Filter
5.1 Proposed Block Diagram
5.2 Flowchart of the Proposed Work
6 Virtex-5 FPGA (XC5VLX50T)
7 Results and Discussion
8 Conclusion
References
Diesel Engine Performance with Coolant Temperature Control System and Phase Change Material (PCM) in the Cold Ambient Conditions: A Review
1 Introduction
2 Literature
2.1 The Selection Standards for PCM Are as Per the Following
2.2 An Improved One-Dimensional Model Is Introduced
3 Objective
4 Workflow Chart
5 Codes
6 Conclusion
7 Future Work
References
Mechanical Properties of Sisal Fibre and Human Hair Reinforced Epoxy Resin Hybrid Polymer Composite
1 Introduction
1.1 Sisal Plant Description
1.2 Human Hair Description
1.3 Sisal Fibre Processing Steps
1.4 Sisal Fibre Processing Steps
2 Conclusion
References
Simulation of Swirl Cup as Vane Angle with 58°
1 Introduction
2 Dimension of Swirler
3 Parameters to Flow Investigates
3.1 Evaluated the Swirl Number [1]
4 Geometric View of Model at 58°
4.1 Meshing of Model
5 Result and Discussion
6 Conclusion
References
Data Acquisition Technique for Temperature Measurement Through DHT11 Sensor
1 Introduction
2 Background
3 Materials and Method
3.1 Arduino
3.2 Sensors
3.3 IDE
4 Implementation and Results
5 Conclusion
References
Urban Sprawl Over a Lotic Ecosystem of Doon Valley: Trend and Future Implications
1 Introduction
2 Study Area
3 Materials and Method
3.1 Database and Pre-processing
3.2 Land Use Change Detection and Transition Potential Modeling
3.3 Future Trend Modeling
3.4 Landscape Analysis
4 Results and Discussions
4.1 Landuse Change from 2005 to 2010
4.2 Transitional Probabilities
4.3 Landscape Analysis
5 Conclusions
References
Arduino-Based Therapy Device for Carpal Tunnel Syndrome
1 Introduction
2 Literature Survey
3 Review of Literature Survey
4 Fabrication
4.1 Electronic Part
4.2 Software Requirement
5 Experimental Setup
6 Working Routine
7 Results
8 Conclusion
References
IOT-Based Smart Traffic Light System for Smart Cities
1 Introduction
2 Proposed System
3 Conclusion and Future Scope
References
Self-Diagnosis Medical Chatbot Using Artificial Intelligence
1 Introduction
2 Literature Survey
3 Work Done
3.1 spaCy
3.2 Introduction to Dialogflow
4 Result
5 Conclusion
References
A Comparison Analysis of Mobile Forensic Investigation Framework
1 Introduction
2 Related Work
2.1 Four-Phase Methodology
3 Mobile Forensic Process and Types
4 Problem Statement
5 Proposed Model
6 Conclusion
References
Review in Energy Harvesting for Self-Powered Electronics Sensor
1 Introduction
1.1 Piezoelectric Energy Through Human Motion
1.2 Power Generate Through Bicycle
2 Working Principle
2.1 This Work in Two Modes
3 Design and Applications
3.1 Smart Textile
3.2 Smart Footwear
3.3 Smart Skin
4 Issues with Implementation and Design
5 Conclusion
References
Review in Smart Oculus Lenses
1 Introduction
2 Issues with the Smart Oculus Lenses
2.1 Computer Vision Syndrome
2.2 Retina Damage
2.3 Potential Cataracts
3 Application of Smart Lenses
3.1 Smart Eye Technology for Businesses with Significant Document Security Challenges
3.2 Health care
3.3 Aerospace and MRO
3.4 Retail and Logistics Management
3.5 Manufacturing and Training
4 Future Scope of Smart Oculus Lenses
5 Conclusion
References
Car Accident Prevention Using Alcohol Sensor
1 Introduction
2 Alcohol Sensor (MQ-3)
3 Display Driver IC (LM3914)
4 Relay
5 Starter Motor Circuit
6 Result
7 Conclusion
References
Performance Analysis Permanent Magnet Synchronous Motor Drives with Pulse Width Modulation Control Technique
1 Introduction
2 Modeling of PMSM Drive
3 Torque Control Scheme of PMSM Drive
4 Implementation of SVPWM Current Control Scheme of PMSM Drive
5 Simulation and Results of a Torque-Controlled Drive System
6 Conclusion
References
A Review on Methanol-Blended-Diesel Fuel Combustion in CI Engine for LTC Technology Using EGR
1 Introduction
1.1 Review Objectives
2 Methanol
3 Exhaust Gas Recirculation (EGR)
4 Effects and Discussion
4.1 Effect of Blends on Combustion and Engine Emissions
5 Conclusions
References
Current Trend and Methodologies of Content-Based Image Retrieval: Survey
1 Introduction
2 Content-Based Image Retrieval (CBIR)
3 CBIR at Early Years
3.1 Relevance Feedback: A Power Tool for Interactive CBIR
3.2 The Bayesian Image Retrieval System, PicHunter: Principle, Application, and Psychophysical Tests
3.3 An Image Retrieval System with Automatic Query Modification
3.4 CBIR Based on a Fuzzy Approach
3.5 CBIR: Methodologies and Trends of the New Age
3.6 CBIR: Theory and Applications
4 CBIR at the End of Early Years
4.1 Dynamic User Concept Pattern Learning Framework for CBIR
4.2 An Innovative Evolutionary Method for Improving Content-Based Image Indexing Procedures
4.3 Localized CBIR
4.4 Image Color Distributions Based Adaptive Color Feature Extraction
4.5 Methodologies, Challenges and Future Way of Image Retrieval
4.6 An Advanced System for CBIR Using Texture and Color Features
5 Current Trend and Methodologies
5.1 Effective Relevance Feedback by Mining User Navigation Patterns for CBIR
5.2 Local Tetra Patterns: An Innovative Feature Descriptor for CBIR
5.3 Deep Learning for CBIR: A Comprehensive Study
5.4 CBIR in E-Commerce for Quality Products
6 Comparison of Various CBIR Systems
6.1 Some of the Available Systems
7 Discussion and Conclusion
References
Number Plate Recognition: Concept and Its Applications
1 Introduction
2 Process of Vehicle Number Identification
2.1 Image Acquisition
2.2 Pre-processing and ROI Extraction
2.3 Region of Interests Based Analysis
2.4 Multifaceted Design Improvement Utilizing Histogram Change
2.5 Plate Impediment
2.6 Character Segmentation
3 Applications of the Vehicle Number Identification
3.1 Halting
3.2 Access Control
3.3 Motorway Road Tolling
3.4 Excursion Time-Based Measurement
3.5 Enforcement of Laws
4 Conclusion
References
Crowd Analysis in Public Transport Based on Face Detection
1 Introduction
2 Literature Survey
3 Existing Work
3.1 Dilax
3.2 Smart Bus
3.3 Prime Edge
4 Proposed Work
4.1 Online Mode
4.2 Offline Mode
5 Mathematical Model
6 Challenges
7 Conclusion
References
Review on Performance Analysis of PV Module at Different Sites and Locations
1 Introduction
2 Methodology
2.1 Architecture of Grid-Connected Solar PV System
2.2 Selection and Finalization of Different Location
2.3 Solar Radiation and Meteorology
2.4 Selection and Finalization PV Module Specification
2.5 Selection and Finalization Inverter Specification
2.6 Orientation and Horizon Finalization of PV Module
3 Interconnection of Inverter and Array
4 Performance Analysis of PV Module
4.1 Yield Simulation
4.2 Losses Simulation
4.3 Performance Ratio
4.4 Generation Simulation
5 Conclusion
5.1 Losses and Efficiency Simulation
5.2 Performance Ratio
5.3 Generation Forecasting
References
Digital Watermarking System Performance Using QIM Techniques and Wavelet Transforms
1 Introduction
2 Related Discussion
2.1 Discrete Wavelet Transform (DWT) and Integer Wavelet Transform (IWT)
2.2 Quantization Index Method (QIM)
3 Watermarking Scheme
3.1 Watermark Generation Process
3.2 Watermark Embedment Process
3.3 Watermark Extraction Process
3.4 Extracted Generated Watermark Process
4 Performance Evaluation
5 Experiment Results
6 Conclusion
References
A Distributed Spanning Tree-Based Scalable Fault-Tolerant Algorithm for Load Balancing in Web Server Farms
1 Introduction
2 Mathematical Model for DST-Based Load Balancing
3 DST-Based Load Balancing Algorithm
4 Conclusion
5 Future Enhancements
References
Context-Aware Computing for IoT: History, Applications and Research Challenges
1 Introduction
2 History and Evolution of Context-Aware Computing
3 Context-Aware Computing and IoT Applications
4 Research Issues and Future Challenges in Implementing Context-Aware Computing in IoT
5 Conclusion
References
Overview of IoT Privacy and Security Challenges for Smart Carpooling System
1 Introduction
2 IoT and Its Application
3 IoT-Based Carpooling
4 Security Analysis of IoT System
5 Security and Privacy Issues Related to IoT-Based Carpooling
6 Security Threats in the Carpooling
7 Conclusion
References
Author Index


Algorithms for Intelligent Systems Series Editors: Jagdish Chand Bansal · Kusum Deep · Atulya K. Nagar

Dinesh Goyal · Pradyumn Chaturvedi · Atulya K. Nagar · S. D. Purohit (Editors)

Proceedings of Second International Conference on Smart Energy and Communication ICSEC 2020

Algorithms for Intelligent Systems Series Editors Jagdish Chand Bansal, Department of Mathematics, South Asian University, New Delhi, Delhi, India Kusum Deep, Department of Mathematics, Indian Institute of Technology Roorkee, Roorkee, Uttarakhand, India Atulya K. Nagar, School of Mathematics, Computer Science and Engineering, Liverpool Hope University, Liverpool, UK

This book series publishes research on the analysis and development of algorithms for intelligent systems with their applications to various real world problems. It covers research related to autonomous agents, multi-agent systems, behavioral modeling, reinforcement learning, game theory, mechanism design, machine learning, meta-heuristic search, optimization, planning and scheduling, artificial neural networks, evolutionary computation, swarm intelligence and other algorithms for intelligent systems. The book series includes recent advancements, modification and applications of the artificial neural networks, evolutionary computation, swarm intelligence, artificial immune systems, fuzzy system, autonomous and multi agent systems, machine learning and other intelligent systems related areas. The material will be beneficial for the graduate students, post-graduate students as well as the researchers who want a broader view of advances in algorithms for intelligent systems. The contents will also be useful to the researchers from other fields who have no knowledge of the power of intelligent systems, e.g. the researchers in the field of bioinformatics, biochemists, mechanical and chemical engineers, economists, musicians and medical practitioners. The series publishes monographs, edited volumes, advanced textbooks and selected proceedings.

More information about this series at http://www.springer.com/series/16171


Editors

Dinesh Goyal, Poornima Institute of Engineering and Technology, Jaipur, Rajasthan, India
Pradyumn Chaturvedi, Visvesvaraya National Institute of Technology, Nagpur, Maharashtra, India
Atulya K. Nagar, School of Mathematics, Computer Science and Engineering, Liverpool Hope University, Liverpool, UK
S. D. Purohit, Rajasthan Technical University, Kota, Rajasthan, India
ISSN 2524-7565
ISSN 2524-7573 (electronic)
Algorithms for Intelligent Systems
ISBN 978-981-15-6706-3
ISBN 978-981-15-6707-0 (eBook)
https://doi.org/10.1007/978-981-15-6707-0

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

The Second International Conference on “Smart Energy and Communication” (ICSEC 2020) was organized by Poornima Institute of Engineering & Technology, Jaipur, Rajasthan, India, on March 20–21, 2020. The core vision of ICSEC 2020 was to disseminate new knowledge and technology for the benefit of everyone, ranging from the academic and professional research communities to industrial practitioners, across a range of topics in electronics and communication engineering and electrical engineering: general and analog circuit design, image processing, wireless and microwave communication, optoelectronics and photonic devices, nano-electronics, renewable energy, smart grid, power systems and industry applications. It also provided a venue for high-caliber researchers, PhD scholars and professionals to present ongoing research and developments in these areas. The objective was to provide a platform for all the researchers, engineers, academicians, industry delegates and students to discuss the contribution of engineers to “smart energy and communication.” The conference analyzed key trends, showcased technology solutions, and discussed and proposed solutions and strategies for innovation. The conference received a total of 270 papers, of which 200 were selected on the basis of reviewer comments and plagiarism reports. A total of 180 papers were registered and presented at the conference; of these, 71 premier-quality papers were selected for publication with Springer Nature. The conference highlighted contributions by researchers, technocrats and experts regarding the trends deployed to develop intelligence in all the domains of information management.
This book disseminates new knowledge and technology for the benefit of everyone, ranging from the academic and professional research communities to industrial practitioners, in a range of areas in electronics and communication engineering and electrical engineering, discussing recent trends in mobile computing and advancements of electronic systems. It covers topics such as general and analog circuit design, image processing, wireless and microwave communication, optoelectronics and photonic devices, nano-electronics, renewable energy, smart grid, power systems and industry applications. The solutions discussed here will encourage and inspire researchers, industry professionals, and policymakers to put these methods into practice. The aim is to encourage scholars and professionals to overcome disciplinary barriers, as demanded by current trends in the industry and in the consumer market, which are rapidly leading toward a convergence of data-driven applications, computation, telecommunication and energy awareness. Given its coverage, this book will benefit graduate students, researchers and practitioners who need to keep up with the latest technological advances.

Jaipur, India
Nagpur, India
Liverpool, UK
Kota, India
21 December 2020

Dinesh Goyal
Pradyumn Chaturvedi
Atulya K. Nagar
S. D. Purohit

Contents

An Evaluation into Deep Learning Capabilities, Functions and Its Analysis
  Aamir Hamid Rather, Zubaid Hamid Rather, and Suhail Rafiq Tantray
Mathematical Modeling and Simulation of DC-DC Converters Using State-Space Approach
  Piyush Sharma, Dheeraj Kumar Dhaked, and Ashok Kumar Sharma
Detection of Forgery in the JPEG Images Using Forward Quantization Noise Method
  Satish Pratapur and D. C. Shubangi
Error-Controlling Technique in Wireless Communication
  Neelesh Kumar Gupta, Narbada Prasad Gupta, Pradeep Gupta, and Kapil Kumar
Cloud Computing: The New World of Technology
  Arsheen Qureshi and Ashwani Sharma
Questions Generation for Reading Comprehension Using Coherence Relations
  Anamika, Vibhakar Pathak, Vishal Shrivastava, and Akil Panday
Segmentation of Brain Tumor from MRI Images Using Modified Morphological Novel Approach
  Harendra Singh and Rajeev Ratan
Implementation and Use of ERP System in Organization and Educational Institution
  Fakih Awab Habib, Ghatte Saqib Nisar, Singh Sudhanshu Somnath, and Shinde Abhijit Jagannath
WebGIS Concept of River Pollution Monitoring System—A Case Study of Yamuna River
  Rahat Zehra, Madhulika Singh, and Jyoti Verma
Data Mining in Crime Analysis
  Nahid Jabeen and Parul Agarwal
Thermophotovoltaic Cells: Electrical Power Generation at Night
  Devesh Bhatnagar
Insights of Kinship Verification for Facial Images—A Review
  Shikha Sharma and Vijay Prakash Sharma
Development of a Novel Approach for Classification of MRI Brain Images Using DWT by Integrating PCA, KSVM and GRB Kernel
  Preeti Arora and Rajeev Ratan
Optimization of Sustainable Performance: Housing Project, Bahir Dar, Ethiopia
  Ambuj Kumar and Harveen Bhandari
3D Image Conversion of a Scene from Multiple 2D Images with Background Depth Profile
  Denny Dominic and Krishnan Balachandran
Optimization of a Stand-Alone Hybrid Renewable Energy System Using Demand-Side Management for a Remote Rural Area in India
  M. Ramesh and R. P. Saini
Advance Security and Challenges with Intelligent IoT Devices
  Neha Sharma and Deepak Panwar
Systematic Assessment and Overview of Wearable Devices and Sensors
  Shashikant Patil, Zerksis Mistry, and Kushagra Chtaurvedi
An Optimal Design of 16 Bit ALU
  Pasuluri Bindu Swetha, N. Sai Vamshi, Md. Mujeeb Ur Rehamaan, and V. Karthik
Preventing SSRF (Server-Side Request Forgery) and CSRF (Cross-Site Request Forgery) Using Extended Visual Cryptography and QR Code
  Nilesh Arora, Priya Singh, Soniya Sahu, Vineet Kr Keshari, and M. Vinoth Kumar
IoT: Security Attacks and Countermeasures
  Harshit Reylon and Alka Chaudhary
Intelligent Street Light System Using ARM Cortex M0+
  Divesh Kumar, Manish Kumar, and Manish Gupta
Image Segmentation
  Anurag Jindal, Samarth Joshi, Rishabh Jangwal, Ankit Rathi, and Rachna Jain
Application of Bio Sensor in Carpal Tunnel Syndrome
  Mayank Agrawal and Nikita Gautam
Applications of Artificial Intelligence Techniques for Cognitive Networks
  G. Yashasree, Davanam Ganesh, M. Pavan, and K. Bindu
Design of Pitch Attitude Hold Mode for Commercial Aircraft Using Extended State Observer
  Princy Randhawa and Tushar Pradeep Basakhatre
UAV—A Boon Towards Agriculture
  Manish Verma, Sayed Imran Ali, and Gaurav Agrawal
Modelling and Design of 5T, 6T and 7T SRAM Cell Using Deep Submicron CMOS Technology
  Nidhi Tiwari, Varun Sankath, Akhilesh Upadhyay, Mukesh Yadav, Ruby Jain, Pallavi Pahadiya, Madhavi Bhanwsar, and Shivangini Mouraya
Machine Learning Approach Towards Road Accident Analysis in India
  Shruti Singhal, Bhavini Priyamvada, Rachna Jain, and Muskan Chawla
IoT-Based Big Data Storage Systems in Cloud Computing
  Prachi Shah, Amit Kr. Jain, Tarun Mishra, and Garima Mathur
Optimized Hybrid Electricity Generation
  Pushpa Gothwal, Paridhi Palliwal, and Shubhangi
Access Control of Door and Home Security System
  Anila Dhingra, Tanya Mittal, Soniya Moolchand Heera, and Varun Menaria
Bluetooth-Based Smart Sensor Networks
  Vibha Beniwal, Tarun Mishra, Amit K. Jain, and Garima Mathur
Behaviour of Hollow Core Concrete Slabs
  Mayank Mehandiratta and Praveen Kumar
Business Process Reengineering: Issues and Challenges
  A. Harika, M. Sunil Kumar, V. Anantha Natarajan, and Suresh Kallam
Creating a Biological Intranet with the Help of Medical Sciences and Li-Fi
  Yagya Buttan and Komal Saxena
Optimization of Low Power LNA Using PSO for UWB Application
  Manish Kumar, Manish Gupta, Divesh Kumar, and Vinay kumar Deolia
Automatic Segregation and Supervision of Waste Material Using Industrial Control Devices
  Fakih Awab Habib, Khan Salman Mehtabali, Khan Athar, and Ansari Mohd Afwan
An Improved Model for Breast Cancer Classification Using Random Forest with Grid Search Method
  Yagya Buttan, Alka Chaudhary, and Komal Saxena
A Review on Low-Noise Amplifier for Wideband Applications
  Dheeraj Kalra
Effects of Single and Double Wide Slots on Microstrip Patch Antennas Characteristics Using Direct Contact Probe Feed Excitation with Broadsided Radiation
  Ambresh P. Ambalgi, S. K. Sujata, Udit Mamodiya, and Priyanka Sharma
Solar Roadways: A Road Toward Betterment
  Arpit, Richa Ferwani, and Swikrati Gupta
A Review on Radiomic Analysis for Medical Imaging
  Nitika Gupta and Priyanka Sharma
Design of CMOS 6T and 8T SRAM for Memory Applications
  Binduswetha Pasuluri, V. J. K. Kishor Sonti, S. M. M. Trinath, and N. Bala Dastagiri
New Concept for Solar Efficiency Improvement
  Ravi Sharma, Ahmad Hasan Khan, and Ankit Kumar Sharma
Sustainable Smart Cities and Their ICT Practices
  K. Bhavana Raj and Mohmad Mushtaq Khan
Development of Hybrid Energy System for a Rural Area
  Arushi Misra and M. P. Sharma
Face Recognition with Inception-Based CNN Models
  Lakshmi Patil and V. D. Mytri
Design of Protocol for Handwriting Recognition Using FPGA
  Vinita Patil and Rajendra R. Patil
Diesel Engine Performance with Coolant Temperature Control System and Phase Change Material (PCM) in the Cold Ambient Conditions: A Review
  Manish Kumar and S. K. Dhakad
Mechanical Properties of Sisal Fibre and Human Hair Reinforced Epoxy Resin Hybrid Polymer Composite
  S. K. Dhakad and Anas Ahmed Ansari
Simulation of Swirl Cup as Vane Angle with 58°
  Amit Kumar, S. K. Dhakad, and Anurag Kulshreshtha
Data Acquisition Technique for Temperature Measurement Through DHT11 Sensor
  Brajesh Vallabh, Aquib Khan, Durgesh Nandan, and Manish Choubisa
Urban Sprawl Over a Lotic Ecosystem of Doon Valley: Trend and Future Implications
  Monika Rawat, S. M. Veerabhadrappa, R. P. Pandey, and D. R. Sena
Arduino-Based Therapy Device for Carpal Tunnel Syndrome
  Alok Ahuja, Aditya Katole, Aman Sharma, and Akanksha Vyas
IOT-Based Smart Traffic Light System for Smart Cities
  Manish Gupta, Divesh Kumar, and Manish Kumar
Self-Diagnosis Medical Chatbot Using Artificial Intelligence
  Fakih Awab Habib, Ghare Shifa Shakil, Shaikh Sabreen Mohd. Iqbal, and Shaikh Tasmia Abdul Sajid
A Comparison Analysis of Mobile Forensic Investigation Framework
  Waheedullah Asghari, A. Suresh Kumar, Ajay Shankar Singh, and K. Thirunavukkarasu
Review in Energy Harvesting for Self-Powered Electronics Sensor
  Krishna Mittal and Deepak Sharma
Review in Smart Oculus Lenses
  Rashmi Jayswal and Nikita Gautam
Car Accident Prevention Using Alcohol Sensor
  Ruby Jain, Nidhi Tiwari, Devendra Kumar Prajapati, Akhilesh Upadhyay, and Mukesh Yadav
Performance Analysis Permanent Magnet Synchronous Motor Drives with Pulse Width Modulation Control Technique
  Ritu Tak, Shuchi Shukla, and Bhart Singh Rajpuprohit
A Review on Methanol-Blended-Diesel Fuel Combustion in CI Engine for LTC Technology Using EGR
  S. K. Dhakad and Amrit Kumar
Current Trend and Methodologies of Content-Based Image Retrieval: Survey
  Bhagwandas Patel, Kuldeep Yadav, and Debashis Ghosh
Number Plate Recognition: Concept and Its Applications
  Seema Meena and Bipul Kumar
Crowd Analysis in Public Transport Based on Face Detection
  Jyoti Chauhan, Saurabh Chordiya, Rohan Bhatia, Shrinivas Genge, and Rucha Barad
Review on Performance Analysis of PV Module at Different Sites and Locations
  Ajay Saini, Upendra Singh Chauhan, and Deepak Sharma
Digital Watermarking System Performance Using QIM Techniques and Wavelet Transforms
  Hiral A. Patel and Dipti B. Shah
A Distributed Spanning Tree-Based Scalable Fault-Tolerant Algorithm for Load Balancing in Web Server Farms
  U. Prabu, N. Malarvizhi, J. Amudhavel, and G. Sambasivam
Context-Aware Computing for IoT: History, Applications and Research Challenges
  Ankur O. Bang and Udai Pratap Rao
Overview of IoT Privacy and Security Challenges for Smart Carpooling System
  Manas Ranjan Mohapatra and Jyoti Ranjan Mohanty
Author Index

About the Editors

Dr. Dinesh Goyal is Principal of Poornima Institute of Engineering & Technology. He has 19 years of teaching experience and keen research interests in cloud security, image processing and information security. To broaden his skill set, he has attended four short-term training programs as Convener and Co-Convener during his career. He has been instrumental in obtaining accreditations for his institutions from agencies such as NAAC and NBA, and has received grants for research, development, conferences and workshops worth Rs. 16 lakh from agencies such as AICTE and TEQIP. He received the “Elets Excellence Award 2017” at the Higher Education & Human Resource Conclave organized by the Higher Education Department of the Government of Rajasthan. Under his leadership, the institution has also excelled in industry–academia interface and has established centers with major technology companies such as Microsoft, Google and Amazon. He has 6 full patents published and 1 copyright under his name. He has published 6 edited books with publishers such as Springer, Wiley, IGI Global, Apple Academic Press, Taylor & Francis and Eureka. He has published 1 SCI and 16 Scopus-indexed papers and is an editor of special issues of 2 SCI and 5 Scopus-indexed journals. He has successfully guided 8 PhD scholars and 31 PG scholars, attended more than 25 international conferences, and been an invited speaker at more than 15 conferences and seminars. He is a life member of ISC and ISTE and a fellow member of CSI and ISTE.

Dr. Pradyumn Chaturvedi received his Ph.D., M.E. and B.E. degrees in 2010, 2001 and 1996 from the National Institute of Technology Bhopal (India), Rajiv Gandhi Proudyogiki Viswavidyalaya Bhopal (India) and Barkatullah University Bhopal (India), respectively.
He is currently an Assistant Professor (Grade-I) in the Department of Electrical Engineering, Visvesvaraya National Institute of Technology, Nagpur, India. He has 19 years of teaching and research experience, with more than 80 research papers published in reputed international journals and refereed international and national conferences. He also co-authored the book “Modeling and Control of Power Electronics Converter System for Power Quality Improvements,” Academic Press, USA. Dr. Chaturvedi completed three sponsored research projects and is currently working, as Principal Investigator, on “Design and Development of Distributed Energy Resources Assisted Highly Efficient and Reliable Power Electronic Transformer for Off-Grid Rural Electrification,” sponsored by DST-SERB. Dr. Chaturvedi holds the following positions: (1) Track Chair, Subcommittee on Resonant and Soft Switching Converter of IEEE IES PETC, USA; (2) Chairperson, IEEE Bombay Section Joint Chapter of the Power Electronics Society and Industrial Electronics Society (CH010868); (3) Executive Committee Member of IEEE IES PETC; (4) Executive Committee Member of IEEE Bombay Section; and (5) Branch Counselor of the IEEE VNIT Student Branch. He is a Senior Member of IEEE, USA; a Member of the International Association of Engineers, Hong Kong; a Life Member of ISTE, India; a Member of the Asian Council of Science Editors, Dubai; and a Member of the International Editorial Board of the IOSR Journal of Electrical and Electronics Engineering. Dr. Chaturvedi successfully conducted three international faculty development programs under the Global Initiative of Academic Networks (GIAN) scheme of the Government of India with Prof. K. Matsuse (Meiji University, Tokyo, Japan), Prof. Sun Jian (Rensselaer Polytechnic Institute, New York, USA) and Prof. Jih-Sheng (Jason) Lai (Virginia Polytechnic Institute and State University, Virginia, USA). He is also actively involved in various IEEE conferences as Track Chair, Session Chair, Technical Program Committee Member and Advisory Committee Member. He delivered a pre-conference tutorial session at the IEEE International Conference on Power Electronics (IICPE), MNIT, Jaipur, India, in December 2018, and has proposed and organized special sessions in various IEEE conferences, including ICIT 2019 Melbourne, IEEE IICPE 2018 Jaipur, IFEEC 2019 Singapore, ISIE 2020 Delft (proposed) and IECON 2020 Singapore (proposed).
He is an active and regular reviewer for IEEE Transactions on Industrial Electronics, IEEE Transactions on Power Electronics, IEEE Transactions on Industrial Informatics, IET Power Electronics, Electrical Engineering, International Journal of Electronics, International Journal of Power Electronics, Electric Power Components & Systems and various IEEE conferences.

Dr. Atulya K. Nagar is Professor, Pro-Vice-Chancellor (Research) and Dean of the Faculty of Science at Liverpool Hope University. His expertise is in nonlinear mathematics, natural computing, bio-mathematics and computational biology, operations research, and control systems engineering. He is an expert reviewer for the Biotechnology and Biological Sciences Research Council (BBSRC) grants peer-review committee for the Bioinformatics Panel and for the Engineering and Physical Sciences Research Council (EPSRC) High Performance Computing Panel, and serves on the Peer-Review College of the Arts and Humanities Research Council (AHRC) as a scientific expert member. He has edited volumes on intelligent systems and applied mathematics, is Editor-in-Chief of the International Journal of Artificial Intelligence and Soft Computing (IJAISC), and serves on the editorial boards of a number of prestigious journals such as the Journal of Universal Computer Science (JUCS). He has served as Conference General Chair and as a member of the International Programme Committee (IPC) for several international conferences and has been invited to deliver keynote lectures at a number of such forums. He has over 200 publications in prestigious outlets and journals such as the Journal of Applied Mathematics and Stochastic Analysis; the International Journal of Advances in Engineering Sciences and Applied Mathematics; the International Journal of Foundations of Computer Science; the IEEE Transactions on Systems, Man, and Cybernetics; Discrete Applied Mathematics; Fundamenta Informaticae; and IET Control Theory & Applications, to name a few. He received a prestigious Commonwealth Fellowship for pursuing his doctorate (D.Phil.) in applied nonlinear mathematics from the University of York in 1996, and completed his B.Sc. (Hons.), M.Sc. and M.Phil. (with Distinction) at the MDS University of Ajmer, India. He has worked with the Department of Mathematical Sciences and later the Department of Systems Engineering at Brunel University, London, as well as the Tata Institute of Fundamental Research (TIFR, Applied Mathematics, Bangalore), BITS Pilani and the Indian Institutes of Technology (IITs) in India.

Dr. S. D. Purohit is an Associate Professor of Mathematics at Rajasthan Technical University, Kota, India. He obtained his Ph.D. in Mathematics from J.N.V. University, India, and has 18 years of teaching experience. His research interests relate to fractional calculus, special functions, integral transforms, basic hypergeometric functions, geometric function theory and mathematical physics. He is a life member of the Indian Mathematical Society (IMS), the Indian Science Congress Association (ISCA), the Indian Academy of Mathematics (IAM) and the Society for Special Functions and their Applications, and a member of the American Mathematical Society (AMS) and the International Association of Engineers (IAENG). He is a member of the editorial boards of several journals and serves as a reviewer/referee for journals. He was awarded the University Gold Medal for topping M.Sc. Mathematics and held Junior and Senior Research Fellowships of the Council of Scientific and Industrial Research. He has published more than 140 research articles in reputed and refereed journals, and has contributed to 5 books as author/co-author.

An Evaluation into Deep Learning Capabilities, Functions and Its Analysis Aamir Hamid Rather, Zubaid Hamid Rather, and Suhail Rafiq Tantray

Abstract Deep learning (DL) is an emerging research area within machine learning (ML) and pattern recognition. Deep learning refers to ML techniques that use supervised or unsupervised approaches to learn hierarchical representations in deep architectures for classification. The goal is to discover more abstract features in the higher levels of the representation, using neural networks that disentangle the underlying explanatory factors in the data. In recent years it has attracted considerable attention owing to its strong performance in areas such as object recognition, speech recognition, computer vision, collaborative filtering and natural language processing. As data volumes keep growing, deep learning will play a key role in providing big-data predictive-analytics solutions. This paper offers a brief overview of deep learning, its techniques, current research efforts and the challenges involved. Keywords Machine learning · Artificial intelligence · Deep learning

1 Introduction

Humans organize their thoughts and ideas hierarchically: they first learn simple concepts and then compose them to represent more abstract ones. The human brain resembles a deep neural network, consisting of many layers of neurons that act as feature detectors, detecting increasingly abstract features as the levels go up. Representing data at multiple levels of abstraction makes it easier for machines to generalize. The main advantage of deep learning is its compact representation of a larger set of functions than the shallow architectures used by most traditional learning methods. A deep architecture is more expressive than a shallow one given the same number of nonlinear units; functions that are compactly represented with k layers may require exponential size when expressed in fewer layers. Formally, it can be shown that a k-layer network can compactly represent functions that a (k − 1)-layer network cannot represent unless it has an exponentially large number of hidden units. Faster CPUs, parallel processor architectures and GPU computing made training deep networks computationally feasible; neural networks are commonly represented as matrices of weight vectors, and GPUs are optimized for fast matrix operations. The perceptron was developed in the 1960s, and once Minsky and Papert [1] established that perceptrons can only learn to model linearly separable functions, interest in them quickly declined. Interest in neural networks revived with the invention of backpropagation for training multiple layers of nonlinear features. Backpropagation takes errors from the output layer and propagates them back through the hidden layers. Some researchers gave up on backpropagation because it could not make efficient use of many hidden layers. In the mid-2000s, Geoffrey Hinton [2] trained deep belief networks layer by layer on unlabeled data, using backpropagation to fine-tune the weights on labeled data. Bengio [3] in 2006 examined deep auto-encoders as an alternative to deep Boltzmann machines.

A. H. Rather (B)
University of Kashmir, Hazratbal, J&K 190009, India
e-mail: [email protected]

Z. H. Rather · S. R. Tantray
DBIT University, Noida 201306, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_1

2 Motivation

Machine learning has long been an important computer-science discipline with broad applications in science and engineering. In supervised learning the computer extracts knowledge from experience, with a human operator helping the machine learn by providing hundreds or thousands of training examples and manually correcting its mistakes. While machine learning has become a driving force within artificial intelligence, it has its problems: it is very time consuming, and it is still not a true measure of machine intelligence, since it relies on human ingenuity to come up with the abstractions that allow a computer to learn. A basic challenge for AI is the lack of sufficient training data to build accurate and reliable models in many practical situations. When quality data are scarce, the resulting models can perform very poorly on a new domain, even when the learning algorithms are well chosen. Unlabeled data are cheap and abundant, unlike labeled data, which are expensive to obtain. The promise of self-taught learning is that by exploiting massive amounts of unlabeled data, much better models can be learned. By using unlabeled data to learn good initial values for the weights in all the layers, the algorithm can learn and discover patterns from far more data than purely supervised approaches; this frequently results in much better classifiers. Deep learning is mostly unsupervised, in contrast to conventional machine learning, which is supervised. It involves creating large-scale neural networks that allow the computer to learn and process on its own, without the need for direct human intervention. Learning in AI applications traditionally depends on hand-engineered features, where the researcher manually encodes relevant information about the task at hand before learning takes place. This contrasts with deep learning, which tries to get the system to engineer its own features as far as is practical [7]. Recent Google research on deep learning has shown that it is possible to train a very large unsupervised neural network to automatically generate features for recognizing cat faces. The data-scarcity problem associated with extremely large-scale recommendation systems provides strong motivation for finding better ways to transfer learning from auxiliary data sources.

3 Study

There were attempts at training deep models before 2006, but they failed because training a deep supervised feed-forward neural network yielded worse results, in both training and test error, than shallow networks with one or two hidden layers. The situation was changed by three significant papers by Hinton, Bengio and Ranzato [2–4]. The key principle found in all three papers is the unsupervised learning of representations used to pre-train each layer. The unsupervised pre-training in these works is carried out one layer at a time, on top of the previously trained layers; the representation learned at each level becomes the input to the next layer. Supervised training is then used to fine-tune all the layers. Geoffrey Hinton [2] trained deep belief networks by stacking Restricted Boltzmann Machines (RBMs) on top of each other; deep belief networks use RBMs for unsupervised learning of a representation at each layer. The Bengio [3] paper explores and compares RBMs and auto-encoders. The Ranzato et al. [4] paper uses a sparse auto-encoder in the context of a convolutional architecture. Notable advances have recently been made to reduce the difficulties associated with high data volumes. With a huge volume of data, it is usually impractical to train a deep learning algorithm with a central processor and storage, so distributed systems with parallelized machines are preferable. Deng et al. [5] proposed a modified deep architecture called the Deep Stacking Network (DSN), which can be parallelized. A DSN is a combination of several specialized neural networks, each with a single hidden layer; stacked modules, whose inputs are composed of the raw data vectors and the outputs of the previous module, form a DSN. A newer deep architecture called the Tensor Deep Stacking Network (T-DSN), which builds on the DSN, has been implemented on CPU clusters for scalable parallel computing. Recent models use clusters of CPUs or GPUs to increase training speed. Deep learning algorithms have the unusual property of using unlabeled data during training. Training with vastly more data is preferable to using a smaller amount of exact, clean and carefully curated data, even though incompleteness and noisy labels are part of such data.
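The greedy layer-wise pretraining idea discussed above can be sketched in a few lines of NumPy. This is a hypothetical toy illustration, not code from any of the cited papers: a stack of tied-weight auto-encoders is trained one layer at a time, and each layer's hidden activations become the next layer's input. All layer sizes, the learning rate and the epoch count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder_layer(X, n_hidden, lr=0.1, epochs=200):
    """Train one tied-weight auto-encoder layer by gradient descent."""
    n_in = X.shape[1]
    W = rng.normal(0, 0.1, (n_in, n_hidden))
    b_h, b_v = np.zeros(n_hidden), np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W + b_h)        # encode
        R = sigmoid(H @ W.T + b_v)      # decode with tied weights
        err = R - X                     # reconstruction error
        dR = err * R * (1 - R)          # gradient at decoder pre-activation
        dH = (dR @ W) * H * (1 - H)     # backpropagated to encoder
        W -= lr * (X.T @ dH + dR.T @ H) / len(X)
        b_h -= lr * dH.mean(axis=0)
        b_v -= lr * dR.mean(axis=0)
    return W, b_h

# Greedy stacking: train layer k on the codes produced by layer k-1.
X = rng.random((64, 20))
codes, stack = X, []
for n_hidden in (16, 8):
    W, b_h = train_autoencoder_layer(codes, n_hidden)
    stack.append((W, b_h))
    codes = sigmoid(codes @ W + b_h)

print(codes.shape)  # top-layer features: (64, 8)
```

In a full pipeline, the weights in `stack` would initialize a deep network that is then fine-tuned with supervised backpropagation, as the papers describe.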

4 Techniques

Most current deep learning architectures consist of learned layers of RBMs or auto-encoders, both of which are two-layer neural networks that learn to model their inputs. RBMs model their inputs as a probability distribution, while auto-encoders learn to reproduce their inputs as their outputs. An RBM is a two-layer undirected neural network consisting of a visible layer and a hidden layer. There are no connections within each layer, but connections run from visible to hidden units. It is trained to maximize the expected log-likelihood of the data. The inputs are binary vectors, as the model learns Bernoulli distributions over each input. The activation function is computed in the same way as in an ordinary neural network, and the logistic function commonly used lies between 0 and 1. The output is treated as a probability, and each neuron is activated if its activation is greater than a random variable. The hidden-layer neurons take the visible units as inputs; the visible neurons take binary input vectors as initial input and then the hidden-layer probabilities (Fig. 1).

Fig. 1 Visible and hidden layers in an RBM

In the training phase, Gibbs sampling is performed, which corresponds to computing a probability distribution using a Markov Chain Monte Carlo (MCMC) approach [6]. In pass 1, the hidden-layer probabilities h are computed from the inputs v. In pass 2, those values go back down to the visible layer and up again to the hidden layer to obtain v′ and h′. The weights are updated using the differences in the outer products of the hidden and visible activations between the first and second passes. Approaching the ideal model would require a huge number of passes, so this procedure gives approximate inference but works well in practice. After training, the hidden-layer activations of an RBM can be used as learned features [8].

An auto-encoder is traditionally a feed-forward neural network which aims to learn a compressed, distributed representation of a dataset. An auto-encoder is a three-layer neural network that is trained to reconstruct its inputs by using them as the output. It has to learn features that capture the variance in the data so the input can be reproduced. It turns out to be equivalent to PCA if only linear activation functions are used, and it can then be used for dimensionality reduction. Once trained, the hidden-layer activations are used as the learned features, and the top layer can be discarded. Auto-encoders are trained using techniques such as denoising, contraction and sparsity constraints. In denoising auto-encoders, some random noise is added to the input and the encoder is required to reconstruct the original input; randomly deactivating inputs during training improves the generalization performance of standard neural networks. In contractive auto-encoders, setting the number of nodes in the hidden layer much lower than the number of input nodes forces the network to perform dimensionality reduction; this keeps it from learning the identity function, as the hidden layer has too few nodes to simply store the input. Sparse auto-encoders are trained by applying a sparsity penalty to the weight-update function; it penalizes the total size of the connection weights and causes most weights to take small values. RBMs or auto-encoders can be trained layer by layer.
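The two-pass Gibbs-sampling update for RBMs described earlier in this section (a single reconstruction step, often called contrastive divergence, CD-1) can be sketched as follows. This is a hypothetical minimal illustration, not a production RBM: the data are random binary vectors, and the layer sizes, learning rate and epoch count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 12, 6, 0.05
W = rng.normal(0, 0.1, (n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

V0 = (rng.random((100, n_visible)) < 0.5).astype(float)  # binary "data"

for _ in range(50):
    # PASS 1: visible -> hidden probabilities, then stochastic activation
    ph0 = sigmoid(V0 @ W + b_h)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # PASS 2: back down to the visible layer and up to the hidden layer
    pv1 = sigmoid(h0 @ W.T + b_v)
    ph1 = sigmoid(pv1 @ W + b_h)
    # Update with the difference of outer products of activations
    # between the first and second passes
    W += lr * (V0.T @ ph0 - pv1.T @ ph1) / len(V0)
    b_v += lr * (V0 - pv1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)

features = sigmoid(V0 @ W + b_h)  # hidden activations as learned features
print(features.shape)             # (100, 6)
```

The final line shows the point made in the text: after training, the hidden-layer activations serve as learned features for a downstream classifier.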
The features learned from one layer are fed into the next layer, so first a network with one hidden layer is trained, and only after that is done is a network with two hidden layers trained, and so on. At each step, the old network with k − 1 hidden layers is taken and an additional k-th hidden layer is added that takes as input the previously trained hidden layer k − 1 (Fig. 2). Training can be supervised, but more frequently it is unsupervised. The top-layer activations can be treated as features and fed into any suitable classifier such as Random Forest, SVM, and so on. The weights from training the layers individually are then used to initialize the weights in the final deep network, and at that point the whole architecture is fine-tuned. Alternatively, an additional output layer can be placed on top and the network fine-tuned with backpropagation. Backpropagation works well in deep networks only if the weights are initialized close to a good solution; the layer-wise pre-training ensures this. Many other approaches such as dropout and maxout exist for fine-tuning deep networks. Convolutional Neural Networks (CNN) are biologically inspired variants of MLPs. A typical Convolutional Neural Network consists of many layers of hierarchy, with some layers for feature representations and others acting as a kind of conventional neural network for classification. There are two alternating types of layers, called convolutional and subsampling layers: convolutional layers perform small local convolution operations with several filter maps of equal size, while subsampling layers reduce the sizes of succeeding layers by averaging pixels within a neighborhood (Fig. 3).
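The greedy layer-wise procedure described above (train one hidden layer, then feed its outputs into the next) can be sketched with a plain tied-weight auto-encoder per layer. The layer widths, learning rate, and random data below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_autoencoder_layer(X, n_hidden, lr=0.05, epochs=200):
    """Train one auto-encoder layer (sigmoid encoder, tied linear decoder)
    with batch gradient descent; return encoder weights and encoded features."""
    n_in = X.shape[1]
    W = rng.normal(0, 0.1, size=(n_in, n_hidden))
    for _ in range(epochs):
        H = 1.0 / (1.0 + np.exp(-X @ W))   # encode
        X_hat = H @ W.T                    # tied-weight linear decode
        err = X_hat - X                    # reconstruction error
        dH = err @ W * H * (1 - H)         # backprop through the encoder
        grad = X.T @ dH + err.T @ H        # gradient w.r.t. the shared W
        W -= lr * grad / len(X)
    H = 1.0 / (1.0 + np.exp(-X @ W))
    return W, H

# stack layers greedily: features from layer k-1 are the inputs of layer k
X = rng.random((64, 16))
widths = [12, 8, 4]
features, weights = X, []
for n_hidden in widths:
    W, features = train_autoencoder_layer(features, n_hidden)
    weights.append(W)
# `features` (64 x 4) are the top-layer activations, usable by any classifier
```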


Fig. 2 Auto-encoders

Fig. 3 CNNs

A. H. Rather et al.


The input is first convoluted with a set of filters. These 2D filtered data are called feature maps. After a nonlinear transformation, a subsampling is further performed to reduce the dimensionality. The sequence of convolution or subsampling can be repeated many times. The lowest level of this architecture is the input layer. With local receptive fields, upper layer neurons extract some elementary and complex features. Each convolutional layer is composed of multiple feature maps, which are constructed by convolving inputs with different filters. In other words, the value of each unit in a feature map is the result depending on a local receptive field in the previous layer and the filter. CNN algorithms learn a hierarchical feature representation by utilizing strategies like local receptive fields, shared weights, and subsampling. Each filter bank can be trained with either supervised or unsupervised methods [10].
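The per-unit computation described here — each feature-map value depends on a local receptive field through a shared filter, followed by subsampling — can be sketched in NumPy. The 8×8 input, 3×3 filter, and 2×2 pooling sizes are illustrative assumptions. (As is common in CNN practice, the "convolution" below is actually cross-correlation, i.e., the filter is not flipped.)

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D convolution: each output unit sees one local receptive
    field of the input through the same shared filter weights."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def subsample_avg(fmap, p=2):
    """Subsampling layer: average pixels inside each p x p neighbourhood."""
    h, w = fmap.shape[0] // p * p, fmap.shape[1] // p * p
    return fmap[:h, :w].reshape(h // p, p, w // p, p).mean(axis=(1, 3))

image = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 input
kernel = np.array([[1., 0., -1.]] * 3)             # vertical-edge filter
fmap = conv2d_valid(image, kernel)                 # 6x6 feature map
pooled = subsample_avg(fmap)                       # 3x3 after 2x2 averaging
```

Repeating the convolution/subsampling pair with several filter banks per layer yields the hierarchical feature representation described in the text.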

5 Applications Deep learning is commonly applied to computer vision, speech recognition, and NLP. These are non-linear classification problems where the inputs are highly hierarchical in nature. In 2011, the Google Brain project created a neural network trained with deep learning algorithms, which recognized high-level concepts, like cats, after watching only YouTube videos and without being told what a "cat" is. Facebook is building solutions using deep learning expertise to better identify faces and objects in the photos and videos uploaded to Facebook every day. Another example of deep learning in action is voice recognition like Google Now and Apple's Siri. According to Google, the voice error rate in the new version of Android stands 25% lower than in previous versions of the software after including insights from deep learning. Another emerging area of application is natural language processing, because the possibility of understanding the meaning of the text that people type or say is important for providing better user interfaces, advertisements, and posts. Learning from text, audio, and video is spreading into other areas of deep learning and starting to be embraced by research communities including speech processing, natural language processing, vision, machine learning, information retrieval, cognitive science, artificial intelligence, and knowledge management. The unprecedented growth of data in recent years has led to a surge of interest in effective and scalable parallel algorithms for training deep models. The use of massive computing power to speed up the training process has shown potential in big data deep learning. Many CPU cores can be used to scale up DBNs, with each core dealing with a subset of the training data.
Such parallel implementations can help address these scaling challenges.


6 Challenges Many problems tackled with deep networks require enough examples to fit the parameters of a complex model, which is difficult. Training on insufficient data additionally results in overfitting, because of the high expressive power of deep networks. Training a shallow network with one hidden layer using supervised learning mostly results in the parameters converging to reasonable values, but training a deep network this way ends up in bad local optima. When using backpropagation to compute the derivatives, the gradients that are propagated backward from the output layer to the earlier layers of the network rapidly diminish in magnitude as the depth of the network increases. The weights of the earlier layers change slowly when using gradient descent, and the earlier layers fail to learn much, leading to the "diffusion of gradients" (vanishing gradients). Another related challenge is data sparsity and noisy labels, as a majority of the data may not be labeled, or where labeled, the labels may be noisy. Advanced deep learning methods are required to deal with noisy data and to tolerate some messiness.
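The diminishing backward gradients described above can be observed numerically: in a chain of sigmoid layers the backpropagated signal is multiplied at every layer by the local Jacobian (sigmoid slope, at most 0.25, times the weights), so its magnitude shrinks toward the early layers. The depth, width, and weight scale below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

depth, width = 10, 32
Ws = [rng.normal(0, 0.1, size=(width, width)) for _ in range(depth)]

# forward pass, keeping pre-activations for the backward pass
a = rng.random(width)
zs, acts = [], [a]
for W in Ws:
    z = acts[-1] @ W
    zs.append(z)
    acts.append(sigmoid(z))

# backward pass: propagate a unit error signal toward the input
grad = np.ones(width)
grad_norms = []
for W, z in zip(reversed(Ws), reversed(zs)):
    s = sigmoid(z)
    grad = (grad * s * (1 - s)) @ W.T   # multiply by the local Jacobian
    grad_norms.append(np.linalg.norm(grad))

grad_norms = grad_norms[::-1]   # index 0 = earliest layer
```

With these settings the gradient norm at the earliest layer is many orders of magnitude smaller than near the output, which is why layer-wise pre-training or careful initialization is needed before backpropagation can tune the whole stack.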

7 Conclusion High-performance computing infrastructure together with theoretically sound parallel learning algorithms and novel architectures is expected to build the future deep learning framework [9]. As there is continuous growth in computer memory and computational power through parallel or distributed computing environments, further research and effort on addressing issues related to computation and communication management are required for scaling up to very large datasets. There will be difficulties associated with reasoning and inference over complex, hierarchical relationships and knowledge sources involving diverse entities and semantic concepts. In the coming years, solutions addressing the scalability, reliability, and adaptability of unsupervised learning models will take center stage. The research challenges presented here are timely, and will bring ample opportunities for deep learning, yielding significant advances in science, healthcare, and business [11].

References
1. C. Tataru, A. Shenoyas, Deep learning for abnormality detection in chest X-ray images, in IEEE Conference on Deep Learning, 2017
2. M. Minsky, S. Papert, Perceptron: An Introduction to Computational Geometry (1969)
3. G.E. Hinton, S. Osindero, Y. Teh, A fast learning algorithm for deep belief nets. Neural Comput. 18, 1527–1554 (2006)


4. Y. Bengio, P. Lamblin, D. Popovici, H. Larochelle, Greedy layer-wise training of deep networks, in ed. by J. Platt et al., Advances in Neural Information Processing Systems 19 (NIPS 2006) (MIT Press, 2007), pp. 153–160
5. M. Ranzato, C. Poultney, S. Chopra, Y. LeCun, Efficient learning of sparse representations with an energy-based model, in ed. by J. Platt et al., Advances in Neural Information Processing Systems (NIPS 2006) (MIT Press, 2007)
6. L. Deng, X. He, J. Gao, Deep stacking networks for information retrieval, in Acoustics, Speech and Signal Processing, 2013
7. S. Xu, H. Wu, R. Bie, Anomaly detection on chest X-rays with image-based deep learning
8. V. Golovko, A. Kroshchanka, U. Rubanau, S. Jankowski, A fast learning algorithm for deep belief nets
9. B. Schölkopf, J. Platt, T. Hofmann, Efficient learning of sparse representations with an energy-based model
10. D. Rueda-Plata, R. Ramos-Pollán, F.A. González, Supervised greedy layer-wise training for deep convolutional networks with small datasets
11. L. Oneta, N. Navarin, A. Sperduti, D. Anguita, Recent advances in big data and deep learning

Mathematical Modeling and Simulation of DC-DC Converters Using State-Space Approach Piyush Sharma, Dheeraj Kumar Dhaked, and Ashok Kumar Sharma

Abstract The power electronics converters are used to obtain the desired output power at the desired voltage, current, and power level for numerous applications. The DC-DC Converters (DDC's) are used to alter the DC voltage level of a power supply for the desired application. The range of output power can be between milliwatts and megawatts, high voltage AC (HVAC), or high voltage DC (HVDC), used for various applications like mobile chargers, battery chargers, and high-scale industrial applications. This paper presents the model realization of all these converters, and the simulation is performed using MATLAB Simulink with state-space analysis (SSA) modeling. All three DDC's [Boost, Buck and Buck-Boost Converter (BBC)] were realized with the SSA technique and also realized in MATLAB Simulink for analysis. These converters are modeled according to their circuit topology, and a detailed explanation is given for the state-space matrices. The state-space results are compared with the circuitry model, showing less tolerance for each converter, and the simulation computation time has also improved in the state-space modeling approach w.r.t. the circuitry model. Keywords DC-DC converter · Boost converter · Buck converter · Buck-boost converter · State-space approach

1 Introduction The converters in power electronics are combinations of electronic circuits that are used for conversion, control, and conditioning of electrical power. The output power range can be between milliwatts and megawatts, HVDC or HVAC, used for mobile chargers, battery chargers, and high-scale industrial applications. The reliability of these converters has become a key constraint as it is related to various P. Sharma (B) · D. K. Dhaked · A. K. Sharma Rajasthan Technical University, Kota, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_2



P. Sharma et al.

applications. The total efficiency of these power electronics converters should be analyzed and revisited for enhancement [1–4], first on account of the financial and ecological value of wasted electrical power, and furthermore because of the cost of the energy dissipated. Indeed, even a small improvement in converter power efficiency means improved profit on the investment in the electronics market industry. DDC's are widely utilized because they have the most straightforward power electronic arrangement, which changes one level of voltage into another by the control action of power electronic switches. These converters are getting more consideration in numerous industrial applications like power supplies for personal computers, office equipment, appliance control, telecommunication equipment, DC motor drives, automotive, aircraft, lighting, and so on. It is therefore extremely important to oversee the power flow control to accomplish high efficiency and the desired operation [5–8]. DDC's are one of the major important subject matters of the recent period, in which power electronics is gaining importance. This section mainly gives a view of the rudimentary principle of all types of DDC's. In our learning, power delivery, modeling, and simulation are ordinarily used to help in the understanding and comprehension of DDC's working procedures and principles. There are numerous approaches to the simulation and modeling of DDC's, which include hardware, mathematical implementation, control transfer functions, and the SSA [9–11]. Numerous software packages exist in the market for modeling and analysis of electrical networks, like PLECS, PSim, PSCAD, and MATLAB Simulink, and each has its own value. In this manuscript, MATLAB was selected as the simulation and modeling platform for the SSA analysis of DDC's for the subsequent reasons.
The matrix-based SSA is used on the Simulink and MATLAB platform for result analysis. This paper describes the technique of developing DDC models with result analysis; the state-space analysis results are then compared with the circuitry model, showing less tolerance for each converter, and the computation time has also improved in SSA w.r.t. the circuitry model [12–14]. This paper is organized as follows: Sect. 1 introduces the converters and their types, Sect. 2 describes the mathematical modeling of the DDC's, and Sect. 3 depicts the Simulink modeling of the DDC's. Section 4 discusses and analyses the results, with the conclusion in Sect. 5.

2 Mathematical Modeling of DC-DC Converters The converter is usually utilized in a DC power output arrangement to alter the output voltage by altering the value of the duty ratio, e.g., to uphold the solar maximum operating point. DDC's are broadly classified into two types, i.e., isolated and non-isolated DDC's [2–6]. The DDC's with all these converter topologies are modeled using the state-space modeling method in the following sections. MATLAB

Mathematical Modeling and Simulation of DC-DC Converters …


Simulink was selected as the modeling and test platform. Fundamentally, the state-space model is represented by Eqs. 1 and 2, where A, B, C, D are the system matrices, x' is the derivative of the state variable, u is the input, and y is the output [1].

x' = A·x + B·u    (1)

y = C·x + D·u    (2)

2.1 Buck Converter The average output voltage obtained is always lower than the DC input voltage supplied to this topology of DDC's. These converters are used to step down the voltage obtained from DC sources such as PV arrays and batteries. The topology that steps down the voltage level, generally recognized as the buck converter, is depicted in Fig. 1. When the control switch (S) is turned "ON" during mode 1, the diode (D) is reverse biased and the current flows through the inductor into the load. After a period t, the switch is turned off during mode 2 [3, 4]. The inductor current then freewheels through the diode during mode 2, which finishes at the opposite point of the controlling (ON/OFF) phase (Figs. 2 and 3). In this buck converter, the state variables are V_c and i_L. During the ON condition, V_c and i_L are defined by Eqs. (3) and (4):

Fig. 1 Circuit illustration of the buck converter


Fig. 2 Mode 1 (switch is closed)

Fig. 3 Mode 2 (switch is open)

V_c = u_1 − L·(di_L/dt)    (3)

i_L = C·(dV_c/dt) + V_c/R    (4)

By mapping the state variables as i_L = x1 and V_c = x2, Eqs. (3) and (4) can be written as:

x2 = u_1 − L·(dx1/dt)    (5)

x1 = C·(dx2/dt) + x2/R    (6)

Writing the derivatives as x1' and x2', Eqs. 5 and 6 become:

x2 = u_1 − L·x1'    (7)

x1 = C·x2' + x2/R    (8)


Now rearranging Eqs. 7 and 8 gives:

x1' = −x2/L + u_1/L    (9)

x2' = x1/C − x2/(R·C)    (10)

The state-space matrices A and B in (11) can be represented by

[x1'; x2'] = [[0, −1/L], [1/C, −1/(R·C)]]·[x1; x2] + [1/L; 0]·u_1    (11)

During the OFF condition, u_1 is 0 and the derivative x1' is shown in (12); the derivative x2' is the same as in Eq. 10. Correspondingly, the SSA matrices A and B of the buck converter in Eq. 13 are formulated from Eqs. 10 and 12.

x1' = −(1/L)·x2    (12)

[x1'; x2'] = [[0, −1/L], [1/C, −1/(R·C)]]·[x1; x2] + [0; 0]·u_1    (13)

Having derived the state-space A and B matrices for the 'ON' and 'OFF' states of the buck converter, the average A and B matrices are computed using the controlling duty ratio d. The averages of A and B are given in Eqs. 15 and 17, respectively.

A = A_ON·d + A_OFF·(1 − d)    (14)

A = [[0, −1/L], [1/C, −1/(R·C)]]·d + [[0, −1/L], [1/C, −1/(R·C)]]·(1 − d) = [[0, −1/L], [1/C, −1/(R·C)]]    (15)

B = B_ON·d + B_OFF·(1 − d)    (16)

B = [1/L; 0]·d + [0; 0]·(1 − d) = [d/L; 0]    (17)

For the buck converter model, the matrices of Eqs. 15 and 17 are substituted into Eqs. 1 and 2.




[x1'; x2'] = [[0, −1/L], [1/C, −1/(R·C)]]·[x1; x2] + [d/L; 0]·u_1    (18)

To obtain the output matrix, the SSA values of C and D are:

[y1; y2] = [[1, 0], [0, 1]]·[i_L; V_C] + [0; 0]·u_1    (19)
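The averaged buck model of Eq. 18 can be checked numerically by integrating x' = A·x + B·u_1 with a simple forward-Euler loop. The component values match those used in the paper's result discussion (L = 2 mH, C = 220 µF, R = 3 Ω, d = 0.25, u_1 = 12 V); the step size and simulation time are illustrative choices, and NumPy stands in for MATLAB.

```python
import numpy as np

# averaged buck model, Eq. 18: x = [iL, Vc]
L, C, R = 2e-3, 220e-6, 3.0
d, u1 = 0.25, 12.0
A = np.array([[0.0, -1.0 / L],
              [1.0 / C, -1.0 / (R * C)]])
B = np.array([d / L, 0.0])

x = np.zeros(2)          # start from rest
dt = 1e-6                # forward-Euler step
for _ in range(50_000):  # 50 ms, long enough for the transient to die out
    x = x + dt * (A @ x + B * u1)

iL, Vc = x               # expected steady state: Vc = d*u1 = 3 V, iL = Vc/R = 1 A
```

The settled values reproduce the theoretical buck output (3 V, 1 A) quoted later in the result discussion.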

2.2 Boost Converter The average output voltage obtained is always greater than the input voltage supplied to this kind of DDC. Consequently, this category of converter may be used in an MPPT arrangement where the obtained output voltage needs to be larger than the given input voltage, and likewise in a grid-integrated arrangement where the boost converter upholds a high output voltage even if the PV array output voltage drops to a low value. The step-up DDC, usually recognized as a boost converter, is depicted in Fig. 4. During mode 1, when the control switch turns on, the inductor current rises and energy is stored in the inductor [5, 6]. During mode 2, the control switch is off and the current flows through the diode D and the RC network and returns to the source. The inductor reverses its polarity and discharges its energy, with the voltage negative at the node linked to the diode and positive at the point coupled to the source. During the turn-'ON' condition, the inductor is charged through u_1, as shown in Eq. 20. No current flows in the RC circuit during this situation, where i_L is nil, as is clear from Eq. 21 (Figs. 5 and 6).

Fig. 4 Circuit illustration of the boost topology


Fig. 5 Mode 1 (switch is closed)

Fig. 6 Mode 2 (switch is open)

u_1 = L·(di_L/dt)    (20)

0 = C·(dV_c/dt) + V_c/R    (21)

Now rearranging (20) and (21) for the boost converter in the ON mode:

0 = u_1 − L·x1'    (22)

0 = C·x2' + x2/R    (23)

x1' = u_1/L    (24)

x2' = −x2/(R·C)    (25)






[x1'; x2'] = [[0, 0], [0, −1/(R·C)]]·[x1; x2] + [1/L; 0]·u_1    (26)

During the 'OFF' state, the boost converter has the same equivalent circuit as the buck converter during its 'ON' state, so the SSA matrices A and B of the boost converter in the 'OFF' state are the same as in Eq. 11. Correspondingly, the averaged SSA matrices A and B of the boost converter over the ON and OFF states are found using the controlling duty ratio d. The averages of A and B are given in Eqs. 28 and 30, respectively.

A = A_ON·d + A_OFF·(1 − d)    (27)

A = [[0, 0], [0, −1/(R·C)]]·d + [[0, −1/L], [1/C, −1/(R·C)]]·(1 − d) = [[0, −(1−d)/L], [(1−d)/C, −1/(R·C)]]    (28)

B = B_ON·d + B_OFF·(1 − d)    (29)

B = [1/L; 0]·d + [1/L; 0]·(1 − d) = [1/L; 0]    (30)

Equations 28 and 30 are substituted into Eqs. 1 and 2 to realize the representation of the boost topology of the converter; the output matrix is the same as Eq. 19.

[x1'; x2'] = [[0, −(1−d)/L], [(1−d)/C, −1/(R·C)]]·[x1; x2] + [1/L; 0]·u_1    (31)
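At steady state the derivatives vanish, so Eq. 31 gives x = −A⁻¹·B·u_1 directly. The check below uses the component values from the paper's result discussion (L = 2 mH, C = 220 µF, R = 3 Ω, d = 0.25, u_1 = 12 V), with NumPy as an illustrative substitute for MATLAB.

```python
import numpy as np

L, C, R = 2e-3, 220e-6, 3.0
d, u1 = 0.25, 12.0

# averaged boost model, Eq. 31: x = [iL, Vc]
A = np.array([[0.0, -(1 - d) / L],
              [(1 - d) / C, -1.0 / (R * C)]])
B = np.array([1.0 / L, 0.0])

iL, Vc = np.linalg.solve(A, -B * u1)   # steady state: 0 = A x + B u1
# Vc = u1/(1-d) = 16 V, iL = Vc/(R*(1-d)) = 7.11 A
```

These values (16 V, about 7.11 A) agree with the SSA simulation results reported later in the result discussion.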

2.3 Buck-Boost Converter This category of converter combines the buck and the boost topologies: the voltage level can be increased as in a boost converter or decreased as in a buck converter, which suits a PV system that must always maintain its output voltage level. The MPPT techniques used in the present scenario differ in characteristics such as complexity, efficiency, convergence speed, cost, sensors required, and hardware execution [11–15] (Figs. 7, 8, and 9). During the 'ON' condition this converter is equivalent to the boost converter in its 'ON' state; consequently, the SSA matrices A and B for the BBC during the 'ON' condition are analogous to Eq. 26. During the 'OFF' state it is analogous to the buck converter in its 'OFF' position where u_1 is zero, but its output quantities V_C and i_L are of opposite polarity because the inductor discharges, as depicted in Eqs. 32 and 33, respectively.


Fig. 7 Circuit illustration of the BBC
Fig. 8 Mode 1 (switch is closed)
Fig. 9 Mode 2 (switch is open)


−V_C = −L·(di_L/dt)    (32)

−i_L = C·(dV_C/dt) + V_C/R    (33)

Now rearranging Eqs. 32 and 33 of the BBC during the 'OFF' mode:

x1' = (1/L)·x2    (34)

x2' = −(1/C)·x1 − x2/(R·C)    (35)

[x1'; x2'] = [[0, 1/L], [−1/C, −1/(R·C)]]·[x1; x2] + [0; 0]·u_1    (36)

Likewise, the averages of the SSA matrices A and B over the ON and OFF positions can be put together using the controlling duty ratio d. The averages of A and B are given in Eqs. 38 and 40, respectively.

A = A_ON·d + A_OFF·(1 − d)    (37)

A = [[0, 0], [0, −1/(R·C)]]·d + [[0, 1/L], [−1/C, −1/(R·C)]]·(1 − d) = [[0, (1−d)/L], [−(1−d)/C, −1/(R·C)]]    (38)

B = B_ON·d + B_OFF·(1 − d)    (39)

B = [1/L; 0]·d + [0; 0]·(1 − d) = [d/L; 0]    (40)

The average matrices of Eqs. 38 and 40 are substituted into Eqs. 1 and 2 to complete the buck-boost converter model; the output matrix is the same as Eq. 19.

[x1'; x2'] = [[0, (1−d)/L], [−(1−d)/C, −1/(R·C)]]·[x1; x2] + [d/L; 0]·u_1    (41)


3 DC-DC Converter Simulink Modeling The converters were implemented in MATLAB Simulink as depicted in Figs. 10, 13, and 16, respectively. All the DDC's are realized and analyzed with both SSA and circuitry models [7]. The state-space model of the desired converter is realized by entering the state-space parameters, the A, B, C, D matrices, into the block parameters as given in Fig. 5. The modeling is realized by entering Eq. 18 into matrices A and B for the buck converter, Eq. 31 for the boost converter, and Eq. 41 for the BBC; matrices C and D from Eq. 19 are the same for all categories of DDC's. All converters are simulated in MATLAB using the blocks given in the Simulink library, where a power electronic switch, a diode, an inductor, a capacitor, and a load are connected in the described manner to form each distinct type of DDC. The switch is turned 'ON' and 'OFF' to change the duty ratio so as to buck or boost the DC voltage level. The formulation is shown for the buck converter in Fig. 10, for the boost converter in Fig. 13, and for the BBC in Fig. 16.

3.1 Buck Converter Figure 11a, b exhibits the output voltage and current waveforms of the buck converter with the circuitry model, and Fig. 12a, b exhibits the SSA output voltage and current waveforms of the same converter.

Fig. 10 Simulink model of buck converter


Fig. 11 a Output voltage, b current waveform of the Buck converter

3.2 Boost Converter Figure 14a, b depicts the output voltage and current waveform of the boost converter, respectively with circuitry model and Fig. 15a, b shows the SSA voltage and current output waveforms of the same converter topology, respectively.


Fig. 12 a SSA output voltage, b current waveform of the buck converter

3.3 Buck-Boost Converter Figure 17a, b illustrates the output voltage and current waveforms of the BBC with the circuitry model, and Fig. 18a, b shows the SSA output voltage and current waveforms of the same converter topology.

4 Result Discussion For the practical manifestation of the different DDC's, the following parameters were used: L = 2 mH, C = 220 µF, R = 3 Ω, switching frequency = 10 kHz, duty ratio = 25%, and input voltage = 12 V. The simulation results of the circuitry and state-space models are evaluated comparatively.


Fig. 13 Simulink model of boost converter

With reference to the above component parameters, the results for all three converters are discussed here. The theoretical output voltage of the buck converter is 3 V; the circuitry model gives 2.386 V and 0.795 A. When the same circuit is analyzed in state-space, the voltage and current are near the theoretical values, i.e., 2.998 V and 0.997 A, and the computational time is also lower in state-space than for the circuitry model. The theoretical output voltage of the boost converter is 16 V; the circuitry model gives 15.07 V and 5.022 A, while the state-space analysis gives values near the theoretical ones, i.e., 16 V and 7.113 A, again with lower computational time. The working of the circuitry and state-space models for all these converters is comparatively recapitulated in Fig. 19. It can be seen from Figs. 11, 12, 14, 15, 17, and 18a, b that the voltage and current waveforms show oscillations in the circuitry results which die out in the SSA results. The theoretical output voltage of the BBC is −4 V; the circuitry model gives −3.173 V and 1.384 A, while the state-space analysis gives values near the theoretical ones, i.e., −3.999 V and 1.778 A, again with lower computational time than the circuitry model. From the above discussion, both the tolerance and the computational time are lower in SSA; the simulation computation time is improved in SSA w.r.t. the circuitry model (Fig. 19).
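The theoretical values quoted above follow from the ideal steady-state conversion ratios of the three converters: Vout = d·Vin for the buck, Vin/(1 − d) for the boost, and −d·Vin/(1 − d) for the buck-boost. A quick arithmetic check with the simulation parameters (d = 0.25, Vin = 12 V):

```python
d, Vin = 0.25, 12.0

buck = d * Vin                    # ideal buck ratio      -> 3.0 V
boost = Vin / (1 - d)             # ideal boost ratio     -> 16.0 V
buck_boost = -d * Vin / (1 - d)   # ideal buck-boost ratio -> -4.0 V
```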


Fig. 14 a Output voltage, b current waveform of the Boost converter

5 Conclusion The modeling, realization, and simulation of all these converters using MATLAB Simulink and SSA modeling have been analyzed. All three DDC's were developed using the SSA modeling approach and also executed in MATLAB Simulink. The SSA results were compared with the circuitry model, showing comparatively less tolerance for each converter, and the simulation computation time also improved in the state-space modeling approach w.r.t. the circuitry model.


Fig. 15 a SSA output voltage, b current waveform of the Boost converter

Fig. 16 Simulink model of BBC



Fig. 17 a Output voltage, b current waveform of BBC


Fig. 18 a SSA output voltage, b current waveform of BBC

Fig. 19 Performance comparison of DDC's circuitry model with SSA in tabular form


References
1. R.H.G. Tan, M.Y.W. Teow, A comprehensive modeling, simulation and computational implementation of buck converter using MATLAB/simulink, in 2014 IEEE Conference on Energy Conversion (CENCON), Johor Bahru, Malaysia (2014)
2. R.A. Kordkheili, M. Yazdani-Asrami, A.M. Sayidi, Making DDC's easy to understand for undergraduate students, in IEEE Conference on Open Systems (ICOS), Kuala Lumpur, Malaysia (2010)
3. L.S. Patil, K.D. Patil, A.G. Thosar, The role of computer modeling and simulation in power electronics education, in IEEE 2nd International Conference on Emerging Trends in Engineering and Technology (ICETET), Nagpur, India (2009)
4. C.A. Canesin, F.A.S. Goncalves, L.P. Sampaio, Simulation tools of DC-DC converters for power electronics education, in 13th European Conference on Power Electronics and Applications (EPE), Barcelona, Spain (2009)
5. I.H. Baciu, I. Ciocan, S. Lungu, Modeling transfer function for boost power converter, in 30th International Spring Seminar on Electronics Technology, pp. 541–544 (2007)
6. A.W.N. Husna, S.F. Siraj, M.Z. Ab Muin, Modeling of DC-DC converter for solar energy system applications, in IEEE Symposium on Computers & Informatics, Malaysia (2012)
7. I. Batarseh, D.A. Kemnitz, Undergraduate education in power electronics, in Southcon/94, pp. 207–213 (1994)
8. D.K. Dhaked, S. Saini, P. Sharma, Analysis of different converters for reduced total harmonic distortion and improved power factor (SPIN, Noida, India, 2018)
9. D.K. Dhaked, Y. Gopal, D. Birla, Battery charging optimization of solar energy based telecom sites in India. Eng. Technol. Appl. Sci. Res. 9(6), 5041–5046 (2019)
10. D.K. Dhaked, M. Lalwani, A comprehensive review on a D-facts controller: enhanced power flow controller (EPFC). Int. J. Adv. Eng. Technol. (IJAET) 10(1), 84–92 (2017)
11. S. Saini, P. Sharma, D.K. Dhaked, L.K. Tripathi, Power factor correction using bridgeless boost topology. Int. J. Adv. Eng. Res. Sci. (IJAERS) 4(4), 209–215 (2017)
12. D.K. Dhaked, M. Lalwani, Modeling and analysis of a D-facts device: enhanced power flow controller. i-manager's J. Electr. Eng. 11(1), 41–49 (2017)
13. V. Kumar, D.K. Dhaked, A. Jaiswal, Y. Gopal, Comparative study of enhanced power flow controller and TCSC. Int. J. App. Eng. Res. (IJAER) 13(14), 11625–11631 (2018)
14. Reena, D.K. Dhaked, Y. Gopal, J. Mishra, R.K. Vyas, Comparative analysis of cascaded multilevel inverter integration with PV system. Int. J. Res. Anal. Rev. (IJRAR) 6(1) (2019)
15. Y. Gopal, K. Kumar, D. Birla, M. Lalwani, Banes and boons of perturb & observe, incremental conductance and modified Regula Falsi methods for sustainable PV energy generation. J. Power Technol. 97(1), 35–43 (2017)

Detection of Forgery in the JPEG Images Using Forward Quantization Noise Method Satish Pratapur and D. C. Shubangi

Abstract In this work, the presence of forgery was detected using the forward quantization noise method. The threshold required to achieve maximum sensitivity, specificity, and precision was derived for JPEG images. A seam-carving dataset with a quality factor of 75% was used to demonstrate the method. The threshold was varied from 0.005 to 0.0005 and the corresponding maximum sensitivity, specificity, and precision were estimated. It has been demonstrated that a threshold of 0.0005 yields the highest maximum sensitivity, specificity, and precision. Keywords Image forgery detection · JPEG · Forward quantization noise

1 Introduction Image forgery is becoming a threat to the veracity of image contents in web portals and social media networks. There has been a huge undesired growth in image forgery of late, leading to the spread of false news and thereby creating false perceptions in society. Forgery is promoted by some anti-social elements, and special software tools are being developed and used for this purpose. Forgery of an image basically deals with alteration of the contents of the image. Image compression is one of the techniques used to forge an image. An image compression technique can be lossy or lossless. For example, JPEG is one of the popular techniques used in image compression; JPEG compression is a lossy compression technique. A forgery can be carried out on an image using JPEG compression methods. Usually, an image is altered, then subjected to lossy compression, and then recovered back from compression. In such a scenario, when an image is regenerated, it is difficult to identify the tampered regions of the image. This S. Pratapur (B) · D. C. Shubangi VTU Regional Office, Kalaburagi, Karnataka, India e-mail: [email protected] D. C. Shubangi e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_3


problem can be addressed by analyzing the history of the JPEG compression [1, 2]. By studying this history in detail using mathematical tools, it is possible to determine to what extent the image was tampered. The extent of image compression can be assessed using mathematical methods [3, 4]; in fact, it is also possible to determine whether the image was compressed more than once. However, these methods are not sufficient for forgery detection, as high-quality compressions are being performed by professional hackers and manipulators. Especially when images that were compressed with high-quality JPEG are decompressed, it becomes difficult, though not impossible, to find whether there was any forgery or tampering of the original image. When the compression is high-quality JPEG, the traces can be found only in a few high-frequency discrete cosine transform (DCT) coefficients. When the image is decompressed back into the spatial domain, these traces can be observed, most easily when the DCT coefficients are plotted as histograms.

The DCT values can be utilized to verify whether an image is compressed, decompressed or uncompressed [4]. The DCT values in the range of −2 to 2, and their absolute values, provide good information about the compression; for example, the quantity of DCT values that lie between −2 and 2 can be used to determine whether an image is uncompressed. To determine whether an image is decompressed or uncompressed, the DCT values are compared with a threshold; based on the quantity or percentage of DCT coefficients beyond the threshold, it is possible to verify whether the image was indeed decompressed or uncompressed. This method has a limitation: it considers only those DCT coefficients that are close to zero, and the image must be subjected to two quantization steps for verification. An improvement over the method of comparing DCT coefficients [4] was made by computing the variance of the DCT coefficients [5, 6].
The basis for this method is that if the image is uncompressed, the variance of the high-frequency DCT coefficients has smaller values, while decompressed images have larger values. Other methods to verify whether an image is decompressed or uncompressed use the JPEG grid position [7–11], quantization tables [12], the steps of quantization [13–17] and other cues [18–23]. In the current research, the threshold for the variance of the DCT coefficients is derived to determine whether the image has any tampered regions in it. Simulations are carried out on the Seam carving dataset. The regions of forgery have been identified and marked in the image. The threshold value has been varied for a quality factor of 75% and the accuracy metrics were analyzed.

In this paper, Sect. 1 is dedicated to the introduction of the problem and important developments in the area of detecting forgery using DCT coefficients. Section 2 deals with the mathematical background of the solution. Section 3 mainly focuses on the simulations of the proposed model on the Seam carving dataset; this section also deals with the analysis of the metrics. Finally, important conclusions are drawn in Sect. 4.


2 JPEG Compression-Forgery Detection in an Image

The forward quantization method [24] is used in this work to verify whether an image has any tampered region in it. The variance of the DCT coefficients is compared with a threshold. The quality factor of the JPEG compression has an influence on the DCT coefficients. When quantization is applied to the DCT coefficients, it removes some information; when the image is then subjected to the inverse DCT, the image obtained has some loss of information. This loss of information can be used as a measure to determine whether the image has any tampered regions. To determine the tampered regions, the original image is treated as a combination of 8 × 8 blocks, and DCT, quantization, dequantization and inverse DCT are applied on each of the 8 × 8 blocks. In a DCT block, there are AC components and a DC component. The first element in the DCT block, that is, DCT[1, 1], is the DC component; it is the average of all the pixel intensities in the spatial domain. All other components in the DCT block other than DCT[1, 1] are AC components. Figure 1 shows the flow of the image in the forward quantization. The loss of information is the main criterion to verify the decompressed or uncompressed images:

y = DCT1 − DCT2                                             (1)

y = DCT1 − [DCT1 / QUANT75] · QUANT75,  QUANT75 ∈ N          (2)

where
y — loss of information
DCT1 — first DCT matrix of coefficients
DCT2 — second DCT matrix of coefficients
QUANT75 — quantization matrix for a quality factor of 75%.

Fig. 1 Flow diagram of tamper detection in JPEG

Quantization noise [24] has a distribution and may be expressed as
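The loss-of-information computation of Eq. (2) can be sketched in code. Everything below is an illustrative stand-in: a 2 × 2 block instead of 8 × 8, made-up coefficients and quantization steps, and Python's round() in place of the [·] rounding operator; a real pipeline would take the coefficients and the quality-75 quantization matrix from a JPEG decoder.

```python
def forward_quantization_noise(dct_block, quant):
    """Eq. (2): y = DCT1 - [DCT1/QUANT75] * QUANT75, element-wise."""
    return [[y - round(y / q) * q
             for y, q in zip(row, qrow)]
            for row, qrow in zip(dct_block, quant)]

def variance(values):
    """Plain population variance of a flat list of noise values."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

# Toy 2x2 example: DCT coefficients and their quantization steps.
dct1 = [[103.4, -21.7], [14.2, 3.9]]
q75 = [[8, 6], [6, 10]]

z = forward_quantization_noise(dct1, q75)
sigma2 = variance([v for row in z for v in row])
```

Each noise value necessarily falls within ±half the corresponding quantization step, and the variance of these values is the quantity later compared with the threshold.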


f_y(s) = Σ_{k=−∞}^{∞} f_Y(k · QUANT75 + s),  s ∈ [−QUANT75/2, QUANT75/2]   (3)

where
f_y — distribution of the loss of information (Gaussian)
f_Y — distribution of the DCT coefficients (Laplacian).

Forward quantization noise can be written as

Z = Y − [Y]                                                 (4)

Since Y and [Y] have distributions, Z has a variance. Hence the variance of Z can be used to verify whether the image is decompressed or uncompressed:

Z = Uncompressed, if σ² > Thsld; Decompressed, if σ² ≤ Thsld   (5)

In the present work, experiments are conducted to determine the threshold value for the Seam carving dataset for the presence of forgery in JPEG images.

Algorithm:
Step 1: Read a JPEG image from the Seam Carving database.
Step 2: Define a quality factor and derive the quantization matrix using standard methods.
Step 3: For the first forward quantization, calculate the variance of the noise.
Step 4: For the second forward quantization, calculate the variance of the noise.
Step 5: Define a threshold value. The threshold value is usually dependent on the variances of the first and second quantization noise.
Step 6: Compare the variance of the noise of the first forward quantization with the threshold.
Step 7: Classify the image as untampered if the variance is less than or equal to the threshold; if it is more than the threshold, classify it as tampered.
Step 8: Read tampered JPEG images from the Seam Carving database.
Step 9: For the first forward quantization, calculate the variance of the noise.
Step 10: For the second forward quantization, calculate the variance of the noise.
Step 11: Define a threshold value. The threshold value is usually dependent on the variances of the first and second quantization noise.
Step 12: Compare the variance of the noise of the first forward quantization with the threshold.
Step 13: Classify the image as untampered if the variance is less than or equal to the threshold; if it is more than the threshold, classify it as tampered.
Step 14: Mark the DCT blocks that are tampered.


Step 15: Mark the regions of all DCT blocks identified as tampered.
Step 16: Vary the threshold from 0.0005 to 0.005.
Step 17: Print the confusion matrix.
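The classification in Steps 6-7 and 12-13 reduces to a per-block threshold test on the noise variance. The sketch below assumes the per-block variances have already been computed (Steps 3-4); the variances and the function name are illustrative only.

```python
def classify_blocks(block_noise_variances, threshold):
    """Step 7 / Step 13: variance <= threshold -> untampered, else tampered."""
    return ['untampered' if v <= threshold else 'tampered'
            for v in block_noise_variances]

# Three illustrative block variances, tested at a threshold of 0.0005.
variances = [0.0001, 0.002, 0.0004]
labels = classify_blocks(variances, 0.0005)
```

The blocks labeled 'tampered' are then marked on the image (Steps 14-15).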

3 Simulation Results

In this work, simulations are carried out on the Seam carving database. One hundred images are taken from the database from each of the tampered and untampered sets. The simulations are conducted to determine the percentage of forgery in the image. For the purpose of testing, each image is tested in both tampered and untampered conditions. Figure 2 shows three samples of untampered and tampered images from the Seam carving dataset. In each of Figs. 2.1, 2.2 and 2.3, the left-side image is the untampered image and the right-side image is the tampered image. The tampered portion in the tampered image is highlighted with a rectangular box after applying the proposed algorithm. In the simulations, the untampered images were detected as untampered (true positives) in 100% of the test cases, but the tampered images were detected as untampered in some cases. The percentage of forgery in the image is compared with a threshold in order to determine the optimal threshold that will ensure maximum specificity and precision. The threshold value is varied over 0.005, 0.002, 0.001, 0.0005 and 0.0001.

For a threshold value of 0.005, the confusion matrix is

Actual \ Predicted    Untampered    Tampered    Total
Untampered            100 (TP)      0 (FN)      100
Tampered              52 (FP)       48 (TN)     100

True Positives (TP) = 100
True Negatives (TN) = 48
False Positives (FP) = 52
False Negatives (FN) = 0
Total positive class = 100 (untampered)
Total negative class = 100 (tampered)

Sensitivity = TP/(TP + FN) = 100/(100 + 0) = 100%


Fig. 2 Three samples of untampered (left) and tampered (right) images from the Seam_Carving_Q75 dataset


Specificity = TN/(TN + FP) = 48/(48 + 52) = 48%
Precision = TP/(TP + FP) = 100/(100 + 52) = 66%

From the above results, it can be observed that the sensitivity is 100%, which means all untampered images were detected by the model as untampered in all the cases. This is a very good performance. In the case of tampered images, only 48 out of 100 were detected as tampered and the remaining 52 were detected as untampered. Hence the specificity is 48% and the precision is 66%.

For a threshold value of 0.002, the confusion matrix is

Actual \ Predicted    Untampered    Tampered    Total
Untampered            100 (TP)      0 (FN)      100
Tampered              23 (FP)       77 (TN)     100

True Positives (TP) = 100
True Negatives (TN) = 77
False Positives (FP) = 23
False Negatives (FN) = 0
Total positive class = 100 (untampered)
Total negative class = 100 (tampered)

Sensitivity = TP/(TP + FN) = 100/(100 + 0) = 100%
Specificity = TN/(TN + FP) = 77/(77 + 23) = 77%
Precision = TP/(TP + FP) = 100/(100 + 23) = 81%

From the above results, it can be observed that the sensitivity is again 100%. In the case of tampered images, only 77 out of 100 were detected as tampered and the remaining


23 were detected as untampered. Hence the specificity is 77% and the precision is 81%. When the threshold was decreased from 0.005 to 0.002, the specificity and precision increased significantly.

For a threshold value of 0.001, the confusion matrix is

Actual \ Predicted    Untampered    Tampered    Total
Untampered            100 (TP)      0 (FN)      100
Tampered              8 (FP)        92 (TN)     100

True Positives (TP) = 100
True Negatives (TN) = 92
False Positives (FP) = 8
False Negatives (FN) = 0
Total positive class = 100 (untampered)
Total negative class = 100 (tampered)

Sensitivity = TP/(TP + FN) = 100/(100 + 0) = 100%
Specificity = TN/(TN + FP) = 92/(92 + 8) = 92%
Precision = TP/(TP + FP) = 100/(100 + 8) = 93%

For a threshold value of 0.0005, the confusion matrix is

Actual \ Predicted    Untampered    Tampered    Total
Untampered            100 (TP)      0 (FN)      100
Tampered              1 (FP)        99 (TN)     100

True Positives (TP) = 100


True Negatives (TN) = 99
False Positives (FP) = 1
False Negatives (FN) = 0
Total positive class = 100 (untampered)
Total negative class = 100 (tampered)

Sensitivity = TP/(TP + FN) = 100/(100 + 0) = 100%
Specificity = TN/(TN + FP) = 99/(99 + 1) = 99%
Precision = TP/(TP + FP) = 100/(100 + 1) = 99%

For a threshold value of 0.0001, the confusion matrix is

Actual \ Predicted    Untampered    Tampered    Total
Untampered            100 (TP)      0 (FN)      100
Tampered              1 (FP)        99 (TN)     100

True Positives (TP) = 100
True Negatives (TN) = 99
False Positives (FP) = 1
False Negatives (FN) = 0
Total positive class = 100 (untampered)
Total negative class = 100 (tampered)

Sensitivity = TP/(TP + FN) = 100/(100 + 0) = 100%
Specificity = TN/(TN + FP) = 99/(99 + 1) = 99%


Precision = TP/(TP + FP) = 100/(100 + 1) = 99%

When the threshold value was decreased from 0.002 to 0.001, 0.0005 and 0.0001, the sensitivity remained at 100% for untampered images. In the case of tampered images, finally 99 out of 100 were detected as tampered and the remaining 1 was detected as untampered. Hence the specificity and precision increased to 99%. Any further decrease in the threshold did not improve the specificity and precision, which is evident when the threshold was reduced from 0.0005 to 0.0001. Hence the threshold can be fixed at either 0.0005 or 0.0001 for this dataset. Figures 3, 4 and 5 show the change in sensitivity, specificity and precision for the 100 images of the Seam carving Q75 dataset. It can be observed from Fig. 3 that sensitivity remains constant at 100% at all the thresholds. Specificity changes from

Fig. 3 Sensitivity of the proposed method on the Seam_Carving_Q75 dataset

Fig. 4 Specificity of the proposed method on the Seam_Carving_Q75 dataset


Fig. 5 Precision of the proposed method on the Seam_Carving_Q75 dataset

48 to 99% when the threshold was decreased from 0.005 to 0.0005 and then to 0.0001, and it becomes asymptotic thereafter. Similarly, the precision increases from 66 to 99% when the threshold was decreased from 0.005 to 0.0005 and then to 0.0001, and it becomes asymptotic thereafter.
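The specificity and precision trends of Figs. 4 and 5 can be recomputed directly from the per-threshold false-positive counts reported in this section (TP = 100 and FN = 0 at every threshold):

```python
def metrics(tp, tn, fp, fn):
    """Sensitivity, specificity and precision from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp)

# False positives per threshold for the 100 tampered images, as reported.
false_positives = {0.005: 52, 0.002: 23, 0.001: 8, 0.0005: 1, 0.0001: 1}

trend = {thr: metrics(tp=100, tn=100 - fp, fp=fp, fn=0)
         for thr, fp in false_positives.items()}
```

The specificity entry rises from 0.48 at threshold 0.005 to 0.99 at 0.0005, and the 0.0001 entry is identical to the 0.0005 one, matching the asymptotic behaviour described above.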

4 Conclusion

In this work, the forward quantization noise technique [24] was enhanced and applied to detect the tampered and untampered portions of JPEG images. The method uses the quantization noise as a measure to determine whether any forgery exists in the image: the forward quantization noise is compared with a threshold to determine the forgery. To test the performance of the method, the Seam carving dataset with a quality factor of 75% was used. The threshold was decreased from 0.005 to 0.0001. It has been observed that the sensitivity remains at 100% in all cases, but the specificity was very low, at 48%, when the threshold was set at 0.005. When the threshold was reduced in decrements to 0.0005, the specificity increased significantly to 99%, and on further reducing the threshold to 0.0001, the specificity remained at 99%. The precision also increased from 66 to 99% when the threshold was reduced in decrements from 0.005 to 0.0005, with no change observed thereafter. Hence it is concluded that for the Seam carving dataset, the threshold may be fixed at 0.0005 to obtain the highest sensitivity, specificity and precision.


References

1. A. Piva, An overview on image forensics. ISRN Signal Process. 2013, 1–22 (2013)
2. M.C. Stamm, M. Wu, K.J.R. Liu, Information forensics: an overview of the first decade. IEEE Access 1, 167–200 (2013)
3. Z. Fan, R.L. de Queiroz, JPEG detection and quantizer estimation. IEEE Trans. Image Process. 12(2), 230–235 (2003)
4. W. Luo, J. Huang, G. Qiu, JPEG error analysis and its applications to digital image forensics. IEEE Trans. Inf. Forensics Secur. 5(3), 480–491 (2010)
5. S. Lai, R. Böhme, Countering counter-forensics: the case of JPEG compression, in Proceedings of the 13th International Conference on Information Hiding (Lecture Notes in Computer Science), vol. 6958, Prague, Czech Republic (2011), pp. 285–298
6. J. Fridrich, Feature-based steganalysis for JPEG images and its implications for future design of steganographic schemes, in Proceedings of the 6th International Workshop on Information Hiding, LNCS vol. 3200 (2005), pp. 67–81
7. W. Luo, Z. Qu, J. Huang, G. Qiu, A novel method for detecting cropped and recompressed image block, in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 2 (2007), pp. II-217–II-220
8. Z. Qu, W. Luo, J. Huang, A convolutive mixing model for shifted double JPEG compression with application to passive image authentication, in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Las Vegas, NV, USA (2008), pp. 1661–1664
9. Y.-L. Chen, C.-T. Hsu, Detecting recompression of JPEG images via periodicity analysis of compression artifacts for tampering detection. IEEE Trans. Inf. Forensics Secur. 6(2), 396–406 (2011)
10. Q. Liu, Detection of misaligned cropping and recompression with the same quantization matrix and relevant forgery, in Proceedings of the 3rd ACM International Workshop on Multimedia in Forensics and Intelligence (2011), pp. 25–30
11. T. Bianchi, A. Piva, F. Perez-Gonzalez, Near optimal detection of quantized signals and application to JPEG forensics, in Proceedings of the IEEE International Workshop on Information Forensics and Security, Guangzhou, China (2013), pp. 168–173
12. D. Fu, Y.Q. Shi, W. Su, A generalized Benford's law for JPEG coefficients and its applications in image forensics, in Proceedings of SPIE, Security, Steganography, and Watermarking of Multimedia Contents IX, vol. 6505 (2007), pp. 65051L-1–65051L-11
13. J. Fridrich, M. Goljan, R. Du, Steganalysis based on JPEG compatibility. Proc. SPIE, Multimedia Syst. Appl. IV 4518, 275–280 (2001)
14. R.N. Neelamani, R. de Queiroz, Z. Fan, S. Dash, R.G. Baraniuk, JPEG compression history estimation for color images. IEEE Trans. Image Process. 15(6), 1365–1378 (2006)
15. S. Ye, Q. Sun, E.C. Chang, Detecting digital image forgeries by measuring inconsistencies of blocking artifact, in Proceedings of the IEEE International Conference on Multimedia and Expo, Beijing, China (2007), pp. 12–15
16. T.C.-I. Lin, M.-K. Chang, Y.-L. Chen, A passive-blind forgery detection scheme based on content-adaptive quantization table estimation. IEEE Trans. Circuits Syst. Video Technol. 21(4), 421–434 (2011)
17. F. Galvan, G. Puglisi, A.R. Bruna, S. Battiato, First quantization matrix estimation from double compressed JPEG images. IEEE Trans. Inf. Forensics Secur. 9(8), 1299–1310 (2014)
18. J. Seuffert, M. Stamminger, C. Riess, Towards forensic exploitation of 3-D lighting environments in practice, in Proceedings of SICHERHEIT (2018), pp. 159–169
19. S.J. Nightingale, K.A. Wade, D.G. Watson, Can people identify original and manipulated photos of real-world scenes? Cogn. Res. Princ. Implic. 2(1), 30 (2017)
20. T.H. Thai, R. Cogranne, F. Retraint, T.-N.-C. Doan, JPEG quantization step estimation and its applications to digital image forensics. IEEE Trans. Inf. Forensics Secur. 12(1), 123–133 (2017)


21. D. Cozzolino, G. Poggi, L. Verdoliva, A new blind image splicing detector, in Proceedings of the IEEE International Workshop on Information Forensics and Security (2015), pp. 1–6
22. M. Huh, A. Liu, A. Owens, A.A. Efros, Fighting fake news: image splice detection via learned self-consistency, in Proceedings of the European Conference on Computer Vision (ECCV) (2018), pp. 101–117
23. O. Mayer, M.C. Stamm, Learned forensic source similarity for unknown camera models, in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (2018), pp. 2012–2016
24. B. Li, T.-T. Ng, X. Li, S. Tan, J. Huang, Revealing the trace of high-quality JPEG compression through quantization noise analysis. IEEE Trans. Inf. Forensics Secur. 10(3), 558–573 (2015)

Error-Controlling Technique in Wireless Communication

Neelesh Kumar Gupta, Narbada Prasad Gupta, Pradeep Gupta, and Kapil Kumar

Abstract During transmission of a message in wireless communication, errors may occur or the message may get scrambled by noise, so an error-controlling technique is needed; for this, the Golay code can be preferred. Further, the properties of this code can be utilized to devise a parallel decoder that controls errors. Such decoders are quicker and simpler than conventional error decoders. The code can be extended by adding one parity bit and is implemented in software, exploiting the better performance of high-speed digital circuits in wireless communication. This paper describes the method and presents the results thus obtained.

Keywords Error detection · Error correction · Binary Golay code · Adder · Weight measurement unit

1 Introduction

Nowadays, ATM systems and Digital Audio Broadcasting (DAB) require security and privacy in wireless communication, so a decoding approach with high-speed VLSI architectures must be utilized. The aim of this research paper is to offer a good quality of correct signals in communication with a low probability of error. The

N. K. Gupta
ECE, Ajay Kumar Garg Engineering College, Ghaziabad, India
e-mail: [email protected]
N. P. Gupta (B)
SEEE, Lovely Professional University, Phagwara, Punjab, India
e-mail: [email protected]
P. Gupta
Department of CSE, Ajay Kumar Garg Engineering College, Ghaziabad, India
e-mail: [email protected]
K. Kumar
Beacon Institute of Technology, Meerut, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_4


work mentioned in this paper addresses error correction of received digital data for efficient wireless communication. This paper is divided into five sections. The next section describes the Golay code and its operation; after that, the proposed methodology and architecture are elaborated. The fourth section presents simulation results with a discussion, and the last section concludes the paper.

2 Golay Code

The Golay code used here is an error-controlling code for correcting and detecting binary errors in transmitted digital data. It can be given as (23, 12, 7): the final codeword length is 23 bits, with 12 input data bits and a minimum distance of 7 between two codewords [1]. The performance of the code is close to Shannon's limit. A characteristic polynomial G(x) is selected to generate the check bits. Adding one parity check bit to every codeword in G23 yields the prolonged Golay code G24 [2], by which an approximately perfect [24, 12, 8] binary linear code is obtained [3]. Authors have also developed high-level parametric codes that are capable of generating the circuits autonomously when only the polynomial is given [4].

    A =
    0 1 1 0 1 1 1 1 1 1 1 1
    1 1 0 1 1 1 1 1 1 1 1 0
    1 0 1 1 1 1 1 1 1 1 0 1
    1 0 1 1 1 0 0 0 1 0 1 1
    1 1 1 1 0 0 0 1 0 1 1 0
    1 1 1 0 0 0 1 0 1 1 0 1
    1 1 0 0 0 1 0 1 1 0 1 1
    1 0 0 0 1 0 1 1 0 1 1 1
    1 0 0 1 0 1 1 0 1 1 1 0
    1 0 1 0 1 1 0 1 1 1 0 0
    1 1 0 1 1 0 1 1 1 0 0 0
    1 0 1 1 0 1 1 1 0 0 0 1

For example, the bit positions with check bits will be [5]:

Bit position: 1 2 3 4 5 6 7 8 9 10 11 12
Data:         0 0 1 1 1 0 0 1 0 1  0  0

C1 = XOR of bits (1, 3, 5, 7, 9, 11)
C2 = XOR of bits (2, 4, 6, 8, 10, 12)
C4 = XOR of bits (4, 5, 6, 7, 12)
C8 = XOR of bits (8, 9, 10, 11, 12)
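The check-bit XOR rules above can be verified mechanically. The sketch below applies them to the 12-bit example data word, with bit positions 1-indexed as in the text.

```python
bits = [0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0]   # the example data word

def xor_of(positions):
    """XOR of the bits at the given 1-indexed positions."""
    value = 0
    for p in positions:
        value ^= bits[p - 1]
    return value

c1 = xor_of([1, 3, 5, 7, 9, 11])
c2 = xor_of([2, 4, 6, 8, 10, 12])
c4 = xor_of([4, 5, 6, 7, 12])
c8 = xor_of([8, 9, 10, 11, 12])
```

Recomputing these check bits on a received word and comparing them with the transmitted ones localizes a single-bit error, as the worked example that follows illustrates.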


Bit position: 1 2 3 4 5 6 7 8 9 10 11 12
Received:     0 0 1 1 1 0 0 1 0 1  0  0    No error
Received:     1 0 1 1 1 0 0 1 0 1  0  0    Error in bit 1
Received:     0 0 1 1 0 0 0 1 0 1  0  0    Error in bit 5

It is obvious from this example that, with the three mentioned conditions on bit positions, we can detect zero or one-bit errors at different bit positions.

3 Proposed Methodology

The binary Golay code (G23) is characterized as (23, 12, 7), while the proposed Golay code (G24) is represented as (24, 12, 8), a prolonged form of G23 [6]. The proposed code has distance 8 and can therefore be used to correct up to three errors. The objective of this work is to detect and correct errors using the proposed Golay code with the cyclic redundancy check (CRC) algorithm. Figure 1 shows the flow of the proposed work step by step, with the coding and decoding of data used in communication (Fig. 2; Table 1). The common notation for this structure is [23, 12], consisting of the input, output and check bits available in the designed code.
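The extension of G23 to G24 described above is a single overall parity bit appended to each 23-bit codeword, which makes every extended codeword's weight even. A minimal sketch (the 23-bit word below is an arbitrary illustration, not necessarily a valid G23 codeword):

```python
def extend_with_parity(codeword23):
    """G23 -> G24: append an overall parity bit so the weight becomes even."""
    assert len(codeword23) == 23
    return codeword23 + [sum(codeword23) % 2]

# An illustrative 23-bit vector of weight 7.
word = [1] * 7 + [0] * 16
extended = extend_with_parity(word)
```

Raising the minimum distance from 7 to 8 in this way is what allows the extended code to detect one more error than it corrects.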

3.1 Encoder: One Example on Generation of Check Bit Is Discussed Here

Similarly, at the receiving end, it can be decoded easily (Fig. 3).

3.2 Architecture Figure 4 shows the architecture consisting of input and output combinations.

3.3 Unit of Weight Measurement

This unit is used to count the number of binary 1s in the binary data sequence provided as input (Fig. 5). The following evaluation parameters have been taken in this simulation.


Fig. 1 Flow of proposed technique

(a) LUTs: A lookup table reduces complex mathematical calculations and provides reduced processing time.
(b) Number of slices: The number of area units used in the circuit is called the number of slices.
(c) Number of IOBs: All input–output ports used in the circuit are collectively called the number of input–output buffers.
(d) Maximum combinational path delay: The maximum delay for signal propagation is called the maximum combinational path delay.

Step 1: Compute the syndrome of the received sequences.
Step 2: Compute W(S), the weight of the syndrome, with the help of the weight measurement unit.


Step 3: If W(S) is less than or equal to 3, then calculate the error pattern e = W · HT, where W is the received codeword.
Step 4: Finally, decode the codeword v = e + W.
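The structure of Steps 1-4 (syndrome, weight test, error pattern, correction) can be illustrated with a much smaller code. The sketch below substitutes a toy (7, 4) Hamming code for the Golay decoder, so the parity-check matrix H, the single-error assumption and the received word are stand-ins for illustration only, not the paper's decoder.

```python
H = [  # parity-check matrix of the (7, 4) Hamming code, one row per check
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(w):
    """Step 1: S = w . H^T (mod 2)."""
    return [sum(h * b for h, b in zip(row, w)) % 2 for row in H]

def decode(w):
    """Steps 2-4: read the error position from the syndrome and undo it."""
    s = syndrome(w)
    pos = s[0] * 1 + s[1] * 2 + s[2] * 4   # syndrome encodes the bit index
    if pos:                                # nonzero weight -> an error found
        w = list(w)
        w[pos - 1] ^= 1                    # flip the erroneous bit back
    return w
```

For example, flipping bit 5 of the all-zero codeword gives syndrome [1, 0, 1], i.e., position 5, and decode() restores the all-zero word; the Golay decoder follows the same syndrome-then-correct pattern with its larger H and the W(S) ≤ 3 test.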

Fig. 2 Transmitter and receiver with error correction capability

Table 1 Code word format

Check bit        Information bit
XXXX XXXX XXX    XXXX XXXXXXXX


Fig. 3 Encoder

Fig. 4 Architecture



Fig. 5 Structure of weight measurement
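A software analogue of the weight-measurement unit sketched in Fig. 5 is a Hamming-weight (popcount) function; the 11-bit syndrome below is illustrative only.

```python
def weight(bits):
    """Number of binary 1s in the input sequence (Hamming weight)."""
    return sum(bits)

# An illustrative 11-bit syndrome; its weight of 3 is exactly the boundary
# case used by the decoder's W(S) <= 3 test.
s = [0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1]
w = weight(s)
```

In hardware, the same result comes from the adder tree of Fig. 5 summing the input bits.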

4 Simulation Results

4.1 CRC Technique

The CRC Golay code encoder has been simulated on an FPGA using the Xilinx ISE tool (Figs. 6 and 7).

Fig. 6 View technology schematic

Fig. 7 Output waveform

Fig. 8 View schematic

4.2 Code Converter

See Figs. 8, 9, 10, 11 and 12.

Fig. 9 RTL view

Fig. 10 Output waveform



Fig. 11 Parameters comparison

Fig. 12 Delay comparison

5 Conclusion

Whenever binary data is communicated, it may be corrupted or scrambled by noise present in the communication channel. To avoid this, error-controlling codes can be employed, with additional data added to a given digital message to help detect and correct errors in wireless communication. The conventional Golay code for the wireless communication system had 3-bit error correction and 7-bit minimum distance. To improve the capability of error correction and detection, a modified Golay code can be developed for maintaining security in digital communication. The proposed code has 4-bit error correction and 8-bit error detection with the help of the cyclic redundancy check (CRC) algorithm. The number of slices, flip-flops, MCPD and LUTs of the proposed code can be reduced by using the CRC algorithm compared with the existing technique [7].


References

1. M.H. Jing, Y.C. Su, J.H. Chen, Y. Chang, High-speed low-complexity Golay decoder based on syndrome weight determination, in Proceedings of the 7th International Conference on Information, Communications and Signal Processing (ICICS), Dec 2009, pp. 1–4
2. P. Reviriego, L. Shanshan, L. Xiao, J.A. Maestro, An efficient single and double-adjacent error correcting parallel decoder for the (24, 12) extended Golay code. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 34(3), 1–4 (2016)
3. X.H. Peng, P.G. Farrell, On construction of the (24, 12, 8) Golay codes. IEEE, 15 Dec 2005
4. G. Campobello, G. Patané, M. Russo, Parallel CRC realization. IEEE Trans. Comput. 52(10) (2003)
5. W. Cao, High-speed parallel hard and soft-decision Golay decoder: algorithm and VLSI architecture, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 1996, vol. 6, pp. 3295–3297
6. S. Sarangi, S. Banerjee, Efficient hardware implementation of encoder and decoder for Golay code. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. (2014)
7. P. Bhoyar, Design of encoder and decoder for Golay code, in International Conference on Communication and Signal Processing, 6–8 April 2016

Cloud Computing: The New World of Technology

Arsheen Qureshi and Ashwani Sharma

Abstract In this era of technology, data storage and its management play an important role. Every organization holds loads of data, and storing and managing such bundles of data is a major issue. A convenient approach is required to handle it so that accessing and updating it become simple and easy. Apart from storage and management, freelancing is an emerging trend nowadays; it requires programming languages and their frameworks to cater to consumer interests. Yet the availability of all frameworks on a single system is an arduous task, which could be addressed by creating our own infrastructure for data storage, multiple-system usage and software development; but this is a costly and time-consuming task. For proper utilization of time, money and space, Cloud Computing comes into play these days. Therefore, in this module, we discuss Cloud Computing, its working, and the deployment and service models of Cloud Computing along with their pros and cons.

Keywords Cloud Computing · Service models · Deployment models and storage

1 Introduction

It is difficult to define Cloud Computing with an immutable definition, as it covers a wide range of tasks. In a generic way, Cloud Computing is the way of storing and processing data by providing various platforms with frameworks and software in a commodious and handy manner. This describes only a small portion of Cloud Computing, as it comprises a variety of other tasks as well.

The word 'Cloud' is used to show the indefinite boundary of networks. Another notion is that, in reality, we cannot see across clouds; the same thing applies to

A. Qureshi · A. Sharma (B)
Department of Computer Science and Engineering, Arya College of Engineering and Research Centre, Kukas, Jaipur, Rajasthan, India
e-mail: [email protected]
A. Qureshi
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_5


Fig. 1 Pictorial representation of tasks performed in Cloud Computing

Cloud Computing. We cannot access the data physically; instead, we can only use the resources, such as storage, platforms and software, offered to us by the Cloud [1] (Fig. 1).

2 Cloud Service Models

As mentioned earlier, the Cloud includes a number of services; therefore, according to the abstraction level and the delivered services, three classes can be formed [2].

2.1 Infrastructure as a Service (IaaS)

In this class, the Cloud provides us with all the resources directly, even though they are located at remote locations. This illustrates what the Cloud is. Basically, these resources are storage areas, processing capabilities such as servers, and networking infrastructure such as switches and routers (Fig. 2).

2.2 Platform as a Service (PaaS)

Platform as a Service generally provides an environment for developing and deploying software and applications with the help of the Cloud.

Cloud Computing: The New World of Technology

57

Fig. 2 Infrastructure as a Service

The usage of these services provides ease to users, as they do not have to worry about the development platform and programming languages. The Cloud itself provides users with all the mandatory tools and APIs required, so that they can directly develop their own software and applications on the Cloud (Fig. 3).

2.3 Software as a Service (SaaS)

In the Software as a Service category, various software packages are deployed on the Cloud, and users can get access by agreeing to the licensing terms and conditions. The Cloud owner deploys this software on the Cloud; this can be done by a third party as well (Fig. 4).

3 Cloud Deployment Models

Till now, we have studied all the major aspects of the Cloud. Here the deployment of the Cloud is explained, which is significant in Cloud Computing. This grouping is done according to the availability of resources, the expenses and the service controlling involved. The Cloud has four deployment models [3].


Fig. 3 Platform as a Service

Fig. 4 Software as a Service


3.1 Private Cloud

The usage of such Clouds is exclusively restricted to a specific organization. The responsible organization or a third party controls it, and it may exist within or outside the organization's premises. The pros are that the organization has total control over the Cloud, high security, and that modifications to Cloud services can be made with great ease. The cons are that, first, it is expensive; second, utilization of resources is low; and third, it is more prone to hackers' attacks.

4 Community Cloud These Clouds are shared among many organizations belonging to a specific community, as they share common goals and principles for using it. The control and location of a Community Cloud are similar to those of a Private Cloud. Its pros are low cost due to sharing, better utilization of resources and easy management of Cloud services. Its cons are that it is less secure due to sharing and that data distribution is more complicated.

4.1 Public Cloud As the name suggests, these Clouds can be used by anyone who adheres to their terms and conditions. They are managed by the vendor who created them. The pros of the Public Cloud are that there is no need for data management by the user, as data are managed by the Cloud software itself, and that these Clouds are less expensive. Its cons are the lower security and the privacy issues present in these Clouds.

4.2 Hybrid Cloud The Hybrid Cloud is a combination of the above three Clouds and inherits all of their properties; accordingly, its pros and cons are a mixture of theirs.


5 Future of Cloud Computing Even though Cloud Computing has a long history, it is still not widely adopted. Its use in companies will spread only when organizations feel secure with the notion that their data reside on the Cloud, i.e., somewhere other than their own servers. Cloud Computing is considered an agent of digital transformation: companies moving to the Cloud can accelerate change in their business. This can be done by breaking the data into different parts.

6 Conclusion Cloud Computing can be summed up as storing and processing data using the platforms and frameworks provided by the Cloud. On the basis of services, it can be classified into Infrastructure as a Service, Platform as a Service and Software as a Service. There are various deployment models of the Cloud, based on the resources available and the cost: Private Cloud, Community Cloud, Public Cloud and Hybrid Cloud, each with its own pros and cons. As Cloud Computing is an emerging technology, its adoption by organizations is still a major issue due to high cost. Hence, measures should be taken to reduce the expense so that more organizations can move to the Cloud.

References
1. K. Hwang, G.C. Fox, J.J. Dongarra, Distributed and Cloud Computing (Elsevier)
2. Cloud Computing and Its Complete Description. https://www.zdnet.com/googleamp/article/what-is-cloud-computing-everything-you-need-to-know-about-the-cloud
3. Cloud Computing Deployment Models. https://www.w3schools.in/cloud-computing/deployment-models-in-cloud-computing/

Questions Generation for Reading Comprehension Using Coherence Relations Anamika, Vibhakar Pathak, Vishal Shrivastava, and Akil Panday

Abstract The present scenario requires a large amount of digital content in the form of forums, directories, videos, classes, etc. Questions can thus play a significant role in digital media through quizzes. Question generation is the task of automatically producing questions from natural language and is one of the main fields of natural language human–computer interaction. We focus on generating fact-seeking questions using a knowledge base. We implemented a system that takes a reading comprehension text as input and outputs questions for the selected domain. Our system divides the QG process into three stages: (1) content selection: choosing the content for question generation; (2) question formation: transforming the content to obtain the question; (3) evaluation of the quality of the generated questions. The framework is implemented as an end-to-end system that expects a human to specify a topic. The resulting output is a set of natural language questions that follow the input domain. We show the effectiveness of our approach against the earlier Heilman and Smith (MH) method. Keywords Natural language generation · Question generation · Semantic role labeling · Templates · Self-directed learning

1 Introduction 1.1 Background Overview Questions can play a significant role in digital media through quizzes. A considerable amount of research has been invested in extracting factual knowledge from unstructured Web resources. These efforts resulted in the creation of knowledge bases, which provide this information in a machine-interpretable format.

Anamika (B) · V. Pathak · V. Shrivastava · A. Panday
Computer Science Department, Arya College of Engineering & IT, Jaipur, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_6

Given this topical diversity, there is great potential for a system that can exploit this knowledge for educational purposes. As part of the learning process, the system could generate questions on a certain topic that are adequate to the learner's information need and expertise level. By using automatically generated questions as a medium for knowledge acquisition, a novel utilization of knowledge bases could be created [1]. As stated above, the challenges we address along the way include generating the contents of a question, verbalizing these contents for humans and judging question difficulty. Correspondingly, our contributions fall into the following research areas:
• Question Generation: We propose a novel approach to generate a question with a unique answer, using semantic and feature-based information from the knowledge base.
• Query Verbalization: We elaborate on a pattern-based technique for verbalizing queries using lexical resources. The resulting natural language mimics the style of clues. To cater to verbalization variety, we expanded the standard set of paraphrases for relations and created a method to distinguish important types for an entity [2].
• Question Difficulty Estimation: We designed, implemented and evaluated a question difficulty classifier trained on data. The classifier's features are based on statistics computed from the knowledge base [3].

2 Related Work We reviewed various papers on automatic question generation. One describes a computer system, Ruminator, which learns by reflecting on the information it has acquired and by posing questions in order to derive new information. Ruminator takes simplified sentences as input in order to focus on question generation. The authors note that it is important to remove easy questions and to refine question strategies in order to avoid producing silly or obvious questions [4]. Zhang et al. [5] described a system for generating questions in the context of learning, which also comprises the NLP components of lexical processing, syntactic processing, logical form and generation. This system uses summarization as a preprocessing step for identifying the information to ask a question about. The authors note that selecting the questions created by the content QA generator is difficult. Lai [6] describes a system that generates factoid questions automatically from large text collections. User questions are matched against these pre-processed factoid questions in order to identify relevant answers in a question–answering system. Xinya Du et al. define the task of question generation as the automatic generation of questions from various input sources. Sources can be raw text, a database or some form of semantic representation. They further argue that the "goodness" of a question can only be determined by looking at the context the question was posed in [7].

Questions Generation for Reading Comprehension …


Unnat Jain et al. focus on removing words from a sentence to create fill-in-the-blank quizzes for language learning. For the removed words, they create distractors and evaluate them in terms of reliability and validity. A distractor has to be reliable, meaning that it cannot be replaced with the answer, thereby avoiding multiple correct answers to a question [8]. Manish Agarwal et al. deal with a technique for generating questions based on named entities (NE), temporal or position information and semantic features associated with terms in the input sentences [9]. The question recognizer module checks whether a specific question type can be generated from a clause in each input sentence and also identifies the possible cue phrase in that clause. The generator module replaces the cue expression with the query term and reorders the clause pieces to ensure grammatical correctness [10].

3 Proposed Question Generation Method The main goal of this work is to exploit the structured information in the knowledge base to design meaningful questions and answers that improve reading comprehension and learning capability. The task therefore comprises the selection of the question's content, i.e., which clues are contained in the question, and the question's answer. We chose a structured query representation as the preliminary formulation of the question. Using this representation of the query enables us to develop a method to express it in natural language, which is required for users to interpret the system's output.

3.1 Data Set Preparation We use reading comprehension data sets related to the history, science, news and article domains for training and testing. After retrieving the data set, we need different paragraphs in order to generate questions and to compare the MH baseline with our system.

3.2 Paragraph Selection We download articles from the corpus and classify them by their state: the number of sentences in an article must be more than 5, and we select articles with more than 7 sentences. When an article is very small, the likelihood of detecting coreference is poor, and when two entities are far apart, it is difficult to find a coreference link between them.
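The selection rule above can be sketched as a simple filter. This is an illustrative sketch only, not the authors' implementation: the function name `select_articles` is hypothetical, and the naive punctuation-based sentence split stands in for a proper sentence segmenter.

```python
import re

def select_articles(articles, min_sentences=5, preferred=7):
    """Filter raw-text articles by sentence count, as described above.

    Sentence splitting here is a naive period/question/exclamation split,
    used only for illustration.
    """
    def sentence_count(text):
        return len([s for s in re.split(r'[.!?]+', text) if s.strip()])

    # Articles must have more than `min_sentences` sentences at all.
    eligible = [a for a in articles if sentence_count(a) > min_sentences]
    # Prefer longer articles: coreference links are hard to detect
    # in very short texts. Fall back to all eligible articles otherwise.
    return [a for a in eligible if sentence_count(a) > preferred] or eligible

docs = ["One. Two. Three.",
        "S1. S2. S3. S4. S5. S6. S7. S8."]
print(len(select_articles(docs)))  # → 1 (the short article is filtered out)
```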

64

Anamika et al.

Table 1 Relation set

Relation | Nucleus (N), Satellite (S)
Performance (N, S) | Goal (N), Action (S)
Patternization (N, S) | Pattern (N), Concept or goal (S)
Execution (N, S) | Plan/agent (N), Accomplishment/goal (S)
Enablement (N, S) | Devices (N), Action (S)

3.3 Apply Rhetorical Structure Theory for Coreference Detection RST relations are characterized by three parameters: the nucleus, the satellite and the interaction between the nucleus and the satellite. The nucleus is an action, and the satellite describes this action. Here, the discourse graph associated with the document is input to the system, which in turn extracts all relevant nucleus-satellite pairs (Table 1). We discard a sentence when it contains the most representative entity, because a question generated from that type of sentence does not require multiple sentences to answer. Each pair is represented as the tuple relation(nucleus, satellite). Prior to applying any syntactic transformations on the text spans, we remove all leading and/or trailing conjunctions, adverbs and infinitive phrases from the text span. Further, if the span begins or ends with transition words or phrases such as "as a result" or "in addition to," we remove them as well.

3.4 Step 4: Text Span Identification We associate each text span with a type depending on its syntactic composition. The assignment of types to the text spans is independent of the coherence relations that hold between them (Table 2).

Table 2 Span types with relevant examples

Span type | Characteristic of span
Type 0 | A group of many sentences
Type 1 | One sentence, or a phrase or clause not beginning with a verb, but containing one
Type 2 | Phrase or clause beginning with a verb
Type 3 | Phrase or clause that does not contain a verb
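The decision table in Table 2 can be encoded directly. In this sketch the three syntactic predicates are assumed to be supplied by a parser (not implemented here); the function name is hypothetical.

```python
def span_type(num_sentences, begins_with_verb, contains_verb):
    """Map the syntactic facts of a text span to a type (0-3), per Table 2.

    In a real system, the three inputs would come from a syntactic parser;
    this function only encodes the decision table itself.
    """
    if num_sentences > 1:
        return 0  # Type 0: a group of many sentences
    if begins_with_verb:
        return 2  # Type 2: phrase or clause beginning with a verb
    if contains_verb:
        return 1  # Type 1: contains a verb but does not begin with one
    return 3      # Type 3: phrase or clause without any verb

print(span_type(1, False, True))  # → 1
```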


Table 3 Relation templates

Relation | Template Type 0 | Template Type 1 | Template Type 2 | Template Type 3
Performance (N, S) | [Nucleus]. What led to the start of the action? | Why [Nucleus]? | What [Nucleus]? | What caused [Nucleus]?
Patternization (N, S) | [Satellite]. What led to the formation of the concept? | Why [Satellite]? | What [Nucleus]? | What caused [Nucleus]?
Execution (N, S) | [Nucleus]. What led to the accomplishment? | Why [Satellite]? | What [Nucleus]? | What caused [Satellite]?
Enablement (N, S) | [Nucleus]. What led to the action? | Why [Nucleus]? | What [Nucleus]? | What caused [Satellite]?

3.5 Step 5: Text Span Syntax Transformations If the text span is of Type 1 or Type 2, we analyze its parse tree and perform a set of simple surface syntax transformations to convert it into a form suitable for QG. We first use a dependency parser to find the principal verb associated with the span, its part-of-speech tag and the noun or noun phrase it modifies. Then, according to the obtained information, we apply a set of syntactic transformations to alter the text. No syntactic transformations are applied to text spans of Type 0 or Type 3; we craft questions directly from spans of these types (Table 3).

3.6 Step 6: Question Generation We obtain a text form suitable for QG. A template is then applied to this text to formulate the final question. Table 3 defines these templates; the choice of template depends only on the relation holding between the spans, without considering the semantics or the meaning of the spans. This makes our system generic and thereby scalable to any domain.
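The template step can be sketched as a lookup keyed by (relation, span type). This is a minimal illustration, not the paper's code: only a toy subset of Table 3's templates is reproduced, and the capitalization helper is an assumption.

```python
# A toy subset of the (relation, span type) -> template table (cf. Table 3).
TEMPLATES = {
    ("Performance", 1): "Why {nucleus}?",
    ("Performance", 2): "What {nucleus}?",
    ("Performance", 3): "What caused {nucleus}?",
    ("Execution", 1): "Why {satellite}?",
    ("Enablement", 1): "Why {nucleus}?",
}

def generate_question(relation, span_type, nucleus, satellite=""):
    """Fill the template selected by relation and span type.

    Real spans would first pass the syntax transformations of Step 5;
    here the transformed text is passed in directly.
    """
    template = TEMPLATES.get((relation, span_type))
    if template is None:
        return None  # no template for this combination in the toy table
    q = template.format(nucleus=nucleus, satellite=satellite)
    return q[0].upper() + q[1:]

print(generate_question("Performance", 1, "the empire collapsed"))
# → Why the empire collapsed?
```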

4 Experimental Results Question Evaluation The first thing we noticed was the high percentage of grammatically and semantically correct questions. Intuitively, it certainly sounds risky to remove a priori uncertain parts of a sentence and inject them into predefined templates whose grammar


Table 4 Average scores for question evaluation

Metric | System | Performance | Patternization | Execution | Enablement | Average
Grammatical | MH | 0.95 | 0.94 | 0.91 | 0.87 | 0.915
Grammatical | QG | 0.92 | 0.94 | 0.91 | 0.90 | 0.925
Semantic | MH | 0.95 | 0.91 | 0.97 | 0.88 | 0.8923
Semantic | QG | 0.95 | 0.92 | 0.97 | 0.89 | 0.9325
Superfluous | MH | 0.84 | 0.81 | 0.77 | 0.82 | 0.81
Superfluous | QG | 0.90 | 0.82 | 0.82 | 0.83 | 0.8425
Question appropriateness | MH | 0.93 | 0.83 | 0.95 | 0.75 | 0.865
Question appropriateness | QG | 0.92 | 0.91 | 0.94 | 0.83 | 0.9

and semantics may or may not be compatible, but as we have seen, we can produce many grammatically and semantically correct questions with some very basic filters and modifiers [11]. To evaluate the quality of the generated questions, we used the set of criteria defined below. We considered and designed metrics that measure both the correctness and the difficulty of a question. All metrics use a two-point scale: a score of 1 indicates that the question passed the metric, and a score of 0 indicates that it did not (Table 4).

4.1 Grammatical Correctness of Questions This metric checks whether the generated question is syntactically correct; the semantics of the question are not taken into account [12].

4.2 Semantic Correctness of Questions Here we account for the meaning of the generated question and whether it makes sense to the reader. We assume that if a question is grammatically incorrect, it is also semantically incorrect. • Superfluous use of language: Generated questions may contain information not required by the student to arrive at the answer. • Question appropriateness: This metric judges whether the question is posed correctly or not.
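Averaging the two-point (0/1) judgments per metric, as done to produce Table 4, can be sketched as follows; the function name and the example scores are invented for illustration and are not from the paper.

```python
def metric_averages(judgments):
    """Average binary (0/1) human judgments per evaluation metric.

    `judgments` maps a metric name to a list of 0/1 scores,
    one score per generated question.
    """
    return {metric: sum(scores) / len(scores)
            for metric, scores in judgments.items()}

scores = {"grammatical": [1, 1, 1, 0], "semantic": [1, 0, 1, 1]}
print(metric_averages(scores))  # → {'grammatical': 0.75, 'semantic': 0.75}
```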


5 Results Discussion We tested our proposed QG strategy in a manner that was conscious of the intent behind producing the questions. Evaluation in prior QG projects has centered on acceptability. We have seen that both domain-specific and general-purpose models may have learning-value problems, so combining the two is essential. We compared our proposed QG strategy with the Heilman and Smith (MH) method (Fig. 1).

Fig. 1 Text spans syntax identification and transformations

Fig. 2 Grammatical correctness evaluation

Fig. 3 Semantic correctness evaluation

In Fig. 2, we compare the grammatical correctness of our proposed QG method and the MH method. The x-axis shows the type of relation (performance, patternization, execution, enablement) and the y-axis the score in the range 0 to 1. The graph values of our proposed method are higher than those of the MH method for grammatical correctness; comparing the averages, our QG method (0.925) is also higher than the MH method (0.915). In Fig. 3, we compare semantic correctness on the same axes. The values of our proposed method are again higher than those of the MH method; comparing the averages, our QG method (0.9325) exceeds the MH method (0.8935). In Fig. 4, we compare the superfluity of language of the two methods on the same axes. The values of our proposed method are higher than those of the MH method; comparing the averages, our QG method (0.8425) exceeds the MH method (0.8132).

Fig. 4 Superfluity of language evaluation

Fig. 5 Question appropriateness evaluation

In Fig. 5, we compare the question appropriateness of the two methods on the same axes. The values of our proposed method are higher than those of the MH method; comparing the averages, our QG method (0.90) exceeds the MH method (0.865).

6 Conclusion and Future Scope In this paper, we have presented a novel approach to generating questions from text that combines relation-specific, flexible templates with question categorization, and we have situated it with respect to current QG models by summarizing three emerging trends: multi-task learning, wider input modalities and deep question generation. Templates have also been used by other approaches to QG [13]. In our approach, we have begun experimenting with a multi-sentence QG method that uses designed templates, a glossary and discourse connectors. Inter-sentential discourse connectives, such as "for example," "therefore," "however" and "furthermore," provide an inexpensive and reasonably robust way to identify groups of sentences that we can and should use to generate questions [14]. Earlier work demonstrated a system that uses discourse connectors for multi-sentence QG, but that approach does not truly integrate multi-sentence content into questions: once the connective arguments are identified, syntactic transformations produce questions from one and only one of those arguments. The questions generated in a particular domain are at the level of understanding, and there is much more to explore in terms of algorithms and evaluation. Our approach generated a set of templates and candidate questions. Although more work can be done to improve the quality of generated questions, for example in general dialog management, no research has explored when machines should ask engaging dialog questions. Modeling question asking as an interactive and dynamic process can become an interesting topic ahead. QG with interface simulation in a dialog or suggestion framework has not yet been investigated, nor has adapting QG to user condition and knowledge, which would combine end-to-end QG with deep user modeling, much as in vision–image generation [15].

References
1. T. Desai, P. Dakle, D. Moldovan, Generating questions for reading comprehension using coherence relations, in The 5th Workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA@ACL) (2018), pp. 1–10
2. X. Du, C. Cardie, Harvesting paragraph-level question-answer pairs from Wikipedia, in Annual Meeting of the Association for Computational Linguistics (ACL) (2018), pp. 1907–1917
3. H. Guo, R. Pasunuru, M. Bansal, Soft layer-specific multi-task summarization with entailment and question generation, in Annual Meeting of the Association for Computational Linguistics (ACL) (2018), pp. 687–697
4. V. Kumar, K. Boorla, Y. Meena, G. Ramakrishnan, Y.-F. Li, Automating reading comprehension by generating question and answer pairs, in The Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD) (2018), pp. 335–348
5. S. Zhang, L. Qu, S. You, Z. Yang, J. Zhang, Automatic generation of grounded visual questions, in International Joint Conference on Artificial Intelligence (IJCAI) (2017), pp. 4235–4243
6. G. Lai, Q. Xie, H. Liu, Y. Yang, E.H. Hovy, RACE: large-scale reading comprehension dataset from examinations, in Conference on Empirical Methods in Natural Language Processing (EMNLP) (2017), pp. 785–794
7. X. Du, J. Shao, C. Cardie, Learning to ask: neural question generation for reading comprehension, in Annual Meeting of the Association for Computational Linguistics (ACL) (2017), pp. 1342–1352
8. U. Jain, Z. Zhang, A.G. Schwing, Creativity: generating diverse questions using variational autoencoders, in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 5415–5424
9. M.M. Khapra, D. Raghu, S. Joshi, S. Reddy, Generating natural language question-answer pairs from a knowledge graph using a RNN based question generation model, in Conference of the European Chapter of the Association for Computational Linguistics (EACL) (2017), pp. 376–385
10. M. Agarwal, R. Shah, P. Mannem, Automatic question generation using discourse cues, in Proceedings of the 6th Workshop on Innovative Use of NLP for Building Educational Applications, Association for Computational Linguistics (2011), pp. 1–9
11. M. Heilman, Automatic Factual Question Generation from Text. Language Technologies Institute, School of Computer Science, Carnegie Mellon University (2011), p. 195
12. H. Ali, Y. Chali, S.A. Hasan, Automation of question generation from sentences, in Proceedings of QG2010: The Third Workshop on Question Generation (2010), pp. 58–67
13. M. Heilman, N.A. Smith, Good question! Statistical ranking for question generation, in Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (2010), pp. 609–617


14. S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. Lawrence Zitnick, D. Parikh, VQA: visual question answering, in IEEE International Conference on Computer Vision (ICCV) (2015), pp. 2425–2433
15. M. Heilman, N.A. Smith, Extracting simplified statements for factual question generation, in Proceedings of QG2010: The Third Workshop on Question Generation (2010), pp. 11–20
16. Z. Fan, Z. Wei, S. Wang, Y. Liu, X. Huang, A reinforcement learning framework for natural question generation using bi-discriminators, in International Conference on Computational Linguistics (COLING) (2018), pp. 1763–1774
17. Y. Zhao, X. Ni, Y. Ding, Q. Ke, Paragraph-level neural question generation with maxout pointer and gated self-attention networks, in Conference on Empirical Methods in Natural Language Processing (EMNLP) (2018), pp. 3901–3910
18. Q. Zhou, N. Yang, F. Wei, C. Tan, H. Bao, M. Zhou, Neural question generation from text: a preliminary study, in CCF International Conference of Natural Language Processing and Chinese Computing (NLPCC) (2017), pp. 662–671
19. X. Du, C. Cardie, Identifying where to focus in reading comprehension for neural question generation, in Conference on Empirical Methods in Natural Language Processing (EMNLP) (2017), pp. 2067–2073

Segmentation of Brain Tumor from MRI Images Using Modified Morphological Novel Approach Harendra Singh and Rajeev Ratan

Abstract In this fast-changing modern era, with the demand for more precise health monitoring and effective treatment, medical imaging has become the backbone of medical diagnosis. The process of detecting the uncontrolled growth of tissue mass in brain magnetic resonance imaging (MRI) images is called brain tumor segmentation. The main aim of separating such tissues from brain MRI images is to visualize the size, shape and exact location of the tumor. This paper presents a fully automatic brain tumor segmentation approach based on modified thresholding and morphological operations, which segments the tumorous mass from brain MRI images with greater accuracy and less computation time. The implemented algorithm also helps to detect the stage of the tumor according to the tumor area. Mean, contrast, correlation, energy, skewness and homogeneity are calculated for a brain tumor MRI image using the discrete wavelet transform (DWT). Keywords Magnetic resonance image (MRI) · Brain tumor · Image filtering and segmentation · Discrete wavelet transformation

1 Introduction The human central nervous system is a very complex network based on two organs, the brain and the spinal cord. The human brain consists of approximately 100 billion nerve cells, known as neurons, interconnected through synapses and dendrites [1]. The brain and spinal cord both play a vital role in controlling the body and in sending or receiving message signals to or from all parts of the body. In healthy tissue, dead cells are replaced and new cells are generated in a controlled manner. Sometimes this control fails, and the result is frequently the generation of extra cells instead of one cell. A brain tumor is therefore a mass of tissue that forms due to the uncontrolled growth of tissue inside or outside the human brain. Brain tumors are classified as low-grade (benign) and high-grade (malignant) according to size and shape [2]. The cells of a low-grade tumor do not spread to other parts of the human body through the bloodstream because there is no mass; therefore, low-grade cells can be removed from any part of the body through minor surgery without damaging the surrounding tissue [3]. The cells of a high-grade tumor, in contrast, easily start to spread to other parts of the body through the bloodstream and then become a critical issue for health and a good life. If tumorous cells are diagnosed at an early stage, the chances of survival and a healthy life increase [4]. Nowadays, many techniques are available to segment the tumor, such as region growing and splitting, thresholding, morphological filtering, watershed segmentation, edge detection and clustering, but with thresholding combined with morphological operations the tumor is segmented in less time and with a higher degree of accuracy. For capturing brain images, positron emission tomography (PET), computed tomography (CT) and MRI scans are available, but MRI scans are used because they provide more information about the anatomical structure for clinical studies without any radiation effect.

H. Singh (B) · R. Ratan
Department of Electronics and Communication Engineering, MVN University, Palwal, Haryana, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_7

2 Proposed Methodology The proposed brain tumor segmentation method is based on four basic modules to confirm whether a patient's brain MRI scan contains tumorous cells or not [5]. The first module is MRI database acquisition, followed by the preprocessing module, which combines noise removal and image enhancement, and then the image segmentation module using the thresholding and morphological algorithm. The last module of the proposed methodology is feature extraction, including determination of the tumor area, shape and stage [6]. The flowchart of the proposed methodology is shown in Fig. 1.

2.1 MRI Database and Image Preprocessing In this module, magnetic resonance images (MRI) are collected from an open data source, with intensity values ranging from 0 to 255, i.e., 8-bit grayscale images of 255*255 pixels [7]. From the MRI database, a random sample of eight MRI brain tumor (benign and malignant) images is collected and then normalized, so that the intensity distribution over the whole image is uniform, which is more fruitful for feature extraction. A sample dataset of eight MRI images is shown in Fig. 2. Image preprocessing means enhancing the quality of the region of interest and removing errors from an image, so that the most important features can be easily extracted using feature extraction techniques [8]. In MATLAB,

Segmentation of Brain Tumor from MRI Images …


Fig. 1 Proposed methodology

Fig. 2 Sample dataset of brain MRI images. Courtesy https://www.kaggle.com/navoneel/brain-mri-images-for-brain-tumor-detection/data


Fig. 3 Preprocessed image with histogram

some operators, such as image filtering (2-D and 3-D filters), histogram equalization through the 'histeq' MATLAB command and contrast adjustment through the 'imadjust' MATLAB command, are applied to an image for preprocessing. Some samples of preprocessed images with their histograms are shown in Fig. 3.

2.2 MRI Image Segmentation Image segmentation means dividing an input MRI image into a number of different segments. With segmentation, valuable information such as objects, boundary detection and analysis of the input image becomes much easier. There are many image segmentation techniques, but in the proposed system thresholding with a morphological approach is used.


Fig. 4 Histogram for multiple thresholding

The threshold segmentation approach is classified as global, local, or adaptive (dynamic) according to the value of the threshold T. For an image with pixel intensity f(x, y) and average pixel intensity p(x, y) at pixel location (x, y), the thresholding operation producing the segmented image g(x, y) is

g(x, y) = \begin{cases} 1 & \text{if } f(x, y) > T \\ 0 & \text{if } f(x, y) \le T \end{cases}  (1)

For multiple thresholding, g(x, y) may be written as

g(x, y) = \begin{cases} 0 & \text{if } f(x, y) < t_1 \\ 1 & \text{if } t_1 \le f(x, y) < t_2 \\ 2 & \text{if } t_2 \le f(x, y) < t_3 \end{cases}  (2)

Therefore, if the object is bright, g(x, y) is 1, and for the dark background g(x, y) is 0. For multiple thresholding, the value of T is always selected in a valley region of the histogram, as shown in Fig. 4.

Basic steps of the automatic (iterative, global) thresholding technique:

• Choose an initial value of the threshold T
• Divide the histogram into two groups G_1 and G_2 according to intensity values
• Calculate the mean intensity value of each group: \mu_1 for G_1 and \mu_2 for G_2
• Calculate the new threshold value T = (\mu_1 + \mu_2)/2
• Repeat from step 2
• Continue until |T_i - T_{i+1}| \le \Delta T
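The iterative steps above can be sketched as follows. This is an illustrative sketch, not the paper's implementation: it operates on a flat list of pixel intensities rather than a histogram, and the function name and the convergence tolerance `eps` (playing the role of ΔT) are assumptions.

```python
def iterative_threshold(pixels, eps=0.5):
    """Iterative (global) threshold selection.

    Starts from the overall mean intensity, splits pixels into two groups
    at T, and updates T to the mean of the two group means until T changes
    by no more than `eps`.
    """
    t = sum(pixels) / len(pixels)  # initial threshold: global mean
    while True:
        g1 = [p for p in pixels if p > t]   # bright group
        g2 = [p for p in pixels if p <= t]  # dark group
        if not g1 or not g2:
            return t  # degenerate split; nothing more to refine
        t_new = (sum(g1) / len(g1) + sum(g2) / len(g2)) / 2
        if abs(t_new - t) <= eps:
            return t_new
        t = t_new

# A bimodal toy image: dark background around 20, bright object around 200.
pixels = [18, 20, 22, 25, 195, 200, 205]
t = iterative_threshold(pixels)
assert 25 < t < 195  # the threshold settles between the two modes
```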

The optimal thresholding approach assumes two gray-level populations, one for the object and one for the background; the optimal threshold gives the minimum error probability and is calculated by

T = \frac{\mu_1 + \mu_2}{2} + \frac{\sigma^2}{\mu_1 - \mu_2} \ln\left(\frac{P_2}{P_1}\right)  (3)

\mu_1  average intensity value of the background

78

μ2 σ1 σ2 P1 P1

H. Singh and R. Ratan

Here μ2 is the average intensity of the object, σ1 and σ2 are the standard deviations of the background and object-region intensities, and P1 and P2 are the probabilities that a pixel belongs to the object and to the background, respectively.
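Evaluating Eq. (3) is a one-liner; the sketch below uses made-up statistics (symbol assignments as in the legend above). With equal class probabilities the logarithmic term vanishes and T reduces to the midpoint of the two means.

```python
import math

def optimal_threshold(mu1, mu2, sigma, p1, p2):
    """Optimal threshold of Eq. (3):
    T = (mu1 + mu2)/2 + sigma^2 / (mu1 - mu2) * ln(p2 / p1)."""
    return (mu1 + mu2) / 2.0 + sigma ** 2 / (mu1 - mu2) * math.log(p2 / p1)

# Illustrative (made-up) values; equal priors kill the log term:
T = optimal_threshold(mu1=50.0, mu2=180.0, sigma=15.0, p1=0.5, p2=0.5)
```

Unequal priors shift T away from the midpoint towards the less probable class, which is exactly the correction the ln(P2/P1) term provides.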

Morphological Filtering Approach

Morphological filtering is based on two attributes of an input image: shape and the structuring element. Erosion and dilation are the two basic morphological operators, used respectively to shrink and to grow regions of an image. If M(x, y) is a grayscale image matrix and N(x′, y′) is a structuring-element matrix, then the erosion and dilation operators are defined in Eqs. (4) and (5). Weak connections between object pixels and fine details are removed using the opening operator [9].

M ⊖ N = min_{x′,y′} { M(x + x′, y + y′) − N(x′, y′) }   (4)

M ⊕ N = max_{x′,y′} { M(x − x′, y − y′) + N(x′, y′) }   (5)

The opening and closing operators, which remove small holes and fill cracks, are defined in Eqs. (6) and (7).

M ∘ N = (M ⊖ N) ⊕ N   (6)

M · N = (M ⊕ N) ⊖ N   (7)

Therefore, erosion is mainly used to reduce the size of an object and enlarge holes, whereas dilation increases the size of an object and shrinks holes.
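Equations (4)–(7) can be transcribed almost literally into Python with NumPy. This is a hedged, valid-region-only sketch (no border padding; with the symmetric flat structuring element used below, the reflection in Eq. (5) has no effect); production code would normally use a library such as `scipy.ndimage`.

```python
import numpy as np

def erode(M, N):
    """Grayscale erosion, Eq. (4): min over the structuring element N
    of M(x + x', y + y') - N(x', y'), valid region only."""
    h, w = N.shape
    out = np.full((M.shape[0] - h + 1, M.shape[1] - w + 1), np.inf)
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            out[x, y] = np.min(M[x:x + h, y:y + w] - N)
    return out

def dilate(M, N):
    """Grayscale dilation, Eq. (5): max of M(x - x', y - y') + N(x', y')."""
    h, w = N.shape
    out = np.full((M.shape[0] - h + 1, M.shape[1] - w + 1), -np.inf)
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            out[x, y] = np.max(M[x:x + h, y:y + w] + N)
    return out

def opening(M, N):      # Eq. (6): erosion followed by dilation
    return dilate(erode(M, N), N)

def closing(M, N):      # Eq. (7): dilation followed by erosion
    return erode(dilate(M, N), N)

# Opening wipes out a bright blob smaller than the structuring element:
M = np.zeros((5, 5)); M[1:3, 1:3] = 1.0   # hypothetical 2 x 2 object
N = np.zeros((3, 3))                      # flat 3 x 3 structuring element
opened = opening(M, N)                    # all zeros: the blob is removed
```

The example demonstrates the "weak connections and fine details removed" behavior of opening claimed above: any object that cannot contain the structuring element is erased.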

3 Feature Extraction and Approximate Reasoning

Through approximate reasoning, the shape and area features of the final segmented tumor image are easily calculated. The intensity level of each pixel is compared with the threshold level: if it is greater than or equal to the threshold, the pixel becomes '1'; if it is less than the threshold, it becomes '0' [10, 11].

p(x) = { 1 if x ≥ T, 0 if x < T }   (8)

In approximate reasoning, the tumor area is calculated from the binary image, which consists of only two values, black or white [12].


Image, I = Σ_{w=0}^{255} Σ_{H=0}^{255} [ f(0) + f(1) ]   (9)

where pixels = width (w) × height (H) = 255 × 255, f(0) denotes a white pixel, and f(1) denotes a black pixel.

Number of white pixels, P0 = Σ_{w=0}^{255} Σ_{H=0}^{255} [ f(0) ]   (10)

where each pixel corresponds to 0.264 mm. The segmented area of the tumor in the MRI image = (√P0 × 0.264) mm².
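The pixel counting of Eqs. (9)–(10) amounts to summing a binary mask; the short Python sketch below uses a made-up mask and reproduces the paper's √P0 × 0.264 area formula as printed (the 0.264 mm pixel pitch is taken from the text).

```python
import numpy as np

# Binary segmentation result on a 255 x 255 grid, as in Eq. (9):
mask = np.zeros((255, 255), dtype=np.uint8)
mask[100:120, 100:130] = 1            # hypothetical 20 x 30 px tumor region

p0 = int(mask.sum())                  # Eq. (10): number of white pixels
area_mm2 = np.sqrt(p0) * 0.264        # area formula as printed in the paper
```

Summing the mask is equivalent to the double summation over f(0), since white pixels contribute 1 and black pixels 0.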

4 Experimental Result

The proposed brain tumor segmentation algorithm is implemented in MATLAB 2015b. The experiments are performed on a system with an Intel Core i5 7th Gen CPU at 2.5 GHz, 4 GB of graphics memory, and 8 GB of RAM. Screenshots of preprocessing, filtering, bounding box, tumor alone, tumor outline, and the detected tumor in MR images are shown in Fig. 5a, b. The MRI input image of Fig. 5 is first preprocessed, filtered using an anisotropic filter, and then segmented using thresholding and morphological operations [13, 14]. Screenshots of the segmented tumor area and its dimensions in pixels are shown in Fig. 6a, b. The sixth image (vi) of Fig. 7 shows the segmented tumor shape, obtained by applying an area-opening operation to the tumor-alone image (iv). Figure 7 (vii) displays the area of the segmented tumor corresponding to the input image. Some samples from the MRI image database are shown in Fig. 7. To capture the complete information in an input MRI image, features are extracted from the brain tumor MRI image [15, 16]. First, the classifier is trained on known features of both benign and malignant tumors, collected from a radiology center; then 13 features are extracted from MRI image 3, listed in Table 1 with the corresponding graph in Fig. 8.


Fig. 5 a, b Output image for preprocessing, filtering, bounding box, tumor alone, tumor outline, and detected tumor in MR images

5 Conclusion and Future Work

In this paper, modified thresholding and morphological approaches have been used to segment the tumor in MRI images with a higher degree of accuracy and less computational time. In this approach, the input MR images were filtered using an anisotropic diffusion filter before applying the morphological operations. The solidity


Fig. 6 a Detected tumor area in MRI image, b detected tumor area in MRI image


Fig. 7 (i) Original image, (ii) filtered image, (iii) tumor boxing image, (iv) tumor alone, (v) tumor outline image, (vi) detected tumor image, and (vii) tumor area in pixels


Table 1 Extracted features using DWT for brain MRI image 3 from the database

Extracted feature       Feature value (Fig. 3)
Mean                    0.0032
Standard deviation      0.0810
Entropy                 1.7720
RMS                     0.0811
Variance                0.0066
Contrast                0.9481
Skewness                29.8467
Energy                  2.4302
Smoothness              −0.1076
Kurtosis                0.2886
IDM                     0.1884
Correlation             0.8741

Fig. 8 Extracted features of brain MRI image 3
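Several of the entries in Table 1 are standard first-order pixel statistics. The hedged Python sketch below computes a few of them directly from pixel values (the paper derives its 13 features from DWT coefficients, which this illustration omits; the 2 × 2 image is made up).

```python
import numpy as np

def first_order_features(img):
    """A few of the first-order statistics listed in Table 1,
    computed on normalized pixel values of a uint8 image."""
    f = img.astype(np.float64).ravel() / 255.0
    hist = np.bincount(img.ravel(), minlength=256) / f.size
    p = hist[hist > 0]                          # non-empty histogram bins
    mu, sd = f.mean(), f.std()
    return {
        "mean": mu,
        "standard deviation": sd,
        "entropy": float(-(p * np.log2(p)).sum()),
        "rms": float(np.sqrt(np.mean(f ** 2))),
        "variance": f.var(),
        "skewness": float(((f - mu) ** 3).mean() / sd ** 3),
        "kurtosis": float(((f - mu) ** 4).mean() / sd ** 4),
    }

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)   # made-up 2 x 2 image
feats = first_order_features(img)
```

On this toy image all four intensities are equiprobable, so the entropy is exactly log2(4) = 2 bits; texture features such as contrast, IDM, and correlation would additionally require a gray-level co-occurrence matrix.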

and compatible location of a tumor using morphological operations have been determined from average statistical parameters of MRI images containing tumorous cells. Based on these two parameters, the shape and area of the tumor in MRI images have been extracted. Whether a tumor is benign or malignant is also determined based on the size of the tumor in pixels. In the future, the accuracy of the proposed approach will be compared with other approaches, and an MRI image database of different patients will be analyzed for the exact location of the tumor. Thereafter, algorithms for detecting brain tumors in three-dimensional MRI images may be developed.


References

1. S.M. Shelke, S.W. Mohod, Automatic segmentation and detection of brain tumor from MRI, in International Conference 2018, ICACCI (IEEE, Bangalore, 2018), pp. 2120–2126
2. D.C. Dhanwani, M.B. Bartere, Survey on various techniques of brain tumor detection from MRI images. Int. J. Comput. Eng. Res. 4(1), 24–26 (2014)
3. R.P. Joseph, C.S. Singh, M. Manikandan, Brain tumor MRI image segmentation and detection in image processing. Int. J. Res. Eng. Technol. 3, 1–5 (2014)
4. S. Kaushal, An efficient brain tumor detection system based on segmentation technique for MRI brain images. Int. J. Adv. Res. Comput. Sci. 8(7), 1131–1136 (2017)
5. A.P.H. Tjahyaningtijas, Brain tumor image segmentation in MRI image, in 2nd International Conference 2017, ICVEE, vol. 336, IOP Conference Series, Indonesia (2017), pp. 336–339
6. M.A. Said, S.F. Ibrahim, Comparative study of segmentation technology for detection of tumor based on MRI images. Int. J. Biosci. Biochem. Bioinform. 8(1), 1–10 (2018)
7. A.R. Kavita, C. Chellamuthu, Kr. Rupa, An efficient approach for brain tumor detection based on modified region growing and network in MRI images, in International Conference 2012, ICCEET, vol. 9, issue 2, IEEE Transactions on Information Forensics and Security, Tamilnadu (2012), pp. 1087–1095
8. H. Hassanpour, N. Samadiani, S.M.M. Salehi, Using morphological transformation to enhance the contrast of medical images. J. Egyptian Soc. Radiol. Nuclear Med. 46, 481–489 (2015)
9. R. Ratan, P.G. Kholi, S.K. Sharma, A.K. Kholi, Un-supervised segmentation and quantization of malignancy from breast MRI images. J. Natl. Sci. Found. Sri Lanka 44(4), 437–442 (2016)
10. M. Zawish, A.A. Siyal, S.H. Shahani, A.A. Junejo, A. Khalil, Brain tumor segmentation through region-based supervised and unsupervised learning methods. J. Biomed. Eng. Med. Imaging 6(2), 8–13 (2019)
11. P.N.H. Tra, N.T. Hai, T.T. Mai, Image segmentation for detection of benign and malignant tumors, in International Conference 2016, BME-HUST, vol. 3, Proc. IEEE, Vietnam (2016), pp. 51–54
12. J. Selvankumar, A. Lakshmi, T. Arivoli, Brain tumor segmentation and its area calculation in brain MR images using K-means clustering and fuzzy C-means clustering, in International Conference 2012, ICAESM, vol. 1, Tamilnadu (2012), pp. 186–190
13. G.M. Xian, An identification method of malignant and benign liver tumors from ultrasonography based on GLCM texture features and fuzzy SVM. Expert Syst. Appl. 37(10), 6737–6741 (2007)
14. J.F. Vijay, J. Subhashini, An efficient brain tumor detection methodology using a k-means clustering algorithm, in International Conference 2016, ICCSP (IEEE, Tamilnadu, 2013), pp. 653–657
15. M. Saraswat, K.V. Arya, Automatic microscopic image analysis for leukocytes identification. Micron 65, 20–33 (2014)
16. Y. Yang, Z. Su, L. Sun, Medical image enhancement algorithm based on wavelet transform. Electron. Lett. 46(2), 1–2 (2010)

Implementation and Use of ERP System in Organization and Educational Institution Fakih Awab Habib, Ghatte Saqib Nisar, Singh Sudhanshu Somnath, and Shinde Abhijit Jagannath

Abstract In recent years, the demand for enterprise resource planning (ERP) systems has increased drastically because of their implementation in small and medium enterprises. An ERP system primarily delivers integrated data of great strategic importance to the business. Realizing the challenges by identifying the influencing forces and enhancing the rate of adoption is the prime objective of any implementation program. ERP projects involve large expenditures, and their percentage of success is quite low. The main role of the ERP system is to let management build trust in the accuracy of the data; it also restricts access to authorized users, i.e., not everyone can interfere with the data present in the ERP system. Keywords Enterprise resource planning (ERP) · Post-adoption · Interventions · Algorithm · Critical success factor (CSF) · Software as a service (SaaS)

1 Introduction

ERP is a system that automates the tasks necessary to perform business processes. It replicates business processes in software, guides employees step by step, and automates as many procedures as possible. Hence, its main aim is to serve as a backbone for the complete business. Many companies, such as Godrej and Mahindra & Mahindra, use an ERP system. Most educational institutions nowadays also use ERP systems to organize things easily [1]. Many authors have described the implementation of ERP solutions in higher education institutions as a complicated process.

F. A. Habib · S. S. Somnath (B) · S. A. Jagannath
Anjuman I Islam Kalsekar Technical Campus, Panvel, India
e-mail: [email protected]
G. S. Nisar
Hertzsoft Technologies Pvt. Ltd., Mumbai, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_8


ERP works simply on CRUD (Create, Read, Update, Delete) operations and is wholly built on a DBMS (Database Management System). ERP proves to be cost-effective software in which data is organized in a better manner [1]. An ERP system improves the security of the data, and its management mechanism is quicker than other methods. It also saves a lot of time in an educational institution, as taking attendance in registers is an old technique [2]. The ERP system has been introduced in higher institutions as it has become their most substantial software investment compared to other organizations [3]. The ERP system helps to identify the organization's processes and document them with the help of personnel. Employee attendance, services, and daily affairs are wholly maintained and updated at every single execution. It brings data from multiple places onto a single platform.
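The CRUD-on-DBMS idea can be illustrated with Python's built-in `sqlite3` module. This is only a hedged sketch of the concept, not the paper's system; the `attendance` table and its values are made up for illustration.

```python
import sqlite3

# Minimal CRUD cycle on an illustrative 'attendance' table:
con = sqlite3.connect(":memory:")        # in-memory DB, no server needed
con.execute("CREATE TABLE attendance (emp_id INTEGER, day TEXT, present INTEGER)")

# Create
con.execute("INSERT INTO attendance VALUES (?, ?, ?)", (1, "2020-01-06", 1))
# Read
rows = con.execute("SELECT * FROM attendance WHERE emp_id = 1").fetchall()
# Update
con.execute("UPDATE attendance SET present = 0 WHERE emp_id = 1")
# Delete
con.execute("DELETE FROM attendance WHERE emp_id = 1")
remaining = con.execute("SELECT COUNT(*) FROM attendance").fetchone()[0]
con.close()
```

Every ERP module (attendance, finance, HR, and so on) reduces to such create/read/update/delete operations on the underlying DBMS, with access control layered on top.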

2 Methodology

It is necessary for the developer of the ERP and the company to first analyze the system, because the risk of failure is always present and can be highly expensive [4]. A methodology is used to provide a structure, an algorithm, and a process for handling the project. Drivers of the project, project management, project resources, management review, and final delivery are the fundamental elements of the methodology for building an ERP system.

2.1 Drivers of Project

Drivers are the essential factors that play a role in the ERP system. The reasons behind implementing an ERP system and its benefits can shift according to other company priorities and demands. The process of managing the project is an essential mechanism for maintaining and completing the ERP implementation (Table 1).

2.2 Project Management

This section includes planning, timing, organizing schedules, and resourcing, which together define the initialization and termination of the implementation. A project team must be picked to have proper control over the project so that it stays on track. The role of managing a project is very critical: it often happens that small issues arise in the project that consume a lot of time to figure out, which can derail the project implementation.


Table 1 Common modules in an ERP system

Module No.   Title
1            Dashboard
2            Training
3            Clients
4            Enquiry
5            Meeting Management
6            Human Resource (HR)
7            Finance Management
8            Communication
9            Change and Support
10           Attendance

2.3 Project Resources

To carry out the implementation of an ERP system without issues, proper resources for the project are required; resourcing is therefore one of the critical decisions in the implementation process. If the resources used are not adequate, a very complicated situation arises in which the probability of completing the project is very low, and much latency is added to the implementation process.

2.4 Management Reviews

Once the implementation is done, the product is handed to some users for review. The users try to find any mistakes present in the product and, once satisfied, send their feedback. Once the review is satisfactory, the product is ready for delivery.

2.5 Delivery of Product

Once the verdict from management is favorable, the product is ready for use at the client site, i.e., in organizations and educational institutions. One of the most common discussions on such topics concerns the implementation and rating of an ERP system. Implementation raises the usual questions about the speed of implementation, the compactness of the system, and the time required for implementation. The time needed for implementation automatically decreases


if the understanding of the ERP structure is excellent; knowledge thus translates directly into implementation speed. The rating of an ERP reflects its popularity index and the challenges it faces, which are directly based on the number of buyers and the reviews they provide.

3 ERP Trends and Perspective

Many survey studies report findings on current ERP experience in industry, and several articles provide very different perspectives on the ERP system, for example, the views of managers, vendors, and users. Several ERP models are present in the current market, beginning with a conceptual model of the ERP system and the factors defining the success of ERP implementation. Currently, many organizations face pressure from their customers, stakeholders, and suppliers to carry out improvements and make efficient products in less time without compromising quality [5]. To deal with this, companies have to respond quickly in their production while keeping cost efficiency in mind. There is a significant advantage for companies in using system applications and technologies such as ERP, Software as a Service (SaaS), and Web 2.0 to make this easier. According to [6], this has caused huge interest among vendors in improving future ERP systems to support the customers at the other end.

4 Implementation

Implementing an ERP system is not an easy task; it is a long process that involves significant expenditure, effort, and time [7]. The most important factor in such a project, which leads towards success, is the implementation phase [8]. Implementation of ERP systems allows a company to run its business with various advantages, such as improved process flow, better data analysis, and better customer service [9]. Besides, the gains expected from implementing ERP systems include reduced costs, reduced operation time, and more customer value for the organization [10]. ERP is a major project currently in demand from all organizations irrespective of size. Since the system plays a significant role in every organization, the issues surrounding the implementation process are a major concern [11], made more complicated by the many failed cases, including some fatal disasters that caused massive losses for the industry. One popular topic on ERP implementation is improving the Critical Success Factors (CSF) [9]. According to [12], the product life cycle is now very short and technology is changing drastically, due to which new factors may arise. Likewise, while the CSFs for implementation of ERP systems have been discussed, there have been


Table 2 Implementation and rating of ERP

Implementation
• Speed of implementation. Question: factors for achieving it? Achieved by identifying the organization process and documenting it with the help of personnel.
• Compactness. Question: number of modules in ERP? This depends entirely on the organization's requirements.
• Time required for implementation of ERP. Question: how much time is required for the ERP system to get implemented? The time taken by the ERP system to fully complete depends upon the client requirement.

Rating
• Based on popularity index. Question: how popular is the ERP tool? Indicated by the number of models used.
• Based on challenges faced. Question: what are the challenges faced by ERP tools? Incidents, actions, and violations of actions.

many inconsistent and inconclusive findings [13]. The factor recognized as the most crucial part of ERP implementation is the selection process [14]. The many factors that lead to the success or failure of an ERP implementation can only be learned from prior implementation experiences [15] (Table 2).

Acknowledgements We appreciate our Project Guide Asst. Prof. Awab Fakih, who provided information and experience that greatly aided the investigation. We also thank the Director of AIKTC, Dr. Abdul Razak Honnutagi, for his support; he always inspires students to progress from the perspective of technical research. We thank our parents for their lifetime support, and we thank Hertzsoft Technologies Private Limited for giving us this great opportunity to work on live projects.

References

1. M. Al-Mashari, A. Mudimigh, M. Zairi, Enterprise resource planning: a taxonomy of critical factors. Eur. J. Oper. Res. 146, 352–364 (2005)
2. L. Zornada, T.B. Velkavrh, Implementing ERP systems in higher education institutions, vol. 6, issue 6 (June 2016)
3. V. Botta-Genoulaz, P. Millet, B. Grobot, A survey on the recent research literature on ERP systems. Comput. Ind. 56(6), 510–522 (2005)
4. Dunaway, ERP implementation methodologies and strategies. http://web.eng.fiu.edu/chen/summer%202012/egn%205621%20enterprise%20systems%20collaboration/reading%20erp/readings_on_erp_chapter04.pdf
5. R. Addo-Tenkorang, P. Helo, Enterprise resource planning (ERP): a review literature report, in Proceedings of the World Congress on Engineering and Computer Science 2011, vol. II, WCECS 2011, San Francisco, USA (2011)
6. B. Johansson, Why focus on roles when developing future ERP systems (2007)
7. M.-I. Mahraz, L. Benabbou, A. Berrado, Success factors for ERP implementation: a systematic literature review, in Proceedings of the International Conference on Industrial Engineering and Operations Management, Bangkok, Thailand, March 5–7, 2019
8. K. Almgren, C. Bach, ERP systems and their effects on organizations: a proposed scheme for ERP success, in ASEE 2014 Zone I Conference, University of Bridgeport, Bridgeport, CT, USA, April 3–5, 2014
9. T.F. Gattiker, D.L. Goodhue, What happens after ERP implementation: understanding the impact of inter-dependence and differentiation on plant-level outcomes. MIS Q. 29(3), 559–585 (2005)
10. A.A. Elragal, A.M. Al-Serafi, The effect of ERP system implementation on business performance: an exploratory case study. Communications of the IBIMA (2011), Article ID 670212, 20 pages. https://doi.org/10.5171/2011.670212
11. T.F. Gattiker, D.L. Goodhue, What happens after ERP implementation: understanding the impact of inter-dependence and differentiation on plant-level outcomes. MIS Q. 29(3), 559–585 (2005)
12. K. Amoako-Gyampah, Perceived usefulness, user involvement and behavioural intention: an empirical study of ERP implementation. Comput. Hum. Behav. 23, 1232–1248 (2007)
13. C.C.H. Law, E.W.T. Ngai, ERP systems adoption: an exploratory study of the organizational factors and impacts of ERP success. Inf. Manag. 44, 418–432 (2007)
14. F. Mahar, S.I. Ali, A.K. Jumani, M.O. Khan, ERP system implementation: planning, management, and administrative issues. Indian J. Sci. Technol. 13(01), 1–22 (2020). https://doi.org/10.17485/ijst/2020/v13i01/148982
15. J. Dawson, J. Owens, Critical success factors in the chartering phase: a case study of an ERP implementation. Int. J. Enterprise Inf. Syst. 4(3), 9–24 (2008)

WebGIS Concept of River Pollution Monitoring System—A Case Study of Yamuna River

Rahat Zehra, Madhulika Singh, and Jyoti Verma

Abstract This study was performed to bridge the gap between IT and environmental studies. The aim is to develop a platform that helps in understanding river pollution from the grass-roots level. The stretch from Dak Patthar to Agra in the Yamuna river catchment was selected for the study period 2018–2020. The structure was based on a database generated for water quality, heavy metals, and land use in the area. The application layers include Web Map, Heavy Metal Pollution Index, and WebGIS. The supporting layer has Arc Spatial Data Engine, SQLite, and Java Server Pages. The data layers consist of pollution monitoring, pollution sources, and basic geographical structures. The application helps to analyze the data through an interactive platform, giving users the provision to generate a new database through various select-query commands. Central and local authorities can use this platform to focus on the increasing pollution and execute preventive policy measures effectively. Keywords WebGIS · SQLite · Java server pages (JSP) · River pollution · Heavy metal pollution index · Land use

Abstract This study was performed in order to bridge the gap between IT and environmental studies. The aim is to develop a platform which would help in understanding the river pollution from the grass-root level. Dak Patthar to Agra at the Yamuna river catchment was selected for the study period of 2018–2020. The structure was based on a database generated for water quality, heavy metals and land use in the area. The application layers include Web Map, Heavy Metal Pollution Index, and WebGIS. The supporting layer has Arc Spatial Data Engine, SQLite and Java Server Pages. The data layers consist of pollution monitoring, pollution sources, and basic geographical structures. The application designed helps to analyze the data with an interactive platform giving users the provision to generate a new database based on various select query commands. The central and local authorities can use this platform to concentrate on the increasing pollution and further execute the preventive policy measures effectively. Keywords WebGIS · SQLite · Java server pages (JSP) · River pollution · Heavy metal pollution index · Land use

1 Introduction

R. Zehra (B) · M. Singh
Amity Institute of Geo Informatics and Remote Sensing, Amity University, Noida, UP, India
e-mail: [email protected]
J. Verma
University of Allahabad, Prayagraj, UP, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_9

In 1978, the Central Pollution Control Board started monitoring national water quality under the Global Environmental Monitoring System (GEMS) Water Programme, initiating with 24 surface water and 11 groundwater stations. However, despite continuous efforts, the water of major rivers in India today is of poor quality. With advances in technology, there is now an increasing emphasis on water quality and pollution. The Geographic Information System (GIS) is often used with environmental water


quality analysis and water pollutants, as efficient visualization and analysis can be performed [1]. A similar example, the Watershed Water Management Decision Support System (DSS-CWM), is given by the Dublin National University of Ireland (Tao 2013). Developing and updating such river databases often requires ample funds and time. The geographical or geospatial datasets created are efficiently processed, managed, analyzed, and visualized with the help of GIS. With the growth of IT and the World Wide Web, it is now possible to generate and disseminate spatially referenced information. The mechanism that combines GIS and Web techniques is called a WebGIS platform. WebGIS services connect online mapping services to the collected data through an interactive platform for analysis. Examples where WebGIS has been developed include studies of forests, land use, the mining sector, locational studies, etc. (Nizamuddin, Intan Meutia, Ardiansyah, 2018). The design and implementation of a WebGIS depend on the programming language, which forms its backbone; the quantity of data that can be presented also depends on the programming language. Languages such as C#, Visual Basic.NET, PHP, Python, and Java Server Pages are available for designing the graphical user interface of the application; similarly, SQLite, MS Access, PostgreSQL, etc. are available as databases [2–7]. For this study, a combination of Java Server Pages, SQLite, and ArcGIS was used.

2 Research Methodology

2.1 WebGIS System Structure

The general scheme follows the client-server computational model; the system requires internet connectivity, a client desktop, a web server, and an application server. The client can overlay the various layers generated in ArcGIS, i.e., visualize the raster and vector layers. With facilities for select queries, zooming, analysis, and data download, the setup was executed and the WebGIS was generated as a pilot study on Windows 10 (Fig. 1).

2.1.1 Data Layer

Fig. 1 Structure diagram of the river pollution monitoring system

This layer stores the data required by the work, including remotely sensed images of the Landsat satellite system with an approximate spatial resolution of 30 m, digital elevation models, an assemblage of river system maps, hydro-meteorological datasets, sectionally monitored river data, pollution distribution maps, and administrative maps. The Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and Landsat 8 Operational Land Imager (OLI) were used to build the entire geospatial dataset required by this application. The data was downloaded from the USGS Earth Explorer website. The CartoDEM (1 arc-sec) was downloaded from Bhuvan (the Indian Geo-Platform of ISRO). ERDAS Imagine and ArcGIS application software were used to generate various thematic maps. Hydro-meteorological data was acquired from the Central Water Commission, while previous years' pollution data was procured from the Central Pollution Control Board. Primary data collection for 2018–20 at 14 sampling sites was done to generate a water quality dataset.

2.1.2 Intermediate Layer

This layer covers the hardware and software requirements, such as the programming language, database management system, and GIS software. ArcGIS software was used for spatial analysis and for generating thematic maps of the area. These are all used with Google Application Programming Interfaces (APIs). Further, SQLite is used with the Arc Spatial Data Engine; this database software makes a unified connection between the spatial and attribute datasets. Program development was done using Java Server Pages (JSP). JSP is a server-side programming technology that allows the development of dynamic, platform-independent Web-based applications. It has access to all Java APIs, like Java Database Connectivity (JDBC), an application programming interface (API) for databases. Advantages of using JSP are: better performance with dynamic elements embedded in HyperText Markup Language (HTML) pages; pre-compilation, which saves server time; and, being built on the Java Servlets API, access to Enterprise Java APIs, including Java Database Connectivity (JDBC), Java Naming and Directory Interface (JNDI), Enterprise JavaBeans, Java API for XML Processing (JAXP), etc.


SQLite is open-source software providing an independent, serverless, zero-configuration, transactional SQL database engine. Being serverless, there is no need for TCP/IP for each transaction or command; it just reads and writes the database file on the system. Zero-configuration means that when installing SQLite there is no specific requirement for a server to be started, stopped, or configured. A few advantages of SQLite are: better performance due to the faster handling of individual data files; economical and less complex; highly portable across operating systems (big-endian or little-endian, 32-bit or 64-bit does not matter); and an encapsulated format for file transfer. Endianness describes the order in which a sequence of bytes is stored in computer memory; it can be big or little, referring to whether the highest- or lowest-order values are stored first. The source code of SQLite is simple and highly accessible to the average programmer.
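The zero-configuration, serverless usage described above can be sketched from Python's built-in `sqlite3` module: no server process is started, and the engine simply reads and writes a database file (here an in-memory one). The water-quality schema, site names, and values below are made up for illustration; a WebGIS back end would issue similar select queries on behalf of a client.

```python
import sqlite3

# Serverless, zero-configuration database: just open a file or :memory:
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE water_quality (site TEXT, parameter TEXT, value REAL)")
con.executemany(
    "INSERT INTO water_quality VALUES (?, ?, ?)",
    [("Dak Patthar", "DO", 8.1), ("Agra", "DO", 3.2), ("Agra", "BOD", 22.5)],
)

# A typical select query a WebGIS client might issue, e.g. sites with
# dissolved oxygen below 5 mg/L:
low_do = con.execute(
    "SELECT site, value FROM water_quality WHERE parameter = 'DO' AND value < 5"
).fetchall()
con.close()
```

Because the whole database lives in a single file, the same query can be run on any client machine that receives a copy, which is the portability advantage noted above.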

2.1.3 Graphical User Interface

This layer contains the application layer, or graphical user interface, which makes interaction between the server system and the client system easy. It provides access to the analysis of water quality throughout the study area, the display of various datasets, the downloading and saving of generated spatial-query results, etc. This layer also helps an executive take decisions during an emergency based on the available or predictive analysis.

3 Discussion

This system, designed for the general public, provides the simplest form of the technology. The geospatial database was generated with two main purposes: first, information sharing through web servers, and second, using the generated information for each of the analyses performed through select queries. The advantages of such an application include graphs generated spontaneously through GIS-embedded techniques. All the datasets of the various parameters can be presented in varied colors. The system can be connected to multiple stations for data upload. With remote access through the servers, it can be useful even in remote and inaccessible areas. However, there are a few disadvantages to this system too. Since it works on web technology, digital know-how is important for proper utilization of the system, and the parameters and values uploaded must be true and properly measured.


4 Conclusions

This system is being developed with Java Server Pages on the mdbootstrap concept and uses the advanced technology of GIS. It would bring a thrust of change from conventional water quality analysis, which was time-consuming and error-prone with slow generation of results. It would not just portray environmental pollution and conservation information but would be a platform for extensive decision support systems, aiding the government and local authorities to work together for a sustainable Earth. Since it is part of the doctoral research work of the scholar, all suggestions and reviews are heartily welcomed.

References

1. T.Q. Kuang, J.L. Du, Z.M. Zhang, W.W. Zhang, Study on a water pollution simulation and analysis system based on WebGIS: a case study of Liyang City, Jiangsu Province, in The 5th International Conference on Water Resource and Environment (2019), p. 344. https://doi.org/10.1088/1755-1315/344/1/012125
2. D. Mirauda, M. Ostoich, F. Di Maria, S. Benacchio, I. Saccardo, Integrity model application: a quality support system for decision-makers on water quality assessment and improvement, in IOP Conf. Series: Earth and Environmental Science 120, 012006 (2018). https://doi.org/10.1088/1755-1315/120/1/012006
3. M. Al-Sibai, S. Zahra, M. Huber, A decision support system (DSS) for water resources management—design and results from a pilot study in Syria, in Climatic Changes and Water Resources in the Middle East and North Africa (2008). https://doi.org/10.1007/978-3-540-85047-2
4. B.T. Ngo, Management and monitoring of air and water pollution by using GIS technology. 3(1), 50–54 (2012). Retrieved from http://www.openaccess.tu-dresden.de/ojs/index.php/jve/
5. G. Agurto, E. Andrade, C. Tomalá, C. Domínguez, P. Guillén, K. Jaramillo, A. Sánchez-Rodríguez et al., Database and WebGIS: tools for integration and access to biodiversity information of invertebrates of the marine reserve 'El Pelado' (REMAPE). Neotropical Biodiversity 4(1), 173–178 (2018). https://doi.org/10.1080/23766808.2018.1553380
6. M. Omidipoor, M. Jelokhani-Niaraki, N. Neysani, A Web-based geo-marketing decision support system for land selection: a case study of Tehran, Iran. Ann. GIS (2019), 1–15. https://doi.org/10.1080/19475683.2019.1575905
7. J.K. Kazak, J. Chruściński, S. Szewrański, The development of a novel decision support system for the location of green infrastructure for stormwater management. Sustainability 10, 4388 (2018). https://doi.org/10.3390/su10124388

Data Mining in Crime Analysis Nahid Jabeen and Parul Agarwal

Abstract Crime is a worldwide problem that can harm a nation both socially and economically. Crime control is an inescapable step, compulsory for the welfare and sustainable development of a nation. In the digital world, it is not an easy task to expose criminals and the vulnerable areas continuously affected by their wrongdoings. The police departments of every nation are working continuously and at pace to overcome crimes, criminals and their techniques. The difficulty of investigating a large amount of data about crimes and criminals has become a major challenge for police officials. An approach is needed that can classify, systematically investigate and forecast crimes, helping to reduce the crime rate. Various methodologies and paradigms can help police officials discover and eliminate crime from society. Data mining empowers us with several practical and convenient ways to assess large and distinct sets of information. It helps to uncover hidden information from large databases of criminal records for investigating, controlling and preventing crime for organizations and users. Various researchers and data analysts have given their valuable time and knowledge to the field of data mining. This paper focuses primarily on presenting a concise overview of various research papers on data mining techniques applied in crime analysis. Keywords Data mining · Crime analysis · Criminal investigation · Weka · ARIMA

N. Jabeen · P. Agarwal (B) Department of Computer Science and Engineering, School of Engineering Sciences and Technology, Jamia Hamdard, New Delhi, India e-mail: [email protected] N. Jabeen e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_10


1 Introduction A human being is not a criminal by birth; it is the state of affairs of a person's social life that leads to involvement in crime. Crime has an evil influence on communities and can destroy society. Crimes create problems for a country's economy by burdening it financially with the additional need for multiple security forces [1]. So it is always necessary to analyze knowledge of crime and keep up with it effectively. The analyzed result will not be completely accurate, but to some degree it will help minimize the crime rate. Crime analysis can be defined as a structured way to identify, locate and predict incidents. A clear understanding of possible future crime patterns becomes a vital requirement, so that if a crime cannot be prevented from occurring, at least preparations can be made for it. Traditional methods needed a large amount of paperwork, manpower and time to fetch patterns from data to predict possible future crimes [2]. An approach combining informatics and criminal justice can be used to establish data mining techniques that help to solve crimes more quickly. Data mining is a technique for extracting hidden data from a generally huge dataset and turning it into useful information for further use [3]. Data mining has become a quickly growing field among criminal investigators and crime analysts. The large number of police department records has revealed the need, more than ever, for a systematic and intellectual approach to crime investigation. Depending on the type of modern criminal investigation, many modern scientific techniques are employed collectively. Data is a valuable asset that is used to link and analyze crime scenes. The crime analysis process involves observation of crime reports and identification of correlated data such as patterns, series, statistics and maps.
These correlated data sets can be used to correlate crimes and fetch patterns that can be identified for making predictions. For example, when two crimes have happened in a particular place, they are linked; from crimes already in the past, potentially associated future crimes can be predicted. In this paper, various techniques and methodologies are described that can enhance current systems for an efficient investigation process, minimize errors and reduce time complexity.

2 Literature Review In the field of crime data mining and criminal investigation, many scholars have done research work. The focus of researchers is always on reducing and preventing crime and enhancing the quality of criminal investigation. A few notable research papers in the related area are as follows. In [4], the authors described crime factors using the relationship between computer science and criminal justice to analyze and predict crime. Some challenges faced by law enforcement officials in identifying crime patterns and trends effectively are also mentioned. The clustering strategy is preferred.


The classification algorithm gives results for existing and solved crimes. The five steps of crime analysis, along with the data mining approaches used to analyze crimes, are listed in Table 1. In [5], the authors applied a general data mining framework based on the experience obtained from the COPLINK project, run by scholars at the University of Arizona with the assistance of the Tucson and Phoenix police departments since 1997. Table 2 introduces several data mining methods described in the paper to distinguish structured and unstructured data. In [6], the authors describe the following data mining components. The paper divides the crime analysis process into two main components of the COPLINK project, i.e., COPLINK CONNECT and COPLINK DETECT. The techniques and categories of the data mining components are given in Table 3. In [7], the authors focus on existing Indian e-governance. The paper briefly describes an interactive query-based interface used by the National Crime Record Bureau (NCRB) as a tool for criminal analysis. It also gives a reasonable summary of the Indian police system and the key characteristics of the Crime Criminal Information System (CCIS) and Common Integrated Police Application (CIPA), along with their current status and shortcomings. In [8], the authors describe various crime prevention data mining methods using strategies in the following steps: data extraction, data collection, pre-processing, clustering, classification, pattern prediction and visualization. In this paper, visualization was achieved using Google's marker clustering (GMAPI) and crime-prone locations on the Indian map, with the WEKA tool discussed in [9]. To check the selected dataset, random forest and cross-validation are used.
Table 4 provides some uses of data mining techniques. According to [2], the authors applied clustering techniques to enable investigators to predict and prevent criminal activity. They applied the K-means algorithm as the clustering technique to store the data and forecast the possible result. The aim of the paper was to enable government officials to predict crimes and criminals based on previous data and location. They also compare some formerly built systems, along with the disadvantages of the algorithm used. In [10], the authors worked

Table 1 Steps in crime analysis and their approaches

1. Data collection: MongoDB is used to collect unstructured data
2. Classification: Naive Bayes classifier; its advantages are that it is easy and fast to converge and, in the case of probability, it solves zero-frequency problems
3. Pattern identification: the Apriori algorithm is used to determine frequently occurring crimes in a particular place with the help of association rules
4. Prediction: decision tree
5. Visualization: a heat map is used to indicate activity level
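As a concrete illustration of the classification step in Table 1, a categorical Naive Bayes classifier with Laplace smoothing (the usual way the zero-frequency problem is avoided) can be sketched in a few lines. This is not the reviewed system's code; the records and field values below are made up.

```python
import math
from collections import Counter, defaultdict

# Toy crime records: (area, time_of_day) -> crime type. All values are hypothetical.
records = [
    (("downtown", "night"), "theft"),
    (("downtown", "night"), "theft"),
    (("suburb", "day"), "burglary"),
    (("downtown", "day"), "theft"),
    (("suburb", "night"), "burglary"),
]

def train_naive_bayes(data):
    """Count class priors and per-feature value frequencies."""
    class_counts = Counter(label for _, label in data)
    feat_counts = defaultdict(Counter)  # (feature_index, label) -> value counts
    for feats, label in data:
        for i, v in enumerate(feats):
            feat_counts[(i, label)][v] += 1
    return class_counts, feat_counts

def predict(class_counts, feat_counts, feats, alpha=1.0):
    """Return the most probable class; Laplace smoothing (alpha) avoids
    zero probabilities for unseen feature values."""
    total = sum(class_counts.values())
    best, best_score = None, float("-inf")
    for label, c in class_counts.items():
        score = math.log(c / total)
        for i, v in enumerate(feats):
            counts = feat_counts[(i, label)]
            n_values = len({w for key in feat_counts if key[0] == i
                            for w in feat_counts[key]})
            score += math.log((counts[v] + alpha) / (c + alpha * n_values))
        if score > best_score:
            best, best_score = label, score
    return best

model = train_naive_bayes(records)
print(predict(*model, ("downtown", "night")))  # prints "theft"
```

The heat-map and Apriori steps would then consume the same records; only the classification step is shown here.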


Table 2 Data mining techniques and their features

1. Entity extraction
Purpose: detects specific trends from data such as text, images or audio
Application: used on police narrative reports to automatically identify individuals, addresses, vehicles and personal characteristics
Limitation: its output depends on large amounts of clean input data being available

2. Clustering technique
Purpose: groups objects from data into classes with similar characteristics
Application: used to classify criminals who are perpetrating crime in similar ways or to differentiate between different gangs
Limitation: high computation intensity is required

3. Association rule mining
Purpose: identifies commonly occurring item sets and presents the patterns as rules
Application: used in the detection of network intrusion to determine the history of contact between users and to predict possible network attacks
Limitation: it needs highly structured and rich data

5. Classification
Purpose: identifies mutual properties between different entities involved in crime and organizes them into predefined groups
Application: used to identify the source of e-mail spamming and to forecast criminal patterns in less time
Limitation: to predict accurately, it requires complete training and testing data
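The association-rule technique in Table 2 rests on frequent itemset mining. A minimal level-wise (Apriori-style) search over toy incident attributes might look like the following; the transactions are hypothetical, not data from any surveyed system.

```python
from itertools import combinations

# Toy transactions: attributes observed together in incident reports (hypothetical).
transactions = [
    {"night", "downtown", "theft"},
    {"night", "downtown", "theft"},
    {"day", "suburb", "burglary"},
    {"night", "downtown", "vandalism"},
]

def apriori(transactions, min_support=0.5):
    """Return all itemsets whose support >= min_support (level-wise search)."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    frequent, level = {}, [frozenset([i]) for i in items]
    while level:
        # Count the support of the current candidates.
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        survivors = [c for c, k in counts.items() if k / n >= min_support]
        frequent.update({c: counts[c] / n for c in survivors})
        # Generate next-level candidates whose subsets are all frequent.
        next_level = set()
        for a, b in combinations(survivors, 2):
            u = a | b
            if len(u) == len(a) + 1 and all(frozenset(s) in frequent
                                            for s in combinations(u, len(u) - 1)):
                next_level.add(u)
        level = list(next_level)
    return frequent

freq = apriori(transactions, min_support=0.5)
print(freq[frozenset({"night", "downtown"})])  # 0.75
```

Association rules such as {night, downtown} -> {theft} would then be read off the frequent itemsets by comparing supports.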

Table 3 Components and techniques of data mining

1. Crime entity extraction
Technique: named-entity extraction
Category: methods for extracting entities are categorized into four groups, i.e., lexical lookup, rule based, statistics based and machine learning based

2. Crime data clustering
Technique: self-organizing neural network
Category: crime data clustering follows a two-step approach, i.e., a self-organizing neural network followed by K-means

3. Crime matching process
Technique: multi-layer perceptron
Category: the proposed MLP has three layers of neurons, i.e., input, hidden and output
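The three-layer perceptron described for crime matching can be sketched as a plain forward pass. The weights below are random placeholders, not a trained crime-matching model, and the layer sizes are arbitrary.

```python
import math
import random

def mlp_forward(x, w1, b1, w2, b2):
    """One hidden layer with sigmoid units, matching the three-layer
    (input, hidden, output) structure described in Table 3."""
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    h = [sig(sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    return [sig(sum(wi * hi for wi, hi in zip(row, h)) + b)
            for row, b in zip(w2, b2)]

# Toy weights for a 3-input, 2-hidden, 1-output network (hypothetical values;
# a real crime-matching MLP would be trained on incident feature vectors).
random.seed(0)
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [[random.uniform(-1, 1) for _ in range(2)]]
b2 = [0.0]

out = mlp_forward([0.5, 0.1, 0.9], w1, b1, w2, b2)
print(0.0 < out[0] < 1.0)  # True: the sigmoid output is a match score in (0, 1)
```

In a crime-matching setting, the output would be interpreted as the degree of similarity between two incidents' feature vectors.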


Table 4 Data mining techniques and their uses in crime detection

1. Back-propagation NN classifier: identifying crime patterns and making future predictions
2. K-means algorithm: crime detection with the help of clustering
3. KNN classification with Chi-square feature selection: improving criminal identification
4. Bayes theorem: classification
5. Apriori algorithm: finding frequent patterns of a particular region
6. Hotspot optimization tool: crime representation by geospatial spots

on a project through which they intend to detect crime patterns via forecasting and prediction. They gathered data from criminal record databases such as India's National Crime Records Bureau (NCRB), the FBI and the CIA. For forecasting, classification algorithms such as the Bayesian network algorithm and regression algorithms are used, and the final results are visualized. They used WEKA for data analysis and predictive models such as the Auto-Regressive Integrated Moving Average (ARIMA) and Artificial Neural Networks (ANN). According to [1], the authors have presented six visualization modules and two machine learning algorithms, i.e., K-Nearest Neighbors and Naive Bayes, for identifying areas that are prone to crime. The paper presents a web-based crime prediction, mapping and visualization tool developed in R using various libraries. The visualization modules required for crime prediction are explained in Table 5. In [11], the authors analyzed some complications that have been identified in crime data research. They described a research methodology based on crime types and local law enforcement, and also gave a description and interpretation of crime data mining based on techniques, technology and challenges. They used the decision tree algorithm and the Apriori algorithm for identification and classification of the data. They discussed the results of experimenting with the data on the pre-process tab. In [12], the authors proposed a model using the K-means clustering algorithm and the Apriori algorithm for crime and criminal analysis, based on more than 350 raw crime records collected from the police departments of the Libyan cities of Benghazi and Tripoli and the Al-Jafara Supreme Security Committee (SSC). They used software tools such as Google App Engine, the WEKA tool and Microsoft Excel to analyze the data.
The main objective of this project is to assist Libyan security officials in detecting criminal behavior and figuring out the connection between criminal age and type of crime in Libya. The proposed model, Mining Libyan Criminal Records (MLCR), has been implemented with two types of mining algorithms, resulting in two different outputs, i.e., Mining Libyan Criminal Records using Association Rules (MLCR-AR) and Mining Libyan Criminal Records using Clustering (MLCR-C).
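The K-means clustering used in several of the surveyed systems can be illustrated on toy incident coordinates. This is a generic Lloyd's-algorithm sketch with a naive deterministic initialization, not the MLCR-C implementation; the coordinates are made up.

```python
# Toy incident coordinates (e.g., latitude/longitude offsets); two separated areas.
points = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
          (5.0, 5.1), (5.2, 4.9), (4.9, 5.2)]

def kmeans(points, k, iters=20):
    """Plain Lloyd's algorithm: assign each point to the nearest centroid,
    then recompute each centroid as its cluster mean."""
    # Spread initial centroids across the data order (naive but reproducible).
    centroids = [points[i * (len(points) - 1) // max(k - 1, 1)] for i in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda j: (p[0] - centroids[j][0]) ** 2
                                          + (p[1] - centroids[j][1]) ** 2)
            clusters[j].append(p)
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]: two clusters of 3 incidents each
```

In a crime-analysis setting, the resulting cluster centroids would mark candidate hotspots for further investigation or map visualization.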


Table 5 Visualization modules and their applications

Module 1: Crime data visualization using Google Maps. It is helpful for a person to know about risky and dangerous areas so they can be avoided, and it can help the police department establish heavy security in those areas

Module 2: Visualization of exact crime position with 3-D view. It is useful for the police to investigate the crime-affected area without visiting the crime scene again

Module 3: Visualization according to the type of crime. It is useful for studying the forms of crime regularly encountered in an area and for implementing safety measures centered on that crime

Module 4: Crime hotspots visualization. In research, it is useful for making smarter decisions and devising methods to help law enforcement

Module 5: Crime frequency report. It is useful for taking safety measures and helps the criminal analyst test which types of crime have increased or decreased

Module 6: Interactive crime report using chart and bar diagram. Analysts need to recognize the pattern of every crime that happened in a given area; it provides a visual depiction of the monthly pattern of types of crime derived from the database

3 Conclusion Law enforcement officials and investigators collect raw data from various sources such as telephone records, social networks, police records and transaction records for the investigation process. This is a time-consuming task, but new technologies and tools can help law enforcers identify criminals efficiently. This paper reviewed various data mining tools and techniques for finding particular crime patterns at a particular place and time. The reviewed reports conclude that crime data mining is capable of improving the security of a country as well as intelligence productivity and efficiency. Many future research directions in this area are still at an exploratory stage.


References
1. H. Toppi Reddy, B. Saini, G. Mahajan, Crime prediction & monitoring framework based on spatial analysis. Proc. Comput. Sci. 132, 696–705 (2018)
2. A. Sangani, C. Sampat, V. Pinjarkar, Crime prediction and analysis, in Proceedings of 2nd International Conference on Advances in Science & Technology, SSRN: Elsevier, India (2019), pp. 1–5
3. Data Mining, Wikipedia. https://en.wikipedia.org/wiki/Data_mining. Last accessed 2020/4/18
4. S. Sathyadevan, M.S. Devan, S. Gangadharan, Crime analysis and prediction using data mining, in Proceedings of International Conference on Networks & Soft Computing, IEEE, India (2014), pp. 406–412
5. H. Chen, W. Chung, J. Xu, G. Wang, Y. Qin, M. Chau, Crime data mining: a general framework and some examples. Computer 37(4), 50–56 (2004)
6. M. Keyvanpour, M. Javideh, M. Ebrahimi, Detecting and investigating crime by means of data mining: a general crime matching framework. Proc. Comput. Sci. 3, 872–880 (2011)
7. M. Gupta, B. Chandra, Crime Data Mining for Indian Police Information System. IIT-D (2011), pp. 388–397
8. D.K.K.S. Vinod, Crime analysis in India using data mining techniques. Int. J. Eng. Technol. 7(26), 253 (2018)
9. P. Agarwal, A. Alam, R. Biswas, Issues, challenges and tools of clustering algorithms. Int. J. Comput. Sci. Issues 8(3), 523–528 (2011)
10. V. Pande, V. Samant, S. Nair, Crime detection using data mining. Int. J. Eng. Res. Technol. 5(1), 891–896 (2016)
11. D. Bhargava, P. Singh, R.S. Sangwa, Analysis of crime data using data mining. Int. J. Eng. Sci. Res. Technol. 7(2), 675–681 (2018)
12. D.Z. Suliman, A.M. Altaher, Crime data analysis using data mining techniques to improve crimes prevention. Int. J. Comput. 8, 39–45 (2014)

Thermophotovoltaic Cells: Electrical Power Generation at Night Devesh Bhatnagar

Abstract Photovoltaics hold incredible potential due to the abundance of solar energy incident on the Earth; however, they can only generate energy during daylight hours. In order to provide electrical power after the sun has set, we consider an unconventional photovoltaic concept that uses the Earth as a heat source and the night sky as a heat sink, resulting in a "nighttime photovoltaic cell" that combines thermoradiative photovoltaics with ideas from the advancing field of radiative cooling. Here, we discuss the requirements of thermoradiative photovoltaics, the theoretical limits of coupling this concept to deep space, the ability of advanced radiative cooling techniques to improve performance, and the limits, adoptability and significance of this nighttime photovoltaic idea. Keywords Photovoltaic cell · Thermoradiative cell · Voltage

1 Introduction Currently, many countries are moving toward 100% clean, renewable energy. Solar energy is expected to supply power during peak hours or periods of additional demand. However, regular photovoltaic cells can generate electricity only during the daytime, and mainly during the sunny season; at night they cannot generate electricity, so the electrical energy converted by solar cells is stored in battery banks. This storage enables the utilization of electricity during the night, but on the other hand, battery maintenance increases the overall cost of solar energy production. To increase generation from PV cells and to reduce the overall cost of generation by removing batteries from the system, we propose here an alternative to photovoltaic cells in which thermoradiative cells are superimposed on photovoltaic cells. D. Bhatnagar (B) Electrical Engineering Department, Rajasthan Technical University, Kota, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_11


In this fusion, during the daytime electricity is generated by conventional photovoltaic cells, whereas at night thermoradiative cells generate electricity with negative voltage. Although the power conversion process of a solar cell is constrained by the principle of detailed balance, it can be understood simply as a heat engine. Basically, the cell delivers power because the radiation source, the sun, is very hot in comparison with the cooler solar cell. Alternatively, we can warm the solar cell so that it becomes the hot object in comparison with a cold object, i.e., the night sky. Comparing both cases, one object is hot and the other is cold, so we can study the nature and direction of the current flow. In the second case, where the hot object is the cell, this type of heat engine is called a thermoradiative (TR) cell. A TR cell generates energy because the emission of thermal radiation from the cell exceeds the absorption of irradiation from the surroundings during operation. The actual devices, a solar cell and a thermoradiative cell, are nearly similar, though the operating currents and voltages have opposite signs because the radiative processes are reciprocal [1]. Thermoradiative cells may be used as waste heat recovery units to extract power from a hot source, e.g., an engine's exhaust pipe, a generator's cooling towers, or other sources in industrial manufacturing plants [2]. The cell essentially must be at a higher temperature than the object toward which it radiates [3]. The nighttime photovoltaic cell concept relies on the thermoradiative effect and uses the warmth of the Earth, at around 300 K, as a heat source and the darkness of space, at 3 K, as a heat sink. Regarding bandgap, a conventional single-junction solar cell is made from a semiconductor with a bandgap ranging from 1 to 1.5 eV. Commercial cells are made of silicon with a 1.1 eV bandgap.

2 Thermoradiative Photovoltaics The physical principles governing thermoradiative cells are similar to those behind conventional photovoltaics. When a p-n junction is in thermal equilibrium with its environment in the dark, the random absorption of photons by the cell equals the random emission from the cell, and the Fermi level remains constant throughout the semiconductor [3]. Under illumination and normal photovoltaic cell operation, emission is less than absorption, and this difference generates a photocurrent. Photon absorption increases the total electron and hole carrier densities, which raises the product np above n₀p₀ = nᵢ², the square of the intrinsic carrier density. This excess carrier generation splits the electron and hole Fermi levels within the junction by an amount μ = qV, then referred to as the quasi-Fermi levels (Fig. 1). The current and power produced by a thermoradiative cell at temperature T_c facing a cooler body at temperature T_a are described using the principle of detailed balance, formulated by Shockley and Queisser [2]. An illuminated or biased semiconductor emits a photon flux that is obtained from Planck's generalized law for blackbody radiation and is equal to [1] (Fig. 2)

Fig. 1 Positive voltage generation during daytime and negative voltage generation during nighttime

Fig. 2 Photovoltaic cell under different conditions, showing the bandgap and the path of carriers through the p-n junction


N(T, μ) = (2π / (h³c²)) ∫_{E_g}^{∞} ε(E) E² / (exp((E − μ)/(k_B T)) − 1) dE

where T, μ, ε(E) and E_g are the temperature, the chemical potential driving emission, the energy-dependent emissivity and the bandgap of the semiconductor, respectively, while h is Planck's constant, k_B is Boltzmann's constant, c is the speed of light, and the integral is taken over photon energy E [4]. In our arrangement, the temperature of the cell is maintained at ambient temperature by the Earth while the cell radiates toward space. Consequently, this paper is not concerned with efficiency; the intention is instead to analyze the extracted electrical power per unit area.
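The photon flux integral above can be evaluated numerically. The sketch below assumes unit emissivity, a sharp bandgap cutoff, and a finite upper integration limit; it is an illustration of the formula, not the paper's calculation.

```python
import math

H = 6.62607015e-34   # Planck's constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann's constant, J/K
Q = 1.602176634e-19  # J per eV (elementary charge)

def photon_flux(T, mu_eV, Eg_eV, emissivity=1.0, n=20000, Emax_eV=2.0):
    """Generalized Planck law: photons emitted per m^2 per s above the bandgap.
    T in kelvin; chemical potential mu and bandgap Eg in eV. Trapezoidal rule."""
    Eg, mu = Eg_eV * Q, mu_eV * Q
    dE = (Emax_eV * Q - Eg) / n
    total = 0.0
    for i in range(n + 1):
        E = Eg + i * dE
        w = 0.5 if i in (0, n) else 1.0  # trapezoid endpoint weights
        total += w * emissivity * E**2 / (math.exp((E - mu) / (KB * T)) - 1.0)
    return (2 * math.pi / (H**3 * C**2)) * total * dE

# A thermoradiative cell warmer than its surroundings emits more than it absorbs:
emitted = photon_flux(T=300.0, mu_eV=0.0, Eg_eV=0.1)
absorbed = photon_flux(T=270.0, mu_eV=0.0, Eg_eV=0.1)
print(emitted > absorbed)  # True
```

The net flux difference is what drives the TR cell's current; a negative chemical potential μ (reverse bias) further increases emission relative to absorption.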

3 Integration The operation of the thermoradiative cell relies on sustaining the temperature difference between the cell and the sky. In addition, the upper face of the cell must be optically coupled to the sky. These requirements suggest a package that: (i) is thermally conducting on the bottom surface, which can incorporate a back reflector to direct the cell's radiation toward the sky, (ii) is sealed to shield the cell from the surroundings and limit conductive and convective heat losses, and (iii) incorporates an infrared window on the front to permit radiative heat transfer to the sky. For the active material, there are only a few practicable low-bandgap semiconductors that could serve as starting points for evaluation. InSb can reach a bandgap below 0.1 eV, which can be useful for proof-of-principle devices [2]. However, for appreciable power, even lower bandgaps are required, i.e., the bandgap should be such that it can support a greater power flow so as to create a good voltage. Hg1−xCdxTe, a well-known material used in the infrared sensing industry, can be bandgap-engineered as required for a high-power TR cell with a Cd composition of around x = 0.1 [3]. To produce considerably more power, the cell can be configured to operate at elevated temperature. When sunlight is used to heat the cell up to 330 K, the power rises to 13.3 W/m² for an effective sky temperature of 270 K. The power reaches 200 W/m² if an external heat source is attached and the cell operates at T_c = 471 K. A possible design for such a device is depicted in Fig. 3. In lieu of solar heat, we can also use an industrial exhaust system, which is constantly at high temperature, facing the sky in order to draw away excessive waste heat from the industrial process. To extend power generation to 24 h, a nighttime photovoltaic cell can be used by coupling it with a regular PV cell in place of a conventional PV cell alone.


Fig. 3 Module of the TR cell configuration. a The basic sketch considered throughout the calculations in the paper. b The thermoradiative cell absorbs photons during the daytime and emits photons during the nighttime due to the temperature difference, producing positive voltage during the day and negative voltage at night. c A metamaterial can also be used to perform the same task, which is otherwise performed by coupling the hot side and the emitter side

4 Conclusion In conclusion, the world is pushing toward a carbon-free environment, and daytime solar is not the only sky-facing choice for power production. Nighttime power generation can be obtained by coupling a traditional photovoltaic cell with a thermoradiative cell, given a clear sky during both daytime and nighttime so that absorption and emission take place easily at thermal wavelengths. Deep space on a clear night offers the best low-temperature heat sink for emission and helps to generate more power through the temperature difference.

References
1. M. Bahadori, Passive cooling systems in Iranian architecture. Sci. Am. 144–154 (1978)
2. W. Shockley, H. Queisser, Detailed balance limit of efficiency of p–n junction solar cells. J. Appl. Phys. 510–519 (1961)
3. T. Deppe, J.N. Munday, Nighttime photovoltaic cells: electrical power generation by optically coupling with deep space. ACS Photonics (2019)
4. W.-C. Hsu, J.K. Tong, B. Liao, Y. Huang, S.V. Boriskina, G. Chen, Entropic and near-field improvements of thermoradiative cells. Sci. Rep. 6(1) (2016)

Insights of Kinship Verification for Facial Images—A Review Shikha Sharma and Vijay Prakash Sharma

Abstract Image feature identification for investigating kinship in humans is a very complex task, and researchers are very active in finding optimal solutions. Identifying biological features between images and genetic similarity between blood relations is a very useful process. The Indian subcontinent is moving rapidly toward globalization, and human migration is very common in this region; with it come criminal activities such as human trafficking, kidnapping and missing persons. In some areas, investigating agencies use technology to track and identify criminals and crimes, and seeking and defining biological relationships can be a solution. Keyword Kinship

1 Introduction A DNA test is the biological process for genetic identification of kinship; it is the best choice for this purpose, but it cannot always be utilized directly, and it also requires money and time [1]. Kinship verification is useful for finding missing persons, elderly people with Alzheimer's disease and victims of human trafficking, for instance by detecting non-habitual behavior between humans, including violence, trafficking or other threats. Kinship can also be useful for family relation detection and for creating family databases (Fig. 1). However, individuals' relations are complicated and change over time. The framework aims to build a model for precisely evaluating individuals' dynamic connections. Additionally, the framework finds group relationships from the network. Moreover, the framework provides contextual information that can reasonably improve the accuracy of family relationship verification [2]. Facial similarities are based on distinctive facial S. Sharma (B) CSE Department, Poornima University Jaipur, Jaipur, India e-mail: [email protected] V. P. Sharma IT Department, Manipal University Jaipur, Jaipur, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_12


Fig. 1 Sample images of globally available dataset [2]

features such as the size of the face, the distance between the lips and nose, the size and shape of the nose, and the corners of the eyes, lips and nose. This study presents various comparative solutions and work done in this area. It analyzes some of the proposed work and the limitations of these works.

2 Related Literature Tang et al. [1] designed a feature extraction and selection method for kinship verification. Human faces explicitly or implicitly reveal family linkage, giving rich information about genealogical relations. They address this novel challenge by formulating the problem as a binary classification task and extracting discriminative facial features. Their design comprises three main parts: first, they conducted a controlled online search to collect images of parents along with their children for famous public figures and celebrities (Fig. 2). In the next phase, they evaluated various low-level image features. After selecting the most discriminative facial features, they report a classification accuracy of 70.67% on a test set of image pairs using K-nearest neighbors. They also report the classification accuracy of the top 14 features. For classifying a parent–child image pair as a true or false pair, they used KNN with k = 11 and a support vector machine. They also tested human performance on the parent–child dataset and found the best, most accurate results for father–son images. To improve on this in the future, they will work with a larger image dataset and more human intervention. Wu et al. [2] summarized various aspects of kinship verification from faces. In that paper, they first introduced the topic. The aim is to understand how humans visually perceive and distinguish kin signals from appearances and gestures. Then, they


Fig. 2 Parent–child database [1]

discussed the various methods available for conducting such research, summarizing all available methods in tabular format with respect to accuracy; they also discussed the types of databases available, of which there are mainly six. They provided a detailed tabular analysis of all available databases and, finally, listed the various challenges in the domain, such as working on dynamic faces and performing verification using gestures, behavior, voice and more. Zhao [3] proposed a novel method for kinship verification, which has attracted attention in the computer vision community because of its potential applications ranging from missing-child search to social media analysis. Earlier work used a Mahalanobis distance metric to validate resemblance among images. In this paper, the author designed a powerful approach based on a multiple kernel similarity metric (MKSM). The overall MKSM is a weighted blend of basic similarities and thus has the capacity for feature fusion. The basic similarities are obtained from basic kernels and local features, and the weights are obtained by solving a constrained linear programming problem derived from a large-margin formulation. The author worked on four different datasets and provided comparison tables in which accuracy is checked for different methods with respect to various features on subsets of two datasets, i.e., KFW-I and KFW-II. Tests were performed on father–son, father–daughter, mother–son and mother–daughter facial images. Finally, to advance in this direction, the author wants to work on larger and more complex datasets.
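The KNN classification used in [1] (with k = 11) amounts to a majority vote over the nearest training pairs. A generic sketch over synthetic pair-feature vectors follows; the feature vectors are made-up stand-ins for the low-level facial features the paper extracts, not real data.

```python
import math
from collections import Counter

def knn_predict(train, query, k=11):
    """train: list of (feature_vector, label); returns the majority label of the
    k nearest training pairs by Euclidean distance."""
    dists = sorted((math.dist(vec, query), label) for vec, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Synthetic pair-difference features: small values for true kin pairs (label 1),
# large values for unrelated pairs (label 0). Purely illustrative.
train = [([0.1 * i, 0.05 * i], 1) for i in range(10)] + \
        [([1.0 + 0.1 * i, 0.9 + 0.05 * i], 0) for i in range(10)]

print(knn_predict(train, [0.2, 0.1], k=11))  # 1: classified as a kin pair
```

In [1], the feature vectors would instead be the selected discriminative facial features computed for each candidate parent–child pair.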

114

S. Sharma and V. P. Sharma

Mahpod et al. [4] developed a model for facial kinship verification: a multiview fusion of uniform and non-uniform distance learning. Similarity verification is a popular problem in computer vision. Here, the authors determine whether A and B are kin based on their appearance, where A ∈ {Father, Mother} and B ∈ {Son, Daughter}. To calculate similarity among faces, the authors first used distance learning. The overall procedure comprises three major phases. First, to utilize both symmetric and asymmetric distance learning (DL) based on margin maximization, they derive a hybrid distance learning (HDL) scheme. Second, they designed a network consisting of two HDL layers, namely multiview HDL (MHDL), followed by SVM-based classification. Finally, this scheme is experimentally shown to be robust to the choice of training parameters and to compare favorably with contemporary state-of-the-art methods in terms of kinship verification accuracy when tested on the KinFaceW-I, KinFaceW-II and Cornell KinFace datasets. In the future, MHDL can be extended to deep learning by replacing hand-crafted image descriptors such as HOG and LBP with convolutional layers and training the whole system using backpropagation.

Xia et al. [5] proposed an algorithm for a recent problem: identifying the relationship of people from their images. The complete procedure is divided into phases; the first phase consists of database creation, in which two different databases are built (Figs. 3 and 4). The first database is a collection of pictures of children and their parents (young-age and old-age pictures), and the second consists of pictures of the persons in a family tree. They built a transfer subspace learning-based algorithm between facial pictures of children and their parents and grandparents. Besides, by investigating the semantic meaning of the related metadata, they propose an algorithm to predict the most probable kinship relationship embedded in a picture. Additionally,

Fig. 3 Family photograph [5]


Fig. 4 Family tree [5]

they demonstrate that contextual information can considerably improve the relationship verification accuracy through experiments on the family face database.

Lu et al. [6] identified a unique facial analysis problem. They characterized the connection between two individuals who are biologically related with overlapping genes. In this paper, they analyzed four different kinds of kin relation: father–son, father–daughter, mother–son and mother–daughter. Applications of this research include organizing photos by blood relation, image tagging, suggesting family-circle people on social media, and finding missing children or guardians. A challenge in this direction is that very few kinship databases are publicly available. The authors propose a novel neighborhood repulsed metric learning (NRML) strategy for kinship verification. To utilize multiple feature descriptors and extract complementary information, they further propose a multiview NRML that seeks a common distance metric over multiple features to improve kinship verification performance. In the future, they plan to look for more discriminative features and embed them in their work.

Zhou et al. [7] suggested a novel approach to overcome shortcomings of the popular distance metric learning (DML) techniques for kinship verification from facial images (KVFI), which generally use batch learning to obtain the desired kin similarity metric. DML has drawbacks: for high-dimensional kin data, training and testing are computationally inefficient, and the process has limited scalability in real applications. To resolve these problems, the authors designed a scalable similarity learning (SSL) method for kinship verification. The SSL strategy trains the kinship similarity metric on social faces by introducing


the truncated gradient to induce sparsity in online learning of the diagonal bilinear similarity function, making it computationally efficient and scalable for practical KVFI with high-dimensional kin data. A multiview SSL algorithm is developed for effective fusion of kinship similarity through an optimal combination of the diagonal bilinear similarity models from various feature representations in an iterative joint procedure. They also show that their method yields competitive results against state-of-the-art distance metric learning for kinship verification on two kinship databases. The authors are keen to investigate discriminative deep metric learning for kinship verification in the near future.

Yan et al. [8] proposed a system for kinship verification named the discriminative compact binary face descriptor (D-CBFD), based on weakly supervised feature learning. Unlike most existing kinship verification methods, where hand-crafted features are used for face representation, their strategy learns a discriminative face representation from a collection of weakly labeled examples. First, they compute a pixel difference vector (PDV) at local patches. Then, they learn a discriminative projection to map the PDVs into a low-dimensional binary feature space, where several constraints, such as energy preservation, geometrical separation and code distribution, are applied. In the future, they plan to apply the proposed strategy to other computer vision applications to further demonstrate the effectiveness of the work.

Dahan et al. [9] proposed an approach to solve the kinship problem using deep learning, offering a novel self-learning deep model that learns basic features from multiple faces. Owing to the self-learning design, the number of parameters is reduced by half without affecting performance. To classify the weighted features, they used different classifiers (Fig. 5).

Fig. 5 Flow control of self-learned weighted face features [9]


Fig. 6 Three generation of family [9]

They trained their model in the strict setting of the kinship verification problem and showed that the trained model won the RFIW2018 challenge, scoring state-of-the-art results. The authors suggest several directions for future work: the relaxed settings presented in the challenge can be used, and the self-adjusted weighted layer can be designed for various invariances, such as invariance to age or skin color diversity (Fig. 6).

3 Results

Survey results are depicted in Table 1, which shows the models used by various researchers together with their input parameters (database, image pairs, resolution) and output parameters (accuracy). The bar chart in Fig. 7 shows the accuracy achieved by the various models for the father–son and father–daughter pairs. The bar chart in Fig. 8 shows the corresponding analysis for the mother–son and mother–daughter pairs. The pie chart in Fig. 9 gives a pictorial analysis of the mean accuracy achieved by each model on each database; as per this analysis, the MKSM approach on the KFW-II dataset achieved the highest accuracy for kinship verification.

Table 1 Comparison of accuracy (percentage) of different methods

| Approach name | Database | Input parameters (image pairs, resolution) | Father–son | Father–daughter | Mother–son | Mother–daughter | Mean |
|---|---|---|---|---|---|---|---|
| Pictorial structural model [1] | KFW-I | 533 pairs, 64 × 64 | 73.5 | 67.5 | 66.1 | 73.1 | 70.1 |
| Pictorial structural model [1] | KFW-II | 1000 pairs, 64 × 64 | 73.5 | 74.7 | 77.8 | 78.0 | 77.0 |
| MKSM [3] | KFW-I | 533 pairs, 64 × 64 | 78.7 | 78.38 | 81.01 | 83.90 | 81.01 |
| MKSM [3] | KFW-II | 1000 pairs, 64 × 64 | 77.8 | 77.40 | 81.20 | 82.00 | 81.20 |
| MHDL [4] | KFW-I | 533 pairs, 64 × 64 | 78.22 | 69.40 | 66.81 | 70.10 | 71.13 |
| MHDL [4] | KFW-II | 1000 pairs, 64 × 64 | 78.20 | 70.00 | 71.20 | 67.80 | 71.80 |
| MNRML [6] | KFW-I | 533 pairs, 64 × 64 | 72.5 | 66.5 | 66.2 | 72.0 | 69.9 |
| MNRML [6] | KFW-II | 1000 pairs, 64 × 64 | 76.9 | 74.3 | 77.4 | 77.6 | 76.5 |
| Multi-view SSL [7] | KFW-I | 533 pairs, 64 × 64 | 79.4 | 72.7 | 69.3 | 77.1 | 74.6 |
| Multi-view SSL [7] | KFW-II | 1000 pairs, 64 × 64 | 78.1 | 70.9 | 73.1 | 68.5 | 72.6 |
| D-CBFD [8] | KFW-I | 533 pairs, 64 × 64 | 77.6 | 71.6 | 74.1 | 79.5 | 75.6 |
| D-CBFD [8] | KFW-II | 1000 pairs, 64 × 64 | 79.0 | 74.2 | 75.4 | 77.3 | 78.5 |
| D-CBFD [8] | Cornell | 150 pairs, 100 × 100 | 61.5 | 57.0 | 58.8 | 59.9 | 59.3 |


Fig. 7 Kinship technique analysis for father–child relationships (accuracy, in %, of each method on the KFW-I, KFW-II and Cornell databases for the father–son and father–daughter pairs)

4 Findings

1. In most papers, researchers commonly used the KinFaceW-I, KinFaceW-II and Cornell KinFace databases.
2. The facial images in these databases are mostly captured in a homogeneous environment.
3. A common pattern in every work is the use of father–son, father–daughter, mother–son and mother–daughter facial picture databases, but a single approach that produces better accuracy for all four types of relationship has not been discovered yet.
4. Most of the research is done on static images only.
5. No work has been done on sibling relationships.


Fig. 8 Kinship technique analysis for mother–child relationships (accuracy, in %, of each method on the KFW-I, KFW-II and Cornell databases for the mother–son and mother–daughter pairs)

5 Conclusion and Future Work

In this study, we analyzed and discussed various methods for validating kin relationships between human faces, described the renowned databases used worldwide to validate kin relations, and compared the algorithms presented by researchers in terms of accuracy. We also discussed the challenges in this direction, identified gaps to be addressed in order to advance toward the goal, and outlined the application areas of the topic. As future work, a model can be designed that gives better results, with higher accuracy, for all four above-mentioned relations.


Fig. 9 Mean analysis for kinship technique (pie chart of the mean accuracies of each method–database combination: PS model, MKSM, MHDL, MNRML, Multi-view SSL and D-CBFD on KFW-I, KFW-II and Cornell)

References

1. R. Fang, K.D. Tang, N. Snavely, T. Chen, Towards computational models of kinship verification, in 2010 IEEE International Conference on Image Processing (IEEE, 2010), pp. 1577–1580
2. X. Wu, E. Boutellaa, X. Feng, A. Hadid, Kinship verification from faces: methods, databases and challenges, in 2016 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC) (IEEE, 2016), pp. 1–6
3. Y.G. Zhao, Z. Song, F. Zheng, L. Shao, Learning a multiple kernel similarity metric for kinship verification. Inf. Sci. 430, 247–260 (2018)
4. S. Mahpod, Y. Keller, Kinship verification using multiview hybrid distance learning. Comput. Vis. Image Underst. 167, 28–36 (2018)
5. S. Xia, M. Shao, J. Luo, Y. Fu, Understanding kin relationships in a photo. IEEE Trans. Multim. 14(4), 1046–1056 (2012)
6. J. Lu, X. Zhou, Y.P. Tan, Y. Shang, J. Zhou, Neighborhood repulsed metric learning for kinship verification. IEEE Trans. Pattern Anal. Mach. Intell. 36(2), 331–345 (2013)
7. X. Zhou, H. Yan, Y. Shang, Kinship verification from facial images by scalable similarity fusion. Neurocomputing 197, 136–142 (2016)
8. H. Yan, Learning discriminative compact binary face descriptor for kinship verification. Pattern Recogn. Lett. 117, 146–152 (2019)
9. E. Dahan, Y. Keller, SelfKin: self adjusted deep model for kinship verification (2018). arXiv preprint arXiv:1809.08493

Development of a Novel Approach for Classification of MRI Brain Images Using DWT by Integrating PCA, KSVM and GRB Kernel Preeti Arora and Rajeev Ratan

Abstract In today's world, automated computation is crucial in the medical field, and precise classification of magnetic resonance (MR) brain images is equally imperative for medical analysis and its interpretation by medical practitioners. In this context, various techniques have already been proposed. This research article focuses on the wavelet transform technique, which is first applied on MR images for feature extraction, followed by the principal component analysis (PCA) technique, a dimensionality reduction method that simplifies the datasets and in turn increases the discriminative power of the features. The reduced features are then fed to a kernel support vector machine (KSVM) for classification of an image as normal or abnormal. The experimentation has been carried out with four different kernels, namely the linear kernel (LIN), homogeneous polynomial (HPOL), inhomogeneous polynomial (IPOL) and Gaussian radial basis (GRB). The overall results show that the accuracy of the GRB kernel is the best and that the processing time has also been reduced. The proposed technique has been compared with other techniques available in the literature, and it was found that the use of DWT, PCA and KSVM along with the GRB kernel obtained the best results. A considerable amount of data has been used in the experimentation, and it can be concluded from the results that, using the proposed methodology, the time taken for classification of a segmented image is reduced drastically, which could be a turning point in the medical field for the diagnosis of tumors.

Keywords MRI · Pre-processing · Feature extraction · Feature reduction · Discrete wavelet transform (DWT) · Principal component analysis (PCA) · Kernel support vector machine (KSVM) · Classification

P. Arora (B) · R. Ratan Department of Electronics and Communication Engineering, MVN University, Palwal, Haryana, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_13


1 Introduction

Magnetic resonance imaging (MRI) is a technique that uses a strong magnet and radio waves to capture detailed pictures of the inside of the human body. It is very beneficial for the diagnosis of tumors and is heavily used for research purposes. Among the various techniques available to examine the human body, such as CT scans and biopsy, MRI is best suited for clinical and research purposes [1, 2]. A brain tumor is an uncontrollable growth of cells in the brain which can be cancerous or noncancerous [3]. Tumors are further classified as benign and malignant. Benign is the initial stage of a tumor, which does not spread to its surrounding tissues, while a malignant tumor spreads to its surrounding tissues and consists of cancerous cells. So, detection of a tumor at an early stage is necessary [4, 5], and accurate classification leads to good treatment. In this context, researchers have proposed multiple approaches that fall into two categories, namely supervised and unsupervised. Supervised classification, including support vector machines (SVM) and k-nearest neighbors (KNN), gives higher classification accuracy compared to unsupervised classification such as fuzzy c-means and self-organization feature maps (SOFM) [2]. In this paper, a wavelet transform technique is proposed which is first applied on MR images for feature extraction, followed by the principal component analysis (PCA) technique, which simplifies the datasets as a dimensionality reduction technique. The reduced features are then applied to a kernel SVM for classification of an image as normal or abnormal.

2 Existing Methods

Various techniques are available in the literature for tumor detection and classification [4–14]. Table 1 shows the different segmentation, feature extraction and classification techniques along with their results and accuracy. As the table shows, the computational time of most of the techniques was quite high, and the datasets used were not sufficiently large to give accurate output. Simple feature extraction techniques and classifiers yield lower performance, so a hybrid approach and large datasets are required to enhance the accuracy.

3 Proposed Methodology

The proposed methodology is a hybrid model for MRI classification consisting of the following techniques: discrete wavelet transform (DWT), principal component analysis (PCA) and kernel support vector machine (KSVM). It consists of three stages, namely a feature extraction stage, a feature reduction stage and a classification stage.

Table 1 Work related to tumor detection and classification

| Author name and year | Objective | Feature extracted/feature extraction method | Classification method | Result and accuracy |
|---|---|---|---|---|
| Riza Mittal, Timsi Arora, 2011 | Detection and parameter extraction of brain tumor | Shape features extracted using marker-based watershed algorithm | — | Reduces the over-segmentation problem of watershed segmentation; 11–13 times faster than manual segmentation |
| Pauline John, 2012 | Classification of brain images into normal, benign and malignant tumor | Texture feature extraction using GLCM | Probabilistic neural network (PNN) | 100% |
| S. Sivaperuman, M. Sundhararajan, 2013 | Tumor detection using feature extraction | Structural features using watershed (gradient and marker controlled) | — | Higher precision and better recall rates |
| Dena Nadir George, Hashem B. Jehlol, Anwar Shubhi, 2015 | Diagnosis of brain tumor | Shape features extracted using supervised machine learning algorithm | C4.5 and multi-layer perceptron (MLP) | C4.5: 91%; MLP: 95% |
| N. Elavarasan, K. Mani, 2015 | A survey on feature extraction techniques | Principal component analysis (PCA), linear discriminant analysis (LDA) | SVM | — |
| Ahmad Chaddad, 2015 | Automatic feature extraction | Gaussian mixture model (GMM) | NBC, SVM, PNN | High accuracy performance |
| Swapnil R. Talrandhe, Amit Pimplakar, Ankita Kendhe, 2016 | Implementation of brain tumor detection | Texture, shape and color using histogram-oriented gradient (HOG) | SVM | More accurate results than previous techniques |
| Manu Gupta, B.V.V.S.N. Prabhakar Rao, Vekteshwaran Rajagopalan, 2016 | Brain tumor detection | Statistical texture and morphological features | SVM, LDA, Naive Bayes classifier (NBC) | SVM (linear kernel): 100% |
| Nilesh Bhaskarrao Bahadure, Arun Kumar Ray, Harpal Thethi, 2017 | Brain tumor detection and feature extraction | Texture and histogram-based features | SVM | 96.51% |
| Nilesh Bhaskarrao Bahadure, Arun Kumar Ray, Harpal Thethi, 2017 | Optimization techniques for brain tumor detection | Texture features using GLCM; feature selection using cumulative variance method (CVM), genetic algorithm (GA), independent component analysis (ICA) | SVM, KNN, PCA, PNN, BPNN | Gabor wavelet + CVM + BPNN: 97% |
| A. Harshvardhan, Suresh Babu, T. Venugopal, 2017 | Analysis of various feature extraction methods for tumor detection and classification | Texture features using GLCM and histogram-based features | SVM | Combination of histogram, GLCM and GLRLM achieves better accuracy |
| Reema Matthew A, Achala Prasad, Babu Anto P, 2017 | A review on feature extraction techniques | Texture features | — | Produces better output for segmentation and classification of tumor |
| P. Kalavathi, R. Ilakkaiyamuthu, 2017 | Segmentation of brain tumor using hybrid feature extraction method | Based on wavelet and GLCM feature extraction methods | SVM | Identifies the exact location of the tumor quickly with high accuracy |
| T. Rajesh, R. Suja Mani Malar, M.R. Geetha, 2018 | Brain image classification | Feature extraction using rough set theory | Particle swarm optimization neural network (PSONN) | 96% |
| N. Varuna Shree, T.N.R. Kumar, 2018 | Identification and classification of brain tumor | Texture features extracted using GLCM | PNN | 100% |
| Narbada Jhalwa, Payal Shah, Rajendra Sutar, 2018 | Hybrid feature extraction technique to detect brain tumor | Statistical feature extraction using a hybrid technique (analytical and algorithmic means and GLCM properties) | — | Very useful technique to extract the required features |
| Kavin Kumar K, Meera Devi T, Maheswaran S, 2018 | Brain tumor detection and classification | Hybrid feature extraction: modified multi-texton histogram (MMTH) and multi-texton microstructure descriptor (MMTD) | SVM, k-nearest neighbors (KNN), extreme learning machine (ELM) | SVM with the combination of MMTH and MMTD gives more accuracy compared with the rest of the classifiers |
| Anne Humeau-Heurtier, 2019 | A survey on texture feature extraction methods | Texture features (seven classes) | — | Some methods are obsolete nowadays; entropy-based and learning-based approaches are the latest |
| Rajeev Ratan, P.G. Kohli, Sanjay K. Sharma, Amit K. Kohli, 2016 | MRI-based segmentation technique for breast tumor segmentation | Feature extraction method | — | Breast tumor tissues were successfully segmented in less time with more accuracy |
| K. Vaidehi, T.S. Subashini, 2014 | Detection and categorization of benign and malignant mass regions from breast ROI | GLCM texture feature extraction (contrast, correlation, energy and homogeneity) | AdaBoost, backpropagation neural networks, sparse representation classifier (SRC) | The extracted GLCM descriptors along with the SRC classifier could be effectively used in classifying breast masses in digital mammograms; SRC obtained the highest accuracy (93.75%) |
| Rajeev Ratan, Sanjay Sharma, S.K. Sharma, 2009 | Brain tumor detection | E, G, H (edge, gray, contrast) | Watershed segmentation | The results show increased speed of segmentation compared with manual segmentation |


Fig. 1 Block diagram representing different stages of proposed methodology

It has also been illustrated in Fig. 1.
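The three stages can be chained end to end. The following sketch assumes scikit-learn and uses random stand-ins for the stage-1 wavelet features (the paper's own experiments were run in MATLAB, so everything here is illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Stand-in for stage 1: one 1024-dimensional DWT feature vector per MR
# image; "abnormal" scans are shifted so the two classes are separable.
X = np.vstack([rng.normal(0.0, 1.0, (30, 1024)),   # normal scans
               rng.normal(0.5, 1.0, (30, 1024))])  # abnormal scans
y = np.array([0] * 30 + [1] * 30)                  # 0 = normal, 1 = abnormal

# Stage 2 (PCA: 1024 -> 19 features) chained with stage 3 (KSVM; the
# GRB kernel corresponds to scikit-learn's 'rbf' kernel).
model = make_pipeline(PCA(n_components=19), SVC(kernel="rbf"))
model.fit(X, y)
train_acc = model.score(X, y)
```

Because the two stages are wrapped in a single pipeline, the PCA projection fitted on the training images is reused automatically when classifying new scans.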

3.1 Feature Extraction Scheme Using DWT

The discrete wavelet transform (DWT) transforms a discrete-time signal into a discrete wavelet representation. It gives both time and frequency analysis of the signal [15]. It is a very efficient tool for feature extraction and allows analysis of an image at different levels of resolution, although it takes more memory and is costlier. The PCA technique is then used to reduce the extracted features [1, 16]. The basic theory of wavelet decomposition is given below.

The continuous wavelet transform of a signal x(t) with respect to the wavelet \Psi(t) is defined as

W_\Psi(a, b) = \int_{-\infty}^{\infty} x(t)\, \Psi_{a,b}(t)\, dt   (1)

\Psi_{a,b}(t) = \frac{1}{\sqrt{a}}\, \Psi\left(\frac{t - b}{a}\right)   (2)

where a is the dilation factor and b is the translation parameter. The DWT can be given as:


\mathrm{DWT}\, x(n) = \begin{cases} d_{j,k} = \sum_n x(n)\, h_j^{*}(n - 2^{j}k) \\ a_{j,k} = \sum_n x(n)\, g_j^{*}(n - 2^{j}k) \end{cases}   (3)

where d_{j,k} is the detail component of the signal x(n), a_{j,k} is the approximation component of x(n), h(n) and g(n) are the coefficients of the high-pass and low-pass filters, respectively, and j and k are the wavelet scale and translation factor [1, 12].

Figure 2 shows the 2D DWT with its four sub-bands LL, LH, HL and HH. The image is first passed through the h(n) and g(n) filters, which results in the four sub-bands [12]: LL is the approximation component of the image, while LH, HL and HH are the detailed components of the image.
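A single level of this decomposition can be written out directly. The sketch below hand-rolls the Haar case in NumPy (a stand-in for a wavelet library such as PyWavelets); the sub-band naming follows the row-filter/column-filter convention and may differ between implementations:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar DWT, producing the LL, LH, HL, HH
    sub-bands, each half the size of the input in both dimensions."""
    # Filter along rows: low-pass (sum) and high-pass (difference),
    # scaled by 1/sqrt(2) so the transform preserves energy.
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    # Filter along columns of each half-band.
    LL = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)  # approximation
    LH = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)  # detail
    HL = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)  # detail
    HH = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)  # detail
    return LL, LH, HL, HH

img = np.arange(256 * 256, dtype=float).reshape(256, 256)  # toy "MR slice"
LL, LH, HL, HH = haar_dwt2(img)  # each sub-band is 128 x 128
```

Because the Haar filters are orthonormal, the total energy of the four sub-bands equals that of the input image, and feeding LL back into the same function yields the next decomposition level.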

3.2 Feature Reduction Using PCA

Feature reduction is an essential step in tumor classification. Excessive features take extra memory, so it is necessary to discard unnecessary features to reduce complexity and make the process easier; PCA is an effective technique for this [2]. The advantages of this method are its low computational cost and complexity [1]. The following algorithm is used to find the principal components of the input matrix to the neural network [2]. Afterwards, the input matrix consists of only these principal components, and the size of each input vector is reduced from 1024 to 19.

Fig. 2 Schematic diagram of 2D DWT


Algorithm: Let X be an input data set (X: a matrix of dimensions M × N) and perform the steps shown in Fig. 3. The algorithm extracts the principal components of the input vector [15].
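Those steps amount to centering the data and projecting it onto the leading principal directions. A minimal NumPy sketch via the SVD (the 1024 → 19 reduction is taken from the text; the data here are random placeholders):

```python
import numpy as np

def pca_project(X, k):
    """Project X onto its top-k principal components:
    (1) subtract the column means, (2) find the principal directions
    with an SVD, (3) return the reduced representation."""
    Xc = X - X.mean(axis=0)                    # step 1: center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                       # steps 2-3: project

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 1024))    # 60 images x 1024 wavelet coefficients
X19 = pca_project(X, 19)           # reduced to 19 features, as in the text
```

The columns of the result are ordered by decreasing variance, so keeping the first k components retains as much of the data's variance as any k-dimensional linear projection can.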

3.3 Kernel SVM

The kernel SVM is used for classification of an MRI image as malignant or benign. Among the various types of SVM techniques available, the kernel SVM is the best in terms of accuracy [17, 18]. It is an example of a supervised technique with an enhanced learning algorithm used for classification. It uses a nonlinear mapping function to transform the input space into a higher-dimensional feature space. The basic difference between SVM and KSVM is that the KSVM decision boundary is nonlinear while that of the plain SVM is linear [19–21].
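In scikit-learn terms, the Gaussian radial basis (GRB) kernel is the 'rbf' kernel, K(x, x') = exp(-γ‖x − x'‖²). A sketch with synthetic 19-dimensional feature vectors standing in for the PCA output:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Synthetic PCA-reduced features: 19 values per image, two classes.
X = np.vstack([rng.normal(0.0, 1.0, (40, 19)),   # normal
               rng.normal(2.0, 1.0, (40, 19))])  # abnormal
y = np.array([0] * 40 + [1] * 40)

# kernel='rbf' selects the Gaussian radial basis kernel; gamma and C
# control the kernel width and the soft-margin penalty.
clf = SVC(kernel="rbf", gamma="scale", C=1.0)
clf.fit(X, y)
train_acc = clf.score(X, y)
```

Swapping the `kernel` argument to 'linear' or 'poly' reproduces the other kernels compared in the experiments.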

4 Performance Metrics

Mean: The mean is the sum of all pixel values in an image divided by the total number of pixels:

M = \frac{1}{m \times n} \sum_{x=0}^{m-1} \sum_{y=0}^{n-1} f(x, y)   (4)

Standard deviation (SD): the deviation from the mean value [17]:

\mathrm{SD}(\sigma) = \sqrt{\frac{1}{m \times n} \sum_{x=0}^{m-1} \sum_{y=0}^{n-1} \left(f(x, y) - M\right)^2}   (5)

Kurtosis: used to characterize the probability distribution of a random variable:

\mathrm{Kurt}(x) = \frac{1}{M \times N}\, \frac{\sum \left(f(x, y) - M\right)^4}{\mathrm{SD}^4}   (6)

Skewness (S_k): gives an estimate of the asymmetry of the distribution:

S_K(X) = \frac{1}{M \times N}\, \frac{\sum \left(f(x, y) - M\right)^3}{\mathrm{SD}^3}   (7)


Fig. 3 Proposed PCA algorithm


Fig. 4 Block diagram of proposed work

Entropy (E): a measure of the randomness of the textural image:

E = -\sum_{x=0}^{m-1} \sum_{y=0}^{n-1} f(x, y) \log_2 f(x, y)   (8)

Contrast: the difference in brightness between objects or regions, measuring a pixel's intensity against its neighbors over the image:

\mathrm{Con} = \sum_{x=0}^{m-1} \sum_{y=0}^{n-1} (x - y)^2 f(x, y)   (9)

Variance: the expectation of the squared deviation of a random variable from its mean:

\sigma^2 = \frac{\sum (x - \mu)^2}{N}   (10)

Smoothness: a method to create a less detailed and less noisy image with the help of a low-pass filter.

Correlation: the spatial dependency between pixels [2]:

\mathrm{Corr} = \frac{\sum_{x=0}^{m-1} \sum_{y=0}^{n-1} (x \cdot y)\, f(x, y) - M_x M_y}{\sigma_x \sigma_y}   (11)

where M_x and M_y are the means and \sigma_x and \sigma_y are the standard deviations in the horizontal and vertical spatial directions.

Energy (En): used to describe a measure of information [17]:


\mathrm{En} = \sum_{x=0}^{m-1} \sum_{y=0}^{n-1} f^2(x, y)   (12)
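The first-order metrics above translate directly into NumPy. The sketch below computes Eqs. (4)–(8) and (12) for a toy grayscale image with values in (0, 1]; note that, as in the text, Eq. (8) applies the logarithm to the pixel values themselves (entropy is more commonly computed on the normalized intensity histogram):

```python
import numpy as np

def image_metrics(f):
    """First-order statistics of a 2-D grayscale image f with values
    in (0, 1]: mean (4), SD (5), kurtosis (6), skewness (7),
    entropy (8) and energy (12)."""
    m = f.mean()
    sd = f.std()
    return {
        "mean": m,
        "sd": sd,
        "kurtosis": np.mean((f - m) ** 4) / sd ** 4,
        "skewness": np.mean((f - m) ** 3) / sd ** 3,
        "entropy": -np.sum(f * np.log2(f)),   # requires f > 0
        "energy": np.sum(f ** 2),
    }

img = np.linspace(0.01, 1.0, 64 * 64).reshape(64, 64)  # toy image
metrics = image_metrics(img)
```

The GLCM-based measures (contrast, correlation) would instead be computed on a co-occurrence matrix rather than on the raw pixel values.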

5 Simulation Results

Figures 5 and 6 show the results of brain MRIs with malignant tumors. The complete simulation is done using MATLAB. Figure 4 shows the block diagram of the proposed work: first, an MRI image is taken and segmented; then features are extracted using the DWT technique, followed by feature reduction using PCA. Finally, the KSVM classifier is used for detection of benign and malignant tumors. Table 2 summarizes the different features extracted in the simulation process: texture features such as contrast, correlation, entropy and energy, and intensity-based features such as mean, variance and standard deviation. Feature extraction is necessary to extract the relevant features of the image.

5.1 Performance Evaluation

For optimization of the whole process, performance evaluation is necessary. The experiment was performed with four different kernels; Tables 3 and 4 and Figs. 7 and 8 show the analysis of the simulation results.

Fig. 5 Results of malignant tumor

Fig. 6 Results obtained when malignant tumor is detected

Table 2 Extracted features

| Feature | Value (Fig. 5) | Value (Fig. 6) |
|---|---|---|
| Mean | 0.00567905 | 0.00441488 |
| Standard deviation | 0.089635 | 0.0897061 |
| Entropy | 3.55654 | 3.1204 |
| RMS | 0.0898027 | 0.0898027 |
| Variance | 0.00803906 | 0.00803634 |
| Contrast | 0.240823 | 0.256952 |
| Skewness | 0.921909 | 0.929255 |
| Energy | 0.721137 | 0.741896 |
| Smoothness | 0.954804 | 0.942606 |
| Kurtosis | 5.7797 | 7.82183 |
| IDM | 0.176715 | 0.392431 |
| Correlation | 0.11622 | 0.110463 |

Table 3 Parameter analysis when malignant tumor is detected

| Technique | Accuracy (%) |
|---|---|
| RBF | 80 |
| Linear | 90 |
| Polygonal | 70 |
| Quadratic | 80 |

Table 4 Parameter analysis when malignant tumor is detected

| Kernel function | Accuracy (%) |
|---|---|
| RBF | 70 |
| Linear | 90 |
| Polygonal | 70 |
| Quadratic | 90 |

Fig. 7 Performance analysis when malignant tumor is detected

Fig. 8 Performance analysis when malignant tumor is detected

Figures 7 and 8 plot the accuracy of the four kernels used in the simulation process; the results show that the use of DWT, PCA and KSVM along with the GRB kernel obtained the best results.


Fig. 9 Computation times at different stages

Table 5 Computation time

| Technique | Time (s) |
|---|---|
| Feature extraction | 0.023 |
| Feature reduction | 0.0187 |
| SVM classification | 0.0031 |
| Total computation time | 0.0448 |

5.2 Time Analysis

For each 256 × 256 image, the mean computation time is 0.023 s for feature extraction, 0.0187 s for feature reduction and 0.0031 s for SVM classification. Classification takes less time than feature extraction and feature reduction. The total estimated time is 0.0448 s (Fig. 9; Table 5).

6 Conclusion and Future Scope

This research article developed a novel approach for classification of MRI brain images using DWT by integrating PCA, KSVM and the GRB kernel. The experiment was performed with four non-identical kernels. The results show that the combination of DWT, PCA and KSVM with the GRB kernel gives maximum accuracy compared to the other techniques. It can be concluded from the results that the time taken for classification of a segmented image is reduced significantly, which could be a turning point in the medical field for the diagnosis of tumors. Future work should concentrate in the following directions: the method could be applied to foreign images, or to images of a single person over different time spans, to study their progressive treatment; use of advanced wavelet transform techniques could further improve the computational time; and some hybrid technology may be designed to improve the classification accuracy.


References

1. V.S. Mehekare, S.R. Ganorkar, Detection of brain tumor using discrete wavelet transform, PCA & KSVM. Int. J. Innov. Res. Comput. Commun. Eng. 5(5) (2017)
2. Y. Zhang, L. Wu, An MR brain image classifier via principal component analysis and kernel support vector machine. Prog. Electromag. Res. 130, 369–388 (2012)
3. S. Sawakare, D. Chaudhari, Classification of brain tumor using discrete wavelet transform, principal component analysis and probabilistic neural network. Int. J. Res. Emer. Sci. Technol. 1(6) (2014)
4. K. Kumar, K.T. Devi, An efficient method for brain tumor detection using texture features and SVM classifier in MR images. Asian Pac. J. Cancer Prev. 19(1), 2789–2794 (2018)
5. A.H. Heurtier, Texture feature extraction methods: a survey. IEEE Access 7(4), 8975–9000 (2019)
6. S.B. Gaikwad, M.S. Jhoshi, Brain tumor classification using principal component analysis and probabilistic neural network. Int. J. Comput. Appl. 120(3), 5–9 (2015)
7. N.V. Shree, T.N.R. Kumar, Identification & Classification of Brain Tumor MRI Images with Feature Extraction Using DWT & PNN (Springer, 2018), pp. 23–30
8. N. Jhalwa, P. Shah, R. Sutar, A hybrid approach for MRI based statistical feature extraction to detect brain tumor. IOSR J. VLSI Signal Process. 8(2), 30–37 (2018)
9. T. Rajesh, R.S.M. Malar, M.R. Geetha, Brain Tumor Detection Using Optimisation Classification Based on Rough Set Theory (Springer, 2018)
10. A. Harshavardhan, S. Babu, T. Venugopal, Analysis of feature extraction methods for the classification of brain tumor detection. Int. J. Pure Appl. Math. 117(7), 147–155 (2017)
11. N.B. Bahadure, A.K. Ray, H. Thethi, Feature extraction & selection with optimization technique for brain tumor detection from MRI, in International Conference on Computational Intelligence in Data Science, ICCIDS (Chennai, 2017), pp. 5090–5595
12. N.B. Bahadure, A.K. Ray, H. Thethi, Image analysis for MRI based brain tumor detection & feature extraction using biologically inspired BWT & SVM. Int. J. Biomed. Imaging 2017, 1–12 (2017)
13. P. Kalavathi, R. Ilakkiyamuthu, Feature extraction based hybrid method for segmentation of brain tumor in MRI brain images. IJCST: Int. J. Comput. Sci. Trends Technol. 5(1), 95–100 (2017)
14. M. Gupta, B.V.V.S.N. Prabhakar Rao, Brain tumor detection in conventional MR images based on statistical texture and morphological features, in International Conference on Information Technology (IEEE, Bhubaneswar, 2016), pp. 129–133
15. E.S.A.E. Dahshan, T. Hosney, A.B.M. Salem, Hybrid Intelligent Techniques for MRI Brain Image Classification (Elsevier, 2010), pp. 433–441
16. S.C.K. Kumar, H.D. Phaneendra, Classification of tumors in brain MRI images with hybrid of global and local DWT features using decision tree. Int. J. Recent Technol. Eng. 8(3) (2019)
17. G. Mahalakshmi, G. Chellam Heren, Segmentation and classification of brain tumor MRI images using support vector machine. Int. J. Comput. Sci. Eng. 7(8) (2019)
18. A. Islam, M.F. Hossain, C. Saha, A new hybrid approach for brain tumor classification using BWT+KSVM, in International Conference on Advances in Electrical Engineering (IEEE, Dhaka, 2017), pp. 241–246
19. P.Y. Khumbhar, H. Shah, N. Bandgul, K. Dargad, Effect of KSVM algorithm in brain tumor detection and extraction using MRI images. Int. J. Res. Emer. Sci. Technol. 4(2), 10–14 (2017)
20. N.H. Rajini, R. Bhavani, Classification of MRI brain images using K-nearest neighbor and artificial neural network, in International Conference on Recent Trends in Information Technology (IEEE, Chennai, 2011), pp. 863–868
21. S. Yashaswini, Early detection of tumors in human brain MRI using wavelet and support vector machine. Int. J. Adv. Res. Comput. Sci. Technol. 5(2), 52–57 (2017)

Optimization of Sustainable Performance: Housing Project, Bahir Dar, Ethiopia

Ambuj Kumar and Harveen Bhandari

Abstract Housing has been a basic human need through the ages. In Ethiopia, the concerns of sustainability do not seem to be incorporated as a significant determining factor in the architectural design process. The country lacks knowledge and expertise in the design and construction of sustainable housing projects, and such practice is rare. Multiple and contradictory sets of issues affect the sustainable planning and design of housing projects, which makes sustainability a minor concern for upcoming housing projects in Ethiopia. Neither much attention has been paid nor any initiative taken to adopt alternative performance optimization techniques in design and construction to create sustainable buildings. This paper therefore examines gaps in the existing design and construction schemes with respect to the envisaged goals of a sustainable environment, thereby contributing to overall socioeconomic development. To overcome the crisis that the construction industry faces today, two aspects need attention: first, finding measures to optimize sustainability performance in the design and construction of housing; and second, devising methods to achieve sustainability and optimization by assimilating various parameters from the beginning, such as architectural design, construction techniques and material selection, to achieve better results. The paper examines a housing project in the context of a sustainable design approach to optimize performance in housing by assimilating selected parameters within the framework of sustainability, underlining the concept and benefits of sustainable buildings. The research aims to devise solutions that will contribute to user well-being and the protection of our environment.

Keywords Optimization · Sustainable housing · Performance · Integration · Implementation

A. Kumar · H. Bhandari (B) Chitkara School of Planning and Architecture, Chitkara University, Punjab, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_14


1 Introduction

The rapid process of urbanization is leading to similar architecture everywhere, specifically in developing countries, accompanied by a deficiency of affordable housing. This continues to be one of the main development challenges of the twenty-first century. Like other African cities, Ethiopian cities lack the fundamental necessities required to sustain livelihoods [14]. The upcoming housing projects in Ethiopia do not treat sustainability as a major concern at the design stage or later during execution. The concept of sustainability and sustainable housing projects is crucial for developing countries, as it is necessary to build sustainable societies and contribute to a sustainable environment. Sustainable architecture will remain a hybrid, because the concept of sustainability takes knowledge, attitudes and techniques from varied disciplines, and ways of achieving it also need integration from these fields of study. The contribution of individual disciplines does not yield significant results in isolation, so the integration of varied fields is necessary to achieve better optimization and performance in sustainability. The current paper discusses one of the Ethiopian cities, Bahir Dar, which is witnessing a construction boom. A sustainable social housing construction program was launched in the country in 2005 with the aim of providing affordable living to middle-income earners [8]. However, the newly finished condominium construction projects seen in the city do not differ much from the existing century-old housing. This study examines the existing situation in a housing project (its design approach, construction techniques and materials used) and methods to optimize its sustainable performance that will contribute to the well-being of its users and the protection of the environment and society in a broad way. Housing has been a basic human need for centuries.
Ethiopia, being a developing country, has housing inadequacies due to its increasing population [8]. Meanwhile, the consumption of material and energy in Ethiopia has increased at an alarming rate over the last two decades. The field of architecture also presents great challenges in implementing sustainability across all associated processes of design and construction, as it consumes large quantities of materials and natural resources, produces large quantities of waste and results in increased negative impacts [4]. Hence, the growing need for housing projects is posing complex problems that, if urgent measures are not taken, will assume crisis proportions. 'It will have harmful consequences on the survival and well-being of humanity and also on human economic and social development' [4]. Attention therefore needs to be given to the operation and maintenance of buildings under the existing schemes for design and construction, and previous experiences also need to be considered carefully to avoid repeating problems in the new schemes. 'Substantial achievements cannot be achieved if the previous factors that contributed to poor implementation as per the old schemes are not analysed and removed completely' [2].


Table 1 Geographical and physical features of Bahir Dar

Characteristics of the site    | Description
Location                       | Located at latitude 11° N and longitude 37° E (approx.), at an elevation of 1800 m above sea level
Landscape and topography       | It is generally a flat area
Weather condition              | On average, the warmest months are March, April and May; rainfall occurs in July, August and September. Humidity is low, and weather conditions vary with the seasons
Soil type and bearing capacity | Red soil, with carrying capacity of 0.002 mm thickness

1.1 Site Analysis

Building sustainable housing does not mean grouping all the best green practices using modern technologies and green materials. 'It is a process wherein every element that is part of the building is well chosen, tested, improved, and then its relationship with other elements and systems used in a building is re-assessed to have an integrated and enhanced building solution' [6]. The interrelationship between a building site and its features, the sun's orientation, and the building's orientation with respect to openings, building envelope and shading devices significantly impacts the light levels inside a building. It also affects the building's solar intake and overall energy performance throughout its life. All these issues need to be considered at the initial stage, when the design process begins, so that results can be optimized for an efficient building; they were all considered in the case of Bahir Dar City too (Table 1). The same interrelationship is necessary to integrate and optimize every stage of the building, from its planning, design and selection of construction materials to the final finishing of the building.
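As a small, hedged illustration of the sun-path reasoning mentioned above (the latitude is approximate and the window height is an assumption, not project data), the solar noon altitude at Bahir Dar and the depth of a horizontal overhang that would shade a window at that altitude can be estimated as follows:

```python
import math

def noon_altitude_deg(latitude_deg, declination_deg):
    """Solar altitude at solar noon: 90 deg minus the angular distance
    between the site latitude and the solar declination."""
    return 90.0 - abs(latitude_deg - declination_deg)

def overhang_depth_m(window_height_m, altitude_deg):
    """Depth of a horizontal overhang that fully shades a window of the
    given height when the sun stands at the given altitude (assuming the
    sun is directly opposite the facade)."""
    return window_height_m / math.tan(math.radians(altitude_deg))

LAT = 11.6  # Bahir Dar latitude, approx. degrees north
for season, decl in [("June solstice", 23.44), ("December solstice", -23.44)]:
    alt = noon_altitude_deg(LAT, decl)
    depth = overhang_depth_m(1.2, alt)  # assumed 1.2 m window height
    print(f"{season}: altitude {alt:.1f} deg, overhang {depth:.2f} m")
```

At such a low latitude, the noon sun is high all year, so relatively shallow overhangs suffice; the calculation shows the kind of trade-off the design stage must resolve.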

2 Research Aim

The aim of this study is to promote change in building performance by using a new framework that addresses the issues of sustainable design, and to confirm that sustainable buildings require a transformation in the process of design and its execution. With this, the following research questions were framed:

(a) What kinds of methods are required to develop sustainable housing projects that work efficiently with the natural environment to achieve the comfort of the occupants?
(b) How can the research integrate housing design with sustainable architecture, energy use and environment, and construction in architecture?


(c) How does this research study help in designing quality houses that support sustainable architecture, and what types of sustainable building materials suit the life cycle of construction?
(d) How do sustainable architectural design and construction methods optimize the sustainable performance of a housing project?

3 The Research Framework

The research has been built on four parameters (sustainable architecture, energy, environment and construction in architecture), referred to here as the 'SEECA' framework. The concerns of sustainable architecture need to be reviewed alongside energy and environmental concerns and the requisites of the construction industry in architecture.

3.1 Sustainable Architecture

Sustainable design dates back to the 1970s, when it addressed only energy and environmental concerns, but today it has a more holistic and wide-ranging approach. An integrated approach to sustainable design highlights the need to address sustainability concerns without compromising a building's functional aspects or its profitability. It combines the aesthetics of design with a thorough understanding of energy efficiency, building analysis and performance. A well-integrated design proceeds from the belief that sustainable design equals good design: for example, façades respond to their respective orientations, and the building's spatial organization responds to its existing site features and climatic factors. Architecture then shifts to climate-friendly design, conserving energy while using local resources, all combined to achieve peak performance. Buildings in an environment are interdependent systems, and the envelope of each pursues optimal performance at the lowest life-cycle cost. The need arises for an inclusive approach to sustainability that moves beyond buildings to incorporate sustainability into our lifestyles and communities.

3.2 Energy and Environment

Measurement tools are used to achieve high performance in buildings. Assessing performance with the available design tools supports and reinforces the integration of various disciplines at all levels. The significance of the first simulation modeling studies was recognized by Knowles in 1985 [9]. 'Simulation gives a possible solution to the complex problem of enabling comprehensive and integrated appraisals


of design options under realistic operational conditions' [7]. 'Building Energy Performance Simulation' (BEPS) tools are the latest tools that allow architects to evaluate their designs and upgrade them to a high performance level. These sustainability rating tools not only set initial energy values but also serve as a means of incorporating broader environmental imperatives. A performance-based approach requires a comprehensive study of building design, materials and systems. 'The changes in the design and construction practices are a way for achieving sustainability in an unstable world' [1]. Building energy performance assessment methods are complex, and most architects do not have sufficient expertise in or knowledge of them. Therefore, architecture and construction professionals need to widen their understanding through the integration of various disciplines.
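As a toy sketch of the kind of envelope calculation that BEPS tools automate over thousands of time steps (the U-values, areas and temperature difference below are illustrative assumptions, not measurements from any project), a steady-state fabric heat flow Q = U × A × ΔT can be totalled per envelope element:

```python
def fabric_gain_watts(u_value, area_m2, delta_t_k):
    """Steady-state heat flow through one envelope element: Q = U * A * dT (W)."""
    return u_value * area_m2 * delta_t_k

# Illustrative envelope elements: (name, assumed U-value in W/m2K, area in m2).
elements = [
    ("hollow-brick wall", 1.8, 40.0),
    ("single-glazed window", 5.8, 6.0),
    ("corrugated-iron roof", 7.1, 50.0),
]

delta_t = 8.0  # assumed outdoor-indoor temperature difference, K
total = sum(fabric_gain_watts(u, a, delta_t) for _, u, a in elements)
print(f"total fabric gain: {total:.1f} W")  # dominated by the uninsulated roof
```

Even this crude version makes the design lever visible: the thin metal roof accounts for most of the heat flow, which is exactly the kind of finding a full BEPS run would quantify hour by hour.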

3.3 Construction in Architecture

Sattrup defines 'sustainable construction as a method of conceptualizing and developing buildings that aim to enhance the human life quality and the built environment along with human lifestyle, and reducing the use of natural resources' [11]. As per the Whole Building Design Guide (WBDG), a building achieves optimization in construction by following eight design objectives, namely 'accessibility, aesthetic sense, cost control, functionalism, productivity and health, history, safety/security and sustainability to build a high-performance home' [13]. These eight design objectives are integral components of any building project, and a 'green home' integrates all of them, from the design stage through the choice of high-quality construction techniques and materials. The aspect of construction is very significant in architecture, as it is the stage at which many people become involved who may or may not be trained to execute a sustainable building project.

4 Research Approach

An exploratory qualitative approach has been used, based on a critical review of research reports, articles in reputed journals, books and projects showcased in high-quality magazines, together with case study analysis and interviews. Qualitative research methods, along with stakeholder interviews, help in the deep analysis of issues within the framework used in a research study [3]. A case study analysis of past professional approaches has also been done. The study tried to identify each set of condominiums based on its special design aspects through a pilot study, using questions on specific attributes. To obtain detailed information, in-depth interviews with professionals were also conducted. Various questions designed to uncover the issues were asked of both users and non-users. The semi-standardized


interviews of design professionals helped in the collection of data that has been analysed qualitatively. The questionnaire comprised open-ended questions divided into multiple sections and was sent to experts in different areas. Wherever possible, multiple people were interviewed simultaneously, allowing interactive discussions and better analysis of the data. The interviews took place in the informants' offices and on site. The questionnaires were not meant to determine statistical significance for research and theory formulation, but rather to understand and draw inferences within this qualitative architectural research. The authors experienced some limitations, such as limited data collection tools and the unwillingness of users, non-users, officials, construction companies and professionals to give information.

5 Case Study

The concept of the condominium is quite old. Condominiums derive from French collective property ownership under the Napoleonic Code of 1804, the first statutory recognition of the condominium concept. After World War II, it was the only type of common housing available to the general population that was owned individually. The word condominium, often shortened to condo, refers to a form of social housing comprising a collection of individual housing units with common areas. Each condominium varies in terms of its site layout, location, and number of commercial spaces and common community buildings; this also depends on the site area, its topography and the sanctioned density of housing. Since 2005, Ethiopia has been implementing this government housing scheme for low- and middle-income people. The program involves a radical shift from single-storey detached housing (government-type rental housing) to a new condominium typology (private home ownership), in which a building is developed as a condominium or group housing but sold as individual units to different owners. The overall responsibility for this housing program lay with the city administrations or municipalities. At the regional level, the Bureau of Works and Urban Development (BWUD) took up urban management and development. In Ethiopia, the Integrated Housing Development Program (IHDP) is run and funded by the government for low- and middle-income households. Consultants are hired to conduct detailed studies on the carrying capacity of the site, environmental concerns, existing water sources and supply, approach roads to the site, any existing harmful elements and their potential relocation, along with users' need for such residential units and the physical, legal and economic frameworks that exist. According to Condominium Proclamation No. 370/2003, the land for condominium housing projects belongs to the government [5].
Building sites were randomly selected on open spaces in the inner city and some on the periphery of the city. Each condominium project design has 1 BR, 2 BR and 3 BR units in different arrangements, detached family houses and studios.


6 Result and Findings

A single case of condominium social housing has been analytically studied and summarized under six themes. These parameters emerged from an extensive literature survey and the interviews conducted, and are described as themes: functional space, communal areas, construction material, miscellaneous issues, critical analysis and construction quality, as outlined below (Fig. 1). Functional space is not articulated well. There are no outdoor spaces, terraces or balconies. Nearly all housing projects exhibit the same standard design of housing units. The 2–4-storey buildings consist of six different types of units. The units are so closely planned that residents usually cannot open windows to let light into the house, since a window would simply open into a neighbor's house. The only source of light in the house is artificial light. The same problem exists in the relationship between the units and the public realm, but at a larger urban scale. Some condominiums are built with an open kitchen that is part of the living and dining room, especially the studio units. Ethiopians practice elaborate traditional cooking, but for lack of the required large kitchen, in many cases the owner has converted a bedroom into a closed cooking space. The lost sleeping space is compensated by a change of furniture in the living room, turning it into a bedroom at night. It is not uncommon to find the inhabitants of these blocks cooking their traditional dishes in

[Fig. 1 Themes taken for case study analysis: functional space, communal areas, construction material, miscellaneous issues, critical analysis and construction quality. Source: Author]


the common halls too. The parking and recreational facilities were inadequate, and playgrounds were added only after tenants petitioned for them. Communal areas for communal activity are scarce, while user demand for such spaces is high. So, on all sites where communal buildings were scarce, the residents built them on their own; this, however, is strictly forbidden by the government, and such buildings will eventually be demolished. Many residents found it difficult to adjust to multi-storey buildings. Living in condominium housing demands a sense of belonging from its users to ensure a healthy, harmonious living environment. With people living in close quarters and public spaces such as corridors used inappropriately for storage or communal activities, the users face noise and privacy issues, due to which their relations with neighbors have turned uncordial. The provision of community spaces on these housing sites was meant to respond to the residents' cultural and social needs. The intention was to provide a separate space for residents to perform traditional tasks such as animal slaughtering, washing clothes manually and elaborate cooking for the whole community, and all activities that cannot be performed inside the housing units. In practice, however, washing and animal slaughtering are done in circulation areas, which is inconvenient for neighbors. Functionally, the occupants face problems cooking large community meals, as their kitchens are too small. So, public space is usually transformed into personal active space during daylight and into storage space at night. On the other hand, middle-class singles, younger couples with a single child and big joint families of up to 10 members all live in the same size of unit, which makes the communal areas more crowded than ever. The lack of a clear distinction between common and private areas is leading to inequities and dissatisfaction.
Construction material used in these condominium residential buildings consists of reinforced cement concrete (RCC) frame construction with hollow-brick infill walls plastered both inside and outside. A major contributor to the crisis is the increased use of cement concrete. Units are handed over to owners with simple concrete flooring to save extra cost. Windows and doors have single glazing fitted into metal frames. The simple repetitive design allows speedy construction, and the same design has been repeated with minor adaptations across other projects. Materials of standard sizes have been used to reduce cost. The roofs have round rafters of tree (usually eucalyptus) and are covered with corrugated iron sheeting.

Miscellaneous issues that have been overlooked during construction include:
(a) The contractors have used old traditional techniques of construction because they are cheaper
(b) Half of the contractors are not professionally qualified and so are ignorant of the latest technological advancements
(c) The mortar is prepared in large quantities and well before its use
(d) The materials used in construction, such as sand, coarse aggregate and bricks, are not properly washed and contain harmful materials and dust
(e) RCC columns show manual marks, as their casting is not mechanized and the concrete is not even machine mixed


(f) The reinforcement cover for columns, beams and slabs is thin and highly insufficient
(g) Columns show misalignment at foundation level

Construction quality and delay in completion of the project were further issues. The program intended to build low-cost housing, not low-quality housing, yet the final product is low in quality. There has been evidence of bursting sewerage pipes leading to water seepage, and widespread cracking of wall plaster. The expected lifespan of the units was supposed to be 100 years, but this is now doubtful. Construction quality has been compromised by the use of cheap, substandard fixtures such as doors and door handles. The lack of construction skill was evident from the hastily assembled workforce and the inexperienced contractors and builders enrolled for a project of this scale. Besides construction quality, construction delays were a major issue. Efficiency in the construction phase fell below plan due to gross material shortages, poor site management and a lack of sufficient infrastructure, all of which delayed the execution and completion of the project by a full year on some sites. Critical analysis of this first condominium housing in Bahir Dar revealed that the objective of the IHDP was not addressed in it. The units have not been planned with consideration of the residents' requirements; for instance, they do not contain space for Ethiopian cooking and cleaning. There is no design consideration allowing each family to expand their quarters over time. The design, seen in retrospect, also does not give occupants the opportunity to adjust the indoor climate to their preferences. The City of Bahir Dar brought together a group of consultants handpicked for their specialized knowledge and ability to think strategically. However, rather than grouping experts from different disciplines, only architects and engineers were employed.
The design concept from this grouping remained the product of a single mind rather than being derived from group interaction and the knowledge of consultants from the allied spheres of architecture that could make it a sustainable project. The designs are monotonous and lack the flexibility to accommodate the needs of different family sizes. Inefficient, low-quality electrical and sanitary services were installed, creating hazardous and unhealthy living conditions. It can be inferred that attention has not been paid to achieving the envisaged objectives of a sustainable environment.

7 Analysis

All housing projects and housing units possess a monotonous standard design that is inconsiderate of user requirements. Quality of construction is a significant aspect that needs to be monitored alongside the cost of maintenance and other operational costs for such projects. Although technical sustainability depends largely on design and construction, it is also highly dependent on associated economic, political, social, cultural, financial, technological and management aspects. Addressing these aspects calls for a well-designed, long-term system with a constant monitoring and


evaluation program and awareness among its stakeholders of their contribution. Importantly, as the need to mitigate the effects of climate change has become increasingly apparent, it is imperative that attention be given to the environmental sustainability of future condominium projects. Globally, the building sector accounts for a significant proportion of greenhouse gas emissions through the embodied energy of materials and operational energy use. Unfortunately, to date, the IHDP has not considered these two aspects, which should have been pivotal in the planning and design of its projects. The search for alternatives to cement as the main building material is a positive effort. The quality of construction should be improved, which requires continuous capacity building of professionals and contractors as well as on-site quality checks by trained professionals. Though the IHDP has proved a highly effective tool for affordable housing delivery at large scale and has great potential to be replicated in other areas of the city, the approach needs improvements to comply with international human rights. The initial task is to create awareness of the usage of sanitary fixtures and effective methods of sewage disposal. Regular checking of the sewerage system and manhole cleaning is essential to ensure good community health. Routine maintenance of fixtures to avoid leakage, and timely replacement of ruptured and broken fittings, is necessary. Community organizations responsible for the operation and maintenance of the sewerage system should be legally constituted and registered. In addition to employing new schemes in design and construction, attention must be given to the operation and maintenance costs arising from the existing housing schemes. Previous experiences should be taken into account while devising new schemes, to avoid repeating problems.

8 Discussion

The study intends to achieve optimization of sustainable performance in architecture, which has strong impacts on the environment. The study revealed that the problems associated with sustainable development can be dealt with through an integrated approach to design and construction. The defects arise from multiple gaps in the implementation processes, first at the government level and then at the architectural, engineering and construction levels. Therefore, it is necessary to focus on these main aspects that contributed to the shortfall of the project. The thematic areas that need attention with reference to the 'SEECA' framework are as follows:

8.1 Sustainable Architectural Practice

In Ethiopia, it is necessary to apply improved sustainable architectural design techniques that meet sustainability goals and support the desire to improve housing sustainability practice. This requires teamwork and new practices using processes that


have minimal impact on the environment and are safe and resource efficient throughout the life of the building. Policies should critically analyse the pros and cons of the previous scheme and avoid its shortcomings in mass housing construction. Special emphasis in national policy would have to be laid on the prevention of environmental degradation and the promotion of energy conservation to achieve ecological balance. This can be achieved by improving the current policies used in the architecture and construction industry and by approving and implementing the national building proclamation. Professionals in the building industry, architecture students, clients and the common man must be made aware of the importance of sustainable solutions in design so that they support the promotion and execution of sustainable design standards. In addition, working with stakeholders, government sectors and officials to alter policies and regulations will help sustainable design practice be accepted and implemented fully as a standard practice. This also fuels innovation in technology related to the construction industry. As a result, economic, social and environmental value will be increased and enhanced all over Ethiopia.

8.2 Energy, Environment, Construction and Architecture

In Ethiopia, condominium houses are said to be cost effective, but this is true only of the initial cost of the project, not of the operation, life-cycle and demolition costs. The total cost of a project, called the life-cycle cost, includes its one-time cost, that is, the initial capital cost of construction, and also the recurring costs of regular upkeep and repair over its serviceable life. The goal of sustainable construction should be to regulate the costs of a project throughout its active life, covering all possible direct and indirect costs. Sustainability promotes the efficient use of our resources in both design and construction. The energy efficiency of a building over its life cycle is the most significant objective of sustainable architecture. 'Sustainable development is a process of development that allows changes to be consistent with the present and future needs of its users' [10]. To optimize the sustainable performance of a building, professionals and designers should rely on active and passive strategies for energy saving and make the building self-dependent in its ability to generate its own energy. In addition, the construction industry should hire skilled labor trained in local and modern construction practices. For sustainability, it is very important that a building produce its own energy and have minimal environmental effects. Energy considerations are very important, especially embodied energy, as it is one of the most significant resources a building uses during its lifetime [12]. Embodied energy is the energy used in making a material and is one of the important concerns when selecting a building material for construction.
150

A. Kumar and H. Bhandari

So, a building accounts for its embodied energy in terms of the energy used to make its components and the energy used during its construction. This reinforces the use of low-energy, high-quality materials that have a long life span and low environmental impact. For example, aluminum requires a high amount of energy input to refine and to reach a quality that lasts without maintenance, while materials that are cheap or appear economical at the outset may need additional repair and maintenance costs over a short period and lead to higher energy consumption. Hence, sustainability must be equated with durability over the life span of a building. Both during design and construction, it is necessary to adopt a partnering approach between the builder and the other members of the construction team. The basic parameters of a good design include its functionality, aesthetics, materials and structure. The builder and construction team should be made accountable for regulating and checking the environmental impact of the materials used and the construction systems employed. Construction processes consume most of the energy and resources in a building, and it is therefore essential to achieve greater sustainability in construction.

9 Conclusion and Recommendations To enhance our environment and minimize the wastage of resources, sustainability needs to be a vital parameter in the building industry. Sustainability can be achieved holistically through the adoption and integration of all the parameters described in the SEECA framework. Sustainable architecture, energy and environment, and construction in architecture together lead to the economical use of resources, efficient life-cycle design, and lower operation and maintenance costs, with recycling and reuse of resources producing a sustainable environment. High performance and optimization in the implementation of sustainability will be essential in future housing projects. The necessity of resilient architecture is evident, and it is especially vital in housing projects given the pressing demand for affordable houses all over the world. Optimizing sustainability performance also increases the flexibility and life of a building and allows the reuse of materials, improving a building's performance. This leads to increased revenue (higher rent or sale price and improved productivity) and lower costs (costs of conversion). It also implies that making housing sustainable improves a country's technological innovation. Hence, it is recommended that, for a sustainable society, the focus should be on a long-term integrated approach. Architecture educators, trainers and others implementing sustainable practices must be equipped with expert training and professional skills. This will close the widening gap between theoretical knowledge and actual practice. Moreover, architects working in sustainable design are advised to integrate knowledge from various disciplines into their practice so that a better quality of life can be achieved.
In addition, to promote the adoption of sustainable architecture and the growth of sustainable development in Ethiopia, the economic evaluation of architectural design strategies must also be studied in detail. Special initiatives need to be taken to improve sustainable practices in the country and to synergize architectural design and construction methods, specifically by optimizing sustainable performance. Such initiatives are essential, and the Government of Ethiopia needs to take national initiatives for their better implementation in housing projects.

Optimization of Sustainable Performance: Housing Project …

151

References

1. V. Bazjanac, A. Kiviniemi, Reduction, simplification, translation and interpretation in the exchange of model data, in 24th CIB W78 Conference 2007, ed. by D. Rebolj, Maribor, Slovenia (2007), pp. 163–168
2. G.A. Beyene, Y.A. Dessie, Assessment of informal settlement and associated factors as a public health issue in Bahir Dar city, North West Ethiopia; a community based case control study. Sci. J. Public Health 2(4), 323–329 (2014)
3. H. Bhandari, A. Mittal, A naturalistic inquiry of pilgrims' experience at a religious heritage site: the case of a Shaktipitha in India. Int. J. Relig. Tour. Pilgr. 8(3), 87–97 (2020)
4. Ethiopia, National Population Policy of April 1993. Office of the Prime Minister, Addis Ababa, Ethiopia, April 1993, https://cyber.harvard.edu/population/policies/ETHIOPIA.htm. Last accessed 2019/11/05
5. Federal Democratic Republic of Ethiopia, Proclamation No. 370/2003, Condominium Proclamation, 2003. http://extwprlegs1.fao.org/docs/pdf/eth135250.pdf. Last accessed 2019/09/12
6. P. Gowrishankar, Green architecture for environmental sustainability. Int. J. Adv. Technol. Eng. Sci. 3(3), 181–191 (2015)
7. M.F. Hamedani, R.E. Smith, Evaluation of performance modelling: optimizing simulation tools to stages of architectural design, in Proceedings of International Conference on Sustainable Design, Engineering and Construction, Procedia Engineering, vol. 118 (Elsevier, 2015), pp. 774–780
8. B. Hjort, K. Widen, Introduction of sustainable low-cost housing in Ethiopia—an innovation diffusion perspective. Procedia Econ. Finance 21, 454–461 (2015)
9. R. Knowles, Sun, Rhythm, Form, reprint edn. (The MIT Press, Massachusetts, 1985)
10. Report of the World Commission on Environment and Development: Our Common Future. https://sustainabledevelopment.un.org/milestones/wced. Last accessed 2019/10/02
11. P. Sattrup, Sustainability—energy optimization—daylight and solar gains. Ph.D. thesis, Royal Danish Academy of Fine Arts, Design and Conservation, Institute of Architectural Technology, Denmark (2012)
12. C. Thormark, A low energy building in a life cycle—its embodied energy, energy need for operation and recycling potential. Build. Environ. 37(4), 429–435 (2012)
13. WBDG Green Principles for Residential Design. https://www.wbdg.org/resources/green-principles-residential-design. Last accessed 2019/13/05
14. A. Weldesilassie, B. Gebrehiwot, S. Franklin, The low cost housing program in Ethiopia: issues and policy, in Proceedings of World Bank Conference on Land and Poverty (The World Bank, Washington DC, 2016)

3D Image Conversion of a Scene from Multiple 2D Images with Background Depth Profile Denny Dominic and Krishnan Balachandran

Abstract The computer vision and augmented reality industry focuses on reconstructing the 3D view of an object with the goal of improving visual effects. Various applications require 3D models, such as street view, the film industry and the sports industry. 3D models cannot be captured directly using video or still cameras; cameras are capable of obtaining high-quality 2D images from different angles. This paper discusses how 2D images captured by various cameras are converted, using depth analysis, to obtain a 3D view of an object or scene. The final 3D output is a dense reconstruction of the 2D images. This reconstruction method is less complex than traditional mathematical modeling methods, and the technique can be easily integrated into 3D application domains such as facial reconstruction. Keywords 2D images · 3D images · Structure from motion · Depth hue

1 Introduction 3D imaging is widely known as stereoscopy. This technique is widely used to create or enhance a 2D image by adding the illusion of depth with the help of binocular vision. Almost all stereoscopic techniques rely on two images, one from the left view and the other from the right view. These two images come together to give the illusion of a 3D scene with added depth. 3D television is a major milestone in visual media these days. In recent years, researchers have focused on developing algorithms for obtaining images and converting them into 3D models using depth analysis. The third dimension can usually only be perceived from a human perspective: the eyes perceive depth and, with the help of various cues, the visual system reconstructs the third dimension. Researchers used this strategy to reconstruct 3D models from different angles with the help of disparity and calibration parameters. Recently, special cameras have become available that capture 3D models of a scene directly, for example stereoscopic dual cameras and depth-range cameras. These cameras usually capture the RGB portion of the image and its corresponding depth map. A depth map estimates, at each pixel position, the depth of the object at that point; intensity is usually used to encode depth. This paper focuses on reconstructing 3D models from 2D images obtained from different views using a stereoscopic camera. Feature points are extracted and analyzed in depth, which helps to create a 3D view of the image. This method is inexpensive and can be used for all types of real-time applications.

D. Dominic (B) · K. Balachandran Department of Computer Science and Engineering, Christ (Deemed to be University), Bengaluru, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_15

2 Literature Review Current approaches to the reconstruction of non-rigid 3D shape assume that a deforming object is a linear combination of basis shapes. The structure from motion (SfM) algorithm has been widely used to determine the optimal 3D structure from a set of 2D point tracks. Early work by Tomasi and Kanade in the 1990s [1] proposed a factorization mechanism for recovering the shape of objects viewed by an orthographic camera. Bregler et al. [2] described a method for recovering non-rigid 3D structure as a linear combination of basis shapes that define the principal deformation modes. Under weak-perspective camera projection, [2] uses rank constraints on camera rotation to recover 3D shape and motion. More recent authors [3–5] have shown that rotation constraints alone are not sufficient to obtain reliable 3D reconstructions. Brand [3] proposed an optimization method as an alternative (with respect to an average shape) that introduces additional constraints and reduces distortion as much as possible. Xiao et al. [4] proposed adding a set of basis constraints to solve for the 3D model; these constraints are based on n independent image frames that span the 3D shape space. However, as Brand [5] later noted, when n is not given exactly, the noise cannot be cleanly eliminated from the given images. Alternating least squares (ALS) [6–8] and expectation-maximization (EM) [9–11] have proved to be effective methods for estimating the shape and motion components in SfM algorithms. Buchanan and Fitzgibbon [12, 13] applied a class of second-order optimization methods that are more reliable than alternation-based approaches. Del Bue et al. [14] applied shape priors to obtain reliable estimates of the deformable component of the object. Olsen and Bartoli [15] used temporal smoothness constraints in the shape reconstruction.
Similar to the procedural concept presented in this paper, Del Bue [14] introduced prior knowledge of a previously known 3D shape into the SfM algorithm, representing the possible configurations of the object and using the prior to constrain the rough part. Akhter, Sheikh, Khan and Kanade [16] recovered non-rigid 3D structure with a dual approach, representing point trajectories as a linear combination of basis trajectories.

3D Image Conversion of a Scene from Multiple 2D Images …

155

In that trajectory-space formulation, the discrete cosine transform (DCT) is used as an object-independent basis, and it is shown to approach principal component analysis (PCA) for natural motions. Fortuna and Martinez [17] discussed a theoretical approach that poses rigid structure from motion (SfM) as a blind source separation problem.

3 Methodology The proposed method is shown in Fig. 1; it has two parts, depth estimation followed by post-processing. Initially, segmentation is used to separate the foreground, and the depth difference between foreground and background is handled separately. Multiple-cue depth estimation and neural-network classification are required at this stage. To improve the initial depth maps for stereoscopic display, the depth information is refined (e.g., by depth and color edge alignment), and the initial depth difference between the background and foreground areas is set according to the human visual system (HVS). Finally, the refined color and depth information are used to synthesize stereoscopic view pairs through depth image-based rendering.
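The final rendering step can be illustrated with a minimal horizontal pixel-shift warp. This is a sketch in Python, not the paper's MATLAB implementation; the disparity mapping, the `max_disparity` parameter and the row-wise hole filling are assumptions for illustration.

```python
import numpy as np

def dibr_stereo_pair(image, depth, max_disparity=8):
    """Synthesize a left/right view pair from one image plus a depth map.

    Nearer pixels (larger depth value) receive a larger horizontal shift,
    which creates the stereoscopic illusion. Holes left by the warp are
    filled from the nearest valid pixel on the same row.
    """
    h, w = depth.shape
    # Normalize depth to [0, 1] and map it to integer pixel disparities.
    d = depth.astype(float)
    d = (d - d.min()) / max(d.max() - d.min(), 1e-9)
    disp = np.round(d * max_disparity).astype(int)

    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            xl, xr = x + disp[y, x] // 2, x - disp[y, x] // 2
            if 0 <= xl < w:
                left[y, xl] = image[y, x]
            if 0 <= xr < w:
                right[y, xr] = image[y, x]
    # Simple hole filling: propagate the previous pixel along each row.
    for view in (left, right):
        for y in range(h):
            for x in range(1, w):
                if not view[y, x].any():
                    view[y, x] = view[y, x - 1]
    return left, right
```

In practice the two synthesized views would be fed to a stereoscopic display; more elaborate hole-filling (inpainting) is normally used.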

3.1 Depth Hue Separation To capture all foreground areas, we first adopt a strategy of segmenting the image into regions and then identifying the regions that belong to the foreground. Identifying foreground areas without any priors is a challenging task. From observation, it is clear that the center of the image usually belongs to the foreground, and

Fig. 1 2D to 3D image conversion method (pipeline blocks: 2D input images → separation of images into edges and curves → depth cue calculation → depth profile classification → depth refinement → depth image-based rendering → image color enhancement → 3D images → image store)


then a pattern box around the center is used to gather color statistics about the foreground. Since pixel colors are quantized by the mean-shift segmentation, the set of color statistics is limited. Denote the colors of the outer circle as OC = {oc_i, i = 1, …, M} and the background candidate colors as BC = {bc_i, i = 1, …, N}. We aim to eliminate from BC the colors that are probably related to the foreground; the retained BC colors are then used to capture the background areas. The BC colors are classified with a priority filter: a color bc_i is retained if it is also present in OC and satisfies certain criteria on region properties (note that the border band of BC contains many unchanged regions). The region properties are the position (x, y) and the size (x · y) of the smallest enclosing rectangle, and the compactness (region area divided by x · y). Example criteria used in the filter: the lowest y of a background region should not fall below a given bound (the lower part of the frame is probably related to the foreground), and regions with a large x extent are probably foreground.
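A minimal sketch of such a priority filter follows, under the assumption that a BC color is kept as background when it also occurs in OC and its region stays out of the bottom band of the frame; the thresholds `color_tol` and `low_y_frac` are hypothetical values, not parameters from the paper.

```python
import numpy as np

def select_background_colors(bc_colors, oc_colors, bc_regions, img_h,
                             color_tol=30.0, low_y_frac=0.85):
    """Priority filter for the candidate color set BC = {bc_1..bc_N}.

    A BC color is retained as background only if it also occurs in the
    outer-circle set OC = {oc_1..oc_M} (the frame edge is assumed to be
    background) and its region stays out of the bottom band of the
    image, which is treated as likely foreground.

    bc_colors : (N, 3) mean RGB of each candidate region
    oc_colors : (M, 3) mean RGB of outer-circle regions
    bc_regions: list of (x, y, w, h) bounding boxes, one per BC region
    """
    kept = []
    for color, (x, y, w, h) in zip(np.asarray(bc_colors, float), bc_regions):
        near_oc = np.min(np.linalg.norm(
            np.asarray(oc_colors, float) - color, axis=1)) < color_tol
        in_bottom_band = (y + h) > low_y_frac * img_h
        if near_oc and not in_bottom_band:
            kept.append(color)
    return np.array(kept)
```

The compactness and x-extent criteria mentioned in the text could be added as further boolean tests in the same loop.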

3.2 Estimation of the Foreground Texture gradient, sharpness and face detection are used to predict the foreground depth. Generally, the nearest object has strong sharpness and texture gradients. However, these two cues often fail on the human face, so skin color is also used as a means of detecting human faces. Since the depth map is usually smooth, the depth cues are computed on blocks of 8 × 8 pixels, which also reduces the processing time. (A) Texture gradient: the gradient is calculated using an eight-direction mask over each block. (B) Sharpness: clearly, the edges of a near object are sharper than those of a distant object; the contrast and variance aggregated in each block are used as indicators. (C) Face cue: the input RGB image is converted to the YCbCr format, and pixels satisfying conditions in both the RGB and YCbCr spaces are labeled as skin-colored pixels. Additionally, human hair-color pixels are merged to generate the face region, which is assigned a depth [9]. (D) Depth cue fusion: the depth cues are combined to generate a depth value for the pixels inside the foreground mask.
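The block-wise gradient and sharpness cues, and a simple fusion of the two, can be sketched as follows. This is a Python stand-in for the paper's MATLAB code; the equal-weight fusion, the omission of the face cue, and the use of `np.gradient` in place of the eight-direction mask are assumptions.

```python
import numpy as np

def foreground_depth_cues(gray, block=8):
    """Per-block depth cues on an 8x8 grid, as in Sect. 3.2.

    Two of the listed cues are computed: texture-gradient strength and
    sharpness (intensity variance). Both are normalized to [0, 1] and
    averaged as a stand-in for the depth-cue fusion step; larger values
    mean "nearer".
    """
    h, w = gray.shape
    gy, gx = np.gradient(gray.astype(float))
    grad = np.hypot(gx, gy)

    nby, nbx = h // block, w // block
    cue_grad = np.zeros((nby, nbx))
    cue_var = np.zeros((nby, nbx))
    for i in range(nby):
        for j in range(nbx):
            ys = slice(i * block, (i + 1) * block)
            xs = slice(j * block, (j + 1) * block)
            cue_grad[i, j] = grad[ys, xs].mean()
            cue_var[i, j] = gray[ys, xs].astype(float).var()

    def norm(c):
        return (c - c.min()) / max(c.max() - c.min(), 1e-9)

    return 0.5 * norm(cue_grad) + 0.5 * norm(cue_var)
```

A textured (near) region then receives a larger fused value than a flat (far) one, which is the behavior the foreground mask relies on.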


Fig. 2 Classification of different background pixel settings

3.3 Classification of Background Pixels A back-propagation neural network is used to classify the background into five profile types. Figure 2 shows the five profiles used in our method: bottom, right, left, center and no-profile. The figure illustrates the various ways depth can appear in the background; the profile with internal depth gives better results in further processing. The input to the network is based on local edge directions: the image is divided into 3 × 3 regions, and horizontal and vertical Sobel operators are applied in each region to calculate the edge direction. Edge directions are quantized into eight bins of 45° each, so the histograms of the nine regions give 72 measurements. Features based on local edge direction and contour gradient prove very promising in practice.
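The 72-dimensional edge-direction feature that feeds the classifier can be sketched as follows; `np.gradient` stands in for the Sobel operators, and the magnitude-weighted, normalized histograms are an assumption about details the paper leaves open.

```python
import numpy as np

def edge_direction_feature(gray, grid=3, nbins=8):
    """72-D feature for the back-propagation depth-profile classifier.

    The image is split into a 3x3 grid; in each cell, horizontal and
    vertical gradients give an edge direction that is quantized into
    8 bins of 45 degrees, yielding 9 x 8 = 72 histogram values.
    """
    h, w = gray.shape
    gy, gx = np.gradient(gray.astype(float))
    angle = np.mod(np.degrees(np.arctan2(gy, gx)), 360.0)
    bins = (angle // 45).astype(int) % nbins
    mag = np.hypot(gx, gy)

    feat = []
    for i in range(grid):
        for j in range(grid):
            ys = slice(i * h // grid, (i + 1) * h // grid)
            xs = slice(j * w // grid, (j + 1) * w // grid)
            hist = np.bincount(bins[ys, xs].ravel(),
                               weights=mag[ys, xs].ravel(),
                               minlength=nbins)
            total = hist.sum()
            feat.extend(hist / total if total > 0 else hist)
    return np.asarray(feat)  # length grid*grid*nbins = 72
```

The resulting vector would be the input layer of the five-class network described above.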

3.4 The Combined Depth Estimation The foreground and background depth values are computed pixel by pixel; the smaller value is taken for a foreground pixel, while a background pixel takes the maximum of the values.


4 Post-Processing of the Resulting Images 4.1 Calculation of the Background Profile A color value cannot be compared directly with a depth value because the two vary differently. A binomial filter is therefore used in this method [11], and the depth map is thus improved.
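A separable binomial filter of this kind might look as follows; the 1-4-6-4-1 kernel size and the edge padding are assumptions, since the paper does not specify the kernel.

```python
import numpy as np

def binomial_filter(depth, passes=1):
    """Smooth a depth map with a separable 1-4-6-4-1 binomial kernel.

    Binomial kernels approximate a Gaussian using integer weights,
    which suits depth maps: values are smoothed without re-scaling.
    """
    kernel = np.array([1, 4, 6, 4, 1], float)
    kernel /= kernel.sum()
    out = depth.astype(float)
    for _ in range(passes):
        # Pad with edge values so the borders are not darkened.
        padded = np.pad(out, 2, mode="edge")
        # Horizontal then vertical pass (the kernel is separable).
        out = np.apply_along_axis(
            lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
        out = np.apply_along_axis(
            lambda c: np.convolve(c, kernel, mode="valid"), 0, out)
    return out
```

Several passes approximate a wider Gaussian; a single pass is usually enough to remove blockiness from the 8 × 8 cue grid.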

4.2 Enhancement of the Color Pixels In this system, according to the classified background depth profile, two methods are applied to modify the colors of the background and foreground pixels in order to improve the stereoscopic effect.

5 Results and Discussion The method is implemented in MATLAB R2019a. The dataset used is a sample dataset of hemispherical 3D views. The input image size is 1024 × 768. Each image is first converted to a gray-level image and its distortions are removed. Corner points are then marked, and matching points between views are identified from the collected feature points. The position of the object is calculated using the matching points of all views. Finally, using the x, y and z coordinates estimated in the previous steps, the 3D model is obtained as a dense reconstruction.
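The triangulation at the heart of this step can be illustrated with the standard linear (DLT) scheme; this is a generic sketch, not the paper's code, and assumes the camera projection matrices have already been estimated from the matched points.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched feature point.

    P1, P2 : 3x4 projection matrices of the two views
    x1, x2 : (u, v) pixel coordinates of the match in each view
    Returns the 3D point (x, y, z) whose projections best fit the match.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the right singular vector of A with
    # the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

Applied to every matched pair across the views, this yields the point cloud from which the dense reconstruction is built.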

6 Conclusion This paper focused on reconstructing a 3D view from 2D images obtained at different angles. The method is less expensive than modern techniques that rely on specialized acquisition cameras. It can be widely used for surveillance purposes, for example to identify intruders and criminals. A future research direction is reconstructing the 3D model using augmented reality-based vision.


References

1. C. Bregler, A. Hertzmann, H. Biermann, Recovering non-rigid 3D shape from image streams, in Proceedings IEEE Conference on Computer Vision and Pattern Recognition 2000, vol. 2 (IEEE, 2000)
2. M. Brand, Morphable 3D models from video, in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2001, vol. 2 (IEEE, 2001)
3. J. Xiao, J. Chai, T. Kanade, A closed-form solution to non-rigid shape and motion recovery, in Computer Vision-ECCV 2004 (Springer, Berlin Heidelberg, 2004), pp. 573–587
4. M. Brand, A direct method for 3D factorization of nonrigid motion observed in 2D, in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005, vol. 2 (IEEE, 2005)
5. M. Maruyama, S. Kurumi, Bidirectional optimization for reconstructing 3D shape from an image sequence with missing data, in Proceedings 1999 International Conference on Image Processing, ICIP 99, vol. 3 (IEEE, 1999)
6. L. Torresani et al., Tracking and modeling non-rigid objects with rank constraints, in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2001, vol. 1 (IEEE, 2001)
7. C. Julià et al., Factorization with missing and noisy data, in Computational Science-ICCS 2006 (Springer, Berlin, Heidelberg, 2006), pp. 555–562
8. R.F.C. Guerreiro, P.M.Q. Aguiar, 3D structure from video streams with partially overlapping images, in 2002 International Conference on Image Processing, Proceedings, vol. 3 (IEEE, 2002)
9. L. Torresani, A. Hertzmann, C. Bregler, Learning non-rigid 3D shape from 2D motion, in Advances in Neural Information Processing Systems (2003)
10. L. Torresani, A. Hertzmann, C. Bregler, Nonrigid structure-from-motion: estimating shape and motion with hierarchical priors. IEEE Trans. Pattern Anal. Mach. Intell. 30(5), 878–892 (2008)
11. A.M. Buchanan, A.W. Fitzgibbon, Damped Newton algorithms for matrix factorization with missing data, in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005, vol. 2 (IEEE, 2005)
12. A.B. Morgan, Investigation into Matrix Factorization When Elements Are Unknown. Technical report, Visual Geometry Group, Department of Engineering Science, University of Oxford (2004)
13. A. Del Bue, F. Smeraldi, L. Agapito, Non-rigid structure from motion using non-parametric tracking and non-linear optimization, in Conference on Computer Vision and Pattern Recognition Workshop, CVPRW'04 (IEEE, 2004)
14. I. Akhter et al., Trajectory space: a dual representation for nonrigid structure from motion. IEEE Trans. Pattern Anal. Mach. Intell. 33(7), 1442–1456 (2011)
15. A. Del Bue, A factorization approach to structure from motion with shape priors, in IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2008 (IEEE, 2008)
16. J. Fortuna, A.M. Martinez, Rigid structure from motion from a blind source separation perspective. Int. J. Comput. Vis. 88(3), 404–424 (2010)
17. H.-S. Koo, K.-M. Lam, Recovering the 3D shape and poses of face images based on the similarity transform. Pattern Recognit. Lett. 29(6), 712–723 (2008)

Optimization of a Stand-Alone Hybrid Renewable Energy System Using Demand-Side Management for a Remote Rural Area in India M. Ramesh and R. P. Saini

Abstract In this study, a techno-economic feasibility analysis of a stand-alone hybrid renewable energy system (HRES) for a remote rural area of the Chikmagalur district of Karnataka (India) is presented. Load shifting-based demand-side management (DSM) has been implemented for Lead-Acid and Lithium-Ion battery-based HRES to evaluate the feasibility. The performance of the proposed system has been evaluated with and without DSM. From the results, it is found that the Lithium-Ion battery-based HRES with DSM gives the optimal feasible solution for providing a reliable power supply to the proposed un-electrified villages. The optimal Net Present Cost (NPC) and Cost of Energy (COE) are found to be $467,644 and 0.106 $/kWh without DSM, whereas they are $314,564 and 0.072 $/kWh, respectively, with DSM. Based on the analysis, the savings in NPC and COE are found to be $153,080 and 0.034 $/kWh, respectively. Keywords Hybrid renewable energy system · Demand-side management · Lead-Acid · Lithium-Ion · HOMER

1 Introduction The depletion of fossil fuels, the increase in GHG emissions and the technological development of solar and wind power have, in the last decade, paved the way for thinking about alternatives for generating electricity [1]. In addition, for techno-economic reasons, several remote rural villages are not connected to the conventional grid system [2]. Electrifying remote rural communities using a renewable energy-based stand-alone hybrid system is considered the most acceptable and economical approach [3]. Energy from solar and wind is not continuous and hence is highly variable; these fluctuations in generation cannot provide a reliable power supply to the load demand. The integration of several renewable energy resources mitigates these fluctuations and provides more reliable energy to the load. Moreover, batteries and diesel generators can also be used to confront the irregularity of renewable energy resources [4]. A feasibility study has been carried out for a stand-alone PV-LED street-lighting system in Turkey [5]. An optimal model of a PV/WT/DG/BT HRES has been proposed for off-grid rural electrification in Bangladesh using HOMER [5]. An optimal COE was determined for five different locations in the world using HOMER [6]. A mathematical model was developed to obtain the optimal sizing of a PV/WT/BT-based HRES for a remote area using GA and HOMER [7]. A study on DSM-based optimal sizing of an integrated renewable energy system to supply power to a cluster of un-electrified villages in Karnataka (India) using GA and PSO has been proposed [8]. An incentive-based DSM to minimize operating costs and pollutant emissions has been recommended [9]. A demand-side management based load-shifting model to achieve techno-economic optimization of a stand-alone PV/WT/MHP/BM/WT IRES in the Uttarakhand state of India has been developed [10]. A DSM model for the feasibility and energy-management assessment of rural electrification in several zones of the Tamilnadu state of India using HOMER has been proposed [11]. In the literature, most studies have been performed with only one type of battery, and very few use DSM-based techno-economic performance analysis. In this perspective, this study evaluates the performance of the HRES with two different batteries, Lead-Acid (LA) and Lithium-Ion (Li-Ion). In addition, load shifting-based DSM has been implemented to evaluate the performance of the PV/WT/MHP/DG/BT HRES for a cluster of villages.

M. Ramesh (B) · R. P. Saini Indian Institute of Technology Roorkee, Roorkee, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_16

2 Study Area A cluster of 13 un-electrified villages in the Chikmagalur district of the state of Karnataka in India has been considered as the study area. The latitude and longitude of the study area are 13.31 N and 75.77 E. Owing to the hilly terrain and the scattered households, about 297 households among these villages are not electrified; kerosene lamps and candles are used for lighting in this area. A conventional grid connection to these households is not feasible. Therefore, a hybrid renewable energy system is the alternative option for supplying electricity to these villages. Solar, wind and hydro are the resources available in this area. The integration of these renewable energy resources together with a diesel generator will be used to supply the required electrical power to the proposed area. In addition, batteries have been used in the HRES to store the excess energy and act as back-up. Since the HRES contains both AC and DC input/output components, a bi-directional converter has been considered for AC–DC/DC–AC conversion.

Optimization of a Stand-Alone Hybrid …

163

Table 1 Particulars of the types of load and devices with ratings

Type of load                Devices   Number (power rating in W)
Residential                 Lamps     891 (40)
                            Fans      297 (75)
                            TVs       297 (150)
                            Radios    297 (6)
Commercial – Shops          Lamps     4 (40)
                            Fans      4 (120)
Commercial – Flour Mill     Motor     3 (6000)
Community – Health Centre   Lamps     6 (40)
                            Fans      4 (75)
                            Fridge    1 (180)
Community – School          Lamps     2 (40)
                            Fans      2 (75)
                            Radios    1 (6)
Community – Community Hall  Lamps     6 (40)
                            Fans      6 (75)
Community – Street-lights   Lamps     99 (34)
Community – Pumping Water   Motors    3 (5592) (7.5 HP)
Community – Post-Office     Lamps     1 (40)
Agricultural – Pumping Water Motor    15 (3785) (5 HP)
Small Industry – Saw Mill   –         2 (8000)

3 Demand and Resource Estimation 3.1 Demand Estimation Based on a survey of the area, the different load sectors were identified: domestic, commercial, community, agricultural and small industry. Based on the number of households and the type and number of appliances given in Table 1, the daily load demand (kWh) has been estimated for the summer, rainy and winter seasons of the year. The daily load demand of the proposed area is shown in Fig. 1.
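The daily demand follows from Table 1 by summing count × rating × daily usage hours over the appliances. A sketch over a subset of the table rows follows; the counts and wattages come from the table, but the hours-per-day values are illustrative assumptions, not figures from the survey.

```python
# Daily energy demand (kWh) from part of the appliance inventory in
# Table 1. The usage hours per day are illustrative assumptions.
appliances = [
    # (name, count, watts, assumed hours/day)
    ("residential lamps", 891, 40, 5),
    ("residential fans", 297, 75, 6),
    ("TVs", 297, 150, 4),
    ("street lights", 99, 34, 11),
    ("agricultural pump motors", 15, 3785, 3),
]

def daily_demand_kwh(items):
    """Sum count * rating * hours over all appliances, in kWh."""
    return sum(n * w * h for _, n, w, h in items) / 1000.0
```

Repeating this per season (with season-specific usage hours) gives the seasonal load profiles plotted in Fig. 1.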

3.2 Resource Estimation The proposed study area has a sufficient potential of solar irradiation. The average solar radiation at the location is 5.94 kWh/m2/day. The solar insolation data along with the clearness index are given in Fig. 2. The average wind speed of the location is 3.8 m/s, as shown in Fig. 3. The monthly rainfall is shown in Fig. 4. The estimated net head is 5 m, and the discharge of water is 0.435 m3/s with a probability of 77% of the period of the year.

Fig. 1 Daily load demand without DSM

Fig. 2 Solar insolation data

Fig. 3 Wind speed data

Optimization of a Stand-Alone Hybrid …

165

Fig. 4 Rainfall data

4 Demand-Side Management Demand-side management is the planning and implementation of measures for altering the pattern of load consumption and/or saving energy. It can be implemented by a number of methods, such as peak clipping, valley filling, load shifting, strategic conservation and strategic load growth, as shown in Fig. 5. DSM planning brings benefits such as savings in the capital investment of the HRES and in electricity bills, social and environmental benefits, and enhanced system efficiency. The load-shifting approach is used for the stand-alone HRES assessment because it is simple to implement and requires no maintenance.

Fig. 5 Different DSM methods

166

M. Ramesh and R. P. Saini

4.1 DSM Strategy Under load shifting-based DSM, the shiftable loads are defined and prioritized for shifting as follows: agricultural loads, water-pumping loads, sawmill and flour mill. The shiftable peak loads can be moved to off-peak periods within allowable time slots (ATS). The particulars of the allowable (ATS) and shifted (STS) time slots for all seasons are given in Table 2. The daily load demand after implementing the load-shifting strategy is shown in Fig. 6.

Table 2 The particulars of allowable and shifted time slots for all seasons (S = summer, R = rainy, W = winter)

Load            Season   ATS (h)   STS (h)
Agricultural    S        6–18      6–10
                R        6–18      –
                W        6–18      –
Water-pumping   S        6–22      12–16
                R        6–22      11–13
                W        6–22      6–7, 15–17
Saw-mill        S        6–20      13–14
                R        6–20      –
                W        6–20      –
Flour mill      S        6–21      15–17
                R        6–21      –
                W        6–21      –

Fig. 6 Daily load demand with DSM
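The load-shifting operation itself reduces to subtracting a shiftable load from its peak hours and adding it back inside the ATS. A minimal sketch follows; the hourly resolution and the example numbers are assumptions for illustration, not values from the study.

```python
def shift_load(profile, load_kw, peak_hours, allowable, target):
    """Move a shiftable load from peak hours into allowed off-peak slots.

    profile    : list of 24 hourly demands (kW); a shifted copy is returned
    load_kw    : demand of the shiftable appliance
    peak_hours : hours it currently runs
    allowable  : allowable time slot (ATS), e.g. range(6, 18)
    target     : shifted time slot (STS) chosen inside the ATS
    """
    out = list(profile)
    for h in peak_hours:
        out[h] -= load_kw
    for h in target:
        if h not in allowable:
            raise ValueError("shifted slot must lie inside the ATS")
        out[h] += load_kw
    return out
```

The total daily energy is unchanged by the shift; only its distribution over the day changes, which is what flattens the peak in Fig. 6.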


5 Modeling and Input Parameters Figure 7 shows the configuration of the HRES components. The input parameters required for optimization using HOMER Pro® are given in Table 3. The optimization procedure, with and without demand-side management, is as follows: Step 1: Give HOMER the input data after estimating the load demand and the potential of the resources. Step 2: Select the battery type (LA or Li-Ion). Step 3: Enable the load-following strategy with the renewable energy resources 'ON'. If P_RE(t) = P_D(t), the system is optimized as is. If P_RE(t) > P_D(t) and SOC < SOC_max, the excess energy charges the battery; if SOC = SOC_max, the excess energy is diverted to the dump load.

Fig. 7 Configuration of the proposed HRES (PV, wind, MHP, DG, the LA/Li-Ion battery bank and the load connected across the DC and AC buses through a bi-directional converter)

Table 3 Input parameters for optimization

Component                              Capital Cost ($)   Replacement Cost ($)   O&M Cost ($)   Life Span
PV (1 kW) [12]                         630                0                      10             25 Y
Wind (1 kW) [12]                       300                300                    20             20 Y
Hydro (1 kW) [13]                      1300               0                      100            30 Y
DG (1 kW) [2]                          220                200                    0.03           15,000 h
Battery [14, 15] – LA (1 kWh)          162.7              162.7                  5              10 Y
Battery [14, 15] – Li-Ion              190                150                    0              15 Y
Bi-directional converter (1 kW) [12]   300                300                    3              15 Y
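The NPC and COE reported later follow from annualizing such costs with the capital recovery factor, in the style HOMER uses. A simplified sketch follows; replacement and salvage costs are omitted for brevity, and the discount rate and project lifetime are illustrative assumptions, not the study's values.

```python
def crf(i, n):
    """Capital recovery factor for discount rate i over n years."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def npc_and_coe(capital, annual_om, annual_energy_kwh, i=0.08, n=25):
    """Net present cost and cost of energy of a component mix.

    The total annualized cost is the capital cost annualized by the CRF
    plus the yearly O&M cost; NPC discounts that back over the project
    lifetime, and COE divides it by the annual energy served (kWh).
    """
    annualized = capital * crf(i, n) + annual_om
    npc = annualized / crf(i, n)
    coe = annualized / annual_energy_kwh
    return npc, coe
```

With the component costs of Table 3 summed over the sized system, this reproduces the structure (though not the exact figures) of the NPC/COE comparison in the results.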


Step 4: If P_RE^d(t) < P_D^d(t), the batteries start discharging along with the renewable energy generation.
Step 5: If the battery SOC falls to its minimum limit, the diesel generator is turned 'ON' to meet the deficit energy.
Step 6: The cycle repeats for the 8760 h of the year.
Step 7: Steps 2 to 6 are repeated with the Li-Ion battery.
The detailed flow of the optimization procedure is given in the flowchart shown in Fig. 8.
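The hourly dispatch logic of Steps 3–5 can be sketched as follows. This is a simplified illustration of a load-following strategy under our own naming, not HOMER's internal code:

```python
# One hour of load-following dispatch: surplus renewables charge the battery
# (excess to dump load); a deficit is met first by the battery, then by DG.
def dispatch_hour(p_re, p_load, soc, soc_min, soc_max, batt_kwh):
    """Return (soc, dump_kw, dg_kw) after one hour of dispatch."""
    dump = dg = 0.0
    if p_re >= p_load:                        # surplus renewable energy
        surplus = p_re - p_load
        room = (soc_max - soc) * batt_kwh     # energy the battery can absorb
        charged = min(surplus, room)
        soc += charged / batt_kwh
        dump = surplus - charged              # excess goes to the dump load
    else:                                     # deficit: battery, then DG
        deficit = p_load - p_re
        avail = (soc - soc_min) * batt_kwh    # usable stored energy
        discharged = min(deficit, avail)
        soc -= discharged / batt_kwh
        dg = deficit - discharged             # remaining deficit met by DG
    return soc, dump, dg
```

Looping this over 8760 hours for each battery type reproduces the structure of Steps 2–7 above.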

6 Results and Discussion

The techno-economic feasibility of the proposed HRES is assessed for two configurations, PV/MHP/BT and PV/WT/MHP/DG/BT, considered as Case 1 and Case 2 respectively, each using LA and Li-Ion batteries. Optimization has been carried out with and without DSM, and the corresponding results are given in Tables 4 and 5.

6.1 HRES with and Without DSM

Without DSM, the performance of the HRES is optimal with the PV/MHP/Li-Ion configuration, for which the NPC and COE are found to be $467,644 and 0.106 $/kWh. With DSM implementation, the NPC and COE are found to be $314,564 and 0.072 $/kWh. The performance characteristics of the HRES with DSM are shown in Figs. 9, 10, 11 and 12. Figures 9 and 10 are the power output curves of Case 1 with the LA and Li-Ion battery-based HRES; similarly, Figs. 11 and 12 correspond to Case 2. From Figs. 9 and 10, it is observed that the load is met by MHP and PV, with the battery as backup; however, on 2nd September the load is fully met by the battery in the absence of renewable energy resources. From Figs. 11 and 12, it is observed that the load is met by the DG whenever it is unmet by the renewable energy resources. In both cases, the batteries charge whenever the PV is generating power. Li-Ion batteries can be discharged down to 0% [15], whereas LA batteries cannot; a minimum SOC of 20% is considered for them in this study. Figures 13 and 14 illustrate the SOC status of the LA and Li-Ion batteries over the 8760 h of the year. From these figures, it is observed that the batteries discharge down to their lower limits whenever the renewable energy resources are not generating power. In this perspective, the nominal and usable capacities remain the same for Li-Ion batteries. Therefore, for the pre-defined load demand, fewer Li-Ion batteries are required compared with LA batteries, which leads to a higher NPC for the LA battery-based HRES both with and without DSM.

Fig. 8 Flow chart for optimization

Table 4 Optimization results without DSM

Configuration    Battery type  PV (kW)  Wind (no.s)  MHP (kW)  DG (kW)  Battery (no.s)  Converter (kW)  Net present cost ($)  Cost of energy ($/kWh)
PV/MHP/BT        LA            322      –            15.7      –        915             110             712,975               0.162
PV/MHP/BT        Li-Ion        303      –            15.7      –        843             117             467,644               0.106
PV/WT/MHP/DG/BT  LA            175      13           15.7      120      591             110             833,143               0.184
PV/WT/MHP/DG/BT  Li-Ion        193      2            15.7      120      833             110             678,208               0.15


Table 5 Optimization results with DSM

Configuration    Battery type  PV (kW)  Wind (no.s)  MHP (kW)  DG (kW)  Battery (no.s)  Converter (kW)  Net present cost ($)  Cost of energy ($/kWh)
PV/MHP/BT        LA            319      –            15.7      –        901             108             689,565               0.161
PV/MHP/BT        Li-Ion        295      –            15.7      –        986             121             314,564               0.072
PV/WT/MHP/DG/BT  LA            169      29           15.7      120      431             109             822,509               0.183
PV/WT/MHP/DG/BT  Li-Ion       176      1            15.7      120      796             108             548,855               0.122


Fig. 9 Power outputs of Case 1 with LA battery

Fig. 10 Power outputs of Case 1 with Li-Ion battery

Fig. 11 Power outputs of Case 2 with LA battery

6.2 Effect of Batteries on System Performance

The effect of LA and Li-Ion batteries on the PV/MHP/BT HRES has been considered for the system evaluation. The MHP delivers constant output power because the water discharge remains the same; hence its capacity does not change with the type of battery.

Fig. 12 Power outputs of Case 2 with Li-Ion battery

Fig. 13 SOC of the LA battery for Case 1


Fig. 14 SOC of the Li-ion battery for Case 1

Using Li-Ion batteries, the number of PV panels is reduced by 19 compared with LA batteries without DSM, and by 26 with DSM. The NPC and COE are reduced by 34% with Li-Ion batteries compared with LA batteries without DSM, whereas a 55% reduction is found with DSM implementation.
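The quoted reductions can be verified directly from the NPC values of the PV/MHP/BT rows in Tables 4 and 5 (a quick arithmetic check, not new analysis):

```python
# Check the LA -> Li-Ion NPC reductions quoted in the text.
npc_la, npc_li = 712_975, 467_644            # PV/MHP/BT without DSM (Table 4)
npc_la_dsm, npc_li_dsm = 689_565, 314_564    # PV/MHP/BT with DSM (Table 5)
cut = 100 * (npc_la - npc_li) / npc_la                   # ~34% reduction
cut_dsm = 100 * (npc_la_dsm - npc_li_dsm) / npc_la_dsm   # ~54-55% reduction
```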

6.3 Effect of DSM

From the results, it is found that the number of devices required decreases with the DSM implementation. In addition, the operating costs, namely NPC and COE, are reduced in both Case 1 and Case 2. For the optimal HRES, the NPC decreases by $153,080 compared with the HRES without DSM, while the COE is reduced by 0.034 $/kWh. The operating costs are reduced to a similar extent in Case 2.


7 Conclusion

In the present study, the techno-economic feasibility of a PV/wind/hydro/DG/battery hybrid renewable energy system for a cluster of unelectrified villages in Karnataka state (India) has been considered. To study the HRES performance, PV/MHP/BT and PV/WT/MHP/DG/BT configurations have been evaluated using LA and Li-Ion batteries, and a load-shifting-based DSM strategy has been implemented to assess system performance. From the results, it is found that the Li-Ion battery-based HRES offers the lowest operating costs both with and without DSM, and that the DSM-based HRES performs better than the HRES without DSM. The optimal NPC and COE of the proposed HRES are found to be $314,564 and 0.072 $/kWh with DSM using the Li-Ion battery-based system.

Acknowledgements The first author conveys sincere thanks to M.C.E., Hassan for the Ph.D. deputation and extends his gratitude to the Department of Hydro and Renewable Energy, Q.I.P., IIT Roorkee for providing the opportunity.

References

1. A. Maleki, F. Pourfayaz, Optimal sizing of autonomous hybrid photovoltaic/wind/battery power system with LPSP technology by using evolutionary algorithms. Sol. Energy 115, 471–483 (2015)
2. M. Hossain, S. Mekhilef, L. Olatomiwa, Performance evaluation of a stand-alone PV-wind-diesel-battery hybrid system feasible for a large resort center in South China Sea. Sustain. Cities Soc. 28, 358–366 (2017)
3. A.L. Bukar, C.W. Tan, K.Y. Lau, Optimal sizing of an autonomous photovoltaic/wind/battery/diesel generator microgrid using grasshopper optimization algorithm. Sol. Energy 188, 685–696 (2019)
4. H. Dagdougui, R. Minciardi, A. Ouammi, R. Sacile, Optimal control of a regional power microgrid network driven by wind and solar energy, in IEEE Proceedings of International Systems Conference, Montreal, QC, Canada (2011), pp. 86–9
5. A.C. Duman, Ö. Güler, Techno-economic analysis of off-grid photovoltaic LED road lighting systems: a case study for northern, central and southern regions of Turkey. Build. Environ. 156, 89–98 (2019)
6. Z. Abdin, W. Mérida, Hybrid energy systems for off-grid power supply and hydrogen production based on renewable energy: a techno-economic analysis. Energy Convers. Manag. 196, 1068–1079 (2019)
7. M.S. Javed, A. Song, T. Ma, Techno-economic assessment of a stand-alone hybrid solar-wind-battery system for a remote island using genetic algorithm. Energy 176, 704–717 (2019)
8. S. Rajanna, R.P. Saini, Employing demand side management for selection of suitable scenario-wise isolated integrated renewal energy models in an Indian remote rural area. Renew. Energy 99, 1161–1180 (2016)
9. G.R. Aghajani, H.A. Shayanfar, H. Shayeghi, Demand side management in a smart micro-grid in the presence of renewable generation and demand response. Energy 126, 622–637 (2017)
10. A. Chauhan, R.P. Saini, Size optimization and demand response of a stand-alone integrated renewable energy system. Energy 124, 59–73 (2017)


11. J. Vishnupriyan, P.S. Manoharan, Demand side management approach to rural electrification of different climate zones in Indian state of Tamil Nadu. Energy 138, 799–815 (2017)
12. T. Adefarati, R.C. Bansal, Reliability, economic and environmental analysis of a microgrid system in the presence of renewable energy resources. Appl. Energy 236, 1089–1114 (2019)
13. C. Leonard, P. Klintenberg, F. Wallin, B. Karlsson, C. Mbohwa, Electricity for development: mini-grid solution for rural electrification in South Africa. Energy Convers. Manag. 110, 268–277 (2016)
14. N. Ramchandran, R. Pai, A. Kumar, S. Parihar, Feasibility assessment of Anchor-Business-Community model for off-grid rural electrification in India. Renew. Energy 97, 197–209 (2016)
15. B.K. Das, F. Zaman, Performance analysis of a PV/Diesel hybrid system for a remote area in Bangladesh: effects of dispatch strategies, batteries, and generator selection. Energy 169, 263–276 (2019)

Advance Security and Challenges with Intelligent IoT Devices Neha Sharma and Deepak Panwar

Abstract By 2025, more than 25 billion devices with high cognitive intelligence will be connected over the Internet. The Internet of things thus faces major issues and threats regarding the security of these billions of intelligent connected devices. Internet connectivity will reach 7G in the future (in Japan it is already on the way), and with it the complexity of connectivity as well as security increases. The Internet of things opens the doors for advancement in various sectors, but the security of cognitive devices has become a major threat as IoT advances. Devices are now connected through advanced, high-speed Internet, and their data is stored over different types of clouds. This paper therefore focuses on the security of smart cognitive devices with the assistance of deep learning, artificial intelligence and big data in IoT; discusses the security issues of intelligent devices across their day-to-day applications; and proposes a secured methodology for a "Smart ATM" using advanced applications of deep learning and natural language processing (NLP) with artificial intelligence. The paper applies "Weapon Detection" and "Crime Intention Detection" using edge computing with convolutional neural networks, rather than cloud computing with machine learning, to obtain cognitive intelligence, fast access and high security in smart IoT devices. It also proposes "Smart Laser Fencing" together with "Deep-Fi Geo-Fencing" for hi-tech security of smart devices, which eliminates major security threats and reduces the rate of cyber-crime, chiefly in e-money transactions as well as in various other domains.

Keywords Internet of things (IoT) · Internet of everything (IoE) · IoT security · Deep learning · Artificial intelligence · Edge computing · Smart authentication · Smart weapon detection · Smart ATM security · Cyber security

N. Sharma (B) · D. Panwar
Computer Science Engineering Department, Amity University, Rajasthan, India
e-mail: [email protected]
D. Panwar
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_17

177


1 Introduction

In the present scenario, with the advancements and improvements in technology, security issues and challenges have broadly emerged as a major problem. IoT devices are enabled with cognitive thinking, feature advanced computing applications over different types of clouds, and are used in specialized areas with a high automation rate through artificial intelligence and machine learning. These devices work efficiently in their specific areas, but they are not safe and secure; their security is breached widely because of Internet connectivity and cloud storage. For instance, on smart phones, end users install many regular apps for e-transactions and, at installation time, pay no attention to the terms and conditions or to the permissions they grant; just a click away, all the confidential information stored in that smart phone is automatically stored over the cloud for as long as the application remains installed. In the IT sector, "Bit-coin" attacks have become popular, occurring through Denial of Service (DoS) attacks on different devices. Recently, Facebook was fined for leaking and selling a personal-information database. For online transactions, we fill in all our card details, even the CVV number, and often save that card information for future use; is this information really safe? Cases of OTP stealing keep rising even when a third-party secure gateway protects those pages. ATM PINs are hacked and Internet banking passwords are hacked, so as the world becomes smarter, human beings have to become smarter still to prevent security breaches.

1.1 Internet of Things (IoT)

Several things or devices connected over the Internet are known as the IoT [1]. It merges various wireless technologies such as Wi-Fi, Zigbee and many more [2], together with different network protocols, and serves various large-scale sectors.
Smart homes and smart cities. Smart homes are homes in which all devices, such as the TV and AC, are operated with IoT technology; for example, RFIDs monitor activities and support operational decisions that save money and energy [3]. Smart cities [3] are the full package of IoT applications used across domains such as smart energy, agriculture automation and smart sensors in automatic devices [4].
Food supply chain. Food supply chains are a major application of IoT: using IoT techniques, producers keep track of production, product quality and security from the farms to the consumers; decision making is far better and profit also increases when IoT is used in business models [5].
IoT in mining. In the mining industry, IoT techniques are used for security and communication among miners; sensors are used to identify various diseases in miners [6].

Advance Security and Challenges with Intelligent …

179

IoT in transportation and logistics. In transportation and logistics, IoT techniques track vehicles and devices using RFID and sensors in real time [7].
IoT in garments. In the garments industry, IoT techniques are used in the medical area (patient clothes), and in malls, sensors are installed on each product [8].

1.2 Internet of Everything (IoE)

The Internet of everything (IoE) [9, 10] touches business and industrial processes to enhance their functionality. The IoE observes real-time data through trillions of connected sensors and combines "automated and people-based processes." IoE helps provide sustainability with respect to environmental and economic objectives [11].

1.3 Internet of Nano Things (IoNT)

The concept behind IoNT is to provide access to data from sources that were previously impossible to sense because of sensor size. It is used chiefly in medicine [12, 13]. In IoT, IoE, IoNT, Industrial IoT and Green IoT, every device is connected over the Internet, which now expands to 5G, 6G and 7G, widening the range of IoT in terms of functionality [14].

2 Related Work

2.1 IoT Security and Challenges

WLAN networks create security threats and attacks with high probability, and in IoT everything is connected over wireless or wired Internet connections. Security is therefore a major issue in today's era, chiefly in the area of mobile banking transactions and online money transfer via payment apps (e.g., PayPal) and online e-retail and shopping apps. IoT will in the future be connected with 5G, 6G and 7G, bringing more advancement with more threats, so it is difficult to manage large numbers of devices securely over the Internet. Attacks such as Trojan horses, worms, IP sniffing, IP spoofing, phishing and pharming violate security principles such as integrity, confidentiality, access control and authentication. IoT protects data with encryption/decryption techniques and with prevention techniques against DoS and encryption attacks [15]. Various surveys [16] have been performed on IoT security over different network layers; they study the architecture of IoT security and threats in depth, and discuss the IoT security problems with


different layers, and the IoT security measures at the perception, network and application layers.

2.2 IoT Security with Artificial Intelligence

Artificial intelligence makes a system intelligent, similar to the human mind. Over 13 billion devices are connected over the Internet via IoT, and merging IoT with machine learning, artificial intelligence and deep learning boosts the functionality of IoT many times over; examples are "Amazon Echo" and "Google Home." These devices not only control electronic devices but also work with speech recognition, like "Google Assistant." Artificial intelligence also works with IoT in block-chain technology and in facial-recognition biometric authentication; in many countries this application has been deployed at airports and has failed authentication several times. The Internet of things works with 3-layer [17, 18] (perception + network + application) and 5-layer [19] architectures. Artificial intelligence makes IoT a "Smart IoT": it saves a lot of time and gives objects or devices "supervised learning," a feature of AI. Here, instructions are given to smart devices in human language, which is easy for humans to understand and use, because not every human being knows how to code and give instructions to devices in machine language; for example, instructions are given to "Amazon Echo" via Alexa. Alexa is very popular due to its automation and stores information over Amazon Web Services, the cloud service provided by Amazon, so security threats again spread actively through the technology: how can a security breach be prevented when the stored data runs to terabytes or petabytes over the cloud and is easy to hack because the network connections are not very secure? IoT with block-chain is also not secure, owing to the large number of e-transactions happening daily over devices; it is hard to manage security even after the double and triple advanced encryption algorithms of MasterCard, Visa and many others. Security with these advanced technologies is a major challenge for IoT devices.
The ATM machine with IoT also comes up as the "Smart ATM" [20]. This paper employs these innovative and latest applications of artificial intelligence with IoT devices, taking the ATM security scenario as an example: at the ATM booth, smart doors [21, 22] with face recognition [23]. IoT with AI brings fingerprint sensing [24] with advanced technologies, smart sensor cameras [25], smart doors as now used in smart-home [26] security, speech recognition [27], facial-expression recognition [28] with deep learning, facilitating smart geo-fencing-based payment transactions [29], "Smart Geo-Fencing" [30] and the smart safe [31].


2.3 IoT Security with Machine Learning and Deep Learning

"Deep learning (DL)," which evolved from "machine learning (ML)," is becoming exponentially more powerful in many fields such as "natural language processing (NLP)," "vision recognition (VR)" and "face recognition (FR)" [32, 33], and is the most appropriate technique to solve the above problem [34]. DL introduces many functions related to IoT and mobile apps with fast, promising results; for example, DL summarizes the power consumption of electricity from data collected through "smart meters," which improves the electricity supply rate of the "smart grid" [35]. "Edge computing (EC)" is another major technology for IoT services in "Smart IoT" applications [36]. DL with IoT performs well in different fields with different CNN networks, and merges with different branches of artificial intelligence such as "deep reinforcement learning (DRL)" [37], transfer learning with deep models (TL) [38], online learning algorithms and ladder networks [39], and with applications such as speech/voice recognition [40], image recognition [41], indoor localization using DeepFi [42, 43], physiological and psychological state detection [44], and security and privacy [44–46]. Deep learning over big data in IoT is a highly effective approach for IoT devices [47].

3 Proposed Methodology

The objective of this paper is to provide security to cognitive devices connected through the Internet of things and embedded with artificial intelligence and machine learning, and to provide better solutions against the security challenges of IoT. As a case study, the paper takes the cognitive device "Smart ATM" and proposes security steps for the overall arena of the Smart ATM. These proposed steps also work with several other smart cognitive devices wherever authentication matters, whether it is authenticating a person to enter a house, placing any kind of e-transaction through applications on a smart phone, or making physical transactions of money at an ATM. To provide advanced security to cognitive devices, deep learning with edge computing is proposed here, replacing cloud computing with machine-learning methodologies. Deep learning is a powerful analytics tool for big data. For IoT devices, an open issue is how to protect data from a noisy and fuzzy environment that misleads traditional ML methodologies. Edge computing is the advanced version of "cloud computing (CC)," which evolved to overcome the disadvantages of CC and to analyze the huge amount of IoT data collected from different IoT objects. EC moves computing functions from the centralized cloud to the edge of IoT objects, and data transfer is reduced exponentially by preprocessing functions. EC performs excellently when the size of the input data is greater than the size of the intermediate data.


Edge computing (EC) with DL provides more privacy in transferring intermediate-state data. Conventional big-data systems such as Spark maintain privacy from preprocessing to data semantics. In deep learning, intermediate data has contrasting semantics compared with the source data; for instance, it is hard to recover the features of the original data from the intermediate representations of a convolutional neural network (CNN). Deep learning in IoT increases performance exponentially, reduces network congestion and, most importantly, performs best with huge data in IoT applications. ML provides accuracy that depends on some other entity, such as a human face in image processing; DL improves throughput compared with ML in the listed technologies as well as on multimedia data. An open issue in implanting DL in IoT devices is a general shortfall in hardware and software support for parallel processing. Deep learning is therefore better than ML for AI applications and image processing, and in this paper the concept of DL is used to authenticate the user's identity. DL with edge computing is used in voice recognition and NLP for better outputs, and this paper uses the technology for better outputs of the IoT device. To enhance the capabilities of deep learning on IoT devices, an inference engine is designed that improves scalability, QoS and robustness and reduces limitations in capturing video or audio objects. The deep-learning applications "Weapon Detection" and "Crime Intention Detection" are applied in the proposed methodology to secure ATM transactions from intruders.
As an example IoT device, take the "Smart ATM." The ATM machine is easy to use and safe, and users no longer depend on banks as in earlier times: money can be debited or credited anytime, anywhere, without any constraint of time and place. The safety side, however, grows worse in the world of Wi-Fi: e-money misuse is rising drastically, as is cyber-crime. Not only criminal incidents but also cyber-crimes increase day by day, producing drastic violations of a person's security and privacy and stealing his or her lifetime earnings in seconds; this evil side turns out as "ATM FRAUD." In the smart ATM, the major challenges are:
• ATM PIN stealing: stealing the ATM PIN by pretending to be a bank official in a call, message or email communication.
• PIN surfing: stealing the ATM PIN when the user enters it at the ATM machine for a bank transaction.
• Skimming: a duplicate card slot is fitted at the ATM machine very close to the real one and captures the card information stored in the magnetic track of the ATM card; after obtaining the information, the thieves remove the duplicate slot and download all the ATM card details.
To overcome these challenges, the "Smart ATM" methodology proposed here prevents security breaches across the whole scenario of ATM money transactions, from user identification to user authentication.


3.1 Proposed Steps to Provide Security to the ATM Arena

Smart Laser Fencing (invisible). Smart fencing with deep learning is proposed here to provide advanced security to cognitive IoT devices. It is also beneficial for the security of many other IoT devices, such as "smart homes," factories, banks and jewellery showrooms. In the case of the ATM booth, if someone tries to access the locker of the ATM machine with the intention of stealing it, the sensors attached to the locker activate an invisible laser shield, a "REDWALL"; it carries enough electric shock to stun a person who tries to breach the wall, making robbery impossible for robbers. A tear-gas sensor is also activated so that the person desists from breaching the wall, and security alarms ring automatically once the laser shield is activated. The laser shield can be de-activated only by the administrator from the control room through a run-time software code.
Smart Doors. Smart doors carry smart locks with fingerprint sensors and a two-step access code, with advancements and security provided by deep learning. For ATM security, one lock code is set permanently and the other is randomly generated by software and delivered to the user's phone like an OTP; only if both are correct can one enter the ATM booth. At the time of the fingerprint scan, all information about the customer owning that fingerprint, with the access time, is recorded automatically in the database. In case of an identity mismatch, the door does not open and the booth is secured automatically; if access is denied more than 5 times, a warning message is triggered to the control room, from where the unauthorized person is easily detected and marked as suspect. The number of thefts becomes very low because of fear and the advanced security facing robbers. Fingerprints are already linked with bank KYC and with the Aadhaar portal, so an unauthorized person is easily traced by police. This proposed methodology is also useful for other IoT devices, for example "smart homes." Some major restrictions placed on the smart doors of the ATM booth are:
• Limit on the number of user entries. In the present scenario, the number of users equals the number of machines, so users' entry must be limited by authentication: users may enter the ATM booth only after authentication at the smart door. The smart door has sensors; if it detects only one authenticated person but then detects steps inside the booth that it cannot match, the security alarm rings automatically and the control room is notified at the same time, and the administrator has the right to de-activate the ATM machines of the particular booth that reports suspicious activity.
• Set the door unlock time. The door unlock time should not exceed 5–8 s. Only the person with the debit card is allowed inside the ATM booth by the security guard; no other person should be admitted. If someone wants to use another person's card, for example a student using a father's or mother's card in an emergency, he or she has to be registered as an alternate nominee with the bank and add his or her biometrics; only then can the card be used at the ATM machine.
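The two-step door check described above can be sketched as follows. This is a minimal illustration under our own names (the fingerprint match itself is assumed to be handled by the sensor hardware), not a production design:

```python
import hmac
import secrets
import time

# Sketch of the smart-door two-step check: a fixed access code plus a
# short-lived randomly generated code sent to the user's phone.
class SmartDoor:
    MAX_ATTEMPTS = 5                     # warn the control room after 5 failures

    def __init__(self, access_code):
        self.access_code = access_code   # the permanently set code
        self.failures = 0

    def issue_otp(self, ttl_s=30.0):
        """Generate the second, randomly generated code (6 digits)."""
        self.otp = f"{secrets.randbelow(10**6):06d}"
        self.otp_expiry = time.time() + ttl_s
        return self.otp                  # in practice, sent to the phone

    def try_open(self, code, otp):
        """Open only if both codes match and the OTP has not expired."""
        ok = (hmac.compare_digest(code, self.access_code)
              and time.time() < self.otp_expiry
              and hmac.compare_digest(otp, self.otp))
        self.failures = 0 if ok else self.failures + 1
        if self.failures > self.MAX_ATTEMPTS:
            print("alert: notify control room")   # the warning trigger
        return ok
```

`hmac.compare_digest` is used instead of `==` so that code comparison takes constant time regardless of where the mismatch occurs.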


Smart Authentication. Smart authentication with deep learning means that if someone wants to access something, access is granted only after verification or authentication succeeds. Different methods are used to prevent access violations; this paper uses voice recognition, face recognition, fingerprint recognition and iris recognition to prevent access violations and provide smart security to users.
DeepFi-Geo Fencing. Smart geo-fence technology with deep learning ("Deep-Fi") is used for payments through the ATM card. The user's location is traced automatically by the bank through the ATM card's GPS and RFID, so in case of any fraud it is easily traced by the bank and the card is blocked. The geo-fencing technology works on the concept of deep learning.
Smart Weapon Detection. In the present scenario, security is a massive issue amid technology advancements. Smart weapon detection based on deep learning and artificial intelligence is proposed here; it uses an R-CNN model to provide advanced security and prevent false detections. The implementation uses TensorFlow with GPU support (an open-source machine-learning library), the Google object-detection API, Python for coding and image datasets. It works on the data-mining concept of classification: the image set is divided into testing and training sets, the image provided by CCTV is validated, TP and TR are calculated for the compared images, and detected objects are shown on a match. This advanced technology is used at the ATM booth to prevent frauds and cyber-crimes and to detect a weapon automatically without the criminal's knowledge.
Smart Crime Intention Detector. The crime-intention detection system detects weapons in the hands of persons, which is used to detect crimes before they happen. The smart crime-intention detection system is based on deep-learning models, for example GoogLeNet and VGGNet-19. It also uses Fast R-CNN, like smart weapon detection, but with improved results, including automatic handgun alarms with deep learning, automatic visual detection of robbery, and a visual object-detector application through CCTV. If a crime happens, as additional functionality of the smart crime detector, the CCTV automatically sends security messages to registered numbers, to prevent crimes and provide advanced security to ATM machines. This application works with the CCTV installed inside the ATM booth, without the criminals' knowledge. It is also useful for identifying terrorists and other persons who intend to commit crimes, preventing mass havoc before something disastrous happens, and for other IoT devices.
Smart Safe. A smart safe with automation and advanced security via deep learning is proposed here. Nowadays, most ATM frauds are carried out by stealing the ATM safe from the ATM machine, but with the "SMART SAFE" no one can even break the safe; it prevents the stealing of money and provides advanced security to ATM safes. The smart safe is enabled with membrane switches, controllers and a bill validator that run 24*7 and provide immense security.
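The geo-fence part of the scheme can be illustrated with a plain great-circle check; this sketch only decides whether a transaction's coordinates fall inside the fence, while the Deep-Fi localization described in the text would be what supplies those coordinates:

```python
import math

# Geo-fence membership test via the haversine distance.
def inside_geofence(lat, lon, center_lat, center_lon, radius_km):
    """True if (lat, lon) lies within radius_km of the fence centre."""
    r = 6371.0                                   # mean Earth radius, km
    p1, p2 = math.radians(lat), math.radians(center_lat)
    dp = math.radians(center_lat - lat)
    dl = math.radians(center_lon - lon)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    d = 2 * r * math.asin(math.sqrt(a))          # great-circle distance
    return d <= radius_km
```

A transaction attempted from outside the fence would then trigger the fraud-tracing and card-blocking step described above.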

Advance Security and Challenges with Intelligent …


Smart Identification & Smart Transaction. The last and major step is smart identification on the ATM machine. The current procedure is not advanced: robberies take place easily by stealing the ATM PIN, which is set permanently by the user; when the card reader reads the card, the account opens automatically and the user performs the transaction. Step 1: After the card is read, the account does not open directly. First, the machine scans the user's iris with an iris scanner. This covers the case where the person who scanned at the ATM-booth smart door is different from the person transacting at the ATM machine: even if someone manages to get past the smart-door authentication, they cannot get past the ATM-machine authentication, so the identification is ultimately recorded by the machine. In special cases, such as a user who has had eye-laser treatment or eye surgery, there is a fallback biometric scanner, so fingerprints are recorded and the sensors do their job on the ATM machine. Step 2: After this, the user's account is opened; following selection of the language, account type, and withdrawal amount, the ATM machine asks two security questions in the language the user chose initially. The answers to those questions were previously saved by the user in the system, much like a Gmail account's security questions. The user does not type the answers but speaks them, so that voice recognition plus security authentication are done together. This eliminates the case of someone using another person's ATM debit card: if either the voice or the answers do not match, the account is locked automatically for 15 min and the transaction cannot take place. Step 3: At present almost everyone has at least a basic phone and easily receives a message on it after a transaction completes, so why not use the same channel during the transaction too?
So in the smart transaction, after smart identification, there is no permanent ATM PIN. Instead, a randomly generated four-digit OTP code is sent to the user by the system after smart identification and remains active for only 10 s; the message is also password protected, so no one can hack or steal the code in transit, and no ATM-PIN stealing can occur. Step 4: The ATM machine verifies the transaction amount in the user's voice, and the transaction is completed. The exit door then opens for the user only after a fingerprint scan. Security measures based on deep learning and edge computing are more beneficial and effective for IoT devices than traditional machine learning and cloud computing in various areas, but especially in providing security and privacy.
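The Step-3 OTP scheme can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and SMS delivery and message encryption are not shown. It uses Python's standard `secrets` module for unguessable codes and constant-time comparison.

```python
import secrets
import time

OTP_TTL_SECONDS = 10  # the scheme specifies the code is active for only 10 s

def issue_otp():
    """Generate a random four-digit OTP and record its issue time."""
    code = f"{secrets.randbelow(10_000):04d}"   # cryptographically random 0000-9999
    return code, time.monotonic()

def verify_otp(entered, code, issued_at):
    """Accept the OTP only if it matches and has not expired."""
    if time.monotonic() - issued_at > OTP_TTL_SECONDS:
        return False                            # expired: user must re-identify
    return secrets.compare_digest(entered, code)
```

The short time-to-live is what makes a stolen code nearly useless: even an intercepted OTP expires before it can be replayed at another machine.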

3.2 Advantages

• Provide a flexible and secured methodology for the "Smart ATM."
• Preempt the security problems related to IoT devices.
• Detect security breaches and incidents in the "Smart ATM" and other IoT devices.
• Assure data authentication and data authorization in IoT devices.
• Assure availability of IoT devices.
• User-friendly approach.


N. Sharma and D. Panwar

4 Open Issues

There are many challenges and threats regarding IoT devices, and they increase rapidly as Internet connectivity evolves from 3G to 7G and beyond.

• Hacking and sniffing are very easy when users unknowingly grant device permissions to anyone, so users have to be informed about these types of attacks, for example, OTP sharing, account-information sharing, etc.
• IoT device security depends on the manufacturer as well as the end user, so at manufacturing time IoT devices have to be equipped with advanced security that keeps pace with technology advancements.
• Users have to be trained against all known types of privacy violations committed by intruders or hackers, so that they are aware of these kinds of attacks.
• Security and privacy measures should also be taken for Internet connectivity options, such as constraints on public Internet connectivity or public Wi-Fi, for instance at airports and hotels, because these are not safe and the security is very easy to breach when a device connects to public Wi-Fi.
• Impose rules and regulations so that devices connect via the Internet only when it is safe and otherwise discard the connection request, because a non-technical user is not aware of all kinds of attacks.
• Impose constraints on access to vital information via the cloud, so that the data becomes practically impossible to hack.
• For the "Smart ATM," maintain backup logs and an active Quick Response Team (QRT) 24×7 for strong privacy, so that preventive action can be taken in time.
• End users should be made aware of the whole updated ATM process, so training campaigns should take place regularly.

5 Future Work

• Manufacture IoT devices with secure Internet connectivity options, leveraging network protocols.
• Manage more vulnerabilities as the technology for securing intelligent IoT devices improves.
• Impose security protocols that are highly protective against malware and attackers on the cloud applications used by intelligent IoT devices.
• Manufacture resilient IoT devices so that an IoT device recovers from and adapts quickly to changes in infrastructure services.
• Perform threat modelling automatically on a timely basis to predict security issues in intelligent IoT devices.


6 Conclusion

The major challenge of IoT security is applying security forecasting to analyze and detect problems as they appear. It is addressed by adopting a multi-layered architecture for managing IoT devices and cloud-based applications and services, building security measures into the IoT devices themselves, and offering high availability. At a large scale of IoT devices, the complexity of the architecture becomes higher, so security analysis is offered to report errors and overcome this. Assured communication is provided among IoT devices and cloud services, here in the ATM-machine case. Data stored within IoT devices maintains privacy, with compliance and recovery options for vulnerabilities, and high confidentiality is imposed via transport encryption. This paper proposed advanced security methods for ATM security that cover the whole ATM security scenario using the Internet of Things together with deep learning, artificial intelligence, image processing, and voice recognition, keeping pace with advancements in technology and devices. This helps greatly decrease robberies targeting the ATM, both physical and technical.


Systematic Assessment and Overview of Wearable Devices and Sensors Shashikant Patil, Zerksis Mistry, and Kushagra Chtaurvedi

Abstract The advancement of wearable technologies has facilitated human beings in many ways. These wearable technologies can moreover help us know whether an individual is sick. In this paper, we discuss and address the various advancements in wearable technology in the fields of human activity recognition and medical diagnosis. The future lies in wearable technology and sensors, based on IoT, nanotechnology, and much more. The applications are far more than one can imagine: military, medical, or scientific research. This is an attempt to give resourceful insights into the domain of wearable devices.

Keywords Advancement · Nanotechnology · Diagnosis · Wearable technology · Human activity recognition

1 Introduction

The advancement of wearable technologies has facilitated human beings in many ways. They provide real-time monitoring and reliable data with good precision and accuracy. These wearable techs can moreover help us know whether an individual is sick (Fig. 1). They can monitor heart rate and pulse rate. Effective deployment of these sensors can also help in predicting crimes before they happen. This paper discusses the various advancements in wearable technology in the fields of human activity recognition and medical diagnosis. Using complicated machines with active workers and constant monitoring is far too old school. The future lies in wearable tech, based on IoT, nanotechnology, and much more. The applications are far more than one can imagine: military, medical, or scientific research. IoT converts the whole concept of visualization into reality [1].

S. Patil (B) · Z. Mistry · K. Chtaurvedi, SVKMs NMIMS, Shirpur Campus, Shirpur 425405, India. e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_18



S. Patil et al.

Fig. 1 Share of devices

We are in the year 2020 and the market in the wearable-technology field is still growing: global end-user spending on wearables will hit $52 billion in 2020, according to a new forecast from research firm Gartner, an increase of 27% over the previous year. There may be an exponential rise after the proliferation of wearable technology. The major functions that fulfil the aim of wearable technology are as follows (Fig. 2). All of these, integrated and synchronized properly, can give what we aim for: an ultra-smart wearable integrated with miniature sensors such as accelerometers, electromyography sensors, gyroscopes, inclinometers, and magnetometers. Attached to the body unobtrusively, these can help in diagnosing and treating various health problems related to motor disorders such as Parkinson's disease, as well as respiratory conditions such as COPD (chronic obstructive pulmonary disease) [2]. Using these highly accurate wearable technologies, we can even assess post-stroke conditions more efficiently and effectively. Earlier, it was a great challenge to understand such health issues because the motor movements are hard to trace at a small scale. These movements are usually very minute, and highly precise, accurate sensors need to be attached to the body all the time in order to generate a statistical report [3]. This report can be accessed by medical experts, and a bull's-eye approach to diagnosis can be achieved. Wearable technology, on the other hand, makes everything easy, with all the miniaturized sensors working together to give real-time-monitored, well-estimated data in a closed-loop system. Moreover, the sensors are also able to filter out the most significant data, such as ECG level, heart rate, and bpm, during stair climbing or running, which


Fig. 2 Aim of wearable technology

can help in better assessment and diagnosis. One major use of this wearable tech is monitoring the condition of a patient suffering from post-stroke conditions. The signals of a nerve regaining its senses after a stroke or damage can be very weak and very minute. Electrical pulses can be detected via these wearables; data on grip strength, EMG signals, etc. can be recorded, and the estimated recovery time and the kind of treatment the patient requires can be asserted accordingly. Examining small datasets via a clustering methodology gives wearable tech an extra edge. A clustering methodology is basically an advanced approach to filtering the kind of data we want from the heap of recorded data [4].

2 Brief Categorization of Wearable Technology

The major categories of wearable technology are shown in Figs. 3 and 4.


Fig. 3 Categorization of wearable technology

Fig. 4 Wearable Technology. Source https://images.app.goo.gl/tjaWU6iBqjjA38zM7


Fig. 5 Working of sensory module

The major fields where WT is used are:

• People working in hazardous environments (radioactive and mining areas).
• Monitoring during labor.
• Monitoring Parkinson's disorder.
• Health monitoring in remote areas.

Sensor nodes and IoT form the backend of HWT (health wearable technology). HWT can be embedded in the body for real-time assessment, which allows doctors to assess many patients at a time through the real-time monitoring done by the wearable devices [5]. What is a sensor node? The sensory module is its most important part, sensing the required parameters and converting the physical parameters to numeric form (Fig. 5). This is how a sensory module works:

• Sensing module: collects all the information that is relevant, desirable, and necessarily required.
• Signal conditioning: not all the collected information is necessary; this stage removes redundancy and already-known information.
• Power module: supplies the required power to all the units.
• ZigBee module: manages the collected information and makes sense of the information received.

HWTs can be implanted into the body, fixed to the body using an adhesive patch or band, or attached to glasses or specs. HWT aims to be unobtrusive and hands-free. HWTs are multifunctional, portable, comfortable, accessible, useful, reliable, etc. They are present in fitness trackers, vital-sign monitors, cardiovascular defibrillators, insulin pumps, etc. The added advantage of HWTs is more efficient tracking; moreover, we have more precise data to rely upon (Fig. 6).
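The module chain above can be sketched as a small data pipeline. This is an illustrative sketch only — the readings and function names are hypothetical, and the ZigBee stage is a stand-in that formats data rather than driving a real radio.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One physical parameter converted to numeric form by the sensing module."""
    name: str
    value: float

def sensing_module():
    # Hypothetical raw samples; a real node would poll hardware here.
    return [Reading("heart_rate_bpm", 72.0),
            Reading("heart_rate_bpm", 72.0),   # redundant duplicate sample
            Reading("skin_temp_c", 36.6)]

def signal_conditioning(readings):
    """Drop redundant/known information, keeping one value per parameter."""
    latest = {}
    for r in readings:
        latest[r.name] = r.value
    return latest

def zigbee_transmit(payload):
    """Stand-in for the ZigBee radio: format the conditioned data for the gateway."""
    return ";".join(f"{k}={v}" for k, v in sorted(payload.items()))

frame = zigbee_transmit(signal_conditioning(sensing_module()))
```

The power module has no software counterpart here; in hardware it would supply all three stages.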


Fig. 6 HWT: health wearable tech

Challenges to Health Wearable Technology. One major issue that has always been faced with wearable technology is the power source: the latest Google Glass, awarded as the best wearable technology, has a power supply lasting just 6 h. Various challenges arise while using HWTs: cost, weight, discomfort, power management, waterproofing, durability, insecurity, and data protection. Patients with chronic diseases have to wear these HWTs all their lives, and frequent maintenance is required. Sophistication makes these HWTs more expensive. Since the number of patients is large, the stored data runs into terabytes, so more powerful servers and larger storage are required, which makes the whole process expensive. The devices are heavy, and hence a challenge arises in integrating them. There are also rumors that manufacturers are willing to exchange private data for money. HWTs are also vulnerable to hacking (in most cases). Errors arising in the information are hard to verify, although the chances are low. HWTs are claimed to emit low-level radiation, posing various risk factors [6] (Fig. 7).

Human Activity Recognition. HAR is being adopted in smart-home healthcare systems to improve patient recovery. It includes physical human-activity recognition through wearable sensors, which help in monitoring the vital signs of any individual. HAR has been an active

Fig. 7 Human activity recognition. Source https://images.app.goo.gl/Hgn12BsStNnq5qL8A


field of research for a fairly long time, to facilitate the daily needs of common people. Through these systems, researchers study human behavior and the vital signs of individuals. Information regarding behavior can be gathered by the sensors and continuously processed through machine-learning algorithms to recognize daily events. The HAR system can be used for medical diagnosis and also for prediction [7]. Work done to date on activity recognition has mainly focused on simple daily activities, classified by their duration, such as exercising, walking, and running. HAR systems can be differentiated on the basis of wearable sensors such as proximity sensors and accelerometers. The systems proposed to date have mainly focused on video sensors, since video gives a clear view of the surrounding environment, but there are many difficulties, including privacy issues. It is better to use inertial sensors, which overcome this difficulty of video sensors. Thus, when it comes to human activity recognition, the best sensor is the accelerometer, focusing on human motion recognition. The sensor can be deployed in two ways: in a BSN (body sensor network), or in combination with other sensors such as gyroscope, temperature, and heart-rate sensors, which has improved recognition performance [8].

3 Sensor Modality

Sensor modality plays an important role in activity recognition. It can be subdivided into various types, namely [9]:

3.1 Body Worn Sensors

As the name suggests, these sensors are worn by users for the purpose of activity recognition. As discussed above, they mainly include accelerometers, gyroscopes, etc. The human body is continuously moving, which affects its acceleration and velocity; this completely fulfils the purpose of installing these sensors in wearable things, from which human activities can be inferred. They are most often found in smartphones, smartwatches, bands, etc. These devices can handle multiple sensors simultaneously with continuous on-device processing. Most importantly, wireless communication makes them very useful for activity recognition and motion sensing. The data collected by these sensors are processed through machine-learning algorithms, which can then send the desired data to the user [10–13].


Fig. 8 Ambient sensing. Source https://images.app.goo.gl/eHEbzfNgqYJpmR5w5

3.2 Object Sensors

These sensors, unlike body-worn sensors, are deployed to study the behavior of a particular object. By detecting the movement of the object, one can infer human activity. For instance, RFIDs (radio-frequency identification tags) are especially used as object sensors, as they can provide more precise information [14].

3.3 Ambient Sensors

These sensors are used to record the interaction between the environment and objects and can be deployed in the user's environment. Under this category come temperature sensors, pressure sensors, sound sensors, etc. They are basically used for recording changes in the environment. Work done to date also includes combining multiple sensors for activity recognition, which constitutes hybrid sensors (Fig. 8).

3.4 Hybrid Sensors

These are sensors in which the ambient sensors work together with several others, such as accelerometers and gyroscopes, for activity recognition. They record both movement and environmental conditions.


Fig. 9 Block diagram for deep learning. Schematic presentation

Fig. 10 Basic layout of deep neural network. Source https://images.app.goo.gl/iAXyrXxrMv1F pU5q8

4 Deep Models

4.1 Deep Neural Network

This is developed from the ANN (artificial neural network). An ANN consists of shallow layers, while deep neural networks consist of many more layers. Deep neural networks are more capable of handling large amounts of data effectively because of their large number of layers, and so are preferred over shallow ANNs [9, 15–17] (Figs. 9, 10, and 11).
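The shallow-versus-deep distinction above is only about how many layers are stacked. A minimal NumPy forward-pass sketch (illustrative only; random untrained weights, hypothetical dimensions) makes this concrete:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Run input `x` through a stack of (weight, bias) layers.
    A shallow ANN has one or two entries in `layers`; a deep network
    simply stacks many more of the same building block."""
    for w, b in layers:
        x = relu(x @ w + b)
    return x

rng = np.random.default_rng(1)
dims = [8, 16, 16, 16, 4]                      # deeper = more hidden layers
layers = [(rng.normal(size=(m, n)) * 0.1, np.zeros(n))
          for m, n in zip(dims, dims[1:])]
out = forward(rng.normal(size=(1, 8)), layers)
```

Extending `dims` with more hidden sizes deepens the network without changing any other code, which is the structural point of the DNN-over-ANN preference.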

4.2 Convolutional Neural Network

This relies on three basic ideas:

1. sparse interaction


2. parameter sharing
3. equivariant representations.

CNN basically filters the important attributes from the signals and has so far achieved better results in speech recognition, text analysis, etc. "Local dependency" means that the signals present in HAR are correlated. Because of these advantages of CNN over other deep models, most of the work has been done in this area. Certain aspects must also be considered while deploying CNN for HAR: input adaptation, weight sharing, and pooling [18–22]. Input adaptation basically means adapting the inputs recorded by HAR sensors to form virtual images; unlike images, most HAR sensors give time-series readings, i.e., multi-channel 1-D readings [23–25]. The pooling process provides the capability of speeding up the processing of large data (Fig. 12).
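The input-adaptation and pooling steps can be sketched as follows. This is a generic NumPy illustration (not the paper's code; the six-channel stream is synthetic): multi-channel 1-D readings are windowed and stacked into 2-D "virtual images," then max-pooled along the time axis.

```python
import numpy as np

def to_virtual_image(channels, width, step):
    """Input adaptation: slice each 1-D sensor channel into windows and
    stack the channels, so one window becomes a (channels x width) image."""
    n = channels.shape[1]
    starts = range(0, n - width + 1, step)
    return np.stack([channels[:, s:s + width] for s in starts])

def max_pool_time(images, pool=2):
    """Pooling along the time axis: keeps the strongest response in each
    group of samples, shrinking the data a convolution layer must process."""
    c = images.shape[2] - images.shape[2] % pool
    trimmed = images[:, :, :c]
    return trimmed.reshape(images.shape[0], images.shape[1], -1, pool).max(axis=3)

rng = np.random.default_rng(0)
acc_gyro = rng.normal(size=(6, 128))            # 3-axis accel + 3-axis gyro stream
imgs = to_virtual_image(acc_gyro, width=64, step=32)
pooled = max_pool_time(imgs)
```

Each virtual image can then be fed to a 2-D convolution exactly as a picture would be, which is what lets image-style CNNs run on wearable-sensor data.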

Fig. 11 Basic diagram of convolutional neural network. Source https://images.app.goo.gl/9Yb68g VzNeDDBpiAA

Fig. 12 Input data adaptation


5 Discussions

Wearable technology has set off a brand-new kind of human–computer interaction with the rapid development of information and communication technology. At the same time, the number of published articles related to wearables has been growing exponentially. In this paper, we have surveyed many commercially available products and classified them according to their functionality and the different modes of wearing the respective devices. The literature on wearable technologies and their operation is reviewed, along with a detailed analysis of methodology and applied models. Finally, the use of wearable technology in healthcare and possible development models are discussed [20, 26, 27].

6 Conclusion

Wearable devices are gaining interest at a very high rate. In previous years, many new wearables have been introduced in the consumer market. Our survey has confirmed that communication security, power efficiency, and wearable computing are the most-studied subjects in the literature. We have surveyed the known communication-security problems of wearables and discussed diverse tactics to deal with them, followed by a survey of studies addressing the energy-efficiency troubles of wearables. We have additionally surveyed wearable-computing research topics, including offloading and on-device machine learning, along with technologies such as deep learning and convolutional neural networks.

References

1. U. Anliker, J.A. Ward, P. Lukowicz, G. Troster, F. Dolveck, M. Baer, F. Keita, E.B. Schenker, F. Catarsi, L. Coluccini, A. Belardinelli, AMON: a wearable multiparameter medical monitoring and alert system. IEEE Trans. Inf. Technol. Biomed. 8(4), 415–427 (2004)
2. J. Bell, Wearable Health Monitoring System. NASA. Britain Wynard and Co Ltd. 2016. About Polar (2015b). Available: https://www.polarheart.co.nz/content/aboutpolar.html
3. GFK, Wearables: Geek Chic or the Latest "must have" Consumer Tech (2014). https://static.openit.gr/emea.gr/2015/03/GfK_wearables_report-digital_low_res.pdf
4. L. Dunne, P. Walsh, B. Smyth, B. Caulfield, A system for wearable monitoring of seated posture in computer users, in 4th International Workshop on Wearable and Implantable Body Sensor Networks (BSN'07), Germany (Springer, Heidelberg, 2007), pp. 203–207. T.J. Dwyer, J.A. Alison, Z.J. McKeough, M.R. Elkins, P.T.P. Bye (2009)
5. J.A. Johnstone, P.A. Ford, G. Hughes, T. Watson, A.T. Garrett, BioHarness™ multivariable monitoring device: part II: reliability. J. Sports Sci. Med. 11(3), 409–417 (2012)
6. D. Raskovic, T. Martin, E. Jovanov, Medical monitoring applications for wearable computing. Comput. J. 47(4), 495–504 (2004)
7. J.M. Chae, Consumer acceptance model of smart clothing according to innovation. Int. J. Human Ecol. 10(1), 23–33 (2009)


8. M. Zhang, M. Luo, R. Nie, Y. Zhang, Technical attributes, health attribute, consumer attributes and their roles in adoption intention of healthcare wearable technology. Int. J. Med. Informatics 108, 97–109 (2017)
9. S. Belaid, A. Temessek Behi, The role of attachment in building consumer-brand relationships: an empirical investigation in the utilitarian consumption context. J. Prod. Brand Manage. 20(1), 37–47 (2011)
10. H. Chen, S.-C. Lee, D.B. DeBra, Gyroscope free strapdown inertial measurement unit by six linear accelerometers. J. Guidance Control Dyn. 17(2), 286–290 (1994)
11. Reviseomatic Control Systems. Accessed on 1 November 2018. Available online: https://reviseomatic.org/help/2-control/Control%20Systems.php
12. D.W.T. Wundersitz, C. Josman, R. Gupta, K.J. Netto, P.B. Gastin, S. Robertson, Classification of team sport activities using a single wearable tracking device. J. Biomech. 48, 3975–3981 (2015). https://doi.org/10.1016/j.jbiomech.2015.09.015
13. P. Shashikant, W. Pravin, K. Mayank, Role of electronic tongue as an E-sensing tool in biomedical, pharmaceutical and allied sciences: a systematic review. Res. J. Pharm. Biol. Chem. Sci. (4), 519–525 (ISSN: 0975-8585)
14. K. Altun, B. Barshan, O. Tuncel, Comparative study on classifying human activities with miniature inertial and magnetic sensors. Pattern Recognit. 43(10), 3605–3620 (2010)
15. C. Mitschke, M. Ohmichen, T. Milani, A single gyroscope can be used to accurately determine peak eversion velocity during locomotion at different speed and various shoes. Appl. Sci. 7, 659 (2017). https://doi.org/10.3390/app70706
16. P. Tejas, P. Shashikant, W. Pravin, Mobile devices based mechanisms in telemedicine and healthcare: a systematic approach. http://dx.doi.org/10.20431/2349-4050.0504003
17. J. Mishra, Analysis of the Fitzhugh Nagumo model with a new numerical scheme, in Discrete & Continuous Dynamical Systems-S 781 (2018)
18. S. Creasey, Wearable Technology Will Up The Game For Sports Data Analytics. Accessed on 20 October 2018
19. J. Mishra, Numerical Analysis of a Chaotic Model with Fractional Differential Operators: From Caputo to Atangana–Baleanu Methods of Mathematical Modelling: Fractional Differential Equations, 167 (2019)
20. J. Mishra, Unified fractional calculus results related with H function. J. Int. Acad. Phys. Sci. 20(3), 185–195 (2016)
21. J. Mishra, A remark on fractional differential equation involving I-function. Eur. Phys. J. Plus 133(2), 36 (2018)
22. J. Mishra, Modified Chua chaotic attractor with differential operators with non-singular kernels. Chaos, Solitons Fractals 125, 64–72 (2019)
23. M.J. Mathie, A.C. Coster, N.H. Lovell, B.G. Celler, Detection of daily physical activities using a triaxial accelerometer. Med. Biol. Eng. Comput. 41(3), 296–301 (2003)
24. S. Patil, K.R. Patil, C.R. Patil, S.S. Patil, Performance overview of an artificial intelligence in biomedics: a systematic approach. Int. J. Inf. Technol. 2018, 1–11 (2018)
25. L. Peng, C.H. Youn, W. Tang, C. Qiao, A novel approach to optical switching for intradatacenter networking. J. Lightwave Technol. 30(2), 252–266 (2012)
26. J. Mishra, N. Pandey, An integral involving general class of polynomials with I-function. Int. J. Sci. Res. Publ., 204 (2013)
27. J. Mishra, Fractional hyper-chaotic model with no equilibrium. Chaos, Solitons Fractals 116, 43–53 (2018)
28. K. Vijayalakshmi, S. Uma, R. Bhuvanya, A. Suresh, A demand for wearable devices in healthcare. Int. J. Eng. Technol. 7(1–7), 1–4 (2018)

An Optimal Design of 16 Bit ALU

Pasuluri Bindu Swetha, N. Sai Vamshi, Md. Mujeeb Ur Rehamaan, and V. Karthik

Abstract The important parameters for the performance of any VLSI design are speed, chip area, and power consumption. The ALU is chiefly responsible for operations such as addition, subtraction, multiplication, division, shifting, and rotating. In this paper, we design a 16-bit ALU optimized in terms of delay and area. Detailed analyses of various adder and multiplier configurations are carried out. The Kogge–Stone adder and a multiplier based on ancient (Vedic) mathematics are found best in the performance parameters and are used in the design of the 16-bit ALU. Finally, the subtractor, shifter, and division modules are integrated to complete the 16-bit ALU architecture. The design is synthesized and simulated using Xilinx ISE 14.7.

Keywords Fast adder · Multiplier · ALU · Delay · Area

1 Introduction

Arithmetic circuits have numerous applications in the processing architectures of image and signal processing. Because these architectures are complex and consume considerable area, they must be designed using low-power techniques. In image and signal processing algorithms such as the DIT-FFT and digital filters, the adders and multipliers largely determine the delay of the processor. Here, addition is performed using a parallel prefix adder and multiplication using a Vedic sutra, since multipliers play an important role in high-performance systems. The term VEDA refers to a storehouse of knowledge. Vedic mathematics was redeveloped by the Indian mathematician Shri Bharathi Krishna Tirtaji. Using its general and specific techniques, complex calculations can be performed quickly; the entire mechanics of Vedic mathematics rests on 16 sutras.

P. B. Swetha (B) · N. Sai Vamshi · Md. Mujeeb Ur Rehamaan · V. Karthik
G. Pullaiah College of Engineering and Technology, Kurnool, AP, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_19


Fig. 1 4-bit carry look-ahead adder [1]

2 Architectures of Various Digital Circuits

2.1 Carry Look-Ahead Adder

The carry look-ahead adder eliminates the inter-stage carry delay and speeds up the addition process. It computes the carry Ci+1 for the next bit position from the carry-propagate (Pi = Ai ⊕ Bi) and carry-generate (Gi = Ai · Bi) signals derived from the input bits Ai and Bi. As the number of inputs grows, the carry-generation circuitry that must be included increases the complexity of the design. The output sum and carry can be expressed as (Fig. 1):

Sumi = Pi ⊕ Ci
Ci+1 = Gi + (Pi · Ci)
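The recurrence above is easy to check in software. The following Python sketch (illustrative only, not the synthesized Verilog) computes the propagate and generate signals and unrolls the carry recurrence for an n-bit addition:

```python
def cla_add(a, b, n=4, cin=0):
    """Carry look-ahead addition: carries are derived from
    propagate (p = a XOR b) and generate (g = a AND b) signals
    using the recurrence C[i+1] = G[i] OR (P[i] AND C[i])."""
    p = [(a >> i & 1) ^ (b >> i & 1) for i in range(n)]
    g = [(a >> i & 1) & (b >> i & 1) for i in range(n)]
    c = [cin]
    for i in range(n):
        c.append(g[i] | (p[i] & c[i]))      # carry recurrence
    s = [p[i] ^ c[i] for i in range(n)]     # Sum_i = P_i XOR C_i
    value = sum(bit << i for i, bit in enumerate(s))
    return value, c[n]                      # (sum bits, carry out)

print(cla_add(0b1011, 0b0110))  # (1, 1): 11 + 6 = 17, i.e. sum 0001 with carry out
```

In hardware the carries are produced in parallel by expanding the recurrence, which is what removes the ripple delay; the sequential loop here is only a behavioral model.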

2.2 Carry Skip Adder

If the propagate signal Pi of every bit in an adder block is a binary one, the incoming carry skips the block through a bypass path; otherwise, the carry travels through the ripple path inside the block. The carry skip adder thus reduces the carry propagation time simply by skipping a few consecutive adder blocks. Its lower power consumption and smaller area make this technique attractive, although it is somewhat slower than the carry look-ahead adder (Fig. 2).


Fig. 2 4-bit carry skip adder [2]

2.3 Kogge–Stone Adder

The Kogge–Stone adder is a parallel prefix adder in which the computation is carried out in parallel. The addition proceeds in three stages:

1. Pre-computation phase
2. Prefix phase
3. Final processing phase.

In the pre-computation phase, the propagate and generate signals are produced for each pair of bits Ai and Bi (Fig. 3):

Pi = Ai XOR Bi
Gi = Ai AND Bi

In the prefix stage, a black cell takes two pairs of generate and propagate signals (Gi, Pi) and (Gj, Pj) as input and computes a pair of generate and propagate signals (G, P) as output:

G = Gi OR (Pi AND Gj)
P = Pi AND Pj

A grey cell takes the same two pairs of signals (Gi, Pi) and (Gj, Pj) as inputs but produces only the generate signal G as output:

G = Gi OR (Pi AND Gj)

The final stage computes the sum bits using:

Si = Pi XOR Ci−1
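The three phases can be mimicked in a few lines of Python. This is an illustrative model of the prefix computation — the black-cell combining rule applied at doubling distances — not the actual Verilog used in the design:

```python
def kogge_stone_add(a, b, n=16):
    """Parallel-prefix addition: pre-compute (P, G) per bit, combine
    with G = Gi OR (Pi AND Gj), P = Pi AND Pj at distances 1, 2, 4, ...,
    then form the sum bits S_i = P_i XOR C_{i-1}."""
    p = [(a >> i & 1) ^ (b >> i & 1) for i in range(n)]
    g = [(a >> i & 1) & (b >> i & 1) for i in range(n)]
    G, P = g[:], p[:]
    d = 1
    while d < n:                              # prefix (black-cell) stages
        for i in range(n - 1, d - 1, -1):     # descending: reads pre-stage values
            G[i] = G[i] | (P[i] & G[i - d])
            P[i] = P[i] & P[i - d]
        d *= 2
    # After the prefix stages, G[i] is the carry out of bit position i.
    s = [p[0]] + [p[i] ^ G[i - 1] for i in range(1, n)]
    return sum(bit << i for i, bit in enumerate(s))

print(kogge_stone_add(0b1011, 0b0110, 4))  # 1 (11 + 6 = 17, truncated to 4 bits)
```

The while loop runs log2(n) times, which is why the Kogge–Stone adder's delay grows logarithmically with word length rather than linearly.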


Fig. 3 Kogge–Stone adder 16-bit [3]

The grey cell takes two consecutive generate and propagate signals (Gi , Pi ) and (Gj , Pj ) as inputs and gives a pair of generate signal G as output   G = G i OR Pi AND P j The final stage involves the computation of sum bits by using the below logic: Si = Pi XOR Ci−1

2.4 Booth Multiplier

In Radix-4 Booth recoding, grouping is normally carried out three bits at a time, but here we follow a different, two-step method. In the first stage, the multiplier bits are grouped in pairs, the first pair being formed from an appended zero and the LSB. The Booth encoding technique is defined in the table below. After the first encoding stage, the multiplier is in radix-2 form. The bits are again grouped into pairs from the LSB, the encoding is applied to each pair, and the result is the radix-4 multiplier. If a bit is left unpaired on the MSB side, the sign bit is appended at the MSB to complete the pair. This multiplier reduces the number of partial products to half of the number of bits of the non-encoded multiplier, thereby reducing the delay to some extent (Table 1).


Table 1 Booth encoding

Yi | Yi+1 | Encoded bit
0  | 0    | 0
0  | 1    | −1
1  | 0    | +1
1  | 1    | 0

Fig. 4 Booth encoder

2.4.1 Radix-4 Booth Recoding

The Booth multiplier encoding circuit is shown in the figure below. In Booth multiplication, the number of partial products is reduced to half of the n bits. Here, Y is the multiplier, which is encoded using the table above, and X is the multiplicand. The encoded multiplier is multiplied with each and every bit of the multiplicand, producing the partial products. The NEG signal indicates the sign of the multiplicand.
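As a sanity check on the recoding, the following Python sketch derives the radix-4 Booth digits directly from overlapping 3-bit groups of the multiplier (equivalent in effect to the two-step pairing described above). The `n` parameter and the restriction to an unsigned multiplier smaller than 2^(n−1) are assumptions of this sketch, not stated in the paper:

```python
def booth_radix4_digits(y, n=16):
    """Radix-4 Booth recoding: scan overlapping 3-bit groups of the
    multiplier (with an appended 0 below the LSB); each group maps to a
    digit in {-2, -1, 0, +1, +2}, halving the number of partial products.
    Assumes an unsigned y < 2**(n-1)."""
    yext = y << 1                        # append y_{-1} = 0 below the LSB
    digits = []
    for i in range(0, n, 2):
        group = (yext >> i) & 0b111      # bits y_{i+1} y_i y_{i-1}
        digits.append(-2 * (group >> 2) + ((group >> 1) & 1) + (group & 1))
    return digits

def booth_multiply(x, y, n=16):
    """Multiply by summing partial products: digit_i * x * 4**i."""
    return sum(d * x * 4 ** i for i, d in enumerate(booth_radix4_digits(y, n)))

print(booth_multiply(123, 45))  # 5535
```

Each recoded digit selects one shifted/negated copy of the multiplicand, which is exactly what the partial product generator of Fig. 5 produces in hardware.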

2.4.2 Booth’s Encoder

See Fig. 4.

2.4.3 Partial Product Generator

See Fig. 5.


Fig. 5 Partial product generator

2.5 Vedic Multiplier

To reduce the delay of the Booth multiplier, we implement the multiplier using the UT sutra, whose meaning is "vertically and crosswise". The cross operations are performed simultaneously to reduce delay. The technique is used in many fields, such as medical applications, data compression, and other areas of electronics, and it remains the best multiplier in comparison with conventional multipliers even as the number of bits increases. Sample examples of how to perform the UT sutra are shown in Figs. 6 and 7. Example: See Fig. 8.
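The vertically-and-crosswise idea can be modeled in Python: column k of the product collects every cross product a[i]·b[j] with i + j = k — these are computed in parallel in hardware — and the columns are then resolved with carries. This is a behavioral sketch of the sutra, not the hardware description:

```python
def urdhva_multiply(a_digits, b_digits, base=10):
    """Urdhva Tiryagbhyam (vertically and crosswise): column k of the
    result is the sum of all cross products a[i] * b[j] with i + j == k,
    plus the carry from column k - 1. Digit lists are LSB-first and of
    equal length."""
    n = len(a_digits)
    cols = [sum(a_digits[i] * b_digits[k - i]
                for i in range(n) if 0 <= k - i < n)
            for k in range(2 * n - 1)]
    out, carry = [], 0
    for c in cols:                         # resolve carries, LSB first
        carry, digit = divmod(c + carry, base)
        out.append(digit)
    while carry:
        carry, digit = divmod(carry, base)
        out.append(digit)
    return out                             # least-significant digit first

# 23 x 41 = 943; digits are stored least-significant first
print(urdhva_multiply([3, 2], [1, 4]))  # [3, 4, 9]
```

With `base=2` the same routine models the binary case; the 16-bit hardware version of Fig. 7 builds the columns from smaller Vedic multiplier blocks instead of single digits.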

Fig. 6 Calculation of UT-Sutra


Fig. 7 Block diagram of 16-bit Vedic multiplier [4]

Fig. 8 Example of UT sutra

2.6 Barrel Shifter

A barrel shifter is a circuit used particularly for shifting or rotating bits. For an n-bit word it requires n·log2(n) multiplexers. A barrel shifter is a purely combinational logic circuit that can shift a data word by a specified number of bits: it is a cascade of multiplexer stages in which the output of one multiplexer is fed as input to the next. In modern microprocessors, a barrel shifter can shift or even rotate n bits within a single clock cycle (Fig. 9).
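The multiplexer cascade can be modeled in Python: each of the log2(n) stages conditionally shifts by a power of two, selected by one bit of the shift amount. A behavioral sketch of the left logical shift (the rotate variant wraps the shifted-out bits instead of discarding them):

```python
def barrel_shift_left(x, amount, n=16):
    """Logical left shift built from log2(n) multiplexer stages: stage k
    either passes the word through or shifts it by 2**k, selected by
    bit k of the shift amount."""
    mask = (1 << n) - 1
    stage_shift, k = 1, 0
    while stage_shift < n:
        if (amount >> k) & 1:                 # mux select for this stage
            x = (x << stage_shift) & mask     # shifted-out bits are dropped
        stage_shift <<= 1
        k += 1
    return x

print(barrel_shift_left(0b1011, 3))  # 88 (0b1011000)
```

Because every stage is a fixed row of 2-to-1 multiplexers, the whole shift completes combinationally, matching the single-cycle behavior described above.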

2.7 Binary Division

Binary division is much easier than decimal division once the following division rules are remembered. Division by zero (1/0 and 0/0) is undefined.

Fig. 9 Barrel shifter [5]

Initially, set the quotient to zero and align the divisor with the most significant bits of the dividend. While the considered portion of the dividend is greater than or equal to the divisor, subtract the divisor from that portion and append a one or a zero to the right side of the quotient accordingly; then shift the divisor one place to the right and repeat. The process continues until the remaining dividend is smaller than the divisor, at which point the algorithm yields the quotient and the remainder (Fig. 10).
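The shift-and-subtract procedure above corresponds to classic restoring division, sketched below in Python (a behavioral model, not the Verilog module):

```python
def restoring_divide(dividend, divisor, n=16):
    """Restoring (shift-and-subtract) division: bring down one dividend
    bit at a time; subtract the divisor when it fits (append 1 to the
    quotient), otherwise leave the partial remainder as is (append 0)."""
    if divisor == 0:
        raise ZeroDivisionError("1/0 and 0/0 are undefined")
    quotient, remainder = 0, 0
    for i in range(n - 1, -1, -1):
        remainder = (remainder << 1) | ((dividend >> i) & 1)  # bring down a bit
        quotient <<= 1
        if remainder >= divisor:
            remainder -= divisor
            quotient |= 1
    return quotient, remainder

print(restoring_divide(45, 5))  # (9, 0)
```

After n iterations the quotient holds one decision bit per dividend bit and the remainder holds whatever could not be subtracted, exactly the two outputs of the hardware divider.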

2.8 Block Diagram of ALU See Fig. 11.


Fig. 10 Binary division [6]

Fig. 11 Block diagram of ALU

3 Results and Discussion

The arithmetic circuits described above and the ALU are implemented in Verilog HDL. All modules are executed individually using Xilinx ISE 14.7 and are finally integrated into the ALU architecture. The simulation results are as follows (Figs. 12, 13, 14, 15, 16, 17 and 18):

Fig. 12 Simulation of Vedic multiplier (16-bit)


Fig. 13 Simulation of booth multiplier (16-bit)

Fig. 14 Simulation of carry skip adder (16-bit)

Fig. 15 Simulation of Kogge–Stone adder (16-bit)

Fig. 16 Simulation of binary division (16-bit)

Fig. 17 Simulation of barrel shifter (16-bit)



Fig. 18 Simulation of ALU (16-bit)

The synthesis reports of the various adders (Table 2), the multipliers (Table 3), and the ALU (Table 4) are given below.

Table 2 Synthesis report of various adders

S. No. | Process name           | Time delay (ns) | LUTs (4-input)
1      | Carry look-ahead adder | 12.061          | 49 out of 12,288
2      | Carry skip adder       | 15.850          | 46 out of 12,288
3      | Kogge–Stone adder      | 9.012           | 44 out of 12,288

Table 3 Synthesis report of various multipliers

S. No. | Process name              | Time delay (ns) | LUTs (4-input)
1      | Booth multiplier          | 34.304          | 978 out of 1,228,864
2      | Multiplier using UT sutra | 16.351          | 698 out of 1,228,864

Table 4 Synthesis report of 16-bit ALU

S. No. | Process name | Time delay (ns) | LUTs (4-input)
1      | ALU          | 18.708          | 1025 out of 12,288


4 Conclusion

In brief, the Kogge–Stone adder has less delay and area than the carry look-ahead and carry skip adders. Among the multipliers, the Vedic multiplier has less delay and area than the conventional Booth multiplier, and the barrel shifter contributes very little delay. An optimal ALU integrating the Kogge–Stone adder, Vedic multiplier, barrel shifter, and division modules is built to enhance the performance of a DSP processor.

References

1. J. Kaur, M. Singh, Performance analysis of modified Booth multiplier with use of various adders. Int. J. Sci. Eng. Res. 4(6) (2013)
2. K. Nagori, S. Nehra, Design of a high speed and low power 4 bit carry skip adder, vol. 7, issue 3 (Part-5), March 2017, pp. 66–69, ISSN: 2248-9622
3. J. Kaur, L. Sood, Comparison between various types of adder topologies. Int. J. Comput. Sci. Technol. 6(1) (2015)
4. V.G. Kiran Kumar, C. Shantharama Rai, Low power high speed arithmetic circuits. Int. J. Recent Technol. Eng. (IJRTE) 8(2) (2019), ISSN: 2277-3878
5. R. Verma, R. Mehra, Area efficient layout design analysis of CMOS barrel shifter. Int. J. Sci. Res. Eng. Technol. (IJSRET), ISSN: 2278-0882, EATHD-2015 Conference Proceeding, 14–15 March 2015
6. https://uomustansiriyah.edu.iq/media/lectures/5/5_2018_12_17!07_25_39_PM.pdf

Preventing SSRF (Server-Side Request Forgery) and CSRF (Cross-Site Request Forgery) Using Extended Visual Cryptography and QR Code

Nilesh Arora, Priya Singh, Soniya Sahu, Vineet Kr Keshari, and M. Vinoth Kumar

Abstract In this era of technology, where advancement is at its peak and new innovations are introduced every day, consumer security plays an important role and cannot be sacrificed. The world is going digital, everything is moving online, and security needs to advance with it. This project aims to advance the current security practice in online transactions and to help protect transactions from attacks such as CSRF, SSRF, and phishing, using a combination of EVC, steganography, and QR code techniques. The proposed system makes online transactions more secure than the current one and makes it much more complex for hackers to gain access and steal a person's information or money.

Keywords SSRF · CSRF · EVC · Steganography

N. Arora (B) · P. Singh · S. Sahu · V. K. Keshari · M. Vinoth Kumar Department of Information Science and Engineering, Dayananda Sagar Academy of Technology and Management, Bangalore, Karnataka, India e-mail: [email protected] P. Singh e-mail: [email protected] S. Sahu e-mail: [email protected] V. K. Keshari e-mail: [email protected] M. Vinoth Kumar e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_20


1 Introduction

Nowadays, various malicious sites try to steal users' information and loot them by directing them to links that ask for personal account details in order to gain access to their accounts; this is a type of CSRF attack. Alternatively, hackers try to steal personal information by sending spam mails that capture the user's details when clicked, which is a type of phishing attack, as shown in Fig. 1. If the attack is performed from the server side — for example, when the server itself has been hacked — it is known as SSRF. The proposed project safeguards the user from all of the above-mentioned attacks and succeeds in multiplying the complexity for a hacker or malicious site to get into the system and obtain the user's information. The project combines several security techniques to protect the customer's system from these attacks: generating an OTP, converting it into a QR code, applying steganography, converting the QR code back into the OTP, and then authenticating the details and granting access.

Fig. 1 Phishing process


2 Survey Study

In [1, 2], the authors describe how an OTP is generated during a transaction and how it can be made more secure by generating an alphanumeric OTP that can be converted into a QR code. In [3–6], the authors describe various techniques by which a QR code can be generated from an OTP or an alphanumeric OTP. In [7–9], the authors describe the use of the EVC technique to divide the QR code into two shares to make it more secure. In [10], the author shows how to encrypt the QR code using a (k, n) algorithm. In [11, 12], the authors describe how steganography can be performed on QR codes and how to decrypt or de-steganify them back into a single image. In [13], the author describes how this is helpful in preventing phishing attacks. In [14–16], the authors describe how CSRF and SSRF attacks can be prevented and how XSS attacks can be detected with black-box techniques.

3 Methodology

As shown in Fig. 2, the steps of the proposed project are:

3.1 Generating OTP

User authentication can be done through two techniques, namely authentication by certificate and authentication by ID/password. Authentication by ID/password is exposed to attacks such as replay attacks, dictionary attacks, and eavesdropping. To overcome the disadvantages of these methods, the one-time password was introduced and became popular for security reasons. A one-time password is a one-dimensional password system in which a password authorizes a particular user only once, at a specific time; each time, the user is authenticated with a unique password. This helps block a number of attacks — such as phishing and replay attacks — that exploit static passwords. A one-time password system supplies security by ensuring that the user is prohibited from using an identical password more than once. In addition, it provides other features such as extensibility, anonymity, and mobility, and ensures safety by preventing information leaks. The process involves a secret character sequence, also known as a seed. The components of a one-time password system are a security token with its password algorithm (or a one-time password key-generating device), an authentication client, and an authentication server.


Fig. 2 Flow chart of proposed work

A one-time password may be created using any sort of one-way hash function or using stream ciphers. The basic methodology relies on a one-way hash function such as SHA-1, MD4, or MD5 to keep key management in check. One of the most widely used algorithms for OTP generation is HOTP, which makes use of SHA-1; the hashes are generated in such a way that collisions are prevented. One common OTP solution is OATH, an open-source framework widely supported by OTP suppliers. OATH uses an event-based OTP algorithm built on a secret character sequence known only to the two parties: the user's software or device, and the one-time password server. The counter value and other data, such as the customer's unique seed, are then updated, and all of this data passes through the algorithm (generally HMAC-SHA1) to generate the OTP.
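For reference, the HOTP computation described above (as standardized in RFC 4226) can be reproduced with Python's standard library. The secret and counter below are the RFC's own published test vector, not values used by this project:

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Event-based OTP (HOTP, RFC 4226): HMAC-SHA1 over the moving
    counter, dynamic truncation to 31 bits, then reduce mod 10**digits."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0
print(hotp(b"12345678901234567890", 0))  # 755224
```

Because the counter advances on every use, a captured code is worthless for replay — which is exactly the property the scheme relies on.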


Result: The OTP is generated at the backend, providing a small degree of protection against SSRF and CSRF.

3.2 Forming the QR Code from the OTP

Generating the QR code from the OTP commonly makes use of the following information:

1. transaction information
2. timestamp
3. random number

The following algorithm is used to obtain the QR code from this information:

• Transform into a binary format sequence: the mode in which the QR code is generated is selected first, depending on the type of information available. The available modes include:
– Alphanumeric mode
– Kanji mode
– 8-bit byte mode
The conversion functions depend on the mode chosen to transform the given data into binary format.
• Add the error-correction code-words: the code-word sequence is divided into the required number of blocks so that the error-correction algorithms can be applied; error-correction code-words are generated for each block and appended to the end of the data code-words sequence.
• Place the code-words in the matrix: there are two strategies for arranging the data blocks in the QR code — rectangular blocks or irregular blocks (which can accommodate more data).
• Mask the data: the data is masked so that light and dark modules are balanced across the symbol through encoding.
• Add format information: a 15-bit sequence appended at the end.
• Add version information: an 18-bit sequence appended at the end.

Result: The OTP is converted into a QR code, as shown in Fig. 3, providing medium protection against SSRF and CSRF and slightly higher protection against phishing.
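As an illustration of the first step (binary-format conversion), the sketch below packs a short string in QR alphanumeric mode: a 4-bit mode indicator (0010), a 9-bit character count (valid for small symbol versions), and 11 bits per character pair. Error correction, matrix placement, masking, and format/version information are deliberately omitted here:

```python
ALNUM = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:"

def qr_alphanumeric_bits(text: str) -> str:
    """Alphanumeric-mode bit sequence: mode indicator 0010, 9-bit
    character count (versions 1-9), then each character pair packed
    into 11 bits as 45*first + second; a single trailing character
    takes 6 bits."""
    bits = "0010" + format(len(text), "09b")
    vals = [ALNUM.index(c) for c in text]
    for i in range(0, len(vals) - 1, 2):
        bits += format(45 * vals[i] + vals[i + 1], "011b")
    if len(vals) % 2:
        bits += format(vals[-1], "06b")
    return bits

# "AC-42" is the worked example from the QR code specification
print(qr_alphanumeric_bits("AC-42"))
```

Packing two characters into 11 bits (instead of 2 × 6) is what gives the alphanumeric mode its higher capacity compared with the 8-bit byte mode.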


Fig. 3 QR code generation

3.3 Applying EVC on the QR Code and Generating Shares

• Image sharing is achieved through visual cryptography. The basic idea of a visual cryptography system is to distribute a secret image file into multiple shares.
• The secret is optically decrypted by combining a qualified subset of the shares, while forbidden subsets reveal nothing.
• To create meaningful shares, a few columns are added to the basis matrices; the extra columns carry the cover data of each share. However, this introduces a large amount of detectable noise into the shares and a poor optical outcome, which is overcome by applying extended visual cryptography.
• The maximum size of the secret payload acceptable in a QR code is bounded by the size of the QR code itself: expanding the area of the image means a smaller secret image. Using a probabilistic method, the secret pixels are precisely restored with a certain probability.
• QR codes have a large information capacity that varies with the version number, which determines the size of the QR code.
• Load cover image: the basic set is converted into many subsets, and the base matrices for every subset are extracted using two matrix sets, Mk-even and Mk-odd. All the base matrices are then joined, and the output is converted into the final matrix sets; with these, the load cover image is generated.
• Embedding method: a QR code has a fixed information and error-correction capacity once its version and error-correction level are specified. The message of a QR code cannot be extracted if the essential decoding information is altered. Padding data is included to fill the encoding region, and error-correction code-words are generated to recover the original data even if errors are present; the padding data is used to make the meaningful shares.

Result: EVC is applied on the QR code and its shares are generated.
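The share-splitting idea can be illustrated with a toy (2, 2) scheme on a bit matrix: share A is pure noise and share B is the secret XORed with it, so neither share alone leaks the QR code. This is only a simplified stand-in for EVC — full EVC additionally shapes the shares into meaningful cover images via the basis matrices described above — and the function names here are illustrative:

```python
import random

def split_into_shares(qr_bits, seed=None):
    """(2, 2) secret sharing on a QR bit matrix: share A is random
    noise, share B = secret XOR share A. Either share alone is
    uniformly random; combining both recovers the QR code."""
    rng = random.Random(seed)
    share_a = [[rng.randint(0, 1) for _ in row] for row in qr_bits]
    share_b = [[s ^ a for s, a in zip(srow, arow)]
               for srow, arow in zip(qr_bits, share_a)]
    return share_a, share_b

def merge_shares(share_a, share_b):
    """Recover the secret by XOR-combining the two shares."""
    return [[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(share_a, share_b)]

secret = [[1, 0, 1], [0, 1, 1]]
a, b = split_into_shares(secret, seed=7)
print(merge_shares(a, b) == secret)  # True
```

In true visual cryptography the "XOR" is replaced by physically stacking printed transparencies (an OR of subpixel patterns), which is why no computation is needed to decrypt.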


3.4 Applying Steganography on Share A

Steganography is a long-established technique that lets two end-users communicate secretly by hiding a secret inside some cover media. The sender conceals the actual message within another file; the cover file can be in any format, such as audio, video, text, or image. The secret message is embedded into the cover media using a suitable algorithm to form a stego file, which is then sent to the receiver. Covering secret messages with other media files is not an easy deal: the stego file size should not increase, as growth becomes noticeable to attackers who have any idea of the original file's size. Several essential features have to be considered when applying a steganography technique:

Embedding capacity: how much data can be embedded without destroying the original.
Un-detectability: the hidden data should not be detectable in the stego file.
Robustness: the data must remain safe even after compression and decompression.

Among the various proposed steganography methods, the one used here is image steganography, which uses an image as the cover file, exploiting the idea that low-frequency and high-frequency regions deviate differently from nearby pixels. After obtaining the two shares of the secret QR image, share A and share B, share A goes through steganography and is then sent to the merchant server. The merchant sends share A with its stego cover to the client, where it cannot be attacked. Share A with the stego cover then goes through de-steganography to remove the cover image.

Result: Steganography is performed on one of the shares of the image file, as shown in Figs. 4 and 5, providing high protection against SSRF, CSRF, and phishing.
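A minimal illustration of the embedding step: each payload bit is hidden in the least significant bit of one cover byte, so the stego data keeps exactly the cover's size. This is a toy sketch on raw bytes — the project's image steganography operates on pixel data — and the function names are illustrative:

```python
def embed_lsb(cover: bytes, payload: bytes) -> bytes:
    """Hide each payload bit in the least significant bit of one cover
    byte; the cover barely changes and the stego file keeps its size."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for payload")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit      # overwrite only the LSB
    return bytes(out)

def extract_lsb(stego: bytes, n_bytes: int) -> bytes:
    """Collect the LSBs back into payload bytes (MSB-first per byte)."""
    bits = [b & 1 for b in stego[:n_bytes * 8]]
    return bytes(sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
                 for k in range(0, len(bits), 8))

cover = bytes(range(64))
stego = embed_lsb(cover, b"OTP42")
print(extract_lsb(stego, 5))  # b'OTP42'
```

Each cover byte changes by at most 1, which is what makes LSB embedding visually and statistically unobtrusive for small payloads.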

3.5 Merging the Shares Back into One QR Code Using De-steganography

• Visual cryptography is a form of cryptography that is decoded directly by the human visual system, without any separate computation for decryption.


Fig. 4 Steganography process

Fig. 5 Process of steganography and de-steganography

• The share A image is generated through the load cover image and then embedded within the cover image with the help of steganography. Steganography is defined as the practice of hiding a file containing text, audio, video, etc. inside another file.
• Share B is generated and sent via the electronic mail (email) of the individual user.
• After both images are generated, the operator assistant dispatches the stego image to the respective user.
• De-steganography at the client's machine recovers share A from the cover image.

Fig. 6 Steganography of Share A

• Share A and share B are then overlapped — that is, merged — and the QR code is extracted from the combined image. The third picture is reassembled by printing the two images onto transparency and combining them with each other. This kind of visual cryptography, which reassembles the image by combining meaningful images with each other, is known as extended visual cryptography.
• A QR code is a machine-readable identification code.
• After retrieving the QR code, the client scans it and the OTP is generated as output.

Result: De-steganography is performed in order to merge the shares into one QR code, as shown in Figs. 6 and 7.

3.6 QR Code Conversion to OTP

The QR code is decoded into an OTP: the OTP is extracted from the QR code, where it is hidden. Whenever the user opts for the QR code, they can confirm it through their cell phone with the assistance of a code reader. The code reader decodes the given code, and the cell phone displays the OTP hidden in it. Every QR code carries an OTP, and the OTP differs for each user; this helps the verifying script to detect variations of the given code. One-time passwords are valid only for a known period of time, which assures that a given username/password combination cannot be used a second time.


Fig. 7 Merging of Share A and Share B

Generally, a user's login name stays the same, whereas the OTP changes at every login attempt. Thus, for every new session, the user is verified against the latest OTP. One-time passwords are a scheme to implement better user validation and to provide a stronger security system for networks, online bank accounts, and other systems holding undisclosed data. This higher security, in contrast to generalized passwords, comes from the fact that an OTP is not exposed to replay and phishing attacks. A one-time password is a one-dimensional password system applicable only once for an authentic user in a given time period; whenever a user is validated, it is with the latest OTP. It provides resistance to various types of attacks — for example, replay and phishing attacks — that are typically attempted against generalized passwords. It enhances security by ensuring that a user cannot use the same password consecutively, and it also exhibits other features such as anonymity, portability, extensibility, and encryption of high-value data.

There are two methods for creating OTPs:

• Time-based OTP, in which the OTP changes at a regular time interval.
• Event-based OTP, in which the OTP is formed by pressing a button at the client machine.

A QR code, on the other hand, is a quick-response 2-D barcode that stores data as an image. It enhances security and increases storage to accommodate more information. QR codes are covered by ISO standards and carry data in both the horizontal and vertical directions, whereas a one-dimensional barcode contains information in only one direction, either horizontal or vertical. QR codes also provide error-correction capability.

Result: The QR code is converted back into an OTP, and authentication of the transaction is carried out.

4 Result Comparison

The security strength after applying the various techniques is calculated and compared, as shown in Table 1 and Graph 1.

Table 1 Security comparison of different techniques

Serial number | Methods       | Prevention of SSRF | Prevention of CSRF | Prevention of phishing
1             | OTP           | 20                 | 15                 | 22
2             | QR            | 35                 | 30                 | 40
3             | EVC           | 60                 | 59                 | 62
4             | Steganography | 94                 | 93                 | 96

Graph 1 Security strength versus complexity for hackers of the different techniques


5 Conclusion

The project proposed in this paper helps advance the current security practice and provides more secure transactions, making it difficult for a hacker to steal anyone's information or personal details. The system is efficient in safeguarding its customers from various types of cyberattacks — namely SSRF, CSRF, phishing, and others — by combining the most secure techniques available, thus increasing the complexity for anyone trying to break into the system.

References

1. S. Srivastava, M. Sivasankar, On the generation of alphanumeric one time passwords, IEEE (2016)
2. A. Choudhary, S. Rajak, A. Shinde, S. Warkhade, F.S. Ghodichor, Online banking system using mobile-OTP with QR-code. Int. J. Adv. Res. Comput. Commun. Eng. 6(4) (2017), ISO 3297:2007 Certified
3. P. Tekade, A. Vamadevan, S. Sawant, T. Tamhane, G. Khedkar, Implementation of two level QR code (2LQR). Int. J. Adv. Res. Comput. Commun. Eng. 6(4) (2017), ISO 3297:2007 Certified
4. T. Yuan, Y. Wang, K. Xu, R.R. Martin, S.-M. Hu, Two-layer QR codes. IEEE Trans. Image Process. 28(9) (2019)
5. I. Tkachenko, W. Puech, C. Destruel, O. Strauss, J.-M. Gaudin, C. Guichard, Two level QR code for private message sharing and document authentication, IEEE (2015). https://doi.org/10.1109/tifs.2015.2506546
6. A.T. Shah, V.R. Parihar, Overview and an approach for QR-code based messaging and file sharing on android platform in view of security, in Proceedings of the IEEE 2017 International Conference on Computing Methodologies and Communication (ICCMC)
7. Y. Cheng, Z. Fu, B. Yu, Improved visual secret sharing scheme for QR code applications. IEEE Trans. Inf. Forensics Secur. https://doi.org/10.1109/tifs.2018.2819125
8. Z. Fu, Y. Cheng, B. Yu, Visual cryptography scheme with meaningful shares based on QR codes. IEEE Access (2018). https://doi.org/10.1109/access.2018.2874527
9. P.-Y. Lin, Distributed secret sharing approach with cheater prevention based on QR code. IEEE Trans. Ind. Inf. https://doi.org/10.1109/tii.2015.2514097
10. C.-N. Yang, C.-C. Wu, Y.-C. Lin, k out of n region-based progressive visual cryptography. IEEE Trans. Circuits Syst. Video Technol. https://doi.org/10.1109/tcsvt.2017.2771255
11. P. Johri, S. Das, A. Mishra, A. Kumar, Survey on steganography methods (text, image, audio, video, protocol and network steganography), IEEE (2016)
12. Z. Yang, H. Xu, J. Deng, C.C. Loy, W.C. Lau, Robust and fast decoding of high-capacity color QR codes for mobile applications. IEEE Trans. Image Process. https://doi.org/10.1109/tip.2018.2855419
13. G. Armano, S. Marchal, N. Asokan, Real-time client-side phishing prevention add-on, in 2016 IEEE 36th International Conference on Distributed Computing Systems
14. M. Srokosz, D. Rusinek, B. Ksiezopolski, A new WAF-based architecture for protecting web applications against CSRF attacks in malicious environment, in Computer Science and Information Systems, vol. 15, pp. 391–395. https://doi.org/10.15439/2018f208, ISSN 2300-5963 ACSIS
15. M. El Masri, N. Vlajic, Current state of client-side extensions aimed at protecting against CSRF-like attacks, in 2017 IEEE Conference on Communications and Network Security (CNS)
16. H. Soleimani, M.A. Hadavi, A. Bagherdaei, WAVE: black box detection of XSS, CSRF and information leakage vulnerabilities, in 2017 14th International ISC (Iranian Society of Cryptology) Conference on Information Security and Cryptology (ISCISC), September 6–7, 2017

IoT: Security Attacks and Countermeasures

Harshit Reylon and Alka Chaudhary

Abstract The Internet of Things (IoT) can be described as an interconnection of computing devices, mechanical and digital machines, objects, people, and animals that can be uniquely identified and have the potential to share data or information over a network without human interaction. As one of the most discussed and researched topics today, IoT also falls into a danger zone of vulnerabilities that we observe in its daily use. This study summarizes the whole technology together with countermeasures for its vulnerabilities, expressed with their degree of damage.

Keywords Internet of things (IoT) · Attacks · Network attacks · Physical attacks · Software attacks · Encryption attacks

H. Reylon (B) · A. Chaudhary
Amity Institute of Information Technology (AIIT), Amity University, Noida, Uttar Pradesh 201313, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_21

1 Introduction

1.1 IoT Enabling Technologies

Embedded Systems: Embedded systems are now part of our common daily routine, as they control many everyday devices. An embedded system is generally a combination of hardware and software inside a device that is not itself a general-purpose computer; in other words, it is computer hardware that runs dedicated software. It is usually designed to complete a specific task within an expected period and can be used as an independent system or combined as part of a larger system. An embedded system is programmed and controlled by a real-time operating system (RTOS) and may be used for specific purposes, for example where the computer needs to


be encapsulated entirely by the device it is controlling [1]. In other words, the hardware and software combine into a system expected to work without any human intervention, responding to, monitoring, and controlling the external environment via sensors and actuators. Examples include printers, mobile phones, digital cameras, washing machines, etc.

Wireless Sensor Networks: A wireless sensor network (WSN) uses distributed devices that are generally small and autonomous, with low operational cost, equipped with sensors (devices that measure a physical quantity and convert it into a signal for an observer or instrument), deployed in an ad hoc manner to monitor physical environmental conditions. A WSN consists of a coordinator, routers, and some end nodes. In short, the coordinator collects the data from all of the different nodes and can also act as a gateway to the Internet; the routers are responsible for forwarding data packets from the end nodes, of which multiple may be connected to them, to the coordinator [2]. Examples of WSN applications are traffic control, home automation, soil-moisture monitoring systems, etc.

Cloud Computing: Using hardware and software to deliver a service over a network via the Internet is known as cloud computing. It is an on-demand service in which a network is formed for data storage that can be accessed from different places; that is, it does not require installation on every computer [3]. The main motive of cloud computing is the sharing of resources, accessible to anyone anywhere, which makes work for many organizations much easier and cheaper.

Communication Protocols: The Internet of Things (IoT) enables smart objects/devices to communicate, gather, and transfer data with each other, with much practical applicability in logistics, healthcare, and more. Smart devices have two types of connections: wired or wireless.
There are various wireless communication protocols and technologies usable for connecting smart networked devices, such as IPv6 over Low-power Wireless Personal Area Networks (6LoWPAN), ZigBee, Z-Wave, Bluetooth Low Energy (BLE), SigFox, Near Field Communication (NFC), and many more, all of which differ in their range. A few other IoT communication technologies and protocols are Bluetooth, Wi-Fi, Wi-Fi Direct, LTE-A, and various radio protocols [4]. Each of these communication protocols serves different functional requirements in an IoT system.

Big Data Analytics: The process of collecting, organizing, and analyzing a received data set to obtain useful information from it is called big data analytics. This analysis allows organizations to make predictions regarding their businesses and future decisions; analysts are employed to interpret the knowledge that comes from the data analysis and make predictions for the future.
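The coordinator/router/end-node roles described above for a WSN can be sketched as a toy in-process simulation; the class names, node IDs, and moisture values here are purely illustrative, not from any WSN standard:

```python
# Toy in-process WSN: end nodes report through a router to the coordinator.
class Coordinator:
    """Collects readings from all nodes; could also gateway to the Internet."""
    def __init__(self):
        self.readings = {}

    def receive(self, node_id, value):
        self.readings[node_id] = value

class Router:
    """Forwards data packets from end nodes toward the coordinator."""
    def __init__(self, coordinator):
        self.coordinator = coordinator

    def forward(self, node_id, value):
        self.coordinator.receive(node_id, value)

coordinator = Coordinator()
router = Router(coordinator)
for node_id, moisture in [("n1", 0.31), ("n2", 0.27)]:   # two end nodes
    router.forward(node_id, moisture)
assert coordinator.readings == {"n1": 0.31, "n2": 0.27}
```

A real deployment would replace the in-process calls with a radio link (e.g. ZigBee or 6LoWPAN frames), but the data flow is the same.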


1.2 Structure of IoT

IoT typically has a structure with three basic layers: the perception layer, the network layer, and the application layer [5], as shown in Fig. 1.

Perception Layer/Physical Layer: IoT systems are designed for the collection and exchange of information/data from the physical world. The perception layer therefore employs distinct types of collection and control techniques for sensing and gathering information about the environment; these collection and control modules include pressure sensors, sound sensors, temperature sensors, etc. The perception layer is also called the physical layer or sensor layer. It is generally divided into two parts: the perception node, which contains controllers and sensors, and the perception network, which communicates with the transport network. The perception node acquires data and performs control, while the perception network redirects the collected data to gateways and forwards control instructions to the controllers. Technologies available for the perception layer include Wireless Sensor Networks (WSN), Implantable Medical Devices (IMD), Radio Frequency Identification (RFID), and the Global Positioning System (GPS) [6].

Network Layer: The gap between the perception layer and the application layer is bridged by the network layer, also called the transmission layer. Just like any other network-layer model, it carries all the information or data collected from physical objects via sensors. Transmission in this layer is maintained and processed over a medium that is either wired or wireless. It tends to be quite vulnerable to attacks, making the security much

Fig. 1 Internet of things architecture


more focused on the authenticity of the information that is transported over the network. Since this layer is responsible for connecting smart objects to networks, the most desirable protocol-security enhancement is to extend IPv6 (Internet Protocol version 6) over LoWPAN (Low-power Wireless Personal Area Networks), together termed 6LoWPAN [7], which enables IPsec (Internet Protocol Security) communication with IPv6 nodes.

Application Layer: This layer sits above the network layer and delivers services to end users. Its responsibilities include the presentation and formatting of data, and it acts as an interface between the network and the end devices. Data storage is provided as a database that offers the capacity to store the collected data. In addition, communication outside of device-oriented systems is facilitated through various types of applications depending on the needs of the particular user [8], for example smart objects, smart homes, and so on. Communicating without application-layer protocols is not practical in the world of IoT.

1.3 IoT Main Communication Protocols

These are some of the most common protocols used in IoT technologies [9]:

Wi-Fi: Wireless Fidelity, used to connect devices and transfer data wirelessly at fast transfer rates.
BLE: Bluetooth Low Energy, used for connecting and transferring data with authentication security but a slow transfer rate.
LTE: Cellular/Long-Term Evolution, used to connect and communicate securely and reliably virtually anywhere.
ZigBee: Based on the IEEE 802.15 specification, used for low-power and low-bandwidth needs.
Z-Wave: Used for home automation; its typical range is 100 m.
6LoWPAN: IPv6 over Low-power Wireless Personal Area Networks, used for low-data-rate networks such as home and building automation.
NFC: Near Field Communication, used for very short-range communication, around 4 cm.
CoAP: Constrained Application Protocol, a web transfer protocol for constrained Internet devices.
SigFox: Used for machine-to-machine applications with low data rates.


MQTT: Message Queue Telemetry Transport, a lightweight network protocol for lossless bidirectional connections.
AMQP: Advanced Message Queuing Protocol, used for message-oriented middleware environments.
Thread: A low-power, secure protocol based on IPv6 technology with a 6LoWPAN foundation.
LoRaWAN: Long Range Wide Area Network, used to connect regionally or globally distributed smart devices wirelessly to the Internet.

2 Attacks in IoT

2.1 Vulnerabilities in Embedded Systems

Tampering: Devices always face a potential threat of physical intrusion, as well as network or software intrusion.
Unsecured Serial Ports: Any port not secured by an authentication mechanism is critically vulnerable.
Flashing Modified Vulnerable Firmware: Firmware can be dumped onto JTAG-enabled devices.
Thermal Virus: A vulnerability in which an attacker overheats the system.
Side-Channel Attacks: Used to bypass protected information secured with a secret key.

2.2 Vulnerabilities in Firmware, Software and Applications

Unraveling Functionality Through Firmware: A device's functionality can be understood from its firmware itself if the code is not secured well enough.
Firmware Extraction: Hard-coded credentials can be extracted from the firmware, such as passwords, staging URLs, cryptographic keys, private certificates, and so on.
Use of CVEs: Any obsolete component or software can be a significant opening for a security bypass.
Divulged Object References: Insecure direct object references in the code itself create an additional common vulnerability in the application.


Vulnerable APIs: An attacker can run known malicious code that makes an API leak sensitive data.
CSRF Attack: Cross-Site Request Forgery, a malicious exploit that makes a web application trust incoming commands that were never authorized to run.
Brute-Forcing the Admin Panel: An attacker can mount a brute-force attack by testing dumped credentials against the panel.
XSS: Cross-Site Scripting, in which malicious client-side scripts are injected into web applications to bypass access controls such as the same-origin policy (SOP).
Outdated SDKs: Obsolete software development kits and libraries count among the vulnerabilities that can expose information meant to be hidden.
Reverse Engineering the Mobile Application: A common cracker/attacker technique that leaks much sensitive data.
Cracking the Wi-Fi: With WEP security this is straightforward; even with WPA security, an attacker can brute-force the PSK (Pre-Shared Key) or recover it by cracking a captured WPA handshake with tools such as Hashcat or coWPAtty, and many similar cases can lead to exploitation of the service.

2.3 Vulnerabilities in Radio Communications

Man-in-the-Middle (MITM) Attack or Sniffing: In this type of attack, signals are intercepted, and all the sensitive information/data flowing between a tag and a reader is collected by the attacker from the radio packets.
DoS: Denial of Service; an attacker can easily block signals before they reach the victim, usually by producing noise interference or by removing/disabling RFID tags.
Spoofing and Replay Attacks: An attacker can eavesdrop on the RFID signal, manipulate or spoof it, and replay it at any future time of his choosing to exploit the services [10].
Weak CRC Verification: A Cyclic Redundancy Check is used for error detection, to avoid accidental changes in information passed wirelessly. If the method of checking the data stored on an RFID tag is not strong, it can lead to malfunction of a service.
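The weak-CRC point above is worth making concrete: a plain CRC protects only against accidental corruption, because anyone who can modify a frame can also recompute the checksum. A minimal sketch, in which the frame layout and field contents are assumptions for illustration:

```python
import binascii

def make_frame(payload):
    # Append a CRC-32 over the payload (error detection, NOT authentication)
    return payload + binascii.crc32(payload).to_bytes(4, "big")

def verify_frame(frame):
    payload, crc = frame[:-4], frame[-4:]
    return binascii.crc32(payload).to_bytes(4, "big") == crc

frame = make_frame(b"tag_id=42;temp=21")
assert verify_frame(frame)                    # honest frame checks out

forged = make_frame(b"tag_id=99;temp=21")     # attacker edits the payload,
assert verify_frame(forged)                   # recomputes the CRC, and passes
```

This is exactly why the countermeasures section below on securing the CRC matters: without a keyed check, "verification" says nothing about who produced the frame.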


3 Security Countermeasures

3.1 Countermeasures Against Embedded System Attacks

• Tampering may be avoided using anti-tamper software, which makes it much harder for an attacker to modify any element [11]. It can be implemented internally or externally: with internal anti-tampering, an application is coded to act as its own security system, whereas external anti-tampering is attained by having separate monitoring software detect tampering.
• Introduce serial-port security.
• Disable JTAG after coding and debugging are complete.
• Set up a cooling system to avoid exposure to overheating.
• Mask code execution to avoid noise injection during execution. The window method in public-key cryptosystems can also be applied against power-analysis-based side-channel attacks, and dummy instructions can be inserted to increase the attacker's confusion and resist code or algorithm modification.

3.2 Countermeasures Against Firmware, Software, and Applications

• First and foremost, firmware should be kept updated with the latest technological advancements, encryption should be used for the bootloader, and the firmware should be compiled with protection against malicious exploitation; old CVEs then automatically become unusable for an attacker.
• Code should be double-checked before release, specifically for object references that raise security issues.
• APIs should be tested extensively against known malicious code, and should have a rate limiter to boost security. Use of HTTPS, tokens, and encryption with signatures makes them much more secure.
• To avoid CSRF attacks, resources such as Ajax calls and form tags with POST need to be protected.
• Escaping user input is the very first step to steer clear of XSS attacks: validate all incoming input, never whitelist malicious content, and finally sanitize the user input.
• Use of updated libraries in the mobile application is mandatory.
• A countermeasure against reverse engineering is to move essential code chunks onto the server, accessed over SSL. Hashing, multi-factor security, and database encryption add further protection.


• It is recommended to use WPA-PSK with TKIP (Temporal Key Integrity Protocol) or AES (Advanced Encryption Standard) encryption to make the wireless technology much more secure.
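As one illustration of the CSRF countermeasure above, the classic synchronizer-token pattern can be sketched as follows; the function names and the dict-based session are hypothetical, shown only to convey the idea:

```python
import hmac
import secrets

def issue_csrf_token(session):
    # Random per-session token, stored server-side and embedded in the form
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token

def check_csrf_token(session, submitted):
    # Constant-time comparison of the submitted token against the stored one
    return hmac.compare_digest(session.get("csrf_token", ""), submitted)

session = {}
token = issue_csrf_token(session)
assert check_csrf_token(session, token)        # legitimate same-site POST
assert not check_csrf_token(session, "guess")  # forged cross-site request fails
```

A cross-site attacker can make the victim's browser send a POST, but cannot read the token out of the legitimate page, so the forged request fails the check.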

3.3 Countermeasures Against Radio Communications

• Several technologies can be implemented to minimize the effect of a MITM attack: encrypting the communications with various techniques, using an authentication protocol, and transferring data/information through a secure channel.
• DoS attacks can be prevented once their source is confirmed, using a blocking technique or function to counteract jamming attacks [12]; this is done, with the help of passive listening, on tags whose predefined transmission volume is exceeded. Countermeasures against the removal of tags include attaching a function that activates an alarm, or using enhancement techniques for the mechanical connection between the objects and the tags.
• Spoofing can be avoided by implementing an authentication protocol and encrypting the RFID data. Countermeasures against replay attacks include using a counter for subsequent requests, or time-based techniques, both of which increase the complexity and cost of a successful attack.
• To prevent the conventional CRC from being thwarted by a malicious adversary, it can be secured cryptographically while retaining an n-bit CRC, where n is the degree of the generator polynomial; the n-bit CRC should not be replaced with an n-bit CBC-MAC or n-bit HMAC, since doing so loses the ability to detect every random burst error of length n or less, ultimately making the communication unreliable. Only a cryptographically secure CRC attains error detection without lowering the security level or losing bandwidth and throughput, and it is suitable for most types of message communication and computational resources [13, 14].
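The counter-based replay countermeasure above can be sketched as follows. The pre-shared key, frame layout, and class names are assumptions for illustration, with HMAC-SHA256 standing in for whatever lightweight MAC a real tag would use:

```python
import hashlib
import hmac

KEY = b"shared-secret"   # hypothetical pre-shared key between tag and reader

def sign(counter, payload):
    # MAC over (counter || payload); the counter makes each frame unique
    msg = counter.to_bytes(8, "big") + payload
    return hmac.new(KEY, msg, hashlib.sha256).digest()

class Reader:
    def __init__(self):
        self.last_counter = -1

    def accept(self, counter, payload, mac):
        if counter <= self.last_counter:                       # stale: replay
            return False
        if not hmac.compare_digest(sign(counter, payload), mac):  # spoofed
            return False
        self.last_counter = counter
        return True

reader = Reader()
frame = (1, b"unlock", sign(1, b"unlock"))
assert reader.accept(*frame)        # fresh frame is accepted
assert not reader.accept(*frame)    # bit-for-bit replay is rejected
```

Because the MAC covers the counter, an eavesdropper cannot bump the counter without invalidating the tag, and resending a captured frame fails the monotonicity check.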

4 Conclusion

Over the last decade, IoT has emerged as a major research topic. It interconnects various physical objects and sensors so that they communicate with each other without human involvement. But because IoT is based on a network architecture much like the conventional one used for communication between distinct devices, and because its adoption and requirements are growing rapidly day by day, several threats and flaws in privacy and security prevail that may ultimately


hinder its development. It can therefore be concluded that IoT needs a security mechanism that minimizes security issues while remaining robust and lightweight. In this paper, we reviewed the technologies pertaining to IoT and its structure, comprising three layers: the perception layer, the network layer, and the application layer. We then surveyed the security goals essential for securing the whole system and classified its security challenges into categories that include physical attacks, software and encryption attacks, and network attacks. Based on this classification, we focused on the security countermeasures for attaining an overall secure system, or IoT environment, which can serve as a framework and guide for the secure deployment of IoT systems. Not all attacks can be avoided merely by taking basic precautions; some attacks are difficult to stop and may bypass security, and efficient solutions must be provided for them. Altogether, the security of IoT devices is contingent on the security mechanisms, protocols, and technologies implemented by each individual manufacturer. Ultimately, IoT development has merged the real and virtual worlds, opening new doors in manufacturing and in research on the security issues faced by all of its layers.

References

1. R. Oshana, Software engineering for embedded systems and real-time systems, in Software Engineering for Embedded Systems: Methods, Practical Techniques, and Applications, 2nd edn., ed. by R. Oshana, M. Kraeling (Newnes, Cambridge, MA, USA, 2019), pp. 1–28
2. M. Kocakulak, I. Butun, An overview of wireless sensor networks towards internet of things, in 2017 IEEE 7th Annual Computing and Communication Workshop and Conference (CCWC) (IEEE, Las Vegas, NV, USA, 2017), pp. 1–6
3. J. Strickland, How cloud computing works. https://computer.howstuffworks.com/cloud-computing/cloud-computing.htm. Last accessed 18 April 2020
4. S. Al-Sarawi, M. Anbar, K. Alieyan, M. Alzubaidi, Internet of things (IoT) communication protocols, in 2017 8th International Conference on Information Technology (ICIT) (IEEE, Amman, Jordan, 2017), pp. 685–690
5. Y. Song, Security in Internet of Things. TRITA-ICT-EX-2013:196, Stockholm, Sweden (2013)
6. B.N. Silva, M. Khan, K. Han, Internet of things: a comprehensive review of enabling technologies, architecture, and challenges. IETE Tech. Rev. 35(2), 205–220 (2018)
7. X. Yang, Z. Li, Z. Geng, H. Zhang, A multi-layer security model for internet of things, in IOT 2012, CCIS, vol. 312, ed. by Y. Wang, X. Zhang (Springer, Heidelberg, 2012), pp. 388–393. https://doi.org/10.1007/978-3-642-32427-7_54
8. R. Khan, S.U. Khan, R. Zaheer, S. Khan, Future internet: the internet of things architecture, possible applications and key challenges, in 2012 10th International Conference on Frontiers of Information Technology (IEEE, Islamabad, Pakistan, 2012), pp. 257–260
9. A. Gupta, The IoT Hacker's Handbook, 1st edn. (Apress, Berkeley, CA, 2019)
10. Q. Xiao, T. Gibbons, H. Lebrun, RFID technology, security vulnerabilities, and countermeasures, in Supply Chain: The Way to Flat Organisation, ed. by Y. Huo, F. Jia (IntechOpen, Vienna, Austria, 2008), p. 404
11. A. Perrig, J. Stankovic, D. Wagner, Security in wireless sensor networks. Commun. ACM 47(6), 53–57 (2004)


12. L. Li, Study on security architecture in the internet of things, in Proceedings of 2012 International Conference on Measurement, Information and Control, vol. 1 (IEEE, Harbin, China, 2012), pp. 374–377
13. E. Dubrova, M. Näslund, G. Selander, F. Lindqvist, Cryptographically secure CRC for lightweight message authentication. IACR Cryptology ePrint Archive, 35 (2015)
14. E. Dubrova, M. Näslund, G. Selander, F. Lindqvist, Message authentication based on cryptographically secure CRC without polynomial irreducibility test. Cryptogr. Commun. 10(2), 383–399 (2018)

Intelligent Street Light System Using ARM Cortex M0+

Divesh Kumar, Manish Kumar, and Manish Gupta

Abstract Most existing switching control for street light systems is manual and needs human intervention. There is massive waste of electricity due to poor handling by operators and to street lights remaining on all night. This paper proposes a novel intelligent street light system using the ARM Cortex M0+ processor, in which an ultra-low-power microcontroller automatically controls the street light based on the time and the movement in the vicinity.

Keywords Energy conservation · Street light system · Smart city

D. Kumar (B) · M. Kumar · M. Gupta
GLA University, Mathura, India
e-mail: [email protected]
M. Kumar e-mail: [email protected]
M. Gupta e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_22

1 Introduction

More than 24% of energy is lost during transmission and distribution [1]; if generation losses are added, overall losses exceed 50%. Saving one unit of energy is equivalent to generating two, so electricity conservation is the most effective tool for optimizing resources. Street lights are often found on in the daytime because the operator forgot to switch them off, which wastes a great deal of electricity. In a developing country like India, street lights remain on all night irrespective of the locality. In some places street lights must stay on all night for security reasons, but in most places where no movement happens late at night there is no need to keep them always on. In place of that, an intelligent street light should be used


that can be initially programmed with different operating modes as per the requirements of the place. Many researchers have proposed solutions for smart street light systems. Some companies and universities use centrally controlled street light systems with a host controller or computer; these have the advantage of switching all campus lights on or off from a single controller, but the limitation of remaining on all night. Yoshiura et al. proposed a smart street light system based on a sensor network, which controls the switching of street lights based on the movement of pedestrians or vehicles: a light turns on when any person or vehicle approaches and otherwise remains off for the entire night. This technique offers large electricity savings but cannot be used everywhere, as some places require light all night for security reasons [2]. Abdullah et al. proposed another technique that changes intensity based on the speed of the moving object (pedestrian or vehicle): as the object's speed increases, the intensity of the light increases, and if there is no movement the light stays on at minimum intensity, so that the street is never dark and electricity consumption is lower than with a conventional street light system. However, this technique cannot be used with existing street lights, which would have to be replaced before they could be controlled accordingly. Other researchers have also explored different street light techniques for the conservation of electricity [3]. In most cases, the researchers proposed entirely new smart street light techniques, but none addressed a technique that can be used with the existing street lights.
It is not feasible to replace all street lights with a new system, as most existing street lights have already been replaced by LED lights for low power consumption, and further replacement would waste resources [4–9]. This paper therefore proposes an intelligent street light system using the ARM Cortex M0+ processor that can be used with existing street lights and offers ultra-low power consumption for the controller module.

2 Intelligent Street Light System

The intelligent street light system has three major components: the street light, a real-time clock with the ARM Cortex M0+ processor, and sensors for detecting object movement.

2.1 Street Light

The existing street lights should be LED lamps in order to reduce energy consumption. LED lights provide a better lighting system, with better luminance and therefore better visibility, while consuming less power.


Fig. 1 a DS3231 precision RTC b MSP430FR5962: ARM cortex M0+ based ultra-low-power MCU

2.2 Real-Time Clock with ARM Cortex M0+ Processor

The proposed system uses a Real-Time Clock (RTC) interfaced with the ARM Cortex M0+ processor. The advantage of using the ARM Cortex M0+ processor is that it can operate in ultra-low-power mode, consuming very little power and adding no extra losses to the system. Using the RTC, the total duration of the night is divided into three segments: the first from evening until threshold-1 (e.g., 10:00 p.m.), the second from threshold-1 to threshold-2 (e.g., 4:00 a.m.), and the third from threshold-2 until morning. In most cases there will be movement of pedestrians or vehicles in the first and third segments and very little or no movement in the second. This paper therefore proposes two operating modes based on these segments: the intelligent street light operates in mode-1 during the first and third segments and in mode-2 during the second. The RTC provides the time and date stamp for selecting the proper mode of operation, and the duration of these segments varies with the date read from the RTC. In winter the dark period (night) is longer than in summer, so the times for threshold-1 and threshold-2 are automatically adjusted by the intelligent controller according to the season, using the date read from the RTC. Figure 1a shows the RTC, which provides the current date and time to the microcontroller, and Fig. 1b shows the ARM Cortex M0+ based microcontroller unit, which consumes ultra-low power.
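The three-segment scheme above can be sketched as a small classification routine. The threshold times are the paper's examples; in practice they would be shifted according to the date read from the RTC:

```python
from datetime import time

def classify_segment(now, t1=time(22, 0), t2=time(4, 0)):
    """Map an RTC time during the dark period to one of the three segments.
    t1/t2 default to the example thresholds (10:00 p.m. / 4:00 a.m.)."""
    if now >= t1 or now < t2:
        return 2          # second segment: threshold-1 .. threshold-2 (mode-2)
    if now < time(12, 0):
        return 3          # third segment: threshold-2 .. morning (mode-1)
    return 1              # first segment: evening .. threshold-1 (mode-1)

# Evening, late-night, and early-morning examples:
assert classify_segment(time(20, 30)) == 1
assert classify_segment(time(23, 15)) == 2
assert classify_segment(time(5, 0)) == 3
```

The routine is only consulted while the LDR reports darkness, so daytime values never reach it; the wrap past midnight is handled by the `or` in the first test.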

2.3 Controlled Sensors with Object Movement

The proposed system uses a Light-Dependent Resistor (LDR) and a Passive Infrared (PIR) sensor. The LDR turns the intelligent street light on automatically when it is dark and off in the morning. The PIR sensor detects the motion of pedestrians while the street light is operating either in energy-saving mode (low intensity in mode-1) or off (in mode-2). When the PIR sensor detects pedestrian movement, the street light runs at full intensity for a while until


Fig. 2 a LDR b PIR motion sensor

movement ceases, returning to its default operating mode some time after the PIR sensor detects no further movement. Figure 2a shows the LDR, used to detect the sunlight intensity level, and Fig. 2b shows the PIR sensor, used to detect motion in the surroundings.

3 Proposed System

This paper proposes an intelligent street light system that automatically turns the light on or off, or changes its intensity level, based on time and movement. The LDR measures the intensity of sunlight; when the measured intensity falls to 20% of the maximum sunlight intensity, the light is turned on [10]. The turn-on time depends only on the input received from the LDR and thus adapts automatically between the summer and winter seasons; the input from the LDR defines the start time of the first segment. The end time of the first segment is calculated by reading the date from the RTC. In the winter season, the sun sets early in the evening and people avoid going out late at night, so there is usually very little or no movement after 9:00 p.m., and the first segment runs up to threshold-1 (9:00 p.m.). In winter the sun also rises late and, due to the cold, people avoid getting up too early in the morning, so there is usually no movement before 5:00 a.m.; the second segment is therefore classified from threshold-1 (9:00 p.m.) to threshold-2 (5:00 a.m.) for winter. From 5:00 a.m. until the LDR detects the 20% threshold of sun intensity in the morning is classified as the third segment. In the summer season, people tend to be out late at night and also get up early in the morning, and the night is shorter than in winter, so threshold-1 and threshold-2 change with the season: for example, threshold-1 shifts to 11:00 p.m. and threshold-2 to 4:00 a.m. at the height of summer. This shift in the thresholds is linear, depends on the date, and changes periodically. Figure 3 shows the block diagram of the proposed system. The microcontroller is programmed in ultra-low-power mode so that the overall system consumes very little power, and it changes its state according to the signals received from the LDR, PIR, and RTC.
The PIR sensor is used to detect motion, and the RTC provides the date and time. The time is used to switch between operating modes, and the date is used to shift threshold-1 and threshold-2 with the season. A relay is used to control

Intelligent Street Light System Using …

243


Fig. 3 Interconnection of the sensors and street light with ARM Cortex M0+ based MCU

the switching of the existing street light, so only this module needs to be added to the existing street light system. Based on these thresholds, the total on-duration is divided into the first, second, and third segments. The first and third segments have the highest probability of movement in the street, so these two segments use mode 1 of operation; since there is very little or no movement in the second segment, it uses mode 2 of operation.

3.1 Mode 1 of Operation

This mode of operation applies to the first and third segments. In this mode the street light is on by default at a low intensity when no movement is detected by the PIR sensor. When the PIR sensor detects a pedestrian's movement, the microcontroller increases the intensity of the street light using the Pulse Width Modulation (PWM) technique. As long as movement continues in the street, the MCU keeps the light at full brightness; once the pedestrians have passed and no one is present, the controller reduces the brightness back to the low level after a short span of time. This process continues until threshold-1, and the same mode applies again in the third segment in the morning.

3.2 Mode 2 of Operation

This mode of operation is effective in the second segment. In this mode the street light remains switched off by default and is switched on only if there is any

244

D. Kumar et al.

Fig. 4 Flow chart to switch on and off light automatically and to detect the motion

movement in the street. The light remains switched on while the movement continues and is switched off after a short delay once no further movement is detected by the PIR sensor. Figure 4 shows the flow chart of the operation of the intelligent street light system.
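The two operating modes can be summarized in a single dispatch function. The 30%/100% duty-cycle values are illustrative assumptions, since the paper does not specify the PWM levels.

```python
def light_intensity(segment, motion):
    """PWM duty cycle (%) implementing the two modes described above.
    Mode 1 (segments 1 and 3): dim by default, full brightness on motion.
    Mode 2 (segment 2): off by default, full brightness only on motion.
    The 30%/100% values are illustrative, not from the paper."""
    if segment in (1, 3):         # mode 1
        return 100 if motion else 30
    if segment == 2:              # mode 2
        return 100 if motion else 0
    return 0                      # daytime: light off
```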

4 Conclusion and Future Scope

This paper presented the technical details of an intelligent street light system and identified a gap for energy conservation in existing street light systems. Using the proposed technique, energy consumption can be reduced by approximately 50%. The technique also has an advantage over other smart street light systems: it can be interfaced with an existing street light using only a relay, so there is no need to replace


the existing street light with a completely new system. The timing of the operating modes is adjusted automatically according to the season. The technique can be combined with sensor networks and the Internet of Things (IoT) for further enhancement. This would make it possible to monitor all the street lights of a city on a single portal, so that the pattern of movement in a particular area can be recorded; based on the analysis of one year of data [11], the timing of each intelligent street light can be reprogrammed to optimize power consumption.

References

1. K. Ramalingam, C. Indulkar, Transmission losses in India. http://www.electricalindia.in/transmission-losses-in-india/ (2018)
2. N. Yoshiura, Y. Fujii, N. Ohta, Smart street light system looking like usual street lights based on sensor networks, in 2013 13th International Symposium on Communications and Information Technologies (ISCIT) (Surat Thani, 2013), pp. 633–637
3. A. Abdullah, S.H. Yusoff, S.A. Zaini, N.S. Midi, S.Y. Mohamad, Smart street light using intensity controller, in 2018 7th International Conference on Computer and Communication Engineering (ICCCE) (Kuala Lumpur, 2018), pp. 1–5
4. Y.-S. Yang et al., An implementation of high efficient smart street light management system for smart city. IEEE Access (2020)
5. R. Majumdar et al., IoT based street light controlling mechanism, in 2019 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE) (IEEE, 2019)
6. T.F. Vieira et al., An IoT based smart utility pole and street lighting system, in 2019 IEEE Chilean Conference on Electrical, Electronics Engineering, Information and Communication Technologies (CHILECON) (IEEE, 2019)
7. F. Wu et al., Rapid localization and extraction of street light poles in mobile LiDAR point clouds: a supervoxel-based approach. IEEE Trans. Intell. Transp. Syst. 18(2), 292–305 (2016)
8. N. Ouerhani et al., IoT-based dynamic street light control for smart cities use cases, in 2016 International Symposium on Networks, Computers and Communications (ISNCC) (IEEE, 2016)
9. V. Sakthi Priya, M. Vijayan, Automatic street light control system using WSN based on vehicle movement and atmospheric condition. Int. J. Commun. 5(01), 6 (2017)
10. M. Kuusik, T. Varjas, A. Rosin, Case study of smart city lighting system with motion detector and remote control, in 2016 IEEE International Energy Conference (ENERGYCON 2016) (2016)
11. B. Cheng et al., Automated extraction of street lights from JL1-3B nighttime light data and assessment of their solar energy potential. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 13, 675–684 (2020)

Image Segmentation

Anurag Jindal, Samarth Joshi, Rishabh Jangwal, Ankit Rathi, and Rachna Jain

Abstract Image segments are the useful parts of an image that are of interest to us, and they are obtained through image segmentation. Mathematically, the image that is broken into useful parts is a set, and the segments are subsets of this set; the segmentation of objects obeys the laws of set theory. These subsets, or segments, help us understand the picture better. Image segmentation is important for the extraction of higher-level features, and the process depends on the characteristics of an object such as points, lines, edges, and areas. Object segmentation plays a vital role but is difficult, because segmentation accuracy determines the success or failure of a computerized image analysis method. This paper presents approaches that divide an object into its corresponding partitions using methods based on edge detection, thresholding, and regions.

Keywords Mask R-CNN · Object detection · Instance segmentation · Object counting

1 Introduction

Digital image processing treats an object as a digital image, and this is made possible by image segmentation. Images play a vital role in conveying information: they can be examined to extract the information required for most operations. A digital image is composed of a large number of elements, or pixels, and of the arrangement of these pixels; this composition is known as imaging. Such images are then separated into small partitions in order to retrieve the important segments within them. The parts of the image are known as subsets, whereas the entire image is a set, and this decomposition is achieved by one technique: image segmentation. After all

A. Jindal (B) · S. Joshi · R. Jangwal · A. Rathi · R. Jain Bharati Vidyapeeth's College of Engineering, New Delhi, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_23

247

248

A. Jindal et al.

this, the required subset provides information about the image: which subsets should be operated on and which should not. Once a particular subset is chosen from among all the subsets, it can be operated on. This division of an image into parts that lie next to each other is the process known as image segmentation. Let R represent the whole spatial region covered by an image, and let R1, R2, R3, …, Rn be its n partitions. Then

R1 ∪ R2 ∪ · · · ∪ Rn = R

(1)

where each Ri is a connected set, i = 1, 2, 3, …, n, and Ri ∩ Rj = ∅ for all i ≠ j. Image segmentation can alternatively be defined as the technique by which an image is broken down into multiple homogeneous parts that act as cells adjacent to each other. Mask R-CNN is a technique that extends Faster R-CNN by predicting a segmentation mask on each region of interest (ROI), in parallel with the existing branch for classification and bounding-box regression; the mask branch is a small fully convolutional network (FCN) applied to each ROI. Given the Faster R-CNN framework, Mask R-CNN is easy to implement and train, and the mask branch adds only a small overhead, enabling a fast system and rapid experimentation (Fig. 1). Good results, however, require good pixel-to-pixel alignment between network inputs and outputs. This is where the plain ROI Pool [1, 2], the de facto core operation for attending to instances, falls short: it performs coarse spatial quantization for feature extraction, introducing misalignments between the ROI and the extracted features. To fix this misalignment, a quantization-free layer called ROI Align is used, which faithfully preserves exact spatial locations. Despite being a seemingly minor change, ROI Align has a large impact: it improves mask accuracy by roughly 10 to 50%, with larger gains under stricter localization metrics. Second, it is essential to decouple mask and class prediction: a binary mask is predicted for each class independently, without competition among classes, and the ROI classification branch of the network is relied on to predict the category. In contrast, FCNs generally perform multi-class classification per pixel,

Fig. 1 Mask R-CNN model for performing instance segmentation

Image Segmentation

249

which couples segmentation and classification, and this works poorly for instance segmentation. The main difference between R-CNN and Fast R-CNN is that the former runs a ConvNet on each of around 2000 region proposals [1, 2], whereas the latter uses only a single ConvNet pass over the whole image. Also, Fast R-CNN uses a multi-task loss, which R-CNN does not, and R-CNN uses an SVM for object classification whereas Fast R-CNN uses a softmax; in terms of performance, the softmax outperforms the SVM for object classification. Without bells and whistles, Mask R-CNN surpasses all previous state-of-the-art single-model results on the COCO [3] instance segmentation task, including the heavily engineered entries from the 2016 competition. This technique stands out very well on COCO. Through ablation studies, the effect of each core component on the results can be estimated. All the models can run at a speed of about 200 ms per frame on a GPU system [4, 5]. Finally, the framework generalizes to human pose estimation on COCO: Mask R-CNN can detect instance-specific poses by predicting the location of each keypoint as a one-hot binary mask. Mask R-CNN surpassed the 2016 COCO keypoint competition winner [4, 5] while running at about 5 frames per second. Mask R-CNN is thus a flexible framework for instance-level recognition and can readily be extended to harder tasks.

2 Related Work

R-CNN: The region-based CNN (R-CNN) approach to bounding-box object detection attends to a manageable number of candidate object regions and evaluates a convolutional network [6] independently on each ROI. R-CNN was later extended to use ROI Pool to attend to ROIs on feature maps, giving faster speed and better accuracy, and was further developed into Faster R-CNN, which learns the region proposals themselves. Faster R-CNN is flexible and robust to many follow-up improvements and is among the leading frameworks on several benchmarks.

Instance Segmentation: Driven by the effectiveness of R-CNN, many approaches to instance segmentation are based on segment proposals. Earlier methods relied on bottom-up segments: segment proposals were classified by Fast R-CNN [7, 8], so segmentation preceded recognition. Similarly, other work proposed a two-stage cascade that predicts segment proposals from bounding-box proposals [3, 9]. Instead, the system here is based on parallel prediction of masks and class labels, which is simpler and more flexible.


In instance segmentation, the boundaries of all objects in a chosen image are identified at the level of individual pixels, and the pixels [1, 2] of one object do not overlap with the pixels of another object. Object identification at this level of detail is difficult, which is why proposal-based methods dominate. One recurring plan is to predict a set of position-sensitive output channels fully convolutionally; these channels simultaneously address object classes, boxes, and masks, making the system fast. But on overlapping instances this approach (FCIS) exhibits [10] systematic errors and creates spurious edges, showing the fundamental difficulty of segmenting instances. Another family of solutions [4, 5, 11–13] is driven by the success of semantic segmentation: starting from per-pixel classification results, these methods attempt to cut the pixels belonging to the same category into different instances. In contrast to the segmentation-first strategy of these methods, Mask R-CNN is based on an instance-first strategy. The Mask R-CNN framework is conceptually simple and intuitive: for each candidate object there are two outputs, a class label and a bounding-box offset, to which a third branch is added that outputs the object mask. The additional mask output is, however, distinct from the class and box outputs, requiring extraction of a much finer spatial layout of an object. The key element of Mask R-CNN is therefore pixel-to-pixel alignment, which is the main missing piece of Fast and Faster R-CNN.

Faster R-CNN: A quick review of the Faster R-CNN detector follows. Faster R-CNN consists of two stages. The first stage, called the Region Proposal Network (RPN), proposes candidate object bounding boxes.
The second stage, which is in essence Fast R-CNN, extracts features using ROI Pool from each candidate box and performs classification and bounding-box regression. The features used by both stages can be shared for faster inference.

Mask R-CNN: Mask R-CNN adopts the same two-stage procedure, with an identical first stage (the RPN). In the second stage, in parallel with predicting the class and box offset, Mask R-CNN [14–16] also outputs a binary mask for each ROI. This contrasts with most recent systems, where classification depends on mask predictions. The approach instead follows the spirit of Fast R-CNN, which applies bounding-box classification and regression in parallel. The classification loss Lcls and bounding-box loss Lbox [6, 17] are identical to those of Fast R-CNN. Formally, during training a multi-task loss on each sampled ROI is defined as:

L = Lcls + Lbox + Lmask

(2)
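The mask term of Eq. (2) can be sketched in numpy as a per-pixel sigmoid followed by an average binary cross-entropy evaluated only on the mask of the ground-truth class; the array shapes and the function name are illustrative, not from the paper.

```python
import numpy as np

def mask_loss(logits, gt_mask, k):
    """logits: (K, m, m) raw mask predictions for one ROI;
    gt_mask: (m, m) binary ground-truth mask; k: ground-truth class.
    Only the k-th predicted mask contributes to the loss, so there is
    no competition between classes."""
    p = 1.0 / (1.0 + np.exp(-logits[k]))          # per-pixel sigmoid
    eps = 1e-12                                   # numerical safety
    bce = -(gt_mask * np.log(p + eps) + (1 - gt_mask) * np.log(1 - p + eps))
    return float(bce.mean())                      # average over m*m pixels
```

With all-zero logits every pixel has probability 0.5, so the loss equals log 2 regardless of the target, while a confident correct prediction drives the loss toward zero.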

For each ROI, the mask branch has a K·m²-dimensional output, encoding K binary masks [1] of resolution m × m, one for each of the K classes. A per-pixel sigmoid is applied, and Lmask is defined as the average binary cross-entropy loss. For an ROI associated with ground-truth class k, Lmask is defined only on the k-th mask; the other mask outputs do not contribute to the loss. This definition allows the network to generate masks for every class without competition among classes [6, 7]; the dedicated classification branch is relied upon


to predict the class label that is used to select the output mask; this decouples mask and class prediction. It differs from the common practice when FCNs are applied to semantic segmentation, which typically uses a per-pixel softmax and a multinomial cross-entropy loss: in that case masks across classes compete, whereas with a per-pixel sigmoid and a binary loss they do not. Experiments show that this formulation is key to good instance segmentation results.

Mask Representation: A mask encodes an input object's spatial layout. Thus, unlike class labels or box offsets, which are inevitably collapsed into short output vectors by fully connected (fc) layers, extracting the spatial structure of masks can be addressed naturally by the pixel-to-pixel correspondence provided by convolutions. An m × m mask is predicted from each ROI using an FCN [2, 9]. This allows each layer in the mask branch to maintain the explicit m × m object spatial layout without collapsing it into a vector representation that lacks spatial dimensions. Unlike previous strategies that rely on fc layers for mask prediction, the fully convolutional representation requires fewer parameters and is more accurate, as demonstrated by experiments. This pixel-to-pixel behavior requires the ROI features, which themselves are small feature maps, to be well aligned so as to faithfully preserve the explicit per-pixel spatial correspondence. This motivated the ROI Align layer, which plays a key role in mask prediction.

ROI Align: ROI Pool is the standard operation for extracting a small feature map (e.g., 7 × 7) from each ROI.
ROI Pool first quantizes a floating-number ROI to the discrete granularity of the feature map (e.g., by computing [x/16], where 16 is the stride of the feature map and [·] denotes rounding); the quantized ROI is then subdivided into spatial bins which are themselves quantized (e.g., into 7 × 7), and finally the feature values covered by each bin are aggregated. These quantizations introduce misalignments between the ROI and the extracted features, which have a large negative effect on predicting pixel-accurate masks. To address this, the ROI Align layer removes the harsh quantization of ROI Pool and properly aligns the extracted features with the input. The proposed change is simple: any quantization of the ROI boundaries or bins is avoided (i.e., x/16 is used instead of [x/16]), bilinear interpolation is used to compute the exact values of the input features at four regularly sampled locations in each ROI bin, and the results are aggregated. The result is not sensitive to the exact sampling locations, or to how many locations are sampled, as long as no quantization is performed. ROI Align leads to significant improvements. The ROI Warp operation proposed in [8] is also compared against: unlike ROI Align, ROI Warp overlooked the alignment issue and quantized the ROI just like ROI Pool. While ROI Warp [8] also employs bilinear resampling, it performs on a par with ROI Pool in experiments, which highlights the crucial role of alignment.
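The key operation of ROI Align, bilinear interpolation of the feature map at continuous (unquantized) sampling locations, can be sketched as follows; the function name and the in-bounds assumption are illustrative.

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Sample a 2-D feature map at a continuous (y, x) location by
    bilinear interpolation of the four nearest grid points. No coordinate
    quantization is performed, which is the key idea behind ROI Align
    (in-bounds coordinates are assumed for this sketch)."""
    h, w = feat.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    ly, lx = y - y0, x - x0                       # fractional offsets
    return (feat[y0, x0] * (1 - ly) * (1 - lx)
            + feat[y0, x1] * (1 - ly) * lx
            + feat[y1, x0] * ly * (1 - lx)
            + feat[y1, x1] * ly * lx)
```

In ROI Align, each output bin would aggregate four such samples taken at regularly spaced points inside the bin.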


3 Network Architecture

3.1 Mask R-CNN

To demonstrate generality, Mask R-CNN is instantiated with multiple architectures. The distinctions are: (i) the convolutional backbone architecture used for feature extraction over the entire image, and (ii) the network head for bounding-box recognition (classification and regression) and mask prediction that is applied separately to each ROI. ResNet and ResNeXt networks [18] of depth 50 or 101 layers are evaluated. The original implementation of Faster R-CNN with ResNets extracted features from the final convolutional layer of the fourth stage; this backbone, denoted ResNet-50-C4, has been commonly used. A more effective backbone recently proposed by Lin et al. [3], called the Feature Pyramid Network (FPN), is also explored. FPN uses a top-down architecture with lateral connections to build an in-network feature pyramid from a single-scale input. Faster R-CNN with an FPN backbone extracts ROI features from different levels of the feature pyramid according to the ROI's scale, but otherwise the approach is similar to vanilla ResNet. Using a ResNet-FPN backbone for feature extraction with Mask R-CNN gives excellent gains in both accuracy and speed. Using Mask R-CNN, the following operations can be performed: 1. Object detection, giving the (x, y) bounding-box coordinates for every object in the image. 2. Instance segmentation, giving a pixel-wise mask in the image for every individual object. Semantic segmentation [19] is considered a very important element of image understanding and a key computer vision problem. It associates every pixel of an object with a category label such as car or lane.
Semantic segmentation is commonly used in automated cars, medical image segmentation, aerial sensing [9, 17, 18], human–machine interaction, and many other areas. With the help of deep learning, even the pipelines have become simpler, leading to a better understanding of the models and to a great enhancement of the machine learning algorithms (Fig. 2). These problems were once addressed with conventional machine learning methods, but developments in deep learning have provided plenty of room to improve them in terms of effectiveness and performance. Semantic segmentation is among the most informative of these tasks, and the main challenge is to reduce the models so that they can run more effectively and efficiently.
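The FPN top-down pathway with lateral connections mentioned in Sect. 3.1 can be sketched with numpy. The 1 × 1 lateral and 3 × 3 output convolutions of the real FPN are omitted here, so this is only a structural illustration, not the actual architecture.

```python
import numpy as np

def upsample2x(f):
    """Nearest-neighbour 2x spatial upsampling of a (C, H, W) feature map."""
    return f.repeat(2, axis=1).repeat(2, axis=2)

def fpn_merge(coarse, lateral):
    """One top-down FPN step: upsample the coarser pyramid level and add
    the lateral feature map from the finer level (convolutions omitted)."""
    return upsample2x(coarse) + lateral
```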


Fig. 2 Head architecture

Fig. 3 PSP architecture

3.2 PSP-Net

We use the pyramid scene parsing network (PSPNet) with its pyramid pooling module, as shown in Fig. 3. Given an input image, a pre-trained dilated-convolution network is first used to extract the feature map, whose size is 1/8 of the input image. To collect global context information over this map, the pyramid pooling configuration of Fig. 3 is applied. An auxiliary loss is also used with the ResNet-101 backbone: each stage is a collection of residual blocks, and the auxiliary loss is applied after the res4b22 residual block (Fig. 3).


Fig. 4 a, b Outcomes of the project



4 Implementation

4.1 Mask R-CNN

Mask R-CNN adopts the hyperparameters of the existing Faster R-CNN work [1, 3]. Although these settings were tuned for the original object detection task [1, 3], they also work well for the instance segmentation approach here. The regions proposed by the RPN may differ in shape, so a pooling layer first brings them all to the same shape; a network then predicts the class labels and bounding boxes. These steps are identical to those performed by Faster R-CNN. To begin, the regions of interest (ROI) are determined so that computation time can be reduced. Then the Intersection over Union (IoU) with the ground-truth areas is measured using the bounding boxes:

IOU = Area of intersection/Area of union

(3)

A region is treated as an ROI only if its IoU with a ground-truth box is at least 0.5; otherwise that specific region is ignored. From all the candidate regions, only the subset whose IoU exceeds 0.5 is selected. Training details: An ROI is considered positive if it has IoU with a ground-truth box of at least 0.5, and negative otherwise; the mask loss Lmask is defined only on positive ROIs. Images are resized so that their shorter edge is 800 pixels [1, 3]. Each mini-batch has two images per GPU, and each image has N sampled ROIs with a ratio of 1:3 of positive to negative [1]. N is 64 for the C4 backbone [1] and 512 for the FPN backbone [3]. Training runs on 8 GPUs for 160k iterations, with a learning rate of 0.02 that is decreased at 120k iterations; a weight decay of 0.0001 and a momentum of 0.9 are used. With ResNeXt, one image per GPU is used with the same number of iterations and a starting learning rate of 0.01.
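Equation (3) and the 0.5 positive-ROI rule can be implemented directly for axis-aligned boxes given as (x1, y1, x2, y2); the function names are illustrative.

```python
def box_iou(a, b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def is_positive_roi(roi, gt_box, thresh=0.5):
    """An ROI counts as positive when its IoU with a ground-truth box
    reaches the 0.5 threshold used in the text."""
    return box_iou(roi, gt_box) >= thresh
```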

4.2 PSP-Net

For a practical deep learning system, the devil is in the details. The implementation is based on the public Caffe [16] framework. As in [12], the "poly" learning rate policy is used, in which the current learning rate equals the base rate multiplied by (1 − iter/max_iter)^power. The base learning rate is set to 0.01 and the power to 0.9. Performance can be improved by increasing the number of iterations, which is set to 150K for the ImageNet experiment, 30K for PASCAL VOC, and 90K for the


Cityscapes experiment. Momentum and weight decay are set to 0.9 and 0.0001, respectively. Experiments show that a suitably large "crop size" can lead to good performance and that the "batch size" in the batch normalization layer [20] is of great importance. Due to limited physical memory on the GPU cards, the "batch size" is set to 16 during training; to achieve this, the Caffe of [21] is modified. For the ablation study of PSPNet settings: the baseline is a dilated FCN based on ResNet-50; "B1" and "B1236" denote pooled feature maps of bin sizes {1 × 1} and {1 × 1, 2 × 2, 3 × 3, 6 × 6}, respectively; "MAX" and "AVE" represent max pooling and average pooling operations, respectively; and "DR" means dimension reduction after pooling.
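The "poly" learning-rate policy described above is a one-liner; the function name is illustrative.

```python
def poly_lr(base_lr, iteration, max_iter, power=0.9):
    """'Poly' policy: current LR = base_lr * (1 - iteration/max_iter) ** power.
    Decays smoothly from base_lr at iteration 0 to zero at max_iter."""
    return base_lr * (1.0 - iteration / max_iter) ** power
```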

4.3 Spectral Matting

A natural approach would be to eliminate all edges between significantly different points (i.e., those with weights w(x_i, x_j) > π) and then scan the resulting graph for connected components (CCs). Note, however, that a single spurious low-weight edge is enough to cause two distinct regions to merge: in the presence of spill ties (a.k.a. "leaks") between regions, CCs are not resilient, and the effect is that often no reasonable value of π provides a useful segmentation. In addition, it is worth pointing out that a cost-effective way to perform this CC grouping is to construct the graph's minimum spanning tree (MST) and cut it at the threshold π. This can be done with Kruskal's algorithm, a greedy strategy that guarantees an optimal MST: starting from the fully disconnected graph, edges are introduced one at a time in increasing order of weight, as long as adding an edge does not create a cycle in the current subgraph. The CCs of the thresholded graph (with edges of weight w(x_i, x_j) > π removed) can then be computed efficiently by removing the corresponding edges from the MST; the trees of the resulting forest are exactly the required CCs. This is essentially a truncated variant of Kruskal's method.
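The thresholded connected-component grouping described above can be sketched with a union-find structure; the function name and the (u, v, weight) edge representation are illustrative.

```python
def components_below_threshold(n, edges, threshold):
    """Number of connected components of an n-node graph when only edges
    with weight <= threshold are kept (union-find). Because Kruskal's
    greedy MST construction adds edges in increasing weight order, cutting
    the MST's edges above the threshold yields the same components."""
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for u, v, w in sorted(edges, key=lambda e: e[2]):
        if w > threshold:
            break                          # remaining edges are above the cut
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv                # merge the two components
    return len({find(i) for i in range(n)})
```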

5 Results

Mask R-CNN is compared with the state-of-the-art methods for instance segmentation, which detect and delineate each object in an image; the comparison includes MNC [17] and FCIS [6]. Even though it is safe to conclude that a fully general, precise, and semantically meaningful clustering of a picture will remain extremely hard to accomplish, old and new approaches and research efforts on this problem continue to achieve greater precision over time. However, since the segmentation is not always precise, it is recommended to keep minimum thresholds for optimization based on color, shape,


Table 1 IoU classification comparison of different methods

Method              IOU classification
Mask R-CNN          78.4
Spectral matting    62.5
PSP-Net             67.5

Table 2 Advantages and limitations of different algorithms

Algorithm         Advantages                                      Limitations
Mask R-CNN        Simple and flexible; current state of the art   High training time
Spectral matting  Works well on small datasets and generates      Computation time is large and expensive;
                  excellent clusters                              not suitable for non-convex clusters
PSP-Net           Good for small datasets and images having       Not suitable when there are too many
                  contrasting backgrounds                         edges in the image

or geographical position, so that the desired precision of the optimization can be monitored for the given input. Various image segmentation models were studied and successfully implemented, and Mask R-CNN acts as the state-of-the-art model. After several implementations and after collecting multiple results, the Mask R-CNN model turned out to be better than the PSP-Net model; these objectives were achieved with the help of the Cityscapes dataset. Mask R-CNN with ResNet-101-FPN also outperforms FCIS+++ [6], the winner of the COCO 2016 instance segmentation challenge, which includes multi-scale train/test, horizontal flip testing, and online hard example mining (OHEM) (Tables 1 and 2). The advantages and limitations of the different algorithms (Mask R-CNN, spectral matting, and PSP-Net) are also stated; these observations were obtained after studying and implementing the algorithms on the chosen dataset. Figure 4 shows the outcomes of this project: Mask R-CNN handles its losses well enough to overcome the challenges on the chosen dataset.

6 Conclusions

Segmentation of a picture treats the picture as a set and splits it into subsets called segments. These subsets are disjoint classes, so a valid partition is obtained only when no two of them intersect. Every subset carries relevant information about the set. Image segmentation is conducted primarily via two strategies, centered on discontinuity and similarity, which split a given object into subsets. The project presented here relies on frequency, resolution, thresholds, and the relationship of pixels to their neighbors.


Edge detection, region splitting and merging, thresholding, etc., are the methods under these strategies. Using such techniques an object can be segmented and useful information obtained from it. Two variants of an instance segmentation interface were synthesized, created, and assessed in this project. The information gathered through these processes, though, is still not 100% correct. Further development is needed in object segmentation methodology to provide simple but relevant image understanding that suits the needs of the applications of the next decade.

7 Future Scope

Much has been done to ensure accurate clustering of pictures so far. Image segmentation has progressed from manual methods, via the implementation of different strategies, toward techniques for fully automatic segmentation. Very remarkable clustering can still be generated on a wide set of photos using a couple of basic grouping cues [13]. Depending on differences in hue and intensity beyond a given threshold, significant objects can be delineated in many instances within a picture. In certain cases, the border between two areas is determined by measuring the variation in intensity across the border as well as the variation in intensity among adjacent pixels within each area.

References
1. B. Hariharan, P. Arbeláez, R. Girshick, J. Malik, Hypercolumns for object segmentation and fine-grained localization (2015), p. 2
2. Z. Cao, T. Simon, S.-E. Wei, Y. Sheikh, Realtime multi-person 2D pose estimation using part affinity fields (2017), pp. 7, 8
3. N. Dhanachandra, K. Manglem, Y.J. Chanu, K-means clustering algorithm and subtractive clustering algorithm (2015)
4. B. Hariharan, P. Arbeláez, R. Girshick, J. Malik, Simultaneous detection and segmentation (2014), p. 234
5. M. Bai, R. Urtasun, Deep watershed transform for instance segmentation (2017), pp. 11–19
6. Z. Liu, X. Li, P. Luo, C.C. Loy, X. Tang, Semantic image segmentation via deep parsing network. New state-of-the-art segmentation accuracy of 77.5% (2015)
7. O. Ronneberger, P. Fischer, T. Brox, U-Net: convolutional networks for biomedical image segmentation, using an encoder–decoder net (2015)
8. F. Milletari, N. Navab, S.A. Ahmadi, V-Net: fully convolutional neural networks for volumetric medical image segmentation (2016)
9. M. Bai, R. Urtasun, Deep watershed transform for instance and semantic segmentation (2017), pp. 1–23
10. K. He, X. Zhang, S. Ren, J. Sun, Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 99–121 (2015)
11. L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, A.L. Yuille, Semantic image segmentation with deep convolutional nets and fully connected CRFs (preprint)

Image Segmentation


12. R. Mandal, N. Choudhury, Automatic video surveillance for theft detection in ATM machines: an enhanced approach, in 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom) (2016)
13. S. Ioffe, C. Szegedy, Batch normalization: accelerating deep network training by reducing internal covariate shift (2015)
14. R. Girshick, J. Donahue, T. Darrell, J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation (2014)
15. Z. Hayder, X. He, M. Salzmann, Shape-aware instance segmentation (2017), pp. 9–54
16. A. Arnab, P.H. Torr, Pixelwise image and instance segmentation with a dynamically instantiated network (2017), pp. 3–23
17. R. Girshick, F. Iandola, T. Darrell, J. Malik, Deformable part models are convolutional neural networks (2015), pp. 4–34
18. M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, B. Schiele, The Cityscapes dataset for semantic urban scene understanding (2016), pp. 9–21
19. S. Bell, C.L. Zitnick, K. Bala, R. Girshick, Inside-Outside Net: detecting objects in context with skip pooling and recurrent neural networks (2016), p. 5
20. P. Arbeláez, J. Pont-Tuset, J.T. Barron, F. Marques, J. Malik, Multiscale combinatorial grouping (2014), p. 2
21. A. Arnab, P.H. Torr, Pixelwise image and instance segmentation with a dynamically instantiated network (2017), pp. 3, 9, 34–45

Application of Bio Sensor in Carpal Tunnel Syndrome

Mayank Agrawal and Nikita Gautam

Abstract The objective of this study is to evaluate the diagnosis of strain and pressure measurement in the median nerve passing through the carpal tunnel. In this article, carpal tunnel disorder is discussed along with its symptoms. The paper reviews biosensors, with the future prospects of wearable and implantable sensors, and sheds light on different types of biosensors and their working principles. The article presents an idea for the diagnosis process of carpal tunnel disorder: a low-cost pressure transducer is proposed, and a signal processor is used to process the data from the sensor and analyze the parameters.

Keywords Carpal tunnel · Biosensor · Pressure transducer · Signal processor

1 Introduction
The carpal tunnel is a narrow passageway in the wrist, about an inch wide. The floor and walls of the passage are formed by small wrist bones called carpal bones, and a strong band of connective tissue forms the roof of the tunnel. Because of these rigid boundaries, the carpal tunnel has little capacity to stretch. The median nerve is one of the principal nerves in the body. It passes through the carpal tunnel at the wrist and continues into the hand, where it provides sensation in the palm and supplies the muscles at the base of the thumb [1]. Carpal tunnel syndrome (CTS) is an entrapment neuropathy resulting from mechanical irritation or compression of the median nerve at the wrist. It occurs when the tissues surrounding the flexor tendons swell, putting pressure on the median nerve [2]. This swelling in the tunnel results from stress induced in the median nerve and adjacent tissues by forceful exertions.
M. Agrawal (B) · N. Gautam Poornima Institute of Engineering and Technology, Jaipur, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_24



Fig. 1 Carpal tunnel

The main symptoms of this abnormal pressure on the nerve are uncomfortable numbness, tingling sensations, and pain in the wrist and fingers [3] (Fig. 1). The median nerve starts to enlarge when the wrist comes under continuous pressure, so some indication is needed so that the position of the wrist can be changed to reduce the pressure on the carpal tunnel. Implantable or wearable biomedical sensors can be used to detect the strain in the median nerve and to give an indication when a limit is crossed. The results from the sensors can also be used for the diagnosis of carpal tunnel syndrome (CTS) [3]. The mechanism of nerve kinematics during limb motion is key to examining the possibility of CTS. Characterizing the changes in sensor parameters after carpal tunnel release may also be valuable for evaluating the effectiveness of the treatment or the recovery process of carpal tunnel syndrome. In most cases, carpal tunnel syndrome worsens over time, so early diagnosis is important; if pressure on the median nerve keeps increasing, it can lead to nerve damage and worsening symptoms [4]. The aim of the following study is therefore to examine median nerve compression and enlargement using biomedical sensors.

2 Biosensors
Research and inventions in the medical field are growing day by day and are responsible for improved health care and a reduced death rate. Inventions provide new ways to complete a task. Ideally, improved health care would make everyone disease-free, but in practice this is not possible, so the facilities for treatment should be as good as possible. The treatment of any disease is possible only


if its existence in the human body is detected, so we can say that the cure of any disease starts with its diagnosis. For this, sensors are needed to examine the human body for relevant electrolytes and physiological variables. A sensor is the only device that can turn physiological variables into information that can be understood by a human being. The sensors used in the field of medicine are called biosensors, as they interact directly with the human body or with biological elements; the study of biosensors comes under the field of biomedical engineering. A biosensor can be defined as a specific type of sensor consisting of a biological recognition element and a physical transducer that converts physiological variables, extracted from the human body, into an electric signal. In a biosensor, the biological element is responsible for sensing the electrolytes and the physiology of the body part of interest, while the physical transducer converts that extracted information into an electric signal that can be manipulated. Work in the field of biosensors is built around the concept of MEMS (micro-electro-mechanical systems), which provides the basis for the nanoscale design of sensors [5]. Conventional, invasive methods of implanting sensors to assess a person's physiology are painful and slow. This drawback of conventional sensors motivates work on wearable or implantable sensors that can examine the human body painlessly while the person works or performs other tasks. The biggest advantage of wearable biosensors is that their data can be transmitted to medical authorities without disturbing the patient wearing them, and regular monitoring of patients is made easy by implantable sensors. Wearable and implanted sensors have therefore rapidly entered the area of digital health checkups and become a field of interest.
The first wearable biosensor or biodevice was introduced in 1960 as a cardiac pacemaker that could be implanted in the human body for regular monitoring. All implantable or wearable sensors are battery-powered. These sensors are taking the place of conventional sensors, as they can easily be used on small parts of the body (Fig. 2). Our motivation is therefore to review a diagnostic device for measuring the extensibility of certain key carpal ligaments to determine a subject's susceptibility to carpal tunnel damage [6]. The review describes a device and a method for measuring the strain in the carpal ligaments, which in turn provides indicators of future or existing carpal tunnel syndrome [6].

3 Types of Bio Sensors
On the basis of sensing methodology and the kind of output produced, biosensors are of many types:


Fig. 2 Wearable sensor

3.1 Electrochemical Sensors
Sensors whose working principle depends on the relationship between chemical reactions and electrode potential are called electrochemical sensors. Redox reactions are employed to analyze the amount of an analyte. Electrochemical sensors do not share the drawbacks of optical sensors: a more stable output, higher sensitivity, and quick response are their main characteristics, and they are among the most popular sensors. The glucometer that is available for quantifying glucose levels in blood samples is an electrochemical biosensor based on the potentiometric principle [2].
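The potentiometric principle behind such sensors can be illustrated with the Nernst equation, which relates electrode potential to analyte activity. The sketch below is a generic textbook calculation, not the calibration of any actual glucometer; the standard potential and activity ratio are invented for the example:

```python
import math

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol
T = 298.15     # temperature, K (room temperature)

def nernst_potential(e0, n, activity_ratio):
    """Electrode potential E = E0 - (RT/nF) * ln(Q) for an n-electron redox couple."""
    return e0 - (R * T / (n * F)) * math.log(activity_ratio)

# Illustrative one-electron couple with standard potential 0.20 V
# and an activity ratio (reaction quotient) of 10.
e = nernst_potential(0.20, 1, 10.0)
```

A tenfold change in the activity ratio shifts the potential by about 59 mV at room temperature; a potentiometric readout circuit measures exactly this kind of shift.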

3.2 Piezoelectric Sensors
The working principle of piezoelectric sensors depends on piezoelectric materials, which are defined as materials that produce an electrical signal as output when subjected to mechanical action. Piezoelectric materials were first introduced in 1880. Generally, a quartz crystal is used as the piezoelectric material in these sensors, and a bio-capture layer is introduced on the surface of the material for sensing purposes. When the analyte undergoes specific binding with the bio-capture molecules on the sensor, a mass change occurs. This leads to a change in the oscillation frequency and the production of an electric signal that is detected at the output [2] (Fig. 3).
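The relation between deposited mass and oscillation frequency described above is commonly modeled by the Sauerbrey equation for quartz crystals. The sketch below uses standard quartz constants, while the crystal frequency and mass loading are illustrative values, not measurements:

```python
import math

def sauerbrey_shift(f0_hz, dm_per_area_kg_m2):
    """Frequency shift df = -(2*f0^2 / sqrt(rho_q*mu_q)) * (dm/A) for a quartz crystal."""
    rho_q = 2648.0   # density of quartz, kg/m^3
    mu_q = 2.947e10  # shear modulus of quartz, Pa
    return -(2.0 * f0_hz ** 2 / math.sqrt(rho_q * mu_q)) * dm_per_area_kg_m2

# 5 MHz crystal loaded with 1 microgram/cm^2 (= 1e-5 kg/m^2, illustrative).
df = sauerbrey_shift(5e6, 1e-5)
# The oscillation frequency drops by a few tens of hertz,
# which the sensor electronics convert into an output signal.
```

This is why even tiny amounts of bound analyte are detectable: the frequency shift scales with the square of the base frequency.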


Fig. 3 Piezoelectric sensor

3.3 Surface Plasmon Resonance (SPR)
The working principle of an SPR-based sensor relies on the reflection of frequency-matched electromagnetic radiation. The collective vibration of the electron cloud in a molecular structure is known as a plasmon, and the frequency of vibration of the electrons is determined by the characteristics of the material. Usually, gold or silver is used in this type of transducer. When frequency-matched electromagnetic radiation is incident on the material layer, the layer is in resonance and the radiation is reflected at the resonance angle, which is determined by the refractive index of the layer. If the material is modified by the capture model, the resonance angle changes; this change in resonance angle can be quantified to determine the concentration of the analyte and to generate an electric signal at the output [2] (Fig. 4).

Fig. 4 SPR-based sensor


4 Diagnosis of Carpal Tunnel Syndrome
There are many methodologies for diagnosing carpal tunnel disorder. Since the cause of carpal tunnel syndrome is enlargement and compression of the median nerve passing through the tunnel, all diagnosis methods are based on analyzing the physiological variables associated with the median nerve, and regular analysis of the variation in these variables can quantify the presence of carpal tunnel syndrome. The enlargement and compression of the median nerve occur due to the pressure on the wrist during movement of the hand, so by analyzing the pressure on the median nerve, its enlargement and compression can be quantified. To analyze this variation in pressure, a piezoelectric transducer is essential, since the pressure is mechanical and can drive a piezoelectric sensor to produce an electric signal.

5 Design of Pressure Transducer for Diagnosis of Carpal Tunnel Syndrome
The design of the pressure transducer is a crucial element in the diagnosis of carpal tunnel syndrome. It should be wearable and comfortable so that patients can wear it for a long time and proper real-time recordings can be obtained. For proper diagnosis, it is suggested to record the variation in pressure on the median nerve while a person performs daily tasks. Presently, pressure transducers for medical applications consist of a liquid-filled elastic measuring cell enclosed by electrodes. When the wrist moves, it causes enlargement or compression of the median nerve, which produces pressure. The wall of the elastic cell changes shape with respect to the hydraulic pressure inside the cell, so the volume of the cell changes. This changes the distribution of the dielectric filling the cell, which in turn changes the capacitance of the electrode arrangement and yields a pressure-dependent electric output signal [7]. The whole process can be understood easily from the following flow diagram (Figs. 5 and 6).

Mathematical Aspects
Initial pressure = P1; final pressure = P2
Initial cross-sectional area of cell = A1; final cross-sectional area = A2
Initial distance between electrodes = d1; final distance = d2
Initial relative dielectric constant = εr1; final relative dielectric constant = εr2
Initial capacitance C1 = εr1 ε0 A1/d1 F


Final capacitance C2 = εr2 ε0 A2/d2 F
Initial current = I1 A
Final current = I2 A.

Change in potential between electrodes = (1/C2) ∫ I2 dt − (1/C1) ∫ I1 dt volts.
Hence, the variation in the potential difference between the electrodes is recorded to quantify the pressure variation on the median nerve.
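The capacitance and potential-difference relations above can be checked numerically. In the sketch below, the geometry, dielectric constant, currents, and integration time are hypothetical values chosen only to exercise the formulas; with constant currents, each integral reduces to I·t:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area_m2, eps_r, gap_m):
    """Parallel-plate capacitance C = eps_r * eps0 * A / d."""
    return eps_r * EPS0 * area_m2 / gap_m

def potential_change(c1, c2, i1, i2, t):
    """Delta V = (1/C2)*integral(I2 dt) - (1/C1)*integral(I1 dt), constant currents."""
    return (i2 * t) / c2 - (i1 * t) / c1

# Hypothetical cell geometry before and after wrist movement:
c1 = capacitance(1e-4, 3.0, 1e-3)      # A1 = 1 cm^2, d1 = 1 mm
c2 = capacitance(0.9e-4, 3.0, 1.1e-3)  # deformed cell: smaller area, wider gap
dv = potential_change(c1, c2, 1e-6, 1e-6, 0.01)  # 1 uA charging for 10 ms
# Deformation lowers the capacitance, so the recorded potential difference rises.
```

The sign and magnitude of dv are what the readout electronics map back to pressure on the median nerve.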

Fig. 5 Flow chart of sensor

Fig. 6 Capacitive measurement principle of a pressure transducer


6 Data Acquisition and Signal Processing
The technique explained above is used to record the pressure in the carpal tunnel. The next task is to process and transmit this information for analysis. For this purpose, a data acquisition procedure takes the signal from the biosensor, measures the relevant physical conditions, and converts the resulting output samples into digital numeric data that can be inspected by a PC to provide parameters for investigation. Since, after processing, it is essential to display the data in a user-friendly way, a graphical user interface (GUI) can also be developed. After the appropriate period of implantation of the biosensor on the wrist, it is removed from the person. All the data is recorded in the memory connected to the sensor, and the data from memory is then processed for analysis using the following steps:

1. Process the signal received from the pressure transducer implanted on the wrist.
2. Perform data acquisition on the sampled signal.
3. Convert it into digital numeric data.
4. Pass it to the processor (mobile application, computer, etc.).

Thus, the interface between application-based software and the data acquisition system completes an ideal package for biomedical diagnosis of the carpal tunnel. The device records analog biomedical signals from the pressure transducer and sends them to a computer. The signal processor frees the analog signal coming from the transducer of unwanted noise, amplifies it, and sends it to the data acquisition system (DAQ) for digital signal processing. An analog signal processor (ASP) consists of filters and amplifiers. The DAQ digitizes the signal, converting it into digital numeric values so that it can be processed by the processor [8].

6.1 Pressure Transducer
In the signal-processing chain, the pressure transducer is the main component: it takes physiological variables and converts them into an electrical signal (Fig. 7).

Fig. 7 Pressure transducer


6.2 Analog Signal Processing (ASP)
Analog signal processing (ASP) is performed immediately after the signal is received from the transducer of the biosensor. The transducer provides an output signal in electrical form. This received signal contains many unwanted harmonics, so it is first filtered. The filtered signal, now free of noise and harmonics, is then amplified, because it loses strength during filtration; the amplification stage increases the intensity of the signal. Hence, at the output of the analog signal processor, an amplified, noise-free signal is obtained that can be analyzed for proper results without ambiguity [8].
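Although the ASP itself is analog hardware, its filter-then-amplify behavior can be mimicked in software. The sketch below uses a simple moving-average low-pass filter and a fixed gain, with a synthetic waveform standing in for the transducer output; all values are illustrative:

```python
import numpy as np

def analog_front_end(signal, gain=10.0, window=5):
    """Software analogy of the ASP stage: smooth out noise, then amplify."""
    kernel = np.ones(window) / window
    filtered = np.convolve(signal, kernel, mode="same")  # crude low-pass filter
    return gain * filtered                               # amplification stage

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 2 * t)                 # slow "pressure" waveform
noisy = clean + 0.3 * rng.standard_normal(t.size)  # transducer output with noise
out = analog_front_end(noisy)
# After removing the gain, the filtered signal tracks the clean waveform
# more closely than the raw noisy one does.
```

A real ASP would use analog filter networks rather than convolution, but the filter-then-amplify ordering it implements is the same.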

6.3 Data Acquisition
It is now necessary to sample discrete numeric values out of the continuous signal. The process of sampling real-world physical quantities and converting the samples into numeric values is known as data acquisition. These values can then be analyzed by a computer: the data acquisition system converts analog waveforms into discrete values for analysis [8].
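The digitization a DAQ performs can be sketched as mapping clipped voltages onto integer ADC codes. The bit depth, voltage range, and sample values below are illustrative choices, not the specification of any particular DAQ:

```python
import numpy as np

def digitize(samples, bits=10, v_min=-1.0, v_max=1.0):
    """Map analog voltages onto integer ADC codes, as a DAQ would."""
    levels = 2 ** bits - 1                     # highest code for this bit depth
    clipped = np.clip(samples, v_min, v_max)   # out-of-range voltages saturate
    codes = np.round((clipped - v_min) / (v_max - v_min) * levels)
    return codes.astype(int)

analog = np.array([-1.2, -0.5, 0.0, 0.5, 1.0])  # volts, illustrative
codes = digitize(analog)
# codes are integers in [0, 1023]; -1.2 V is clipped to the bottom of the range.
```

These integer codes are the "digital numeric values" the processor receives for analysis.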

6.4 Processor
The processor is the component of the signal-processing chain responsible for analyzing the signal received from the DAQ. It is hardware that completes the task in the way it is programmed.

7 Conclusion
The present study discussed carpal tunnel syndrome and its diagnosis. The article reviewed biosensors and their future prospects in the form of wearable and implantable sensors. The use of a pressure transducer in the detection of strain in the median nerve passing through the carpal tunnel was explained, and a low-cost pressure transducer was proposed for the diagnosis of carpal tunnel disorder. Further work can be done on the design of a device that can be worn on the wrist to indicate, when the strain in the median nerve becomes too high, that the position of the wrist should be changed. The processor used for analysis is programmed to estimate the likelihood of carpal tunnel disorder based on inputs from the biosensors.


References
1. Y. Yoshii, W. Tung, T. Ishii, Strain and morphological changes of median nerve after carpal tunnel release. J. Ultrasound Med. 36(6), 1153–1159 (2017). https://doi.org/10.7863/ultra.16.06070
2. http://nptel.ac.in/courses/118106019/38
3. Y.-Y. Liao, W.-N. Lee, M.-R. Lee, W.-S. Chen, H.-J. Chiou, T.-T. Kuo, C.-K. Yeh, Carpal tunnel syndrome: US strain imaging for diagnosis. Radiology 275(1), 205–214 (2015). https://doi.org/10.1148/radiol.14140017
4. https://orthoinfo.aaos.org/en/diseases-conditions/carpal-tunnel-syndrome/
5. J. Ponmozhi, C. Frias, T. Marques, O. Frazão, Smart sensors/actuators for biomedical applications: review. Measurement 45(7), 1675–1688 (2012). https://doi.org/10.1016/j.measurement.2012.02.006
6. Patent: diagnostic apparatus and method for evaluation of carpal tunnel syndrome
7. S. Kartmann, P. Koltay, R. Zengerle, A. Ernst, Pressure transducer for medical applications employing radial expansion of a low-cost polymer tube. Procedia Eng. 120, 1213–1216 (2015). https://doi.org/10.1016/j.proeng.2015.08.832
8. K.M.R. Anik, M. Oyon, J. Hossain, A.K.M. Azad, M. Alam, Data processing through biosensors and development of simulation software in Windows and RT-Linux (2012). https://doi.org/10.13140/2.1.1995.9365

Applications of Artificial Intelligence Techniques for Cognitive Networks

G. Yashasree, Davanam Ganesh, M. Pavan, and K. Bindu

Abstract Cognitive radio networks are networks that are configured dynamically and are used to increase the usage of underutilized channels; they always try to identify idle or free channels for transmission. Cognitive radio (CR) is an enabling technology for multiple features such as dynamic spectrum access, spectrum sharing, and dynamic allocation. To realize these applications, a number of artificial intelligence (AI) techniques can be used, including artificial neural networks (ANNs), metaheuristic algorithms, rule-based systems (RBSs), ontology-based systems (OBSs), and case-based systems (CBSs). Features such as responsiveness, security, and robustness are lending AI techniques growing importance and have made them popular in the current scenario.

Keywords Cognitive networks · Artificial intelligence · Applications

1 Introduction
Wireless communication systems play a vital role in human life, and their wide range of applications and services will change the world even more in the future. The growth and usage of wireless communication systems have increased rapidly across the world, which has led to the main problem of wireless communications: the scarcity of radio resources such as power, frequency, and time. At present, the limited availability of the frequency spectrum and the need to reuse frequencies have gained great importance. Wireless communication systems must be designed, improved, and upgraded if communications are to improve. In wireless communications, resources must be utilized properly; otherwise scarcity of resources becomes an issue that needs to be solved. The spectrum is
G. Yashasree (B) · D. Ganesh · M. Pavan · K. Bindu Department of Computer Science and Engineering, Sree Vidyanikethan Engineering College, Tirupati, AP, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_25



a major resource and should be allocated so that it is not wasted. Static spectrum allocation and dynamic spectrum allocation are the two types of allocation methods. Static spectrum allocation has given many successful applications the spectrum they needed, but a situation arose in which the available spectrum became insufficient for emerging technologies, as the existing applications left no space for them, even while parts of the spectrum remained unutilized. The main concept behind cognitive radio is the optimal reuse of this unutilized spectrum, and the problem can be reduced by using cognitive radio networks. A cognitive radio has the capability to be aware of its environment; it can modify or control its transmissions based on channel variations, noise, and much more. Mitola was the first to introduce cognitive radio. A CR needs adaptability, sensing, flexibility, and agility. Cognitive radio networks are formed by connecting cognitive radios together. These cognitive radios sense the spectrum and try to find the white spaces in it: the parts of the spectrum that are free and unutilized. The cognitive radio networks take on the job of assigning the white spaces. There are two types of users in cognitive radio networks using a licensed spectrum: primary users, who hold a license, and secondary users, who do not. Priority is given to primary users over secondary users. The main issue in cognitive radio networks is providing primary and secondary users with proper communication and managing spectrum utilization between them. The white spaces are the regions where there are no primary users; the secondary users use these white spaces to communicate with each other freely.
Cognition, awareness, and adaptability are the basic features of cognitive radio networks. The performance of a system can be made optimal only by processing the information obtained from the environment, a process known as cognition. Sensing the environment to find whether spectrum is available for communication is known as awareness. Adjusting the parameters without making any changes to the hardware is known as adaptability. A cognitive radio system is a dynamic system that can reconfigure its parameters. Issues such as the utilization of unused spectrum and the limited availability of spectrum can be solved using cognitive radio's dynamic spectrum access property. Because these networks are dynamic, open, and flexible, there are chances that they can be exploited: they are vulnerable to attacks, and providing them with security is a big and complicated task. The cognitive cycle is represented in Fig. 1.

1.1 Artificial Intelligence
Artificial intelligence (AI), also known as machine intelligence, is the process of imitating human intelligence with computer systems. AI is also used to describe machines that mimic cognitive functions of the human brain such as learning and


Fig. 1 Cognitive cycle

problem-solving. An AI system analyzes its surroundings and takes actions that increase its success rate or optimize its behavior. Artificial intelligence mainly focuses on three cognitive skills: learning, reasoning, and self-correction. AI has rapidly gained importance due to its fast processing capability, its ability to make more accurate predictions than humans, and so on. The applications of AI include speech recognition, machine learning, natural language processing, automation, robotics, and many more.

1.2 AI for Cognitive Radio Networks
Cognitive radio is the emerging technology for providing many applications such as self-organizing networks, spectrum markets, and dynamic spectrum access. Cognitive radio designs can be implemented using AI techniques, which help improve factors such as the security, robustness, stability, and responsiveness of cognitive radio. The three main things that should be in a cognitive radio are observation, reconfiguration, and cognition.
• Observation: the process of collecting the characteristics of the radio and information about the environment.
• Reconfiguration: the process of changing a radio's operating parameters.


• Cognition: the process of analyzing the radio characteristics and environment, deciding on actions, and noting the impact of these actions on the radio.
These three can be combined to form an element called the cognitive engine (CE). In a CR, the tasks of cognition are handled by this engine. The process starts by taking input from the environment; the CE then analyzes and classifies the situation, determines a response, and makes a decision. AI techniques can also be used to protect CRNs against attacks.
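The observation, cognition, and reconfiguration steps above can be sketched as a simple loop; the channel names, per-channel busy probabilities, and dictionary representation of the radio below are entirely hypothetical:

```python
import random

def sense(environment):
    """Observation: measure which channels currently appear busy."""
    return {ch: random.random() < load for ch, load in environment.items()}

def decide(busy):
    """Cognition: pick the first channel that appears idle, if any."""
    idle = [ch for ch, b in busy.items() if not b]
    return idle[0] if idle else None

def reconfigure(radio, channel):
    """Reconfiguration: retune the radio's operating channel."""
    radio["channel"] = channel

random.seed(1)
environment = {"ch1": 0.9, "ch2": 0.2, "ch3": 0.5}  # busy probability per channel
radio = {"channel": None}
for _ in range(3):  # a few turns of the cognitive cycle
    choice = decide(sense(environment))
    if choice is not None:
        reconfigure(radio, choice)
```

A real cognitive engine would also learn from the impact of each retune; this loop only shows how the three stages chain together.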

2 Attacks on CRN
Attacks on cognitive radio networks are usually classified by the layer at which they occur. Primary user emulation attacks, jamming attacks, etc. occur at the physical layer. At the data link layer, the spectrum sensing data falsification attack can take place. Sinkhole attacks and Hello flood attacks occur at the network layer, and at the transport layer, the key depletion attack can occur.

2.1 Primary User Emulation Attack
The secondary users present in a spectrum gain equal priority over that spectrum in the absence of a primary user. A malicious secondary user imitates the characteristics of a primary user so that it can access the spectrum exclusively. In the end, the attacker is able to occupy all the available bands.

2.2 Objective Function Attack
A cognitive radio can change its radio parameters based on the present scenario. Attackers can exploit the objective function so that the radio learns its parameters wrongly, and the radio's own objective is then not achieved. Setting threshold values for the parameters can help avoid this attack.

2.3 Jamming Attack
A denial-of-service situation can be created using a jamming attack. Here, the attacker tries to interfere with or stop the users' communication in a network. This can be


achieved by sending packets continuously in a session so that the users cannot send or receive data, at which point a denial of service has been accomplished.

2.4 Spectrum Sensing Data Falsification Attack
The Byzantine attack is another name for the SSDF attack. In this attack, the data collector receives false spectrum sensing reports sent to it by the attacker. Acting on these results, the data collector takes a wrong decision, opening a window for an attack.
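One defense against SSDF listed in Table 1 is the q-out-of-m fusion rule, in which the data collector accepts a sensing verdict only if at least q of the m reports agree. A minimal sketch, with made-up report counts:

```python
def q_out_of_m(reports, q):
    """Fusion center declares the primary user present only if
    at least q of the m sensing reports vote 'present'."""
    return sum(reports) >= q

# m = 7 secondary users; two falsified "present" reports from attackers.
honest = [False] * 5     # honest users sense the channel as free
falsified = [True] * 2   # Byzantine users lie
decision = q_out_of_m(honest + falsified, q=4)  # needs 4 of 7 votes
# With q = 4, two falsified reports cannot force a false "present" decision.
```

Raising q makes the fusion more robust to falsified reports, at the cost of potentially missing a genuinely present primary user; the cited scheme tunes this trade-off.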

2.5 Hello Attack
The attacker tries to convince all users that it is the only available route for sending their data. This is done by sending a message with high power to the remaining nodes in the network, so the nodes assume that the attacker is their neighbor and send it their data for transmission. The data is now in the hands of the attacker, and the users lose it.

2.6 Sinkhole Attack
In this attack, an attacker advertises itself as the best route available in the network. The neighbors then use this route to transfer their data, and the attacker can alter the packets, which are now in its possession.

2.7 Key Depletion Attack
The sessions used for communication at the transport layer are few in number due to time-consuming round trips, etc. Protocols such as Secure Sockets Layer and Transport Layer Security use cryptographic keys in these sessions. Attackers steal these keys to stop the process of sending or receiving data in communications. A secure way to avoid this attack is to use re-ciphering.


2.8 Lion Attack
Attackers use the lion attack on the physical layer to degrade the performance of the transmission control protocol (TCP) at the transport layer; the aim is to reduce TCP throughput. The PUE attack can be combined with the lion attack to reduce TCP throughput further. This paper presents a view of the different techniques used for implementing cognitive radio designs and for providing security against attacks on cognitive radios, such as the sensing data falsification attack and the primary user emulation attack. The attacks and their existing solutions are given in Table 1.

3 AI Techniques for Cognitive Radio
A cognitive radio is mainly made up of three capabilities: awareness, reasoning, and learning. The role of awareness is to analyze the characteristics of the radio and its environment. Reasoning is the process of finding an appropriate response with attributes such as maximum robustness and lowest-cost communication; reasoning also tries to ensure that resources are shared with other devices in the network. Learning is the collection of the results observed after an action is applied. Together, awareness, reasoning, and learning increase the performance of the CR. Awareness is the basis of learning and reasoning and the first step of the CR process, and reasoning can be improved by learning. This section presents some AI techniques for CR, ordered by their historical development.

3.1 Artificial Neural Networks (ANNs)

In 1943, the first artificial neuron model was introduced for the study of the human brain by W. McCulloch and W. Pitts. An ANN is a collection of nonlinear functions with adjustable parameters, used to produce a required output. ANNs are classified by their training methods and network configurations. A network is formed by interconnecting neurons, and each neuron produces its output by taking inputs from other neurons. The ANNs most applicable to CR are discussed below.

Multi-layer linear perceptron networks (MLPNs). An MLPN consists of several neurons arranged in layers, where every layer computes a linear combination of the outputs of the preceding layer. The weights are randomly assigned to the neurons before training; later, a genetic algorithm (GA), backpropagation (BP), or a combination of methods can be used to adjust the weights. Hybrid training methods can be used to obtain the best features.
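As an illustrative sketch (not from the paper), a small layered perceptron network of the kind described above can be written in a few lines; the layer sizes and the tanh nonlinearity are arbitrary choices:

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass through a small multi-layer perceptron.

    Each hidden layer applies a tanh nonlinearity to a linear
    combination of the previous layer's outputs."""
    a = np.asarray(x, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(W @ a + b)            # hidden layers: nonlinear
    return weights[-1] @ a + biases[-1]   # linear output layer

# Toy network: 3 inputs -> 4 hidden units -> 2 outputs, random weights.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
biases = [np.zeros(4), np.zeros(2)]
y = mlp_forward([0.1, -0.2, 0.3], weights, biases)
print(y.shape)  # (2,)
```

In training, the weights above would be adjusted by backpropagation or a GA, as the text notes.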

Applications of Artificial Intelligence Techniques …


Table 1 Attacks and their existing solutions

| Attack | Solution | Contribution |
|---|---|---|
| Primary user emulation attack | Yardstick-based threshold allocation | Detection, defense against primary user emulation attack in dynamic cognitive radio networks. N. Sureka, K. Gunaseelan (2019) |
| Objective function attack | Differential-game-based approach | A differential game based approach against objective function attack in cognitive networks. G. Feng, J. Lin, Q. Zhao, H. Wang, H. Lyu (2018) |
| Jamming attack | Boundary node detection | Tracing jammed area in wireless ad-hoc network using boundary node detection. S. Saurabh, R. Rustogi (2018) |
| Spectrum sensing data falsification attack | q-out-of-m rule scheme | The detection of the spectrum sensing data falsification attack in cognitive radio ad hoc networks. I. Ngomane, M. Velempini, S.V. Dlamini (2018) |
| Hello attack | Symmetric key algorithms | Countermeasures to security threats/attacks on different protocol layers in cognitive radio networks. S. Raj, O.P. Sahu (2017) |
| Sinkhole attack | Hop-count-based sinkhole attack detection algorithm | HCOBASAA: countermeasure against sinkhole attacks in software-defined wireless sensor cognitive radio networks. L. Sejaphala, M. Velempini, S.V. Dlamini (2018) |
| Sybil attack | Session key certificate | Sybil attack detection technique using session key certificate in vehicular ad hoc networks. D. Srinivas Reddy, V. Bapuji, A. Govardhan, S.S.V.N. Sarma (2017) |
| Lion attack | Particle swarm optimization (PSO) | Countermeasures to security threats/attacks on different protocol layers in cognitive radio networks. S. Raj, O.P. Sahu (2017) |


Nonlinear perceptron networks (NPNs). Nonlinearity can be introduced into a network by squaring inputs, cross-multiplying two inputs, etc. Training the neuron weights with backpropagation makes the process slow, and much time is required to reach accurate results. A further drawback is that the trained network configuration represents only data similar to the data used in training.

Radial basis function networks (RBFNs). Similar to an NPN, an RBFN has, with respect to a centre in its hidden layer, a radial nonlinear function with a built-in distance criterion. The gradient descent method is mostly used for training. The common problem of perceptron networks, settling into local minima, can be avoided here. A Gaussian can be used as the radial function, but Euclidean distance can also be used.

Advantages:
• Information can be stored in the network.
• Capable of working even with incomplete knowledge.
• Well suited to classification.
• High fault tolerance.

Disadvantages:
• The proper structure of a network is hard to determine.
• Hardware dependence.
• It is difficult to present the problem to the network.

Applications of ANNs in CRNs:
• ANNs can learn the attributes, patterns and features of a system.
• Things that are difficult to analyse and formulate, such as classes, functions and processes, can be described using an ANN.
• Stimuli characterisation or classification can also be done with an ANN.
• An ANN can be used to adapt the parameters of a radio.
• In a CR, an ANN can be used to sense the spectrum.
• Frequency features can be taken into account and used for classifying signals.
• An ANN-based algorithm can be developed for spectrum sensing in wireless mesh networks.
• Transmission patterns can be classified using an ANN.

3.2 Metaheuristic Algorithms

The term metaheuristic was first used in 1986. Search algorithms based on explicit mathematical relations are of limited use for finding optimal parameters; metaheuristic algorithms can be used to solve such computationally hard problems. The main types of metaheuristic algorithms are as follows.

Evolutionary algorithms/GAs. Genetic algorithms are a specific class of evolutionary algorithms. The basic elements of a genetic algorithm are chromosomes and a fitness function. Candidate solutions are represented abstractly as chromosomes. The fitness function measures how desirable a candidate solution is; the values it returns, known as fitness levels, are used to evaluate candidate solutions and examine their performance. The set of solutions chosen for a problem in a GA is called the population. From this population, some solutions are selected according to their fitness levels, and a new population is created by two basic operations: reproduction, which combines solutions, and mutation, which introduces new variation. The newly created population becomes the population of the next iteration; solutions that do not qualify are removed or replaced by new ones.

Simulated annealing (SA). The main idea behind simulated annealing comes from the annealing process in metallurgy, where a heated material is cooled slowly so that defects are removed and the material crystallises well. The search space, broad at the beginning of the algorithm, shrinks as the algorithm moves towards lower-energy regions. At each step, the SA algorithm decides, by examining the neighbours of its present state, whether to stay in the same state or to move to another. The process stops when the system reaches a state that is good enough for the application.

Tabu search (TS). Memory structures called tabu lists are the basic elements of tabu search. The tabu list prevents a move from being repeated. When a large number of acceptable solutions exist in a region, a penalty term discourages solutions far from the present one; this is done by prioritising solutions that share features with the current solution (intensification). Diversification is used to enlarge the search space: a term in the objective function penalises solutions close to the current one. The intensification and diversification phases alternate during the search through dynamic weights attached to them.

Ant colony optimization (ACO). The idea behind ant colony optimization is the ants' ability to find short paths when carrying food to their nest. When ants find food, they return home along the pheromone trail they have left, and other ants that encounter the trail follow it; a pheromone trail therefore leads to food. The trail evaporates gradually with time, so it is useful only if ants reach it before it evaporates, i.e. only if the path is short. ACO mimics this behaviour with agents moving around a graph that represents the problem, and it can be used to find locally productive areas.
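The genetic-algorithm loop described above (fitness evaluation, selection, reproduction by crossover, and mutation) can be sketched as follows; the bit-string encoding, the "OneMax" fitness function and all parameter values are illustrative assumptions, not taken from the paper:

```python
import random

def genetic_search(fitness, n_bits=16, pop_size=20, generations=40,
                   mutation_rate=0.02, seed=1):
    """Minimal GA: chromosomes are bit-strings; each generation keeps
    the fitter half of the population and refills it with children made
    by single-point crossover (reproduction) and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        pop = parents[:]                       # elitism: keep the fitter half
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)     # single-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_bits):            # mutation
                if rng.random() < mutation_rate:
                    child[i] ^= 1
            pop.append(child)
    return max(pop, key=fitness)

# Toy fitness: count of 1-bits ("OneMax"); the optimum is all ones.
best = genetic_search(fitness=sum)
print(sum(best))
```

In a CR setting the chromosome would instead encode radio parameters (power, frequency, modulation) and the fitness function a link-quality metric.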


Advantages:
• Broad applicability.
• Ease of implementation.
• They can solve larger problems faster.

Disadvantages:
• They cannot prove optimality and may reduce the search space.

3.3 Rule-Based System (RBS)

A rule-based system is composed of two basic elements: a list of rules known as the rule base, and an inference engine (IE), which takes the input and the rule base and performs an action. The general form of a rule is "IF conditions THEN actions". In decision making, the rule-based system takes its rules from a specific application area; an action is performed when its conditions, tested against the given inputs, are satisfied. There are two types of inference engine: forward chaining and backward chaining. In a forward-chaining IE, rules fire whenever their conditions are satisfied, and conclusions are drawn from the fired rules; the process ends when no further rule can fire. In a backward-chaining IE, a goal is set, and the rules whose conclusions match the goal are selected; the conditions of those rules become new goals, and the process ends when no new condition can be identified.

Advantages:
• Accuracy and a low error rate.
• Steady response.
• Accurate end results.

Disadvantages:
• More manual work.
• More time required.
• Low learning capacity.
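A minimal forward-chaining inference engine of the kind described above can be sketched as follows; the rule base is a hypothetical, CR-flavoured example invented for illustration:

```python
def forward_chain(rules, facts):
    """Tiny forward-chaining inference engine.  Each rule is a pair
    (conditions, action): when every condition is already a known fact,
    the action is added as a new fact.  Firing stops when a full pass
    over the rule base produces no new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, action in rules:
            if action not in facts and set(conditions) <= facts:
                facts.add(action)   # IF conditions THEN action
                changed = True
    return facts

# Hypothetical rule base for a cognitive radio.
rules = [
    ({"channel_busy"}, "defer_transmission"),
    ({"primary_user_detected"}, "channel_busy"),
    ({"defer_transmission", "spectrum_hole_found"}, "switch_channel"),
]
out = forward_chain(rules, {"primary_user_detected", "spectrum_hole_found"})
print(sorted(out))
```

Backward chaining would run the same rule base in the opposite direction, starting from a goal fact such as "switch_channel" and matching rule conclusions against it.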

3.4 Ontology-Based Systems (OBS)

The concepts of a particular domain, and the relationships that exist between them, can be represented using an ontology in a formal and shared form. An ontology can be easily understood by machines, as it takes a formal representation.


An ontology becomes useful only when it is shared within a group. In an OBS, the domain attributes are defined using the ontology. An ontology combines components such as classes, which are the domain's objects; instances, which are the things belonging to the classes; attributes, which represent the objects' properties; and the relations that exist between them. An ontology must be expressed in a language known as an ontology language. The most widely used web-based ontology languages are: XML Topic Maps, which allows relationships among multiple entities and is an ISO standard; the Resource Description Framework (RDF), which allows at most two entities per relation and is a World Wide Web Consortium standard; and the Web Ontology Language (OWL), which is a continuation of RDF. The capability of an OBS that makes it attractive for a CRN is its deduction of facts, which in an OBS are deduced logically. With an OBS in a CRN, it is easy to understand the characteristics of the radios logically; this understanding of the characteristics of a radio, or of its surrounding radios, can be used to increase performance and to modify the parameters optimally.

Advantages:
• Logical deduction capability.
• Can understand capabilities on its own.

Disadvantages:
• Perfect domain knowledge is required.
• Low efficiency.
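The subject-predicate-object structure that RDF imposes (at most two entities per relation) can be illustrated with a toy triple store; the entity and relation names here are hypothetical, not from any published ontology:

```python
# RDF-style triples: every relation links exactly two entities
# (subject, predicate, object), mirroring the RDF limit noted above.
triples = {
    ("Radio1", "is_a", "CognitiveRadio"),              # instance -> class
    ("CognitiveRadio", "has_attribute", "centre_frequency"),
    ("Radio1", "neighbour_of", "Radio2"),              # relation between instances
}

def objects(subject, predicate):
    """Look up every object related to `subject` via `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

print(objects("Radio1", "is_a"))
```

Logical deduction in an OBS amounts to chaining such lookups, e.g. inferring attributes of Radio1 from the class it belongs to.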

3.5 Case-Based System (CBS)

A CBS is very useful because it stays close to the real world and can be used even when perfect knowledge of the domain is not available. In a CBS, a problem is solved using cases similar to the problem: the way those similar cases were solved helps in solving the present problem, and one solution is finally chosen. First, the cases similar to the present problem are selected, dropping one case at each step until a single case remains from which the solution is built. The process used to solve that case can then be reused, and the case parameters can be derived from it. Optimization problems can also be solved this way, reducing the time needed to select optimal parameters. Every time a new solution is obtained, it is added to the case database. The drawback of reusing previous cases is that, if those cases were solved wrongly, the error propagates to the present problem. Moreover, if the domain is very complex, a large database is required, and searching such a case database is not easy. In a radio network, a CBS is used to find the solution to the present problem in the environment. When no similar case is present, because not every case can be in the database, a new solution must be generated and added to the database.

Advantages:
• Can learn in the absence of domain knowledge.
• Close to real life.

Disadvantages:
• Depends heavily on previous cases, and a large memory is needed.
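The retrieve-reuse-retain cycle described above can be sketched as follows; the feature vectors, the distance threshold and the power-setting "solutions" are illustrative assumptions, not taken from the paper:

```python
def solve_with_cases(case_base, problem, threshold=2.0):
    """Case-based sketch: retrieve the stored case whose feature vector
    is closest to the new problem, reuse its solution, and retain the
    new (problem, solution) pair in the case base.  If no case is close
    enough, a new solution would have to be generated (returned here
    as None)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    best = min(case_base, key=lambda c: dist(c["features"], problem))
    if dist(best["features"], problem) > threshold:
        return None                     # no similar case: generate anew
    case_base.append({"features": problem, "solution": best["solution"]})
    return best["solution"]

# Hypothetical cases: (SNR, interference) -> transmit-power setting.
cases = [{"features": (10.0, 0.2), "solution": "low_power"},
         {"features": (2.0, 0.8), "solution": "high_power"}]
sol = solve_with_cases(cases, (9.0, 0.3))
print(sol)          # reuses the closest stored case
print(len(cases))   # the new case has been retained
```

The wrong-case drawback mentioned above is visible here: whatever solution is stored with the nearest case is reused uncritically.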

4 Applications of AI in Cognitive Radio Networks

Both AI and cognitive radio networks have various applications in today's world. Some of them are listed below.

4.1 Military

In the military, Artificial Intelligence and cognitive radio networks are utilized for command and control, communications, sensors, integration and interoperability, chemical, biological, radiological and nuclear attack detection, battlefield monitoring, marking of enemy positions, threat detection and identification, etc.

4.2 Healthcare

Cognitive radio networks and Artificial Intelligence can be combined in medical areas for patient monitoring, immediately notifying doctors about patient parameters such as sugar level, blood pressure and blood oxygen, and in clinical decision support systems for medical diagnosis.

4.3 Transportation

Cognitive radio integrates heterogeneous wireless networks, and together with artificial intelligence it will be used to achieve intelligent communications in future railway systems. AI is also used in fuzzy controllers developed for automatic gearboxes in automobiles.


4.4 Emergency and Public Safety

The application of CR networks and AI to emergency and public safety communications by utilizing white space is gradually gaining importance at present.

4.5 Automotive Industry

AI is used in a number of ways in the automotive industry. Some manufacturers use AI to provide virtual assistants to their users for better performance. Our journeys can be made safer and more secure by self-driving cars, which are currently in the development phase. As an example, Tesla has introduced TeslaBot, an intelligent virtual assistant.

4.6 Robotics

Artificial Intelligence now plays a vital role in many areas, and robotics is one of them. General robots are programmed to perform repetitive tasks; with the help of AI, intelligent robots that perform tasks from their own experience, without being pre-programmed, can now be created very flexibly. Humanoid robots are among the best creations of AI in robotics; recently, the intelligent humanoid robots Erica and Sophia were developed, which can talk and behave like humans.

4.7 Bandwidth-Intensive Applications

Cognitive radio networks are very suitable for bandwidth-hungry applications, such as on-demand or live video streaming, audio and many more. In such high-bandwidth multimedia applications, a CRN can be used to obtain good results.

4.8 Real-Time Surveillance Applications

High reliability is the main requirement of real-time surveillance applications, and it can be achieved using CRNs and AI. Real-time surveillance applications where a CRN can play a major role include bridge or tunnel monitoring, habitat monitoring, traffic monitoring, irrigation monitoring, vehicle tracking, environmental monitoring, inventory tracking, biodiversity mapping, and monitoring of environmental conditions that affect crops and livestock.


4.9 Indoor Applications

A large CRN environment is required for many indoor applications to achieve reliable communication. Examples of indoor applications of AI and cognitive radio networks are home monitoring systems, intelligent buildings, etc.

References

1. H.T. Reda, A. Diro, N. Chilamkurti, S. Kallam, Firefly-inspired stochastic resonance for spectrum sensing in CR-based IoT communications. Neural Comput. Appl., Springer, 10 November (2019)
2. N. Sureka, K. Gunaseelan, Detection and defense against primary user emulation attack in dynamic cognitive radio networks, in Fifth International Conference on Science Technology Engineering and Mathematics (ICONSTEM) (2019), pp. 505–510
3. G. Feng, et al., A differential game based approach against objective function attack in cognitive networks. Chinese J. Elect. 27(4), 879–888 (2018)
4. S. Saurabh, R. Rustogi, Tracing jammed area in wireless ad-hoc network using boundary node detection, in IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS) (2018), pp. 1–4
5. I. Ngomane, M. Velempini, S.V. Dlamini, The detection of the spectrum sensing data falsification attack in cognitive radio ad hoc networks, in Conference on Information Communications Technology and Society (ICTAS) (2018)
6. S. Raj, O.P. Sahu, Countermeasures to security threats/attacks on different protocol layers in cognitive radio networks, in International Conference on Smart Technology for Smart Nation (2017), pp. 1076–1082
7. L. Sejaphala, M. Velempini, S.V. Dlamini, HCOBASAA: countermeasure against sinkhole attacks in software-defined wireless sensor cognitive radio networks, in International Conference on Advances in Big Data, Computing and Data Communication Systems (ICABCD) (2018)
8. D. Srinivas Reddy, V. Bapuji, A. Govardhan, S.S.V.N. Sarma, Sybil attack detection technique using session key certificate in vehicular ad hoc networks, in International Conference on Algorithms, Methodology, Models and Applications in Emerging Technologies (ICAMMAET) (2017)
9. K.K. Bae, et al., A survey of artificial intelligence for cognitive radios. IEEE Trans. Veh. Technol. 59(4) (2010)

Design of Pitch Attitude Hold Mode for Commercial Aircraft Using Extended State Observer

Princy Randhawa and Tushar Pradeep Basakhatre

Abstract This study formulates the vertical control law design for various modes of autopilot. Many control techniques have been developed to overcome the uncertainties, nonlinearities, disturbances and un-modelled dynamics of an aircraft. Designing a longitudinal-mode autopilot that assures stability and performance throughout the service envelope is a tedious task. To increase the overall performance and stability of the system and to overcome disturbances, an Active Disturbance Rejection Control (ADRC) scheme is used, built around an extended state observer (ESO) design that treats all the un-modelled dynamics and uncertainties as an extended state within the state-space representation. MATLAB Simulink simulation results are also analysed.

Keywords Extended state observer · PID · Controller · Autopilot · Pitch attitude hold mode

1 Introduction

In an aircraft there are many uncertainties in the aerodynamic parameters, such as nonlinearities, external disturbances, measurement inaccuracies, etc. Numerous control-law design techniques exist for increasing the performance of the system, such as PID, Kalman filtering, fuzzy logic and neural networks. The PID controller is simple and dominant in industry [1]. The PID technique lets engineers tune the parameters for various systems easily and rapidly without knowledge of the plant dynamics. Nevertheless, the ever-increasing demands for accuracy, robustness and effectiveness, coupled with the characteristic limitations of PID, have compelled engineers to seek better control mechanisms elsewhere. The drawbacks of the PID controller are as follows [2–4]:

P. Randhawa · T. P. Basakhatre (B) Department of Mechatronics, Manipal University Jaipur, Jaipur, India e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_26


1. Computation error.
2. Degradation of noise by the derivative control.
3. Performance loss in the control law in the form of a linear weighted sum.
4. Complexity introduced by the integral control.
5. No disturbance rejection.

To overcome the trade-offs of PID optimisation, a robust control law, the Active Disturbance Rejection Control (ADRC) technique, has been used to increase stability and disturbance rejection. ADRC estimates the internal states of the system from its input and output [5, 6]. The essence of ADRC is to treat the unknown dynamics as an extended state of the plant, estimate it using an extended state observer (ESO), and compensate for it in real time. Previously, the state observer was regarded as a dynamic part of controller design; the design can be based on an identified model, or on an unidentified or partially identified one [7]. To design an accurate system, the plant dynamics must be known so that the state observer can make correct, accurate estimates. Nevertheless, most dynamic systems cannot be modelled accurately without estimation of the internal states; in reality, disturbances, noise, uncertainties, nonlinearities and many other aspects play a crucial role in the system [3]. To solve this problem, another observer, the extended state observer, is used, which overcomes the problem of noise and uncertain parameters. The next section describes the pitch attitude hold mode feedback loop, which is used as an inner-loop controller in the autopilot design in the presence of disturbances, turbulence, etc. [2, 8].

1.1 Pitch Attitude Hold Mode

The pitch attitude hold (PAH) mode is the basic longitudinal autopilot mode; it controls the pitch angle by applying appropriate elevator deflections whenever the actual pitch angle differs from the desired reference value [9–11]. The pitch angle θ is fed back to damp the phugoid mode and to ensure that the desired pitch angle is maintained [12–16]. Figure 1 shows the pitch attitude hold mode, which consists of two loops: an outer loop (feedback of the pitch angle) and an inner loop (feedback of the pitch rate). The transfer function of the output variable u with respect to the elevator deflection is [17]:

u(s)/δe(s) = (1.721×10⁻¹⁵ s³ − 0.2504 s² − 0.584 s + 0.0101) / (s⁴ + 0.2388 s³ + 0.88295 s² + 0.0050 s + 0.0008)

Using this model, the controller has been designed to reduce disturbances by tuning the parameters.
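As a quick sanity check on the transfer function above, its steady-state (DC) gain follows from evaluating the numerator and denominator polynomials at s = 0; this small sketch (not from the paper) uses the quoted coefficients:

```python
# Coefficients of the pitch-channel transfer function quoted above,
# highest power of s first.
num = [1.721e-15, -0.2504, -0.584, 0.0101]    # s^3 ... s^0
den = [1.0, 0.2388, 0.88295, 0.0050, 0.0008]  # s^4 ... s^0

def polyval(coeffs, s):
    """Evaluate a polynomial by Horner's rule (highest order first)."""
    value = 0.0
    for c in coeffs:
        value = value * s + c
    return value

# DC gain = num(0) / den(0) = 0.0101 / 0.0008.
dc_gain = polyval(num, 0.0) / polyval(den, 0.0)
print(round(dc_gain, 3))
```

The positive DC gain combined with the negative leading numerator terms hints at the non-minimum-phase behaviour typical of elevator-to-speed dynamics.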

Design of Pitch Attitude Hold Mode for Commercial …


Fig. 1 Pitch attitude hold mode [7, 17]

1.2 State Observer Design

The idea of the extended state observer (ESO) was first suggested by Han [18]. An ESO can provide accurate estimates of the outputs and dynamics of a system by adjusting a few parameters. Later, Gao [3] reduced the several tuning parameters to a single one to lower the complexity of the design. Figure 3 shows the basic structure of ADRC for a plant. The notion of ADRC is shown through a typical parameterised ADRC control algorithm for a second-order system [19]. Consider a general second-order plant in differential form:

ÿ = A₁y + A₂ẏ + D + Bu,  A₁ = −2ξω, A₂ = ω²

Splitting B = (B − B₀) + B₀ gives

ÿ = A₁y + A₂ẏ + D + (B − B₀)u + B₀u = X + B₀u

where X = A₁y + A₂ẏ + D + (B − B₀)u is the total disturbance, containing the unknown internal dynamics and the external disturbance D. The ESO provides a means of estimating X and y in real time. To build the observer, the plant is expressed in augmented form with x₁ = y, x₂ = ẏ and the extended state x₃ = X:

ẋ₁ = x₂
ẋ₂ = x₃ + B₀u
ẋ₃ = Ẋ
y = x₁

where x = [x₁, x₂, x₃]ᵀ ∈ ℝ³ is the state of the augmented system.


Writing the augmented plant in state-space form gives

ẋ = Px + Qu + RẊ,  y = Sx

P = ⎡0 1 0⎤   Q = ⎡0 ⎤   R = ⎡0⎤   S = [1 0 0]
    ⎢0 0 1⎥       ⎢B₀⎥       ⎢0⎥
    ⎣0 0 0⎦       ⎣0 ⎦       ⎣1⎦

The state observer for this system, the extended state observer (ESO), is

ż = Pz + Qu + L(y − ŷ)
ŷ = Sz

where ŷ is the estimate of the system output y and L is the observer gain vector, which can be obtained by a pole-placement technique:

L = [α₁ α₂ α₃]ᵀ

The three parameters α₁, α₂, α₃ need to be adjusted in the observer; as the order of the plant and observer increases, the number of parameters to be tuned also increases. Gao et al. developed the ω₀-parameterisation technique to simplify the tuning of the observer. This parameterisation places all observer poles at −ω₀, so every observer parameter becomes a function of ω₀ alone, where ω₀ is the bandwidth of the observer:

λ(s) = s³ + α₁s² + α₂s + α₃ = (s + ω₀)³
α₁ = 3ω₀, α₂ = 3ω₀², α₃ = ω₀³

The parameterisation technique can be extended to an nth-order plant, and the only parameter to adjust is the bandwidth ω₀. If the observer gains are selected in this way, the observer provides an estimate of the full state:

⎡ż₁⎤   ⎡−3ω₀  1 0⎤⎡z₁⎤   ⎡0   3ω₀ ⎤
⎢ż₂⎥ = ⎢−3ω₀² 0 1⎥⎢z₂⎥ + ⎢b₀  3ω₀²⎥ ⎡u⎤
⎣ż₃⎦   ⎣−ω₀³  0 0⎦⎣z₃⎦   ⎣0   ω₀³ ⎦ ⎣y⎦

Note that the P and Q matrices are constant and do not depend on full knowledge of the plant. This feature allows the observer gains to be chosen without such knowledge, and for this reason the ESO is less sensitive to variations in plant parameters than conventional observers. The possibly unknown dynamics f, the extended state of the system, is asymptotically cancelled out online in the control action, and as a consequence the closed-loop system is reduced to a linear structure driven by the estimated feedback [5, 6, 20]. The estimated feedback is designed to guarantee the stability of the closed-loop system and to render a highly desirable performance when saturation is not accounted for at the plant's input. The aim of this paper is to provide a better solution to the stabilisation problem of the nonlinear system by using the extended-state-observer-based control law in comparison with a PID controller.

2 Result Analysis

This section shows the test results of the ESO for the pitch attitude hold mode. The purpose here is to demonstrate the validity of the ESO using analysis and experimentation. Figure 2 shows the block diagram of the ADRC design for the plant.

2.1 Tuning, Design and Response of ADRC

As shown in Fig. 3, a sine-wave disturbance with a frequency of 0.05 Hz is applied to the output of the plant. The output is fed as an input to the extended state observer. To cancel the incoming disturbance, the bandwidth parameter W1 is tuned. The ESO design parameters are the bandwidth (W) and the scaling factor (B); tune B1 (scaling factor) and W1 (bandwidth) according to the plant model to get the desired response. The ESO has the advantage of using one controller with all the plant models. After analysis, the ESO (without disturbance) gave a better response than the conventional PID controller with respect to settling time, overshoot, etc. Figure 4 shows the nominal response of the ESO for all the plant models with one PD controller (Fig. 5; Table 1). Figure 5 shows the response for the unstable plant; with the ESO, the unstable plant response also improved in terms of overshoot and settling time.

Fig. 2 ADRC structure

Fig. 3 ADRC design of a plant

2.2 Disturbance Rejection Characteristics with ESO

The ESO has the capability to reject disturbances actively, as compared to the conventional PID [21, 22]. This is illustrated in the following tables and graphs. The ESO requires only one system parameter to tune, the observer bandwidth (W), which was explained earlier (Sect. 1.2). Using these parameters, tests have been conducted to compute the disturbance-rejection error for the ESO; Tables 2 and 3 show the results. In Fig. 6, the first graph shows the noise rejection with the ESO and the second graph shows the incoming noise at the input. The incoming noise amplitude is 1 and the ESO amplitude is 0.5; since the ESO amplitude is less than that of the incoming noise signal, the ESO has the capability to reject disturbances (Fig. 7). Figure 8 shows the response for two plant models with the ESO under disturbance: with the same tuned system parameter, rejection works well at the lower frequencies.


Fig. 4 ESO response for pitch attitude hold mode

Fig. 5 ESO response for pitch attitude hold mode for unstable plant



Table 1 Experimental results of ESO and PID

| Parameter | ESO | PID |
|---|---|---|
| Settling time (s) | 2 | 9 |
| Overshoot (%) | No overshoot | 8 |

Table 2 Experimental results of noise rejection for ESO (plant 5). Disturbance rejection, bandwidth (W) = 25

| S. No. | Frequency (Hz) | Model 5 (dB) | Elevator deflection (δe) degrees |
|---|---|---|---|
| 1 | 0.05 | 6.0206 (full rejection) | −0.95 to +0.1 |
| 2 | 0.1 | 5.9989 | −0.9 to +0.1 |
| 3 | 0.5 | 5.2014 | −0.55 to +0.2 |
| 4 | 1 | 5.5751 | −0.6 to +0.652 |
| 5 | 2 | 4.0824 | −0.3 to +1.7 |
| 6 | 3 | 1.5836 | −3.9 to +4.1 |
| 7 | 4 | −0.82785 | −5.8 to +6.1 |
| 8 | 5 | −6.0206 | −8.1 to +8 |

Plant 5, Kp (proportional gain) = 3, Kd (derivative gain) = 3, scaling factor (B) = 2

Table 3 Experimental results of noise rejection for ESO (plant 7). Disturbance rejection, bandwidth (W) = 25

| S. No. | Frequency (Hz) | Model 7 (dB) | Elevator deflection (δe) degrees |
|---|---|---|---|
| 1 | 0.05 | 6.0206 (full rejection) | −0.9 to +0.05 |
| 2 | 0.1 | 5.9989 | −0.83 to +0.75 |
| 3 | 0.5 | 5.9814 | −0.55 to +0.15 |
| 4 | 1 | 5.8007 | −0.35 to +0.35 |
| 5 | 2 | 5.2014 | −1.25 to +1.25 |
| 6 | 3 | 4.6019 | −2.5 to +2.2 |
| 7 | 4 | 4.0824 | −4.5 to +4.2 |
| 8 | 5 | 1.5836 | −6.1 to +6.7 |

Plant 7, Kp (proportional gain) = 3, Kd (derivative gain) = 3, scaling factor (B) = 2

2.3 Comparison of Disturbance Rejection Between PID and ESO

That the ESO is superior to PID is demonstrated below with tests and analysis. Table 4 shows the disturbance rejection of PID and ESO for different values of frequency


Fig. 6 Comparison of ESO and PID (without disturbance)

Fig. 7 ESO response for pitch attitude hold mode with and without noise



Fig. 8 ESO response for plant 7 and plant 6

Table 4 Experimental results with ESO and PID. Disturbance rejection, bandwidth (W) = 25

| S. No. | Frequency (Hz) | ESO (dB) | PID (dB) |
|---|---|---|---|
| 1 | 0.1 | 5.9989 | 5.201 |
| 2 | 0.5 | 5.9814 | −1.9382 |
| 3 | 1 | 5.8007 | −6.0206 |
| 4 | 2 | 5.2014 | −6.9357 |
| 5 | 3 | 4.6019 | −7.9588 |
| 6 | 4 | 4.0824 | −13.979 |
| 7 | 5 | 1.5836 | −20 |

Scaling factor (B) = 2

versus noise rejection (dB). The system can tolerate more parametric uncertainty as the bandwidth increases. Here, parameters such as the bandwidth (W) are tuned with a constant scaling factor (B). At this scaling factor, higher frequencies make the elevator exceed its desired limits and give less noise rejection; thus, only the lower frequencies can be interpolated. As seen from the graph and the table, rejection at lower frequencies is greater than with PID.


2.4 Comparison of Disturbance Rejection Between PID and ESO

ESO is superior to PID in the areas of noise rejection and error computation [6, 23], as demonstrated by the test and analysis above. The investigation and simulation results validate that the PD/ESO gives better performance than the PID observer in both disturbance rejection and robustness. In addition, the PD/ESO is notably easy to tune, and the number of tuning parameters has been reduced using this parameterisation technique (Fig. 9).
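The PD/ESO structure and the bandwidth parameterization discussed above can be illustrated with a minimal numerical sketch. This is not the authors' aircraft model: it assumes a hypothetical double-integrator plant x'' = u + d with a constant input disturbance d, uses the PD gains Kp = Kd = 3 from the tables, and places all three observer poles at the bandwidth W, giving observer gains 3W, 3W^2 and W^3.

```python
# Minimal PD/ESO sketch on an assumed double-integrator plant x'' = u + d.
# The ESO estimates position (z1), velocity (z2) and the total disturbance (z3);
# the control law cancels z3, which is the core of the ESO's disturbance rejection.
def simulate_eso(wo=25.0, d=2.0, dt=1e-3, steps=8000):
    x, v = 0.0, 0.0                     # true plant states
    z1, z2, z3 = 0.0, 0.0, 0.0         # observer states
    l1, l2, l3 = 3*wo, 3*wo**2, wo**3  # all observer poles at -wo (bandwidth W)
    kp, kd = 3.0, 3.0                  # PD gains as in the tables above
    for _ in range(steps):
        u = kp*(0.0 - z1) - kd*z2 - z3  # PD control plus disturbance cancellation
        x += v*dt                       # Euler step of the plant
        v += (u + d)*dt
        e = x - z1                      # Euler step of the observer
        z1 += (z2 + l1*e)*dt
        z2 += (z3 + u + l2*e)*dt
        z3 += l3*e*dt
    return x, z3

x_final, d_hat = simulate_eso()
print(round(x_final, 2), round(d_hat, 2))  # x settles near 0, d_hat near d = 2.0
```

Raising wo (the bandwidth W) speeds up the disturbance estimate, mirroring the observation above that a larger bandwidth tolerates more parametric uncertainty, at the cost of amplifying sensor noise.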

Fig. 9 Comparison of ESO and PID response


3 Conclusion and Future Work

Modern control design techniques are used for noise reduction and disturbance rejection in the pitch attitude hold mode. A similar approach can be used to design other modes using ESO, and higher-order equations can be used in the ESO for better results. The ESO can be optimised to obtain one controller with a single ξ and ω value for all the models. We used only nine models as an orthogonal array, which is a restriction. More models can be designed for different phases of flight, defining different sets/subsets of experiments with different damping ratios and natural frequencies, i.e. ξ1, ξ2 and ω1, ω2 values. The longitudinal autopilot design is accomplished in the presence of atmospheric disturbances or turbulence. The pitch attitude hold mode inner feedback loop reduces the effect of atmospheric disturbances using the ESO technique.

References

1. J. Roskam, Airplane Flight Dynamics and Automatic Flight Control (DAR Corporation, USA, 1998)
2. S. Zhong, Y. Huang, L. Guo, A parameter formula connecting PID and ADRC. Sci. China Inf. Sci. 63(9), 1–13 (2020)
3. Z. Gao, Scaling and bandwidth-parameterization based controller tuning, in Proceedings of the 2003 American Control Conference, vol. 6, 4–6 June 2003, pp. 4989–4996
4. A. Elsayed, A. Hafez, A.N. Ouda, H.E.H. Ahmed, H.M. Abd-Elkader, Design of longitudinal motion controller of a small unmanned aerial vehicle. Int. J. Intell. Syst. Appl. 10, 37–47 (2015)
5. N. Wahid, N. Hassan, Self-tuning fuzzy PID controller design for aircraft pitch control, in Third International Conference on Intelligent Systems Modelling and Simulation (2012)
6. F.A. Salem, A.A. Rashed, PID controllers and algorithms: selection and design techniques applied in mechatronics systems design—Part II. Int. J. Eng. Sci. 191–203 (2013)
7. S. Li, J. Yang, W.H. Chen, X. Chen, Disturbance Observer-Based Control: Methods and Applications (CRC Press, 2014)
8. H. Kikkawa, K. Uchiyama, Nonlinear flight control with an extended state observer for a fixed-wing UAV, in 2017 International Conference on Unmanned Aircraft Systems (ICUAS) (IEEE, 2017), pp. 1625–1630
9. L. Sherry, M. Feary, R. Mumaw, A cognitive engineering analysis of the vertical navigation (VNAV) function. NASA report (2000)
10. P. Shrikant Rao, S. Chetty, Rapid prototyping tools for commercial aircraft, in AIAA Guidance, Navigation and Control Conference and Exhibit, Monterey, California, 5–8 August 2002
11. D. Hughes, M. Dornheim, Automated Cockpits Special Report, Parts I & II. Aviation Week & Space Technology, January 30–February 6, 1995
12. Bombardier Challenger 604 Pilot Training Guide, "Automatic flight control system", Bombardier Aerospace, Sept. 04
13. A. Degani, M. Heymann, Pilot-Autopilot Interaction: A Formal Perspective, in Eighth International Conference on Human-Computer Interaction in Aeronautics, Toulouse, France, 2000
14. D. Hughes, M. Dornheim, Automated Cockpits: Who's in Charge? Aviation Week and Space Technology, January 30–February 6, 1995
15. Garmin G1000 Cockpit Reference Guide for the Cessna Nav, Automatic flight control, March 2007


16. M. Sun, Z. Chen, Z. Yuan, A practical solution to some problems in flight control, in Proceedings of the 48th IEEE Conference on Decision and Control (CDC) held jointly with the 2009 28th Chinese Control Conference (IEEE, 2009), pp. 1482–1487
17. P. Randhawa, A. Mishra, Y. Jeppu, C.G. Nayak, N. Murthy, Mode transition logic of a vertical autopilot for commercial aircraft, in Proceedings of IX Control Instrumentation System Conference (2012)
18. J. Han, From PID to active disturbance rejection control. IEEE Trans. Ind. Electron. 56(3) (2009)
19. Y. Huang, K. Xu, J. Han, J. Lam, Flight control design using extended state observer and non-smooth feedback, in Proceedings of the 40th IEEE Conference on Decision and Control, vol. 1 (IEEE, 2001), pp. 223–228
20. R. Razali, E. Rachman, A mathematical modeling for design and development of control laws for unmanned aerial vehicle (UAV). Int. J. Appl. Sci. Technol. 1(4) (2011)
21. D.A. Caughey, Introduction to Aircraft Stability and Control, Course Notes for M&AE 5070 (Sibley School of Mechanical & Aerospace Engineering, Cornell University, New York)
22. P.T. Jung, Modeling and hardware-in-the-loop simulation for a small unmanned aerial vehicle. AIAA, 1–13
23. P. Randhawa, V. Shanthagiri, Concept of operations to system design and development—an integrated system for aircraft mission feasibility analysis using STK Engine, MATLAB and LabVIEW. Int. J. Instrum. Control Syst. (IJICS) 5(4) (2015)
24. W.H. Chen, J. Yang, L. Guo, S. Li, Disturbance-observer-based control and related methods—an overview. IEEE Trans. Ind. Electron. 63(2), 1083–1095 (2015)
25. S.E. Talole, A.A. Godbole, Robust roll autopilot design for tactical missiles. J. Guid. Control Dyn. 34(1) (2011)

UAV—A Boon Towards Agriculture Manish Verma, Sayed Imran Ali, and Gaurav Agrawal

Abstract The application of unmanned aerial vehicles (UAVs), or drones, has been trending and growing since the twentieth century. We propose the use of drones for planting trees, aiming to enable reforestation even in areas that are difficult for humans to reach. This involves effective seed dispersal, particularly in hard-to-reach areas. Vegetation and obstructions can easily be identified from the aerial pictures these devices capture, highlighting the relevant areas. Based on that information, a seed dropping and planting strategy is developed. The proposed method, best described as a drone-assisted seed dropping system, can be a further step forward in the agricultural field.

Keywords Unmanned aerial vehicle (UAV) · Reforestation · Drones

1 Introduction

In this constantly developing world, current tree-planting programs are simply not fast enough. People are now dedicated more to smart work than to hard work. Transforming every manual task into automation, the very versatile technology of using drones in agriculture has recently been added to the queue. Planting trees in remote forest locations is a slow process that requires labour and relies entirely on humans [1]. It can be drastically modernized by employing drones to plant seeds. The drones fly across a specified area, collect data about soil conditions and determine the prime locations for planting. Valuable information can be collected using this innovation. The drones carry seeds that are already sprouted before they are inserted into the ground, promising a more prosperous growth pattern and a robust rooting system [2].

M. Verma · S. I. Ali · G. Agrawal (B) Maharishi Arvind International, Institute of Technology, Kota, Rajasthan, India e-mail: [email protected]; [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_27


2 Working of UAV

The drone performs two rounds to analyse the area and drop the seeds. In the first round, the feasibility of the land is checked and access is gained to the area where the seeds need to be dropped using the canister [1].

1. Seed preparation and canister development: The seeds are first weighed and measured, and the canister is sized to hold the measured amount of seed. The seed canister works in a constant manner to drop the seeds on the marked place and has various pieces of equipment fitted to it [2]. The door opens on sensing the receiver signal, and the seeds are released on the mapped place. This door-opening mechanism is powered by the battery and driven by the receiver. After development, the canister is installed at the bottom of the drone body. During the design of the canister, the weight of the seed canister is taken into account to know the exact amount of seed that can be filled in the box (Figs. 1 and 2).

2. Mapping the dropping area: An aerial picture of the place where the seeds are to be dropped is captured with the help of the flying drone and gets stored in the drone

Fig. 1 Seed canister components: a canister door, a plastic container carrying the seeds, a motor, a receiver and a charged battery

Fig. 2 Drone with the seed canister fitted at the bottom


Fig. 3 Estimate of seeds that can be delivered (left); the suitable dropping area shown on the map (right)

and the map marks the difference between the mapped and the deforested areas [3]. The exact location where the seeds need to be dropped can easily be determined with the help of this map (Fig. 3).

3. Dropping activity: After the canister is installed, the drone flies to the marked area previously determined from the map [4]. The overall performance of the UAV is determined only after this process, that is, when the canister reaches the marked area. Performance is indicated in real time, giving information about the drone: travel distance, amount of seeds carried and time taken [5].
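As a rough illustration of the canister-sizing step above, the following sketch estimates how many seeds fit in a canister and how far apart drops land along a flight line. All of the numbers (seed mass, payload budget, drop rate, cruise speed) are hypothetical assumptions for illustration, not figures from the paper.

```python
# Hypothetical canister-sizing sketch (all constants are assumed, not measured).
seed_mass_g = 1.5           # assumed mass of one pre-sprouted seed pod
canister_payload_g = 600.0  # assumed payload budget for the canister
drop_rate_per_min = 40      # assumed fixed dispensing rate of the canister door
speed_m_s = 5.0             # assumed cruise speed over the mapped area

seeds_per_canister = int(canister_payload_g // seed_mass_g)
spacing_m = speed_m_s * 60 / drop_rate_per_min   # metres between successive drops
flight_minutes = seeds_per_canister / drop_rate_per_min

print(seeds_per_canister, spacing_m, round(flight_minutes, 1))  # 400 7.5 10.0
```

Under these assumptions one canister load covers a 3 km flight line at 7.5 m drop spacing, which is the kind of figure the real-time performance indication above would report.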

3 Advantages

An unmanned aerial vehicle (UAV) with a seed dispersal system has many advantages over distribution by hand:

1. In contrast to manual labour, which can be variable, a drone plants a fixed number of seeds per minute [3].
2. The aerial monitoring system allows for highly accurate seed dispersal.
3. The work is remotely managed by a human pilot; hence, there is no risk to human life [3].
4. Civilian applications involve various duties, all contributing to reforestation tasks. Drones have the potential to serve the economy and are highly efficient, feasible, safely adaptable and easy to operate manually, where many traditional aircraft cannot fly easily.
5. Drones can demonstrate regulation of the water supply and help supply water under peak growing conditions for specific crops.
6. A drone scans the ground and sprays the required amount of chemicals precisely at the altitude needed for the application.


4 Challenges

1. A flat surface is needed for the quadcopter, with open space for takeoffs and landing [4].
2. An obstructed landing or takeoff site can deplete the drone's batteries, leaving little time to take readings or measurements.
3. Quadcopter, fixed-wing, in-built-wing or hybrid drones cannot be used in high winds of more than 40 km/h, snowstorms or heavy rain.
4. For safety, a drone requires a carry case or a reinforced backpack, and sometimes a car or other vehicle for transport from one place to another.
5. Quadcopters are inefficient at covering large areas, as this requires more battery power; they are best suited to small-area mapping to minimize power usage.

5 Future Scope

Demand for drones in agriculture is likely to increase: by 2026, the market for agriculture drones is estimated to reach 7 billion dollars, amplified by a 29% CAGR. As the cost of drones decreases over time, the demand for agricultural drone software increases [5]. The statistics released by Successful Farming in its 2016 agricultural study are as follows:

1. 9% of the agricultural industry already owns drones.
2. 3% of the agricultural industry will own drones within the next 8 months.
3. 17% of the agricultural industry will own drones within the next 2 years.
4. 33.33% of the agricultural industry will own drones within the next 4 or more years.
5. 37% of the agricultural industry has still not planned to purchase a drone.

6 Conclusion

The use of drones has been very useful to farmers large and small, and in the coming years drones are set to become a needed tool for every farmer in every developing nation. They have already acquired a place in the construction and agricultural industries, and a sharp increase can be seen in the use of drones in different civil engineering fields [6]. Farmers should know all the advantages and disadvantages of using drones so that they can decide before making an expensive investment, and they are advised not to buy drones in bulk, even after understanding the pros and cons, for better results [3].


References

1. O. Hassaan, A.K. Nasir, H. Roth, M.F. Khan, Precision forestry: trees counting in urban areas using visible imagery based on an unmanned aerial vehicle. IFAC-PapersOnLine 49, 16–21 (2016)
2. L. Hojas-Gascón, A. Belward, H. Eva, G. Ceccherini, O. Hagolle, J. Garcia, P. Cerutti, Potential improvement for forest cover and forest degradation mapping with the forthcoming Sentinel-2 program. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 40, 417–423 (2015)
3. Unmanned Aerial Vehicle, Wikipedia. Wikimedia Foundation, July 15, 2020. https://en.wikipedia.org/wiki/Unmanned_aerial_vehicle
4. S. Siebert, J. Teizer, Mobile 3D mapping for surveying earthwork projects using an unmanned aerial vehicle (UAV) system. Autom. Constr. 41, 1–14 (2014)
5. Omega, "The UAV—the future of the sky." The UAV—Unmanned Aerial Vehicle. Accessed July 16, 2020. https://www.theuav.com/
6. Unmanned Aerial Vehicle: Applications in Agriculture and Environment (Springer Nature, 2019)

Modelling and Design of 5T, 6T and 7T SRAM Cell Using Deep Submicron CMOS Technology Nidhi Tiwari, Varun Sankath, Akhilesh Upadhyay, Mukesh Yadav, Ruby Jain, Pallavi Pahadiya, Madhavi Bhanwsar, and Shivangini Mouraya

Abstract Memory is a basic need of microcontroller and DSP units. As per market demand, electronic devices should be small in size and have better performance in terms of speed, power consumption and stability. Portability is another important factor, and with it battery backup should be considered. In this paper, we have simulated three static random access memory cells, the five-transistor (5T), six-transistor (6T) and seven-transistor (7T) SRAM cells, in 180 nm technology. The results are extracted at different temperatures.

Keywords SRAM · DRAM · Power consumption · Speed

1 Introduction

Power is a critical parameter in electronic devices, and it plays a vital role in real-time operation. In VLSI system-on-chip design, components and devices are implemented on a single chip, but they affect each other and create problems due to intrinsic parameters [1, 2]. Today, on-chip memories are used to reduce the speed gap between the processor and main memory, and SRAM is used for designing these on-chip memories. The main element for designing it is CMOS, which can perform all three memory operations [3–6]. Figure 1 shows the complete architecture of the memory cell. It consists of a sense amplifier, column and row decoders and the SRAM cell. The sense amplifier is connected to the output for the read operation, while the row and column decoders are connected for the read and write operations [6].

N. Tiwari (B) · V. Sankath · A. Upadhyay · M. Yadav · R. Jain · P. Pahadiya · M. Bhanwsar · S. Mouraya Electronics & Communication Department, SAGE University, Indore, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_28


Fig. 1 Complete diagram of memory cell architecture

2 SRAM Cell Structure and Working

The basic memory cell structure is composed of four transistors: two NMOS transistors work as driver transistors, and the remaining two as pass transistors. In the resistive-load variant, a load resistor is connected to each driver transistor; this load resistor creates problems, as it is directly connected to the load and may damage the NMOS driver transistor or the circuit. Figure 2 shows the circuit diagram of the 5T SRAM cell, in which a load PMOS transistor is connected instead of the load resistor. Figure 3 shows the schematic of the six-transistor SRAM cell.

Fig. 2 5T SRAM cell with ground gated transistor


Fig. 3 Schematic diagram of six transistor memory cell with extra NMOS transistor

Any memory supports three basic operations: read, write and hold. Write operation: data is written into the memory cell through the bitlines. Hold operation: data is held by the back-to-back inverters. Read operation: data is read through the pass transistors, which are connected to the sense amplifier. A SRAM cell consists of two back-to-back inverters and two pass transistors. Bitlines are connected to the pass transistors for the read and write operations. The word line (WL) goes high for read and write operations; for the hold operation, the word line goes low and the data is held by the back-to-back inverters. This is based on CMOS technology. Figure 4 shows the schematic diagram of the seven-transistor memory cell with an extra NMOS transistor. In this circuit, one extra transistor is added to separate the read and write operations. This transistor reduces the power consumption caused by using the same bitline for both read and write, but problems still exist, so further research is needed. Simulation results for the 5T, 6T and 7T SRAM cells are described in the next section.
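The three operations can be summarized with a toy behavioural model of a cell: the word line gates whether the bitlines can overwrite or sense the latch. This is an illustration only; the paper's results come from transistor-level simulation of the 180 nm cells, not from a model like this.

```python
# Toy behavioural model of one SRAM cell (illustration, not the paper's method).
class SramCell:
    def __init__(self):
        self.q = 0  # stored bit; the back-to-back inverters hold q and its complement

    def write(self, wl, bit):
        if wl:            # word line high: pass transistors conduct,
            self.q = bit  # so the bitlines overwrite the latch

    def read(self, wl):
        # word line high: bitlines sense the latch; low: cell is isolated (hold)
        return self.q if wl else None

cell = SramCell()
cell.write(wl=1, bit=1)  # write operation stores a 1
cell.write(wl=0, bit=0)  # word line low: hold operation, the write is ignored
print(cell.read(wl=1))   # read operation -> 1
```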

3 Simulation Results of 5T, 6T and 7T SRAM Cells

In this paper, we simulated the five-, six- and seven-transistor memory cells using a 180 nm PTM file. In the circuit diagram, we have added one extra NMOS transistor between the inverter and ground. In standby mode, this transistor disconnects the short-circuit path and prevents current from flowing from the load to ground. Figures 5 and 6 show the waveforms of the write operation. Blue and red lines are showing the input, and pink and green lines are showing


Fig. 4 Schematic diagram of seven transistor memory cell with extra NMOS transistor

Fig. 5 Write operation of 6T SRAM cell

the output. Some glitches in rise time are present in the output waveform due to the direct connection of the inverter to ground. Figure 6 shows the simulation result of the six-transistor SRAM memory cell with the extra ground-gated transistor; due to this transistor, the glitches are removed and the result is good.

4 Conclusion

In this paper, we have simulated the 5T, 6T and 7T SRAM cells; 180 nm technology is used for the simulations. We have extracted the power consumption at two temperatures, as shown in Table 1. Power consumption is higher in the 5T


Fig. 6 Write operation of six transistor memory cell with ground gated transistor

Table 1 Average power consumption at different temperatures

Temperature | Five-transistor SRAM cell (µW) | Six-transistor SRAM cell (µW) | Seven-transistor SRAM cell (µW)
25 °C | 396 | 29.7 | 30.1
50 °C | 391 | 29.1 | 29.8

SRAM cell, while in the 6T and 7T SRAM cells the power consumption is almost the same. But this is not the only parameter to consider; many factors are yet to be optimized for better real-time operation of any memory.

References

1. B.H. Calhoun, Y. Cao, K. Mai, L.T. Pileggi, R.A. Rutenbar, K.L. Shepard, Digital circuit design challenges and opportunities in the era of nanoscale CMOS. Proc. IEEE 96(2), 343–365 (2008)
2. T. Karnik, S. Borkar, V. De, Sub-90 nm technologies: challenges and opportunities faced, 774602, 203–206 (2002)
3. S. Borkar, T. Karnik, S. Narendra, J. Tschanz, A. Keshavarzi, V. De, Parameter variations and impact on circuits and microarchitecture, 775920, 338–342 (2003)
4. S. Nassif, Delay variability: sources, impacts and trends, in Proceedings of the IEEE International Solid-State Circuits Conference (2000), pp. 368–369
5. S. Akashe et al., High density and low leakage current based 5T SRAM cell using 45 nm technology, in 2011 International Conference on Nanoscience, Engineering and Technology (ICONSET) (2011), pp. 346–350
6. J. Shrivas et al., Impact of design parameter on SRAM bit cell, in 2012 Second International Conference on Advanced Computing & Communication Technologies (ACCT) (2012), pp. 353–356

Machine Learning Approach Towards Road Accident Analysis in India Shruti Singhal, Bhavini Priyamvada, Rachna Jain, and Muskan Chawla

Abstract This paper aims to study, compare and analyse the performance of six major machine learning techniques to better understand the occurrence of traffic accidents. The methods considered are Decision Trees, Support Vector Machines, Naïve Bayes, Random Forest, K-Nearest Neighbour and Logistic Regression. To achieve the most realistic accident reduction possible under budgetary constraints, the study must be based on objective and scientific surveys that detect and help prevent accidents and clarify their causes and the severity of the resulting injuries.

Keywords Road accidents · Machine learning · Random Forest classification · Support Vector Machines · Logistic Regression · K-Nearest Neighbour · Decision Trees classification · Naïve Bayes

S. Singhal · B. Priyamvada (B) · R. Jain · M. Chawla
Bharati Vidyapeeth's College of Engineering, New Delhi, India
e-mail: [email protected]
S. Singhal e-mail: [email protected]
R. Jain e-mail: [email protected]
M. Chawla e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_29

1 Introduction

India is a developing country where the frequency and number of road crashes exceed the critical limit. As the number of vehicles on the road increases every year, the rate of road crashes rises drastically with it. Road mishaps are a human catastrophe that imposes a tremendous socio-economic cost in terms of unexpected losses, injuries and loss of potential income. The repercussions of such incidents can be colossal and can have a detrimental effect on people, their well-being


and welfare, in addition to the economy. Subsequently, road safety has become an issue of global concern. The number of deaths and injuries in road accidents reveals the scale of the worldwide road safety crisis. Road crashes are a major cause of death for individuals aged 5–29 and the third leading cause for those from their early 30s to mid 40s. The rising number of vehicles is directly related to the rapid increase in road collisions, which were projected to become the third-highest factor leading to fatalities by the year 2020. The losses from road accidents have a tremendous effect on the economy as well as on the health of the families affected, and road accident injuries are therefore troubling for global health care systems. Methods can be developed to detect repetitive causes of accidents and to reduce the severity of injury in road collisions; these patterns can be used to establish protocols for traffic safety. This paper uses techniques such as Support Vector Machines, K-Nearest Neighbour, Decision Trees, Logistic Regression, Naïve Bayes and Random Forest classification to provide a comparative analysis and study the cause and effect of these fatal accidents. The focus of our road accident analysis and prediction model:

• Analyse accidents that have occurred previously in a locality, which will help determine the areas most prone to accidents and help set up prompt help for them.
• Minimise the occurrence of accidents by analysing the severity of previously occurring disasters and taking into consideration other factors such as pollution, visibility, alcohol content, age, weather, etc.

2 Related Work

In previous years, many research papers have been published on road accident analysis. Theofilatos et al. [1] in 2014 presented a review highlighting the importance of weather and traffic attributes in road safety. Despite the presence of mostly mixed evidence on the impact of traffic parameters, a few patterns can be observed; for example, traffic flow appears to have a nonlinear association with accident rates, even though some studies propose a direct association with accidents. Nguyen et al. [2] in 2018 devised a model for the automatic classification of accident severity using machine learning. The NSW Transport Management Centre (TMC) and the research organization Data61 in Sydney had collaborated to find patterns in historical incident records, leading to the automatic classification of severity levels among past episodes using advanced machine learning, active learning and anomaly detection techniques. Mehar et al. [3] in 2013 introduced some essential concepts for formulating a systematic plan for road safety improvement in India. Gupta et al. [4] in 2018 exhibited an approach using data mining techniques to identify occurrences of accidents, along with their methodology and application. Here, methods like


data mining were used to recognize the causes of accident impact across the globe; the work identified accident occurrences in various regions and distinguished the most plausible reasons behind mishaps occurring worldwide. Caliendo et al. [5] in 2019 investigated the frequency of total accidents (crashes involving material damage, physical injuries and fatalities) occurring over a period of 4 years in 226 unidirectional motorway tunnels, based on distinct and correlated random parameter models. The random-intercept model was built from earlier recordings of the random effects (temporal correlations among accidents occurring in the same tunnel over the years), where the regression intercept was taken as random.

3 Dataset Analysis

The datasets considered in this paper are taken from government websites and nationally authenticated sources that have undergone strict vetting. The training dataset is taken from nhai.gov.in (National Highways Authority of India). This ensures that the raw data taken for analysis contains minimal error, making our predictions as accurate as possible. Since the data [6–10] is taken from an open source, the results are easily verifiable and can also be updated to follow the latest trends (refer to Table 1).

4 Methodology

The techniques employed are arranged sequentially in the following flow chart. The first sweep is raw data collection; all data is vetted and authenticated before any work is done. This is followed by pre-processing, which includes finding and eliminating data inconsistencies (missing data detection) and normalizing the ranges of the independent variables (feature scaling). The next process is sampling, a statistical technique used to pick out a subset of the data. A subset is sampled from the training dataset after every epoch with the aim of selecting a relevant sequence of data, while test data is randomly sampled from the test dataset so as to model the real world accurately. The next step applies the six machine learning methods [11] described previously to the data to obtain the required results. Post-processing evaluates the performance of each model and is used to devise a corrective strategy; prominent performance metrics include the F1-score, accuracy, precision and recall. On our dataset, Random Forest was the best-performing model and was therefore selected as the final classification model [12, 13]. The results of this methodology can be seen in the next section (refer to Fig. 1).
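The pipeline above (split, train, evaluate, compare) can be sketched with scikit-learn. The dataset here is a synthetic stand-in generated by `make_classification`, not the paper's NHAI data, and only two of the six classifiers are shown to keep the sketch short.

```python
# Hedged sketch of the comparison pipeline on a synthetic stand-in dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic features standing in for the accident attributes of Table 1.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)  # train on the sampled training subset
    print(name, round(accuracy_score(y_te, model.predict(X_te)), 3))
```

The remaining classifiers (SVM, Naïve Bayes, K-Nearest Neighbour, Decision Tree) drop into the same `models` dictionary unchanged, which is what makes the side-by-side comparison straightforward.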


Table 1 Training data attributes description

Attribute | Description
State | State in which the accident occurred
Month | Month in which the accident occurred
Date | Date on which the accident occurred
Day | Day on which the accident occurred
Time | Time at which the accident occurred
Harmful event | Whether a harmful event occurred during the accident or not
Manner of collision | Manner of collision
Person type | Driver/passenger
Seating position | Seating position of the injured person
Age | Age of the person involved
Age range | Age range of the person involved
Gender | Male/female
Injury severity | Severity of injury
Transported for treatment | Mode of transport used to send the victims for treatment
Air bag | Whether an air bag is present or not
Protection system | Type of protection system used
Ejection | Whether the person is ejected out of the vehicle during the accident
Extrication | Whether the person is extricated
Dead on arrival | Whether dead on arrival for treatment
Road type | Type of road at the accident location (local/highway)
Drug/alcohol involvement | Whether the driver is under the influence of drugs/alcohol
Number of lanes | Number of lanes at the accident location
Surrounding area | Type of surrounding area at the accident location
Road features | Shape of the road at the accident location
Light condition | Lighting condition at the accident location
Weather condition | Weather condition at the time of the accident
Following traffic rules | Whether the driver was following traffic rules or not
Type of vehicle | Type of vehicle involved in the accident
Causality type | Type of people injured in the accident (driver/passenger/pedestrian)
Severity class | Severity of the accident


Fig. 1 Diagrammatic representation of methods used

5 Result and Analysis

Models are created using accident data records, which can help in understanding the characteristics of many features such as driver behaviour, roadway conditions, and light and weather conditions. The study is done by analysing the data and querying it in ways relevant to accident analysis. Questions such as what fraction of accidents occur in developed and underdeveloped areas [14] and what is the


most dangerous time to drive, or whether accidents occur in high-speed-limit areas, are often included [15]. This data can be accessed and worked upon using spreadsheets, and answers can be obtained. The analysis aims to highlight the most significant data related to the study and to allow predictions to be made. To achieve this purpose, various evaluation parameters were used to assess the performance of this study: recall, accuracy, precision and F1-score. Recall is the proportion of correctly predicted positive observations to the total number of observations in a class. Accuracy is the ratio of correctly predicted observations to the total observations. Precision is the ratio of correctly predicted positive observations to the total predicted positive observations. The F1-score is the weighted average of recall and precision. When tested on the model devised above, these parameters gave the highest values to the Random Forest algorithm, with 95.77% recall, 91.35% accuracy, 94.94% precision and an F1-score of 0.77. Figure 2 depicts the types of vehicle involved in road accidents: 'cars' have been involved in most of the accidents, with a staggering 65.6%, followed by 'bus' (16.1%), 'bike' (9.2%) and 'truck' (9.1%) [16]. Figure 3 plots accidents against road features. It is observed that the number of accidents is higher on curved roads; this could be due to a lack of road

Fig. 2 Types of vehicles
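The four metric definitions above can be checked with a small worked example. The confusion-matrix counts below are illustrative, not the paper's actual results.

```python
# Worked example of the four evaluation metrics from illustrative counts.
tp, fp, fn, tn = 83, 5, 4, 8   # hypothetical true/false positives and negatives

accuracy  = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)

print(round(accuracy, 3), round(precision, 3), round(recall, 3), round(f1, 3))
# -> 0.91 0.943 0.954 0.949
```

Because the F1-score combines precision and recall, a model can post a high accuracy yet a much lower F1 on imbalanced classes, which is consistent with the Random Forest figures reported above (91.35% accuracy but an F1-score of 0.77).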

Fig. 3 Features of roads

Machine Learning Approach Towards Road …

317

Fig. 4 Light conditions

Fig. 5 Age range

signs. About 30.4% of road accidents occurred on ‘straight’ roads due to ‘drowsiness’. Figure 4 draws a comparison between the number of accidents and given light conditions. It is observed that ‘no light’ conditions were most susceptible to accidents with 42.8% [17]. We can infer that the accidents decreased with improvement in lighting conditions from 42.8%, and it went to 36.4 and 20.9% accordingly. Figure 5 shows the ages that are involved in accidents. From here, we can depict that the people in ‘20 s’ are the ones involved in most of the accidents. The ‘20 s’ are followed by the ‘30 s’ [18], with a constant decrease in the frequency of accidents as the age increases [19, 20]. Adolescents and teens also contribute majorly. Figure 6 represents the number of mishaps according to the months of the year. From this, we can observe ‘December’ was the month when most accidents occurred. This could be due to adverse weather conditions or poor visibility. The number of accidents is more in ‘winter’ than ‘summer’. Table 2 and Fig. 7 show the accuracy of various classification algorithms that have been implemented. The figure shows the comparison between instances of incorrectly and correctly categorized algorithms. The table shows the percentages for the same. The Random Forest Classification [21–23] has the highest accuracy of 91.35%, and the Naive Bayes algorithm has the least accuracy of 84.09%. Table 3 and Fig. 8 show the score and error rate of the various algorithms. It depicts the mean absolute error (MAE) and the root mean squared error (RMSE)


Fig. 6 Number of accidents in a month

Table 2 Classification accuracy

S. No. | Algorithms                   | Correctly classified instances (%) | Incorrectly classified instances (%) | Accuracy (%)
1      | Random Forest Classification | 91.35                              | 8.65                                 | 91.35
2      | Decision Tree Classification | 90.74                              | 9.26                                 | 90.74
3      | K-Nearest Neighbours         | 90.01                              | 9.99                                 | 90.01
4      | Support Vector Machines      | 88.68                              | 11.32                                | 88.68
5      | Logistic Regression          | 85.96                              | 14.04                                | 85.96
6      | Naive Bayes                  | 84.09                              | 15.91                                | 84.09

Fig. 7 Classification accuracy


Table 3 Score and error rate

S. No. | Algorithms                   | Score | Mean absolute error (MAE) | Root mean squared error (RMSE)
1      | Random Forest Classification | 0.77  | 0.084090                  | 0.293095
2      | Decision Tree Classification | 0.74  | 0.093769                  | 0.306217
3      | K-Nearest Neighbours         | 0.71  | 0.107078                  | 0.348708
4      | Support Vector Machines      | 0.67  | 0.113128                  | 0.336345
5      | Logistic Regression          | 0.60  | 0.140400                  | 0.374634
6      | Naive Bayes                  | 0.57  | 0.159105                  | 0.398879

Fig. 8 Error rate

[24] for the algorithms used. Random Forest Classification has the highest score of 0.77 and the lowest MAE and RMSE of 0.084090 and 0.293095, respectively, while the Naive Bayes algorithm has the lowest score of 0.57, with MAE and RMSE of 0.159105 and 0.398879, respectively. Table 4 and Fig. 9 show the weighted averages for the different algorithms and a comparison of the precision, recall and F-measure percentages, respectively. Random Forest Classification has the highest precision, recall and F-measure, i.e. 94.94%, 95.77% and 94.93%, respectively. Naïve Bayes has the lowest precision, recall and F-measure, i.e. 62.15%, 59.52% and 60.13%, respectively (refer Fig. 9).
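As a concrete illustration of how these evaluation parameters relate to a set of predictions, the following sketch computes all of them for binary labels; the labels here are illustrative only and are not drawn from the accident dataset used in the study.

```python
import math

def evaluate(y_true, y_pred):
    """Accuracy, precision, recall, F1, MAE and RMSE for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    n = len(y_true)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / n
    precision = tp / (tp + fp)          # correct positives / predicted positives
    recall = tp / (tp + fn)             # correct positives / actual positives
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    errors = [abs(t - p) for t, p in zip(y_true, y_pred)]
    mae = sum(errors) / n               # for 0/1 labels, MAE equals the error rate
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    return accuracy, precision, recall, f1, mae, rmse

# Illustrative labels only -- not taken from the paper's data.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(evaluate(y_true, y_pred))  # (0.75, 0.75, 0.75, 0.75, 0.25, 0.5)
```

Note that for 0/1 labels RMSE is simply the square root of the misclassification rate, which is why the two error measures in Table 3 rise and fall together.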


Table 4 Weighted average

S. No. | Algorithms                   | Precision (%) | Recall (%) | F-Measure (%)
1      | Random Forest Classification | 94.94         | 95.77      | 94.93
2      | Decision Tree Classification | 94.14         | 93.70      | 94.31
3      | K-Nearest Neighbours         | 90.31         | 89.78      | 90.01
4      | Support Vector Machines      | 87.40         | 71.14      | 74.56
5      | Logistic Regression          | 74.90         | 58.67      | 59.19
6      | Naive Bayes                  | 62.15         | 59.52      | 60.13

Fig. 9 Weighted average

6 Conclusion

It is observed that road accident cases are largely influenced by multiple factors, such as the type of vehicle, age of the vehicle, age of the driver, road structure and weather conditions, among others [25]. The various machine learning techniques used in this study give an accurate idea of the current scenario by classifying accidents into multiple categories, including severity level, injury level and the measure of damage that occurred in the event. We conclude that, on the observed dataset, the Random Forest method outperformed all the others, with the highest precision, recall and F-measure, i.e. 94.94%, 95.77% and 94.93%. Random Forest Classification also has the highest score of 0.77 and the lowest MAE and RMSE of 0.084090 and 0.293095, respectively. Naïve Bayes has the lowest precision, recall and F-measure, i.e. 62.15%, 59.52% and 60.13%, the lowest score of 0.57, and MAE and RMSE of 0.159105 and 0.398879, respectively. These observations can be used for further research on the occurrence of road accidents and to set up immediate redressal for the most affected areas.


References

1. A. Theofilatos, G. Yannis, A review of the effect of traffic and weather characteristics on road safety. Accid. Anal. Prev. 72, 244–256 (2014)
2. H. Nguyen, C. Cai, F. Chen, Automatic classification of traffic incident's severity using machine learning approaches. IET Intell. Transp. Syst. 11(10), 615–623 (2017)
3. R. Mehar, P.K. Agarwal, A systematic approach for formulation of a road safety improvement program in India. Procedia Soc. Behav. Sci. 104, 1038–1047 (2013)
4. M. Gupta, V.K. Solanki, V.K. Singh, V. García-Díaz, Data mining approach of accident occurrences identification with effective methodology and implementation. Int. J. Electr. Comput. Eng. 8(5), 4033 (2018)
5. C. Caliendo, M.L. De Guglielmo, I. Russo, Analysis of crash frequency in motorway tunnels based on a correlated random-parameters approach. Tunn. Undergr. Space Technol. 85, 243–251 (2019)
6. S. Shanthi, R.G. Ramani, Feature relevance analysis and classification of road traffic accident data through data mining techniques, in Proceedings of the World Congress on Engineering and Computer Science, vol. 1 (2012), pp. 24–26
7. S. Vasavi, Extracting hidden patterns within road accident data using machine learning techniques, in Information and Communication Technology (Springer, Singapore, 2018), pp. 13–22
8. S. Krishnaveni, M. Hemalatha, A perspective analysis of traffic accident using data mining techniques. Int. J. Comput. Appl. 23(7), 40–48 (2011)
9. E. Suganya, S. Vijayarani, Analysis of road accidents in India using data mining classification algorithms, in 2017 International Conference on Inventive Computing and Informatics (ICICI) (IEEE, 2017), pp. 1122–1126
10. S. Kumar, D. Toshniwal, A data mining approach to characterise road accident locations. J. Mod. Transp. 24(1), 62–72 (2016)
11. A. Iranitalab, A. Khattak, Comparison of four statistical and machine learning methods for crash severity prediction. Accid. Anal. Prev. 108, 27–36 (2017)
12. A. Jain, G. Ahuja, D. Mehrotra, Data mining approach to analyse road accidents in India, in 2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO) (IEEE, 2016), pp. 175–179
13. N. Dogru, A. Subasi, Traffic accident detection using random forest classifier, in 2018 15th Learning and Technology Conference (L&T) (IEEE, 2018), pp. 40–45
14. S.K. Singh, Scenario of urban transport in Indian cities: challenges and the way forward, in Cities and Sustainability (Springer, New Delhi, 2015), pp. 81–111
15. S. Kumar, D. Toshniwal, A data mining approach to characterise road accident locations. J. Mod. Transp. 24(1), 62–72 (2016)
16. P. Rizwan, K. Suresh, M.R. Babu, Real-time smart traffic management system for smart cities by using the internet of things and big data, in 2016 International Conference on Emerging Technological Trends (ICETT) (IEEE, 2016), pp. 1–7
17. E. D'Andrea, F. Marcelloni, Detection of traffic congestion and incidents from GPS trace analysis. Expert Syst. Appl. 73, 43–56 (2017)
18. S.K. Singh, Road traffic accidents in India: issues and challenges. Transp. Res. Procedia 25, 4708–4719 (2017)
19. S.K. Singh, The neglected epidemic: road traffic crashes in India. Metamorphosis 11(2), 27–49 (2012)
20. S.S. Alavi, M.R. Mohammadi, H. Souri, S.M. Kalhori, F. Jannatifard, G. Sepahbodi, Personality, driving behavior and mental disorders factors as predictors of road traffic accidents based on logistic regression. Iran. J. Med. Sci. 42(1), 24 (2017)
21. S. Taamneh, M. Taamneh, Evaluation of the performance of random forests technique in predicting the severity of road traffic accidents, in International Conference on Applied Human Factors and Ergonomics (Springer, 2018), pp. 840–847
22. S.K. Singh, Scenario of urban transport in Indian cities: challenges and the way forward, in Cities and Sustainability (Springer, New Delhi, 2015), pp. 81–111
23. R. Harb, X. Yan, E. Radwan, X. Su, Exploring precrash maneuvers using classification trees and random forests. Accid. Anal. Prev. 41(1), 98–107 (2009)
24. V.A. Olutayo, A.A. Eludire, Traffic accident analysis using decision trees and neural networks. Int. J. Inf. Technol. Comput. Sci. 2, 22–28 (2014)
25. P.A. Nandurge, N.V. Dharwadkar, Analyzing road accident data using machine learning paradigms, in 2017 International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC) (IEEE, 2017), pp. 604–610

IoT-Based Big Data Storage Systems in Cloud Computing Prachi Shah, Amit Kr. Jain, Tarun Mishra, and Garima Mathur

Abstract IoT-based applications have emerged as a significant sector for developers and researchers, exemplifying the scale and impact of the data-related issues to be resolved in modern product organizations, especially in distributed computing. The paper presented here gives a functional framework that distinguishes the acquisition, management, processing and mining areas of big Internet of Things data, and distinct parallel functional topics are identified and described in terms of their pivotal qualities and capabilities (Wang and Chu, Data management for Internet of things: challenges and opportunities, pp. 1144–1151, 2013 [1]). Current research into Web-of-things usage is then examined; in addition, the complications and operations associated with big IoT-based data are identified. We also report an investigation of basic IoT-based application products and research proposals based on related academic and industry publications. Finally, some open issues and challenges, as well as some typical models, are presented under the prospective IoT-based research framework (Hasan and Curry, ACM Trans. Internet Technol. (TOIT) 14(1):2, 2014 [2]).

Keywords IoT-based applications · Big data storage systems · Cloud computing · Distributed data processing · Data administration

P. Shah (B) · A. Kr. Jain · T. Mishra · G. Mathur Poornima College of Engineering, Sitapura, Jaipur 302020, India e-mail: [email protected] A. Kr. Jain e-mail: [email protected] T. Mishra e-mail: [email protected] G. Mathur e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_30


1 Introduction

The Internet of things is a system of interrelated computing devices, mechanical and digital machines, and living beings that are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction. In IoT, this can be considered machine-to-machine communication. A "thing," in the Internet of things, can be a car fitted with sensors that warn the driver when tire pressure is below a certain level, or any other natural or man-made object that can be assigned an IP address and given the ability to transfer data over a network [1]. Massive amounts of data are generated by large numbers of distributed sensors, so how to acquire, integrate, store, process and use the available data has become a critical and significant issue for enterprises seeking to achieve their business goals. As a result, researchers and developers are faced with the challenge of handling huge heterogeneous data in fully distributed environments, especially on cloud platforms.

1.1 Notable Attributes of IoT-Based Data in Cloud Platforms

• Multi-source massive heterogeneous data
• Large-scale dynamic data
• Low-level data with weak semantics
• Inaccurate data

Likewise, when an online cloud platform is used for IoT data exchange, qualification and integration, various prerequisites arise for handling voluminous, real-time and unstructured data at various levels, e.g., data storage, data representation and data analysis. IoT technology coordinates with big data access to control smart industrial automation applications. In large-scale industrial automation, vast numbers of digital devices are built from billions of IoT nodes to form IoT models, and the networks of these billions of IoT objects may establish large-scale industrial IoT environments, from which structured, semi-structured and unstructured big IoT data are delivered in real time [3].


Fig. 1 IoT layer

2 Related Work

IoT builds on the success of Internet technologies to connect physical objects, or things, to the Web and to enable a wealth of applications, for example, assisted driving, augmented sensing, or smart and cooperative homes. An essential prerequisite for realizing IoT is an infrastructure of communication solutions and interoperability standards. The Internet of things also demands a middleware layer that can shield application designers from the underlying technologies; this is vital to the adoption and progress of IoT-based applications. Integration of heterogeneous bulky data is a genuine challenge, as shown in Fig. 1.

3 The Operational Framework of Cloud-Based IoT Applications

The typical structure of IoT is composed of a perception layer, a network layer and an application layer. The application layer is the key layer for IoT-based storage systems in distributed computing, since it is formed from middleware. Much work has been done to enable efficient and intelligent data handling and analysis in the application layer based on distributed computing [4]. The perception layer corresponds to RFID, WSN and similar sensing options. Considering the processing flow of a Web application, a framework of Internet-based data storage systems in distributed computing is depicted in Fig. 1. The framework consists of several components that acquire and integrate data, plus data storage, data processing, data mining and application improvement modules.


Referring to Fig. 1, the relevant technologies can be separated into several functional modules as below:

• Data Acquisition and Integration Module: As the input module, the process of gathering and integrating heterogeneous data from distributed networks and mobile devices is a crucial concern for the entire framework.

• Data Storage Module: Taking into account the various types of Internet-based data, including structured, semi-structured and unstructured data of colossal volume, varied database types or file systems, e.g., XML documents in the Hadoop distributed file system (HDFS), relational database management systems (RDBMS), Not-only-SQL (NoSQL) stores, which provide storage and retrieval of data modelled non-relationally, and graph DBMSs ought to be combined to accomplish high efficiency for data storage on cloud platforms [3].

• Data Management Module: For retrieving information from huge volumes of data gathered from various sources with high productivity, various techniques, e.g., data indexes, metadata, semantic relationships and interlinked data, are adopted for data management at different levels.

• Data Processing Module: On cloud platforms, mass data-processing systems, e.g., MapReduce, are refined for parallel and distributed data processing. Data querying can then be done in a more flexible way to cope with enormous quantities of data.

• Data Mining Module: Since analysing the data from sensors invariably yields cheap, low-level information, higher-level information has to be extracted, classified, separated and analysed for application purposes. Thus, data mining on IoT data chiefly aims to produce extensive predictions or knowledge-discovery outcomes for end users [5].

• Application Optimization Module: Based on analyses of operation, relevant algorithms or approximations are required for processing data on cloud platforms under exacting performance requirements, e.g., reduced input/output, accelerated convergence, monitoring, flexibility, availability, reduced cost and so on.

4 Techniques and Challenges

With reference to the functional framework of IoT data processing on cloud platforms, the following techniques and challenges can be listed.

4.1 Data Acquisition and Integration Module

IoT uses various kinds of sensor devices, e.g., RFID, ZigBee sensors, GPS devices, temperature sensors and so forth, and it produces enormous heterogeneous data, so the framework is useful for acquiring and integrating such big data. As a part of data preparation in distributed computing, the data acquisition and integration module can be grouped into three techniques, beginning with data representation models [6].

Data Representation Models. Data representation models ought to be designed around numerous application objectives with a versatile and generic organization. However, there is a problem with conventional transfer techniques for IoT, because IoT sensors produce heterogeneous data: traditional transfer policies for sensor data in proprietary formats are not sufficient for IoT applications. This can be addressed by managing event data with the RDF framework; the event data then needs to be examined, integrated and linked together, since it is drawn from massive, diverse and interlinked data sources [7]. Sensor data must therefore be enriched and transformed into RDF format for further processing. The VO model is suggested to enrich sensors with context data to support smart-city applications based on cognitive IoT objects; the model represents real objects for casual access to big data and generates a stream of raw sensor measurements. A further matter related to IoT is the user interface, which is becoming progressively more significant with the advancement of IoT applications; form-based applications operate as cyber-physical-social frameworks. For semantic-level interaction with context, there are existing arrangements and standards such as SensorML, an endorsed Open Geospatial Consortium standard, which provides specific schemas and XML encodings for describing sensors and measurement processes.
The SSN ontology can model sensor devices, systems, procedures and observation objects in order to enable expressive representation of sensors, sensor observations and knowledge of the environment [7].

Multi-source Data Fusion. Here we concentrate on how best to collect and integrate the massive heterogeneous data that originates from various sensors. It is typical to recognize the type and combination of such data. A theoretical solution for heterogeneous sensor data fusion in crowd-notification applications is introduced to unify various kinds of protocols for various sorts of data: three standards, e.g., HL7 for medical data, BACnet for building monitoring, and an observation-and-measurement model for environmental data, are incorporated into one information model to describe sensors and activities [5]. It uses micro-injection to exploit a heterogeneous network of wireless sensor nodes; these sensor nodes transmit conditional data, e.g., material build-up conditions, ambient laboratory temperature and humidity. Maintaining simplicity and adaptability across an assortment of IoT uses is a complex task, which can be resolved using ThingBroker, which aims to correlate IoT objects with various qualities, stipulations, configurations and limitations while maintaining the simplicity and adaptability required for a variety of uses [7].
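As a toy illustration of the representation step described above, a raw sensor reading can be flattened into RDF-style subject–predicate–object triples. The URIs and field names below are invented for the sketch; a real system would use an RDF library and a proper vocabulary such as SSN or SensorML.

```python
# Hypothetical sketch: flatten one raw sensor reading into RDF-style
# (subject, predicate, object) triples. URIs and property names are illustrative.
def reading_to_triples(sensor_id, reading):
    subject = f"urn:sensor:{sensor_id}"
    # One triple per measured field, sorted for a deterministic order.
    return [(subject, f"urn:prop:{key}", str(value))
            for key, value in sorted(reading.items())]

triples = reading_to_triples("t42", {"temperature": 21.5, "humidity": 60})
for s, p, o in triples:
    print(s, p, o)
```

Once readings share this triple shape, streams from heterogeneous sensors can be merged and linked by subject, which is the fusion idea discussed above.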


Data Transmission and Communication. After acquiring data from the different sensors in IoT, we need to transfer the data and establish communication among the various sensors. The Internet uses the TCP and UDP protocols for transferring data over a network. Usually, IoT engineers pick UDP because of its lightweight design, but when network congestion and channel disturbance occur, packet loss happens easily with UDP, which is an ongoing challenge [3]. The performance of the CoUDP protocol is better than that of UDP and TCP, since it adds congestion management and fast retransmission on top of UDP while abandoning redundant data handling as in TCP. A separate challenge is to provide guarantees at the communication level; this is achieved at the network level using proximity-based authentication, in which access is required to control the wireless communication link. Still, data representation models in cloud environments do not have a unified structure for acquiring and reconciling sensor data, and multi-source data fusion remains an open issue in applications, even though data transfer protocols have matured [5].
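The CoUDP specifics are not detailed here, but the retransmission idea it builds on can be illustrated with a generic stop-and-wait loop; the lossy channel below is simulated with a callable, where real code would use a UDP socket with a timeout.

```python
def send_reliably(packet, channel, max_retries=5):
    """Stop-and-wait sketch: resend the packet until the channel acknowledges it.

    `channel` is any callable returning True (ack received) or False (packet lost);
    a real implementation would send over a UDP socket and wait with a timeout.
    """
    for attempt in range(1, max_retries + 1):
        if channel(packet):
            return attempt          # number of transmissions that were needed
    raise TimeoutError(f"no acknowledgement after {max_retries} attempts")

# Simulated channel that drops the first two transmissions, then acknowledges.
losses = iter([False, False, True])
print(send_reliably(b"tire-pressure=29psi", lambda pkt: next(losses)))  # 3
```

Fast-retransmission schemes improve on this sketch by resending as soon as loss is detected rather than waiting out a full timeout.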

4.2 Data Storage Module

IoT generates abundant heterogeneous data, and storing such big data is an ongoing challenge. Cloud platforms provide storage as a service; therefore, sharing and partitioning the data on cloud platforms are the primary difficulties in IoT data storage. The data storage module incorporates two sub-components [5].

Data Storage Types. The cloud platform is a plausible way to manage and store colossal data. Essentially, we have classified data into three kinds: structured, unstructured and semi-structured. Each requires a storage type, and thus we classify storage types into:

1. RDBMS
2. NoSQL DBMS
3. DBMS based on HDFS
4. Main-memory DBMS and
5. Graph DBMS.

RDBMS. In IoT, big data is generated rapidly and diversely, and the correlations among the data are always basic for a multi-tenant data storage framework. Structured data is data organized as collections of rows and columns, and it is easy to enforce the ACID rules on structured data. A cloud based on virtualization produces basic relational data, and we need to consolidate this data with conventional relational data into a uniform schema, while exporting a unified data-access view to act as a multi-tenant database for the various tenants.


The proposed RDBMS methodology known as Ultrawrap encodes a logical representation of each database as an RDF graph and uses SPARQL queries to acquire the data from the existing relationally stored views [7].

NoSQL DBMS. Unstructured data is managed using a NoSQL management system as a basic key-value proposition. However, NoSQL databases are not good at guaranteeing the atomicity, consistency, isolation and durability of data. Moreover, they cannot support many distributed queries, which is the main issue with NoSQL databases. They do provide several useful properties, e.g., horizontal scalability, distributed storage, dynamic schemas and so on.

DBMS integrated with HDFS. A most noteworthy challenge with IoT data is that various data are produced in XML format, and the management of these small-sized, huge-volume XML documents becomes a significant challenge. This can be resolved by improving the storing and retrieval of enormous numbers of small XML records in HDFS: some XML records are merged into bigger documents to reduce the metadata, and related mechanisms can then be used to enhance data storage performance. With the assistance of a new central indexing service-discovery framework, built on Apache Hadoop HBase data storage, the performance of service discovery is increased [5].

Main-Memory DBMS. Main-memory databases, which have become feasible recently with the increasing availability of large, cheap memory, can give better performance and steadier throughput than disk-based database systems. There is a large-scale RFID application on the main-memory database system H2; moreover, it provides a multidimensional hash-based index scheme and achieves top performance in evaluation [7].

RDF-based data storage. RDF-based data means unstructured data for Internet information resources. RDF Schema provides an ontology-description language for gathering resources into concepts and recognizing relations between concepts (Table 1).

Data Isolation in Cloud Platform. Data isolation is a genuine challenge because data aggregation derives from resource elasticity in distributed computing systems, and data authority and data placement administration interact with data isolation on cloud platforms. The cloud platform shares common resources among multiple tenants for optimal resource use; this approach might lead to problems of inconsistency and latency in data content [3].

4.3 Data Management Module

In IoT, managing heterogeneous data through the data management module shapes an intelligent and effective database for further distributed or parallel IoT applications. For data management, the relevant work can be classified into three aspects: metadata, semantic annotation and data indexing [3].


Table 1 Comparative study of various data types in IoT

Data types                | Support for ACID | Support for semi-structured and unstructured data | Support for structured data             | Support for scalability | Support for massive and distributed processing
RDBMS                     | Not sound        | Not sound                                         | Yes                                     | Not sound               | Not sound
NoSQL DBMS                | Yes              | Yes                                               | Not sound                               | Yes                     | Yes, but it is stiff
DBMS integrated with HDFS | Yes              | Yes                                               | Yes                                     | Yes                     | Yes
Main-memory DBMS          | Common           | Yes                                               | Yes                                     | Not sound               | Yes, but it is stiff
Graph DBMS                | Common           | Use graph structures with nodes, edges            | Use graph structures with nodes, edges  | Yes                     | Yes

Data management based on metadata. Metadata management is commonly identified as the end-to-end cycle and the governance structure for creating, supervising, upgrading, associating, organizing and maintaining metadata schemas, models or other structured aggregation structures, either standalone or within a repository, together with the supporting procedures. It makes data easily organized and understood by clients without their being involved in every detail of the arrangement.

Semantic annotation. Many IoT-based applications depend strongly on the comprehension of the data, so large-scale data annotation has received concentrated attention lately. As a significant part of semantic retrieval, the accuracy of semantic annotation determines the retrieval outcomes [5] (Table 2).

Data indexing strategy. Data indexing can make data-retrieval operations more efficient, at the expense of extra write operations and additional storage space for the index. With indexes, a DBMS can quickly locate data without scanning every single row of the database tables. In the territory of data indexing, the current relevant work can be characterized into three approaches: bitmap indexes, complex-data-structure indexes and inverted indexes [2].
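Of the three index families, the inverted index is the easiest to sketch: it maps each term to the set of documents containing it, trading extra space and write-time work for fast lookups, exactly the trade-off described above. A minimal sketch with illustrative documents:

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)     # extra write-time work buys fast lookup
    return index

docs = {1: "road accident data", 2: "accident severity index", 3: "road safety"}
index = build_inverted_index(docs)
print(sorted(index["road"]), sorted(index["accident"]))  # [1, 3] [1, 2]
```

A query for a term is then a single dictionary lookup instead of a scan over every document, which is why search engines rely on this structure at scale.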

Table 2 Comparative study of data indexing methods

Features                     | Data structure                   | Suitable data characteristics                              | Suitable scene                             | Performance    | Cost                               | Current usage status
Bitmap index                 | Bitmap                           | Values of a variable repeat frequently                     | Analytical process such as OLAP            | Less efficient | Less space                         | Common
Complex data structure index | Tree, graph or others            | Values of a variable repeat frequently                     | Transaction process such as OLTP           | Efficient      | More space                         | Common
Inverted index               | Mapping from content to location | New key values monotonically increase such as sequence numbers | Large-scale process such as search engines | More efficient | More space and increased processing | Rising trend

4.4 Data Processing Module

In distributed computing, distributed servers are used for storage and processing, and the enormous data is handled using the MapReduce framework, of which Apache Hadoop is a well-known open-source implementation for parallel processing on cloud platforms. MapReduce is widely used in query processing for data analysis and assessment on cloud platforms. A further challenge in data processing using MapReduce is that it does not directly support more complex operations, e.g., joins. With advances in research, declarative processing of semantic data, e.g., RDF, is essential for parallel processing of IoT data in the cloud.
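The MapReduce model referred to above can be reduced to its essentials in a few lines: a map phase that emits key–value pairs and a reduce phase that groups by key and aggregates. This is a single-process sketch of the classic word-count example, not Hadoop itself.

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    """Map: emit (word, 1) pairs for every word in one input record."""
    return [(word, 1) for word in record.split()]

def reduce_phase(pairs):
    """Shuffle + reduce: group the pairs by key and sum the counts."""
    grouped = defaultdict(int)
    for key, value in pairs:
        grouped[key] += value
    return dict(grouped)

records = ["sensor reads temperature", "sensor reads humidity"]
counts = reduce_phase(chain.from_iterable(map_phase(r) for r in records))
print(counts["sensor"], counts["temperature"])  # 2 1
```

In a real cluster, the map calls run in parallel on many nodes and the framework performs the shuffle, which is what makes the model suitable for the data volumes discussed here while still lacking direct support for operations such as joins.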

4.5 Data Mining Module

Because of the colossal, voluminous, distributed character of Internet-based data, it is necessary to devise an effective and productive approach to account for such massive amounts of data. For the most part, there are three ways of data mining for IoT applications: data mining with parallel programming, data mining in mobile computing, data mining for graphs, and so on [6].

5 Future Scope and Conclusion

Currently, researchers have identified the greater part of the potential difficulties in the field of IoT. This article has been segmented into five modules:

1. Data acquisition and integration module,
2. Data storage module,
3. Data management module,
4. Data processing module and
5. Data mining module [4].

Here, we have identified some problems on the horizons of big data systems aligned with cloud computing and attempted to give answers to remove these predicaments. In future, we would like to explore data mining strategies for resolving MapReduce-related issues: when acquiring data through IoT, using MapReduce might limit our approach due to its constraint of selective access to data, which could be resolved by enhancing strategies such as indexing and data layout techniques [8]. We would also like to lessen the cost of data communication via IoT by improving partitioning and collocation techniques. Considering MapReduce's lack of support for m-way operations, we may conquer it by utilizing an additional MR stage, redistribution of keys, or the record-duplication method. For distribution of work in IoT, load balancing is likewise a weak point of MapReduce, which we could address using techniques such as preprocessing sampling, repartitioning and batching. Low-level data with weak semantics in the Web of things demands interactive queries in data analysis tasks; however, the MapReduce framework has batched execution and an absence of interactivity, which could be resolved by upgrading looping, caching, pipelining, recursion and incremental-processing strategies. Dealing with low-level data with weak semantics also necessitates query optimization, yet MapReduce lacks management and prospective reuse of results; we can utilize advancements in processing, specification adjustment, plan refinement, operator reordering, code analysis and dataflow improvement for the reuse of results [4].
The Internet of Things (IoT) may produce uncertain data; managing such data requires "exploratory queries", which Map-Reduce also lacks. The absence of quick retrieval of approximate data could be addressed by improvised methods such as reasoning based on formal representations, for example ontologies. Finally, we conclude that after solving these problems, we could deliver a sound framework for managing such dynamic, heterogeneous and enormous data, concerned with both high and low levels of information, while also limiting erroneous data at the time of mining big data.

References

1. M.P. Wang, C.-H. Chu, Data management for internet of things: challenges and opportunities, in Green Computing and Communications (GreenCom), 2013 IEEE and Internet of Things (iThings/CPSCom), IEEE International Conference on and IEEE Cyber, Physical and Social Computing (IEEE, 2013), pp. 1144–1151
2. S. Hasan, E. Curry, Approximate semantic matching of events for the internet of things. ACM Trans. Internet Technol. (TOIT) 14(1), 2 (2014)
3. C. Dupont, R. Giaffreda, Supporting smart-city mobility with cognitive internet of things, in Future Network and Mobile Summit, 2013 (IEEE, 2013), pp. 1–10
4. M.-S. Dao, S. Pongpaichet, L. Jalali, K. Zettsu, A real-time complex event discovery platform for cyber-physical-social systems, in Proceedings of International Conference (ACM, 2014), p. 201
5. W. Wang, D. Guo, Towards unified heterogeneous event processing for the internet of things, in 3rd International Conference on the (IEEE, 2012), pp. 84–91
6. M. Aazam, I. Khan, E.-N. Huh, Cloud of things: integrating internet of things and cloud computing, in Proceedings of 2014 11th International Bhurban Conference on Applied Sciences & Technology (IBCAST), Islamabad, Pakistan, 14–18 January 2014 (IEEE, 2014), pp. 414–419
7. S. Mayer, A.K. Dey, F. Mattern, User interfaces for smart things – a generative approach with semantic interaction descriptions. ACM Trans. Comput.-Human Interact. (TOCHI) 21(2), 12 (2014)
8. H. Liu, S. Ma, A heterogeneous data integration model, in Geo-Informatics in Resource Management and Sustainable Ecosystem (Springer, 2013), pp. 298–312

Optimized Hybrid Electricity Generation Pushpa Gothwal, Paridhi Palliwal, and Shubhangi

Abstract Energy is very crucial nowadays owing to the heavy use of technology; hence renewable energy sources must help fulfill the current energy requirement. Therefore, this paper presents the design of an optimized hybrid electricity generation system at low cost, built using solar, wind and piezoelectric sensors. Previous studies describe many techniques for generating electricity from each of these sources individually. Hence, this paper presents the design of a low-cost optimized hybrid energy generation system that maximizes the electricity generated. Keywords Sensor · Electricity · Walking platform

1 Introduction In the present period, the use of electronic gadgets has grown over recent years because of their low cost, and anyone from the working class to the wealthy can afford an electronic device to ease their daily routine. Nowadays the use of electricity has increased by a significant amount because of the growing population of the country, yet the production of energy from available sources is still limited. Hence, methods are required that are either perpetual or that produce power from wasted energy. Previous work developed many energy generation techniques such as solar, wind, piezoelectric and dynamo [1–5], of which solar and wind systems are already installed in many institutions, households and businesses, while techniques such as the piezoelectric dynamo are still being improved in terms of power output. Governments also provide funds for implementing this kind of work. Thus, many researchers generate energy by P. Gothwal (B) · P. Palliwal · Shubhangi Department of ECE, Amity School of Engineering and Technology, Amity University Jaipur, Jaipur, Rajasthan, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_31

means of a piezoelectric sensor [6–15], amplifying the output energy up to a maximum of about 50 V. Thus, the goal of this paper is to generate maximum energy using a low-cost walking platform.

2 Proposed Work The block diagram of the proposed work, shown in Fig. 1, represents the energy generated by hybrid sources. The system is implemented using a solar panel, a wind fan and an array of piezoelectric sensors along with a battery and a voltage multiplier. The walking platform is designed using 36 piezoelectric sensors arranged in a series-parallel combination. The tile is connected to a voltage multiplier circuit, built from capacitors and diodes, which is used to increase the voltage. The dimensions of the tile are 300 mm × 300 mm (width and length) as shown in Fig. 2.

Fig. 1 Optimized hybrid energy generation

Fig. 2 Output generated via solar and wind

The platform uses PZT (lead zirconate titanate) sensors of 40 mm diameter.

3 The Material and Method The energy generation method is based on an experimental setup, and a prototype model of the hybrid energy generation system was designed. The prototype includes a solar panel, a wind fan and 36 piezoelectric sensors along with a battery. The piezoelectric sensors (PZT material, 40 mm diameter) are arranged in rows and columns, and the generated voltage and current are measured with a multimeter. The walking platform measures 300 mm × 300 mm. Each tile uses hard cardboard as a base carrying the 36 piezoelectric sensors. A glue gun is used to fix the sensors and make them immovable, and a layer of flexible material such as rubber covers the sensors at the top. The scaled prototype of the walking platform and of the solar and wind setup is shown in Figs. 2 and 3. To achieve the maximum output voltage and current with this hybrid model, the following points must be considered: 1. Sensors should be fixed firmly and the tapping frequency should be high. 2. A series-parallel combination should be used instead of a purely series or purely parallel one to obtain maximum output voltage and current. 3. Permutations of the series-parallel combination should be explored for maximized output. 4. The sensors must be aligned at the center. 5. The solar panel should be movable to track the sunlight. 6. The wind fan should be placed at the top, facing the wind.
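The series-parallel sizing rule in points 2 and 3 can be sketched numerically. In the ideal case, sensors in series add their voltages and parallel branches add their currents; the per-sensor figures below are illustrative assumptions, not measurements from the paper:

```python
def array_output(v_per_sensor, i_per_sensor_ua, n_series, m_parallel):
    """Open-circuit voltage and short-circuit current of an ideal
    series-parallel piezo array (losses and phase mismatch ignored)."""
    voltage = n_series * v_per_sensor          # series sensors add voltage
    current_ua = m_parallel * i_per_sensor_ua  # parallel branches add current
    return voltage, current_ua

# 36 sensors wired as 6 branches of 6 in series, assuming (hypothetically)
# about 9 V and 4 uA per sensor tap
v, i = array_output(9.0, 4.0, n_series=6, m_parallel=6)
```

In practice footsteps do not strike all sensors in phase, which is why the measured values in Table 1a vary with the applied force.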

Fig. 3 Output-generated via tile

Table 1 a Prototype testing of the walking platform. b Results of solar and wind energy generation

(a) Configuration III: 36 sensors (6 sensors in series and 6 such series in parallel, with 18 sensors in the center and the others on the edges)

S. No. | Applied force (weight in kg) | Voltage (AC) (in V) | Voltage (DC) (in V) | Current (in µA)
1      | 29                           | 39.4                | 34.8                | 17.9
2      | 54                           | 50.2                | 54.1                | 25.1
3      | 66                           | 73.3                | 54.2                | 25.1
4      | 85                           | 68.2                | 42.8                | 48.2

(b)

S. No. | Power (W) (solar) | Power (W) (wind) | Power (mW) (piezoelectric sensor) | Total power (W)
1      | 1.76              | 0.9              | 3.57                              | 2.6035
2      | 3.39              | 0.9              | 3.47                              | 4.293
3      | 2.4               | 0.9              | 3                                 | 3.3003
4 Results Discussion Energy generation from sustainable resources was carried out using three different methods: solar energy generation, wind energy generation and piezoelectric energy harvesting. Figure 2 shows the setup and the energy generated by the solar panel and wind fan, and Fig. 3 shows the energy generated by the walking platform. Table 1a, b gives the total energy generated in watts during the morning, daytime and evening. As can be seen, the solar panel generates maximum energy during the daytime because the solar plate receives maximum exposure to sunlight, as mentioned in Table 1b. The second column of the table gives the energy generated by the wind fan, tested with an external supply to a DC motor; the third column gives the energy generated by the piezoelectric sensors; and the next column gives the total energy generated via the tile.
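The total-power column of Table 1b is the sum of the three sources after converting the piezoelectric contribution from milliwatts to watts. A minimal sketch of that arithmetic follows; the computed sums differ slightly from the tabulated totals, presumably due to rounding or conversion losses in the measurement:

```python
def total_power_w(p_solar_w, p_wind_w, p_piezo_mw):
    """Combine the three sources; the piezo output is given in milliwatts."""
    return p_solar_w + p_wind_w + p_piezo_mw / 1000.0

# Rows of Table 1b: (solar W, wind W, piezo mW)
rows = [(1.76, 0.9, 3.57), (3.39, 0.9, 3.47), (2.4, 0.9, 3.0)]
totals = [round(total_power_w(*row), 4) for row in rows]  # [2.6636, 4.2935, 3.303]
```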

5 Result Analysis Table 1a, b shows the energy generated via the solar, wind and piezoelectric sensors. The current design achieves a higher voltage than the previous designs.

6 Conclusion This study successfully designed a prototype model of optimized hybrid electricity generation, in which the energy produced by the solar, wind and piezoelectric sources is stored in a battery for later use. Energy was generated by the solar panel and wind fan in watts during the morning, daytime and evening; the solar panel generates maximum energy in the daytime because the plate receives maximum sunlight, the wind generation was tested with an external supply to a DC motor, and the tile simultaneously generates a high voltage (73.3 V) and current (48.2 µA). The designed prototype can be applied at stairs, dance floors, metro stations and rooftops to maximize electricity generation.

References

1. S.P. Madhu et al., Electrical power generation by footsteps using piezoelectric transducers. Int. J. Recent Trends Eng. Res. 6(2), 108–115 (2016)
2. M.H.M. Ramli, M.H.M. Yunus, C.Y. Low, A. Jaffar, Scavenging energy from human activities using piezoelectric material. Procedia Technol. 15, 827–831 (2014)
3. R. Shreeshayana, L. Raghavendra, M.V. Gudur, Piezoelectric energy harvesting using PZT in floor tile design. Int. J. Adv. Res. Electr. Electron. Instrum. Eng. 6(12) (2017)
4. E.M. Nia, N.A.W.A. Zawawi, B.S.M. Singh, A review of walking energy harvesting using piezoelectric materials. IOP Conf. Ser. Mater. Sci. Eng. 291(1) (2017)
5. D. Kumar, P. Chaturvedi, N. Jejurikar, Piezoelectric energy harvester design and power conditioning, in IEEE Students' Conference on Electrical, Electronics and Computer Science (2014), pp. 1–6
6. K. Boby, A. Paul, C.V. Anumol, J.A. Thomas, K.K. Nimisha, Footstep power generation using piezoelectric transducers. Int. J. Eng. Innov. Technol. (IJEIT) 3(10), 264 (2014)
7. S. Kale, N. Nawkhare et al., Implementation of piezoelectric tiles to generate electricity. Int. J. Eng. Res. Electr. Electron. Eng. (2018)
8. A. Patil, M. Jadhav, S. Joshi, E. Britto, A. Vasaikar, Energy harvesting using piezoelectricity, in International Conference on Energy Systems and Applications (2015), pp. 517–521
9. X. Li, V. Strezov, Modelling piezoelectric energy harvesting potential in an educational building. Energy Convers. Manage. 85, 435–442 (2014)
10. H. Rumman, F.M. Guangul, A. Abdu, M. Usman, A. Alkharusi, Harvesting electricity using piezoelectric material in malls, in 2019 4th MEC International Conference on Big Data and Smart City (2019), pp. 1–5
11. E.M. Nia, N.A.W.A. Zawawi, B.S.M. Singh, Design of a pavement using piezoelectric materials. Materialwiss. Werkstofftech. 50(3), 320–328 (2019)
12. A.M.M. Asry, F. Mustafa, M. Ishak, A. Ahmad, Power generation by using piezoelectric transducer with bending mechanism support. Int. J. Power Electron. Drive Syst. 10(1), 562 (2019)
13. J. Chen, Q. Qiu, Y. Han, D. Lau, Piezoelectric materials for sustainable building structures: fundamentals and applications. Renew. Sustain. Energy Rev. 101, 14–25 (2019)
14. B.M. Kreiner, R. Schaub, T. Knezevich, U.S. Patent Application No. 15/670,922 (2019)
15. Z.A. Al Haddad, M. Al Kaabi, R.Y. Kharouf, H. Al Derei, W. Shakhatreh, U.S. Patent Application No. 15/727,060 (2019)

Access Control of Door and Home Security System Anila Dhingra, Tanya Mittal, Soniya Moolchand Heera, and Varun Menaria

Abstract The Internet of Things is the communication of everyday objects with one another. This project aims to monitor and manage a house lock using the Internet of Things. IoT is used remotely to watch the connection and to detect the presence of a nearby person without any physical contact: whenever the doorbell is pressed, the Raspberry Pi camera module is activated and captures an image. The photographs are sent to an email over Wi-Fi, and through the web server the user can control locking and unlocking of the door. A PIR sensing element senses the person, and the camera captures the image of the visitor and sends it to the user through an email. Benefits such as these make this system ideal for securing homes in the owner's absence. The Internet of Things (IoT) is a central subject in technology, industry, policy and engineering circles and has become significant news both in the specialist press and in the popular media. The technology is embodied in a wide range of networked devices, systems and sensors, which take advantage of advances in computing power, falling component costs and network interconnection to offer capabilities not previously possible. Keywords Internet of Things · PIR sensor · Camera module · Servo motor · Buzzer

A. Dhingra · T. Mittal (B) · S. M. Heera (B) · V. Menaria (B) Poornima College of Engineering, Jaipur, Rajasthan, India e-mail: [email protected] S. M. Heera e-mail: [email protected] V. Menaria e-mail: [email protected] A. Dhingra e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_32

1 Introduction The idea of the project is to design an automatic device for locking and opening the door. Nowadays an automatic device can take over a huge amount of human work; people are prone to mistakes, and in stressful circumstances the probability of error increases, whereas an automatic device can work reliably, flexibly and with almost zero error [1]. The system is intended for security purposes. It works as follows: when the bell rings at the entrance, or when human movement is detected in range by the PIR sensor, it acts as a trigger to the camera, which captures a picture of the person standing in front of the door. The picture is shown to the registered user who is away from home; the user can then identify the person and, if that person should be allowed to enter, unlock the door, or else it remains locked. This provides strong security for homes, without any human intervention.

1.1 Proposed System The proposed system is cost-effective and compact, so installation becomes much easier. The system contains various modules such as intrusion detection, motion detection and alarm, which are discussed in detail below [2, 3]: Intrusion Detection Humans emit heat in the form of infrared radiation, which is not visible to the naked eye but can still be detected. A passive infrared (PIR) sensor detects this radiation to identify the presence of a human. Alarm As soon as the presence of any human is detected via the PIR motion sensor, the current time is checked; if it is past 11:00 p.m., the alarm is raised. Notification Once the alarm is raised, a 30 s time frame is allotted to turn it off: if the disable code is entered and matches the code fetched from the database within that time frame (30 s), the alarm is turned off and the user is notified [4].
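The time gating and disable window described above can be captured in a short sketch. The 11:00 p.m. threshold and 30 s window come from the text; the code-checking details are assumed:

```python
from datetime import time

ARMED_FROM = time(23, 0)   # alarm active after 11:00 p.m. (per the text)
DISABLE_WINDOW_S = 30      # seconds allowed to enter the disable code

def should_raise_alarm(motion_detected, now):
    """Raise the alarm only when motion occurs during the armed hours.
    (Wrap-around past midnight is not handled in this sketch.)"""
    return motion_detected and now >= ARMED_FROM

def handle_disable(entered_code, stored_code, seconds_elapsed):
    """Turn the alarm off only for the correct code inside the 30 s window."""
    return entered_code == stored_code and seconds_elapsed <= DISABLE_WINDOW_S
```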

1.2 Implementation of the IoT-Based Security System Below are the steps for implementing the IoT-based security system for monitoring the observation area using a Raspberry Pi and OpenCV with Android app warnings [1, 5]. • Install the OpenCV module on Raspbian OS and then install the imutils package using pip install imutils.

• To capture images, install and enable the Raspberry Pi camera module.
• Import the PiCamera module to access the video stream of the Raspberry Pi camera.
• To upload the pictures to Dropbox, install the necessary packages.
• Link the Dropbox account to the Raspberry Pi board with the necessary functions.
• The Raspberry Pi camera captures images of the surveillance area.
• The images are then processed by the OpenCV module in Raspbian OS.
• If motion is detected, the GSM module sends alerts to the Android app on the registered phone number, and the images are uploaded to Dropbox.
• The Android app then gives a voice output, "camera image is captured", to the user. The camera continuously samples for any movement [5] (Figs. 1 and 2).
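The motion-detection step in this pipeline is typically frame differencing. Below is a minimal, library-light sketch of the idea on synthetic frames; the thresholds are assumptions, and a real deployment would use the OpenCV calls described in the text:

```python
import numpy as np

MOTION_THRESHOLD = 25        # assumed per-pixel grey-level change
MIN_CHANGED_FRACTION = 0.01  # assumed fraction of pixels that must change

def motion_detected(prev_frame, frame):
    """Flag motion when enough pixels differ between consecutive frames."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    changed_fraction = (diff > MOTION_THRESHOLD).mean()
    return bool(changed_fraction > MIN_CHANGED_FRACTION)

# Two synthetic 64 x 64 greyscale "frames": a static background, then a
# frame in which a visitor-sized bright patch appears.
background = np.zeros((64, 64), dtype=np.uint8)
frame = background.copy()
frame[10:30, 10:30] = 200
```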

Fig. 1 Block diagram of circuit

Fig. 2 Flow chart

2 Methodology 2.1 Hardware Implementation The system hardware follows the block diagram below. The whole block diagram is divided into two sections: the server side, and the client or consumer side. The server side is installed entirely on the Raspberry Pi, where the server is built with Linux, Apache, MySQL and PHP (LAMP). Two PHP files are created and stored on the server running on the Raspberry Pi. The Raspberry Pi has 40 GPIO pins [4], which are used to control the home appliances. A relay is connected to the GPIO pins of the Raspberry Pi through the relay driver circuitry. The output of the GPIO pins is 3.3 V; in order to drive the

relay, at least 6 V is necessary, which is obtained with the help of the relay driver hardware. Every home appliance is connected to a relay. The client side is simply the user side. Users access the Raspberry Pi through the web from a mobile device: once the phone is connected to the network, entering the IP address of the Raspberry Pi in the mobile browser brings up the web page containing the UI to control the home appliances in each room [1]. The UI shows the number of rooms and the home appliances present in each room, and contains buttons to flip the status of the appliances in each room. Several home appliances can be controlled simultaneously.
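The appliance-toggling logic served by the two PHP files can be sketched as follows. The pin numbers and appliance names are hypothetical, and RelayBoard stands in for the RPi.GPIO calls used on the actual board:

```python
# Hypothetical GPIO pin map for the relay driver; the real wiring differs.
APPLIANCE_PINS = {"room1_light": 17, "room1_fan": 27, "room2_light": 22}

class RelayBoard:
    """Stand-in for RPi.GPIO so the toggle logic can run off-board;
    on the Pi, set_pin would call GPIO.output(pin, level)."""
    def __init__(self):
        self.levels = {pin: 0 for pin in APPLIANCE_PINS.values()}

    def set_pin(self, pin, level):
        self.levels[pin] = level

def toggle(board, appliance):
    """Flip one appliance's relay, as switchDevice.php does per request."""
    pin = APPLIANCE_PINS[appliance]
    board.set_pin(pin, 1 - board.levels[pin])
    return board.levels[pin]
```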

2.2 Software Implementation The programming is done in PHP. Two PHP files are created: one is index.php, and the other is switchDevice.php. These two files are stored on the local LAMP server of the Raspberry Pi [2]. Dreamweaver software is used to develop the web page and to create the UI presented on it.

3 Practical Implementation When a visitor arrives at the house and finds the door closed, the visitor presses the bell at the entrance. This is supported using a pushbutton switch in the developed system: the switch is connected to a GPIO pin and is monitored continuously. When a key press is recognized, the system proceeds with the subsequent steps of the function. The system is connected to a speaker for addressing the visitor; the board gives sound output through the 3.5 mm jack. Whenever the system needs to instruct the visitor about the next step, it plays the corresponding wave file saved in the home directory [1]. If the switch at the entrance is pressed, a .wav file is played asking the visitor to stand in front of the camera. The picture of the visitor is captured by a digital camera, and then it is sent to the primary host, and whenever required to the auxiliary host. The reply received determines whether the door is to be unlocked or not [3]. The implementation of the proposed framework has been done using the Raspberry Pi Model B board. The Raspberry Pi is a single-board computer developed in Britain by the Raspberry Pi Foundation, with features that are extremely useful in projects of this kind. The board features an on-board 10/100 Ethernet RJ45 jack, dual USB ports, a 3.5 mm jack, HDMI audio output and twenty-six dedicated GPIO pins, as well as a UART, an I2C bus and a SPI bus

Fig. 3 Practical circuit

with two chip selects, I2S audio, 3.3 V, 5 V and ground. The board supports video output through the HDMI and RCA video ports. The essentials of the project include the switch interface, network connectivity, USB digital camera support, 3.5 mm speaker support and the system interfaces. Consequently, the board is an attractive choice for this system. It is powered with 5 V through a micro-USB connector [5]; its power rating is 5 V DC, 700–1500 mA. Code to send data from the Raspberry Pi to the user through email:

from email.mime.multipart import MIMEMultipart
from email.mime.base import MIMEBase
from email.mime.text import MIMEText
from email import encoders

gmail_user = "[email protected]"  # Sender email address
gmail_pwd = "xxxx"  # Sender email password
to = "[email protected]"  # Receiver email address
subject = "Security Breach"
text = "Visitors at Door. Find attached picture."

(Fig. 3).

3.1 Results and Discussion The complete hardware setup is shown. The camera, which is intended to be placed outside the door at the top, is above the screen. The doorway security device has been demonstrated using a solenoid actuator controlled through a relay. The driver hardware is installed on a breadboard. The system is connected to the web using a CAT5 cable through the Ethernet port. Internet connectivity established from a PC was shared with the system. The host can reply to the system whether to permit the visitor to enter the building by responding to the email received.

4 Advantages • Managing all of your home devices from one place. The convenience factor here is enormous. • Flexibility for new devices and appliances. • Maximizing home security. • Remote control of home functions. • Increased energy efficiency. • Improved appliance functionality. • Home management insights.

5 Limitations • Cost: The biggest disadvantage of a smart home system is its cost. Quite a number of companies provide smart home systems, but all of them are quite expensive. • Dependency on the Internet: The basic requirement for the smart home system is the Internet. • Reliability: A smart home is extremely reliant on the Internet connection. If the connection drops, we are left with a lot of smart products that will not work. Additionally, wireless signals can be interrupted by other electronics in the home, causing some smart products to function slowly or not at all.

6 Conclusion Security is of primary importance in today's world. Traditional systems have attempted to provide it using technologies such as microcontrollers and their updated versions, i.e., Arduino boards. The proposed system protects the house by noticing the presence of any intruder: if an intruder is detected, an alarm is raised and the owner and law enforcement are notified through email. The proposed work eliminates the overhead associated with traditional systems, such as long downtime during repair and maintenance and any kind of device tampering that an intruder or hacker could perform [4]. The proposed work uses the Raspberry Pi as the controller; being a recent technology, it provides greater compatibility with the latest devices and sensors and more room for future enhancement, for example exploiting more of the Raspberry Pi's functionality in areas of efficient power use by automating the control of lights for effective power management.

7 Applications

• Lighting control
• HVAC
• Lawn/gardening management
• Smart home appliances
• Improved home safety and security
• Home air quality and water quality monitoring
• Natural language-based voice assistants
• Better infotainment delivery
• AI-driven digital experiences
• Smart switches
• Smart locks
• Smart energy meters.

References

1. A. Biswal, R. Marimuthu, S. Balamurugan, S. Ravi, Design of sensor network for real time data acquisition of water level within the agricultural field. ARPN J. Eng. Appl. Sci. 10(8), 3391–3396 (2015)
2. H. Kyu, J.W. Baek, Wireless access monitoring and control system based on digital door lock. IEEE Trans. Consum. Electron. 53(4), 1724–1730 (2007)
3. Y. Zhai, X. Cheng, Design of smart home remote monitoring system based on embedded system, in IEEE 2nd International Conference on Computing, Control and Industrial Engineering (CCIE), 2 Aug 2011, pp. 41–44
4. K. Thattai, K.B. Manikanta, S. Chhawchharia, R. Marimuthu, ZigBee and ATmega32 based wireless digital control and monitoring system for LED lighting, in International Conference on Information Communication and Embedded Systems (ICICES), Feb 2013, pp. 878–881
5. M.R. Navya, R. Prakash, Development of secured home automation using social networking sites. 8(20), 1–6 (2015)

Bluetooth-Based Smart Sensor Networks Vibha Beniwal, Tarun Mishra, Amit K. Jain, and Garima Mathur

Abstract Wireless sensor networks are networks of small computers fitted with sensors, microprocessors, and wireless interfaces. This technology has attracted a lot of attention lately. Such networks are proposed for a wide range of modern and fascinating applications, from personal health care to environmental monitoring and military applications. Different wireless technologies, such as simple RF, Bluetooth, UWB or infrared, may be used for communication between sensors. This paper outlines the core concepts, features, and problems of Bluetooth-based wireless sensor networks, together with the implementation of a simple Bluetooth-based sensor network, and presents the main problems experienced during implementation and the solutions applied. The focus is on smart sensor networks using the Bluetooth topology: how smart sensor networks can be implemented using Bluetooth technology, how they are used for communication in the industrial field, how they are built, and how they work. The architecture, network, applications, and operation are reviewed for communication and research purposes. Keywords Wireless sensor networks · Media access control · Wide area network

1 Introduction Wireless sensor networks are networks of small devices having sensors, microprocessors, and wireless communication interfaces. This technology has become popular lately. WSN technology is widely used for communication in the industrial field, where sensors are used for communication: signals are sent through wires from each field device and are monitored in a central control room. With the introduction of the wireless concept, the field device is V. Beniwal (B) · T. Mishra · A. K. Jain · G. Mathur Poornima College of Engineering, Sitapura, Jaipur, Rajasthan 302020, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_33

used with a minimized wiring cost. Wireless technology was introduced to eliminate the wires, as they are costly, bulky and can be easily damaged [1]. In 1994, Ericsson Mobile Communications started research to explore the usefulness of a low-power, low-cost radio interface and to find a way to remove the wires between devices. The Bluetooth technology was then invented by the electrical engineer Dr. Jaap Haartsen, who named it in honor of the tenth-century King Harald "Bluetooth" of Denmark; the aim of Bluetooth is harmony and unification [2]. It enables different devices to communicate through wireless connectivity. Bluetooth uses the frequency-hopping spread spectrum technique and works in the unlicensed ISM band at 2.4 GHz. A typical Bluetooth device has a range of about 10 m, which can be extended to 100 m. The communication channel supports a total bandwidth of 1 Mb/s. A single channel supports a peak data transfer rate of 721 kbps, with support for a maximum of three voice channels [1, 2].

1.1 Technology Overview Bluetooth is a low-priced, short-range wireless technology with small power consumption, suitable for diverse small battery-driven devices such as mobile phones, cameras and laptops [1]. Two topologies are used in the Bluetooth architecture: the piconet and the scatternet. Piconet: an ad hoc network used to link wireless devices using Bluetooth technology. It is a group of up to 8 devices that share the same frequency sequence, and it uses the concept of master and slave: each piconet has one master, and the rest of the devices act as slaves. Usually, the device that starts the piconet acts as the master [3]. To establish a piconet, a device first searches for other Bluetooth devices in range. When two devices share the same frequency, the required information is exchanged, and the paging procedure can be used to establish the connection. If more than 7 devices need to exchange information, there are two possibilities. The first is to put one or more devices into the park state; the low-power modes used in Bluetooth are sniff, hold, and park [4]. When a device changes to the park mode, it disconnects from the piconet, but timing synchronization is maintained. The master of the piconet continuously transmits beacons to invite parked slaves to rejoin the piconet; a slave will rejoin only if there are fewer than 7 active slave devices in the piconet [2–4], otherwise the master must first park one of the active slaves. These actions introduce delay, which can be undesirable for applications such as process control that need an instant response from the command center. Scatternet: composed of interconnected piconets, supporting communication among more than 8 devices. Scatternets are formed when a device of one piconet, master or slave, chooses to act as a slave in a second piconet [4]. Scatternets provide higher throughput. Also, across different piconets, multiple-hop
Also in different piconets, multiple-hop

Fig. 1 A Scatternet

routing between devices is possible. This means that at any one time a device can be active in only one piconet, so devices hop from one piconet to another depending on the channel capacity [3] (Fig. 1).
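The 7-active-slave limit and the park mechanism described above can be modelled in a few lines. This is a toy sketch; real parking also involves beacon scheduling and parked-member addresses:

```python
MAX_ACTIVE_SLAVES = 7  # Bluetooth allows at most 7 active slaves per piconet

class Piconet:
    """Toy model of the master/slave membership rules described above."""
    def __init__(self, master):
        self.master = master
        self.active = []   # active slaves
        self.parked = []   # parked slaves keep synchronization only

    def join(self, device):
        # If the piconet is full, the master parks the oldest active slave.
        if len(self.active) == MAX_ACTIVE_SLAVES:
            self.parked.append(self.active.pop(0))
        self.active.append(device)

net = Piconet("master")
for n in range(8):
    net.join("slave%d" % n)  # the eighth join forces one slave into park
```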

2 Bluetooth-Based Sensor Network To produce countless different applications, developers are proving interoperability between different devices; wireless sensor networks are an example of such applications. A WSN consists of many small devices, each containing a sensing unit, microprocessor, power source, and wireless communication link. Important features of wireless sensor networks: 1. Network nodes cooperate with each other during the task in progress. 2. The network has a particular focus on data. Since the positions of nodes in the field are not fixed and the placement of smart sensor nodes is not planned, some sensor nodes may end up in positions where they either cannot accomplish the required measurement or have a higher error probability [1, 5]. To overcome these problems, a number of redundant nodes are deployed. These nodes additionally merge and share data and thus ensure better outcomes. The collected data is sent to the users by a "gateway" using multiple-hop routes among the smart sensor nodes scattered in the field.


2.1 Bluetooth Hardware Architecture

Bluetooth hardware has three prime functional modules:

1. A 2.4 GHz Bluetooth radio-frequency transceiver unit.
2. A link controller unit.
3. A host controller interface.

The host controller consists of a digital signal processing section, the link controller, and a central processor. The link controller comprises both hardware and software parts that execute the baseband processing and the physical-layer protocols. The CPU core helps the Bluetooth module filter page requests and handle inquiries [5]. The link manager is software that runs on the CPU and communicates with other link managers using the link manager protocol.

3 A Wireless Sensor Network

A wireless sensor network provides two important services: querying and tasking.

Querying: Queries are used when the current value of an observed event is needed.

Tasking: Tasking is a more complex operation, useful when an event needs to be observed over a long period.

Both services are issued to the network through the "gateway," which also forwards the collected responses to users [1]. The main functions of a gateway are:

• Interaction with the sensor network, using short-range wireless transmission.
• Discovery of smart sensor nodes.
• Providing mechanisms for sending data to and receiving data from sensors, and for routing.
• Controlling gateway attachment and data flow to and from the sensor network.
• Maintaining a standard view of the currently participating sensors and their characteristics.
• Providing uniform access to sensors regardless of their type, location, or network topology; issuing queries and tasks; and gathering responses.
• Communication with users.
• Communication with users and other sensor networks through the Internet, a WAN, satellite, or other communication technology.
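The querying/tasking split described above can be sketched as a small gateway interface. This is an illustrative design, not from the source; the node interface (read(), subscribe()) is assumed for the example.

```python
# A minimal sketch of the two gateway services: querying (one-shot reads)
# and tasking (long-running observation via callbacks).
class Gateway:
    def __init__(self, nodes):
        self.nodes = nodes          # discovered smart sensor nodes
        self.tasks = []             # long-running observation tasks

    def query(self, attribute):
        """Querying: return the current value of the observed event per node."""
        return {n.name: n.read(attribute) for n in self.nodes}

    def task(self, attribute, callback):
        """Tasking: watch an event over a long period, reporting via callback."""
        self.tasks.append((attribute, callback))
        for n in self.nodes:
            n.subscribe(attribute, callback)
```

A query is a synchronous round of reads; a task registers interest once and then receives reports as the event evolves, which is why the two are offered as separate services.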


3.1 Sensor Network Implementation

The main goal of the sensor network implementation was to construct a hardware platform and common software solutions that can support research on wireless sensor network protocols. The implemented sensor network consists of smart sensor nodes and a gateway. Every node carries several sensors and is equipped with a microcontroller and a Bluetooth module [4, 6]. The gateway and the smart nodes act as members of the same piconet. An example is a Bluetooth pressure sensor: the sensing element is attached to the Bluetooth node, together with smart signal-conditioning circuitry with temperature compensation and a transducer electronic data sheet (TEDS). The sensor microcontroller controls node communication and also holds the TEDS structure in its memory [6].

3.2 Smart Sensor Node Discovery

The first step of gateway installation is locating the smart sensor nodes. The gateway finds all smart nodes and builds a list of sensor attributes and the network topology. Discovery then runs continuously, which makes it easy to remove existing sensors and add new ones [1, 6, 7].

Algorithm: After the gateway is initialized, Bluetooth performs the inquiry procedure. Once Bluetooth devices are detected, their major and minor device classes are checked; the sensor nodes set these parameters to identify the type of device and the type of attached sensor. The service class field is used to give further detail about the offered service [4]. If a discovered device is not a smart node, it is discarded. As no dedicated sensor profile exists at present, the service database is searched for the serial port profile. Once a matching record is found, the Bluetooth link is accepted and data transfer begins.
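The discovery algorithm above can be summarized in code. This is a hedged sketch: the Bluetooth calls (inquiry(), sdp_lookup(), connect()) and the class-of-device constants are placeholders standing in for a real stack API, not actual library functions.

```python
# Hypothetical sketch of the discovery algorithm: inquiry, device-class
# filtering, serial-port-profile lookup, then connection.
SENSOR_MAJOR_CLASS = 0x1F      # assumed class values marking a smart node
SENSOR_MINOR_CLASS = 0x01

def discover_smart_nodes(bt):
    """Run inquiry and keep only devices that are smart sensor nodes
    offering the serial port profile."""
    nodes = []
    for dev in bt.inquiry():
        # discard devices whose class fields do not mark a smart node
        if (dev.major_class, dev.minor_class) != (SENSOR_MAJOR_CLASS,
                                                  SENSOR_MINOR_CLASS):
            continue
        # no dedicated sensor profile exists, so look for the serial port profile
        if not bt.sdp_lookup(dev, "SerialPort"):
            continue
        bt.connect(dev)         # accept the link and start transferring data
        nodes.append(dev)
    return nodes
```

The two filters mirror the text: the class-of-device check rejects non-sensor devices cheaply, and the service database lookup confirms that data transfer over the serial port profile is actually possible before a link is made.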

4 Advantages

• Inexpensive.
• Low power consumption.
• Short range.
• Wireless technology.
• Reasonable throughput.
• Low maintenance cost.
• Easy link formation.
• Shares voice and data.

5 Limitations

• Short data range.
• Interference with other devices.
• Low security.
• Low data rate.

6 Conclusion

Wireless sensor networks are a fascinating research area with many feasible applications and many solutions. They combine various tiny devices capable of interacting and processing data. Bluetooth is an easy and suitable option for data communication in sensor networks [2]. When planning routing and application-level procedures, many issues related to the MAC, physical, routing, and application layers must be considered [8]. For automatic link-up and information exchange, Bluetooth devices need to be brought within range of one another.

7 Applications

• Continuous sensing
• Health monitoring
• Event detection and local control of actuators
• Smart buildings
• Smart grid and energy control systems
• Environmental monitoring
• Military, security, and surveillance
• Industrial safety
• Home and other commercial areas.


References

1. Wireless World Research Forum, Book of Visions. https://www.wireless-world-research.org
2. G.J. Pottie, W.J. Kaiser, Wireless integrated network sensors. Commun. ACM 43(5) (2000)
3. J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, K. Pister, System architecture directions for networked sensors, in Proceedings of the ASPLOS (2000)
4. J.M. Rabaey, M.J. Ammer, J.L. da Silva Jr., D. Patel, S. Roundy, PicoRadio supports ad hoc ultra-low power wireless networking. IEEE Comput. Mag. (2000)
5. H. Gharavi, K. Ban, Video-based multihop ad-hoc sensor network design, in Proceedings of World Wireless Congress, San Francisco, CA, May 2002, pp. 469–474
6. J.M. Kahn, R.H. Katz, K.S.J. Pister, Mobile networking for smart dust, in ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom 99), Seattle, WA, 17–19 Aug 1999
7. Scenarios for Ambient Intelligence, EU IST Advisory Group
8. M. Weiser, The Computer for the Twenty-First Century (Scientific American, 1991)

Behaviour of Hollow Core Concrete Slabs

Mayank Mehandiratta and Praveen Kumar

Abstract This paper presents a literature review on hollow core concrete slabs. Studies related to the behaviour of hollow core concrete slabs are summarized. A hollow core concrete slab is a voided slab with a continuous series of hollow cores, placed either concentric or eccentric to the axis of the slab. The present study aims to give an overview of the behaviour of hollow core concrete slabs under various loading conditions.

Keywords Hollow core elements · Hollow core slabs · Sectional properties · Loading conditions · Precast elements

1 Introduction

Hollow core slabs are now widely used as structural elements. Precast hollow core members have been used over the past few years as typical lightweight members in multi-storey apartments. Lightweight concrete members are made by hollowing them out, i.e. reducing the area of concrete. Hollow core slab members answer a basic need of the evolving concrete industry: making structural elements lightweight without compromising the strength of the members. Precast hollow core slabs are also very popular in countries of Northern and Eastern Europe. These members are convenient and economical to use as structural elements in the construction industry, and they have been found to be a far more sustainable structural system than the conventional RCC slabs used over the years.
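The reason hollowing out a slab saves weight at little cost in stiffness can be shown with a quick sectional calculation. This is an illustrative sketch with invented dimensions, not values from any of the reviewed studies: cores near the neutral axis remove area but contribute little to the second moment of area.

```python
# Back-of-the-envelope comparison of a solid and a hollow core section.
# All dimensions in mm; numbers are illustrative only.
import math

def solid_props(b, h):
    """Area and second moment of area of a solid rectangular section."""
    return b * h, b * h**3 / 12

def hollow_core_props(b, h, d, n):
    """Same section minus n circular cores of diameter d on the centroidal axis."""
    a_core = n * math.pi * d**2 / 4
    i_core = n * math.pi * d**4 / 64          # cores centred on the axis
    a_solid, i_solid = solid_props(b, h)
    return a_solid - a_core, i_solid - i_core

# 1200 x 270 mm slab with six 150 mm cores (illustrative numbers)
a_s, i_s = solid_props(1200, 270)
a_h, i_h = hollow_core_props(1200, 270, 150, 6)
```

For these dimensions, the cross-sectional area drops by roughly a third while the second moment of area drops by less than 8%, which is why hollow cores reduce weight and concrete use with little loss of bending stiffness.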

M. Mehandiratta (B) · P. Kumar Department of Civil Engineering, Rajasthan Technical University, Kota, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_34


2 Literature Review

Ou et al. examined a hybrid timber-concrete (HTC) floor system in which a corrugated core was introduced between the tensile timber layer and the concrete layer. They investigated the flexural performance of the system experimentally using two configurations: core orientation parallel to the span and core orientation transverse to the span. A total of eight HTC floor panels were prepared and examined for flexural capacity and critical failure mode. The longitudinal specimens gave the best composite action and performance, reaching 73% of the moment-carrying capacity of the standard HTC section with the novel composite action. In addition, the concrete reduction was between 42 and 51% compared with the standard HTC panels [1].

Park et al. reported shear tests on ten precast prestressed hollow core slab specimens, all cast by the extrusion method. The concrete had a water-cement ratio of about 36% and almost zero slump. The design strength of the concrete for these test specimens was 40 MPa, while the measured compressive strength was 60.5 MPa. Table 1 shows the concrete mix design for the test specimens. Specimens of over 315 mm depth were tested for web shear capacity. Although the specimens showed very brittle shear failure modes, the web shear capacities were not found to be compromised by size effects [2].

Liu et al. investigated the behaviour of hollow core slab elements of dimensions 10 m × 1.2 m × 0.27 m. Six hollow core elements were cast with grouted joints, and a 50-mm layer of concrete topping was added 30 days after the joints were cast. During testing, the specimens were supported on steel beams and loaded with a vibration exciter and a hammer. Finite element modelling with solid elements was also used.
Both the experimental and the analytical results showed good agreement. The relative elastic moduli obtained were 1.05 for the hollow core elements, 0.55 for the joints, and 0.8 for the topping [3].

Pachalla et al. investigated the failure modes and load resistance of hollow core slabs with openings. They cast six hollow core slabs and quantified the effects of openings in both the shear and the flexure zones; a finite element study of the prestressed hollow core slabs was carried out alongside. The predicted ultimate strength deviated from the experimental results by 7%. The results indicated that providing openings in slabs decreases their strength and stiffness, and the parametric study showed that the opening size should be carefully selected based on its location: increasing the opening size in a shear-dominated zone is more adverse than increasing it at the mid-span location. Care should be taken when calculating the sectional capacity of slabs for a given loading condition, since shear-dominated loading can significantly decrease the sectional capacity of hollow core slabs both with and without openings [4].

Table 1 Concrete mix design used for test specimens [2]

W/C (%)   S/a (%)   W (kg/m3)   C (kg/m3)   S (kg/m3)   G (kg/m3)
36.2      34.9      160         340         683         1268

Prakashan et al. carried out an experimental investigation on four hollow core slab specimens, with a solid specimen as the reference case. Load-deflection curves were plotted up to the formation of the first concrete crack, and the applicability of the conventional bending capacity formula to the strength of HCS was assessed [5].

Foubert et al. studied the behaviour of flexurally strengthened hollow core slabs. Seven simply supported specimens subjected to monotonic loading were tested to failure. Failure modes, cracking patterns, deflections, and load-strain relationships were examined, and the experimental results were compared with theoretical predictions based on Canadian and American standards [6].

Awad et al. investigated sandwich panels for building floors designed to function as decking supported on regularly spaced timber joists. Such panels were designed for relatively short spans (0.6 m) and therefore had a small thickness of 15 mm, comprising 1.8-mm-thick GFRP faces and 11.4-mm-thick high-density (850 kg/m3) phenolic cores. The authors focused on the panels' response to point loads applied at mid-span.
Further studies examined the free vibration behaviour of the panels and the restraining effect of their connections to the timber joists. While this represents a use of sandwich panels in building floors, in this case the panels are not the main structural element [7].

Correia et al. investigated composite sandwich panels developed to function as a monolithic slab, requiring no additional support joists or beams along the floor span. The authors carried out a comprehensive study of the mechanical behaviour of sandwich panels with GFRP faces and ribs, comprising cores of low-density PUR foam (68 kg/m3) and polypropylene (PP) honeycomb (110 kg/m3). The panels were designed for spans up to 2.3 m, with a total thickness of approximately 105 mm, 7-mm-thick faces, and 6-mm-thick ribs. Besides characterising the mechanical properties of the materials and panel assemblies through small-scale tests, the authors carried out full-scale static and dynamic flexural experiments to determine the response of the panels under serviceability and ultimate limit state conditions; numerical investigations using finite element models (FEM) were also performed. This work confirmed the high potential of such composite sandwich panels for building floors and other civil engineering structural applications, showing that adequate stiffness and strength can be obtained with this solution [8].

Osei-Antwi et al. proposed a sandwich panel architecture incorporating both "vertical" and "horizontal" grading of the core. The panel comprised an arch-like core distribution: a stiffer core material (250 kg/m3 balsa wood) near the panel edges/supports and at the interface with the top (compressed) face sheet, and a lower-density core (95 kg/m3 balsa wood) in the lower central area. Between these two materials, an FRP laminate was incorporated, acting as an arch structure. This configuration was compared with simple sandwich panels with homogeneous cores of the high-density balsa wood. The structurally graded cores gave higher stiffness and failure loads than the homogeneous high-density cores, illustrating how a complex core assembly can improve the performance of sandwich panels and further optimise their strength- and stiffness-to-weight ratios [9].

Sharaf and Fam investigated large-scale building cladding panels, which differ from existing sandwich cladding in their significantly larger dimensions: a height of 9.1 m and a width of 2.4 m. The panels were fixed at six points, with a maximum distance of 4.1 m between fixation points. The panel thickness was 78 mm, comprising GFRP faces and ribs (1.6-3.2 mm thick) and a PUR foam core with a density of 31.6 kg/m3. The authors tested the panels under a uniformly distributed air pressure load, achieved by installing each panel on a self-reacting airtight frame fed by pressure load actuators; in this configuration the effective span was reduced to 2.4 m, the distance between the panel's lateral edges. In addition to the experimental work, numerical investigations simulated the behaviour of the cladding panels considering the material nonlinearity of the PUR foam and the GFRP ribs. The authors concluded that the proposed panels were fit for application in building facades, with an adequate safety factor relative to the factored design pressure for the windiest region in Canada (where the study was conducted) and deflections lower than span/360 under the maximum design service wind pressure [10].

3 Research Significance

Hollow core slabs are increasingly used in industry and in high-rise buildings, yet studies on hollow core concrete slabs (HCCS) remain limited. This report surveys the literature on HCCS, together with some studies on sandwiched concrete floor panels (SCP). The behaviour of the specimens, such as web shear capacity, bending strength, and shear strength, was noted, along with finite element analyses, and research gaps in both the experimental and analytical studies of the previous literature were identified:

• The slab depth can be increased by 200 mm, and GFRP sheets can be used instead of timber in a new study. The behaviour of the slab in terms of other properties, such as bending capacity and stored energy, can be investigated, and the study can be carried out on hollow core concrete slabs rather than hybrid timber-concrete panels.


• Only shear tests were carried out; other behaviours of hollow core slabs, such as flexural strength and bending capacity, can be examined in future work. The w/c ratio can also be varied from 0.36, and higher or lower design strengths can be checked under several loading conditions.
• Only six hollow core elements were made in the study by Liu et al.; more specimens can be made and their behaviour checked under various loading conditions. GFRP sheets can be sandwiched between slabs as permanent formwork for making hollow core slabs.
• The behaviour of the slab in terms of other properties, such as bending capacity and stored energy, can be investigated, and opening specifications and eccentricity of openings can be brought into the picture.

4 Conclusions

On the basis of the studies reported in this review paper, the following conclusions were made:

1. Studies on hollow core slabs were found to be scarce, and those reported here cover only a limited set of investigations.
2. The previous literature shows that hollow core slabs behave very well under bending.
3. Hollow core slabs are convenient to cast both in situ and in casting workshops.
4. Hollow core slabs are economical to use, as they reduce the concrete area of the section.
5. Some studies on sandwiched panels were also reported, but the materials used did not show particularly good composite behaviour.
6. Studies on the sectional properties of hollow core slabs were found to be limited, so hollow core slabs can be tested further to determine other sectional properties; that would help complete the understanding of their behaviour.

References

1. Y. Ou, J. Gattas, D. Fernando, J. Torero, Experimental investigation of a timber-concrete floor panel system with a hybrid glass fibre reinforced polymer-timber corrugated core. Eng. Struct. 203, 109832 (2020). https://doi.org/10.1016/j.engstruct.2019.109832
2. M.-K. Park, D.H. Lee, S.-J. Han, K.S. Kim, Web-shear capacity of thick precast prestressed hollow-core slab units produced by extrusion method. Int. J. Concr. Struct. Mater. 13(1), 7 (2019)
3. F. Liu, J.-M. Battini, C. Pacoste, A. Granberg, Experimental and numerical dynamic analyses of hollow core concrete floors, in Structures, vol. 12 (Elsevier, Amsterdam, 2017), pp. 286–297


4. S.K.S. Pachalla, S.S. Prakash, Load resistance and failure modes of hollow-core slabs with openings: a finite element analysis. PCI J. (2018)
5. V. Prakashan, J. George, J.B. Edayadiyil, J.M. George, Experimental study on the flexural behavior of hollow core concrete slabs. Appl. Mech. Mater. 857, 107–112 (2017)
6. S. Foubert, K. Mahmoud, E. El-Salakawy, Behavior of prestressed hollow-core slabs strengthened in flexure with near-surface mounted carbon fiber-reinforced polymer reinforcement. ASCE J. Compos. Constr. 20(6)
7. Z.K. Awad, T. Aravinthan, Y. Zhuge, Experimental and numerical analysis of an innovative GFRP sandwich floor panel under point load. Eng. Struct. 41, 126–135 (2012)
8. J.R. Correia, M. Garrido, J.A. Gonilha, F.A. Branco, L.G. Reis, GFRP sandwich panels with PU foam and PP honeycomb cores for civil engineering structural applications: effects of introducing strengthening ribs. Int. J. Struct. Integrity 3(2), 127–147 (2012)
9. M. Osei-Antwi, J. de Castro, A.P. Vassilopoulos, T. Keller, FRP-balsa composite sandwich bridge deck with complex core assembly. J. Compos. Constr. 17(6), 04013011 (2013)
10. T. Sharaf, A. Fam, Analysis of large scale cladding sandwich panels composed of GFRP skins and ribs and polyurethane foam core. Thin-Walled Struct. 71, 91–101 (2013)

Business Process Reengineering: Issues and Challenges

A. Harika, M. Sunil Kumar, V. Anantha Natarajan, and Suresh Kallam

Abstract Software engineering is the structured and organized approach to software development, operation, and maintenance. Organizations are now looking for new strategies to increase competitiveness in the face of rapid change and technological development. The flexibility to adapt to changing consumer demands and to adopt new technologies is necessary to succeed in an increasingly globalized and competitive environment. Business process reengineering (BPR) primarily reorganizes operating processes: it modifies process management, practitioners' positions, process composition, and process quality. BPR is one of the most recent industrial engineering developments, representing a rapid and radical transformation of competitive, value-added processes and of the programs, policies, and organizational structures that support them, in order to improve corporate workflows and profitability. The main aims of business process reengineering are optimizing operations, increasing productivity, reducing costs, improving quality, and providing a competitive advantage. This paper evaluates and examines common problems and challenges of reengineering systems using different approaches and methodologies.

Keywords Business process reengineering (BPR) · Business processes · Methodologies · Process innovation

A. Harika (B) · M. Sunil Kumar · V. Anantha Natarajan · S. Kallam
Department of CSE, Sree Vidyanikethan Engineering College, Tirupati, India
e-mail: [email protected]
M. Sunil Kumar
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_35

1 Introduction

Software engineering became apparent in the 1960s, when the need for systematic approaches to the development and maintenance of computer software products was realized. In this decade, computer hardware of the third generation had been


invented, and multiprogramming and time-sharing techniques were developed. This provided the platform for the development of interactive, multiuser, online, and real-time computing systems. As computer systems grew larger and more complex, it became obvious that demand for computer software was growing faster than our ability to produce and maintain it. A workshop on the growing problems of software development was held in Garmisch, West Germany, in 1968, followed by another in Rome, Italy, in 1969; together they stimulated wide interest in the technical and management processes used to develop and maintain computer applications. The term "software engineering" was first used at these workshops.

Since 1968, digital computer applications have grown in diversity, accessibility, and significance to modern society. As a result, technical specialization became important in the field of software engineering. Boehm defines software engineering as "the practical application of scientific knowledge in the design and construction of computer programs and the associated documentation required to develop, operate, and maintain them."

The term software engineering combines two words: software and engineering. Software is not only program code: a program is executable code that serves a specific purpose, while software comprises the executable code, its related libraries, and documentation. Software created for a specific requirement is called a software product. Engineering, on the other hand, is the production of goods using well-defined scientific concepts and methods.

According to IEEE, software engineering is "the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software." Simply put, software engineering can be interpreted as a standardized way of developing software within a given time and budget.
Software engineering is a discipline that incorporates computer science, economics, information and management sciences, and the engineering problem-solving methodology. It also requires a systematic approach to project execution, both in management and in technology. One of the main goals of software engineering is to help developers produce high-quality applications. Such quality is achieved through total quality management (TQM), which enables continuous process improvement and the development of better software engineering strategies.

The software engineering tasks can be separated into three generic phases, irrespective of application area, project size, or complexity (Fig. 1).

Definition Phase: The definition phase focuses on the "what" of the task. The software developer attempts to determine what data is to be processed, what functions and outputs are required, what system behavior is expected, what interfaces are to be defined, what design constraints apply, and what testing requirements define a successful system. The key requirements of the system and the software are identified.

Development Phase: The development phase addresses the "how" of the task. During development, a software engineer decides how the data will be structured, how the software will function, how procedural details are to be implemented, how interfaces are to be defined, how the specification is converted into a programming language, and how testing will be done.


Fig. 1 Three generic phases

Maintenance Phase: The maintenance phase focuses on changes associated with the correction of errors, the adaptations needed as the program's environment evolves, and changes brought about by evolving customer requirements. Four types of change occur:

• Correction
• Adaptation
• Enhancement
• Prevention

Correction: Corrective maintenance changes the software to repair defects.

Adaptation: Adaptive maintenance adjusts the software to accommodate changes in its external environment, such as CPUs, operating systems, and business rules.

Enhancement: As software is used, customers recognize additional features that would provide benefit. Perfective maintenance extends the software beyond its original functional requirements.

Prevention: Computer software deteriorates as it is changed, so preventive maintenance, often known as software reengineering, must be carried out to enable the software to keep serving the needs of its end users. Preventive maintenance makes changes to computer programs so that they can be more easily corrected, adapted, and enhanced. Because of the problem of "aging software," many companies pursue software reengineering strategies.
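As a toy illustration (not from the paper), the four maintenance categories above can be modelled as a simple classifier that routes incoming change requests; the keyword list is invented for the example.

```python
# Hypothetical sketch: routing maintenance requests into the four
# categories described above. Keywords are illustrative only.
from enum import Enum

class Maintenance(Enum):
    CORRECTIVE = "correction"    # repair defects
    ADAPTIVE = "adaptation"      # follow environment changes (OS, rules)
    PERFECTIVE = "enhancement"   # add features users now want
    PREVENTIVE = "prevention"    # reengineer to ease future change

KEYWORDS = {
    "bug": Maintenance.CORRECTIVE,
    "crash": Maintenance.CORRECTIVE,
    "migrate": Maintenance.ADAPTIVE,
    "feature": Maintenance.PERFECTIVE,
    "refactor": Maintenance.PREVENTIVE,
}

def classify(request):
    """Pick the first matching category, defaulting to corrective review."""
    text = request.lower()
    for kw, cat in KEYWORDS.items():
        if kw in text:
            return cat
    return Maintenance.CORRECTIVE
```

In practice, triaging requests this way helps a maintenance team see how much effort goes into each of the four categories.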

1.1 Categories in Software Engineering

• Software Development Process: Software engineering splits the work of software development into distinct phases to improve design, product management, and project management. This is also known as the software development life cycle (SDLC). The approach may include pre-defining specific deliverables and artifacts that a project team


produces and completes to build or maintain an application. Many current development methods can be loosely described as agile; specific methodologies include the waterfall model, prototyping, iterative and incremental development, the spiral model, and rapid application development.

• Software Project Management: Software project management (SPM) is the proper way of planning and leading software projects. It is the part of project management in which software projects are planned, executed, tracked, and controlled.

• Need for Software Project Management: Software is an intangible product. Software development is a relatively new line of business, and there is limited accumulated experience in building software products. Most software products are tailored to customer requirements. Most importantly, the underlying technology changes and advances so frequently and rapidly that experience with one product often cannot be applied to another. These business and environmental constraints raise the risk of software development; therefore, software projects need to be managed efficiently. An organization must deliver quality products, keep costs within the customer's budget, and execute the project as planned. Software project management is therefore necessary to reconcile user expectations with budget and time constraints.

• Software project management comprises various management activities:

Conflict Management: Conflict management is the process of limiting the negative aspects of conflict while increasing its positive aspects. Its goal in a corporate environment is to improve learning and group outcomes, including productivity and performance; properly managed conflict can boost group performance.

Risk Management: Risk management means the identification and analysis of risks, followed by the coordinated and economical application of resources to minimize, monitor, and control the probability or impact of unfortunate events.
Requirement Management: This is the process of analyzing, prioritizing, tracking, and documenting requirements and then controlling changes while keeping the relevant stakeholders informed. It is a continuous process throughout a project.

Change Management: Change management is a systematic approach to modifying or improving an organization's goals, processes, or technologies. Its objective is to implement strategies for effecting change, controlling change, and helping people adapt to change.

Software Configuration Management: Software configuration management is the process of controlling and tracking changes in the software, and is part of the broader cross-disciplinary field of configuration management. It includes revision control and the establishment of baselines.

Release Management: The role of release management is to plan, control, and schedule the build and deployment of releases. Release management ensures that the organization delivers the new and enhanced services required by the customer while protecting the integrity of existing services.

Fig. 2 Aspects of software project management

1.2 Aspects of Software Project Management

• It helps in planning software production (Fig. 2).
• Subsequent implementation of the software development is simplified.
• Software project management enables monitoring and control.
• Ultimately, it saves software development time and cost.

1.3 Software Reengineering

Software reengineering is the process of restructuring or updating software without impacting its functionality. This can be achieved by introducing additional software features and adding capabilities that might or might not be strictly required but are considered to enhance the user experience. This method also ensures, as far as possible, that the


A. Harika et al.

software product is better maintained. Software reengineering is an ongoing effort to evolve software so that developers and users can manage it better. It is also a way of extending the commercial life of existing products. Software reengineering is an important means of improving product quality. It brings additional value to a business, since it not only enhances services but can also create additional revenue. Software reengineering is an economical way of developing software. Why is that? The steps and services already implemented in existing software can be identified and reused, so the costs (time, financial, direct, indirect, etc.) that would otherwise arise can be minimized. If a customer needs a product with additional features, the original product may only have to be reworked, reducing costs. The method also makes the product easier to maintain, with the associated cost savings. Reviewing the program exposes faults in the current implementation, which helps determine how to improve the development procedures. It also makes the maintenance phase of the development cycle much easier to execute and manage. Software reengineering comprises three core processes: (1) reverse engineering, (2) restructuring, and (3) forward engineering. A simple Google search tells us that reverse engineering is "a replication after a thorough review of another manufacturer's structure or composition of the product." The technique applies not only to another supplier's products but also to products already made by one's own company. The review and evaluation of the system requirements and an understanding of the existing processes are essential to this end. The technique systematically reverses the software development life cycle, working upward from lower-level views of the system toward higher-level ones.
Once reverse engineering has been completed and the relevant requirements have been identified, the system is restructured. Restructuring involves rearranging or redesigning the source code and deciding whether the programming language should be retained or changed. This should not affect existing functionality; rather, it should improve reliability and maintainability. Removing or reconstructing the sections of source code that frequently trigger software errors (which may also be debugged) is part of this process. Additionally, old or outdated versions of certain parts of the system are updated (such as integrated software and hardware components). The flow ends with the final phase, forward engineering. This is the process of incorporating the new requirements identified during reverse engineering and validating the redesign. Relative to the process as a whole, it is the opposite of reverse engineering, in which an effort is made to work backwards from a coded collection to a model. Forward engineering resembles conventional software engineering, except that reverse engineering is assumed to have already been performed earlier in the technology life cycle.


In software reengineering, no particular software development life cycle (SDLC) model is prescribed. The choice of SDLC model depends on how the product is to be developed and implemented. Like software engineering, however, this is a structured activity involving processes within systems, and it requires careful analysis for smooth outcomes.

1.4 Software Engineering Techniques

The structured technique known as object-oriented programming is one of the strongest among the many software development strategies developed and introduced over the years. Structured methods of software development and management tend to focus on analysis and design and distribute time differently across the various tasks. Software lifecycle costs are reduced the earlier in the process errors are exposed and fixed. Structured techniques use rigorous models that represent system features and can be examined to review and refine the process. To be successful, these methods must be applied with the correct attitude, with appropriate training, and with input from experienced practitioners to improve the process.

2 Overview of Business Process Reengineering

Business process reengineering (BPR), first introduced in the early 1990s, seeks to analyze and improve business processes and workflows. BPR aims to help organizations rethink how they work in order to improve customer service, cut operational costs, and become world-class competitors. By concentrating on the ground-up design of business systems, BPR seeks to help businesses restructure their organizations dramatically. According to Thomas H. Davenport [1], an early proponent of BPR, a business process is a set of logically related activities performed to achieve a defined business outcome. Reengineering emphasizes a holistic focus on organizational objectives and how processes relate to them, encouraging full recreation of processes rather than iterative process optimization. Business process reengineering is also referred to as business process redesign, organizational restructuring, and business process management. BPR is the practice of rethinking the way work is carried out to better support the purpose of an organization and to reduce costs. Organizations reengineer their operations in two important ways. First, they employ advanced technology to improve the dissemination of data and decision-making. Second, they reorganize functional units into cross-functional working teams. Reengineering begins with a high-level assessment of the company's mission, strategic objectives, and customer needs. Basic questions like "Will our


mission need to be redefined? Are our strategic objectives consistent with our mission? Who are our customers?" may be posed by an organization that finds itself operating on dubious assumptions, especially with respect to its customers' wants and needs. Only after the company has rethought what it ought to be doing does it go on to decide how best to do it. Within the context of this assessment of mission and goals, reengineering focuses on the company's business processes: the steps and procedures that govern how resources are used to create products and services that satisfy the needs of particular customers and markets. A business process can be decomposed into specific activities, measured, modeled, and improved as a formal ordering of work steps; it can also be completely redesigned or eliminated altogether. Through reengineering, an organization's core business processes are identified, analyzed, and redesigned to achieve dramatic improvements in critical performance measures such as cost, quality, service, and speed. Reengineering recognizes that an organization's business processes are usually fragmented into sub-processes and tasks carried out by several specialized functional areas. It holds that improving the efficiency of sub-processes yields some benefits but cannot deliver dramatic improvement if the process itself is fundamentally inefficient and outdated. For that reason, the focus of reengineering is to redesign the process as a whole to maximize benefits for the business and its customers. This drive for radical change by rethinking in detail how the organization's work should be done distinguishes reengineering from process improvement efforts based on functional or incremental change.

2.1 Concept of Business Process Reengineering

The four dimensions of the reengineering principles are as follows: a. Innovative Rethinking: This is an inherently innovative and motivating method. Drucker [2] suggests that innovation is not so much the happy event of an unforgettable flash of insight as the diligent execution of an unspectacular but systematic managerial discipline. b. Process Function: From a systematic perspective, Hammer and Champy [3] define a work process as a set of activities that takes one or more kinds of input and delivers an output of value to the customer. It includes the ordering, processing, output, development, distribution, and invoicing of the organization. c. Radical Change: The transformation of the organizational dimension is an important business concern in radical change, and it is crucial to the sustainability of an organization. Change leads to new innovations, technology, and improvement. It is therefore critical that organizations understand the need for change and are able to manage the transition effectively. d. Organizational Development and Performance: This looks at the level of performance and the manner in which the company raises its existing level of activity to meet standards and to withstand competition. Another way to assess an organization's efficiency is by comparing it with another entity in the


group. Nonetheless, comparison with external organizations can illustrate and inspire best industrial practices. This is usually referred to as "benchmarking" [4]. The literature survey draws on a deep reading of different sources; the following review focuses on the parameters and methods used for extraction and classification of non-functional requirements (NFRs) from text files using a supervised learning approach. In H. L. Bhaskar [5], a modern approach was established that applies machine learning to catalogs of non-functional requirements in order to identify NFRs, showing how such catalogs can be exploited through a comprehensive "lightweight mapping" process. Stochastic gradient descent (SGD) is implemented as the supervised learning technique; it is a common algorithm that performs well in a number of machine learning tasks. The scikit-learn library provides a basic stochastic gradient descent classifier that supports different loss functions and classification penalties. To test the suggested approach, a previously categorized dataset of NFRs is required. A dataset was selected containing 625 specifications that had been categorized as functional or non-functional, taken from the OpenScience tera-PROMISE repository. It is referred to as the PROMISE data in this article and yields 187 sentences on security, performance, and usability. In R. Chandna and S. R. Ansari [6], a supervised machine learning methodology was designed and tested with metadata, lexical, and syntactic features. A number of experiments were carried out to handle the imbalance in the data and to evaluate classification in terms of precision, recall, and F1 metrics, based on support vector machine classification algorithms. The resulting functional/non-functional binary classifier immediately labels each requirement as functional or non-functional. A. Gunasekaran and B.
Kobu [7] proposed a semi-supervised text categorization approach to automatically identify and classify non-functional requirements. A limited number of specifications identified by the requirements team during an elicitation phase allows a first classification of the non-functional requirements to be learnt, with further requirements categorized incrementally thereafter. The aim is to incorporate this into a recommendation framework that supports requirements analysts and software designers during the architectural design process. Semi-supervised learning methods are employed to detect and classify NFRs. Classification is based on a reduced number of labeled requirements, exploiting the information provided by unlabeled requirements and other textual properties. The learning method also uses user reviews to improve classification efficiency.
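The supervised SGD approach described above can be sketched in miniature. The following is an illustrative sketch only: it trains a logistic-regression classifier by plain stochastic gradient descent on a tiny hand-made set of requirement sentences (not the PROMISE corpus), labeling each as functional or non-functional. The sentences, tokenization, and hyperparameters are all assumptions for illustration; a real study would use the scikit-learn `SGDClassifier` on the actual dataset.

```python
import math
import random
from collections import defaultdict

# Toy labelled requirements: 1 = non-functional (NFR), 0 = functional.
# (Illustrative sentences, not drawn from the PROMISE dataset.)
DATA = [
    ("the system shall respond within two seconds", 1),
    ("all passwords must be stored encrypted", 1),
    ("the interface shall be easy to use for new users", 1),
    ("the system shall be available 99 percent of the time", 1),
    ("the user shall be able to add a new customer record", 0),
    ("the system shall generate a monthly sales report", 0),
    ("the operator shall be able to cancel a pending order", 0),
    ("the system shall compute the total invoice amount", 0),
]

def featurize(text):
    """Bag-of-words: map each token to its count in the sentence."""
    feats = defaultdict(float)
    for tok in text.lower().split():
        feats[tok] += 1.0
    return feats

def train_sgd(data, epochs=50, lr=0.1):
    """Logistic regression trained by plain stochastic gradient descent."""
    w = defaultdict(float)
    b = 0.0
    rng = random.Random(0)
    for _ in range(epochs):
        rng.shuffle(data)
        for text, y in data:
            feats = featurize(text)
            z = b + sum(w[f] * v for f, v in feats.items())
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            err = p - y                       # gradient of the log-loss
            for f, v in feats.items():
                w[f] -= lr * err * v
            b -= lr * err
    return w, b

def predict(w, b, text):
    z = b + sum(w[f] * v for f, v in featurize(text).items())
    return 1 if z > 0 else 0

w, b = train_sgd(list(DATA))
print(predict(w, b, "the system shall respond within one second"))  # likely 1 (NFR)
```

scikit-learn's `SGDClassifier` generalizes exactly this loop with configurable loss functions and regularization penalties, which is what makes it a natural fit for the NFR-classification experiments cited above.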

3 Business Process Reengineering Methodology

The technical definition calls for a radical redesign of key business operations to produce dramatic improvements in efficiency, cycle times, and quality. The transition is extremely serious: from a blank sheet of paper to an entirely new method, which


will always put the customer's interest first. The focus is always on the requirements of the customer. Work layers, steps, and staffing can be reduced once we know they add no value for the customer. BPR analyzes all the procedures of the enterprise, prunes processes that do not deliver customer value or that hinder the growth of the company, and completely overhauls selected processes. Business process reengineering methodology steps to follow:

• Organize around results rather than activities.
• Incorporate data collection into the actual work that generates the information.
• Connect concurrent workflow tasks instead of merely integrating their output.
• Put the decision point where the work is performed and build control into the process.
• Capture information once, at the source.

As always, there are also a few steps to be followed:

• Planning and scheduling are key to transition, especially in the case of radical changes such as BPR. Plan and schedule the reengineering effort.
• Study and document the AS-IS processes. (Flow-modeling software such as the HEFLO BPMN modeling tool can be used for this; a free account can be created.)
• Recognize and eliminate unnecessary activities and procedures.
• Design the TO-BE processes from scratch and validate them.
• Implement your reengineered process and adapt your company to it.

Although BPR can sound difficult to achieve, how you do it is what matters; choosing an effective and efficient platform and practitioners for the project determines its success or failure. The business process reengineering approach aims at full reinvention and does not believe in small change. Davenport and Short's Methodology: The approach followed in this analysis is Davenport and Short's [1] five-step procedure.

1. Vision and objectives
2. Reengineering process recognition
3. Understanding and assessment of the existing process
4. Use of information technology as an opportunity for change
5. Design and build a prototype of the process (Fig. 3).

1. Vision and objectives: XYZ Company's vision is to standardize technical documentation (parts manuals, repair manuals, owner guides, and service instructions) by reducing the time required for the Techpubs work.
i. Componentization of manuals (single sourcing)
ii. Improved efficiency and consistency through the development of publishing/collaboration standards
iii. Enabling authors to quickly write/publish documents and several product-specific versions, accelerating time to market.


Fig. 3 Davenport and Short methodology (1990)

iv. Decrease in the occasions on which technical documents are produced and revised.
2. Reengineering process recognition: The process identified is the updating and development of guides based on change notices. The change notice is a key function used for modifying different aspects of production data, such as bills of material and records. The major difficulties discovered during this phase are the redundant details in the various manuals; most of the redundancy in the technical publishing documentation is contained in the body sections of the manuals.
3. Understanding and assessment of the existing process: This is explained using the following change notice developed by the organization. On the basis of an analysis, it was found that a change to 26 parts manuals had to be made: owing to a cost change, the shackle plate with the old part number was to be replaced by a new part number


in the rear suspension section. Updating these 26 manuals means making the same change twenty-six times instead of once, and across these 26 manuals there is no consistent flow of information even though the part numbers and figures are the same, nor are the manuals uniform.
4. Use of information technology as an opportunity for change: We live today in a world of information technology; the question is how well we can adapt new technology to our work, and in the same way the problem of time wasted in the technical publication process can be solved by introducing standards into the text. DITA has been used in recent years for the development of technical documents such as manuals and help files. In the DITA architecture, documents are made up of topics and maps. Topics hold small units of content that can be reused. Maps organize topics to indicate every element of a document. All components can be managed in a single repository and selected as appropriate. This lets us search for the fragments of documentation we want to reuse, and if a fragment is modified, all documents that use the fragment are updated immediately. DITA thus increases the traceability of documents and reduces maintenance costs.
5. Design and build a prototype of the process: The DITA model, which lays out body sections with XML authoring, excludes all unnecessary details from the body section and makes it a common source, is implemented as a reference body section for all the manuals. Comparing the time needed to complete a change notice affecting the body section under the previous process with the same task performed using the newly created structured body section, it was observed that the time required was reduced dramatically. All parts of the manuals are unified with DITA once the common content has been identified.
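The single-sourcing behaviour that makes DITA effective here (one topic, many manuals, one edit propagating everywhere) can be sketched as follows. This is a minimal illustration in plain Python, not a DITA implementation; the topic ids, titles, and part numbers are hypothetical.

```python
# Single-sourcing in the DITA style: topics are stored once in a repository,
# maps assemble topics into manuals, and editing a topic updates every manual
# that references it. (Hypothetical topic ids and content, for illustration.)

topics = {
    "body.rear_suspension": "Shackle plate: part no. OLD-123.",
    "body.front_axle": "Front axle assembly instructions.",
    "intro.safety": "General safety notice.",
}

# Each manual is a DITA-style map: an ordered list of topic references.
maps = {
    "Parts Manual":  ["intro.safety", "body.rear_suspension"],
    "Repair Manual": ["intro.safety", "body.front_axle", "body.rear_suspension"],
}

def publish(manual):
    """Resolve a map to its full text by pulling topics from the repository."""
    return "\n".join(topics[ref] for ref in maps[manual])

# One change at the source propagates to all manuals at once
# (two here, twenty-six in the scenario described in the text).
topics["body.rear_suspension"] = "Shackle plate: part no. NEW-456."

for name in maps:
    assert "NEW-456" in publish(name)
print(publish("Parts Manual"))
```

The design point is that the change notice is applied once, at the source, rather than twenty-six times, which is exactly the time reduction observed in step 5.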
The primary purpose of providing a prototype is that it is far simpler to correct a prototype than a process that fails during execution. When complete, the workflow produces the manuals to the DITA standard.
Process Analysis Design Method: Business processes are a popular topic in industry, and management has come to realize that understanding and identifying business processes is the first step towards business success. The aim of the analysis phase of a process design initiative is to understand the function and interaction of the entity's or system's processes; the objective of the design phase is to enhance their functionality and interactions. Business processes are collections of related business operations combined to provide a benefit to a customer or to the corporation. Process analysis yields a history and a thorough knowledge of how work is done. The approach is based on systems theory. The performance of the organization, in relation to its activities, is defined as a set of operations and business processes. The result of applying the method is a set of procedural rules forming an operational graphical model. In industry, structured analysis and design refer to the study of a business situation with the goal of enhancing


Fig. 4 PADM framework

its efficiency through improved processes and procedures. Process analysis and design involve shaping processes, improving performance, and achieving the productivity and growth goals of companies. Organizations are complex systems composed of intertwined and interconnected sub-systems; changes in one part have both anticipated and unanticipated consequences in other parts of the system. The certification of programs provides an incentive to evaluate and develop computer-based applications, and it offers a framework within which the organizational and environmental aspects of a system can be visualized (Fig. 4). A systematic approach to process analysis is always available. It includes a suggestion scheme through which workers present their ideas for improvement. Teams working at one or more steps along the way evaluate the process and make appropriate changes. A diagram tracks the flow of data, customers, equipment, or materials through the different steps of a process. The mission or purpose statement is drawn up after defining the customer-driven priorities. The mission is what a company believes it should do, and under the strain of a reengineering process a well-established mission retains its commitment. It can serve as the flag around which the troops rally when morale begins to slip, and it provides the yardstick for measuring the group's success. A. Analysis Unit: The definition of the analysis process has direct consequences for whom the research team needs to interview. It was necessary to gather data in every company unit in order to carry out data collection under these conditions, since organizations frequently resist thorough inspection by individuals perceived as outsiders when the operation crosses organizational borders (both different divisions within the same agency and different organizations).
While in these instances process specifications may not always be entirely verifiable, it is important to at least provide some clear specifics of all main operations in a process.


B. Process Charts: A structured way of recording the activities of a person or a group of people at a workstation, with a customer, or on materials. Process charts use five categories:

• Operation: creating or changing something
• Transportation
• Inspection: checking or verifying something
• Delay (time lag)
• Storage.
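A process chart recorded under these five categories can be summarized mechanically. The sketch below, with hypothetical steps and durations, totals the time spent per category so that non-value-adding delay and storage stand out from productive operations.

```python
# A process chart records each step of a workflow under one of the five
# classic categories and totals the time spent per category, exposing
# non-value-adding delay and storage. (Hypothetical steps and durations.)

STEPS = [
    ("draft change notice",   "operation",      30),
    ("move file to reviewer", "transportation",  5),
    ("wait in review queue",  "delay",         120),
    ("verify part numbers",   "inspection",     15),
    ("archive old revision",  "storage",        10),
]

totals = {}
for _, category, minutes in STEPS:
    totals[category] = totals.get(category, 0) + minutes

# Report categories from most to least time-consuming.
for category, minutes in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{category:15s} {minutes:4d} min")
```

Here the single delay step dominates the total elapsed time, which is the kind of finding that directs a reengineering effort toward the review queue rather than the drafting work itself.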

Process analysis and design contribute to operational development, performance and productivity improvement, and growth goals. The analysis step, in which an effort is made to map a business process, requires an understanding of how the procedures of a company or entity function and interact. C. Performance Evaluation:
• Checklist: a method for tracking the frequency of occurrence of certain service or product characteristics.
• Histogram: a summary of continuous data showing the frequency distribution of certain quality characteristics (central tendency and dispersion of the data).
• Bar chart: a set of bars, measured on a yes-or-no basis, reflecting the frequency of occurrence of data.
• Pareto chart: a bar chart that displays factors along the horizontal axis in decreasing order of frequency.
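The Pareto chart in the last bullet can be computed directly from defect records. The sketch below uses hypothetical defect counts from a documentation review; it sorts causes by frequency and reports each cause's cumulative share, so the "vital few" causes stand out.

```python
from collections import Counter

# Pareto analysis of defect categories found during process review
# (hypothetical counts): sort causes by descending frequency and report
# the cumulative percentage of all defects covered so far.
defects = (
    ["redundant content"] * 14 + ["missing part number"] * 6 +
    ["formatting error"] * 3 + ["broken cross-reference"] * 2
)

counts = Counter(defects).most_common()   # (cause, count) pairs, descending
total = sum(n for _, n in counts)

cumulative = 0
for cause, n in counts:
    cumulative += n
    print(f"{cause:25s} {n:3d}  {100 * cumulative / total:5.1f}%")
```

In this made-up sample the top two causes account for 80% of all defects, which is the classic Pareto pattern the chart is designed to reveal.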

4 Business Process Reengineering Life Cycle

We synthesized an integrated life cycle approach to performing BPR based on the existing BPR methodologies described earlier and used by BPR consulting companies. The BPR life cycle outlined in Fig. 5 consists of seven stages.
1. Visioning: Companies pursue a reengineering initiative because they want to perform above rivals, stay ahead of the competition, and/or react quickly to changing market conditions and trends (Hammer and Champy 1993). For a company-wide reengineering initiative, ambition is important, and employees must contribute to creating a new shared vision. There are several activities at this stage:
• Create an outline of the current corporate structure and business processes, and identify the real corporate map and macro-level processes.
• Establish executive commitment to reengineering. A successful reengineering initiative is difficult to sustain in the long term without a firm commitment from top management and broad support across the entire organization.
• Create a business case for change and articulate it.


Fig. 5 Life cycle for BPR

• All actors in the reengineering program have to express their agenda for change consistently and constantly. A case argued vigorously and persuasively for dramatic change is what mobilizes the troops.
• Develop a new vision, strategy, and plans for the company. The new vision drives the formulation of strategies and priorities, and the objectives will help the company concentrate on the processes critical to market success. Ambitious objectives must challenge conventional wisdom and pressure members to think outside the box.
• Evaluate capability and threats. Change management in a reengineering initiative must begin from day one. The first phase of change management in a BPR initiative is to determine the potential for, and challenges of, implementing dramatic technological, operational, and cultural change.
2. Identifying: The BPR team needs to examine the business processes to be reengineered, relying on the macro-level process diagram. The tasks in this stage include:
• Create a high-level process map based on the value chain model to display process dependencies. A systematic and exhaustive view of the macro-level systems can be developed based on analysis clustering of


the process/data matrix [8]. Most commonly, structured workshops or comprehensive conversations with senior management are used to define critical business processes [1]. Individuals tend to give priority to operational procedures and often ignore high-level market and policy development processes.
• Assess and choose processes for reengineering. When choosing processes to be reengineered, there are two considerations: first, the expected impact of reengineering the process; second, the difficulty of carrying out the BPR project. Ideally, we want to pick high-impact, easier-to-implement initiatives; nevertheless, such projects are hard to find. Typical processes chosen for BPR initiatives include: broken processes; central processes with high impact and added value; cross-functional processes; customer-facing processes operated by frontline personnel; and new processes needed to deliver new products and services.
• Identify process owners. Every selected process must have a committed process owner with a stake in the enterprise and sufficient political power to command the resources necessary to support the initiative.
• Give the selected reengineering processes specific priority. At most four to five processes should be targeted, to ensure that the BPR teams have sufficient resources at their disposal.
3. Analyzing: Once a process has been selected for reengineering, the BPR team can move into the stages specific to an actual BPR project. A detailed analysis of the specific business process is expected:
• Conduct a preliminary survey. Determine the proper scope of the process: neither so large that reengineering cannot be managed with current resources, nor so narrow that the results are insignificant.
• Build a high-level AS-IS baseline model of the current processes. A decomposed process model three to four levels deep is sufficient.
This stage is designed to identify the challenges and opportunities affecting the business process. Once these issues and openings are recognized, analysis of the current process model should be halted in order to avoid the "analysis paralysis" trap.
• Apply activity-based costing if cost reduction is the main purpose of the BPR program. Expose waiting time and non-value-adding activities in order to assign costs on the basis of productive activities. Efficiency and competitiveness can be calculated in terms of mission, product, and customer type.
• Analysis of the existing process can be very helpful for benchmarking; it helps to identify the root causes of problems and can be used to assess the changes made over time.
4. Redesigning: Identify IT enablers and build alternative process reengineering concepts. An understanding of emerging IT capabilities can contribute creative ideas. Tasks include the following:


• Develop proposals for radical change. "Thinking out of the box" is a prerequisite for the innovative ideas needed to make a breakthrough. It is important to be prepared to challenge the most basic assumptions of the process and the company.
• Consider enabling technology. The BPR effort should not be driven by information technology, but information technology can be a great facilitator if used properly. New technology is always on the horizon and may be the right tool for the right job. In the redesign phase, the identification and evaluation of IT enablers are key tasks.
• Identify the necessary sub-processes. Sub-processes can be omitted, run in parallel, optimized, or combined. The emphasis of the redesign activities should be the key sub-processes that create a bottleneck or that contribute most to the added value.
• Design the new processes. Two or three different proposals should be suggested, based on different assumptions about resource requirements and the expected degree of transition for the new processes.
5. Evaluating: Evaluate the redesigned process alternatives and select a design to carry into the implementation stage:
• Create criteria for evaluating the alternative redesigned processes.
• Conduct cost, benefit, and risk assessments of the proposed designs. Identify both hidden costs and the intangible benefits concerned. Big BPR initiatives are often justified by their intangible benefits, including customer service time and business growth.
• Evaluate the cost, benefit, and risk of the design alternatives and apply any additional criteria.
• Choose a reengineered process and propose it.
6. Implementing: Execution is as critical as redesign, because "even if breakthroughs have been made at the conceptual level, implementation can still be overwhelming."
• Implement the IT program. Participants should be carefully selected for the software design and development teams.
As alternative implementation strategies, rapid application development or a packaged solution should be considered.
• Implement the organizational program. BPR and changes in technology often cause changes in the organization. Comprehensive organizational improvement strategies should be formulated to ensure that the appropriate changes are made to achieve positive adjustments in the operation.
• Develop a rollout program. The BPR team should run pilot projects for a broad BPR initiative that needs to be conducted at several locations. Trials should be used to refine the prototype system before a big rollout.
• Develop new metrics of success. These measures are used to test BPR outcomes and to improve the process continually.


7. Improving: Process owners of the reengineered processes should compute process performance indicators to assess the effect of BPR and seek to improve the process constantly. They should adopt continuous process improvement (CPI) and continually refine the method.
• Implement full-scale rollout. The experience gained in BPR programs should be passed on to other units for potential BPR ventures, so that the company can institutionalize BPR activities.
• Continuously monitor the performance of the process. The collection, evaluation, and reporting on the new process can be assisted by technologies such as data warehousing.
• Permanently enhance the operation. It may turn out over time that CPI alone is not enough to keep the reengineered process competitive in its sector; then it is time to think about reengineering again.
For a company-wide reengineering project, the first two stages are the most important. In stages three to seven of process-based reengineering projects, the BPR team faces its most difficult challenge: overcoming individuals' resistance to dramatic change. A BPR team must therefore execute the BPR initiative as a planned transition. The needs of stakeholders must be attended to from the outset to rally support and reduce opposition. This also assists in choosing the right methodologies, tools, and solutions for particular activities during the project. The BPR–SPM methodology can be utilized to support BPR planning and management.

4.1 In the Implementation of BPR Methods, Some Guiding Principles Include
1. Customer focus: concentrate on procedures for customer identification and on assistance for front-line workers. The first phase of a BPR initiative is to determine who the stakeholders of a process are and what their criteria are.
2. Cycle-time reduction: most successful BPR cases are greatly helped by a decrease in cycle time.
3. Continued support and engagement from top management and partners in the process. BPR teams are advised to choose one or two "quick wins," which are easy to implement and can deliver early results. Through these rapid successes, the BPR team can build ongoing engagement and then concentrate on long-term strategic process redevelopment and change.
4. Concentrating on central procedures, while also taking other facets of the framework into account during restructuring and deployment.
5. Using IT to enable new business processes and, in turn, to simplify existing ones. BPR teams benefit from an understanding of new technology during the modernization period of the BPR initiative.

Business Process Reengineering: Issues and Challenges


6. Start the overhaul with a clean sheet of paper and think outside the box by referring to the core goals of the project. Never be afraid to ask why. In order to build new processes that support new goods and services, BPR standards and methods may be used; developing new processes will generate more value than redesigning existing ones.
7. Adopt a BPR methodology and use proven methods and instruments for the study and redesign of the procedure.
8. Prepare for the change and implementation process from the outset by identifying problems with all stakeholders. Communicate the reasons for change to all parties concerned and employ a team approach to enhance participation.

5 Conclusion
Once a company decides to pursue process reengineering, there are many pitfalls that can undermine a perfectly good decision, so the BPR effort should not be undertaken lightly. To excel in reengineering the business process, it is important that the current procedures and activities have an appropriate IT framework and transparency. The major challenges and factors influencing business process reengineering include:
• inadequate knowledge;
• wrong direction and irregularity in implementation, and unsuited team formulation;
• insufficient and incorrect placement of resources, unsound analysis, and lack of support.


Creating a Biological Intranet with the Help of Medical Sciences and Li-Fi
Yagya Buttan and Komal Saxena

Abstract This paper provides a review of the pre-existing Li-Fi technology in combination with a medical-science advancement concerning the human eye. It is essentially about an approach that combines these two sets of advancements in the present technological era, which could prove fruitful as a change and establish a new benchmark for the generation. The paper covers the 'WH' questions relevant to the technologies mentioned above; the claims can be verified throughout this paper, which can also serve as a source for further advancement.
Keywords Li-Fi—light fidelity · RF—radio frequency · VLC—visible light communication · Wi-Fi—wireless fidelity · LED—light-emitting diode

1 Introduction
Wi-Fi is the most commonly used mode of Internet connection today, but it is about to reach its limits and may not be able to advance further at a tremendous rate due to RF signal congestion. With a fast-paced and ever-expanding lifestyle, we all need to communicate much faster than we ever could; settling for a slow medium is not the way to achieve that goal. This raises the need for a new and advanced communication technology to reach greater heights: light fidelity, in short Li-Fi. Li-Fi is a transmission system based on visible light communication, an open-ended advancement in wireless connection technology with outstanding potential to succeed existing wireless communication such as Wi-Fi, which is based on radio frequency (RF). Li-Fi
Y. Buttan (B) · K. Saxena
Amity Institute of Information Technology (AIIT), Amity University, Noida, Uttar Pradesh 201313, India
e-mail: [email protected]
K. Saxena e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_36



is giving encouraging results in terms of speed, providing a 100% faster rate compared to Wi-Fi. With the help of medical science and new research on the bionic eye, implementing such technology within the human body could usher in a revolutionary new era of communication. Since Li-Fi uses a photosensor for receiving data, the human eye could serve this role exceptionally well, being the best photosensor in existence. Taking the term human intranet to another level with the help of medical science and Li-Fi could create a significant impact on an individual's daily life and could help us in more ways than we can imagine so far. It could even become a way to replace the cell phones on which we are undeniably far too dependent.

2 Light Fidelity
Transmission of data using light bulbs exploits a part of the electromagnetic spectrum (VLC) that is yet to be fully explored and unleashed to its full potential. Li-Fi means light fidelity [1, 2], a new and transformative way to transmit data wirelessly at high speed. Li-Fi (Fig. 1) is a straightforward way to convey data from one side to another. Professor Harald Haas explained how data could be sent and received in binary form, as 0's and 1's, simply by turning light bulbs on and off [3, 4]. Research has shown that Li-Fi has already achieved a speed of 10 Gbps with just a single-color LED, which can increase to 100 Gbps or more using multiple-color light-emitting diodes.

Fig. 1 Li-Fi in organization [5]


Fig. 2 Human eye [9]

3 Bionic Eye
The bionic eye [6] is another advancement of medical science, a retinal prosthesis [7] that could help restore the vision of hundreds of people left permanently blind by retinal damage. It provides visual sensation to the brain with the help of electronic components such as photosensors and a microprocessor. The bionic eye can contain around 3500 micro-photosensors placed behind the retina. Electrical signals are passed to the brain after ordinary light signals are transformed into electrical signals that the brain can easily understand without delay. The bionic eye is much like the natural eye; however, instead of the human eye seeing things directly, video input is obtained from a small camera and converted by microprocessors into electrical signals that the brain can easily understand [8] (Fig. 2).

4 Human Eye
The human eye is by far the best photosensor we have come across. It has many more pixels than present-day cameras. Light passes through the eye's filters, and the resulting image reaches the retina, which converts it into electrical signals that are then fed to the brain to handle the rest of the work.


5 Li-Fi for Eye
Li-Fi uses LED light as the medium, transmitting data in binary form through flickering, and the light can flicker so fast that the flickering becomes imperceptible to the ordinary human eye. According to a study by MIT researchers [10], the human eye and brain can interpret what the eyes see in as little as 13 ms; although this is still far slower than the photodiodes used in Li-Fi receivers, it shows how fast the visual system processes input. Li-Fi, already estimated to reach speeds of 100 Gbps, could drastically change the data-transmission game in combination with the human body.

6 Proposed Working
Building any communication platform requires two basic components, a transmitter and a receiver. Here an LED is specifically used as the light source, acting as a bridge between the two ends. The flow of data is carried by modulated light produced by the LED, and the microprocessing unit is responsible for modulation and demodulation of the data at the transmitter and receiver ends, respectively. The basic flow of data from one end to the other, and its interpretation along the route, is done by converting the data into 0's and 1's, but using light as the carrier. The series of 0's and 1's generated is nothing but a series of flashes representing the flow of data. The basic application of Li-Fi technology uses white LEDs. These LEDs are arranged in parallel with the flow of data communication and are driven at differing frequencies, so the data is effectively encrypted before transfer. In short, the process adopted is:
(1) Convert the data into bits, generating a stream of 0's and 1's.
(2) Feed this stream into the Li-Fi Tx hardware (bionic eye).
(3) Pass it on to the MPU, where it is encrypted and decrypted as the data flows; a threshold power defines the on state and the off state.
(4) Each on state transmits a 1 and each off state transmits a 0. This switching mechanism lets the LED operate faster, generating a unique code that is then verified and retrieved by the Li-Fi Rx in parallel.
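The bit-stream conversion in steps (1) and (4) amounts to on-off keying. As a minimal illustration (the function names and byte-level framing here are our own assumptions, not from the paper), the mapping between a message and a stream of LED on/off states can be sketched as:

```python
def ook_encode(message: str) -> list[int]:
    """Convert a text message into a stream of LED on/off states (on-off keying)."""
    bits = []
    for byte in message.encode("utf-8"):
        # MSB-first: each byte becomes eight LED states, 1 = on, 0 = off
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    return bits

def ook_decode(bits: list[int]) -> str:
    """Recover the original message from the received stream of LED states."""
    data = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b  # reassemble the byte MSB-first
        data.append(byte)
    return data.decode("utf-8")

stream = ook_encode("Li-Fi")   # 5 bytes -> 40 on/off states
message = ook_decode(stream)
```

A real link would add clock recovery and the encryption step mentioned above; this sketch only shows the binary framing that the LED switching carries.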
For the human eye: the eye receives an image at the retina, which is converted into electrical signals. If those electrical signals are converted into binary data using analog-to-digital converters, the data about to be received by the brain can be decrypted; once decrypted, the data can be converted back into electrical signals and fed to the brain.


Light intensities change faster than the human eye can perceive individually; the eye receives that data in continuous form, so it appears to us as a single entity even though it arrives in parts. We therefore do not notice the blinking of the light, yet the information is always reaching the brain through continuous analysis. If we are able to capture and convert those signals, we can build a human Li-Fi system. Furthermore, for transmitting data, we would convert electrical signals into binary, encrypt them, and pass them to the communicator, that is, the transmitter section of the light-fidelity link.

7 Proposed Architecture
See Fig. 3. To set up a structured system, it is divided into two major parts:
1) Sending side: contains a single LED or multiple LEDs on the human body together with a power source. A microprocessor or microcontroller generates an array of 0's and 1's; since all digital data is binary at the root, any format of digital data can easily be converted to binary form. This data is emitted via the LED in an encrypted manner, using an encoder and an amplifier, so that the information is not lost in the transmission process.
2) Receiving side: on the other hand, the receiving side consists of a bionic eye containing a large number of photosensors that can easily track the series of 0's and 1's, which is first decrypted by the processor and then converted into

Fig. 3 Proposed architecture of data exchange


Fig. 4 Cyborg eye view [11]

an electrical signal by the MPU and transferred to the electrodes placed just beside the retina; these signals are fed to the brain. As for the human-eye section: our eye detects everything at a much higher rate than the seemingly continuous motion suggests. At every instant, electrical signals are generated; these can be converted into binary to decrypt the data and then back into electrical signals that the individual's brain can interpret.

8 Creating an Intranet Using Li-Fi and Bionic Eye
Just like a fictional cyborg able to scan everyone's face with an artificial eye (Fig. 4), users might be able to do the same in the near future. A light emitted continuously from somewhere on the body could always be transmitting data or other information, and bionic eyes would be able to grasp it just like a normal eye. Imagine a life in which humans could converse merely by looking at each other, without taking their phones out of their pockets; in a simpler form, users could learn the name of a person just by looking at an LED attached to that person's body. A large network of this kind could form its own intranet, which would be much more secure than radio waves.

9 Real-Life Applications
The real-life applications could be enormous in number, such as:
1. Simply looking at an LED light could give us navigation, instead of looking at our phones.
2. It would help in identifying people: we could learn a few details, such as another person's name, instantly.


3. We would be able to communicate with others without speaking, just by looking at them.
4. There would be many small applications that we cannot even anticipate now.
A bionic eye with Internet or intranet access can give us many unexpected advantages in our real-life, day-to-day work, such as:
1. Cost-efficient: the transmitter side of Li-Fi can work with a standard LED we use at home; even the small LEDs used in toys are feasible. Lights are almost everywhere already, so it is cost-efficient: no extra power source, and no additional outlay for modems and setups as with Wi-Fi.
2. Environment-friendly: as an LED does not emit radiation like radio frequency, it is much safer and healthier for humans to use, and since we use LEDs almost everywhere nowadays, some hardware cost is saved.
3. Security: Li-Fi interactions cannot be intercepted beyond walls, and with different encryption methods it would be much safer than current Wi-Fi; communication within the intranet would not be hackable, as it would not be accessible outside the walls or beyond where the light glows [12].
4. Capacity: VLC has a much larger bandwidth [13] than RF; the available spectrum is around 10,000 times larger. Compared with Wi-Fi, Li-Fi therefore has more scope in terms of bandwidth.
5. Intranet: we would be able to build our own intranet using the bionic eye or human eye, transmitting data with just a glance, which would allow seamless, time-saving data sharing within an organization as well as outside it in daily life.

10 Present Restrictions
1. It is possible that the bionic eye would not be fully compatible with such a procedure, so blind people might not be able to benefit from it. This will depend on future advancements in the available technology.
2. Long-range sources would not be able to connect to it, unlike cellular communication (4G and 5G). Short-range connectivity might not be preferred by everyone and everywhere, which is why it might not be useful in all settings.
3. The cost of the receiver section might be too high for general affordability due to present technological limitations, but this can be overcome in the future.
4. Implementing such devices could be harmful to the body, or even fatal, as our natural body might not be compatible with such electronic interference; therefore, high-cost research is required.


11 Challenges and Difficulties
1. Humans cannot catch the flickering of an LED switching on and off at high speed, but this is not because of our eyes; it is because of our brain. The eye is only responsible for transmitting signals to the brain by processing electrical signals, but some of these signals are lost because the brain cannot process them all at the same time. The same difficulty would apply to rapidly modulating and demodulating the data to be received, processed, and understood there and then, and even sent back.
2. Privacy and security of data against breaches are two of the biggest concerns when transmitting data anywhere, to anyone, in any form, field, or place. Establishing a secure connection within fractions of a second, with proper encryption and decryption methods, would be a tough problem to solve before we have a safe and trustworthy way to exchange data.
3. We do not yet have sufficiently sophisticated technology in medical science to cope with all of this right now, but in the coming years, say three to four, we should be able to do such things.
4. The cost of a bionic eye is very high at present, so a normal person cannot afford it, and with very little advancement it may struggle to keep up with the rate of technological change in the current scenario.
5. Compatibility of electronic devices with the human eye may prove harder than it seems, and such integration of devices might prove harmful to health in real-time application, which is why high-cost research is required before safe deployment.

12 Conclusion
Despite being hard to implement at the current time, this approach holds the potential to change the future of communication. Li-Fi with the human eye or bionic eye can help to a great extent, connecting people in an intranet or over the Internet; it would help us in daily work, much like "Google glasses." With the help of medical science, we could see it implemented in the future, helping people communicate seamlessly without other devices and without the health effects that RF causes, while leading in all aspects of security compared with Wi-Fi. Li-Fi would help achieve higher transmission speed with low cost and a higher-bandwidth spectrum.


References
1. H. Haas, L. Yin, Y. Wang, C. Chen, What is LiFi? (2015)
2. P. Chauhan, R.T.J. Rani, Li-Fi (light fidelity)-the future technology in wireless communication. Int. J. Appl. Eng. Res. (2012)
3. TED (2011). https://www.ted.com/talks/harald_haas_wireless_data_from_every_light_bulb
4. Seminar projects (2013). https://seminarprojects.com/Thread-li-fi-light-fidelity-the-futuretechnology-in-wireless-communication
5. A.P. McCoy, F. Sargent, Lighting race to the ceiling. https://www.ecmag.com/section/lighting/race-ceiling
6. C. Mayuresh, Bionic eye: a review. Int. J. Pharm. Sci. Rev. Res. 2 (2011)
7. J.M. Ong, L. da Cruz, The bionic eye: a review (2011)
8. N.S. Vatkar, Y.S. Vatkar, Bionic eye: a new invention. IJESC (2016)
9. Illustration 167615928 © Alexandr Mitiuc—Dreamstime.com
10. M.C. Potter, B. Wyble, C.E. Hagmann et al., Detecting meaning in RSVP at 13 ms per picture. Atten. Percept. Psychophys. 76, 270–279 (2014). https://doi.org/10.3758/s13414-013-0605-z
11. C. Boyd Myers, Cyborg vision for the iPhone makes the world look like Terminator, 7 Nov 2011
12. Study paper on Li-Fi & its applications, FN Division, TEC
13. Protocol for enhanced data transmission for visible light communications (2015). https://www.research-innovation.ed.ac.uk/Opportunities/enhanced-data-transmission-for-Li-Ficommunications.aspx
14. TechTarget search networking (2010). https://searchnetworking.techtarget.com/definition/cognitive-radio
15. Teleinfo (2012). https://teleinfobd.blogspot.in/2012/01/what-is-lifi

Optimization of Low Power LNA Using PSO for UWB Application
Manish Kumar, Manish Gupta, Divesh Kumar, and Vinay Kumar Deolia

Abstract This paper deals with the design challenge of a low-power LNA. As the biasing voltage is reduced, the intrinsic gain, efficiency, and operating frequency degrade. Therefore, an optimized biasing metric has been introduced, depicting the trade-off between power consumption and the parameters specified above. In the last decade, many scholars have optimized single-objective and multi-objective functions using particle swarm optimization (PSO). In this paper, the voltage gain has been taken as the objective function. The aspect ratio (W/L) of all transistors and the other parameters of the LNA are optimized using PSO. The design is simulated in 90-nm CMOS technology in Cadence Virtuoso. All simulations have been performed over a frequency range of 3–8 GHz, with the optimum obtained at an operating frequency of 4.1 GHz. The results show an LNA with a minimum NF of 1.1 dB, S11 of −16.4 dB, and S21 of 17.1 dB at 4.1 GHz, with a 0.9 V power supply and 1.7 mW power consumption.
Keywords Particle swarm optimization (PSO) · Cascaded complementary gate (CCG) · Noise figure (NF) · Body-biased common source (BBCS)

M. Kumar (B) · M. Gupta · D. Kumar · V. Deolia GLA University, Mathura, India e-mail: [email protected] M. Gupta e-mail: [email protected] D. Kumar e-mail: [email protected] V. Deolia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_37



1 Introduction
The low-noise amplifier (LNA) is the first block of the receiver chain and an important block in wireless sensor network (WSN) applications. For these networks to work without interruption, their devices should consume as little power as possible; since the LNA is the first block of the receiver chain and consumes a large share of the power, it becomes important to focus on designing the LNA with its power consumption in view [1–4]. As the biasing voltage is reduced, the transistor shifts from strong inversion (SI) to moderate inversion (MI), and the performance degrades in terms of intrinsic gain ($g_m/g_{ds}$), operating frequency ($f_T$), and efficiency ($g_m/I_D$). A biasing equation representing the above-specified terms is given by [5]

$$\text{Biasing Metric} = \left(\frac{g_m}{I_D}\right)\left(\frac{g_m}{g_{ds}}\right) f_T \qquad (1)$$

The gate-to-source voltage ($V_{GS}$) and the drain-to-source saturation voltage ($V_{DS,\mathrm{sat}}$) are given by

$$V_{DS,\mathrm{sat}} = 2U_T\sqrt{IC + 0.25} + 3U_T \qquad (2)$$

$$V_{GS} = 2nU_T \ln\!\left(e^{\sqrt{IC}} - 1\right) + V_{TH} \qquad (3)$$

The biasing metric is shown in Fig. 1 for different values of $V_{DS}$, and the optimum value is obtained in the moderate-inversion (MI) region. The values of $V_{GS}$ and $V_{DS}$ with respect to the inversion coefficient are shown in Fig. 2 [6, 7]. $V_{DS,\mathrm{sat}}$ lies approximately in the 100–230 mV range in the MI region, as desired for low power [7–9]. In this paper, Sect. 2 presents the PSO algorithm for implementation of the LNA, followed by a discussion of the proposed circuit in Sect. 3. Simulation results are discussed in Sect. 4, and Sect. 5 concludes the paper.
Fig. 1 Voltage biasing metric for different V_DS
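To make Eqs. (2) and (3) concrete, the sketch below evaluates them at a few inversion coefficients. The slope factor n and threshold voltage V_TH used here are illustrative assumptions, not values from the paper:

```python
import math

U_T = 0.02585    # thermal voltage at ~300 K, in volts
n = 1.3          # subthreshold slope factor (assumed)
V_TH = 0.40      # threshold voltage in volts (assumed for a 90-nm process)

def v_ds_sat(ic: float) -> float:
    """Eq. (2): V_DS,sat = 2*U_T*sqrt(IC + 0.25) + 3*U_T."""
    return 2 * U_T * math.sqrt(ic + 0.25) + 3 * U_T

def v_gs(ic: float) -> float:
    """Eq. (3): V_GS = 2*n*U_T*ln(exp(sqrt(IC)) - 1) + V_TH."""
    return 2 * n * U_T * math.log(math.exp(math.sqrt(ic)) - 1) + V_TH

# Weak (IC << 1), moderate (IC ~ 1), and strong (IC >> 1) inversion
for ic in (0.1, 1.0, 10.0):
    print(f"IC={ic:5.1f}  V_DS,sat={v_ds_sat(ic)*1e3:6.1f} mV  V_GS={v_gs(ic):.3f} V")
```

For IC between roughly 1 and 10, V_DS,sat comes out near 135–243 mV, consistent with the moderate-inversion range of about 100–230 mV quoted above.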


Fig. 2 Graph of V_GS and V_DS

2 Mathematical Model of PSO for Optimization of Analog Circuits
To optimize nonlinear problems, particle swarm optimization (PSO) was proposed by Kennedy and Eberhart. PSO is a mathematical model based on the food-searching behavior of flocking birds and schooling fish. In PSO, a set of n candidate solutions, known as the population, is used. Each particle is denoted by $x_i = [x_{i1}, x_{i2}, \ldots, x_{id}]$, where $x_i$ represents the ith particle and d the number of dimensions. The position and velocity of each particle are updated by the following equations:

$$v_{ij}(t+1) = w\,v_{ij}(t) + r_2 c_2\bigl(g_j(t) - x_{ij}(t)\bigr) + r_1 c_1\bigl(p_{ij}(t) - x_{ij}(t)\bigr) \qquad (4)$$

$$x_{ij}(t+1) = v_{ij}(t+1) + x_{ij}(t) \qquad (5)$$

Equations (4) and (5) express this flocking behavior mathematically: $v_{ij}(t+1)$ and $x_{ij}(t+1)$ are the velocity and position of the ith particle in the jth dimension after time step t; w is the inertia weight; $c_1$ and $c_2$ are real-valued acceleration coefficients; and $r_1$ and $r_2$ are random variables uniformly distributed between 0 and 1. The global best is represented by $g_j(t)$ and the personal best by $p_{ij}(t)$. In Eq. (4), the current position is compared with the personal best and the global best; the updated velocity is then used in Eq. (5) to update the position. PSO is mathematically very simple, easy to implement, and well organized as an algorithm, involving only additions and multiplications, and it takes very little time compared with other heuristic algorithms. PSO can be used for any analog-circuit optimization problem, as shown in Fig. 3. The netlist of analog variables and constraints serves as input for the search engine; the PSO optimization code, written in MATLAB, takes it as input. A cost function randomly extracts the design variables and generates the values of the specified parameters. The algorithm evaluates the different candidate values and, after many iterations, returns the best solution for the cost function.

Fig. 3 PSO flowchart
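The update rules (4) and (5) translate directly into code. The sketch below is a minimal PSO in Python; the paper's actual optimizer is written in MATLAB and its cost comes from circuit simulation, so here a simple sum-of-squares cost stands in and all parameter values are illustrative assumptions:

```python
import random

def pso(cost, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimize `cost` over a dim-dimensional box using Eqs. (4)-(5)."""
    random.seed(0)  # deterministic run for illustration
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]          # p_ij: personal best positions
    pbest_cost = [cost(xi) for xi in x]
    g = min(pbest, key=cost)[:]          # g_j: global best position
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                # Eq. (4): inertia + social pull toward g + cognitive pull toward p
                v[i][j] = (w * v[i][j]
                           + r2 * c2 * (g[j] - x[i][j])
                           + r1 * c1 * (pbest[i][j] - x[i][j]))
                # Eq. (5): x_ij(t+1) = v_ij(t+1) + x_ij(t)
                x[i][j] += v[i][j]
            c = cost(x[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = x[i][:], c
                if c < cost(g):
                    g = x[i][:]
    return g, cost(g)

# Stand-in cost; a real run would score gain/NF/S-parameters from simulation
best, best_cost = pso(lambda p: sum(t * t for t in p), dim=3)
```

The inertia weight and acceleration coefficients here are common textbook defaults; in practice they are tuned to the problem at hand.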

3 Analysis of the LNA Circuit
The schematic of the proposed LNA is shown in Fig. 4. The proposed LNA has two stages: a cascaded complementary gate (CCG) stage used for low power, in which an NMOS transistor (M1) and a PMOS transistor (M2) are connected in cascade to provide good input impedance with low power consumption, followed by a common-source (CS) stage that strengthens the proposed LNA in terms of voltage gain and noise figure [10–13]. The voltage gain of the proposed LNA is given by

$$A_T \approx \frac{Z_L\,(g_{m1}+g_{m2}+g_{ds1}+g_{ds2})}{1+Z_L\,(g_{ds1}+g_{ds2})(1+sC_{gs})}\cdot\frac{g_{m5}\,(r_{04}\,\|\,r_{05})}{1+g_{m5}\,(r_{04}\,\|\,r_{05})}\cdot\left(\frac{1}{2}\,\frac{g_{m4}}{g_{m5}}+g_{m3}R_3\right) \qquad (6)$$


Fig. 4 Schematic of proposed LNA

where all symbols have their usual meanings. The voltage gain is optimized through PSO, and the resulting component values are shown in Fig. 4.
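As a quick numerical check of how the gain expression is used, the sketch below evaluates one reading of Eq. (6) at s = j2πf. All small-signal values here are illustrative assumptions, not the optimized values from Fig. 4:

```python
import math

# Illustrative small-signal parameters (assumed, not from the paper)
gm1, gm2, gm3, gm4, gm5 = 12e-3, 8e-3, 6e-3, 5e-3, 10e-3   # transconductances, S
gds1, gds2 = 0.4e-3, 0.3e-3                                  # output conductances, S
r04, r05 = 5e3, 6e3                                          # output resistances, ohm
R3, ZL, Cgs = 400.0, 300.0, 60e-15                           # ohm, ohm, F

def gain_db(freq_hz: float) -> float:
    """Evaluate |A_T| from Eq. (6) in dB at the given frequency."""
    s = 1j * 2 * math.pi * freq_hz
    r_par = r04 * r05 / (r04 + r05)   # r04 || r05
    stage1 = ZL * (gm1 + gm2 + gds1 + gds2) / (1 + ZL * (gds1 + gds2) * (1 + s * Cgs))
    stage2 = gm5 * r_par / (1 + gm5 * r_par)
    stage3 = 0.5 * gm4 / gm5 + gm3 * R3
    return 20 * math.log10(abs(stage1 * stage2 * stage3))

print(f"|A_T| at 4.1 GHz ~ {gain_db(4.1e9):.1f} dB")
```

With these assumed values the expression gives a gain in the low twenties of dB; the actual gain reported by the paper (17.1 dB S21) comes from the PSO-optimized device sizes.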

4 Results and Discussion
4.1 S-Parameters
Reflection increases at high frequency; the S-parameters therefore capture both power gain and reflection [6]. The power gain (S21) and input reflection coefficient (S11) are shown in Fig. 5. All simulations were performed over the frequency range 3–8 GHz. The power gain is positive and the input reflection is negative throughout the band, with S21 of 17.1 dB and S11 of −16.4 dB at 4.1 GHz.

4.2 Noise Figure
Figure 6 depicts the NF of the proposed LNA over the 3–8 GHz range. The NF varies between 4.8 dB and a minimum of 1.1 dB across the band. The decrease in NF with frequency validates the PSO algorithm explained in Sect. 2.


Fig. 5 S-parameters of the proposed LNA: a S21 and S11; b process-corner simulation (S21, S11)

Fig. 6 NF of proposed LNA

5 Conclusion
The proposed LNA, consisting of two stages, has been simulated over a wide band of 3–8 GHz. The first stage is a CCG stage, where the concept of current reuse is used for low power; the second stage is a CS stage, used for input impedance matching and high, flat gain with low NF. The PSO algorithm has been used to optimize the LNA parameters. In simulation, the proposed LNA achieves a minimum NF of 1.1 dB, with S11 and S21 of −16.4 dB and 17.1 dB, respectively, at 4.1 GHz, showing good performance while consuming very little power: 1.7 mW of dissipation.

References
1. H.C. Chen, T. Wang, H.W. Chiu, T.H. Kao, S.S. Lu, 0.5-V 5.6-GHz CMOS receiver subsystem. IEEE Trans. Microw. Theory Tech. 57(2), 329–335 (2009). https://doi.org/10.1109/TMTT.2008.2011165
2. A. Balankutty, S.A. Yu, Y. Feng, P.R. Kinget, A 0.6-V zero-IF/low-IF receiver with integrated fractional-N synthesizer for 2.4-GHz ISM-band applications. IEEE J. Solid-State Circuits 45(3), 538–553 (2010). https://doi.org/10.1109/JSSC.2009.2039827
3. A. Balankutty, P.R. Kinget, An ultra-low voltage, low-noise, high linearity 900-MHz receiver with digitally calibrated in-band feed-forward interferer cancellation in 65-nm CMOS. IEEE J. Solid-State Circuits 46(10), 2268–2283 (2011). https://doi.org/10.1109/JSSC.2011.2161425
4. A. Shameli, P. Heydari, A novel power optimization technique for ultra-low power RFICs, in Proceedings of the 2006 International Symposium on Low Power Electronics and Design (ACM, 2006), pp. 274–279. https://doi.org/10.1145/1165573.1165639
5. M. Parvizi, K. Allidina, M.N. El-Gamal, A sub-mW, ultra-low-voltage, wideband low-noise amplifier design technique. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 23(6), 1111–1122 (2015). https://doi.org/10.1109/TVLSI.2014.2334642
6. P. Qin, Q. Xue, Design of wideband LNA employing cascaded complimentary common gate and common source stages. IEEE Microw. Wirel. Compon. Lett. 27(6), 587–589 (2017). https://doi.org/10.1109/LMWC.2017.2701300
7. M. Kumar, V.K. Deolia, Performance analysis of low power LNA using particle swarm optimization for wide band application. AEU-Int. J. Electron. Commun. 111, 152897 (2019)
8. Q. Wan, Q. Wang, Z. Zheng, Design and analysis of a 3.1–10.6 GHz UWB low noise amplifier with forward body bias technique. AEU-Int. J. Electron. Commun. 69(1), 119–125 (2015). https://doi.org/10.1016/j.aeue.2014.08.001
9. M. Kumar, V.K. Deolia, A wideband design analysis of LNA utilizing complimentary common gate stage with mutually coupled common source stage. Analog Integr. Circuits Signal Process. (2018). https://doi.org/10.1007/s10470-018-135
10. J. Kennedy, Particle swarm optimization, in Encyclopedia of Machine Learning (Springer, Boston, MA, 2011), pp. 760–766. https://doi.org/10.1007/978-0-387-30164-8_630
11. M. Khurram, S.R. Hasan, A 3–5 GHz current-reuse gm-boosted CG LNA for ultrawideband in 130 nm CMOS. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 20(3), 400–409 (2012). https://doi.org/10.1109/TVLSI.2011.2106229
12. S. Toofan, A.R. Rahmati, A. Abrishamifar, G.R. Lahiji, Low power and high gain current reuse LNA with modified input matching and inter-stage inductors. Microelectron. J. 39(12), 1534–1537 (2008). https://doi.org/10.1016/j.mejo.2008.07.073
13. S. Woo, W. Kim, C.H. Lee, K. Lim, J. Laskar, A 3.6 mW differential common-gate CMOS LNA with positive-negative feedback, in IEEE International Solid-State Circuits Conference, Digest of Technical Papers (ISSCC 2009), Feb 2009, pp. 218–219. https://doi.org/10.1109/ISSCC.2009.4977386

Automatic Segregation and Supervision of Waste Material Using Industrial Control Devices Fakih Awab Habib , Khan Salman Mehtabali, Khan Athar, and Ansari Mohd Afwan

Abstract When it comes to our comfort zone, we humans give our best to keep our lifestyle stable. To meet these needs, various technologies and industrial setups have been developed, and these industrial setups release huge amounts of waste. Homes and other human habitats contribute equally to the growing waste content of the world, so it becomes our duty to keep our surroundings in order. In this paper, a PLC-controlled automatic material segregation system is presented. The system is capable of successfully segregating materials such as metal, non-metal, wood, and plastic. The proposed system saves both time and cost: it is a one-time investment with low maintenance and a one-person control system, and it is reasonable for small-scale industries where the initial investment is low. Keywords Programmable logic controller (PLC) · Segregation · Conveyor belt · Inductive sensor · Capacitance sensor · Proximity sensor · Human–machine interface (HMI) · Supervisory control and data acquisition (SCADA)

1 Introduction Lifestyle has become the biggest leisure of today's world, and with growing lifestyles and ways of living, the amount of waste generated also rises. To meet modern needs, various technologies and industrial setups have been developed, and these setups release huge amounts of waste. Homes and other human habitats contribute equally to the growing waste content of the world. The question then arises: what should be done with this waste? Should it simply be dumped somewhere, or used again? Most waste cannot be used again directly once already used, but it can be recycled and reused for other purposes. Some wastes are biodegradable and decay in the soil, but others, such as plastic, glass, and metal, do not decay and are not biodegradable in nature. As far as the waste is concerned, if it is not treated properly, severe harm is done to the environment, which increases harmful gases in nature. To overcome these problems, different steps are being taken to reduce wastage and to recycle the waste created. The first prerequisite for waste management is the separation of the different materials so that they can be processed for different uses. In the past, waste was segregated manually, which consumed a lot of manpower and time and was not the most efficient approach. As work became faster with the rapid growth of man and technology, many machine-based ways of separating waste came into consideration, saving considerable time and manpower. Our project focuses on these particular aspects, making one of the toughest jobs easy. Waste treatment is important, but it is impossible to treat waste without separating it first, so separation is one of the most important topics to focus on. Solid waste deserves the most attention, as it creates most of the hazards in our environment; with increasing pollution and global warming, untreated and unattended waste is one of the major causes. According to available resources, our country does not stand among the leading waste-treating countries of the world, where Germany stands on top for waste management. Since our country generates a major amount of waste that is not attended to and treated properly, it is high time we start focusing on this.

F. A. Habib (B) · K. S. Mehtabali · K. Athar · A. M. Afwan
Anjuman I Islam Kalsekar Technical Campus, Panvel, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_38

2 Methodology Generally, this system operates in three main stages: the input module stage, the main PLC module stage, and the output stage. At the input stage, the sensors are interfaced with the PLC; the detection of plastic, dry/wet, and glass materials is done here. The sensors are placed along the conveyor belt (Fig. 1) and are appropriately fitted with respect to the hydraulics and cylinders, and for each material a separate collection bin is placed below the conveyor belt. A blower is also connected to move dust particles and dry waste materials into a collector section placed directly opposite it. The second stage, the PLC module stage, is the most important stage of the system, as it is the brain of the system. A PLC is a programmable logic controller; it is programmed according to the needs of the user, and the system operates according to the program installed in it. PLCs are commonly used in small-scale industries; the sensor outputs are accepted by the PLC, and the hydraulic cylinders operate accordingly. The third and last stage is the output module stage. In our project, the hydraulic cylinders and the conveyor belt are interfaced at the output stage. As soon as the IR sensor detects an object of waste material, the conveyor belt starts running and the


Fig. 1 Block diagram of PLC-based waste segregator

Fig. 2 3D simulator of waste segregator

cylinders are energized when the respective material or waste is detected, in order to push the waste into its respective bin (Fig. 2).

3 Project Requirement The main requirement of the project is to make the PLC communicate with the sensors, the pneumatic system, and the conveyor belt. Secondly, we have to know how to program the PLC and how to select a PLC according to the project requirements. We also have to study industrial controls.

3.1 Software Requirement As the PLC is a programmable device, it requires software in which the program can be created and installed on the PLC. The three most popular techniques for programming a PLC are "Ladder Diagram (LD), Sequential Function Charts (SFC), and Function Block Diagram (FBD)" [1]. Here, we are using ladder diagram programming, and the software used to program the Allen-Bradley PLC is RSLinx Classic. This software basically helps the user connect the PLC with a computer. RSLinx Classic is a comprehensive factory communication solution for Rockwell Automation networks and devices, and it provides Allen-Bradley programmable controllers access to a wide variety of Rockwell software [2]. The easiest way to think of RSLinx Classic is as a communication hub. SCADA stands for supervisory control and data acquisition. A SCADA system is used to monitor as well as to control a plant in industries such as telecommunications, waste control, and gas and oil refining and transport. SCADA is a technology in which an application acquires data about a system in order to control that system. Here, we are using the WinCC flexible software for controlling and monitoring the system. The features of this software are as follows:
• Communication with the automation system.
• On-screen visualization of images.
• It works on current run-time data.
• It helps to operate the whole process.
We are also using the Factory I/O software for simulating our project.

3.2 Hardware Requirement The main components used for the segregation of waste in this project are as follows:
• Crusher: As the name indicates, it crushes the waste material. Waste material is gathered together and dumped into the crusher to reduce its size [1]. The crushed waste is then put onto the conveyor belt. The crusher is an external stage and not a part of this system, but it should be connected to the PLC.
• Programmable logic controller (PLC): There are different types of PLC based on the input and output modules; here we are using an Allen-Bradley PLC, since it has 20 inputs and 12 outputs, which are sufficient for this system. It is the core of the whole system; it controls all the devices which are connected to or operated by the system. The main objective of the PLC is to obtain signals from the input devices and perform the required actions to get the desired output [3].


• Sensors: Sensors are used to sense the different types of waste material; as the waste materials are of different types, the sensors are as follows:
• IR sensor: The main objective of the IR sensor is to detect the presence of an item on the conveyor belt. As the name indicates, the IR sensor senses an object with the help of infrared radiation reflected off the object; in our system, infrared radiation is emitted to detect the presence of any object on the conveyor. As soon as an item is detected, the sensor sends a command to the PLC to start or stop the conveyor belt [3]. A separate switch is also used to turn the belt ON and OFF.
• Moisture sensor: This sensor detects objects that have moisture molecules inside them, and in this way it detects organic waste material [3]. The moisture sensor is used to separate organic waste from dry waste and is placed where the waste is introduced onto the conveyor belt. It measures the change in electrical impedance: when water vapor is absorbed, the ionic functional groups dissociate and the electrical conductivity increases due to the conductive polymer.
• Metal detection sensor: As the title suggests, it identifies metals in the waste. Here we have used proximity sensors; several types of proximity sensors are in practice, such as capacitive and inductive proximity sensors [2]. A proximity sensor is capable of recognizing the presence of nearby objects without physical contact. "This proximity sensor works on the principle that the inductance of a coil and its power losses vary when a metallic, i.e., conductive, target is detected."
• Plastic detection sensor: It detects all the plastic waste material, using a photoelectric sensor with a built-in amplifier for the detection of plastic, bottles, etc. Bottles of different sizes can be separated using this sensor.
• Glass and paper detector: Here we use proximity sensors; there are two distinct types, i.e., inductive and capacitive proximity sensors. The inductive type recognizes the presence of metallic waste, whereas the capacitive type catches glass and paper waste [2]. The working principle of this sensor is that an internal oscillator oscillates until a target material moves near the sensor face; due to the interaction with the material, the capacitance of a capacitor that is part of the oscillator circuit changes.
• Conveyor belt and fan: In this system, a 12 V motor is used to move the conveyor belt. The belt moves continuously, and the sensors are placed along it in such a way that each material can be detected easily. The material moves over the conveyor belt from the crusher, and different bins for the different materials are placed below the belt to separate the waste. A high-speed blower, or fan, is used to blow dust particles and dry, light waste materials out of the crushed stream and thereby assist the separation; a collector bin is placed directly opposite the blower to collect these materials and dust particles.
• Hydraulic cylinder: The hydraulic cylinder is a spring-return, single-acting cylinder. This cylinder releases pressure because of the high force exerted by the fluid during both the extension and retraction processes. The cylinder is placed along the conveyor belt; as soon as a sensor detects an object or waste material, it gives an input to the PLC, and the PLC responds accordingly and energizes the cylinder [3]. As the hydraulic cylinder is energized, it pushes the waste into the respective bin.
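The sensor-to-bin decision that the PLC's ladder program implements can be sketched in software. The following Python sketch is illustrative only: the sensor names, the priority order, and the bin labels are our assumptions for demonstration, not the paper's actual PLC I/O map or ladder logic.

```python
# Illustrative sketch of the segregation decision made by the PLC program.
# Sensor inputs and bin names are assumptions, not the paper's actual I/O map.
def select_bin(ir, moisture, inductive, capacitive, photoelectric):
    """Map one item's sensor readings to a destination bin."""
    if not ir:
        return None                        # IR sensor: no item on the belt
    if moisture:
        return "wet/organic bin"           # moisture sensor fires on organic waste
    if inductive:
        return "metal bin"                 # inductive proximity sensor fires on metal
    if capacitive:
        return "glass/paper bin"           # capacitive proximity sensor fires on glass/paper
    if photoelectric:
        return "plastic bin"               # photoelectric sensor fires on plastic
    return "collector (blown by fan)"      # light dry residue handled by the blower

print(select_bin(ir=True, moisture=False, inductive=True,
                 capacitive=False, photoelectric=False))  # → metal bin
```

In the real system the same priority chain would be a rung-by-rung ladder program, with each sensor contact energizing the solenoid of the corresponding hydraulic cylinder.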

4 Results The project, an automatic material segregation system using industrial tools, has been built and has the capability to sort out various materials such as metal, non-metal, wet waste, and dry waste. It is made to lower manpower usage in any industry at a low-cost investment. We have also concentrated on making the use of this project very simple and user-friendly, and we have worked mainly on catering to people at low cost; this is the main reason why low-cost tools and cheap sensors/transducers are used in this project.

5 Conclusion The project, a PLC-controlled automatic material segregation system, has been built. The system is capable of successfully segregating materials like metal, non-metal, wood, and plastic. The proposed system saves both time and cost; it is a one-time investment with low maintenance and a one-person control system, and it is reasonable for small-scale industries where the initial investment is low. Acknowledgements We appreciate our project guide, Asst. Prof. Awab Fakih, who provided information and experience that greatly aided the investigation. We also thank the Director of AIKTC, Dr. Abdul Razak Honnutagi, for his support; he always inspires students to progress from the perspective of technical research. We thank our parents for their lifetime support, and we would like to thank Absolute Motion Private Limited for giving us this great opportunity to work on live projects.

References 1. N. Ghosh, P. Kumar, S. Bhardwaj, V.S. Rana, N. Verma, Experimental setup of segregation of industrial waste using PLC. Int. J. Innov. Technol. Explor. Eng. (IJITEE) 8(6S) (2019). ISSN: 2278-3075 2. S. Dwivedi, F. Michael, R. D’souza, A review on PLC based automatic waste segregator. Int. J. Adv. Res. Comput. Eng. Technol. (IJARCET) 5(2) (2016) 3. R.M. Kittali, A. Sutagundar, Automation of waste segregation system using PLC. Int. J. Emerg. Technol. (Special Issue on ICRIET-2016) 7(2), 265–268 (2016)

An Improved Model for Breast Cancer Classification Using Random Forest with Grid Search Method Yagya Buttan , Alka Chaudhary, and Komal Saxena

Abstract An unhealthy lifestyle today is becoming the main cause of the increase in disease cases among humans. One such disease that has seen a rapid increase among women is breast cancer. Benign and malignant are the two classes into which breast cancer is mainly classified. Benign indicates a non-cancerous lesion, which does not spread to other parts of the body, whereas a malignant lesion is cancerous and can therefore spread to different parts of the body; however, it can be treated if detection is done at the early stages. In this paper, for the classification of breast cancer into the two categories, malignant and benign, we use machine learning algorithms and develop an improved model for RF using the grid search method. We use the openly available Wisconsin breast cancer data set, apply the random forest algorithm using Python, train on the data set for classification, and subsequently analyze the outcomes. Keywords Malignant · Benign · Random forest · Machine learning

1 Introduction One of the most critical cancers responsible for death among women is breast cancer. Due to several reasons, such as menopause and lactation issues, masses form in the milk ducts of the breast or mammary glands, which cause breast cancer; however, not all of the lesions formed are necessarily cancerous. Physicians use various techniques like:
1. Using a biopsy,
2. Using ultrasound,
3. Using X-ray for screening,
4. Self-examination to check if any kind of mass can be felt.

Y. Buttan (B) · A. Chaudhary · K. Saxena, Amity Institute of Information Technology, Amity University, Noida, India; e-mail: [email protected]; A. Chaudhary e-mail: [email protected]; K. Saxena e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_39

The pathological techniques advised by physicians for the identification of non-cancerous and cancerous lumps in women are not fast and can sometimes be less accurate or inaccurate. To overcome this difficulty, machine learning algorithms can be applied to achieve accuracy. ML algorithms are an accurate and intelligent approach to prognosticate malignant and benign cases from the given Wisconsin breast cancer data of fine needle aspirate (FNA) images. Here, we will use a random forest model, training and testing on the data to detect benign and malignant cases from digitized FNA images, and the grid search method to improve the model.

2 Literature Survey In the last few years, various techniques regarding breast cancer diagnosis and prognosis have been added to the literature, and many more have been proposed. In 2016 [1], the authors introduced a classification based on a wrapper feature selection technique applied to mammograms, which are then grouped into benign or malignant using mammographic images. In 2017 [2], the literature presents the utilization of various ML algorithms, including the random forest algorithm, aimed at the analysis of breast cancer; all the machine learning algorithms show high performance measures on the binary classification of breast cancer, i.e., discovering whether a case is “B” or “M”. In 2018 [3], “they introduced a paper in which they examined various machine learning strategies such as SVM, decision tree, K-NN and ANNs and applied them to the Wisconsin breast cancer data set for breast cancer analysis, which is the common database used for analyzing the outcomes of various algorithms. Lastly, a healthcare system model was developed for our recent work”. In 2019 [4], an original approach to the integration of data from two cancer studies was presented, in which all the details of the records were a concern. A set of machine-learning-based models was applied for survival time prediction in breast cancer, and similar results are interpreted in the literature, where SVR-linear, kernel ridge, DTR, KNR, and Lasso exhibited comparably accurate survival prediction outcomes. In this research paper, breast cancer analysis is carried out using the WDBC database, which consists of the features of “fine needle aspirate image data” having various characteristics. The random forest algorithm is used to predict benign or malignant from the breast mass.


3 Machine Learning ML is the most trending domain of research study, essentially concerned with the analytical and statistical design of algorithms that enable and give computers the ability to understand and learn. The term “machine learning” originated from the “artificial intelligence” field, but now it is largely a focus domain for various departments of science and engineering. Learning mainly indicates learning from a data set or a set of characteristics (features). The foremost purpose and goal of ML is to build smart machines that can work and think like humans. Many machine learning techniques are utilized for the examination of statistical data: unsupervised learning, supervised learning, and reinforcement learning.

3.1 Random Forest The random forest algorithm is a decision tool used to classify sections of data and further guide computers to make decisions. A random forest has the same central structure as a decision tree: it is an ML algorithm that connects a vast number of instances with the probability of an event happening. The random forest algorithm is especially effective with an ML training set. Some ML operators want their AI systems to work on an expansive data set that goes beyond the capabilities of people, while others just want them to work within a small set of parameters. Most data specialists often opt for a decision tree. The decision tree is an approach to percentages and probabilities taught to countless people in high school; even people who do not know anything about any other data algorithm can easily understand the decision tree algorithm. They may worry about the applicability of a system as complicated as the random forest because they are not familiar with it, but it has its own applicability in many cases. On a training set, a decision tree algorithm may overfit and may reduce the number of achievable results from which the procedure can produce percentages, while a random forest considers all of them. An example of how a random forest works is shown in Fig. 1.
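As a concrete illustration of this overfitting point, the short script below compares a single decision tree with a random forest on a held-out split of the Wisconsin data. It is a sketch using scikit-learn's bundled copy of the data set and an arbitrary random seed, not the paper's exact pipeline.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

tree = DecisionTreeClassifier(random_state=42).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_tr, y_tr)

# A single tree fits the training data closely but tends to generalize worse
# than a forest, which averages the votes of many decorrelated trees.
print("tree  :", tree.score(X_te, y_te))
print("forest:", forest.score(X_te, y_te))
```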

3.2 Grid Search Method Grid search is an optimization method utilized to examine the best subset of hyper-parameters from a specified list of parameters. The method resembles a grid, where all the candidate settings are put in the form of a matrix. The chosen hyper-parameters are the best-performing parameters selected on the training set. In this method, we simply build a model for each combination of the hyper-parameters and evaluate each model; the final hyper-parameters are the most suitable subset of parameters, used in the final model on our testing set.


Fig. 1 The way a random forest might work for B and M classification

We have used a grid search using cross-validation on the training data set of breast cancer. In the grid search [5], we have used the “sklearn model” selection method. The most reliable hyper-parameters selected using grid search schemes are stated below: 1. 2. 3. 4. 5. 6. 7.

‘bootstrap’: True, ‘criterion’: ‘entropy’, ‘max_depth’: 80, ‘max_features’: ‘log2’, ‘min_samples_leaf’: 3, ‘min_samples_split’: 8, ‘n_estimators’: 500.
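A minimal sketch of this tuning step with scikit-learn's GridSearchCV is shown below. The data split and the reduced grid (a small subset around the values listed above, kept small so the search runs quickly) are our assumptions, not the paper's exact configuration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

# A small grid around the hyper-parameters listed above (full grid omitted for speed).
param_grid = {
    "criterion": ["entropy", "gini"],
    "max_depth": [40, 80],
    "max_features": ["log2", "sqrt"],
    "min_samples_leaf": [3],
    "min_samples_split": [8],
    "n_estimators": [100],
}
search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=5)
search.fit(X_tr, y_tr)   # fits one model per grid combination with 5-fold CV

print(search.best_params_)
print("test accuracy:", search.score(X_te, y_te))
```

`best_params_` holds the winning combination selected on the training folds, and the refitted best estimator is then scored once on the untouched test split.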


Fig. 2 Basic diagram of work flow process

4 Proposed Methodology 4.1 Data Collection The data utilized in this research work and analysis is taken from the Wisconsin Breast Cancer Database in the UCI repository. It contains a data set of FNA images comprising 569 cases, of which 212 are malignant and 357 are benign (Fig. 2).
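The same data set also ships with scikit-learn, so the class counts quoted above can be checked directly; this uses sklearn's bundled copy as a convenience, whereas the paper takes the data from the UCI repository.

```python
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()                 # Wisconsin Diagnostic Breast Cancer (WDBC)
malignant = int((data.target == 0).sum())   # in sklearn's encoding, 0 = malignant
benign = int((data.target == 1).sum())      # 1 = benign

print(len(data.target), malignant, benign)  # → 569 212 357
```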

4.2 Indices of Performance Measure In the field of ML, there are many classification algorithms used to foretell outcomes. In supervised learning, classification is an integral part that mainly aims to classify


Table 1 Confusion matrix

                      Actual positive   Actual negative
Predicted positive    TP                FP
Predicted negative    FN                TN

Fig. 3 Heat map of predicted results using random forest

data depending on its different characteristics, which in turn predicts; the model is trained using a training data set with known attributes of the population data to find the most fitting classification algorithm to differentiate between benign and malignant. In this work, to measure the performance of the algorithms, we use some evaluation methods [6]. The data has been split into two parts: a training data set of 75% and a testing data set of 25%. The performance of a classifier depends on several measures like recall, precision, accuracy, the confusion matrix, and F1-score. The confusion matrix is described in Table 1 and Fig. 3. Here TP, FP, TN, and FN denote the numbers of true-positive, false-positive, true-negative, and false-negative predictions, respectively, and:

1. Precision = TP / (TP + FP),
2. Recall = TP / (TP + FN),
3. Accuracy = (TP + TN) / (TP + TN + FP + FN),
4. F1-score = 2 × (Precision × Recall) / (Precision + Recall).
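These formulas can be checked against the counts in the random-forest confusion matrix of Table 2, treating "cancer" as the positive class; the computed accuracy reproduces the value quoted in the results section.

```python
# Counts from Table 2: rows are true classes, columns are predictions.
TP, FN = 87, 3   # cancer cases predicted as cancer / as healthy
FP, TN = 2, 51   # healthy cases predicted as cancer / as healthy

precision = TP / (TP + FP)
recall = TP / (TP + FN)
accuracy = (TP + TN) / (TP + TN + FP + FN)
f1 = 2 * precision * recall / (precision + recall)

print(round(precision, 2), round(recall, 2), round(f1, 2))  # → 0.98 0.97 0.97
print(round(accuracy, 6))                                   # → 0.965035
```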

5 Experimental Results We have implemented a model using random forest along with the grid search method to get highly precise results (Table 2). Here, the total number of false predictions is 5, so we can improve our model to achieve a more precise result. The heat map shows 2 false-positive predictions, indicating 2 people who do not have cancer but are listed as having cancer (Table 3).


Table 2 The confusion matrix for the random forest model is as follows:

           Predicted as cancer   Predicted as healthy
Cancer     87                    3
Healthy    2                     51

Table 3 Recall, support, F1-score and precision using random forest

           Precision   Recall   F1-score   Support
0          0.98        0.97     0.97       90
1          0.94        0.96     0.95       53
Accuracy                        0.97       143

So, the accuracy is 0.965034965034965. To enhance the model, we first normalize the data and then use the grid search method to determine the best hyper-parameters, constructing a suitable model to classify breast cancer cases into benign and malignant (Fig. 4). The new heat map shows only 1 false-positive and 3 false-negative predictions, which indicates better accuracy; this suggests that it is the best model for classification, with high efficiency (Table 4). So, the accuracy is 0.972027972027972, which is about 1% better than before (Table 5).

Fig. 4 Heat map of the results of random forest on normalized data with the grid search method

Table 4 Support, precision, F1-score and recall using random forest with grid search method

           Precision   Recall   F1-score   Support
0          0.99        0.97     0.98       90
1          0.95        0.98     0.96       53
Accuracy                        0.97       143


Table 5 Comparison of various performance measures with the random forest classifier and with the random forest classifier using the grid search method

                          Precision   Recall   F1-score   Support
Macro-avg.                0.96        0.96     0.96       143
Macro-avg. with grid      0.97        0.97     0.97       143
Weighted avg.             0.97        0.97     0.97       143
Weighted avg. with grid   0.97        0.97     0.97       143

6 Conclusion and Future Scope The paper presents an optimal examination of ML algorithms used for the purpose of classification. It covers the random forest algorithm and its shortcomings, which urged us to build an advanced random forest–GSM model for breast cancer classification. The RF alone cannot give the proper results, so to increase the performance of our model we have used random forest on normalized data with the grid search method. This RF–GSM model gives us more accurate results, with only 1–3 false predictions, and all the other performance measures, like precision, F1-score, support, and recall, are also higher than those with RF alone. The introduced model will be extremely helpful in the field of medical science for the examination or analysis of other conditions, such as different types of cancer like lung cancer, brain cancer, etc.

Appendix sklearn: sklearn (scikit-learn) is an open-source ML library for the Python programming language. It includes various regression, classification, and clustering algorithms, including SVM, DBSCAN, random forests, and k-means, and is designed to interoperate with the Python numerical, statistical, and scientific libraries NumPy and SciPy [7].

References 1. A.F.M. Agarap, On breast cancer detection: an application of machine learning algorithms on the Wisconsin diagnostic dataset, in ICMLSC 2018, Phu Quoc Island, Viet Nam, 2–4 Feb 2018 2. M.M. Pawar, et al., Genetic fuzzy system (GFS) based wavelet co-occurrence feature selection in mammogram classification for breast cancer diagnosis. Elsevier GmbH (2016) 3. W. Yue, et al., Machine learning with applications in breast cancer diagnosis and prognosis (2018) 4. I. Mihaylov, Application of machine learning models for survival prognosis in breast cancer studies, in The 18th International Conference on Artificial Intelligence: Methodology, Systems, Applications (2019) 5. A. Paneri, M. Patel, An improved model for breast cancer classification using SVM with grid search method. Int. J. Innov. Technol. Explor. Eng. (IJITEE) 8(8) (2019). ISSN: 2278-3075


6. K. Fukunaga, L. Hostetler, k-nearest-neighbor Bayes risk estimation. IEEE Trans. Inf. Theory 21(3), 285–293 (1975) 7. D. Dua, C. Graff, UCI Machine Learning Repository (University of California, School of Information and Computer Science, Irvine CA, 2019). https://archive.ics.uci.edu/ml

A Review on Low-Noise Amplifier for Wideband Applications Dheeraj Kalra

Abstract Various low-noise amplifier (LNA) topologies are discussed. The LNA is the first block on the receiver side, and its performance plays an important role for the remaining blocks. The common source topology is the most preferable technique to improve the gain. Resistive feedback is used to stabilize the circuit, while a common source stage with an inductive load is used to match the input impedance. The common gate topology is used to improve the impedance matching at the input side, whereas the distributed amplifier topology increases the gain. Source degeneration and distributed amplifier topologies are also discussed. Keywords Impedance matching · Low-noise amplifier · Feedback · Noise figure

1 Introduction The low-noise amplifier is the first block in any receiver system; it is fed from the antenna and plays an important role in circuit performance [1]. The performance parameters of the LNA affect the performance of the mixer, filter, amplifier, etc., connected in the receiver chain, as shown in Fig. 1 [1]. The LNA performance parameters are noise figure, gain, return losses, stability, bandwidth, etc. The noise figure is the ratio of the SNR at the input to the SNR at the output, and its optimum value is from 2 to 3 dB [2]. The LNA should be designed such that its noise contribution to the SNR is as low as possible. The noise factor is given by Noise factor = SNR_i/p / SNR_o/p, or equivalently Noise factor = 1 + V_n^2/(4kT R_S) [3], where V_n^2 is the noise voltage, k is the Boltzmann constant, T is the temperature in Kelvin, and R_S is the source resistance. For the simple common source amplifier shown in Fig. 2, the noise factor can be calculated as NF = 1 + γ/(g_m (R_S || R_P)) + 1/(g_m^2 R_D (R_S || R_P)). The LNA gain raises the signal strength to decrease the noise effect, so its value should be as high as possible. The gain performance of the LNA affects the performance of the remaining

D. Kalra (B) GLA University, Mathura, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_40


Fig. 1 Receiver architecture

Fig. 2 Common source amplifier

receiver blocks. Matching at the input and output ports is carefully done to reduce the return loss at the ports; S-parameters are used to measure the input and output return losses. The stability of the circuit is of high concern: the circuit must be stable at all frequencies and environmental conditions. The stability of the circuit is quantified by K = (1 − |S_11|^2 − |S_22|^2 + |Δ|^2)/(2|S_12 S_21|), where Δ = S_11 S_22 − S_12 S_21. Maximum bandwidth of the circuit ensures its suitability over the entire frequency range, i.e., the LNA should give a flat response over the frequency range of interest [4]. The above-mentioned parameters always have trade-offs between them, and various LNA design topologies are used to improve them. The different LNA topologies are discussed in the next section.
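As a quick numerical illustration of these figures of merit, the snippet below evaluates a common-source noise factor and the Rollett stability factor. Every number in it (g_m, the resistances, the channel-noise coefficient γ, and the S-parameters) is an assumed, purely illustrative value, not data from any referenced design, and the noise-factor expression used is our reading of the in-text formula, F = 1 + γ/(g_m(R_S∥R_P)) + 1/(g_m^2 R_D(R_S∥R_P)).

```python
import math

# Illustrative (assumed) component values for a common-source stage:
gm, RS, RP, RD, gamma = 20e-3, 50.0, 200.0, 500.0, 2 / 3  # 20 mS, ohms, long-channel gamma
Rp = RS * RP / (RS + RP)                                  # R_S || R_P
F = 1 + gamma / (gm * Rp) + 1 / (gm**2 * RD * Rp)         # noise factor
print(round(10 * math.log10(F), 2))                       # → 2.92 dB, inside the 2–3 dB optimum range

# Rollett stability factor from an (assumed) 2-port S-matrix:
S11, S12, S21, S22 = 0.4 - 0.3j, 0.05 + 0.02j, 3.0 + 1.0j, 0.3 - 0.2j
delta = S11 * S22 - S12 * S21
K = (1 - abs(S11)**2 - abs(S22)**2 + abs(delta)**2) / (2 * abs(S12 * S21))
print(K > 1 and abs(delta) < 1)   # unconditionally stable when K > 1 and |Δ| < 1
```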

2 Topologies The common source stage is the basic topology used in LNAs [5]. The gain of the common source MOS amplifier is given by A_V = g_m R_D, where g_m is the transconductance of the MOSFET and R_D is the resistance connected at the drain terminal. The gain can be increased by increasing either g_m or R_D, or both. The common source stage can be connected using a resistive load, but this does not provide good matching and hence has a high return loss.

A Review on Low-Noise Amplifier for Wideband Applications


Fig. 3 a Common source with inductive load. b Common source with resistive feedback

A common source stage with inductive load [6] can be used to provide good impedance matching; it is shown in Fig. 3a. In the common source stage with resistive feedback, a resistance is connected between drain and gate, which helps to stabilize the circuit at the cost of noise figure and gain, while the bandwidth is extended. The resistive feedback topology is shown in Fig. 3b. The common gate stage looks attractive in terms of its low input impedance [7], given by Rin = 1/gm when body biasing and channel-length modulation are not considered, but the topology suffers from limited gain headroom. The common gate topology is shown in Fig. 4a. The inductor L1 connected at the source of the MOS resonates with the capacitance at the output side [2], and the voltage gain of the circuit is R1/(2RS). In the distributed amplifier topology, various MOS stages are connected in series to give high gain and high bandwidth, but it suffers from high power consumption and high noise figure. The distributed amplifier topology is shown in Fig. 4b. In the source degeneration technique, an inductor connected at the source of the MOS cancels out the effect of the parasitic capacitances of the MOSFET and increases the linearity of the circuit. But

Fig. 4 a Common gate topology. b Distributed amplifier topology


Fig. 5 a Source degeneration. b Band-pass filter topology

this topology has more area and more power consumption, and only moderate bandwidth is obtained. The source degeneration topology is shown in Fig. 5a. In the band-pass filter technique, an LC filter is used at the input to provide input impedance matching and reduce the return loss [8, 9]; it gives a narrow bandwidth. The band-pass filter topology is shown in Fig. 5b. Thus, various LNA topologies have been discussed, and each technique has its own pros and cons [9, 10]; there is always a trade-off among these parameters. Table 1 compares the LNA topologies.

Table 1 Comparison of LNA topologies

Topology               Bandwidth  Gain      Noise figure  Power consumption  Impedance matching  Linearity  Area
Resistive feedback     High       Moderate  Low           High               Better              Better     High
Distributed amplifier  High       High      High          High               Better              Good       High
Common gate            High       Low       High          Moderate           Better              Best       Moderate
Band-pass filter       High       Moderate  Low           Low                Good                Moderate   Low
Common source          Low        High      Low           Low                Best                Good       Moderate
Cascode                High       High      Low           Moderate           Best                Best       Moderate
Current reuse          No         High      High          Low                Good                Good       No


3 Conclusion

Various LNA topologies have been discussed, and each has its own pros and cons; there is always a trade-off among the LNA parameters, and different topologies can be combined to improve performance. The common source circuit and its noise factor were discussed, and the common source stage with inductive load and with resistive feedback were explained along with their performance. Other techniques, such as the common gate, distributed amplifier, source degeneration, and band-pass filter topologies, were described with their advantages and disadvantages. Table 1 gives the comparison of the various topologies.

References

1. M. Kumar, V.K. Deolia, A wideband design analysis of LNA utilizing complimentary common gate stage with mutually coupled common source stage. Analog Integr. Circuits Signal Process. 98(3), 575–585 (2019)
2. B. Razavi, RF Microelectronics (Prentice Hall Communications Engineering and Emerging Technologies Series), 2nd edn. (Prentice Hall Press, USA, 2011)
3. D. Kalra et al., Design analysis of inductorless active loaded low power UWB LNA using noise cancellation technique. Frequenz 74(3–4), 137–144 (2020)
4. J.-H. Zhan, S. Taylor, A 5 GHz resistive-feedback CMOS LNA for low-cost multi-standard applications, in International Solid-State Circuits Conference, Digest of Technical Papers (ISSCC) (2006), pp. 721–730
5. B. Perumana, J.-H. Zhan, S. Taylor, B. Carlton, J. Laskar, Resistive-feedback CMOS low-noise amplifiers for multiband applications. IEEE Trans. Microw. Theory Techn. 56(5), 1218–1225 (2008)
6. T. Chang, J. Chen, L. Rigge, J. Lin, ESD-protected wideband CMOS LNAs using modified resistive feedback techniques with chip-on-board packaging. IEEE Trans. Microw. Theory Techn. 56(8), 1817–1826 (2008)
7. M. Chen, J. Lin, A 0.1–20 GHz low-power self-biased resistive-feedback LNA in 90 nm digital CMOS. IEEE Microw. Wirel. Compon. Lett. 19(5), 323–325 (2009)
8. C.-W. Kim, M.-S. Kang, P.T. Anh, H.-T. Kim, S.-G. Lee, An ultra-wideband CMOS low noise amplifier for 3–5-GHz UWB system. IEEE J. Solid-State Circuits 40(2), 544–547 (2005)
9. F. Silveira, D. Flandre, P. Jespers, A gm/ID based methodology for the design of CMOS analog circuits and its application to the synthesis of a silicon-on-insulator micropower OTA. IEEE J. Solid-State Circuits 31(9), 1314–1319 (1996)
10. P. Mahdi, K. Allidina, M.N. El-Gamal, A sub-mW, ultra-low-voltage, wideband low-noise amplifier design technique. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 23(6), 1111–1122 (2015)
11. C.C. Enz, E.A. Vittoz, Charge-Based MOS Transistor Modeling: The EKV Model for Low-Power and RF IC Design (Wiley, New York, USA, 2006)
12. V. Aparin, G. Brown, L. Larson, Linearization of CMOS LNA's via optimum gate biasing, in Proceedings of IEEE International Symposium on Circuits Systems (ISCAS) (2004), pp. 748–751

Effects of Single and Double Wide Slots on Microstrip Patch Antennas Characteristics Using Direct Contact Probe Feed Excitation with Broadsided Radiation

Ambresh P. Ambalgi, S. K. Sujata, Udit Mamodiya, and Priyanka Sharma

Abstract Single and double wide-slot microstrip patch antennas etched on a low-cost dielectric substrate (0.16 cm thick glass epoxy) with a dual-band frequency application are discussed. These patch antennas are designed, tested practically, and verified using EM simulation software (IE3D V-15.4), with the dimensions of the patch and slots optimized so as to obtain the required operable bandwidth in comparison with the conventional antenna. The measured and simulated −10 dB RL curves, radiation patterns, beamwidth, and directivity are presented, together with conclusions. A 26% virtual size reduction with broadsided radiation characteristics is obtained.

Keywords Beamwidth · Frequency · Power · SMA connector · Mobile · Resistance

A. P. Ambalgi (B) Srinivas University, Mangalore, Karnataka, India e-mail: [email protected]; Department of Research in Electronics, Mangalore University, Mangalagangothri 574199, India
S. K. Sujata Godutai Engineering College for Women, Kalaburagi, Karnataka 585103, India e-mail: [email protected]
U. Mamodiya · P. Sharma Poornima Institute of Engineering Technology, Jaipur, Rajasthan 302022, India e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_41

1 Introduction

The common antenna requirement in wireless applications is a low-profile, planar structure, miniaturized in size, with low fabrication and development cost. Hence, microstrip antennas (MSAs) play a significant role in wireless communication applications because of the numerous advantages of


microstrip antennas over conventional antennas, whose characteristics can be tailored to improve performance according to the design requirement [1]. Multi-band and broadband MSA designs with symmetric polarization for remote access networks are accounted in [2–9]; single radiators with multi-band operation [6], bow-tie antennas with meandered slots, patches with U-slots, miniature antennas with circular polarization, patches with E- and U-type slots, circular asymmetric slotted patches, and ring-type patches of various shapes are examined in numerous reviews in [10–24]. MSAs also have certain demerits, namely narrow bandwidth (1–2%) and low gain. Many design methods have been proposed and reviewed, but a novel multi-slot patch fed using a line feed is found to be an efficient method to improve the performance (bandwidth, gain, SWR, radiation characteristics) of MSAs [25–31]. The concept of the microstrip antenna (MSA) was first put forth by Prof. G. A. Deschamps in 1953 [32], but the first practical antennas were functionally developed in the 1970s by Robert Munson and John Q. Howell [33–35]. Owing to merits such as small volume, low weight, conformability to a host surface, suitability for mass production, and ease of integration with active devices using PCB technology, MSAs have been adopted in indoor/outdoor wireless applications [25–28] for dual-band frequency operations. Their main disadvantages are narrow bandwidth, low gain, and very low power-handling capacity; hence, this paper concentrates on improving the bandwidth and gain of the MSA for dual-band frequency operations.

2 Design of Slotted Patch Antennas

A careful design of the MSAs has been carried out using the optimized dimensions obtained from the IE3D simulation platform, and the antennas are fabricated on a low-cost, widely available dielectric substrate (glass epoxy) having thickness h = 1.66 mm and permittivity εr = 4.2. Before fabrication, the antennas are sketched using AutoCAD 2018 design software for design accuracy. A lithography process is used as the fabrication method to develop the antennas. A 50 Ω semi-miniature A-type (SMA) connector (radius r = 0.4 mm) is used to feed the desired power to the patch using the direct contact probe feed excitation technique. The direct contact feed is chosen so as to reduce the overall design complexity in comparison with other feeding techniques. The position of the feed is calculated using the equation [27] given below:

R(x) = R0 cos²(πx/L)    (1)

where R(x) is the input resistance at resonance for the TM10 mode, R0 is the radiation resistance at resonance at the edge of the patch, L is the rectangular patch length, and x is the distance from the patch edge. Hence, the location of the feed point is


placed on the Y axis of the conducting patch at a distance of x = 4.3 from the topmost edge of the patch, as represented in Figs. 1, 2 and 3. This technique is adopted as it allows easy feed fabrication for array structures. The structural diagram of the conventional rectangular microstrip antenna (C-RMA) is depicted in Fig. 1, with optimized dimensions L (patch length) = λo/4.39 (17.76 mm) and W (patch width) = 0.299λo (23.28 mm). All the calculated dimensions are taken to be of the order of the wavelength in terms of λ and λ/2. The geometry of the patch with a single wide slot, called the single wide-slot rectangular microstrip antenna (SWS-RMA), having ground plane length Lg = 40 mm and width Wg = 40 mm, is shown in Fig. 2. The outer dimensions (L × W) of the SWS-RMA are the same as those of the C-RMA, but a slot of varied dimensions is etched on the patch: a single wide slot of length L1 = 17.26 mm and width W1 = 3.29 mm,

Fig. 1 Conventional-RMA

Fig. 2 SWS-RMA


Fig. 3 DWS-RMA

and S1 = 20.05 mm is the slot distance along the length, while S2 = 2.82 mm and S3 = 1.64 mm are the distances from the patch edges. As shown in Figs. 2 and 3, wider slots are etched on the patch plane since they are very effective in enhancing the bandwidth and gain of the patch antenna compared with narrow slots [32]: as the length and width of the slots increase, the current path is elongated, and bandwidth and gain improvement is noted. The geometry of the patch etched with double wide slots, called the double wide-slot rectangular microstrip antenna (DWS-RMA), having ground plane length Lg = 40 mm and width Wg = 40 mm, is shown in Fig. 3. The outer dimensions of the DWS-RMA are the same as those of the C-RMA of Fig. 1. Double wide slots with lengths L1 = L2 = 19.14 mm and widths W1 = W2 = 4.86 mm, with spacings S1 = 4.17 mm, S2 = 8.54 mm, S3 = 4.17 mm, and S4 = 0.68 mm, are etched on the same patch. All these calculated dimensions are taken to be of the order of the wavelength in terms of λ and λ/2 (λ is the operating wavelength).
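A minimal sketch of the underlying calculations: the standard rectangular-patch design equations (not necessarily the exact procedure the authors followed, though for 3.8 GHz on εr = 4.2, h = 1.66 mm they land near the reported dimensions) together with the feed inset solved from Eq. (1). The helper names `patch_dimensions` and `feed_offset` and the edge resistance R0 = 240 Ω are illustrative assumptions:

```python
import math

C = 3e8  # speed of light, m/s

def patch_dimensions(f, er, h):
    """Standard design equations for a rectangular microstrip patch."""
    w = C / (2 * f) * math.sqrt(2 / (er + 1))             # patch width
    e_eff = (er + 1) / 2 + (er - 1) / 2 / math.sqrt(1 + 12 * h / w)
    dl = 0.412 * h * ((e_eff + 0.3) * (w / h + 0.264)) / \
         ((e_eff - 0.258) * (w / h + 0.8))                # fringing extension
    l = C / (2 * f * math.sqrt(e_eff)) - 2 * dl           # patch length
    return l, w

def feed_offset(r_target, r0, l):
    """Solve R(x) = R0 cos^2(pi*x/L), Eq. (1), for the feed inset x."""
    return l / math.pi * math.acos(math.sqrt(r_target / r0))

L, W = patch_dimensions(3.8e9, 4.2, 1.66e-3)
x = feed_offset(50.0, 240.0, L)  # R0 = 240 ohm is an assumed edge resistance
print(f"L = {L*1e3:.2f} mm, W = {W*1e3:.2f} mm, feed inset x = {x*1e3:.2f} mm")
```

With these assumptions L and W come out within roughly 1.2 mm of the reported 17.76 mm × 23.28 mm; the feed inset, however, depends strongly on the assumed R0, which is why it need not match the paper's x = 4.3.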

3 Measured and Simulated Results

3.1 Bandwidth of Conventional Rectangular Microstrip Antenna (C-RMA)

As shown in Fig. 1, the chosen design frequency for the C-RMA is 3.800 GHz in the WiMax band (operating at 3.5–3.800 GHz) of the S-band frequency range. The −10 dB return loss (RL) curve of the designed C-RMA is measured using a network analyzer (Agilent Technologies E8363B, operating from 10 MHz to 40 GHz). The C-RMA is also studied through the IE3D EM simulation software. The


Fig. 4 −10 dB return loss (RL) graph of C-RMA

operating frequency range selected for this study is the 2–6 GHz band. Figure 4 presents the −10 dB RL curve of the C-RMA obtained experimentally and through IE3D simulation. The C-RMA attained −15 dB RL at 3.850 GHz (resonating near the design frequency of 3.800 GHz), which validates the C-RMA design in both simulation and experiment. The experimental bandwidth (BW) of the C-RMA is found to be 2.1% (i.e., 22 MHz). The simulated return loss of the C-RMA is −18.22 dB, resonating at 4.7 GHz with a bandwidth of 6.1%. The measured and simulated resonance frequency curves show slight variation owing to the temperature dependence of the dielectric material and the soldering impact of the 50 Ω coaxial feed [28] on the stripline patch. The measured two-dimensional radiation pattern of the C-RMA at the 3.85 GHz resonant frequency, shown in Fig. 5, signifies linear polarization with broadsided radiation and a cross-polar level below −18 dB. The directivity and power reflection coefficient are 6.112 dB and 0.213, respectively. The C-RMA can be used in WiMax and some mobile communication-based applications.

Fig. 5 2D radiation plot of C-RMA
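The fractional bandwidths quoted throughout this section are derived from the −10 dB band edges of the RL curve. A small helper shows the arithmetic; the band-edge frequencies used here are hypothetical, not values read from the paper's figures:

```python
def fractional_bw(f_low_ghz, f_high_ghz):
    """Absolute (MHz) and fractional (%) bandwidth from -10 dB band edges."""
    fc = (f_low_ghz + f_high_ghz) / 2          # center frequency, GHz
    bw_mhz = (f_high_ghz - f_low_ghz) * 1e3    # absolute bandwidth, MHz
    return bw_mhz, 100 * bw_mhz / (fc * 1e3)

# Hypothetical band edges around a 3.8 GHz design frequency
bw, pct = fractional_bw(3.76, 3.84)
print(f"BW = {bw:.0f} MHz ({pct:.1f}%)")  # 80 MHz, about 2.1%
```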


3.2 Bandwidth of Single Wide Slot-RMA (SWS-RMA)

The single horizontal wide slot (shown in Fig. 2), etched on the bottom portion of the patch along the length axis (i.e., the non-radiating edge), is designed and fabricated; hence the name single wide-slot RMA (SWS-RMA). Figure 6 depicts the −10 dB return loss (RL) bandwidth characteristics of the SWS-RMA. The experimental results show that the SWS-RMA resonates at 3.60 GHz with a return loss of −18 dB. The bandwidth (BW) of the SWS-RMA is found to be 7.1% (i.e., 195 MHz). The simulated resonant frequency of the SWS-RMA is 3.75 GHz (BW = 7.8%, RL = −28 dB). Compared to the C-RMA, the SWS-RMA shows a clear improvement in bandwidth: the single slot on the patch plane improves the impedance bandwidth with better return loss and also reduces the virtual antenna size [28]. Figure 7 shows the 2D radiation characteristics of the SWS-RMA at the resonant frequency of 3.6 GHz, signifying linear polarization with broadsided radiation and a cross-polar level below −20 dB. At 3.6 GHz, an enhancement in gain up to

Fig. 6 −10 dB return loss (RL) graph of SWS-RMA

Fig. 7 Radiation plot of SWS-RMA at 3.6 GHz


7.78 dB is seen, which is attributed to a decrease in the antenna Q factor caused by current-path elongation on the surface of the patch, resulting in an increase in gain [29]. The SWS-RMA has a minimum X-polar (cross-polar) level of less than −20 dB. The measured HPBW is found to be 54°. The measured directivity and power reflection coefficient of this antenna are 6.255 dB and 0.312, respectively. The SWS-RMA achieves a 20% virtual size reduction.

3.3 Bandwidth of Double Wide Slot-RMA (DWS-RMA)

The practical and simulated −10 dB RL curves of the DWS-RMA are represented in Fig. 8. Dual-band resonance is obtained for the DWS-RMA at frequencies f1 and f2 with bandwidths BW1 and BW2. The bandwidth of each operating band is found to be 9.82% (272 MHz) and 3.53% (143 MHz), respectively. The minimum return loss in BW1 is −19.06 dB at 3.49 GHz, and in BW2 it is −17.70 dB at 4.69 GHz. The simulated resonant frequencies of the DWS-RMA are 3.5 GHz (BW1 = 5.6%, RL = −13.2 dB) and 4.5 GHz (BW2 = 1.8%, RL = −16.1 dB). The dual bands BW1 and BW2 arise because the patch acts as a primary resonator and the slots act as secondary resonators [30]. Placing two wide slots on the patch (as shown in Fig. 3) elongates the current path along the slot and patch edges, which introduces an additional resonance; the slots resonate near the patch resonance, which increases the impedance bandwidth [31]. Also, wider slots provide a bandwidth improvement compared to narrow slots [28]. Hence, by adding wider slots on the patch, not only are dual bands achieved, but an enhancement of up to 260 MHz of bandwidth in BW1 is obtained compared to the BW1 of the C-RMA and SWS-RMA. A virtual size reduction of 26% is achieved with the DWS-RMA.

Fig. 8 −10 dB return loss (RL) graph of DWS-RMA


Fig. 9 a Radiation plot of DWS-RMA at 3.49 GHz. b Radiation plot of DWS-RMA at 4.69 GHz

The 2D radiation characteristics of the DWS-RMA at 3.49 and 4.69 GHz are represented in Fig. 9a, b. The patterns remain linearly polarized with broadsided characteristics. The measured HPBWs are 43° and 38° at the two frequencies, and the X-polar (cross-polar) levels are below −15 and −25 dB, respectively. The maximum gains of the DWS-RMA measured in BW1 and BW2 are 8.21 and 2.79 dB. The measured directivity and power reflection coefficient of this antenna are 6.529 dB and 0.101, respectively. This kind of antenna is well suited for WiMax and RADAR systems. The HPBWs of all the proposed antennas are quite low; such antennas act as focusing devices, since the beam becomes narrower and, correspondingly, the gain increases.

4 Conclusion

This paper presented an improvement in the bandwidth and gain characteristics of patches with differently structured slots fabricated using a lithography process. The DWS-RMA, with wide slots etched on the patch, increased the gain to 8.21 dB with a bandwidth of 272 MHz and dual resonance. In addition, a virtual size reduction of 26% is obtained with linearly polarized broadsided characteristics. Such antennas have applications in wireless communication.

Acknowledgements This work is supported by VGST, KSTePs, Government of Karnataka, India, which directed funds in the form of research grants under the scientist faculty (RGS/F) scheme for work in the area of microstrip antennas in the high microwave frequency band range.

Funding VGST–KSTePs, Government of Karnataka, funded this work under the RGS/F research grant (Grant no.: VGST/KSTePs: GRD No. 731).

Conflict of Interest The authors declare that they have no conflict of interest.


References

1. R. Garg, P. Bhartia, I.J. Bhal, A. Ittipiboon, Microstrip Antenna Design Handbook (Artech House, Boston, 2001)
2. C. Chulvanich, J. Nakasuwan, N. Songthanapitak, N. Ansntrasirichai, T. Wakabayashi, Design of narrow slot antenna for dual frequency. PIERS Online 3(7), 1024–1028 (2007)
3. Y.J. Wang, C.K. Lee, Design of dual-frequency microstrip patch antennas and application for IMT-2000 mobile handsets. Prog. Electromagnet. Res. 83, 265–278 (2002)
4. S.V. Shynu, G. Augustin, C.K. Anandan, P. Mohanan, K. Vasudevan, Design of compact reconfigurable dual frequency microstrip antennas using varactor diode. Prog. Electromagnet. Res. 60, 197–205 (2008)
5. A. Pal, S. Behera, K.J. Vinoy, Design of multi-frequency microstrip antennas using multiple rings. IET Microwaves Antennas Propag. 3, 77–84 (2009)
6. D.D. Krishna, M. Gopikrishna, C.K. Aanandan, P. Mohanan, K. Vasudev, Compact dual band slot loaded circular microstrip antenna with a superstrate. Prog. Electromagnet. Res. 83, 245–255 (2008)
7. C.T.P. Song, P.S. Hall, H. Ghafouri-Shiraz, Multiband multiple ring monopole antennas. IEEE Trans. Antennas Propag. 51(4), 722–729 (2003)
8. T. Archevapanich, J. Nakasuwan, N. Songthanapitak, N. Ansntrasirichai, T. Wakabayashi, E-shaped slot antenna for WLAN applications. PIERS Online 3(7), 1119–1123 (2003)
9. Y.J. Ren, K. Chang, An annular ring antenna for UWB communications. IEEE Antennas Wirel. Propag. Lett. 5(1), 274–276 (2006)
10. G. Mayhew-Rydgers, J.W. Avondale, J. Joubert, New feeding mechanism for annular-ring microstrip antenna. Electron. Lett. 36, 605–606 (2000)
11. N.S. Nurie, R.J. Langley, Input impedance of concentric ring microstrip antennas for dual frequency band operation including surface wave coupling. IEE Proc. 137(6), 331–336 (1990)
12. I. Misra, S.K. Chowdhury, Study of impedance and radiation properties of a concentric microstrip triangular-ring antenna and its modeling techniques using FDTD method. IEEE Trans. Antennas Propag. 46(4), 531–537 (2003)
13. S.I. Latif, L. Shafai, Dual-layer square-ring (DLSRA) for circular polarization, in IEEE Antennas and Propagation Society International Symposium, vol. 2A (2005), pp. 525–528
14. R. Garg, V.S. Reddy, Edge feeding of microstrip ring antennas. IEEE Trans. Antennas Propag. 51(8), 1941–1946 (2003)
15. P.M. Bafrooei, L. Shafai, Characteristics of single and double-layer microstrip square-ring antennas. IEEE Trans. Antennas Propag. 47(10), 1633–1639 (1999)
16. S. Behera, K.J. Vinoy, Design of dual frequency microstrip ring antennas, in IEEE International Symposium on Microwaves ISM, vol. 08 (2008), pp. 277–281
17. R. Hopkins, C. Free, Equivalent circuit for the microstrip ring resonator suitable for broadband materials characterization. IET Microwaves Antennas Propag. 2(1), 66–73 (2008)
18. S.L.S. Yang, A.A. Kishk, K.F. Lee, Frequency reconfigurable U-slot microstrip patch antenna. IEEE Antennas Wirel. Propag. Lett. 7, 127–129 (2008)
19. S.H. Wi, J.M. Kim, T.H. Yoo, H.J. Lee, J.Y. Park, J.G. Yook, H.K. Park, Bow-tie-shaped meander slot antenna for 5 GHz application, in Proceedings of the IEEE International Symposium on Antennas and Propagation, vol. 2 (2002), pp. 456–459
20. R. Chair, C.L. Mak, K.F. Lee, K.M. Luk, A.A. Kishk, Miniature wide-band half U-slot and half E-shaped patch antennas. IEEE Trans. Antennas Propag. 53, 2645–2652 (2005)
21. G.F. Khodaei, J. Nourinia, C. Ghobadi, A practical miniaturized U-slot patch antenna with enhanced bandwidth. Prog. Electromagnet. Res. B 3, 47–62 (2008)
22. N. Misran, M.N. Shakib, M.T. Islam, B. Yatim, Design analysis of a slotted microstrip antenna for wireless communication, in Proceedings of World Academy of Science Engineering and Technology, vol. 37 (2009), pp. 448–450
23. Nasimuddin, Z.N. Chen, X. Qing, Asymmetric-circular shaped slotted microstrip antenna for circular polarization. IEEE Trans. Antennas Propag. 58(12), 3821–3828 (2010)


24. Nasimuddin, Z.N. Chen, X. Qing, Slotted microstrip antenna for circular polarization with compact size. IEEE Antenna Propag. Mag. 55(2), 124–137 (2013)
25. R.E. Bahl, P. Bhartia, Microstrip Antennas (Artech House, Dedham, MA, 1980)
26. D.M. Pozar, Microstrip antennas. Proc. IEEE 80, 79–89 (1992)
27. K.R. Carver, J.W. Mink, Microstrip antennas technology. IEEE Trans. Antennas Propag. 29, 2–24 (1981)
28. R.J. Mailloux et al., Microstrip array technology. IEEE Trans. Antennas Propag. 29, 25–37 (1981)
29. J.S. Kuo, G.B. Hsieh, Gain enhancement of a circularly polarized equilateral-triangular microstrip antenna with a slotted ground plane. IEEE Trans. Antennas Propag. 51(7), 1652–1656 (2003)
30. S. Rhee, G. Yun, CPW fed slot antenna for triple-frequency band operation. Electron. Lett. 42(17), 952–953 (2006)
31. G.Z. Rafi, L. Shafai, Wideband V-slotted diamond-shaped microstrip patch antenna. Electron. Lett. 40(19), 1166–1167 (2004)
32. G.A. Deschamps, Microstrip microwave antennas, in Proceedings of the 3rd USAF Symposium on Antennas (1953)
33. R.E. Munson, Single slot cavity antennas assembly. U.S. Patent No. 3713462 (1973)
34. R.E. Munson, Conformal microstrip antennas and microstrip phased arrays. IEEE Trans. Antennas Propag. 22, 74–78 (1974)
35. Q.E. Howell, Microstrip antennas. IEEE Trans. Antennas Propag. 23, 90–93 (1975)

Solar Roadways: A Road Toward Betterment Arpit, Richa Ferwani, and Swikrati Gupta

Abstract The development of solar energy in the past few years has been a boon to the development of the entire world. To make optimum use of solar energy and move toward a cleaner and greener society, the innovative concept of solar roadways has recently been introduced. It means nothing more than paving roads with solar panels to utilize solar energy effectively: electricity is generated by road surfaces paved with photovoltaic cells, combined with LED signage [1]. Present roads are petroleum-based asphalt roads, and replacing them with solar roadways can be a step toward a better society that is eco-friendly, feasible, and reduces accidents. This will not only increase the efficiency of power generation but also reduce our dependency on pollution-causing fossil fuels.

Keywords Asphalt · Petroleum · Roadways · Photovoltaic cells · LED signage

Arpit · R. Ferwani (B) · S. Gupta Maharishi Arvind International Institute of Technology, Kota, Rajasthan, India e-mail: [email protected]; Arpit e-mail: [email protected]; S. Gupta e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_42

1 Introduction

Solar roadways, as the name suggests, are roadways running on solar power. These days we can find solar panels, also known as photovoltaic cells, just about everywhere. These roadways convert solar power into electrical power, deliver decentralized power, and in return reduce the demand for fossil fuels [1]. This renewable energy source replaces the fossil fuels currently used for the generation of electricity, which in turn cuts greenhouse gases by half. Looking at the efficiency, the


efficiency of solar-enabled roadways has proved to be three times that of asphalt roads, and they have the capability to produce 2.5 times the demanded power [2].

2 Working

What if a petroleum-based economy could shift towards a renewable energy economy? In today's world of development and innovation, there is nothing to stop us from establishing a better society, and solar roadways are roads leading us toward a more effective lifestyle. The solar panels used in roads comprise three layers:

• Road surface layer
• Electronics layer
• Base plate layer (Fig. 1).

1. Road Surface Layer: Since the solar rays must reach the photovoltaic cells through this layer, it has to be translucent and of very high strength. The layer is rough and strong enough to withstand vehicular traffic, and it provides grip to the tires of vehicles. It is not only weatherproof but also passes sunlight through to the solar collector cells embedded within. It is rough enough to provide great traction and avoid the skidding of vehicles, yet still translucent enough to pass sunlight to the photovoltaic collector cells embedded within it, along with the LEDs and a

Fig. 1 Solar road construction [3]


heating element [3]. It is also tough enough to bear the weight of large vehicles under harsh conditions, and it is made such that no dust, water, or insects can pass through it to the layer below, protecting the delicate electronics.

2. Electronics Layer: The LEDs are mounted and driven in this layer. All the electronic elements, such as the LEDs, heating springs, and PV cells, are placed here. Because this layer contains heating elements, ice does not accumulate on the road during cold and wet weather, avoiding mishaps. The layer helps in lighting up the path as well as in communications [4]. Being delicate, it is given extra protection by hermetically sealed glass. Using the LED lights, it warns drivers if an animal arrives on the road, or of a detour ahead, an accident, or construction work. A microprocessor controls the lighting as well as the communications.

3. Base Plate Layer: While the electronics layer collects energy from the sun, this layer acts as a support for the layers above it and helps in distributing the power received from the electronics layer. The generated power can be distributed to homes or for commercial use. It can be made from recycled waste material, which reduces the cost somewhat. This layer is weatherproof, unaffected by extreme summer or winter seasons, and it shields the electronics layer (Fig. 2).

Fig. 2 Block diagram for constructing solar roadways


3 Advantages

The continual improvement in solar roadway technology helps to decrease the cost of raw materials for road construction, because recycled material can be used for the base support. Solar roadways have the potential to power our homes and shops while we drive on them [2]. Using the sun's power to generate electricity is their biggest advantage, since coal and fossil fuel reserves are already dwindling in the twenty-first century, and it is a better option for sustainable development. These roads have proved to last longer than traditional roads. They can also be used to light up streets by placing light-emitting diodes below the topmost layer. Owing to their symmetric and uniform surface, they have a positive and artful appearance. A solar roadway can be considered a smart grid that serves both as a road and as a source of electricity generation [5]; solar roadways are thus a multitasking kind of technology.

4 Challenges

Though solar power shows major advantages, every coin has two sides, and every innovation has its own pros and cons [3]. The negatives include the unverified seasonal efficiency and undefined durability of solar roads, and deposits on the road surface (e.g., rubber) can also be of high concern. A further concern is the start-up cost of these roadways: paneled roads cannot easily be afforded by most poorly developing nations [5]. Many under-developed countries also have a major problem with garbage disposal on roads, which is an extremely critical concern, as accumulated garbage can block sunlight from reaching the panels. Viewed as a short-term goal, these roadways may not appear feasible and economical, but in the long run the investment pays back, making them feasible and economically acceptable [6].

5 Future Scope

1. The drastic effects of asphalted roads are a warning against their continued use, and this perspective should be understood.
2. Foreign oils remain in continuous use and can deplete resources that must be kept reserved for future generations.
3. Global warming caused by the foreign oils used in making asphalt roads can be reduced to a major extent using these roadways [6].
4. Innovations on every street are part of leading a positive change.

Solar Roadways: A Road Toward Betterment

437

5. Helping the under-developed nations develop using these roadways, these can be a pathway toward a better and bright future.

6 Conclusion

The implementation of solar roadways is a significant step forward in innovation and economization. The glass panels covering these roads and the photovoltaic cells beneath them are cost-effective; the very useful built-in application of melting ice with heating elements reduces accidents and road jams, and the LEDs make the roads look even brighter, which are additional positive factors [5]. If taken up by road and highway administrations and followed by innovative campaigns, this step can prove to be a leading approach in the near future.

References

1. Northmore, S. Tighe, Innovative Design: Are Solar Roads Feasible
2. A.A. Kulkarni, “Solar Roadways”—rebuilding our infrastructure & economy. IJERA 3(3), 1429–1436 (2013)
3. E.R. Ranjan, Solar power roads: revitalizing solar highway electrical power and smart grid. Int. J. Eng. Res. Gen. Sci. 3(1) (2015). ISSN 2091-2730
4. S. Maben, Star Trek: George Takei tweet boosts solar roadways. Christian Science Monitor. Associated Press (31 May 2014). Retrieved 1 June 2014
5. A. Seward, Best of what’s new: solar roadways. Popular Science (2014). Retrieved 6 Jan 2015
6. The centuries-old technology behind solar roadways. Indiegogo’s Most Popular Campaign Ever. Forbes (3 June 2014)

A Review on Radiomic Analysis for Medical Imaging

Nitika Gupta and Priyanka Sharma

Abstract Medical imaging is one of the most important aspects of medical diagnosis and disease detection. Medical imaging techniques include ultrasound, X-ray imaging, MRI scans, CT scans and positron emission tomography (PET), including nuclear medical imaging. More often than not, these imaging techniques are enough to decipher the disease, but in certain cases they do not offer enough evidence. In such cases, by the time the disease is finally detected, either it is too late to cure or the patient has been in agony the whole time without a proper cure. Radiomics is a technology that helps detect such diseases in their early stages, since it can capture characteristics that cannot be seen by the naked eye. It does so using data-characterization algorithms that extract large amounts of information from medical images. The extracted features are checked for potential disease markers, so a personalized therapeutic response can be provided to the patient at a very early stage of the disease, potentially saving a life. Radiomics was originally developed for tumor detection and oncology but can now be extended to the detection of any disease that uses medical images.

Keywords Radiomics · Medical imaging · Radiomic features · Diseases · Data characterization · Database · Image acquisition and reconstruction · Feature extraction · Metastasis

1 Introduction

Radiomics is the process of extracting hidden phenotype information from a tumor or cancer cell from medical images such as X-rays, CT scans and PET scans, using data-characterization algorithms. This is necessary because a tumor or cancer cell is not made up of a single element but of different types of materials, and it possesses information that cannot be appreciated by the naked eye. A tumor is a heterogeneous element, and curing it requires studying all the different regions of heterogeneity and then treating them separately. This is where radiomics comes into play: using data-characterization algorithms, we can characterize the different regions of the tumor and provide personalized treatment for the best outcome possible.

N. Gupta (B) · P. Sharma
Poornima Institute of Engineering Technology, Jaipur, Rajasthan 302022, India
e-mail: [email protected]
P. Sharma
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_43

2 Tumors

It is observed that every tumor differs from every other in composition and components. Hence, a tumor is not homogeneous throughout; it is very heterogeneous. Even if two tumors are present in, say, Person A's lungs, they will differ from each other in one way or another [1]. One of the major studies in this area was carried out by Charles Swanton's group at UCL (London). They performed multi-region sequencing of renal cell carcinoma (RCC) tumors and observed heterogeneity not only at the genetic level but also at the molecular level. Some mutations were common to many regions of the tumor, while others were specific to only one or two regions. They found that if these region-specific mutations were cured, it led to the cure of not the entire tumor but definitely some parts of it. Hence, it can be concluded that curing a cancer is not curing one disease but curing a bunch of diseases all together [2, 3] (Fig. 1).

Fig. 1 Tumor heterogeneity specifying the different mutations in a tumor


3 Why Radiomics?

We have seen that every tumor is different and that there are ever-growing methods to cure parts of it. To personalize treatment, i.e., to identify which option would be better for a patient (radiotherapy, chemotherapy, immunotherapy, targeted drugs or surgery), we need to identify the biological differences within the tumor, its biomarkers. A biomarker is a measurable index of the existence or severity of some disease or physiological condition of a living being. The goal is to find biomarkers in such a way that they are:

1. Easy to perform
2. Noninvasive
3. Low cost
4. Able to capture the 3D complexity of solid tumors.

Radiomics is the solution to this problem. It is a recent, fast-developing process that does all of the above. Radiomics is the analysis of acquired CT, MRI or PET images through data-characterizing software to extend the information provided by these biological scans [1, 4, 5].

4 Process

Radiomics serves the purpose of converting images to mineable, original and true data at a high processing rate. It consists of five different tasks, namely:

1. Image acquisition and reconstruction
2. Image segmentation and rendering
3. Feature extraction and feature qualification
4. Databases and data sharing
5. Ad hoc informatic analyses.

Each of these steps is carried out independently and poses its very own challenges [4, 5].

4.1 Image Acquisition and Reconstruction

To identify biomarkers, we need some reference for them; this is where image acquisition comes in. Medical scanning technology gives access to a large number of medical scans that can be used to characterize the biomarkers of such tumors. Note that we are not taking a picture with a camera: the scanners produce a raw volume of data. This raw volume of data is not usable as-is in medical investigations and needs further processing. To make the scans interpretable, we need to reconstruct them, i.e., use a reconstruction tool [4, 5]. Reconstruction can be done using many different algorithms, but the images obtained through each algorithm may differ in aspects such as quality and usability. This directly affects the ability to find and characterize an abnormality in the scans, so the choice of algorithm must be made carefully [4]. The images thus constructed or reconstructed are saved in a large public database that is accessible to all medical care centers and can be improved over time [5].

4.2 Image Segmentation

To actually extract the information and biomarkers, the images must be reduced to their most essential parts, called "volumes of interest." These are the places where the actual data of interest reside, and they need to be extracted [4]. Since the volume of data to be processed is so large, segmentation cannot be done manually; the remedy is automatic and semiautomatic segmentation algorithms. But these algorithms need to satisfy some criteria first:

1. Reproducible results: if an algorithm is run a second time on the same set of data, it should give the same result as the previous attempt.
2. Consistency: the algorithm should perform the task at hand and nothing extraneous; i.e., it should reliably detect the disease in the scans.
3. Accuracy: precise detection is a must, and accurate results can only be achieved with accurate data.
4. Time efficiency: in medicine, time plays a major role. There is no use in having advanced algorithms and techniques if they give results only after the disease has taken its toll on the patient; hence, time efficiency is utterly important [4].
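The reproducibility criterion above can be made concrete with a toy example. The threshold-based `segment` function below is purely hypothetical (real segmentation algorithms are far more sophisticated); the point is only that a deterministic algorithm run twice on the same scan must return voxel-for-voxel identical masks.

```python
# Toy illustration of the reproducibility criterion for segmentation
# algorithms. The threshold rule is hypothetical, for illustration only.

def segment(scan, threshold=128):
    """Mark every voxel brighter than `threshold` as lesion (1)."""
    return [[1 if v > threshold else 0 for v in row] for row in scan]

def reproducible(algorithm, scan):
    """Criterion 1: two independent runs must agree exactly."""
    return algorithm(scan) == algorithm(scan)

scan = [[120, 200, 90],
        [130, 250, 40],
        [100, 140, 210]]

mask = segment(scan)               # the extracted volume of interest
assert reproducible(segment, scan)
```

A non-deterministic algorithm (e.g., one seeded randomly on each run) would fail this check, which is exactly why reproducibility has to be verified before an algorithm is trusted clinically.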

4.3 Feature Extraction and Qualification

With segmentation done successfully, many features of the tumor come into the limelight. The traced region is called the region of interest (ROI). Features can be broadly divided into qualitative and quantitative features. Qualitative features are used to describe lesions in the radiology lexicon, while quantitative features are extracted by software implementing mathematical algorithms and are called descriptors. They show diverse degrees of complexity and convey, first, properties of the voxel intensity histogram and the lesion shape and, ultimately, the spatial arrangement of the intensity values at the voxel level [4]. Quantitative features can be categorized as:

1. Shape features: these describe the shape of the ROI and reflect geometric properties such as the volume of the ROI, the highest value of the diameter across different orthogonal directions, the largest surface, sphericity (how closely the ROI resembles a perfect sphere) and tumor compactness.
2. First-order statistical features: these describe properties derived directly from the histogram, such as the mean, median, upper limit and lower limit of the voxel intensities of the image, along with other aspects of the ROI such as entropy (randomness), kurtosis (flatness), skewness (asymmetry) and uniformity. These features summarize the distribution of individual voxel values without regard for spatial relationships [4].
3. Second-order statistical features: these are also known as textural features [4, 5]. They are obtained by calculating the statistical interrelationships between adjacent and nearby voxels [2], and they provide a measure of the spatial arrangement of the voxel intensities, and thus of intra-lesion heterogeneity. Such features can be derived from the gray-level co-occurrence matrix (GLCM), which quantifies the incidence of voxel pairs with given intensities at a predetermined distance along a fixed direction, or from the gray-level run-length matrix (GLRLM), which quantifies runs of consecutive voxels with the same intensity along fixed directions [6].
4. Higher-order statistical features: these are attained by statistical methods after mathematical transforms or filters are applied, for example to suppress noise or highlight details. These filters and transforms include, but are not limited to, fractal analysis, Minkowski functionals, wavelet transforms and the Laplacian of Gaussian-filtered images, which extract areas possessing coarse texture patterns.

To keep the feature set manageable, further qualification is needed: redundant features must be removed, and since many different variables have to be analyzed, a feature selection algorithm is required to do all of the above. A major problem to overcome here is redundant, unstable and non-reproducible data, which the feature selection algorithm must also handle [6]. This step also includes the analysis of the data; the most critical step, performed after feature selection, is to analyze the chosen data. Molecular, clinical and genetic data have a big impact on the resulting analysis, so they need to be integrated before the actual analysis. The data analysis can then be done in two main ways:

1. Compare the different features obtained to one another, identify the common information, and decipher what it means to have these features together.
2. Use supervised or unsupervised analysis. Supervised analysis uses an outcome variable to create prediction models, while unsupervised analysis sums up the information provided to it and represents it graphically, which helps visualize the results even better [4, 5].
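A few of the features named above can be sketched in plain Python. This is an illustrative toy working on 1D intensity lists and a single 2D GLCM offset; production pipelines use dedicated radiomics toolkits on full 3D volumes. The sphericity formula is the standard one (equal to 1 for a perfect sphere).

```python
import math
from collections import Counter

def first_order(voxels):
    """First-order statistics over the voxel intensity histogram."""
    n = len(voxels)
    mean = sum(voxels) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in voxels) / n)
    skew = sum((v - mean) ** 3 for v in voxels) / (n * std ** 3)
    kurt = sum((v - mean) ** 4 for v in voxels) / (n * std ** 4)
    probs = [c / n for c in Counter(voxels).values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    return {"mean": mean, "skewness": skew, "kurtosis": kurt,
            "entropy": entropy}

def sphericity(volume, surface_area):
    """Shape feature: 1.0 for a perfect sphere, lower for irregular ROIs."""
    return (math.pi ** (1 / 3)) * ((6 * volume) ** (2 / 3)) / surface_area

def glcm(image, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for a single offset (dx, dy):
    counts voxel pairs (i, j) separated by the given displacement."""
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    return m

features = first_order([0, 1, 2, 3])   # symmetric data: skewness is 0
```

For a sphere of radius r, volume 4πr³/3 and surface area 4πr² give sphericity exactly 1, which is the sense in which the feature measures closeness to a perfect sphere.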


4.4 Database and Data Sharing

This step involves two processes, namely the creation of the database and the use of the database. Creation is one of the major steps and one of the major challenges faced by radiomics today. Since patient information is tightly governed and sensitive, acquiring it is very difficult, a difficulty compounded by privacy laws such as HIPAA. But this does not mean we can compromise on the accuracy of the data: the acquired data should not become corrupted even when compressed. A large storage location is also needed for the images, because the clinical and molecular data, which are equally important, must be blended in as well [4].

4.5 Ad Hoc Informatic Analysis

Now that the database is created, it is up to the medical institutions to use the data optimally and judiciously. The analysis for a new patient involves an algorithm that runs the new data against the database and returns information regarding the patient's disease and the course it will take over time. It can also estimate the speed of the tumor's growth and the future course of treatment, such as chemotherapy, radiotherapy, targeted drugs or surgery; ultimately, it can indicate the survival rate of the patient or the time period a patient has. The algorithm has to identify the mutual relationship between the new data or images and the pre-identified features; hence, it should be able to extrapolate from the base data to the data input for analysis [4, 5].
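The matching step described above can be sketched as a nearest-neighbor lookup over feature vectors. All case names, feature names and numbers below are invented for illustration; a real system would use validated prediction models rather than raw feature distances.

```python
import math

# Hypothetical database of previously characterized cases, each described by
# a (made-up) radiomic feature vector: [sphericity, norm. entropy, growth].

database = {
    "case_responding":  [0.82, 0.10, 0.4],
    "case_progressing": [0.35, 0.75, 2.1],
}

def euclidean(a, b):
    """Plain Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def closest_case(new_features, db):
    """Return the database case with the smallest feature distance."""
    return min(db, key=lambda k: euclidean(new_features, db[k]))

match = closest_case([0.78, 0.15, 0.5], database)  # → "case_responding"
```

The design choice here mirrors the text: the pre-identified features act as the coordinate system, and a new patient is interpreted by relating their image-derived features to cases whose outcomes are already known.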

5 Application

Radiomics has the following major applications:

1. Prediction of Clinical Outcomes: Several investigations have indicated that radiomic features are better at predicting treatment response than conventional measures, for example tumor volume, tumor diameter and the maximum radiotracer uptake on positron emission tomography (PET) imaging. Using this approach, an algorithm has been developed, after initial training based on intra-tumor lymphocyte density, to predict the likelihood of tumor response to immunotherapy, demonstrating the clinical potential of radiomics as a powerful tool for personalized treatment in the developing field of immuno-oncology. Other investigations have likewise shown the utility of radiomics for predicting the immunotherapy response of NSCLC patients using pre-treatment CT and PET/CT images [3].
2. Risk Prediction of Distant Metastasis: The metastatic potential of tumors may also be predicted by radiomic features. For instance, 35 CT-based radiomic features were recognized as predictive of distant metastasis of lung cancer in an investigation by Coroller et al. in 2015. They concluded that radiomic features can be valuable for recognizing patients at high risk of developing distant metastasis, guiding physicians toward the effective treatment for individual patients [7].
3. Prognostication: As there is marked variability in patient outcomes even within the same cancer stage, precisely determining the stage of a tumor is necessary for physicians to choose between curative and palliative treatments. Radiomic studies have demonstrated that image-based markers can provide information orthogonal to staging and biomarkers and so improve prognostication [3].
4. Assessment of Cancer Genetics: The biological mechanisms of lung tumors have been found to show distinct and complex imaging patterns. Specifically, Aerts et al. [3] showed that radiomic features were related to biological gene sets, for example cell-cycle phase, regulation of immune system processes, DNA recombination and so forth. Moreover, various mutations of glioblastoma (GBM), such as 1p/19q deletion, MGMT methylation, NF1, EGFR and TP53, have been demonstrated to be significantly predicted by magnetic resonance imaging (MRI) volumetric measures, including necrosis volume, tumor volume and contrast-enhancing volume [8].
5. Differentiation of True Progression from Radionecrosis: A common phenomenon called radiation necrosis, or treatment effect, is observed after stereotactic radiosurgery (SRS) for brain metastases, and it is nearly indistinguishable from actual progression. Significant differences were observed in a cohort of 66 patients with 82 treated lesions and known pathological outcomes: feeding the highest-ranked radiomic features to an optimized IsoSVM classifier gave a specificity of 0.86 and a sensitivity of 0.65, with an area under the curve of 0.81 under leave-one-out cross-validation. By comparison, neuroradiologists could classify only 73% of cases, with a specificity of 0.19 and a sensitivity of 0.97. This shows that radiomics captures true differences between progression of brain metastases treated by SRS and treatment effect [9].
6. Image-Guided Radiotherapy: Radiomics is noninvasive and can hence be repeated for a given patient more readily than invasive tumor biopsies. It has been proposed that radiomics could act as a medium to monitor dynamic tumor changes during the course of radiotherapy and to characterize at-risk sub-volumes for which an increase in dose or additional treatment could be useful [10].
7. Prediction of Physiological Events: Since radiomics is essentially the matching of images against available images, it can also be used to study hard-to-observe physiological events such as brain activity, which is examined using functional MRI (fMRI). The added advantage of fMRI is that raw fMRI images can themselves undergo radiomic analysis and generate features, and these features can then be mapped to meaningful brain-related activities [11].
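The sensitivity and specificity figures quoted above for distinguishing progression from radionecrosis are defined from a classifier's confusion counts. The sketch below shows only the definitions; the counts are hypothetical round numbers chosen to reproduce values similar to those quoted, not data from the cited study.

```python
# Sensitivity and specificity from confusion counts. The counts below are
# invented for illustration, NOT taken from the cited study.

def sensitivity(tp, fn):
    """Fraction of truly positive cases (e.g., true progression) flagged."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of truly negative cases (e.g., radionecrosis) ruled out."""
    return tn / (tn + fp)

tp, fn = 13, 7   # assumed: 20 true progressions, 13 detected
tn, fp = 43, 7   # assumed: 50 radionecrosis cases, 43 ruled out
sens = sensitivity(tp, fn)   # 13 / 20 = 0.65
spec = specificity(tn, fp)   # 43 / 50 = 0.86
```

The trade-off in the passage is visible in these definitions: the neuroradiologists' high sensitivity (0.97) with very low specificity (0.19) means they called almost everything progression, whereas the classifier balanced the two.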


6 Drawbacks

For the detection, classification and diagnosis of many different diseases, multiparametric imaging is the most useful. One problem faced today is that radiomics extracts textural features from single images, which limits its actual scope in practical (clinical) settings: single-image features cannot truly capture the underlying tissue characteristics in a high-dimensional multiparametric imaging space [12].

7 Future Scope

The latest development in this direction is a multiparametric imaging radiomic framework called MPRAD, used for the extraction of radiomic features from high-dimensional datasets. It was tested on breast cancer and stroke, with the following results [12].

Breast cancer: The MPRAD framework distinguished malignant from benign breast lesions with excellent results: a specificity of 0.80, a sensitivity of 0.87 and an AUC of 0.88, a 9–28% increase in AUC over single radiomic parameters. Most importantly, MPRAD showed that the glandular tissues were similar between the groups, with no major differences observed [12].

Stroke: A stroke is also called a cerebrovascular accident in the brain. On brain stroke, MPRAD features demonstrated increased performance in classifying perfusion–diffusion mismatch compared with single-parameter radiomics, with no differences observed between the gray- and white-matter tissues [12].

References

1. R.J. Gillies, P.E. Kinahan, H. Hricak, Radiomics: images are more than pictures, they are data. Radiology 278(2), 563–577 (2018)
2. Y. Balagurunathan, V. Kumar, Y. Gu, J. Kim, H. Wang, Y. Liu, D.B. Goldgof, L.O. Hall, R. Korn, B. Zhao, L.H. Schwartz, S. Basu, S. Eschrich, R.A. Gatenby, R.J. Gillies, Test-retest reproducibility analysis of lung CT image features. J. Digit. Imaging 27(6), 805–823 (2014)
3. H.J.W.L. Aerts, E.R. Velazquez, R.T.H. Leijenaar, C. Parmar, P. Grossmann, S. Carvalho, J. Bussink, R. Monshouwer, B. Haibe-Kains, D. Rietveld, F. Hoebers, M.M. Rietbergen, C.R. Leemans, A. Dekker, J. Quackenbush, R.J. Gillies, P. Lambin, Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nat. Commun. 5 (2014)
4. S. Rizzo, F. Botta, S. Raimondi, D. Origgi, C. Fanciullo, A.G. Morganti, M. Bellomi, Radiomics: the facts and challenges of image analysis. Eur. Radiol. Exp. 2(1), 36 (2018)
5. V. Kumar, Y. Gu, S. Basu, A. Berglund, S.A. Eschrich, M.B. Schabath, K. Forster, H.J.W.L. Aerts, A. Dekker, D. Fenstermacher, D.B. Goldgof, L.O. Hall, P. Lambin, Y. Balagurunathan, R.A. Gatenby, R.J. Gillies, Radiomics: the process and the challenges. Magn. Reson. Imaging 30(9), 1234–1248 (2012)
6. M.M. Galloway, Texture analysis using gray level run lengths. Comput. Graph. Image Process. 4(2), 172–176 (1975)
7. T.P. Coroller, P. Grossmann, Y. Hou, E.R. Velazquez, R.T.H. Leijenaar, G. Hermann, P. Lambin, B. Haibe-Kains, R.H. Mak, H.J.W.L. Aerts, CT-based radiomic signature predicts distant metastasis in lung adenocarcinoma. Radiother. Oncol. 114(3), 345–350 (2015)
8. R. Brown, M. Zlatescu, A. Sijben, G. Roldan, J. Easaw, P. Forsyth, I. Parney, R. Sevick, E. Yan, D. Demetrick, D. Schiff, G. Cairncross, R. Mitchell, The use of magnetic resonance imaging to noninvasively detect genetic signatures in oligodendroglioma. Clin. Cancer Res. 14(8), 2357–2362 (2008)
9. L. Peng, V. Parekh, P. Huang, D.D. Lin, K. Sheikh, B. Baker, T. Kirschbaum, F. Silvestri, J. Son, A. Robinson, E. Huang, H. Ames, J. Grimm, L. Chen, C. Shen, M. Soike, E. McTyre, K. Redmond, M. Lim, J. Lee, M.A. Jacobs, L. Kleinberg, Distinguishing true progression from radionecrosis after stereotactic radiation therapy for brain metastases with machine learning and radiomics. Int. J. Radiat. Oncol. Biol. Phys. 102(4), 1236–1243 (2018)
10. S.S.F. Yip, T.P. Coroller, N.N. Sanford, E. Huynh, H. Mamon, H.J.W.L. Aerts, R.I. Berbeco, Use of registration-based contour propagation in texture analysis for esophageal cancer pathologic response prediction. Phys. Med. Biol. 61(2), 906–922 (2016)
11. I. Hassan, A. Kotrotsou, A.S. Bakhtiari, G.A. Thomas, J.S. Weinberg, A.J. Kumar, R. Sawaya, M.M. Luedi, P.O. Zinn, R.R. Colen, Radiomic texture analysis mapping predicts areas of true functional MRI activity. Sci. Rep. 6, 25295 (2016)
12. V.S. Parekh, M.A. Jacobs, MPRAD: a multiparametric radiomics framework (2018)

Design of CMOS 6T and 8T SRAM for Memory Applications

Binduswetha Pasuluri, V. J. K. Kishor Sonti, S. M. M. Trinath, and N. Bala Dastagiri

Abstract In this brief, single-ended 6T SRAM and single-ended 8T SRAM with virtual ground architectures are presented, which address the reliability and stability problems of SRAM cells. The virtual-ground technique is used in the 8T SRAM model to weaken the positive feedback loop, thus enhancing the write ability of the SRAM cell. These SRAM circuits also need no precharge circuitry for the read operation. The architectures isolate the storage nodes from the read bit lines, which increases read stability and eliminates read upset. In this work, we explored and compared the power dissipation and delay of the standard techniques. From the comparison, it is evident that the suggested techniques significantly improve efficiency in terms of power and delay.

Keywords Single-ended SRAM · Virtual ground · Power dissipation · Delay

B. Pasuluri (B) · N. Bala Dastagiri
Department of E.C.E, G. Pullaiah College of Engineering and Technology, Kurnool, Andhra Pradesh, India
e-mail: [email protected]
N. Bala Dastagiri
e-mail: [email protected]
V. J. K. Kishor Sonti
Sathyabama Institute of Science and Technology, Chennai, Tamil Nadu, India
e-mail: [email protected]
S. M. M. Trinath
Ace Engineering Academy, Hyderabad, Telangana, India
e-mail: [email protected]
N. Bala Dastagiri
Annamacharya Institute of Technology & Science, Rajampet, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_44


1 Introduction

The widespread use of mobile phones and handheld devices, coupled with the availability of fast communication technologies, has given rise to an exponential demand for multimedia processing. One of the essential goals of such gadgets is energy-efficient design for longer battery life. In addition, high-resolution imaging in these handheld devices has exacerbated the power-consumption problem due to the extended processing and storage it requires. It is estimated that about 30% of power consumption is due to embedded static random access memory (SRAM) alone [1]. Dynamic power consumption depends quadratically on voltage, so memories are designed to operate at ultra-low voltages to obtain significant power reductions in memory activity. Lower supply voltage, however, hinders memory operation and fundamentally increases the bit error rate (BER) [2]. While this is one of the real obstacles to using such memories in mainstream processors, it is worthwhile in processors serving multimedia apps, because these are known to be error-tolerant [3].

Due to growing demands for greater functionality at higher power efficiency, particularly in cell phones and wireless devices, high-density and low-power designs have become increasingly fundamental for distinct applications. With the growing demands for greater memory capacity and speed, it has become essential to ensure reduced power consumption and greater reliability for the individual memory cells and the overall memory structure, at higher speed. Memories form a large part of any system, and the overall system performance depends heavily on memory. Different techniques are used to reduce power consumption and enhance the noise margin in memory design, including circuit partitioning, dual-threshold-voltage schemes, increasing the gate-oxide thickness for non-critical circuits, and more [1–4]. Furthermore, the virtual-ground technique is used to upgrade the cell's write ability. The suggested circuit is much simpler and more energy-efficient in terms of design, implementation, and operation.

In contemporary VLSI and system-on-chip (SoC) designs, embedded memories are unavoidable. Because of their ideal embedded characteristics, current embedded memories are dominated by the 6-transistor (6T) SRAM: logic-CMOS compatibility and fast, low-power operation. They are used at sizes ranging from a few kilobits to a few hundred megabits and occupy significant area in present-day SoCs. However, with the scaling of CMOS technology, the stability of SRAM at ultra-low supply voltages has become a crucial problem for wearable applications. Section 2 gives a short overview of the existing SRAM designs, Sect. 3 examines the simulation results of the existing SRAM architectures, and Sect. 4 concludes the paper with a concise review of our ongoing and future work.


Fig. 1 Schematic circuit of conventional 6T SRAM

2 Review of Existing SRAM Architectures

In this brief, the various existing SRAM architectures are discussed.

2.1 Conventional 6T SRAM

Figure 1 shows the conventional 6T SRAM cell, the industry standard since the advent of SRAM [5, 6]. The main requirement in SRAM design is reliable read and write operation, and the 6T cell is very effective due to the simplicity and symmetry of its circuit. However, in the 6T cell's dual-bit-line scheme, the bit lines are not electrically isolated from the internal storage nodes during read and write operations, so during the read procedure the SRAM cell is extremely vulnerable to noise. Due to the voltage division along the access (N3, N4) and pull-down (N1, N2) transistors between the precharged bit lines (BL and BLB) and the ground terminal of the SRAM cell, the voltage at the storage node holding "0" rises above ground. This situation may end in a misread value, commonly known as a read upset. The conventional 8T SRAM was introduced to overcome this limitation.
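The read-disturb mechanism described above can be approximated with a simple resistive voltage divider. The on-resistance values below are hypothetical round numbers for illustration, not device measurements; a real analysis would use transistor I-V models.

```python
# Back-of-the-envelope model of the 6T read disturb: during a read, the
# precharged bit line, the access transistor and the pull-down transistor
# form a voltage divider, so the node storing "0" bounces above ground.
# All numbers below are assumed, not extracted device data.

VDD = 1.0            # precharged bit-line voltage (V), assumed
R_ACCESS = 2000.0    # access transistor on-resistance (ohm), assumed
R_PULLDOWN = 1000.0  # pull-down transistor on-resistance (ohm), assumed

def read_bump(vdd, r_access, r_pulldown):
    """Voltage at the '0' storage node while read current flows through
    the access/pull-down divider."""
    return vdd * r_pulldown / (r_access + r_pulldown)

v0 = read_bump(VDD, R_ACCESS, R_PULLDOWN)
# If v0 rises past the trip point of the opposite inverter, the cell flips:
# this is the read upset that read-isolated topologies avoid.
```

This is also why 6T cells are sized with a pull-down stronger (lower on-resistance) than the access transistor: a larger cell ratio keeps the bumped node voltage below the inverter trip point.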

2.2 Conventional 8T SRAM

In the 8T SRAM shown in Fig. 2, two additional stacked transistors (N5, N6) provide access to the cell through an additional read bit line (RBL). It has two dedicated word lines (WWL and RWL); everything else is the same as in the standard 6T SRAM. The read path of the 8T SRAM is separated from the rest of the cell, expanding the read static noise margin (RSNM). The higher noise margin ensures better read stability and robustness, and the 8T read operation does not disturb the data held in the cell. The strength of the read-stack transistors [6, 7] dictates the performance of the 8T SRAM during the read task.

Fig. 2 Schematic circuit of conventional 8T SRAM with dual-port

2.3 10T Single-Ended SRAM

During the read task, conventional 6T and 8T SRAM structures require precharging of the bit lines. This precharging imposes severe energy and timing constraints on the design and operation of high-density, high-capacity SRAM applications. Figure 3 shows the circuit of a 10T SRAM with a non-precharged read bit line (RBL). It is a combination of the traditional 6T SRAM cell, an inverter, and a transmission gate. The read word line (RWL) controls the NMOS transistor (N4) of the gate and its complement RWL_B controls the PMOS transistor (P4). Once RWL and RWL_B are enabled, the transmission gate is activated and the inverted storage node is driven onto RBL. The precharge hardware is dispensed with in the 10T SRAM framework, since the inverter fully charges or discharges the RBL. The RBL consumes no power if the newly read data matches the previous state; thus, if consecutive "0"s or "1"s are read out, the 10T single-ended SRAM cell does not consume any additional power. Charge and discharge power is consumed only when the readout data differs from the previous state. For a sequence of random data, the transition probability on RBL is one half for the 10T single-ended SRAM, thereby reducing power consumption considerably during the read task [5]. Nevertheless, the additional devices and required wiring impose a higher area overhead compared with 8T SRAMs [7, 8].
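The claim that the inverter-driven RBL spends energy only on data transitions, with probability one half for random data, can be checked with a quick transition count over a bit sequence (a behavioral abstraction, not a circuit simulation):

```python
import random

# Behavioral check of the RBL power argument: the inverter-driven RBL
# charges or discharges only when consecutive readout values differ, so
# runs of identical bits cost no RBL energy at all.

def rbl_transitions(bits):
    """Count RBL charge/discharge events over a sequence of readouts."""
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

random.seed(0)
reads = [random.randint(0, 1) for _ in range(100_000)]
rate = rbl_transitions(reads) / (len(reads) - 1)

# For random data, the transition rate approaches the 0.5 probability
# claimed in the text, while constant data costs nothing:
assert abs(rate - 0.5) < 0.01
assert rbl_transitions([1, 1, 1, 1, 1]) == 0
```

By contrast, a precharged bit line is charged on (nearly) every read cycle regardless of the data pattern, which is the source of the savings the text attributes to the 10T cell.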


Fig. 3 Schematic circuit of the 10T single-ended SRAM cell

2.4 8T SRAM Cell with Virtual Ground The 8T SRAM uses single-ended activities of writing and reading. The suggested 8T SRAM cell provides resistance to the read disturbance over the use of the inverter (P3N5), confining the RBL storage node (QB) in the circuit, as shown in Fig. 4. Therefore, the QB storage node readout can be conducted without the stored information being

Fig. 4 Architecture of 8T SRAM with virtual ground


B. Pasuluri et al.

disturbed. The common 6T SRAM cell uses two bitlines (BL and BLB) and a single word line (WL) during write and read operations, whereas the proposed 8T SRAM framework uses one BL and one WL, leading to reduced power consumption during write and read tasks. The proposed configuration also provides greater read and write stability. The architecture removes the precharge circuit, since the inverter charges or discharges the read bitline (RBL) entirely; as a consequence, a significant reduction in power consumption can be achieved. During the read operation, the conventional 6T and 8T SRAM cells require precharging of the bitlines, which further increases the memory cell's power consumption.

2.4.1 Write Operation of Virtual Grounded 8T SRAM

The proposed cell is a single-ended architecture built on the 6T cell with a virtual ground. The write bit line (WBL) and write word line (WWL) are enabled during the write procedure, while the read word line (RWL) is disabled. The access transistor N3 is ON while N4 is OFF due to the deactivation of RWL, so the write operation is isolated from the read path. A virtual ground is also used to enhance cell writeability by weakening the positive feedback of the cross-coupled inverters (P1-N1, P2-N2). The virtual ground node (P4-N7) is connected to ground during hold and read operation to keep the stored data under positive feedback; the hold condition of the proposed design is therefore like that of the 6T SRAM cell. During the write operation, however, the virtual ground node is connected to the source of the PMOS transistor (P4), as shown in Fig. 4. Because a PMOS is a weak pull-down device, this weakens the positive feedback mechanism and allows the write operation to be performed efficiently. Consider Q initially at "1", and suppose a "0" is to be written into the cell. The write operation begins once WWL is asserted; WBL is set to zero to write the "0". As Q falls low, QB becomes high by virtue of the positive feedback.

2.4.2 Read Operation of Proposed SRAM

Only RWL is asserted during the read procedure, while WWL and WBL are deactivated. The read operation is isolated from the write path: with WWL and WBL disabled, transistor N3 is OFF while N4 is ON, and the storage node QB is buffered onto the RBL. The value stored at QB is thus not degraded, which improves the read static noise margin (RSNM).
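The WWL/RWL/WBL protocol of Sects. 2.4.1 and 2.4.2 can be summarized in a signal-level behavioral sketch. This is an illustrative abstraction, not a transistor-level model: the virtual-ground weakening of the latch feedback is not represented, and the RBL is assumed to reflect node QB (the actual polarity depends on the buffering inverter).

```python
class VirtualGround8T:
    """Signal-level sketch of the single-ended 8T cell described above.
    Only the WWL/RWL/WBL access protocol is modelled."""

    def __init__(self, q=1):
        self.q = q                  # storage node Q

    @property
    def qb(self):                   # complementary storage node QB
        return 1 - self.q

    def write(self, wbl):
        # WWL high, RWL low: N3 conducts and the read path is isolated
        self.q = wbl

    def read(self):
        # RWL high, WWL/WBL low: QB is buffered onto the RBL, so the
        # stored value is not disturbed by the read access
        return self.qb

cell = VirtualGround8T(q=1)
cell.write(0)                        # write "0": Q falls, QB rises
print(cell.q, cell.qb, cell.read())  # Q = 0, QB = 1, RBL reads 1
```

The key property the sketch captures is that a read returns the buffered complement without modifying Q.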

2.5 Proposed 6T Single-Ended SRAM Cell

The architecture of the single-ended 6T SRAM cell is shown in Fig. 5. Compared with existing or conventional architectures, the proposed SRAM cell improves one or more aspects of read and write stability. The proposed circuit is an extension of a 7T SRAM cell. Transistor N2 always remains in the cutoff region


Fig. 5 Architecture of single ended 6T SRAM cell

in this circuit, because its gate is connected to ground. This weakens the N2-P2 contention, thereby increasing the write margin.

2.5.1 Write Operation of Single-Ended 6T SRAM

The write bit line (WBL) and write word line (WWL) are enabled during the write procedure, while the read word line (RWL) is disabled. The access transistor N3 is ON while N4 is OFF due to the deactivation of RWL, so the write operation is isolated from the read path. The storage node stores the data placed on the write bit line (WBL); because of the feedback mechanism, the output of storage node QB is the inverse of Q. Consider Q to be "1" initially, and suppose a "0" is to be written into the cell. The write operation begins once WWL is asserted; WBL is set to zero to write the "0". As Q falls low, QB becomes high by virtue of the positive feedback.

2.5.2 Read Operation of Single-Ended 6T SRAM

Only RWL is asserted during the read procedure, while WWL and WBL are deactivated. The read operation is isolated from the write path: with WWL and WBL disabled, transistor N3 is OFF while N4 is ON, and the storage node QB is buffered onto the RBL.


3 Proposed SRAM

In this section, the proposed SRAM architecture is developed using reversible logic. The reversible SRAM architecture is shown in Fig. 6. The latch consists of two back-to-back connected inverters, which can be implemented using Feynman, Toffoli, or Fredkin gates. Compared with the other reversible gates, Feynman gates dissipate the least power; the latch is therefore formed using Feynman gates. The truth table of the proposed SRAM is given in Table 1.

3.1 Proposed SRAM Reversible Cell with Read and Write Signals

Figure 7 shows the proposed SRAM cell with write and read signals. The reversible SRAM cell studied in the previous section is used for storing the data bits. The stored value is passed through a 2 × 2 Feynman gate with binary "1" as one of the inputs, which generates the complement at the output of the Feynman gate along with one garbage output. The complement output is given as input to the

Fig. 6 SRAM cell

Table 1 Truth table during write operation

Bit | WL | Stored data
 0  |  1 | 0
 1  |  1 | 1
 X  |  0 | No change


Fig. 7 Schematic of final SRAM cell


Table 2 Performance of different SRAM structures

SRAM     | Constant inputs | Garbage outputs | Quantum cost | Quantum delay | Number of gates | Number of transistors
[1]      | 3               | 3               | 21           | 19            | 5               | 60
[6]      | 1               | 1               | 6            | 6             | 2               | 24
Proposed | 2               | 2               | 2            | 2             | 2               | 13

other 2 × 2 Feynman gate, along with binary "1" as the other input. This Feynman gate also functions as an inverter whose output is fed back to the original input; in this way, the back-to-back connected inverter action is realized using Feynman gates, and one more garbage output is generated. The SRAM cell is in either write or read mode when WL = 1. If WR = 1, the value at the data input is stored in the SRAM cell; if RD = 1 and WR = 0, the value saved in the SRAM cell is reflected at the data output. The comparison of the Feynman-gate SRAM with existing structures in terms of constant inputs, garbage outputs, quantum cost, quantum delay, number of gates, and number of transistors is given in Table 2.
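The Feynman-gate latch described above can be illustrated with a short sketch (an illustrative aid, not from the paper). A 2 × 2 Feynman (CNOT) gate maps inputs (A, B) to outputs (A, A ⊕ B); with a constant "1" on the second input it behaves as an inverter with one pass-through (garbage) output, and two cascaded stages reproduce the back-to-back inverter latch with the two garbage outputs reported in Table 2.

```python
def feynman(a, b):
    """2 x 2 Feynman (CNOT) gate: P = A, Q = A XOR B."""
    return a, a ^ b

def reversible_latch(d):
    """Two cascaded Feynman gates with constant-1 second inputs act as
    back-to-back inverters: the first stage produces NOT d, the second
    restores d. Each stage leaves one pass-through (garbage) output."""
    g1, d_bar = feynman(d, 1)       # complement of the stored bit
    g2, d_out = feynman(d_bar, 1)   # complement again -> original bit
    return d_out, d_bar, (g1, g2)

for bit in (0, 1):
    out, complement, garbage = reversible_latch(bit)
    print(f"stored {bit}: restored {out}, complement {complement}, "
          f"garbage {garbage}")
```

The two constant inputs and two garbage outputs of this construction match the "Proposed" row of Table 2.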

3.1.1 Read Operation

Assume that the cell initially stores logic "1", so that during the read operation Q sits at V dd and QB at 0 V. Before the read operation begins, the BL and BLB lines are precharged to a level between 0 V and V dd (usually to V dd ). During the read operation, the BL and BLB of the SRAM cell are coupled to a sense amplifier, which produces the corresponding output data.

3.1.2 Write Operation

For the write operation, assume that the cell initially stores logic "1" and that logic "0" is to be written. BL is discharged to 0 V, BLB is charged to V dd , and the cell is selected by raising WL to V dd . The proposed SRAM cell architecture is simulated in Tanner EDA using CMOS technology. The architecture is improved, and the design is minimized in terms of quantum cost and worst-case delay: the quantum cost is reduced to 2 and the worst-case quantum delay to 2.

4 Simulation Results

The transient simulation results of the 8T SRAM cell with virtual ground are depicted in Fig. 8. In this simulation, WWL is held high up to 280 ns while RWL is disabled; the data is written into the SRAM cell according to the WBL value during


Fig. 8 Transient results of 8T SRAM cell

this time interval. The value of WBL is assigned to Q, and QB takes the inverse of Q; the read operation is isolated from the write procedure during this period. RWL is activated later, and the value at the storage node Q can then be read on the RBL. In this manner, the values at the storage nodes Q and QB are plotted for distinct word-line and bit-line values. The transient simulation results of the single-ended 6T SRAM cell are depicted in Fig. 9. In this simulation, up to 280 ns WWL is asserted (with the data carried on WBL) while RWL is deactivated; during this interval the data is written into the SRAM cell according to the value of WBL. The value of WBL is assigned to Q, and QB takes the inverse of Q; the read operation is isolated from the write operation during this period. Later, RWL is activated and the value at storage node Q can be read at

Fig. 9 Transient results of single-ended 6T SRAM cell


Fig. 10 CMOS implementation of reversible gates based SRAM

RBL. In this way, the corresponding values at the storage nodes Q and QB are plotted for different word-line and bit-line values. The implementation of the proposed SRAM architecture is shown in Fig. 10. In this brief, all of the work is carried out in CMOS 45 nm technology using Tanner EDA tools. The proposed SRAM has three modes of operation: when WL = '1' and RL = '0', the SRAM operates in write mode; when WL = '0' and RL = '1', it operates in read mode; and when WL = '0' and RL = '0', it is in hold mode. The transient simulation results of the proposed SRAM are plotted in Fig. 11. From Table 3 it is evident that in CMOS 45 nm technology the power dissipation stays roughly equivalent across the standard SRAM architectures, while the average delay changes considerably: the delay of the suggested SRAM architectures is reduced compared with the standard architectures. The proposed SRAM dissipates less power and is faster than the other architectures.
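The three operating modes above amount to a two-signal decode, which can be sketched as follows (a tiny illustrative aid; the WL = RL = '1' combination is not used in the text and is treated as invalid here):

```python
def sram_mode(wl, rl):
    """Decode the three operating modes of the proposed reversible SRAM
    from its word-line (WL) and read-line (RL) controls."""
    if wl == 1 and rl == 0:
        return "write"
    if wl == 0 and rl == 1:
        return "read"
    if wl == 0 and rl == 0:
        return "hold"
    return "invalid"    # WL = RL = 1 is not used in the design

for wl in (0, 1):
    for rl in (0, 1):
        print(f"WL={wl}, RL={rl} -> {sram_mode(wl, rl)}")
```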

5 Conclusion

In this brief, comparisons were made between the existing SRAM architectures, the virtual grounded 8T SRAM, and the single-ended 6T SRAM in terms of average delay and power dissipation during operation. A reversible SRAM was also proposed; it consumes less power and has lower delay, so the delay during SRAM operation is greatly decreased. The efficiency of SRAMs in terms of power dissipation and delay also improves as CMOS technology advances. In future work, we plan to include adiabatic SRAM cells in the comparative analysis of the proposed SRAMs.


Fig. 11 Transient analysis of proposed SRAM

Table 3 Comparison of power dissipation and average delay w.r.t different existing SRAM architectures in CMOS 45 nm technology

SRAM architecture           | Power dissipation (W) | Average delay (s)
Conventional 6T SRAM        | 3.6e−06               | 3.0e−07
Conventional 8T SRAM        | 1.1e−05               | 3.0e−07
Conventional 10T SRAM       | 3.2e−05               | 1.5e−07
8T SRAM with virtual ground | 2.5e−05               | 7.8e−08
Single-ended 6T SRAM        | 2.3e−05               | 2.0e−07
Proposed SRAM               | 2.1e−05               | 1.25e−07
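The delay improvement of the proposed SRAM over the three conventional cells of Table 3 can be quantified with a short script (values transcribed from the table; this check is an illustrative addition, not part of the original work):

```python
# Power dissipation (W) and average delay (s) for the conventional
# designs, transcribed from Table 3 (CMOS 45 nm, Tanner EDA).
designs = {
    "Conventional 6T SRAM":  (3.6e-6, 3.0e-7),
    "Conventional 8T SRAM":  (1.1e-5, 3.0e-7),
    "Conventional 10T SRAM": (3.2e-5, 1.5e-7),
}
proposed_power, proposed_delay = 2.1e-5, 1.25e-7

reductions = {}
for name, (power, delay) in designs.items():
    # percentage by which the proposed SRAM shortens the average delay
    reductions[name] = 100 * (delay - proposed_delay) / delay
    print(f"{name}: proposed delay is {reductions[name]:.1f}% lower")
```

Against the 6T and 8T cells the delay reduction is about 58%, and against the 10T cell about 17%, consistent with the claim that the suggested architectures reduce delay relative to the standard ones.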


References

1. P. Singh, S.K. Vishvakarma, Ultra-low power high stability 8T SRAM for application in object tracking system. IEEE Access (2018)
2. S. Ataei, J.E. Stine, A 64 kB approximate SRAM architecture for low-power video applications. IEEE Embed. Syst. Lett. (2017)
3. Q. Dong, S. Jeloka, M. Saligane, Y. Kim, M. Kawaminami, A. Harada, S. Miyoshi, A 4 + 2T SRAM for searching and in-memory computing with 0.3-V VDDmin. IEEE J. Solid-State Circ. (2017)
4. M. Raine, M. Gaillardin, T. Lagutere, O. Duhamel, P. Paillet, Estimation of the single event upset sensitivity of advanced SOI SRAMs. IEEE Trans. Nucl. Sci. (2017)
5. N. Surana, J. Mekie, Energy efficient single-ended 6-T SRAM for multimedia applications. IEEE Trans. Circ. Syst. II: Express Briefs (2018)
6. N. Yadav, A.P. Shah, S.K. Vishvakarma, Stable, reliable and bit-interleaving 12T SRAM for space applications: a device circuit co-design. IEEE Trans. Semicond. Manuf. (2017)
7. N. Zheng, P. Mazumder, Modeling and mitigation of static noise margin variation in subthreshold SRAM cells. IEEE Trans. Circ. Syst. I: Regular Pap. (2017)
8. M. Zamani, S. Hassanzadeh, K. Hajsadeghi, R. Saeidi, A 32 kb 90 nm 9T-SRAM cell subthreshold SRAM with improved read and write SNM, in International Conference on Design & Technology of Integrated Systems in Nanoscale Era (DTIS) (2013), pp. 104–107

New Concept for Solar Efficiency Improvement

Ravi Sharma, Ahmad Hasan Khan, and Ankit Kumar Sharma

Abstract With the growing population and the increasing number of vehicles, the demand for energy resources such as petrol and diesel rises day by day. The fossil fuel reserves are limited and will one day be exhausted, so the clock is ticking on the search for alternatives. Along this path, Jatropha oil can be used as biodiesel, which is not only a renewable source of oil but also more environmentally friendly.

Keywords Biofuel · Bio-diesel · Jatropha · Environment friendly

1 Introduction

Solar energy is the technology used to harness the sun's energy and make it usable. As of 2011, this technology supplied less than one-tenth of one percent of global energy demand. Many people are familiar with photovoltaic cells, or solar panels, found on devices such as spacecraft, rooftops, and handheld calculators. The cells are made of semiconductor materials like those found in computer chips. When sunlight hits the cells, it knocks electrons loose from their atoms; as the electrons flow through the cell, they produce electricity. On a much larger scale, solar thermal power plants employ various techniques to concentrate the sun's energy as a heat source. The heat is then used to boil water to drive a turbine that generates electricity, in much the same way as coal and nuclear power plants, supplying electricity to a large number of

R. Sharma (B) · A. H. Khan · A. K. Sharma
Department of Electrical Engineering (Power System), Jaipur Institute of Technology Group of Institutions, Jaipur, India
e-mail: [email protected]
A. H. Khan
e-mail: [email protected]
A. K. Sharma
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_45



people. The sun has delivered energy for billions of years; every hour the sun beams more energy onto Earth than is needed to satisfy global energy requirements for an entire year [1]. Other solar technologies are passive. For example, large windows on the sunny side of a building allow sunlight to heat absorbent materials on the floor and walls; these surfaces then release the heat after dark to keep the building warm. Likewise, absorbent plates on a roof can heat liquid in tubes that supply a house with hot water [1]. Solar energy is lauded as an inexhaustible fuel source that is pollution-free and often noise-free, and the technology is also versatile. However, solar installations are still costly and require large land areas to collect the sun's energy at rates useful to many people. Despite these drawbacks, solar energy use has surged at around 20 percent a year over recent years, owing to rapidly falling prices and gains in efficiency. Japan and Germany are major markets for solar cells. With tax incentives and efficient coordination with energy companies, solar power can often pay for itself in five to ten years [2].

2 Solar Tree

A solar tree works much like a real one: leaf-like solar panels connected by metal branches use sunlight to generate energy. The availability of land for deploying solar panels on a large scale is often an obstacle to the growth of renewable power (Fig. 1). One solution is to plant solar trees, which are highly ergonomic and use little space. Solar trees are complementary to rooftop solar systems and other green building measures, symbolizing these larger endeavors and their environmental benefit. The solar tree's panels charge batteries during the day; at dusk, the tree lights its LEDs and is programmed to regulate the amount of light it produces. Solar trees are flexible and can track the sun to produce the maximum possible energy using a structure called "spiralling phyllotaxy", whose staggered turns allow even the lowermost solar panels to receive enough sunshine for power generation. They can likewise be used in street lighting and industrial power supply systems [3]. A solar tree is formed of a metal structure with solar panels at the top in place of the branches of a real tree. The main assembly of a solar tree is a suitable hollow pole that tapers toward one end to allow the attachment of the upper, smaller shaft that holds the topmost panel. The height at which each panel is set permits a noticeably large panel area that does not shade the lower panels. The technology is well suited to off-grid remote areas or to places that require point-sourced light, such as car parks and street lighting. Furthermore, with a proper grid connection or battery storage, the solar tree can supply power wherever required. The plant's structure can vary continuously


Fig. 1 Solar tree

with different parts. In India, for example, solar trees can help meet energy demand while saving space. The technology can ensure a continuing supply of power in areas that need more power and can benefit many people who are not connected to the grid [3].

3 Related Work

Joshi et al. [4] begin with an appraisal of conventional photovoltaic cells and half-cut solar cells, followed by an assessment of the basic parameters shown in Fig. 2; thickness measurements influence solar cell performance. Their study shows the superiority of half-cut solar cells over standard solar cells in terms of better yield, fewer losses, and ease of installation, making them a viable commercial alternative to plain cells. Cost and financial evaluation, as well as changes to the actual manufacturing process or upgrades to existing automation lines, are not assessed in that paper. From the appraisal given there, half-cut cell modules offer an excellent set of advantages over standard modules with respect to design constraints, shading, hotspots, and capability; clearly, this technology would be an improvement over standard modules in limiting power losses and increasing capacity. Liu and Zhang [5] show that solar energy prediction can


Fig. 2 Thickness measurements

also be key to power management within electronic embedded systems that operate on harvested solar energy.

4 Proposed Work

The proposed work follows the innovative approach of using A-300 cells in the solar tree implementation, thereby improving the efficiency and productivity in terms of power backup (Fig. 3). The efficiency of photovoltaic solar panels is assessed by the ability of a panel to convert sunlight into usable energy for human consumption. Knowing the efficiency of a panel is essential in order to choose the right panels for a photovoltaic system; for smaller rooftops in particular, more efficient panels matter because of space constraints. The maximum efficiency of a solar photovoltaic cell is given by the ratio of the maximum electrical output power to the incident solar power, η = (V_oc × I_sc × FF) / P_in, where V_oc is the open-circuit voltage, I_sc the short-circuit current, and FF the fill factor.
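The standard efficiency relation η = (V_oc × I_sc × FF) / P_in can be sketched numerically. The operating values below are illustrative assumptions for an A-300-class cell (roughly 0.67 V open-circuit voltage, 6.0 A short-circuit current, 0.82 fill factor, and about 15.6 W of incident power on a 125 mm × 125 mm cell at 1000 W/m²), not measurements from this paper.

```python
def pv_efficiency(v_oc, i_sc, fill_factor, p_in):
    """Maximum PV conversion efficiency: the textbook relation
    eta = (V_oc * I_sc * FF) / P_in."""
    return v_oc * i_sc * fill_factor / p_in

# Illustrative (assumed) values for an A-300-class cell at 1000 W/m^2
eta = pv_efficiency(v_oc=0.67, i_sc=6.0, fill_factor=0.82, p_in=15.6)
print(f"efficiency ~ {100 * eta:.1f}%")
```

With these assumed values the efficiency comes out in the low twenties of percent, which is the range typically quoted for high-efficiency monocrystalline cells of this class.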

5 Conclusion

The main goal of this work is twofold: first, it focuses on improving the efficiency of solar panels by using A-300 solar cells, and second, it focuses on the space savings gained by mounting the panels on solar trees. India is heavily dependent on


Fig. 3 Solar cell A-300 diagram

fossil fuels for its energy needs. A large part of its power generation comes from coal- and mineral-oil-based power plants, which contribute substantially to greenhouse gas emissions. Many developed countries are switching over to solar energy as one of the prime renewable energy sources. Solar energy is the sun's radiation that reaches the Earth; it is the most readily available source of energy. The sun is the Earth's power station and thus the source of all energy on our planet, the power that supports life on Earth for all plants, animals, and people. This paper reviews current solar energy production in India, along with the details and requirements of solar energy for rural development in India. Solar energy could supply the present and future energy needs of the Earth. The most-investigated renewable power options for electricity generation in India, specifically solar ponds and solar photovoltaic systems, need further development for long-term benefits. This paper also outlines direct solar utilization systems such as water-heating systems, solar drying, solar cooking, and solar distillation. Solar energy can be tapped directly (for instance, via PV); indirectly through wind, biomass, and hydropower; or as fossil biomass fuels such as coal and gas.

References

1. S. Sharma, K.K. Jain, A. Sharma, Solar cells: in research and applications—a review. Mater. Sci. Appl. 6, 1145–1155 (2015)


2. A.M. Bagher, M.M.A. Vahid, M. Mohse, Types of solar cells and application. Am. J. Opt. Photonics 3(5) (2015)
3. S. Guo, J.P. Singh, I.M. Peters, A.G. Aberle, T.M. Walsh, A quantitative analysis of photovoltaic modules using halved cells. Int. J. Photoenergy 2013 (2013)
4. A. Joshi, A. Khan, S.P. Afra, Comparison of half cut solar cells with standard solar cells, in 2019 Advances in Science and Engineering Technology International Conferences (ASET), Dubai, United Arab Emirates (2019)
5. Q. Liu, Q. Zhang, Accuracy improvement of energy prediction for solar-energy-powered embedded systems. IEEE Trans. VLSI Syst. 24(6) (2016)

Sustainable Smart Cities and Their ICT Practices

K. Bhavana Raj and Mohmad Mushtaq Khan

Abstract Smart cities tend to represent the ICT (Information and Communications Technology) industry while ignoring historical legacies, cultures, and traditional values. Socio-technical and politico-economic changes have led to increased demand for modernization services and to developments in ICT, IoT (Internet of Things), etc. ICT-integrated smart cities have become more efficient, safe, inclusive, sustainable, and resilient, while also protecting the cultural heritage of the respective cities. The modern framework of smart cities relies on culture, governance, and a focus on urban outcomes rather than on technology in isolation. This paper identifies the key areas in which smart cities become sustainable through their ICT practices.

Keywords Smart cities · ICT · Sustainability · Technology · Big data · India

1 Introduction

The popularity of smart city projects and programs has increased across the globe, in countries such as India, China, the U.A.E., and South Korea, and even in Small Island Developing States such as Mauritius [1]. The smart city paradigm is associated with the IoT (Internet of Things), sensors, and big data, leading to informed and data-led governance. Over the past 20 years, cities have taken advantage of ICT to enhance quality of life and increase competitiveness; this phenomenon is associated with the smart city [2]. Within the last several years, the smart city concept has gained considerable attention and momentum in the European Union, with various projects being set up in nearly

K. Bhavana Raj (B)
IPE (Institute of Public Enterprise), Survey No. 1266, Shamirpet (V&M), Medchal-Malkajgiri District, Hyderabad, Telangana 500101, India
e-mail: [email protected]
M. M. Khan
KLHBS, KLEF, KL University Hyderabad, Aziz Nagar (P.O), RVS Nagar, Moinabad Road, Near TSPA (Telangana State Police Academy), R.R. District, Telangana 500075, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_46



every European city [3]. In addition, citizens are demanding higher efficiency, quality of life, effective resource management, sustainable development, and so on [4]. To address these demands, local authorities are considering management models that, together with energy efficiency, new infrastructure, and environmental protection, focus mainly on ICT. A city that implements all these policies is considered a smart city [5].

2 Review of Literature

Neirotti et al. [6] assign the concept of culture a prominent place in the smart city framework. Washburn et al. [7] and Neirotti et al. [6] defined key indicators such as smart infrastructure, smart people, and smart governance, which are popular within the proposed frameworks, whereas the pillars of smart education and public safety are less evident. Kitchin [8] noted that smart governance would play a key role in framing policies backed by rigorous data analysis, to empower citizens in informed decision making. Smart governance is the ability to promote a competitive business environment (Paskaleva, 2009) [9]. Smart cities are usually created at new locations on the outskirts of existing cities, in the modernist tradition of New Towns (Fig. 1).

Fig. 1 Key components of a smart city. Source: Compiled


3 Sustainable Smart Cities and ICT—Handshake

3.1 The Development of New Cities Badging Themselves as Smart

These are proliferating in rapidly growing countries. GE is developing Masdar, billed as the world's first carbon-neutral city, outside Abu Dhabi; Paredes in Portugal is where Microsoft is wiring an energy-efficient city; Dongtan in the Yangtze Delta is being developed by Arup as a smart green eco-town; and Songdo in South Korea is where Cisco is building a city wired at all levels [10].

3.2 The Development of Older Cities Regenerating Themselves as Smart

This happens in a far more bottom-up fashion and includes many cities that are embedding new ICT as a matter of course. Examples of smart cities with emerging technologies and tremendous development include places such as Silicon Alley (New York City), Silicon Roundabout (London), and Akihabara (Tokyo) [11].

3.3 The Development of Science Parks, Tech Cities, and Technopoles Focused on High Technologies

Silicon Valley and Route 128 are the classic examples, but the science park idea is still highly resonant with respect to local economic development, where high-tech production merges with its consumption in making such areas smart [12].

3.4 The Development of Urban Services Using Contemporary ICT

These take the form of networked databases, cloud computing, and fixed and mobile networks, a force that is central to our concerns here in coordinating the diverse interests and sectors that can make the city smart in its design and planning [13].


3.5 The Use of ICT to Develop New Urban Intelligence Functions

These are new conceptions of the way the city functions; they utilize the complexity sciences to fashion powerful new forms of simulation models and optimization methods that generate city structures and forms which improve efficiency, equity, and the quality of life [3].

3.6 The Development of Online and Mobile Forms of Participation

Through these, the citizenry is massively engaged in working to improve the city alongside planners and designers from government and business. Decentralized notions of governance and community action are central to these new forms of participation, which make extensive use of ICT (Table 1).

4 Alternative Models that Ensure IT Initiatives' Viability

External investment: Regional banks and investment funds such as the Asian Development Bank, the Inter-American Development Bank, and the Brazilian Development Bank provide funding for public-sector IT initiatives [3].

Revenue-generating (or cost-cutting) initiatives: Revenue-generating and cost-cutting initiatives, such as fee collection or electronic government procurement, can become self-funding and prove appealing for both budgetary and political reasons [14].

Revenue-sharing and public–private partnerships: Partnerships with a vendor, service provider, systems integrator, or even a land developer on a revenue-sharing basis can defray the upfront costs and risks of a new initiative [15].

Capacity reselling: Excess capacity from large municipal IT infrastructure or application deployments can be provided to neighboring cities or organizations, with the larger city's IT department acting as a service provider or working through a managed service provider.

Multicity initiatives: Upfront agreements to pool resources and share infrastructure facilitate the launch of large IT initiatives.

Leasing and financing: Traditional financing remains an alternative for the purchase of IT infrastructure, particularly hardware and networking, and provides flexibility in case of budget shortfalls or other political contingencies.


Table 1 Smart cities provide various opportunities for vendors of technology

Phase          | Vendor type                                | Offerings                                      | Technologies                                                                                                        | Example initiatives
Deploy/Deliver | Telecom and managed service providers      | Hosted and managed services                    | Cloud computing and services management                                                                             | e100 service, eHealth, mobile parking
Deploy/Deliver | Business application vendors               | Industry-specific solutions                    | Custom and industry-specific applications                                                                           | Mobile parking, traffic routing, e100, eHealth, distance learning
Develop        | Middleware vendors                         | Software infrastructure                        | Data warehouse, business analytics, content management, master data management, and unified communications software | eGovernment portals, single citizen view, and employee collaboration
Design         | Infrastructure vendors                     | Hardware and telecommunications infrastructure | Networking, telecommunications, RFID, sensors, and video cameras                                                    | Broadband connectivity, video surveillance, and real-time data capture
Define         | Services companies and systems integrators | Design and planning, and systems integration   | Identify objectives and vision, develop business architectures, and understand policy requirements                  | Strategy and vision planning, and policy advisory

Source Forrester Research, Inc.

Barter or in-kind exchange: Exchanging product testing or customer references for new technologies is a way of overcoming budget shortfalls, particularly for universities or research facilities with skilled developers and users.

Data monetization: The use of primary data generated by instrumented infrastructure provides a potential revenue source for data owners.

5 Conclusion

Every smart community is unique, because its characteristics are shaped by the community itself. One common denominator is that successful smart communities


are the result of a coalition of business, education, government, and individual citizens. A successful smart community can be built from the top down or from the bottom up, but active involvement from every sector of the community is important. This united effort creates synergy, which allows individual projects to build upon each other for faster progress, resulting in the involved, informed, and trained critical mass necessary to transform how the whole community carries out its work. Most smart communities or similar initiatives at the local level started from a crisis situation and a deep necessity for change. Smart Valley, LatinoNet in San Jose, and Tranås in Sweden are examples of such local initiatives. Some countries, such as Canada, saw the power of the smart communities concept early; in order to use its full potential, avoid mistakes, and learn quickly from successes, the country developed a national program for practical implementation and to facilitate the exchange of experiences.

References

1. Z. Allam, Building a conceptual framework for smarting an existing city in Mauritius: the case of Port Louis. J. Biourbanism 4, 103–121 (2017)
2. A. Caragliu, C. Del Bo, P. Nijkamp, Smart cities in Europe. J. Urban Technol. 18(2), 65–82 (2011). https://doi.org/10.1080/10630732.2011.601117
3. D. Schuurman, B. Baccarne, L.D. Marez, P. Mechant, Smart ideas for smart cities: investigating crowdsourcing for generating and selecting ideas for ICT innovation in a city context. J. Theor. Appl. Electr. Comm. Res. 7(3), 49–62 (2012). https://doi.org/10.4067/S0718-18762012000300006
4. A.K. Glasmeier, M. Nebiolo, Thinking about smart cities: the travels of a policy idea that promises a great deal, but so far has delivered modest results. Sustainability 8, 1122 (2016)
5. A.H. De Rosario, A. Martín, M. Pérez, A. Martínez, Are the smart cities the most democratic? The Spanish case, in EGPA Annual Conference, Edinburgh (2013). https://www.scss.tcd.ie/disciplines/information_systems/egpa/docs/2013/HarodeRosarioetal.pdf
6. P. Neirotti, A. De Marco, A.C. Cagliano, G. Mangano, F. Scorrano, Current trends in smart city initiatives: some stylized facts. Cities 38, 25–36 (2014)
7. D. Washburn, U. Sindhu, S. Balaouras, R.A. Dines, N. Hayes, L.E. Nelson, Helping CIOs understand "smart city" initiatives. Growth 17, 1–17 (2009)
8. R. Kitchin, Making sense of smart cities: addressing present shortcomings. Camb. J. Reg. Econ. Soc. 8, 131–136 (2014)
9. C. Harrison, B. Eckman, R. Hamilton, P. Hartswick, J. Kalagnanam, J. Paraszczak, P. Williams, Foundations for smarter cities. IBM J. Res. Dev. 54, 1–16 (2010)
10. T. Nam, T.A. Pardo, Conceptualizing smart city with dimensions of technology, people, and institutions, in Proceedings of the 12th Annual International Digital Government Research Conference: Digital Government Innovation in Challenging Times, College Park, MD, USA, pp. 282–291, 12–15 June 2011
11. P.W.G. Newman, in Sustainability and Cities: Overcoming Automobile Dependence, ed. by P. Newman, J. Kenworthy (Island Press, Washington, DC, USA, 1999)
12. P. Newman, J. Kenworthy, The rise and fall of automobile dependence, in The End of Automobile Dependence (Island Press/Center for Resource Economics, Washington, DC, USA, 2015), pp. 1–31

Sustainable Smart Cities and Their ICT Practices


13. P. Newman, T. Beatley, H. Boyer, Resilient Cities: Overcoming Fossil Fuel Dependence, 2nd edn. (Island Press, Washington, DC, USA, 2017)
14. T. Shelton, M. Zook, A. Wiig, The 'actually existing smart city'. Camb. J. Reg. Econ. Soc. 8, 13–25 (2014)
15. www.forrester.com

Development of Hybrid Energy System for a Rural Area

Arushi Misra and M. P. Sharma

Abstract In a developing country like India, a continuous supply of electricity at the desired quality to remote areas is one of the major hurdles in the overall growth of the nation. Although these areas have grid connections, the electricity supply is rather intermittent. Owing to this, the implementation of an isolated hybrid energy system will not only enable energy access in such areas but also reduce the dependency on fossil-based electricity. The present paper aims to find the optimum scenario of power generation for a cluster of villages in Gujarat using locally available resources like solar, wind, and biomass. This is done in conjunction with lead-acid battery storage and a diesel generator to smooth out the irregularity caused by the stochastic nature of renewable resources. The cost of energy (COE) and net present cost (NPC) for the optimal case are found to be $0.114/kWh and $8,582,541, respectively. Furthermore, a sensitivity analysis revealed the variables that most strongly affect the economic parameters.

Keywords COE · HOMER · Hybrid energy system (HES) · NPC · Sensitivity analysis · Optimization

A. Misra (B) · M. P. Sharma
Department of Hydro and Renewable Energy, Indian Institute of Technology Roorkee, Roorkee 247667, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_47

1 Introduction

Energy is one of the important building blocks of a nation. However, there are still some areas that do not have access to a continuous power supply due to prevailing barriers such as inaccessibility caused by difficult terrain, low population density, lack of awareness, and the absence of a lucrative market infrastructure [1]. In India, coal, gas, and diesel are currently the prime sources of electricity generation. However, with increasing energy demand and climate change, meeting the demand with the limited fossil reserves is not feasible. Hence, renewable energy resources that are not only available




in abundance but are also environmentally friendly can be seen as a sustainable alternative. With the advent of technology, tapping energy from renewable resources has become economically feasible, with added environmental benefits over fossil-based energy. A hybrid energy system (HES) is the integration of renewable energy resources with at least one conventional source (typically a diesel generator). It accommodates the stochastic behavior of renewable energy and also helps in managing excess energy by means of battery storage. Kaur and Segal [1] proposed that a hybrid combination of a natural gas based gen-set and cross-flow hydro turbines can provide energy at $0.147/kWh. Singh and Baredar [2] used the HOMER Pro software for the optimization of photovoltaic, biomass gasifier, battery, and fuel cell components; the COE and NPC at 0% unmet load are INR 15.064/kWh and INR 5,189,003, respectively. Ahmad et al. [3] presented a feasibility study on a grid-tied hybrid microgrid system for Kallar Kahar province in Punjab. Usman et al. [4] found that with the incorporation of PV into the grid, the cost of energy is the same as for the grid-only system; however, a reduction in CO2 emissions can be achieved with an increased renewable fraction. Olatomiwa [5] proposed an HES for rural healthcare facilities in Nigeria at 0% loss of power supply probability (LPSP) with a peak load of 2.75 kW. Zahboune et al. [6] discussed the performance of the Modified Electric System Cascade Analysis method for finding the optimal scenario of a hybrid energy system and compared the obtained results with the HOMER Pro software. The present study focuses on finding the optimal system configuration for a village cluster located in the state of Gujarat with the help of the HOMER Pro (Hybrid Optimization of Multiple Energy Resources) software.

2 Methodology

2.1 Study Area

In the state of Gujarat, a cluster of five villages (Dhrobana, Ludiya, Andhau, Sadhara, and Dhoravar) located in Bhuj district is selected for the present study. These villages lie between the latitudes of 23.74° and 23.92° and the longitudes of 69.74° and 69.9°. The area consists of 2925 households with a population of 13,809 as per the 2011 census [7].

2.2 Load Demand

The proposed area has an average daily load demand of 15,031 kWh/day and a peak demand of 1110 kW. The average load of the area is 626 kW. The load curve for a typical day in the winter and summer seasons is given in Fig. 1.
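The average load quoted above follows directly from the daily energy demand; a quick arithmetic check, using only the values stated in this section:

```python
# Values taken from this section of the paper.
daily_energy_kwh = 15_031   # average daily load demand (kWh/day)
peak_kw = 1_110             # peak demand (kW)

average_kw = daily_energy_kwh / 24   # mean power over the day (kW)
load_factor = average_kw / peak_kw   # ratio of average to peak demand

print(round(average_kw))             # ≈ 626 kW, matching the text
```

The implied load factor of about 0.56 indicates a demand profile with a pronounced peak relative to the average load.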



Fig. 1 Load curve for a typical day in summer and winter season

2.3 Resource Assessment

The study area receives, on average, 5.21 kWh/m2/day of solar insolation. The maximum radiation is received in May (6.47 kWh/m2/day), whereas the minimum is received in December (4 kWh/m2/day). Therefore, the study area has significant potential for solar energy. Figure 2 shows the variation of solar radiation over the months. The temperature ranges from 19.95 °C in the month of

Fig. 2 Monthly variation of solar radiation



Fig. 3 Monthly variation of wind speed

January to 35.8 °C in June. The average wind speed is 4.12 m/s, with the months of May, June, and July receiving winds faster than 5.0 m/s. The monthly variation of wind speed is given in Fig. 3. The proposed area covers a total of 15,821 ha, of which 3302 ha are under forest and 6525 ha are categorized as crop production area [7]. By utilizing forest foliage and crop residue, a biomass gasifier can be used to extract biomass energy and generate electricity. The average daily biomass available at the site is 2.59 tons/day.

2.4 Modeling of HES Components

The proposed HES is modeled and optimized using the HOMER Pro software tool. Figure 4 depicts the schematic of the HES. While modeling components in HOMER, several technical and economic parameters are used. Technical parameters include lifespan, hub height, efficiency, annual capacity shortage, etc. Economic parameters include cost components such as capital, O&M, and replacement costs, and the discount rate. Table 1 shows the cost summary of every component. A load-following strategy is employed for this study. The discount rate is taken as 6%, whereas the project lifetime is assumed to be 20 years.
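HOMER's economic ranking rests on two standard quantities: the capital recovery factor (CRF), which annualizes the net present cost over the project lifetime, and the cost of energy obtained by dividing the annualized cost by the annual energy served. A minimal sketch of those formulas, using the 6% discount rate and 20-year lifetime stated above (illustrative only: the paper's exact $0.114/kWh additionally depends on quantities HOMER computes internally, such as the energy actually served and salvage values):

```python
def crf(i: float, n: int) -> float:
    """Capital recovery factor: annualizes a present cost over n years at rate i."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def coe(npc: float, annual_energy_kwh: float, i: float = 0.06, n: int = 20) -> float:
    """Cost of energy: annualized NPC divided by annual energy served."""
    return npc * crf(i, n) / annual_energy_kwh

print(round(crf(0.06, 20), 4))   # ≈ 0.0872
```

Applying `coe()` to the reported NPC with the raw demand of 15,031 kWh/day gives roughly $0.14/kWh; HOMER's $0.114/kWh differs because it uses the energy actually served and its own cost accounting.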



Fig. 4 Schematic of HES

Table 1 Cost summary of HES components

Components  Capital cost ($/kW)  Replacement cost ($/kW)  O&M cost ($/kW)  Lifetime (y/h)  References
PV          710                  484.09                   6.45             25 y            [9]
Wind        716.18               572.94                   0.2              20 y            [8]
BMG         1033                 750                      0.010            15,000 h        [10]
Battery     148                  148                      2.5              20 y            [11]
Converter   150                  150                      0                15 y            [12]
DG          160                  145                      0.010            15,000 h        [13]

3 Results and Discussion

3.1 Feasible Configurations

In order to perform the sizing exercise using HOMER, various inputs such as local resource data, load data, temperature data, and cost data are required. HOMER performs energy balance calculations for a year for every possible configuration [8]. It then lists the feasible configurations, ranked by NPC and COE. Based on the performance results, five feasible cases have been obtained: Case 1: PV/Wind/BMG/DG/Battery; Case 2: PV/BMG/DG/Battery;
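The hourly energy balance HOMER performs under the load-following strategy can be sketched roughly as: renewables serve the load first, the battery covers any deficit down to its minimum state of charge, and the dispatchable units (BMG/DG) pick up the remainder, with any residual counted toward the capacity shortage. The function below is a deliberately simplified illustration (converter losses, battery efficiency, and generator minimum loading are ignored); the 30% minimum SOC mirrors the value used later in this study, and the generator limit of 1230 kW is just the Case 1 BMG + DG ratings used as an example:

```python
def dispatch_hour(load_kw, renewable_kw, soc_kwh,
                  batt_capacity_kwh, soc_min_frac=0.30, gen_max_kw=1230):
    """One hour of simplified load-following dispatch (kW and kWh coincide
    over a 1-hour step). Returns (new_soc_kwh, generator_kw, unmet_kw)."""
    net = load_kw - renewable_kw
    if net <= 0:
        # Surplus renewables charge the battery; any excess is curtailed.
        soc = min(batt_capacity_kwh, soc_kwh - net)   # -net is the surplus
        return soc, 0.0, 0.0
    # Deficit: discharge the battery down to its minimum SOC first.
    available = max(0.0, soc_kwh - soc_min_frac * batt_capacity_kwh)
    from_batt = min(net, available)
    remaining = net - from_batt
    gen = min(remaining, gen_max_kw)    # BMG + DG cover the rest
    unmet = remaining - gen             # counts toward the capacity shortage
    return soc_kwh - from_batt, gen, unmet

# Example hour: 900 kW load, 400 kW PV, battery half full (13,000 kWh bank)
soc, gen, unmet = dispatch_hour(900, 400, 6500, 13_000)
```

Here the 500 kW deficit is met entirely from the battery, so the generators stay off and no load goes unserved.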



Table 2 System architecture of different scenarios

Case    PV (kW)  Wind (kW)  BMG (kW)  DG (kW)  Battery (kWh)
Case 1  4971     242        600       630      12,994
Case 2  5097     -          600       630      13,818
Case 3  6181     -          600       -        24,496
Case 4  8670     17         600       -        11,031
Case 5  -        33,650     600       630      34,072

Case 3: PV/BMG/Battery; Case 4: PV/Wind/BMG/Battery; and Case 5: Wind/BMG/DG/Battery. The system architecture for every scenario is given in Table 2.

3.2 Most Optimal Configuration

The cost estimates for the possible configurations are given in Fig. 5, which shows that Case 1 and Case 2 have the same COE. However, since Case 1 has a slightly lower NPC, it is the least expensive option. Case 5 is the most expensive alternative as it has a major contribution from wind energy. The cost of energy and net present cost for Case 1 are $0.114/kWh and $8,582,541, respectively. The cost breakdown for Case 1 given in Fig. 6 shows that PV and battery are the most expensive components in the system, accounting for 43% and 39% of total system costs, respectively. The replacement cost ($1,328,213) is mainly due to batteries and converters. Biomass fuel and

Fig. 5 NPC and COE for different configurations



Fig. 6 Component-wise cost distribution

diesel contribute to the fuel cost ($419,319). The salvage value (−$639,992) is the value recovered from the components after the project life. Figure 7 shows the time series plot of the different energy sources and the load for a week starting from July 28 and ending August 3. It is evident from the graph that PV not only supplies the majority of the load during the daytime but also charges the battery. Figure 8a depicts the trend of battery SOC for the month of January, and Fig. 8b shows this trend over a full year. When the battery reaches the minimum SOC of 30% and PV is absent, the BMG and DG are used to meet the load.

Fig. 7 Power output curve from July 28 to Aug 3



Fig. 8 a Battery SOC for the month of January. b Battery SOC for 8760 h

3.3 Sensitivity Analysis

• Annual Capacity Shortage: This is the shortfall in meeting the load. Studies in the literature show that, by allowing a small capacity shortage, a scenario with reduced NPC and COE can be attained. As discussed above, COE and NPC are minimum in Case 1. This COE and NPC are evaluated at 0%



Fig. 9 Effect of input parameters on NPC

annual capacity shortage. The same is evaluated at 1, 2, 3, 4, and 5% annual capacity shortages, and it is observed that, with an increase in annual capacity shortage, lower NPC and COE values are obtained. For example, with a 5% annual capacity shortage, decreases of 6% in COE and 10% in NPC can be achieved.
• Effect of input parameters: A sensitivity analysis is further performed using variables such as biomass fuel price, diesel price, discount rate, wind speed, and solar radiation. From Figs. 9 and 10 it can be inferred that solar radiation most strongly affects NPC and COE. With an increase in the average value of solar radiation, the NPC decreases significantly. For example, a 20% increase in the solar radiation value reduces the NPC and COE by 10% each. Further, it can also be inferred that, although the discount rate does not have much effect on NPC, it does affect the COE. With a 20% increase in the discount rate, an 8% increase in the COE is

Fig. 10 Effect of input parameters on COE



observed. Wind speed only slightly affects both NPC and COE whereas biomass fuel price and diesel price have a negligible effect on NPC and COE.
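The one-at-a-time sweep described above is straightforward to reproduce in outline: perturb each input around its base value, re-evaluate the economics, and record the percentage change in the output. The sketch below uses a stand-in cost function purely for illustration (the real relationship comes from re-running the HOMER optimization, which is not reproduced here); its signs simply encode the qualitative findings above, namely that COE falls with solar radiation and rises with the discount rate:

```python
def sensitivity(model, base, step=0.20):
    """One-at-a-time sensitivity: % change in model output for a +step
    (e.g. +20%) change in each input, holding the others at base values."""
    y0 = model(**base)
    effects = {}
    for name, value in base.items():
        perturbed = dict(base, **{name: value * (1 + step)})
        effects[name] = 100.0 * (model(**perturbed) - y0) / y0
    return effects

def toy_coe(solar, discount):
    # Hypothetical stand-in: COE falls as solar radiation rises,
    # and rises with the discount rate.
    return 0.6 / solar + 2.0 * discount

# Base values from this study: 5.21 kWh/m2/day solar, 6% discount rate.
effects = sensitivity(toy_coe, {"solar": 5.21, "discount": 0.06})
```

With a real model in place of `toy_coe`, the returned dictionary directly reproduces tornado-style plots like Figs. 9 and 10.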

4 Conclusion

This paper investigates the optimal hybrid energy system configuration for a village cluster located in a remote area of Gujarat. The combination of PV/Wind/Biomass Gasifier/Diesel Generator/Battery is found to be the least expensive configuration, offering an uninterrupted power supply at $0.114/kWh. The proposed system has an initial cost and net present cost of $6,533,025 and $8,582,541, respectively. It can be inferred from the sensitivity analysis that NPC and COE are most sensitive to the value of solar radiation. The area has immense potential for biogas generation owing to its significant population of domesticated animals. Future work includes incorporating biogas resources into the system, which will increase its renewable fraction.

References

1. T. Kaur, R. Segal, Designing rural electrification solutions considering hybrid energy systems for Papua New Guinea. Energy Proc. 110, 1–7 (2017)
2. A. Singh, P. Baredar, Techno-economic assessment of a solar PV, fuel cell, and biomass gasifier hybrid energy system. Energy Rep. 2, 254–260 (2016)
3. J. Ahmad et al., Techno economic analysis of a wind-photovoltaic-biomass hybrid renewable energy system for rural electrification: a case study of Kallar Kahar. Energy 148, 208–234 (2018)
4. M. Usman, M.T. Khan, A.S. Rana, S. Ali, Techno-economic analysis of hybrid solar-diesel-grid connected power generation system. J. Electr. Syst. Inf. Technol. 5(3), 653–662 (2018)
5. L. Olatomiwa, Optimal configuration assessments of hybrid renewable power supply for rural healthcare facilities. Energy Rep. 2, 141–146 (2016)
6. H. Zahboune, S. Zouggar, G. Krajacic, P.S. Varbanov, M. Elhafyani, E. Ziani, Optimal hybrid renewable energy design in autonomous system using modified electric system cascade analysis and HOMER software. Energy Convers. Manage. 126, 909–922 (2016)
7. District Census Handbook, Kachchh (2011)
8. M.K. Shahzad, A. Zahid, T. Rashid, M.A. Rehan, M. Ali, M. Ahmad, Techno-economic feasibility analysis of a solar-biomass off grid system for the electrification of remote rural areas in Pakistan using HOMER software. Renew. Energy 106, 264–273 (2017)
9. W. Ma, X. Xue, G. Liu, R. Zhou, Techno-economic evaluation of a community-based hybrid renewable energy system considering site-specific nature. Energy Convers. Manage. 171, 1737–1748 (2018)
10. A. Chauhan, R.P. Saini, Techno-economic optimization based approach for energy management of a stand-alone integrated renewable energy system for remote areas of India. Energy 94, 138–156 (2016)
11. T. Kobayakawa, T.C. Kandpal, Optimal resource integration in a decentralized renewable energy system: assessment of the existing system and simulation for its expansion. Energy Sustain. Dev. 34, 20–29 (2016)



12. M.S. Islam, R. Akhter, M.A. Rahman, A thorough investigation on hybrid application of biomass gasifier and PV resources to meet energy needs for a northern rural off-grid region of Bangladesh: a potential solution to replicate in rural off-grid areas or not? Energy 145, 338–355 (2018)
13. M. Baneshi, F. Hadianfard, Techno-economic feasibility of hybrid diesel/PV/wind/battery electricity generation systems for non-residential large electricity consumers under southern Iran climate conditions. Energy Convers. Manage. 127, 233–244 (2016)

Face Recognition with Inception-Based CNN Models

Lakshmi Patil and V. D. Mytri

Abstract In this work, three deep learning models are developed and tested for accuracy on the ORL dataset. A new model is proposed that computes the mean of encodings to improve the accuracy of the models. Several sub-models are developed to derive the optimum number of images to be considered in computing the mean of encodings. Two subsets are extracted from the ORL dataset for testing purposes. The performance of the deep learning models is compared with classical face recognition methods such as PCA and LBP and with statistical methods such as singular-value-based hidden Markov models. It is found that the proposed inception model with the mean of encodings outperforms all other models on the ORL dataset, and that the optimum number of images for computing the mean is three.

Keywords Convolutional neural networks · Inception models · CNN · Face recognition

1 Introduction

The application of artificial intelligence in computer vision is gaining popularity and interest in the research community day by day. The introduction of CNNs by Yann LeCun and his collaborators was a revolutionary breakthrough in the field of computer vision. CNNs were designed with the working model of the visual system in mind [1]. In CNNs, the neurons are hierarchically arranged and connected locally. This local connectivity was required to mimic the functioning of the visual cortex for image transformations [2]. The optimal neuron weights were determined using error gradient methods, and this approach was tested on many pattern recognition cases [3–5].

L. Patil (B) · V. D. Mytri
Sharnbasva University, Kalaburagi, Karnataka, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_48




The CNN proposed by Yann LeCun has three basic components:

1. Convolutional layer
2. Subsampling or pool layer
3. Dense or fully connected layer

The image is input to the CNN in its raw form, i.e., as an array of numbers. In order to recognize the image, feature maps need to be extracted from it so as to differentiate between images, which may be labeled into different groups. The convolutional layer performs the function of extracting feature maps from images, and for this purpose it uses kernels. Kernels are of many types, and each kernel has its own advantages and limitations. Convolutional layers are quite different from dense layers, and many special methods have been developed for extracting feature maps with them [6, 7]. CNNs are very useful when the data is two- or three-dimensional; when the data has a depth component, CNNs are far more apt than fully connected layers.

The second layer in a CNN is the subsampling, downsampling, or pool layer. The pool layer can be a max pool layer or an average pool layer. The purpose of the pool layer is to reduce the size of the data volume, which reduces the size of the input to the next convolutional layer. Unlike the convolutional layer, the pool layer does not have any effect on the depth of the data volume. Because this layer reduces the data (hence the names subsampling, downsampling, or pooling), there is always some loss of information. This loss must be kept as low as possible; the reduction lowers the load on the processor during computations and also helps avoid overfitting the model with excess data. The pool layer can be implemented using any of the following:

1. Max pooling [8]
2. Average pooling [8]
3. Stochastic pooling [10]
4. Spatial pyramid pooling [11, 12]
5. Def-pooling [13]

It is shown in [9] that max pool layers are better than other types of pool layers as they have higher convergence rates. In a typical CNN, there may be several packs of convolutional and pool layers, after which the data is input to the dense layers. Dense layers process one-dimensional data, but the data output by the pool layers is two-dimensional in shape. Hence, the two-dimensional data is flattened into one-dimensional form before being presented to the dense layer. The dense or fully connected layers take the flattened one-dimensional vector as input and output the probability of each class that the image may belong to. If there are only two classes, it is a binary-class problem; if the number of classes is more than two, it is a multi-class problem. The fully connected block ends in an output layer that gives the probability for each class [14, 15].
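As a concrete illustration of the most common choice, a 2 × 2 max pool keeps the largest value in each window; with a stride of 2 it halves each spatial dimension of the feature map. A small NumPy sketch:

```python
import numpy as np

def max_pool_2x2(x: np.ndarray) -> np.ndarray:
    """2x2 max pooling with stride 2 on an (H, W) feature map (H, W even)."""
    h, w = x.shape
    # Group the map into 2x2 blocks, then take the max within each block.
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([[1, 3, 2, 0],
                 [4, 2, 1, 1],
                 [0, 1, 5, 6],
                 [2, 3, 7, 8]])
print(max_pool_2x2(fmap))
# [[4 2]
#  [3 8]]
```

Average pooling is obtained by replacing `.max(...)` with `.mean(...)` over the same axes; the depth of a multi-channel volume is untouched because pooling is applied per channel.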



Of late, CNNs have found application in the face recognition problem and are gaining momentum compared to other face recognition methods [16–19] such as PCA, ICA, or LBP. In PCA, ICA, or LBP, the dimensionality of the image data is reduced and the resulting features are used by a classifier to determine the class of the image; each method uses its own unique approach to extract the features. With CNNs, the features are extracted automatically, which has brought a drastic shift in the way features are obtained. The first application of CNNs to face recognition can be found in Ref. [20]. Other CNN models such as light CNNs [21], VGG face descriptors [22], FaceNet [23] of Google, and Deepnet [24] of Facebook are very popular models in face recognition. Another important milestone in the journey of CNNs was the introduction of the inception model by Christian Szegedy of Google in 2014, known as GoogLeNet [25]. The purpose of this model was to decrease the computational complexity of CNNs. Inception layers are layers in which receptive fields of different kernel sizes are used. GoogLeNet was further enhanced by using inception blocks in the network, in which a kernel of size 1 × 1 was used. The main purpose of the 1 × 1 kernel is to reduce the computational complexity of subsequent layers by reducing the dimensionality of the data volume. In Sect. 2, several deep learning models, namely the convolutional neural network (CNN-ORG), the single-image-based CNN-inception model (CNN-INC-ONE), and the multiple-image-based CNN-inception model (CNN-INC-MANY), are presented in detail along with their architectures. In Sect. 3, results are presented for the ORL dataset; two samples of the data are extracted from the ORL dataset for training purposes. Finally, important conclusions from the research work are drawn and presented in Sect. 4.

2 Deep Learning Models

Deep learning models are used in this research work to develop a face recognition system. They are chosen over traditional methods like PCA, ICA, or LBP in order to demonstrate the capability of deep learning models such as the CNN and its variants. For this purpose, the ORL dataset is used. The ORL dataset contains images of 40 people, with 10 facial expressions per person, for a total of 400 images. Since there are images of 40 people in the dataset, classifying the images into 40 distinct classes is a challenging task; classification problems usually deal with only two, or at most a handful of, classes. To build a face recognition system, any of the three proposed models can be used. The three different CNN models are:



Fig. 1 CNN-ORG model architecture for ORL Database

1. Convolutional neural network (CNN-ORG): original model [1]
2. Single-image-based CNN-inception model (CNN-INC-ONE) [25]
3. Multiple-image-based CNN-inception model (CNN-INC-MANY): proposed

Of the three models, CNN-INC-ONE [25] is an improvement over CNN-ORG [1], and CNN-INC-MANY is an improvement over CNN-INC-ONE. The purpose of developing three models is to achieve the best possible accuracy for the ORL dataset. These models were developed on the Google Colab platform with Keras.

Figure 1 shows the original convolutional neural network model (CNN-ORG), which has four sets of convolutional and max pool layers followed by one set of fully connected layers. Only two sets of convolutional and max pool layers are shown in Fig. 1 due to space constraints. The input image to the CNN-ORG model is of size 112 × 96 × 3. The filter used in the first convolutional layer is of size 5 × 5, a total of 16 filters are used, and the ReLU activation function is applied. The size of the output volume from this convolutional layer is 108 × 88 × 16. This data volume is then fed to a max pool layer with a window of size 2 × 2, which results in an output volume of 107 × 87 × 16. This is the first set of convolutional and max pool layers. There is another subsequent set of convolutional and max pool layers in the network, so the data volume of 107 × 87 × 16 is fed to the convolutional layer of the second set, which has 32 filters of size 3 × 3 and produces a data volume of size 105 × 85 × 32. This data volume of size 105 × 85 × 32 is again



fed to the max pool layer of the second set, whose window size is 2 × 2, resulting in a data volume of size 104 × 84 × 32. This data volume is then processed through the third set (a convolutional layer with 64 filters of size 3 × 3 and a 2 × 2 max pool) and the fourth set (a convolutional layer with 128 filters of size 3 × 3 and a 2 × 2 max pool). The dense or fully connected layers are connected at the end of the fourth set of convolutional and max pool layers. The output from the last max pool layer is fed to the fully connected layers after flattening the three-dimensional data volume into a vector of size 978,432. The fully connected block has four components: an input layer with 978,432 neurons, a first hidden layer of 128 neurons with ReLU, a second hidden layer of 256 neurons with ReLU, and a softmax layer with an output of 40 probabilities. A softmax layer was used in the output instead of a sigmoid function since there are 40 classes to be classified by the model. The optimizer used in this model is Adadelta, optimizing categorical cross-entropy with a learning rate of 0.2 and rho of 0.9.

In the second architecture, the inception model is added to the CNN, as shown in Fig. 2; an image from the ORL dataset is shown in Fig. 2 as the input to the inception-based CNN model. There are three sets of convolutional and max pool layers in this CNN. The output from the max pool layer of the third set is presented to the inception layers, which are in turn defined with three convolutional layers and one max pool layer. The filter sizes of the three convolutional layers are 1 × 1, 3 × 3, and 5 × 5; in inception layers, the same input is processed by the different layers in parallel. After the inception blocks, the output is flattened and presented to a dense layer. Since there are 40 classes, the dense layer has an output of 40 neurons

Fig. 2 CNN-INCEPTION model for ORL database



with a preceding hidden layer of 128 neurons. The weights of the inception model are initialized from the GoogLeNet model [25]. The model is trained by parsing the images in the train set one after the other. The encodings for each image are extracted and stored. When a test image is presented, the distance between the encodings of each train image and the test image is computed. The train image closest to the test image is the match, and its corresponding label is assigned to the test image. The shortest distance is also compared with a threshold; if the shortest distance exceeds the threshold, no label is assigned to the test image. In the CNN-INC-ONE model, the distance of the test image encodings is measured from each of the train images. In contrast, a third model is developed in which the distance of the test image encodings is measured from the mean over all train images of a person. In this third model, the mean encoding of a person is determined by taking the mean of the encodings of five different inception models, assuming there are five facial expressions for the same person in the train set. That is, if there are N facial expressions in the train set for each person or class, then N inception models (CNN-INC-MANY) are trained and their encodings are extracted to compute the mean; any one of the inception models is then chosen to find the encodings for the test images. The distance of the test image encodings is measured with respect to the mean encodings of each person, and the label of the person with the shortest distance is assigned as the match. If only one image is considered in the train set when measuring the distance from the test image encodings, the model is CNN-INC-ONE; if N images are considered, it is CNN-INC-MANY. The optimal value of N may be derived for each dataset through simulations and trade-offs.
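The matching rule just described, in both its CNN-INC-ONE and CNN-INC-MANY variants, amounts to a nearest-neighbor search over encoding vectors with a rejection threshold; in the CNN-INC-MANY case, each gallery entry is a per-person mean encoding. A sketch in NumPy with made-up 128-dimensional encodings (the names, dimensions, and threshold here are illustrative, not values from the paper):

```python
import numpy as np

def match(test_enc, gallery, threshold):
    """Return the label of the closest gallery encoding, or None if all are
    farther than the threshold (the 'unknown face' case described above)."""
    best_label, best_dist = None, float("inf")
    for label, enc in gallery.items():
        d = np.linalg.norm(test_enc - enc)   # Euclidean distance
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold else None

rng = np.random.default_rng(0)
# Hypothetical 128-D encodings: N = 3 train images per person, then averaged
# (the CNN-INC-MANY gallery); real encodings would come from the trained net.
train = {p: rng.normal(size=(3, 128)) for p in ("person_a", "person_b")}
gallery = {p: encs.mean(axis=0) for p, encs in train.items()}

probe = train["person_a"][0]   # stand-in for a test image's encoding
print(match(probe, gallery, threshold=20.0))
```

Setting the gallery to the individual train encodings instead of their means turns the same function into the CNN-INC-ONE matcher.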
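Separately, the feature-map sizes traced through the CNN-ORG description follow the usual shape rules: a valid convolution with kernel size k maps n to n − k + 1, and the reported pooling outputs (e.g., 108 × 88 → 107 × 87) imply a 2 × 2 pool with stride 1, mapping n to n − 1. A quick check of the reported chain (note that a valid 5 × 5 convolution on a 112 × 96 input yields a width of 92, so the reported 88 suggests a slightly different effective input width):

```python
def conv_valid(n: int, k: int) -> int:
    # 'valid' convolution (no padding, stride 1): output size n - k + 1
    return n - k + 1

def pool_2x2_stride1(n: int) -> int:
    # 2x2 pooling window with stride 1: output size n - 1
    return n - 1

h, w = 108, 88                                    # after the first conv, as reported
h, w = pool_2x2_stride1(h), pool_2x2_stride1(w)   # first max pool  -> 107 x 87
h, w = conv_valid(h, 3), conv_valid(w, 3)         # second conv     -> 105 x 85
h, w = pool_2x2_stride1(h), pool_2x2_stride1(w)   # second max pool -> 104 x 84
print(h, w)  # 104 84
```

From 108 × 88 onward, every size quoted in the text is reproduced exactly by these two rules.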

3 The Simulation Results

The ORL dataset is used to measure the accuracy of the three models, namely CNN-ORG, CNN-INC-ONE, and CNN-INC-MANY. Since there are 10 facial expressions for each of the 40 persons, there are a total of 400 images in the dataset. Of the 10 facial expressions, 5 can be used for training and the remaining 5 for testing. The five facial expressions for training can be chosen randomly or in order, and each selection can be treated as a different train set. For example, two train sets can be created with facial expressions {TRAINSET1: 1, 3, 5, 7, 9} and {TRAINSET2: 1, 3, 6, 8, 9}; the remaining facial expressions fall into the corresponding test sets {TESTSET1: 2, 4, 6, 8, 10} and {TESTSET2: 2, 4, 5, 7, 10}. Hence there are 200 images in each train set and 200 images in each test set. Initially, the CNN-ORG model was trained with {TRAINSET1: 1, 3, 5, 7, 9} and its performance was measured on {TESTSET1: 2, 4, 6, 8, 10}, using 20 epochs for model training. Figures 5 and 6 show the accuracy and the loss function with respect to epochs. It can be observed that the train accuracy increased gradually to 100% at

Face Recognition with Inception-Based CNN Models


Fig. 3 Sub-sampled ORL datasets for training: (a) TRAINSET1, (b) TRAINSET2


L. Patil and V. D. Mytri

Fig. 4 Sub-sampled ORL datasets for testing: (a) TESTSET1, (b) TESTSET2
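The sub-sampling of ORL into the train/test pairs shown in Figs. 3 and 4 can be sketched as follows (a hypothetical helper; the function name and 1-based expression indices are ours, not the paper's):

```python
# Sketch: splitting the ORL dataset (40 persons x 10 expressions) into a
# train set and a test set by expression index (1-based, as in the text).

def split_orl(train_expressions, n_persons=40, n_expressions=10):
    """Return (train, test) lists of (person, expression) pairs."""
    chosen = set(train_expressions)
    test_expr = [e for e in range(1, n_expressions + 1) if e not in chosen]
    train = [(p, e) for p in range(1, n_persons + 1) for e in sorted(chosen)]
    test = [(p, e) for p in range(1, n_persons + 1) for e in test_expr]
    return train, test

# TRAINSET1 / TESTSET1 from the text: 200 images in each set
train1, test1 = split_orl([1, 3, 5, 7, 9])
print(len(train1), len(test1))
```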


Fig. 5 Accuracy during CNN-ORG training for TRAINSET1

Fig. 6 Entropy Loss during CNN-ORG training for TRAINSET1

epoch 10. Since the maximum accuracy is already reached, it remains constant after the 10th epoch. A similar pattern can be observed for the loss function as well: the loss drops significantly in the first epoch, reduces gradually thereafter until the 10th epoch, and then remains constant, as it is almost zero by that point. The performance of CNN-ORG, CNN-INC-ONE, and CNN-INC-MANY in terms of prediction accuracy is measured against the methods PCA, LBP, SVD-HMM


Table 1 Comparison of prediction accuracy of the deep learning models with LBP, PCA, and HMM methods for TESTSET1

Method          Accuracy (%)
PCA             92.50
LBP             79.50
SVD-HMM         94.00
LIN-SVD-HMM     96.00
CNN-ORG         84.00
CNN-INC-ONE     93.50
CNN-INC-MANY    98.50

and LIN-SVD-HMM. PCA and LBP are well-known classical methods in face recognition. The SVD-HMM and LIN-SVD-HMM methods were presented by the authors in Ref. [26] and are statistical methods based on hidden Markov models and singular value decomposition, whereas CNN-ORG, CNN-INC-ONE, and CNN-INC-MANY are deep learning methods. Hence in this paper, the performance of deep learning methods is compared with both classical face recognition methods and statistical models. To compare performance, all 7 models were trained with TRAINSET1 and tested with TESTSET1. Table 1 shows the comparison of accuracy of CNN-ORG, CNN-INC-ONE, and CNN-INC-MANY with PCA, LBP, SVD-HMM, and LIN-SVD-HMM for TESTSET1. It can be observed that LBP performed very poorly, with an accuracy of 79.5% for TESTSET1. This may be attributed to the fact that LBP depends on local features, and since each facial expression has different features, there is huge variation in the local features for each person. The next model, with slightly better accuracy, is CNN-ORG. CNN-ORG is a deep learning model [1] that performs very well on binary class problems: if the number of classes in the dataset were just two, deep learning models like CNN-ORG would perform very well and accuracy would be very good. But in the present case, since the ORL database is used, there are 40 classes to be classified by the models. This is one reason for the underperformance of CNN-ORG. In a binary classification model, any probability above 0.5 can be marked as class 1 and a probability equal to or less than 0.5 as class 2; with 40 classes in the dataset, the probability window from 0 to 1 must be divided into 40 equal parts, which may result in leakage into adjacent classes during classification. Since CNN-ORG achieved an accuracy of 84%, it can be further improved by enhancing the model architecture with inception blocks.
Another reason for the underperformance of the CNN-ORG model is the fully connected layers. When fully connected layers are used, the output of the last max-pool layer has to be flattened, which loses the feature relationships and the order of features within the image. This issue with the CNN-ORG model can be eliminated by replacing the fully connected layers with inception blocks. The inception blocks output the encodings


for each image. The inception models are transfer learning models: the weights of the inception models are initialized with the already trained GoogLeNet model. Once the encodings of all the images in TRAINSET1 are extracted, the distance between the encodings of each image of TESTSET1 and the encodings of the images of TRAINSET1 is computed. The label of the image in TRAINSET1 that has the shortest distance to the test image in TESTSET1 is assigned to the test image. If the distance is higher than a threshold, no label is assigned. The accuracy of the CNN-INC-ONE model is 93.5%, which is still less than SVD-HMM and LIN-SVD-HMM, but slightly better than PCA. Hence the CNN-INC-ONE model requires further improvement. As explained in Sect. 2, the CNN-INC-ONE model can be improved by using the mean of many train images to measure the distance to the test image, instead of using the encoding of just one train image. For training, a set of images is used for each person to create the encodings; this model may be called CNN-INC-MANY. The number of images used to compute the mean of the encodings for the train set can be taken as 2–5. Table 1 shows the 98.5% accuracy for the CNN-INC-MANY model that considered 5 facial expressions for each person. The effect of variation in the facial expressions on accuracy is reduced when the mean encodings are used in measuring the distance; with this approach, the accuracy on TESTSET1 is the best among all models. A question now arises: how many images are required to compute the mean of the encodings? To determine the optimum number of facial expressions in the train set, another experiment was conducted. Table 2 shows the accuracy of the CNN-INC-MANY model for different sub-models. The CNN-INC-TWO model, with two images to compute the mean, yielded an accuracy of 96.5%, while CNN-INC-THREE and CNN-INC-FOUR yielded 99% accuracy each.
By further increasing the number of images to five, the CNN-INC-FIVE model resulted in an accuracy of 98.5%, which is 0.5% less than that of CNN-INC-THREE and CNN-INC-FOUR. Adding more images beyond 3 did not improve accuracy, because it shifts the mean of the images and also increases variation in the set. Hence the optimal number of images can be fixed at 3 or 4 in this case. The performance of CNN-ORG, CNN-INC-ONE, and all sub-models of CNN-INC-MANY was tested on another dataset sub-sampled from ORL by shuffling the images in TRAINSET1 and TESTSET1. These new sets are treated

Table 2 Accuracy of sub-models of CNN-INC-MANY for TESTSET1

Method          Accuracy (%)
CNN-INC-ONE     93.50
CNN-INC-TWO     96.50
CNN-INC-THREE   99.00
CNN-INC-FOUR    99.00
CNN-INC-FIVE    98.50


as TRAINSET2 and TESTSET2. The facial expressions in TRAINSET2 are 1, 3, 6, 8, and 9, and those in TESTSET2 are 2, 4, 5, 7, and 10. The accuracy and entropy loss during the training of the CNN-ORG model on TRAINSET2 are shown in Figs. 7 and 8. It can be noticed that accuracy improves very gradually up to epoch 12 and becomes almost 100% after the 12th epoch. Similarly, the entropy reaches its minimum after the 12th epoch; the drop in entropy is significant in the first epoch and very gradual thereafter until the 12th epoch. Table 3 shows the comparison of accuracy of CNN-ORG, CNN-INC-ONE, and CNN-INC-MANY with PCA, LBP, SVD-HMM, and LIN-SVD-HMM for TESTSET2. It can be observed that LBP yielded just 80.5%, whereas PCA, SVD-HMM, and LIN-SVD-HMM yielded 91.5%, 93.5%, and 94.5%, respectively, for TESTSET2. When the deep learning models CNN-ORG, CNN-INC-ONE, and CNN-INC-MANY were tested for accuracy on TESTSET2, the accuracies were 88.5%, 93.5%, and 99%, respectively. The accuracy of CNN-ORG is much less than that of methods like SVD-HMM and LIN-SVD-HMM. However, CNN-INC-MANY outperformed all other models for TESTSET2, while the performance of CNN-INC-ONE is very close to that of SVD-HMM and LIN-SVD-HMM. In CNN-INC-MANY, five images are considered to compute the mean. The experiment to determine the optimum number of images for computing the mean of the encodings was repeated for TESTSET2 as well. Table 4 shows the accuracy of all sub-models of CNN-INC-MANY along with CNN-INC-ONE. It can be observed that the CNN-INC-THREE model yielded the highest accuracy of 100% for TESTSET2, while the models CNN-INC-THREE and CNN-INC-FOUR yielded the highest accuracy of 99% each for TESTSET1. The common model that yielded the highest accuracy for both TESTSET1 and TESTSET2 is CNN-INC-THREE. Hence, the

Fig. 7 Accuracy during CNN-ORG training for TRAINSET2


Fig. 8 Entropy during CNN-ORG training for TRAINSET2

Table 3 Comparison of prediction accuracy of the deep learning models with LBP, PCA, and HMM methods for TESTSET2

Method          Accuracy (%)
PCA             91.50
LBP             80.50
SVD-HMM         93.50
LIN-SVD-HMM     94.50
CNN-ORG         88.50
CNN-INC-ONE     93.50
CNN-INC-MANY    99.00

Table 4 Accuracy of sub-models of CNN-INC-MANY for TESTSET2

Method          Accuracy (%)
CNN-INC-ONE     93.50
CNN-INC-TWO     98.00
CNN-INC-THREE   100.00
CNN-INC-FOUR    99.00
CNN-INC-FIVE    99.00


optimum number of facial expressions to be considered in computing the mean of encodings is three for the ORL dataset.
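A minimal sketch of the CNN-INC-MANY matching step, in which each person is represented by the mean of the first n train encodings (the distance metric, threshold, and toy data below are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def mean_encodings(encs_per_person, n=3):
    """Mean of the first n train encodings per person (CNN-INC-MANY);
    n=3 was found optimal for ORL in the experiments above."""
    return {p: e[:n].mean(axis=0) for p, e in encs_per_person.items()}

def match_to_means(test_enc, means, threshold=1.0):
    # shortest distance to a person's mean encoding wins, unless it
    # exceeds the threshold (then no label is assigned)
    dists = {p: np.linalg.norm(m - test_enc) for p, m in means.items()}
    label = min(dists, key=dists.get)
    return label if dists[label] <= threshold else None

# toy usage: two "persons" with tightly clustered 128-D encodings
rng = np.random.default_rng(1)
encs = {"p1": rng.normal(0.0, 0.01, size=(5, 128)),
        "p2": rng.normal(5.0, 0.01, size=(5, 128))}
means = mean_encodings(encs, n=3)
print(match_to_means(np.full(128, 5.0), means))
```

Averaging the per-person encodings before measuring distance is what damps the expression-to-expression variation discussed above.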

4 Conclusion

In this paper, three models are presented, namely CNN-ORG, CNN-INC-ONE, and CNN-INC-MANY. The CNN-ORG model is the original CNN model [1], and CNN-INC-ONE is the original CNN-based inception model [25]. CNN-INC-ONE has been improved by adding a mean computation step at the end. The performance of these models was compared with LBP, PCA, SVD-HMM, and LIN-SVD-HMM on two subsets of the ORL dataset, namely TESTSET1 and TESTSET2. CNN-INC-MANY has several sub-models, namely CNN-INC-TWO, CNN-INC-THREE, CNN-INC-FOUR, and CNN-INC-FIVE. Compared to all other methods, CNN-INC-MANY yielded accuracies of 98.5% and 99% for TESTSET1 and TESTSET2, respectively. The optimal number of facial expressions required for maximizing accuracy was determined by developing the sub-models of CNN-INC-MANY: the number of facial expressions used to compute the mean was varied from 2 to 5 and tested on TESTSET1 and TESTSET2. It was observed that, of all the models, CNN-INC-THREE and CNN-INC-FOUR yielded 99% accuracy each for TESTSET1, and 100% and 99%, respectively, for TESTSET2. Comparing CNN-INC-THREE and CNN-INC-FOUR, CNN-INC-THREE yielded the highest accuracy on both datasets. Hence the optimum number of facial expressions can be set to three to obtain maximum accuracy. The reason for the poorer performance with four or five facial expressions compared to three is that the additional images introduce more variation into the data and shift the means of the images. Therefore, the optimum number of facial expressions for this dataset is three; this number must be derived experimentally for any given dataset.

References

1. D.H. Hubel, T.N. Wiesel, Receptive fields, binocular interaction, and functional architecture in the cat's visual cortex. J. Physiol. 160, 106–154 (1962)
2. K. Fukushima, Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol. Cybern. 36(4), 193–202 (1980)
3. Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition, in Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2323 (1998)
4. Y. LeCun, B. Boser, J.S. Denker et al., Backpropagation applied to handwritten zip code recognition. Neural Comput. 1(4), 541–551 (1989)
5. M. Tygert, J. Bruna, S. Chintala, Y. LeCun, S. Piantino, A. Szlam, A mathematical motivation for complex-valued convolutional networks. Neural Comput. 28(5), 815–825 (2016)


6. M. Oquab, L. Bottou, I. Laptev, J. Sivic, Is object localization for free?—Weakly-supervised learning with convolutional neural networks, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, pp. 685–694 (2015)
7. C. Szegedy, W. Liu, Y. Jia et al., Going deeper with convolutions, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '15), Boston, Mass, USA, pp. 1–9 (2015)
8. Y.L. Boureau, J. Ponce, Y. LeCun, A theoretical analysis of feature pooling in visual recognition, in Proceedings of the ICML (2010)
9. D. Scherer, A. Muller, S. Behnke, Evaluation of pooling operations in convolutional architectures for object recognition, in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 6354, no. 3, pp. 92–101 (2010)
10. H. Wu, X. Gu, Max-pooling dropout for regularization of convolutional neural networks, in Neural Information Processing. Lecture Notes in Computer Science, vol. 9489 (Springer International Publishing, Cham, 2015), pp. 46–54
11. K. He, X. Zhang, S. Ren, J. Sun, Spatial pyramid pooling in deep convolutional networks for visual recognition, in Computer Vision—ECCV 2014. Lecture Notes in Computer Science, vol. 8691 (Springer International Publishing, Cham, 2014), pp. 346–361
12. K. He, X. Zhang, S. Ren, J. Sun, Spatial pyramid pooling in convolutional networks for visual recognition. IEEE Trans. Patt. Anal. Mach. Intell. 37(9), 1904–1916 (2015)
13. W. Ouyang, X. Wang, X. Zeng et al., DeepID-Net: deformable deep convolutional neural networks for object detection, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, USA, pp. 2403–2412 (2015)
14. A. Krizhevsky, I. Sutskever, G.E. Hinton, ImageNet classification with deep convolutional neural networks, in Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS '12), Lake Tahoe, Nev, USA, pp. 1097–1105 (2012)
15. R. Girshick, J. Donahue, T. Darrell, J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, in Proceedings of the 27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14), Columbus, Ohio, USA, pp. 580–587 (2014)
16. D. Chen, X. Cao, F. Wen, J. Sun, Blessing of dimensionality: high-dimensional feature and its efficient compression for face verification, in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '13), pp. 3025–3032 (2013)
17. X. Cao, D. Wipf, F. Wen, G. Duan, J. Sun, A practical transfer learning algorithm for face verification, in Proceedings of the 14th IEEE International Conference on Computer Vision (ICCV '13), pp. 3208–3215 (2013)
18. T. Berg, P.N. Belhumeur, Tom-vs-Pete classifiers and identity-preserving alignment for face verification, in Proceedings of the 23rd British Machine Vision Conference (BMVC '12), pp. 1–11 (2012)
19. D. Chen, X. Cao, L. Wang, F. Wen, J. Sun, Bayesian face revisited: a joint formulation, in Computer Vision—ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, 7–13 Oct 2012, Proceedings, Part III, Lecture Notes in Computer Science, vol. 7574 (Springer, Berlin, Germany, 2012), pp. 566–579
20. S. Lawrence, C.L. Giles, A.C. Tsoi, A.D. Back, Face recognition: a convolutional neural-network approach. IEEE Trans. Neural Netw. Learn. Syst. 8(1), 98–113 (1997)
21. X. Wu, R. He, Z. Sun, T. Tan, A light CNN for deep face representation with noisy labels. https://arxiv.org/abs/1511.02683
22. O.M. Parkhi, A. Vedaldi, A. Zisserman, Deep face recognition, in Proceedings of the British Machine Vision Conference 2015, Swansea, pp. 41.1–41.12
23. F. Schroff, D. Kalenichenko, J. Philbin, FaceNet: a unified embedding for face recognition and clustering, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '15), IEEE, Boston, Mass, USA, pp. 815–823 (2015)
24. Y. Taigman, M. Yang, M. Ranzato, L. Wolf, DeepFace: closing the gap to human-level performance in face verification, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14), Columbus, Ohio, USA, pp. 1701–1708 (2014)


25. C. Szegedy et al., Going deeper with convolutions, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015)
26. L. Patil, V.D. Mytri, A modified SVD based face recognition using hidden Markov models. J. Adv. Res. Dynam. Control Syst. 13, 328–338 (2018)

Design of Protocol for Handwriting Recognition Using FPGA

Vinita Patil and Rajendra R. Patil

Abstract Authentication of handwritten documents, such as signatures and written text, and optical character recognition are major issues in the identification of persons, recognition of addresses and post codes on envelopes, document analysis, interpretation of amounts in bank transactions, and scripts including English. In this context, a system architecture for real-time handwriting recognition demands a novel automated system that recognizes handwritten text effectively and free of noise. The proposed work includes a probabilistic patch-based (PPB) filter, Canny edge detection, and classification. Additive Gaussian noise and multiplicative speckle noise are filtered using the PPB filter, the characters in the text are extracted by a lifting-scheme transformation, the edges of the letters are detected by Canny edge detection, and finally the database is classified as matched or not matched. The complete work is designed in the Software Development Kit (SDK) and Embedded Development Kit (EDK), which are interfaced with a Virtex-5 FPGA development board bearing part name XC5VLX50T.

Keywords Authentication · Handwriting recognition · Lifting scheme · Canny edge detection

1 Introduction

People have become aware of new technologies, and most of the latest products, resulting from the innovative technical ideas of scientists and technocrats, are used by individuals in their everyday activities.

V. Patil (B) · R. R. Patil
Department of ECE, GSSS Institute of Engineering and Technology for Women, Mysuru, Karnataka, India
e-mail: [email protected]
R. R. Patil e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_49



However, the handwriting of a user can serve as an identification tool for authorization and authentication. Handwriting is a trajectory made by hand on a surface to represent the well-defined graphical symbols of a language. These symbols can also be printed on a surface with the aid of machines. The study of handwriting covers many fields, such as experimental psychology, neuroscience, education, computer science and engineering, and linguistics. Understanding handwriting generation is an important factor in developing suitable handwritten data processing systems and in dealing with the complex variability in handwriting [1, 3]. Depending on the handwritten data, various definitions have been given to handwriting; to maintain accuracy in identifying a person based on handwriting, it is necessary to classify it as online handwriting, isolated handwriting, unconstrained handwriting, and writer/type independent [2].

1.1 Online Handwriting

Handwritten data is classified as either online or offline. In the online case, the two-dimensional coordinates of successive points of the writing are stored as a spatiotemporal representation, as a function of time. In offline handwriting, on the other hand, the completed writing is available as an image, and its analysis involves the spatio-luminance information of the image data. The other important distinction between online and offline handwriting concerns the start of the recognition process. In online handwriting, the recognition process is initiated at the start of writing, and handwriting and recognition are carried out simultaneously. In contrast, the recognition task for offline handwriting is performed offline: the recognition process is initiated on already available handwritten data, which has been stored as images. Online handwritten data is popularly known as electronic ink. The trajectory generated from the start of writing (pen down) to the end of writing (pen up) is called a stroke. A valid symbol or character may have one or many strokes [4].

1.2 Isolated Handwriting

Handwriting can be either isolated or cursive. In isolated handwriting, the characters of a word are written separately with spaces between them. In cursive writing, on the other hand, the words are written with connected characters using connecting strokes, and overwriting of strokes may occur. In some scripts, the defined writing styles of words are isolated; however, the writing styles of individuals may lead to touching or a small overlap among characters. In some scripts, such as English, a combination of both isolated and cursive writing is in practice [5].


1.3 Unconstrained Handwriting

For better interpretation and processing of online handwriting, a few constraints may be imposed on the writer while collecting handwriting; such handwriting is known as constrained handwriting. Possible constraints are: writers may be asked to write various parts of a character within specified locations or boxes; writing may be restricted to a straight line; or writers may be required to use predefined geometrical properties of characters, such as stroke length, position, alignment, order, character size, and aspect ratio. Handwriting collected without such constraints is unconstrained handwriting [6].

1.4 Writer-Independent Handwriting

Writer-independent handwriting concerns the type of handwritten data and the corresponding data processing systems. If the handwritten data samples include the writings of many writers, along with the varied writing styles of individuals, the corresponding database is called writer independent and the corresponding data processing systems are called writer-independent systems. On the other hand, a system that deals only with the writing styles of an individual is called writer dependent, and the corresponding database is called a writer-dependent database [7].

2 Hidden Markov Model

An HMM is a doubly embedded stochastic process with an underlying stochastic process that is not observable. However, the underlying process can be observed through another set of stochastic processes that produce the sequence of observations. An HMM is characterized by the following five components [8].

a. The number of states in the model,

S = {S1, S2, S3, ..., Sn}  (1)

b. The number of distinct observation symbols per state,

V = {v1, v2, v3, ..., vm}  (2)

c. The state transition probability distribution

A = {aij}, where aij = P(qt+1 = Sj | qt = Si), 1 ≤ i, j ≤ n  (3)

d. The observation symbol probability distribution in state j,

B = {bj(k)}  (4)

e. The initial state distribution

πi = P[q1 = Si]  (5)

So, a complete specification of an HMM requires the specification of two model parameters, the observation symbols, and three probability measures, A, B, and π, for feature extraction [9]. In our methodology, within the HMM technique, we have incorporated the Baum–Welch algorithm for handwriting-based identification. The Baum–Welch algorithm works together with an HMM to estimate unknown parameters by incorporating the forward–backward algorithm, and it is an essential tool in probabilistic modeling. The algorithm is of great importance in signal processing and bioinformatics because it handles the joint probability of associated hidden and observed discrete random variables; with these parameters, it is well suited to estimating the parameters of an HMM by the maximum likelihood method. A key property of the Baum–Welch algorithm is the conditional independence of its estimated parameters in time: the ith hidden variable, given the (i−1)th hidden variable, is independent of earlier hidden variables, which leads to the conclusion that the current observation variable depends only on the current hidden state. This is well captured by the forward–backward algorithm described as follows [10]. Assuming 'T' total possible states, we take the probability P(Hx | Hx−1) to be independent of time 't', which gives the stochastic transition matrix representation

S = {sij} = P(Hx = j | Hx−1 = i)  (6)

For x = 1, the initial state distribution is

Øi = P(H1 = i)  (7)

Let Yt be the observation variable, which can take any value in a range of size 'N'. The observation probabilities are independent of time, and the probability of an observation yi at time t for state Ht = j is given by

rj(yi) = P(Yt = yi | Ht = j)  (8)


Let the stochastic matrix M = {mj(yi)} be the set of probabilities obtained over the range T × N from Yt and Ht, where yi comes from the observation parameters and mj from all possible states of the HMM. A primary observation sequence can be defined as

O = (O1 = o1, O2 = o2, ...)  (9)

The corresponding hidden Markov chain is represented as δ = (S, R, Ø) with initial random conditions; these parameters can also be set using prior information about the HMM parameters. The forward procedure of the algorithm is given by

θi(t) = P(O1 = o1, O2 = o2, ..., Ot = ot, Ht = i | δ)  (10)

the probability of observing o1, o2, o3, ..., ot and being in state i at time 't', which can be calculated recursively as follows:

• θi(1) = Øi ri(o1)
• θi(t + 1) = ri(yt+1) Σ(j=1..T) θj(t) sji

Similarly, the backward procedure is given by

γi(t) = P(Yt+1 = yt+1, ..., YT = yT | Ht = i, δ)  (11)

the probability of the ending partial sequence yt+1, ..., yT given starting state 'i' at time 't', which can be evaluated as follows:

• γi(T) = 1
• γi(t) = Σ(j=1..N) γj(t + 1) sij rj(yt+1)

We select the Baum–Welch algorithm over the alternatives for HMM parameter estimation because it is provably efficient and provides highly accurate results when combined with the preprocessing based on the probabilistic patch-based weights filter, with estimation over time, which is highly essential for handwritten input image documents [11].
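Under the notation above, the forward procedure of Eq. (10) can be sketched as follows (the toy two-state model is ours, purely for illustration):

```python
import numpy as np

def hmm_forward(S, O_init, R, obs):
    """Forward procedure of Eq. (10): theta[i, t] is the probability of
    observing obs[0..t] and being in state i at time t.
    S: (n, n) transition matrix s_ij; O_init: initial distribution;
    R: (n, m) observation probabilities r_j(y); obs: observation indices.
    A small illustrative sketch, not the paper's implementation."""
    n, T = S.shape[0], len(obs)
    theta = np.zeros((n, T))
    theta[:, 0] = O_init * R[:, obs[0]]                      # theta_i(1)
    for t in range(1, T):
        # theta_i(t+1) = r_i(y_{t+1}) * sum_j theta_j(t) s_ji
        theta[:, t] = R[:, obs[t]] * (theta[:, t - 1] @ S)
    return theta

# toy two-state model
S = np.array([[0.7, 0.3], [0.4, 0.6]])
O_init = np.array([0.6, 0.4])
R = np.array([[0.9, 0.1], [0.2, 0.8]])
theta = hmm_forward(S, O_init, R, [0, 1, 0])
print(theta[:, -1].sum())   # total likelihood of the observation sequence
```

The backward procedure is symmetric, iterating from t = T down to 1 with γi(T) = 1; Baum–Welch combines both to re-estimate S, R, and Ø.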

3 Artificial Neural Network

Artificial neural networks (ANNs) offer a new group of nonlinear algorithms for classification using hidden layers and multilayer perceptrons. An ANN is an information processing model motivated by biological nervous systems, such as the brain. The key element of this model is the novel configuration of the information processing system: it consists of a large number of highly interconnected processing elements (neurons) working in unison to solve particular problems. An ANN is structured for a particular application, such as pattern recognition or data classification, using a learning process.


The ANN is constructed with a hidden layer size of ten and 'trainlm' as the training function. The database features are divided for training and testing purposes, and the network is then trained with the network inputs and targets.

3.1 Artificial Neural Network BP

The artificial neural network is trained and evaluated here with the BP features of the database images.

Algorithm: Feature of ANN-BP
Input: BP feature set ANNBP
Output: ANN-BP result set
Start
1. Init Ni, Tr
2. Load ANNBP
3. DB → [Train-F, Train-L, Test-F, Test-L]
4. Perform network fitting
5. Train the ANN classifier using Train-F and Train-L
6. Op ← [Test-F]
7. CF ← [Op, Test-L]
8. D ← Diagonal[CF]
9. L ← LowTriangular[CF]
10. U ← UpperTriangular[CF]
11. FP, FN, TP, TN ← {D, L, U}
12. Store the results ANN-BP
Stop
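Steps 7–11 above derive per-class FP, FN, TP, and TN from the confusion matrix CF via its diagonal and off-diagonal (lower/upper triangular) parts. A sketch using the equivalent row/column sums (the rows-are-true-labels convention is our assumption):

```python
import numpy as np

def per_class_counts(cf):
    """TP/FP/FN/TN per class from a confusion matrix CF whose rows are
    true labels and columns are predicted labels."""
    total = cf.sum()
    tp = np.diag(cf)                 # D: diagonal hits
    fp = cf.sum(axis=0) - tp         # off-diagonal mass in each column
    fn = cf.sum(axis=1) - tp         # off-diagonal mass in each row
    tn = total - tp - fp - fn        # everything else
    return tp, fp, fn, tn

# toy 3-class confusion matrix
cf = np.array([[5, 1, 0],
               [0, 4, 2],
               [1, 0, 7]])
tp, fp, fn, tn = per_class_counts(cf)
print(tp, fp, fn, tn)
```

The lower and upper triangular parts in steps 9–10 together carry exactly the off-diagonal mass used here for FP and FN.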

4 Canny Edge Detection

The purpose of edge detection in general is to significantly reduce the amount of data in an image while preserving the structural properties needed for further image processing. In this research work, six steps have been followed to improve the existing edge detection algorithms. The first is to achieve a low error rate: edges that occur in the image should be retained without being missed, and the edge detector should not respond to non-edges. The second step is to localize the edges of the image: the distance between the detected pixels and the actual edge must be at a minimum. The third step is to produce only one response to a single edge, eliminating multiple responses. The aim of Canny edge detection is to meet the following criteria [12]:


1. Detection: The probability of detecting real edge points should be maximized, while the probability of falsely detecting non-edge points should be minimized. This corresponds to maximizing the signal-to-noise ratio.
2. Localization: The detected edges should be as close as possible to the real edges.
3. Number of responses: One real edge should not result in more than one detected edge (one can argue that this is implicitly included in the first requirement).

Based on the above criteria, the Canny edge detector is chosen in this research work, first smoothing the image to eliminate noise pixels. For accurate edge detection, the gradient method is used to highlight the regions using spatial derivative operations, as shown in Eqs. (12) and (13):

G(m, n) = √(gm²(m, n) + gn²(m, n))  (12)

∅(m, n) = tan⁻¹(gn(m, n) / gm(m, n))  (13)

The algorithm then tracks along these regions and suppresses any pixel that is not at the maximum (non-maximum suppression). The gradient array is further reduced by hysteresis, which is used to track along the remaining pixels that have not been suppressed. Hysteresis uses two thresholds: if the magnitude is below the low threshold, it is set to zero; if the magnitude is above the high threshold, the pixel is marked as an edge.
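Eqs. (12) and (13) can be sketched as follows; central differences via `np.gradient` stand in for the Sobel kernels a production Canny implementation would typically use (that substitution, and the toy image, are ours):

```python
import numpy as np

def gradient_mag_angle(img):
    """Gradient magnitude G(m, n) and direction angle (Eqs. 12-13)."""
    gm = np.gradient(img.astype(float), axis=0)   # derivative along rows
    gn = np.gradient(img.astype(float), axis=1)   # derivative along cols
    G = np.hypot(gm, gn)                          # sqrt(gm^2 + gn^2)
    phi = np.arctan2(gn, gm)                      # tan^-1(gn / gm)
    return G, phi

# vertical step edge: the magnitude peaks at the transition columns
img = np.zeros((5, 6))
img[:, 3:] = 1.0
G, phi = gradient_mag_angle(img)
print(G[2])
```

Non-maximum suppression would then keep only pixels where G is a local maximum along the direction phi, before the two-threshold hysteresis described above.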

5 Probabilistic Patch-Based Filter

This is a patch-based denoising filter for additive white Gaussian noise within the framework of the weighted maximum likelihood estimation technique. In this technique, denoising of the image is carried out based on an estimation of the original image. The input image is defined over a regular grid, with every pixel value affected by noise through its noise distribution (likelihood) function. This likelihood function of the noise is modeled as a parametric patch-based model with a space-varying parameter. Hence, the recovered input image is taken as the maximum likelihood estimate under the random distribution defined by Eqs. (14) and (15); the functioning of the PPB filter is represented in Fig. 1.

θ̂(MLE) = arg max over θs of Σ(t∈Sθ) log p({vt | θs})  (14)

θ̂(WMLE) = arg max over θs of Σ(t∈Sθ) w(s, t) log p(vt | θs)  (15)


Fig. 1 Representation of the probabilistic patch-based filter: the noisy image v and the parameters from iteration (i−1) feed the PPBW stage, which produces the weights w(s, t) used by the WMLE stage. PPBW = probabilistic patch-based weights; WMLE = weighted maximum likelihood estimation

where θs is the space-varying function and w(s, t) are the data-driven, non-zero weights. This weighted maximum likelihood estimation (WMLE) is well suited to minimizing the variance of the noise estimate for biased samples, forming the weighted average for the white Gaussian model given by

θ̂(WA) = Σt w(s, t) vt / Σt w(s, t)  (16)

The removal of AWGN relies on matching the weights to the noisy patches: with suitable normalization, the weights are defined from the probability that two patches share the same underlying parameter,

$w(s,t)_{\mathrm{PPB}} = \big[\, p(\theta^*_s = \theta^*_t \mid v) \,\big]^{1/h}$   (17)

where $\theta^*_s$ and $\theta^*_t$ denote the patches extracted from the parameter image $\theta^*$, and $h$ controls the filtering strength. The scheme of the probabilistic patch-based filter is summarized as follows.

5.1 Proposed Block Diagram

The proposed system for handwriting-based real-time user identification is shown in Fig. 2, and its flow chart of operation in Fig. 3. The system is designed for real-time identification of the user based on

Design of Protocol for Handwriting Recognition Using FPGA

Fig. 2 Block diagram of the proposed system: handwriting is loaded from the database, a sample is selected for analysis, denoised by the probabilistic-based filter, and classified by an artificial neural network, after which the matched user information is displayed; the front end is implemented with the software development kit

the real-time handwriting data compared with the pre-recorded database of the same user. The present system is built for a database of twenty different users with different handwriting styles, and it can be extended to any required number of users. The front end of the system is implemented using the software development kit, which denoises the input image with the probabilistic-based filter; the filter adapts its weights so that no information needed for analysis is lost. The main classification for handwriting-based user identification is performed by an artificial neural network, which is trained and tested on the users' datasets and interfaced with the FPGA Virtex-5 hardware kit; the matched user information is then displayed on the console with high accuracy [12].

5.2 Flowchart of the Proposed Work

Figure 3 shows the path of execution of the proposed system for real-time, handwriting-based user identification. After preprocessing with the probabilistic-based filter to reduce the effect of noise on the image, HMM-based feature extraction using the Baum–Welch algorithm is applied; all features extracted from the input handwriting image, i.e., edges, text, continuity, shapes, length and width between characters, and all other relevant parameters, are consolidated into a .h file that holds the information about the user's handwriting to be classified. The processed information is then fed to the FPGA Virtex-5 hardware, as discussed in the next section.
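Baum–Welch training builds on the forward recursion for the HMM likelihood. As a hedged illustration only (not the authors' implementation; the discrete toy model and function name below are assumptions), a scaled forward pass that scores an observation sequence against an HMM might look like:

```python
import numpy as np

def forward_loglik(A, B, pi, obs):
    """Scaled forward algorithm: log p(obs) under a discrete-emission HMM.
    A: (n, n) state-transition matrix; B: (n, m) emission matrix;
    pi: (n,) initial state distribution; obs: sequence of symbol indices."""
    alpha = pi * B[:, obs[0]]             # joint of state and first observation
    c = alpha.sum()
    alpha, log_p = alpha / c, np.log(c)   # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]     # propagate states, weight by emission
        c = alpha.sum()
        alpha, log_p = alpha / c, log_p + np.log(c)
    return log_p
```

Baum–Welch then re-estimates A, B, and pi from the forward and backward passes until this log-likelihood stops improving.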


Fig. 3 Flowchart of the proposed system: start → select handwriting from the database for analysis → preprocessing using the probabilistic-based filter → feature extraction using HMM and .h file generation → ANN classification → if the user is matched, display the user information, otherwise return to analysis → stop

6 Virtex-5 FPGA (XC5VLX50T)

Using the second-generation ASMBL™ (Advanced Silicon Modular Block) column-based architecture, the Virtex-5 family contains five distinct platforms (sub-families), the widest choice offered by any FPGA family at the time. Each platform contains a different ratio of features to address the needs of a wide variety of advanced logic designs. In addition to the most advanced, high-performance logic fabric, Virtex-5 FPGAs contain


many hard-IP system-level blocks, including powerful 36-Kbit block RAM/FIFOs, second-generation 25 × 18 DSP slices, SelectIO™ technology with built-in digitally controlled impedance, ChipSync™ source-synchronous interface blocks, system monitor functionality, enhanced clock management tiles with integrated DCMs (Digital Clock Managers) and phase-locked loop (PLL) clock generators, and advanced configuration options. Additional platform-dependent features include power-optimized high-speed serial transceiver blocks for enhanced serial connectivity, PCI Express®-compliant integrated Endpoint blocks, tri-mode Ethernet MACs (Media Access Controllers), and high-performance PowerPC® 440 embedded microprocessor blocks. These features allow advanced logic designers to build the highest levels of performance and functionality into their FPGA-based systems. Built on 65-nm copper process technology, Virtex-5 FPGAs are a programmable alternative to custom ASIC technology. Most advanced system designs require the programmable strength of FPGAs. Virtex-5 FPGAs offer a strong solution for high-performance logic, DSP, and embedded systems designers, with extensive logic, DSP, hard/soft microprocessor, and connectivity capabilities. The Virtex-5 LXT, SXT, TXT, and FXT platforms include advanced high-speed serial connectivity and link/transaction layer capability.

7 Results and Discussion

Figures 4a, b and 5a depict the sample input, reflecting the user number and accuracy for the samples under consideration; for user number 23, the accuracy achieved is 94.5%, as shown in Fig. 5b. Table 1 and the comparative plot in Fig. 6 represent the iterations for all 24 different users under consideration, with an obtained accuracy rate of 94.99%, i.e., the mean value

Fig. 4 Hardware implementation for handwritten person recognition for the selected user 8 (a) and user 23 (b)



Fig. 5 a Hardware implementation and b Accuracy in person identification

among the users. It is also noted that the minimum accuracy, 95.12%, is achieved for user number 13, and the maximum accuracy, 96.3%, for user number 23; the deviation in accuracy is mainly due to the fact that handwriting style varies from one user to another with practice, alphabet style, and spacing between characters. The system can therefore be scaled to various numbers of users, depending on user requirements and available memory.

8 Conclusion

We have described experiments in a handwriting-based user identification system using hidden Markov model and artificial neural network methods originally developed for handwriting applications. The input handwriting image is denoised with the probabilistic patch weight-based filter, considering the basic units of handwriting. We introduce two new features for handwriting that are invariant under translation, rotation, and scaling; such features have rarely been used in real applications because of the difficulty of estimating high-order derivatives of handwriting parameters. We have demonstrated that these high-order invariant features can indeed be made useful with careful implementation on the FPGA Virtex-5. A method for combining segment-oriented features in a stochastic handwriting pattern recognizer has been developed. With the Baum–Welch algorithm used in the HMM design, a significant reduction in user identification error was achieved, reducing the writer-independent error rate by nearly 50%. Finally, we would like


Table 1 Accuracy and user representation

S. No.   User No.   Accuracy (%)
1        1          94.28
2        2          95.24
3        3          95.70
4        4          94.23
5        5          95.93
6        6          94.04
7        7          95.11
8        8          96.04
9        9          96.29
10       10         95.29
11       11         95.73
12       12         96.21
13       13         95.19
14       14         96.35
15       15         95.31
16       16         95.72
17       17         93.60
18       18         96.12
19       19         94.30
20       20         95.63
21       21         96.39
22       22         94.66
23       23         96.38
24       24         96.16
Average accuracy (%): 94.99

to point out that although we report results for only 24 users with different handwriting styles, we are able to achieve a high average accuracy of about 94.99%. The system can easily be adjusted to handle a larger or unlimited set of handwriting samples by imposing different subjective constraints.


Fig. 6 Comparative graph of accuracy versus user number for all 24 users

References

1. J. Hu, M.K. Brown, W. Turin, HMM based on-line handwriting recognition. IEEE Trans. Pattern Anal. Mach. Intell. 18(10), 1039–1045 (1996)
2. C.C. Tappert, C.Y. Suen, T. Wakahara, The state of the art in on-line handwriting recognition. IEEE Trans. Pattern Anal. Mach. Intell. 12(8), 787–808 (1990)
3. R. Nag, K.H. Wong, F. Fallside, Script recognition using hidden Markov models, in Proc. ICASSP '86, vol. 3, Japan, pp. 2071–2074 (1986)
4. A. Kundu, P. Bahl, Recognition of handwritten script: a hidden Markov model based approach, in Proc. ICASSP '88, vol. 2, New York, pp. 928–931 (1988)
5. K.S. Nathan, H.S.M. Beigi, J. Subrahmonia, G.J. Clary, H. Maruyama, Real-time on-line unconstrained handwriting recognition using statistical methods, in Proc. ICASSP '95, Detroit, Mich., pp. 2619–2622 (1995)
6. S.A. Guberman, V.V. Rozentsveig, Algorithm for the recognition of handwritten text. Autom. Remote Control 37, 751–757 (1976) (translated from Avtomatika i Telemekhanika 37(5), 122–129 (1976))
7. J. Seiffertt, Backpropagation and ordered derivatives in the time scales calculus. IEEE Trans. Neural Netw. 21(8), 1262–1269 (2010)
8. C.R. Prashanth, K.B. Raja, K.R. Venugopal, L.M. Patnaik, Standard scores correlation based offline signature verification system, in International Conference on Advances in Computing, Control and Telecommunication Technologies (2009)
9. A.C. Ramachandra, J.S. Rao, Robust offline signature verification based on global features, in IEEE International Advance Computing Conference (2009)
10. S. Azernikov, Sweeping solids on manifolds, in Symposium on Solid and Physical Modeling, pp. 249–255 (2008)
11. M. Blumenstein, S. Armand, V. Muthukkumarasamy, Off-line signature verification using the enhanced modified direction feature and neural based classification, in International Joint Conference on Neural Networks (2006)
12. S. Madhvanath, V. Govindaraju, The role of holistic paradigms in handwritten word recognition. IEEE Trans. Pattern Anal. Mach. Intell., 149–164 (2001)

Diesel Engine Performance with Coolant Temperature Control System and Phase Change Material (PCM) in the Cold Ambient Conditions: A Review

Manish Kumar and S. K. Dhakad

Abstract To decrease cold-start emissions, a thermal energy storage (TES) structure can be coupled to the exhaust after-treatment system. Phase change materials (PCM) can be used in the thermal energy storage system to absorb the exhaust gas thermal energy, melting and storing it as latent heat. This allows capture of the exhaust gas heat energy during the engine's high-load conditions and a gradual release of the stored thermal energy back to the catalyst substrate during engine-off periods. Based on the results reviewed, integrating a TES system into the diesel after-treatment system has shown great potential for reducing a vehicle's emissions, especially for hybrid vehicles. This approach can help the catalyst activate the emission conversion reactions straight after a cold start. Its effectiveness, however, depends on the duration of the engine-off periods between driving cycles. It was also found that enhancing the heat transfer between the PCM and the catalyst substantially improves the emission reduction performance.

Keywords Phase change material · Energy storage · Thermal management · Engine cold start · Thermal energy storage (TES) · Material selection · Latent heat thermal energy storage (LHTES)

M. Kumar (B) · S. K. Dhakad
Department of Mechanical Engineering, S.A.T.I. Engineering College, Vidisha, Madhya Pradesh, India
e-mail: [email protected]
S. K. Dhakad
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_50


M. Kumar and S. K. Dhakad

1 Introduction

Nowadays, thermal heat storage plays an essential role in the reuse of heat energy, since when driving in cold conditions the engine does not warm up properly. Thermal storage technology is based on the use of PCMs, an established class of materials with many potential applications, extending from temperature moderation to heat storage [1]. High lubricant viscosity at low temperature incurs higher frictional losses, further reducing the indicated thermal efficiency of the engine. Will et al. [2] estimated that frictional losses in the engine during the early stages of warm-up (when the engine is in the region of 20 °C) can be up to 2.5 times higher than those observed when the lubricant is fully warm. If this temperature is reduced to a cold-start condition of 0 °C, increases in fuel consumption of up to 13.5% are predicted [3]. The fundamental problem is the overconsumption of fuel in diesel engines during winter and the longer time required to bring the engine to the temperature needed for proper ignition of diesel fuel; the efficiency of the diesel engine is poor because of this extended warm-up period. The technical readiness of multi-purpose vehicles (MPV) is largely governed by the time of cold-start preparation, i.e., the minimum temperature for reliable start-up of a diesel engine without an apparatus of preheating (AOP) or apparatus of facilitating (AOF). This work aims to increase the temperature of the engine using the coolant and a PCM while the engine is not running. The Department of Energy (DOE) has recognized that improving the management of vehicle heat is critical to achieving higher platform efficiency [4, 5]. Depending on operating conditions, conventional vehicles reject roughly 65–75% of the fuel's energy as waste heat through the exhaust or radiator, and in current combat vehicles, around 10–15% of the useful energy is devoted to running the cooling system [6, 7].

2 Literature

Cold-start strategies for diesel engines rely on an integrated model of pre-start heating, especially for the Alps region with winter temperatures extending from −30 to 0 °C [8]. There are three techniques for storing thermal energy: thermochemical, sensible, and latent heat storage. In thermochemical heat storage, heat is absorbed or released by breaking and re-forming molecular bonds in a completely reversible chemical reaction. In a sensible heat storage unit, heat is stored by changing the temperature of the storage medium, which can be in the liquid or solid phase. In a latent heat storage unit, thermal energy is stored using a reversible phase change occurring in the medium. The latent heat of most materials is considerably higher than their sensible heat, thus requiring a much smaller mass
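The mass advantage of latent over sensible storage can be illustrated with a quick calculation. The property values below are rough, illustrative assumptions, not taken from the paper:

```python
# Mass of storage medium needed to hold Q = 1000 kJ, sensible vs latent.
Q = 1000.0                 # thermal energy to store (kJ)

# Sensible storage in water over a 20 K temperature swing (assumed values)
c_water, dT = 4.186, 20.0  # specific heat (kJ/kg K), swing (K)
m_sensible = Q / (c_water * dT)

# Latent storage in paraffin wax at its melting point (assumed value)
L_paraffin = 200.0         # latent heat of fusion (kJ/kg)
m_latent = Q / L_paraffin
```

Under these assumptions, the latent store needs roughly 5 kg of wax against about 12 kg of water, which is why the latent option requires the much smaller mass noted above.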


of storage medium to store and then recover a given amount of thermal energy. Latent heat storage can be accomplished using PCMs, materials characterized by a high latent heat of fusion. PCMs can absorb or release high latent heat during melting or solidification and have been receiving attention for various applications such as waste heat recovery [1]. There are three classes of PCMs: organic, inorganic, and eutectic. PCMs are normally composed of several substances suited to a particular application. Organic materials are further divided into paraffins and non-paraffins; they offer congruent melting, self-nucleation, non-corrosiveness, and high heat of fusion, but also have some undesirable properties, such as low thermal conductivity, incompatibility with plastic containers, and flammability. Inorganic materials are classified as salt hydrates and metallics; they offer high latent heat of fusion per unit volume, comparatively high thermal conductivity, and small volume changes on melting. Most salt hydrates are cheap [9], but their drawbacks are incongruent melting and corrosiveness: the salt hydrate mixture becomes supersaturated at the melting temperature, so its melting-freezing capacity decreases permanently with each charge-discharge cycle. Eutectic PCMs, in turn, are subdivided into inorganic-organic, organic-inorganic, and inorganic-inorganic; their main advantage is that they combine the benefits of organic and inorganic materials, but their cost is 3–5 times higher. Modeling of heat transfer in charging and discharging processes has been discussed comprehensively in the scientific literature for a long time. An excellent survey of thermal heat storage, especially of moving boundary problems and of numerical simulation in various heat exchanger configurations, has been presented. The formulation of such a numerical model must handle moving solid-liquid interfaces and consequently varying boundary conditions; the phase change problem is non-linear, since the interface moves continuously during charging or discharging and its position is not known in advance. Various analytical, data-based, and numerical studies and experiments have addressed the PCM problem, with the energy equation formulated in different ways. The most widely used approach in the literature is the enthalpy method, because of its reliability and the simplicity of implementing the numerical calculation without the need to satisfy conditions at the phase change front [10]. The enthalpy method is based on a system of differential equations governing the PCM behavior, where temperature and enthalpy are the two variables to be computed. Few procedures exist to model the temperature variation in the domain of interest. Numerical techniques for such problems fall into two general classes: the temperature-based model and the enthalpy model. In the present work, the temperature-based model is used. The phase change interface is either captured on a grid at each fixed time step, in which case non-uniform grid spacing is developed, or captured on
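The appeal of the enthalpy method is that a single enthalpy field is advanced in time and the temperature is recovered through the latent-heat plateau, so no explicit condition is imposed at the phase change front. The explicit one-dimensional sketch below is illustrative only; all material values and the boundary treatment are assumptions, not the paper's model:

```python
import numpy as np

# Illustrative PCM properties (assumed, not from the paper)
rho, c, k = 800.0, 2000.0, 0.2      # density (kg/m^3), spec. heat (J/kg K), conductivity (W/m K)
L, Tm = 2.0e5, 300.0                # latent heat (J/kg), melting temperature (K)
N, dx, dt = 50, 1e-3, 0.05          # grid nodes, spacing (m), time step (s)

def temperature(H):
    """Invert specific enthalpy to temperature across the latent plateau at Tm."""
    return np.where(H < c * Tm, H / c,
                    np.where(H > c * Tm + L, (H - L) / c, Tm))

H = np.full(N, c * 280.0)           # start fully solid at 280 K
H[0] = c * 320.0 + L                # hot, fully melted boundary node (charging)

for _ in range(2000):               # explicit update of dH/dt = (k/rho) d2T/dx2
    T = temperature(H)
    H[1:-1] += dt * (k / rho) * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2

T_final = temperature(H)
```

Nodes crossing the plateau sit at Tm while they absorb the latent heat, which is exactly the interface behavior that a temperature-only formulation would have to track explicitly.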


a uniform grid, in which case a non-uniform time step is used. In 2013, Wang et al. examined the heat charging and discharging performance of a shell-and-tube PCTES unit [10]. They concluded that the discharging process needs a shorter time than the charging process because it has a larger heat transfer rate; hence, it reaches a steady state quickly under the same inlet temperature and mass flow rate of the heat transfer fluid (HTF). In the present work, charging and discharging processes are investigated in a one-dimensional, orthogonal collocation model of a packed bed heat store. The objective is to establish an approximate numerical solution of the one-dimensional heat transfer problem for a packed bed heat store being charged and discharged isothermally by the HTF. A computer program can then be built to predict the solid-liquid interface location and the temperature profiles of the PCM and HTF as functions of time during charging and discharging [1]. The plan is to design a PCM thermal storage system and reuse the waste heat energy of the coolant system. Implementing such a TES system on a 1.6-L diesel engine resulted in roughly a 40% reduction in the coolant warm-up time to 95 °C, and a 2.71% decrease in fuel consumption was reported over the NEDC [11]. Vittorini et al. [12] studied thermal management of engine oil using exhaust gas waste heat recovery to diminish engine friction and hence fuel consumption; fuel consumption, CO, and HC emissions were reduced by 3.6%, 7.2%, and 3.5%, respectively [13].

2.1 The Selection Standards for PCM Are as Per the Following

The selection of phase change materials for TES systems depends on many factors:
1. Material properties.
2. The storage capacity of the system.
3. The working temperature.
4. The performance of the HTFs and the design considerations of the heat exchangers [11].

The material should meet the following requirements:
1. Thermal performance requirements.
2. Physical performance requirements.
3. Kinetic performance requirements.
4. Economic performance requirements.


2.2 An Improved One-Dimensional Model Is Introduced

A model is introduced to determine numerically the temperature distribution of the PCM and the HTF as well as the location of the solid-liquid interface during charging and discharging. The following assumptions are adopted in the model. The HTF and PCM are treated as two different domains with a separate equation for each. The thermophysical properties of the PCM differ between the solid and liquid phases, and the variation of properties with temperature during the phase change is assumed linear.
1. The inlet temperature of the HTF is constant during the whole charging or discharging process, and the initial temperatures of the HTF and the PCM are the same.
2. The resistance offered by the wall of the round casing is neglected.
3. Radiant heat transfer is ignored.
4. The tank is perfectly insulated (Fig. 1); heat loss from the tank surface to the surroundings is neglected [1].
The heat rejection rate of the PCM is calculated with the help of Ansys software, which is used to understand the behavior of the PCM over time. The model is prepared for a pipe of length 1 m, with the heat transfer fluid inside the pipe and the PCM in its outer part. With this, we can calculate the heat lost by the HTF to the PCM and vice versa, analyzing the behavior of the PCM with the help of Fluent (Figs. 2 and 3).

Fig. 1 Isometric view of the pipe consisting of PCM and HTF



Fig. 2 Notation of PCM and HTF in model

Fig. 3 Liquid fraction of PCM present in the pipe

3 Objective

We now consider the ambient conditions, i.e., how all the fuel energy is used in the engine and in what proportions, as shown in Fig. 4 [14]. The higher oil viscosity and diminished combustion efficiency present during vehicle cold start make it the principal target for thermal improvement. Several non-PCM studies have demonstrated the thermal benefit of using exhaust heat recovery (EHR) to bring the engine up to its efficient operating temperature more rapidly. One


Fig. 4 Energy generation in one cycle of the power stroke

investigation by Hyundai Motor Corporation indicated that a 2.5% mileage improvement could be achieved by preheating both the coolant and gearbox oil in a commercial vehicle, with higher savings expected in heavy diesel and hybrid electric vehicle (HEV) systems [15]. Our objective is to reduce these losses with the help of the coolant and a PCM. The causes of failure to start the engine are as follows:
1. The piston wall is too cold.
2. The lubricant is too viscous to effectively lubricate the engine.
3. The catalytic converter is too cold to control emissions.
The engine cooling system is designed in MATLAB and Simulink; the results obtained from it are the engine cooling rate and the engine wall temperature (Figs. 5 and 6).

Fig. 5 Engine cooling system in MATLAB and Simulink


Fig. 6 Speed and torque characteristics of the engine

We worked on the model in MATLAB and Simulink; by changing the engine speed and the engine temperature, we obtain the engine wall temperature with respect to time, and the aim is to minimize the time needed to raise the engine wall temperature by using a PCM in the cooling system (Fig. 7).
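The warm-up trend that such a model produces can be approximated with a lumped-capacitance energy balance, dT/dt = (Q_in − UA·(T − T_amb))/C. The sketch below is illustrative only; every parameter value (heat input, loss coefficient, block capacity) is an assumption, not taken from the paper's Simulink model:

```python
def warmup_time(T0, T_target=293.0, Q_in=9000.0, UA=15.0, T_amb=253.0,
                C=60.0 * 460.0, dt=1.0):
    """Seconds for the engine wall to reach T_target from T0 (temperatures in K).
    Q_in: heat input to the block (W); UA: ambient loss coefficient (W/K);
    C: lumped thermal capacity of the block (J/K). Forward-Euler integration."""
    T, t = T0, 0.0
    while T < T_target:
        T += dt * (Q_in - UA * (T - T_amb)) / C   # energy balance on the block
        t += dt
    return t

# A PCM pre-heat raises the starting temperature and shortens warm-up:
t_cold = warmup_time(253.0)   # cold start at -20 C
t_pcm = warmup_time(273.0)    # PCM-preheated start at 0 C
```

Under these assumed numbers the preheated start reaches the target temperature noticeably sooner, which is the effect the PCM in the cooling system is intended to exploit.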

4 Workflow Chart

The equations were solved numerically using the orthogonal collocation technique. The mathematical model was developed, and the numerical solution procedure was implemented using the high-level computing language and interactive environment of MATLAB R2018b. The program flowchart of the model is given in Fig. 8 [1].


Fig. 7 Temperature rise of the engine wall and coolant mass flow

5 Codes

MATLAB code for calculating the mass of PCM required to bring the engine block to 20°:

clc
m = 60;              % mass of engine block (kg)
c = 460;             % specific heat of cast iron (J/kg-K)
dt = 40;             % temperature change in the engine block (K)
Q = (m*c*dt)/1000;   % heat absorbed by the engine block (kJ)
disp('Amount of heat required to heat the engine block:');
disp(Q);
disp('kJ');
% mass of paraffin wax required to supply the above heat
cp = 2.38;           % specific heat of paraffin wax (kJ/kg-K)
dt = 50;             % temperature swing of the wax (K)
m = Q/(cp*dt);       % mass of the paraffin wax required
disp('Weight of PCM required:');
disp(m);
disp('kg');

Next code:

clc
m = 54.4;            % mass of engine block (kg)
c = 460;             % specific heat of cast iron (J/kg-K)
dt = 100;            % temperature change in the engine block (K)
Q = (m*c*dt)/1000;   % heat absorbed by the engine block (kJ)
disp('Amount of heat required to heat the engine block:');
disp(Q);
disp('kJ');
% heat generation in one minute of combustion
mf = 3.5/60;         % mass of fuel in one minute of ideal combustion (kg/min)
cf = 45200;          % calorific value of diesel fuel, roughly 45200 kJ/kg
Qe = mf*cf;
disp('Heat generation in one minute of combustion:');
disp(Qe);
disp('kJ/min');
% time for engine metal warm-up from -20 to 20 degrees in the ideal case
f = 0.32;            % fraction of total heat going into engine metal warm-up
h = Qe*f;            % heat generated in one minute for engine warm-up
disp('Ideal rpm is 600');
disp('Metal warm-up heat per minute:');
disp(h);
disp('kJ/min');
t = Q/h;             % time taken to generate the heat required to reach 20 degrees
disp('Time taken for generation of heat required to go up to 20 degrees:');
disp(t);
disp('min');
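As a quick cross-check of the arithmetic in the MATLAB listings (same numbers, Python syntax; the variable names are ours):

```python
# Heat to warm a 60 kg cast-iron block by 40 K, and the paraffin mass to supply it
Q = 60 * 460 * 40 / 1000      # heat absorbed by the block (kJ)
m_wax = Q / (2.38 * 50)       # paraffin mass over an assumed 50 K swing (kg)

# Second listing: 54.4 kg block warmed by 100 K, heated by combustion waste heat
Q2 = 54.4 * 460 * 100 / 1000  # heat required (kJ)
Qe = (3.5 / 60) * 45200       # combustion heat release per minute (kJ/min)
h = 0.32 * Qe                 # fraction reaching the engine metal (kJ/min)
t = Q2 / h                    # warm-up time (min)
```

The first listing yields about 1104 kJ and roughly 9.3 kg of wax; the second gives a warm-up time on the order of three minutes.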

6 Conclusion

This system uses the waste heat of the engine, taken up through the coolant and stored in the PCM by its phase transition, for pre-starting diesel engines. The heat flow through the engine is managed efficiently; a mathematical model in MATLAB was used to calculate the warm-up time and the heat rejection rate into the PCM. Paraffin wax is taken for the above calculation, and we conclude that using a PCM in the vehicle coolant system increases engine efficiency and reduces fuel consumption [16]. Finally, in a military heavy-vehicle application, the oil pan and filter on a US Army M925 5-ton truck were surrounded by a low-temperature PCM (hexadecane, Tm ≈ 18 °C), keeping the oil warm for more than 12 h after shutdown. This permitted faster cold starts, improved engine oiling, and around 2–6 times less cold-start cranking energy from the battery [17]. As a result, the engine temperature increases from 15 to 20 °C.

7 Future Work

PCM heat storage can be applied to furnace and boiler walls and to air-conditioning devices; to the cooling or heating system of passenger cars; and to augmenting battery cooling systems. The presence of a PCM, typically paraffin waxes embedded in a metal or carbon foam, was shown through both modeling and experiment to diminish peak temperature rise and improve thermal uniformity under high-rate discharge conditions [12, 18].


References

1. K.-F. Hoh, S. Syam, E.-T. Phuah, T.S.-Y. Choong, Computer modelling of phase change material using the orthogonal collocation method
2. F. Will, A. Boretti, A new method to warm up lubricating oil to improve the fuel efficiency during cold start. SAE Int. J. Engines 4(1), 175–187 (2011)
3. C. Samhaber, A. Wimmer, E. Loibner, Modeling of engine warm-up with integration of vehicle and engine cycle simulation. SAE Technical Paper 2001-01-1697 (2001)
4. T.K. Kelly et al., The US combat and tactical wheeled vehicle fleets: issues and suggestions for Congress. RAND National Defense Research Institute, Santa Monica, CA (2011)
5. United States Department of Energy, Office of Energy Efficiency and Renewable Energy, FreedomCAR and fuels partnership: electrical and electronics technical team roadmap. Technical report (December 2010)
6. K.T. Chau, C.C. Chan, Emerging energy-efficient technologies for hybrid electric vehicles. Proc. IEEE 95(4), 821–835 (2007)
7. G. Khalil et al., Power supply and integration in future combat vehicles. TACOM Research Development and Engineering Center, Warren, MI (2004)
8. Y. Deng et al., Effects of cold start control strategy on cold start performance of the diesel engine based on a comprehensive preheat diesel engine model. Appl. Energy 210, 279–287 (2018)
9. J. Villadsen, M.L. Michelsen, Solution of Differential Equation Models by Polynomial Approximation (Prentice-Hall, Englewood Cliffs, 1978)
10. W.-W. Wang et al., Numerical study of the heat charging and discharging characteristics of a shell-and-tube phase change heat storage unit. Appl. Therm. Eng. 58(1–2), 542–553 (2013)
11. M.R. Hamedi et al., Thermal energy storage system for efficient diesel exhaust after-treatment at low temperatures. Appl. Energy 235, 874–887 (2019)
12. R. Sabbah et al., Active (air-cooled) vs. passive (phase change material) thermal management of high-power lithium-ion packs: limitation of temperature rise and uniformity of temperature distribution. J. Power Sources 182(2), 630–638 (2008)
13. D. Vittorini, D. Di Battista, R. Cipollone, Engine oil warm-up through heat recovery on exhaust gases: emissions reduction assessment during homologation cycles. Therm. Sci. Eng. Prog. 5, 412–421 (2018)
14. J.D. Trapy, P. Damiral, An investigation of lubricating system warm-up for the improvement of cold start efficiency and emissions of SI automotive engines. SAE Trans., 1635–1645 (1990)
15. J. Lee et al., Development of effective exhaust gas heat recovery system for a hybrid electric vehicle. SAE Technical Paper 2011-01-1171 (2011)
16. A.A. Malozemov, V.N. Bondar, S.I. Cherepanov, Experimental research of forced diesel engine with oil temperature control system in cold ambient conditions. Procedia Eng. 150, 1143–1148 (2016)
17. S.D. Stouffer et al., Diesel engine cold start improvement using thermal management techniques. Report UDRI-TR-2000-00131, University of Dayton Research Institute, OH (2000)
18. S.M. Lukic et al., On the suitability of a new high-power lithium ion battery for hybrid electric vehicle applications. SAE Technical Paper 2003-01-2289 (2003)

Mechanical Properties of Sisal Fibre and Human Hair Reinforced Epoxy Resin Hybrid Polymer Composite

S. K. Dhakad and Anas Ahmed Ansari

Abstract This review paper presents the fabrication of hybrid composites with natural fibres and discusses the latest trends in various practical applications in different fields, such as construction and the automobile industry. In the modern era, natural fibre reinforced composites have attracted attention because of their light weight, low cost, non-toxicity, combustibility, non-abrasiveness, and ecological properties compared with synthetic fibres. In the present work, materials such as human hair and sisal fibres are used for making a hybrid composite. The human hair was collected from barber shops. In India, three to four tonnes of human hair fibre are wasted daily, so they pose an environmental challenge. In order to find commercial applications for human hair and sisal fibre, they are used as reinforcing material for making composites. This review covers the fabrication of sisal fibre and human hair with epoxy resin for mechanical applications, especially vehicle bodies, shock-absorbing machine parts, household furniture, etc. The current work focuses on the processing of sisal fibre and human hair from raw material to application.

Keywords Natural fibres · Sisal fibres · Human hair · Epoxy resin polymer composite

1 Introduction A composite is a structural material comprising two or more constituents that are combined at a macroscopic level, are insoluble in each other and differ in form or chemical composition. In the open literature, two constituents of a composite are identified: the reinforcing constituent and the matrix constituent. The reinforcing constituent takes the form of fibres, particles or flakes, while the matrix constituent is generally continuous, such as a polymer, metal, ceramic, S. K. Dhakad (B) · A. A. Ansari Department of Mechanical Engineering, Samrat Ashok Technological Institute, Vidisha, Madhya Pradesh, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_51


carbon, etc. In general, if the reinforcing constituent comprises more than one material, the result is said to be a hybrid composite. The properties of a hybrid composite depend largely on the properties, geometry and distribution of its constituents. The present trend in new material development is to manufacture natural fibre reinforced composites as replacements and substitutes for synthetic reinforced composite materials. Natural fibres are classified by origin as plant, animal or mineral fibres. Plant fibres are composed of cellulose, whereas animal fibres consist of proteins. Plant fibres include seed hairs, such as cotton; stem (or bast) fibres, such as flax and hemp; leaf fibres, such as sisal; and husk fibres, such as coconut. Animal fibres include silk, human/animal hair and wool. Mineral fibres include asbestos. Owing to the growing greenhouse effect and environmental pollution, natural fibre reinforced polymer composites have attracted increasing interest from researchers and engineers. The major benefits of natural fibres are their availability, renewability, biodegradability, environmental friendliness, low density, high specific properties, good thermal properties, enhanced energy recovery and low energy consumption. However, natural fibres also have demerits: their mechanical properties are lower than those of glass fibre, and their tendency to absorb water is higher. As a solution to these demerits, hybrid composites are used to enhance the mechanical properties of natural fibres. Today, various manufacturing methods exist for producing hybrid composites from natural fibres that are safe for human health and the environment. In this paper, we study the mechanical properties of a sisal fibre and human hair reinforced epoxy resin hybrid polymer composite, namely tensile strength, hardness, flexibility and impact strength.
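The statement that composite properties depend on the constituents and their distribution is often quantified, to first order, by the classical rule of mixtures. The sketch below is illustrative only and is not part of this paper's methodology; the fibre and matrix moduli are assumed placeholder values, not measurements from this work.

```python
# Illustrative sketch: rule-of-mixtures estimate of a composite's
# longitudinal elastic modulus. The moduli below are assumed
# placeholder figures, not measurements from this paper.

def rule_of_mixtures(e_fibre, e_matrix, v_fibre):
    """Iso-strain (upper-bound) estimate: E_c = Vf*Ef + (1 - Vf)*Em."""
    if not 0.0 <= v_fibre <= 1.0:
        raise ValueError("fibre volume fraction must lie in [0, 1]")
    return v_fibre * e_fibre + (1.0 - v_fibre) * e_matrix

# Assumed moduli (GPa): natural fibre ~ 15, epoxy matrix ~ 3
e_composite = rule_of_mixtures(15.0, 3.0, 0.3)
print(f"Estimated composite modulus: {e_composite:.1f} GPa")
```

This first-order estimate illustrates why fibre loading and distribution dominate the stiffness of the finished composite.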

1.1 Sisal Plant Description The botanical name of sisal is Agave sisalana. Sisal is a species of agave native to southern Mexico. It is available in many countries of the world in different species, but it is widely cultivated in African and Asian countries. The plant looks like a giant pineapple and consists of a rosette of fleshy leaves growing from a central bud. Young plants may have minute spines along the edge of the leaf, but these are soon lost, with only the sharp tip remaining. The leaves are heavy, weighing 500–700 g each. They are dagger shaped and, when mature, 1–1.5 m long, 10 cm wide and about 10 mm thick at the centre. The fibres are attached to pulpy material inside the surface of the leaves; a single leaf holds 20–50 fibres fixed in this pulp.


Sisal plant (available at Jawahar Bal Udyan, Bhopal)

Approximately 90% moisture is present in the leaves, but the fleshy pulp is very firm and the outermost layer of the leaf is rigid. The plant lives 7–12 years, then produces a flower stalk 4–6 m tall and dies. Sisal fibre is softer than other natural plant fibres and is creamy white in colour, which makes it easy to dye. It is generally used for making fishing nets, doormats, carpets and binders, and the fibre waste is used for making paper and paperboard. It can be cropped easily almost anywhere. About 275 species of sisal plant are distributed across the tropical region. Sisal accounts for almost half of the total production of all textile fibres. World production is approximately 0.6 million tons, of which only 3000 tons are produced in India; about 50,000 kg of sisal fibre is produced in Kerala every year.

1.2 Human Hair Description

Human hair


Human hair comprises numerous components: 75–95% protein by weight and a water content of up to 32%, with the rest occupied by lipid pigments and other compounds. Keratin, a protein, is the main constituent of human hair and is responsible for roughly 80% of its structure. Structural analysis shows that hair consists of three layers: cuticle, cortex and medulla. The surface properties of hair depend mainly on the cuticle, which forms the outermost layer of cross-linked cystine. The medulla contains highly concentrated lipid and less cysteine, and forms the cylindrical innermost hair thread. Utilizing hair as a reinforcement material is an emerging endeavour, as it makes use of a material that is available in huge quantities. Hair can be used as a reinforcement because it has high tensile strength, a high friction coefficient and good elastic recovery. Notably, a single strand of hair can withstand a load of 100–140 g. Hair is elastic in nature and capable of regaining its original shape when the load is removed. Investigations on the mechanical properties of fibre composites indicate that hybrid composites exhibit high strength. The hair fibre structure consists of several layers, from the outside inward:
Cuticle: several layers of flat, thin cells laid out overlapping one another like roof shingles.
Cortex: keratin bundles in roughly rod-like cell structures.
Medulla: an unsystematic, open area at the fibre's centre.

1.3 Sisal Fibre Extraction Sisal fibres are extracted by a mechanical decortication process, in which the fibres are separated from the leaves by removing the pulpy material. Two pairs of metal rollers carrying mounted blades are used: the leaves are passed between the rollers, the blades crush away the pulpy material, and only the fibres remain. For further treatment, the fibres are dipped in NaOH solution for 48 h and then washed with clean water, which removes residual pulp and dirt and improves the adhesion properties of the fibre. The fibre is then dried in sunlight.


1.4 Sisal Fibre Processing Steps
Drying: The fibres are spread in sunlight for 6–8 h, after decortication and washing with fresh water. For better quality, the fibres are spread over three parallel wires, with the central wire slightly raised. Fibre quality decreases if drying continues for too long.
Grading: The fibres are separated according to length, colour and the presence of impurities; length plays the most important role in grading.
Brushing: Brushing separates the fibres from one another, removes short fibres (known as tow) and removes unwanted dry material attached to the fibre. Brushing also improves the adhesion properties of the fibre.
Baling: The sisal fibre is packed under great pressure to achieve the lowest possible volume for packaging, marketing, easy handling and reduced freight charges.
Storage: The storage space should be large enough to hold finished fibre bales, fibre awaiting brushing, already-brushed fibre and bales awaiting transport. Storage facilities must be properly ventilated, fibre should not be in direct contact with walls or the bare floor, and contamination must be avoided. Bales should not be stored for very long periods.

1.4.1 Literature Survey

Girisha et al. [1] studied the mechanical properties and water absorption behaviour of sisal and coconut coir fibre reinforced epoxy composites. They highlighted that hybridizing sisal fibre with coconut coir in an epoxy composite improved the mechanical properties and reduced water absorption. Madhukiran et al. [2] investigated mechanical properties such as tensile and flexural strength of hybrid banana/pineapple fibre epoxy composites prepared with different fibre weight ratios. The hybridization of these natural fibres provided a considerable improvement in flexural strength compared with individual reinforcement, and the work also demonstrated the potential of hybrid natural fibre composite materials for use in a number of consumer goods. Zhong et al. [3] used alkali-treated sisal fibres as reinforcement and urea-formaldehyde as the matrix phase; specimens with different fibre weight ratios were prepared and subjected to impact, tensile and water absorption tests. Joseph et al. [4] reviewed sisal reinforced polymer composites and suggested that, owing to their low density and high specific properties, sisal composites can be used in the automotive industry and can become a good alternative to wood in building construction. Joshi et al. [5] noted that natural fibres are emerging as low-cost, lightweight and apparently environmentally superior alternatives to glass fibres in composites: natural fibre production has a lower environmental impact than glass fibre production, and, because of their light weight, natural fibre composites improve fuel efficiency and reduce emissions, especially in automotive applications. Chandramohan et al. [6] observed that natural fibres offer important advantages such as low density, appropriate stiffness and mechanical properties, and high disposability and renewability; moreover, they are recyclable and biodegradable. Vengatesan et al. [7] studied the mechanical properties and structural behaviour of human hair fibre reinforced epoxy polymer composites made with different fibre weight ratios, highlighting that mechanical properties such as tensile strength, flexural strength and impact strength decreased as fibre loading increased. Jayachandran et al. [8] noted that natural fibres have earned their reputation in the composites industry owing to ease of availability, biodegradability and low toxicity. They studied the mechanical properties of hair and coir reinforced epoxy resin, conducting tensile, flexural and impact tests on specimens with different combinations of hair and coir. The results indicate that the tensile strength of the composite increased due to the hair reinforcement.

1.4.2 Advantages and Applications of Human Hair

The unique advantages of human hair are:
• Unique chemical composition
• Very slow rate of biodegradation
• High tensile strength
• Excellent thermal insulation
• A flaky surface and a distinctive interaction with water and oils
• Good elastic recovery

Human hair is used for reinforcing clay constructions because of its high tensile strength and friction coefficient. Hair-reinforced cement and fly ash concrete has high strength and is useful in high-pressure-bearing structures such as roof sheets, petroleum wells and bridges.

1.4.3 Advantages and Applications of Sisal Fibre

• Sisal fibres have good tensile strength.
• Sisal fibres are very resistant to heat and moisture.
• Traditional uses: thread, ropes, string and yarn, knitted into mats, carpets and various handicrafts.
• Sisal has superior potential as a reinforcement in polymer (thermoplastic, thermoset and rubber) composites owing to its low density and good specific properties. The use of sisal composites in automotive parts and other furniture is gaining popularity, and sisal is also regarded as an excellent material for dashboards.
• Sisal fibre is 100% biodegradable over its lifetime, and sisal rope is recycled into paper.
• Sisal can also be used to add strength to cement mixtures for the development of low-cost housing, and as a reinforcement in wall plaster, roofing and insulation.
• Sisal waste can be used for making biogas, pharmaceutical ingredients and building material. The biomass left after the fibres have been removed represents as much as 97% of the plant, and most of it is currently flushed away as waste; this waste can instead be fed to biogas plants and used for making manure.
High-grade sisal fibres are long and used in carpets. Medium-grade fibres are used in ropes, twine and cordage. Short-grade fibres are used in the paper industry for strengthening recycled paper.

2 Conclusion This paper has discussed the processing steps, properties, advantages and various applications of sisal fibre and human hair.


This review concludes that sisal fibre and human hair reinforced epoxy composites form one of the emerging areas in materials science, attracting attention for use in various applications such as aircraft, automotive, construction and interior house decoration.

References 1. C. Girisha, G. Sanjeevamurthy, G.S. Srinivas, Int. J. Eng. Innov. Technol. 2, 166 (2012) 2. J. Madhukiran et al., Fabrication and testing of natural fibre reinforced hybrid composites banana/pineapple. IJMER 3 (2013) 3. J.B. Zhong et al., Mechanical properties of sisal fibre reinforced urea-formaldehyde resin composites. Express Polym. Lett. (2007) 4. K. Joseph et al., A review on sisal fibre reinforced polymer composites. R. Bras. Eng. Agríc. Ambiental 3 (1999) 5. S.V. Joshi, L.T. Drzal, A.K. Mohanty, S. Arora, Compos. Part A 35, 371 (2004) 6. D. Chandramohan, K. Marimuthu, Characterization of natural fibres and their application in bone grafting substitutes. Acta Bioeng. Biomech. 13(1) (2011) 7. K.J. Vengatesan, T. Prasanth, Study on mechanical properties and structural analysis of human hair fibre reinforced epoxy polymer. Int. J. Adv. Res. Basic Eng. Sci. Technol. (IJARBEST) 3(24) (2017) 8. Jayachandran et al., Int. J. ChemTech Res. 9 (2016)

Simulation of Swirl Cup as Vane Angle with 58° Amit Kumar, S. K. Dhakad, and Anurag Kulshreshtha

Abstract Cold-flow analysis is used to investigate the flow, mixing and vaporization properties of a swirl cup at a mass flow rate of 0.3 m/s; the equivalence ratio for the corresponding hot-flow analysis must lie in the range 0.6–0.78, with swirl numbers of 1.6, 1.65 and 1.8 for vane angles of 58°, 57° and 61° respectively, and a hub angle of zero. The recirculating flow creates a highly turbulent zone in which the introduced, highly atomized fuel is mixed. A swirl number of 0.6 is known to be the minimum for a well-mixed flow zone with good mixing and atomization properties, so that the air-fuel ratio yields better efficiency. The aim here is to investigate the flow numerically for a 58° vane angle swirler, together with the properties of its recirculation and turbulence zones. Keywords Swirler motion · Flow in swirler · Cold flow in swirler · Reverse coaxial flow

1 Introduction The swirler has six vane blades with an upper casing; the operating conditions are T i = 298 K, Pi = 0.6 bar and Po = 0.3 bar. The fuel is chosen to match the experimental setup, namely white kerosene [1]. Based on the empirical method of [1], the required swirler dimensions were obtained, and a computational flow simulation was then carried out to verify the choice of vane angle. To simplify the result, we go A. Kumar (B) · A. Kulshreshtha Scope College of Engineering, Bhopal, Madhya Pradesh, India e-mail: [email protected] A. Kulshreshtha e-mail: [email protected] S. K. Dhakad Samrat Ashok Technological Institute, Vidisha, Madhya Pradesh, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_52


for computational analysis of different vane angles with their respective swirl numbers (obtained analytically), in order to achieve better efficiency and a better flow regime [2].

2 Dimensions of the Swirler
D (DSW) (outer diameter) = 48 mm
d (DHUB) (inner diameter) = 28 mm
Thickness of vane = 2 mm
Height of vane = 5 mm
Length of vane = 25 mm
Number of vanes = 6
Vane angle = 58°
LSWIR (length of swirler) = 25 mm
LDOME (length of dome) = 78 mm
DDOME (diameter of dome) = 62 mm

3 Parameters for Flow Investigation
Tinlet (temperature at inlet) = Toutlet (temperature at outlet) = constant = 298 K
Pinlet (pressure at inlet) = 0.6 bar
Poutlet (pressure at outlet) = 0.3 bar
ṁ (mass flow rate at inlet/outlet) = 0.3 m/s
φ (equivalence ratio) = 0.6–0.78
Fuel = CH4 (methane) or white kerosene (C12H23)
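The equivalence ratio φ relates the actual to the stoichiometric fuel-air ratio, φ = (F/A)actual/(F/A)stoich. As an illustration only: taking a standard textbook stoichiometric air-fuel mass ratio for methane of about 17.2 (a value assumed here, not given in the paper), the air-fuel ratios implied by the stated φ range can be sketched:

```python
# Illustrative only: AFR_STOICH_CH4 is an assumed textbook value for
# methane, not a figure taken from this paper.
AFR_STOICH_CH4 = 17.2  # approximate stoichiometric air-fuel mass ratio for CH4

def actual_afr(phi, afr_stoich=AFR_STOICH_CH4):
    """phi = (F/A)_actual / (F/A)_stoich, hence AFR_actual = AFR_stoich / phi."""
    if phi <= 0:
        raise ValueError("equivalence ratio must be positive")
    return afr_stoich / phi

# The paper's lean range phi = 0.6-0.78 implies leaner (higher) AFRs
for phi in (0.6, 0.78):
    print(f"phi = {phi:.2f} -> AFR approx {actual_afr(phi):.1f}")
```

A lean equivalence ratio below 1 therefore corresponds to excess air, consistent with the cold-flow mixing study described here.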

3.1 Evaluation of the Swirl Number [1]

$$S = \frac{2}{3}\left[\frac{1 - (D_{HUB}/D_{SWIR})^{3}}{1 - (D_{HUB}/D_{SWIR})^{2}}\right]\tan\theta \tag{3.1}$$


where S = swirl number, θ = outlet angle of the vane, DHUB = hub diameter measured from the centre line of the swirler, and DSWIR = outer diameter of the swirler. The swirl numbers for different vane angles are: S40 = 0.6, S50 = 0.9, S60 = 1.3, S62 = 1.4, S68 = 1.9. For the present swirler, d/D = 0.58 and S58 = 1.28.
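Equation (3.1) can be checked numerically. A minimal sketch (Python is used here purely for illustration; the dimensions are those listed in Sect. 2):

```python
import math

def swirl_number(d_hub, d_swir, theta_deg):
    """Axial-swirler swirl number per Eq. (3.1):
    S = (2/3) * [1 - (d_hub/d_swir)**3] / [1 - (d_hub/d_swir)**2] * tan(theta)."""
    r = d_hub / d_swir
    return (2.0 / 3.0) * (1.0 - r ** 3) / (1.0 - r ** 2) * math.tan(math.radians(theta_deg))

# Dimensions from Sect. 2: DHUB = 28 mm, DSWIR = 48 mm, vane angle 58 deg
s58 = swirl_number(28.0, 48.0, 58.0)
print(f"S58 = {s58:.2f}")  # close to the value of 1.28 quoted in the text
```

The computed value (about 1.30) agrees with the quoted S58 = 1.28 to within the rounding of the diameter ratio d/D = 0.58.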

4 Geometric View of the Model at 58° The internal flow properties of the six-vane swirler cup with a 58° vane angle are investigated; the isometric view of the geometry in Ansys (Fluent) is shown in Fig. 1.

Fig. 1 Isometric view of the 58° vane angle swirler cup

Table 1 Meshing statistics
Elements: 41,215
Nodes: 77,123

Fig. 2 Meshing model of cup

4.1 Meshing of the Model The cup model with six vanes is meshed for internal flow analysis in Ansys (Fluent) using tetrahedral elements with local mesh refinement, in order to solve for the flow properties in the cup. The meshing details are given in Table 1 and Fig. 2.

5 Result and Discussion After preprocessing, the cup model is passed to the solver and the boundary conditions are set according to the flow conditions; the resulting turbulence contour used to verify the flow properties in Ansys Fluent is shown in Fig. 3. The contour from the hub (where the vane angle is zero) to the tip (where the vane curves to 58°) shows that stronger turbulence is created at the tip, promoting mixing, and the flow is diverted into a recirculation zone; away from the tip the turbulence moderates and the flow stabilizes, so the swirler geometry has the desired effect. The contour shows turbulence increasing as expected from experiment, which leads to better


Fig. 3 Contour for turbulence

atomization, so that the air-fuel ratio reaches its best value and the subsequent hot-flow process will satisfy this flow regime accurately. The iteration graph from the solver run shows that, over certain intervals, the flow achieves proper mixing properties, with pressure and temperature consistent with the equivalence ratio of the chemically correct air-fuel mixture. Analysing the contour, the cold-flow results show that turbulence and the recirculation zone increase gradually at the tip of the exhaust and then stabilize, so the result for the 58° vane cup is fully satisfactory and stands verified (Fig. 4).

6 Conclusion From the results it can be concluded that, with regard to flow mixing, a turbulent zone is created that stabilizes the flow, consistent with the reference minimum swirl number of 0.6 [1]; the flow therefore has good mixing properties, a good recirculation zone and good flame stabilization quality. For a 58° vane angle, the swirl number obtained from the swirl number formula is 1.28, giving good mixing flow properties and, after firing, better flame stability and a better air-fuel mixture, so the required equivalence ratio will be achieved. The flow from hub to tip, generated by the curved vane, creates recirculation with moderate turbulence, so the flow should stabilize as expected from the reference hot-flow analysis.


Fig. 4 Result graph

References 1. A.H. Lefebvre, Gas Turbine Combustion 2. M. Miltner, C. Jordan, M. Harasek, CFD simulation of straight and slightly swirling turbulent free jets using different RANS-turbulence models. ATE, 1–10 (2015) 3. M.T. Parra-Santos, V. Mendoza-Garcia, Influence of flow swirling on the aero thermodynamic behavior of flames. CESW 51(4), 424–430 (2015) 4. N.H. Gor, M.J. Pandya, CFD analysis of swirl can combustion chamber. IJIRST 1(2) (2014) 5. A. Radwan, A. Kamal, RANS modeling of unconfined swirl flow. JST 143 (2014) 6. M. Abhilash, Study of swirl and tumble motion using CFD. IJTARME 2(1) (2013) 7. A.B. Anusha Rammohan, V. Natrajan, Flow swirl and flow profile measurement in multiphase flow. IJTE 2(1) (2012) 8. S. Mahalingam, M.K. Koyithitta Meethal, A. Banerjee, W. Basu, H.K. Pillai, Electrical network representation of a distributed system. US Patent 8264246, General Electric Company, 2012

Data Acquisition Technique for Temperature Measurement Through DHT11 Sensor Brajesh Vallabh, Aquib Khan, Durgesh Nandan, and Manish Choubisa

Abstract Weather monitoring requires adequate collection of data such as air pressure, humidity and temperature at a specific time and location, and it is important to gather this information as quickly as possible in a form applicable to other areas. Nevertheless, terrestrial features such as natural contours with hills, mountains, rivers and oceans can hamper data collection. Consequently, there is a need for a data collection system that overcomes this challenge without being affected by terrestrial features. This work proposes such a solution: a data acquisition system that uses a DHT11 sensor to sense humidity and temperature and an Arduino microcontroller as the data processing unit. A server displays the collected information as a graph or through a Graphical User Interface (GUI). Furthermore, this paper describes the merits of smart sensors, compared with other sensors, in overcoming the issues that arise during humidity/temperature data collection. The approach is also appropriate for designing other embedded systems that monitor ecological constraints. The results show the potential of the developed system for data acquisition. Keywords Sensor · DHT11 · Humidity · Temperature · Weather · Data acquisition

B. Vallabh (B) · A. Khan · D. Nandan · M. Choubisa Department of Computer Science & Engineering, Arya Institute of Engineering & Technology Jaipur, Jaipur, India e-mail: [email protected] A. Khan e-mail: [email protected] D. Nandan e-mail: [email protected] M. Choubisa e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_53


1 Introduction By the end of 2020, it is estimated that 50 billion connections will be required for the interaction between embedded devices and the Internet [1, 2]. Microcontrollers can keep up with the ever-growing pace of users' interaction with the Internet thanks to their easy-to-use design, replacing older models built from complicated electronic circuits. An Arduino acts as a tiny computer built around a microcontroller board. It provides a platform for developing and forming interlinked objects with the help of programming software: the Arduino Integrated Development Environment (IDE) provides the space for writing code in languages such as C, C++ and C#. The Arduino can understand and respond accordingly, with advanced features such as flexibility, ease of use and low cost, which make the Arduino microcontroller, with its software and hardware, broadly usable. It also offers freedom of use: both the software and the hardware of the Arduino are open source, meaning that an idea produced by others can be used and upgraded in one's own work without authorization from anyone [3], and any user can apply such ideas to complete their own task. Arduino boards can be used by anyone without previous programming experience or prior knowledge of electronics; users can build their own interactive objects, gathering data from other people's actions, and can sense and control the environment automatically. Because Arduino boards are low cost and widely available, they are easily accessible to researchers, teachers, hobbyists and students developing novel innovations in electronics. The purpose of this paper is to develop an Arduino-based embedded device for monitoring terrestrial constraints such as humidity and temperature, and to perform data acquisition on its performance characteristics. Testing and validation are conducted under different conditions of humidity and temperature: a normal room temperature/humidity (NTP) device creates the different environmental conditions, which can be compared with outdoor temperature/humidity [4]. This paper presents the design methodology for a device that collects humidity/temperature data and displays the readings on a liquid crystal display (LCD), using an Arduino board with a serial monitor and sensors. The designed system works on these two constraints and is especially suitable for greenhouses, laboratories and buildings. The paper compiles the required components, connections and circuit diagrams together with the essential code.
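The acquisition chain described above (DHT11 sensor → Arduino → serial monitor/LCD) can be prototyped on the host side with a small parser. This is a sketch only: the line format `T=<temp>,H=<humidity>` is a hypothetical example, not the format used by the authors, and the validity limits are the DHT11 measurement ranges quoted later in this paper (0–50 °C, 20–90% RH):

```python
# Hypothetical host-side parser for serial lines from an Arduino + DHT11
# setup. The "T=...,H=..." line format is assumed for illustration only.

def parse_dht11_line(line):
    """Parse e.g. 'T=25.4,H=61.0' into (temperature, humidity).

    Returns None when a reading falls outside the DHT11's specified
    measurement range (0-50 degC, 20-90 % RH)."""
    fields = dict(part.split("=") for part in line.strip().split(","))
    temp, hum = float(fields["T"]), float(fields["H"])
    if not (0.0 <= temp <= 50.0 and 20.0 <= hum <= 90.0):
        return None
    return temp, hum

print(parse_dht11_line("T=25.4,H=61.0"))  # (25.4, 61.0)
print(parse_dht11_line("T=75.0,H=61.0"))  # None: temperature out of range
```

In a real deployment the same function could be fed lines read from the Arduino's serial port before the values are logged or plotted on the server.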

2 Background There is a long history of development in observing environmental constraints such as humidity, pressure and temperature. These parameters have a proven, significant influence on plant growth efficiency, on temperature/humidity-dependent productivity and on the


quality of the food industry, which relies on sensitive equipment. In the current competitive era of technology, monitoring and reliable measurement are crucial factors. As open source hardware, Arduino has demonstrated the capability to deliver accurate real-time monitoring and control of environmental constraints. Arduino provides a platform where users, as a community, can share ideas, build on each other's work and upgrade it to modernize and enhance multiple interacting objects in different Internet environments [5]. Future directions can be extended with the integration of monitoring equipment, remote controllers, drones, robots and several other interesting objects, with the aim of taking one big step toward making the world more sustainable and automated. Arduino supports basic and advanced languages such as C, C++ and C#.

3 Materials and Method 3.1 Arduino The Arduino is the heart of the whole monitoring system: an assembly of computer hardware and software, single-board microcontrollers and microcontroller kits for building interactive objects with digital devices. As an open source platform, it can both sense and control objects in the physical world. The Arduino collects input signals received from the environment by sensors and interconnects with actuators. An actuator could be an Ethernet interface, a sensor, a motor, a simple light-emitting diode (LED) or other electronics, depending on the objective. Currently, Arduino hardware is available in several versions with advanced features for enabling designs, including the hardware wiring together with the programming. The software can run on Mac OS, Linux or Windows [6]. The Arduino works with the Integrated Development Environment (IDE), a program for performing operations in stand-alone mode or connected to electronic devices or a computer. Several clones of the Arduino hardware are available on the market owing to its open source nature. The Arduino UNO is an extended version of the Arduino: a microcontroller board based on the 8-bit Atmel ATmega328 series microprocessor. It has 14 digital input/output pins, of which six can be used as pulse-width modulation (PWM) outputs, six analog inputs, and a 16 MHz quartz crystal oscillator. The Arduino UNO board also has a reset button, an In-Circuit Serial Programming (ICSP) header, a power jack and a Universal Serial Bus (USB) connector for attachment to a computer. 'UNO' means 'one' in Italian.


3.2 Sensors A sensor converts a physical phenomenon into an electrical signal. It is an electronic device that delivers data to other electronic equipment such as computers, and thus forms the interface between electronics and the physical world/environment. A sensor is a semiconductor device developed to respond to variations in its capacitive or resistive characteristics, depending on the sensor type: it responds to an input physical signal and changes it into an electrical signal (voltage). Sensor performance depends on several parameters, such as bandwidth, resolution, noise, linearity, hysteresis, uncertainty/accuracy, range, sensitivity and transfer function. Many places and objects require sensors, from motion-sensitive light switches to touch-sensitive phone screens and numerous other applications, and the further expansion of sensors is widely possible with the advancement of microcontrollers [7, 8]. The DHT11 is an advanced sensor for monitoring humidity and temperature. Designed as an analog sensor, the DHT11 senses physical variations in moisture and heat, and works when exposed to air with appropriate programming and wiring.
Measurement range: Humidity: 20–90% RH; Temperature: 0–50 °C.
Accuracy: Humidity: ±5%; Temperature: ±2 °C.
The operating voltage lies between 3 and 5.5 V. The sensor is compact, with quick response, low power consumption and low cost, which makes it one of the best choices among researchers. The primary applications of DHT11 sensors are in consumer goods, examining/testing tools and HVAC (heating, ventilation and air conditioning); it can also be used to build a humidity regulator or a weather station. Furthermore, the DHT11 sensor is used in medicine, in controlling and measuring humidity/temperature in home appliances, and in several other domains.
Figure 1 illustrates the DHT11 sensor used for the measurement of humidity and temperature. The DHT11 has a definite performance range with stated accuracy; the parameters it measures are humidity and temperature. Humidity is the quantity of moisture present in air as water vapor, generally expressed as relative humidity, absolute humidity or dew point. This work uses the DHT11 sensor to measure the moisture level as relative humidity (RH). Relative humidity is the ratio of the actual water vapor content of the air to the saturated moisture level at the same temperature:

Data Acquisition Technique for Temperature Measurement …


Fig. 1 DHT11 sensor

RH = (ρW / ρS) × 100%

where RH denotes the relative humidity, ρS is the water vapor density at saturation and ρW is the actual water vapor density. The DHT11 sensor measures the moisture in the air through the electrical resistance between two electrodes [9]. The sensor contains a moisture-holding substrate; when the substrate absorbs moisture, ionization takes place, which enhances the conductivity between the electrodes. The change in resistance between the electrodes is therefore proportional to the relative humidity, since it follows the absorbed moisture level.
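The relation above can be checked numerically. The sketch below is illustrative only: the densities are example values (ρS ≈ 17.3 g/m³ is the approximate saturation vapor density of air at 20 °C), and the function name is ours, not part of the sensor firmware.

```python
def relative_humidity(rho_w, rho_s):
    """Relative humidity (%) from actual and saturation water vapor densities."""
    return (rho_w / rho_s) * 100.0

# Example: air at 20 degC holding 12.8 g/m^3 of water vapor,
# with a saturation density of about 17.3 g/m^3.
rh = relative_humidity(12.8, 17.3)
print(round(rh, 1))  # ~74.0% RH, inside the DHT11's 20-90% RH range
```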

3.3 IDE

The IDE is the software brain of the monitoring system built around the Arduino. It is the software development environment for the Arduino, in which the user can write programs and run them on several kinds of test platforms. The Arduino understands code written by the user in the IDE in C/C++. Once a program is written in the IDE, it is uploaded to the microcontroller, and the Arduino then determines how, when and where the system operates. The Arduino IDE has a built-in code parser that validates the written code before transmitting it to the Arduino [10]. After checking the validity of the code, the IDE compiles and translates it, and then uploads it to the Arduino microcontroller. The IDE software also includes a set of example programs that are prepared to be verified


on the tool. Several libraries can be installed to extend the Arduino IDE, just as in other programming platforms. Every program has two primary functions, 'setup()' and 'loop()'. The setup part is where initialization code is written; it runs once when the board starts. The loop part is where the main code is written; it runs repeatedly until the power is switched off or the reset button is pushed. This structure permits the user to edit the program so that the Arduino performs whatever task is needed on top of the base code. The IDE communicates with the Arduino board through USB, depending on the characteristics of the particular board.

4 Implementation and Results

The proposed approach consists of two subroutines, called the data acquisition section and the monitoring section. Figure 2 shows the block diagram. The data acquisition section consists of an Arduino UNO microcontroller and a DHT11 sensor for acquiring temperature and humidity data. The data is collected with a sampling time of 10–1000 ms. The monitoring section is an application developed in Microsoft Visual Studio on the .NET Framework. In the application, the data collected in a given sampling time can be visualized, and the sampling time can be increased or decreased as required. If the temperature and/or humidity value exceeds a specified threshold, the system generates warning messages and alert signals. The process flow is shown in Fig. 3. The DHT11 sensor acquires temperature in degrees Celsius and relative humidity and sends the readings to the Arduino UNO via the I2C protocol; the pin connection between the DHT11 and the Arduino is shown in Fig. 2. The Arduino sends the acquired data to the monitoring section over the Universal Asynchronous Receiver

Fig. 2 Block diagram
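The alarm logic of the monitoring section can be sketched as follows. This is a minimal illustration, not the authors' Visual Studio application: the function name is ours, the temperature limits (20–30 °C) come from the results section, and the humidity limits assume the DHT11's rated 20–90% RH range.

```python
def check_limits(temp_c, rh, t_low=20.0, t_high=30.0, rh_low=20.0, rh_high=90.0):
    """Return a list of warning messages for readings outside the thresholds."""
    warnings = []
    if not (t_low <= temp_c <= t_high):
        warnings.append(f"Temperature alarm: {temp_c} degC outside {t_low}-{t_high} degC")
    if not (rh_low <= rh <= rh_high):
        warnings.append(f"Humidity alarm: {rh}% RH outside {rh_low}-{rh_high}% RH")
    return warnings

print(check_limits(25.0, 55.0))   # no alarm: both readings within limits
print(check_limits(32.5, 55.0))   # one temperature warning
```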


Fig. 3 Process flow

Transmitter Transmitter (UART) protocol. The parameters essential for UART communication are defined in the Arduino IDE: the baud rate is set to 9600 bps, with no parity bit, one start bit and one stop bit. The fabricated circuit is shown in Fig. 4.

Fig. 4 Fabricated circuit


Fig. 5 Result

The developed algorithm was simulated on a 64-bit machine with an eighth-generation Intel i5 CPU (8 cores at 1.60 GHz). The result of temperature data acquisition in the simulation is shown in Fig. 5. In this case, the upper limit of temperature was set to 30 °C and the lower limit to 20 °C. The alarm is actuated whenever the acquired temperature is outside the specified limits.

5 Conclusion

In this paper, an embedded system for measuring temperature and humidity is presented. The system is developed using the low-cost DHT11 sensor and an Arduino UNO microcontroller, and a data monitoring application is developed in Microsoft Visual Studio for controlling temperature. The simulation shows efficient data acquisition and monitoring; the DHT11 sensor shows an uncertainty of ±1 °C in temperature readings. In future, the system can be integrated with advanced data acquisition infrastructures such as the Internet of Things and wireless sensor networks, and machine learning techniques can be used to improve temperature and humidity control.


References

1. L. dan Husein Sukariasih, Prototype System Telemetri Pemantau Suhu dan Kelembaban Udara Berbasis Mikrokontroler ATMega8535 (Prototype of a Telemetry System for Monitoring Air Temperature and Humidity Based on the ATMega8535 Microcontroller). Jurnal, Jurusan PMIPA/Fisika FKIP Unhalu, Kendari (2013)
2. G. Spasov, N. Kakanakov, Measurement of temperature and humidity using SHT 11/71 intelligent sensor. Electron (2004)
3. V.M. Quan, G.S. Gupta, S. Mukhopadhyay, Review of sensors for greenhouse climate monitoring, in 2011 IEEE Sensors Applications Symposium (SAS) (2011), pp. 112–118
4. G. Kovács, G.E. Marosy, G. Horváth, Case study of a simple, low power WSN implementation for forest monitoring, in 12th Biennial Baltic Electronics Conference (BEC), 4–6 Oct 2010, pp. 161–164
5. H. Zhou, F. Zhang, J. Liu, F. Zhang, Applications of Zigbee wireless technology to measurement system in grain storage, in IFIP Advances in Information and Communication Technology (Springer Science, Boston, 2009), pp. 2021–2029
6. B. Gholamzadeh, H. Nabovati, Concepts for designing low power wireless sensor network. World Acad. Sci. Eng. Technol. 45, 559–565 (2008)
7. L. Wei, Z. Xiaoping, Wireless temperature and humidity collection system design. Hebei, China 27, 500–502 (2010)
8. L. Chao, Wireless temperature and humidity monitor device for greenhouse. Shanxi, China 7, 136–139 (2011)
9. Q. Yunxiao, W. Fengbo, Wireless temperature and humidity measure and control system based on ZigBee technology. Shanxi, China 38, 731–734 (2009)
10. J. Polastre, R. Szewczyk, D. Culler, Telos: enabling ultra-low power wireless research, in IPSN 2005, Fourth International Symposium on Information Processing in Sensor Networks (2005), pp. 364–369

Urban Sprawl Over a Lotic Ecosystem of Doon Valley: Trend and Future Implications Monika Rawat, S. M. Veerabhadrappa, R. P. Pandey, and D. R. Sena

Abstract The unprecedented expansion of urban sprawl over river systems is always a matter of concern, as it unduly modifies the hydrologic and ecological character of the lotic system. A typical example of such a system is the twin river system of Dehradun city, namely Rispana and Bindal (RB), where heavy unplanned habitation has imposed impervious infrastructure close to the riverbed and along its riparian flood plains, resulting in constrained flow capacity and often inducing flood hazards. The encroachment has continued since 2005, the year typically considered as the baseline for intensified biotic encroachment after Dehradun became the capital of the newly formed Uttarakhand state in 2000. The present study is, therefore, carried out to analyze the spatial and temporal change in urban sprawl along and across the RB river system using Land Change Modeler (LCM) algorithms. NRSC 1:250,000 landuse data for the years 2005 and 2010 were used to carry out the change analysis, along with the transition potential of the major landuse changes, and the map projections were validated against the data of 2015. A multilayer perceptron artificial neural network (MLP-ANN) technique was used to ascertain the transition potential of each of the significant landuse changes at more than 80% accuracy. Future trends of landuse change were then projected for the year 2030 using the Markov chain technique, and the differential landuse was characterized for its pattern and nature of change using landscape analysis.

M. Rawat (B) · S. M. Veerabhadrappa: AIGIRS, Amity University, Noida, India; e-mail: [email protected]; S. M. Veerabhadrappa e-mail: [email protected]
R. P. Pandey: National Institute of Hydrology, Roorkee, India; e-mail: [email protected]
D. R. Sena: ICAR-Indian Agricultural Research Institute, New Delhi, India; e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al.
(eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_54


Keywords Urbanization · Lotic ecosystem · River Rispana · River Bindal · Land change modeler · GIS · Dehradun valley

1 Introduction

Most of the world's ancient civilizations were established and thrived through the ages along major rivers [1]. Since then, the parallel growth of urbanization along river banks has led to changes in landuse and landcover (LULC) [2]. This necessitates continuous monitoring of LULC to keep a check on the drivers of environmental change; tracking LULC change and the sustainability of available resources is therefore a major priority for researchers, planners and policymakers globally [3]. Remote sensing techniques have been used extensively in recent years to track changes in LULC by comparing historical and present satellite data, and this is perhaps the only cost-effective method for obtaining the sprawl pattern of urban landuse [4].

The two main rivers of Dehradun valley, namely Rispana and Bindal (RB), have faced encroachment over the past two decades since Dehradun was upgraded to a capital region. The severity of the problem can be understood from the fact that around 27% of the total population lives in the area of the RB river system [5]. The rivers being the lifeline of the Doon city, the present riparian and flood plain encroachment significantly constrains the perenniality of the lotic system. The present study therefore investigates the spatial and temporal pattern of urban growth in the RB catchment, with special reference to encroachment on the lotic system, and analyzes the transition potential of each landuse contributing to the built-up sprawl. A comparable development time scale of 5 years (2005, 2010 and 2015) was taken to identify these transitions using a multilayer perceptron neural network and to build a future scenario of this expansion (2030). A landscape analysis was done to characterize the pattern and nature of landuse change.

2 Study Area

The study area is a catchment (area = 105 km²) covering both the Rispana and Bindal rivers (Fig. 1), which confluence with the Suswa river, a tributary of the Song river that ultimately joins the river Ganga between Haridwar and Rishikesh. The relief of the catchment ranges from 521 m to 2253 m above MSL. The climate, of the humid subtropical type, is moderately hot in summer and very cold in winter. During the summer months (April, May and June), the temperature ranges between 36 and 17 °C. The winter months are colder, with the maximum and minimum temperatures touching 23 °C


Fig. 1 Location of the study area with delineated catchment of Rispana and Bindal (RB) river system

and 5 °C, respectively. The average annual rainfall and potential evapotranspiration are about 2000 mm and 1200 mm, respectively.

3 Materials and Method

3.1 Database and Pre-processing

The ALOS-PALSAR DEM dataset (https://asf.alaska.edu/) at 12.5 m resolution was used to delineate the catchment area of the RB system (Fig. 2a). The river network vector representing the RB system was manually digitized using present and archived satellite imagery from Google Earth (Fig. 2b). A road network vector was acquired from OpenStreetMap (OSM) datasets (Fig. 2c). NRSC 1:250,000 landuse data (https://bhuvan.nrsc.gov.in/) derived from AWiFS were obtained for the years 2005, 2010 and 2015 (Fig. 3a–c). The original LULC classes provided by NRSC were merged to limit the analysis to six classes. All raster datasets (and vectors converted to raster) were resampled to match the spatial resolution of the DEM for further analyses.

Fig. 2 a DEM (ALOS-PALSAR 12.5 m), b digitized stream network and c OSM road network
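The resampling step can be illustrated with a minimal nearest-neighbor upsampler in pure Python. This is only a sketch of the idea (in practice a GIS package would be used), and the integer `factor` assumes the coarse raster's cell size is an exact multiple of the DEM's 12.5 m resolution.

```python
def resample_nearest(grid, factor):
    """Upsample a 2-D raster by an integer factor using nearest-neighbor."""
    rows, cols = len(grid), len(grid[0])
    return [[grid[i // factor][j // factor]
             for j in range(cols * factor)]
            for i in range(rows * factor)]

coarse = [[1, 2],
          [3, 4]]
fine = resample_nearest(coarse, 2)
for row in fine:
    print(row)
```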

Fig. 3 Merged and resampled LULC raster obtained from NRSC for different years a 2005, b 2010 and c 2015


3.2 Land Use Change Detection and Transition Potential Modeling

The popular Land Change Modeler (LCM) algorithm was used to analyze the temporal change in landuse between 2005 and 2010, together with the transition potential (TP) [6] of each non-built-up class to built-up area, expressed as output sub-models. These sub-models define five transition layers (T1, T2, …, T5) and five persistence layers (P1, P2, …, P5); the suffixes 1–5 refer to change from agriculture, evergreen forest, deciduous forest, barren and water landuses to built-up areas, respectively. The TerrSet v18.1 software was used for the LCM and TP functionality. A multilayer perceptron artificial neural network (MLP-ANN) technique [7–9] was used to generate the transition probabilities from five input variables: (V1) a distance map of disturbances between 2005 and 2010, (V2) a distance raster of disturbances contributing to built-up areas, (V3) a distance raster of the stream network, (V4) the DEM, and (V5) the evidence likelihood raster of the likely change of an area to a desired landuse. The number of hidden layer neurons in the MLP-ANN is 7 (Fig. 4). The model used 50% of the dataset for training and 50% for testing. Of all changes mapped between 2005 and 2010, only changes of more than 6 ha were considered,

Fig. 4 Architecture of MLP-ANN with 05 input variables (V1 to V5) and 10 output layers (T1 to T5 represent transition and P1 to P5 represent persistence)


Fig. 5 a Validation of actual 2015 land cover classes with the predicted land cover class of 2015. b Projected landuse of 2030

eliminating subtle and unimportant landuse changes, which were partly attributed to classification errors. The expected accuracy of prediction by the MLP-ANN for a landuse whose measured accuracy is ω was calculated as:

E(ω) = 1 / (T + P)     (1)

where E(ω) is the expected accuracy, T is the number of transitions in the sub-model and P is the number of persistence classes, i.e., the number of "from" classes in the sub-model. A measure of model skill (S) is then expressed as:

S = (ω − E(ω)) / (1 − E(ω))     (2)

A skill of 0 indicates random chance, whereas a value close to 1 indicates the best acquired skill in making the prediction. The transition potential was also used to validate the prediction for 2015 against the actual NRSC landuse map of 2015 (Fig. 5a).
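Equations (1) and (2) can be verified against the reported figures. With five transition and five persistence layers (T = P = 5) and the measured accuracy ω = 0.862 from Table 3, the skill comes out at roughly the reported 0.846; the small difference is due to rounding of ω.

```python
def expected_accuracy(T, P):
    """Eq. (1): expected accuracy of a random prediction."""
    return 1.0 / (T + P)

def skill(omega, T, P):
    """Eq. (2): model skill from measured accuracy omega."""
    e = expected_accuracy(T, P)
    return (omega - e) / (1.0 - e)

s = skill(0.862, 5, 5)
print(round(s, 3))  # ~0.847, matching the reported 0.846 up to rounding of omega
```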

3.3 Future Trend Modeling

The transition modeling was carried out with the associated transition potentials of the sub-model layers V1 to V5, as described above, to create future


maps of 2030 by projecting the transition probabilities forward using the Markov prediction process [10]. The year 2030 was chosen to depict a near-future scenario that fits the present policy initiatives of the Uttarakhand Government, which seek to revive the river system by regulating riparian built-up areas, including land and water management interventions in the catchment area. The transition probability matrix was generated to show the likelihood of transition from one landuse to another.
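The core Markov step is a single matrix-vector product: the 2005 area vector (Table 1) multiplied by the 2005→2030 transition matrix (Table 6). The sketch below reproduces that step from the paper's tabulated values; it is illustrative, not the TerrSet implementation.

```python
# 2005 areas (ha) in class order: built-up, agriculture, evergreen forest,
# deciduous forest, barren, water (Table 1).
areas_2005 = [2173.0, 1809.3, 384.5, 2337.9, 3688.4, 128.8]

# 2005 -> 2030 transition probabilities, rows in the same class order (Table 6).
P = [
    [0.97, 0.01, 0.00, 0.01, 0.00, 0.01],
    [0.12, 0.80, 0.00, 0.06, 0.02, 0.00],
    [0.05, 0.03, 0.66, 0.24, 0.02, 0.00],
    [0.07, 0.03, 0.04, 0.85, 0.01, 0.00],
    [0.93, 0.03, 0.00, 0.02, 0.01, 0.01],
    [0.27, 0.09, 0.00, 0.03, 0.01, 0.60],
]

# Projected 2030 area of class j = sum over i of area_i * P[i][j].
areas_2030 = [sum(areas_2005[i] * P[i][j] for i in range(6)) for j in range(6)]
print([round(a, 1) for a in areas_2030])  # built-up grows to roughly 5972.8 ha
```

Because every row of the matrix sums to 1, the total catchment area (10,521.9 ha) is conserved under the projection.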

3.4 Landscape Analysis

The landscape analysis was done using normalized Shannon's entropy (Es), which is expressed as:

Es = − Σ(i=1..k) Pi ln(Pi) / ln(n)     (3)

where n is the number of landuse classes in the image, Pi is the proportion of class i among all classes within the neighborhood, i is the index of classes within the neighborhood and k is the total number of classes within the neighborhood. The analysis was carried out using a 7 × 7 neighborhood. Its primary purpose was to measure diversity over the local neighborhood of each pixel and thereby identify the landscape pattern of a landuse raster relative to its earlier landuse. The process was also used to compare the nature of change experienced by each land cover class using a decision tree procedure [11].
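Equation (3) can be sketched for a single window of class labels. In this illustration n = 6 (the number of landuse classes in the study): a window containing only one class scores 0, and a window with all six classes in equal proportion scores 1.

```python
from collections import Counter
from math import log

def normalized_entropy(window, n):
    """Eq. (3): normalized Shannon entropy of the class labels in one window."""
    total = len(window)
    props = [count / total for count in Counter(window).values()]
    return -sum(p * log(p) for p in props) / log(n)

uniform = [0] * 49                       # 7x7 window, one class only
mixed = [0, 1, 2, 3, 4, 5] * 8 + [0]     # 49 cells, nearly even mix of 6 classes
print(round(normalized_entropy(uniform, 6), 3))  # 0.0 -> no local diversity
print(round(normalized_entropy(mixed, 6), 3))    # close to 1.0 -> high diversity
```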

4 Results and Discussions

4.1 Landuse Change from 2005 to 2010

Comparing the relative landuse change from 2005 to 2010 (Table 1), a huge gain was registered by built-up areas (~137%), taken mostly at the cost of barren land, which registered a loss of about 81%. All other landuse changes seem insignificant in terms of areal gain or loss. A critical look at the net change (Table 2) shows a net gain in built-up area of about 58% against a net loss in barren land of about 83%. The landuse connected to water, i.e., the river and its riparian zone, was active in its net percentage change over these five years, suggesting likely encroachment activity in those areas.
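The percentage changes in Table 1 follow directly from the area columns; the two headline figures can be re-derived as a quick check.

```python
def pct_change(area_2005, area_2010):
    """Relative change (%) between two areas in hectares."""
    return (area_2010 - area_2005) / area_2005 * 100.0

print(round(pct_change(2173.0, 5141.1), 1))  # built-up: 136.6 (the ~137% gain)
print(round(pct_change(3688.4, 695.9), 1))   # barren: -81.1 (the ~81% loss)
```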


Table 1 Landuse class change from 2005 to 2010

Landuse class       2005 Area (ha)   2010 Area (ha)   % Change
Built up            2173.0           5141.1           136.6
Agriculture         1809.3           1816.5           0.4
Evergreen forest    384.5            378.9            −1.5
Deciduous forest    2337.9           2358.8           0.9
Barren              3688.4           695.9            −81.1
Water               128.8            130.8            1.5
Total area          10,521.9         10,521.9         0.0

Table 2 Net change in landuse from 2005 to 2010

Landuse             Net area change (%)      Net area change (ha)
                    (Loss)      (Gain)       (Loss)      (Gain)
Built up            0           58.18        0           2991
Agriculture         −5.6        5.98         −101        109
Evergreen forest    −10.11      8.77         −39         33
Deciduous forest    −4.1        4.94         −96         117
Barren              −82.93      9.54         −3059       66
Water               −11.98      13.3         −15         17

A further analysis of this aspect can reveal some interesting dynamics of this interaction when the transition potential of the individual landuses is examined.

4.2 Transitional Probabilities

The MLP-ANN simulation for transition potential analysis was carried out using the pertinent input datasets discussed above, outputting the transition potential of all land cover classes to built-up areas. The parameters and performance statistics are given in Table 3. The accuracy rate of 86.2% and skill measure of 0.8462 suggest that the MLP-ANN hypothesis, built on the chosen input variables, could successfully reproduce the transition potential and persistence of all changes from non-built-up to built-up areas. The individual land cover class skill measures (Table 4) also corroborate the superior performance of the MLP-ANN, and suggest that barren and water land covers contribute most strongly to the transition to built-up area. Based on these transition probabilities, a predicted map of 2015 was constructed from the acquired transition potential using the


Table 3 Parameters and performance metrics of MLP-ANN

Parameters                      Metrics
Input layer neurons             5
Hidden layer neurons            7
Output layer neurons            10
Requested samples per class     54
No. of iterations               30,000

Performance indicators
Training RMS                    0.1464
Testing RMS                     0.1581
Accuracy rate                   86.2%
Skill measure                   0.846

Table 4 Skill measures for both transition and persistence derived from MLP-ANN

Class                          Skill measures
                               Transition    Persistence
Agriculture to built up        0.778         0.841
Evergreen forest to built up   0.877         0.847
Deciduous forest to built up   0.778         0.547
Barren to built up             1.000         0.857
Water to built up              1.000         0.926

MLP-ANN technique (Fig. 5a). The predicted map was compared to the actual land cover map of 2015, and a confusion matrix was built; the overall accuracy was found to be 81.3% (Table 5). The recall (sensitivity) of the built-up area shows significantly better prediction accuracy than the other land cover classes. As a matter of fact, agriculture and deciduous forest also

Table 5 Prediction metrics of validation of 2015 land cover

Land cover class    Recall/sensitivity   Precision
Built-up            0.914                0.921
Agriculture         0.720                0.771
Evergreen forest    0.426                0.415
Deciduous forest    0.748                0.804
Barren              0.393                0.114
Water               0.364                0.530
Overall accuracy of prediction (%) = 81.3


Table 6 Transitional probability in 2030 with reference to 2005 land cover classes

Land cover class        Year 2030
(Year 2005)             1       2       3       4       5       6
Built-up (1)            0.97    0.01    0.00    0.01    0.00    0.01
Agriculture (2)         0.12    0.80    0.00    0.06    0.02    0.00
Evergreen forest (3)    0.05    0.03    0.66    0.24    0.02    0.00
Deciduous forest (4)    0.07    0.03    0.04    0.85    0.01    0.00
Barren (5)              0.93    0.03    0.00    0.02    0.01    0.01
Water (6)               0.27    0.09    0.00    0.03    0.01    0.60

comparably showed better prediction. The probable reason is the higher persistence of those landuses compared to others such as evergreen forest, barren and water. The uncertainties in prediction suggest rampant loss of isolated patches of those land cover classes to encroachment or built-up areas, as already observed (Table 1). A future prediction to the year 2030 was made using the Markov prediction method with reference to the 2005 transition probabilities (Fig. 5b); Table 6 represents the transition probability matrix for the year 2030. As expected, barren land has a 93% probability of conversion to built-up, whereas there is a 27% chance that the lotic system environment will lose further ground to encroachment under a business-as-usual scenario. Interestingly, the boundary between built-up areas and deciduous forest seems to be shortening, while evergreen forest shows a 24% probability of transitioning to deciduous forest.

4.3 Landscape Analysis

Preliminary analysis of the transition from 2005 to 2010 (Fig. 6a) revealed that, except for attrition, which is predominant in evergreen forest and is part of a natural recession, enlargement and aggregation are due to anthropogenic causes. This suggests that massive encroachment in the form of built-up area or agriculture has consolidated or aggregated into larger patches of those land covers. Enlargement could be observed in deciduous forest, especially in the reserve forest areas of the Mussoorie hills and the Lachhiwala range, where plantation activities were probably a factor. In the transition from 2005 to 2015 (Fig. 6b), massive attrition could be observed in agricultural land, mostly natural fallow due to the lack of agricultural activity, while built-up consolidation across the lotic system was still prevalent. Deciduous and evergreen forest interact naturally with each other, and the loss of patches in both is due to natural environmental and climatic exchange. The year 2030 (Fig. 6c) could see further development of new patches (creation) of built-up or urban sprawl in areas that are part of existing barren patches, cultivated fallow areas, or the riparian zones of the RB


Fig. 6 Landscape analysis of the transition from 2005 in a 2010, b 2015 and c 2030

river system, which is ultimately one of the soft targets for encroachment. A normalized entropy analysis of the individual land cover classes also shows higher peripheral diversity around the existing built-up areas, which intersect the river course all too often. This suggests a relatively high vulnerability of the riparian and flood plain zones of the RB lotic system.

5 Conclusions

The recent initiative of the Uttarakhand Government to rehabilitate the Rispana and Bindal rivers has faced constant roadblocks from uncontrolled encroachment along the river stretch, especially in the valley region of the catchment. In the present study, an attempt has been made to identify the growth pattern of the built-up area and the vulnerable landuses likely to be replaced by this encroachment. The hotspots of encroachment, and of further new habitation, could be identified using a robust mathematical tool, the MLP-ANN, which successfully identified the transition potential and the likely landscape change pattern. The inferences drawn from these outcomes were described with accuracy levels corroborating their likelihood. A detailed analysis of the landscape pattern indicated the land parcels that have undergone either natural or anthropogenic landscape change. A further reconstruction of historical landuse, reaching further into the past than in this study, could yield more accurate and conclusive sprawl patterns of urban habitation and their interaction with the ecosystem of the lotic system.


References

1. R.R. Joshi, M.M. Warthe, S. Dwivedi, R. Vijay, T. Chakrabarti, Monitoring changes in land use land cover of Yamuna riverbed in Delhi: a multi-temporal analysis. Int. J. Remote Sens. 1161, 9547–9558 (2011)
2. S. Fazal, A. Amin, Impact of urban land transformation on water bodies in Srinagar City, India. J. Environ. Prot. 2(2), 142–153 (2011)
3. F. Yuan, K.E. Sawaya, B.C. Loeffelholz, M.E. Bauer, Land cover classification and change analysis of the Twin Cities (Minnesota) Metropolitan Area by multitemporal Landsat remote sensing. Remote Sens. Environ. 98(2–3), 317–328 (2005)
4. F.G. Dessì, A.J. Niang, Thematic mapping using quickbird multispectral imagery in Oung El-Jemel area, Tozeur (SW Tunisia), in Desertification and Risk Analysis Using High and Medium Resolution Satellite Data (Springer, Dordrecht, 2009), pp. 207–212
5. Census, "District Census Handbook", https://www.census2011.co.in/census/city/23dehradun.html. Accessed on 2019/11/10
6. T. Takada, A. Miyamoto, S.F. Hasegawa, Derivation of a yearly transition probability matrix for land-use dynamics and its applications. Landscape Ecol. 25, 561–572 (2010)
7. P.M. Atkinson, Neural networks in remote sensing. Int. J. Remote Sens. 18(4), 699–709 (1997)
8. D.L. Civco, Artificial neural networks for land cover classification and mapping. Int. J. Geogr. Inf. Syst. 7(2), 173–186 (1993)
9. J.C. Chan, Detecting the nature of change in an urban environment: a comparison of machine learning algorithms. Photogramm. Eng. Remote Sens. 67(2), 213–225 (2001)
10. J.R. Eastman, W. Jin, P.A.K. Kyem, J. Toledano, Raster procedures for multi-criteria/multiobjective decisions. Photogramm. Eng. Remote Sens. 61(5), 539–547 (1995)
11. J. Bogaert, R. Ceulemans, D. Salvador-Van Eysenrode, Decision tree algorithm for detection of spatial processes in landscape transformation. Environ. Manage. 33(1), 62–73 (2004)

Arduino-Based Therapy Device for Carpal Tunnel Syndrome Alok Ahuja, Aditya Katole, Aman Sharma, and Akanksha Vyas

Abstract A novel device (an Arduino-based gear for carpal tunnel syndrome) is proposed for the recuperation of forearms afflicted by carpal tunnel syndrome (further referred to as CTS). The proposed therapy device, with two degrees of freedom, can be used as therapy for the gripping action of the forearm. The results from goniometry and flex sensors confirm the main notion of the device: it significantly reduces the muscle numbness caused by CTS and increases the gripping force, hence providing a more reliable and economical alternative to physiotherapy and medicinal drugs. The device can be adapted in the future to target complicated joints like the knee, neck, shoulder, ankle and toe.

Keywords Muscle therapy device · Carpal tunnel syndrome · Physiotherapy

1 Introduction

In the last decade, IT companies have seen great growth in income. Research shows that six companies grew their annual enterprise revenue by at least $16 billion from 2009 to 2019, with no plans of slowing down over the next 10 years. An employee in the IT industry works for almost 7.9 h a day, which amounts to about 40 h a week and 158 h a month. Working for so long causes a lot of

A. Ahuja (B) · A. Katole · A. Sharma · A. Vyas: Department of Electronics and Communication, Medi-Caps University, Indore, India; e-mail: [email protected]
A. Katole e-mail: [email protected]
A. Sharma e-mail: [email protected]
A. Vyas e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_55


medical complications, such as muscle numbness, back pain, tennis elbow and carpal tunnel syndrome. An employee's fingers experience a cumulative force of up to 18 tons. Following a similar routine each day causes numbness and pain in the forearm and wrist region. The human wrist consists of bones which form a tunnel: the wrist bones are called carpal bones, so the tunnel formed by these bones is called the carpal tunnel. This tunnel is around an inch wide. For the easy and free motion of the hand, the joints and fingers are surrounded by a ligament called the synovium. Constant pressure on the wrist makes these ligaments move or shift into the tunnel, leading to numbness, tingling and a reduction in the gripping action of the hand.

2 Literature Survey

Biomedical engineering has been an important field of innovation and research, where artificial limbs and exercising devices have been the leading areas of interest; the field is at its peak with inventions like the ones mentioned below. According to research, around 1–3% of the total population suffers from conditions like tennis elbow, and one study designed a device that can massage the affected arm: a comparison between force and weight is made, various massage modes are developed and a score is given [1] (Fig. 1).

The next device is designed for both flexion and extension of the arm of patients suffering from paraplegia as a result of stroke or abnormalities in the central or peripheral nervous system. The Zigbee protocol is used in the design of the device, which is capable of wireless communication, and a Lyapunov approach is applied to establish system stability [2].

Fig. 1 Carpal tunnel and median nerve


Another study, titled "Design and Development of Equipment Wrist and Forearm Physical Therapeutic in Elderly Persons", proposes a forearm wearable used as a joystick for a game that serves as therapy for elderly people. Goniometry, total score and EMG show significant improvement in the forearm of the person [3]. An exoskeleton helps a person walk, i.e., reduces the effort of walking. A healthy person is made to walk on a treadmill at a speed of around 2.5 mph under various conditions; to check the results, the motion of various human joints is plotted with and without the assistance of the device, and the varying data is then averaged over the complete strides of the last minute [4].

3 Review of Literature Survey Numerous studies address medical problems such as tennis elbow and paraplegia. Many assistive devices have been developed to reduce the effort of walking, and several games help users regain hand strength, with results verified through the scores earned while playing. The main limitations of these assistive devices are that they are not portable, are costly and require certain expertise to operate. This paper therefore proposes an automated device named “Arduino-based gear for carpal tunnel syndrome” that is affordable, compact and portable. The device consists of a skeleton frame for mounting motors and suspended threads, and weighs only around 140–156 g. The motors can be programmed independently as required using the Arduino IDE platform. To obtain results and verify the capability of the device, flex sensors are used. Using motors and flex sensors makes the device not only cheaper but also easy for everyone to use, making it accessible to all.

4 Fabrication The mechanical part is an aluminum frame and the complete hardware setup of the device, which is mounted on the person's hand with straps and Velcro so that it does not slide off. A graphic representation is shown (Fig. 2).


A. Ahuja et al.

Fig. 2 Forearm mounting

4.1 Electronic Part The whole device is a wearable glove that can be worn by anyone irrespective of age. Five dedicated motors are attached to the glove, one for each finger and the thumb, and each motor is connected independently to an Arduino Nano, a programmable microcontroller that supervises and controls the motors to form specific exercise gestures (Fig. 3).

Fig. 3 Servomotor


Fig. 4 Arduino IDE

4.2 Software Requirement The Arduino Nano is a microcontroller that must be programmed to perform a specific set of instructions. Arduino provides a programming platform known as the Arduino Integrated Development Environment (Arduino IDE), the official software introduced by Arduino.cc. It is used for programming, compiling and uploading code to the Arduino board. Hence, to form proper exercising gestures, the Nano board is programmed under extensive supervision (Fig. 4).

5 Experimental Setup The experimental setup of the device consists of a glove, the aluminum frame and the electronics. To wear the gear, the person places it on the palm and tightens it with the straps and Velcro. The device has attachments for the fingertips, which are connected by strings to motors mounted on the frame; the motors are driven by the Arduino, and rotating them pulls the strings and bends the fingers. The ultimate objective of the proposed idea is to prove its efficiency over existing treatment methods; hence, flex sensors are used to measure muscle numbness and the changes produced by regular treatment with the device. The flex sensor is a bend-detecting sensor whose resistance varies with the bend and can be recorded using any microcontroller. To collect readings, the flex sensors are mounted on the dorsal side of the fingers (Fig. 5).
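The flex-sensor reading described above can be sketched numerically. In a typical wiring (an assumption here, not stated in the paper), the sensor forms a voltage divider with a fixed resistor, and the microcontroller's ADC samples the midpoint; all component values and calibration endpoints below are illustrative.

```python
def flex_resistance(adc_count, r_fixed=47_000.0, vcc=5.0, adc_max=1023):
    """Estimate flex-sensor resistance from a 10-bit ADC reading.

    Assumes the sensor is the upper leg of a voltage divider with a
    fixed resistor r_fixed to ground; the ADC measures the node
    between them. Component values are illustrative.
    """
    v_out = adc_count / adc_max * vcc
    if v_out <= 0:
        raise ValueError("ADC reading must be positive")
    # Divider relation: v_out = vcc * r_fixed / (r_flex + r_fixed)
    return r_fixed * (vcc - v_out) / v_out


def bend_percent(adc_count, adc_straight=300, adc_full_bend=700):
    """Map an ADC reading linearly onto a 0-100% bend scale.

    The calibration endpoints would be measured per finger; the
    defaults here are assumptions.
    """
    span = adc_full_bend - adc_straight
    pct = (adc_count - adc_straight) / span * 100.0
    return max(0.0, min(100.0, pct))
```

The same arithmetic runs unchanged on the Arduino side; this Python form only illustrates the conversion from raw counts to the percentage figures reported later.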


Fig. 5 Dorsal surface mounting of flex

6 Working Routine The following exercise was studied and analyzed for improving the core strength of the fingers. The device is programmed to keep all the fingers closed and then open each finger individually, one by one, as shown in Figs. 6, 7, 8, 9, 10 and 11.
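The routine above (a closed fist, then opening one finger at a time) can be sketched as a simple sequence of gesture states. The state representation and finger names are illustrative only; the device's actual firmware is an Arduino sketch.

```python
FINGERS = ["thumb", "index", "middle", "ring", "little"]


def exercise_routine():
    """Yield gesture states for one pass of the routine: first a
    closed fist, then each finger opened individually while the
    others stay closed (mirroring Figs. 6-11)."""
    yield {f: "closed" for f in FINGERS}   # Step 1: making a fist
    for finger in FINGERS:                 # Steps 2-6: one finger at a time
        state = {f: "closed" for f in FINGERS}
        state[finger] = "open"
        yield state


steps = list(exercise_routine())
```

Each yielded state would map to a set of servo positions on the device; iterating the generator reproduces the six-step sequence.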

7 Results The results from the device are reported by the flex sensors connected to the Arduino, which interprets the readings and converts them into the percent force applied by an individual while bending and flexing fingers. The sensor is attached to the dorsal side of the finger, and the outcome is presented in a table showing the force applied by different fingers before and after the routine exercise with the device (Table 1).


Fig. 6 (Step 1) Making a fist

Fig. 7 (Step 2) Flexing of thumb

The readings in the table show the difference between the force applied by a particular individual in the normal condition and the force applied after completing the routine exercise with the device. The most important observation from the results is that CTS mainly affects the middle finger, index finger and thumb; the change in readings for the ring finger and little finger is not significant, indicating that the influence of CTS is limited to the middle finger, index finger and thumb.


Fig. 8 (Step 3) Flexing the index finger

Fig. 9 (Step 4) Flexing the middle finger

8 Conclusion The concept device exercises the hand and simultaneously helps in the treatment of carpal tunnel syndrome, which serves as its unique feature. The device is easy to use and does not need any expertise. It focuses on each region of the palm affected by CTS, and accordingly different movements are performed by the device in chronological order. In the future, the therapy device can be miniaturized further, and more applications and roles can be examined with subjects.


Fig. 10 (Step 5) Flexing the little finger

Fig. 11 (Step 6) Flexing the middle finger

Table 1 Recorded readings from the flex sensor

Finger    Without device              With device
          Flexed state   Bend state   Bend state
Thumb         63             66           70
Index         67             70           74
Middle        57             67           71
Ring          59             64           69
Little        67             70           76


References
1. J. Chimsa, T. Seechaipat, W. Senavongse, Design and development of massage therapy device for arm (Aug 2017)
2. Y. Bouteraa, I.B. Abdallah, Exoskeleton robots for upper-limb rehabilitation (May 2016)
3. Pintavirooj, Design and development of equipment wrist and forearm physical therapeutic in elderly persons (Nov 2018)
4. T. Lenzi, P. Stegall, D. Zanotto, S.K. Agrawal, Reducing muscle effort in walking through powered exoskeletons (Nov 2012)

IOT-Based Smart Traffic Light System for Smart Cities Manish Gupta, Divesh Kumar, and Manish Kumar

Abstract In recent times, automobiles are equipped with advanced technology and luxuries; nevertheless, drivers violate traffic rules, intentionally or unintentionally ignoring traffic lights. In this scenario, the conventional traffic management system needs to be upgraded and made vehicle-assisted by using the Internet of Things. Therefore, this paper proposes a novel intelligent traffic light management system using the Internet of Things. In the proposed system, vehicles and traffic lights communicate with each other with the help of a cloud system. Keywords IoT · Traffic light system · Cloud system · Smart city

1 Introduction Because population density (in nations like India and China) has increased manyfold over the past few decades, especially in urban and sub-urban areas, a huge amount of infrastructure is required to accommodate the population and provide better services to the residents of these areas. In a developing nation like India, resources are limited, and serving the entire population of an area requires efficient utilization of the available resources with the help of technology. Hence, the government proposes the concept of the smart or intelligent city, with the objectives of increasing personal satisfaction through the use of creativity, enhancing administrative competence and addressing the problems of inhabitants by exploiting the capabilities M. Gupta (B) · D. Kumar · M. Kumar GLA University, Mathura, India e-mail: [email protected] D. Kumar e-mail: [email protected] M. Kumar e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_56



M. Gupta et al.

of Internet-of-things (IoT) and information and communication technology (ICT) [1]. Various issues need to be addressed with the help of IoT and ICT to increase the personal satisfaction of the inhabitants of any smart city. One of the major issues of any urban and sub-urban area is traffic congestion, which results from the increase in vehicles on the road due to the growth in population. Traffic congestion leads to personal frustration, slower speeds, traffic violations, accidents, etc. [2], and eventually to wastage of fuel and increased pollution (noise and heat), which affect the economy of the country and the environment of the planet. Hence, to fulfill the main objectives of a smart city, i.e., the personal satisfaction and safety of its inhabitants, traffic congestion must be addressed very carefully using IoT and ICT. Methods for the innovation and development of IoT have been implemented by Atzori et al. [3]; contemporary devices are lightweight and small, which encourages the development of IoT, in which users are connected, interacted with, managed and controlled through their unique IP addresses. Hence, this paper addresses the traffic congestion issue of a smart city using IoT and ICT, and the rest of this section presents the state of the art in smart traffic light control systems. At present, to monitor or control traffic, the administration depends solely on conventional pre-programmed traffic light signals. The major drawback of this system is that it cannot guide or move the traffic according to real-time demand, because it cannot predict the real-time traffic density at any end of a junction [4]. Moreover, this leads to traffic jams and their consequences, and ultimately human intervention is required to clear the congestion.
However, with advancements in technology like IoT and high-speed Internet, the existing conventional pre-programmed traffic light system can be made intelligent enough to take real-time decisions for smoothly guiding and moving traffic without congestion. To solve the traffic congestion problem in urban and sub-urban areas, researchers have suggested various solutions based on sensors, image processing, artificial intelligence and IoT. A smart traffic control system was proposed by Yawle et al. [5] using infrared (IR) sensors, Bluetooth and GPS. In this work, IR sensors mounted on the road separator detect the traffic density; the captured density data are sent via a Bluetooth module to a GSM module in the control room, and the traffic controller then guides the traffic according to the real-time density. Further, Ghazal et al. [6] presented an automated traffic control system using a PIC microcontroller and IR sensors to predict real-time traffic density and control the timing of the traffic lights. Such approaches suffer if there is a connection failure between the IR sensors and the controller, so researchers suggested an alternative approach to control the traffic dynamically, based on image processing. Savaliya and Kalaria [7] proposed a smart traffic control system based on image processing: the road is divided into smaller sectors, the alterations accumulated due to traffic are detected by subtracting the original frame from the present frame, and the motion of a particular vehicle can be tracked across consecutive frames. They used a camera to implement their approach. This approach had several flaws


that if any vehicle is hidden in the shadow of another, it cannot be traced; hence the measured traffic density is inaccurate, which results in traffic jams. Later, Meshram and Malviya [8] and Kumar et al. [4] presented methods based on vehicle classification and counting to calculate the current traffic density, but since these methods are based on video processing, they fail in rain and fog. To improve the performance of existing traffic control systems, researchers have also exploited the capabilities of artificial intelligence. Recently, Kuppusamy et al. [9] exploited the potential of a genetic algorithm (GA) in a client–server setting to improve the performance of the traffic light system by reducing the signal processing time. Further, Jin and Ma [11] utilized a hierarchical multi-agent modeling framework, and Khadilkar et al. [12] used linear regression to get rid of traffic congestion. To make the traffic light control process automated and reliable, researchers have come up with another solution, based on IoT. Recently, various researchers [1, 2, 9, 10, 12, 13] have presented IoT-based smart traffic light control systems and demonstrated that IoT-based methods outperform the others. In most cases, researchers integrated IoT with the present road infrastructure, but none of the reported work has implemented a vehicle-assisted smart traffic control system, even though the vehicle can play a key role in avoiding congestion. Therefore, this paper proposes a smart traffic light control system using IoT with vehicle assistance. The rest of the paper is organized as follows: Sect. 2 describes the details of the proposed system and its functioning, followed by the conclusion and future scope in Sect. 3.

2 Proposed System The proposed system exploits the capabilities of IoT to improve the performance and reliability of the traffic light system by adding the contribution of vehicles to guide the traffic smoothly. Figure 1 depicts a sample of IoT-enabled vehicles and traffic lights; all traffic lights and vehicles are connected to each other with the help of IoT. Figure 2 represents the objective of the proposed system: its outcome is systematic traffic flow given traffic congestion at the input. Figure 3 depicts the block diagram of the proposed system. All traffic light data are sent to the local traffic control system and then to the cloud, so that the right pattern of signals can be determined. The real-time location and speed of a vehicle, obtained from its dashboard console, are regularly shared with the cloud. Data collected from the vehicles, traffic lights and road entities are processed and then stored in the cloud. This stored data is then sent back to the vehicles and traffic lights over the Internet. The change in the color of a traffic light and the distance to it are shared with the vehicle's dashboard console in real time, which means the vehicle can know whether the traffic light is green, yellow, or


Fig. 1 A sample of connected vehicle and traffic light scenario

red in advance, avoiding unnecessary traffic congestion and traffic violations. The proposed system shows better performance when compared with other existing methods in this class.
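As a rough illustration of the kind of real-time advisory the dashboard console could derive from the shared light state and distance, consider the sketch below. The decision rule, function name and return values are assumptions for illustration, not the paper's algorithm.

```python
def signal_advisory(distance_m, speed_mps, light_state, seconds_to_change):
    """Advise an approaching vehicle given the traffic light state
    shared via the cloud.

    distance_m:        distance from vehicle to the light, in meters
    speed_mps:         current vehicle speed, in m/s
    light_state:       "green", "yellow" or "red"
    seconds_to_change: time until the light switches state
    """
    if speed_mps <= 0:
        return "stopped"
    eta = distance_m / speed_mps  # estimated time of arrival at the light
    if light_state == "green":
        # Proceed only if the vehicle reaches the light while it is still green.
        return "proceed" if eta <= seconds_to_change else "slow down"
    if light_state == "red":
        # Proceed only if the light will already be green on arrival.
        return "proceed" if eta >= seconds_to_change else "slow down"
    return "prepare to stop"  # yellow
```

Such an advisory, pushed to the dashboard in advance, is what lets the driver avoid unnecessary stops and violations.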

3 Conclusion and Future Scope In the proposed system, vehicles and traffic lights are able to communicate directly with the help of IoT, and the vehicle provides all the necessary real-time information related to road transportation to the traffic controller. This system not only improves road safety and guarantees smooth traffic, but also improves environmental efficiency. In the proposed system, data are exchanged between the connected vehicles, the traffic control center, the basic infrastructure, personal devices and cloud-based storage. This exchange of data ensures smooth traffic flow and improves fuel efficiency and road safety.


Fig. 2 Objective of the proposed system: Traffic Congestion → Proposed Smart Traffic Control System → Systematic Traffic


Fig. 3 Block diagram of proposed system

References
1. P. Rizwan, K. Suresh, M. Rajasekhara Babu, Real-time smart traffic management system for smart cities by using internet of things and big data, in International Conference on Emerging Technological Trends (ICETT). IEEE (2016)
2. A. Frank, Y.S.K. Al Aamri, A. Zayegh, IoT based smart traffic density control using image processing, in 4th MEC International Conference on Big Data and Smart City (ICBDSC). IEEE (2019)
3. L. Atzori, A. Lera, G. Morabito, The internet of things: a survey. Comput. Netw. 54(15), 2787–2805 (2010)
4. S.S. Kumar et al., Autonomous traffic light control system for smart cities, in Computing and Network Sustainability (Springer, Singapore, 2019), pp. 325–335
5. R.U. Yawle, K.K. Modak, P.S. Shivshette, S.S. Vhaval, Smart traffic control system. SSRG Int. J. Electron. Commun. Eng. (SSRG-IJECE) 3(3) (2016)
6. B. Ghazal et al., Smart traffic light control system, in Third International Conference on Electrical, Electronics, Computer Engineering and Their Applications (EECEA). IEEE (2016)
7. R. Savaliya, V. Kalaria, A video surveillance system for traffic application. SIJ Trans. Comput. Sci. Eng. Appl. (CSEA) 2(8) (2014)
8. S.A. Meshram, A.V. Malviya, Traffic surveillance by counting and classification of vehicles from video using image processing. Int. J. Adv. Res. Comput. Sci. Manage. Stud. 1(6)
9. P. Kuppusamy et al., Design of smart traffic signal system using internet of things and genetic algorithm, in Advances in Big Data and Cloud Computing (Springer, Singapore, 2018), pp. 395–403
10. S. Parekh et al., Traffic signal automation through IoT by sensing and detecting traffic intensity through IR sensors, in Information and Communication Technology for Intelligent Systems (Springer, Singapore, 2019), pp. 53–65
11. J. Jin, X. Ma, Hierarchical multi-agent control of traffic lights based on collective learning. Eng. Appl. Artif. Intell. 68, 236–248 (2018)


12. A. Khadilkar et al., Intelligent traffic light scheduling using linear regression, in Applications of Artificial Intelligence Techniques in Engineering (Springer, Singapore, 2019), pp. 329–335
13. K.-H.N. Bui, J.E. Jung, D. Camacho, Game theoretic approach on real-time decision making for IoT-based traffic light control. Concurrency Comput. Pract. Experience 29(11), e4077 (2017)

Self-Diagnosis Medical Chatbot Using Artificial Intelligence Fakih Awab Habib, Ghare Shifa Shakil, Shaikh Sabreen Mohd. Iqbal, and Shaikh Tasmia Abdul Sajid

Abstract Medical care is very important for a healthy life. However, it can be very difficult to seek medical attention when you have a health problem. The notion recommended here is to develop a medical chatbot that adopts AI to analyze an ailment and produce the necessary information about the condition before a doctor is consulted. Medical chatbots are built to reduce medical costs and improve access to medical knowledge. Some chatbots serve as medical manuals that help patients become aware of their illness and improve their health. Users can assuredly benefit from chatbots if the chatbots can diagnose several kinds of illness and render the required data. A text-diagnosis bot enables sufferers to take part in an analysis of their medical matters and presents a personalized analysis report with reference to the symptoms. Keywords Artificial intelligence · Chatbots · Health care · Machine learning

1 Introduction Artificial intelligence (AI) is concerned with building systems that learn complex new behavior and serve in a continuous loop. AI simulates the expression of human intellect in machines that sense, act and achieve results, changing the ways problems are solved. The term “artificial intelligence” is used when a machine performs or simulates “cognitive” functions that humans associate with other human minds, such as “learning” and “problem-solving.” Artificial intelligence provides the greatest ability to imitate human thought and behavior in a machine. As AI chatbots advance, researchers are paving the way for their safe use, so that reliable medical chatbots can now be suggested. F. A. Habib · G. S. Shakil (B) · S. S. Mohd. Iqbal · S. T. A. Sajid Anjuman I Islam Kalsekar Technical Campus, Panvel, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_57



F. A. Habib et al.

The impact of artificial intelligence is especially visible in innovative sectors of industry, to the point where it is claimed that doctors will be supplanted by artificial intelligence. Although such a change will not occur soon, general medical artificial intelligence will help deliver advanced medical diagnosis. Soon the time will come when machine diagnosis rivals human judgment in safety. So now is the time for AI chatbots to be recognized and appreciated for their health benefits; prompt prediction of illness can benefit patients and prevent complications [1]. Responsive chat super-robots compatible with AI are not just a change; they are a criterion shift. To understand the use of this technology and its consequences, we briefly describe how AI chatbots are revolutionizing health care.

2 Literature Survey Simon Hoermann reviewed the present evidence for the effectiveness of online mental health interventions that use synchronous text-based communication [2].
Madhu [3] produced a plan in which AI can foretell conditions of illness depending on the indications and provide a listing of accessible medications. If a person is examined annually, it is conceivable to predict unspecified potential problems even before they cause any harm to the body.


Some difficulties, such as diagnostic expenses and government regulations for the successful use of customized medication, are not discussed in the paper. It has been observed that the promotion and growth of chatbot design are not expanding at the expected speed because of the variety of procedures and methodologies applied to compose a chatbot. The arrangements for chatbot configuration are still a point of discussion, and no standard methodology has yet been identified [4]. Researchers have so far operated in obscurity, with hesitance to exhibit any enhanced techniques they have found beyond enrichments to chatbot design. Additionally, widely satisfactory chatbots require improvements through crafting in-depth knowledge bases, which are few to date. Jyothirmayi et al. discovered that chatbots fall short of the expected mining outcomes and do not include sufficient natural language characteristics [5]. Monica Agrawal et al. aimed to build a text-to-text analysis bot model that brings patients into a conversation about their medical condition and provides a customized diagnosis depending on their indications and prior profile; however, their algorithm's precision, recall and analysis percentage were low. Divya et al. [6] found their chatbot scheme lacking in power and advised augmentations by incorporating a wider mix of words and increasing the utilization of the database, so that the medical chatbot could deal with all sorts of conditions and remain generic, besides incorporating voice chat as the expected scope of the study. The user posts a text message or voice message using the Google API, and perceives only relevant results from the chatbot. The SVM algorithm is applied to analyze the dataset, and the Porter algorithm is utilized to cut undesired terms like suffixes or prefixes [7].
Documents served on the web are processed by tagging the dataset using an n-gram-based low-dimensional representation; the TF-IDF matrix is decomposed to generate S, U and V, and after multiplying the three matrices, cosine similarity is calculated [8]. Here a chatbot is created for customer service that functions as a public health service. The application uses n-grams, TF-IDF and cosine similarity. A knowledge base is created for storing the questions and answers. The application extracts the keywords from the question and, by using unigrams, bigrams and trigrams, achieves fast answering. A similar paper, “Pharmabot: A Pediatric Generic Medicine Consultant Chatbot”, is proposed by Benilda Eleonor Comendador and team [9], who provide a design for a stand-alone medical chatbot implemented using MS Access and Visual C. To use the proposed design, the user navigates using the four options provided by the application [10]. This design works by converting the user input into SQL queries and executing them on MS Access to retrieve the solution to the illness. Another research paper, “MedChatBot: An UMLS based Chatbot for Medical Students”, proposed by Hameedullah Kazi, B. S. Chowdhry and Zeesha Memon, focuses on a design for an AIML-based medical chatbot, implemented using a Java-based AIML interpreter called Chatter Bean [11]. To use the proposed design, the user types a message that should contain the illness name, which is detected using AIML patterns [11]. Once the illness is detected, the chatbot provides the user with the necessary information about the problem.
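A minimal, stdlib-only sketch of the TF-IDF and cosine-similarity step described above (unigram terms only; the decomposition into S, U and V used in the cited work is omitted here for brevity):

```python
import math
from collections import Counter


def tf_idf_vectors(docs):
    """Build simple unigram TF-IDF vectors for a list of documents.

    This is an illustrative sketch of the pipeline described above,
    not the cited system's implementation.
    """
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter(term for toks in tokenized for term in set(toks))
    # Smoothed IDF so terms appearing in every document keep weight 1.
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: tf[t] / len(toks) * idf[t] for t in tf})
    return vectors


def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

A query is ranked against the knowledge base by computing its cosine similarity to each stored question and returning the best match.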


However, the previously proposed designs did not focus on understanding the intensity of the illness that the user is suffering from. Our proposed design aims to ask the user more questions until it becomes confident about the probable illness. Our chatbot design also has the concept of a threshold level that helps it detect the intensity of the problem and connect the user directly to a doctor if the problem seems too serious for the chatbot to handle. In such a case, the chatbot connects the user directly with the doctor and also provides the doctor with the user's chat history [12]. Until the doctor is available to chat, the user is provided with a first-aid solution. To trigger this process, the seriousness score should reach or rise above the threshold level.
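The escalation rule just described might be sketched as follows; the score scale, the threshold value and the returned field names are hypothetical, since the paper does not specify them.

```python
def triage(seriousness_score, threshold=0.8):
    """Sketch of the threshold-based escalation described above.

    If the chatbot's seriousness score reaches the threshold, hand the
    user over to a doctor with the chat history and give first-aid
    guidance meanwhile; otherwise keep asking questions. All values
    are illustrative.
    """
    if seriousness_score >= threshold:
        return {
            "action": "connect_doctor",
            "share_chat_history": True,   # doctor receives the transcript
            "interim": "first_aid_advice",
        }
    return {"action": "continue_questions", "share_chat_history": False}
```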

3 Work Done Building a chatbot with Python requires a clear grasp of the basics of natural language processing (NLP). The most reliable chatbot frameworks are accessible at woebot.ai, dialogflow.com, qnamaker.ai, core.rasa.ai, wit.ai and botkit.ai. The basic terminologies applied to chatbots are intents, entities, utterances, training of the bot and the bot's confidence score. As we are using Python to design the chatbot, we can run it on Anaconda, Jupyter Notebook or Python itself; we have adopted Python version 3.7 on the Jupyter Notebook. Why do we need to understand natural language processing to build a chatbot? NLP is the field of artificial intelligence that helps the computer recognize and analyze human language, and to implement NLP we should understand natural language understanding (NLU), which is a subset of the bigger picture of NLP. We can also create a chatbot without NLP, but its range will be restricted. NLP processes the raw data, which is why it is considered the chatbot's brain: it cleans the input and maps it to appropriate actions.
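To make the intent/utterance/confidence-score terminology concrete, here is a toy intent matcher. It is a deliberate simplification for illustration: the sample intents and the word-overlap scoring are our own, not how the frameworks listed above compute confidence.

```python
def intent_confidence(utterance, intents):
    """Score each intent by the fraction of one of its training-phrase
    words found in the utterance; return the best intent and its
    confidence score in [0, 1]."""
    words = set(utterance.lower().split())
    best, best_score = None, 0.0
    for name, phrases in intents.items():
        for phrase in phrases:
            phrase_words = set(phrase.lower().split())
            score = len(words & phrase_words) / len(phrase_words)
            if score > best_score:
                best, best_score = name, score
    return best, best_score


# Hypothetical intents with a few training utterances each.
intents = {
    "report_symptom": ["i have a headache", "i feel pain"],
    "greeting": ["hello there", "hi"],
}
```

A real framework would also extract entities (e.g. the symptom name) and only act on an intent whose confidence exceeds a configured minimum.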

3.1 spaCy spaCy is a library for NLP. It implements intuitive APIs whose methods are guided by deep learning models. Along with English, French and Dutch, spaCy also offers German and Portuguese models, as well as multi-language NER. The features of spaCy are listed on its official Web site.


3.2 Introduction to Dialogflow Dialogflow communicates with users in distinct ways through voice- and text-based conversational interfaces, such as chatbots. Dialogflow is powered by AI and keeps interacting with users on Web sites, mobile applications, Google Assistant, Facebook Messenger, Amazon Alexa and other related platforms. The data regarding diseases, symptoms and remedies needs to be stored in an organized manner to make it easier for the engine to access. Our chatbot stores data in XML format: the idea is that medical professionals write this data and feed it to our chatbot, whose engine then interacts with it. We have a separate division for every disease present in our records. The data is properly organized and can be parsed by standard XML parsers [13]. In Dialogflow there is a notion of agents, which are adequately described as natural language understanding (NLU) modules. Agents can also be composed to achieve a communication flow in a particular way, with the guidance of contexts, intent priorities, slot filling, responsibilities and fulfillment via webhook. The chatbot is developed here for healthcare purposes as an Android application. We will evaluate our chatbot design using general word percentage (GWP) analysis, combining the result with a terminology detection test that shows the average rate at which our chatbot detects medical terminologies as general non-medical terminology increases, compared with other chatbots [14].
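Since the chatbot stores its disease records as XML with one division per disease, parsing them with a standard XML parser might look like the sketch below. The element names and the sample record are hypothetical, as the paper does not give the schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical record layout; the paper only states that disease data
# is stored as XML with a separate division per disease.
RECORDS = """
<diseases>
  <disease name="migraine">
    <symptom>headache</symptom>
    <symptom>nausea</symptom>
    <remedy>rest in a dark room</remedy>
  </disease>
</diseases>
"""


def lookup(disease_name, xml_text=RECORDS):
    """Return the symptoms and remedies recorded for a disease,
    or None if the disease is not in the records."""
    root = ET.fromstring(xml_text)
    for disease in root.iter("disease"):
        if disease.get("name") == disease_name:
            return {
                "symptoms": [s.text for s in disease.findall("symptom")],
                "remedies": [r.text for r in disease.findall("remedy")],
            }
    return None
```

The engine can answer a user once an illness name is detected by looking it up in this structure and reading back the stored information.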

4 Result The outcome of the scheme is as follows: the user has a message conversation with the healthcare chatbot and comes to recognize the likely disease, and the user can further retrieve their chat history, which is maintained in the database. Thus, chatbots can serve millions of people in voicing their health concerns and can provide them with personalized attention.

5 Conclusion From several analysis journals, it is expected that a standard medical chatbot is very user-friendly and can be easily used by anyone who knows how to interact with a chatbot. A healthcare chatbot delivers personalized examinations based on symptoms. Soon, the bot's diagnosis performance can be improved by adding support for additional therapeutic characteristics, such as location, severity of symptoms, and duration. The implementation of a truly personalized therapeutic assistant relies mainly on usage data and AI algorithms. This current

F. A. Habib et al.

implementation of customized assistance would help preserve many lives while building therapeutic awareness. As we understand, the coming era is the era of messaging applications, because people are likely to spend more time on them than on any other application. Thus, a preventive chatbot has an all-inclusive and gigantic expected scope. No matter where in the world people live, all can receive this medical consultation; the only requirement is a desktop or smartphone with an Internet connection. The ability of the chatbot can be improved by expanding its intelligence with more domains of data input. Moreover, voice communication can be merged into the system to make it more convenient to use.

Acknowledgements We appreciate our project guide, Asst. Prof. Awab Fakih, who contributed information and experience that greatly aided the investigation. We thank Asst. Prof. Afzal Shaikh, (I/c) HoD, Department of Electronics and Telecommunications, Anjuman-I-Islam Kalsekar Technical Campus, for comments that improved the manuscript. We also thank the director of AIKTC, Dr. Abdul Razak Honnutagi, for his support; he always inspires students to progress from the perspective of technical research. We thank our parents for their lifetime support.

References
1. P. Kaur, M. Sharma, A survey on using nature inspired computing for fatal disease diagnosis. Int. J. Inf. Syst. Model. Des. 8(2), 70–91 (2017)
2. S. Hoermann, K.L. McCabe, D.N. Milne, R.A. Calvo, Application of synchronous text based dialogue systems in mental health interventions: systematic review. J. Med. Internet Res. 19(8) (2017)
3. D. Madhu, C.J. Neeraj Jain, E. Sebastain, S. Shaji, A. Ajayakumar, A novel approach for medical assistance using trained chatbot, in International Conference on Inventive Communication and Computational Technologies (ICICCT 2017)
4. R. Dharwadkar, N.A. Deshpande, A medical chatbot. Int. J. Comput. Trends Technol. (IJCTT) 60(1) (2018)
5. F. Naaz, F. Siddiqui, Modified n-gram based model for identifying and filtering near-duplicate documents detection. Int. J. Adv. Comput. Eng. Networking 5(10) (2017)
6. H. Dhebys, A. Eka, A. Luqman, W.W. Dimas, A. Indinabilah, N-gram accuracy analysis in the method of chatbot response. Int. J. Eng. Technol. 7(152) (2018). https://doi.org/10.14419/ijet.v7i4.44.26973
7. N. Jyothirmayi, A. Soniya, Y. Grace, C. Reddy Kumar Kishor, B.V. Murthy Ramana, Survey on chatbot conversational system. J. Appl. Sci. Comput. 6(1) (2019)
8. S. Divya, V. Indumathi, S. Ishwarya, M. Priyasankari, S.K. Devi, A self-diagnosis medical chatbot using artificial intelligence. J. Web Dev. Web Designing 3(1) (2018)
9. S.A. Abdul Kader, J.C. Woods, Survey on chatbot design techniques in speech conversation systems. Int. J. Adv. Comput. Sci. Appl. 6(7) (2015)
10. B.E. Comendador, B.M. Francisco, J.S. Medenilla, S.M. Nacion, T.B. Serac, Pharmabot: a pediatric generic medicine consultant chatbot. J. Autom. Control Eng. 3(2), 137–140 (2015). https://doi.org/10.1270/joac.3.2.137-140
11. H. Kazi, B.S. Chowdhry, Z. Memon, Med-chatbot: an UMLS based chatbot for medical students. Int. J. Comput. Appl. 55, 1–5 (2012). https://doi.org/10.5120/8844-2886
12. VLDB ’99: Proceedings of the 25th International Conference on Very Large Data Bases, ed. by M.P. Atkinson, M.E. Orlowska, P. Valduriez, S.B. Zdonik, M.L. Brodie (Morgan Kaufmann Publishers Inc., San Francisco, 1999), pp. 302–314. ISBN: 1-55860-615-7
13. A. S, D. John, Survey on chatbot design techniques in speech conversation systems. Int. J. Adv. Comput. Sci. Appl. (2015). https://doi.org/10.14569/ijacsa.2015.060712
14. L. Denoyer, P. Gallinari, The Wikipedia XML corpus, in Comparative Evaluation of XML Information Retrieval Systems, INEX 2006, ed. by N. Fuhr, M. Lalmas, A. Trotman (2007)

A Comparison Analysis of Mobile Forensic Investigation Framework Waheedullah Asghari, A. Suresh Kumar, Ajay Shankar Singh, and K. Thirunavukkarasu

Abstract Nowadays, nearly every person owns a smartphone, and their number grows day by day; with this growth, crime does not stay the same, because so many activities are carried out on this one device: calling, chatting, Web browsing, online transactions, mailing, entertainment, and much more. It is therefore no easy job for professional forensic investigators to identify a criminal from the many aspects of smartphones, given the many varieties of mobile device manufacturers, differing hardware components, different operating systems, and the many programs in use. Numerous tools, process models, and frameworks have been designed and developed for investigation, but none offers a secure solution for investigating all the different mobile devices. Here, we review some current process models used for mobile device forensic investigation and work toward selecting a process model that covers all the features of mobile devices with maximum security, so that evidence cannot be modified or hacked during or after the investigation and full, accurate results can be drawn from it; for this purpose, a database based on private or federated blockchain technology is recommended. Keywords Mobile devices · Investigation · Process model · Framework · Forensic · Blockchain

W. Asghari (B) · A. Suresh Kumar · A. S. Singh · K. Thirunavukkarasu Galgotias University, Greater Noida, Uttar Pradesh 201308, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_58



1 Introduction According to one analysis, the number of mobile devices grew from 122.32 million in 2007 to 1560.85 million in 2020 [1]. It will keep rising annually, as each year the structure, format, features, and configuration of devices are renewed for better functionality and reliability. Likewise, a huge number of software companies are developing up-to-the-minute applications in every domain, such as social media, net banking, telecommunication, entertainment, search engines, and business. For a forensic investigator, the main point is the evidence to be used in court against a criminal; that evidence is nothing but the data generated on mobile devices. This data can hold great information to be utilized in further investigation, and critical data may be found on different types of mobile devices such as:

• Personal digital assistants (PDAs)
• Smartphones
• Tablets
• Apple iOS devices
• Google Android devices
• Windows Phone devices

Data can be collected by two methods, local data acquisition and remote data acquisition. Both can work independently, but neither alone is applicable to all smartphones; to ensure full integrity in every case, applying the pair of methods together resolves the issue [2]. Fetching the data is not enough: there must also be a procedure to analyze this data and produce the final result. The process model chosen for inspecting any device depends upon the case, the data type, and the mobile device; most models include the processes of preparation, collection, examination, analysis, reporting, presentation, and archiving. During such an investigation, huge amounts of data are generated, and this data is an asset: if an intruder gets to it, the investigation may end with a negative or incorrect outcome. Blockchain, a new technology introduced for secure transactions of cryptocurrencies such as Bitcoin and Ethereum, is now used in several other areas such as health care, IoT, corporate, and business, where its main strengths are security, decentralization, immutability, and transparency. Based on these features, this technology can be used in a mobile forensic investigation process model, helping the investigator achieve an accurate result without any interference with, or modification of, the generated data.


2 Related Work The harmonized digital forensic investigation process model was tested on Android-based mobile devices to assess its acceptance for standard investigation, using the following process:

• Incident detection
• First response
• Planning process
• Preparation process
• Incident scene documentation
• Potential digital evidence identification
• Digital evidence collection
• Digital evidence transportation
• Digital evidence analysis
• Digital evidence interpretation
• Report writing
• Presentation
• Investigation conclusion process

Testing the above process showed that it is somewhat admissible, but some processes still need to be added, such as documenting, preserving the evidence, obtaining the evidence, preserving the chain of custody, and analyzing the sub-processes in depth, to obtain a complete investigation process model [3]. This process model takes more time to reach a conclusion, and the additional processes will make it laborious for the forensic investigator. In another approach, a tool is used for data acquisition and documentation: first, test data is generated on the mobile devices; then a forensic tool performs data acquisition and documentation; afterwards, the results are compared based on the time of data acquisition, the type of data acquired against the test (categorized by device model and by forensic tool), and how acceptable the data is as evidence [4]. However, this benchmark for evaluating forensic tools for mobile devices is not enough for the investigation, and a more standard framework has to be designed for this purpose.

2.1 Four-Phase Methodology The four-phase methodology is used to analyze a mobile phone's internal and external memory to find the evidence involved in the crime. The phases are:


Seizing In this phase, the mobile phone of the criminal involved in the crime is seized so that the evidence cannot be modified; seizing means the device has no further contact with any network, and its main point is to preserve the evidence.

Acquisition After the device is seized, it is forwarded to the forensic laboratory to make a duplicate of it using a software imaging tool or a hard-drive duplicator; the original device is then kept safe so that it is not damaged and is protected from tampering. The model and type of the device are also identified here.

Analysis In the analysis phase, different methodologies and tools are used to recover any deleted data from the duplicate hard-disk images; since the findings may include various types of data, such as audio or image files, a different tool is used for each type for further analysis.

Reporting This phase presents the analysis results, recording what happened to the evidence, for use in court; the report is written and documented [5].

Many process models have been developed, but none fulfills an appropriate forensic investigation for Windows mobile devices. Here, the author proposes a twelve-stage model consisting of: preparation and securing the scene, documentation, PDA model, communication shielding, evidence collection (volatile memory, non-volatile memory, offset/cloud memory), preservation, examination, analysis, presentation, and review. This model is compared with several other models, namely the NIJ law enforcement model, the DFRWS model, the abstract digital forensic model, the IDIP model, and the systematic digital forensic investigation model; to advance and complete the necessary steps, offset/online storage, cell site analysis, and mode selection scheduling are added to this model.
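In the acquisition phase above, protection against tampering is commonly demonstrated by hashing: the bit-for-bit copy is verified against the original by comparing cryptographic digests, so any later change to either image is detectable. A minimal sketch with SHA-256 from the Python standard library; the file paths are placeholders.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Digest a possibly large image file in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def copy_is_faithful(original: str, duplicate: str) -> bool:
    """True when the forensic duplicate matches the seized original bit-for-bit."""
    return sha256_of(original) == sha256_of(duplicate)
```

Recording the digest at acquisition time also lets later phases re-verify that the image they analyze is still the one that was acquired.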
It is a general solution that accommodates changes in technology and serves as a reference point in the investigation process [6]. With the additional processes added, it works perfectly for Windows mobile devices, a specific platform with a big market share in today's world, but other mobile devices need a more advanced and less time-consuming model.

3 Mobile Forensic Process and Types The InfoSec Institute describes four steps for evidence investigation:

• Identification
• Acquisition


• Examination
• Reporting

with four types of process, which are:

• Manual method
• Physical method
• Logical method
• File system method

The choice among these types of process is based on the encryption level, the operating system, the availability of any password, and the mobile phone's model, manufacturer, and make [7]. For an investigator, it is fine to follow the above steps and processes, but for a better and less time-consuming investigation, a proper process model is needed that can be used on all types of mobile devices at once with accurate results. A legitimate procedure for mobile forensics has the following phases:

1. Conception phase: conceptualizing legislation and principles.
2. Preparation phase: five further operations are performed, namely collecting information about the criminals; deciding the places, persons, and dates to search; preparing the tools; establishing the profession of the investigator; and education before the investigation.
3. Operation phase: this provides guidelines for the crime scene and the laboratory, each followed by collection, analysis, and forensics.
4. Reporting phase: first the investigator writes, presents, and gives a briefing, then authenticates the forensic result and prepares for court; finally, the case is documented and lessons are learnt [8].
Each of these phases continues with further sub-processes. The procedure is legitimate but not perfect, owing to the new trend in mobile devices: new devices are more complex and have more functionality and higher performance. A process model for investigating all kinds of mobile devices has also been proposed, in which the acceptable steps and phases of other models are gathered together for a proper investigation; the NIST guidelines are also kept in mind, while other extra and pointless processes are omitted. In conclusion, other models are compared with this proposed model, which contains the processes: preparation, handling and securing the evidence, data acquisition, documentation, preservation, examination and analysis, presentation, and review [9].


4 Problem Statement As Table 1 shows, each process model uses different phases and procedures for the investigation, but data integrity, privacy, authenticity, timeliness, and accuracy of preservation are not addressed in any of these methods: an intruder may get involved in the investigation and demolish any step of the process, so that the investigator does not obtain fully accurate results at the conclusion. If any one step of the process undergoes even a small modification, this has a huge impact on the final result. For this reason, a fully secure investigation process is needed, which can be achieved with a private or federated blockchain using the Ethereum or Hyperledger platform. Because the forensic expert completes the investigation step by step, there is no need to go back to any step for modification, and all the data can be documented and stored, whether structured or unstructured. At each step, blockchain principles can be applied to keep the model secure. Besides lacking explicit security measures, current process models also fail to address authenticity, efficiency, immutability, and decentralization.

Table 1 Comparison of mobile forensic investigation process models (process models compared: SFIPM, WMDFM, NIST, HDFI, and USFIPM and NIST; phases compared: documentation; preparation; handling evidence and securing the scene; data acquisition; preservation; examination and analysis; presentation; review; survey and recognition; communication scheduling; volatile evidence collection; non-volatile evidence collection; mode selection shielding; offset/online storage; cell site analysis)


Fig. 1 Blockchain process model

5 Proposed Model The proposed model (see Fig. 1) will be implemented on the Ethereum or Hyperledger platform for the purpose of securing the data. In this model, each step of the process is inserted into a block, which is connected to the previous block through hashing, as blockchain technology does. The proposed process model starts with the preparation process. In this process, the first necessary points are written down: who the forensic investigator is, the mobile device type, which service provider is used, the forensic workstation to be used, the first-responder toolkit, and the types of hardware and software used for the investigation. The answers to all these questions are documented and inserted into the first block, the documentation block, as generated data with tag numbers attached. For the second block, the next process of the model is inserted with its generated data along with the hash of the documentation block, and the same procedure continues up to the last step of the investigation.
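The hash-linking described above can be sketched in a few lines: each investigation step becomes a block that carries the SHA-256 hash of the previous block, so a later modification of any step invalidates the chain. This is an illustrative sketch only; the field names are invented, and a real deployment on Ethereum or Hyperledger would use those platforms' own transaction formats.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_step(chain: list, step: str, data: dict) -> None:
    """Add an investigation step, linked to the previous block by its hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"step": step, "data": data, "prev_hash": prev})

def chain_is_valid(chain: list) -> bool:
    """Recompute every link; any edited block breaks the links after it."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_step(chain, "preparation", {"investigator": "J. Doe", "device": "Android phone"})
append_step(chain, "data_acquisition", {"image_sha256": "..."})
append_step(chain, "examination_and_analysis", {"notes": "deleted SMS recovered"})
```

Tampering with any earlier block, for example editing the device name in the preparation block, makes `chain_is_valid` return False; this is exactly the immutability property the proposed model relies on.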

6 Conclusion Data is an asset that must have full security in every environment, and blockchain is a new technology offering maximum security, increased efficiency, greater transparency, and improved traceability; building such a chain of blocks in any setting raises its standard. In this paper, various process models for digital and mobile forensic investigation are reviewed, and none of them is a secure investigation process: a criminal may modify the evidence at any step of the process, and even a small modification could have a huge impact on the result of the investigation. A secure model is proposed by deploying blockchain; in the near future, the process will be implemented using the Ethereum or Hyperledger platform.


References
1. STATISTA, https://www.statista.com/statistics/263437/global-smartphone-sales-to-end-users-since-2007/
2. S.H. Mostasebi, A. Dehghantanha, Towards a unified forensic investigation framework of smart phones. Int. J. Comput. Theor. Eng. 5(2) (2013)
3. S. Omeleze, H.S. Venter, Testing the harmonized digital forensic investigation process model using an android mobile phone, in 2013 Information Security for South Africa. IEEE (2013)
4. M. Yates, H. Chi, A framework for designing benchmark of investigation digital forensic tool for mobile devices, in 49th ACM Southeast Conference (2011), pp. 24–27
5. D.M. Sai, N.R.G.K. Parsad, S. Dekka, The forensic process analysis of mobile device. Int. J. Comput. Sci. Inf. Technol. 6(5) (2015)
6. A. Goel, A. Tyagi, A. Agarwal, Smartphone forensic investigation process model. Int. J. Comput. Sci. Secur. (IJCSS) 6(5) (2012)
7. INFOSEC, https://resources.infosecinstitute.com/category/computerforensics/introduction/mobile-forensics/the-mobile-forensics-process-steps-types/#gref
8. I.L. Lin, H.-C. Chao, S.-H. Peng, Research of digital evidence forensic standard operating procedure with comparison and analysis based on smartphone, in International Conference on Broadband and Wireless Computing, Communication and Applications (2011)
9. M. Sadiq, M.S. Iqbal, K. Naveed, M. Sajjad, Mobile devices forensic investigation: process models and comparison. Int. Sci. J. Theor. Appl. Sci. 1(33) (2016)

Review in Energy Harvesting for Self-Powered Electronics Sensor Krishna Mittal and Deepak Sharma

Abstract In recent years, energy harvesting has been growing very fast, driven by the need of wireless electronics and wearable devices for lifetime energy. Batteries do not provide lifetime energy and are heavy, so we cannot use them for wearable electronics. The idea arose from cases where batteries of electronic sensors cannot be changed, such as sensors placed inside the human body, so researchers have focused on these issues and built electronics that can harvest energy for electronic sensors. They built sensors that take energy from human body temperature: when body temperature changes, heat flows from the hotter to the colder side, and the temperature difference makes the heat push electrons, inducing a voltage and a current. Different types of energy sources are available for harvesting energy, such as solar energy, radio-frequency conversion, and vibration-to-electrical conversion. Among these methods, vibration is the most powerful and efficient energy source, and it is captured with piezoelectric devices. These devices are based on piezoelectric sensors, which use the piezoelectric effect to convert changes in temperature, length, pressure, and acceleration into electrical energy. For such devices, we require a circuit to store as much energy as possible, called a conditioning circuit. Sensors that work self-powered are called "self-powered sensors." The outcome of this review is to identify issues, future scope, advantages, and applications, and to understand how the conditioning circuit works. Keywords Piezoelectric effect · Self-powered sensors · Energy harvesting method

1 Introduction Energy harvesting or energy scavenging is the process by which energy is generated from external sources such as solar energy, kinetic energy, thermal energy, and wind energy,

K. Mittal (B) · D. Sharma Poornima Institute of Engineering and Technology, Sitapura, Jaipur, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_59


etc., and this energy is stored for small portable and wearable devices. We use batteries to power these devices, but they are heavy and do not have a long lifetime. For biomedical uses we cannot use such batteries, because replacing them again and again is a problem; some sensors are placed inside the human body, and batteries contain poisonous toxins, making them harmful to the human body and hazardous to the environment. We therefore require sensors that are self-powered [3]. Piezoelectric sensors are very efficient at harvesting energy from vibration into electrical energy, so researchers are working to build circuits that harvest energy from the environment and from human bodies. There are many other energy sources, such as solar and thermal energy, but solar energy is not very efficient and requires a much larger unit area for small currents, while radio-frequency conversion affects the environment with very harmful radiation and requires very elaborate circuits. So piezoelectric (vibration-to-electric) conversion is the best way to harvest energy for self-powered sensors, and it requires a circuit to store the energy, known as a conditioning circuit [4]. These circuits are made up of basic electronic components such as capacitors, inductors, and resistors. The vibrations involved are low-level vibrations of about 300 Hz, and the output from these vibrations is proportional to A²/ω.
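The A²/ω proportionality quoted above can be written out directly. In this sketch, the constant k stands in for all unspecified device-dependent factors, so only relative comparisons between operating points are meaningful.

```python
import math

def relative_output(amplitude: float, freq_hz: float, k: float = 1.0) -> float:
    """Harvester output proportional to A^2 / omega, with omega = 2*pi*f."""
    return k * amplitude ** 2 / (2 * math.pi * freq_hz)

# Doubling the amplitude quadruples the output; raising the frequency lowers it.
at_300hz = relative_output(1.0, 300.0)
doubled_amplitude = relative_output(2.0, 300.0)
```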

1.1 Piezoelectric Energy Through Human Motion Researchers are focusing on harvesting energy from human and animal motion. They are developing biocompatible and flexible devices that can harvest energy from in vitro and in vivo human motion. Based on walking, piezoelectric energy harvesting has been triggered through the knee joint, the center of mass, the heel, etc. The devices include shoes, floor blankets, bicycles, etc. (Figs. 1 and 2).

1.2 Power Generation Through a Bicycle People who live in mountain areas, such as soldiers at border posts, have to stay connected with the rest of the world, but power or electricity is not available in these areas. So companies have built miniature pedal-cycle generators to produce some power for the use of two-way radios. Almost 70 W can be generated by this method; these are called foot-driven generators.


Fig. 1 Smart footwear. Source sciencedirect.com

Fig. 2 Shoe mounted piezoelectric generator. Source sciencedirect.com

2 Working Principle The principle is the conversion of vibration into electrical energy. Crystalline materials generate a small amount of energy when an applied force changes their shape; quartz is the best example of such materials. When force is applied to these crystals, a small voltage is induced, which is then amplified. Microphones and lighters also use this principle (Fig. 3).


Fig. 3 Piezoelectric energy harvesting. Source rfwireless-world.com

2.1 This Work in Two Modes Transverse mode—this mode depends on the piezoelectric constant, which indicates the stress–strain direction. Typically, piezocontractors are flat components; their displacement occurs perpendicular to the polarization direction and to the electric field [6]. The displacement of contracting actuators is based on the transverse piezoelectric effect, whereby up to approximately 20 µm is nominally achieved. Multilayer elements offer decisive advantages over single-layer piezoelements in their technical realization: due to the larger cross-sectional area, they generate higher forces and can be operated at a lower voltage.

Longitudinal mode—in longitudinal piezoactuators, the electric field in the ceramic layer is applied parallel to the direction of polarization. This induces a strain or displacement in the direction of polarization. Individual layers give relatively small displacements, so to achieve technically useful displacement values, stack actuators are constructed in which many individual layers are mechanically connected in series and electrically connected in parallel. Longitudinal stack actuators are highly efficient at converting electrical into mechanical energy [7]. They achieve nominal displacements of around 0.1–0.15% of the actuator length (Fig. 4).
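The nominal stack-displacement figure quoted above (roughly 0.1–0.15% of actuator length) translates into a one-line estimate; the 20 mm stack length used below is just an example value.

```python
def stack_displacement_um(length_mm: float, strain: float = 0.0015) -> float:
    """Nominal displacement in micrometres for a longitudinal stack actuator.

    strain is the fractional displacement, 0.001-0.0015 per the figures above.
    """
    return length_mm * 1000.0 * strain  # mm -> um, scaled by the strain

# A 20 mm stack at the 0.15% upper bound gives about 30 um of travel.
travel = stack_displacement_um(20.0)
```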

3 Design and Applications 3.1 Smart Textile Smart textiles are a wearable technology that plays a huge role in energy harvesting. Companies focus on flexibility to wear and on durability under bending and applied force. Christine et al. developed 2D and 3D structures and studied pressure on fabrics. Such a device can generate energy from the finger and attach to the palm of a human hand.


Fig. 4 Longitudinal and transverse mode. Source researchgate.com

3.2 Smart Footwear This footwear is designed so that when a human walks or runs, electricity is produced by the pressure of the foot, with the circuit implemented in the shoes; these are called smart shoes. A product should always be cost-effective, so companies make angle-sliding shoes that harvest more energy and reduce metabolic cost.

3.3 Smart Skin Smart skin can measure temperature, location, pressure, and humidity. Self-powered smart skin determines location based on the single-electrode effect and electrostatic induction (Fig. 5).


Fig. 5 Smart skin. Source researchgate.com

4 Issues with Implementation and Design This harvesting technology is growing fast and generates enough voltage for self-powered electronic sensors, but there are still many issues in implementing it. The biggest challenge is that human motion has an ultra-low frequency of about 2 Hz, while the working bandwidth of PEH devices is quite high, which makes matching the motion speed of humans difficult; the best outputs are obtained when the operating frequency is lower than the resonant frequency [8]. These devices also have drawbacks such as losses from noise and plectra. Another challenge is that humans perform many actions such as stretching, bending, and sliding, so we cannot get a linear output. To solve challenges such as flexibility, biocompatibility, and biodegradability, companies have made products like cantilever-beam designs and piezoelectric knee-joint harvesters, though these too have flexibility, biocompatibility, and biodegradability problems. For the flexibility problem, Luo et al. made a ferroelectric material for the shoe sole; for the biodegradability problem, Kim et al. made a wool-keratin-based piezoelectric harvester for wearable devices.

5 Conclusion In this paper, I give a review of energy harvesting for self-powered electronic sensors from human motion. Ten papers were reviewed in the area of energy harvesting techniques and devices, and their issues and challenges. There are many ambient energy sources, but the best energy source is vibration; we convert energy from vibration


to electric energy [9]. The devices include the smart wristwatch, smart skin, smart footwear, and smart textiles. The importance of wearable electronics is growing day by day, and self-powered electronic sensors give the best results in biomedical uses. We first discussed the principle of piezoelectric energy harvesting. There are many challenges, such as flexibility, biocompatibility, and biodegradability, and we summarized solutions for them, such as the knee-joint harvester, biodegradable wool wearable cloth, and pedal power [10].

References
1. T. Starner, J. Paradiso, Human generated power for mobile electronics, in Low Power Electronics Design, ed. by C. Piguet (CRC Press, Boca Raton, FL, 2004), ch. 45
2. D. Dunn-Rankin, E.M. Leal, B.D. Walther, Personal power systems. Prog. Energy Combust. Sci. 31(5–6), 422–465 (2005)
3. P.C.-P. Chao, Energy harvesting electronics for vibratory devices in self-powered sensors. IEEE Sens. J. 11(12), 3106–3121 (2011)
4. S. Khalid, I. Raouf, A. Khan, N. Kim, H.S. Kim, A review of human-powered energy harvesting for smart electronics: recent progress and challenges (Dongguk University, Seoul, 2019)
5. C. Sun, G. Shang, Y. Tao, Z. Li, A review on application of piezoelectric energy harvesting technology: piezoelectric energy harvesting from human motion (Suzhou Vocational University, China, 2012)
6. https://www.piceramic.com/en/piezo-technology/properties-piezo-actuators/displacement
7. J.A. Paradiso, Human generated power for mobile electronics, in Low Power Electronics Design (Media Lab, MIT, Cambridge)
8. B. Meng, Z. Su, S.A. Shankaregowda, Self powered smart skin. ACS Nano 10(4), 4083–4091 (2016). Epub 2016 Mar 28. https://doi.org/10.1021/acsnano.5b07074
9. https://iopscience.iop.org/article/10.1088/1361-6463/ab0532/meta
10. Z. Lou, L. Li, L. Wang, Recent progress of self powered sensing system for wearable devices (2016). https://onlinelibrary.wiley.com/doi/abs/10.1002/smll.201701791

Review in Smart Oculus Lenses Rashmi Jayswal and Nikita Gautam

Abstract Smart systems and applications play an important role in everyday life, and this feature makes them more valued and even more attractive to users. In communication, access to information on demand, regardless of place and time, is almost common practice. This ongoing advancement in technology is leading to new wearable devices such as smart lenses. Normally, devices are designed for the benefits and functionalities they can provide, along with the safety and the security aspect. Smart glass is based on components that are available in present-day cell phones, including the CPU, sensors, and operating system, and in this manner it has comparable security risks. This paper examines this future technology with respect to safety and security. The oculus lenses are equipped with sensors, Wi-Fi, and Bluetooth to provide options and accessibility features directly before your eyes. This technology makes it possible to read messages, surf the web without any interruption, gather information about the environment, and more, and that is only the tip of the iceberg. The future of IoT is boundless. The oculus lens is similar to Google's most ambitious venture, the Glass. Smart oculus lens technology protects your private and confidential documents from prying eyes: you can review and send private and confidential documents with confidence. It provides continuous authentication across all your devices, an advanced spotlight feature to sign documents with facial recognition, and it also helps prevent cyber fraud. Keywords Oculus lenses · Focal point · Sensors

1 Introduction The smart lenses act like your own personal computer. They will provide a holographic user interface that is not unlike your traditional smartphone: you can swipe around, use apps in thin air, and have the entire Internet at your fingertips. R. Jayswal (B) · N. Gautam Poornima Institute of Engineering and Technology, Sitapura, Jaipur, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_60


The smart oculus lenses should be able to pair with both Android and iOS devices to make your smartphone pop off the screen, so you will be able to answer calls and send messages just by poking around. They can also pair with a wireless keyboard so you can type documents, emails, and messages [1]. Smart lens technology delivers what until now was found only in movies, for example Iron Man: the capacity to manipulate 3D objects in mid-air using only your fingertips. A typical pair of eyeglasses or sunglasses incorporates two lenses, one lens positioned before each eye of the wearer when the eyeglasses are worn on the wearer's head. In some alternative designs, a single elongated lens may be used instead of the two separate lenses, the single elongated lens spanning before both eyes of the wearer when the eyeglasses are worn [1]. The lenses of a pair of eyeglasses are normally colorless and optically transparent, while the lenses of a pair of sunglasses are commonly colored or tinted in some way to partially attenuate the light that passes through them. Nevertheless, throughout the remainder of this specification and the appended claims, the terms "eyeglasses" and "sunglasses" are used largely interchangeably unless the particular context requires otherwise.

2 Issues with the Smart Oculus Lenses

2.1 Computer Vision Syndrome

Staring at digital screens (tablet, PC, cell phone, TV, iWatch, car GPS) affects our eyes so seriously that health specialists have coined the term "Computer Vision Syndrome." Although the term was at first used for office workers who spent hours in front of their PCs, today the extended symptoms related to CVS affect millions around the world regardless of age or occupation. Those can include:

• Headaches
• Difficulty focusing
• Itchy, burning, or watery eyes
• Dry eye
• Double vision
• Sensitivity to light (also called photophobia).

Any of these can be debilitating, and they are common when the eyes are under strain. When you stare at electronic devices, several things happen which increase the likelihood of CVS symptoms.

You blink less. Normally, the human eye under typical conditions blinks 12–15 times every minute, but when your brain focuses on a screen, it can make you forget to blink, and the count can go down to a mere 7–8 blinks each minute. Each time you blink, you spread a layer of tears over your eyes, so less blinking means your eyes get less lubrication, which leads to dry and sore eyes [2].

You use unfavorable angles. Rather than reading printed content, where you usually look down to read, when staring at a screen the eyes are typically focused straight ahead. When looking down while reading, your eyelid covers a greater portion of your eye than when you look straight on [3]. This means the screen-viewing angle exposes more of your eye to the air's drying effects. This unfavorable lid position, together with a reduction in blinking, leaves you with uncomfortably dry eyes and even visual fatigue. Cell phones are changing some of this, but then consider all the careful selfies people take just to get the perfect pic.

You get really close. You may have noticed that when reading on your phone you generally hold it closer to your eyes than you would regular printed content, and you are not the only one. A recent report found that subjects held their phones about 12.5–14 in. from their eyes to read mobile content, while the typical distance for printed content is nearly 16 in. This means your eyes are forced to work much harder to focus at short distances, having to turn almost cross-eyed, which tires out the eyes.

2.2 Retina Damage If you do not bring your phone or laptop with you when you head to sleep, you are in the minority. Practically 60% of adult Americans have adopted this habit and replaced their alarm clocks with digital technology. This habit, however, can cause significant harm to your eyes [3]. According to the American Macular Degeneration Foundation, direct exposure to the blue light emitted by LED devices can actually hurt the retina. The retinal damage that occurs from staring at blue light, especially at night, can lead to macular degeneration and damage your central vision. As you age, this becomes considerably more critical, and your retina becomes increasingly susceptible to harm, which can eventually leave you with additional age-related macular degeneration.


2.3 Potential Cataracts

Though more research is required concerning the association between cataracts and blue light, experts are beginning to see patients in their mid-30s with eyes as cloudy from cataracts as people in their mid-70s. Taken as a whole, this is not definitive proof that exposure to blue light causes cataracts, but there may be an association that merits further study [2].

Security services: Along these lines, the system ought to implement security services as follows:

1. Confidentiality: Ensure that information is not made available to unauthorized individuals, entities, or processes, by encrypting traffic between the device and the trusted party.
2. Integrity: Ensure that data has not been changed by an unauthorized user.
3. Availability: Guarantee that services are accessible and usable by authorized users.
4. Authentication: The assurance that an entity is who it claims to be, using authentication protocols.
5. Access control: Prevention of unauthorized use of a resource, including the prevention of use of a resource in an unauthorized way.
6. Non-repudiation: Use of digital signatures to prevent denial of actions and to guard against unlawful access.
7. Notarization: Registration of data with a trusted third party to preserve the accuracy of its characteristics.

3 Application of Smart Lenses 3.1 Smart Eye Technology for Businesses with Significant Document Security Challenges It is practically impossible to ensure security and prevent possible fraud when sending a sensitive, password-protected document, such as an invoice, contract, or strategic plan, as an email attachment. Your private records can be forwarded, shared, or printed, and there is nothing you can do about it. That is where Smart Eye Technology comes in and secures user-to-user document sharing. Industries with compliance or legal concerns, including government entities, financial firms, schools, healthcare organizations, and more, can take advantage of such document security arrangements [4]. For organizations that do not have legal or compliance concerns, having a foolproof document security framework, with a unique continuous identification system, can give you peace of mind by protecting your shared documents against unauthorized use and any misuse.


3.2 Health care The medical field requires hands-on and experimental training for better care. With this technology, several illnesses can be detected in early stages and can be prevented. With mixed reality glasses like HoloLens, medical students can better learn human anatomy.

3.3 Aerospace and MRO This industry has a particularly complex assembly structure and requires highly skilled labor. With smart glasses, technicians can work hands-free and independently, as image and video overlays will help them in precise assembly. When real-time data is synced into these AR-based solutions, maintenance, repair and operation can be predicted, and overload can be managed with quick solutions.

3.4 Retail and Logistics Management Vision picking and sorting are of utmost help to both industries. With real-time data inputs and a hands-free working setup, workers in retail and logistics can complete their work quickly.

3.5 Manufacturing and Training As the volume of orders explodes and the availability of skilled laborers becomes scarce, reliance on smart glass-based solutions will increase. They also reduce errors, with in-built error rectification alarms that help in spot fixes.

4 Future Scope of Smart Oculus Lenses Wearable computing technology is moving toward cutting-edge portable augmented reality devices, and the smart glass venture is the confirmation. It narrows the gap that remains before augmented reality becomes a part of people's everyday life, giving them quicker access to information and connecting them together. This presents new security and privacy challenges, where several issues are well known and shared among comparable devices and also IoT.


Eye-tracking technology itself is not new. It is used in retail settings and in athletic training; your favorite website may use eye-tracking studies to figure out where to put content and advertisements. In cell phones, however, the technology has advanced more gradually. Efforts to incorporate eye-tracking technology have accelerated of late, with both Apple and Samsung taking out significant patents. In the most recent year, Apple has envisioned a portable eye-tracking 3D display which would permit devices to show 3D pictures without the use of special glasses, and a gaze-tracking interface that might one day appear on Apple TV, iPhone, and iPad devices [2]. Not to be beaten, Samsung has filed patents for a zoom technology that would recognize when you are squinting or struggling to see what is on screen, and then automatically zoom in on the faint area and offer captions until the finer details have passed. At first, this technology will be reserved for their TV division, but it is not hard to envision how it would be valuable on smaller cell phone screens. Other rumored advancements in eye-tracking tech include retinal recognition for the presentation of notifications, intuition improvements to autocorrect, and biosecurity measures.

5 Conclusion I have attempted to examine the potential outcomes and difficulties that arise in connection with the development of smart glasses as wearable technology. I have attempted to outline whether and how the technology will develop and how fast it may become mainstream. It is my speculation, even though such speculation is "perilous," that smart glasses will first hit the mainstream in specific institutional settings, for example service industries, healthcare and production, and this could occur within 3–5 years depending on further technology development. However, acceptance and use in social interaction are currently the greatest challenge, from my perspective. How individuals in social interaction should use smart glasses in meaningful ways will depend on how the input design and hardware, for example frames and optics, will develop. The adoption of smart glasses by the mainstream population will probably not happen with glasses much like the models available today [5]. However, in 5 years, the technology and social acceptance will already have changed a lot. Wearable technology will be vast and will colonize systems and the life world as we know it; and, soon, glasses, watches, devices in clothing and, maybe, devices integrated into the skin will be totally normal, just as the Internet and cell phones are today. The technology will surely advance with unimaginable products, and everybody will need to ponder the big issues, for example social interaction and mental well-being, eye problems, and legal and privacy issues.


References

1. Eye tracking study: the importance of using Google authorship in search results
2. P. Majaranta, H. Aoki, M. Donegan, D.W. Hansen, J.P. Hansen, A. Hyrskykari, K.J. Räihä, in Gaze Interaction and Applications of Eye Tracking: Advances in Assistive Technologies (IGI Global, Pennsylvania, 2011)
3. M. Cognolato, M. Atzori, H. Müller, Head-mounted eye gaze tracking devices: an overview of modern devices and recent advances. J. Rehabil. Assistive Technol. Eng. 5, 2055668318773991 (2018)
4. P. D'Alterio, G. Acampora, S. Rossi, Analyzing social networks activities to deploy entertainment services in HRI-based smart environments, in Proceedings of the 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Naples, Italy, 9–12 July 2017, pp. 1–6
5. M.M. Ahamed, Z.B.A. Bakar, Triangle model theory for enhance the usability by user centered design process in human computer interaction. Int. J. Contemp. Comput. Res. 1(2), 26–32 (2017)

Car Accident Prevention Using Alcohol Sensor Ruby Jain, Nidhi Tiwari, Devendra Kumar Prajapati, Akhilesh Upadhyay, and Mukesh Yadav

Abstract This project is intended to prevent car accidents, which are one of the major problems of today's time. Here we use an alcohol sensor (MQ-3) to detect the amount of alcohol consumed by the driver, and that data will be processed to determine the driving limit; a circuit connected within the car will control the ignition system. This project gives us a base model for implementing a security system in every car to prevent drinking and driving. Keywords Alcohol sensor · Display driver · Relay

1 Introduction We have been facing many major problems which make our country unsafe for living; road accidents are one of them, and due to them thousands of people die daily. We are here focusing on drunk driving, which causes almost one-third of all traffic accidents and is considered to be among the most serious of driving offenses. According to the national statistics, roughly twelve thousand people die every year and nearly nine lakh people are caught in drunk-driving incidents [1]. Therefore, it is very necessary to solve this problem as soon as possible. To solve it, we have come up with a project to detect drunk driving before the driver starts the engine: the idea is to detect the alcohol level in a person's breath, and only after a safe alcohol reading does it allow the driver to start the engine; otherwise the engine will not get enough power supply [2]. The block diagram consists of four major parts. The first part is the MQ-3 sensor (alcohol sensor), which detects the concentration of alcohol in the driver's breath; that sensor [3, 4] gives as output a voltage that increases with the alcohol level. The second part is the LM3914 IC, which takes the input from the alcohol sensor and compares it with the set limit of alcohol consumption while driving. This IC has two output ports; one is high (red LED) and the second is low (green LED) R. Jain · N. Tiwari (B) · D. K. Prajapati · A. Upadhyay · M. Yadav Department of Electronics and Communication, SAGE University, Indore, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_61


which will indicate the alcohol consumption of the user. After that comes the third part, in which we have a relay switch that takes the high and low outputs of the IC as its input: when we get a high input, the switch will be open (off), and with a low input, the switch will be closed (on). The last part is the starter motor; the relay switch is directly connected to the starter motor circuit, which controls the ignition of the car according to the output of the relay switch.
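The four-stage chain above (sensor voltage, comparison against a set limit, relay, starter motor) can be sketched as a simple threshold rule. The voltage limit below is an illustrative assumption standing in for the LM3914's configured reference, not a value from the paper:

```python
# Sketch of the block diagram's decision chain:
# MQ-3 output voltage -> comparison against a set limit -> relay -> starter motor.

ALCOHOL_LIMIT_V = 1.5  # assumed comparator threshold (LM3914 reference), in volts

def relay_state(sensor_voltage: float) -> str:
    """Return 'open' (ignition blocked) when the sensor voltage exceeds the
    limit (high / red LED), 'closed' (engine may start) otherwise."""
    return "open" if sensor_voltage > ALCOHOL_LIMIT_V else "closed"

def can_start_engine(sensor_voltage: float) -> bool:
    # The starter motor gets power only while the relay is closed.
    return relay_state(sensor_voltage) == "closed"

print(can_start_engine(0.8))  # low (sober) reading -> True
print(can_start_engine(2.4))  # high alcohol reading -> False
```

In the real circuit this comparison is done in hardware by the LM3914; the sketch only mirrors its high/low decision in software.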

2 Alcohol Sensor (MQ-3) The alcohol sensor (MQ-3) is used to measure the concentration of alcohol in the driver's breath. It is a highly sensitive sensor which gives its outcome very rapidly. SnO2 is the sensitive material which detects alcohol in the MQ-3; it has lower conductivity in clean air, and when alcohol gas is present in the surroundings, the conductivity of the sensor rises with the gas concentration. The MQ-3 sensor also has the ability to detect C6H6 (benzene), CH4 (methane), liquid petroleum gas (LPG), CO (carbon monoxide) and C6H14 (hexane). This device has six pin-outs, of which four are used to take input data or fetch signals, and the remaining two just provide current to the heater coil.

• Detection gas: alcohol gas
• Concentration: 0.4–4 mg/L
• Supply voltage: 5 V

If WM = 0:

If y < ỹ + α then y′ = C0 = ỹ + α
If y ∈ [ỹ + α, ỹ + Δ/2 − α] then y′ = C0 = y
If y > ỹ + Δ/2 − α then y′ = C0 = ỹ + Δ/2 − α    (6)

Digital Watermarking System Performance Using QIM Techniques …

703

Fig. 2 QIM technique for paper [19]

If WM = 1:

If y ∈ [ỹ + Δ/2, ỹ + Δ/2 + α] then y′ = C1 = ỹ + Δ/2 + α
If y ∈ [ỹ + Δ/2 + α, ỹ + Δ − α] then y′ = C1 = y
If y > ỹ + Δ − α then y′ = C1 = ỹ + Δ − α    (7)

Imperceptibility of the watermarked image is improved and the quantization error is reduced [19]. For every pixel value, some modification rule is required. The QIM method of [19] works properly when WM = 0, but when WM = 1 no modification rule is specified for the interval [ỹ, ỹ + Δ/2]. If no modification is applied to a pixel value in this interval, then during de-quantization WM becomes 0, which gives a wrong output. In this paper, two modification rules are proposed and added by the authors to the QIM method of [19] for the interval [ỹ, ỹ + Δ/2], in such a way that during de-quantization the WM will be 1. The rules are specified in Eqs. 8 and 9.

If y < ỹ + Δ/2 − α then y′ = C1 = ỹ + Δ/2 + α    (8)

If y ∈ [ỹ + Δ/2 − α, ỹ + Δ/2] then y′ = C1 = ỹ + 3Δ/4    (9)
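For intuition, the basic quantization index modulation mechanism that all of these interval rules refine can be sketched as follows. This is a generic two-codebook quantizer with an assumed step Δ (DELTA), not the exact rules of Eqs. 6–9:

```python
# Generic QIM: embed one bit per pixel by quantizing to one of two
# interleaved codebooks, C0 (multiples of delta) and C1 (offset by delta/2).

DELTA = 16.0  # quantization step (assumed value for illustration)

def qim_embed(y: float, bit: int, delta: float = DELTA) -> float:
    """Quantize y to the nearest codeword of C0 (bit=0) or C1 (bit=1)."""
    half = delta / 2.0
    k = round((y - bit * half) / delta)  # index of the nearest codeword cell
    return k * delta + bit * half

def qim_extract(y_marked: float, delta: float = DELTA) -> int:
    """De-quantize: decide which codebook the received value is closer to."""
    half = delta / 2.0
    d0 = abs(y_marked - round(y_marked / delta) * delta)
    d1 = abs(y_marked - (round((y_marked - half) / delta) * delta + half))
    return 0 if d0 <= d1 else 1

for pixel, bit in [(100.0, 0), (100.0, 1), (37.0, 1)]:
    assert qim_extract(qim_embed(pixel, bit)) == bit
print("all bits recovered")
```

The improved rules of [19] and Eqs. 8–9 differ from this generic sketch in that they clamp the pixel only near the cell boundaries (the α margins), leaving interior pixels untouched to reduce quantization error.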

3 Watermarking Scheme

The watermarking scheme on which the comparison is based is discussed below:

704

H. A. Patel and D. B. Shah

3.1 Watermark Generation Process

1. Apply wavelet transform (DWT or IWT) to the Original Image (I), which decomposes the image into LL1, LH1, HL1, and HH1 sub-bands.
2. Apply wavelet transform (DWT or IWT) to the LL1 sub-band for further decomposition and get LL2, LH2, HL2 and HH2 sub-bands.
3. Calculate the average of the LL2 sub-band.
4. If an LL2 sub-band pixel value is greater than the average value, the watermark bit is 1; otherwise it is 0.
5. Keep this binary image as the watermark image.
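The five generation steps can be sketched as follows. As a stand-in for the DWT/IWT (which in the paper comes from MATLAB's wavelet toolbox), a minimal Haar-style averaging step computes the LL sub-band; the steps and the mean-threshold rule are taken directly from the list above:

```python
import numpy as np

def haar_ll(img: np.ndarray) -> np.ndarray:
    """One level of a Haar-style decomposition, returning only the LL
    (approximation) sub-band: average 2x2 blocks."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # rows: pairwise average
    return (a[:, 0::2] + a[:, 1::2]) / 2.0    # cols: pairwise average

def generate_watermark(img: np.ndarray) -> np.ndarray:
    ll2 = haar_ll(haar_ll(img))                  # steps 1-2: two-level LL2
    return (ll2 > ll2.mean()).astype(np.uint8)   # steps 3-5: threshold at mean

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8)).astype(float)
wm = generate_watermark(image)
print(wm.shape)  # an 8x8 image yields a 2x2 binary watermark
```

Because the watermark is derived from the image itself, the same procedure can be re-run on the watermarked image to obtain the "generated" watermark used in the blind comparison later in the paper.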

3.2 Watermark Embedment Process

1. Apply wavelet transform (DWT or IWT) to the Original Image (I), which decomposes the image into LL1, LH1, HL1, and HH1 sub-bands.
2. Apply wavelet transform (DWT or IWT) to the LL1 sub-band for further decomposition and get LL2, LH2, HL2, and HH2 sub-bands.
3. Embed the watermark image into the LL2 sub-band using the QIM quantization technique and get i_LL2.
4. Apply the inverse wavelet transform to i_LL2, LH2, HL2 and HH2 to get i_LL1.
5. Apply the inverse wavelet transform to i_LL1, LH1, HL1, HH1 to get the watermarked image (WMD).

3.3 Watermark Extraction Process

1. Apply wavelet transform (DWT or IWT) to the Watermarked Image (WMD), which decomposes the image into eLL1, eLH1, eHL1, and eHH1 sub-bands.
2. Apply wavelet transform (DWT or IWT) to the eLL1 sub-band for further decomposition and get eLL2, eLH2, eHL2, and eHH2 sub-bands.
3. Extract the watermark image from the eLL2 sub-band using the QIM de-quantization technique.

3.4 Extracted Generated Watermark Process

1. Apply the watermark-generation steps to WMD to extract its features.
2. Consider this watermark as the Extracted Generated Watermark (EGWM).

Here the comparison is done based on the original watermark, the extracted watermark, and the extracted generated watermark, which are required in a blind watermarking system.


4 Performance Evaluation

Peak Signal-to-Noise Ratio (PSNR) is the distortion measurement standard used to test the imperceptibility of the watermarked image. It is defined in Eq. 10. As the value of PSNR increases, the watermarked image is more similar to the original image.

PSNR = 10 × log10(255² / MSE)    (10)

Here MSE = (1 / (M × N)) Σ_{i=1}^{M} Σ_{j=1}^{N} [I(i, j) − r(i, j)]².

Bit Error Rate (BER) is used to measure the robustness. To test the similarity between the generated watermark and the extracted watermark, BER is calculated. It is defined in Eq. 11.

BER (%) = (UMB / TB) × 100    (11)

Here UMB is the number of unmatched bits and TB is the total number of watermark bits. As the BER decreases, the watermark bits are more similar to each other.
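Both metrics are straightforward to state in code. The sketch below follows the definitions above, using the conventional 255²/MSE form for 8-bit images:

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """PSNR in dB for 8-bit images (Eq. 10)."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def ber_percent(wm_a: np.ndarray, wm_b: np.ndarray) -> float:
    """BER in percent (Eq. 11): unmatched bits / total bits * 100."""
    return 100.0 * np.count_nonzero(wm_a != wm_b) / wm_a.size

a = np.full((4, 4), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110                      # one pixel off by 10 -> MSE = 100/16 = 6.25
print(round(psnr(a, b), 2))        # 10*log10(255^2 / 6.25)
print(ber_percent(np.array([0, 1, 1, 0]), np.array([0, 1, 0, 0])))  # 25.0
```

Casting to float before subtraction avoids unsigned-integer wraparound, which would otherwise silently inflate the MSE.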

5 Experiment Results The algorithm is tested using MATLAB R2017a. The grayscale images shown in Fig. 3 are used for implementation. First, the algorithm is applied with DWT and the proposed QIM technique; the generated results are shown in Table 1. Then the algorithm is applied with IWT and the proposed QIM technique; the generated results are shown in Table 2. The comparative performance of DWT and IWT is shown in Table 3. The PSNR comparison for the DWT and IWT wavelet transforms is shown in Fig. 4.

Fig. 3 Grayscale images


Table 1 PSNR results with DWT transform

Wavelet family                    Lena     Pepper   Sunflower  Camera man  Woman
Haar                              37.7071  37.4319  37.5949    37.8543     37.3026
Daubechies (db3)                  37.5805  37.3524  37.3467    37.6189     37.6992
Coiflets (coif1)                  37.7586  37.3403  37.3455    37.6430     37.7638
Symlets (sym3)                    37.5805  37.3524  37.3467    37.6189     37.6992
Biorthogonal (bior1.1)            37.7071  37.4319  37.5949    37.8543     37.3026
Biorthogonal (bior2.2)            39.4548  38.8587  38.4338    39.0072     39.6366
Reverse Biorthogonal (rbio1.1)    37.7071  37.4319  37.5945    37.8563     37.3026

Table 2 PSNR results with IWT transform

Wavelet family                    Lena     Pepper   Sunflower  Camera man  Woman
Haar                              29.8054  30.4639  32.2375    32.8708     30.1739
Daubechies (db3)                  37.0176  37.9345  37.5502    37.7879     36.0906
Coiflets (coif1)                  28.7781  28.1967  28.3761    31.2136     28.4154
Symlets (sym3)                    53.2477  53.0935  53.1906    53.7174     53.1632
Biorthogonal (bior1.1)            49.4661  49.0714  49.5360    50.4983     49.1104
Biorthogonal (bior2.2)            45.6933  45.4084  46.0625    46.6135     45.4697
Reverse Biorthogonal (rbio1.1)    29.8054  30.4649  32.2375    32.8708     30.1759

Table 3 PSNR comparison for DWT and IWT transform

Wavelet family                    Average PSNR with DWT  Average PSNR with IWT
Haar                              37.5782                31.1103
Daubechies (db3)                  37.5195                37.2762
Coiflets (coif1)                  37.5702                28.996
Symlets (sym3)                    37.5195                53.2825
Biorthogonal (bior1.1)            37.5782                49.5364
Biorthogonal (bior2.2)            39.0782                45.8495
Reverse Biorthogonal (rbio1.1)    37.5785                31.1109


Fig. 4 PSNR comparison for DWT and IWT transform

As per the PSNR, it is found that with the Haar, Coiflets, and Reverse Biorthogonal wavelets, DWT's performance is better than IWT's. With Symlets and Biorthogonal wavelets, the performance of IWT is far better than that of DWT. The blind watermarking algorithm with IWT is implemented with the different QIM techniques discussed earlier. The imperceptibility performance comparison for the QIM methods of [18, 19] and the proposed method is shown in Table 4, and the respective chart is demonstrated in Fig. 5. The standard image 'Lena' is selected for implementation. The robustness performance comparison is shown in Table 5, which compares the original watermark with the extracted watermark.

Table 4 Comparative PSNR results for QIM techniques

Wavelet family                    PSNR with [18] QIM  PSNR with [19] QIM  PSNR with proposed QIM
Haar                              28.3751             29.8054             29.8054
Daubechies (db3)                  38.6355             39.3236             37.0176
Coiflets (coif1)                  27.6793             28.4988             28.7781
Symlets (sym3)                    52.5505             53.3952             53.2477
Biorthogonal (bior1.1)            47.6669             49.4661             49.4661
Biorthogonal (bior2.2)            43.9801             46.5533             45.6933
Reverse Biorthogonal (rbio1.1)    28.3751             29.8054             29.8054


Fig. 5 Comparative PSNR results for QIM techniques

Table 5 BER (%) comparison for original and extracted watermark bits

Wavelet family                    QIM method [18]  QIM method [19]  Proposed QIM method
Haar                              0.11             34.77            0.12
Daubechies (db3)                  1.06             52.79            1.06
Coiflets (coif1)                  2.31             33.92            2.44
Symlets (sym3)                    0                27.42            0
Biorthogonal (bior1.1)            0                26.97            0
Biorthogonal (bior2.2)            0                26.99            0
Reverse Biorthogonal (rbio1.1)    0.11             34.77            0.12

The comparison for original watermark and generated watermark from watermarked image is shown in Table 6. The comparison for extracted watermark and generated watermark from watermarked image is shown in Table 7.


Table 6 BER (%) comparison for original and generated watermark bits from watermarked image

Wavelet family                    QIM method [18]  QIM method [19]  Proposed QIM method
Haar                              2.07             3.97             2.07
Daubechies (db3)                  45.02            48.48            43.19
Coiflets (coif1)                  4.3              5.49             7.78
Symlets (sym3)                    0.04             0.04             0.04
Biorthogonal (bior1.1)            0.07             0.21             0.07
Biorthogonal (bior2.2)            0.07             0.15             0.13
Reverse Biorthogonal (rbio1.1)    2.07             3.97             2.07

Table 7 BER (%) comparison for extracted and generated watermark bits from watermarked image

Wavelet family                    QIM method [18]  QIM method [19]  Proposed QIM method
Haar                              2.18             38.74            2.19
Daubechies (db3)                  44.7             48.32            42.86
Coiflets (coif1)                  6.54             39.25            10.22
Symlets (sym3)                    0.04             27.46            0.04
Biorthogonal (bior1.1)            0.07             27.17            0.07
Biorthogonal (bior2.2)            0.07             27.14            0.13
Reverse Biorthogonal (rbio1.1)    2.18             38.74            2.19

6 Conclusion As per the experiments, it is concluded that embedding the watermark using the IWT transform is more suitable than using the DWT transform: the imperceptibility achieved using IWT is better than with DWT. IWT and DWT were both implemented using different wavelet families. As per the PSNR, it is found that with the Haar, Coiflets, and Reverse Biorthogonal wavelets, DWT's performance is better than IWT's, while with Symlets and Biorthogonal wavelets, the performance of IWT is far better than that of DWT. Another comparison is done for the QIM techniques. As per the results, it is found that PSNR is improved using the proposed QIM method as compared to the other QIM methods. The three watermarks are also compared with each other, and as per the results the BER performance of the QIM method suggested by [18] is slightly better than that of the proposed system, while the QIM method of [19] is not suitable compared with the proposed QIM method. Here imperceptibility and robustness are both key parameters for judgment, so the proposed system is acceptable compared with the other two QIM systems. If the watermark is embedded using IWT and Symlets or Biorthogonal filters, the system becomes highly robust and imperceptible. If the watermark is generated using a different algorithm, then the proposed system may react differently. The proposed system is developed for grayscale images. Moreover, there is still scope to improve the imperceptibility performance of the system as well as to extend the work to color images.

References

1. H.A. Patel, D.B. Shah, Review on watermarking scheme for digital image authentication, tampering and self recovery. Int. J. Comput. Sci. Eng. 1245–1250 (2018)
2. H.A. Patel, D.B. Shah, Digital image watermarking mechanism for image authentication, image forgery and self recovery. Int. J. Electron. Eng. 140–143 (2019)
3. S. Nirmala, P. Naghabhushan, K.R. Chetan, A new robust watermarking scheme for document images by randomized distribution of watermark segments, in Proceedings of the 5th International Conference on Recent Trends in Information, Telecommunication and Computing (2014)
4. K.R. Chetan, S. Nirmala, An efficient and secure robust watermarking scheme for document images using integer wavelets and block coding of binary watermarks. J. Inf. Secur. Appl. 13–24 (2015)
5. S.L. Agrwal, Improved invisible watermarking technique using IWT-DCT, in 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO) (IEEE, 2016)
6. H. Peng, Energy quantization modulation approach for image watermarking. J. Comput. Inf. Syst. 6(8), 2675–2682 (2010)
7. J. Liu, Tu. Qiu, Xu. Xinye, Quantization-based image watermarking by using a normalization scheme in the wavelet domain. Information 9(8), 194 (2018)
8. A. Chitla, A semi fragile image watermarking technique using block based SVD. Int. J. Comput. Sci. Inf. Technol. 3(2), 3644–3647 (2012)
9. K. Chaitanya, K. Ellanti, E. Harshavardhan Chowdary, Semi-fragile watermarking scheme based on feature in DWT domain. Int. J. Comput. Appl. 28(3), 42–46 (2011)
10. L. Rosales-Roldan, Watermarking-based image authentication with recovery capability using halftoning technique. Sign. Process. Image Commun. 28(1), 69–83 (2013)
11. H. Rhayma, A. Makhloufi, A.B. Hmida, Self-authentication scheme based on semi-fragile watermarking and perceptual hash function, in International Image Processing, Applications and Systems Conference (IEEE, 2014), pp. 1–6
12. H. Rhayma, Semi fragile watermarking scheme for image recovery in wavelet domain, in 2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP) (IEEE, 2018)
13. H.A. Patel, N.H. Divecha, A feature-based semi-fragile watermarking algorithm for digital color image authentication using hybrid transform. Adv. Comput. Comput. Sci. 455–465 (2018)
14. S. Prabhishek, R.S. Chadha, A survey of digital watermarking techniques, applications and attacks. Int. J. Eng. Innov. Technol. (IJEIT) 2(9), 165–175 (2013)
15. M. Potdar Vidyasagar, S. Han, E. Chang, A survey of digital image watermarking techniques, in 2005 3rd IEEE International Conference on Industrial Informatics, INDIN'05 (IEEE, 2005)
16. J. Molina-García, Watermarking algorithm for authentication and self-recovery of tampered images using DWT, in 2016 9th International Kharkiv Symposium on Physics and Engineering of Microwaves, Millimeter and Submillimeter Waves (MSMW) (IEEE, 2016)
17. K.R. Chetan, S. Nirmala, Intelligent multiple watermarking schemes for the authentication and tamper recovery of information in document image. Adv. Comput. Commun. Technol. 183–193 (2018)
18. P. Meerwald, Quantization watermarking in the JPEG2000 coding pipeline, in Proceedings of the IFIP TC6/TC11 International Conference on Communications and Multimedia Security, Darmstadt, Germany, May 2001, vol. 192, pp. 69–79 (2001)

Digital Watermarking System Performance Using QIM Techniques …

711

19. A.O. Zaid, Improved QIM-based watermarking integrated to JPEG2000 coding scheme. Sign. Image Video Process. 3(3), 197–207 (2009) 20. S. Anjul, A. Tayal, Choice of wavelet from wavelet families for DWT-DCT-SVD image watermarking (2012)

A Distributed Spanning Tree-Based Scalable Fault-Tolerant Algorithm for Load Balancing in Web Server Farms

U. Prabu, N. Malarvizhi, J. Amudhavel, and G. Sambasivam

Abstract Web servers perform various tasks, such as serving different kinds of content to clients according to their requests and storing information. This paper introduces load balancing among web server farms to avoid bottleneck situations. Clients request content from a server; when many client requests are directed to the same server, a bottleneck can arise in which that server becomes overloaded. To avoid this situation, we introduce load balancing based on the distributed spanning tree (DST) concept, in which the load is distributed among the servers according to a clustering condition. The proposed DST-based load balancing algorithm is fault tolerant and scalable.

Keywords Distributed spanning tree · Web server · Load balancing · Bottleneck · Fault tolerant · Scalable

U. Prabu (B)
Department of CSE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, India
e-mail: [email protected]

N. Malarvizhi
Department of CSE, IFET College of Engineering, Villupuram, Tamil Nadu, India
e-mail: [email protected]

J. Amudhavel
School of Computer Science and Engineering, VIT Bhopal University, Bhopal, M.P., India
e-mail: [email protected]

G. Sambasivam
Department of CSE, Faculty of Engineering, ISBAT University, Kampala, Uganda
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
D. Goyal et al. (eds.), Proceedings of Second International Conference on Smart Energy and Communication, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-6707-0_69

1 Introduction

A web server is a computer that responds to requests from clients or web browsers, with responses delivered in the form of web pages. A computer becomes a web server when server software is installed on it and it is connected to the Internet. Today, many software applications are available for web servers [1].

Web server load balancing plays a vital role in web server farms. Load balancing comes in two major versions, static and dynamic. The static version was first analyzed with a balls-and-bins model by Azar et al. [2]; the dynamic version was analyzed by Vvedenskaya et al. [3] using the dynamic supermarket model under the SQ(d) policy.

The foremost motivations for load balancing across servers are high availability, predictability, and scalability. High availability means that even if one or more servers fail, clients can still access the remaining available servers. Predictability is the capability to manage and assure service delivery with respect to availability and performance. Scalability is the competence to adapt dynamically as the load increases, without any impact on performance.

DNS round-robin is used when load balancing devices are not available; the need for such devices arises because DNS round-robin is unaware of whether a server is actually working. Later, the application delivery controller came into existence, dispatching jobs to the servers. Traditional web server load balancing policies are implemented using the message-passing paradigm. Winton et al. [4] observe that optimal response time is achieved when the workload is allocated evenly among the servers.

Web servers are considered stateless because they do not maintain the state information of every client request. Instead, the back-end servers maintain the relevant state, and the rest of the information is kept as a cookie by the client [5]. Fault-tolerant web server schemes usually detect failures and direct forthcoming requests to backup servers, which process the new requests. These schemes rely on specialized routers and load balancers [6–9] and on data replication [10, 11]. However, requests being processed at the time of failure, called in-progress requests, are unrecoverable under these schemes. To overcome these problems, the distributed spanning tree (DST) structure [12, 13] is used. This structure balances the load generated by the requests among the servers and thus avoids overloading any single server.
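The SQ(d) supermarket policy cited in the introduction can be illustrated with a short sketch: sample d servers uniformly at random and send the job to the least-loaded server in the sample. This is an illustrative reconstruction under assumed names, not code from the paper; the function name and the queue-length list representation are assumptions.

```python
import random

def sq_d_dispatch(queue_lengths, d=2, rng=random):
    """SQ(d) policy: sample d distinct servers at random and join the
    shortest queue among the sampled ones. Mutates queue_lengths."""
    sample = rng.sample(range(len(queue_lengths)), d)
    chosen = min(sample, key=lambda i: queue_lengths[i])
    queue_lengths[chosen] += 1  # the job joins the chosen server's queue
    return chosen
```

With d equal to the number of servers this degenerates to join-shortest-queue; d = 2 already captures most of the benefit, which is why the supermarket model is usually stated for small d.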

2 Mathematical Model for DST-Based Load Balancing

A general mathematical model for the proposed DST-based load balancing is expressed below. The model balances the load generated among the servers and thus avoids overloading any single server. The server utilization value of each node is represented as U.


The server utilization value of a non-leaf node at Stage 2 of the DST is represented as $U_x$, where $x$ is the index of the non-leaf node at Stage 2. The server utilization value of a non-leaf node at Stage 1 is represented as $U_{xy}$, where $y$ is the index of the non-leaf node at Stage 1 located within the non-leaf node indexed $x$ at Stage 2. The server utilization value of a leaf node at Stage 0 is represented as $U_{xyz}$, where $z$ is the index of the node at Stage 0, located within the non-leaf node indexed $y$ at Stage 1 and within the non-leaf node indexed $x$ at Stage 2.

The server utilization value $U_{xyz}$ of each leaf node at Stage 0 is determined individually from the local resource utilization and the degree of resource employment in servicing remote requests. At Stage 1 of the DST, the server utilization value of each non-leaf node is determined using the following expression:

$$U_{xy} = \frac{\sum_{z=1}^{n} U_{xyz}}{n}$$

where $n$ is the number of child nodes of the non-leaf node indexed $y$ at Stage 1, within the non-leaf node indexed $x$ at Stage 2 of the DST. At Stage 2 of the DST, the server utilization value of each non-leaf node is calculated using the expression:

$$U_x = \frac{\sum_{y=1}^{m} U_{xy}}{m}$$

where $m$ is the number of child nodes of the non-leaf node indexed $x$ at Stage 2 of the DST. This completes the general mathematical model for calculating the server utilization value of each node at each stage of the DST.
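The stage-wise averaging above can be sketched in a few lines of code. This is a minimal illustration assuming a three-stage DST stored as nested lists of leaf utilizations; the function names and data layout are assumptions, not part of the paper.

```python
def stage1_utilization(leaf_utils):
    """U_xy: mean of the leaf utilizations U_xyz under one Stage-1 node."""
    return sum(leaf_utils) / len(leaf_utils)

def stage2_utilization(stage1_groups):
    """U_x: mean of the Stage-1 values U_xy under one Stage-2 node."""
    u_xy = [stage1_utilization(group) for group in stage1_groups]
    return sum(u_xy) / len(u_xy)

# A Stage-2 node with two Stage-1 children, each holding leaf utilizations.
node_x = [[0.2, 0.4], [0.6, 0.8]]
```

Each non-leaf node's value is simply the mean of its children's values, so the Stage-2 figure summarizes utilization across the whole subtree without inspecting individual leaves.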


3 DST-Based Load Balancing Algorithm

The algorithm proceeds as follows. First, the client sends a request to the server. The request is stored in the job list, where it waits in the queue while the availability of a server to process it is checked. If the server is idle and available, the number of requests is checked; otherwise, the load is distributed among the other servers. While checking the number of requests, if the job counter is less than the number of requests, the load is placed on that server subject to the idleness and acceptability condition; if not, the load is distributed to the other servers.

Begin
  Initialization: Client sends Request
  Request waits in queue
  Condition: Check server availability
  if (Server == idle)
    Check the number of requests
    if (jobcounter < number of requests)
      Distribute the load on this server (idleness and acceptability condition)
    else
      Load is distributed to the other servers
    end if
  else
    Load is distributed among the servers
  end if
End
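The dispatch decision described above can be sketched as follows. This is an illustrative reconstruction under assumed names: the server dictionary layout, the `max_jobs` limit standing in for the job-counter check, and the least-loaded fallback are assumptions chosen to mirror the prose, not the authors' implementation.

```python
def dispatch(request, servers, max_jobs):
    """Accept the request on an idle server whose job counter is below the
    limit; otherwise distribute it to the least-loaded server.

    servers maps a server name to {'idle': bool, 'jobs': int}."""
    for name, state in servers.items():
        if state['idle'] and state['jobs'] < max_jobs:
            state['jobs'] += 1          # request accepted on this server
            state['idle'] = False
            return name
    # No idle server with spare capacity: spread the load by sending the
    # request to the server with the fewest jobs.
    name = min(servers, key=lambda n: servers[n]['jobs'])
    servers[name]['jobs'] += 1
    return name
```

The first branch corresponds to the `Server == idle` / `jobcounter` test in the pseudocode; the fallback corresponds to distributing the load among the other servers when no server passes the idleness and acceptability condition.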