Recent Trends in Communication and Intelligent Systems: Proceedings of ICRTCIS 2019 (Algorithms for Intelligent Systems) 9789811504259, 9789811504266, 9811504253

The book gathers the best research papers presented at the International Conference on Recent Trends in Communication and Intelligent Systems (ICRTCIS 2019).


Table of contents :
Jointly Organized By
Publication Partner
Supported By
Organizing Committee
International Advisory Committee
National Advisory Committee
Organizing Committee
Preface
Contents
About the Editors
1 Fast Convergent Gravitational Search Algorithm
1 Introduction
2 Gravitational Search Algorithm
3 Fast Convergent Gravitational Search Algorithm
4 Experimental Results and Discussion
4.1 Test Problems Under Consideration
4.2 Experimental Setting
4.3 Results Comparison
5 Conclusion
References
2 An Experimental Perusal of Impedance Plethysmography for Biomedical Application
1 Introduction
2 Equipment and Methodology
3 Experimental Works
4 Results
5 Conclusion
References
3 Compressive Sensing: An Efficient Approach for Image Compression and Recovery
1 Introduction
2 Literature Survey
3 Mathematical Modeling and Basic Algorithms of Compressive Sensing
3.1 A Subsection Sample Mathematical Modeling for Sparse Recovery
3.2 Reconstruction Algorithm
4 Results and Analysis
4.1 Signal-to-Noise Ratio
4.2 Compression Ratio
5 Conclusion
6 Future Scope
References
4 Optimal Location of Renewable Energy Generator in Deregulated Power Sector
1 Introduction
2 Problem Formulation
3 Step by Step Algorithm and Flow Chart of the Proposed Approach
4 Simulation Analysis and Results
4.1 IEEE-6 Bus System
5 Conclusion
References
5 A Genetic Improved Quantum Cryptography Model to Optimize Network Communication
1 Introduction
1.1 Quantum Cryptography
2 Related Works
3 Research Methodology
4 Results
5 Conclusion
References
6 Neural Networks-Based Framework for Detecting Chemically Ripened Banana Fruits
1 Introduction
2 Related Work
3 Proposed Framework
4 Experimental Setup and Results
5 Conclusion
References
7 Low Power CMOS Low Transconductance OTA for Electrocardiogram Applications
1 Introduction
2 OTA
2.1 Multi-tanh Linearization Technique
3 Proposed OTA
4 Conclusion
References
8 Spiral Resonator Microstrip Patch Antenna with Metamaterial on Patch and Ground
1 Introduction
2 Design and Configuration of Antenna
3 Design Equations
4 Analysis and Simulation
5 Conclusion
References
9 A Packet Fluctuation-Based OLSR and Efficient Parameters-Based OLSR Routing Protocols for Urban Vehicular Ad Hoc Networks
1 Introduction
2 Literature Review
3 Proposed Protocols
3.1 Packet Fluctuation-Based OLSR Routing Protocol (P-OLSR)
3.2 Efficient Parameter Optimization-Based OLSR Routing Protocol (E-OLSR)
4 Simulation Results
4.1 Simulation Setup
4.2 Simulation Results
5 Conclusions and Future Work
References
10 Evaluation of Similarity Measure to Find Coherence
1 Introduction
2 Model to Represent Documents
3 Metric Used in Document Clustering
4 Proposed Methodology
5 Proposed Algorithms to Compute Similarity
6 Experiments and Results
6.1 Visualization by Plotting Cluster Centers and Their Mean
6.2 Evaluating Optimum Number of Clusters
6.3 Similarity Analysis of Text Documents
7 Conclusion
References
11 Bagged Random Forest Approach to Classify Sentiments Based on Technical Words
1 Introduction
1.1 Sentiment Analysis Levels
1.2 Methods of Sentiment Analysis
2 Literature Review
3 Methodology of the Proposed System
4 Result and Discussion
5 Conclusion and Future Work
References
12 Healthcare Services Customers’ ICT-Related Expectations and Experiences
1 Introduction
2 Private Healthcare Service Providers and ICT in India
3 Literature Review
4 Research Method
4.1 Objectives
4.2 Hypothesis
4.3 Scope and Contribution
4.4 Selection and Size of the Sample
5 Results and Analysis
5.1 Demographic Details of the Respondents
5.2 Expectations and Experiences Regarding Information on Websites
5.3 Expectations and Experiences Regarding Information Through SMS
6 Conclusion
7 Limitations of the Study
References
13 Variation Measurement of SNR and MSE for Musical Instruments Signal Compressed Using Compressive Sensing
1 Introduction
2 Literature Survey
3 Mathematical Modeling and Basic Algorithms of Compressive Sensing
3.1 A Subsection Sample Mathematical Modeling for Sparse Recovery
3.2 Reconstruction Algorithm
4 Result and Analysis
4.1 Signal-to-Noise Ratio (SNR)
4.2 Compression Ratio
5 Conclusion
6 Future Scope
References
14 A Framework to Impede Phishing Activities Using Neural Net and Website Features
1 Introduction
2 Literature Review
3 Anti-phishing Framework
3.1 Feature Selector
3.2 Training Module
3.3 Feature Collector
3.4 Classifier
4 Results
5 Conclusion
References
15 Overcoming the Security Shortcomings Between Open vSwitch and Network Controller
1 Introduction
2 Literature Review
3 Existing Problems
4 Proposed Methodology and Implementation
5 Results
6 Conclusion
References
16 An Incremental Approach Towards Clustering Short-Length Biological Sequences
1 Introduction
2 Methodology
3 Implementation
4 Result and Analysis
5 Applications
6 Conclusions
References
17 A Novel Approach to Localized a Robot in a Given Map with Optimization Using GP-GPU
1 Introduction
2 Previous Work
3 Experiments on Sectorial Error Probability Using Standard SLAM Algorithms
4 Optimization of Candidate Algorithm Using Shaders and Pre Pinned Buffers
5 Implementation Mechanism for Object Identification
6 Results and Discussion
7 Conclusion and Future Scope
References
18 Area-Efficient Splitting Mechanism for 2D Convolution on FPGA
1 Introduction
2 System Design Consideration
2.1 Constraints
2.2 Sliding Window Architecture
3 Proposed Methodology
4 Results and Discussions
5 Conclusion
References
19 Neural Network Based Modeling of Radial Distortion Through Synthetic Images
1 Introduction
2 Theoretical Background
2.1 Distortion Modeling
2.2 Neural Networks and Machine Learning
3 Proposed Methodology
4 Results and Discussions
5 Conclusion
References
20 A Stylistic Features Based Approach for Author Profiling
1 Introduction
2 Literature Survey
3 Evaluation Measures and Dataset Characteristics
4 Stylistic Features
5 BOF Model
6 Experimental Results
7 Conclusion and Future Scope
References
21 Load Balancing in Cloud Through Task Scheduling
1 Introduction
1.1 Cloud Features
1.2 Cloud Services
1.3 Load Balancing QoS (Quality of Service) Parameters
1.4 Existing Task Scheduling Approaches
2 Literature Survey
3 Proposed Approach
3.1 Previous Approach
3.2 Proposed Approach
3.3 Formulas Used
3.4 Task Scheduling Components
3.5 Concept of Proposed Approach
3.6 Experimental Parameters
3.7 Result Analysis
4 Conclusion
References
22 Empirical Modeling and Multi-objective Optimization of 7075 Metal Matrix Composites on Machining of Computerized Numerical Control Wire Electrical Discharge Machining Process
1 Introduction
2 Experimental Details
3 Development of Response Surface Model
4 Multi-objective Optimization
4.1 Non-sorting Genetic Algorithm-II (NSGA-II)
5 Result and Discussion
5.1 Modeling of Cutting Speed (CS) and Surface Roughness (SR)
5.2 Optimization of Non-sorting Genetic Algorithm-II (NSGA-II)
6 Conclusions
References
23 Identification of Rumors on Twitter
1 Introduction
2 Literature Survey
3 Proposed Work
3.1 Preprocessing
3.2 Tweet Analysis
3.3 Rumor Classification
4 Result and Discussion
4.1 Dataset Preparation and Preprocessing
4.2 Rumor Classification
5 Conclusion
References
24 Brain Tumor Detection and Classification Using Machine Learning
1 Introduction
2 Related Work
3 Methodology
4 Dataset
5 Preprocessing of Dataset
6 Feature Selection
7 Experimental Results
8 Conclusion and Future Work
References
25 Design of DGS-Enabled Simple UWB MIMO Antenna Having Improvement in Isolation for IoT Applications
1 Introduction
2 Design for Antennas
2.1 Surface Current Distribution
3 Results with Discussions
3.1 Radiation Patterns
3.2 Envelop Correlation Coefficient (ECC)
3.3 Diversity Gain
3.4 Individual Antenna Element Gain and Efficiency
3.5 Channel Capacity Loss (CCL)
4 Conclusion
References
26 Concatenated Polar Code Design Aspects for 5G New Radio
1 Introduction
2 PC-CA Concatenated Polar Code
2.1 Position of PC Bits
2.2 Value of the PC Bits
3 System Model
4 Simulation Results
5 Conclusion
References
27 Protection of Six-Phase Transmission Line Using Haar Wavelet Transform
1 Introduction
2 Specifications of the System
3 HWT Approach
4 Performance Assessment
4.1 Performance of HWT During Fault Type (FT) Variation
4.2 Performance of HWT During Near-End Relay Faults
4.3 Performance of HWT During Remote-End Relay Faults
4.4 Performance of HWT During Transforming Faults
5 Conclusion
References
Author Index


Algorithms for Intelligent Systems Series Editors: Jagdish Chand Bansal · Kusum Deep · Atulya K. Nagar

Harish Sharma · Aditya Kumar Singh Pundir · Neha Yadav · Ajay Sharma · Swagatam Das Editors

Recent Trends in Communication and Intelligent Systems Proceedings of ICRTCIS 2019

Algorithms for Intelligent Systems Series Editors Jagdish Chand Bansal, Department of Mathematics, South Asian University, New Delhi, Delhi, India Kusum Deep, Department of Mathematics, Indian Institute of Technology Roorkee, Roorkee, Uttarakhand, India Atulya K. Nagar, Department of Mathematics and Computer Science, Liverpool Hope University, Liverpool, UK

This book series publishes research on the analysis and development of algorithms for intelligent systems with their applications to various real world problems. It covers research related to autonomous agents, multi-agent systems, behavioral modeling, reinforcement learning, game theory, mechanism design, machine learning, meta-heuristic search, optimization, planning and scheduling, artificial neural networks, evolutionary computation, swarm intelligence and other algorithms for intelligent systems. The book series includes recent advancements, modification and applications of the artificial neural networks, evolutionary computation, swarm intelligence, artificial immune systems, fuzzy system, autonomous and multi agent systems, machine learning and other intelligent systems related areas. The material will be beneficial for the graduate students, post-graduate students as well as the researchers who want a broader view of advances in algorithms for intelligent systems. The contents will also be useful to the researchers from other fields who have no knowledge of the power of intelligent systems, e.g. the researchers in the field of bioinformatics, biochemists, mechanical and chemical engineers, economists, musicians and medical practitioners. The series publishes monographs, edited volumes, advanced textbooks and selected proceedings.

More information about this series at http://www.springer.com/series/16171

Harish Sharma · Aditya Kumar Singh Pundir · Neha Yadav · Ajay Sharma · Swagatam Das
Editors

Recent Trends in Communication and Intelligent Systems
Proceedings of ICRTCIS 2019



Editors Harish Sharma Department of Computer Science and Engineering Rajasthan Technical University Kota, Rajasthan, India

Aditya Kumar Singh Pundir Department of Electronics and Communication Engineering Arya College of Engineering & IT Kukas, Rajasthan, India

Neha Yadav Department of Mathematics National Institute of Technology Hamirpur, Himachal Pradesh, India

Ajay Sharma Department of Electrical Engineering Government Engineering College Jhalawar, Rajasthan, India

Swagatam Das Electronics and Communication Sciences Unit Indian Statistical Institute Kolkata, West Bengal, India

ISSN 2524-7565 ISSN 2524-7573 (electronic) Algorithms for Intelligent Systems ISBN 978-981-15-0425-9 ISBN 978-981-15-0426-6 (eBook) https://doi.org/10.1007/978-981-15-0426-6 © Springer Nature Singapore Pte Ltd. 2020 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Jointly Organized By

Publication Partner

Supported By

Organizing Committee

International Advisory Committee
Dr. Bimal K. Bose, University of Tennessee, USA
Dr. S. Ganesan, Oakland University, USA
Dr. L. M. Patnaik, IISc Bangalore, India
Dr. Ramesh Agarwal, Washington University, St. Louis
Dr. Vincenzo Piuri, University of Milan, Italy
Dr. Ashoka Bhat, University of Victoria, Canada
Prof. Akhtar Kalam, Victoria University, Australia
Dr. M. H. Rashid, University of West Florida, USA
Dr. Fushuan Wen, Zhejiang University, China
Ir. Dr. N. A. Rahim, UM, Kuala Lumpur, Malaysia
Dr. Tarek Bouktir, University of Setif, Algeria

National Advisory Committee
Dr. S. N. Joshi, CSIR-CEERI, Pilani
Dr. Vineet Sahula, MNIT Jaipur, India
Dr. K. J. Rangra, CSIR-CEERI, Pilani, India
Dr. R. K. Sharma, CSIR-CEERI, Pilani, India
Dr. Vijay Janyani, MNIT Jaipur, India
Dr. K. R. Niazi, MNIT Jaipur, India
Dr. V. K. Jain, DRDO, India
Dr. Manoj Kr. Patairiya, NISCAIR, India
Dr. Sanjeev Mishra, RTU, Kota
Prof. Harpal Tiwari, MNIT Jaipur
Dr. S. Gurunarayanan, BITS, Pilani
Dr. Ghanshyam Singh, MNIT Jaipur
Prof. Satish Kumar, CSIR-CSIO, Chandigarh
Dr. Kota Srinivas, CSIR-CSIO, Chennai Centre
Sh. Anand Pathak, SSME, India
Sh. Ulkesh Desai, SSME, India
Sh. Ashish Soni, SSME, India
Sh. R. M. Shah, SSME, India
Ach. (Er.) Daria S. Yadav, ISTE Rajasthan and Haryana Section
Er. Sajjan Singh Yadav, IEI, Jaipur
Er. Gautam Raj Bhansali, IEI, Jaipur
Dr. J. L. Sehgal, IEI, Jaipur
Smt. Annapurna Bhargava, IEI, Jaipur
Smt. Jaya Vajpai, IEI, Jaipur
Dr. Hemant Kumar Garg, IEI, Jaipur
Er. Gunjan Saxena, IEI, Jaipur
Er. Sudesh Roop Rai, IEI, Jaipur
Dr. Manish Tiwari, IETE Rajasthan Centre, Jaipur
Dr. Ashish Ghunawat, MNIT Jaipur
Dr. Dinesh Yadav, IETE Rajasthan Centre, Jaipur
Dr. Jitendra Kr. Deegwal, Government Women Engineering College, Ajmer

Organizing Committee
Chief Patron
Prof. (Dr.) Neelima Singh, HVC, RTU, Kota
Patrons
Smt. Madhu Malti Agarwal, Chairperson, Arya Group
Er. Anurag Agarwal, Group Chairman
Prof. Dhananjay Gupta, Chairman, Governing Body
General Chair
Dr. Swagatam Das, ISI Kolkata
Dr. Neha Yadav, NIT Hamirpur
Dr. Vibhakar Pathak, ACEIT, Jaipur
Conveners
Dr. D. K. Sambariya, RTU, Kota
Dr. Kirti Vyas, ACEIT, Jaipur
Dr. Ajay Sharma, GECJ, Jhalawar
Er. Ankit Gupta, ACEIT, Jaipur
Er. Vivek Upadhyaya, ACEIT, Jaipur
Organizing Chair
Dr. Rahul Srivastava, ACEIT, Jaipur
Dr. Harish Sharma, RTU, Kota
Dr. S. D. Purohit, RTU, Kota
Dr. Ramesh C. Poonia, Amity University, Jaipur
Organizing Secretaries
Dr. Aditya Kr S. Pundir, ACEIT, Jaipur
Dr. Sandeep Kumar, Amity University, Jaipur
Special Session Chair
Dr. Anupam Yadav, NIT Jalandhar
Dr. Sarabani Roy, Jadavpur University, Kolkata
Dr. Nirmala Sharma, RTU, Kota
Dr. Irum Alvi, RTU, Kota
Dr. S. Mekhilef, University of Malaya, Malaysia
Local Organizing Committee Chairs
Prof. Manu Gupta, ACEIT, Jaipur
Prof. Arun Kr Arya, ACEIT, Jaipur
Prof. Akhil Pandey, ACEIT, Jaipur
Prof. Prabhat Kumar, ACEIT, Jaipur
Prof. Shalani Bhargava, ACEIT, Jaipur
Dr. Pawan Bhambu, ACEIT, Jaipur
Shri Ramcharan Sharma, ACEIT, Jaipur

Preface

This volume comprises papers presented at the TEQIP-III RTU (ATU) sponsored First International Conference on Recent Trends in Communication & Intelligent Systems (ICRTCIS 2019), held on 8-9 June 2019. The papers cover a wide range of selected topics related to intelligent systems and communication networks, including intelligent computing and converging technologies, intelligent system communication and sustainable design, and intelligent control, measurement and quality assurance. This volume, Recent Trends in Communication and Intelligent Systems (ICRTCIS 2019) in the Algorithms for Intelligent Systems series, brings together 27 of the presented papers. Each of them presents new approaches and/or evaluates methods on real-world problems, together with exploratory research describing novel approaches in the field of intelligent systems. ICRTCIS 2019 received 272 submissions across all tracks, of which 70 were accepted for presentation, and 27 papers are included here. The salient feature of ICRTCIS 2019 is to promote research with a view to bringing academia and industry closer. The Advisory Committee of ICRTCIS 2019 comprises senior scientists, scholars and professionals from reputed institutes, laboratories and industries around the world. The technical sessions feature peer-reviewed paper presentations; in addition, keynote addresses by eminent research scholars and invited talks by technocrats are organized during the conference. The conference is strongly believed to ignite the minds of young researchers to undertake more interdisciplinary and collaborative research. The editors trust that this volume will be useful and interesting to readers for their own research work.
Kota, India
Kukas, India
Hamirpur, India
Jhalawar, India
Kolkata, India
August 2019

Harish Sharma Aditya Kumar Singh Pundir Neha Yadav Ajay Sharma Swagatam Das


About Arya College of Engineering & IT

Arya College of Engineering & IT was established under the aegis of the All India Arya Samaj Society of Higher & Technology Education in the year 2000 by Late Er. T. K. Agarwal Ji, Founder Honourable Chairman. Arya stands among the topmost private engineering institutes in the state of Rajasthan; the group is a blend of innovation, perfection, and creation. The college is spread over a splendid 25 acres of land, providing state-of-the-art infrastructure with well-equipped laboratories, modern facilities and the finest education standards. For a decade, the Arya College 1st Old Campus has been known for setting a benchmark with its specialized excellence, innovative approach, participative culture and academic rigour. The management of Arya College has a strong bent for innovation and the knack to get innovative ideas implemented. Globally accredited for its professional approach to technical education, Arya College makes special efforts to recruit trained faculty and runs rigorous admission procedures to select potential candidates across the country, who are then trained to become a pool of skilled intellectual capital for the nation. This supports a healthy and dynamic exchange that incubates leaders for the corporate world, and the strong industry linkages go a long way in providing a holistic approach to research and education. Arya College is a pioneer in the field of technical education and was the first engineering college in the city of Jaipur, Rajasthan; it was also the first college to start a regular M.Tech. programme in the state of Rajasthan, in the year 2006. Arya College of Engineering & IT (ACEIT), Kukas, Jaipur, established in the year 2000, is among the foremost institutes of national significance in higher technical education, approved by AICTE, New Delhi, and affiliated with Rajasthan Technical University, Kota, Rajasthan. It is commonly known as "ARYA 1st OLD CAMPUS" and "ARYA 1st". The institute ranks among the best technological institutions in the state, has contributed to all sectors of technical and professional development, and is considered a leading light in the area of education and research.


Contents

1 Fast Convergent Gravitational Search Algorithm . . . 1
Pragya Rawal, Harish Sharma and Nirmala Sharma
2 An Experimental Perusal of Impedance Plethysmography for Biomedical Application . . . 13
Ramesh Kumar, Aditya Kumar Singh Pundir, Raj Kr Yadav, Anup Kumar Sharma and Digambar Singh
3 Compressive Sensing: An Efficient Approach for Image Compression and Recovery . . . 25
Vivek Upadhyaya and Mohammad Salim
4 Optimal Location of Renewable Energy Generator in Deregulated Power Sector . . . 35
Digambar Singh, Ramesh Kumar Meena and Pawan Kumar Punia
5 A Genetic Improved Quantum Cryptography Model to Optimize Network Communication . . . 47
Avinash Sharma, Shivani Gaba, Shifali Singla, Suneet Kumar, Chhavi Saxena and Rahul Srivastava
6 Neural Networks-Based Framework for Detecting Chemically Ripened Banana Fruits . . . 55
R. Roopalakshmi, Chandan Shastri, Pooja Hegde, A. S. Thaizeera and Vishal Naik
7 Low Power CMOS Low Transconductance OTA for Electrocardiogram Applications . . . 63
Gaurav Kumar Soni and Himanshu Arora
8 Spiral Resonator Microstrip Patch Antenna with Metamaterial on Patch and Ground . . . 71
Akshta Kaushal and Gaurav Bharadwaj
9 A Packet Fluctuation-Based OLSR and Efficient Parameters-Based OLSR Routing Protocols for Urban Vehicular Ad Hoc Networks . . . 79
Kirti Prakash, Prinu C. Philip, Rajeev Paulus and Anil Kumar
10 Evaluation of Similarity Measure to Find Coherence . . . 89
Mausumi Goswami and B. S. Purkayastha
11 Bagged Random Forest Approach to Classify Sentiments Based on Technical Words . . . 99
Sweety Singhal, Saurabh Maheshwari and Monalisa Meena
12 Healthcare Services Customers' ICT-Related Expectations and Experiences . . . 109
Anamika Sharma and Irum Alvi
13 Variation Measurement of SNR and MSE for Musical Instruments Signal Compressed Using Compressive Sensing . . . 123
Preeti Kumari, Gajendra Sujediya and Vivek Upadhyaya
14 A Framework to Impede Phishing Activities Using Neural Net and Website Features . . . 133
Srushti S. Patil and Sudhir N. Dhage
15 Overcoming the Security Shortcomings Between Open vSwitch and Network Controller . . . 143
Milind Gupta, Shivangi Kochhar, Pulkit Jain, Manu Singh and Vishal Sharma
16 An Incremental Approach Towards Clustering Short-Length Biological Sequences . . . 151
Neeta Maitre
17 A Novel Approach to Localized a Robot in a Given Map with Optimization Using GP-GPU . . . 157
Rohit Mittal, Vibhakar Pathak, Shweta Goyal and Amit Mithal
18 Area-Efficient Splitting Mechanism for 2D Convolution on FPGA . . . 165
Shashi Poddar, Sonam Rani, Bipin Koli and Vipan Kumar
19 Neural Network Based Modeling of Radial Distortion Through Synthetic Images . . . 175
Ajita Bharadwaj, Divya Mehta, Vipan Kumar, Vinod Karar and Shashi Poddar
20 A Stylistic Features Based Approach for Author Profiling . . . 185
Karunakar Kavuri and M. Kavitha
21 Load Balancing in Cloud Through Task Scheduling . . . 195
Tarandeep and Kriti Bhushan
22 Empirical Modeling and Multi-objective Optimization of 7075 Metal Matrix Composites on Machining of Computerized Numerical Control Wire Electrical Discharge Machining Process . . . 205
Atmanand Mishra, Dinesh Kumar Kasdekar and Sharad Agrawal
23 Identification of Rumors on Twitter . . . 219
Richa Anant Patil, Kiran Gawande and Sudhir N. Dhage
24 Brain Tumor Detection and Classification Using Machine Learning . . . 227
Pritanjli and Amit Doegar
25 Design of DGS-Enabled Simple UWB MIMO Antenna Having Improvement in Isolation for IoT Applications . . . 235
Manisha Kumawat, Kirti Vyas and Rajendra Prasad Yadav
26 Concatenated Polar Code Design Aspects for 5G New Radio . . . 245
Arti Sharma, Mohammad Salim and Vivek Upadhyay
27 Protection of Six-Phase Transmission Line Using Haar Wavelet Transform . . . 255
Gaurav Kapoor
Author Index . . . 263

About the Editors

Harish Sharma is an Associate Professor at Rajasthan Technical University, Kota, in the Department of Computer Science & Engineering. He has worked at Vardhaman Mahaveer Open University, Kota, and Government Engineering College, Jhalawar. He received his B.Tech. and M.Tech. degrees in Computer Engineering from Government Engineering College, Kota, and Rajasthan Technical University, Kota, in 2003 and 2009, respectively. He obtained his Ph.D. from ABV-Indian Institute of Information Technology and Management, Gwalior, India. He is secretary and one of the founder members of the Soft Computing Research Society of India, and a life-time member of the Cryptology Research Society of India, ISI, Kolkata. He is an associate editor of the "International Journal of Swarm Intelligence (IJSI)" published by Inderscience, and has also edited special issues of the journals "Memetic Computing" and "Journal of Experimental and Theoretical Artificial Intelligence". His primary area of interest is nature-inspired optimization techniques. He has contributed more than 45 papers published in various international journals and conferences.

Dr. Aditya Kumar Singh Pundir received his B.E. degree in Electronics & Communication Engineering from the University of Rajasthan, his M.Tech. degree in VLSI Design Engineering from Rajasthan Technical University, Kota, India, and his Ph.D. in VLSI Testing and Embedded System Design from Poornima University, Jaipur. He is currently working as a Professor in the Department of Electronics and Communication Engineering at Arya College of Engineering & IT, Jaipur. His current research interests are memory testing and built-in self-testing using embedded system design. He is a member of the IEEE Electron Device Society, a professional member of ACM, and a life member of ISTE and CSI, India.

Dr. Neha Yadav received her Ph.D. in Mathematics from Motilal Nehru National Institute of Technology (MNNIT) Allahabad, India, in the year 2013. She completed her postdoctorate at Korea University, Seoul, South Korea, and is a recipient of the Brain Korea (BK-21) Postdoctoral Fellowship given by the Government of the Republic of Korea. Prior to joining NIT Hamirpur, she taught courses and conducted research at BML Munjal University, Gurugram, Korea University, Seoul, South Korea, and The NorthCap University, Gurugram. Her major research areas include the numerical solution of boundary value problems, artificial neural networks, and optimization.

Dr. Ajay Sharma did his B.E. from Government Engineering College, Kota (University of Rajasthan), and his M.Tech. and Ph.D. from Rajasthan Technical University, Kota. He is presently working as an Assistant Professor in the Department of Electrical Engineering, Government Engineering College, Jhalawar, with more than 12 years of teaching experience. His research area includes soft computing techniques for engineering optimization problems. He has published more than 30 international journal/conference papers and has delivered more than 20 invited talks at various institutes of repute. He is a life-time member of the Institution of Engineers India.

Swagatam Das received the B.E. Tel.E., M.E. Tel.E (Control Engineering specialization) and Ph.D. degrees, all from Jadavpur University, India, in 2003, 2005, and 2009, respectively. He is currently serving as an Associate Professor at the Electronics and Communication Sciences Unit of the Indian Statistical Institute, Kolkata, India. His research interests include evolutionary computing, pattern recognition, multi-agent systems, and wireless communication. Dr. Das has published one research monograph, one edited volume, and more than 200 research articles in peer-reviewed journals and international conferences. He is the founding co-editor-in-chief of Swarm and Evolutionary Computation, an international journal from Elsevier. He has also served as or is serving as an associate editor of the IEEE Transactions on Systems, Man, and Cybernetics: Systems, IEEE Computational Intelligence Magazine, IEEE Access, Neurocomputing (Elsevier), Engineering Applications of Artificial Intelligence (Elsevier), and Information Sciences (Elsevier). He is an editorial board member of Progress in Artificial Intelligence (Springer), PeerJ Computer Science, International Journal of Artificial Intelligence and Soft Computing, and International Journal of Adaptive and Autonomous Communication Systems. Dr. Das has 15000+ Google Scholar citations and an H-index of 60 to date. He has been associated with the international program committees and organizing committees of several regular international conferences, including IEEE CEC, IEEE SSCI, SEAL, GECCO, and SEMCCO. He has acted as guest editor for special issues in journals such as IEEE Transactions on Evolutionary Computation and IEEE Transactions on SMC, Part C. He is the recipient of the 2012 Young Engineer Award from the Indian National Academy of Engineering (INAE), and of the 2015 Thomson Reuters Research Excellence India Citation Award as the highest cited researcher from India in the Engineering and Computer Science category between 2010 and 2014.

Chapter 1

Fast Convergent Gravitational Search Algorithm Pragya Rawal, Harish Sharma and Nirmala Sharma

1 Introduction

Nature-inspired algorithms (NIAs) provide many effective ways of solving complex problems. Evolutionary algorithms (EAs) and swarm intelligence (SI) based algorithms are the two main population-based stochastic approaches. Under the gravitational force, individuals attract each other and move according to the force exerted on them. Compared with individuals of high mass, individuals of lower mass have low attraction power; this attraction strength is what makes lower-mass individuals move quickly in comparison with higher-mass individuals. GSA [11] provides a proper balance between exploration and exploitation: individuals of lighter mass are responsible for exploration, while individuals of higher mass are responsible for exploitation of the search space. When the process begins, exploration is first carried out by the lighter-mass individuals with a high step size; after that, exploitation is executed by the heavier-mass individuals with a small step size. Researchers are regularly working in this domain to improve the performance of GSA [3-5, 7]. In this article, an improved stochastic population-based search algorithm is designed, namely the fast convergent gravitational search algorithm (FCGSA), to enhance the convergence rate as well as the exploitation capability of the GSA algorithm. Here, a sigmoidal function is used to control the number of agents that apply force to the other agents in the search space.

P. Rawal (B) · H. Sharma · N. Sharma
Rajasthan Technical University, Kota, India
e-mail: [email protected]
H. Sharma
e-mail: [email protected]
N. Sharma
e-mail: [email protected]


The remainder of the article is organized as follows: Sect. 2 describes the GSA algorithm; FCGSA is proposed in Sect. 3; Sect. 4 presents the experimental results and discussion; and Sect. 5 concludes the article.

2 Gravitational Search Algorithm

The intelligent behavior of the gravitational search algorithm (GSA) is based on Newton's laws of gravitation and motion [6]. According to Newton's law of gravitation, objects in the universe accelerate toward each other under a gravitational force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them. This gravitational force is responsible for the global movement of all objects toward the object with the heaviest mass, and the achievement of the objects is expressed in terms of their masses. Now, assume an n-dimensional search space with J agents. Each individual U_i in the search space is defined as

U_i = (u_i^1, \ldots, u_i^d, \ldots, u_i^n), \quad i = 1, 2, \ldots, J \qquad (1)

Here, u_i^d represents the location of the ith agent in the dth dimension. Since the performance of each individual is measured through its gravitational mass, which depends on fitness, the fitness of each agent is evaluated first. From these fitness values, the best and worst fitness are determined as follows:

• Best and worst fitness for minimization problems are [6]:

best(q) = \min_{j \in \{1, \ldots, P\}} fit_j(q) \qquad (2)

worst(q) = \max_{j \in \{1, \ldots, P\}} fit_j(q) \qquad (3)

• Best and worst fitness for maximization problems are [6]:

best(q) = \max_{j \in \{1, \ldots, P\}} fit_j(q) \qquad (4)

worst(q) = \min_{j \in \{1, \ldots, P\}} fit_j(q) \qquad (5)

where \max fit_j(q) and \min fit_j(q) denote the maximum and minimum fitness over all agents j at iteration q. After the computation of the individuals' fitness, the masses, i.e., the inertial, active, and passive gravitational masses, are calculated. In GSA, the inertial and gravitational masses are taken to be equal. Agents with heavier masses receive more consideration due to their higher attraction power and slower movement. The gravitational masses of the individuals are calculated as

M_{aj} = M_{pi} = M_{ii} = M_i, \quad i = 1, 2, \ldots, N \qquad (6)

m_i(q) = \frac{fit_i(q) - worst(q)}{best(q) - worst(q)} \qquad (7)

M_i(q) = \frac{m_i(q)}{\sum_{j=1}^{J} m_j(q)} \qquad (8)

where M_{aj}, M_{pi}, and M_{ii} are the active gravitational, passive gravitational, and inertial masses, respectively; fit_i and M_i are the fitness value and the gravitational mass of agent i; and worst(q) and best(q) are the worst and best fitness values. The gravitational constant G(q) is calculated as

G(q) = G_0 \, e^{-\alpha q / T} \qquad (9)

Here, G_0 and \alpha are constants initialized before the process begins, and G(q) decreases exponentially with time [1]. The acceleration of the ith agent is determined as

a_i^d(q) = \frac{F_i^d(q)}{M_i(q)} \qquad (10)

where F_i^d(q) is the randomly weighted sum of the forces applied by the other agents and M_i(q) is the gravitational mass. F_i^d(q) is calculated through the following expression:

F_i^d(q) = \sum_{j \in Kbest, \, j \neq i} rand_j \, F_{ij}^d(q) \qquad (11)

where Kbest is calculated as follows:

Kbest = finalper + \left(1 - \frac{q}{T}\right) (P - finalper) \qquad (12)

Kbest = round\left(P \times \frac{Kbest}{100}\right) \qquad (13)

Here, finalper is a constant; it specifies that only 2 percent of the agents apply force to the others in the last iteration. P denotes the whole population in the search region, and Kbest is the set of the first individuals having the best fitness values and the highest masses. F_{ij}^d(q) in Eq. (11) is Newton's gravitational force, defined as the force acting on the mass of the ith agent from the mass of the jth agent at the qth iteration, and is computed using the following equation:

M pi (q) · Ma j (q) · (x dj (q) − xid (q)) Ri j (q) + ε

(14)

4

P. Rawal et al.

Here, Ma j (q) and M pi (q) are active or passive gravitational masses of jth and ith individuals subsequently. Ri j (q) is the euclidian distance between ith and jth individuals. ε is a constant and G(q) is the gravitational constant computed through Eq. (9). Due to this gravitational force, global movement of objects takes place. Agents update their velocity and position according to the following equations: vid (q + 1) = randi × vid (q) + aid (q)

(15)

xid (q + 1) = xid (q) + vid (q + 1)

(16)

where rand is uniformly randomized variable lies in range [0, 1]. It gives randomized characteristics for searching operation. aid (q) is the acceleration of ith agent in d dimension space during iteration q. vid (q) and vid (q + 1) are velocities of ith agent at the q and (q + 1) iterations, respectively. While xid (q) and xid (q + 1) are the positions of ith agent at the q and (q + 1) iterations, respectively. Each mass in the GSA is a solution and the optimum solution of the search space is represented by the heaviest mass.

3 Fast Convergent Gravitational Search Algorithm

Exploration and exploitation play a vital role in any population-based optimization algorithm. Exploration probes an enlarged part of the search region in the hope of discovering other promising individuals, whereas exploitation investigates promising solutions near the already-visited search space. GSA explores the search region to find excellent solutions and, after a few iterations, exploits the search space. For good performance, it is essential to keep an appropriate balance between the exploration and exploitation capabilities: to achieve exploration, individuals should take large steps, whereas in later iterations individuals should take small steps. Evidently, the performance of GSA is greatly influenced by its parameters. In GSA, Kbest controls the number of agents that apply force to the other agents in the search space. Almost all agents apply force to the others during the early iterations, to avoid getting trapped in local optima, while at the last iteration only one agent applies force to another. As is clear from Eq. (12), Kbest is reduced linearly as the number of iterations increases. To enhance the convergence speed, a new strategy is introduced, named the fast convergent gravitational search algorithm (FCGSA). In FCGSA, a new Kbest is proposed as follows:

Kbest' = finalper + \frac{1}{1 + e^{\alpha q / T}} \cdot (P - finalper) \qquad (17)


Here, finalper and \alpha are constants, and q and T are the current iteration and the highest iteration count, respectively. A sigmoidal function is embedded in Kbest in Eq. (17), so that it decreases exponentially at every iteration. Further, the position update process in Eq. (16) plays a major role in the step size of an individual. From the position update process, we can conclude that the position depends on the velocity: when the velocity of an individual is large, its acceleration is large, which leads to a large step size that helps exploration, while a low acceleration results in a small step size, which helps exploitation of the search region. Therefore, in this article, the location update equation is modified as follows:

x_i^d(q+1) = x_i^d(q) + v_i^d(q+1) \cdot e^{-\alpha_0 q / T} \qquad (18)

Here, \alpha_0 is a constant, and q and T are the current iteration and the highest iteration count, respectively. From Eq. (18), it is clear that the impetus in the initial iterations is high, which results in high acceleration and a large step size. The velocity contribution decreases exponentially over the iterations, which controls the step size of the individuals. The proposed strategy therefore organizes a good balance between the diversification and intensification abilities of the mechanism as the iteration count increases. The pseudocode of FCGSA is depicted in Algorithm 1.

Algorithm 1 FCGSA Algorithm
Identify the initial population of the search space by creating a randomly dispersed set of individuals.
while the stopping criterion is not fulfilled do
  Evaluate the fitness of each individual.
  Calculate the gravitational mass of each individual by Eqs. (7) and (8).
  Evaluate the gravitational constant G by Eq. (9).
  Calculate Kbest by Eqs. (17) and (13).
  Calculate the acceleration a of every individual by Eq. (10).
  Update the velocities and locations of the individuals by Eqs. (15) and (18).
end while
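A minimal sketch of the two FCGSA modifications, layered on the notation above: the sigmoidal Kbest schedule of Eq. (17) (with the final rounding of Eq. (13)) and the exponentially damped position update of Eq. (18). The constants finalper = 2, \alpha = 20 and \alpha_0 = 7 follow the paper; the function names are assumptions.

import numpy as np

def kbest_linear(q, T, P, finalper=2):
    """Standard GSA schedule: Kbest shrinks linearly (Eqs. 12-13)."""
    kb = finalper + (1 - q / T) * (P - finalper)
    return int(round(P * kb / 100.0))

def kbest_sigmoidal(q, T, P, finalper=2, alpha=20.0):
    """FCGSA schedule: a sigmoid shrinks Kbest exponentially (Eq. 17)."""
    kb = finalper + (P - finalper) / (1.0 + np.exp(alpha * q / T))
    return int(round(P * kb / 100.0))

def fcgsa_position_update(positions, velocities, q, T, alpha0=7.0):
    """Damped movement (Eq. 18): the velocity contribution decays
    exponentially with q, shrinking the step size in later iterations."""
    return positions + velocities * np.exp(-alpha0 * q / T)

# With P = 50 and T = 1000, kbest_sigmoidal(10, 1000, 50) is already far
# below kbest_linear(10, 1000, 50): the attracting set collapses early,
# which is where the faster convergence comes from.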

4 Experimental Results and Discussion

4.1 Test Problems Under Consideration

To investigate the performance of the introduced FCGSA mechanism, 15 benchmark functions are chosen, as shown in Table 1.

Table 1 Test problems (S: dimension, AE: acceptable error)

f1 Griewank: f(u) = 1 + (1/4000) \sum_{i=1}^{S} u_i^2 - \prod_{i=1}^{S} \cos(u_i/\sqrt{i}); search range [-600, 600]; optimum f(0) = 0; S = 30; AE = 1.0E-05
f2 Zakharov: f(u) = \sum_{i=1}^{S} u_i^2 + (\sum_{i=1}^{S} i u_i / 2)^2 + (\sum_{i=1}^{S} i u_i / 2)^4; search range [-5.12, 5.12]; optimum f(0) = 0; S = 30; AE = 1.0E-02
f3 Cigar: f(u) = u_1^2 + 100000 \sum_{i=2}^{S} u_i^2; search range [-10, 10]; optimum f(0) = 0; S = 30; AE = 1.0E-05
f4 Sum of different powers: f(u) = \sum_{i=1}^{S} |u_i|^{i+1}; search range [-1, 1]; optimum f(0) = 0; S = 30; AE = 1.0E-05
f5 Beale: f(u) = [1.5 - u_1(1 - u_2)]^2 + [2.25 - u_1(1 - u_2^2)]^2 + [2.625 - u_1(1 - u_2^3)]^2; search range [-4.5, 4.5]; optimum f(3, 0.5) = 0; S = 2; AE = 1.0E-05
f6 Colville: f(u) = 100(u_2 - u_1^2)^2 + (1 - u_1)^2 + 90(u_4 - u_3^2)^2 + (1 - u_3)^2 + 10.1[(u_2 - 1)^2 + (u_4 - 1)^2] + 19.8(u_2 - 1)(u_4 - 1); search range [-10, 10]; optimum f(1) = 0; S = 4; AE = 1.0E-05
f7 Branin: f(u) = a(u_2 - b u_1^2 + c u_1 - d)^2 + e(1 - f) \cos u_1 + e; search range -5 <= u_1 <= 10, 0 <= u_2 <= 15; optimum f(-\pi, 12.275) = 0.3979; S = 2; AE = 1.0E-05
f8 Kowalik: f(u) = \sum_{i=1}^{11} [a_i - u_1 (b_i^2 + b_i u_2) / (b_i^2 + b_i u_3 + u_4)]^2; search range [-5, 5]; optimum f(0.192833, 0.190836, 0.12311, 0.135766) = 0.000307486; S = 4; AE = 1.0E-04
f9 Gear train: f(u) = (1/6.931 - u_1 u_2 / (u_3 u_4))^2; search range [12, 60]; optimum f(19, 16, 43, 49) = 2.7E-12; S = 4; AE = 1.0E-15
f10 Six-hump camel back: f(u) = (4 - 2.1 u_1^2 + u_1^4 / 3) u_1^2 + u_1 u_2 + (-4 + 4 u_2^2) u_2^2; search range [-5, 5]; optimum f(-0.0898, 0.7126) = -1.0316; S = 2; AE = 1.0E-05
f11 Easom: f(u) = -\cos u_1 \cos u_2 e^{-(u_1 - \pi)^2 - (u_2 - \pi)^2}; search range [-100, 100]; optimum f(\pi, \pi) = -1; S = 2; AE = 1.0E-13
f12 Dekkers and Aarts: f(u) = 10^5 u_1^2 + u_2^2 - (u_1^2 + u_2^2)^2 + 10^{-5} (u_1^2 + u_2^2)^4; search range [-20, 20]; optimum f(0, 15) = f(0, -15) = -24777; S = 2; AE = 5.0E-01
f13 Hosaki: f(u) = (1 - 8 u_1 + 7 u_1^2 - (7/3) u_1^3 + (1/4) u_1^4) u_2^2 e^{-u_2}; search range 0 <= u_1 <= 5, 0 <= u_2 <= 6; optimum f(4, 2) = -2.3458; S = 2; AE = 1.0E-05
f14 McCormick: f(u) = \sin(u_1 + u_2) + (u_1 - u_2)^2 - (3/2) u_1 + (5/2) u_2 + 1; search range -1.5 <= u_1 <= 4, -3 <= u_2 <= 3; optimum f(-0.547, -1.547) = -1.9133; S = 2; AE = 1.0E-04
f15 Temp: f(u) = u_1^2 + u_2^2; search range [-5, 5]; optimum f(0) = 0; S = 2; AE = 1.0E-05


Fig. 1 Effect of α and α0 on success rate

4.2 Experimental Setting

To attest the performance of the introduced FCGSA, a comparative evaluation is carried out among FCGSA, GSA [6], FBGSA [4], BBO [10], and SMO [2]. The experimental setting is as follows:

• The number of runs = 30,
• Population size = 50,
• G_0 = 100, \alpha = 20,
• The parameter settings for GSA [6], FBGSA [4], BBO [10], and SMO [2] are taken from their original papers,
• The validity of FCGSA is analyzed on the considered test problems over distinct values of \alpha and \alpha_0. The values of \alpha and \alpha_0 are set through a sensitivity analysis with respect to the success rate (SR), and the results appear in Fig. 1. Figure 1 shows that the highest value of the sum of success, i.e., the best outcome, is achieved by taking \alpha = 20 and \alpha_0 = 7. A sketch of this grid search is shown below.
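The sensitivity analysis behind Fig. 1 amounts to a grid search over \alpha and \alpha_0, scored by the total success count. A sketch of that loop is given below; run_fcgsa is a placeholder driver (an assumption, not part of the paper) that must be filled in with an actual FCGSA run.

import numpy as np

def run_fcgsa(problem, alpha, alpha0):
    """Placeholder driver (an assumption, not from the paper): run FCGSA
    once on `problem` and return True if the acceptable error of Table 1
    is reached within the function-evaluation budget."""
    raise NotImplementedError

def success_grid(problems, alphas, alpha0s, runs=30):
    """Total number of successful runs over all test problems for every
    (alpha, alpha0) pair; this is the quantity plotted in Fig. 1."""
    grid = np.zeros((len(alphas), len(alpha0s)), dtype=int)
    for i, a in enumerate(alphas):
        for j, a0 in enumerate(alpha0s):
            for p in problems:
                grid[i, j] += sum(run_fcgsa(p, a, a0) for _ in range(runs))
    return grid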

4.3 Results Comparison

Experimental outcomes for the considered algorithms appear in Table 2, which reports the average number of function evaluations (AFEs), mean error (ME), standard deviation (SD), and success rate (SR). The results show the high efficiency and accuracy of FCGSA as compared to GSA [6], FBGSA [4], BBO [10], and SMO [2]. A comparison is also performed between the considered algorithms for the AFEs through the analysis of boxplots. The boxplot is a graphical representation of data in which a rectangle is drawn to represent the interquartile range, with a vertical line indicating


Table 2 Comparison results on the test problems (TF: test function; SD: standard deviation; ME: mean error; AFEs: average function evaluations; SR: success rate)

TF   Algorithm  SD           ME           AFEs         SR
f1   FCGSA      8.06052E-07  9.20853E-06  37108.33333  30
f1   GSA        7.37634E-07  8.92957E-06  37148.33333  30
f1   FBGSA      0.03891712   0.865277098  200000.0000  0
f1   BBO        1.45446E-06  8.92554E-06  23176.66667  30
f1   SMO        1.84E-03     5.20E-04     85981.60000  27
f2   FCGSA      0.000372323  0.009592798  56746.66667  30
f2   GSA        2.597485864  5.068510743  200000.0000  0
f2   FBGSA      8.652301498  40.38870689  200000.0000  0
f2   BBO        0.785070093  0.988277042  200000.0000  0
f2   SMO        7.93E-04     9.14E-03     131681.7000  30
f3   FCGSA      4.5719E-07   9.53292E-06  171578.3333  30
f3   GSA        9.788438037  8.780742862  200000.0000  0
f3   FBGSA      9.811033166  10.82059429  200000.0000  0
f3   BBO        2.60599E-06  7.72803E-06  55525.00000  30
f3   SMO        7.74E-07     8.93E-06     22581.90000  30
f4   FCGSA      1.84074E-06  7.56839E-06  21205.00000  30
f4   GSA        2.51821E-06  6.96522E-06  37470.00000  30
f4   FBGSA      1.89628E-06  7.4215E-06   57345.00000  30
f4   BBO        1.05776E-05  1.06609E-05  58341.66667  23
f4   SMO        1.73E-06     7.98E-06     5181.000000  30
f5   FCGSA      2.90255E-06  5.99934E-06  35346.66667  30
f5   GSA        2.96856E-06  4.82759E-06  55576.66667  30
f5   FBGSA      2.83446E-06  5.3857E-06   70070.00000  30
f5   BBO        0.228618561  0.07621397   36078.33333  27
f5   SMO        3.01E-06     5.52E-06     1640.100000  30
f6   FCGSA      0.000246749  0.000605967  53436.66667  30
f6   GSA        0.156289433  0.031365194  128590.0000  27
f6   FBGSA      0.372881454  0.099248862  200000.0000  0
f6   BBO        2.754500971  2.68777526   200000.0000  0
f6   SMO        2.48E-04     7.44E-04     48452.70000  30
f7   FCGSA      2.90316E-05  4.62067E-05  24695.00000  30
f7   GSA        2.74336E-05  4.61824E-05  31956.66667  30
f7   FBGSA      3.24517E-05  4.31121E-05  38440.00000  30
f7   BBO        2.11049E-05  7.90241E-05  55346.66667  30
f7   SMO        6.83E-06     5.99E-06     42765.83333  27
f7   PSO        3.72E-06     5.14E-06     22480.00000  27
f8   FCGSA      0.000117784  0.000188932  65.00000000  30
f8   GSA        7.97953E-05  0.000239179  78.33333333  30
f8   FBGSA      7.81274E-05  0.000247513  63.33333333  30
f8   BBO        7.16167E-05  0.000267026  108.3333333  30
f8   SMO        9.46E-06     9.29E-05     37906.76667  30
f9   FCGSA      7.35101E-13  1.93583E-12  12460.00000  30
f9   GSA        7.49274E-13  1.81189E-12  17951.66667  30
f9   FBGSA      7.80543E-13  1.89104E-12  21995.00000  30
f9   BBO        4.97212E-13  2.19232E-12  1750.000000  30
f9   SMO        1.42E-14     2.7E-12      415647.0667  0
f10  FCGSA      1.22148E-05  1.35309E-05  32006.66667  30
f10  GSA        1.18518E-05  1.30128E-05  40371.66667  30
f10  FBGSA      1.19628E-05  1.22282E-05  47368.33333  30
f10  BBO        0.360911475  0.217643947  69625.00000  22
f10  SMO        1.42E-05     1.69E-05     222537.4000  14
f11  FCGSA      0.179498873  0.033332104  163100.0000  29
f11  GSA        0.208447215  0.1          137431.6667  27
f11  FBGSA      2.70435E-14  5.3435E-14   155450.0000  30
f11  BBO        0.299974022  0.099996837  193430.0000  1
f11  SMO        2.71E-14     4.18E-14     11959.2000   30
f12  FCGSA      5513.426617  6807.452936  330.0000000  30
f12  GSA        6115.568355  8014.527136  533.3333333  30
f12  FBGSA      4961.479556  5963.045707  683.3333333  30
f12  BBO        6652.02535   5259.95564   890.0000000  30
f12  SMO        4.90E-03     4.88E-01     1230.90000   30
f13  FCGSA      5.99309E-06  5.88909E-06  28421.66667  30
f13  GSA        6.39726E-06  5.76948E-06  35351.66667  30
f13  FBGSA      6.24228E-06  6.28523E-06  36780.00000  30
f13  BBO        0.49872928   1.518751054  200000.0000  0
f13  SMO        2.13E-06     1.12E-05     402717.8667  1
f14  FCGSA      5.80666E-06  8.99053E-05  26213.33333  30
f14  GSA        7.08747E-06  8.8465E-05   38666.66667  30
f14  FBGSA      5.65863E-06  8.89199E-05  41330.00000  30
f14  BBO        6.29432E-06  9.15118E-05  12386.66667  30
f14  SMO        6.76E-06     8.74E-05     702.9000000  30
f15  FCGSA      2.95213E-06  4.87418E-06  32386.66667  30
f15  GSA        3.16689E-06  4.72873E-06  37408.33333  30
f15  FBGSA      2.63159E-06  4.03363E-06  42641.66667  30
f15  BBO        2.78005E-06  4.76596E-06  750.0000000  30
f15  SMO        2.74E-06     4.62E-06     726.0000000  30


Fig. 2 Boxplots graphs (average number of function evaluation)

the median value. The boxplot analysis of the respective algorithms together with FCGSA is shown in Fig. 2. The outcomes demonstrate that the interquartile ranges and medians of FCGSA are low in comparison with GSA, FBGSA, BBO, and SMO. Further, the Mann-Whitney U rank sum (MWUR) test [9] is performed at the 5% significance level (\alpha = 0.05) between FCGSA-GSA, FCGSA-FBGSA, FCGSA-BBO, and FCGSA-SMO. Table 3 shows the compared results of the mean function evaluations and the Mann-Whitney test over 30 runs. In this test, we check for a significant difference between two data sets. If no significant difference is observed, the "=" symbol appears; when a significant difference is observed, a comparison is performed in terms of the AFEs. We use the "+" and "-" symbols: "+" indicates that FCGSA is better than the compared method, and "-" indicates that it is inferior. The last row of Table 3 confirms the superiority of FCGSA over GSA, FBGSA, BBO, and SMO. To confirm the convergence power of the modified method, we use the acceleration rate (AR) [8], which is defined as

AR = \frac{AFE_{compareAlgo}}{AFE_{FCGSA}} \qquad (19)

Here, compareAlgo \in \{GSA, FBGSA, BBO, SMO\}, and AR > 1 means that FCGSA is quicker than the compared method. Table 4 demonstrates that the convergence speed of FCGSA is superior to that of the other analyzed methods.
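Equation (19) reduces to a single ratio per algorithm and test problem. The sketch below reproduces the f1 row of Table 4 from the AFE column of Table 2, and shows how an MWUR decision of Table 3 can be obtained with scipy; the per-run AFE arrays are synthetic stand-ins, since only the 30-run averages are reported.

import numpy as np
from scipy.stats import mannwhitneyu

# Acceleration rate (Eq. 19) from the AFE column of Table 2, f1 row.
afe = {"FCGSA": 37108.33333, "GSA": 37148.33333, "FBGSA": 200000.0,
       "BBO": 23176.66667, "SMO": 85981.60}
ar = {k: v / afe["FCGSA"] for k, v in afe.items() if k != "FCGSA"}
print(ar)  # GSA ~1.001, FBGSA ~5.39, BBO ~0.62, SMO ~2.32 (Table 4, f1)

# MWUR test at the 5% level on two illustrative sets of per-run AFEs.
rng = np.random.default_rng(1)
fcgsa_runs = rng.normal(37108, 800, 30)   # stand-ins for the 30 recorded runs
gsa_runs = rng.normal(37148, 740, 30)
stat, p = mannwhitneyu(fcgsa_runs, gsa_runs)
print("+" if p < 0.05 and fcgsa_runs.mean() < gsa_runs.mean() else "=")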


Table 3 Comparison based on the MWUR test at significance level \alpha = 0.05 and mean function evaluations

Test problem  FCGSA vs GSA  FCGSA vs FBGSA  FCGSA vs BBO  FCGSA vs SMO
f1            +             +               -             +
f2            +             +               +             +
f3            +             +               -             -
f4            +             +               +             -
f5            +             +               +             +
f6            +             +               +             -
f7            +             -               +             +
f8            +             +               +             +
f9            +             +               -             +
f10           +             +               +             +
f11           -             -               +             -
f12           +             +               +             +
f13           +             +               +             +
f14           +             +               -             -
f15           +             +               -             -
Total "+"     14            13              10            9

Table 4 Acceleration rate (AR) of FCGSA compared with GSA, FBGSA, BBO, and SMO

TP   GSA          FBGSA        BBO          SMO
f1   1.001077925  5.389624972  0.624567707  2.317042893
f2   3.52443609   3.52443609   3.524436090  2.320518679
f3   1.165648344  1.165648344  0.323613121  0.131612772
f4   1.767036076  2.704315020  2.751316513  0.244329168
f5   1.572331195  1.982365145  1.020699735  1.020699735
f6   2.406400099  3.742748425  3.742748424  0.906731333
f7   1.294054127  1.556590403  2.241209421  1.556590403
f8   1.205128205  0.974358974  1.666666666  583.1810257
f9   1.440743713  1.765248796  0.140449438  33.35851257
f10  1.261351802  1.479952093  2.175328056  6.95284524
f11  0.842622113  0.95309626   1.185959534  0.073324341
f12  1.616161616  2.070707071  2.696969696  3.73000000
f13  1.243828065  1.294083152  7.036885004  14.16939659
f14  1.475076297  1.576678535  0.472533062  0.026814598
f15  1.15505352   1.316642651  0.023157678  0.022416632


5 Conclusion

In this article, a new variant of GSA is introduced, known as the fast convergent gravitational search algorithm (FCGSA). In the proposed FCGSA, a sigmoidal function is used in Kbest, which decreases it rapidly through the iterations to enhance the convergence speed and to suppress the effect of premature convergence. Additionally, the velocity contribution in the position update process is reduced exponentially, which improves the exploitation capability. The method is more reliable, as it enhances the convergence speed while preserving a precise balance between the exploitation and exploration abilities of the mechanism. The introduced mechanism is compared with GSA, BBO, SMO, and one of its recent variants, FBGSA, over various benchmark functions. The achieved outcomes declare that FCGSA is a competitive variant of GSA, and in the future it can also be utilized to solve several real-world optimization problems.

References 1. Amoozegar M, Rashedi E (2014) Parameter tuning of GSA using doe. In: 2014 4th international conference on computer and knowledge engineering (ICCKE). IEEE, pp 431–436 2. Bansal JC, Sharma H, Jadon SS, Clerc M (2014) Spider monkey optimization algorithm for numerical optimization. Memet Comput 6(1):31–47 3. Gupta A, Sharma N, Sharma H (2016) Accelerative gravitational search algorithm. In: 2016 international conference on advances in computing, communications and informatics (ICACCI). IEEE, pp 1902–1907 4. Gupta A, Sharma N, Sharma H (2016) Fitness based gravitational search algorithm. In: 2016 international conference on computing, communication and automation (ICCCA). IEEE, pp 309–314 5. Li C, Li H, Kou P (2014) Piecewise function based gravitational search algorithm and its application on parameter identification of AVR system. Neurocomputing 124:139–148 6. Rashedi E, Nezamabadi-Pour H, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci 179(13):2232–2248 7. Shamsudin HC, Irawan A, Ibrahim Z, Zainal Abidin AF, Wahyudi S, Abdul Rahim MA, Khalil K (2012) A fast discrete gravitational search algorithm. In: 2012 fourth international conference on computational intelligence, modelling and simulation (CIMSiM). IEEE, pp 24–28 8. Sharma H, Bansal JC, Arya KV (2013) Opposition based lévy flight artificial bee colony. Memet Comput 5(3):213–227 9. Sharma K, Gupta PC, Sharma H (2015) Fully informed artificial bee colony algorithm. J Exp Theor Artif Intell 1–14 10. Simon D (2008) Biogeography-based optimization. IEEE Trans Evol Comput 12(6):702–713 11. Sudin S, Nawawi SW, Faiz A, Abidin Z, Abdul Rahim MA, Khalil K, Ibrahim Z, Md. Yusof Z (2014) A modified gravitational search algorithm for discrete optimization problem

Chapter 2

An Experimental Perusal of Impedance Plethysmography for Biomedical Application Ramesh Kumar, Aditya Kumar Singh Pundir, Raj Kr Yadav, Anup Kumar Sharma and Digambar Singh

1 Introduction

Bio-impedance methods are noninvasive techniques for medical applications, known in terms of electrical impedance tomography (EIT) and impedance plethysmography (IPG), which measure voltages corresponding to small changes in the electrical impedance of the body surface [1, 2]. These impedance-based measurements reflect blood volume and blood flow, which are related to the absence or presence of venous blockage [3]. The IPG technique is based on the bio-impedance principle, introduced by Nyboer in the 1940s, and has yielded useful results related to cardiac, cerebral, movement, blood pressure, and other functions of the body [1]. In IPG, a low-current pulse at high frequency is passed through two opposite electrodes, and the voltages of the remaining electrode pair are measured, from which the impedance is calculated [2, 4]. Morphological and physiological features form the basis for comparing impedance changes across patients, and the clinical features are selected through appropriate electrode locations on the patient's body [2]. The technique is reliable; however, a standard procedure is needed to account for the changes in the electrode impedances. The measured impedance is needed to identify the actual impedance from the whole

R. Kumar (B) · R. K. Yadav · A. K. Sharma
Department of Electronic Instrumentation and Control, Government Engineering College, Ajmer, Rajasthan, India
e-mail: [email protected]
A. K. S. Pundir · D. Singh
Arya College of Engineering & Information Technology, Jaipur, Rajasthan, India


impedance, and it should be reasonably assumed that the electrode impedances do not change between the calibration and the standard measurement procedures [4]. This paper presents an IPG system, covering the construction, simulation, and experimental work defined using IPG data acquisition from phantoms. The phantoms were chosen according to medical-field requirements, such as feto-maternal monitoring, hand blood flow, fruits, and so on. After the experimental work is presented, some important results related to the clinical side of the medical field are described.

2 Equipment and Methodology The electrical impedance distribution in a volumetric conductor is governed by Maxwell's equations. Since biological tissues are good conductors, this also holds for the living soft tissues of the human body [1]. The body parts may be regarded as an electrical source-free region, and biological tissues are mainly anisotropic and conductive [2]. If magnetic fields can be neglected, the governing relations reduce to:

J = σE (1)

∇·J = 0 (2)

E = −∇φ (3)

where J, σ, E, and φ represent the current density, conductivity, electric field intensity, and electric potential, respectively. Solving these coupled equations for the internal impedance distribution is the basis of localization. The IPG technique uses a four-electrode system; the basic arrangement is shown in Figs. 1 and 2. As shown in Fig. 1, the IPG arrangement defines a unique electrode placement: four electrodes, i.e., two pairs (interior and exterior), of which one pair passes the current while the other simultaneously measures voltage. The roles of the current and voltage pairs are then interchanged. The excitation current pulse is applied between electrodes E1 and E3, and the other pair, E2 and E4, is used for voltage measurement. The applied current pulse is in the range of 0.5–1.2 mA AC at 10–100 kHz [2]. The voltage produced between electrodes E2 and E4 is amplified, and the impedance is measured using a multimeter and a digital storage oscilloscope [5]. The acquired data are then uploaded through a MATLAB GUI and the results are displayed on the DSO, as shown in Fig. 2.
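As a minimal numeric illustration of this four-electrode measurement (a sketch under assumed values, not the authors' acquisition code), the transfer impedance follows directly from the injected current and the sensed voltage:

import numpy as np

# Tetrapolar estimate: current injected through E1-E3, voltage sensed
# across E2-E4; the impedance magnitude is |Z| = V / I.
I_RMS = 1.0e-3  # assumed injected current, 1 mA RMS (paper's range: 0.5-1.2 mA)

def transfer_impedance_ohms(v_sensed_rms_volts):
    return v_sensed_rms_volts / I_RMS

# Hypothetical sense-pair readings in millivolts:
readings_mv = np.array([1.5, 2.3, 18.0])
print(transfer_impedance_ohms(readings_mv * 1e-3))  # -> impedance in ohms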


Fig. 1 IPG configuration

Fig. 2 Block diagram of IPG technique

A. Electrode and lead Many types of electrodes are presently used for biomedical applications; in bio-impedance monitoring in particular, Ag–AgCl electrodes are preferred for their robustness, easy reconditioning, and other favorable properties, and they have been used for all phantoms in this study. Such an electrode consists of two layers: a plastic substrate coated with an Ag/AgCl ionic compound. In fabrication, a plastic injection-molded substrate is coated with a silver layer, which is then converted to silver chloride [5]. ECG (Ag/AgCl) electrodes are widely used in biomedical systems and applications. Their conductivity is strongly frequency dependent, so they exhibit a high DC offset and a low cutoff frequency, which is less desirable. For experimental


work, the leads are connected directly between the system and the body, after which the observer can examine the changes in internal impedance by monitoring [6, 7]. B. Waveform generator To analyze the correct behavior, a sinusoidal waveform generator is created first. This AC waveform generator is battery operated (12 V). The generated sinusoidal frequency must lie in the range of 10–100 kHz [2]. An IC such as the XR2206, MAX038, or ICL8038 can be chosen as the signal generator of the IPG technique. Here, a digital-to-analog converter (DAC) is used for sinusoidal signal generation, controlled by the controller according to the required current and frequency values. The obtained AC signal is passed to a voltage-to-current converter (VCC) to obtain AC signals suitable for IPG applications; the signal generator circuitry is shown in Fig. 3. C. VCC Several amplifying circuits are used in the IPG technique. One of them is the VCC (voltage-to-current converter), which produces an output current directly proportional to the input voltage. Ideally, the proposed current source should present infinite shunt impedance; the corresponding circuit diagram of the voltage-to-current converter is shown in Fig. 4. This circuit has been tested in Multisim using the breadboard tool, as shown in Figs. 3 and 4.

Fig. 3 Circuitry of signal generator

2 An Experimental Perusal of Impedance Plethysmography …

17

Fig. 4 V to I converter circuit

Moreover, all the circuits of the IPG system are fabricated on a PCB, as shown in Fig. 5.
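The DAC-based generator described above can be sketched as a sine lookup table stepped by the controller. The following minimal illustration uses assumed parameters (table size and DAC resolution are not specified in the paper):

import numpy as np

# Sine lookup table for a DAC-driven signal generator (assumed 8-bit DAC,
# 64 samples per period). Stepping the table at f_signal * TABLE_SIZE
# synthesizes the excitation tone that feeds the V-to-I converter.
TABLE_SIZE = 64
DAC_BITS = 8
full_scale = 2**DAC_BITS - 1

phase = 2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE
# Unsigned codes centered at mid-scale so the sine uses the full DAC range.
table = np.round((0.5 + 0.5 * np.sin(phase)) * full_scale).astype(int)

f_signal = 10e3  # low end of the 10-100 kHz band used in the paper
print("required DAC update rate:", f_signal * TABLE_SIZE, "Hz")  # 640 kHz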

3 Experimental Works The proposed work is validated by conducting experiments on phantoms using the impedance plethysmograph. The measurements were obtained using a four-channel impedance plethysmograph system [8]. The details of the experiments are given as follows. A. Experimental work on pregnant women The excitation current pulse was obtained from the IPG system. An array of electrodes was placed on the abdomen of the pregnant mother, and the current pulse was passed through the electrode pairs placed as shown in Fig. 6. The sensing electrodes were disposable Ag/AgCl electrodes. The variation in voltage at the abdominal surface is measured as a function of frequency and current amplitude; fetal movement and uterine contractions affect the internal impedance distribution, and the impedance is continuously recorded for further analysis. The current and voltage electrode pairs are then interchanged, so that the internal impedance is sensed according to the applied current, as shown in Fig. 6.


Fig. 5 IPG system on PCB and GUI in MATLAB

B. Experimental work on human body The voltage measurements are taken using pairs of sensing electrodes (other than the current electrodes) around the human body parts. The sensing electrodes are disposable Ag/AgCl electrodes. Variations at the body surface caused by blood flow change the impedance, and these changes are continuously recorded for further analysis [9–11]. The IPG system has a four-electrode configuration: two electrodes pass the current and the other two are used for voltage determination, as shown in Figs. 1 and 7.


Fig. 6 Monitoring of pregnant women

Fig. 7 Monitoring of human body

C. Experimental work on fruits Some fruits and vegetables, such as papaya, orange, and potato, were taken, as shown in Fig. 8. The phantoms were prepared by placing the electrodes on the surface and passing a current pulse to the reference electrode. The voltages generated at the other electrodes were measured and noted. These measurements help in analyzing the responses as the frequency and current amplitude are varied.

4 Results In the proposed IPG technique, the generated current pulse is applied to the subject under test between the E1 and E3 (I) pair, and the voltage is measured using the E2 and E4 (II) electrodes. This electrode arrangement is called the first position. The same cycle is repeated for the second position, interchanging the roles of the pairs.


Fig. 8 Monitoring of fruits

For the second position, the current is applied between the E2 and E4 (II) electrodes and the voltage is measured between the E1 and E3 (I) electrodes. The observed impedance from these results is shown in Table 1. It is observed that the impedance distribution depends on the parameters of the object, such as frequency, the resistance of the object, and its material. The impedance distribution according to frequency variation at both positions is presented in Table 1. Figure 9 presents the impedance distribution of position I (between E1 and E3), and Fig. 10 presents that of position II (between E2 and E4). As the frequency increases, the potential at both positions moves toward zero or stabilizes. This is because the cells of the fruit behave like small capacitive plates and therefore cause a capacitive effect. The real-time variation of the observed outcomes can also be seen in the waveforms on the DSO [12].

5 Conclusion The proposed IPG technique is an economical, real-time, noninvasive, and reliable technique that provides clinical experts with a portable diagnostic solution. In addition, it can be attached to diagnostic or clinical tools, making it useful for clinical applications as well as patients. The presented prototype can be used to observe variations in the internal electrical impedance of the human body, through which many parameters, such as functional blood-flow disturbances in arteries, blood pressure, heart rate, blood volume, and blockages in veins and arteries, can be assessed. The outcomes of the presented prototype show very low noise and demonstrate effective variation compared to costly measurement instruments on the market.

Table 1 Impedance distribution chart according to frequency variation of positions I and II of IPG: voltage (mV) measured at positions I and II for 10, 30, 50, and 80 kHz across the five experimental phantoms (mother, human body, papaya, potato, and orange)


Fig. 9 Impedance chart for position I (between E1 and E3) of the electrodes (voltage in mV versus frequency in kHz for the five phantoms)

Fig. 10 Impedance chart for position II (between E2 and E4) of the electrodes (voltage in mV versus frequency in kHz for the five phantoms)

References

1. Nyboer J (1950) Electrical impedance plethysmography; a physical and physiologic approach to peripheral vascular study. J Am Heart Assoc 2(6):811–821
2. Kumar R, Kumar S, Queyam AB, Sengupta A (2018) An experimental validation of bioimpedance technique for medical & non-medical application. In: 2018 8th international conference on cloud computing, data science & engineering. IEEE, pp 14–15
3. Corciova C, Ciorap R, Zaharia D, Matei D (2011) On using impedance plethysmography for estimation of blood flow. In: IEEE, pp 8–11
4. Ansari S, Belle A, Najarian K, Ward K (2010) Impedance plethysmography on the arms: respiration monitoring. In: 2010 IEEE international conference on bioinformatics and biomedicine workshops, BIBMW 2010, vol 1, pp 471–472
5. Kumar R, Pahuja SK, Sengupta A (2015) Phantom based analysis and validation using electrical impedance tomography. J Instrum Technol Innov 5(3):17–23
6. Luna-Lozano PS, García-Zetina ÓA, Perez-López JA, Alvarado-Serrano C (2014) Portable device for heart rate monitoring based on impedance plethysmography. In: 11th international conference on electrical engineering, computing science and automatic control (CCE), pp 1–4
7. Rezhdo A, Agrawal S, Wu X (2014) Multilayer phantom for electrical impedance testing of a fetal heartbeat. In: IEEE international conference, pp 1–2
8. Attenburrow DP, Flack FC, Portergill MJ (1990) Impedance plethysmography. Equine Vet J 22(2):114–117
9. Furuya N, Nakayama K (1981) Development of multi electrode impedance plethysmography. Med Biol Eng Comput 56(221):199–211
10. Anderson FA, Penney BC, Patwardhan NA, Wheeler HB (1980) Impedance plethysmography: the origin of electrical impedance changes measured in the human calf. Med Biol Eng Comput 18(2):234–240
11. Kumar R, Kumar S, Sengupta A (2019) An experimental analysis and validation of electrical impedance tomography technique for medical or industrial application. Biomed Eng Appl Basis Commun 31:2
12. Kumar R, Kumar S, Gupta A (2018) Analysis and validation of medical application through electrical impedance based system. Int J Intell Syst Appl Eng 6(1):14–18
13. Guha SK, Anand S (1986) Comparison of electrical field plethysmography with electrical impedance plethysmography. Ann Biomed Eng 10:231–239
14. Herscovici H, Roller DH (1986) Noninvasive determination of central blood pressure by impedance plethysmography. IEEE Trans Biomed Eng BME-33(6):617–625

Chapter 3

Compressive Sensing: An Efficient Approach for Image Compression and Recovery

Vivek Upadhyaya and Mohammad Salim

1 Introduction In 1949, Claude Shannon formulated the sampling theorem, proving that a band-limited signal can be reconstructed exactly from uniformly spaced samples taken at a sufficient fixed rate. According to the Nyquist criterion, to avoid loss of information the sampling rate should be at least double the signal bandwidth. This rate is known as the Nyquist rate, named after Harry Nyquist, who studied the same problem in 1928 in his work on telegraph transmission at Bell Labs. The Shannon sampling theorem underlies modern signal processing: it is used in fundamental communication tasks and data operations on audio and video, and in medical applications such as X-ray imaging. Computation is crucial for any such process, so computing devices play a vital role in signal processing and communication, and they have also driven progress in various mathematical areas. The analog-to-digital converter (ADC) is the heart of digital signal processing (DSP). ADC designers have reviewed its functional limitations and tried to overcome the problems associated with Shannon's sampling theorem using advanced technologies in RADAR detection, wide-band communications, and telecommunication. Analog-to-digital conversion is very expensive at the enhanced data rates of medical and RADAR applications, so high-speed ADC architectures cannot meet the Nyquist rates demanded by such huge data rates. Our main focus is to find a solution that can represent the signal in a very concise manner with a low noise level. The traditional compression approach is known as transform coding. For this


transform coding, the prime requirement is a basis in which a sparse representation of the signal of interest can easily be found. A sparse representation is one in which a signal of length N can be represented using K ≪ N nonzero coefficients obtained by the compression technique. The prime objective is to obtain the K nonzero coefficients that can represent the desired signal. Given the limitations of ADCs discussed above, several innovative approaches have emerged to overcome Shannon's data-rate problem. Even with adequate analog-to-digital conversion, a typical computer struggles to process the resulting amount of data: if data sampled at 1 GHz with 16-bit samples are processed, a stream of about 4 GB/s is generated, which can fill a modern hard disk in a minute or two. In most cases the informative content is much smaller, but under Shannon-rate sampling the data volume grows and transmitting it becomes tedious. Anyone familiar with communication theory may ask why only the required part of the signal is not sampled. At first this question may seem naive, but its answer is given by the compressed sensing (CS) approach [1, 2]. In compressive sensing, the sampling rate of the analog-to-digital converter is much closer to the information rate of the signal. CS changes the sampling rate required for proper reconstruction of the signal, but it does not violate the sampling theorem outright; rather, some of its rules are relaxed relative to Shannon's theorem. The paper is organized into six sections. Section 1 introduces compressive sensing. Section 2 presents an extensive literature review. Section 3 describes the mathematical modeling and basic algorithms of compressive sensing. Section 4 contains the experimental results, the analysis, and the quality parameters associated with compression and reconstruction. Section 5 concludes the whole analysis, and Section 6 discusses future work.
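As a small numeric illustration of the Nyquist criterion discussed above (a sketch, not part of the original analysis), consider a 1 kHz tone sampled above and below twice its frequency:

# Sampling a 1 kHz tone: fs >= 2 kHz preserves it, while fs = 1.2 kHz
# aliases it to an apparent frequency of |f - fs| = 200 Hz.
f_tone = 1000.0
for fs in (8000.0, 1200.0):
    alias = abs(f_tone - round(f_tone / fs) * fs)
    print(f"fs = {fs:.0f} Hz -> apparent tone at {alias:.0f} Hz")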

2 Literature Survey Duarte and Eldar [3] showed that the randomly drawn measurement operator can be replaced by highly structured sensing architectures that closely match the characteristics of realistic hardware. Sparsity in its standard form has to be enlarged to include richer signal classes and broader data models, including signals in the continuous-time domain. The essence of their overview is exploiting structure in both the measurements and the signal in compressive sensing. The review traces subsequent developments in the theory and practice underlying CS, specifically extending the pre-existing framework to broader signal classes and motivating new design techniques and implementations. These offer a promising methodology for rethinking several acquisition systems and extending the limits of present sensing capabilities. The authors also demonstrated that CS holds promise for improved resolution by exploiting signal structure and can usher in a


revolution in many applications, such as microscopy, by efficiently using the degrees of freedom available in these techniques. Many applications, like radar, medical imaging, military and civilian surveillance, and consumer electronics, rely on analog-to-digital converters and are basically resolution limited. If the Nyquist constraints are removed, data transfer becomes very fast and less memory is required, while image or sound quality can be much better when the number of measurements is considered rather than the total number of samples present in the image. The main RIP property is not satisfied in electromagnetic imaging applications [4]. Hence, Richard Obermeier and Jose Angel Martinez Lorenzo illustrate an approach in electromagnetic imaging to minimize the mutual coherence of the sensing matrix; mutual coherence is among the most important parameters in compressive sensing. This design method is implemented for a simple multiple-monostatic imaging application, in which the position of the transmission antenna serves as a design variable for the measurements. The numerical results given by the authors demonstrate the capability of the algorithm: it decreases the mutual coherence and produces good sensing matrices with good recovery capability for compressive sensing. The authors explain how the side lobes of the imaging system's point spread function (PSF) correlate with the mutual coherence, and they use a first-order algorithm to decrease the mutual coherence of the sensing matrix, which is calculated from the nonlinearity. A novel compressive sensing scheme is presented by Hamid Reza, Mohammad Ali, and Bijan Abhasi Arand to recover high-resolution ISAR images for linear frequency-modulated signals [5]. The main motive of the authors is to obtain high-resolution ISAR images with the help of sparse stepped frequencies and to increase the anti-jamming ability of ISAR. The paper also reports performance in terms of resolution and amplitude error, using few probe frequencies instead of a wideband signal. The proposed method constructs a polarimetric ISAR image using a sparse recovery algorithm, aiming to increase reconstruction speed and decrease the memory space used. The sparsity-driven optimization problem is resolved using two-dimensional optimization, and the authors analyze the algorithm on real data of a ship and an aircraft. Computed tomography, which affects patient health due to high-dose radiation, is considered in [6]. To reduce this side effect, the authors reduce the radiation dose and decrease the number of projection angles. Andres Jerez, Miguel Marquez, and Henry Arguello propose a new technology, coded-aperture X-ray tomography, to remove these limitations. The compressive sensing concept is used in this approach, requiring a small number of projections; only a few samples taken from the previous slot are used for proper reconstruction. It is shown that only 18% of the measurement samples are needed for proper reconstruction, and the PSNR value is improved by 2 dB by the adaptive coded aperture with the compressive sensing strategy. In recent medical advancements, extreme telesurgery (ETS) is a basic scenario for compression and transmission [7]. These instruments are very sensitive to noise


in the channel. To overcome this problem, Lakshminarayana and Mrinal Sarvagya suggest a new method of image signal measurement and recovery using compressive sensing, employing real-time image compression in this particular application; the scheme is applied to medical-field imaging to analyze its impact. Xiumei Li, Guoan Bi, Srdjan Stankovic, and Irena Orovic discuss image recovery based on Bayesian compressive sensing using a single-level wavelet transform [8], through which an image is decomposed into two parts: low-frequency coefficients and high-frequency coefficients. The authors take measurements from the high-frequency wavelet coefficients for reconstruction; the measured values are then combined with the unchanged low-frequency wavelet coefficients. The Bayesian compressive sensing (BCS) results are compared with basis pursuit and the OMP algorithm, improving the PSNR value of the tested signal with minimum complexity. Within the BCS algorithm, different measurement matrices, such as the Gaussian random matrix, Bernoulli matrix, Toeplitz matrix, and Hadamard matrix, are compared for proper image recovery. Standard image compression methods (e.g., JPEG2000) suffer from bit loss, a problem usually addressed by source coding [9]. Since source coding and channel coding serve different requirements, that paper formulates the problem differently and uses compressive sensing to solve it. DWT is applied for sparse representation, and by exploiting the properties of the 2-D DWT a very fast CS measurement method is presented. After applying CS there is minimal bit loss with a considerable amount of information preserved; an l1-minimization approach reconstructs the image at the decoder side. Across different observations, the CS-based image codec proved more robust than existing CS techniques and relevant joint source-channel coding (JSCC) schemes. Alonso et al. [10] described compressive sensing for RADAR data. RADAR data sets are used in many places but are unfortunately so large that compressing them without significant loss is highly desirable. From information theory, to compress a signal it must be modeled as a set of basic elements carrying the same quantity of information in decomposed form. Various experiments have concluded that redundancy exists in the explicit representation up to an extent, and this redundancy is what allows compressive sensing to reconstruct the signal without discrepancy from incomplete measurements. Based on this assumption, the paper proposes a model giving a keen formulation of raw data in radar imaging. The method departs from traditional matched filtering and shows that the new modes of capturing data are more efficient in orbital configurations. Experiments using this method on 1-D simulated signals are shown, the process is repeated on SAR data, and CS is then applied to the received pulses, showing that


with very few measurements the image can be reconstructed without significant loss and with very high resolution.

3 Mathematical Modeling and Basic Algorithms of Compressive Sensing 3.1 Mathematical Modeling for Sparse Recovery Before proceeding to the deeper mathematical details of compressive sensing, we first fix the general signal model. A vector x ∈ Rⁿ is K-sparse only if it has at most K nonzero components:

‖x‖₀ := |supp(x)| ≤ K ≪ n (1)

Here |supp(x)| denotes the cardinality of the support set of x, i.e., the number of nonzero components; ‖·‖₀ is only a quasi-norm. For 1 ≤ p < ∞, we denote by ‖·‖p the usual ℓp norm:

‖x‖p = (Σᵢ₌₁ⁿ |xᵢ|ᵖ)^(1/p) (2)

Also, ‖x‖∞ = max |xᵢ|, where xᵢ denotes the ith component of the vector x. If a signal is not perfectly sparse but its coefficients decay very fast, it is called approximately sparse. Compressible signals are those that satisfy a power-law decay:

|x*ᵢ| ≤ L·i^(−1/q) (3)

where x* denotes the nonincreasing rearrangement of x, L is a positive constant, and 0 < q < 1. Sparse signals can thus be considered a special case of compressible signals. Sparse signals are measured by CS through a set of nonadaptive linear measurements, each an inner product between the signal x ∈ Rⁿ and a measurement vector aᵢ ∈ Rⁿ, where i = 1, …, m. Collecting m measurements, the vectors aᵢ form the rows of an m × n measurement matrix A. The sparse recovery problem is then the reconstruction of a K-sparse signal x ∈ Rⁿ from its measurement vector y = Ax ∈ Rᵐ.
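A minimal sketch of this measurement model (illustrative sizes, not the paper's settings) is:

import numpy as np

# y = A x with x K-sparse in R^n and A an m x n Gaussian measurement matrix.
rng = np.random.default_rng(0)
n, m, K = 256, 64, 8

x = np.zeros(n)
support = rng.choice(n, K, replace=False)     # supp(x), with |supp(x)| = K
x[support] = rng.standard_normal(K)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # rows are the vectors a_i
y = A @ x                                     # m inner products <a_i, x>
print(np.count_nonzero(x), "nonzeros observed through", m, "measurements")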


3.2 Reconstruction Algorithm Reconstructing the compressed signal from a few noisy sample vectors is the most challenging task. Various techniques have been proposed to solve this problem; they fall into three main categories. Greedy Pursuit: In this approach, optimization is done locally, selecting one component of the approximation at each step. Examples include orthogonal matching pursuit (OMP) [11], regularized orthogonal matching pursuit (ROMP), and stagewise OMP. Convex Relaxation: These techniques solve a convex program whose minimizer approximates the target signal; algorithms such as interior-point methods, iterative thresholding, and projected gradient methods can perform the optimization. Combinatorial Algorithms: These techniques acquire highly structured samples that support rapid reconstruction via group testing; they include HHS, chaining pursuit, and Fourier sampling. As reported in the literature, combinatorial algorithms are super-fast (sublinear in the target signal length) but need a large number of samples that can be difficult to obtain, while convex relaxation uses fewer measurement samples at a higher computational load. Balancing sampling efficiency and running time, greedy pursuit, especially the ROMP algorithm, is preferable.
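As an illustration of the greedy-pursuit family cited above, the following is a minimal OMP sketch (plain OMP rather than ROMP, and not the paper's implementation):

import numpy as np

def omp(A, y, K):
    # Greedy pursuit: pick the column of A most correlated with the
    # residual, then least-squares re-fit on the enlarged support.
    n = A.shape[1]
    residual, support = y.copy(), []
    for _ in range(K):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n, m, K = 256, 64, 8
x = np.zeros(n)
x[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
A = rng.standard_normal((m, n)) / np.sqrt(m)
print("max recovery error:", np.max(np.abs(x - omp(A, A @ x, K))))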

4 Results and Analysis The analysis is done using MATLAB simulation software. Our main objective is to find the value of PSNR as the C/R (compression ratio) varies. For this purpose a test image is taken: the standard "cameraman" image, with dimensions 64 × 64.

4.1 Signal-to-Noise Ratio The signal-to-noise ratio indicates the efficiency of the reconstruction; a higher SNR value shows that the reconstruction algorithm is more efficient. The expression for the SNR is:

SNR = 10 × log₁₀ [ Σₘ (Xᵢ(m))² / Σₘ (Xᵢ(m) − Yₒ(m))² ] (4)

where Xᵢ is the actual signal used for testing, Yₒ is the recovered signal, and m indexes the samples.

4.2 Compression Ratio The compression ratio is another parameter that is crucial for data compression and reconstruction. It is the ratio of the number of samples taken for reconstruction to the total number of samples in the signal; as the compression ratio increases, the number of samples used to reconstruct the original signal also increases.

C/R = U/V (5)

where U is the number of samples taken for the reconstruction and V is the total number of samples in the signal.

Different figures have been observed for the reconstruction of the image. In this analysis, minimum-l1, a type of basis pursuit algorithm, is used for reconstruction: the extensive literature review indicates that l1 minimization is very efficient for reconstructing the original signal and recovers the original image with minimal loss. With these considerations, various combinations of basis and sensing matrices are used to reconstruct the compressed image. The "cameraman" image, a 64 × 64 grayscale image, is taken as the test image (Table 1). The compression ratio and PSNR values for the different transforms are given below, considering different combinations of sensing and basis matrices. It can be observed that in most cases the PSNR value with DST is lower than with DCT (Table 2). Based on the table, different plots and values have been examined; the comparison in this paper is based on PSNR values. The tables and plots show that as the compression ratio increases, the PSNR value also increases. The reason is the sample size: as the number of samples increases, the quality of the image signal also improves. This test is conducted on two types of basis matrices and four types of measurement matrices (Gaussian, Exponential, Rayleigh, and Poisson).
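The two quality measures of Eqs. (4) and (5) are straightforward to compute; a minimal sketch:

import numpy as np

def snr_db(x_true, x_rec):
    # Eq. (4): reconstruction SNR in dB.
    return 10 * np.log10(np.sum(x_true**2) / np.sum((x_true - x_rec)**2))

def compression_ratio(u_samples, v_total):
    # Eq. (5): C/R = U / V.
    return u_samples / v_total

# e.g. 500 measurements of the 64 x 64 test image (4096 pixels):
print(compression_ratio(500, 64 * 64))  # -> 500/4096, about 0.122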

5 Conclusion Memory is a limited resource in any storage device. From the results and analysis discussed above, it can be observed that from a very small number of measurements we can reconstruct the original image with better quality.


Table 1 Recovered images using different sensing matrices (Gaussian, Exponential, Rayleigh, and Poisson) and basis matrices (DCT, DST) for compression ratios C/R = 500/4096 and 3500/4096


Table 2 Compression ratio versus PSNR value for different basis and sensing matrices

Compression ratio (U/V) | Gaussian DCT | Gaussian DST | Poisson DCT | Poisson DST | Exponential DCT | Exponential DST | Rayleigh DCT | Rayleigh DST
500/4096 | 16.1852 | 15.0409 | 16.5291 | 15.1555 | 16.3662 | 15.0336 | 16.4040 | 14.7591
1000/4096 | 18.6824 | 18.2308 | 18.6802 | 17.9816 | 18.5053 | 17.9846 | 18.6180 | 18.2821
1500/4096 | 20.4819 | 20.2814 | 20.4224 | 20.2329 | 20.4102 | 20.1036 | 20.5553 | 19.8836
2000/4096 | 22.2311 | 21.9801 | 22.4113 | 22.3308 | 22.3840 | 22.1447 | 22.3145 | 22.4131
2500/4096 | 23.9627 | 24.1713 | 23.4278 | 23.8002 | 24.2560 | 24.2261 | 24.1000 | 23.8509
3000/4096 | 26.7882 | 26.6745 | 26.4449 | 26.3481 | 26.2512 | 26.1360 | 26.6958 | 26.0934
3500/4096 | 29.8551 | 29.6660 | 30.0736 | 29.5464 | 29.8662 | 29.3183 | 29.4657 | 29.3133

PSNR values against compression ratio are listed in Table 2 in the previous section to show the variation caused by changing the sensing and basis matrices. The variations in PSNR values are due to the distributions of the random matrices taken as measurement or basis matrices. Observations on the "cameraman" image show that by using various sensing and basis matrices we can enhance the quality of the reconstructed image. As the number of measurements for an image increases, the image quality improves accordingly. Thus, the difference between the images is easily resolved at low C/R values, but as C/R increases it becomes difficult to distinguish the original from the reconstructed image.

6 Future Scope In this paper the cameraman image was taken as the test image; Gaussian, Poisson, Exponential, and Rayleigh matrices were used as measurement matrices, and DCT and DST random matrices were considered as basis matrices. In future work the same algorithm can be applied to video signals and other types of images, such as hyperspectral, SAR, and satellite images, to check the robustness of the algorithm. Different basis matrices, such as DWT and DFT, can also be used. Some signals have similar properties, so if the behavior of a signal is known in advance, sensing and measurement matrices can be designed to adapt to changes in the signal and improve the SNR.

References

1. Donoho D (2006) Compressed sensing. IEEE Trans Inform Theory 52(4):1289–1306
2. Baraniuk R (2007) Compressive sensing. IEEE Signal Process Mag 24(4):118–120, 124
3. Duarte MF, Eldar YC (2011) Structured compressed sensing: from theory to applications. IEEE Trans Signal Process 59(9):4053–4085
4. Obermeier R (2016) Compressed sensing algorithms for electromagnetic imaging applications. PhD dissertation, Northeastern University
5. Hashempour HR, Masnadi-Shirazi MA, Arand BA (2017) Compressive sensing ISAR imaging with LFM signal. In: 2017 Iranian conference on electrical engineering (ICEE). IEEE, pp 1869–1873
6. Jerez A, Márquez M, Arguello H (2016) Compressive computed tomography image reconstruction by using the analysis of the internal structure of an object. In: 2016 XXI symposium on signal processing, images and artificial vision (STSIVA). IEEE, pp 1–5
7. Lakshminarayana M, Sarvagya M (2015) Random sample measurement and reconstruction of medical image signal using compressive sensing. In: 2015 international conference on computing and network communications (CoCoNet). IEEE, pp 255–262
8. Li X, Bi G, Stankovic S, Orovic I (2016) Improved Bayesian compressive sensing for image reconstruction using single-level wavelet transform. In: 2016 international conference on virtual reality and visualization (ICVRV). IEEE, pp 133–137
9. Deng C, Lin W, Lee B-S, Lau CT (2010) Robust image compression based on compressive sensing. In: 2010 IEEE international conference on multimedia and expo (ICME). IEEE, pp 462–467
10. Alonso MT, López-Dekker P, Mallorquí JJ (2010) A novel strategy for radar imaging based on compressive sensing. IEEE Trans Geosci Remote Sens 48(12):4285–4295
11. Tropp JA, Gilbert AC (2007) Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans Inf Theory 53(12):4655–4666

Chapter 4

Optimal Location of Renewable Energy Generator in Deregulated Power Sector

Digambar Singh, Ramesh Kumar Meena and Pawan Kumar Punia

1 Introduction Renewable energy sources (RESs), such as wind and solar power, are substitutes for fossil fuel-based power generation. Integration of renewable energy sources into the power grid has acquired great importance in electrical power systems. Through significant advances in technology, the cost of power generation from RESs has been declining in the past few years. Therefore, RESs are expected to bridge the gap between the generation and demand of electrical energy in the near future. In the work of Alberto et al. [1], a multi-period alternating current optimal power flow (ACOPF) has been implemented to analyze the optimal use of RESs and energy storage systems (ESSs). An optimal placement and sizing of storage supporting the transmission and distribution network is described in [2], using complex-valued neural networks (CVNN) and time-domain power flow (TDPF). Correspondingly, for loss minimization, an algorithm for optimal placement of REGs in the presence of dispatchable generation using the traditional Big Bang-Big Crunch method is reported in [3]. Cheng et al. [4] discussed a case study of Wang-An Island on energy demands and the potential of RESs for their optimal integration, simulated using the EnergyPLAN model. However, the authors of [1–4] have not considered the deregulated model of the power system. Singh et al. [5] discussed a technique for optimal location of RESs using a mixed-integer nonlinear programming (MINLP) method considering loadability, losses, and cost. Keane et al. [6] proposed a methodology to obtain the optimal allocation of embedded generation using linear programming. Geev et al. [7] proposed


the step-controlled primal–dual interior point method to maximize the social benefit (SB) for optimal placement of RESs. Sharma et al. [8] considered SB with respect to the maximum rating of a wind energy generator at every bus. However, in the above papers [5–8], minimization of pollutant emission has not been considered. Optimal placement of REGs for profit maximization, enhancement of voltage regulation, and reduction of losses at various load buses in the power network is presented in [9]. In the present paper, a new cost-based multi-attribute function is proposed that combines social benefit, power loss, and environmental emission for the optimal location of RESs. The IEEE-6 bus system has been considered to validate the present approach. Initially, the potential of RESs available at every bus in the given system is identified from weather forecasting data. Further, the prominent primal–dual interior point method (PDIPM) optimization technique is adopted to evaluate the maximized social benefit under various system constraints, such as line MVA limits and voltage limits, by locating the RESs at different selected buses. Subsequently, power loss and pollutant emissions are determined for each of the selected buses in the system.

2 Problem Formulation The mathematical formulation of the proposed approach, which maximizes social benefit and minimizes power losses as well as pollutant emission through optimal location of RESs, is presented here. Generally, bulk loads as well as retailers in a deregulated electricity market are required to bid their maximum demand and price function. Similarly, all generators are required to bid their generation cost function along with their maximum generation [8]. In double-auction bidding models, both the generation and distribution companies are allowed to offer and bid their prices to the independent system operator (ISO) [8, 10]. Let the vector of pool real power demand (Pdᵖ) and the vector of pool real power generation (Pgᵖ) be given by (1) and (2):

Pdᵖ = {Pdⱼᵖ; j = 1, 2, 3, …, m} (1)

Pgᵖ = {Pgⱼᵖ; j = 1, 2, 3, …, n} (2)

4 Optimal Location of Renewable Energy Generator …

NSB = Σj=1..nd Bj(Pdj) − Σi=1..ng Ci(Pgi) − WPGc · PWk  $/h (3)

It is noticed from Eq. (3) that a perfect market attains the maximum net social benefit, whereas real markets always operate at lower levels of SB. The difference between the social benefits of a perfect market and a real market is the key measurement of the efficiency of the real market. The net social benefit of Eq. (3) is maximized subject to the following transmission network constraints.

• The power flow equations for the power network:

g(V, φ) = 0 (4)

where

g(V, φ) = Pi(V, φ) − Pinet for each PQ bus i
g(V, φ) = Qi(V, φ) − Qinet for each PQ bus i
g(V, φ) = Pm(V, φ) − Pmnet for each PV bus m, not including the reference bus

Here Pi and Qi are the calculated real and reactive powers for PQ bus i, respectively; Pinet and Qinet are the specified real and reactive powers for PQ bus i; Pm and Pmnet are the calculated and specified real powers for PV bus m, correspondingly; and V and φ are the voltage magnitudes and phase angles at the different buses.

• The inequality constraint on real power generation Pgi at PV buses:

Pgimin ≤ Pgi ≤ Pgimax (5)

where Pgimin and Pgimax represent the minimum and maximum limits on real power generation at the ith bus.

• The inequality constraint on reactive power generation Qgi at the ith PV bus:

Qgimin ≤ Qgi ≤ Qgimax (6)

where Qgimin and Qgimax represent the limits on reactive power generation at the ith PV bus.

• The inequality constraint on the voltage magnitude V of each PQ bus:

Vimin ≤ Vi ≤ Vimax (7)

where Vimin and Vimax represent the minimum and maximum limits on the voltage magnitude at the ith bus.

• The inequality constraint on the voltage angle at PV buses:

δimin ≤ δi ≤ δimax (8)

where δimin and δimax represent the minimum and maximum limits on the angle at the ith bus.

• The MVA flow limit on each transmission line:

MVAfij ≤ MVAfijmax (9)

where MVAfijmax is the maximum rating of the transmission line connecting buses i and j.

Social benefit as given in Eq. (3) is optimized as the objective function, satisfying all constraints in Eqs. (4)-(9). Initially, without locating RESs, the system power losses and pollutant emission are determined. Then the optimization program is run repeatedly, locating the RESs at the different preliminarily selected buses of the given power system, and the optimal NSB along with system losses and pollutant emission is evaluated. A multi-attribute methodology is used in this paper to determine the combined effect of social benefit, power losses, and emission on the optimum location of RESs. Let

F1 = SBi (10)

F2 = λ Σb=1..n PLb (11)

F3 = H Σg=1..m Ei(Pgi) (12)

where SBi is the social benefit in $/h, PLb is the power loss in megawatts (MW), and Ei(Pgi) is the emission in kilograms per hour (kg/h) when the RES is located at the ith bus; λ is the marginal price at the reference bus in dollars per megawatt-hour ($/MWh), b is the branch index, g is the generator index, and H is the penalty factor in dollars per kilogram ($/kg). Further, the proposed cost-based multi-attribute function (CMAFi) in $/h may be written as

(CMAF)i = F1 − F2 − F3 $/h (13)

CMAFi is determined by locating the RESs at each preliminarily selected bus i. The preliminarily selected buses are then arranged in descending order of CMAFi; that is, if CMAFi is maximum when the RES is located at bus i, then bus i becomes the optimal location. The second, third, and subsequent preference order is determined in the same way.
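A minimal sketch of Eqs. (10)-(13) for one candidate bus (the marginal price lmbda is an assumed input; H = 0.66 $/kg as used in Sect. 4):

def cmaf(sb, branch_losses_mw, gen_emissions_kg_h, lmbda, H=0.66):
    F1 = sb                               # social benefit, $/h
    F2 = lmbda * sum(branch_losses_mw)    # cost of losses, $/h
    F3 = H * sum(gen_emissions_kg_h)      # emission penalty, $/h
    return F1 - F2 - F3                   # CMAF, $/h

Candidate buses are then ranked in descending order of this value.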


3 Step by Step Algorithm and Flow Chart of the Proposed Approach The step-by-step algorithm of the proposed multi-attribute methodology for optimal location of RESs is given below; a compact sketch of the resulting loop follows the list.

Step 1: Read all system data.
Step 2: Select the bus numbers that have sufficient RES (solar or wind) potential. Let the total number of selected buses be nr.
Step 3: Run the optimization program, maximizing SB under all system constraints as explained in Sect. 2, without locating RESs in the system, and note the values of SB, losses, and emission. Determine the value of CMAF0.
Step 4: Locate an REG (PW = 5 MW) at one of the selected buses i.
Step 5: Run the optimization program, maximizing SB under all system constraints as explained in Sect. 2. Note the values of SB, losses, and emission, and determine the value of CMAFi.
Step 6: Initialize the bus count i = 1.
Step 7: Increment i to the next selected bus, i.e., i = i + next selected bus.
Step 8: Increase the REG as PWi = PW + 5 MW.
Step 9: If i ≤ nr, go to Step 4; otherwise go to Step 10.
Step 10: Arrange the selected buses in descending order of CMAFi. This is the preference list for optimal location of RESs.
Step 11: Print all results.
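The following compact sketch mirrors the loop above under stated assumptions: run_opf_social_benefit() is a dummy stand-in for the PDIPM optimal power flow (not implemented here), and the bus potentials follow the case study of Sect. 4.

STEP_MW = 5
potential = {4: 10, 5: 15, 6: 10}         # available RES potential (MW)
placed = {bus: 0 for bus in potential}

def run_opf_social_benefit(placement):
    # Dummy stand-in so the sketch executes; a real implementation would
    # solve the OPF of Sect. 2 and return (SB $/h, branch losses MW,
    # generator emissions kg/h) for the given REG placement.
    total = sum(placement.values())
    return 3400.0 + 11.0 * total, [8.5 - 0.02 * total], [78.0 - 0.6 * total]

def cmaf_of(placement, lmbda=1.0, H=0.66):
    sb, losses, emissions = run_opf_social_benefit(placement)
    return sb - lmbda * sum(losses) - H * sum(emissions)

while any(placed[b] < potential[b] for b in placed):
    candidates = [b for b in placed if placed[b] + STEP_MW <= potential[b]]
    trials = {}
    for b in candidates:                  # try one 5 MW block per bus
        trial = dict(placed)
        trial[b] += STEP_MW
        trials[b] = cmaf_of(trial)
    best = max(trials, key=trials.get)    # highest CMAF wins the block
    placed[best] += STEP_MW

print(placed)                             # final MW allocation per bus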

4 Simulation Analysis and Results In this section, a case study on the IEEE-6 bus system has been considered to validate the proposed approach. The value of the penalty factor (H) has been taken as $0.66/kg [11]. In the literature, net social benefit, power loss, emission, and CMAF have been considered individually in the deregulated power system; since the present paper considers all four factors together, a comparative analysis has not been performed.

4.1 IEEE-6 Bus System The system data of the IEEE-6 bus system are given in Tables 1, 2, 3, and 4. The system has been considered with three generators and two dispatchable loads [12]. The generators are located at bus numbers 1, 2, and 3, as given in Table 1. Line data for the IEEE-6 bus system are given in Table 2.


Table 1 Generator bidding coefficients and real power limit for IEEE-6 bus system

S. no. | Generator | ai ($/MW²h) | bi ($/MWh) | ci ($/h) | Pgi max (MW) | Pgi min (MW)
1 | G1 | 0.001 | 45 | 0 | 200 | 50
2 | G2 | 0.002 | 49 | 0 | 150 | 37.5
3 | G3 | 0.001 | 48 | 0 | 180 | 45

Table 2 Line data for IEEE-6 bus system

Bus i–j | R (p.u.) | X (p.u.) | B (p.u.)
1–2 | 0.1 | 0.2 | 0.04
1–4 | 0.05 | 0.2 | 0.04
1–5 | 0.08 | 0.3 | 0.06
2–3 | 0.05 | 0.25 | 0.06
2–4 | 0.05 | 0.1 | 0.02
2–5 | 0.1 | 0.3 | 0.04
2–6 | 0.07 | 0.2 | 0.05
3–5 | 0.12 | 0.26 | 0.05
3–6 | 0.02 | 0.1 | 0.02
4–5 | 0.2 | 0.4 | 0.08
5–6 | 0.1 | 0.3 | 0.06

Table 3 Demand bidding coefficients and real power limit for IEEE-6 bus system

S. no. | Bus number | αj ($/MW²h) | βj ($/MWh) | λj ($/h) | Real power demand limit Pdj max (MW)
1 | 4 | 0.002 | 88 | 0 | 95
2 | 5 | 0.001 | 91 | 0 | 95

Table 4 Fixed load real power demand for IEEE-6 bus system

S. no. | Bus number | Fixed real power demand Pdj (MW) | Qdj (MVAr)
1 | 3 | 20 | 10
2 | 6 | 70 | 70

The dispatchable loads have been located at bus numbers 4 and 5, with fixed loads at bus numbers 3 and 6. The bid function for a generation unit has been taken as given in Eq. (14) below.


Table 5 Generator emission coefficients for IEEE-6 bus system

Generators | A0i (kg/MW²h) | A1i (kg/MWh) | A2i (kg/h)
G1 | 0.00419 | 0.32767 | 13.8593
G2 | 0.00419 | 0.32760 | 13.8593
G3 | 0.00683 | −0.54551 | 40.2669

Ci(Pgi) = ai(Pgi)² + bi(Pgi) + ci (14)

where ai, bi, and ci are the coefficients of the generators and Pgi is the generated power at the ith bus. The corresponding generator bidding coefficients and real power limits are given in Table 1. Similarly, the demand bidding function has been taken as

Bj(Pdj) = αj(Pdj)² + βj(Pdj) + λj (15)

where αj, βj, and λj are the demand bidding coefficients and Pdj is the amount of electric power to be purchased at the jth bus. The corresponding demand bidding coefficients and real power demand limits are given in Table 3. The fixed real power demands are given in Table 4. The emission function is represented by the following quadratic equation (with the coefficient ordering matching the units of Table 5):

Ei(Pgi) = A0i(Pgi)² + A1i(Pgi) + A2i kg/h (16)

where A0i, A1i, and A2i are the emission coefficients of the generators and Pgi is the power generation at the ith bus. The values of the emission coefficients [13] are given in Table 5.

Let bus numbers 4, 5, and 6 be selected, having RES potentials of 10, 15, and 10 MW, respectively, based on weather forecasting data, for the optimal location of REGs. In this case the RES power generation cost is taken as $40/MWh [14]. Initially (Step 1), the net social benefit, power loss, emission, and CMAF are determined as 3400.94 $/h, 8.54 MW, 78.27 kg/h, and 2962.85 $/h, respectively, without any REG located. These attributes have been obtained with the PDIPM optimization program. REGs are then injected stepwise, with a step size of 5 MW, at the selected buses to obtain the optimal placement. Table 6 shows the net social benefit (NSB), power loss (PL), emission (EM), and CMAF with and without an REG of 5 MW at each of the said buses (one by one), computed by the PDIPM optimization program. In order to have a single preference order based on maximum SB, minimum power loss, and minimum emission, the CMAF is determined as per Eq. (13). Step 2: It is seen that among all the selected buses, bus number 5 has the maximum CMAF of 3029.24 $/h. Hence, bus number 5 is identified as the optimal location for an REG of 5 MW capacity. A further 5 MW of REG is then consecutively applied to all the said buses, including bus number 5.


Table 6 Various attributes with location of REG at different buses

No. of steps | REG capacity (MW) | S. no. | REG located bus no. | Net social benefit ($/h) | PL (MW) | EM (kg/h) | CMAF ($/h) | Optimal location of REG (MW) at bus number
Step 1 | 0 | – | Without REG | 3400.94 | 8.540 | 78.27 | 2962.85 | –
Step 2 | 5 | 1 | 4 | 3453.86 | 8.338 | 77.48 | 3025.43 | –
Step 2 | 5 | 2 | 5 | 3468.30 | 8.545 | 79.39 | 3029.24 | 5
Step 2 | 5 | 3 | 6 | 3462.78 | 8.662 | 83.39 | 3015.79 | –
Step 3 | 10 | 1 | 4 | 3519.84 | 8.340 | 78.22 | 3090.83 | –
Step 3 | 10 | 2 | 5 | 3526.50 | 8.461 | 75.95 | 3093.51 | 10
Step 3 | 10 | 3 | 6 | 3519.44 | 8.475 | 77.16 | 3085.02 | –
Step 4 | 15 | 1 | 4 | 3574.31 | 8.195 | 72.3638 | 3155.73 | 15
Step 4 | 15 | 2 | 5 | 3579.55 | 8.341 | 70.3977 | 3155.66 | –
Step 4 | 15 | 3 | 6 | 3574.08 | 8.325 | 71.4211 | 3150.24 | –
Step 5 | 20 | 1 | 4 | 3621.21 | 7.947 | 68.89 | 3216.14 | –
Step 5 | 20 | 2 | 5 | 3626.83 | 8.085 | 66.99 | 3216.77 | 20
Step 5 | 20 | 3 | 6 | 3621.74 | 8.061 | 67.97 | 3212.12 | –
Step 6 | 25 | 1 | 4 | 3673.19 | 7.847 | 63.71 | 3276.07 | 25
Step 6 | 25 | 2 | 6 | 3673.87 | 7.958 | 62.90 | 3272.25 | –
Step 7 | 30 | 1 | 6 | 3720.08 | 7.723 | 59.75 | 3331.18 | 30
Step 8 | 35 | 1 | 6 | 3766.50 | 7.607 | 56.01 | 3385.32 | 35

Step 3: From the above table, it is observed that bus number 5 again provides the maximum CMAF, with a value of 3093.51 $/h. This leads to a total REG of 10 MW at bus number 5 as the optimal location. The same process is repeated for placement of an additional 5 MW of REG. Step 4: It is seen that among all the selected buses, bus number 4 has the maximum CMAF of 3155.73 $/h. Hence, bus number 4 is identified as the optimal location, bringing the total REG capacity to 15 MW. A further 5 MW of REG is then consecutively applied to all the said buses, including bus number 5. Step 5: It is observed that among all the selected buses, bus number 5 provides the maximum CMAF of 3216.77 $/h. Hence, bus number 5 is again identified as the optimal location, for a total REG capacity of 20 MW. As bus number 5 has now reached its maximum REG capacity (15 MW), only bus numbers 4 and 6 are considered for further allocation of REGs, and a further 5 MW of REG is consecutively applied to the remaining buses.


Step 6: It is again observed that, of the remaining two buses, bus number 4 has the maximum CMAF of 3276.07 $/h. Hence, bus number 4 is again identified as the optimal location, for a total REG capacity of 25 MW. As bus number 4 has now reached its maximum REG capacity (10 MW), only bus number 6 is considered for further allocation, and a further 5 MW of REG is applied there. Step 7: Bus number 6 yields a CMAF of 3121.18 $/h and is hence identified for the location of 30 MW of REG. The remaining 5 MW of REG is then applied at bus 6, giving a CMAF of 3140.32 $/h; hence bus number 6 is again identified, for a total of 35 MW of REG. It is to be noted that, in the allocation of REGs based on CMAF, buses whose maximum potential has been reached are not considered in further CMAF calculations. The optimal locations of REGs on the selected buses of the IEEE-6 bus system have thus been determined from the given RES potentials based on the CMAF. From Table 6, it is observed that the preference order of bus numbers 5, 4, and 6 has been determined based on their available potentials of 15, 10, and 10 MW, respectively, for optimal location of REGs. Figures 1, 2, 3, and 4 compare the various attributes with and without the location of REGs at the selected buses. Figure 1 shows the variation in net social benefit (NSB) with and without an REG located at the selected buses; the NSB is enhanced by the location of REGs. Figure 2 shows the corresponding variation in power loss, which is reduced by the location of REGs. Figure 3 shows the variation in pollutant emission, which is likewise reduced by the location of REGs.

Fig. 1 Variation in NSB at number of REG located in various buses (NSB in $/h versus number of REG located)

Fig. 2 Variation in PL at number of REG located in various buses (PL in MW versus number of REG located)

Fig. 3 Variation in EM at number of REG located in various buses (EM in kg/h versus number of REG located)

Fig. 4 Variation in CMAF at number of REG located in various buses (CMAF in $/h versus number of REG located)

Table 7 CMAF with 5–35 MW of REG located at selected buses

S. no. | REG located (bus) | CMAF ($/h)
1 | Without | 2962.85
2 | 5 | 2994.24
3 | 5 | 3023.51
4 | 4 | 3050.73
5 | 5 | 3076.77
6 | 4 | 3101.07
7 | 6 | 3121.18
8 | 6 | 3140.32

Fig. 5 Number of REG located in buses 4, 5, and 6 in modified IEEE-6 bus system

In order to have a single preference order based on maximum SB, minimum power loss, and minimum emission, the CMAF is determined as per Eq. (13). The values of CMAF with and without the location of REGs at the different buses of the IEEE-6 bus system, giving a single preference order considering all the above-mentioned factors, are listed in Table 7. Figure 4 shows the variation in CMAF with and without an REG located at the selected buses; the CMAF is enhanced by the location of REGs. Figure 5 shows the modified IEEE-6 bus system with the optimum locations of REGs at bus numbers 4, 5, and 6 with 10, 15, and 10 MW, respectively, based on the proposed CMAF for optimal renewable-based power generation.

5 Conclusion In this paper, the optimal location of REGs is determined by considering maximization of net social benefit, minimization of power losses, and minimization of pollutant emission in the IEEE-6 bus system. The optimal location of REGs is identified by the proposed cost-based multi-attribute function using the PDIPM method. It has been observed


that the optimal location of an REG considering any single one of these attributes yields a different solution. The proposed approach, based on a combination of the attributes, provides a single solution as well as an efficient preference order for selecting buses having sufficient RES potential. Hence, the proposed CMAF is useful in avoiding confusion in prioritizing buses for optimal location of REGs. It is therefore inferred that the proposed approach is beneficial to power utilities in finding the optimal location of REGs in the power system network.


Chapter 5

A Genetic Improved Quantum Cryptography Model to Optimize Network Communication Avinash Sharma, Shivani Gaba, Shifali Singla, Suneet Kumar, Chhavi Saxena and Rahul Srivastava

1 Introduction Secure communication is the primary requirement of a network. Different detection-based, preventive, and authentication methods are available to provide secure and reliable communication in a network. To protect data from tampering and unauthorized access, one of the most familiar approaches is to encode the data. Cryptography is one such method of providing encoded communication. Encoding not only prevents intruder access to the information but also protects the network from various active and passive attacks. Cryptography provides user-level access control and builds a trust mechanism so that secure communication is possible over the network. A standard cryptography model for encoded communication is shown in Fig. 1.

A. Sharma (B) · S. Kumar Maharishi Markandeshwar (Deemed to be) University, Ambala, Haryana, India e-mail: [email protected] S. Kumar e-mail: [email protected] S. Gaba · S. Singla Panipat Institute of Engineering and Technology, Ambala, Haryana, India e-mail: [email protected] S. Singla e-mail: [email protected] C. Saxena · R. Srivastava Arya College of Engineering & I.T., Jaipur, India e-mail: [email protected] R. Srivastava e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 H. Sharma et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-0426-6_5


Fig. 1 Cryptography method

Figure 1 shows the cryptography process applied on the sender and receiver sides. On the sender side, plain data is taken as input and key-specific encoding is performed. This process, called encryption, transforms the input text into cipher text. The encoded data is then delivered to the receiver. On the receiver side, the data is decoded using the same or a different key; this process of transforming the cipher text back to plain text is called decryption. There are a number of cryptography methods that vary according to the media type, the key type, or the algorithmic approach. The key used in these methods can be static or dynamic, and the key generation and encoding method determines the communication quality. In a dynamic network, dynamic key generation is especially significant, since it enables session-based, group-adaptive, or region-adaptive encoding. In this work, a quantum cryptography model is proposed to improve communication security for a sensor network.
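As a toy illustration of the key-based encrypt/decrypt round trip described above (not the chapter's actual scheme), the sketch below derives a keystream from a shared key and XORs it with the message; because XOR is its own inverse, the same function decrypts:

```python
# Toy illustration of symmetric key-based encoding/decoding: a SHA-256
# derived keystream XORed with the plaintext. Illustrative only.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying encrypt() again decrypts.
    return bytes(d ^ k for d, k in zip(data, keystream(key, len(data))))

cipher = encrypt(b"session-key", b"hello sensor network")
assert encrypt(b"session-key", cipher) == b"hello sensor network"
```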

1.1 Quantum Cryptography Quantum cryptography is a real-time encoding method that captures the communication behavior or the physical characterization of the network and uses it to perform data encoding. It covers most of the uncertainty issues of network communication. The physical characterization used by the quantum cryptography method includes energy, momentum, and angular variation. Charge-specific and session-specific observations are also taken from real-time light particles, known as photons. In this study, energy-specific estimation is used to provide effective data encoding. Different constraints and measures are applied to capture the physical and environmental features on which the encoding is based: behavior-specific analysis is performed to extract effective measurements, and medium-specific analysis along with space–time observation supports effective data encoding. Quantum mechanics is used to model the communication medium, and supervision methods are applied to keep the encoding effective; medium-specific disruption and positional observation can further be used to provide pattern-specific encryption. In this paper, a quantum-cryptography-based communication model is used for a sensor network. The proposed work model generates a genetic-based key to identify the effective node pair. In this section, the requirement of security for a network, the cryptography model, and the characterization of quantum cryptography have been defined. In Sect. 2, the works of earlier researchers are discussed. The proposed work model is described in Sect. 3. In Sect. 4, the comparative results obtained from the work are discussed. The conclusion of the work is presented in Sect. 5.

2 Related Works Different researchers have already provided various encoding methods for secure communication, including several quantum-cryptography-based methods. In this section, some of the works of earlier researchers are discussed. Mandal et al. [1] provided a three-stage model to improve quantum cryptography, integrating space-model-based communication with quantum cryptography; environment-specific analysis along with key generation and distribution was used, and the encoded communication control was handled effectively. Porzio [2] described the use of quantum cryptography for the communication process in the form of a protocol, exploring the methods and measures applied to generate the key parameters for quantum cryptography as well as to set up the relational uncertainty vector. Niemiec and Pach [3] defined security management for the quantum cryptography method, in which physical laws and methods are applied with network parameters to generate keys and to provide adaptive network communication. Zhou [4] defined a work on the Zigbee protocol to provide effective network security, applying design-specific estimation with interconnection analysis so that sub-network-adaptive communication can be performed. Dahlstrom [5] worked on different routing protocols, specifically for sensor networks; the Zigbee network standard was observed under the AODV and DYMO protocols and metrics-based communication was provided. Pan [6] published a study on convergecast in tree-based Zigbee sensor networks, based on event-slot-based communication for arranging the communication slots. Shazly et al. [7] worked on node uncertainty analysis to improve target coverage in real-time networks; the authors defined probabilistic estimation under various deficiencies, and their grid-specific coverage analysis improves node connectivity while resolving deployment and placement errors. Olteanu [8] performed device-specific improvement and establishment for the Zigbee network, with the investigation conducted at the node level so that multiple communications are provided. Rostom et al. [9] worked on congestion-adaptive Zigbee network generation; a novel improved Zigbee protocol is defined against delay, congestion, and communication loss, and energy resource utilization is combined with protocol exploitation so that collision-adaptive communication is obtained. Khedikar and Kapur [10] provided an optimized solution to the target coverage problem by posing it as a minimum node selection problem; the work observes the node placement scheme and provides an area coverage method with improved k-coverage and boundary specification for region-specific cover, dividing nodes into sleep, listen, and active nodes. Lu and Li [11] worked on direction assignment and tunable node mapping to improve target coverage with particular directional instances, observing the target points so that the sensing area is maximized. Diop et al. [12] addressed the target coverage problem as one of the critical network issues; network lifetime improvement with coverage is suggested and ensured in this work, and node process analysis with sensor activity observation is provided to control energy dissipation. Dahane et al. [13] used the concept of a weighted dynamic clustering environment with security integration; the analysis of this mobile sensor network uses five metrics, which are able to identify node criticality and to sense secure nodes for communication. Chaturvedi and Daniel [14] defined a trust-metrics-based treatment of the target coverage problem to provide a basic communication solution in an effective way; a trust metric formulation is used to ensure coverage, to preserve energy, and to provide incorporated communication in the network, with a target-cover-based preserving protocol defined to observe node stages so that target-region-based node monitoring is obtained. Ren et al. [15] defined a work on event detection and optimization in real-time environments to provide a solution to the sensing cover problem; the method includes energy-harvesting deployment of the network under quality observation and duty cycle analysis.

3 Research Methodology A personal area network is a resource-restricted network with tight energy constraints, and an open network suffers from various internal and external attacks. To provide safe communication, effective encoded communication is required. In this work, a dynamic quantum-key-based cryptography model is proposed for a WPAN network. The proposed model is an integrated work model in which the communication network is first formed and encoded communication is then applied over it using quantum cryptography. The work also integrates genetic modeling to generate more adaptive communication pairs so that reliable communication is obtained over the network. The stages of the proposed model are shown in Fig. 2.

Fig. 2 Proposed work model:
1. Generate the network with N nodes under random mobility, random position, and parameters
2. Analyze the network under connectivity and coverage parameters
3. Apply genetic modeling to identify the adaptive communication pair
4. Capture the communication statistics to generate the quantum key
5. Perform the data encoding using quantum-key-based hashing
6. Perform the communication and analyze the network statistics

Figure 2 shows the proposed work model for communication using quantum cryptography. At the earliest stage, a parameter-specific network is composed with sensor network specification. Once the network is composed with environmental and node-specific constraints, the physical characteristics are captured to generate a dynamic key using quantum cryptography modeling. The genetic model is then applied to identify the effective node pair. Finally, quantum-encoded communication is performed over the network. The genetic algorithm is an evolutionary process model that follows the stages of a biological process to generate a new population by applying integrated operations on the initial and current populations. The data-driven generations are controlled by the specification of a fitness function: potential data analysis is performed to apply the fitness rule to the input data so that effective and feasible members are generated. In this work, the genetic model accepts node pairs as possible connectivity candidates and identifies the most cost-effective network connectivity, with the aim of optimizing communication reliability. After specification of the initial population, a series of iterative processes is applied to generate the next-level population and to select the effective key pairs; the next-level generation and the objective function are defined for final solution generation. In this work, the effective ranking is mapped under load, coverage, and failure parameters. The population is processed under the crossover and mutation operators to generate the next possible effective values, and the process is repeated till the final optimized solution is obtained.
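The following is a minimal genetic-algorithm sketch of this node-pair selection. The fitness weighting and the load/coverage/failure dictionaries are hypothetical stand-ins for the measured network statistics, not the chapter's implementation:

```python
# GA sketch: evolve candidate node pairs ranked under load, coverage and
# failure parameters, as described in the methodology above.
import random

def fitness(pair, load, coverage, failure):
    a, b = pair
    # Higher coverage is better; higher load and failure are worse.
    return (coverage[a] + coverage[b]) - (load[a] + load[b]) - (failure[a] + failure[b])

def select_pair(nodes, load, coverage, failure, pop_size=30, generations=50):
    population = [tuple(random.sample(nodes, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda p: fitness(p, load, coverage, failure), reverse=True)
        parents = population[: pop_size // 2]      # selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = (a[0], b[1])                   # crossover
            if random.random() < 0.1:              # mutation
                child = (child[0], random.choice(nodes))
            if child[0] != child[1]:
                children.append(child)
        population = parents + children
    # Most cost-effective communication pair found.
    return max(population, key=lambda p: fitness(p, load, coverage, failure))
```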

4 Results In this paper, a quantum integrated encoded communication model is used for a wireless personal network. The proposed work model is simulated in the MATLAB environment. The network is composed in a region of 100 × 100 m, with fixed infrastructure devices used to control the communication and a mobile network with sensing capability defined within it. The quantum-encoded communication is performed for 100 rounds and comparative results are derived from the work. The existing work performs encoded communication using a hash algorithm, whereas the proposed work uses genetic-optimized encoded communication. A comparative analysis in terms of packet loss is shown in Fig. 3: the x-axis shows the number of rounds and the y-axis shows the packet communication. The green line represents the number of packets lost under the proposed approach.

Fig. 3 Packet loss analysis (existing vs proposed)

The results show that the method has reduced the packet loss over the network, and the proposed work model has improved the integrity of the communication.

5 Conclusion In this paper, a quantum-improved encoded communication model is used for a WPAN network. The network is defined with infrastructure specification and group-adaptive encoded communication is performed. Genetic modeling is used to identify the effective node pair, and quantum-based encoded communication is then performed over it. The simulation results show that the proposed work model has reduced the packet loss over the network.

References
1. Mandal S et al (2013) Multi-photon implementation of three-stage quantum cryptography protocol. In: 2013 international conference on information networking (ICOIN), Bangkok, pp 6–11
2. Porzio A (2014) Quantum cryptography: approaching communication security from a quantum perspective. In: 2014 Fotonica AEIT Italian conference on photonics technologies, Naples, pp 1–4
3. Niemiec M, Pach AR (2013) Management of security in quantum cryptography. IEEE Commun Mag 51(8):36–41
4. Zhou L (2013) A simulation platform for ZigBee-UMTS hybrid networks. IEEE Commun Lett
5. Dahlstrom A (2013) Performance analysis of routing protocols in Zigbee non-beacon enabled WSNs. In: Internet of Things: RFIDs, WSNs and beyond
6. Pan M-S (2013) Convergecast in ZigBee tree-based wireless sensor networks. In: 2013 IEEE wireless communications and networking conference (WCNC): networks
7. Shazly MH, Elmallah ES, Harms J (2013) Location uncertainty and target coverage in wireless sensor networks deployment. In: 2013 IEEE international conference on distributed computing in sensor systems, Cambridge, MA, pp 20–27
8. Olteanu A-C (2013) Enabling mobile devices for home automation using ZigBee. In: 2013 19th international conference on control systems and computer science
9. Rostom R, Bakhache B, Salami H, Awad A (2014) Quantum cryptography and chaos for the transmission of security keys in 802.11 networks. In: 2014 17th IEEE Mediterranean electrotechnical conference (MELECON), Beirut, pp 350–356
10. Khedikar R, Kapur A (2014) Energy effective target coverage WSNs. In: 2014 international conference on issues and challenges in intelligent computing techniques (ICICT), Ghaziabad, pp 388–392
11. Lu Z, Li WW (2014) Approximation algorithms for maximum target coverage in directional sensor networks. In: 2014 IEEE 11th international conference on networking, sensing and control (ICNSC), Miami, FL, pp 155–160
12. Diop B, Diongue D, Thiare O (2014) A weight-based greedy algorithm for target coverage problem in wireless sensor networks. In: 2014 international conference on computer, communications, and control technology (I4CT), Langkawi, pp 120–125
13. Dahane A, Berrached NE, Loukil A (2015) Homogenous and secure weighted clustering algorithm for mobile wireless sensor networks. In: 2015 3rd international conference on control, engineering & information technology (CEIT), Tlemcen, pp 1–6
14. Chaturvedi P, Daniel AK (2015) An energy efficient node scheduling protocol for target coverage in wireless sensor networks. In: 2015 fifth international conference on communication systems and network technologies (CSNT), Gwalior, pp 138–142
15. Ren X, Liang W, Xu W (2015) Quality-aware target coverage in energy harvesting sensor networks. IEEE Trans Emerg Top Comput 3(1):8–21

Chapter 6

Neural Networks-Based Framework for Detecting Chemically Ripened Banana Fruits R. Roopalakshmi, Chandan Shastri, Pooja Hegde, A. S. Thaizeera and Vishal Naik

1 Introduction Fruits are among the most nutritious naturally available foods and are generally consumed in raw form. Fruits provide nutrients including potassium, fiber, vitamin C, minerals, and protein, and protect the human body against various disorders [1]. In general, humans consuming more fruits are likely to have a reduced risk of dangerous diseases such as cancer; for a normal human being, eating 5–9 servings of fruits per day significantly reduces the risk of various chronic diseases. On the other side, ripening is a natural process that makes fruits sweeter and softer. Ethylene, a natural ripening agent popularly known as the 'aging hormone' in plants, is responsible for the changes in texture, softening, color, and all other processes involved in natural ripening [1]. However, in the present competitive world, 80% of fruits are ripened by greedy traders using hazardous chemicals such as calcium carbide (CaC2), which causes serious health issues [2]. Regular consumption of fruits ripened using calcium carbide can cause cancer due to the presence of traces of poisonous gases containing arsenic and phosphorus. Furthermore, even very low but regular consumption of calcium carbide may result in vomiting, diarrhea, burning sensation of the chest and abdomen, thirst, and weakness, and sometimes leads to serious issues such as permanent eye damage and shortness of breath. For these reasons, the Indian Government introduced the Prevention of Food Adulteration (PFA) Act, 1954 and strictly banned the usage of calcium carbide for artificially ripening fruits [2]. Although calcium carbide is banned in most countries, greedy farmers and traders still use it to chemically ripen fruits for the sake of faster profits. When calcium carbide comes in contact with moisture, it produces acetylene (C2H2) gas, which is quite similar to the natural ripening agent ethylene and accelerates the ripening process. In the current agricultural industry, climacteric fruits such as banana, mango, guava, and avocado are artificially ripened by fruit traders seeking higher profits [3]. In the Indian scenario, banana is considered one of the topmost favorite fruits due to its full-year availability and productivity; it is the first choice among fruits for all categories of people, since it is affordable even for the poor. Further, India is the largest banana producer compared to other producing countries such as Guatemala and Indonesia: in 2018, banana production in India accounted for 23% of the world pool of banana production, recorded as 29,779.91 thousand tons [3]. However, in the international market, many Indian bananas are rejected due to artificial ripening using banned chemicals such as calcium carbide, which is becoming a big black mark for the Indian agriculture industry [4]. To solve these problems, efficient frameworks for detecting chemical ripening of fruits are essential in the present agricultural scenario.

R. Roopalakshmi (B) · C. Shastri · P. Hegde · A. S. Thaizeera · V. Naik Alvas Institute of Engineering and Technology, Moodbidri 574225, India e-mail: [email protected] URL: http://www.aiet.org.in © Springer Nature Singapore Pte Ltd. 2020 H. Sharma et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-0426-6_6

2 Related Work In the literature, computer-vision-based techniques have been employed over the past decades for identifying ripening stages as well as for quality evaluation of fruits. For instance, in 2015, Rahul and Aniket [5] proposed an image-processing-based system for detecting different ripening stages of mango fruit using RGB and HSV methods. Though the method can classify mango ripening stages with a reasonable amount of accuracy, it mainly focuses on ripening stages and their classification types. Further, Capizzi et al. [6] developed an automatic classification system for finding defects on fruit by employing co-occurrence matrix and neural network computations; in their system, fruit surface defects are classified in terms of color and texture features using Radial Basis Probabilistic Neural Networks (RBPNN). However, the complexity of this method is slightly high due to the probabilistic neural network strategies employed. Recently, in 2018, Bhargava and Bansal [7] presented a quality evaluation framework for fruits and vegetables using computer vision techniques, with four steps including acquisition, segmentation, feature extraction, and classification. In addition, Prince et al. [8] introduced a technique for identifying the quality index of fruits using image processing; the authors used fruit features such as color, shape, and size of fruit samples, yet the work concentrates only on the quality index of the fruits. Very recently, in 2018, Sabzi and Gilandeh [9] developed a system for visual identification of orange varieties using neural networks and metaheuristic algorithms: a new approach for recognizing three common orange varieties is presented, in which the fruits are segmented using RGB thresholding and mathematical morphology operators, achieving very precise detection of the oranges. In the existing literature, only a small amount of research has been carried out toward identifying chemically ripened fruits using computer vision techniques. Especially in the past decade, image processing techniques have been used to detect artificially ripened categories of fruits including banana, mango, papaya, and so on. For example, in 2014, Veena and Bhagyashree [10] designed a portable instrument for the detection of artificially ripened banana fruits using color-based features; in addition to artificial ripening detection, the authors also focus on checking the ripening stages of the banana fruit. Though this method is capable of detecting artificially ripened banana fruits, it uses simple RGB features, which are less effective for complex banana structures. In [11], Kathirvelan and Vijayaraghavan introduced an infrared-based sensor system for the detection of ethylene for the discrimination of fruit ripening stages, using an IR thermal-emitter-based ethylene sensor to estimate the ethylene released in the fruit ripening process. Though the system has good reproducibility, it focuses mainly on discriminating the various stages of fruit ripening in order to ensure food safety. In 2015, Amit and Hegadi [12] proposed a system for remote monitoring of the banana ripening process, which utilizes the wireless monitoring concept and enables the user to monitor the process from the household itself. In 2016, Srividya and Sujatha [13] introduced an ethylene gas measurement system for detecting the ripening of fruits using image processing techniques. Recently, in 2017, Maheswaran et al. [14] designed a system for identifying artificially ripened fruits using smartphones; the device captures mango images and compares the histogram features of fruits for detecting chemical ripening. Though this method is easy to use, it depends only on the surface features of the fruits. To summarize, most state-of-the-art approaches focus mainly on detecting the stages of fruit ripening, and very few methods concentrate on the identification of artificially ripened fruits including different banana categories. Hence, promising frameworks that can efficiently detect chemically ripened fruits are essential in the current agriculture scenario, in order to protect customers from hazardous health issues.

3 Proposed Framework Figure 1 shows the block diagram of the proposed framework, consisting of two stages, namely, a training stage and a testing stage. In the first stage, input banana images of both ripened and unripened categories are trained using neural-network-based classifiers: several visual features of the input images are extracted and classified into different classes for further processing. The resultant features are integrated and a Visual Features (VF) database is created.

Fig. 1 Block diagram of proposed detection framework (Input Image → Image Preprocessing (Background Subtraction) → Visual Features Extraction (Shape, RGB, Histograms) → Comparison of Features and Pattern Matching, against the Image Database and the Visual Features Database → Reporting Results)

In the testing stage, a query image is first input to the proposed detection framework. The corresponding visual features of the query banana fruit are compared with the respective features of the training dataset. Based on this comparison of features, the detection results are reported along with confidence levels. In the proposed framework, 80% of the samples are randomly selected for the training stage, whereas the remaining 20% of the samples are incorporated in the testing group.
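The following is a hypothetical sketch of the 80/20 split and the feature comparison described above; the feature vector and nearest-neighbour matching are illustrative stand-ins, not the chapter's Inception-v3 pipeline:

```python
# Sketch of the framework's split and matching steps (illustrative only).
import random

def split_dataset(samples, train_frac=0.8):
    samples = samples[:]
    random.shuffle(samples)               # random selection, as in the framework
    cut = int(len(samples) * train_frac)
    return samples[:cut], samples[cut:]   # 80% training, 20% testing

def classify(query_features, vf_database):
    # Compare the query's visual features against the Visual Features (VF)
    # database and report the closest label with a naive confidence value.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    best = min(vf_database, key=lambda rec: dist(rec["features"], query_features))
    confidence = 1.0 / (1.0 + dist(best["features"], query_features))
    return best["label"], confidence
```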

4 Experimental Setup and Results Dataset Creation: For experimental purposes, different banana images were captured for dataset creation. Specifically, 300 banana images were captured from eight bananas of the Elaichi type (species name: Musa acuminata) under different scenarios including ripened, unripened, individual, and grouped fruits. Figure 2 illustrates a sample snapshot of the bananas, which were captured using a Canon 700D DSLR camera at a resolution of 4898 × 3265 pixels.

Fig. 2 Sample snapshot of banana images dataset


Fig. 3 Sample snapshots of a Naturally ripened bananas, b Artificially ripened bananas

Fig. 4 a Naturally ripened banana image ‘IMG 6213’, b Artificially ripened banana image ‘IMG 6262’

Initially, a few samples of unripened bananas were treated for artificial ripening with the help of calcium carbide. Precisely, the sample bananas were kept in an airtight container inside a dark room in the presence of calcium carbide for 8–10 h, in order to make them ripen at a faster rate. The rest of the bananas were allowed to undergo the natural ripening stages over a waiting period of 24–30 h. Figure 3a shows snapshots of bananas allowed to complete their ripening stages under normal conditions, whereas Fig. 3b shows sample snapshots of bananas treated with calcium carbide to complete their ripening stage. Results and Discussion: The Google Inception-v3 algorithm is used for creating the training models, followed by validation of the classification model. Figure 4a shows the input banana image termed 'IMG 6213', which was allowed to complete the natural ripening process, whereas the input banana image shown in Fig. 4b, termed 'IMG 6262', was artificially ripened using calcium carbide; both are considered for testing purposes in the proposed framework. Figure 5a and b illustrate the accuracy and entropy values of the trained model as functions of the number of iterations. A maximum of 4000 iterations is considered in the proposed system; after completing 50% of the iterations, the model reaches its saturation level, as indicated in Fig. 5a and b. Figure 6 shows the detection results of the proposed framework in its UI version: the detection result for the naturally ripened banana (shown in Fig. 4a) is clearly indicated in terms of confidence level (i.e., 81.15%). Similarly, the confidence level for the chemically ripened banana (shown in Fig. 4b) is illustrated in the UI shown in Fig. 7 (i.e., 83.16%).


Fig. 5 a Graph showing accuracy of proposed model, b Graph showing entropy variations of the proposed model

Fig. 6 UI showing detection results of proposed system

Fig. 7 UI showing results of proposed system for artificially ripened banana fruit


5 Conclusion This article proposes a new framework that can identify artificially ripened fruits by employing different visual features, including color, shape, and histograms, in an integrated manner. The proposed framework is implemented on a real dataset of banana images using a neural-network-based algorithm. The experimental results in terms of accuracy, cross-entropy, and confidence level measures demonstrate the efficiency of the proposed system. In the future, different types of bananas as well as other fruits such as mango can be considered for evaluating the accuracy of the proposed system.

References
1. Nelson M, Haber ES (1928) The vitamin A, B, and C content of artificially versus naturally ripened tomatoes. J Biol Chem 2:3
2. Ahmad S, Thompson AK: Effect of temperature on the ripening behavior and quality of banana fruit. Int J Agric Biol 3:2–10
3. Agrixchange of Govt. of India. http://www.agriexchange.apeda.gov.in/Market%20Profile/MOA/Product/Banana.pdf
4. Meghana NR, Roopalakshmi R, Nischitha TE, Kumar P (2019) Detection of chemically ripened fruits based on visual features and non-destructive sensor techniques. In: Springer Lecture Notes in Computational Vision and Biomechanics, vol 30, pp 865–872
5. Salunkhel RP, Aniket AP (2015) Image processing for mango ripening stage detection RGB and HSV method. In: Third international conference on image information processing, pp 1–4
6. Capizzi G, Sciuto GL (2015) Automatic classification of fruit defects based on co-occurrence matrix and neural networks. In: Federated conference on computer science and information systems, pp 1–7
7. Bhargava A, Bansal A (2018) Fruits and vegetables quality evaluation using computer vision. J King Saud Univ Comput Info Sci 1–15
8. Prince R, Sathish H (2018) Identification of quality index of fruit/vegetable using image processing. Int J Adv Res Ideas Inno Tech 4:1–6
9. Sabzi S, Abbaspour-Gilandeh Y, Gine SG (2018) A new approach for visual identification of orange varieties using neural networks and metaheuristic algorithms. J Info Proc Agric 5:1–11
10. Veena H, Bhat K (2014) Design and development of a portable instrument for the detection of artificial ripening of banana fruit. In: Proceedings of the international conference on controlling communication, pp 1–2
11. Kathirvelan J, Vijayaraghavan R (2014) An infrared based sensor system for the detection of ethylene for the discrimination of fruit ripening. Infrared Phys Technol 1–20
12. Verma A, Hegadi R, Sahu K (2015) Development of an effective system for remote monitoring of banana ripening process. In: IEEE international conference on WIE, pp 1–4
13. Srividhya V, Sujatha K, Ponmagal RS (2016) Ethylene gas measurement for ripening of fruits using image processing. Indian J Sci Tech 9:1–7
14. Maheswaran S, Sathesh S, Priyadarshini P, Vivek B (2017) Identification of artificially ripened fruits using SmartPhones. In: Proceedings of the IEEE international conference on intelligent computing and control, pp 1–6

Chapter 7

Low Power CMOS Low Transconductance OTA for Electrocardiogram Applications Gaurav Kumar Soni and Himanshu Arora

1 Introduction The biomedical electronics world is developing rapidly with new projects and new technologies. Currently, biomedical devices are manufactured with a high degree of precision, compactness, and ease of use. In portable medical devices, power consumption has become a serious constraint on battery life. Portable biomedical device designs should also provide a low noise response matched to the characteristics of biomedical signals. In the past few years, many applications, such as implantable medical devices, have required circuits that handle very low amplitude signals [1]. These portable devices use ICs of compact size and low power consumption, which suits portable biomedical equipment. With the progressive scaling of VLSI technology, transistor size can be reduced, and with it the power consumption [2]. With the rising trend in biomedical applications such as ECG, EEG, EMG, ERG, and blood pressure detection, which detect signals in small portable wireless devices, a low supply voltage becomes a key design issue for these devices. A critical issue in biomedical systems is the detection of signals with weak amplitude and very low frequency [3]. With the support of IC technology, medical diagnostic instruments can be made portable and can be implemented as implantable medical devices, which require ultra-low-power circuits. This will help future health services prevent diseases rather than merely treat them. Such devices process biomedical signals like EEG, ECG, and EMG. The amplitude and frequency ranges of these biomedical signals are very small [4]; the various bio-potential signals possess unique amplitude and frequency characteristics, with 200 Hz and 250 Hz being representative frequencies of biomedical signals such as EEG and ECG, respectively [5].

G. K. Soni (B) Department of ECE, Arya College of Engineering and Research Centre, Jaipur, India e-mail: [email protected] H. Arora Department of CSE, Arya College of Engineering and Research Centre, Jaipur, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 H. Sharma et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-0426-6_7

2 OTA The OTA is a widely used analog processing block. In recent years, OTAs with very low transconductance and better linearity have been adopted in biomedical applications, and various OTA topologies have been introduced to obtain very low transconductance of a few nA/V with a good linear range [6]. Demand for portable biomedical devices is growing worldwide, and to obtain a better battery life these devices require low power consumption and a low supply voltage [7]. Low-transconductance operational transconductance amplifiers (OTAs) are very important in the design of low-frequency filters used in biomedical signal processing systems. The OTA can be designed using various topologies for low power and low transconductance; in this design, the OTA circuit achieves low transconductance and low power dissipation by operating below the threshold region. To increase the linearity of the OTA, a multi-tanh linearization technique is used. The proposed OTA design uses 90 nm CMOS technology with a ±0.35 V supply voltage: the circuit operates in the subthreshold region, and the multi-tanh linearization technique enhances the linearity of the OTA.

2.1 Multi-tanh Linearization Technique A Gilbert multiplier cell is used as the OTA. This cell was previously used as a multiplier in the saturation region, but in the subthreshold region its transfer characteristic is described by a hyperbolic tangent (tanh) identity. To extend the linear range of the OTA circuit, the multi-tanh linearization technique is employed. In Fig. 1, the differential transistor pairs, represented by the NMOS sets M1–M2 and M3–M4, exhibit voltage offsets of equal magnitude but opposite sign in their transfer characteristics. The currents of the NMOS transistors M1, M2, M3, and M4 are I_1, I_2, I_3, and I_4, respectively.

Fig. 1 Multi-tanh doublet linearization technique

I_1 - I_2 = I_B \tanh\left(\frac{V_{in} + V_{OS}}{2n\phi_t}\right)    (1)

and

I_3 - I_4 = I_B \tanh\left(\frac{V_{in} - V_{OS}}{2n\phi_t}\right)    (2)

The output current of this circuit is given by:

I_{out} = (I_1 - I_2) + (I_3 - I_4)    (3)

Now, using Eqs. (1) and (2), the final output current of the circuit is

I_{out} = I_B\left[\tanh\left(\frac{V_{in} + V_{OS}}{2n\phi_t}\right) + \tanh\left(\frac{V_{in} - V_{OS}}{2n\phi_t}\right)\right]    (4)

Using this technique, the linear range of the circuit is extended while the power consumption of the circuit remains very low.
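As a numerical illustration of Eq. (4), the sketch below evaluates the doublet output current and its transconductance; the bias values are arbitrary assumptions chosen only to show how the offset V_OS widens the flat (linear) region of Gm around V_in = 0, not the paper's design values:

```python
# Numeric check of the multi-tanh doublet (Eq. 4); illustrative values only.
import numpy as np

I_B = 8.4e-9            # tail bias current per pair (A), assumed
n, phi_t = 1.5, 0.026   # subthreshold slope factor and thermal voltage (V), assumed
V_os = 0.05             # built-in offset of the doublet (V), assumed

v_in = np.linspace(-0.2, 0.2, 401)
i_out = I_B * (np.tanh((v_in + V_os) / (2 * n * phi_t))
               + np.tanh((v_in - V_os) / (2 * n * phi_t)))

# Small-signal transconductance Gm = dIout/dVin; the offsets flatten Gm
# around Vin = 0 compared with a single differential pair.
g_m = np.gradient(i_out, v_in)
print(f"Gm at Vin = 0: {g_m[200] * 1e9:.1f} nS")
```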

3 Proposed OTA The proposed OTA is implemented using the multi-tanh technique to increase its linear range, and the circuit is designed to operate below threshold, i.e., in the subthreshold region, for low power consumption. The implementation uses a 90 nm CMOS technology file with a ±0.35 V supply voltage. The proposed OTA design is shown in Fig. 2.

Fig. 2 Design of proposed OTA implementation

In this OTA, M1–M2 and M3–M4 are the two input differential pairs, connected in parallel; the positive input voltage is applied at M1 and M3 and the negative input voltage at M2 and M4. Using the multi-tanh method increases the linearity of the circuit while keeping the power consumption very low, and the circuit operates in the subthreshold region at a very low supply voltage. The overall output current of the operational transconductance amplifier for the electrocardiogram (ECG) application is 16.8 nA: the current swing of the OTA is −8.40 nA to 8.40 nA, so the total current requirement of the OTA is 16.8 nA. The transconductance of each differential input pair is 32.12 nS, and the overall transconductance of the designed OTA is 64.24 nS for the ECG application. Figure 3 shows the transconductance of the proposed OTA for the electrocardiogram application. The transconductance of the OTA is varied by means of the control voltage (Vc): applying different values of the control voltage changes the transconductance of the OTA and thereby its frequency response. The variation in transconductance with the control voltage Vc is shown in Fig. 4. The AC response of the proposed OTA, shown in Fig. 5, is obtained by adding a 1 pF load capacitor at the output terminal of the OTA. The obtained frequency response suits the biomedical electrocardiogram application (Tables 1 and 2).


Fig. 3 Overall transconductance of the proposed OTA for electrocardiogram application

Fig. 4 Variation in the OTA overall transconductance by varying management or control voltage


Fig. 5 Frequency response of proposed OTA for biomedical electrocardiogram application

Table 1 Summary of OTA results for biomedical electrocardiogram application

Parameters                     | Value
CMOS technology                | 90 nm
Supply voltage                 | ±0.35 V
Power consumption              | 5.98 nW
Overall current                | 16.8 nA
Transconductance               | 64.24 nS
Control voltage (Vc)           | −0.104 V
Transconductance tuning method | By varying control voltage (Vc)

Table 2 Comparative analysis of previous work with proposed work

Ref. No.                | CMOS process | Power consumption | Supply voltage (V)
Garradhi et al. [1]     | 0.18 µm      | 1.2 µW            | ±0.9
Singh et al. [3]        | 0.18 µm      | 7.243 µW          | 1
Mahmoud and Elamien [5] | 90 nm        | 3.9 µW            | ±0.6
Rakhi et al. [8]        | 0.18 µm      | 15 nW             | 0.5
Wang et al. [9]         | 0.35 µm      | 107.2 nW          | 1.8
Benacer et al. [10]     | 90 nm        | 2.6 µW            | 1
Proposed work           | 90 nm        | 5.98 nW           | ±0.35


4 Conclusion In this paper, a low-power, low-transconductance OTA is designed for electrocardiogram applications. Using the multi-tanh linearization technique, the proposed OTA is designed to operate in the subthreshold region. The design uses 90 nm CMOS technology with a ±0.35 V supply voltage. The transconductance of the OTA is 64.24 nS for the biomedical ECG application, and it is tuned using the control voltage Vc, whose value is −0.104 V for the biomedical ECG frequency. The overall power consumption of the proposed OTA design for the electrocardiogram application is 5.98 nW, and the overall current of the OTA is 16.8 nA.

References
1. Garradhi K, Hassen N, Besbes K (2016) Low-voltage and low-power OTA using source-degeneration technique and its application in Gm-C filter. In: 11th IEEE international design & test symposium (IDT), pp 221–226
2. Fiorelli R, Arnaud A, Galup-Montoro C (2006) Nanowatt, sub-nS OTAs, with sub-10-mV input offset, using series-parallel current mirrors. IEEE J Solid-State Circuits 41(9):2009–2018
3. Singh BP, Raghav HS, Maheshwari S (2013) Design of low voltage OTA for bio-medical application. In: IEEE international conference on microelectronics, communication and renewable energy (ICMiCR-2013), pp 1–5
4. Aqueel A, Ansari MS, Maheshwari S (2017) Subthreshold CMOS low-transconductance OTA based low-pass notch for EEG applications. In: IEEE IMPACT-2017, pp 247–251
5. Mahmoud SA, Elamien MB (2017) Analysis and design of a highly linear CMOS OTA for portable biomedical applications in 90 nm CMOS. Elsevier Microelectron J 70:72–80
6. Zele R, Mitra S, Etienne-Cummings R (2009) Low-voltage high CMRR OTA for electrophysiological measurements. In: IEEE, pp 345–348
7. Kowsalya B, Rajaram S (2017) Design and analysis of high gain CMOS transconductance amplifier for low frequency application. In: Proceedings of 2017 IEEE international conference on circuits and systems (ICCS 2017), pp–290
8. Rakhi R, Taralkar AD, Vasantha MH, Nithin Kumar YB (2017) A 0.5 V low power OTA-C low pass filter for ECG detection. In: IEEE computer society annual symposium on VLSI, pp 589–593
9. Wang T-Y, Peng S-Y, Lee Y-H, Huang H-C, Lai M-R, Lee C-H, Liu L-H (2017) A power-efficient reconfigurable OTA-C filter for low-frequency biomedical applications. In: IEEE transactions on circuits and systems–I: regular papers, pp 1–13
10. Benacer I, Moulahcene F, Bouguechal N-E, Hanfoug S (2014) Design of CMOS two-stage operational amplifier for ECG monitoring system using 90 nm technology. Int J Bio-Sci Bio-Technol 6(5):55–66

Chapter 8

Spiral Resonator Microstrip Patch Antenna with Metamaterial on Patch and Ground Akshta Kaushal and Gaurav Bharadwaj

1 Introduction Several techniques have been used to improve the performance of the microstrip patch antenna. In this paper, a metamaterial spiral resonator is used in the antenna to achieve enhanced bandwidth. Metamaterials are artificially made structures having both negative permittivity (ε) and negative permeability (μ) [1, 2]. The negative permeability is provided by the spiral resonator structure, whereas the negative permittivity is provided by the thin-wire structure [3, 4]. In our spiral resonator design, special care has been taken at each corner so that there is no sharp turn, which helps power flow smoothly through the corners. Many parameters are involved in antenna specification; the two most important are impedance bandwidth and pattern bandwidth. In this paper, a spiral resonator patch is used, and the ground also consists of four spiral resonators. CST Studio has been used for the design, the simulation, and the result analysis. The spiral resonator structure helps in reducing surface, guided, and near-field waves within a certain frequency band, and the use of the metamaterial structure results in reduced mutual coupling. In many applications of the microstrip patch antenna, a feeding technique on the same substrate is easily implemented, providing attractive features such as small size and low fabrication cost, while suffering from narrow bandwidth and low gain. To overcome these limitations, stack-coupled microstrip antennas [5] and gap-coupled microstrip antennas [6, 7] have been implemented. The proposed antenna uses a spiral resonator with 4 spiral turns to achieve a negative-μ metamaterial. Huang et al. introduced the dimensional reduction of a spiral resonator up to λ0/50 [8].

A. Kaushal (B) · G. Bharadwaj Department of ECE, Government Women Engineering College, Ajmer, India e-mail: [email protected] G. Bharadwaj e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 H. Sharma et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-0426-6_8

2 Design and Configuration of Antenna The design of a microstrip patch antenna with a spiral resonator is presented. The spiral resonator antenna is fabricated on an FR-4 substrate with dielectric constant εr = 4.4. The substrate dimensions are 22 × 22 mm2 with a thickness of 0.25 mm. Figure 1 shows the spiral resonator patch with four spiral turns; the patch length L = 20 mm, the patch width W = 20 mm, and the gap distance between spirals is 2 mm. Finally, four spiral resonators are also etched at the four corners of the ground plane, each with a length and width of 14 mm. Figure 2 shows the ground plane with its four spiral resonators. In the proposed antenna, a smooth turn (not a sharp turn) is applied at the corners of the spiral resonator to provide proper power flow (Table 1).
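As a back-of-envelope check on the geometry above, the sketch below estimates the resonance of the bare 20 mm patch on εr = 4.4 using standard textbook microstrip formulas (effective permittivity and length extension); these formulas are not from this paper, and they ignore the spiral loading, which shifts the operating frequency downward:

```python
# Unloaded rectangular-patch resonance estimate (standard formulas, assumed).
import math

c = 3e8
L_patch, W, eps_r, h = 20e-3, 20e-3, 4.4, 0.25e-3

eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / ((eps_eff - 0.258) * (W / h + 0.8))
f_r = c / (2 * (L_patch + 2 * dL) * math.sqrt(eps_eff))
print(f"Unloaded patch resonance ≈ {f_r / 1e9:.2f} GHz")  # spiral loading lowers this
```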

Fig. 1 Spiral resonator patch


Fig. 2 Ground plane with four spiral resonators

Table 1 Dimensions of the proposed spiral resonator antenna (SRA)

Parameters             | Size
Substrate              | 22 × 22 mm2
Spiral resonator patch | 4 turns, 2 mm width, 2 mm distance, 0.2 mm thickness
Ground                 | 4 spiral resonators, each with 4 turns, 1 mm width, 1 mm distance, 0.2 mm thickness

3 Design Equations The design equations for the proposed spiral resonator antenna (SRA) are given below, from which the total inductance (L_SRA), the total capacitance (C_SRA), and the maximum number of turns can be calculated for predefined values of the outer ring length l, spiral width w, gap between two turns s, and spiral turn number N [9]. The maximum number of turns is given by:

N_{max} = \text{Integer}\left[\frac{l - (w + s)}{2(w + s)}\right]    (1)

The average total length and the total inductance of the SRA are:

L_{avg} = \frac{1}{N}\sum_{n=1}^{N} l_n = \frac{4lN - [2N(1 + N) - 3](s + w)}{N}    (2)

L_{SRA} = \frac{\mu_0}{2\pi} L_{avg}\left[\ln\left(\frac{L_{avg}}{2w}\right) + \frac{1}{2}\right]    (3)

The per-unit-length capacitance between spiral turns is:

C_l = \varepsilon_0\,\frac{K\left(\sqrt{1 - m^2}\right)}{K(m)}    (4)

where K(m) is the complete elliptic integral of the first kind and

m = \frac{s/2}{w + s/2}    (5)

The total capacitance of the SRA is given by:

C_{SRA} = \frac{C_l}{4(w + s)}\,\frac{N^2}{N^2 + 1}\sum_{n=1}^{N-1}\left[l - \left(n + \frac{1}{2}\right)(w + s)\right]    (6)

The series resistance of the metallic conductor (with resistivity ρc and metal thickness t) is:

R_c = \frac{\rho_c\,L_{SRA}}{w\,t}    (7)

and the shunt resistance accounting for losses in the lossy dielectric (with conductivity σd) is:

R_d = \frac{1}{\sigma_d}\,\frac{s}{4h\left[l - (2w + s)\right]}\,\frac{L_{avg}}{4l}    (8)

The effective characteristic impedance of the spiral resonator antenna structure can then be calculated from the above parameters using:

Z_{eff} = R_c + \frac{R_d}{1 + \omega^2 (C_{SRA} R_d)^2} + j\omega\left[L_{SRA} - \frac{C_{SRA} R_d^2}{1 + \omega^2 (C_{SRA} R_d)^2}\right]    (9)

4 Analysis and Simulation CST Microwave Studio has been used to simulate and analyse the antenna. The reflection coefficient, or return loss, quantifies the power reflected by the antenna and is given by the S1,1 parameter. The S1,1 response of the proposed antenna, with a return loss of −4.361081 dB, is shown in Fig. 3.


Fig. 3 Representation of return loss in S1,1 parameter

The bandwidth of the proposed antenna is 186.7 MHz (2.7469–2.9336 GHz), as shown in Fig. 4; the bandwidth is thus significantly broadened [10]. The Smith chart of the proposed spiral resonator metamaterial antenna is shown in Fig. 5. The Smith chart is a very important tool for transmission lines: it is very helpful for impedance matching and is also used to display the physical impedance of the antenna.
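As a quick arithmetic check of the quoted figure (taking the centre frequency as the mean of the band edges), the fractional bandwidth is:

\text{BW}(\%) = \frac{f_H - f_L}{f_c} \times 100 = \frac{2.9336 - 2.7469}{(2.9336 + 2.7469)/2} \times 100 \approx 6.6\%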

Fig. 4 Bandwidth of antenna


Fig. 5 S1,1 parameter in the Smith chart

5 Conclusion A microstrip patch antenna with a spiral resonator has been presented in this paper. It has metamaterial properties with a compact dimension of 22 × 22 mm2 for WLAN and RFID applications. The loading of the metamaterial spiral resonator on the patch and ground yields an operating frequency of 2.73 GHz, and the simulation results show an enhanced bandwidth of 186.72 MHz. This paper highlights the spiral resonator property and the usage of metamaterials. Acknowledgements The authors wish to acknowledge the encouragement, guidance and invaluable support of our worthy Principal, Govt. Women Engineering College Ajmer for giving us an opportunity to explore our views and notions through this research paper. Also thanks to our colleagues for giving constant support and motivation that helped us to achieve the target.

References
1. Singh G, Rajni, Marwaha A (2015) A review of metamaterials and its applications. Int J Eng Trends Technol (IJETT) 19(6)
2. Cui TJ, Smith D, Liu R (2010) Metamaterials: theory, design, and applications. Springer, NY
3. Pendry JB, Holden AJ, Robbins DJ, Stewart WJ (1999) Magnetism from conductors and enhanced nonlinear phenomena. IEEE Trans Microw Theory Tech 47(11):2075–2084
4. Smith DR, Vier DC, Kroll N, Schultz S (2000) Direct calculation of permeability and permittivity for a left-handed metamaterial. Appl Phys Lett 77(14):2246–2248
5. Rasin Z, Khamis NHH (2007) Proximity coupled stacked configuration for wideband microstrip antenna. In: 2007 Asia-Pacific conference on applied electromagnetics, 4–6 December 2007
6. Ray KP, Sevani V, Deshmukh AA (2007) Compact gap-coupled microstrip antennas for broadband and dual frequency operations. Int J Microw Opt Technol 2(3)
7. Chun-Kun W, Wong K-L (1999) Broadband microstrip antenna with directly coupled and parasitic patches. Microw Opt Technol Lett 22(5):348–349
8. Huang W, Kishk A (2011) Embedded spiral microstrip implantable antenna. Int J Antennas Propag
9. Bilotti F, Toscano A, Vegni L (2007) Design of spiral and multiple split-ring resonators for the realization of miniaturized metamaterial samples. IEEE Trans Antennas Propag
10. Srinivasa Rao V, Redd KVVS, Prasad AM (2017) Bandwidth enhancement of metamaterial loaded microstrip antenna using double layered substrate. Indonesian J Electr Eng Comput Sci 5(3):661–665

Chapter 9

A Packet Fluctuation-Based OLSR and Efficient Parameters-Based OLSR Routing Protocols for Urban Vehicular Ad Hoc Networks Kirti Prakash, Prinu C. Philip, Rajeev Paulus and Anil Kumar

1 Introduction VANET is a technology based on MANETs. By enhancing traffic flow, VANET contributes to safe driving and hence decreases accidents; it also provides suitable and accurate data to the driver. However, VANET faces challenges, particularly in routing and security [1]. OLSR is an enhancement of a pure link-state routing protocol. It utilizes the idea of multipoint relays, so that broadcast information can be retransmitted only with the assistance of the multipoint relays. The OLSR protocol performs hop-by-hop routing; that is, every node uses its latest information to route a packet. The multipoint relaying scheme functions in a fully distributed way: each node has a separate set of multipoint relays, which is completely different from the multipoint relays of other nodes [2]. Different steps have been taken to improve the network's connectivity through proper choice of relay nodes for urban VANETs. One proposed protocol is the traffic-light-aware routing protocol (TLRC), which mostly uses mobility metrics; the quick change in these metrics has been shown to lower their relevance in the urban environment [3]. Although such works include reactive as well as location-based protocols, proactive OLSR is considered the best because of its small delay and routing overhead [4]. A Stackelberg game model motivates nodes to act cooperatively as MPRs by increasing their status; accumulated status gains are used to decide the set of nodes (followers) that an MPR (leader) will direct, depending on the nodes' status [5]. The street-centric QoS-OLSR protocol uses an urban-based QoS metric for OLSR MPR selection: it employs link-centric as well as road-centric parameters, that is, lane weight, percentage of neighbors on another road (PoNOS), and real neighbor classification. Its drawbacks are the lack of a load balancing mechanism and the absence of an algorithm for packet fluctuations [6]. Here we propose two main routing protocols: the packet fluctuation-based OLSR routing protocol (P-OLSR) and the efficient parameter optimization-based OLSR routing protocol (E-OLSR). Packet fluctuation handling is essential in routing because the vehicles are not stationary, so the packets delivered to these vehicles must be scheduled adaptively and the transmission should be error-free. The efficient parameter optimization-based OLSR routing protocol considers parameters such as load balancing as well as routing the path, that is, limiting the number of hops. Here, nodes have been replaced by images of different vehicles such as bike, car, bus, and ambulance. Simulations are conducted using the SUMO traffic simulator together with the NS3 simulator, and the results prove that our packet fluctuation-based OLSR routing protocol as well as our efficient parameter optimization-based OLSR routing protocol outperform street-centric QoS-OLSR in various aspects. In summary, the primary contributions of our proposed protocols are: • Implementation and evaluation of the packet fluctuation-based OLSR routing protocol. • Implementation and evaluation of the efficient parameter optimization-based OLSR protocol. • The obtained xml is not represented in the form of plain nodes; instead, the nodes are replaced by images of different vehicles. The remaining sections of the paper comprise: Section 2: Review of literature, Sect. 3: The proposed P-OLSR and E-OLSR routing protocols, Sect. 4: Simulation results, Sect. 5: Conclusion.

K. Prakash (B) · P. C. Philip · R. Paulus · A. Kumar Department of ECE, VIAET, SHUATS University, Prayagraj, India e-mail: [email protected] P. C. Philip e-mail: [email protected] R. Paulus e-mail: [email protected] A. Kumar e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 H. Sharma et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-0426-6_9

2 Literature Review Kadadha et al. addressed the problem of routing in urban VANETs and proposed a street-centric QoS-OLSR protocol. It uses link-based along with street-based parameters, for example, available bandwidth, path width and PoNOS; however, it lacks a packet fluctuation algorithm as well as load balancing [6]. Ee and Bajcsy proposed a distributed and scalable algorithm that reduces congestion on the sensor links and guarantees fair delivery of packets to the base station. Their algorithm operates in the data link layer with small changes to the MAC protocol [7].


A protocol, ETCoP, was proposed to achieve minimal transmission cost along with maximal forwarding reliability within lanes; a street-centric opportunistic routing protocol was also proposed in which packets can be progressively forwarded [8]. Virtual forces applied to vehicles by their one-hop neighbors mirror the degree of divergence or convergence among them; however, the proposed solution neglects the link conditions of a vehicle, and overhead is introduced in dense networks while computing the force between vehicle pairs [9]. An algorithm for cluster-head selection, based on the traffic flow of vehicles in the cluster, has been presented. With the availability of lane detection, lane direction and map matching, the authors were able to choose the most stable cluster head; however, the scheme cannot be used as an inter-cluster communication algorithm [10]. For MPR selection in VANET routing, QoS-OLSR together with the intelligent water drop technique has been used [12]. The QoS parameters focus on distance and speed, which carry considerable information in highway VANETs but not in urban VANETs because of the presence of intersections [11, 12]. Finally, the authors of [13] identified potential limiting factors for successful delivery and proposed protocol enhancements to avoid them; using TOSSIM, they carried out a systematic study to quantify the factors that limit achieving good performance in such a setting.

3 Proposed Protocols Two main protocols are proposed: (1) the packet fluctuation-based OLSR routing protocol, and (2) the efficient parameter optimization-based OLSR protocol.

3.1 Packet Fluctuation-Based OLSR Routing Protocol (P-OLSR) The packets must be delivered to moving vehicles, so their transmission schedule has to fluctuate accordingly, and we therefore assign a schedule to each transmission. If multiple transmissions occur simultaneously, there is a chance of packet collision; to avoid this, each transfer is allocated its own slot: the first slot to the first schedule, the second slot to the second schedule, the third slot to the third, and so on. If there is any issue in a transmission, the error has to be rectified before further packets are injected into the network. A minimal sketch of this slot allocation is shown below.
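For concreteness, the following Python sketch shows a simple round-robin slot allocator; the class name, slot count and transmission identifiers are illustrative assumptions of ours, not part of the paper's NS-3 implementation.

class SlotScheduler:
    """Allocates each transmission its own time slot so that no two
    scheduled transmissions collide (illustrative P-OLSR idea)."""

    def __init__(self, num_slots):
        self.num_slots = num_slots  # size of the repeating schedule
        self.next_slot = 0

    def allocate(self, transmission_id):
        # First transmission gets slot 0, the second gets slot 1, and so on,
        # wrapping around once every slot has been handed out.
        slot = self.next_slot
        self.next_slot = (self.next_slot + 1) % self.num_slots
        return transmission_id, slot

scheduler = SlotScheduler(num_slots=3)
for tx in ["tx-A", "tx-B", "tx-C", "tx-D"]:
    print(scheduler.allocate(tx))  # tx-D wraps back to slot 0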


3.2 Efficient Parameter Optimization-Based OLSR Routing Protocol (E-OLSR) The efficient parameter optimization-based OLSR routing protocol considers two main parameters: (a) load balancing and (b) routing the path. Load Balancing: A comparison is made between the node capacity (node QoS) and the packet size. We verify whether the node can bear and transmit these packets; only then is a schedule allotted for that particular node. This is the load-balancing technique of E-OLSR. Routing the Path: At this stage, vehicle-to-vehicle communication takes place and the distance between the source and the destination vehicle is observed. First, we determine the total number of paths from source to destination in order to make the transmission efficient; second, we select the shortest possible route (Fig. 1). Both checks are illustrated in the sketch below.
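A minimal sketch of the two E-OLSR checks follows; the function names, capacity units and toy routes are our own illustrative assumptions, not the paper's code.

def can_carry(node_capacity_bytes, packet_size_bytes):
    # Load balancing: admit a packet only if the node's QoS budget
    # (its remaining capacity) can bear the packet size.
    return node_capacity_bytes >= packet_size_bytes

def shortest_route(candidate_routes):
    # Routing the path: among all source-to-destination routes,
    # pick the one with the minimum hop count.
    return min(candidate_routes, key=len)

routes = [["S", "A", "B", "D"], ["S", "C", "D"]]
if can_carry(node_capacity_bytes=1024, packet_size_bytes=512):
    print(shortest_route(routes))  # ['S', 'C', 'D'] has the fewest hops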

4 Simulation Results This section is divided into two parts: (A) simulation setup, and (B) simulation results and analysis. NS-3.25 is the simulation tool and SUMO 0.25.0 is used to generate the mobility traces. In the results and discussion, a graphical comparison of the P-OLSR and E-OLSR routing protocols with the original street-centric QoS-OLSR protocol is presented.

Fig. 1 Block diagram representation


4.1 Simulation Setup NS-3 is used for testing the networking protocols. It is recognized as one of the fastest simulators available and supports a GUI as well as Python bindings. NetAnim is an offline network animator tool that now ships with the ns-allinone-3.XX bundle. Vehicles in NS-3 are normally represented as plain nodes; in our proposed setup, images of vehicles (bus, car, fire brigade, bike, ambulance and truck) are used instead, so when we run the XML animation, images of the various vehicles are displayed. The SUMO random trip generator is used to create the mobility traces; SUMO version 0.25.0 is used, based on the traffic scenario, and SUMO provides the real-time scenario in which vehicles appear as vehicles. Table 1 lists the network simulation parameters (Fig. 2). In the flowchart, we start from a vehicular network and check the traffic level (maximum, minimum or low). If the traffic level is below the threshold, we analyze the link capacity; if it is above the threshold, the traffic level is recalculated. After link analysis we perform route optimization, meaning we check that the particular node lies inside the communication range. We compute the minimum hop count and compare the distance against the maximum threshold: if the distance is below the maximum threshold, data processing takes place and transmission is done; otherwise, route optimization is re-evaluated. One pass of this decision logic is sketched below.
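The following Python sketch renders one pass of the flowchart logic; the function name, threshold values and return strings are illustrative assumptions, not taken from the simulator code.

def route_decision(traffic_level, traffic_threshold, distance, max_distance):
    # Step 1: traffic check. Above the threshold, the traffic level
    # must be recalculated (the loop-back branch of the flowchart).
    if traffic_level > traffic_threshold:
        return "recalculate traffic level"
    # Step 2: link analysis and route optimization. The node must lie
    # inside the communication range, taking the minimum hop count.
    if distance > max_distance:
        return "re-evaluate route optimization"
    # Step 3: data processing and transmission.
    return "process data and transmit"

print(route_decision(traffic_level=40, traffic_threshold=60,
                     distance=180, max_distance=250))
# -> process data and transmit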

4.2 Simulation Results Here we present the graphical comparison of the P-OLSR and E-OLSR routing protocols with the original street-centric QoS-OLSR protocol. In the graphs, the blue line represents the E-OLSR routing protocol, the green line the P-OLSR routing protocol and the red line the Q-OLSR protocol.
Table 1 Simulation parameters

Parameter             Configuration
Scenario region       (5000 * 5000) m
Simulation tool       NS-3.25
Graph language        X-graph
Packet size           512–712 bytes
Simulation time       1000 s
Number of nodes       10–75 nodes (6 different sets)
Communication range   250 m
Routing protocol      P-OLSR and E-OLSR
Mobility model        Random


Fig. 2 Flowchart representation

Figure 3 shows the throughput, determined by the number of packets received by the destination per second. In the graph, at 50 nodes the throughput is 400, 585 and 642 kbps for QOLSR, POLSR and EOLSR, respectively. The improved result is

Fig. 3 Throughput


Fig. 4 PDR

due to the packet fluctuation-based OLSR, in which errors during transmission are rectified, and to the selection of the shortest transmission path. Figure 4 shows the PDR, obtained by dividing the number of received packets by the number of transmitted packets. At 50 nodes the PDR is 61, 66.7 and 70.5% for QOLSR, POLSR and EOLSR, respectively. Our proposed protocols obtain better results than QOLSR because of the packet fluctuation scheme, in which a different slot is allotted to each transmission, and the minimum hop count. Figure 5 presents the end-to-end delay graph, the delay between the sending of packets by the source and their receipt by the receiver. At 50 nodes the delay is 2.6, 2.15 and 2 s for QOLSR, POLSR and EOLSR, respectively. The improvement comes from the load-balancing technique, in which the node QoS is compared with the packet size, and from shortest-path selection. Figure 6 presents the average hop count, the number of intermediate devices through which information must travel between source and destination. At 50 nodes the average hop count is 3.8, 5.5 and 6.4 for QOLSR, POLSR and EOLSR, respectively. The improved result is due to the selection of the shortest path for packet transmission. In summary, the proposed P-OLSR and E-OLSR routing protocols outperform the street-centric QoS-OLSR protocol; they clearly improve network performance with respect to throughput, PDR, end-to-end delay and average hop count.


Fig. 5 End-to-end delay

Fig. 6 Average hop count

5 Conclusions and Future Work To address some of the problems of routing, we proposed two routing protocols: the packet fluctuation-based OLSR routing protocol (P-OLSR) and the efficient parameter optimization-based OLSR routing protocol (E-OLSR). In P-OLSR, packet fluctuation and transmission of packets in proper order are taken into account. In the


case of E-OLSR, two main parameters are considered: load balancing and minimum hop count. In load balancing, the focus is on whether a node can bear a particular packet size before the packet is transmitted; for the minimum hop count, we select the shortest distance for packet transmission. Performance evaluation has been done with NS-3 network simulation, and in the XML animation we see images of vehicles instead of nodes. The results show that the proposed P-OLSR and E-OLSR protocols perform far better than the street-centric QoS-OLSR protocol with regard to throughput, PDR, end-to-end delay and average hop count. Since no security algorithm has been used, in future work one may be added so as to achieve secure transmission of packets.

References 1. Prakash K, Philip PC, Ashok A, Agrawal N (2019) A survey on topology based routing protocols for VANETs. J Emerg Technol Innov Res JETIR1902518. http://doi.one/10.1729/Journal. 19704 2. Adjih C, Clausen T, Jacquet P, Laouiti A, Minet P, Muhlethaler P, Qayyum A, Viennot L (2003) Optimized link state routing protocol. RFC 3626, IETF 3. Ding Q, Sun B, Zhang X (2016) A traffic-light-aware routing protocol based on street connectivity for urban VANETs. IEEE Commun Lett 20(8):1635–1638. https://doi.org/10.1109/ LCOMM.2016.2574708 4. Ali AK, Phillips I, Yang H (2016) Evaluating VANET routing in urban environments. In: 39th International conference on telecommunications and signal processing (TSP), June 2016, pp 60–63. IEEE 5. Kadadha M, Otrok H, Barada H, Al-Qutayri M, Al-Hammadi Y (2018) A Stackelberg game for street-centric QoS-OLSR protocol in urban VANETs. Elsevier 6. Kadadha M, Otrok H, Barada H, Al-Qutayri M, Al-Hammadi Y (2017) A street-centric QoSOLSR protocol for urban VANETs. IEEE 7. Ee CT, Bajcsy R (2004) Congestion control and fairness for many-to-one routing in sensor networks. In: SenSys’04. ACM Press, New York, NY, USA 8. Zhang X, Cao X, Yan L, Sung DK (2016) A street-centric opportunistic routing protocol based on link correlation for urban vanets. IEEE Trans Mob Comput 15(7):1586–1599. https://doi. org/10.1109/TMC.2015.2478452 9. Maglaras LA, Katsaros D (2013) Clustering in urban environments: virtual forces applied to vehicles. In: 2013 IEEE International conference on communications workshops, ICC 2013, pp 484–488 10. Almalag MS, Weigle MC (2010) Using traffic flow for cluster formation in VANETs. In: 2010 IEEE 35th Conference on Local Computer Network s (LCN), pp 631–636 11. Wahab OA, Otrok H, Mourad A (2013) VANET QoS-OLSR: QoS-based clustering protocol for VANETs. Comput Commun 36(13):1422–1435. https://doi.org/10.1016/j.comcom.2013. 07.003 12. Al-Terri D, Otrok H, Barada H, Al-Qutayri M, Shubair RM, Al-Hammadi Y (2015) QoSOLSR protocol based on intelligent water drop for VANETs. In: 2015 International wireless communications and mobile computing conference, pp 1352–1357 13. Shin J, Ramachandran U, Ammar M (2007) On improving the reliability of packet delivery in dense wireless sensor networks. IEEE

Chapter 10

Evaluation of Similarity Measure to Find Coherence Mausumi Goswami and B. S. Purkayastha

1 Introduction Recommendation engines, plagiarism checking, automatic summarization, clustering, classification, anomaly detection, etc., all work on the basis of a similarity measure, which is often called the basic building block. A similarity measure represents the similarity between a pair of objects; it may be defined as a function that computes the degree of similarity between any two objects. This work is an extension of our paper [1]. The literature discusses various similarity measures and their applications in document clustering [2–5]. The paper is organized as follows: Sect. 2 describes the bag-of-words model for representing documents; Sect. 3 discusses a few important distance and similarity measures used for text documents; Sect. 4 presents a methodology for implementation; Sect. 5 presents algorithms to compute similarity; Sect. 6 discusses the experiments and results; and Sect. 7 concludes.

2 Model to Represent Documents There are many ways to model a text document. Here, a text document is represented as a collection, or bag, of words together with their frequencies, and each term is weighted as follows: tfidf(dr, tc) = tf(dr, tc) × log(|DR|/df(tc)), where df(tc) is the document frequency of term tc, i.e. the number of documents in which tc appears.
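A small numeric illustration of this weighting is given below; the counts are toy values of our own, not taken from the experiments.

import math

def tfidf(tf, num_docs, df):
    # tfidf(dr, tc) = tf(dr, tc) * log(|DR| / df(tc))
    return tf * math.log(num_docs / df)

# A term occurring 3 times in a document, in a collection of 6
# documents where the term appears in 2 of them:
print(tfidf(tf=3, num_docs=6, df=2))  # 3 * log(3) = 3.2958...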


3 Metric Used in Document Clustering A distance measure must be determined before clustering [6–12]. Researchers have emphasized how to extract the number of clusters in data sets, using data sets with noise features, and have illustrated the use of similarity measures for classification and clustering of text data [13–18]. A clustering algorithm requires a measure for forming and organizing the different groups, and for a specific type of clustering the choice of a proper similarity measure plays a critical role; for instance, density-based clustering such as DBSCAN is highly affected by the similarity or dissimilarity computation. Again, not every similarity measure is a metric: four conditions must be satisfied for a measure to be a valid metric. Let Dist be a measure and let a, b and c be objects.
1. Non-negativity: any two objects have a non-negative distance, that is, Dist(a, b) ≥ 0.
2. Identity: Dist(a, b) = 0 if and only if the objects a and b are identical.
3. Symmetry: the distance from a to b and the distance from b to a are the same, i.e. Dist(a, b) = Dist(b, a).
4. Triangle inequality: for any objects a, b and c,

Dist(a, b) + Dist(b, c) ≥ Dist(a, c). (1)

A few important similarity measures discussed in [1] are the Minkowski distance, the Euclidean distance, cosine similarity, the Mahalanobis distance, the Jaccard coefficient, Pearson's correlation coefficient and the Chebyshev distance.

4 Proposed Methodology This section describes the steps followed to apply a few of the measures to a real-life data set.
Step 1: Read the data set.
Step 2: Preprocess the data by applying stop-word removal, stemming, conversion of text to lower case, removal of punctuation symbols, etc.
Step 3: Select a feature extraction technique and compute TF (term frequency) and IDF (inverse document frequency).


Step 4: Construct the term-document matrix and sparsify the data to save storage.
Step 5: Select a similarity measure and compute the similarity matrix for finding coherent text documents.
Step 6: Select a clustering algorithm to find coherent groups of documents.
Step 7: Evaluate the clustering results and prepare conclusive reports.
The proposed methodology depicts the different stages of clustering a collection of text documents. During implementation, the focus is on Module B and a few steps of Module C, and mainly three similarity measures are chosen for finding the patterns. The steps followed are given in the next section; a minimal end-to-end sketch is shown below.
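A minimal rendering of Steps 2–6 with scikit-learn follows; the toy documents and parameter choices (such as n_clusters=3) are our own assumptions, not the paper's exact setup.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

docs = ["text of document one", "another text document",
        "a completely different topic", "more text about documents",
        "that different topic again", "one more text document"]

# Steps 2-4: lower-casing, stop-word removal and TF-IDF weighting;
# the resulting term-document matrix is kept in sparse form.
X = TfidfVectorizer(lowercase=True, stop_words="english").fit_transform(docs)

# Steps 5-6: group the documents and evaluate cluster coherence.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels, silhouette_score(X, labels))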

5 Proposed Algorithms to Compute Similarity

Algorithm 1. Angular similarity of documents

Input: n, D. Here, n represents the number of documents from the collection D, where D is the collection of documents.
1. Read the number of documents.
2. Initialize an n × n matrix A with zero values.
3. For each document, apply the model below to calculate its similarity with the other documents:

A (n × n) is computed using the angular distance between any two documents. (2)
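A possible NumPy realisation of Algorithm 1 is sketched below; this is our own rendering, under the assumption that the angular similarity is the cosine of the angle between document vectors.

import numpy as np

def angular_similarity_matrix(X):
    # X: (n_docs x n_terms) TF-IDF matrix. Returns the n x n matrix A
    # with A[i, j] = cos(angle) between document vectors i and j.
    unit = X / np.linalg.norm(X, axis=1, keepdims=True)
    return unit @ unit.T  # pairwise dot products of unit vectors

X = np.array([[1.0, 0.0, 2.0],   # doc 1
              [0.5, 0.0, 1.0],   # doc 2 (parallel to doc 1)
              [0.0, 3.0, 0.0]])  # doc 3 (orthogonal to both)
print(np.round(angular_similarity_matrix(X), 3))
# The diagonal is 1.0: each document is most similar to itself.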

6 Experiments and Results This section discusses the experiments conducted. The implementation uses MATLAB R2018 on an Intel i3 processor with 4 GB RAM; Python 2.7 is also used. The experiments are performed on six different documents, and the experimental evidence is described here. It is found that the storage space consumed is reduced by a substantial amount after applying sparsification: the matrix containing 345 terms and 6 documents consumed 16,560 bytes, but after applying sparsification it is


Table 1 Results after sparsification

Name        Size      Bytes    Class    Attributes
S1          6 × 345   16,560   Double
sparse_S1   6 × 345   10,368   Double   Sparse
ysim        6 × 6     288      Double

Table 2 Results of K-means with various similarity measures; the best total sum of distances and average silhouette values are also shown

Similarity measure           Best total sum of distances   Average silhouette value   Number of clusters
Cosine                       2.55511                       0.0026                     3
Pearson correlation          2.51603                       0.3585                     3
Squared Euclidean distance   375.943                       0.4681                     3

reduced to 10,368 bytes. The first row of Table 1 shows the results before applying sparsification; the second row shows the results after sparsification for the same term-document matrix. A good quantitative approach to find the optimum number of clusters is to compare the average silhouette values in each cluster. Silhouette values indicate the quality of a cluster and lie in the range [−1, +1]: a positive value indicates appropriate clustering, while a negative value indicates inappropriate clustering and that the members should be reassigned.

S(i) = (min_j MeanD_BET(i, j) − MeanD_WITHIN(i)) / max(MeanD_WITHIN(i), min_j MeanD_BET(i, j)) (3)

Here MeanD_WITHIN(i) is the mean distance from object i to the members of its own cluster, and MeanD_BET(i, j) is the mean distance from i to the members of another cluster j.

Table 2 describes the results obtained using three different similarity measures. The best total sum of distances for each clustering is also shown. The average silhouette values are found to be positive, which indicates appropriate clustering.
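Equation (3) can be transcribed directly; the distances below are toy values of our own.

def silhouette(mean_d_within, mean_d_between):
    # mean_d_within: mean distance from object i to its own cluster;
    # mean_d_between: mean distances from i to each of the other clusters.
    b = min(mean_d_between)
    return (b - mean_d_within) / max(mean_d_within, b)

print(silhouette(0.4, [0.9, 1.2]))  # ~0.56: appropriately clustered
print(silhouette(0.9, [0.4, 1.2]))  # negative: should be reassigned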

6.1 Visualization by Plotting Cluster Centers and Their Mean Figures 1 and 2 show six documents grouped into three clusters using the correlation metric. The first cluster contains four documents, while the second and third clusters contain one document each; the best total sum of distances is 2.51603 and the average silhouette value is 0.3585 (Figs. 1 and 2). Table 3 indicates that the squared Euclidean distance shows higher silhouette values than the other two measures; so, for the primary dataset, the squared Euclidean distance performs better than the other two.


Fig. 1 Plot of the three cluster centers, considering 100 terms; the proximity measure used is squared Euclidean (X axis: number of terms, Y axis: coordinate positions of the cluster centers). The figure shows the cluster-center variation in the three clusters; the third cluster shows the maximum variation

6.2 Evaluating Optimum Number of Clusters The optimal number of clusters is decided based on the Calinski–Harabasz criterion, which here indicates that the optimum number of clusters is four. This criterion is important when deciding the optimal number of clusters for the task of organizing a huge collection of documents (Fig. 3). A small example of its use is shown below.
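The criterion can be applied with scikit-learn as sketched here; the synthetic data and the range of k are our own choices, not the paper's setup.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))  # synthetic stand-in for TF-IDF vectors

for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(calinski_harabasz_score(X, labels), 2))
# The k with the highest score is taken as the optimal number of clusters.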


Fig. 2 Plot of the average silhouette values for the three clusters (X axis: silhouette value, Y axis: cluster)

Table 3 Silhouette values for three different measures

Cosine    Squared Euclidean   Correlation
0.0014    0.2143              1.0000
0.0038    1.0000              −0.0044
−0.0000   0.1989              1.0000
0.0014    1.0000              −0.0026
0.0008    0.1971              −0.0182
0.0081    0.1983              −0.0250

Fig. 3 Calinski–Harabasz value plot (X axis: number of clusters, Y axis: Calinski–Harabasz values)


Fig. 4 Similarity analysis graph

6.3 Similarity Analysis of Text Documents In this experiment, term frequency and inverse document frequency based feature selection is used to convert the text into numbers according to term significance. The similarity is computed using a model based on the angle between the vectors, which answers questions such as when two vectors are parallel and when they are orthogonal (Fig. 4). Six documents are taken, and the similarity is computed using Algorithm 1. It is found that the first document exhibits most similarity with itself, the second document exhibits most similarity with itself, and so on.

7 Conclusion The similarity measures used in this work are found to be effective for determining the similarity between objects. Clustering is used on unlabeled data to find natural groupings and patterns. Most clustering algorithms require the researcher to have prior knowledge of the number of clusters; when this statistic is not available, cluster evaluation techniques can be used to decide the number of clusters present in the data based on a specified metric. During the evaluation of document clusters, three major causes of variation in the results were found: first, document representation, selection of frequency, dimensionality reduction techniques and the preprocessing involved; second, the selection of the space-oriented or similarity measure; third, the clustering itself. The advantages of this approach are as follows:


1. Addressing local minima through reassignment: Many numeric minimization problems suffer from local minima, which is one of the drawbacks of the existing k-means. Local minima are addressed here by replicating the process: a point is reassigned from its cluster to a new cluster, and it is checked whether the overall sum of point-to-centroid distances has improved; if the reassignment yields a better solution, it is carried out for those points.
2. Addressing space complexity through sparsification.
The optimal number of clusters is decided based on two approaches: the traditional method and the Calinski–Harabasz criterion. In the traditional method, the number of clusters is varied between two constant values and it is checked whether there is a substantial improvement in the quality of the clusters. In this paper, only one algorithm for similarity measure implementation is shown; our next work will include further algorithms and implementations.

References 1. Goswami M, Babu A, Purkayastha BS (2018) A comparative analysis of similarity measures to find coherent documents. Appl Sci Manag 2. Rousseeuw PJ (1987) Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Comput Appl Math 20:53–65 3. Singh P, Sharma M (2013) Text document clustering and similarity measures. Department of Computer Science & Engineering 4. Kaufman L, Rousseeuw PJ (1990) Finding groups in data: an introduction to cluster analysis. Wiley, Hoboken, NJ 5. Steinbach M, Karypis G, Kumar V (2000) A comparison of document clustering techniques. Department of Computer Science & Engineering 6. Tajunisha N (2014) Concept and term based similarity measure for text classification and clustering. Int J Eng Res Sci Tech 3(1) 7. Bhanu Prasad A, Venkata Gopala Rao S (2013) Space and cosine similarity measures for text document clustering. Int J Eng Res Technol 2(2):2278–0181 8. Amorim RC, Hennig C (2015) Recovering the number of clusters in data sets with noise features using feature rescaling factors. Inf Sci 324(126):1602–06989 9. Deng HCX (2008) Efficient phrase-based document similarity for clustering. IEEE Trans Knowl Data Eng 20(9):1217–1229 10. Kalaivendhan K, Sumathi P (2014) An efficient clustering method to find similarity between the documents, vol 1 11. Lin YS, Jiang JY, Lee SJ, Member IEEE (2014) A similarity measure for text classification and clustering. IEEE Trans Knowl Data Eng 26(7) 12. Sruthi K, Venkateshwar Reddy B (2013) Document clustering on various similarity measures. Int J Adv Res Comput Sci Softw Eng 3. ISSN: 2277 128X 13. Wajeed MA, Adilakshmi T (2011) Different similarity measures for text classification using KNN. In: 2011 2nd International conference on computer and communication technology (ICCCT-2011), pp 41–45. IEEE 14. Patidar AK, Agrawal J, Mishra N (2012) Analysis of different similarity measure functions and their impacts on shared nearest neighbor clustering approach 15. Ester M, Kriegel H-P, Sander J, Xu X (1996) A density-based algorithm for discovering clusters in large spatial databases with noise


16. Kamber JHM (2006) Data mining: concepts and techniques. Morgan Kaufmann, San Francisco, CA, USA 17. Sebastiani F (2002) Machine learning in automated text categorization. ACM CSUR 34(1):1–47 18. Tan PN, Steinbach M, Kumar V (2006) Introduction to data mining. Addision-Wesley, Boston, MA, USA

Chapter 11

Bagged Random Forest Approach to Classify Sentiments Based on Technical Words Sweety Singhal, Saurabh Maheshwari and Monalisa Meena

1 Introduction Sentiment analysis (SA) is a process by which one can identify the nature of a sentence, document or review. Nowadays, many users go through review sites to find reviews of a product and decide on that basis whether the product should be purchased. In short, a sentiment can be defined as a judgement based on feeling. A sentiment expresses a feeling about something: "The current state of the economy is a matter of interest for me" expresses a sentiment, whereas "I believe that the country's economic conditions are not good" expresses an opinion; the first sentence shows the sentiment of the user, while the second only tells about the opinion of the user [1, 2]. Previous research has observed a significant impact of reviews in the smartphone domain. Many researchers have studied whether a person should buy a smartphone, classifying reviews with general terms such as good or bad, but none of them categorized reviews according to technical words. Here, we consider technical words to classify the sentiments. First, data is collected from different websites. Next, the sentiment of the data is determined; a sentiment can be of three categories, positive, negative or neutral, and can be identified using multiple methods such as natural language processing, statistics and machine learning. Finally, the precision of the SA is determined, which can be done in a number of ways.



1.1 Sentiment Analysis Levels The levels of SA are categorized as follows. Word level focuses on previously determined features, and the opinion is found accordingly. Sentence level is more robust than document level and focuses on each sentence to identify the sentiments. At document or review level, the major concern is how to obtain subjective text for concluding the overall sentiment of the entire document [1].

1.2 Methods of Sentiment Analysis SA methods can be classified into four principal groups: lexical affinity, keyword spotting, concept-level leverage and statistical-method leverage.

2 Literature Review Zhang et al. analyzed a large amount of real data and identified four characteristics of mobile reviews: (1) short average length; (2) large traverse length; (3) power-law distribution; and (4) significant contrast polarity. They used a Bayesian method for classification and N-grams for feature representation [3]. Hegde and Padma proposed a methodology to find the sentiments of mobile product reviews written in Kannada, as numerous users post Kannada reviews on the web. They used a lexicon-based approach to find the aspects of the reviews, and polarity is found using the Naïve Bayes classification method [4]. Chawla et al. provided a sentiment analysis of various smartphone reviews collected from Amazon. They used a machine learning approach to classify the sentiments: Naïve Bayes and support vector machine methods were applied, and they concluded that SVM gives a much better result than the Naïve Bayes approach [5]. Singla et al. performed data analysis of a large set of online reviews for cell phones. Big data has given a major boost to e-commerce and has opened up avenues for smarter and better-informed decision-making in large enterprises. They not only classified the sentiments into positive, negative and neutral but also worked on classification based on anger, happiness, trust, sadness, etc. [6]. Ramani and Prasun noted that sentiment analysis is a new and emerging field of research which deals with information extraction and knowledge discovery from text using


natural language processing (NLP) and data mining (DM) procedures. They surveyed multiple approaches, mostly based on machine learning and lexicon-based techniques [7]. Patel and Shah observed that the ongoing development of information technology has raised the use of various social networking sources such as Twitter, blogs, WhatsApp, Facebook and so on; people from all over the world express their opinions at any time, anywhere, on any matter [8]. Patni assumed that the internet is a very important part of everyone's life and contains a great amount of data; various techniques to find the sentiments of app-based mobile reviews are reviewed [9]. Dar and Jain dealt with the study of sentiment analysis, where the term is used for recognizing the sentiments and opinions of users; their paper explains the different methods used for sentiment analysis and opinion mining [10]. Ekta noted that reviews or feedback are essential in marketing for any business or product, a review being about customer opinion. The authors collected reviews for cell phones and applied the Naïve Bayes text classification method: it removes whitespace, web links, punctuation and numbers, keeps the words that reflect opinion, builds a word cloud from them, counts the positive, negative and neutral comments, and then applies k-means text clustering to create three groups, one positive, one negative and one neutral [11]. Ciurumelea et al. investigated the content of 1566 user reviews and defined a low- and high-level taxonomy containing mobile-specific categories highly relevant for developers during the planning of maintenance and evolution activities [12].

3 Methodology of the Proposed System
• Data Collection: For the analysis of sentiments, the dataset is collected from online sources. The data is stored as an ARFF file and taken as input for processing; after this, preprocessing and cleaning of the data is done.
• Preprocessing and Filtering: Data preprocessing and filtering include sentence splitting, tokenization, stop-word filtering and stemming.
• Feature Selection (Using Gini Index): Feature subset selection is the process of identifying and removing irrelevant and redundant information. This reduces the dimensionality of the data and may enable learning algorithms to work faster and more effectively; the precision of future classification can also be improved, leading to a more compact, easily interpreted representation of the target concept. The Gini index is an impurity-based splitting method: the lower the impurity, the better the attribute (a minimal computation is sketched after this list).


• Classification:
Random Forest Algorithm: RF is a machine learning approach in which multiple, say n, decision trees are generated; the results of the n trees are aggregated to produce a final result. Each tree uses a different sample, which is why the method is called random. The use of multiple trees reduces the risk of overfitting, training time is low, and RF can maintain accuracy even when a large proportion of the data is missing.
Bagging: Bagging stands for bootstrap aggregation and can be applied to any model class, although historically it has most often been used with decision trees. The key intuition of bagging is that it reduces the variance of the model class; in a simple 1-D regression picture, bagging tries to bring the prediction as close as possible to the true curve.
Proposed Bagging Algorithm: The dataset is sampled with replacement into ten sets with an equal number of tuples using bagging. Then, for each bootstrap sample, the base classifier is trained and returns its predicted result; at the end, the final prediction is produced by majority voting. In contrast to a single classifier derived from the original training data, the combined classifier has greater precision: it is never substantially worse, and it is more robust to the effect of noisy data. This combined model reduces the variance of the single classifier while performance and accuracy increase (a runnable sketch follows this list). The procedure is as follows.
Algorithm: Improved Bagging Procedure
Input: T, a set of t training tuples; k, the number of models in the ensemble; a learning algorithm (Random Forest).
Output: A classification model, Z*.
(1) for i = 1 to k do // create k models:
(2) generate a bootstrap sample, Ti, by sampling T with replacement;
(3) use Ti to derive a model, Zi;
(4) end for
To use the composite model on a tuple, V:
(1) if classification, then let each of the k models classify V and return the majority vote;
(2) if prediction, then let each of the k models estimate a value for V and return the average predicted value.


Fig. 1 Flow chart of proposed technique

• Performance Analysis: The results are analyzed on the basis of several parameters; to assess the outcome, the confusion matrix parameters are calculated [12]. The flow chart of the proposed methodology is shown in Fig. 1.
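The Gini impurity computation referred to in the feature selection step can be sketched as follows; the toy review labels are our own illustration.

from collections import Counter

def gini_impurity(labels):
    # Gini impurity of a label set: 1 - sum of squared class proportions.
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_of_split(subsets):
    # Weighted impurity of the subsets an attribute splits the data into;
    # the lower this value, the better the attribute.
    total = sum(len(s) for s in subsets)
    return sum(len(s) / total * gini_impurity(s) for s in subsets)

# A technical word that separates positive and negative reviews well:
print(gini_of_split([["pos", "pos", "pos"], ["neg", "neg"]]))  # 0.0
# An uninformative word:
print(gini_of_split([["pos", "neg"], ["pos", "neg"]]))         # 0.5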
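The improved bagging procedure itself can be approximated in a few lines of scikit-learn, following the pseudocode's choice of Random Forest as the base learner; the synthetic dataset and all parameter values are placeholders (note that the first parameter is named base_estimator in older scikit-learn releases).

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# k bootstrap samples T_i are drawn with replacement; each trains a
# Random Forest model Z_i, and tuples are classified by majority vote.
model = BaggingClassifier(
    estimator=RandomForestClassifier(n_estimators=25, random_state=0),
    n_estimators=10,   # k = 10 models in the ensemble
    bootstrap=True,    # sampling T with replacement
    random_state=0,
)
print(cross_val_score(model, X, y, cv=5).mean())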

4 Result and Discussion Evaluation Parameters: The evaluation parameters of sentiment analysis involve four confusion matrix terms: true positives (p), false negatives (q), false positives (r) and true negatives (s). The confusion matrix incorporates these terms, which are used for evaluation.

Precision(pre) = p/(p + r); Recall(rc) = p/(p + q)


F-Measure = (2 × rc × pre)/(pre + rc); Accuracy = (p + s)/(p + q + r + s)
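These four formulas can be evaluated directly from the confusion-matrix counts; the toy counts below are our own, chosen only to roughly mirror the 91-instance experiment reported next.

def confusion_metrics(p, q, r, s):
    # p = true positives, q = false negatives,
    # r = false positives, s = true negatives.
    pre = p / (p + r)
    rc = p / (p + q)
    f_measure = 2 * rc * pre / (pre + rc)
    accuracy = (p + s) / (p + q + r + s)
    return pre, rc, f_measure, accuracy

print(confusion_metrics(p=40, q=6, r=6, s=39))
# accuracy = 79 / 91 = 0.868...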

Figure 2 shows the classification results of the Gini index with the bagged random forest algorithm. Out of 91 instances, the proposed algorithm classifies 79 correctly and 12 incorrectly, giving an accuracy of 86.8132%. The figure also shows class-level parameters such as precision, recall and F-measure; the kappa statistic, which reflects the performance of the algorithm, is 0.7248. Multiple parameters are computed under both the cross-validation model and the percentage-split model. The results show that the proposed methodology gives the best result among the three approaches applied: random forest, bagging and the proposed approach (Tables 1, 2, 3, 4 and 5 and Figs. 3, 4 and 5). The TP rate, FP rate, precision, recall and F-measure are computed for all three approaches, and the proposed approach gives the best result. Graphs are also plotted for error-rate comparison, where it is clearly observable that the proposed method has a lower error rate than the bagging and random forest approaches.
Fig. 2 Confusion matrix

Table 1 Accuracy comparison using the cross-validation and percentage-split models

Technique       Cross-validation model   Percentage-split model
Random Forest   65.9341                  67.7419
Bagging         67.033                   61.2903
Proposed        86.8132                  80.6452

Table 2 Comparison of class parameters using the cross-validation model

Parameter    Random forest   Bagging   Proposed
TP rate      0.659           0.67      0.868
FP rate      0.519           0.545     0.154
Precision    0.631           0.64      0.869
Recall       0.659           0.67      0.868
F-measure    0.631           0.624     0.867
ROC area     0.622           0.614     0.915


Table 3 Comparison of error rates using the cross-validation model

Parameter                  Random forest   Bagging   Proposed
Mean absolute error        0.3813          0.4057    0.2711
Root mean squared error    0.4816          0.4654    0.3368
Relative absolute error    0.8464          0.901     0.5564

Table 4 Comparison of class parameters using the percentage-split model

Parameter    Random forest   Bagging   Proposed
TP rate      0.677           0.613     0.806
FP rate      0.546           0.581     0.145
Precision    0.674           0.564     0.845
Recall       0.677           0.613     0.806
F-measure    0.612           0.566     0.812
ROC area     0.618           0.627     0.932

Table 5 Comparison of error rates using the percentage-split model

Parameter                  Random forest   Bagging   Proposed
Mean absolute error        0.4032          0.4152    0.3229
Root mean squared error    0.4816          0.4699    0.3639
Relative absolute error    0.8757          0.901     0.73243

Fig. 3 a Accuracy comparison by cross validation model; b accuracy comparison by percentage split model

Fig. 4 a Comparison of cross validation model based class parameters; b comparison of percentage split validation model based class parameters


Fig. 5 a Comparison of error rate based class parameters; b comparison of error rates based cross validation model

The mean absolute error is 0.3229, the root mean squared error is 0.3639 and the relative absolute error is 0.73243 in the proposed approach.

5 Conclusion and Future Work Sentiment analysis is a language processing task that examines textual content and classifies it as positive or negative. In this work, sentence-level SA considers two polarity classes for each sentence (negative, positive). The TP rate, F-measure, accuracy, ROC area, precision, recall and FP rate are calculated for each class, and the proposed Gini index feature selection based bagged random forest is compared with the existing random forest and bagging techniques. Our approach performs better than the existing random forest and bagging approaches, with an accuracy of 80.64% in the percentage-split case (using 66 and 34 as the training and testing percentages) and 86.81% under the cross-validation model. In future, the work can be extended to determine mobile features in detail, i.e. to check polarity on several features such as battery lifetime, operating system, etc., and to build a dictionary for them.

References 1. Singhal S, Maheshwari S, Meena M (2017) Survey of challenges in sentiment analysis. Springer 2. Manek AS, Deepa Shenoy P, Chandra Mohan M, Venugopal KR (2016) Aspect term extraction for sentiment analysis in large movie reviews using Gini Index feature selection method and SVM classifier. Springer, pp 135–154 3. Zhang L, Hua K, Wang H, Qian G, Zhang L (2014) Sentiment analysis on reviews of mobile users. In: International conference on mobile systems and pervasive computing, pp 458–465 4. Hegde Y, Padma SK (2015) Sentiment analysis for kannada using mobile product reviews a case study. IEEE 5. Chawla S, Dubey G, Rana A (2017) Product opinion mining using sentiment analysis on smartphone reviews. In: International conference on reliability, infocom technologies and optimization. IEEE


6. Singla Z, Randhawa S, Jain S (2017) Statistical and sentiment analysis of consumer product reviews. IEEE 7. Ramani D, Prasun H (2014) A survey: sentiment analysis of online review. Int J Adv Res Comput Sci Softw Eng 8. Patel NS, Shah DB (2015) Mobile computing opinion mining and sentiment analysis. Int J Adv Res Comput Sci Softw Eng 9. Patni S (2014) Review paper on sentiment analysis is-big challenge 10. Dar A, Jain A (2014) Survey paper on sentiment analysis: in general terms. Int J Emerg Res Manag Technol 11. Ekta (2017) Survey of review sentiment analysis through text classification and text clustering. Int J Eng Sci Comput 12. Ciurumelea A, Schaufelbuehl A, Panichella S, Gall H (2017) Analyzing reviews and code of mobile apps for better release planning

Chapter 12

Healthcare Services Customers’ ICT-Related Expectations and Experiences Anamika Sharma and Irum Alvi

1 Introduction India’s populace is expected to cross 1.6 billion by 2050, along these lines, the general well-being and healthcare service providers of the nation are confronted with a mammoth challenge. The Government at the Central, State and Local level has made immense budgetary expenditures under aspiring plans like NRHM, yet access to low-cost medicinal and healthcare services is still in a pitiable condition. Information and Communication Technology (ICT) has an immense potential to develop in this area because of minimal cost, low-cost mobile phones and more “inclusive” arrangements that fill significant gaps in information access and communication pertaining to health care. ICT has the ability to influence all parts of the healthcare services, especially in general well-being, managing data and information in public healthcare sector. In spite of the fact that India has the advantage of a strong Information Technology force just as indigenous satellite technology alongside trained Human Resource, still the utilization of telemedicine is at an initial stage, especially in the general well-being area. Notwithstanding the recent endeavours, telemedicine may assume a role in conveying health care to the remotest corners of India. There are different points of interest for incorporating ICT in healthcare services, such as, better access to total and precise Electronic Health Records (HER) that gather results for improving diagnosis and prognosis and avoiding blunders and in this way sparing valuable response times. It additionally prompts more commitment on the part of the patients as customers. Further, it improves populace based knowledge. ICT summons an amplified efficiency in cases where the general wellbeing is in a state of mess. Regardless of endeavours and political will to improve health services, A. Sharma (B) · I. Alvi Rajasthan Technical University, Kota, Rajasthan, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 H. Sharma et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-0426-6_12


Despite endeavours and political will to improve health services, the general picture remains one of inadequacy and ineffectiveness, resulting in long waiting times and long queues of restless patients. ICT can help bridge the information gaps present in the healthcare sector in developing nations like India by giving new and productive methods for accessing, imparting and storing healthcare-related data and information; it has the potential to improve the health system and to avert medical blunders. A savvy, cautious and contextual integration of ICT in providing health care to customers ought to be an organized strategy to support the complex health needs of patients. Although the significance of ICT in the healthcare sector is widely recognized, research studies measuring the impact of ICT on healthcare services are scarce, especially in India. The few studies that attempted to examine ICT-related factors affecting healthcare services in India analyzed the preservation of computerized customer records [9] or tried to discover the potential advantages of e-health connections, analyzing them with reference to availability, stability of care, acceptance, patient security and status of care, etc. [18], but none of these studies took the patients' or customers' perspective into consideration. The present research paper is significantly different in its approach, as it analyzes the influence of ICT on healthcare services from the customers' perspective and provides valuable input for private healthcare service providers, which might assist in improving their competence and strength. The existing studies on ICT in health care broadly discuss the use of IT for recording, determining and sharing healthcare customers' information, but the present paper uses a different approach by identifying the expectations of healthcare customers from their service providers.

2 Private Healthcare Service Providers and ICT in India Indians' out-of-pocket expenses on healthcare services stand at approximately 60 per cent, among the highest in the world. The normal cost of a three-day hospitalization in a private healthcare unit may amount to nearly INR 25,000, while the same may be as low as INR 4,500 in a government hospital. Modern healthcare units offer a wide assortment of services within a wide field of clinical care, research and specialties, as well as support services; consequently, there is a mounting need to rebuild and restructure work procedures and workflow to increase value. Until now, ICT infrastructure, despite its immense potential, has been limited to documentation as opposed to workflow, and has thereby failed to support the much-needed mobility and necessary operations of the healthcare sector. The ICT framework needs to be merged with wireless access by IP-enabled, handheld digital gadgets to support information flow, and the frameworks, applications and hardware should be integrated to support flows of data in parallel with the healthcare units' work processes. The fundamental ICT infrastructure will decide how such frameworks can be extended to support work processes. The use of ICT in healthcare services is one of the most rapidly growing areas [14].


The telemedicine programme aims to provide healthcare services in rural areas by linking local hospitals to the well-developed and advanced hospitals of the cities, and in India these facilities are expanding quickly [1]. The new upheaval in health services is not just about new medicines or treatments, but also about utilizing ICT to convey real-time data and information, which drives safe and proficient patient-centred care. Coordinating healthcare services, gadgets, instruments and information systems to prompt communication will define the modern healthcare provider in the coming era. The increasing need for healthcare services from a rapidly developing population and a maturing, aging society implies that the need for ICT is greater today than it has ever been before. In the everyday business of health care, there are gaps which compromise the health of all, adding considerably to the expenses of maintaining the healthcare system. Thus, ICT has a much greater task to carry out in managing healthcare services by providing care and cure.

3 Literature Review In India, a large section of the low-income categories lacks quality health care, which makes it essential to develop newer methods and means like m-health to make quality, inexpensive health care available to all [3]. Customarily, healthcare frameworks are seen as the "iron triangle" of communication, quality and cost: none of these can change without influencing the others. Enhancing communication or quality necessitates expanded investment, while bringing down costs influences quality or communication. In India, m-health may be considered one of the technologies that can serve the iron triangle by expanding communication, enhancing quality and bringing down expenses for the majority of its market sections [12]. India has incredible potential, with technology ready to change the healthcare services framework; to amplify the transformative effect of ICT in this sector, technology has to be cautiously and deliberately embedded into the framework, that is, it needs to be properly coordinated and integrated. Various applications are conceivable in different fields of health care, such as diagnostics, tele-consultation booths, remote patient observation and monitoring systems, payment, disease control, training, etc. Data management is one of the main areas for enhanced quality that can give a significant understanding of the framework of the health sector [22]. A model based on IT-enhanced quality of service is shown in Fig. 1; the conceptual model is in part based on the Healthcare IT-enhanced Service Quality model [16]. In this model, the planned approach is to apply IT for the improvement of service quality based on the standards of productivity as well as adequacy, to improve the communication, promptness and availability of healthcare-related services. Primary health care can be made straightforward and effective by the enactment of a "mobile-based primary healthcare management system" [15]. Smartphones are crucial extensions of normal mobile phones and could be a powerful catalyst in the growth of health services using internet technology and gadgets [21].


Fig. 1 Healthcare IT-enhanced service quality

The use of ICT has grown extensively throughout the world, particularly among developed countries, while developing nations like India are still struggling, particularly with essential healthcare challenges. These days, individuals across all age groups have access to information on the net. ICT can assume an essential role in providing better quality services and in improving the quality of administrative services in hospitals that dare to think outside the box [17]. The utilization of ICT in such spheres as healthcare services, health investigation, health education and training, and research [23] has extraordinary potential to improve administrative proficiency, extend or scale up treatment for many patients, and improve outcomes [5]. Healthcare-related information is presently maintained manually in large parts of India, or it is kept in electronic form [10]. An automated framework can help improve the system and proficiently deal with information accumulation, processing, storing, investigation and sharing; it likewise helps specialists in serving the community in a fitting way [2]. Automated systems for health care in India can improve maternal as well as paediatric services, particularly antenatal and immunization services [20]. Further, with healthcare customers becoming more aware of their health maintenance choices, the internet is a suitable medium for promoting information exchange between healthcare customers and the providers of health services [7]. Several researchers considered different factors for evaluating the effectiveness of healthcare providers' websites, including dimensions such as accessibility, marketing, content, technology and usability [7]; access/convenience, audience, certainty, timeliness, content, authority and security [19]; technical characteristics, hospital communication and facilities, medical services, interactive internet services and external activities [13]; the global


quality, accessibility, usage, interactivity, updating, good quality model and communication [4]; content, function, design, as well as management and usability [11]; and general information, contact details, web connectivity, quality of care, information for patients, messages about resources and performance, site exploration and usefulness, health information, and services provided to professionals along with facilitating proceedings [6]. In India, no research has been undertaken to check the quality of the websites of healthcare service providers, though websites offer similar content and information. The present study tries to evaluate the gap between the information expected and the information experienced on the websites of private healthcare service providers.

4 Research Method The present study is based upon primary data gathered using a questionnaire comprising several statements related to healthcare service delivery. Comprehensive information is presented which illustrates customers' expectations regarding ICT in delivering these services efficiently and achieving high quality in healthcare services. It is a descriptive study which aims to find the gap between customers' expectations regarding healthcare service delivery and their experiences with the present service delivery; understanding this gap will help in providing more efficient and effective services.

4.1 Objectives The main objectives of the study are: • To scrutinize the impact of ICT on healthcare services. • To identify the areas of healthcare services, where the customers’ expectations are increasing due to ICT advancements. • To identify the gap between expectations and experiences regarding ICT usage in healthcare service delivery.

4.2 Hypothesis To fulfill these objectives, the following hypotheses are formulated: H1: There is no statistically significant difference between the expectations and the experiences of the respondents related to different types of information on the websites of the private healthcare service providers.


H2: There is no statistically significant difference between the expectations and the experiences of the respondents, related to different types of information sharing through SMS by the private healthcare service providers.

4.3 Scope and Contribution Although much research has been undertaken on the evaluation of healthcare service quality in developing and developed countries, inadequate research has been carried out in India to date. The exponential growth of ICT in India, of mobiles in particular, has an intense effect on information access and its usage at all levels, provided that the applications are designed with the needs and situations of their users in mind [8]. This research serves to understand the expectations and experiences of private healthcare service-provider customers and seeks to bridge the gap, thereby enhancing satisfaction among healthcare customers. The paper contributes to the existing literature on the healthcare industry by studying the gap between customers' expectations related to the use of ICT and the current status of ICT use in delivering healthcare services. The findings will not only assist private healthcare service providers in delivering quality care but also assist policy makers in forming and implementing appropriate strategies. Healthcare service providers need to continually update themselves by adopting new ICTs to enhance the level of customer satisfaction. The study provides a useful basis for understanding varied customer expectations in healthcare services, which will assist in developing services, encounters and communications that meet customers' needs and expectations.

4.4 Selection and Size of the Sample The population comprises the customers of select private healthcare service providers in Rajasthan. A sample size of 130 was selected for this study, keeping in mind the average sample sizes used in similar investigations. The technique of convenience sampling was used to select the sample. This technique is regarded as suitable because the purpose of the present study is to investigate the connections among the select factors and not to gauge their interval estimates. The criterion for inclusion in this study is that the customers should be local and should have utilized the select private healthcare service units during the last twelve months. The data was personally gathered during the given time period. Verbal consent was taken from the respondents before the survey, and before administering the survey questionnaire, the importance and significance of the scale was explained to the respondents. The survey questionnaires were disseminated to adult patients above 18 years of age.

Table 1 Gender distribution of the respondents

Gender | Number | Percentage
Male | 102 | 78.5
Female | 28 | 21.5

5 Results and Analysis 5.1 Demographic Details of the Respondents Of the 130 respondents, 78.5% were male and 21.5% were female (Table 1). The ages of the respondents were almost equally distributed among the 20–30, 30–50 and above-50 age groups. This was mainly because, in the visited wards, these age groups were the most willing to respond about their experiences and expectations related to the use of technology in healthcare service delivery. For the present study, the respondents were asked about their use of mobile phones and the Internet in day-to-day activities; the majority reported having direct access to both. The major information they expected to find on websites related to healthcare service delivery, and their actual experiences, are discussed below.

5.2 Expectations and Experiences Regarding Information on Websites In the healthcare service sector, very few super-specialty and multi-specialty service providers make information available online. Those that do provide only basic information such as their address, contact numbers, total capacity, facilities and medical services available, a list of founders, etc. Moreover, most of the websites provide information that has not been updated for a long time. H1: There is no statistically significant difference between the expectations and the experiences of the respondents, related to different types of information on the websites of the private healthcare service providers. The respondents responded to items about the availability of information on the websites of the studied healthcare service providers, in order to find the gap between expectations about the use of ICT in healthcare services and the current status in reality; it was found that the information was not updated recently and was highly insufficient. They were asked what type of information they would want, how quickly they would want this information to be updated, and what type of information they were actually able to find. Table 2 presents the analysis of the data and depicts a significant difference between expectations and experiences.


Table 2 Statistics: expectations and experiences regarding information on websites

Construct | Pairs | Mean | N | Std. deviation | z | df | Sig (2-tailed)
Contact number | Expectations | 4.308 | 130 | .6207 | 17.279 | 129 | .000
  | Experiences | 2.392 | 130 | 1.0380 | | |
Address | Expectations | 4.308 | 130 | .6207 | 1.148 | 129 | .253
  | Experiences | 4.192 | 130 | .9572 | | |
Services offered | Expectations | 4.592 | 130 | .5665 | 65.231 | 129 | .000
  | Experiences | 1.085 | 130 | .2794 | | |
Panel of doctors | Expectations | 4.292 | 130 | .4566 | 28.729 | 129 | .000
  | Experiences | 1.954 | 130 | .8152 | | |
Timings of the doctors | Expectations | 4.169 | 130 | .6243 | 42.000 | 129 | .000
  | Experiences | 1.246 | 130 | .4324 | | |
Number of beds | Expectations | 4.731 | 130 | .6073 | 36.036 | 129 | .000
  | Experiences | 1.792 | 130 | .6899 | | |
Charges of different facilities | Expectations | 4.469 | 130 | .5162 | 76.624 | 129 | .000
  | Experiences | 1.000 | 130 | .0000 | | |
Feedback or experiences of the customers | Expectations | 4.369 | 130 | .5989 | 37.049 | 129 | .000
  | Experiences | 1.946 | 130 | .6012 | | |
Total | Expectations | 35.2385 | 130 | 1.66963 | 99.999 | 129 | .000
  | Experiences | 15.6077 | 130 | 1.68668 | | |

A z-test was performed comparing the mean scores of the respondents' expectations related to different types of information on the websites with the mean scores of their experiences of the same. The mean score of the expectations regarding contact number (M = 4.308, SD = 0.6207) was higher than the mean score of the experiences (M = 2.392, SD = 1.0380) related to contact number information on the website of the service provider. The value of z = 17.279, p = 0.000 is significant, as the Sig (2-tailed) value of less than 0.05 indicates a statistically significant difference between the pairs. The mean score of the expectations regarding address (M = 4.308, SD = 0.6207) was higher than the mean score of the experiences (M = 4.192, SD = 0.9572). The value of z = 1.148, p = 0.253 is not significant, as the Sig (2-tailed) value greater than 0.05 indicates that no statistically significant difference exists between the pairs. The mean score of the expectations regarding services offered (M = 4.592, SD = 0.5665) was higher than the mean score of the experiences (M = 1.085, SD = 0.2794). The value of z = 65.231, p = 0.000 is significant and indicates a statistically significant difference between the expected information about services offered by the service provider and the experience of the customers on the


website. The mean score of the expectations regarding the panel of doctors (M = 4.292, SD = 0.4566) was higher than the mean score of the experiences (M = 1.954, SD = 0.8152). The value of z = 28.729, p = 0.000 is significant and indicates a statistically significant difference between the expectations and real experiences of the respondents regarding the panel of doctors. The mean score of the expectations regarding the timings of the doctors (M = 4.169, SD = 0.6243) was higher than the mean score of the experiences (M = 1.246, SD = 0.4324). The value of z = 42.000, p = 0.000 is significant; the Sig (2-tailed) value of less than 0.05 indicates that a statistically significant difference exists between the pairs. The mean score of the expectations regarding the number of beds (M = 4.731, SD = 0.6073) was higher than the mean score of the experiences (M = 1.792, SD = 0.6899). The value of z = 36.036, p = 0.000 is significant and depicts a statistically significant difference between the expectations and experiences of the respondents regarding the information about the number of beds. The mean score of the expectations regarding the charges of different facilities (M = 4.469, SD = 0.5162) was higher than the mean score of the experiences (M = 1.000, SD = 0.0000). The value of z = 76.624, p = 0.000 is significant; the Sig (2-tailed) value of less than 0.05 indicates a statistically significant difference between the pairs. Moreover, the mean score of the expectations regarding feedback of customers (M = 4.369, SD = 0.5989) was higher than the mean score of the experiences (M = 1.946, SD = 0.6012). The value of z = 37.049, p = 0.000 is significant, indicating a statistically significant difference. As per the results, a statistically significant difference exists between the expectations and experiences of the respondents related to the different types of information displayed on the websites of the healthcare service providers (z = 99.999, p = 0.000); hence, hypothesis H1 is rejected.
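The paired comparison above can be reproduced once the raw per-respondent scores are available. The following is a minimal sketch, assuming two equal-length arrays of 5-point Likert scores (simulated here for illustration, not the survey data); with n = 130 the paired t statistic is conventionally reported as a z value, as in Table 2.

# A hedged sketch of the paired test reported above; the Likert responses
# below are simulated stand-ins, not the survey data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
expectations = rng.integers(3, 6, size=130).astype(float)  # scores 3..5
experiences = rng.integers(1, 4, size=130).astype(float)   # scores 1..3

# Paired (dependent-samples) test; df = n - 1 = 129.
stat, p_value = stats.ttest_rel(expectations, experiences)
print(f"z = {stat:.3f}, df = {len(expectations) - 1}, p = {p_value:.3f}")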

4. Domain's updated date—If the domain of a website has not been updated on its WHOIS record [12] for more than 2 years, it is phishy.
If updated date ≥ 2 years → Phishing; ElseIf updated date < 2 years → Legitimate
5. Double slash—"//" in the path of a URL is used for redirecting; thus, a site using this is considered phishy.
If "//" in URL → Phishing; Else → Legitimate
6. Use of HTTPS—This rule is proposed to check whether the site provides a secure connection by using HTTPS. But phishers append "https" to the domain part to fool users, so that is also checked.
If HTTPS used → Legitimate; Else → Phishing
7. Requested URL—This feature checks whether the images and other content displayed on the web page are loaded from a link belonging to the same domain.
If requested URL ≤ 20% → Phishing; ElseIf requested URL > 20% and ≤ 50% → Suspicious; ElseIf requested URL > 50% → Legitimate
8. URL of anchor tag—If the hyperlinks in a web page lead to pages of the same domain, the website is real; otherwise it is phishing.
If URL of anchor tag ≤ 20% → Phishing; ElseIf URL of anchor tag > 20% and ≤ 50% → Suspicious; ElseIf URL of anchor tag > 50% → Legitimate
9. Links in tags—Links present in tags like META and SCRIPT are checked; if they do not belong to the same domain, the site is phishy.
If links in tags ≤ 20% → Phishing; ElseIf links in tags > 20% and ≤ 50% → Suspicious; ElseIf links in tags > 50% → Legitimate
10. Abnormal URL—If the WHOIS data [12] of a URL does not mention the same domain name as in the domain part, it is phishy.
If domain name same in WHOIS record → Legitimate; Else → Phishing
11. Pop-up window—If a website uses pop-up windows for passwords, it is phishing.
If use of pop-up window → Phishing; Else → Legitimate
12. Iframe tag—Iframe tags are used to split the window into a number of frames. If the border is set to transparent, pages from another domain loaded inside the current page cannot be recognized.
If use of transparency in Iframe → Phishing; Else → Legitimate
13. Redirect—If the site uses redirects like HTTP 301 to send the user to another page when the link is clicked, it is marked phishy.
If use of redirect → Phishing; Else → Legitimate
14. Age of the domain—This feature checks how long the domain has been registered. Phishy URLs use newly registered domains that are not kept for very long. The rule proposed for this is as follows.
If age of domain ≤ 6 months → Phishing; ElseIf age of domain > 6 months → Legitimate
15. DNS record of URL—The DNS record is checked by performing a DNS lookup. If a record for the URL is found, it is legitimate; otherwise it is phishy.
If DNS record present → Legitimate; Else → Phishing
16. Traffic on the website—This feature checks the page rank of the webpage using the Alexa database [13]; the more traffic a webpage receives, the better (lower) its rank.
If page rank ≤ 100,000 → Legitimate; ElseIf page rank > 100,000 → Suspicious; ElseIf no page rank → Phishing
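As an illustration, a few of these rules can be encoded as ternary feature values. The sketch below is a hypothetical encoding (1 = legitimate, 0 = suspicious, -1 = phishing), not the authors' exact implementation; the position threshold in the double-slash rule and the helper inputs are assumptions.

# Hypothetical encodings of rules 5, 6 and 14 above (1 = legitimate,
# 0 = suspicious, -1 = phishing); thresholds follow the stated rules.
from urllib.parse import urlparse

def double_slash_feature(url: str) -> int:
    # Rule 5: a "//" occurring past the scheme separator signals redirection.
    # Position 7 covers "http://" and "https://" prefixes (an assumption).
    return -1 if url.rfind("//") > 7 else 1

def https_feature(url: str) -> int:
    # Rule 6: HTTPS in the scheme is legitimate, but "https" embedded in the
    # domain part (e.g. https-paypal.example.com) is a phisher trick.
    parts = urlparse(url)
    if "https" in parts.netloc:
        return -1
    return 1 if parts.scheme == "https" else -1

def domain_age_feature(age_in_months: float) -> int:
    # Rule 14: domains registered for six months or less are phishy.
    return -1 if age_in_months <= 6 else 1

print(https_feature("https://example.com"),        # 1
      double_slash_feature("http://a.com//evil"),  # -1
      domain_age_feature(3))                       # -1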

3.2 Training Module In this module, we constructed a neural net in TensorFlow using the Keras library. The dataset consists of 15,027 instances, and the 16 selected features are used to train the neural net. The neural net consists of three hidden layers and uses the Adam optimizer. It gave an accuracy of 90.03%. The trained model was saved in a JSON file to be used in the classifier module.
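A minimal Keras sketch of such a training module is given below. The layer widths, activations, epochs and file names are illustrative assumptions; the paper specifies only the three hidden layers, the Adam optimizer and the JSON serialization.

# A hedged sketch of the training module; layer sizes, activations, epochs
# and file names are assumptions, not the authors' exact configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(n_features: int = 16) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(32, activation="relu"),    # hidden layer 1
        layers.Dense(16, activation="relu"),    # hidden layer 2
        layers.Dense(8, activation="relu"),     # hidden layer 3
        layers.Dense(1, activation="sigmoid"),  # phishing vs. legitimate
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Typical usage, given a feature matrix X_train and labels y_train:
#   model = build_model()
#   model.fit(X_train, y_train, epochs=50, batch_size=32)
#   with open("model.json", "w") as f:
#       f.write(model.to_json())       # architecture saved as JSON
#   model.save_weights("weights.h5")   # weights saved separately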

3.3 Feature Collector When a URL is passed into the AntiPhisher framework, all of the above features are extracted automatically by the Feature Collector module, as seen in Fig. 1. The extracted features are stored in a Python array in the form of binary or ternary values. This array is sent as input to the Classifier module.

3.4 Classifier The trained neural net is loaded in this module to analyze the features collected by the Feature Collector module. The array containing the feature values is passed to the neural net for prediction, and the result is displayed on the user interface of the AntiPhisher framework as shown in Fig. 2. The URL in Fig. 2a was acquired from the PhishTank [14] website and viewed in Google Chrome as shown in Fig. 3a. This website was later blocked by Google Chrome, as shown in Fig. 3b, when the URL was added to the blacklist. We had installed the Netcraft anti-phishing tool on our system, but neither Google Chrome's Google Safe Browsing nor Netcraft could detect the phishing URL, because these tools rely mainly on blacklists. The URL in Fig. 2b is the Google website, which was correctly classified as legitimate. Thus, AntiPhisher is able to correctly classify websites into their respective classes and detects phishing websites that are not detected by tools like Google Safe Browsing and Netcraft.
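A sketch of this module, under the same assumptions as the training sketch above (file names and the example feature vector are illustrative), could look as follows.

# A hedged sketch of the classifier module: reload the JSON-serialized
# network and classify one 16-value feature vector from the Feature
# Collector. File names and the example vector are assumptions.
import numpy as np
from tensorflow.keras.models import model_from_json

with open("model.json") as f:
    model = model_from_json(f.read())
model.load_weights("weights.h5")

features = np.array([[1, -1, 0, 1, 1, -1, 0, 1,
                      1, 1, -1, 1, 1, -1, 1, 0]], dtype=float)
prob = float(model.predict(features)[0][0])
print("Phishing" if prob >= 0.5 else "Legitimate")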

Fig. 2 User interface of AntiPhisher: (a) phishing website detected by AntiPhisher; (b) legitimate website identified by AntiPhisher

Fig. 3 Phishing website when accessed on Google Chrome: (a) phishing URL displayed in Google Chrome; (b) same website blocked after some time

4 Results As seen above, our model performed better than the existing tools in use. In order to compare our model with other machine learning approaches, we implemented and evaluated them as well. The Receiver Operating Characteristic (ROC) curves of all these models are shown in Fig. 4 along with their Area Under Curve (AUC) values. A model with a higher True Positive (TP) rate and a lower False Positive (FP) rate is considered better trained. As seen in Fig. 4, the ROC curve for the proposed neural net shows the best results.
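Such a comparison can be produced with scikit-learn; the sketch below is an illustration, assuming the true labels and each model's predicted scores come from a held-out split of the dataset.

# A hedged sketch of the ROC/AUC comparison in Fig. 4; y_true and the
# per-model score arrays are assumed inputs from a held-out test split.
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

def plot_roc_comparison(y_true, scores_by_model):
    for name, scores in scores_by_model.items():
        fpr, tpr, _ = roc_curve(y_true, scores)
        plt.plot(fpr, tpr, label=f"{name} (AUC = {auc(fpr, tpr):.3f})")
    plt.plot([0, 1], [0, 1], linestyle="--", label="chance")
    plt.xlabel("False Positive rate")
    plt.ylabel("True Positive rate")
    plt.legend()
    plt.show()

# Usage: plot_roc_comparison(y_test, {"SVM": svm_scores,
#                                     "Proposed neural net": nn_scores})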


Fig. 4 ROC curve of all models along with AUC values

Table 1 Summary of machine learning classifiers

Techniques | Accuracy (%) | TP rate | FP rate
SVM | 81.70 | 0.958 | 0.314
Random forest | 87.75 | 0.784 | 0.133
Naïve Bayes | 81.81 | 0.995 | 0.396
Adaboost | 87.95 | 0.899 | 0.114
Logistic regression | 80.48 | 0.916 | 0.225
Proposed neural net | 90.03 | 0.933 | 0.099

The TP and FP rates obtained by our model are 0.933 and 0.099, respectively. Out of 15,027 instances, 158 were misclassified as phishing and 171 were misclassified as legitimate. Table 1 shows a summary of the accuracy, TP rate and FP rate of all the models. As can be seen, the proposed neural net performs better than the other techniques.

5 Conclusion In this paper, we developed a framework to analyze a website and determine whether it is phishing or not. We made certain modifications to the definitions of the rules that analyze these features, since, in the current scenario, some features have become invalid. By removing these features and modifying others, phishing URLs can be successfully detected. The automatic feature collector reduces the manual work of collecting features and makes the framework more efficient. The proposed neural net performed


better than the existing tools. In future, we would like to explore new features that can help identify a phishing website more accurately.

References
1. Dou Z, Khalil I, Khreishah A, Al-Fuqaha A, Guizani M (2017) Systematization of knowledge (SoK): a systematic review of software-based web phishing detection. IEEE Commun Surv Tutor 19(4):2797–2819
2. Sharma H, Meenakshi E, Bhatia SK (2017) A comparative analysis and awareness survey of phishing detection tools. In: 2017 2nd IEEE international conference on recent trends in electronics, information and communication technology (RTEICT), pp 1437–1442. IEEE, Bangalore
3. Shirazi H, Haefner K, Ray I (2017) Fresh-Phish: a framework for auto detection of phishing websites. In: 2017 IEEE international conference on information reuse and integration (IRI), pp 137–143. IEEE, San Diego, CA
4. Shrestha N, Kharel RK, Britt J, Hasan R (2015) High-performance classification of phishing URLs using a multi-modal approach with mapreduce. In: 2015 IEEE world congress on services, pp 206–212. IEEE, New York
5. Chang EH, Chiew KL, Sze SN, Tiong WK (2013) Phishing detection via identification of website identity. In: 2013 international conference on IT convergence and security (ICITCS), pp 1–4. IEEE, Macao
6. Ahmed AA, Abdullah NA (2016) Real time detection of phishing websites. In: 2016 IEEE 7th annual information technology, electronics and mobile communication conference (IEMCON), pp 1–6. IEEE, Vancouver
7. Barraclough P, Sexton G (2015) Phishing website detection fuzzy system modelling. In: 2015 science and information conference (SAI), pp 1384–1386. IEEE, London
8. Vrbančič G, Fister Jr I, Podgorelec V (2018) Swarm intelligence approaches for parameter setting of deep learning neural network: case study on phishing websites classification. In: 2018 association for computing machinery, ACM ISBN 978-1-4503-5489-9/18/06, pp 1–8. ACM, New York, USA. https://doi.org/10.1145/3227609.3227655
9. Shyni CE, Sundar AD, Ebby GSE (2018) Phishing detection in websites using parse tree validation. In: 2018 recent advances on engineering, technology and computational sciences (RAETCS), pp 1–4. IEEE, Allahabad
10. Singh P, Maravi YPS, Sharma S (2015) Phishing websites detection through supervised learning networks. In: 2015 international conference on computing and communications technologies (ICCCT), pp 61–65. IEEE, Chennai
11. Li X, Geng G, Yan Z, Chen Y, Lee X (2016) Phishing detection based on newly registered domains. In: 2016 IEEE international conference on big data (big data), pp 3685–3692. IEEE, Washington, DC
12. WHOIS homepage, https://www.whois.com/whois/. Accessed 26 April 2019
13. Alexa database, https://www.alexa.com/siteinfo. Accessed 26 April 2019
14. PhishTank homepage, https://www.phishtank.com/. Accessed 26 April 2019

Chapter 15

Overcoming the Security Shortcomings Between Open vSwitch and Network Controller

Milind Gupta, Shivangi Kochhar, Pulkit Jain, Manu Singh and Vishal Sharma

1 Introduction Software-defined Networking (SDN) [1] is a glide path towards cloud computing: a technique to manage the network and enhance its configuration, which further ensures efficient facilitation [2]. SDN separates the functionalities of the data plane and the control plane, whose interconnection requires certain algorithms or protocols; one such protocol is the OpenFlow protocol. Controller and switch communication in the nodal network is studied in this research. The working of the network is based on traffic routing by routers via switches. In such a communication, the data plane is separated from the control plane, and the control plane provides flow rules to the data plane on request when the data plane does not know how to handle new network packets [2]. On the security end, the protocol can be encrypted with Transport Layer Security (TLS), as it lies on top of the Transmission Control Protocol (TCP). In this work, we demonstrate a well-planned network using a few fundamental aspects of Software-defined Networking technology [3]. The flow


rules can be enforced by the control plane when required by the data plane dynamically, which helps us to efficiently control and monitor the network. This reactive mode can cause problems when many requests are sent from the data plane to the control plane: when too many requests arrive in a very short period of time, the control plane gets flooded by the data plane [4]. Since security is one of the biggest concerns and the need of the hour, our main idea is to establish SSL encryption in the OpenFlow protocol, which lacks such encryption by default. Moreover, one of the main shortcomings of OpenFlow is that it is missing authorization and authentication. Our approach makes sure that these aspects are addressed as well.

2 Literature Review Enghardt and INET in [4] emphasize the security concerns of such networks. They conclude that the security of the network can be enhanced significantly by concurrently capitalizing on the centralized view of the network and its programmability. In [5], the authors publish a comprehensive review of security-oriented research in SDNs. They focus on protecting the network through various techniques such as security configuration, threat detection, threat remediation and network verification, to name a few. A secure communication between Open vSwitch and the controller promises to solve numerous network problems and expand SDN's reach within the networking field [4]; as a matter of fact, however, security is not one of the areas where it excels. The separation of the control plane and the data plane in the nodal setup brought various advancements in terms of potential scalability, which further allowed a single controller to act for multiple devices. The single controller then stands as the omnipotent decision maker for the complete network. Segregation of the control and data planes is an emerging paradigm that deals with separating the control logic from the underlying components, as explained in [6]. The authors discuss the main concepts used in such a network, analyze the functionality of a centralized controller and discuss vertical integration. It is imperative that networks enhance security, which includes prevention, detection and reaction to threats; this is made possible by the logical centralization of network intelligence, which enables innovative applications and services [5]. As mentioned in [6], a series of control plane attacks could easily compromise an entire network. The authors of [7] have demonstrated an efficient attack against networks using knowledge of fundamental aspects of SDN technology. The goal of the proposed fingerprint attack is to demonstrate that such networks bring in security issues, and these issues need to be tackled for smooth functioning of the network. The vigorous activity and progress of programmable networks based on OpenFlow brings various new security issues and challenges. These may invite various attacks on our networks such as DoS, traffic diversion, ARP spoofing attacks, network manipulation, etc. [11]. The segregation of the data and control planes also eases system programmability [8].


The privilege abuse problem arises in OpenFlow controllers due to the involvement of third-party apps; to reduce the risk, the authors of [9] propose a method to minimize the privileges of OpenFlow apps, a fine-grained permission system called PermOF. This segregation and association of the two planes allows a more effective path, which leads to smooth data transmission. Flow rules determine the forwarding behavior of networking elements. Klöti et al. in [10] discuss conflicting flow rules in such networks and present the design of FortNOX, which can produce enforceable flow constraints; emphasis is laid on detecting and resolving such conflicts. Chellan et al. [13] propose the introduction of a MiddleBox Intrusion Detection System, paired with the Open vSwitch via a mirror port, which is meant to detect and tackle different kinds of attacks on the nodes involved in the network. The authors [13] also use TLS to provide security and encryption to the network and the Open vSwitch.

3 Existing Problems The vulnerabilities of an OpenFlow-based network are as follows [3]: Authentication and Authorization. These ensure that the data or information being received is as it was originally transmitted by the sender. Any compromised network element can gain access to and control of any network entity in a sabotaging capacity. The OpenFlow protocol does not require authentication and authorization between network switches and the controllers, and has not made TLS mandatory. Association. Also known as data mapping to subsequent access points or stations, association is responsible for the distribution of data packets by establishing a link. Thus, improper linking or the absence of routes between various nodes could lead to transmission failure. Data Confidentiality. A robust network must provide data confidentiality. It allows users to trace the data back to its confidential owner, ensures complete privacy, and allows a private form of transmission between the various nodes in the setup. Handover. Sometimes, when the associated access points are out of reach, the control plane may direct the switch to roam to another available point, allowing transmission via a different route. The absence of such handover could lead to packet dropping and transmission failures. The aforementioned vulnerabilities create serious potential for a variety of attacks and compromised scenarios in a Software-defined Network that implements the OpenFlow protocol.


4 Proposed Methodology and Implementation Bob Lantz et al. in [7] present Mininet, a useful technology for implementing networks on a laptop. As identified before, the shortcomings and vulnerabilities of poor network communication protocols can lead to manipulation and, ultimately, jeopardy of the system when controlled by a malicious entity. In its essence, the OpenFlow protocol has a basic security flaw: the lack of mandatory authentication of the involved switches by the controller, and the lack of a compulsory requirement for the switches to authorize the controller that is trying to gain access. This makes the flow of data between the control plane and the data plane of the network insecure and unreliable. Because this vulnerability exists in the OpenFlow protocol itself, any particular implementation of an SDN using OpenFlow will inherently have it, making the entire implementation prone to attacks, network compromises and network damage. POX is a network controller that uses Python scripting for network applications [14]. Its real application is to turn dumb networks into functional ones, which can then be implemented or experimented upon. Since POX does not have a Graphical User Interface (GUI), it is used together with the Mininet emulator to create different topologies. Our research aims at devising a mechanism that can curb this integral shortcoming in the OpenFlow protocol and serves as an appropriate measure to introduce safety and reliability in an OpenFlow network, by invoking SSL in the interface between the controller and the system switches (Fig. 1).

Fig. 1 Network inhibiting central controller and Open vSwitch via OpenFlow [12]


For our implementation, we have set up a virtual SDN environment using Mininet on Linux (Ubuntu 64-bit) in Oracle VirtualBox (virtual machine), Xming (X server), and PuTTY (SSH-enabled client) with X11 forwarding capabilities. Once the virtual machine is set up, we installed the POX controller (a networking software platform in Python that works as a programmable OpenFlow controller with OVS support), and ultimately a Python script using the Mininet Python API is implemented to simulate a simple network topology containing a controller, a switch, and 4 hosts, using the OpenFlow protocol between the POX controller and the Open vSwitch. On analyzing the packets transferred between the above SDN nodes using Wireshark, studying the interface configuration, and checking the ping status between the nodes, it is observed that during the switch and controller handshake no authentication or authorization is required, and Open vSwitch uses an unencrypted port. This research paper proposes a solution to the aforementioned inherent flaw of the OpenFlow protocol and implements the methodology of programming an ovs-controller with an SSL security scheme, by first setting up the SSL parameters for Open vSwitch and the ovs-controller, and then running a Python script in Mininet to implement the same network topology, but now with SSL. The initial step in the said procedure is to set up keys and SSL parameters for both Open vSwitch and the ovs-controller. We configure the controller with the key-value pairs and certificates by placing the sc-privkey.pem, sc-cert.pem, and cacert.pem files in the respective directories of the Open vSwitch. Then the controller is run with SSL on the specific port:

sudo ovs-controller -v pssl:6633 \
  -p /etc/openvswitch/ctl-privkey.pem \
  -c /etc/openvswitch/ctl-cert.pem \
  -C /var/lib/openvswitch/pki/switchca/cacert.pem

The Python script using the Mininet Python API now invokes the topology in Mininet with the ovs-controller and Open vSwitch, and sets the controller to the SSL security scheme:

s1.cmd('ovs-vsctl set-controller s1 ssl:192.168.56.101:6633')
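For reference, a minimal sketch of such a topology script is given below, assuming the controller runs at 192.168.56.101:6633 as above; host names and the surrounding structure are illustrative, not the authors' exact script.

# A hedged Mininet sketch of the experiment: one remote controller, one
# Open vSwitch and four hosts, with the switch then pointed at the
# controller over SSL. IP/port follow the commands above.
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch

net = Mininet(controller=None, switch=OVSSwitch)
c0 = net.addController("c0", controller=RemoteController,
                       ip="192.168.56.101", port=6633)
s1 = net.addSwitch("s1")
for i in range(1, 5):
    net.addLink(net.addHost(f"h{i}"), s1)

net.start()
# Switch to SSL instead of the default unencrypted TCP connection.
s1.cmd("ovs-vsctl set-controller s1 ssl:192.168.56.101:6633")
net.pingAll()
net.stop()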

5 Results We observe that the ovs-controller supports SSL through a particular port (6633), as seen in Fig. 2. Thus, the aforementioned topology is implemented using the Python script, and the following response is observed in Mininet (Fig. 3).

Fig. 2 OpenFlow configuration in Mininet

Fig. 3 Nodal topology in Mininet

6 Conclusion OpenFlow networks are a breakthrough concept in Software-defined Networking that promotes the centralization of flow control. Along with its sheer advantages of faster flow decisions, better error and flow control, and smart, central switching logic, the OpenFlow protocol also comes with a host of compromises in the security domain. The authors of this paper have identified these shortcomings and vulnerabilities, and their potential for security attacks in such Software-defined networks. The authors have also identified the main loophole in the OpenFlow protocol: it does not mandate authentication and authorization for the flow of packets from the control plane to the data plane (i.e., between the switches and the controller) and vice versa. Ultimately, the authors have devised and implemented, using a Python script, a mechanism that solves the aforementioned problem by making the network elements, such as the controller and switch, use OpenFlow over an SSL security scheme, thus introducing security and reliability to the Software-defined Networking domain.

References
1. Sharma V, Bhatia S, Nathani K (2018) Review on software-defined networking: architectures and threats. In: Information systems design and intelligent applications, pp 1003–1011
2. Sezer S et al (2013) Are we ready for SDN? Implementation challenges for software-defined networks. IEEE Commun Mag 51(7):36–43
3. Scott-Hayward S, O'Callaghan G, Sezer S (2013) SDN security: a survey. In: 2013 IEEE SDN for future networks and services (SDN4FNS). IEEE
4. Enghardt T, INET FG (2012) Authentication, authorization and mobility in OpenFlow-enabled enterprise wireless networks. Master project report, WS 11:12
5. Kandoi R, Antikainen M (2015) Denial-of-service attacks in OpenFlow SDN networks. In: 2015 IFIP/IEEE international symposium on integrated network management (IM). IEEE
6. Kreutz D, Ramos FMV, Verissimo P, Rothenberg CE, Azodolmolky S et al (2015) Software-defined networking: a comprehensive survey, version 4.0. IEEE, 103
7. Lantz B, Heller B et al (2010) A network in a laptop: rapid prototyping for software-defined networks. In: HotNets-IX: proceedings of the 9th ACM SIGCOMM workshop on hot topics in networks
8. Akhunzada A et al (2016) Secure and dependable software defined networks. J Netw Comput Appl 61:199–221
9. Porras P et al (2012) A security enforcement kernel for OpenFlow networks. In: Proceedings of the first workshop on hot topics in software defined networks. ACM
10. Klöti R, Kotronis V, Smith P (2013) OpenFlow: a security analysis. In: ICNP, vol 13
11. Li W, Meng W, Kwok LF (2016) A survey on OpenFlow-based software defined networks: security challenges and countermeasures. J Netw Comput Appl 68:126–139
12. Stallings W (2013) Software-defined networks and OpenFlow. Internet Protoc J 16(1)
13. Chellan N, Tejpal P, Hari P, Neeralike V (2016) Enhancing security in OpenFlow. Capstone research project proposal, University of Colorado Boulder
14. Kaur S, Singh J, Ghumman NS (2014) Network programmability using POX controller. In: International conference on communication, computing & systems (ICCCS)

Chapter 16

An Incremental Approach Towards Clustering Short-Length Biological Sequences

Neeta Maitre

1 Introduction With the developments in biotechnology, more and more biological sequence information has been generated [1]. Bioinformatics plays a significant role in the development of the agricultural sector, agro-based industries, the utilization of agricultural by-products, and better management of the environment [2]. In the case of biological data, data mining is preferred over conventional data analysis because biological data is tremendous in size and vast in detail, and its dimensionality is very high. Biological information is complex in nature and hence needs new, sophisticated applications to make full use of this data; traditional data analysis has proven limited in handling it. Biological data mining has two facets, the supervised approach and the unsupervised approach, and covers data mining theory, its underlying concepts and its use in current biological and medical applications. The explosive growth in biological data opens up a new challenge: biomedical data, proteomics and genomics call for usage of this extensive research data and the patterns associated with it. Bioinformatics addresses the basics of biological data mining and the opportunities computer science offers. Various techniques are available to cluster biological sequences, such as CLOBB (CLuster On the Basis of BLAST similarity) and hierarchical clustering. The majority of clustering algorithms are based on BLAST (Basic Local Alignment Search Tool), which is heuristic in nature [3]. In these algorithms, a sequence is allocated to the first similar cluster rather than the most similar cluster. Algorithms such as CLOBB are specifically designed for expressed sequence tags (EST). Comparative genomics can also be helpful in this regard [4]. The approach presented here is similar to that of MidClustpy [5], which can cluster short-length biological sequences and was validated on partial-length expressed sequence tags (EST).


2 Methodology The algorithm is incremental in nature and matches sequences in such a way that the allocation of sequences to cluster representatives is seamless. It creates pairs of cluster representatives and allotted sequences; the number of cluster representatives dictates the total number of clusters formed. It overcomes the disadvantage of incorrect allocation of sequences to cluster representatives depending on the order in which sequences are fetched, and gives uniform cluster formation in every run. It also allows the user to choose a metric for the similarity threshold. The algorithm implementation consists of the following steps, as shown in Fig. 1. 1. Short-length sequences are given as input in FASTA format. The majority of biological sequences are in FASTA format, so the algorithm facilitates using sequences as they are. 2. Sequences are then arranged in descending order of their length. A normalized distance matrix is generated from these sequences by considering the highest value for a given row. 3. The value of the mean or median is computed based upon the user's choice. This value is taken as the threshold for cluster formation. 4. The first sequence of the list is considered the already-clustered sequence, and the same is taken as the first cluster representative. 5. A sequence can be allotted to an already existing cluster, or a new cluster can be created, based upon the similarity threshold (the value of mean or median computed in Step 3). a. If the similarity threshold is greater than or equal to the highest value for the given vector, the sequence is merged with the current cluster representative and added as a candidate sequence for the current cluster.

Fig. 1 Incremental approach for clustering small sized biological sequences based on Maitre, Kshirsagar (2019) [5]


b. If the similarity threshold is smaller than the highest value, a new cluster is created and the sequence is added as a new representative in the cluster_repr file. 6. Step 5 is repeated for all the sequences under consideration. The algorithm outputs a FASTA file of cluster representatives associated with their respective candidate clusters.
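The steps above can be condensed into a short Python sketch. This is a minimal illustration, assuming Biopython's pairwise2 global alignment for the similarity score and a hypothetical input file name; it is not the authors' exact implementation.

# A hedged sketch of the incremental clustering above; pairwise2 scoring,
# the file name and the threshold computation are assumptions.
from statistics import mean, median
from Bio import SeqIO, pairwise2

def similarity(a, b):
    # Global alignment score normalized by the longer sequence length.
    score = pairwise2.align.globalxx(str(a.seq), str(b.seq), score_only=True)
    return score / max(len(a.seq), len(b.seq))

def cluster(fasta_path, metric="mean"):
    # Steps 1-2: read FASTA records, longest first.
    records = sorted(SeqIO.parse(fasta_path, "fasta"),
                     key=lambda r: len(r.seq), reverse=True)
    # Step 3: threshold as mean or median of pairwise similarities.
    sims = [similarity(a, b) for i, a in enumerate(records)
            for b in records[i + 1:]]
    threshold = mean(sims) if metric == "mean" else median(sims)
    # Step 4: the first sequence seeds the first cluster.
    reps = [records[0]]
    clusters = {records[0].id: [records[0]]}
    # Steps 5-6: attach each sequence to the MOST similar representative,
    # or promote it to a new representative.
    for rec in records[1:]:
        best = max(reps, key=lambda r: similarity(r, rec))
        if similarity(best, rec) >= threshold:
            clusters[best.id].append(rec)
        else:
            reps.append(rec)
            clusters[rec.id] = [rec]
    return clusters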

3 Implementation Mining biological and biomedical data poses a challenge and is an interesting trend [6]. The dataset used for implementation needs to consist of short-length biological sequences. ESTs are single-pass reads from randomly selected cDNA clones [7]; they are versatile and have multiple applications [8]. ESTs were chosen from the cotton dataset. ESTs for the following species were downloaded from the National Center for Biotechnology Information (NCBI):

• Gossypium herbaceum
• Gossypium raimondii
• Gossypium arboreum
• Gossypium hirsutum

These sequences are then subjected to the clustering algorithm with mean and median as the threshold similarity metrics. The algorithm is implemented in Python. Python provides a user-friendly environment and has a large library of functionalities that help in managing a large dataset of biological sequences. Biopython provides a module that efficiently handles biological datasets, and its in-built functionalities can be readily adopted for similarity computation; the pairwise alignment used for similarity computation helps to reach an optimal solution. The sequences are stored in data frames provided by the Pandas module. Biopython and pandas are used mainly for the following reasons [9]:

• Managing the FASTA biological database file format.
• Creating reusable classes and modules.
• Providing access to and utilizing common programs like BLAST.
• Iterating through records one by one using indices and dictionary interfaces.

4 Result and Analysis The algorithm was implemented on approximately 1000 EST sequences. These sequences are clustered using mean and median as the threshold criteria. A variable number of sequences is taken for comparison, and cluster formation is done using


Fig. 2 Indicative graph showing variation in time for cluster formation using mean and median as a threshold for short sequences

mean and median both as the similarity threshold. The results are then compared in terms of the number of clusters created and the time required for creating them. Figure 2 depicts an indicative graph showing the variation in time for cluster formation using mean and median as thresholds. The parameters for performance monitoring are the number of sequences taken into consideration and mean/median as the threshold criterion. The number of sequences considered for clustering ranges from 4 to 650. The number of clusters formed is also equal in a majority (almost 95%) of the cases. The example graphs shown in Fig. 2 indicate that the time required with mean and with median as the threshold is almost the same. This shows that the clustering of sequences is, in the majority of cases, unbiased with respect to the choice of mean or median as the threshold value. The algorithm is efficient, as it allocates sequences to the most similar cluster: the majority of clustering algorithms allocate a sequence to the first cluster that satisfies the threshold similarity condition, unlike the approach used in this research paper. It uses pairwise similarity at the local level and avoids a heuristic approach, which may lead to different cluster associations in consequent iterations. Pairwise similarity inherently uses the Smith-Waterman algorithm, which leads to an optimal solution due to its dynamic programming nature. The algorithm can be compared with the clustering method CLOBB (CLuster On the Basis of BLAST similarity) [7], which is specifically meant for ESTs and is scripted in Perl. The feature-wise comparison of the proposed algorithm with CLOBB is shown in Table 1.


Table 1 The feature-wise comparison of the proposed algorithm with CLOBB

Parameter | Proposed algorithm | CLOBB
Dynamic programming approach | Yes | No
Incremental approach | Yes | Yes
Programming language used | Python | Perl
The choice to select mean/median as a threshold | Yes | No
High scoring pair based clustering | No | Yes

5 Applications There has been a great explosion of genomic data in recent years [10]. Patterns in biological sequences can be explored with the help of data mining. Clustering, being one of the main tasks in data mining, can contribute to defining subgroups of a population. Clustering is an unsupervised machine learning approach that creates groups with similar characteristics. The clusters thus formed can be utilized in predicting diseases in the agriculture domain: genomic sequences related to plant diseases can be mapped against cluster representatives to predict the probability of disease occurrence. Similarly, a new sequence can be mapped against genetically modified sequences with a similarity threshold; meeting the threshold may reveal occurrences of genetic modification. Applications in the field of gene therapy can also be supported by clustering genetic sequences.

6 Conclusions The unsupervised learning approach of data mining can be achieved effectively through clustering. The clustering algorithm discussed in this paper is easy to use and simple in nature. The validity of its performance is tested against short-length biological sequences, especially sequences from expressed sequence tags. It gives insight into biological data and can be effectively used in the field of genetics for gene prediction and gene therapy. The approach is extensible and can be integrated with a suitable query engine to perform quick similarity checking. Cluster formation is a one-time activity, and the formed clusters can be reused with other query sequences to find similarity. The algorithm may find utility in the field of agriculture in identifying plant diseases and increasing crop yield through genetic improvement for withstanding adverse conditions. The work can further be used for phylogenetic analysis.


References
1. Kushwaha UKS et al (2017) Role of bioinformatics in crop improvement. Glob J Sci Front Res: D Agric Vet 17(I)
2. Wei D et al (2012) A novel hierarchical clustering algorithm for gene sequences. BMC Bioinform
3. Camacho C et al (2010) BLAST help. NCBI
4. Manali KS et al (2008) Data mining strategy for gene prediction with special reference to cotton genome. Cotton Sci 20
5. Maitre N, Kshirsagar M (2019) MidClustpy: a clustering approach to predict coding region in a biological sequence. In: Nayak J, Abraham A, Krishna B, Chandra Sekhar G, Das A (eds) Soft computing in data analytics. Advances in intelligent systems and computing, vol 758. Springer, Singapore
6. Han J, Pei J, Kamber M (2011) Data mining: concepts and techniques. Elsevier
7. Parkinson J et al (2002) Making sense of EST sequences by CLOBBing them. BMC Bioinform
8. Nagaraj SH et al (2006) A hitchhiker's guide to expressed sequence tag (EST) analysis. Briefings Bioinform 8
9. Chang J, Chapman B, Friedberg I, Hamelryck T, De Hoon M, Cock P, Antao T, Talevich E (2010) Biopython tutorial and cookbook
10. Lee JK et al (2009) Data mining in genomics. Clin Lab Med

Chapter 17

A Novel Approach to Localized a Robot in a Given Map with Optimization Using GP-GPU

Rohit Mittal, Vibhakar Pathak, Shweta Goyal and Amit Mithal

1 Introduction Mapping is the process that defines the boundaries of the terrain of the path on which the robot navigates, whereas localization is the process of ascertaining where the robot is and identifying the objects near it, with their relative positions in the map. Hence, mapping and localization are important parameters in autonomous robotics. Some research in this area has led to optimization of localization and object identification in a known map using a sensory system, but it has not focused on a multi-sensing approach covering objects such as random 3D open objects. In this context, we present a novel approach to SLAM (Simultaneous Localization and Mapping) that considers all or a few physical parameters for better adaptation of the robot to its surroundings. Errors in localization and identification have been addressed by few researchers in analyses of probable localization errors in the course of the robot's movement in a defined map. Previously, researchers had presented work on geographical parameters and IoT-based sensors [1]. This paper presents the results of various SLAM algorithms, viz. Gmapping, MonoSLAM and FullSLAM. The objective is to analyze the localization error probable in the course of navigation. The result of the experiment leads to the identification of the


algorithm having the least Sectorial Error Probability (SEP), which is then considered the candidate algorithm for this research. After some successful runs of these algorithms, the authors identified the candidate algorithm for navigation of the robot and identification of objects, as the robot has to localize itself in the experimental pathway while avoiding collision with fixed objects. To enhance the functionality and computational power of the identified algorithm, a GPU architecture that harnesses parallel processing is used. We use AMD's Graphics Core Next (GCN) architecture because of its parallelism, which lets the GPU architecture work in conjunction with the CPU architecture. A better approach to object identification is employed using the GP-GPU architecture, which is also used for parallel processing of SLAM's predicted learning algorithm [2, 3]. Parallel processing of SLAM-based algorithms such as DPG-SLAM and RTAB-MAP was not discussed by previous authors [4, 5]. In this paper, the next section describes the previous work done by researchers in the aforesaid area. The following section presents the analysis of sectorial error probability for various algorithms, the next section discusses the optimization done on the candidate algorithm, and finally the result set and discussion are presented.

2 Previous Work In early work, researchers presented cloud computing as a potential area in robotics, classifying the area into various sub-issues, i.e., API stack management, robotic sensor management, network computing, storage management, hybrid cloud, infrastructure and security management. Some work has been reported on IoT-based sensors and geographical parameters for navigation of robots on various surfaces [1]. Other work reported experiments on SLAM (Simultaneous Localization and Mapping) based on supervised navigation of a robot with the help of a Bluetooth module; that work focused on SLAM and various physical parameters, and reported the analysis of the impulse generated in the wheels of the robot when navigating on different surfaces like marble flooring, glaze and boulder tile. Most of the previous work on SLAM techniques like iSAM (Incremental Smoothing and Mapping) demonstrates incremental updates of data association, whether online or offline, and solves the SLAM problem using a smoothing-based rather than a filter-based approach [4]. Some journals have reported on the FastSLAM algorithm, which observes landmarks (point locations) that are independent and gives the robot pose. Previous researchers have also reported on particle filter based approaches, which evaluate hypotheses of a probability distribution over the space of maps; particle filter based approaches do not make any assumption about the nature of the noise. A further finding is that their computational complexity increases exponentially with the size of the environment [4, 6]. The Dynamic Pose Graph (DPG-SLAM) approach is used to identify incorrect map measurements by incorporating the time dimension in the mapping process, but it is limited by the quality of the pose estimates [1, 4, 7]. Most of the previous work on the Real-Time Appearance-Based Mapping (RTAB-Map) algorithm uses OpenCV to


extract features from images to obtain visual words; it generates a 3D cloud map and 2D data from a Kinect camera [4, 5]. After reviewing the previous work, the authors found a gap in the optimization of SLAM algorithms and in sectorial error probability analysis. The work proposed in this paper is based on experiments conducted on SLAM techniques like Gmapping, MonoSLAM and FullSLAM for finding the sectorial error probability at the various turn angles at which the robot navigates.

3 Experiments on Sectorial Error Probability Using Standard SLAM Algorithms Experiments were conducted to analyze the sectorial error probability in the course of navigation of the robot, with a view to finding the SLAM technique with the least SEP. SEP (Sectorial Error Probability) is the localization probability between boundaries when the robot moves in its basic poses. The analysis of these experiments has been carried out using SLAM techniques like Gmapping, MonoSLAM, and FullSLAM. In Table 1, the Gmapping-based Extended Kalman Filter SLAM algorithm uses a tracked robot for localization, estimating the position error of the robot along with map generation and pose estimation. The MonoSLAM algorithm performs probabilistic feature-based mapping, estimating the current scale of operation and keeping track of features in real time; in the MonoSLAM and FullSLAM algorithms, however, the current position of the robot is not calculated.

Table 1 Analysis on standard SLAM algorithms

3 Experiments on Sectorial Error Probability Using Standard SLAM Algorithms Experiments were conducted to analyze sectorial error probability in course of navigation of robot with view filed SLAM with least SEP. SEP (Sectorial Error Probability) is localization probability in between boundaries when robot moves in its basic poses. The analysis of these experiments has been carried using SLAM techniques like Gmapping, MonoSLAM, and FullSLAM. In the above Table 1, the Gmapping based Extended Kalman Filter, SLAM algorithm uses tracked robot for localization which estimates position error of robot with map generation and pose estimation. Mono SLAM algorithm is probabilistic feature-based mapping which estimates current scale of operation and keeps track of features in real time, but in MonoSLAM and FullSLAM algorithm current position of robot will not be calculated. When robot gets navigated, it has to localize itself in experimental pathway, then to avoid Table 1 Analysis on standard SLAM algorithms

Sectorial turn angle

Gmapping (EKF)

Mono SLAM

Full SLAM

0°–15°

3–5 cm

2–6 cm

3–8 cm

15°–30°

2.6–4.5

2.6–4.5

3.5–4

30°–45°

2.5–4.2

2.5–4.2

3.3–4.5

45°–60°

2.1–4.1

2.1–4.1

4–5.3

60°–75°

1.9–4

1.9–5

4.2–5.1

75°–90°

1.7–3.8

1.7–5.2

4.4–5.4

90°–105°

1.6–3.6

3.6–4.2

4.5–6

105°–120°

1.5–3.4

3.5–4

5.2–6.1

120°–135°

1.3–3.2

3.3–4.5

1.1–3.1

135°–150°

1.1–3

1.1–3

1.3–3.2

150°–165°

1–2.8

2.1–2.8

2.1–4.1

165°–180°

0.9–2.6

0.9–2.6

1.9–4

Average

1.7–3.68

2.27–4.17

3.2–4.9


When the robot navigates, it has to localize itself in the experimental pathway and avoid collision; the robot is therefore navigated at various angles, and during navigation the probability of error in various sectors is analyzed for the various algorithms. For the 0°–15° turn angle, the error probability for Gmapping using the Extended Kalman Filter varies from 3 to 5 cm, for the MonoSLAM algorithm it is 2–6 cm, whereas for the FullSLAM algorithm it is 3–8 cm. For the 15°–30° turn angle, the error probability for Gmapping using the Extended Kalman Filter varies from 2.6 to 4.5 cm, for the MonoSLAM algorithm it is 2.6–4.5 cm, whereas for the FullSLAM algorithm it is 3.5–4 cm. From the above results it is concluded that the distance between the boundaries decreases, i.e., the probability of error is minimized, as the sectorial turn angle increases, and that the Gmapping (Extended Kalman Filter) algorithm, the predicted learning algorithm, gives the best results for the sectorial error probability. As per our objective, the candidate algorithm obtained is the Gmapping-based (EKF) algorithm.

4 Optimization of Candidate Algorithm Using Shaders and Pre-Pinned Buffers For the navigation of the robot, a SIMD-based GPU is required in many robotic applications using the AMD-based OpenCL architecture. In this paper, the predicted learning algorithm is used to optimize the sectorial results when run on GCN using shaders: in the case of AMD Radeon, the shader model works on groups of shader inputs as a single processing unit. The shader instructions are fetched, decoded and executed for a group of inputs, and in the baseline architecture the inputs that form a group are processed in parallel. The AMD architecture is better at rendering texture details and has better parallel programmability than the Nvidia architecture, as its floating-point compute performance is higher; shaders and pre-pinned buffers are used through OpenCL, which results in an increase in parallelism. When instructions are encoded in a compute shader, performance scales with the thread group size used in the GPU architecture. When data is transferred from memory to the GPU, the pages get locked in memory, and the cost of this pinning increases in proportion to the memory size; hence it is advantageous to use pre-pinned buffers. In AMD accelerated parallel processing, the implementation is done through OpenCL [8–10]. These buffers are pinned at the time of creation and can be used to achieve peak interconnect bandwidth; when such a buffer is used in a kernel, a cached copy is created within the device. Shader and pinned buffer performance depends on the instructions in the GPU architecture: as the number of instructions decreases, parallelism increases.
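A minimal sketch of the pre-pinned buffer idea in pyopencl is given below, assuming an OpenCL runtime is available (AMD APP in the authors' setup); buffer sizes and the staging pattern are illustrative, not the authors' implementation.

# A hedged pyopencl sketch: ALLOC_HOST_PTR requests page-locked (pinned)
# host memory from the OpenCL runtime, so host-to-device copies can run
# close to peak interconnect bandwidth. Sizes are illustrative.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

host_data = np.arange(1 << 20, dtype=np.float32)
# Pre-pinned staging buffer, created once and reused across transfers.
pinned = cl.Buffer(ctx, mf.READ_WRITE | mf.ALLOC_HOST_PTR,
                   size=host_data.nbytes)
device = cl.Buffer(ctx, mf.READ_WRITE, size=host_data.nbytes)

cl.enqueue_copy(queue, pinned, host_data)  # host -> pinned staging area
cl.enqueue_copy(queue, device, pinned)     # pinned -> device (fast DMA path)
queue.finish()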


Fig. 1 Object identification on sensory system [11]

5 Implementation Mechanism for Object Identification In this paper, the authors have attached a SONAR and an IR sensor to an Arduino board, which helps in identifying the object. For this, the robot navigates the experimental pathway communicating at a 9600 baud rate, i.e., bits per second, through the UART (Universal Asynchronous Receiver Transmitter), a physical circuit present in the microcontroller. Initially the baud rate was 11500, but this did not match the Arduino Integrated Development Environment and the Arduino UNO board, so a 9600 baud rate was used. Through the port monitor we can see when the port is opened and when it is closed; it also shows the output results, i.e., the data received through the aforesaid circuit (Fig. 1).
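On the host side, such a serial link can be read with pyserial, as in the minimal sketch below; the port name is an assumption and varies by machine.

# A hedged host-side sketch: open the Arduino's UART at 9600 baud and read
# the sensor values it reports. "/dev/ttyUSB0" is an assumed port name.
import serial

with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1) as ser:
    for _ in range(10):
        line = ser.readline().decode(errors="ignore").strip()
        if line:
            print("sensor reading:", line)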

6 Results and Discussion After executing the code with the aforesaid connections, it can be seen that there is variation between the observed and measured findings in the case of fixed obstacles. For obstacles of rectangular shape, the obstacle's presence is not detected at 5.1 cm or after 12.3 cm, as shown in Table 2. Over 12 successful runs, the sensor values for each distance shown in Table 2 differ: as the distance increases, the sensor value changes.


Table 2 Result set for obstacle in rectangular shape

S. No. | Measured/Actual distance (cm) | Obstacle distance observed (cm) | Obstacle detection
1 | 5.1 | 5.8 | No
2 | 6.3 | 6.5 | Yes
3 | 8.2 | 8.9 | Yes
4 | 10.4 | 11.2 | Yes
5 | 12.3 | 13.5 | Yes
6 | 15.7 | 16.6 | No
7 | 17.8 | 18.6 | No
8 | 19.1 | 19.9 | No
9 | 21.8 | 22.5 | No
10 | 24.9 | 25.7 | No
11 | 25.8 | 26.4 | No
12 | 27 | 27.3 | No

After navigation of the robot along the experimental pathway using the candidate algorithm, its computational data is depicted in Table 3, which shows that the obstacle is not so profound when the measured distance is small; after some distance the obstacle can easily be detected in the case of a cylindrical obstacle, but when the actual distance increases further (15.7–27 cm), the infrared sensor bound to the robot is so sensitive that the object is not detected. Hence the obstacle is found by the robot only three times. The results in Table 4 show that in the presence of obstacles of conical shape, as the infrared sensor (bound to the robot) is very sensitive, light easily passes through the conical shape, so the sensors are not able to detect the object.

S. No. | Measured/Actual distance (cm) | Obstacle distance observed (cm) | Obstacle detection
1.  | 5.1  | 5.8  | No
2.  | 6.3  | 6.5  | No
3.  | 8.2  | 8.9  | Yes
4.  | 10.4 | 11.2 | Yes
5.  | 12.3 | 13.5 | Yes
6.  | 15.7 | 16.6 | No
7.  | 17.8 | 18.6 | No
8.  | 19.1 | 19.9 | No
9.  | 21.8 | 22.5 | No
10. | 24.9 | 25.7 | No
11. | 25.8 | 26.4 | No
12. | 27   | 27.3 | No


Table 4 Result set for obstacle in conical shape

S. No. | Measured/Actual distance (cm) | Obstacle distance observed (cm) | Obstacle detection
1.  | 5.1  | 5.8  | No
2.  | 6.3  | 6.5  | No
3.  | 8.2  | 8.9  | No
4.  | 10.4 | 11.2 | No
5.  | 12.3 | 13.5 | No
6.  | 15.7 | 16.6 | No
7.  | 17.8 | 18.6 | Yes
8.  | 19.1 | 19.9 | No
9.  | 21.8 | 22.5 | No
10. | 24.9 | 25.7 | No
11. | 25.8 | 26.4 | No
12. | 27   | 27.3 | No

The results in Table 4 show that, for conical obstacles, the infrared sensor mounted on the robot is so sensitive that its light passes straight through the conical shape, and the sensor is generally unable to detect the object; only at one particular measured distance is the object detected. Hence, after executing the project several times (12 runs here), Table 4 shows the sensor detecting the object only once. It is therefore concluded that obstacle detection is most reliable when the obstacle is rectangular, with four detections. Reviewing previous work on SLAM techniques such as iSAM, DPG-SLAM, real-time mapping, FastSLAM, and cloud robotics [1, 4, 6, 7], the authors found a gap in detecting objects of various shapes (rectangular, cylindrical, and conical), as summarized in Table 5.

Table 5 Comparison among previous and proposed SLAM techniques

SLAM techniques used previously by authors | Proposed SLAM technique used
iSAM, DPG-SLAM | Gmapping (Extended Kalman Filter)
Real-time-based mapping, FastSLAM | Mono SLAM
Cloud robotics | Full SLAM

Optimized SLAM technique: Gmapping (Extended Kalman Filter) minimized the sectorial error probability; running the candidate algorithm for different obstacle shapes shows that obstacle detection is best when the obstacle is rectangular.


7 Conclusion and Future Scope

It is concluded that the candidate algorithm can acquire the robot's sensor data with a focus on obstacle presence in different physical environments. However, the sensory system used by the authors in this paper is not optimized for objects whose surfaces have different physical parameters, such as reflective surfaces, oblong reflection, right-angle reflection, and non-line-of-sight (LOS) reflection, which leads to the non-detection of some random 3D open objects. For such cases the sector error probability (SEP) needs to be improved, since SEP changes with the surface type, which in turn would yield a better core architecture.

References

1. Mittal R, Pathak V et al (2018) A review of robotics through cloud computing. J Adv Robot
2. http://haifux.org/lectures/267/Introduction-to-GPUs.pdf
3. https://www.boston.co.uk/info/nvidia-kepler/what-is-gpu-computing.aspx
4. Dhiman NK, Deodhare D et al (2015) Where am I? Creating spatial awareness in unmanned ground robots using SLAM: a survey
5. Balasuriya BLEA, Chathuranga BAH et al (2016) Outdoor robot navigation using Gmapping based SLAM algorithm. In: 2016 Moratuwa engineering research conference
6. Apriaskar E, Nugraha YP et al (2017) Simulation of simultaneous localization and mapping using hexacopter and RGBD camera. In: 2017 2nd international conference on automation, cognitive science, optics, micro electro-mechanical system, and information technology (ICACOMIT)
7. Frese U, Hirzinger G (2001) Simultaneous localization and mapping—a discussion
8. Fatahalian K (2012) How GPU works
9. AMD accelerated parallel processing guide.pdf
10. Najam S, Ahmed J et al (2019) Run-time resource management controller for power efficiency of GP-GPU architecture. IEEE Access 7
11. https://armkeil.blob.core.windows.net/developer/…/pdf/…/Mali_GPU_Architecture

Chapter 18

Area-Efficient Splitting Mechanism for 2D Convolution on FPGA

Shashi Poddar, Sonam Rani, Bipin Koli and Vipan Kumar

1 Introduction

With the increasing demand for reconfigurable architectures that provide high computing speed, FPGA-based systems have gained strong traction in the embedded market. Several constraints have to be considered when processing images on this hardware. In this paper, the 2D filtering operation on an image I(x, y) with m rows and n columns is achieved by convolving it with a mask F(x, y) of k rows and l columns, as given by Eq. 1:

$$I'(x, y) = \sum_{i=-(k-1)/2}^{(k-1)/2} \; \sum_{j=-(l-1)/2}^{(l-1)/2} F(i, j) \times I(x+i, y+j) \qquad (1)$$

A 2D convolution with a filter mask of k by l requires k × l multiplications, k × l − 1 additions, and k × l memory accesses of the pixels in the filter window [1]. The filtering operation is applied spatially over the complete image as the mask moves from the top-left position to the bottom-right in raster-scan order. In this paper, a nonlinear median filter, applied over the masked area in the same sliding fashion as 2D convolution, is used. The serial data arriving at the FPGA either requires buffering of the image area or random access from the external memory for the region being filtered [2]. Another simplistic approach that avoids random access of external memory is to store the complete image in on-chip RAM, which consumes a huge share of the resources available on an FPGA.

Sonam Rani and Bipin Koli contributed to this work in the past during their tenure with CSIR – Central Scientific Instruments Organisation, Chandigarh.
S. Poddar (B) · S. Rani · B. Koli · V. Kumar, CSIR – Central Scientific Instruments Organisation, Chandigarh, India. e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2020. H. Sharma et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-0426-6_18
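A direct software rendering of Eq. 1 is sketched below in Python/NumPy (function and variable names are illustrative, and odd mask dimensions are assumed); it makes the k × l multiplications and memory accesses per output pixel explicit:

```python
import numpy as np

def convolve2d(I, F):
    """Direct 2D filtering per Eq. 1: for each output pixel, k*l
    multiplications and k*l - 1 additions over the filter window."""
    k, l = F.shape
    m, n = I.shape
    Ip = np.pad(I, ((k // 2, k // 2), (l // 2, l // 2)))  # zero borders
    out = np.zeros((m, n), dtype=float)
    for x in range(m):          # raster scan, top-left to bottom-right
        for y in range(n):
            out[x, y] = np.sum(F * Ip[x:x + k, y:y + l])
    return out
```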


In order to reduce the use of available resources on an FPGA, a buffering mechanism is generally used across designs [3]. These buffers allow the available data to be reused by successive window operations, reducing the data bandwidth. There are basically two approaches to data buffering: (1) full buffering and (2) partial buffering. The full buffering scheme buffers a complete row of the image so as to provide data to all the filters associated with one pixel; it has the advantage of efficient pipelining for parallelism, at the cost of relatively high resource utilization. The partial buffering scheme instead accesses a column of data at each clock and is inefficient for real-time streaming and processing of data [4]. The single-window partial buffering (SWPB) scheme proposed in [5] is a compromise between the partial and full buffering approaches and provides improved data utilization under limited bandwidth. This technique buffers a portion of the image for reuse in the adjacent masking operation; however, it requires the next k rows to be re-accessed from memory for use in the filtering operation. Zhang et al. [3] proposed multi-window partial buffering (MWPB), which aims at maximum re-utilization of the data stored in internal buffers. Although the MWPB scheme reduces re-accessing of the same data, it requires re-buffering and multiple dataflows to be provided to the convolution window [3]. Researchers have also chosen to down-sample the image to increase processing speed [4], but with information loss. Cao et al. [4] applied interconnected horizontal and vertical buffers to an image, but this utilizes relatively large hardware resources in terms of the number of flip-flops and the number of input-output blocks for data throughput. Further, Javier Toledo-Moreo et al. [6] presented a hardware solution for computing 2D convolution with large kernel sizes that is modular in pipelining depth and parallelism, thus saving time. Another approach to parallelism is to split the image into several parts and process them separately. The splitting method described in [7] provides two techniques of splitting, one in which partitions interact and one in which they do not. Both techniques require several parallel buffers whose size equals the sum of the partition area and the ghost zone; as a result, they provide fast parallel processing at the expense of more than double the number of flip-flops required for internal data buffering. The method proposed in this paper performs fast parallel processing with approximately the same number of flip-flops required for internal data buffering at the nth level of parallelism as the design without parallelism. In addition, the proposed design does not use any optimization algorithm and adopts vertical splitting over other forms of splitting for the filtering operation. The rest of the paper is structured as follows: Sect. 2 describes the constraints in hardware and software design and the sliding window architecture, Sect. 3 presents different splitting schemes for parallel implementation of 2D convolution, Sect. 4 compares the different splitting schemes, and finally Sect. 5 concludes the paper.


2 System Design Consideration

Images stored in the external memory of a reconfigurable platform are brought to the on-chip memory so that the data can be accessed and processed internally. However, this access is limited by the hardware resources available on the system and forms a design constraint. Multiple constraints work in cohesion to limit system performance, in terms of either speed or memory; these are discussed in Sect. 2.1. We then discuss the sliding window architecture for implementing the filtering operation over an image in a reconfigurable architecture.

2.1 Constraints

Image data is generally captured in raster-scan mode at video rates and is then stored in an external memory or sent to processing hardware, an FPGA in this case [8]. Any system design is built around a hardware constraint, a bandwidth constraint, or a timing constraint. The timing constraint restricts the FPGA device to processing the given task within a specific duration; it matters mainly for real-time processing, where the task must finish before the arrival of the next frame in a sequence. The bandwidth constraint is defined by the rate at which data is exchanged between the external memory and the FPGA device [9]; the two buffering techniques, row buffering and partial buffering, have very low and very high bandwidth requirements, respectively. The resource constraint of a system is defined by the total number of input-output blocks, slice registers, look-up tables (LUTs), and block RAMs. Concurrent data processing additionally requires data scheduling, with the overhead of accessing resources from on-chip or off-chip devices; these extra requirements add to the resource constraint and must be catered for in view of the hardware resources available in a system.

2.2 Sliding Window Architecture

The sliding window approach shifts the area of interest of a 2D convolution mask from the top-left to the bottom-right corner of the image. The full buffering mechanism, together with the sliding window approach, is shown in Fig. 1a. The most basic unit of operation is a First-In-First-Out (FIFO) buffer, which is a combination of a RAM and a controller unit. In full buffering, the size of each row buffer equals the image width, which supplies data to the shift registers as soon as the filter mask moves ahead by one pixel. This provides an uninterrupted supply of pixel information for the 2D convolution task [2], as depicted in Fig. 1a. Owing to data reusability, the full buffering scheme ensures a total of k × l data transfers to the shift registers for convolutions at different locations.
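The row-buffer mechanics can be modeled in software as below (a Python sketch under the stated conventions: raster-order pixel stream, k × k mask, names illustrative); the deque plays the role of the FIFO row buffers plus the shift-register window:

```python
from collections import deque

def sliding_windows(pixels, width, k=3):
    """Full-buffering model: the last (k-1) rows plus k pixels of the
    current row are kept on-chip, so each new raster-order pixel
    completes one k x k window with no external memory re-reads."""
    buf = deque(maxlen=(k - 1) * width + k)
    for i, px in enumerate(pixels):
        buf.append(px)
        if i % width >= k - 1 and i >= (k - 1) * width + (k - 1):
            flat = list(buf)
            # rows of the current window sit `width` pixels apart in buf
            yield [flat[r * width:r * width + k] for r in range(k)]
```

For a 3 × 3 mask and a 258-pixel-wide padded image, the deque holds 2 × 258 + 3 elements, matching the 2N + L on-chip storage figure quoted later for full buffering.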


Fig. 1 a Depicting the sliding window architecture. b Parallel full buffering scheme

The full buffering mechanism has been explored further in this work, and a parallel implementation for simultaneous operation is laid out. The proposed methodology in the following section describes the different splitting procedures in detail and is shown to provide improved performance over other splitting techniques.

3 Proposed Methodology

Parallelizing the 2D convolution task, so that multiple operations are performed simultaneously in an image frame, reduces the computation time of carrying out the overall operation. The different constraints always need to be balanced when designing an efficient algorithm. In this work, the technique of parallel full buffering (PFB) is proposed, as depicted in Fig. 1b, which divides the image into several vertical blocks. This partitioning scheme utilizes the on-chip memory most effectively and does not add storage area with each parallelism stage. The original full buffering scheme stores two complete rows of the image in its two row buffers to provide data for each sliding filter mask, as depicted in Fig. 1a. For the parallelism structure proposed here, the PFB methodology splits the image as illustrated in Fig. 1b, and the size of each row buffer is reduced to (N/2 + 1), where N is the image width. This splitting has the advantage of a reduced ghost-zone area with nearly the same number of flip-flops required for data storage as the original full buffering scheme. As the level of parallelism increases, the bandwidth requirement of the system also increases, owing to the increased simultaneous reading and writing of data to and from the memory at each clock.


However, the design keeps these requirements to a minimum, adding only p pixel reads and writes for p-stage parallelism, for a total of p FIFO buffers of storage space. Two other approaches to parallelizing the convolution operation on an input image are horizontal and hybrid splitting. Horizontal splitting divides the image into p equal parts horizontally for p-stage parallelism, for which the row buffer size equals the image width N and the total storage space required is 2N · p. The FIFO buffer here consists of two row buffers for a 3 × 3 filter mask, and thus requires 2N flip-flops per parallelism stage. A simple example of hybrid splitting into four parts is quadrant splitting, which requires a row buffer of size (N/2 + 1), with total storage space for p = 4 of 2p · (N/2 + 1). A generic case divides the complete image into H horizontal and V vertical splits, for p = V × H stage parallelism. As evaluated, the total number of shift registers required for this generic case equals 2N · H + 2H · V, whereas the storage space required for vertical and horizontal splitting into V × H stage parallelism equals 2N + 2H · V and 2N · H · V, respectively. This clearly shows the advantage of vertical splitting over any other splitting scheme, which is therefore proposed here for generic applications requiring parallel processing of images on an FPGA platform. The complete design is implemented on the Xilinx Virtex-7 development system. The device XC7vh290t-hcg1155 has 218800 LUTs, 470 block RAMs, and 300 bonded input-output blocks. The proposed architecture is designed in VHDL, which aims at resembling the hardware implementation as closely as possible; this not only speeds up the process of improving the algorithm but also facilitates comparison of the software and hardware synthesized algorithm outputs. Xilinx Integrated Software Environment (ISE 14.2) toolsets are used for the simulation, verification, and analysis of the entire system. In the case of full buffering, a total of 2N + L elements are initially stored in on-chip RAM, where N is the image width and L is the mask column size. This row buffer keeps flushing out in FIFO fashion, maintaining a pool of data for the consecutive filter operations of the sliding window architecture. The following section analyses the different splitting architectures from the hardware-utilization aspect.
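The storage formulas above can be compared directly; the following sketch simply evaluates them (Python, names illustrative, with k = 3 so each split needs k − 1 = 2 row buffers):

```python
def buffer_storage(N, H, V, k=3):
    """Row-buffer storage in pixels for an H x V split (p = H*V stages),
    per the expressions quoted in the text for k = 3 masks."""
    vertical   = (k - 1) * N + (k - 1) * H * V      # 2N + 2HV
    horizontal = (k - 1) * N * H * V                # 2N*H*V
    hybrid     = (k - 1) * N * H + (k - 1) * H * V  # 2N*H + 2HV
    return vertical, horizontal, hybrid

# Example: a 256-wide image split four ways (H = 2, V = 2)
print(buffer_storage(256, 2, 2))  # -> (520, 2048, 1032): vertical wins
```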

4 Results and Discussions

The proposed approach is tested by superficially adding noise to an original image (Fig. 2a) and comparing the result with the hardware-filtered image shown in Fig. 2c. The median filtering scheme is then implemented in parallel with the different splitting schemes. A 3 × 3 median filter is applied to an image of size 256 × 256, which, when padded with zeros on all sides, becomes an image of size 258 × 258. Figure 3 shows increasing hardware resources and correspondingly decreasing execution time with increasing parallelism stages.

Fig. 2 Images showing a Original image. b Noisy image. c Hardware filtered image

Fig. 3 Depicting the a hardware utilization, b execution time for different levels, c LUT-FF percentage with increasing parallelism stages

Another aim of our experimentation has been to present a generic trade-off between the resources utilized at different levels of parallelism. A designer should consider all these parameters to ensure that the resources available suffice for efficient running of the algorithm. Table 1 provides the numerical values of the total hardware utilized in terms of LUTs, FFs, LUT-FF combinations, and other resources for different parallelism levels. As seen from Table 1, the parallelism stage cannot be increased further owing to constraints on the available input-output blocks (IOBs).

Table 1 Comparative performance of hardware utilization for different levels of parallelism [RB-N: Row Buffer Size]

Hardware resources     | Available | RB-258 | RB-130 | RB-66 | RB-34 | RB-18
Buffer size            | –         | 258    | 130    | 66    | 34    | 18
Parallelism stage      | –         | No     | 2      | 4     | 8     | 16
No. of slice registers | 437600    | 239    | 460    | 886   | 1706  | 3282
No. of LUTs            | 218800    | 495    | 923    | 1820  | 3491  | 6915
% of LUT-FF pairs used | –         | 27.4   | 28.5   | 28.9  | 28.8  | 28.4
No. of bonded IOBs     | 300       | 21     | 37     | 69    | 133   | 261
No. of Block RAM/FIFO  | 470       | 1      | 2      | 4     | 8     | 16


Fig. 4 Showing images with different partitioning scheme a Quadrant splitting. b Vertical splitting. c Horizontal splitting

Similarly, the area available on the FPGA hardware puts a limit on the parallelism level, as an increasing number of stages adds to the required LUTs and slice registers, at the benefit of decreased execution time. An area-efficient design has always been a major strategy targeted by researchers, where it is desirable to have a maximum of LUT-FF pairs with reduced inter-cluster routing time [10]. Figure 3c shows the LUT-FF utilization for image sizes of 256 × 256 and 512 × 512. As seen, the LUT-FF pair utilization is maximal at the fourth parallelism stage and decreases over further stages; a designer might therefore consider the fourth level of parallelism for this specific process, though the choice may vary from application to application. The three splitting schemes discussed here are (1) hybrid or quadrant splitting, (2) vertical splitting, and (3) horizontal splitting, as depicted in Fig. 4. The row-buffering scheme implemented here with vertical splitting has an advantage over horizontal and quadrant splitting of the image, as it does not require the buffers of the separate splits to be as wide as the image. Vertical splitting is compared with the other two partitioning techniques in terms of hardware utilization in Fig. 5 for four levels of parallelism. As seen, the vertical splitting mechanism utilizes the minimum hardware for a similar improvement in processing speed. It is thus very necessary to account for hardware resources appropriately in a design so that faster processing can be achieved efficiently.

5 Conclusion

In this paper, a PFB scheme with vertical splitting is proposed for parallel processing of a given task on FPGA hardware. The hardware design constraints have to be respected to achieve real-time performance.


Fig. 5 Illustrating hardware resource utilization for different splitting schemes

Theoretical and practical considerations of the sliding window architecture for filtering operations on images have been presented here for different splitting schemes. In addition, a generic splitting scheme has been discussed and compared with vertical and horizontal splitting. The proposed area-efficient design is a compromise between the hardware resources and the available memory bandwidth, targeting a high-throughput architecture capable of real-time image processing. Finally, the vertical parallelism strategy has been shown to reduce the computation time with a much smaller increase in hardware resources and external memory bandwidth than the other techniques.

References

1. Hau N, Asari V (2009) Neighborhood dependent approach for low power 2D convolution in video processing applications. In: 2009 4th IEEE conference on industrial electronics and applications (ICIEA 2009), pp 656–661
2. Benedetti A, Prati A, Scarabottolo N (1998) Image convolution on FPGAs: the implementation of a multi-FPGA FIFO structure. In: Proceedings of the 24th Euromicro conference, 1998, pp 123–130
3. Zhang H, Xia M, Hu G (2007) A multiwindow partial buffering scheme for FPGA-based 2-D convolvers. IEEE Trans Circuits Syst II: Express Briefs 54:200–204
4. Cao TP, Elton D, Deng G (2012) Fast buffering for FPGA implementation of vision-based object recognition systems. J Real-Time Image Process 7:173–183
5. Bosi B, Bois G, Savaria Y (1999) Reconfigurable pipelined 2-D convolvers for fast digital signal processing. IEEE Trans Very Large Scale Integr (VLSI) Syst 7:299–308
6. Javier Toledo-Moreo F, Javier Martínez-Alvarez J, Garrigós-Guerrero J, Manuel Ferrández-Vicente J (2012) FPGA-based architecture for the real-time computation of 2-D convolution with large kernel size. J Syst Archit 58:277–285
7. Reichenbach M, Schmidt M, Fey D (2011) Analytical model for the optimization of self-organizing image processing systems utilizing cellular automata. In: 2011 14th IEEE international symposium on object/component/service-oriented real-time distributed computing workshops (ISORCW), pp 162–171
8. Johnston C, Gribbon K, Bailey D (2004) Implementing image processing algorithms on FPGAs. In: Proceedings of the eleventh electronics New Zealand conference (ENZCon'04), pp 118–123


9. Yu H, Leeser M (2006) Automatic sliding window operation optimization for FPGA-based. In: 2006 14th annual IEEE symposium on field-programmable custom computing machines (FCCM'06), pp 76–88
10. Tessier R, Giza H (2000) Balancing logic utilization and area efficiency in FPGAs. In: Field-programmable logic and applications: the roadmap to reconfigurable computing. Springer, pp 535–544

Chapter 19

Neural Network Based Modeling of Radial Distortion Through Synthetic Images Ajita Bharadwaj, Divya Mehta, Vipan Kumar, Vinod Karar and Shashi Poddar

1 Introduction

Computer vision is one of the core requirements of industrial automation and machine intelligence, helping to achieve automation at different levels. Low-cost non-metric cameras are a very popular choice owing to their ready availability and easy interfacing with existing protocol norms. However, these cameras suffer from lens distortion, which needs to be estimated and compensated accordingly. These distortions were first reported in the early twentieth century and later modeled by Brown in 1971 as radial, decentering, and thin-prism distortions [1, 2]. The main aim of this paper is to estimate radial distortion with the help of a neural network framework.

Radial distortions are those distortions that warp straight lines in a radially symmetric fashion. The distortion magnitude is reflected by a distortion coefficient k, which characterizes the distortion as barrel for k < 0 or pincushion for k > 0. In barrel distortion the edges of the image bow outwards; the opposite effect is observed for pincushion distortion, i.e., the edges of the image bend inwards. These radial distortions occur due to the variation of lens magnification with distance from the optical axis, or due to improper positioning of the lens relative to the camera aperture. Lens distortion is found in almost all cameras; its magnitude varies with camera quality and cost.

The three main categories of radial distortion estimation methods are auto-calibration, point correspondence, and the plumb-line method. The auto-calibration method acquires multiple images from the camera under motion to yield camera parameters [3].

A. Bharadwaj, University Institute of Engineering and Technology, Panjab University, Chandigarh, Punjab, India
D. Mehta, VIT University, Vellore, Tamil Nadu, India
V. Kumar · V. Karar · S. Poddar (B), CSIR-Central Scientific Instruments Organisation, Chandigarh, India. e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2020. H. Sharma et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-0426-6_19


However, this scheme cannot be used for fixed-camera cases or when an immediate distortion factor is required [4]. The point-correspondence method requires multiple views of the same scene from different cameras of fixed focal length to yield the camera distortion parameters. The plumb-line method does not require multiple views or multiple images and can instead work on a single distorted image obtained from the camera. Nevertheless, this scheme depends on the existence of straight lines in the images, which can either be generated synthetically or pre-exist in man-made structures such as buildings, roads, etc. [5, 6]. In this paper, the plumb-line-based method is used to estimate the distortion factor corresponding to a given camera. Stapleton and Bajic presented a plumb-line method for extracting image points that lie on straight lines using a forward transform model [7]. Bukhari and Dailey introduced an algorithm that estimates the radial distortion in an image without any human intervention [8]; this method is based on the plumb-line approach and works efficiently even for wide-angle and fish-eye lenses. Li et al. proposed a simple fish-eye distortion correction algorithm, which determines the radius and center of the image using a midpoint circle algorithm and reduces the number of computations compared with the equidistant model method [9]. In this article, the artificial neural network (ANN) technique is used to estimate the radial distortion factor based on the training examples fed to it. It does not require any camera calibration to be known a priori and works efficiently for single-view images. The paper is divided into five sections: Sect. 1 introduces the topic, Sect. 2 covers the theoretical background, Sect. 3 details the proposed methodology, Sect. 4 provides the corresponding analysis, and finally Sect. 5 concludes the paper.

2 Theoretical Background

Radial lens distortion is one of the most important factors that needs to be removed for the camera to map the outer view realistically. This section presents the mathematical background related to the modeling of radial distortion and the neural network technique for model estimation.

2.1 Distortion Modeling

A distortion model maps the distorted image coordinates to their corresponding ideal image counterparts. This distortion need not always arise in the camera lens; many times it reflects error in the lens of a projective device whose output is viewed by the camera, for example a camera viewing the projection of a projector or a head-up display device. Some of the popular distortion models in the literature are (a) the polynomial model, (b) the division model, and (c) the logarithmic model. The polynomial distortion model used here


represents the image points as a function of their radial distance from the image center and is given as [10]:

$$L(r) = f(r) = 1 + k_1 r^2 + k_2 r^4 + \cdots \qquad (1)$$

Here, r is the radial distance between the center of distortion and the image point, and k_1, k_2, … are the radial distortion coefficients of the polynomial function f(r). Generally, the first- and second-order distortion coefficients k_1 and k_2 are taken into consideration while higher-order terms are neglected. The distortion in an image is largely governed by the first-order coefficient, and the second-order term is relevant only in cases of very high distortion. This model maps real-world straight lines to curved lines in the distorted image. Even when higher-order coefficients are taken into account, they only increase the complexity while no significant gain in the precision of distortion estimation is observed. As evident from the equation above, the radial distortion varies with the distance of the image coordinates from the distortion center. The division model, introduced by Fitzgibbon in 2001, models large distortion with fewer coefficients, and the logarithmic model uses a polar-coordinate representation of points and is based on the logarithmic fish-eye transformation [11]. The distortion coefficients signify the magnitude and type of the radial distortion in the image: a low distortion coefficient means the mapping is close to the ideal case, whereas a high distortion coefficient indicates large distortion in the image.
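A small sketch of applying the polynomial model of Eq. 1 to point coordinates is shown below (Python/NumPy; the names and the truncation at k_2 are illustrative):

```python
import numpy as np

def distort_points(pts, center, k1, k2=0.0):
    """Scales each point's offset from the distortion center by
    L(r) = 1 + k1*r^2 + k2*r^4 (higher-order terms neglected)."""
    d = pts - center                          # offsets from the center
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    L = 1.0 + k1 * r2 + k2 * r2 ** 2          # polynomial model, Eq. 1
    return center + d * L
```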

2.2 Neural Networks and Machine Learning

Machine learning techniques are divided into unsupervised and supervised learning. Unsupervised learning algorithms do not require training with labeled data, whereas supervised learning algorithms work in a closed-loop mechanism to yield a system model that fits the training data. Linear regression, support vector machines, and artificial neural networks are some of the most widely used learning algorithms. Among these, the neural network is chosen here to predict the output labels given the input data. A neural network is an interconnection of neurons that helps categorize unlabeled data and predict continuous output values as a result of supervised training. The values obtained from the previous layer's nodes are summed with certain weights plus a bias term, represented mathematically for node i of layer n + 1 as

$$u_i = \sum_j w_{i,j}^{n+1} x_j + w_{i,\text{bias}}^{n+1} b_n \qquad (2)$$

The multiple-layer network structure is then trained using a back-propagation learning algorithm applied to the given dataset, adjusting the weights iteratively to obtain a network structure that gives the desired response to the given input vectors.
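A minimal sketch of one forward pass through such a network is given below (Python/NumPy; the ReLU hidden activation and linear output match the configuration described later in Sect. 4, and all names are illustrative):

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """Two-layer forward pass: each node computes the weighted sum of
    the previous layer plus a bias term, as in Eq. 2."""
    h = np.maximum(0.0, W1 @ x + b1)  # hidden layer, rectified activation
    return W2 @ h + b2                # linear output node
```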


3 Proposed Methodology

Radial lens distortion of an imaging system is an important factor that needs to be monitored and rectified appropriately. In this paper, the distortion magnitude is estimated by relating it to the varying area of shapes that are generated synthetically. To study the effect of distortion, regularly shaped objects (squares and rectangles) of different sizes are distorted with different coefficients. The overall algorithm works by converting the gray-scale image to a binary image and enumerating the area of 'on' pixels enclosed by the object boundary. The pincushion- and barrel-distorted images of a square-shaped object are shown in Fig. 1. The area enclosed by an object is calculated by analyzing the 2 × 2 neighborhood of each pixel that lies within the object boundary. Note that each pixel is part of four such neighborhoods, and the relative positions of the foreground and background pixels form the pattern. Accordingly, the patterns can be grouped into six cases [9], as sketched in the code that follows:

Case 1: When there are no 'on' pixels in the sample, it contains no object segment and the area contribution is zero.
Case 2: When the sample contains only one object ('on') pixel against the background, the area contribution is 1/4.
Case 3: For a segment consisting of two adjacent 'on' pixels, the area contribution is 1/2.
Case 4: When there are two diagonal 'on' pixels, the area contribution is 3/4.
Case 5: When the pattern consists of three object pixels, the area contribution is 7/8.
Case 6: A sample with all four pixels 'on' contributes an area of 1.

For the cases above, an 'on' pixel is a pixel belonging to an object in the image. The area enclosed by the object boundary changes with the varying distortion coefficient and is explored here to relate it to the distortion magnitude. The distortion coefficient is modeled and estimated in this article by creating a neural network structure between the area enclosed by the object and the corresponding distortion coefficient. The neural network training involves selecting the input, hidden, and output layers, besides the model, which is later used to predict results for new and unseen data.
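The six neighborhood cases translate directly into the following sketch (Python/NumPy; names illustrative), which pads the binary image and sums one contribution per 2 × 2 window:

```python
import numpy as np

def object_area(binary):
    """Area from 2x2 neighbourhood patterns (Cases 1-6): contributions
    of 0, 1/4, 1/2, 3/4 (diagonal pair), 7/8 and 1 per window."""
    weights = {0: 0.0, 1: 0.25, 2: 0.5, 3: 0.875, 4: 1.0}  # by 'on' count
    b = np.pad(binary.astype(int), 1)         # so border windows count too
    area = 0.0
    for i in range(b.shape[0] - 1):
        for j in range(b.shape[1] - 1):
            quad = b[i:i + 2, j:j + 2]
            n_on = int(quad.sum())
            if n_on == 2 and quad[0, 0] == quad[1, 1]:  # diagonal 'on' pair
                area += 0.75
            else:
                area += weights[n_on]
    return area
```

With these weights, an s × s solid square sums exactly to s²: the (s − 1)² interior windows contribute 1 each, the 4(s − 1) edge windows contribute 1/2 each, and the 4 corner windows contribute 1/4 each.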

Fig. 1 Pin-cushion and Barrel distorted image on the left and right of the center image


In addition, not just the area, but also the length and breadth of the rectangular objects and the distance between the center of the rectangle and the image center are considered as inputs to the neural network. An activation function is defined for each layer; it is a transfer function representing the output of a node for the given inputs. Thus, a fully interconnected network of nodes (neurons) is formed, which helps in estimating the distortion coefficient. In this framework, a defined set of experiments has been carried out to train the neural network, which is later used to test its validity on new sets of input parameters. The areas of the distorted and the ideal images are determined as described in the cases above, and their difference is fed as input data into the neural network. Depending on the activation function, an optimal weight for each node is combined with the input to generate the output.

4 Results and Discussions

The proposed methodology of estimating distortion by monitoring the area enclosed by a known shape is evaluated here. A synthetic dataset of squares and rectangles of different sizes is distorted with different distortion coefficients (both positive and negative). As known and found empirically, the pincushion distortion leads to a reduction in the enclosed area whereas the barrel distortion leads to an increase. This effect is used here to train the neural network and estimate the unknown distortion coefficient, given the input parameters. Additionally, the effect of shifting the center of the object away from the image center is investigated by incorporating it into the training dataset. Table 1 depicts the effect of a varying distortion coefficient on the enclosed area. It can be seen from Table 1 that the area of the square decreases nonlinearly as the distortion coefficient increases. Some entries in the table remain blank because the distortion is so large that the edges no longer remain connected, and certain approximations would be required to estimate the enclosed area.

Table 1 Depiction of varying area with varying size of the square and distortion coefficient

k (-ve) | s = 2 | s = 3 | s = 4  | s = 5  | s = 6  | s = 7  | s = 8  | s = 9  | s = 10
0    | 38025 | 84681 | 149769 | 233289 | 335241 | 455625 | 594441 | 751689 | 927369
0.1  | 34225 | 76729 | 136220 | 213318 | 308907 | 422611 | 556418 | 710633 | 887176
0.2  | 30827 | 69169 | 123386 | 193771 | 281830 | 388251 | 515235 | 664969 | 840840
0.3  | 27627 | 62001 | 110492 | 174410 | 254554 | 353025 | 471931 | 614939 | 787259
0.4  | 24579 | 54983 | 98368  | 155518 | 227735 | 317195 | 426591 | 560319 | 726489
0.5  | 21609 | 48456 | 86608  | 137246 | 201499 | 281404 | 380011 | 502419 | –
0.6  | 18769 | 42182 | 75572  | 119789 | 176138 | 246344 | 333451 | –      | –
0.7  | 16186 | 36304 | 65124  | 103399 | 151891 | 212665 | –      | –      | –


Fig. 2 Non-linear polynomial curve fitting for changing area with size and distortion coefficient

The nonlinear characteristics of the enclosed area with varying distortion coefficient are depicted in Fig. 2 and can be modeled using a least-squares-based second-order polynomial fit. For this fitting, the distortion coefficient x and the object size y are treated as the independent variables and the area as the dependent variable. Accordingly, the nonlinear polynomial surface is modeled as

$$A = f(x, y) = p_{00} + p_{10}x + p_{01}y + p_{20}x^2 + p_{11}xy + p_{02}y^2 \qquad (3)$$
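A least-squares sketch of this surface fit is shown below (Python/NumPy; function and variable names are illustrative, with x, y, and A as flat arrays of coefficient, size, and area samples):

```python
import numpy as np

def fit_area_surface(x, y, A):
    """Fits A = p00 + p10*x + p01*y + p20*x^2 + p11*x*y + p02*y^2 (Eq. 3)
    by linear least squares over the sampled (x, y, A) triples."""
    X = np.column_stack([np.ones_like(x), x, y, x ** 2, x * y, y ** 2])
    coeffs, *_ = np.linalg.lstsq(X, A, rcond=None)
    return coeffs  # [p00, p10, p01, p20, p11, p02]
```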

Next, the effect of the ratio of adjacent side lengths is analyzed, as shown in Table 2. On varying the ratio of the adjacent side lengths, the enclosed area varies nonlinearly and in direct proportion to the distortion coefficient. This outlines the influence of object shape on radial distortion, and it is therefore incorporated into the modeling.

Table 2 Change in area with varying ratio of side lengths

k (+ve) | s = 7 × 6 | s = 7 × 7 | s = 7 × 8 | s = 7 × 9 | s = 7 × 10 | s = 7 × 11 | s = 7 × 12
0    | 390825 | 455625 | 520425  | 585225  | 650025  | 714825  | 779625
0.1  | 459018 | 532815 | 605428  | 677135  | 747792  | 816931  | 884892
0.2  | 525854 | 607355 | 686663  | 763503  | 837831  | 909643  | 978891
0.3  | 590405 | 678301 | 762456  | 842844  | 919553  | 992561  | 1061975
0.4  | 651301 | 744260 | 832005  | 914789  | 992637  | 1065905 | 1134876
0.5  | 708102 | 804876 | 895184  | 979338  | 1057682 | 1130628 | 1198757
0.6  | 760615 | 860301 | 952277  | 1037089 | 1115420 | 1187895 | 1254956
0.7  | 808901 | 910721 | 1003738 | 1088835 | 1166814 | 1238488 | –


Fig. 3 Effect of barrel distortion on fixed size square with varying offset

Additionally, shifting the object center away from the image center is found to affect the area enclosed by an object considerably, and this offset is included as an input feature of the neural network. Figure 3 depicts the effect of varying barrel distortion on images of a fixed square shape off-centered towards the right by different amounts. It can be seen from Fig. 3 that if a square of size 5 inches × 5 inches is shifted rightwards by different amounts (0.5 to 2.5 inches), the area difference between the original and distorted images decreases. Further, on increasing the distortion magnitude, the area difference increases accordingly, as seen in Table 3.

The dataset generated from the difference in area enclosed by the original and distorted objects is used to train the neural network. The network consists of two layers of densely connected neurons, linking each neuron to all the available nodes in the next layer; it has four input nodes, four hidden nodes, and one output node. The rectified activation function is used for the hidden layer while a linear function is assigned to the output. Out of the total 352 test samples, 70% are used as the training set (246 samples), 15% as the test set (53 samples), and the remaining 15% as the validation set (53 samples). This framework is then tested and validated on a part of the samples extracted from the original dataset. Figure 4 depicts the performance on the different data samples and the mean squared error between true and predicted values.

Table 3 Area difference for a 5 × 5 in square with varying offset from the center of the image

k (+ve) | Offset = 0.5 | Offset = 1 | Offset = 1.5 | Offset = 2 | Offset = 2.5
0.1 | 43519  | 42742  | 41157  | 39524  | 37096
0.2 | 88180  | 86149  | 82745  | 78157  | 72357
0.3 | 132832 | 129094 | 123070 | 114993 | 105212
0.4 | 176816 | 171139 | 162077 | 150039 | 135492
0.5 | 219496 | 211663 | 199255 | 182882 | 163243
0.6 | 260469 | 250449 | 234495 | 213489 | 188530
0.7 | 299465 | 287252 | 267697 | 241956 | 211617


Fig. 4 Performance comparison between training, validation, and test sample set using mean squared error

Figure 4 shows the performance of the neural network throughout the training process using the mean squared error between the actual distortion coefficient and the value predicted by the ANN. The input data is fed into the neural network repeatedly until the performance ceases to improve. As seen, the mean squared error of the validation curve decreases as training proceeds until epoch 29, after which no substantial improvement is observed. Although this scheme applies distortion to synthetic test patterns, studying the application of this technique to real images in which straight-line objects pre-exist is a future scope of this work.

5 Conclusion

This paper explores the relationship between the distortion coefficient and the difference in area between an ideal image and its distorted version. The experiments performed on square- and rectangle-shaped images show that the difference in area increases nonlinearly with the extent of distortion for barrel distortion and decreases for pincushion distortion. Experiments have also been carried out on synthetically generated objects of different sizes and varying offsets from the image center. The three-dimensional nonlinear curve fitting technique can be used to estimate the distortion analytically for unknown cases. The neural network architecture takes advantage of these findings and predicts the unknown distortion in an image. This work can be extended to various other object shapes and to real-life images containing straight-line-like objects.


References

1. Duane CB (1971) Close-range camera calibration. Photogramm Eng 37:855–866
2. Liu Y, Tian C, Huang Y (2016) Critical assessment of correction methods for fisheye lens distortion. Int Arch Photogramm Remote Sens Spat Inf Sci 41
3. Fitzgibbon AW (2001) Simultaneous linear estimation of multiple view geometry and lens distortion. In: Proceedings of the 2001 IEEE Computer Society conference on computer vision and pattern recognition, pp I–I
4. Vass G, Perlaki T (2003) Applying and removing lens distortion in post production. In: Proceedings of the 2nd Hungarian conference on computer graphics and geometry, pp 9–16
5. Thormählen T, Broszio H, Wassermann I (2003) Robust line-based calibration of lens distortion from a single view. Mirage 2003:105–112
6. Wang X, Klette R, Rosenhahn B (2006) Geometric and photometric correction of projected rectangular pictures. University of Auckland
7. Stapleton MP, Bajic IV (2016) Robust domain-filling plumb-line lens distortion correction. In: IEEE international symposium on multimedia (ISM), 2016, pp 507–514
8. Bukhari F, Dailey MN (2013) Automatic radial distortion estimation from a single image. J Math Imaging Vis 45:31–45
9. Li Y, Zhang M, Wang B (2011) An embedded real-time fish-eye distortion correction method based on midpoint circle algorithm. In: IEEE international conference on consumer electronics (ICCE), 2011, pp 191–192
10. Park J, Byun SC, Lee BU (2009) Lens distortion correction using ideal image coordinates. IEEE Trans Consum Electron 55
11. Basu A, Licardie S (1995) Alternative models for fish-eye lenses. Pattern Recogn Lett 16:433–441

Chapter 20

A Stylistic Features Based Approach for Author Profiling Karunakar Kavuri and M. Kavitha

1 Introduction

The exponential progress of social media networks brings new challenges in the study and evaluation of text classification problems [1]. These networks are extremely popular, especially among young people, and they strongly influence several aspects of life, such as friendship and love relationships, the news industry, and ways of working. A great number of experts in sociology and computer science are researching the effects of social networks on our society. Every day billions of users use these platforms, so a huge amount of data, such as texts and images, is produced daily. In recent days, it is not enough to find the sentiment or author attached to a text document; other aspects that help build profiles, such as the age, gender, and other demographic characteristics of the individuals who wrote the text, are also of interest. Author profiling has become a very important text classification task that addresses the extraction of this kind of demographic information about authors. In this work, the overall aim is to find gender information about a person just by analyzing texts written by that individual [2]. Figure 1 illustrates the definition of author profiling.

K. Kavuri (B) · M. Kavitha, CSE Department, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Avadi, Chennai, India. e-mail: [email protected]; [email protected]
© Springer Nature Singapore Pte Ltd. 2020. H. Sharma et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-0426-6_20


Fig. 1 Author profiling

profiling was applied to threatening letter analysis to pedophile detection in chat systems and to the automatic detection of cyber bullying. Concerning security, it would be very important to be able to identify the profile of a harassing text message author. Another important application concerns the identification of terrorists based on the content produced and posted in social networks. As you can easily understand, there are several important fields in which Author Profiling is fundamental. As a consequence, diverse approaches have been proposed to solve this task [3]. Many of them use traditional supervised learning techniques that have shown to be effective, but considering that much of the Information comes from small text messages (like Twitter, Facebook, etc.) is difficult. The researchers are working for proposing alternative text representations that can take advantage of all possible information was an emerging interest [4]. In case of gender identification, a possible hypothesis that could be formulated is that women tend to be more sensitive than men. If we continue developing the previous hypothesis, we could also try to prove that women tend to write about how they feel about a certain situation, whereas men tend to write about the situation itself [8]. Another researcher observed [5] that the female writing expresses both positive and negative comments on an object or a person. Researchers has given another inference on male writing wherein, the male are tend to narrate new stories and focus on what had happened but whereas the female express how they felt. According to Koppel et al. [6] women writing contain more pronouns and men uses more quantifiers and determiners. In the past some of the researchers found that more number of prepositions [5, 6] was found in female author’s writings than the male authors. In another observation [7], females use more adjectives and adverbs and talks on shopping and wedding styles and male author’s writings consisting of technology and politics. Text contents and its features play a dominant role in identifying the author as a male or female [5]. The content words such as my husband, boyfriend, pink are generally identified in female writings and also include negations, friends, verbs, the words relate to family and emotional words. The male writings consisting of world cup, cricket and more number of prepositions, statistics, artifacts and the style of men writing are observed in longer words [8].


2 Literature Survey

The goal of this research work is to uncover the gender of an author by comparing the demographic and psychological characteristics found in text documents against samples of the author's writing style using enriched stylistic features. Many researchers have proposed various types of stylistic features, such as character-based, syntactic, word-based, and semantic features, for distinguishing authors' writing styles [9]. In the experiments of Dang Duc [10], 298 word-based and character-based features were extracted from approximately 3500 web pages of 70 Vietnamese blogs, and it was concluded that word-based features are better suited to predicting gender than character-based features. They also observed that longer word n-grams and character n-grams produce good gender-prediction accuracy when used with an SVM. Koppel et al. [6] experimented with 566 text documents collected from the British National Corpus (BNC), extracted 1081 features, and achieved good gender-prediction accuracy. Argamon et al. [11] composed a corpus from various blogs covering approximately 19320 authors and computed accuracy using both stylistic and content-based features; they concluded that stylistic features contribute more to predicting and discriminating gender. Palomino-Garibay et al. [12] stated that character-level n-grams, with n ranging from 2 to 6, produce better accuracies for languages such as Spanish and Dutch. Soler et al. [13] experimented on opinion blogs from the New York Times corpus, trying different feature combinations, including sentence-based, word-based, character-based, syntactic, and dictionary-based features for gender prediction; the combination of all these features achieved good accuracy. Most research papers on author profiling consider the lexical, syntactic, and semantic elements contained in text documents [9]. For lexical elements, most papers extract the text vocabulary, including stylistic elements of texts, word n-grams, sentence/word length counts, stop words, words that appear only once or twice in the texts, and vocabulary richness, among others. For syntactic elements, different papers extract features describing the role function words play in forming sentences, part-of-speech (POS) tags, text chunks, and sentence and phrase structures. For semantic elements, various papers incorporate features dealing with the understanding of words and phrases, for example the use of synonyms, antonyms, hypernyms, and hyponyms. A main disadvantage of these features is the difficulty of finding the correct meaning of texts, given that most of the analyzed information comes from social media, which adds complexity to the analysis of context, although some tools have reached acceptable accuracy. Some papers address author profiling using graphs [14, 15]; the majority propose a supervised classification approach with a co-occurrence graph to detect age and gender. In [16],


a graph-based representation is proposed to extract lexical and syntactic features related to different personality traits such as extroverted, stable, open, etc. Alrifai et al. [17] experimented with character n-grams (n ranging from 2 to 7), hashtags, links, lengthened-word ratios, and mention-usage ratios to generate the feature vector. They used the Sequential Minimal Optimization (SMO) and SVM classification algorithms to build the classification model and found that SMO gave better accuracy than SVM. They also observed that POS n-grams and word n-grams contribute little to improving the accuracy of language-variety and gender prediction. This work is organized into seven sections. Section 2 described the previous work done by various researchers on author profiling. The dataset characteristics and performance measures are presented in Sect. 3. Section 4 discusses the different types of stylistic features extracted for our experiment. The BOF model is explained in Sect. 5. The experimental results are analyzed in Sect. 6. Section 7 presents the conclusions of this work with future scope.

3 Evaluation Measures and Dataset Characteristics

In this work, we experiment with the PAN 2014 competition [18] corpus of hotel reviews for gender prediction. The reviews dataset for the English language contains 4160 reviews. The dataset is balanced, meaning the female and male classes comprise the same number of reviews, i.e., 2080 each. In supervised learning, the algorithm learns from a training set of correctly labeled examples; the extracted knowledge, also known as the 'model', is then applied to unseen instances. To evaluate the performance of a model, a set of unseen instances is classified, and the predictions of the classifier are compared with the true labels. Each evaluation metric implements this comparison in a different way; common metrics for evaluating supervised machine learning models are precision, recall, accuracy, and F-measure. In this work, we use the accuracy measure to evaluate the classification models. Accuracy is defined as the ratio of correctly predicted instances to total predictions; this metric is used mostly in cases where the number of instances per class is evenly distributed.

4 Stylistic Features

To perform author profiling, a feature engineering process needs to be carried out. Author profiling is based on determining and extracting features that characterize writings with respect to their author or with respect to the author's characteristics. Beyond the machine learning algorithms often used in author profiling, feature engineering is a loosely defined set of tasks related


to designing feature sets for machine learning applications. The first important task in correctly designing a feature set is to understand the properties of the problem at hand and assess how they might interact with the chosen classifier. Feature engineering is thus a cycle in which a set of features is proposed, experiments with this feature set are performed, and, after analyzing the results, the feature set is modified to improve performance until the results are satisfactory. Optimal feature selection helps the algorithms extract patterns that generalize to unseen instances without needing complex parameterization of the classifier, preventing overfitting. The models created by the machine learning algorithm, which contain the 'knowledge' extracted from the training data, are faster to run and easier to understand and maintain if the feature set is appropriate. Different types of features can be extracted from the input data: numeric features quantify numeric characteristics of the data; Boolean features indicate the presence or absence of a characteristic; and nominal features take values from a fixed set (e.g., 'positive', 'negative', 'neutral'). It is important to choose machine learning algorithms compatible with the types of features considered. In our specific case, the constructed feature set is composed solely of numeric features. In this work, the following sets of features were extracted for experimentation (a sketch of some of these features follows the list):

• Most frequent character n-grams, where n ranges from 2 to 7
• Most frequent word unigrams, bigrams, and trigrams
• Most frequent POS n-grams, where n is 1, 2, or 3, with TF-IDF weighting
• Lengthened words: purposely repeating a character in a word to show exaggeration when describing something
• Words starting with a lowercase letter
• Words starting with an uppercase letter
• Character flood count: the number of times a sequence of three or more identical characters appears in the document
• Vocabulary variety: the ratio of unique words in a document to the total words in the text
• Cleanliness of a text: the ratio of the number of characters after and before the preprocessing step
• Average spelling errors, determined as a relative value over correctly spelled words
• Punctuation features: average dot count, average comma count, average question mark count, and average exclamation count
• Words not found in a predefined dictionary of words
• All-capital-letter words
• Word type-token ratio: the ratio of unique words to total words used
• Positive word count
• Negative word count
• Swear word count
• Number of connective words (furthermore, firstly, moreover, hence)
• Number of contractions (I'd, let's, I'll, he'd, can't …)
• Number of familial words (wife, husband, gf, bf, mom)
• Number of collocations (dodgy, telly, awesome, freak, troll …)


• Number of abbreviations and acronyms (a.m., p.m., Mr., Inc., NASA, asap …)
• POS unigram density: the ratio between the number of different POS unigrams and the total POS unigrams
• POS bigram density: the ratio between the number of different POS bigrams and the total POS bigrams
• POS trigram density: the ratio between the number of different POS trigrams and the total POS trigrams
• Standard deviation of characters per word, which measures the spread of the feature values around the mean and shows whether the variation in word length is high
• Range of characters per word: the statistical range, defined as the difference between the largest and the smallest element of a set
• First person pronoun usage
• Words between parentheses
• Standard deviation of words per sentence, which measures the variability of words per sentence and indicates whether an author constructs his/her sentences in a uniform way or varies them greatly in size from one sentence to another
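To make the feature set concrete, the sketch below computes a few of the numeric features listed above in plain Python; the exact definitions and any thresholds used in the paper may differ, so treat this as an illustrative approximation:

```python
import re

def stylistic_features(text):
    """Sketch of a few of the numeric stylistic features listed above."""
    words = text.split()
    unique = set(w.lower() for w in words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        # Type-token ratio: unique words over total words.
        "type_token_ratio": len(unique) / max(len(words), 1),
        # Character floods: runs of three or more identical characters.
        "char_floods": len(re.findall(r"(.)\1{2,}", text)),
        # Words starting with an uppercase letter.
        "uppercase_starts": sum(1 for w in words if w[:1].isupper()),
        # Average punctuation counts per sentence.
        "avg_dots": text.count(".") / max(len(sentences), 1),
        "avg_exclaims": text.count("!") / max(len(sentences), 1),
        # Average word length (basis for the std-dev and range features).
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
    }

print(stylistic_features("The hotel was sooo great!!! We loved it. Really."))
```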

5 BOF Model

In this work, we used a Bag of Features (BOF) model for document vector representation. The BOF model is exhibited in Fig. 2. In this model, the training dataset is first collected from suitable sources for experimentation. The training dataset is preprocessed for efficient feature extraction by using stop word removal and stemming techniques. The stylistic features appropriate for distinguishing the writing style of the authors are then extracted. Next, the feature weights are calculated by using the feature frequency measure. These feature weights were used for document vector representation. The document vectors were given to classification algorithms to generate the classification model. In this work, different classifiers such as Logistic and Simple Logistic (functional), Naive Bayes Multinomial (probabilistic), Bagging (ensemble/meta), IBk (lazy), and Random Forest (decision tree) were used from the WEKA tool. This classification model is used to predict the gender of the author of an unseen document.
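The chapter uses WEKA implementations of these classifiers; as a rough analogue, the following sketch reproduces the BOF pipeline with scikit-learn equivalents (the toy documents and labels are invented, and frequency-based vectorization stands in for the paper's feature-frequency weighting):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical labeled reviews; the real corpus is the PAN 2014 hotel-review set.
docs = ["the room was lovely and clean", "worst stay ever, rude staff",
        "my husband loved the spa", "decent telly and quick wifi"]
labels = ["female", "male", "female", "male"]

# Feature weighting by feature frequency, as in the BOF model.
vectorizer = CountVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(docs)

# scikit-learn analogues of the WEKA classifiers used in the chapter.
classifiers = {
    "Naive Bayes Multinomial": MultinomialNB(),
    "Logistic": LogisticRegression(max_iter=1000),
    "IBk (k-NN)": KNeighborsClassifier(n_neighbors=1),
    "Bagging": BaggingClassifier(),
    "Random Forest": RandomForestClassifier(),
}
for name, clf in classifiers.items():
    clf.fit(X, labels)
    print(name, clf.predict(vectorizer.transform(["friendly staff, clean room"])))
```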

6 Experimental Results

In this work, the experimentation was performed with the set of stylistic features presented in the previous section. The weights of the stylistic features were used to represent the document vectors. Various classification algorithms were used in this experiment for producing the classification model. Table 1 presents the accuracies for predicting gender when stylistic features were used (Fig. 3).


Fig. 2 The BOF model

Table 1 Gender prediction accuracies when stylistic features were used

Classifier                 Accuracy (%)
Naïve Bayes Multinomial    79.25
Logistic                   76.15
Simple Logistic            73.60
IBk                        75.35
Bagging                    78.80
Random Forest              82.75

The random forest classifier achieved 82.75% accuracy for predicting the gender profile when stylistic features were used. We observed that character n-grams contribute the most to improving the prediction accuracy of gender: when we experimented without character n-grams, the accuracy was reduced. It was also observed that the impact of POS n-grams and word n-grams on the accuracy of gender prediction is smaller.

Fig. 3 Graphical representation of the gender prediction accuracies

7 Conclusion and Future Scope

Author profiling is a text processing method for extracting the demographic profiles of authors by investigating their written text. Because of its wide applications, several researchers have proposed solutions for improving the accuracy of profile prediction. In this work, we experimented with stylistic features for predicting the gender profile from a hotel reviews corpus. We experimented with various classification algorithms to create the classification model for predicting the gender of the author of an unseen document. Among all the classifiers, the random forest classifier performed best for gender profile prediction. In future work, we plan to implement efficient weight measures to increase the differentiating power of the features, and to apply neural network and deep learning algorithms to improve the accuracy of predicting the demographic profiles of authors.

References

1. Rosso P, Rangel FM, Potthast M, Stamatatos E, Tschuggnall M, Stein B (2016) Overview of PAN 2016—new challenges for authorship analysis: cross-genre profiling, clustering, diarization, and obfuscation. In: Proceedings of the CLEF PAN conference, pp 332–350
2. Rangel FM, Rosso P, Verhoeven B, Daelemans W, Potthast M, Stein B (2016) Overview of the 4th author profiling task at PAN 2016: cross-genre evaluations. In: Proceedings of the CLEF PAN conference, pp 750–784
3. López AP, Montes-y-Gómez M, Escalante HJ, Pineda LV, Stamatatos E (2016) Discriminative subprofile-specific representations for author profiling in social media. Knowl Based Syst 89:134–147
4. Mihalcea R, Radev D (2011) Graph-based natural language processing and information retrieval. Cambridge University Press


5. Schler J, Koppel M, Argamon S, Pennebaker J (2006) Effects of age and gender on blogging. In: Proceedings of the AAAI spring symposium on computational approaches for analyzing weblogs, vol 6, pp 199–205
6. Koppel M, Argamon S, Shimoni A (2003) Automatically categorizing written texts by author gender. Lit Linguist Comput 401–412
7. Nerbonne J (2013) The secret life of pronouns. What our words say about us. ALLC
8. Newman ML, Groom CJ, Handelman LD, Pennebaker JW (2008) Gender differences in language use: an analysis of 14,000 text samples. Discourse Process 45(3):211–236
9. Raghunadha Reddy T, VishnuVardhan B, Vijaypal Reddy P (2016) A survey on authorship profiling techniques. Int J Appl Eng Res 11(5):3092–3102
10. Dang Duc P, Giang Binh T, Son Bao P (2009) Author profiling for Vietnamese blogs. Asian Lang Process 190–194
11. Argamon S, Koppel M, Pennebaker JW, Schler J. Mining the blogosphere: age, gender and the varieties of self-expression. First Monday 12(9)
12. Palomino-Garibay A, Camacho-González AT, Fierro-Villaneda RA, Hernández-Farias I, Buscaldi D, Meza-Ruiz IV. A random forest approach for authorship profiling
13. Soler J, Wanner L (2014) How to use less features and reach better performance in author gender identification. In: The 9th edition of the language resources and evaluation conference (LREC), 26–31
14. Rangel FM, Rosso P (2016) On the impact of emotions on author profiling. Inf Process Manage 52(1):73–92
15. Dichiu D, Rancea I (2016) Using machine learning algorithms for author profiling in social media. In: Proceedings of the CLEF PAN conference, pp 858–863
16. Gómez-Adorno H, Markov I, Sidorov G, Posadas JP, Sánchez MA, Hernández LC (2016) Improving feature representation based on a neural network for author profiling in social media texts. Comput Intell Neurosci 2016:1–14
17. Alrifai K, Rebdawi G, Ghneim N (2017) Arabic tweeps gender and dialect prediction. Notebook for PAN at CLEF 2017
18. https://pan.webis.de/clef14/pan14-web/author-profiling.html

Chapter 21

Load Balancing in Cloud Through Task Scheduling

Tarandeep and Kriti Bhushan

1 Introduction

Cloud computing is a combination of distributed and parallel computing, and it follows a multi-tenant computing model. The virtualization concept has huge importance in the cloud, as it provides resources online according to the user's demand. Virtualization allows computing resources to be created virtually and distributed on time to a large number of users according to their respective demands. Data centers manage these resources and make them available to users dynamically, based on availability, demand, and the quality parameters that need to be fulfilled [1]. Virtual machines (VMs) with heterogeneous specifications, hosted on many servers present in different regions, exhibit fluctuating resource utilization, which results in imbalance in resources and in task scheduling [2]. This results in decreased performance, large energy consumption, system instability, and violation of service level agreements (SLAs). Because classical methods like FCFS, Round Robin, SJF, etc., are too slow and fail to find an exact solution for the process scheduling and load balancing problem, we move to the metaheuristic algorithm ACO [3].

Tarandeep (B) · K. Bhushan
Department of Computer Engineering, National Institute of Technology, Kurukshetra, Kurukshetra, India
e-mail: [email protected]
K. Bhushan e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2020
H. Sharma et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-0426-6_21


1.1 Cloud Features

Sharing resources on demand. The cloud provides resources to users according to their demand.
Resource scalability. The cloud gives the user the ability to scale resource availability up according to need.
Pay per usage. Users pay according to the number of services and resources they use.
Self-service. There is very little interaction between the service provider and the user. The cloud gives users the freedom to select resources according to their needs, on their own.

1.2 Cloud Services

Cloud computing represents the availability of IaaS, PaaS, and SaaS services from external service providers [4]. It involves these basic activities.
Resource discovery. From the huge shared pool of resources (the cloud), the resource that satisfies the user's request is selected.
Monitoring. Monitoring the usage of cloud resources is important for their efficient and effective use.
Load balancing. Hosts should be neither over-utilized nor under-utilized; the load must be balanced across them.
Due to the day-by-day increase in cloud usage, the load on the nodes is increasing drastically, so the load balancing problem needs to be resolved. Load balancing is a must for the proper utilization of resources. To reduce the load balancing problem, cloud service providers should consider the following quality of service (QoS) parameters of load balancing. In this paper, we focus mainly on the Makespan and Response Time QoS parameters, performing load balancing through process and CPU scheduling.

1.3 Load Balancing QoS (Quality of Service) Parameters

Makespan. A very important factor for scheduling; the algorithm designer tries to reduce the completion time of the tasks [5].
Overloaded host. A node with a load greater than the threshold is overloaded; the load should stay below the threshold value.
Migration rate. For better performance of the cloud system, the rate of VM migration should be low.


Response time. The idle time before a task gets the resource [6].
Throughput. Represents how fast resources are allocated to requests; it must be high.
Resource utilization. The most important QoS parameter for the service provider, because to increase their profit they need to optimize resource utilization.
In this paper, we focus on a major issue in cloud load balancing, i.e., task scheduling. Task scheduling consists of process scheduling and CPU scheduling. Normally, classical scheduling methods such as FCFS, SJF, RR, priority-based scheduling, etc., are used for both process scheduling and CPU scheduling, but in our approach we use a combination of classical and metaheuristic scheduling algorithms. We first use the metaheuristic algorithm ACO for process scheduling and then use FCFS or RR for CPU scheduling. We compare our approach with the approach in which FCFS is used for process scheduling and FCFS or RR for CPU scheduling. Below is a brief introduction to the algorithms we use for implementing the scheduling process.

1.4 Existing Task Scheduling Approaches

Ant Colony Optimization (ACO) scheduling. The ant algorithm was introduced in 1996 by M. Dorigo. It is a heuristic, predictive scheduling algorithm devised on the basis of real-life ants [7]. Ant colony optimization is a bio-inspired, stochastic algorithm that mimics the foraging behavior of a group of ants to solve many combinatorial problems. Ants use a chemical, the pheromone, for indirect communication with other ants. Ants move randomly in search of food, and after finding a food source they secrete pheromone while returning home. Other ants follow the pheromone trail and also secrete pheromone along the path. Pheromone evaporates with time. When solving a combinatorial problem, the same concept is applied by using an artificial pheromone to indicate the desirability of a solution. Ants use this information probabilistically to build their solutions and later update it. The ACO algorithm has been used to solve many NP problems, such as TSP, hence we apply it to task scheduling.
First Come First Serve (FCFS) scheduling. In the first come first serve policy, preference is always given to the task that arrives first, regardless of any other property. It is a non-preemptive algorithm: the task that comes first completes its execution before the next process starts. It is a very fast method of task scheduling, but not an efficient one.
Round Robin (RR) scheduling. Round-robin scheduling is very simple and easy to implement. Each process gets the CPU in a circular fashion and is preempted after a specified time quantum. Starvation is not possible under this scheduling policy, but the turnaround time of longer tasks may increase.


Shortest Job First (SJF) scheduling. The process with the smallest burst time gets the CPU first. If two processes have the same burst time, the process with the earlier arrival time gets the resource. It is a non-preemptive scheduling algorithm.
Priority-based scheduling (non-preemptive). In this scheduling, priorities are defined for processes and determine CPU allocation, i.e., the process with the highest priority value is scheduled first. If two processes have the same priority value, FCFS is used to resolve the tie. Starvation of a process is possible under this policy.
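To illustrate how these classical policies differ on the QoS metrics discussed above, the sketch below simulates FCFS and RR on a single CPU with all tasks arriving at time 0 (the burst times are hypothetical):

```python
def fcfs(burst_times):
    """FCFS: run tasks in arrival order; returns (makespan, avg response time)."""
    t, responses = 0, []
    for b in burst_times:
        responses.append(t)   # time waited before first getting the CPU
        t += b
    return t, sum(responses) / len(responses)

def round_robin(burst_times, quantum=2):
    """RR: each task gets the CPU for `quantum` units in circular order."""
    remaining = list(burst_times)
    first_run = [None] * len(remaining)
    t, done = 0, 0
    while done < len(remaining):
        for i, r in enumerate(remaining):
            if r <= 0:
                continue
            if first_run[i] is None:
                first_run[i] = t          # first time this task gets the CPU
            run = min(quantum, r)
            t += run
            remaining[i] = r - run
            if remaining[i] == 0:
                done += 1
    return t, sum(first_run) / len(first_run)

bursts = [5, 3, 8, 2]             # hypothetical burst times
print("FCFS:", fcfs(bursts))       # makespan 18, avg response (0+5+8+16)/4 = 7.25
print("RR  :", round_robin(bursts))  # same makespan, lower avg response time
```

On one CPU the makespan is the same for both, but RR gives every task the CPU earlier, which is why response time is the metric on which these policies diverge.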

2 Literature Survey

In cloud computing, task scheduling is a trending area of research, and many researchers have proposed different task scheduling algorithms for cloud computing. Su et al. [8] gave a solution for scheduling large problems in the cloud by dividing them into dependent tasks; a DAG tree is used to connect the tasks in sequence. This algorithm worked only on reducing the Makespan. Lee et al. [9] proposed profit-driven scheduling in cloud services, where profit indicates the quality of service. Li et al. [10] proposed an algorithm for scheduling parallel tasks in the cloud to enhance performance. Xu et al. [11] proposed a job scheduling model for the cloud called the Berger model. It uses two constraints: the first divides the user tasks according to QoS, and the second indicates the failure of resource utilization. Agarwal et al. [12] proposed a priority-based task scheduling algorithm in which the priority of a task depends on its size and VMs are prioritized on the basis of MIPS; FCFS scheduling is used for task allocation to the VMs. All these research works give different solutions for task scheduling in cloud computing. In this paper, we use the metaheuristic algorithm ACO for process scheduling and the FCFS or RR algorithm for CPU scheduling. The objectives are minimizing the Makespan and the Response Time.

Problem Statement
Let there be n processes and m heterogeneous processors. Each process can be assigned to any processor. One processor may finish a task very fast while another might take more time. Our task is to schedule the processes in such a way that each processor is utilized effectively.

Example
1. We initialize a pheromone matrix τij of size n × m.
2. We need a processor matrix U of size n × m, which shows how much of a processor is needed by a process. As every processor has a different computation speed, a process may take a different share of each processor; e.g., Uij = 0.2 means process i needs 20% of the computation of processor j.


3. We build a schedule matrix S of size n × m, which shows to which processor a particular process is allocated. We assume non-preemptive scheduling, i.e., a process allocated to a processor cannot be preempted. Sij = 1 means process i is allocated to the jth processor. Each row contains at most a single 1, i.e., one process is scheduled on only one processor.
4. We take k ants; each ant builds a schedule matrix S based on the transition formula of ACO.
5. We check the feasibility of the schedule using the feasibility formula described in the proposed approach section.
6. We evaporate the pheromone according to the pheromone evaporation formula and update the pheromone using the average utilization of the schedule.
7. We iterate steps 3–6 until the stopping condition is reached.
8. The result is a balanced schedule.

3 Proposed Approach

First of all, we randomly distribute the processes among the processors to make a schedule. We take k ants; each ant has a schedule. Then we check the feasibility of each schedule, i.e., no processor should receive more than its capacity. If the schedules are feasible, we pick the best schedule among the k ants and save it as the global best schedule. We then iterate the same procedure for the chosen number of iterations, and whenever a schedule is found that is superior to the global best solution, we update the global best solution. In the end, we obtain the schedule that is best over the given number of iterations. The scheduling process consists of two types of scheduling:
1. Process scheduling
2. CPU scheduling
Usually, the same classical scheduling method is used for both CPU scheduling and process scheduling. In our approach, we use a combination of metaheuristic and classical methods: the metaheuristic algorithm ACO for process scheduling and the FCFS or RR algorithm for CPU scheduling. We compare the previous approach with our proposed approach and determine which one is better on the Makespan and Response Time QoS parameters, as shown in Fig. 1.

3.1 Previous Approach

FCFS is used for process scheduling and FCFS or RR is used for CPU scheduling.


Fig. 1 Comparing previous and proposed approaches

3.2 Proposed Approach

ACO is used for process scheduling and FCFS or RR is used for CPU scheduling.
ACO (Ant Colony Optimization). We use the ACO concept to select the best schedule of processes, which gives better load balancing across all the processors by distributing the tasks among the processors in an efficient way.

3.3 Formulas Used

The following ACO formulas are used in process scheduling.

Transition Probability.

$p_{ij}^{k}(t) = \dfrac{[\tau_{ij}(t)]^{\alpha}\,[\eta_{ij}]^{\beta}}{\sum_{l \in N_i^k(t)} [\tau_{il}(t)]^{\alpha}\,[\eta_{il}]^{\beta}}$  (1)

Here, $p_{ij}^{k}(t)$ is the probability that the kth ant schedules the ith process on the jth processor, $N_i^k(t)$ is the set of possible processors for fulfilling the ith process, $\tau_{ij}(t)$ is the amount of pheromone on path i–j at time t, $\eta_{ij}$ is the heuristic desirability of the assignment, and $\alpha$ and $\beta$ weight the influence of the pheromone and the heuristic, respectively.

Pheromone Update.

$\tau_{ij}(t+1) = \tau_{ij}(t) + \sum_{k=1}^{n_k} \Delta\tau_{ij}^{k}$  (2)

Here $\tau_{ij}(t)$ is the amount of pheromone on path i–j at time t, $\Delta\tau_{ij}^{k}$ is the change in pheromone deposited by the kth ant, and $n_k$ is the number of ants.


Pheromone Evaporation.

$\tau_{ij} \leftarrow (1 - e)\,\tau_{ij}$

where e is the evaporation rate.

Feasibility.

$\forall m: \quad \sum_{i=1}^{n} P_i \le 1$  (3)

Here m is the number of processors, n is the number of processes, and Pi is the fraction of a processor required by process i. Previously, the long-term scheduler built the schedule of processes for the ready queue on the basis of the FCFS scheduling policy, but our approach uses the ACO concept instead. Although the ACO approach takes more time than FCFS to build the schedule of processes for the ready queue, it builds the schedule so well, according to the capacities of the processors, that when FCFS or RR is used afterwards for CPU scheduling, the Makespan and response time are reduced.
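A minimal sketch of this ACO scheduling loop is given below. It assumes remaining processor capacity as the heuristic term η and a load-spread criterion for the pheromone deposit, since the chapter does not spell these out; the parameter defaults mirror those reported in Sect. 3.6:

```python
import random

def aco_schedule(U, ants=50, iters=100, alpha=0.3, beta=1.0, evap=0.2):
    """Sketch of the ACO process-scheduling loop described above.
    U[i][j] = fraction of processor j that process i would consume."""
    n, m = len(U), len(U[0])
    tau = [[1.0] * m for _ in range(n)]             # pheromone matrix (step 1)
    best, best_spread = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            load, assign = [0.0] * m, []
            for i in range(n):
                # Transition rule (Eq. 1): pheromone**alpha * heuristic**beta,
                # with remaining capacity as the assumed heuristic term.
                w = [(tau[i][j] ** alpha) * max(1.0 - load[j] - U[i][j], 0.0) ** beta
                     for j in range(m)]
                if sum(w) == 0:                      # no feasible processor left
                    assign = None
                    break
                j = random.choices(range(m), weights=w)[0]
                assign.append(j)
                load[j] += U[i][j]                   # feasibility (Eq. 3) kept by construction
            if assign is None:
                continue
            spread = max(load) - min(load)           # how balanced the schedule is
            if spread < best_spread:                 # keep the global best schedule
                best, best_spread = assign, spread
            for i, j in enumerate(assign):           # pheromone deposit (Eq. 2)
                tau[i][j] += 1.0 / (1.0 + spread)
        tau = [[(1 - evap) * t for t in row] for row in tau]   # evaporation
    return best

random.seed(1)
U = [[random.uniform(0.05, 0.3) for _ in range(4)] for _ in range(12)]
print(aco_schedule(U))   # processor index chosen for each of the 12 processes
```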

3.4 Task Scheduling Components

The following components are used in the task scheduling process.
Jobs. The processes requesting the CPU for their execution.
Process scheduler or long-term scheduler.
• Selects processes to move to the ready queue for the CPU.
• Invoked infrequently (every few seconds, sometimes minutes), so it may be slow.
• Controls the degree of multiprogramming.
Ready queue. The queue of all processes waiting for the CPU to execute them.
CPU scheduler or short-term scheduler.
• Schedules the execution of the processes in the system's ready queue.
• Sometimes it is the only scheduler in the operating system.
• Invoked frequently, so it must be fast.

3.5 Concept of the Proposed Approach

Process scheduling. The jobs, i.e., the processes requesting the CPU, are first scheduled according to the ACO approach: the long-term scheduler allocates the processes according to the capacities of the processors, where they can execute in the best possible way, and forms a ready queue of the processes.


CPU scheduling. The short-term scheduler schedules the processes from the ready queue onto the CPU according to the FCFS or RR scheduling policy.

Experimental Results and Analysis

3.6 Experimental Parameters

The considered dataset consists of 50 processes for 4 heterogeneous processors. Each process has a different burst time according to the CPU processing capacity. The simulation of our proposed method is performed 10 times with different numbers of iterations, i.e., 10, 20, 30, 50, and 100. The proposed method and the other existing methods were implemented in Python 3.14, running on an Intel Core i7-8650U CPU @ 1.90 GHz with 16 GB RAM. The ACO parameters of our proposed method are: α (influence of pheromone) = 0.3, β (influence of heuristic) = 1, pheromone evaporation rate = 0.2, and number of ants = 50.

3.7 Result Analysis

Here we take tasks that are mutually independent and non-preemptive. The tasks are first scheduled by the process scheduler using the ACO or FCFS algorithms and sent to the ready queue. From the ready queue, the tasks are sent for CPU scheduling using FCFS or RR. We take 50 tasks together with their requirement with respect to each available processor, using a randomly created dataset that contains the requirements of the tasks. Figures 2 and 3 show that process scheduling by ACO followed by CPU scheduling by FCFS or RR gives a better result for both QoS parameters, i.e., Makespan and Response Time. For the process scheduling step alone, FCFS performs better, as shown in Fig. 4, but overall ACO performs better. Figure 2 clearly shows that the proposed approaches ACO-FCFS and ACO-RR take less average Makespan time than FCFS-FCFS and FCFS-RR, respectively. Figure 3 clearly shows that the proposed approaches ACO-FCFS and ACO-RR take less average Response Time than FCFS-FCFS and FCFS-RR, respectively.

Fig. 2 Average makespan time

Fig. 3 Average response time

Fig. 4 Scheduling time (in sec.)

Figure 4 shows that the proposed approach (ACO) takes more scheduling time than the previous approach (FCFS) but it schedules so well that it reduces the overall average Makespan and Response time.

4 Conclusion

In this paper, we performed a comparative analysis of the metaheuristic algorithm ACO and the classical scheduling algorithms FCFS and RR for load balancing in the cloud through task scheduling, one of the key issues in load balancing nowadays. Task scheduling is an NP-hard problem. We first applied FCFS for process scheduling with FCFS or RR for CPU scheduling, and then ACO for process scheduling (allocating the processes according to the capacities of the processors) with FCFS or RR for CPU scheduling, and compared them on the Makespan and Response Time QoS parameters of load balancing. We found that using ACO for process scheduling followed by FCFS or RR for CPU scheduling yields the minimum values of Makespan and Response Time. We conclude that it is better to use a combination of metaheuristic and classical methods for task scheduling than classical methods alone.


References

1. Wen W-T et al (2015) An ACO-based scheduling strategy on load balancing in cloud computing environment. In: 2015 ninth international conference on frontier of computer science and technology. IEEE
2. Li K et al (2011) Cloud task scheduling based on load balancing ant colony optimization. In: 2011 sixth annual ChinaGrid conference. IEEE
3. Kalra M, Singh S (2015) A review of metaheuristic scheduling techniques in cloud computing. Egypt Inform J 16(3):275–295
4. Singh P, Dutta M, Aggarwal N (2017) A review of task scheduling based on metaheuristics approach in cloud computing. Knowl Inf Syst 52(1):1–51
5. Kaur A, Kaur B, Singh D (2017) Optimization techniques for resource provisioning and load balancing in the cloud environment: a review. Int J Inform Eng Electron Bus 9(1):28
6. Li K, Wang Y, Liu M. A task allocation schema based on response time optimization in cloud computing
7. Tawfeek MA, El-Sisi A, Keshk AE, Torkey FA (2013) Cloud task scheduling based on ant colony optimization. In: 2013 8th international conference on computer engineering and systems (ICCES). IEEE, pp 64–69
8. Su S, Li J, Huang Q, Huang X, Shuang K, Wang J (2013) Cost-efficient task scheduling for executing large programs in the cloud. Parallel Comput 39:177–188
9. Lee YC, Wang C, Zomaya AY, Zhou BB (2012) Profit-driven scheduling for cloud services with data access awareness. J Parallel Distrib Comput 72:591–602
10. Li J, Qiu M, Ming Z, Quan G, Qin X, Zhonghua G (2012) Online optimization for scheduling preemptable tasks on IaaS cloud systems. J Parallel Distrib Comput 72:666–677
11. Xu B, Zhao C, Hu E, Hu B (2011) Job scheduling algorithm based on Berger model in cloud environment. Adv Eng Softw 42:419–425
12. Agarwal A, Jain S (2014) Efficient optimal algorithm of task scheduling in cloud computing environment. Int J Comput Trends Technol 9:344–349

Chapter 22

Empirical Modeling and Multi-objective Optimization of 7075 Metal Matrix Composites on Machining of Computerized Numerical Control Wire Electrical Discharge Machining Process

Atmanand Mishra, Dinesh Kumar Kasdekar and Sharad Agrawal

A. Mishra (B) · D. K. Kasdekar · S. Agrawal
Department of Mechanical Engineering, Madhav Institute of Technology and Science, Gwalior 474005, MP, India
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2020
H. Sharma et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-0426-6_22

1 Introduction

At present, there is a steadily rising demand for advanced materials such as metal matrix composites (MMCs), which offer mechanical characteristics like good wear resistance with sufficient strength, light weight, toughness, and a high strength-to-weight ratio. For these reasons, composite materials are being used more widely in domains such as aerospace, defense, and automotive. Here, the metal matrix composite uses Al7075 as the base metal with SiC as reinforcement, which behaves as a hard ceramic reinforcement. Because of the higher hardness and strength, and also the very low thermal conductivity, conventional machines cannot machine MMCs and their alloys easily. Hence, WEDM becomes a vital machining method that also produces intricate and complex shapes easily. When performing experiments on WEDM, surface roughness and cutting speed are the major concerns. The experiments on WEDM use input parameters designed based on RSM: input factors such as Ton, Toff, WF, servo voltage, spark gap, and WT are varied while the remaining factors are held constant, to obtain suitable output responses. Several researchers have tried to optimize the WEDM process by using different optimization techniques. One commonly used optimization technique for the highly complex and nonlinear manufacturing operations in WEDM, which require an accurate model to evaluate CS and SR, is the probabilistic evolutionary modeling algorithm called the genetic algorithm (GA). A single objective can be optimized in one attempt to obtain the best solution that is globally maximized or minimized. With multiple objectives, the best solution is a trade-off between functions that behave oppositely to each other; when the output response functions conflict with each other, the problem is formulated as a multi-objective optimization problem, for which the non-dominated sorting algorithm has become popular. Many research works are discussed in this literature survey in detail.
Kushwah et al. [1] focused on manufacturing industries where dimensional accuracy, cost, time, surface roughness, and MRR are the main factors dominating productivity, for which WEDM is used. Magabe et al. [2] investigated surface quality, i.e., mean roughness depth (Rz), and MRR as output functions, taking WF, spark gap, Toff, Ton, and voltage as input functions, to raise production rates for Ni55.8Ti using the NSGA-II optimization method. Liu et al. [3] noted that these nontraditional processes have become extensively used and considered parameter optimization with the NSGA-II algorithm, finding that higher CS and lower SR correspond to decreased tool life. Rao et al. [4] evaluated MRR, wire wear ratio, and SR for machining of Al7075/SiCp on WEDM; various inputs were used to perform the machining, the empirical formulas obtained by the RSM methodology were checked by ANOVA, NSGA-II was used to form Pareto sets for the optimal solution, and the surfaces were analyzed using SEM. Sriram et al. [5] investigated how WEDM performance is affected by the input functions during machining of Al7075 alloy, optimizing SR and MRR by the Taguchi technique based on an L9 orthogonal array, with Minitab software used for the analysis. Shanthi et al. [6] optimized the performance of WEDM when zinc-coated wire is used; the effect of the various inputs on the desired response, MRR, was optimized by a genetic algorithm. Xu et al. [7] worked on the effect of workpiece properties on the recast layer formed during machining of Al7075 alloy; the WEDM inputs were varied to minimize surface roughness with decreased thermal power density and to improve corrosion and wear resistance. Patel et al. [8] developed a non-contact technique to remove electrically conductive material through the inspiration of spark EDM; WEDM can create complex geometry and the required tolerances without performing a secondary operation. Garg et al. [9] worked on titanium material machined by WEDM, with experiments performed based on a Box–Behnken design; cutting speed and surface roughness were optimized by the NSGA-II tool.
Due to the requirement of high-performance goods, parts should be made of MMCs. To elaborate our work after these research investigations, we consider (a) the impact of the different inputs and (b) two WEDM performance outputs, viz., SR and CS, as objective functions for simultaneous optimization. Parametric modeling for the Al7075 MMC machined on WEDM was carried out, and the resulting models were optimized.
RSM is used for preparing models of the desired outputs; these models are then optimized, formulated as a multi-objective optimization problem whose objectives conflict with each other. Such problems can be solved effectively by the non-dominated sorting genetic algorithm. It is required to determine the optimal input settings to attain maximum yield under the desired circumstances, i.e., maximizing cutting speed and minimizing surface roughness for different classes of materials.

2 Experimental Details

In this experiment, we fabricated the 7075 MMC by the stir casting method, with 2 wt% SiC particulates. Owing to the large potential of the 7xxx series of Al alloys, they are used in aerospace and automotive factories because of their good corrosion resistance and high strength-to-weight ratio. In Al7075, zinc is the second major element, with a weight percentage of 5.65%, imparting better ductility and formability as well as good corrosion resistance to the alloy. The remaining constituents of the alloy are titanium, chromium, copper, iron, magnesium, and silicon. Table 1 presents the Al7075 composition in weight %.

Table 1 Chemical ingredients of 7075

Element    Zn     Cu     Mg     Cr     Si     Ti     Fe     Al
Weight %   5.65   1.78   2.51   0.27   0.36   0.19   0.48   Balance

The SiC reinforcement is used in particulate form for the fabrication of the Al7075 composite; particles of 120 mesh size are used. The WEDM setup is then used to cut the desired length in a specific direction in the workpiece, with the wire movement in any direction controlled by the CNC. For this, the wire is held properly in guide pins on the upper and lower sections, apart from the workpiece. On the WEDM, when the electrical discharge takes place between the wire and the workpiece, the wire is subjected to complex oscillation; for this reason the wire is held in the proper position against the workpiece. A fresh wire is always used for each cut, which takes place at a particular set of parameter levels in the experiment. The experiment was performed on the WEDM machine at the Central Institute of Plastics Engineering and Technology, Gwalior, shown in Fig. 1. The workpiece specimen is rectangular, 12 mm thick, produced from the midsection of the cast ingot. The length and breadth of the specimen produced after machining are 10 mm × 10 mm in each trial. A smooth surface finish was obtained on both the top and bottom surfaces of the workpiece specimen by using 600-grade emery paper. The experiment was performed on a three-axis computer numerically controlled (CNC) wire electrical discharge machine, model ELECTRONICA ENOVA 1S, with its PLC based on ELPULSE 55. Brass wire of 0.25 mm diameter was used as the electrode. The dielectric fluid used in the WEDM is deionized water, and a chiller is used to maintain the dielectric fluid at a temperature of 30 °C.


Fig. 1 WEDM machine used in experiments

The gap between the specimen and the wire electrode is maintained between 0.025 mm and 0.075 mm. In the WEDM, the position of the wire is controlled by the CNC positioning system. The surface of the workpiece specimen is cleaned with acetone after machining. Surface roughness (Ra) was measured with a Time@3100 surface roughness tester. The average surface roughness was measured along the direction of cut (perpendicular to the wire travel path) at three different positions. In the surface tester, the probe travel length is 2.50 mm and the stylus travel speed is 1 mm/s. For machining the composite on the WEDM, we selected four input parameters, namely pulse on time, pulse off time, wire feed, and wire tension, to evaluate their impact on SR and CS. A pilot experiment was used to decide the process parameter ranges for this research work. The limits of the various inputs and their respective notations are presented in Table 2.

Table 2 Feasible ranges of input variables

Input variables   Units    Notation   Lower limit   Upper limit
Pulse on time     μs       Ton        110           118
Pulse off time    μs       Toff       48            60
Wire feed rate    mm/min   WF         2             4
Wire tension      gm       WT         8             10

3 Development of the Response Surface Model

RSM is a collection of mathematical and statistical techniques utilized for modeling and optimizing process characteristics. It involves quantitative independent variables, and the response function takes the form of the second-order quadratic polynomial shown in Eq. 1, called the regression model:

$X_i = f(T_{on}, T_{off}, WF, WT) \pm \varepsilon$  (1)

where Xi represents the required response variable, f represents the response function, Ton, Toff, WF, and WT are the independent variables, and ε is the statistical error. The 'Design Expert 6.0.8' software is used to determine the approximate coefficients of the regression model with the help of the experimental data. The experiment was designed with 30 trials based on the central composite design (CCD) technique. With the help of DOE, we consider a full factorial design with all possible combinations of the factors at two levels (high, +1 and low, −1), composed of 16 full factorial points, 8 star points, and 6 central points (at coded level 0), corresponding to a face-centered CCD for which the value of ξ is 1. The experiment was conducted following a randomization strategy. When developing the empirical model for WEDM machining of the Al composite, the input parameters (control factors) were selected based on a detailed preliminary investigation, the experience of the technician, and the literature. In this work, we take a quadratic model Xi, which contains linear, squared, and interaction terms in the response surface, so that the input terms span the entire process. The data necessary for building a response model can be obtained from the design of experiments; after performing the experiments, the results were obtained with a CCD design matrix. The DOE takes 30 experimental trials at four different input parameters. Table 2 presents the actual values of the input machining variables and their feasible limits. Using the experimental results, we determine the constants and coefficients of the model. The adequacy of the proposed models is checked by the investigator through analysis of variance, i.e., the F-test. The models of CS and Ra give R² values of 0.96 and 0.87, indicating that the models fit well and agree with the experiments. Table 3 presents the analysis of variance of the response surface regression generated with the CCD in DOE.
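The paper fits the model in Design Expert 6.0.8; as a generic analogue, the sketch below fits a second-order polynomial of the form in Eq. 1 with scikit-learn (the design rows and responses are invented placeholders, not the paper's data):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Hypothetical rows of the CCD design matrix (Ton, Toff, WF, WT) and
# illustrative cutting-speed responses; the real study used 30 trials.
X = np.array([[110, 48, 2, 8], [118, 48, 2, 10], [110, 60, 4, 8],
              [118, 60, 4, 10], [114, 54, 3, 9], [114, 54, 3, 9]])
y = np.array([2.1, 4.8, 1.9, 4.2, 3.1, 3.0])

# Second-order model of Eq. 1: intercept, linear, squared, and interaction terms.
quad = PolynomialFeatures(degree=2, include_bias=True)
model = LinearRegression().fit(quad.fit_transform(X), y)

# With only 6 illustrative rows the 15-term model fits near-exactly; the
# paper's 30 CCD trials gave R^2 = 0.96 for CS and 0.87 for Ra.
print("R^2 on training data:", model.score(quad.fit_transform(X), y))
```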

4 Multi-objective Optimization

In our study, we focus on maximizing CS and minimizing Ra simultaneously, but both tend to increase together. Consequently, a single optimal solution does not give a relevant result for this goal, when both objectives


Table 3 Analysis of variance (ANOVA) for cutting speed

Source           DF   SS        MS        F-ratio   P
Regression       14   185.70    13.26     30.70
Residual error   15   6.48      0.43
Lack-of-fit      10   6.48      0.65
Pure error        5   0.00015   0.00003
Total            29