Advances in Computational Intelligence: Proceedings of Second International Conference on Computational Intelligence 2018 [1st ed.] 978-981-13-8221-5;978-981-13-8222-2

This book presents the proceedings of the Second International Conference on Computational Intelligence 2018 (ICCI 2018).



Advances in Intelligent Systems and Computing 988

Sudip Kumar Sahana · Vandana Bhattacharjee, Editors

Advances in Computational Intelligence Proceedings of Second International Conference on Computational Intelligence 2018

Advances in Intelligent Systems and Computing Volume 988

Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing, Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, Electronic Engineering, University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University, Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro, Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management, Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong

The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, life science are covered. The list of topics spans all the areas of modern intelligent systems and computing such as: computational intelligence, soft computing including neural networks, fuzzy systems, evolutionary computing and the fusion of these paradigms, social intelligence, ambient intelligence, computational neuroscience, artificial life, virtual worlds and society, cognitive science and systems, Perception and Vision, DNA and immune based systems, self-organizing and adaptive systems, e-Learning and teaching, human-centered and human-centric computing, recommender systems, intelligent control, robotics and mechatronics including human-machine teaming, knowledge-based paradigms, learning paradigms, machine ethics, intelligent data analysis, knowledge management, intelligent agents, intelligent decision making and support, intelligent network security, trust management, interactive entertainment, Web intelligence and multimedia. The publications within “Advances in Intelligent Systems and Computing” are primarily proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results. ** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex, DBLP, SCOPUS, Google Scholar and Springerlink **

More information about this series at http://www.springer.com/series/11156

Sudip Kumar Sahana · Vandana Bhattacharjee

Editors

Advances in Computational Intelligence Proceedings of Second International Conference on Computational Intelligence 2018


Editors Sudip Kumar Sahana Department of Computer Science and Engineering Birla Institute of Technology, Mesra Ranchi, Jharkhand, India

Vandana Bhattacharjee Department of Computer Science and Engineering Birla Institute of Technology, Mesra Ranchi, Jharkhand, India

ISSN 2194-5357 ISSN 2194-5365 (electronic) Advances in Intelligent Systems and Computing ISBN 978-981-13-8221-5 ISBN 978-981-13-8222-2 (eBook) https://doi.org/10.1007/978-981-13-8222-2 © Springer Nature Singapore Pte Ltd. 2020 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

This book constitutes the proceedings of the Second International Conference on Computational Intelligence (ICCI 2018), held during December 10–11, 2018, at Birla Institute of Technology, Mesra, Ranchi, India. ICCI 2018 was an international gathering for researchers working on all aspects of computational intelligence, and it provided a high-level academic forum for participants to disseminate new research findings in emerging areas of research. It also created a stimulating environment for participants to interact and exchange information on future challenges and opportunities in the field of computational intelligence. Bezdek (1994) defined computational intelligence as follows: a system is computationally intelligent if it deals with low-level data such as numerical data, has a pattern-recognition component, and does not use knowledge in the artificial intelligence sense. Bezdek and Marks (1993) further argued that computational intelligence should be based on soft computing methods, whose principal constituents are fuzzy logic (FL), neural computing (NC), evolutionary computation (EC), machine learning (ML), and probabilistic reasoning (PR). In this book, we aim to cover recent trends, developments, and future possibilities of these soft computing techniques, together with the bio-inspired computing techniques that have become widely popular in recent times. Computational intelligence has a rigorous theoretical side as well as an experimental one. In these proceedings, we balance theory and experiment by selecting both papers that present new theoretical findings in computational intelligence and application papers whose experiments explore new domains where computational intelligence techniques excel.
This book is divided into seven parts: Soft Computing; Evolutionary Computing and Bio Inspired Algorithms; Image Processing and Cognition Systems; Data Mining; Intelligent Systems and Modelling; Interdisciplinary Applications; and Other Applications of Computational Intelligence.


ICCI 2018 received 86 submissions from countries around the world. Each submission was reviewed by at least two reviewers. Based on these rigorous reviews, 28 high-quality papers were selected for publication, an acceptance rate of 32.55%.

Ranchi, India

Sudip Kumar Sahana Vandana Bhattacharjee

Acknowledgements

International Conference on Computational Intelligence (ICCI 2018) was a watershed event enriched with contributions from researchers all over the country and also from other countries. We were privileged to provide them with a platform to express their scientific intellect. It was these contributions that made ICCI a really successful endeavor. We are grateful to every single author who sent their submissions to the conference. ICCI 2018 would definitely not have been possible without the constant and untiring efforts of every single person in the organizing committee. We are thankful to them for making the conference the success that it was. Our special gratitude is toward the wonderful members of the program committee and reviewers who were our guiding light and led us in the right direction whenever we stood at crossroads during the organization of the conference. The high standard set by the papers is a reflection of the efforts that had been put by the reviewers. We would also like to thank the members of the editorial board of Springer publications for giving shape to these proceedings and fructifying the efforts put in by all involved in ICCI 2018. It is only through their zeal and encouragement that we have been able to assimilate these proceedings in a timely manner. ICCI 2018 expresses its sincere gratitude to one and all who contributed to making the event the grand success that it was. Dr. Sudip Kumar Sahana Dr. Vandana Bhattacharjee


Contents

Part I  Soft Computing

An Adaptive Neuro-Fuzzy Inference System-Based Intelligent Grid-Connected Photovoltaic Power Generation . . . 3
Neeraj Priyadarshi, Farooque Azam, Amarjeet Kumar Sharma and Monika Vardia

Efficient Energy Management in Hybrid Electric Vehicles Using DRBF Networks . . . 15
Ralli Sangno, Siba Prasada Panigrahi and Saurav Kumar

Torque and Current Noise Reduction of BLDC Motor Using Fuzzy Logic Control Strategy . . . 23
Goutam Goswami and P. R. Thakura

Part II  Evolutionary Computing and Bio Inspired Algorithms

Bi-objective Optimization of a Reconfigurable Supply Chain Using a Self-organizing Migration Algorithm . . . 39
L. N. Pattanaik, Paras Agarwal, Saloni Ranjan and Urja Narayan

Part III  Image Processing and Cognition Systems

Infected Area Segmentation and Severity Estimation of Grapevine Using Fuzzy Logic . . . 57
Reva Nagi and Sanjaya Shankar Tripathy

Image Encryption Using Modified Rubik’s Cube Algorithm . . . 69
Rupesh Kumar Sinha, Iti Agrawal, Kritika Jain, Anushka Gupta and S. S. Sahu

Part IV  Data Mining

Decision Support System for Business Intelligence Using Data Mining Techniques: A Case Study . . . 81
Pankaj Gupta and Bharat Bushan Sagar

Target Marketing Using Feedback Mining . . . 95
Ritesh Kumar and Partha Sarathi Bishnu

Part V  Intelligent Systems and Modelling

A Comparative Study Among Different Signaling Schemes of Optical Burst Switching (OBS) Network for Real-Time Multimedia Applications . . . 107
Manoj Kr. Dutta

Performance Analysis of Deflection Routing and Segmentation Dropping Scheme in Optical Burst Switching (OBS) Network: A Simulation Study . . . 119
Manoj Kr. Dutta

Secure Anti-Void Energy-Efficient Routing (SAVEER) Protocol for WSN-Based IoT Network . . . 129
Ayesha Tabassum, Sayema Sadaf, Ditipriya Sinha and Ayan Kumar Das

Mathematical Analysis of Effectiveness of Security Patches in Securing Wireless Sensor Network . . . 143
Apeksha Prajapati

A Novel Debugger for Windows-Based Applications . . . 157
J. Anirudh Sharma, Partha Sarthy Banerjee, Hritika Panchratan and Ayush

Nudge-Based Hybrid Intelligent System for Influencing Buying Decision . . . 165
Vivek Gupta and Sudip Kumar Sahana

Reputation-Based Reinforcement Algorithm for Motivation in Crowdsourcing Platform . . . 175
A. Vijayalakshmi and Chittaranjan Hota

A Hardware-in-a-Loop Setup for Benchmarking Robot Controllers . . . 187
Shiladitya Biswas, Arun Dayal Udai and Gaurav Kumar

Assembling Multi-Robots Along a Boundary of a Region with Obstacles—A Performance Upgradation . . . 201
Rahul Kumar Singh, Madhumita Sardar and Deepanwita Das

Part VI  Interdisciplinary Applications

Delineation of Mine Fire Pockets in Jharia Coalfield, India, using Thermal Remote Sensing . . . 215
Farzana Shaheen, A. P. Krishna and V. S. Rathore

Secure Data Sharing for Cloud-Based Services in Hierarchical Multi-group Scenario . . . 229
Ditipriya Sinha, Sreemana Datta and Ayan Kumar Das

Raga Identification in Rabindra Sangeet Using Motif Discovery . . . 245
Shreemoyee Dutta Choudhury, Soubhik Chakraborty and Niladri Chatterjee

Modeling a Raga-Based Song and Evaluating Its Raga Content: Why It Matters in a Clinical Setting . . . 255
Swarima Tewari and Soubhik Chakraborty

Data Analysis and Network Study of Non-small-cell Lung Cancer Biomarkers . . . 265
Koel De Mukherjee, Aman Vats, Deepshikha Ghosh and Santhosh Kumar Pillai

Design of an Energy-Efficient Cooperative MIMO Transmission Scheme Based on Centralized and Distributed Aggregations . . . 273
Sarah Asheer and Sanjeet Kumar

Recognize Vital Features for Classification of Neurodegenerative Diseases . . . 287
A. Athisakthi and M. Pushpa Rani

MM Big Data Applications: Statistical Resultant Analysis of Psychosomatic Survey on Various Human Personality Indicators . . . 303
Rohit Rastogi, Devendra Kumar Chaturvedi, Santosh Satya, Navneet Arora, Piyush Trivedi, Mayank Gupta, Parv Singhal and Muskan Gulati

Residual Exploration into Apoptosis of Leukemic Cells Through Oncostatin M: A Computational Structural Oncologic Approach . . . 327
Arundhati Banerjee, Rakhi Dasgupta and Sujay Ray

Part VII  Other Applications of Computational Intelligence

An Experimental Study of a Modified Version of Quicksort . . . 345
Aditi Basu Bal and Soubhik Chakraborty

An Improved Pig Latin Algorithm for Lightweight Cryptography . . . 355
Sandip Dutta and Sanyukta Sinha

Author Index . . . 367

About the Editors

Sudip Kumar Sahana received his B.E. degree in Computer Technology from Nagpur University, India in 2001, and his M.Tech. (Computer Science) and Ph.D. (Engineering) from the BIT, Mesra, India in 2006 and 2013, respectively. His major field of study is in Computer Science. He is currently working as an Assistant Professor at the Department of Computer Science and Engineering, BIT. His research and teaching interests include soft computing, computational intelligence, distributed computing and artificial intelligence. He has authored several research papers in the field of Computer Science and served as an editorial team member and reviewer for a number of journals. He is a lifetime member of the Indian Society for Technical Education (ISTE), India. Vandana Bhattacharjee is currently a Professor at the Department of Computer Science and Engineering, Birla Institute of Technology, (BIT) in Mesra, India. She completed her B.E. (CSE) at the BIT in 1989, and her M.Tech. and Ph.D. in Computer Science at Jawaharlal Nehru University, New Delhi in 1991 and 1995, respectively. She has published in several national and international journals and conference proceedings, and is a member of IEEE Computer Society and Life Member of the Computer Society of India. Her research areas include software process models, software cost estimation, software metrics, data mining and soft computing.


Part I

Soft Computing

An Adaptive Neuro-Fuzzy Inference System-Based Intelligent Grid-Connected Photovoltaic Power Generation Neeraj Priyadarshi, Farooque Azam, Amarjeet Kumar Sharma and Monika Vardia

Abstract This paper presents an artificial intelligence-based adaptive neuro-fuzzy inference system (ANFIS) for extracting peak power from solar modules. The controller does not require prior knowledge of the system database and is economical to implement, as no additional sensor is needed. The performance of the ANFIS control is investigated on a grid-integrated photovoltaic (PV) system. The ANFIS supervisory control regulates the duty cycle of a buck/boost converter under abrupt changes in solar insolation. Simulated responses show that ANFIS behaves robustly, rapidly, and precisely under different operating conditions, whereas classical methods exhibit more oscillatory behavior and a longer settling period in reaching maximum PV power. Keywords ANFIS · dSPACE · PV · MPPT · Buck/boost converter

1 Introduction

Compared to fossil fuels, renewable energy sources have shown remarkable growth in power generation. Among them, the photovoltaic (PV) power generation system is considered the most widely accepted renewable technology today [1–6]. However, the PV-generated power is intermittent because it depends on solar irradiance and temperature. Therefore, maximum power point tracking (MPPT) methodologies are employed to ensure operation at the maximum power point under all operating situations. Various artificial intelligence (AI) techniques [7–11] have been reviewed that provide improved efficiency under dynamic conditions. Compared with the perturb and observe algorithm, fuzzy logic-based AI techniques make better use of the available maximum PV power under changing weather conditions. However, a fuzzy logic controller (FLC) has major disadvantages, viz. complex fuzzy rules and membership functions, and the requirement of prior system knowledge. The artificial neural network (ANN) is another important soft computing tool, but because of its complex backpropagation-based learning rules it is not regarded as an effective MPPT controller. To overcome these shortcomings, this work considers an ANFIS, which combines the benefits of the FLC and the ANN and provides rapid computational convergence for PV peak power extraction.

N. Priyadarshi (B) · A. K. Sharma
Department of Electrical Engineering, Birsa Institute of Technology (Trust), Ranchi 835217, India
e-mail: [email protected]
A. K. Sharma, e-mail: [email protected]
F. Azam
School of Computing & Information Technology, REVA University, Bangalore 560064, India
e-mail: [email protected]
M. Vardia
Faculty of Engineering, Pacific University, Udaipur 313003, India
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2020
S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988, https://doi.org/10.1007/978-981-13-8222-2_1

2 ANFIS MPPT Algorithm

Figure 1 illustrates the overall structure of the ANFIS-based MPPT. An artificial intelligence-based system takes decisions automatically, as a human being would, according to the operating conditions. A fuzzy logic controller treats its variables as linguistic terms, applies an inference-based rule base, and through defuzzification generates a numerical value used as the duty ratio of the buck/boost converter; antecedent and consequent stages make up the fuzzy logic control implementation. ANN techniques are based on biological neurons whose weights are computed using learning rules. Jang proposed the ANFIS technique, which combines the FLC and the ANN into an adaptive structure. It uses the backpropagation technique as well as the least-squares methodology. The training data are presented over a number of epochs while the fuzzy rule base is generated, and the membership function parameters are adjusted until the training error is minimized. Before the trained ANFIS is used as the MPPT controller, the training data are validated against checking data, as outlined in the flowchart (Fig. 2). Table 1 lists the design specifications. Figure 3 depicts the architecture of the ANFIS controller, which comprises a first-order Sugeno inference system and combines backpropagation with least-squares methods.
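The two controller inputs suggested by Fig. 4 (an error signal and its change, defuzzified into a converter duty cycle) can be sketched as below. This is a hypothetical illustration: the choice e(k) = ΔP/ΔV, the sampling scheme, and the function name are assumptions, not taken from the paper.

```python
# Hypothetical sketch of the two ANFIS/fuzzy MPPT inputs: an error signal
# and its change, later mapped to a buck/boost converter duty cycle.
# The choice e(k) = dP/dV (which vanishes at the maximum power point of the
# PV curve) is an assumption, not the paper's stated definition.
def mppt_inputs(p, v, p_prev, v_prev, e_prev):
    """Return (error, change_of_error) from sampled PV power and voltage."""
    dv = v - v_prev
    # Guard against division by zero when the voltage sample is unchanged
    e = (p - p_prev) / dv if abs(dv) > 1e-9 else 0.0
    return e, e - e_prev
```

At the maximum power point the slope dP/dV vanishes, so the controller's job is to drive this error toward zero.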

Table 1 Designed specifications of ANFIS

Parameters                    Values
Training data                 200
No. of epochs                 50
Membership functions          49
Type of membership function   Triangular


Fig. 1 Grid integration using ANFIS MPPT

Fig. 2 Description of an ANFIS-based PV generation

This structure consists of five layers and two rules. Each rule expresses its output as a linear combination of the input variables:

Rule I: If x is A1 and y is B1, then f1 = m1 x + n1 y + K1  (1)

Rule II: If x is A2 and y is B2, then f2 = m2 x + n2 y + K2  (2)

where Aj and Bj are the fuzzy sets associated with the antecedent inputs, and mj, nj, Kj are the consequent constants.

Fig. 3 Structural design of an ANFIS algorithm

Layer I: The output of node j is the membership grade of its input:

OP1,j = µAj(x), j = 1, 2  (3)
OP1,j = µB(j−2)(y), j = 3, 4  (4)

where µAj(x) and µBj(y) are the membership functions.

Layer II: The firing strength of rule j is the product of the incoming membership grades:

OP2,j = Wj = µAj(x) · µBj(y), j = 1, 2  (5)

Layer III: The normalized firing strength is the ratio of the jth firing strength to the total firing strength:

OP3,j = W̄j = Wj / (W1 + W2), j = 1, 2  (6)

where W̄j is the normalized firing strength.

Layer IV: Node j weights its rule's linear consequent by the normalized firing strength:

OP4,j = W̄j fj = W̄j (mj x + nj y + Kj), j = 1, 2  (7)

Layer V: The overall output is the sum of the weighted rule outputs (Figs. 4, 5, and 6):

OP5 = Σj W̄j fj  (8)
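The five-layer, two-rule first-order Sugeno computation of Eqs. (1)–(8) can be sketched as follows. The membership functions and consequent constants below are illustrative placeholders, not the paper's trained values (the paper trains 49 triangular membership functions over 50 epochs).

```python
# A minimal two-rule, five-layer first-order Sugeno forward pass per
# Eqs. (1)-(8). Membership functions and consequent constants are
# illustrative, not the paper's trained parameters.
def tri(x, a, b, c):
    """Triangular membership function (the paper's chosen MF type)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def anfis_output(x, y, rules):
    """rules: list of (muA, muB, (m, n, K)) tuples per Eqs. (1)-(2)."""
    # Layers I-II: membership grades and rule firing strengths W_j (Eq. 5)
    w = [muA(x) * muB(y) for muA, muB, _ in rules]
    total = sum(w) or 1.0
    # Layers III-V: normalize (Eq. 6), weight each rule's linear consequent
    # (Eq. 7), and sum the contributions (Eq. 8)
    return sum((wj / total) * (m * x + n * y + K)
               for wj, (_, _, (m, n, K)) in zip(w, rules))

# Two illustrative rules with triangular antecedents
rules = [
    (lambda x: tri(x, -1, 0, 1), lambda y: tri(y, -1, 0, 1), (1.0, 0.0, 0.0)),
    (lambda x: tri(x, 0, 1, 2),  lambda y: tri(y, 0, 1, 2),  (0.0, 1.0, 0.5)),
]
```

In training, backpropagation tunes the antecedent (membership) parameters while least-squares estimation fits the consequent constants (m, n, K), as the section describes.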

Fig. 4 MATLAB/Simulink-generated membership. a Editor FIS, b error, c error change, d duty cycle


Fig. 5 MATLAB/Simulink-based fuzzy rules

3 Inverter Controller

In this paper, dSPACE with its programmed library components is used to interface the controller, with the processor running the on-chip peripherals. The gating pulses are generated through MATLAB and the digital signal processing interface. Sine pulse width modulation is implemented for inverter control on the dSPACE platform. The RTI (master bit I/O) DSP controller is isolated through an opto-isolating interface circuit. The simulation is executed on a 64-bit MPC8240 processor with a PPC603e core peripheral. The four gating signals are realized with the help of the master bit I/O, and the compiled simulation is translated into C code for automatic linking of the hardware interface (DS1104ADC, DS1104BIT OUT, and DS1104DAC are employed as I/O ports) (Fig. 7).
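The sine PWM comparison described above (a sinusoidal reference compared against a triangular carrier to derive four gating signals) can be sketched as follows. The frequencies, modulation index, and switching pattern are illustrative assumptions, not the paper's dSPACE implementation.

```python
# Sketch of sine PWM (SPWM) for a single-phase H-bridge: a sinusoidal
# reference is compared against a triangular carrier; the comparison drives
# the four gate signals. f_ref, f_carrier, and m_a are assumed placeholders.
import math

def spwm_gates(t, f_ref=50.0, f_carrier=5000.0, m_a=0.8):
    """Return (S1, S2, S3, S4) gate levels at time t (seconds)."""
    ref = m_a * math.sin(2 * math.pi * f_ref * t)
    # Triangular carrier swinging between -1 and +1
    frac = (t * f_carrier) % 1.0
    carrier = 4 * frac - 1 if frac < 0.5 else 3 - 4 * frac
    high = ref >= carrier
    # Diagonal switch pairs conduct together; each leg is complementary
    return (high, not high, not high, high)
```

In the hardware setup, these four signals would correspond to the gating outputs driven through the DS1104 master bit I/O via the opto-isolated interface.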

Fig. 6 a Fuzzy rule viewer, b surface fuzzy viewer

Fig. 7 SPWM generation using dSPACE DS1104 interfacer

4 Simulink Implementation

The steady-state performance of the ANFIS control for the grid-connected PV system is analyzed by simulation. The result in Fig. 8 shows that without any MPPT methodology the PV array tracks only 140 W at a solar insolation of 1000 W/m2, although the rated power of the PV module at G = 1000 W/m2 is 200 W. The PV generation efficiency is therefore quite low without MPPT, and an MPPT controller is required to reach 200 W at this insolation. In this work, the ANFIS controller solves this issue: at the same solar irradiance, the PV module generates the full 200 W. The computer-simulated results demonstrate that the proposed ANFIS MPPT controller works efficiently and shows excellent steady-state performance. The performance of the proposed grid-connected PV system has been investigated at different irradiance levels. The simulated responses (Fig. 9) show that the grid and inverter currents are synchronized with the grid voltage, and the PV modules deliver grid and load power as per demand. The results suggest that the integrated inverter completely compensates the load-reactive power requirements; it also functions as an active filter, supplying the required load harmonic currents without degrading the quality of the grid current. The tracking and inverter efficiencies of the proposed structure have been measured against conventional control for PV grid integration: the proposed system achieves average tracking and inverter efficiencies of about 96% and 88%, respectively, across insolation levels, which compares favorably with similar works in the area. The performance of the developed integrated inverter for the grid-connected PV system is further investigated under varying insolation levels.
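The efficiency figures quoted here follow from the ratio of extracted to available PV power; a minimal sketch using the example numbers from this section:

```python
# Tracking efficiency: extracted PV power over the available maximum at a
# given insolation, expressed as a percentage.
def tracking_efficiency(p_extracted, p_available):
    """Return tracking efficiency in percent."""
    return 100.0 * p_extracted / p_available
```

Without MPPT the array delivers about 140 W of the 200 W available at 1000 W/m2 (70% tracking efficiency), whereas the ANFIS controller recovers the full 200 W.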

Fig. 8 a Proposed ANFIS MPPT control, b without MPPT

The ANFIS MPPT is compared with the conventional P&O MPPT algorithm in Fig. 10. The traditional P&O control is unable to track the actual MPP whenever the insolation level changes rapidly: the simulation results show that P&O-based MPPT fails to extract maximum power under sudden changes in irradiance and ambient temperature. The proposed MPPT, on the other hand, tracks the real MPP and extracts maximum power as temperature and solar insolation change. The obtained responses are quite satisfactory under varying insolation levels compared with the conventional control, with zero reactive power compensation at unity power factor. The simulated responses also illustrate that the traditional MPPT algorithms oscillate more around the MPP region, track less power, and are less accurate, whereas the proposed ANFIS-based maximum power tracker provides higher power tracking, better precision, and fast convergence speed. Moreover, the inverter strategy forces the inverter to synchronize with the grid at unity power factor with proper active and reactive power control. The technical soundness of these findings can be tested on the dSPACE platform, including anti-islanding protection for grid integration.
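The classical P&O baseline referred to above follows the textbook perturb-and-observe rule: perturb the duty cycle, observe the power change, and keep perturbing in the direction that increased power. A minimal sketch (the step size and duty-cycle clamping are assumptions):

```python
# Classical perturb-and-observe (P&O) MPPT baseline: keep the perturbation
# direction when power rose, reverse it when power fell. The fixed step size
# is what causes the oscillation around the MPP noted in the text.
def perturb_and_observe(p, p_prev, d, d_prev, step=0.01):
    """Return the next converter duty cycle d(k+1)."""
    if p - p_prev >= 0:
        # Power rose: keep moving the same way
        d_next = d + step if d - d_prev >= 0 else d - step
    else:
        # Power fell: reverse the perturbation direction
        d_next = d - step if d - d_prev >= 0 else d + step
    # Clamp to a valid duty-cycle range
    return min(max(d_next, 0.0), 1.0)
```

The fixed step is also why P&O lags under fast irradiance changes: a sudden power drop caused by clouds, not by the last perturbation, makes the algorithm reverse direction spuriously, which is the failure mode the ANFIS controller avoids.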


N. Priyadarshi et al.

Fig. 9 Simulink-based variable solar insolation and PV responses with grid connection


Fig. 10 Experimental results under varying irradiance levels: a using the proposed ANFIS-based MPPT, b using the classical P&O MPPT scheme


5 Conclusion

This work demonstrated the design of an ANFIS controller with a buck/boost converter for a grid-connected PV power system. Compared with classical FLC and ANN methods, the ANFIS has better accuracy, lower peak overshoot, and a shorter time to reach the maximum PV power. MATLAB/Simulink-based dynamic responses reveal that stable power is transmitted to the utility grid with higher efficiency. The performance of the ANFIS controller has been compared with ANN control, which justifies the effective implementation of the proposed control at all solar insolation levels.


Efficient Energy Management in Hybrid Electric Vehicles Using DRBF Networks Ralli Sangno, Siba Prasada Panigrahi and Saurav Kumar

Abstract Energy management in hybrid electric vehicles (HEVs) remains a challenge. HEVs are considered one of the most promising automotive technologies in terms of energy management. This paper demonstrates a strategy to improve HEV energy efficiency through the use of a DRBF network, and shows the additional capabilities and benefits achievable for a power-split HEV with the DRBF energy optimization strategy. Taking several real-time implementation issues into consideration, the results show improvements in fuel consumption for the HEV system under various driving cycles. The paper focuses on the energy and emission impacts of the HEV system at the network level, and a cost-benefit analysis is conducted which indicates that the benefits outweigh the costs for the HEV. For this purpose, the paper introduces a novel and effective energy management strategy (EMS) using a directed search optimization (DSO)-trained radial basis function neural network (RBFNN), termed here the DRBF network. DSO is used for both the ANN and the RBFNN.

Keywords Energy management · Electric vehicle · RBFNN · ANN · DRBF · DSO · HEV · NEDC

1 Introduction

The use of artificial neural networks (ANNs) in EMS research has become popular in recent years [1], but ANNs carry inherent drawbacks such as complexity, overfitting, and local minima. Radial basis function (RBF) networks, having a single hidden layer, can find global optima, and their use has become a popular choice for EMS [2–4]. Determining the number of RBFs in an RBF network is one problem in its design and is still based on trial and error, which is time consuming. The choice of the free parameters of the RBFs, such as the centres, spreads, and connecting weights, is an additional problem. Barreto et al. [5] and Feng [6] tried to get rid of these problems by designing RBF networks with a genetic algorithm (GA) and particle swarm optimization (PSO), respectively, selecting the parameters by minimizing the mean square error (MSE) between the desired and actual outputs. For the same purpose, this paper proposes DSO [8]-trained RBF (DRBF) networks for EMS.

R. Sangno (B) · S. Kumar, National Institute of Technology, Yupia, Arunachal Pradesh, India, e-mail: [email protected]; [email protected]
S. P. Panigrahi, Veer Surendra Sai University of Technology (VSSUT), Burla, Odisha, India
© Springer Nature Singapore Pte Ltd. 2020, S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988, https://doi.org/10.1007/978-981-13-8222-2_2

2 The Problem Statement

The objective of EMS in an HEV is to minimize fuel consumption by reducing energy losses while simultaneously minimizing exhaust emissions. Let x(k) be the state variables (the corresponding HEV states: vehicle speed, engine speed, and energy storage levels), and let u(k) be the continuous or discrete control variables at the kth instant. Here, the continuous variables represent power flows and the discrete variables represent the engine ON/OFF states. The EMS in an HEV is characterized by the dynamic system used in [6]:

x(k + 1) = f{x(k), u(k), k}    (1)

which has to be controlled such that the cost criterion

Σ_{k=0}^{n} γ{x(k), u(k), k} Δt    (2)

is minimized while satisfying the constraints:

φ{x(k), u(k), k} ≤ 0,  ψ{x(k), u(k), k} = 0    (3)

In the current problem, the only applicable state is the energy level Es, which gives the discrete-time form of Eq. (1):

Es(k + 1) = Es(k) + Ps(k)Δt    (4)

Assuming the states, namely the engine speed ω(k), the mechanical drive-train power Pd(k), and the electrical load power Pl(k), to be known, the characteristics of all components can be combined as:

Pb = Ps + Ploss(Ps)
Pe = Pl + Pb
Pg = g(Pe, ω)
Pm = max(Pd + Pg, Pm,min)    (5)

Here, Pb is the battery power, Ps the power stored in the battery, Pe the electrical output power of the motor, Pg the mechanical output power of the motor, Pm the mechanical engine power, and ṁ the fuel rate. The fuel rate can also be expressed as a function of the battery power:

ṁ{ω(k), Pd(k), Pl(k), Ps(k)} = ṁ{Ps(k), k}    (6)

The cost function can be expressed as the fuel use over the driving cycle on the time interval t = t[0, …, n], so Eq. (2) becomes:

Σ_{k=0}^{n} γ{Ps(k), k}    (7)

where

γ(Ps, k) = w1 ṁ{Ps(k), k} + w2 CO2{Ps(k), k} + w3 CO{Ps(k), k} + w4 NOx{Ps(k), k} + w5 HC{Ps(k), k}

Now, by choosing Ps as the decision variable z, the characteristics of all parts can be included in the cost function; the actual controlled input in the vehicle is Pe. As the relation between Ps and Pe is known, Pe can easily be computed from the optimal Ps. The operating range of the different parts is limited, so the engine power, electrical power, and battery power must be constrained as follows:

Pm,min ≤ Pm ≤ Pm,max
Pe,min ≤ Pe ≤ Pe,max
Pb,min ≤ Pb ≤ Pb,max    (8)

Combining the above constraints leads to the new constraint shown in Eq. (9):

Ps,min ≤ Ps ≤ Ps,max    (9)

And the constraint on the battery energy level Es is written as:

Es,min − Es(0) ≤ Σ_{i=0}^{k} Ps(i)Δt ≤ Es,max − Es(0),  ∀k ∈ [0, n]    (10)


A charge-sustaining vehicle needs an endpoint constraint such that the state of charge of the battery ends nearly equal to a certain set value. This endpoint constraint requires the stored energy at the beginning and at the end of the cycle to be equal, as expressed by Eq. (11):

Es(tn) = Es(0)  ⇒  Σ_{k=0}^{n} Ps(k) = 0    (11)

3 RBFNN Trained with DSO (DRBF) and Energy Management

In this paper, to develop the DRBF-based EMS, the coefficients of Eq. (7) are initially chosen from a population of M (= 2 × N) CSVs. Each particle comprises a number of CSVs, and each CSV represents one coefficient of Eq. (7). The main objective of this paper is to develop a DRBF-based EMS. In the EMS problem, the inputs to the RBFNN are the speeds and torques of the motor and engine, given by

x(k) = [ωg, τg, ωm, τm, Pb, γ]    (12)

Here, ωg, τg are the motor speed and torque, ωm, τm the engine speed and torque, Pb the battery power, and γ the instantaneous cost. The RBFNN acts as the actual controller in the EMS, whose goal is to achieve the desired control actions, and it is trained with DSO to obtain the desired output.

4 Simulations

The simulation model uses a 2001-model Ford Mondeo car with a 42-V power net, a 2.0-litre petrol engine, and a 5-gear manual transmission, comprising a 5-kW alternator and a 36-V AGM lead–acid battery with a capacity of 27.5 Ah, corresponding to an energy capacity of 4 MJ (Fig. 1) [7]. The electrical load can be adjusted using a programmable load available on the power net. The simulation follows the New European Driving Cycle (NEDC). The NEDC is a vehicle testing procedure to confirm whether a vehicle's fuel economy and CO2 exhaust emissions are as stated by the car manufacturer. In this technique, the car is tested on a test bed for CO2 exhaust emission, which subsequently also allows the fuel consumption rate to be measured. The only demerit of this test is that it is performed in a laboratory driving situation under ideal conditions; in practical driving conditions, the figures are somewhat higher than the laboratory ones. For the calculation of the fuel intake and exhaust emissions, battery

Fig. 1 Model of the vehicle used

Table 1 Fuel consumption

Strategy | Fuel use (gm) at 500 W | at 1000 W | at 2000 W
GA    | 557.261 | 590.442 | 662.141
PSO   | 556.157 | 589.862 | 661.464
DSO   | 554.221 | 584.185 | 660.372
RBFNN | 546.524 | 581.140 | 655.460
DRBF  | 544.913 | 579.821 | 654.002

losses are assumed to be quadratic. A weighted sum of fuel intake and hydrocarbon emission is used as the cost criterion because, if the cost function were based on fuel intake alone, the CO2, CO, and NOx emissions would also decrease but the hydrocarbon emission would rise; that is why the weighted sum is considered [8–11]. The weighting factors in Eq. (7) are chosen as w1 = w5 = 1 and w2 = w3 = w4 = 0. The simulation parameters for GA, PSO, and DSO are: population size 50 and 1000 iterations for all three. For GA, mutation ratio = 0.03 and crossover ratio = 0.9. For PSO, C1 = C2 = 0.7. For DSO, forward probability = 0.8, forward coefficient = 1, backward coefficient = 10, and genetic mutation probability = 0.01. The MATLAB simulation runtimes for GA, PSO, DSO, and ANN are 22, 23, 23, and 24 s, respectively. The fuel consumption calculated using Eq. (6) is given in Table 1. The simulation results reveal that the proposed methods are effective and cost-efficient, reducing both the fuel intake and the emission levels of the vehicle. Hence, the EMS using an RBF network trained with DSO provides superior results in comparison with the other methods. Further fine-tuning of the weights in the cost function may improve the output to a certain extent. From the resulting trajectories of Pe for the DSO and RBFNN management strategies with Pl = 1000 W shown in Fig. 2, it is found that the


Fig. 2 Electrical alternator power at Pl = 1000 W (Source: S. P. Panigrahi et al.)

strategies with DSO and RBFNN are more effective, as they are able to minimize the fuel consumption rate and the exhaust emissions of the HEV. These two methods outperform other existing methods, providing better results over the entire NEDC driving cycle test. The results can be improved slightly by fine-tuning the weighting factors of the cost function, as explained above. Most of the benefit comes from regenerative braking, which delivers a certain amount of energy for free [12].

5 Conclusion

This paper proposed three novel and efficient EMSs, as evidenced by the simulation results. The contributions of the paper can be outlined as the use of DSO, and of a DSO-trained RBFNN, in effective energy management. Three novel strategies for the EM of an electrical power net have been discussed, through which the future as well as the present state of the vehicle can be predicted, which is essential for reducing the fuel consumption rate and exhaust emissions over long driving cycles. A reduction in fuel consumption of about 2% has been obtained with this technique, and the exhaust emissions are also reduced by a considerable amount.


References

1. Y. Ates, O. Erdinc, M. Uzunoglu, B. Vural, Energy management of an FC/UC hybrid vehicular power system using a combined neural network-wavelet transform based strategy. Int. J. Hydrogen Energy 35, 774–783 (2010)
2. D.V. Prokhorov, Toyota Prius HEV neurocontrol and diagnostics. Neural Networks 21, 458–465 (2008)
3. M. Sheikhan, R. Pardis, D. Gharavian, State of charge neural computational models for high energy density batteries in electric vehicles. Neural Comput. Appl. 22, 1171–1180 (2013)
4. A. Taghavipour, M.S. Foumani, M. Boroushaki, Implementation of an optimal control strategy for a hydraulic hybrid vehicle using CMAC and RBF networks. Sci. Iranica B 19(2), 327–334 (2012)
5. A.M.S. Barreto, H.J.C. Barbosa, N.F.F. Ebecken, Growing compact RBF networks using a genetic algorithm, in Proceedings of the VII Brazilian Symposium on Neural Networks (2002), pp. 61–66
6. H.M. Feng, Self-generating RBFNs using evolutional PSO learning. Neurocomputing 70, 241–251 (2006)
7. W. Li, G. Xu, Y. Xu, Online learning control for hybrid electric vehicle. Chin. J. Mech. Eng. 24 (2011)
8. D. Zou et al., Directed searching optimization algorithm for constrained optimization problems. Expert Syst. Appl. 38, 8716–8723 (2011)
9. C.-C. Lin, J.-M. Kang, J.W. Grizzle, H. Peng, Energy management strategy for a parallel hybrid electric truck, in Proceedings of the American Control Conference, Arlington, VA, 25–27 June 2001
10. B. Tripathy, S. Dash, S.K. Padhy, Dynamic task scheduling using a directed neural network. J. Parallel Distrib. Comput. 75, 101–106 (2015)
11. C. Kumar, S.K. Padhy, S.P. Panigrahi, B.K. Panigrahi, Hybrid swarm intelligence methods for energy management in hybrid electric vehicles. IET Electr. Syst. Transp. 3(1), 22–29 (2013)
12. C.K. Samanta, M.K. Hota, S.R. Nayak, S.P. Panigrahi, B.K. Panigrahi, Energy management in hybrid electric vehicles using optimized radial basis function neural network. Int. J. Sustain. Eng. 7(4), 352–359 (2014). https://doi.org/10.1080/19397038.2014.888488

Torque and Current Noise Reduction of BLDC Motor Using Fuzzy Logic Control Strategy Goutam Goswami and P. R. Thakura

Abstract The torque of the permanent magnet brushless direct current (PMBLDC) motor is one of the major parameters of concern, as it indirectly controls the stator current magnitude. Speed control of the BLDCM can be done using different controllers, of which the PID controller is the most commonly used for this machine. But in the case of load changes, the torque and stator current characteristics deteriorate significantly. It has already been shown that fuzzy controllers offer better speed control than conventional PID controllers in all aspects (Usman and Rajpurohit, in: Speed control of a BLDC motor using fuzzy logic controller, International Conference on Power Electronics, Intelligent Control and Energy Systems (ICPEICES), Delhi, India, 2016 [1]; Kamal et al., in: Speed control of brushless DC motor using intelligent controllers, 2014 Students Conference on Engineering and Systems, Allahabad, India, 2014 [2]); in this paper, it is shown that a suitably designed fuzzy controller also reduces noise in the torque and current characteristics of the BLDCM significantly more than any conventional PID controller. Here, the source voltage control method is used to control the speed of the BLDCM. A complete analysis of both the torque and stator current waveforms has been done, and the overall results are compared and analysed against those of a conventional PID controller. In comparison with the conventional controller, the fuzzy controller provides better noise reduction in the torque and stator current characteristics along with better speed control under sudden load changes. The investigation has been carried out entirely in the MATLAB/Simulink environment (Tibor et al., in: Modelling and simulation of the BLDC motor in MATLAB GUI, 2011 IEEE International Symposium on Industrial Electronics, Gdansk, Poland, 2011 [3]).

Keywords BLDCM · PID controller · Fuzzy controller · FLC · Torque · Current · Kp · Ki

G. Goswami (B) · P. R. Thakura Department of Electrical and Electronics Engineering, Birla Institute of Technology Mesra, Ranchi, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988, https://doi.org/10.1007/978-981-13-8222-2_3


1 Introduction

In the modern world of electric drives, BLDC motors are one of the best solutions for all kinds of electric drive applications. From small applications such as the vibration motor of a smartphone to large applications in aerospace, medical, and industrial automation, the use of the BLDCM is becoming more dominant day by day. The complexity and difficulty of motor control have been reduced largely because the complete control strategy of the BLDCM is based solely on software and proven hardware. As the name implies, brushes are not used in the BLDCM for commutation; instead, commutation is done electronically. Advantages such as a long life span, no risk of explosion due to arcing, less maintenance, quieter operation, a high operating speed range, and high efficiency give BLDCMs an edge over conventional DC motors. The BLDCM has permanent magnets on the rotating outer part and salient-pole electromagnets on the inner stator part. The stator winding produces a trapezoidal back EMF, and the motor acts like a permanent magnet synchronous motor (PMSM) [4]. Conventional PID controllers are now the widely used speed controllers for BLDCMs in industry, but their chief demerit is that they furnish fine transient and steady-state responses only when the system parameters for which they are tuned remain unaltered. In the majority of practical systems, however, the parameters vary in the course of operation [5]. It has been identified that the moment of inertia varies by about 10–20 times in most industrial applications. This paper shows how a fuzzy logic controller is able to significantly reduce the noise in the torque and current responses. This is because fuzzy controllers are real-time expert systems implementing human experience and knowledge, which cannot be realized by PID controllers.
With sufficient knowledge of the system, an FLC can achieve a higher degree of automation and go far beyond conventional controllers. Hence, to get better torque and current responses with excellent speed control, the fuzzy logic controller is a good choice. Fuzzy logic speed controllers are often seen to be the best performers for high-performance electrical drives [6]. When the conventional PID controller is compared with these recent intelligent fuzzy logic controllers, the PID controllers are found to be comparatively inefficient in all aspects; the reason is their slower response time compared with FLCs [7]. As the BLDCM is highly nonlinear in nature, the effect of parameter variations and load disturbances [8] is high. All these factors have increased the demand for modern fuzzy logic controllers. Papers [9, 10] discuss the modelling and control scheme of the BLDCM.


2 Mathematical Modelling of Permanent Magnet BLDC Motor

For the mathematical modelling of the BLDC motor, the following assumptions are taken into account:

i. The motor's stator winding has a star connection.
ii. The motor's three phases are identical.
iii. The change of rotor air-gap reluctance is negligible.
iv. The motor is not saturated.
v. All three phases have trapezoidal back EMF.
vi. The power semiconductor switches used in the inverter are ideal.
vii. Copper losses and iron losses are negligible.

The circuit equations of the stator winding in terms of the system parameters are given as:

Van = Ra Ia + P(La Ia + Mab Ib + Mac Ic) + Ea    (1)
Vbn = Rb Ib + P(Mba Ia + Lb Ib + Mbc Ic) + Eb    (2)
Vcn = Rc Ic + P(Mca Ia + Mcb Ib + Lc Ic) + Ec    (3)

where P is the derivative operator; Van, Vbn, Vcn are the phase voltages; Ra, Rb, Rc the phase resistances; Ea, Eb, Ec the back EMFs; Ia, Ib, Ic the stator phase currents; La, Lb, Lc the self-inductances; and Mab, Mbc, … the mutual inductances between the respective phases (each subscript a, b, c denotes the corresponding phase). Let:

Van = Va, Vbn = Vb, Vcn = Vc
Ra = Rb = Rc = Rs
La = Lb = Lc = L
Mab = Mac = Mba = Mbc = Mca = Mcb = M


Now, we can write Eqs. (1), (2), and (3) in matrix form as:

⎡Van⎤   ⎡Rs 0  0 ⎤⎡Ia⎤     ⎡L M M⎤⎡Ia⎤   ⎡Ea⎤
⎢Vbn⎥ = ⎢0  Rs 0 ⎥⎢Ib⎥ + P ⎢M L M⎥⎢Ib⎥ + ⎢Eb⎥    (4)
⎣Vcn⎦   ⎣0  0  Rs⎦⎣Ic⎦     ⎣M M L⎦⎣Ic⎦   ⎣Ec⎦

Since the stator currents in all three phases are balanced (assumption ii), we can write Ia + Ib + Ic = 0, so:

Mab Ib + Mac Ic = M Ib + M Ic = −M Ia

Now, Eq. (4) becomes:

⎡Van⎤   ⎡Rs 0  0 ⎤⎡Ia⎤     ⎡L−M 0   0  ⎤⎡Ia⎤   ⎡Ea⎤
⎢Vbn⎥ = ⎢0  Rs 0 ⎥⎢Ib⎥ + P ⎢0   L−M 0  ⎥⎢Ib⎥ + ⎢Eb⎥    (5)
⎣Vcn⎦   ⎣0  0  Rs⎦⎣Ic⎦     ⎣0   0   L−M⎦⎣Ic⎦   ⎣Ec⎦
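The reduction from Eq. (4) to Eq. (5) rests only on the balanced-current condition Ia + Ib + Ic = 0. A quick numerical check with arbitrary inductance and current values confirms the identity:

```python
# Numerical check that the full inductance matrix of Eq. (4) acts on a
# balanced current vector exactly like the decoupled (L - M) form of Eq. (5).
import numpy as np

L, M = 8.5e-3, 1.2e-3                    # arbitrary self/mutual inductances, H
ind_full = np.array([[L, M, M],
                     [M, L, M],
                     [M, M, L]])

i = np.array([3.0, -1.0, -2.0])          # balanced: Ia + Ib + Ic = 0
lhs = ind_full @ i                       # inductance term of Eq. (4)
rhs = (L - M) * i                        # diagonal term of Eq. (5)
print(np.allclose(lhs, rhs))             # True
```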

This equation gives the equivalent circuit of the stator winding shown in Fig. 1. Adding Eqs. (1), (2), and (3) gives:

Van + Vbn + Vcn = Ea + Eb + Ec    (6)

Fig. 1 Equivalent circuit of the stator winding


Let Vn0 be the voltage at the neutral point of the star connection. We can then write:

Van = Va + Vn0
Vbn = Vb + Vn0
Vcn = Vc + Vn0

and from Eq. (6) we get:

Vn0 = [(Ea + Eb + Ec) − (Va + Vb + Vc)] / 3    (7)

Equation (8) gives the expression for the induced electromagnetic torque:

Te = (Ea Ia + Eb Ib + Ec Ic) / ωr    (8)

where ωr is the rotor speed in rad/s. As for a conventional DC motor, the mechanical torque equation can be expressed as:

J dωr/dt + B ωr = Te − Tl    (9)

where J is the moment of inertia of the rotor in kg m², B the coefficient of friction in N m s/rad, and Tl the load torque in Nm. The rotor position in electrical angle is given by:

dθr/dt = (p/2) ωr    (10)

where θr is the rotor position in electrical degrees and p is the number of poles.
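Equations (9) and (10) can be stepped forward in time with a simple Euler scheme. The sketch below uses the rotor inertia from Table 1, but the friction coefficient, torque, and pole count are assumed values for illustration only:

```python
# Forward-Euler integration of the mechanical equations (9) and (10):
# J dwr/dt + B wr = Te - Tl and dtheta/dt = (p/2) wr.
# J is taken from Table 1; B, Te, Tl, and p are assumed for illustration.

J = 0.00026      # rotor inertia, kg m^2 (Table 1)
B = 1e-3         # friction coefficient, N m s/rad (assumed)
p = 8            # number of poles (4 pole pairs)
Te, Tl = 0.3, 0.0
dt, wr, theta = 1e-5, 0.0, 0.0

for _ in range(100_000):                 # simulate 1 s of spin-up
    dwr = (Te - Tl - B * wr) / J         # Eq. (9)
    wr += dwr * dt
    theta += (p / 2) * wr * dt           # Eq. (10), electrical angle

print(round(wr, 1))                      # approaches (Te - Tl)/B = 300 rad/s
```

The speed rises with time constant J/B toward the steady state (Te − Tl)/B, which is the behaviour the controllers in the next section try to shape.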

3 Design and Simulation of PID and Fuzzy Controller

A controller is used to keep the output within a desirable limit by means of a control action. The torque and current responses of the BLDCM have been investigated thoroughly with both the conventional PID controller and the intelligent fuzzy controller.


Fig. 2 Simulink model of PID control of BLDC motor

i. PID Control Scheme

A PID controller maintains the output through closed-loop control. It consists of three basic coefficients: the proportional coefficient (Kp), the integral coefficient (Ki), and the derivative coefficient (Kd). Any deviation of the output from the reference input gives the error, which is the input to the PID controller, and the controller output given by Eq. (11) adjusts the process accordingly. In most cases, PI controllers are the best suited, as the proportional contribution grows or shrinks immediately and proportionately with the error, while the integral action eliminates offset. The Simulink model of the BLDCM drive with the PID controller is shown in Fig. 2. In terms of the error and the coefficients, the PID controller can be represented as:

y(t) = Kp · e(t) + Ki ∫ e(t) dt + Kd · de(t)/dt    (11)

ii. Fuzzy Control Scheme

Fuzzy logic is a specialised domain of artificial intelligence in which the input is based on information that is neither definitely true nor false. In control situations that demand an output from the fuzzy logic controller, humans can apply the intuition they would generally use to solve similar situations in daily life [11, 12]. Complete knowledge of the system to be controlled can be a perfect remedy for the unwanted effects in the system response. Figure 3 shows the basic block diagram of a fuzzy logic controller. Fuzzy logic controllers use a highly versatile set of IF-THEN rules, from which appropriate membership functions can be obtained.
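The control law of Eq. (11) is realized in discrete time by accumulating the error for the integral term and differencing it for the derivative term. This is a generic sketch, not the paper's Simulink block; the sample gains echo the Kp = 10, Ki = 100 pair used later in the comparison:

```python
# Discrete-time realization of the PID law of Eq. (11).
# Gains are illustrative; the paper's comparison uses Kp = 10, Ki = 100.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error):
        self.integral += error * self.dt            # integral accumulator
        deriv = (error - self.prev_err) / self.dt   # backward difference
        self.prev_err = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * deriv)

pid = PID(kp=10.0, ki=100.0, kd=0.0, dt=1e-3)
u = pid.step(error=1.0)      # speed error of 1 unit
print(u)                     # 10*1 + 100*0.001 = 10.1
```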

Fig. 3 Block diagram of a basic FLC: fuzzification module, fuzzy rule base, fuzzy inference engine, defuzzification module, and the controlled process

Fig. 4 Simulink model of fuzzy logic control of BLDC motor

The Simulink model of the BLDCM drive with the fuzzy controller is shown in Fig. 4. The Hall signals of the BLDCM are decoded in the sensor decoder gate driver module with a proper logic scheme and then enter the gate terminals of the three-phase inverter module. The ratings and parameters of the BLDCM used for the simulation are shown in Table 1. The fuzzy logic controller has been designed with the MATLAB Fuzzy Toolbox; in this paper, Mamdani's fuzzy controller has been used. The fuzzy controller has two input variables related to the speed error: the error (e) and the change in error (ė). A scaled input DC voltage of the inverter is taken as the output variable. Gaussian membership functions are used for the two input variables, and triangular membership functions for the output variable. The range of every membership function is taken from −1 to 1 (Fig. 5 shows the membership function of the error input variable). The fuzzification of the inputs has been done using a continuous universe of discourse. Seven fuzzy sets have been made for each input and output, viz. large negative (LN), medium negative (MN), small negative (SN), zero (ZE), small positive (SP), medium

Table 1 BLDCM parameters used for simulations

Ratings or parameters | Values
No. of pole pairs | 4
No. of phases | 3
Rated voltage (L–L) (V) | 310
Rated speed (rpm) | 3000
Rated torque (Nm) | 6.4
Rated power (kW) | 2
Peak current (A) | 25
Rotor inertia (kg m²) | 0.00026
Rs (ohm) | 18.7
Ls (mH) | 8.5

Fig. 5 Gaussian membership function of error input

positive (MP), and large positive (LP). As Fig. 5 shows, a Gaussian membership function has been used for each input fuzzy set. In this work, Mamdani's 'min' operator is used for implication and the centroid method for defuzzification. A total of forty-nine fuzzy rules have been defined in Table 2 using the system knowledge, because the more fuzzy sets there are, the finer the identification and separation of the inputs into different classifications, which in turn results in a response with higher resolution. The rules take the form of IF-THEN statements. The operation of a few rules is described below.

Rule 1. IF error e is Large Negative (LN) AND change in error ė is Large Negative (LN), THEN the input DC voltage of the inverter module is also Large Negative (LN).

The implication of this rule is that when the system output is at Rule 1, the actual speed is above the reference (set) speed and the motor is accelerating. To compensate for the speed increase, the input DC voltage of the inverter module should be reduced so that the average voltage applied across the phase winding

Table 2 Rule-based output matrix

e\ė | LN | MN | SN | ZE | SP | MP | LP
LN  | LN | LN | LN | LN | LN | LN | LN
MN  | LN | LN | LN | MN | SN | ZE | SP
SN  | LN | LN | MN | SN | ZE | SP | MP
ZE  | LN | MN | SN | ZE | SP | MP | LP
SP  | MN | SN | ZE | SP | MP | LP | LP
MP  | SN | ZE | SP | MP | LP | LP | LP
LP  | ZE | SP | MP | LP | LP | LP | LP

Fig. 6 Fuzzy surface or control surface

also decreases; this in turn brings the actual speed of the system closer to the reference speed.

Rule 25. IF error e is Zero (ZE) AND change in error ė is Zero (ZE), THEN the input DC voltage of the inverter module does not change, i.e. Zero.

Rule 49. IF error e is Large Positive (LP) AND change in error ė is Large Positive (LP), THEN the input DC voltage of the inverter module is also Large Positive (LP).

The implication of this rule is that when the system output is at Rule 49, the actual speed is below the reference (set) speed and the motor is decelerating. To compensate for the speed reduction, the input DC voltage of the inverter module should be increased so that the average voltage applied across the phase winding also increases; this in turn brings the actual speed of the system closer to the reference speed.

The fuzzy surface (also called the rule surface or control surface) shows the output value for any combination of the two input values of the fuzzy controller; it is shown in Fig. 6. Now that all rules and membership functions have been defined, the FIS can be exported to the MATLAB workspace. The output of the fuzzy controller is scaled between −1 and 1, but it drives the input DC voltage of the three-phase inverter module, which ranges
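The Mamdani pipeline described above (Gaussian input sets, 'min' implication, max aggregation, centroid defuzzification over the Table 2 rule base) can be sketched outside the MATLAB Fuzzy Toolbox as follows. The set centres and widths here are assumptions, not the paper's tuned values:

```python
# Minimal Mamdani controller over the rule base of Table 2.
# Set centres/widths are assumed; the paper tunes them in the Fuzzy Toolbox.
import numpy as np

LABELS = ["LN", "MN", "SN", "ZE", "SP", "MP", "LP"]
CENTRES = dict(zip(LABELS, np.linspace(-1.0, 1.0, 7)))
SIGMA = 0.15                                    # assumed Gaussian input width

RULES = {                                       # Table 2: RULES[e][j] for jth edot label
    "LN": ["LN", "LN", "LN", "LN", "LN", "LN", "LN"],
    "MN": ["LN", "LN", "LN", "MN", "SN", "ZE", "SP"],
    "SN": ["LN", "LN", "MN", "SN", "ZE", "SP", "MP"],
    "ZE": ["LN", "MN", "SN", "ZE", "SP", "MP", "LP"],
    "SP": ["MN", "SN", "ZE", "SP", "MP", "LP", "LP"],
    "MP": ["SN", "ZE", "SP", "MP", "LP", "LP", "LP"],
    "LP": ["ZE", "SP", "MP", "LP", "LP", "LP", "LP"],
}

def mu_in(x, label):
    """Gaussian membership degree of an input value."""
    return float(np.exp(-((x - CENTRES[label]) ** 2) / (2 * SIGMA**2)))

def mu_out(u, label):
    """Triangular output sets, half-width equal to the centre spacing (1/3)."""
    return np.clip(1.0 - np.abs(u - CENTRES[label]) / (1.0 / 3.0), 0.0, None)

def fuzzy_output(e, edot, n=201):
    u = np.linspace(-1.0, 1.0, n)
    agg = np.zeros(n)
    for le in LABELS:
        for ledot, lout in zip(LABELS, RULES[le]):
            strength = min(mu_in(e, le), mu_in(edot, ledot))  # 'min' implication
            agg = np.maximum(agg, np.minimum(strength, mu_out(u, lout)))
    return float((u * agg).sum() / agg.sum())                 # centroid

print(fuzzy_output(0.5, 0.0) > 0)   # positive error -> raise the voltage: True
```

Because every evaluation interpolates across overlapping sets, the actuating signal moves smoothly over the control surface of Fig. 6, which is the behaviour exploited in the next section.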


between 0 and 440 V. So, a 1D lookup table placed after the fuzzy controller (as shown in Fig. 4) is used to map the values from −1 to 1 linearly onto the range 0–440 V.

4 Simulation Results, Discussion, and Comparison

In this work, all possible combinations of Kp and Ki values were first thoroughly investigated to obtain less noisy torque and current waveforms, but no such combination was found. This is because, in the PID controller, the actuating signal (the controller output) has a very low frequency of oscillation about the reference point. Even when the values of Kp and Ki are as high as 100 and 1000, respectively, the frequency never goes above 8–10 kHz, as shown in Table 3. In the case of the fuzzy controller, the actuating signal moves along the fuzzy surface, and its frequency of oscillation is about a hundred times that of the PID controller, nearly 1 MHz and above, as shown in Fig. 7. This high-frequency controlled voltage produces a high inductive reactance in the stator winding, because the BLDCM contains highly inductive material in the stator poles; the inductance of this material is usually 10 mH or more. The high inductive reactance limits the change in stator current, which essentially removes the noise from the stator current, and since the induced torque is proportional to the stator current, the noise is also eliminated from the torque response. The torque and current responses of the intelligent fuzzy controller have then been compared with those of the conventional PI controller with one fixed pair of Kp and Ki values, here taken as 10 and 100, respectively. In all of the curves shown in Figs. 8, 9, 10, and 11 corresponding to the PID controller, the starting stator current and induced torque are nearly double those of the fuzzy controller. The no-load starting torque is nearly 9 Nm with the PID controller,

Table 3 Different frequencies of actuating signal

PID constants values              Frequency of actuating signal
K_P                K_i
0.5                10             833.33 Hz
10                 100            2.85 kHz
100                1000           8 kHz
Fuzzy controller                  1 MHz


Fig. 7 Comparison between the output signals of the PID and fuzzy controllers (the fuzzy controller has the highest actuating-signal frequency)

whereas in the fuzzy controller it is nearly 5 Nm. Similarly, the starting stator current is nearly 6.5 A in the PID controller, whereas in the fuzzy controller it is 3.5 A. The starting peak torque and stator current are therefore reduced by almost half with the fuzzy controller. If the full response of every curve driven by the PID controller in Figs. 8, 9, 10, and 11 is examined, significant noise is found in both the stator current and the torque response, under both no-load and loaded conditions. Under no load, the amount of noise is somewhat lower, as shown in Fig. 8, whereas when a sudden load is applied the noise increases rapidly, as shown in Fig. 10. From the curves corresponding to the fuzzy controller in Figs. 8 and 10, the torque noise is seen to be reduced significantly, and Figs. 9 and 11 show that the stator-current noise is likewise reduced.
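The inductive-reactance argument behind this noise reduction can be checked numerically with the standard formula X_L = 2πfL. The 10 mH inductance and the 8 kHz / 1 MHz actuating-signal frequencies are the values quoted in the text; everything else here is an illustrative assumption:

```python
import math

def inductive_reactance(f_hz, inductance_h):
    """X_L = 2*pi*f*L: the reactance the stator winding presents
    to a signal of frequency f_hz, for inductance in henries."""
    return 2.0 * math.pi * f_hz * inductance_h

L_STATOR = 10e-3                                # ~10 mH, as quoted for the BLDCM stator
x_pid = inductive_reactance(8e3, L_STATOR)      # PID actuating signal, ~8 kHz
x_fuzzy = inductive_reactance(1e6, L_STATOR)    # fuzzy actuating signal, ~1 MHz
ratio = x_fuzzy / x_pid                         # reactance scales linearly with frequency
```

The 1 MHz actuation thus sees a reactance 125 times higher than the 8 kHz case (roughly 62.8 kΩ versus 0.5 kΩ), which is what suppresses the current ripple.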


Fig. 8 No-load responses of torque

Fig. 9 No-load responses of stator current



Fig. 10 On-load response of torque (2 Nm sudden load applied at 0.01 s)

Fig. 11 On-load response of stator current (2 Nm sudden load applied at 0.01 s)

5 Conclusion

In this paper, a BLDCM driven by PID and fuzzy controllers is simulated, and its torque and current responses are analysed and discussed. The noise-control analysis has been carried out through simulation only. To design the fuzzy controller, the speed error and the change in speed error are used to define the rule base of the fuzzy controller, which supplies the actuating signal to the controlled voltage


source. A comparison between the conventional and the fuzzy controller-driven BLDCM has been made on the basis of torque- and current-noise reduction capability under both no-load and on-load operation. From the simulation and comparison results, it is found that in both conditions the PID controller is unable to reduce the noise in the torque and current responses, whereas under all operating conditions the noise reduction capability of the fuzzy controller is much better than that of the conventional controller. This is because the oscillation rate of the actuating signal is nearly a hundred times higher in the fuzzy controller than in the PID controller.

Acknowledgements The authors would like to sincerely thank the Department of Electrical and Electronics Engineering, Birla Institute of Technology, Mesra, Ranchi, for their cooperation and support.


Part II

Evolutionary Computing and Bio Inspired Algorithms

Bi-objective Optimization of a Reconfigurable Supply Chain Using a Self-organizing Migration Algorithm

L. N. Pattanaik, Paras Agarwal, Saloni Ranjan and Urja Narayan

Abstract In this paper, two objective functions related to supply chain performance are optimized over several demand periods. Owing to the fast and dynamic demand variations of recent times, the supply chains for outsourced components also need agility and quick reconfiguration to adapt to these challenges. For a known demand scenario, the manufacturer must select the optimum combination of suppliers to minimize the total cost of supplies as well as the transportation cost. The two objective functions developed in this model represent the minimization of the total cost of supplies, including transportation, and the maximization of the reliability of the set of suppliers. As the two objectives may trade off against each other in many instances, a set of Pareto optimal non-dominated solutions is searched using an evolutionary algorithm called the self-organizing migration algorithm (SOMA). A case study on the supply chain of a laptop computer manufacturer is selected from the literature to illustrate the implementation of the algorithm on real industrial problems.

Keywords Supply chain networks · Pareto optimal solutions · Self-organizing migration algorithm · Bi-objective optimization · Non-domination

L. N. Pattanaik (B) · P. Agarwal · S. Ranjan · U. Narayan
Department of Production Engineering, Birla Institute of Technology Mesra, Ranchi 835215, India
e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2020
S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988, https://doi.org/10.1007/978-981-13-8222-2_4

1 Introduction

Supply chains are often considered a challenge for decision-makers, as they involve several entities and parameters such as suppliers, warehouses, logistics support, demands and costs. Uncertainty in events and in forecasted demands adds to this problem. In present times, demand is highly unpredictable owing to the dynamic nature of the market. Further, the need to provide a wide range of product variety to satisfy customized demands aggravates the problem. In this context, the supply chain network (SCN) considered in the present work exhibits


agility through a reconfigurable model. The concept of reconfiguration has pervaded many domains of engineering and service: reconfigurable manufacturing systems, machine tools, automobiles, software, etc., are already in practice. A reconfigurable supply chain network must be capable of changing its structure and operations as the need arises. Agile supply chains can be developed by incorporating a degree of inherent flexibility through reconfigurability. Some researchers [1, 2] have defined responsiveness as the swiftness to adapt to new requirements by adjusting physical structures and operations. In this paper, a reconfigurable SCN model is presented over multiple time periods to illustrate the changes in its structure based on an optimization tool. The problem is formulated as a bi-objective optimization model with the total cost of supplies and transportation as the first objective and the overall reliability of the set of suppliers as the second. The model is based on the assumption that the manufacturer has an SCN comprising several suppliers of its components; however, in a given time period it may subcontract the job to only some of those suppliers, while the rest remain passive in that period. The selection of these suppliers for a time period with known demands is the basic theme of this work. A multi-objective evolutionary algorithm known as multi-objective SOMA (MOSOMA) is implemented to identify these suppliers for each of the periods.

2 Literature Review

This literature review focuses on two broad areas relevant to this paper: first, the optimization and decision-making problems related to agile or reconfigurable supply chains, and second, the application of metaheuristics and evolutionary algorithms to such combinatorial optimization problems. Supply chain management (SCM) involves many decision-making challenges in optimizing the utilization of resources while delivering the final product to the customer. It encompasses many functions, including supplier selection, logistics planning, warehouse management, economic order quantity and reverse logistics. The performance of a supply chain can be evaluated through both quantitative and qualitative analyses [3], and continuous assessment of performance is required for a supply chain to evolve into a successful business model. Flexibility is considered an essential component for addressing present-day challenges and has emerged as a new strategic tool to enhance business excellence in the SCN. The exploitation of flexibility is vital in designing, planning and controlling an SC system to improve an organization's ability to recover from disruptions. There are various types of flexibility, each of which comes in various forms that can be implemented in different ways and at different costs. The topic of supply chain flexibility has drawn the attention of researchers and practitioners for over a decade [4]. It has been argued that future empirical research should approach research design from a network perspective, treating the supply chain as the unit of analysis,


in order to develop a more complete understanding of the effects of flexibility across the whole supply chain [5]. Agility is the ability of a firm to survive and prosper in an environment of continuous and unpredictable change by reacting quickly and efficiently to market dynamics. The authors of [6] explicate agility as using market knowledge and a virtual corporation to exploit profitable opportunities in a volatile marketplace. Gunasekaran [7] identified strategies based on reconfigurability, core competencies and integration, modular and flexible technologies, and a skilled and knowledgeable workforce as the key elements in achieving an agile system. The prime focus areas of agility are the dependence and driving powers of automation, understanding market volatility, process integration, buyer–supplier relationships, logistics planning and management, cost minimization, quality management and delivery performance [8]. To survive dynamic market conditions, various agile system architectures have been proposed to inculcate flexibility and responsiveness in the supply chain. Lu and Zhang [9] proposed a Web-based agile architecture for an SCM system, comprising a wizard-based website at the front end for agility and multi-agent-based reconfigurable business processes at the back end, along with reconfiguring tools to implement agility in SCM. In contexts where demand is dynamic and unpredictable and the customer requirement for variety is high, a high level of agility is required. Christopher and Towill [10] discussed the ways in which a hybrid of lean and agile strategies can lead to an efficient and cost-effective supply chain. Responsiveness, one of the primary ingredients of agility and reconfigurability, is the ability of a company to respond quickly to customer demands and market changes. A manufacturing responsiveness model based on the concept of a manufacturing input–output system, together with the fundamental elements that contribute to responsiveness, is discussed in Ebrahim et al. [11]. Responsiveness has also been found to be a useful analysis variable and a critical success factor for the supply chain [12]; the authors undertook a case study of a mobile phone supply chain and established delivery lead time, postponement strategies, the bullwhip effect and information exchange as crucial variables for assessing the responsiveness of a firm. Decision-making in supply chains is often based on a mathematical optimization model. Of late, traditional optimization tools are being replaced by metaheuristics owing to their versatility and ease of application. Evolutionary algorithms inspired by nature, such as the genetic algorithm (GA), ant colony optimization (ACO) and particle swarm optimization (PSO), have been successfully applied to both single- and multi-objective optimization problems, and many new evolutionary approaches and variations of existing techniques have been published in the technical literature [13]. A multi-objective genetic algorithm (MOGA) was implemented by Hitoshi et al. [14] to refine optimal reconfiguration rules from the large number of alternatives available at the operational level. Ding et al. [15] presented a simulation-based MOGA to design and analyse supply chains in the textile and automobile sectors. In this paper, a bi-objective optimization model is developed to address reconfiguration-related decision-making in a supply chain network. The two objective functions are mathematically formulated to represent the total cost of the supply chain, consisting of supplies and transportation, and the overall reliability of the selected


suppliers in a given time period. The evolutionary algorithm applied to the optimization problem is a migratory algorithm called SOMA [16–18], whose computational superiority over many contemporary evolutionary algorithms has been reported. The performance of MOSOMA was compared with other evolutionary multi-objective algorithms, such as the non-dominated sorting genetic algorithm (NSGA-II) and the strength Pareto evolutionary algorithm (SPEA2), by Kadlec and Raida [19]. Based on the results of six standard benchmark problems, they showed that MOSOMA outperformed the rest on four metrics: spread, hit rate, number of evaluations and generational distance. Onwubolu and Babu [20] also reported on the robustness and fast convergence of this algorithm. The remainder of this paper is organized as follows. Section 3 describes the supply chain problem in terms of the mathematical formulation of the two objective functions and the various constraints involved; an introduction to the optimization tool MOSOMA is also included in that section. In Sect. 4, a case study on a laptop manufacturer from the literature illustrates the implementation of the evolutionary algorithm and the selection of suppliers from the Pareto optimal solutions. The conclusion and future scope of this work are discussed in the last section.

3 Problem Formulation

The supply chain structure considered in the present paper is based on a model of assembly-oriented manufacturing (e.g. automobiles, consumer goods and electronics) with a single plant location and multiple suppliers in geographically diverse locations. Suppliers are assumed to supply a single type of component/part used in the assembly; however, the same component may have more than one supplier. The following nomenclature and decision variable are used in the mathematical formulation of the objective functions and constraints:

Nomenclature
i        index used for suppliers, i ∈ I
j        index used for components, j = 1, 2, …, J
γ_i,j    cost of one unit of component type j from supplier i
D        demand of the final assembled product in a given demand cycle
Z        total cost incurred in the supply chain network
TC_i,j   cost of transportation for delivering component type j from supplier i
l_i      distance multiplication factor for the distance between supplier i and the plant
R_i      reliability index of supplier i (0 ≤ R_i ≤ 1)
α_i      transportation cost factor (per component per unit distance)
d_i      Euclidean distance between supplier i and the assembly plant
R        overall reliability
q_i,j    volume of component type j ordered from supplier i
BC_i,j   base cost for procuring component type j from supplier i
n        number of components


Decision variable:

x_i = 1 if supplier i is selected, and 0 otherwise, for i ∈ I, where I denotes the universal set of suppliers.

Objective functions: The first objective function minimizes the total cost of supplies and the associated transportation cost, as given in Eq. (1):

Min Z = \sum_{i \in I} \sum_{j \in J} (TC_{i,j} + BC_{i,j})    (1)

where

TC_{i,j} = d_i \cdot l_i \cdot \alpha_i \cdot q_{i,j} \cdot x_i    (2)

BC_{i,j} = \gamma_{i,j} \cdot q_{i,j} \cdot x_i    (3)

The overall reliability of all the orders placed in a given time period is to be maximized, as expressed in Eq. (4):

Max R = \sum_{i \in I} (R_i \cdot x_i)    (4)

Constraints:

(i) Meeting the product demand requirement: q_{i,j} = D and \sum_{i \in I} (q_{i,j} \cdot x_i) = nD

(ii) Exactly n suppliers for the n components: \sum_{i \in I} x_i = n

(iii) Nonnegativity of all model parameters: \alpha_i, q_{i,j}, \gamma_{i,j}, D, l_i ≥ 0

Simultaneous optimization of these two objective functions requires the application of the non-domination concept to identify Pareto optimal solutions.


A set of suppliers is to be selected for a known demand scenario to minimize the total cost involved and maximize reliability, so that the final products can be assembled and delivered before the due date. The trade-off between these two objectives can only be addressed by the decision-maker when multiple optimal solutions are found. An evolutionary bi-objective search algorithm is applied to a case study from the literature in the next section to illustrate the various computational steps.
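Identifying the non-dominated solutions mentioned here can be sketched as a simple pairwise filter. This is an illustrative sketch, assuming each solution is a pair (f1, f2) with f1 to be minimized and f2 maximized, matching the two objectives above:

```python
def pareto_front(solutions):
    """Return the non-dominated (f1, f2) pairs, minimizing f1 and
    maximizing f2. A solution is dominated if another solution is at
    least as good in both objectives and strictly better in one."""
    front = []
    for a in solutions:
        dominated = any(
            b[0] <= a[0] and b[1] >= a[1] and (b[0] < a[0] or b[1] > a[1])
            for b in solutions
        )
        if not dominated:
            front.append(a)
    return front
```

For example, `pareto_front([(1, 5), (2, 6), (3, 4), (2, 5)])` keeps only (1, 5) and (2, 6): each of the other two pairs is beaten on both objectives by some member of the front.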

3.1 Multi-objective SOMA

The self-organizing migratory algorithm (SOMA) is an evolutionary algorithm [18] guided by the societal behaviour of living beings: a leader in a society attracts other individuals towards itself. During this migration of individuals, the algorithm explores the search space and replaces the existing leader with any better individual found. Unlike a genetic algorithm, no new generations are evolved; the migration loops play a role similar to generations, and hence SOMA is classified as an evolutionary algorithm. The number of migration loops assumed at the beginning of the search is comparable to the number of generations in a genetic algorithm and also serves as the stopping criterion. While moving towards the leader, individuals take steps of a predetermined length, which determines both the precision of the search and the computational effort to converge: the smaller the steps, the higher the precision, and vice versa. The direction of movement during migration is controlled by applying perturbation. For multi-objective optimization problems, SOMA follows several migration strategies towards the leader(s); all-to-all, all-to-one, all-to-random and all-to-many are some of the options. Some important steps of MOSOMA with the all-to-many strategy, applied in the present case to identify Pareto optimal solutions, are presented here.

• Parameter selection and generation of the initial population
The parameters, such as population size (Q), number of migration loops (ITER), path length, probability of perturbation (PRT) and step size, are selected numerically before starting the algorithm. A uniform distribution is used to generate a random population of Q individuals over the search space.
• Identification of the leaders
Leaders are those individuals in the population that are non-dominated with respect to the two objective functions (fitness values) defined in the present problem.

• Generation or migration loop
In the migration step, all individuals move towards the leaders (the non-dominated solutions identified in the previous step). The fitness is evaluated after each step taken by each individual, and an individual ceases to move once the path length set at the beginning is reached. This completes one generation or migration loop.

• Stopping criteria
As in a genetic algorithm, when the predetermined number of migration loops is completed, the search stops and reports the optimal solutions found during these migrations.

Table 1 Components and their potential suppliers

Component   Trade name         Suppliers
C1          Display assembly   S1, S3
C2          Speakers           S2, S6
C3          Microprocessor     S4, S12, S13
C4          Hard drive         S5, S9, S14
C5          Computer base      S7, S8, S15
C6          Main battery       S10, S11
4 Illustrative Case Study

The case study illustrated in this paper is based on a manufacturer of laptop computers with 15 suppliers capable of providing the six different components used in the assembly of the final product. It is evident from the number of suppliers that a particular component can be supplied by more than one supplier. The six components, denoted C_1 through C_6, outsourced by the manufacturer, and their corresponding suppliers S_1 to S_15, are presented in Table 1. Figure 1 depicts the coordinates of the physical locations of the plant and the fifteen suppliers S_1 through S_15 with reference to an origin at (0, 0); for example, the coordinates of the plant are (6, 10), those of supplier S_1 are (3, 8), and so on. These coordinates are used to find the minimum (straight-line) distance between the plant and each supplier. The actual distance is then calculated by multiplying the shortest straight-line distance by a distance multiplication factor (l_i). As shown in the figure, the curved route between supplier S_9 and the plant is the actual path, with a multiplication factor of 1.5. Similarly, for all routes connecting to the plant, distance multiplication factors (always greater than 1) are estimated from the actual road distance. The present model considers the demand for laptops in cycles of time periods. Owing to fluctuations in market demand, the requirement for components also varies substantially among demand cycles; further, the cost of components quoted by suppliers changes with time. Table 2 presents the various associated costs and the reliability of the suppliers in a particular demand cycle. The reliability of a supplier (R_i) is estimated from its historical performance in delivering supplies on time: it is the simple ratio of the number of on-time deliveries to the total number of orders placed with that supplier. The unit cost of a component (γ_i,j) is the quoted price offered by the vendor. Hence, there can be a price difference for a


Fig. 1 Coordinates for location of plant and fifteen suppliers

Table 2 Numerical data related to the case study for a demand cycle

(Supplier,    Location      Unit cost of   Distance         Transportation   Reliability
component)    coordinate    component      multiplication   cost factor      R_i
(S_i, C_j)    of S_i        γ_i,j          factor l_i       α_i
(S1, C1)      (3, 8)        100            1.6              1.8              0.40
(S2, C2)      (15, 2)       30             1.01             1.3              0.91
(S3, C1)      (14, 8)       90             1.3              1.8              0.75
(S4, C3)      (10, 7)       45             1.7              1.4              0.66
(S5, C4)      (10, 12)      100            1.5              1.5              0.85
(S6, C2)      (9, 9)        35             1.6              1.3              0.76
(S7, C5)      (5, 11)       55             1.7              1.1              0.40
(S8, C5)      (9, 8)        60             1.2              1.1              0.64
(S9, C4)      (5, 2)        100            1.3              1.5              0.49
(S10, C6)     (4, 8)        50             1.5              1.2              0.91
(S11, C6)     (13, 12)      55             1.08             1.2              0.40
(S12, C3)     (1, 17)       42             1.2              1.4              0.48
(S13, C3)     (17, 15)      40             1.06             1.6              0.68
(S14, C4)     (3, 10)       90             1.5              1.5              0.86
(S15, C5)     (15, 6)       50             1.5              1.1              0.96


component among the suppliers; for example, suppliers S_1 and S_3 quote the cost of component C_1 as 100 and 90 cost units, respectively. The transportation cost factor (α_i) is another multiplicative factor, expressed per unit component per unit distance, that captures the degree of complexity in handling a component. Fragile components such as the display assembly need special packaging and thus have a relatively high transportation factor compared to the computer base; further, if the likelihood of damage during transportation is higher, a higher factor is assigned to that component. Referring to Table 2, the display assembly (C_1) is given a transportation factor of 1.8, while 1.1 is used for the computer base (C_5), and so on. To apply MOSOMA to this optimization problem, the decision variables are mapped onto a finite-length string of numerals. The permutation-string representation scheme adopted in the present paper is given here.

Components          C1    C2    C3    C4    C5    C6
Selected supplier   3     2     12    9     8     11

This solution string is easily decoded: component C_1 is supplied by S_3, C_2 by S_2, and so on. The position in the string indicates the component number, so the encoded solution is a simple string of numerals representing the indices of the selected suppliers. The two fitness values of each potential solution are found using Eqs. (1) through (4), as illustrated here for the chromosome

3 2 12 9 8 11

Assuming a demand D of 700 units in the demand cycle, q_{i,j} = 700. Referring to Table 2 for the values of γ_{i,j}, α_i, l_i, R_i and the supplier coordinates, the shortest distance between the plant located at (6, 10) and each of the six selected suppliers is found as

d_3 = \sqrt{(14 - 6)^2 + (8 - 10)^2} = 8.24 distance units

and similarly d_2 = 12.04, d_12 = 8.6, d_9 = 8.06, d_8 = 3.6 and d_11 = 7.28 distance units. The transportation cost for component C_1 from supplier S_3 is then

TC_{3,1} = d_3 · l_3 · α_3 · q_{3,1} · x_3 = 8.24 × 1.3 × 1.8 × 700 × 1 = 13,497.12 cost units

and similarly

BC_{3,1} = γ_{3,1} · q_{3,1} · x_3 = 90 × 700 × 1 = 63,000 cost units.

Referring to the chromosome, the second component C_2 from supplier S_2 results in a transportation cost of


TC_{2,2} = d_2 · l_2 · α_2 · q_{2,2} · x_2 = 12.04 × 1.01 × 1.3 × 700 × 1 = 11,065.94 cost units

and a base cost of

BC_{2,2} = γ_{2,2} · q_{2,2} · x_2 = 30 × 700 × 1 = 21,000 cost units.

The transportation and base costs for the remaining four components are found in the same way:

TC_{12,3} = 8.6 × 1.2 × 1.4 × 700 × 1 = 10,113.6 cost units;   BC_{12,3} = 42 × 700 × 1 = 29,400 cost units
TC_{9,4} = 8.06 × 1.3 × 1.5 × 700 × 1 = 11,001.9 cost units;   BC_{9,4} = 100 × 700 × 1 = 70,000 cost units
TC_{8,5} = 3.6 × 1.2 × 1.1 × 700 × 1 = 3326.4 cost units;      BC_{8,5} = 60 × 700 × 1 = 42,000 cost units
TC_{11,6} = 7.28 × 1.08 × 1.2 × 700 × 1 = 6604.41 cost units;  BC_{11,6} = 55 × 700 × 1 = 38,500 cost units

The fitness value F_1 of the chromosome can then be found using Eq. (1):

F_1 = Z = \sum_{i \in I} \sum_{j \in J} (TC_{i,j} + BC_{i,j}) = 13,497.12 + 63,000 + 11,065.94 + 21,000 + · · · + 38,500 = 319,509.37 cost units

Similarly, the second fitness F_2 is calculated using Eq. (4):

F_2 = R = \sum_{i \in I} (R_i · x_i) = 0.91 × 1 + 0.75 × 1 + 0.64 × 1 + 0.49 × 1 + 0.40 × 1 + 0.48 × 1 = 3.67
4.1 MOSOMA Implementation

The controlling parameters for MOSOMA, selected before the implementation (coded in C++), are as follows:

Number of individuals in the initial random population Q = 30
Maximum number of generations/migration loops ITER = 30
Number of steps during migration loops ST = 3
Probability of perturbation PRT = 0.1
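The worked fitness evaluation of the chromosome 3–2–12–9–8–11 in the previous section can be reproduced programmatically. The data below are copied from Table 2 for the six suppliers involved; exact Euclidean distances are used here, so F_1 differs from the hand-rounded 319,509.37 by a few tens of cost units:

```python
import math

# (coordinates, unit cost g, distance factor l, transport factor a, reliability r)
# for the six suppliers used in the example chromosome 3-2-12-9-8-11 (Table 2)
SUPPLIERS = {
    3:  ((14, 8),  90,  1.3,  1.8, 0.75),
    2:  ((15, 2),  30,  1.01, 1.3, 0.91),
    12: ((1, 17),  42,  1.2,  1.4, 0.48),
    9:  ((5, 2),   100, 1.3,  1.5, 0.49),
    8:  ((9, 8),   60,  1.2,  1.1, 0.64),
    11: ((13, 12), 55,  1.08, 1.2, 0.40),
}
PLANT = (6, 10)
D = 700  # demand in the first cycle

def fitness(chromosome):
    f1 = 0.0   # total cost: transportation + base cost, Eq. (1)
    f2 = 0.0   # overall reliability, Eq. (4)
    for s in chromosome:
        (x, y), g, l, a, r = SUPPLIERS[s]
        d = math.hypot(x - PLANT[0], y - PLANT[1])   # Euclidean distance to plant
        f1 += d * l * a * D + g * D                  # TC_ij + BC_ij
        f2 += r
    return f1, f2

f1, f2 = fitness([3, 2, 12, 9, 8, 11])
```

F_2 comes out as exactly 3.67, matching the hand calculation; F_1 is about 319,532, within rounding of the paper's 319,509.37.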


Table 3 Pareto optimal solutions for the first demand period (D = 700)

No.  Encoded solution  Decoded solution (component–supplier)          F_1           F_2
1    3 2 12 5 15 10    C1–S3, C2–S2, C3–S12, C4–S5, C5–S15, C6–S10    338,773.9212  4.68
2    1 2 12 5 7 10     C1–S1, C2–S2, C3–S12, C4–S5, C5–S7, C6–S10     309,011.1933  4.02
3    1 6 12 5 7 10     C1–S1, C2–S6, C3–S12, C4–S5, C5–S7, C6–S10     306,048.04    3.63
4    1 2 13 5 15 10    C1–S1, C2–S2, C3–S13, C4–S5, C5–S15, C6–S10    315,764.0762  4.4
5    1 2 12 5 15 10    C1–S1, C2–S2, C3–S12, C4–S5, C5–S15, C6–S10    311,535.4185  4.28
6    1 2 4 5 15 10     C1–S1, C2–S2, C3–S4, C4–S5, C5–S15, C6–S10     316,749.084   4.52
7    3 2 4 5 15 10     C1–S3, C2–S2, C3–S4, C4–S5, C5–S15, C6–S10     343,987.5867  4.92
8    3 2 13 5 15 10    C1–S3, C2–S2, C3–S13, C4–S5, C5–S15, C6–S10    343,002.5789  4.8
9    1 6 12 5 15 10    C1–S1, C2–S6, C3–S12, C4–S5, C5–S15, C6–S10    308,572.2652  3.89

Minimum size of the external archive N_exf = 5
Maximum size of the external archive 5 × N_exf
Length of the path PL = 1.3

Four different hypothetical demand periods are considered for applying the bi-objective optimization algorithm. The data for the first demand period, for a demand of 700 units of computers, are given in Table 2 in the previous section. The nine Pareto optimal solutions found by MOSOMA are presented in Table 3; the encoded solutions are easily decoded into actual supplier selections for each type of component. The nine non-dominated solutions plotted on the graph (Fig. 2) evidently produce a non-dominated front for the minimization of F_1 and the maximization of F_2. In other words, each of these solutions is a winner when both of its fitness values are compared with the rest.


Fig. 2 Pareto optimal front for the first demand period

As discussed in Sect. 3, these multiple solutions are to be further analysed by the decision-maker to make a final selection of the one to be implemented in the field. Depending on the priority placed on each objective, i.e. the total cost of supplies and transportation (F_1) and the overall reliability of the orders (F_2), a solution can be identified that optimizes that objective. For example, if the decision-maker is more concerned about the reliability track records of the suppliers, the solution with maximum F_2 (solution number 7) is preferred; similarly, solution number 3, with minimum F_1, is the best solution when cost is the priority. The rest of the non-dominated solutions present alternative trade-off options for management to choose from in case of any eventualities arising from the unavailability of suppliers or logistics. Three more hypothetical demand cycles are generated and solved using the same approach to illustrate the applicability and versatility of the evolutionary algorithm. The demands and base costs for the three cycles, along with the changes in supplier reliability, are compiled in Table 4. It can be noted that the location coordinates of the suppliers, the distance multiplication factors and the transportation cost factors used in the first demand cycle (Table 2) are assumed to remain unchanged across all cycles. The justification for changing the unit cost of components in each demand cycle is obvious, owing to inflation and tax dynamics; similarly, the reliability index of each supplier changes (improves or deteriorates) with time based on its on-time delivery performance. The Pareto optimal solutions found for the three demand cycles by MOSOMA are produced in Table 5. The demand for the fourth cycle is taken as 700, the same as in the first period.
With changes in the base cost of supplies and reliabilities, the Pareto optimal solutions for the two demand cycles are found to be different. Although one solution (1 6 12 5 15 10) is common in both periods but with different level of fitness values. Selection of a single solution for the different demand cycles can be performed using the priority for the objective functions as discussed earlier. Each demand cycle represents a virtual reconfiguration of the supply chain by activating a different set of suppliers which optimizes the stated objectives.

Bi-objective Optimization of a Reconfigurable Supply …


Table 4 Numerical data related to the three demand cycles (γ_i,j: unit cost of the component; R_i: reliability of the supplier)

| (Supplier, component) (S_i, C_j) | γ_i,j, cycle 2 (D = 1500) | R_i, cycle 2 | γ_i,j, cycle 3 (D = 300) | R_i, cycle 3 | γ_i,j, cycle 4 (D = 700) | R_i, cycle 4 |
|---|---|---|---|---|---|---|
| (S1, C1) | 120 | 0.6 | 130 | 0.6 | 135 | 0.6 |
| (S2, C2) | 50 | 0.91 | 60 | 0.75 | 65 | 0.65 |
| (S3, C1) | 110 | 0.75 | 120 | 0.7 | 130 | 0.6 |
| (S4, C3) | 70 | 0.66 | 80 | 0.66 | 85 | 0.66 |
| (S5, C4) | 120 | 0.65 | 130 | 0.65 | 135 | 0.65 |
| (S6, C2) | 60 | 0.86 | 70 | 0.86 | 75 | 0.86 |
| (S7, C5) | 80 | 0.76 | 90 | 0.79 | 95 | 0.79 |
| (S8, C5) | 85 | 0.64 | 95 | 0.64 | 98 | 0.64 |
| (S9, C4) | 120 | 0.49 | 130 | 0.49 | 135 | 0.49 |
| (S10, C6) | 75 | 0.71 | 90 | 0.81 | 120 | 0.61 |
| (S11, C6) | 85 | 0.4 | 100 | 0.4 | 100 | 0.4 |
| (S12, C3) | 65 | 0.78 | 80 | 0.72 | 90 | 0.72 |
| (S13, C3) | 60 | 0.68 | 60 | 0.68 | 65 | 0.58 |
| (S14, C4) | 110 | 0.86 | 120 | 0.76 | 140 | 0.66 |
| (S15, C5) | 70 | 0.96 | 70 | 0.96 | 74 | 0.86 |

Table 5 Pareto optimal solutions for three different demand cycles

| Demand cycle | Encoded solution | Decoded solution (suppliers) | F1 | F2 |
|---|---|---|---|---|
| 2 | 3 2 12 14 7 10 | S3 S2 S12 S14 S7 S10 | 831,066.6 | 4.77 |
| 2 | 3 2 12 14 15 10 | S3 S2 S12 S14 S15 S10 | 836,475.7 | 4.97 |
| 3 | 3 2 13 14 15 10 | S3 S2 S13 S14 S15 S10 | 181,107.4 | 4.66 |
| 3 | 3 6 12 14 15 10 | S3 S6 S12 S14 S15 S10 | 185,525.2 | 4.81 |
| 3 | 3 6 13 14 15 10 | S3 S6 S13 S14 S15 S10 | 181,337.5 | 4.77 |
| 4 | 1 6 4 14 15 10 | S1 S6 S4 S14 S15 S10 | 480,167.3 | 4.25 |
| 4 | 1 6 13 5 15 11 | S1 S6 S13 S5 S15 S11 | 460,041.6 | 3.95 |
| 4 | 1 6 12 14 15 10 | S1 S6 S12 S14 S15 S10 | 485,453.7 | 4.31 |
| 4 | 1 6 13 14 15 11 | S1 S6 S13 S14 S15 S11 | 461,223 | 3.96 |
| 4 | 1 6 4 5 15 11 | S1 S6 S4 S5 S15 S11 | 468,026.6 | 4.03 |
| 4 | 1 6 4 5 15 10 | S1 S6 S4 S5 S15 S10 | 478,985.9 | 4.24 |
| 4 | 1 2 13 5 15 11 | S1 S2 S13 S5 S15 S11 | 459,504.8 | 3.74 |
| 4 | 1 6 13 5 15 10 | S1 S6 S13 S5 S15 S10 | 471,000.9 | 4.16 |
| 4 | 1 6 13 14 15 10 | S1 S6 S13 S14 S15 S10 | 472,182.3 | 4.17 |
| 4 | 1 6 12 5 15 10 | S1 S6 S12 S5 S15 S10 | 484,272.3 | 4.3 |
| 4 | 1 6 4 14 15 11 | S1 S6 S4 S14 S15 S11 | 469,208 | 4.04 |

5 Conclusion

Many decision-making problems in supply chain systems involve multi-criteria or multi-objective optimization with conflicting objectives. In such situations, more than one optimal solution is found using the non-domination principle. Classical optimization approaches transform the problem into a single-objective one using user-provided scalar weights for each objective; hence, to find multiple optimal solutions, the weights have to be changed for each run of the search. In this paper, this limitation of the classical approach is overcome by applying MOSOMA, an evolutionary bi-objective search algorithm based on Pareto non-domination. A reconfigurable supply chain network for a laptop manufacturer is considered as a case study to illustrate the computational steps and application of the metaheuristic. Four different demand cycles are simulated using hypothetical demands and base costs of supplies, and the optimal solutions are found for each. As the supply chain model presented in the paper resembles real industries in the automotive, electronics and many other sectors, the work has potential practical applications in the field. Future research may include more conflicting objective functions and constraints to achieve a more realistic supply chain model. Further, the modular concept of reconfigurable supply chains can be extended to include suppliers, warehouses and logistics for designing an ideal modular and agile supply chain to counter dynamic demand fluctuations.

References

1. M. Holweg, The three dimensions of responsiveness. Int. J. Oper. Prod. Manage. 25(7), 603–622 (2005)
2. A. Reichhart, M. Holweg, Creating the customer-responsive supply chain: a reconciliation of concepts. Int. J. Oper. Prod. Manage. 27(11), 1144–1172 (2007). https://doi.org/10.1108/01443570710830575
3. B.M. Beamon, Supply chain design and analysis: models and methods. Int. J. Prod. Econ. 55, 281–294 (1998)
4. M. Dileep, S. Dileep, Managing supply chain flexibility using an integrated approach of classifying, structuring and impact assessment. Int. J. Serv. Oper. Manage. 8(1), 46–50 (2011)
5. M. Stevenson, M. Spring, Flexibility from a supply chain perspective: definition and review. Int. J. Oper. Prod. Manage. 27(7), 685–713 (2007)
6. J.B. Naylor, M.N. Mohamed, D. Berry, Leagility: integrating the lean and agile manufacturing paradigms in the total supply chain. Int. J. Prod. Econ. 62, 107–118 (1999)
7. A. Gunasekaran, Agile manufacturing: a framework for research and development. Int. J. Prod. Econ. 62, 87–105 (1999)
8. V.C. Pandey, S. Garg, Analysis of interaction among the enablers of agility in supply chain. J. Adv. Manage. Res. 6(1), 99–114 (2009)
9. C. Lu, S. Zhang, Reconfiguration based agile supply chain system, in IEEE International Conference on Systems, Man and Cybernetics, Tucson, USA (2001), pp. 1007–1012
10. M. Christopher, D. Towill, An integrated model for the design of agile supply chains. Int. J. Phys. Distrib. Logistics Manage. 31(4), 235–246 (2001)
11. Z. Ebrahim, A. Nurul, M. Ahmad, M. Razali, Understanding responsiveness in manufacturing operations, in International Symposium on Research in Innovation and Sustainability, Malacca, Malaysia, 15–16 Oct 2014
12. M. Catalan, H. Kotzab, Assessing the responsiveness in the Danish mobile phone supply chain. Int. J. Phys. Distrib. Logistics Manage. 33(8), 668–685 (2003)
13. A.C.C. Carlos, An updated survey of GA-based multi objective optimization techniques. ACM Comput. Surv. 32(2), 109–110 (2000)
14. K. Hitoshi, T. Tomiyama, M. Nagel, S. Silvester, H. Brezet, A multi-objective reconfiguration method of supply chains through discrete event simulation, in 4th International Symposium on Environmentally Conscious Design and Inverse Manufacturing, Tokyo (2005), pp. 320–325
15. H. Ding, B. Lyès, X. Xie, A simulation-based multi-objective genetic algorithm approach for networked enterprises optimization. Eng. Appl. Artif. Intell. 19, 609–623 (2006)
16. S.C. dos Leandro, Self-organizing migration algorithm applied to machining allocation of clutch assembly. Math. Comput. Simul. 80, 427–435 (2009)
17. S. Roman, Z. Ivan, D. Donald, O. Zuzana, Utilization of SOMA and differential evolution for robust stabilization of chaotic Logistic equation. Comput. Math. Appl. 60, 1026–1037 (2010)
18. I. Zelinka, J. Lampinen, SOMA: self-organizing migrating algorithm, in Proceedings of the 6th International Conference on Soft Computing, Brno, Czech Republic (2000), pp. 177–187
19. P. Kadlec, Z. Raida, A novel multi-objective self-organizing migrating algorithm. Radio Eng. 20(4), 804–809 (2011)
20. G.C. Onwubolu, B.V. Babu, New Optimization Techniques in Engineering (Springer, Berlin, 2010)

Part III

Image Processing and Cognition Systems

Infected Area Segmentation and Severity Estimation of Grapevine Using Fuzzy Logic

Reva Nagi and Sanjaya Shankar Tripathy

Abstract Disease in a crop is a major factor affecting plant growth and makes disease management a challenging area in agriculture. Identification of the disease and estimation of its severity are the building blocks of an effective disease management system. Disease identification is possible by visual inspection, but estimating severity visually is relatively difficult. In this paper, automatic severity estimation is done by calculating the infected area using fuzzy logic. The recognition accuracy of the proposed fuzzy system is 87.5, 86.67 and 85.83%, compared to 79.16, 75.83 and 75% for the crisp method, for black rot, black measles and leaf blight infected grape images, respectively. The proposed technique will help to quantify the diseases accurately.

Keywords Fuzzy logic · Severity estimation

1 Introduction

The agricultural community is witnessing a loss of 20–40% of total agricultural productivity around the globe [1]. Diseases, insects, weeds and environmental factors contribute to this loss. Accurate and timely identification of disease is urgently required to mitigate this loss in the future [2, 3]. Disease quantification has always been a challenging task, even when the disease symptoms are clearly visible. To effectively estimate the severity with which a crop has been infected, the prime task is to extract the infected area from the leaf image. The existing techniques used to segment the infected area, and thereby estimate the severity of the disease, are classified as visual, semiautomatic and automatic techniques. The visual methods are manual, subjective in nature and time-consuming; thus, there is a need to automate the estimation process [4, 5]. With the advancement of digital technology, semiautomatic and automatic methods can be seen as the upcoming choice. In semiautomatic methods, infected leaf images are processed using software tools [6]; these provide accurate and precise results, but at the cost of time. Given these constraints of manual and semiautomatic techniques, various automatic methods have been employed for severity estimation in recent years. Segmentation is the most widely used technique to separate the infected region from the entire leaf. Infected region segmentation has witnessed various algorithms, such as segmentation of infected pixels on the basis of gray levels [7], identification of symptom edges using the Sobel operator [8], histograms of intensities [9] and triangle thresholding [10]. Cui et al. [11] extracted the ratio of infected area (RIA) and rust color index (RCI) and used them as symptom indicators to estimate severity in soybean. The automatic techniques are more accurate and precise in their quantification than visual estimation, and they are computationally faster than semiautomatic techniques [12]. However, the automatic techniques discussed above use a fixed threshold to separate the infected area from the leaf. Fuzzy logic has been applied in segmentation and disease quantification algorithms: Sekulska-Nalewajko and Goclawski [13] showed that one image pixel feature can contribute to more than one cluster using fuzzy c-means (FCM) clustering. According to fuzzy logic, pixels of the infected region are not completely non-green but contain some green color. In this study, we extract the infected area from the leaf and use it to estimate the severity of the disease using fuzzy logic.

R. Nagi · S. S. Tripathy, Department of Electronics and Communication Engineering, Birla Institute of Technology Mesra, Ranchi, India

© Springer Nature Singapore Pte Ltd. 2020. S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988. https://doi.org/10.1007/978-981-13-8222-2_5

2 Methodology

In this section, the method to estimate the infected area from the leaf image is discussed. Figure 1 shows the flowchart of the proposed method.

2.1 Data Analysis

Leaf images (healthy and infected) of grapevine over three disease classes, namely black rot, black measles and leaf blight, are taken from the PlantVillage database [14], where each disease class contains 1000 images. The images are in RGB color space with pixel intensities varying from 0 to 255, and the size of each image is 256 × 256. A group of 120 images from each disease class was selected by plant pathologists for the performance analysis of the system.


Fig. 1 Flowchart of the proposed fuzzy method to extract infected area from the leaf image

2.2 Background Removal

This process enhances the green color of the input image by computing the following for each pixel:

y1 = G − B (1)

y2 = G − R (2)

y = y1 + 3y2 (3)

Pixels corresponding to the leaf region are considered foreground, while the others are considered background. A mask is created by applying the active contour technique [15] on the enhanced image, treating foreground pixels as logical 1 and background pixels as logical 0. Figure 2 shows how the mask is used to separate the foreground from the background in the infected grape images.

Fig. 2 Illustration of the background removal process showing original input image (left column), mask used to segment leaf area (middle column), and background removed image (right column) infected by a black rot, b black measles, and c leaf blight
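Equations (1)–(3) can be sketched as follows (a minimal illustration; a simple positive threshold stands in for the active-contour step [15], which the paper applies to the enhanced image):

```python
import numpy as np

def green_enhance(img):
    """Green enhancement per Eqs. (1)-(3): y = (G - B) + 3 * (G - R)."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (g - b) + 3 * (g - r)

def leaf_mask(img):
    """Crude foreground mask: leaf pixels have a positive green-enhanced value.
    (The paper instead runs active contours on the enhanced image.)"""
    return green_enhance(img) > 0
```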

2.3 Color Transformation

The variation in intensity values among pixels is difficult to account for in the RGB color space. Thus, the leaf images are converted into HSI color space, since it decouples the hue and saturation components from the intensity value. The hue component describes color as an angle varying from 0° to 360°. The hue color wheel in Fig. 3 shows hue values for different colors, and Table 1 lists them. The 0°–360° hue values are normalized to the range 0–1 for further processing.

Fig. 3 Illustration of the hue color wheel showing hue values of different colors

Table 1 Hue values for different colors

| Color | Hue value [0°, 360°] | Normalized hue value [0, 1] |
|---|---|---|
| Green | 94°–148° | 0.26–0.41 |
| Yellow | 65°–100° | 0.18–0.28 |
| Red | 0°–15° or 350°–360° | 0–0.04 or 0.97–1 |
| Orange/brown | 30°–45° | 0.083–0.125 |
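The hue extraction can be sketched with the standard library (a minimal illustration; `colorsys` computes the HSV hue, which uses the same hue angle as HSI):

```python
import colorsys

def normalized_hue(r, g, b):
    """Normalized hue in [0, 1] for 8-bit RGB values."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h
```

Pure green (0, 255, 0) maps to 120°, i.e. a normalized hue of 1/3, which lies inside Table 1's green band of 0.26–0.41.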

2.4 Segmentation of Infected Area Using Fuzzy Logic

The segmentation process separates regions of interest from the entire image by grouping similar pixels together. In this paper, region growing segmentation is used to segment the infected area from the entire leaf image. The HSI color wheel (Fig. 3) shows that the hue value for green varies from 105° to 135°; hue values of a healthy leaf image lie within this range. With the development of infection, the green color in the leaf loses its pure content. The leaf turns yellow, with hue values varying from 65° to 105°. As the infection advances, various color variations are observed, such as red (hue 0°–15° or 350°–360°) and orange or brown (hue between 30° and 45°), depending on the type of infection. The hue values, along with their normalized versions, for green, yellow, red, orange and brown are given in Table 1.

An appropriate threshold value is required to separate the infected area from the healthy portion of the leaf image. Cui et al. [11] considered a fixed threshold on the hue values to distinguish between healthy and infected regions in the soybean leaf. But in a practical scenario, no such fixed demarcation exists. Depending on the degree of greenness present in a pixel, it contributes to the healthy as well as the infected region. As the degree of greenness in a pixel decreases, the pixel contributes less as a healthy pixel and more as an infected one. The degree of greenness is well captured by fuzzy logic by assigning a membership value for the healthy region. The trapezoidal membership function, shown in Fig. 4, segments the healthy region from the entire leaf region. The trapezoidal membership function is preferred over other shapes, such as Gaussian and bell, due to the ease it provides in arithmetic operations. Moreover, it requires fewer input parameters to define than other shapes. Most importantly, the trapezoidal membership function attains its maximum membership value over a range of values, unlike other functions, which achieve their maximum at a single point. Once the fuzzy green region is extracted, the fuzzy infected area is calculated using the following formula:

Infected area = Total leaf area − Healthy area (4)

The area calculations are done on the basis of the number of pixels. The infected region segmented from the entire leaf region is shown in Fig. 5.

Fig. 4 Illustration of the trapezoidal membership function for the green region in the leaf image
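The trapezoidal membership and Eq. (4) can be sketched as follows. The corner points below are hypothetical: the paper gives the green band (0.26–0.41 normalized hue) but does not state the exact corners of the function in Fig. 4.

```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def healthy_membership(h):
    # Hypothetical corners around Table 1's green band (assumption, not from the paper)
    return trapmf(h, 0.18, 0.26, 0.41, 0.45)

def fuzzy_infected_area(hues):
    """Eq. (4): infected area = total leaf area - (fuzzy) healthy area, in pixels."""
    healthy = sum(healthy_membership(h) for h in hues)
    return len(hues) - healthy
```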

2.5 Severity Calculation

Once the infected area is segmented out of the entire leaf region, the severity with which the disease has affected the crop becomes the prime concern. The severity of the disease, calculated as the percentage of the infected region over the entire leaf region, is given by:

%s = (cd / ct) × 100 (5)

where cd is the number of diseased pixels in the leaf image and ct is the total number of pixels in the leaf image. Based on the percentage of severity, the stage of the disease is determined as early, middle, later or advanced (as shown in Table 2).

Fig. 5 Illustration of segmentation process with background removed image (left column) and infected area segmented from the leaf (right column) in a black rot, b black measles, and c leaf blight using fuzzy logic

Table 2 Classification of disease stage based on the severity of the infection

| Disease stage | Severity of the infection (in %) |
|---|---|
| Early | 0–25 |
| Middle | 26–50 |
| Later | 51–75 |
| Advanced | >75 |
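Equation (5) and the stage bands of Table 2 can be sketched as follows; the pixel counts from the first sample of Table 3 reproduce its fuzzy severity estimate of 27.54% (middle stage):

```python
def severity(diseased_px, total_px):
    """Eq. (5): percent severity of infection."""
    return 100.0 * diseased_px / total_px

def stage(pct):
    """Map percent severity to the disease stage bands of Table 2."""
    if pct <= 25:
        return "Early"
    if pct <= 50:
        return "Middle"
    if pct <= 75:
        return "Later"
    return "Advanced"
```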


3 Results and Discussion

The results obtained using the proposed algorithm are presented and discussed in this section. The crisp threshold, as discussed by Cui et al. [11], was applied to the 1000 grape images provided by the PlantVillage database to evaluate the performance of the crisp method on our dataset. Table 3 shows the severity estimated by the existing crisp method and the proposed fuzzy method. For the first sample, the severity estimated by the crisp method is 64.36%, which places the leaf image in the later stage of the disease, whereas the severity estimated by the proposed fuzzy method is 27.54%, which correctly places the disease severity in the middle stage. The crisp method counts infected pixels using a fixed threshold, which drives the severity estimate to a higher value. The fuzzy method evaluates the infected area considering the fuzzy greenness present in the infected pixels and thus estimates the disease severity accurately.

Table 3 Comparison of the severity estimation of a black rot, b black measles, and c leaf blight by the crisp and proposed fuzzy method

| Sample image | Actual severity stage | Total leaf area (pixels) | Infected area, crisp (pixels) | Infected area, fuzzy (pixels) | Severity, crisp (%) | Severity, fuzzy (%) |
|---|---|---|---|---|---|---|
| (a) | Middle | 32,192 | 20,719 | 8866.877 | 64.36 | 27.54 |
| (b) | Later | 28,748 | 24,976 | 12,501.4132 | 86.88 | 66.85 |
| (c) | Later | 30,226 | 25,459 | 18,561.6957 | 84.23 | 60.73 |


Table 4 Confusion matrix for black rot at four stages of severity using the crisp method (30 test images per true severity stage)

| True stage \ Predicted stage | Early | Middle | Later | Advance |
|---|---|---|---|---|
| Early | 24 | 6 | 0 | 0 |
| Middle | 1 | 23 | 6 | 0 |
| Later | 0 | 1 | 21 | 8 |
| Advance | 0 | 0 | 3 | 27 |

Table 5 Confusion matrix for black rot at four stages of the severity using the proposed fuzzy method (30 test images per true severity stage)

| True stage \ Predicted stage | Early | Middle | Later | Advance |
|---|---|---|---|---|
| Early | 27 | 3 | 0 | 0 |
| Middle | 2 | 26 | 2 | 0 |
| Later | 0 | 2 | 25 | 3 |
| Advance | 0 | 0 | 3 | 27 |

A confusion matrix shows the performance of the proposed system in categorizing an infected image into its correct disease stage depending on the severity. The predictions, whether correct or incorrect, are summarized as count values. The diagonal cells show, out of the 120 test images, how many images of each stage are correctly categorized by the system. Table 4 shows the confusion matrix of grape infected by black rot (using the crisp method), in which the rows and columns represent the true and predicted stages of severity, respectively. The true-positive prediction of the crisp system for black rot is 79.16%. Table 5 shows the confusion matrix of black rot infected grapevine for the estimation using the proposed fuzzy method; the true-positive prediction of the proposed fuzzy system for black rot is 87.5%. Similarly, the true-positive predictions of the crisp system and the proposed fuzzy system are 75.83 and 86.67% for black measles (Tables 6 and 7) and 75 and 85.83% for leaf blight (Tables 8 and 9), respectively.
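The quoted true-positive accuracies are the diagonal sums of these matrices over the 120 test images. Using the fuzzy black-rot counts from Table 5, this can be checked as follows (a small sketch with the table's values):

```python
import numpy as np

# Table 5 confusion matrix (rows: true stage, columns: predicted stage),
# in the order Early, Middle, Later, Advance.
cm = np.array([
    [27, 3, 0, 0],
    [2, 26, 2, 0],
    [0, 2, 25, 3],
    [0, 0, 3, 27],
])

# True-positive accuracy = diagonal sum over all test images.
accuracy = 100.0 * np.trace(cm) / cm.sum()
```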

Table 6 Confusion matrix for black measles at four stages of the severity using the crisp method (30 test images per true severity stage)

| True stage \ Predicted stage | Early | Middle | Later | Advance |
|---|---|---|---|---|
| Early | 22 | 8 | 0 | 0 |
| Middle | 1 | 23 | 6 | 0 |
| Later | 0 | 1 | 21 | 8 |
| Advance | 0 | 0 | 5 | 25 |

Table 7 Confusion matrix for black measles at four stages of the severity using the proposed fuzzy method (30 test images per true severity stage)

| True stage \ Predicted stage | Early | Middle | Later | Advance |
|---|---|---|---|---|
| Early | 25 | 5 | 0 | 0 |
| Middle | 3 | 25 | 2 | 0 |
| Later | 0 | 3 | 26 | 1 |
| Advance | 0 | 0 | 2 | 28 |

Table 8 Confusion matrix for leaf blight at four stages of the severity using the crisp method (30 test images per true severity stage)

| True stage \ Predicted stage | Early | Middle | Later | Advance |
|---|---|---|---|---|
| Early | 23 | 7 | 0 | 0 |
| Middle | 3 | 21 | 6 | 0 |
| Later | 0 | 1 | 24 | 5 |
| Advance | 0 | 0 | 8 | 22 |

Table 9 Confusion matrix for leaf blight at four stages of the severity using the proposed fuzzy method (30 test images per true severity stage)

| True stage \ Predicted stage | Early | Middle | Later | Advance |
|---|---|---|---|---|
| Early | 27 | 3 | 0 | 0 |
| Middle | 3 | 25 | 2 | 0 |
| Later | 0 | 2 | 25 | 3 |
| Advance | 0 | 0 | 4 | 26 |

4 Conclusion

This paper proposes a fuzzy logic-based severity estimation system. The hue value of the non-green region is calculated using a fuzzy membership function. The performance of the proposed system is tested using confusion matrices. The true-positive recognition accuracy of the proposed system was found to be higher for each infection category than the recognition accuracy of the crisp method for grapevine. In the future, the number of lesion regions on the leaf can also be included as a parameter for disease quantification.

References

1. J.L. Dangl, D.M. Horvath, B.J. Staskawicz, Pivoting the plant immune system from dissection to deployment. Science 341(6147), 746–751 (2013)
2. W. Yang, J. Chen, G. Chen, S. Wang, F. Fu, The early diagnosis and fast detection of blast fungus, Magnaporthe grisea, in rice plant by using its chitinase as biochemical marker and a rice cDNA encoding mannose-binding lectin as recognition probe. Biosens. Bioelectron. 41, 820–826 (2013)
3. J.G.A. Barbedo, A novel algorithm for semi-automatic segmentation of plant leaf disease symptoms using digital image processing. Tropical Plant Pathology 41(4), 210–224 (2016)
4. C.H. Bock, P.E. Parker, A.Z. Cook, T.R. Gottwald, Visual rating and the use of image analysis for assessing different symptoms of citrus canker on grapefruit leaves. Plant Dis. 92(4), 530–541 (2008)
5. C.H. Bock, A.Z. Cook, P.E. Parker, T.R. Gottwald, Automated image analysis of the severity of foliar citrus canker symptoms. Plant Dis. 93(6), 660–665 (2009)
6. J.G.A. Barbedo, L.V. Koenigkan, T.T. Santos, Identifying multiple plant diseases using digital image processing. Biosys. Eng. 147, 104–116 (2016)
7. T.V. Price, R. Gross, W.J. Ho, C.F. Osborne, A comparison of visual and digital image-processing methods in quantifying the severity of coffee leaf rust (Hemileia vastatrix). Aust. J. Exp. Agric. 33(1), 97–101 (1993)
8. S. Weizheng, W. Yachun, C. Zhanliang, W. Hongda, Grading method of leaf spot disease based on image processing, in Proceedings of the IEEE International Conference on Computer Science and Software Engineering, vol. 6 (2008), pp. 491–494
9. A. Camargo, J.S. Smith, Image pattern classification for the identification of disease causing agents in plants. Comput. Electron. Agric. 66(2), 121–125 (2009)
10. S.B. Patil, S.K. Bodhe, Leaf disease severity measurement using image processing. Int. J. Eng. Technol. 3(5), 297–301 (2011)
11. D. Cui, Q. Zhang, M. Li, G.L. Hartman, Y. Zhao, Image processing methods for quantitatively detecting soybean rust from multispectral images. Biosys. Eng. 107(3), 186–193 (2010)
12. J.G.A. Barbedo, An automatic method to detect and measure leaf disease symptoms using digital image processing. Plant Dis. 98(12), 1709–1716 (2014)
13. J. Sekulska-Nalewajko, J. Goclawski, A semi-automatic method for the discrimination of diseased regions in detached leaf images using fuzzy c-means clustering, in Proceedings of the IEEE International Conference on Perspective Technologies and Methods in MEMS Design (MEMSTECH) (2011), pp. 172–175
14. D. Hughes, M. Salathé, An open access repository of images on plant health to enable the development of mobile disease diagnostics. arXiv preprint (2015). arXiv:1511.08060
15. T.F. Chan, L.A. Vese, Active contours without edges. IEEE Trans. Image Process. 10(2), 266–277 (2001)

Image Encryption Using Modified Rubik's Cube Algorithm

Rupesh Kumar Sinha, Iti Agrawal, Kritika Jain, Anushka Gupta and S. S. Sahu

Abstract For multimedia applications, data are generated and transmitted through networks. These multimedia data should not be accessible to unauthorized persons, as they may contain sensitive information, so image security and privacy have become a major issue in communication. In this work, we propose an advanced encryption scheme based on a modified Rubik's cube algorithm. First, the original image is scrambled using two secret keys, generated using a logistic function and a shift-register method, respectively. Then, the rows and columns of the scrambled image are further mixed with the secret keys using the XOR operator. Performance of the proposed scheme is assessed with the correlation coefficient and information entropy. From the experimental analysis and security parameter evaluation, it can be observed that the proposed scheme can resist exhaustive, statistical and differential attacks.

Keywords Image security · Modified Rubik's algorithm · XOR operator · Correlation coefficient

1 Introduction

Data security has become a challenging task in recent years because of the increasing communication of sensitive information. The Internet is widely used for services such as e-commerce and online banking. As a result, sensitive information is under threat of privacy breach, data hacking and identity theft, leading to uncertainty in using these means of communication. This is the main reason for the need for highly secure communication systems, and it has motivated researchers to develop encryption techniques to safeguard multimedia data.

R. K. Sinha · I. Agrawal · K. Jain · A. Gupta · S. S. Sahu, Electronics and Communication Engineering, Birla Institute of Technology, Mesra, Ranchi, India

© Springer Nature Singapore Pte Ltd. 2020. S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988. https://doi.org/10.1007/978-981-13-8222-2_6


Various encryption techniques have been proposed by researchers in recent years. These techniques may be categorized in different ways, such as value alteration, change in pixel positions and the use of chaotic systems to maintain security. In [1], the Rubik's cube algorithm was used to scramble the original image; two secret keys were then generated and an XOR operation applied to the rows and columns of the jumbled image. In [2], an efficient iris-based cryptosystem was proposed which protects against replay attacks by means of confusion and diffusion operations. In [3], the performance and correctness of the Rubik's cube-based image encryption algorithm were assessed for mobile devices. In [4], an algorithm was proposed for 3D encryption of images, with RC6 as the first step, followed by the Rubik's cube; the results show robustness along with safety, and the encrypted images are transmitted over an orthogonal frequency-division multiplexing channel for wireless applications. In [5], a security measure for images based on the Rubik's cube was explained, in which images are scrambled by mixing pixels with XOR operations. In [6], different types of image encryption and decryption techniques were discussed, with emphasis on protecting the authenticity, integrity and confidentiality of images. In [7], a new method was explained in which ANN and Rubik's cube principles are used together for secure data transmission; the ANN handles the non-linearity and maps inputs to outputs. In [8], selective encryption methods were explained that minimize encryption time by compressing some parts of the data; permutation and combination techniques are used together for better security. In [9], a unique image encryption scheme was discussed in which a skew tent map, together with a permutation-diffusion architecture, is used to obtain the P-box and key stream. In [10], a cryptosystem was explained where an Arnold cat map is used for bit-level permutation and a logistic map for the diffusion process. In [11], a unique encryption method based on mixing image components was explained, in which four different chaotic systems are combined with pixel shuffling to disorder the distributive characteristics of the RGB levels. In [12], a novel encryption technique was proposed for real-time applications, in which a generalized 3D cat map is used to jumble the positions of pixels and an additional chaotic map is applied to confuse the relationship between the cipher-image and the plain-image. In [13], a chaotic image encryption system with a perceptron model was proposed for analyzing cryptographic security. In [14], a fast image encryption algorithm combining permutation and diffusion was proposed: the image is divided into blocks of pixels, and spatiotemporal chaos is employed to shuffle the blocks and change the pixel values.

Here, we present an encryption technique based on a modified Rubik's cube principle, in which the pixel positions of the image are scrambled. Two random keys are generated, one using a logistic function and the other using a shift-register concept. These two randomly generated secret keys are used to circular-shift the original image row-wise and column-wise, respectively. After this step, bitwise XOR operations are performed with the secret keys. These steps are repeated until a predefined number of iterations is reached. The generation of the two keys by different methods, as well as the derivation of the number of iterations, is unique to this technique.

2 Methodology

For scrambling or ciphering images, the Rubik's cube concept can be used in digital image encryption algorithms. The idea behind using the Rubik's cube principle for image cryptosystems is to permute pixels within individual rows and columns of an image by circular-shifting them in any possible way.

2.1 Image Encryption

The step-by-step encryption procedure of the proposed modified Rubik's cube algorithm is as follows:

1. The input image (m x n) is converted into a gray-scale image.
2. Two keys K_r and K_c are generated, of lengths m and n, respectively. K_r is generated using the logistic function X_{n+1} = r * X_n (1 - X_n), and K_c is generated using a shift-register concept, as shown in Figs. 1 and 2, respectively.
3. Generation of K_r (Fig. 1): take two input variables r and X_0, with r in the range (2, 4) and X_0 in (0, 1); generate a sequence of length m using X_{n+1} = r * X_n (1 - X_n); multiply each element of the sequence by 255; convert each element into 8-bit binary.
4. Generation of K_c (Fig. 2): take an 8-bit binary string as input, which becomes the first term of the sequence; take bits D0, D4, D5, and D6 and XOR them; put the XORed bit at the end of the string and remove the first bit, which gives the next element of the sequence; repeat this procedure to generate a sequence of length n.
5. After generating the keys, a variable ITER_Max is computed, which gives the total number of iterations.

Fig. 1 Block diagram for generating K_r

Fig. 2 Block diagram for generating K_c

6. ITER_Max = (K_r[m/2] XOR K_c[n/2]) % 5 + 10.
7. Initialize the variable ITER = 0; it is incremented by 1 after each execution of the following steps.
8. Perform row-wise shifting of the image for each row i:
   a. Find the sum of all elements of the row and take its modulo 2, denoted M(i).
   b. Based on K_r(i), a left or right circular shift is performed on row i, according to the following condition: if M(i) = 0, a right circular shift, otherwise a left circular shift.
9. Perform column-wise shifting of the image for each column j:
   a. Find the sum of all elements of the column and take its modulo 2, denoted M(j).
   b. Based on K_c(j), a down or up circular shift is performed on column j, according to the following condition: if M(j) = 0, an up circular shift, otherwise a down circular shift.
10. For each row of the image, XOR the row with K_c and then circularly right-shift K_c by one element.
11. For each column of the image, XOR the column with K_r and then circularly right-shift K_r by one element.
12. If ITER = ITER_Max, the encryption process is over; otherwise go to step 7.
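The two key generators and the iterated shift-and-XOR rounds can be sketched in Python. The logistic parameters r and X0, the shift-register seed, and the plain-list image representation are illustrative assumptions; the paper fixes only the update rules, the tap positions D0, D4, D5, D6, and the formula for the iteration count.

```python
def logistic_key(m, r=3.99, x0=0.6):
    """K_r: chaotic sequence from X(n+1) = r * X(n) * (1 - X(n)), scaled to 0-255."""
    key, x = [], x0
    for _ in range(m):
        x = r * x * (1 - x)
        key.append(int(x * 255))
    return key

def lfsr_key(n, seed=0b10110101):
    """K_c: 8-bit shift-register sequence; feedback bit is the XOR of D0, D4, D5, D6."""
    key, state = [], seed & 0xFF
    for _ in range(n):
        key.append(state)
        fb = ((state >> 0) ^ (state >> 4) ^ (state >> 5) ^ (state >> 6)) & 1
        state = ((state >> 1) | (fb << 7)) & 0xFF
    return key

def rotate(seq, k):
    """Circular shift of a list; positive k shifts right (or down, for columns)."""
    k %= len(seq)
    return seq[-k:] + seq[:-k]

def encrypt(img, Kr, Kc, iter_max):
    """Steps 7-12, repeated iter_max times; img is a list of rows of 0-255 pixels."""
    m, n = len(img), len(img[0])
    for _ in range(iter_max):
        for i in range(m):                      # step 8: row shifts by K_r(i)
            k = Kr[i] if sum(img[i]) % 2 == 0 else -Kr[i]
            img[i] = rotate(img[i], k)
        for j in range(n):                      # step 9: column shifts by K_c(j)
            col = [img[i][j] for i in range(m)]
            k = Kc[j] if sum(col) % 2 == 0 else -Kc[j]
            col = rotate(col, k)
            for i in range(m):
                img[i][j] = col[i]
        for i in range(m):                      # step 10: XOR rows with K_c
            img[i] = [p ^ k for p, k in zip(img[i], Kc)]
            Kc = rotate(Kc, 1)
        for j in range(n):                      # step 11: XOR columns with K_r
            for i in range(m):
                img[i][j] ^= Kr[i]
            Kr = rotate(Kr, 1)
    return img

# Keys and the data-dependent iteration count of step 6
Kr, Kc = logistic_key(8), lfsr_key(8)
iter_max = (Kr[8 // 2] ^ Kc[8 // 2]) % 5 + 10
```

Since ITER_Max = (...) % 5 + 10, the round count always lies between 10 and 14, which matches the 12-iteration output shown later in Fig. 9.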


2.2 Image Decryption

The step-by-step decryption procedure of the proposed modified Rubik's cube algorithm is as follows:

1. Initialize the variable ITER = 0.
2. Increment ITER by 1 after each execution of the following steps.
3. For every column of the image, XOR the column with K_r and then circularly right-shift K_r by one element.
4. For every row of the image, XOR the row with K_c and then circularly right-shift K_c by one element.
5. Perform column-wise shifting of the image for each column j:
   a. Calculate the sum of all elements of the column and take its modulo 2, denoted M(j).
   b. Based on K_c(j), a down or up circular shift is performed on column j, according to the following condition: if M(j) = 0, an up circular shift, otherwise a down circular shift.
6. Perform row-wise shifting of the image for each row i:
   a. Calculate the sum of all elements of the row and take its modulo 2, denoted M(i).
   b. Based on K_r(i), a left or right circular shift is performed on row i, according to the following condition: if M(i) = 0, a right circular shift, otherwise a left circular shift.
7. If ITER = ITER_Max, the decryption process is over; otherwise go to step 2.

3 Results & Analysis

Encryption and decryption are performed on standard images such as Lena and Baboon, and various parameters are calculated to assess the efficiency of the algorithm. The standard Lena image shown in Fig. 3 is used for the analysis. Figures 4 and 5 show the outputs of row-wise and column-wise image shifting using K_r and K_c. Figures 6 and 7 show the outputs of the row-wise and column-wise XOR operations using K_c and K_r, respectively. Figures 8 and 9 show the outputs after 5 and 12 iterations, respectively, and Fig. 10 shows the decrypted image obtained after the decryption process. Security analysis of the algorithm is carried out by calculating standard parameters, namely the correlation coefficient (CC) and the information entropy.

Fig. 3 Original image

Fig. 4 Image shifting row-wise using K_r

Fig. 5 Image shifting column-wise using K_c


Fig. 6 Row-wise XOR operation using K_c

Fig. 7 Column-wise XOR operation using K_r

Fig. 8 After 5 iterations

3.1 Correlation Coefficient

The correlation coefficient measures the relationship between two variables. In an ordinary unencrypted image, each pixel is strongly correlated with its adjacent pixels, giving a very high correlation coefficient. To evaluate it, N pairs of adjacent pixels (vertical and horizontal) are selected randomly from the original and the encrypted images, respectively. The correlation coefficient of each set of pairs is then calculated using:


Fig. 9 After 12 iterations

Fig. 10 Decrypted image

$$R_{f,g} = \frac{\operatorname{cov}(f,g)}{\sqrt{D(f)}\,\sqrt{D(g)}}$$

$$\operatorname{cov}(f,g) = \frac{1}{N}\sum_{i=1}^{N}\bigl(f_i - E(f)\bigr)\bigl(g_i - E(g)\bigr)$$

$$D(f) = \frac{1}{N}\sum_{i=1}^{N}\bigl(f_i - E(f)\bigr)^2, \qquad E(f) = \frac{1}{N}\sum_{i=1}^{N} f_i$$

where f and g are the gray values of adjacent pixels, R_{f,g} is the correlation coefficient, cov is the covariance of the pixel pairs, D(f) is the variance, and E(f) is the mean. Table 1 shows that the proposed encryption scheme yields much lower correlation values than the original image.
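The measurement above can be reproduced with a short routine that follows the formulas directly (E as mean, D as variance, cov as covariance). The pair count, the random seed, and the list-of-rows image layout are assumptions for illustration.

```python
import random

def corr_adjacent(img, n_pairs=2000, direction="horizontal", seed=0):
    """Correlation coefficient R(f,g) of randomly selected adjacent pixel pairs."""
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    f, g = [], []
    for _ in range(n_pairs):
        if direction == "horizontal":
            i, j = rng.randrange(h), rng.randrange(w - 1)
            f.append(img[i][j])
            g.append(img[i][j + 1])
        else:  # vertical neighbours
            i, j = rng.randrange(h - 1), rng.randrange(w)
            f.append(img[i][j])
            g.append(img[i + 1][j])
    N = n_pairs
    Ef, Eg = sum(f) / N, sum(g) / N                     # E(f), E(g)
    cov = sum((a - Ef) * (b - Eg) for a, b in zip(f, g)) / N
    Df = sum((a - Ef) ** 2 for a in f) / N              # D(f)
    Dg = sum((b - Eg) ** 2 for b in g) / N              # D(g)
    return cov / (Df ** 0.5 * Dg ** 0.5)
```

On a smooth natural image the coefficient is close to 1; on a well-encrypted image it should be close to 0, as Table 1 reports.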


Table 1 Comparison of correlation coefficients

Correlation coefficient | Proposed method (Lena) | Proposed method (Baboon) | Ref. [2] | Ref. [12]
Horizontal              | 0.0071                 | 0.0056                   | 0.0206   | 0.0118
Vertical                | 0.0089                 | 0.0081                   | 0.0115   | 0.00016

Table 2 Entropy comparison for different images

Image  | Information entropy (Proposed) | Information entropy (Baptista's, Ref. [15]) | Information entropy (Wong's, Ref. [16])
Lena   | 7.993                          | 7.926                                       | 7.969
Baboon | 7.985                          | -                                           | -

3.2 Information Entropy

Information entropy describes the average amount of information per symbol, i.e., the number of bits a compression algorithm must spend to code the data. It is calculated by the formula:

$$H(c) = \sum_{i=0}^{2^{L}-1} P(c_i)\,\log_2 \frac{1}{P(c_i)}$$

where L is the number of bits per pixel and P(c_i) is the probability of gray level c_i.

For 8-bit (256-level) gray-scale images, the theoretical maximum entropy is 8. In this work, the entropy of the encrypted image is very close to this ideal value, which indicates the robustness of the proposed algorithm against entropy attacks (Table 2).
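A direct implementation of the entropy formula, assuming 8-bit pixels (L = 8, so 256 gray levels) stored as a list of rows:

```python
import math

def entropy(img):
    """Shannon entropy H(c) = sum over gray levels of P(ci) * log2(1 / P(ci))."""
    hist = [0] * 256
    total = 0
    for row in img:
        for p in row:
            hist[p] += 1
            total += 1
    h = 0.0
    for count in hist:
        if count:                       # skip levels with P(ci) = 0
            prob = count / total
            h -= prob * math.log2(prob)
    return h
```

A constant image gives H = 0, while a perfectly uniform histogram reaches the maximum of 8 bits; the values reported in Table 2 approach that maximum.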

4 Conclusion

In this paper, an encryption algorithm based on a modified Rubik's cube principle is proposed. A chaotic key is generated using the logistic function, and a second key is generated using a shift-register property. The maximum number of iterations depends on K_r and K_c, which are random in nature. To scramble the original image into the encrypted image, the XOR operator is applied to the rows and columns separately. The performance of the proposed method is assessed on standard images by computing statistical parameters such as entropy and correlation coefficients. The results confirm that the proposed algorithm is highly secure. It is also capable of fast encryption and decryption, which makes it appropriate for real-time applications.


References

1. K. Loukhaoukha, J.-Y. Chouinard, A. Berdai, A secure image encryption algorithm based on Rubik's cube principle. J. Electr. Comput. Eng. 2012, Article ID 173931 (2012)
2. K. Loukhaoukha, M. Nabti, K. Zebbiche, An efficient image encryption algorithm based on blocks permutation and Rubik's cube principle for iris images, in 8th International Workshop on Systems, Signal Processing and Their Applications (WoSSPA) (2013), pp. 267-272
3. V.M. Ionescu, A.V. Diaconu, Rubik's cube principle based image encryption algorithm implementation on mobile devices, in International Conference on Electronics, Computers and Artificial Intelligence, 25-27 June, Bucharest, Romania (2015)
4. M. Helmy, E.S.M. El Rabaie, I.M. Eldokany, F.E. Abd El Samie, 3-D image encryption based on Rubik's cube and RC6 algorithm. 3D Res. 8, 38 (2017)
5. M. Sirisha, S.V.V.S. Lakshmi, Pixel transformation based on Rubik's cube principle. Int. J. Appl. Innov. Eng. Manage. (IJAIEM) 3(5) (2014)
6. R. Pakshwar, V.K. Trivedi, V. Richhariya, A survey on different image encryption and decryption techniques. Int. J. Comput. Sci. Inf. Technol. 4(1), 113-116 (2013)
7. T. Gomathi, B.L. Shivakumar, A secure image encryption algorithm based on ANN and Rubik's cube principle. ARPN J. Eng. Appl. Sci. 11(1) (2016)
8. P. Praveenkumar, C. Swathi, K. Thenmozhi, J.B.B. Rayappan, R. Amirtharajan, Chaotic & partial encrypted image on XOR bus - an unidentified carrier approach, in International Conference on Computer Communication and Informatics, 7-9 January, Coimbatore, India (2016)
9. G. Zhang, Q. Liu, A novel image encryption method based on total shuffling scheme. Opt. Commun. 284(12), 2775-2780 (2011)
10. Z.-L. Zhu, W. Zhang, K.-W. Wong, H. Yu, A chaos-based symmetric image encryption scheme using a bit-level permutation. Inf. Sci. 181(6), 1171-1186 (2011)
11. C.K. Huang, H.H. Nien, Multi chaotic systems based pixel shuffle for image encryption. Opt. Commun. 282(11), 2123-2127 (2009)
12. G. Chen, Y. Mao, C.K. Chui, A symmetric image encryption scheme based on 3D chaotic cat maps. Chaos Solitons Fractals 21(3), 749-761 (2004)
13. X.Y. Wang, L. Yang, R. Liu, A. Kadir, A chaotic image encryption algorithm based on perceptron model. Nonlinear Dyn. 62(3), 615-621 (2010)
14. Y. Wang, K.W. Wong, X. Liao, G. Chen, A new chaos-based fast image encryption algorithm. Appl. Soft Comput. 11(1), 514-522 (2011)
15. M.S. Baptista, Cryptography with chaos. Phys. Lett. A 240(1-2), 50-54 (1998)
16. K.W. Wong, S.W. Ho, C.K. Yung, A chaotic cryptography scheme for generating short ciphertext. Phys. Lett. A 310(1), 67-73 (2003)

Part IV

Data Mining

Decision Support System for Business Intelligence Using Data Mining Techniques: A Case Study

Pankaj Gupta and Bharat Bushan Sagar

Abstract Business intelligence (BI) is a set of strategies, architectures, and technologies that transform raw data into meaningful and useful information, enabling more effective strategic and operational insight and decision-making. Decision support systems (DSSs) help translate raw information into forms more readily understood and used by senior management. BI tools are used to build DSSs that extract the required information from large databases and produce user-friendly charts for decision-making; to generate such charts, we use an open-source BI charting tool (FusionCharts), which exploits the available data to gain a better understanding of the past and to predict or influence the future through better decision-making. Broadly characterized, data mining draws on statistics, artificial intelligence, and machine learning, i.e., knowledge discovery in databases. A DSS uses the available data and data mining techniques (DMT) to provide a decision-making instrument, usually relying on human-computer interaction. Together, DMT and DSS cover the range of analytical data technologies and give us data-driven, human-centered goals. Here, we present a case study of a DSS for BI that applies data mining procedures to predict the energy produced by a wind power plant; notable results were achieved by placing a cutoff. The data mining procedures could thus be trained to capture improved dependencies among the variables and come much closer to the actually measured values.

Keywords Decision support system - Decision-making process - Big data - Business intelligence - Data mining - Organizational learning

P. Gupta (B) · B. B. Sagar Department of Computer Science & Engineering, Faculty of Computer Science & Engineering, BIT, Off-Campus, Noida, India e-mail: [email protected] B. B. Sagar e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988, https://doi.org/10.1007/978-981-13-8222-2_7


1 Introduction

1.1 Decision Support System (DSS)

Decision making is the process of developing and analyzing alternatives in order to reach a judgment, i.e., a choice among the available options. Most decisions are made in response to a problem: a discrepancy between a desired and an actual state [1]. Decisions may be either programmed or non-programmed. Programmed decisions are recurring or routine and can be resolved through defined automatic procedures, such as applying policies to find the best solution. Up to 91% of management decisions are programmed, while non-programmed decisions are exceptional or infrequent; they are often made in crisis circumstances involving considerable ambiguity, in which managers must take non-programmed decisions based on judgment, imagination, and intuition. Every manager can use the following practices to overcome the obstacles to effective decision making: improve information, avoid biased judgment, be imaginative, use intuition, and manage timing.

1.2 Business Intelligence (BI)

BI is a relatively recent term in information technology, and its precise sense varies from context to context. It describes the process of turning data into information and then into knowledge, the result becoming more useful to the consumer at each step. BI comprises a set of concepts and methods that improve business decision making by covering all the ways an enterprise can discover, access, and analyze the information in the data warehouse to support forward-looking decisions [2]. BI can be viewed as an umbrella covering a whole range of ideas; structurally, it is like a data warehouse with three levels on top of it: queries and reports, online analytical processing, and data mining (Fig. 1).

Fig. 1 Components of BI


1.3 DSS for BI

The decision support system concept goes back quite a while; its definition varies depending on the evolution of information technologies and, of course, on one's perspective. A DSS has been described as "an extensible framework, capable of ad hoc analysis and decision modeling, focused on future planning and used at irregular intervals," and as "an interactive, flexible, and adaptable system, specifically designed to offer support in solving unstructured or semi-structured managerial problems, aiming to improve the decision process [3]. The system uses data (internal and external) and models, providing a simple and easy-to-use interface, thereby giving the decision-maker control over the decision process. A DSS offers support in all stages of the decision process." Considering these definitions, some of the most important characteristics of a decision support system are: it uses data and models; it enhances the learning process; it increases the efficiency of the decision-making process; it offers support in decision making while leaving the decision-maker in control of the whole process; it assists in every phase of the decision-making process; it helps decision-makers handle structured or unstructured problems; it offers support for a single user or for a group of users; and so on [4].

1.4 Using Data Mining Practices in DSS

To make a decision, managers require knowledge. Data mining assists in deriving knowledge from incomplete, raw data; data mining solutions allow pattern learning from data marts and data warehouses [5]. In this context, data mining plays a critical role in helping organizations understand their customers and their behavior. The fundamental difference between data mining techniques and traditional database processing methods is that with the former the database is no longer passive: it can serve valuable information with respect to the business strategies under examination [6]. Data mining uses a broad range of statistical algorithms, among which regression algorithms can be singled out. The universal evaluation instrument for data mining classification is the coincidence (confusion) matrix.


1.5 Building DSS for BI Using Data Mining Techniques

Creating a DSS involves time, high costs, and human-resource effort, and the success of the system can be affected by many risks, such as poor system design, data quality issues, and technological obsolescence. The objective of decision support systems is to help executives and managers make decisions regarding project setup, cash-flow planning, and financial planning, particularly in the case of public resources. Many institutions invest in building organizational data warehouses in order to increase the performance and efficiency of analytical reporting. There are also several expensive tools and software packages that can be used to analyze trends and to forecast future characteristics and the growth of the business; some of these tools analyze data using neural networks. The requirement can be met by combining data warehousing, OLAP, data mining, and business intelligence tools for analysis and reporting into a flexible architecture that should contain: a data-representation level, where an ETL process is applied to clean and load data into a data warehouse; and an application level with analytical representations, where multidimensional reporting such as OLAP and data mining strategies can be combined for historical as well as forecast analysis [7]. We propose the following improved sequence of stages for building business intelligence systems: feasibility study, project scheduling, analysis, design, development, and release into production [8].

Step 1. The feasibility study: this stage consists of identifying the needs and business opportunities and proposing solutions to improve the decision-making process. Each proposed solution must be justified by its implied costs and benefits.

Step 2. Project scheduling: this stage consists of assessing project sustainability, identifying existing system components and future needs. These activities conclude with the project plan; after its validation and approval, the effective start of the project can begin.

Step 3. Business requirements analysis: this stage focuses on specifying and analyzing, by priority, the initial requirements of the organizational management team. The requirements are identified through interviews conducted with managers and the project staff. These requirements may undergo slight changes during the project, thereby reducing the risk of impractical business requirements arising.

Step 4. System design: according to the system's requirements, the necessary data will be stored both at a detailed level and at an aggregate level. At this sub-stage, the logical data representation is refined and the physical representation of the new system is created in order to satisfy the reporting and analysis requirements of the decision-makers. With the aspects mentioned above in mind, we suggest that the storage, administration, and data-processing solutions comprise a centralized data warehouse at the organizational level. Following logical and physical criteria, the data warehouse can be partitioned into data marts at the divisional level, thus being easier to maintain and developed by separate teams following the same set of specifications [9]. The ETL (extract/transform/load) process design: this phase is the most difficult one in the project's life cycle and depends directly on the quality of the data sources. The design of the ETL process requires a series of prerequisite stages: preliminary processing of the data sources, so as to have a standardized format; data reconciliation; and elimination of redundancy and inconsistency in the data.

Step 5. Building the system: the technologies used for decision support system development belong to the business intelligence technology category and consist of: technologies for data warehouse organization, online analytical processing (OLAP) analysis frameworks, data mining algorithms, extract/transform/load (ETL) tools, computer-aided software engineering modeling tools, and Web technologies.

Step 6. System deployment: this is the stage when the system is delivered, the necessary technical support is provided, data-loading procedures are run, the application is installed, and performance is tracked. The stage ends with the release of the developed system (go-live) and with the delivery of the utilities, the final project documentation, the user guides, and the presentation manuals for the application.
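The ETL outline above (standardized format, reconciliation, and elimination of redundancy and inconsistency) can be sketched as a minimal pipeline. The record fields (plant, day, kwh) and the cleaning rules are hypothetical examples, not taken from the text.

```python
def extract(rows):
    """Pull raw records from a source; an in-memory list stands in here for
    operational databases, files, or feeds."""
    return list(rows)

def transform(rows):
    """Standardize format, reconcile, and drop inconsistent or duplicate records
    before warehouse loading."""
    clean, seen = [], set()
    for r in rows:
        name = r.get("plant", "").strip().lower()   # standardized format
        kwh = r.get("kwh")
        if not name or kwh is None or kwh < 0:
            continue                                # inconsistency elimination
        key = (name, r.get("day"))
        if key in seen:
            continue                                # redundancy elimination
        seen.add(key)
        clean.append({"plant": name, "day": r.get("day"), "kwh": float(kwh)})
    return clean

def load(warehouse, rows):
    """Append the cleaned records to the destination warehouse."""
    warehouse.extend(rows)
    return warehouse
```

In a real deployment the staging area, destination warehouse, and gateway connections described above would replace the in-memory lists.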

1.6 DSS for BI Design

The principal parts of a decisional framework are as follows. A DSS is described as "a system composed of three interacting modules: the user interface (dialog management), the data management component, and the model management component." Four core parts that shape a decision support system can be distinguished: the interface, often considered the most important component; the database system, which includes all the databases and database management systems of the organization; the model system, containing the analytical, mathematical, and statistical representations; and the communication component, composed of the core network and the mobile devices [10]. From Fig. 2, the DSS architecture can also be seen from a development-level perspective, from bottom to top, as a pyramid with three layers: the base level, the middle level, and the top level, the interconnection of these levels being handled on the communication layer. Thus, the DSS architecture may be composed of the following levels:

Fig. 2 Model of three-tier DBMS with DSS process

Lowest level (data management): it is composed of data, metadata, database management systems (DBMS), data warehouses, data dictionaries, and metadata dictionaries. At this level, data originating from several different systems must be integrated, and the main processes used are replication, federation, or data migration, together with data warehouse loading. Data originating from operational databases and external sources are extracted using interface-type applications, known as gateways, operating on the DBMS and allowing client applications to generate server-side executable SQL code. Several source types may be involved: files, databases, messages, the Web, and unstructured sources [11]. At this architectural level, in order to load data into a data warehouse, a series of tasks is required: collecting and extracting data from the data sources identified during the analysis stage, according to the management's business requirements. A source staging area can be created in order to load all the necessary data, which then must be processed and loaded into a destination warehouse. This process typically transforms raw data for consistency with the internal format of the warehouse; data cleaning and transformation ensure data accuracy and confirm the data can be used for analysis; finally, the data are loaded into the destination warehouse.


Middle level (model management): at this level, data are processed and the information relevant for decision making is extracted. This stage includes data analysis, simulation, and forecast models, in order to respond to the high-level business requirements. The core components here are the repository, the database management system, the metadata, and the administration and execution server. Extracting knowledge from data (data mining): frequently, the success of a decision support system is determined by the discovery of new facts and relationships in the data, and not by the construction of reports that merely display data. To fulfill these requirements, data mining methods must be applied, together with knowledge extracted from the organizational data, for example clustering, forecasting, predictive modeling, and classification [12].

Top level (the user interface): this is the level where the interaction with the users takes place, where the managers and the people involved in the decision-making process can communicate with the system. The UI must be carefully designed so that this kind of user can interact with the system effortlessly. This level is composed of query and report generation tools, dynamic analysis instruments, and data publishing and presentation tools, arranged in a simple, intuitive, and flexible way for the end users. At this level the human resource is found, represented by decision-makers who interact with the system through its interfaces. In the last few years, a growing share of decision support system interface development has been taken by portal-based Web technologies. BI portals occupy the most important position in creating specific, flexible, easy-to-use, and open interfaces, offering users a good browsing experience, a pleasant graphical look, content-integration options, and the graphical tools obtained in the previous stages [13].

Level IV (communication): this is the level that allows all the previous levels to interoperate and may include Web servers, computer networks, communication devices, distributed platforms, GRID technologies, and mobile communication platforms. On the basis of these steps and the DSS design, in the following sections we propose a conceptual model that can be implemented for the case study in which the nationwide power system (NPS) activity is analyzed, particularly wind power production and wind energy integration into the NPS. The complete case study considers the characteristics of wind power plants, the ways to integrate the energy produced, and the impact on the decision-making framework from the technical (characteristics and impact on reserve control), financial, commercial, environmental, and legal points of view. The following section presents only the methods applied to forecast and determine the wind energy on the basis of data mining techniques, and the results obtained for some of the algorithms [14].


2 A Case Study: Using Data Mining for Predicting Wind Energy Production

2.1 Formulation of the Problem: The Features of Wind Energy Production

Siting a wind farm is especially important, since energy generation depends on several meteorological variables of the location. The factors for deciding the location of wind generating units are wind velocity and direction, environmental conditions, distance from the power network, sub-stations and connection circumstances, access to equipment, interference with human activities (tourism, proximity to settlements, roads, railways, airports), electromagnetic interference, land-use conditions, and relations with local authorities and their consumers [15]. In any case, the primary natural factor, wind speed, records noteworthy changes even within hours. The wind velocities at which wind turbines generate energy range from 4 to 23 m/s; if the wind velocity falls below this limit or exceeds 25 m/s, the turbine stops. Even for an area such as a desert region of Rajasthan (where the terrain is windy), the wind speed over specific parcels of land varies substantially. Consequently, to determine whether a location is suitable, it is necessary to measure the meteorological variables, such as wind speed and direction, temperature, pressure, and so on [16]. From the above, it is particularly important in the project stage to determine the energy produced by wind sources in order to choose the type of wind generator and the location of every generation station, and, in the operational stage, to achieve good production forecasts. However, wind power plants (WPPs) still report large deviations from the true values of produced energy because of the inability of current systems to accurately estimate the wind velocity. The issue becomes more complicated because the estimates obtained are used to establish the energy reserves necessary to cover any gap in the energy system. It is well known that fast availability of power is imperfect or costly; the more precise the forecast of energy from wind resources, the smaller the power system reserves required. The role of a good forecast is particularly important because it reduces the costs of ensuring the secure operation of the nationwide power system (NPS) and hence avoids significant increases in energy costs due to these reserves [17]. But the most critical issue, on which the amount of energy reserves and the return on investment in wind power depend, is the resource on which these wind plants are based, namely the wind. From Fig. 3, one can see large fluctuations of the wind traced by an anemometer within twenty-four hours at an elevation of 55 m. Wind energy production is conditioned by various factors, some of which have low predictability, for example the effect of sun shading, soil topography,

Decision Support System for Business Intelligence …

89

Fig. 3 Wind velocity measured within 24 h at desert region of Rajasthan

Fig. 4 Comparison between actual and predicted wind energy production

the power losses up to the point of connection, and so forth [18, 19]. At present, several information systems are used for energy prediction, but their accuracy is still quite low. This can be seen by inspecting the forecasts issued for wind power generation within a week (Fig. 4). These problems of low forecast accuracy, of integrating information from various equipment and local systems, and of energy-efficiency analysis lead to the need to develop solutions with better predictive power, and also to support decision making in this area. A better forecast cannot be achieved by classical statistical methods, and this is the reason for employing modern techniques such as data mining. In the following section, we investigate in detail the main algorithms that can be applied to forecast the wind.

2.2 Projection of an Efficient Prototype for Forecasting Wind Energy

For building and testing the data mining procedures, we used the free software tool WEKA, which provides a user-friendly interface for data examination and validation of results. WEKA provides tools and wizards for the data processing, testing, and evaluation required in data mining.

90

P. Gupta and B. B. Sagar

A dataset is required here. For our case, we considered and documented the data for a quarter of a month (assumed). The values were recorded at a height of 55 m, yielding 13,050 record sets. The lowest value documented in this period is 0 m/s, the highest 23.7 m/s, and the average 6.3 m/s. From this hypothetical set of 13,050 wind-velocity records at 55 m elevation, about 2300 were less than or equal to 4.6 m/s, the start-up velocity of a wind energy generator. For 1200 cases, we documented a wind velocity above 13 m/s. In approximately 6500 cases the documented values were below the average velocity, and about 3050 values were above the average wind velocity of 6.3 m/s. We divided the source entries into three tables, namely wind_construct, wind_check, and wind_relate. Each table holds data for different time intervals: for learning, a table of about 9000 record sets; for testing, about 5500 record sets; and for evaluation, about 1200 record sets. Following the data preparation step, we can apply the following algorithms: predictive models (classification algorithms, regression algorithms) and descriptive models (clustering, association rules). Here, we use supervised learning procedures.
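The three-table split described above can be sketched as follows. This is an illustrative Python sketch, not the paper's WEKA workflow; the random shuffle is an assumption (the paper partitions by time interval, with table sizes that overlap the 13,050-record total).

```python
import random

def split_wind_records(records, n_construct, n_check, n_relate, seed=7):
    """Partition wind-speed records into the three tables named in the text:
    wind_construct (learning), wind_check (testing), wind_relate (evaluation).
    """
    pool = list(records)
    random.Random(seed).shuffle(pool)  # assumption: random split, not time-based
    wind_construct = pool[:n_construct]
    wind_check = pool[n_construct:n_construct + n_check]
    wind_relate = pool[n_construct + n_check:n_construct + n_check + n_relate]
    return wind_construct, wind_check, wind_relate
```

With proportionally scaled sizes (e.g. 90/25/15 out of 130 demo records), the three slices are disjoint by construction.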

2.2.1 Using the Naïve Bayes Procedure and Its Results

We now describe how the Naïve Bayes (NB) procedure is applied to the collected data to examine the target attribute F, which may take two values: 1 when the turbine produces wind energy, i.e., when the wind velocity is within the range 4–23 m/s, and 0 otherwise. With the Naïve Bayes algorithm, we can predict whether or not the turbine will generate wind energy based on the climate situation. The procedure has three phases: NB_construct_phase, NB_check_phase, and NB_evaluate_phase. In the first phase, NB_construct_phase, we consider a minimum threshold of 7%: only data above this cutoff are considered for learning. Here, we use the table wind_construct. We attained an 86.8% accuracy of calculations with the Naïve Bayes algorithm. In the second phase, NB_check_phase, we applied the testing algorithm using the table wind_check. In the last phase, NB_evaluate_phase, we validate the results (Fig. 5). After applying the Naïve Bayes procedure, the resulting error rate is below 7.99%, which can be considered a satisfactory outcome.
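To make the construct/check/evaluate phases concrete, here is a minimal self-contained Naïve Bayes sketch over binary features with Laplace smoothing. It is purely illustrative: this is not WEKA's implementation, and the tiny training set below is invented.

```python
def train_nb(rows, labels, alpha=1.0):
    """Fit Naive Bayes on binary feature rows with Laplace smoothing.

    Returns (prior, cond) where cond[c][j] = P(feature j == 1 | class c).
    """
    classes = sorted(set(labels))
    n = len(rows)
    n_feat = len(rows[0])
    prior = {c: labels.count(c) / n for c in classes}
    cond = {}
    for c in classes:
        members = [r for r, y in zip(rows, labels) if y == c]
        cond[c] = [(sum(r[j] for r in members) + alpha) /
                   (len(members) + 2 * alpha) for j in range(n_feat)]
    return prior, cond

def predict_nb(model, row):
    """Return the class with the highest posterior (up to a constant)."""
    prior, cond = model
    best, best_p = None, -1.0
    for c in prior:
        p = prior[c]
        for j, v in enumerate(row):
            p *= cond[c][j] if v == 1 else 1.0 - cond[c][j]
        if p > best_p:
            best, best_p = c, p
    return best
```

Training on wind_construct-style rows and scoring wind_check rows would mirror the first two phases described above.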


Fig. 5 Naïve Bayes results


Fig. 6 When enough training data are available, the dataset is commonly split into 50, 25, and 25% sets for training, validation, and testing

2.2.2 Using the Decision Tree Procedure and Its Results

Decision trees are fundamental machine learning tools for classification and regression. Besides their moderate computational cost, the main advantage of decision trees is their interpretability: a decision tree is normally a binary tree in which every node encodes a decision criterion on one specific feature of the test pattern, and a label is assigned to each leaf node. The machine learning practitioner can therefore easily understand the decisions made while traversing the tree. Different algorithms exist for building decision trees from a training dataset; in this work, we limit ourselves to classification and regression trees. Decision trees are very simple and their capabilities are limited; however, they are applied in a wide range of fields, and they are important for this work and for machine learning in general because of their applications [20].

Decision Tree Construction. The main concept of constructing a decision tree is to hierarchically partition the input space into reasonable regions. To classify a test pattern x1, the tree is traversed to find the leaf containing x1. Each tree node t contains a subset of the training dataset with Nt samples and implements a splitting criterion for this subset along one axis i. The tree construction is complete when each leaf node contains only training samples of the same class. If decision trees become very complex, they tend to overfit the training dataset; to avoid this, the tree is pruned to a smaller depth in order to yield better generalization performance (Fig. 6). An additional prediction algorithm applied to the F variable is the decision_tree.
After building and testing the model on the datasets, following the same steps presented in the previous section, we obtain an accuracy of 98.57%, higher than that obtained with the Naïve Bayes algorithm.
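The construction procedure described above (recursive splitting until leaf purity, with a depth limit as a simple form of pruning against overfitting) can be sketched as follows. This is an illustrative Python sketch on binary features, not the exact algorithm used by the authors.

```python
def gini(labels):
    """Gini impurity of a label list."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def build_tree(rows, labels, depth=0, max_depth=3):
    """Recursively split binary-feature rows; stop on purity or max_depth.

    max_depth acts as the pre-pruning mentioned in the text. Returns either
    a class label (leaf) or (feature_index, left_subtree, right_subtree).
    """
    if len(set(labels)) == 1 or depth == max_depth:
        return max(set(labels), key=labels.count)  # majority-vote leaf
    best_j, best_score = None, None
    for j in range(len(rows[0])):
        left = [y for r, y in zip(rows, labels) if r[j] == 0]
        right = [y for r, y in zip(rows, labels) if r[j] == 1]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
        if best_score is None or score < best_score:
            best_j, best_score = j, score
    if best_j is None:
        return max(set(labels), key=labels.count)
    lpairs = [(r, y) for r, y in zip(rows, labels) if r[best_j] == 0]
    rpairs = [(r, y) for r, y in zip(rows, labels) if r[best_j] == 1]
    return (best_j,
            build_tree([r for r, _ in lpairs], [y for _, y in lpairs], depth + 1, max_depth),
            build_tree([r for r, _ in rpairs], [y for _, y in rpairs], depth + 1, max_depth))

def classify(tree, row):
    """Traverse the tree to the leaf containing the test pattern."""
    while isinstance(tree, tuple):
        j, left, right = tree
        tree = left if row[j] == 0 else right
    return tree
```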


Hence, the results obtained with the decision_tree algorithm are better than those obtained with the NB algorithm. However, to obtain a true forecast of the turbine's actual energy output, it is necessary to apply algorithms whose target attribute takes discrete values, not just false (0) or true (1).

2.2.3 Using Regression and Its Results

To the initial dataset, we added column F, which is the amount of power produced, computed from the cube of the wind speed (S) measured at 55 m. The values in this column are the target attribute for the regression calculation. We applied regression to the datasets, following the same steps (preparation, learning, validating, and relating), and the results obtained have an accuracy of only 67.86%, which gives no confidence for achieving thorough forecasts. Thus, the regression model should be applied to an attribute with a low degree of scatter with respect to the meteorological factors. We therefore introduced the E_PRAG attribute, which groups values into intervals according to the power produced per wind-speed step of 0.45 m/s. For instance, we found that at wind speeds between 0 and 3.25 m/s there is 0 kW of power output, while at speeds from 3.25 to 4.25 m/s the output is 45 kW, and so on. These limits are defined according to the power characteristics of the turbines. Table 1 summarizes the results of the evaluation phase, showing the actual and predicted values of the E_PRAG attribute every 10 min.
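The interval grouping behind E_PRAG can be sketched as a lookup from wind speed to a discrete power class. Only the first two breakpoints are given in the text (0 kW below 3.25 m/s, 45 kW for 3.25–4.25 m/s); anything beyond them is deliberately left open here, since the remaining limits come from the turbine's power curve.

```python
def e_prag(wind_speed_ms):
    """Map wind speed to a discrete power-output class in kW.

    Only the first two intervals are documented in the text; further
    breakpoints would be filled in from the turbine power characteristics.
    """
    bins = [(3.25, 0), (4.25, 45)]  # (upper speed bound in m/s, power in kW)
    for upper, kw in bins:
        if wind_speed_ms < upper:
            return kw
    return None  # beyond the documented intervals
```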

Table 1 Comparison of actual and estimated values from the regression algorithm

Date/time              E_actual   E_prog   E_prag actual   E_prag prog
25-02-2016 00:00:00    357.91     779.1    343             341.1
25-02-2016 00:10:00    250.05     694.7    216             245.7
25-02-2016 00:20:00    140.61     617.5    125             170.4
25-02-2016 00:30:00    140.61     620      125             170.7
25-02-2016 00:40:00    110.59     612.7    91              141.2
25-02-2016 00:50:00    54.87      651.1    43              56.9
25-02-2016 01:00:00    0          506.4    0               15.5
25-02-2016 01:10:00    0          695.4    0               64.5
25-02-2016 01:20:00    0          771.3    0               20.4
25-02-2016 01:30:00    79.51      618.4    64              109.5
25-02-2016 01:40:00    140.61     627.9    125             167.2
25-02-2016 01:50:00    125        622.3    125             153.3
25-02-2016 02:00:00    0          695.7    0               23.6


3 Conclusions

Database administration and data warehousing technologies have developed considerably over the past years. There is an unquestionable need to use the available data and technologies to build the next generation of scientific and business applications, which can combine data-driven strategies with domain-specific knowledge. Analytical data technologies, which include data mining techniques and DSS for BI, are especially suited for these tasks. These technologies can support both automated (data-driven) and human-expert-driven knowledge discovery and predictive analysis, and can also exploit the results of models and simulations based on process or business insights. Business applications have flexible, though straightforward, DMT embedded within DSS. Multidisciplinary research and development efforts will be required for the maximal use of analytical data technologies in these applications. Here, we presented a case study of a DSS for BI applying data mining procedures to estimate the energy produced by a wind power plant; remarkable outcomes were achieved by setting the cutoff. Hence, the data mining procedures could be trained to discover improved dependencies among variables and come much closer to the actually measured values.

References

1. L. Agosta, L.M. Orlov, R. Hudso, The future of data mining: predictive analytics. Forrester Brief (2003)
2. C. Apte, B. Liu, E.P.D. Pednault, P. Smyth, Business applications of data mining. Commun. ACM 45(8), 49–53 (2002)
3. C. Carlsson, E. Turban, DSS: directions for the next decade. Decis. Support Syst. 33(2), 105–110 (2002) (Elsevier)
4. P. Gupta, B.B. Sagar, Discovering weighted calendar-based temporal relationship rules using frequent pattern tree. Indian J. Sci. Technol. 9, 2–6 (2016)
5. R. Agrawal, T. Imielinski, A. Swami, Mining association rules between sets of items in large databases, in Proceedings of the ACM SIGMOD International Conference on Management of Data (Washington, D.C., 1993), pp. 207–216
6. R. Agrawal, R. Srikant, Fast algorithms for mining association rules, in Proceedings of the 20th International Conference on Very Large Databases (Santiago, Chile, 1994), pp. 487–499
7. P. Bradley, J. Gehrke, R. Ramakrishnan, R. Srikant, Scaling mining algorithms to large databases. Commun. ACM 45(8), 38–43 (2002)
8. G.C. Lan, V.S. Tseng, A novel approach for discovering chain-store high utility patterns in a multi-stores environment, in The Second ACM International Workshop on Mining Multiple Information Sources (Las Vegas, USA, 2008), pp. 293–302
9. A.M. Geoffrion, R. Krishnan, E-business and management science: mutual impacts (Parts 1 and 2). Manage. Sci. 49, 10–21 (2003)
10. U. Fayyad, R. Uthurusamy, Evolving data mining into solutions for insights. Commun. ACM 45(8), 28–31 (2002)
11. Retrieved from http://www.lafayetteacademyno.org
12. A.R. Ganguly, Software review: data mining components. Editorial review, ORMS Today. Inst. Oper. Res. Manage. Sci. (INFORMS) 29(5), 56–59 (2002a)


13. Retrieved from http://www.intechopen.com
14. R. Grossman, C. Kamath, W. Kegelmeyer, V. Kumar, R. Namburu, Data Mining for Scientific and Engineering Applications (Kluwer Academic Publishers, Norwell, MA, USA, 2001)
15. H. Conover, S.J. Graves, R. Ramachandran, S. Redman, J. Rushing, S. Tanner, R. Wilhelmson, Data mining on the TeraGrid, in Supercomputing Conference (Phoenix, AZ, 2003)
16. S. Curtarolo, D. Morgan, K. Persson, J. Rodgers, G. Ceder, Predicting crystal structures with data mining of quantum calculations. Phys. Rev. Lett. 91(13), 419–431 (2003)
17. A.R. Ganguly, A hybrid approach to improving rainfall forecasts. Comput. Sci. Eng. 4(4), 14–21 (IEEE Computer Society and American Institute of Physics) (2002b)
18. S.J. Graves, Data Mining on a Bioinformatics Grid (SURA BioGrid Workshop, Raleigh, NC, 2003), pp. 28–30
19. C. Kamath, E. Cantú-Paz, I.K. Fodor, N.A. Tang, Classifying of bent-double galaxies. Comput. Sci. Eng. 4(4), 52–60 (IEEE Computer Society and American Institute of Physics) (2002)
20. C.W. Lin, T.P. Hong, W.H. Lu, Maintaining high utility pattern trees in dynamic databases, in Second International Conference on Computer Engineering and Applications (ICCEA) (Bali Island, 2010), pp. 304–308

Target Marketing Using Feedback Mining
Ritesh Kumar and Partha Sarathi Bishnu

Abstract The objective of this paper is to support strategic planning for upcoming products using partitional clustering algorithms. Extensive experiments have been conducted on the proposed algorithm to establish our claims. The experiments performed on synthetic and real datasets show the effectiveness of our proposed algorithm.

Keywords Marketing · Data mining · Business decision system

1 Introduction

Nowadays, owing to the Internet and social media, it is possible to collect feedback from customers on products conveniently. Companies collect feedback to set business policies. Feedback mining produces results that help companies think one step ahead on marketing and other business activities. To establish the best strategic planning of new products (NP) among the customers (CU) where popular existing products (XP) are already in the market [1, 2], a company can use data mining tools. Companies systematically apply various business strategies dealing with product specifications, product requirements, product popularity, etc. We can store the feedback from the customers using the BMI index structure [1], which helps to construct the satisfaction bit string (SBT) table revealing the relationship between customers and existing products. In the SBT table (Table 1), xp1 to xp4 are the existing products (XP), np1 to np6 are the proposed new products (NP), and c1 to c12 are the probable customers (CU). The entry value 1 or 0 implies that the customer is satisfied or not satisfied with the product,

R. Kumar (B), Cambridge Institute of Technology, Ranchi, India, e-mail: [email protected]
P. S. Bishnu, Birla Institute of Technology, Ranchi, India, e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2020, S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988, https://doi.org/10.1007/978-981-13-8222-2_8


Table 1 SBT table

      xp1  xp2  xp3  xp4  np1  np2  np3  np4  np5  np6
c1    0    1    1    0    1    1    1    1    1    1
c2    1    1    1    0    1    0    1    1    0    0
c3    1    1    0    0    1    1    1    0    0    0
c4    0    0    0    0    1    1    1    1    1    1
c5    1    1    1    0    0    0    1    1    1    1
c6    1    1    1    0    1    1    1    1    0    0
c7    0    1    0    0    0    1    1    1    1    1
c8    0    1    0    0    0    1    1    0    0    0
c9    1    1    1    0    0    1    1    0    1    1
c10   1    1    1    0    1    1    1    1    1    1
c11   0    1    0    0    1    1    1    1    0    1
c12   0    1    0    1    1    1    1    1    1    0

respectively [3]. As an instance, the column content {0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0} of the existing product xp1 indicates that the customers {c2, c3, c5, c6, c9, c10} are satisfied with xp1 and the customers {c1, c4, c7, c8, c11, c12} are not. The main objectives of this paper are (i) to identify the products for campaign selection for a given set of customers with similar tastes and (ii) to identify the customer (cluster) group for a given campaign. First, we apply the K-means clustering algorithm on the SBT table (where we store customer feedback) to identify the customer groups. Then, we select v products from each cluster group to form the campaign for the customer groups. Moreover, we identify the best customer group for a given campaign. To assess the performance of our proposed K-means-based campaign selection algorithm (KMCA), experiments have been conducted in which the existing hierarchical clustering technique (HCCA) [4] was also executed on synthetic and real datasets, and the results are presented. The proposed algorithm performs fairly well on all the datasets. The outline of this paper is as follows: related work is briefly described in Sect. 2; in Sect. 3, we describe the proposed KMCA algorithm; Sect. 4 presents the experiments on the real and synthetic datasets with all the algorithms; finally, Sect. 5 concludes the paper.

2 Related Work Here, we present the campaign selection process and customer segmentation using clustering algorithms as follows: Brentari et al. [4] applied hierarchical cluster algorithm on a rank data (survey data of the Italian McDonald’s restaurants), using a dissimilarity matrix based on a WRC measure. Chen et al. [5] proposed a partitional

Target Marketing Using Feedback Mining

97

clustering algorithm, named PurTreeClust, and a new distance metric to effectively compute the distance between two purchase trees; they used a gap statistic-based method to evaluate the number of clusters. Masood et al. [6] suggested customer segmentation using clustering algorithms on real data of a telecommunication company in Pakistan; they used a two-step clustering algorithm to obtain different customer segments and, moreover, suggested marketing strategies for up-selling and better-targeted campaigns. Ezenkwu et al. [7] presented a MATLAB-based k-means clustering algorithm for customer segmentation on data of a mega retail business outfit of Akwa Ibom state, Nigeria, and presented various outcomes with necessary suggestions. Ansari and Riasi [8] combined fuzzy c-means clustering and genetic algorithms to cluster the customers of the steel industry; using the variables of length, recency, frequency, and monetary value, the customers were divided into two clusters, and the results were interpreted extensively. Wu and Choub [9] suggested classifying online customers using a soft clustering method; they claimed that the proposed soft clustering method produces better results than hard clustering and greater within-segment clustering quality than the finite mixture model.

3 K-Means-Based Campaign Selection Algorithm

Let xpi ∈ XP, 1 ≤ i ≤ nXP, be an existing product, where nXP is the number of existing products; npi ∈ NP, 1 ≤ i ≤ nNP, be a proposed new product, where nNP is the number of new products; and ci ∈ CU, 1 ≤ i ≤ nCU, be a customer, where nCU is the number of customers. From the BMI index structure [1], we prepare a table (a dataset) of size nCU by (nXP + nNP) (Table 1). First, we apply the K-means algorithm to group the customers based on their preferences on the items. Each cluster groups customers of similar taste; therefore, we should select the items for a campaign wisely for each customer group. Each cluster cli ∈ C, 1 ≤ i ≤ k, consists of nci data points, where k is the number of clusters. The k value depends on the number of proposed campaigns. The entry value pijp (1 ≤ i ≤ ncp, (nXP + 1) ≤ j ≤ (nXP + nNP), 1 ≤ p ≤ k) is 1 or 0. If pijp (∈ clp) is 1, then the customer ci of the pth cluster group is satisfied with the product npj, and if pijp is 0, then the customer ci of the pth cluster group is not satisfied with the product npj. For each customer (cluster) group clp ∈ C, 1 ≤ p ≤ k, we calculate a popularity index PIjp, (nXP + 1) ≤ j ≤ (nXP + nNP), for each product. The popularity index is

    PIjp = ( Σ_{i=1}^{ncp} pijp ) / ncp                              (1)

where (nXP + 1) ≤ j ≤ (nXP + nNP) and 0 ≤ PIjp ≤ 1. If PIjp is 1, then the jth item is popular in the pth cluster group, and if PIjp is 0, then the jth item is not at all popular in the pth cluster group. Finally, we have to identify the v most popular items
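Equation (1) is a simple fraction over the 0/1 rows assigned to a cluster; a minimal Python sketch (the column indexing is an assumption: with four existing products, column index 4 corresponds to np1):

```python
def popularity_index(cluster_rows, j):
    """PI_jp: fraction of customers in cluster p satisfied with product j.

    cluster_rows is the list of 0/1 SBT rows assigned to the cluster;
    j indexes a new-product column (after the existing-product columns).
    """
    return sum(row[j] for row in cluster_rows) / len(cluster_rows)
```

Applied to the three rows of cluster 1 in Example 1 below (customers c2, c3, c6), this gives PI = 1 for np1 and PI = 2/3 ≈ 0.67 for np2, matching the cluster table.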


    TPIvp = Σ_{i=1}^{v} PIip                                        (2)

of the pth cluster group. Here, v is the predefined number of items for the given campaigns.

Definition 1 (kN set: items for the campaign selection process) For the given cluster clp, select v items npi ∈ kN ⊆ NP, 1 ≤ i ≤ v, such that the summation of the popularity indices is maximum, i.e., TPIvp is maximum for the clp cluster.

Lemma 1 Let npa and npb be two new products, where npa ≠ npb. If Σ_{i=1}^{ncp} piap ≥ Σ_{i=1}^{ncp} pibp, then PIap ≥ PIbp.

Proof It is known that Σ_{i=1}^{ncp} piap ≥ Σ_{i=1}^{ncp} pibp, thus Σ_{i=1}^{ncp} piap − Σ_{i=1}^{ncp} pibp ≥ 0. Hence PIap = (Σ_{i=1}^{ncp} pibp + (Σ_{i=1}^{ncp} piap − Σ_{i=1}^{ncp} pibp)) / ncp ≥ (Σ_{i=1}^{ncp} pibp) / ncp = PIbp. (By applying proper fractions [1].)

Theorem 1 Let kN1 and kN2 be two sets of v new products of the pth cluster, where kN1 ≠ kN2 and kN1 ∩ kN2 = NULL. If PIap ≥ PIbp for npa ∈ kN1 and npb ∈ kN2, then TPIv1 ≥ TPIv2.

Proof If Σ_{i=1}^{ncp} piap ≥ Σ_{i=1}^{ncp} pibp, then from Lemma 1 we have PIap ≥ PIbp. Therefore Σ_{i=1}^{v} PIiap ≥ Σ_{i=1}^{v} PIibp, and TPIv1 ≥ TPIv2 is proved.

Definition 2 (The most prospect group) The group that returns the maximum OG value is called the most prospect group. To find the most prospect group (the group with the highest possibility of making the campaign successful), we calculate

    OGp = TPIup / u,  1 ≤ p ≤ k                                     (3)

The group with the maximum OG value is the most prospect group. While interpreting the OG value, the analyst must consider the size of the cluster: if the cluster is too small, the analyst should not consider it the most prospect group and should take the next higher OG value (just below the highest).

Theorem 2 If TPIup ≥ TPIuq, i.e., Σ_{i=1}^{u} PIip ≥ Σ_{i=1}^{u} PIiq, where piap ∈ clp and pibq ∈ clq, 1 ≤ a, b ≤ nNP, a ≠ b, and 1 ≤ p, q ≤ k, p ≠ q, then OGp ≥ OGq.

Proof Given that TPIup ≥ TPIuq, i.e., (TPIup − TPIuq) ≥ 0, we can write OGp = (TPIuq + (TPIup − TPIuq)) / u ≥ TPIuq / u = OGq, hence OGp ≥ OGq is proved.


Example 1 We apply the K-means algorithm on the data of Table 1. Let the number of campaigns be 3; hence k = 3 (number of clusters). (i) Calculate PI and TPI for all the groups and identify the most prospective items for a campaign of size v = 3, and (ii) calculate the OG values to identify the best cluster group for the given campaign whose items are U = {np2, np5, np6}.

      xp1  xp2  xp3  xp4  np1   np2   np3   np4   np5   np6
1     1    1    1    0    1     0     1     1     0     0
2     1    1    0    0    1     1     1     0     0     0
3     1    1    1    0    1     1     1     1     0     0
Sum                       3     2     3     2     0     0
PI                        1     0.67  1     0.67  0     0

Cluster 1: Here, nc1 = 3. The v (i.e., 3) most future popular items are kN1 = {np1, np3, (np2 or np4)}, where the maximum TPI31 = 2.67. For the given campaign, OG1 = (0.67 + 0 + 0)/3 = 0.2233.

      xp1  xp2  xp3  xp4  np1   np2   np3   np4   np5   np6
1     1    1    1    0    0     0     1     1     1     1
2     1    1    1    0    0     1     1     0     1     1
3     1    1    1    0    1     1     1     1     1     1
Sum                       1     2     3     2     3     3
PI                        0.33  0.67  1     0.67  1     1

Cluster 2: Here, nc2 = 3. The three most future popular items are kN2 = {np3, np5, np6}, and the maximum TPI32 = 3. For the given campaign, OG2 = (0.67 + 1 + 1)/3 = 0.89.

      xp1  xp2  xp3  xp4  np1   np2   np3   np4   np5   np6
1     0    1    1    0    1     1     1     1     1     1
2     0    0    0    0    1     1     1     1     1     1
3     0    1    0    0    0     1     1     1     1     1
4     0    1    0    0    0     1     1     0     0     0
5     0    1    0    0    1     1     1     1     0     1
6     0    1    0    1    1     1     1     1     1     0
Sum                       4     6     6     5     4     4
PI                        0.67  1     1     0.83  0.67  0.67


Cluster 3: Here, nc3 = 6. The v (let it be 3) most future popular items are {np2, np3, np4}, and the maximum TPI33 = 2.83. For the given campaign, OG3 = (1 + 0.67 + 0.67)/3 = 0.78.

In summary, the v = 3 most popular items are {np1, np3, (np2 or np4)}, {np3, np5, np6}, and {np2, np3, np4} of the clusters (customer groups) 1, 2, and 3, respectively. The best cluster group for the given campaign with items {np2, np5, np6} is group 2, where the OG value is maximum.

Algorithm of the Proposed KMCA Algorithm
Now, we present the steps of the KMCA algorithm as follows:
Input: Customer preference data (SBT table), k, v, and U (set of items);
Output: k most prospect items and the most prospect group for the given campaign;
Step 1: Apply the simple K-means algorithm on the customer preference data;
Step 2: For each cluster, calculate PI and OG;
Step 3: Return the most prospect campaign for each cluster and the most prospect group for the given campaign;

Complexity
The time complexity of the K-means algorithm is O(nCU kd); the time complexity to calculate PIjp is O(nCU × d), i.e., O(nCU) since nCU ≫ d; and the time complexities to calculate TPI and OG are O(d log d) and O(1), respectively, where d = (nNP + nXP).

Explanation of the KMCA Algorithm
In Step 1, we apply the K-means algorithm to the customer preference data, where k is the number of campaigns. Next, we calculate PI, TPI, and OG for each cluster. Finally, in Step 3, we identify the most prospect group for the given campaign, and we can identify the most prospect campaign for each given customer group.
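Steps 2–3 of KMCA can be sketched directly from the per-cluster popularity indices of Example 1 (the PI values below are copied from the cluster tables above; the dictionary layout is an illustrative assumption):

```python
def og_value(pi, campaign):
    """OG_p: mean popularity index of the campaign's items in cluster p (Eq. 3)."""
    return sum(pi[item] for item in campaign) / len(campaign)

# Popularity indices per cluster for np1..np6, taken from Example 1.
pi_by_cluster = {
    1: {"np1": 1.0, "np2": 0.67, "np3": 1.0, "np4": 0.67, "np5": 0.0, "np6": 0.0},
    2: {"np1": 0.33, "np2": 0.67, "np3": 1.0, "np4": 0.67, "np5": 1.0, "np6": 1.0},
    3: {"np1": 0.67, "np2": 1.0, "np3": 1.0, "np4": 0.83, "np5": 0.67, "np6": 0.67},
}
campaign = ["np2", "np5", "np6"]
best = max(pi_by_cluster, key=lambda p: og_value(pi_by_cluster[p], campaign))
# best == 2, reproducing the conclusion that cluster 2 is the most
# prospect group for the campaign U = {np2, np5, np6} (OG2 = 0.89).
```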

4 Experiments

4.1 Dataset

Two real datasets, the car dataset and the auto dataset from the UCI machine learning repository (http://archive.ics.uci.edu/ml/), have been used for experimental purposes. The car and auto datasets contain 1728 and 398 records, and the dimensions considered are 6 and 7, respectively. Using synthetic datasets, all the algorithms have been executed for different dimensions d = (nNP + nXP) and sizes (nCU) (Fig. 1a, b).

Fig. 1 Performance comparison: a dimension versus time, b data size versus time

4.2 Experimental Analysis

We selected 15 and 21 random records as existing and new products for the auto and car data, respectively, and compared the selected items with the rest of the data to build the SBT table. From the SBT table, the first 30% of the columns were selected as XP and the rest as NP. We implemented the K-means algorithm (KMCA) and the hierarchical clustering algorithm (HCCA) [4] for comparison purposes. For both real datasets (car data and auto data), we experimented with various campaigns and set the number of clusters accordingly. The number of items per campaign, v, is based on the number of clusters. The K-means algorithm is programmed to iterate 15 times (the convergence criterion of the K-means clustering). For the OG calculation, we selected k1 = 3 items, {2, 5, 6}, to identify the best cluster choice. For comparison purposes, we used the same parameter values (k, k1) in HCCA to calculate the OG value. We executed all the programs using Octave version 3.2.4 on a computer with an Intel(R) Core(TM) i5-4570 CPU, 3.20 GHz, and 8 GB of RAM running the Microsoft Windows 10 64-bit operating system.

4.3 Result and Analysis

To evaluate the scalability of our proposed algorithm, we compare its execution time with that of the existing algorithm HCCA [4]. The performance (in terms of time) of the proposed KMCA algorithm is compared with the HCCA algorithm in Tables 2 and 3, where we display the time, OG, and selected cluster (Sc) for different k values for the auto and car data. We observe that the time efficiency and OG values of our proposed technique are considerably better than those of the existing technique. Moreover, in Table 4, we present the average TPI values, and we find that most

Table 2 Results (Time, OG, and Sc) of auto data

Sr. no.  k   KMCA (proposed)          HCCA [4]
             Time   OG      Sc        Time   OG      Sc
1        2   0.21   0.3612  1         0.31   0.2578  2
2        3   0.24   0.3492  2         0.39   0.2876  2
3        4   0.29   0.3678  1         0.46   0.3987  3
4        5   0.34   0.4242  1         0.49   0.4024  4
5        6   0.39   0.6315  5         0.79   0.4211  2
6        7   0.43   0.6166  7         0.92   0.5392  7

Table 3 Results (Time, OG, and Sc) of car data

Sr. no.  k   KMCA (proposed)          HCCA [4]
             Time   OG      Sc        Time   OG      Sc
1        2   1.10   0.2673  2         2.45   0.1233  2
2        3   1.21   0.3167  2         3.23   0.2342  2
3        4   1.35   0.2579  1         4.12   0.3452  2
4        5   1.52   0.3283  2         5.56   0.2113  3
5        6   1.70   0.3111  4         6.05   0.4221  5
6        7   1.89   0.3180  6         6.86   0.1023  3

Table 4 Results (average TPI values of all the real datasets)

Sr. no.  k   Auto data        Car data
             KMCA    HCCA    KMCA    HCCA
1        2   1.92    1.71    2.25    4.33
2        3   2.76    2.96    1.53    1.54
3        4   3.38    2.98    2.22    2.03
4        5   3.85    3.58    2.75    2.63
5        6   4.12    3.23    3.11    2.96
6        7   4.14    4.00    3.34    3.16

of the time (except for k = 2 and 3 of the auto data) the average TPI values are better compared with HCCA. From the plots on the synthetic datasets (Fig. 1a, b), it is seen that the time of our KMCA algorithm is the best. It is also observed that the proposed KMCA algorithm scales well in terms of dimension and size.

5 Conclusion

In this paper, we have applied the K-means algorithm to perform customer segmentation using feedback data. After forming the customer segments, we have given an idea of how to select items for a campaign so that the effectiveness


of the advertisement is increased. From each cluster (customer segment), we can select the v best items to form the campaign. Moreover, if the items are already selected for a given campaign, then using our algorithm we can identify the best customer group to promote the items. The experiments conducted in comparison with an existing algorithm prove the effectiveness of our approach.

References

1. C.Y. Lin, J.L. Koh, A.L.P. Chen, Determining k-most demanding products with maximum expected number of total customers. IEEE Trans. Knowl. Data Eng. 25(8), 1732–1747 (2013)
2. N.G. Mankiw, Principles of Economics, 5th edn. (South Western College Publication, New York, 2008)
3. R. Kumar, P.S. Bishnu, V. Bhattacherjee, K-Means algorithm to identify k1-most demanding products, in PReMI 2017, LNCS vol. 10597 (2017), pp. 451–457
4. E. Brentari, L. Dancelli, M. Manisera, Clustering ranking data in market segmentation: a case study on the Italian McDonald's customers' preferences. J. Appl. Stat. 43(11), 1959–1976 (2016). https://doi.org/10.1080/02664763.2015.1125864
5. X. Chen, Y. Fang, M. Yang, F. Nie, Z. Zhao, J.Z. Huang, PurTreeClust: a clustering algorithm for customer segmentation from massive customer transaction data. IEEE Trans. Knowl. Data Eng. 30(3), 559–572 (2018)
6. S. Masood, M. Ali, F. Arshad, A.M. Qamar, A. Kamal, A. Rehman, Customer segmentation and analysis of a mobile telecommunication company of Pakistan using two phase clustering algorithm, in Eighth International Conference on Digital Information Management (ICDIM, 2014)
7. C.P. Ezenkwu, S. Ozuomba, C. Kalu, Application of K-Means algorithm for efficient customer segmentation: a strategy for targeted customer services. Int. J. Adv. Res. Artif. Intell. 4(10) (2015)
8. A. Ansari, A. Riasi, Customer clustering using a combination of fuzzy C-Means and genetic algorithms. Int. J. Bus. Manage. 11(7) (2016)
9. R.-S. Wu, P.-H. Choub, Customer segmentation of multiple category data in e-commerce using a soft-clustering approach. Electron. Commer. Res. Appl. 10(3), 331–341 (2011)

Part V

Intelligent Systems and Modelling

A Comparative Study Among Different Signaling Schemes of Optical Burst Switching (OBS) Network for Real-Time Multimedia Applications Manoj Kr. Dutta

Abstract The development of different real-time multimedia facilities over the Internet has introduced an ever-increasing demand for Internet capacity. Wavelength-division multiplexing (WDM) technology along with the optical burst switching (OBS) scheme is a very useful mechanism for utilizing the raw bandwidth of an optical fiber to fulfill this huge bandwidth requirement. The OBS technique is very efficient, but its one-way reservation policy may lead to contention. In this paper, the performance of an OBS network has been estimated and compared for different signaling protocols, viz. just-in-time (JIT), just-enough-time (JET), and tell-and-go (TAG). Wavelength converters are used to overcome contention in this case. The results show that the performance of the switching architecture changes with the number of available wavelength converters for all signaling mechanisms and that, if the degree of conversion is kept constant, the JET signaling scheme offers the best performance, followed by TAG and JIT. The results presented in this paper may help network engineers adopt the best signaling protocol for data transmission. Keywords Optical burst switching · WDM network · Blocking probability · Incoming traffic · Gain · Different signaling schemes

1 Introduction With the development of different modern Internet-based technologies like online banking, online marketing, e-commerce, telemedicine, social networking sites, and many other real-time multimedia applications, the demand for bandwidth is increasing day by day. Optical fiber communication technology is the only solution to meet this ever-increasing bandwidth requirement [1–3]. Researchers have proposed different technologies to expand the capacity of optical backbone networks. WDM and DWDM are probably the best ones to utilize the enormous bandwidth offered M. Kr. Dutta (B) BIT Mesra, Off-campus Deoghar, Ratanpur, Deoghar 814142, Jharkhand, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988, https://doi.org/10.1007/978-981-13-8222-2_9


by the optical fiber. To implement wavelength-division multiplexing efficiently, different switching schemes, namely optical burst switching (OBS), optical packet switching (OPS), and optical circuit switching (OCS), are employed. In OCS, the transmission path is established before the actual transmission begins and is kept connected throughout the entire transmission process. This technique is very reliable but not bandwidth efficient. In optical packet switching, the information is sent as data packets [3–6]. Each packet carries a header containing all the necessary information about the packet's destination. This approach is bandwidth efficient but not very reliable in terms of packet synchronization. An optical burst switching network (Fig. 1) combines the benefits of OCS and OPS. In this technique, data packets are assembled into bursts, and the header packet of each burst is transmitted well before the actual transmission. The header undergoes O/E/O conversion at each node, but the actual data bypasses this slow O/E/O conversion. The resource reservation policy used in this case is one way, and this type of reservation policy may lead to contention at the output node. Contention arises at a node when more than one burst tries to reserve a particular wavelength at the output [7–10]. An appropriate contention resolution policy is therefore essential in an optical burst switching network. Several conventional contention resolution schemes are available: in the time domain, fiber delay lines are used; in the space domain, deflection routing is used; and in the wavelength domain, wavelength converters are used to resolve contention. Of these policies, the wavelength conversion mechanism is the most efficient.
In the wavelength conversion scheme, a data burst is converted to any one of the available wavelengths and, after successful transmission, converted back to its original wavelength. Although this process is very efficient, the conversion is quite complicated, which increases the cost and complexity of the switching network. If adding cost and hardware to the switching process is not acceptable, the segmentation dropping scheme is adopted instead to overcome the contention problem. OBS can be realized using three different signaling mechanisms, namely JIT, JET, and TAG [11, 12]. JIT, shown in Fig. 2, is an immediate reservation scheme in which the resource is reserved as soon as the header packet is processed. TAG, shown in Fig. 3, is also a type of immediate reservation scheme in which the mandatory buffering at each intermediate node is removed by inserting a delay equal to the buffering time. JET, shown in Fig. 4, is a delayed reservation scheme in which the resource is reserved exactly after the offset time. In JIT, whenever the control header of the incoming burst is processed, a wavelength is reserved and remains reserved until the burst is transmitted to the output port. In JET, the wavelength is blocked during the burst processing time only. In TAG, the actual data burst is sent just after the control header is processed [13]. To the knowledge of the author, a comparative performance analysis of these three signaling protocols for OBS networks has not yet been reported. A performance-based comparative analysis of the three signaling schemes for OBS networks is presented in this work, with wavelength converters used for contention resolution. The results show that the just-enough-time signaling protocol provides the best network performance among the three signaling schemes discussed here. All simulations were performed using the standard MATLAB tool.

Fig. 1 Optical burst switching architecture

Fig. 2 JIT signaling technique (tp = call set-up time, tc = time required by the switch to configure a connection from I/P port to O/P port)
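To make the delayed-reservation idea concrete: in JET the data burst follows the burst header packet (BHP) after an offset that must cover the header processing delay at every intermediate hop. A minimal sketch of this relationship (the per-hop delay values are illustrative assumptions, not values from the paper):

```python
# Minimal JET offset-time sketch: the data burst is launched one offset
# after the burst header packet (BHP), where the offset must at least
# cover the BHP processing delay at every intermediate node.
# The delay values below are illustrative assumptions.

def jet_offset_time(hop_processing_delays, switch_config_time=0.0):
    """Offset = sum of per-hop BHP processing delays, plus an optional
    switch configuration time at the destination side."""
    return sum(hop_processing_delays) + switch_config_time

delays = [10e-6] * 4  # a 4-hop path, 10 us of header processing per node
print(f"required offset: {jet_offset_time(delays) * 1e6:.1f} us")  # 40.0 us
```

Because the burst never waits at intermediate nodes, an offset that is too small would let the data overtake its own unprocessed header, which is why JET ties the offset to the hop count.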

Fig. 3 TAG signaling technique (tb = time equivalent to burst length, tp = call set-up time, tagg = burst aggregation time)
Fig. 4 JET signaling technique (tb = time equivalent to burst length, tp = call set-up time, tagg = burst aggregation time, tot = total time taken between transmission of data burst and control burst)

2 Mathematical Calculation for Blocking Probability and Gain

Considering a Poisson burst arrival process and an infinite-source system, the blocking probabilities and gains of the optical burst switching network have been determined for the different signaling schemes. Let $\rho = \lambda_s/\mu$, where $\lambda_s$ = arrival rate per subscriber, $\mu$ = processing speed of the node, $k$ = number of busy subscribers, $N$ = total number of subscribers, and $R$ = number of servers. As JIT is an immediate reservation protocol, the Erlang B formula is used to find the corresponding blocking probability and gain of the optical burst switching (OBS) architecture:

$$P_B = \frac{1}{\sum_{k=0}^{R} \rho^k C_k^n} \tag{1}$$

and the gain (G) for the JIT signaling scheme is

$$G = \frac{(N - R)\, P_R}{N - A_0} \tag{2}$$

TAG is a delayed reservation scheme, so the Erlang C formula is used to find the blocking probability and gain. As the TAG signaling scheme uses delayed reservation without void filling, the blocking probability is given by

$$P_B = \frac{\rho^R C_R^n}{\sum_{k=0}^{R} \rho^k C_k^n} \tag{3}$$

and the corresponding gain of TAG is

$$G = \frac{N}{N - A_0} \cdot \frac{\rho^R C_R^{N-1}}{\sum_{k=0}^{R} \rho^k C_k^n} \tag{4}$$

JET is a delayed reservation scheme with void filling, so the blocking probability for JET is given by

$$P_B = \frac{\sum_{k=0}^{R} \rho^k C_k^{N-1}}{\sum_{k=0}^{R} \rho^k C_k^n} \tag{5}$$

and the gain of the JET signaling scheme is

$$G = N \cdot \frac{\sum_{k=0}^{R} \rho^k C_k^n - \sum_{j=0}^{R} j \rho^j C_j^n}{\sum_{k=0}^{R} \rho^k C_k^n} \tag{6}$$


The above-mentioned formulae are used to determine the blocking probability and gain of the optical burst switching architecture under consideration. The performance was analyzed for different values of partial wavelength conversion. The simulation work was carried out in a MATLAB environment.
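The Erlang B and Erlang C quantities named above for JIT and TAG can also be evaluated numerically. The sketch below uses the standard recursive form of the Erlang B formula and derives the Erlang C waiting probability from it, rather than transcribing Eqs. (1)–(6) verbatim; the load and server values are illustrative assumptions:

```python
# Numerical sketch of the Erlang B / Erlang C blocking formulas that the
# JIT (immediate) and TAG (delayed) reservation analyses are based on.
# Standard textbook forms, not a line-by-line transcription of Eqs. (1)-(6);
# the load and server values below are illustrative assumptions.

def erlang_b(load, servers):
    """Erlang B blocking probability via the stable recurrence
    B(0) = 1, B(k) = load*B(k-1) / (k + load*B(k-1))."""
    b = 1.0
    for k in range(1, servers + 1):
        b = load * b / (k + load * b)
    return b

def erlang_c(load, servers):
    """Erlang C probability of waiting, derived from Erlang B."""
    b = erlang_b(load, servers)
    return b / (1.0 - (load / servers) * (1.0 - b))

load = 8.0  # offered traffic in Erlangs (lambda_s / mu)
for r in (10, 12, 16):  # number of servers (wavelengths)
    print(r, round(erlang_b(load, r), 4), round(erlang_c(load, r), 4))
```

Both quantities fall as the number of servers grows, mirroring the simulation trend that more wavelength converters reduce blocking for a given traffic load.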

3 Results and Discussion The simulation study was performed using the above-mentioned equations to estimate the performance of the different signaling schemes. The efficiency of each signaling scheme was determined on the basis of different network parameters, viz. incoming traffic load versus blocking probability and incoming traffic load versus gain. The simulation results show that the efficiency of any signaling scheme depends on the number of wavelength converters employed within the network. Figures 5, 6, and 7 depict the incoming traffic load versus blocking probability for the JIT, TAG, and JET signaling schemes, respectively, with the number of available wavelength converters as the variable parameter. The results show that as the number of converters increases, the blocking probability decreases for the same incoming traffic load in all signaling schemes. This is quite expected, because when more converters are employed the loss due to contention is reduced in all cases. However, the effect of the converters saturates at very high input traffic loads. The comparative results of Figs. 5, 6, and 7 indicate that, of the three signaling schemes, JET provides the best results. This result is quite interesting: JET is a delayed reservation scheme in which the actual data is sent after the header packet has been processed at the intermediate node, so the probability of data loss due to contention there is very small. Figures 8, 9, and 10 show the variation of system gain with incoming traffic for the JIT, TAG, and JET signaling schemes, respectively, for various numbers of wavelength converters. The results indicate that in all cases the gain of the system decreases as the incoming traffic increases. As the incoming traffic grows, the probability of loss due to contention also grows, and as a result the gain of the signaling scheme decreases. The gain depends not only on the incoming traffic but also on the number of wavelength converters used in the nodal architecture: if the number of available converters increases, the loss due to contention is reduced significantly. The results discussed above and shown in the figures may help network engineers design an efficient optical burst switching network.

Fig. 5 Blocking probability versus traffic load for JIT

Fig. 6 Blocking probability versus traffic load for TAG

Fig. 7 Blocking probability versus traffic load for JET

Fig. 8 Gain versus traffic load for JIT

Fig. 9 Gain versus traffic load for TAG

Fig. 10 Gain versus traffic load for JET

4 Conclusions In the present work, different signaling schemes of an OBS network have been compared on the basis of various network parameters, with the performance of each scheme evaluated in the presence of wavelength converters. The performance of the different mechanisms is evaluated in terms of blocking probability versus incoming traffic and gain versus incoming traffic, and the mathematical equations required to determine these have also been formulated. The simulation study indicates that the JET signaling scheme offers the best performance among the signaling schemes discussed here. The results also show that the performance of each signaling scheme depends on the number of available wavelength converters. Wavelength converters are very useful for overcoming contention at an intermediate node of an OBS network, so if the number of wavelength converters is increased, both the blocking probability and the gain improve for all signaling schemes, a fact verified by the simulation results.

References

1. C. Qiao, M. Yoo, Optical burst switching (OBS)—a new paradigm for an optical Internet. J. High Speed Netw. Spec. Issue Opt. Netw. Arch. 8(1), 69–84 (1999)
2. M.K. Dutta, Comparative performance analysis of different segmentation dropping schemes under JET based optical burst switching (OBS) paradigm, in 3rd International Conference on Advanced Computing, Networking and Informatics (ICACNI 2015), School of Computer Engineering, KIIT University, Odisha, India, 23–25 June 2015
3. M.S. Alam, S. Alsharif, P. Panati, Performance evaluation of throughput in optical burst switching. Int. J. Commun. Syst. 24(3), 398–414 (2011)
4. M. Klinkowski, J. Pedro, D. Careglio, M. Pióro, J. Pires, P. Monteiro, J. Solé-Pareta, An overview of routing methods in optical burst switching networks. Opt. Switch. Netw. 7, 41–53 (2010)
5. J.Y. Wei, R.I. McFarland Jr., Just-in-time signaling for WDM optical burst switching networks. J. Lightwave Technol. 18(12), 2019–2037 (2000)
6. M.K. Dutta, Design and performance analysis of traffic rerouting based congestion control technique in optical WDM network, in 3rd International Conference on Opto-Electronics and Applied Optics (OPTRONIX 2016), University of Engineering & Management, Kolkata, India, 18–20 Aug 2016
7. J.J.P.C. Rodrigues, M.M. Freire, N.M. Garcia, P.M.N.P. Monteiro, Enhanced just-in-time: a new resource reservation protocol for optical burst switching networks, in 12th IEEE Symposium on Computers and Communications (ISCC, 2007)
8. J. Teng, G.N. Rouskas, A detailed analysis and performance comparison of wavelength reservation schemes for optical burst switched networks. Photon Netw. Commun. 10(3), 311–335 (2005)
9. M.K. Dutta, V.K. Chaubey, Design and performance analysis of deflection routing based intelligent optical burst switched (OBS) network, in IEEE International Conference on Device & Communication (ICDeCom-11), BIT Mesra, Ranchi, Feb 2011
10. G. Stepniak, L. Maksymiuk, J. Siuzdak, Binary-phase spatial light filters for mode-selective excitation of multimode fibers. J. Lightwave Technol. 29, 1980–1987 (2011)
11. S. Waheed, Comparing optical packet and optical burst switching. Daffodil Int. Univ. J. Sci. Technol. 6(2) (2011)
12. R. Lamba, A.K. Garg, Survey on contention resolution techniques for optical burst switching networks. Int. J. Eng. Res. Appl. 2(1), 956–961 (2012)
13. J.P. Jue, V.M. Vokkarane, Optical burst switched networks, in Optical Network Series (Springer, 2005)

Performance Analysis of Deflection Routing and Segmentation Dropping Scheme in Optical Burst Switching (OBS) Network: A Simulation Study Manoj Kr. Dutta

Abstract Optical burst switching (OBS) along with wavelength-division multiplexing (WDM) is a very promising and useful technology to meet the ever-increasing bandwidth requirement of different Internet- and multimedia-based real-time applications. Although promising and effective, the one-way reservation protocol of optical burst switching technology leads to burst contention, so proper contention resolution is a very important issue in OBS. Among the available contention resolution techniques, deflection routing and segmentation-based dropping schemes are discussed in this paper, and a comparative analysis of the data handling capacity of these schemes is presented. Appropriate mathematical equations are used to evaluate the performance. The contention resolution performance is determined in terms of packet loss probability versus incoming traffic in a MATLAB environment. The results show that deflection routing provides better contention resolution than segmentation dropping in an OBS network. Keywords Contention resolution · Optical burst switching · Blocking probability · Deflection routing · Segmentation-based dropping · Incoming traffic

1 Introduction In recent years, demand for the Internet has been increasing day by day. In this modern era, people depend heavily on the Internet of Things, artificial intelligence, multimedia applications, and other Internet-based technologies like online marketing, online banking, and even telemedicine. All these technological advances require huge bandwidth. Optical fiber is probably the only available solution to meet this huge raw bandwidth requirement: a single optical fiber is capable of providing up to 50 GHz of bandwidth. WDM and DWDM are the technologies used to exploit the huge raw bandwidth of a single fiber. WDM/DWDM may be M. Kr. Dutta (B) BIT Mesra, Deoghar Campus, Deoghar 814142, Jharkhand, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988, https://doi.org/10.1007/978-981-13-8222-2_10


implemented by three different switching technologies, viz. optical circuit switching, optical packet switching, and optical burst switching. Of these three, optical burst switching offers the best switching technology for implementing WDM and DWDM. In OBS technology, the header packets are sent before the actual data bursts. This technology is effective and promising; however, it uses only a one-way reservation protocol, which gives rise to contention among the bursts. Contention occurs when more than one burst tries to reserve the same output wavelength: only one of the contending bursts is able to reserve the available output wavelength, and the rest are dropped. The contention problem in an OBS network can be resolved by different mechanisms: (a) optical buffering using fiber delay lines, (b) wavelength conversion, (c) deflection routing, and (d) segmentation dropping. Deflection routing [1–3] is an efficient technique to reduce burst loss in OBS networks; in this method, a burst is forwarded from one node to the other through a diverted path. The first three methods require extra hardware, add cost, and introduce additional complexity, whereas the last is a dropping policy that discards the overlapped portions of contending bursts [4, 5]. In this paper, a comparative study between deflection routing and segmentation-based dropping is presented. Appropriate mathematical models to evaluate the contention resolving capacity of both policies are derived and simulated in MATLAB.

2 OBS Fundamentals Optical burst switching (OBS) is a better alternative to all-optical packet and circuit switching. Here, data packets are concatenated into larger units called bursts, which are switched through the optical core network. The control packet of a burst is transmitted well before the actual data burst, and this feature of OBS networks allows a greater degree of statistical multiplexing and better data handling capacity than optical circuit and packet switching. Figure 1 shows the OBS signaling mechanism. In this scheme, the control signal is sent prior to the actual data signal, and the time gap between them is known as the offset time. This offset time gives the control bursts sufficient time to be processed electronically at each node and to proceed through the O/E/O conversion. Figure 2 shows the burst assembly scheme used in OBS networks [1, 2].
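As a concrete illustration of burst assembly at the ingress node, the sketch below flushes buffered packets as one burst once a size threshold is reached; the threshold value and class name are illustrative assumptions (real assemblers typically also use a timer so short bursts are not delayed indefinitely):

```python
# Size-threshold burst assembly sketch: packets from edge routers are
# buffered at the ingress node and flushed as a single burst once a byte
# threshold is reached. The threshold is an illustrative assumption.

class BurstAssembler:
    def __init__(self, max_burst_bytes):
        self.max_burst_bytes = max_burst_bytes
        self.buffer = []
        self.size = 0

    def add_packet(self, packet: bytes):
        """Buffer a packet; return a completed burst, or None."""
        self.buffer.append(packet)
        self.size += len(packet)
        if self.size >= self.max_burst_bytes:
            burst = b"".join(self.buffer)
            self.buffer, self.size = [], 0
            return burst
        return None

asm = BurstAssembler(max_burst_bytes=4000)
bursts = [b for b in (asm.add_packet(b"x" * 1500) for _ in range(5)) if b]
print(len(bursts), [len(b) for b in bursts])  # 1 [4500]
```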

Fig. 1 OBS signaling mechanism

Fig. 2 Burst assembly scheme

3 Contention in OBS Network and Different Contention Resolution Techniques The offset time between the header packet and the data packet provides considerable advantages: there is no need for buffering and switching of each packet, and processing time is utilized optimally. Though this technique is very efficient, its drawback is that the ingress node starts sending data without a reservation acknowledgment from the next node. This feature leads


to contention and data loss in the OBS network. There are different conventional techniques to overcome burst loss due to contention. Broadly, they can be classified into two categories: proactive and reactive. In the proactive approach, the precaution to overcome loss due to contention is taken well before the actual transmission of data; in the reactive counterpart, the contention resolution technique is initiated after contention occurs. This resolution policy is realized by either feedback or non-feedback techniques, both implemented through traffic management policies. Reactive contention resolution can be realized by four conventional techniques, namely optical buffering using fiber delay lines, wavelength conversion, deflection routing, and partial burst dropping. The first three are efficient but require more hardware and add cost and complexity to the circuit. In the burst segmentation dropping policy, only the overlapping portion of the bursts is dropped. Dropping may be of two types, tail dropping and head dropping, as shown in Fig. 3a and b, respectively [4–7].

3.1 Contention Resolution by Burst Dropping in OBS Network The segmentation dropping scheme offers contention resolution by discarding the overlapped portion of the burst. Dropping is of two types: (a) head dropping and (b) tail dropping. Head dropping may lead to out-of-sequence delivery of packets, while tail dropping may lead to incorrect burst length information: since the header carries no information about the newly modified burst length, it reserves resources for the original length of the burst, leading to inefficient utilization of the available bandwidth. In segmentation-based dropping, a core node may be considered a standard M/G/∞/N_E W_E system [8], where the burst loss probability is written as

$$P = \sum_{k=n+1}^{N_E W_E} \frac{k-n}{n} \cdot \frac{N_E W_E!}{k!\,(N_E W_E - k)!} \cdot \frac{(\lambda/\mu)^k}{(1 + \lambda/\mu)^{N_E W_E}} \tag{1}$$

Here the arrival of bursts is modeled as a Poisson process with mean burst arrival rate λ, and 1/μ is the mean burst length. If N_0 is the number of available output channels and W_0 the number of available output wavelengths, the total available output bandwidth resource is n = N_0 W_0. Similarly, if N_E is the number of available input channels and W_E the number of available input wavelengths, the total available input resource is N_E W_E, where N_E W_E ≥ n.
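Equation (1) can be evaluated directly. The sketch below implements the formula as reconstructed above (the exact overflow weighting was recovered from a damaged rendering of the equation, so treat the numbers as illustrative; parameter values are assumptions):

```python
# Burst loss probability for segmentation-based dropping, following the
# M/G/inf model of Eq. (1): N_E*W_E input wavelengths contend for
# n = N_0*W_0 output wavelengths, and in states with k > n active inputs
# the k - n excess bursts are (partially) dropped. Implements the
# reconstruction given in the text; the parameter values are illustrative.
from math import comb

def segmentation_loss(rho, n_inputs, n_outputs):
    """rho = lambda/mu; n_inputs = N_E*W_E; n_outputs = n = N_0*W_0."""
    norm = (1.0 + rho) ** n_inputs
    loss = 0.0
    for k in range(n_outputs + 1, n_inputs + 1):
        state_prob = comb(n_inputs, k) * rho**k / norm  # P(k inputs busy)
        loss += (k - n_outputs) / n_outputs * state_prob
    return loss

print(round(segmentation_loss(0.2, 40, 6), 6))
```

Adding output channels shrinks both the set of overflow states and the weight of each one, so the computed loss falls, consistent with Fig. 5.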

Fig. 3 a Tail dropping. b Head dropping. c Principal operation of deflection routing (tc = contending time, ts = switching time)

3.2 Deflection Routing for Contention Resolution in OBS Network Another method to reduce contention in an optical network is deflection routing, in which one of the contending bursts is sent along a different path rather than the primary path. Figure 3c explains deflection routing: if data bursts 1 and 2 arrive at node A at the same time and both are destined for node B, the burst that arrives first or has higher priority reserves the shortest path


from A to B. The unprocessed data burst then establishes a virtual path from node A to B via C, the next shortest diverted path from node A to node B. In deflection routing, an unused path may serve as the virtual path. The probability of loss in a deflection routing process can be expressed as [6–8]

$$P = \frac{\frac{1}{n!}\left(\lambda/\mu\right)^n}{1 + \sum_{k=1}^{n} \frac{1}{k!}\left(\lambda/\mu\right)^k} \tag{2}$$

where n is the number of different routes from the source node to the destination node and k is the number of input links to that node.
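Equation (2) has the form of an Erlang-B-style loss expression with the n alternative routes playing the role of servers, and can be evaluated directly (the route count and load below are illustrative assumptions):

```python
# Loss probability of deflection routing per Eq. (2): n alternative routes
# from source to destination, offered load rho = lambda/mu. The route
# count and load below are illustrative assumptions.
from math import factorial

def deflection_loss(rho, n_routes):
    num = rho**n_routes / factorial(n_routes)
    den = 1.0 + sum(rho**k / factorial(k) for k in range(1, n_routes + 1))
    return num / den

print(round(deflection_loss(2.0, 3), 4))  # 4/19 = 0.2105
```

As expected, the loss probability decreases as more alternative routes become available, matching the comparative trend in the figures that follow.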

4 Simulation and Result Simulations were carried out to evaluate the contention resolution capacity of the segmentation dropping and deflection routing schemes. The simulations were performed in MATLAB under appropriate node and traffic assumptions. Figure 4 shows the packet loss probability versus incoming traffic for different values of input traffic while the number of available output channels remains constant in the segmentation-based dropping scheme. The result shows that as the incoming traffic increases, the packet loss probability also increases: if the number of available output channels is fixed, the burst handling capacity of the node is limited. Figure 5 shows the dependence of burst loss probability on incoming traffic for several values of available output channels, while the number of input channels is kept fixed. If the number of available output channels is increased, the dropping probability should obviously decrease, and the result verifies this. Figure 6 depicts the packet loss probability of deflection routing for different numbers of available output channels; the packet loss probability decreases as the number of output channels increases for both contention resolution techniques discussed here. Figures 7, 8, and 9 show the comparative performance of segmentation dropping and deflection routing for different network parameters. The results reveal that the deflection routing scheme offers better blocking probability performance than the segmentation scheme for all values of the input and output parameters. The reason may be that deflection routing involves no burst dropping, whereas the segmentation method is itself a dropping scheme and thus shows inferior blocking performance. This result is very useful to network engineers for designing a network with minimum blocking probability.

Fig. 4 Packet loss probability versus incoming traffic, keeping the number of output channels fixed but varying the input, for segmentation dropping

Fig. 5 Packet loss probability versus incoming traffic, keeping the number of input channels fixed but varying the output, for segmentation dropping

Fig. 6 Packet loss probability versus incoming traffic, keeping the input fixed, for different output channels for deflection routing

Fig. 7 Comparative study of segmentation dropping and deflection routing for 40 input channels and 6 output channels

Fig. 8 Comparative study of segmentation dropping and deflection routing for 30 input channels and 6 output channels

Fig. 9 Comparative study of segmentation dropping and deflection routing for 20 input channels and 8 output channels

5 Conclusion Optical burst switching is a very useful and effective technology to meet the ever-increasing bandwidth requirement and to efficiently exploit the huge raw bandwidth offered by the optical fiber. Though very efficient, OBS technology has an inherent contention problem because it uses one-way reservation at the output, so appropriate contention resolution schemes are essential for its proper utilization. Contention can be resolved by different conventional schemes; in this paper, a comparative performance analysis of deflection routing and segmentation-based dropping is presented. The performance is analyzed for different network parameters, and the results show that deflection routing offers better contention resolution.

References

1. Y. Chen, C. Qiao, X. Yu, Optical burst switching: a new area in optical networking research. IEEE Netw. 18(3), 16–23 (2004)
2. C. Qiao, M. Yoo, Optical burst switching (OBS)—a new paradigm for an optical internet. J. High Speed Netw. 8(1), 69–84 (1999)
3. M.K. Dutta, V.K. Chaubey, Comparative analysis of wavelength conversion and segmentation based dropping method as a contention resolution scheme in optical burst switching (OBS) network. Proc. Eng. 30, 1089–1096 (2012)
4. L. Hongbo, M.T. Mouftah, A segmentation-based dropping scheme in OBS networks, in ICTON (2006), pp. 10–13
5. M.K. Dutta, V.K. Chaubey, Design and performance analysis of deflection routing based intelligent optical burst switched (OBS) network, in ICDeCom-11, BIT Mesra, Ranchi, 24–25 Feb 2011
6. Y. Mori, T. Abe, H. Pan, Y. Zhu, Y. Choi, H. Okada, Effective flow-rate control for the deflection routing based optical burst switching networks, in Asia-Pacific Conference on Communications (APCC'06), Busan, South Korea, 31 Aug–1 Sept 2006
7. M.K. Dutta, Comparative performance analysis of different segmentation dropping schemes under JET based optical burst switching (OBS) paradigm, in Proceedings of 3rd International Conference on Advanced Computing, Networking and Informatics, Smart Innovation, Systems and Technologies, vol. 43, Chapter 40 (2016), pp. 385–390
8. S. Sarwar, S. Aleksic, K. Aziz, Optical burst switched (OBS) system with segmentation-based dropping. Elektrotech. Informationstech. 125(7–8), 296–300 (2008)

Secure Anti-Void Energy-Efficient Routing (SAVEER) Protocol for WSN-Based IoT Network Ayesha Tabassum, Sayema Sadaf, Ditipriya Sinha and Ayan Kumar Das

Abstract The use of the Internet of Things (IoT) is increasing rapidly. In most cases, a wireless sensor network (WSN) takes responsibility for communication between IoT devices. The deployment of secure communication for resource-constrained sensor nodes in WSN poses a great challenge for researchers. In most applications, WSNs are deployed in remote areas with almost no possibility of recharging, which makes efficient energy usage a major criterion when designing security solutions in this field. To keep WSN communication confidential and to provide guaranteed packet delivery, secure communication is essential. This paper proposes a protocol, called SAVEER, which protects against both internal and external threats by using a low-cost encryption technique with a one-way hash chain-based authentication mechanism. The rolling ball approach is used to overcome the void problem. The simulation results show that SAVEER provides better routing efficiency than the existing protocols SEER, INSENS, and CASER. Keywords Wireless sensor network · Sensors · Void problem · Rolling ball · Security

A. Tabassum · S. Sadaf · A. K. Das (B) Department of Computer Science and Engineering, BIT Mesra, Patna campus, Patna, India e-mail: [email protected] A. Tabassum e-mail: [email protected] S. Sadaf e-mail: [email protected] D. Sinha Department of Computer Science and Engineering, NIT Patna, Patna, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988, https://doi.org/10.1007/978-981-13-8222-2_11

1 Introduction Wireless sensor network (WSN) is one of the main sources of data for the elements of the Internet of Things (IoT); it bridges the digital world and the real world. The concept of IoT was first proposed by the MIT Auto-ID Lab in 1999 [1]. The collaboration between the elements of IoT and WSN brings the benefits of remote access, as heterogeneous information systems are able to provide common services. IoT aims at anytime, anywhere connectivity with everything, and WSN has the capability to provide such connectivity between the real world and the virtual world. A WSN is formed by a large number of spatially distributed, connected sensor nodes. The limited computational capacity, storage space, and power of sensor nodes pose many challenges in the field of WSN. In order to minimize energy consumption, researchers have adopted clustering techniques, in which the whole network is divided into separate regions known as clusters. Clustering schemes select one node, known as the cluster head (CH), based on certain criteria to take responsibility for collecting data, aggregating the collected data, and forwarding it to the sink node in order to increase network life span [2]. A WSN may face two types of attack: internal and external. The security of nodes may involve secure bootstrapping and secure wake-up. A low duty cycle is necessary to increase the lifetime of energy-constrained nodes. In a sleep deprivation attack [3], nodes are unable to switch into sleep mode, which reduces the lifetime of a compromised sensor node. SEER [4] is a secure routing scheme designed to route aggregated data from the CH to the sink node in an energy-efficient way. SEER introduces a one-way hash chain (OHC) followed by the formation of a MAC (message authentication code). The MAC is generated using the global key and the OHC in order to impose security restrictions and to authenticate nodes.
The nodes of the IoT-based WSN are placed at different levels. In order to send secure data to the sink node, a lightweight encryption technique is used to encrypt the sensed data. WSN faces a real challenge with respect to secure data transmission. During data transmission, a number of attacks are possible, such as the selective forwarding attack [5], sinkhole attack [6], flooding attack [5], and wormhole attack [7]. In a sinkhole attack, all packets get dropped and not a single one is forwarded to the BS. In a selective forwarding attack, selected packets get dropped and only a few of them are forwarded. In a flooding attack, an adversary node represents itself as a CH and sends fraudulent data to the sink node. This paper focuses on the secure and efficient transmission of aggregated data in a WSN-based IoT network. The proposed scheme SAVEER uses the greedy forwarding technique [8] and the RUT scheme [9] to forward data to the destination while minimizing the void problem. The greedy forwarding technique and the void problem are depicted in Figs. 1 and 2, respectively. In Fig. 1, node x is the sender of data and node D is the receiver of data packets. Each node has its own defined radio range; here x's radio range is denoted by the dotted circle. A dashed arc is drawn about D with radius equal to the distance between D and y, where y is the neighbor of x which is closest to D and falls within x's radio


Fig. 1 Greedy forwarding


Fig. 2 Void problem due to greedy forwarding


range. As the distance between D and y is less than that of any other neighbor, the data is forwarded to y, and so on. This continual greedy forwarding process has a drawback. In Fig. 2, the drawback of greedy forwarding is illustrated through a topology in which the sender node x is itself much closer to D than any of its neighbors (i.e., w and y), so greedy forwarding cannot make progress through them. Here the dashed arc is drawn about D with radius equal to the distance between x and D. The region of intersection between x's radio range and the arc about D contains no neighbors of x. This intersected region is termed a void, shown in Fig. 2 as the shaded region. To address this void problem, the RUT scheme is used for boundary identification and as the forwarding strategy. The combination of greedy forwarding and the RUT scheme (i.e., the GAR protocol [10]) helps in reducing the void problem and guarantees the delivery of data from the CH to the sink node. The rest of the paper is organized as follows: Sect. 2 deals with the literature survey, Sect. 3 describes the proposed methodology, Sect. 4 analyzes the performance of the proposed protocol, Sect. 5 shows the simulation results, and Sect. 6 concludes the paper, followed by the References.

2 Literature Survey The challenges for security in WSN are totally different from those of traditional network security [11] due to inherent resource and computing constraints [12, 13]. Different kinds of denial-of-service attacks have been surveyed by a number of researchers [14]. In a WSN, nodes are susceptible to physical attack. Data transmission adopts a secure routing protocol in order to deliver the original message to the sink node. Sensor nodes are vulnerable to both internal and external attacks.


An internal attack can lead to node failure, resulting in the creation of a black hole within the network, which leads to a huge amount of packet loss. These packets may carry useful information for some specific application, and their loss may result in great loss to an industry or a nation. External attacks include the injection of some malicious act into the sensor nodes. These attacks result in packet dropping, alteration of data, and jamming. The broadcast nature of the wireless communication medium enhances the ability of an attacker to drop packets, alter packets, or insert new packets to initiate a denial-of-service attack. WSNs are deployed in remote areas and left unattended, which demands the implementation of secure routing to defend against various attacks like the selective forwarding attack, spoofing attack, hello flood attack, and sinkhole attack [15]. A WSN should exhibit security features like authenticity, confidentiality, integrity, and secure key distribution [16]. Various protocols have been designed around the encryption of data to make transmission efficient and secure. The authors of [17] have proposed a scheme for securing data transmission with encryption in WSNs. However, previous algorithms are still very expensive and do not always result in efficient security. A variety of key management schemes have been proposed [18]. In [19], a public key-based key revocation scheme that demands periodic certificate broadcast is proposed; in order to conserve the limited resources of nodes, it uses hash function chains. The authors of [20] have proposed an intrusion-tolerant protocol that sets up a tree-structured multipath routing protocol in WSN. A geography-based routing protocol is proposed in [21]. It balances the energy consumption of the sensor nodes, increases the lifetime of the WSN, and prevents jamming attacks in WSN.
Another secure routing scheme, SEER [4], has been proposed to take care of security as well as the energy-balancing issue at the same time. It prevents unnecessary clustering, and its computational complexity is reduced by using lightweight cryptography.

3 Proposed SAVEER Protocol SAVEER proposes a novel method using the greedy forwarding technique [8] and the RUT scheme [9] in order to send aggregated data to the sink node. The cluster head (CH) makes a greedy choice to select the next hop, requiring knowledge only of the sender's immediate neighbors. This may result in void problems when the forwarded data is received at a void node. SAVEER addresses the void problems which occur while transmitting aggregated data from the selected CH to the sink node and uses the GAR protocol [10] to resolve them. This protocol uses the RUT scheme as its forwarding strategy; the RUT scheme solves the boundary identification problem. The boundary traversal phase is conducted by assigning a starting point, associated with the rolling ball hinged at the void node. If there exists a node whose distance to the sink node is less than that between the void node and the sink node, the greedy forwarding algorithm resumes, and the GAR protocol thereby guarantees the delivery of data toward the sink node.


3.1 Assumptions The following assumptions are made for the proposed scheme to design the network architecture.
• Each node has an initial trust value >0 (in the simulation it is taken as 10).
• The network is static.
• The sink node is highly configured with sufficient memory, energy, and computation capability.
• The CH has aggregated encrypted data.

3.2 Greedy Forwarding Technique of Aggregated Data The main objective of SAVEER is to send aggregated data to the sink node in a secure and energy-efficient way. The aggregated data is also encrypted before being forwarded. This phase is used to protect against various attacks like selective forwarding, sinkhole, and flooding attacks. It also reduces the void problem. A set of cluster head nodes N = {N_i | ∀i} is considered. Each CH has a specific location, represented in (x, y) coordinates by the set P = {P_Ni | P_Ni = (x_Ni, y_Ni), ∀i}. Each CH node also has its own transmission range, defined by the set D = {D(P_Ni, R) | ∀i}, where D(P_Ni, R) = {x | ‖x − P_Ni‖ ≤ R, ∀x ∈ R²}; the center of the closed disk is P_Ni, and R is the radius of the transmission range of each node N_i.

3.3 Implementation of Greedy Forwarding Scheme This scheme mainly involves implementing a one-hop neighbor table T. When no void occurs in the network, the next hop is found by a linear search in T; otherwise, the RUT scheme based on the GAR protocol is adopted. T can be defined as:

T = {(ID_Nk, P_Nk) | P_Nk ∈ D(P_Ni, R), ∀k ≠ i}

where ID_Nk represents the identification number of the CH node N_k. The next-hop neighbor is selected by satisfying two conditions: (1) it has the shortest distance to the sink node among the neighbors, and (2) it is located closer to the sink node than the CH itself. This process is repeated until the destination is reached. The task is performed using the rolling ball concept.
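The neighbor-table search and the two selection conditions above can be sketched as follows. This is a minimal illustration assuming Euclidean (x, y) coordinates; `greedy_next_hop` and the toy topology are hypothetical names for this sketch, not part of the paper.

```python
import math

def dist(a, b):
    # Euclidean distance between two (x, y) points
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_next_hop(current, sink, neighbor_table, radio_range):
    """Linear search of the one-hop neighbor table T.
    Returns the neighbor that (1) has the shortest distance to the sink and
    (2) is closer to the sink than the current node; returns None when no
    such neighbor exists, i.e. a void is detected and RUT must take over."""
    best_id, best_d = None, dist(current, sink)
    for node_id, pos in neighbor_table.items():
        if dist(current, pos) <= radio_range and dist(pos, sink) < best_d:
            best_id, best_d = node_id, dist(pos, sink)
    return best_id

# Toy topology mirroring Fig. 1: x forwards toward D through y; w is out of range.
x, D = (0.0, 0.0), (10.0, 0.0)
T = {"y": (1.5, 1.0), "w": (-1.0, 2.0)}
print(greedy_next_hop(x, D, T, radio_range=2.0))  # prints: y
```

Returning `None` here marks the current node as a void node, which is exactly the case handed over to the rolling ball traversal of Sect. 3.4.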


3.4 Implementation of RUT Scheme Implementing the RUT algorithm [9] guarantees the delivery of data to the destination. In Fig. 3, the cluster head wants to forward the sensed data to the sink node. The next hop is chosen as per the greedy forwarding technique, but a void occurs at the next neighbor node. In order to solve this problem, a circle is formed with center point S_1 and radius equal to half of the transmission range (i.e., R/2). The circle is hinged at neighbor node N_1 and starts rolling in the anticlockwise direction until its boundary encounters a node (N_4 in Fig. 3). Hence, the packet is forwarded from N_1 to N_4, and a new equal-sized circle is formed, centered at S_2 and hinged at node N_4. Node N_5 is identified as the next-hop node. This process is repeated until node N_7 is reached, which has a smaller distance to the sink node than N_1 does. At node N_7, the greedy forwarding scheme resumes. Hence, the resulting path becomes CH → N_1 → N_4 → N_5 → N_6 → N_7 → N_8 → sink node. SEER [4] calculates the trust value (T_value) of the next cluster head in order to forward the aggregated data securely. It is the ratio of successful communications (S) to the total number of communications, i.e., successes plus failures (F).

Fig. 3 Elimination of void problem to construct the routing path
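A common simplification of one rolling-ball step is to sweep anticlockwise from the reversed incoming edge and take the first neighbor encountered. The sketch below illustrates that idea only; `rolling_ball_next` is a hypothetical helper, not the authors' implementation (the full RUT scheme also maintains the ball center S_i and radius R/2).

```python
import math

def ccw_angle(v_from, v_to):
    # Anticlockwise angle in (0, 2*pi] from vector v_from to vector v_to
    a = (math.atan2(v_to[1], v_to[0]) - math.atan2(v_from[1], v_from[0])) % (2 * math.pi)
    return a if a > 0 else 2 * math.pi

def rolling_ball_next(prev, current, neighbors):
    """Approximate one boundary-traversal step: starting from the edge back
    toward `prev`, sweep anticlockwise around `current` and return the first
    neighbor hit, mimicking where the hinged ball first touches a node."""
    rev = (prev[0] - current[0], prev[1] - current[1])
    def sweep(nid):
        px, py = neighbors[nid]
        return ccw_angle(rev, (px - current[0], py - current[1]))
    return min(neighbors, key=sweep)

# Hinged at the origin, having arrived from (1, 0): the anticlockwise sweep
# reaches the node at (0, 1) first.
print(rolling_ball_next((1, 0), (0, 0), {"a": (0, 1), "b": (-1, 0), "c": (0, -1)}))  # prints: a
```

Repeating this step node by node traces the void boundary (CH → N_1 → N_4 → … in Fig. 3) until a node closer to the sink than the void node is found and greedy forwarding resumes.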

T_value = S / (S + F)    (1)

Based on the calculated trust value of each node, its remaining energy (RE), and its distance from the sink node (D_BS), the weight value is calculated as:

W_value = W ∗ T_value ∗ RE / D_BS    (2)

where W is a constant. A node gets a lower weight value if it is attacked by either a selective forwarding or a sinkhole attack, because packet drops reduce its trust value. A node with a lower weight value cannot be used for data forwarding. In every situation, the weight value of each node is calculated first, and on the basis of that value it is decided whether the node is useful for data forwarding or not. This leads to useless computations of weight values for nodes with low trust values, which drains node energy and affects network lifetime. The flooding attack is caused by fraudulent messages sent from a node compromised by an adversary. In SAVEER, the greedy forwarding technique removes the burden of weight value computation for each node in the WSN-based IoT network. This helps in maintaining the energy level of the sensor nodes and the network lifetime. Here, network lifetime represents the time span from deployment to the time when the average energy of the network falls below a threshold value. The transmission of data to the sink node must be secure and energy efficient. The aggregated data is encrypted: ciphertext CT is created by generating a random number K_d. To maintain the authenticity of the aggregated data, a message authentication code is calculated using the global key K_G, the time stamp T_S, the trust value T_value, and the one-way hash chain OHC of the CH node. The CH node is treated as the parent node of whichever next neighbor hop is selected greedily. The packet format is

CH → N_i : CT || CH_ID || OHC || T_S || T_value || MAC(K_G; CT || OHC || T_S || T_value)

On receiving the encrypted data packet, the next neighbor hop replaces the parent node ID (e.g., CH_ID) with its own identification. This process is repeated until the data arrives at the sink node.
To support the decryption process at the sink node, the random number K_d is appended to the data packet as

CH → N_i : CT || K_d || CH_ID || OHC || T_S || T_value || MAC(K_G; CT || OHC || T_S || T_value)

The generated nested MAC helps in preventing the wormhole attack. On reaching the sink node, the data is decrypted with the help of K_d, and thus the original message is obtained.
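Equations (1)–(2) and the packet construction above can be sketched as follows. This is a minimal illustration: HMAC-SHA256 stands in for the paper's MAC, `os.urandom` for the random number K_d, and the `||`-joined byte encoding is an assumption, not the authors' wire format.

```python
import hashlib
import hmac
import os
import time

def trust_value(success, failure):
    # Eq. (1): T_value = S / (S + F)
    return success / (success + failure)

def weight_value(w, t_value, remaining_energy, dist_to_sink):
    # Eq. (2): W_value = W * T_value * RE / D_BS
    return w * t_value * remaining_energy / dist_to_sink

def build_packet(global_key, ch_id, ciphertext, ohc_token, t_value):
    """Assemble CT || K_d || CH_ID || OHC || T_S || T_value || MAC(K_G; ...)."""
    k_d = os.urandom(8)                    # random number K_d for the sink
    t_s = str(int(time.time())).encode()   # time stamp T_S
    t_v = str(t_value).encode()
    mac = hmac.new(global_key, ciphertext + ohc_token + t_s + t_v,
                   hashlib.sha256).digest()
    return b"||".join([ciphertext, k_d, ch_id, ohc_token, t_s, t_v, mac])

print(trust_value(8, 2))                   # prints: 0.8
print(weight_value(1.0, 0.8, 40.0, 20.0))  # prints: 1.6
```

Each forwarding hop would rebuild the packet with its own ID in place of `ch_id`, matching the parent-ID replacement described above.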


4 Performance Analysis In the following subsections, the performance of the proposed scheme SAVEER is analyzed. SAVEER uses the greedy forwarding method and the rolling ball technique in order to send the sensed data to the sink node. We compare SAVEER with three existing protocols: INSENS [20], CASER [21], and SEER [4].

4.1 SAVEER Versus INSENS [20] INSENS provides a tree-structured architecture for energy-efficient and secure communication in the network. The distributed lightweight security mechanism of INSENS is capable of reducing the damage caused by an intruder injecting, blocking, or modifying packets. The scheme includes a one-way hash chain and a nested keyed message authentication code for authentication purposes and thus defends against wormhole and selective forwarding attacks. Complexity is pushed from the resource-poor nodes toward the resource-rich base stations. It tolerates attacks caused by intruder/malicious nodes and limits their damage. SAVEER performs better than INSENS in three areas: (i) The phases of INSENS, including flooding, routing table forwarding, and data sending, expend more energy to provide security, whereas SAVEER is more energy efficient by involving only the nodes affected by an event. (ii) The solution of the void problem in the greedy forwarding technique and the inclusion of a lightweight encryption technique make SAVEER more secure than INSENS. (iii) SAVEER reduces the computational overhead of calculating node weights.

4.2 SAVEER Versus CASER [21] An efficient grid-based cost-aware secure routing protocol, called CASER, addresses both energy efficiency and security through two parameters: probabilistic-based random walking and energy balance control. The lifetime of the network and the packet delivery ratio can be improved by a non-uniform energy deployment strategy under the same security requirement and energy resource. The authors have performed a quantitative security analysis. It is clear from the performance analysis that, in the presence of attacker nodes, CASER provides a high message delivery ratio. The protocol significantly extends the lifetime of the network and provides an excellent tradeoff between energy balance and secure routing. With respect to security, INSENS performs better than CASER. SAVEER performs better than CASER in two areas: (i) Event-based cluster formation and greedy forwarding of data to the next neighbor hop enable SAVEER to reduce energy consumption.


(ii) No encryption technique is involved in CASER to prevent the spoofing attack, whereas SAVEER uses a lightweight encryption technique.

4.3 SAVEER Versus SEER [4] SEER proposes an event-based clustering approach: a cluster is created by the affected nodes, called initiator nodes, only when an event occurs, instead of clusters being created throughout the network. This saves a huge amount of energy. A one-way hash chain (OHC) and MAC are used for node authentication, and a lightweight encryption technique is used to reduce overhead. SEER is able to defend against the hello flood attack, selective forwarding attack, and wormhole attack in an energy-efficient way. SAVEER is a modified version of SEER and performs better than SEER in two areas: (i) SAVEER does not introduce any level-based architecture for data forwarding; rather, it uses a greedy forwarding process to forward the sensed data. This helps in resolving the void problem and thus reduces the number of dropped packets and the black hole attack, in contrast to SEER. (ii) Security is increased in intra-cluster communication by introducing the trust value into the calculation of the competition bid value at the time of cluster head selection. Thus, the chance of a selective forwarding attack is also reduced within the cluster. It is observed from the simulation results that the performance of SAVEER is better than that of SEER in terms of both security and network lifetime.

5 Simulation Results Table 1 describes the performance evaluation parameters. NS2 is used to simulate the performance of SAVEER. A clustered network of wireless sensor nodes in a field of 100 m × 100 m is considered for the performance validation of SAVEER. The total number of

Table 1 Description of performance evaluation parameters

Parameters                  | Description
Number of nodes             | 100
Energy at deployment time   | 50 J per node
Protocol                    | IEEE 802.15.4
Type of sensor node         | Imote2
Frequency                   | 13 MHz
Number of rounds            | 20


Fig. 4 Dead nodes versus the number of rounds

nodes is taken as 100, each possessing an initial energy of 50 J. Nodes that have more energy than the threshold value are called alive nodes, and nodes having less energy than the threshold value are called dead nodes. Each node in the field has a specific location represented by horizontal and vertical coordinates. The sink node is assumed to be located at one corner of the field. A round is defined as the process of the cluster head collecting sensed data from cluster members and forwarding it to the sink node after aggregation and encryption. For the simulation results, 20 rounds are considered. Simulations of CASER, SEER, and INSENS have also been carried out, and the number of dead nodes is counted for each of the CASER, SEER, and INSENS protocols after completing each round. The increase in dead nodes over 20 rounds is depicted in Fig. 4. In CASER, the number of dead nodes increases faster after each round than in INSENS. INSENS uses several routing phases, which lowers the energy level of each node after each round, but it is still better than CASER. In SEER, a trust value and a weight value are calculated for each node to find a path and send data from the CH to the sink node. This involves complex computations, so its energy efficiency is lower, although its performance is better than that of INSENS. In SAVEER, however, the computational overhead of weight value calculation is reduced and data is forwarded in a greedy manner, resulting in energy-efficient routing. The number of dead nodes in SAVEER after 20 rounds is lower than that of CASER, INSENS, and SEER, as depicted in Fig. 4. The number of alive nodes is also counted for each round. In Fig. 5, the simulation result for the number of alive nodes after 20 rounds is shown for each of the CASER, INSENS, SEER, and SAVEER protocols. Network lifetime is considered an important parameter to evaluate the performance of any routing protocol. It depends on the lifetimes of the individual nodes that constitute the wireless sensor network.
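The alive/dead classification used for Figs. 4 and 5 can be sketched as below; the threshold value and the uniform per-round energy drain are illustrative assumptions, not the paper's measured consumption model.

```python
def count_dead_alive(energies, threshold):
    # A node is dead when its residual energy falls to the threshold or below
    dead = sum(1 for e in energies if e <= threshold)
    return dead, len(energies) - dead

# 100 nodes at 50 J each, drained by a hypothetical per-round cost.
energies = [50.0] * 100
for rnd in range(20):
    energies = [e - 2.4 for e in energies]   # uniform drain, for illustration

print(count_dead_alive(energies, threshold=1.0))  # prints: (0, 100)
```

In the real protocols the drain per round differs per node (CH duties, routing phases, weight computations), which is what separates the curves in Fig. 4.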
After the completion of each round, the average energy of every node is measured, and that average value is taken as the indicator of network lifetime. The longer the sensor nodes remain functional, the longer the network lifetime. Figure 6 shows the network lifetime of each of the protocols CASER,


Fig. 5 Number of rounds versus alive nodes

Fig. 6 Network size versus lifetime

INSENS, SEER, and SAVEER for varying network sizes. In SEER and SAVEER, not all nodes in the network are affected by the occurrence of an event; only the nodes affected by the event form a cluster, and the rest are considered non-affected. Successful delivery of packets in the presence of malicious nodes is also considered an important parameter for performance evaluation. The ratio of the number of successfully delivered packets to the total number of sent packets is known as the packet delivery ratio. Figure 7 shows the rate of successful packet delivery in the presence of some malicious nodes for each of the CASER, INSENS, SEER, and SAVEER protocols. The simulation results show that SAVEER and SEER deliver data packets successfully at a higher rate than CASER and INSENS. In addition to the packet delivery ratio, the number of dropped packets is also taken into consideration for evaluating performance. The number of dropped packets is directly proportional to the number of malicious nodes present in the WSN; an increase in the number of malicious nodes degrades the rate of successful delivery of data packets.
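The packet delivery ratio just defined is the quantity plotted in Fig. 7; a one-line sketch (function name hypothetical):

```python
def packet_delivery_ratio(delivered, sent):
    # PDR = successfully delivered packets / total sent packets
    return delivered / sent if sent else 0.0

print(packet_delivery_ratio(930, 1000))  # prints: 0.93
```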


Fig. 7 Malicious nodes versus packet delivery ratio

Fig. 8 Packet dropped versus the number of malicious nodes

Figure 8 shows the performance of CASER, INSENS, SEER, and SAVEER with regard to the number of dropped data packets in the presence of attacker nodes in the network. SEER and SAVEER again perform better in comparison with INSENS and CASER, as they drop fewer packets. The number of malicious nodes is varied from 5 to 50 for simulating the performance; the increase in malicious nodes is proportional to the increase in the number of packet drops. CASER does not involve any intra-grid authentication process. SEER and SAVEER use an OHC-based authentication scheme to authenticate cluster members. INSENS does not use any data encryption technique, thus making data alteration easier. SEER and SAVEER both use a lightweight encryption technique to improve data confidentiality, which also leads to energy saving. The proposed protocol


SAVEER reduces the void problem and increases security. The number of dropped packets is lowest in SAVEER, as shown in Fig. 8. Thus, MAC generation based on a one-way hash chain, the lightweight encryption technique, the involvement of the trust value in intra-cluster communication, and the anti-void greedy forwarding approach together make SAVEER more secure than SEER, CASER, and INSENS.

6 Conclusions An energy-efficient secure routing protocol, called SAVEER, is proposed in this paper for WSN-based IoT networks. SAVEER focuses on saving energy by replacing weight value calculations with the greedy forwarding technique, selecting the next neighbor hop greedily. It is observed that SAVEER has to perform additional computations in order to reduce the void problem caused by greedy forwarding. However, to achieve better efficiency while forwarding aggregated data, the security of the data must also be maintained. SAVEER uses an event-based clustering technique; avoiding unnecessary cluster formation prevents nodes from draining their battery power, and the resulting energy saving increases the network lifetime. SAVEER uses a lightweight encryption technique together with a one-way hash chain (OHC) to protect data from being attacked. The simulation results show that the SAVEER protocol provides better performance in comparison with the SEER, INSENS, and CASER routing protocols.

References
1. I. Bose, R. Pal, Auto-ID: managing anything, anywhere, anytime in the supply chain. Commun. ACM 48(8), 100–106 (2005)
2. M.M. Afsar, M.-H. Tayarani-N, Clustering in sensor networks: a literature survey. J. Netw. Comput. Appl. 46, 198–226 (2014)
3. T. Martin, M. Hsiao, D. Ha, J. Krishnaswami, Denial-of-service attacks on battery-powered mobile computers, in Second IEEE International Conference on Pervasive Computing and Communications (PerCom'04) (IEEE, 2004), pp. 309–318
4. A.K. Das, R. Chaki, K.N. Dey, Secure energy efficient routing protocol for wireless sensor network. Found. Comput. Decis. Sci. (FCDS) 41(1), 3–27 (2016). ISSN 0867-6356, https://doi.org/10.1515/fcds-2016-0001
5. C. Karlof, D. Wagner, Secure routing in wireless sensor networks: attacks and countermeasures. Ad Hoc Netw. 1(2–3), 293–315 (2003). https://doi.org/10.1016/s1570-8705(03)00008-8
6. C. Cachin, J.A. Poritz, Secure intrusion-tolerant replication on the internet, in IEEE International Conference on Dependable Systems and Networks (DSN'02) (2002)
7. Y.C. Hu, A. Perrig, D.B. Johnson, Packet leashes: a defense against wormhole attacks in wireless networks, in Proceedings of IEEE INFOCOM (2003)
8. B. Karp, H.T. Kung, GPSR: greedy perimeter stateless routing for wireless networks, in 6th Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom) (Aug 2000), pp. 243–254
9. W.-J. Liu, K.-T. Feng, GAR: greedy routing with anti-void traversal for wireless sensor networks. IEEE Trans. Mob. Comput. 8(7), 910–922 (2009)


10. K.-T. Feng, W.-J. Liu, Greedy routing with anti-void traversal for wireless sensor networks. IEEE Trans. Mob. Comput. 8(7), 910–922 (2009)
11. S.N. Kumar, Review on network security and cryptography. Int. Trans. Electr. Comput. Eng. Syst. 3(1), 1–11 (2015)
12. C. Karlof, D. Wagner, Secure routing in wireless sensor networks: attacks and countermeasures. Ad Hoc Netw. 1(2–3) (2003)
13. A. Perrig, R. Szewczyk, V. Wen, D. Culler, J. Tygar, SPINS: security protocols for sensor networks. Wirel. Netw. (WINET) 8(5), 521–534 (2002)
14. A. Wood, Denial of service in sensor networks. IEEE Comput. 35(10), 54–62 (2002)
15. J. Sen, A survey on wireless sensor network security. Int. J. Commun. Netw. Inf. Secur. (IJCNIS) 1(2) (2009)
16. A. Perrig, J. Stankovic, D. Wagner, Security in wireless sensor networks. Commun. ACM 47(6), 53–57 (2004)
17. H. Hayouni, M. Hamdi, T.-H. Kim, A survey on encryption schemes in wireless sensor networks, in 7th International Conference on Advanced Software Engineering & Its Applications (2014)
18. W. Du, J. Deng, Y.S. Han, S. Chen, P.K. Varshney, A key management scheme for wireless sensor networks using deployment knowledge (2004)
19. P. Chuang, S. Chang, C. Lin, A node revocation scheme using public-key cryptography in wireless sensor networks. J. Inf. Sci. Eng. 26, 1859–1873 (2010)
20. J. Deng, R. Han, S. Mishra, INSENS: intrusion-tolerant routing for wireless sensor networks. Comput. Commun. 29(2), 216–230 (2006). https://doi.org/10.1016/j.comcom.2005.05.018
21. D. Tang, T. Li, J. Ren, J. Wu, Cost-aware secure routing (CASER) protocol design for wireless sensor networks. IEEE Trans. Parallel Distrib. Syst. 26(4), 960–973 (2015)

Mathematical Analysis of Effectiveness of Security Patches in Securing Wireless Sensor Network Apeksha Prajapati

Abstract This paper explores, through a mathematical model, the effect of updated and successfully installed security patches in securing a sensor network. This work also examines the model for both secure and insecure network cases. The model investigates the equilibria and their stability and also discusses the role of the parameters which are responsible for securing networks. Keywords Sensor network · Security patches · Virus · Epidemic model · Reproduction number

1 Introduction Due to easy installation and cost effectiveness, wireless networking has great importance in computer networking. A wireless sensor network (WSN) is one form of networking which consists of numerous sensors. Major applications of WSN are in the fields of observing weather conditions, pollution levels, etc. It has a great future in networking because of its monitoring features. The network passes the collected data to the sink, also known as the base station. The base station is an interface between the user and the network. Communication between sensor nodes takes place through radio signals. The nodes have very limited storage capacity, less bandwidth, and a few more constraints; due to this, an attacker can easily disturb WSN services. WSN is very vulnerable to malware attack. Viruses are responsible for different kinds of interruptions in the network like privacy violation, congestion, sinkhole attacks, etc. [1, 2]. The attacker mainly exploits vulnerabilities in the routing protocols of a WSN to launch attacks like the wormhole [3], sinkhole [4], and Sybil [5] attacks, so countermeasures must be in place before such attacks are launched. This paper describes the above attack scenario in mathematical form and the role of security patches in sensor networks. In order to develop efficient and

A. Prajapati (B) Nirmala College Ranchi, Ranchi, India. e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2020 S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988, https://doi.org/10.1007/978-981-13-8222-2_12


effective strategies for its security, an epidemic model is formulated. Epidemic modeling is a branch of mathematics which describes infectious disease transmission. This work also concentrates on the different parameters which affect the recovery of sensors from malware, such as the rate of installation of security patches in different sensors and the rate of recovery of sensors from malware attack. The basic epidemic models describe the flow of individuals that are susceptible (S) to and infected (I) with a particular disease [6]. The basic assumptions produce a set of two standard differential equations. Various models have been produced by modifying the basic structure of the epidemic models, either by adding complexity or by using different tools [7–10]. Feng et al. have proposed an SIRS model with communication radius and distributed node density to analyze worm propagation in WSN; the model focuses on the spatial and temporal dynamics of the spreading behavior of worms [11]. The paper by Singh et al. describes the epidemic behavior of digital worms in WSN; the threshold value, or reproduction number, with respect to communication radius and distributed node density is discussed, along with the stability of the model [12]. Upadhyay et al. proposed an SVEIR model to understand virus dynamics in computer networks. The model uses a Holling type II functional response, which is taken as the treatment function; stability analysis is performed for both the endemic and virus-free states [13]. The paper by Tao et al. focuses on an SIRS model based on a feedback mechanism on a scale-free network. The reproduction number is identified, and stability analysis is discussed based on its value. It is also shown that the epidemic spreading is affected by the feedback parameters [14]. There is plenty of literature on different models in epidemiology, including the work of various researchers [15, 16].
The rest of the paper is structured as follows. The next section introduces the mathematical model and presents theoretical results. The malware-free equilibrium, the endemic equilibrium, and their stability are discussed in Sects. 3 and 4. The following section presents a simulation-based analysis, followed by the conclusion.

2 Model Formulation

Let the network consist of N sensors, divided into five exclusive states: susceptible (S(t)), immunized (S_I(t)), infectious (I(t)), quarantined (Q(t)) and recovered (R(t)). S(t) represents the number of sensors that are susceptible to malware attack. S_I(t) is the number of sensors that are immunized and have only a nominal chance of malware attack. I(t) is the number of infected sensors. Q(t) is the number of sensors that are infected and isolated from the network for better recovery. R(t) represents the number of recovered sensors. Data transmission from infectious to susceptible sensor nodes is responsible for malware spread in a WSN. In the model, it is assumed that the installation of security patches brings a partial degree of protection and that the protection wanes with time. In an

Mathematical Analysis of Effectiveness of Security Patches …

145

immunization strategy, two classes of entities may be considered: (i) sensors with security patches whose immunity has not yet diminished, which belong to the immunized class S_I, and (ii) sensors without security patches, which belong to the fully susceptible class. The immunized class S_I is produced by the installation of security patches in the susceptible class and by a fraction p of recruited individuals. The installation of security patches does not always bring complete security to the sensors, because security patches require regular updating. Since security patch updates are not performed at regular intervals, the individuals of this class may still become infected, at a lower rate of infection σ. An e-epidemic SS_I IQR model exemplifies the dynamics of malware transmission and the effectiveness of security patches in the sensor network. The mixing of nodes (here, nodes mean sensors) follows the law of mass action. The basic assumptions of the model are as follows: each new sensor added to the network is primarily susceptible; sensors with security patches are in the immunized class; the active population includes all the sensors; the parameters of the model are assumed to be positive; the per capita contact rate does not depend upon the total population size; and all interactions are assumed to be homogeneous. The variables and parameters of the model are defined as follows: β is the infectivity contact rate; δ is the recovery rate of sensors in the immunized class; μ represents the death rate due to malware attack; d is the natural death rate of sensors; p is the probability of recruitment of sensors having security patches; ε is the recovery rate in the quarantined class; η is the quarantine rate; α is the immunization rate (the rate at which security patches are installed in the sensors of the susceptible class); and (1 − σ) represents the degree of protection induced by primary immunization.
γ is the rate at which sensors in the immune class lose their immunity. The flow of malware in the sensor network is shown in Fig. 1. The differential equations of the model are given as

dS/dt   = (1 − p)b − βSI − αS + γS_I − dS
dS_I/dt = pb − σβS_I I − (γ + δ + d)S_I + αS
dI/dt   = βSI + σβS_I I − (d + μ + η)I                      (1)
dQ/dt   = ηI − (d + ε + μ)Q
dR/dt   = εQ + δS_I − dR
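As a concrete illustration, system (1) can be integrated numerically. The sketch below is not the authors' MATLAB/ode45 code; it is a plain fourth-order Runge–Kutta integration in Python, and the parameter values are illustrative assumptions in the range of Table 1 (treating the natural and malware-induced death rates as d = μ = 0.01), not the paper's exact configuration.

```python
def rhs(state, b=0.1, beta=0.4, alpha=0.01, gamma=0.001, delta=0.005,
        d=0.01, sigma=0.3, mu=0.01, eta=0.03, eps=0.01, p=0.8):
    """Right-hand side of system (1); parameter values are illustrative."""
    S, SI, I, Q, R = state
    dS  = (1 - p) * b - beta * S * I - alpha * S + gamma * SI - d * S
    dSI = p * b - sigma * beta * SI * I - (gamma + delta + d) * SI + alpha * S
    dI  = beta * S * I + sigma * beta * SI * I - (d + mu + eta) * I
    dQ  = eta * I - (d + eps + mu) * Q
    dR  = eps * Q + delta * SI - d * R
    return (dS, dSI, dI, dQ, dR)

def rk4(f, state, t_end, h=0.05):
    """Classical fourth-order Runge-Kutta integration from t = 0 to t_end."""
    t = 0.0
    while t < t_end:
        k1 = f(state)
        k2 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
        k3 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
        k4 = f(tuple(s + h * k for s, k in zip(state, k3)))
        state = tuple(s + (h / 6.0) * (a + 2 * b + 2 * c + e)
                      for s, a, b, c, e in zip(state, k1, k2, k3, k4))
        t += h
    return state

# Initial condition (S, S_I, I, Q, R) = (0.99, 0, 0.01, 0, 0), as in Figs. 2-3.
final = rk4(rhs, (0.99, 0.0, 0.01, 0.0, 0.0), t_end=500.0)
```

Varying α here moves sensors from S into S_I, where the force of infection is scaled by σ; this is the mechanism behind the α comparison discussed later in Sect. 5.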


3 Equilibrium Point and Reproduction Number

The stability of equilibrium points gives information about the long-term behavior of the model. At an equilibrium point, dS/dt = dS_I/dt = dI/dt = dQ/dt = dR/dt = 0. In the absence of attack, the model has a unique malware-free equilibrium point

E0 = ( b(1 − p)(δ + d) / [(δ + d)(α + d) + γd],  b(α − pd) / [(δ + d)(α + d) + γd],  0,  0,  δb(α − pd) / (d[(δ + d)(α + d) + γd]) )

and an endemic equilibrium point at E* = (S*, S_I*, I*, Q*, R*), where

S*   = p′σ / [qσ(1 − γp′) + γp′(1 + γβσ − σ)]
S_I* = p′σ(1 − βγ) / [qσ(1 − γp′) + γp′(1 + γβσ − σ)]
I*   = (p′γ/(σβ))(1 + σγβ − β) − q/β
Q*   = [γη²(1 + σβγ − σ) − qσ] / (γ²β²)
R*   = ε[γη²(1 + σβγ − σ) − qσ] / (γ²β²) + δp′σ(1 − βγ) / [qσ(1 − γp′) + γp′(1 + γβσ − σ)]

with p′ = (γ/(δβ))(d + μ + η) and q = α + d + γσ.

To control the attack, the mathematical model focuses on the malware-free equilibrium point to provide a threshold condition. Since the class of recovered individuals (R) does not appear in the first four equations of (1), the study will be limited to the dynamics of those four equations. The total population (sensor) size is N = S + S_I + I + Q + R, and dN/dt = b − dN; thus N → b/d as t → ∞. The feasible region

Ω = {(S, S_I, I, Q) : S, S_I, I, Q ≥ 0, S + S_I + I + Q ≤ b/d}

is a positively invariant set for the model.

Fig. 1 Flow diagram


Therefore, attention will be confined to the dynamics of the model in Ω.

4 Stability Analysis: Local and Global

Theorem 1 The malware-free equilibrium E0 is locally stable in Ω.

Proof For the analysis of local stability, the Jacobian of system (1) at the malware-free state (i.e., I = 0) is given as

J = [ −(α + d)    γ              0              0
       α         −(γ + δ + d)    0              0
       0          0             −(d + μ + η)    0
       0          0              η             −(ε + d + μ) ]

The characteristic equation of the Jacobian at the malware-free state is

(α + d + λ)(μ + η + d + λ)(μ + ε + d + λ)(γ + δ + d + γδ + λ) = 0

with eigenvalues −(α + d), −(μ + η + d), −(μ + ε + d) and −(γ + δ + d + γδ). Since all the eigenvalues are negative, the malware-free equilibrium is locally asymptotically stable.

The reproduction number is the threshold value of an epidemic model. It is generally used to measure the spreading potential of malware in e-epidemic models, and this threshold has an important role in malware eradication. It can be understood as the total number of secondary contaminations produced by a typical case of contamination in a totally vulnerable population. When R0 ≤ 1, the malware-free equilibrium is stable, and for R0 > 1 the endemic equilibrium exists. The reproduction number for the developed model, obtained by the method presented by Hurford et al. [17], is

R0 = β / [(d + μ + η)(d + μ + ε)]
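The threshold can be evaluated directly from this expression. The helper below simply encodes the printed formula; the numeric values passed to it are made-up illustrations, not parameters from the paper.

```python
def reproduction_number(beta, d, mu, eta, eps):
    # R0 = beta / ((d + mu + eta)(d + mu + eps)), as printed above.
    return beta / ((d + mu + eta) * (d + mu + eps))

# Illustrative values chosen so that (d + mu + eta) = 0.2 and
# (d + mu + eps) = 0.25, giving R0 = 0.05 / 0.05 = 1.0, the boundary
# between the malware-free and endemic regimes.
r0 = reproduction_number(beta=0.05, d=0.1, mu=0.05, eta=0.05, eps=0.1)
```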

Theorem 2 If R0 > 1, then the unique endemic equilibrium E* is globally stable in Ω.

Proof The geometric approach is applied to study the global stability of the endemic equilibrium [18, 19]. The proof of the theorem follows from the proof of the lemma below, which establishes global stability.

Lemma 1 Assume that G is simply connected and that the following assumptions hold. Then the unique equilibrium x* is globally asymptotically stable in G if q̄1 < 0.

Proof Consider the autonomous dynamical system ẋ = f(x) with initial condition x(0) = x0, where f: G → R^n, G ⊂ R^n is an open, simply connected set and f ∈


C¹(G). Let x* be a unique endemic equilibrium point. There exists a compact absorbing subset K of G, and x* is globally stable if it satisfies the additional Bendixson criterion

q̄1 = lim sup_{t→∞} sup_{x0∈K} (1/t) ∫₀ᵗ μ(D(x(s, x0))) ds < 0, where D = A_f A⁻¹ + A (∂f^[2]/∂x) A⁻¹.

The matrix A_f is obtained by replacing each entry a_ij of A by its derivative in the direction of f, (a_ij)_f, and μ is the Lozinskiĭ measure of D with respect to a vector norm |·| in R^n. Further, J^[2] = ∂f^[2]/∂x is the second additive compound Jacobian matrix, of order 6, whose diagonal entries are

A11 = −(βI + α + 2d + σβI + δ + γ)
A22 = −(βI + α + 2d + σβS_I − βS + μ + η)
A33 = −(βI + α + 2d + μ + ε)
A44 = −(βI + μ + η + 2d − σβS_I + σβI + δ + γ)
A55 = −(βI + μ + ε + 2d + σβI + δ)
A66 = −(−βS − σβS_I + 2μ + ε + 2d + η)

For the matrix D in the Bendixson criterion, a diagonal matrix A = diag(1, I/Q, I/Q, I/Q, I/Q, I/Q) is chosen, so that A_f A⁻¹ = diag(0, (I/Q)_f (Q/I), …, (I/Q)_f (Q/I)). D is then given as

D = A_f A⁻¹ + A J^[2] A⁻¹

which can be expressed in the form of a block matrix as

D = [ D11  D12
      D21  D22 ]

where D11 = [A11]; D12 = (σβS_I (Q/I), 0, −σβS (Q/I), 0, 0); D21 is the corresponding 5 × 1 column vector, with nonzero entries built from σβI (I/Q), γ and βI; and D22 is the 5 × 5 block whose diagonal entries are of the form (I/Q)_f (Q/I) − A_jj, j = 2, …, 6, and whose off-diagonal entries involve η, γ, α, ε, βI, βS and σβS_I.

To calculate the Lozinskiĭ measure of the matrix D, one first calculates sup{g1, g2} [18, 20], where g1 and g2 are defined as

g1 = μ(D11) + |D12| = −(βI + α + 2d + σβI + δ + γ) + σβS_I (Q/I)
g2 = |D21| + μ1(D22) = −2d + βI + σβγ (I/Q) + (I/Q)_f (Q/I)

Here |D12| and |D21| are matrix norms with respect to the l1 vector norm, and μ1 denotes the Lozinskiĭ measure with respect to l1 [20]. To obtain μ1(D22), add the absolute value of the off-diagonal elements to the diagonal one in each column of D22 and then take the maximum of the sums. Now (I/Q)_f (Q/I) = I′/I − Q′/Q, and from the model equations

Q′/Q = η (I/Q) − (μ + ε + d) and I′/I = σβS_I + βS − (μ + d + η).

Substituting these relations into g1 and g2 and simplifying gives

μ(D) ≤ sup{g1, g2} ≤ Q′/Q − 2d

and so

∫₀ᵗ μ(D) ds < log Q(t) − 2dt.

Hence q̄1 = (1/t) ∫₀ᵗ μ(D) ds < (1/t) log Q(t) − 2d < 0 for all initial values of S, S_I, I, Q in Ω. The condition q̄1 < 0 is fulfilled, which, by Lemma 1, proves the global stability of the endemic equilibrium. The condition also verifies the local stability of the endemic equilibrium.

5 Simulation-Based Analysis

The simulations are carried out in MATLAB with the ode45 suite. Figure 2 represents the secure scenario: the infection in the system drops significantly, almost to zero, whereas the case is very different in the insecure scenario. Figure 3 shows that 35% of the total population remains infected in the system over time, and the infection never drops to zero. Figure 4 presents the global stability of the endemic equilibrium in support of Theorem 2. Figures 5 and 6 present the behavior of the infectious class versus the immune class for the secure and insecure scenarios, respectively. Figure 7 shows the behavior of the infectious class with respect to time for different values of the rate of installation of security patches (α). From Fig. 7 it can be seen that the number of infected sensors in the network decreases as the rate of installation of security patches increases: the infection in the network is 62% when α = 0.001, but reduces to 20% when α = 0.8.



Fig. 2 Behavior of the system with respect to time in the case of the secure scenario, with initial condition (0.99, 0, 0.01, 0, 0)


Fig. 3 Behavior of the system in case of insecure scenario with initial condition (0.99, 0, 0.01, 0, 0)

6 Conclusion

This paper discussed a mathematical model of malware spread and of the effectiveness of security patches in WSNs. Two states have been discussed to understand the network behavior with and without security patches. The secure state stands for successful installation of security patches and their regular updating; the insecure state means unsuccessful installation of the security patches or negligence in updating them. Table 1 presents both states in terms of the distribution of sensors in the different classes with



Fig. 4 Endemic equilibrium of the system


Fig. 5 Infectious class versus immune class in secure scenario




Fig. 6 Infectious class versus immune class in insecure scenario


Fig. 7 Behavior of the infectious class for different values of alpha (the rate of installation of security patch software) with initial condition (0.96, 0, 0.04, 0, 0)

time. It can be seen that in the secure state the infection in the network drops almost to zero, whereas infection persists with high intensity in the insecure state. Five out of 1000 sensors are infected in the secure state, while the number of infected sensors is 341 in the insecure case; such a large number of infected sensors can easily break down the network. Hardly 1% of sensors get infected with the use of updated security patches, whereas more than 80% of sensors get infected in the absence of security patches or of regular updates. With this model, the conclusion can


Table 1 Secure and insecure state

Insecure state                                  Secure state
Time    S      SI     I      Q      R          Time    S      SI     I      Q      R
0.00    0.990  0.000  0.010  0.000  0.000      0.00    0.990  0.000  0.010  0.000  0.000
0.06    0.988  0.006  0.010  0.000  0.000      11.09   0.613  0.090  0.256  0.020  0.009
5.20    0.844  0.414  0.032  0.001  0.005      24.50   0.451  0.172  0.296  0.108  0.100
34.4    0.105  0.904  0.416  0.053  0.147      62.97   0.026  0.328  0.103  0.038  0.349
93.1    0.115  0.993  0.342  0.043  0.249      105.4   0.058  0.424  0.025  0.008  0.358
137.6   0.115  0.989  0.341  0.043  0.262      195.4   0.115  0.507  0.007  0.002  0.308
259.5   0.115  0.989  0.341  0.043  0.265      253.6   0.132  0.525  0.008  0.002  0.295
387.4   0.115  0.989  0.341  0.043  0.266      334.4   0.135  0.534  0.011  0.002  0.292
421.4   0.115  0.989  0.341  0.043  0.266      422.7   0.125  0.536  0.015  0.003  0.298
500     0.115  0.989  0.341  0.043  0.266      500     0.121  0.537  0.005  0.003  0.302

Parameters (insecure state): b = 0.01, α = 0.01, β = 0.4, d = 0.01, ε = 0.01, d1 = 0.01, δ = 0.005, η = 0.003, p = 0.8, σ = 0.3, γ = 0.001
Parameters (secure state): b = 0.1, α = 0.001, β = 0.4, d = 0.01, ε = 0.01, d1 = 0.01, δ = 0.005, η = 0.03, p = 0.8, σ = 0.3, γ = 0.0001

be drawn that security patches help secure sensor networks; although they do not ensure complete security, the security of the network can be optimized by the proper use of security patches.

References

1. D. Welch, S. Lathrop, Wireless security threat taxonomy, in Information Assurance Workshop (IEEE Systems, Man and Cybernetics Society, 2003), pp. 76–83
2. A. Herzog, N. Shahmehri, C. Duma, An ontology of information security. Int. J. Inf. Secur. Priv. 1(4), 1–23 (2007)
3. Y. Hu, A. Perrig, D. Johnson, Packet leashes: a defense against wormhole attacks in wireless networks, in IEEE INFOCOM, Twenty-Second Annual Joint Conference of the IEEE Computer and Communications Societies, vol. 3 (2003)
4. C. Karlof, D. Wagner, Secure routing in wireless sensor networks: attacks and countermeasures. Ad Hoc Netw. 1, 293–315 (2003)
5. J. Douceur, The Sybil attack, in Peer-to-Peer Systems, First International Workshop (IPTPS, Cambridge, MA, USA, 2002), pp. 251–260
6. W.O. Kermack, A.G. McKendrick, A contribution to the mathematical theory of epidemics. Proc. R. Soc. Lond. Ser. A 115, 700–721 (1927)
7. J.R.C. Piqueira, A modified epidemiological model for computer viruses. Appl. Math. Comput. 213(2), 355–360 (2009)
8. T. Qiulin, H. Xie, Dynamical behavior of computer virus on internet. Appl. Math. Comput. 217(6), 2520–2526 (2010)
9. M.E. Alexander, S.M. Moghadas, P. Rohani, A.R. Summers, Modelling the effect of a booster vaccination on disease epidemiology. J. Math. Biol. 52, 290–306 (2006)
10. A. Prajapati, B.K. Mishra, Cyber attack and control technique, in Information Systems Design and Intelligent Applications, vol. 339 (Springer, 2015), pp. 157–166
11. F. Liping, S. Lipeng, Q. Zhao, W. Hongbin, Modeling and stability analysis of worm propagation in wireless sensor network. Mathematical Problems in Engineering, Article ID 129598 (2015)
12. A. Singh, A.K. Awasthi, K. Singh, P. Srivastava, Modeling and analysis of worm propagation in wireless sensor networks. Wireless Pers. Commun. 98, 1–17 (2017)
13. R.K. Upadhyay, S.M. Kumari, A.K. Misra, Modeling the virus dynamics in computer network with SVEIR model and nonlinear incident rate. J. Appl. Math. Comput. 54, 485–509 (2017)
14. T. Li, X. Liu, J. Wu, C. Wan, Z. Guan, Y. Wang, An epidemic spreading model on adaptive scale-free networks with feedback mechanism. Physica A: Stat. Mech. Appl. 450(C), 649–656 (2016)
15. J.R.C. Piqueira, B.F. Navarro, L.H.A. Monteiro, Epidemiological models applied to virus in sensor network. J. Comput. Sci. 1, 31–34 (2005)
16. T. Qiulin, H. Xie, Dynamical behaviour of computer virus on internet. Appl. Math. Comput. 6, 2520–2526 (2010)
17. A. Hurford, D. Cownden, T. Day, Next-generation tools for evolutionary invasion analyses. J. R. Soc. Interface 7, 561–571 (2009)
18. M.Y. Li, J.S. Muldowney, On Bendixson's criterion. J. Differ. Equ. 106, 27–39 (1994)
19. B.K. Mishra, K. Haldar, e-Epidemic models on the attack and defense of malicious objects in networks, in Theories and Simulations of Complex Social Systems, Intelligent Systems Reference Library, vol. 52, ed. by V. Dabbaghian, V. Mago (Springer, Berlin, Heidelberg), pp. 117–143
20. M.Y. Li, J.S. Muldowney, A geometric approach to global-stability problems. SIAM J. Math. Anal. 27, 1070–1083 (1996)

A Novel Debugger for Windows-Based Applications

J. Anirudh Sharma, Partha Sarthy Banerjee, Hritika Panchratan and Ayush

Abstract With every passing day science develops, and with it the field of software and applications grows at an unmatched rate. With this advent of technology, the complexity of our systems has increased manifold. Finding errors in small-scale software with just about a thousand lines of code was already a difficult task, and now, in an era of virtual reality and artificial intelligence, programs have reached millions of lines of code. Searching for and pinpointing an error can cost a lot of time as well as human effort; that is where we need a tool that can guide the user straight to the place where the error occurred. In this paper, based on the Windows operating system, we offer a solution to the problem of finding such errors.

Keywords Black-box debugging · Functional testing · Windows applications · Breakpoints

1 Introduction

A debugger is software that finds bugs (errors) in a program or a running application. It uses many sets of instructions that help it achieve a higher level of control over execution. Under specific conditions, it also allows a program to be stopped at a point desired by the programmer/user. Simulators are sometimes used instead of directly running the program on the processor; the use of a simulator decreases the speed of execution of the program. There are times when a program crashes; at that moment, a debugger is used to point out the error, targeting its exact location. Debuggers generally have the ability to run programs in step-by-step mode. Debuggers also report the state of a program at run time. Advanced debuggers provide detailed information that includes threads, processes, and the program at every state of its execution.

J. A. Sharma (B) · P. S. Banerjee · H. Panchratan · Ayush, Department of Computer Science and Engineering, Jaypee University of Engineering and Technology, Raghogarh, Guna 473266, Madhya Pradesh, India, e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020, S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988, https://doi.org/10.1007/978-981-13-8222-2_13

157


Regarding the functionality of debuggers, they may be classified into two types: white-box debuggers and black-box debuggers. White-box debuggers are built into software such as IDEs; these development platforms contain an inbuilt debugger that is used to debug the source code. Black-box debuggers are not built into any software. This type of debugger assumes that the user cannot see the source of the software and can only see the information in its disassembled format. Black-box debuggers are in turn of two types: user mode and kernel mode. User mode works on ring three, the uppermost level; a processor mode in which the user can run applications is known as user mode. User-mode applications do not have elevated privileges. Kernel mode works on ring zero; kernel-mode debuggers have great privileges, as this is the core of the operating system. Examples of user-mode debuggers are WinDbg, OllyDbg, and the GNU debugger used on the Linux platform. Another debugger is PyDbg, which is purely based on Python. One more notable debugger is the Immunity Debugger, which is somewhat like OllyDbg but with many enhancements. A kernel-mode debugger helps us examine the state and contents of the basic registers. EAX, known as the accumulator register, is where arithmetic operations take place and where return values from function calls are stored. EDX, known as the data register, is an extension of EAX and helps in storing more complex data. ECX, the count register, stores the count at the end of loops/iterations. ESI, the source index register, points to the location of the data on which an operation is performed. EDI, the destination index register, is similar to ESI; ESI is used for reading, whereas EDI is used for writing data. ESP, the stack pointer, holds the return address as it points to the top of the stack. EBP, the stack base pointer, always points to the base of the stack. EBX is an extra register that can be used for any other purpose.

2 Related Work

Recent work on this topic includes automating test-case generation for fault injection in embedded systems, where concepts of machine learning are used to reverse-engineer models of the system under consideration, as discussed in [1]. The concept of time-travel debugging is fairly new to the field; here the user has the options to replay, reverse, and analyze the software, as discussed in [2]. A debugger architecture has been designed for multiprocessor systems-on-chip (MPSoC) based on a network-on-chip (NoC); an MPSoC demands a bootloader and a debugger architecture that remain scalable as the number of processors increases. The debugger design and the bootloader utilize the NoC summarized in [3] to distribute data to and from the cores, as defined in [4]. A small hardware overhead suffices to exploit the full benefits


Fig. 1 Orientation of stack pointers

of the scalable NoC architecture [5, 6]. For 8-bit microcontrollers, an all-in-one debugger has been introduced, which consists of three important modes named boundary-scan mode [7], programming mode [8], and debugging mode [9]. Its main feature is that it has a much higher transfer speed than other protocols, and it can be tested on a field-programmable gate array (FPGA) [10]. A debugger that debugs Inter-Integrated Circuit (I2C) buses [11] at very low cost and with an independent monitoring system has been designed [12]. The system can be connected to any I2C or SMBus [13], for example in consumer electronics, measurement equipment, switch-mode power supplies, automotive equipment, etc. It uses the master–slave concept [14]: it monitors the bus by behaving as a slave, and it controls the bus by behaving as a master [15]. It operates efficiently from battery voltage with direct-current-to-direct-current regulation [16]. An industry-standard debugger is used to facilitate cyclic debugging in real-time systems with a real-time operating system. It includes loop iteration for interrupts produced by deterministic reproduction, and it provides an algorithm to find starting points as well as techniques for replaying the target system. Deterministic monitoring is discussed, and an industrial-strength benchmark is provided [17] (Fig. 1).

3 Debugging Events

As discussed, a debugger is mainly used when a bug is found at some point during the execution of a program, so some notion of a debugging event is needed. These events are breakpoint hits, exceptions raised by the running program, and memory-access violations.


Fig. 2 System architecture: control flow

4 Architecture

See Fig. 2.

5 Proposed Work At first, the process ID of a desired process/application is passed as input for the program which further lets the main debugger process to attach and take charge of the control of flow. Now, since the application is in the control of the debugger, the breakpoints can be set and the program is executed step-by-step as shown in Fig. 2. The vulnerabilities can be tested by going to that point of error where the flow breaks away from the usual. The communication channel helps in the flow of data from the application to the debugger interface. After the breakpoints are identified, the code/program can be examined. Errors like memory violation and exception occurrences can be identified and dealt with (Fig. 3).


Fig. 3 Result when PID is passed and the process is successfully attached


6 Implementation

Algorithm 1 My_dbg: defines the debugger by putting all the structures, unions, and constant values in one place for maintainability.

import all from ctypes library
WORD stores c_ushort
D_WORD stores c_ulong
LPBYTE stores pointer to (c_ubyte)
LPTSTR stores pointer to (c_char)
HANDLE stores c_void_p
Debug_Process stores 0x00000001 # constant
Create_New_Console stores 0x00000010 # constant
class StartUpInfo(Structure):
    _fields_ = …

s_t = α x_{t−1} + (1 − α) s_{t−1},  t > 1        (2)

where α is the smoothing factor, and 0 < α < 1. In other words, the smoothed statistic s_t is a simple weighted average of the previous observation x_{t−1} and the previous smoothed statistic s_{t−1}. The term smoothing factor applied to α here is something of a misnomer, as larger values of α actually reduce the level of smoothing, and in the limiting case with α = 1 the output series is just the same as the original series (with

Modeling a Raga-Based Song and Evaluating …

257

a lag of one time unit). Values of α close to one have less of a smoothing effect and give greater weight to recent changes in the data, while values of α closer to zero have a greater smoothing effect and are less responsive to recent changes. There is no formally correct procedure for choosing α. Sometimes the statistician's judgment is used to choose an appropriate factor. Alternatively, a statistical technique may be used to optimize the value of α; for example, the method of least squares has been used in our case to determine the value of α for which the sum of the quantities (s_{n−1} − x_{n−1})² is minimized. See [12] for further literature. Some authors call this single exponential smoothing to distinguish it from double exponential smoothing.
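The recursion in (2) and the least-squares choice of α are easy to prototype. The sketch below is an illustration (not Minitab's internal routine): it smooths a series for a given α and then picks, from a small grid, the α minimizing the sum of squared one-step errors.

```python
def smooth(x, alpha):
    # Eq. (2): s_1 = x_1, then s_t = alpha * x_{t-1} + (1 - alpha) * s_{t-1}.
    s = [x[0]]
    for t in range(1, len(x)):
        s.append(alpha * x[t - 1] + (1 - alpha) * s[t - 1])
    return s

def sse(x, alpha):
    # Sum of squared one-step errors (s_t - x_t)^2, as in the least-squares choice.
    s = smooth(x, alpha)
    return sum((st - xt) ** 2 for st, xt in zip(s[1:], x[1:]))

def best_alpha(x, grid=(0.1, 0.3, 0.5, 0.7, 0.9)):
    # Coarse grid search for the smoothing factor.
    return min(grid, key=lambda a: sse(x, a))

series = [1, 2, 3, 4, 5]            # a steadily trending toy series
alpha_star = best_alpha(series)
```

For this trending series, the search selects the largest α in the grid, matching the remark above that values of α near one give the greatest weight to recent changes.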

1.3 Computing the Raga % Using Similarity Analysis of Melodies and Segments

A melody may be mathematically defined as a sequence of notes, "complete" in some sense as determined by music theory, taken from a musical piece [13]. For example, every line of a song is a melody since it is complete; a melody need not be a complete musical sentence, and it suffices if it is a complete musical phrase. A segment is a sequence of notes which is a subset of a melody but is itself incomplete. For example, {Sa, Sa, Re, Re, Ga, Ga, Ma, Ma, Pa} is a melody and {Sa, Sa, Re, Re} is its segment in raga Kafi. The length of a melody or segment refers to the number of notes in it. The significance of a melody or segment (in monophonic music such as Indian classical music) is defined as the product of the length of the melody and the number of times it occurs in the musical piece; thus, both frequency and length are important factors in assessing the significance of a melody or its segment. For a more technical definition of the significance of a melody in polyphonic music, see [14]. By the shape of a melody is meant the sequence of differences of successive pitches of its notes; e.g., the shape of the Kafi melody {Sa, Sa, Re, Re, Ga, Ga, Ma, Ma, Pa}, i.e., {0, 0, 2, 2, 4, 4, 5, 5, 7} from Table 1, is {0, 2, 0, 2, 0, 1, 0, 2}. Two melodies are in translation if the correlation coefficient r of their shapes equals +1. Two melodies are in inversion if the correlation coefficient r of their shapes equals −1. Two melodies are called different if the correlation coefficient of their shapes approaches 0. Thus, the correlation coefficient of shapes is a measure of similarity between the corresponding melodies. For computing the raga % in the song, segments and melodies of equal length will be compared with those from a standard database of the raga concerned [15].
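These definitions translate directly into code. The sketch below (pure Python, an illustration rather than the authors' tool) computes a melody's shape as successive pitch differences and compares two shapes with Pearson's correlation coefficient.

```python
def shape(melody):
    # Shape = differences of successive pitches.
    return [b - a for a, b in zip(melody, melody[1:])]

def pearson(u, v):
    # Pearson correlation coefficient of two equal-length sequences.
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

kafi = [0, 0, 2, 2, 4, 4, 5, 5, 7]      # the Kafi melody from the text
transposed = [p + 3 for p in kafi]      # same melody shifted up three semitones
```

Transposition leaves the shape unchanged, so the two shapes correlate perfectly (r = +1) and the melodies are in translation; negating the melody similarly yields r = −1, i.e., inversion.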

2 Methodology

Musical data are certainly chronological, and the numbers representing pitches in different octaves form the possible response entries s_t corresponding to the argument

258

S. Tewari and S. Chakraborty

Table 1 Numbers representing pitches in three octaves

Western notation:            C    Db   D    Eb   E    F    F#   G    Ab   A    Bb   B
Indian notation:             S    r    R    g    G    M    m    P    d    D    n    N
Numbers for pitch (lower):  −12  −11  −10  −9   −8   −7   −6   −5   −4   −3   −2   −1
Numbers for pitch (middle):  0    1    2    3    4    5    6    7    8    9    10   11
Numbers for pitch (higher):  12   13   14   15   16   17   18   19   20   21   22   23

Abbreviations: The letters S, R, G, M, P, D, and N stand for Sa, Sudh Re, Sudh Ga, Sudh Ma, Pa, Sudh Dha and Sudh Ni, respectively. The letters r, g, m, d, and n represent Komal Re, Komal Ga, Tibra Ma, Komal Dha, and Komal Ni, respectively. Normal type indicates that a note belongs to the middle octave; italics imply the octave just below the middle octave, while bold type indicates the octave just above it. Sa, the tonic in Indian music, is taken at C; the corresponding Western notation is also provided. The terms "Sudh," "Komal," and "Tibra" imply natural, flat, and sharp, respectively.

time t, which in our case (i.e., structure analysis) is just the instant (1, 2, 3, …) at which a musical note is realized. The tonic Sa is taken at the note C (i.e., the scale is C). Also, C of the middle octave is assigned the number 0, representing its pitch as the reference point, and notes of higher and lower pitch are assigned numbers accordingly as 1, 2, 3, … or −1, −2, −3, …, respectively (detailed in Table 1). This is the technique used for structure analysis, and we are motivated by the works of Adiloglu, Noll, and Obermayer [14]. Our database for analysis comprises the sequence of notes of the song based on raga Malkauns. This is given next in Table 2.
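The numbering convention of Table 1 can be captured in a small lookup; the helper below is an illustration of the scheme (middle-octave Sa at C mapped to 0, with octave shifts of ±12), not part of the authors' pipeline.

```python
# Pitch numbers for the middle octave (Sa = C = 0), following Table 1.
BASE = {"S": 0, "r": 1, "R": 2, "g": 3, "G": 4, "M": 5,
        "m": 6, "P": 7, "d": 8, "D": 9, "n": 10, "N": 11}
OCTAVE_OFFSET = {"lower": -12, "middle": 0, "higher": 12}

def pitch_number(note, octave="middle"):
    # Map an Indian-notation note and its octave to the pitch number of Table 1.
    return BASE[note] + OCTAVE_OFFSET[octave]
```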

3 Experimental Results

The experimental results are obtained using the Minitab statistical package, version 16.

3.1 Simple (Single) Exponential Smoothing Plot for C1

See Fig. 1.

Modeling a Raga-Based Song and Evaluating …


Table 2 Note sequence of song on raga Malkauns (refer to Table 1 to identify the note and its octave)

S. no.  Pitch   S. no.  Pitch   S. no.  Pitch   S. no.  Pitch   S. no.  Pitch   S. no.  Pitch
1       5       21      5       41      8       61      −2      81      12      101     8
2       5       22      8       42      5       62      0       82      12      102     10
3       3       23      10      43      8       63      5       83      12      103     12
4       5       24      12      44      10      64      5       84      12      104     15
5       3       25      12      45      12      65      3       85      12      105     12
6       0       26      12      46      8       66      5       86      8       106     15
7       −2      27      8       47      10      67      5       87      5       107     17
8       0       28      10      48      8       68      5       88      8       108     15
9       −4      29      8       49      5       69      5       89      10      109     12
10      −2      30      5       50      5       70      3       90      8       110     8
11      0       31      5       51      5       71      3       91      10      111     5
12      5       32      8       52      5       72      5       92      10      112     8
13      5       33      5       53      5       73      8       93      12      113     10
14      3       34      8       54      3       74      8       94      10      114     3
15      5       35      10      55      5       75      8       95      10      115     0
16      5       36      8       56      3       76      12      96      8       116     5
17      5       37      12      57      0       77      12      97      10      117     5
18      5       38      12      58      −2      78      15      98      12
19      5       39      12      59      0       79      10      99      8
20      3       40      12      60      −4      80      12      100     5

Fig. 1 Single exponential smoothing for C1 (single exponential method; smoothing constant Alpha = 0.917237; accuracy measures: MAPE = 33.1430, MAD = 1.9715, MSD = 5.9554)

Fig. 2 Residual plots for C1: normal probability plot, residuals versus fitted values, histogram of residuals, and residuals versus observation order. Note Zero values of x_t exist; MAPE calculated only for nonzero x_t

3.2 Residual Plots for C1

For the data C1 of length 117, the smoothing constant Alpha as evaluated by the package comes out as 0.917237. Accuracy measures computed by the package are MAPE = 33.1430, MAD = 1.9715, and MSD = 5.9554. Next, we give our interpretations of Figs. 1 and 2 and an explanation of the aforesaid results.

Interpretations from Figs. 1 and 2: The random pattern of the residuals (Fig. 2), together with the closeness of the smoothed data to the observed data (Fig. 1), justifies the simple exponential smoothing. The normal probability plot of the residuals roughly follows a straight line, indicating that the residuals may be taken as normally distributed. The next graph plots the residuals versus the fitted values. The residuals should be scattered randomly about zero, which is so in our case (uneven spreading implies that the error variance is non-constant; any curvilinear pattern would indicate model inaccuracy demanding other terms). A histogram of the residuals shows the distribution of the residuals for all observations. It serves as an exploratory tool to learn about typical values, spread, shape, and unusual values in the data. We find one bar different from the rest, which is indicative of an outlier (influential observation). The residuals-versus-order graph plots the residuals in the order of the corresponding observations. The plot is useful when the order of the observations may influence the results, which can occur when data are collected in a time sequence (as in our case). The residuals in the plot should fluctuate in a random pattern around the center line, as in Fig. 2. One can examine the plot to see if any correlation exists between error terms that are near each other. Correlation among residuals may be signified by:
• An ascending or descending trend in the residuals
• Rapid changes in signs of adjacent residuals.


Mean absolute percentage error (MAPE) measures the accuracy of fitted time series values; it expresses accuracy as a percentage. Mean absolute deviation (MAD) measures the accuracy of fitted time series values; it expresses accuracy in the same units as the data, which helps conceptualize the amount of error. Mean squared deviation (MSD) measures the accuracy of fitted time series values. For all three measures, smaller values generally indicate a better-fitting model. In case we fit other models to the same data, it will be of interest to compare the corresponding MAPE, MAD, and MSD values.
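As an illustration, single exponential smoothing and the three accuracy measures can be sketched as below. This is a minimal version assuming the common initialization fit_1 = x_1; Minitab's initialization (and hence its exact MAPE/MAD/MSD values) may differ, and the short series shown is illustrative, not the song data.

```python
def ses_accuracy(x, alpha):
    """Single exponential smoothing: fit_t forecasts x_t from earlier observations.
    Returns (MAPE over nonzero x_t, MAD, MSD)."""
    fits = [x[0]]  # simple initialization: first fit = first observation
    for t in range(1, len(x)):
        fits.append(alpha * x[t - 1] + (1 - alpha) * fits[-1])
    errors = [xt - ft for xt, ft in zip(x, fits)]
    nonzero = [(xt, e) for xt, e in zip(x, errors) if xt != 0]  # MAPE needs x_t != 0
    mape = 100 * sum(abs(e / xt) for xt, e in nonzero) / len(nonzero)
    mad = sum(abs(e) for e in errors) / len(errors)
    msd = sum(e * e for e in errors) / len(errors)
    return mape, mad, msd

# Illustrative pitch series (not the actual data); alpha as reported in Sect. 3.2.
mape, mad, msd = ses_accuracy([5, 5, 3, 5, 3, 0, -2, 0], alpha=0.917237)
```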

3.3 Evaluating the % of Raga in the Song

Recall that a melody is a succession of musical notes (characterized by a succession of pitch values) that is complete and hence can be taken as a single entity, and shape is the succession of pitch differences of the melody. Thus, if {A, B, C, D} is a melody, then {B − A, C − B, D − C} is its shape. Two melodies are similar if their shapes are correlated (see Sect. 1.3). The significance of an observed correlation coefficient can be tested using a t test. Thus, we test the null hypothesis H_0 that the population correlation coefficient is zero against the alternative hypothesis H_1 that it is nonzero. If r is the value of the sample correlation coefficient and n is the number of pairs of observations (here successive differences of pitch), we calculate the statistic t = r√(n − 2)/√(1 − r²), which follows Student's t distribution with (n − 2) degrees of freedom. If the calculated |t| exceeds the table value of t at the 5% level of significance (say) and (n − 2) degrees of freedom, then the value of r is significant at the 5% level, in which case H_0 is rejected; otherwise it is insignificant, in which case we may accept H_0. Here, it is assumed that the n pairs come from a bivariate normal distribution. The formula for r is covariance(x, y)/{sd(x) ∗ sd(y)}, where sd denotes standard deviation. Covariance(x, y) can be computed easily as Σ(x ∗ y)/n − x̄ ∗ ȳ, and sd(x) = √[Σ(x ∗ x)/n − x̄ ∗ x̄], similarly for sd(y), with x̄ = Σx/n and ȳ = Σy/n. Since the t statistic has (n − 2) degrees of freedom, we must have n > 2. This means melodies of length at least 4 (whose shapes will have at least n = 3 successive pitch differences) can be compared for similarity. In our study, we have compared melodies and segments of lengths 4, 5, and 6 only, and a separate % of similarity is obtained for each length. Then, a weighted mean of these raga %s is taken, the weights being the corresponding lengths (see Sect. 1.3). The algorithm used picks up melodies or segments of a specified length from the song, one at a time, and compares each with every melody or segment of similar length from the raga note sequence in the database [15]. A counter initialized at zero is used. If a single comparison yields similarity, we increase the counter by one, stop any further comparison of this melody or segment, and pick up the next one from the song sequence. However, if the next melody or segment, or any other picked likewise from the song note sequence, turns out to be identical to a previous one in the song that was found similar to that in the raga, then we again increase the counter by one, as we have already considered the number of times a melody or segment is repeated to be an important factor in assessing its significance. In short, all repeated song melodies or segments, if found similar to the raga, will contribute towards the raga %. Omitting details, we report the main finding. The raga % in the song analyzed for melody lengths 4, 5, and 6 turned out to be 91.30, 89.65, and 94.74%, respectively, and the weighted mean is 92.126. [The weighted mean is defined as Σxw/Σw, where x is an individual observation and w its corresponding weight; in our case, weighted mean = (4 ∗ 91.30 + 5 ∗ 89.65 + 6 ∗ 94.74)/(4 + 5 + 6) = 92.126.] As this % is quite high, the song can be called a raga pradhan one, meaning thereby that the underlying raga plays a major (pradhan) part in the song. The motivation for evaluating the raga content in a song is discussed in Sect. 3.4.
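The shape-based similarity test and the weighted mean of this section can be sketched as follows (a minimal illustration with hypothetical melodies; the helper names are ours, not the authors' implementation):

```python
import math

def shape(melody):
    """Shape = the succession of pitch differences of a melody (Sect. 3.3)."""
    return [b - a for a, b in zip(melody, melody[1:])]

def pearson_r(x, y):
    """r = covariance(x, y) / (sd(x) * sd(y)), using the paper's formulas."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum(a * b for a, b in zip(x, y)) / n - mx * my
    sdx = math.sqrt(sum(a * a for a in x) / n - mx * mx)
    sdy = math.sqrt(sum(b * b for b in y) / n - my * my)
    return cov / (sdx * sdy)

def t_statistic(r, n):
    """t = r*sqrt(n - 2)/sqrt(1 - r^2), Student's t with n - 2 degrees of freedom."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# Hypothetical melodies of length 4; their shapes have n = 3 differences each.
s1, s2 = shape([5, 8, 10, 12]), shape([0, 3, 5, 8])
r = pearson_r(s1, s2)          # correlated shapes suggest similar melodies
t = t_statistic(r, n=len(s1))  # compare |t| with the 5% critical value, 1 d.f.

# Weighted mean of the raga percentages, weights = melody lengths 4, 5, 6:
percentages = {4: 91.30, 5: 89.65, 6: 94.74}
weighted_mean = sum(w * p for w, p in percentages.items()) / sum(percentages)
print(round(weighted_mean, 3))  # 92.126
```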

3.4 Discussion

While ragas are very rich in their emotional content, a common man may not have the receiving (understanding) capacity and hence cannot derive their therapeutic value fully. So, a good idea is to render raga-based songs first, in gradually increasing order of raga content, and then render the raga itself. Therefore, evaluating the raga content in a song becomes a research problem. This is the issue addressed in the previous section. However, it is always wise to have the songs graded by a musician also, in which case the musician will be asked to provide only the order of the songs rather than give any subjective value of raga content, and then to test the significance of the rank correlation coefficient between the musician's ranking and the scientific ranking (Sect. 3.3), since, from a therapeutic angle, it is ultimately the order rather than the raga content measure that is of interest. The doctor treating the patient only needs to know this order. We must bear in mind the limitation of the correlation coefficient, which only measures the degree of a linear relationship, and that too for melodies of equal length. The strategy of using raga-based songs and then the ragas has been successfully implemented in a recent study in music-medicine on brain injury patients; see Singh et al. [16]. Assuming that no two songs in the list are ranked equal, let X and Y denote the ranks given by a scientist and a musician, respectively, to a particular song (X and Y can be equal). X and Y both can take values (ranks) 1, 2, …, n for a list of n raga-based songs (all based on the same raga). Spearman's rank correlation coefficient is calculated by the formula ρ = 1 − 6Σd²/{n(n² − 1)}, where d = X − Y. The significance can be tested in the same manner as in the case of r, replacing r by ρ and performing a t test as described in Sect. 3.3.
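Spearman's coefficient and its significance test can be sketched as below (the rankings are hypothetical, and no tied ranks are assumed, as stated in the text):

```python
import math

def spearman_rho(ranks_x, ranks_y):
    """rho = 1 - 6*sum(d^2)/(n*(n^2 - 1)), d = X - Y; assumes no tied ranks."""
    n = len(ranks_x)
    d2 = sum((x - y) ** 2 for x, y in zip(ranks_x, ranks_y))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical rankings of five raga-based songs by a scientist (X) and a musician (Y).
rho = spearman_rho([1, 2, 3, 4, 5], [2, 1, 3, 5, 4])
# Significance: the same t test as for r (Sect. 3.3), with rho in place of r.
t = rho * math.sqrt(5 - 2) / math.sqrt(1 - rho * rho)
print(round(rho, 2))  # 0.8
```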


4 Concluding Remarks

We have successfully modeled the song based on raga Malkauns using simple exponential smoothing with a smoothing factor of 0.917237. We have also evaluated the % of the raga in the song and found that the song contains 92.126% of the raga. However, the technique has its own pros and cons, as discussed in Sect. 3.3, and there are definite aspects on which one can improve. For example, here we have considered only the pitch of the notes and the melodic shapes they represent for testing melodic similarity; we could additionally take into account other features such as similarity in the note durations of melodies. While our research is ongoing, we feel our study to be of great value and of immediate extension to music therapy. This is because the common man does not understand classical music but appreciates light music, and the strength of raga-based songs in popularizing classical music among laymen cannot be dismissed. The moral of the paper is that in order to have the therapeutic value of ragas reach the common mass, it is important to collect several songs based on a particular raga and let the mass get familiar with the raga gradually by listening to the raga-based songs in steps of increasing raga content. Thereafter, the raga itself may be applied in a full-fledged composition to the patients, who, in general, may be from a non-musical background. Perhaps a good strategy would be to group the raga-based songs into two broad categories: raga pradhan and non-raga pradhan. This grading can be done intuitively by a musician. Next, the songs in the non-raga pradhan group should be played first, in some order to be decided by the scientist, provided only that his grading is not significantly different from that of the musician. Then the songs in the raga pradhan group should be played, again in some order to be decided by the scientist in a similar way, in close proximity to the musician's grading.
In case the two gradings are quite different, we would respect the musician's grading, but research will continue by bringing in more features into our analysis till the intuitive and logical gradings get close. Another point of interest is to discover some commonality and diversity in the models that characterize raga-based songs; that is to say, do the models that capture the melodic movement of raga pradhan songs have something in common, and, more importantly, do they differ from the models that capture the melodic movement of non-raga pradhan songs?

Compliance with Ethical Standards This research did not receive any funding. The authors hereby declare that they have no conflict of interest.


References

1. J. Beran, G. Mazzola, Analyzing musical structure and performance—a statistical approach. Statistical Sci. 14(1), 47–79 (1999)
2. M.T. Pearce, G.A. Wiggins, Improved methods for statistical modelling of monophonic music. J. New Music Res. 33(4), 367–385 (2004)
3. G.A. Wiggins, M.T. Pearce, D. Müllensiefen, Computational modeling of music cognition and musical creativity, in The Oxford Handbook of Computer Music, ed. by R.T. Dean (Oxford University Press, Oxford, 2011)
4. S. Chakraborty, G. Mazzola, S. Tewari, M. Patra, Computational Musicology in Hindustani Music (Springer, Berlin, 2014)
5. P. Priyadarshini, S. Chakraborty, Using statistical modeling, rate of change of pitch and inter onset interval to distinguish between restful and restless ragas. Commun. Math. Statistics 5(2), 199–212 (2017)
6. S. Chakraborty, R. Ranganayakulu, S. Chauhan, S.S. Solanki, K. Mahto, A statistical analysis of Raga Ahir Bhairav. J. Music Meaning 8(4) (2009). http://www.musicandmeaning.net/issues/showArticle.php?artID=8.4
7. N.A. Jairazbhoy, The Rags of North Indian Music: Their Structure & Evolution (Popular Prakashan, Mumbai, 1995)
8. http://www.inspirationalstories.com/quotes/t/e-y-harburg/ (2012). Accessed 13 Sept 2017
9. S. Tewari, S. Chakraborty, A statistical analysis of Raga Bhairavi, in Acoustic Waves, ed. by S.K. Srivastava, Kailash, K. Chaturvedi (Shree Publishers and Distributors, New Delhi, 2011), pp. 329–336
10. C.C. Holt, Forecasting seasonals and trends by exponentially weighted moving averages. Int. J. Forecast. 20(1), 5–10 (1957). https://doi.org/10.1016/j.ijforecast.2003.09.015
11. R.G. Brown, Smoothing, Forecasting and Prediction of Discrete Time Series (Prentice-Hall, Englewood Cliffs, 1963)
12. G.E.P. Box, G.M. Jenkins, G.C. Reinsel, Time Series Analysis: Forecasting and Control, 4th edn. (Wiley, London, 2008)
13. S. Chakraborty, K. Krishnapryia, S. Loveleen, S. Chauhan, S.S. Solanki, K. Mahto, Melody revisited: tips from Indian music theory. Int. J. Comput. Cogn. 8(3), 26–32 (2010)
14. K. Adiloglu, T. Noll, K. Obermayer, A paradigmatic approach to extract the melodic structure of a musical piece. J. New Music Res. 35(3), 221–236 (2006)
15. D. Dutta, Sangeet Tattwa, Pratham Khanda (in Bengali), 5th edn. (Brati Prakashani, 2006)
16. S.B. Singh, S. Chakraborty, K.M. Jha, S. Chandra, S. Prakash, S. Tewari, Music and Medicine: Healing Brain Injury Through Ragas (CBH Publications, 2016)

Data Analysis and Network Study of Non-small-cell Lung Cancer Biomarkers

Koel De Mukherjee, Aman Vats, Deepshikha Ghosh and Santhosh Kumar Pillai

Abstract Non-small-cell lung cancers (NSCLCs) are the most common lung cancers and account for more than 80% of all lung cancers. The principal objective of this study is to identify and characterize novel cancer biomarkers through transcriptome screening in non-small-cell lung cancer patients to determine their role in cancer progression. From the microarray data for non-small-cell lung cancer, genes were screened based on their upregulated expression levels. Cytoscape was used to make the gene network of all such selected genes and also to build network modules for the genes with maximum expression levels. The accuracy of the result was further validated by performing a comparative study among the cancer and developmental genes. Three genes, AURKA, TFAP2A, and CREBBP, were found to be involved in NSCLC. Further, the work sheds light on the fact that the TFAP2A gene might play an important role as a novel biomarker for non-small-cell lung cancer. So, potent drug molecules against the target gene (TFAP2A) can be searched and applied for docking in further studies.

Keywords Systems biology · Transcriptome · Cytoscape · TFAP2A

1 Introduction

Cancer is a broadly defined term for a number of multi-factorial and heterogeneous diseases characterized by uncontrolled cellular growth. Traditional medicinal research focuses on the identification of a single component, perhaps a molecule, that

K. De Mukherjee (B) · S. K. Pillai
Department of Biotechnology & Food Technology, Durban University of Technology, 4000 Durban, South Africa
e-mail: [email protected]; [email protected]
A. Vats
EXL Company, 122002 Gurugram, Haryana, India
D. Ghosh
Department of Chemical Engineering, IIT Gandhinagar, 382355 Gujarat, India
© Springer Nature Singapore Pte Ltd. 2020
S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988, https://doi.org/10.1007/978-981-13-8222-2_22



has gone wrong for a particular disease. Cancer research has been no different. It is proposed that, in reality, cancer behaves like a network system inside the human body. This means that there is dysfunctionality in the regulatory network if the cells escape the normal growth control of their multicellular environment [1]. Different factors such as regulatory circuits, cross-talk between pathways, and interactions have been found to be involved between tumour and other cell types. Small-cell lung cancers (SCLCs) [2] and non-small-cell lung cancers (NSCLCs) are the two types of lung cancer that can be classified based on the microscopic appearance of the tumour cells. NSCLCs are the most common lung cancers and account for more than 80% of all lung cancers. The survival rates for stages I through IV decrease significantly with the advancement of the disease: 47% for stage I, 30% for stage II, 10% for stage III, and 1% for stage IV [3]. Effectively, the study aims to identify and characterize novel cancer biomarkers through transcriptome screening in non-small-cell lung cancer patients using high-throughput data analysis techniques, including microarray data analysis. Biomarkers play an increasingly important role in the clinical management of cancer patients, as they describe a normal or abnormal biological state in an organism by analysing biomolecules such as DNA, RNA, proteins and peptides [4]. A biomarker may be used as a measurable indicator to see how well the body responds to a treatment for a disease. Some of the established biomarkers are the oestrogen receptor ER(α) [5], the progesterone receptor, and HER2 [6], whereas a few of the emerging ones include Ki67 [7], cyclin D1 and ER. In addition to their use in cancer medicine, biomarkers are often used throughout the cancer drug discovery and development process.
The current work intends to study and design the gene regulatory networks for the newly identified biomarkers from transcriptome data using Cytoscape to establish their role in cancer progression. Further, the best marker gene was identified and modelled, and some potent drug candidates were screened against it for docking analysis.

2 Materials and Methods

DNA microarray allows researchers to investigate the expression of a large number of genes in a single reaction, in an efficient manner with respect to other traditional methods.

2.1 Databases and Data Retrieval

Various databases such as Gene Expression Omnibus (GEO) from NCBI (https://www.ncbi.nlm.nih.gov/geo/) and ArrayExpress from EMBL-EBI (https://www.ebi.ac.uk/arrayexpress/) were searched for microarray datasets for non-small-cell lung cancer (NSCLC) for the normal and tumour cells of different patients. ArrayExpress


is a public repository of functional genomics data mainly generated from microarray-based assays, including gene expression, comparative genomic hybridization (CGH), chromatin immunoprecipitation (ChIP) experiments, and tiling arrays.

2.2 Data Analysis

The data were searched, and the expression values of normal and cancer cells of different patients were selected and curated for further studies. The processed form of the curated data was fed to the BRB array tool (https://brb.nci.nih.gov/BRBArrayTools/) for collation by the data import wizard and then compiled. The upregulated and downregulated genes were obtained with the help of the scatter plot of the normal versus tumour expression for the various patients. The upregulated genes of the various patients were imported for further analysis. Upregulated genes with the maximum number of common expressions across the sample patients were identified and isolated. All upregulated genes were studied on various parameters by generating a network of the upregulated genes using GORILLA (https://omictools.com/gorilla-tool), followed by selection of the developmental genes amongst the upregulated genes, which were then analysed using Cytoscape (http://www.cytoscape.org/).
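A hypothetical sketch of this screening step is given below; the expression values, gene subset, and log-ratio threshold are illustrative assumptions, not the actual BRB array tool output:

```python
# Sketch (data and threshold hypothetical): call a gene upregulated in a patient
# when its tumour-minus-normal log intensity difference exceeds a threshold, then
# keep the genes common to the largest number of patients.
from collections import Counter

def upregulated(expr_normal, expr_tumour, threshold=1.0):
    """Genes whose log-scale tumour-normal intensity difference exceeds the threshold."""
    return {g for g in expr_normal
            if expr_tumour[g] - expr_normal[g] > threshold}

patients = [  # (normal intensities, tumour intensities), log scale, illustrative
    ({"AURKA": 5.0, "TFAP2A": 4.0, "FAS": 6.0}, {"AURKA": 7.5, "TFAP2A": 6.2, "FAS": 6.1}),
    ({"AURKA": 5.5, "TFAP2A": 4.1, "FAS": 5.8}, {"AURKA": 7.0, "TFAP2A": 5.9, "FAS": 7.2}),
]
counts = Counter(g for normal, tumour in patients for g in upregulated(normal, tumour))
common = [g for g, c in counts.items() if c == len(patients)]  # upregulated in all patients
```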

2.3 Modelling and Interaction Studies

Modelling of all three structures was achieved with the MODELLER 9.20 [8] software, which relies on the principles of homology modelling. Templates for the query sequences were selected based on the highest percentage of similarity and coverage after performing BLAST. From the five models generated for each query sequence, the best model was assessed on the basis of its DOPE profile. The top models were further subjected to validation and verification with computational servers such as the Ramachandran plot [9].

3 Results and Discussion

A systems biology approach was adopted, and networks were built using different bioinformatics tools and techniques in order to correlate the data towards the identification and characterization of novel biomarkers for lung cancer. From the microarray data selected for our study, genes were selected on the basis of maximum expression levels (187 genes), and network modules were constructed in Cytoscape. The data were searched, and the expression values of normal and cancer cells of different patients were selected and downloaded for further studies. The data with its various information, like annotations, description, and intensity, was compiled


Fig. 1 Snapshot of the imported data of normal and cancer patients along with different parameters such as annotations, description, intensity

Fig. 2 Scatter plot of log(intensity) for normal (e.g., GSM398087) versus tumour samples showing upregulated and downregulated genes (red: not differentially expressed; grey: upregulated and downregulated)

and represented in the snapshot (Fig. 1). The scatter plot was made for the normal versus tumour expression for the same patient to obtain the upregulated and downregulated genes (Fig. 2). The upregulated genes of the various patients were imported for further analysis. Upregulated genes with the maximum number of common expressions across the sample patients were identified and isolated. The function, location, and role in lung cancer were analysed for each of these genes, and gene ontology tools like GORILLA were also used to relate the genes. A small screening was performed to select only the developmental genes amongst the upregulated genes in our study. The network modules constructed in Cytoscape were correlated for stage 0 non-small-cell lung cancer, and the common genes (10 genes) were screened (Table 1). The data were then refined by performing a comparative analysis of the genes obtained with their predicted roles in NSCLC. Depending on expression value and developmental gene status, we proceeded to the final gene target with three genes, i.e. AURKA, TFAP2A and CREBBP. Gene networks were built in Cytoscape [10]


Table 1 List of 10 NSCLC-related genes

Sl. No.  Gene name  Developmental gene  Degree  Involvement in metabolic pathway  Score
1        AURKA      1                   1       0                                 2
2        CD24       0                   0       1                                 1
3        CREBBP     1                   1       1                                 3
4        TFAP2A     1                   0       0                                 1
5        CXCR4      0                   0       1                                 1
6        ERBB2      0                   1       1                                 2
7        EZH2       0                   1       0                                 1
8        FAS        0                   0       1                                 1
9        PIP        0                   0       0                                 0
10       KRT8       0                   0       0                                 0

Table 2 DOPE scores of the models generated using MODELLER

Sl. No.  Models generated  DOPE scores
01.      Tfap2a_1.pdb      −22652.71094
02.      Tfap2a_2.pdb      −23068.48047
03.      Tfap2a_3.pdb      −21249.36133
04.      Tfap2a_4.pdb      −22706.00781
05.      Tfap2a_5.pdb      −22882.91211

and GeneMANIA for each of the three genes (AURKA, TFAP2A and CREBBP) separately (Fig. 3a–c). The TFAP2A gene plays critical roles in embryonic development, growth control, and homeostasis by coupling chromatin remodelling to transcription factor recognition. The three-dimensional structure of TFAP2A was constructed for finding potential drug molecules against it in our future studies. The MODELLER software provides five models (Table 2), and the model with the lowest (most negative) DOPE score, −23068.48047 (model 2; Fig. 4), was selected for further analysis [11]. The Ramachandran plot confirms that the model is stereochemically stable [9], as the percentage of residues in the favoured region (90.05%) for the selected modelled structure is quite acceptable.
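The model-selection step can be sketched directly from Table 2 (selecting the minimum, i.e., most negative, DOPE score):

```python
# Selecting the best homology model by lowest (most negative) DOPE score (Table 2).
dope_scores = {
    "Tfap2a_1.pdb": -22652.71094,
    "Tfap2a_2.pdb": -23068.48047,
    "Tfap2a_3.pdb": -21249.36133,
    "Tfap2a_4.pdb": -22706.00781,
    "Tfap2a_5.pdb": -22882.91211,
}
best_model = min(dope_scores, key=dope_scores.get)
print(best_model)  # Tfap2a_2.pdb
```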


Fig. 3 Network of selected three genes using GeneMANIA. a AURKA b TFAP2A and c CREBBP. Different colour codes are pink: physical interaction, violet: co-expression, yellow: predicted, blue: pathway, green: genetic interaction

Fig. 3 (continued)

Fig. 4 Cartoon representation of TFAP2A protein using Chimera. Green and red colours represent the helix and coil regions of the protein


4 Conclusion

Recent studies suggest that novel TFAP2A-mediated posttranslational N-glycosylation activity alters the conformation of TFAP2A-interacting proteins, leading to regulation of gene expression, cell growth and differentiation. Hence, we conclude that the TFAP2A gene might play an important role as a novel biomarker for non-small-cell lung cancer. The protein structure and further illustration of its functional domain may help in designing potent inhibitors in the future.

Acknowledgements We thankfully acknowledge Mr. Amit Gupta, Director, HH Biotechnologies Pvt. Ltd, India, for extending help and suggestions at the crucial time.

References

1. R. Siegel, J. Ma, Z. Zou et al., Cancer statistics. CA Cancer J. Clin. 64, 9–29 (2014)
2. T. Sher, G.K. Dy, A.A. Adjei, Small cell lung cancer. Mayo Clin. Proc. 83, 355–367 (2008)
3. C. Zappa, S.A. Mousa, Non-small cell lung cancer: current treatment and future advances. Transl. Lung Cancer Res. 5(3), 288–300 (2016)
4. N. Goossens, S. Nakagawa, X. Sun, Y. Hoshida, Cancer biomarker discovery and validation. Transl. Cancer Res. 4(3), 256–269 (2015)
5. P. Walter, S. Green, G. Greene, A. Krust, J.M. Bornert, J.M. Jeltsch, A. Staub, E. Jensen, G. Scrace, M. Waterfield, Cloning of the human estrogen receptor cDNA, in Proceedings of the National Academy of Sciences, pp. 7889–7893 (1985)
6. S.W. Luoh, B. Ramsey, A.H. Newell, M. Troxell, Z. Hu, K. Chin, P. Spellman, S. Olson, E. Keenan, HER-2 gene amplification in human breast cancer without concurrent HER-2 overexpression. SpringerPlus 386 (2013)
7. E.C. Inwald, M. Klinkhammer-Schalke, F. Hofstädter, F. Zeman, M. Koller, M. Gerstenhauer, O. Ortmann, Ki-67 is a prognostic parameter in breast cancer patients: results of a large population-based cohort of a cancer registry. Breast Cancer Res. Treat., pp. 539–552 (2013)
8. B. Webb, A. Sali, Comparative protein structure modeling using Modeller, in Current Protocols in Bioinformatics 54 (Wiley, 2016), pp. 5.6.1–5.6.37
9. R.A. Laskowski, M.W. MacArthur, D.S. Moss, J.M. Thornton, PROCHECK: a program to check the stereochemical quality of protein structures. J. Appl. Crystallogr. 26, 283–291 (1993). https://doi.org/10.1107/S0021889892009944
10. B. Sarkar, S.K. Verma, J. Akhtar, S.P. Netam, S.K. Gupta, P.K. Panda, K. Mukherjee, Molecular aspect of silver nanoparticles regulated embryonic development in Zebrafish (Danio rerio) by Oct-4 expression. Chemosphere 206, 560–567 (2018)
11. D. Roy, K. Mukherjee, Homology modeling and docking studies of human chitotriosidase with its natural inhibitors. J. Proteins Proteomics 6(2), 183–196 (2015)

Design of an Energy-Efficient Cooperative MIMO Transmission Scheme Based on Centralized and Distributed Aggregations

Sarah Asheer and Sanjeet Kumar

Abstract Wireless sensor networks (WSNs) operate under strict energy constraints; therefore, minimizing energy becomes a key concern in the design of such networks. A cooperative multiple-input multiple-output (CMIMO) model has been proposed here which utilizes node cooperation in centralized as well as in distributed aggregation schemes. The distributed aggregation scheme has been modified with an optimum number of aggregator nodes, and the centralized aggregation scheme has been modified with an optimum number of nodes for the long-haul link to the base station. The selection criteria of the aggregator nodes and long-haul link nodes for both schemes are based on the residual energy level of the nodes. An energy consumption model has been provided to analyze the effects of the cluster size, the number of aggregator nodes, and the number of nodes forming the long-haul link on the average energy consumption. In each aggregation scheme, most of the sensor nodes periodically go to sleep mode to save energy. The proposed modified schemes significantly lower the average energy consumption as compared to the conventional schemes.

Keywords CMIMO · Data aggregation · Aggregator nodes · Spatial correlation · Centralized aggregation scheme · Distributed aggregation scheme · WSN

1 Introduction

Improving the energy efficiency of a WSN has always been a key concern for researchers, and a significant amount of work has been done in this field [1–4]. The major approaches that can be used to conserve energy in a WSN are radio optimization, energy-efficient routing, duty cycling, data minimization through data aggregation, and data compression. Cooperative communication is one of the radio

S. Asheer (B) · S. Kumar
BIT, Mesra, Ranchi, Jharkhand, India
e-mail: [email protected]
S. Kumar
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2020
S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988, https://doi.org/10.1007/978-981-13-8222-2_23


optimization schemes, in which nodes with a single antenna collaborate to form a virtual multi-antenna system. This can provide a considerable performance gain in terms of energy as well as throughput [1–3] in a WSN. Due to the small physical size of a node, it is impractical to incorporate more than one antenna in a single node. In cooperative communication, a group of nodes can collectively send/receive data to/from another such group to reap the benefits of cooperative communication as well as of a MIMO system. Unlike traditional MIMO systems, it has the flexibility to select its sensing nodes. Energy efficiency in a WSN can be realized by working at the node level as well as at the data level. At the node level, several deployment schemes, duty cycling protocols [2], and power allocation [1] methods have been proposed and analyzed. On the other hand, at the data level, a considerable amount of energy can be saved by generating only useful data, using appropriate data aggregation and compression techniques. Due to the close proximity between the nodes in a WSN, there exists a large spatial correlation among the sensed data. This may lead to data redundancy, which is one of the major causes of increased energy consumption. Several data aggregation [5, 6] and data compression [7] techniques have been proposed, which help in lowering the energy consumption by cutting down the number of data bits transmitted to the base station (BS). An efficient data transmission scheme is also necessary and crucial for the performance of the network. The authors in [8] have jointly considered CMIMO and data aggregation to minimize the energy requirement in transmission. There, the authors have considered two aggregation schemes: the centralized aggregation scheme (CAS) and the distributed aggregation scheme (DAS). The energy analysis of the above two schemes has been done, and based on this analysis an optimal cluster size is formulated.
However, the number of aggregator nodes was not taken into account: in the scenarios considered there, each cluster node independently aggregates the data, and all the nodes participate in the long-haul communication, which significantly lowers the energy savings per node. In fact, optimal selection of the number of aggregator nodes has a large impact on the overall energy savings. In this paper, we show how data aggregation with an optimum number of aggregator nodes in a network with a cooperative MIMO channel leads to energy savings. We consider a linear multi-hop network that exploits the benefits of both CMIMO and data aggregation. The amount of data generated after aggregation depends upon the correlation level of the data, which in turn is due to node proximity. The results further show that selecting an optimum number of aggregator nodes lowers the energy consumption per node, and the effect of a varying number of aggregator nodes is analyzed. Further, by implementing a duty cycle, some of the sensor nodes are periodically put into the "sleep" state; these nodes wait for their turn to participate in transmission and then enter the "active" state. The paper is structured as follows. The scenario overview is discussed in Sect. 2. Section 3 presents the proposed energy model for the four scenarios, i.e., Modified DAS (M-DAS), Modified CAS (M-CAS), CMIMO and SISO. In Sect. 4, the results are presented and discussed: the energy consumption of the four scenarios is compared, the effect of varying the cluster size and the number of aggregator nodes is analyzed, and comparison graphs between DAS and M-DAS and between CAS and M-CAS are given. Finally, the paper is concluded in Sect. 5.

Design of an Energy-Efficient Cooperative MIMO …

Fig. 1 CMIMO scenario

2 Scenario Overview of CMIMO

We have considered a linear multi-hop cooperative MIMO WSN. The linear network consists of a set of sensor nodes grouped into clusters; Fig. 1 gives a detailed overview of the network. Each sensor node has a single omnidirectional antenna, and the nodes are assumed to be static. The clusters are of uniform size; i.e., each cluster has "n" sensing nodes. All nodes are assumed to have a uniform circular communication range of radius r (in meters), and the spacing between the nodes of a cluster is denoted by "d". Among the "n" sensor nodes of a cluster, some nodes perform the high-end operation of data aggregation. These nodes are called the Data Aggregator Nodes (DANs), and the rest are known as the Data Collector Nodes (DCNs). The nominated DANs of each cluster perform data aggregation.


In the intra-communication phase, the nodes within a cluster exchange their data and the nominated DANs perform data aggregation. In the long-haul communication phase, the aggregated data is sent to the BS by an optimally selected number of nodes. After each phase, the residual energy of every node is calculated, and this forms the basis for selecting the nodes that form the long-haul link. This ensures connectivity between the nodes and the BS.

3 Energy Model of the Proposed Aggregation Schemes

Here, we present the energy model of the proposed schemes.

3.1 Modified-Distributed Aggregation Scheme (M-DAS)

In the broadcast phase of M-DAS, every node exchanges its data with its cluster members, so that each node holds a copy of the data sensed by the rest of the cluster. After the broadcast phase, only the nominated DANs, $n_A$ in number, aggregate the received data and transmit it to the BS. The number of aggregator nodes $n_A$ is always less than n. Note that M-DAS is distinct from the DAS considered in [8]: in DAS, after the broadcast phase each and every node of the cluster performs aggregation independently and forms a long-haul link with the BS. The total energy $E_{\mathrm{sum}}$ is the energy spent in the two phases, viz. the intra-communication phase and the long-haul communication phase:

$$E_{\mathrm{sum}}^{\text{M-DAS}} = E_{\mathrm{intra}}^{\text{M-DAS}} + E_{\mathrm{lh}}^{\text{M-DAS}} \quad (1)$$

where $E_{\mathrm{intra}}^{\text{M-DAS}}$ and $E_{\mathrm{lh}}^{\text{M-DAS}}$ are the energy consumptions of the intra-communication phase and the long-haul communication phase, respectively. The two communication phases and their associated energies are discussed in detail below.

A. Energy for Intra-cluster Communication

In this phase, each DCN broadcasts its sensed data to its one-hop neighbors, i.e., within its cluster. Upon receiving data from the DCNs, the nominated DANs of that particular cluster aggregate the received data. The intra-communication phase thus has two sub-phases, the broadcast phase and the aggregation phase:

$$E_{\mathrm{intra}}^{\text{M-DAS}} = E_{\mathrm{bro}}^{\text{M-DAS}} + E_{\mathrm{agg}}^{\text{M-DAS}} \quad (2)$$

where $E_{\mathrm{bro}}^{\text{M-DAS}}$ is given by

$$E_{\mathrm{bro}}^{\text{M-DAS}} = n P_{\mathrm{SISO}}^{d} \frac{L}{R_{\mathrm{intra}}} + n\left[b(n)P_T + (n-1)P_R\right] \frac{L}{R_{\mathrm{intra}}} \quad (3)$$

The broadcast phase has two energy components, viz. the transmission power of the power amplifier, $P_{\mathrm{SISO}}^{d}$, and the power consumed by the circuit blocks for transmission and reception, $P_T$ and $P_R$, respectively. The number of data bits sensed by each node is denoted by L, with transmission rate $R_{\mathrm{intra}} = b \cdot B$, where b is the constellation size expressed in bits per symbol and B is the modulation bandwidth. A binary function b(n) ensures that a node broadcasts its data only when cluster formation has taken place; it is defined as [8]:

$$b(n) = \begin{cases} 0, & n = 1 \\ 1, & n \ge 2 \end{cases} \quad (4)$$

The power consumption of the power amplifier is derived from [9] and the link budget relationship from [10]:

$$P_{\mathrm{SISO}}^{d} = (1+\alpha)\, E_{\mathrm{intra}}\, R_{\mathrm{intra}}\, \frac{(4\pi)^2 d^2 M_l N_f}{G_t G_r \lambda^2} \quad (5)$$

In the above equation, $\alpha = \xi/\eta - 1$, where $\eta$ is the drain efficiency of the RF power amplifier [9]. The peak-to-average ratio (PAR), denoted by $\xi$, depends on the type of modulation and its constellation size b [9]. The broadcast phase uses multilevel quadrature amplitude modulation (MQAM); thus, we have [9]:

$$\xi = 3\, \frac{2^{b/2} - 1}{2^{b/2} + 1} \quad (6)$$

For calculating $E_{\mathrm{intra}}$, the average bit error rate (BER) with MQAM having a constellation size of 2 is given by [10]:

$$\varepsilon_{\mathrm{intra}} \approx Q\!\left(\sqrt{2\gamma_{\mathrm{intra}}}\right) \quad (7)$$

where Q(x) is the Q-function and $\gamma_{\mathrm{intra}}$ is the instantaneous SNR, given by [11]:

$$\gamma_{\mathrm{intra}} = \frac{\|H\|_F^2\, E_{\mathrm{intra}}}{M_t N_0} \quad (8)$$


According to the Chernoff bound [12], an upper bound for the required energy per bit can be derived:

$$E_{\mathrm{intra}} \le \frac{M_t N_0}{\varepsilon_{\mathrm{intra}}^{1/M_t}} \quad (9)$$
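Equations (6)–(9) together determine the per-bit transmit-energy budget of the broadcast phase. The following sketch is illustrative only: the noise PSD n0 = 1.0 is a placeholder, not a value from the paper, while b = 2, η = 0.35 and the 10⁻⁴ target BER are Table 1 values. It also shows the diversity effect in Eq. (9), where the bound falls as the antenna count grows.

```python
def mqam_par(b):
    """Peak-to-average ratio of MQAM, Eq. (6): 3 * (2**(b/2) - 1) / (2**(b/2) + 1)."""
    root_m = 2 ** (b / 2)
    return 3 * (root_m - 1) / (root_m + 1)

def amplifier_overhead(b, eta=0.35):
    """Amplifier overhead alpha = xi/eta - 1 (eta = drain efficiency, Table 1)."""
    return mqam_par(b) / eta - 1

def intra_energy_bound(ber, m_t, n0):
    """Chernoff upper bound on the per-bit energy, Eq. (9): M_t * N0 / ber**(1/M_t)."""
    return m_t * n0 / ber ** (1.0 / m_t)

# b = 2 and the 1e-4 target BER are Table 1 values; n0 = 1.0 is a placeholder
# noise PSD used only to show the scaling with the antenna count M_t.
xi = mqam_par(2)          # PAR of 4-QAM (constant envelope)
alpha = amplifier_overhead(2)
bound = [intra_energy_bound(1e-4, m, 1.0) for m in (1, 2, 4)]
```

The monotonically decreasing `bound` list illustrates why cooperating antennas reduce the required per-bit energy for a fixed BER.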

In the broadcast phase, where all the nodes broadcast their data within the cluster, the number of transmitting antennas $M_t$ is equal to the cluster size. The terms $G_t$ and $G_r$ in Eq. (5) are the antenna gains for transmission and reception, respectively, $\lambda$ is the carrier wavelength and $M_l$ is the link margin. The receiver noise figure is $N_f = N_r/N_0$, where $N_0$ is the thermal noise power spectral density (PSD) at room temperature and $N_r$ is the PSD of the total effective noise at the receiver input [8]. The second sub-phase of the intra-cluster communication is the aggregation phase, whose energy requirement is:

$$E_{\mathrm{agg}}^{\text{M-DAS}} = n\, n_A\, L\, E_{\mathrm{agg}} \quad (10)$$

where $n_A$ denotes the number of DANs in a cluster and $E_{\mathrm{agg}}$ is the energy required per bit for data aggregation.

B. Energy for Long-Haul Communication

In this phase, the selected nodes of each cluster send the data to the BS. The energy requirement of the long-haul transmission is given by:

$$E_{\mathrm{lh}}^{\text{M-DAS}} = P_{\mathrm{MIMO}}^{D_{\mathrm{lh}}} \frac{I_n}{R_{\mathrm{lh}}} + (n_A P_T + P_R) \frac{I_n}{R_{\mathrm{lh}}} \quad (11)$$

where

$$P_{\mathrm{MIMO}}^{D_{\mathrm{lh}}} = (1+\alpha)\, E_{\mathrm{lh}}^{b}\, R_{\mathrm{lh}}\, \frac{(4\pi)^2 D_{\mathrm{lh}}^2 M_l N_f}{G_t G_r \lambda^2} \quad (12)$$

Here $P_{\mathrm{MIMO}}^{D_{\mathrm{lh}}}$ denotes the power consumed by the power amplifier for transmission. The transmission data rate of the MIMO channel using orthogonal space-time block coding (OSTBC) is $R_{\mathrm{lh}} = R_S\, b B$, where $R_S = 1/2$ is the spatial rate of the OSTBC scheme. The net data generated after aggregation among the n nodes of a cluster is $I_n$. For calculating $I_n$, we have used the rainfall data set model of [13]:

$$I_i = I_{i-1} + \left(1 - \frac{1}{(d_i/c) + 1}\right) L, \quad i = 2, 3, \ldots, n \quad (13)$$

In Eq. (13), di is the internode spacing within a cluster and “c”, a constant, signifies the level of correlation among the data sensed by the closely placed nodes.
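Equation (13) is easy to evaluate numerically. The sketch below assumes uniform internode spacing d_i = d and plugs in the Table 1 values; it is an illustration of the model, not code from the paper, and shows how strongly spatial correlation compresses the cluster payload.

```python
def aggregated_bits(n, L=2000, d=20.0, c=50.0):
    """Net data volume I_n after aggregation over n correlated nodes, Eq. (13):
    I_1 = L and I_i = I_{i-1} + (1 - 1/((d_i/c) + 1)) * L for i >= 2,
    here with uniform internode spacing d_i = d."""
    i_n = float(L)                                  # first node contributes its full reading
    for _ in range(2, n + 1):
        i_n += (1.0 - 1.0 / ((d / c) + 1.0)) * L    # marginal bits of each extra node
    return i_n

# A cluster of n = 8 nodes yields about 6000 bits instead of 8 * 2000 = 16000.
compressed = aggregated_bits(8)
```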


For a desired BER, the average energy per bit for the long-haul transmission is $E_{\mathrm{lh}}^{b}$. The average BER of a MIMO channel with MQAM having a constellation size of 2 is given by [10]:

$$\varepsilon_{\mathrm{lh}} \approx \mu_h\!\left[Q\!\left(\sqrt{2\gamma_{\mathrm{lh}}}\right)\right] \quad (14)$$

where $\mu_h(x)$ is the expectation of x over the channel vector h. The instantaneous SNR of the CMIMO system is given by [11]:

$$\gamma_{\mathrm{lh}} = \frac{\|H\|_F^2\, E_{\mathrm{lh}}}{n_A N_0} \quad (15)$$

According to the Chernoff upper bound [12], $E_{\mathrm{lh}}$ can be calculated as:

$$E_{\mathrm{lh}} \le \frac{n_A N_0}{\varepsilon_{\mathrm{lh}}^{1/n_A}} \quad (16)$$

The distance $D_{\mathrm{lh}}$ is the long-haul transmission distance between the nodes and the BS. Since the BS is located high (500 m) above the ground, the distance between the DANs of all the clusters and the BS is approximated to be the same. The average energy consumption per node of a cluster is:

$$E_{\mathrm{avg}}^{\text{M-DAS}} = \frac{E_{\mathrm{sum}}^{\text{M-DAS}}}{n} \quad (17)$$

The above equation can be calculated using Eqs. (1), (10) and (11) as:

$$E_{\mathrm{avg}}^{\text{M-DAS}} = \frac{L}{R_{\mathrm{intra}}}\left[P_{\mathrm{SISO}}^{d} + b(n)P_T + (n-1)P_R\right] + n_A L E_{\mathrm{agg}} + \frac{I_n}{n R_{\mathrm{lh}}}\left[P_{\mathrm{MIMO}}^{D_{\mathrm{lh}}} + n_A P_T + P_R\right] \quad (18)$$

The communication among the nodes in DAS [8] is similar to that in M-DAS, except that in DAS all the nodes of a cluster aggregate the data after exchanging it in the broadcast phase. Thus, in DAS the number of DANs $n_A$ equals n, and all the nodes participate in the long-haul transmission.


$E_{\mathrm{avg}}^{\text{DAS}}$ is given by [8]:

$$E_{\mathrm{avg}}^{\text{DAS}} = \frac{L}{R_{\mathrm{intra}}}\left[P_{\mathrm{SISO}}^{d} + b(n)P_T + (n-1)P_R\right] + n L E_{\mathrm{agg}} + \frac{I_n}{n R_{\mathrm{lh}}}\left[P_{\mathrm{MIMO}}^{D_{\mathrm{lh}}} + n P_T + P_R\right] \quad (19)$$

The general equation for calculating the residual energy of node i after each round, which forms the basis for selecting the aggregator nodes and the nodes forming the long-haul link with the BS, is:

$$E_{\mathrm{res},i} = E_{\mathrm{avg},i}(r-1) - E_{\mathrm{avg},i}(r) \quad (20)$$

where r denotes the current round and r − 1 the preceding round.
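The residual-energy bookkeeping of Eq. (20) decides which nodes act as DANs and long-haul nodes in the next round. A minimal sketch under an assumed toy cost model (the per-round costs below are illustrative placeholders, not the paper's energy values):

```python
def run_round(residual, n_a, intra_cost, agg_cost):
    """Select the n_a nodes with the highest residual energy as this round's
    aggregator/long-haul nodes (ties broken by node index), then debit every
    node the intra-cluster cost and the selected nodes an extra cost."""
    order = sorted(range(len(residual)), key=lambda i: (-residual[i], i))
    chosen = sorted(order[:n_a])
    for i in range(len(residual)):
        residual[i] -= intra_cost
    for i in chosen:
        residual[i] -= agg_cost
    return chosen

# The heavier role rotates away from recently used nodes automatically.
energy = [10.0] * 6
first = run_round(energy, 3, intra_cost=1.0, agg_cost=2.0)
second = run_round(energy, 3, intra_cost=1.0, agg_cost=2.0)
```

After the first round the initially chosen nodes have lower residual energy, so the second round picks the other half of the cluster, balancing energy consumption over time.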

3.2 Modified-Centralized Aggregation Scheme (M-CAS)

The intra-cluster communication phase of M-CAS has three sub-phases: the gathering phase, the aggregation phase and the broadcasting phase. In the gathering phase, the data sensed by all the cluster members is collected and aggregated by the central cluster node. Thereafter, in the broadcast phase, the aggregated data is broadcast within the cluster. Since the low-power sensor nodes are deployed redundantly, out of the n cluster nodes only an optimum number of nodes, denoted by $n_{\mathrm{opt}}$, receive the data; the radios of the remaining $n - n_{\mathrm{opt}}$ nodes are switched off to avoid unnecessary energy expenditure. These $n_{\mathrm{opt}}$ nodes form the links with the BS, and their number is chosen empirically so as to minimize the energy consumed. The energy equations of the gathering and aggregation phases are the same as in [8]:

$$E_{\mathrm{ga}}^{\text{M-CAS}} = (n-1)\, P_{\mathrm{SISO}}^{d} \frac{L}{R_{\mathrm{intra}}} + (n-1)(P_T + P_R) \frac{L}{R_{\mathrm{intra}}} \quad (21)$$

$$E_{\mathrm{agg}}^{\text{M-CAS}} = n L E_{\mathrm{agg}} \quad (22)$$

The energy equation of the broadcast phase is:

$$E_{\mathrm{bro}}^{\text{M-CAS}} = P_{\mathrm{SISO}}^{d} \frac{I_n}{R_{\mathrm{intra}}} + \left[b(n)P_T + n_{\mathrm{opt}} P_R\right] \frac{I_n}{R_{\mathrm{intra}}} \quad (23)$$

The energy consumed in the long-haul phase is the same as in Eq. (11), with $n_A$ replaced by $n_{\mathrm{opt}}$.


By combining Eqs. (11), (21), (22) and (23), we get the average energy consumption per node in M-CAS as:

$$E_{\mathrm{avg}}^{\text{M-CAS}} = \frac{1}{n R_{\mathrm{intra}}}\Big\{\left[(n-1)L + I_n\right] P_{\mathrm{SISO}}^{d} + \left[(n-1)L + b(n) I_n\right] P_T + \left[(n-1)L + n_{\mathrm{opt}} I_n\right] P_R\Big\} + L E_{\mathrm{agg}} + \frac{I_n}{n R_{\mathrm{lh}}}\left[P_{\mathrm{MIMO}}^{D_{\mathrm{lh}}} + n_{\mathrm{opt}} P_T + P_R\right] \quad (24)$$

In CAS [8], all the nodes of a cluster communicate with the BS in the long-haul communication phase, whereas in M-CAS only the optimum number of nodes $n_{\mathrm{opt}}$ form the links to the BS. As the data collected by sensor nodes is highly redundant, comparatively fewer nodes are required to communicate with the BS, which reduces the energy consumption through a reduced number of long-haul links ($n_{\mathrm{lh}}$). The average energy consumption of CAS is given by [8]:

$$E_{\mathrm{avg}}^{\text{CAS}} = \frac{1}{n R_{\mathrm{intra}}}\Big\{\left[(n-1)L + I_n\right] P_{\mathrm{SISO}}^{d} + \left[(n-1)L + b(n) I_n\right] P_T + (n-1)\left(L + I_n\right) P_R\Big\} + L E_{\mathrm{agg}} + \frac{I_n}{n R_{\mathrm{lh}}}\left[P_{\mathrm{MIMO}}^{D_{\mathrm{lh}}} + n P_T + P_R\right] \quad (25)$$

3.3 CMIMO Without Data Aggregation (CMIMO)

The CMIMO system works in a similar way as M-DAS but without data aggregation. Its intra-cluster communication phase has only one sub-phase, the broadcast phase; the aggregation sub-phase is eliminated, so the information content is the uncompressed data gathered by all the nodes of a cluster, $L_n = nL$. The average energy consumption for CMIMO is:

$$E_{\mathrm{avg}}^{\text{CMIMO}} = \frac{L}{R_{\mathrm{intra}}}\left[P_{\mathrm{SISO}}^{d} + b(n)P_T + (n-1)P_R\right] + \frac{L_n}{n R_{\mathrm{lh}}}\left[P_{\mathrm{MIMO}}^{D_{\mathrm{lh}}} + n_{\mathrm{lh}} P_T + P_R\right] \quad (26)$$

Here, $n_{\mathrm{lh}}$ is the number of nodes forming the long-haul link with the BS.


3.4 Single Input Single Output (SISO)

In the SISO scheme, the nodes directly transmit their data to the BS without forming clusters or exchanging data with other nodes. The resulting per-node energy is:

$$E_{\mathrm{avg}}^{\text{SISO}} = P_{\mathrm{SISO}}^{D_{\mathrm{lh}}} \frac{L}{R_{\mathrm{intra}}} + (P_T + P_R) \frac{L}{R_{\mathrm{intra}}} \quad (27)$$

4 Results and Discussion

The results discussed here are based on the mathematical models of the data aggregation schemes derived in Eqs. (18), (19), (24), (25), (26) and (27); the simulation parameters are listed in Table 1. The energy performance of the proposed scenarios is compared, and the effect of varying the number of aggregator nodes on energy is analyzed. In Fig. 2, we compare the average energy consumption per node of M-CAS, M-DAS, CMIMO and SISO while varying the cluster size.


Table 1 Simulation parameters

Symbol | Parameter | Value
f_c | Carrier frequency | 2.4 GHz
n | Number of nodes in a cluster | 2, 3, 4, …, 12
L | Number of data bits sensed by each node | 2000 bits
E_agg | Energy required per bit for data aggregation | 5 nJ/bit/signal
b | Constellation size | 2
B | Bandwidth | 10 kHz
P_b | Required BER | 10^−4
P_T | Circuit power requirement for transmission | 150 mW
P_R | Circuit power requirement for reception | 100 mW
η | Drain efficiency of the RF power amplifier | 0.35
N_f | Receiver noise figure | 10 dB
G_t G_r | Transmitter and receiver gain product | 5 dBi
D_lh | Distance between the cluster and the base station | 500 m
d | Spacing between two nodes | 20 m
c | Degree of spatial correlation | 50

Fig. 2 Average energy consumption versus cluster size for the four scenarios M-CAS, M-DAS, CMIMO and SISO

In M-CAS and M-DAS, we have considered an optimal number of aggregator nodes, viz. 50% of the cluster size. These aggregator nodes aggregate the data within a cluster and communicate with the BS. It can be seen that M-CAS outperforms all the other scenarios, followed by M-DAS and CMIMO. This is because the intra-communication phase of M-CAS consumes less energy than that of M-DAS: in M-CAS, aggregation takes place only at a single central node, whereas in M-DAS the number of aggregator nodes is always greater than one.
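A brute-force sweep over the number of aggregator nodes reproduces the shape of this trade-off. The model below is a deliberately simplified stand-in for the full Eq.-(18) model: haul_gain, n0 and the assumed aggregated payload are illustrative placeholders, while L, E_agg, P_T, P_R and the target BER come from Table 1.

```python
def total_energy(n_a, n=8, l_bits=2000, e_agg=5e-9, ber=1e-4, n0=4e-21,
                 p_t=150e-3, p_r=100e-3, r_lh=1e4, haul_gain=3.5e14):
    """Simplified total-energy model for a cluster of n nodes with n_a aggregators:
    aggregation energy + long-haul amplifier energy (Chernoff bound of Eq. (16)
    scaled by an assumed path-loss factor haul_gain) + long-haul circuit energy.
    haul_gain, n0 and the halved payload are illustrative placeholders."""
    bits_out = n * l_bits / 2                       # assumed aggregated payload
    e_bit = n_a * n0 / ber ** (1.0 / n_a)           # per-bit long-haul energy, Eq. (16)
    return (n * n_a * l_bits * e_agg                # aggregation energy, Eq. (10)
            + e_bit * haul_gain * bits_out          # long-haul amplifier energy
            + (n_a * p_t + p_r) * bits_out / r_lh)  # long-haul circuit energy

best = min(range(1, 9), key=total_energy)
```

With these constants the sweep bottoms out at an interior value of n_A: too few aggregators forfeit the MIMO diversity gain in the per-bit energy, while too many pay extra aggregation and circuit energy.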


The results show that the energy requirement is highest for small cluster sizes and then decreases as n grows; beyond a certain threshold, however, the energy consumption increases again with the cluster size. Choosing an optimal cluster size is therefore crucial for CMIMO and data aggregation to work efficiently. The optimal cluster size for M-CAS, M-DAS and CMIMO, where the energy consumption is least, is n = 8. The average energy requirement of CMIMO is greater than that of M-CAS and M-DAS because no aggregation takes place in CMIMO and a large amount of redundant data is transmitted to the BS. In Fig. 3, we analyze the effect of varying the number of aggregator nodes on the total energy consumption. Since both M-CAS and M-DAS achieve minimum energy consumption at cluster size n = 8, we fix n = 8 and vary the number of aggregator nodes $n_A$. For both M-CAS and M-DAS, the total energy consumption is lowest when $n_A$ = 4, i.e., when $n_A$ is 50% of the cluster size. This is also the threshold value for $n_A$: increasing the number of aggregator nodes beyond it increases the total energy consumption. Choosing an optimal number of aggregator nodes is therefore crucial for the energy efficiency of the network. In Figs. 4 and 5, we compare the performance of M-DAS and M-CAS with DAS and CAS, respectively. It can be seen that for smaller cluster sizes (n ≤ 6) CAS and DAS outperform M-CAS and M-DAS, respectively, while for larger cluster sizes (n > 6) M-CAS and M-DAS are better.

Fig. 3 Total energy consumption versus the number of aggregator nodes for cluster size, n = 8


Fig. 4 Average energy consumption per node versus cluster size for DAS and M-DAS

Fig. 5 Average energy consumption per node versus cluster size for CAS and M-CAS

5 Conclusion

In this paper, we have jointly considered data aggregation and CMIMO to reduce the energy consumption of a clustered WSN. With certain modifications to the existing CAS and DAS, a significant amount of energy savings has been achieved: the proposed M-DAS and M-CAS are superior to DAS and CAS, respectively, in terms of average energy consumption. These modifications consist of careful selection of the aggregator nodes as well as the long-haul link nodes. A comparison has also been drawn among the M-DAS, M-CAS, CMIMO and SISO systems to highlight the performance improvement. Aggregation in M-DAS and M-CAS reduces the average energy consumption by eliminating redundant data, in contrast to the CMIMO and SISO systems, where un-aggregated data is sent to the BS. The results show that optimal selection of the cluster size is crucial to the performance of CMIMO systems both with and without aggregation. In our approach, an optimal cluster size has been obtained first to determine the optimal


number of aggregator nodes and long-haul link nodes in a cluster, yielding a more energy-efficient system.

References

1. F. Iannello, O. Simeone, U. Spagnolini, Medium access control protocols for wireless sensor networks with energy harvesting. IEEE Trans. Commun. 60(5), 1381–1389 (2012)
2. I. Demirkol, C. Ersoy, F. Alagoz, MAC protocols for wireless sensor networks: a survey. IEEE Commun. Mag. 44(4), 115–121 (2006)
3. A. Thakkar, K. Kotecha, Cluster head election for energy and delay constraint applications of wireless sensor network. IEEE Sens. J. 14(8), 2658–2664 (2014)
4. M. Khabiri, A. Ghaffari, Energy-aware clustering-based routing in wireless sensor networks using cuckoo optimization algorithm. Wirel. Pers. Commun. 98(3), 2473 (2018)
5. S.S. Pradhan, J. Kusuma, K. Ramchandran, Distributed compression in a dense microsensor network. IEEE Signal Process. Mag. 19(2), 51–60 (2002)
6. M. Zhao, J. Li, Y. Yang, A framework of joint mobile energy replenishment and data gathering in wireless rechargeable sensor networks. IEEE Trans. Mobile Comput. 13(12), 2689–2705 (2014)
7. J. Chung, J. Kim, D. Han, Multi-hop hybrid virtual MIMO scheme for wireless sensor networks. IEEE Trans. Veh. Technol. 61(9), 4069–4078 (2012)
8. Q. Gao, Y. Zuo, J. Zhang, X.-H. Peng, Improving energy efficiency in a wireless sensor network by combining cooperative MIMO with data aggregation. IEEE Trans. Veh. Technol. 59(8), 3956–3965 (2010)
9. S. Cui, A.J. Goldsmith, A. Bahai, Modulation optimization under energy constraints, in Proceedings of ICC'03, Alaska, USA (2003)
10. J.G. Proakis, Digital Communications, 4th edn. (McGraw-Hill, New York, 2000)
11. A. Paulraj, R. Nabar, D. Gore, Introduction to Space-Time Wireless Communications (Cambridge University Press, Cambridge, UK, 2003)
12. H. Chernoff, A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations. Ann. Math. Stat. 23(4), 493–507 (1952)
13. S. Pattem, B. Krishnamachari, R. Govindan, The impact of spatial correlation on routing with compression in wireless sensor networks. ACM Trans. Sens. Netw. 4(4), 1–33 (2008)

Recognize Vital Features for Classification of Neurodegenerative Diseases

A. Athisakthi and M. Pushpa Rani

Abstract Neurodegenerative diseases, including Parkinson disease, Huntington disease, ataxia, myoclonus and amyotrophic lateral sclerosis, are medically, genetically and pathologically varied and are characterized by their symptoms and degree of motor impairment. Early diagnosis, efficient treatment planning and monitoring of PD and other NDDs are achieved by characterizing gait dynamics. In this study, the data samples are of 15 subjects with PD, 20 subjects with HD, 13 subjects with ALS and 16 healthy subjects. The database is composed of one-minute recordings of force sensitive resistor (FSR) signals; both left and right stride-to-stride footfall contacts are obtained from the FSR signals. For feature extraction, two-level wavelet decomposition is performed using the discrete wavelet transform (DWT). The acquired features were evaluated using the means of 10 trials of fivefold cross-validation (FFCV) in LDA with a random forest classifier (RFC). The results show that the proposed method detects the NDD pathologies, and that for NDD detection the random forest classifier gives better outcomes than the SVM and QB classifiers. The experiments show that the accuracy, sensitivity and specificity of the proposed system, 98.24, 97.92 and 96.78% respectively, are higher than those of previous methods.

Keywords Neurodegenerative diseases (NDD) · Amyotrophic lateral sclerosis (ALS) · Parkinson disease (PD) · Support vector machine (SVM) · Huntington disease (HD) · Random forest classifier (RFC) · Discrete wavelet transform (DWT) · Force sensitive sensor (FSR)

A. Athisakthi (B) · M. Pushpa Rani Department of Computer Science, Mother Teresa Women’s University, Kodaikanal, India e-mail: [email protected] M. Pushpa Rani e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988, https://doi.org/10.1007/978-981-13-8222-2_24


1 Introduction

Neurodegenerative diseases debilitate neurons, resulting in accelerating degeneration that causes problems with motor function. Many people today are affected by some form of neurodegenerative disease, such as spinocerebellar ataxia (SCA), Parkinson's disease (PD), prion disease, amyotrophic lateral sclerosis (ALS), Huntington disease (HD) and spinal muscular atrophy (SMA). Degenerative diseases can be dangerous and life threatening, depending upon the loss of neurons in the brain. Poor cognitive abilities, muscle weakness, memory loss and decreasing alertness are some symptoms of degenerative diseases. In the early phases of a disease, it is difficult to distinguish the atypical variants. Early detection and stratification of neurodegenerative disease is important for the general practitioner, as it reduces the time and cost of the diagnostic process. Researchers have used a large number of recognition techniques to recognize people, and gait recognition is a particularly attractive one. This technology is not limited to person recognition: it can be used clinically to characterize motor disability from gait samples. The gait signal can be a good parameter for classifying the NDDs caused by the death of neurons in the human brain [1]. In this work, a hybrid method based on a triple-layered feature extraction technique is proposed to detect neurodegenerative diseases from control and morbid signals of Parkinson disease (PD), Huntington disease (HD) and amyotrophic lateral sclerosis (ALS). Parkinson disease (PD) is a brain condition of the human nervous system which leads to tremor and difficulty with walking, movement and coordination. Human muscle movements are controlled by nerve cells, which use a chemical called dopamine.
When the dopamine content of the nerve cells is destroyed, Parkinson disease occurs: without dopamine, the brain's nerve cells cannot properly transmit commands, and muscle function is slowly lost. The injury gets worse with time, and exactly why the dopamine content wastes away from the nerve cells is unknown. PD is one of the most common nervous system disorders of old age and affects both genders. Rigid or stiff muscles, problems with balance and walking, a stooped posture and difficulty continuing to move are some of its most common symptoms [2]. A survey by the National Parkinson Foundation [3] shows that worldwide about 1% of older adults are affected by this disease. Huntington disease (HD) is a genetic neurological disorder characterized by chorea and lack of coordination [4]; it is a degenerative disorder inherited from earlier generations. The disorder involves a hereditary defect on chromosome 4: a DNA segment of the brain cell called CAG is repeated many more times than it is supposed to be. The CAG portion is repeated 10–28 times in a normal person, but in the HD gene it is repeated 36–120 times, and the number of repeats tends to increase as the gene is inherited through a family. The greater the number of repeats, the greater the chance of developing symptoms earlier. Slow uncontrolled actions, a trembling gait, facial movements including frowns, judgment


defects, and head turning to shift eye position are some of the symptoms of HD [2]. Amyotrophic lateral sclerosis (ALS) is an accelerating neurodegenerative disease which affects the nerve cells of the brain and spinal cord. Motor neurons link the brain to the spinal cord and the spinal cord to the muscles throughout the body. In ALS, because of accelerating degeneration of the nerve cells, the motor neurons die and lose their ability to control muscle movements; basic movements such as speaking, eating, moving and breathing are affected by this loss [5]. Figure 1 shows the gait signals of PD, HD and ALS subjects.

2 Background Study

M. P. Murray et al. demonstrated that gait is a reliable, repeatable measure in medical tests and carried out fundamental examinations of the effect of height, age and other factors on a person's gait [6]. Jonghee Han et al. showed that peak analysis can be useful in understanding the progression of a disease, which is valuable information for treatment [7]. Simon showed that gait analysis is an effective way of studying walking in people; a noteworthy application area is medical decision making and effective treatment for neuromusculoskeletal diseases in various groups of persons, for example in high-level screening systems and human identification. By extracting spatial and temporal parameters from human stride and posture, medical treatments can draw on important extra information, permitting better diagnosis and effective treatment assessment for illnesses like Parkinson's [8, 9]. J. M. Hausdorff notes that clinicians as well as researchers frequently require a gait analysis framework that gives precise and dependable measures of temporal and spatial gait characteristics for a wide range of medical populations; such systems are used in an extensive variety of settings, including outdoors. The ability to obtain precise stride time measurements, including the standard deviation and fractal properties of stride time, over a predefined number of consecutive walking strides is significant [10]. Haeri, Sarbaz and Gharibzadeh showed that the change in the gait signal can be a good way of distinguishing movement disorders caused by the malfunction of neurons in parts of the brain, and can also be used to validate models proposed for neurodegenerative diseases. Neurodegenerative diseases such as Huntington's and Parkinson's are both neurological illnesses arising from the basal ganglia, which plays a major role in motor control [1, 11].
The kind of neurodegenerative like Huntington’s and Parkinson’s diseases is both neurological illnesses which are mainly due to the effect in nature of the basal ganglia. It plays a major role in influencing the direction control of engine [1, 11]. In subjects with amyotrophic lateral sclerosis (ALS), the basal ganglia stay unblemished. Nonetheless, the study of nature of motoneuron creates in the cere-

290

Fig. 1 Gait signal of PD, HD, ALS subjects

A. Athisakthi and M. Pushpa Rani

Recognize Vital Features for Classification …

291

bral cortex, brain stem and spinal line which cause the gait cycle to end up anomalous [12]. Walk interim connection is liable to impairment because of reduced conduction speed of nerve, lack of motor neurons, diminished reflexes, diminished muscle quality and diminished focal handling abilities initiated by neurodegenerative maladies such as Huntington affected issue, Parkinson’s treatment and amyotrophic lateral sclerosis (ALS) [11]. Hausdorff et al. explained maturing as well as certain diseases influencing the walk interim relationship. In their examination, they found that the level of walk to walk interim relationships relates with ALS and Huntington’s diseases are contrarily connected to the level of utilitarian impedance [11]. He also demonstrated that every one of the three maladies adjust step beat, yet it is obscure with respect to either these ailments influence the privilege and the left walk interims at an equivalent rate. L. N. Sharma et al. showed that vitality esteems identification is a decent technique for catching important parts from signs [13].

3 Methodology These medical conditions were picked in light of the fact that they are known to have related causes that may make troublesome finding of them. In this examination, we have utilized an open database from Physionet. 15 subjects with PD, 20 subjects with HD disease, 13 subjects with ALS and 16 subjects of healthy or fit person’s signals are incorporated from the benchmark database. The natural information was procured utilizing force sensitive resistors, with the yield roughly in respect to the power under the foot. Each file comprises two signs and it is analyzed using FSR of each foot’s stride-to-stroll proportions with respect to footfall touch instances. In this paper, the time arrangement of flag was categorized as six intervals. The intervals are given as stride interval, right stride interval, left swing interval, right swing interval, left stance interval, proper stance interval and double assist period in-between and follow measurable, energy estimations of wavelet decay and pinnacle exam systems for spotlight extraction. The selected requirements are relied upon to be normal for the individual’s normal gait performance (Fig. 2). The repetitive pattern of human motion is called a gait cycle. It includes steps and strides. One single step is called as a step; whole gait cycle is called as a stride. The stance and swing phases are incorporated in every gait cycle. In a verbose gait cycle, the stance phase occupies 60% of the gait cycle and it can be part into the double support and single-leg support [14]. The position is one of the stride organize used to appoint the entire time period in the midst of which the foot is on the ground. Position starts with opening contact. A swing start at the foot is lifted from the floor. Right beginning contact emerges while the left foot is still on the ground and double support between introductory contact on the privilege and toe off on the left. 
During the swing phase on the left side, only the right foot is on the ground, giving a period of right single support, which ends with initial contact of the left foot. There is another period of double support, until toe-off on the right side. Left single support corresponds to the right swing phase, and the cycle ends with the next initial contact on the right [15].

A. Athisakthi and M. Pushpa Rani

Recognize Vital Features for Classification …

Fig. 2 Movement disorder detection and classification

The stride length is the distance between two successive placements of the same foot. It comprises two step lengths, left and right; each foot in turn moves forward ahead of the other [16]. A gait cycle contains two periods of double support and two periods of single support. The stance phase typically lasts about 60% of the cycle, the swing phase about 40% and each period of double support about 10% [15].

3.1 Statistical Analysis Statistical gait analysis helps mainly in characterizing the gait. In practice, gait analysis examines a few tens to several hundred successive steps, and it is intended to assess the patient during a "functional" walk typical of normal daily life. In this research, for each time series of a single subject, the parameters energy, standard deviation, mean, variance and covariance of the same walk are considered. This statistical approach helps to capture repetitions and obtain exact results.

(a) Energy The energy of a continuous-time signal x(t) is calculated as

$E_x = \int_{-\infty}^{\infty} |x(t)|^2 \, dt$

(b) Mean The mean is calculated to know the average value of the signal or the selected sample. It is computed as

$\mu = \frac{1}{N} \sum_{i=0}^{N-1} x_i$

(c) Standard Deviation The standard deviation measures the dispersion of a set of values about its mean, i.e. the overall variability of a distribution. It is given by

$\sigma = \sqrt{\frac{1}{N-1} \sum_{i=0}^{N-1} (x_i - \mu)^2}$

(d) Variance The variance is computed as

$\operatorname{Var}(X) = E[(X - \mu)^2]$


(e) Covariance The covariance of data samples X and Y is calculated as

$\mathrm{COV}(X, Y) = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{x})(Y_i - \bar{y})$
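The statistical features above (energy, mean, standard deviation, variance and covariance) can be sketched in plain Python; the function and variable names are illustrative, not from the paper:

```python
import math

def gait_features(x):
    """Statistical features of one gait-interval time series:
    energy, mean, standard deviation and variance (sample forms)."""
    n = len(x)
    energy = sum(v * v for v in x)                  # discrete analogue of the energy integral
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / (n - 1)
    std = math.sqrt(var)
    return {"energy": energy, "mean": mean, "std": std, "var": var}

def covariance(x, y):
    """Sample covariance of two interval series (e.g. left vs. right stride)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
```

In the pipeline described above, these values would be computed once per interval series (left stride, right stride, and so on) and concatenated into the feature vector.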

3.2 Discrete Wavelet Transform (a) Decomposition of Wavelet Signals Wavelet decomposition is one of the best approaches to analysing signals in the field of signal processing. Its significance in the time domain is as follows. (i) The wavelet basis functions have zero average, so the decomposition is insensitive to constant offsets. (ii) Their smoothness and coherence properties make wavelet decomposition well behaved. (iii) Because the basis functions have compact support, the decomposition is implicitly localized in time. Wavelet decomposition is used to break a signal down into its sub-band components for analysis. The diagnostic information of the signal is related to the different frequency sub-bands of the wavelet decomposition, and most of the original signal's information is mapped to the lower-frequency sub-bands [3]. The proposed method uses two levels of wavelet decomposition for feature extraction; the signal decomposition yields the level-2 approximation together with the corresponding set of detail coefficients. Basically, a wavelet is a function $\varphi \in L^2(\mathbb{R})$ with zero average, $\int_{-\infty}^{\infty} \varphi(t)\,dt = 0$. The continuous wavelet transform (CWT) of a signal x(t) is then defined as

$\mathrm{CWT}_{\varphi} x(a, b) = \frac{1}{\sqrt{|a|}} \int_{-\infty}^{\infty} x(t)\, \varphi^{*}\!\left(\frac{t-b}{a}\right) dt$

where $\varphi(t)$ is the mother wavelet, * denotes the complex conjugate, and a and b are the scaling and translation parameters. The scaling parameter a determines the frequency of oscillation and the length of the wavelet, while the translation parameter b determines its shifting position. The scaling function and the wavelet function correspond to low-pass filters (LPF) and


Fig. 3 Decomposition of wavelet to extract energy

high-pass filters (HPF). The decomposition of the signal starts by passing it through these filters. The low-frequency part of the time series is the approximation, and the high-frequency components are captured by the details. The outputs of the low-pass and high-pass filters are downsampled to obtain the approximation and detail coefficients A1 and D1 at the first level. The approximation coefficients so obtained are passed to the next stage, where the procedure is repeated. The signal is decomposed down to the chosen level in the last stage of the decomposition process [17] (Fig. 3). The energy of a signal can then be computed at the different resolution levels. Mathematically, this can be presented as

$ED_i = \sum_{j=1}^{N} |D_{ij}|^2, \quad i = 1, \ldots, l$

$EA_l = \sum_{j=1}^{N} |A_{lj}|^2$


Fig. 4 Analysis of peak

where i = 1, …, l is the wavelet decomposition level from level 1 to level l, N is the number of detail or approximation coefficients at each decomposition level, $ED_i$ is the energy of the detail coefficients at decomposition level i, and $EA_l$ is the energy of the approximation at decomposition level l.
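As a hedged illustration of this decomposition-and-energy step, here is a minimal two-level DWT energy extractor using the Haar wavelet (the paper does not state which wavelet family it uses; all names are illustrative):

```python
import math

def haar_step(x):
    """One level of Haar DWT: returns (approximation, detail) coefficients."""
    approx = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    return approx, detail

def wavelet_energies(x, levels=2):
    """Energies ED_i of each detail band and EA_l of the final approximation."""
    energies = {}
    approx = list(x)
    for i in range(1, levels + 1):
        approx, detail = haar_step(approx)
        energies[f"ED{i}"] = sum(d * d for d in detail)
    energies[f"EA{levels}"] = sum(a * a for a in approx)
    return energies
```

Because the Haar filter bank is orthonormal, ED1 + ED2 + EA2 sums to the total signal energy, which gives a quick sanity check.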

3.3 Peak Analysis Peaks are among the most informative features for analysing these signals, and they are identified within each signal's time series. For the six intervals (left stride, right stride, left swing, right swing, left stance and right stance) together with the double support interval, the average peak interval and its histogram are calculated (Figs. 4 and 5).
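A minimal sketch of peak detection and the average peak-to-peak interval; the paper does not specify its exact peak-detection rule, so this simple local-maxima criterion is an assumption:

```python
def find_peaks(x):
    """Indices of simple local maxima: samples strictly greater than both neighbours."""
    return [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]

def average_peak_interval(x):
    """Mean distance (in samples) between successive detected peaks."""
    peaks = find_peaks(x)
    gaps = [b - a for a, b in zip(peaks, peaks[1:])]
    return sum(gaps) / len(gaps) if gaps else 0.0
```

A histogram of the gaps list would then give the peak-interval histogram shown in Fig. 5.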


Fig. 5 Computation of histogram of peak interval

(a) Classification The proposed approach is evaluated with three different classifiers: a random forest classifier (RFC), a quadratic Bayes (QB) classifier and a support vector machine (SVM). They are used for the identification and characterization of the signals. The chosen classifiers are supervised learning classifiers, and they assign each test vector to one of the class labels [18]. Among the three, the SVM is fundamentally a two-class classifier. It finds the maximum-margin hyperplane as its decision boundary, and multi-class problems are handled with coding schemes such as "one vs one", "one vs all" and so forth [19]. The SVM classifies a test vector x according to the following condition:

$c = \sum_i \alpha_i \, k(s_i, x) + b$

where $s_i$ are the support vectors, $\alpha_i$ the weights, b the bias and k the kernel function. For a linear kernel, k is the dot product. If c ≥ 0, x is assigned to group 1; otherwise it is assigned to group 2. The next classifier, the random forest classifier, is an ensemble of tree predictors in which each tree depends on an independently sampled random vector, with the same distribution for all trees in the forest. Leo Breiman [9] defines an RFC as a collection of tree-structured classifiers {h(x, Θk), k = 1, …}, where the {Θk} are independent identically distributed random vectors and each tree casts a vote for the most popular class at input x. The RFC uses the technique of bagging, which works particularly well for tree learners. The algorithm for random forests applies the


general technique of bagging to tree learners. Given a training set X = x1, …, xn with responses Y = y1, …, yn, bagging repeatedly (B times) selects a random sample of the training set with replacement and fits trees to these samples: for b = 1, …, B, sample with replacement n training examples from X, Y (call these Xb, Yb) and train a decision or regression tree fb on Xb, Yb. After training, predictions for an unseen sample x' are made by averaging the predictions of all the individual regression trees on x':

$\hat{f} = \frac{1}{B} \sum_{b=1}^{B} f_b(x')$

The third classifier is the quadratic Bayes normal classifier, in which the pattern-generating mechanism is represented in a probabilistic framework. A Bayes classifier is a pattern classifier based on two fundamentals: (1) the loss incurred when an object is classified incorrectly can be quantified as a cost, and (2) the expectation of this cost is acceptable as an optimization criterion.
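The SVM decision rule and the bagged averaging described above can be sketched as follows; the support vectors, weights and trees below are illustrative placeholders, not trained models:

```python
def svm_decision(x, supports, alphas, b,
                 kernel=lambda s, x: sum(a * c for a, c in zip(s, x))):
    """c = sum_i alpha_i * k(s_i, x) + b; group 1 if c >= 0, else group 2.
    The default kernel is the dot product (linear kernel)."""
    c = sum(a * kernel(s, x) for s, a in zip(supports, alphas)) + b
    return 1 if c >= 0 else 2

def bagged_predict(x, trees):
    """Average the predictions of B individually trained regressors (bagging)."""
    return sum(t(x) for t in trees) / len(trees)
```

With `supports = [(1.0, 0.0), (0.0, 1.0)]`, `alphas = [1.0, -1.0]` and `b = 0.0`, the rule assigns group 1 to points where the first coordinate dominates and group 2 otherwise.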

4 Performance Measures The classifiers' performance is measured in terms of sensitivity, specificity and accuracy. These quantities are computed by comparing the actual and predicted outputs. The confusion matrix gives the counts of true positives (TP), false positives (FP), false negatives (FN) and true negatives (TN) for each classifier. The sensitivity rate measures the ability of the trained method to identify subjects positive for NDDs. The sensitivity (SE) is calculated as

$SE = \frac{TP}{TP + FN}$

The specificity rate (SP) reviews the negative results with respect to healthy (non-affected) subjects and is evaluated as

$SP = \frac{TN}{TN + FP}$

The classification accuracy (Acc) measures the degree of closeness of a predicted quantity to its actual or true value. It is computed as

$Acc = \frac{TP + TN}{TP + TN + FP + FN}$


5 Results and Discussion For the proposed method, the data samples were collected from the online Physionet database. Gait signals of 15 subjects with Parkinson's disease, 20 subjects with Huntington's disease, 13 subjects with amyotrophic lateral sclerosis and 16 healthy subjects were obtained by fitting force-sensitive resistors to the subjects' shoes. In this paper, the proposed method based on statistical, energy and peak analysis is discussed. In the statistical analysis, the parameters energy, standard deviation, mean, variance and covariance are evaluated with respect to the six intervals (left stride, right stride, left swing, right swing, left stance and right stance) together with the double support interval. In the energy-analysis phase, the energy features are calculated from the signal decomposition using the discrete wavelet transform. In the final phase, peak analysis, the average peak interval and its histogram are computed. For better performance and higher accuracy, the three analysis methods are combined. The classification, sensitivity and specificity rates of the three classifiers SVM, QB and RFC are compared in the following section (Tables 1, 2 and 3; Figs. 6, 7 and 8). Table 4 and Fig. 9 compare the classifiers in terms of accuracy, sensitivity and specificity. The accuracy of SVM and RFC is 96.875 and 96.875%. The sensitivity of SVM and RFC is 85.4167 and 96.755%. And the specificity of SVM and RFC is 88.8571 and 96.725%.

Table 1 Classification rate of the classifiers (%)

Classifier   Stride   Swing    Stance   Support
QB           71.14    69.54    70.47    70.86
RF           93.75    93.75    93.75    96.875
SVM          81.25    84.375   84.375   81.25

Table 2 Sensitivity rate of the classifiers (%)

Classifier   Stride   Swing    Stance   Support
QB           76.04    68.21    71.21    70.45
RF           91.66    87.5     87.5     93.75
SVM          81.25    70.833   85.41    83.33

Table 3 Specificity rate of the classifiers (%)

Classifier   Stride   Swing    Stance   Support
QB           64.15    67.54    70.04    69.13
RF           91.66    96.15    96.15    98.00
SVM          70.74    79.43    78.98    76.66


Fig. 6 Result of classification

Fig. 7 Comparison of sensitivity result

Fig. 8 Result of classifiers based on rate of specificity

Table 4 Comparison of classifiers (SVM, QB, RFC) based on accuracy, sensitivity and specificity rate

Classifier   Accuracy   Sensitivity   Specificity
QBC          61.9048    61.9048       76.6667
RF           85.743     85.743        90
SVM          71.4286    71.4286       76.7677

Fig. 9 Comparison of classifiers


6 Conclusion In this paper, neurodegenerative diseases are successfully detected by the proposed method using force-sensitive resistor signals. The proposed method is a single pipeline that combines three different sets of analysis, and this combination yields a high accuracy in the early analysis of the diseases. The accuracy of 85.743% obtained with the random forest classifier (RFC) is considered reasonable for the proposed method. A comparison of the classifiers is also presented, and the effectiveness of the proposed method is demonstrated.

References
1. M. Haeri, Y. Sarbaz, S. Gharibzadeh, Modeling the Parkinson's tremor and its treatments. J. Theor. Biol. 236(3), 311–322 (2005)
2. www.ncbi.nlm.nih.gov
3. L.N. Sharma et al., ECG signal denoising using higher order statistics in wavelet subbands. Biomed. Signal Process. Control 5(3), 214–222 (2010)
4. M. Banaie, Y. Sarbaz, S. Gharibzadeh, F. Towhidkhah, Huntington's disease: modeling the gait disorder and proposing novel treatments. J. Theor. Biol. 254(2), 361–367 (2008)
5. E.R. Kandel, J.H. Schwartz, T.M. Jessell, Principles of Neural Science, 4th edn. (McGraw-Hill, New York, 2000)
6. M.P. Murray, Gait as a total pattern of movement. Am. J. Phys. Med. 46(1), 290–333 (1967)
7. M. Trew, Human Movement (Churchill Livingstone)
8. S.R. Simon, Quantification of human motion: gait analysis: benefits and limitations to its application to clinical problems. J. Biomech. 37, 1869–1880 (2004)
9. L. Breiman, Random Forests (Technical report, Statistics Department, University of California, Berkeley, September 1999)
10. J.M. Hausdorff, Gait dynamics in Parkinson's disease: common and distinct behavior among stride length, gait variability, and fractal-like scaling. Chaos 19(2), 26113 (2009)
11. J.M. Hausdorff, Dynamic markers of altered gait rhythm in amyotrophic lateral sclerosis. J. Appl. Physiol. 88(6), 2045–2053 (2000)
12. J.M. Hausdorff, Altered fractal dynamics of gait: reduced stride-interval correlations with aging and Huntington's disease. J. Appl. Physiol. 82(1), 262–269 (1997)
13. L.N. Sharma, R.K. Tripathy, S. Dandapat, Multiscale energy and eigenspace approach to detection and localization of myocardial infarction. IEEE Trans. Biomed. Eng. (2015)
14. G. Rau, Movement biomechanics goes upwards: from the leg to the arm. J. Biomech. 33(10), 1207–1216 (2000)
15. A. Kharb, V. Saini, Y.K. Jain, S. Dhiman, A review of gait cycle and its parameters. IJCEM Int. J. Comput. Eng. Manag. 13 (2011)
16. C. Kirtley, Clinical Gait Analysis: Theory and Practice (2005)
17. I. Omerhodzic, S. Avdakovic, A. Nuhanovic, K. Dizdarevic, Energy distribution of EEG signals: EEG signal wavelet-neural network classifier
18. C.-J. Huang et al., Application of wrapper approach and composite classifier to the stock trend prediction. Expert Syst. Appl. 34(4), 2870–2878 (2008)
19. J.A.K. Suykens et al., Least Squares Support Vector Machines, vol. 4 (World Scientific, 2002)

MM Big Data Applications: Statistical Resultant Analysis of Psychosomatic Survey on Various Human Personality Indicators

Rohit Rastogi, Devendra Kumar Chaturvedi, Santosh Satya, Navneet Arora, Piyush Trivedi, Mayank Gupta, Parv Singhal and Muskan Gulati

Abstract Machines are getting more intelligent day by day, while human behaviour has become so complex that nobody can make concrete or accurate behavioural or action-related predictions. Mood swings have become an epidemic in modern India and in the metro cities of the developed countries. We have grown rich in terms of money and physical facilities, and modern science has gifted us many boons, but at the same time mental, physical and spiritual disorders have disturbed the smile, peace, definite attitude and lifestyle of individuals and of human beings in general. The 3 Ms (Mobile, Money and Marriage) are essential in one's life, but over-addiction, the misuse of resources and facilities, and a lack of right understanding, together with selfishness and ungrateful conduct, have changed all the social parameters. So, presently, stress has

R. Rastogi (B) · P. Singhal · M. Gulati Department of Computer Science and Engineering, ABESEC, Ghaziabad, India e-mail: [email protected]
P. Singhal e-mail: [email protected]
M. Gulati e-mail: [email protected]
D. K. Chaturvedi Department of Electrical Engineering, DEI, Agra, India e-mail: [email protected]
S. Satya Department of Rural Development, IIT-Delhi, New Delhi, India e-mail: [email protected]
N. Arora Department of Mechanical Engineering, IIT-Roorkee, Roorkee, India e-mail: [email protected]
P. Trivedi Centre of Scientific Spirituality, DSVV, Haridwar, India e-mail: [email protected]
M. Gupta IT Analyst, Tata Consultancy Services, Noida, India e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2020 S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988, https://doi.org/10.1007/978-981-13-8222-2_25


R. Rastogi et al.

been the biggest challenge facing mankind, alongside nuclear weapons, global warming and epidemics. It leads to tension, frustration and depression, and ultimately, in extreme cases, to suicide or the murder of innocents. The happiness index, individual safety and living parameters have been drastically challenged, and India in particular fares poorly on the global quality of life (QoL) index. The present paper is an effort to define a simulated model and framework that maps the subjective quality of stress onto quantitative parameters and analyses it mathematically with the help of popular machine learning tools and applied methods. With the help of machine intelligence, the authors are trying to establish a framework which may work as an expert system and may help individuals to grow into better human beings. Keywords Stress · Python · Anaconda · PyCharm · Use case · Machine learning · Multinomial logistic regression · Variance

1 Introduction 1.1 Young People's Health Is Vital and Crucial Youth are the future of our country and are assumed to be healthy and active, but as per a recent WHO report [1], around 2.6 million young people of about 10–24 years of age die annually, and a huge number of them suffer from various chronic diseases which hinder their growth and development. Behavioural patterns established during this developmental phase, observed over many years, point to serious health issues for the youth of the future. Since 1990, many things, such as morbidity, mortality and modes of communication, have changed a great deal [2, 3]. Though we have shifted towards safety and health schemes, there is still a tremendous need to understand the severe problems of youth, to work on them, to find suitable mechanisms and appropriate solutions to these growing problems, and to provide checks to overcome them [4–6].

1.2 Previous Work Study – The modern use of the term "stress" was born from the experiments conducted by Selye in the 1930s. He used the term to focus not only on the agent but also on the state of the organism and the way it reacted and adapted to the environment. Starting from the 1960s, academic psychologists began to embrace Selye's notion; they tried to operationalize "life stress" by noting "remarkable life events", and much research has been done to inspect the links between stress and disease of almost all kinds [7–9]. – In a study conducted and supervised by Agarwal and Chadha in 2007, the various types of role stress prevailing among engineering and administrative students in

MM Big Data Applications …












India were inspected. It was observed that role overload, the separation of the role from oneself and role stagnation are the main stresses experienced by students. Moreover, many students also face difficult states such as frustration, nervousness, depression, worry and humiliation. The uncertainty of these emotions easily triggers unexpected actions and behaviour, which affects students' adaptability and learning outcomes if timely advice is not given by teachers, institutions and parents, or if they fail to get the right concern from their siblings and peers [10–13]. – K. Rao and D.K. Subbakrishna in 2006 conducted an assessment of stress at the National Institute of Mental Health and Neurosciences (NIMHANS) and addressed various coping behaviours in a group of 258 graduate and undergraduate students. – Piekarska et al. in 2000 did exhaustive work and stressed that the vital factors for stress development are strong and frequent stressors, and that there is a clear correlation between stress outcomes and intellectual and personality characteristics [14–16]. – The "University Chronic Life Stress Survey", built by Towbes and Cohen in 1996, emphasizes the prevalence of persistent and ceaseless stress in the lives of university students. This scale includes the elements that persist over time to create stress, such as interpersonal conflicts, self-esteem problems and money problems. Rocha-Singh in 1994 also inspected various sources of stress among college-going students through similar subsequent researches and studies [17–20]. – According to Hirsch and Ellis in 1996, the robust relation between any person and the environment, in the cognizance of and response to stress, is particularly reinforced in university students, whose stressors may differ slightly from those faced by their non-student peers.
The most remarkable stressors are the time-specific or subject-specific elements that support Carroll's (1963) statement that learning is a function of aptitude, allowed time, quality of instruction and the ability to understand instruction [21–24]. It was analysed that these basic academic stressors have remained relatively unchanged over time, as noted by Murphy and Archer in 1996, who compared the academic stressors of their study with those observed eight years earlier. Larson in 2006 and many previous researchers have found that university stressors include finances, social relationships, academics, daily problems (for example, parking) and family relationships. Within each domain, dissension, time demands and insufficient resources, together with newer responsibilities, characterize stress. – The brainstorming technique gathers a small group of people who are encouraged to put forward various innovative ideas to elucidate problems of an inventive nature. It discourages immediate assessment, because analysis during idea production has a dampening effect on members. A brainstorming session can last two or more hours; it is a free-wheeling exchange of innovative ideas that come very swiftly from diverse sources. Checklists presented by researchers should direct the generation of ideas [25–29]. – Osborn (1957) studied beliefs on the development and education of the child and their relationship to academic achievement, with the aim of studying the differences in students' academic performance related to culture, intelligence and gender, taking a sample of 200 selected students. Profound differences were found in the gross academic performance of the students, and in the scores obtained in social science, science and languages, with regard to culture but not gender, while parents' beliefs about development arising from the learning processes were optimistic [27, 30, 31]. – Gordon's technique is a refined manifestation of the brainstorming procedure. Gordon in 1961 released a procedure called "operational creativity" [26, 30]. In this method, an abstraction of the problem, rather than the specific problem used in Osborn's brainstorming, is given to the members of a group. His thesis is that by proposing an extreme abstraction, we can fetch many novel ideas that usually do not arise [32, 33].

1.3 Study Plot The questionnaire was planned based on the study of several books. We set up a control group that did not take part in the experiment and formed seven experimental groups of 30 students each from a class of 210; these groups participated in the experiment, were in regular contact, and their comments were recorded. Observations and readings were recorded at periodic intervals. After a period of time, the authors calculated the benefit, difference and deviation levels based on a series of questionnaires. The groups other than the control group were assessed with standard questionnaire tools, or tools defined by the developers with a high level of support, trust and reliability, together with detailed interviews, ethnography, schedules and group interactions that were used to re-examine the very notion of psychic challenges. • Product Functions: The psychoanalysis and stress analyser will evaluate the stress level of an individual and will help to curtail stress. The product will contain an already fed and defined training set that will drive the survey database, based on questionnaires. The questionnaire will comprise four levels. In this experiment, about 1000 people will be asked to enrol in the survey, and a table will be prepared based on the data obtained from the survey. This table will display general data.

2 Algorithm(s) Implementation Results 2.1 Questionnaires' Language Applied [16] The following statements were put to users in question form. (Separated by full stops.)


2.2 Graphs and Tables See Figs. 1, 2 and 3 and Tables 1, 2, 3, 4, 5, 6 and 7. From Fig. 4, based on Table 3, we observed that all the covariance values are positive, which means that if there is an increase in the value of one variable, the value

Fig. 1 Figure based upon Table 1

Fig. 2 Correlation from questions 1 to 25 based upon Table 2

Table 1 Mean of responses of 399 participants

q1: 3.06    q2: 2.69    q3: 2.02    q4: 2.09    q5: 2.14
q6: 2.11    q7: 2.47    q8: 2.89    q9: 2.47    q10: 2.04
q11: 2.33   q12: 2.46   q13: 2.64   q14: 2.13   q15: 1.55
q16: 2.488  q17: 1.78   q18: 1.99   q19: 1.44   q20: 2.64
q21: 2.55   q22: 2.47   q23: 1.47   q24: 2.48   q25: 2.65

Table 2 Variance and standard deviation of responses of the 399 participants who participated in the survey

Variance: 0.27489
Standard deviation: 0.524277

Table 3 Covariance of questions 1 to 5, computed from the responses of the 399 participants who participated in the survey

[1.2050   0.455    0.5836   0.501    0.4397]
[0.455    1.3598   0.62289  0.4598   0.6369]
[0.5836   0.6289   1.5067   0.4598   0.6155]
[0.50131  0.4598   0.647    1.45077  0.60057]
[0.43     0.6369   0.6155   0.6      1.6099]


Fig. 3 Covariance of questions 1–25 based upon Table 3

Table 4 Covariance of questions 6 to 10, computed from the responses of the 399 participants who actively participated in the survey

[1.939    0.626    0.2528   0.20963  0.317]
[0.6264   2.37441  0.0321   −0.011   0.331]
[0.2524   0.321    2.6754   −0.111   0.3478]
[0.2092   0.0321   −0.111   2.2678   0.37897]
[0.3715   0.0331   0.34789  0.374132 1.76512]

Table 5 Covariance of questions 11 to 15, computed from the responses of the 399 participants who actively participated in the survey

[1.554    0.487    0.277    −0.0081  0.4647]
[0.498    1.5027   0.0032   0.264    0.6745]
[0.2724   0.883    1.491    −0.0766  0.5415]
[−0.0081  0.2664   −0.078   1.215    0.052]
[0.4642   0.6759   0.5412   0.502088 1.813]

Table 6 Covariance of questions 16 to 20, computed from the responses of the 399 participants who actively participated in the survey

[1.6423   0.4584   0.1987   0.3214   0.2987]
[0.459    0.9447   0.1666   0.4024   0.278951]
[0.195    0.166    0.7879   0.0784   0.2825]
[0.3270   0.4078   0.0746   1.554    0.27]
[0.2961   0.21752  0.28492  0.27     1.74]

Table 7 Covariance of questions 21 to 25, computed from the responses of the 399 participants in the survey

[1.664    0.3654   0.7931   0.6177   0.5217]
[0.354    0.62411  0.1974   0.2511   0.0458]
[0.731    0.1911   1.599    0.477    0.442]
[0.6788   0.4778   0.475    1.514    0.6999]
[0.5211   0.0781   0.4422   0.6995   1.9726]

Fig. 4 Covariance of 1–5

Fig. 5 Covariance of 6–10

of the other variable will also increase. It is observed that if a person is worried, then beyond a certain level that person will feel tremendously irritable and sensitive. From Fig. 5, based on Table 4, we observed that if a person spends less time reading a newspaper, then that individual is not well aware of the latest updates and happenings in the world and is also less interested in his hobbies. Such people spend more time watching television for entertainment rather than fetching scientific updates and working on their ambitions. IoT has many advantages, but people's dependence on IoT devices is increasing. In this age of digitization, most of us depend on these devices because they make our lives very simple and reduce a lot of effort. These


Fig. 6 Covariance of 11–15

Fig. 7 Covariance of 16–20

devices are also used for entertainment purposes, such as televisions, smartphones and other more recent digital devices. From Fig. 6, we observed that if a person is not feeling relaxed, then he might suffer from chronic headaches and an upset stomach and often feels fatigued; he might then start smoking tobacco in order to avoid these stress symptoms. From Fig. 7, it is observed that if a person fights once a week with his fellow co-workers, then he is feeling stressed. He often fights because he starts feeling that no one understands him. He may then be subject to accidents too, as due to those pointless fights he will not be able to focus on other things, and he might even start drinking in order to relieve stress and eradicate these stress symptoms. From Fig. 8, it is observed that if a person is having bad dreams or nightmares, then he might have trouble falling asleep, so in order to sleep well he would start taking sleeping pills; he will always look tired and exhausted (Fig. 9; Tables 8 and 9). The resulting graph presents the output of the multinomial logistic regression algorithm, which is applied to the six most highly correlated questions. It reflects the stratification of the test set into three main classes, 0, 1 and 2, where 0 represents low, 1 represents medium and 2 represents high. The train set and the test set are maintained in the ratio of 70:30.
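A minimal multinomial (softmax) logistic regression sketch in plain Python, mirroring the low (0) / medium (1) / high (2) stratification described above; the tiny one-feature dataset and all names are illustrative, not the paper's survey data:

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of scores."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def train_multinomial_lr(X, y, classes=3, lr=0.3, epochs=2000):
    """Batch gradient descent on the multinomial cross-entropy loss."""
    d = len(X[0])
    W = [[0.0] * d for _ in range(classes)]   # one weight row per class
    b = [0.0] * classes
    for _ in range(epochs):
        gW = [[0.0] * d for _ in range(classes)]
        gb = [0.0] * classes
        for x, t in zip(X, y):
            p = softmax([sum(w * v for w, v in zip(W[c], x)) + b[c]
                         for c in range(classes)])
            for c in range(classes):
                err = p[c] - (1.0 if c == t else 0.0)
                gb[c] += err
                for j in range(d):
                    gW[c][j] += err * x[j]
        for c in range(classes):           # averaged gradient step
            b[c] -= lr * gb[c] / len(X)
            for j in range(d):
                W[c][j] -= lr * gW[c][j] / len(X)
    return W, b

def predict(W, b, x):
    """Class with the highest score."""
    scores = [sum(w * v for w, v in zip(W[c], x)) + b[c] for c in range(len(W))]
    return max(range(len(scores)), key=scores.__getitem__)

# Illustrative 1-D "stress score" mapped to low (0), medium (1), high (2)
X = [[0.0], [0.5], [1.0], [1.5], [2.0], [2.5], [3.0], [3.5]]
y = [0, 0, 0, 1, 1, 2, 2, 2]
W, b = train_multinomial_lr(X, y)
```

In practice, the train/test split (70:30 above) would be applied before fitting, and the six selected question responses would form the feature vector instead of the single illustrative score.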

Table 8 Correlation matrix of responses to questions Q1–Q25, given in two parts: (a) rows Q1–Q25 against columns Q1–Q13 and (b) rows Q1–Q25 against columns Q14–Q25 (individual coefficients omitted). The table depicts the maximum correlation values; six questions, Q3, Q5, Q10, Q15, Q20 and Q21, are found to be highly correlated, and the logistic regression algorithm is then applied to these particular questions.
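The screening step that Table 8 summarizes, computing pairwise correlations among the questionnaire responses and keeping the strongly correlated questions, can be sketched in plain Python. The synthetic data, the `pearson` helper and the 0.9 cut-off below are illustrative assumptions, not the authors' exact procedure:

```python
# Sketch of the correlation screening behind Table 8 (synthetic data;
# the real study used 394 participants' answers to questions q1-q25).
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def highly_correlated(responses, threshold=0.9):
    """Return question pairs whose |r| reaches the threshold.

    `responses` maps a question id (e.g. 'q03') to a list of 1-5 scores.
    """
    qs = sorted(responses)
    pairs = []
    for i, qa in enumerate(qs):
        for qb in qs[i + 1:]:
            r = pearson(responses[qa], responses[qb])
            if abs(r) >= threshold:
                pairs.append((qa, qb, round(r, 2)))
    return pairs

random.seed(1)
base = [random.randint(1, 5) for _ in range(30)]
data = {
    "q03": base,
    "q05": [6 - b for b in base],               # perfectly anti-correlated with q03
    "q10": [random.randint(1, 5) for _ in range(30)],  # unrelated noise
}
print(highly_correlated(data))
```

The covariance values of Table 9 come from the same machinery: the unnormalized numerator `cov` above, divided by the sample size instead of by `sx * sy`.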

Table 9 Covariance matrix of responses to questions Q1–Q25, given in two parts (individual values omitted): (a) shows the covariance values from Q1 to Q25 in rows against Q1 to Q13 in columns, and (b) shows the covariance values from Q1 to Q25 in rows against Q14 to Q25 in columns.


Fig. 8 Covariance of 21–25

Fig. 9 Classification results of four personality attributes by logistic regression and multinomial logistic regression

The questionnaire items (Q1–Q25) were as follows:

1. I worry a lot.
2. I feel very angry inside.
3. I feel extremely sensitive and irritable.
4. I feel like other people don't understand me.
5. I really don't feel good about myself.
6. I spend 3 h a week working on a hobby of mine.
7. I lack time to read a daily newspaper.
8. I watch television for entertainment for more than one hour a day.
9. I drive a motor vehicle faster than the speed limit for the excitement and challenge of it.
10. I spend less than 30 min a day working towards a life goal or ambition of mine.
11. I have a hard time feeling really relaxed.
12. I get severe or chronic headaches.
13. My stomach quivers or feels upset.
14. I smoke tobacco.
15. I feel short of breath after mild exercise such as climbing up four flights of stairs.
16. At least once during the week, I have a shouting match with co-workers or supervisors.
17. I tend to stumble when walking or have more accidents than other people.
18. I get drunk or high on drugs more than once a week.
19. I get tongue-tied when I talk to other people.
20. After dinner, I spend more time alone or watching TV than with family or friends.
21. I have trouble falling asleep.
22. I take pills to get to sleep.
23. I have nightmares or repeated bad dreams.
24. I wake up at least once in the middle of the night for no apparent reason.
25. No matter how much sleep I get, I awake feeling tired.

Scoring was done from 1 to 5, where 5 indicates extreme and 1 indicates the lowest. We obtained the results of 394 participants.

Fig. 10 Accuracy percentage of different algorithms
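A hypothetical scoring helper for the 25-item questionnaire above can make the rating scheme concrete. The tercile cut-offs used to map a total score to classes 0/1/2 are illustrative assumptions, not the authors' published thresholds:

```python
# Each item is rated 1 (lowest) to 5 (extreme), giving a total in 25-125.
# The class boundaries below are assumed thirds of that range.

def stress_class(ratings):
    """Map 25 ratings (each 1-5) to 0 = low, 1 = medium, 2 = high."""
    if len(ratings) != 25 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("expected 25 ratings in the range 1-5")
    total = sum(ratings)      # possible range: 25-125
    if total <= 58:           # lower third of the range
        return 0
    if total <= 91:           # middle third
        return 1
    return 2                  # upper third

print(stress_class([1] * 25))  # all-lowest respondent
print(stress_class([5] * 25))  # all-extreme respondent
```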

3 Final Analysis of Graphs and Concluding Remarks

3.1 Performance Evaluation

According to this algorithm, the ratio between the train set and the test set is 70:30, i.e. 70% of the complete data set forms the train set and 30% forms the test set. After the algorithm is applied, we obtain the accuracy level on both the train set and the test set. A classification chart is also obtained, in which the data set is classified among three classes, 0, 1 and 2, where 0 represents low, 1 represents medium and 2 represents high, based on the characteristics of the six highly correlated questions.
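The 70:30 split feeding the multinomial logistic regression can be sketched in pure Python; the shuffle seed and participant indices below are illustrative:

```python
# A minimal shuffled 70:30 train/test split, as described in Sect. 3.1.
import random

def train_test_split(samples, train_fraction=0.7, seed=42):
    """Shuffle and split samples into (train, test) lists."""
    items = list(samples)
    random.Random(seed).shuffle(items)   # deterministic shuffle
    cut = int(len(items) * train_fraction)
    return items[:cut], items[cut:]

# 394 participants, as in the study
participants = list(range(394))
train, test = train_test_split(participants)
print(len(train), len(test))  # 275 train, 119 test
```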

3.2 Accuracy Percentage of Different Algorithms

Figure 10 shows the accuracy percentage of the different algorithms. Machine learning is extensively used to analyse the stress level of an individual. Remedies are applied depending on the assessed level of stress. Emotional, physical, behavioural and other personal indicators need to be treated as early as possible [12, 13, 15].
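The accuracy compared across algorithms in Fig. 10 is simply the fraction of correctly classified samples on the held-out test set; a minimal helper, with hypothetical label lists:

```python
# Accuracy as a percentage of matching true/predicted labels.

def accuracy(y_true, y_pred):
    """Return the percentage of positions where the labels agree."""
    if len(y_true) != len(y_pred):
        raise ValueError("label lists must have equal length")
    hits = sum(t == p for t, p in zip(y_true, y_pred))
    return 100.0 * hits / len(y_true)

# Example with stress classes 0 (low), 1 (medium), 2 (high)
print(accuracy([0, 1, 2, 2], [0, 1, 1, 2]))  # 75.0
```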


3.3 Learning and Outcome

Under the supervision of their mentors, the student authors learned Python, teamwork and how to distribute work as tasks. Whether a person is under stress can be identified from certain symptoms, and these symptoms can be classified into different stress levels. They are easily recognized if a person spends more time alone, feels extremely sensitive (so that those around him also do not feel good), spends less than 30 min a day on his life goal and faces difficulty in falling asleep.

3.4 Future Directions

The results were sent, along with resources, to the participants via e-mail. Participants are followed up after a suitable time interval to see whether the applied remedies are useful, and also to check whether the division into intervals is correct [34–36].

4 Novelty in Our Work

The work was done by combining and merging various backgrounds and disciplines, such as spirituality, computer science, medical engineering and clinical psychology, united by a common interest, namely elevating human awareness. Philosophy has various pros and cons, but in the present scenario psychological challenges are the substantial crisis. The authors have tried to bestow a quantitative approach on a subjective concept. Various mathematical formulations and solutions of major problems related to spiritual well-being and human life have been addressed, and these will eventually open a new path in the study of human intelligence. The proper use of this analysis together with spiritual practices will definitely show substantial improvement in different characteristics [4, 5, 36, 37].

5 Complexity Analysis

Human life, and the human mind along with human nature, is very complex. The calculated characteristics are not only intuitive but also differ from one individual to another in terms of what should and should not be done, and the list may be long and interminable. Furthermore, the phenomenon is so complex that it can be treated as an NP-complete problem. As a result, the execution-time complexity will be much higher, since the number of possible combinations of problems encountered by humans is large [6, 38].


6 Recommendations

The given method is a completely new technology to elucidate human life and human complexity, the decision-making process of human beings, etc. As is known, an incorrect decision can put someone under a lot of pressure, which causes stress, tension or mental disorder. With the result analysis obtained through this technology, proper remedies can be given. This is one of the basic reasons why it is so essential to continuously update our ability to question and interview, read the latest information, learn more about the environment, etc. Human beings are complex, and we must continue to learn more about them [9, 38].

7 Limitations

The framework works over a short time with a limited number of parameters, say 30. We must continually try to explore and exploit the important components that contribute to these phenomena. Only the global best individual drives the search at the end of every iteration. Features may vary and their priorities may differ, so it is important to understand every small aspect, study it in depth and thereby define the characteristics of absolute human spiritual attributes [12, 23, 39].

8 Future Scope and Possible Applications

Future work can identify the absolute parameters as well as their priorities; their measurement and absolute representation are equally essential. A more analytical model and simulation structure can be planned and designed, and thus an expert system can be formulated to represent the notion as completely as possible [12]. The method can be used to recognize an honest employee, a good colleague, a life partner or a friend for a particular individual. For social reform, recognizing individual characteristics in order to make distinct decisions can prove helpful. The scientific approach of recognizing invariable spiritual characteristics and defining behaviour and personality can be utilized in distinct ways in almost all aspects of life. The results are acquired through a questionnaire and finally analysed.

9 Motivation

The recommended project endeavours to apply intelligent machine techniques, such as Python programming on the Anaconda platform and SPSS tools, together with biofeedback therapy sensor modalities, to optimized methods of developing an ethical life in the outlook of Indian psychology and philosophy, which uses varied spiritual instruments. People are strongly affected by drugs, stress, memory problems, high levels of anger, injury problems, etc., and all these problems contribute to the rise in the daily human mortality rate. Machines are becoming very intelligent day by day owing to the growth of artificial intelligence, and almost all work is performed with their help. Science is used to solve many complex problems and has simplified our lives to a great extent, but despite this there are hurdles that also need to be addressed as soon as possible, such as psychological problems like tension, frustration, stress and depression, which are now becoming more frequent in human traits and personalities [13, 39]. Currently, we require training on how to talk and interact with people in order to get the maximum information on their psychological and mental condition. Usually we meet people in the office, school, university, public places, etc., and begin to judge them based on their sense of dress, familiar background and so on; we have to work out how to get the best information from people simply by observing the movement of their hands, their choice of words, their visual contact, their body movement while speaking and many other factors. We have to interact with people who are stressed, who are highly influenced by drugs, who are short-tempered, who face severe memory problems, etc. The proposed document is a mathematical structure for the calculation of the personal human spiritual factor and for directing an individual towards a better and nobler life [15].

10 Conclusion

Developing a process to turn data into a viable vision is an essential part of big data and of the success of IoT. According to McKinsey, open strategies that claim "give us your data and we will give you new knowledge" are not really enough. Companies must review the standards of the information they incorporate and, consequently, design the system so as to optimize the task. As the number of connected devices increases, organizations will acquire more opportunities to use them and to gather useful, pertinent data that can improve their business processes. As per the statistics collated by researchers, Internet traffic is shifting from non-multimedia data towards multimedia data. This signifies that users are inclining more towards multimedia in day-to-day activities. Seamless integration, cooperative sensing, connectivity and autonomy in the IoMT structure open doors to abundant chances to improve services and applications through efficient utilization of large multimedia data. However, the diverse nature of large multimedia data demands accessible and tailored recommendation frameworks for efficient analysis of the big data collected in situations like surveillance, retail, telemedicine, traffic monitoring and disaster management.


Multimedia is used in real-time emergency detection through visual analytics and response recommendation. Data mining in IoMT systems requires scalable and efficient algorithms for big data analytics. Evolutionary algorithms for multimedia analysis and recommendation in the IoMT ecosystem, multimodal feature-extraction methods for multimedia data analysis in the IoMT environment, and novel data collection, reality mining, deep learning and prediction methods based on physical-world observations are promising directions. In this paper, we have discussed the importance of big data, its challenges and its huge potential in the upcoming era. We have also highlighted the large role big data can play in human development, and the key role it can play in global prosperity. We have stressed the various challenges and pitfalls associated with big data analytics; considering the huge potential, researchers are working on improving the techniques and handling the identified challenges [16].

Acknowledgements The authors are researchers in the field of scientific spirituality and are heartily grateful for the guidance of co-guides from several reputable academic institutions in comprehending the concept well and reflecting the way forward. The authors also pay sincere thanks to the Chairperson and members of the Spirituality, Yoga and Ethics committee of ABESEC, Ghaziabad, for their encouraging support and for guiding us the right way. We are also thankful to them for helping us design the consent proforma for participants, to seek individual subject permission for this research work. We also thank Mr. Ayush Pratap Singh for his services.

References

1. P. Andlin-Sobocki, B. Jonsson, H.U. Wittchen, J. Olesen, Cost of disorders of the brain in Europe. Eur. J. Neurol. 1(12), 13, 14, 18 (2005)
2. A. Baum, Stress, intrusive imagery, and chronic distress. Health Psychol. 6, 653–675 (1990)
3. R. Rastogi, D.K. Chaturvedi, S. Satya, N. Arora, S. Chauhan, An optimized biofeedback therapy for chronic TTH between electromyography and galvanic skin resistance biofeedback on audio, visual and audio-visual modes on various medical symptoms, in The National Conference on 3rd MDNCPDR-2018 at DEI, Agra, 06–07 Sept 2018 (2018a)
4. R. Rastogi, D.K. Chaturvedi, S. Satya, N. Arora, P. Singh, P. Vyas, Statistical analysis for effect of positive thinking on stress management and creative problem solving for adolescents, in Proceedings of the 12th INDIACom, ISSN 0973-7529, ISBN 978-93-80544-14-4, pp. 245–251 (2018e)
5. R. Rastogi, D.K. Chaturvedi, S. Satya, N. Arora, P. Singhal, M. Gulati, Statistical resultant analysis of spiritual & psychosomatic stress survey on various human personality indicators, in The International Conference Proceedings of ICCI 2018 (2018f)
6. D.S. Scott, T.F. Lundeen, Myofascial pain involving the masticatory muscles: an experimental model. Pain 8(2), 207–215 (1980)
7. R. Rastogi, D.K. Chaturvedi, S. Satya, N. Arora, H. Saini, H. Verma, K. Mehlyan, Y. Varshney, Statistical analysis of EMG and GSR therapy on visual mode and SF-36 scores for chronic TTH, in The Proceedings of UPCON-2018, 2–4 Nov 2018, MMMUT Gorakhpur, UP (2018b)
8. G. Bronfort, N. Nilsson, M. Haas et al., Non-invasive physical treatments for chronic/recurrent headache. Cochrane Database Syst. Rev. 3, 13, 14, 18 (2004)
9. D. McCrory, D. Penzien, V. Hasselblad, R.N. Gray, Evidence Report: Behavioral and Physical Treatments for Tension-Type and Cervicogenic Headache (Duke University Evidence-based Practice Center, Durham, North Carolina, 2001). Available at: www.masschiro.org/upload/research/16_0.pdf. Accessed 28 Dec 2011
10. R. Rastogi, D.K. Chaturvedi, N. Arora, P. Trivedi, S. Chauhan, Framework for use of machine intelligence on clinical psychology to study the effects of spiritual tools on human behavior and psychic challenges, in Proceedings of NSC-2017 (National System Conference), DEI, Agra, 1–3 Dec (2017a)
11. T.H. Budzynski, J.M. Stoyva, An instrument for producing deep muscle relaxation by means of analog information feedback. J. Appl. Behav. Anal. 2, 231–237 (1969)
12. D. Mechanic, Students Under Stress: A Study in the Social Psychology of Adaptation (University of Wisconsin Press, Madison, 1978)
13. L.H. Miller, A.D. Smith, The Stress Solution (American Psychological Association, 2003)
14. R. Rastogi, D.K. Chaturvedi, S. Satya, N. Arora, V. Yadav, S. Chauhan, P. Sharma, SF-36 scores analysis for EMG and GSR therapy on audio, visual and audio-visual modes for chronic TTH, in The Proceedings of ICCIDA-2018, 27–28 Oct 2018, CCIS Series, Springer, Gandhi Institute for Technology, Khordha, Bhubaneswar, Odisha, India (2018c)
15. R. Misra, L.G. Castillo, Academic stress among college students: comparison of American and international students. Int. J. Stress Manag. 11(2), 132–148 (2005)
16. C.F. Morgan, R.A. King, J.R. Weisz, J. Schopler, Introduction to Psychology (1986), p. 301
17. R. Rastogi, D.K. Chaturvedi, S. Satya, N. Arora, P. Trivedi, A. Singh, A. Sharma, A. Singh, Intelligent analysis for personality detection on various indicators by clinical reliable psychological TTH and stress surveys, in Proceedings of CIPR 2019, Indian Institute of Engineering Science and Technology, Shibpur, 19–20 Jan 2019, Springer-AISC Series (2019a)
18. F. Granella, S. Farina, G. Malferrari, G.C. Manzoni, Drug abuse in chronic headache: a clinico-epidemiologic study. Cephalalgia 7, 15–19 (1987)
19. P.P. Nandamuri, G. Ch, Stress and the college students, National Health Ministry. Oxford Brookes University (2006), p. 257
20. R. Rastogi, D.K. Chaturvedi, S. Sharma, A. Bansal, A. Agrawal, Audio visual EMG & GSR biofeedback analysis for effect of spiritual techniques on human behavior and psychic challenges, in Proceedings of the 12th INDIACom, ISSN 0973-7529, ISBN 978-93-80544-14-4, pp. 252–258 (2018g)
21. H. Selye, The Stress of Life (1979); S. Jain, Introduction to Psychology (Kalyani Publishers), p. 40
22. R. Rastogi, D.K. Chaturvedi, S. Satya, N. Arora, V. Yadav, S. Chauhan, P. Sharma, Analytical comparison of efficacy for electromyography and galvanic skin resistance biofeedback on audio-visual mode for chronic TTH on various attributes, in The Proceedings of ICCIDA-2018, 27–28 Oct 2018, CCIS Series, Springer, Gandhi Institute for Technology, Khordha, Bhubaneswar, Odisha, India (2018d)
23. E. Peper, F. Shaffer, Biofeedback history: an alternative view. Biofeedback Winter 4(20), 12, 18, 22 (2010)
24. J.C. Rains, D.B. Penzien, D.C. McCrory, R.N. Gray, Behavioral headache treatment: history, review of empirical literature and methodological critique. Headache 14 (2005)
25. R. Rastogi, D.K. Chaturvedi, N. Arora, P. Trivedi, V. Mishra, Swarm intelligent optimized method of development of noble life in the perspective of Indian scientific philosophy and psychology, in Proceedings of NSC-2017 (National System Conference), DEI, Agra, 1–3 Dec (2017b)
26. Headache disorders and public health, 9 March 2000, 14, 18
27. S. Kadhiravan, K. Kumar, Stress coping skills among college students. J. Arts Sci. Commer. 3(4), 1 (2002)
28. S.E.B. Ross, C. Neibling, T.M. Heckert, Sources of stress among college students. College Student J. 33(2), 312 (2008)
29. R. Plomin, M.J. Owen, P. McGuffin, The genetic basis of complex human behaviors. Science 264(5166), 1733–1739 (1994). https://doi.org/10.1126/science.8209254
30. K.P. James, F.H. Gragory, An academic stress scale: identification and rated importance of academic stressors. Psychol. Rep. 59, 415–426 (1986)
31. R. Rastogi, D.K. Chaturvedi, S. Satya, N. Arora, M. Gupta, V. Yadav, S. Chauhan, P. Sharma, Chronic TTH analysis by EMG & GSR biofeedback on various modes and various medical symptoms using IoT, in Big Data Analytics for Intelligent Healthcare Management, Advances in Ubiquitous Sensing Applications for Healthcare, Paperback ISBN 9780128181461 (2019b)
32. R. Rastogi, D.K. Chaturvedi, N. Arora, P. Trivedi, P. Singh, P. Vyas, Study on efficacy of electromyography and electroencephalography biofeedback with mindful meditation on mental health of youths, in Proceedings of the 12th INDIACom, ISSN 0973-7529, ISBN 978-93-80544-14-4, pp. 84–89 (2018h)
33. R. Rastogi, D.K. Chaturvedi, S. Satya, N. Arora, H. Sirohi, M. Singh, P. Verma, V. Singh, Which one is best: electromyography biofeedback efficacy analysis on audio, visual and audio-visual modes for chronic TTH on different characteristics, in The Proceedings of ICCIIoT-2018, 14–15 Dec 2018, NIT Agartala, Tripura, ELSEVIER-SSRN Digital Library (ISSN 1556-5068) (2018i)
34. The American Institute of Stress, 50 common signs and symptoms of stress (Wood and Wood: The World of Psychology, 1999), p. 469
35. H. Vernon, C.S. McDermaid, C. Hagino, Systematic review of randomized clinical trials of complementary/alternative therapies in the treatment of tension-type and cervicogenic headache. Complement. Ther. Med. 7(14), 13, 14, 19 (1999)
36. R. Rastogi, D.K. Chaturvedi, S. Satya, N. Arora, H. Saini, H. Verma, K. Mehlyan, Comparative efficacy analysis of electromyography and galvanic skin resistance biofeedback on audio mode for chronic TTH on various indicators, in The Proceedings of ICCIIoT-2018, 14–15 Dec 2018, NIT Agartala, Tripura, ELSEVIER-SSRN Digital Library (ISSN 1556-5068) (2018j)
37. What is biofeedback? Association for Applied Psychophysiology and Biofeedback 12(5), 12 (2008)
38. R. Rastogi, D.K. Chaturvedi, S. Satya, N. Arora, I. Bansal, V. Yadav, Intelligent analysis for detection of complex human personality by clinical reliable psychological surveys on various indicators, in The National Conference on 3rd MDNCPDR-2018 at DEI, Agra, 06–07 Sept 2018 (2018k)
39. R. Yadav, V. Koushal, P. Aggarwal, V. Saini, R. Sharma, The interrelationship of positive mental and physical health: a health promoting approach. Indian J. Posit. Psychol. 3(1), 1–5 (2012)

Residual Exploration into Apoptosis of Leukemic Cells Through Oncostatin M: A Computational Structural Oncologic Approach Arundhati Banerjee, Rakhi Dasgupta and Sujay Ray

Abstract Oncostatin M (OSM) targets cells through the formation of a triple protein complex involving gp130 and OSMR (oncostatin M receptor). This leads to sequential triggering of Jak/STAT pathways, so that signal transduction occurs efficiently for the apoptosis of leukemic cells. In this study, the essential 3D protein structures were docked among themselves to form the trio-protein complex. After optimization of the best docked structure, a comparative analysis was undertaken to examine its stable conformation and firm interactions. If residual-level investigation of the binding patterns is not performed, a void remains in the research, as efficient drug targeting then holds a risk. ΔG values and other stability parameters indicated a steadier and more spontaneous interaction for the optimized protein complex. Gp130 showed an improvement in β-sheet formation at the expense of coil-like conformation. Glu and Asp residues from the OSMR protein formed more than 50% of the ionic bonds with gp130 and OSM. OSMR acted as a linker peptide between OSM and gp130 (which formed one ionic interaction with OSM). Several Phe-Phe aromatic interactions were accomplished by OSMR, rendering additional strength. Phe4 and Phe7 from OSMR played a double role through aromatic–aromatic and cation–π interactions. Above 83% of the cation–π interactions were exhibited by the OSMR protein. Altogether, all statistically validated evaluations affirmed the optimized protein complex to be the more stable and firmer one. This study lays a foundation, through a molecular-level computational approach in oncology, for the apoptosis of human leukemic cells. It would additionally instigate drug-discovery-centered research.

A. Banerjee · R. Dasgupta (B) Department of Biochemistry and Biophysics, University of Kalyani, Kalyani, Nadia, WB, India e-mail: [email protected] A. Banerjee e-mail: [email protected] S. Ray (B) Amity Institute of Biotechnology, Amity University, Kolkata, WB, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988, https://doi.org/10.1007/978-981-13-8222-2_26


Keywords Leukemic cells · Oncostatin M · Protein–protein interactions · ΔG values · Binding patterns · Statistical significance

1 Introduction

IL-6, oncostatin M (OSM) and leukemia inhibitory factor (LIF), forming the "interleukin 6 hematopoietic cytokine family," play an efficient role in coordinating the initiation and subsequent development of inflammatory reactions and in managing homeostatic phenomena [1]. These roles have been analyzed through the effects of expression of the transgenic cytokine members, knockout of the respective cytokine genes in mice [1–4], or through in vivo experiments with activity-counteracting antibodies or pharmacological dosages of cytokines [1, 5, 6]. Stimulated monocytes and lymphocytes are responsible for the production of OSM at inflammatory sites and act on the cells of the stroma [1, 7]. OSM and LIF have been shown to act upon their target cells through the formation of a triple protein complex comprising their individual receptor proteins and a common gp130 protein [1, 8]. Thus, upon interaction with OSMR and gp130, OSM mediates its biological effects, forming the Type II receptor-mediated pathway [1]; the Type I pathway of the same system has been studied in our previous investigations [9]. This triple protein interaction leads to cytokine signaling. Cytokines are among the regulatory molecules that synchronize immune responses. A cytokine receptor protein forms a steady combination with Janus kinase (JAK), a typical cytoplasmic tyrosine kinase, and this cellular signaling mechanism rapidly turns on a series of genes. JAK's tyrosine kinase activity is triggered only when the regulator protein interacts with the receptor protein and leads to its dimer formation [10, 11]. This brings the two JAKs closer to each other, whereby they phosphorylate themselves and further phosphorylate the receptor.
The phosphorylated tyrosine residues on the receptor molecule then serve as binding sites for a particular group of proteins known as signal transducer and activator of transcription (STAT) proteins [10, 12]. As soon as a STAT interacts with the receptor protein, it is phosphorylated by JAK. After this phosphorylation, a STAT dimer is formed, which is an active transcription factor. It is then translocated to the nucleus to bind to a specific DNA sequence on the promoter of a particular gene [10, 12]. As a result, the cytokine activates a particular set of genes to effect the cellular signaling mechanism. Alternatively, these triggered receptors can also activate the "mitogen-activated protein kinase (MAPK) pathways" [10, 13] and "PI3K/AKT pathways" [10, 14]. In the present study, the three essential proteins (OSM, OSMR and gp130) were analyzed through their tertiary (3D) structures. OSM and gp130 were extracted from their experimentally validated crystallographic structures, while the amino acid sequence of the OSM receptor protein underwent a molecular modeling procedure. Validation of the stereo-chemical characteristics was performed. Energy

Residual Exploration into Apoptosis of Leukemic Cells …


optimization of the modeled 3D protein structure was performed to attain a stable conformation nearer to its native state. The protein monomers then underwent sequential protein–protein docking, first forming a duo complex and then a triple complex. The best protein complex model was selected for further studies, and energy minimization of the final protein complex was also executed. Thermodynamic stability was assessed through ΔG values. Further, the strong binding patterns and the interacting residues in the trio-protein complex were observed, and electrostatic surface potential evaluations were performed. Earlier studies [15–18] document molecular-level analyses for many such diseases and disease-associated proteins, but proteins associated with the progression of leukemic cells had not previously been examined in this way. In brief, this computational exploration focuses on the molecular basis of the cellular signaling pathways driving leukemic cell progression. This residue-level study should therefore benefit upcoming clinical analyses, and it may open future scope for identifying small modulators and/or specific drugs acting on OSM and its receptor to diminish their propensity to promote progression of leukemic cells.

2 Materials and Methods

2.1 Sequence Analysis of Human OSMR and Its Template Search

The amino acid sequence of OSMR (Homo sapiens) was extracted from NCBI (Accession No. AAH63468.1). For the template search, the sequence was subjected to PSI-BLAST [19] against the PDB [20]. The search did not yield a satisfactory template for homology modeling: the best hit was the protein with PDB ID 2Q7N, chain A, with 93% query coverage but only 31% sequence identity. As the sequence identity between template and target was too low, homology modeling was not chosen for modeling the protein.
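The study used the PSI-BLAST web service; an equivalent local template search with the NCBI BLAST+ toolkit could be sketched as below. The flags are standard BLAST+ options, but the query filename and database name are illustrative assumptions, not the authors' exact setup.

```python
import shlex

def psiblast_command(query_fasta, db="pdbaa", iterations=3, evalue=0.001):
    """Assemble a BLAST+ psiblast command line for a template search
    against a PDB-derived sequence database."""
    return [
        "psiblast",
        "-query", query_fasta,
        "-db", db,
        "-num_iterations", str(iterations),
        "-evalue", str(evalue),
        # tabular output: template id, % identity, % query coverage, E-value
        "-outfmt", "6 sseqid pident qcovs evalue",
    ]

# The query file name is hypothetical (OSMR sequence, NCBI accession AAH63468.1)
cmd = psiblast_command("osmr_AAH63468.fasta")
print(shlex.join(cmd))
```

Hits with low identity (such as the 31% found here) would then be flagged as unsuitable for homology modeling, motivating the threading approach of Sect. 2.2.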

2.2 Molecular Modeling of the OSMR Protein

Given the unsatisfactory template search outcome, a fold recognition technique was chosen for modeling the protein. I-TASSER ("Iterative Threading ASSEmbly Refinement") models the 3D structures of target proteins from their amino acid sequences [21]. It identifies structural templates from the PDB through fold recognition, also known as threading [21]. First, the fragmented structures obtained from the available templates were reassembled through fold recognition [21]. After that, through the execution of "replica


A. Banerjee et al.

exchange Monte Carlo simulations" [21], the full-length model of the required OSMR protein was built, yielding a stable, near-native structural conformation.

2.3 Loop Optimization of OSMR Protein

To optimize the loop regions of the modeled OSMR protein, the distortions in those regions had to be eradicated. Not only the deformities in the loop regions, but also the discrepancies caused by certain inappropriate ψ–ϕ angle conformations needed to be removed. For this purpose, ModLoop was used [22], producing a loop-optimized structure of the OSMR protein.

2.4 Protein Refinement for OSMR

The OSMR protein was energy refined using ModRefiner [23]. The OSMR protein model was refined with varied force fields, and a high-resolution algorithm was utilized to correct unstable conformations, thereby improving the accuracy of the model [23].

2.5 Stereo-Chemical Validation for OSMR

The ERRAT value was calculated using SAVES v.5 [24]. The LG-score and MaxSub score of the protein were calculated using ProQ [25]; an LG-score above 1.5 and a MaxSub score above 0.1 indicate a fairly good model [25]. In this study, they were found to be 1.19 and 0.15, respectively. The Z-score from ProSA was calculated to be −4.8 for the OSMR model [26]. The Ramachandran plot for the model showed more than 80% of residues in the core regions, with no residues in the disallowed regions [27].

2.6 Structural Analysis of OSM and Human Gp130

The X-ray crystallographic structures of OSM and gp130, both from Homo sapiens, were found in the Protein Data Bank with PDB IDs 1EVS (chain A) [28] and 3L5I (chain A) [9, 29], respectively. The crystallographic resolutions were 2.2 Å for OSM and 1.9 Å for gp130.


2.7 Protein–Protein Docking Simulations for OSMR, OSM and Gp130, Sequentially

Using ClusPro 2.0 [30], OSM and OSMR were docked first. The resulting duo-protein complex was then docked with the gp130 protein to form the trio-protein complex. After docking, a simulation was performed to remove steric clashes and bring the complex closer to its native state. Ten docked protein complexes were obtained, and the complex with the best electrostatic energy, desolvation energy, etc. was taken forward for further evaluation [30]. In forming both the duo- and the trio-protein complex, the longer protein was taken as the receptor and the other as the ligand in each case.

2.8 Minimization of the Final Trio-Protein Complex Using GROMACS

The final trio-protein complex was energy minimized to adjust the dihedral angle conformations of the complex so that they fluctuate concurrently [31]. The steepest descent technique was applied first, followed by the conjugate gradient method, using GROMACS [32].
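The two-stage scheme (a coarse steepest-descent pass followed by conjugate gradient refinement) can be illustrated on a toy quadratic "energy." This is purely a pedagogical sketch and does not reproduce the GROMACS force-field minimization used in the study.

```python
def energy(p):
    """Toy quadratic energy surface E(x, y) = x^2 + 10*y^2."""
    return p[0] ** 2 + 10.0 * p[1] ** 2

def grad(p):
    return (2.0 * p[0], 20.0 * p[1])

def steepest_descent(p, step=0.04, iters=50):
    """Stage 1: plain steepest descent with a fixed step size."""
    for _ in range(iters):
        g = grad(p)
        p = (p[0] - step * g[0], p[1] - step * g[1])
    return p

def conjugate_gradient(p, iters=2):
    """Stage 2: linear conjugate gradient with exact line search.
    The Hessian of E is diag(2, 20), so two steps suffice in 2-D."""
    A = (2.0, 20.0)
    r = tuple(-g for g in grad(p))  # residual = negative gradient
    d = r
    for _ in range(iters):
        dAd = A[0] * d[0] ** 2 + A[1] * d[1] ** 2
        if dAd == 0.0:
            break
        alpha = (r[0] ** 2 + r[1] ** 2) / dAd
        p = (p[0] + alpha * d[0], p[1] + alpha * d[1])
        r_new = (r[0] - alpha * A[0] * d[0], r[1] - alpha * A[1] * d[1])
        beta = (r_new[0] ** 2 + r_new[1] ** 2) / (r[0] ** 2 + r[1] ** 2)
        r = r_new
        d = (r[0] + beta * d[0], r[1] + beta * d[1])
    return p

start = (3.0, 2.0)
coarse = steepest_descent(start)      # rough minimization stage
refined = conjugate_gradient(coarse)  # refinement stage
```

The same motivation applies to molecular systems: steepest descent robustly removes large clashes, after which conjugate gradient converges faster near the minimum.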

2.9 Analysis for Thermodynamic Stability and Strength in Interaction

To calculate the thermodynamic energy (ΔG) value of the entire protein complex, the DFire energy was calculated [33]. It evaluates the energy of non-bonded dipole–dipole interactions [33, 34]. A more negative ΔG value indicates that a more spontaneous and stable interaction has taken place [33, 34]. To investigate the strength of the interaction pattern, the net area for solvent accessibility was evaluated for the protein complexes before and after minimization [35]. Lower values indicate that more residues have participated in the interaction, making the complex stronger. Further, the electrostatic potential values were calculated upon the surface of gp130 using the vacuum electrostatics function of PyMOL [36]. More negative electrostatic surface potential values indicate a stronger affinity for interaction with the partner proteins.
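The solvent-accessibility criterion can be made concrete through the buried surface area: the smaller the complex's accessible area relative to the sum of the free monomers, the larger the interface. In the sketch below the free-monomer SASA values are hypothetical placeholders; only the two complex values come from Table 1 of this study.

```python
def buried_surface_area(free_monomer_sasa, complex_sasa):
    """BSA = (sum of solvent-accessible surface areas of the free
    monomers) - (SASA of the assembled complex), in square Angstroms."""
    return sum(free_monomer_sasa) - complex_sasa

# Hypothetical free-monomer SASAs for OSM, OSMR and gp130 (illustrative only)
free = [12000.0, 11000.0, 9000.0]

bsa_before = buried_surface_area(free, 29864.02)  # complex SASA before optimization
bsa_after = buried_surface_area(free, 29230.61)   # complex SASA after optimization
```

With fixed monomer areas, the 633.41 Å² drop in complex SASA after optimization translates directly into 633.41 Å² of additional buried interface, which is the sense in which "lower values indicate a stronger complex."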


2.10 Comparative Analysis of Conformational Switches in Gp130

The fluctuations in the gp130 protein (before and after minimization of the trio complex) were evaluated and compared using the DSSP algorithm [37, 38]. PyMOL [36] and the Discovery Studio package (Accelrys) helped to obtain a consensus outcome. From earlier documentation, it is known that proteins with an increase in β-sheet content show a steadier and firmer interaction pattern with a stable conformation [39].
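The before/after comparison reduces to counting per-residue secondary-structure states. A minimal sketch on DSSP-style one-letter strings follows; the strings themselves are invented to mimic the reported trend (sheet up, coil down), not the real gp130 assignment.

```python
def ss_percentages(ss):
    """Percent of residues per class in a DSSP-style string:
    E = beta strand, H = alpha helix, everything else counted as coil."""
    n = len(ss)
    sheet = 100.0 * ss.count("E") / n
    helix = 100.0 * ss.count("H") / n
    return {"sheet": sheet, "helix": helix, "coil": 100.0 - sheet - helix}

# Invented assignments for the same 20 residues before and after optimization
before = "CCEEEECCCCEEEECCCCCC"  # 8 of 20 residues in strand
after = "CCEEEEECCCEEEEECCCCC"   # 10 of 20 residues in strand
```

Comparing `ss_percentages(before)` and `ss_percentages(after)` shows the sheet fraction rising at the expense of coil, which is exactly the comparison reported for gp130 in Sect. 3.4.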

2.11 Binding Pattern Evaluation and Residual Participation

To analyze the binding patterns and investigate the residual involvement of the respective proteins, the Protein Interactions Calculator (PIC) was used [40] on the final minimized protein complex. Similar outcomes were corroborated with PyMOL [36] and Discovery Studio 4.1 (Accelrys).

2.12 Statistical Evaluations and Significances Through T-Tests

All the evaluated outcomes were subjected to statistical analysis, with paired T-tests carried out for all the evaluations. A P-value below 0.05 denotes a statistically significant outcome.
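A paired T-test compares each quantity measured twice on the same object (here, before and after optimization of the same complex). The t statistic can be computed in a few lines; converting it to a P-value, as the study reports, additionally requires the t-distribution CDF from a statistics package. The sample values below are hypothetical.

```python
import math

def paired_t_statistic(x, y):
    """t = mean(d) / (sd(d) / sqrt(n)) for paired differences d = x - y."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical paired measurements (e.g., before vs after optimization)
t = paired_t_statistic([5.0, 6.0, 7.0], [4.0, 4.0, 4.0])
```

For these values the differences are 1, 2 and 3, giving t = 2√3; the P-value would then be read off the t distribution with n − 1 degrees of freedom.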

3 Results and Discussion

3.1 Structural Description of Modeled OSMR Protein

The satisfactorily modeled OSMR protein was found to be 215 amino acid residues long. The protein achieved a satisfactory RMSD value upon superimposition of its backbone atoms on the respective template. Moreover, the TM-score of the protein was found to be less than 0.9, indicating that the functionality of the OSMR protein was conserved in comparison with its native stable structure. The protein starts with a stretch of 36 residues (Met1 to Ser36) in coil conformation. Interspersed β-sheet conformations were present, with no helices; thirteen β-sheets were found, forming a β-barrel arrangement.


Fig. 1 Structural portrayal of modeled OSMR (Met1 till Cys215) with red β-sheets and marine shaded coils

The protein again ends with a stretch of coil regions. The 3D structure of the protein is demonstrated in Fig. 1 with marine β-sheets connected with red coils.

3.2 Structural Description of Crystallographic Structures of OSM and Gp130

The crystallographic structure of OSM was found to possess six helices interspersed with coil conformations; the protein starts with six residues in coil conformation and ends with one. The crystallographic structure of the gp130 protein is considerably longer, with 23 β-sheet segments forming three β-barrel arrangements. Helical conformations two to three residues long were observed within the first and third β-barrel arrangements only, and the β-barrels are interspersed with coil conformations; the protein ends with two residues in coil conformation. The detailed structures of OSM and gp130 are demonstrated in Fig. 2, with cyan helices connected with pink coils for OSM, and yellow sheets, red helices and green coils for gp130, respectively.

3.3 Evaluations from Thermodynamic Stability Parameters and Binding Strength

For the evaluation of ΔG values, the non-bonded atomic dipole–dipole interactions were taken into account. Before energy minimization, the protein complex exhibited a ΔG of −1334.1 kcal/mol, which became more negative (−1356.91 kcal/mol) after energy optimization (Table 1). This affirms that a more spontaneous and stable interaction takes place in the complex after optimization. Along with that, the net area for solvent accessibility decreased in the final optimized complex, implying that more residues participated in the interaction after the trio complex was optimized (Table 1). This indicates a strong interaction overall. In addition, when the electrostatic surface potential was evaluated upon the interacted gp130 structure, this outcome also supported a spontaneous and compact interaction pattern after optimization (Fig. 3): the negative electrostatic potential upon the gp130 surface changed from −39.315 units (before optimization) to −47.019 units (after optimization).

Fig. 2 Structural portrayal (through interactive structure) of OSM (Gly1 till Trp163) with cyan helices and pink coils, while gp130 (Glu1 till Lys286) is in yellow β-sheets, red helices and green shaded coils

Table 1 ΔG values and strength in interaction for the trio complex (OSM-OSMR-Gp130)

Stability parameters                        Before optimization   After optimization
ΔG (kcal/mol)                               −1334.1               −1356.91
Net area for solvent accessibility (Å²)     29,864.02             29,230.61

3.4 Conformational Fluctuations in Gp130 Protein

The conformational fluctuations in the gp130 protein were observed from its interacted complex structure, before and after optimization. An abrupt increase in β-sheet conformation from 44.8% to 47.6% was found, with a compensating decrease in the residues adopting coil-like conformation (Fig. 4). This further affirms the stable and firmer interaction pattern, as an increased β-sheet conformation leads to increased stability of a protein.

Fig. 3 Electrostatic potential upon the gp130 surface (from its interacted complex) before and after optimization (OSMR and OSM are represented as yellow and cyan ribbons)

Fig. 4 Conformational fluctuations in gp130 before and after optimization from the interacted complex

3.5 Residual Participation from the Individual Proteins in the Final Optimized Protein Complex

Examination of protein interactions is vital for studying the molecular basis of the interaction process. To explore the residues involved and the binding patterns between the proteins in the trio-protein complex, the protein–protein interactions were analyzed, which also helped in examining the varied binding patterns. The most predominant interactions were found to be ionic–ionic, aromatic–aromatic and cation-pi interactions. Earlier studies revealed that the cation group of a cation-pi pair is persistently involved in intermolecular hydrogen bonds [41, 42]. It was previously documented that side-chain arginine takes part in manifold protein interactions, and cation-pi interactions are among the chief participants at protein–protein interfaces [41, 42]. The conformational orientation as well as the packing of aromatic residues in a protein is governed by two key factors, of which electrostatic (ionic–ionic) interactions are paramount [43, 44]. Though these interactions are individually small, they are large in number and enthalpically favorable [43, 44]; altogether, they can contribute substantially to the improved stability of a protein or its complex. Additionally, aromatic–aromatic interactions are of particular interest because aromatic rings can interact with one another [45, 46]. Such interactions have been documented to contribute to the specificity of a protein complex, its favorable folding and its enhanced stability [45, 46]. Among the aromatic–aromatic interactions, the geometrical arrangement of the Phe-Phe interaction is preferred [45, 46]. Research shows that electrostatic interactions, together with Phe-Phe and other benzene-ring interactions, lend additional strength to protein interactions [45, 46]. In this study, six ionic–ionic interactions, five aromatic–aromatic interactions and six cation-pi interactions were observed.
The receptor protein formed two sole ionic–ionic interactions with Lys517 of gp130 through its glutamic acid residues at positions 28 and 125. Arg353 from OSM was also found to bond electrostatically with Asp120 of the receptor protein. Asp523 from gp130 additionally formed two sole ionic interactions with the OSM protein. Altogether, these ionic interactions created a charged pocket-like structure to accommodate the partner proteins. The predominant ionic interactions are shown in Fig. 5a and tabulated in Table 2. Supporting the documentation from earlier studies, three out of five aromatic–aromatic interactions were accomplished through Phe-Phe interactions. Of these three, Phe4 from the receptor protein formed two sole interactions with the OSM protein. Additional aromatic interactions were contributed by Phe12 from OSMR and Tyr338 from OSM with Tyr546 and Trp494 of the gp130 protein, respectively. The predominant aromatic–aromatic interactions are shown in Fig. 5b and tabulated in Table 2. Out of six cation-pi interactions, Phe4 and Phe7 (which contributed aromatic rings from OSMR) were found to form three interactions with arginine residues of the OSM protein. Phe11, a phenylalanine residue adjacent to the aforementioned Phe12, also participated with Lys544 from gp130, and another nearby Lys residue from gp130 was bonded through the benzene ring of Tyr21 from OSMR. Tyr535–Arg334 was the only cation-pi interaction between gp130 and OSM. The predominant cation-pi interactions are shown in Fig. 5c and tabulated in Table 2. Therefore, it can be stated in a nutshell that the above interactions, binding patterns and residual contributions led to a strong and firm interaction between OSM, OSMR and gp130 for inhibition of leukemic cells.

Fig. 5 a Ionic–ionic interactions in the optimized complex with black dotted lines indicating the bonds between the respective residues of the respective proteins. b Aromatic–aromatic interactions in the optimized complex, shown likewise. c Cation-pi interactions in the optimized complex, shown likewise

3.6 Evaluation of Statistical Significances

All the evaluated outcomes from this study were validated through paired T-tests. All the resulting P-values were found to be below 0.05, implying that the results are statistically significant. The ΔG value calculation, the net area for solvent accessibility and the conformational fluctuations showed P-values of 0.002463, 0.001328 and 0.02414, respectively.

Table 2 Tabulation of predominant ionic–ionic, aromatic–aromatic and cation-pi interactions from the optimized trio-protein complex

Binding patterns                   Position   Residue   Protein     Position   Residue   Protein
Ionic–ionic interactions           28         GLU       R           517        LYS       G
                                   120        ASP       R           353        ARG       O
                                   125        GLU       R           517        LYS       G
                                   295        HIS       O           523        ASP       G
                                   321        GLU       O           418        LYS       G
                                   327        ARG       O           523        ASP       G
Aromatic–aromatic interactions     4          PHE       R           268        PHE       O
                                   4          PHE       R           360        PHE       O
                                   7          PHE       R           351        PHE       O
                                   12         PHE       R           546        TYR       G
                                   338        TYR       O           494        TRP       G
Cation-pi interactions             4          PHE       R           296        ARG       O
                                   7          PHE       R           303        ARG       O
                                   7          PHE       R           353        ARG       O
                                   11         PHE       R           544        LYS       G
                                   21         TYR       R           569        LYS       G
                                   535        TYR       G           334        ARG       O

R, G and O represent OSMR, Gp130 and OSM proteins, respectively

4 Conclusion and Future Scope

The human oncostatin M (OSM) protein has been studied to form a triple protein complex through interaction with gp130 and OSMR (oncostatin M receptor), whereby OSM acts upon its target cells. The interaction of these three proteins leads to the cytokine signaling mechanism. Through tyrosine phosphorylation, Janus kinase family members are triggered, which further activates the signal transducer and activator of transcription (JAK/STAT) proteins for signal transduction. Through the formation of the triple protein complex, the cellular signaling mechanism is accomplished efficiently; this constitutes the "Type II Receptor Mediated Pathway." In this study, the 3D structures of the three proteins were taken into consideration. The proteins underwent protein–protein docking, and the best clustered trio complex was then energy optimized to attain a stable, near-native structural conformation. A comparative study was performed between the trio-protein complex before and after


optimization. The ΔG values for thermodynamic stability and the net area for solvent accessibility confirmed that a more stable, firmer and more spontaneous interaction takes place in the protein complex after optimization. An increased number of residues was observed to participate in the interaction, which led to the reduction in the net area for solvent accessibility. The electrostatic surface potential upon gp130 also supported the aforementioned inference of a spontaneous interaction. Conformational switches in gp130 indicated a steadier conformation owing to a marked increase in the percentage of residues forming β-sheet conformation at the expense of coil-like conformation. Residual involvement in the formation of the final optimized protein complex showed six ionic–ionic, five aromatic–aromatic and six cation-pi interactions to be the most predominant. Glu and Asp residues from the OSMR protein formed more than 50% of the ionic bonds with gp130 and OSM, allowing OSMR to create a charged cavity and act as a linker peptide between OSM and gp130. These interactions also promoted interactions solely between OSM and gp130. Earlier literature suggests that Phe-Phe interactions lend additional strength through aromatic–aromatic interactions; here, more than 60% of the aromatic–aromatic interactions were Phe-Phe interactions accomplished by the OSMR protein with its partners. Phe4 and Phe7 of OSMR were found to play a double role, through both aromatic–aromatic and cation-pi interactions. More than 83% of the cation-pi interactions were exhibited by the OSMR protein with its partner proteins; only one such interaction was observed between gp130 (Tyr535) and OSM (Arg334). Altogether, all the interactions, along with the statistically validated evaluations, indicate a stable, steady and more interactive optimized OSM-OSMR-gp130 complex.
Without this detailed molecular and structural analysis with protein interaction studies, efficient drug targeting would not be feasible. This study provides a rationale for investigation in the oncologic field through a molecular-level computational approach to the apoptosis of leukemic cells. It should therefore prompt future research on drug targeting and enrich the clinical and pharmaceutical domains.

Acknowledgments High gratefulness is rendered to the Department of Biochemistry and Biophysics, University of Kalyani for the support. The authors render their gratefulness to the DST PURSE (II) and DST-FIST programs at the University of Kalyani. The authors are also grateful to the Department of Biotechnology, Amity University, Kolkata for the cooperation and support.

Conflicts of Interest None

References 1. W. Yanping et al., Receptor subunit-specific action of Oncostatin M in hepatic cells and its modulation by Leukemia inhibitory factor. J. Biol. Chem. 275, 25273–25285 (2000) 2. M. Kopf et al., Impaired immune and acute-phase responses in interleukin-6-deficient mice. Nature 368, 339–342 (1994)


3. R.A. Gadient, P.H. Patterson, Leukemia inhibitory factor, interleukin 6, and other cytokines using the GP130 transducing receptor: roles in inflammation and injury. Stem Cells 17, 127–137 (1999) 4. C.H. Clegg, H.S. Haugen, J.T. Rulffes, S.L. Friend, A.G. Farr, Oncostatin M transforms lymphoid tissue function in transgenic mice by stimulating lymph node T-cell development and thymus autoantibody production. Exp. Hematol. 27, 712–725 (1999) 5. J.K. Loy, T.J. Davidson, K.K. Berry, J.F. MacMaster, B. Danle, S.K. Durham, Toxicol. Pathol. 27, 151–155 (1999) 6. P.M. Wallace, J.F. MacMaster, K.A. Rouleau, T.J. Brown, J.K. Loy, K.L. Donaldson, A.F. Wahl, Regulation of inflammatory responses by Oncostatin M. J. Immunol. 162, 5547–5555 (1999) 7. A. Grenier et al., Oncostatin M production and regulation by human polymorphonuclear neutrophils. Blood 93, 1413–1421 (1999) 8. C. Gabay, I. Kushner, Acute-phase proteins and other systemic responses to inflammation. N. Engl. J. Med. 340, 448–454 (1999) 9. A. Banerjee, R. Dasgupta, S. Ray, in Molecular and Protein Interaction Studies for Inhibiting Growth of Human Leukemic Cells: An in Silico Structural Approach to Instigate Drug Discovery (AISC Springer, 2017) (in press) 10. G. Dey et al., Signaling network of Oncostatin M pathway. J. Cell Commun. Signal 7(2), 103–108 (2013). https://doi.org/10.1007/s12079-012-0186-y 11. M. Tanaka, A. Miyajima, Oncostatin M, a multifunctional cytokine. Rev. Physiol. Biochem. Pharmacol. 149, 39–52 (2003). https://doi.org/10.1007/s10254-003-0013-1 12. L.K. Schaefer, S. Wang, T.S. Schaefer, Oncostatin M activates stat DNA binding and transcriptional activity in primary human fetal astrocytes: low- and high-passage cells have distinct patterns of stat activation. Cytokine 12, 1647–1655 (2000). https://doi.org/10.1006/cyto.2000. 0774 13. N.J. Van Wagoner, C. Choi, P. Repovic, E.N. 
Benveniste, Oncostatin M regulation of interleukin-6 expression in astrocytes: biphasic regulation involving the mitogen-activated protein kinases ERK1/2 and p38. J. Neurochem. 75, 563–575 (2000). https://doi.org/10.1046/j.1471-4159.2000.0750563.x 14. K. Arita et al., Oncostatin M receptor-beta mutations underlie familial primary localized cutaneous amyloidosis. Am. J. Hum. Genet. 82, 73–80 (2008). https://doi.org/10.1016/j.ajhg.2007.09.002 15. A. Banerjee, S. Ray, Molecular computing and structural biology for interactions in ERα and bZIP Proteins from Homo sapiens: an insight into the signal transduction in breast cancer metastasis. Adv. Intell. Syst. Comput. 404, 43–55 (2015). https://doi.org/10.1007/978-81-322-2695-6_5 16. A. Banerjee, S. Ray, Molecular modeling, mutational analysis and conformational switching in IL27: an in silico structural insight towards AIDS research. Gene 576(1), 72–78 (2016) 17. A. Banerjee, R. Dasgupta, S. Ray, Mutational impact on the interaction between human IL27 and gp130: in silico approach for defending HIV infection. Curr. HIV Res. 15(5), 327–335 (2017) 18. S. Ray, A. Banerjee, Comparative binding mode and residual contribution from Lactoferrins (bLF and hLF) and HIV Gp120: an in silico structural perspective to design potent peptide inhibitor for HIV. Curr. Enzym. Inhib. 13(3), 226–234 (2017) 19. S.F. Altschul et al., Basic local alignment search tool. J. Mol. Biol. 215, 403–410 (1990) 20. M.H. Berman et al., The protein data bank. Nucleic Acids Res. 28, 235–242 (2000). https://doi.org/10.1093/nar/28.1.235 21. R. Ambrish, K. Alper, Z. Yang, I-TASSER: a unified platform for automated protein structure and function prediction. Nat. Protoc. 5, 725–738 (2010) 22. A. Fiser, A. Sali, ModLoop: automated modeling of loops in protein structures. Bioinformatics 19(18), 2500–2501 (2003) 23. D. Xu, Y. Zhang, Improving the physical realism and structural accuracy of protein models by a two-step atomic-level energy minimization. Biophys. J. 101, 2525–2534 (2011). https://doi.org/10.1016/j.bpj.2011.10.024


24. C. Colovos, T.O. Yeates, Verification of protein structures: patterns of non-bonded atomic interactions. Protein Sci. 2, 1511–1519 (1993) 25. B. Wallner, A. Elofsson, Identification of correct regions in protein models using structural, alignment, and consensus information. Protein Sci. 15, 900–913 (2006) 26. M. Wiederstein, M.J. Sippl, ProSA-web: interactive web service for the recognition of errors in three-dimensional structures of proteins. Nucleic Acids Res. 35, W407–W410 (2007) 27. G.N. Ramachandran, V. Sasisekharan, Conformation of polypeptides and proteins. Adv. Protein Chem. 23, 283–438 (1968) 28. M.C. Deller, K.R. Hudson, S. Ikemizu, J. Bravo, E.Y. Jones, J.K. Heath, Crystal structure and functional dissection of the cytostatic cytokine oncostatin M. Struct. Fold. Des. 8, 863–874 (2000) 29. Y. Xu et al., Crystal structure of the entire ectodomain of gp130: insights into the molecular assembly of the tall cytokine receptor complexes. J. Biol. Chem. 285, 21214–21218 (2010) 30. S.R. Comeau et al., ClusPro: an automated docking and discrimination method for the prediction of protein complexes. Bioinformatics 20, 45–50 (2004) 31. B. Hess, C. Kutzner, D. Van Der Spoel, E. Lindahl, GROMACS 4: algorithms for highly efficient, load-balanced, and scalable molecular simulation. J. Chem. Theory Comput. 4(2), 435 (2008). https://doi.org/10.1021/ct700301q 32. B.R. Brooks, R.E. Bruccoleri, B.D. Olafson, D.J. States, S. Swaminathan, M. Karplus, CHARMM: a program for macromolecular energy, minimization, and dynamics calculations. J. Comp. Chem. 4(2), 187–217 (1983). https://doi.org/10.1002/jcc.540040211 33. Y. Yuedong, Z. Yaoqi, Specific interactions for ab initio folding of protein terminal regions with secondary structures. Proteins 72, 793–803 (2008) 34. M. Mina, V. Gokul, R. Luis, The role of electrostatic energy in prediction of obligate protein-protein interactions. Proteome Science 11, S11 (2013). https://doi.org/10.1186/1477-5956-11-S1-S11 35. M.A. Gerstein, Resolution-sensitive procedure for comparing protein surfaces and its application to the comparison of antigen-combining sites. Acta Cryst. A48, 271–276 (1992) 36. W.L. DeLano, The PyMOL Molecular Graphics System (DeLano Scientific, San Carlos, CA USA, 2002) 37. W. Kabsch, C. Sander, Dictionary of protein secondary structure: pattern recognition of hydrogen-bonded and geometrical features. Biopolymers 22(12), 2577–2637 (1983) 38. D.P. Klose, B.A. Wallace, W.J. Robert, 2Struc: the secondary structure server. Bioinformatics 26(20), 2624–2625 (2010) 39. D.T. Paul, D.A. Ken, Local and nonlocal interactions in globular proteins and mechanisms of alcohol denaturation. Protein Sci. 2, 2050–2065 (1993) 40. K.G. Tina, R. Bhadra, N. Srinivasan, PIC: protein interactions calculator. Nucleic Acids Res. 35, W473–W476 (2007) 41. D.A. Dougherty, J.C. Ma, The cation-pi interaction. Chem. Rev. 97(5), 1303–1324 (1997). https://doi.org/10.1021/cr9603744. PMID 11851453 42. P.B. Crowley, A. Golovin, Cation-pi interactions in protein-protein interfaces. Proteins 59(2), 231–239 (2005) 43. R.L. Baldwin, How Hofmeister ion interactions affect protein stability? Biophys. J. 71(4), 2056–2063 (1996) 44. S.K. Burley, G.A. Petsko, Amino-aromatic interactions in proteins. FEBS Lett. 203(2), 139–143 (1986) 45. K.M. Makwana, R. Mahalakshmi, Implications of aromatic–aromatic interactions: from protein structures to peptide models. Protein Sci. 24(12), 1920–1933 (2015). PMCID: PMC4815235. PMID: 26402741 46. L.M. Espinoza-Fonseca, J. García-Machorro, Aromatic-aromatic interactions in the formation of the MDM2-p53 complex. Biochem. Biophys. Res. Commun. 370(4), 547–551 (2008). https://doi.org/10.1016/j.bbrc.2008.03.053. Epub 2008 Mar 18

Part VII

Other Applications of Computational Intelligence

An Experimental Study of a Modified Version of Quicksort

Aditi Basu Bal and Soubhik Chakraborty

Abstract In the present work, a certain modified version of quicksort, Quicksort_wmb [1], has been taken into consideration. The authors of this new algorithm have claimed that the worst-case complexity of this algorithm is θ(n log n), which is, in fact, the best-case time complexity of ordinary quicksort. The authors have also given examples of its application to some random arrays. Our aim is to study the performance of Quicksort_wmb on input elements drawn from various continuous and discrete probability distributions. The performance is measured in terms of the number of comparisons the algorithm makes to sort the whole array. Here, we have assumed that comparisons are the dominant computer operation, i.e., the most time-consuming and therefore performance-determining operation in the algorithm. A number of common probability distributions—both continuous and discrete—were simulated to constitute the elements of a random unsorted list of numbers. Next, the aforementioned modified version of quicksort was applied to these arrays to sort the numbers. The number of comparisons required to sort the list was recorded for each distribution. The results obtained were very interesting: the continuous distributions were sorted faster than the discrete ones, the reason for which, after further investigation, was found to be the existence of ties in discrete distributions, thus providing evidence that this modified version of quicksort is sensitive to ties. The sensitivity of quicksort to ties is not new; what is interesting is that the sensitivity remains irrespective of the improvement.

Keywords Sorting · Quicksort · Quicksort_wmb · {discrete, continuous} probability distribution · Simulation

Mathematics Subject Classification 62P99

A. B. Bal · S. Chakraborty (B) Department of Mathematics, Birla Institute of Technology, Mesra, Ranchi 835215, India e-mail: [email protected] A. B. Bal e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988, https://doi.org/10.1007/978-981-13-8222-2_27


1 Introduction

The present paper studies a certain modified version of quicksort, namely Quicksort_wmb [1]. Our goal is to study the performance of Quicksort_wmb on input elements drawn from various continuous and discrete probability distributions. The performance is measured in terms of the number of comparisons (assumed to be the dominant operation here) the algorithm makes to sort the whole array. A number of common probability distributions—both continuous and discrete—were simulated to constitute the elements of a random unsorted list of numbers. Next, the aforementioned modified version of quicksort was applied on these arrays. The number of comparisons required to sort the list of numbers was recorded for each distribution.

2 State of the Art

Quicksort is one of the fastest sorting algorithms in existence. However, it has certain drawbacks, and many attempts at modifying it have been made so far. In the present work, a certain modified version of quicksort has been taken into consideration. It is called Quicksort_wmb, where "wmb" stands for "worst case made best case" [1]. As described in [1], the authors claim that this modified version of quicksort has a worst-case complexity of θ(n log n) and a best-case complexity of θ(n). This is a drastic improvement over the worst-case complexity of O(n²) and best-case complexity of O(n log n) of ordinary quicksort. This was achieved by introducing a simple change in the ordinary quicksort algorithm. Three global variables, no_part, Aorder and Dorder, indicating the number of partitions, an array in ascending order and an array in descending order, respectively, were introduced. These variables can identify, on returning from Quicksort_wmb after the first partition, whether the input array is in ascending or descending order, thus avoiding recursion later on. The authors have illustrated the application of this algorithm on arrays sorted in ascending order, arrays sorted in descending order, and random arrays. They have also encouraged future scholarly work on testing the application of their modified version of quicksort. In [2], the authors have questioned the robustness of quicksort. They have tested the time complexity of quicksort on six different probability distributions. It is suggested that the lack of support of the O(n log n) complexity for discrete distributions is due to the presence of ties (given that the probability of a tie is zero in inputs from continuous distributions). Also, according to David Karger, Computer Science Professor at MIT, "quicksort will not be affected by the distribution from which you draw (unless the distribution produces a lot of duplicate values)" [3].
This direction of thought is what has encouraged this investigative piece of work. The objective of this study has been to experimentally evaluate the performance of this modified quicksort algorithm on unsorted arrays whose elements are drawn randomly from particular probability distributions. Here, in order to define a quantity as a measure of the speed of the algorithm, we have assumed that the comparison operation is the most dominant computer operation, i.e., the most time-consuming operation. This has, in fact, been found to hold in most cases, and certainly in sorting algorithms, so we could safely make this assumption. We have illustrated how differently the Quicksort_wmb algorithm works on each of the continuous and discrete distributions and whether the existence of ties in discrete distributions has an effect on Quicksort_wmb similar to that reported in [2]. For further literature on algorithms and their theoretical and empirical analysis, see [4–19]. For a sound theoretical discussion on sorting for inputs from various probability distributions, we refer the reader to [20]. The following sections explain how we have applied the above-mentioned modified quicksort algorithm to study its performance on random input arrays drawn from some common probability distributions with particular parameters.

System Specifications
Processor: Intel Core i5-380M dual-core processor (2.53 GHz)
Hard Disk: 500 GB
RAM: 4 GB
Operating System: Windows 7
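The "worst case made best case" idea can be illustrated with a short sketch. This is not the authors' implementation (which sets the globals no_part, Aorder and Dorder during the first partition); it is a simplified Python sketch under the assumption that recognizing an already ascending or descending input in a single linear pass, and short-circuiting the recursion, is what yields the θ(n) best case:

```python
def sort_wmb_sketch(a):
    """Sort list a in place, short-circuiting on ascending/descending inputs.

    A simplified illustration of the 'worst case made best case' idea:
    an already sorted (or reverse-sorted) array is recognized in O(n)
    comparisons, avoiding quicksort's O(n^2) worst case on such inputs.
    """
    n = len(a)
    if all(a[i] <= a[i + 1] for i in range(n - 1)):  # ascending: nothing to do
        return a
    if all(a[i] >= a[i + 1] for i in range(n - 1)):  # descending: reverse in O(n)
        a.reverse()
        return a

    # Otherwise fall back to ordinary quicksort (Lomuto partition).
    def quicksort(lo, hi):
        if lo >= hi:
            return
        pivot, i = a[hi], lo - 1
        for j in range(lo, hi):
            if a[j] <= pivot:
                i += 1
                a[i], a[j] = a[j], a[i]
        a[i + 1], a[hi] = a[hi], a[i + 1]
        quicksort(lo, i)        # left of the pivot's final position
        quicksort(i + 2, hi)    # right of the pivot's final position
    quicksort(0, n - 1)
    return a
```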

3 Design of Computer Experiment

A computer experiment is a series of runs of a code for various inputs. In our case, the response variable is the computational complexity (comparisons, the dominant operation in this algorithm). Further literature on computer experiments can be found in [21]. Designing a computer experiment means selecting the input sites. Here, the inputs have to be from various probability distributions. The intended study was accomplished by the following procedure:

Step 1: Simulation of Probability Distributions
Altogether 11 different probability distributions—five continuous, six discrete—were simulated using the algorithms described below. The values of the parameters, if any, considered for simulation in this study are mentioned at the beginning of each algorithm.

A. Continuous Distributions
a. Algorithm to generate a Uniform variate U in the range (0,1)
• Start
• Set U = rand()/RAND_MAX
• Return U
• End
b. Algorithm (inversion method) to generate a Standard Normal variate Z
• Start
• Generate Uniform variate U
• Set Z = −log((1/U) − 1)/1.702
• Return Z
• End
c. Algorithm to generate a Log Normal variate L
• Start
• Generate Standard Normal variate Z
• Set L = exp(Z)
• Return L
• End
d. Algorithm to generate an Exponential variate E(λ = 0.5)
• Start
• Generate Uniform variate U
• Set E = −λ log(1 − U)
• Return E
• End
e. Algorithm to generate a Cauchy variate C
• Start
• Generate Standard Normal variates Z1 and Z2
• Set C = Z1/Z2
• Return C
• End

B. Discrete Distributions
a. Algorithm to generate a Uniform variate U in the range (min = 1, max = 10000)
• Start
• Set U = rand() % (max − min) + min
• Return U
• End
b. Algorithm to generate a Binomial variate B(n = 100, p = 0.25)
• Start
• Set B = 0
• For k = 1 to n
• Generate a Continuous Uniform variate U
• If U > p
• Set B = B + 1
• End if
• End for
• Return B
• End
c. Algorithm to generate a Poisson variate P(λ = 6)
• Start
• Generate U(0,1) random variables U1, U2, …
• Let I be the smallest index such that U1 × U2 × ⋯ × U(I+1) < e^(−λ)
• Set P = I
• End
d. Algorithm to generate a Geometric variate G
• Start
• Generate independent Bernoulli (p = 0.2) random variables Y1, Y2, …
• Let I be the index of the first successful one, so that YI = 1
• Set G = I − 1
• End
e. Algorithm to generate a Negative Binomial variate X(N = 25, p = 0.25)
• Start
• Set count = 0
• Set X = 0
• do
• Set X = X + 1
• Generate Continuous Uniform variate U
• If U > p
• Set count = count + 1
• End if
• while (count < N)
• Return X
• End
f. Algorithm to generate a Hypergeometric variate X(n = 100000, n1 = 25000), where k is the number of draws
• Start
• Set T = n
• Set T1 = n1
• Set X = 0
• Set J = 0
• do
• Set J = J + 1
• Generate Continuous Uniform variate U
• If U ≤ T1/T
• Set X = X + 1
• Set T1 = T1 − 1
• End if
• Set T = T − 1
• while (J < k)
• Return X
• End

Step 2: Construction of Unsorted Arrays and Application of Quicksort_wmb
Eleven different arrays of length 500,000 each were constructed with random elements drawn from each of the 11 above-mentioned distributions, respectively. Quicksort_wmb, the version of quicksort under consideration, is applied on each of these
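For concreteness, a few of the generators above can be sketched in Python (our sketch, not the authors' C code; the standard-normal step uses the same logistic approximation Z = −log((1/U) − 1)/1.702 as the paper's algorithm):

```python
import math
import random

def uniform01():
    """Continuous Uniform(0,1) variate."""
    return random.random()

def std_normal():
    """Approximate standard normal via the logistic inversion used in the paper."""
    u = uniform01()
    return -math.log((1.0 / u) - 1.0) / 1.702

def exponential(lam=0.5):
    """Exponential variate by inversion, as in the paper's algorithm."""
    return -lam * math.log(1.0 - uniform01())

def poisson(lam=6.0):
    """Poisson variate: smallest I such that U1 * ... * U(I+1) < exp(-lam)."""
    threshold, prod, i = math.exp(-lam), 1.0, 0
    while True:
        prod *= uniform01()
        if prod < threshold:
            return i
        i += 1
```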

arrays to sort them. Adjustments are made to the Quicksort_wmb program by adding a global variable, counter, which is incremented every time a comparison operation occurs. Thus, the number of comparisons required to sort each of these arrays is recorded.

Step 3: Recording of Data
Step 2 is repeated 100 times, and the mean number of comparisons for each distribution is calculated using the formula

Mean = (1/100) Σ_{i=1}^{100} number of comparisons(i)

The standard deviation (SD) for each distribution is calculated using the formula

SD = sqrt( (1/100) Σ_{i=1}^{100} (number of comparisons(i) − Mean)² )

The coefficient of variation (CV) for each distribution is calculated using the formula

CV = SD / Mean
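The three statistics above can be computed directly; a short Python sketch (the function name is ours):

```python
import math

def summary_stats(counts):
    """Return (mean, population SD, CV) of a list of comparison counts,
    using the formulas from Step 3."""
    n = len(counts)
    mean = sum(counts) / n
    sd = math.sqrt(sum((c - mean) ** 2 for c in counts) / n)  # population SD
    return mean, sd, sd / mean

# Example on eight hypothetical comparison counts:
mean, sd, cv = summary_stats([2, 4, 4, 4, 5, 5, 7, 9])
# mean = 5.0, sd = 2.0, cv = 0.4
```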

Remark: While the mean is a measure of average and the SD is a measure of dispersion, the CV is a special measure of dispersion in that it combines the mean and SD to yield a useful measure of consistency in the data: the lower the CV, the more consistent the data.

4 Analysis of the Computer Experiment and Discussion

The mean number of comparisons, standard deviation (SD), and coefficient of variation (CV) for each distribution are presented in Table 1, and the related bar graph of the mean number of comparisons is given in Fig. 1 for comparative study.

Key
CU Continuous uniform distribution
SN Standard normal distribution
LN Lognormal distribution
E Exponential distribution
C Cauchy distribution
DU Discrete uniform distribution
B Binomial distribution
P Poisson distribution


Table 1 Number of comparisons: mean, SD and CV for various distributions

Distribution         Mean number of comparisons   Standard deviation   Coefficient of variation
Continuous uniform   1398648.63                   32542.52             0.023267
Standard normal      1396909.5                    26442.95             0.01893
Log normal           1434280.13                   26739.97             0.018643
Exponential          1396405.75                   20636.24             0.018645
Cauchy               1377744.13                   26898.38             0.019523
Discrete uniform     1491922.13                   27677.3              0.018551
Binomial             16704943                     77083.41             0.004614
Poisson              14907860                     131775.34            0.008839
Geometric            1384520.13                   30429.27             0.021258
Negative binomial    16664444                     64537.66             0.003873
Hypergeometric       15007112                     47473.72             0.003163

[Fig. 1 Bar graph of the mean number of comparisons for Quicksort_wmb across the distributions CU, SN, LN, E, C, DU, B, P, G, NB and HG]

G Geometric distribution
NB Negative binomial distribution
HG Hypergeometric distribution

The experimental results show that, in general, the algorithm sorts continuous distributions faster and discrete distributions slower. However, it was found to sort the discrete uniform distribution faster than the other discrete distributions, although not faster than the continuous uniform distribution. Since the probability of a tie is 0 in the continuous case, we suspected that the algorithm is perhaps sensitive to ties and that more ties slow it down. Since the discrete distributions in our experiment had a small range, we decided to increase the range so as to have fewer ties. Initially, the range for the discrete uniform distribution was fixed at (1, 10000) (Fig. 1), and the mean number of comparisons required to sort the array came to 1491922.13. The range was then increased to (1, 100000), and as a consequence, the mean number of comparisons dropped to 1395671.00. Furthermore, we decreased the range of the discrete uniform distribution from the initial (1, 10000) to (1, 1000), which resulted in more duplicate elements in the array. Now, the mean number of comparisons increased to 3530115.00. These results reveal that the number of comparisons drops drastically as the range of input for the discrete uniform distribution increases. This confirms that the algorithm is indeed sensitive to ties and that a greater number of ties has an adverse effect on it. For the same reason, the continuous distributions, where the probability of a tie is zero, are sorted faster.
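The effect of ties can be reproduced with an ordinary comparison-counted quicksort as well. The sketch below is our own instrumentation (a plain Lomuto-partition quicksort, not Quicksort_wmb) and contrasts an all-equal array with an array of effectively distinct values:

```python
import random

comparisons = 0  # global comparison counter, as in Step 2 of the experiment

def quicksort(a, lo=0, hi=None):
    """In-place quicksort (Lomuto partition) counting element comparisons."""
    global comparisons
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot, i = a[hi], lo - 1
    for j in range(lo, hi):
        comparisons += 1          # count every element comparison
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[hi] = a[hi], a[i + 1]
    quicksort(a, lo, i)
    quicksort(a, i + 2, hi)

def count_comparisons(a):
    """Sort a in place and return the number of comparisons used."""
    global comparisons
    comparisons = 0
    quicksort(a)
    return comparisons

random.seed(7)
distinct = random.sample(range(10**6), 200)  # effectively no ties
tied = [42] * 200                            # nothing but ties
# With this partition scheme, the tied array needs far more comparisons
# (quadratically many) than the distinct one.
```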

5 Conclusion and Future Work

This study shows that the number of comparisons (on which sorting time depends) required by the modified quicksort algorithm under consideration to sort arrays drawn from the 11 distributions is in the following order: Cauchy < Geometric < Exponential < Standard Normal < Continuous Uniform < Lognormal < Discrete Uniform < Poisson < Hypergeometric < Negative Binomial < Binomial. This is because the algorithm is sensitive to ties: the greater the number of ties, the slower the array is sorted. The probability of getting duplicate values in continuous distributions is 0. Naturally, the high number of ties in most of the discrete distributions slows down their sorting, as can be seen in the results obtained. Thus, it can be concluded that the modified version of quicksort described in [1] may not always perform well and that its performance depends on the parameters of the input elements to be sorted. The case of the geometric distribution can be argued on similar lines: since the probability of success p was kept small (0.2), more failures are likely before the first success, thereby increasing the practical range of the variate and reducing the number of ties, as a consequence of which the sorting time decreases. As a suggestion for future work, the results obtained in this experimental study can be further substantiated by a theoretical analysis of Quicksort_wmb showing how the algorithm is expected to behave when there is a large number of ties in the input elements. It appears from previous studies on different versions of quicksort and from the present findings that all versions, no matter what the improvements are, are sensitive to ties. That said, we close this paper by posing an open research problem: is it possible to modify quicksort such that it is no longer sensitive to ties, and if so, what bearing would this have on the sorting time?
Acknowledgements The authors thank an anonymous referee for the constructive criticism.

Ethical Statement The authors hereby declare that this research was not funded by any funding agency. They further declare that the work is new and that they do not have any conflict of interest.


References

1. O.K. Durrani, S.A.K. Nazim, Modified quick sort: worst case made best case. Int. J. Emerg. Technol. Adv. Eng. 5(8) (August 2015). www.ijetae.com, ISSN 2250-2459
2. S. Chakraborty, S.K. Sourabh, How robust is quicksort average complexity? in A Computer Experiment Oriented Approach to Algorithmic Complexity (Lambert Academic Publishing, 2010)
3. D. Karger, answer (24 Jan 2016) to "Algorithms: How do statistical distributions affect sorting techniques?" www.quora.com
4. S. Sahni, Data Structures and Algorithms in C++ (University Press, 2004), Chapter 4: Performance Measurement, p. 123
5. A. Levitin, Introduction to the Design and Analysis of Algorithms (Addison-Wesley, Boston, MA, 2007)
6. T.H. Cormen, C.E. Leiserson, R.L. Rivest, C. Stein, Introduction to Algorithms, 2nd edn. (Prentice-Hall, New Delhi, 2004)
7. V. Sharma, P.S. Sandhu, S. Singh, B. Saini, Analysis of modified heap sort algorithm on different environment. World Acad. Sci. Eng. Technol. 42 (2008)
8. Y. Langsam, M.J. Augenstein, A.M. Tenenbaum, An Introduction to Data Structures with C++, 2nd edn. (Prentice Hall India Learning Private Limited, 2008)
9. G. Soileau, M. Younus, S. Nandlall, T. Jenkins, T. Ngouolali, T. Rivers, Sorting algorithm analysis, in Data Structures and Algorithms (SMT-274304-01-08FA1), Professor James Iannibelli, 21 Dec 2008
10. O.K. Durrani, V. Shreelakshmi, S. Shetty, Performance measurement and analysis of sorting algorithms, in National Conference on Convergent Innovative Technologies and Management (CITAM-11), 2–3 Dec 2011, Cambridge Institute of Technology and Management, Bengaluru
11. D.E. Knuth, The Art of Computer Programming, Volume 3: Sorting and Searching, 2nd edn. (Addison-Wesley, 1998), ISBN 0-201-89685-0, pp. 106–110 of Section 5.2.2: Sorting by Exchanging
12. S. Chakraborty, P.P. Choudhury, A statistical analysis of an algorithm's complexity. Appl. Math. Lett. 13(5), 121–126 (2000)
13. S. Chakraborty, S.K. Sourabh, On why an algorithmic time complexity measure can be system invariant rather than system independent. Appl. Math. Comput. 190(1), 195–204 (2007)
14. R. Sedgewick, The analysis of quicksort programs. Acta Informatica 7(4), 327–355 (1977)
15. C.A.R. Hoare, Quicksort. Comput. J. 5, 10–15 (1962)
16. C.A.R. Hoare, Partition (Algorithm 63); Quicksort (Algorithm 64); Find (Algorithm 65). Comm. ACM 4, 321–322 (1961). [See also certification by J.S. Hillmore in Comm. ACM 5, 439 (1962) and by B. Randell and L.J. Russell in Comm. ACM 6, 446 (1963)]
17. R.S. Scowen, Quickersort (Algorithm 271). Comm. ACM 8, 669–670 (1965). [See also certification by C.R. Blair in Comm. ACM 9, 354 (1966)]
18. M.N. van Emden, Increasing the efficiency of quicksort (Algorithm 402). Comm. ACM 13, 693–694 (1970). [See also the article by the same name in Comm. ACM 13, 563–567 (1970)]
19. M. Buot, Probability and computing: randomized algorithms and probabilistic analysis. J. Am. Stat. Assoc. 101(473) (2006)
20. H.M. Mahmoud, Sorting: A Distribution Theory (Wiley, 2000)
21. K.T. Fang, R. Li, A. Sudjianto, Design and Modeling of Computer Experiments (Chapman and Hall/CRC, 2015)

An Improved Pig Latin Algorithm for Lightweight Cryptography Sandip Dutta and Sanyukta Sinha

Abstract The security of transmitted data is of paramount significance in the exchange and storage of information. Cryptography provides a solution to many real-world data security problems over digital communication channels. While cryptographic algorithms are not the only requirement for providing security, they are the first step toward securing data transmission and communication. Cryptography is the science of keeping messages secure from unauthorized recipients. In this paper, we present a new cryptographic algorithm which is an amalgamation of Pig Latin, a popular argot or jargon, and RSA, one of the most popular asymmetric key cryptosystems. The modified Pig Latin adds a layer of encryption on top of RSA, thereby providing better security than RSA alone. The practical performance of the algorithm is analyzed in terms of its empirical computational complexity, and its running time is compared with that of a present cryptographic algorithm, AES-128. Because of its simplicity, the algorithm can be implemented for messaging applications used in mobile phones and to secure data stored on other small storage devices.

Keywords Cryptography · RSA · AES-128 · Empirical analysis

S. Dutta (B) Department of Computer Science and Engineering, Birla Institute of Technology, Mesra 835215, India e-mail: [email protected] S. Sinha Department of Computer Science Engineering and Mathematics, Birla Institute of Technology and Science Pilani, Goa Campus, Sancoale 403726, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2020 S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988, https://doi.org/10.1007/978-981-13-8222-2_28


1 Introduction and Motivation

Cryptography involves converting a message (plaintext) into a disguised form (ciphertext) so that only the intended recipient can remove the disguise and read the message. On the basis of how keys are employed for encryption and decryption, cryptographic algorithms fall into two categories, viz. secret-key or symmetric key cryptography, and public-key or asymmetric key cryptography. Symmetric key cryptosystems use a single key for both encryption and decryption and are primarily focused on providing "privacy" and "confidentiality." AES-128/192/256, DES, CAST-128/256, Blowfish and Twofish are some popular examples of symmetric key cryptosystems. Asymmetric key cryptography uses two distinct keys, viz. a public key and a private key, for encryption and decryption, and is primarily focused on providing "authentication," "non-repudiation" and "key exchange." Public keys are used to encrypt the plaintext into an unreadable form which can be decrypted only by using the private key possessed by the intended recipient. The RSA algorithm, ElGamal cryptosystems and the Diffie–Hellman key exchange algorithm are well-known examples of asymmetric key cryptography. These are mainly based on complex mathematical problems which are hard to reverse, such as the integer factorization problem and exponentiation versus logarithm problems. Information technology has seen a rapid rise in the last few years. Most information exchange today takes place over digital communication media. There are several social networks where people communicate and where their privacy is put at risk. Many of them promise that messages are secured by techniques like end-to-end encryption (E2EE), used by popular messaging apps like WhatsApp and Facebook Messenger. Still, doubts about our privacy against third parties have re-emerged from time to time.
Short message service (SMS) is a popular way for mobile phone and portable device users to send and receive simple text messages. It is also to be noted that SMS does not offer a secure environment for confidential data transfer. The algorithm we propose can, due to its simplicity, be implemented to secure mobile applications and can be used to encrypt and decrypt messages in messaging apps. Moreover, it can also be used to secure any confidential information stored in mobile phones and other low-storage devices. The ease of implementing the algorithm on small devices has been emphasized by reporting its empirical computational complexity rather than its theoretical computational complexity. Empirical complexity can focus on just the data set we are interested in, and not on the whole universe of data, as is the case with theoretical complexity. Further, we draw an average-case bound on the execution time of the algorithm. Empirical complexity is a weight-based approach, whereas the theoretical one is count-based, wherein the number of operations executed inside a loop is counted. We have taken "time" as the weight function to obtain the empirical complexity of our proposed methodology. In the empirical analysis, we estimate the run time of the algorithm in an average case by executing the algorithm for varying and increasing inputs; then, through statistical model fitting, we establish the correctness of the estimated empirical complexity.

2 Related Work

In [1], a secure messaging system based on cryptographic algorithms is proposed for both Web and Android platforms. Bodur et al. [2] examine how the RSA encryption algorithm and a secure messaging process over the SMS channel can be realized on devices with the Android operating system. Ariffi et al. [3] also propose SMS security through the 3D-AES block cipher symmetric cryptography algorithm. Bhagoliwal et al. [4] propose security of data in mobile devices that is not heavy for a mobile phone, as it minimizes computing power, storage space and battery consumption. A good deal on the computational complexity of an algorithm can be found in [6]. In [5], a statistical analysis of the bubble sort algorithm is carried out, while [7, 8] focus on empirical complexity. In [7], two compound operations are compared to check whether they are statistically similar or not, and [8] emphasizes obtaining an empirical order of complexity of O(n²) for multiplying two dense n × n matrices. [10] is a broader lesson on the empirical computational complexity of an algorithm and on proving the estimated complexity by fitting a statistical model. In [9], many statistical concepts such as the normal distribution and regression analysis are covered in good detail. [11] is a detailed book on approximating an algorithm's complexity through a computer-oriented approach.

3 Proposed Approach

Pig Latin is a secret language formed from English. It can be seen as a hiding technique for the English language where words are altered by transferring the initial consonant or consonant cluster of each word to the end of the word and suffixing it with "ay". For example, in the word "school", "o" is the first vowel from the left. The entire consonant cluster "sch" is transferred to the end with the suffix "ay". For words that begin with vowels, "ay" is simply suffixed at the end. Hence, the corresponding Pig Latin word is "oolschay". It is undoubtedly not very difficult to decipher the actual words when Pig Latin is spoken, because of the sounds, but a computer algorithm requires a proper decryption technique. So, we modify the suffix "ay" to denote the number of letters in the consonant cluster that was transferred to the end of the word, so that it can be transferred back in the decryption process. To do this, we treat the letters of the English alphabet as digits in base 26. Hence, 0 of the base-26 number system is denoted by "a", 1 by "b", 2 by "c" and so on, up to 25 denoted by "z". Therefore, the place values of a number in the base-26 system are powers (exponents) of 26. Thus, the encrypted word corresponding to "school" becomes "oolschad". This is because (ad)₂₆ = 0 × 26 + 3 × 1 = 3 (since a = 0 and d = 3 in base 26), which is the number of letters in the shifted consonant cluster "sch". Similarly, "This is my school" is encrypted to "isThac isaa myac oolschad". Note that "is" starts with a vowel and hence is just suffixed with "aa", since (aa)₂₆ = 0 × 26 + 0 × 1 = 0, denoting that there are 0 letters in the shifted consonant cluster, i.e., nothing has been shifted. Also notice that "my" does not contain any vowels and is hence itself a consonant cluster (at the end of the word). Thus, it is suffixed with "ac" [(ac)₂₆ = 0 × 26 + 2 × 1 = 2]. Clearly, the encryption in the above example does not result in very difficult or indecipherable text. So, its complexity is increased by performing one more step of encryption: the entire encrypted text generated so far is randomly permuted. Pseudorandom number generators (RNGs) are used to do this. Cryptographically secure RNGs can generate random numbers that pass statistical randomness tests and are resilient against prediction attacks. Mobile SDKs offer standard implementations of RNG algorithms that produce numbers with sufficient artificial randomness. Decryption is achieved by using the same seed value that was used to obtain the randomization in encryption. This seed value is used as a symmetric secret key by the sender and receiver. Random number generation can be successfully used for encryption and decryption since a computer or any deterministic device is not capable of generating truly random numbers; it only generates pseudorandom numbers.
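The word-level transform with the two-letter base-26 suffix can be sketched in Python (the function names are ours); it reproduces the "school" → "oolschad" and "This" → "isThac" examples above:

```python
VOWELS = set("aeiouAEIOU")

def to_base26_pair(n):
    """Two-letter base-26 encoding of n (0 -> 'aa', 3 -> 'ad'), for n < 676."""
    return chr(ord('a') + n // 26) + chr(ord('a') + n % 26)

def encode_word(word):
    """Move the leading consonant cluster to the end of the word and append
    its length as a two-letter base-26 suffix."""
    split = len(word)  # default: no vowel, so the whole word is the cluster
    for i, ch in enumerate(word):
        if ch in VOWELS:
            split = i
            break
    return word[split:] + word[:split] + to_base26_pair(split)

def decode_word(word):
    """Invert encode_word: read the suffix, move the cluster back to the front."""
    body, suffix = word[:-2], word[-2:]
    k = (ord(suffix[0]) - ord('a')) * 26 + (ord(suffix[1]) - ord('a'))
    return body[len(body) - k:] + body[:len(body) - k] if k else body
```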
John von Neumann famously remarked, "Anyone who attempts to generate random numbers by deterministic means is, of course, living in a state of sin." After randomization in the above example, the text becomes "cahT oahaaimsdccssy olia" in our implementation; here, the seed value was taken as 25. Finally, an encryption using RSA is performed. Since RSA is slower than other advanced algorithms like AES, it is generally used only for key exchange. Here, however, we use it to encrypt an entire text file or message stored on small devices. This is achieved in less time by taking primes that are much smaller than is usual (around 300 decimal digits). The security of RSA is based on the fact that it is difficult to factorize large composite numbers; therefore, by reducing the size of the primes we decrease the security by an appreciable amount. This is compensated to some extent by the novel encryption techniques applied in the previous steps. The seed value (symmetric key) of the randomization step is also encrypted with RSA and attached to the message. The secret key used in the algorithm must be well secured, since the security of the algorithm relies on that of the key. For this, the Diffie–Hellman key exchange algorithm can be used to exchange the keys of RSA (Table 1). Clearly, decryption is performed as the reverse of the encryption process stated above.
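The seeded permutation step and its inverse can be sketched as follows (our own Python sketch; the paper's implementation and seed handling may differ):

```python
import random

def permute(text, seed):
    """Permute the characters of text using an RNG seeded with the shared key."""
    order = list(range(len(text)))
    random.Random(seed).shuffle(order)
    return "".join(text[i] for i in order)

def unpermute(cipher, seed):
    """Rebuild the same permutation from the seed and invert it."""
    order = list(range(len(cipher)))
    random.Random(seed).shuffle(order)
    plain = [""] * len(cipher)
    for pos, i in enumerate(order):
        plain[i] = cipher[pos]
    return "".join(plain)
```

Because both sides seed the generator with the same secret value, the receiver reconstructs the identical permutation and inverts it, which is exactly why the seed can serve as a symmetric key.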


Table 1 Algorithm: improved Pig Latin cryptosystem

1. A secret key is used as a seed value
2. Loop over the letters of each word of the plaintext, checking for a vowel
3. If the letter is a vowel, concatenate the part of the word starting from the vowel to the end of the word and the part of the word before the vowel (the consonant cluster) in reverse order, that is, bring the consonant cluster to the end
4. Calculate the number of letters in the consonant cluster and derive its equivalent two-digit base-26 value [i.e., if the number of letters is less than or equal to 25, add the prefix "a" (= 0 in base 26)]
5. Append the two letters calculated above after the consonant cluster
6. When the loop ends, the key is given as a seed to the random number generator
7. The entire ciphertext is permuted randomly
8. RSA is applied to the ciphertext generated in step 7 as well as to the seed value; this asymmetrically encrypted seed value (symmetric key) is attached to the message
9. The keys of RSA are transferred using the Diffie–Hellman key exchange algorithm
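Step 8's RSA layer can be illustrated with a textbook, toy-prime RSA sketch. The primes and exponents below are classic illustrative values, not the paper's parameters, and real RSA requires large primes and padding:

```python
# Toy RSA with small primes (for illustration only).
p, q, e = 61, 53, 17
n = p * q                 # modulus: 3233
phi = (p - 1) * (q - 1)   # Euler totient: 3120
d = pow(e, -1, phi)       # private exponent via modular inverse (Python 3.8+): 2753

def rsa_encrypt(text):
    """Encrypt each character's code point separately (textbook style)."""
    return [pow(ord(ch), e, n) for ch in text]

def rsa_decrypt(blocks):
    """Invert rsa_encrypt with the private exponent."""
    return "".join(chr(pow(b, d, n)) for b in blocks)
```

Taking much smaller primes than the usual ~300-digit ones is what makes encrypting a whole message feasible in the paper's setting, at the cost of security, which the earlier Pig Latin and permutation layers partially compensate for.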

4 Empirical Computational Complexity of the Proposed Work

An empirical O is the empirical estimate of the non-trivial and conceptual weight-based statistical bound. A statistical bound, unlike a mathematical bound, weighs the computing operations instead of counting them, and it takes all the operations collectively and mixes them for assessing the bound [7]. The empirical analysis of an algorithm is a comparatively newer approach for estimating the execution time of any algorithm: the program implementing the algorithm is run for increasing and varying inputs, and the execution time is then estimated through statistical model fitting. We implemented the proposed improved Pig Latin algorithm in MATLAB R2016b with the following system specifications:

Processor: Intel(R) Pentium(R) CPU 4405U @ 2.10 GHz, 2101 MHz, 2 core(s), 4 logical processor(s)
OS name: Microsoft Windows 10 Pro
RAM: 4 GB

We took four trials for every input size to offset the dominance of cache hits in the execution time. Tables 2 and 3 show the execution times in seconds, along with the first differences of the execution time, over the four trials for encryption and decryption, respectively. The execution time is the mean of the time taken in the four trials. The run time can be estimated empirically using the converse of the fundamental theorem of finite differences, which states that if the nth difference of a tabulated function is constant and higher differences are zero, the function is a polynomial of degree n.


Table 2 Execution time and first difference for encryption of improved Pig Latin

Input (bytes)   Trial 1    Trial 2    Trial 3    Trial 4    Mean (execution time)   First difference
128             0.018475   0.017863   0.018873   0.017737   0.018237                0.0113
160             0.029169   0.030549   0.029116   0.029312   0.029537                0.004689
192             0.033903   0.034276   0.034484   0.034237   0.034225                −0.00142
224             0.032894   0.03304    0.033042   0.032245   0.032805                0.007343
256             0.03967    0.041198   0.039601   0.040124   0.040148                0.005847
288             0.045552   0.046047   0.04553    0.046851   0.045995                0.001394
320             0.048166   0.047207   0.047106   0.047075   0.047389                0.004355
352             0.051226   0.052614   0.051479   0.051653   0.051743                0.004333
384             0.056142   0.056319   0.055884   0.05596    0.056076                0.001347
416             0.057081   0.057458   0.057679   0.057477   0.057424                0.008786
448             0.065976   0.066065   0.0668     0.065998   0.06621                 0.004299
480             0.070591   0.070568   0.070417   0.070459   0.070509                0.010853
512             0.073522   0.073829   0.10458    0.073514   0.081361

Table 3 Execution time and first difference for decryption of improved Pig Latin

Input (in bytes) | Trial 1  | Trial 2  | Trial 3  | Trial 4  | Mean (execution time) | First difference
128              | 0.071022 | 0.068978 | 0.067732 | 0.071065 | 0.069699              | 0.02782
160              | 0.09567  | 0.096989 | 0.09991  | 0.097508 | 0.097519              | 0.020616
192              | 0.1184   | 0.11964  | 0.11692  | 0.11758  | 0.118135              | 0.006472
224              | 0.12369  | 0.12009  | 0.12333  | 0.13132  | 0.124608              | 0.02038
256              | 0.14863  | 0.14657  | 0.1432   | 0.14155  | 0.144988              | 0.031118
288              | 0.1609   | 0.19123  | 0.18125  | 0.17104  | 0.176105              | 0.006638
320              | 0.18953  | 0.17392  | 0.17692  | 0.1906   | 0.182743              | 0.021343
352              | 0.21042  | 0.21032  | 0.19632  | 0.19928  | 0.204085              | 0.014808
384              | 0.22161  | 0.22192  | 0.21338  | 0.21866  | 0.218893              | 0.03087
416              | 0.27009  | 0.25387  | 0.23964  | 0.23545  | 0.249763              | 0.02303
448              | 0.28565  | 0.25823  | 0.27989  | 0.2674   | 0.272793              | 0.02325
480              | 0.30595  | 0.28043  | 0.29261  | 0.30518  | 0.296043              | 0.018425
512              | 0.32352  | 0.30459  | 0.30425  | 0.32551  | 0.314468              | —

An Improved Pig Latin Algorithm for Lightweight Cryptography


From the first-difference columns of both procedures, we observe that the first differences themselves are almost constant, so we predict an empirical O(n) complexity for the proposed approach. This estimate is corroborated by statistical model fitting: Figs. 1 and 2 show the model fitting for both processes. In both figures the execution time follows an almost linear pattern and hence agrees with the estimated complexity, i.e., empirical O(n).

Fig. 1 Statistical model fitting of the improved Pig Latin algorithm (encryption)

Fig. 2 Statistical model fitting of the improved Pig Latin algorithm (decryption)


Table 4 Goodness of fit: encryption of improved Pig Latin

Linear model Poly1: f(x) = p1 * x + p2
Coefficients (with 95% confidence bounds):
  p1 = 0.0001413 (0.0001254, 0.0001572)
  p2 = 0.003369 (−0.002052, 0.00879)
Goodness of fit:
  SSE: 0.0001065
  R-square: 0.9722
  Adjusted R-square: 0.9696
  RMSE: 0.003112

Table 5 Goodness of fit: decryption of improved Pig Latin

Linear model Poly1: f(x) = p1 * x + p2
Coefficients (with 95% confidence bounds):
  p1 = 0.0006235 (0.0005895, 0.0006575)
  p2 = −0.009524 (−0.02114, 0.002093)
Goodness of fit:
  SSE: 0.0004892
  R-square: 0.9933
  Adjusted R-square: 0.9927
  RMSE: 0.006669
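As a cross-check, an ordinary least-squares linear fit of the mean encryption times from Table 2 should approximately reproduce the coefficients and R-square reported in Table 4 (exact values depend on rounding in the tabulated means). A minimal NumPy sketch:

```python
import numpy as np

# Mean encryption times (seconds) from Table 2 against input size (bytes)
x = np.arange(128, 513, 32, dtype=float)
y = np.array([0.018237, 0.029537, 0.034225, 0.032805, 0.040148,
              0.045995, 0.047389, 0.051743, 0.056076, 0.057424,
              0.066210, 0.070509, 0.081361])

p1, p2 = np.polyfit(x, y, 1)            # linear model f(x) = p1*x + p2
yhat = p1 * x + p2
ss_res = float(np.sum((y - yhat) ** 2))  # SSE
ss_tot = float(np.sum((y - y.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot               # R-square
```

A slope on the order of 1e-4 s/byte with R-square above 0.95 is consistent with the linear (empirical O(n)) model of Table 4.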

The data above show the goodness of fit of the estimated models, from which the fitted values ŷ, and hence the residuals (y − ŷ), are calculated using the linear equations (Tables 4 and 5). The residuals for both processes are shown in Table 6. The residual plot shows an almost horizontal pattern, so the estimated linear model actually fits the data, and we can conclude that the empirical run time of the proposed approach is empirical O(n).

We compared the execution time of our approach with that of the standard AES cryptosystem for the same input sizes. The execution time taken by AES-128 for different input sizes is shown in Tables 7 (encryption) and 8 (decryption). The tables clearly show that, for encryption, AES-128 takes much more time than the proposed Pig Latin algorithm, while for decryption the time taken by the two algorithms is comparable. Our RSA implementation uses exponentiation by squaring rather than the Chinese remainder theorem variant (commonly referred to as RSA-CRT), which is known to perform modular exponentiation faster. Hence, the proposed algorithm could run even faster with a CRT-based RSA implementation.
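The a priori prediction enabled by the fit amounts to evaluating the fitted line at the desired message length. For instance, using the encryption coefficients reported in Table 4:

```python
# Coefficients of the encryption fit from Table 4: f(x) = p1*x + p2
p1, p2 = 0.0001413, 0.003369

def predicted_encryption_time(n_bytes):
    """A priori execution-time estimate (seconds) for an n-byte message."""
    return p1 * n_bytes + p2

# At x = 128 this gives about 0.021455 s, matching the fitted value
# listed in Table 6; larger inputs extrapolate along the same line.
t_1kb = predicted_encryption_time(1024)
```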


Table 6 Residual table for encryption and decryption of improved Pig Latin

Encryption (residuals):
x   | y        | ŷ (fitted) | y − ŷ (residual)
128 | 0.018237 | 0.021455   | −0.00322
160 | 0.029537 | 0.025977   | 0.00356
192 | 0.034225 | 0.030499   | 0.003726
224 | 0.032805 | 0.03502    | −0.00221
256 | 0.040148 | 0.039542   | 0.000606
288 | 0.045995 | 0.044063   | 0.001932
320 | 0.047389 | 0.048585   | −0.0012
352 | 0.051743 | 0.053107   | −0.00136
384 | 0.056076 | 0.057628   | −0.00155
416 | 0.057424 | 0.06215    | −0.00473
448 | 0.06621  | 0.066671   | −0.00046
480 | 0.070509 | 0.071193   | −0.00068
512 | 0.081361 | 0.075715   | 0.005647

Decryption (residuals):
x   | y        | ŷ (fitted) | y − ŷ (residual)
128 | 0.069699 | 0.070284   | −0.00058
160 | 0.097519 | 0.090236   | 0.007283
192 | 0.118135 | 0.110188   | 0.007947
224 | 0.124608 | 0.13014    | −0.00553
256 | 0.144988 | 0.150092   | −0.0051
288 | 0.176105 | 0.170044   | 0.006061
320 | 0.182743 | 0.189996   | −0.00725
352 | 0.204085 | 0.209948   | −0.00586
384 | 0.218893 | 0.2299     | −0.01101
416 | 0.249763 | 0.249852   | 0.019952
448 | 0.272793 | 0.269804   | 0.002989
480 | 0.296043 | 0.289756   | 0.006286
512 | 0.314468 | 0.309708   | 0.004759

Table 7 Execution time for AES-128 encryption process

Input (in bytes) | Trial 1 | Trial 2 | Trial 3 | Trial 4 | Mean (execution time)
128              | 0.1588  | 0.15559 | 0.14985 | 0.15759 | 0.155458
160              | 0.17964 | 0.19248 | 0.19272 | 0.19018 | 0.188755
192              | 0.21122 | 0.22299 | 0.21387 | 0.22231 | 0.217598
224              | 0.25146 | 0.25785 | 0.25941 | 0.25391 | 0.255658
256              | 0.27577 | 0.28439 | 0.27315 | 0.28732 | 0.280158
288              | 0.30974 | 0.31132 | 0.31368 | 0.31559 | 0.312583
320              | 0.34005 | 0.34382 | 0.34322 | 0.33615 | 0.34081
352              | 0.37505 | 0.37807 | 0.37707 | 0.37732 | 0.376878
384              | 0.38812 | 0.39405 | 0.38821 | 0.39501 | 0.391348
416              | 0.44113 | 0.44414 | 0.4422  | 0.43741 | 0.44122
448              | 0.45871 | 0.45577 | 0.46542 | 0.44607 | 0.456493
480              | 0.49446 | 0.52744 | 0.49571 | 0.5122  | 0.507453
512              | 0.55057 | 0.54057 | 0.54653 | 0.57112 | 0.552198


Table 8 Execution time for AES-128 decryption process

Input (in bytes) | Trial 1 | Trial 2 | Trial 3 | Trial 4 | Mean (execution time)
128              | 0.18573 | 0.17453 | 0.17151 | 0.15759 | 0.17234
160              | 0.24647 | 0.22216 | 0.21875 | 0.22464 | 0.228005
192              | 0.26337 | 0.30604 | 0.27876 | 0.26411 | 0.27807
224              | 0.31522 | 0.31586 | 0.34252 | 0.32047 | 0.323518
256              | 0.34126 | 0.3337  | 0.33526 | 0.32661 | 0.334208
288              | 0.37339 | 0.3719  | 0.36725 | 0.37065 | 0.370798
320              | 0.41967 | 0.41059 | 0.42284 | 0.42976 | 0.420715
352              | 0.45321 | 0.44946 | 0.45087 | 0.44506 | 0.44965
384              | 0.47129 | 0.47428 | 0.46359 | 0.47304 | 0.47055
416              | 0.53121 | 0.52913 | 0.53438 | 0.52533 | 0.530013
448              | 0.54537 | 0.54093 | 0.5407  | 0.54288 | 0.54247
480              | 0.62    | 0.60345 | 0.62811 | 0.62037 | 0.617983
512              | 0.65602 | 0.65093 | 0.77773 | 0.71701 | 0.700423

5 Conclusion

We proposed a new cryptographic algorithm that combines the well-known Pig Latin language game, pseudorandomization and the RSA cryptosystem. The computational overhead is small, and the simplicity of the algorithm makes it a good candidate for implementation in mobile applications. Moreover, the algorithm comprises both a symmetric layer (up to the pseudorandomization step) and an asymmetric layer (RSA), and hence gives the benefits of both. Further, it makes low use of computational resources, as it does not involve a large number of rounds (compared to 10, 12 and 14 rounds in AES-128, AES-192 and AES-256, respectively). The empirical complexity analysis gives an empirical O(n) average-case complexity, and the equation derived through model fitting can be used to predict a priori the execution time for any input message length. Although the model shows an O(n) empirical complexity, the measured execution times are consistently lower than those of AES-128.
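For readers unfamiliar with the underlying language game, a minimal sketch of the classic Pig Latin word transform is given below. This is only the plain game that the algorithm builds on, not the improved, pseudorandomized and RSA-layered variant proposed in this work:

```python
def pig_latin_word(word):
    """Classic Pig Latin transform: words beginning with a vowel take the
    suffix 'way'; otherwise the leading consonant cluster is moved to the
    end and 'ay' is appended."""
    vowels = "aeiouAEIOU"
    if word[0] in vowels:
        return word + "way"
    for i, ch in enumerate(word):
        if ch in vowels:
            return word[i:] + word[:i] + "ay"
    return word + "ay"  # no vowels at all: just append the suffix

encoded = " ".join(pig_latin_word(w) for w in "hello world".split())
# "hello" -> "ellohay", "world" -> "orldway"
```

By itself this transform is trivially reversible, which is why the proposed scheme layers pseudorandomization and RSA on top of it.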

References

1. M.M. Rahman, T. Akter, A. Rahman, Development of cryptography-based secure messaging system. J. Telecommun. Syst. Manage. 5, 142 (2016). https://doi.org/10.4172/2167-0919.1000142
2. H. Bodur, R. Kara, Secure SMS Encryption Using RSA Encryption Algorithm on Android Message Application (2015)


3. S. Ariffi, R. Mahmod, R. Rahmat, N.A. Idris, SMS encryption using 3D-AES block cipher on Android message application, in Proceedings of the 2013 International Conference on Advanced Computer Science Applications and Technologies, ACSAT 2013 (2013), pp. 310–314. https://doi.org/10.1109/acsat.2013.68
4. S. Bhagoliwal, J. Karjee, Securing mobile data using cryptography. Int. J. Adv. Networking Appl. 7, 2925–2930 (2016)
5. S. Chakraborty, P.P. Choudhury, A statistical analysis of an algorithm's complexity. Appl. Math. Lett. 13, 121–126 (2000)
6. D.E. Knuth, Fundamental Algorithms, vol. 1 (Addison-Wesley, 1973)
7. S. Chakraborty, K.K. Sundararajan, A simple empirical formula for categorizing computing operations. Appl. Math. Comput. 187, 326–340 (2007)
8. S.K. Sourabh, S. Chakraborty, Empirical O(n^2) complexity is convincingly gettable with two dense matrices in n × n matrix multiplication. InterStat (2006)
9. S.C. Gupta, V.K. Kapoor, Fundamentals of Mathematical Statistics (Sultan Chand & Sons)
10. S.F. Goldsmith, A.S. Aiken, D.S. Wilkerson, Measuring Empirical Computational Complexity (ACM, 2007)
11. S. Chakraborty, S.K. Sourabh, A Computer Experiment Oriented Approach to Algorithmic Complexity (LAP Lambert Academic, 2010)

Author Index

A: Agarwal, Paras, 39; Agrawal, Iti, 69; Arora, Navneet, 303; Asheer, Sarah, 273; Athisakthi, A., 287; Ayush, 157; Azam, Farooque, 3
B: Bal, Aditi Basu, 345; Banerjee, Arundhati, 327; Banerjee, Partha Sarthy, 157; Bishnu, Partha Sarathi, 95; Biswas, Shiladitya, 187
C: Chakraborty, Soubhik, 245, 255, 345; Chatterjee, Niladri, 245; Chaturvedi, Devendra Kumar, 303; Choudhury, Shreemoyee Dutta, 245
D: Das, Ayan Kumar, 129, 229; Das, Deepanwita, 201; Dasgupta, Rakhi, 327; Datta, Sreemana, 229; Dayal Udai, Arun, 187; De Mukherjee, Koel, 265; Dutta, Manoj Kr., 107, 119; Dutta, Sandip, 355
G: Ghosh, Deepshikha, 265; Goswami, Goutam, 23; Gulati, Muskan, 303; Gupta, Anushka, 69; Gupta, Mayank, 303; Gupta, Pankaj, 81; Gupta, Vivek, 165
H: Hota, Chittaranjan, 175
J: Jain, Kritika, 69
K: Krishna, A. P., 215; Kumar, Gaurav, 187; Kumar, Ritesh, 95; Kumar, Sanjeet, 273; Kumar, Saurav, 15
N: Nagi, Reva, 57; Narayan, Urja, 39
P: Panchratan, Hritika, 157; Panigrahi, Siba Prasada, 15; Pattanaik, L. N., 39; Pillai, Santhosh Kumar, 265; Prajapati, Apeksha, 143; Priyadarshi, Neeraj, 3; Pushpa Rani, M., 287
R: Ranjan, Saloni, 39; Rastogi, Rohit, 303; Rathore, V. S., 215; Ray, Sujay, 327
S: Sadaf, Sayema, 129; Sagar, Bharat Bushan, 81; Sahana, Sudip Kumar, 165; Sahu, S. S., 69; Sangno, Ralli, 15; Sardar, Madhumita, 201; Satya, Santosh, 303; Shaheen, Farzana, 215; Sharma, Amarjeet Kumar, 3; Sharma, J. Anirudh, 157; Singhal, Parv, 303; Singh, Rahul Kumar, 201; Sinha, Ditipriya, 129, 229; Sinha, Rupesh Kumar, 69; Sinha, Sanyukta, 355
T: Tabassum, Ayesha, 129; Tewari, Swarima, 255; Thakura, P. R., 23; Tripathy, Sanjaya Shankar, 57; Trivedi, Piyush, 303
V: Vardia, Monika, 3; Vats, Aman, 265; Vijayalakshmi, A., 175

© Springer Nature Singapore Pte Ltd. 2020. S. K. Sahana and V. Bhattacharjee (eds.), Advances in Computational Intelligence, Advances in Intelligent Systems and Computing 988, https://doi.org/10.1007/978-981-13-8222-2