Cognitive Informatics, Computer Modelling, and Cognitive Science: Theory, Case Studies, and Applications: Volume 1: Theory, Case Studies, and Applications [1 ed.] 012819443X, 9780128194430

Cognitive Informatics, Computer Modelling, and Cognitive Science: Theory, Case Studies, and Applications presents the th


Language: English. Pages: 420 [398]. Year: 2020.



Table of contents :
Cover
Cognitive Informatics, Computer Modeling, and Cognitive Science: Theory, Case Studies, and Applications
Copyright
Dedication
Contents
List of contributors
Editors’ biographies
Authors’ biographies
Preface
Acknowledgments
1 Introduction to cognitive science, informatics, and modeling
1.1 Introduction and history of cognitive science
1.1.1 Cognition, brain, and consciousness
1.1.2 Dynamic theory and cultural aspect
1.1.3 Psychology, philosophy, and cognitive neuroscience
1.2 Cognitive modeling
1.2.1 Cognitive networks
1.3 Cognitive informatics and resources
1.4 Cognitive maps and perception
1.5 Conclusion
References
Further reading
2 Machine consciousness: mind, machine, and society
2.1 Introduction: Using cognitive maps as adaptive interface tool in an online course
2.1.1 Cognitive mapping and theories
2.1.2 Web-based online course
2.1.3 Context modeling and reasoning
2.2 Multimedia processing and acquisition system
2.3 Cognitive maps based on students’ mental model
2.4 Overview of instructional planning to improve student’s cognitive ability
2.4.1 Cognitive map with weights of subject modules and concepts
2.5 Illustrating a hypothetical instruction model
2.6 Conclusion
References
3 Brain–computer interface and neurocomputing
3.1 Introduction
3.2 Brain–computer interface
3.2.1 History
3.2.2 Types of brain–computer interface
3.2.2.1 Invasive brain–computer interface
3.2.2.2 Semiinvasive brain–computer interface
3.2.2.3 Noninvasive brain–computer interface
3.2.2.3.1 Electroencephalography
3.2.2.3.2 Magnetoencephalography
3.2.2.3.3 Positron emission tomography
3.2.2.3.4 Functional magnetic resonance imaging
3.2.2.3.5 Functional near-infrared spectroscopy
3.2.3 Assumptions and working of brain–computer interface
3.2.3.1 Placement of electrodes
3.2.3.2 Electroencephalography signal acquisition
3.2.3.3 Preprocessing
3.2.3.3.1 Technical artifacts
3.2.3.3.2 Human functioning artifacts
3.2.3.4 Features and feature extraction technique
3.2.3.4.1 Fourier transform based feature
3.2.3.4.2 Wavelet-based feature
3.2.3.4.3 Statistical features
3.2.3.5 Classification
3.3 Electroencephalography acquisition devices
3.4 Challenges
3.4.1 Implantation of electrode
3.4.2 High dimensionality of data
3.4.3 Information transfer rate
3.4.4 Technical challenges
3.5 Case study on brain–computer interface
3.5.1 Dataset
3.5.2 Problem statement
3.5.3 Proposed method
3.5.3.1 Data acquisition and preprocessing
3.5.3.2 Optimized channel selection using particle swarm optimization
3.5.4 Working of particle swarm optimization for channel selection
3.5.4.1 Standard channel selection
3.5.4.2 Feature extraction using wavelet transform
3.5.4.3 Classification
3.5.5 k-Nearest neighbor
3.5.6 Support vector machine
3.6 Results
3.7 Conclusion
References
4 The impact on cognitive development of a self-contained exploratory and technology-rich course on the physics of light an...
4.1 Background
4.2 Methodology
4.3 Results
4.4 Discussion
4.5 Limitations
References
Further reading
5 Identification of normal and abnormal brain hemorrhage on magnetic resonance images
5.1 Introduction
5.2 Literature survey
5.3 Proposed work
5.3.1 Edge enhancement
5.3.2 Modified multilevel set segmentation algorithm
5.3.3 Feature extraction algorithm
5.3.3.1 Minimal angular local binary patterns
5.3.3.2 Gray level cooccurrence matrix features
5.3.3.2.1 Autocorrelation
5.3.3.2.2 Contrast
5.3.3.2.3 Correlation
5.3.3.2.4 Cluster prominence
5.3.3.2.5 Cluster shade
5.3.3.2.6 Dissimilarity
5.3.3.2.7 Energy
5.3.3.2.8 Entropy
5.3.3.2.9 Homogeneity
5.3.3.2.10 Maximum probability
5.3.3.3 Naïve Bayes-probabilistic kernel classifier
5.4 Result and discussions
5.4.1 Comparative analysis between proposed NB-PKC and support vector machine
5.4.1.1 Precision
5.4.1.2 Recall
5.4.1.3 Accuracy
5.4.1.4 Jaccard coefficient
5.4.1.5 Dice coefficient
5.4.1.6 Kappa coefficient
5.4.2 Comparison of the proposed NB-PKC and support vector machine schemes
5.4.2.1 Fault rejection rate
5.4.2.2 Fault acceptance rate
5.4.2.3 Global acceptance rate
5.4.2.4 ROC curve
5.5 Conclusion
Acknowledgment
References
Further reading
6 Cognitive informatics, computer modeling and cognitive science assessment of knee osteoarthritis in radiographic images: ...
6.1 Introduction
6.2 Machine learning approach
6.2.1 Knee X-ray analysis: a machine learning approach
6.2.1.1 Image acquisition
6.2.1.2 Preprocessing/enhancement
6.2.1.3 Identification of region of interest
6.2.1.4 Segmentation
6.2.1.4.1 Edge-based methods
6.2.1.4.2 Texture-based segmentation
6.2.1.4.3 Otsu’s based method
6.2.1.4.4 Active contour method
6.2.1.5 Feature computation/extraction
6.2.1.5.1 Statistical feature
6.2.1.5.2 Shape features
6.2.1.5.3 Haralick features
6.2.1.5.4 Zernike features
6.2.1.5.5 Histogram of oriented gradients features
6.2.1.5.6 Local binary pattern
6.2.1.6 Classification
6.2.1.6.1 k-Nearest neighbor
6.2.1.6.2 Decision tree
6.2.1.6.3 Random forest
6.2.1.6.4 Error correcting output code
6.3 Experimental analysis
6.3.1 Experiment I
6.3.2 Experiment II
6.3.3 Experiment III
6.3.4 Experiment IV
6.4 Discussion
6.5 Summary
References
Further reading
7 Adaptive circadian rhythm: a cognitive approach through dynamic light management
7.1 Introduction
7.1.1 Circadian clock and circadian rhythm
7.1.2 Perception of eye as a visual and nonvisual information sensor
7.2 Photoreceptors in the eye
7.2.1 Light-emitting diodes
7.3 SunLike light-emitting diodes
7.4 Data sheet
7.4.1 A case study at an educational campus
7.5 Conclusion
Acknowledgments
Further readings
8 Cognitive and brain function analysis of sleeping stage electroencephalogram wave using parallelization
8.1 Introduction
8.2 History of electroencephalography
8.3 Analysis of electroencephalogram signals
8.4 Electroencephalogram waves
8.5 Electroencephalogram signal recording variables and components
8.5.1 Frequency
8.5.2 Voltage
8.5.3 Morphology
8.5.4 Impedance
8.5.5 Electroencephalogram electrodes
8.5.6 Electrode gel
8.5.7 Electrode positioning (10/20 system)
8.5.8 Artifacts in Electroencephalogram recording
8.5.9 Filtering
8.5.10 Electroencephalogram recording device
8.6 Subject preparation and equipment setup for electroencephalogram recording using an electro cap
8.7 Sleeping stage electroencephalogram waves
8.8 Type of channel selection for cognitive
8.9 Disorders detection using electroencephalogram
8.10 Application of electroencephalogram
8.11 Case study—channel selection for alpha, beta, theta, and delta waves using parallel processing
8.11.1 Java Parallel Processing Framework architecture [JPPF]
8.11.1.1 Client layer
8.11.1.2 Service layer
8.11.1.3 Execution layer
8.11.2 Coherence estimation functions
8.11.3 Distributed parallel computation
8.12 Conclusion
References
Further reading
9 The future networks—a cognitive approach
9.1 Introduction
9.2 Intelligence in networks
9.3 Challenges in current network
9.4 Cognitive networks
9.5 Need for intelligent networks
9.6 Background
9.7 Cognition approach
9.8 Learning and reasoning for intelligent networks
9.9 Human reasoning mechanism
9.10 Cognitive model for reasoning at human level
9.11 New intelligent approach
9.12 Learning approaches
9.13 Requirement of Bayesian approach for cognitive network
9.13.1 The Bayesian network
9.13.2 Importance of Bayesian model
9.13.3 Environment in which Bayesian works the best
9.13.4 Advantages over other alternative models
9.13.4.1 Decision theory
9.13.4.2 Consistent process for uncertain information
9.13.4.3 Robustness property
9.13.4.4 Flexible
9.13.4.5 Handling expert knowledge
9.13.4.6 Semantic interpretation of the model parameters
9.13.4.7 Different variable types
9.13.4.8 Handling missing data
9.13.5 Collateral relationship with graded cognitive network
9.14 Future trends
9.15 Research challenges
9.16 Conclusion
References
10 Identification of face along with configuration beneath unobstructed ambiance via reflective deep cascaded neural networks
10.1 Introduction
10.2 Machine learning life cycle
10.2.1 Collection of data
10.2.2 Normalization of data
10.2.3 Modeling of data
10.2.4 Training and feature engineering of model
10.2.5 Production and deployment of models
10.2.5.1 Convolution operation
10.2.5.2 Kernel
10.2.5.3 Pooling
10.2.5.4 Difference between average pooling and max pooling
10.2.5.5 Fully connected layer
10.2.5.6 Classification of images
10.2.5.6.1 Supervised classification
10.2.5.6.2 Maximum likelihood classification
10.2.5.6.3 Minimum distance classification
10.2.5.6.4 Parallelepiped classification
10.2.5.6.5 Unsupervised classification
10.2.5.6.6 Rich training data
10.3 Popular augmentation techniques
10.3.1 FLIP
10.3.2 Rotation
10.3.3 Scale
10.3.4 Crop
10.3.5 Translation
10.3.6 Gaussian noise
10.4 Localization
10.5 Methodology
10.5.1 Preprocessing
10.5.2 Detection phase
10.6 Experiments
10.6.1 Training data
10.6.1.1 The effectiveness of online hard sample mining
10.6.1.2 Effectiveness of both detection and alignment of face
10.6.1.3 Face detection evaluation
10.6.1.4 Face alignment evaluation
10.6.1.5 Runtime efficiency
10.7 Conclusion
References
Further reading
11 Setting up a neural machine translation system for English to Indian languages
11.1 Introduction
11.2 Neural machine translation
11.2.1 Long- and short-term memory model
11.3 Setting up the neural machine translation system
11.3.1 Encoder and decoder
11.3.2 Attention in the model
11.3.3 Residual connections and bridges
11.3.4 Out-of-vocabulary words
11.4 Results and discussions
11.4.1 Datasets
11.4.2 Experimental setup
11.4.3 Training details
11.4.4 BLEU score
11.5 Discussions
11.6 Conclusion
References
12 An extreme learning-based adaptive control design for an autonomous underwater vehicle
12.1 Introduction
12.2 Modeling of autonomous underwater vehicle in diving plane and problem statement
12.2.1 Kinematic
12.2.2 Dynamics
12.2.3 Discretization of the kinematic and dynamic of autonomous underwater vehicle for controlling the autonomous underwat...
12.3 Identification of autonomous underwater vehicle dynamics using extreme learning machine model
12.3.1 Sequential extreme learning machine model for autonomous underwater vehicle dynamic
12.4 Design of diving controller
12.4.1 Kinematic backstepping control law
12.4.2 Dynamic nonlinear proportional, integral, and derivative control law
12.5 Control law formulation with delay prediction
12.6 Results and discussion
12.7 Conclusion
References
13 Geometric total plaque area is an equally powerful phenotype compared with carotid intima–media thickness for stroke ris...
13.1 Introduction
13.1.1 Performance numbers
13.2 Background survey on cIMT, LD, and TPA measurements
13.2.1 cIMT detection and measurement methods
13.2.2 LD detection and measurement methods and our proposal
13.3 Materials and methodology
13.3.1 Patient demographics and image acquisition
13.3.2 gTPA modeling using cylindrical fitting
13.3.3 Overall architecture
13.3.4 cIMT and LD detection using DL system
13.4 Experimental protocol, results, and its validation
13.4.1 DL system results and visual display of LI and MA interfaces
13.4.2 Mean value computations for cIMT and gTPA for two DL systems
13.4.3 Relationship of age versus cIMT/gTPA
13.4.4 Validation
13.4.4.1 gTPA versus cIMT for DL1, GT1, DL2, and GT2
13.4.4.2 gTPA versus LD for DL1, GT1, DL2, and GT2
13.4.4.3 gTPA versus IAD for DL1, GT1, DL2, and GT2
13.4.4.4 Bland–Altman plots
13.5 Statistical tests and 10-year risk analysis
13.5.1 Risk analysis
13.5.2 Statistical tests
13.5.3 Ten-year risk assessment
13.6 Discussion
13.6.1 Benchmarking
13.6.2 Strengths/weakness/extensions
13.7 Conclusion
Acknowledgments
Conflict of interest
Funding
Appendix A LD/IMT measurement using deep learning system
Appendix B Polyline distance method
Polyline distance metric
Appendix C Correlation coefficient of gTPA against all the wall parameters
gTPA versus cIMT
gTPA versus LD
gTPA versus IAD
Appendix D Statistical tests
Appendix E List of abbreviations/symbols
References
14 Statistical characterization and classification of colon microarray gene expression data using multiple machine learning...
14.1 Introduction
14.2 Patients demographics
14.3 Materials and methods
14.3.1 Gene expression data normalization
14.3.2 Feature selection
14.3.2.1 Wilcoxon sign rank sum test
14.3.2.2 t-Test
14.3.2.3 Kruskal–Wallis test
14.3.2.4 F-test
14.3.3 Classifier types
14.3.3.1 Linear discriminant analysis
14.3.3.2 Quadratic discriminant analysis
14.3.3.3 Naive Bayes
14.3.3.4 Gaussian process classification
14.3.3.5 Support vector machine
14.3.3.6 Artificial neural network
14.3.3.7 Logistic regression
14.3.3.8 Decision tree
14.3.3.9 Adaboost
14.3.3.10 Random forest
14.3.4 Statistical evaluation
14.3.4.1 Accuracy
14.3.4.2 Sensitivity
14.3.4.3 Specificity
14.3.4.4 Positive predictive value
14.3.4.5 Negative predictive value
14.3.4.6 F-measure
14.4 Five experimental protocols
14.4.1 Experiment 1: Kernel optimization
14.4.2 Experiment 2: Effect of P-value during statistical tests on machine learning performance
14.4.3 Experiment 3: Intercomparison of the classifiers
14.4.4 Experiment 4: Effect of dominant genes
14.4.5 Experiment 5: Effect of data size on memorization versus generalization
14.5 Results
14.5.1 Results of experiment 1: Kernel optimization
14.5.2 Results of experiment 2: Effect of P-value during statistical tests on machine learning performance
14.5.3 Results of experiment 3: Intercomparison of the classifiers
14.5.4 Results of experiment 4: Effect of dominant genes
14.5.5 Results of experiment 5: Effect of data size on memorization versus generalization
14.6 Performance evaluation and hypothesis validations
14.6.1 Gene separation index
14.6.2 Interrelationship between nGSI and classification accuracy
14.6.3 Reliability index
14.6.4 Receiver operating curve analysis
14.6.5 Validation of proposed methods
14.7 Discussion
14.7.1 Benchmarking different machine learning systems
14.7.2 A note on the intercomparison of classifiers
14.7.3 Strengths, weakness, and extensions
14.8 Conclusion
14.9 Acknowledgments
14.10 Ethical approvals
14.11 Funding
14.12 Conflict of interest
14.13 Author’s contributions
Appendix A
Appendix B
Appendix C
Appendix D
References
15 Identification of road signs using a novel convolutional neural network
15.1 Introduction
15.2 Literature review
15.3 Proposed convolutional neural network method
15.4 Experimental analysis
15.4.1 Preprocessing: impact of input shape of images
15.4.2 Preprocessing using contrast limited adaptive histogram equalization
15.4.3 Comparison of the proposed CNN against LeNet using holdout and cross-validation
15.4.4 Comparison of proposed CNN against ANN and SVM using holdout and cross-validation
15.4.5 Comparison of proposed CNN method against kNN, CART, and random forest
15.4.6 Why does the proposed CNN outperform other methods?
15.5 Conclusion
References
16 Machine learning behind classification tasks in various engineering and science domains
16.1 What are classification tasks?
16.2 Classification tasks in engineering and science domains
16.3 Machine learning classification algorithms
16.3.1 Statistical methods
16.3.1.1 Logistic regression
16.3.1.1.1 Mathematical illustration
16.3.1.1.2 Pros
16.3.1.1.3 Cons
16.3.1.1.4 Application
16.3.1.2 Naïve Bayes
16.3.1.2.1 Mathematical illustration
16.3.1.2.2 Pros
16.3.1.2.3 Cons
16.3.1.2.4 Applications
16.3.1.3 k-Nearest neighbor
16.3.1.3.1 Pros
16.3.1.3.2 Cons
16.3.1.3.3 Applications
16.3.1.4 Classification decision tree
16.3.1.4.1 Pros
16.3.1.4.2 Cons
16.3.1.4.3 Applications
16.3.1.5 Support vector machine
16.3.1.5.1 Pros
16.3.1.5.2 Cons
16.3.1.5.3 Application
16.3.1.6 Ensemble classifier
16.3.2 Cognitive methods
16.3.2.1 How does classification in artificial neural networks differ from statistical machine learning algorithms?
16.3.2.2 Types of artificial neural network
16.3.2.3 Convolution neural network
16.3.2.4 Architecture of a ConvNet
16.4 Case study—machine learning implementation
16.4.1 Case study 1: Medical industry
16.4.2 Case study 2: Geographical data
16.4.3 Case study 3: Finance dataset
16.4.4 Case study 4: Electrical dataset
Acknowledgments
Further reading
Index
Back Cover

Citation preview


Cognitive Informatics, Computer Modeling, and Cognitive Science: Theory, Case Studies, and Applications
Volume 1
Edited by

G. R. Sinha
International Institute of Information Technology (IIIT) Bangalore, Bengaluru, India
Myanmar Institute of Information Technology (MIIT), Mandalay, Myanmar

Jasjit S. Suri
Stroke Monitoring and Diagnostic Division, AtheroPoint, Roseville, CA, United States
Advanced Knowledge Engineering Center, Global Biomedical Technologies, Inc., Roseville, CA, United States

Academic Press is an imprint of Elsevier
125 London Wall, London EC2Y 5AS, United Kingdom
525 B Street, Suite 1650, San Diego, CA 92101, United States
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom

Copyright © 2020 Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary.

Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress

ISBN: 978-0-12-819443-0

For information on all Academic Press publications visit our website at https://www.elsevier.com/books-and-journals

Publisher: Mara Conner
Acquisitions Editor: Chris Katsaropoulos
Editorial Project Manager: Ana Claudia Garcia
Production Project Manager: Surya Narayanan Jayachandran
Cover Designer: Matthew Limbert
Typeset by MPS Limited, Chennai, India

Dedication

Dedicated to my Late Grand Parents, My Teachers, and Revered Swami Vivekananda.
G. R. Sinha

Dedicated to my late loving parents, immediate family and children.
Jasjit S. Suri

Contents

List of contributors .... xvii
Editors’ biographies .... xxi
Authors’ biographies .... xxiii
Preface .... xxxix
Acknowledgments .... xli

CHAPTER 1 Introduction to cognitive science, informatics, and modeling .... 1
G.R. Sinha and Jasjit S. Suri
1.1 Introduction and history of cognitive science .... 1
1.1.1 Cognition, brain, and consciousness .... 3
1.1.2 Dynamic theory and cultural aspect .... 5
1.1.3 Psychology, philosophy, and cognitive neuroscience .... 6
1.2 Cognitive modeling .... 6
1.2.1 Cognitive networks .... 8
1.3 Cognitive informatics and resources .... 9
1.4 Cognitive maps and perception .... 10
1.5 Conclusion .... 11
References .... 11
Further reading .... 12

CHAPTER 2 Machine consciousness: mind, machine, and society .... 13
Anandi Giridharan and K.A. Venkatesh
2.1 Introduction: Using cognitive maps as adaptive interface tool in an online course .... 13
2.1.1 Cognitive mapping and theories .... 13
2.1.2 Web-based online course .... 14
2.1.3 Context modeling and reasoning .... 17
2.2 Multimedia processing and acquisition system .... 19
2.3 Cognitive maps based on students’ mental model .... 20
2.4 Overview of instructional planning to improve student’s cognitive ability .... 21
2.4.1 Cognitive map with weights of subject modules and concepts .... 22
2.5 Illustrating a hypothetical instruction model .... 23
2.6 Conclusion .... 25
References .... 26

CHAPTER 3 Brain–computer interface and neurocomputing .... 27
Samrudhi Mohdiwale and Mridu Sahu
3.1 Introduction .... 27
3.2 Brain–computer interface .... 28
3.2.1 History .... 28
3.2.2 Types of brain–computer interface .... 28
3.2.3 Assumptions and working of brain–computer interface .... 32
3.3 Electroencephalography acquisition devices .... 40
3.4 Challenges .... 41
3.4.1 Implantation of electrode .... 41
3.4.2 High dimensionality of data .... 41
3.4.3 Information transfer rate .... 42
3.4.4 Technical challenges .... 42
3.5 Case study on brain–computer interface .... 42
3.5.1 Dataset .... 43
3.5.2 Problem statement .... 43
3.5.3 Proposed method .... 43
3.5.4 Working of particle swarm optimization for channel selection .... 44
3.5.5 k-Nearest neighbor .... 47
3.5.6 Support vector machine .... 48
3.6 Results .... 48
3.7 Conclusion .... 49
References .... 51

CHAPTER 4 The impact on cognitive development of a self-contained exploratory and technology-rich course on the physics of light and sound .... 55
Fernando Espinoza
4.1 Background .... 56
4.2 Methodology .... 58
4.3 Results .... 59
4.4 Discussion .... 64
4.5 Limitations .... 67
References .... 68
Further reading .... 70

CHAPTER 5 Identification of normal and abnormal brain hemorrhage on magnetic resonance images .... 71
Nita Kakhandaki and S.B. Kulkarni
5.1 Introduction .... 71
5.2 Literature survey .... 72
5.3 Proposed work .... 74
5.3.1 Edge enhancement .... 76
5.3.2 Modified multilevel set segmentation algorithm .... 76
5.3.3 Feature extraction algorithm .... 78
5.4 Result and discussions .... 82
5.4.1 Comparative analysis between proposed NB-PKC and support vector machine .... 84
5.4.2 Comparison of the proposed NB-PKC and support vector machine schemes .... 86
5.5 Conclusion .... 89
Acknowledgment .... 89
References .... 90
Further reading .... 91

CHAPTER 6 Cognitive informatics, computer modeling and cognitive science assessment of knee osteoarthritis in radiographic images: a machine learning approach .... 93
Shivanand S. Gornale, Pooja U. Patravali and Prakash S. Hiremath
6.1 Introduction .... 94
6.2 Machine learning approach .... 96
6.2.1 Knee X-ray analysis: a machine learning approach .... 97
6.3 Experimental analysis .... 110
6.3.1 Experiment I .... 110
6.3.2 Experiment II .... 111
6.3.3 Experiment III .... 112
6.3.4 Experiment IV .... 113
6.4 Discussion .... 117
6.5 Summary .... 117
References .... 117
Further reading .... 120

CHAPTER 7 Adaptive circadian rhythm: a cognitive approach through dynamic light management .... 123
Srinagesh Maganty
7.1 Introduction .... 123
7.1.1 Circadian clock and circadian rhythm .... 123
7.1.2 Perception of eye as a visual and nonvisual information sensor .... 125

7.2 Photoreceptors in the eye .... 125
7.2.1 Light-emitting diodes .... 126
7.3 SunLike light-emitting diodes .... 132
7.4 Data sheet .... 132
7.4.1 A case study at an educational campus .... 132
7.5 Conclusion .... 134
Acknowledgments .... 134
Further readings .... 134

CHAPTER 8 Cognitive and brain function analysis of sleeping stage electroencephalogram wave using parallelization .... 137
Vikas Dilliwar and Mridu Sahu
8.1 Introduction .... 137
8.2 History of electroencephalography .... 138
8.3 Analysis of electroencephalogram signals .... 139
8.4 Electroencephalogram waves .... 140
8.5 Electroencephalogram signal recording variables and components .... 140
8.5.1 Frequency .... 140
8.5.2 Voltage .... 142
8.5.3 Morphology .... 142
8.5.4 Impedance .... 142
8.5.5 Electroencephalogram electrodes .... 142
8.5.6 Electrode gel .... 143
8.5.7 Electrode positioning (10/20 system) .... 143
8.5.8 Artifacts in Electroencephalogram recording .... 143
8.5.9 Filtering .... 144
8.5.10 Electroencephalogram recording device .... 144
8.6 Subject preparation and equipment setup for electroencephalogram recording using an electro cap .... 145
8.7 Sleeping stage electroencephalogram waves .... 145
8.8 Type of channel selection for cognitive .... 146
8.9 Disorders detection using electroencephalogram .... 147
8.10 Application of electroencephalogram .... 150
8.11 Case study—channel selection for alpha, beta, theta, and delta waves using parallel processing .... 150
8.11.1 Java Parallel Processing Framework architecture [JPPF] .... 151
8.11.2 Coherence estimation functions .... 151
8.11.3 Distributed parallel computation .... 152

Contents

8.12 Conclusion ..................................................................................158 References.................................................................................. 158 Further reading .......................................................................... 160

CHAPTER 9 The future networks—a cognitive approach ........... 161
Kavitha Sooda and T.R. Gopalakrishnan Nair
9.1 Introduction ................................................................................161
9.2 Intelligence in networks .............................................................161
9.3 Challenges in current network ...................................................162
9.4 Cognitive networks ....................................................................162
9.5 Need for intelligent networks ....................................................163
9.6 Background.................................................................................163
9.7 Cognition approach ....................................................................165
9.8 Learning and reasoning for intelligent networks.......................165
9.9 Human reasoning mechanism ....................................................166
9.10 Cognitive model for reasoning at human level .........................167
9.11 New intelligent approach ...........................................................168
9.12 Learning approaches ..................................................................169
9.13 Requirement of Bayesian approach for cognitive network.......170
9.13.1 The Bayesian network .................................................. 170
9.13.2 Importance of Bayesian model ..................................... 171
9.13.3 Environment in which Bayesian works the best .......... 171
9.13.4 Advantages over other alternative models ................... 172
9.13.5 Collateral relationship with graded cognitive network.......................................................................... 173
9.14 Future trends...............................................................................173
9.15 Research challenges ...................................................................174
9.16 Conclusion ..................................................................................175
References.................................................................................. 175

CHAPTER 10 Identification of face along with configuration beneath unobstructed ambiance via reflective deep cascaded neural networks .............................. 177 Siddhartha Choubey, Abha Choubey, Anurag Vishwakarma, Prasanna Dwivedi, Abhishek Vishwakarma and Abhishek Seth 10.1 Introduction ................................................................................177 10.2 Machine learning life cycle .......................................................179 10.2.1 Collection of data.......................................................... 180 10.2.2 Normalization of data ................................................... 180 10.2.3 Modeling of data ........................................................... 180

10.2.4 Training and feature engineering of model.................. 180
10.2.5 Production and deployment of models......................... 180
10.3 Popular augmentation techniques ..............................................185
10.3.1 FLIP............................................................................... 185
10.3.2 Rotation ......................................................................... 185
10.3.3 Scale .............................................................................. 186
10.3.4 Crop ............................................................................... 186
10.3.5 Translation..................................................................... 186
10.3.6 Gaussian noise............................................................... 186
10.4 Localization ................................................................................186
10.5 Methodology...............................................................................188
10.5.1 Preprocessing ................................................................ 188
10.5.2 Detection phase ............................................................. 189
10.6 Experiments................................................................................191
10.6.1 Training data ................................................................. 191
10.7 Conclusion ..................................................................................192
References.................................................................................. 193
Further reading .......................................................................... 194

CHAPTER 11 Setting up a neural machine translation system for English to Indian languages ............................... 195 Sandeep Saini and Vineet Sahula 11.1 Introduction ................................................................................195 11.2 Neural machine translation ........................................................197 11.2.1 Long- and short-term memory model .......................... 198 11.3 Setting up the neural machine translation system .........................................................................................200 11.3.1 Encoder and decoder..................................................... 200 11.3.2 Attention in the model .................................................. 201 11.3.3 Residual connections and bridges................................. 202 11.3.4 Out-of-vocabulary words .............................................. 204 11.4 Results and discussions ..............................................................205 11.4.1 Datasets ......................................................................... 205 11.4.2 Experimental setup........................................................ 206 11.4.3 Training details ............................................................. 206 11.4.4 BLEU score................................................................... 207 11.5 Discussions .................................................................................207 11.6 Conclusion ..................................................................................210 References.................................................................................. 210


CHAPTER 12 An extreme learning-based adaptive control design for an autonomous underwater vehicle....................................................................... 213 Biranchi Narayan Rath and Bidyadhar Subudhi 12.1 Introduction ................................................................................213 12.2 Modeling of autonomous underwater vehicle in diving plane and problem statement .....................................................215 12.2.1 Kinematic ...................................................................... 215 12.2.2 Dynamics....................................................................... 216 12.2.3 Discretization of the kinematic and dynamic of autonomous underwater vehicle for controlling the autonomous underwater vehicle in diving motion ....... 217 12.3 Identification of autonomous underwater vehicle dynamics using extreme learning machine model .....................................218 12.3.1 Sequential extreme learning machine model for autonomous underwater vehicle dynamic .................... 220 12.4 Design of diving controller ........................................................221 12.4.1 Kinematic backstepping control law ............................ 221 12.4.2 Dynamic nonlinear proportional, integral, and derivative control law ................................................... 222 12.5 Control law formulation with delay prediction .........................223 12.6 Results and discussion................................................................224 12.7 Conclusion ..................................................................................225 References.................................................................................. 226

CHAPTER 13 Geometric total plaque area is an equally powerful phenotype compared with carotid intima media thickness for stroke risk assessment: A deep learning approach..................................................... 229 Elisa Cuadrado-Godia, Saurabh K. Srivastava, Luca Saba, Tadashi Araki, Harman S. Suri, Argiris Giannopolulos, Tomaz Omerzu, John Laird, Narendra N. Khanna, Sophie Mavrogeni, George D. Kitas, Andrew Nicolaides and Jasjit S. Suri 13.1 Introduction ................................................................................229 13.1.1 Performance numbers ................................................... 231 13.2 Background survey on cIMT, LD, and TPA measurements.....232 13.2.1 cIMT detection and measurement methods ................. 232 13.2.2 LD detection and measurement methods and our proposal ......................................................................... 233 13.3 Materials and methodology........................................................234 13.3.1 Patient demographics and image acquisition ............... 234

13.3.2 gTPA modeling using cylindrical fitting...................... 235
13.3.3 Overall architecture....................................................... 236
13.3.4 cIMT and LD detection using DL system.................... 236
13.4 Experimental protocol, results, and its validation .....................237
13.4.1 DL system results and visual display of LI and MA interfaces......................................................... 238
13.4.2 Mean value computations for cIMT and gTPA for two DL systems....................................................... 238
13.4.3 Relationship of age versus cIMT/gTPA ....................... 238
13.4.4 Validation ...................................................................... 238
13.5 Statistical tests and 10-year risk analysis ..................................247
13.5.1 Risk analysis ................................................................. 247
13.5.2 Statistical tests............................................................... 249
13.5.3 Ten-year risk assessment .............................................. 251
13.6 Discussion...................................................................................254
13.6.1 Benchmarking ............................................................... 254
13.6.2 Strengths/weakness/extensions ..................................... 258
13.7 Conclusion ..................................................................................258
Acknowledgments ..................................................................... 258
Conflict of interest..................................................................... 258
Funding ...................................................................................... 259
Appendix A: LD/IMT measurement using deep learning system ........................................................................................ 259
Appendix B: Polyline distance method..................................... 260
Polyline distance metric ........................................................260
Appendix C: Correlation coefficient of gTPA against all the wall parameters .............................................................. 261
gTPA versus cIMT ................................................................261
gTPA versus LD ....................................................................261
gTPA versus IAD ..................................................................261
Appendix D: Statistical tests ..................................................... 262
Appendix E: List of abbreviations/symbols.............................. 268
References.................................................................................. 268

CHAPTER 14 Statistical characterization and classification of colon microarray gene expression data using multiple machine learning paradigms ..................... 273 Md. Maniruzzaman, Md. Jahanur Rahman, Benojir Ahammed, Md. Menhazul Abedin, Harman S. Suri, Mainak Biswas, Ayman El-Baz, Petros Bangeas, Georgios Tsoulfas and Jasjit S. Suri 14.1 Introduction ................................................................................273

14.2 Patients demographics................................................................276
14.3 Materials and methods ...............................................................276
14.3.1 Gene expression data normalization............................. 277
14.3.2 Feature selection ........................................................... 278
14.3.3 Classifier types .............................................................. 281
14.3.4 Statistical evaluation ..................................................... 285
14.4 Five experimental protocols.......................................................286
14.4.1 Experiment 1: Kernel optimization .............................. 286
14.4.2 Experiment 2: Effect of P-value during statistical tests on machine learning performance ........................ 287
14.4.3 Experiment 3: Intercomparison of the classifiers ........ 287
14.4.4 Experiment 4: Effect of dominant genes...................... 287
14.4.5 Experiment 5: Effect of data size on memorization versus generalization..................................................... 288
14.5 Results ........................................................................................288
14.5.1 Results of experiment 1: Kernel optimization ............. 288
14.5.2 Results of experiment 2: Effect of P-value during statistical tests on machine learning performance........ 289
14.5.3 Results of experiment 3: Intercomparison of the classifiers....................................................................... 290
14.5.4 Results of experiment 4: Effect of dominant genes..... 294
14.5.5 Results of experiment 5: Effect of data size on memorization versus generalization ............................. 294
14.6 Performance evaluation and hypothesis validations..................295
14.6.1 Gene separation index................................................... 295
14.6.2 Interrelationship between nGSI and classification accuracy......................................................................... 296
14.6.3 Reliability index............................................................ 297
14.6.4 Receiver operating curve analysis ................................ 297
14.6.5 Validation of proposed methods................................... 297
14.7 Discussion...................................................................................298
14.7.1 Benchmarking different machine learning systems ..... 300
14.7.2 A note on the intercomparison of classifiers................ 302
14.7.3 Strengths, weakness, and extensions ............................ 303
14.8 Conclusion ..................................................................................304
14.9 Acknowledgments ......................................................................304
14.10 Ethical approvals ........................................................................304
14.11 Funding.......................................................................................304
14.12 Conflict of interest .....................................................................305
14.13 Author’s contributions................................................................305
Appendix A................................................................................ 305
Appendix B................................................................................ 312


Appendix C................................................................................ 313 Appendix D................................................................................ 314 References.................................................................................. 314

CHAPTER 15 Identification of road signs using a novel convolutional neural network................................... 319
Yang Pan, Vijayakumar Kadappa and Shankru Guggari
15.1 Introduction ................................................................................319
15.2 Literature review ........................................................................321
15.3 Proposed convolutional neural network method .......................323
15.4 Experimental analysis ................................................................325
15.4.1 Preprocessing: impact of input shape of images.......... 325
15.4.2 Preprocessing using contrast limited adaptive histogram equalization .................................................. 326
15.4.3 Comparison of the proposed CNN against LeNet using holdout and cross-validation ............................... 326
15.4.4 Comparison of proposed CNN against ANN and SVM using holdout and cross-validation .............. 330
15.4.5 Comparison of proposed CNN method against kNN, CART, and random forest ............................................ 331
15.4.6 Why does the proposed CNN outperform other methods?........................................................................ 333
15.5 Conclusion ..................................................................................335
References.................................................................................. 335

CHAPTER 16 Machine learning behind classification tasks in various engineering and science domains.............. 339 Tilottama Goswami 16.1 What are classification tasks? ....................................................339 16.2 Classification tasks in engineering and science domains..........340 16.3 Machine learning classification algorithms ...............................341 16.3.1 Statistical methods ........................................................ 342 16.3.2 Cognitive methods ........................................................ 349 16.4 Case study—machine learning implementation ........................350 16.4.1 Case study 1: Medical industry .................................... 350 16.4.2 Case study 2: Geographical data .................................. 352 16.4.3 Case study 3: Finance dataset....................................... 353 16.4.4 Case study 4: Electrical dataset .................................... 354 Acknowledgments ..................................................................... 355 Further reading .......................................................................... 355 Index ......................................................................................................................357

List of contributors
Md. Menhazul Abedin, Statistics Discipline, Khulna University, Khulna, Bangladesh
Benojir Ahammed, Statistics Discipline, Khulna University, Khulna, Bangladesh
Tadashi Araki, Department of Cardiology, Toho University, Tokyo, Japan
Petros Bangeas, Department of Surgery, Papageorgiou Hospital, Aristotle University Thessaloniki, Thessaloniki, Greece
Mainak Biswas, Advanced Knowledge Engineering Centre, Global Biomedical Technologies, Inc., Roseville, CA, United States
Abha Choubey, Shri Shankaracharya Technical Campus, Bhilai, India
Siddhartha Choubey, Shri Shankaracharya Technical Campus, Bhilai, India
Elisa Cuadrado-Godia, Department of Neurology, IMIM—Hospital del Mar, Barcelona, Spain
Vikas Dilliwar, National Institute of Technology, Raipur, Raipur, India
Prasanna Dwivedi, Shri Shankaracharya Technical Campus, Bhilai, India
Ayman El-Baz, Department of Bioengineering, University of Louisville, Louisville, KY, United States
Fernando Espinoza, Department of Physics and Astronomy, Hofstra University, Hempstead, NY, United States; Department of Chemistry & Physics-Adolescence Education, SUNY Old Westbury, Old Westbury, NY, United States
Argiris Giannopolulos, Department of Vascular Surgery, Imperial College, London, United Kingdom
Anandi Giridharan, Indian Institute of Science, Bangalore, India
Shivanand S. Gornale, Department of Computer Science, School of Mathematics and Computing Sciences, Rani Channamma University, Belagavi, India
Tilottama Goswami, Department of Computer Science and Engineering, Anurag Group of Institutions, Hyderabad, India


Shankru Guggari, Data Analytics Research Lab, Department of Computer Applications, B.M.S. College of Engineering, Bengaluru, India
Prakash S. Hiremath, Department of Master of Computer Applications, KLE Technological University, Hubballi, India
Vijayakumar Kadappa, Data Analytics Research Lab, Department of Computer Applications, B.M.S. College of Engineering, Bengaluru, India
Nita Kakhandaki, SDM College of Engineering & Technology, Dharwad, India
Narendra N. Khanna, Department of Cardiology, Apollo Hospitals, New Delhi, India
George D. Kitas, Arthritis Research UK Epidemiology Unit, Manchester University, Manchester, United Kingdom; Department of Rheumatology, Group NHS Foundation Trust, Dudley, United Kingdom
S.B. Kulkarni, SDM College of Engineering & Technology, Dharwad, India
John Laird, Department of Cardiology, St. Helena Hospital, St. Helena, CA, United States
Srinagesh Maganty, Department of Electronics & Communication Engineering, PACE Institute of Technology & Sciences, Ongole, India
Md. Maniruzzaman, Statistics Discipline, Khulna University, Khulna, Bangladesh; Department of Statistics, University of Rajshahi, Rajshahi, Bangladesh
Sophie Mavrogeni, Cardiology Clinic, Onassis Cardiac Surgery Center, Athens, Greece
Samrudhi Mohdiwale, National Institute of Technology Raipur, Raipur, India
T.R. Gopalakrishnan Nair, Department of CSE, RREC, Bengaluru, India
Andrew Nicolaides, Vascular Diagnostic Center, University of Cyprus, Nicosia, Cyprus
Tomaz Omerzu, Department of Neurology, University Medical Centre Maribor, Maribor, Slovenia
Yang Pan, Data Analytics Research Lab, Department of Computer Applications, B.M.S. College of Engineering, Bengaluru, India
Pooja U. Patravali, Department of Computer Science, School of Mathematics and Computing Sciences, Rani Channamma University, Belagavi, India


Md. Jahanur Rahman, Department of Statistics, University of Rajshahi, Rajshahi, Bangladesh
Biranchi Narayan Rath, Department of Electrical Engineering, National Institute of Technology Rourkela, Rourkela, India
Luca Saba, Department of Radiology, Azienda Ospedaliero Universitaria, Cagliari, Italy
Mridu Sahu, National Institute of Technology, Raipur, Raipur, India
Vineet Sahula, Department of Electronics and Communication Engineering, Malaviya National Institute of Technology, Jaipur, India
Sandeep Saini, Department of Electronics and Communication Engineering, Myanmar Institute of Information Technology, Mandalay, Myanmar
Abhishek Seth, Shri Shankaracharya Technical Campus, Bhilai, India
G.R. Sinha, Myanmar Institute of Information Technology, Mandalay, Myanmar
Kavitha Sooda, Department of CSE, B.M.S. College of Engineering, Bengaluru, India
Saurabh K. Srivastava, Department of Computer Science & Engineering, ABES EC, Ghaziabad, India
Bidyadhar Subudhi, Department of Electrical Engineering, National Institute of Technology Rourkela, Rourkela, India
Harman S. Suri, Brown University, Providence, RI, United States
Jasjit S. Suri, Stroke Monitoring Division, AtheroPoint, Roseville, CA, United States
Georgios Tsoulfas, Department of Surgery, Aristotle University of Thessaloniki, Thessaloniki, Greece
K.A. Venkatesh, Myanmar Institute of Information Technology, Mandalay, Myanmar
Abhishek Vishwakarma, Shri Shankaracharya Technical Campus, Bhilai, India
Anurag Vishwakarma, Shri Shankaracharya Technical Campus, Bhilai, India


Editors’ biographies
Dr. G.R. Sinha is an adjunct professor at the International Institute of Information Technology (IIIT) Bangalore, currently deputed as a professor at the Myanmar Institute of Information Technology (MIIT), Mandalay, Myanmar. He obtained his B.E. (Electronics Engineering) and M.Tech. (Computer Technology) with a Gold Medal from the National Institute of Technology, Raipur, India, and received his PhD in Electronics & Telecommunication Engineering from Chhattisgarh Swami Vivekanand Technical University (CSVTU), Bhilai, India. He has published 250 research papers in various international and national journals and conferences, and is an active reviewer and editorial member of more than 12 reputed international journals, such as IEEE Transactions on Image Processing and Elsevier's Computer Methods and Programs in Biomedicine. He has been Dean of Faculty and an Executive Council Member of CSVTU, India, and is currently a member of the Senate of MIIT. Dr. Sinha has been appointed an ACM Distinguished Speaker in the field of DSP (2017-20) and an Expert Member for the Vocational Training Program of the Tata Institute of Social Sciences (TISS) for 2 years (2017-19). He has been the Chhattisgarh Representative of the IEEE MP Sub-Section Executive Council for the last 3 years, served as a Distinguished Speaker in Digital Image Processing for the Computer Society of India (2015), and served as a Distinguished IEEE Lecturer in the IEEE India Council, Bombay section. He has been a Senior Member of IEEE for many years. He is the recipient of many awards, such as the TCS Award 2014 for outstanding contributions to the Campus Commune of TCS; the R.B. Patil ISTE National Award 2013 for Promising Teacher from ISTE, New Delhi; the Emerging Chhattisgarh Award 2013; the Engineer of the Year Award 2011; the Young Engineer Award 2008; the Young Scientist Award 2005; the IEI Expert Engineer Award 2007; an ISCA Young Scientist Award 2006 nomination; and the Deshbandhu Merit Scholarship for 5 years.
He has authored six books, including Biometrics, published by Wiley India, a subsidiary of John Wiley, and Medical Image Processing, published by Prentice Hall of India. He is a consultant for various Skill Development initiatives of NSDC, Govt. of India, and a regular referee for project grants under the DST-EMR scheme and several other schemes of the Govt. of India. He has delivered many keynote/invited talks and chaired many technical sessions at international conferences held in Singapore, Myanmar, Bangalore, Mumbai, Trivandrum, Hyderabad, Mysore, Allahabad, Nagercoil, Nagpur, Kolaghat, Yangon, Meikhtila, and many other places. His special session on “Deep Learning in Biometrics” was included


in the IEEE International Conference on Image Processing 2017. He is a Fellow of IETE, New Delhi, a member of international professional societies such as IEEE and ACM, and a member of many national professional bodies such as ISTE, CSI, ISCA, and IEI. He is a member of various committees of the University and has been Vice President of the Computer Society of India, Bhilai Chapter, for two consecutive years. He has guided eight PhD scholars and 15 M.Tech. scholars. His research interests include image processing and computer vision, optimization methods, employability skills, and outcome-based education (OBE).
Jasjit S. Suri is an innovator, scientist, visionary, industrialist, and an internationally known world leader in biomedical engineering. Dr. Suri has spent over 25 years in the field of biomedical engineering/devices and its management. He received his doctorate from the University of Washington, Seattle, and his degree in Business Management Sciences from Weatherhead, Case Western Reserve University, Cleveland, Ohio. Dr. Suri was awarded the President's Gold Medal in 1980 and was elected a Fellow of the American Institute of Medical and Biological Engineering for his outstanding contributions in 2004.

Authors biography
Dr. G.R. Sinha is an adjunct professor at the International Institute of Information Technology (IIIT) Bangalore and is currently deputed as a professor at the Myanmar Institute of Information Technology (MIIT), Mandalay, Myanmar. He has published 223 research papers in various international and national journals and conferences. Dr. Sinha has been appointed a Distinguished Speaker in the field of Digital Signal Processing by ACM for 3 years (2017-20). He has been appointed an expert member for the Vocational Training Programme of the Tata Institute of Social Sciences (TISS) for 2 years (2017-19). He was elected Chhattisgarh Representative of the IEEE MP Sub-Section Executive Council 2017 and Executive Council 2016. He was also selected as a Distinguished Speaker in the field of Digital Image Processing by the Computer Society of India (2015). He is the recipient of many awards, such as the TCS Award 2014 for outstanding contributions to the Campus Commune of TCS; the Rajaram Bapu Patil ISTE National Award 2013 for Promising Teacher for creative work done in technical education, from ISTE, New Delhi; the Emerging Chhattisgarh Award 2013; the Engineer of the Year Award 2011; the Young Engineer Award 2008; the Young Scientist Award 2005; the IEI Expert Engineer Award 2007; an ISCA Young Scientist Award 2006 nomination; and the Deshbandhu Merit Scholarship for 5 years. He served as a Distinguished IEEE Lecturer in the IEEE India Council, Bombay section, and has been a Senior Member of IEEE for many years. He has authored six books, including Biometrics, published by Wiley India, a subsidiary of John Wiley, and Medical Image Processing, published by Prentice Hall of India.
Jasjit S. Suri, PhD, MBA, is an innovator, visionary, scientist, and an internationally known world leader in the field of biomedical imaging and healthcare management. Dr. Suri is a recipient of the President's Gold Medal (1980), a Fellow of the American Institute of Medical and Biological Engineering, National Academy of Sciences, Washington DC (2004), and the Marquis Lifetime Achievement Award (2018). Dr. Suri is a board member with several organizations. Currently, he is the chairman of AtheroPoint, United States. Dr. Suri has published over 700 papers/patents/books/trademarks, with an H-index of 54.


Dr. Anandi Giridharan is working as a principal research scientist in the Department of Electrical Communication Engineering at the Indian Institute of Science. She has 29 years of experience in research and academics. Her research interests are ubiquitous learning, communication protocol design and testing, quality assurance and assessment in technical education, multimedia information aspects, and quality of service. She is an IEEE Senior Member; Former Chair of the IEEE Computational Intelligence Society, Bangalore Chapter (2016-18); Vice-Chair, Women in Engineering, Bangalore Section; and a member of IETE, ISHM, and the IEEE-IISc HKN chapter. She has served as a member of the Interview Board for the All India Recruitment to the Post of Trained Graduate Teachers, supervised a project trainee at SERC, IISc, and guided students in the IISc Summer Fellowship Programme. She has published papers at national and international conferences and in journals; has written the Solution Manual to Multimedia Information System; has coordinated conferences, workshops, faculty development programs, and invited talks; has conducted Extension Lecture Programmes (CCE, IISc); and has assisted in the conduct and scrutiny of the GATE and KVPY examinations.
Dr. K.A. Venkatesh obtained his M.Sc., M.Phil., and PhD from Madurai Kamaraj University, Manonmaniam Sundaranar University, and Alagappa University, and is currently associated with IIITB as an adjunct professor while serving as a professor at the Myanmar Institute of Technology on deputation. He has served as registrar, dean of research, principal, professor, and HoD, and has also served in the software industry as CTO and Head of HR. He has 28 years of teaching experience. His research areas of interest are applicable mathematics, theoretical computer science, and banking and finance. Dr. Venkatesh has published papers in the areas of optimization, finance, banking (benchmarking), computer architecture (LNCS), mathematics, and high-voltage engineering. He has coauthored the book Discrete Mathematics, published by Vikas Publishing House. He is a member of the System Society of India, the Ramanujan Mathematical Society, the Indian Statistical Institute (Karnataka), the Association of Constraint Programming (United States), and the Academy of Discrete Mathematics and Applications, of which he is the founder Secretary.


Samrudhi Mohdiwale received her bachelor's degree in Engineering with a specialization in Electronics and Telecommunication Engineering in 2015 from Shri Shankaracharya Institute of Professional Management and Technology, Raipur, and her Master of Technology in Information Technology from the National Institute of Technology Raipur (C.G.) in 2018. She is pursuing a PhD at NIT Raipur. She has 2 years of research experience and, in this short span of time, has published more than four research papers in the areas of signal processing and machine learning. Her research area is biomedical image and signal processing with machine learning.

Dr. Fernando Espinoza, Ed.D. (Columbia University), holds appointments as a professor in the School of Education and the Department of Chemistry and Physics at SUNY Old Westbury, as well as an adjunct associate professor in the Department of Physics and Astronomy at Hofstra University. Dr. Espinoza has extensive teaching experience at the high school and college levels, teaching astronomy, physics, Earth science, and physical science, and in the pedagogical preparation of science teachers. His active research agenda includes more than a dozen peer-reviewed publications, two textbooks (The Nature of Science and Wave Motion as Inquiry), more than $400,000 in grants, and a significant number of conference presentations. He serves as a reviewer for several journals and, most recently, as a member of the New York State Education Department's Science Content Advisory Committee, charged with providing feedback on the adoption of the common core science curriculum as part of the US Next Generation Science Standards (NGSS).

Mrs. Nita Kakhandaki received her BE degree in Electronics and Communication Engineering from Basaveshwar Engineering College, Bagalkot, Karnataka, in 1992. Later, she received her master's degree in Digital Electronics from SDM College of Engineering and Technology, Dharwad, in 2001 and is pursuing a PhD at Visvesvaraya Technological University, Belagavi, Karnataka, India. She has published two papers in international journals and two international/national conference papers. She is presently working as an assistant professor in the Department of Computer Science and Engineering, SDM College of Engineering and Technology, Dharwad. Her present research interests are biomedical image processing and the design of embedded systems.



Dr. S. B. Kulkarni received his Bachelor of Engineering in Electrical and Electronics from Basaveshwar Engineering College, Bagalkot, in 1994 and his master's degree in Computer Science and Engineering from BVB College of Engineering and Technology, Hubli, in 2008. He received his PhD degree in 2014 from Graphic Era University, Dehradun, Uttarakhand, India. He is guiding five PhD students at Visvesvaraya Technological University, Belagavi, Karnataka. He has published 39 papers in international journals and conferences. He is presently working as an associate professor in the Department of Computer Science and Engineering at SDM College of Engineering and Technology, Dharwad, Karnataka, India. His present research interests are the development of machine learning-based approaches for image processing. He is a life member of ISTE and IEI.

Dr. Shivanand S. Gornale completed his M.Sc. in Computer Science, M.Phil. in Computer Science, and PhD in Computer Science from Savitribai Phule Pune University, Maharashtra, India, in 2009 under the guidance of Prof. K.V. Kale, and has been recognized as a research guide for PhD in Computer Science and Engineering at Rani Channamma University, Belagavi, and Jain University, Bangalore. He has published more than 90 research papers in various national and international journals and conferences. He is a fellow of IETE, New Delhi; a life member of CSI; a life member of the Indian Unit of Pattern Recognition and Artificial Intelligence (IPRA); a member of the Indian Association for Research in Computer Science (IARCS); a member of the International Association of Computer Science and Information Technology (IACS&IT), Singapore; a member of the International Association for Engineers, Hong Kong; a member of the Computer Science Teachers Association, United States; and a life member of the Indian Science Congress Association, Kolkata, India. Presently, he is working as a professor and chairman of the Department of Computer Science and also the director of IQAC at Rani Channamma University, Belagavi, Karnataka, India. His research areas of interest are digital image processing, pattern recognition, computer vision and machine learning, video retrieval, and biometric analysis.


Mrs. Pooja U. Patravali is pursuing a PhD in Computer Science at Rani Channamma University, Belagavi, Karnataka, India. She received her B.E. degree in Computer Science and Engineering from Visvesvaraya Technological University, Belagavi, Karnataka, India, in 2007 and her M.Tech degree in Computer Science and Engineering from Karnataka State Open University, Mysuru, Karnataka, India, in 2014. Her research interests include image processing and pattern recognition, medical image processing, computer vision, and machine learning techniques.

Dr. Prakash S. Hiremath obtained his M.Sc. degree in 1973 and his PhD degree in 1978 in Applied Mathematics from Karnataka University, Dharwad. He has been on the faculty of Mathematics and Computer Science of various institutions in India, namely, the National Institute of Technology, Surathkal (1977-79); Coimbatore Institute of Technology, Coimbatore (1979-80); National Institute of Technology, Tiruchirapalli (1980-86); and Karnataka University, Dharwad (1986-93). From 1993 to 2014, he worked as a professor in the Department of Computer Science, Gulbarga University, Gulbarga. Presently, he is working as a professor in the Department of Computer Science (MCA), KLE Technological University, Hubballi, Karnataka, India. His research areas of interest are computational fluid dynamics, optimization techniques, image processing and pattern recognition, and computer networks. He has published more than 220 research papers in peer-reviewed international journals and proceedings of international conferences.

Dr. M. Srinagesh is currently working as a professor in the Department of ECE, PACE Institute of Technology and Sciences, Ongole. He completed his undergraduate and postgraduate studies in Instrumentation Engineering at Andhra University College of Engineering, Visakhapatnam, and his PhD at CMJU, Meghalaya. He has 18 years of industrial experience and 10 years of teaching experience. He has published 8 papers in various international journals with good impact factors and presented more than 10 papers in various national and international conferences. He is a fellow member of the Institute of Engineers and IETE; a life member of the Indian Science Congress and the Instrument Society of India; and a member of ISA and IEEE. Currently, he is working on a DST-funded project, "Sleeping disorders and improvised efficiency for night-shift workmen."



Vikas Dilliwar received his B.E. (Hons.) degree in Information Technology from Pt. Ravishankar Shukla University, Raipur, India, in 2006 and his M.Tech degree in Computer Technology from the National Institute of Technology, Raipur, India, in 2011. He is an assistant professor in the Information Technology Department at Chhattisgarh Institute of Technology, Rajnandgaon, India, and is currently pursuing a PhD degree at the National Institute of Technology Raipur (CG), India. His research interests include parallel processing, distributed computing, biomedical signal processing, image processing, and soft computing. He has published more than 25 research papers in various journals and conference proceedings.

Dr. Mridu Sahu completed her graduation in Computer Science and Engineering in 2004 from Maulana Azad National Institute of Technology, Bhopal. She completed her postgraduation (Master of Technology in Computer Science and Engineering) from RIT, Raipur, in 2011 and her PhD in Computer Science and Engineering in 2018 from the National Institute of Technology Raipur, India. She has more than 10 years of teaching experience and is presently working as an assistant professor in the Department of Information Technology, NIT Raipur. She has published more than 15 research articles in various journals and conferences in the fields of data mining, brain-computer interfaces, sensor devices, and visual mining techniques.

Kavitha Sooda holds a PhD degree in Computer Science and Engineering. She has 17 years of teaching experience and completed her postdoctoral work on teaching styles at IISc, Bengaluru. Her research interests include routing techniques, QoS applications, cognitive networks, evolutionary algorithms, and Indian methodology for teaching. She has more than 20 papers published in reputed journals and conferences. Currently, she works as an associate professor at B.M.S. College of Engineering, Bengaluru, India.

T.R. Gopalakrishnan Nair holds an M.Tech (IISc, Bengaluru) and a PhD degree in Computer Science. He has four decades of experience in Computer Science and Engineering through research, industry, and education. He has published several papers and holds patents in multiple domains. He is the winner of the PARAM Award for technology innovation. Currently, he is a professor in the Department of CSE, RREC, Bengaluru.


Dr. Siddhartha Choubey, M.Tech, PhD (Computer Science and Engineering), LMISTE, MCSI, is working as a professor of Computer Science and Engineering at Shri Shankaracharya Technical Campus, Bhilai, India. He has published more than 60 research papers in various international and national journals and conferences. His areas of interest include networking, parallel processing, image processing, biomedical imaging, nanoimaging, neural networks, fuzzy logic, pattern recognition, bioinformatics, AI, machine learning, deep learning, and IoT.

Dr. Abha Choubey, M.Tech, PhD (Computer Science and Engineering), LMISTE, MCSI, life member of ACM, is working as a professor of Computer Science and Engineering at Shri Shankaracharya Technical Campus, Bhilai, India. She has published more than 60 research papers in various international and national journals and conferences. Her areas of interest include networking, parallel processing, image processing, biomedical imaging, nanoimaging, neural networks, fuzzy logic, pattern recognition, bioinformatics, AI, machine learning, and IoT.

Abhishek Seth is an engineering undergraduate pursuing his BE degree in Computer Science Engineering at Chhattisgarh Swami Vivekanand Technical University, Bhilai, Chhattisgarh. His present research areas include the development of deep learning-based approaches for image processing, IoT, reinforcement learning, and behavioral biometrics.

Abhishek Vishwakarma is an engineering undergraduate who is pursuing his BE degree from Chhattisgarh Swami Vivekanand Technical University, Bhilai, Chhattisgarh. His research topics include computer vision, natural language processing, and reinforcement learning.



Anurag Vishwakarma received his BE degree from Chhattisgarh Swami Vivekanand Technical University, Bhilai, Chhattisgarh, in 2019. He has authored/coauthored publications. His research topics include computer vision, natural language processing, and reinforcement learning.

Prasanna Dwivedi received his BE degree in Information Technology from Chhattisgarh Swami Vivekanand Technical University, Bhilai, Chhattisgarh, in 2019. He has coauthored one publication in a high-impact-factor international journal. He is presently working as a software engineer at Spark Electronics. His present research interests are machine learning, natural language processing, and virtual reality.

Biranchi Narayan Rath received his master's degree in Electrical Engineering from NIT Rourkela, India, in 2014. He is currently working toward the PhD degree in Electrical Engineering at NIT Rourkela, India. His research interests include robotics and the control of autonomous underwater vehicles.

Prof. Bidyadhar Subudhi received a PhD degree from the University of Sheffield, Sheffield, in 2002. He is currently a professor in the Department of Electrical Engineering, NIT Rourkela. His research interests include robotics, control system, control of photovoltaic power system, and control of wind energy system.


Elisa Cuadrado-Godia, MD, PhD, is a neurologist specialist in cerebrovascular diseases at Hospital del Mar, Barcelona, Spain. She is a member of NEUVAS research group and also an associate professor of the Bachelor’s Degree in Biomedical Engineering of the Universitat Pompeu Fabra (UPF) in Barcelona. Her research is focused on the search for biomarkers of cerebrovascular diseases.

Saurabh K. Srivastava, M.Tech, is working as a senior assistant professor in the Department of Computer Science and Engineering, ABES Engineering College, Ghaziabad, India. He is currently pursuing his PhD from Jaypee Institute of Information Technology, Noida, India.

Luca Saba, MD, is with A.O.U. Cagliari, Italy. His research interests are in multi-detector-row computed tomography, magnetic resonance, ultrasound, neuroradiology, and diagnostic in vascular sciences.

Tadashi Araki received his MD degree from Toho University, Japan, in 2003. His research topics include coronary intervention, intravascular ultrasound (IVUS), and peripheral intervention. He now works at Toho University Ohashi Medical Center, Tokyo, Japan, as a coronary and peripheral interventionalist.



Harman S. Suri is currently pursuing his BS at Brown University, Providence, United States. He worked in the summer of 2015 in the telemedicine-based autism industry at Behavioral Imaging, Boise, Idaho, United States, and at Instituto Superior Técnico, Lisbon, Portugal, in 2018.

Argiris Giannopolulos, MD, is currently working in the Department of Vascular Surgery, Imperial College, London, United Kingdom.

Tomaz Omerzu, MD, is currently working at the University Medical Centre Maribor, Slovenia. His research interests are radiology and cardiovascular medicine.

John R. Laird, MD, FACC, is with St. Helena Hospital, CA, United States. Prof. Laird is an internationally renowned interventional cardiologist whose expertise is in innovative procedures for carotid artery disease.


Narendra N. Khanna, MD, DM, FACC, is an advisor to Apollo Group of Hospitals in India and is working as a senior consultant in Cardiology & Coordinator of Vascular Services at Indraprastha Apollo Hospital, New Delhi.

Sophie Mavrogeni, MD, PhD, is currently working at the Cardiology Clinic, Onassis, Athens, Greece. Her research is focused on nonischemic cardiomyopathy, dystrophinopathies, myocarditis, and rheumatic diseases.

George D. Kitas, MD, PhD, FRCP, is a director of Research and Development-Academic Affairs, Dudley Group, NHS Foundation Trust, Dudley, United Kingdom. He is an honorary professor of Rheumatology at the Arthritis Research UK Epidemiology Unit.

Andrew Nicolaides, MS, FRCS, PhD (Hon), is currently the Professor Emeritus at Imperial College, London. He is the coauthor of more than 500 original papers and editor of 14 books.



Md. Maniruzzaman, M.Sc., is working as a lecturer in the Statistics Discipline, Khulna University, Khulna, Bangladesh. His research interests are machine learning, public health, medical imaging, and bioinformatics. He has published 10 research papers in international journals.

Md. Jahanur Rahman, PhD, is a professor in the Department of Statistics, University of Rajshahi, Rajshahi, Bangladesh. His research interests are machine learning, time series analysis, and bioinformatics.

Benojir Ahammed, M.Sc., is currently working as an assistant professor, Statistics Discipline, Khulna University, Khulna, Bangladesh. His research interests are biostatistics and machine learning.

Md. Menhazul Abedin, M.Sc., is working as a lecturer in the Statistics Discipline, Khulna University, Khulna, Bangladesh. His research interests are bioinformatics and machine learning.


Mainak Biswas, PhD, is a visiting scientist at Global Biomedical Technologies Inc., CA, United States. His research interests are in the areas of machine learning and biomedical applications.

Dr. Ayman El-Baz, PhD, is a professor in the Department of Bioengineering at the University of Louisville, KY. He has 12 years of hands-on experience in the fields of bioimaging modeling and computer-assisted diagnostic systems. He has developed new techniques for analyzing 3D medical images. He has authored or coauthored more than 300 technical articles. Currently, he is the acting chair of Bioengineering.

Petros Bangeas received his medical degree from the Aristoteleion University of Thessaloniki School of Medicine, Greece, and is now a resident surgeon at the 1st AHEPA University Hospital of Thessaloniki, Greece. His clinical and research interests include hepatobiliary surgery, surgical oncology, nanomedicine and nanosurgery applications in surgery, and the use of technology in medical education.

Georgios Tsoulfas, MD, received his medical degree from Brown University and completed a research fellowship in transplantation at the Starzl Transplant Institute, University of Pittsburgh. He is currently an associate professor of Surgery at the Aristoteleion University, Greece.



Yang Pan received his BE in Applied Mathematics from Huazhong University of Science and Technology, Wuhan, Hubei, China, in 2015, and then gained a Master of Computer Applications degree in Bengaluru, Karnataka, India. He is now a PhD student at the University of Texas at Arlington, Arlington, Texas, United States. His research focuses on deep learning, especially neural networks; he is experienced with convolutional neural networks (CNNs) and generative adversarial networks (GANs).

Dr. Vijayakumar Kadappa is currently working as an associate professor in the Department of Computer Applications, BMS College of Engineering, Bangalore. He obtained his PhD in Computer Science from a reputed central university (University of Hyderabad, Hyderabad) in 2010 and his master's degree (MCA) in 1998 from the University of Mysore, Mysore, Karnataka, and has worked extensively in the area of principal component analysis. His research interests are in data mining, pattern classification, and related areas. He has published his research work in international journals (Elsevier and Springer) and international conferences. He has also received a grant (UGC) from the Government of India.

Mr. Shankru Guggari, M.Tech, is a research scholar in the Department of Computer Science and Engineering, BMS College of Engineering, Bangalore. He is currently working on a classification technique for his PhD dissertation and recently won a best research paper award at an international conference. His research areas of interest are pattern recognition, IoT, and machine learning. He has published some of his research work in international conferences and a research paper in an Elsevier journal. He has more than 4 years of industry experience and more than 3 years of academic research experience.


Tilottama Goswami completed her B.Tech (Computer Science and Engineering) at NIT Durgapur in 1995 and her M.S. (Computer Science) at Rivier University, New Hampshire, United States, in 2000. She was awarded a PhD in Computer Science by the University of Hyderabad in 2016. Her areas of research interest are image processing, machine learning, and computer vision. She was awarded the University Grants Commission-Basic Scientific Research (UGC-BSR) Fellowship (under the Govt. of India) during 2014-15. She has an overall 20 years of work experience in industry and academia, both in India and abroad. At present, she is working as a professor in the Department of Computer Science and Engineering at Anurag Group of Institutions, Hyderabad, Telangana, India. She is a senior member of IEEE and presently an executive committee member of the CIS/GRSS Hyderabad section.

Sandeep Saini received his B.Tech degree in Electronics and Communication Engineering from the International Institute of Information Technology, Hyderabad, India, in 2008 and completed his M.S. at the same institute in 2010. He is pursuing his PhD at Malaviya National Institute of Technology, Jaipur, India. He has been working at the Myanmar Institute of Information Technology since 2018; before joining MIIT Mandalay, he worked at the LNM Institute of Information Technology, Jaipur, as an assistant professor from 2011 onward. His research interests are in the areas of natural language processing and cognitive modeling of language learning. Sandeep has been a member of IEEE since 2009 and is an active member of ACM as well.

Vineet Sahula obtained his bachelor's degree in Electronics (Hons.) from Malaviya National Institute of Technology, Jaipur, India, in 1987, his master's in Integrated Electronics and Circuits from the Indian Institute of Technology, Delhi, in 1989, and his PhD degree from the Department of Electrical Engineering, Indian Institute of Technology, Delhi, in 2001.
In 1990 he joined Malaviya National Institute of Technology, Jaipur, as a faculty member, where he is currently the head of the Department of Electronics and Communication Engineering. He has more than 80 research papers in reputed journals and conference proceedings to his credit. His research interests are in system-level design, cognitive architectures, cognitive aspects of language processing, modeling and synthesis for analog and digital systems, and computer-aided design for VLSI and MEMS. Dr. Sahula served on the Technical Programme Committee of the VLSI Design and Test Symposium, India, from 1998 to 2013. He also served on the organizing committee of Embedded Systems Week, Oct. 2014, Delhi, and as fellowship chair of the 22nd IEEE International Conference on VLSI Design, India, in 2009. He is a senior member of IEEE, a Life Fellow of IETE and IE, a life member of IMAPS, and a member of ACM SIGDA.

Preface

The evolution of cognitive science started in ancient times, when Plato and Aristotle sought to interpret the nature of human knowledge. By now, a large number of studies and research efforts on cognitive science suggest that it is the systematic and scientific study of brain, mind, and intelligence. The concepts of brain, mind, and intelligence are not limited to human beings but extend to all other living beings in the world. This emerging interdisciplinary field covers philosophy, psychology, computer science, neuroscience, linguistics, and more. Because cognitive science has been shaping much of the modern world, its intricacies, theory, and applications need to be highlighted and elaborated in a way that helps the numerous researchers, scientists, psychologists, philosophers, neuroscientists, and others working in the field of the human brain and the exploitation of its cognitive ability.

This book covers: introduction and theoretical background (representation and understanding of brain information processing); philosophical and psychological theory (neuroscience, intelligence, thinking, and cognitive linguistics); cognitive informatics and computing (machine learning, image processing, and data analytics); statistics for cognitive science (statistics, probability theory, and cognitive maps); cognitive applications (brain-computer interfaces, the human brain, cognitive ability, cyber cognitive systems, cognitive robotics, the internet of cognitive things, and machine and deep learning applications); and case studies (philosophical, psychological, and embodied cognitive science-based case studies).

G. R. Sinha and Jasjit S. Suri


Acknowledgments

Dr. Sinha expresses sincere thanks to his wife Shubhra, his daughter Samprati, and his great parents for their wonderful support and encouragement throughout the completion of this important book on Cognitive Informatics, Computer Modeling, and Cognitive Science (Volume 1: Theory, Case Studies, and Applications). This book is an outcome of focused and sincere efforts that could be given to it only because of the great support of his family. Dr. Sinha is grateful to his teachers, who have left no stone unturned in empowering and enlightening him, especially Shri Bhagwati Prasad Verma, who is like a godfather to him. Dr. Sinha also extends his heartfelt thanks to the Ramakrishna Mission order and Revered Swami Satyarupananda of Ramakrishna Mission, Raipur, India. Dr. Sinha would like to thank all his friends, well-wishers, and all those who keep him motivated to do more and more, better and better. Dr. Sinha offers his reverence with folded hands to Swami Vivekananda, who has been his source of inspiration for all his work and achievements.

We sincerely thank all contributors for writing relevant theoretical background and real-time applications of cognitive science and informatics, and for entrusting this work to us. Last but most important, we express our humble thanks to Chris Katsaropoulos, Senior Acquisitions Editor (Biomedical Engineering) of Elsevier Publications, for his great support, necessary help, appreciation, and quick responses. We also wish to thank Elsevier for giving us this opportunity to contribute on a relevant topic with a reputed publisher.

G. R. Sinha and Jasjit S. Suri


CHAPTER 1

Introduction to cognitive science, informatics, and modeling

G.R. Sinha¹ and Jasjit S. Suri²
¹Myanmar Institute of Information Technology, Mandalay, Myanmar
²Stroke Monitoring Division, AtheroPoint™, Roseville, CA, United States

1.1 Introduction and history of cognitive science

The evolution of cognitive science started in ancient times, when Plato and Aristotle sought to interpret the nature of human knowledge. By now, a huge number of studies and research efforts on cognitive science suggest that it is actually a systematic and scientific study of brain, mind, and intelligence. The concepts of brain, mind, and intelligence are not limited to human beings but extend to all other living beings in the world. This emerging interdisciplinary field covers philosophy, psychology, computer science, neuroscience, linguistics, etc. Modern digital computers, robots, fighter pilot applications, decision-making in medical science, IoT devices, environmental monitoring and surveillance-related applications, and many more employ cognitive science theory in their automatic and programmable operations. Artificial intelligence (AI) makes computers and robots behave like human beings and function like the human brain. The human brain involves perception, understanding, decision-making, reasoning, emotion, and language as important processes, and all of these need to be comprehended properly so that modeling of the human brain can be achieved more precisely. Because cognitive science has been shaping much of the modern world, its intricacies, theory, and applications need to be highlighted and elaborated in a way that helps the numerous researchers, scientists, psychologists, philosophers, neuroscientists, and others working in the field of the human brain and the exploitation of its cognitive ability. This chapter presents the theoretical background and history of cognitive science, followed by its philosophical and psychological aspects.
Representation of cognitive science, the cognitive model of the brain, knowledge representation, and information processing of the human brain are discussed, although cognitive science includes modeling and imitation of the brains of all living beings. The theory of consciousness, neuroscience, intelligence, decision-making, and mind and behavior analysis are described with case studies supported by strong background. The computing part is another important factor that helps in decision-making, and thus cognitive computing presents different ways of information manipulation, processing, and, finally, decision-making. Neuroscience aims at developing mathematical and computational models of the structures and processes of the brains of humans and other animals. AI is considered a core part of cognitive science, and advances in the field of AI began in the 1950s. The roles of machine learning, AI, cognitive knowledge bases, deep learning, cognitive image processing, and suitable data analytics are useful for cognitive science. Evaluation of various cognitive science tasks and computation results has to be done using statistics. Probability theory is discussed to address uncertainty and interpret the cognitive scenario; this includes probability distribution concepts, the maximum likelihood estimator, and other similar distribution functions. Bayesian statistics is presented because it plays an important role in establishing a hypothesis from evidence. The modeling of the human brain and the assessment of cognitive ability would greatly help the researchers, neuroscientists, and psychologists working on understanding the human brain and its functions. The mind is always considered as different from the brain, the brain as a physical and the mind as a mental entity, and therefore consciousness of the human brain and mind control are emerging applications in the fields of cognitive psychology and philosophical understanding of the brain. We discuss here the status of emerging research in the field of cognitive science, covering current as well as future trends. Cognitive language processing is discussed, which paves the way for developing numerous tools for helping physically challenged persons. We are now in the age of self-driving cars and autonomous driver-assistance systems, and therefore the insight, theory, and applications of cognitive science in these areas are also explained with suitable case studies.
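The Bayesian idea mentioned above, establishing a hypothesis from evidence, can be sketched in a few lines of Python. The scenario and all probability values below are invented purely for illustration:

```python
# A minimal Bayes-rule update: P(H|E) = P(E|H) * P(H) / P(E).
# All probabilities below are invented for illustration only.

def bayes_update(prior, likelihood, likelihood_alt):
    """Posterior P(H|E) for a hypothesis H given evidence E.

    likelihood     = P(E | H)
    likelihood_alt = P(E | not H)
    """
    evidence = likelihood * prior + likelihood_alt * (1.0 - prior)
    return likelihood * prior / evidence

# Hypothetical example: H = "a participant will recognize a previously shown face",
# E = "the participant responds quickly".
prior = 0.6             # P(H) before seeing any evidence
p_fast_given_h = 0.9    # P(fast response | recognizes)
p_fast_given_not = 0.3  # P(fast response | does not recognize)

posterior = bayes_update(prior, p_fast_given_h, p_fast_given_not)
print(round(posterior, 3))  # prints 0.818
```

Observing the evidence raises the probability of the hypothesis from 0.6 to about 0.82, which is exactly the "hypothesis from evidence" role that Bayesian statistics plays in this chapter.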
Cognitive systems are employed in many of the modern innovations and tools that operate on certain wireless network using smart sensors. So, security becomes an important issue in the field of cognitive science. Background information of necessary imaging modality can be seen in an important book on Medical Image Processing [1] where MRI and other brainimaging modalities are discussed in detail. One more book [2] discusses about biometrics and various types of multimodal and unimodal biometric techniques. We are discussing biometrics and medical imaging here because both medical imaging and biometrics are important aspects of study of human cognitive ability. Cognitive ability of human brain and its assessments has been attempted by a limited number of research contributions, few such [3,4] provided enough background and conceptualized the assessment of cognitive ability of human brain. The term “cognition” is closely associated with brain, and thus assessing its cognitive ability is an emerging area of research all across the globe. One such result (only a sample out of extensive results) of cognitive ability assessment can be seen in Table 1.1 where we can observe few wonderful conclusions on the ability of brain that actually varies from age to age, and also from men to women. Table 1.1 highlights the comparison of cognitive ability for different age groups, beginning from 10 to 60 years involved in the study. It can be clearly

1.1 Introduction and history of cognitive science

Table 1.1 Cognitive ability assessment.

Age group (years)                        45–60  35–45  21–35  15–21  10–15
Correctly recognized (RC)       Male        18     55     59     60     65
                                Female      22     64     67     69     70
Replied "cannot say anything"   Male        06     04     12     11     04
                                Female      02     12     11     11     04
Did not recognize (NR)          Male        16     31     29     29     21
                                Female      16     24     22     20     16
Retention time (RNT, seconds)   Male       8.3    7.9    6.8    5.4    5.2
                                Female     6.9    6.7    5.8    4.8    4.1

seen that the ability to recognize correctly is always higher in females, meaning that the number of women who could recognize the faces shown to them some time earlier exceeds that of men. The decisiveness of whether they can recognize or not is also better in the female candidates. Moreover, the time taken to recognize, which we call retention time, is also shorter for women. This small comparison offers huge scope for further research and more studies evaluating the cognitive ability of the human brain. The ability was found to be better in children than in young and old persons, and among the young and old, the younger participants performed better at recognizing the faces.
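The comparisons drawn here can be checked mechanically against the numbers reported in Table 1.1 (copied into plain lists; columns run from the 45–60 group down to the 10–15 group):

```python
# Table 1.1 data: columns are age groups 45-60, 35-45, 21-35, 15-21, 10-15.
rc_male   = [18, 55, 59, 60, 65]        # correctly recognized (RC)
rc_female = [22, 64, 67, 69, 70]
rnt_male   = [8.3, 7.9, 6.8, 5.4, 5.2]  # retention time (RNT) in seconds
rnt_female = [6.9, 6.7, 5.8, 4.8, 4.1]

# Female RC counts exceed male counts in every age group ...
assert all(f > m for m, f in zip(rc_male, rc_female))
# ... and female retention times are uniformly shorter.
assert all(f < m for m, f in zip(rnt_male, rnt_female))
# The youngest group (last column) recognizes best for both sexes.
assert rc_male[-1] == max(rc_male) and rc_female[-1] == max(rc_female)
```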

1.1.1 Cognition, brain, and consciousness

Cognitive science is closely associated with the behavior of the brain, its cognition process, and the consciousness associated with it. An introduction, history, and background of cognitive science are given in a book on cognitive neuroscience [5], which discusses mind, brain, the concept of neurons and their interconnections; the imaging and vision aspects of cognition; consciousness; hearing ability; memory, thinking and learning ability, problem-solving skills; the role of language and emotion; and social cognition and development. A few facts about the brain and its ability are worth noting: drinking alcohol affects the brain; the brain consists of neurons connected in a certain order; and the neurons interact in definite ways and form visual maps inside the brain called cognitive maps. This can be seen in the typical diagram shown in Fig. 1.1. Neurons are connected with each other as a network, called a neural network in computing terms. The neurons, each carrying a certain weight, play an important role in decision-making. When the brain receives a signal through the eyes, nose, ears, or any other sensory organ, the weights of the neurons are

CHAPTER 1 Introduction to cognitive science

FIGURE 1.1 Formation of cognitive map: neurons in the brain connect to form a visual map, or cognitive map.

changed and accordingly an impression is created inside the brain, which we refer to as a cognitive map. The term “cognitive map” is not very simple when it comes to discussion about this in the context of research activities related to brain, its functioning, and ability. Consciousness or subconsciousness is a significant state of brain or rather mind that is developed with the help of certain functioning of human brain; this is related to brain and its cognition processes. Cognition, here, is the ability of brain to recognize, understand, interpret, analyze, distinguish, and many similar actions. In the process of brain development or cognition process, training is an important factor through which the brain learns. Let us try to understand this by a simple example—a child never has seen before an object, say a bus or car, which is explained by the parents or teachers to the child by describing what the object looks like and also some of its features; then say that this is called a bus or car or an apple. So, an impression is created in the brain of a child, which he or she never forgets due to the cognitive or visual map created for that object inside the brain. Cognitive revolution and cognitive linguistics were introduced [6], and it was suggested that the people who work in the area of cognitive science borrow the concepts from the history about how the people in the past used to recognize and correlated various things. Hegelian arguments are discussed highlighting the philosophical and psychological aspects of cognitive science (see Ref. [7]). Cognitive


FIGURE 1.2 Scope of cognitive science: philosophy and psychology, neuroscience, linguistics and mathematics, and physics and computer science.

science is considered an interdisciplinary area that combines many disciplines, such as philosophy, psychology, linguistics, intelligence, evolutionary biology, neuroscience, and anthropology. It also includes scope for mathematics, physics, computer science, and many other modern disciplines [8]. This report of the European Commission highlights culture among several perspectives of cognitive science and emphasizes that cognition, or the brain's ability, varies from culture to culture. In Ref. [9], too, cognitive science is said to encompass several constituent disciplines and to have a strong history originating in the 1950s. The disciplines covered by cognitive science can be seen in Fig. 1.2.

1.1.2 Dynamic theory and cultural aspect

The dynamic theory of cognitive science is explained as a framework that combines several aspects of cognition, such as psychology and philosophy. This framework (see Ref. [10]) highlights major elements of cognitive science and applications supported by a suitable metatheory established using systematic approaches. It mentions a number of talks, discussions, debates, and arguments in the area of cognitive theory regarding its what and how. Plenty of theoretical and empirical work establishes that something like cognitive theory exists, acts as an interdisciplinary area, and can be seen as an important aspect of all research areas. Pieter [11], in a dissertation on the historical aspects of cognitive science, discusses several examples of the dynamic theory in which the behavior of the brain and its ability can be seen to vary, indicating that cognitive theory is an example of a dynamic theory. Models of memory, plastic memory, brain, mind, philosophical history, and cognition under confrontation were the major foci of that study. For all these factors or components, some


examples exist, which means that there is enough evidence of the effect of confrontation on the brain and its cognitive ability, and similarly of the plastic nature of the memory that the brain utilizes. The cognitive science of religion is another area, concerned with the various religious factors affecting cognition (see Ref. [12]). Brain oscillations measured with EEG, the relationship between modalities, behavioral levels of the brain, and prestimulus information were studied dynamically in a recent thesis [13], which reports good findings based on a number of participants subjected to temporal stimuli; the changes in behavior were observed and inferences drawn.

1.1.3 Psychology, philosophy, and cognitive neuroscience

As Fig. 1.2 and the corresponding contributions reported in Section 1.1 show, philosophy, psychology, and neuroscience are essential components of cognitive science; this section therefore introduces the major studies and work on these elements. Memory models and their role were discussed long ago in Ref. [14], focusing on cognitive science and Wittgenstein's theory. The brain has its memory, stores a great deal of information, and attempts to extract meaningful information from it. In the theory of cognition suggested by Wittgenstein, memory was referred to as a storehouse, a view now explored as a cognitivist approach that includes psychology and philosophy as important elements in various studies related to modern science and engineering applications. In a PhD thesis in experimental psychology, visual cortical responses and perception were investigated [15]. The flow of events and responses in this work can be easily interpreted with the help of Fig. 1.3. We receive various stimuli through our sensory organs as multisensory data or information, which the brain correlates and interprets. Based on the information and its meaningful interpretation, a shape is created inside the brain, which then instructs the appropriate action or function to be performed via an information channel. The philosophy of cognitive theory is the study of ontology, which relates mind, the nature of the brain, and their relationship, whereas the psychological study is a systematic, scientific study of brain processes and behavior. The neuroscience concept involving several brain activities is the outcome of the psychological aspect of cognitive science.

1.2 Cognitive modeling

The cognitive concept needs suitable mathematical models for implementation in applications, which in turn require mathematical functions, probability theory, mathematical operators, block sets, and many other tools. One such modeling of cognitive theory can be seen in Ref. [16], which discusses cognitive modeling for


FIGURE 1.3 Cognitive theory as psychological and philosophical trait: receive information from sensory organs → interpretation of information → perception shaping → forwarding to a suitable channel by the brain → appropriate action or function.

addressing computational tractability. The cognitive capability of the human brain is attributed to its abilities, and therefore the ability and functionality of the brain need to be analyzed in terms of suitable mathematical blocks or functions. The focus is on selecting the functions that contribute to cognitive capacity; other functions representing the abilities are ignored. This work highlights the concept of a useful mathematical function, stated as follows:

• Total cognitive functions include actual as well as possible functions.
• Only the functions that contribute to cognitive capability are considered.

Based on the actual number of cognitive functions, a model is developed to demonstrate the functioning of the brain, so that its functionalities and abilities can be studied on the basis of brain activity. Computability and tractability are explored as possible foundations of cognitive modeling, following the Church–Turing thesis [16]. The computability of cognitive capability is assessed in terms of computational complexity, time complexity, and the size of the input accepted as the stimulus by the human brain. Representing these terms may employ mathematical functions or algorithms (linear, nonlinear, or exponential) based on the given input size and complexity values. Cognitive modeling draws on several cognitive domains, such as coherence, visual search, language processing, and Bayesian inference.
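The role of input size in tractability can be illustrated with a toy comparison of growth rates. The functions below are generic textbook growth rates, not models taken from Ref. [16]:

```python
# Tractability illustration: how step counts grow with stimulus size n.
# Linear and quadratic models remain plausible for real-time cognition;
# an exponential model explodes even for modest inputs.
def linear(n):      return n
def quadratic(n):   return n * n
def exponential(n): return 2 ** n

for n in (10, 20, 30):
    print(n, linear(n), quadratic(n), exponential(n))
# At n = 30 the exponential model already requires over a billion steps
# (2**30 = 1073741824), which is why tractability constrains which
# cognitive models are realistic.
```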


FIGURE 1.4 Cognitive network: an interconnection of nodes (N).

1.2.1 Cognitive networks

A network is generally defined as an interconnection of a number of nodes; a neural network, for example, is an interconnection of millions of neurons. The network used in cognitive theory to explore all the capabilities of the brain through a cognitive approach is referred to as a cognitive network. The network has one or more inputs, processing elements, and outputs. Each element of the network may act as a processing element and contribute to obtaining a suitable output from the network. Fig. 1.4 shows a typical cognitive network architecture highlighting the elements of a cognitive network. The network can have millions of nodes (N) of different sizes representing different weights, similar to a neural network. The brain is modeled as a neural network in the implementation of several engineering tasks and applications, where the nodes of the network are neurons. The processing among neurons plays an important role, as do the nodes in a cognitive network. Cognitive processes determine the final output of the network. Andrea, in an important work on cognitive networks, highlighted some mathematical challenges in implementing a cognitive network for the human brain; the challenges relate to

• the construction of the network architecture,
• the analysis of the network parameters,
• the construction of dynamic models for varying needs, and
• the choice of a suitable analogy between the cognitive science approach and network theory.
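The weighted-node picture described above can be made concrete with a minimal feedforward network. The topology, the weights, and the sigmoid activation are illustrative assumptions, not taken from the chapter:

```python
import math

# A tiny feedforward "cognitive network": weighted nodes turn an input
# stimulus into an output decision. All weights here are invented.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(stimulus, w_hidden, w_out):
    """Propagate a list of input values through one hidden layer."""
    hidden = [sigmoid(sum(w * s for w, s in zip(row, stimulus)))
              for row in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

# Two inputs, two hidden nodes, one output decision in (0, 1).
decision = forward([1.0, 0.5],
                   w_hidden=[[0.4, -0.6], [0.7, 0.2]],
                   w_out=[1.0, -1.0])
print(round(decision, 3))
```

Changing the weights changes the decision, which is the sense in which the weights of the nodes "play an important role in decision-making."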

There may be various possible interconnections of cognitive networks, similar to network topologies in computer networks. In the context of cognitive network theory, the network topology could suitably be referred to as the cognitive network architecture. One such architecture development is reported in Ref. [17], where cognitive architectures are explained as symbolic representation systems. Perception and action cycles are very important in cognition, which requires a learning algorithm by which perception is converted into action. The architectures therefore also include memory, associative memory, and a suitable method or algorithm for training the cognitive network.


FIGURE 1.5 Use of informatics: a cognitive network (memory) linked through speech, vision, and motor modules to the environment.

1.3 Cognitive informatics and resources

In cognitive modeling and architecture, a large amount of information is processed. The concept of informatics plays an important role in information processing, dealing with the design and development of knowledge inference systems, linguistics, and the characteristics and structure of knowledge. Cognitive informatics covers the knowledge base required in a cognitive system, AI and decision support, neural networks and learning methods, visualization concepts, information retrieval, language processing, and other elements of the common flow of action that requires suitable cognitive informatics at all stages, as shown in Fig. 1.5. The environment in Fig. 1.5 may indicate the presence of stimuli as well as external learning factors; whether it is a learning source or a stimulus, suitable informatics will be required to process the data or train on the information. Memory is an essential part of a sensory module, where the data is stored and subjected to appropriate soft computing tools for processing. Vimla et al. discuss cognitive informatics in detail with experimental results and case studies. The cognitive informatics necessary for neuroscience applications, as discussed in Ref. [18], focuses on the representation, management, and understanding of information. Fuzzy and neural systems are suggested as useful tools for processing neuroscience imaging and other information. Language, language comprehension, and linguistics need the development of cognitive linguistics, which involves

• the results of experimental psychology,
• neurolinguistic information such as neuroimaging data,
• probability distribution models,
• statistics,
• training algorithms and context-dependent analysis, and
• testing algorithms.


The main cognitive resource is the memory associated with the brain, and there are different types of memory, such as synaptic memory, sensory memory, and associative memory. The memory stores data in the form of symbolic representations, and reasoning is applied to those representations to arrive at a decision or outcome. Here comes the role of suitable reasoning algorithms or methods, such as AI, which provide logical reasoning over the sensory data processed and stored in memory. Cognitive memory, as a main resource, is divided into three main types:

• Working memory
• Sensory memory
• Long-term memory (declarative and procedural)

The resources used for cognitive processing are generally known as bounded resources, because each of these memories is limited as far as the human brain is concerned.
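These bounded stores can be sketched as fixed-capacity containers. The capacities below, including the classic "about seven items" figure for working memory, are illustrative assumptions rather than values given in the chapter:

```python
from collections import deque

# Bounded cognitive stores: each store has a hard capacity, so the oldest
# item is evicted when a new one arrives. Capacities are illustrative.
sensory_memory = deque(maxlen=3)    # small, short-lived perceptual buffer
working_memory = deque(maxlen=7)    # the classic "about seven items"
long_term_memory = {}               # declarative facts; effectively unbounded

for item in ["face-1", "face-2", "face-3", "face-4"]:
    sensory_memory.append(item)     # oldest percept falls out at capacity
    working_memory.append(item)

long_term_memory["face-4"] = "recognized"  # consolidation of one item

print(list(sensory_memory))  # ['face-2', 'face-3', 'face-4']
```

The eviction at `maxlen` is what makes the resource "bounded": reasoning can only operate over whatever currently fits in the store.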

1.4 Cognitive maps and perception

The brain's response to a stimulus takes different forms, and one such representation of the stimulus response is the cognitive map. In Adam's dissertation [19], the brains of animals were studied, stressing the spatial map of cognitive ability as spatial ability using neural network concepts. Reinforcement learning was used for the neurophysiological study of the hippocampus, which is mainly associated with the brain's memory. How adaptively a brain can act was investigated on the basis of tests using cognitive maps. This work underlined the fact that the concept of the cognitive map was mentioned in the early writing of Tolman in 1932, which exploits the cognitive map concept in terms of the following:

• Hypothesis: an expected outcome is prepared.
• Stimulus search: attempts to find the stimulus.
• Latent learning: aims at obtaining changes that can be observed.

A few important properties of routes discussed in Ref. [19] are based on the links between routes and maps; for example, for motivation as a stimulus, a map is created that appears to represent curiosity. Mental representations are created by external stimuli given to the brain, which affect visual short-term memory, and perception is then formed as a visual image. One such study, based on the interrelationship between mental representations and perception, was carried out in a dissertation [20]. Visual perception is actually transmitted from the retina to the early visual cortex, where the mental representations are expressed. Visual short-term memory, perceptual features, the maintenance fidelity of memory, and the neural basis are important factors in determining and analyzing visual perception. In Ref. [21] perception was studied on the basis of motion


among various modalities of sensory imaging. The study focused on the motion modality in comparison with the vision, touch, and audio modalities, and the observed congruency effect showed how each of these modalities affects perception capability. Multisensory representations were studied through extensive experimentation in a dissertation [22], and the idea of translation from perception to conception was presented, emphasizing how a stimulus can equivalently produce abstract representations as conception rather than a map approach as perception.

1.5 Conclusion

Cognitive science, cognitive informatics, and computer modeling require some basic fundamentals for their implementation in various applications of science and engineering, and this chapter has introduced the required terms so that readers gain a basic grasp of the fundamentals. The focus has been on an overview of the brain, cognition, and memory. The historical background of cognitive science, cognitive maps, and perception-to-conception were introduced, in addition to cognitive networks, modeling, and architecture.

References

[1] G.R. Sinha, Medical Image Processing: Concepts and Applications, Prentice Hall of India, 2014.
[2] G.R. Sinha, Biometrics: Concepts and Applications, Wiley India Publications, a subsidiary of John Wiley, 2013.
[3] G.R. Sinha, K.S. Raju, R.K. Patra, A. Daw Win, D.T. Khin, Research studies on human cognitive ability, Int. J. Intell. Def. Support Syst. 5 (4) (2018) 298–304.
[4] G.R. Sinha, Study of assessment of cognitive ability of human brain using deep learning, Int. J. Inf. Technol. 1 (1) (2017) 1–6.
[5] J.B. Bernard, M.G. Nicole, Introduction to Cognitive Neuroscience: Cognition, Brain and Consciousness, second ed., Elsevier, United Kingdom, 2010.
[6] R. Alan, F.S. Francis, Literature and the cognitive revolution: an introduction, Poetics Today 23 (1) (2002) 1–8.
[7] R. Dale, Critique of radical embodied cognitive science, J. Mind Behav. 31 (2010) 127–140.
[8] A. Daniel, Cognitive science, Key Technologies for Europe, European Commission Directorate-General for Research, 2006, pp. 1–83. Version 4.
[9] A.M. George, The cognitive revolution: a historical perspective, Trends Cogn. Sci. 7 (3) (2003) 141–144.
[10] V.F. Paul, Dynamic systems theory in cognitive science: major elements, applications, and debates surrounding a revolutionary meta-theory, Dyn. Psychol. (2013). <http://dynapsyc.org/2013/Fusella.pdf>.


[11] P. Pieter, Historical Cognitive Science: Analysis and Examples (Dissertation of PG Diploma in Logic, History and Philosophy of Science), Ghent University, Belgium, 2015.
[12] N.M. Robert, W. Harvey, Introduction: new frontiers in the cognitive science of religion, J. Cogn. Cult. 5 (2005) 1–13.
[13] M. Stefanie, Selective Deployment of Attention to Time and Modality and Its Impact Upon Behavior and Brain Oscillations (Ph.D. thesis, Department of Experimental and Health Sciences), University of Barcelona, 2016.
[14] G.S. David, Models of memory: Wittgenstein and cognitive science, Philos. Psychol. 4 (2) (1991) 203–218.
[15] C. Silvia, The Multisensory Visual Cortex: Cross-Modal Shaping of Visual Cortical Responses and Perception (Ph.D. thesis, Doctoral Program in Experimental Psychology, Linguistics and Cognitive Neuroscience), University of Milano-Bicocca, 2014.
[16] V.R. Iris, The tractable cognition thesis, Cogn. Sci. 32 (2008) 939–984.
[17] A. Pulin, F. Stan, S. Javier, Sensory memory for grounded representations in a cognitive architecture, ACS Poster Collect. 1 (2018) 1–18.
[18] D. Włodzisław, Neurocognitive informatics manifesto, Series of Information and Management Sciences, 8th Int. Conf. on Information and Management Sciences (IMS 2009), California Polytechnic State University, Kunming-Banna, Yunnan, China, 2009, pp. 264–282.
[19] C.J. Adam, On the Use of Cognitive Maps (Ph.D. dissertation), Faculty of the Graduate School, University of Minnesota, 2008.
[20] S. Elyana, Interaction Between Visual Perception and Mental Representations of Imagery and Memory in the Early Visual Areas (Ph.D. thesis), Institute of Behavioral Sciences, University of Helsinki, Finland, 2005.
[21] S.F. Salvador, S. Charles, L. Donna, K. Alan, Moving multisensory research along: motion perception across sensory modalities, Curr. Dir. Psychol. Sci. 13 (1) (2004) 29–32.
[22] Y. Ilker, From Perception to Conception: Learning Multisensory Representations (Ph.D. thesis), Department of Brain & Cognitive Sciences, School of Arts and Sciences, University of Rochester, Rochester, NY, 2014.

Further reading

R.N. Abdul, Reasoning with Bounded Cognitive Resources (Ph.D. thesis), Department of Applied Information Technology, Chalmers University of Technology & University of Gothenburg, SE-412 96 Gothenburg, Sweden, 2015.
B. Andrea, F.C. Ramon, P.S. Romualdo, C. Nick, H.C. Morten, Networks in Cognitive Science. <https://arxiv.org/ftp/arxiv/papers/1304/1304.6736.pdf>.
L.P. Vimla, R.K. David, Cognitive Science and Biomedical Informatics. <http://eknygos.lsmuni.lt/springer/56/133-185.pdf>.

CHAPTER 2
Machine consciousness: mind, machine, and society

Anandi Giridharan¹ and K.A. Venkatesh²
¹Indian Institute of Science, Bangalore, India
²Myanmar Institute of Information Technology, Mandalay, Myanmar

2.1 Introduction: Using cognitive maps as adaptive interface tool in an online course

This chapter presents a study that looks at ways to support students with different degrees of knowledge in an online course. Cognitive maps are an ideal interface tool for representing an online course graphically. Based on cognitive theory, the proposed curriculum representation uses a decision-tree-like structure with several branches [1]. Each branch has a certain weight obtained from the relationships between concepts of the subject. The main goal is to guide students toward expertise in the subject by considering the student's cognitive learning and to illustrate the development of the student model. This chapter has five sections: the first discusses students' mental models; the second surveys the multimedia processing and acquisition system; the third presents knowledge construction using cognitive maps based on students' mental models; the fourth gives an overview of instructional planning to improve students' cognitive ability; and the fifth concludes by illustrating a hypothetical instruction model.

2.1.1 Cognitive mapping and theories

The cognitive revolution emerged from various theoretical perspectives and gave rise to different theories of learning and instruction [2]. Cognitive theory seeks to explain the process of knowledge acquisition and its effects on the human mind and memory. Based on epistemology, knowledge has an objective structure in which relationships can be identified. Curriculum design involves constructing this structure and introducing knowledge in such a way that any student, with whatever prior knowledge, will be able to acquire and reproduce it. Novice students are provided with basic, simple, and straightforward concepts without any complicated information; average students are provided with complex concepts that have related conceptual information; and expert students are provided with increasingly

Cognitive Informatics, Computer Modelling, and Cognitive Science, Volume 1. DOI: https://doi.org/10.1016/B978-0-12-819443-0.00002-7
© 2020 Elsevier Inc. All rights reserved.


complex and in-depth learning experiences focusing on quality. A cognitive map gives step-by-step guidance by helping students prioritize information; students' anxiety about the subject material was thus found to be reduced when cognitive mapping was used. Designing models of long-term memory is based on instructions such as hierarchical sequencing, activation of prior experience and knowledge, and analysis of the subject content considering the student's cognitive learning process. To help students understand the subject well, the following features should be considered.

Learning complexity: Understanding and remembering concepts are influenced by our prior experience of a given situation. Novice students will retrieve what they have learned more effectively when they can relate it to what they are learning [3]. A web-based course must be suited to the ability and skill level of the student. Empirical evaluation of adaptive subject content, proper planning and arrangement of the learning concepts, the difficulty of concepts, and the matching of other instructional parameters to the students' knowledge help students learn quickly and understand better.

Navigation complexity: The design should check the navigation structure by not highlighting links that do not suit the student's cognitive knowledge, as these would create cognitive overload and confusion. Students with different knowledge levels may be suited to different concept levels.

Tracking student's progress: The student model is used to track progress and to generate the study page according to the student's knowledge level.

Interaction with system: Interaction with the system is one of the important supports of an online learning system. Students must be able to interact with the working system using personalized annotations, with automatic retrieval of proper answers to students' questions.

Self-evaluation: Evaluation of students is the most important challenge of an online learning system. Self-evaluation is done by giving a test at the end of each module and assessing the student's progress, supporting the student in redoing the module in the case of poor progress. A novice student must progress through each module step by step to gain expertise in the subject [4].
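The self-evaluation loop described above (a test at the end of each module, with a redo on poor progress) can be sketched in a few lines. The pass threshold and module names are invented for illustration, not specified in the chapter:

```python
# Step-by-step module progression with an end-of-module test.
# A score below PASS_THRESHOLD sends the student back through the module.
PASS_THRESHOLD = 0.6  # illustrative cutoff, not given in the chapter

def next_action(module, score):
    """Decide whether the student advances past a module or repeats it."""
    if score >= PASS_THRESHOLD:
        return f"advance past {module}"
    return f"redo {module}"

print(next_action("Module 1: basics", 0.8))    # advance past Module 1: basics
print(next_action("Module 2: concepts", 0.4))  # redo Module 2: concepts
```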

2.1.2 Web-based online course

Web server and subject content retrieval: The web server database stores, processes, retrieves, and delivers subject contents to the end user [5] (Fig. 2.1). Students log on to the web server to access the online course material through the given interface. The system database contains the student model and the domain models. Based on the student's requirement, the appropriate information is retrieved and served to the student.


FIGURE 2.1 Components of web-based online course-ware: a web client with an adaptive interface connects the student (as end user) to the web server, which holds the student model (goal, knowledge, actions, progress, queries) and the cognitive-mapping online course, and returns the retrieved course contents.

Student modeling: The student model involves students in the maintenance and progress of their own model, updating the latest status while retaining the accuracy of the model. In student modeling, characteristics such as the student's goals, knowledge, history, and cognitive skill are taken into account. The behavior of the student, feedback, preferences, performance, and progress are managed by the student model.

Online course domain: Cognitive mapping is used to construct the subject content domain. Formal cognitive maps are used to structure the primary knowledge of the subject based on the situation. A cognitive map can be modified by adding new course content or deleting content that has become insignificant. Methods for constructing cognitive maps can be either direct or indirect. In direct methods, the belief system of the subject is extracted through direct work with teachers, experts, and researchers. Techniques such as processing sources like documents and transcripts are used in indirect methods. Building a cognitive map of subject content includes the following:

1. A conceptual diagram of the model is constructed, which includes the main subject areas based on the student's knowledge, goal, history, and context, taking into account the relationships that detail the selected conceptual scheme.
2. Weights are assigned to each module and concept that influences the subject contents in the cognitive map.

Students' mental modeling updates the student's information to impart adaptivity and personalization. The student model includes all information regarding the student, such as domain knowledge, progress, interests, goals, tasks, social background,


FIGURE 2.2 Students' mental modeling. Student's characteristics in relation to context reasoning: student profile (name, age, gender, designation, affiliation, email ID); cognitive level (student knowledge level, learning progress, learning quality, cognitive relativism); student learning style (kinesthetic, aural, visual, physical); student interest preferences (student learning timing, preferred place of study, suitable device, required subject content, learning technique).

personal traits, location, and cognitive knowledge level. Fig. 2.2 shows the student's characteristics related to context reasoning for building an individualized student model. With updated status from the student model it is possible to cover wider cognitive aspects and improve the progress and performance of the students [6]. When a student accesses the course, the student model builds up slowly at runtime, since it is mainly through the student's input to the system that the student model is created.

Context-aware student model: The proposed context-aware student model caters for knowledge possessed by the student that is not present in the expert domain knowledge. Context perusal performs automatic contextual reconfiguration based on the student's context, followed by context-triggered actions such as a reminder program that fires when a certain contextual rule is satisfied, the automatic accomplishment of services, context-aware information that eases later retrieval, and contextual information and commands that may be altered automatically by the student's context. The context acquisition module gets the context details from the learning environments.

• Location context: The student moving in the classroom, laboratory, or library (in/out of campus).

• Time context: Lecture time, study time, leisure time, break time, etc.

• Device context: The student's device can be hardware such as a desktop, tablet, smart phone, or wearable device, along with software characteristics such as the device version, the software used, and its features.
• Environmental context: Network-related factors such as speed and network type, and real-environment factors such as temperature, light, and weather.


• Social context: The student can collaborate and coordinate with other students, teachers, experts, etc.

• Work context: The student may like to go through lectures, glance at slides, do homework, or play games.

Context information is the variation in context values while the student is executing a particular study process. Context acquisition is done by capturing the student's current learning context using intelligent sensors and from the student's input during registration. The student's context information (location, time, physical, behavior, etc.) is captured and analyzed, and context classification is done with the semantic learning framework. The semantic learning framework is context aware in description, processing, and retrieval. The context model captures static content and dynamic process descriptions. Contextual knowledge consists of elements that have basic metadata of entities, subject conceptual models, and statements linked to conceptual models. On the server the student's information, such as the history log, is collected, analyzed, and categorized. The student's interests, preferences, styles, and cognitive levels are categorized to create an information library of students [5].
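The acquisition-and-classification step above can be sketched as a simple keyword tagger. The six category names mirror the list above; the keyword sets and the function itself are illustrative assumptions, not part of the proposed system.

```python
# Hypothetical sketch of context acquisition and classification.
# The six categories mirror the context list above; the keyword-to-category
# mapping is an assumption for illustration only.
CONTEXT_CATEGORIES = {
    "location": {"classroom", "laboratory", "library", "campus"},
    "time": {"lecture", "study", "leisure", "break"},
    "device": {"desktop", "tablet", "smartphone", "wearable"},
    "environment": {"network", "temperature", "light", "weather"},
    "social": {"students", "teachers", "experts"},
    "work": {"lectures", "slides", "homework", "games"},
}

def classify_context(reading: str) -> str:
    """Return the first context category whose keywords match the reading."""
    tokens = set(reading.lower().split())
    for category, keywords in CONTEXT_CATEGORIES.items():
        if tokens & keywords:
            return category
    return "unknown"

print(classify_context("student in library"))  # location
```

A real system would fuse several sensor streams per reading; this sketch only shows the tagging idea on a single text reading.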

2.1.3 Context modeling and reasoning

The simplified student modeling based on context reasoning is shown in Fig. 2.3. Structuring the personalized student model is the process of choosing a proper algorithm for context reasoning and computing. The contextual model weighting (between 0 and 1) represents context similarity or distance computation in

FIGURE 2.3 Students' mental modeling-based context reasoning (registration plus physical, system, application, and social sensing feed context recognition, analysis, and classification into student profiles; the semantic learning framework holds the student learning context model with physical, time, device, social, work, and location contexts feeding the learning model).


CHAPTER 2 Machine consciousness: mind, machine, and society

processing and retrieval of subject contents. The weights influence the student's profile, knowledge level, personality, etc. Thus context reasoning helps in extracting or deducing the learning characteristics of each parameter based on semantic weightage [6]. Fig. 2.3 depicts the procedure of generating the simplified student model based on context reasoning. An appropriate algorithm should be chosen for estimating the values of the student's contents [7]. Considering the different levels of students based on knowledge level, age, affiliation, cognitive ability, social group, etc., and taking the different weights of the learning-context metadata into account, we can arrive at values of the student learning contents with the following formulation.

The student's learning style (S_ls) depends on various context parameters such as the student's social group (S_g), physical location (P_l), and cognitive level (C_l). Let K = {S_g, P_l, C_l, ...}, where j indexes the context parameters considered for relating the student's learning style. Then

    S_ls = Σ_{i ∈ K} w_i × P_i

where P_i = (Σ_{j=1}^{N} P_ij) / N is the ith context metadata expressed as a percentage, and the weights satisfy w_Sg + w_Pl + w_Cl + ... = 1. Also, P_ij = (l_j / L_j) × 100.

Now let us consider an example to understand the computation involved in the student's learning style. Consider only two context parameters, K = {education, finance}, where education may be an undergraduate, graduate/master, or professional program, and finance may be the fee structure of the opted program. Let w_edu and w_fin be the contextual metadata weights of education and finance, respectively, such that w_edu + w_fin = 1. The learning-style components are computed as:

    P_edu^(Sg) = V_edu / (V_edu + V_fin) × 100
    P_fin^(Sg) = V_fin / (V_edu + V_fin) × 100

where V_edu and V_fin are the value of education and the value of finance (fee structure) of the registered program (undergraduate, graduate/master, or professional).

    Now, P_Sg = (P_edu^(Sg) + P_fin^(Sg)) / 2.

Let E_1, E_2, E_3 be the scores obtained by a student in three exams. The percentage of the ith metadata based on social group is given as:

    P_Ei = E_i / (E_1 + E_2 + E_3), 1 ≤ i ≤ 3.

Learning styles of a student can be formulated in this way for all the student's contexts K = {S_g, P_l, C_l, ...}.
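The weighted combination above can be sketched in a few lines of Python; the numeric context values and weights below are hypothetical, chosen only to exercise the two-parameter (education, finance) example.

```python
# Sketch of the weighted learning-style computation S_ls = sum_i w_i * P_i,
# using the two-parameter example (education, finance) from the text.
# All numeric values are illustrative assumptions.

def percentage(value: float, total: float) -> float:
    """P = V / total * 100, as in P_edu = V_edu / (V_edu + V_fin) * 100."""
    return value / total * 100.0

def learning_style(weights: dict, percentages: dict) -> float:
    """S_ls = sum over context parameters of w_i * P_i; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * percentages[k] for k in weights)

v_edu, v_fin = 60.0, 40.0                    # hypothetical context values
total = v_edu + v_fin
p = {"education": percentage(v_edu, total),  # 60.0
     "finance": percentage(v_fin, total)}    # 40.0
w = {"education": 0.75, "finance": 0.25}     # w_edu + w_fin = 1
print(learning_style(w, p))                  # 0.75*60 + 0.25*40 = 55.0
```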


The student's style can thus be formulated from all context reasoning, based on these various factors, to construct the personalized student model.

2.2 Multimedia processing and acquisition system

The cognitive theory of multimedia learning (CTML) helps in creating mental portrayals from text, image, audio, and video. Students with low knowledge levels often struggle with weak cognitive skills. Based on Seller (2005), CTML accepts a learning model comprising an information acquisition system and knowledge construction that helps students attain the desired learning outcomes through proper retention and understanding. The multimedia processing and acquisition system is presented in Fig. 2.4. The multimedia presentation of online course content strongly affects whether the actual learning process is fostered or hindered [8]. Learning material is picked up by the ears and eyes as sensory material and enters working memory as sounds and images, which are stored and influence long-term memory. A meaningful learning environment is context aware; that is, the system senses the context of the student and provides an appropriate multimedia learning presentation based on the real-world physical environment. Having audio, video, text, and animations helps students pay attention, invoking active learning in the multimedia platform. Among the five human senses, vision is recognized as the most powerful data acquisition channel for the brain. Multimedia information consisting of images, animations, and pictures, along with sound and speech, is well suited to a student's deep understanding, as it conveys a high degree of reality and visualization.

FIGURE 2.4 Multimedia processing and acquisition system (the student learning environment and context analysis/classification update the student model and status, which drive the navigational path of the course; the multimedia presentation of text, audio, video, and animation feeds sensory memory, working memory, and long-term memory).

2.3 Cognitive maps based on students' mental model

A primary concern for our proposed online course-ware is the description of the cognitive mapping tree, whose branches help students locate suitable and easy concepts based on their needs. The classification of a subject into modules, concepts, and subconcepts is shown in Fig. 2.5. The intermodular and conceptual links are designed sequentially, in the order in which subject material is to be presented to the students. The arrangement of modules and concepts in the cognitive tree is based on the perceived difficulty of the subject [9]. Novice students have access to links with less weightage and less difficulty, whereas average and expert students are given access to progressively more complex modules and concepts based on their knowledge levels. After completing each chapter, students are self-evaluated with automatic feedback. The test consists of multiple-choice questions with predefined answers, so students can understand their status in the accessed chapter. Students' anxiety about the subject learning material was found to be reduced by using the cognitive map, which increased their motivation.

FIGURE 2.5 Intermodular-interconceptual relationship (module M1 with concepts C11-C15, module M2 with concepts C21-C22, and further modules M3, M4, ..., Mn).


2.4 Overview of instructional planning to improve student's cognitive ability

The proposed architecture of the web-based online course uses a client/server model that will be accessed by students with divergent knowledge levels, as seen in Fig. 2.6. The web server interface interacts with the web client interface to deliver online course contents according to each student's knowledge level. The web server has a database for the student model that updates the student's current status information. Cognitive maps are organized into modules, concepts, and associated subconcepts, and are structured with intermodular and interconceptual links that guide the students through the course [8]. Subject demonstrations, skill building, and discovery of knowledge are done well by expert teachers. Second, teachers know problem-solving techniques and how to teach something. Third, a teacher builds a model of the student's knowledge through hypothesis testing and empirical investigation. This eases the teachers' planning of teaching modules and techniques for different levels of students. Basically, the cognitive knowledge base is built on a conceptual mapping with different types of modules, concepts, subconcepts, and sub-subconcepts. The pretest and prerequisites of the students are weighted according to their importance for a course. Pretest results are classified and

FIGURE 2.6 Proposed architecture of the online course-ware (server-side components: retrieval system and process, course content preparation, cognitive mapping, query process, adaptive server interface, and a student model holding requirements, knowledge level, history, progress record, and present status; client-side components: client interface, student present status, and the received course page for students with diverse knowledge).

administered as novice, average, or expert. Depending on the pretest and grading, the student's progress has different impacts on the student model [10]. The student model stores and updates a student's history, goals, knowledge, and preferences. Subject contents are designed with different levels of difficulty: the basic level covers primary concepts suited to novice students, the next level covers concepts in more detail, and the highest level provides detailed information, advanced hints, and in-depth knowledge of the concept.

2.4.1 Cognitive map with weights of subject modules and concepts

The modules are arranged in order of degree of difficulty, D_d(M_1) < D_d(M_2) < ... < D_d(M_m), where D_d is the degree of difficulty, as shown in Fig. 2.7. The degree of difficulty of a module can be given as a weight, W_i = x × i, 1 ≤ i ≤ m − 1, where the value x depends on the level of difficulty of the modules and may vary, say, 20 for novice, 40 for average, and above 50 for expert students [10].

Example: We illustrate a theoretical online course using the cognitive map as discussed. The cognitive map for a hypothetical online course is shown in Fig. 2.7. When a student accesses the course module M_i having weightage W_i, the related concept C_ij that is easy to understand and appropriate to his/her knowledge level is presented.
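The weight rule W_i = x × i can be sketched directly; the sample base values follow the text's example figures (20 for novice-level difficulty, 40 for average) and are otherwise illustrative.

```python
# Sketch of module weights W_i = x * i (1 <= i <= m - 1), where the base
# value x reflects module difficulty; the sample x values follow the
# text's example (20 novice, 40 average, above 50 expert).

def module_weights(x: int, m: int) -> list:
    """Weights for modules M_1 .. M_{m-1}, increasing with module index."""
    return [x * i for i in range(1, m)]

print(module_weights(20, 5))  # [20, 40, 60, 80]
```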

FIGURE 2.7 Cognitive mapping of course contents with weightage (subject course-ware with modules M1-M5 carrying weights W1-W5, and concepts/subconcepts C_ij with weights W_ij).


2.5 Illustrating a hypothetical instruction model

We have considered the cognitive map as an adaptive tool for the online course model. Initially, registered students of the online course are tested for their knowledge with a pretest. Based on pretest performance and their history, students are classified as novice, average, or advanced. Pretest marks range from 0 to 10 points: students whose marks fall under 0-3 points are classified as novice, marks from 3 to 6 points fall under the average group, and marks from 6 to 10 points are classified as expert. The online course is correspondingly designed in three levels based on the knowledge level of students, and modules/concepts are prepared based on the degree of difficulty shown in Table 2.1.

1. Navigation path of a novice student: The navigation path when a novice student accesses the online course is shown in Fig. 2.8. Based on his/her knowledge and history, a novice student is given access to modules/concepts whose weight is ≤ 20. So the novice student's navigational path is through {M1, C11, C12}; as the weight W13 of concept C13 is > 20, he/she is not given access to that concept. After completing module M1, he/she navigates through module M2 and its associated concepts having weight ≤ 20, such as {M2, C21, C22, C23}. The designed modular cognitive trees with multiple conceptual links were very useful in guiding diverse students through the course-ware.

2. Navigation path of an expert student: The navigation path of an expert student is depicted in Fig. 2.9. As an expert student has more knowledge of the subject, he/she is given access to all the modules and concepts, with weightage up to 50.

Table 2.1 Classification of the students and adaptive content based on knowledge level (cognitive mapping as adaptive tool; W_i = weight of module i and W_ij = weight of concept j of module i).

Classification   Pretest marks   Weightage of modules/concepts   Level of difficulty of content
Novice           0-3             0-20                            Only basic information
Average          3-6             0-40                            More detailed information, related problem-solving contents
Expert           6-10            0-50                            In-depth information, problem-solving, competence for acquiring knowledge
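The pretest classification of Table 2.1 and the weight-gated navigation of Figs. 2.8 and 2.9 can be sketched together. The course structure below uses the example weights from Fig. 2.8; the thresholds and mark ranges are taken from the text.

```python
# Sketch of pretest classification (Table 2.1) and weight-gated navigation.
# Module/concept weights follow the Fig. 2.8 example.

def classify(pretest_marks: float) -> str:
    """0-3 novice, 3-6 average, 6-10 expert."""
    if pretest_marks <= 3:
        return "novice"
    if pretest_marks <= 6:
        return "average"
    return "expert"

THRESHOLDS = {"novice": 20, "average": 40, "expert": 50}

# Course structure: module -> (module weight, {concept: concept weight}).
COURSE = {
    "M1": (10, {"C11": 5, "C12": 8, "C13": 30}),
    "M2": (20, {"C21": 5, "C22": 10, "C23": 15, "C24": 25}),
    "M3": (45, {"C31": 10, "C32": 15}),
}

def navigation_path(level: str) -> list:
    """Modules/concepts whose weight does not exceed the level threshold."""
    limit = THRESHOLDS[level]
    path = []
    for module, (w, concepts) in COURSE.items():
        if w <= limit:
            path.append(module)
            path.extend(c for c, cw in concepts.items() if cw <= limit)
    return path

print(navigation_path(classify(2)))
# ['M1', 'C11', 'C12', 'M2', 'C21', 'C22', 'C23']
```

This reproduces the novice path given in the text: C13 (weight 30), C24 (weight 25), and M3 (weight 45) are withheld.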


FIGURE 2.8 Cognitive mapping as adaptive tool for novice students (module weights W1 = 10, W2 = 20, W3 = 45; concept weights W11 = 5, W12 = 8, W13 = 30, W21 = 5, W22 = 10, W23 = 15, W24 = 25, W31 = 10, W32 = 15).

FIGURE 2.9 Cognitive mapping as adaptive tool for expert students (same module and concept weights as Fig. 2.8, with the additional module M4, W4 = 50).


FIGURE 2.10 Student model development (starting from the initial student model, with the student's goals, history, and information, the adaptive cognitive map checks prerequisites, self-evaluation reports, assessments, and level of understanding before the student qualifies to move to the next module).

The navigation path of the expert student is given by {M1, C11, C12, C13}, {M2, C21, C22, C23, C24}, {M3, C31, C32}, {M4, ...}. Qualitative cognitive mapping can be used to model the sequencing of subject content, the development of the mental model of the individual student, and progress in acquiring knowledge. Behaviorism stimulates surface-level learning and knowledge development. Just as cognitive mapping is a knowledge construction process, students' learning is the construction of their own knowledge based on their prior knowledge. Various factors such as cognitive skill, reasoning, critical thinking, and analysis are essential for a student to develop his or her knowledge, and self-evaluation should focus on the student's cognitive development. For the newly developed student model, cognitive mapping should check factors such as the prerequisites of the subject, self-evaluation progress, assessment, and the level of understanding and expertise in a module, as shown in Fig. 2.10. A meaningful learning environment is context aware; that is, the system senses the context of the student and provides an appropriate multimedia learning presentation based on the real-world physical environment, as seen in Fig. 2.11. If all these factors are fine, the student is qualified to move on to the next level, with corresponding development in the student model.

2.6 Conclusion

In this chapter a novel methodology for designing a new student model using cognitive mapping as an adaptive interface was deployed. The designed cognitive trees with several modular/conceptual branches were very useful in navigating students with diverse knowledge through the course-ware. The proposed model was demonstrated to successfully improve an online course that provides instruction for students with divergent knowledge levels.


Multimedia presentation supports the student learning processes of sensory memory, working memory, and long-term memory: when a student receives a multimedia presentation, sounds and images are held short term in working memory, some are recollected, and this knowledge may be combined with previous knowledge in long-term memory.

FIGURE 2.11 Student model development with long-term memory.

References
[1] E.I. Papageorgiou, Learning algorithms for fuzzy cognitive maps—a review study, IEEE Trans. Syst. Man Cybern. C: Appl. Rev. 42 (2) (2012).
[2] S.F. Shawer, D. Gilmore, S. Rae, Student cognitive and affective development in the context of classroom-level curriculum development, J. Sch. Teach. Learn. 8 (1) (2008) 1-28.
[3] G.-J. Hwang, W. Hong, Development of an adaptive learning system with multiple perspectives based on students' learning styles and cognitive styles, Educ. Technol. Soc. 16 (4) (2013) 185-200.
[4] B. Ruzhekova-Rogozherova, ESP curriculum design and cognitive skills formation, BETA E-Newsletter, Issue 18.
[5] E.I. Papageorgiou, Review study on fuzzy cognitive maps and their applications during the last decade, IEEE Int. Conf. Fuzzy Syst. (2011) 828-835.
[6] A. Giridharan, Adaptive eLearning Environment for Students with Divergent Knowledge Levels, ELELTECH, Hyderabad, 2005.
[7] A. Giridharan, P. Venkataram, Organising subject material in accordance with ubiquitous student status, Int. J. Educ. 3 (2015) 1-10.
[8] Z. Shen, S. Tan, K. Siau, Using cognitive maps of mental models to evaluate learning challenges: a case study, in: Twenty-Third Americas Conference on Information Systems, Boston, MA, 2017.
[9] M. Takács, I.J. Rudas, Z. Lantos, Fuzzy cognitive map for student evaluation model, in: 2014 IEEE Int. Conf. Syst. Sci. Eng. (ICSSE), 2014.
[10] A. Giridharan, P. Venkataram, A Causal Model Based Subject Domain Creation for a Web-Based Education, NSEE, Bangalore, 2005.

CHAPTER 3

Brain–computer interface and neurocomputing

Samrudhi Mohdiwale and Mridu Sahu
National Institute of Technology Raipur, Raipur, India

3.1 Introduction

The brain is the most important organ of the nervous system; it takes decisions and responds according to knowledge and experience together with the currently available information. Cognitive science is related to the brain activity of acquiring knowledge. The word cognition is derived from the Latin cognoscere, which means "get to know" or "to learn." To understand the cognitive process, take the example of traffic signaling. A person driving a car suddenly sees a red light at a traffic signal and stops the car. This mental activity (stopping the car) is based on the previous experience and knowledge of the driver; but if, at the same time, the person sees no vehicles in the area near the traffic signal and decides to cross the road, this is decision-making according to the current situation. Hence cognition is the mental activity of gaining knowledge from thoughts, senses, and experience. The processes of learning, retention, perception, and interpretation of a new phenomenon by integrating previous experiences with the present characteristics of that phenomenon are cognitive processes [1]. Fig. 3.1 shows the various subdisciplines of cognitive science. Cognitive science is the scientific study of human behavior with subdisciplines such as neuroscience, artificial intelligence (AI), linguistics, philosophy, psychology, and anthropology. Neuroscience techniques such as electroencephalography (EEG), functional magnetic resonance imaging (fMRI), and functional near-infrared spectroscopy (fNIRS) are basic tools for interpreting the brain and human behavior in cognitive science [2]. AI, in its boom nowadays, helps one to understand the intelligence of the brain with the help of a machine, and in another dimension AI plays an important role in interpreting the cognitive behavior of machines. Similarly, linguistics is related to cognition, as a speaker speaks from the memory of the dictionary terms of that language [3]. After reading this chapter, a reader will have an overview of the brain–computer interface (BCI): the evolution of BCI, its types, working, challenges, and applications. A case study is also presented to understand the concept of BCI in depth.

Cognitive Informatics, Computer Modelling, and Cognitive Science, Volume 1. DOI: https://doi.org/10.1016/B978-0-12-819443-0.00003-9 © 2020 Elsevier Inc. All rights reserved.


FIGURE 3.1 Subdisciplines of cognitive science.

3.2 Brain–computer interface

3.2.1 History

BCI in today's era was initiated by the recording of the brain's electrical activity as EEG by Hans Berger in 1924. He found relations between various brain diseases and EEG signals and opened a path to explore new opportunities in brain activity research [4]. Jacques Vidal, a professor at the University of California, coined the term BCI and published a few articles on it, though he was later not very active in BCI research [4,5]. BCI spread into wide dimensions after 2011. The following sections present the details of BCI.

3.2.2 Types of brain–computer interface

In this section the types of BCI are discussed in detail. Based on electrode placement relative to the brain, BCI can be classified into three types: invasive, semiinvasive, and noninvasive. Fig. 3.2 represents the types of BCI with the output of each kind.

3.2.2.1 Invasive brain–computer interface

Invasive BCI is the most complex BCI system; it requires the implantation of electrodes directly in the brain neurons/cells. These electrodes record intraparenchymal signals directly from the brain and give highly accurate signal quality. However, invasive electrode placement inside the brain leads to scar tissue formation,


FIGURE 3.2 Types of BCI. BCI, Brain–computer interface.

and the required neurosurgery involves high risk and complexity. This type of BCI is primarily used for paralyzed and blind patients [5,6].

3.2.2.2 Semiinvasive brain–computer interface

Semiinvasive BCI reduces the complexity and the need for the deep neurosurgery required by invasive BCI. In semiinvasive BCI, electrodes are implanted in the skull, over or under the dura mater of the brain. Electrocorticography (ECoG) is acquired from this kind of BCI. Semiinvasive BCI is well suited for diverse cognitive applications, as it covers a large area of the brain with a set of electrodes [7,8]. ECoG has higher spatial resolution than EEG because it is acquired directly from the skull, so less information is lost in signaling. It is also not affected by artifacts such as electrooculography and electromyography, and it is a lower-risk technique because invasiveness is reduced [9].

3.2.2.3 Noninvasive brain–computer interface

Noninvasive BCI refers to electrode placement on the skull without any surgery or implantation of electrodes inside the brain. This technique is widely used in research because of its hardware portability, cost, reduced complexity, and ease of use. Noninvasive BCI is distinguished based on the signal


acquisition method, such as EEG, positron emission tomography (PET), magnetoencephalography (MEG), fMRI, and fNIRS [10]. Each of these acquisition methods is described in the following sections.

3.2.2.3.1 Electroencephalography

As the name suggests, EEG is the method of recording the electrical signals generated by the brain. EEG records the rhythmic patterns of neuron activity, which are categorized into five frequency bands: alpha, beta, theta, gamma, and delta. Each frequency component has a significance that varies from application to application [2,11]; the significance of each band is listed in Table 3.1. Another aspect is resolution: EEG does not have high spatial resolution, due to the loss of signals while traversing from the neurons to the scalp through the various layers of the brain, though this can be improved by spatial filtering techniques. EEG also shows lower spatial accuracy because each electrode records a mixture of all the signals generated near it. The advantage of EEG signals is their very high time resolution [12].
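The five-band split above can be sketched as a simple lookup. The boundaries follow Table 3.1; frequencies in the small gaps between the listed bands are reported as unclassified, which is an implementation choice for this sketch.

```python
# Sketch of the EEG frequency-band split described above (boundaries in Hz,
# following Table 3.1; the gaps between listed bands return "unclassified").
BANDS = [
    ("delta", 0.5, 4.0),
    ("theta", 4.0, 8.0),
    ("alpha", 8.0, 13.0),
    ("beta", 14.0, 26.0),
    ("gamma", 30.0, float("inf")),
]

def band_of(freq_hz: float) -> str:
    """Return the rhythm name for a frequency, or 'unclassified' for gaps."""
    for name, lo, hi in BANDS:
        if lo <= freq_hz < hi:
            return name
    return "unclassified"

print(band_of(10.0))  # alpha
print(band_of(40.0))  # gamma
```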

Table 3.1 Significance of brain rhythms with frequency ranges.

Frequency band   Frequency range   Significance
Delta            0.5-4 Hz          Deep sleep (found in adults and posterior regions in children)
Theta            4-8 Hz            Drowsiness or arousal; meditation; abnormal activity detection (found in young children)
Alpha            8-13 Hz           Wakefulness; effortless alertness; creativity
Beta             14-26 Hz          Active attention; active thinking; critical problem solving
Gamma            Greater than 30 Hz   Cross-modal sensory processing

3.2.2.3.2 Magnetoencephalography

MEG is the technique of recording the magnetic field induced by the electrical currents circulating in the brain, which follow the right-hand rule of Ampère's law. The technique came into the picture because of the low resolution of EEG signals [13]. MEG signals are less prone to distortion than other signals generated from the brain and also show less blurring, which is generally caused by the different layers of the brain [13,14]. Since distortive effects are limited, MEG has proved to be a safe noninvasive method of source localization. MEG is used for the identification of a specific portion or center of the brain that is highly affected in epileptic seizures, and it is also widely used in research related to memory, language, and motor functions of the brain [6]. One major advantage of MEG is that it provides good temporal as well as spatial resolution of brain activity.

3.2.2.3.3 Positron emission tomography

PET is used to evaluate the functioning of tissues and organs of the human body. A small amount of a radioactive drug (tracer) is injected or inhaled into the patient's body, depending on the organ to be studied. The tracer accumulates in the diseased part of the organ and shows higher chemical activity there; in a PET image, the area of disease appears as bright spots on the organ [15]. PET/CT scan images are used to diagnose cancer and its spread, the effectiveness of treatment, blood flow rate, the impact of a heart attack on the heart and other organs, tumors, and other central nervous system disorders [16]. PET provides better output images than other scanning techniques such as CT, but it carries a risk of allergic reaction to the injected tracer.

3.2.2.3.4 Functional magnetic resonance imaging

Brain activity can also be measured through the blood flow in brain areas associated with particular activities; blood flow increases in active areas because cerebral activity and neurons are related. The main advantage of this technique is that it does not use radiation such as X-rays to measure blood flow, which makes it safer, and it has good spatial and temporal resolution [17]. In the brain, hemoglobin in red blood cells supplies oxygen to the neurons. Activity raises the demand for oxygen, which leads to an increase in blood flow. The magnetic properties of hemoglobin differ depending on whether or not it is oxygenated. This difference allows the MRI machine, a cylindrical tube with a powerful electromagnet, to distinguish which zones of the brain are active at a particular moment [18].

3.2.2.3.5 Functional near-infrared spectroscopy

fNIRS is a noninvasive technique that measures active brain regions optically, recording brain activity through the hemodynamic response of blood flow. Unlike fMRI, which relies on a magnetic field, the technique uses infrared light to identify the demand for oxygen. As a part of the brain engages in an activity, the demand for oxygen in that area increases due to higher consumption; fNIRS measures the availability of oxygen and provides corresponding images for analyzing the brain [19,20]. Compared with fMRI, fNIRS is less prone to noise, lower cost, portable, and easier to use. This technique is used in the cognitive task analysis of brain regions.


3.2.3 Assumptions and working of brain–computer interface

While working with BCI, certain assumptions are made to clearly understand and analyze the brain activity. These assumptions are as follows:

• The brain only generates EEG waves within a certain range of frequencies; any other frequencies found in the signal do not originate from the brain.

• The brain only generates frequencies between 1 and 30 Hz.

A typical BCI consists of invasive or noninvasive electrodes for signal acquisition. These electrodes are sensitive to changes in neural activity and provide sequential signals for analyzing those activities. Fig. 3.3 represents the working of BCI and the steps involved, which are discussed in the following sections.

3.2.3.1 Placement of electrodes

To record EEG signals, electrodes are placed on the scalp. These electrodes are made of metals such as gold, silver chloride, and tin; they are small, and firm contact with the scalp ensures low impedance and reduces artifacts from the environment and the electrodes themselves [21]. The electrodes are placed on the scalp according to a standard placement technique called the 10-20 international system of

FIGURE 3.3 Working of brain–computer interface.


electrode placement, developed by Dr. Herbert Jasper. Three planes, named sagittal, coronal, and horizontal, are used for measurement. Electrodes with odd numbers are placed on the left hemisphere and those with even numbers on the right hemisphere of the brain [22]. Fig. 3.4 depicts the positions of the electrodes on the skull. Electrodes are named on the basis of the brain region; for example, electrodes on the frontal lobe are represented by Fp1, Fp2, etc. Table 3.2 lists the positions of the electrodes with their respective recording regions. The midline electrodes, named Fz, Cz, and Pz, are positioned on the midline of the frontal, central, and parietal lobes. EEG recording is done according to a montage, that is, the selection of reference electrodes. Bipolar and referential are the two commonly used montages for EEG signal recording: in a bipolar montage there is one reference electrode for each electrode, while a referential montage has a common reference electrode for all channels.
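The two montage schemes can be sketched on hypothetical single-sample channel readings; the microvolt values below are invented purely for illustration.

```python
# Sketch of the two montage schemes described above, applied to hypothetical
# single-sample channel readings (microvolts; values are invented).
samples = {"Fp1": 12.0, "F3": 9.0, "C3": 7.0, "A1": 2.0}

def bipolar(chain):
    """Bipolar montage: difference between each pair of adjacent electrodes."""
    return [(a + "-" + b, samples[a] - samples[b])
            for a, b in zip(chain, chain[1:])]

def referential(channels, ref):
    """Referential montage: every channel measured against one common reference."""
    return [(c + "-" + ref, samples[c] - samples[ref]) for c in channels]

print(bipolar(["Fp1", "F3", "C3"]))      # [('Fp1-F3', 3.0), ('F3-C3', 2.0)]
print(referential(["Fp1", "F3"], "A1"))  # [('Fp1-A1', 10.0), ('F3-A1', 7.0)]
```

In practice each channel is a time series rather than a single number, but the channel arithmetic is the same per sample.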

3.2.3.2 Electroencephalography signal acquisition After successful placement of electrodes, EEG signals are acquired by the acquisition device. The acquisition devices having dry or wet types of electrodes have

FIGURE 3.4 Electrode positions on the skull.



CHAPTER 3 Brain–computer interface and neurocomputing

Table 3.2 Position of electrodes and the related brain area.

Electrodes on left   Electrodes on right   Area of brain
Fp1                  Fp2                   Forehead, frontopolar
F3                   F4                    Frontal
C3                   C4                    Central
P3                   P4                    Parietal
O1                   O2                    Occipital
F7                   F8                    Inferior frontal/anterior temporal
T7                   T8                    Mid temporal
P7                   P8                    Posterior temporal
A1                   A2                    Earlobe electrodes

a different impact on EEG signal acquisition. Signals acquired from electrodes need to be amplified because of their low amplitude and incompatibility with display and other devices such as an A/D converter [23]. For amplification, a biopotential amplifier is used, which should not influence the measured signal, should separate the signal as well as possible, and should be protected against damage. The standard values of the instruments used in the design of EEG acquisition systems, such as amplifiers and filters, are provided in Table 3.3. Effective EEG signal acquisition mostly depends on the type of electrode, the amplification technique, and the data transmission method. Specific to BCI applications, Pinegger et al. presented a case study on the choice of electrodes for BCI suitability. The study concludes that the selection of electrode is purely based on application and requirements: the lowest noise is obtained using water-based electrodes, the maximum P300 speller accuracy is associated with a hydrogel-based electrode cap, while a dry electrode cap provides the least inconvenience and maximum satisfaction [24].

3.2.3.3 Preprocessing
EEG acquisition is one of the most important aspects of effective BCI performance. Although many precautions are taken while recording EEG signals, some technical or human functional glitches still affect the analysis of the signals [23]. This can become a serious problem when such noise is considered as EEG, creating the possibility of a mismatch in the identification of a specific disease or in any specific application. These glitches are called artifacts. Preprocessing is the process of removal or reduction of those artifacts. Artifacts are classified into two types: technical and human functioning.

3.2.3.3.1 Technical artifacts
Artifacts related to the instruments used in the acquisition of signals are called technical artifacts. The common technical artifacts, with the cause of each, are presented in Table 3.4 [23].


Table 3.3 Standard parameters of electroencephalography acquisition devices.

Parameter                      Value
Gain                           100–100,000
Common mode rejection ratio    ≥ 100 dB
Input impedance                100 MΩ
Low-pass filter                ≤ 50 Hz
Notch filter                   50–60 Hz
High-pass filter               0.1–0.7 Hz
Noise value                    0.3–2 μV

Table 3.4 Technical artifacts.

Name of artifact         Cause
50/60 Hz artifact        Due to poor contact of electrode
Cable movement           Human error
Impedance fluctuation    Technical error
Broken wire              Technical error
Too much electrode gel   Human error
Low battery              Technical error
Electrode pop            Charge on electrode
Bad electrode            Poor electrode contact with high voltage deflection

3.2.3.3.2 Human functioning artifacts
Artifacts that arise due to human body functioning are called human functioning artifacts. Body functions related to the brain disturb the original signal acquired for specific BCI tasks [23]. These body functions, with the artifacts they cause and the acquisition channels they affect, are presented in Table 3.5. Removal of these artifacts is necessary for effective analysis of EEG signals; the process of reducing them is known as preprocessing and is generally done by filtering the signals. Ideal filters would remove all the artifacts and preserve the actual cerebral activity of the brain. Real filters are not ideal but are based on mathematical formulations; hence, only artifacts that can be expressed as mathematical functions can be removed with standard filtering techniques. Standard preprocessing techniques are

• low-pass filter,
• high-pass filter, and
• notch filter.
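These three standard filters can be sketched with scipy. This is a minimal illustration; the sampling rate, cutoff frequencies, and filter orders below are assumptions chosen to match the 1–30 Hz band and 50 Hz mains interference discussed earlier, not values prescribed by the chapter:

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

def preprocess_eeg(x, fs=250.0):
    """Apply the three standard preprocessing filters to a 1-D EEG trace."""
    # High-pass at 0.5 Hz to remove slow drifts (e.g., sweat artifacts).
    b_hp, a_hp = butter(4, 0.5 / (fs / 2), btype="highpass")
    x = filtfilt(b_hp, a_hp, x)
    # Low-pass at 30 Hz: the EEG band of interest lies between 1 and 30 Hz.
    b_lp, a_lp = butter(4, 30.0 / (fs / 2), btype="lowpass")
    x = filtfilt(b_lp, a_lp, x)
    # Notch at 50 Hz to suppress power-line interference.
    b_n, a_n = iirnotch(50.0, Q=30.0, fs=fs)
    return filtfilt(b_n, a_n, x)
```

Here `filtfilt` applies each filter forward and backward, which avoids introducing a phase shift into the EEG trace.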




Table 3.5 Human functioning artifacts.

Name of artifact               Reason                                                   Affected area/channel
Eye blink                      Eye opening and closing                                  Frontal lobe
Lateral eye movement           Movement of cornea                                       Motions of +ve charge on F7 and F8 are important for this artifact analysis
Nystagmus artifact             Due to eye movement                                      Rhythmicity on F7 and F8
Muscle artifact                Muscle activity on wakefulness (high spike on channel)   Mostly found in Fp1, Fp2, T7, and T8
Sweat artifact                 Sweating due to long time of recording                   Adjacent channels affected due to formation of electrolyte bridge
EKG artifact                   Related to heart and respiration process (spike)         EKG channel (A1-T7)
Pulse                          When electrodes placed directly on artery (wide pulse)   Heart beat like rhythm with EKG channel A1-T7
Motion (special)               Due to heavy respiration or ventilation                  —
Hiccup                         Due to hiccupping                                        —
Chewing and bruxism artifact   Chewing as nervous habit and during sleep                Temporal lobe
Glossokinetic artifact         Due to the movement of tongue                            Central and frontal lobe
Breach rhythm                  Due to skull defect                                      Depends on the area of defect

EKG, Electrocardiograph.

Apart from standard filtering, filters are modified to obtain noise-free signals. Of the various artifacts discussed previously, only the human functioning-related artifacts can be reduced with the available filtering. Some techniques widely used for artifact removal are given in Table 3.6.

3.2.3.4 Features and feature extraction technique
After preprocessing and artifact removal, the signals are ready to be processed for a specific application. Various techniques are used to extract features from EEG signals. Features provide the characteristics of signals that are important for BCI-based applications. The features extracted for various applications are presented here to clarify the types of features that can be extracted to improve BCI performance.

3.2.3.4.1 Fourier transform based feature
An EEG signal is a combination of various frequencies. The Fourier transform is used to analyze EEG signals in terms of their components: amplitude, frequency, and phase. One problem associated with the Fourier transform is that it results in complex numbers; hence its absolute value is


Table 3.6 Different artifacts and their removal techniques.

Type of artifact        Removal techniques
Eye movement artifact   • Blind source separation + SVM [25]
                        • Independent component analysis + weighted SVM [26]
                        • DWT + adaptive predictor filtering [27]
                        • Radial basis function + artificial neural network [28]
                        • GSVD + SFA [29]
                        • Kalman filter [30]
                        • Adaptive filter [31]
                        • Wavelet neural network [32]
                        • Functional link neural network and adaptive neural fuzzy inference system [33]
EOG                     • Local singular spectrum analysis + embedding dimension [34]
                        • Second-order blind identification + stationary wavelet transform [35]
EMG                     • SVD [36]
                        • Polynomial network + decision tree [37]

DWT, Discrete wavelet transform; EMG, electromyography; EOG, electrooculography; GSVD, generalized singular value decomposition; SFA, spectral factor analysis; SVD, singular value decomposition; SVM, support vector machine.

incorporated for practical purposes [12]. The Fourier transform of a signal can be calculated as follows:

x(jω) = ∫_{−∞}^{+∞} x(t) e^{−jωt} dt        (3.1)

where x(t) is the EEG signal and x(jω) is the resulting Fourier transform of the signal [38]. One of the assumptions associated with the Fourier transform is that the signal must be stationary in the given range. To satisfy this assumption with EEG signals, the fast Fourier transform (FFT) is an alternative that provides a faster, more efficient, and elegant response [12]. Even though the Fourier transform provides analysis of EEG signals, it has the limitation that it cannot deal with nonstationary signals over a long period of time. It also cannot capture the dynamics (time-varying changes in frequency structure) of EEG with only a power spectrum and a phase spectrum.

Matlab command for Fourier transform
The fft() function in Matlab computes the FFT to provide the Fourier transform of a signal. This function has a computational complexity of n log n [39].

z = fft(x)

where x is the input EEG signal and z is the output matrix of the FFT of x.
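An equivalent computation outside Matlab, using numpy's FFT and taking the absolute value as described above, might look like the following. The band-power helper is our own illustrative construction, not part of the chapter's pipeline:

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Average power of signal x in the band [f_lo, f_hi] Hz via the FFT."""
    spectrum = np.fft.rfft(x)                    # one-sided FFT of a real signal
    power = np.abs(spectrum) ** 2 / len(x)       # absolute value: |x(jw)|^2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)  # frequency axis in Hz
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return power[mask].mean()
```

For a motor imagery application, such band powers in, say, the mu (8–12 Hz) and beta (13–30 Hz) bands could serve as Fourier-based features.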




3.2.3.4.2 Wavelet-based feature
Before starting with the wavelet transform, let us understand what a wavelet is. Wavelets are similar to band-pass filters at a certain frequency range or its nearby frequencies. They are useful for localizing changes in the frequency of a signal over time. Wavelets are popular over the Fourier transform because the latter does not provide any information about amplitude fluctuation throughout the time interval, while wavelets do, with different kinds of wavelet functions available for the transformation [12]. One of the most famous wavelets is the Morlet wavelet, because it suits EEG signals best for the localization of frequency in time. The mathematical expression for the continuous wavelet transform (CWT) is as follows:

CWT_x^ψ(τ, s) = (1/√|s|) ∫ x(t) ψ*((t − τ)/s) dt        (3.2)

Expression (3.2) shows that the CWT is a function of two variables, τ and s, which are the translational and scale parameters, respectively [40]. The translational parameter is related to the location of the window in time, while the scale parameter is related to the contraction or dilation of the signal for a global or local view. The function ψ(·) is the transformation function, called the mother wavelet; as the name suggests, it supports the transformation with a small wave of finite duration that is oscillatory in nature [40]. Because this small wave is multiplied throughout the signal, it is easy to explore the nature of the frequencies in the signal. While wavelets provide the frequency characteristics of signals, they are unable to provide the power and phase information required for signal analysis. One more issue associated with wavelets is phase lag, which results in a negative or positive orthogonal vector through which energy cannot be calculated; energy calculation requires zero phase lag between the two vectors. This must be taken into consideration while working with the wavelet transform. Various studies suggest using the discrete wavelet transform (DWT) for nonstationary signal analysis in a computational environment. The basic difference between the CWT and DWT is the discretization of the scale parameter: in the CWT the scale parameter (s) is exponential with a base greater than 2, while the DWT uses an exponential scale with base equal to 2 [41]. The DWT results in a sparse representation with far fewer coefficients than the CWT; hence it reduces the dimension of the data with minimal loss. In the DWT it is easy to distinguish signal from noise based on the wavelet coefficients, because the coefficients are large for the signal of interest, while the noise produces many small DWT coefficients that can easily be discarded [41].
Matlab command for wavelet decomposition
Wavelet decomposition of signals in Matlab can be obtained using the following function:

[c, l] = wavedec(x, n, wname)

where x is the signal for decomposition, n is the level of decomposition, and wname is the mother wavelet used for decomposition [42]. For example,

[c, l] = wavedec(x, 3, 'db2');
ap = appcoef(c, l, 'db2');
[cd1, cd2, cd3] = detcoef(c, l, [1 2 3]);

In this example, the db2 wavelet with three levels of decomposition is used. This results in one approximate coefficient vector "ap" and three detailed coefficient vectors "cd1, cd2, cd3" [42].
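Outside Matlab, the same idea can be sketched directly. The following Python code implements a multilevel DWT with the Haar wavelet; the Haar wavelet is a stand-in chosen for brevity (wavedec supports many other mother wavelets such as db2), and the function names are our own:

```python
import numpy as np

def haar_dwt(x):
    """One level of the discrete wavelet transform with the Haar wavelet.
    Returns (approximation, detail) coefficients at half the input length."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)   # low-pass: scaled local averages
    detail = (even - odd) / np.sqrt(2)   # high-pass: scaled local differences
    return approx, detail

def haar_wavedec(x, level):
    """Multilevel decomposition, analogous to wavedec(x, level, 'haar')."""
    details = []
    approx = np.asarray(x, dtype=float)
    for _ in range(level):
        approx, d = haar_dwt(approx)
        details.append(d)
    return approx, details
```

Because the Haar transform is orthonormal, the energy of the signal is preserved across the approximation and detail coefficients, which is what makes wavelet-energy features meaningful.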

3.2.3.4.3 Statistical features
Apart from transformations like the Fourier and wavelet transforms, statistical features have proven their importance in every field, and BCI is no exception. Mean, median, mode, variance, and kurtosis are a few statistical measures that analyze the given data and help draw conclusions about the type of data. Statistical measurement is made using the following tools for analysis:

• mean and amplitude distribution,
• entropy measure, and
• autocorrelation function.

Energy-based features and the common spatial pattern are also very popular features for the classification of motor imagery signals. Mayra et al. compared different feature extraction algorithms for BCI based on steady-state visual evoked potential (SSVEP) and P300 signals. The results showed that SSVEP and P300 yield higher classification accuracy with the cross-correlation coefficient [43]. Hamza et al. proposed a new signal-dependent feature extraction technique that uses linear prediction singular value decomposition for extraction. This technique outperforms discrete cosine transform and autoregressive features [44].
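A small Python sketch of such a statistical feature vector for a single channel is given below; the particular feature set and histogram bin count are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def statistical_features(x):
    """Illustrative statistical feature vector for one EEG channel."""
    xc = x - x.mean()
    # Amplitude distribution -> entropy measure (Shannon entropy of histogram).
    hist, _ = np.histogram(x, bins=32)
    p = hist[hist > 0] / hist.sum()
    entropy = -np.sum(p * np.log2(p))
    # Lag-1 value of the (normalized) autocorrelation function.
    autocorr1 = np.dot(xc[:-1], xc[1:]) / np.dot(xc, xc)
    return {
        "mean": x.mean(),
        "variance": x.var(),
        "kurtosis": stats.kurtosis(x),  # excess kurtosis, 0 for a Gaussian
        "entropy": entropy,
        "autocorr_lag1": autocorr1,
    }
```

Such a vector, computed per channel and per trial, can be concatenated with transform-based features before classification.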

3.2.3.5 Classification
Based on the feature vectors, classifiers are chosen for better classification of the task in BCI. Support vector machine (SVM), k-nearest neighbor (kNN), decision tree, and logistic regression are some popular classifiers widely used for classification in the area of BCI. The BCI task can be of binary classification type, or it may have multiple classes depending on the application. A detailed description of some classifiers is provided along with the discussion of the case study later in this chapter.


3.3 Electroencephalography acquisition devices

• NeuroSky (http://neurosky.com/): wired/wireless; 1 electrode; dry; EEG. Attention, meditation, and neurofeedback solutions to help improve meditation and sleep.
• Muse (https://www.muse.mu): wireless; 4 electrodes; dry; EEG. Meditation; mind, heart, body, and breath signal analysis.
• Emotiv (https://www.emotiv.com/): wireless; 5–14 electrodes; semidry; EEG. Also detects head movement (nine-axis motion sensor), facial expression, emotional state, and mental commands.
• OpenBCI (https://openbci.com): wireless; 8–16 electrodes; EEG, EMG, ECG. Low cost, high-quality brain imaging.
• ABM (https://www.advancedbrainmonitoring.com): wireless; 10–24 electrodes; semidry; EEG. Sleep disease and biomarkers; enables data collection with increased mobility (and increased comfort too).
• ANT Neuro (https://www.antneuro.com): wireless; 8–32 or 32–256 electrodes; dry or semidry; EEG with TMS, MEG, fMRI, or NIRS. The ability to collect EEG data without conductive gel; time to data collection is reduced.
• Cognionics (https://www.cognionics.net): wireless; 20–30 electrodes; dry; EEG. Psychology, sport, drowsiness/fatigue, and serious gaming/VR studies.
• mBrainTrain (https://mbraintrain.com): wireless; 24 electrodes; semidry; EEG. Mental work, speller projects, neuroscience, psychology, BCI, neuromarketing, and ergonomics.
• Wearable Sensing (https://wearablesensing.com): wireless; 7–24 electrodes; dry; EEG.
• G.tec (http://www.gtec.at/): wireless; 8–64 electrodes; EEG, ECoG. Zero class enabled for SSVEP, P300, and motor imagery.
• BioSemi (https://www.biosemi.com): wireless; 16 electrodes; wet, dry, or semidry; EEG, ECG, EMG. Research products and online interface.
• Brain Products LiveAmp (https://brainproducts.com): wireless; 32 electrodes; dry; EEG. Mind wave applications.

ECoG, Electrocorticography; EEG, electroencephalography; EMG, electromyography; fMRI, functional magnetic resonance imaging; NIRS, near-infrared spectroscopy; MEG, magnetoencephalography.


3.4 Challenges
In the previous sections, BCI was introduced and its working explained in detail. However, some challenges faced while working with BCI remain unexplored. This section describes those challenges, whose solution will improve BCI performance; they are also future directions through which researchers can contribute to the area.

3.4.1 Implantation of electrode
As BCI performance is directly proportional to the quality of the signal obtained from the brain, the placement of electrodes plays a major role in BCI performance. Two methods exist for electrode placement, namely, invasive and noninvasive. The invasive method implants electrodes inside the skull, in the gray matter; it provides the highest quality of signals but carries a higher risk of tissue damage. Despite those risks, the invasive technique is important for analyzing the temporal aspects of seizures and similar diseases, to understand the patterns and find correlations between the acquired signals [5]. Due to the risk of tissue damage and scar tissue formation, signal decay leads to loss of information that could otherwise have been obtained, which makes information enrichment a challenging task. To overcome the problem of tissue damage, noninvasive techniques come into the picture. In noninvasive BCI, electrodes are placed on the scalp; they are dry, semidry, or wet electrodes, each with its own advantages and disadvantages. Noninvasive electrodes provide information from the brain but, since the electrical field weakens with distance, the EEG signals are more prone to noise and have a low signal-to-noise ratio (SNR) [6]. Another issue with noninvasive BCI is that the tissues of the head act as a low-pass filter and attenuate the brain signals down to a few hertz; hence BCI with noninvasive electrodes is limited to the study of low-frequency signals [6,45].

3.4.2 High dimensionality of data
BCI uses various electrodes for signal acquisition, as discussed earlier. The signals are acquired not from a single electrode but from a set of electrodes called channels. Various acquisition devices are available, with channel grids ranging from 8 × 8 to 64 × 64. As the number of channels increases, the information increases, but the proportion of redundant data also becomes higher. The increased number of channels leads to an increased dimension of data, typically in gigabytes; hence it requires higher computational power from the devices used [10].




3.4.3 Information transfer rate
The information transmission stream in BCI is similar to the message transmission of a telecommunication system. Information transfer rate (ITR) is one measure of performance evaluation in BCI, similar to the various measures described in Ref. [46]. One of the major challenges in BCI performance is the ITR, due to the low SNR of EEG signals [47]. The ITR can be calculated by the following formula:

ITR = (log₂M + P log₂P + (1 − P) log₂[(1 − P)/(M − 1)]) × (60/T)        (3.3)

where P is the accuracy, M is the number of classes, and T is the average time of selection [46,48]. To improve the ITR, P and M should be maximized and T should be minimized; the trade-off between these parameters creates challenges for the BCI system.
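Eq. (3.3) translates directly into code. A small Python helper (our own, with the 0·log₂0 term taken as zero when P = 1):

```python
import math

def itr_bits_per_min(P, M, T):
    """Information transfer rate from Eq. (3.3).
    P: classification accuracy in [0, 1]
    M: number of classes
    T: average selection time in seconds."""
    bits = math.log2(M)
    if 0 < P < 1:
        bits += P * math.log2(P) + (1 - P) * math.log2((1 - P) / (M - 1))
    # For P == 1 both penalty terms vanish (0*log2(0) is taken as 0).
    return bits * (60.0 / T)
```

For example, a perfect two-class BCI selecting every 5 seconds transfers 12 bits/min, while a two-class BCI at chance level (P = 0.5) transfers nothing, which illustrates the trade-off mentioned above.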

3.4.4 Technical challenges
• Nonlinearity: Nonlinearity in BCI is related to brain functioning. The brain is assumed to be a nonlinear system due to its variable dynamics. A nonlinear system is preferred to a linear one for EEG signals because it can describe the intents more clearly [49–51]. One example is provided in the research paper [48] for understanding the impact of nonlinearity: the authors used SSVEP signals and found that the nonlinearity in SSVEP could provide resonance with the stimulus frequency, which means that the information obtained from nonlinear SSVEP signals is useful to distinguish the stimulus frequency from the available set of frequencies of the brain. On the other hand, nonlinearity also leads to loss of information, especially in ERP signals, which are highly sensitive to neuropsychological parameters [1].
• Nonstationary signals: The word nonstationary itself indicates mobility. The mobility of EEG signals is a variation in their statistical characteristics over time, due to fluctuations in the mental stability or mental state of the human. This causes problems when frequency analysis is required, because frequency-domain transformations consider the input signal stationary at all times. EEG signals are highly nonstationary, and since they are the inputs of BCI, this nature of EEG makes the BCI sensitive [36]. This can become a very challenging task in the domain, but one solution is to divide the signal into small durations in which it can be assumed stationary.

3.5 Case study on brain–computer interface
In the previous sections we have understood the concept behind BCI, its working, and its scope. Now it is time to get to know BCI practically. In this section we have evaluated a BCI for motor imagery movement, namely, hand and feet movement. To work on a BCI based on hand and feet movement, a dataset is taken from BNCI Horizon 2020. The description of the dataset is given in the following sections.

3.5.1 Dataset
A two-class motor imagery standard dataset is available online at http://bnci-horizon-2020.eu/database/data-sets. The dataset consists of 20 subjects (the persons from whom the signals are recorded). A cue-based paradigm was used to train the model; hence a single session was performed. The session has eight runs: five for training and three for testing. One run is composed of 20 trials, 10 trials for each class, which means that there are 5 × 10 = 50 trials available for training and 3 × 10 = 30 trials for testing of each class [52]. The data is recorded from 15 channel electrodes, so the dataset has 15 columns, each of which represents the position of an electrode.

3.5.2 Problem statement
Various studies have reported that the evaluation of motor imagery classification gives the best performance when only the three channels C3, C4, and Cz are selected from the available set of channels. In the current study we evaluate whether these three channels alone are responsible for motor imagery classification or whether other channels also play a significant role in a better classification of motor imagery signals.

3.5.3 Proposed method
Comparative analysis of channel selection for motor imagery classification has been performed with the following steps. Initially, the data is acquired from the dataset and the trials are combined into one set for each subject to reduce the time complexity of the overall process. The data is then preprocessed using a Butterworth filter. From the 15 channels of processed data, the six best channels were selected using particle swarm optimization (PSO), and in another set, the standard channels for motor imagery were selected. The data from the selected channels is then used for feature extraction using the wavelet transform, and it is further classified using various classifiers to verify the significance of channel selection. Each step is elaborated in further subsections of the chapter. A flowchart of the proposed method is shown in Fig. 3.5.

3.5.3.1 Data acquisition and preprocessing
The process of data acquisition and preprocessing was discussed earlier. The detailed description of the dataset is available online at http://bnci-horizon-2020.eu/database/data-sets.




FIGURE 3.5 A flowchart of channel selection in BCI for two-class motor imagery classification. BCI, Brain–computer interface.

3.5.3.2 Optimized channel selection using particle swarm optimization
Toward our aim of studying the impact of channels on motor imagery classification, the well-known optimization technique PSO is used for the channel selection process. Channel selection methods with optimization have been used in various studies [53,54].

3.5.4 Working of particle swarm optimization for channel selection
Optimization techniques are adopted for convergence to an optimal point. In this case, we are searching for an optimal set of channels that maximizes classification accuracy. The objective function for the problem is formulated as

max_{channels} Accuracy

where the number of channels is kept constant at six. Kennedy, Eberhart, and Shi were the first to map the social behavior of swarms into computational mathematics for optimization [55]. According to the mathematics involved in searching for the best position, the position of a particle as well as its velocity plays an important role. For finding the best channel that provides better classification accuracy, the global best position in each iteration is important. Fig. 3.6 shows two swarms searching for the best possible electrodes. The velocity and position update formulas for each swarm in the population are given in the following. The velocity update formula is


FIGURE 3.6 PSO for channel selection. PSO, Particle swarm optimization.

v(t + 1) = w × v(t) + c1 × rand × (pbest − x(t)) + c2 × rand × (gbest − x(t))        (3.4)

The position update rule for a swarm is

x(t + 1) = x(t) + v(t + 1)        (3.5)

where x(t) is the current position, v(t) is the velocity, w is the inertia weight, and c1, c2 are the constants associated with the pbest and gbest positions. Initially, a population size of 300 is taken for the current work, with 100 iterations and six channels selected for the search operation. Algorithm 3.1 describes the workflow of the complete channel selection process. In a population of 300 swarms, each swarm has its personal best position, that is, each swarm finds its best electrode, but it is also influenced by the other swarms' best electrodes as well as the global best position of electrodes among all the swarms. The constants c1 and c2 tell the swarm how much weight to associate with each position to move in the best direction, and the velocity of a particle is controlled by the weight w. Following this scheme, the swarms settle on the electrodes that maximize the classification accuracy. In the current work, PSO yields six optimized channels corresponding to columns 11, 13, 15, 12, 10, and 3. We ran the same optimization 100 times, and the electrodes with the maximum frequency of occurrence were selected for further investigation.




Algorithm 3.1 Algorithm for channel selection using PSO

Input: number of channels to be selected n, number of iterations I
Output: optimized channels for best accuracy
Initialize: weight w, constants C1 and C2, population size P
for each particle 1..P
    initialize particle position X and velocity V
end
initialize global best position Gbest
while iteration <= I
    for each particle 1..P
        calculate fitness value
        update personal best position Pbest
        if fitness(Pbest) > fitness(Gbest)
            update Gbest
        end
    end
    update velocity of each particle: Vn = w*V + C1*rand()*(Pbest - X) + C2*rand()*(Gbest - X)
    update position of each particle: Xn = X + Vn
    check boundaries
    iteration = iteration + 1
end
Selected channels = Gbest
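A compact Python sketch of the update rules (3.4) and (3.5) is given below. For brevity, the fitness here is a toy function with a single peak standing in for classification accuracy; in the actual pipeline, a particle's fitness would be the accuracy obtained from the channels it encodes, and the parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(fitness, dim, pop=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO maximizing `fitness`, implementing Eqs. (3.4) and (3.5)."""
    x = rng.uniform(-5, 5, (pop, dim))      # particle positions
    v = np.zeros((pop, dim))                # particle velocities
    pbest = x.copy()                        # personal best positions
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_val.argmax()].copy()  # global best position
    for _ in range(iters):
        r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (3.4)
        x = x + v                                                   # Eq. (3.5)
        val = np.array([fitness(p) for p in x])
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest

# Toy fitness standing in for classification accuracy: peak at the origin.
best = pso(lambda p: -np.sum(p ** 2), dim=3)
```

The swarm converges on the position with the highest fitness, just as the channel-selection run converges on the electrode subset with the highest accuracy.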


3.5.4.1 Standard channel selection
The motor imagery dataset is acquired on 15 channels, which correspond to the 15 columns present in the dataset. Various studies of motor imagery suggest that the electrodes placed over the central lobe, that is, C3, C4, and Cz, are the best electrodes, and the corresponding channels are the best channels for motor imagery task classification. The channels related to the central lobe are channels 5, 8, and 11. These channels are compared with the channels obtained from PSO for the analysis of the impact of channel selection.

3.5.4.2 Feature extraction using wavelet transform
Now we have two sets of channels available: one is {5, 8, 11} and the other is {11, 13, 15, 12, 10, 3}. These channels are used for feature extraction. The wavelet transform was already discussed in Section 3.2.3.4. For each set of channels, the wavelet transform is computed, and based on the wavelet energy and a five-level wavelet decomposition, one approximate and five detailed coefficients are calculated to be used as features for motor imagery classification. Only one feature type is calculated here because the main aim of the study is to examine the impact of channel selection for a fixed feature set and static classifiers. For the dataset used, the standard channels yield features of dimension 160 × 19, and the optimized set of channels leads to a feature vector of 160 × 36. This feature vector is further used for classification.

3.5.4.3 Classification
Various classification techniques are available for the classification of data. In the current study the well-known classifiers kNN, SVM, and the subspace discriminant ensemble method are used for two-class motor imagery classification. To convey the basic concepts of these classifiers, they are introduced next.

3.5.5 k-Nearest neighbor
The kNN is a proximity-based lazy learner. This classifier finds the training instances that are most similar to a test instance. A kNN classifier characterizes each instance as a data point in V-dimensional space, where V is the number of features. Given a test example, its proximity to the instances in the training set is computed using proximity measures [56]. The algorithm for nearest neighbor classification is presented next.

Algorithm 3.2 Algorithm for nearest neighbor classification

Input: training dataset D = [(a1, b1), (a2, b2), ..., (an, bn)], where (ai, bi) ∈ R^V
Output: class of new data
for each test instance t
    calculate the distance between t and every instance (ai, bi) ∈ D
    select Dk ⊆ D, the set of k nearest training examples to t
    obtain the class label by the majority voting approach
end

Fine kNN uses one neighbor, while medium kNN uses 10 neighbors and coarse kNN uses 100 neighbors, all with equal weights and the Euclidean distance measure. Weighted kNN uses squared-inverse weights instead of equal weights, with 10 neighbors and the Euclidean distance.
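A minimal numpy implementation of the majority-vote rule in Algorithm 3.2, with equal weights and Euclidean distance as in the fine/medium/coarse kNN variants (an illustrative sketch, not the toolbox implementation used in the study):

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=10):
    """Classify each test instance by majority vote of its k nearest
    training instances (Euclidean distance, equal weights)."""
    preds = []
    for t in test_X:
        dist = np.linalg.norm(train_X - t, axis=1)  # distance to all training points
        nearest = np.argsort(dist)[:k]              # indices of the k nearest neighbors
        labels, counts = np.unique(train_y[nearest], return_counts=True)
        preds.append(labels[counts.argmax()])       # majority vote
    return np.array(preds)
```

Setting `k=1` gives fine kNN, `k=10` medium kNN, and `k=100` coarse kNN.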

3.5.6 Support vector machine
The SVM is a classifier derived from statistical learning theory by Vapnik et al. in 1992. It is widely used for classification because it uses the maximum margin concept: the separating boundary is chosen so that it has the maximum generalization ability. To create separable classes, it transforms the features into a higher dimensional feature space using kernels, such as the Gaussian, polynomial, and sigmoid kernels [57]. The optimization problem for classification is

minimize (1/2)‖w‖² + C Σᵢ δᵢ        (3.6)

such that yᵢ(wᵀφ(xᵢ) + b) ≥ 1 − δᵢ and δᵢ ≥ 0, where wᵀx + b = 0 is the equation of the linear classifier, φ(x) is the kernel mapping used in classification, and C is the parameter that controls overfitting [58]. In the current work, a Gaussian kernel and a polynomial kernel of degree 2 (quadratic) are used for classification. The choice of these two kernels is based on the dataset: the two-class classification of motor imagery data is nonlinear, and the two degrees of freedom make Gaussian and quadratic SVM more suitable for it.
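To make the objective in Eq. (3.6) concrete, the following Python sketch trains a linear soft-margin SVM by subgradient descent on the primal. This is a simplification of what the study uses: no kernel and plain gradient steps; the chapter's Gaussian and quadratic kernels would require a kernelized solver:

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=200):
    """Linear soft-margin SVM trained by subgradient descent on
    (1/2)||w||^2 + C * sum(hinge losses); labels y must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                     # points violating the margin
        grad_w = w - C * (y[viol, None] * X[viol]).sum(axis=0)
        grad_b = -C * y[viol].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

A new point x is then classified by the sign of wᵀx + b, the equation of the linear classifier above.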

3.6 Results
Performance evaluation of channel selection in the current work has been done based on accuracy. The accuracies obtained from the classifiers for the different sets of channels are given in Table 3.7. The classification accuracies show that the channels selected using PSO are better than the standard channels C3, C4, and Cz. Several classifiers show a large change in accuracy: fine kNN shows a difference of 10%, and the others show changes of approximately 3%–4%, while the ensemble subspace discriminant remains nearly the same. So it can be said that the choice of channels, as well as the choice of classifier, is important for a better classification of motor imagery signals.


Table 3.7 Comparative analysis of accuracies for classification of MI task.

Classifier                        Accuracy for standard channels    Accuracy for channels selected
                                  C3, C4, and Cz (%)                using PSO (%)
Fine kNN                          50.6                              60.0
Coarse kNN                        43.1                              48.8
Quadratic SVM                     53.1                              53.8
Logistic regression               52.5                              57.5
Ensemble subspace discriminant    52.4                              52.5

kNN, k-nearest neighbor; PSO, particle swarm optimization; SVM, support vector machine.
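For readers who want to experiment, a simplified binary-PSO channel-selection loop in the spirit of the approach evaluated in Table 3.7 can be sketched as follows. Everything here is illustrative: the data are synthetic, and the swarm size, iteration count, and coefficients are assumptions, not the chapter's settings.

```python
# Binary PSO for channel selection: each particle encodes a channel subset,
# fitness is cross-validated classification accuracy, and real-valued
# velocities are squashed through a sigmoid to resample the binary positions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
n_trials, n_channels = 120, 12
X = rng.normal(size=(n_trials, n_channels))
y = (X[:, 2] - X[:, 5] > 0).astype(int)  # only channels 2 and 5 are informative

def fitness(bits):
    """Mean 3-fold CV accuracy of a kNN classifier on the selected channels."""
    m = bits.astype(bool)
    if not m.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, m], y, cv=3).mean()

n_particles, n_iters = 10, 15
pos = (rng.random((n_particles, n_channels)) < 0.5).astype(int)
vel = np.zeros((n_particles, n_channels))
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random(vel.shape) < 1.0 / (1.0 + np.exp(-vel))).astype(int)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved] = pos[improved]
    pbest_fit[improved] = fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected channels:", np.flatnonzero(gbest))
```

On data like this the swarm tends to concentrate on the informative channels, mirroring the finding that PSO-selected channels can beat a fixed C3/C4/Cz montage.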

FIGURE 3.7 Effect of channel selection on accuracy. Bar chart comparing, for each classifier in Table 3.7, the accuracy for channels selected using PSO (%) against the accuracy for the standard channels C3, C4, and Cz (%).

The graphical representation of the classifiers is shown in Fig. 3.7 to visually convey the impact of channel choice. The area under the curve (AUC) and the receiver operating characteristic (ROC) are also significant for the choice of classifier and for model selection. In the current study, the ROCs obtained from fine kNN and logistic regression are shown in Figs. 3.8–3.11.
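ROC curves and AUC values of the kind shown in Figs. 3.8–3.11 can be computed with scikit-learn; the logistic-regression model and synthetic data below are illustrative stand-ins for the study's classifiers and EEG features:

```python
# Computing a ROC curve and its AUC for a probabilistic classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

clf = LogisticRegression().fit(X[:150], y[:150])
scores = clf.predict_proba(X[150:])[:, 1]          # P(class = 1) on held-out trials
fpr, tpr, thresholds = roc_curve(y[150:], scores)  # points along the ROC curve
print("AUC =", round(roc_auc_score(y[150:], scores), 2))
```

Plotting `fpr` against `tpr` reproduces curves like those in the figures; an AUC near 0.5 (as for the standard-channel fine kNN) indicates near-chance discrimination.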

3.7 Conclusion

From the current study it can be concluded that channel selection should not be fixed for every task; nowadays, various techniques are available for choosing channels.

FIGURE 3.8 ROC for fine kNN for channel selection using PSO (AUC = 0.57). kNN, k-nearest neighbor; PSO, particle swarm optimization.

FIGURE 3.9 ROC for fine kNN using the standard channels (AUC = 0.49). kNN, k-nearest neighbor.

FIGURE 3.10 ROC of logistic regression for channel selection using PSO (AUC = 0.53). PSO, particle swarm optimization.

FIGURE 3.11 ROC of logistic regression using the standard channels (AUC = 0.51).

Using an appropriate selection scheme can produce better results. The current study shows that channel selection for the motor imagery task improved classification accuracy by 1%–8%, depending on the classification technique, confirming that channel selection plays a very important role in improving classification performance.

References

[1] S.J. Luck, G.F. Woodman, E.K. Vogel, Event-related potential studies of attention, Trends Cogn. Sci. 4 (11) (2000) 432–440.
[2] N. Kamel, A. Malik, The fundamentals of EEG signal processing, in: EEG/ERP Analysis: Methods and Applications, 2014, 21–71.
[3] D.T. Langendoen, L.R. Gleitman, M. Liberman, An invitation to cognitive science, Language (Baltim.) (1997).
[4] H. Berger, Über das Elektrenkephalogramm des Menschen: XII. Mitteilung, Arch. Psychiatr. Nervenkr. 106 (1937) 577–584.
[5] D. Zumsteg, H.G. Wieser, Presurgical evaluation: current role of invasive EEG, Epilepsia 41 (2000) S55–S60.
[6] S. Waldert, Invasive vs. non-invasive neuronal signals for brain-machine interfaces: will one prevail? Front. Neurosci. 10 (2016) 295.
[7] N. Mesgarani, E.F. Chang, Selective cortical representation of attended speaker in multi-talker speech perception, Nature 485 (7397) (2012) 233.
[8] A. Kuruvilla, R. Flink, Intraoperative electrocorticography in epilepsy surgery: useful or not? Seizure 12 (8) (2003) 577–584.
[9] E.C. Leuthardt, G. Schalk, J.R. Wolpaw, J.G. Ojemann, D.W. Moran, A brain-computer interface using electrocorticographic signals in humans, J. Neural Eng. 1 (2) (2004) 63.
[10] R.A. Ramadan, A.V. Vasilakos, Brain computer interface: control signals review, Neurocomputing 223 (2017) 26–44.
[11] N.D. Patel, An EEG-Based Dual-Channel Imaginary Motion Classification for Brain Computer Interface, Lamar University-Beaumont, 2011.
[12] M. Cohen, Analyzing Neural Time Series Data: Theory and Practice, MIT Press, 2014.
[13] R.B. Daroff, M.J. Aminoff, Encyclopedia of the Neurological Sciences, Academic Press, 2014.
[14] M. Proudfoot, M.W. Woolrich, A.C. Nobre, M.R. Turner, Magnetoencephalography, Pract. Neurol. 14 (5) (2014) 336–343.
[15] Positron Emission Tomography Scan, Mayo Clinic. [Online]. Available from: https://www.mayoclinic.org/tests-procedures/pet-scan/about/pac-20385078.
[16] PET/CT, Positron Emission Tomography/Computed Tomography. [Online]. Available from: https://www.radiologyinfo.org/en/info.cfm?pg=pet.
[17] S.M. Coyle, T.E. Ward, C.M. Markham, Brain-computer interface using a simplified functional near-infrared spectroscopy system, J. Neural Eng. 4 (3) (2007) 219–226.
[18] N.K. Logothetis, What we can do and what we cannot do with fMRI, Nature 453 (7197) (2008) 869.
[19] fNIRS: The In-Between for Brain Activity in Real-World Settings, Cognitive Neuroscience Society. [Online]. Available from: https://www.cogneurosociety.org/fnirs_wan/.
[20] N. Naseer, K.-S. Hong, Corrigendum: fNIRS-based brain-computer interfaces: a review, Front. Hum. Neurosci. 9 (2015) 172.
[21] L.V. Marcuse, M.C. Fields, J.J. Yoo, Rowan's Primer of EEG E-Book, Elsevier Health Sciences, 2015.
[22] R.W. Homan, J. Herman, P. Purdy, Cerebral location of international 10–20 system electrode placement, Electroencephalogr. Clin. Neurophysiol. 66 (4) (1987) 376–382.
[23] M. Teplan, Fundamentals of EEG measurement, Meas. Sci. Rev. 2 (2) (2002) 1–11.
[24] A. Pinegger, S.C. Wriessnegger, J. Faller, G.R. Müller-Putz, Evaluation of different EEG acquisition systems concerning their suitability for building a brain-computer interface: case studies, Front. Neurosci. 10 (2016) 441.
[25] L. Shoker, S. Sanei, J. Chambers, Artifact removal from electroencephalograms using a hybrid BSS-SVM algorithm, IEEE Signal Process. Lett. 12 (10) (2005) 721–724.
[26] S.Y. Shao, K.Q. Shen, C.J. Ong, E.P.V. Wilder-Smith, X.P. Li, Automatic EEG artifact removal: a weighted support vector machine approach with error correction, IEEE Trans. Biomed. Eng. 56 (2) (2009) 336–344.
[27] Q. Zhao, B. Hu, Y. Shi, Y. Li, P. Moore, M. Sun, et al., Automatic identification and removal of ocular artifacts in EEG: improved adaptive predictor filtering for portable applications, IEEE Trans. Nanobiosci. 13 (2) (2014) 109–117.
[28] A.M. Torres, M.A. García, J. Mateo, Eye interference reduction in electroencephalogram recordings using a radial basic function, IET Signal Process. 7 (7) (2013) 565–576.
[29] C.W. Anderson, J.N. Knight, T. O'Connor, M.J. Kirby, A. Sokolov, Geometric subspace methods and time-delay embedding for EEG artifact removal and classification, IEEE Trans. Neural Syst. Rehabil. Eng. 14 (2) (2006) 142–146.
[30] J.J.M. Kierkels, J. Riani, J.W.M. Bergmans, G.J.M. Van Boxtel, Using an eye tracker for accurate eye movement artifact correction, IEEE Trans. Biomed. Eng. 54 (7) (2007) 1256–1267.
[31] B. Noureddin, P.D. Lawrence, G.E. Birch, Online removal of eye movement and blink EEG artifacts using a high-speed eye tracker, IEEE Trans. Biomed. Eng. 59 (8) (2011) 2103–2110.
[32] H.A.T. Nguyen, et al., EOG artifact removal using a wavelet neural network, Neurocomputing 97 (2012) 374–389.
[33] J. Hu, C.S. Wang, M. Wu, Y.X. Du, Y. He, J. She, Removal of EOG and EMG artifacts from EEG using combination of functional link neural network and adaptive neural fuzzy inference system, Neurocomputing 151 (2015) 278–287.
[34] A.R. Teixeira, A.M. Tomé, E.W. Lang, P. Gruber, A. Martins da Silva, Automatic removal of high-amplitude artefacts from single-channel electroencephalograms, Comput. Methods Programs Biomed. 83 (2) (2006) 125–138.
[35] S.C. Ng, P. Raveendran, Enhanced μ rhythm extraction using blind source separation and wavelet transform, IEEE Trans. Biomed. Eng. 56 (8) (2009) 2024–2034.
[36] W. De Clercq, B. Vanrumste, J.M. Papy, W. Van Paesschen, S. Van Huffel, Modeling common dynamics in multichannel signals with applications to artifact and background removal in EEG recordings, IEEE Trans. Biomed. Eng. 52 (12) (2005) 2006–2015.
[37] V. Schetinin, J. Schult, The combined technique for detection of artifacts in clinical electroencephalograms of sleeping newborns, IEEE Trans. Inf. Technol. Biomed. 8 (1) (2004) 28–35.
[38] A.V. Oppenheim, Discrete-Time Signal Processing, Pearson Education India, 1999.
[39] Fourier Transforms, MATLAB & Simulink. [Online]. Available from: https://www.mathworks.com/help/matlab/math/fourier-transforms.html (accessed 24.04.19).
[40] R. Polikar, The Wavelet Tutorial, Internet Resources, 1994, 1–67.
[41] Continuous and Discrete Wavelet Transforms, MATLAB & Simulink. [Online]. Available from: https://www.mathworks.com/help/wavelet/gs/continuous-and-discrete-wavelet-transforms.html.
[42] 1-D Wavelet Decomposition, MATLAB wavedec, MathWorks India. [Online]. Available from: https://in.mathworks.com/help/wavelet/ref/wavedec.html (accessed 26.04.19).
[43] M. Bittencourt-Villalpando, N.M. Maurits, Stimuli and feature extraction algorithms for brain-computer interfaces: a systematic comparison, IEEE Trans. Neural Syst. Rehabil. Eng. 26 (9) (2018) 1669–1679.
[44] H. Baali, A. Khorshidtalab, M. Mesbah, M.J.E. Salami, A transform-based feature extraction approach for motor imagery tasks classification, IEEE J. Transl. Eng. Health Med. 3 (2015) 1–8.
[45] G. Waterstraat, M. Burghoff, T. Fedele, V. Nikulin, H.J. Scheer, G. Curio, Non-invasive single-trial EEG detection of evoked human neocortical population spikes, NeuroImage 105 (2015) 13–20.
[46] M. Billinger, I. Daly, V. Kaiser, J. Jin, B.Z. Allison, G.R. Müller-Putz, C. Brunner, Is it significant? Guidelines for reporting BCI performance, 2012, 333–354.
[47] J.R. Wolpaw, N. Birbaumer, D.J. McFarland, G. Pfurtscheller, T.M. Vaughan, Brain-computer interfaces for communication and control, Clin. Neurophysiol. 113 (6) (2002) 767–791.
[48] S. Gao, Y. Wang, X. Gao, B. Hong, Visual and auditory brain-computer interfaces, IEEE Trans. Biomed. Eng. 61 (5) (2014) 1436–1447.
[49] T.M. McKenna, T.A. McMullen, M.F. Shlesinger, The brain as a dynamic physical system, Neuroscience 60 (3) (1994) 587–605.
[50] C.J. Stam, Nonlinear dynamical analysis of EEG and MEG: review of an emerging field, Clin. Neurophysiol. (2005).
[51] K.R. Müller, C.W. Anderson, G.E. Birch, Linear and nonlinear methods for brain-computer interfaces, IEEE Trans. Neural Syst. Rehabil. Eng. 11 (2) (2003) 165–169.
[52] D. Steyrl, R. Scherer, O. Förstner, G.R. Müller-Putz, Motor imagery brain-computer interfaces: random forests vs regularized LDA, non-linear beats linear, in: Proceedings of the 6th International Brain-Computer Interface Conference, 2014, 241–244.
[53] M. Arvaneh, C. Guan, K.K. Ang, C. Quek, Optimizing the channel selection and classification accuracy in EEG-based BCI, IEEE Trans. Biomed. Eng. (2011).
[54] W.Y. Hsu, Application of quantum-behaved particle swarm optimization to motor imagery EEG classification, Int. J. Neural Syst. (2013).
[55] R. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proceedings of the International Symposium on Micro Machine and Human Science, 1995.
[56] M.L. Zhang, Z.H. Zhou, ML-KNN: a lazy learning approach to multi-label learning, Pattern Recognit. 40 (7) (2007) 2038–2048.
[57] C.J.C. Burges, A tutorial on support vector machines for pattern recognition, Data Min. Knowl. Discov. 2 (2) (1998) 121–167.
[58] C. Cortes, V. Vapnik, Support-vector networks, Mach. Learn. 20 (1995).

CHAPTER 4

The impact on cognitive development of a self-contained exploratory and technology-rich course on the physics of light and sound

Fernando Espinoza1,2
1Department of Physics and Astronomy, Hofstra University, Hempstead, NY, United States
2Department of Chemistry & Physics-Adolescence Education, SUNY Old Westbury, Old Westbury, NY, United States

The traditionally structured courses in introductory physics (containing separate lecture, recitation, and laboratory parts) present significant challenges for most nonscience majors. Many struggle with the content and its applications, and with the additional difficulties that emerge (such as the transfer of information between different course parts) due to the separation of these three integral components required for a proper understanding of the topics covered. For instructors, unless the same individual teaches all parts, problems of coordinating topic coverage, coherence in the presentation of material, and variations in teaching approaches compound the problem. A 5-year longitudinal investigation of student performance on both content and laboratory tasks in three separate courses dealing with the physics of wave motion reveals that integrating these course components is beneficial for students; moreover, for the understanding of wave phenomena, incorporating the properties of light and sound together is superior to covering these topics separately. The instrument used in the investigation is a recently published textbook whose format represents a departure from traditional physics instruction. Because most traditionally written textbooks emphasize the coverage of topics from an authoritative perspective, they have neglected the students' view. There must be a simultaneous engagement with the material, where the ability to apply the ideas is inseparable from the exposure to their basic definitions. A particularly inhibiting feature of the traditional presentation is that the didactic approach often lacks a context, which likely undermines the development of cognitive skills on the part of the learner.

Cognitive Informatics, Computer Modelling, and Cognitive Science, Volume 1. DOI: https://doi.org/10.1016/B978-0-12-819443-0.00004-0 © 2020 Elsevier Inc. All rights reserved.


Research in science education has repeatedly shown that, from a very young age, humans tend to be better observers when they are interested. Therefore, providing learners with a context is more conducive to understanding the material than introducing it without one. The benefits extend to the affective as well as the cognitive aspects of learning the material, particularly those related to content retention and problem-solving. The evidence for the superiority of exploratory over confirmatory experiences that supports the observed physics content retention and improvement in this study comes from a statistical analysis of performance on all assigned tasks (e.g., exams and experimental activities). While there is evident improvement in content and laboratory skills performance for all three courses, the course where both topics are integrated is the only one showing consistent statistically significant results. In addition, for nonscience majors, most of the benefits in developing the process skills they need as part of scientific literacy relate to the use of microcomputer-based laboratory experiences.

4.1 Background

Introductory science courses provide opportunities to acquire and develop important skills through the experimental part of the course; after all, the main distinguishing characteristic of science rests in the ability to test concepts and ideas. However, learning scientific concepts, understanding the nature of science, and developing positive attitudes toward science remain important [1–8]. This is particularly so for those unlikely to study more science as part of their preparation. To that extent, affective issues that influence student attitudes toward laboratory work can become central to the overall effectiveness of science instruction. While the superiority of active "student-centered" learning over traditional lecture in higher education continues to be documented [9], the main challenge for faculty is one of implementation. For instance, student-centered courses have produced deeper understanding of key concepts in chemistry than the traditional lecture approach in small-enrollment classes, whereas the outcomes are less evident in large-enrollment classes [10]. Interacting with computers using Physlets (small Java applets) in teacher-centered instruction seems more effective for developing basic conceptual understanding than in a student-centered computer lab. However, using Physlets in a student-centered computer lab seems superior for developing students' ability to solve quantitative, real-world problems [11,12]. There needs to be more research on undergraduate laboratory courses [13]. There are negative factors associated with the traditional separation of the lecture from the laboratory part of a course. One can find solutions to issues with scheduling, for example, but coordination between the topics covered, and even


the case of students taking the two parts of the course in different semesters presents significant problems. Studies have shown that integrating the lecture and laboratory experiences in anatomy and physiology courses results in improved learning [14]. The incorporation of traditional laboratory experiences into lecture courses to design so-called studio courses has not been found to lead to improvements in student performance [15]. However, improvements in student attitudes toward problem-solving confidence, as well as personal interest and real-world connections, have been found among introductory physics studio-format sections, as compared to traditional lecture-type sections [16]. These types of courses have also greatly increased student interactivity [17]. The emphasis on practicing accuracy and precision in experimental tasks follows recommendations that student work in such tasks mirror scientific practice [18,19]. Courses that focus on the development of process skills, instead of confirming material covered in lecture, convey expert-like features to students unlikely to take more science courses in college [20]. Concerning students' proper understanding of the role of uncertainty in measurement, meaningful instruction requires sets of data points that emphasize, during data collection, that an individual value is only an estimate of the physical quantity [21]. However, studies have shown that the strategies used in traditional laboratory courses do not lead to improvements in students' understanding of uncertainty beyond numerical routines and algorithmic procedures [22]. Proponents of alternative approaches to laboratory instruction have questioned for some time the effectiveness of laboratory work in undergraduate education [23]. If the design and purpose of laboratory experiences are merely to reinforce classroom instruction (the lecture part of the course), they appear to be ineffective. Studies have shown that content performance differences between students in the same lectures who did and did not take a separate laboratory section were insignificant [24]. Attempts to keep the traditional structure of separating the lecture and the laboratory include modifications of the physical space, where both take place in the same room [25]. There is also evidence of the increasing popularity of project-based approaches to laboratory instruction [26]. The lack of opportunities for nonscience majors to do experimental work in introductory courses has been addressed through extracurricular activities, such as Dorm Room Labs [27]. Other approaches with engineering and science students have included blended learning, which combines face-to-face activities with online tasks that enable students to explore virtual phenomena and discuss them in class [28]. This approach has been particularly successful with low-ability students [29]. Blended learning approaches have also been found successful with chemistry students enrolled in remedial courses [30]. Other studies have included courses designed for at-risk students, where the integration of lectures and experimental work has shown improvements in


retention and success in student performance. However, there needs to be considerable coordination among the members of the groups involved in delivering the instruction [31]. Cooperative learning approaches with medical students working in groups of four in the laboratory part have resulted in better content performance than conventional lecture-only physics courses [32]. Fused courses where the lecture and the laboratory times are not differentiated have been successful with developmental biology, and cellular and molecular biology courses [33]. In summary, after nearly three decades of teaching a variety of physics and astronomy courses with an awareness of the documented shortcomings of the traditional separation of the lecture and laboratory components, I have been investigating the potential benefits of combining these parts into a comprehensive approach to teaching the basics of wave motion. This has been facilitated by the availability of instructional technologies that can enhance the connections between the theoretical and experimental features of learning physics made possible by an integrated learning environment.

4.2 Methodology

The project emerged from the teaching of two courses required to fulfill the natural science requirement for nonscience majors at a private university. Two courses have been offered that alternate in coverage of topics from the physics of light and sound. The students are predominantly from communications and visual arts programs, as well as speech-language pathology. Data on all aspects of lecture and laboratory performance have been collected since the fall of 2014. Subsequently, a course on wave motion was developed at a public university to provide nonscience majors with an opportunity to learn the basics of the study of light and sound combined. Data from that course have been collected since the fall of 2015, on the same measures as those in the courses that treat light and sound separately. The main research questions are as follows:
1. Does teaching acoustics and optics separately, or in an integrated manner, result in significant performance differences on measures of cognitive improvement in content proficiency and laboratory skills?
2. Is there a difference in performance between (1) a control group of students enrolled in the courses during 2014–17, and (2) an experimental group of students enrolled in the courses after 2017 and using a textbook specifically written to teach wave motion as inquiry?
All three courses are self-contained by virtue of having the same instructor for the lecture and laboratory parts, and they have been designed to provide ample opportunities for project-based tasks in addition to other curriculum requirements


(e.g., exams and experiments). The lecture parts allow for considerable online assignments dealing with simulations of the phenomena discussed in class, and the laboratories predominantly integrate microcomputer-based technology in the performance of exploratory tasks. The textbook "Wave Motion as Inquiry" [34], explicitly written for those taking the courses, was introduced in the spring 2017 semester. All three courses had previously used different textbooks and other curriculum materials; however, issues with the relevance of the topics found in those sources created the need for a text specifically suited to the unique objectives of the courses. In addition, the experimental tasks became embedded in the textbook, as it is a self-contained approach to the study of wave motion. This way, there is no distinction between topic coverage in lecture and experimental tasks designed to explore (not just confirm) the concepts and ideas involved. Consequently, for instructional purposes, the separation between lecture and laboratory work has been effectively eliminated. The design of the textbook contents was primarily chosen to address the documented student difficulties due to the inhibiting roles of prior knowledge and misconceptions in two areas: (1) acoustic phenomena [35–39] and (2) knowledge of optics phenomena [40–45]. The textbook design is also innovative in making extensive use of virtual environments (simulations) to support the introduction of the various topics, allowing for a dynamic representation of the concepts and ideas, as well as their relationships. The experimental tasks predominantly involve technology-assisted settings, where data are collected using sensors and probes interfaced with laptops, iPads, and mobile devices; these provide innumerable opportunities for data collection and analysis in an exploratory manner [46]. Measures of comparison of performance with control groups (e.g., traditional introductory physics courses) have been reported that show the benefits of exploratory, as opposed to confirmatory, experiences in relating content and process skills [47–49].

4.3 Results

The data are disaggregated in several categories: first by overall performance comparing both groups, then by performance on the measures of content proficiency (exams), and finally by performance on laboratory tasks. Mean 1 represents the % score in the course on all measures of performance combined (exams and laboratory tasks) for those taking the courses during 2014–17; Mean 2 represents the equivalent % score for those taking the courses after 2017. These scores correspond to the grade earned in the courses. Table 4.1 indicates that there is no change in the overall score for the course that covers the physics of light (optics),


whereas the combined (all) and the course that covers the physics of sound (acoustics) do exhibit a moderate difference in performance. As Table 4.1 indicates, combining the physics of light and sound (wave motion) exhibits a statistically significant difference in performance between those taking the course during 2014–17 and those taking it after 2017, with a medium-sized effect (Cohen's d). The disaggregated data on content performance alone yield a statistically significant difference for all courses on quiz 1 and the final exam, although the effect sizes are modest. Performance on the second quiz does not show a particularly significant difference. As Table 4.3 indicates, dealing with the physics of light shows inconsistent results on the measures of content performance, except for the final exam. Curiously, performance decreases from the first to the second quiz, and it shows an improvement more noticeable for the Mean 2 value (the experimental group). In contrast to the previous results, Table 4.4 shows consistent improvements in performance by those taking the course on the physics of sound (acoustics); the results for the first quiz are statistically significant between experimental and control groups, with a medium effect size. The performances seem to deteriorate progressively for both groups, though. Table 4.5 indicates that the group taking the course where both light and sound are combined (wave motion) exhibits the most consistent improvement in content performance. The experimental group (Mean 2) shows a statistically significant difference in performance from the control group (Mean 1), despite an apparently progressive deterioration in performance by both groups. The data on measures of content performance for all three courses were plotted for the entire duration of data collection in Figs. 4.1–4.3. Fig. 4.1 shows a graph of the content performance on the two quizzes and the final exam for the optics course. Figs. 4.2 and 4.3 show the same information for the acoustics course and the wave motion course, respectively. These three graphs show the content performance by all groups during the time of data collection (2014–18). One can see at a glance that there are differences in the trends as expressed by the slopes of the graphs. An analysis of the differences between the three courses is included below. For the optics course, a t test of the correlation between performance and the duration of data collection shows no statistical significance for quiz 1 and quiz 2, although there is statistical significance for the final exam. For the acoustics course there is no difference in performance on all three measures. Furthermore, there is a decrease in performance for quiz 2 consistent with that for the optics course. Performance on all three content measures for the wave motion course, however, is consistently statistically significant. The structure of the content for the first six chapters covered in the textbook is included in Fig. 4.4.


FIGURE 4.1 Content performance for acoustics course.

FIGURE 4.2 Content performance for optics course.

The following summary of the topics included in the two quizzes, along with the chapters where they appear in the textbook "Wave Motion as Inquiry," shows that a likely reason for the progressive decrease in performance exhibited by most (although not all) of the three courses is the subsequent separation in topic coverage that occurs in the textbook. The first two chapters cover topics common to both the physics of light and sound, whereas beginning with Chapter 3 the emphasis shifts. Chapters 3, 4, 5, and 6 differ in emphasis


FIGURE 4.3 Content performance for wave motion course.

between the two topics. For instance, the topics of reflection, refraction, and diffraction (Chapters 3, 4, and 6) receive more emphasis for those dealing with the physics of light than for sound. By contrast, the topics of wave interference, standing waves, and harmonics (Chapter 5) receive more emphasis for those dealing with the physics of sound.

• Optics quiz 1: Wave Characteristics, Reflection and Mirrors (Chapters 1–3)
• Acoustics quiz 1: Wave Characteristics, Reflection (Chapters 1–3)
• Wave motion quiz 1: Wave Characteristics (Chapters 1 and 2)
• Optics quiz 2: Refraction, Lenses, and Diffraction (Chapters 3, 4, and 6)
• Acoustics quiz 2: Interference and Standing Waves, and Diffraction (Chapters 5 and 6)
• Wave motion quiz 2: Reflection and Mirrors, and Refraction and Lenses (Chapters 3 and 4)

The final exam is somewhat cumulative for all three courses, and most of it deals with material already covered in the quizzes; only a small amount of material is included subsequent to quiz 2. Table 4.7 lists the experimental tasks undertaken by all courses, as part of the laboratory component. All experiments require the submission of a report outlining the objectives, theoretical background, procedure, data analysis and results, and finally a section on reflections. This last section is central to the entire laboratory experience, as it deals with sources of error, and considerations of issues such as accuracy, precision, and measurement uncertainty. Students perform the experiments in groups of four and share the data; however, the reports are submitted individually, and the section on reflections is particularly scrutinized to ensure it is individually written.
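The reflections section rests on the idea that an individual reading is only an estimate of the physical quantity. A minimal sketch of reporting a mean with its standard error, using illustrative readings (not actual course data, e.g., five timings of the same pendulum period in seconds):

```python
# Repeated readings of one quantity: report mean +/- standard error,
# not any single value. The readings below are illustrative.
import statistics

readings = [1.42, 1.39, 1.44, 1.40, 1.43]
mean = statistics.mean(readings)
sem = statistics.stdev(readings) / len(readings) ** 0.5  # standard error of the mean

print(f"period = {mean:.3f} +/- {sem:.3f} s")  # prints "period = 1.416 +/- 0.009 s"
```

This is the distinction between precision (spread of the readings) and the uncertainty of the reported estimate, which shrinks as more readings are taken.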


FIGURE 4.4 List of topics covered in the textbook for the first six chapters.

As the table indicates, all experimental tasks show a significant difference in performance between the control (Mean 1) and experimental (Mean 2) groups. The effect sizes are also substantial. Consistent with the findings on content performance, the most significant differences are found in the experiments performed by


those in the acoustics and wave motion courses. The largest effect sizes are found in the experiments that made active use of microcomputer-based technology, where the data were collected and analyzed using probes and sensors. The first experimental task does not use such technology, yet the difference between the acoustics and optics students is still significant. The benefits of using the technology do not appear evident for the optics students, as none of their technology-assisted experimental task results appear in the table.

4.4 Discussion

The data indicate a statistically significant difference in content performance, shown in Tables 4.1–4.5, between the experimental (Mean 2) and the control (Mean 1) groups. As stated earlier, the difference between groups since spring 2017 is likely attributable to the textbook. While there are variations in the results for quiz 1 and quiz 2, performance on the final exam is more consistent across all groups. A likely explanation is that, since the final exam is cumulative, whatever issues there may have been on the previous quizzes appear to have been resolved for most students by the time they took the final exam. In addition, the data for the wave motion groups show a superior performance on all three

Table 4.1 Overall performance by all groups.

Group (N)          Mean 1 (±SD)   Mean 2 (±SD)   Sig. diff.   t Test   P Value   Cohen's d
All (256)          84.8 (8)       86.3 (9)       Yes          1.44     <.10      .18
Optics (94)        85.5 (6)       85.4 (9)       No                    N.S.      N.S.
Acoustics (93)     87.5 (7)       89.6 (7)       Yes          1.49     <.10      .31
Wave motion (69)   77.8 (9)       83.5 (9)       Yes          2.51     <.01      .64
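The significance tests and effect sizes in these tables can be reproduced from summary statistics alone. The sketch below computes Welch's t statistic and Cohen's d from the wave motion row of Table 4.1; the 34/35 split of the 69 students between groups is an illustrative assumption, since the per-group sizes are not listed.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic for two independent groups, from summary stats."""
    return (m2 - m1) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

def cohens_d(m1, s1, m2, s2):
    """Cohen's d using the root-mean-square of the two standard deviations."""
    pooled = math.sqrt((s1 ** 2 + s2 ** 2) / 2)
    return (m2 - m1) / pooled

# Wave motion row of Table 4.1: Mean 1 = 77.8 (SD 9), Mean 2 = 83.5 (SD 9).
# Group sizes 34 and 35 are assumed purely for illustration.
d = cohens_d(77.8, 9, 83.5, 9)
t = welch_t(77.8, 9, 34, 83.5, 9, 35)
print(round(d, 2), round(t, 2))
```

The resulting d of about 0.63 is close to the .64 reported in Table 4.1, which suggests the published values follow a pooled-SD convention of this kind.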

Table 4.2 Performance on measures of content by all three classes (optics, acoustics, and wave motion).

Group (N)       Mean 1 (±SD)   Mean 2 (±SD)   Sig. diff.   t Test   P Value   Cohen's d
Quiz 1 (253)    78.5 (14)      82.8 (13)      Yes          2.54     <.01      .32
Quiz 2 (233)    79 (13)        81 (13)        No           1.17     <.15      .15
Final (234)     77.3 (15)      82.2 (15)      Yes          2.50     <.01      .33

Table 4.3 Performance on measures of content by optics classes.

Group (N)      Mean 1 (±SD)   Mean 2 (±SD)   Sig. diff.   t Test   P Value   Cohen's d
Quiz 1 (93)    80.9 (12)      82.1 (9.8)     No           .510     >.25      .11
Quiz 2 (73)    81 (8.5)       78 (10)        No                    N.S.      N.S.
Final (73)     77.9 (13)      84.7 (12)      Yes          2.33     <.05      .54

Table 4.4 Performance on measures of content by acoustics classes.

Group (N)      Mean 1 (±SD)   Mean 2 (±SD)   Sig. diff.   t Test   P Value   Cohen's d
Quiz 1 (93)    77.8 (16)      87.6 (11)      Yes          3.46     <.001     .72
Quiz 2 (93)    84 (11)        87 (10)        Yes          1.38     <.10      .29
Final (94)     82 (13)        85 (14)        No           1.08     >.15      .22

Table 4.5 Performance on measures of content by wave motion classes.

Group (N)      Mean 1 (±SD)   Mean 2 (±SD)   Sig. diff.   t Test   P Value   Cohen's d
Quiz 1 (69)    74 (12)        79 (16)        Yes          1.33     <.10      .34
Quiz 2 (69)    67 (16)        77 (15)        Yes          2.56     <.01      .67
Final (69)     66 (17)        77 (16)        Yes          2.64     <.01      .69

Table 4.6 Trends in performance on content measures (fall 2014–18).

               Quiz 1                          Quiz 2                          Final exam
Course         Slope   Corr. t test  P Value   Slope   Corr. t test  P Value   Slope   Corr. t test  P Value
Optics         2.04    1.37          N.S.      −.38    −.234         N.S.      3.36    4.57          <.01
Acoustics      3.00    .825          N.S.      −.40    −.436         N.S.      1.95    .90           N.S.
Wave motion    3.05    8.14          <.01      5.25    2.36          <.10      7.70    7.87          <.01
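The slopes and correlation t tests in Table 4.6 come from simple linear regression of mean scores against time. A minimal sketch of that computation, using made-up yearly means purely to exercise the function (the per-term means behind Table 4.6 are not listed in the chapter):

```python
import math

def slope_and_t(x, y):
    """Least-squares slope of y on x and the t statistic for slope != 0."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx  # regression slope (points per year here)
    resid = [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)  # slope SE
    return b, b / se

# Hypothetical final-exam means for five fall terms, 2014-18:
years = [0, 1, 2, 3, 4]
means = [66, 72, 80, 88, 96]  # a steadily improving course
b, t = slope_and_t(years, means)
print(b, t)
```

A large positive t, as for the wave motion course in Table 4.6, indicates that the upward trend is unlikely to be noise.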

measures; this is confirmed by the regression analysis included. Table 4.6 also strongly suggests a definite benefit for students when the two topics, optics and acoustics, are combined, as opposed to being covered separately. While there were benefits for both courses in which the topics are covered separately, as far


Table 4.7 Performance on experimental tasks by all classes; (*) indicates use of microcomputer-based laboratory technology.

Group (N)          Experiment                          Mean 1 (±SD)   Mean 2 (±SD)   Sig. diff.   t Test   P Value   Cohen's d
Optics (113)       Accuracy and precision              86 (11)        91 (9)         Yes          2.75     <.005     .52
Acoustics (93)     Accuracy and precision              89 (8)         97 (5)         Yes          5.76     <.001     1.2
Acoustics (71)     Standing waves (*)                  91 (7)         96 (3)         Yes          3.73     <.001     .93
Acoustics (71)     Doppler effect (*)                  88 (4)         91 (6)         Yes          2.06     <.05      .53
Acoustics (48)     Tones, vowels, and telephones (*)   92 (4)         96 (4)         Yes          3.67     <.001     .95
Wave motion (48)   Tones, vowels, and telephones (*)   86 (6)         93 (3)         Yes          4.56     <.001     1.45
Wave motion (68)   Index of refraction                 85 (12)        90 (7)         Yes          2.28     <.05      .55


as content retention and performance, the students in the acoustics class appear to benefit more from the use of the textbook. Interestingly, the student evaluations of the courses collected throughout the data collection period are consistent with these results; those in the optics course seem to find the content less relevant to their needs. In terms of laboratory performance, the results are consistent with those on content performance. As Table 4.7 indicates, the statistically significant performances are found among the acoustics and wave motion courses. Clearly, the incorporation of the laboratory tasks into the textbook has made a difference in the way students integrate concepts and ideas from the lecture part with the tasks required in the experimental part. The latter is consistent with the findings, included in the review, of benefits for a variety of students when the two parts of a science course are not separated. In addition, as Table 4.7 suggests, the use of sensors and probes interfaced with other devices results in significant gains in performance when students undertake experimental tasks.

4.5 Limitations

Despite the considerable evidence of benefits for students found in this study when the two topics, light and sound, are covered together rather than separately, it should be evident that investigator bias cannot be ruled out entirely. Although the robustness of the findings is supported by the students' independent evaluations of the courses, the fact that the investigator is also the author of the textbook may be a reason to suspect a bias in seeking vindication. However, the student-centered nature of the textbook, as opposed to the traditional didactic approach, lends credence to its utility in providing distinctive benefits for its users. The use of the textbook as a tool to determine whether an integrated lecture-laboratory course that is also exploratory in its laboratory component has tangible advantages for the many students who are, in principle, afraid of science ought to be given consideration. There are many cognitive skills that students taking science courses fail to acquire, such as a level of comfort with measurement uncertainty leading to a proper understanding of accuracy and precision in results. This feature of scientific knowledge alone should be a reason to attempt to do better in our efforts to help students develop scientific literacy. One of the distinctive features of exploratory, rather than confirmatory, science activities is a particular emphasis on students' interpretive development, which is a priority in the laboratory reports. If anyone who takes a science course, especially someone unlikely to ever take another such course, does not develop such skills, that represents our failure to properly educate them.


References
[1] S. Delamont, J. Beynon, P. Atkinson, In the beginning was the Bunsen: the foundations of secondary school science, Int. J. Qual. Stud. Educ. 4 (1988) 315–328.
[2] J. Head, What can psychology contribute to science education? Sch. Sci. Rev. 63 (1982) 631–642.
[3] A. Hofstein, N. Lunetta, The role of the laboratory in science teaching: neglected aspects of research, Rev. Educ. Res. 52 (1982) 201–217.
[4] W. Keys, Aspects of Science Education, NFER-Nelson, Windsor, 1987.
[5] L. Klopfer, Learning scientific enquiry in the student laboratory, in: E. Hegarty-Hazel (Ed.), The Student Laboratory and the Science Curriculum, Routledge, London, 1990.
[6] A. Lawson, M. Abraham, J. Renner, A Theory of Instruction, Monograph No. 1, NARST, Washington, DC, 1989.
[7] R. Millar, What is scientific method and can it be taught? in: J. Wellington (Ed.), Skills and Processes in Science Education, Routledge, London, 1989.
[8] K. Tobin, Research on science laboratory activities: in pursuit of better questions and answers to improve learning, Sch. Sci. Math. 90 (1990) 403–418.
[9] S. Freeman, S.L. Eddy, M. McDonough, M.K. Smith, N. Okoroafor, H. Jordt, et al., Active learning increases student performance in science, engineering, and mathematics, Proc. Natl. Acad. Sci. U.S.A. 111 (2014) 8410–8415.
[10] E.J. Borda, A. Boudreaux, B.F. Adams, P. Frazey, S. Julin, G. Pennington, et al., Adapting a student-centered chemistry curriculum to a large-enrollment context: successes and challenges, J. Coll. Sci. Teach. 46 (5) (2017) 8–13.
[11] D. Dervić, D.S. Glamocic, A.G. Busuladžić, V. Mešić, Teaching physics with simulations: teacher-centered versus student-centered approaches, J. Balt. Sci. Educ. 17 (2) (2018) 288–299.
[12] M. Belloni, W. Christian, Physlets: Teaching Physics With Interactive Curricular Material, Pearson Education, Upper Saddle River, NJ, 2001.
[13] National Research Council, in: S.R. Singer, N.R. Nielsen, H.A. Schweingruber (Eds.), Discipline-Based Education Research: Understanding and Improving Learning in Undergraduate Science and Engineering, National Academies Press, Washington, DC, 2012.
[14] K. Finn, K. FitzPatrick, Z. Yan, Integrating lecture and laboratory in health sciences courses improves student satisfaction and performance, J. Coll. Sci. Teach. 47 (1) (2017) 66–75.
[15] K. Cummings, J. Marx, R.K. Thornton, D.E. Kuhl, Evaluating innovation in studio physics, Am. J. Phys. 67 (Suppl. 1) (1999) S38–S44.
[16] D. Gatch, Restructuring introductory physics by adapting an active learning studio model, Int. J. Sch. Teach. Learn. 4 (2) (2010). Article 14.
[17] C.M. Sorensen, A.D. Churukian, S. Maleki, D.A. Zollman, The New Studio format for instruction of introductory physics, Am. J. Phys. 74 (12) (2006) 1077–1082.
[18] S. Pillay, A. Buffler, F. Lubben, S. Allie, Effectiveness of a GUM-compliant course for teaching measurement in the introductory physics laboratory, Eur. J. Phys. 29 (3) (2008) 649.
[19] C. Wieman, Comparative cognitive task analyses of experimental science and instructional laboratory courses, Phys. Teach. 53 (2015) 349–351.


[20] B.R. Wilcox, H.J. Lewandowski, Developing skills versus reinforcing concepts in physics labs: insight from a survey of students' beliefs about experimental physics, Phys. Rev. Phys. Educ. Res. 13 (2017) 010108.
[21] N.G. Holmes, D.A. Bonn, Quantitative comparisons to promote inquiry in the introductory physics lab, Phys. Teach. 53 (2015) 352–355.
[22] T.S. Volkwyn, S. Allie, A. Buffler, F. Lubben, Impact of a conventional introductory laboratory course on the understanding of measurement, Phys. Rev. Spec. Top. Phys. Educ. Res. 4 (2008) 010108.
[23] R.T. White, The link between the laboratory and learning, Int. J. Sci. Educ. 18 (7) (1996) 761–774.
[24] C. Wieman, N.G. Holmes, Measuring the impact of an instructional laboratory on the learning of introductory physics, Am. J. Phys. 83 (2015) 972–978.
[25] M. Rogers, L.D. Keller, A. Crouse, M.F. Price, Implementing comprehensive reform of introductory physics at a primarily undergraduate institution: a longitudinal case study, J. Coll. Sci. Teach. 44 (3) (2015) 82–90.
[26] T. Feder, College-level project-based learning gains popularity, Phys. Today 70 (6) (2017) 28.
[27] M.B. Moldwin, Dorm room labs for introductory large-lecture science classes for nonscience majors, J. Coll. Sci. Teach. 47 (5) (2018) 36–41.
[28] T. De Jong, M.C. Linn, Z.C. Zacharia, Physical and virtual laboratories in science and engineering education, Science 340 (2013) 305–308.
[29] B.W. Tuckman, G.J. Kennedy, Teaching learning strategies to increase success of first-term college students, J. Exp. Educ. 79 (2011) 478–504.
[30] P. Boda, G. Weiser, Using POGILs and blended learning to challenge preconceptions of student ability in introductory chemistry, J. Coll. Sci. Teach. 48 (1) (2018) 61–67.
[31] S. Brahmia, E. Etkina, Switching students on to science, J. Coll. Sci. Teach. 31 (3) (2001) 183–187.
[32] M.J. Shahri, M. Matlabi, R. Esmaeili, M. Kianmehr, Effectiveness of teaching: Jigsaw technique vs. lecture for medical students' physics course, Bali Med. J. 6 (3) (2017) 529–533.
[33] J. Round, B. Lom, In situ teaching: fusing labs & lectures in undergraduate science courses to enhance immersion in scientific research, J. Undergrad. Neurosci. Educ. 13 (3) (2015) 206–214.
[34] F. Espinoza, Wave Motion as Inquiry: The Physics and Applications of Light and Sound, Springer, 2017.
[35] C.J. Linder, G.L. Erickson, A study of tertiary physics students' conceptualizations of sound, Int. J. Sci. Educ. 11 (1989) 491–501.
[36] C.J. Linder, University physics students' conceptualizations of factors affecting the speed of sound propagation, Int. J. Sci. Educ. 15 (6) (1993) 655–662.
[37] L. Maurines, Spontaneous reasoning on the propagation of visible mechanical signals, Int. J. Sci. Educ. 14 (3) (1992) 279–292.
[38] M.C. Wittmann, et al., Making sense of how students make sense of mechanical waves, Phys. Teach. 37 (1999) 1–8.
[39] M. Calik, M. Okur, N. Taylor, A comparison of different conceptual change pedagogies employed within the topic of sound propagation, J. Sci. Educ. Technol. 20 (2011) 729–742.


[40] L.J. Atkins, I.Y. Salter, Constructing definitions as a goal of inquiry, AIP Conf. Proc. 1289 (2010) 65.
[41] D.J. Jones, K.W. Madison, C.E. Wieman, Transforming a fourth year modern optics course using a deliberate practice framework, Phys. Rev. ST Phys. Educ. Res. 11 (2015) 020108.
[42] F. Mateycik, D.J. Wagner, J.J. Rivera, S. Jennings, Student descriptions of refraction and optical fibers, AIP Conf. Proc. 790 (2005) 169.
[43] C.M. Sorensen, D.L. McBride, N. Sanjay Rebello, Studio optics: adapting interactive engagement pedagogy to upper-division physics, Am. J. Phys. 79 (2011) 320.
[44] W. Zhang, R.G. Fuller, Combining cognitive research and multimedia for teaching light and optics, AIP Conf. Proc. 399 (1997) 871.
[45] P. Colin, L. Viennot, Using two models in optics: students' difficulties and suggestions for teaching, Am. J. Phys. 69 (S36) (2011).
[46] R. Trumper, The physics laboratory: a historical overview and future perspectives, Sci. Educ. 12 (2003) 645–670.
[47] F. Espinoza, The use of graphical analysis with microcomputer-based laboratories to implement inquiry as the primary mode of learning science, J. Educ. Technol. Syst. 35 (3) (2006–2007) 315–335.
[48] F. Espinoza, D. Quarless, An inquiry-based contextual approach as the primary mode of learning science with microcomputer-based laboratory technology, J. Educ. Technol. Syst. 38 (4) (2009–2010) 407–426.
[49] F. Espinoza, Graphical representations and the perception of motion: integrating isomorphism through kinesthesia into physics instruction, J. Comput. Math. Sci. Teach. 34 (2) (2015) 133–154.

Further reading
AAPT, Goals of the introductory physics laboratory, Phys. Teach. 35 (1997) 546–548.
J.W. Belcher, Studio physics at MIT, MIT Phys. Ann. (2001) 58–64.
J. Gaffney, E. Richards, M.B. Kustusch, L. Ding, R. Beichner, Scaling up education reform, J. Coll. Sci. Teach. 37 (2008) 18–23.
D.R. Garrison, H. Kanuka, Blended learning: uncovering its transformative potential in higher education, Internet High. Educ. 7 (2004) 95–105.
D.P. Jackson, P.W. Laws, S.V. Franklin, Explorations in Physics: An Activity-Based Approach to Understanding the World, Wiley, New York, 2003.
M.C. Wittmann, et al., Understanding and addressing student reasoning about sound, Int. J. Sci. Educ. 25 (8) (2003) 991–1013.
B.M. Zwickl, D. Hu, N. Finkelstein, H.J. Lewandowski, Model-based reasoning in the physics laboratory: framework and initial results, Phys. Rev. ST Phys. Educ. Res. 11 (2015) 020113.

CHAPTER 5

Identification of normal and abnormal brain hemorrhage on magnetic resonance images

Nita Kakhandaki and S. B. Kulkarni
SDM College of Engineering & Technology, Dharwad, India

5.1 Introduction

Magnetic resonance imaging (MRI) is one of the noninvasive medical imaging techniques used by radiologists, who examine MR images and identify abnormalities based on their experience and visual capability. These MR images are rendered with 256 or more gray levels, which cannot all be differentiated by the human eye. Hence, a computer-aided diagnostic (CAD) system is required that extracts additional information from the MRI and thereby helps the radiologist reach a correct diagnosis of the abnormality. A bleed in the brain region, called a hemorrhage, can occur between the covering of the brain and the skull, between layers of brain cells, or between membranes. Depending upon the location of the bleed, hemorrhage can be classified into the following types: subdural hemorrhage (SDH), intraventricular hemorrhage (IVH), intraparenchymal hemorrhage (IPH), subarachnoid hemorrhage (SAH), and epidural hemorrhage (EDH, traumatic). Identification of the correct position and type of hemorrhage is a challenge, and there is scope for research since existing techniques and algorithms are not very accurate or efficient. MRI can be captured in several modalities, namely, gradient recalled echo (GRE), T2-weighted, T1-weighted, and susceptibility weighted imaging, and also in different axes: sagittal (xz plane), coronal (xy plane), and axial (horizontal plane). A study of methods and techniques used for the identification of brain hemorrhage with image processing is presented in Section 5.2. The methods and algorithms used in this work are presented in Section 5.3. The performance metrics used for evaluation of the implemented methods are presented in Section 5.4. In Section 5.5 the enhancements that may be considered for future implementation are discussed.

Cognitive Informatics, Computer Modelling, and Cognitive Science, Volume 1. DOI: https://doi.org/10.1016/B978-0-12-819443-0.00005-2 © 2020 Elsevier Inc. All rights reserved.


5.2 Literature survey

Some related works by different researchers are discussed in this section. Brain stroke affects 15 million people, causing death or long-term disability, according to a study by the World Health Organization (WHO). It was observed that in a CT slice the natural contralateral symmetry becomes misaligned due to stroke. To identify the two types of stroke, infarct and hemorrhage, a unified method was designed in which the difference between the histograms of the two hemispheres of the brain was observed and wavelet-based texture information was used to differentiate between normal and acute cases. The classification performance was tested at patient and slice level [1]. Using FLAIR (fluid attenuated inversion recovery) MR images, a method was designed to identify tumor, edema, and normal tissue. To distinguish the three, composite feature vectors were extracted by applying the fuzzy C-means algorithm. An artificial neural network (ANN) was trained using the fuzzy backpropagation algorithm and validated; the converged weights of the fuzzy ANN were then used to determine whether a region was tumor or edema [2]. A more intelligent and accurate system was designed using watershed and fuzzy C-means algorithms together with a neural network. The system was intended to identify the presence of hemorrhage along with its type. The main features extracted for the ANN were the area and the number of objects. Training was done with the "nprtool" of MATLAB. Testing was done on two sets of images with different numbers of hidden neurons; the best classification was obtained with 15 hidden neurons [3]. For accurate identification of an abnormal region in the brain, a presegmentation method was proposed. Homogeneous regions were grouped based on pixel intensity so that abnormalities could be identified.
The input image was partitioned into four regions, followed by comparison of the mean and standard deviation of all regions, where it was found that the abnormal region was characterized by a higher mean and a lower standard deviation. The abnormal region could be further segmented to obtain the accurate location of the abnormality [4]. To detect brain hemorrhage and identify its types, a CAD system was designed that could classify EDH, SDH, and ICH. The dataset used for implementation was from e-radiography.net, and the WEKA tool was used for classification and testing. The binary classification problem was solved with 100% accuracy, and 92% accuracy was achieved by a neural network classifier for identifying the different types [5]. A study of infants' brain development was conducted for early detection of abnormalities and diseases; to achieve proper segmentation of tissues, a deep convolutional neural network (CNN) was used [6]. Lesions in the brain were segmented using a 3-D CNN method. This scheme was easy to compute and reduced the imbalance that occurs while segmenting images [7]. Segmentation plays a crucial role in identifying abnormality; therefore many segmentation algorithms have been implemented and compared [8].


A dual-tree complex wavelet transform and a spatially constrained k-means algorithm were framed to automatically segment the human brain. The expectation-maximization segmentation software was used to extract the brain region. The result was a noise-free, high-contrast image for better diagnosis. The method was compared with six different segmentation algorithms on the Internet Brain Segmentation Repository (IBSR) and showed better results [9]. In region-based segmentation methods the efficiency is affected by intensity inhomogeneities, and in level-set segmentation the performance degrades due to the adjustment of controlling parameters. Therefore, to overcome these issues, a hybrid method that integrated a local-region-based level-set segmentation method and fuzzy clustering was applied [10]. The detection of a hemorrhagic region is far from trivial when the clinician is not experienced and the area of the blood spill is very small. Hence, many researchers are still designing automated segmentation algorithms. For accurate identification of hemorrhage, a hybrid method that combined fuzzy C-means clustering and maximum-entropy-based thresholding was implemented. The designed method could detect SAH, IVH, and ICH [11]. The segregation of normal and abnormal brain images and the identification of abnormality is a tedious task for radiologists. Therefore, to identify brain tumors, a CAD system was designed. A level-set-based algorithm was implemented to separate the tumor from other parts of the brain, and an ANN was implemented for classification. However, the ANN was complex, and hence new schemes may be designed in the future [12]. The symmetry of the two brain hemispheres may also be used for identification of brain abnormality. To measure the symmetry, modified gray level cooccurrence matrix (GLCM) features were extracted, which, however, used a large amount of memory and were computationally complex [13].
In a sampled medical image the pixels give the details; therefore, to detect tumors, a method was proposed in which segmentation was implemented with the k-means clustering algorithm and morphological filters were applied for tumor detection. A deviation in the results was observed, as the method depended on the input data given by the user; therefore, for better results, a completely automated system is needed [14]. With the increase in abnormal brain conditions in patients, a scheme to classify brain MR images was implemented. The implementation involved wavelet energy, a support vector machine (SVM), and biogeography-based optimization. However, to improve classification accuracy, multiple slices may be considered [15]. The analysis of a brain MR image for accurate identification and analysis of abnormality is crucial; hence, removal of nonbrain tissues, called skull stripping, is required. There are many methods of skull stripping, one of which is morphology based [16]. To frame the proposed work, the observations of the literature review were compiled, and some of the algorithms were combined with modifications. Different techniques and algorithms have been proposed by many researchers, but there is a need for further improvement in accuracy and computational complexity, and in some cases the segmentation parameters need manual adjustment. To study and identify


abnormality, the boundary regions must be identified accurately. A brain MRI scan has many regions, and the pixel intensity values in each region vary slowly. Therefore, for an error-free diagnosis, automated segmentation of pathological brain tissue is mandatory. The different regions of a brain MRI scan must be identified and the geometric properties of every region estimated, for which the modified multilevel set (MMLS) segmentation method is implemented [17,18]. To extract textural features that are rotationally invariant, the minimal angular local binary pattern (MALBP) is implemented [19]. Another method for textural feature extraction is the GLCM, with which statistical moments of the intensity histogram of an input image can be extracted [20]. The features extracted may or may not contribute to describing a region, which increases computational complexity. Therefore, to retain only contributing features, optimization is often required. Conventional numerical methods of feature optimization have computational drawbacks; therefore a metaheuristic optimization technique, the cuckoo optimization algorithm, may be implemented [21]. For classification, the naïve Bayes probabilistic kernel classifier (NB-PKC) is implemented, since its evaluation process is not affected by the size of the training set [22].
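As a concrete illustration of the GLCM features mentioned above, the sketch below builds a co-occurrence matrix for a single offset (one pixel to the right) and derives the contrast and energy moments from it. The quantization to three gray levels and the choice of offset are illustrative, not values specified in this chapter.

```python
import numpy as np

def glcm(img, levels):
    """Normalized gray level co-occurrence matrix for offset (0, 1)."""
    P = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        P[i, j] += 1  # count each horizontally adjacent gray-level pair
    return P / P.sum()

img = np.array([[0, 0, 1],
                [0, 0, 1],
                [0, 2, 2]])
P = glcm(img, levels=3)
i, j = np.indices(P.shape)
contrast = float((P * (i - j) ** 2).sum())  # weighted squared level difference
energy = float((P ** 2).sum())              # sum of squared matrix entries
print(contrast, energy)
```

Contrast rises when adjacent pixels differ strongly (edges, texture), while energy rises for uniform regions; both are typical GLCM descriptors for a hemorrhage region versus normal tissue.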

5.3 Proposed work

In the proposed work, the GRE modality of the MRI scan (horizontal axis) is used for early detection of hemorrhage. The input dataset consists of 148 MRI scans of 16 patients obtained from SDM Medical College, Dharwad. This work is the first step toward the design of an automated system for hemorrhage diagnosis. The main aim of the software is to analyze the scanned input brain MRI and identify the presence of hemorrhage. The training phase uses 70% of the total samples, and the testing phase uses the remaining 30%. The input images are captured from a 1.5 T MRI machine with eight channels, a slice thickness of 5 mm per slab, a repetition time of 500 ms, and an echo time of 15.8 ms. The original image is enhanced to bring out the finer edge details of the brain structure and hemorrhage, which in turn may introduce noise. The noise is removed by a median filter. The emphasis is on the brain section; thus the outer region, the skull portion, is discarded. The defective area in the image is highlighted by binary thresholding, and the ROI (region of interest) is marked depending on the area of the defect. Around the hemorrhage area, a mask is generated, which is then segmented and classified. The proposed scheme emphasizes segmentation, feature extraction, and classification to identify the defect. Segmentation is carried out using the MMLS algorithm. The textural features are extracted using two algorithms, MALBP and GLCM, and the NB-PKC algorithm is applied for classification. The test image undergoes the same preprocessing as in the training phase, and at the classification stage it is identified as normal or abnormal. Fig. 5.1 shows the proposed flow diagram.
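The preprocessing chain described above (median filtering for noise removal, then binary thresholding to highlight the defective area) can be sketched as follows. The 3 × 3 window and the threshold value are illustrative choices, since the chapter does not fix them, and the tiny synthetic slice stands in for a real MR image.

```python
import numpy as np

def median_filter(img, k=3):
    """Simple k x k median filter; edges handled by reflection padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for r in range(h):
        for c in range(w):
            out[r, c] = np.median(padded[r:r + k, c:c + k])
    return out

def binary_threshold(img, thresh):
    """Mark candidate hemorrhage pixels as a binary mask."""
    return (img >= thresh).astype(np.uint8)

# Tiny synthetic slice: a bright blob plus one impulse-noise pixel.
slice_ = np.zeros((7, 7))
slice_[2:5, 2:5] = 0.9   # stand-in "hemorrhage" region
slice_[0, 6] = 1.0       # speckle noise, removed by the median filter
smooth = median_filter(slice_)
mask = binary_threshold(smooth, 0.5)
print(mask.sum())
```

The isolated noise pixel disappears after filtering, while the interior of the bright region survives thresholding, which is exactly the behavior the ROI-marking step relies on.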

FIGURE 5.1 Flow diagram. Training phase: input brain image → preprocessing → binary thresholding → ROI extraction and mask generation → modified multilevel set based segmentation → feature extraction (minimal angular local binary pattern and gray level co-occurrence matrix features) → feature selection by the optimal cuckoo search algorithm → best selected feature index → trained machine. Testing phase: test image → preprocessing → binary thresholding → ROI extraction → modified multilevel set based segmentation → feature extraction → selective features → naive Bayes probabilistic kernel classifier → classified output (normal or hemorrhage).
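The classifier at the end of the pipeline is the NB-PKC; since the probabilistic-kernel variant is not detailed in this chapter, the sketch below shows the plain Gaussian naive Bayes decision it builds on, using made-up two-feature training vectors in a (contrast, energy)-style feature space.

```python
import math

def gaussian_nb_train(X, y):
    """Per-class feature means and variances plus class priors."""
    model = {}
    for cls in set(y):
        rows = [x for x, label in zip(X, y) if label == cls]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        variances = [sum((v - m) ** 2 for v in col) / n + 1e-9
                     for col, m in zip(zip(*rows), means)]
        model[cls] = (means, variances, n / len(y))
    return model

def gaussian_nb_predict(model, x):
    """Class with the highest log-posterior under feature independence."""
    def log_post(means, variances, prior):
        lp = math.log(prior)
        for v, m, var in zip(x, means, variances):
            lp += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        return lp
    return max(model, key=lambda c: log_post(*model[c]))

# Hypothetical feature pairs for the two output classes of Fig. 5.1.
X = [[0.2, 0.9], [0.3, 0.8], [1.5, 0.2], [1.7, 0.3]]
y = ["normal", "normal", "hemorrhage", "hemorrhage"]
model = gaussian_nb_train(X, y)
print(gaussian_nb_predict(model, [1.6, 0.25]))
```

A kernel-based variant would replace the per-feature Gaussian likelihoods with kernel density estimates, but the decision rule (largest posterior) is the same.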


The sequence of algorithms applied in the scheme is described below.

5.3.1 Edge enhancement

Initially, the original image is taken and preprocessed. Prior to filtering, edge enhancement is implemented to avoid the loss of useful details in the image. The Laplacian formula, which sharpens the edges, is applied. The filter mask function is formulated as

∇²J = ∂²J/∂x² + ∂²J/∂y²   (5.1)

This equation is used to generate a filter mask that includes the diagonals. It is implemented as

I(x, y) = J(x, y) + m[∇²J(x, y)]   (5.2)

where J(x, y) is the input image, I(x, y) is the sharpened image, and m = −1 for the filter mask values. The histogram of the image is taken, the image is cropped to remove the light box surrounding it, and a threshold is set to binarize the image. Finally, the bottom of the image is fixed and 10–15 layers of pixels are eroded to remove the skull. Then a binary thresholding is applied to obtain the ROI and create an initial mask.
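Eqs. (5.1) and (5.2) amount to convolving with a Laplacian kernel and adding m times the result back to the image. A minimal numpy sketch, with the 8-connected (diagonal-including) kernel as the assumed mask:

```python
import numpy as np

# 8-connected Laplacian mask: Eq. (5.1) discretized with diagonal neighbors.
LAP = np.array([[1,  1, 1],
                [1, -8, 1],
                [1,  1, 1]], dtype=float)

def laplacian(img):
    """2-D correlation of img with LAP, zero-padded at the borders."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dr in range(3):
        for dc in range(3):
            out += LAP[dr, dc] * padded[dr:dr + h, dc:dc + w]
    return out

def sharpen(img, m=-1.0):
    """Eq. (5.2): I(x, y) = J(x, y) + m * Laplacian(J)(x, y)."""
    return img + m * laplacian(img)

J = np.zeros((5, 5))
J[:, 2:] = 1.0        # a vertical step edge
I = sharpen(J)
print(I)
```

On the step edge, the sharpened image overshoots on the bright side and undershoots on the dark side, which is the edge-enhancing behavior the preprocessing step relies on.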

5.3.2 Modified multilevel set segmentation algorithm

The algorithm segments an image on the basis of pixel intensity and variational boundaries, that is, pixel differences at the angles 0, 30, 45, 60, 90, 120, 135, and 180 degrees and their reverse angles. The main focus of this algorithm is to capture and analyze the motion of the curve in time.
Input:
1. Filtered image (F_Img)
2. Initial mask/extracted ROI (G_ROI)
3. Number of iterations N_Itr
Output: Segmented image.
Procedure:
Step 1: Initialize the mask as

I_Mask = sqrt(G_ROI(x)² + G_ROI(y)²) − sqrt((1 − G_ROI(x))² + (1 − G_ROI(y))²) + G_ROI − 1/2   (5.3)

Step 2: Set the initial boundary region (B_Idx) by extracting the points from the initial mask (G_ROI).
Step 3: Then, for N_Itr iterations, set the internal (P_I) and external (P_E) points for all pixels based on the conditions given in the following equations:

P_I = G_ROI if G_ROI ≥ 0, else 0   (5.4)

P_E = G_ROI if G_ROI < 0, else 0   (5.5)

Step 4: Compute and update the force from F_Img:

F = [F_Img(B_Idx) − Σ F_Img(P_I)/size(P_I)]² − [F_Img(B_Idx) − Σ F_Img(P_E)/size(P_E)]²   (5.6)

Step 5: Compute and update the curvature:

Cuv = ∂u/∂t = ∇u · I_Mask(B_Idx) + k   (5.7)

where k is a constant and u is the pixel difference at particular angles, namely, 0, 30, 45, 60, 90, 120, 135, and 180 degrees and their reverse angles. If the pixel differences are greater, the curvature is updated by 1, else by 0.
Step 6: Compute and update the energy for each iteration:

En_M = F/max(F) + α + ∂u/∂t   (5.8)

where α denotes energy minimization and @u/@t is the updated curvature level. Step 7: Estimate energy difference using the following equation: de 5 1=MaxðEnM Þ

(5.9)

Step 8: Update the boundary index by the following equation:

BIdx = BIdx + de × EnM   (5.10)

Step 9: Update the internal and external points based on the computed pixel coordinates, and apply the contour to the input image with the updated coordinates.

Step 10: Estimate the positive and negative displacements of the pixel coordinates by the following equations:

DPos = √(max(Pp², Qn²) + max(Rp², Sn²)) − 1   (5.11)

DNeg = √(max(Pn², Qp²) + max(Rn², Sp²)) − 1   (5.12)

where Qp, Qn, Pp, Pn, Sp, Sn, Rp, Rn are the eight pixel positions representing the right, left, up, down, down-left, down-right, up-left, and up-right displacements of the pixel coordinates.

Step 11: Finally, the weight of the computed contour is updated by Eq. (5.13), which is done NItr times:

Cø = Cø − de × 3D₀ × Cø/√(Cø² + 1)   (5.13)

Step 12: The finally segmented image is Sout = FImg(BIdx ≤ 0).
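One pass of the iteration loop (Steps 3–8) can be sketched in a condensed, Chan–Vese-style form. This is an illustrative reading, not the exact algorithm: the boundary-pixel force of Eq. (5.6) is simplified to region means, and the curvature term of Eq. (5.7) is omitted.

```python
import numpy as np

def level_set_step(f_img, phi, alpha=0.1):
    """f_img: filtered image; phi: current level-set function (IMask).
    Returns phi after one force/energy/boundary update."""
    inside = phi >= 0                       # internal points P_I (Eq. 5.4)
    outside = phi < 0                       # external points P_E (Eq. 5.5)
    mu_in = f_img[inside].mean() if inside.any() else 0.0
    mu_out = f_img[outside].mean() if outside.any() else 0.0
    # Region-mean force: positive where a pixel resembles the inside region,
    # so the boundary grows toward it (simplified stand-in for Eq. 5.6)
    force = (f_img - mu_out) ** 2 - (f_img - mu_in) ** 2
    en = force / (np.abs(force).max() + 1e-12) + alpha   # Eq. (5.8), no curvature
    de = 1.0 / (np.abs(en).max() + 1e-12)                # Eq. (5.9)
    return phi + de * en                                 # Eq. (5.10)
```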

5.3.3 Feature extraction algorithm

In the feature extraction phase, two algorithms are implemented to extract textural features from the segmented output image: first the MALBP algorithm and then the GLCM features.

5.3.3.1 Minimal angular local binary patterns

MALBP is used for feature discrimination. A 5 × 5 mask (to obtain a good circular neighborhood) is used for robustness against rotation and global illumination, resulting in a histogram, that is, a vector of features with 256 dimensions. In the LBP algorithm with a 3 × 3 mask, only the signs of the differences are computed for the final descriptor, and the magnitude of the differences is neglected. However, the magnitude of the differences may provide important information in neighborhoods with strong edges (dominant direction). The dominant direction may be taken as the reference in the circular neighborhood, and weights may be assigned with respect to it. Therefore, in this scheme, both the magnitude and the sign of the differences are considered for texture discrimination.

Input: Segmented image (Sout)
Output: Combined features from (1) MALBPft and (2) GLCM features

Procedure (MALBPft):

Step 1: The image is divided into blocks and the median of each block is calculated.

Step 2: The median of all the medians, Cp, is calculated using the following equation:

Cp = Median(median(Thblk))   (5.14)

Step 3: For all values of x, y in the (5, 5) circular neighborhood, check whether the block median is greater than Cp; if yes, the pixel value TPt(x, y) is set to 1, else 0.

Step 4: Initialize MALBPi,j = 1 and set N = x × y. For all values from 1 to N, if (x1 ≠ y1), calculate MALBPi,j, decrementing N with every iteration, using the following equation:

MALBPi,j = MALBPi,j × TPt(x1, y1) × 2^N   (5.15)
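Steps 1–3 (block medians thresholded against the median of medians Cp of Eq. 5.14) can be sketched as follows; the block layout is an illustrative choice.

```python
import numpy as np

def malbp_pattern(img, block=5):
    """Binary pattern T_Pt: 1 where a block's median exceeds the
    median of all block medians C_p (Eq. 5.14, Step 3)."""
    h, w = img.shape
    h, w = h - h % block, w - w % block          # trim to full blocks
    blocks = img[:h, :w].reshape(h // block, block, w // block, block)
    med = np.median(blocks, axis=(1, 3))         # median of each block (Step 1)
    cp = np.median(med)                          # median of medians (Step 2)
    return (med > cp).astype(np.uint8)           # thresholded pattern (Step 3)
```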

5.3.3.2 Gray level cooccurrence matrix features

GLCM is a method for extracting second-order statistical texture features and is used to compute scalar values from the image. It represents the linear relationship between the current image pixel i and the reference pixel j in the segmented image. In this scheme, ten scalar features are extracted.

5.3.3.2.1 Autocorrelation
It is a measure of the amount of fineness and regularity in the texture of the image and is given by the following equation:

Gac = Σi Σj (i × j) × Sout(i, j)   (5.16)

5.3.3.2.2 Contrast
It refers to the local variation in the given image and is given by the following equation:

GC = Σi Σj (i − j)² × Sout(i, j)   (5.17)

5.3.3.2.3 Correlation
Correlation is given by the following equation:

GCr = Σi Σj (i − meanx)(j − meany) × Sout(i, j) / (σx × σy)   (5.18)

where meanx, meany and σx, σy are the means and standard deviations of the row and column marginals of Sout.

5.3.3.2.4 Cluster prominence
It is a measure of asymmetry and is given by the following equation:

Gcprom = Σi Σj (i + j − meanx − meany)⁴ × Sout(i, j)   (5.19)

5.3.3.2.5 Cluster shade
It refers to the skewness of the image and is given by the following equation:


Gcshade = Σi Σj (i + j − meanx − meany)³ × Sout(i, j)   (5.20)

5.3.3.2.6 Dissimilarity
It is the variation between two nearest-neighbor pixels and is given by the following equation:

GSim = Σi Σj |i − j| × Sout(i, j)   (5.21)

5.3.3.2.7 Energy
It is the textural uniformity of the pixels in the image and is given by the following equation:

GE = Σi Σj Sout(i, j)²   (5.22)

5.3.3.2.8 Entropy

GEntro = −Σi Σj Sout(i, j) × log(Sout(i, j))   (5.23)

5.3.3.2.9 Homogeneity

GHom = Σi Σj Sout(i, j) / (1 + (i − j)²)   (5.24)

5.3.3.2.10 Maximum probability

Gmaxp = max(i,j) Sout(i, j)   (5.25)

GLCMft = {Gac, GC, GCr, Gcshade, Gcprom, GSim, GE, GEntro, GHom, Gmaxp}   (5.26)
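A numpy-only sketch of building a cooccurrence matrix (a horizontal one-pixel offset, an illustrative choice) and evaluating a few of the ten scalar features:

```python
import numpy as np

def glcm(img, levels=4):
    """Cooccurrence counts for pixel pairs (i, j) one step to the right,
    normalized to probabilities."""
    m = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()

def glcm_features(p):
    i, j = np.indices(p.shape)
    return {
        "contrast": ((i - j) ** 2 * p).sum(),           # Eq. (5.17)
        "dissimilarity": (np.abs(i - j) * p).sum(),     # Eq. (5.21)
        "energy": (p ** 2).sum(),                       # Eq. (5.22)
        "homogeneity": (p / (1 + (i - j) ** 2)).sum(),  # Eq. (5.24)
        "max_probability": p.max(),                     # Eq. (5.25)
    }
```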

The combined feature set generated is Ft = [MALBPft + GLCMft]. Next, the Bayes method is applied to the combined feature set, using the naïve assumption of conditional independence between attributes. In this scheme, the mean and standard deviation are considered. In the naïve Bayes method, only the variances of the variables for each label need to be determined, rather than the entire covariance matrix as in principal component analysis. To optimize the feature set, that is, to eliminate noncontributing features, the cuckoo search algorithm is implemented, whose optimal solutions are much better than those of particle swarm optimization and the genetic algorithm. The main emphasis is on replacing weaker features with potentially better ones. To implement it, the mean fitness value of the features is first calculated; the fitness value of each feature is then compared with the mean value, and if it is larger, the feature is kept for classification.
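The fitness-based pruning described above can be sketched as below; the per-feature fitness used here (variance) is an illustrative stand-in for the cuckoo-search fitness values of the text.

```python
import numpy as np

def select_features(feats):
    """feats: (n_samples, n_features). Keep features whose fitness
    exceeds the mean fitness; returns the kept column indices."""
    fitness = feats.var(axis=0)              # placeholder fitness per feature
    return np.where(fitness > fitness.mean())[0]
```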

5.3.3.3 Naïve Bayes-probabilistic kernel classifier

The algorithm learns a set of kernel functions that associate to an input vector a probability distribution over classes instead of a single class. The probabilities are computed based on Bayesian inference. The size of the training set does not affect the evolution.

Input: Optimized feature set (Ft)
Output: Classified label

Procedure:

Step 1: Construct a distribution matrix by Eq. (5.27), considering the size of the input feature set (Ft):

dMat(x, 1) = (1/n) Σi Ftx   (5.27)

Step 2: Update the distribution matrix using Eq. (5.28) and compute Feat(i, j) using Eq. (5.29):

dMat(y, 2) = √( Σi (Ftx − dMat(y))² / (n − 1) )   (5.28)

Feat(i, j) = exp(−(Ft − dMat(i, 1))² / (2 × dMat(i, 2)²)) / √(2π × dMat(i, 2)²)   (5.29)

Step 3: Compute the training (FeatTrn) and testing (FeatTst) features from the extracted features (Feat). Let LTrn be the label representing the training features. Initialize the number of execution cycles.

Step 4: Set the initial weights (wtx, wty) and angles (øx, øy). Set the probabilistic weights and angles to zero.

Step 5: Construct a layer Nx = FeatTrn × wtx × øx.

Step 6: Apply the objective function obfx = (e^Nx − e^−Nx)/(e^Nx + e^−Nx).

Step 7: Construct a layer Ny = FeatTrn × wty × øy.

Step 8: Apply the objective function obfy = (e^Ny − e^−Ny)/(e^Ny + e^−Ny).

Step 9: Compute the state difference using the following equation:

dNy = (1 + obfy) × (1 − obfy) × (FeatTrn(N) − obfy)   (5.30)

Step 10: Update the differential weight (dWty) and differential angle (øWty) for Ny.

Step 11: Similarly, compute the state difference using the following equation:

dNx = (1 + obfx) × (1 − obfx) × (FeatTrn(N) − obfx)   (5.31)

Step 12: Update the differential weight (dWtx) and differential angle (øWtx) for Nx.

Step 13: Estimate and update the weight Wtx = Wtx + dWtx + pWtx.

Step 14: Estimate and update the weight Wty = Wty + dWty + pWty.

Step 15: Estimate the angle øy as øy = øy + døy + pøy and update.

Step 16: Estimate the angle øx as øx = øx + døx + pøx and update.

Step 17: Compute the probabilistic weights (pWtx, pWty) and angle deviations (pøx, pøy).

5.4 Result and discussions

The proposed scheme was implemented on 87 normal and 61 abnormal images. The abnormal MR images comprised a combination of SDH, IVH, IPH, and SAH from patients aged 58–75 years. The proposed scheme and a conventional SVM were applied to the input images. The SVM uses a hyperplane to distinguish normal from abnormal images; however, if the features lie very close to the hyperplane, the classification is not accurate. The proposed scheme addresses and overcomes this limitation. The implementation was carried out as per the proposed flow diagram, and the outputs up to classification are given in Figs. 5.2–5.5. The input image considered has an IPH that was correctly classified as abnormal.

FIGURE 5.2 (A) Original input image and (B) edge enhanced image.


FIGURE 5.3 (A) Filtered image and (B) skull removed.

FIGURE 5.4 (A) Binary thresholded image and (B) generated mask.

FIGURE 5.5 (A) Segmented output and (B) MALBP pattern. MALBP, Minimal angular local binary pattern.


5.4.1 Comparative analysis between proposed NB-PKC and support vector machine The following criteria are used to compare the performance of the conventional SVM scheme and the novel NB-PKC scheme.

5.4.1.1 Precision
It refers to the positive predictive rate and represents the fraction of retrieved instances that are relevant:

Precision = TP/(TP + FP)   (5.32)

5.4.1.2 Recall
It is the fraction of relevant instances that are retrieved, based on an understanding and measure of relevance:

Recall = TP/(TP + FN)   (5.33)

5.4.1.3 Accuracy
The closeness of a measured value to the standard or known value is termed accuracy:

Acc = (TP + TN)/(P + N) = (TP + TN)/(TP + TN + FP + FN)   (5.34)

5.4.1.4 Jaccard coefficient
It is a measure of the asymmetric information in a binary variable. The segmentations of two images are a perfect match if the similarity index is 1 (or 100%):

Jaccard(A, B) = TP/(TP + FP + FN)   (5.35)

5.4.1.5 Dice coefficient
It is used to quantify the similarity index; its value must be close to 1 to indicate a perfect match between the segmentations of two images:

Dice(A, B) = 2 × TP/(2 × TP + FP + FN)   (5.36)


5.4.1.6 Kappa coefficient
It refers to the measurement of agreement between two raters who classify n items into C mutually exclusive classes. It is a standard for finding the accuracy of multivalued classification problems and a robust measure, as it accounts for agreement occurring by chance:

Kappa coefficient (k) = (Po − Pe)/(1 − Pe)   (5.37)

where Po is the observed agreement, Pe is the random agreement, Po − Pe is the agreement due to true concordance, and 1 − Pe is the residual not-random agreement.

Po = (TP + TN)/(TP + TN + FP + FN)   (5.38)

Py = ((TP + FP)/(TP + TN + FP + FN)) × ((TP + FN)/(TP + TN + FP + FN))   (5.39)

Pn = ((TN + FN)/(TP + TN + FP + FN)) × ((TN + FP)/(TP + TN + FP + FN))   (5.40)

Pe = Py + Pn   (5.41)

If the value of k > 0.80, it implies strong agreement and good accuracy. The input images with different types of hemorrhage considered for comparison are given in Figs. 5.6 and 5.7, and the comparative and performance analyses of the parameters for the NB-PKC and SVM schemes are given in Tables 5.1 and 5.2. From the results in Table 5.1, it can be inferred that the NB-PKC technique performs better than the SVM scheme.
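The metric definitions of Eqs. (5.32)–(5.37) can be checked directly against the confusion counts of Table 5.1 (Input Image 1, NB-PKC: TP = 78, TN = 61, FP = 0, FN = 9); the computed precision, recall, accuracy, and kappa reproduce the tabulated values.

```python
def metrics(tp, tn, fp, fn):
    n = tp + tn + fp + fn
    po = (tp + tn) / n                        # observed agreement
    py = ((tp + fp) / n) * ((tp + fn) / n)    # chance agreement, positive class
    pn = ((tn + fn) / n) * ((tn + fp) / n)    # chance agreement, negative class
    pe = py + pn
    return {
        "precision": tp / (tp + fp),          # Eq. (5.32)
        "recall": tp / (tp + fn),             # Eq. (5.33)
        "accuracy": po,                       # Eq. (5.34)
        "jaccard": tp / (tp + fp + fn),       # Eq. (5.35)
        "dice": 2 * tp / (2 * tp + fp + fn),  # Eq. (5.36)
        "kappa": (po - pe) / (1 - pe),        # Eq. (5.37)
    }

m = metrics(78, 61, 0, 9)
```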

FIGURE 5.6 (A) Input MR Image 1 with IPH and (B) input MR Image 2 with SDH. IPH, Intraparenchymal hemorrhage; MR, magnetic resonance; SDH, subdural hemorrhage.


FIGURE 5.7 (A) Input MR Image 3 with IPH + IVH + SAH and (B) input MR Image 4 with IPH. IPH, Intraparenchymal hemorrhage; IVH, intraventricle hemorrhage; MR, magnetic resonance; SAH, subarachnoid hemorrhage.

5.4.2 Comparison of the proposed NB-PKC and support vector machine schemes The comparison is based on their percentage sensitivity to fault acceptance rate and fault rejection rate. The criteria for comparison are described in the following subsections.

5.4.2.1 Fault rejection rate
It refers to the percentage of genuine pixels rejected by the system. The genuine pixels claim their identity in the verification phase and therefore should not be discarded. The fault rejection rate should be small compared to the fault acceptance rate.

5.4.2.2 Fault acceptance rate It refers to the percentage of pixels falsely accepted by the system.

5.4.2.3 Global acceptance rate
It refers to the percentage of genuine pixels accepted by the system and is represented by GAR = 100 − FRR. The proposed scheme is very sensitive to faults and discards faulty instances.

5.4.2.4 ROC curve
It represents a plot of the true positive rate versus the false positive rate for the different possible cut points of a diagnostic test. It depicts the tradeoff between specificity and sensitivity. If the curve follows the left-hand border more closely than the top border, the test is more accurate. If the curve comes closer to the 45-degree diagonal of the receiver operating characteristic (ROC) space, then the test is less

Table 5.1 Comparative analysis between NB-PKC and support vector machine (SVM) schemes.

                    | Image 1 (IPH)     | Image 2 (SDH)     | Image 3 (IPH+IVH+SAH) | Image 4 (IPH)
Parameters          | NB-PKC  | SVM     | NB-PKC  | SVM     | NB-PKC  | SVM         | NB-PKC  | SVM
Tp                  | 78      | 56      | 79      | 57      | 78      | 56          | 79      | 58
Tn                  | 61      | 61      | 61      | 61      | 61      | 61          | 61      | 61
Fp                  | 0       | 0       | 0       | 0       | 0       | 0           | 0       | 0
Fn                  | 9       | 31      | 8       | 30      | 9       | 31          | 8       | 29
Sensitivity         | 89.6552 | 64.3678 | 90.8046 | 65.5172 | 89.6552 | 64.3678     | 90.8046 | 66.6667
Specificity         | 100     | 100     | 100     | 100     | 100     | 100         | 100     | 100
Precision           | 100     | 100     | 100     | 100     | 100     | 100         | 100     | 100
Recall              | 89.6552 | 64.3678 | 90.8046 | 65.5172 | 89.6552 | 64.3678     | 90.8046 | 66.6667
Jaccard coefficient | 93.9189 | 79.0541 | 94.5946 | 78.7297 | 93.9189 | 79.0541     | 94.5946 | 80.4054
Dice coefficient    | 96.8641 | 88.3019 | 97.2222 | 88.7218 | 96.8641 | 88.3019     | 97.2222 | 89.1386
Kappa coefficient   | 0.8772  | 0.5982  | 0.8906  | 0.6103  | 0.8772  | 0.5982      | 0.8906  | 0.6224
Accuracy            | 93.9189 | 79.0541 | 94.5946 | 78.7297 | 93.9189 | 79.0541     | 94.5946 | 80.4054

IPH, Intraparenchymal hemorrhage; IVH, intraventricle hemorrhage; NB-PKC, naïve Bayes-probabilistic kernel classifier; SAH, subarachnoid hemorrhage; SDH, subdural hemorrhage.

Table 5.2 Performance analysis of NB-PKC and support vector machine (SVM) schemes.

                           | Image 1 (IPH)     | Image 2 (SDH)     | Image 3 (IPH+IVH+SAH) | Image 4 (IPH)
Parameters                 | NB-PKC  | SVM     | NB-PKC  | SVM     | NB-PKC  | SVM         | NB-PKC  | SVM
FRR (abnormal image class) | 6.0811  | 20.9459 | 5.4054  | 20.2703 | 6.0811  | 20.9459     | 5.4054  | 19.5946
FAR (abnormal image class) | 0       | 0       | 0       | 0       | 0       | 0           | 0       | 0
GAR (abnormal image class) | 93.9189 | 79.0541 | 94.5946 | 79.7297 | 93.9189 | 79.0541     | 94.5946 | 80.4054
FRR (normal image class)   | 0       | 0       | 0       | 0       | 0       | 0           | 0       | 0
FAR (normal image class)   | 6.0811  | 20.9459 | 5.4054  | 20.2703 | 6.0811  | 20.9459     | 5.4054  | 19.5946
GAR (normal image class)   | 100     | 100     | 100     | 100     | 100     | 100         | 100     | 100

FAR, Fault acceptance rate; FRR, fault rejection rate; GAR, global acceptance rate; IPH, intraparenchymal hemorrhage; IVH, intraventricle hemorrhage; NB-PKC, naïve Bayes-probabilistic kernel classifier; SAH, subarachnoid hemorrhage; SDH, subdural hemorrhage.


FIGURE 5.8 ROC curves (true positive rate vs. false positive rate) of NB-PKC and SVM for the input given in Fig. 5.2.

accurate. The slope of the tangent line at a cut point gives the likelihood ratio (LR) for that specific value of the test. The area under the curve represents the measure of test accuracy. The ROC curves of the proposed scheme and SVM are displayed in Fig. 5.8.
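The ROC points plotted in Fig. 5.8 are obtained by sweeping a decision threshold over classifier scores and recording (FPR, TPR) pairs; a minimal sketch with illustrative scores and labels:

```python
import numpy as np

def roc_points(scores, labels):
    """Return (FPR, TPR) pairs for each threshold, highest first."""
    pts = []
    for t in np.unique(scores)[::-1]:
        pred = scores >= t
        tp = np.sum(pred & (labels == 1)); fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1)); tn = np.sum(~pred & (labels == 0))
        pts.append((fp / (fp + tn), tp / (tp + fn)))   # (FPR, TPR)
    return pts
```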

5.5 Conclusion

The brain MR images with SDH, IPH, and IVH were classified correctly, and the proposed scheme provided a 13% improvement in classification compared to SVM. However, brain MR images with only SAH could not be identified clearly; hence, the scheme needs improvement for correct identification of all types of hemorrhage. In this work only GRE images were considered, but hemorrhage may at times be identified more clearly with other imaging modalities, which may therefore be considered in future work.

Acknowledgment The authors thank the Members of Ethical Committee and also Dr. Preetam Patil, Professor, SDM Medical College, Dharwad, for the real-time input images and extending help for verification of results.


References

[1] M. Chawla, S. Sharma, J. Sivaswamy, L.T. Kishore, A method for automatic detection and classification of stroke from brain CT images, in: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Minneapolis, MN, 2009.
[2] A. Nandita Pradhan, Intelligent computing for the analysis of brain magnetic resonance images, in: 2010 First International Conference on Integrated Intelligent Computing, 2010.
[3] M. Ushani Balasooriya, Intelligent brain hemorrhage diagnosis using artificial neural networks, in: Proceedings of the IEEE Business, Engineering & Industrial Applications Colloquium (BEIAC), 2012.
[4] M.M. Kyaw, Computer-aided detection system for hemorrhage contained region, Int. J. Comput. Sci. Inf. Technol. 1 (1) (2013) 11–16.
[5] D. Alawad, K. Al-Darabsah, M. Al-Ayoub, Automatic detection and classification of brain hemorrhages, WSEAS Trans. Comput. XII (19) (2013) 395–405.
[6] W. Zhang, R. Li, H. Deng, L. Wang, W. Lin, S. Ji, et al., Deep convolutional neural networks for multi-modality isointense infant brain image segmentation, NeuroImage 108 (2015) 214–224.
[7] K. Kamnitsas, C. Ledig, V.F.J. Newcombe, P. Simpson, D.K. Menon, D. Rueckert, et al., Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation, Med. Image Anal. 36 (2017) 61–78.
[8] N. Baraiya, H. Modi, Comparative study of different methods for brain tumor extraction from MRI images using image processing, Indian J. Sci. Technol. 9 (2016) 1–5.
[9] J. Zhang, W. Jiang, R. Wang, L. Wang, Brain MR image segmentation with spatial constrained k-mean algorithm and dual-tree complex wavelet transform, J. Med. Syst. 38 (9) (2014) 1.
[10] J. Shanbezadeh, H. Soltanian-Zadeh, M. Rastgarpour, A hybrid method based on fuzzy clustering and local region-based level set for segmentation of inhomogeneous medical images, J. Med. Syst. 38 (2014) 68.
[11] Y. Li, Q. Hu, J. Wu, Z. Chen, A hybrid approach to detection of brain hemorrhage candidates from clinical head CT scans, in: 2009 Sixth International Conference on Fuzzy Systems and Knowledge Discovery, Vol. 1, IEEE, 2009.
[12] A.K. Samanta, A.A. Khan, Computer aided diagnostic system for detection and classification of a brain tumor through MRI using level set based segmentation technique and ANN classifier, World Acad. Sci. Eng. Technol. Int. J. Med. Health Biomed. Bioeng. Pharm. Eng. 11 (6) (2017) 286–293.
[13] A.M. Hasan, F. Meziane, Automated screening of MRI brain scanning using grey level statistics, Comput. Electr. Eng. 53 (2016) 276–291.
[14] V. Ramalingam, C. Balasubramanian, S. Palanivel, A. Shenbagarajan, Tumor diagnosis in MRI brain image using ACM segmentation and ANN-LM classification techniques, Indian J. Sci. Technol. 9 (2016).
[15] Y. Zhang, J. Yang, G. Ji, Z. Dong, S. Wang, G. Yang, Automated classification of brain images using wavelet-energy and biogeography-based optimization, Multimed. Tools Appl. 75 (2016) 15601–15617.
[16] P. Kalavathi, V.B.S. Prasath, Methods on skull stripping of MRI head scan images—a review, J. Digit. Imaging 29 (3) (2016) 365–379.
[17] X.-C. Tai, T.F. Chan, A survey on multiple level set methods with applications for identifying piecewise constant functions, Int. J. Numer. Anal. Modeling 1 (1) (2004) 25–47.
[18] T. Brox, J. Weickert, Level set based image segmentation with multiple regions, Jt. Pattern Recognit. Symp. 3175 (2004) 415–423.
[19] S. Ghosh, S. Kundu, S. Ghosh, Texture classification using local binary patterns and modular PCA, Int. J. Adv. Res. Comput. Eng. Technol. (IJARCET) 5 (5) (2016) 1427–1432.
[20] P. Mohanaiah, P. Sathyanarayana, L. GuruKumar, Image texture feature extraction using GLCM, Int. J. Sci. Res. Publ. 3 (5) (2013) 1–5.
[21] E. Valian, S. Mohanna, S. Tavakoli, Improved cuckoo search algorithm for feedforward neural network training, Int. J. Artif. Intell. Appl. (IJAIA) 2 (3) (2011) 36–43.
[22] A. Ashari, I. Paryudi, A.M. Tjoa, Performance comparison between naïve Bayes, decision tree and k-nearest neighbor in searching alternative design in an energy simulation tool, Int. J. Adv. Comput. Sci. Appl. (IJACSA) 4 (11) (2013) 33–39.

Further reading

V. Anitha, S. Murugavalli, Brain tumour classification using two-tier classifier with adaptive segmentation technique, IET Comput. Vis. 10 (1) (2016) 9–17.
H.P. Bahare Shahangian, Automatic brain hemorrhage segmentation and classification in CT scan images, in: Eighth Iranian Conference on Machine Vision and Image Processing, Iran, 2013.
S. Roy, P. Maji, A simple skull stripping algorithm for brain MRI, in: 2015 Eighth International Conference on Advances in Pattern Recognition (ICAPR), Kolkata, India, 2015.


CHAPTER 6

Cognitive informatics, computer modeling and cognitive science assessment of knee osteoarthritis in radiographic images: a machine learning approach

Shivanand S. Gornale¹, Pooja U. Patravali¹ and Prakash S. Hiremath²
¹Department of Computer Science, School of Mathematics and Computing Sciences, Rani Channamma University, Belagavi, India
²Department of Master of Computer Applications, KLE Technological University, Hubballi, India

Arthritis is one of the chronic diseases related to joints. The most familiar kinds of arthritis are rheumatoid arthritis and osteoarthritis (OA). Such diseases are evaluated using radiographic images. Knee X-ray images are highly exposed to unwanted distortions that cause problems in analyzing the bone structures. To overcome such problems, various automated and semiautomated techniques have to be developed to effectually analyze the abnormalities and problems associated with the bone structures. The objective of this chapter is to evaluate and build a computer-assisted automated analysis for the proper analysis and early recognition of OA using digital knee X-ray images. The technique helps orthopedicians and radiologists to estimate the joint space width (JSW) between femur and tibia and to correlate JSW measurements with the Kellgren–Lawrence grading system for the assessment of disease severity. Since this chapter considers only the radiological assessment of the knee X-ray, it provides a good platform for other researchers to develop a technology or model associated with OA pain and clinical symptoms.

Cognitive Informatics, Computer Modelling, and Cognitive Science, Volume 1. DOI: https://doi.org/10.1016/B978-0-12-819443-0.00006-4 © 2020 Elsevier Inc. All rights reserved.


6.1 Introduction

Medical imaging has a very significant task in human society: to improve the public health of all population groups. It encompasses various imaging techniques that generate images of the human body for diagnostic and treatment purposes. Medical imaging is decisive in a variety of medical settings, as effective decisions depend on correct diagnoses. Although clinical examination may be adequate for the treatment of numerous conditions, the use of diagnostic imaging services is prevalent in analyzing and confirming many illnesses for further treatment. Medical imaging should be highly efficient and secure in decision-making and in reducing unnecessary measures; for example, surgical procedures can be avoided if simple diagnostic medical imaging is available [1,2]. The demand for highly equipped and improved medical imaging techniques is increasing day by day; medical imaging is therefore carried out to know the proper progression and extremity of a disease. Examples of medical imaging are MRI, CT, mammography, X-ray, etc. [3,4].

Osteoarthritis (OA) is one such joint disorder that needs medical imaging along with clinical examination. OA is a joint inflammation that mostly affects cartilage. Cartilage holds an important role in leg mobility: it is a protective connective tissue that allows easy glide of the bone in the joint and prevents the bones from resisting each other. In OA the upper layer of cartilage gets ruptured, which causes the bones to rub against each other, resulting in severe pain. OA largely affects the age group above 45, and women are more likely to be victims of OA than men. OA is classified into two different types [5]:

Primary OA: This is the most common type of OA, which arises without obvious predisposing influences. It generally appears insidiously and is idiopathic. The main causes of primary OA are age, obesity, and heredity.

Secondary OA: This type of arthritis occurs due to major joint distress, persistent joint damage, joint surgery, congenital dislocation, rheumatoid arthritis, and some metabolic conditions.

The important clinical symptoms of OA in the initial stage are joint pain in the knee, hip, ankle, spine, etc. The pain severity depends on the type of weather. Symptoms also include joint stiffness and tenderness early in the morning. If any of these indications are experienced, the patient has to see a doctor immediately, preferably a rheumatologist/orthopedician, for further analysis. The experts precisely observe the patient clinically and may ask for an X-ray [6,7]. Some of the important radiological parameters are cartilage disintegration, reduced joint space width (JSW), formation of osteophytes, loose bones, and bone deformation. Depending on the radiological parameters and severity level, the experts prescribe the appropriate medications for treatment of the disease.

Radiological features of OA:

• joint space narrowing, the first sign of OA, reflects loss of articular cartilage (subchondral sclerosis: increased density in radiographic images);
• bony outgrowths; and
• presence of cysts in or around the joint.

Risk factors:

• Dietary factors: Research has suggested that oxidative species may damage the articular cartilage. To protect the cartilage against such damage, antioxidants such as vitamins A, C, and E have to be used. Vitamin D also plays an important role in bone mineralization and may influence the response of bone during arthritis.
• Estrogen deficiency: The incidence of knee OA in women increases sharply after menopause due to estrogen deficiency.

Fig. 6.1 depicts the normal and affected knee. In OA the important and prime radiological parameter is the JSW. As per medical experts, the normal JSW of a healthy knee is 5.2 mm for males and 4.7 mm for females. The severity of the disease is

FIGURE 6.1 (A) Normal knee and (B) osteoarthritis knee.


Table 6.1 Kellgren–Lawrence (KL) grading system.

KL grade       | OA analysis
Normal grade   | Radiographic features of OA are absent
Doubtful grade | Doubtful OA (thinning of cartilage area)
Mild grade     | Mild OA (visible reduction in joint space width)
Moderate grade | Moderate OA (numerous osteophytes, sclerosis)
Severe grade   | Severe OA (large osteophytes, severe sclerosis, and bone deformity)

OA, Osteoarthritis.

inversely proportional to the JSW [8,9]. For validation and classification, the Kellgren–Lawrence (KL) grading system is used, which is given in Table 6.1. Analysis of X-ray images is done manually by the physician, which is a time-consuming process that is subjective and unpredictable. The complexities related to medical images make it hard to examine them effectively. A knee X-ray image is highly exposed to unwanted distortions that cause problems in analyzing the bone structures. If the knee X-ray findings are not clear, the doctor recommends that the patient undergo MRI, which is difficult for a common man to afford. The experts may also delay investigating the knee X-ray image while waiting for the MRI report and may reach a vague conclusion. Thus, to overcome these problems, various automated and semiautomated techniques have to be developed to effectually analyze the abnormalities and problems associated with the bone structures. The purpose of this chapter is to evaluate and build a computer-assisted automated diagnostic technique for the early evaluation of knee OA using radiographic images for orthopedicians and radiologists.

6.2 Machine learning approach

Machine learning concentrates on the development of computer programs that are capable of learning by themselves by extracting data. The process of machine learning starts with observations or data provided, based on experience or instructions, and converges on better future choices based on the models we provide [10]. The vital goal is to let computers learn automatically, without human help or intervention, and to amend their activities accordingly [10]. The machine learning approach for knee X-ray images processed in this chapter is mainly divided into the following steps:

Step 1: Acquisition of knee X-rays from multiple X-ray machines
Step 2: Preprocessing and enhancement to get better quality images
Step 3: Identification of the region of interest (ROI)
Step 4: Segmentation for ailment analysis

6.2 Machine learning approach

Preprocessing/ enhancement

Image acquisition

Segmentation

Feature extraction

Classification

Normal healthy knee

G0

Identification of ROI

Affected knee

G1

G2

G3

G4

FIGURE 6.2 Block diagram (G0: normal, G1: doubtful, G2: mild, G3: moderate, G4: severe).

Step 5: Feature extraction
Step 6: Classification of extracted features using various classifiers
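The six steps above can be arranged as a pipeline skeleton; every stage here is a stub with illustrative names and placeholder logic (not from the chapter), to be filled in with the techniques of the following subsections.

```python
import numpy as np

def preprocess(xray):          # Step 2: filtering and enhancement
    return xray

def find_roi(xray):            # Step 3: locate the joint-space region
    h = xray.shape[0]
    return xray[h // 3: 2 * h // 3, :]   # placeholder: central band

def extract_features(roi):     # Steps 4-5: segmentation + feature extraction
    return np.array([roi.mean(), roi.std()])

def classify(features):        # Step 6: map features to a KL grade G0-G4
    return "G0"                # placeholder classifier

def grade_knee(xray):
    return classify(extract_features(find_roi(preprocess(xray))))
```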

6.2.1 Knee X-ray analysis: a machine learning approach

The proposed methodology constitutes preprocessing, which includes enhancement and bone edge detection; the next step is to detect/identify the ROI (cartilage region) and then extract/segment the region of interest. The segmented region is then subjected to various feature computation techniques, and the resulting features are classified using different classifiers as per the KL grading. A schematic representation of the proposed algorithm is depicted in Fig. 6.2.

6.2.1.1 Image acquisition

Image acquisition can be designated as the action of fetching an image from some source, which is further processed to get a new and better image. Some of the common devices used for retrieval of images are high-resolution cameras, X-ray machines, two-dimensional (2-D) charge-coupled device cameras, etc. [11]. An overall dataset of 1173 radiographic knee images was collected from various health centers with DICOM (Digital Imaging and Communications in Medicine)


CHAPTER 6 Cognitive informatics, computer modeling

standards. Each radiographic knee image is manually classified as per KL grades by two orthopedicians, who examine 65–110 radiographic images per day.

6.2.1.2 Preprocessing/enhancement
Preprocessing enhances the important features needed to understand the image. After collection of the knee X-ray images, preprocessing techniques are applied, which are application dependent [11]. The distortions in knee X-ray images are due to secondary radiation, film processing and handling, and digitization. These distortions can be removed using filters such as mean, median, and Gaussian filters. The processed images may then undergo enhancement to improve image quality to a great extent [12]. Enhancement is an automated process based on mathematical functions: for a specific application, its purpose is to process a given image into a more appropriate and suitable one, in which important features such as edges, boundaries, and intensity are enhanced for further examination.

6.2.1.3 Identification of region of interest
Identification of the ROI of an image can be implemented by partitioning the entire image into significant structures. The required structures or objects can then be segregated from the background or foreground and scaled, estimated, or evaluated for processing [13,14]. The whole identification process is divided into four steps. First, a row-wise partition of the entire image is carried out along the axes. Second, the ROI is extracted from each segmented part [15]. Third, the area of each segmented part is estimated to detect the bone density [16]. Lastly, the region with the highest density value is extracted from the denser region and enhanced using a sine adaptive filter [15,17,18]. The steps used by the sine adaptive filter are given in Eqs. (6.1)–(6.4):

C_f = 0 : L/2 : 0    (6.1)

A_f = L/2 − F_w × L/2    (6.2)

C_f (C_f > A_f) = L/2    (6.3)

I_out = I_in sin(C_f)    (6.4)

where C_f is a filter coefficient and F_w is a filter width calculated from the characteristics of the X-ray reconstructed image. Images recorded from X-ray detectors always have the data in the center portion of the image. Therefore, by defining the filter coefficient as a sine wave, we add more weight to the data in the middle of the image. The adaptive sine filter allocates weights based on the geometrical axis of the reconstructed image [15,19]. Thus the ROI is accurately extracted and can be used further for medical examination [15,20]. The identification of the ROI is shown in Fig. 6.3A and B.
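The center-weighting idea can be sketched with NumPy. This is a minimal illustration under stated assumptions: the triangular coefficient ramp, the scaling of C_f to [0, π/2] before taking the sine, and the column-wise (rather than 2-D) weighting are one reading of Eqs. (6.1)–(6.4), not the authors' exact implementation.

```python
import numpy as np

def sine_adaptive_filter(image, f_w=0.5):
    """Sine-shaped column weighting for a reconstructed X-ray.

    Columns near the centre (where detectors concentrate data) get
    weight close to 1; columns near the border get weight close to 0.
    """
    h, w = image.shape
    half = w / 2.0
    cols = np.arange(w, dtype=float)
    c_f = half - np.abs(cols - half)            # Eq. (6.1): ramp 0 .. L/2 .. 0
    a_f = half - f_w * half                     # Eq. (6.2)
    c_f = np.where(c_f > a_f, half, c_f)        # Eq. (6.3): flat centre plateau
    weights = np.sin(c_f / half * (np.pi / 2))  # Eq. (6.4), C_f scaled to [0, pi/2]
    return image * weights                      # broadcast over rows

img = np.ones((4, 8))
out = sine_adaptive_filter(img, f_w=0.5)
```

With a flat input the output simply exposes the weight profile: zero at the borders, rising to one over the central plateau.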


FIGURE 6.3 Identification of ROI: (A) single knee ROI (left/right) and (B) ROI identification both left and right knee. ROI, Region of interest.

6.2.1.4 Segmentation
In segmentation the desired objects are separated from the background so that they can be measured, counted, or otherwise quantified [21]. The image is partitioned into its constituent parts or objects, which can then be identified individually. Through image segmentation we fragment the image into a sequence of regions, based on image characteristics that are constant within each region but vary from one region to another. The exploration of OA is implemented through distinct segmentation techniques, which may further help in the appropriate and clear diagnosis of the disease grade-wise (KL grading system). As per the literature, some of the image segmentation techniques are:

• edge-based methods
• texture-based segmentation
• thresholding methods
• Otsu's based methods
• atlas-based methods
• deformable-based methods
• active contour-based methods

6.2.1.4.1 Edge-based methods
Variations in the intensities of image pixels can represent the edges, which in turn provide uncertain data about the location of edges. Different types of edge-based methods are as follows:
Sobel edge detection: The Sobel method is one of the least complicated strategies; it predominantly emphasizes the high-frequency components to perform 2-D spatial gradient estimation [15,22]. The main advantage of this operator is that it helps in


FIGURE 6.4 Sobel convolutional kernel.

FIGURE 6.5 Segmented image using Sobel edge method.

enhancing the edge components along both axes, since it differentiates along rows/columns. As a result the edges appear brighter and denser [23]. The Sobel operator uses a standard kernel function for image smoothing, estimating the gradient magnitude of the image intensity at each pixel within the image. The convolutional kernel is shown in Fig. 6.4. The magnitude M is given in Eqs. (6.5)–(6.7), and the output of the Sobel-based segmentation method is shown in Fig. 6.5.

|M| = √(M_x² + M_y²)    (6.5)

The computation of the approximate magnitude is given as

|M| = |M_x| + |M_y|    (6.6)

The edge angle of orientation is specified by

θ = arctan(M_x / M_y)    (6.7)
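A small NumPy sketch of Sobel gradient estimation using the approximate magnitude of Eq. (6.6); explicit loops are used for clarity, whereas production code would use a convolution routine:

```python
import numpy as np

def sobel_magnitude(img):
    """Approximate gradient magnitude |M| = |Mx| + |My| (Eq. 6.6)
    computed with the 3x3 Sobel kernels of Fig. 6.4."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    mag = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            mx = np.sum(patch * kx)            # horizontal gradient
            my = np.sum(patch * ky)            # vertical gradient
            mag[i, j] = abs(mx) + abs(my)      # Eq. (6.6)
    return mag

# vertical step edge: left half dark, right half bright
img = np.zeros((5, 6)); img[:, 3:] = 1.0
mag = sobel_magnitude(img)
```

On the step edge the response is large along the transition columns and zero in the flat regions, which is exactly the brightening of edges described above.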

Prewitt edge detection: The general execution of Prewitt is similar to that of Sobel operator aside from the kernel [15,22]. It has an alternative convolution


FIGURE 6.6 Prewitt convolutional kernel.

FIGURE 6.7 Segmented image using Prewitt edge method.

mask, restricted to eight directions [24,25]. The convolution mask with the largest response is selected, based on calculations using the 3 × 3 neighboring elements for the eight directions [15]. The masks and output of Prewitt edge detection are shown in Figs. 6.6 and 6.7, respectively.

6.2.1.4.2 Texture-based segmentation
The texture-based segmentation method primarily makes use of statistical estimations that assist in differentiating the textures of an image [15,26]. When examining medical images, a slight transition in gray level between the background and foreground is observed [15]. The output of the texture-based segmentation method is depicted in Fig. 6.8.

6.2.1.4.3 Otsu's based method
The Otsu method is based on global thresholding, which depends on the gray values of the image; Otsu computes a gray-level histogram for every image to obtain the binary image. In our algorithm, however, the 2-D Otsu method was implemented, which gives appropriate segmentation results by calculating the spatial intraclass correlation within the neighborhood [15,27]. The criterion is given as follows:


FIGURE 6.8 Segmented image using texture-based method.

FIGURE 6.9 Segmented image using Otsu’s method.

σ_ω²(z) = ω₀(z)σ₀²(z) + ω₁(z)σ₁²(z)    (6.8)

where ω₀ and ω₁ are the probabilities of the two groups separated by a threshold z, and σ₀² and σ₁² are the variances of the two groups. The output of the method is shown in Fig. 6.9.
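The basic 1-D Otsu criterion of Eq. (6.8) can be sketched as follows. Note that the chapter's method is the 2-D extension, which additionally uses neighborhood means; this sketch shows only the 1-D within-class-variance criterion:

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Pick the threshold z minimising the within-class variance of
    Eq. (6.8) over a gray-level histogram (classic 1-D Otsu)."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    best_z, best_var = 0, np.inf
    for z in range(1, levels):
        w0, w1 = p[:z].sum(), p[z:].sum()       # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(z) * p[:z]).sum() / w0
        mu1 = (np.arange(z, levels) * p[z:]).sum() / w1
        var0 = (((np.arange(z) - mu0) ** 2) * p[:z]).sum() / w0
        var1 = (((np.arange(z, levels) - mu1) ** 2) * p[z:]).sum() / w1
        sigma_w = w0 * var0 + w1 * var1          # Eq. (6.8)
        if sigma_w < best_var:
            best_var, best_z = sigma_w, z
    return best_z

# two well-separated grey levels -> threshold falls between them
img = np.array([10] * 50 + [200] * 50)
z = otsu_threshold(img)
```

For this bimodal toy image the first threshold with zero within-class variance is chosen, cleanly separating the two gray levels.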

6.2.1.4.4 Active contour method
An active contour or snake is a curve defined in an image that is permitted to change its location and shape until it best satisfies predefined conditions [28,29]. It can be used to segment an object by letting the contour settle, much like a contracting snake, around the boundary of an entity [30]. A snake S is frequently modeled as a parameterized curve S(c) = (x(c), y(c)), where the parameter c varies from 0 to 1. So, S(0) gives the coordinate pair (x(0), y(0)) of the starting point, S(1) gives the end coordinates, and S(c) with 0 < c < 1 gives all intermediate point coordinates [29,31]. The development of the snake is


FIGURE 6.10 Segmented image (A): segmented binary image, (B) segmented gray image, and (C) enhanced segmented image.

demonstrated as an energy minimization process, where the total energy En to be minimized comprises three terms, given in the following equation:

En = ∫₀¹ (En_i(S(c)) + En_e(S(c)) + En_c(S(c))) dc    (6.9)

The term En_i depends on the internal forces of the snake, and the term En_e depends on external forces. The last term En_c can be used to enforce additional constraints, such as penalizing the formation of loops in the snake or penalizing undesired image surroundings [29]. The output of active contour segmentation is depicted in Fig. 6.10.
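A discrete version of the energy in Eq. (6.9) can be sketched as below. This only evaluates the energy of a candidate snake (spacing and bending terms for the internal energy, and an external term rewarding high gradient magnitude); the iterative minimization of [28–31] is omitted, and the weighting and sign conventions here are illustrative assumptions:

```python
import numpy as np

def snake_energy(points, grad_mag, alpha=1.0, beta=1.0):
    """Discrete snake energy: alpha * elasticity + beta * bending
    - sum of squared gradient magnitudes at the snake points
    (lower energy when the closed snake sits on strong edges)."""
    pts = np.asarray(points, dtype=float)
    d1 = np.roll(pts, -1, axis=0) - pts                    # first differences
    elastic = (d1 ** 2).sum()
    d2 = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)
    bending = (d2 ** 2).sum()                              # second differences
    rows = pts[:, 0].astype(int)
    cols = pts[:, 1].astype(int)
    external = -(grad_mag[rows, cols] ** 2).sum()          # edge attraction
    return alpha * elastic + beta * bending + external

grad = np.zeros((10, 10)); grad[:, 5] = 3.0   # strong edge along column 5
on_edge  = [(r, 5) for r in range(10)]
off_edge = [(r, 2) for r in range(10)]
e_on, e_off = snake_energy(on_edge, grad), snake_energy(off_edge, grad)
```

The snake lying on the edge has lower total energy than the identically shaped snake placed in a flat region, which is what drives the contraction toward object boundaries.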

6.2.1.5 Feature computation/extraction
In image processing, to obtain better accuracy, various features have to be extracted using different feature extraction techniques suited to a simple classification model. Some of the important feature extraction algorithms are:

• statistical feature extraction
• shape feature extraction
• Haralick features
• Zernike features
• histogram of oriented gradients (HOG)
• local binary pattern (LBP)

6.2.1.5.1 Statistical features
These features involve assembling, formulating, determining, and interpreting data. The mean and entropy of an image are among the common statistical features. All the statistical properties are calculated from individual pixels, each determined by looking at its 2-by-2 neighborhood. The following are some of these features:
Mean (M): computes the average of an array I using mean(I(:)).

Entropy (H) = − Σ_{i=1}^{n} p_i log(p_i)    (6.10)

Variance = (1/(N − 1)) Σ_{i=1}^{N} (A_i − μ)²    (6.11)

SD = √( (1/(N − 1)) Σ_{i=1}^{N} (A_i − μ)² )    (6.12)

Skewness = E(x − μ)³ / σ³    (6.13)

Kurtosis = E(x − μ)⁴ / σ⁴    (6.14)
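The statistical features of Eqs. (6.10)–(6.14) can be computed with NumPy as below. The histogram bin count and the logarithm base in the entropy are assumptions, as the chapter leaves them unspecified:

```python
import numpy as np

def statistical_features(img, bins=8):
    """Mean, entropy, variance, SD, skewness and kurtosis of a
    grey-level image (Eqs. 6.10-6.14)."""
    x = img.ravel().astype(float)
    mean = x.mean()
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()              # Eq. (6.10), base-2 assumed
    var = ((x - mean) ** 2).sum() / (x.size - 1)   # Eq. (6.11)
    sd = np.sqrt(var)                              # Eq. (6.12)
    sigma = x.std()                                # population sigma for moments
    skew = ((x - mean) ** 3).mean() / sigma ** 3   # Eq. (6.13)
    kurt = ((x - mean) ** 4).mean() / sigma ** 4   # Eq. (6.14)
    return mean, entropy, var, sd, skew, kurt

feats = statistical_features(np.array([[0, 0], [8, 8]]))
```

For this symmetric two-level toy image the skewness is zero and the entropy is exactly one bit.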

6.2.1.5.2 Shape features
These involve measuring the similarities between shapes represented by their features. The shape features are calculated from connected components stored as contiguous and discontiguous regions.

• Area: the number of pixels of an image region.

Major axis length = a + b    (6.15)

This signifies the length of the major axis of an object in pixels, where a and b are the distances from each focus point.

Minor axis length = √((x + y)² − m²)    (6.16)

This signifies the length of the minor axis of an object in pixels, where m is the distance between the focal points, and x and y are the distances from each focal point.

• Perimeter: the number of boundary pixels.

Equivdiameter = √(4 × Area / π)    (6.17)

Eccentricity = (distance between the foci of the ellipse) / (major axis length)    (6.18)

Euler number = (No. of objects) − (No. of holes)    (6.19)
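A few of the simpler shape features can be sketched directly; axis lengths, eccentricity, and the Euler number need ellipse fitting and connected-component analysis, which are omitted here:

```python
import numpy as np

def shape_features(mask):
    """Area, boundary-pixel perimeter and equivalent diameter
    (Eq. 6.17) of a binary region."""
    mask = mask.astype(bool)
    area = int(mask.sum())                      # pixel count of the region
    padded = np.pad(mask, 1)
    # a region pixel is on the boundary if any 4-neighbour is background
    boundary = mask & ~(padded[:-2, 1:-1] & padded[2:, 1:-1] &
                        padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int(boundary.sum())
    equiv_diameter = float(np.sqrt(4 * area / np.pi))  # Eq. (6.17)
    return area, perimeter, equiv_diameter

mask = np.zeros((6, 6), dtype=int); mask[1:5, 1:5] = 1  # 4x4 square
area, perim, eqd = shape_features(mask)
```

For the 4 × 4 square the interior 2 × 2 block is excluded from the boundary, leaving 12 perimeter pixels.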

6.2.1.5.3 Haralick features
Haralick features help in measuring the texture of the image in terms of contrast, energy, correlation, homogeneity, etc.
Contrast estimates the amount of neighborhood variation in a picture. It reveals details of the textures based on intensity change, returning the proportion of intensities between a pixel and its neighborhood. It is a measure of local variation: high local variation results in comparatively higher values. If dissimilarity is continuous for grayscale images, the contrast increases and the texture becomes coarse, and vice versa.

Contrast = Σ_{m,n} |m − n|² p(m, n)    (6.20)

Correlation determines how image pixels are associated with their neighborhood. The characteristic values range from −1 to +1, representing an ideal negative and positive relationship, respectively. The means and standard deviations of pixel X(m, n) are given as μ_m, μ_n and σ_m, σ_n, respectively. The correlation for the horizontal textures of an image is usually high compared to other directions.

Correlation = Σ_{m,n} (m − μ_m)(n − μ_n) i(m, n) / (σ_m σ_n)    (6.21)

Energy is also called uniformity or angular second moment. The more homogeneous the image, the larger the value. When the energy equals 1, the image is taken to be a constant image.

E = Σ_{a,b} i(a, b)²    (6.22)

Homogeneity estimates the similarity of pixels. A diagonal gray-level co-occurrence matrix gives a homogeneity of 1. The value becomes large if the local textures have only insignificant changes.

H = Σ_{a,b} i(a, b) / (1 + |a − b|)    (6.23)

In the abovementioned equations, a and b are the horizontal and vertical cell coordinates and i is the cell value.
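The four Haralick features of Eqs. (6.20)–(6.23) can be derived from a gray-level co-occurrence matrix (GLCM). Below is a minimal sketch using a single horizontal offset; the chapter does not specify offsets or symmetric counting, so those are assumptions:

```python
import numpy as np

def glcm_features(img, levels=4):
    """Horizontal GLCM plus the Haralick features of Eqs. (6.20)-(6.23)."""
    glcm = np.zeros((levels, levels))
    h, w = img.shape
    for r in range(h):
        for c in range(w - 1):
            glcm[img[r, c], img[r, c + 1]] += 1          # pairs 1 px to the right
    p = glcm / glcm.sum()
    m, n = np.indices(p.shape)
    contrast = ((m - n) ** 2 * p).sum()                  # Eq. (6.20)
    energy = (p ** 2).sum()                              # Eq. (6.22)
    homogeneity = (p / (1 + np.abs(m - n))).sum()        # Eq. (6.23)
    mu_m, mu_n = (m * p).sum(), (n * p).sum()
    sd_m = np.sqrt(((m - mu_m) ** 2 * p).sum())
    sd_n = np.sqrt(((n - mu_n) ** 2 * p).sum())
    correlation = ((m - mu_m) * (n - mu_n) * p).sum() / (sd_m * sd_n)  # Eq. (6.21)
    return contrast, correlation, energy, homogeneity

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
contrast, corr, energy, homog = glcm_features(img)
```

The blocky toy texture yields low contrast, high homogeneity, and a strongly positive correlation, as the prose above predicts for locally uniform textures.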

6.2.1.5.4 Zernike features
These features help in representing the properties of an image with no overlap and in describing shape characteristics. They have inherent rotational, scale, and translation invariance, which makes them self-descriptive in nature [1,5] and accurate descriptors even with relatively few data points. Zernike moments are defined as the projections of f(i, j) onto a class of polynomials, called Zernike polynomials P_nm, given in the following equation:

Zern_nm = Σ_i Σ_j f(i, j) P_nm(i, j),  √(i² + j²) ≤ 1    (6.24)

6.2.1.5.5 Histogram of oriented gradients features HOG is a function vector or feature descriptor that is beneficial in image analysis and object recognition. The HOG descriptors constitute the primary traits that


encode object characteristics into a series of numbers that may be used to distinguish objects from each other. Primarily, the HOG features are computed from blocks of size 12 × 12 pixels of the segmented knee X-ray image [30,32]. Each block inside the grid is further divided into smaller cells, in which the gradients are computed. Gradients are the rates of local intensity change at a particular image pixel position [33]. The gradient is a vector quantity that has both magnitude and direction [30,34]. The magnitude and direction of the gradient at pixel (i, j) are given in Eqs. (6.25) and (6.26), respectively.

M(i, j) = √(M_i(i, j)² + M_j(i, j)²)    (6.25)

α(i, j) = arctan(M_i(i, j) / M_j(i, j))    (6.26)

The HOG visualization of the segmented image is shown in Fig. 6.11. The pictorial representation of image gradients and the histogram of a cell with orientations is shown in Fig. 6.11C. Each pixel will now have an orientation and magnitude for the edge lying on it. Assemble a histogram for every orientation in a cell utilizing discrete

FIGURE 6.11 (A) Segmented image and (B) HOG visualization of segmented image. (C) Image gradients with orientation and cell histogram. HOG, Histogram of oriented gradient.



FIGURE 6.12 (A) Illustration of fundamental LBP operator. (B) Illustration of extended LBP operator with neighborhood (8, 1), (16, 2), and (24, 3). LBP, Local binary pattern.

orientation bins (from 0 to 360 degrees). Use the magnitude as the weight of each vote in the histogram, and link the cell histograms of each block, which are then concatenated into a HOG descriptor vector [35].
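The cell-histogram stage described above can be sketched as follows. This uses simple nearest-bin voting over 4 × 4 cells; the chapter's 12 × 12 blocks and the usual block normalization are omitted for brevity:

```python
import numpy as np

def hog_cell_histograms(img, cell=4, bins=9):
    """Per-pixel gradient magnitude/direction (Eqs. 6.25-6.26), then
    magnitude-weighted orientation histograms per cell, concatenated
    into one descriptor vector."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.sqrt(gx ** 2 + gy ** 2)                # Eq. (6.25)
    ang = np.degrees(np.arctan2(gy, gx)) % 360      # Eq. (6.26), 0-360 degrees
    h, w = img.shape
    hists = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            idx = np.minimum((a / 360 * bins).astype(int), bins - 1)
            for k, b in enumerate(idx):
                hists[i, j, b] += m[k]              # magnitude-weighted vote
    return hists.ravel()                            # concatenated descriptor

img = np.tile(np.arange(8, dtype=float), (8, 1))    # horizontal intensity ramp
desc = hog_cell_histograms(img, cell=4, bins=9)
```

On the horizontal ramp every pixel has a unit gradient pointing at 0 degrees, so all votes land in the first bin of each of the four cells.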

6.2.1.5.6 Local binary pattern
The LBP operator generates decimal numbers, called LBP codes or patterns, obtained by comparing the local parameters of an image with the neighboring pixel values. An example of LBP is illustrated in Fig. 6.12A. In LBP a 3 × 3 neighborhood is considered, in which the 8 neighboring pixels are compared with the center pixel in clockwise or anticlockwise direction; if the neighboring pixel value is greater than or equal to the center pixel, it is labeled "1", otherwise "0" [36]. As a result, an eight-digit binary number is acquired, called the LBP code, which is converted into a decimal number for further labeling [37,38]. The extended LBP operator with multiple neighborhoods is shown in Fig. 6.12B, where (P, R) represents a neighborhood of P sampling points on a circle of radius R [17,39]. The LBP representation in digital form for a pixel (x_c, y_c) is given in Eq. (6.27):

LBP_{P,R}(x_c, y_c) = Σ_{p=0}^{P−1} s(i_p − i_c) 2^p    (6.27)

where p indexes the surrounding pixels in the respective circular neighborhood with radius R, and the gray values of the central and surrounding pixels are represented by i_c and i_p [39]. The function s(x) is defined in the following equation:

s(x) = 1 if x ≥ 0, and s(x) = 0 if x < 0    (6.28)


The computed histograms of local structures upon a specific region can be used as labels for texture descriptors.
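The basic 3 × 3 LBP operator can be written directly. The sample neighborhood below is the one shown in Fig. 6.12A; the bit order (most significant bit at the top-left neighbor, read clockwise) follows that figure, whereas Eq. (6.27) numbers the bits from p = 0, which merely permutes the codes:

```python
import numpy as np

def lbp_code(patch):
    """LBP code for one 3x3 neighbourhood: compare the 8 neighbours
    (clockwise from the top-left) against the centre, emit 1 where
    neighbour >= centre (s(x) of Eq. 6.28), and read the bits as a
    binary number."""
    c = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= c else 0 for n in neighbours]
    return sum(b << (7 - i) for i, b in enumerate(bits))

# the neighbourhood of Fig. 6.12A: binary 10000111 -> decimal 135
patch = np.array([[6, 2, 1],
                  [9, 5, 4],
                  [5, 7, 3]])
code = lbp_code(patch)
```

Sliding this operator over the image and histogramming the resulting codes per region yields the texture descriptor mentioned above.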

6.2.1.6 Classification
Classification is the process of automatically categorizing the features of images into groups or classes. For a specific application, image classification is implemented using computer programs referred to as classifiers. The classifiers considered here are:

• k-nearest neighbor (k-NN)
• support vector machine (SVM)
• decision tree (DT)
• random forest (RF)
• error correcting output code (ECOC)
• linear discriminant analysis
• quadratic discriminant analysis

In this study, all of these classifiers were used in the experiments for the correct categorization of the ailment, but only a few gave promising results. The classifiers with promising results are explained in the following sections.

6.2.1.6.1 k-Nearest neighbor
k-NN assigns class labels by measuring the distance between testing and training data. k-NN classifies with a suitable k value, which finds the nearest neighbors and provides a class label to unlabeled images [40]. Depending on the type of problem, a variety of different distance measures can be implemented. In this work, city-block distance, cosine, correlation, and Euclidean distance are considered with k = 3, which is fixed empirically throughout the experiment [40]. Basically, k-NN is a nonparametric classifier that finds the minimum distance between a training sample X and testing patterns Y_j, with S = 3, using the following equation:

D_Euclidean(X, Y) = √( Σ_{j=1}^{S} (X − Y_j)(X − Y_j) )    (6.29)
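A minimal k-NN with the Euclidean distance of Eq. (6.29) and majority voting can be sketched as follows; the toy two-dimensional feature vectors stand in for the image features computed earlier:

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Label x by a majority vote among its k nearest training samples,
    with Euclidean distance as in Eq. (6.29)."""
    d = np.sqrt(((train_X - x) ** 2).sum(axis=1))   # distances to all samples
    nearest = np.argsort(d)[:k]                     # indices of k closest
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]                # majority vote

# two clusters standing in for "normal" (0) and "affected" (1)
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [1.0, 1.0], [0.9, 1.0], [1.0, 0.9]])
y = np.array([0, 0, 0, 1, 1, 1])
pred = knn_predict(X, y, np.array([0.95, 0.95]), k=3)
```

A query near the second cluster is labeled 1, and a query near the first cluster is labeled 0.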

6.2.1.6.2 Decision tree
The DT is a widely used classifier for multiclass problems. In a DT the outcome is represented by a leaf node, whereas nonleaf nodes represent decisions; the various descendants are branched according to the attributes examined at the nonleaf nodes [41]. The major step in building a DT is to figure out which attribute is to be investigated and which, among the numerous possible tests dependent on that attribute, is to be performed. In a DT the important question is to estimate the optimal partition of m components into n sections [15,42]. Every leaf is allocated one single class corresponding to the most suitable target value. Alternatively, the leaf node may hold a probability vector specifying the likelihood of the target attribute having a particular value [38]. Classification is then carried out by navigating from the root to a leaf along the path.

6.2.1.6.3 Random forest
An RF is a classifier comprising a collection of tree-structured classifiers {h(x, Θ_y), y = 1, . . .}, where the {Θ_y} are independent identically distributed random vectors and each tree casts a unit vote for the most popular class at input x [43]. RF is an adaptable machine learning technique capable of performing both regression and classification tasks. In an RF we develop numerous trees, instead of a single tree, to categorize a new element based on certain characteristics; each tree gives a grouping, and we say the tree "votes" for that class [5]. The classification having the most votes is chosen by the forest, and the average of the outputs of the different trees is taken in the case of regression.
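The "many trees vote" idea can be illustrated with a toy forest of depth-1 stumps trained on bootstrap samples. Real random forests also subsample features and grow full trees; this sketch shows only the bootstrap-plus-majority-vote mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_stump(X, y):
    """Depth-1 'tree': the single feature/threshold split with the
    fewest training errors on this sample."""
    best = None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            errors = ((X[:, f] > t).astype(int) != y).sum()
            if best is None or errors < best[0]:
                best = (errors, f, t)
    return best[1], best[2]

def forest_predict(X, y, x, n_trees=7):
    """Each stump trains on a bootstrap sample; the majority vote decides."""
    votes = 0
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), len(X))   # bootstrap resample
        f, t = train_stump(X[idx], y[idx])
        votes += int(x[f] > t)
    return int(votes * 2 > n_trees)

X = np.array([[0.], [1.], [2.], [3.], [10.], [11.], [12.], [13.]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
```

Queries far on either side of the two groups are voted unanimously, regardless of which bootstrap samples the individual stumps saw.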

6.2.1.6.4 Error correcting output code
ECOC uses two phases for classification: a coding design and a decoding design. The coding design determines the classes on which the binary learners are trained, and the decoding scheme decides how the outputs of these binary classifiers are aggregated. For example, in our study the number of classes is 5, so a one-versus-one coding design is considered with 5(5 − 1)/2 = 10 learners, denoted n1 to n10 [44]. For each binary learner, one group is positive, another is negative, and the rest are ignored. The one-versus-one coding design is given in Table 6.2, where rows represent classes and columns represent binary learners. Learner 1 (n1) trains on observations of class normal and class doubtful, with class normal as the positive class and class doubtful as the negative class [32]. The decoding phase decides how well a binary learner assigns an observation to the classes [45,46]. The decoding scheme uses a binary loss function and selects the class producing the minimum binary loss over the binary learners [23], given in the following equation:

Table 6.2 Coding matrix with 10 learners.

Class             n1   n2   n3   n4   n5   n6   n7   n8   n9   n10
Normal grade       1    1    1    1    0    0    0    0    0    0
Doubtful grade    −1    0    0    0    1    1    1    0    0    0
Mild grade         0   −1    0    0   −1    0    0    1    1    0
Moderate grade     0    0   −1    0    0   −1    0   −1    0    1
Severe grade       0    0    0   −1    0    0   −1    0   −1   −1


k̂ = argmin_x [ Σ_{n=1}^{N} |m_xn| f(m_xn, s_n) / Σ_{n=1}^{N} |m_xn| ]    (6.30)

where m_xn is the element of the coding matrix M for class x and learner n, s_n is the score of binary learner n for an observation, f is the binary loss function, and k̂ is the predicted class for the observation.
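The decoding rule of Eq. (6.30) can be sketched with the coding matrix of Table 6.2. The binary loss f below is a hinge-style loss, one common choice; the chapter does not fix a specific f, so that is an assumption:

```python
import numpy as np

# One-versus-one coding matrix of Table 6.2: rows = classes (G0..G4),
# columns = 10 binary learners.
M = np.array([
    [ 1,  1,  1,  1,  0,  0,  0,  0,  0,  0],
    [-1,  0,  0,  0,  1,  1,  1,  0,  0,  0],
    [ 0, -1,  0,  0, -1,  0,  0,  1,  1,  0],
    [ 0,  0, -1,  0,  0, -1,  0, -1,  0,  1],
    [ 0,  0,  0, -1,  0,  0, -1,  0, -1, -1],
])

def ecoc_decode(scores, M):
    """Eq. (6.30): pick the class minimising the average binary loss
    over the learners it participates in (|m| = 0 learners are ignored)."""
    losses = []
    for row in M:
        mask = np.abs(row)
        loss = np.maximum(0, 1 - row * scores) / 2   # hinge-style binary loss
        losses.append((mask * loss).sum() / mask.sum())
    return int(np.argmin(losses))

# scores where every learner involving class 0 votes strongly for it
scores = np.array([2.0, 2.0, 2.0, 2.0, 0, 0, 0, 0, 0, 0])
pred = ecoc_decode(scores, M)
```

All four learners that include the normal grade as the positive class agree, so class 0 incurs zero loss and is selected.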

6.3 Experimental analysis

To initiate the research, 1173 digital X-ray images of the knee joint with DICOM standards were collected from various health centers. Each radiographic knee image was manually allotted a KL grade by two orthopedicians, who examine 60–90 radiographic images per day. The generalized algorithm used to carry out all the experiments is as follows:

Input: Digital knee radiographic image
Output: Normal or affected knee X-ray image
Step 1: Preprocessing, including normalization and noise removal
Step 2: Enhancement, used to enhance bone edges
Step 3: Identification/segmentation of the ROI
Step 4: Computation of various features
Step 5: Grade-wise classification using the various computed features
End

6.3.1 Experiment I
The first case study was conducted on 500 knee X-ray samples. The ROI was extracted using the active contour method, and different features such as shape features, statistical features, the first four moments, Haralick features, texture analysis features, and Zernike moments were computed. In the experimentation, 40% of the samples were used for training and 60% for testing. The computed features were classified using RF and k-NN classifiers. A classification rate of 87.92% was demonstrated for the RF classifier and 88.88% for the k-NN classifier in deciding whether a given image is normal or affected. The confusion matrices of both classifiers are given in Tables 6.3 and 6.4.

Table 6.3 Confusion matrix of random forest classifier.

Classes     Normal   Affected
Normal      59       11
Affected    14       123


Table 6.4 Confusion matrix of k-nearest neighbor classifier.

Classes     Normal   Affected
Normal      61       9
Affected    14       123


FIGURE 6.13 Graphical representation of classification accuracies.

The graphical representation of the RF and k-NN classifiers is depicted in Fig. 6.13. From the abovementioned experiment it is observed that the results of the k-NN classifier are better than those of the RF classifier.

6.3.2 Experiment II
In this case study 532 digital knee X-ray images are considered, manually annotated/labeled by two different doctors as per the KL grading system. The total numbers of images annotated per KL grade by the two experts are given in Table 6.5, where G0 denotes the normal grade, G1 the doubtful grade, G2 mild, G3 moderate, and G4 the severe grade. The implementation was carried out using different segmentation methods: Prewitt, Sobel, texture-based, and Otsu's based methods [17]. Among all the methods, the Prewitt method obtained the best accuracy, 97.55%. In the experimentation, nearest neighbor values of k = 1 and k = 3 are considered. The classification accuracies of all methods for k = 1 and k = 3 are shown in Table 6.6. The comparative analysis of the proposed algorithm against the experts' opinions is pictorially represented in Figs. 6.14 and 6.15.


Table 6.5 Manual annotations by two experts based on the Kellgren–Lawrence (KL) grading system.

KL grade   Expert opinion-1   Expert opinion-2
G0         337                348
G1         139                128
G2         31                 31
G3         09                 09
G4         16                 16
Total      532                532

Table 6.6 Results of proposed methodology using k-nearest neighbor classifier for expert-1 and 2 opinions. k51

k53

Segmentation methods

Expert-1

Expert-2

Expert-1

Expert-2

Sobel edge detection (%) Otsu’s based segmentation (%) Texture-based segmentation (%) Prewitt edge detection (%)

90.60 96.61 94.17% 96.61

86.27 93.79 90.97 93.23

91.16 96.80 94.92 97.55

88.34 94.54 92.85 95.11


FIGURE 6.14 Comparative analysis of proposed algorithm with experts’ opinion.

6.3.3 Experiment III
This case study is implemented using 616 digital knee radiographic images with DICOM standards. The dataset used in this case study was also assigned KL grades manually by the experts. The radiographic images used in this case study have dimensions of 1355 × 2541. The overall counts of knee X-ray images with KL grade labels are depicted in Table 6.7.



FIGURE 6.15 Comparative analysis of proposed algorithm with experts’ opinion.

Table 6.7 Manual assignments by two experts as per Kellgren–Lawrence (KL) grades.

KL grade    Expert-1   Expert-2
N (G0)      246        257
D (G1)      252        241
MIL (G2)    58         58
MOD (G3)    25         25
S (G4)      35         35
Total       616        616

In this case study, grade-wise classification of the knee radiographic images is implemented utilizing the same segmentation method but with different features for computation. HOG features were computed and classified using the ECOC classifier, obtaining accuracies of 97.96% for KL G0, 92.85% for KL G1, 86.20% for KL G2, and 100% for both KL G3 and KL G4. The results of the experimentation are shown in Tables 6.8 and 6.9. The pictorial representation of the accuracies of the proposed algorithm against the experts' opinions is shown in Fig. 6.16. It is observed that the classification results validated by the two experts are in close agreement.

6.3.4 Experiment IV
This case study utilizes the largest dataset of all the case studies. In this work, 1173 patients' knee radiographic images are used, with dimensions of 1355 × 2541. The overall image counts with KL grades assigned by the two orthopedicians are depicted in Table 6.10, where G0 denotes the normal grade, G1 the doubtful grade, G2 mild, G3 moderate, and G4 the severe grade.


Table 6.8 Confusion matrix of classification by the proposed method as compared to medical expert-I opinion.

                  Classification by proposed method
Class             Normal   Doubtful   Mild   Moderate   Severe
Normal (G0)       241      5          0      0          0
Doubtful (G1)     18       234        0      0          0
Mild (G2)         5        3          50     0          0
Moderate (G3)     0        0          0      25         0
Severe (G4)       0        0          0      0          35

Table 6.9 Confusion matrix of classification by the proposed method as compared to medical expert-II opinion.

                  Classification by proposed method
Class             Normal   Doubtful   Mild   Moderate   Severe
Normal (G0)       252      5          0      0          0
Doubtful (G1)     18       222        1      0          0
Mild (G2)         6        3          49     0          0
Moderate (G3)     0        0          0      25         0
Severe (G4)       0        0          0      0          35

[Fig. 6.16 compares the grade-wise counts of the two medical experts with those of the proposed method, whose overall accuracies are 94.96% against expert-I and 94.64% against expert-II.]

FIGURE 6.16 Graphical representation of expert analysis with proposed methodology.

Table 6.10 Manual annotations made by two experts as per the Kellgren–Lawrence (KL) grading system.

KL grades   Annotations by expert-1   Annotations by expert-2
G0          383                       374
G1          277                       286
G2          179                       179
G3          186                       186
G4          148                       148


Table 6.11 Classification accuracy of three different classifiers.

Classifiers       Medical expert-I opinion (%)   Medical expert-II opinion (%)
k-NN              96.67                          95.82
Multiclass SVM    97.27                          96.18
Decision tree     97.86                          97.61

k-NN, k-Nearest neighbor; SVM, support vector machine.

Table 6.12 Confusion matrix of classification by decision tree as compared to medical expert-I opinion.

Class             Normal   Doubtful   Mild   Moderate   Severe
Normal (G0)       379      2          2      0          0
Doubtful (G1)     1        272        3      0          1
Mild (G2)         2        1          172    1          3
Moderate (G3)     0        0          4      179        3
Severe (G4)       0        0          2      0          146

Table 6.13 Confusion matrix of classification by decision tree as compared to medical expert-II opinion.

Class             Normal   Doubtful   Mild   Moderate   Severe
Normal (G0)       369      2          2      0          1
Doubtful (G1)     5        278        2      1          0
Mild (G2)         2        3          172    1          1
Moderate (G3)     0        0          2      181        3
Severe (G4)       0        0          3      0          145

In this case study the main objective is to extract the ROI, that is, the cartilage region based on density. The extracted region is further used for computation using HOG method and LBP. The computed features are classified using DT classifier. The accuracy of 97.86% and 97.61% is obtained with respect to expert 1 and 2 opinions. The methodology is implemented by classifying the computed gradients and local structures using three different classifiers, k-NN, multiclass SVM, and DT. The classification accuracy of three classifiers is given in Table 6.11. From Table 6.11, it is observed that the accuracies of DT classifier are high compared to other two classifiers. Thus the confusion matrix of DT classifier for medical expert-I and II is given in Tables 6.12 and 6.13.


The results of the proposed methodology are compared with the experts' opinions using the bar graphs shown in Figs. 6.17 and 6.18, respectively. The study demonstrated misclassification rates of 2.13% and 2.38% for the proposed technique based on the opinions of the two experts. The reason for the misclassification might be the subjective nature of classification by experts. However, no fully appropriate automatic methods are available for OA recognition, so the agreement between the proposed method and the experts' opinions falls short to some extent. Overall, it is observed that the classification results validated by the two experts are in close agreement.


FIGURE 6.17 Graphical representation of expert-I and proposed method analysis.


FIGURE 6.18 Graphical representation of expert-II and proposed method analysis.


6.4 Discussion

This chapter mainly focuses on the exploration and recognition of radiographic knee OA as per the KL grading system. We have studied various methods and features that are helpful in recognizing the ailment suitably. In the study we considered an overall dataset of 1173 digital knee radiographic images, classified manually by two medical experts as per the KL grading system. The experiments described in this chapter are a few examples of grade-wise disease recognition and classification. The anatomical structure of the knee incorporates the tibia, femur, patella, meniscus, cartilage, and so on, and these structures are assumed to play a vital role in causing OA. According to some analysts and researchers, these anatomical structures, along with important demographic characteristics such as gender, weight, blood group, age, CRP content, and occupation, are likewise major contributors to the cause of OA. Manual analysis of X-ray images by a physician is a time-consuming, subjective, and unpredictable process. The complexities associated with medical images make it difficult to analyze them effectively, and a knee X-ray image is very prone to unwanted distortions that cause problems in analyzing the bone structures. To overcome these problems, various automated and semiautomated techniques provide quick and efficient means to analyze the abnormalities and problems associated with the bone structures. To do so, image processing experts have to undergo detailed discussions with biologists/medical experts to understand the ailment parameters. Thus, by choosing an appropriate technique for a specific task, an efficient and robust algorithm for early detection of the disease can be developed.

6.5 Summary

This chapter focuses on the analysis and classification of OA based on the KL grading system using radiographic knee images. The experiments and algorithms presented can help doctors detect the ailment early and analyze it with ease.


CHAPTER 7

Adaptive circadian rhythm: a cognitive approach through dynamic light management

Srinagesh Maganty
Department of Electronics & Communication Engineering, PACE Institute of Technology & Sciences, Ongole, India

7.1 Introduction

Good sleep is a sign of a healthy body and a healthy mind. The efficiency of a human being is greatly affected by the sleep-wake cycle they undergo. Through the ages, most species on Earth, including human beings, have been alert in the daytime (i.e., during sunlight) and have retired (rested or slept) at night. Daytime and nighttime are referred to as the light and dark cycles of the day; these cycles are due to the Earth's rotation on its axis. The cognitive perception of these light and dark cycles, which leads the brain to be alert or to retire, is the point of interest in this chapter. The reader will learn about the effect of light on the brain's mechanism for moving between wake and sleep modes. The following topics are discussed: the circadian rhythm; melatonin, the hormone that affects brain alertness; the effect of light on melatonin secretion; sleep-management techniques using dynamic white lighting; and the role of LED (light-emitting diode) lights in indoor and outdoor applications to enhance human alertness during the dark cycles. A brief note on research and product development around SunLike LEDs is also included.

7.1.1 Circadian clock and circadian rhythm

Sunlight, as well as artificial lighting, has a great impact on everyday human life. The rotation of the Earth on its axis makes a day, and the Earth's revolution around the Sun in an elliptical orbit brings the seasons. Nature's clock provides day and night cycles with a total duration of 24 hours, and this gives rise to a circadian clock (biological clock) in living organisms, alternatively referred to as the master body clock. The synchronization of nature's clock in tandem with the circadian clock is referred to as the circadian rhythm. The circadian clock, or circadian oscillator, makes it possible for most living organisms to organize their biology and behavior in accordance with the daily environmental changes of the day-night cycle. The term circadian derives from the Latin circa (about) diem (a day), because when organisms are taken away from external

Cognitive Informatics, Computer Modelling, and Cognitive Science, Volume 1. DOI: https://doi.org/10.1016/B978-0-12-819443-0.00007-6 © 2020 Elsevier Inc. All rights reserved.


FIGURE 7.1 Features of the human circadian (24 h) biological clock. Courtesy: Extract from International Conference on International Lighting Riegens.

cues (such as the day-night cycle), their internal clocks do not match a 24-hour duration exactly. For example, for humans kept in a laboratory under constant low light, the free-running period averages around 24.2 h/day rather than exactly 24 hours; hence the term circadian (Fig. 7.1). Human behavior and physiology change within a 24-hour period, and the mammalian circadian system is kept in harmony by endogenous clock genes. Natural light plays an imperative role in the human circadian rhythm and in the production of melatonin. The melatonin hormone regulates the sleep-wake cycle and plays a role in the immune system of animals and humans. Exposure to light at night, particularly blue-rich light, suppresses melatonin production. Proper indoor lighting can therefore be used to mold the circadian rhythm: research studies state that manipulating the circadian rhythm through LED lighting improves alertness and attentiveness among people working during the dark cycle of the day (Fig. 7.2).

The pineal gland is a tiny gland in the brain where the melatonin hormone is produced; melatonin is released in response to darkness and is hence also referred to as the "darkness hormone." Melatonin originates circadian and seasonal signals in mammals. It is not simply stored in the pineal gland but is released into the bloodstream, from where it can spread to all body tissues. It is worth noting that darkness stimulates the pineal gland to produce melatonin, whereas light suppresses this mechanism. It is clinically proven that the release and secretion of melatonin regulate the sleep-wake cycles, and, as already discussed, melatonin regulation is controlled by light, which is nothing but radiation in the wavelengths of the visible spectrum of solar radiation. The timing (clocking) of the release of this neurohormone is controlled by the suprachiasmatic nucleus (SCN), also known as the biological clock or master pacemaker.
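The 24.2-hour free-running period mentioned above implies a small daily phase drift whenever no external light cue resets the clock. A minimal numeric sketch (the figures are the averages quoted in the text, used here purely for illustration):

```python
# Illustrative sketch: phase drift of a free-running circadian clock
# with an intrinsic period of 24.2 h versus the 24 h solar day.
INTRINSIC_PERIOD_H = 24.2  # average free-running period under constant dim light
SOLAR_DAY_H = 24.0

def phase_drift(days):
    """Accumulated phase lag (hours) after `days` without light cues."""
    return days * (INTRINSIC_PERIOD_H - SOLAR_DAY_H)

for d in (1, 7, 30):
    print(f"day {d:2d}: clock lags solar time by {phase_drift(d):.1f} h")
```

After a month in isolation the internal clock would lag solar time by roughly 6 hours, which is why daily light exposure (entrainment via the SCN) is needed to keep the two clocks synchronized.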


FIGURE 7.2 Perception of light and radiation and the melatonin hormone production and secretion in the pineal gland. Courtesy Koch BC, Nagtegaal JE, Kerkhof GA, ter Wee PM (2009). Circadian sleep wake rhythm disturbances in end-stage renal disease. Nat. Rev. Nephrol. 5, 2009, 407 416.

7.1.2 Perception of the eye as a visual and nonvisual information sensor

Any image or visual information is conveyed to the brain through the eye: light falling on an object creates an image on the retina, which is passed to the brain. Whether the object appears as a bright or a dull image depends on the intensity of the light to which the object is exposed, and the intensity depends on the wavelength of the radiation. Radiation information is nonvisual information sent to the brain through the eye, and it provides the signals to the pineal gland for the secretion or release of melatonin. The eye as a sensor can thus also be termed an "ocular photoreceptor"; its function and mechanism are explained in the next section.

7.2 Photoreceptors in the eye

A photoreceptor is a sensory transduction neuronal cell located within the eye, which receives light information and converts (transduces) photons into bioelectrical signals for processing in the brain. There are three major photoreceptor types located within the eye:

Rods—Vision at low light (scotopic vision) is generally processed by rods; approximately 120 million rods are available in each eye, and they play a role in circadian entrainment.


Cones—Color vision, along with attention to fine visual detail, is usually managed by cones; approximately 6 million cones are embedded in each eye to carry out color management, generating bioelectric signals according to color.

Intrinsically photosensitive retinal ganglion cells (ipRGCs)—ipRGCs are recently discovered photoreceptors that play a major role in collecting specific wavelength-dependent light information and transducing that signal through a process called ocular phototransduction. The master biological clock (SCN) makes use of this decisive information for circadian system processing and entrainment of the circadian rhythm. These cells also take part in the stress response and the pupillary light reflex. Approximately 40,000 ipRGCs are present in every human eye.

It is scientifically and clinically established that the human circadian rhythm is affected by nonvisual photoreceptors in the retina, with a response function peaking around 460 nm in the blue portion of the spectrum (see Fig. 7.3); exposure to blue-rich light at night suppresses melatonin production. Optimizing light production in indoor environments shapes the circadian rhythm, producing several health benefits. Hence, this chapter goes on to study LEDs and the circuits that can shift the circadian rhythm to benefit indoor and outdoor night shift workers (Fig. 7.4).

Lighting can be either outdoor or indoor. For outdoor lighting, especially streetlights and industrial lighting, different types of lights are in use, such as fluorescent lights, high-discharge mercury lamps, and sodium vapor lamps. All these lights provide either white-rich or yellow-rich lighting in the wavelength range of 550-650 nm, but this range has very little or no effect on the secretion of melatonin.

The visible range of the solar spectrum spans approximately 400-700 nm. The 440-460 nm band is safer and ideal for affecting the secretion of the melatonin hormone, and thus for shifting the circadian rhythm. It is important to note that radiation with wavelengths below 440 nm, approaching the ultraviolet (UV) range, may damage the eye. Hence, the LED manufacturer must judiciously choose wavelengths precisely between 450 and 460 nm for the light emitted from the LEDs. This 440-460 nm band is often referred to as the blue-rich white range (Fig. 7.5).

From the earlier discussion, it is evident that a well-designed indoor or outdoor LED light is the best bet for providing highly energy-efficient and adaptive circadian lighting for human benefit.
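The claim that 550-650 nm sources barely affect melatonin while 460 nm sources do can be made concrete with a toy sensitivity model. The Gaussian below, peaking at 460 nm, is an illustrative approximation only; its 40 nm width is an assumption, not the published circadian sensitivity function:

```python
import math

def circadian_sensitivity(wavelength_nm, peak=460.0, sigma=40.0):
    """Toy Gaussian approximation of the nonvisual (melanopic)
    sensitivity curve peaking near 460 nm. The width `sigma` is an
    illustrative assumption, not a measured value."""
    return math.exp(-((wavelength_nm - peak) ** 2) / (2 * sigma ** 2))

sources = {
    "blue-rich white LED (460 nm)": 460,
    "high-pressure sodium (589 nm)": 589,
    "yellow-rich lamp (620 nm)": 620,
}
for name, wl in sources.items():
    print(f"{name}: relative circadian weight {circadian_sensitivity(wl):.4f}")
```

Even this crude model reproduces the qualitative point of the text: the yellow-rich sources land orders of magnitude below the blue-rich LED in circadian weighting.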

7.2.1 Light-emitting diodes

FIGURE 7.3 Human photopic and circadian sensitivity curves displayed against a typical blue-rich LED light. LED, Light-emitting diode. Courtesy International Dark-Sky Association.

FIGURE 7.4 (A) Action spectra of circadian and photopic light and (B) visual receptors in the retina have different sensitivities than melanopsin receptors. Courtesy Mark S. Rea.

FIGURE 7.5 Typical spectral power distributions of HPS, ceramic metal halide, and white LED. HPS, High-pressure sodium vapor lamp; LED, light-emitting diode. Courtesy International Dark-Sky Association.

LEDs are semiconductor devices that produce illumination when they are forward biased with a proper DC voltage. The diode elements are encapsulated in plain or prismatic glass and sealed with an appropriate gas fill to set the color of the light. The intensity of the illumination depends on the forward current flowing through the diode. Lamps made with LEDs can provide very energy-efficient lighting and produce illumination close to natural light, which can mimic daylight at night, giving circadian comfort to night shift workmen. The blue LED, invented by professors Akasaki, Amano, and Nakamura, won the Nobel Prize in 2014 and has brought a revolution in the LED lighting industry toward energy efficiency, human well-being, and adaptive circadian rhythm (Fig. 7.6).

FIGURE 7.6 (A) Working of LED semiconductor and (B) basic construction of LED. LED, Light-emitting diode. Courtesy Physics and Radio Electronics.

The important terminology associated with LEDs used for indoor and outdoor lighting is as follows:

Power: The electrical power consumed by the LED lamp to produce light (unit: W).

Lumens: One lumen equals the quantity of light emitted per second into a unit solid angle of one steradian from an isotropic source of one candela. It provides information about the brightness of objects apparent to the human eye; the illuminance at a surface is measured in lux (lumens per square meter).

Color rendering index (CRI): CRI is the measure of the capability of a light source to reveal the colors of a range of objects faithfully in comparison to an ideal or natural light source. Light sources with a high CRI are desirable in color-critical applications. CRI is measured on a scale from 0 to 100.

Correlated color temperature (CCT): CCT describes the color appearance the light source gives to the eye. The color of the light is measured by referring to its color temperature, expressed in kelvins (K). LED lamps are broadly categorized into three CCT segments: LED lights around 2700K are referred to as warm light, CCT in the range of 4000-5000K is considered normal white, and LED lights with CCT above 6000K are treated as cool white.

Efficacy: A metric giving the ratio of the consolidated lumen output of the luminaire to the quantity of electricity required to power the fixture (lumens per watt). It factors in everything, including the integrated LEDs and the other electronic devices in the fixture, so a luminaire may have a lower efficacy than the LED chip itself.

Circadian stimulus (CS): A metric quantifying the effectiveness of light sources in activating the circadian system; it takes into account the response of all types of photoreceptors (rods, cones, and ipRGCs). CS is equivalent to the percent melatonin suppression following 1 hour of exposure to the light source.

The abovementioned parameters are required to design an LED lamp for an adaptive circadian rhythm. Sleep, referred to as a rest session for the brain, can be divided into different stages, and it takes 90 to 120 minutes for the body to go into deep sleep, which finally gives freshness to the brain and body. The EEG record of a subject at the various stages under a normal dark cycle is shown in Fig. 7.7.
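A small worked example of the efficacy and CCT definitions above; the 4000 lm / 32 W luminaire figures are hypothetical:

```python
def efficacy_lm_per_w(lumens, watts):
    """Luminous efficacy: lumen output divided by electrical power consumed."""
    return lumens / watts

def cct_category(cct_k):
    """Classify a lamp by the CCT bands given in the text."""
    if cct_k <= 2700:
        return "warm light"
    if 4000 <= cct_k <= 5000:
        return "normal white"
    if cct_k > 6000:
        return "cool white"
    return "intermediate"

# Hypothetical luminaire: 4000 lm output drawing 32 W at 6500K.
print(efficacy_lm_per_w(4000, 32))  # 125.0 lm/W
print(cct_category(6500))           # cool white
```

Note that the efficacy is computed for the whole fixture, so driver losses would lower it below the bare-chip figure, exactly as the definition above warns.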


FIGURE 7.7 Wave patterns of sleep stages.

The different stages are outlined here:

Stage 1: Light transitional sleep; in this stage drowsiness and the onset of deep sleep occur.
Stage 2: A more stable sleep stage; in this stage chemicals block the senses and make it difficult to be woken.
Stage 3: Deep sleep stage; during this stage growth hormone is released.
Stage 4: This stage is referred to as the rapid eye movement (REM) stage; it revitalizes the body and brain, and intense dreams occur.

Ambient light and temperature have a great influence on sleep: low illumination of around 5 lx without UV radiation and a temperature around 25°C are ideal for falling asleep quickly. For alertness, a luminaire of 1000 lx in indoor lighting with wavelengths around 425-460 nm and a CCT of around 5000-6000K is preferable; this combination is experimentally proven to suppress the melatonin hormone in the pineal gland and provides the best alertness to the inhabitants of the room. Indoor LED luminaires and control rooms for night shift workers in factories and process plants, fitted with LED lights that meet the above specification, are proven to be ideally suited for night shift workmen to maintain alertness during the dark cycles of the day. Due to globalization, productivity is very important in the current business world, so there is a real need to design LED lights that meet adaptive circadian requirements. Various lamp and luminaire manufacturers are concentrating on the development of lights for human well-being under global working challenges. Lighting architects and medical experts are working together in the R&D divisions of medical institutes and LED-manufacturing companies to design light sources that meet the requirements of circadian rhythm manipulation. It is also possible to design driver circuits for LED lamps that control the illumination either manually or automatically (with climatic feedback), leading to lighting for well-being and human efficiency. To provide adaptive circadian lighting, design architects are working on dynamic lighting and controls that support the circadian wellness of occupants. Major LED-manufacturing firms such as Nichia, Bridgelux, and Seoul Semiconductor are producing LED devices with sun-like radiation to benefit users. Various studies conducted by lighting research institutions and by sleep-management and health research institutions have proved that well-calibrated and controlled LED lighting is effective for the circadian system (circadian-effective light); such personal light exposures in office workers have shown a remarkable effect on their sleep and mood. A CS score of about 0.3 at midday is optimal for maximum productivity, and about 0.1 in the evening for relaxation. From Fig. 7.8 it is evident that, by managing the CCT and illumination of the lighting (easily and economically possible with LED lamps and controlled driver circuits), melatonin release can be manipulated to enhance the alertness of night shift workers.
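A dynamic light-management controller built around these CS targets might be sketched as follows. The hour boundaries and the CCT/lux mappings are assumptions for illustration, not measured prescriptions; only the 0.3 midday and 0.1 evening CS targets come from the text:

```python
def light_setpoint(hour):
    """Illustrative dynamic-lighting schedule aiming at a circadian
    stimulus (CS) of about 0.3 at midday and about 0.1 in the evening.
    CCT and lux values here are assumed, not published prescriptions."""
    if 9 <= hour < 15:    # midday: bright, cool light for alertness
        return {"cct_k": 6000, "lux": 1000, "cs_target": 0.3}
    if 15 <= hour < 19:   # afternoon: taper down
        return {"cct_k": 4500, "lux": 500, "cs_target": 0.2}
    if 19 <= hour < 23:   # evening: warm, dim light for relaxation
        return {"cct_k": 2700, "lux": 100, "cs_target": 0.1}
    return {"cct_k": 2700, "lux": 5, "cs_target": 0.0}  # night: near dark

print(light_setpoint(12))
print(light_setpoint(21))
```

For night shift applications the schedule would simply be phase-shifted, so that the high-CS setpoint falls during the worker's dark-cycle shift.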
Proper selection of LEDs whose radiation will not affect human health, together with a well-calibrated LED driver circuit, are the important parameters in designing a luminaire for any application that requires an adaptive circadian rhythm, which in turn benefits the productivity of employees on the night shift. However, it is essential for human beings to have a proper rest cycle to maintain a well-balanced sleep-wake cycle. So it is also very important to simulate a dark cycle for night shift workmen, to manage melatonin release during the daytime when they rest. This can be achieved by preparing a dark room maintaining a well-balanced illumination below 30 lm in the spectral range of 500-600 nm, or by wearing dark goggles with UV polarizers. Survey results have proved that using bright light during night hours and wearing dark goggles in the daytime gives night shift workmen a better phase shift in the circadian rhythm than bright light at night combined with resting in a dark room during the daytime. It is also noteworthy that it is not the intensity of the light but the radiation emitted from it that matters for controlling melatonin release. Ambient temperature is another factor that plays a major role in disturbing the sleep-wake cycle, but most of this research has been carried out in indoor applications where the temperature is controlled and kept constant, so that the effect of temperature can be neglected.


FIGURE 7.8 Effect of illumination on the melatonin suppression and circadian stimulus. Courtesy Rea et al. (2005, 2011) (Lighting Research Centre).

7.3 SunLike light-emitting diodes

LEDs can be manufactured to emit radiation equivalent to the visible spectrum of solar radiation, and fine tuning can be done to design a well-established circadian light (CLA). A brief write-up based on the data sheets of Seoul Semiconductor follows. Radiation toward the UV end of the spectrum affects melatonin secretion and can manipulate the circadian rhythm; however, radiation with wavelengths below 425 nm will have an adverse effect on the eye, and the eyes may be damaged permanently if exposed to such wavelengths over a long period. Most commercially available LEDs are blue-rich white LEDs that may not be suitable for circadian lighting. LED manufacturers must replace the blue pump and convert it to violet so that the required circadian lighting can be ensured; Seoul Semiconductor is one such manufacturer, using violet LEDs to achieve a SunLike visible spectrum (Fig. 7.9).

7.4 Data sheet

7.4.1 A case study at an educational campus

A survey was conducted at a higher educational institution with an integrated campus and hostel facilities, to record the effect of a circadian phase shift on brain alertness obtained by artificially manipulating the natural day and night cycle.


FIGURE 7.9 SunLike LED spectral distribution of light emission and eye response to SunLike LED data sheet. LED, Light-emitting diode.

Four hostel rooms are selected: two are provided with circadian lights, and two are complete dark rooms with LED lighting below 30 lux at a CCT of 2700K (warm light). All rooms are provided with air conditioners to maintain a room temperature of 25°C. A total of 12 subjects are selected in the age group of 19-20 years (final-year UG technical students): 6 male and 6 female students. The test is conducted for 7 days, and the results are analyzed by estimating the reading and writing skills of the subjects. A two-page standard write-up is handed to the students, who are asked to read it, reproduce it, and answer questions on it under test conditions. Students are sent into the dark room at 10:00 a.m. and asked to relax and retire until 6:00 p.m. (8 hours). After this exercise, the students are shifted to rooms where circadian lights are arranged with an illuminance of 550 lux and a CCT of 6000K. They are not allowed to use any mobile phones or laptops, and are given pre-prepared study material and asked to read it once. Later, around 2:45-3:30 a.m., they are asked to read the content of the material again, write it out, and discuss the topic covered in it. The same process is repeated with different study material for the entire 7 days, and in the meantime they are also asked to answer some quiz questions. The results show that on the first day alertness is within the limits of 20%-25%, and it grows gradually to 80%-87% on days 6 and 7 of the experiment. Detailed results cannot be discussed in this chapter; the EEG and ECG recordings, along with melatonin secretion measured using a Melatonin-Saliva-ELISA kit, are in progress to establish clinical evidence for the fact that alertness can be improved by using circadian light during the dark cycle to limit melatonin secretion. As this is a Govt. of India sponsored project under the SERB-DST-CSIR scheme, the full report cannot be published until the permissions are approved. The author of this chapter is the PI of the project (DST Sanction order no.: SR/CSRI/185/2016 dt. 24.11.2016).

7.5 Conclusion

By properly calibrating the LED drivers and selecting the LEDs, one can create an environment in which night-shift workers can perform their duties with the same efficiency as their peers working in the day shift. LED lighting can also be used to manage the sleep of insomnia patients and patients with Alzheimer's disease. Management of jet lag is also possible with circadian lighting.

Acknowledgments

The author is grateful to the management, principal, and staff of PACE Institute of Technology & Sciences, Ongole, A.P., for extending all support to carry out this project. The author is also thankful to DST for awarding the grant for this project under SERB-CSRI (Sanction order no.: SR/CSRI/185/2016 dt. 24.11.2016). The final-year B.Tech. students were voluntarily involved as subjects in the execution of the experiment. Special thanks to all the scientists and medical practitioners who gave their acceptance to use their published materials in the public domain of research. Finally, my sincere thanks to all the people who have directly and indirectly supported me in this project.

Further readings

M. Srinagesh, S. Ayesha, N. SivaKumar, Some studies on adaptive circadian through tunable LED lightening, IJETMAS 4 (6) (2016) 143-148.
S. Folkard, P. Tucker, Shift work, safety and productivity, Occup. Med. 53 (2) (2003) 95-101.
M.S. Rea, M.G. Figueiro, A. Bierman, J.D. Bullough, Circadian light, J. Circadian Rhythms 8 (1) (2010) 2.
N. Praschak-Rieder, M. Willeit, A.A. Wilson, S. Houle, J.H. Meyer, Seasonal variation in human brain serotonin transporter binding, Arch. Gen. Psychiatry 65 (9) (2008) 1072-1078.
M. Terman, A.J. Lewy, D.J. Dijk, Z. Boulos, C.I. Eastman, S.S. Campbell, Light treatment for sleep disorders: consensus report: IV. Sleep phase and duration disturbances, J. Biol. Rhythms 10 (2) (1995) 135-147.
G. Vandewalle, C. Schmidt, G. Albouy, V. Sterpenich, A. Darsaud, G. Rauchs, et al., Brain responses to violet, blue, and green monochromatic light exposures in humans: prominent role of blue light and the brainstem, PLoS One 2 (11) (2007) e1247.
D. Burnett, Circadian Adaptive Lighting. Professional Lighting Design, 78, Verlag für Innovationen in der Architektur, Gütersloh, 2011, pp. 48-54.
J.Y. Park, T. Dougherty, H. Fritz, Z. Nagy, LightLearn: an adaptive and occupant centered controller for lighting based on reinforcement learning, Build. Environ. 147 (2019) 397-414.
M.L. Westwood, A.J. O'Donnell, C. de Bekker, C.M. Lively, M. Zuk, S.E. Reece, The evolutionary ecology of circadian rhythms in infection, Nat. Ecol. Evol. 3 (4) (2019) 552-560.
C. Blume, C. Garbazza, M. Spitschan, Effects of light on human circadian rhythms, sleep and mood, Somnologie (Berl.) 23 (3) (2019) 147-156.
https://www.circadian.com/images/pdf/staffing_levels.pdf


CHAPTER 8

Cognitive and brain function analysis of sleeping stage electroencephalogram wave using parallelization

Vikas Dilliwar and Mridu Sahu
National Institute of Technology, Raipur, Raipur, India

8.1 Introduction

Cognitive neuroscience attracts researchers from various engineering, medical, and other disciplines [1]. One of its research fields is the analysis and diagnosis of brain signals. Numerous brain signaling techniques are available to analyze and diagnose the state of the brain, such as the electroencephalogram (EEG), electrocardiogram (ECG), and electromyogram [2]. Electroencephalography (EEG) is an effective noninvasive brain signaling technique for understanding the mechanism of brain activity, and it helps neurologists, scientists, and researchers understand brain activity for a particular problem. An EEG measures brain voltage fluctuations at the scalp with externally placed electrodes. These voltage fluctuations are generated by a large number of cortical cells through postsynaptic potential changes, and the potential is measured by mounting a multichannel cap on the subject's head. The procedure is completely safe and painless, and EEG can be applied to the analysis of patients with impaired cognitive function or other cognitive brain disorders [3]. EEG data collection, analysis, and processing have wide application in bioscience and neuroscience research; they usually involve large amounts of data and are computationally intensive. It is therefore necessary either to use a device with high computing power to obtain fast results or to use a parallel computing architecture (distributed computing/cluster computing), which is cost-effective and reuses numerous existing small computing devices as a high-power computing cluster [1]. Distributed parallel computing is a very efficient way to reduce computational time and increase speed-up using multicore or multiprocessor-based environments, cluster computing, etc. [4].
The concept of distributed parallel computing is broadly used for large, complex problems that can be divided into independent tasks. In this chapter, a cluster-computing-based parallel processing technique has been used for EEG signal processing (EEG channel selection) to find the frequency similarities between individual channels for 900-second-long EEG recordings, with the open-source Java Parallel Processing Framework (JPPF) as the parallel computing tool [5]. For EEG signal analysis, channel selection is an important task that reduces redundant and unnecessary signals. In the present model, the similarity of each pair of channels has been found by evaluating the coherence function (based on the joint power spectral density) for the set of EEG recordings. Each pair of channels is processed individually as an independent task, and the most suitable channels are selected as the result. The proposed parallel computing architecture reduces the computational cost associated with EEG signal processing. Suitable channel selection and frequency-range (theta, delta, gamma, alpha, and beta) identification are essential tasks for EEG-based analysis and diagnosis of brain function and various mental disorders such as sleep disorders [5,6]. In this chapter, the history of EEG, the EEG mechanism and its applications, EEG sleep-stage analysis, and finally the implementation of a coherence-estimation-based EEG channel selection method for different EEG waves, with JPPF as the parallel computing tool, are introduced.

Cognitive Informatics, Computer Modelling, and Cognitive Science, Volume 1. DOI: https://doi.org/10.1016/B978-0-12-819443-0.00008-8 © 2020 Elsevier Inc. All rights reserved.
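The chapter's pipeline uses JPPF (Java) to run each channel pair as an independent task. As an illustrative analogue only, the same idea can be sketched in Python: `scipy.signal.coherence` estimates magnitude-squared coherence from the joint power spectral density, and a thread pool distributes the channel pairs as independent tasks (JPPF would distribute them across cluster nodes instead). The channel count, sampling rate, and synthetic data are assumptions for illustration.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

import numpy as np
from scipy.signal import coherence

FS = 100                     # assumed sampling rate (Hz)
N_CH, DUR = 4, 900           # channels and duration (900 s recordings, as in the text)

rng = np.random.default_rng(0)
data = rng.standard_normal((N_CH, FS * DUR))   # synthetic stand-in for EEG channels

def pair_coherence(pair):
    """Independent task: mean magnitude-squared coherence of one channel pair."""
    i, j = pair
    _, cxy = coherence(data[i], data[j], fs=FS, nperseg=256)
    return pair, float(cxy.mean())

pairs = list(combinations(range(N_CH), 2))
with ThreadPoolExecutor() as pool:             # each pair runs as its own task
    scores = dict(pool.map(pair_coherence, pairs))

best = max(scores, key=scores.get)             # most similar channel pair
```

Coherence values lie between 0 (no linear frequency-domain relation) and 1 (identical spectra), so ranking pairs by mean coherence gives the similarity ordering the text describes.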

8.2 History of electroencephalography

The history of EEG started in the 19th century with experiments on the electrical properties of the brains of animals such as rabbits, monkeys, and dogs [7]. Electrical signals on the cerebral cortex of animals were reported by Richard Caton in the British Medical Journal in 1875, and the physiologist Adolf Beck then continued the investigation of spontaneous electrical activity of the animal brain and of changes in rhythmic oscillations with light. Beck used sensory stimulation, placed electrodes on the brain, and observed fluctuations of brain waves according to brain activity. The first EEG recording was made by the Ukrainian physiologist Vladimir Vladimirovich Pravdich-Neminsky in 1912; this was an EEG recording of an animal (a dog). Two years later, in 1914, an EEG of seizures was recorded by Napoleon Cybulski and Jelenska-Macieszyna [8]. The German psychiatrist Hans Berger established the concept of the EEG in 1929; this was a historic breakthrough, providing a new era of diagnostic tools for neurologists and psychiatrists. Following Caton's report of electrical activity on the gray matter of the open brain, Berger and other researchers recorded the first human EEG on July 6, 1924. In 1929 the terms alpha and beta waves for the EEG signal were introduced. Unfortunately, at the dawn of World War II, Berger's further EEG-related experiments were completely banned because he belonged to a country where the Nazi ideology prevailed. In 1934, Adrian and Matthews confirmed that Hans Berger had invented the first EEG recording device for the human brain and had named the device the EEG; this is among the most notable and significant developments in the history of neurology [9]. In 1934, Fisher and Lowenback observed sharp waves in the EEG, identified as distinctive spike waves. In 1935-36, Gibbs et al. illustrated spike waves with their cyclic pattern in seizures and their relation to epilepsy. Later, Offner proposed a prototype of the EEG called the crystograph, now known as the Offner dynograph. In 1953 the concept of REM sleep was introduced by Aserinsky and Kleitman. In the 1950s, W.G. Walter developed an extended version of EEG called EEG topography, which provided mapping of the electrical activity of the brain surface. In 1988, Bozinovski, Bozinovska, and Sestakov described the control of robots and other objects by EEG [10]. Two decades later, scientists actually connected human brains and successfully performed thought-sharing tasks.

8.3 Analysis of electroencephalogram signals

Various methods and techniques are available in signal-processing engineering to evaluate the frequency and amplitude of a waveform. Spectral and Fourier analyses are popular waveform-measuring methods that can be used for EEG waveforms [11]. In the spectral analysis method, a waveform is decomposed mathematically into a sum of different waveforms. In Fourier analysis, the waveform is decomposed into its frequency components, and the amplitude (as a power) of each frequency component is measured [11]. Fig. 8.1 shows the spectral analysis plot of a signal, and Fig. 8.2 shows the Fourier analysis plot with the amplitude (spectral power) of each frequency [11].

FIGURE 8.1 EEG spectral analysis plot. EEG, Electroencephalogram.

FIGURE 8.2 EEG Fourier analysis plot. EEG, Electroencephalogram.
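As an illustrative sketch (not from the chapter), the Fourier decomposition described above can be reproduced with NumPy: `rfft` splits a signal into its frequency components, and the squared magnitude gives the power at each frequency, as plotted in Fig. 8.2. The sampling rate and the synthetic two-tone "EEG" are assumptions for illustration.

```python
import numpy as np

FS = 200                                   # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / FS)            # 2 s of samples

# Synthetic "EEG": a 10 Hz alpha-like wave plus a weaker 20 Hz beta-like wave
x = 3.0 * np.sin(2 * np.pi * 10 * t) + 1.0 * np.sin(2 * np.pi * 20 * t)

spectrum = np.fft.rfft(x)                  # Fourier decomposition
power = np.abs(spectrum) ** 2              # amplitude as power (cf. Fig. 8.2)
freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)

peak = freqs[np.argmax(power)]             # dominant frequency component (10 Hz)
```

Plotting `power` against `freqs` reproduces the kind of spectrum shown in Fig. 8.2: two peaks, with the stronger one at the 10 Hz component.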

8.4 Electroencephalogram waves

The brain's electrical signal can be recorded from the scalp with the help of EEG, and the recorded waves represent various activities of the brain. The essential frequency bands of human EEG waves are explained in Table 8.1, with their frequency ranges, brain locations, normal occurrence, and pathological conditions [12,13]. EEG wave analysis plays an important role in the identification of brain disorders, diseases, and some abnormal brain activities.

8.5 Electroencephalogram signal recording variables and components

In the process of EEG recording, several variables and components are used. The important variables for recording and analyzing EEG signals are frequency, amplitude, morphology, periodicity, etc., and the EEG recording components are electrodes, electrode gel, electrode positioning, impedance, artifacts, filtering, etc. [3,11].

8.5.1 Frequency

The frequency of a signal is defined as "the form of rhythmic repetition of any activity (in Hz)." EEG activity includes waves whose frequencies have some important properties. If an EEG activity has constant-frequency waves, it is called rhythmic; if the activity has unstable rhythms, then this

Table 8.1 Human brain electroencephalogram (EEG) signal bands.

Delta (0–4 Hz). Brain location: the front portion of the brain in adults and the posterior portion of the brain in children. Normal occurrence: in adults, the sleeping stage (with slow waves); in children, attention-related tasks. Neuropathological existence: observed in subcortical, diffuse, and deep midline lesions and in metabolic EEG with high-amplitude waves.

Theta (4–8 Hz). Brain location: not related to a specific task. Normal occurrence: mostly found in young children up to 13 years of age; the drowsiness state of adults and teens; spikes in situations when a person is aggressively trying to give a response or take an action. Neuropathological existence: observed with focal lesions, metabolic EEG, deep midline disorders, and instances of hydrocephalus.

Alpha (8–12 Hz). Brain location: posterior and central sites of the brain. Normal occurrence: with closed eyes and in the relaxed position of adults, mostly after 30 years of age. Neuropathological existence: observed in the dominant rhythm of subjects in an alert or anxious state.

Beta (12–40 Hz). Brain location: left and right sides of the brain with symmetrical distribution, most marked frontally; low-amplitude waves. Normal occurrence: when people are in an active peaceful state, strained, or mildly infatuated; also in states of thinking, focus, high alertness, etc. Neuropathological existence: observed in the dominant rhythm of subjects in an alert or anxious state with eyes open.

Gamma (40–100 Hz). Brain location: somatosensory unit of the cortex. Normal occurrence: when multiple perceptions, such as sound and sight, are active simultaneously; also observed during object recognition. Neuropathological existence: observed in the visual cortex in the awake state and in central nervous system activity with rhythmic repetitive patterns.

Mu (8–12 Hz). Brain location: sensorimotor (left motor) cortex. Normal occurrence: observed in the relaxed state of motor neurons. Neuropathological existence: observed in motor mirror neurons, the inferior parietal lobule, the right anterior parietal cortex, and the left inferior frontal cortex.
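The band boundaries of Table 8.1 can be captured in a small lookup table. This is only an illustrative helper (the function name is ours, not the chapter's); the mu band is omitted because it overlaps alpha in frequency and is distinguished by scalp location rather than frequency.

```python
# Frequency bands from Table 8.1 (Hz), as half-open intervals [low, high)
BANDS = {
    "delta": (0, 4),
    "theta": (4, 8),
    "alpha": (8, 12),
    "beta": (12, 40),
    "gamma": (40, 100),
}

def band_of(freq_hz: float) -> str:
    """Return the EEG band containing a given frequency, per Table 8.1."""
    for name, (low, high) in BANDS.items():
        if low <= freq_hz < high:
            return name
    return "out of range"
```

For example, a 10 Hz dominant rhythm maps to the alpha band and a 2 Hz rhythm to delta, matching the table's ranges.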


is called arrhythmic. Any EEG activity pattern (rhythm) that appears in a group of patients, or is only rarely seen in healthy subjects, is called dysrhythmic [11].

8.5.2 Voltage

Voltage plays an important role in the analysis of EEG activity; it may be referred to as the average voltage or the peak voltage, depending on the recording technique. EEG voltage phenomena include attenuation, hypersynchrony, and paroxysmal activity. Attenuation is concerned with the suppression or depression state of the brain, in which the amplitude of the EEG signal may be reduced. Hypersynchrony can be observed as high-voltage activity related to alpha, beta, and theta waves. Paroxysmal activity reaches a high voltage and ends with a lower voltage within the same activity; some abnormal activities may be paroxysmal [14].

8.5.3 Morphology

The morphology of a waveform is its shape, which is determined by the relationship between the frequencies and the voltage level of the signal. EEG wave patterns are defined in four types: monomorphic, polymorphic, sinusoidal, and transient. A monomorphic activity is a distinct EEG activity composed of one dominant waveform. An EEG activity that involves multiple frequencies and creates a composite waveform is called polymorphic. Sinusoidal activities are represented by sine waves; a monomorphic activity may also produce a sinusoidal wave. Transient activity is an isolated wave that differs from the surrounding activities; spike transients and sharp-wave transients are categories of transient activity [11].

8.5.4 Impedance

Impedance is an important factor in determining the flow of current, and its unit is the ohm: the higher the resistance, the lower the current flow. For recording EEG signals, low electrode impedance is preferred, since high impedance lowers the amplitude of the recorded signal. The approximate value of impedance for EEG recording can be taken as 100 Ω to 5 kΩ [11].

8.5.5 Electroencephalogram electrodes

EEG electrodes play an important role in the EEG recording process. They are small metal pieces (made from stainless steel, tin, gold, or silver) located on the scalp at specific places. The positions of the electrodes are specified in international standards, for example, the 10/20 system and the 10/10 system. Each electrode is given an alphanumeric identity according to the brain area it covers, for example, "F" for the frontal lobe, "T" for the temporal lobe, "O" for the occipital lobe, and "P" for the parietal lobe. Even and odd numbers represent the right and left sides of the head, respectively, for example, Fp1, Fp2, etc. In advanced EEG recording systems, the electrodes are usually fixed inside a cap [14].

FIGURE 8.3 10/20 Electrode placement international standard [2].
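The naming convention just described (a letter group for the brain area, odd numbers for the left side, even numbers for the right, and "z" for midline positions) can be sketched as a small parser. The function and its region map are illustrative helpers built from the text's convention, not part of any standard API.

```python
# Region letters per the 10/20 naming convention described in the text
REGIONS = {"Fp": "frontal pole", "F": "frontal", "C": "central",
           "T": "temporal", "P": "parietal", "O": "occipital", "A": "ear"}

def describe_electrode(name: str) -> str:
    """Decode a 10/20 electrode label such as 'Fp1', 'Cz', or 'T4'."""
    letters = name.rstrip("0123456789z")   # strip the number or midline 'z'
    region = REGIONS[letters]
    if name.endswith("z"):
        side = "midline"                   # 'z' (zero) marks midline electrodes
    else:
        num = int(name[len(letters):])
        side = "left" if num % 2 else "right"   # odd = left, even = right
    return f"{region}, {side}"
```

So "Fp1" decodes to the left frontal pole and "Cz" to the central midline, matching the placements in Fig. 8.3.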

8.5.6 Electrode gel

Electrode gel is applied between the electrodes and the scalp; it increases the contact between skin and electrodes and reduces the resistance of the EEG recording path [14].

8.5.7 Electrode positioning (10/20 system)

The 10/20 scalp electrode arrangement has become very popular and is the commonly used international standard. In this system, electrodes are arranged on the scalp at distances of 10% and 20% of the measured nasion-inion and fixed-point distances. The positions are named frontal pole (Fp), frontal (F), central (C), parietal (P), occipital (O), and temporal (T), and midline electrodes carry the subscript "z," meaning zero. The 10/20 international standard electrode placement is shown in Fig. 8.3 [2].

8.5.8 Artifacts in electroencephalogram recording

Artifacts are unwanted signals captured during EEG recording, and recognizing and eliminating them from the EEG signal is a challenging task. Such artifacts can be movement effects, sweating, the ECG, and eye movements of the subject; technical artifacts include mains-frequency artifacts (50-60 Hz), cable-related artifacts, electrode- and gel-related artifacts, etc. Some tools and techniques are available for finding and handling artifacts; for example, contaminated signals can be detected using facial electromyography and impedance measurement techniques. EEG artifacts can be classified into two categories: subject-related and equipment-related. Subject-related artifacts are physiologic artifacts, that is, signals generated by the subject but not by the brain; equipment-related artifacts are extraphysiological artifacts that occur due to the recording equipment and environmental effects [15-17].

1. Eye movement artifacts: The signals from the Fp1 and Fp2 (frontal) electrodes are affected by eye movement artifacts due to the dipolar nature of the eyeball and cornea and the rotation of the eyes, which generates a high-amplitude alternating current field near the eye. This problem can be resolved by filtering out the dominant (blinking) frequency [16].
2. Skin artifacts: Another EEG artifact is induced by the multiple layers of skin over the cortex; local deformation of the skin and the DC potential induced between the stratum corneum and stratum granulosum layers vary the actual potential of the electrodes. A trustworthy way to remove this artifact is to clean the skin and create a low-resistance pathway with the help of an alcohol swab, sodium chloride (electrolyte), etc. [16].
3. Electrode artifacts: The interface between the subject (an ionic solution) and the electrodes (metallic conductors) also reduces the cell potential, which can be half of the recorded signal. To reduce this problem, electrodes are coated with silver chloride, and a layer of conductive paste is used over the skin [15].
4. 50-60 Hz artifact: If the impedance of any active electrode increases significantly compared with the other electrodes and the amplifier ground, the corresponding location produces high-frequency interference, and the EEG amplifier and other electronic devices can be overloaded. This is called the 50-60 Hz artifact [16].

8.5.9 Filtering

In EEG recording, the recommended standard filter setting is a low-frequency cutoff of 1 Hz and a high-frequency cutoff in the range of 50-70 Hz [11].
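A sketch of that recommended setting using SciPy's Butterworth design (an illustrative choice of filter type; the chapter does not name one): pass roughly 1-50 Hz, rejecting both slow electrode drift and mains-range noise. The sampling rate and synthetic signal are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 256  # assumed sampling rate (Hz)

# 4th-order Butterworth band-pass, 1-50 Hz, as second-order sections
sos = butter(4, [1.0, 50.0], btype="bandpass", fs=FS, output="sos")

t = np.arange(0, 4.0, 1.0 / FS)
raw = (np.sin(2 * np.pi * 10 * t)            # in-band alpha-like component
       + 2.0 * np.sin(2 * np.pi * 0.2 * t)   # slow drift, below the 1 Hz cutoff
       + 0.5 * np.sin(2 * np.pi * 60 * t))   # mains-range noise, above 50 Hz

clean = sosfiltfilt(sos, raw)                # zero-phase filtering of the record
```

`sosfiltfilt` runs the filter forward and backward, so the filtered trace stays time-aligned with the raw one, which matters when EEG events must be localized in time.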

8.5.10 Electroencephalogram recording device

The following major devices are involved in encephalographic measurement and the recording of EEG signals [11]:

• Electrodes (electrode cap) with conductive gel or other media
• Signal amplifier and signal filter
• Analog-to-digital converter
• EEG recording equipment

In the EEG recording process, the electrodes are responsible for picking up the brain signal from the scalp. Amplifiers increase the signal strength from the microvolt range to the level needed for correct digital representation. The analog-to-digital converter then converts the signal from analog to digital form for further processing in digital devices such as digital computers and display units.

8.6 Subject preparation and equipment setup for electroencephalogram recording using an electro cap

Subject preparation and equipment setup form a careful process in EEG recording. It involves the following essential steps [2]. First, installation of the electro cap on the head (according to Fig. 8.3):

• Prepare the subject with a neat and clean head; keep the subject in a particular position with the arms held straight, providing as much comfort as possible to the subject's body.
• Place one disposable sponge disk at each of the Fp1 and Fp2 electrode positions. The disposable sponge disk absorbs excess electrode gel and helps to position the electro cap.
• Measure the head circumference with a measuring tool and measure the distance between nasion and inion. Then take 10% of that distance from the nasion to the forehead for the electrode positions Fp1 and Fp2, and place the electro cap on the subject. During electro cap placement, the thumbs are normally kept on the outside of the electro cap and the fingers inside.
• Connect the body harness of the electro cap in a crossover manner, so that the right-side strap is connected to the left side and the left-side strap to the right side when the body harness is snapped.
• Inject electrolyte gel with a syringe into all electrode holes with moderate pressure.

Other operations:

• Different channels can be assigned for various purposes, such as filtering (using band-pass and low- and high-pass filters), the alpha, beta, theta, and delta frequencies, and alpha power.
• Observe the chart view and the spectrum-window PSD plots on the monitor.

Subject preparation and equipment setup are important and careful tasks before the EEG recording process.

8.7 Sleeping stage electroencephalogram waves

Sleep is defined as "the time that appears to be passive or restful for living creatures." During sleep the brain remains active, with scripted interplay in its circuits. The stages of sleep were discovered in the 1950s and are classified as non-rapid eye movement (NREM) sleep and rapid eye movement (REM) sleep, with the following stages [18-20].

Stage 0 is the wake stage. In this stage the eyes are open, the brain responds to external stimuli, and the person can hold an intelligible conversation. In the wake stage the brain produces beta waves with a frequency of 15-50 Hz and an amplitude below 50 μV; in the pre-sleep stage the brain generates 8-12 Hz frequencies with an amplitude of about 50 μV.

Stage 1: Stage 1, or N1, is the light-sleep stage of NREM sleep. This is a drowsiness-based sleep stage in which the person feels calm. The EEG frequency and amplitude lie in the ranges 4-8 Hz and 50-100 μV; this is measured as the theta wave of the EEG signal.

Stage 2: This is the starting stage of true NREM sleep, also called the N2 stage. The brain waves show brief runs of declining, low-amplitude, high-frequency activity called sleep spindles; along with sleep spindles, sleep structures known as K complexes also appear. The EEG frequencies of this stage can be 4-15 Hz with an amplitude of 50-150 μV. In this stage, body temperature decreases and the heart rate goes down.

Stage 3 (N3): This is deep NREM sleep, also called the soothing stage of sleep. It produces delta waves with a frequency of 1-4 Hz and an amplitude of 100-200 μV. This stage produces slow waves; awakenings and arousals are uncommon, while sleep-walking, sleep-talking, and night terrors can arise in this stage.

Stage R: The REM stage (Stage R) is also called the dreaming stage. Due to frequent eye movements, it is more active than the N1 and N2 NREM sleep stages. In the REM stage, awakenings, arousals, and feeling groggy or overly sleepy can be the major indications. The EEG signal of this stage is observed as beta waves with a frequency of 15-30 Hz and an amplitude below 50 μV.

Table 8.2 shows the description of the different sleeping stages of the human body in the situations of open eyes, closed eyes, and REM and NREM sleep. It also describes the frequency bands (waves) that are generated in the various sleeping stages [21].
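The stage descriptions above pair each stage with a characteristic frequency range, which suggests a toy lookup. This is illustrative only: real sleep staging follows full polysomnographic scoring rules, N2 overlaps the other bands (its 4-15 Hz spindles span theta and alpha) and so is omitted, and wake and REM share the beta range and cannot be separated by frequency alone.

```python
# Characteristic EEG frequency ranges per sleep stage, taken from the text (Hz)
STAGE_BANDS = [
    ("N3 (deep sleep, delta)", 1, 4),
    ("N1 (light sleep, theta)", 4, 8),
    ("pre-sleep (alpha)", 8, 12),
    ("wake or REM (beta)", 15, 50),
]

def guess_stage(dominant_hz: float) -> str:
    """Map a dominant EEG frequency to the stage it is typical of."""
    for stage, low, high in STAGE_BANDS:
        if low <= dominant_hz < high:
            return stage
    return "indeterminate"
```

A 2 Hz dominant rhythm therefore points to deep N3 sleep, while a 10 Hz rhythm points to the relaxed pre-sleep alpha state.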

8.8 Types of channel selection for cognitive applications

The cognitive applications of EEG include classification and diagnosis for seizure finding, seizure prediction, motor-effect recognition, psychological tasks, emotion-based tasks, sleep-stage analysis, diagnosis of medicinal effects, etc. In traditional EEG recording systems, a large number of EEG channels is used to record the data, which increases the complexity and processing time. The choice of channels varies from application to application, so it is clear that competent, expert channel selection algorithms are required for cognitive applications. The major objectives of a channel selection algorithm can be articulated in the following four points [22].


Table 8.2 Human brain sleep stages and their descriptions.

Stage 0 (Wake): Open eyes, responsive to external stimuli, intelligible conversation. EEG waves: beta. Time spent: 16–18 h/day for adults.

Eyes closed: Closed eyes with intelligible conversation. EEG waves: alpha. Time spent: the moments just before sleeping, with eyes newly closed.

Stage 1 (N1), NREM sleep (light sleep): A closed-eyes stage in which the person is in a calm position; an intermediate stage between wakefulness and sleep. EEG waves: theta. Time spent: 4–7 h per night (together with the other NREM stages).

Stage 2 (N2), NREM sleep (light sleep): A light-sleep stage with memory consolidation. EEG waves: sleep spindles, K-complexes.

Stage 3 (N3), NREM sleep (deep sleep): A deep-sleep stage with slow waves in the EEG readings. EEG waves: delta.

Stage R, REM sleep (light sleep): The waves are likely to resemble a wake stage; vivid dreams can occur in this stage without body movements. EEG waves: beta. Time spent: 90–120 min per night.

EEG, Electroencephalogram; NREM, non-rapid eye movement; REM, rapid eye movement.

• To reduce the required processing time and computational complexity of EEG-based processing tasks.
• To extract the desired features for the given objective by selecting only the relevant channels.
• To reduce the processing cost required for the computation of unnecessary channels and thereby improve performance.
• To reduce the setup time required for channel-specific applications.

Different signal-processing techniques can be used for channel selection and feature extraction, for example, time-domain analysis, power spectral estimation, wavelet transforms, etc. Beyond these, other approaches such as filtering, wrapper, embedded, hybrid, and manual procedures are used for the selection of channels [22].
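As a sketch of a simple filter-style selection (one of the approaches named above; the scoring criterion and all names here are illustrative choices, not the chapter's method): score each channel by its power in a band of interest, estimated with Welch's power spectral method, and keep the top-ranked channels.

```python
import numpy as np
from scipy.signal import welch

FS = 100  # assumed sampling rate (Hz)

def rank_channels(data, k, band=(8.0, 12.0)):
    """Filter-style selection: score each channel by band power, keep the top k."""
    scores = []
    for channel in data:
        f, psd = welch(channel, fs=FS, nperseg=256)   # power spectral estimation
        mask = (f >= band[0]) & (f < band[1])
        scores.append(psd[mask].sum())                # power in the chosen band
    return np.argsort(scores)[::-1][:k]               # indices of the best channels

rng = np.random.default_rng(1)
t = np.arange(0, 30.0, 1.0 / FS)
data = rng.standard_normal((5, t.size))               # 5 synthetic channels
data[2] += 5.0 * np.sin(2 * np.pi * 10 * t)           # only channel 2 carries alpha

selected = rank_channels(data, k=2)                   # channel 2 should rank first
```

Because the score is computed per channel without training a classifier, this is a filter approach in the taxonomy above; wrapper and embedded methods would instead score channel subsets through the downstream model.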

8.9 Disorders detection using electroencephalogram

The EEG is capable of representing the electrical changes associated with the brain, so it is widely used by researchers and neurologists for the diagnosis


of brain-related disorders [5]. In the 10-20 international electrode placement system, around 20 electrodes are arranged symmetrically over the head (cortex). Normally, in the awake condition the EEG shows alpha waves (8-12 Hz frequency, 50 μV amplitude, sinusoidal) over the occipital and parietal lobes and beta waves (frequency above 12 Hz, amplitude 10-20 μV) frontally, mixed with theta waves (4-7 Hz frequency, 20-100 μV amplitude). The EEG is a useful tool for identifying abnormal brain wave patterns, which are either nonspecific or diagnostic. An excessive occurrence of delta waves, with a frequency of 1-4 Hz and an amplitude of 50-350 μV, represents depressed consciousness, encephalopathy, and dementia. The EEG can be useful for the identification of the following brain disorders [23]:

• Epilepsy or other seizure disorders
• Brain lesions
• Brain tumor
• Head injury
• Brain dysfunction
• Brain inflammation (encephalitis)
• Stroke
• Sleep disorders
• Alzheimer's disease or dementia, etc.

In the case of an epileptic seizure, the EEG shows rapid spiking waves; for subjects with lesions, tumors, or stroke, the EEG shows unusually slow waves, depending on their size and location. EEG analysis can also be used for analyzing brain activity in conditions such as Alzheimer's disease, psychosis, and narcolepsy, for evaluating trauma, drug intoxication, and comatose patients, and for observing blood circulation in the brain during brain surgery. According to the International Classification of Sleep Disorders (ICSD), more than 80 different types of sleep disorder have been identified. Subjects with sleep disorders can suffer physically, psychologically, and financially; hundreds of road traffic accidents take place due to daytime sleepiness caused by poor sleep or sleep disorders. In fact, poor sleep affects mental status and mental function and worsens mental conditions such as depression and schizophrenia. The current gold standard for sleep disorder diagnosis is the polysomnogram (PSG), which is expensive and available only in limited places. According to the ICSD, sleep disorders are categorized into the following types [17]:

• Insomnias: poor sleeping, difficulty falling and staying asleep, weak sleep quality.

• Breathing disorders: changes in sleep structure, including chronic snoring, UARS, sleep apnea, obesity hypoventilation disorder, etc.

• Hypersomnias: due to disturbed nocturnal sleep.

• Circadian rhythm sleep disorders: this type of sleep disorder includes shift-work sleep disorder (variation in sleeping time), time-zone changes (also called jet lag), medications and changes in routine, delayed sleep phase disorder, advanced sleep phase disorder, non-24-h sleep–wake disorder, etc.
• Narcolepsy: overwhelming daytime drowsiness and irregular REM sleep.
• Parasomnias: disruptive sleep disorders that normally happen on arousal from REM sleep or partial arousal from NREM sleep; they include nightmares, night terrors, nocturnal enuresis, bruxism, sleep-walking, confusional arousals, etc.
• Psychiatric disorders: sleep-disturbance-based disorders of psychiatric patients.
• Sleep-related movement disorders.
• Isolated symptoms: an unresolved issue with no clinical significance; sleep paralysis is also a type of isolated sleep disorder.
• Other sleep disorders.

Generally, sleep problems are ignored by people; although sleep routines are broadly similar around the world, bedtimes and waking times differ. Table 8.3 shows the brain function identification for different frequency ranges (bands).

Table 8.3 Brain functions and disorders according to frequency range (captured from brain signals).

Type of signal | Frequency range | High level | Low level | Optimal level
Delta waves | 0–4 Hz | Brain injury, learning issues, inability to think, etc. | Drowsiness, poor sleep | Healthy immune resistance, soothing sleep
Theta waves | 4–8 Hz | Hyperactivity, depression, impulsive inattentive activity | Anxiety symptoms, weak emotional awareness, higher stress | Creativity, good emotional connection, relaxed stage
Alpha waves | 8–12 Hz | Daydreaming, over-relaxed stage, lack of focus | Anxiety symptoms such as higher stress | Perfectly relaxed stage
Beta waves | 12–40 Hz | Inability to feel relaxed, high adrenaline levels, stressed mind | Depression, lack of cognitive capability, lack of concentration | Consistency in focus, good memory recall, good problem-solving capability
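As a small illustrative helper (not from the chapter), the band boundaries of Table 8.3 can be encoded directly, mapping a dominant frequency to its band name:

```python
# Maps a dominant EEG frequency (Hz) to its band, per Table 8.3.
# Illustrative helper; the band names and ranges come from the table above.
BANDS = [("delta", 0.0, 4.0), ("theta", 4.0, 8.0),
         ("alpha", 8.0, 12.0), ("beta", 12.0, 40.0)]

def band_of(freq_hz):
    """Return the Table 8.3 band containing freq_hz (upper edge exclusive)."""
    for name, lo, hi in BANDS:
        if lo <= freq_hz < hi:
            return name
    raise ValueError(f"{freq_hz} Hz lies outside the 0-40 Hz ranges of Table 8.3")

print(band_of(10.0))  # a 10-Hz rhythm falls in the alpha band
```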


CHAPTER 8 Cognitive and brain function analysis

8.10 Application of electroencephalogram

EEG is the cheapest, most trustworthy, and most mature technology for the following BCI applications [2]:

• Neuroentertainment: neurogaming, neurotoys, art (people use it for art and music generation), virtual reality.
• Security: EEG signal-based authentication systems.
• Biofeedback therapy: anxiety (alpha and theta training used to reach a relaxed state of mind), sleep improvement (measuring the quality of sleep).
• Rehabilitation: stroke recovery, addiction removal, Rett disorder (RTT).
• Diagnostics: diagnosis of seizures, sleep disorders, and other brain diseases.

EEG alpha-activity studies have been used in various sports training activities, including for elite athletes, Olympic development squads, golf, football, rugby, cricket, cycling, rowing, pistol shooting, darts, snooker, and all sports where aim and/or focus is required. EEG brain-wave observation is also used for sports training, coaching, and stress and tiredness management. A wide variety of further EEG applications will be built in the future [24,25].

8.11 Case study—channel selection for alpha, beta, theta, and delta waves using parallel processing

EEG data analysis and processing has wide applications in various fields, and it usually involves bulk data and is computationally intensive. So it is necessary either to use high-computing-power devices to obtain fast results or to use a parallel computing architecture (distributed computing/cluster computing models), which is cost-effective and reuses numerous existing small computing devices as one high-power cluster of computers. In this case study of parallel EEG signal processing, a large EEG data set has been processed (using parallel distributed computing) to match the similarities among the frequency components of EEG channels (separately for alpha, delta, theta, and beta waves) for the analysis and diagnosis of the subject. The parallel processing is performed on a sleep-stage EEG data set, finding the similarities between 19 channels for different sample sizes: 25-, 50-, 100-, 200-, and 500-s-long signals. The "Java Parallel Processing Framework" (JPPF) is used as a cluster-based parallel computation tool [6,22]. JPPF is an open-source tool for implementing grid computing and multiprocessing, parallel, distributed, cluster-based processing concepts on any device and any platform where a JVM is available. The developers give JPPF the slogan "Write once, deploy once, execute everywhere" [JPPF].


8.11.1 Java Parallel Processing Framework architecture [JPPF]

JPPF is a three-tier architecture with the following three layers.

8.11.1.1 Client layer

This layer is responsible for providing an API, communication tools, and the submission of tasks for parallel execution on different nodes.

8.11.1.2 Service layer

The service layer deals with the communication between the client units and the processing elements (nodes); it also deals with execution queue management. Other important responsibilities of the service layer are load balancing, packet recovery, and the dynamic loading of framework and application classes into the appropriate node.

8.11.1.3 Execution layer

The execution layer is a group of processing elements, called nodes. This layer is responsible for executing individual tasks, returning the executed results, and dynamically handling requests from the JPPF driver and JPPF server. Fig. 8.4 illustrates the architecture of JPPF. Fig. 8.5 shows the peer-to-peer topology of the JPPF server and its peer clients, which communicate with each other. There are a number of major advantages to this design. In the proposed work, the similarities between pairs of channels have been found by evaluating a coherence function (based on the joint power spectral density) for the EEG data recorded from the various channels. According to the channels' frequencies, the alpha, beta, delta, and theta wave components are filtered with the help of a fast Fourier transform (FFT) filter, and the best producer channels are selected separately for each wave.
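As an illustrative sketch of the FFT band-pass filtering step described above (an assumed implementation, not the chapter's code), the alpha component of a signal can be isolated by zeroing FFT bins outside 8–12 Hz and inverting:

```python
import numpy as np

def fft_bandpass(signal, fs, lo, hi):
    """Simple FFT band-pass: zero bins outside [lo, hi] Hz, then invert.
    Illustrative sketch of the filtering step the case study describes."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

fs = 128                                # sampling rate (Hz) used in the case study
t = np.arange(0, 4, 1 / fs)             # 4-s test signal
sig = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 30 * t)  # alpha + beta mix
alpha_only = fft_bandpass(sig, fs, 8, 12)  # keeps only the 10-Hz component
```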

8.11.2 Coherence estimation functions

Coherence estimation plays an important and significant role in the field of EEG signal processing. It is used to find the various brain signal frequency bands produced at different positions and can be used for channel selection, seizure identification, and other EEG diagnostic activities. Before the coherence estimation of EEG signals, muscle- and movement-related artifacts are first removed from the EEG data by visual inspection, and horizontal and vertical eye movements are removed through the independent component analysis algorithm [26]. The coherence Cxy of two EEG signals x and y is a relationship of the cross-power spectral density Sxy and the power spectral densities Sxx and Syy of the individual



FIGURE 8.4 Architecture of JPPF, [JPPF]. JPPF, Java Parallel Processing Framework.

channels x and y. The relation for the coherence Cxy, involving the square of Sxy, is shown in Eq. (8.1) [27]:

Cxy(f) = |Sxy(f)|^2 / (Sxx(f) Syy(f))   (8.1)

In the present work, the EEG power spectral density is calculated with the help of the FFT for epoch sizes of 25, 50, 100, 200, 500, and 900 seconds and a sampling frequency of 128 Hz.
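The magnitude-squared coherence of Eq. (8.1) is available directly in SciPy's `scipy.signal.coherence` (Welch-averaged). The sketch below, an illustration on synthetic data rather than the chapter's code, estimates the alpha-band coherence of two noisy channels sharing a 10-Hz component:

```python
import numpy as np
from scipy.signal import coherence

fs = 128                      # sampling frequency (Hz), as stated in the text
t = np.arange(0, 25, 1 / fs)  # a 25-second epoch

# Two synthetic "channels" sharing a 10-Hz (alpha-band) component plus noise.
rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 10 * t)
x = shared + 0.5 * rng.standard_normal(t.size)
y = shared + 0.5 * rng.standard_normal(t.size)

# Welch-averaged magnitude-squared coherence, Eq. (8.1):
# Cxy(f) = |Sxy(f)|^2 / (Sxx(f) * Syy(f)), bounded in [0, 1].
f, Cxy = coherence(x, y, fs=fs, nperseg=256)

alpha_mean = Cxy[(f >= 8) & (f <= 12)].mean()  # mean coherence in the alpha band
print(alpha_mean)
```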

8.11.3 Distributed parallel computation

Distributed parallel computing is a very efficient tool to reduce computational time and increase speed-up using a multicore, multiprocessor-based


FIGURE 8.5 Topology [JPPF]. JPPF, Java Parallel Processing Framework.

environment. The concept of parallel computing is an effective solution for problems that can be divided into independent tasks. In the proposed work, the computation for each pair of channels is processed individually as a separate task, and the best-suited channels are found as a result for each band (delta, theta, alpha, and beta). The proposed parallel computing architecture has reduced the computational cost associated with EEG signal processing. The parallel algorithm for EEG channel selection and frequency-range retrieval has been implemented, using the JPPF architecture, on one Intel Quad Core i3 (2.10 GHz, 4 GB RAM, Windows 7) personal computer as a server and eight Intel Pentium Dual Core E2140 (1.6 GHz, 1 GB RAM, Windows 7) machines as computing nodes. The proposed method was applied to a sleep-stage data set taken from https://sccn.ucsd.edu/~arno/fam2data/publicly_available_EEG_data.html (converted from .edf to .csv file format). Fig. 8.6 shows the process of the parallel EEG channel selection method for the sleep-stage data set. In this method, the data are first cleaned of various artifacts and noises, then filtered using an FFT band-pass filter; further coherence
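JPPF distributes Java tasks across JVM nodes; as a local, single-machine analogue of the same decomposition, the sketch below (an illustration with assumed synthetic data, not the chapter's JPPF code) treats each channel pair's alpha-band coherence as one independent task and maps the tasks over a worker pool:

```python
import numpy as np
from itertools import combinations
from concurrent.futures import ThreadPoolExecutor
from scipy.signal import coherence

FS = 128  # sampling rate (Hz) used in the case study

def pair_coherence(task):
    """One independent task: mean alpha-band coherence of one channel pair."""
    (i, x), (j, y) = task
    f, Cxy = coherence(x, y, fs=FS, nperseg=256)
    band = (f >= 8) & (f <= 12)
    return i, j, float(Cxy[band].mean())

def rank_pairs(data):
    """data: (channels x samples) array; rank channel pairs by alpha coherence.
    Each pair is an independent task, mirroring the JPPF task decomposition
    (JPPF runs Java tasks on remote nodes; this local sketch uses a thread pool)."""
    tasks = list(combinations(enumerate(data), 2))
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(pair_coherence, tasks))
    return sorted(results, key=lambda r: -r[2])

# Demo on synthetic data: channels 0 and 1 share a component, so they
# should come out as the most coherent pair.
rng = np.random.default_rng(1)
data = rng.standard_normal((4, FS * 25))  # 4 fake channels, 25 s each
data[1] += data[0]
best = rank_pairs(data)[0]
print(best[0], best[1])
```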


FIGURE 8.6 Data flow diagram for the proposed EEG channel selection model for different sleeping waves. EEG, Electroencephalogram.

Table 8.4 Best suited channels obtained from the proposed channel selection method with 50-, 100-, 200-, and 500-s signal lengths for alpha, beta, theta, and delta waves.


FIGURE 8.7 Best suited channels for alpha (circle), beta (triangle), and delta (star) waves from a set of EEG recordings 25 s long.

estimation and channel-pair selection are performed on the filtered data in parallel. Finally, the best suited channels are selected for the alpha, beta, theta, and delta waves; the results are shown in Table 8.4. Fig. 8.7 shows the best suited EEG channels for a 25-second signal, with the coherence in the alpha (circle), beta (triangle), and delta (star) waves, from the 19 channels (Fp1, Fp2, F7, F3, FZ, F4, F8, T3, T5, C3, CZ, C4, T4, T6, P3, PZ, P4, O1, O2). As observed in the experimental results of Table 8.5 and Fig. 8.8, when the execution is done on 1, 2, 4, and 8 nodes, respectively, the execution time decreases almost linearly. Table 8.5 also shows that the percentage of time saving and the speed-up increase as the number of processing elements (nodes) increases (1, 2, 4, and 8 nodes). The proposed method obtained its best time saving of 85.468% and a speed-up of 6.881 times (relative to serial execution, Ts/T8) for parallel execution on eight nodes with 500-second-long signals.


Table 8.5 Execution time (s) obtained in the execution of the electroencephalogram channel selection program in sequential and parallel manners for different signal lengths. For n nodes, % time saving = ((Ts − Tn)/Ts) × 100 and speed-up = Ts/Tn.

Signal length (s) | Serial Ts | T2 (2 nodes) | % saving | Speed-up | T4 (4 nodes) | % saving | Speed-up | T8 (8 nodes) | % saving | Speed-up
25  | 354.188  | 205     | 42.121 | 1.728 | 114.312 | 67.726 | 3.098 | 68.064  | 80.783 | 5.204
50  | 479.688  | 275.812 | 42.502 | 1.739 | 154.436 | 67.805 | 3.106 | 87.876  | 81.681 | 5.459
100 | 583.06   | 334.188 | 42.684 | 1.745 | 178.5   | 69.386 | 3.266 | 106.128 | 81.798 | 5.494
200 | 877.5    | 484.252 | 44.815 | 1.812 | 264.188 | 69.893 | 3.321 | 144.06  | 83.583 | 6.091
500 | 1753.872 | 955     | 45.549 | 1.837 | 518.188 | 70.455 | 3.385 | 254.872 | 85.468 | 6.881

Bold italic in the original marks the maximum percentage time saving in the parallel execution of the EEG channel selection program (85.468, eight nodes, 500-s signal).
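The time-saving and speed-up columns follow directly from the formulas in the table header; as a quick arithmetic check against the 500-s, eight-node cells:

```python
# Time-saving and speed-up formulas from Table 8.5, applied to the
# 500-s row (Ts and T8 values taken from the table).
Ts = 1753.872   # serial execution time (s)
T8 = 254.872    # parallel execution time on eight nodes (s)

speedup = Ts / T8
time_saving = (Ts - T8) / Ts * 100

print(round(speedup, 3))      # 6.881, matching the table
print(round(time_saving, 3))  # 85.468, matching the table
```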


FIGURE 8.8 Plot of a number of nodes (CPUs) vs execution time obtained by the parallel EEG channel selection program for various signal lengths in 2D and 3D views. EEG, Electroencephalogram.


8.12 Conclusion

This chapter endeavors to observe the effect of parallel distributed computing on EEG signal processing. In the present work, we used the concept of parallel execution for the EEG channel selection method with single, 2, 4, and 8 computers, one by one, using JPPF. The proposed model performs the computational work in parallel on a sleep-stage data set with five different signal samples (25, 50, 100, 200, and 500 seconds long) and obtains improved time saving and a speed-up of up to 6.881 times on eight nodes. This shows that complex, time-consuming algorithms can be executed on a small number of computers connected through a LAN with the freely available open-source parallelization tool JPPF. This chapter also provides information about the EEG recording process, cognitive and BCI applications of EEG, and the advantages of channel selection algorithms.

References

[1] D. Millet, The origins of EEG – session VI – anatomical and physiological models and techniques, in: Seventh Annual Meeting of the International Society for the History of the Neurosciences (ISHN), 2002.
[2] O. Van Der Stelt, A. Belger, Application of electroencephalography to the study of cognitive and brain functions in schizophrenia, Schizophr. Bull. 33 (4) (2007) 955–970. Available from: https://doi.org/10.1093/schbul/sbm016.
[3] E. Kirmizi-Alsan, Z. Bayraktaroglu, H. Gurvit, Y.H. Keskin, M. Emre, T. Demiralp, Comparative analysis of event-related potentials during Go/NoGo and CPT: decomposition of electrophysiological markers of response inhibition and sustained attention, Brain Res. 1104 (1) (2006) 114–128. Available from: https://doi.org/10.1016/j.brainres.2006.03.010.
[4] P. Jezdík, R. Čmejla, P. Kršek, A. Jahodová, Distributed computing system for EEG signal processing using MATLAB as ActiveX object in DCOM model. [JPPF] JPPF, 2019, <www.jppf.org>.
[5] Z. Juhasz, Highly parallel online bioelectrical signal processing on GPU architecture, in: 2017 40th International Convention on Information and Communication Technology, Electronics and Microelectronics, MIPRO 2017 – Proceedings, 2017. Available from: https://doi.org/10.23919/MIPRO.2017.7973446.
[6] V. Dilliwar, G.R. Sinha, S. Verma, Efficient fractal image compression using parallel architecture, i-Manager's J. Commun. Eng. Syst. (2018). Available from: https://doi.org/10.26634/jcs.2.3.2332.
[7] D. Güllmar, J. Haueisen, M. Eiselt, F. Gießler, L. Flemming, A. Anwander, et al., Influence of anisotropic conductivity on EEG source reconstruction: investigations in a rabbit model, IEEE Trans. Biomed. Eng. (2006). Available from: https://doi.org/10.1109/TBME.2006.876641.
[8] C. Armon, R.A. Radtke, A.H. Friedman, Inhibitory simple partial (non-convulsive) status epilepticus after intracranial surgery, J. Neurol. Neurosurg. Psychiatry (2000). Available from: https://doi.org/10.1136/jnnp.69.1.18.

References

[9] L.F. Haas, Hans Berger (1873–1941), Richard Caton (1842–1926), and electroencephalography, J. Neurol. Neurosurg. Psychiatry 74 (1) (2003) 9. Available from: https://doi.org/10.1136/jnnp.74.1.7.
[10] S. Bozinovski, M. Sestakov, L. Bozinovska, Using EEG alpha rhythm to control a mobile robot, in: Proc. IEEE Annual Conference of the Medical and Biological Society, New Orleans, 1988, pp. 1515–1516. Available from: https://doi.org/10.1109/iembs.1988.95357.
[11] B.E. Swartz, The advantages of digital over analog recording techniques, Electroencephalogr. Clin. Neurophysiol. 106 (2) (1998) 113–117. Available from: https://doi.org/10.1016/S0013-4694(97)00113-2.
[12] M. Abo-Zahhad, S. Ahmed, S.N. Seha, A new EEG acquisition protocol for biometric identification using eye blinking signals, Int. J. Intell. Syst. Appl. 07 (2015) 48–54. Available from: https://doi.org/10.5815/ijisa.2015.06.05.
[13] E.M. Holz, M. Doppelmayr, W. Klimesch, P. Sauseng, EEG correlates of action observation in humans, Brain Topogr. (2008). Available from: https://doi.org/10.1007/s10548-008-0066-1.
[14] M.A. Kisley, Z.M. Cornwell, Gamma and beta neural activity evoked during a sensory gating paradigm: effects of auditory, somatosensory and cross-modal stimulation, Clin. Neurophysiol. 117 (11) (2006) 2549–2563. Available from: https://doi.org/10.1016/j.clinph.2006.08.003.
[15] L. Shoker, S. Sanei, J. Chambers, Artifact removal from electroencephalograms using a hybrid BSS-SVM algorithm, IEEE Signal Process. Lett. (2005). Available from: https://doi.org/10.1109/LSP.2005.855539.
[16] J.J.M. Kierkels, J. Riani, J.W.M. Bergmans, G.J.M. Van Boxtel, Using an eye tracker for accurate eye movement artifact correction, IEEE Trans. Biomed. Eng. (2007). Available from: https://doi.org/10.1109/TBME.2006.889179.
[17] D. Kelleher, A. Temko, S. O'Regan, D. Nash, B. McNamara, D. Costello, et al., Parallel artefact rejection for epileptiform activity detection in routine EEG, in: Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, 2011. Available from: https://doi.org/10.1109/IEMBS.2011.6091961.
[18] A. Roebuck, V. Monasterio, E. Gederi, M. Osipov, J. Behar, A. Malhotra, et al., A review of signals used in sleep analysis, Physiol. Meas. (2014). Available from: https://doi.org/10.1088/0967-3334/35/1/R1.
[19] M. Sahu, S. Shirke, G. Pathak, P. Agarwal, R. Gupta, V. Sodhi, et al., Study and analysis of electrocardiography signals for computation of R peak value for sleep apnea patient, Adv. Intell. Syst. Comput. (2016). Available from: https://doi.org/10.1007/978-81-322-2526-3_5.
[20] K.A.I. Aboalayon, M. Faezipour, W.S. Almuhammadi, S. Moslehpour, Sleep stage classification using EEG signal analysis: a comprehensive survey and new investigation, Entropy (2016). Available from: https://doi.org/10.3390/e18090272.
[21] B. Noureddin, P.D. Lawrence, G.E. Birch, Online removal of eye movement and blink EEG artifacts using a high-speed eye tracker, IEEE Trans. Biomed. Eng. (2012). Available from: https://doi.org/10.1109/TBME.2011.2108295.
[22] O. Friman, I. Volosyak, A. Gräser, Multiple channel detection of steady-state visual evoked potentials for brain-computer interfaces, IEEE Trans. Biomed. Eng. (2007). Available from: https://doi.org/10.1109/TBME.2006.889160.


[23] E. Carrera, J. Claassen, M. Oddo, R.G. Emerson, S.A. Mayer, L.J. Hirsch, Continuous electroencephalographic monitoring in critically ill patients with central nervous system infections, Arch. Neurol. (2008). Available from: https://doi.org/10.1001/archneur.65.12.1612.
[24] K.P. Mason, E. O'Mahony, D. Zurakowski, M.H. Libenson, Effects of dexmedetomidine sedation on the EEG in children, Paediatr. Anaesth. (2009). Available from: https://doi.org/10.1111/j.1460-9592.2009.03160.x.
[25] D. Senturk, V. Saravanapandian, P. Golshani, L.T. Reiter, R. Sankar, S.S. Jeste, A quantitative electrophysiological biomarker of duplication 15q11.2-q13.1 syndrome, PLoS One 11 (12) (2016). Available from: https://doi.org/10.1371/journal.pone.0167179.
[26] J. Krupa, A. Pavelka, O. Vyšata, A. Procházka, Distributed Signal Processing, 2007.
[27] A. Plerou, P. Vlamos, Evaluation of mathematical cognitive functions with the use of EEG brain imaging, Spec. Gifted Educ. (2016). Available from: https://doi.org/10.4018/978-1-5225-0034-6.ch094.

Further reading

C.W. Anderson, J.N. Knight, T. O'Connor, M.J. Kirby, A. Sokolov, Geometric subspace methods and time-delay embedding for EEG artifact removal and classification, IEEE Trans. Neural Syst. Rehabil. Eng. (2006). Available from: https://doi.org/10.1109/TNSRE.2006.875527.
A. Coenen, E. Fine, O. Zayachkivska, Adolf Beck: a forgotten pioneer in electroencephalography, J. Hist. Neurosci. 23 (3) (2014) 276–286. Available from: https://doi.org/10.1080/0964704X.2013.867600.
Z. Juhasz, G. Kozmann, A GPU-based soft real-time system for simultaneous EEG processing and visualization, Scalable Comput. (2016). Available from: https://doi.org/10.12694/scpe.v17i2.1156.

CHAPTER 9

The future networks—a cognitive approach

Kavitha Sooda(1) and T.R. Gopalakrishnan Nair(2)
(1) Department of CSE, B.M.S. College of Engineering, Bengaluru, India
(2) Department of CSE, RREC, Bengaluru, India

9.1 Introduction

As the explosion of portable devices on the internet continues, and as the internet moves well beyond remote login, file transfer, and the classical service of email, different network scenarios arise for the various applications that need to support mobility. In the past decade this has driven research on programmable and intelligent networks, where the network architecture takes a novel approach in which the messages flowing through it are customized by the switches. Making the network programmable and adaptive was the main objective, yet human administrators still manage the networks throughout their operation. The network is unable to reason about its actions, the state of its needs, knowledge of its goals, or the roadmap for achieving them. This redirects the network toward self-governance properties. Currently, the network works on the principle of reacting to changes happening in the environment; instead, the network should adapt proactively before the environment changes. Mobile computing is pervasive; improvements are needed in manageability for administrators and in usability for it to reach its full potential, and autonomic networking helps by incorporating intelligence into the network environment. Detailed knowledge of network components, connections to other systems, and operating environments is needed. For proper functioning, the network will need to know which resources are borrowed, bought, lent, or simply shared with others [1–3].

9.2 Intelligence in networks

A grand challenge for the vision of the future is building intelligence into the network, in which computing systems follow high-level objectives. Recovery mechanisms, decision-making capability, and proper planning are some of the requirements. Various intelligent techniques can be applied when the approach is followed systematically. Cognitive approaches, artificial immune systems, neural networks, the Markov chain approach, genetic algorithms (GA), simulated annealing, learning, bioinspired computing, and the steepest descent technique are some of the well-known methods and practices.

Cognitive Informatics, Computer Modelling, and Cognitive Science, Volume 1. DOI: https://doi.org/10.1016/B978-0-12-819443-0.00009-X © 2020 Elsevier Inc. All rights reserved.

9.3 Challenges in current network

The network has expanded greatly in the recent past in several ways. Different networks and their connections have increased significantly, and the demands on the scale of the network and for scalability have grown. In addition, the current routing structure is not well prepared to handle the service demands of several applications. Different intelligent techniques are applied to the different challenges of the current network.

9.4 Cognitive networks

A cognitive network (CN) can be regarded as a system with the ability to determine the optimal path with respect to the current network scenario, based on the few aspects it has learnt about the environment. These aspects of the network environment are based on the Quality of Service (QoS) parameters obtained from the topology. The reasoning technique used can be based on the action taken. Routing in an internetwork is a complex task of adapting to a dynamic environment. Here the three functions, namely control, management, and monitoring, prove to be ineffective: it is intractable to deal with operations in large networks, perform diagnostics, deliver dependable services, or prevent contiguous failures. Therefore the networks are required to think and learn in a nondeterministic way. The cognitive approach, hence, has a paramount role to play in overcoming these shortcomings. It is a known fact that the Internet is complex. Though it is aware of certain aspects that need self-correcting, it has no mechanism to execute them; it needs to be self-regulatory. The Internet needs to learn and adapt according to the behavior of the environment and emerging challenges. This is where the cognitive approach shows a promising direction for the future network to evolve. A CN is a network that has the ability to effectively self-regulate, learn, and evolve. For a CN to be in working mode, the existing architecture and the components involved in transmitting content through the network need to be reconsidered. Intense research carried out in the past decade has focused on nature-inspired mechanisms and on deriving useful information from known data, with a primary focus on learning and reasoning. Some of the significant algorithms that have shown progressive results for learning and reasoning are reinforcement learning, Q-learning, foraging algorithms,


evolutionary algorithms, and neural networks. An information database stores the aspects that are learnt through these; for future reference, the comprehended knowledge serves as a useful information database [4]. The information available can indicate the quality of the nodes, providing a helpful scenario for determining the optimal path. Once awareness about the environment is created, forwarding the packets to the designated node plays a vital role. An optimal routing path should be found by an effective routing algorithm. Besides this, the algorithm must be simple with low overhead, stable and robust, and must have rapid convergence and flexibility. Both network-specific and general-purpose routing algorithms exist, yet the problem of intelligent routing is still unsolved. Mobile ad hoc networks, which integrate the key need for dynamic adaptation, have been one of the focuses of research interest, and many research works show changes made to existing routing approaches to suit current demands. Evolutionary algorithms, foraging algorithms, artificial immune systems, memetic algorithms, particle swarm optimization (PSO), artificial bee colony (ABC), GA, ant-based algorithms, simulated annealing, and a list of other heuristic algorithms are some of the new concepts that have been implemented to achieve user requirements. The buzzword for routing on the internet has been adaptive routing: these algorithms adapt as the environment in which the nodes operate changes dynamically.
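To make the learning-and-reasoning idea concrete, here is a minimal Q-routing sketch in the spirit of the reinforcement-learning and Q-learning routing work mentioned above (after Boyan and Littman's Q-routing; the four-node topology, link delays, and learning constants below are illustrative assumptions, not from the chapter). Each node learns an estimated delivery time per neighbor from observed per-hop delays and forwards mostly greedily:

```python
import random

GRAPH = {                       # node -> {neighbor: link delay}
    "A": {"B": 1.0, "C": 5.0},
    "B": {"A": 1.0, "D": 1.0},
    "C": {"A": 5.0, "D": 1.0},
    "D": {},                    # destination
}
Q = {n: {nb: 10.0 for nb in nbs} for n, nbs in GRAPH.items()}
ALPHA, EPS = 0.5, 0.2           # learning rate, exploration rate

def best_estimate(node):
    """Node's current estimate of remaining delivery time to D."""
    return 0.0 if node == "D" else min(Q[node].values())

def route_packet(rng, max_hops=50):
    node = "A"
    for _ in range(max_hops):
        if node == "D":
            return
        nbs = list(GRAPH[node])
        nxt = rng.choice(nbs) if rng.random() < EPS else min(nbs, key=Q[node].get)
        # Temporal-difference update: observed hop delay + neighbor's estimate.
        target = GRAPH[node][nxt] + best_estimate(nxt)
        Q[node][nxt] += ALPHA * (target - Q[node][nxt])
        node = nxt

rng = random.Random(0)
for _ in range(500):
    route_packet(rng)

print(min(Q["A"], key=Q["A"].get))  # learned next hop from A
```

After a few hundred packets the estimates settle near the true delays, so node A learns to prefer the A-B-D path (total delay 2) over A-C-D (total delay 6).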

9.5 Need for intelligent networks

Cognition is the ability to adjust operational parameters and to be aware of network operation according to the needs of the scenario. It includes adaptation and learning techniques; the behavioral aspect is similar to the working of an active network, which makes the difference. Mitola conceptualized applying cognition and a knowledge feedback loop to networks. With advancement, the autonomic approach became a field of cognition, and its ability to solve a number of network-related issues became evident. The functionality of cognition has found diverse applications that include business aspects and technical issues. To be an autonomic network, a network needs more than the self-* properties: it should also be environment-aware and capable of self-learning. Such awareness adds to the adjustment, monitoring, and knowledge database as shown in Fig. 9.1.

9.6 Background

This section depicts the technology deployments of intelligent networks. The first inception of cognitive radio, to detect unused bandwidth, took place while trying to build intelligence into the network. Various domain-specific



FIGURE 9.1 Knowledge base design.

interventions have been explained by autonomic networking, Software Defined Networks (SDN), Future InterNet Design (FIND), and Motorola Focale. The following provides the details of these technologies:

• FIND—introduced by a major United States initiative, the National Science Foundation (NSF); works on security, optical properties, futuristic wireless networks, and economics, aiming to establish a global network within the next 15 years.
• BIONETS—inspired to enhance human senses and pervasive computing.
• HAGGLE—established to provide intermittent network connectivity, utilizing the autonomicity principle among networks.
• AKARI project in Asia—uses a clean-slate approach that can implement a new-generation network.
• AMBIENT Network—control structures for mobile networks and domains of wireless/optical networks, developed in Europe.
• Autonomic Network Architecture Project—a recent network design that is fully dynamic, autonomic, and elastic.
• CASCADAS—a framework to support deployment, execution, and composition with dynamic self-adaptation and adjustment of the autonomic component to its surroundings [5–10].

9.7 Cognition approach

Learning and reasoning are vital needs in cognitive (intelligent, heuristic) networks. The decision-making process, which involves study of the past and present network environments, requires reasoning, as shown in Fig. 9.2, in order to take a set of measures. Learning, in turn, requires collecting the data sets of past experimentation. The nodes in this type of network use knowledge as the base in order to enhance the reasoning perspectives. The CN structure aims to achieve an optimal solution, which is accomplished by the use of agents that perform the assigned tasks and store the results for future reference. Storing these results helps in better learning and decision-making in the future [11–13].

9.8 Learning and reasoning for intelligent networks

Many of these networks require a generalization mechanism. The CN agents use their derived knowledge to operate on the network. The topology of the network helps in reasoning out the appropriate decision to find the optimal path. The procedure for reasoning is derived from data obtained earlier. Since agents work from precollected data and errors cannot be neglected, probabilistic reasoning has shown promising scope for deriving the required output.


FIGURE 9.2 Cognition approach.


Reasoning is of two types: inductive and deductive. The first category is derived from a hypothesis, while the second is based on logical connections from which the conclusions are derived. Any study of a network covers only a small part of the actual topology; in such instances we need to adopt inductive reasoning, since obtaining conclusions deductively is a challenging task. When the final action to be executed is based on the available information, the second type, also known as one-shot reasoning, is utilized. If intermediate steps are used to take an action, it is called the sequential method. A cognitive method can choose any of these methods based on its requirements; the moment in time is an important factor in deciding between them. There is another approach to reasoning that follows the way the human brain functions, that is, how it supports all voluntary and involuntary processes. The study of the brain helps in better understanding the concept of reasoning involved; this concept is being mapped and used in cognitive or intelligent networks, and is itself a wide area of research. Learning for network routing is achieved in two different ways, namely centralized and distributed methods. The toolbox method, put forth by Mähönen in 2006, combines machine learning, mathematical concepts, and signal processing for the process of decision-making, which shows that these approaches cannot be neglected. Distributed artificial intelligence has provided a platform for better understanding the present internet architecture [14,15].

9.9 Human reasoning mechanism

The human brain has involuntary control over the body's daily processes. The study of the nervous system shows how humans control their actions; the brain is responsible for the consciousness and actions of a human. Looking at the brain structure at large, the reasoning perspective is understood to be a function of the frontal lobe. Humans generally perform analogical reasoning, which reflects the ability to analyze, relate, and process; it is a comparison between terms and situations that stand in a conventionalized semantic relation [16]. For example, an atom is like the solar system: there is a conventionalized logical relation among the constituents of the atom (the movement of electrons around the nucleus) that is similar to the planets revolving around the sun. The key component of analogical thought is that similar relations can be present between two different items or situations. Beyond identifying conventionalized semantic relations, we require further analysis for better understanding; this is termed analogical mapping, and it involves a one-to-one alignment of situations and elements. The ability to think analogically is a major function of the left anterior prefrontal cortex, and studies suggest that this ability is deficient in proportion to the magnitude of damage to the prefrontal cortex. A main objective of cognition research is to reason about properties. Causality is a


difficult and confusing study because, though events may be directly connected, the connection between them is not directly observable. Hume held that causality is derived from a psychological point of view. A causal inference involves both association and the interactions of agents with the world. People with preexisting mental mechanisms have the capacity to derive causal inferences even in situations of a biased nature, while others find it difficult. Dealing with samples raises its own issues, yet the cognitive system is capable of learning the structure without the need for samples; many times the weight of shared data, that is, of causes and effects, can be made known by one-shot learning. Humans also reason based on causality. These mechanisms can be further studied under the following categories:

• mental mechanisms versus logical representations
• reasoning with and without mental mechanisms
• learning mental mechanisms from data

The main area of interest in the cognition field is logic, but psychological and philosophical principles do not bind to the outward appearance of logic. The science of cognition requires more observation and a preference for semantics, since human beings are not fully accustomed to formal situations. Human reasoning is less strictly logical since it also considers the meaning of the task; standardized reasoning relates to the meaning of the information. In the field of cognitive informatics (CI), thinking is the prime need. CI is the base and philosophy of next-generation computers that can think and infer feelings; it is composed of cognitive functions and a set of descriptive mathematical concepts. Computers with cognitive capacity are being created with concept algebra, real-time process algebra, and system algebra as their processing basis, since these computers implement the very fundamental processes of natural intelligence. Natural intelligence includes thinking, formal inference, and the perception and processing of feelings [17].

9.10 Cognitive model for reasoning at human level

The development of an analogy or comparison depends extensively on the domain being represented. Domains that are very obvious invite inductive reasoning, as it requires identical properties. Abduction is reasoning from an effect back to a cause; it proceeds from conclusions that are taken to be true. Analogy focuses on the common structure shared between two domains (one the source and the other the target) and derives new learning from it. A good analogy, when implemented and verified, yields a consistent domain. Fig. 9.3 shows a cognitive model of reasoning at human level. The two main components involved are the reasoning unit and the knowledge base. The latter is similar to human memory, collecting information about the world, while the former holds the information essential for problem solving.


[Figure: the reasoning unit applies deduction, induction, abduction, and analogy, drawing on domain knowledge and conceptual knowledge.]

FIGURE 9.3 Cognitive model of human-level reasoning.

The description of concepts across the whole hierarchy is the task of conceptual knowledge. The reasoning unit consists of four mechanisms: deduction, induction, abduction, and analogical reasoning. Deduction deduces information from facts and rules; it lets the machine make explicit use of implicit knowledge, so the resulting knowledge is independent. Researchers believe that this phase plays a negligible role in human reasoning but does influence analogical reasoning. Analogical reasoning is the core of the framework.

9.11 New intelligent approach

When we observe the layers of the Open Systems Interconnection (OSI) model and still want to succeed in transmitting data intelligently, an additional layer for cognitive processing is required. This additional layer can execute the following tasks: learning, planning, and decision-making. An agent is introduced to help in the learning process; the work of the cognitive process can be shared and carried out in this layer. Reasoning and planning are the work of the agent, as shown in Fig. 9.4, and the major tasks here are done by the agents involved in the cognitive cycle. The available knowledge base is needed to determine the output, since it contains the learnt aspects, derived from a set of parameters used for network routing on the internet. The organization of learnt concepts helps in the retrieval of data, which a cognitive process can use for decision-making. This technique uses a fixed parameter set to understand the working of the learning task and to test the efficiency of the system. The learning function is selected based on the application traffic's elasticity, mean opinion score, resource allocation, fairness and utility function, reliability, and congestion-level estimates. A few of the parameters currently required for the network-routing scenario are

• availability of the bandwidth,
• delay in the arrival of the packet,
• allocating resources, and


[Figure: the cognition layer sits between the application/user/resource end-to-end goals (expressed via a specification language) and the software adaptive network of configurable network elements; a cognitive process reads a network status sensor through the network API.]

FIGURE 9.4 Layers at the cognition level.

• lifetime of the network.

Reasoning and learning help us to separate productive nodes from nonproductive nodes. One way of achieving this is to assign values to the nodes participating in network routing and to rank them; in this way we can improve the routing process. The value assigned to a node increases when the utility parameter is assessed favorably, and this method helps in grading the routing outputs. To assess the learning function, many conditions must be satisfied; these conditions can be graded over a homogeneous or high-grade network that gathers collective data at each node. It is at the nodes that decisions are made by intelligent arbitration. The output of the learning function needs to yield a proactive result, since multiple paths may be available to reach the destination but we need the most optimal one. The gradient value, once computed, must be available as a global index in the network.
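One way to picture this grading-and-ranking step is the small sketch below. The parameter names and the weights in the utility function are illustrative assumptions, not values taken from the chapter's simulation:

```python
# Hypothetical sketch: grade each node by a weighted utility over routing
# parameters, then rank candidates; the ranking acts as the global index.

def grade(node):
    # Higher bandwidth and lifetime are good; delay and load are penalties.
    return (0.4 * node["bandwidth"]
            + 0.3 * node["lifetime"]
            - 0.2 * node["delay"]
            - 0.1 * node["load"])

def rank_nodes(nodes):
    # Sort by grade, best first.
    return sorted(nodes, key=grade, reverse=True)

nodes = [
    {"id": "A", "bandwidth": 0.9, "lifetime": 0.8, "delay": 0.2, "load": 0.1},
    {"id": "B", "bandwidth": 0.5, "lifetime": 0.9, "delay": 0.1, "load": 0.3},
    {"id": "C", "bandwidth": 0.7, "lifetime": 0.4, "delay": 0.6, "load": 0.5},
]

for n in rank_nodes(nodes):
    print(n["id"], round(grade(n), 3))
```

A routing process would then prefer the highest-graded nodes as next hops, exactly as the chapter's grading approach suggests.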

9.12 Learning approaches

Learning ensures that the globally indexed value justifies the quality it represents. The value assigned to a node acts as an index; this index is made available across the whole network, and the routing process depends on it. Its magnitude specifies the router's quality, which is the knowledge of the network it possesses. A router should be an intelligent one, since it has many operations to perform based on the network parameters: it depends on the input given, the output obtained, the load on the network, and the availability of the required resources. Since many external factors influence a node in an autonomic network, the topology of the network is only temporary. The constraint that a node's data must be known to all nodes in the network defines the environmental conditions as well as the utility factors of a particular node. On the whole, the grade


of the network in the local vicinity can be implemented by one of the available heuristic algorithms, which will search for and find the optimal path and all the participating nodes in the routing. Assignment of values to the nodes depends very much on the level of intelligence chosen for an operation. When changing the level of intelligence is under consideration, specifying the rules-and-regulations protocol is a genuinely challenging task; once well-defined factors are obtained, the assignment can be made very easily.
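A minimal sketch of such a heuristic path search is given below, assuming a tiny hand-made topology in which the cost of entering a node is the complement of its grade, so the optimal path prefers highly graded nodes:

```python
import heapq

# Illustrative graded topology (grades and edges are invented for this sketch).
grades = {"S": 1.0, "A": 0.9, "B": 0.4, "D": 1.0}
edges = {"S": ["A", "B"], "A": ["D"], "B": ["D"], "D": []}

def optimal_path(src, dst):
    # Dijkstra's algorithm: expand the cheapest partial path first.
    heap = [(0.0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path, cost
        if node in seen:
            continue
        seen.add(node)
        for nxt in edges[node]:
            heapq.heappush(heap, (cost + (1.0 - grades[nxt]), nxt, path + [nxt]))
    return None, float("inf")

path, cost = optimal_path("S", "D")
print(path, round(cost, 2))  # the route through the well-graded node A wins
```

Here the path through node A (grade 0.9) beats the path through B (grade 0.4), mirroring how graded values steer the routing.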

9.13 Requirement of Bayesian approach for cognitive network

Probabilistic graphical models, which use a graphical representation over a set of variables in a distributed space, are broadly classified into two branches: Bayesian networks and Markov networks. A Bayesian network works on directed acyclic graphs, whereas the latter works on undirected graphs. A conditional probability distribution is usually represented in the directed graph structure; it can contain discrete values, represented in a table called a conditional probability table. The present work is based on the analysis of directed acyclic paths, and it follows the Bayesian network approach. The nodes are assigned values computed with the grade approach, making the path-determination process more efficient and easier. Node selection is based on a threshold choice over the QoS property. Since a dependency exists in node selection, a Bayesian network is apt for the statistical results collected. The network topology considered here changes dynamically, so developing the mathematical representation is a challenge. From the literature we know that Bayesian networks are well suited to statistical dataset analysis and to prediction; therefore the topological setup is analyzed in Bayesian network form while predicting the future solution for determining the optimal path [18,19].
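The conditional probability table idea can be sketched with a toy two-node network; the variable names (congestion, delay) and all probabilities are invented for illustration:

```python
# Two-node Bayesian network  Congestion -> Delay, with CPTs as dictionaries.

p_congestion = {True: 0.2, False: 0.8}   # P(C)
p_delay_given_c = {                      # P(D | C), the conditional probability table
    True:  {True: 0.9, False: 0.1},
    False: {True: 0.3, False: 0.7},
}

def joint(c, d):
    # Chain rule over the directed acyclic graph: P(C, D) = P(C) * P(D | C)
    return p_congestion[c] * p_delay_given_c[c][d]

# P(D = True), obtained by summing the joint over C:
p_delay = sum(joint(c, True) for c in (True, False))
print(round(p_delay, 2))  # 0.2*0.9 + 0.8*0.3 = 0.42
```

The same factorization extends to larger graphs: each node carries one CPT conditioned on its parents, and the joint distribution is the product over all nodes.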

9.13.1 The Bayesian network

A Bayesian model is a statistical model in which a graph represents the conditional dependencies among random variables. These models are popular in probability theory, Bayesian statistics, and machine learning. The model holds a specific distribution over the nodes in the complete search space of interest. In this chapter we use a Bayesian model to represent the network: an acyclic representation of the nodes with conditional dependencies among them. This model is sometimes referred to as a Bayes network, belief network, Bayesian model, or probabilistic directed acyclic graphical model. A Bayesian system may or may not make the assumption of conditional independence.


Heckerman and Wellman described the Bayesian probability of a result x as a person's degree of belief in that result. Compared with classical probability, there is no need to repeat trials to measure the probability. It was Pearl, in 1988, who coined the term "Bayesian networks," referring to a graphical structure encoding the variables and the qualitative relationships between them, together with a quantitative part encoding the probabilities over the variables. The advantages of using Bayesian networks, as discussed by Heckerman, are listed below:

1. handling of incomplete datasets
2. learning ability about causal relationships
3. combining domain knowledge and data
4. avoiding overfitting of data with efficient techniques
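Probability as a degree of belief, in Heckerman and Wellman's sense, can be illustrated with a single Bayes-rule update; the scenario (a possibly faulty link) and all numbers are illustrative:

```python
# Updating the degree of belief that a link is faulty after one observed
# packet drop, with no repeated trials required.

prior_faulty = 0.1          # P(faulty): initial degree of belief
p_drop_given_faulty = 0.8   # P(drop | faulty)
p_drop_given_ok = 0.05      # P(drop | not faulty)

# Bayes' rule: P(faulty | drop) = P(drop | faulty) P(faulty) / P(drop)
p_drop = p_drop_given_faulty * prior_faulty + p_drop_given_ok * (1 - prior_faulty)
posterior = p_drop_given_faulty * prior_faulty / p_drop
print(round(posterior, 2))  # a single observation raises the belief sharply
```

A single piece of evidence moves the belief from 0.1 to 0.64, which is exactly the one-shot character of belief updating mentioned earlier in the chapter.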

9.13.2 Importance of Bayesian model

When the variables of interest are varied, a statistical model such as the Bayesian network has proved useful, as it encodes the probabilistic relationships among them. It has proved to be an efficient model for data analysis when combined with other statistical techniques. Other commonly used prediction techniques are as follows:

• Markov network (induced dependencies cannot be represented using this approach).

• Neural network (a very slow learner).
• Approximation of human reasoning with causal knowledge, which performs inference in essentially the same way as a Bayesian network.

9.13.3 Environment in which Bayesian works the best

The following key points highlight the advantages of a Bayesian network and the environments in which it works best:

• It represents all the relationships between the nodes in the system with connecting arcs.

• It is easy to recognize the dependency between various nodes.
• When the dataset is incomplete, Bayesian networks can handle it better, as the model accounts for dependencies between all variables.

• When we are unable to measure all variables for the system under consideration due to system constraints, Bayesian networks can help to map scenarios.
• They can be used for any system model where the parameters may be known or unknown.


9.13.4 Advantages over other alternative models

The Bayesian framework offers several advantages over other modeling approaches. The most important of these are given next.

9.13.4.1 Decision theory

Since a Bayesian network is a model of a probability distribution, it is well suited to predicting the outcomes of possible actions. Hence, it can be used for risk analysis in decision theory, where it selects the outcome that maximizes the expected utility; a Bayesian network can thus serve as an optimal procedure for decision-making.
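A minimal sketch of this expected-utility decision rule follows; the actions, outcome probabilities, and utilities are hypothetical:

```python
# Each action maps outcomes to (probability, utility) pairs (invented numbers).
actions = {
    "reroute": {"success": (0.7, 10), "failure": (0.3, -5)},
    "stay":    {"success": (0.4, 10), "failure": (0.6, -5)},
}

def expected_utility(action):
    # E[U | action] = sum over outcomes of p(outcome) * utility(outcome)
    return sum(p * u for p, u in actions[action].values())

# Decision rule: pick the action that maximizes expected utility.
best = max(actions, key=expected_utility)
print(best, expected_utility(best))
```

With these numbers, rerouting (expected utility 5.5) beats staying (1.0), so the decision-theoretic rule would reroute.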

9.13.4.2 Consistent process for uncertain information

A Bayesian network provides consistent answers for uncertain inferences, where the available information is ambiguous. Given the same inputs, all alternative derivations give the same result as predicted by the Bayesian model.

9.13.4.3 Robustness property

A very small change or alteration in the network does not affect the performance of the system. This indicates that maintaining and updating the existing model is easy, as the system adapts in a smooth fashion.

9.13.4.4 Flexible

A Bayesian model is constructed using a joint probability distribution over different combinations of the domain variables. It can be used for classification tasks as well as for prediction and configuration problems.

9.13.4.5 Handling expert knowledge

Here the sample data is processed independently, and expert domain knowledge is useful because it provides prior knowledge about the sampled data: it gives an estimated weight to the knowledge accumulated for the data available.
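One common way to give expert knowledge "an estimated weight" is to treat it as pseudo-counts in a Beta prior; the sketch below assumes that interpretation, which the chapter does not prescribe, and uses invented numbers:

```python
# Expert belief that a link succeeds ~90% of the time, weighted as if the
# expert had seen 9 successes in 10 trials; observed data then shifts it.

prior_success, prior_total = 9, 10   # expert knowledge as pseudo-counts
obs_success, obs_total = 2, 10       # sampled data (illustrative)

# Posterior mean of a Beta-Binomial model: pooled successes over pooled trials.
posterior_mean = (prior_success + obs_success) / (prior_total + obs_total)
print(posterior_mean)  # 11/20 = 0.55
```

Giving the expert more pseudo-trials weights the prior more heavily; giving fewer lets the data dominate, which is precisely the "estimated weight" in the text.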

9.13.4.6 Semantic interpretation of the model parameters

The parameters used by the Bayesian model have an understandable semantic interpretation, which gives good support for the predictions later made by the model.

9.13.4.7 Different variable types

A Bayesian model can handle different variable types, but a clear understanding of those types is important, as the future prediction or output may be incorrect if they are not defined properly.

9.13.4.8 Handling missing data

Bayesian networks are useful for building probabilistic models in which only a few parameters are known. Missing data is marginalized over all possibilities of the missing values. The accuracy of the parameter choice plays an important role in determining the result; hence, identifying appropriate parameter values is important in many applications. The limitations of Bayesian networks are as follows:

• Calculation of all branches is a must to determine the probability of any one branch.

• The measure of the quality of the result is based on the prior beliefs or models.
• Calculation of the network is nondeterministic polynomial-time hard (NP-hard).
• The complexity of the Bayes network increases when constrained parameters are considered, and care must be taken while selecting them.
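The missing-data marginalization described in Section 9.13.4.8 can be sketched on a toy three-variable network; the structure (a fault causing both congestion and loss) and the numbers are illustrative:

```python
# P(f, c, l) = P(f) * P(c | f) * P(l | f): fault -> congestion, fault -> loss.
p_f = {1: 0.1, 0: 0.9}                                       # P(fault)
p_c_given_f = {1: {1: 0.7, 0: 0.3}, 0: {1: 0.2, 0: 0.8}}     # P(congestion | fault)
p_l_given_f = {1: {1: 0.6, 0: 0.4}, 0: {1: 0.05, 0: 0.95}}   # P(loss | fault)

def joint(f, c, l):
    return p_f[f] * p_c_given_f[f][c] * p_l_given_f[f][l]

# Loss is observed (l = 1) but congestion was never measured, so the missing
# variable c is summed over all of its possible values.
num = sum(joint(1, c, 1) for c in (0, 1))
den = sum(joint(f, c, 1) for f in (0, 1) for c in (0, 1))
posterior_fault = num / den
print(round(posterior_fault, 3))
```

The unmeasured variable simply disappears from the inference by summation, which is why incomplete datasets do not break the model.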

9.13.5 Collateral relationship with graded cognitive network

With the grading approach we do not require an extra controller; all we need is a table in each node to retain that node's quality. Further routing can be determined by accessing these values from the table in each node, but we require a thinking process for each segment to decide on qualifying the nodes and estimating node failures. In a graded CN, all packets try to gain the best grade and are therefore selfish. Here the nodes are directed by a quality factor determined from an information base that is decentralized, evaluated region-wise, and incorporated with SDN, whereas in SDN itself the direction on how to route packets is given by a central controller, as in Fig. 9.5.

9.14 Future trends

Table 9.1 showcases the trends that will lead to a futuristic network. While designing an intelligent network, the main focus will be on the listed properties together with their classification. Along with this, we need to manage the required QoS level, resources, and bandwidth; the selection of servers, architecture, routing protocols, and delivery pattern also plays an important role in designing an intelligent system. The future of the graded CN lies in the biological aspects of the human body [20]. The recent workshop on "The Social Biology of Microbial Communities" focused on colony communities, an idea that can be applied well to internet routing. The various formations and functions of a microbial family are driven by the interactions of many diversified microbes. Swarm algorithms already have quite a number of applications in route determination. Here sessile biofilms, surface-associated bacteria, secrete surfactants that help in the collective movement of the colony with the aid of flagella. Thus the


[Figure: a software-defined cognitive network adds a cognitive part to an improved SDN, built over the usual connectivity and the usual devices/nodes.]

FIGURE 9.5 Collateral relationship with SDN.

Table 9.1 Trends in futuristic network.

Properties        Classification
Activity          Proactive, reactive
Autonomy          Automated, autonomous, autonomic
Adaptability      Closed, open
Intelligence      Centralized, distributed
Awareness         Self and nonself-environment
Memory strength   Current state, trend, history

research interest has shifted from planktonic to surface environments relevant to microbial communities. In this method, small microorganisms cooperatively transport a larger one. Beyond swarming, such a community also disperses efficiently and has the capacity to rescue itself by moving to an untreated area where it can grow.

9.15 Research challenges

Since the current network is in revolution, autonomicity is demanded in order to manage the network; to switch among diverse strategies; and to learn, adapt, and control the network. The core research challenges that exist are:

1. Autonomic architecture: The autonomic network itself must be the organizational structure, and the combination and interactive collaboration of autonomic elements should be restricted.
2. Software engineering tools: An advanced methodology for the autonomic network is the major point in achieving a blueprint.


3. Strategies: Research on autonomic networks, in both theory and engineering, is still at an initial stage.
4. Applying agent-oriented/component-based methods: These help in a better understanding of the components and the relations between the working elements.
5. Issues in relationships among autonomic elements (AEs): As the parameters considered are dynamic in nature, issues may come up relating to the cordial working of the AEs.
6. Learning and optimization theory: For an AN, deciding on the most suitable algorithms is a very challenging issue.
7. Robustness and trust: The topology borrowed from intelligent network techniques must sustain successful delivery of data and preserve privacy.

A systematic classification methodology of approaches is needed to address each of the above research challenges, and a reference framework for future management systems is a must. Hence, autonomic networking is the dire need of the hour, as it promises a solution to the current routing problem.

9.16 Conclusion

This chapter dealt with how the study of the environment can be carried out by grading. This kind of learning helps in restricting the view to a limited search space; restricting the geographical view has been the most widely proposed solution in today's network scenario. The parameters used in the simulation give confidence that the congestion scenario is well taken care of. Whether these parameters are sufficient, or more need to be considered in various combinations depending on the network status, can be the scope of future work. Based on the experimental results and observations, the following recommendations can be carried out to make the graded CN work better:

• choice of parameters based on network status,
• multiobjective functions to prove the effectiveness of the path based on different studied topologies,

• quicker learning schemes to speed up the process of attaining knowledge, and
• a mathematical model for reasoning.

References

[1] J. Wang, Computer network routing configuration based on intelligent algorithm, in: 2017 Sixth International Conference on Future Generation Communication Technologies (FGCT), IEEE, 2017, pp. 1–4.
[2] Q. Mao, F. Hu, Q. Hao, Deep learning for intelligent wireless networks: a comprehensive survey, IEEE Commun. Surv. Tutor. 20 (4) (2018) 2595–2621.


[3] K.L.A. Yau, D. Chieng, J. Qadir, Q. Ni, Towards enhancement of communication systems, networks and applications for smart environment, J. Ambient. Intell. Humanized Comput. 10 (4) (2019) 1271–1273.
[4] K. Sooda, T.G. Nair, Competitive performance analysis of two evolutionary algorithms for routing optimization in graded network, in: 2013 Third IEEE International Advance Computing Conference (IACC), IEEE, 2013, pp. 666–671.
[5] Component-ware for autonomic situation-aware communications, and dynamically adaptable services. Available from: <http://acetoolkit.sourceforge.net/cascadas/index.php>.
[6] S. Elaluf-Calderwood, P. Dini, BIONETS economics and business simulation: an alternative approach to quantifying the added value for distributed mobile communications and exchanges, in: International Conference on Bio-Inspired Models of Network, Information, and Computing Systems, Springer, Berlin, Heidelberg, 2009, pp. 77–87.
[7] Bio-inspired services. Available from: <http://www.bionets.eu/index.php?area=21>.
[8] Autonomic opportunistic communication services. Available from: <http://www.haggleproject.org/>.
[9] NFS NeTS FIND. Available from: <http://www.nets-find.net/>.
[10] AKARI Project home page. Available from: <http://akari-project.nict.go.jp/>.
[11] A. Georgakopoulos, K. Tsagkaris, D. Karvounas, P. Vlacheas, P. Demestichas, Cognitive networks for future internet: status and emerging challenges, IEEE Veh. Technol. Mag. 7 (3) (2012) 48–56.
[12] T.R. Nair, M. Jayalalitha, S. Abhijith, Cognitive routing with stretched network awareness through hidden Markov model learning at router level, arXiv preprint arXiv:1001.3740, in: IEEE Workshop on Machine Learning in Cognitive Networks, Hong Kong, 2008.
[13] R.W. Thomas, D.H. Friend, L.A. DaSilva, A.B. MacKenzie, Cognitive networks: adaptation and learning to achieve end-to-end performance objectives, IEEE Commun. Mag. 44 (12) (2006) 51–57.
[14] D. Karaboga, C. Ozturk, A novel clustering approach: artificial bee colony (ABC) algorithm, Appl. Soft Comput. 11 (1) (2011) 652–657.
[15] Y. Liu, K.M. Passino, Biomimicry of social foraging bacteria for distributed optimization: models, principles, and emergent behaviors, J. Optimiz. Theory Appl. 115 (3) (2002) 603–628.
[16] Y. Wang, Y. Wang, S. Patel, D. Patel, A layered reference model of the brain (LRMB), IEEE Trans. Syst. Man Cybern. C: Appl. Rev. 36 (2) (2006) 124–133.
[17] Y. Wang, Cognitive informatics: towards future generation computers that think and feel, in: 2006 Fifth IEEE International Conference on Cognitive Informatics, vol. 1, IEEE, 2006, pp. 3–7.
[18] D. Koller, N. Friedman, Probabilistic Graphical Models: Principles and Techniques, MIT Press, 2009.
[19] D. Heckerman, M.P. Wellman, Bayesian networks, Commun. ACM 38 (3) (1995) 27–31.
[20] K. Sooda, B. Aishwarya, D.M. Anitha, M. Ashika, N. Harshitha, An implementation of agent based improvised artificial bee colony algorithm, in: International Conference on New Trends in Engineering & Technology (ICNTET 2018), 2018.

CHAPTER 10

Identification of face along with configuration beneath unobstructed ambiance via reflective deep cascaded neural networks

Siddhartha Choubey, Abha Choubey, Anurag Vishwakarma, Prasanna Dwivedi, Abhishek Vishwakarma and Abhishek Seth
Shri Shankaracharya Technical Campus, Bhilai, India

10.1 Introduction

Detection and alignment of faces are very important to many face applications, for example, facial recognition and expression analysis. Large variations in the visual appearance of faces, such as occlusion, pose variation, and heavy lighting, make these tasks difficult in real-world applications. As we enter the decade of human-computer interaction, face recognition is gaining considerable attention from the research community. Recognition and tracking of human faces and of strong evidence such as the nose, eyes, and mouth is crucial for fatigue analysis, interpretation, detection, and recognition, and for object-based coding of the recognized face. Detection and verification of the eyes has been examined in various studies, which can be categorized into three groups of approaches: image-based, neural, and model-based. The image-based approach takes shape, motion, or color as the important cue for eye detection; color information is employed to detect the skin region and recognize candidate eye patterns in the surrounding skin. However, this approach can only be applied to quasifrontal, close-up images of the face. Eyes also have physiological properties, and some researchers have implemented infrared illumination for eye detection by focusing an infrared beam into the eyes. The red-eye effect is caused by an infrared beam reflected back by the cornea, as is often seen in flash photographs. The pupil then appears brighter in the grayscale image, making the eye-detection process easier. However, many objects have similar reflectance properties and are difficult to distinguish from eyes; hence, the success of such a system depends heavily on a synchronization scheme, a special illumination setup, and other information about the eyes.

Cognitive Informatics, Computer Modelling, and Cognitive Science, Volume 1. DOI: https://doi.org/10.1016/B978-0-12-819443-0.00010-6
© 2020 Elsevier Inc. All rights reserved.


In the model-based approach, template matching is used for the detection of eye regions. A circle, two intersecting parabolic curves, and two points at the center of the white of the eye are used to build the eye template. The input image is matched against the template by minimizing an energy function; the parameters of the deformable eye template are determined after including some extra terms in the energy function. These template-matching techniques are very sensitive to the initial parameters when additional eye templates are used; they are time-consuming operations and do not tend to give accurate outcomes. Pattern-recognition problems can be simplified by applying artificial neural networks where traditional methodologies are unsuccessful or very complex to build; such systems can perform tasks outside the scope of traditional processors, such as parallel computing and fault tolerance. The framework proposed in Refs. [1,2] for object detection, using Haar-like features and the AdaBoost algorithm for classifier training, performed very well and was efficient in real-world usage, but in real-world experiments it underperforms under wider face variations. After this, the deformable part model (DPM) was proposed. The DPM, proposed in Refs. [3-5] for face detection, had outstanding performance compared with previously introduced methods; the downside is that it requires a very large amount of computational resources. Most recently, deep learning approaches, namely convolutional neural networks (CNNs), have become very popular in the area of computer vision. CNNs have achieved some remarkable performance benchmarks on computer vision tasks such as image classification [6] and face detection and recognition [7]. Several CNN-based face detection models have also been proposed in recent years [8].

A deep neural network (DNN) can be trained on faces and face regions to produce candidate face windows; however, this approach was not time-efficient due to the complexity of its CNN architecture. Ref. [9] used a different approach for face detection, cascaded CNNs, but it required extra computational expense for bounding-box regulation, and the correlation between the bounding box and facial landmark localization was ignored. Face alignment also attracts wide interest. Two popular categories currently exist: regression-based methods, proposed by Burgos et al. [10] and Cao et al. [15], and template-fitting approaches, proposed by Cootes et al. [11] and Zhu et al. [5]. Ref. [12] recently proposed the use of face-attribute recognition as a helper feature to improve face alignment performance using deep CNNs. Some of the notable benchmarks are the Face Detection Dataset and Benchmark (FDDB), WIDER FACE, and Annotated Facial Landmarks in the Wild (AFLW), which are discussed one by one below.

FDDB: Images and their captions, extracted from news articles, are collected in this dataset. Pose variations, lighting, background, and appearance


are displayed in this collection of images. The reasons behind such variations are motion, occlusions, and facial expressions, which are properties of the unconstrained settings of face acquisition. Annotated faces in this dataset were selected based on the results of an automatic face detector.

WIDER FACE: WIDER FACE is a face detection benchmark dataset in which data are selected from publicly available datasets. It is built on the basis of 60 event classes; different scenes typically produce different events. Each event is measured on three factors: pose, scale, and occlusion. For each factor, the classes are ranked in ascending order of detection rate and divided into three categories: easy (top 41-60 classes), medium (top 21-40 classes), and hard (top 1-20 classes).

AFLW: The full form of AFLW is Annotated Facial Landmarks in the Wild. It provides a wide-ranging, annotated collection of facial images assembled from the internet, showing appearance (e.g., pose, expression, ethnicity, age, and gender) and general image and environmental conditions in very large variety. In total, about 25k faces are annotated with up to 21 landmarks per image. AFLW answers the need for a large-scale, multi-view, real-world face database with annotated facial features. The images were assembled from Flickr using a large variety of face-relevant tags (e.g., face, mugshot, and profile face), and the downloaded sets were manually scanned for images containing faces. The database contains 25k annotated real-world facial images, of which 59% are of females and 41% of males; some images contain multiple faces. Most of the images are in color and some are in grayscale. AFLW contains a total of 380k annotated facial landmarks using a 21-point markup, with each landmark annotated for visibility. The database covers a large range of natural faces and is not limited to frontal or near-frontal views. AFLW also supplies face rectangles and ellipses; the ellipses are compatible with the FDDB protocol. AFLW is thus a wide-scale, real-world database for facial landmarks.

10.2 Machine learning life cycle Any machine learning or data science project follows the machine learning life cycle, which is an iterative process. Each step in the life cycle is defined per project and is needed to turn machine learning and artificial intelligence into practical value. The machine learning life cycle consists of five major steps, each of equal importance and performed in order.


CHAPTER 10 Identification of face along with configuration beneath

10.2.1 Collection of data Data collection is the process of gathering and measuring information about the target variable in a well-established system; this information is then used to evaluate outcomes. Data collection is a key component of research in scientific model training, and its main goal is to acquire quality evidence on which the analysis can be based. To improve an operation or create value, we first need to identify the objective. Preparing customer data for machine learning projects can be an intimidating task because of the huge number of data sources and data silos present in organizations. It is difficult to select data that predict the required target (the output the model will predict from the other inputs) and thus build an accurate model.

10.2.2 Normalization of data Machine learning needs data, which leads us to the next step: collecting the data and preparing it for use. The data should be put into a format suitable for analysis, such as a .csv file. This is the step where data scientists and analysts spend most of their time, cleaning and normalizing dirty data; how to handle missing data, incomplete data, and outliers is the data scientist's decision.

10.2.3 Modeling of data The target variable, the fact you want to understand more deeply, should be clearly determined in order to obtain insights from data with machine learning. In this step the target variable is included as a feature in the dataset during data collection. A machine learning algorithm is then run on this dataset to build a model that learns from the data. Finally, the trained model is run on data it has not been trained on in order to make the right decisions.

10.2.4 Training and feature engineering of model Models need to be interpretable: the more interpretable a model is, the easier it will be to meet regulatory requirements. Once a collection of rich, meaningful input data is available, it is time to put the predictive properties of the data to the test. These datasets are used for training and validating the model. Deriving data points from data sources through continuous testing and rapid iteration is the key component of this phase; this process is called feature engineering.

10.2.5 Production and deployment of models To improve the model, we need to implement, document, and maintain it. Model deployment requires considerable coding expertise and data science experience. In this final step, all of the work so far is combined to deploy a model to production, where its predictive outcomes are tested in the real world. The model should meet a threshold accuracy at this stage. Sometimes the data are insufficient to predict the behavior being modeled and the target accuracy is never achieved; even so, machine learning can be a tool for optimizing decision-making.

10.2.5.1 Convolution operation Convolution is a mathematical operation that acts as a primitive operator in many common image processing methods. The convolution operation "multiplies together" two arrays of numbers, generally of different sizes but of the same dimensionality, to produce a third array of the same dimensionality. It allows the implementation of operators whose output pixel values are simple linear combinations of certain input pixel values. In the context of image transformation, one of the input arrays is typically a grayscale image, and the other input array, generally much smaller and also two-dimensional, is called the kernel. The convolution operation is performed by moving the kernel over the image, usually beginning at the top-left corner and then moving through all the positions at which the kernel fits completely within the boundaries of the image. Each kernel position corresponds to a single output pixel, whose value is computed by multiplying each kernel value with the underlying image pixel value for every cell in the kernel and then summing all these products.

10.2.5.2 Kernel A convolution kernel makes it possible to perform a very wide range of operations: calculating derivatives, detecting edges, applying blurs, and so on. The convolution kernel is a small matrix that slides over an image and performs its operation at each position. An anchor point is used to determine the position of the kernel with respect to the image (Fig. 10.1).

FIGURE 10.1 Kernel with 0 as an anchor cell.


Table 10.1 A small size image (left) and kernel (right).

Image:
I11 I12 I13 I14 I15 I16 I17 I18 I19
I21 I22 I23 I24 I25 I26 I27 I28 I29
I31 I32 I33 I34 I35 I36 I37 I38 I39
I41 I42 I43 I44 I45 I46 I47 I48 I49
I51 I52 I53 I54 I55 I56 I57 I58 I59
I61 I62 I63 I64 I65 I66 I67 I68 I69

Kernel:
K11 K12 K13
K21 K22 K23

For example, the output pixel O57 is computed as:

O57 = I57 K11 + I58 K12 + I59 K13 + I67 K21 + I68 K22 + I69 K23

The output image contains M − m + 1 rows and N − n + 1 columns, for an image with M rows and N columns and a kernel of dimension m × n. Mathematically, the convolution is written as:

O(i, j) = Σ_{k=1}^{m} Σ_{l=1}^{n} I(i + k − 1, j + l − 1) K(k, l)

where i runs from 1 to M − m + 1 and j runs from 1 to N − n + 1. Table 10.1 illustrates a small size image (left) and kernel (right) to depict the convolution. The labels within each grid square are used for identification of each square.
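As a sketch of the formula above, a deliberately naive NumPy implementation of "valid" convolution might look as follows (the function name is illustrative, not from the text):

```python
import numpy as np

def convolve2d_valid(image, kernel):
    """'Valid' convolution as described in the text: slide an m x n kernel
    over an M x N image and return an (M - m + 1) x (N - n + 1) output."""
    M, N = image.shape
    m, n = kernel.shape
    out = np.zeros((M - m + 1, N - n + 1))
    for i in range(M - m + 1):
        for j in range(N - n + 1):
            # multiply the kernel with the underlying image patch and sum
            out[i, j] = np.sum(image[i:i + m, j:j + n] * kernel)
    return out

img = np.arange(24, dtype=float).reshape(4, 6)   # a 4 x 6 "image"
ker = np.ones((2, 3)) / 6.0                      # a 2 x 3 averaging kernel
print(convolve2d_valid(img, ker).shape)          # (3, 4)
```

Note that, strictly speaking, mathematical convolution flips the kernel first; what is computed here (and in most CNN libraries) is cross-correlation, which matches the formula in the text.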

10.2.5.3 Pooling Convolutional networks generally include local or global pooling layers. Pooling layers reduce the dimension of the data by combining the outputs of a cluster of neurons in one layer into a single neuron in the next layer. Local pooling combines small clusters, typically 2 × 2, while global pooling acts on all the neurons of a convolutional layer. In addition, pooling may compute a maximum or a mean. Pooling is of two types:

• Max pooling: Max pooling can be defined as a sample-based discretization process. It is used for dimensionality reduction of an input representation (an image, the output matrix of a hidden layer, etc.) and allows assumptions to be made about the features contained in the binned subregions. It helps prevent overfitting, as it provides an abstracted form of the representation. The reduction in the number of parameters to learn also reduces the computational cost and gives the internal representation a basic translation invariance. Max pooling is performed by applying a max filter to nonoverlapping subregions of the initial representation. [Note: Image matrix of max pooling to be added.]


• Average pooling: An average pooling layer performs downsampling by dividing the input into rectangular pooling regions and computing the mean value of each region.

10.2.5.4 Difference between average pooling and max pooling The difference between average and max pooling lies in how they treat the downsampled "images" that remain after the convolutional layers. In classifying cats versus dogs, applying average pooling over the image answers "how doggy or catty is this image overall"; since the dog or cat typically occupies a large part of such images, this makes sense. Using max pooling, we instead find "the most doggy or catty" part of the image, which will probably not be as useful there. However, max pooling may prove important in fields such as fisheries competitions, where the fish occupies only a relatively small part of the picture. Note: Images of matrix showing the max pooling and average pooling.
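Both pooling modes described above can be sketched with one small NumPy function (the function name is illustrative; non-overlapping blocks are assumed, as in the text):

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    """Non-overlapping pooling over size x size blocks of a 2-D array."""
    h, w = x.shape
    h2, w2 = h // size, w // size
    # reshape into (h2, size, w2, size) blocks, then reduce each block
    blocks = x[:h2 * size, :w2 * size].reshape(h2, size, w2, size)
    if mode == "max":
        return blocks.max(axis=(1, 3))
    return blocks.mean(axis=(1, 3))

x = np.array([[1., 2., 5., 6.],
              [3., 4., 7., 8.],
              [0., 0., 2., 2.],
              [0., 4., 2., 2.]])
print(pool2d(x, 2, "max"))   # max of each 2 x 2 block: [[4, 8], [4, 2]]
print(pool2d(x, 2, "mean"))  # mean of each block: [[2.5, 6.5], [1, 2]]
```

The same input thus yields different summaries: max pooling keeps the strongest response per region, while average pooling keeps the overall level.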

10.2.5.5 Fully connected layer Fully connected layers are formed by connecting every neuron in one layer to every neuron in another layer. The principle is the same as in a traditional multilayer perceptron neural network. The feature vector passes through a fully connected layer for classification of the images. Note: Image of a fully connected layer.

10.2.5.6 Classification of images The classification process categorizes the pixels of a digital image into one of several classes, or "themes." These data are then used to produce thematic maps of the land cover observed in an image. Typically, data with different spectral attributes are used for classification, and the spectral pattern within the data for each pixel is categorized numerically. The aim of image classification is to identify and portray, as a unique gray level (or color), the features occurring in an image in terms of the classes these features actually represent on the ground. Image classification is the most crucial part of image analysis. The two main methods of classification are supervised classification and unsupervised classification.

10.2.5.6.1 Supervised classification Supervised classification involves identifying samples of the information classes of interest (i.e., land cover types) in the image; these are termed "training sites." The image processing software system then develops a statistical characterization of the reflectance for each information class. This stage is termed "signature analysis" and involves developing a characterization


that may be as simple as the mean reflectance on each band, or as complicated as detailed analyses of the mean, variance, and covariance over all bands. After every information class has been statistically characterized, the image is classified by examining the reflectance of each pixel and deciding which signature it resembles most. Supervised classification further involves

• maximum likelihood classification
• minimum distance classification
• parallelepiped classification

10.2.5.6.2 Maximum likelihood classification Maximum likelihood classification is a statistics-based decision criterion that assists in classifying overlapping signatures; each pixel is assigned to the class for which it has the highest probability. This classifier has been found to give more accurate results than parallelepiped classification, but the extra computations make it much slower. The word "accurate" presumes that the input data classes have a Gaussian distribution and that the signatures were well selected; this assumption is not always safe.

10.2.5.6.3 Minimum distance classification The minimum distance classifier classifies the image data in the database file using a set of up to 256 possible signature class segments, as specified by the signature parameter. Every segment is described by its signature, that is, the signature data stored for a particular class. In every class signature segment, only the mean vector is used; other data, such as standard deviations and covariance matrices, are ignored (though the maximum likelihood classifier uses them). The classification results in a theme map written to a specified database image channel. In the theme map, each class is uniquely encoded in grayscale, with the code for a class specified when the class signatures are created. Transferring the theme map to the display then loads a pseudo-color table so that each class is represented by a different color.
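The minimum distance classifier above, and the maximum likelihood classifier of the previous subsection, can be contrasted in a small NumPy sketch (function names and the toy two-class data are illustrative; a Gaussian assumption is made for the likelihood version, as in the text):

```python
import numpy as np

def nearest_mean_classify(x, means):
    """Minimum distance classifier: only per-class mean vectors are used."""
    distances = [np.linalg.norm(x - mu) for mu in means]
    return int(np.argmin(distances))

def max_likelihood_classify(x, means, covs):
    """Maximum likelihood classifier under a Gaussian assumption: the class
    covariance matrices are also used, at extra computational cost."""
    scores = []
    for mu, cov in zip(means, covs):
        diff = x - mu
        _, logdet = np.linalg.slogdet(cov)
        # Gaussian log-likelihood, dropping the constant term
        scores.append(-0.5 * (logdet + diff @ np.linalg.inv(cov) @ diff))
    return int(np.argmax(scores))

means = [np.array([0.0, 0.0]), np.array([4.0, 4.0])]
covs = [np.eye(2), 4.0 * np.eye(2)]
x = np.array([1.0, 1.0])
print(nearest_mean_classify(x, means))          # 0
print(max_likelihood_classify(x, means, covs))  # 0
```

When the class covariances differ greatly, the two classifiers can disagree; the extra matrix operations are exactly the slowdown the text attributes to maximum likelihood classification.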

10.2.5.6.4 Parallelepiped classification In this classifier the class limits are stored in each signature class and used to determine whether a given pixel belongs to the class or not. The class limits specify the dimensions of each side of a parallelepiped that surrounds the mean of the class in feature space. Only a pixel falling inside the parallelepiped is assigned to the class. However, a pixel may belong to more than one class if the classes overlap, and a pixel is assigned to the null class if it fails to fall within any class. The parallelepiped classifier is used when speed is required. Its drawback in many cases is a lack of accuracy, with a large number of pixels classified as ties.


10.2.5.6.5 Unsupervised classification In this method a large set of unknown pixels is classified into a number of classes based on the natural groupings present in the image values. It does not depend on analyst-specified training data. The core concept is that values belonging to a given cover type should lie close to each other in the measurement space (i.e., have similar gray levels), whereas data in different classes should be comparatively well separated.

10.2.5.6.6 Rich training data Data augmentation is a commonly used technique for increasing the size of an image dataset for an image classification task. It involves generating new images by transforming the ones in the dataset (rotating, translating, and/or scaling them, or adding some noise). This can be performed in two ways. The first, offline augmentation, performs all the necessary transformations in advance, increasing the size of the dataset. This method is mostly used for comparatively small datasets, since it increases the size of the dataset by a factor equal to the number of transformations performed. The second, online augmentation, performs the transformations on each mini-batch just before feeding it to the machine learning model. It is used with larger datasets, where the explosive increase in size cannot be afforded; instead, transformations are applied to the mini-batches fed to the model. Some machine learning frameworks support online augmentation, which can be accelerated on the graphics processing unit (GPU).

10.3 Popular augmentation techniques 10.3.1 FLIP Images can be flipped horizontally and vertically. A vertical flip is equivalent to first rotating an image by 180 degrees and then performing a horizontal flip. By applying the FLIP technique we can achieve a data augmentation factor of 2× to 4×. [Note: Images of FLIP tech.]

10.3.2 Rotation This operation does not necessarily preserve the dimensions of the image. A square image keeps its size when rotated at right angles, while a rectangular image must be rotated by 180 degrees to conserve its dimensions. Rotating the image by finer angles ends up


changing the final image size. By applying the rotation technique, we can achieve a data augmentation factor of 2× to 4×.

10.3.3 Scale An image can be scaled inward or outward. In outward scaling the dimensions of the output image are greater than those of the original, and a section of the same dimensions as the original image is cut out of the new image. In inward scaling the image size is reduced, forcing us to make assumptions about what lies beyond the boundary. Here the data augmentation factor is arbitrary.

10.3.4 Crop In cropping, a random section of the initial image is sampled, and this section is then resized to match the initial image size. This procedure is generally referred to as random cropping. Here the data augmentation factor is arbitrary.

10.3.5 Translation In translation the image is moved along the X or Y direction (or both). This augmentation proves helpful when objects may be found almost anywhere in the image, as it forces the CNN to look everywhere. Here the data augmentation factor is arbitrary.

10.3.6 Gaussian noise Overfitting occurs when a neural network learns high-frequency features (patterns occurring with high frequency) that are not useful. Gaussian noise, which has zero mean, contains data points at all frequencies and effectively disturbs the high-frequency features. Adding it also distorts the lower frequency components (the intended data), but the neural network can learn to see past that. Adding just the right amount of noise enhances the learning capability.
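The flip, rotation, and Gaussian-noise augmentations above can be sketched in a few lines of NumPy (a toy example; the noise standard deviation of 0.1 and the function name are arbitrary choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Generate augmented copies of one image: flips, a right-angle
    rotation, and additive zero-mean Gaussian noise."""
    return [np.fliplr(img),   # horizontal flip
            np.flipud(img),   # vertical flip
            np.rot90(img),    # 90-degree rotation (square images keep their size)
            img + rng.normal(0.0, 0.1, img.shape)]  # zero-mean Gaussian noise

img = rng.random((8, 8))
copies = augment(img)
print(len(copies))  # 4 extra training samples produced from one image
```

Applied offline, such a routine multiplies the dataset size by the number of transformations; applied online, it would be run on each mini-batch instead.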

10.4 Localization Object localization helps the network identify the location of an object and put a bounding box around it.


For an object localization problem, we have an input image that goes through a ConvNet, producing a vector of features that is fed to a softmax to classify the object (e.g., with four classes for pedestrian/car/bike/background). To localize those objects in the image as well, the neural network is modified to have a few more output units that encode a bounding box. In particular, four more numbers are added, which identify the x and y coordinates of the upper left corner and the height and width of the bounding box (bx, by, bh, and bw). The neural network now outputs these four numbers, plus the probability of the class labels (also four in our case). Therefore the target label will be:

y = [pc, bx, by, bh, bw, c1, c2, c3]

where pc represents the confidence score that an object is in the image; it answers the question "is there an object?" Here c1, c2, and c3 represent the presence of an object of class 1, 2, or 3, respectively, so they tell us which object it is. Finally, bx, by, bh, and bw give the coordinates of the bounding box around the detected object. If an image has a person (class 1), the target label will be:

y = [1, bx, by, bh, bw, 1, 0, 0]


In case no object is detected, the output is simply:

y = [0, ?, ?, ?, ?, ?, ?, ?]

The positions occupied by the question marks are insignificant in this vector; technically, the network will output arbitrary numbers or NaN in these positions. This technique is also used for "landmark detection" or "classification with localization." In that case the output is even bigger, since the network outputs the x and y coordinates of several important points within the image.
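A hypothetical helper that builds this eight-element target vector, under the conventions described above (pc first, then the box, then c1 through c3, with person as class 1), might look like:

```python
import numpy as np

def make_target(obj_present, box=None, class_id=None):
    """Build the target label y = [pc, bx, by, bh, bw, c1, c2, c3].
    When no object is present, only pc matters; the remaining "don't care"
    entries (the question marks in the text) are set to 0 here."""
    y = np.zeros(8)
    if obj_present:
        y[0] = 1.0             # pc: an object is in the image
        y[1:5] = box           # bx, by, bh, bw
        y[4 + class_id] = 1.0  # one-hot c1, c2, or c3 (class_id in 1..3)
    return y

print(make_target(True, box=[0.5, 0.6, 0.3, 0.2], class_id=1))  # person example
print(make_target(False))  # no object: pc = 0, everything else "don't care"
```

During training, a loss that ignores the "don't care" positions whenever pc = 0 would match the behavior the text describes.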

10.5 Methodology 10.5.1 Preprocessing Resizing:
1. The first step is to capture an image that is used as input. The image is passed to a program that creates an image pyramid: the model creates various copies of the same image at different sizes. This step is necessary to detect faces of all different sizes in the input image.
2. The stage 1 kernel is then used to scan each copy of the scaled images for faces, typically starting from the upper left corner of the image and working all the way to the bottom right corner. The 12 × 12 kernel starts by scanning the image section from (0, 0) to (12, 12), which is then passed to the network.
3. If the network recognizes a face, it returns the bounding box coordinates, and the same procedure is repeated with the other sections of the image: the 12 × 12 kernel is shifted with a stride of 2, covering the sections from (0 + 2x, 0 + 2y) to (12 + 2x, 12 + 2y).
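The pyramid-and-sliding-window procedure above can be sketched as follows. The pyramid scale factor of 0.7 is an assumption for illustration, since the text does not specify one, and only window positions (not pixel contents) are enumerated:

```python
import numpy as np

def pyramid_windows(img, win=12, stride=2, scale=0.7, min_size=12):
    """Enumerate (level, x, y) positions of win x win windows over an
    image pyramid, mirroring the preprocessing steps in the text."""
    h, w = img.shape[:2]
    level = 0
    windows = []
    while min(h, w) >= min_size:
        # slide the win x win kernel with the given stride over this level
        for y in range(0, h - win + 1, stride):
            for x in range(0, w - win + 1, stride):
                windows.append((level, x, y))
        # next pyramid level: shrink the image (sizes only, in this sketch)
        h, w = int(h * scale), int(w * scale)
        level += 1
    return windows

wins = pyramid_windows(np.zeros((24, 24)), win=12, stride=2)
print(len(wins))  # 58 candidate windows across the pyramid levels
```

Each enumerated window would then be cropped and passed to the stage 1 network, which is why smaller pyramid levels let a fixed 12 × 12 kernel cover larger faces.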


10.5.2 Detection phase Architecture:
1. Neural network 1 (NN1): The image sections passed through the stage 1 kernel are used as input to the stage 1 CNN, called NN1. NN1 consists of three convolutional layers, each followed by a rectified linear unit (ReLU) layer to increase the nonlinearity of the input, and one max-pool layer for downscaling the input. This network produces regression vectors and candidate windows; the vectors are then used for candidate calibration, after which nonmaximum suppression is applied to merge highly overlapping candidates.
2. NN2: All candidates are then fed to another CNN, called NN2, which is the same as NN1 but with more layers. It takes the bounding boxes as input and further refines the output of the previous network (NN1). The input to this network is of size 24 × 24 × 3.
3. NN3: This output network, NN3, takes as input the bounding box vectors of the previous network (NN2). It marks the facial landmarks and splits into three branches at the end, giving three different outputs (Figs. 10.2-10.4).
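The nonmaximum suppression step used after NN1 can be sketched as a greedy procedure over candidate windows. This is the standard formulation, not necessarily the authors' exact implementation; box format and threshold are illustrative:

```python
import numpy as np

def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy nonmaximum suppression: keep the highest-scoring candidate
    window and drop the others that overlap it by more than `thresh`."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while len(order) > 0:
        best = order[0]
        keep.append(int(best))
        order = [i for i in order[1:] if iou(boxes[best], boxes[i]) <= thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]; the near-duplicate of box 0 is merged away
```

The surviving boxes are the candidates passed on to NN2 for refinement.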

FIGURE 10.2 P-Net.

FIGURE 10.3 R-Net.


FIGURE 10.4 O-Net.

Training: Our networks are trained on three tasks:
1. Classification: A classifier categorizes the input as either "face" or "not a face," so this is a binary classification problem. For each sample xj, the cross-entropy loss is

Lj = −(yj log(Pj) + (1 − yj) log(1 − Pj))

where Pj is the probability that the sample xj is a face, and yj ∈ {0, 1} is the ground truth label for each training sample.
2. Bounding box regression: This is a regression problem; we predict the bounding box for every sample xj. For every xj the Euclidean loss is

Lj = ‖ŷj − yj‖₂²

where ŷj is the output obtained from the CNN and yj is the ground truth bounding box, which includes the height, width, and top-left corner. Ordinary least squares is used as the loss function to predict a weight coefficient for every feature; it is mathematically simple and computationally cheap, but has a high tendency to overfit datasets with large numbers of features. BBR therefore uses ridge regression to prevent this tendency to overfit, shrinking the weight coefficients with a regularization penalty λ. The greater λ is, the more strongly high-value coefficients are penalized and pushed toward zero.
3. Localization of face landmarks: Face landmark localization is also treated as a regression problem. As with the bounding box, the Euclidean loss to be minimized is

Lj = ‖ŷj − yj‖₂²

where ŷj is the landmark position deduced from the network and yj is the true (ground truth) coordinate.
4. Rich training data: Each of the CNNs performs different tasks, so we have used a variety of training images in the learning process. The data consist of images of faces, non-faces, and semi-aligned faces. In some cases, some of the loss functions are not used.


5. Online hard example mining (OHEM): OHEM is a method for optimizing a classifier so that it performs better even on challenging examples, at little extra computational cost, improving overall network performance. For instance, suppose the training set contains images with one or more human faces, each labeled with bounding boxes. To train, the classifier needs both positive and negative training examples: positive refers to a person (face) and negative to a non-person (not a face). For hard negative mining, the idea is to generate a number of random bounding boxes, none of which overlap the positive (face) labels in the frame. These extra bounding boxes can then be labeled negative (not a face), and the classifier is trained with both positive and negative labels. This method increases the overall network performance.
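The loss functions from training tasks 1 to 3 above can be written directly (a NumPy sketch; the small epsilon guard against log(0) is an implementation detail added here, not part of the text's formulas):

```python
import numpy as np

def cross_entropy(p, y):
    """Binary cross-entropy for face / not-a-face classification:
    L = -(y * log(p) + (1 - y) * log(1 - p))."""
    eps = 1e-12  # guard against log(0)
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def euclidean_loss(y_hat, y):
    """Squared L2 loss used for both bounding box and landmark regression:
    L = || y_hat - y ||_2^2."""
    d = np.asarray(y_hat, dtype=float) - np.asarray(y, dtype=float)
    return float(d @ d)

# A confident, correct face prediction incurs a small classification loss
print(round(float(cross_entropy(0.9, 1)), 4))  # 0.1054
# A bounding box off by 0.1 in one coordinate incurs a loss of 0.1^2
print(euclidean_loss([0.5, 0.5, 0.2, 0.2], [0.4, 0.5, 0.2, 0.2]))
```

In a multi-task setup such as this one, the per-task losses would be summed (possibly with task weights), with unused losses skipped for samples where they do not apply, as noted in item 4.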

10.6 Experiments 10.6.1 Training data Since face detection and alignment are performed together, our training process uses four different kinds of annotations:
Negatives: regions whose Intersection over Union (IoU) ratio with any ground truth face is less than 0.3.
Positives: regions whose IoU with a ground truth face is above 0.65.
Part faces: regions whose IoU with a ground truth face is between 0.4 and 0.65.
Landmark faces: faces labeled with five landmark positions.
Negatives and positives are used in the classification task, positives and part faces are used for bounding box regression, and landmark faces are consumed in facial landmark localization. The training data for each network are as follows:
NN1: We randomly crop a large number of patches from WIDER FACE [13] to collect positives, negatives, and part faces, and crop faces from CelebA [14] as landmark faces.
NN2: The first stage of our framework is used to detect faces from WIDER FACE [13] to collect positives, negatives, and part faces, while landmark faces are detected from CelebA [14].
NN3: This is similar to NN2, but the first two stages are used to detect faces.
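The IoU thresholds above translate into a simple labeling rule for cropped regions. This is a sketch; the "unused" bucket for IoU values in [0.3, 0.4) is an inference from the thresholds as given, since the text does not name that range:

```python
def label_region(iou_value):
    """Assign a training annotation to a cropped region from its IoU with
    the nearest ground truth face, using the thresholds in the text."""
    if iou_value < 0.3:
        return "negative"
    if iou_value >= 0.65:
        return "positive"
    if iou_value >= 0.4:
        return "part face"
    return "unused"  # IoU in [0.3, 0.4) falls in neither category

print([label_region(v) for v in (0.1, 0.5, 0.7, 0.35)])
# ['negative', 'part face', 'positive', 'unused']
```

Landmark faces are labeled separately, since they depend on landmark annotations rather than on an IoU threshold.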


10.6.1.1 The effectiveness of online hard sample mining To evaluate the contribution of the proposed online hard sample mining strategy, we train two NN3 networks and compare their loss curves. To make the comparison more direct, we train NN3 only for the face classification task. All training parameters, including the network initialization, are the same for the two networks, and a fixed learning rate is used to ease the comparison.

10.6.1.2 Effectiveness of joint detection and alignment of faces To assess the contribution of joint face detection and alignment, we evaluate the performance of two different NN3 networks on FDDB (with the same NN1 and NN2 for fair comparison). The performance of bounding box regression in these two NN3 networks is also compared.

10.6.1.3 Face detection evaluation To evaluate the performance of the face detection model, the proposed method was compared against exceptionally strong methods on the FDDB and WIDER FACE datasets. The method consistently outperforms all previous approaches by a large margin on both benchmarks. We also evaluate the approach on challenging photos.

10.6.1.4 Face alignment evaluation We test the following methods against our method for face alignment: RCPR [10], TSPM [16], ESR [15], CDM [17], SDM [18], and TCDCN [12]. Our method failed to detect faces in 15 images; for these we crop the central region and treat it as the input for NN3. The errors between predicted landmarks and the ground truths are normalized with respect to the interocular distance. Our method outperforms the most successful and efficient methods by a sufficient margin.

10.6.1.5 Runtime efficiency Given its multistage structure, this method achieves very fast speed in the joint face detection and alignment task: 16 fps on a 2.00 GHz CPU and 99 fps on a GPU (AMD Radeon). Our implementation is based on Python code.

10.7 Conclusion Here, we have demonstrated a framework for face detection and alignment with very high accuracy using multiple CNNs. Experiments against the best-known methods confirm that the method proposed in this chapter is more efficient than most other methods known to be very efficient, and it surpasses them across several challenging benchmarks. We have tested and compared our method on three popular benchmarks:
1. FDDB
2. WIDER FACE
3. AFLW (for face alignment)
In the future, we plan to exploit the existing correlation between the various face analysis tasks to improve our method even further.

References
[1] P. Viola, M.J. Jones, Robust real-time face detection, Int. J. Comput. Vis. 57 (2) (2004) 137-154.
[2] B. Yang, J. Yan, Z. Lei, S.Z. Li, Aggregate channel features for multi-view face detection, in: IEEE International Joint Conference on Biometrics, 2014, pp. 1-8.
[3] M. Mathias, R. Benenson, M. Pedersoli, L. Van Gool, Face detection without bells and whistles, in: European Conference on Computer Vision, 2014, pp. 720-735.
[4] J. Yan, Z. Lei, L. Wen, S. Li, The fastest deformable part model for object detection, in: IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 2497-2504.
[5] X. Zhu, D. Ramanan, Face detection, pose estimation, and landmark localization in the wild, in: IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 2879-2886.
[6] A. Krizhevsky, I. Sutskever, G.E. Hinton, ImageNet classification with deep convolutional neural networks, in: Advances in Neural Information Processing Systems, 2012, pp. 1097-1105.
[7] Y. Sun, Y. Chen, X. Wang, X. Tang, Deep learning face representation by joint identification-verification, in: Advances in Neural Information Processing Systems, 2014, pp. 1988-1996.
[8] B. Yang, J. Yan, Z. Lei, S.Z. Li, Convolutional channel features, in: IEEE International Conference on Computer Vision, 2015, pp. 82-90.
[9] H. Li, Z. Lin, X. Shen, J. Brandt, G. Hua, A convolutional neural network cascade for face detection, in: IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 5325-5334.
[10] X.P. Burgos-Artizzu, P. Perona, P. Dollár, Robust face landmark estimation under occlusion, in: IEEE International Conference on Computer Vision, 2013, pp. 1513-1520.
[11] T.F. Cootes, G.J. Edwards, C.J. Taylor, Active appearance models, IEEE Trans. Pattern Anal. Mach. Intell. 23 (6) (2001) 681-685.
[12] J. Zhang, S. Shan, M. Kan, X. Chen, Coarse-to-fine auto-encoder networks (CFAN) for real-time face alignment, in: European Conference on Computer Vision, 2014, pp. 1-16.
[13] S. Yang, P. Luo, C.C. Loy, X. Tang, WIDER FACE: a face detection benchmark, arXiv preprint arXiv:1511.06523 (2015).


[14] Z. Liu, P. Luo, X. Wang, X. Tang, Deep learning face attributes in the wild, in: IEEE International Conference on Computer Vision, 2015, pp. 3730-3738.
[15] X. Cao, Y. Wei, F. Wen, J. Sun, Face alignment by explicit shape regression, Int. J. Comput. Vis. 107 (2) (2012) 177-190.
[16] Q. Zhu, M.C. Yeh, K.T. Cheng, S. Avidan, Fast human detection using a cascade of histograms of oriented gradients, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2006, pp. 1491-1498.
[17] X. Yu, J. Huang, S. Zhang, W. Yan, D. Metaxas, Pose-free facial landmark fitting via optimized part mixtures and cascaded deformable shape model, in: IEEE International Conference on Computer Vision, 2013, pp. 1944-1951.
[18] X. Xiong, F. De la Torre, Supervised descent method and its applications to face alignment, in: IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 532-539.

Further reading
D. Chen, S. Ren, Y. Wei, X. Cao, J. Sun, Joint cascade face detection and alignment, in: European Conference on Computer Vision, 2014, pp. 109-122.
S.S. Farfade, M.J. Saberian, L.J. Li, Multi-view face detection using deep convolutional neural networks, in: ACM International Conference on Multimedia Retrieval, 2015, pp. 643-650.
G. Ghiasi, C.C. Fowlkes, Occlusion coherence: detecting and localizing occluded faces, arXiv preprint arXiv:1506.08347 (2015).
V. Jain, E.G. Learned-Miller, FDDB: A Benchmark for Face Detection in Unconstrained Settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst, 2010.
M. Köstinger, P. Wohlhart, P.M. Roth, H. Bischof, Annotated facial landmarks in the wild: a large-scale, real-world database for facial landmark localization, in: IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2011, pp. 2144-2151.
Luxand, Incorporated: Luxand FaceSDK. Available from: http://www.luxand.com/.
M.T. Pham, Y. Gao, V.D.D. Hoang, T.J. Cham, Fast polygonal integration and its application in extending Haar-like features to improve object detection, in: IEEE Conference on Computer Vision and Pattern Recognition, 2010, pp. 942-949.
R. Ranjan, V.M. Patel, R. Chellappa, A deep pyramid deformable part model for face detection, in: IEEE International Conference on Biometrics Theory, Applications and Systems, pp. 1-8.
S. Yang, P. Luo, C.C. Loy, X. Tang, From facial parts responses to face detection: a deep learning approach, in: IEEE International Conference on Computer Vision, 2015, pp. 3676-3684.
C. Zhang, Z. Zhang, Improving multiview face detection with multi-task deep convolutional neural networks, in: IEEE Winter Conference on Applications of Computer Vision, 2014, pp. 1036-1041.
Z. Zhang, P. Luo, C.C. Loy, X. Tang, Facial landmark detection by deep multi-task learning, in: European Conference on Computer Vision, 2014, pp. 94-108.

CHAPTER 11

Setting up a neural machine translation system for English to Indian languages

Sandeep Saini1 and Vineet Sahula2

1 Department of Electronics and Communication Engineering, Myanmar Institute of Information Technology, Mandalay, Myanmar
2 Department of Electronics and Communication Engineering, Malaviya National Institute of Technology, Jaipur, India

11.1 Introduction

Machine translation (MT) was one of the earliest tasks taken up by computer scientists, and research in this field has been going on for the last five decades. Over these years, linguists and computer engineers have worked together to make remarkable progress toward the current state of MT system design. The MT task was initially handled with dictionary-matching techniques and slowly upgraded to rule-based approaches. During the last two decades, most MT systems were based on the statistical MT (SMT) approach. In these systems [1,2] the basic units of the translation process are phrases and sentences, where a phrase can be composed of one or more words. Most conventional translation systems use Bayesian inference to estimate translation probabilities for pairs of phrases, where one phrase belongs to the source language and the other to the target language. Words are considered in groups of n: the n words in a source sentence are mapped to m words in the target sentence. Every time the same pairing occurs in the dataset, the probability of translating that source group to the corresponding target group increases. For example, with n = 3 and sentences such as "My name is Ram," "My name is Shyam," and "My name is Meera," the three words "My name is" are repeated and would match a certain group of words in the target sentences; thus the probability of translating "My name is" to that group of target words is higher. In a real scenario, since the probability of any individual phrase is very low, pairing and

Cognitive Informatics, Computer Modelling, and Cognitive Science, Volume 1. DOI: https://doi.org/10.1016/B978-0-12-819443-0.00011-8 © 2020 Elsevier Inc. All rights reserved.


predicting the correct pair is very difficult in these systems. To improve the probability of a certain pair of phrases, increasing the size of the dataset is one of the most feasible solutions. Given the limitations of conventional MT systems [3] and their dependence on huge datasets, there is a demand for alternate MT methods. In recent years the research community has shifted its translation research focus to neural MT (NMT). Mikolov et al. [4] proposed a recurrent neural network (RNN)-based language model that encodes natural language as a sequence of vectors, which are fed to the neural network for training; the model was able to perform basic NLP tasks. Later, Sutskever et al. [5] proposed a sequence-to-sequence learning mechanism using long short-term memory (LSTM) models. The LSTM is an extended and modified version of the RNN that is capable of creating language models, and this neural network-based MT system had eight layers of encoder and eight layers of decoder. The core idea of NMT is to use deep learning and representation learning. NMT models require only a fraction of the memory needed by traditional SMT models. Furthermore, unlike conventional translation systems, all parts of the neural translation model are trained jointly (end-to-end) to maximize translation performance [5,6]. NMT tends to require a lot of computing power, which means it is a great technique only when enough time or computing resources are available. Another issue with older NMT systems was inconsistency in handling rare words: since such inputs appear sparsely in the training data, learning and inference were not efficient. By using LSTM models with eight layers of encoder and decoder, this system removes these errors to a large extent. A third major issue with NMT was that the system used to forget words over long sequences; this issue is also resolved in the eight-layer approach.
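The relative-frequency estimate described in Section 11.1 can be made concrete with a short sketch. The toy phrase pairs and the Hindi glosses below are invented for the example; real SMT systems count over large aligned corpora:

```python
from collections import Counter

def phrase_translation_probs(phrase_pairs):
    """Estimate P(target_phrase | source_phrase) by relative frequency,
    as classic phrase-based SMT systems do."""
    pair_counts = Counter(phrase_pairs)
    source_counts = Counter(src for src, _ in phrase_pairs)
    return {
        (src, tgt): count / source_counts[src]
        for (src, tgt), count in pair_counts.items()
    }

# Toy aligned phrase pairs: "My name is" recurs, so its most frequent
# translation dominates the alternatives.
pairs = [
    ("My name is", "mera naam hai"),
    ("My name is", "mera naam hai"),
    ("My name is", "naam mera hai"),
]
probs = phrase_translation_probs(pairs)
```

With more occurrences of the same pairing, the estimated probability rises, which is exactly why larger datasets help SMT.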
After 2014, this work from Sutskever et al. inspired many researchers, and NMT has developed into a good alternative to conventional MT techniques. Google has deployed Google Neural Machine Translation (GNMT) on an eight-layer encoder–decoder architecture; this system requires huge Graphics Processing Unit (GPU) resources for training the neural network. In this work, we explore a simplified, shallow network that can be trained on a regular GPU as well. We have explored different architectures of the shallow network and show satisfactory results for Indian languages. Since most work related to NMT has focused on European and resource-rich languages [7], we see it as a challenge to adapt the principles of NMT to low-resource Indian languages. We considered Hindi for the initial development and testing of the system; after initial success with English to Hindi MT, we experimented with English to Bangla, Urdu, Malayalam, Telugu, and Tamil as well. We explain the basics of NMT in the next section and then discuss the building blocks of setting up the system. Different configurations of the system are explained in Section 11.3, and the results obtained for all the languages are discussed thoroughly in Section 11.4.

11.2 Neural machine translation

NMT is an MT system that uses an artificial neural network to improve the fluency and accuracy of the translation process. NMT is based on a simple encoder–decoder network, and the type of neural network used in NMT is the RNN [8]. The reason for selecting the RNN for this task is its basic architecture: its cyclic structure makes learning repeated sequences much easier than with other neural network architectures, and an RNN can be unrolled to store sentences as sequences in both the source and the target language. A typical RNN structure is shown in Fig. 11.1, which illustrates how a single RNN layer can be unrolled into multiple time steps and how information from previous time steps can be stored in a single cell. The RNN architecture can be modified to provide better solutions for a specific task: Chopra et al. [9] used an attentive RNN architecture for abstractive sentence summarization, and Luong et al. [10] developed attention-based NMT using modified RNN architectures.

Let X and Y be the source and target sentences, respectively. The decoder uses the conditional probability

$$P(Y \mid X) = P(Y \mid X_1, X_2, X_3, \ldots, X_m) \tag{11.1}$$

where $X_1, X_2, \ldots, X_m$ are the fixed-size vectors encoded by the encoder. Using the chain rule, the above expression becomes

$$P(Y \mid X) = \prod_i P(y_i \mid y_1, y_2, y_3, \ldots, y_{i-1}, X_1, X_2, X_3, \ldots, X_m) \tag{11.2}$$

In an RNN the gradient of the hidden state at time t with respect to the initial state is computed as

$$\frac{dh_t}{dh_0} = \frac{dh_t}{dh_{t-1}} \times \frac{dh_{t-1}}{dh_{t-2}} \times \cdots \times \frac{dh_2}{dh_1} \times \frac{dh_1}{dh_0} \tag{11.3}$$

FIGURE 11.1 Typical structure of an RNN. RNN, recurrent neural network.


Because of this multiplicative effect, the gradient for longer sentences becomes very small and results in inaccuracy; in practice, it is difficult for RNNs to learn these long-range dependencies. Since typical sentences in any language contain such complex context-dependent cases, a plain RNN is a poor choice for the encoder and decoder design. To overcome the shortcomings of RNNs, we use LSTM models for encoding and decoding.
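The multiplicative effect in Eq. (11.3) can be demonstrated numerically: when each per-step factor has magnitude below 1, the accumulated gradient over a long sentence collapses toward zero. A small sketch, where the scalar hidden state and the 0.9 factor are illustrative assumptions:

```python
def gradient_through_time(step_factor, num_steps):
    """Multiply the per-step gradient factor over num_steps,
    mimicking dh_t/dh_0 in Eq. (11.3) for a scalar hidden state."""
    grad = 1.0
    for _ in range(num_steps):
        grad *= step_factor
    return grad

short = gradient_through_time(0.9, 5)    # short sentence: gradient survives
long_ = gradient_through_time(0.9, 100)  # long sentence: gradient vanishes
```

With 5 steps the gradient is still about 0.59, but after 100 steps it is below 1e-4, which is why plain RNNs struggle to learn long-range dependencies.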

11.2.1 Long short-term memory model

LSTM [11] is a variation of the RNN known to learn problems with long-range temporal dependencies, so RNNs are sometimes replaced with LSTMs in MT networks. LSTMs also have a chain-like structure, but the structure of the repeating module differs from the RNN: in place of a single neural network layer, there are four layers in a module. These layers interact within the same module as well as with other modules during learning. A typical structure of the LSTM module is shown in Fig. 11.2. In this module, there are four gates for four different operations in the learning process. The first gate is the "forget gate" f_t, which exhibits one of the most important properties of the LSTM network: it decides which part of the previous learning should be forgotten in this layer. Forgetting is as important as learning in this architecture; if we forget unimportant information and keep only the important information, the memory requirement of the system is greatly reduced. The same process occurs in our brain's cortex region as well: we forget most of the things we observe or learn in a day but tend to remember the important ones for a long time. The next gate, a sigmoid layer called the "input gate layer" i_t, decides which values the system will update; this gate determines the function of the current layer, and the selection of correct inputs helps in better learning. The third gate is a tanh layer that creates a vector of new

FIGURE 11.2 The repeating module in an LSTM contains four interacting layers. LSTM, long short-term memory.


candidate values, $\tilde{C}_t$, that could be added to the state in the same module. Finally, the output is decided by the fourth layer: a sigmoid output gate whose result is combined with a tanh of the cell state to generate the state passed to the next module. Thus a properly trained LSTM network is capable of replicating the memory function of the human brain. The corresponding equations for these functions are as follows:

$$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f) \tag{11.4}$$

$$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i) \tag{11.5}$$

$$\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C) \tag{11.6}$$

$$C_t = f_t \times C_{t-1} + i_t \times \tilde{C}_t \tag{11.7}$$

$$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o) \tag{11.8}$$

$$h_t = o_t \times \tanh(C_t) \tag{11.9}$$
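Eqs. (11.4)–(11.9) translate almost line-for-line into code. A minimal pure-Python sketch of one LSTM step; the tiny fixed weights are placeholders for illustration, not learned values:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gate(W, b, h_prev, x_t, activation):
    """One gate: activation(W . [h_prev, x_t] + b), as in Eqs. (11.4)-(11.6), (11.8)."""
    z = h_prev + x_t  # concatenation [h_{t-1}, x_t]
    return [activation(sum(w * v for w, v in zip(row, z)) + bi)
            for row, bi in zip(W, b)]

def lstm_step(x_t, h_prev, c_prev, W, b):
    f_t = gate(W["f"], b["f"], h_prev, x_t, sigmoid)        # forget gate, Eq. (11.4)
    i_t = gate(W["i"], b["i"], h_prev, x_t, sigmoid)        # input gate, Eq. (11.5)
    c_tilde = gate(W["c"], b["c"], h_prev, x_t, math.tanh)  # candidate state, Eq. (11.6)
    c_t = [f * cp + i * ct                                  # cell state, Eq. (11.7)
           for f, i, cp, ct in zip(f_t, i_t, c_prev, c_tilde)]
    o_t = gate(W["o"], b["o"], h_prev, x_t, sigmoid)        # output gate, Eq. (11.8)
    h_t = [o * math.tanh(c) for o, c in zip(o_t, c_t)]      # hidden state, Eq. (11.9)
    return h_t, c_t

# Tiny 2-unit cell with fixed placeholder weights (not learned).
hidden, inp = 2, 1
W = {k: [[0.1] * (hidden + inp) for _ in range(hidden)] for k in "fico"}
b = {k: [0.0] * hidden for k in "fico"}
h, c = lstm_step([1.0], [0.0, 0.0], [0.0, 0.0], W, b)
```

Note how the forget gate f_t scales the previous cell state in Eq. (11.7), which is exactly the selective-forgetting behavior described above.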

In Fig. 11.3 the model reads an input sentence "ABC" and produces "WXYZ" as the output sentence; the model stops making predictions after generating the end-of-sentence token. The LSTM can be modified in various ways to improve the quality of translation for a particular language pair or context, and it can be hybridized with other neural network architectures as well. Xingjian et al. [12] used a hybrid convolutional LSTM network for precipitation nowcasting, formulated as a spatiotemporal sequence forecasting problem in which both the input and the prediction target are spatiotemporal sequences; the proposed ConvLSTM performed better than regular LSTM models. Bahdanau et al. [13] proposed an architecture that jointly learns to align and translate. Indeed, almost every work on NMT is based on an encoder–decoder architecture [14]. In this work, we have developed an LSTM-based NMT system for English to Indian language translation, using bidirectional LSTMs [15] for the encoder and decoder architectures together with a local attention model. The next section explains the architecture in detail.

FIGURE 11.3 Sentence modeling in an LSTM network. LSTM, long short-term memory.
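The stop-at-end-of-sentence behavior in Fig. 11.3 amounts to a greedy decoding loop around the trained decoder. A schematic sketch in which `next_token` is a hypothetical stand-in for the real decoder network:

```python
EOS = "<eos>"

def greedy_decode(next_token, encoder_state, max_len=50):
    """Repeatedly ask the decoder for the next word until it emits the
    end-of-sentence token or max_len is reached."""
    output = []
    prev = "<sos>"
    for _ in range(max_len):
        word = next_token(encoder_state, prev, output)
        if word == EOS:
            break
        output.append(word)
        prev = word
    return output

# Stand-in decoder that emits a fixed sequence, then EOS,
# mirroring the "ABC" -> "WXYZ" example of Fig. 11.3.
canned = iter(["W", "X", "Y", "Z", EOS])
result = greedy_decode(lambda *_: next(canned), encoder_state=None)
```

The `max_len` cap guards against decoders that never emit the end-of-sentence token.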


Table 11.1 Number of native speakers of six languages in India [16].

Language    Family       Native speakers in India (millions)
Hindi       Indo-Aryan   422
Bangla      Indo-Aryan   83
Telugu      Dravidian    74
Tamil       Dravidian    60
Urdu        Indo-Aryan   51
Malayalam   Dravidian    33

11.3 Setting up the neural machine translation system

We have focused on six Indian languages for our NMT system development. These six languages are widely spoken not only in India but in other countries as well. The numbers of native speakers of these six languages according to the Indian census of 2001 are given in Table 11.1; worldwide speaker counts are much higher, as many other countries have large populations of speakers of these languages. Once we had decided on the dataset, we looked for encoder and decoder architectures that would suit the computational resources available to us.

11.3.1 Encoder and decoder

In this LSTM-based NMT, we use a bidirectional encoder [5]. Bi-LSTMs are an extension of the conventional LSTM and can improve performance on sequence classification problems, which suits our task. The core idea of a bi-LSTM is that it trains two LSTMs, instead of one, on a single input sequence: the first LSTM processes the sequence as it is, and the second processes it in reverse order. Unlike LSTMs, bi-LSTMs have access to future input from the current state without the need for time delays. Fig. 11.4 shows the architecture of the bidirectional encoder used in our NMT system; the encoder presented in this figure is a single-layer encoder. In GNMT, eight-layer encoder and decoder blocks are used to process the information. For better efficiency in the learning process, multiple layers of LSTMs are preferred in both the encoder and the decoder designs. The decoder is designed to decode the vectors back to target language words. We have experimented with multilayer decoders in the system; a typical two-layer decoder is shown in Fig. 11.5. Considering the computational constraints of our machines, we have experimented with two- and four-layer architectures only.
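The bi-LSTM idea, one pass over the sequence as given and one over its reverse with the two states concatenated per position, can be sketched independently of the LSTM cell itself. The `step` function below is a stand-in that merely accumulates tokens, to make the alignment visible:

```python
def run_rnn(step, inputs, init_state):
    """Run a recurrent step function over a sequence, returning the
    state after each position."""
    states, state = [], init_state
    for x in inputs:
        state = step(state, x)
        states.append(state)
    return states

def bidirectional_encode(step_fwd, step_bwd, inputs, init_state):
    """Concatenate forward states with backward states aligned per
    position, the core of a bi-LSTM encoder."""
    fwd = run_rnn(step_fwd, inputs, init_state)
    bwd = run_rnn(step_bwd, list(reversed(inputs)), init_state)
    bwd.reverse()  # re-align backward states with the original order
    return [f + b for f, b in zip(fwd, bwd)]

# Toy "LSTM" step that accumulates seen tokens, to show alignment.
step = lambda state, x: state + [x]
encoded = bidirectional_encode(step, step, ["a", "b", "c"], [])
```

Each encoded position thus combines everything seen up to it from the left with everything seen up to it from the right, which is how a bi-LSTM gives the decoder access to future context.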


FIGURE 11.4 Bidirectional encoder design using bi-LSTM. LSTM, long short-term memory.

11.3.2 Attention in the model

The attention layer is the bridge between the encoder and decoder of an NMT system. There are two kinds of attention models: global and local. The idea of the global attention model is to consider all the hidden states of the encoder when deriving the context vector c_t: a variable-length alignment vector a_t, with size equal to the number of time steps on the source side, is derived by comparing the current target hidden state h_t with each source hidden state h_s. The local attention model models the language differently: the model first predicts a single aligned position p_t for the current target word, and a context vector c_t is then computed over a window centered around the source position p_t. In our system, we have used the local attention model. A block diagram showing the functionality of the attention layer is shown in Fig. 11.6.
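The local attention computation can be sketched as follows: given a predicted aligned position p_t, source states within a window around p_t are weighted by a Gaussian and averaged into the context vector c_t. This follows the general recipe of Luong et al. [10], but the scalar source states and the window size below are illustrative assumptions:

```python
import math

def local_attention_context(source_states, p_t, window=2):
    """Context vector c_t as a Gaussian-weighted average of source
    states inside [p_t - window, p_t + window] (local attention)."""
    sigma = window / 2.0
    lo = max(0, int(round(p_t)) - window)
    hi = min(len(source_states) - 1, int(round(p_t)) + window)
    weights = [math.exp(-((s - p_t) ** 2) / (2 * sigma ** 2))
               for s in range(lo, hi + 1)]
    total = sum(weights)
    a_t = [w / total for w in weights]  # normalized local alignment weights
    c_t = sum(a * source_states[s] for a, s in zip(a_t, range(lo, hi + 1)))
    return c_t, a_t

# Scalar stand-ins for encoder hidden states.
states = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
c_t, a_t = local_attention_context(states, p_t=2.0, window=2)
```

Because the window excludes distant source positions, the cost per target word stays constant regardless of sentence length, which is the practical appeal of local over global attention.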


FIGURE 11.5 Two-layer decoder architecture.

11.3.3 Residual connections and bridges

The success of a neural network depends on the depth of the network. However, as the depth increases, the network becomes more and more difficult to train due to vanishing and exploding gradients [17]. This problem has been addressed in the past by modeling the differences between an intermediate layer's output and the targets, using so-called residual connections. With residual connections, the input of a layer is added element-wise to its output before being fed to the next layer. In Fig. 11.7 the output of LSTM1 is added to its input, and the sum is sent as the input to LSTM2. Residual connections are known to greatly improve gradient flow in the backward pass, which allows us to train very deep networks.
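The element-wise addition that defines a residual connection can be sketched generically; the doubling function below is a stand-in for an LSTM layer:

```python
def residual(layer, x):
    """Apply a layer and add its input element-wise (a residual
    connection), so gradients can also flow around the layer."""
    y = layer(x)
    return [xi + yi for xi, yi in zip(x, y)]

# Stand-in for an LSTM layer: here just a fixed transformation.
double = lambda v: [2.0 * xi for xi in v]
out = residual(double, [1.0, -2.0, 3.0])
```

Because the identity path bypasses the layer, the backward pass always has a gradient route whose magnitude does not decay with depth.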


FIGURE 11.6 Local attention model (attention layer with context vector c_t, aligned position p_t, and local weights a_t between the source and target vectors).

FIGURE 11.7 Residual connections inside the encoder.


FIGURE 11.8 Complete system block diagram for NMT. NMT, Neural machine translation.

FIGURE 11.9 Graphical representation of vector relations between source and target sentences.

An additional bridging layer is needed between the encoder and decoder layers. Fig. 11.8 shows the complete system consisting of the encoder, decoder, residual connections, and bridge. Fig. 11.9 shows graphically how sentences are converted into vectors and associated with those in the target language.

11.3.4 Out-of-vocabulary words

Both NMT and conventional SMT systems face the problem of rare or out-of-vocabulary words. If a particular word is not part of the dataset, the trained system does not have any matching pattern for it. In the case of SMT, a very small nonzero probability is assigned to such words, and the best possible translation is generated. Fung and Cheung [18] and Shao and Ng [19] adopted comparable corpora and web resources to extract translations for each unknown word. In this work, we have utilized transliteration and web-mining techniques with external monolingual/bilingual corpora to find the translations of unknown words.
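One way to sketch such a fallback pipeline: consult the learned vocabulary first, then an external bilingual dictionary, and finally transliterate. All names and dictionary entries below are hypothetical, and `transliterate` is a placeholder rather than a real script-mapping routine:

```python
def transliterate(word):
    # Placeholder: a real system maps source script to target script,
    # for example via web-mined transliteration pairs.
    return f"<translit:{word}>"

def translate_word(word, model_vocab, external_dict):
    """Resolve a single word: model vocabulary first, then an external
    bilingual dictionary, then transliteration as a last resort."""
    if word in model_vocab:
        return model_vocab[word]
    if word in external_dict:
        return external_dict[word]
    return transliterate(word)

# Hypothetical vocabularies for illustration only.
model_vocab = {"name": "naam"}
external_dict = {"Edinburgh": "edinbara"}
outs = [translate_word(w, model_vocab, external_dict)
        for w in ["name", "Edinburgh", "Zylkon"]]
```

The ordering matters: the learned vocabulary carries context-aware translations, while the dictionary and transliteration only guarantee that no word is left untranslated.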

11.4 Results and discussions

We have implemented our NMT system in two phases. In the first phase, we focused only on English–Hindi translation; Hindi was selected because more resources are available for that language pair. Once the NMT system was operational on this language pair, we tested it on five other language pairs as well.

11.4.1 Datasets

The initial requirement for setting up an MT system is the availability of a parallel corpus for the source and target languages. As mentioned, we first worked on English–Hindi datasets. Hindi is not as resource-rich as its European counterparts in terms of the availability of large datasets; premier institutes in India and abroad have been working for the past two decades on the development of parallel corpora for Indian languages. We considered the following three datasets for the experiments:

1. The English–Hindi parallel corpus from the Institute for Language, Cognition, and Computation (ILCC), University of Edinburgh [20];
2. The corpus from the Institute of Formal and Applied Linguistics (UFAL) at the Computer Science School, Faculty of Mathematics and Physics, Charles University in Prague, Czech Republic [21]; and
3. The corpus from the Center for Indian Language Technology (CFILT), IIT Bombay [22].

The dataset from ILCC, University of Edinburgh, contains translated sentences from Wikipedia; the IIT Bombay and UFAL datasets contain data from multiple disciplines. All these datasets are exhaustive, with an abundant variety of words. Table 11.2 lists the number of words and sentences in each dataset. For the remaining five languages as well as Hindi, we used another combined dataset, obtained from Johns Hopkins University [23] and constructed from Wikipedia articles over a period of one year. In this dataset, every language has only a few thousand sentences. The number of unique words for each language and the corresponding English words are given in Table 11.3.


Table 11.2 Details of the ILCC, UFAL, and CFILT Hindi datasets.

                      ILCC dataset   UFAL dataset   CFILT dataset
No. of sentences      41,396         237,885        1,492,827
No. of words          245,675        1,048,297      20,601,012
No. of unique words   8342           21,673         250,619

Table 11.3 Details of the English to Indian language dataset.

Language    No. of unique words   No. of pairs in dictionary
Bangla      4075                  6011
Telugu      12,193                38,532
Tamil       11,492                69,128
Urdu        26,363                113,911
Malayalam   41,502                144,505

The number of unique words and the number of pairs in the dictionary for each language are listed here.

11.4.2 Experimental setup

The NMT system is implemented by taking the core algorithm from the tutorial by Graham Neubig [24] and Seq2Seq (https://google.github.io/seq2seq/nmt/). TensorFlow (https://www.tensorflow.org/install/) and Theano (http://deeplearning.net/software/theano/install.html) are the platforms used in the system design. We set up the system on an NVIDIA GPU with a Quadro K4200 graphics card, which has 24 stacks and a total of 1344 CUDA cores.

11.4.3 Training details

Once the dataset is preprocessed, the source and target files are fed into the encoder layer to prepare vectors from the sentences. We used the stochastic gradient descent (SGD) algorithm [25] for training. We worked with two different network depths: two- and four-layer networks were trained for different combinations of encoder and decoder architectures. Since we used a GPU, the training time of the neural network for the different datasets and architectures was only a few hours. We used a different number of sentences from each dataset for the experiments; details of the numbers of sentences used for training and testing on each dataset are given in Table 11.4. The training time per epoch for each dataset, for the selected number of sentences, is shown in Table 11.5. Every dataset was trained for 10 epochs, and the final results shown correspond to 10 epochs of training.


Table 11.4 Training details for the three datasets used for Hindi.

Dataset   Total sentences   Training   Validation   Testing
ILCC      41,396            28,000     6000         6000
UFAL      237,885           70,000     15,000       15,000
CFILT     1,492,827         140,000    30,000       30,000

The system was then trained on the five other language pairs with the same set of configurations; training time details for these datasets are given in Table 11.6.

11.4.4 BLEU score

We evaluated our system using the Bilingual Evaluation Understudy (BLEU) score [26]. The BLEU score differs across configurations; Table 11.7 shows the BLEU score for each configuration for English to Hindi. The same mechanism was used to evaluate the English to five other Indian language NMT systems; the BLEU scores obtained for these language pairs are given in Table 11.8. This NMT system produces satisfactory results for six Indian languages. Google has also extended its NMT [27] to various Indian languages, and all six of these languages are part of the GNMT project. For all these languages, many experiments have been conducted using SMT techniques; a brief comparison of contemporary SMT systems with the discussed NMT is shown in Table 11.9. Some sample translations generated by our system, taken from the test sets of the respective datasets, are shown in Fig. 11.10.
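BLEU [26] combines clipped n-gram precisions with a brevity penalty. The published scores in Tables 11.7–11.9 are corpus-level; the sketch below is a simplified single-reference, sentence-level variant for illustration:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram
    precisions times a brevity penalty (single reference)."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum(min(c, ref[g]) for g, c in cand.items())  # clipped counts
        total = max(1, sum(cand.values()))
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))  # brevity penalty
    return bp * math.exp(log_avg)

score = bleu("my name is ram".split(), "my name is ram".split())
```

A perfect match scores 1.0 (often reported as 100, or scaled as in the tables here); a candidate sharing no n-grams with the reference scores 0. Production evaluations typically also apply smoothing for short sentences.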

11.5 Discussions

The results obtained from NMT-based English–Hindi MT are comparable with conventional statistical and phrase-based MT systems. One of the earliest SMT-based systems, Anusaaraka [32], lacks the capability to handle complex sentences and does not perform on par with the latest MT systems. We achieved satisfactory results for Hindi, Bangla, Telugu, and Tamil; the results for Urdu and Malayalam do not yet match the standards of SMT results. We have also worked with a four-layer encoder–decoder architecture, in contrast to the fully developed eight-layer architecture of GNMT. Our system does not outperform GNMT (which reports a BLEU score of 38.20 for En–Fr), but it shows many comparable results relative to Anusaaraka (21.18), AnglaMT (22.21), and Anglabharati (20.66) [29] and other SMT systems.


Table 11.5 Training time for different configurations of encoder and decoders for neural machine translation (NMT) for English to Hindi.

Training time (hh:mm:ss)
Dataset   No. of sentences   2-Layer LSTM + SGD   4-Layer LSTM + SGD   2-Layer (bi-dir) LSTM + SGD + Res   4-Layer (bi-dir) LSTM + SGD + Res
ILCC      28,000             02:58:35             6:34:54              3:28:34                             7:45:56
UFAL      70,000             07:34:28             13:46:25             8:31:24                             15:23:41
CFILT     140,000            16:38:24             28:38:12             15:43:23                            32:25:16

LSTM, long short-term memory; SGD, stochastic gradient descent.

Table 11.6 Training time for different English–Indian language pairs.

Training time (hh:mm:ss)
Dataset             No. of sentences   2-Layer LSTM + SGD   4-Layer LSTM + SGD   2-Layer (bi-dir) LSTM + SGD + Res   4-Layer (bi-dir) LSTM + SGD + Res
Bangla–English      20,788             02:18:54             4:54:34              2:28:34                             4:45:56
Telugu–English      43,038             04:24:18             09:10:34             04:45:24                            09:45:23
Tamil–English       35,027             03:58:14             08:23:51             04:15:52                            08:45:42
Urdu–English        33,798             03:46:34             08:03:11             03:55:55                            08:15:51
Malayalam–English   29,518             03:30:45             07:43:25             03:55:51                            07:45:36

LSTM, long short-term memory; SGD, stochastic gradient descent.


Table 11.7 BLEU score calculated for four different configurations of the system for English to Hindi neural machine translation.

BLEU score for En–Hi
Configuration                         ILCC     UFAL     CFILT
2-Layer LSTM + SGD                    12.512   14.237   16.854
4-Layer LSTM + SGD                    13.534   16.895   17.124
2-Layer (bi-dir) LSTM + SGD + Res     12.854   15.785   18.100
4-Layer (bi-dir) LSTM + SGD + Res     13.863   17.987   18.215

BLEU, Bilingual Evaluation Understudy; En, English; Hi, Hindi; LSTM, long short-term memory; SGD, stochastic gradient descent.

Table 11.8 BLEU score calculated for four different configurations of the system for English to Indian languages neural machine translation.

Configuration                         En–Bn   En–Te   En–Ta   En–Ur   En–Ml
2-Layer LSTM + SGD                    11.25   12.78   9.65    12.34   10.05
4-Layer LSTM + SGD                    12.67   12.92   10.03   12.59   9.98
2-Layer (bi-dir) LSTM + SGD + Res     10.45   11.34   9.92    11.08   11.34
4-Layer (bi-dir) LSTM + SGD + Res     11.45   12.56   8.74    12.49   12.45

Bn, Bangla; BLEU, Bilingual Evaluation Understudy; En, English; LSTM, long short-term memory; Ml, Malayalam; SGD, stochastic gradient descent; Ta, Tamil; Te, Telugu; Ur, Urdu.

Table 11.9 Comparison of BLEU scores for various statistical machine translation systems with the discussed neural machine translation (NMT) system.

Language pair   Existing systems and BLEU scores                                NMT results (BLEU score)
En–Hi           Ramanathan et al. [28] 16.78, Sachdeva et al. [29] 22.21        18.215
En–Bn           Irvine and Callison-Burch [30] 12.1, Islam et al. [31] 11.70    12.67
En–Te           Irvine and Callison-Burch [30] 11.7                             12.92
En–Ta           Irvine and Callison-Burch [30] 9.5                              10.03
En–Ur           Irvine and Callison-Burch [30] 20.4                             12.59
En–Ml           Irvine and Callison-Burch [30] 13.6                             12.45

Bn, Bangla; BLEU, Bilingual Evaluation Understudy; En, English; Hi, Hindi; Ml, Malayalam; Ta, Tamil; Te, Telugu; Ur, Urdu.


FIGURE 11.10 Sample translations generated from the NMT of English to Indian languages. NMT, Neural machine translation.

11.6 Conclusion

Statistical phrase-based MT systems have long faced problems of accuracy and the requirement for large datasets. In this work, we investigated the possibility of using a shallow RNN- and LSTM-based neural machine translator to address these issues, using a relatively small dataset and a small number of layers in our experiments. The results show that NMT can provide better results with larger datasets and a larger number of encoder and decoder layers. Compared to contemporary SMT and phrase-based MT systems, NMT-based MT performs much better. Future work will involve fine-tuning the training of long and rare sentences using smaller datasets. We would also like to explore NMT between Indian language pairs; since the grammar structures of many Indian languages are similar to each other, we expect higher BLEU scores there in the future.

References

[1] A. Lavie, S. Vogel, L. Levin, E. Peterson, K. Probst, A.F. Llitjós, et al., Experiments with a Hindi-to-English transfer-based MT system under a miserly data scenario, ACM Trans. Asian Lang. Inf. Process. 2 (2) (2003) 143–163.


[2] S. Saini, V. Sahula, A survey of machine translation techniques and systems for Indian languages, in: 2015 IEEE International Conference on Computational Intelligence Communication Technology, 2015, pp. 676–681.
[3] S. Saini, U. Sehgal, V. Sahula, Relative clause based text simplification for improved English to Hindi translation, in: 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI), IEEE, 2015.
[4] T. Mikolov, M. Karafiát, L. Burget, J. Černocký, S. Khudanpur, Recurrent neural network based language model, Interspeech 2 (2010) 3.
[5] I. Sutskever, O. Vinyals, Q.V. Le, Sequence to sequence learning with neural networks, Advances in NIPS, CoRR, abs/1409.3215, 2014.
[6] S. Saini, V. Sahula, Neural machine translation for English to Hindi, in: 2018 Fourth International Conference on Information Retrieval and Knowledge Management (CAMP), IEEE, 2018.
[7] T.-L. Ha, J. Niehues, A. Waibel, Toward multilingual neural machine translation with universal encoder and decoder, arXiv preprint arXiv:1611.04798 (2016).
[8] L.R. Medsker, L.C. Jain, Recurrent neural networks, Des. Appl. 5 (2001).
[9] S. Chopra, M. Auli, A.M. Rush, Abstractive sentence summarization with attentive recurrent neural networks, in: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016, pp. 93–98.
[10] M.-T. Luong, H. Pham, C.D. Manning, Effective approaches to attention-based neural machine translation, arXiv preprint arXiv:1508.04025 (2015).
[11] S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural Comput. 9 (8) (1997) 1735–1780.
[12] S. Xingjian, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, W. Woo, Convolutional LSTM network: a machine learning approach for precipitation nowcasting, Advances in Neural Information Processing Systems (2015) 802–810.
[13] D. Bahdanau, K. Cho, Y. Bengio, Neural machine translation by jointly learning to align and translate, arXiv preprint arXiv:1409.0473 (2014).
[14] J. Gehring, M. Auli, D. Grangier, D. Yarats, Y.N. Dauphin, Convolutional sequence to sequence learning, arXiv preprint arXiv:1705.03122 (2017).
[15] M. Schuster, K.K. Paliwal, Bidirectional recurrent neural networks, IEEE Trans. Signal Process. 45 (11) (1997) 2673–2681.
[16] A. Lin, P.W. Martin (Eds.), Decolonisation, Globalisation: Language-in-Education Policy and Practice, Vol. 3, Multilingual Matters, 2005.
[17] R. Pascanu, T. Mikolov, Y. Bengio, On the difficulty of training recurrent neural networks, in: International Conference on Machine Learning, 2013, pp. 1310–1318.
[18] P. Fung, P. Cheung, Mining very-non-parallel corpora: parallel sentence and lexicon extraction via bootstrapping and EM, Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (2004) 57–63.
[19] L. Shao, H.T. Ng, Mining new word translations from comparable corpora, in: Proceedings of the 20th International Conference on Computational Linguistics, Association for Computational Linguistics, 2004, p. 618.
[20] Institute for Language, Cognition, and Computation, University of Edinburgh, Indic Multi-Parallel Corpus, Technical Report, 2011. Available from: http://homepages.inf.ed.ac.uk/miles/babel.html.


[21] O. Bojar, V. Diatka, P. Rychlý, P. Straňák, V. Suchomel, A. Tamchyna, et al., HindEnCorp – Hindi-English and Hindi-only corpus for machine translation, LREC (2014) 3550–3555.
[22] Resource Center for Indian Language Technology Solutions (CFILT), IBH Corpus, Technical Report, 2019. Available from: http://www.cfilt.iitb.ac.in/downloads.htm.
[23] M. Post, C. Callison-Burch, M. Osborne, Constructing parallel corpora for six Indian languages via crowdsourcing, in: Proceedings of the Seventh Workshop on Statistical Machine Translation, Association for Computational Linguistics, 2012, pp. 401–409.
[24] G. Neubig, Neural machine translation and sequence-to-sequence models: a tutorial, arXiv preprint arXiv:1703.01619 (2017).
[25] D. Needell, R. Ward, N. Srebro, Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm, in: Advances in Neural Information Processing Systems, 2014, pp. 1017–1025.
[26] K. Papineni, S. Roukos, T. Ward, W.-J. Zhu, BLEU: a method for automatic evaluation of machine translation, in: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, Association for Computational Linguistics, Stroudsburg, PA, 2002, pp. 311–318.
[27] Cloud Translation API, Dynamically Translate Between Thousands of Available Language Pairs, Technical Report, 2016. Available from: https://cloud.google.com/translate/docs/languages.
[28] A. Ramanathan, J. Hegde, R.M. Shah, P. Bhattacharyya, M. Sasikumar, Simple syntactic and morphological processing can help English-Hindi statistical machine translation, Proceedings of the Third International Joint Conference on Natural Language Processing: Volume I (2008) 513–520.
[29] K. Sachdeva, R. Srivastava, S. Jain, D.M. Sharma, Hindi to English machine translation: using effective selection in multi-model SMT, LREC (2014) 1807–1811.
[30] A. Irvine, C. Callison-Burch, Combining bilingual and comparable corpora for low resource machine translation, in: WMT@ACL, 2013, pp. 262–270.
[31] M.Z. Islam, J. Tiedemann, A. Eisele, English to Bangla phrase-based machine translation, in: The 14th Annual Conference of the European Association for Machine Translation, Saint-Raphaël, France, 2010, pp. 27–28.
[32] A. Bharati, V. Chaitanya, A.P. Kulkarni, R. Sangal, Anusaaraka: machine translation in stages, CoRR, cs.CL/0306130, VIVEK-BOMBAY-10, 1997, 22–25.

CHAPTER 12

An extreme learning-based adaptive control design for an autonomous underwater vehicle

Biranchi Narayan Rath and Bidyadhar Subudhi Department of Electrical Engineering, National Institute of Technology Rourkela, Rourkela, India

12.1 Introduction

Research on the control of autonomous underwater vehicles (AUVs) has gained momentum over the last two decades owing to their several interesting applications in defense and other related fields. Controlling an AUV in a marine environment is a demanding task owing to the uncertain dynamics of the AUV. Thus there is a need to identify the dynamics of the AUV through system identification techniques so that the controller can be designed to handle changes in the parameters. A polynomial form of the Nonlinear AutoRegressive Moving Average with eXogenous input (NARMAX) model is one of the efficient model structures used for designing an adaptive controller that handles parametric variations [1,2]. A nonlinear system can be described by a polynomial NARMAX model owing to its simple model structure. However, in such a model, the number of candidate regressors increases exponentially as the degree of the polynomial increases. To resolve this problem, a forward regression orthogonal least squares algorithm is proposed in [2]. But this modeling and the task of selecting a subset model from such a huge candidate set are computationally expensive. On the contrary, data length is an important factor for determining the dimension of the candidate set for a single-hidden-layer feedforward neural (SLFN) network; that is, the number of candidate regressors remains unaffected by augmenting lags in the input, output, or noise, as discussed in [3]. This gives the SLFN approach flexibility in accommodating nonlinear systems with large lags in the input, output, or noise. SLFN networks are considered in system identification schemes since the hidden layer of such a network has basis functions that nonlinearly map the multivariable inputs to scalar values, thus preventing the number of basis functions from growing with the number of input variables. In view of the above advantages, SLFNs are very popular models for identification [2,4-6].
Cognitive Informatics, Computer Modelling, and Cognitive Science, Volume 1. DOI: https://doi.org/10.1016/B978-0-12-819443-0.00012-X © 2020 Elsevier Inc. All rights reserved.

However, the training of such models has some demerits. A gradient-based SLFN network suffers from a slow convergence rate, and a very large number of iterations is required to reach the global optimum [7]. To overcome such problems, the extreme learning machine (ELM) approach has been proposed in Refs. [8,9], which offers improved performance for solving regression problems along with minimization of the average of the output weights [10]. In such a network the selection of input weights and hidden biases is random, while the hidden layer output matrix is processed by the Moore-Penrose generalized inverse operation to obtain the weights between the hidden and output layers. ELM is thus used to identify the diving dynamics of the AUV off-line using the data set obtained by simulating the INFANTE AUV model [11]. However, in order to handle on-line the uncertainties arising from variations in ocean currents and wave disturbances [11-13], the ELM model needs to be adaptive. Such on-line adaptation of the ELM model is commonly known in the literature as on-line sequential ELM (OS-ELM) [14]. In view of the aforesaid reasons, an OS-ELM model is applied for the identification of the AUV dynamics. In [15,16], trajectory tracking algorithms for a fully actuated AUV were proposed. But keeping in view the cost and weight of the actuators and the energy requirement for long-term missions, fully actuated AUVs are not preferred. It is challenging to design a tracking algorithm for an underactuated AUV since the majority of such systems display nonholonomic constraints or are not fully linearizable. Therefore, for both fully actuated and underactuated AUVs, the design of path-following control algorithms makes it possible to achieve smoother convergence to the path than trajectory tracking. Moreover, in path-following control, the actuation signals are less likely to undergo saturation, so a path-following algorithm is certainly a better choice for an underactuated AUV.
Many computational intelligence-based control techniques have been reported, such as those in [17-19], sliding mode controllers [20,21], Lyapunov-based backstepping controllers [22,23], H∞ controllers [24-26], and model predictive control strategies [27-30]. Though there have been many recent developments in control strategy, proportional, integral, and derivative (PID) controllers are still extensively used for tracking [31,32] since they are quite simple and easy to implement. However, the controller designs in [31,32] are based on a linear model of the AUV. Thus, in this chapter a nonlinear self-tuning PID controller to control an underactuated AUV for tracking a desired depth trajectory is discussed. The contributions of the chapter are as follows:

1. A sequential ELM model is proposed for obtaining the predicted depth dynamics of the AUV.
2. An ELM-based time delay estimator is proposed to estimate the variable delay in the control network.
3. An adaptive nonlinear self-tuning PID controller is developed using the nonlinear ELM model and the time delay estimator for diving tracking of the AUV.

This chapter is outlined as follows. The modeling of the AUV in the depth plane and the problem formulation are presented in Section 12.2. The parameter estimation of the AUV dynamics employing the OS-ELM network is detailed in Section 12.3. In Section 12.4 the diving control law is derived for the AUV. A time delay estimator is then proposed in Section 12.5. The results and an analysis of the performance of the identification technique using the OS-ELM network and of the control algorithm using the proposed nonlinear PID controller are presented in Section 12.6. Subsequently, the chapter is concluded in Section 12.7.

12.2 Modeling of autonomous underwater vehicle in diving plane and problem statement

12.2.1 Kinematics

The motion of the AUV is described using two frames, namely, the inertial reference frame {I} and the body-fixed frame {B}, as displayed in Fig. 12.1. The position and orientation parameters $\eta = [\eta_1^T, \eta_2^T]^T = [x, y, z, \phi, \theta, \psi]^T$ are expressed in {I}, whereas the linear and angular velocity parameters $\nu = [\nu_1^T, \nu_2^T]^T = [u, v, w, p, q, r]^T$ are expressed with reference to {B}. To observe the AUV motion from {I}, a transformation matrix $J(\eta_2) = \mathrm{diag}(J_1(\eta_2), J_2(\eta_2))$ from {B} to {I} is defined and is given as follows:

$$
\begin{bmatrix} \dot{\eta}_1 \\ \dot{\eta}_2 \end{bmatrix}
=
\begin{bmatrix} J_1(\eta_2) & 0_{3\times 3} \\ 0_{3\times 3} & J_2(\eta_2) \end{bmatrix}
\begin{bmatrix} \nu_1 \\ \nu_2 \end{bmatrix}
\tag{12.1}
$$

where

$$
J_1(\eta_2) =
\begin{bmatrix}
\cos\psi\cos\theta & -\sin\psi\cos\phi + \cos\psi\sin\theta\sin\phi & \sin\psi\sin\phi + \cos\psi\cos\phi\sin\theta \\
\sin\psi\cos\theta & \cos\psi\cos\phi + \sin\phi\sin\theta\sin\psi & -\cos\psi\sin\phi + \sin\theta\sin\psi\cos\phi \\
-\sin\theta & \cos\theta\sin\phi & \cos\theta\cos\phi
\end{bmatrix}
$$

$$
J_2(\eta_2) =
\begin{bmatrix}
1 & \sin\phi\tan\theta & \cos\phi\tan\theta \\
0 & \cos\phi & -\sin\phi \\
0 & \sin\phi/\cos\theta & \cos\phi/\cos\theta
\end{bmatrix}
$$

and $\dot{\eta}$ represents the velocity parameters of the AUV in the {I} frame; the corresponding velocities of the AUV in the {B} frame are $\nu$.

FIGURE 12.1 Three DOF model of AUV with reference frames (surge, sway, heave, roll, pitch, and yaw motions indicated). AUV, autonomous underwater vehicle; DOF, degree-of-freedom.
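The transformation in Eq. (12.1) can be sketched numerically as follows. This is an illustrative implementation (the function names and NumPy setting are our own), assuming the Z-Y-X Euler-angle convention used above:

```python
import numpy as np

def J1(phi, theta, psi):
    """Linear-velocity rotation matrix from body frame {B} to inertial frame {I}."""
    cphi, sphi = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cpsi, spsi = np.cos(psi), np.sin(psi)
    return np.array([
        [cpsi * cth, -spsi * cphi + cpsi * sth * sphi,  spsi * sphi + cpsi * cphi * sth],
        [spsi * cth,  cpsi * cphi + sphi * sth * spsi, -cpsi * sphi + sth * spsi * cphi],
        [-sth,        cth * sphi,                       cth * cphi],
    ])

def J2(phi, theta):
    """Angular-velocity transformation matrix (singular at theta = +/-90 deg)."""
    cphi, sphi = np.cos(phi), np.sin(phi)
    cth, tth = np.cos(theta), np.tan(theta)
    return np.array([
        [1.0, sphi * tth,  cphi * tth],
        [0.0, cphi,       -sphi],
        [0.0, sphi / cth,  cphi / cth],
    ])

def eta_dot(eta2, nu):
    """Eq. (12.1): map body-frame velocities nu = [u, v, w, p, q, r] to inertial rates."""
    phi, theta, psi = eta2
    J = np.block([[J1(phi, theta, psi), np.zeros((3, 3))],
                  [np.zeros((3, 3)),    J2(phi, theta)]])
    return J @ nu
```

At zero attitude the transformation reduces to the identity, and $J_1$ is always a proper rotation matrix.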

12.2.2 Dynamics

The dynamics of the AUV consist of nonlinear and coupled terms. The AUV has six degree-of-freedom (DOF) equations of motion along the x, y, and z axes shown in Fig. 12.1. The dynamic equations along the respective axes are as follows [12].

• Surge motion:

$$
m\big[\dot{u} - vr + wq - x_g(q^2 + r^2) + y_g(pq - \dot{r}) + z_g(pr + \dot{q})\big]
= X_{HS} + X_{u|u|}u|u| + X_{\dot{u}}\dot{u} + X_{wq}wq + X_{qq}qq + X_{vr}vr + X_{rr}rr + X_{prop}
\tag{12.2}
$$

• Sway motion:

$$
m\big[\dot{v} - wp + ur - y_g(p^2 + r^2) + z_g(qr - \dot{p}) + x_g(pq + \dot{r})\big]
= Y_{HS} + Y_{v|v|}v|v| + Y_{r|r|}r|r| + Y_{\dot{v}}\dot{v} + Y_{\dot{r}}\dot{r} + Y_{ur}ur + Y_{wp}wp + Y_{pq}pq + Y_{uv}uv + Y_{uu\delta_r}u^2\delta_r
\tag{12.3}
$$

• Heave motion:

$$
m\big[\dot{w} - uq + vp - z_g(q^2 + p^2) + x_g(pr - \dot{q}) + y_g(qr + \dot{p})\big]
= Z_{HS} + Z_{w|w|}w|w| + Z_{q|q|}q|q| + Z_{\dot{w}}\dot{w} + Z_{\dot{q}}\dot{q} + Z_{uq}uq + Z_{vp}vp + Z_{rp}rp + Z_{uw}uw + Z_{uu\delta_s}u^2\delta_s
\tag{12.4}
$$

• Roll motion:

$$
I_x\dot{p} + (I_z - I_y)qr - (\dot{r} + pq)I_{xz} + (r^2 - q^2)I_{yz} + (pr - \dot{q})I_{xy} + m\big[y_g(\dot{w} - uq + vp) - z_g(\dot{v} - wp + ur)\big]
= K_{HS} + K_{p|p|}p|p| + K_{\dot{p}}\dot{p} + K_{prop}
\tag{12.5}
$$

• Pitch motion:

$$
I_y\dot{q} + (I_x - I_z)pr - (\dot{p} + qr)I_{xy} + (p^2 - r^2)I_{zx} + (qp - \dot{r})I_{yz} + m\big[z_g(\dot{u} - vr + wq) - x_g(\dot{w} - uq + vp)\big]
= M_{HS} + M_{w|w|}w|w| + M_{q|q|}q|q| + M_{\dot{w}}\dot{w} + M_{\dot{q}}\dot{q} + M_{uq}uq + M_{vp}vp + M_{rp}rp + M_{uw}uw + M_{uu\delta_s}u^2\delta_s
\tag{12.6}
$$

• Yaw motion:

$$
I_z\dot{r} + (I_y - I_x)pq - (\dot{q} + rp)I_{yz} + (q^2 - p^2)I_{xy} + (rq - \dot{p})I_{zx} + m\big[x_g(\dot{v} - wp + ur) - y_g(\dot{u} - vr + wq)\big]
= N_{HS} + N_{v|v|}v|v| + N_{r|r|}r|r| + N_{\dot{v}}\dot{v} + N_{\dot{r}}\dot{r} + N_{ur}ur + N_{wp}wp + N_{pq}pq + N_{uv}uv + N_{uu\delta_r}u^2\delta_r
\tag{12.7}
$$

The parameters used in (12.2)-(12.7), which affect the overall dynamics of the AUV, are detailed in Table 12.1.

Table 12.1 Definition of autonomous underwater vehicle parameters.

Added mass: $X_{\dot u},\ Y_{\dot v},\ Z_{\dot w},\ K_{\dot p},\ M_{\dot q},\ Y_{\dot r},\ Z_{\dot q},\ M_{\dot w},\ N_{\dot v},\ N_{\dot r}$
Drag force: $M_{w|w|},\ M_{q|q|},\ X_{u|u|},\ Y_{v|v|}$
Hydrostatic force: $X_{HS},\ Y_{HS},\ Z_{HS},\ K_{HS},\ M_{HS},\ N_{HS}$
Propeller thrust and lift force: $Z_{uu\delta_s},\ Y_{uu\delta_r},\ N_{uu\delta_r}$

Remark 1: Considering no coupling terms with the heading-plane dynamics, the diving dynamics (that is, the surge, heave, and pitch motions) are identified independently, which indeed increases the precision of the controller during diving tracking.
Remark 2: Further, as the AUV considered is an underactuated system, there is no control input for the heave motion.
Remark 3: Generally, roll motion exists for 3D motion. In this work only the depth-plane motion is considered; thus the effect of the roll motion can be neglected.

12.2.3 Discretization of the kinematics and dynamics of the autonomous underwater vehicle for controlling it in diving motion

Using Remarks 1-3 and Euler's first-order (forward) discretization, the equations governing the depth-plane motion are discretized with sampling time T. Fig. 12.1 shows the depth framework of the AUV in three DOF. (u, w, q) and (z, θ) denote the body-frame and reference-frame variables, respectively: u is the surge velocity, q is the pitch rate, and w is the heave velocity, whereas z is the inertial depth coordinate of the AUV and θ is the pitch orientation.


The kinematic model for the diving motion [1] is given by

$$
\begin{aligned}
z_k &= z_{k-1} + T\,(-u\sin\theta_{k-1} + w_{k-1}\cos\theta_{k-1}),\\
\theta_k &= \theta_{k-1} + T\,q_{k-1},
\end{aligned}
\tag{12.8}
$$

where k is the sampling instant and T denotes the sampling time. The dynamic model of the AUV in the diving plane [1] is given as

$$
\begin{aligned}
w_k &= w_{k-1} + T\left(\frac{(W - B)\cos\theta_{k-1}}{m - Z_{\dot{w}}} + \frac{Z_{uw}}{m - Z_{\dot{w}}}\,u\,w_{k-1} + \frac{Z_{w|w|}}{m - Z_{\dot{w}}}\,w_{k-1}|w_{k-1}|\right)\\
&\quad + T\left(\frac{Z_{uq} + m}{m - Z_{\dot{w}}}\,u\,q_{k-1} + \frac{Z_{q|q|}}{m - Z_{\dot{w}}}\,q_{k-1}|q_{k-1}| + \frac{m z_g}{m - Z_{\dot{w}}}\,q_{k-1}^2\right),\\
q_k &= q_{k-1} + T\left(-\frac{W z_g\sin\theta_{k-1} + B x_b\cos\theta_{k-1}}{I_{yy} - M_{\dot{q}}} + \frac{M_{q|q|}}{I_{yy} - M_{\dot{q}}}\,q_{k-1}|q_{k-1}| + \frac{M_{uq}}{I_{yy} - M_{\dot{q}}}\,u\,q_{k-1}\right)\\
&\quad + T\left(\frac{M_{w|w|}}{I_{yy} - M_{\dot{q}}}\,w_{k-1}|w_{k-1}| + \frac{M_{uw}}{I_{yy} - M_{\dot{q}}}\,u\,w_{k-1} + \frac{M_{uu\delta_s}}{I_{yy} - M_{\dot{q}}}\,u^2\,\delta_{s,k-1}\right),
\end{aligned}
\tag{12.9}
$$

where δs is the input to the stern plane.
Remark 4: Considering no coupling terms with the heading-plane dynamics [1,32], the parameters of the depth dynamics in Eq. (12.9) are identified independently. By doing so, the tracking performance of the proposed controller increases.
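The discretized depth-plane model of Eqs. (12.8) and (12.9) can be stepped forward as in the following sketch. All coefficient values here are placeholders chosen for illustration only; they are not the INFANTE parameters:

```python
import numpy as np

# Placeholder coefficients (illustrative only -- not the INFANTE values).
m, zg, xb = 30.0, 0.02, 0.0          # mass [kg], CG/CB offsets [m]
W, B = 294.0, 294.0                  # weight and buoyancy [N] (neutrally buoyant)
Zwdot, Zuw, Zww, Zuq, Zqq = -30.0, -30.0, -100.0, -5.0, -1.0
Iyy, Mqdot, Mww, Muw, Mqq, Muq, Muuds = 3.5, -4.0, 3.0, 2.0, -7.0, -2.0, -6.0
u, T = 1.0, 0.1                      # constant surge speed [m/s], sampling time [s]

def step(z, theta, w, q, delta_s):
    """One Euler step of the depth-plane model, Eqs. (12.8)-(12.9)."""
    # Kinematics, Eq. (12.8)
    z_next = z + T * (-u * np.sin(theta) + w * np.cos(theta))
    theta_next = theta + T * q
    # Heave dynamics
    w_next = w + T / (m - Zwdot) * (
        (W - B) * np.cos(theta) + Zuw * u * w + Zww * w * abs(w)
        + (Zuq + m) * u * q + Zqq * q * abs(q) + m * zg * q ** 2)
    # Pitch dynamics (delta_s is the stern-plane input)
    q_next = q + T / (Iyy - Mqdot) * (
        -(W * zg * np.sin(theta) + B * xb * np.cos(theta))
        + Mww * w * abs(w) + Muw * u * w
        + Mqq * q * abs(q) + Muq * u * q + Muuds * u ** 2 * delta_s)
    return z_next, theta_next, w_next, q_next
```

A neutrally buoyant vehicle started at rest with zero stern-plane input remains at rest, while a nonzero deflection immediately produces a pitch rate.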

12.3 Identification of autonomous underwater vehicle dynamics using an extreme learning machine model

Define $x_{in} \in \Re^{m+n}$ as the augmented input vector to the ELM model, that is, $x_{in} = [x_k; u_k]$, where $x_k = [w_{k-1}, q_{k-1}]^T$ is the output of the ELM model and $u_k$ denotes the actuation signal to the stern plane. The ELM model is shown in Fig. 12.2A. The nonlinear discrete-time state space representation of the AUV dynamics given by Eq. (12.9) is identified using the ELM model and is given by

$$
\hat{x}_{k+1} = f(x_{in,k})
\tag{12.10}
$$

where

$$
f(x_{in,k}) = \sum_{i=1}^{n_h} \phi_i(w_{h_{j,i}}\,x_{in} + b_i)\,w_i = \Phi\,w_o
\tag{12.11}
$$

where $\phi$ denotes the hidden layer activation function, $\Phi$ is the hidden layer output matrix, $n_h$ is the number of hidden nodes, $w_o$ indicates the output parameters linking the hidden and the output nodes, $w_h$ denotes the ELM model's input parameters, and j varies from 1 to the number of samples N.

FIGURE 12.2 (A) ELM structure and (B) proposed control structure. ELM, Extreme learning machine.

Remark 5: Considering no coupling terms with the heading-plane dynamics [1,32], the depth-plane parameters in Eq. (12.9) are identified independently, thus increasing the precision of the controller during diving tracking.
In the ELM learning algorithm the hidden layer parameters, that is, the input parameters $w_h$ and the biases b, are not tuned; rather, they are randomly assigned in the interval [-1, 1] [33-35]. Thus the training of the ELM model involves finding only the least squares solution for the output parameters $w_o$, which is acquired by solving the following ridge-regression-based optimization problem:


$$
\min_{w_o}\big\{\|\Phi\,w_o - y_o\|^2 + \eta\,\|w_o\|^2\big\}
\tag{12.12}
$$

where η denotes the regularization coefficient and $y_o$ is the desired output vector. The penalty term $\eta\|w_o\|^2$ in Eq. (12.12) penalizes the estimated weights, thus increasing the generalization performance of the predicted model. The optimum output weights are thus obtained by solving Eq. (12.12) and are given as

$$
w_o = (\Phi^T\Phi + \eta I)^{-1}\Phi^T y_o
\tag{12.13}
$$

where I is the identity matrix.
Remark 6: The hidden layer parameters are obtained by randomly assigning the input parameters and hidden node biases in the interval [-1, 1]. By doing so, the training reduces to a single-step linear calculation. However, doing so has a certain demerit: it may result in an ill-conditioned output matrix from the hidden layer. Thus, to increase the efficacy of the ELM model, optimization of the hidden layer parameters is required.
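The batch training in Eqs. (12.11)-(12.13) amounts to a random hidden layer followed by a single ridge-regression solve. A minimal sketch (the function names are illustrative) is:

```python
import numpy as np

def elm_train(X, Y, n_hidden=4, reg=1e-3, rng=None):
    """Batch ELM: random hidden layer, ridge-regression output weights (Eq. 12.13)."""
    rng = np.random.default_rng(rng)
    n_in = X.shape[1]
    Wh = rng.uniform(-1.0, 1.0, size=(n_in, n_hidden))  # input weights in [-1, 1]
    b = rng.uniform(-1.0, 1.0, size=n_hidden)           # hidden biases in [-1, 1]
    H = np.tanh(X @ Wh + b)                             # hidden-layer output matrix Phi
    # w_o = (Phi^T Phi + eta I)^-1 Phi^T y_o
    Wo = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return Wh, b, Wo

def elm_predict(X, Wh, b, Wo):
    """Eq. (12.11): x_hat = Phi w_o."""
    return np.tanh(X @ Wh + b) @ Wo
```

With enough hidden nodes the random-feature regression fits smooth nonlinear maps closely even though only the output weights are trained.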

12.3.1 Sequential extreme learning machine model for the autonomous underwater vehicle dynamics

ELM is a batch-learning process. However, in real time the training data arrive one by one; thus there is a need to revise the ELM model so as to make it sequential on-line. Moreover, to account for any uncertainties that may arise due to wave disturbances, model mismatch, ocean currents, or parameter variations during the formulation of the control law for the AUV, the ELM model needs to adapt on-line; thus an on-line sequential algorithm is needed for the ELM model. Such an on-line sequential learning algorithm is referred to as OS-ELM [14]. The steps adopted for the adaptive ELM model (OS-ELM) [14] are shown in Algorithm 12.1.

Algorithm 12.1 Sequential ELM algorithm
Step 1: Based on experimentally collected data sets, train the ELM model off-line to find the model parameters and subsequently determine the output weight vector using (12.11).
Step 2: For on-line training, set the iteration i = 0 and, for each new input-output pair of data:
1. estimate the output matrix of the hidden layer;
2. determine the output weight vector using the following adaptive law:
   $w_{i+1} = w_i + K_{i+1}\,\phi_{i+1}^T(y_{i+1} - \phi_{i+1} w_i)$
   $K_{i+1} = K_i - K_i\,\phi_{i+1}^T(I + \phi_{i+1} K_i \phi_{i+1}^T)^{-1}\phi_{i+1} K_i$
3. increment the iteration, that is, i = i + 1, and repeat Step 2.
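The sequential update above is a recursive least squares iteration on the output weights, with the hidden layer kept fixed after initialization. A minimal sketch, with illustrative class and method names, is:

```python
import numpy as np

class OSELM:
    """Sketch of the OS-ELM update in Algorithm 12.1 (recursive least squares
    on the output weights; hidden layer stays fixed after initialization)."""

    def __init__(self, Wh, b, H0, Y0, reg=1e-3):
        self.Wh, self.b = Wh, b
        n_h = H0.shape[1]
        self.K = np.linalg.inv(H0.T @ H0 + reg * np.eye(n_h))  # K_0
        self.Wo = self.K @ H0.T @ Y0                           # w_0

    def _hidden(self, x):
        return np.tanh(x @ self.Wh + self.b).reshape(1, -1)

    def update(self, x, y):
        """One sequential step: w_{i+1} = w_i + K_{i+1} phi^T (y - phi w_i)."""
        phi = self._hidden(x)
        # K_{i+1} = K_i - K_i phi^T (I + phi K_i phi^T)^-1 phi K_i
        S = np.linalg.inv(np.eye(1) + phi @ self.K @ phi.T)
        self.K = self.K - self.K @ phi.T @ S @ phi @ self.K
        self.Wo = self.Wo + self.K @ phi.T @ (y - phi @ self.Wo)

    def predict(self, x):
        return self._hidden(x) @ self.Wo
```

Each `update` call costs only a small matrix inversion of the size of one observation, so the model can track slowly varying dynamics sample by sample.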


12.4 Design of diving controller

The cascade control structure for depth tracking of the AUV consists of a kinematic controller and a dynamic controller, as shown in Fig. 12.2B. The inputs to the identified ELM model are given via a tapped delay line, which stores the inputs temporarily, and the model predicts the desired velocity of the AUV. The kinematic controller provides a suitable guidance law to the dynamic controller, which in turn drives the AUV to the reference depth profile. In other words, the kinematic controller receives as inputs the position (z) and pitch orientation (θ) in the inertial frame and generates the desired pitch velocity (q). The dynamic controller then takes q as input and generates a suitable actuation signal (δs) to the fins (stern plane) that will steer the AUV to the desired depth.

12.4.1 Kinematic backstepping control law

Let $z_d$ be the reference depth trajectory that the AUV is required to track. Then the diving cross-track error is given as

$$
z_{e,k} = z_k - z_d.
\tag{12.14}
$$

The objective is to make $z_{e,k}$ tend to zero asymptotically as $t \to \infty$. Thus, to reduce $z_{e,k}$ at every sampling instant, an appropriate Lyapunov function is chosen and is given by

$$
V_{1,k} = \frac{1}{2} z_{e,k}^2.
\tag{12.15}
$$

To bring about the minimization of the Lyapunov function $V_{1,k}$ at each instant of time, the following condition is necessitated:

$$
V_{1,k} - V_{1,k-1} \le 0.
\tag{12.16}
$$

Substituting $z_k$ from (12.8) into (12.15), the objective function $V_{1,k}$ is rewritten as

$$
V_{1,k} = V_{1,k-1} + T\,U z_{e,k-1}\sin(\theta_{k-1} - \delta_d) + \frac{1}{2}T^2 U^2\sin^2(\theta_{k-1} - \delta_d),
\tag{12.17}
$$

where $\delta_d = \tan^{-1}(w_{k-1}/u)$ is the angle of attack and $U = \sqrt{u^2 + w_{k-1}^2}$ is the resultant velocity in the vertical plane. Because of the slowly varying dynamics of the AUV, the sampling time can be assumed to satisfy $T \ll 1$. With $T \ll 1$, Eq. (12.17) is written as

$$
V_{1,k} - V_{1,k-1} = T\,U z_{e,k}\sin(\theta_{k-1} - \delta_d).
\tag{12.18}
$$

To satisfy Eq. (12.16), Eq. (12.18) can be written as

$$
T\,U z_{e,k}\sin(\theta_{k-1} - \delta_d) \le 0.
\tag{12.19}
$$


As the sampling time T and the resultant velocity component U are always positive, the desired pitch orientation can therefore be selected as

$$
\theta_{d,k-1} = -\theta_a\tanh(N_\delta z_{e,k-1}) + \delta_d
\tag{12.20}
$$

where $\theta_a$ is the approaching angle and $N_\delta$ is a nonnegative term. Further, to minimize the difference between the actual (Eq. 12.8) and reference (Eq. 12.20) pitch orientations at the kth sampling instant, a pitch orientation error $e_{\theta,k}$ is defined and is given by

$$
e_{\theta,k} = \theta_k - \theta_{d,k}.
\tag{12.21}
$$

As the pitch rate and pitch orientation satisfy the strict-feedforward form, using a suitable Lyapunov function and following the same steps as used previously to obtain the desired pitch orientation, the desired pitch velocity at the kth instant is given as

$$
q_{d,k} = \frac{1}{T}\,(k_1 e_{\theta,k-1} - \theta_{k-1}),
\tag{12.22}
$$

where $k_1$ is a positive constant taking values in (0, 1). Using the expression for $e_{\theta,k-1}$ from (12.21) in (12.22), the desired pitch velocity $q_{d,k}$ can be computed at every sampling instant and is further used by the dynamic controller, as discussed in Section 12.4.2, to evaluate a suitable actuation signal that will steer the AUV along the reference depth trajectory.
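The guidance computations of Eqs. (12.14) and (12.20)-(12.22) can be sketched as follows; the gain values and the sign convention for pitch are illustrative assumptions, not the tuned values used in the chapter:

```python
import numpy as np

def kinematic_guidance(z, z_d, theta, w, u, T=0.1, theta_a=0.4, N_delta=1.0, k1=0.5):
    """Kinematic backstepping guidance, Eqs. (12.14), (12.20)-(12.22):
    depth error -> desired pitch -> desired pitch rate (illustrative gains)."""
    z_e = z - z_d                                          # cross-track error, Eq. (12.14)
    delta_d = np.arctan2(w, u)                             # angle of attack
    theta_d = -theta_a * np.tanh(N_delta * z_e) + delta_d  # desired pitch, Eq. (12.20)
    e_theta = theta - theta_d                              # pitch error, Eq. (12.21)
    q_d = (k1 * e_theta - theta) / T                       # desired pitch rate, Eq. (12.22)
    return theta_d, q_d
```

The tanh term saturates the commanded pitch at the approach angle θ_a for large depth errors, which is what keeps the actuation away from saturation in path-following mode.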

12.4.2 Dynamic nonlinear proportional, integral, and derivative control law

This section presents a discussion of the dynamic control law, which drives the AUV to track the reference pitch rate generated by the kinematic control law. The objective is to formulate a cost function in terms of the error and to minimize it with respect to the input such that the error $q_e(k) = \hat{q}(k) - q_{des}(k)$ asymptotically goes to zero, that is,

$$
\underset{u}{\text{Minimize}}\ \Theta = \frac{1}{2}\sum \big(q(k) - q_{des}(k)\big)^2
\tag{12.23}
$$

Eq. (12.23) can be rewritten in terms of the estimated value of the AUV dynamics as

$$
\underset{u}{\text{Minimize}}\ \Theta = \frac{1}{2}\sum \big(\hat{q}(k) - q_{des}(k)\big)^2
\tag{12.24}
$$

The discrete version of the PID control law that satisfies (12.24) is given as

$$
u(k+1) = u(k) + \big[k_p(k+1)\ \ k_d(k+1)\ \ k_i(k+1)\big]\,x_c
\tag{12.25}
$$

where $k_p$, $k_i$, and $k_d$ are the proportional, integral, and derivative gains of the PID controller, respectively, and $x_c$ is the error vector given as

$$
x_c = \big[q_e(k+1) - q_e(k)\ \ \ q_e(k+1) - 2q_e(k) + q_e(k-1)\ \ \ q_e(k+1)\big]^T
\tag{12.26}
$$

where $q_e(k) = q(k) - q_d(k)$.


To deal with uncertainties during the control law formulation, the parameters of the PID controller (12.25) need to be updated so as to achieve efficient tracking performance. Using the steepest descent algorithm, the parameters of the PID controller are updated as

$$
\begin{aligned}
k_p(k+1) &= k_p(k) + \eta_p\,q_e(k)\,J(k)\,[q_e(k+1) - q_e(k)]\\
k_d(k+1) &= k_d(k) + \eta_d\,q_e(k+1)\,J(k)\,[q_e(k+1) - 2q_e(k) + q_e(k-1)]\\
k_i(k+1) &= k_i(k) + \eta_i\,q_e(k)\,J(k)\,q_e(k+1)
\end{aligned}
\tag{12.27}
$$

where η denotes the step size and J(k) is the Jacobian of the ELM model. The steps for designing the dynamic controller are given in Algorithm 12.2.

Algorithm 12.2 Self-tuning PID controller algorithm
Step 1: Initialize kp(k), ki(k), kd(k), ηp(k), ηi(k), ηd(k), the hidden layer parameters, and nc.
Step 2: Determine w0 and K0 off-line using OS-ELM.
Step 3: For k = 1 to Iterations:
Step 4: Calculate θe(k) and qd(k) from the AUV kinematic equations.
Step 5: Read w(k) and q(k) from the AUV using (12.9).
Step 6: Estimate ŵ(k) and q̂(k) using (12.10).
Step 7: Calculate the error qe(k) = q(k) − qd(k).
Step 8: For i = 1 to size(xin, 2) (calculate the Jacobian J(k), i.e., yu, of the OS-ELM model):
Step 9: For j = 1 to nh:
Step 10: Update yu using the adaptive law yu = yu + wh(i, j)(I − H(j)H(j)^T) wo(j).
Step 11: End
Step 12: End
Step 13: Calculate kp(k + 1), ki(k + 1), and kd(k + 1) using (12.27).
Step 14: Determine the control law using (12.25).
Step 15: Update the gains of the controller using (12.27).
Step 16: End
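Eqs. (12.25)-(12.27) combine an incremental PID law with steepest-descent gain adaptation. A minimal sketch follows; for simplicity the model Jacobian J(k), which would come from the OS-ELM model in Algorithm 12.2, is passed in as a scalar, and the gain and step-size values are illustrative:

```python
class SelfTuningPID:
    """Sketch of the discrete self-tuning PID law, Eqs. (12.25)-(12.27)."""

    def __init__(self, kp=0.5, ki=0.05, kd=0.1, etas=(1e-3, 1e-3, 1e-3)):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.eta_p, self.eta_d, self.eta_i = etas
        self.e_prev, self.e_prev2 = 0.0, 0.0   # q_e(k), q_e(k-1)
        self.u = 0.0                           # current actuation u(k)

    def step(self, e, jac):
        """e = q_e(k+1); jac = model Jacobian J(k). Returns u(k+1), Eq. (12.25)."""
        d1 = e - self.e_prev                        # q_e(k+1) - q_e(k)
        d2 = e - 2 * self.e_prev + self.e_prev2     # second difference
        # Gain adaptation by steepest descent, Eq. (12.27)
        self.kp += self.eta_p * self.e_prev * jac * d1
        self.kd += self.eta_d * e * jac * d2
        self.ki += self.eta_i * self.e_prev * jac * e
        # Incremental PID law, Eq. (12.25): u(k+1) = u(k) + [kp kd ki] x_c
        self.u += self.kp * d1 + self.kd * d2 + self.ki * e
        self.e_prev2, self.e_prev = self.e_prev, e
        return self.u
```

Because the law is incremental (velocity form), the integral action is implicit in accumulating u, and the gains drift only slowly under the small step sizes.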

12.5 Control law formulation with delay prediction

Owing to the slow sampling rate of sensors such as the Doppler velocity log (DVL) sensor (sampling rate between 4 and 5 Hz) [36], the position control of the AUV is inaccurate, as the variable delay introduced by the DVL sensor modifies the depth dynamics of the AUV. Thus the variable delay $\hat{t}_d(k)$ needs to be predicted on-line so as to improve the performance of the proposed PID controller during the position control of the AUV. In view of the aforementioned reason, an ELM model, as discussed in Section 12.3, is used to predict the time delay at every sampling instant. Let the nonlinear difference equation for delay prediction be given as

$$
t_d(k) = f\big(t_d(k-1), t_d(k-2), \ldots, t_d(k-n_d)\big)
\tag{12.28}
$$

where $n_d$ is the lag in the time delay.


The input-output data set $(x_{inp}, y_o)$ is collected by time stamp before the current sampling time, where $x_{inp} = [t_d(k-1), t_d(k-2), \ldots, t_d(k-n_d)]$ denotes the input vector and $y_o = t_d(k)$ is the output of the ELM model. With the value of the delay $\hat{t}_d(k)$ predicted by the ELM model, the OS-ELM model then identifies the modified dynamics of the AUV. The control law (12.25) depends directly on the Jacobian of the identified OS-ELM model of the AUV, so the control action takes the delay into consideration before the loop is closed, thereby improving the performance of the proposed PID controller in the face of variable delay.
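The delay predictor of Eq. (12.28) is itself a small ELM trained on lagged delay measurements. A sketch under the settings later quoted in Section 12.6 (three hidden nodes, n_d = 2; the function names are illustrative) is:

```python
import numpy as np

def make_delay_regressors(td_history, n_d=2):
    """Build (x, y) pairs from a logged delay sequence per Eq. (12.28):
    t_d(k) is predicted from its n_d previous values."""
    X = np.array([td_history[k - n_d:k] for k in range(n_d, len(td_history))])
    y = np.array(td_history[n_d:])
    return X, y

def train_delay_elm(X, y, n_hidden=3, reg=1e-4, rng=0):
    """Small ELM for one-step-ahead delay prediction; returns a predictor."""
    g = np.random.default_rng(rng)
    Wh = g.uniform(-1, 1, size=(X.shape[1], n_hidden))  # random input weights
    b = g.uniform(-1, 1, size=n_hidden)                 # random biases
    H = np.tanh(X @ Wh + b)
    Wo = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)

    def predict(x):
        return (np.tanh(np.atleast_2d(x) @ Wh + b) @ Wo)[0]

    return predict
```

At each sampling instant the predictor is fed the last n_d logged delays and its output $\hat{t}_d(k)$ is passed to the OS-ELM plant model.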

12.6 Results and discussion

The proposed self-tuning PID controller for the AUV is verified in simulation using the MATLAB environment. The parameters of the INFANTE AUV [11] are used for the design of the proposed PID controller. The sampling time is taken as 0.56 second. The hidden layer parameters were initialized in [-1, 1]. Based on the mean square error obtained during the training and testing phases, the number of hidden nodes is chosen as four for the OS-ELM model identifying the dynamics of the AUV. Another ELM structure, with the number of hidden nodes set to three and $n_d$ set to two, is used to construct the time delay estimator. In both ELM structures a hyperbolic tangent activation function is used. To implement the proposed control algorithm, the surge velocity is kept constant at 1 m/s and the desired depth is given in (12.29). All the initial states of the AUV are set to zero.

$$
z_d = \begin{cases} 30, & 500 \le t \le 1500 \ \text{and} \ 2500 \le t \le 4000 \\ 0, & \text{otherwise} \end{cases}
\tag{12.29}
$$

Two cases are considered for the simulation.
Case I: With fixed delay (without delay estimator): Fig. 12.3 shows the performance of the proposed controller without a delay estimator. Here, a fixed delay of 0.3 second is used, which does not correspond to the actual variable delay that exists due to the presence of the DVL sensor. From Fig. 12.3A and B, it is seen that the proposed controller does not track the reference depth trajectory efficiently. Fig. 12.3D-F shows the corresponding states of the AUV, and Fig. 12.3C shows the corresponding control input. Further, it is seen from Fig. 12.3A and B that oscillation exists in the output depth trajectory during tracking, which degrades the depth tracking performance of the AUV. To avoid this, a delay estimator is proposed using ELM, which predicts the actual delay that may exist in the network due to the presence of the DVL sensor.
Case II: With variable delay (with delay estimator): To verify the efficacy of the proposed controller in the presence of the proposed time delay estimator, a variable delay is considered which varies randomly between

FIGURE 12.3 Performance of the proposed PID controller without delay estimator: (A) depth orientation, (B) cross-track error, (C) control input, (D) pitch orientation, (E) pitch rate, and (F) heave velocity during diving maneuvering. PID, Proportional, integral, and derivative.

0.05 and 0.2 second. The simulation of the proposed controller in the presence of the delay estimator is shown in Fig. 12.4. From Fig. 12.4A and B, it is seen that the proposed controller, in the presence of the proposed delay estimator, tracks the reference depth trajectory efficiently in the face of the variable delay in the sensor measurements. Fig. 12.4D-F shows the states of the AUV during the diving maneuver. The control input required for tracking the reference depth profile is shown in Fig. 12.4C. Further, it is observed that the input and states of the AUV change very quickly due to the presence of the variable delay in the sensor measurements.

FIGURE 12.4 Performance of the proposed PID controller in the presence of estimated delay: (A) depth orientation, (B) cross-track error, (C) control input, (D) pitch orientation, (E) pitch rate, and (F) heave velocity during diving maneuvering. PID, Proportional, integral, and derivative.

12.7 Conclusion

We proposed the development of a self-tuning PID controller with a time delay estimator for tracking a desired depth trajectory of an AUV using the ELM structure. It is observed from the simulation results that the identified OS-ELM model predicts the states of the AUV successfully by including the predicted delay-time information from the ELM-based delay estimator in the dynamics of the AUV. Further, it is also observed that the proposed self-tuning PID controller provides excellent tracking performance and is quite effective against the variable delay introduced by the sensors in the control network.

References

[1] R. Rout, B. Subudhi, NARMAX self-tuning controller for line-of-sight-based waypoint tracking for an autonomous underwater vehicle, IEEE Trans. Control Syst. Technol. 25 (4) (2017) 1529–1536.
[2] S.A. Billings, Nonlinear System Identification, John Wiley & Sons, 2013.
[3] F. Lewis, S. Jagannathan, A. Yesildirak, Neural Network Control of Robot Manipulators and Non-linear Systems, CRC Press, 1998.
[4] M.A. Kon, L. Plaskota, A. Cohen, C. Rabut, L. Schumaker, Neural networks, radial basis functions, and complexity, in: Proceedings of Bialowieza Conference on Statistical Physics, 1997, pp. 122–145.
[5] J. González, I. Rojas, J. Ortega, H. Pomares, F.J. Fernandez, A.F. Díaz, Multiobjective evolutionary optimization of the size, shape, and position parameters of radial basis function networks for function approximation, IEEE Trans. Neural Netw. 14 (6) (2003) 1478–1495.
[6] H.-L. Wei, S.A. Billings, J. Liu, Term and variable selection for non-linear system identification, Int. J. Control 77 (1) (2004) 86–110.
[7] G.-B. Huang, Q.-Y. Zhu, C.-K. Siew, Extreme learning machine: theory and applications, Neurocomputing 70 (1–3) (2006) 489–501.
[8] G.-B. Huang, Q.-Y. Zhu, C.-K. Siew, Extreme learning machine: a new learning scheme of feedforward neural networks, in: Proceedings of the 2004 IEEE International Joint Conference on Neural Networks, vol. 2, IEEE, 2004, pp. 985–990.
[9] G.-B. Huang, D.H. Wang, Y. Lan, Extreme learning machines: a survey, Int. J. Mach. Learn. Cybern. 2 (2) (2011) 107–122.
[10] P.L. Bartlett, The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network, IEEE Trans. Inf. Theory 44 (2) (1998) 525–536.
[11] C. Silvestre, Multi-objective Optimization Theory With Applications to the Integrated Design of Controllers/Plants for Autonomous Vehicles (dissertation), 2000.
[12] T.I. Fossen, Guidance and Control of Ocean Vehicles, John Wiley & Sons Inc, 1994.
[13] T.T.J. Prestero, Verification of a Six-Degree of Freedom Simulation Model for the Remus Autonomous Underwater Vehicle (Ph.D. dissertation), Massachusetts Institute of Technology, 2001.
[14] N.-Y. Liang, G.-B. Huang, P. Saratchandran, N. Sundararajan, A fast and accurate online sequential learning algorithm for feedforward networks, IEEE Trans. Neural Netw. 17 (6) (2006) 1411–1423.
[15] G. Antonelli, S. Chiaverini, N. Sarkar, M. West, Adaptive control of an autonomous underwater vehicle: experimental results on ODIN, IEEE Trans. Control Syst. Technol. 9 (5) (2001) 756–765.
[16] B.K. Sahu, B. Subudhi, Adaptive tracking control of an autonomous underwater vehicle, Int. J. Autom. Comput. 11 (3) (2014) 299–307.
[17] X. Wu, Z. Feng, J. Zhu, R. Allen, Line of sight guidance with intelligent obstacle avoidance for autonomous underwater vehicles, in: OCEANS 2006, IEEE, 2006, pp. 1–6.
[18] K. Teo, E. An, P.-P.J. Beaujean, A robust fuzzy autonomous underwater vehicle (AUV) docking approach for unknown current disturbances, IEEE J. Oceanic Eng. 37 (2) (2012) 143–155.
[19] X. Xiang, C. Yu, L. Lapierre, J. Zhang, Q. Zhang, Survey on fuzzy-logic-based guidance and control of marine surface vehicles and underwater vehicles, Int. J. Fuzzy Syst. 20 (2) (2018) 572–586.
[20] E. Zakeri, S. Farahat, S.A. Moezi, A. Zare, Robust sliding mode control of a mini unmanned underwater vehicle equipped with a new arrangement of water jet propulsions: simulation and experimental study, Appl. Ocean Res. 59 (2016) 521–542.
[21] J. Cheng, J. Yi, D. Zhao, Design of a sliding mode controller for trajectory tracking problem of marine vessels, IET Control Theory Appl. 1 (1) (2007) 233–237.
[22] A.P. Aguiar, A.M. Pascoal, Dynamic positioning and way-point tracking of underactuated AUVs in the presence of ocean currents, Int. J. Control 80 (7) (2007) 1092–1108.
[23] B.N. Rath, B. Subudhi, V. Filaretov, A. Zuev, A new backstepping control design method for autonomous underwater vehicle in diving and steering plane, in: TENCON 2017 – 2017 IEEE Region 10 Conference, IEEE, 2017, pp. 1984–1987.
[24] S. Mahapatra, B. Subudhi, Design of a steering control law for an autonomous underwater vehicle using nonlinear H∞ state feedback technique, Nonlinear Dyn. 90 (2) (2017) 837–854.
[25] E. Roche, O. Sename, D. Simon, LFT/H∞ varying sampling control for autonomous underwater vehicles, in: 4th IFAC Symposium on System, Structure and Control, 2010.
[26] S. Saat, S.K. Nguang, A.M. Darsono, N. Azman, Nonlinear H∞ feedback control with integrator for polynomial discrete-time systems, J. Franklin Inst. 351 (8) (2014) 4023–4038.
[27] K. Alexis, C. Papachristos, R. Siegwart, A. Tzes, Robust explicit model predictive flight control of unmanned rotorcrafts: design and experimental evaluation, in: 2014 European Control Conference (ECC), IEEE, 2014, pp. 498–503.
[28] K. Alexis, C. Papachristos, R. Siegwart, A. Tzes, Robust model predictive flight control of unmanned rotorcrafts, J. Intell. Rob. Syst. 81 (3–4) (2016) 443–469.
[29] Z. Li, J. Deng, R. Lu, Y. Xu, J. Bai, C.-Y. Su, Trajectory-tracking control of mobile robot systems incorporating neural-dynamic optimized model predictive approach, IEEE Trans. Syst. Man Cybern., A: Syst. 46 (6) (2016) 740–749.
[30] C. Shen, B. Buckham, Y. Shi, Modified C/GMRES algorithm for fast nonlinear model predictive tracking control of AUVs, IEEE Trans. Control Syst. Technol. 25 (5) (2017) 1896–1904.
[31] S.P. Hou, C.C. Cheah, Can a simple control scheme work for a formation control of multiple autonomous underwater vehicles, IEEE Trans. Control Syst. Technol. 19 (5) (2011) 1090–1101.
[32] R. Rout, B.
Subudhi, Inverse optimal self-tuning PID control design for an autonomous underwater vehicle, Int. J. Syst. Sci. 48 (2) (2017) 367375. [33] Q.-Y. Zhu, A.K. Qin, P.N. Suganthan, G.-B. Huang, Evolutionary extreme learning machine, Pattern Recognit. 38 (10) (2005) 17591763. [34] Y. Xu, Y. Shu, Evolutionary extreme learning machinebased on particle swarm optimization, International Symposium on Neural Networks, Springer, 2006, pp. 644652. [35] G. Zhao, Z. Shen, C. Miao, Z. Man, On improving the conditioning of extreme learning machine: a linear case, Information, Communications and Signal Processing, 2009. ICICS 2009. 7th International Conference on, IEEE, 2009, pp. 15. [36] J. Kim, H. Joe, S.C. Yu, J.S. Lee, M. Kim, Time-delay controller design for position control of autonomous underwater vehicle under disturbances, IEEE Trans. Ind. Electron. 63 (2) (2016) 10521061.

CHAPTER 13
Geometric total plaque area is an equally powerful phenotype compared with carotid intima–media thickness for stroke risk assessment: A deep learning approach

Elisa Cuadrado-Godia 1, Saurabh K. Srivastava 2, Luca Saba 3, Tadashi Araki 4, Harman S. Suri 5, Argiris Giannopolulos 6, Tomaz Omerzu 7, John Laird 8, Narendra N. Khanna 9, Sophie Mavrogeni 10, George D. Kitas 11,12, Andrew Nicolaides 13 and Jasjit S. Suri 14
1 Department of Neurology, IMIM—Hospital del Mar, Barcelona, Spain; 2 Department of Computer Science & Engineering, ABES EC, Ghaziabad, India; 3 Department of Radiology, Azienda Ospedaliero Universitaria, Cagliari, Italy; 4 Department of Cardiology, Toho University, Tokyo, Japan; 5 Brown University, Providence, RI, United States; 6 Department of Vascular Surgery, Imperial College, London, United Kingdom; 7 Department of Neurology, University Medical Centre Maribor, Maribor, Slovenia; 8 Department of Cardiology, St. Helena Hospital, St. Helena, CA, United States; 9 Department of Cardiology, Apollo Hospitals, New Delhi, India; 10 Cardiology Clinic, Onassis Cardiac Surgery Center, Athens, Greece; 11 Arthritis Research UK Epidemiology Unit, Manchester University, Manchester, United Kingdom; 12 Department of Rheumatology, Group NHS Foundation Trust, Dudley, United Kingdom; 13 Vascular Diagnostic Center, University of Cyprus, Nicosia, Cyprus; 14 Stroke Monitoring Division, AtheroPoint, Roseville, CA, United States

13.1 Introduction
Cardiovascular diseases (CVDs) are prevalent in both developing and developed countries, accounting for about five million deaths each year [1]. In the United States, there is one death due to heart attack or stroke every 43 seconds

Cognitive Informatics, Computer Modelling, and Cognitive Science, Volume 1. DOI: https://doi.org/10.1016/B978-0-12-819443-0.00013-1 © 2020 Elsevier Inc. All rights reserved.


[2]. The financial toll is US $53.6 billion a year [2], which includes both direct and indirect costs. Stroke and myocardial infarction [3] are caused by a lack of oxygenated blood supply to the brain and heart through the arteries. This lack is due to plaque formation in the arterial wall, and the disease is called atherosclerosis [4]. Both external factors (environmental, such as pollution) and internal factors [4,5] (such as lipid formation, genetics, diabetes, hypertension, cholesterol, and rheumatoid arthritis) contribute to disease formation [5,6]. Stenosis in the arterial walls can be imaged using several modalities such as MRI, CT, and ultrasound, of which ultrasound offers several advantages such as low cost, user friendliness, and ease of diagnosis [7–9]. With advancements in image reconstruction technology such as ultrasound beam formation, one can obtain a high-resolution ultrasound image depicting the arterial lesions. The quantification of these lesions can act as a risk biomarker for carotid artery disease; thus an advanced set of tools is required for quantifying these carotid artery disease risk biomarkers. Carotid intima–media thickness (cIMT) is one of the most popular biomarkers used for monitoring stroke and cardiovascular risk [3,10,11]. Most clinics or vascular laboratories use either manual or semiautomated methods for cIMT measurement. These methods require the sonographer (or sonologist or radiologist) to place the region of interest (ROI) window in the far wall (distal, mid, or proximal) of the carotid artery. Recently, a second biomarker, the so-called total plaque area (TPA), has been shown to be linked with stroke and cardiovascular risk [12–15]. These studies measure TPA using manual methods such as mouse tracings of the media region along with the plaque that is above the baseline.
Since manual cIMT and TPA computations are tedious and prone to inter- and intraobserver variability, there is a need for a fast, automated, and accurate strategy for both cIMT and TPA measurements. This study focuses on the development of such automated cIMT and TPA measurements. Interestingly, the shape of the carotid artery resembles a cylinder with a fixed thickness; one can therefore model the measurement of TPA by fitting a 3D cylinder of uniform thickness to the carotid artery [16]. Since the mid common carotid artery (CCA) section has a cylindrical geometry, we hypothesize that fast and accurate TPA computations are possible in the mid and proximal regions of the carotid artery. The thickness of the cylinder can be computed by taking the difference between the outer and inner cylinders. In Fig. 13.1 the area of the intima–media complex for the far wall is computed by subtracting the inner cylinder from the outer cylinder. The outer cylindrical area along the length of the carotid artery is a function of the lumen diameter (LD) and the thickness of the cylinder (cIMT); similarly, the inner cylindrical area is a function of LD alone. As a result, we need both cIMT and LD computations for the TPA measurements. For cIMT computation, we have used a standard intelligence-based deep learning (DL) technique for automated carotid wall interface detection


FIGURE 13.1 Left: IMT complex for the near and far wall. Right: Computer-assisted common carotid artery. IMT, Intima–media thickness.

FIGURE 13.2 Main pipeline, global picture of the system: ultrasound scans → cIMT and LD measurement → gTPA measurement → risk assessment.

followed by cIMT and TPA measurements. Since DL paradigms require manual tracings to train the neural network, we have adopted two different manual tracers for the DL design, leading to two sets of DL systems. The DL system captures plaque morphology along the carotid walls (typically the carotid mid or proximal subsections of the carotid artery), and cIMT is measured using the automated standardized polyline distance method [17,18]. Since the wall interfaces for the lumen–intima (LI) and media–adventitia (MA) run all along the morphology of the plaque, the TPA computations are labeled as morphologic TPA (mTPA). As explained earlier, mTPA is a function of LD and cIMT. Hereafter, we denote LD by D, representing the diameter [19]. The advantage of geometric TPA (gTPA) is its fast computation due to the cylindrical fitting approach. Our study presents a novel and fast cylinder-based method for automated cIMT and gTPA computation, which outperforms conventional systems. Our contribution is demonstrated in Fig. 13.2. The pipeline consists of three phases: (1) cIMT and LD measurement, (2) gTPA measurement, and (3) risk stratification and assessment.
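The standardized polyline distance method used here averages vertex-to-segment distances between the two interface boundaries, symmetrized over both directions. A self-contained sketch of the idea (function names are illustrative, not taken from the chapter's software):

```python
import math

def point_to_segment(px, py, ax, ay, bx, by):
    # Distance from point P to the segment AB (not the infinite line).
    abx, aby = bx - ax, by - ay
    denom = abx * abx + aby * aby
    if denom == 0.0:  # degenerate segment
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / denom))
    return math.hypot(px - (ax + t * abx), py - (ay + t * aby))

def polyline_distance(b1, b2):
    """Mean of vertex-to-polyline distances, symmetrized over both
    boundaries -- the core of the polyline distance metric."""
    def one_way(pts, poly):
        return [min(point_to_segment(px, py, *a, *b)
                    for a, b in zip(poly, poly[1:]))
                for (px, py) in pts]
    d12 = one_way(b1, b2)
    d21 = one_way(b2, b1)
    return (sum(d12) + sum(d21)) / (len(d12) + len(d21))
```

With the LI and MA interfaces given as point lists, `polyline_distance(li, ma)` yields a thickness estimate that tolerates the two polylines having different numbers of vertices.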

13.1.1 Performance numbers
Our system demonstrates that the coefficients of correlation (CC) between gTPA and cIMT using DL and manual readings were 0.92 (P < .001) and 0.94 (P < .001), respectively. Using two cutoffs leading to a low-, moderate-, and high-risk assessment system, the area under the curve (AUC) for cIMT and gTPA was 0.76 (P < .001) and 0.85 (P < .001) using DL1 and 0.76 (P < .001) and 0.86 (P < .001) using DL2, respectively. gTPA is an equally powerful carotid risk biomarker like


cIMT. Given cIMT and LD, cylindrical fitting is a fast method for gTPA measurement. The chapter is organized as follows. Section 13.2 presents a background survey related to IMT, LD, and TPA. Section 13.3 describes the materials and methods used for gTPA computation. Section 13.4 presents the results, and Section 13.5 presents statistical tests and correlation plots. Section 13.6 discusses the results and hypothesis validation. Finally, the last section presents the conclusion and future directions.

13.2 Background survey on cIMT, LD, and TPA measurements
13.2.1 cIMT detection and measurement methods
Several studies have addressed LI/MA detection of the carotid far wall and the corresponding cIMT measurements. Molinari et al. [18] proposed automated techniques for cIMT measurement. The first method was Completely Automated Layers EXtraction, which integrated feature extraction, line fitting, and classification [20]. The second method was the Carotid Automated Robust Edge Snapper (CARES), which combines a feature extraction and edge detection paradigm [21]. The third method, the Carotid Automated Multiresolution Edge Snapper (CAMES), was designed on a multiresolution-based approach and utilized the concept of scale space [18]. The fourth method was the Carotid Automated Double-Line Extraction System based on Edge-Flow [22]. These methods utilized edge detection techniques with ultrasound texture and edge energies. In 2012 Suri et al. [23] proposed AtheroEdge, an automated system for cIMT measurement that used a scale-space strategy. In 2015 Ikeda et al. [24] proposed a combination of global and local strategies with texture-based entropy and morphology for cIMT measurement, along with a classification paradigm. In 2016 Saba et al. [25] developed AtheroCloud, an automated cloud-based solution for cIMT measurement. More recently, in 2017, Ikeda et al. [26] used the bulb edge point as a reference marker and proposed an automated segmental-cIMT measurement technique. The methods discussed above depend on features such as the grayscale median and calcium area for automated cIMT computation for risk assessment. These external factors make such spatial methods prone to inter- and intraoperator variability, hamper reproducibility, and limit robustness. A DL-based system removes some of these limitations of ultrasound imaging technology.
The intelligence of neural networks is used to learn shape information from carotid ultrasound cohorts. It takes advantage of multiresolution approaches for increased processing speed and feature extraction at multiple scales, and thus improves the spatial deck of information. Current imaging techniques face challenges in feature extraction due to the presence of calcium in the near wall and shadows in the far wall, which results in LI and MA border position errors and hence cIMT error. Although previous methods did employ multiresolution-based approaches to increase processing speed, feature extraction at multiple scales was not derived, so they lacked a comprehensive spatial deck of information. Furthermore, carotid ultrasound cohorts carry shape information that can be learnt via neural networks, whose intelligence power is unsurpassable. The current DL-based study removes the abovementioned challenges and thus adds reliability and robustness to the system. This study is motivated by previous works of Suri et al., who applied machine intelligence in different fields of medicine such as gynecology, urology, dermatology, and neurology [27–30], and recently in endocrinology [31].

13.2.2 LD detection and measurement methods and our proposal
Previous literature has also explored several methods for LD measurement. Molinari et al. [20] detected four points of the ROI using the Hough transform; however, the algorithm's performance is limited for low-brightness images, where the lumen region may not be detected at all. Loizou et al. [32] used an integrated approach of geometric feature extraction, line fitting, and classification to extract the CCA; however, the final outcome is affected by noise and by the presence of similar echographic structures (such as the jugular vein), and the method fails in the classification of the final line pairs, that is, the CCA near wall (also known as LI-near) and the CCA far wall (also known as LI-far). Suri et al. [33] introduced a snake-based approach for CCA segmentation, but it suffered from initialization and boundary leakage problems [34,35]. Araki et al. [36] combined the scale-space approach with level sets for determining the lumen borders. Kuppili et al. [37] used a combination of spatial transformation and a scale-space-based approach to estimate the LI-far and LI-near walls. The major drawback of the previously discussed methodologies is the lack of an intelligence-based approach in their models. Further, they also lack the accumulated population information required for intelligent learning by the system. The earlier systems also lacked model-based imaging, which is required for full automation. These limitations create an immediate need for an intelligence-based, reliable, accurate, and robust method for LD measurement, which can be used as an indicator of atherosclerotic buildup for predicting the risk of stroke. The current study focuses on the results of an automated LD measurement using a DL paradigm, a class of the AtheroEdge (AtheroPoint, Roseville, CA) system, in the CCA. We are motivated by training-based learning strategies in the field of classification and segmentation of ultrasound images.
Extreme learning machines and support vector machines have been used successfully in the characterization and stratification of ultrasound fatty liver disease images [38]. However, they do not produce accurate segmentation results, as they depend on conventional feature extraction techniques. We thus present the results of a DL-based system [39] for CCA lumen segmentation from ultrasound images. The main benefit of DL is its independence from conventional feature generation techniques, since a DL system generates features internally. In this chapter, we apply a three-stage DL-based model for results evaluation. TPA grows all around the arterial tree, which causes life-threatening cardiovascular events. We hypothesize that the cylindrical fitting method covers the larger cylindrical area for plaque burden calculation in arteries, and that intelligence-based cIMT and LD computations using the DL framework can provide better results than conventional existing systems.

13.3 Materials and methodology
13.3.1 Patient demographics and image acquisition
The study consisted of 204 patients (157 M/47 F; mean age: 69 ± 11 years). The database consisted of a total of 396 images, retained out of the 407 supplied left and right carotid images taken from the 204 patients. Table 13.1 details the patient demographics.

Table 13.1 Patient demographics.

Demographic variables      Men (n = 157) mean   Women (n = 47) mean   Combined mean
Age (years)                67.0 ± 11.0          75.3 ± 8.4            68.9 ± 11.0
Rt mean cIMT (mm)          1.0 ± 0.5            1.1 ± 0.4             1.1 ± 0.5
Lt mean cIMT (mm)          1.2 ± 0.5            1.2 ± 0.5             1.2 ± 0.5
Plaque score               8.9 ± 5.5            9.1 ± 4.8             9.0 ± 5.3
LDL cholesterol (mg/dL)    100.0 ± 31.9         105.1 ± 29.9          101.1 ± 31.5
HDL cholesterol (mg/dL)    48.1 ± 13.2          59.0 ± 18.1           50.5 ± 15.0
eGFR (µmol/L)              42.2 ± 18.6          56.0 ± 24.2           45.3 ± 20.8
Calcium (%)                2.8 ± 2.8            3.5 ± 2.7             2.8 ± 2.2
History of CVD (%)         13.2                 8.5                   21.7
Denovo                     1.0 ± 0.0            1.0 ± 0.0             1.0 ± 0.0
HT (mmHg)                  0.7 ± 0.5            0.7 ± 0.4             0.7 ± 0.4
DM                         0.3 ± 0.5            0.2 ± 0.4             0.3 ± 0.4
Dyslipidemia               0.6 ± 0.5            0.4 ± 0.5             0.6 ± 0.5
Family history             0.1 ± 0.3            0.1 ± 0.3             0.1 ± 0.3
HD                         0.1 ± 0.3            0.1 ± 0.3             0.1 ± 0.3
Smoking                    0.5 ± 0.5            0.1 ± 0.4             0.4 ± 0.5
FBS                        121.7 ± 34.1         119.2 ± 38.1          121.1 ± 34.9
HbA1c (mg/dL)              6.3 ± 1.2            6.3 ± 0.9             6.3 ± 1.1
Cr                         1.7 ± 2.1            1.3 ± 1.9             1.6 ± 2.1

cIMT, Carotid intima–media thickness; CVD, cardiovascular disease; HDL, high-density lipoprotein; LDL, low-density lipoprotein.


A total of eleven images were rejected due to a lack of tissue information in the grayscale ultrasound scans. Use of the patients' data received ethical approval from the Toho University Institutional Review Board (IRB), Japan. The mean hemoglobin (HbA1c), glucose, low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol, and total cholesterol were 5.8 ± 1.0, 108 ± 31, 99.80 ± 31.30, 50.40 ± 15.40, and 174.6 ± 37.7 mg/dL, respectively. From the pool of 203 patients, 92 were regular smokers. The hypertensive and high-cholesterol patients were on adequate medication: a statin was prescribed for 93 patients to lower cholesterol levels, and 84 of them received renin–angiotensin system antagonists. Blood pressure statistics for the patients were not available. A sonographic scanner (Aplio XV, Aplio XG, Xario; Toshiba, Inc., Tokyo, Japan) equipped with a 7.5-MHz linear array transducer was employed to examine the left and right carotid arteries. All scans were performed under the supervision of an experienced sonographer (with 15 years of experience). High-resolution images were acquired as per the recommendations of the American Society of Echocardiography Carotid Intima-Media Thickness Task Force. The mean pixel resolution of the database was 0.05 ± 0.01 mm/pixel.
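All geometric phenotypes are first measured in pixels, and the calibration above converts them to physical units. A trivial sketch (the default value is this database's mean resolution; the function name is illustrative):

```python
def pixels_to_mm(distance_px, resolution_mm_per_px=0.05):
    """Convert an image-space distance (pixels) to millimeters using
    the scanner calibration, here 0.05 mm/pixel on average."""
    return distance_px * resolution_mm_per_px

# A 20-pixel wall thickness corresponds to about 1 mm at this resolution.
thickness_mm = pixels_to_mm(20)
```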

13.3.2 gTPA modeling using cylindrical fitting
In this section, we analyze the relationship between gTPA, LD, and cIMT. The basic idea is to compute the outer and inner cylindrical areas and take their difference to obtain the area of the IMT complex. The outer lumen area (LA_outer) of the circle along the longitudinal axis (LA) is

LA_outer = π × (D/2 + cIMT)²
         = π × (D²/4 + 2 × (D/2) × cIMT + cIMT²)
         = π × (D²/4 + D × cIMT + cIMT²)                         (13.1)

The inner lumen area (LA_inner) of the circle along the longitudinal axis is

LA_inner = π × (D/2)² = π × D²/4                                 (13.2)

The total area of the IMT complex is then

gTPA = LA_outer − LA_inner
     = π × (D²/4 + D × cIMT + cIMT²) − π × D²/4
     = π × D × cIMT + π × cIMT²
     = π × cIMT × (D + cIMT)                                     (13.3)

Thus gTPA is a function of cIMT and LD.
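The closed form of Eq. (13.3) reduces plaque-area measurement to two scalars; a minimal sketch (the function name is illustrative):

```python
import math

def gtpa(cimt_mm, lumen_diameter_mm):
    """Geometric total plaque area (mm^2) from Eq. (13.3):
    gTPA = pi * cIMT * (D + cIMT)."""
    return math.pi * cimt_mm * (lumen_diameter_mm + cimt_mm)

# With LD = 4.16 mm and cIMT = 0.91 mm (the first low-risk case shown
# later in Fig. 13.5), this gives about 14.49 mm^2, close to the
# ~14.47 mm^2 reported there (rounding of the inputs accounts for the gap).
area = gtpa(0.91, 4.16)
```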

13.3.3 Overall architecture
The expanded version of the overall system (Fig. 13.2) is shown in Fig. 13.3. There are three fundamental stages. Stage I consists of cIMT and LD border estimation using the DL paradigm, given the gold standard manual tracings of cIMT and LD; a detailed discussion of stage I is provided in the next subsection. The main focus of stage II is to compute gTPA from the cIMT and LD measurements, which in turn are computed using the standardized polyline distance method; the gTPA computation over the IMT complex is performed using the cylindrical fitting approach. Given the ground truth manual readings for cIMT and mTPA, we determine the risk of the patient using two different cutoffs, which stratifies the patient into low-, moderate-, and high-risk bins. Stage III represents the risk stratification and assessment based on the gold standard and the risk thresholds.
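The two-cutoff binning of stage III can be sketched as follows; the cutoff values used in the example are placeholders, not the chapter's actual thresholds:

```python
def stratify(biomarker_value, cutoff_low, cutoff_high):
    """Bin a biomarker (cIMT or gTPA) into low/moderate/high risk
    using two cutoffs, as in the stage III risk stratification."""
    if biomarker_value < cutoff_low:
        return "low"
    if biomarker_value < cutoff_high:
        return "moderate"
    return "high"

# Illustrative cutoffs only: a cIMT of 1.2 mm against cutoffs of
# 1.0 and 1.5 mm lands in the moderate-risk bin.
bin_label = stratify(1.2, cutoff_low=1.0, cutoff_high=1.5)
```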

13.3.4 cIMT and LD detection using DL system
In this section, we briefly discuss how intelligence is used for deep feature extraction from the grayscale ultrasound images, and how the inner lumen region and outer wall region are then segmented. The DL system consists of three stages: preprocessing, LI and MA region segmentation, and finally LI/MA border detection. Stage I requires automatic cropping of the ultrasound scans to remove the patient information, followed by a binarization process. To speed up the core of the DL system, a multiresolution approach is adopted in which the image is downsampled. The preprocessed image is then encoded for feature extraction and finally segmented using a decoder. In stage II, the gold standard is utilized for training on the lumen region and the interadventitial wall region. Stage III consists of a calibration process in which a machine learning system corrects the raw borders estimated by the DL system; this is a linear system leading to the final smooth LI/MA border estimation for the far wall.
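The multiresolution speed-up mentioned above amounts to downsampling the image before encoding. A minimal 2× block-averaging sketch in plain Python (the production system's exact resampling scheme is not specified in the chapter):

```python
def downsample2x(img):
    """Halve an image (a list of rows of gray levels) by 2x2 block
    averaging -- the kind of multiresolution step used to speed up
    the DL core. Odd trailing rows/columns are dropped."""
    h = len(img) // 2 * 2
    w = len(img[0]) // 2 * 2
    return [[(img[r][c] + img[r][c + 1] +
              img[r + 1][c] + img[r + 1][c + 1]) / 4.0
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]
```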


FIGURE 13.3 Overall architecture for gTPA and cIMT phenotype measurements and risk stratification. cIMT, Carotid intima–media thickness; gTPA, geometric total plaque area.

The Japanese cohort is used for the gTPA calculation. The gold standard data were prepared with the help of a sonographer taking manual readings. The polyline distance method is applied over the prepared dataset, and DL is used for LD measurement. Both manual and DL methods are used for cIMT measurement. Finally, we use the formula of Linhart et al. [19] for the gTPA calculation. The overall system is demonstrated in Fig. 13.4.

13.4 Experimental protocol, results, and its validation
This section consists of the following experimental protocols and their results:
1. DL system results and visual display of LI and MA interfaces
2. Mean value computations for cIMT and gTPA for the two DL systems
3. Relationship of age versus cIMT and gTPA measurements
4. Validation of the DL systems against the manual readings


FIGURE 13.4 Flow diagram for DL-based LD and cIMT measurements: US carotid scans → data preparation → LD and IAD gold standards via the polyline distance method (PDM) → DL-based LD and cIMT measurement → gTPA. cIMT, Carotid intima–media thickness; DL, deep learning; LD, lumen diameter.

13.4.1 DL system results and visual display of LI and MA interfaces
A methodology recently developed by Suri's group [40] is used for the computation of the LI/MA wall interfaces. The LI/MA borders obtained using the DL strategy are shown in Figs. 13.5–13.7 for sample low-, moderate-, and high-risk patients, respectively, on grayscale images of the CCA; the ground truth borders can be clearly seen as dashed lines. The corresponding LD, cIMT, and gTPA values are also given.

13.4.2 Mean value computations for cIMT and gTPA for two DL systems
Table 13.2 shows the mean and standard deviation (SD) values of cIMT and gTPA corresponding to the two DL systems.

13.4.3 Relationship of age versus cIMT/gTPA
The relationships of age versus cIMT and age versus gTPA are shown in Table 13.3. The correlation with age was observed to be higher for gTPA than for cIMT for both the DL and manual systems. The corresponding plots are presented in Figs. 13.8 and 13.9.

13.4.4 Validation
It is important to validate the behavior of gTPA against the other wall parameters: cIMT, LD, and the interadventitial diameter (IAD), derived using the DL1 and DL2 frameworks. Further, we need to know how gTPA behaves against the manual (ground truth) readings. If the results of the automated system and the manual readings show statistical significance, this signifies one step closer to the validation criteria. Sections 13.4.4.1–13.4.4.3 show the regression plots of the relationships between gTPA and the other wall parameters, while the corresponding table is shown in Table C.1.

FIGURE 13.5 Three examples of the low-risk category using the DL1 system. Row 1: LD: 4.16 mm, cIMT: 0.91 mm, gTPA: 14.47 mm². Row 2: LD: 6.20 mm, cIMT: 0.76 mm, gTPA: 16.70 mm². Row 3: LD: 6.63 mm, cIMT: 0.59 mm, gTPA: 13.28 mm². cIMT, Carotid intima–media thickness; gTPA, geometric total plaque area; LD, lumen diameter.

FIGURE 13.6 Three examples of the moderate-risk category using the DL1 system. Row 1: LD: 6.47 mm, cIMT: 1.56 mm, gTPA: 39.39 mm². Row 2: LD: 6.86 mm, cIMT: 1.15 mm, gTPA: 28.82 mm². Row 3: LD: 6.31 mm, cIMT: 1.16 mm, gTPA: 27.25 mm². cIMT, Carotid intima–media thickness; gTPA, geometric total plaque area; LD, lumen diameter.


FIGURE 13.7 Three examples of the high-risk category using the DL1 system. Row 1: LD: 6.54 mm, cIMT: 2.15 mm, gTPA: 58.61 mm². Row 2: LD: 5.66 mm, cIMT: 2.66 mm, gTPA: 69.60 mm². Row 3: LD: 7.68 mm, cIMT: 1.79 mm, gTPA: 52.81 mm². cIMT, Carotid intima–media thickness; gTPA, geometric total plaque area; LD, lumen diameter.


Table 13.2 Comparison between mean carotid intima–media thickness (cIMT) and geometric total plaque area (gTPA) for the two deep learning (DL) systems.

DL type   cIMT (mean ± SD) (mm)   gTPA (mean ± SD) (mm²)
DL1       0.91 ± 0.34             20.52 ± 9.44
DL2       0.88 ± 0.30             19.44 ± 8.18

SD, Standard deviation.

Table 13.3 Coefficient of correlation (CC) of age (years) versus carotid intima–media thickness (cIMT) and geometric total plaque area (gTPA) with deep learning (DL) systems 1 and 2.

SN   Relationship type           CC       P-value   Relationship type           CC       P-value
1    Age (years) vs cIMT (DL1)   0.1925   < .001    Age (years) vs cIMT (DL2)   0.1868   < .001
2    Age (years) vs cIMT (GT1)   0.1536   < .001    Age (years) vs cIMT (GT2)   0.1415   < .001
3    Age (years) vs gTPA (DL1)   0.2007   < .001    Age (years) vs gTPA (DL2)   0.2038   < .001
4    Age (years) vs gTPA (GT1)   0.1665   < .001    Age (years) vs gTPA (GT2)   0.1554   < .001

13.4.4.1 gTPA versus cIMT for DL1, GT1, DL2, and GT2
Fig. 13.10A and B shows the CC plots of gTPA (DL1) versus cIMT (DL1) and gTPA (GT1) versus cIMT (GT1). The CC value for the DL system was 0.94 (P < .001), while for the manual reading it was 0.95 (P < .001). This clearly shows the consistent behavior of DL with the doctor's manual readings, and supports our assumption that if the CC values are close to each other and the behavior is linear, the DL can be considered validated. A similar pattern was observed between gTPA (DL2) versus cIMT (DL2) (see Fig. 13.11A) and gTPA (GT2) versus cIMT (GT2) (see Fig. 13.11B), leading to CC values of 0.92 (P < .001) and 0.94 (P < .001). These results demonstrate the consistency of the DL1 and DL2 systems.
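The CC values reported throughout this section are Pearson correlation coefficients; for reference, a minimal implementation:

```python
import math

def pearson_cc(x, y):
    """Pearson correlation coefficient between two equal-length series,
    e.g. gTPA(DL1) against cIMT(DL1) across patients."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```

A perfectly linear relationship gives a CC of 1.0 (or −1.0 for a decreasing one); the 0.9+ values above indicate a near-linear gTPA–cIMT relationship, as Eq. (13.3) would suggest for roughly constant D.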

FIGURE 13.8 Correlation coefficient plots of age (years) versus cIMT (DL1) and cIMT (GT1) shown in (A) and (B) and age (years) versus gTPA (DL1) and gTPA (GT1) shown in (C) and (D) for the DL1 system. cIMT, Carotid intima–media thickness; gTPA, geometric total plaque area.

13.4.4.2 gTPA versus LD for DL1, GT1, DL2, and GT2
We further validated the relationship between gTPA and LD for DL1 against GT1, and DL2 against GT2. Fig. 13.12A shows the correlation plots of gTPA (DL1) versus LD (DL1) using the DL1 system, while Fig. 13.12B shows the relationship between gTPA (GT1) and LD (GT1). Note that the CC was 0.31 (P < .001) and 0.24 (P < .001) for the DL1 and GT1 systems, respectively. A similar pattern was observed for gTPA (DL2) versus LD (DL2) (see Fig. 13.13A) and gTPA (GT2) versus LD (GT2) (see Fig. 13.13B), leading to CC values of 0.34 (P < .001) and 0.32 (P < .001). This supports our assumption that if the CC values are close to each other and the behavior is linear, the DL systems can be considered validated. Note that the CC between gTPA and LD is not as strong as that between gTPA and cIMT, which is expected: as cIMT increases, TPA must increase, whereas as LD increases (heading toward normal), gTPA will be moderate or low. This further validates the stability of the DL systems.

FIGURE 13.9 Correlation coefficient plots of age (years) versus cIMT (DL2) and cIMT (GT2) shown in (A) and (B) and age (years) versus gTPA (DL2) and gTPA (GT2) shown in (C) and (D) for the DL2 system. cIMT, Carotid intima–media thickness; gTPA, geometric total plaque area.

13.4.4.3 gTPA versus IAD for DL1, GT1, DL2, and GT2
We further validate the relationship between gTPA and IAD for DL1 against GT1, and DL2 against GT2. Fig. 13.14A shows the correlation plots of gTPA (DL1) versus IAD (DL1), while Fig. 13.14B shows the relationship between gTPA (GT1) and IAD (GT1); the CCs are 0.61 (P < .001) and 0.51 (P < .001), respectively. A similar pattern was observed for gTPA (DL2) versus IAD (DL2) (see Fig. 13.15A) and gTPA (GT2) versus IAD (GT2) (see Fig. 13.15B), leading to CC values of 0.64 (P < .001) and 0.59 (P < .001). This clearly shows that both the DL1 and DL2 systems behave the same with respect to the manual readings. These results further support our assumption: if the CC values are reasonably close to each other and the behavior is linear, the DL can be considered validated.

13.4.4.4 BlandAltman plots The BlandAltman plots of the average of gTPA (DL1) and cIMT (DL1) and average of gTPA (GT1) and cIMT (GT1) for DL1 and GT1 system,

13.4 Experimental protocol, results, and its validation

FIGURE 13.10 Correlation plots of gTPA (DL1) versus cIMT (DL1) and gTPA (Manual 1) versus cIMT (Manual 1)based system. Manual implies GT. cIMT, Carotid intimamedia thickness; gTPA, geometric total plaque area.

FIGURE 13.11 Correlation plots of gTPA (DL2) versus cIMT (DL2) and gTPA (Manual 2) versus cIMT (Manual 2)based system. Manual implies GT. cIMT, carotid intimamedia thickness; gTPA, geometric total plaque area.

respectively, are shown in Fig. 13.16. Similar patterns are shown in Fig. 13.17, which represents the average of gTPA (DL2) versus cIMT (DL2) and the average of gTPA (GT2) versus cIMT (GT2) for the DL2 and GT2 systems, respectively. The Bland–Altman plot, or difference plot, is used to compare two measurement techniques. It is a graphical method that plots the differences between paired measurements against

FIGURE 13.12 Correlation plots of the gTPA (DL1) versus LD (DL1) and gTPA (Manual 1) versus LD (Manual 1)-based systems. gTPA, Geometric total plaque area; LD, lumen diameter.

FIGURE 13.13 Correlation plots of the gTPA (DL2) versus LD (DL2) and gTPA (Manual 2) versus LD (Manual 2)-based systems. Manual implies GT. gTPA, Geometric total plaque area; LD, lumen diameter.

the averages of the two techniques. The limits of agreement are defined as the mean difference ± 1.96 times the SD of the differences. It is a well-known method that exposes systematic bias between measurements and indicates how far apart the two methods are likely to be. When comparing methods or assessing repeatability, it is important to calculate confidence intervals for the 95% limits of agreement.
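The limits of agreement described above reduce to three numbers per comparison: the bias and the lower/upper bounds at ±1.96 SD. A minimal numpy sketch, with an illustrative function name:

```python
import numpy as np

def bland_altman(m1, m2):
    """Return (bias, lower limit, upper limit) of agreement for two
    paired measurement methods, as drawn on a Bland-Altman plot."""
    m1, m2 = np.asarray(m1, dtype=float), np.asarray(m2, dtype=float)
    diff = m1 - m2
    bias = diff.mean()
    sd = diff.std(ddof=1)          # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# The plot itself shows (m1 + m2) / 2 on the x-axis, diff on the y-axis,
# and these three values as horizontal lines.
```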

FIGURE 13.14 Correlation plots of the gTPA (DL1) versus IAD (DL1) and gTPA (Manual 1) versus IAD (Manual 1)-based systems. gTPA, Geometric total plaque area; IAD, interadventitial diameter.

FIGURE 13.15 Correlation plots of the gTPA (DL2) versus IAD (DL2) and gTPA (Manual 2) versus IAD (Manual 2)-based systems. Manual implies GT. gTPA, Geometric total plaque area; IAD, interadventitial diameter.

13.5 Statistical tests and 10-year risk analysis

13.5.1 Risk analysis
Receiver operating characteristic (ROC) analysis is a measure of performance validation for any technique, as it captures all the record variation at every possible cutoff. The AUC value shows the accuracy of the performance measure. For ROC analysis

FIGURE 13.16 Bland–Altman plots of the mean of gTPA (DL1) versus cIMT (DL1) and the mean of gTPA (GT1) versus cIMT (GT1). cIMT, Carotid intima–media thickness; gTPA, geometric total plaque area.

FIGURE 13.17 Bland–Altman plots of the mean of gTPA (DL2) versus cIMT (DL2) and the mean of gTPA (GT2) versus cIMT (GT2). cIMT, Carotid intima–media thickness; gTPA, geometric total plaque area.

the cohort data are divided into three risk classes, low-risk, moderate-risk, and high-risk patients, using two cutoffs. For gTPA the cutoffs were 20 and 40 mm². For cIMT the cutoffs were 0.6 and 0.9 mm: low risk (below 0.6 mm), moderate risk (0.6 mm to below 0.9 mm), and high risk (0.9 mm and above). The same thresholds were used when evaluating the manual readings. The plots for both the DL systems and the manual systems are shown in Fig. 13.18A and B. From the plots, it can be observed that gTPA performs better than cIMT for both the DL and manual systems.
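The two-cutoff stratification above is a simple three-way threshold rule. A minimal sketch, with an illustrative function name and the cutoffs as stated in the text:

```python
def risk_class(value, low_cut, high_cut):
    """Three-level risk stratification using two cutoffs: below the
    first cutoff is low, below the second is moderate, else high."""
    if value < low_cut:
        return "low"
    if value < high_cut:
        return "moderate"
    return "high"

# Study cutoffs: gTPA 20 and 40 mm^2; cIMT 0.6 and 0.9 mm.
print(risk_class(25.0, 20, 40))    # gTPA of 25 mm^2 -> moderate
print(risk_class(0.95, 0.6, 0.9))  # cIMT of 0.95 mm -> high
```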

FIGURE 13.18 ROC plot comparison between gTPA and cIMT for (A) DL1 and (B) GT1. (A) gTPA (DL1) versus cIMT (DL1) and (B) ROC plot of gTPA (GT1) versus cIMT (GT1). cIMT, Carotid intima–media thickness; gTPA, geometric total plaque area; ROC, receiver operating characteristic.

13.5.2 Statistical tests
All five statistical test results (paired sample t-test, Mann–Whitney test, Wilcoxon test, Kolmogorov–Smirnov (KS) test, and Friedman test) are shown in Table 13.4. The P-values confirm that the paired samples pass each test. The results of the DL-based systems with respect to GT1 and GT2 were analyzed using the paired t-test, Mann–Whitney test, and Wilcoxon test; the corresponding box plots are shown in Fig. 13.19. The P-values of the paired t-test for gTPA and cIMT, with reference to both DL systems and their ground truth values, are below .0001, showing that the results are statistically significant for all combinations (Tables D.1–D.4). The P-values of the Mann–Whitney test for DL1 and DL2 with reference to GT1 and GT2, for both gTPA and cIMT, are below .0001, so these results are also statistically significant (Tables D.5–D.8). The P-values of the Wilcoxon test for the same comparisons are likewise below .0001 (Tables D.9–D.12). We also performed the Kolmogorov–Smirnov test for DL1 and DL2; the results are significant, with P-values below .0001. Further, we performed Friedman tests for DL1 and DL2, with results shown in Tables D.13–D.16. The P-values corresponding to DL1 and DL2 are
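The five-test battery can be reproduced with `scipy.stats`. The arrays below are random stand-ins for the paired gTPA/cIMT readings, not cohort data, and a third related sample is added only because scipy's Friedman test requires at least three groups:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
gtpa = rng.normal(18.5, 8.0, 396)    # stand-in gTPA readings, mm^2
cimt = rng.normal(0.86, 0.35, 396)   # stand-in cIMT readings, mm

pvalues = {
    "paired t-test": stats.ttest_rel(gtpa, cimt).pvalue,
    "Mann-Whitney": stats.mannwhitneyu(gtpa, cimt).pvalue,
    "Wilcoxon": stats.wilcoxon(gtpa - cimt).pvalue,
    "Kolmogorov-Smirnov": stats.ks_2samp(gtpa, cimt).pvalue,
    # scipy's friedmanchisquare needs >= 3 related samples; the third
    # column is included only to make the call well-formed.
    "Friedman": stats.friedmanchisquare(gtpa, cimt, gtpa + cimt).pvalue,
}
for name, p in pvalues.items():
    print(f"{name}: p = {p:.3g}")
```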

Table 13.4 Summary table for five statistical tests.

SN | Relationship type | Paired sample t-test (P-value) | Mann–Whitney test (P-value) | Wilcoxon test (P-value) | Kolmogorov–Smirnov (P-value) | Friedman test (P-value)
1 | gTPA (DL1) vs cIMT (DL1) | P < .0001 | P < .0001 | P < .0001 | P = 1.52e−175 | P < .0001
2 | gTPA (GT1) vs cIMT (GT1) | P < .0001 | P < .0001 | P < .0001 | P = 1.52e−175 | P < .0001
3 | gTPA (DL2) vs cIMT (DL2) | P < .0001 | P < .0001 | P < .0001 | P = 1.52e−175 | P < .0001
4 | gTPA (GT2) vs cIMT (GT2) | P < .0001 | P < .0001 | P < .0001 | P = 1.52e−175 | P < .0001

cIMT, Carotid intima–media thickness; gTPA, geometric total plaque area.

FIGURE 13.19 Box plots of cIMT and gTPA for both DL1 and DL2. cIMT, Carotid intima–media thickness; gTPA, geometric total plaque area.

below .0001; therefore, the null hypothesis that the data were drawn from the same distribution is rejected for both DL1 and DL2.

13.5.3 Ten-year risk assessment
Risk assessment is a mechanism through which we can identify CVD risk based on cIMT and gTPA. Here, we considered the 10-year risk for all patients. The cohort consists of both male and female patients, so the risk is calculated per the following equations:

cIMT(10-year, men) = cIMT(current, men) + projection rate(men) × 10        (13.4)

cIMT(10-year, women) = cIMT(current, women) + projection rate(women) × 10        (13.5)

gTPA(10-year, men) = gTPA(current, men) + projection rate(men) × 10        (13.6)

gTPA(10-year, women) = gTPA(current, women) + projection rate(women) × 10        (13.7)

where the projection rates for men and women are 0.03 and 0.02 mm/year, respectively. Our aim is to design a linear model for computing the 10-year risk due to age. The model does not take into consideration the effects of diabetes and BMI; both factors are outside the scope of this study. The current versus 10-year risk corresponding to cIMT and gTPA for both DL systems is shown in Figs. 13.20 and 13.21, and the corresponding AUC summary in Table 13.5. Table 13.5 shows that the AUC for current gTPA is higher than for current cIMT, and that gTPA10 is better than cIMT10, which supports our assumption that gTPA is a strong clinical biomarker for stroke risk and can be adopted for risk assessment. The AUC for gTPA showed an improvement over cIMT of 14.36% and 12.57% for DL1 and DL2, respectively; the corresponding 10-year risk improvements were 9.09% and 6.26%.
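Eqs. (13.4)–(13.7) share one linear form. The sketch below uses the projection rates stated in the text; the function and dictionary names are illustrative:

```python
# Linear 10-year projection of Eqs. (13.4)-(13.7); rates are from the
# study, names are illustrative.
PROJECTION_RATE = {"men": 0.03, "women": 0.02}  # mm/year

def project_10_year(current, sex, years=10):
    """Project a current cIMT or gTPA reading forward linearly.
    Diabetes and BMI effects are deliberately excluded, as in the study."""
    return current + PROJECTION_RATE[sex] * years

print(round(project_10_year(0.85, "men"), 2))    # 0.85 mm cIMT -> 1.15 mm
print(round(project_10_year(0.85, "women"), 2))  # -> 1.05 mm
```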

FIGURE 13.20 ROC plots of cIMT versus gTPA: (A) current cIMT risk (middle) versus current gTPA risk (top); (B) 10-year cIMT (middle) versus 10-year gTPA (top). cIMT, Carotid intima–media thickness; gTPA, geometric total plaque area; ROC, receiver operating characteristic.

FIGURE 13.21 ROC plots of current risks versus 10-year risks: (A) current cIMT risk (red) versus 10-year cIMT risk (blue); (B) current gTPA risk (red) versus 10-year gTPA risk (blue). cIMT, Carotid intima–media thickness; gTPA, geometric total plaque area; ROC, receiver operating characteristic.

Table 13.5 Current risk versus 10-year risk.

Type of plot (DL system 1) | AUC (before) | AUC (after) | Type of plot (DL system 2) | AUC (before) | AUC (after)
cIMT (DL1) vs cIMT (GT1) | 0.745 | 0.792 | cIMT (DL2) vs cIMT (GT2) | 0.764 | 0.819
gTPA (DL1) vs gTPA (GT1) | 0.852 | 0.864 | gTPA (DL2) vs gTPA (GT2) | 0.860 | 0.870

AUC, Area under the curve; cIMT, carotid intima–media thickness; DL, deep learning; gTPA, geometric total plaque area.

13.6 Discussion
The study presented a cylinder-based model for the automated computation of gTPA, given the cIMT and LD. An intelligence-based technique [40] was employed for computing the cIMT and LD. We demonstrated that gTPA is as strong a biomarker as cIMT for risk assessment. The results showed a high CC between gTPA and cIMT using the DL and manual systems: 0.92 (P < .001) and 0.94 (P < .001), respectively. Further, using two cutoffs leading to a low-, moderate-, and high-risk assessment system, the AUCs for cIMT and gTPA were 0.76 (P < .001) and 0.85 (P < .001) using DL1, and 0.76 (P < .001) and 0.86 (P < .001) using DL2, respectively. The study further demonstrates that gTPA is more strongly associated with age than cIMT.

13.6.1 Benchmarking
There are several similarities between our study and the work done by others, as can be seen from Table 13.6. Most of the previously published works used manual criteria for TPA measurement, whereas the current study adopted an automated morphology-based gTPA measurement; further, the current study used intelligence-based criteria for cIMT and LD detection. The work done by Spence et al. [14] has been prominent for TPA computation. The demographics of that study consisted of 1686 patients with mean age 56.95 years, mean hypertension range (650 mmHg), BMI 27.4 kg/m², mean diabetes mellitus 8.9 mg/dL, mean pack-years of smoking 12.6 years, and 53% male and 47% female participants. The study identified low and moderate categories of carotid plaque for early-stage cardiovascular risk detection, with cutoffs of 4.4 and 21.2 mm². Two years later, the same group [15] published a study of 1821 patients with mean age 57.2 ± 14.6 years, mean LDL-C 3.2 ± 1.13 mg/dL, BMI 27.4 kg/m², and 963 male and 858 female participants for TPA calculation; the two mean TPA cutoffs were 87 and 106 mm². That study had the objective of identifying carotid plaque for cardiovascular risk. For moderate-risk stratification, Spence et al. [41] took 876 patients with a mean age of 53.4 ± 12.0 years, of whom an average of 6.7% had diabetes mellitus and 11.5% were smokers. The study considered 463 males and 413 females; the mean TPA cutoff was 38 mm². Mathiesen et al. [42] proposed a mechanism for identifying gTPA. The demographics consisted of a total of 6580 patients with mean age 60.2 ± 10.2 years. The patients were categorized into the low-risk plaque-burden class using a gTPA cutoff of 3.9 ± 2.2 mm². This study had quite a low cutoff, partially attributable to the low percentage (3.22%) of diabetes mellitus patients. Mathiesen et al. [43] later presented a study of 2743 patients with an average gTPA cutoff of 6 mm². The patient demographics consisted of an average age of

Table 13.6 Benchmarking table.

SN | Author and year | PS | Class | HT | HbA1c (mg/dL) | LDL-C | BMI (kg/m²) | Age (mean) | TPA, mm² (2 cutoffs) | Type | DM (%) | TPA technique | M/F | Smoking (%)
R1 | [London] 2008 | 876 | 2 | — | — | — | — | 53.4 ± 12 | 38 | M | 6.7 | mTPA (w) | 463 (M)/413 (F) | For-39.6, Cu-11.5
R2 | [Norway] 2012 | 2743 | 2 | — | — | — | 25.85 ± 3.5 | 56.2 ± 9.64 | 6.08 | L | 1.40 | mTPA (wp) | 1307 (M)/1436 (F) | Cu-27.7
R3 | [Norway] 2013 | 4194 (a) | 2 | — | — | — | 26.6 ± 3.9 | 61.2 ± 9.8 | 9.85 ± 16.2 | L | — | mTPA (w) | 1994 (M)/2200 (F) | —
R4 | [Orlando] 2013 | 1327 (b) | 3 (EL, I, ED) | 71% | 129 ± 34 | — | 28 ± 5 | 66 ± 9 | 9.1 and 21.5 | LM | 20 | mTPA (wp) | 544 (M)/783 (F) | Ne-48, For-37, Cu-15
R5 | [Orlando] 2015 | 1356 (b) | 2 | — | — | — | — | <70 (744), 70+ (612) | 20.3 | LM | — | mTPA (w) | 540 (M)/816 (F) | No-48, Yes-52
R6 | [London] 2016 | 2035 | 3 (LR, MR, HR) | 35% | — | — | — | 59 ± 0.2 | 33.0 and 80.4 | MH | 14 | mTPA (wp) | 1175 (M)/860 (F) | N-81, Y-19
R7 | [London] 2017 | 2447 | 2 | — | — | <1.8 | 27.61 ± 5.0 | 63.59 ± 13.4 | 113.33 ± 121.5, 129.56 ± 134.3 | H | 16.7 | mTPA (wp) | 1413 (M)/1174 (F) | —
R8 | [Germany] 2017 | 5144 (a) | 4 | — | — | 3.8 ± 1.0 | — | 51.5 ± 9.5 | 25 and 75 (16.24 ± 19.6) | M | — | mTPA (wp) | 3073 (M)/2071 (F) | Cu-23.8
R9 | [Germany] 2018 | 8008 | 6 (Ty1, Ty2a,b, Ty3, Ty4a,b) | 158 ± 39 | 5.8 ± 1.0 | — | 27.96 ± 4.2 | 54 ± 6 | 70 and 120 | H | 8.4 | mTPA (wp) | — | Cu-43.2
R10 | Proposed (2018) | 204 | 3 (LR, MR, HR) | — | 101.1 ± | — | 31.5 | 68.9 ± 11.0 | 20 and 40 | M | — | gTPA | 157 (M)/47 (F) | —

gTPA, Geometric total plaque area; LDL-C, low-density lipoprotein cholesterol; mTPA, morphologic total plaque area.
(a) Nonsmokers. (b) Stroke-free.

56.2 ± 9.65 years and an average BMI of 25.85 ± 3.5 kg/m². Mathiesen et al. [44] considered 4194 nonsmoker patients for identifying the low risk associated with plaque burden. Their gTPA cutoff for the low-risk category was 13 mm². The difference in cutoff can be attributed to factors such as the lower age group (61.2 ± 9.8 years), nonsmoking status, and the higher number of female (2200) compared to male (1994) patients. Rundek et al. [16] presented a study of a cohort that was 71% hypertensive, with high HbA1c (129 ± 34 mmol/mol), 20% diabetes patients, and an average age of 66 ± 9 years. Their low-risk cutoff was 9 mm²; this can be attributed to the cohort composition, as females (783) had a lower plaque burden than male (544) candidates. Dong et al. [45] further presented a study where 45% of the cohort was in the high age group (>70 years) and the TPA cutoff was 20 mm², stratifying risk into low and moderate. Spence et al. [46] considered a cutoff twice that of our current study. Their cohort (2035 patients) was 35% hypertensive, 37% had higher systolic blood pressure, and 14% were diabetic; the study took average cutoff points of 16, 33, and 80 mm² for stratification into low-risk, moderate-risk, and high-risk patients. Our current study took two gTPA cutoffs of 20 and 40 mm², which is closer to Spence's study [41]. The slight difference between our cutoffs and Spence's lies in demographic factors such as hypertension, diabetes, and hypercholesterolemia. Spence and Solo [47] computed the TPA in the bifurcation zone for both the near and far walls of the carotid artery and showed 30% higher TPA compared to a single wall. A similar study by Suri et al. demonstrated 34% more plaque in the bulb compared to the CCA [26]. The study [15] found a substantial proportion of high-risk patients with plaque progression despite low LDL-C levels; it considered a higher male percentage, with 16% diabetic, higher-BMI, and higher-cholesterol patients, yielding a higher TPA. Recently, Adams et al. [48] presented an extensive study on the detection of subclinical atherosclerosis by lowering the risk thresholds. The study drew patients from a younger age group (51.5 ± 9 years) for detection of moderate and low plaque-burden risk using a morphological TPA calculation; their cutoffs for low and moderate risk were 25 and 75 mm², respectively. The mean LDL of the population was 3.8 ± 1.0 mmol/L, and 23.8% were smokers. Adams et al. [49] also performed a pilot study on a small group of 33 patients to detect cardiovascular events. The patient demographics included a mean BMI of 27.96 ± 4.20 kg/m², with 43.2% smokers and 8.4% diabetics; the mean patient age was 54 ± 6 years. The authors considered two gTPA cutoffs of 70 and 120 mm². From this discussion, we conclude that risk factors such as LDL, BMI, age, hypertension, and diabetes play a prominent role in the design of TPA cutoffs, and that TPA could be a diagnostic tool for risk stratification and possibly the prediction of cardiovascular events.

13.6.2 Strengths/weakness/extensions
The system offers the following key advantages: (1) simplicity of the gTPA computation, given the cIMT and LD; (2) a simple model of CCA subclinical atherosclerosis; (3) the current cIMT and LD computation models can easily be replaced by other models; (4) the current cIMT/LD model is estimated using an intelligence-based strategy; and (5) the system can be extended to the internal carotid artery. Despite these advantages, the system faces the following challenges: (1) the model is not suitable for multifocal or nonsubclinical subjects, or subjects whose plaque burden includes large plaque above the baseline; and (2) the system requires validation on a larger database.

13.7 Conclusion
cIMT is currently a well-adopted biomarker for monitoring stroke and coronary artery disease. Recently, manually computed TPA was proposed as a useful biomarker for stroke and cardiovascular risk. Manual cIMT and TPA computations are prone to inter- and intraobserver variability and are tedious to perform. This study presented a DL-based technique for carotid wall interface detection followed by automated cIMT, LD, and TPA measurements. Two standardized DL methods were used for wall detection, following the plaque morphology. The cIMT measurement used the automated, standardized polyline distance method, while gTPA was measured using the concept of cylinder fitting for the CCA, which requires only the cIMT and LD. The CC between gTPA and cIMT, using both DL and manual readings, proves gTPA to be an equally powerful carotid risk biomarker as cIMT. Given the cIMT and LD, cylindrical fitting was observed to be a fast method for gTPA measurement.

Acknowledgments
We are very grateful to SAGE Publishing for permission to reproduce this chapter. The original citation is E. Cuadrado-Godia et al., "Geometric Total Plaque Area Is an Equally Powerful Phenotype Compared With Carotid Intima-Media Thickness for Stroke Risk Assessment: A Deep Learning Approach," J. Vasc. Ultrasound 42 (4) (2018) 162–188. We would like to acknowledge Mainak Biswas (NIT Goa) for his inputs in deep learning system design. We also acknowledge Dr. Sumit K. Banchhor (Global Biomedical Technologies, Inc., Roseville, CA, USA) for proofreading the manuscript.

Conflict of interest The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this chapter.

Funding The author(s) received no financial support for the research, authorship, and/or publication of this chapter.

Appendix A LD/IMT measurement using deep learning system

FIGURE A.1 Deep learning-based system for cIMT and LD detection. cIMT, Carotid intima–media thickness; LD, lumen diameter.

Appendix B Polyline distance method

Polyline distance metric
The polyline distance metric (PDM) [18] is used to measure cIMT between the LI and MA interfaces. The error values are calculated by measuring the LI error between the deep learning LI-far and ground truth LI-far interfaces, and the MA error between the deep learning MA-far and ground truth MA-far interfaces. The PDM computation is as follows: let the first and second interfaces be denoted I1 and I2. Let the reference point on I1 be vertex V1, and let the segment in I2 be defined by vertices V2 and V3. Let the distance between V1 and V2 be d1 and the distance between V1 and V3 be d2. Let D(V1, L) be the polyline distance between vertex V1: (u1, v1) on I1 and the line segment L formed by the two points V2: (u2, v2) and V3: (u3, v3). Let delta (δ) be the normalized projection of the reference point V1 onto the line segment L, and let d⊥ be the perpendicular distance between the line segment L and the reference point V1. Then, the polyline distance D(V1, L) is defined as

D(V1, L) = { d⊥,           0 ≤ δ ≤ 1
           { min(d1, d2),  δ < 0 or δ > 1        (B.1)

where

d1 = √[(u1 − u2)² + (v1 − v2)²]        (B.2)

d2 = √[(u1 − u3)² + (v1 − v3)²]        (B.3)

δ = [(v3 − v2)(v1 − v2) + (u3 − u2)(u1 − u2)] / [(u3 − u2)² + (v3 − v2)²]        (B.5)

and

d⊥ = [(v3 − v2)(u2 − u1) + (u3 − u2)(v1 − v2)] / √[(u3 − u2)² + (v3 − v2)²]        (B.6)

The process to obtain D(V1, L) is repeated for the rest of the points of the contour and summed:

D(I1, I2) = Σ_{i=1}^{N} D(Vi, S_{I2})        (B.7)

where N is the total number of points on I1 and S_{I2} is the segment on contour I2. The algorithm is then repeated in reverse, where I2 becomes the reference contour and I1 becomes the segment contour; the reverse is represented as D(I2, I1).

Finally, by combining D(I1, I2) and D(I2, I1), we obtain the PDM:

D_PDM(I1, I2) = [D(I1, I2) + D(I2, I1)] / (#points on I1 + #points on I2)        (B.8)

Appendix C Correlation coefficient of gTPA against all the wall parameters

The complete CC between gTPA and the rest of the wall parameters for the DL and GT systems is shown in Table C.1.

gTPA versus cIMT
For DL system 1 and manual system 1, the CCs of gTPA (DL1) versus cIMT (DL1) and gTPA (Manual 1) versus cIMT (Manual 1) are 0.9433 (P < .0001) and 0.9549 (P < .0001), respectively. For DL system 2 and manual system 2, the CCs of gTPA (DL2) versus cIMT (DL2) and gTPA (Manual 2) versus cIMT (Manual 2) are 0.9273 (P < .0001) and 0.9410 (P < .0001), respectively. Our observations show that gTPA is strongly related to the manual readings taken by the experts in both DL systems. The CC between gTPA and cIMT for DL1 and DL2 is 0.9433 (P < .0001) and 0.9273 (P < .0001), respectively.

gTPA versus LD
For DL system 1 and manual system 1, the CCs of gTPA (DL1) versus LD (DL1) and gTPA (Manual 1) versus LD (Manual 1) are 0.3186 (P < .0001) and 0.2439 (P < .0001), respectively. For DL system 2 and manual system 2, the CCs of gTPA (DL2) versus LD (DL2) and gTPA (Manual 2) versus LD (Manual 2) are 0.3465 (P < .0001) and 0.3214 (P < .0001), respectively.

gTPA versus IAD
For DL system 1 and manual system 1, the CCs of gTPA (DL1) versus IAD (DL1) and gTPA (Manual 1) versus IAD (Manual 1) are 0.6141 (P < .0001) and 0.5144 (P < .0001), respectively. For DL system 2 and manual system 2, the CCs of gTPA (DL2) versus IAD (DL2) and gTPA (Manual 2) versus IAD (Manual 2) are 0.6460 (P < .0001) and 0.5903 (P < .0001), respectively.

Table C.1 Coefficient of correlation (CC) of geometric total plaque area (gTPA) versus lumen diameter (LD)/interadventitial diameter (IAD)/carotid intima–media thickness (cIMT) values w.r.t. DL1, DL2 and Manual 1, Manual 2.

SN | gTPA vs LD/IAD/cIMT for DL1 and GT1 | CC | P-value | gTPA vs LD/IAD/cIMT for DL2 and GT2 | CC | P-value

gTPA vs cIMT
1 | gTPA (DL1) vs cIMT (DL1) | 0.9433 | <.0001 | gTPA (DL2) vs cIMT (DL2) | 0.9273 | <.0001
2 | gTPA (GT1) vs cIMT (GT1) | 0.9549 | <.0001 | gTPA (GT2) vs cIMT (GT2) | 0.9410 | <.0001

gTPA vs LD
3 | gTPA (DL1) vs LD (DL1) | 0.3186 | <.0001 | gTPA (DL2) vs LD (DL2) | 0.3465 | <.0001
4 | gTPA (GT1) vs LD (GT1) | 0.2439 | <.0001 | gTPA (GT2) vs LD (GT2) | 0.3214 | <.0001

gTPA vs IAD
5 | gTPA (DL1) vs IAD (DL1) | 0.6141 | <.0001 | gTPA (DL2) vs IAD (DL2) | 0.6460 | <.0001
6 | gTPA (GT1) vs IAD (GT1) | 0.5144 | <.0001 | gTPA (GT2) vs IAD (GT2) | 0.5903 | <.0001

GT1 is same as Manual 1; GT2 is same as Manual 2.

Appendix D Statistical tests

Table D.1 Statistical significance between geometric total plaque area (gTPA) (DL1) and carotid intima–media thickness (cIMT) (DL1) using paired sample t-test.

Parameters | gTPA (DL1) vs cIMT (DL1)
Mean difference | −19.6107
Std. dev. of differences | 9.1326
95% CI | −20.5130 to −18.7084
Test statistic t | −42.731
DF | 395
Two-tailed probability | P = .0001 (<.05)

DF, Degree of freedom.

Table D.2 Statistical significance between geometric total plaque area (gTPA) (GT1) and carotid intima–media thickness (cIMT) (GT1) using paired sample t-test.

Parameters | gTPA (GT1) vs cIMT (GT1)
Mean difference | −19.0317
Std. dev. of differences | 9.7633
95% CI | −19.9963 to −18.0672
Test statistic t | −38.7910
DF | 395
Two-tailed probability | P = .0001 (<.05)

DF, Degree of freedom.

Table D.3 Statistical significance between geometric total plaque area (gTPA) (DL2) and carotid intima–media thickness (cIMT) (DL2) using paired sample t-test.

Parameters | gTPA (DL2) vs cIMT (DL2)
Mean difference | −18.5626
Std. dev. of differences | 7.9152
95% CI | −19.3445 to −17.7806
Test statistic t | −46.668
DF | 395
Two-tailed probability | P = .0001 (<.05)

DF, Degree of freedom.

Table D.4 Statistical significance between geometric total plaque area (gTPA) (GT2) and carotid intima–media thickness (cIMT) (GT2) using paired sample t-test.

Parameters | gTPA (GT2) vs cIMT (GT2)
Mean difference | −18.6870
Std. dev. of differences | 8.6008
95% CI | −19.5367 to −17.8372
Test statistic t | −43.236
DF | 395
Two-tailed probability | P = .0001 (<.05)

DF, Degree of freedom.

Table D.5 Statistical significance between geometric total plaque area (gTPA) (DL1) and carotid intima–media thickness (cIMT) (DL1) using Mann–Whitney test.

Parameters | gTPA (DL1) vs cIMT (DL1)
Average rank of first group | 594.5000
Average rank of second group | 198.5000
Mann–Whitney U | 0.00
Large sample test statistic Z | 24.357
Two-tailed probability | P = .0001 (<.05)

Table D.6 Statistical significance between geometric total plaque area (gTPA) (GT1) and carotid intima–media thickness (cIMT) (GT1) using Mann–Whitney test.

Parameters | gTPA (GT1) vs cIMT (GT1)
Average rank of first group | 594.5000
Average rank of second group | 198.5000
Mann–Whitney U | 0.00
Large sample test statistic Z | 24.357
Two-tailed probability | P = .0001 (<.05)

Table D.7 Statistical significance between geometric total plaque area (gTPA) (DL2) and carotid intima–media thickness (cIMT) (DL2) using Mann–Whitney test.

Parameters | gTPA (DL2) vs cIMT (DL2)
Average rank of first group | 594.5000
Average rank of second group | 198.5000
Mann–Whitney U | 0.00
Large sample test statistic Z | 24.357
Two-tailed probability | P = .0001 (<.05)

Table D.8 Statistical significance between geometric total plaque area (gTPA) (GT2) and carotid intima–media thickness (cIMT) (GT2) using Mann–Whitney test.

Parameters | gTPA (GT2) vs cIMT (GT2)
Average rank of first group | 594.5000
Average rank of second group | 198.5000
Mann–Whitney U | 0.00
Large sample test statistic Z | 24.357
Two-tailed probability | P = .0001 (<.05)

Table D.9 Statistical significance between geometric total plaque area (gTPA) (DL1) and carotid intima–media thickness (cIMT) (DL1) using Wilcoxon test.

Parameters | gTPA (DL1) vs cIMT (DL1)
Number of +ve differences | 0
Number of −ve differences | 396
Large sample test statistic Z | 17.2446
Two-tailed probability | P = .0001 (<.05)

Table D.10 Statistical significance between geometric total plaque area (gTPA) (GT1) and carotid intima–media thickness (cIMT) (GT1) using Wilcoxon test.

Parameters | gTPA (GT1) vs cIMT (GT1)
Number of +ve differences | 0
Number of −ve differences | 396
Large sample test statistic Z | 17.2446
Two-tailed probability | P = .0001 (<.05)

Table D.11 Statistical significance between geometric total plaque area (gTPA) (DL2) and carotid intima–media thickness (cIMT) (DL2) using Wilcoxon test.

Parameters | gTPA (DL2) vs cIMT (DL2)
Number of +ve differences | 0
Number of −ve differences | 396
Large sample test statistic Z | 17.2446
Two-tailed probability | P = .0001 (<.05)

Table D.12 Statistical significance between geometric total plaque area (gTPA) (GT2) and carotid intima–media thickness (cIMT) (GT2) using Wilcoxon test.

Parameters | gTPA (GT2) vs cIMT (GT2)
Number of +ve differences | 0
Number of −ve differences | 396
Large sample test statistic Z | 17.2446
Two-tailed probability | P = .0001 (<.05)

Table D.13 Statistical significance between geometric total plaque area (gTPA) (DL1) and carotid intima–media thickness (cIMT) (DL1) using Friedman test.

Parameters | gTPA (DL1) | cIMT (DL1)
N | 396 | 396
Minimum | 5.2332 | 0.3178
25th percentile | 14.3400 | 0.6930
Median | 18.5270 | 0.8610
75th percentile | 24.1850 | 1.0330
Maximum | 69.5960 | 2.6620
Chi-square | 396.0000
Significance | P < .0001 (<.05; passes)

Table D.14 Statistical significance between geometric total plaque area (gTPA) (GT1) and carotid intima–media thickness (cIMT) (GT1) using Friedman test.

Parameters | gTPA (GT1) | cIMT (GT1)
N | 396 | 396
Minimum | 4.0479 | 0.2019
25th percentile | 13.464 | 0.656
Median | 17.789 | 0.829
75th percentile | 23.219 | 1.025
Maximum | 68.592 | 2.569
Chi-square | 396.0000
Significance | P < .0001 (<.05; passes)

Table D.15 Statistical significance between geometric total plaque area (gTPA) (DL2) and carotid intima–media thickness (cIMT) (DL2) using Friedman test.

Parameters | gTPA (DL2) | cIMT (DL2)
N | 396 | 396
Minimum | 5.6896 | 0.2692
25th percentile | 13.779 | 0.660
Median | 17.996 | 0.832
75th percentile | 23.008 | 1.034
Maximum | 73.375 | 2.151
Chi-square | 396.0000
Significance | P < .0001 (<.05; passes)

Table D.16 Statistical significance between geometric total plaque area (gTPA) (GT2) and carotid intima–media thickness (cIMT) (GT2) using Friedman test.

Parameters | gTPA (GT2) | cIMT (GT2)
N | 396 | 396
Minimum | 4.1997 | 0.2463
25th percentile | 13.705 | 0.667
Median | 17.807 | 0.836
75th percentile | 23.150 | 1.016
Maximum | 67.469 | 2.378
Chi-square | 396.0000
Significance | P < .0001 (<.05; passes)

Appendix E List of abbreviations/symbols

SN | Abbreviation/symbol | Description
1 | IMC | Intima–media complex
2 | IMC-B | Intima–media complex border
3 | cIMT | Carotid intima–media thickness
4 | TPA | Total plaque area
5 | gTPA | Geometric total plaque area
6 | mTPA | Morphological total plaque area
7 | LI | Lumen intima
8 | MA | Media adventitia
9 | LDL | Low-density lipoprotein
10 | ROC | Receiver operating characteristic
11 | AUC | Area under the curve
12 | DL | Deep learning
13 | P1 | Reference point on C1
14 | P2 | Reference point on C2
15 | P3 | Reference point on C3
16 | d1 | Euclidean distance between vertex P1 and vertex P2
17 | d2 | Euclidean distance between vertex P1 and vertex P3
18 | δ | Distance of the reference point P1 to the line segment L
19 | cIMT10 | 10-year risk of cIMT
20 | gTPA10 | 10-year risk of gTPA

References
[1] D. Lloyd-Jones, R.J. Adams, T.M. Brown, M. Carnethon, S. Dai, G. De Simone, et al., Heart disease and stroke statistics—2010 update, Circulation 121 (7) (2010) e46–e215.
[2] Cardiovascular Diseases, World Health Organization. Available online: <https://www.who.int/health-topics/cardiovascular-diseases/#tab=tab_1>.
[3] D.H. O'Leary, J.F. Polak, R.A. Kronmal, T.A. Manolio, G.L. Burke, S.K. Wolfson Jr, Carotid-artery intima and media thickness as a risk factor for myocardial infarction and stroke in older adults, N. Engl. J. Med. 340 (1) (1999) 14–22.
[4] P. Libby, Y.J. Geng, G.K. Sukhova, D.I. Simon, R.T. Lee, Molecular determinants of atherosclerotic plaque vulnerability, Ann. N.Y. Acad. Sci. 811 (1) (1997) 134–145.
[5] L. Saba, S.K. Banchhor, N.D. Londhe, T. Araki, J.R. Laird, A. Gupta, et al., Web-based accurate measurements of carotid lumen diameter and stenosis severity: an ultrasound-based clinical tool for stroke risk assessment during multicenter clinical trials, Comput. Biol. Med. 91 (2017) 306–317.
[6] F. Molinari, K.M. Meiburger, G. Zeng, L. Saba, U.R. Acharya, L. Famiglietti, et al., Automated carotid IMT measurement and its validation in low contrast ultrasound

References

database of 885 patient Indian population epidemiological study: results of AtheroEdget Software, Int. Angiol. 31 (1) (2012) 42. [7] A.K. Patel, H.S. Suri, J. Singh, D. Kumar, S. Shafique, A. Nicolaides, et al., A review on atherosclerotic biology, wall stiffness, physics of elasticity, and its ultrasound-based measurement, Curr. Atheroscler. Rep. 18 (12) (2016) 83. [8] L. Saba, P.K. Jain, H.S. Suri, N. Ikeda, T. Araki, B.K. Singh, et al., Plaque tissue morphology-based stroke risk stratification using carotid ultrasound: a polling-based PCA learning paradigm, J. Med. Syst. 41 (6) (2017) 98. [9] A. Laine, J.M. Sanches, J.S. Suri, Ultrasound Imaging: Advances and Applications, Springer, 2012. [10] L. Saba, J.M. Sanches, L.M. Pedro, J.S. Suri (Eds.), Multi-modality Atherosclerosis Imaging and Diagnosis, Springer, New York, 2014. [11] J.S. Suri, C. Kathuria, F. Molinari (Eds.), Atherosclerosis Disease Management, Springer Science & Business Media, 2010. [12] M.L. Bots, Carotid intima-media thickness as a surrogate marker for cardiovascular disease in intervention studies, Curr. Med. Res. Opin. 22 (11) (2006) 21812190. [13] V. Nambi, L. Chambless, A.R. Folsom, M. He, Y. Hu, T. Mosley, et al., Carotid intima-media thickness and presence or absence of plaque improves prediction of coronary heart disease risk: the ARIC (Atherosclerosis Risk in Communities) study, J. Am. Coll. Cardiol. 55 (15) (2010) 16001607. [14] J.D. Spence, M. Eliasziw, M. DiCicco, D.G. Hackam, R. Galil, T. Lohmann, Carotid plaque area: a tool for targeting and evaluating vascular preventive therapy, Stroke 33 (12) (2002) 29162922. [15] J.D. Spence, R.A. Hegele, Noninvasive phenotypes of atherosclerosis: similar windows but different views, Stroke 35 (3) (2004) 649653. [16] S. Alsulaimani, H. Gardener, M.S. Elkind, K. Cheung, R.L. Sacco, T. Rundek, Elevated homocysteine and carotid plaque area and densitometry in the Northern Manhattan Study, Stroke 44 (2) (2013) 457461. [17] F. Molinari, G. Zeng, J.S. 
Suri, Intima-media thickness: setting a standard for a completely automated method of ultrasound measurement, IEEE Trans. Ultrason. Ferroelectr. Freq. Control. 57 (5) (2010) 11121124. [18] F. Molinari, C.S. Pattichis, G. Zeng, L. Saba, U.R. Acharya, R. Sanfilippo, et al., Completely automated multiresolution edge snapper—a new technique for an accurate carotid ultrasound IMT measurement: clinical validation and benchmarking on a multi-institutional database, IEEE Trans. Image Process. 21 (3) (2012) 12111222. [19] A. Linhart, J. Gariepy, P. Giral, J. Levenson, A. Simon, Carotid artery and left ventricular structural relationship in asymptomatic men at risk for cardiovascular disease, Atherosclerosis 127 (1) (1996) 103112. [20] F. Molinari, G. Zeng, J.S. Suri, An integrated approach to computer-based automated tracing and its validation for 200 common carotid arterial wall ultrasound images, J. Ultrasound Med. 29 (3) (2010) 399418. [21] F. Molinari, U.R. Acharya, G. Zeng, K.M. Meiburger, J.S. Suri, Completely automated robust edge snapper for carotid ultrasound IMT measurement on a multi-institutional database of 300 images, Med. Biol. Eng. Comput. 49 (8) (2011) 935945. [22] F. Molinari, K.M. Meiburger, G. Zeng, A. Nicolaides, J.S. Suri, CAUDLES-EF: carotid automated ultrasound double line extraction system using edge flow, J. Digit. Imaging 24 (6) (2011) 10591077.

269

270

CHAPTER 13 Geometric total plaque area is an equally powerful

[23] S. Luca, F. Molinari, K.M. Meiburger, U.R. Acharya, A. Nicolaides, J.S. Suri, Interand intra-observer variability analysis of completely automated cIMT measurement software (AtheroEdget) and its benchmarking against commercial ultrasound scanner and expert readers, Comput. Biol. Med. 43 (9) (2013) 12611272. [24] N. Ikeda, A. Gupta, N. Dey, S. Bose, S. Shafique, T. Arak, et al., Improved correlation between carotid and coronary atherosclerosis SYNTAX score using automated ultrasound carotid bulb plaque IMT measurement, Ultrasound Med. Biol. 41 (5) (2015) 12471262. [25] L. Saba, K.B. Sumit, H.S. Suri, N.D. Londhe, T. Araki, N. Ikeda, et al., Accurate cloud-based smart IMT measurement, its validation and stroke risk stratification in carotid ultrasound: a web-based point-of-care tool for multicenter clinical trial, Comput. Biol. Med. 75 (2016) 217234. [26] N. Ikeda, N. Dey, A. Sharma, A. Gupta, S. Bose, S. Acharjee, et al., Automated segmental-IMT measurement in thin/thick plaque with bulb presence in carotid ultrasound from multiple scanners: stroke risk assessment, Comput. Methods Programs Biomed. 141 (2017) 7381. [27] U.R. Acharya, M.R.K. Mookiah, S.V. Sree, R. Yanti, R.J. Martis, L. Saba, et al., Evolutionary algorithm-based classifier parameter tuning for automatic ovarian cancer tissue characterization and classification, Ultraschall Med. 35 (03) (2014) 237245. [28] U.R. Acharya, S.V. Sree, S. Kulshreshtha, F. Molinari, J.E.W. Koh, L. Saba, et al., GyneScan: an improved online paradigm for screening of ovarian cancer via tissue characterization, Technol. Cancer Res. Treat. 13 (6) (2014) 529539. [29] G. Pareek, U.R. Acharya, S.V. Sree, G. Swapna, R. Yantri, R.J. Martis, et al., Prostate tissue characterization/classification in 144 patient population using wavelet and higher order spectra features from transrectal ultrasound images, Technol. Cancer Res. Treat. 12 (6) (2013) 545557. [30] V.K. Shrivastava, N.D. Londhe, R.S. Sonawane, J.S. 
Suri, Reliable and accurate psoriasis disease classification in dermatology images using comprehensive feature space in machine learning paradigm, Expert Syst. Appl. 42 (15) (2015) 61846195. [31] M. Maniruzzaman, M.J. Rahman, M. Al-Mehedi Hasan, H.S. Suri, M.M. Abedin, A. El-Baz, et al., Accurate diabetes risk stratification using machine learning: role of missing value and outliers, J. Med. Syst. 42 (2018) 92. [32] C.P. Loizou, T. Kasparis, C. Spyrou, M. Pantziaris, Integrated system for the complete segmentation of the common carotid artery bifurcation in ultrasound images, Artif. Intell. Appl. Innovations 412 (1) (2013) 292301. [33] J.S. Suri, K. Liu, S. Singh, et al., Shape recovery algorithms using level sets in 2-D/3-D medical imagery: a state-of-the-art review, IEEE Trans. Inf. Technol. Biomed. 6 (1) (2002) 828. [34] J.S. Suri, S. Laxminarayan, PDE and Level Sets, Springer Science & Business Media, 2002. [35] T. Araki, P.K. Kumar, H.S. Suri, N. Ikeda, A. Gupta, L. Saba, et al., Two automated techniques for carotid lumen diameter measurement: regional versus boundary approaches, J. Med. Syst. 40 (7) (2016) 119. [36] P.K. Kumar, T. Araki, J. Rajan, L. Saba, F. Lavra, N. Ikeda, et al., Accurate lumen diameter measurement in curved vessels in carotid ultrasound: an iterative scale-space and spatial transformation approach, Med. Biol. Eng. Comput. 55 (2017) 14151434.

References

[37] V. Kuppili, et al., Extreme learning machine framework for risk stratification of fatty liver disease using ultrasound tissue characterization, J. Med. Syst. 41 (10) (2017) 152. [38] Y. LeCun, Y. Bengio, G. Hinton, Deep learning, Nature 521 (7553) (2015) 436444. [39] J. Long, E. Shelhamer, T. Darrell, Fully convolutional networks for semantic segmentation, in: Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2015. [40] M. Biswas, V. Kuppili, T. Araki, D.R. Edla, E.C. Godia, L. Saba, et al., Deep learning strategy for accurate carotid intima-media thickness measurement: an ultrasound study on Japanese diabetic cohort, Comput. Biol. Med. 98 (2018) 100117. [41] J.H. Klein, R.A. Hegele, D.G. Hackam, M.L. Koschinsky, M.W. Huff, J.D. Spence, Lipoprotein (a) is associated differentially with carotid stenosis, occlusion, and total plaque area, Arterioscler. Thromb. Vasc. Biol. 28 (10) (2008) 18511856. [42] E.B. Mathiesen, S.H. Johnsen, T. Wilsgaard, K.H. Bønaa, M.L. Løchen, I. Njølstad, Carotid plaque area and intima-media thickness in prediction of first-ever ischemic stroke: a 10-year follow-up of 6584 men and women: the Tromsø Study, Stroke 42 (2011) 972978. STROKEAHA-110. [43] M. Herder, S.H. Johnsen, K.A. Arntzen, E.B. Mathiesen, Risk factors for progression of carotid intima-media thickness and total plaque area: a 13-year follow-up study: the Tromsø Study, Stroke 43 (2012) 18181823. STROKEAHA-111. [44] E. Kamycheva, S.H. Johnsen, T. Wilsgaard, R. Jorde, E.B. Mathiesen, Evaluation of serum 25-hydroxyvitamin D as a predictor of carotid intima-media thickness and carotid total plaque area in nonsmokers: the Tromsø Study, Int. J. Endocrinol. 2013 (2013) 305141. [45] C. Dong, D. Della-Morte, D. Cabral, L. Wang, S.H. Blanton, C. Seemant, et al., Sirtuin/uncoupling protein gene variants and carotid plaque area and morphology, Int. J. Stroke 10 (8) (2015) 12471252. [46] H.A. Perez, N.H. Garcia, J.D. Spence, L.J. 
Armando, Adding carotid total plaque area to the Framingham risk score improves cardiovascular risk classification, Arch. Med. Sci. 12 (3) (2016) 513520. [47] J.D. Spence, K. Solo, Resistant atherosclerosis: the need for monitoring of plaque burden, Stroke 48 (6) (2017) 16241629. [48] M. Romanens, M.B. Mortensen, I. Sudano, T. Szucs, A. Adams, Extensive carotid atherosclerosis and the diagnostic accuracy of coronary risk calculators, Prev. Med. Rep. 6 (2017) 182186. [49] A. Adams, W. Bojara, K. Schunk, Early diagnosis and treatment of coronary heart disease in asymptomatic subjects with advanced vascular atherosclerosis of the carotid artery (Type III and IV b findings using ultrasound) and risk factors, Cardiol. Res. 9 (1) (2018) 22.

271

CHAPTER 14

Statistical characterization and classification of colon microarray gene expression data using multiple machine learning paradigms

Md. Maniruzzaman1,2, Md. Jahanur Rahman2, Benojir Ahammed1, Md. Menhazul Abedin1, Harman S. Suri3, Mainak Biswas4, Ayman El-Baz5, Petros Bangeas6, Georgios Tsoulfas7 and Jasjit S. Suri8

1 Statistics Discipline, Khulna University, Khulna, Bangladesh
2 Department of Statistics, University of Rajshahi, Rajshahi, Bangladesh
3 Brown University, Providence, RI, United States
4 Advanced Knowledge Engineering Centre, Global Biomedical Technologies, Inc., Roseville, CA, United States
5 Department of Bioengineering, University of Louisville, Louisville, KY, United States
6 Department of Surgery, Papageorgiou Hospital, Aristotle University Thessaloniki, Thessaloniki, Greece
7 Department of Surgery, Aristotle University of Thessaloniki, Thessaloniki, Greece
8 Stroke Monitoring Division, AtheroPoint, Roseville, CA, United States

14.1 Introduction

Cancer is the second leading cause of death globally, with 9.6 million deaths per year. Further, about 18.1 million new cancer cases emerge every year. There are over 100 types of cancer, such as colon, liver, ovarian, and breast cancer [1,2]. According to reports from the World Health Organization (WHO) in 2018, there were 1.76 million deaths from lung cancer, 862,000 deaths from colon cancer, and 782,000 deaths from liver cancer [2]. In 2017 there were almost 600,000 deaths due to cancer in the United States [3]. Thus there is a clear need to understand the underlying mechanism and characteristics of this potentially fatal disease to efficiently detect and treat this ubiquitous affliction. We also need to diagnose patients with cancer and identify the most significant genes that are responsible for it.

Cognitive Informatics, Computer Modelling, and Cognitive Science, Volume 1. DOI: https://doi.org/10.1016/B978-0-12-819443-0.00014-3 © 2020 Elsevier Inc. All rights reserved.


Microarray gene expression data consist of a large number of genes for just a small sample size. Further, these genes are highly correlated and carry high levels of noise. Many supervised and unsupervised machine learning (ML) techniques have been adopted to identify the most significant genes [4,5]. These techniques suffer from overfitting and multicollinearity problems due to noise, the large number of genes, and the small sample size [4-7]. Therefore there is a need to remove the noisy or unnecessary genes via a novel detection paradigm and to predict the high- or low-risk groups with ML systems using cross-validation (CV) protocols. ML techniques are therefore used with the aim of modeling the progression rate and treatment of cancer patients. Moreover, ML-based classifiers are also used to detect the most significant genes from the complex data set. Unsupervised learning algorithms such as hierarchical clustering [8,9], the self-organizing map [10], fuzzy neural networks (NN) [11-13], K-means clustering [14], etc., have been used to identify genes that are responsible for cancer. These techniques did not identify the most significant genes; as a result, their classification accuracies were low. Supervised learning techniques such as the artificial NN (ANN) [12,13], the support vector machine (SVM) [15-17], and so on were used both for feature extraction from gene expression data and for gene characterization using a training/testing paradigm. Different statistical tests such as the t-test [18-20], the Kruskal-Wallis (KW) test [21-23], and entropy and information gain [24-26] have also been widely used on large-scale gene expression data sets. These statistical tests provide a powerful paradigm for identification and can lead to a better design model. Unfortunately, these methods were not used within classification paradigms for gene identification.
In all the methods discussed earlier, there were no attempts to combine the gene sets with a suitable classifier for accurate gene identification. Therefore we hypothesize that if the best classifier is selected along with the suitable statistical test, an optimized protocol with higher accuracy is obtained. This can thus lead to a better optimized ML design. The global design of our ML system for gene expression data classification is shown in Fig. 14.1. On the premise of this foundational assumption, our study presents a two-stage system where we first identify the most significant genes using four statistical tests, namely the Wilcoxon sign rank sum (WCSRS) test, t-test, KW test, and F-test, which are adapted for cancerous gene identification using P-values. The second stage, the ML paradigm, picks and mix-matches the most suitable classifier for the best results; this includes 10 different classifiers, namely: linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), naive Bayes (NB), Gaussian process classification (GPC), SVM, ANN, logistic regression (LR), decision tree (DT), Adaboost (AB), and random forest (RF). Our ML framework is evaluated using three kinds of CV protocols, namely K2, K10, and JK. Further, we also use the reliability index (RI) as a performance evaluation measure of the ML system. Our system demonstrates that the WCSRS test, when combined with the RF-based classifier, gives the highest classification accuracy as compared to other conventional techniques.

14.1 Introduction

FIGURE 14.1 Global system of high-risk gene detection, cross-validation protocols for machine learning system embedded with several statistical tests and classification methods.

Overall this study offers the following contributions:
1. Combinational approach and simplicity of application: optimizes the ML system by selecting the best combination of statistical test and classifier among the four statistical tests (WCSRS test, t-test, KW test, and F-test) and 10 classifiers (LDA, QDA, NB, GPC, SVM, ANN, LR, DT, AB, and RF).
2. Automated system design: the system is fully automated, with a plug-and-play model.
3. Exhaustive data analysis: understands the ML system using different training protocols under four statistical tests combined with 10 classification strategies. This further involves optimization of the best matching strategy between data normalization, detection, and classification.
4. Performance evaluation: helps understand the effect of data size on ML systems, computing performance parameters such as accuracy (ACC), sensitivity (SE), specificity (SP), positive predictive value (PPV), negative predictive value (NPV), and F-measure (FM).
5. Reliability analysis: as part of the performance evaluation, we further compute the RI and the area under the curve (AUC).
6. Scientific validation: as part of the validation scheme of our ML system, we demonstrate the same performance using a breast cancer data set.


The overall layout is as follows: Section 14.2 presents the patients’ demographics along with data normalization. In Section 14.3, we present the materials and methods. Further, we present the four statistical tests and 10 classification methods along with statistical evaluation. Five experimental protocols and results are provided in Sections 14.4 and 14.5, respectively. Performance evaluations and hypothesis validations are presented in Section 14.6. A detailed discussion along with claims of the study, benchmarking different ML systems, intercomparison of the classifiers, and also their strengths, weaknesses, and extension of the study are presented in Section 14.7. Finally, the conclusion is presented in Section 14.8.

14.2 Patients demographics

In this study, we have used a publicly available real colon cancer data set [27], taken from the Kentridge biomedical data repository, United States [27]. It includes gene expression samples that were analyzed with an Affymetrix oligonucleotide complementary array covering more than 6500 human genes. The data set contains information from 62 patients (40 cancer and 22 control). About 2000 gene expressions were selected out of the 6500 genes, as presented in the previously developed protocol [27]. The selection criteria were based on the confidence in the measured expression levels; however, the source of the data set did not state the numerical confidence of the expression levels [27]. The data of all samples in the microarray are presented in a table constituting the gene expression matrix [27]: each row of the matrix corresponds to a single gene and each column to a single patient (sample). This gene expression matrix was adapted for our global system discussed in Fig. 14.1.

14.3 Materials and methods

The main motivation for the ML-based paradigm is to predict the high-risk genes based on a generalized ML system. Thus an efficient scheme is needed for preprocessing the input data for better characterization of the gene expression. The overall system consists of an efficient feature selection paradigm built on statistical tests. The added advantage of this solution is that it prevents underfitting and overfitting of the data [4-7]. The global system is shown in Fig. 14.1. The training/testing paradigm of the entire system is shown in Fig. 14.2. The first step is to divide the data set into two segments: training data and test data. These two segments are separated by the dotted line as follows: training on the gene expression data, or the offline system (shown on the left), and testing on the gene expression data, or the online system (shown on the right). The next stage is the filtering and normalization of the data, followed by selection of the top differentially expressed (DE) genes using four

14.3 Materials and methods

FIGURE 14.2 Local system for the machine learning.

statistical tests (WCSRS test, t-test, KW test, and F-test) based on the P-value. The DE genes are trained using the binary-class (cancer vs control) ML framework. The training parameters estimated from the classifiers are then applied to the online testing genes to predict the online (testing) risk class. Moreover, 10 classifiers, namely LDA, QDA, NB, GPC, SVM, ANN, LR, DT, AB, and RF, have been adapted to classify the patients into two categories: cancer versus control.

14.3.1 Gene expression data normalization

To avoid bias, it is customary that the ML system receives a gene expression input data set that no longer has redundant gene expressions. This is


accomplished using normalization of the variables [27], computed using the standardized equation given as follows:

Z = (X - μ) / σ    (14.1)

where X is the variable to be normalized, μ and σ are the arithmetic mean and standard deviation of that variable, and Z is the standardized variable, with zero mean and unit standard deviation.
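The standardization in Eq. (14.1) can be sketched in pure Python; the helper name and the toy expression values below are ours, for illustration only:

```python
def zscore(values):
    """Standardize a list of values: Z = (X - mu) / sigma, as in Eq. (14.1)."""
    n = len(values)
    mu = sum(values) / n
    sigma = (sum((v - mu) ** 2 for v in values) / n) ** 0.5  # population SD
    return [(v - mu) / sigma for v in values]

# hypothetical expression levels of one gene across four patients
z = zscore([2.0, 4.0, 6.0, 8.0])
# the standardized values have zero mean and unit standard deviation
```

Applying this per gene (per row of the expression matrix) yields the normalized matrix that is fed to the ML system.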

14.3.2 Feature selection

Feature selection is the process of selecting the subset of the most informative genes to improve the performance of the model. There are three reasons for using feature selection techniques (FST): (1) simplicity of the model for interpretation, (2) reduction of computational cost and saving of time, and (3) avoidance of the curse of dimensionality. The most informative genes are commonly selected using the t-test on the different classes for each gene [27]. The main assumption of the t-test is that the data must follow a normal distribution [27]. However, real gene expression data sets do not always satisfy the normality conditions, and the nonparametric tests (WCSRS and KW) then perform better than the t-test. The most significant genes have P-values < .0001 [28]. A brief discussion of the four statistical tests, viz. the WCSRS test, t-test, KW test, and F-test, is given in the following sections.

14.3.2.1 Wilcoxon sign rank sum test

The WCSRS test is a nonparametric test that is used to compare two matched samples, or repeated measurements on a single sample, to assess whether their population mean ranks differ. Let x1i and x2i (i = 1, 2, 3, ..., n) be the two sets of measurements. First, we compute the difference between the two measurements of each pair and its sign, sgn(x2i - x1i). If |x2i - x1i| equals zero, the pair is excluded from the analysis; the sample size is thereby reduced, and we let nr denote the reduced sample size. The remaining nr pairs are ordered from the smallest to the largest absolute difference, and Ri denotes the rank of the ith pair. The test statistic is

W = Σ_{i=1}^{nr} sgn(x2i - x1i) × Ri    (14.2)

The test statistic W of the WCSRS test is converted to a P-value, and we select the genes whose P-values are < .0001. In the current study, we select 27 genes using the WCSRS test (see Fig. 14.3).
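The ranking procedure behind Eq. (14.2) can be sketched in pure Python (a minimal illustration on hypothetical paired values, not the chapter's implementation; zero differences are dropped and tied magnitudes receive average ranks):

```python
def wilcoxon_w(x1, x2):
    """W = sum_i sgn(x2i - x1i) * Ri over nonzero differences (Eq. 14.2)."""
    diffs = [b - a for a, b in zip(x1, x2) if b != a]  # drop zero differences
    absd = sorted(abs(d) for d in diffs)               # order the magnitudes

    def rank(v):
        # 1-based rank of magnitude v; ties get the average of their positions
        idx = [i + 1 for i, a in enumerate(absd) if a == v]
        return sum(idx) / len(idx)

    return sum((1 if d > 0 else -1) * rank(abs(d)) for d in diffs)

# all differences positive: ranks 1, 2, 3 -> W = 6.0
w = wilcoxon_w([1, 2, 3], [2, 4, 6])
```

In practice the observed W is then referred to the null distribution of the signed-rank statistic to obtain the P-value.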

14.3.2.2 t-Test

The t-test is a parametric test that is used to determine whether the means of two samples (cancer and control), drawn from two normal populations with unknown variances, differ. The test statistic of the t-test is written as follows:


FIGURE 14.3 Selection of 27 genes using Wilcoxon sign rank sum test.

t = |μ1i - μ2i| / √(s1i²/n1 + s2i²/n2),  i = 1, 2, 3, ..., 2000    (14.3)

where μ1i and μ2i are the arithmetic means of the two classes, namely cancer and control, respectively; s1i² and s2i² are the variances of the cancer and control classes, respectively; and n1 and n2 are the total numbers of cancer and control patients, respectively. Eq. (14.3) follows a t-distribution with (n1 + n2 - 2) degrees of freedom. In the case of the colon cancer gene expression data set, 33 out of 2000 genes are significantly different (P < .0001) (see Fig. 14.4).
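Eq. (14.3) can be mirrored term by term in a small pure-Python helper (our own sketch, using unbiased sample variances; the toy expression values are hypothetical):

```python
import math

def welch_t(a, b):
    """t = |mean(a) - mean(b)| / sqrt(s_a^2/n_a + s_b^2/n_b), as in Eq. (14.3)."""
    def mean(v):
        return sum(v) / len(v)

    def var(v):  # unbiased sample variance
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)

    return abs(mean(a) - mean(b)) / math.sqrt(var(a) / len(a) + var(b) / len(b))

# one gene's expression in the two classes (toy values)
t = welch_t([3.1, 3.5, 3.3, 3.8], [2.0, 2.2, 1.9, 2.1])
```

A large t (and hence a small P-value) marks the gene as differentially expressed.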

14.3.2.3 Kruskal-Wallis test

The KW test is a nonparametric test that is used when the data violate the normality assumptions [29,30]. If n1 and n2 are the sample sizes for the two groups (cancer vs control), and R1 and R2 are the sums of the ranks for the cancer and control patient groups, then the test statistic of the KW test is written as

H = [12 / (n(n + 1))] × (R1²/n1 + R2²/n2) - 3(n + 1)    (14.4)

where n = n1 + n2 is the total sample size. The test statistic H of the KW test in Eq. (14.4) follows a Chi-square distribution with (2 - 1) = 1 degree of freedom. In this study, 22 significant genes are selected using the KW test (P < .0001) (see Fig. 14.5).
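For two groups, Eq. (14.4) can be sketched as follows (a toy pure-Python illustration; pooled average ranks handle ties):

```python
def kruskal_h(group1, group2):
    """H = 12/(n(n+1)) * (R1^2/n1 + R2^2/n2) - 3(n+1), as in Eq. (14.4)."""
    pooled = sorted(group1 + group2)

    def rank(v):  # average rank of v in the pooled sample (handles ties)
        idx = [i + 1 for i, x in enumerate(pooled) if x == v]
        return sum(idx) / len(idx)

    n1, n2 = len(group1), len(group2)
    n = n1 + n2
    r1 = sum(rank(v) for v in group1)  # rank sum of group 1
    r2 = sum(rank(v) for v in group2)  # rank sum of group 2
    return 12.0 / (n * (n + 1)) * (r1 ** 2 / n1 + r2 ** 2 / n2) - 3 * (n + 1)

# fully separated toy groups give the largest H for this sample size
h = kruskal_h([1, 2, 3], [4, 5, 6])
```

Comparing H against the Chi-square distribution with 1 degree of freedom yields the P-value used for gene selection.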

14.3.2.4 F-test

The main goal of the F-test is to test whether or not all the different classes of Y have the same mean of X. To perform the F-test, the following notations


FIGURE 14.4 Selection of 33 genes using t-test.

FIGURE 14.5 Selection of 22 genes using Kruskal-Wallis test.

are used. Let nj be the number of samples in class Y = j (j = 1, 2), μj be the sample mean of the predictor X for the target class Y = j, Sj² be the sample variance of the predictor X for the target class Y = j, and μ be the overall mean of the predictor X: μ = (Σ_{j=1}^{J} nj μj) / n, where n is the total number of patients and J is the total number of classes. The P-value is determined from the F-statistic, given by

F = [Σ_{j=1}^{J} nj (μj - μ)² / (J - 1)] / [Σ_{j=1}^{J} (nj - 1) Sj² / (n - 1)]

which follows an


FIGURE 14.6 Selection of 133 genes using F-test.

F-distribution with (J - 1) and (n - 1) degrees of freedom. We select the 133 genes whose P-values are < .0001 (see Fig. 14.6). Using the approaches discussed earlier, the erroneous genes are removed by applying the four statistical tests to the P-values; we select the genes whose P-value is < .0001 [28]. Using this cutoff point, we have identified 27, 33, 22, and 133 genes for the WCSRS test, t-test, KW test, and F-test, respectively. Ten ML-based classifiers are applied to the selected genes (SGs), and the best combination of statistical test and classifier is chosen on the basis of classification accuracy: we choose the combination that gives the highest classification accuracy.
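The F-statistic above can be sketched in pure Python. Note that we follow the (n - 1) denominator as stated in the text, whereas classical one-way ANOVA divides the within-class term by (n - J):

```python
def f_statistic(groups):
    """Between-class over within-class variance, per the chapter's formula."""
    J = len(groups)
    n = sum(len(g) for g in groups)

    def mean(v):
        return sum(v) / len(v)

    def svar(v):  # unbiased sample variance
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)

    mu = sum(sum(g) for g in groups) / n  # overall mean
    between = sum(len(g) * (mean(g) - mu) ** 2 for g in groups) / (J - 1)
    within = sum((len(g) - 1) * svar(g) for g in groups) / (n - 1)
    return between / within

# two well-separated toy classes for one gene
f = f_statistic([[0.0, 0.0, 1.0], [10.0, 10.0, 11.0]])
```

Genes whose F-based P-value falls below .0001 would then be retained, mirroring the 133-gene selection.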

14.3.3 Classifier types

As presented in the global system in Fig. 14.1, the ML system requires a classifier for the CV system discussed in the design of Fig. 14.2. In this study, we have adapted 10 classifiers for risk stratification in the ML framework. They are adopted due to their simplicity and popularity: LDA, QDA, NB, GPC, SVM, ANN, LR, DT, AB, and RF. The performance of these classifiers is evaluated using ACC, SE, SP, PPV, NPV, FM, and AUC, as shown in Fig. 14.7. A brief discussion of the classifiers is presented in the following sections.

14.3.3.1 Linear discriminant analysis

The LDA is a supervised learning method introduced in 1936 by Fisher [31]. It is a generalization of Fisher's LDA that is used in statistics and the ML paradigm to find a linear combination of features that separates two or more classes [32], with each class or group having an equal covariance matrix. The main objective of the classifier is to classify in such a way that maximizes the separation between


FIGURE 14.7 Performance of the machine learning system.

classes and minimizes the variation within classes [33]. The mathematical formula of LDA is written as

X^T Σ^(-1) (X̄2 - X̄1) - (1/2)(X̄2 + X̄1)^T Σ^(-1) (X̄2 - X̄1) > c    (14.5)

where X is the data matrix, X̄2 and X̄1 are the mean vectors for the two groups, say cancer and control, respectively, Σ is the sample covariance matrix, X^T is the transpose of the data matrix X, and c is the threshold of the decision boundary. The value of c may be zero, greater than zero, or less than zero. If c = 0, the classes are treated symmetrically; if the left-hand side exceeds c, the observation is classified into the cancer group, and vice versa.
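The one-dimensional special case of Eq. (14.5), with the pooled variance standing in for the shared covariance matrix, can be sketched as follows (toy single-gene data; the threshold c is taken as zero):

```python
def lda_score(x, class1, class2):
    """Scalar Eq. (14.5): x*w - (1/2)(m1 + m2)*w, w = (m2 - m1)/s^2 (> 0 => class 2)."""
    def mean(v):
        return sum(v) / len(v)

    m1, m2 = mean(class1), mean(class2)
    # pooled variance: LDA assumes both classes share the same covariance
    ss = lambda v, m: sum((t - m) ** 2 for t in v)
    s2 = (ss(class1, m1) + ss(class2, m2)) / (len(class1) + len(class2) - 2)
    w = (m2 - m1) / s2
    return x * w - 0.5 * (m1 + m2) * w

control = [1.0, 1.2, 0.8]   # toy class 1
cancer = [3.0, 3.2, 2.8]    # toy class 2
# positive score -> cancer group, negative -> control, zero at the class midpoint
score = lda_score(3.1, control, cancer)
```

The same rule in matrix form uses the inverse pooled covariance matrix in place of 1/s².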

14.3.3.2 Quadratic discriminant analysis

The QDA is also a supervised learning method that is used in ML and statistical learning to classify objects into two or more classes by a quadratic surface. It is an extension of LDA and does not assume equal covariance matrices among the groups [34]. The mathematical formulation of QDA is written as

(1/2) X^T (Σ2^(-1) - Σ1^(-1)) X + (X̄1^T Σ1^(-1) - X̄2^T Σ2^(-1)) X + (1/2)(X̄2^T Σ2^(-1) X̄2 - X̄1^T Σ1^(-1) X̄1) + (1/2) log(|Σ2| / |Σ1|) > c    (14.6)

where X is the data matrix, X^T is the transpose of the data matrix X, Σ1 and Σ2 are the sample covariance matrices for the two groups, say cancer patients and controls, respectively, X̄1^T and X̄2^T are the transposes of the mean vectors for cancer patients and controls, and c is the cutoff point of the decision boundary. The classification rule of QDA is the same as that of LDA.
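A one-dimensional sketch of the quadratic discriminant, i.e., the log density ratio with per-class means and variances, is shown below (our own minimal illustration, not the chapter's implementation; a positive score assigns x to class 1):

```python
import math

def qda_score(x, class1, class2):
    """1-D QDA discriminant: log p1(x) - log p2(x) with per-class variances."""
    def mean(v):
        return sum(v) / len(v)

    def var(v):  # unbiased sample variance, one per class (unlike LDA)
        m = mean(v)
        return sum((t - m) ** 2 for t in v) / (len(v) - 1)

    m1, m2 = mean(class1), mean(class2)
    v1, v2 = var(class1), var(class2)
    return (-0.5 * math.log(v1) - (x - m1) ** 2 / (2 * v1)
            + 0.5 * math.log(v2) + (x - m2) ** 2 / (2 * v2))

cancer = [3.0, 3.2, 2.8]    # toy class 1
control = [1.0, 1.2, 0.8]   # toy class 2
score = qda_score(3.0, cancer, control)  # positive score -> cancer
```

Because each class keeps its own variance, the decision rule is quadratic in x, unlike the linear rule of LDA.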


14.3.3.3 Naive Bayes

The NB classifiers are a family of simple probabilistic classifiers. It is a collection of classification algorithms based on Bayes' theorem: not a single algorithm but a family of algorithms that share a common principle, namely that every pair of features being classified is independent of the others and every feature contributes equally to the outcome [35]. The data set is divided into two parts: (1) the feature matrix (xi), which contains the features or explanatory variables, and (2) the response vector (y), which contains the outcomes. Bayes' theorem states that

P(y | x1, ..., xn) = P(y) Π_{i=1}^{n} P(xi | y) / Π_{i=1}^{n} P(xi)    (14.7)

where P(y | x1, ..., xn) is the conditional probability of y given x1, x2, ..., xn; P(y) is the probability of y; P(xi | y) is the conditional probability of xi (i = 1, 2, 3, ..., n) given y; P(xi) is the probability of xi; and Π is the product symbol. We compute this probability for all possible values of the class variable y and pick the output with the maximum probability. Thus the classifier model is mathematically expressed as

ŷ = argmax_y P(y) Π_{i=1}^{n} P(xi | y)    (14.8)
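Eqs. (14.7) and (14.8) with one independent Gaussian per feature and class can be sketched as follows (toy two-gene data; the helper names and values are ours):

```python
import math

def gaussian_nb_predict(x, data):
    """y_hat = argmax_y P(y) * prod_i P(x_i | y), as in Eq. (14.8)."""
    total = sum(len(s) for s in data.values())

    def logpdf(v, m, var):  # log of a 1-D Gaussian density
        return -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)

    best, best_lp = None, None
    for label, samples in data.items():
        lp = math.log(len(samples) / total)  # log prior P(y)
        for i, xi in enumerate(x):
            col = [s[i] for s in samples]
            m = sum(col) / len(col)
            var = sum((c - m) ** 2 for c in col) / (len(col) - 1)
            lp += logpdf(xi, m, var)         # add log P(x_i | y)
        if best_lp is None or lp > best_lp:
            best, best_lp = label, lp
    return best

# hypothetical two-gene expression profiles per class
data = {"cancer": [[3.0, 1.0], [3.2, 1.1], [2.8, 0.9]],
        "control": [[1.0, 3.0], [1.2, 3.1], [0.8, 2.9]]}
label = gaussian_nb_predict([3.1, 1.05], data)
```

Working in log space avoids underflow when many features are multiplied together.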

14.3.3.4 Gaussian process classification

Gaussian process (GP) is a nonparametric method that is mainly used in ML, both for classification and for regression. It can easily handle various problems such as the insufficient capacity of classical linear methods, complex data types, the curse of dimensionality, etc. A GP is a collection of random variables, any finite number of which has a joint Gaussian distribution. A GP is a Gaussian random function specified by a mean function and a covariance function [36,37]. Mathematically, it is defined as

f ~ GP(m(x), K(x, x'))    (14.9)

where x is the input vector, m(x) is the mean function of x, and K(x, x') is the positive definite covariance (or kernel) function, defined as K(x, x') = E[(x - m(x))(x' - m(x'))^T]. The kernel may be linear, polynomial (Poly), sigmoid, radial basis function (RBF), and so on. In our current study, we have used three types of kernel, namely linear, polynomial of order two (Poly-2), and RBF.
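The three kernels used in this study (linear, Poly-2, RBF) and the resulting positive semidefinite kernel (Gram) matrix can be sketched as follows; the gamma and c0 hyperparameters are illustrative defaults, not the chapter's settings:

```python
import math

def linear_k(a, b):
    """Linear kernel: inner product of the two vectors."""
    return sum(x * y for x, y in zip(a, b))

def poly2_k(a, b, c0=1.0):
    """Polynomial kernel of order two (Poly-2)."""
    return (linear_k(a, b) + c0) ** 2

def rbf_k(a, b, gamma=0.5):
    """Radial basis function kernel."""
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def gram(points, kernel):
    """Kernel (Gram) matrix K with K[i][j] = k(x_i, x_j)."""
    return [[kernel(p, q) for q in points] for p in points]

K = gram([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]], rbf_k)
```

The Gram matrix over the training inputs is the covariance matrix the GP places on the latent function values.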

14.3.3.5 Support vector machine

The SVM is a supervised learning technique that is used in ML [38]. Suppose that the data set S consists of a series of observations x1, x2, ..., xp ∈ X and a series of labels y1, y2, ..., yn ∈ y associated with the observations. A separating hyperplane classifier learns a function f : X → y from S that is used to


FIGURE 14.8 Hyper plane separating two classes.

predict the label of any new observation x ∈ X as f(x), classifying it into one of two classes, y ∈ {-1, +1}. A separating hyperplane then has the property that W^T X + b > 0 if yi = +1 and W^T X + b < 0 if yi = -1 (see Fig. 14.8). The minimal distance from the observations to the hyperplane is known as the margin, and the maximal margin hyperplane is the separating hyperplane for which the margin is largest. If the data are not linearly separable, SVM uses the kernel trick: kernels are functions that project a low-dimensional input space into a higher-dimensional space, that is, they convert a nonseparable problem into a separable one. The same kinds of kernels as in GPC are adopted here.
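The separating-hyperplane rule W^T X + b > 0 and the margin can be illustrated for a fixed, hypothetical (w, b); fitting the maximal-margin hyperplane itself is left to an SVM solver:

```python
def hyperplane_predict(x, w, b):
    """Label +1 if w^T x + b > 0, else -1 (the separating-hyperplane rule)."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else -1

def margin(x, w, b):
    """Euclidean distance from x to the hyperplane w^T x + b = 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    norm = sum(wi * wi for wi in w) ** 0.5
    return abs(score) / norm

w, b = [1.0, -1.0], 0.0                    # hypothetical trained parameters
y = hyperplane_predict([2.0, 1.0], w, b)   # this point falls on the +1 side
```

Maximizing the smallest such margin over the training points is exactly the SVM training objective.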

14.3.3.6 Artificial neural network The ANN is one of the important tools used in ML. It is a brain-inspired system intended to replicate the way that humans learn [39]. An NN consists of input and output layers, as well as (in most cases) a hidden layer consisting of units


that transform the input into something that the output layer can use. In this study, we have used the back-propagation algorithm for training the ANN. Further, we have implemented different numbers of hidden layers, ranging from 1 to 50, to obtain a better network architecture [39].

14.3.3.7 Logistic regression The LR is a supervised learning technique introduced in 1958 by David Cox [40]. LR is used when the output variable is categorical and the input variables are either discrete or continuous [41]. LR estimates the parameters and predicts the probability of the output variable (cancer vs control) from the input variables by choosing a cutoff point: if the predicted probability is greater than the cutoff, the observation belongs to one class, and otherwise to the other [42].
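The probability-plus-cutoff rule can be sketched as follows; the logistic coefficients and the 0.5 cutoff are illustrative assumptions, not the study's fitted model.

```python
import math

# Sketch of the probability-plus-cutoff rule described above. The
# logistic coefficients (beta0, betas) and the 0.5 cutoff are
# illustrative assumptions, not the study's fitted model.
def predict_proba(beta0, betas, x):
    z = beta0 + sum(b * xi for b, xi in zip(betas, x))
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid

def classify(p, cutoff=0.5):
    return "cancer" if p > cutoff else "control"
```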

14.3.3.8 Decision tree A DT classifier is a decision support tool that uses a tree structure built from the input features. The main objective of this classifier is to build a model that predicts the target variable from several input features. Decision rules can easily be extracted for given input data, which makes this classifier suitable for many kinds of application [43].

14.3.3.9 Adaboost AB stands for adaptive boosting, an ML technique. Freund and Schapire introduced the AB algorithm in 1996 [44] and won the Gödel Prize in 2003 for their work. It is used in conjunction with different types of algorithms to improve a classifier's performance. AB is very sensitive to noisy data and outliers; in some problems, it is less susceptible to overfitting than other learning algorithms. Every learning algorithm tends to suit some problem types better than others and typically has many parameters and configurations to adjust before it achieves optimal performance on a data set. AB is known as the best out-of-the-box classifier [44].

14.3.3.10 Random forest The RF is an ensemble learning method for regression and classification in ML that constructs multiple DTs via bootstrap aggregation [45–47]. RF classifies samples based on the predictions of the individual trees. The main advantage of RF is that it fits categorical data well: the final solution is obtained by a majority-voting system, in which the result of each tree is counted. In this study, we have used 500 trees.
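The majority-voting step can be sketched as below; the per-tree predictions are placeholders standing in for the study's 500 trained trees.

```python
from collections import Counter

# Sketch of the majority-voting step described above: each tree casts a
# predicted label and the forest returns the most common one. The
# per-tree predictions here stand in for the study's 500 trained trees.
def majority_vote(tree_predictions):
    return Counter(tree_predictions).most_common(1)[0][0]
```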

14.3.4 Statistical evaluation The performance of all classifiers is evaluated by different measures: accuracy (ACC), sensitivity (SE), specificity (SP), PPV, NPV, and FM. These measures are calculated using the true positive


(TP), true negative (TN), false positive (FP), and false negative (FN). Using these measurements, the performance measures (PMs) are defined in the following sections.

14.3.4.1 Accuracy It is the proportion of the sum of true positives and true negatives to the total population. It is expressed mathematically as follows: ACC (%) = [(TP + TN) / (TP + FN + FP + TN)] × 100.

14.3.4.2 Sensitivity It is the proportion of predicted true positives to the actual positive condition. It is expressed mathematically as follows: SE (%) = [TP / (TP + FN)] × 100.

14.3.4.3 Specificity It is the proportion of predicted true negatives to the actual negative condition. It is expressed mathematically as follows: SP (%) = [TN / (FP + TN)] × 100.

14.3.4.4 Positive predictive value The PPV is the proportion of predicted true positives to the total predicted positive condition. It is expressed mathematically as follows: PPV (%) = [TP / (TP + FP)] × 100.

14.3.4.5 Negative predictive value It is the proportion of predicted true negatives to the total predicted negative condition. It is expressed mathematically as follows: NPV (%) = [TN / (FN + TN)] × 100.

14.3.4.6 F-measure The FM is the harmonic mean of recall and precision. It is expressed mathematically as follows: FM (%) = [2TP / (2TP + FP + FN)] × 100.
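A minimal sketch computing the six measures above from the confusion-matrix counts (specificity is taken here as TN/(TN + FP), the standard definition):

```python
# Minimal sketch computing the six measures defined above from the
# confusion-matrix counts. Specificity is taken as TN / (TN + FP),
# the standard definition.
def performance_measures(tp, tn, fp, fn):
    return {
        "ACC": 100.0 * (tp + tn) / (tp + tn + fp + fn),
        "SE":  100.0 * tp / (tp + fn),               # sensitivity (recall)
        "SP":  100.0 * tn / (tn + fp),               # specificity
        "PPV": 100.0 * tp / (tp + fp),               # precision
        "NPV": 100.0 * tn / (tn + fn),
        "FM":  100.0 * 2 * tp / (2 * tp + fp + fn),  # F-measure
    }
```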

14.4 Five experimental protocols In this section, we discuss five different kinds of experimental protocols, namely (1) kernel optimization, (2) effect of P-value during statistical tests on ML performance, (3) intercomparison of the classifiers, (4) effect of dominant genes, and (5) effect of data size on memorization versus generalization.

14.4.1 Experiment 1: Kernel optimization The main objective of this section is to choose the best kernel for the SVM and GP-based classifiers during the training and testing phases. The optimized kernel is then used in all subsequent experimental protocols. Three sets of SVM and GPC kernels are chosen, namely linear, radial basis function (RBF), and Poly-2


(polynomial of order two) during the experimentation. The optimization is tried for all three CV protocols, namely K2, K10, and JK. The best kernel is selected based on accuracy with respect to changes in the dominant genes: we select the kernel that gives the highest classification accuracy over the dominant gene features.

14.4.2 Experiment 2: Effect of P-value during statistical tests on machine learning performance This experiment presents the effect of the P-value cutoffs of the four statistical tests (WCSRS test, t-test, KW test, and F-test) on the classifiers. We adopted four P-value cutoffs: .05, .01, .001, and .0001. Using these P-values with the WCSRS test, we select 387, 194, 64, and 27 genes out of 2000 genes. Similarly, keeping the same P-value cutoffs, 478, 246, 94, and 33 genes were selected using the t-test; 387, 188, 62, and 22 using the KW test; and 714, 431, 246, and 133 genes using the F-test, respectively. We have adopted three partition protocols, namely K2, K10, and JK, for the generalization of the ML system, and these procedures were repeated for 20 trials (T = 20). Finally, the system mean classification accuracy is computed.
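The gene-selection step can be sketched as a simple P-value filter; the gene names and P-values below are made-up placeholders, not the study's WCSRS/t/KW/F results.

```python
# Sketch of the gene-selection step: keep the genes whose test P-value
# falls below the chosen cutoff. The gene names and P-values below are
# made-up placeholders, not the study's WCSRS/t/KW/F results.
def select_genes(p_values, cutoff):
    # p_values maps gene name -> P-value of the statistical test
    return [gene for gene, p in p_values.items() if p < cutoff]
```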

14.4.3 Experiment 3: Intercomparison of the classifiers We have designed a set of 40 combination systems by crisscrossing the four statistical tests (WCSRS test, t-test, KW test, and F-test) with the 10 classifiers, viz. LDA, QDA, NB, GPC, SVM, ANN, LR, DT, AB, and RF, under three CV protocols (K2, K10, and JK), and the mean accuracy was computed over 20 trials. We choose the combination of FST/statistical test and classifier that gives the highest classification accuracy. The following formula is used to compute the mean classification accuracy:

μ̄_m(c) = [Σ_{k=1}^{K=3} Σ_{f=1}^{F=4} Σ_{t=1}^{T=20} Σ_{p=1}^{P=62} A(k, t, f, p, c)] / (K × T × F × P)   (14.10)

where A(k, t, f, p, c) represents the accuracy of the classifier c when the protocol type (PT) is k, the trial is t, the FST is f, and the patient is p. K, T, F, and P represent the total numbers of PTs, trials, FSTs, and patients; note that K, F, T, and P are 3, 4, 20, and 62, respectively.
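A sketch of this aggregation, with `acc` standing in as a hypothetical lookup for A(k, t, f, p, c) at a fixed classifier c:

```python
# Sketch of the aggregation in Eq. (14.10): average the per-run
# accuracies A(k, t, f, p, c) over protocols, trials, feature-selection
# tests, and patients, for a fixed classifier c. `acc` is a hypothetical
# lookup function standing in for the stored accuracy values.
def mean_accuracy(acc, K, T, F, P):
    total = sum(
        acc(k, t, f, p)
        for k in range(K)
        for t in range(T)
        for f in range(F)
        for p in range(P)
    )
    return total / (K * T * F * P)
```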

14.4.4 Experiment 4: Effect of dominant genes The main objective of this experiment is to show the effect of dominant genes on the classification accuracy over the FSTs. The JK protocol is used to compute the mean classification accuracy over the FSTs, which is given by the following formula:


ᾱ_m(c) = [Σ_{f=1}^{F=4} Σ_{t=1}^{T=20} Σ_{p=1}^{P=62} A(f, t, p, c)] / (F × T × P)   (14.11)

where A(f, t, p, c) represents the accuracy of the classifier c when the FST is f, the trial is t, and the patient is p, and the total numbers of FSTs, trials, and patients are F, T, and P. The mean accuracy of the classification algorithms is then evaluated in terms of these measures.

14.4.5 Experiment 5: Effect of data size on memorization versus generalization The main objective of this section is to show the effect of data size on the classification accuracy for risk stratification using the ML-based paradigm. It is necessary to understand the accuracy of all classifiers with increasing training data size. In this experiment, we increase the data size from 10% to 100% in increments of 10%, giving 10 data sizes: n = 6, 12, 19, 25, 31, 37, 43, 50, 56, and 62. Each time we take a training data set (a combination of cancer and control), we compute the accuracy for the changed data size using the following formula:

ᾱ_{N_i}(c) = [Σ_{k=1}^{K=3} Σ_{f=1}^{F=4} Σ_{t=1}^{T=20} Σ_{p=1}^{n_i} A(k, t, f, n_i, c)] / (K × T × F × P),   i = 1, 2, 3, ..., 10   (14.12)

where A(k, t, f, n_i, c) represents the accuracy of the classifier c when the PT is k, the trial is t, the FST is f, and the data size is n_i, while the total numbers of PTs, trials, FSTs, and data sizes are K, T, F, and n_i, respectively. The mean accuracy of the classification algorithms is then evaluated in terms of these measures.

14.5 Results This section presents the results of the five experimental protocol setups discussed in Sections 14.4.1–14.4.5. The results of these five experiments are described in Sections 14.5.1–14.5.5, respectively.

14.5.1 Results of experiment 1: Kernel optimization Accuracy versus dominant genes for the three CV protocols (K2, K10, and JK) is demonstrated in Fig. 14.9A–C; the corresponding table may be seen in Appendix A: Table A.1. The plots show an increase in classification accuracy with an increase in dominant genes (D). As seen in Fig. 14.9A–C, the GP-based classifier with the Poly-2 kernel gave the highest classification accuracy as the dominant genes change (P-value decreasing). Fig. 14.9A–C also indicates that


FIGURE 14.9 Comparison of different kernels (linear, radial basis function, and Poly-2) for fixed data size (n = 62) using three partition protocols: (A) K2, (B) K10, and (C) JK. Black arrow indicates the result of the proposed method.

the SVM classifier with the RBF kernel gave the highest classification accuracy with changes in the dominant genes. However, the classification accuracy of all classifiers shows more variation for the K10 protocol than for the K2 protocol; this occurred due to the small sample size. So we select the Poly-2 kernel for the GP-based classifier and the RBF kernel for the SVM. Note that Appendix A: Table A.1 reports the mean of the classification accuracy over the dominant genes (D) for each set of kernels.

14.5.2 Results of experiment 2: Effect of P-value during statistical tests on machine learning performance The main objective of this experiment was to show the effect of P-values on the ML training phase. In this study, we select the most informative genes on


FIGURE 14.10 Relationship between cutoff point of P-value and number of significant genes.

the basis of four statistical P-value cutoffs: .05, .01, .001, and .0001. The number of genes decreases as we decrease the P-value cutoff, as shown in Fig. 14.10. It is noted that the mean accuracy over all classifiers and all FSTs (WCSRS test, t-test, KW test, and F-test) is maximum for P-value <.0001, as shown in Tables A.2–A.4 and 14.1, respectively. Note that the slopes of the tests are in the following order: WCSRS test, t-test, KW test, and F-test, showing that the WCSRS is the most effective test. The classification accuracy of all classifiers increases for all statistical tests (Table 14.1). It is observed that the RF-based classifier gives the highest classification accuracy for all protocols with P-value <.0001, which has the lowest number of genes (see R1 to R4 in Table 14.1). The corresponding figures are described in Appendix B. Note that as we go down the table from K2 to K10 to JK, the size of the training data set increases, thereby increasing the accuracy. Further, for each protocol, as the P-value decreases the accuracy increases, providing evidence that the classifiers train well on the most significant genes with less noise.

14.5.3 Results of experiment 3: Intercomparison of the classifiers The objective of this section is to compare the 40 different combination systems for each partition protocol. Table 14.2 shows the classification accuracy of the 40 combination systems with the data size fixed (n = 62). It shows that the combination of the WCSRS test and the RF-based classifier gives the highest classification accuracy, 99.81%, among all combinations. It is

Table 14.1 Change in mean accuracy of all classifiers and P-values of the Wilcoxon sign rank sum test.

PT  | SN | P-value | # of genes | LDA   | QDA   | NB    | GPC   | SVM   | ANN   | LR    | DT    | AB    | RFa
K2  | R1 | .05     | 387        | 80.01 | 56.77 | 73.70 | 82.19 | 82.58 | 70.48 | 74.35 | 66.45 | 75.16 | 84.56
K2  | R2 | .01     | 194        | 79.68 | 62.74 | 75.97 | 82.74 | 82.97 | 74.35 | 73.39 | 72.58 | 79.36 | 85.97
K2  | R3 | .001    | 64         | 78.55 | 64.35 | 82.42 | 85.00 | 85.96 | 76.77 | 74.52 | 75.32 | 78.38 | 85.32
K2  | R4 | .0001   | 27         | 59.51 | 71.94 | 85.16 | 86.45 | 85.00 | 77.90 | 76.45 | 76.29 | 80.97 | 85.80
K10 | R1 | .05     | 387        | 80.83 | 75.83 | 76.67 | 88.33 | 87.50 | 71.05 | 78.33 | 74.16 | 82.91 | 88.33
K10 | R2 | .01     | 194        | 75.83 | 71.66 | 80.83 | 82.50 | 88.33 | 73.43 | 74.99 | 78.33 | 75.00 | 92.50
K10 | R3 | .001    | 64         | 72.83 | 73.33 | 80.00 | 85.00 | 90.00 | 70.63 | 72.08 | 77.50 | 87.92 | 93.49
K10 | R4 | .0001   | 27         | 75.83 | 75.83 | 81.67 | 87.50 | 87.50 | 70.00 | 71.25 | 78.34 | 81.84 | 95.05
JK  | R1 | .05     | 387        | 96.48 | 64.86 | 76.10 | 93.39 | 93.45 | 78.63 | 99.45 | 99.74 | 96.46 | 99.79
JK  | R2 | .01     | 194        | 96.43 | 66.65 | 82.60 | 92.54 | 93.29 | 79.87 | 99.74 | 99.69 | 96.35 | 99.82
JK  | R3 | .001    | 64         | 99.56 | 67.74 | 83.82 | 91.99 | 93.39 | 77.68 | 99.58 | 99.74 | 94.77 | 99.77
JK  | R4 | .0001   | 27         | 95.34 | 84.60 | 84.05 | 92.03 | 91.75 | 90.66 | 89.40 | 94.85 | 95.76 | 99.81

AB, Adaboost; ANN, artificial neural network; DT, decision tree; GPC, Gaussian process classification; LDA, linear discriminant analysis; LR, logistic regression; NB, naive Bayes; PT, protocol type; P-values, probability values; QDA, quadratic discriminant analysis; RF, random forest; Rj, jth row; SN, serial number; SVM, support vector machine; WCSRS, Wilcoxon sign rank sum.
a Bold and shaded values indicate the result of the proposed method.


Table 14.2 Accuracy of the four tests with 10 classifiers using three protocols (120 readings).

PT  | SN  | CT  | WCSRS test | t-test | KW test | F-test
K2  | R1  | LDA | 63.87 | 61.45 | 70.81 | 78.39
K2  | R2  | QDA | 85.83 | 72.90 | 84.03 | 70.00
K2  | R3  | NB  | 85.97 | 80.00 | 83.39 | 75.81
K2  | R4  | GPC | 86.94 | 85.29 | 86.16 | 83.23
K2  | R5  | SVM | 85.16 | 84.35 | 84.68 | 77.42
K2  | R6  | ANN | 80.81 | 78.71 | 80.81 | 82.10
K2  | R7  | LR  | 57.74 | 57.42 | 66.77 | 46.94
K2  | R8  | DT  | 71.29 | 72.26 | 74.19 | 72.58
K2  | R9  | AB  | 79.35 | 78.87 | 82.26 | 81.45
K2  | R10 | RFa | 88.06 | 86.26 | 90.68 | 85.65
K10 | R1  | LDA | 74.58 | 64.17 | 80.00 | 75.00
K10 | R2  | QDA | 87.50 | 79.17 | 85.42 | 77.50
K10 | R3  | NB  | 85.42 | 78.75 | 82.50 | 77.50
K10 | R4  | GPC | 89.29 | 86.58 | 86.25 | 83.58
K10 | R5  | SVM | 84.17 | 85.42 | 87.92 | 78.33
K10 | R6  | ANN | 76.25 | 78.75 | 82.08 | 78.33
K10 | R7  | LR  | 71.67 | 64.58 | 74.69 | 54.37
K10 | R8  | DT  | 75.83 | 80.00 | 79.58 | 77.92
K10 | R9  | AB  | 74.58 | 82.92 | 79.58 | 82.50
K10 | R10 | RFa | 95.50 | 89.83 | 92.92 | 89.33
JK  | R1  | LDA | 95.34 | 91.31 | 89.36 | 96.54
JK  | R2  | QDA | 84.60 | 67.82 | 80.72 | 74.14
JK  | R3  | NB  | 84.05 | 80.67 | 84.08 | 83.92
JK  | R4  | GPC | 92.03 | 92.26 | 91.96 | 94.48
JK  | R5  | SVM | 90.66 | 91.75 | 89.00 | 90.48
JK  | R6  | ANN | 89.40 | 82.90 | 92.10 | 90.46
JK  | R7  | LR  | 91.78 | 89.98 | 90.37 | 88.76
JK  | R8  | DT  | 94.85 | 91.70 | 94.85 | 96.41
JK  | R9  | AB  | 95.76 | 95.86 | 94.95 | 95.73
JK  | R10 | RFa | 99.81 | 99.72 | 99.50 | 99.74

CT, classifier type; JK, jackknife; K2, two-fold cross-validation; K10, 10-fold cross-validation; KW, Kruskal–Wallis; PT, protocol type; SN, serial number; WCSRS, Wilcoxon sign rank sum.
a Bold and shaded values indicate the result of the proposed method.

also observed that the NB-based classifier gives the lowest classification accuracy, followed by QDA. All the other classification parameters, i.e., SE, SP, PPV, NPV, and AUC, of the WCSRS- and RF-based combination systems are shown in Table 14.3. Also, the best results of the WCSRS-based RF-


Table 14.3 Six performance evaluation parameters using all 10 classifiers for the Wilcoxon sign rank sum test.

PT  | SN  | CT  | SE (%) | SP (%) | PPV (%) | NPV (%) | FM (%) | AUC (%)
K2  | R1  | LDA | 65.56 | 61.29 | 76.95 | 49.12 | 69.75 | 67.42
K2  | R2  | QDA | 88.72 | 84.99 | 88.94 | 81.77 | 87.84 | 88.79
K2  | R3  | NB  | 83.78 | 91.18 | 94.59 | 73.94 | 88.58 | 92.12
K2  | R4  | GPC | 90.83 | 80.15 | 89.23 | 82.23 | 89.91 | 87.34
K2  | R5  | SVM | 90.09 | 77.09 | 87.21 | 81.80 | 88.45 | 47.36
K2  | R6  | ANN | 84.45 | 75.40 | 84.94 | 74.40 | 84.30 | 88.54
K2  | R7  | LR  | 60.84 | 52.42 | 69.98 | 43.35 | 64.06 | 58.03
K2  | R8  | DT  | 76.81 | 63.79 | 79.80 | 66.19 | 76.40 | 75.80
K2  | R9  | AB  | 85.47 | 71.46 | 84.03 | 75.80 | 83.84 | 89.13
K2  | R10 | RFa | 92.78 | 80.08 | 89.88 | 85.75 | 91.03 | 93.00
K10 | R1  | LDA | 82.74 | 60.58 | 81.23 | 61.00 | 81.08 | 82.02
K10 | R2  | QDA | 91.17 | 84.32 | 88.55 | 85.54 | 89.18 | 90.98
K10 | R3  | NB  | 82.62 | 89.42 | 92.07 | 73.04 | 87.88 | 94.14
K10 | R4  | GPC | 91.92 | 82.32 | 90.06 | 87.82 | 91.68 | 88.73
K10 | R5  | SVM | 86.08 | 81.25 | 90.68 | 72.65 | 87.85 | 36.02
K10 | R6  | ANN | 80.99 | 68.93 | 81.26 | 66.86 | 80.46 | 85.53
K10 | R7  | LR  | 77.33 | 64.83 | 78.70 | 62.65 | 76.80 | 75.34
K10 | R8  | DT  | 82.82 | 64.60 | 81.43 | 68.81 | 80.61 | 75.53
K10 | R9  | AB  | 82.98 | 62.60 | 74.44 | 76.40 | 76.58 | 82.66
K10 | R10 | RFa | 93.08 | 87.29 | 92.94 | 84.00 | 92.22 | 97.78
JK  | R1  | LDA | 95.44 | 95.16 | 97.31 | 92.05 | 96.36 | 99.51
JK  | R2  | QDA | 89.11 | 76.39 | 87.29 | 79.46 | 88.19 | 89.57
JK  | R3  | NB  | 80.28 | 90.91 | 94.14 | 71.73 | 86.66 | 93.99
JK  | R4  | GPC | 92.98 | 90.30 | 94.59 | 87.65 | 93.77 | 94.33
JK  | R5  | SVM | 92.50 | 87.32 | 93.00 | 86.49 | 92.75 | 95.81
JK  | R6  | ANN | 93.22 | 82.46 | 90.75 | 87.39 | 91.90 | 95.18
JK  | R7  | LR  | 89.96 | 95.09 | 97.11 | 83.95 | 93.39 | 96.20
JK  | R8  | DT  | 97.22 | 90.54 | 94.94 | 94.78 | 96.05 | 98.88
JK  | R9  | AB  | 94.60 | 97.87 | 98.80 | 90.95 | 96.64 | 99.47
JK  | R10 | RFa | 99.84 | 99.75 | 99.87 | 99.72 | 99.85 | 99.95

ACC, accuracy; AUC, area under the curve; CT, classifier type; FM, F-measure; NPV, negative predictive value; PPV, positive predictive value; PT, protocol type; SE, sensitivity; SP, specificity.
a Bold and shaded values indicate the result of the proposed method.

based classifier among the 40 combination systems are validated using the AUC (see column 7 of Table 14.3). Appendix A shows the performance evaluation parameters of all classifiers for the t-test (Table A.5), the KW test (Table A.6), and the F-test (Table A.7).


FIGURE 14.11 Comparison of all classifiers based on accuracy with dominant genes for the Wilcoxon sign rank sum statistical test with P-value <.0001. Black arrow indicates the result of the proposed method.

14.5.4 Results of experiment 4: Effect of dominant genes Experiment 3 shows that the combination of the WCSRS-based statistical test and the RF-based classifier is the best. This section presents the effect of dominant genes on the classification accuracy for the WCSRS-RF-based combination system (Fig. 14.11). It indicates that the classification accuracy of all classifiers increases with the number of genes. So we conclude that the RF-based classifier is the best compared to the others.

14.5.5 Results of experiment 5: Effect of data size on memorization versus generalization This experiment shows the effect of an increase in data size on the classification accuracy of the ML system. We divide the actual data size (n) into 10 parts by increasing the training data size (in %) as 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, and 100%; the corresponding 10 data sets consist of 6, 12, 19, 25, 31, 37, 43, 50, 56, and 62 patients. Each time we take a training data set (a combination of cancer patients and control patients), we compute the accuracy of the ML system. During this protocol, we find the classification accuracy (on the y-axis) with changing data size (on the x-axis). Typically, with an increase in data size, the accuracy increases, demonstrating the ML system's ability to learn (so-called memorization). When the accuracy flattens or starts to fall slightly, that cutoff point is considered the point of generalization. Fig. 14.12


FIGURE 14.12 Effect of varying data size: accuracy versus data size (n) for all classifier systems. Arrow indicates the result of the proposed method.

shows the change in percentage accuracy with the change in data size. It is observed that generalization is reached at a cutoff of 50% of the cohort (patient pool of 32 patients); that is, 50% of the patients are needed to achieve generalization. Fig. 14.12 also shows that the classification accuracy of all classifiers increases with an increase in data size, and the best performance is obtained using the RF-based classifier. The system accuracy is computed by averaging the classification accuracies over all data sizes for the three partition protocols (K2, K10, and JK) (Table 14.4). Table 14.4 confirms that the RF-based classifier system gives the highest classification accuracy for all protocols compared to the other classifiers.
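A sketch of reading off such a generalization cutoff programmatically: the smallest data size at which accuracy stops improving by more than a tolerance. The sizes, accuracies, and tolerance below are illustrative values, not the curves of Fig. 14.12.

```python
# Sketch of reading off a generalization cutoff from an accuracy-versus-
# data-size sweep: the smallest size at which accuracy stops improving
# by more than a tolerance. Sizes, accuracies, and tolerance are
# illustrative values, not the curves of Fig. 14.12.
def generalization_cutoff(sizes, accuracies, tol=0.5):
    for i in range(1, len(sizes)):
        if accuracies[i] - accuracies[i - 1] <= tol:
            return sizes[i]  # accuracy has flattened (or fallen)
    return sizes[-1]
```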

14.6 Performance evaluation and hypothesis validations 14.6.1 Gene separation index Our study hypothesized that the gene separation index (nGSI) can depict the segregation power of the genes. The nGSI is mathematically represented as

nGSI = |F_n − F_d|   (14.13)

where G is the total number of genes, and F_n and F_d are the mean values over the cancer patients and the controls, respectively. The nGSI enables us to distinguish the genes of a particular class. The relationship between the nGSI, the P-values, and accuracy is demonstrated in Table 14.5, and the effect of the P-value on the nGSI is illustrated in Fig. 14.13.
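Eq. (14.13) amounts to an absolute difference of group means; a sketch with invented expression values:

```python
# Eq. (14.13) amounts to an absolute difference of group means. The
# expression values below are invented for illustration.
def ngsi(cancer_values, control_values):
    mean = lambda values: sum(values) / len(values)
    return abs(mean(cancer_values) - mean(control_values))
```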


Table 14.4 System mean accuracy of all classifiers for different partition protocols (K2, K10, and JK) over 20 trials (T).

PT  | LDA   | QDA   | NB    | GPC   | SVM   | ANN   | LR    | DT    | AB    | RFa
K2  | 68.63 | 78.19 | 81.29 | 85.41 | 82.90 | 57.22 | 72.58 | 72.58 | 80.48 | 87.66
K10 | 73.44 | 82.40 | 86.43 | 78.85 | 66.33 | 78.33 | 79.90 | 91.90 | 93.14 | 93.14
JK  | 98.68 | 83.95 | 87.60 | 95.25 | 94.14 | 94.89 | 97.18 | 93.65 | 93.35 | 99.77

AB, Adaboost; ANN, artificial neural network; DT, decision tree; GPC, Gaussian process classification; LDA, linear discriminant analysis; LR, logistic regression; NB, naive Bayes; PT, protocol type; QDA, quadratic discriminant analysis; RF, random forest; SVM, support vector machine.
a Bold and shaded values indicate the result of the proposed method.

Table 14.5 Interrelationship between nGSI and system mean accuracy.

P-value | Cancer           | Control          | nGSI   | Accuracy
.05     | 580.50 ± 386.42  | 456.66 ± 279.65  | 123.84 | 89.84
.01     | 644.28 ± 439.74  | 512.32 ± 315.62  | 131.96 | 90.70
.001    | 775.04 ± 497.78  | 614.40 ± 371.92  | 160.64 | 90.80
.0001   | 1314.90 ± 756.62 | 1018.71 ± 537.11 | 296.19 | 91.83

nGSI, number of gene separation index.

FIGURE 14.13 Gene separation index (nGSI) versus the cutoff point of P-values.

14.6.2 Interrelationship between nGSI and classification accuracy Our experiment uses the statistical tests along with their P-value and traces the improvement in classification accuracy based on the gene separation index. The interrelationship between nGSI and classification accuracy is presented in Table 14.5. It is

14.6 Performance evaluation and hypothesis validations

observed that as the P-value decreases, both the nGSI and the classification accuracy increase.

14.6.3 Reliability index The performance of the ML system was evaluated using the system RI. The RI is computed as follows:

ζ_{n_i} = (1 − σ_{n_i} / μ_{n_i}) × 100   (14.14)

where σ_{n_i} and μ_{n_i} represent the standard deviation and mean of all accuracies for all combinations of P-values with the four statistical tests, viz. the WCSRS test, t-test, KW test, and F-test. The system RI has been computed by averaging the RIs over the data sizes, as shown in Fig. 14.14, which gives the value of the RI for all four statistical tests and all 10 classifiers using the three protocols and confirms that the performance of the RF-based classifier was the best. The corresponding data are shown in Table 14.6.
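A sketch of Eq. (14.14) using the population standard deviation over a list of (illustrative) accuracies:

```python
import math

# Sketch of Eq. (14.14): the reliability index penalizes the spread of
# the accuracies, RI = (1 - sigma/mu) * 100. The population standard
# deviation is used here; the accuracy list is illustrative.
def reliability_index(accuracies):
    mu = sum(accuracies) / len(accuracies)
    var = sum((a - mu) ** 2 for a in accuracies) / len(accuracies)
    return (1.0 - math.sqrt(var) / mu) * 100.0
```

Identical accuracies give an RI of 100; the more the accuracies scatter around their mean, the lower the RI.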

14.6.4 Receiver operating curve analysis After choosing the most suitable kernels, that is, RBF for the SVM and Poly-2 for the GP-based classifier, we computed the AUC of the SVM and GPC along with all other classifiers for the JK protocol. The results in Appendix A (Tables A.5–A.7) and Table 14.3 show the means of SE, SP, PPV, NPV, FM, and AUC for the K2, K10, and JK protocols. Column C7 of Tables A.5–A.7 shows the AUC for all classifiers using the t-test, KW test, and F-test, and column C7 of Table 14.3 shows the AUC for the WCSRS test. Tables 14.3 and A.6 show that the WCSRS-RF and KW-RF combination systems give the same AUC (99.95%); similarly, Tables A.6 and A.7 show that the KW-RF and F-RF combination systems give the same AUC (99.96%). Therefore the mean AUC for the four statistical tests with the RF-based classifier is almost unity; that is, the highest AUCs were observed for the RF-based classifier, which further proves our hypothesis.

14.6.5 Validation of proposed methods For validation of the proposed method, we have used a breast cancer data set [48]. We applied the four statistical tests and 10 classifiers, and our results show that the RF-based classifier gives the highest classification accuracy (see Table 14.7). Therefore our proposed work is validated for both the colon cancer data set and the breast cancer data set, as shown in Table 14.7.


FIGURE 14.14 Reliability index versus data size (n) for 10 classifiers for each statistical test: (A) Wilcoxon sign rank sum test, (B) t-test, (C) KW test, and (D) F-test. Black arrow indicates the result of the proposed method.

14.7 Discussion This study presented a unique ML-based risk stratification system to classify cancer patients. It experimentally performed, demonstrated, and validated the concept of merging statistical tests for cancer gene selection with ML-based algorithms for cancer patient classification using SGs. A total of 40 combination systems were designed by the cross combination of four statistical tests (WCSRS test, t-test, KW test, and F-test) and 10 classifiers (LDA, QDA, NB, GPC, SVM, ANN, LR, DT, AB, and RF) for every partition protocol. Since there were three partition protocols, there was a total set of 120 experiments. The performance of each combination of FST and classifier was evaluated using classification accuracy. In the first stage, the most informative genes were selected with the help of the four statistical tests with P-value <.0001.

Table 14.6 System reliability index for the combination of 10 classifiers and four statistical tests.

PT  | SN | ST    | LDA   | QDA   | NB    | GPC   | SVM   | ANN   | LR    | DT    | AB    | RFa
K2  | R1 | WCSRS | 98.91 | 97.97 | 97.33 | 99.29 | 98.64 | 97.48 | 98.55 | 99.33 | 99.12 | 99.43
K2  | R2 | t     | 98.37 | 97.97 | 98.74 | 96.94 | 98.04 | 98.42 | 98.86 | 98.79 | 98.97 | 99.11
K2  | R3 | KW    | 98.83 | 97.87 | 99.45 | 98.25 | 99.46 | 98.55 | 97.88 | 99.01 | 99.04 | 99.37
K2  | R4 | F     | 98.79 | 97.97 | 98.47 | 97.25 | 99.11 | 98.86 | 98.03 | 98.86 | 98.97 | 99.30
K10 | R1 | WCSRS | 99.27 | 96.58 | 99.59 | 99.29 | 99.05 | 98.31 | 99.33 | 98.22 | 99.02 | 99.40
K10 | R2 | t     | 98.37 | 97.97 | 98.74 | 96.94 | 98.04 | 98.42 | 98.86 | 98.79 | 98.97 | 99.11
K10 | R3 | KW    | 98.83 | 97.87 | 99.45 | 97.80 | 99.46 | 98.55 | 97.88 | 99.01 | 99.04 | 99.37
K10 | R4 | F     | 98.79 | 97.97 | 98.47 | 98.25 | 99.11 | 98.86 | 98.03 | 98.86 | 98.97 | 99.30
JK  | R1 | WCSRS | 98.67 | 97.55 | 98.26 | 98.66 | 98.67 | 98.29 | 98.99 | 98.91 | 98.55 | 99.46
JK  | R2 | t     | 98.17 | 96.50 | 98.05 | 98.25 | 98.11 | 98.63 | 98.40 | 98.30 | 98.60 | 99.09
JK  | R3 | KW    | 98.22 | 94.30 | 98.80 | 99.31 | 98.60 | 98.78 | 98.42 | 98.50 | 98.80 | 99.05
JK  | R4 | F     | 97.83 | 95.20 | 98.09 | 98.17 | 98.30 | 98.72 | 98.89 | 98.68 | 98.91 | 98.71

AB, Adaboost; ANN, artificial neural network; DT, decision tree; GPC, Gaussian process classification; KW, Kruskal–Wallis; LDA, linear discriminant analysis; LR, logistic regression; NB, naive Bayes; PT, protocol type; QDA, quadratic discriminant analysis; RF, random forest; ST, statistical test; SVM, support vector machine; WCSRS, Wilcoxon sign rank sum.
a Bold and shaded values indicate the result of the proposed method.


Table 14.7 Validation of our proposed work for the breast cancer data using the K10 protocol.

SN  | CT  | WCSRS test | t-test | KW test | F-test
R1  | LDA | 72.41 | 70.69 | 71.55 | 70.69
R2  | QDA | 62.07 | 59.48 | 58.62 | 59.48
R3  | NB  | 62.93 | 62.07 | 61.21 | 62.07
R4  | GPC | 81.76 | 77.63 | 80.13 | 76.81
R5  | SVM | 80.17 | 71.55 | 77.59 | 71.55
R6  | ANN | 73.66 | 74.91 | 79.96 | 73.66
R7  | LR  | 74.14 | 74.14 | 74.40 | 74.14
R8  | DT  | 82.76 | 80.17 | 81.90 | 80.17
R9  | AB  | 87.07 | 87.93 | 89.66 | 87.93
R10 | RFa | 95.25 | 94.31 | 94.40 | 93.79

AB, Adaboost; ANN, artificial neural network; CT, classifier type; DT, decision tree; GPC, Gaussian process classification; KW, Kruskal–Wallis; LDA, linear discriminant analysis; LR, logistic regression; NB, naive Bayes; QDA, quadratic discriminant analysis; RF, random forest; SVM, support vector machine; WCSRS, Wilcoxon sign rank sum.
a Bold and shaded values indicate the result of the proposed method.

In the second stage, for all 40 combination systems we performed two experiments: (1) keeping the data size fixed (n = 62) and (2) varying the data size over n = 6, 12, 19, 25, 31, 37, 43, 50, 56, and 62 patients for generalizing the classification; this process was repeated for 20 trials (T = 20) to avoid any bias in the results and further reduce the variability. The performance parameters of all classifiers were compared on the basis of ACC, SE, SP, PPV, NPV, and FM for both sets of experiments. The performance of the ML system was validated using three sets of unique measures, and the combination of the WCSRS statistical test with the RF-based classifier was better than all others. A benchmarking of the proposed system against previous work was also explored, which is presented in the next section.

14.7.1 Benchmarking different machine learning systems There are various studies in the literature on the diagnosis and classification of cancer patients. Table 14.8 compares our proposed method against the previous methods in the literature. The columns (C1–C11) of Table 14.8 are: C1: Authors; C2: Year; C3: Data size (DS); C4: Class; C5: FST; C6: SGs; C7: Classifier types (CTs); C8: PTs; C9: Performance measures (PMs) in percentages; C10: Performance validations (PVs); and C11: RI. The rows of Table 14.8 are indexed by serial number (SN). Kumar et al. [49] applied the gravitational search algorithm (GSA), binary-coded genetic algorithm (BCGA), real-coded genetic algorithm (RCGA), and particle swarm optimization (PSO) to the colon cancer microarray data set. The data set

Table 14.8 Comparisons of our proposed method to previous methods in the literature.

SN | Authors            | Year | DS             | Class                   | FST             | SG             | CT                                            | PT          | PM (%)                                                                                  | PV  | RI
R1 | Kumar et al. [49]  | 2010 | 62             | Cancer: 40, Control: 22 | NA              | All            | GSA, BCGA, RCGA, PSO                          | JK          | ACC: 58.70                                                                              | No  | No
R2 | Shen et al. [50]   | 2008 | 62             | Cancer: 40, Control: 22 | NA              | All            | HPSOTS, PTS, PPSO                             | NA          | ACC: 89.55                                                                              | No  | No
R3 | Alladi et al. [51] | 2015 | 62             | Cancer: 40, Control: 22 | t-test          | 10             | LR, NN, SVM                                   | K10         | ACC: 85.80                                                                              | No  | No
R4 | Vanitha et al. [26]| 2015 | 62             | Cancer: 40, Control: 22 | MI              | 3              | KNN, NN, SVM                                  | JK          | ACC: 74.19                                                                              | No  | No
R5 | Sun et al. [52]    | 2006 | 62             | Cancer: 40, Control: 22 | DWT             | All            | PNN                                           | JK          | ACC: 92.00                                                                              | No  | No
R6 | Chen et al. [53]   | 2007 | 62             | Cancer: 40, Control: 22 | MK-SVM          | 11             | MK-SVM II                                     | JK          | ACC: 93.50                                                                              | No  | No
R7 | Liu and Gao [54]   | 2018 | 205, 217, 1068 | NA                      | Entropy         | NA             | SVM                                           | JK          | AUC: 95.11, 96.00, 96.00                                                                | No  | No
R8 | Proposed method    | 2019 | 62             | Cancer: 40, Control: 22 | t, KW, F, WCSRS | 33, 22, 133, 27| LDA, QDA, NB, GPC, LR, SVM, ANN, DT, AB, RFa  | K2, K10, JK | ACC: 99.81; SE: 99.84; SP: 99.75; PPV: 99.87; NPV: 99.72; FM: 99.85; AUC: 99.95         | Yes | Yes

BCGA, binary-coded genetic algorithm; DS, data size; DWT, discrete wavelet transform; FST, feature selection technique; GSA, gravitational search algorithm; HPSOTS, hybrid particle swarm optimization; KNN, K-nearest neighbors; MI, mutual information; MK-SVM, multiple-kernel support vector machine; NA, not available; PM, performance measure; PNN, probabilistic neural network; PPSO, pure particle swarm optimization; PSO, particle swarm optimization; PT, protocol type; PTS, pure tabu search; PV, performance validation; RCGA, real-coded genetic algorithm; RI, reliability index; SG, selected genes; WCSRS, Wilcoxon sign rank sum.
a Bold and shaded values indicate the best results of the proposed method.


CHAPTER 14 Statistical characterization and classification

consisted of 2000 genes from 62 patients (40 cancers and 22 controls). They did not use any FST to select the most informative genes. They applied GSA, BCGA, RCGA, and PSO to classify the cancer patients and demonstrated that GSA gave the highest classification accuracy of 58.70%. Shen et al. [50] used hybrid particle swarm optimization and tabu search (HPSOTS), pure tabu search (PTS), and pure particle swarm optimization (PPSO) to classify cancer patients. The HPSOTS classifier achieved the maximum accuracy of 89.55% among all classifiers in that study. Alladi et al. [51] applied the t-test to identify the most significant biomarkers of cancer disease based on the P-values (P < .05). They selected only 10 biomarkers out of 2000 on the basis of P-values less than .05. They used 80% of the data set for training and the rest for testing. They applied three ML techniques, namely LR, NN, and SVM with a radial basis kernel, and showed that the SVM with radial basis kernel gave the highest classification accuracy among them. Vanitha et al. [26] used mutual information (MI) for feature selection and applied three ML techniques, K-nearest neighbors (KNN), NN, and SVM, to classify cancer patients. They identified the three most significant biomarkers for cancer based on MI. The SVM with linear kernel gave the highest accuracy of 74.19% compared to the others. Sun and his colleagues used discrete wavelet transformation to reduce the dimension of the feature space, classified the colon cancer data set with a probabilistic NN (PNN), and achieved the highest accuracy of 92.00% [52]. Chen et al. [53] used a multiple kernel support vector machine (MK-SVM) as both FST and classifier. The authors selected 11 DE genes using MK-SVM and applied MK-SVM to classify cancer patients, achieving 93.50% classification accuracy. Liu and Gao [54] studied the detection of pathway biomarkers of diabetic progression with differential entropy.
They used three types of pathway data sets: KEGG, Biocarta, and Reactome. The KEGG data set has 205 rat genes, Biocarta 217 genes, and Reactome 1068 genes. They detected the most significant genes/biomarkers using pathway entropy based on P-values (P < .05). They identified 190 genes out of 205 from KEGG, 197 genes from Biocarta, and 644 from Reactome. They used the JK CV protocol and applied SVM as the classifier for biomarker screening. The performance of the SVM classifiers was evaluated using the AUC computed from the receiver operating characteristic curve. The AUC of the SVM was 95.10 for the KEGG data set, 96.00 for Biocarta, and 96.00 for Reactome. They also used a GEO data set to validate the proposed methods. Table 14.8 confirms that our proposed WCSRS-RF-based approach can identify and diagnose cancer with an accuracy of 99.81%, the highest accuracy compared to all previous studies.
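The two-stage pipeline discussed above, ranking genes by a statistical test's P-values and then classifying on the surviving genes, can be sketched on synthetic data. This is an illustrative outline, not the authors' code: the matrix below is a randomly generated stand-in for the 62 × 2000 colon data, and SciPy's `ranksums` stands in for the WCSRS test.

```python
import numpy as np
from scipy.stats import ranksums
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for the 62 x 2000 colon matrix: 40 "cancer", 22 "control".
X = rng.normal(size=(62, 2000))
y = np.array([1] * 40 + [0] * 22)
X[y == 1, :30] += 1.5          # plant 30 informative "genes"

# Step 1: rank genes by Wilcoxon rank-sum P-value, keep P < .0001.
pvals = np.array([ranksums(X[y == 1, g], X[y == 0, g]).pvalue
                  for g in range(X.shape[1])])
selected = np.where(pvals < 1e-4)[0]

# Step 2: classify on the selected genes with a 500-tree random forest,
# scored under the K10 protocol (10-fold cross-validation).
rf = RandomForestClassifier(n_estimators=500, random_state=0)
acc = cross_val_score(rf, X[:, selected], y, cv=10).mean()
```

With the planted signal, the selected genes fall almost entirely among the 30 informative columns, mirroring how a small P-value cutoff isolates high-risk genes before classification.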

14.7.2 A note on the intercomparison of classifiers

One of the main objectives of this study was to compare the performances of the combinations of four statistical tests and 10 classifiers, leading to 40 cross-combinations for each partition protocol. A thorough investigation was made to choose the best kernel for the SVM- and GP-based classifiers based on the classification


accuracy. Three types of kernel were adopted for the study, viz., linear, Poly-2, and RBF. Fig. 14.9A–C shows the comparison of the three kernel types using three partition protocols (K2, K10, and JK) for the SVM- and GP-based classifiers. It was clearly observed that the GP-based classifier with the Poly-2 kernel and the SVM classifier with the RBF kernel gave the highest classification accuracies (GPC (Poly-2): 92.48% and SVM (RBF): 91.41%) compared to the others. Table 14.1 shows the mean accuracy for the WCSRS test over the three partition protocols (K2, K10, and JK) and different cutoff points of the P-values. The RF-based classifier gave the highest classification accuracy for all protocols using the four kinds of P-values. Fig. 14.10 shows the relationship between the P-value cutoff point and the number of significant genes. The plot indicates that as the cutoff point decreases, fewer but more significant genes are selected that differentiate the cancerous versus control pools. Table 14.2 shows the performance of the four statistical tests with 10 classifiers using three protocols (120 readings) based on classification accuracy with fixed data size. The WCSRS-RF-based combination system using the JK protocol gave the highest classification accuracy of 99.81%, as expected. Fig. 14.11 indicates that the classification accuracy improved with an increasing number of genes. It was also observed that the RF-based classifier gave the highest classification accuracy compared to the others. Further, the six performance evaluation parameters of the 10 classifiers for the WCSRS test are shown in Table 14.3. Our results indicate that the WCSRS-RF-based combination gave the highest SE, SP, and FM along with AUC. The performances of all classifiers while varying the data size are presented in Fig. 14.12: the accuracy of all classifiers improved with an increase in the data size.
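The kernel comparison described above can be reproduced in outline with scikit-learn. The data here are synthetic (`make_classification` is a stand-in for the gene matrix), so the scores will not match Table A.1; for GPC, a polynomial-of-order-two kernel is built as `DotProduct() ** 2`, which is one way, not necessarily the authors' way, of expressing a Poly-2 kernel.

```python
from sklearn.datasets import make_classification
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF, DotProduct
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Small synthetic stand-in for the 62-patient data set.
X, y = make_classification(n_samples=62, n_features=40, n_informative=10,
                           random_state=1)

models = {
    "SVM (linear)": SVC(kernel="linear"),
    "SVM (Poly-2)": SVC(kernel="poly", degree=2),
    "SVM (RBF)":    SVC(kernel="rbf"),
    "GPC (RBF)":    GaussianProcessClassifier(kernel=RBF()),
    "GPC (Poly-2)": GaussianProcessClassifier(kernel=DotProduct() ** 2),
}
# Mean accuracy per kernel under the K10 protocol (10-fold cross-validation).
scores = {name: cross_val_score(m, X, y, cv=10).mean()
          for name, m in models.items()}
```

Repeating the same loop with `cv=2` (K2) and leave-one-out (JK) splitters reproduces the full protocol grid of Table A.1.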
In this experiment, RI was introduced as a validation of the performance of all classifiers and all statistical tests with varying data size. We also showed that our systems were reliable (see Fig. 14.14). The system accuracy and RI were calculated for all 40 systems with three partition protocols (K2, K10, and JK) and are presented in Tables 14.4 and 14.5. The highest system accuracy and RI obtained for the JK protocol were 99.77% and 99.46%, respectively, for the WCSRS-based test and RF-based classifier combination system. The best performance was achieved by the RF-based classifier, followed by LDA, QDA, NB, GPC, SVM, ANN, LR, DT, and AB, respectively. Among all the statistical tests, the best performance was obtained by WCSRS, followed by the t-test, F-test, and KW test. Finally, we conclude that the combination of the WCSRS test and the RF-based classifier, that is, WCSRS-RF, produced the best performance in all of our experiments. This is because the WCSRS statistical test with the lowest P-values (P < .0001) can identify the high-risk genes, and these genes are then used to classify cancer patients with an RF-based classifier with an increased number of trees (n = 500). As a result, the WCSRS-RF combination performs better compared to the others.
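The exact RI formula is not reproduced in this excerpt. One plausible form, consistent with the idea that stable accuracy across runs implies high reliability, penalizes the relative spread of the accuracies; the definition below is therefore an assumption for illustration only, not the chapter's own equation.

```python
import numpy as np

def reliability_index(accuracies):
    # Hypothetical RI form: 100% minus the relative spread of the
    # accuracies, so nearly constant accuracies give an RI close to 100.
    a = np.asarray(accuracies, dtype=float)
    return (1.0 - a.std() / a.mean()) * 100.0

# RF accuracies under JK from Table A.2 (very stable across cutoffs)
# versus a deliberately volatile set of accuracies.
ri_rf = reliability_index([99.74, 99.79, 99.77, 99.72])
ri_volatile = reliability_index([60.0, 90.0, 75.0, 85.0])
```

Under this sketch the RF accuracies yield an RI close to 100%, while the volatile classifier scores markedly lower, matching the qualitative ordering reported in Tables 14.4 and 14.5.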

14.7.3 Strengths, weaknesses, and extensions

This study presented a risk stratification system to accurately classify cancer disease. The data set consisted of 62 patients with two classes: cancer and control.


Our study showed that the WCSRS-based statistical test with the RF-based classifier gives the best classification accuracy along with higher FM, AUC, and RI. As an extension of our existing system, penalized SVM (PSVM) and signaling pathways as a constraint may be adopted as the FST to obtain the most significant features. Even though the scope of the current study was based on the ML paradigm, demonstrating its role in detection and classification, one can extend this work by applying the deep learning (DL) paradigm and boosting tree methods to microarray gene expression data and comparing them with our current study.

14.8 Conclusion

This study presented an exhaustive evaluation of ML systems for the classification of colon cancer gene expression data, with two major components: (1) identification of high-risk differential gene expression using statistical tests and (2) development of an ML strategy for predicting the cancerous genes. Four statistical tests, the WCSRS test, t-test, KW test, and F-test, were adopted for cancerous gene identification using P-values. Further, 10 ML systems were designed using 10 different classifiers: LDA, QDA, NB, GPC, SVM, ANN, LR, DT, AB, and RF. The overall mean accuracy of the ML systems using all four tests and all 10 classifiers was 90.50%. The highest classification accuracy of 99.81% was obtained by combining the WCSRS test with the RF-based classifier, an improvement of 8% over previously published data in the literature. The RF-based model with statistical tests for the detection of high-risk cancerous genes showed the best performance for accurate cancer gene classification in multicenter clinical trials.

14.9 Acknowledgments

The authors thank Dr. Suri's team for proofreading the manuscript. The authors gratefully acknowledge the contribution of the Statistics Discipline, Science, Engineering and Technology School, Khulna University, Khulna 9208, Bangladesh. The authors also thank the editor and referees for their comments and constructive critique.

14.10 Ethical approvals

No ethics approval is required for this data set.

14.11 Funding

No funding was received for this project.


14.12 Conflict of interest

None.

14.13 Authors' contributions

Md. Maniruzzaman: statistical analysis and drafting of the original manuscript; Md. Jahanur Rahman: information about the technology; Benojir Ahammed: acquisition of data and methodology; Md. Menhazul Abedin: data preprocessing and interpretation; Harman S. Suri: English writing and strategy; Mainak Biswas: manuscript design; Ayman El-Baz: ML concepts and design; Petros Bangeas: English writing and strategy; Georgios Tsoulfas: clinical contributions and application; and Jasjit S. Suri: principal investigator and management of the project.

Appendix A

This section demonstrates the optimization of kernels on two machine learning classifiers, Gaussian process classification (GPC) and the support vector machine (SVM), using three types of kernel: linear, polynomial of order two (Poly-2), and radial basis function (RBF). It also demonstrates the performance of the different classifiers as the statistical test is changed. Ten classifiers are compared in each of the tables below (Tables A.1–A.7); each table corresponds to a different statistical test. The 10 classifiers were linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), naive Bayes (NB), GPC, SVM, artificial neural network (ANN), logistic regression (LR), decision tree (DT), Adaboost (AB), and random forest (RF). The performances of these classifiers are evaluated using accuracy (ACC), sensitivity (SE), specificity (SP), positive predictive value (PPV), negative predictive value (NPV), F-measure (FM), and area under the curve (AUC).

Table A.1 Comparison of mean accuracy for three different kernels using three partition protocols.

| PT | SVM: Linear | GPC: Linear | SVM: RBFa | GPC: RBF | SVM: Poly-2 | GPC: Poly-2a |
| --- | --- | --- | --- | --- | --- | --- |
| K2 | 80.99 ± 6.04 | 81.79 ± 3.84 | 85.75 ± 13.21 | 84.31 ± 5.25 | 82.50 ± 6.25 | 91.11 ± 6.50 |
| K10 | 84.16 ± 3.68 | 86.85 ± 4.49 | 86.53 ± 4.14 | 85.43 ± 4.61 | 83.54 ± 13.95 | 92.27 ± 9.20 |
| JK | 86.06 ± 12.98 | 90.25 ± 6.29 | 87.03 ± 13.04 | 91.41 ± 6.82 | 83.94 ± 13.85 | 92.48 ± 9.09 |

GPC, Gaussian process classification; PT, protocol type; SVM, support vector machine. Mean accuracy is expressed as accuracy ± standard deviation. a Bold and shaded values indicate the selected kernels.


Table A.2 Change in mean accuracy of all classifiers for different P-values of the t-test.

| PT | SN | P-value | # of genes | LDA | QDA | NB | GPC | SVM | ANN | LR | DT | AB | RFa |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| K2 | R1 | .05 | 478 | 81.45 | 59.03 | 71.94 | 79.35 | 79.84 | 77.58 | 72.74 | 69.67 | 76.29 | 83.71 |
| K2 | R2 | .01 | 246 | 82.90 | 58.71 | 70.97 | 80.71 | 81.77 | 77.09 | 75.80 | 71.29 | 78.38 | 83.90 |
| K2 | R3 | .001 | 94 | 79.35 | 58.06 | 77.58 | 81.45 | 82.42 | 78.87 | 70.32 | 75.00 | 78.06 | 84.19 |
| K2 | R4 | .0001 | 33 | 58.71 | 66.29 | 76.13 | 82.90 | 80.00 | 76.45 | 75.48 | 73.39 | 81.61 | 87.42 |
| K10 | R1 | .05 | 478 | 73.33 | 63.34 | 70.00 | 85.00 | 86.67 | 79.99 | 79.16 | 79.17 | 82.08 | 87.52 |
| K10 | R2 | .01 | 246 | 75.83 | 63.34 | 77.50 | 88.33 | 85.83 | 71.84 | 70.83 | 77.50 | 80.00 | 88.33 |
| K10 | R3 | .001 | 94 | 78.33 | 59.17 | 72.50 | 87.50 | 85.83 | 74.21 | 74.16 | 75.83 | 80.00 | 91.67 |
| K10 | R4 | .0001 | 33 | 77.50 | 74.16 | 86.67 | 81.66 | 68.42 | 70.42 | 70.50 | 81.67 | 85.00 | 92.32 |
| JK | R1 | .05 | 478 | 96.54 | 65.95 | 71.20 | 93.47 | 93.42 | 73.10 | 90.22 | 90.66 | 96.43 | 99.74 |
| JK | R2 | .01 | 246 | 96.35 | 63.86 | 75.84 | 93.24 | 93.03 | 71.62 | 90.14 | 90.66 | 96.38 | 99.79 |
| JK | R3 | .001 | 94 | 97.87 | 61.24 | 77.58 | 92.25 | 93.42 | 69.69 | 90.27 | 90.79 | 94.12 | 99.77 |
| JK | R4 | .0001 | 33 | 91.31 | 67.82 | 80.67 | 92.26 | 91.75 | 82.90 | 89.98 | 91.70 | 95.86 | 99.72 |

AB, Adaboost; ANN, artificial neural network; DT, decision tree; GPC, Gaussian process classification; LDA, linear discriminant analysis; LR, logistic regression; NB, naive Bayes; QDA, quadratic discriminant analysis; RF, random forest; SVM, support vector machine. a Bold and shaded values indicate the result of the proposed method.

Table A.3 Change in mean accuracy of all classifiers for different P-values of the Kruskal–Wallis test.

| PT | SN | P-value | # of genes | LDA | QDA | NB | GPC | SVM | ANN | LR | DT | AB | RFa |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| K2 | R1 | .05 | 387 | 80.97 | 58.06 | 75.16 | 80.80 | 82.42 | 70.32 | 73.07 | 75.00 | 74.84 | 84.84 |
| K2 | R2 | .01 | 188 | 83.23 | 65.00 | 76.77 | 79.35 | 85.16 | 77.25 | 75.00 | 72.58 | 77.58 | 85.16 |
| K2 | R3 | .001 | 62 | 80.65 | 65.48 | 83.87 | 85.97 | 82.90 | 77.42 | 79.83 | 75.65 | 79.52 | 85.16 |
| K2 | R4 | .0001 | 22 | 70.96 | 67.74 | 83.07 | 86.93 | 86.94 | 72.10 | 77.42 | 76.78 | 80.32 | 87.42 |
| K10 | R1 | .05 | 387 | 81.67 | 55.00 | 65.83 | 82.50 | 80.00 | 78.16 | 74.17 | 79.17 | 77.50 | 85.83 |
| K10 | R2 | .01 | 188 | 75.00 | 60.83 | 79.17 | 83.33 | 82.50 | 77.10 | 74.58 | 70.83 | 80.00 | 86.67 |
| K10 | R3 | .001 | 62 | 70.00 | 60.83 | 83.33 | 82.50 | 84.33 | 75.53 | 73.34 | 80.00 | 85.83 | 87.50 |
| K10 | R4 | .0001 | 22 | 80.83 | 70.00 | 85.83 | 85.00 | 85.83 | 79.74 | 75.83 | 84.17 | 83.33 | 89.17 |
| JK | R1 | .05 | 387 | 99.56 | 67.74 | 83.82 | 91.99 | 93.39 | 77.68 | 99.58 | 99.74 | 94.77 | 99.77 |
| JK | R2 | .01 | 188 | 96.38 | 66.65 | 83.66 | 92.54 | 93.29 | 69.87 | 99.09 | 99.69 | 96.35 | 99.79 |
| JK | R3 | .001 | 62 | 98.83 | 67.74 | 83.97 | 91.97 | 92.74 | 78.15 | 98.99 | 99.71 | 94.77 | 99.82 |
| JK | R4 | .0001 | 22 | 89.36 | 80.72 | 84.08 | 91.96 | 89.00 | 92.10 | 90.37 | 94.85 | 94.95 | 99.50 |

AB, Adaboost; ANN, artificial neural network; DT, decision tree; GPC, Gaussian process classification; LDA, linear discriminant analysis; LR, logistic regression; NB, naive Bayes; QDA, quadratic discriminant analysis; RF, random forest; SVM, support vector machine. a Bold and shaded values indicate the result of the proposed method.

Table A.4 Change in mean accuracy of all classifiers for different P-values of the F-test.

| PT | SN | P-value | # of genes | LDA | QDA | NB | GPC | SVM | ANN | LR | DT | AB | RFa |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| K2 | R1 | .05 | 714 | 72.26 | 53.39 | 65.64 | 75.32 | 69.35 | 76.61 | 76.77 | 69.52 | 73.22 | 75.81 |
| K2 | R2 | .01 | 431 | 74.03 | 56.45 | 70.16 | 71.61 | 70.48 | 77.58 | 73.39 | 72.74 | 77.74 | 76.78 |
| K2 | R3 | .001 | 246 | 76.68 | 56.77 | 75.00 | 76.29 | 71.61 | 70.81 | 73.87 | 72.58 | 77.26 | 79.20 |
| K2 | R4 | .0001 | 133 | 79.03 | 54.19 | 77.26 | 78.07 | 72.25 | 71.77 | 77.58 | 73.39 | 78.71 | 81.13 |
| K10 | R1 | .05 | 714 | 79.17 | 55.84 | 63.33 | 74.17 | 70.17 | 72.81 | 78.92 | 80.83 | 79.58 | 84.17 |
| K10 | R2 | .01 | 431 | 73.34 | 55.00 | 68.34 | 79.17 | 60.63 | 74.58 | 70.00 | 80.41 | 77.25 | 88.33 |
| K10 | R3 | .001 | 246 | 80.83 | 63.33 | 68.34 | 86.33 | 73.34 | 71.56 | 71.25 | 75.00 | 78.34 | 88.34 |
| K10 | R4 | .0001 | 133 | 79.16 | 68.33 | 79.58 | 87.50 | 75.42 | 71.40 | 74.17 | 84.17 | 79.17 | 88.42 |
| JK | R1 | .05 | 714 | 96.33 | 57.83 | 70.94 | 95.03 | 90.79 | 71.36 | 99.09 | 99.66 | 96.43 | 99.71 |
| JK | R2 | .01 | 431 | 96.38 | 64.31 | 71.49 | 93.81 | 93.00 | 72.14 | 99.12 | 99.71 | 96.38 | 99.77 |
| JK | R3 | .001 | 246 | 96.28 | 63.45 | 75.91 | 94.33 | 93.00 | 68.31 | 98.99 | 99.66 | 96.38 | 99.77 |
| JK | R4 | .0001 | 133 | 96.54 | 74.14 | 83.92 | 94.48 | 90.48 | 90.46 | 88.76 | 96.41 | 95.73 | 99.74 |

AB, Adaboost; ANN, artificial neural network; DT, decision tree; GPC, Gaussian process classification; LDA, linear discriminant analysis; LR, logistic regression; NB, naive Bayes; QDA, quadratic discriminant analysis; RF, random forest; SVM, support vector machine. a Bold and shaded values indicate the result of the proposed method.


Table A.5 Performance evaluation parameters of all classifiers for the t-test.

| PT | SN | CT | SE (%) | SP (%) | PPV (%) | NPV (%) | FM (%) | AUC (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| K2 | R1 | LDA | 60.01 | 64.88 | 78.21 | 43.79 | 67.36 | 67.78 |
| K2 | R2 | QDA | 84.84 | 54.08 | 77.23 | 67.58 | 79.99 | 83.17 |
| K2 | R3 | NB | 76.90 | 86.63 | 91.97 | 68.68 | 82.84 | 91.62 |
| K2 | R4 | GPC | 87.88 | 77.32 | 87.96 | 84.36 | 89.54 | 87.80 |
| K2 | R5 | SVM | 88.01 | 81.05 | 89.38 | 80.37 | 87.81 | 40.80 |
| K2 | R6 | ANN | 78.61 | 79.99 | 88.31 | 67.57 | 82.52 | 87.59 |
| K2 | R7 | LR | 55.04 | 62.38 | 73.35 | 43.33 | 61.98 | 58.92 |
| K2 | R8 | DT | 80.65 | 59.21 | 77.97 | 64.12 | 78.48 | 75.58 |
| K2 | R9 | AB | 77.70 | 82.09 | 88.48 | 67.85 | 82.08 | 87.03 |
| K2 | R10 | RFa | 91.90 | 64.29 | 81.25 | 70.74 | 82.51 | 92.75 |
| K10 | R1 | LDA | 68.11 | 62.20 | 73.14 | 55.93 | 69.08 | 70.97 |
| K10 | R2 | QDA | 94.41 | 55.71 | 78.90 | 70.25 | 85.02 | 89.78 |
| K10 | R3 | NB | 69.80 | 93.82 | 95.45 | 62.54 | 80.15 | 90.72 |
| K10 | R4 | GPC | 94.56 | 78.75 | 90.07 | 92.38 | 91.67 | 88.79 |
| K10 | R5 | SVM | 86.61 | 85.17 | 91.95 | 76.48 | 88.54 | 43.36 |
| K10 | R6 | ANN | 81.88 | 74.90 | 86.68 | 65.24 | 83.33 | 88.05 |
| K10 | R7 | LR | 63.24 | 69.33 | 84.45 | 42.74 | 71.17 | 67.13 |
| K10 | R8 | DT | 84.46 | 62.90 | 85.48 | 76.50 | 84.48 | 76.40 |
| K10 | R9 | AB | 84.22 | 80.24 | 88.50 | 76.57 | 84.90 | 92.14 |
| K10 | R10 | RFa | 87.82 | 72.56 | 81.64 | 80.00 | 83.90 | 90.56 |
| JK | R1 | LDA | 90.69 | 92.45 | 95.64 | 84.59 | 93.08 | 98.12 |
| JK | R2 | QDA | 62.82 | 76.91 | 83.19 | 53.22 | 71.58 | 76.47 |
| JK | R3 | NB | 70.12 | 99.85 | 99.89 | 64.77 | 82.40 | 94.23 |
| JK | R4 | GPC | 93.06 | 90.81 | 94.97 | 87.99 | 93.95 | 95.88 |
| JK | R5 | SVM | 94.60 | 86.58 | 92.77 | 89.88 | 93.67 | 94.68 |
| JK | R6 | ANN | 87.44 | 74.65 | 86.71 | 77.69 | 86.82 | 91.58 |
| JK | R7 | LR | 90.20 | 89.59 | 94.10 | 83.49 | 92.08 | 93.74 |
| JK | R8 | DT | 91.41 | 92.23 | 95.61 | 85.73 | 93.41 | 97.75 |
| JK | R9 | AB | 94.56 | 98.24 | 99.02 | 90.96 | 96.72 | 99.41 |
| JK | R10 | RFa | 99.80 | 99.59 | 99.78 | 99.65 | 99.79 | 99.96 |

AB, Adaboost; ACC, accuracy; ANN, artificial neural network; AUC, area under the curve; DT, decision tree; FM, F-measure; GPC, Gaussian process classification; LDA, linear discriminant analysis; LR, logistic regression; NB, naive Bayes; NPV, negative predictive value; PPV, positive predictive value; QDA, quadratic discriminant analysis; RF, random forest; SE, sensitivity; SP, specificity; SVM, support vector machine. a Bold and shaded values indicate the result of the proposed method.


Table A.6 Performance evaluation parameters of all classifiers for the Kruskal–Wallis test.

| PT | SN | CT | SE (%) | SP (%) | PPV (%) | NPV (%) | FM (%) | AUC (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| K2 | R1 | LDA | 75.47 | 64.05 | 78.09 | 60.66 | 76.10 | 76.72 |
| K2 | R2 | QDA | 92.33 | 71.17 | 84.80 | 83.62 | 87.92 | 90.11 |
| K2 | R3 | NB | 83.43 | 83.22 | 90.13 | 73.57 | 86.48 | 91.03 |
| K2 | R4 | GPC | 91.54 | 86.76 | 92.70 | 87.38 | 91.96 | 90.88 |
| K2 | R5 | SVM | 91.36 | 73.10 | 86.00 | 83.46 | 88.31 | 41.91 |
| K2 | R6 | ANN | 87.04 | 69.55 | 84.09 | 76.55 | 85.12 | 88.79 |
| K2 | R7 | LR | 69.32 | 63.75 | 78.95 | 52.45 | 72.87 | 71.07 |
| K2 | R8 | DT | 81.82 | 62.71 | 79.42 | 68.56 | 79.89 | 76.88 |
| K2 | R9 | AB | 85.27 | 76.79 | 87.39 | 75.98 | 85.76 | 88.98 |
| K2 | R10 | RFa | 91.33 | 76.43 | 88.36 | 77.94 | 88.48 | 91.78 |
| K10 | R1 | LDA | 90.83 | 55.70 | 81.17 | 70.60 | 84.90 | 85.56 |
| K10 | R2 | QDA | 91.85 | 78.02 | 85.26 | 83.75 | 87.46 | 89.01 |
| K10 | R3 | NB | 80.04 | 86.15 | 93.22 | 68.74 | 85.21 | 92.69 |
| K10 | R4 | GPC | 94.90 | 84.25 | 93.80 | 94.08 | 94.59 | 93.40 |
| K10 | R5 | SVM | 93.52 | 76.42 | 89.59 | 84.50 | 90.98 | 19.60 |
| K10 | R6 | ANN | 92.13 | 66.10 | 81.96 | 86.31 | 85.82 | 90.83 |
| K10 | R7 | LR | 75.83 | 71.35 | 83.63 | 62.44 | 78.84 | 77.39 |
| K10 | R8 | DT | 82.53 | 74.90 | 86.83 | 70.23 | 83.43 | 81.35 |
| K10 | R9 | AB | 81.87 | 75.24 | 84.88 | 73.53 | 82.37 | 89.80 |
| K10 | R10 | RFa | 95.71 | 79.24 | 87.30 | 85.85 | 88.01 | 96.19 |
| JK | R1 | LDA | 94.96 | 79.18 | 89.27 | 89.66 | 92.02 | 98.47 |
| JK | R2 | QDA | 92.50 | 59.31 | 80.52 | 81.30 | 86.10 | 87.82 |
| JK | R3 | NB | 80.36 | 90.84 | 94.10 | 71.80 | 86.69 | 94.53 |
| JK | R4 | GPC | 96.79 | 83.17 | 91.32 | 93.64 | 93.95 | 95.21 |
| JK | R5 | SVM | 92.46 | 82.70 | 90.69 | 85.78 | 91.56 | 95.96 |
| JK | R6 | ANN | 92.92 | 90.61 | 94.82 | 88.16 | 93.78 | 96.88 |
| JK | R7 | LR | 96.69 | 78.89 | 89.35 | 93.18 | 92.84 | 94.88 |
| JK | R8 | DT | 97.22 | 90.54 | 94.94 | 94.78 | 96.05 | 98.88 |
| JK | R9 | AB | 93.55 | 97.51 | 98.57 | 89.34 | 95.98 | 99.30 |
| JK | R10 | RFa | 99.84 | 99.76 | 99.87 | 99.72 | 99.85 | 99.95 |

AB, Adaboost; ACC, accuracy; ANN, artificial neural network; AUC, area under the curve; DT, decision tree; FM, F-measure; GPC, Gaussian process classification; LDA, linear discriminant analysis; LR, logistic regression; NB, naive Bayes; NPV, negative predictive value; PPV, positive predictive value; QDA, quadratic discriminant analysis; RF, random forest; SE, sensitivity; SP, specificity; SVM, support vector machine. a Bold and shaded values indicate the result of the proposed method.


Table A.7 Performance evaluation parameters of all classifiers for the F-test.

| PT | SN | CT | SE (%) | SP (%) | PPV (%) | NPV (%) | FM (%) | AUC (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| K2 | R1 | LDA | 84.05 | 69.03 | 83.23 | 70.85 | 83.21 | 84.83 |
| K2 | R2 | QDA | 78.29 | 58.16 | 77.19 | 59.28 | 76.79 | 76.39 |
| K2 | R3 | NB | 72.73 | 81.35 | 87.71 | 64.32 | 78.73 | 84.03 |
| K2 | R4 | GPC | 88.93 | 72.75 | 86.52 | 81.47 | 86.95 | 85.41 |
| K2 | R5 | SVM | 85.25 | 43.81 | 76.64 | 83.48 | 84.72 | 51.19 |
| K2 | R6 | ANN | 82.75 | 82.45 | 88.69 | 73.98 | 85.02 | 89.61 |
| K2 | R7 | LR | 45.99 | 48.40 | 59.25 | 35.78 | 50.90 | 45.89 |
| K2 | R8 | DT | 85.34 | 51.09 | 76.02 | 66.05 | 79.82 | 71.57 |
| K2 | R9 | AB | 81.35 | 82.34 | 89.58 | 71.76 | 84.73 | 88.84 |
| K2 | R10 | RFa | 87.29 | 69.56 | 84.15 | 75.65 | 85.22 | 88.95 |
| K10 | R1 | LDA | 84.17 | 63.71 | 78.21 | 75.54 | 79.49 | 79.97 |
| K10 | R2 | QDA | 93.90 | 54.71 | 75.96 | 88.33 | 83.07 | 81.63 |
| K10 | R3 | NB | 72.63 | 83.33 | 88.81 | 65.44 | 79.03 | 86.04 |
| K10 | R4 | GPC | 93.78 | 79.87 | 89.49 | 93.12 | 91.10 | 88.94 |
| K10 | R5 | SVM | 92.66 | 54.36 | 76.94 | 84.17 | 83.69 | 25.69 |
| K10 | R6 | ANN | 79.00 | 73.92 | 88.14 | 60.36 | 82.88 | 87.15 |
| K10 | R7 | LR | 56.26 | 51.55 | 66.14 | 42.76 | 59.08 | 54.22 |
| K10 | R8 | DT | 83.71 | 69.36 | 83.57 | 69.99 | 82.40 | 80.69 |
| K10 | R9 | AB | 86.61 | 70.92 | 85.98 | 78.43 | 85.77 | 88.07 |
| K10 | R10 | RFa | 83.65 | 81.92 | 93.45 | 69.38 | 87.30 | 92.67 |
| JK | R1 | LDA | 97.46 | 94.87 | 97.19 | 95.40 | 97.32 | 99.40 |
| JK | R2 | QDA | 68.15 | 85.04 | 89.28 | 59.51 | 77.27 | 84.98 |
| JK | R3 | NB | 75.16 | 99.85 | 99.90 | 68.86 | 85.78 | 93.33 |
| JK | R4 | GPC | 94.66 | 94.17 | 96.74 | 90.67 | 95.68 | 96.78 |
| JK | R5 | SVM | 95.00 | 82.26 | 90.71 | 90.04 | 92.80 | 96.85 |
| JK | R6 | ANN | 93.17 | 85.53 | 92.26 | 87.57 | 92.66 | 96.89 |
| JK | R7 | LR | 92.74 | 81.52 | 90.14 | 86.11 | 91.42 | 93.40 |
| JK | R8 | DT | 99.44 | 90.91 | 95.24 | 99.01 | 97.27 | 99.40 |
| JK | R9 | AB | 95.00 | 97.07 | 98.42 | 91.81 | 96.62 | 99.39 |
| JK | R10 | RFa | 99.82 | 99.59 | 99.78 | 99.69 | 99.80 | 99.96 |

AB, Adaboost; ACC, accuracy; ANN, artificial neural network; AUC, area under the curve; DT, decision tree; FM, F-measure; GPC, Gaussian process classification; LDA, linear discriminant analysis; LR, logistic regression; NB, naive Bayes; NPV, negative predictive value; PPV, positive predictive value; QDA, quadratic discriminant analysis; RF, random forest; SE, sensitivity; SP, specificity; SVM, support vector machine. a Bold and shaded values indicate the result of the proposed method.


Appendix B

This appendix demonstrates the behavior of the 10 classifiers (LDA, QDA, NB, GPC, SVM, ANN, LR, DT, AB, and RF) as the P-value changes (the effect of feature selection). Fig. B.1 shows the effect of the P-values on the 10 classifiers (in terms of classification accuracy) for the three partition protocols (K2, K10, and JK).

FIGURE B.1 Mean accuracy of all 10 classifiers varying P-values for different protocols: (A) K2 protocols; (B) K10 protocols; and (C) JK protocols.
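The gene counting that underlies Fig. 14.10 and Fig. B.1 can be sketched as follows. The expression matrix here is synthetic, with signal planted in the first 50 columns, so only the qualitative trend carries over: shrinking the P-value cutoff leaves fewer, more significant genes.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
# Synthetic 62 x 2000 expression matrix; the first 50 "genes" carry signal.
X = rng.normal(size=(62, 2000))
y = np.array([1] * 40 + [0] * 22)
X[y == 1, :50] += 1.0

# Number of genes surviving each P-value cutoff (the quantity plotted).
pvals = ttest_ind(X[y == 1], X[y == 0], axis=0).pvalue
counts = {p: int((pvals < p).sum()) for p in (.05, .01, .001, .0001)}
```

The resulting counts decrease monotonically with the cutoff, just as the gene counts in Tables A.2–A.4 (e.g., 478, 246, 94, 33 for the t-test) shrink from P < .05 down to P < .0001.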

Appendix C

Table C.1 List of abbreviations.

| SN | Abbrev. | Full form |
| --- | --- | --- |
| 1 | ML | Machine learning |
| 2 | FST | Feature selection technique |
| 3 | ST | Statistical test |
| 4 | KW | Kruskal–Wallis |
| 5 | MI | Mutual information |
| 6 | WCSRS | Wilcoxon sign rank sum |
| 7 | CT | Classifier type |
| 8 | LDA | Linear discriminant analysis |
| 9 | QDA | Quadratic discriminant analysis |
| 10 | NB | Naive Bayes |
| 11 | GPC | Gaussian process classification |
| 12 | SVM | Support vector machine |
| 13 | ANN | Artificial neural network |
| 14 | LR | Logistic regression |
| 15 | DT | Decision tree |
| 16 | AB | Adaboost |
| 17 | RF | Random forest |
| 18 | Poly-2 | Polynomial kernel with order two |
| 19 | RBF | Radial basis function |
| 20 | KNN | K-nearest neighborhood |
| 21 | GSA | Gravitational search algorithm |
| 22 | BCGA | Binary-coded genetic algorithm |
| 23 | RCGA | Real-coded genetic algorithm |
| 24 | PSO | Particle swarm optimization |
| 25 | PTS | Pure tabu search |
| 26 | K2 | 2-fold cross-validation |
| 27 | K10 | 10-fold cross-validation |
| 28 | JK | Jackknife |
| 29 | PPSO | Pure particle swarm optimization |
| 30 | TP | True positive |
| 31 | TN | True negative |
| 32 | FP | False positive |
| 33 | FN | False negative |
| 34 | ACC | Accuracy |
| 35 | SE | Sensitivity |
| 36 | SP | Specificity |
| 37 | PPV | Positive predictive value |
| 38 | NPV | Negative predictive value |
| 39 | FM | F-measure |
| 40 | ROC | Receiver operating curve |
| 41 | AUC | Area under the curve |
| 42 | P-value | Probability value |
| 43 | DS | Data size |
| 44 | PT | Protocol type |
| 45 | PM | Performance measure |
| 46 | PV | Performance validation |
| 47 | SG | Selected gene |
| 48 | RI | Reliability index |
| 49 | MK | Multiple kernel |
| 50 | SN | Serial number |


Appendix D

Table D.1 List of mathematical symbols.

| SN | Symbol | Description |
| --- | --- | --- |
| 1 | X | Data matrix |
| 2 | μ | Overall mean vector |
| 3 | σ | Overall standard deviation |
| 4 | μ1i | Sample mean of the cancer patients |
| 5 | μ2i | Sample mean of the control patients |
| 6 | s²1i | Sample variance of the cancer patients |
| 7 | s²2i | Sample variance of the control patients |
| 8 | n1 | Total number of cancer patients |
| 9 | n2 | Total number of control patients |
| 10 | R1 | Sum of the ranks for cancer patients |
| 11 | R2 | Sum of the ranks for control patients |
| 12 | N | Total number of patients |
| 13 | X̄1 | Mean vector of the cancer patients |
| 14 | X̄2 | Mean vector of the control patients |
| 15 | Σ | Sample covariance matrix |
| 16 | c | Threshold of the decision boundary |
| 17 | Σ1 | Sample covariance matrix for the cancer patients |
| 18 | Σ2 | Sample covariance matrix for the control patients |
| 19 | X̄1ᵀ | Transpose of the mean vector for the cancer patients |
| 20 | X̄2ᵀ | Transpose of the mean vector for the control patients |
| 21 | P(y∣x) | Conditional probability of y given x |
| 22 | P(x∣y) | Conditional probability of x given y |
| 23 | P(x) | Marginal probability of x |
| 24 | P(y) | Marginal probability of y |
| 25 | K(x, xᵀ) | Kernel matrix |
| 26 | μ̄ | Overall mean accuracy of the classifiers |
| 27 | σi | Standard deviation of the ith group |
| 28 | μi | Mean accuracy of the ith group |
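As a small worked example of how the group-level symbols of Table D.1 combine, the unequal-variance (Welch) form of the two-sample t statistic can be written directly from the per-group means, variances, and sizes. The chapter's exact test variant is not reproduced in this excerpt, so treat this as one standard instantiation, applied to hypothetical summary numbers.

```python
import math

def t_statistic(mu1, mu2, s2_1, s2_2, n1, n2):
    # Welch two-sample t built from the Table D.1 symbols:
    # group means (mu_1i, mu_2i), sample variances (s^2_1i, s^2_2i),
    # and group sizes (n1, n2).
    return (mu1 - mu2) / math.sqrt(s2_1 / n1 + s2_2 / n2)

# Hypothetical gene: unit-variance groups of 40 cancers and 22 controls
# whose means differ by one unit.
t = t_statistic(1.0, 0.0, 1.0, 1.0, 40, 22)
```

A statistic of this magnitude (about 3.8) corresponds to a two-sided P-value well below .05, i.e., such a gene would survive the chapter's coarsest cutoff.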

References

[1] M. Hollstein, B. Vogelstein, C.C. Harris, p53 mutations in human cancers, Science 253 (5015) (1991) 49–53.
[2] F. Bray, J. Ferlay, I. Soerjomataram, R.L. Siegel, L.A. Torre, A. Jemal, Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries, CA: A Cancer J. Clin. 1 (1) (2018) 1–31.
[3] R.L. Siegel, K.D. Miller, A. Jemal, Cancer statistics, 2015, CA: A Cancer J. Clin. 65 (1) (2015) 5–29.
[4] E.F. Matthias, R. Anthony, K. Nikola, Evolving connectionist systems for knowledge discovery from gene expression data of cancer tissue, Artif. Intell. Med. 28 (2) (2003) 165–189.
[5] S. Monti, P. Tamayo, J. Mesirov, T. Golub, Consensus clustering: a resampling-based method for class discovery and visualization of gene expression microarray data, Mach. Learn. 52 (1) (2003) 91–118.
[6] J.H. Hong, S.B. Cho, The classification of cancer based on DNA microarray data that uses diverse ensemble genetic programming, Artif. Intell. Med. 36 (1) (2006) 43–58.
[7] C.L. Huang, C.J. Wang, A GA-based feature selection and parameters optimization for support vector machines, Expert Syst. Appl. 31 (2) (2006) 231–240.
[8] E.J. Yeoh, M.E. Ross, S.A. Shurtleff, W.K. Williams, D. Patel, R. Mahfouz, et al., Classification, subtype discovery, and prediction of outcome in pediatric acute lymphoblastic leukemia by gene expression profiling, Cancer Cell 1 (2) (2002) 133–143.
[9] D.K. Slonim, From patterns to pathways: gene expression data analysis comes of age, Nat. Genet. 32 (4) (2002) 502–506.
[10] S. Hautaniemi, O. Yli-Harja, J. Astola, P. Kauraniemi, A. Kallioniemi, M. Wolf, et al., Analysis and visualization of gene expression microarray data in human cancer using self-organizing maps, Mach. Learn. 52 (2) (2003) 45–66.
[11] W.L. Tung, C. Quek, GenSo-FDSS: a neural-fuzzy decision support system for pediatric ALL cancer subtype identification using gene expression data, Artif. Intell. Med. 33 (1) (2005) 61–88.
[12] T. Ando, M. Suguro, T. Kobayashi, M. Seto, H. Honda, Selection of causal gene sets for lymphoma prognostication from expression profiling and construction of prognostic fuzzy neural network models, J. Biosci. Bioeng. 96 (2) (2003) 161–167.
[13] H. Takahashi, K. Masuda, T. Ando, T. Kobayashi, H. Honda, Prognostic predictor with multiple fuzzy neural models using expression profiles from DNA microarray for metastases of breast cancer, J. Biosci. Bioeng. 98 (3) (2004) 193–199.
[14] L. Li, C.R. Weinberg, T.A. Darden, L.G. Pedersen, Gene selection for sample classification based on gene expression data: study of sensitivity to choice of parameters of the GA/KNN method, Bioinformatics 17 (12) (2001) 1131–1142.
[15] Y. Mao, X. Zhou, D. Pi, Y. Sun, S.T. Wong, Multiclass cancer classification by using fuzzy support vector machine and binary decision tree with gene selection, Biomed. Res. Int. 2005 (2) (2005) 160–171.
[16] W. Wei, L. Xin, X. Min, P. Jinrong, R. Setiono, A hybrid SOM-SVM method for analyzing zebra fish gene expression, IEEE Comput. Soc. 2 (2) (2004) 323–326.
[17] I. Guyon, J. Weston, S. Barnhill, V. Vapnik, Gene selection for cancer classification using support vector machines, Mach. Learn. 46 (3) (2002) 389–422.
[18] M. Jeanmougin, A. De Reynies, L. Marisa, C. Paccard, G. Nuel, M. Guedj, Should we abandon the t-test in the analysis of gene expression microarray data: a comparison of variance modeling strategies? PLoS One 5 (9) (2010) 12336–12345.
[19] S. Kuyuk, I. Ercan, Commonly used statistical methods for detecting differential gene expression in microarray experiments, Biostat. Epidemiol. Int. J. 1 (1) (2017) 1–8.
[20] Y. Su, T.M. Murali, V. Pavlovic, M. Schaffer, S. Kasif, RankGene: identification of diagnostic genes based on expression data, Bioinformatics 19 (12) (2003) 1578–1579.


[21] D. Chen, Z. Liu, X. Ma, D. Hua, Selecting genes by test statistics, Biomed. Res. Int. 2005 (2) (2005) 132–138.
[22] Y. Shi, A.M. Chinnaiyan, H. Jiang, rSeqNP: a non-parametric approach for detecting differential expression and splicing from RNA-Seq data, Bioinformatics 31 (13) (2015) 2222–2224.
[23] M.W. Butler, N.R. Hackett, J. Salit, Y. Strulovici-Barel, L. Omberg, J. Mezey, et al., Glutathione S-transferase copy number variation alters lung gene expression, Eur. Respir. J. 38 (1) (2011) 15–28.
[24] M. Maniruzzaman, M.J. Rahman, M. Al-Mehedi Hassan, H.S. Suri, M.M. Abedin, A. El-Baz, et al., Accurate diabetes risk stratification using machine learning: role of missing value and outliers, J. Med. Syst. 42 (5) (2018) 92–108.
[25] H. Peng, F. Long, C. Ding, Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy, IEEE Trans. Pattern Anal. Mach. Intell. 27 (8) (2005) 1226–1238.
[26] C.D.A. Vanitha, D. Devaraj, M. Venkatesulu, Gene expression data classification using support vector machine and mutual information-based gene selection, Proc. Comput. Sci. 47 (2015) 13–21.
[27] U. Alon, N. Barkai, D.A. Notterman, K. Gish, S. Ybarra, D. Mack, et al., Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays, Proc. Natl. Acad. Sci. 96 (12) (1999) 6745–6750.
[28] M. Patrício, J. Pereira, J. Crisóstomo, P. Matafome, M. Gomes, R. Seiça, et al., Using resistin, glucose, age and BMI to predict the presence of breast cancer, BMC Cancer 18 (1) (2018) 29–36.
[29] F.S. Nahm, Nonparametric statistical tests for the continuous data: the basic concept and the practical use, Korean J. Anesthesiol. 69 (1) (2016) 8–14.
[30] S.S. Sawilowsky, Nonparametric tests of interaction in experimental design, Rev. Educ. Res. 60 (1) (1990) 91–126.
[31] R.A. Fisher, The use of multiple measurements in taxonomic problems, Ann. Eugen. 7 (2) (1936) 179–188.
[32] A.K. Jain, R.P.W. Duin, J. Mao, Statistical pattern recognition: a review, IEEE Trans. Pattern Anal. Mach. Intell. 22 (1) (2000) 4–37.
[33] T. Sapatinas, Discriminant analysis and statistical pattern recognition, J. R. Stat. Soc. Ser. A (Stat. Soc.) 168 (3) (2005) 635–636.
[34] G.I. Webb, J.R. Boughton, Z. Wang, Not so naïve Bayes: aggregating one-dependence estimators, Mach. Learn. 58 (1) (2005) 5–24.
[35] T.M. Cover, Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition, IEEE Trans. Electron. Comput. 14 (3) (1965) 326–334.
[36] S. Brahim-Belhouari, A. Bermak, Gaussian process for nonstationary time series prediction, Comput. Stat. Data Anal. 47 (4) (2004) 705–712.
[37] C.E. Rasmussen, Gaussian processes in machine learning, Adv. Lect. Mach. Learn. 3176 (2004) 63–71.
[38] C. Cortes, V. Vapnik, Support-vector networks, Mach. Learn. 20 (3) (1995) 273–297.
[39] A. Reinhardt, T. Huber, Using neural networks for prediction of the subcellular location of proteins, Nucleic Acids Res. 26 (9) (1998) 2230–2236.


[40] D.R. Cox, The regression analysis of binary sequences, J. R. Stat. Soc. Ser. B (Methodol.) 20 (2) (1958) 215–242.
[41] J. Friedman, T. Hastie, R. Tibshirani, Additive logistic regression: a statistical view of boosting (with discussion and a rejoinder by the authors), Ann. Stat. 28 (2) (2000) 337–407.
[42] B. Tabaei, W. Herman, A multivariate logistic regression equation to screen for diabetes, Diab. Care 25 (11) (2002) 1999–2003.
[43] J.R. Quinlan, Simplifying decision trees, Int. J. Man-Mach. Stud. 27 (3) (1987) 221–234.
[44] W. Hu, W. Hu, S. Maybank, Adaboost-based algorithm for network intrusion detection, IEEE Trans. Syst. Man Cybern. B (Cybern.) 38 (2) (2008) 577–583.
[45] L. Breiman, Bagging predictors, Mach. Learn. 24 (2) (1996) 123–140.
[46] L. Breiman, Random forests, Mach. Learn. 45 (1) (2001) 5–32.
[47] A. Liaw, M. Wiener, Classification and regression by random forest, R News 2 (3) (2002) 18–22.
[48] T. Dahiru, P-value, a true test of statistical significance? A cautionary note, Ann. Ib. Postgrad. Med. 6 (1) (2008) 21–26.
[49] P.G. Kumar, T.A.A. Victoire, P. Renukadevi, D. Devaraj, Design of fuzzy expert system for microarray data classification using a novel genetic swarm algorithm, Expert Syst. Appl. 39 (2) (2012) 1811–1821.
[50] Q. Shen, W.M. Shi, W. Kong, Hybrid particle swarm optimization and tabu search approach for selecting genes for tumor classification using gene expression data, Comput. Biol. Chem. 32 (1) (2008) 53–60.
[51] S.M. Alladi, P. Shinde Santosh, V. Ravi, U.S. Murthy, Colon cancer prediction with genetic profiles using intelligent techniques, Bioinformation 3 (3) (2008) 130–133.
[52] G. Sun, X. Dong, G. Xu, Tumor tissue identification based on gene expression data using DWT feature extraction and PNN classifier, Neurocomputing 69 (6) (2006) 387–402.
[53] Z. Chen, J. Li, L. Wei, A multiple kernel support vector machine scheme for feature selection and rule extraction from gene expression data of cancer tissue, Artif. Intell. Med. 41 (2) (2007) 161–175.
[54] Z.P. Liu, R. Gao, Detecting pathway biomarkers of diabetic progression with differential entropy, J. Biomed. Inform. 82 (2018) 143–153.

317

CHAPTER 15

Identification of road signs using a novel convolutional neural network

Yang Pan, Vijayakumar Kadappa and Shankru Guggari
Data Analytics Research Lab, Department of Computer Applications, B.M.S. College of Engineering, Bengaluru, India

15.1 Introduction

Road signs play a central role in guiding traffic in the expected way, while misinterpretation or ignorance of road signs ranks among the top 25 causes of car accidents [1]. Under circumstances such as weather interference, high speed, and damage to the road signs, the driver may fail to interpret the road signs properly. A computer can handle these hurdles and act as a proper assistant to the driver. The driver assistance system, one of the main branches of driverless cars, captures the road signs and then classifies and displays them on the user interface panel within a short time. The techniques behind it always include two steps: detection and classification. Either color-based or shape-based algorithms are used to crop the unnecessary background out and keep the region of interest (ROI); the filtered images are then sent as the input for the classification step [2]. In the second step, options are abundant for choosing the classifier; traditionally, the support vector machine (SVM) [2,3], decision tree, and neural network [e.g., artificial neural network (ANN), convolutional neural network (CNN), or variants] are on the list. LeNet obtains a decent classification accuracy even though its CNN structure is simple: a convolutional layer and a pooling layer, followed again by a convolutional and a pooling layer, with three fully connected layers [4,5]. But most of the methods are sensitive to changes of illumination, and the feature extraction processes, such as the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF) algorithms, consume a considerable amount of computational time [6]. Li and Yang [7] describe the development tendency and technical ambiguities and introduce three new feature-based algorithms combining image preprocessing, feature extraction, and a classifier to classify the image.
Our work aims at improving classification accuracy and reducing the time required for the processing of road signs. We used the German Traffic Sign Recognition Benchmark (GTSRB) [8] data set to validate the significance of the introduced technique. The data set consists of 43 different kinds of traffic signs.

Cognitive Informatics, Computer Modelling, and Cognitive Science, Volume 1. DOI: https://doi.org/10.1016/B978-0-12-819443-0.00015-5 © 2020 Elsevier Inc. All rights reserved.



FIGURE 15.1 Different deviations of road sign plates.

The database is unevenly distributed and contains road sign images in different conditions, such as low light, low resolution, and motion blur (Fig. 15.1), which makes building a classification model a challenge. To address these challenges, we investigate different configurations of CNN [9] structure, parameters, and processing techniques. Contrast limited adaptive histogram equalization (CLAHE) is applied for image preprocessing before classification by the CNN; it shows limited improvement in accuracy after a certain number of epochs, but the training process reaches a stable classification accuracy faster. The adjustment of the histogram of the training images does help to boost the training process. Normally, the data set is split into a training set and a test set with a fixed ratio. To make sure each image in the database is utilized for training, cross-validation is used; cross-validation reduces the effect of bias in the training and test set splitting. In Section 15.2, we review the development of image classification techniques, from the traditional approaches to more advanced approaches like neural networks. Separating the whole classification process into detection, classification, and the steps in between, we discuss the different approaches applied. First, we need to confirm the existence of the road signs and the ROI to avoid unnecessary background information. In some implementations, feature extraction methods yield the feature information, which is used for the final recognition step. In Section 15.3, our proposed method is introduced and explained in detail.


Section 15.4 describes the result analysis of the proposed technique and compares it with other benchmark methods like LeNet [10], ANN, and SVM. The chapter is organized in five sections as follows: a detailed literature review is presented in Section 15.2, and the proposed method is explained in Section 15.3. Experimental result analysis of the proposed CNN is presented in Section 15.4, and conclusions from the experimental results are discussed in Section 15.5.

15.2 Literature review

There are generally two steps in road sign recognition: detection and classification. In the detection step, the ROI is found on the basis of color segmentation methods [6,11-15]. The HSV (hue, saturation, and value) color space is dominant among color segmentation algorithms, with its advantage in robustness against changing illumination. By utilizing the HSV color space, object detection models can crop the ROI neatly and efficiently. Fuzzy image processing works as a tool to obtain useful features from an image [6]: the object contours in the image can be depicted with a fuzzy-logic edge detection algorithm, where the gradients are calculated. After the fuzzy processing, the images are converted to binary images for the last step of object detection. Ratio constraints applied on Binary Large Objects (BLOBs) filter the possible road signs, and the bounding box is cropped as the ROI. In other work using the HSV color space [11], selective search locates the contours. Unnecessary contours of background objects are ignored, and the remaining contours are merged hierarchically. The selective method is tested in different color spaces, and the color space showing the best detection accuracy is selected for further processing. In another color-based segmentation approach, road sign detection succeeds by applying criteria on the HSV channels respectively [13]. Color quantization reduces the effect of noise; afterward, the ROI is determined by checking the shape of the bounding box. Color segmentation can be followed by shape segmentation [12], which applies pattern matching and edge detection. In the pattern matching step, templates in binary form are compared with the candidate images using the ratio of matched pixels to white pixels, while edge detection performs a double check to determine whether the shape in the candidate image matches that of the template.
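As a concrete illustration of HSV-based color segmentation, the sketch below thresholds a single pixel for the red border typical of prohibition signs. The hue, saturation, and value cutoffs are illustrative assumptions, not values taken from the cited papers; a real detector would apply the mask image-wide and then run the BLOB/contour filtering described above.

```python
import colorsys

def is_red_sign_pixel(r, g, b, sat_min=0.45, val_min=0.25):
    """Return True if an RGB pixel (components in [0, 1]) looks like the
    red border of a road sign. Red hue wraps around 0 in HSV, so pixels
    with hue near 0 or near 1 are both accepted. Thresholds are illustrative."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return (h < 0.04 or h > 0.93) and s >= sat_min and v >= val_min

# A saturated red passes; blue and washed-out gray pixels do not.
print(is_red_sign_pixel(0.80, 0.10, 0.10))  # bright red border pixel -> True
print(is_red_sign_pixel(0.20, 0.20, 0.80))  # blue background pixel   -> False
```

The hue test rather than raw RGB thresholds is what gives such masks their robustness to illumination changes, since brightness mostly moves the V channel.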
Differing from the common color- or shape-based object detectors, a general purpose graphics processing unit (GPU) implementation [16] extracts Byte-MCT features to get the ROI and further applies a landmark-based parallel-window searching algorithm. This algorithm is also based on AdaBoost training, which aggregates several weak classifiers into one strong classifier. Detection has sometimes been tailored to the environment with good performance: the Laplacian operator on the red channel works well for locating speed limit signs,


with the help of evaluating the displacement of pixels between two adjacent frames in video clips [14]. It focuses on recognizing the speed limit sign, which is regular in shape, with a red circle externally and white inside. Color segmentation is efficient because the color ratios are similar for certain types of road signs in a certain region [15]; there, Hessian-based BLOB detection is used and SURF is applied for the recognition step. Based on detecting the side paint of the lane, a three-dimensional (3D) model is built to identify the possible region of the appearance of the traffic signs. The maximally stable extremal regions are used with the HSV color space to extract the ROI by considering the adjacent frames. Optical character recognition is made robust by calculating the depth of the texts on the road sign plates [17]. However, some of those implementations are restricted to recognizing only a limited number of road signs. Between detection and classification, feature extraction works as the conjunction. Localization refinement [18] crops the bounding box of road signs based on shape and color. Road sign templates have a fixed portion of intrinsic color; to quantify the similarity of the distinct color portion, an energy-minimization-based function is analyzed, where terms such as shape and smoothness are the decisive parameters valuing the similarity. The histogram of oriented gradients (HOG) variant (HOGv) [19] is an improved descriptor based on the HOG descriptor. HOGv considers both the contrast-sensitive and contrast-insensitive orientations, so more details are included without missing important features. Besides, HOGv reduces the dimension of the extracted feature information by normalizing the oriented histogram of pixels.
More flavors of different HOG-based descriptors have been introduced as cascaded methods; descriptors such as Integral HOG and Compressed Integral HOG collaborate together [20]. In the classification step, ANNs and SVMs are frequently used as classifiers. The SURF algorithm is used to extract processed features: it checks the gradients of the image and marks the interesting points with a diameter. Subsequently, the output vectors of the SURF algorithm are used by a multilayer perceptron [6]. Similarly, CNNs such as GoogLeNet and AlexNet show decent classification accuracy of around 80% [11]. LeNet is the CNN introduced by LeCun et al. [5] and used for the classification of the MNIST handwriting database [21]; Sermanet and LeCun [10] applied a multiscale CNN to traffic sign recognition. Recently, texture-based features coupled with higher order spectra have been used for recognition, where features are used to detect the shape and the content within the image; latent Dirichlet allocation (LDA) is used to distinguish different traffic signs. The superiority of the method is evaluated with the Belgium traffic sign classification and GTSRB data sets [2]. In other work, the ROI is initially detected using color-based segmentation. Sign candidates are captured from the ROI with the support of Haar wavelet features obtained from AdaBoost, and subsequently SURF is used to recognize traffic signs [13]. More recently, a traffic sign has been recognized based on its two-dimensional (2D) pose by estimating its location and precise boundary using a CNN [22]. Another CNN-based method makes use of an SVM to convert an RGB traffic image into a grayscale image. Its structure includes fixed

15.3 Proposed convolutional neural network method

and learnable layers for cropping boundaries and increasing detection accuracy. The method uses the Bootstrap technique to avoid overfitting and improve the classification accuracy. The precision and recall curves of the danger and mandatory categories show the superiority of the technique [23]. In another idea, the 3D point cloud of the object is recorded and squeezed from 3D to 2D with novel feature descriptors. The horizontal single spin image scans the object horizontally and generates the horizontal structure. The vertical quantity and vertical angle accumulation images are scanned vertically, the latter being the improved version with additional spatial information considered. With the 3D information converted to 2D, CNNs are able to take such input for high-level feature learning. Multiple CNNs are used in the fusion module where features are combined [24]. Similarly, the fusion of 2D and 3D images is used to recognize traffic signs. Traditional road sign detection methods are implemented on 2D images, which lose spatial information such as depth and distance; thus both 2D and 3D data are used for the detection and classification of road signs, respectively. There are two neural networks involved: the first one is trained to recognize road signs and their attachments (the concrete pole, for instance) from the 3D data; then the extracted subpatterns, as 2D images, are processed by the second neural network for classification. In the 3D detection step, some preprocessing steps such as k-means are applied before the 3D-Contour Sample Distance feature extraction. A 3D mesh is used to model the shape of the road sign object by storing information about the distance between the center of mass and the surface. With key points drawn, the interpolation step helps to recover the surface of the object, and it also increases the data input for the neural network [25].
Detection and recognition have sometimes been done according to the environment or a certain job. The Laplacian operator on the red channel works well for locating speed limit signs, with the help of evaluating the displacement of pixels between two adjacent frames in video clips [14]. It focuses on recognizing the speed limit and gains an advantage in accuracy; the recognition process is enhanced by locating the 0 that is always the second digit of the speed limit value. In another study, color segmentation is efficient due to the fact that the color ratios are similar in certain types of road signs in a certain region [15], where Hessian-based BLOB detection is used and SURF is applied for the recognition step. However, only a limited number of road signs are tested. More recently, a background-absorbing Markov chain has been used to detect road signs: it removes background and highlights key information. First, the Simple Linear Iterative Clustering method divides the image into superpixels and then records their marginal boundaries [26].

15.3 Proposed convolutional neural network method

The structure of the proposed CNN has four adjacent convolutional layers at the beginning, followed by a MaxPooling layer and three fully connected layers


FIGURE 15.2 Layers of the proposed CNN: a 25 × 25 road sign input, four adjacent convolutional layers (Cl1-Cl4) yielding 17 × 17 × 64 features, a MaxPooling layer yielding 9 × 9 × 64 features, and three fully connected layers (256, 128, and 43 nodes) producing the class label. CNN, Convolutional neural network.

(Fig. 15.2). The image resizing is done at the beginning with an input shape of 25 × 25 pixels. Then, filter maps of size 3 × 3 are applied. The output dimensions of the filter maps are 32, 32, 64, and 64, respectively, among those convolutional layers. The number of features obtained from the fourth convolutional layer is 17 × 17 × 64. To keep the subsequent dot products valid and manageable, we apply the MaxPooling layer between the convolutional layers and the fully connected layers. The MaxPooling transformation shrinks the size of the target representation by half, to 9 × 9 × 64 features. These features are used to train the fully connected layers. The fully connected network contains three layers with 256, 128, and 43 nodes; the 43 nodes of the output layer match the number of classes used for the classification purpose. For the activation functions, the sigmoid is applied in the first two fully connected layers and the softmax is applied on the output layer, while the rectified linear unit is applied among the convolutional layers. The adjacent convolutional layers are useful for extracting hierarchical features. Practically, we need multiple convolutional layers to extract the hierarchical features of an object; a single convolutional layer provides limited information, as the filter sliding window goes through the target representation matrix only once. Each filter map is responsible for extracting a certain feature. For instance, there are some road signs with minor differences: their plates contain a circle, but with arrows inside pointing in different directions. The filter maps, sometimes, can only detect the outer texture (the circle in this case) and cannot distinguish between the directions of the arrow inside. Four continuous convolutional layers are suitable to obtain enough hierarchical details. We use a MaxPooling layer after the four convolutional layers in consideration of the size of the input images.
If the images are resized to a comparatively small size, such as 15 × 15 pixels, then it would be risky to apply a MaxPooling layer, as it shrinks the size by half, and the shrinking may lead to loss of salient information. The fully connected layers are a mandatory part of the CNN; with enough complexity provided in these layers, we can achieve a decent classification accuracy easily.
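The layer dimensions quoted above can be checked with simple shape arithmetic. The sketch below assumes unpadded ("valid") 3 × 3 convolutions with stride 1, which is what reproduces the stated 17 × 17 × 64 output, and a 2 × 2 MaxPooling that rounds the halved size up (i.e., "same"-style padding), giving 9 × 9 × 64.

```python
def conv_out(size, kernel=3, stride=1):
    # Spatial size after an unpadded ('valid') convolution.
    return (size - kernel) // stride + 1

def pool_out(size, pool=2):
    # MaxPooling halves the size, rounding up (ceil division).
    return -(-size // pool)

size = 25                  # input images resized to 25 x 25
for _ in range(4):         # four adjacent convolutional layers: 25 -> 23 -> 21 -> 19 -> 17
    size = conv_out(size)
print(size)                # 17, matching the 17 x 17 x 64 feature maps
size = pool_out(size)
print(size)                # 9, matching the 9 x 9 x 64 features fed to the dense layers
```

Flattening the pooled features gives 9 × 9 × 64 = 5184 inputs to the first 256-node fully connected layer.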


Table 15.1 Distribution of the class sizes of German Traffic Sign Recognition Benchmark database.

Class ID  No. of images    Class ID  No. of images    Class ID  No. of images
   0           210            14          780            28          540
   1          2220            15          630            29          270
   2          2250            16          420            30          450
   3          1410            17         1110            31          780
   4          1980            18         1200            32          240
   5          1860            19          210            33          689
   6           420            20          360            34          420
   7          1440            21          330            35         1200
   8          1410            22          390            36          390
   9          1470            23          510            37          210
  10          2010            24          270            38         2070
  11          1320            25         1500            39          300
  12          2100            26          600            40          360
  13          2160            27          240            41          240
                                                         42          240

15.4 Experimental analysis

The superiority of the proposed CNN method is evaluated on the GTSRB [8] database using fivefold cross-validation and holdout techniques. The GTSRB database consists of 39,209 images of common German traffic signs in 43 classes. The class-wise distribution of road signs is described in Table 15.1. Classification accuracy and computational time are the measures used to validate the superiority of the proposed technique. To compute the experimental results, a computer system with macOS, an Intel Core i5 processor, and 8 GB RAM is used with Python (version 3.6.4) [27] and the Keras library [28].

15.4.1 Preprocessing: impact of input shape of images

An important parameter of the proposed CNN is the input shape; once it is specified, all the input images are resized accordingly before the training process. The default input shape is 64 × 64, and we resize the input shape of images to 25 × 25 to improve the computational efficiency. For testing the performance of our proposed parameters, images of classes 1, 2, 12, and 13 from GTSRB [8] are selected first as the sample classes, because those four classes hold the largest numbers of images among all the 43 classes.


Table 15.2 Proposed convolutional neural network performance comparison with different input shapes for four classes.

Input shape                       Classification accuracy (%)    Average training time per epoch (s)
Default input shape (64 × 64)     99.37                          78.9
Revised input shape (25 × 25)     99.25                           8.8

The splitting strategy is applied for each class with a 4:1 split ratio between training and test sets. The proposed CNN method is trained for 10 epochs, and the average classification accuracy and time elapsed per epoch are recorded, as indicated in Table 15.2. Table 15.2 reveals that the revised input shape (i.e., from 64 × 64 to 25 × 25) increases the training efficiency significantly, as the average time spent per epoch decreases from 78.9 to 8.8 seconds.

15.4.2 Preprocessing using contrast limited adaptive histogram equalization

The contrast of the image is enhanced by using the CLAHE algorithm [29], which equalizes the distribution of brightness of the image. However, CLAHE is applicable to a single color channel, so its direct output is a gray image. To get the colored output of CLAHE, we convert the image color space from RGB to HSV and then apply CLAHE on the Value channel of HSV. With the other two channels unchanged, all three HSV channels are recombined. Finally, the color space conversion is done from HSV back to RGB to obtain the RGB image. The CLAHE process is shown in Figs. 15.3-15.5, where the gray output and colored output images follow the original image.
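The channel plumbing above (RGB → HSV, equalize V, recombine, HSV → RGB) can be sketched with plain global histogram equalization standing in for CLAHE. CLAHE additionally tiles the image and clips each tile's histogram, so the NumPy helper below is a simplified stand-in for illustration, not a drop-in replacement for a CLAHE library routine.

```python
import numpy as np

def equalize_channel(v):
    """Global histogram equalization of a uint8 channel (e.g., the V channel
    of an HSV image). CLAHE would do this per tile with a clip limit; here
    the whole channel is equalized at once as a simplified stand-in."""
    hist = np.bincount(v.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                       # first nonzero CDF value
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    return lut.astype(np.uint8)[v]                  # map every pixel through the LUT

# A low-contrast channel (values 100..139 only) is stretched to the full range.
v = np.tile(np.arange(100, 140, dtype=np.uint8), (40, 1))
out = equalize_channel(v)
print(out.min(), out.max())                         # 0 255
```

In the colored pipeline, `out` would replace the V channel before converting HSV back to RGB, leaving hue and saturation untouched.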

15.4.3 Comparison of the proposed CNN against LeNet using holdout and cross-validation

In the holdout strategy, we split the database of 39,209 traffic sign images into an 80% training set and a 20% test set. When applying the cross-validation technique, fivefold cross-validation handles the database, ensuring the 4:1 split ratio between training and test sets. Based on the holdout and cross-validation techniques combined with CLAHE, we get four possible strategies in total (Tables 15.3 and 15.4): under each of the holdout and cross-validation techniques, there are two strategies, without CLAHE applied and with CLAHE applied. The epoch-wise classification accuracies and the computational time elapsed are recorded in Figs. 15.6 and 15.7. As shown in Fig. 15.6, those four classification accuracy curves nearly overlap from epoch 10, and the curve of the "without CLAHE under holdout" strategy


FIGURE 15.3 Original image before applying CLAHE. CLAHE, Contrast limited adaptive histogram equalization.

FIGURE 15.4 Gray image after applying CLAHE. CLAHE, Contrast limited adaptive histogram equalization.


FIGURE 15.5 Colored image after applying CLAHE. CLAHE, Contrast limited adaptive histogram equalization.

Table 15.3 Comparison of classification accuracy between proposed convolutional neural network (CNN) and LeNet using holdout and cross-validation techniques.

              Classification accuracy       Classification accuracy
              using holdout (%)             using cross-validation (%)
Methods       Without CLAHE   With CLAHE    Without CLAHE   With CLAHE
Proposed CNN  97.96           96.82         97.27           96.13
LeNet         94.78           95.61         93.33           95.10

CLAHE, Contrast limited adaptive histogram equalization.

Table 15.4 Comparison of time elapse between proposed convolutional neural network (CNN) and LeNet using holdout and cross-validation techniques.

              Holdout approach              Cross-validation
              (time elapse in seconds)      (time elapse in seconds)
Methods       Without CLAHE   With CLAHE    Without CLAHE   With CLAHE
Proposed CNN  373             362           395             383
LeNet         135             173           146             167

CLAHE, Contrast limited adaptive histogram equalization.


FIGURE 15.6 Classification accuracy of four strategies epoch-wise of the proposed CNN. CNN, Convolutional neural network.

FIGURE 15.7 Time elapse of four strategies of proposed CNN. CNN, Convolutional neural network.

dominates with the highest accuracy in almost every epoch. When we check the time elapsed for those four strategies (Fig. 15.7), the two strategies under the holdout approach take less time than their counterparts under the cross-validation approach.
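The two evaluation protocols differ only in how the 4:1 split is produced: holdout draws it once, while fivefold cross-validation rotates the held-out fifth so every image is tested exactly once. A minimal index-level sketch (plain Python; a real run would additionally shuffle and stratify the indices):

```python
def kfold_indices(n, k=5):
    """Yield (train, test) index lists for k-fold cross-validation.
    With k = 5, each split has the 4:1 train/test ratio used in this chapter."""
    folds = [list(range(i, n, k)) for i in range(k)]  # k disjoint folds
    for i in range(k):
        test = folds[i]                               # fold i held out once
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(kfold_indices(10, k=5))
print(len(splits))                           # 5 rotations
print(len(splits[0][0]), len(splits[0][1]))  # 8 train, 2 test
```

Averaging the per-fold accuracies gives the cross-validation figures reported above, at the cost of training k models instead of one, which is why the holdout strategies finish faster.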


By applying the same four strategies based on holdout and cross-validation, we compare the proposed CNN and LeNet in classification accuracy and time elapsed. The results are demonstrated in Tables 15.3 and 15.4, respectively; the classification accuracy obtained at the last epoch is shown in Table 15.3. The time elapsed for LeNet is only about 36%-48% of that of the proposed CNN (Table 15.4 and Fig. 15.7), due to its comparatively simpler CNN structure. However, the classification accuracy of the proposed CNN outperforms that of LeNet in every combination, by up to 4%. Besides, the variance of the classification accuracy of the proposed CNN is 4.46 × 10⁻⁵, compared to 7.20 × 10⁻⁵ for LeNet. Though the proposed CNN suffers in time elapsed compared to LeNet, it has better classification accuracy and is more robust against the different processing strategies, with a lower variance of classification accuracy.

15.4.4 Comparison of proposed CNN against ANN and SVM using holdout and cross-validation

As shown in Figs. 15.8 and 15.9, under the holdout approach, our proposed CNN method outperforms the other two counterparts (ANN and SVM) both in classification accuracy and time elapsed. The proposed method shows an improvement of 3% and 46% in classification accuracy over SVM and ANN, respectively. It is observed that the performance of ANN is not appreciable.

FIGURE 15.8 Comparison of classification accuracy between ANN, SVM, and proposed CNN using holdout method. CNN, Convolutional neural network; SVM, support vector machine.


FIGURE 15.9 Comparison of computational time between ANN, SVM, and proposed CNN using holdout method. CNN, Convolutional neural network; SVM, support vector machine.

Further, we compare the proposed CNN with an SVM implementation with fivefold cross-validation applied. The proposed CNN outperforms the SVM implementation, with a classification accuracy of 97.3% versus 94.2% for the SVM, while the time elapsed for the two methods is almost the same, as shown in Figs. 15.10 and 15.11. In brief, our method is robust against noisy images as compared to the ANN and SVM methods in terms of classification accuracy and computational time.

15.4.5 Comparison of proposed CNN method against kNN, CART, and random forest

Recently, apart from the growing number of implementations using neural networks, random forest (RF) classification is one of the methods with decent accuracy and a light training cost. After the segmentation step, the target road signs are centralized; then HOG features contribute the classification criteria, using the GTSRB database with the same training and test split ratio. Among the candidate features, we select Hu moments, Haralick texture, and the color histogram. Weighted Hu moments are robust to transformations of the images [30]. The Haralick texture can reveal important information embedded in the texture of different road signs [31], especially when they have the same external frame but minor differences inside, like signs with different speed limits. Histograms over various colors are another option. We have found that the more features applied, the better, but the improvement of the accuracy is limited and comes at the expense of processing time (interestingly, the feature extraction time of three features is


FIGURE 15.10 Comparison of classification accuracy between SVM and proposed CNN using cross-validation techniques. CNN, Convolutional neural network; SVM, support vector machine.

FIGURE 15.11 Comparison of computational time between SVM and proposed CNN using cross-validation techniques. CNN, Convolutional neural network; SVM, support vector machine.


FIGURE 15.12 Time elapse of the proposed CNN (373 s) and other methods (RF, 636 s; CART, 534 s; kNN, 923 s). CART, Classification and regression trees; CNN, convolutional neural network; RF, random forest.

smaller than that of single or two features). Next, we choose these three features and implement road sign recognition using the kNN, CART (classification and regression trees), and RF technologies. The RF method is a collection of multiple decision trees and shows a huge improvement as compared to CART. It is observed that kNN has a classification accuracy similar to that of RF, but its training time expense is larger. However, our proposed CNN method outperforms them in both accuracy and time efficiency (Figs. 15.12 and 15.13). The proposed method has the best classification accuracy among the traditional implementations (kNN, CART, and RF): CART struggles to predict the right label, while kNN and RF have similar performance in accuracy, but still around 10% less than the proposed method. As we see in Fig. 15.14, optimizing the various strategies of the RF method to achieve higher accuracy is an empirical process, which is often very time-consuming. Our method, however, evades such a troublesome process and still performs decently on classification. In the backpropagation process, the weights are adjusted automatically and repeatedly according to the loss function. The optimized weights can be achieved given enough iterations, and there is potential to gain better classification accuracy with proper tuning of CNN parameters such as the learning rate, optimizer function, number of layers, and normalization methodologies.

15.4.6 Why does the proposed CNN outperform other methods?

Among traditional classification methods, kNN is one of the easiest machine learning algorithms. It is a lazy learner, as the adoption of previously neglected or


FIGURE 15.13 Classification accuracies of proposed CNN (0.975) and other methods (RF, 0.872; CART, 0.508; kNN, 0.863). CART, Classification and regression trees; CNN, convolutional neural network; RF, random forest.

FIGURE 15.14 Classification accuracies of the various strategies using random forest: color histogram, 0.827; Haralick texture + color histogram, 0.86; Hu moments + Haralick texture + color histogram, 0.872.

new data is often delayed, and it costs extra time to recompute the distances to all training data for each prediction. For the decision tree classifier, it is hard to judge the depth of the binary decision splits: overfitting arises when the split steps exceed the proper number. As a collection of decision trees, the RF method costs much in tuning the structure of all the trees. For all of these methods, the process of tuning is difficult and the time is costly,


considering the selection of features out of the enormous number of candidates. The filters of the CNN, in contrast, are convenient: it is flexible to set all the parameters, such as the size of the filters, and the filters are also customizable with certain programming packages. For the ANN and SVM methods, feature extraction must be done separately before training the classifier; as mentioned earlier, SURF and HOG are the feature descriptors used for ANN and SVM, and they extract and convert features of the images into useful data for the classifier. Different from the traditional classifiers, in the CNN the filter maps do the job of feature extraction. Depending on the number of filter maps we apply in different layers, they are capable and flexible enough to learn the feature information, which can lead to a higher classification accuracy as long as a proper structure of the layers is given. The CNN contains the feature descriptor in itself, so the computational time is comparatively reduced. LeNet has a comparatively simpler structure, with two convolutional layers compared to four in the proposed CNN, to extract salient features. It was proposed for classifying handwritten and machine-printed characters [21], in which the features, such as Arabic numerals, are easy to analyze. So it does not perform as well as our method on complicated objects such as road signs, where the features are diverse and difficult to learn: we have colored images with interference from the environment, shade, blocking objects, rotation, illumination, and so on. The simply designed structure is not suitable for such a heavy task, and additional structure is needed for the feature extraction.
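The lazy-learning cost attributed to kNN above is visible even in a minimal sketch: there is no training phase, and every prediction rescans the entire stored training set, so inference time grows with the number of training samples. The data and labels below are illustrative toys, not GTSRB features.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    All the work happens at prediction time: distances to every stored
    sample are computed for each query, which is the 'lazy' cost of kNN."""
    neighbors = sorted((math.dist(x, query), label) for x, label in train)[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Two toy clusters standing in for two road-sign classes.
train = [((0, 0), "stop"), ((0, 1), "stop"), ((5, 5), "limit"), ((5, 6), "limit")]
print(knn_predict(train, (0.2, 0.5)))   # stop
print(knn_predict(train, (5.1, 5.4)))   # limit
```

A CNN pays its cost once, during training, and then classifies each image with a fixed number of operations; kNN inverts that trade-off, which is why its time elapse in Fig. 15.12 is the largest despite having no training step at all.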

15.5 Conclusion

The proposed system fully utilizes the advantages of the CNN and takes the image input directly, without much image preprocessing needed. The proposed method is robust against road sign images with a variety of noise and occlusions. The accuracy of the proposed system is promising with a simple customized neural network structure. The time spent in training is also affordable, achieved by analyzing the sizes of the images to set the most suitable input shape parameter. In the comparison with LeNet, though LeNet saves computational time due to its comparatively simple CNN structure, our work leads to better classification accuracy. Compared to other implementation methods such as ANN, SVM, kNN, decision tree (CART), and RF, the proposed system outperforms with higher classification accuracy and less time elapsed.

References

[1] Law Offices of Michael Pines, Top 25 causes of car accidents. <https://seriousaccidents.com/legal-advice/top-causes-of-car-accidents/>, 2018.


[2] A. Gudigar, S. Chokkadi, U. Raghavendra, U.R. Acharya, Local texture patterns for traffic sign recognition using higher order spectra, Pattern Recognit. Lett. 94 (2017) 202–210.
[3] B. Stecanella, An introduction to support vector machines (SVM). <https://monkeylearn.com/blog/introduction-to-support-vector-machines-svm/>, 2017.
[4] S. Das, CNN architectures: LeNet, AlexNet, VGG, GoogLeNet, ResNet, and more. <https://medium.com/@sidereal/cnns-architectures-lenet-alexnet-vgg-googlenet-resnet-and-more-666091488df5/>, 2017.
[5] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition, Proc. IEEE 86 (11) (1998) 2278–2324.
[6] Z. Abedin, P. Dhar, M.K. Hossen, K. Deb, Traffic sign detection and recognition using fuzzy segmentation approach and artificial neural network classifier respectively, in: 2017 International Conference on Electrical, Computer and Communication Engineering (ECCE), 2017, pp. 518–523.
[7] C. Li, C. Yang, The research on traffic sign recognition based on deep learning, in: 2016 16th International Symposium on Communications and Information Technologies (ISCIT), 2016, pp. 156–161.
[8] S. Houben, J. Stallkamp, J. Salmen, M. Schlipsing, C. Igel, Detection of traffic signs in real-world images: the German traffic sign detection benchmark, in: International Joint Conference on Neural Networks, no. 1288, 2013.
[9] H. Pokharna, The best explanation of convolutional neural networks on the internet. <https://medium.com/technologymadeeasy/>, 2019.
[10] P. Sermanet, Y. LeCun, Traffic sign recognition with multi-scale convolutional networks, in: Proceedings of International Joint Conference on Neural Networks (IJCNN'11), 2011.
[11] S. Huang, H. Lin, C. Chang, An in-car camera system for traffic sign detection and recognition, in: 2017 Joint 17th World Congress of International Fuzzy Systems Association and 9th International Conference on Soft Computing and Intelligent Systems (IFSA-SCIS), 2017, pp. 1–6.
[12] A. Broggi, P. Cerri, P. Medici, P.P. Porta, G. Ghisio, Real time road signs recognition, in: 2007 IEEE Intelligent Vehicles Symposium, 2007, pp. 981–986.
[13] L. Chen, Q. Li, M. Li, Q. Mao, Traffic sign detection and recognition for intelligent vehicle, in: 2011 IEEE Intelligent Vehicles Symposium (IV), 2011, pp. 908–913.
[14] Y. Kageyama, K. Suzuki, C. Ishizawa, T. Suzuki, Extraction and recognition of speed limit signs in night-scene videos, J. Inst. Ind. Appl. Eng. 6 (2018) 29–33.
[15] A. Alam, Z.A. Jaffery, Indian traffic sign detection and recognition, Int. J. Intell. Transp. Syst. Res. (2019) 1–15.
[16] K. Lim, Y. Hong, Y. Choi, H. Byun, Traffic sign detection using a cascade method with fast feature extraction and saliency test, PLoS One 12 (3) (2017) 1–22.
[17] N. Ben Romdhane, H. Mliki, M. Hammami, An improved traffic signs recognition and tracking method for driver assistance system, in: 2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS), 2016, pp. 1–6.
[18] Z. Zhu, J. Lu, R.R. Martin, S. Hu, An optimization approach for localization refinement of candidate traffic signs, IEEE Trans. Intell. Transp. Syst. 18 (11) (2017) 3006–3016.
[19] Z. Huang, Y. Yu, J. Gu, H. Liu, An efficient method for traffic sign recognition based on extreme learning machine, IEEE Trans. Cybern. 47 (4) (2017) 920–933.


[20] D. Wang, X. Hou, J. Xu, S. Yue, C. Liu, Traffic sign detection using a cascade method with fast feature extraction and saliency test, IEEE Trans. Intell. Transp. Syst. 18 (12) (2017) 3290–3302.
[21] Y. LeCun, C. Cortes, MNIST handwritten digit database. <http://yann.lecun.com/exdb/mnist/>.
[22] H.S. Lee, K. Kim, Simultaneous traffic sign detection and boundary estimation using convolutional neural network, IEEE Trans. Intell. Transp. Syst. (2018) 1652–1663.
[23] Y. Wu, Y. Liu, J. Li, H. Liu, X. Hu, Traffic sign detection based on convolutional neural networks, in: The 2013 International Joint Conference on Neural Networks (IJCNN), 2013, pp. 1–7.
[24] Z. Luo, J. Li, Z. Xiao, Z.G. Mou, X. Cai, C. Wang, Learning high-level features by fusing multi-view representation of MLS point clouds for 3D object recognition in road environments, ISPRS J. Photogramm. Remote Sens. 150 (2019) 44–58.
[25] D.R. Bruno, D.O. Sales, J. Amaro, F.S. Osrio, Analysis and fusion of 2D and 3D images applied for detection and recognition of traffic signs using a new method of features extraction in conjunction with deep learning, in: 2018 International Joint Conference on Neural Networks (IJCNN), 2018, pp. 1–8.
[26] Z. Zhu, G. Xu, H. He, J. Jiang, T. Wang, Recognition of speed signs in uncertain and dynamic environments, J. Phys.: Conf. Ser. 1187 (2019) 1–6.
[27] Sphinx, The Python standard library—Python 3.6.4 documentation. <https://docs.python.org/>, 2019.
[28] F. Chollet, et al., Keras. <https://keras.io/>, 2015.
[29] G. Bradski, The OpenCV library, Dr. Dobb's J. Software Tools 25 (2000) 120–125.
[30] L. Ma, C. Xu, G. Zuo, B. Bo, F. Tao, Detection method of insulator based on faster R-CNN, in: 2017 IEEE 7th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), 2017, pp. 1410–1414.
[31] E. Brewster, J. Keller, M. Popescu, Detection method of insulator based on faster RCNN, in: Detection and Sensing of Mines, Explosive Objects, and Obscured Targets XXII, 2017, 101821F.


CHAPTER 16

Machine learning behind classification tasks in various engineering and science domains

Tilottama Goswami

Department of Computer Science and Engineering, Anurag Group of Institutions, Hyderabad, India

16.1 What are classification tasks?

We all learn from past experiences, and those experiences play a key role in steering our decisions in the right direction. But how does the brain decide what to do and what not to do? Neurological science says that "the interactions between processes of learning, memory, and decision-making are controlled by signals of the prefrontal cortex." The neurons of our brain link with each other and activate themselves to conclude a decision, and how the brain decides has become a prime focus for scientists. Like human beings, machines can also make decisions.

In our lives we come across various situations where decision-making means choosing one item among two or many. To take such decisions, we consider certain parameters that act as the factors on which the decisions depend. For example, to buy a car we consider factors such as price, mileage, fuel consumption, and color; comparing these factors for different categories of cars, we finally conclude which one to buy. Hence decision-making is part of everyone's daily routine.

Classification is a machine learning (ML) task where the machine decides a yes/no answer depending on its past experiences. Technically, classification is a systematic approach to categorizing data into various discrete groups. The predicted class is called a target or label. The group/label/target can be binary in nature, such as yes/no, true/false, malignant/nonmalignant, or spam/not spam; this learning task is called binary classification. If there are more than two groups/labels/targets, the learning task is called multiclass classification, such as N types of faults, or three types of severity (low/medium/high). Binary and multiclass classification may be single- or multilabel. In single-label classification an instance belongs to only one label/class, disjoint from the others. The examples cited above are all single label, binary or multiclass: the assigned classes are disjoint, and an instance cannot be assigned both, such as spam and not spam.

Cognitive Informatics, Computer Modelling, and Cognitive Science, Volume 1. DOI: https://doi.org/10.1016/B978-0-12-819443-0.00016-7 © 2020 Elsevier Inc. All rights reserved.


Problems where one instance can have a set of targets are multilabel classifications. In multilabel classification an instance can belong to more than one conceptual class. For example, a patient may suffer from two or three diseases, or in semantic scene classification a photograph may fall into two classes such as desert and sunset. A movie's rating category can be multiclass (U/A, U, A, which are disjoint), while its genre can be multilabel, such as comedy, romance, tragedy, and drama (nondisjoint). ML classification applications are popular in almost every sector: the electrical industry, the electronics sector, business and commercial hubs, agriculture, research and development across the sciences, the medical industry, and so on.
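The three target shapes can be made concrete with a short Python sketch. The labels and genre list below are hypothetical, chosen only to mirror the examples above; a multilabel target is commonly encoded as an indicator vector with one bit per label.

```python
# Illustrative (hypothetical) targets for the three task types.
binary_target = "spam"                      # one of {"spam", "not spam"}
multiclass_target = "medium"                # one of {"low", "medium", "high"}
multilabel_targets = {"comedy", "romance"}  # any subset of the genre set

# A multilabel target becomes an indicator vector: several bits may be set.
genres = ["comedy", "drama", "romance", "tragic"]
indicator = [1 if g in multilabel_targets else 0 for g in genres]
print(indicator)  # -> [1, 0, 1, 0]
```

Note that a single-label target would set exactly one bit, whereas the multilabel vector above sets two.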

16.2 Classification tasks in engineering and science domains

Prediction and analysis play a key role in the problem statements to be solved in engineering and science domains. In the medical field, prognosis can use machine intelligence to classify the degree of a disease outcome from a given dataset of signs and symptoms. Medical science is a prominent area where the implementation of artificial intelligence (AI)/ML is rising exponentially: treatments of patients are more systematic and data based, and there is a high possibility of automatic report generation from scanned data. Popular datasets on which classification algorithms can be used concern breast cancer, cervical cancer, nutrient requirements in the body, and so on. Interestingly, nature itself is being analyzed with ML in the field of geology: surveys of hundreds of years of natural-resource data help geologists make ML-based predictions about rock behavior and the salt and nutrient content of soil. The electrical power system implements classification in different use cases, such as whether the system is stable or not and whether a transmission line is faulty or healthy; it is useful to classify faults such as single line to ground faults, double line faults, double line to ground faults, and triple line faults. The electronics industry has a wide variety of devices where the classification of diodes, switches, rectifiers, and other components is a compulsory task. Classification is very helpful in the business sector as well: starting from user customization and recommendation systems, the algorithms have spread widely into business modeling, better customer segmentation, and so on. Finance and the stock market are among the sectors where ML tools are most heavily implemented. Stock prediction, classification of higher-profit stocks, and analysis of shares and return on profit help investors take better decisions, and companies prefer a good classification model that suggests which items of production will bring the maximum gain on return.


Around 70% of India's revenue depends on agriculture. In the field of agricultural science, the classification of crops based on soil types and availability of the water table can be automated. Irrigation of crops is based largely on weather prediction and rainfall, and each variety of crop has its own favorable soil and climatic conditions for maximum cultivation. Crop classification based on the suitable area and weather for irrigation is one of the major applications of ML in the agricultural industry; the kind of fertilizer required to improve crop growth is another use case. Rural development involves technical advancement in poultry farming, cattle rearing, and dairy products as well. The study of the earth is called geology. The study of rocks, the occurrence of earthquakes, the nature of landforms, the age of mountains and rocks, the stretch of plateaus and plains, and the habitats of each landform are all part of our geography, and scientists are using AI and ML for each of these use cases. The huge amount of earth data from satellites provides the information on which systems train, and the trained system then performs classification across various scenarios. Classification tasks that can be applied to data from any domain include image classification, speech recognition and classification, audio and video classification, text classification, and pattern recognition. Traditional ML is a statistical approach to AI; a cognitive approach is one that works like a human brain, and neural networks and deep learning are cognitive approaches. The next section presents the different approaches/algorithms for classification.

16.3 Machine learning classification algorithms

Real-time analysis may not provide the target class or prior experience in all cases of decision-making. Sometimes industries may expect the system to make its own decisions without any past experience: although assigned target values make decision-making fast and easy, a real-time scenario may not supply any experimental value in the target class. In such circumstances it is essential for the system to decide and assign its own predicted target class. On the basis of these situations, classification tasks can be categorized as supervised and unsupervised classification. Supervised classification: This is classification with given target values for each set of attributes. It makes decision-making easy and convenient for the algorithms, as the predefined result class already exists for each set of features. Technically, it is the task of categorizing classes from labeled training data: the labels provided in the training set are used to map the result for the testing set. For instance, in a given dataset of animals, the task is to identify the classes dog, cat, hen, sheep, and cow. Given the attributes of color, shape, and size, the classes are provided in the training set and the animal is predicted during the testing phase. There are various options to make the learning


algorithm fit well, in order to predict more accurate results during the testing phase: more training examples; better feature selection (a reduced number of features, or additional features); increasing or decreasing the learning rate; and a cross-validation set, which helps in solving the overfitting problem. Bias and variance factors play an important role: high bias in a simple model results in underfitting, while larger, more complex models have high variance, which results in overfitting. For a neural network, start with a single hidden layer and slowly move toward more hidden layers, guided by the cross-validation set. Unsupervised classification: These are classification methods where the training data is not labeled. Based on a user-selected number of classes, the method classifies the categories and clusters homogeneous classes depending on statistical/distance measures. It therefore turns out to be a very powerful method for real-time industrial applications where it is not always possible to provide labeled training data; the popular approach in such cases is called clustering. Unsupervised learning techniques are faster compared to supervised learning methods. Not much prior expertise is required, but knowledge of the features is indeed needed to identify and classify the classes; this is typically handled by domain experts. Be it supervised or unsupervised, the main motive behind these methods is to classify. Now the question arises: what to classify? Classification can be done on any kind of data, be it text or media (image/audio/video); it primarily distinguishes classes based on feature selection. This chapter considers supervised classification methods, which mainly follow either of two approaches: the statistical approach and the cognitive approach.
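A minimal Python sketch can contrast the two settings. The points, labels, and initial centroids below are hypothetical: the supervised branch uses a labeled training set (here a simple 1-nearest-neighbor rule), while the unsupervised branch receives only a user-chosen number of clusters and groups points by a distance measure.

```python
import math

points = [(1.0, 1.0), (1.2, 0.8), (8.0, 8.0), (8.3, 7.9)]

# Supervised: labels are given at training time.
labels = ["small", "small", "large", "large"]

def predict_supervised(x):
    # 1-nearest-neighbor against the labeled training set
    return min(zip(points, labels), key=lambda pl: math.dist(pl[0], x))[1]

# Unsupervised: only the number of clusters is user-chosen; points are
# grouped by distance to (hypothetical) initial centroids.
centroids = [(1.0, 1.0), (8.0, 8.0)]

def assign_cluster(x):
    return min(range(len(centroids)), key=lambda i: math.dist(centroids[i], x))

print(predict_supervised((1.1, 0.9)))       # -> small
print([assign_cluster(p) for p in points])  # -> [0, 0, 1, 1]
```

A full clustering method would also update the centroids iteratively; the sketch shows only the assignment step that both settings share.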

16.3.1 Statistical methods

There are various algorithms under the statistical approach for supervised learning, which are discussed as follows:

• logistic regression
• Naïve Bayes
• k-nearest neighbor (KNN) classification
• Decision Tree
• support vector machines (SVM)

Along with the core classification algorithms, we will discuss the ensemble classifiers:

• Random Forest
• Bagging
• Boosting (AdaBoost and Gradient Boost)

Each of the classification algorithms has its own principle of classification.


16.3.1.1 Logistic regression

Logistic regression is applicable to two-class outputs. The algorithm makes a binary classification based on one or more independent attributes fitted into a logistic function. It is a special case of linear regression; unlike regression analysis, it classifies categorical data. It is one of the most widely used algorithms for binary classification. The assumptions it makes before fitting are as follows:

• a binomial distribution of classes
• linearity between the logistic function and the independent attributes

y = mx + c  (16.1)

Here y is the dependent variable, whereas x is the independent variable; depending on the value of x, the response variable y can be calculated, with c the constant of the linear function. The function considers a threshold value: when the calculated y crosses the threshold, the instance falls in the positive class, and the rest fall in the negative class.

16.3.1.1.1 Mathematical illustration

For any linear regression the value of f(x) can be any finite value, but that by itself hardly categorizes a class. Hence logistic regression classifies the response toward an upper limit or a lower limit. Let us consider the threshold value as 0.5; then

f(x) ≥ 0.5 → 1 and f(x) < 0.5 → 0

Let us consider the sigmoid function:

hθ(x) = sigmoid(Z)  (16.2)

if Z → ∞, y = 1; if Z → −∞, y = 0

If the estimated probability comes out as 0.7, it means we are 70% confident of the actual value. If the threshold is 0.5 and the predicted output is more than the threshold value, the result is categorized as the positive class, or 1. This is how logistic regression makes a categorical classification from the prediction-analysis approach of the regression technique. There are three types of logistic regression:

1. Binary logistic regression—categorizes two possible classes (e.g., good/bad).
2. Multinomial logistic regression—unordered categorization of three or more classes (e.g., male/female/other).
3. Ordinal logistic regression—ordered categorization of three or more classes (e.g., feedback ratings—1/2/3/4/5).
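The thresholding described above can be sketched in a few lines of Python. The coefficients m and c are hypothetical stand-ins for fitted values; the sketch only shows how the linear score of Eq. (16.1) is squashed by the sigmoid of Eq. (16.2) and compared with the 0.5 threshold.

```python
import math

def sigmoid(z):
    # squashes any finite score into the interval (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def classify(x, m, c, threshold=0.5):
    # linear score z = m*x + c, squashed and then thresholded
    return 1 if sigmoid(m * x + c) >= threshold else 0

# Hypothetical fitted coefficients, for illustration only.
m, c = 2.0, -1.0
print(sigmoid(0))           # -> 0.5, the decision boundary
print(classify(1.0, m, c))  # z = 1.0, probability ~0.73 -> class 1
print(classify(0.0, m, c))  # z = -1.0, probability ~0.27 -> class 0
```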


A case study is discussed and implemented in Section 16.4, for a medical domain dataset.

16.3.1.1.2 Pros

1. A good performance baseline for the more complicated algorithms.
2. High feature-selection capability: features that have only the slightest impact on the dependent variable can be eliminated.
3. Easy and simple to implement; efficient in training the model.
4. Requires less optimization and tuning.

16.3.1.1.3 Cons

1. Being linear in nature, it is inefficient at making nonlinear decisions.
2. Highly prone to overfitting.
3. Needs a lot of data preprocessing prior to learning.

16.3.1.1.4 Applications

1. Voting possibility of any voter
2. Success ratings of any campaign
3. Earthquake prediction

16.3.1.2 Naïve Bayes

This algorithm's working principle relies on Bayes' theorem. In probability and statistics, Bayes' rule describes the probability of an event based on previous information about an attribute conditionally related to the event. In simple words, the theorem finds out how possible the occurrence of an event is based on past experience, which is also what supervised learning infers. In this algorithm, each feature is assumed independent of the other features; hence the name "Naïve" is justified.

16.3.1.2.1 Mathematical illustration

P(c|x) = P(x|c) P(c) / P(x)  (16.3)

P(c|x) represents the probability of class c given predictor x; P(x|c) represents the probability of predictor x given class c; P(c) represents the known probability of the class; and P(x) represents the known probability of the predictor. As an example, let the probability of going to school on a rainy day be the posterior probability P(c|x), and let the previously observed occurrence of this situation be P(x|c). Using Eq. (16.3), we determine the probability of going to school on a rainy day.


There are three types of Naïve Bayes model:

1. Gaussian—this model assumes that the features follow a normal (Gaussian) distribution.
2. Multinomial—this model deals with the frequency of the outcome or response; technically, it counts the number of times the outcome gives a positive result and is hence discrete in nature.
3. Bernoulli—also considered the binomial model, it categorizes the outcomes into only two classes.

The independence assumption on the features makes the classifier flexible enough to learn from higher dimensional attributes with a fixed amount of training data.
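Eq. (16.3) is simple enough to evaluate directly. The probabilities below are hypothetical numbers for the school-on-a-rainy-day example, used only to show the arithmetic of Bayes' rule.

```python
# Hypothetical probabilities, for illustration only.
p_rain = 0.3                # P(x): probability of a rainy day
p_school = 0.9              # P(c): prior probability of going to school
p_rain_given_school = 0.25  # P(x|c): it rained on 25% of school days

# Bayes' theorem: P(c|x) = P(x|c) * P(c) / P(x)
p_school_given_rain = p_rain_given_school * p_school / p_rain
print(round(p_school_given_rain, 3))  # -> 0.75
```

A Naïve Bayes classifier does this for every class and every feature, multiplying the per-feature conditionals together under the independence assumption and picking the class with the largest posterior.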

16.3.1.2.2 Pros

1. Quick at predicting test data for a multiclass dataset.
2. Better performance compared to the baseline accuracy of logistic regression.

16.3.1.2.3 Cons

1. The zero-frequency error refers to a probability of 0 being assigned wherever a class is never observed in the data; this results in inaccurate prediction.
2. The features are assumed mutually independent, so the prediction is a product of the conditional probabilities. This makes the result continuous, while the desired outcome is categorical.

16.3.1.2.4 Applications

1. Text filtering (spam and not spam)
2. Real-time prediction
3. Recommendation systems

16.3.1.3 k-Nearest neighbor

KNN is an algorithm that considers homogeneous data points clustered together to define a class. As the name "nearest neighbor" states, it takes the data points nearest to the test data and classifies among them: the new test data is compared with the classes surrounding it in its neighborhood. The value of k is highly significant in this algorithm, as it denotes the number of nearest neighbors the KNN model considers. For example, if k = 1 the test data is classified with its single nearest neighbor; if k = 2, with its two nearest neighbors; and so on. Examples for various k values are shown in case study 4, which exhibits how performance varies with k; ideally, the model performs best at k = 1. The KNN algorithm is a supervised learning technique and is known as a lazy algorithm: a lazy algorithm is one that performs no explicit training phase and defers all computation to the instant the data is tested. Hence it is also considered a nonparametric algorithm, which refers to the


determination of the model structure from the data itself. The various distance functions available to calculate the nearest neighborhood range are the Euclidean, Manhattan, Chebyshev, Minkowski, and Mahalanobis distances, among others.
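The KNN vote, with Euclidean and Manhattan distances as pluggable distance functions, can be sketched in a few lines of Python; the tiny training set below is hypothetical.

```python
import math
from collections import Counter

def euclidean(a, b):
    return math.dist(a, b)

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def knn_predict(train, query, k=3, distance=euclidean):
    # train: list of (point, label); vote among the k nearest points
    nearest = sorted(train, key=lambda pl: distance(pl[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Tiny hypothetical dataset: two "cat" points, three "dog" points.
train = [((1, 1), "cat"), ((1, 2), "cat"),
         ((6, 6), "dog"), ((7, 6), "dog"), ((6, 7), "dog")]
print(knn_predict(train, (2, 1), k=3))  # two cats among the 3 nearest -> cat
print(knn_predict(train, (2, 1), k=3, distance=manhattan))
```

Note that there is no training step: the whole labeled set is carried around and scanned at query time, which is exactly the lazy-learner cost discussed below.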

16.3.1.3.1 Pros

1. A simple and fast algorithm compared to the previous algorithms.
2. It places no constraint on nonlinear data.
3. Very flexible, as it can be applied to both classification and regression tasks.
4. As far as distance calculation is concerned, it offers several choices of distance function.

16.3.1.3.2 Cons

1. High saturation of data, as the testing instances are involved in an instantaneous learning approach.
2. KNN, being a lazy learner, occupies a large space and needs considerable computational time during testing; hence it is difficult to apply to large datasets.
3. It is biased toward the homogeneous data points at neighborhood boundaries, which lowers prediction accuracy.

16.3.1.3.3 Applications

1. Recommendation systems
2. Security systems

16.3.1.4 Classification Decision Tree

A Decision Tree classifies outcomes based on the decisions it takes. It forms a tree structure with yes/no conditions by which the classes are segregated and labeled. It is a supervised approach in which past data is provided while the algorithm learns. It works on the principle of "divide and conquer," which makes it a condition-based algorithm: the approach poses a series of questions to split the data and make the classification more generic. In general, the Decision Tree is applied as a binary tree requiring only two classes to be classified; for multiclass classification the Decision Tree used is C4.5. To classify, the Decision Tree uses the Gini index, a measure that determines the likelihood of the occurrence of an event by calculating the likelihood of each subnode for both classes of a binary classification:

Gini index = √(P² + q²)  (16.4)

In Eq. (16.4), P denotes the probability of the positive class and q denotes the probability of the negative class in a binary classification. The stopping condition is one of the crucial points for a Decision Tree: it enables the tree to stop partitioning and decides whether to go further, hence measuring at what depth the tree learns.
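The split measure of Eq. (16.4) can be computed directly. The sketch below follows the formula as printed, sqrt(P² + q²); note that many references instead use the Gini impurity, 1 - (p² + q²), which is included here only for comparison.

```python
import math

def gini_index_book(p):
    # Formula as given in Eq. (16.4): sqrt(P^2 + q^2), with q = 1 - P
    q = 1.0 - p
    return math.sqrt(p**2 + q**2)

def gini_impurity(p):
    # The widely used Gini impurity for a binary node, for comparison
    q = 1.0 - p
    return 1.0 - (p**2 + q**2)

print(gini_index_book(0.5))  # smallest at a 50/50 split
print(gini_impurity(0.5))    # impurity is largest at a 50/50 split
print(gini_impurity(1.0))    # a pure node has zero impurity
```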


16.3.1.4.1 Pros

1. A very comprehensible method that makes learning easy
2. Low time complexity
3. Does not require much data preprocessing

16.3.1.4.2 Cons

1. Prone to overfitting: a Decision Tree overfits the data, and it can be pruned to optimize model performance.
2. The major drawback of the Decision Tree approach is that while partitioning the data it loses important information and therefore may not give good accuracy.

16.3.1.4.3 Applications

1. Data manipulation
2. Handling of missing values
3. Variable selection

16.3.1.5 Support vector machine

SVM is an algorithm that classifies data points using a hyperplane. Some nonlinear datasets cannot be separated with a single line of separation; under such conditions, the line of separation must be made in a higher dimension. Basically, SVM considers the degree of its dimension to be equivalent to the number of classes it predicts: to classify two classes it needs a separator in a hyperplane of two dimensions; similarly, to classify n classes the separator has n dimensions. Technically, there are n hyperplanes in the higher dimensional space, and the most optimal hyperplane must be selected from among them. The support vectors are the points, or vectors, that define the hyperplane. The margin width is the distance between the planar sides; the larger the margin width, the more optimal the hyperplane, so a maximum margin width is always advisable. Ideally, the hyperplane should classify the classes perfectly, but in real data the points may overlap. Prediction using SVM lies at the outer proximity of the statistical methods and takes on some flavor of the cognitive approach to classification.
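A linear SVM's decision rule and margin can be sketched with hypothetical parameters w and b, stand-ins for values a trained model would produce: the sign of w · x + b gives the class, and for a max-margin separator the margin width is 2/||w||.

```python
import math

# Hypothetical linear SVM parameters (weight vector w and bias b);
# not a fitted model, chosen only to make the arithmetic visible.
w = (1.0, 1.0)
b = -6.0

def decision(x):
    # signed score w . x + b; its sign gives the predicted class
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def predict(x):
    return 1 if decision(x) >= 0 else -1

def margin_width():
    # geometric margin of a max-margin separator: 2 / ||w||
    return 2.0 / math.hypot(*w)

print(predict((5.0, 5.0)))  # w.x + b = 4  -> class +1
print(predict((1.0, 1.0)))  # w.x + b = -4 -> class -1
print(round(margin_width(), 3))
```

Training an SVM amounts to choosing w and b to maximize this margin subject to the training points being on the correct sides; a kernel replaces the dot product when the data is not linearly separable.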

16.3.1.5.1 Pros

1. Higher dimensionality results in outstanding classification of the classes.
2. It is no longer confined within the boundaries of the statistical methods; rather, it works toward the cognitive approach.
3. It is less prone to overfitting the data.

16.3.1.5.2 Cons

1. Needs more computation time, owing to its higher dimensional approach.
2. Kernel selection is complex.


16.3.1.5.3 Applications

1. Face detection
2. Handwriting recognition
3. Bioinformatics
4. Image classification

16.3.1.6 Ensemble classifier

An ensemble classifier improves the performance of a classification algorithm. For any real-time classification task, the algorithms provide satisfactory performance but cannot ideally reach 100% accuracy, due to certain loopholes technically termed variance error. Ensemble algorithms focus on reducing the variance error and moving the model closer to full accuracy. The dominant ensemble classifiers are Bagging, Boosting, and Random Forest.

• Bagging—Bootstrap aggregating, abbreviated Bagging, is a typical ensemble method that uses a traditional Decision Tree as its inner working principle. It works on the principle of bootstrapping a subset of features for a certain number of randomly picked observations, running a Decision Tree on each such subset, and finally aggregating the model performance. In simple terms, if the dataset has P features and q observations, Bagging takes a subset of features for some random observations and performs classification on it using a Decision Tree; this is repeated for other subsets, and the average (aggregated) performance of all the subsets is taken. The aggregated performance is comparatively better than that of a single classification run on the entire set. A special case of the Bagging method is Random Forest. This algorithm works in the same way as the bagged tree, with only a minor difference in the selection of the split point in the tree: in a conventional bagging algorithm the Decision Tree checks all available split points and then fixes on the most optimal one, while Random Forest makes a restricted choice when determining the split point, reducing the complexity of searching each and every split point.
• Boosting—This is a sequential process of selecting features, extracting the weak learners, and boosting them up. As the name says, it "boosts," or strengthens, the deficient learners into a strong learner, making it a better way of improving classification performance. It has two variants—AdaBoost and Gradient Boost.

AdaBoost (Adaptive Boosting)—This is the most traditional boosting technique. Initially it trains all instances with equal weights; after training, the learners that turn out difficult to classify are given extra focus by the ensemble at the next level. Each time the loop iterates, weights are increased to give more focus to the weaker learners.

Gradient Tree Boosting—This works like AdaBoost in that it improves the weak learners into strong learners, but the difference lies in the method followed: Gradient Tree Boosting uses the Decision Tree classifier. It builds the initial learners, makes predictions on the dataset, and then calculates the loss; depending on the loss, it builds the next, improved training set, and the cycle repeats.
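The Bagging loop (bootstrap, train, aggregate by majority vote) can be sketched in pure Python. The toy dataset and the threshold "stump" weak learner below are hypothetical, chosen so the whole procedure fits in a few lines.

```python
import random
from collections import Counter

random.seed(0)

# Toy labeled data (hypothetical): x < 5 -> class 0, x >= 5 -> class 1
data = [(x, 0) for x in range(5)] + [(x, 1) for x in range(5, 10)]

def train_stump(sample):
    # Weak learner: a threshold placed at the mean of the sampled points
    threshold = sum(x for x, _ in sample) / len(sample)
    return lambda x: 0 if x < threshold else 1

def bagging(data, n_models=5):
    models = []
    for _ in range(n_models):
        # bootstrap: sample with replacement, same size as the data
        sample = [random.choice(data) for _ in data]
        models.append(train_stump(sample))
    return models

def predict(models, x):
    # aggregate the individual predictions by majority vote
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]

models = bagging(data)
print(predict(models, 0), predict(models, 9))  # small x -> 0, large x -> 1
```

Boosting differs from this loop in that the models are built sequentially, with each round reweighting (AdaBoost) or refitting against the loss (Gradient Boosting) rather than drawing independent bootstrap samples.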

16.3.2 Cognitive methods

Cognitive psychology involves multiple complex processes such as creative thinking, decision-making, referring to earlier experiences, memory fetching, and problem solving. To mimic the human way of performing a classification task, one has to understand the cognitive psychology that helps us interpret and take decisions. Tasks such as remembering, reasoning, and retrospection have to be embedded as functionality in a neural network, similar to how the human brain functions.

16.3.2.1 How does classification in artificial neural networks differ from statistical machine learning algorithms?

Statistical ML algorithms include Naïve Bayes, SVM, KNN, Decision Tree, and Random Forest. Any supervised algorithm needs a training dataset to understand the expected outcome—positive and negative, or the assigned classes for a given instance—which helps in classifying fraud and authorized, spam and not spam, malignant and nonmalignant, and so on. The working methodology of an artificial neural network (ANN) is similar to the human brain: it works on a set of neurons arranged in a multilayer architecture comprising an input layer, an output layer, and in-between hidden layers. Each neuron is a perceptron that takes inputs and produces an output depending on an activation function; all the neurons are interlinked through weights that adapt themselves toward final values. Compared to the statistical approaches, the ANN is more flexible: experiments can be conducted with various architectures by increasing the number of layers or the number of neurons in each layer and by using different activation functions. ANN has advantages over classical ML methods, though it is still an enigma why it works perfectly in some cases and not in others. The neural network works even better with large-scale input data and its features. Feature engineering is a crucial preprocessing step for ML, involving steps such as selection of important discriminating features, prediction and filling of missing values, and dimensionality reduction of the feature length if required. Deep learning, or modern ML, can do automated feature learning, which makes feature engineering simpler and more accurate.
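The neurons-weights-activation description above can be sketched as a tiny forward pass in Python; the 2-3-1 architecture and all the weights are hypothetical illustrative numbers, not a trained network.

```python
import math

def neuron(inputs, weights, bias,
           activation=lambda z: 1.0 / (1.0 + math.exp(-z))):
    # weighted sum of inputs, passed through an activation (here sigmoid)
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(z)

def feedforward(x, layers):
    # layers: list of (weight_vectors, biases); the output of one layer
    # becomes the input of the next
    for weights, biases in layers:
        x = [neuron(x, w, b) for w, b in zip(weights, biases)]
    return x

# Hypothetical 2-3-1 network with illustrative weights and biases.
hidden = ([[0.5, -0.6], [0.1, 0.9], [-0.3, 0.2]], [0.0, 0.1, -0.1])
output = ([[1.0, -1.0, 0.5]], [0.0])
y = feedforward([1.0, 2.0], [hidden, output])
print(0.0 < y[0] < 1.0)  # sigmoid output lies in (0, 1)
```

Training would adjust the weights and biases (e.g., by backpropagation) so the output approaches the desired class probability; only the forward computation is shown here.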

16.3.2.2 Types of artificial neural network

ANN architectures are generally of two types, FeedForward and Feedback:

• FeedForward ANN: The information flows in one direction only; no feedback loops are present. It is mostly applied to pattern recognition and classification tasks.
• Feedback ANN: The information can be fed back to a layer of the network through a loop. It is used in content-addressable memories and wherever, for example, a sequence has to be learnt.
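The two architectures above can be contrasted in a few lines of NumPy; the weights, layer sizes, and inputs below are arbitrary illustrative values, not part of the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# FeedForward: information flows input -> hidden -> output, no loops.
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(3, 1))
x = rng.normal(size=(1, 4))
hidden = sigmoid(x @ W1)      # hidden-layer activations
y_ff = sigmoid(hidden @ W2)   # single output

# Feedback (recurrent): the hidden state is fed back at every step,
# which is what lets the network learn a sequence.
Wx, Wh = rng.normal(size=(4, 3)), rng.normal(size=(3, 3))
h = np.zeros((1, 3))
for x_t in rng.normal(size=(5, 1, 4)):  # a sequence of 5 inputs
    h = np.tanh(x_t @ Wx + h @ Wh)      # previous state loops back in

print(y_ff.shape, h.shape)
```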


CHAPTER 16 Machine learning behind classification tasks

16.3.2.3 Convolution neural network

One type of ANN is the convolutional neural network (CNN). A CNN is used when the input is an image and the task involves image processing, analysis, or understanding. Like other neural networks, CNNs are made up of neurons with learnable weights and biases. Each neuron receives several inputs, takes a weighted sum over them, passes that sum through an activation function, and responds with an output. CNNs require much less preprocessing than other classification algorithms. They can be used for medical diagnosis, surveillance, and industrial quality inspection.

16.3.2.4 Architecture of a ConvNet

A ConvNet architecture is analogous to the connectivity pattern of neurons in the human brain and was inspired by the organization of the visual cortex. The classic CNN architecture comprises convolutional layers stacked in sequence, and it serves as a general design guideline that a practitioner can adapt to various computer vision tasks. CNN architectures prove to be very efficient feature extractors and can be used for image classification, object detection, and image segmentation. Classic architectures found in the literature are LeNet-5, AlexNet, and VGG-16; some of the modern architectures are Inception, ResNet, ResNeXt, and DenseNet. State-of-the-art architectures try to use inductive biases that consider characteristic features such as shape, color, texture, and the object itself, in a flat or hierarchical manner, to discriminate one category from another. The deep neural network is the modern variant, trained to optimize its learning objectives efficiently; the efficiency comes with more and more hidden layers in the architecture, hence the terminology "deep". In the literature there is no hard and fast rule for how many layers make a feedforward network a deep network. According to the Universal Approximation Theorem (Cybenko, 1989), even a shallow neural network (one hidden layer) with a nonlinear activation can approximate any continuous function and, in the process, can learn anything. Goodfellow et al. note in their book that better accuracy is achieved with more hidden layers across a variety of tasks and domains, and recent research investigates whether a shallow network can match this only at the cost of an exponential number of neurons. The appropriate architecture indeed depends on the task complexity, and it is still an open research problem why having many layers helps in solving complex classification and recognition problems.
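A minimal NumPy sketch of what a single convolutional layer computes, a weighted sum over each image patch, followed by an activation and pooling; the kernel and image here are made-up illustrative values, not from the chapter:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: each output pixel is the weighted
    sum of the input patch under the kernel (one 'neuron' per patch)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2x2(fmap):
    """2x2 max pooling halves each spatial dimension."""
    h, w = fmap.shape[0] // 2 * 2, fmap.shape[1] // 2 * 2
    return fmap[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # crude vertical-edge filter

fmap = np.maximum(conv2d(image, edge_kernel), 0.0)  # convolution + ReLU
pooled = max_pool2x2(fmap)
print(fmap.shape, pooled.shape)  # (5, 5) then (2, 2)
```

Stacking several such convolution + activation + pooling stages, followed by fully connected layers, gives the classic LeNet-style architecture described above.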

16.4 Case study—machine learning implementation

16.4.1 Case study 1: Medical industry

Classification algorithms find applications in almost all domains, and there is huge scope for software technologies to enter the medical field. The


boundaries of the medical sector comprise large medicine data, pharmaceutical information, patient health checkup data, and so on. The present case study is based on the medical diabetes data from the UCI dataset repository. The dataset contains eight features: pregnancies, glucose, blood pressure, skin thickness, insulin, BMI, diabetes pedigree function, and age. These features drive the decision of classifying whether a patient is diabetic or not. There are 768 instances in the dataset and no missing values. The environment used is Jupyter Notebook, with Python as the programming language. A few important libraries are imported for reading datasets efficiently (pandas), handling statistical calculations (NumPy), data visualization (Seaborn), and plotting (Matplotlib):

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

Load the dataset by calling read_csv("filename") from the pandas library and store the result in the variable Data1.

Understanding the data is a basic requirement before any further step toward prediction, and there are a few standard ways to get familiar with the available data and its patterns. The head() and tail() functions show the data from the top and bottom, respectively, while describe() provides summary statistics; the mean and standard deviation show the distribution of the data, which can be visualized with matplotlib plots:

Data1.tail()
Data1.describe()
Data1.shape
corr = Data1.corr()
print(corr)
print(Data1.mean())
print(Data1.std())

A clearer picture of the dataset emerges once the frequency of each value of the target variable is counted:

Data1['Outcome'].value_counts()
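The exploration steps above can be run end to end on a tiny in-memory table; the values below are invented stand-ins (the real case study reads the UCI diabetes CSV):

```python
import pandas as pd

# A tiny made-up stand-in for the diabetes table.
Data1 = pd.DataFrame({
    "Glucose": [148, 85, 183, 89, 137, 116],
    "BMI":     [33.6, 26.6, 23.3, 28.1, 43.1, 25.6],
    "Outcome": [1, 0, 1, 0, 1, 0],
})

summary = Data1.describe()                 # count/mean/std/min/quartiles/max
corr = Data1.corr()                        # pairwise correlation matrix
counts = Data1["Outcome"].value_counts()   # class frequency of the target

print(counts.to_dict())
```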

The next important step is to separate the features into a variable X and the target values into y:

X = Data1.drop('Outcome', axis=1)
y = Data1['Outcome']


The dataset is split into train and test sets, where the split ratio is chosen by the programmer; a standard split is 80:20, specified as test_size=0.20. train_test_split is imported from sklearn.model_selection. The classification algorithms discussed in the previous section, Logistic Regression, Naïve Bayes, SVM, and Classification Tree, are implemented separately to predict the diabetic patients. Each algorithm is imported from its corresponding package in scikit-learn, and a model is created by calling the respective constructor: LogisticRegression() is imported from sklearn.linear_model, DecisionTreeClassifier() from sklearn.tree, and SVC() (support vector classifier) from sklearn.svm; the kernel for SVC can be linear or nonlinear. GaussianNB() is imported from sklearn.naive_bayes. In the case of ANNs, Sequential is imported from keras.models and Dense from keras.layers; one can add hidden (dense) layers along with the activation function and number of neurons. Once the model is created, the training data is passed to fit() and, subsequently, predict() is called on the test dataset:

modelA.fit(X_train1, y_train1)
y_pred1 = modelA.predict(X_test1)
correct1 = (y_pred1 == y_test1).sum()
accuracy1 = correct1 / len(y_test1) * 100
print("accuracy =", accuracy1, "%")

Accuracy varies from 76.30% to 79.22% depending on the algorithm and its default parameter settings.
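Putting the steps of this case study together, here is a hedged end-to-end sketch; since the diabetes CSV is not bundled here, a synthetic stand-in dataset of the same shape is generated, so the accuracies will not match the chapter's figures:

```python
# End-to-end sketch of the case-study workflow on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# 768 rows and 8 features, mirroring the diabetes table's shape.
X, y = make_classification(n_samples=768, n_features=8, random_state=0)
X_train1, X_test1, y_train1, y_test1 = train_test_split(
    X, y, test_size=0.20, random_state=0)

accuracies = {}
for name, modelA in [("LR", LogisticRegression(max_iter=1000)),
                     ("NB", GaussianNB()),
                     ("SVM", SVC(kernel="linear")),
                     ("DT", DecisionTreeClassifier(random_state=0))]:
    modelA.fit(X_train1, y_train1)
    y_pred1 = modelA.predict(X_test1)
    accuracies[name] = (y_pred1 == y_test1).sum() / len(y_test1) * 100

print(accuracies)
```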

16.4.2 Case study 2: Geographical data

Geography is one of the significant sectors that applies ML to the prediction of distinct earth forms. For instance, the classification of volcanic mountains, the separation of landforms into categories, and the huge variety of flora and fauna are among the most challenging and demanding classification tasks. The dataset used here is the Sonar data, collected from a Kaggle competition. Sonar is a device that sends a pulse of signal which strikes a surface and returns; the receiver that collects the returning pulse characterizes the surface the pulse struck. The mine set consists of 111 patterns obtained at various angles and under various conditions. The label is "R" if the object is a rock and "M" if it is a mine (metal cylinder); the numbers in the labels are in increasing order of aspect angle, but they do not encode the angle directly. The dataset has 60 features and 207 instances. A label encoder is used to convert the categorical target column into a numeric representation; LabelEncoder() comes from sklearn.preprocessing. For performance measurement, the confusion_matrix, accuracy_score, and classification_report


metrics are imported from sklearn.metrics. The label encoding assigns numerical values to the target column:

X1 = sonar_data.values[:, 0:60].astype(float)
Le1 = LabelEncoder()
Target1 = sonar_data.R
Le1.fit(["R", "M"])
y = Le1.transform(Target1)

Once the numerical labeling is done, the dataset is ready for the train test split: the training data uses 70% of the dataset and the test data takes the other 30%. The KNN classification algorithm is then used, with the parameters algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, n_neighbors=5, p=1, weights='uniform'.
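The encoding and KNN steps above can be sketched as follows on synthetic sonar-like data; the feature values and labels are fabricated for illustration (the real case study reads the Kaggle Sonar CSV):

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(0)
X1 = rng.normal(size=(200, 60))             # 60 sonar-band features (fabricated)
labels = np.where(X1[:, 0] > 0, "M", "R")   # synthetic "mine"/"rock" target

le = LabelEncoder()
y = le.fit_transform(labels)                # categorical -> numeric labels

# 70:30 train/test split, then KNN with the chapter's parameters.
X_tr, X_te, y_tr, y_te = train_test_split(X1, y, test_size=0.30, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5, metric="minkowski", p=1)
knn.fit(X_tr, y_tr)

y_pred = knn.predict(X_te)
acc = accuracy_score(y_te, y_pred)
cm = confusion_matrix(y_te, y_pred)
print(acc, cm.shape)
```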

16.4.3 Case study 3: Finance dataset

The commercial sector encourages a hub of technological advancement to enhance service reliability. Share and stock market prediction, customer segmentation, identification of big-budget shares, and transaction classification are among the most recent areas of development. This case study provides special insight into imbalanced datasets and the techniques for handling them. An imbalanced dataset is one with a very unequal proportion of the target classes. For example, suppose a classification algorithm must decide whether a blanket is used in the winter season. It is very likely that one uses a blanket in winter, and only at some rare events one does not; hence the target class has a dominant positive class and a rare negative class, which makes the dataset highly disproportionate. Such imbalance results in a very high accuracy rate because the algorithm learns only the majority events, giving the user a wrong illusion of excellent performance. There are several ways to resolve this anomaly, such as duplicating the minority class to increase its data points, or decreasing the majority data points to match the frequency of the minority class. The case study uses the Santander Customer Transaction Prediction dataset from Kaggle. The dataset contains 200,000 instances and 202 features of numerical data, and the task was to classify which customers would make a future transaction based on past transactions. Being a binary classification, the dataset was imbalanced: 90% of the data belonged to the negative class and the remaining 10% to the positive class. The imbalance can be eliminated by two processes, data preprocessing (external method) and an algorithmic approach (internal method). The external method updates the data and makes it balanced before feeding into


learning algorithms; hence it operates at the data level. The internal method modifies the algorithms themselves to increase efficiency; hence it operates at the algorithm level. Various algorithms are available to balance the data. The Synthetic Minority Oversampling Technique (SMOTE) creates synthetic data points of the minority class until it matches the number of data points of the majority class; technically, the minority class data points are oversampled. SMOTE, from the imblearn.over_sampling package, has been used in this case study:

Before oversampling, count of label '1': 16069
Before oversampling, count of label '0': 143931
After oversampling, shape of train_X: (287862, 200)
After oversampling, shape of train_y: (287862,)
After oversampling, count of label '1': 143931
After oversampling, count of label '0': 143931

RandomForestClassifier() is used as the ensemble classifier. The accuracy achieved is 86.135%.
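SMOTE's core idea, interpolating new minority points between existing minority neighbours, can be illustrated with a small NumPy reimplementation; this is a simplified sketch, not the imblearn implementation used in the chapter, and the two toy classes below are fabricated:

```python
import numpy as np

def smote_like(X_min, n_new, k=5, rng=None):
    """Create synthetic minority samples by interpolating each chosen
    point toward one of its k nearest minority neighbours (the core
    idea of SMOTE, reimplemented for illustration)."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Pairwise distances among minority samples.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)              # a point is not its own neighbour
    neigh = np.argsort(d, axis=1)[:, :k]     # k nearest minority neighbours
    idx = rng.integers(0, len(X_min), n_new)        # random base points
    nn = neigh[idx, rng.integers(0, k, n_new)]      # one random neighbour each
    gap = rng.random((n_new, 1))                    # interpolation factor in [0, 1)
    return X_min[idx] + gap * (X_min[nn] - X_min[idx])

rng = np.random.default_rng(0)
X_major = rng.normal(0.0, 1.0, (90, 2))   # 90% majority (negative) class
X_minor = rng.normal(3.0, 1.0, (10, 2))   # 10% minority (positive) class

X_new = smote_like(X_minor, n_new=80, rng=rng)
X_minor_bal = np.vstack([X_minor, X_new])
print(len(X_major), len(X_minor_bal))     # both classes now have 90 samples
```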

16.4.4 Case study 4: Electrical dataset

The power sector has a large impact on the development of any nation: power generation and supply represent a country's growth on the international scale, and distribution of the generated power is central to the entire electrical power system. The main challenge in electrical power distribution is the occurrence of faults. In an ideal scenario the receiving end gets the same power supplied by the sending end; due to internal losses it receives somewhat less, but the power interruptions caused by electrical faults can be fatally damaging to large-scale industries. Hence it is important to detect faults and carry out maintenance, which improves the service and reliability of the power system. There are in total 11 types of faults, categorized as line-to-ground, line-to-line, double-line-to-ground, and triple-line faults. The task is to classify these faults from the three-phase voltage and three-phase current values using ML classification algorithms. The dataset used in the study is synthetic data simulated using MATLAB/Simulink; it contains six features and 11,310 data instances and can be obtained from the link given under Further reading. The algorithms used are the Classification Decision Tree, KNN, SVM, and ensemble classifiers. This case study is an example of multiclass classification: a multiclass dataset is one that contains more than two classes to classify, and here there are 11 kinds of faults. The target class is a string representation of the categorical output, so it needs to be encoded into numerical categories; in preprocessing, LabelEncoder() is used for the fault-type output class. The class distribution can be clarified by counting the frequency of each category with value_counts(). The classification algorithms used to


classify the faults are Decision Tree, KNN, and SVM; the implementation of ensemble classifiers such as Random Forest, Bagging, and Boosting is also exhibited. The Decision Tree classifier achieves an accuracy of 83.33% and SVM achieves 91.06%; KNN accuracy is 83.33% for k=1, 87.22% for k=2, 84.74% for k=3, and 88.107% for k=8. The ensemble classifiers BaggingClassifier(), RandomForestClassifier(), and GradientBoostingClassifier() are imported from sklearn.ensemble. Cross validation is implemented using KFold() for the selected models, with the n_splits attribute set to 10. These case studies vividly show how classification tasks can be implemented in various sectors of a nation's development. The scope of the chapter has been to provide an overall overview of the various ML methodologies, from traditional statistical approaches to modern neural networks, to achieve classification objectives; to cite case studies from different science and engineering domains; and, finally, to walk through the software code to give exposure to the software libraries.
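The ensemble and cross-validation setup of this last case study can be sketched as follows; the data here are synthetic (the chapter's MATLAB/Simulink fault data are not reproduced, and four classes are used instead of eleven for speed), so the scores are illustrative only:

```python
# Ensemble classifiers with 10-fold cross validation on synthetic
# multiclass data, mirroring the fault-classification workflow.
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import KFold, cross_val_score

# Six features, as in the fault dataset; four classes for brevity.
X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                           n_classes=4, random_state=0)

kfold = KFold(n_splits=10, shuffle=True, random_state=0)
scores = {}
for name, clf in [("Bagging", BaggingClassifier(random_state=0)),
                  ("RandomForest", RandomForestClassifier(random_state=0)),
                  ("Boosting", GradientBoostingClassifier(random_state=0))]:
    scores[name] = cross_val_score(clf, X, y, cv=kfold).mean()

print(scores)
```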

Acknowledgments

I would like to thank my undergraduate students Ms. Uponika Barman Roy and Ms. Vaisshnavi Yerrapothu, student members of the Indian Society for Technical Education, for helping me with the code implementation in Python.

Further reading

A. Gosain, S. Sardana, Handling class imbalance problem using oversampling techniques: a review, in: 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Udupi, 2017, pp. 79–85, doi:10.1109/ICACCI.2017.8125820.
P. Shukla, K. Bhowmick, To improve classification of imbalanced datasets, in: 2017 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS), Coimbatore, 2017, pp. 1–5, doi:10.1109/ICIIECS.2017.8276044.
P.P. Pattanaik, C.K. Panigrahi, Stability and fault analysis in a power network considering IEEE 14 bus system, in: 2018 Second International Conference on Inventive Systems and Control (ICISC), Coimbatore, 2018, pp. 1134–1138, doi:10.1109/ICISC.2018.8398981.
T.M. Mitchell, Machine Learning, McGraw-Hill, 1997.
Scikit-learn: machine learning in Python, 2007. https://scikit-learn.org/
I. Goodfellow, Y. Bengio, A. Courville, Deep Learning, MIT Press, 2016. http://www.deeplearningbook.org/


Machine Learning Repository, UCI datasets, 2007. https://archive.ics.uci.edu/ml/datasets.php
Kaggle Datasets, 2019. https://www.kaggle.com/datasets/
T. Goswami, A. Agarwal, C.R. Rao, Statistical learning for texture characterization, in: Proceedings of the 2014 Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP 2014), 14–18 December 2014, IISc, Bangalore, India, ACM, 2014, ISBN 978-1-4503-3061-9, pp. 11:1–11:8.
Q. Zou, K. Qu, Y. Luo, D. Yin, Y. Ju, H. Tang, Predicting diabetes mellitus with machine learning techniques, Front. Genet. 9 (2018) 515. https://doi.org/10.3389/fgene.2018.00515
S. Srivastava, L. Sharma, V. Sharma, A. Kumar, H. Darbari, Prediction of diabetes using artificial neural network approach, in: ICoEVCI 2018, India, 2019, doi:10.1007/978-981-13-1642-5_59.
T. Goswami, Impact of deep learning in image processing and computer vision, in: Microelectronics, Electromagnetics and Telecommunications, Springer, Singapore, 2018, ISBN 978-981-10-7328-1. https://doi.org/10.1007/978-981-10-7329-8_48
A.A. Pinheiro, I.M. Brandao, C. da Costa, Vibration analysis of rotary machines using machine learning techniques, Eur. J. Eng. Res. Sci. 4 (2) (2019) 12–16.
M.Z. Ali, M.N.S.K. Shabbir, X. Liang, Y. Zhang, T. Hu, Machine learning-based fault diagnosis for single- and multi-faults in induction motors using measured stator currents and vibration signals, IEEE Trans. Ind. Appl. 55 (3) (2019) 2378–2391. https://doi.org/10.1109/TIA.2019.2895797
Fault classification dataset, 2019. https://github.com/Uponika/Fault-Classification/blob/master/faultData.csv

Index Note: Page numbers followed by “f” and “t” refer to figures and tables, respectively.

A ABM, 40 Acoustics, 60, 63 64 Active contour method, 102 103 Adaboost (AB), 285, 348 Agricultural science, 341 AKARI project in Asia, 164 AlexNet, 322 323 Alpha waves, 138 139 AMBIENT Network, 164 Analog-to-digital converter, 144 145 Annotated Facial Landmark Wild (AFLW), 179 ANT Neuro, 40 Arrhythmi, 140 142 Arthritis, 93 Artificial intelligence (AI), 1 2 Artificial neural network (ANN), 72 73, 274, 284 285, 320 321 classification in, 349 types of, 349 AtheroCloud, 232 AtheroEdge, 232 234 Atherosclerosis, 230 Automated Multiresolution Edge Snapper (CAMES), 232 Automated Robust Edge Snapper (CARES), 232 Autonomic Network Architecture Project, 165 Autonomous underwater vehicle (AUV), 213 control law formulation with delay prediction, 223 224 design of diving controller, 221 223 dynamic nonlinear proportional, integral, and derivative control law, 222 223 kinematic backstepping control law, 221 222 in diving plane and problem statement, 215 218 discretization of kinematic and dynamic of AUV, 217 218 dynamics, 216 217 kinematic, 215 216 extreme learning machine model, 218 220 Average pooling, 183 and max pooling, 183 Average voltage, 142

B Bagging, 348 Bayesian model, 170 173

advantages over other alternative models, 172 173 consistent process for uncertain information, 172 decision theory, 172 different variable types, 172 flexible, 172 handling expert knowledge, 172 handling missing data, 172 173 robustness property, 172 semantic interpretation of the model parameters, 172 Bayesian network, 170 171 collateral relationship with graded cognitive network, 173 environment in which Bayesian works the best, 171 importance of, 171 Bayesian networks, limitations of, 173 Bayesian statistics, 2 Bernoulli naı¨ve Bayes model, 345 Beta waves, 138 139 Binary classification, 339 340, 343 Binary Large Object (BLOB) filters, 321 Binary logistic regression, 343 BIONETS, 164 BioSemi, 40 Bland Altman plots, 244 246, 248f BLEU score, 207 Boosting, 348 Boostrap technique, 322 323 Bounding box regression, 190 Brain computer interface (BCI), 27 39 case study on, 42 48 classification, 47 data acquisition and preprocessing, 43 dataset, 43 feature extraction using wavelet transform, 47 k-nearest neighbor (kNN), 47 48 optimized channel selection using particle swarm optimization, 44 problem statement, 43 proposed method, 43 44 standard channel selection, 47 support vector machine (SVM), 48 working of particle swarm optimization for channel selection, 44 47

357

358

Index

Brain computer interface (BCI) (Continued) challenges, 41 42 data, high dimensionality of, 41 electrode, implantation of, 41 information transfer rate (ITR), 42 technical challenges, 42 classification, 39 electrodes, placement of, 32 33 electroencephalography acquisition devices, 40 electroencephalography signal acquisition, 33 34 features and feature extraction technique, 36 39 Fourier transform based feature, 36 37 statistical features, 39 wavelet-based feature, 38 39 history, 28 invasive brain computer interface, 28 29 noninvasive brain computer interface, 29 31 electroencephalography (EEG), 30 functional magnetic resonance imaging (fMRI), 31 functional near-infrared spectroscopy (fNIRS), 31 magnetoencephalography (MEG), 30 31 positron emission tomography (PET), 31 preprocessing, 34 36 human functioning artifacts, 35 36 technical artifacts, 34 results, 48 49 semiinvasive brain computer interface, 29 working of, 32f Brain functions and disorders, 149t Brain hemorrhage, 71 edge enhancement, 76 feature extraction algorithm, 78 82 gray level co-occurrence matrix (GLCM) features, 79 81 minimal angular local binary patterns (MALBP), 78 79 Naı¨ve Bayes-probabilistic kernel classifier, 81 82 literature survey, 72 74 modified multi-level set segmentation algorithm (MMLSS), 76 78 Naı¨ve Bayes-Probabilistic Kernel Classifier (NB-PKC) and Support Vector Machine (SVM), 84 89 accuracy, 84 dice coefficient, 84 fault acceptance rate (FAR), 86 fault rejection rate, 86 global acceptance rate (GAR), 86 Jaccard coefficient, 84

kappa coefficient, 85 precision, 84 recall, 84 ROC Curve, 86 89 Brain oscillations, 6 Brain products LiveAmp, 40 Breathing disorders, 148 Byte-MCT, 321 322

C Cancer, 273 Cardiovascular diseases (CVDs), 229 Carotid Automated Double-Line Extraction System, 232 Carotid intima media thickness (cIMT), 230, 258 computation, 230 231 detection and measurement methods, 232 233 detection using DL system, 236 237 mean value computations for cIMT and gTPA, 238 CART, 333 CASCADAS, 165 Cascaded convolutional neural networks, 178 Circadian oscillator, 123 124 Circadian rhythm, 123 124 circadian clock and, 123 124 data sheet, 132 134 case study at an educational campus, 132 134 perception of eye as a visual and nonvisual information sensor, 125 photoreceptors in the eye, 125 131 light-emitting diodes, 126 131 SunLike light-emitting diodes, 132 Circadian rhythm sleeps disorders, 149 Circadian stimulus (CS), 129 CLAHE algorithm, 326 Classification Decision Tree, 346 347 Classification tasks, 339 340 in engineering and science domains, 340 341 Classifiers intercomparison of, 287, 290 293, 302 303 types, 281 285 Adaboost (AB), 285 artificial neural network (ANN), 284 285 decision tree (DT), 285 Gaussian process (GP) classification, 283 linear discriminant analysis (LDA), 281 282 logistic regression (LR), 285 naive Bayes (NB), 283 quadratic discriminant analysis (QDA), 282 random forest (RF), 285 support vector machine (SVM), 283 284

Index

Clustering, 342 Cognionics, 40 Cognitive ability assessment, 3t Cognitive informatics (CI), 93, 167 discussion, 117 experimental analysis, 110 116 machine learning approach, for knee X-ray analysis, 96 110 classification, 108 110 feature computation/extraction, 103 108 identification of region of interest, 98 image acquisition, 97 98 preprocessing/enhancement, 98 segmentation, 99 103 and resources, 9 10 Cognitive language processing, 2 Cognitive maps, 3 4, 21 22 as adaptive interface tool in online course, 13 19 cognitive mapping and theories, 13 14 based on students’ mental model, 20 and perception, 4f, 10 11 with weights of subject modules and concepts, 22 Cognitive modeling, 6 8 cognitive networks, 8, 8f for reasoning at human level, 167 168 Cognitive networks, 162 163 Cognitive neuroscience, 137 Cognitive perception, 123 Cognitive psychology, 349 Cognitive science, 5f, 6, 27, 28f history of, 1 6 cognition, brain, and consciousness, 3 5 dynamic theory and cultural aspect, 5 6 psychology, philosophy, and cognitive neuroscience, 6 Cognitive theory of multimedia learning (CTML), 19 Coherence estimation functions (case study), 151 152 Colon microarray gene expression data, statistical characterization and classification of, 273 benchmarking different machine learning systems, 300 302 experimental protocols and results, 286 295 effect of data size on memorization versus generalization, 288, 294 295 effect of dominant genes, 287 288, 294 effect of P-value during statistical tests on machine learning performance, 287, 289 290 intercomparison of the classifiers, 287, 290 293 kernel optimization, 286 289 materials and methods, 276 286

classifier types, 281 285 feature selection techniques (FST), 278 281 gene expression data normalization, 277 278 statistical evaluation, 285 286 note on the intercomparison of classifiers, 302 303 patients demographics, 276 performance evaluation and hypothesis validations, 295 297 gene separation index, 295 interrelationship between nGSI and classification accuracy, 296 297 receiver operating curve analysis, 297 reliability index (RI), 297 validation of proposed methods, 297 strengths, weakness, and extensions, 303 304 Color-based algorithms, 319 Color rendering index (CRI), 128 129 Common carotid artery (CCA), 230, 233 234 Completely Automated Layers EXtraction, 232 Computer Aided Diagnostic (CAD) system, 71 Cones, 126 Consciousness, 3 4 Context-aware student model, 16 17 Continuous wavelet transform (CWT), 38 Contrast, 104 105 Contrast limited adaptive histogram equalization (CLAHE), 319 320, 326, 327f Control law formulation with delay prediction, 223 224 ConvNet architecture, 350 Convolutional neural network (CNN), 72 73, 178, 319, 323 324 experimental analysis, 325 335 comparison of proposed CNN against ANN and SVM, 330 331 comparison of proposed CNN method against kNN, CART, and random forest, 331 333 comparison of the proposed CNN against LeNet, 326 330 impact of input shape of images, 325 326 preprocessing using contrast limited adaptive histogram equalization, 326 proposed CNN versus other methods, 333 335 literature review, 321 323 Convolution kernel, 181 Convolution neural network (CNN), 350 Convolution operation, 181 Correlated color temperature (CCT), 129 131 Correlation, 105 Crops, classification of, 341 Cross-validation (CV) protocols, 273 274 Cuckoo optimization algorithm, 73 74 Cylinder fitting, 231 232, 234 gTPA modeling using, 235 236

359

360

Index

D Dark goggles, 131 Darkness hormone, 124 Data augmentation, 185 collection of, 180 high dimensionality of, 41 modeling of, 180 normalization of, 180 Data set, 153 155 Decision-making, 27 Decision tree (DT), 108 109, 346 347 classifier, 285 Deep learning (DL) system, 230 233, 238 Deformable part model (DPM), 178 Dice coefficient, 84 DICOM (Digital Imaging and Communications in Medicine) standards, 97 98 Differential expressed (DE) genes, 276 277 Discrete wavelet transform (DWT), 38 Distributed parallel computing, 137 138 case study, 152 157 Dominant genes, effect of, 287 288, 294 Dreaming stage. See REM stage Dual Tree Complex Wavelet Transform and Spatial Constrained K-means algorithm, 72 73 Dynamic nonlinear proportional, integral, and derivative control law, 222 223 Dysrhythmi, 140 142

E Edge-based methods, 99 101 Efficacy, 129 Electrodes implantation of, 41 placement of, 32 33 Electrodes artifacts, 144 Electroencephalogram signal recording variables and components, 140 145 artifacts in electroencephalogram recording, 143 144 electrode gel, 143 electrode positioning, 143 electroencephalogram electrodes, 142 143 electroencephalogram recording device, 144 145 filtering, 144 frequency, 140 142 impedance, 142 morphology, 142 voltage, 142 Electroencephalogram waves, 140

coherence estimation functions (case study), 151 152 distributed parallel computation (case study), 152 157 Java parallel processing framework architecture (JPPF) (case study), 151 client layer, 151 execution layer, 151 service layer, 151 sleeping stage, 145 146 type of channel selection for cognitive, 146 147 Electroencephalography (EEG), 30, 36 37, 137 acquisition devices, 40 analysis of electroencephalogram signals, 139 application of, 150 disorders detection using, 147 149 Fourier analysis plot, 140f history of, 138 139 human brain EEG signals, 141t signal acquisition, 33 34 spectral analysis plot, 139f subject preparation and equipment setup for recoding of using an electro cap, 145 topography, 139 Electrolyte gel, 145 Emotiv, 40 Encoder and decoder, 200 Energy, 105 Ensemble classifier, 348 349, 355 Error correcting output code (ECOC), 109 110 Expectation-Maximization Segmentation Software (EMS), 72 73 Extreme learning machine (ELM) approach, 213 214, 220 identification of AUV dynamics using, 218 220 sequential extreme learning machine model, 220 Eye perception of, as a visual and nonvisual information sensor, 125 photoreceptors in, 125 131 light-emitting diodes, 126 131 Eye movement’s artifacts, 144

F Face detection and alignment, 177 localization, 186 188 machine learning life cycle, 179 185 collection of data, 180 modeling of data, 180 normalization of data, 180 production and deployment of models, 180 185

Index

training and feature engineering of model, 180 methodology, 188 191 detection phase, 189 191 preprocessing, 188 popular augmentation techniques, 185 186 crop, 186 FLIP, 185 Gaussian noise, 186 rotation, 185 186 scale, 186 translation, 186 training data, 191 192 effectiveness of both detection and alignment of face, 192 effectiveness of online hard sample mining, 192 face alignment evaluation, 192 face detection evaluation, 192 runtime efficiency, 192 Face Detection Dataset and Benchmark (FDDB), 178 179 Fast Fourier transform (FFT) filter, 37, 151 Fault acceptance rate (FAR), 86 Fault rejection rate, 86 Feature extraction algorithm, 78 82 gray level co-occurrence matrix (GLCM) features, 79 81 minimal angular local binary patterns (MALBP), 78 79 Naı¨ve Bayes-probabilistic kernel classifier, 81 82 Feature selection techniques (FST), 278 281 F-test, 279 281 Kruskal Wallis (KW) test, 279 t-test, 278 279 Wilcoxon sign rank sum test (WCSRS), 278 Feedback ANN, 349 FeedForward ANN, 349 50 60 Hz artifact, 144 Finance and stock market, 340 FIND, 164 FLAIR (Fluid Attenuated Inversion Recovery) MR image, 72 73 FLIP technique, 185 Forget gate, 198 199 Fourier transform based feature, 36 37 F-test, 279 281 Functional magnetic resonance imaging (fMRI), 31 Functional near-infrared spectroscopy (fNIRS), 31 Fuzzy neural network, 274

G G.tec, 40 Gaussian kernel, 48 Gaussian naı¨ve Bayes model, 345 Gaussian noise, 186 Gaussian process (GP) classification, 283 Gene expression data normalization, 277 278 Gene separation index, 295 Geology, 341 Geometric TPA (gTPA), 230 231 benchmarking, 254 257, 255t correlation coefficient of, 261 experimental protocol, results, and its validation, 237 246 DL system results and visual display of LI and MA interfaces, 238 mean value computations for cIMT and gTPA for two DL systems, 238 relationship of age versus cIMT/gTPA, 238 validation, 238 246 materials and methods used for gTPA computation cIMT and LD detection using DL system, 236 237 gTPA modeling using cylindrical fitting, 235 236 overall architecture, 236 patient demographics and image acquisition, 234 235, 234t statistical tests and 10-year risk analysis, 247 253 risk analysis, 247 248 statistical tests, 249 251, 262 267 ten-year risk assessment, 251 253 strengths/weakness/extensions, 258 German Traffic Sign Recognition Benchmark (GTSRB), 319 320, 322 323, 325 Gini index, 346 Global acceptance rate (GAR), 86 GNMT, 196, 200 Google, 196 GoogLeNet, 322 323 Gradient-based SLFN network, 213 214 Gradient Recalled Echo (GRE) technique, 74 Gradient Tree Boosting, 348 349 Gray level co-occurrence matrix (GLCM) features, 79 81

H
HAGGLE, 164
Haralick features, 104–105
Hegelian arguments, 4–5
Hemoglobin, 31


Hemorrhage. See Brain hemorrhage
Hessian-based BLOB detection, 321–322
Hierarchical clustering, 274
Histogram of oriented gradients (HOGs), 103
  features, 105–107
HOG variant (HOGv), 322
Homogeneity, 105
HSV (hue, saturation, and value) color space, 321
Human brain sleep stages, 147t
Human reasoning mechanism, 166–167
Hypersomnias, 148
Hypothetical instruction model, 23–25

I
Image preprocessing, 319–320, 325–326
  using contrast limited adaptive histogram equalization, 326
Image reconstruction technology, 230
Images, classification of, 183–185
  maximum likelihood classification, 184
  minimum distance classification, 184
  parallelepiped classification, 184
  rich training data, 185
  supervised classification, 183–184
  unsupervised classification, 185
IMT complex, 236
Information transfer rate (ITR), 42
Insomnias, 148
Instructional planning to improve student's cognitive ability, 21–22
  cognitive map with weights of subject modules and concepts, 22
Intelligence in networks, 161–162
Intelligent networks
  learning and reasoning for, 165–166
  need for, 163
Intermodular interconceptual relationship, 20f
International Classification of Sleep Disorders (ICSD), 148–149
Interrelationship between nGSI and classification accuracy, 296–297
Intra Parenchymal Hemorrhage (IPH), 82
Intrinsically photosensitive retinal ganglion cell (ipRGC), 126
Invasive brain–computer interface, 28–29
Isolated symptoms, 149

J
Jaccard coefficient, 84
Java parallel processing framework (JPPF), 137–138, 150
  case study, 151, 152f
  client layer, 151
  execution layer, 151
  service layer, 151
Joint space width (JSW), 93

K
Kappa coefficient, 85
Kellgren–Lawrence grading system, 93, 95–96
Kernel, 181–182
  optimization, 286–289
Kinematic backstepping control law, 221–222
K-means clustering algorithm, 274
K-nearest neighbor (KNN) algorithm, 39, 47–48, 108, 333–335, 345–346
Knee X-ray analysis, machine learning approach for, 97–110
  classification, 108–110
    decision tree, 108–109
    error correcting output code (ECOC), 109–110
    k-nearest neighbor, 108
    random forest (RF), 109
  feature computation/extraction, 103–108
    Haralick features, 104–105
    histogram of oriented gradients features, 105–107
    local binary pattern (LBP), 107–108
    shape features, 104
    statistical feature, 103–104
    Zernike features, 105
  identification of region of interest, 98
  image acquisition, 97–98
  preprocessing/enhancement, 98
  segmentation, 99–103
    active contour method, 102–103
    edge-based methods, 99–101
    Otsu's based method, 101–102
    texture-based segmentation, 101
Kruskal–Wallis (KW) test, 279

L
Label encoding, 352–353
Laplacian operator, 321–322
Lazy algorithm, 345–346
LeNet, 320–321, 335
Likelihood Ratio (LR), 86–89
Linear discriminant analysis (LDA), 281–282
Linear/nonlinear activations, 350
Local binary pattern (LBP) algorithm, 78, 103, 107–108
Logistic regression (LR), 39, 285, 343–344
Long- and short-term memory (LSTM) model, 196, 198–199, 199f


Lumen diameter (LD), 230
  detection and measurement methods, 233–234
  detection using DL system, 236–237
Lumen intima (LI), 230–231
Lumens, 128
Lyapunov function, 222

M
Machine consciousness, 13
  cognitive maps as adaptive interface tool in online course, 13–19
    cognitive mapping and theories, 13–14
    cognitive maps based on students' mental model, 20
    context modeling and reasoning, 17–19
    hypothetical instruction model, 23–25
    instructional planning to improve student's cognitive ability, 21–22
      cognitive map with weights of subject modules and concepts, 22
    multimedia processing and acquisition system, 19
    web-based online course, 14–17
Machine learning (ML), 339–340
  -based paradigm, 276
  local system for, 277f
Machine learning classification algorithms, 341–350
  cognitive methods, 349–350
    architecture of a ConvNet, 350
    classification in ANN and statistical machine learning algorithms, 349
    convolution neural network (CNN), 350
    types of artificial neural network, 349
  statistical methods, 342–349
    Decision Tree, 346–347
    ensemble classifier, 348–349
    k-nearest neighbor (KNN), 345–346
    logistic regression, 343–344
    naïve Bayes, 344–345
    support vector machine (SVM), 347–348
Machine learning implementation case study
  electrical dataset, 354–355
  finance dataset, 353–354
  geographical data, 352–353
  medical industry, 350–352
Machine learning life cycle, 179–185
  classification of images, 183–185
    maximum likelihood classification, 184
    minimum distance classification, 184
    parallelepiped classification, 184
    rich training data, 185
    supervised classification, 183–184
    unsupervised classification, 185
  collection of data, 180
  convolution operation, 181
  difference between average pooling and max pooling, 183
  fully connected layer, 183
  kernel, 181–182
  modeling of data, 180
  normalization of data, 180
  pooling, 182–183
  training and feature engineering of model, 180
Machine learning performance
  effect of P-value during statistical tests on, 287, 289–290
Machine learning systems, benchmarking, 300–302
Machine translation (MT), 195
  discussions, 207–209
  neural machine translation (NMT) system, 197–199
    attention in the model, 201
    encoder and decoder, 200
    long- and short-term memory (LSTM) model, 198–199
    out-of-vocabulary words, 204–205
    residual connections and bridges, 202–204
  results and discussions, 205–207
    BLEU score, 207
    datasets, 205
    experimental setup, 206
    training details, 206–207
Magnetic Resonance Imaging (MRI), 71
Magnetoencephalography (MEG), 30–31
Matlab command for wavelet decomposition, 38
MATLAB/Simulink, 354–355
Maximum likelihood classification, 184
Max pooling, 182
MaxPooling layer, 323–324
mBrainTrain, 40
Media adventitia (MA), 230–231
Medical Image Processing, 2
Melatonin hormone, 124
Memorization versus generalization
  effect of data size on, 288, 294–295
Memory models, 6
Minimal angular local binary patterns (MALBP), 73–74, 78–79
Minimum distance classification, 184
Modified multi-level set segmentation algorithm (MMLSS), 73–74, 76–78
Monomorphic activity, 142
Moore–Penrose generalized inverse operation, 213–214
Morphologic TPA (mTPA), 230–231
Mother wavelet, 38


Multiclass classification, 339–340
Multimedia processing and acquisition system, 19
Multinomial logistic regression, 343
Multinomial naïve Bayes model, 345
Muse, 40
Mutual information (MI), 302
Myocardial infarction, 230

N
Naïve Bayes, 344–345
Naive Bayes (NB) classifiers, 283
Naïve Bayes-probabilistic kernel classifier (NBPKC), 81–82
  and support vector machine (SVM), 84–89
    accuracy, 84
    dice coefficient, 84
    fault acceptance rate (FAR), 86
    fault rejection rate, 86
    global acceptance rate (GAR), 86
    Jaccard coefficient, 84
    kappa coefficient, 85
    precision, 84
    recall, 84
    ROC Curve, 86–89
Narcolepsy, 149
Natural light, 124
Networks, 161
  background, 163–165
  challenges in current network, 162
  cognition approach, 165
  cognitive, 162–163
  cognitive model for reasoning at human level, 167–168
  future trends, 173–174
  human reasoning mechanism, 166–167
  intelligence in, 161–162
  intelligent networks
    learning and reasoning for, 165–166
    need for, 163
  learning approaches, 169–170
  new intelligent approach, 168–169
  requirement of Bayesian approach for cognitive network, 170–173
    Bayesian model, importance of, 171
    Bayesian network, 170–171
    collateral relationship with graded cognitive network, 173
    consistent process for uncertain information, 172
    decision theory, 172
    different variable types, 172
    environment in which Bayesian works the best, 171
    flexible, 172
    handling expert knowledge, 172
    handling missing data, 172–173
    robustness property, 172
    semantic interpretation of the model parameters, 172
  research challenges, 174–175
Neural machine translation (NMT) system, 196–199
  attention in the model, 201
  encoder and decoder, 200
  long- and short-term memory (LSTM) model, 198–199
  out-of-vocabulary words, 204–205
  residual connections and bridges, 202–204
Neural network in computer language, 3–4
Neurons, 3–4
NeuroSky, 40
Noninvasive brain–computer interface, 29–31
  electroencephalography (EEG), 30
  functional magnetic resonance imaging (fMRI), 31
  functional near-infrared spectroscopy (fNIRS), 31
  magnetoencephalography (MEG), 30–31
  positron emission tomography (PET), 31
Nonlinear AutoRegressive Moving Average with eXogenous input (NARMAX) model, 213–214
Nonlinearity in BCI, 42
Nonrandom Eye Movement (NREM) sleep, 145–146
Nonstationary signals, 42
Number of gene separation index (nGSI), 295

O
Objective function, 44
Ocular photoreceptor, 125
Ocular phototransduction, 126
Online course domain, 15
Online hard example mining (OHEM), 191
Online hard sample mining, effectiveness of, 192
On-line sequential ELM (OS-ELM), 213–214, 220
OpenBCI, 40
Optics, 58–60, 62–64, 65t
Optimization techniques, 44
Ordinal logistic regression, 343
Osteoarthritis (OA), 93–94
  radiological features of, 94–95
Otsu's based method, 101–102, 102f
Out-of-vocabulary words, 204–205

P
P300 signal, 39


Parallelepiped classification, 184
Parallel processing
  channel selection for alpha, beta, theta, and delta waves using, 150–157
    coherence estimation functions, 151–152
    distributed parallel computation, 152–157
    Java parallel processing framework architecture (JPPF), 151
Parasomnias, 149
Particle swarm optimization (PSO), 43, 45–46, 45f
Peak voltage, 142
Penalized SVM (PSVM), 303–304
Photoreceptors in the eye, 125–131
  light-emitting diodes, 126–131
Physlets, 56
Pineal gland, 124
Polyline distance method, 260–261
Polyline distance metric (PDM), 260–261
Polymorphic activity, 142
Polynomial kernel, 48
Polysomnogram (PSG), 148–149
Pooling, 182–183
Popular augmentation techniques, 185–186
  crop, 186
  FLIP technique, 185
  Gaussian noise, 186
  rotation, 185–186
  scale, 186
  translation, 186
Positron emission tomography (PET), 31
Power, 128
Prediction and analysis, 340
Prewitt edge detection, 100–101, 101f
Probability theory, 2
Proportional, integral, and derivative (PID) controllers, 214
Psychiatric disorders, 149
P-values, 249–251

Q
Quadratic discriminant analysis (QDA), 282

R
Radiation information, 125
Random eye movement (REM) sleep, 139, 145–146
Random forest (RF), 109, 285, 349
  classification, 331, 333
Receiver operating characteristic (ROC), 247–248
Receiver operating curve analysis, 297
Recurrent neural network (RNN) based language model, 196
Region of interest (ROI), 319
Reliability index (RI), 297

REM stage, 146
Residual connections, 202
Residual connections and bridges, 202–204
Rheumatoid arthritis, 93
Rhythmic, 140–142
Risk biomarkers, 230, 258
Road signs, 319
  identification of, using CNN, 319
    experimental analysis, 325–335
    literature review, 321–323
Rods, 125
ROI (Region Of Interest), 74, 98
Routing algorithms, 163

S
Santander Customer Prediction, 353
Self-contained course, cognitive features and performance of
  background, 56–58
  discussion, 64–67
  limitations, 67
  methodology, 58–59
  results, 59–64
Self-organizing map, 274
Semiinvasive brain–computer interface, 29
Seoul Semiconductor, 132
Sequential extreme learning machine model for AUV dynamic, 220
Shape-based algorithms, 319
SIFT, 319
Signal-to-noise ratio (SNR), 41
Signature analysis, 183–184
Single-hidden layer feed forward neural (SLFN) network, 213–214
Sinusoidal activities, 142
Skin artifacts, 144
Skull-stripping, 73
Sleep, 129
  defined, 145–146
Sleep disorder, 148–149
Sleeping stage electroencephalogram waves, 145–146
Sleep-related movement disorders, 149
Sleep–wake cycles, 131
Snake. See Active contour method
Sobel edge detection, 99–100, 100f
Sonar, 352
Speed-up robust features (SURF) algorithms, 319, 322–323
Statistical evaluation, 274, 285–286
  accuracy, 286
  F-measure, 286
  negative predictive value, 286


Statistical evaluation (Continued)
  positive predictive value, 286
  sensitivity, 286
  specificity, 286
Statistical machine learning algorithms
  classification in, 349
Statistical machine translation (SMT) approach, 195–196
Steady state visual evoked potential (SSVEP), 39
Stenosis in the arterial walls, 230
Stock market, 340
Stress response and the pupillary light reflex, 126
Stroke, 230
Student modeling, 15
Subconsciousness, 3–4
SunLike light-emitting diodes, 123, 132
Supervised classification, 183–184, 341–342
Supervised learning techniques, 274
Support vector machine (SVM), 39, 48, 73, 274, 283–284, 302, 320–323, 330–331, 335, 347–348
Suprachiasmatic nucleus (SCN), 124
Synthetic Minority Oversampling Technique (SMOTE), 353–354

T
10–20 international system of electrode placement, 32–33
Texture-based segmentation, 101, 102f
Total plaque area (TPA), 230, 234
  geometric TPA (gTPA), 230–231
  morphologic TPA (mTPA), 230–231
Training sites, 183–184
Transient activity, 142
t-test, 278–279

U
Ultrasound, 230, 232–233, 236
Universal Approximation Theorem, 350
Unsupervised classification, 185, 342
Unsupervised learning algorithms, 274

V
Variance error, 348
Visual and nonvisual information sensor
  perception of eye as, 125

W
Wavelet-based feature, 38–39
Wavelet transform, feature extraction using, 47
Wave motion, 59
Wearable sensing, 40
Web-based online course, 14–17
Web server and subject content retrieval, 14
WIDER FACE dataset, 179
Wilcoxon sign rank sum test (WCSRS), 278, 303
Wittgenstein's theory, 6

Z
Zernike features, 105