Biomedical Image Analysis: Special Applications in MRIs and CT Scans (ISBN 9789819999385, 9789819999392)

This book provides an in-depth study of biomedical image analysis. It reviews and summarizes previous research work in b


Language: English · Pages: 177 [173] · Year: 2024





Table of contents :
Preface
Acknowledgement
Contents
1 Introduction
1.1 Related Works in Image Segmentation
1.2 Related Works in Image Clustering
1.3 Organization of the Book
References
2 Parkinson's Disease MRIs Analysis Using Fuzzy Clustering Approach
2.1 Introduction
2.2 Mathematical Formulation for Uncertainty Representation
2.3 The Proposed Method
2.3.1 Representation of Pixels
2.3.2 Formation of FIS
2.3.3 Measure of Uncertainty
2.3.4 Clustering of FEM
2.3.5 Pattern Visualization
2.4 Experimental Results
2.4.1 Experimental Set-Up
2.4.2 Performance Evaluation Metrics
2.4.3 Discussion on Segmentation of MRI
2.4.4 Discussion on Pattern Classification and Visualization
2.5 Conclusions and Future Directions
References
3 Parkinson's Disease MRIs Analysis Using Neutrosophic-Entropy Segmentation Approach
3.1 Introduction
3.2 Mathematical Formulation of Uncertainty
3.3 The Proposed Algorithm
3.3.1 Description of NEATSA
3.3.2 Algorithm and Computational Complexity
3.4 Experimental Results
3.4.1 Dataset Description
3.4.2 Performance Evaluation Metrics
3.4.3 Experimental Set-Up
3.4.4 Discussion on Experimental Results
3.5 Conclusions and Future Directions
References
4 Parkinson's Disease MRIs Analysis Using Neutrosophic-Entropy Clustering Approach
4.1 Introduction
4.2 Theoretical Basis
4.3 The Proposed Method
4.3.1 Description of the Proposed Method
4.4 Experimental Results
4.4.1 Dataset Description and Experimental Set-Up
4.4.2 Performance Evaluation Metrics
4.4.3 Discussion on the Results Obtained by the NEBCA
4.4.4 Discussion on the Results Obtained by the HSV Color System
4.4.5 Discussion on the Computation Time
4.4.6 Algorithm and Computational Complexity
4.5 Conclusions and Future Directions
References
5 Brain Tumor Segmentation Using Type-2 Neutrosophic Thresholding Approach
5.1 Introduction
5.2 Motivation and Contributions
5.3 Background for the Study
5.4 The Proposed T2NS and Related Concepts
5.4.1 T2NS Theory
5.4.2 Set-Theoretic Operations and Properties for T2NS
5.4.3 Uncertainty Measurement of T2NS
5.5 The Proposed Image Segmentation Method
5.5.1 Gray Pixel Space of Input Image
5.5.2 Histogram of the GPS
5.5.3 Application of the T2NS
5.5.4 Computation of T2NSE for the T2NS
5.5.5 Determination of Thresholds
5.5.6 Segmentation of Image
5.5.7 Fusion of Segmented Images
5.6 Experimental Results
5.6.1 Dataset Description
5.6.2 Performance Evaluation Metrics
5.6.3 Visual Analysis
5.6.4 Multiple Adaptive Thresholds Selection
5.6.5 Statistical Analysis
5.6.6 Computational Complexity Analysis
5.7 Conclusions and Future Directions
References
6 COVID-19 CT Scan Image Segmentation Using Quantum-Clustering Approach
6.1 Introduction
6.2 Image Segmentation Using KMC Algorithm
6.3 The Proposed FFQOA
6.3.1 Inspiration for the FFQOA
6.3.2 Background for the FFQOA
6.3.3 Mathematical Modeling for the FFQOA
6.3.4 Personal Best and Global Best Displacements
6.3.5 The Search Scope Components
6.4 The Proposed FFQOAK Method
6.4.1 Phases of the FFQOAK Method
6.4.2 Optimization Process of the Proposed FFQOAK Method
6.5 Experimental Results
6.5.1 Dataset and Preprocessing Descriptions
6.5.2 Performance Evaluation Metrics
6.5.3 Statistical Analyses
6.5.4 Convergence Analysis
6.5.5 Visual Analysis of Segmented Images
6.6 Conclusions and Future Directions
References


Brain Informatics and Health

Pritpal Singh

Biomedical Image Analysis: Special Applications in MRIs and CT Scans

Brain Informatics and Health

Editors-in-Chief: Ning Zhong, Department of Life Science & Informatics, Maebashi Institute of Technology, Maebashi-City, Japan; Ron Kikinis, Department of Radiology, Harvard Medical School, Boston, MA, USA

Series Editors: Weidong Cai, School of Computer Science, The University of Sydney, Sydney, NSW, Australia; Henning Müller, University of Applied Sciences Western Switzerland, Sierre, Switzerland; Hirotaka Onoe, Graduate School of Medicine, Kyoto University, Kobe, Japan; Sonia Pujol, Department of Radiology, Harvard Medical School, Boston, MA, USA; Philip S. Yu, Department of Computer Science, University of Illinois at Chicago, Chicago, IL, USA

Informatics-enabled studies are transforming brain science. New methodologies enhance human interpretive powers when dealing with big data sets increasingly derived from advanced neuro-imaging technologies, including fMRI, PET, MEG, EEG and fNIRS, as well as from other sources like eye-tracking and from wearable, portable, micro and nano devices. New experimental methods, such as in vivo imaging, deep tissue imaging, opto-genetics and dense-electrode recording, are generating massive amounts of brain data at very fine spatial and temporal resolutions. These technologies allow measuring, modeling, managing and mining of multiple forms of big brain data. Brain informatics and health related techniques for analyzing all these data will help achieve a better understanding of human thought, memory, learning, decision-making, emotion, consciousness and social behaviors. These methods also assist in building brain-inspired, human-level wisdom-computing paradigms and technologies, improving the treatment efficacy of mental health and brain disorders.

The Brain Informatics & Health (BIH) book series addresses the computational, cognitive, physiological, biological, physical, ecological and social perspectives of brain informatics, as well as topics relating to brain health, mental health and well-being. It also welcomes emerging information technologies, including but not limited to the Internet of Things (IoT), cloud computing, big data analytics and interactive knowledge discovery related to brain research. The BIH book series also encourages submissions that explore how advanced computing technologies are applied to and make a difference in various large-scale brain studies and their applications. The series serves as a central source of reference for brain informatics and computational brain studies.
The series aims to publish thorough and cohesive overviews on specific topics in brain informatics and health, as well as works that are larger in scope than survey articles and that will contain more detailed background information. The series also provides a single point of coverage of advanced and timely topics and a forum for topics that may not have reached a level of maturity to warrant a comprehensive textbook.


Pritpal Singh Department of Data Science and Analytics Central University of Rajasthan Ajmer, Rajasthan, India

ISSN 2367-1742
ISSN 2367-1750 (electronic)
Brain Informatics and Health
ISBN 978-981-99-9938-5
ISBN 978-981-99-9939-2 (eBook)
https://doi.org/10.1007/978-981-99-9939-2

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore. Paper in this product is recyclable.

Preface

Biomedical Image Analysis: Special Applications in MRIs and CT Scans is written primarily for researchers in the biomedical image domain, computer science, and research organizations. There are several excellent books on biomedical image analysis, written from elementary to advanced levels, and their writers have their intended audiences. This book differs from those in that it deals with practical applications of soft computing techniques in biomedical image analysis. It generally discusses the basic theory of those techniques and their applications with various examples, without complicated mathematics. In this book, I analyze significant problems of biomedical image analysis in depth, starting with model formulation, architecture, basic steps, empirical analyses, and performance measures in terms of statistical parameters to see how well the proposed models perform. The present book was completed at National Taipei University of Technology (Taiwan), Jagiellonian University (Poland), and Central University of Rajasthan (India). All experiments were conducted at National Taipei University of Technology (Taiwan) and Jagiellonian University (Poland). The empirical results presented in the book were published in esteemed journals.

Rajasthan, India
June 2023

Pritpal Singh


Acknowledgement

Prima facie, I am grateful to God for the good health and well-being that were necessary to complete this book. I would like to dedicate this book to my beloved father-in-law (Late Gyani Bhajan Singh Anandpuri) and younger brother-in-law (Late Balbinder Singh Bala), who left us in a deep state of sorrow and pain. I want to express my greatest gratitude to my beloved wife (Manjit) and baby (Simerpreet) for their endless love, constant support, encouragement, and patience. Last but not least, I would like to thank the Almighty for everything.



Chapter 1

Introduction

“Everything should be made as simple as possible, but not simpler.” Albert Einstein

Developing pattern recognition and vision methods for medical images has become one of the most challenging tasks in view of practical as well as industrial interest [34, 55, 58]. Medical image segmentation methods help to discover uniform objects alongside their edges and textures [4]. However, segmentation of magnetic resonance images (MRIs) is a very tedious task due to their inherent uncertain structures and inhomogeneities in grayscale intensities [41]. For efficient and faster decision-making, radiologists and medical practitioners depend heavily on MRIs [51]. For this reason, image analysts and computer vision experts have developed many algorithms for MRI segmentation, which differ from objective to objective [24]. It is well known that the human eye can perceive a large number of color shades but can identify only about two dozen gray shades [13]. This limitation matters when attempting to differentiate objects and regions using gray shades rather than color shades. MRI segmentation focuses on the detection of different regions, i.e., gray matter, white matter, and cerebrospinal fluid, by examining gray-level pixel distributions [23]. Many developments in the direction of MRI segmentation methods have been reported recently [20, 43]. Such approaches can be divided into the following categories.

1. Thresholding based method (TBM): TBM splits an image into light and dark regions [22]. Several well-known TBMs are available in the literature, including Otsu's global threshold method (OGTM) [40], the adaptive threshold method [40], the watershed method (WM) [35], and the iterative thresholding algorithm (ITA) [57]. Researchers have also suggested segmentation methods for specific domain problems. For example, Datta et al. [15] introduced a method for the segmentation of gray matter in the spinal cord. Tidwell et al. [49] proposed a new method for spinal cord tissue segmentation using MRIs.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 P. Singh, Biomedical Image Analysis, Brain Informatics and Health, https://doi.org/10.1007/978-981-99-9939-2_1



2. Clustering based method (CBM): CBM has been commonly used in segmentation to classify brain structure by clustering the pixels of MRIs. CBMs may be divided into hard and soft clustering algorithms depending on their applications. A hard clustering algorithm finds the objects' natural boundaries, while a soft clustering algorithm performs the same task by finding the objects' fuzzy boundaries. The K-means clustering algorithm (KMC) [27] is one of the most common algorithms in the hard clustering category [17]. In KMC, each gray-level pixel of an MRI is assigned to a specific cluster according to some distance criterion [21, 38]. A soft clustering algorithm such as fuzzy c-means (FCM) [8] is commonly used to describe objects in an overlapping way. Based on fuzzy classification, FCM performs segmentation of MRIs, where each pixel can have different degrees of membership in different classes [1, 28].

3. Fuzzy based method (FBM): FBM segments images using fuzzy set theory [61] by reflecting the inherent uncertainties in images. Based on this theory, Huang and Wang introduced a fuzzy threshold based image segmentation method. Chaira and Ray [10] proposed a fuzzy divergence based method (FDBM) for image segmentation. Many researchers, such as Pham and Prince [41], Shen et al. [47], Zhang et al. [62] and Udupa et al. [52], have used this benefit of fuzzy sets in the segmentation of MR images.

4. Deep learning based method (DLBM): DLBM automatically learns features from a large number of training datasets [16, 18, 46]. For example, Liu et al. [33] used a DLBM to extract features from 83 regions of interest in PET images and MRIs. A three-dimensional convolutional neural network was used by Goceri [19] for Alzheimer's disease diagnosis using MRIs.
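As an illustration of the TBM category, the following is a minimal sketch of Otsu's global thresholding (the OGTM named above) for an 8-bit grayscale image; the NumPy-based implementation and the toy image are illustrative assumptions, not code from this book.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level maximizing the between-class variance
    w0 * w1 * (mu0 - mu1)**2 over all candidate thresholds."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (levels[:t] * prob[:t]).sum() / w0  # mean of the dark class
        mu1 = (levels[t:] * prob[t:]).sum() / w1  # mean of the light class
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy 8-bit image with one dark and one light pixel population.
img = np.array([[10, 12, 11], [200, 210, 205], [12, 201, 11]], dtype=np.uint8)
t = otsu_threshold(img)
binary = img >= t   # "light" region mask
```

For this toy image the chosen threshold falls between the two populations, so the binary mask cleanly separates the light pixels from the dark ones.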

1.1 Related Works in Image Segmentation

In image processing and pattern recognition, image segmentation is one of the most tedious tasks, with many applications in the domains of computer vision, robotics, object detection, feature extraction, and so forth [32]. However, image segmentation is a troublesome mechanism due to the complexities involved in terms of contrast, brightness, noise, etc. [59]. The main purpose of image segmentation is to separate each important object from the rest of the objects [50]. Hence, it is a mechanism of partitioning an image into various parts in such a way that each part has its own area. According to Cheng et al. [12], it is a method of partitioning an image I into non-overlapping areas A_I:

    ⋃_{i=1}^{n} A_{I_i} = I,  and  A_{I_1} ∩ A_{I_2} = ∅,  I_1 ≠ I_2        (1.1.1)
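Eq. (1.1.1) simply states that a segmentation is a partition of the pixel domain. For a label map this property can be checked directly; the following snippet and its toy label map are an illustration, not code from the book.

```python
import numpy as np

# A label map assigns every pixel to exactly one region, so the regions
# cover the image I and are pairwise disjoint, as required by Eq. (1.1.1).
labels = np.array([[0, 0, 1],
                   [0, 2, 1],
                   [2, 2, 1]])

regions = [labels == k for k in np.unique(labels)]

# The union of all regions is the whole pixel domain ...
assert np.stack(regions).any(axis=0).all()
# ... and any two distinct regions share no pixel.
for i in range(len(regions)):
    for j in range(i + 1, len(regions)):
        assert not (regions[i] & regions[j]).any()
```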


Grayscale image segmentation methods are basically based on partitioning an image by detecting discontinuous gray-level values in a particular region [25]. If there is homogeneity in the gray-level values of a particular region, then such a partition is done using clustering, thresholding, edge detection, etc. [12]. In image processing and pattern recognition, there are numerous benchmark methods available, including the global thresholding method [40], the gray threshold method [40], the adaptive threshold method [9] and the watershed method [54]. Fuzzy set theory is broadly used in image processing and pattern recognition due to its capability to deal with uncertainty very precisely. Tobias and Seara [50] proposed an image segmentation approach that thresholds the histogram based on the similarity between gray levels, assessed through a fuzzy measure. Chaira and Ray [11] used four types of fuzzy thresholding approaches, where the membership value of each pixel is defined using a Gamma membership function. Various applications of fuzzy set theory in image segmentation can be found in these articles [10, 11, 14, 31]. Atanassov [5] proposed the intuitionistic fuzzy set (IFS) theory, which has recently been used in image processing. Rather than assigning only one degree of membership to a pixel, as in fuzzy set theory based approaches, an IFS can assign degrees of membership based on hesitant information [7]. In an IFS, hesitant information is represented using two functions, viz., the degree of membership and the degree of non-membership. Melo-Pinto et al. [36] presented an IFS based method for determining whether a pixel belongs to the background or the object; a multilevel threshold method then performs the image segmentation. Ananthi et al. [2] introduced a new image segmentation method based on constructing an IFS from multiple fuzzy sets. Verma et al. [53] used the intuitionistic fuzzy c-means (IFCM) algorithm, a modified version of FCM, in brain image segmentation. Ananthi et al. [3] presented a new IFS based clustering algorithm to analyze brain tumors using MRIs.
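The intuitionistic fuzzy description of a pixel can be sketched in a few lines. The snippet below is illustrative only (it is not the construction of [36] or [2]): the linear membership ramp and the constant hesitation margin are assumptions made for demonstration. What it shows is the defining IFS property that membership mu, non-membership nu, and hesitation pi always sum to 1.

```python
def ifs_pixel(g, low=0, high=255, hesitation=0.1):
    """Assign a gray level g a degree of membership mu to the 'object'
    class, a degree of non-membership nu, and a hesitation degree pi.
    The linear ramp and fixed hesitation margin are illustrative
    assumptions, not the book's construction."""
    mu = (g - low) / (high - low) * (1 - hesitation)
    nu = 1 - hesitation - mu   # non-membership
    pi = 1 - mu - nu           # hesitation degree
    return mu, nu, pi

mu, nu, pi = ifs_pixel(128)
assert abs(mu + nu + pi - 1) < 1e-12   # the three degrees always sum to 1
```

Setting the hesitation margin to 0 recovers an ordinary fuzzy set, where nu = 1 - mu and there is no hesitation.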

1.2 Related Works in Image Clustering

Clustering is used in a variety of disciplines, such as image analysis, data analysis and pattern analysis. Many image processing algorithms are prone to various uncertainties, such as blurring in gray levels. Despite this problem, clustering algorithms have been widely used in image clustering. One of the existing algorithms is the K-means clustering algorithm (KMCA), which assigns each gray level to a particular cluster based on Euclidean distance [44]. Juang and Wu [29] solved the problem of lesion object detection in brain MRIs by using KMCA. Mignotte [37] introduced a modified KMCA for image clustering by considering de-texturing and spatial constraints. An adaptive KMCA [38] was used in breast MRI clustering. However, KMCA suffers from two main problems [45]: (a) many cluster centers with local optima, and (b) sensitivity to cluster center initialization. Therefore, researchers employed the fuzzy c-means algorithm (FCM) [8], which was developed based on fuzzy set theory, in the clustering of MRIs.


FCM performs brain tumor clustering in an unsupervised manner, where each pixel can have different degrees of membership in different classes [42]. Although FCM is a robust algorithm, it only works well when the images are not noisy. To address this drawback, Yoon et al. [60] suggested preprocessing the MRIs before initiating the clustering process. Zhao et al. [63] improved the performance of FCM in MRI clustering by integrating the fuzzy partition approach. Narayanan et al. [39] proposed a modified FCM (MFCM) clustering algorithm for brain tumor identification and corresponding tissue clustering. Wu and Zhang proposed a new FCM clustering algorithm based on fuzzy local information and Bregman divergence for MRI clustering. Both the KMCA and FCM algorithms are based on iterative learning. To overcome this limitation, a non-iterative clustering algorithm based on interval type-2 fuzzy sets was introduced [56]. It is observed that data are often subject to imprecise information and noise. This uncertainty always leads to a problem in determining the membership values of a particular data point [26]. To solve this problem, the concept of the intuitionistic fuzzy set (IFS) was introduced by Atanassov [6] as an extension of fuzzy set theory. IFS theory is able to deal with uncertainty and imprecise knowledge in terms of two memberships, namely the degree of membership and the degree of non-membership [48]. Recently, an IFS-based clustering method was proposed by Verma et al. [53]. Kumar et al. [30] proposed an IFS and kernel distance measure based clustering algorithm for the clustering of MRIs, called kernel intuitionistic fuzzy entropy c-means (KIFECM).
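The baseline FCM of [8] referred to throughout this section reduces to two alternating update equations: each membership u_ik is inversely proportional to the distance of pixel k from center i (raised to 2/(m-1)), and each center is the u^m-weighted mean of the data. The sketch below is a minimal illustration on 1-D gray-level toy data; the even initialization of centers and the fixed iteration count are simplifying assumptions, and this is not any of the modified variants discussed above.

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=50):
    """Minimal fuzzy c-means on 1-D data (e.g. gray levels), fuzzifier m > 1.
    Centers are initialized evenly over the data range for simplicity."""
    v = np.linspace(x.min(), x.max(), c)              # initial centers
    for _ in range(iters):
        d = np.abs(x[None, :] - v[:, None]) + 1e-12   # c x n distances
        u = d ** (-2.0 / (m - 1))
        u /= u.sum(axis=0, keepdims=True)             # memberships sum to 1 per pixel
        v = (u ** m) @ x / (u ** m).sum(axis=1)       # u^m-weighted center update
    return v, u

gray = np.array([10.0, 12.0, 11.0, 200.0, 210.0, 205.0])
centers, u = fcm(gray, c=2)
labels = u.argmax(axis=0)   # hard assignment, for display only
```

Unlike K-means, every pixel retains a graded membership in every cluster; the `argmax` step is only a convenience for visualizing the soft partition.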

1.3 Organization of the Book

Besides this introduction, the book contains five chapters. An overview of each chapter is given below:

1. In Chap. 2, a clustering algorithm based on the fuzzy information gain (FIG) function and the K-means clustering algorithm is discussed, and its application is shown in analyzing the MRIs of Parkinson's disease.
2. In Chap. 3, a segmentation algorithm based on neutrosophic set (NS) theory and neutrosophic entropy information (NEI) is presented, and its application is shown in segmenting the MRIs of Parkinson's disease.
3. In Chap. 4, a clustering algorithm based on the neutrosophic-entropy based clustering algorithm (NEBCA) and the HSV color system is discussed, and its application is demonstrated in clustering the MRIs of Parkinson's disease.
4. In Chap. 5, a type-2 neutrosophic set (T2NS) based segmentation algorithm is proposed, and its application is demonstrated in segmenting brain tumor tissue structures in MRIs.
5. In Chap. 6, a method combining the K-means clustering algorithm (KMC) with a novel fast forward quantum optimization algorithm (FFQOA), called FFQOAK (FFQOA+KMC), is discussed. Its application is demonstrated in clustering chest CT scan images of COVID-19.

References


1. Agrawal S, Panda R, Dora L (2014) A study on fuzzy clustering for magnetic resonance brain image segmentation using soft computing approaches. Appl Soft Comput 24:522–533
2. Ananthi VP, Balasubramaniam P, Lim CP (2014) Segmentation of gray scale image based on intuitionistic fuzzy sets constructed from several membership functions. Pattern Recogn 47(12):3870–3880
3. Ananthi VP, Balasubramaniam P, Kalaiselvi T (2016) A new fuzzy clustering algorithm for the segmentation of brain tumor. Soft Comput 20(12):4859–4879
4. Association IS (2015) IEEE recommended practice for three-dimensional (3D) medical modeling. IEEE Computer Society, New York
5. Atanassov KT (1986) Intuitionistic fuzzy sets. Fuzzy Sets Syst 20(1):87–96
6. Atanassov KT, Stoeva S (1986) Intuitionistic fuzzy sets. Fuzzy Sets Syst 20(1):87–96
7. Balasubramaniam P, Ananthi VP (2014) Image fusion using intuitionistic fuzzy sets. Inf Fusion 20:21–30
8. Bezdek JC, Ehrlich R, Full W (1984) FCM: the fuzzy c-means clustering algorithm. Comput Geosci 10(2–3):191–203
9. Bradley D, Roth G (2007) Adaptive thresholding using the integral image. J Graph Tools 12(2):13–21
10. Chaira T, Ray AK (2003) Segmentation using fuzzy divergence. Pattern Recogn Lett 24(12):1837–1844
11. Chaira T, Ray A (2004) Threshold selection using fuzzy set theory. Pattern Recogn Lett 25(8):865–874
12. Cheng HD, Jiang X, Sun Y, Wang J (2001) Color image segmentation: advances and prospects. Pattern Recogn 34(12):2259–2281
13. Cheng HD, Jiang X, Wang J (2002) Color image segmentation based on homogram thresholding and region merging. Pattern Recogn 35(2):373–393
14. Ciesielski KC, Udupa JK (2010) Affinity functions in fuzzy connectedness based image segmentation I: equivalence of affinities. Comput Vis Image Underst 114(1):146–154
15. Datta E, Papinutto N, Schlaeger R, Zhu A, Carballido-Gamio J, Henry RG (2017) Gray matter segmentation of the spinal cord with active contours in MR images. NeuroImage 147:788–799
16. Goceri E (2017) Deep learning in medical image analysis: recent advances and future trends. In: 11th International conference on computer graphics, visualization, computer vision and image processing, Lisbon, pp 305–311
17. Goceri E (2017) Intensity normalization in brain MR images using spatially varying distribution matching. In: 11th International conference on computer graphics, visualization, computer vision and image processing, Lisbon, pp 300–304
18. Goceri E (2019) Challenges and recent solutions for image segmentation in the era of deep learning. In: 9th International conference on image processing theory, tools and applications, Istanbul, pp 1–6
19. Goceri E (2019) Diagnosis of Alzheimer's disease with Sobolev gradient based optimization and 3D convolutional neural network. Int J Numer Methods Biomed Eng e3225. https://doi.org/10.1002/cnm.3225
20. Goceri E, Gürcan MN, Dicle O (2014) Fully automated liver segmentation from SPIR image series. Comput Biol Med 53:265–278
21. Goceri E, Shah ZK, Gurcan MN (2017) Vessel segmentation from abdominal magnetic resonance images: adaptive and reconstructive approach. Int J Numer Methods Biomed Eng 33(4):e2811
22. Gordillo N, Montseny E, Sobrevilla P (2013) State of the art survey on MRI brain tumor segmentation. Magn Reson Imaging 31(8):1426–1438
23. Harris GJ, Barta PE, Peng LW, Lee S, Brettschneider PD, Shah A, Henderer JD, Schlaepfer TE, Pearlson GD (1994) MR volume segmentation of gray matter and white matter using manual thresholding: dependence on image brightness. Am J Neuroradiol 15(2):225–230


24. Huang YP, Zaza SMM, Chu WJ, Krikorian R, Sandnes FE (2018) Using fuzzy systems to infer memory impairment from MRI. Int J Fuzzy Syst 20(3):913–927
25. Hurtik P, Madrid N, Dyba M (2019) Sensitivity analysis for image represented by fuzzy function. Soft Comput 23(6):1795–1807
26. Iakovidis DK, Pelekis N, Kotsifakos E, Kopanakis I (2008) Intuitionistic fuzzy clustering with applications in computer vision. In: Advanced concepts for intelligent vision systems, Juan-les-Pins, pp 764–774
27. Jain AK (2010) Data clustering: 50 years beyond K-means. Pattern Recogn Lett 31(8):651–666
28. Jiang XL, Wang Q, He B, Chen SJ, Li BL (2016) Robust level set image segmentation algorithm using local correntropy-based fuzzy c-means clustering with spatial constraints. Neurocomputing 207:22–35
29. Juang LH, Wu MN (2010) MRI brain lesion image detection based on color-converted K-means clustering segmentation. Measurement 43(7):941–949
30. Kumar D, Agrawal R, Verma H (2020) Kernel intuitionistic fuzzy entropy clustering for MRI image segmentation. Soft Comput 24:4003–4026
31. Lan J, Zeng Y (2013) Multi-threshold image segmentation using maximum fuzzy entropy based on a new 2D histogram. Optik-Int J Light Electron Opt 124(18):3756–3760
32. Li Y, Guo Y, Kao Y, He R (2017) Image piece learning for weakly supervised semantic segmentation. IEEE Trans Syst Man Cybern Syst 47(4):648–659
33. Liu S, Liu S, Cai W, Che H, Pujol S, Kikinis R, Feng D, Fulham MJ (2014) Multimodal neuroimaging feature learning for multiclass diagnosis of Alzheimer's disease. IEEE Trans Biomed Eng 62(4):1132–1140
34. Liu Y, Yang G, Afshari Mirak S, Hosseiny M, Azadikhah A, Zhong X, Reiter RE, Lee Y, Raman SS, Sung K (2019) Automatic prostate zonal segmentation using fully convolutional network with feature pyramid attention. IEEE Access 7:163626–163632
35. Mangan AP, Whitaker RT (1999) Partitioning 3D surface meshes using watershed segmentation. IEEE Trans Vis Comput Graph 5(4):308–321
36. Melo-Pinto P, Couto P, Bustince H, Barrenechea E, Pagola M, Fernandez J (2013) Image segmentation using Atanassov's intuitionistic fuzzy sets. Expert Syst Appl 40(1):15–26
37. Mignotte M (2011) A de-texturing and spatially constrained K-means approach for image segmentation. Pattern Recogn Lett 32(2):359–367
38. Moftah HM, Azar AT, Al-Shammari ET, Ghali NI, Hassanien AE, Shoman M (2014) Adaptive k-means clustering algorithm for MR breast image segmentation. Neural Comput Appl 24(7):1917–1928
39. Narayanan A, Rajasekaran MP, Zhang Y, Govindaraj V, Thiyagarajan A (2019) Multichanneled MR brain image segmentation: a novel double optimization approach combined with clustering technique for tumor identification and tissue segmentation. Biocybern Biomed Eng 39(2):350–381
40. Otsu N (1979) A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern 9(1):62–66
41. Pham DL, Prince JL (1999) Adaptive fuzzy segmentation of magnetic resonance images. IEEE Trans Med Imaging 18(9):737–752
42. Phillips W, Velthuizen R, Phuphanich S, Hall L, Clarke L, Silbiger M (1995) Application of fuzzy c-means segmentation technique for tissue differentiation in MR images of a hemorrhagic glioblastoma multiforme. Magn Reson Imaging 13(2):277–290
43. Portela NM, Cavalcanti GD, Ren TI (2014) Semi-supervised clustering for MR brain image segmentation. Expert Syst Appl 41(4):1492–1497
44. MacQueen J (1967) Some methods for classification and analysis of multivariate observations. In: Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, Oakland, CA, vol 1
45. Selim SZ, Ismail MA (1984) K-means-type algorithms: a generalized convergence theorem and characterization of local optimality. IEEE Trans Pattern Anal Mach Intell 6(1):81–87
46. Shen D, Wu G, Suk HI (2017) Deep learning in medical image analysis. Annu Rev Biomed Eng 19:221–248


47. Shen S, Sandham W, Granat M, Sterr A (2005) MRI fuzzy segmentation of brain tissue using neighborhood attraction with neural-network optimization. IEEE Trans Inf Technol Biomed 9(3):459–467
48. Singh P, Huang YP, Wu SI (2020) An intuitionistic fuzzy set approach for multi-attribute information classification and decision-making. Int J Fuzzy Syst 22:1506–1520
49. Tidwell VK, Kim JH, Song SK, Nehorai A (2010) Automatic segmentation of rodent spinal cord diffusion MR images. Magn Reson Med 64(3):893–901
50. Tobias OJ, Seara R (2002) Image segmentation by histogram thresholding using fuzzy sets. IEEE Trans Image Process 11(12):1457–1465
51. Tu Z, Bai X (2009) Auto-context and its application to high-level vision tasks and 3D brain image segmentation. IEEE Trans Pattern Anal Mach Intell 32(10):1744–1757
52. Udupa JK, Wei L, Samarasekera S, Miki Y, van Buchem MA, Grossman RI (1997) Multiple sclerosis lesion quantification using fuzzy-connectedness principles. IEEE Trans Med Imaging 16(5):598–609
53. Verma H, Agrawal R, Sharan A (2016) An improved intuitionistic fuzzy c-means clustering algorithm incorporating local information for brain image segmentation. Appl Soft Comput 46:543–557
54. Vincent L, Soille P (1991) Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Trans Pattern Anal Mach Intell 13(6):583–598
55. Wang G (2016) A perspective on deep imaging. IEEE Access 4:8914–8924
56. Wang Z, Yang Y (2018) A non-iterative clustering based soft segmentation approach for a class of fuzzy images. Appl Soft Comput 70:988–999
57. Wu H, Barba J, Gil J (2000) Iterative thresholding for segmentation of cells from noisy images. J Microsc 197(3):296–304
58. Xu K, Cao J, Xia K, Yang H, Zhu J, Wu C, Jiang Y, Qian P (2019) Multichannel residual conditional GAN-leveraged abdominal pseudo-CT generation via Dixon MR images. IEEE Access 7:163823–163830
59. Yang X, Zhao W, Chen Y, Fang X (2008) Image segmentation with a fuzzy clustering algorithm based on ant-tree. Signal Process 88(10):2453–2462
60. Yoon UC, Kim JS, Kim JS, Kim IY, Kim SI (2001) Adaptable fuzzy C-means for improved classification as a preprocessing procedure of brain parcellation. J Digit Imaging 14(1):238–240
61. Zadeh LA (1965) Fuzzy sets. Inf Control 8(3):338–353
62. Zhang Y, Ye S, Ding W (2017) Based on rough set and fuzzy clustering of MRI brain segmentation. Int J Biomath 10(02):1750026
63. Zhao F, Jiao L, Liu H (2013) Kernel generalized fuzzy c-means clustering with spatial information for image segmentation. Digit Signal Process 23(1):184–199

Chapter 2

Parkinson’s Disease MRIs Analysis Using Fuzzy Clustering Approach

“The best way to predict your future is to create it.” Peter F. Drucker

Abstract Parkinson's disease (PD) is one of the serious diseases in the neurodegenerative disease group, whose early-stage pre-diagnosis is still very tedious. Radiologists and medical practitioners mostly depend on the analysis of PD patients' magnetic resonance images (MRIs) to identify this disease. Due to the presence of grayscale features and uncertain inherited information in MRIs, their pattern recognition and visualization are very complex. With this motivation, a new method for analyzing and visualizing patterns in MRIs is presented in this study. For this purpose, this study adopts the fuzzy information gain (FIG) function and the K-means clustering algorithm. The FIG function is used to quantify the fuzzified pixel information, whereas the K-means clustering algorithm is employed to cluster that fuzzified pixel information. Finally, changes in the MRIs are recognized and classified into three distinct regions, viz., the minimum changed region (MINCR), the maximum changed region (MAXCR) and the average changed region (AVGCR). Experimental results are provided by comparing PD patients' segmented MRIs with seven well-known image segmentation and clustering methods: the adaptive threshold method, watershed method, gray threshold method, fuzzy based method, K-means clustering algorithm, adaptive K-means clustering algorithm and fuzzy c-means (FCM) algorithm. The proposed method achieved an average mean squared error of 63.49, peak signal-to-noise ratio of 30.14 and Jaccard similarity coefficient of 0.92 over nine MRIs of PD. The performance showed improvements of 20.73–32.94%, 3.54–6.20% and 6.98–64.29% in the average mean squared error, peak signal-to-noise ratio and Jaccard similarity coefficient, respectively, compared to the other image segmentation and clustering methods.

Keywords Fuzzy information gain · Image segmentation · K-means clustering · Magnetic resonance images (MRIs) · Parkinson's disease (PD)

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 P. Singh, Biomedical Image Analysis, Brain Informatics and Health, https://doi.org/10.1007/978-981-99-9939-2_2



2.1 Introduction

Parkinson's disease (PD) is one of the severe neurodegenerative diseases, the pre-diagnosis of which remains very cumbersome at an early stage, as it depends primarily on clinical or medical evidence. This disease involves motor symptom dysfunction such as tremor, trembling, slow movement (bradykinesia) and altered gait [16]. Despite the serious attention given by the scientific community to developing standard tests or methods based on blood samples or image analysis, there is still no efficient solution for the early detection of PD. Experts use positron emission tomography (PET) or single-photon emission computerized tomography (SPECT) scans to evaluate the level of PD [1]. These two scanning techniques are, however, very costly and are only used in specialized laboratories. In most cases, physicians diagnose this disease very late and make delayed decisions for its treatment, when the neuron system of the patient is nearly destroyed. In medical terms, the critical stage of this disease is known as Braak Stage III–IV [3]. For the pre-diagnosis of PD, the use of CT (computed tomography) or MRI scans has increased recently. However, diagnosis of PD using MRI suffers from the following issues:

1. In most cases, the MRIs of PD patients seem normal or show no significant changes [2]. Hence, MRI scanning is useful only for recognizing secondary diseases caused by PD.
2. Tredici et al. [18] reported that the first structural damage due to PD comes to notice only after 10 years. MRI scanning is not clearly able to differentiate PD patients from non-PD patients [11].
3. Imaging techniques can only identify the structural changes in the brainstem due to PD [17]. They are only helpful for diagnosing the premotor disease and its progression.
4. Notwithstanding developments in digital imaging, experts still rely on grayscale MRIs for preparing reports [12].
For this reason, it is quite likely that severely affected regions of the human brain go unrecognized. This chapter introduces a new clustering method using the proposed fuzzy information gain (FIG) function and the K-means clustering algorithm. Its main objectives are to resolve the problems associated with analyzing MRIs. The research objectives and contributions of the study are discussed next.

1. To identify a method for uncertainty representation in MRIs: For this problem, this study found it suitable to use fuzzy set theory [20]. Based on this theory, this study used the concept of fuzzy information (FI) [15]. FI provides a unique facility by combining uncertain information together with its degree of membership. FI forms the basis for representing uncertain changes in the respective MRIs by integrating all the FI together.


2. To quantify the changes in MRIs: This study used the FIG function to measure the changes in terms of information [15]. Based on the degrees of membership associated with each FI, the FIG function can quantify the amount of uncertainty available for particular changes in MRIs. Based on this FIG function, this study also explored identifying the regions of maximum, minimum and average change in MRIs.
3. To recognize the changes in MRIs: After quantifying the uncertainty using the FIG function, we proceeded to recognize the significant changes in MRIs. These changes were classified as the MAXCR, MINCR and AVGCR. For recognizing and locating those changes efficiently, this study used the K-means clustering algorithm [14], where the FIG values associated with the corresponding MRIs were used as inputs to this algorithm.

PD MRIs were employed for the experiments [7]; their descriptions are provided in a subsequent section. Empirical analysis revealed the effectiveness of the proposed method over the existing well-known image segmentation and clustering methods. A list of the terms, abbreviations and notations used throughout the chapter is presented in Table 2.1.

The rest of this chapter is arranged as follows. Section 2.2 introduces the mathematical formulation for uncertainty representation in an image. The proposed method for clustering of MRIs is discussed in Sect. 2.3. Experimental results are discussed in Sect. 2.4. Conclusions and future directions are discussed in Sect. 2.5.

Table 2.1 List of terms, abbreviations and notations

  Terms                               Abbreviation   Notation
  Parkinson's disease                 PD             –
  Magnetic resonance image            MRI            –
  Fuzzy information                   FI             F̃
  Fuzzy information system            FIS            K̃
  Fuzzy information gain              FIG            E(F̃)
  Gray pixel space                    GPS            G_s
  Fuzzified entropy matrix            FEM            F̃_EM
  Minimum changed region              MINCR          –
  Maximum changed region              MAXCR          –
  Average changed region              AVGCR          –
  Joint region information function   JRIF           –
  Mean squared error                  MSE            MSE
  Peak signal-to-noise ratio          PSNR           PSNR
  Jaccard similarity coefficient      JSC            J_{I_output, I_ground}(c)
  Correlation coefficient             CC             r


2.2 Mathematical Formulation for Uncertainty Representation

For a problem space, if every event is considered as an individual piece of uncertain information, then it can be represented by its corresponding degree of membership using a fuzzy set. Assume U = {e_1, e_2, ..., e_n} represents a universe of discourse of n events, where i = 1, 2, ..., n. Various changes in these events can be characterized by categorizing them as low change, moderate change, high change, and so on. For the representation of such changes, we can use fuzzy set theory. For the universe of discourse U, we can define the fuzzy set X̃ over the events by

    X̃ = μ(e_1)/e_1 + μ(e_2)/e_2 + ... + μ(e_n)/e_n.    (2.2.1)

Here, μ represents the degree of membership function used in fuzzy set theory. Hence, μ(e_i) gives the degree of membership value in the range [0, 1] for the event e_i associated with the fuzzy set X̃. The symbol "+" denotes the fuzzy union operation and the symbol "/" denotes a separator, rather than the summation and division commonly used in algebra.

Singh and Dhiman [15] introduced the concept of FI on the basis of the above fuzzy set formulation. Here, we apply this concept to the analysis of MRIs. An FI contains an uncertain set of events and their related fuzzified information. It can be defined for the universe of discourse U as follows.

Definition 2.2.1 (FI) An FI for the event e_i is a paired set of elements {e_i, μ(e_i)}, i = 1, 2, ..., n, which is denoted by F̃. Mathematically, it can be expressed as:

    F̃ = {e_i, μ(e_i)}/U, ∀ e_i ∈ U.    (2.2.2)

An individual FI often does not give a full description of the uncertainty of a given event. In a problem space, complete information about the inherent uncertainty can be given by integrating the FI associated with all the events. Such a collection of FI therefore constitutes a system called an FIS. Mathematically, it can be expressed as follows.

Definition 2.2.2 (FIS) The FIS is a set of FI defined on U. It is denoted by K̃ and can be expressed mathematically as:

    K̃ = [{e_1, μ(e_1)}/U, {e_2, μ(e_2)}/U, ..., {e_n, μ(e_n)}/U].    (2.2.3)

Here, each {e_i, μ(e_i)}/U denotes the individual FI defined on U, where e_i ∈ U and μ(e_i) ∈ [0, 1].

To quantify the fuzziness involved in each FI, the FIG function [15] is used. The FIG function can be expressed as follows.

Definition 2.2.3 (FIG) The FIG is a function that quantifies the uncertainty represented in terms of FI for the set of uncertain events defined in the universe of discourse U. Mathematically, it can be expressed as:

    E(F̃) = − Σ_{i=1}^{n} μ(e_i) log_2(μ(e_i)).    (2.2.4)

Here, e_i ∈ U and μ(e_i) ∈ [0, 1]. In Eq. (2.2.4), E(F̃) is called the FIG function.
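The FIG function of Eq. (2.2.4) has the form of a Shannon entropy computed over membership degrees. A minimal Python sketch (the function name and the convention E(0) = 0 are illustrative choices, not from the book):

```python
import math

def fig(memberships):
    """FIG function (Eq. 2.2.4): E(F) = -sum_i mu(e_i) * log2(mu(e_i)).

    Terms with mu = 0 are skipped, following the usual 0*log(0) = 0
    convention for entropy-style measures.
    """
    return -sum(mu * math.log2(mu) for mu in memberships if mu > 0)

# Two events with membership 0.5 each carry one bit of uncertainty.
print(fig([0.5, 0.5]))  # -> 1.0
```

As with Shannon entropy, the value grows as the membership degrees become more ambiguous and vanishes when every event is fully certain.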

2.3 The Proposed Method

This section introduces the proposed method for the segmentation of MRIs based on the FIG function and the K-means clustering algorithm. Each step of the proposed method is explained in the subsequent subsections.

2.3.1 Representation of Pixels

For an image I_input, various changes can be distinguished in terms of the color intensities associated with its pixels. By combining those pixels, a space can be defined for I_input, which is termed a gray pixel space (GPS). In the following, we introduce a definition of the GPS in the context of I_input.

Definition 2.3.1 (GPS) For an input image I_input with L levels and gray level X_{i,j} at pixel location (i, j), a GPS is the collection of gray pixels X_{i,j} that form a space in the image. The GPS for the image I_input is denoted G_s. Mathematically, it can be expressed in the following matrix form:

          ⎡ X_{1,1}  X_{1,2}  ...  X_{1,n} ⎤
    G_s = ⎢ X_{2,1}  X_{2,2}  ...  X_{2,n} ⎥    (2.3.1)
          ⎢    ⋮        ⋮      ⋱      ⋮    ⎥
          ⎣ X_{m,1}  X_{m,2}  ...  X_{m,n} ⎦

where the maximum numbers of rows and columns of I_input are denoted by m and n, respectively. Here, i = 1, 2, ..., m and j = 1, 2, ..., n.


2.3.2 Formation of FIS

In the GPS representation, there is no clear boundary for each spatial object, so it is hard to classify a pixel as changed or unchanged. The representation of the pixels belonging to the GPS is therefore an uncertain concept. This inherent uncertainty leads us towards the representation of pixels in terms of an FIS. In the following, we formulate the FIS for the pixels belonging to the GPS.

Definition 2.3.2 (FIS for the GPS) It is the collection of FI for each pixel belonging to the GPS. Mathematically, it can be represented as:

        ⎡ {X_{1,1}, μ(X_{1,1})}/U  {X_{1,2}, μ(X_{1,2})}/U  ...  {X_{1,n}, μ(X_{1,n})}/U ⎤
    K̃ = ⎢ {X_{2,1}, μ(X_{2,1})}/U  {X_{2,2}, μ(X_{2,2})}/U  ...  {X_{2,n}, μ(X_{2,n})}/U ⎥    (2.3.2)
        ⎢           ⋮                        ⋮               ⋱             ⋮            ⎥
        ⎣ {X_{m,1}, μ(X_{m,1})}/U  {X_{m,2}, μ(X_{m,2})}/U  ...  {X_{m,n}, μ(X_{m,n})}/U ⎦

In Eq. (2.3.2), each {X_{i,j}, μ(X_{i,j})}/U represents the individual FI of the crisp pixel X_{i,j} ∈ U, and μ(X_{i,j}) ∈ [0, 1].
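The book does not commit to a particular membership function μ for the pixels at this point; Sect. 2.4.1 only fixes the universe of discourse as U = [min(I_input), max(I_input)]. Under that assumption, one simple illustrative choice is min-max normalization of the gray levels, sketched below (the helper name is ours):

```python
def to_membership(gps):
    """Map each gray pixel X_{i,j} of the GPS to a membership degree in [0, 1].

    Min-max normalization over the image's own gray range is used here as an
    illustrative membership function; the book leaves the exact choice open.
    """
    flat = [x for row in gps for x in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1  # guard against a constant image
    return [[(x - lo) / span for x in row] for row in gps]

gps = [[0, 128], [255, 64]]  # a toy 2x2 gray pixel space
print(to_membership(gps))
```

Pairing each crisp gray level with its computed degree then yields exactly the FIS entries {X_{i,j}, μ(X_{i,j})}/U of Eq. (2.3.2).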

2.3.3 Measure of Uncertainty

The amount of information available in terms of degrees of membership can be measured using the FIG function (Eq. (2.2.4)). Using this function, the FIG value for each individual FI {X_{i,j}, μ(X_{i,j})}/U is obtained and presented in matrix form. We refer to this matrix as the fuzzified entropy matrix (FEM).

Definition 2.3.3 (FEM) It is a matrix in which each FI is represented by its corresponding FIG value. The FEM is denoted F̃_EM and can be defined as:

           ⎡ E(μ(X_{1,1}))  E(μ(X_{1,2}))  ...  E(μ(X_{1,n})) ⎤
    F̃_EM = ⎢ E(μ(X_{2,1}))  E(μ(X_{2,2}))  ...  E(μ(X_{2,n})) ⎥    (2.3.3)
           ⎢       ⋮               ⋮         ⋱        ⋮       ⎥
           ⎣ E(μ(X_{m,1}))  E(μ(X_{m,2}))  ...  E(μ(X_{m,n})) ⎦

Here, E(μ(X_{i,j})) satisfies Eq. (2.2.4) and μ(X_{i,j}) ∈ K̃.
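Reading each pixel's FI as a one-element system, its FIG value reduces to E(μ) = −μ log₂ μ, and Eq. (2.3.3) is this applied elementwise. A small sketch under that reading (our interpretation of the per-pixel entropy):

```python
import math

def fem(memberships):
    """Fuzzified entropy matrix (Eq. 2.3.3): the FIG value of each pixel's
    membership degree, with E(0) taken as 0 by the usual entropy convention."""
    def e(mu):
        return -mu * math.log2(mu) if mu > 0 else 0.0
    return [[e(mu) for mu in row] for row in memberships]

# mu = 0.5 gives the largest single-term entropy (0.5); mu = 0 or 1 give 0.
matrix = fem([[0.5, 1.0], [0.0, 0.25]])
print(matrix)
```

The resulting matrix has the same shape as the GPS, so each entropy value stays anchored to its pixel location for the clustering step that follows.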


2.3.4 Clustering of FEM

In this study, a set of pixels is treated as a system called the GPS. If the entropy of a particular pixel in the GPS changes, then its corresponding gray level value will surely change as well. These changes can be studied through similarity and dissimilarity analysis of the FIG values in the FEM. One technique suitable for this kind of analysis is clustering. To cluster the FIG values of the FEM, this study uses the K-means clustering algorithm, which, for the values E(μ(X_{i,j})) ∈ F̃_EM, generates K clusters. The steps of this algorithm are explained next.

Step 1. Input:
  F̃_EM: a FEM containing FIG values of the form E(μ(X_{i,j})), where i = 1, 2, ..., m and j = 1, 2, ..., n.
  K: the chosen number of clusters.
Step 2. Select K initial cluster centres:

    Z_1(1), Z_2(1), ..., Z_K(1)

  Here, 1 indicates the first iteration of the algorithm.
Step 3. Repeat:
Step 4. At the k-th iterative step, assign each E(μ(X_{i,j})) to S_j(k) if the following relation is satisfied:

    D(E(μ(X_{i,j})), Z_j(k)) = Σ_{i=1}^{m} Σ_{j=1}^{n} (E(μ(X_{i,j})) − Z_j(k))²    (2.3.4)

  for i = 1, 2, ..., K, i ≠ j, where S_j(k) denotes the set of FIG values whose cluster centre is Z_j(k).
Step 5. Compute the new cluster centres Z_j(k + 1) as:

    Z_j(k + 1) = (1/N_j) Σ E(μ(X_{i,j})), ∀ E(μ(X_{i,j})) ∈ S_j(k)    (2.3.5)

  where N_j is the number of FIG values in S_j(k) and j = 1, 2, ..., K.
Step 6. Go to Step 3 until the cluster centres no longer change.
Output: a set of K clusters of FIG values.
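Since every FEM entry is a scalar, Steps 1–6 amount to one-dimensional K-means over the FIG values. A compact sketch (initialization to the first K distinct values is our implementation choice; the book's pseudocode does not fix it):

```python
def kmeans_fig(values, k, max_iter=100):
    """K-means over scalar FIG values (Steps 1-6 of Sect. 2.3.4).

    Assigns each value to its nearest centre, recomputes centres as cluster
    means (Eq. 2.3.5), and stops when the centres no longer change (Step 6).
    """
    centres = sorted(set(values))[:k]  # Step 2: initial centres (our choice)
    for _ in range(max_iter):
        clusters = [[] for _ in centres]
        for v in values:                     # Step 4: nearest-centre rule
            j = min(range(len(centres)), key=lambda c: (v - centres[c]) ** 2)
            clusters[j].append(v)
        new = [sum(c) / len(c) if c else centres[j]
               for j, c in enumerate(clusters)]
        if new == centres:                   # Step 6: convergence
            break
        centres = new
    return centres, clusters

centres, clusters = kmeans_fig([0.1, 0.12, 0.5, 0.52, 0.9], k=2)
print(centres)
```

On the toy input, the low-entropy pair and the three higher values separate into two clusters after a few iterations.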


2.3.5 Pattern Visualization

In this study, the segmented MRIs are classified into three different regions, viz., MINCR, MAXCR and AVGCR. They can be defined as follows.

Definition 2.3.4 (MINCR, MAXCR and AVGCR) The MINCR, MAXCR and AVGCR can be defined based on F̃_EM as:

    MINCR = ⋂_{i=1}^{m} ⋂_{j=1}^{n} E(μ(X_{i,j})), E(μ(X_{i,j})) ∈ F̃_EM.    (2.3.6)

    MAXCR = ⋃_{i=1}^{m} ⋃_{j=1}^{n} E(μ(X_{i,j})), E(μ(X_{i,j})) ∈ F̃_EM.    (2.3.7)

    AVGCR = (MINCR + MAXCR) / 2.    (2.3.8)

Here, the symbols ⋂ and ⋃ represent the fuzzy intersection and union operations among the information associated with two regions.

Input: image I_input with L levels and gray level pixel X_{i,j} at pixel position (i, j), where i = 1, 2, ..., m and j = 1, 2, ..., n.
/∗ Perform clustering operation ∗/
for ∀ X_{i,j} do
    Represent I_input as the GPS.
    Convert the GPS into the FIS.
    Prepare the FEM from the FIS.
    Apply the K-means clustering algorithm to the FEM.
end
/∗ End of clustering operation ∗/
Output: segmented image I_output with K clusters of FIG values.
/∗ Perform pattern visualization operation ∗/
Input: K clusters of FIG values of the form E(μ(X_{i,j})), where i = 1, 2, ..., m and j = 1, 2, ..., n.
for ∀ E(μ(X_{i,j})) do
    Classify the FEM into three regions, viz., MINCR, MAXCR and AVGCR.
    Visualize different patterns using the JRIF, which includes the paired sets of regions J(MINCR, MAXCR), J(MINCR, AVGCR) and J(MAXCR, AVGCR).
end
/∗ End of pattern visualization operation ∗/
Output: classified image with three different regions.
Algorithm 1: Procedures PERFORM_CLUSTERING() and PERFORM_VISUALIZATION().
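Reading the fuzzy intersection and union in Eqs. (2.3.6)–(2.3.7) as the standard min and max over the FEM entries (one common interpretation; the book leaves the operators abstract), the three region levels can be sketched as:

```python
def changed_regions(fem_matrix):
    """Region levels of Definition 2.3.4, with fuzzy intersection/union over
    the FEM read as the minimum/maximum of its FIG values (our reading)."""
    flat = [e for row in fem_matrix for e in row]
    mincr = min(flat)              # Eq. (2.3.6): fuzzy intersection -> min
    maxcr = max(flat)              # Eq. (2.3.7): fuzzy union -> max
    avgcr = (mincr + maxcr) / 2    # Eq. (2.3.8)
    return mincr, maxcr, avgcr

print(changed_regions([[0.2, 0.5], [0.8, 0.4]]))  # -> (0.2, 0.8, 0.5)
```

The three scalars then serve as reference levels against which the clustered FIG values can be classified as minimum, maximum or average change.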


In this study, further visualization insight is obtained by considering the information available in the MINCR, MAXCR and AVGCR. For this purpose, the information associated with any pair of distinct regions from the set R = {MINCR, MAXCR, AVGCR} can be joined together. This join operation is performed using the proposed function, called the joint region information function (JRIF), which is defined for the set of regions R as follows.

Definition 2.3.5 (JRIF) It is the collection of FIG values associated with the paired sets of regions J(MINCR, MAXCR), J(MINCR, AVGCR) and J(MAXCR, AVGCR) from R, where J represents the JRIF. Mathematically, it can be defined as:

    J(MINCR, MAXCR) = ⋃_{i=1}^{m} ⋃_{j=1}^{n} {MINCR, MAXCR}.    (2.3.9)

    J(MINCR, AVGCR) = ⋃_{i=1}^{m} ⋃_{j=1}^{n} {MINCR, AVGCR}.    (2.3.10)

    J(MAXCR, AVGCR) = ⋃_{i=1}^{m} ⋃_{j=1}^{n} {MAXCR, AVGCR}.    (2.3.11)

Here, the symbol ⋃ represents the fuzzy union operation among the information associated with two different regions.

An algorithmic representation of the proposed method is presented in Algorithm 1, which includes two procedures, PERFORM_CLUSTERING() and PERFORM_VISUALIZATION(). The procedure PERFORM_CLUSTERING() is used for segmenting the MRIs, while PERFORM_VISUALIZATION() is used for pattern classification and visualization from the segmented MRIs. Figure 2.1 shows a flowchart of the proposed method, which describes its working process.
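A sketch of Definition 2.3.5: each FIG value is first assigned to the nearest of the three region levels, and the paired joins J(·,·) are then formed as unions of the assigned value sets. The nearest-level assignment and the use of plain set union for the fuzzy union are our illustrative choices, not fixed by the book:

```python
def jrif_pairs(fig_values, mincr, maxcr, avgcr):
    """Paired region joins of Eqs. (2.3.9)-(2.3.11), with regions modelled as
    sets of FIG values and the fuzzy union read as set union (our reading)."""
    levels = {"MINCR": mincr, "MAXCR": maxcr, "AVGCR": avgcr}
    regions = {name: set() for name in levels}
    for v in fig_values:  # assign each value to its nearest region level
        nearest = min(levels, key=lambda name: abs(v - levels[name]))
        regions[nearest].add(v)
    return {
        "J(MINCR, MAXCR)": regions["MINCR"] | regions["MAXCR"],
        "J(MINCR, AVGCR)": regions["MINCR"] | regions["AVGCR"],
        "J(MAXCR, AVGCR)": regions["MAXCR"] | regions["AVGCR"],
    }

pairs = jrif_pairs([0.1, 0.45, 0.9, 0.55], mincr=0.1, maxcr=0.9, avgcr=0.5)
print(pairs["J(MINCR, AVGCR)"])
```

Each joined set can then be rendered as one overlay, which is how the paired regions support the pattern visualization step.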

2.4 Experimental Results

In this section, experimental results related to the segmentation of MRIs of PD are discussed. To evaluate the performance of the proposed method, this study carried out experiments on MRIs of PD. This dataset was acquired from the Image and Data Archive (IDA) [7]. The selected MRIs are shown in Fig. 2.2 along with their respective ground truth (GT) images.

2.4.1 Experimental Set-Up

The proposed method was implemented in Matlab R2019a, running on a system with the Microsoft Windows 10 Home 64-bit operating system,

[Fig. 2.1 Flowchart of the proposed method: select the MRIs; represent them as the GPS; convert the GPS into the FIS; prepare the FEM from the FIS; apply the K-means clustering algorithm to the FEM; get the segmented image; classify the FEM and visualize different patterns using the JRIF]

Core i7-9700F processor, 16 GB main memory and a 3.00 GHz CPU. In the course of the experiments, the universe of discourse for I_input was taken as U = [min(I_input), max(I_input)], where min and max give the minimum and maximum grayscale values of I_input, respectively. Then, the FEM was constructed to start the segmentation process based on the K-means clustering algorithm.

2.4.2 Performance Evaluation Metrics

The performance of the proposed method was evaluated using well-known metrics from image segmentation analysis: mean squared error (MSE), peak signal-to-noise ratio (PSNR), Jaccard similarity coefficient (JSC) and correlation coefficient (CC). These metrics are described in terms of the original image (I_input), the segmented image (I_output) and the GT image (I_ground) as follows.

● MSE: The MSE is used to calculate the average gray level intensity lost during segmentation of the original image I_input. A lower MSE value

[Fig. 2.2 MRIs of PD patients #1–#9 with their respective GT images: (a) original image, and (b) GT image]

indicates minimal intensity loss and a better segmented image I_output. Mathematically, it can be represented as:

    MSE = (1/(M × N)) Σ_{m=1}^{M} Σ_{n=1}^{N} (I_input − I_output)²    (2.4.1)

  Here, M × N denotes the size of the image in pixels.
● PSNR: The PSNR is negatively correlated with the MSE, so a higher value indicates less distortion and a better segmented image I_output. Mathematically, it can be represented as:

    PSNR = 10 × log_10((255)² / MSE)    (2.4.2)

● JSC: The JSC measures the resemblance between two images. It can be described as the intersection of the pixel sets divided by the size of the union of the pixel sets. The JSC value lies in the range [0, 1]; a value close to 1 indicates that the segmented regions have a perfect similarity with the GT image. It can be defined as:

    J_{I_output, I_ground}(c) = X_{I_output ∩ I_ground}(c) / X_{I_output ∪ I_ground}(c)    (2.4.3)

  In Eq. (2.4.3), X_{I_output ∩ I_ground}(c) and X_{I_output ∪ I_ground}(c) represent the intersection and union of the pixels belonging to class c in the segmented image and the GT image, respectively.
● CC: The CC measure is used to identify the match between the original image and the segmented image. The CC value lies in the range [−1, 1]; a value close to 1 indicates that the segmented regions have a perfect match with the original image. It can be defined as:

    r = [Σ_{m=1}^{M} Σ_{n=1}^{N} (I_input − Ī_input)(I_output − Ī_output)] / [√(Σ_{m=1}^{M} Σ_{n=1}^{N} (I_input − Ī_input)²) · √(Σ_{m=1}^{M} Σ_{n=1}^{N} (I_output − Ī_output)²)]    (2.4.4)

  where r denotes the CC value. In Eq. (2.4.4), Ī_input and Ī_output represent the means of the original image and the segmented image, respectively.
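The four metrics of Eqs. (2.4.1)–(2.4.4) are straightforward to compute; the sketch below uses plain Python lists for small grayscale images (the helper names are ours):

```python
import math

def mse(a, b):
    """Mean squared error between two equally sized images (Eq. 2.4.1)."""
    n = sum(len(row) for row in a)
    return sum((x - y) ** 2
               for ra, rb in zip(a, b) for x, y in zip(ra, rb)) / n

def psnr(a, b):
    """Peak signal-to-noise ratio in dB for 8-bit images (Eq. 2.4.2)."""
    return 10 * math.log10(255 ** 2 / mse(a, b))

def jsc(seg, gt, c):
    """Jaccard similarity for class c between a segmented image and its GT (Eq. 2.4.3)."""
    s = {(i, j) for i, row in enumerate(seg) for j, v in enumerate(row) if v == c}
    g = {(i, j) for i, row in enumerate(gt) for j, v in enumerate(row) if v == c}
    return len(s & g) / len(s | g)

def cc(a, b):
    """Pearson correlation coefficient between two images (Eq. 2.4.4)."""
    xs = [x for row in a for x in row]
    ys = [y for row in b for y in row]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (math.sqrt(sum((x - mx) ** 2 for x in xs))
           * math.sqrt(sum((y - my) ** 2 for y in ys)))
    return num / den

a = [[10, 20], [30, 40]]  # toy "original" image
b = [[12, 20], [30, 36]]  # toy "segmented" image
print(mse(a, b), round(psnr(a, b), 2))
```

Note the usual direction of each score: MSE is better when lower, while PSNR, JSC and CC are better when higher.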

2.4.3 Discussion on Segmentation of MRI

Original MRIs and their corresponding segmented images, obtained from the proposed and existing methods, are shown in Figs. 2.3 and 2.4. The existing methods include the adaptive threshold method [4], watershed method [21], gray threshold method [13], fuzzy based method [5], K-means clustering algorithm [19], adaptive K-means clustering algorithm [10] and FCM algorithm [8]. Based on these segmented images, it can be easily observed that the different regions of the human brain were not uniformly segmented by the existing methods. Comparing them with the segmented images obtained from the proposed method, it is clear that the regions of the brain are adequately segmented. Segmented images based on the



Fig. 2.3 Clustering of gray and white matters for image #1: (a) original image, (b) proposed method, (c) RGB effect of (b), (d) adaptive threshold method [4], (e) watershed method [21], (f) gray threshold method [13], (g) fuzzy based method [5], (h) K-means clustering algorithm [19], (i) adaptive K-means clustering algorithm [10], and (j) FCM algorithm [8]


Fig. 2.4 Clustering of gray and white matters for image #2: (a) original image, (b) proposed method, (c) RGB effect of (b), (d) adaptive threshold method [4], (e) watershed method [21], (f) gray threshold method [13], (g) fuzzy based method [5], (h) K-means clustering algorithm [19], (i) adaptive K-means clustering algorithm [10], and (j) FCM algorithm [8]

existing methods shown in Figs. 2.3 and 2.4 revealed that the previous methods cannot manage such images because of inconsistent and ambiguous boundaries. On the other hand, the proposed method clearly segmented such images, so that boundaries and objects can be easily separated (Figs. 2.3b and 2.4b). For better visualization of the segmented information, each segmented image was presented through the RGB effect. For images #1–#2, the corresponding RGB effect images are shown in Fig. 2.3c and Fig. 2.4c, respectively. Finally, statistical analyses of the proposed method and the existing methods were conducted using the MSE, PSNR and JSC metrics. The efficiency of the proposed method was evaluated against well-known methods that included the adaptive thresh-


Table 2.2 Comparison of MSE obtained by the proposed method and existing methods for the MRIs of PD

Method            #1     #2     #3     #4     #5     #6     #7     #8     #9     Average
Adaptive [4]      95.11  93.11  93.16  94.54  96.91  94.11  95.34  93.45  96.34  94.67
Watershed [21]    96.11  95.24  93.12  94.23  94.26  95.23  92.33  96.23  93.16  94.43
Gray [13]         93.11  92.23  93.14  93.16  93.19  92.21  92.33  94.36  94.26  93.11
Fuzzy [5]         84.23  85.23  83.14  85.11  89.23  84.55  84.66  87.55  88.44  85.79
K-means [19]      83.14  82.23  83.14  81.30  83.55  84.56  81.44  83.56  85.56  83.16
Adaptive K [10]   81.14  81.23  82.14  81.30  82.55  81.56  81.44  81.56  83.56  81.83
FCM [8]           79.23  79.24  78.34  81.13  80.65  80.45  79.34  80.78  81.66  80.09
Proposed          65.17  60.71  64.92  62.33  65.46  63.23  62.67  63.78  63.17  63.49

Table 2.3 Comparison of PSNR obtained by the proposed method and existing methods for the MRIs of PD

Method            #1     #2     #3     #4     #5     #6     #7     #8     #9     Average
Adaptive [4]      28.38  28.48  28.47  28.40  28.27  28.39  28.34  28.43  28.29  28.38
Watershed [21]    28.34  28.38  28.47  28.42  28.39  28.34  28.48  28.30  28.44  28.40
Gray [13]         28.47  28.52  28.47  28.47  28.44  28.48  28.48  28.38  28.39  28.46
Fuzzy [5]         28.91  28.86  28.97  28.86  28.63  28.86  28.85  28.71  28.66  28.81
K-means [19]      28.97  29.01  28.97  29.06  28.91  28.86  29.02  28.91  28.81  28.95
Adaptive K [10]   29.07  29.07  29.02  29.06  29.00  29.05  29.06  29.05  28.94  29.04
FCM [8]           29.18  29.18  29.23  29.07  29.06  29.08  29.14  29.06  29.01  29.11
Proposed          30.02  30.33  30.04  30.22  30.01  30.16  30.19  30.12  30.16  30.14

old method [4], the watershed method [21], the gray threshold method [13], the fuzzy method [5], the K-means clustering algorithm [19], the adaptive K-means clustering algorithm [10], and the FCM algorithm [8]. Table 2.2 presents the comparisons of the proposed method with the other methods in terms of the average MSE values. For images #1–#9, the adaptive threshold method [4], watershed method [21], gray threshold method [13], fuzzy based method [5], K-means clustering algorithm [19], adaptive K-means clustering algorithm [10] and FCM algorithm [8] had average MSE values of 94.67, 94.43, 93.11, 85.79, 83.16, 81.83 and 80.09, respectively. By contrast, the proposed method achieved an average MSE value of 63.49, a significant improvement ranging from 20.73 to 32.94% over the other methods. Comparisons of average PSNR are shown in Table 2.3. The average PSNR values from the adaptive threshold method [4], watershed method [21], gray threshold method [13], fuzzy based method [5], K-means clustering algorithm [19], adaptive K-means clustering algorithm [10] and FCM algorithm [8] were 28.38, 28.40, 28.46, 28.81, 28.95, 29.04 and 29.11, respectively. By contrast, the proposed method exhibited an average PSNR value of 30.14, an improvement of 3.54–6.20% over the other methods. Table 2.4 shows comparisons of the average JSC for the proposed method with the other methods. The average JSC values obtained from the other seven methods were 0.56, 0.65, 0.68, 0.76, 0.78, 0.81 and 0.86, respectively. The proposed method achieved an average


Table 2.4 Comparison of JSC obtained by the proposed method and existing methods for the MRIs of PD

Method            #1    #2    #3    #4    #5    #6    #7    #8    #9    Average
Adaptive [4]      0.55  0.54  0.57  0.58  0.57  0.57  0.59  0.54  0.53  0.56
Watershed [21]    0.65  0.54  0.57  0.65  0.64  0.71  0.64  0.69  0.73  0.65
Gray [13]         0.69  0.68  0.66  0.67  0.68  0.65  0.68  0.68  0.69  0.68
Fuzzy [5]         0.72  0.74  0.77  0.78  0.72  0.76  0.79  0.78  0.78  0.76
K-means [19]      0.79  0.77  0.79  0.77  0.79  0.79  0.79  0.79  0.78  0.78
Adaptive K [10]   0.80  0.83  0.81  0.82  0.81  0.81  0.82  0.81  0.80  0.81
FCM [8]           0.84  0.86  0.86  0.87  0.86  0.87  0.86  0.88  0.87  0.86
Proposed          0.94  0.95  0.95  0.89  0.91  0.90  0.93  0.91  0.90  0.92

JSC value of 0.92, which again showed a significant improvement of 6.98–64.29% compared to the other methods. These statistical analyses revealed that the proposed method outperformed the existing methods for MRI segmentation of PD.
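The quoted improvement ranges follow directly from the average values in Tables 2.2–2.4. The following quick arithmetic check is a sketch (not from the book) using only the published averages; the helper name is hypothetical:

```python
def improvement_pct(baseline, proposed, lower_is_better=True):
    # Relative gain of the proposed method over a baseline, in percent
    gain = (baseline - proposed) if lower_is_better else (proposed - baseline)
    return 100.0 * gain / baseline

# Average MSE (Table 2.2): best existing = FCM (80.09), worst = adaptive (94.67)
mse_gain_min = improvement_pct(80.09, 63.49)   # ~20.73 %
mse_gain_max = improvement_pct(94.67, 63.49)   # ~32.94 %

# Average PSNR (Table 2.3) and JSC (Table 2.4): higher is better
psnr_gain_min = improvement_pct(29.11, 30.14, lower_is_better=False)  # ~3.54 %
psnr_gain_max = improvement_pct(28.38, 30.14, lower_is_better=False)  # ~6.20 %
jsc_gain_min = improvement_pct(0.86, 0.92, lower_is_better=False)     # ~6.98 %
jsc_gain_max = improvement_pct(0.56, 0.92, lower_is_better=False)     # ~64.29 %
```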

2.4.4 Discussion on Pattern Classification and Visualization

We performed the CC analysis between the original MRIs and the three different regions categorized as the MINCR, MAXCR and AVGCR. These three classified regions for the original images #1–#4 are shown in Fig. 2.5b–d, respectively. The CC values exhibited by the original MRIs and the three regions are presented in Table 2.5; they quantify the degree of matching between the original MRIs and the three regions. In Table 2.5, the CC values corresponding to the MINCR indicated minimum matching with the original MRIs (Fig. 2.5b). Radiologists and medical practitioners can use the images related to the MINCR for identifying and locating minor changes in the human brain during PD. Similarly, the CC values corresponding to the MAXCR indicated maximum matching, as depicted in Table 2.5. The MAXCR-related images (Fig. 2.5c) can be used for identifying and locating maximum changes in the human brain during PD. The CC values corresponding to the AVGCR indicated average matching, as depicted in Table 2.5. These images (Fig. 2.5d) can be used for identifying and locating moderate changes in the human brain during PD. For generating more significant patterns through the segmentation, a further experiment was carried out to visualize them by combining the FIG values associated with the MINCR and MAXCR. For this purpose, the J(MINCR, MAXCR) function was utilized. These results are shown in Fig. 2.6b for the images #1–#9, where patterns of the human brain during PD are clearly visualized. The RGB effect was also provided to create more visual impact for the patterns, as shown in Fig. 2.6c. Based on the results, it was obvious that the J(MINCR, MAXCR) function highlighted the inherited information from the MINCR and MAXCR together. The proposed



Fig. 2.5 Classification of MRIs into three different regions for images #1–#4: (a) original image, (b) MINCR of (a), (c) MAXCR of (a), and (d) AVGCR of (a)

Table 2.5 The correlation coefficient analysis of three different regions of MRIs

Image  MINCR  MAXCR  AVGCR
#1     0.72   0.84   0.78
#2     0.76   0.83   0.79
#3     0.86   0.89   0.86
#4     0.74   0.90   0.82
#5     0.84   0.89   0.86
#6     0.74   0.91   0.82
#7     0.76   0.80   0.77
#8     0.75   0.83   0.79
#9     0.83   0.83   0.83


Fig. 2.6 Patterns visualization of MRIs for images #1–#9: (a) original image, (b) J(MINCR, MAXCR) of (a), and (c) RGB effect of (b)

method was also able to recognize the patterns and to provide good visual information. The MRI classification into three different regions, i.e., MINCR, MAXCR and AVGCR, is shown in Fig. 2.5, and the MRI classification regions defined by the J(MINCR, MAXCR) function are shown in Fig. 2.6. These classification regions were obtained through the segmented MRIs. In these figures, the RGB effect was applied to the MRI classification regions to represent different inherent patterns. The RGB effect is device-dependent in nature [6, 9], so different color combinations may be observed in the MRI classification regions described by the MINCR, MAXCR, AVGCR and J(MINCR, MAXCR) for other types of MRIs.


2.5 Conclusions and Future Directions

For computer-vision researchers, successful segmentation of MRIs remains a challenge. This study investigated MRIs of PD patients using the proposed segmentation approach based on a hybridization of FIS, FIG and the K-means clustering algorithm. The FIS was used to prepare a fuzzified information system for the grayscale pixels and helped to represent them in terms of degrees of membership. Then, the inherited uncertainty associated with each pixel was obtained using the FIG function. Based on these FIG values, the FEM was prepared. The K-means clustering algorithm was applied to the FEM to group the FIG values based on their similarity and dissimilarity, which it determines through a well-defined distance function. In this study, the regions available in MRIs were classified into three different categories: the MINCR, MAXCR and AVGCR. Results showed that each distinct category of region carried different information, which can help radiologists and medical practitioners in the pre-diagnosis of PD at an early stage. For analyzing and visualizing the patterns more effectively, the JRIF function was utilized. This function was able to reflect significant patterns in the segmented MRIs by considering the information of the MINCR and MAXCR together. The segmented MRIs based on the JRIF can help medical experts analyze changes in MRIs very precisely. The proposed approach was evaluated with the MSE, PSNR, JSC and CC metrics. Empirical analysis demonstrated the effectiveness of the proposed method over well-known image segmentation methods. The study's limitation is that the suggested approach was only validated with MRIs of PD patients. The proposed approach may be improved in the future so that it can be applied to other types of MRIs.

References

1. Agrawal M, Biswas A (2015) Molecular diagnostics of neurodegenerative disorders. Front Mol Biosci 2:1–10
2. Badea L, Onu M, Wu T, Roceanu A, Bajenaru O (2017) Exploring the reproducibility of functional connectivity alterations in Parkinson's disease. PloS One 12(11):e0188196
3. Braak H, Tredici KD, Rüb U, Vos RAD, Steur ENJ, Braak E (2003) Staging of brain pathology related to sporadic Parkinson's disease. Neurobiol Aging 24(2):197–211
4. Bradley D, Roth G (2007) Adaptive thresholding using the integral image. J Graph Tools 12(2):13–21
5. Chaira T, Ray AK (2003) Segmentation using fuzzy divergence. Pattern Recognit Lett 24(12):1837–1844
6. Gonzalez RC, Woods RE, Eddins SL (2003) Digital image processing using MATLAB. Prentice-Hall, Inc., Upper Saddle River
7. IDA (2019) Image and Data Archive. https://ida.loni.usc.edu/
8. Jiang XL, Wang Q, He B, Chen SJ, Li BL (2016) Robust level set image segmentation algorithm using local correntropy-based fuzzy c-means clustering with spatial constraints. Neurocomputing 207:22–35
9. Jin X, Chen G, Hou J, Jiang Q, Zhou D, Yao S (2018) Multimodal sensor medical image fusion based on nonsubsampled shearlet transform and S-PCNNs in HSV space. Signal Process 153:379–395
10. Moftah HM, Azar AT, Al-Shammari ET, Ghali NI, Hassanien AE, Shoman M (2014) Adaptive k-means clustering algorithm for MR breast image segmentation. Neural Comput Appl 24(7):1917–1928
11. Nagano-Saito A, Washimi Y, Arahata Y, Kachi T, Lerch JP, Evans AC, Dagher A, Ito K (2005) Cerebral atrophy and its relation to cognitive impairment in Parkinson disease. Neurology 64(2):224–229
12. Ogura A, Kamakura A, Kaneko Y, Kitaoka T, Hayashi N, Taniguchi A (2017) Comparison of grayscale and color-scale renderings of digital medical images for diagnostic interpretation. Radiol Phys Technol 10(3):359–363
13. Otsu N (1979) A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern 9(1):62–66
14. Queen JM (1967) Some methods for classification and analysis of multivariate observations. In: Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, Oakland, CA, USA, vol 1
15. Singh P, Dhiman G (2018) Uncertainty representation using fuzzy-entropy approach: special application in remotely sensed high-resolution satellite images (RSHRSIs). Appl Soft Comput 72:121–139
16. Stamford JA, Schmidt PN, Friedl KE (2015) What engineering technology could do for quality of life in Parkinson's disease: a review of current needs and opportunities. IEEE J Biomed Health Inform 19(6):1862–1872
17. Stoessl AJ, Martin WW, McKeown MJ, Sossi V (2011) Advances in imaging in Parkinson's disease. Lancet Neurol 10(11):987–1001
18. Tredici KD, Rüb U, Vos RAD, Bohl JR, Braak H (2002) Where does Parkinson disease pathology begin in the brain? J Neuropathol Exp Neurol 61(5):413–426
19. Yao H, Duan Q, Li D, Wang J (2013) An improved k-means clustering algorithm for fish image segmentation. Math Comput Model 58(3–4):790–798
20. Zadeh LA (1965) Fuzzy sets. Inf Control 8(3):338–353
21. Zhang M, Zhang L, Cheng H (2010) A neutrosophic approach to image segmentation based on watershed method. Signal Process 90(5):1510–1517

Chapter 3

Parkinson’s Disease MRIs Analysis Using Neutrosophic-Entropy Segmentation Approach

Although this may seem a paradox, all exact science is dominated by the concept of approximation. By Bertrand Russell (1872–1970)

Abstract Brain MRIs are composed of three main regions: gray matter, white matter and cerebrospinal fluid. Radiologists and medical practitioners make decisions by evaluating developments in these regions. Study of these MRIs suffers from two major issues: (a) the boundaries of the gray matter and white matter regions are ambiguous and unclear in nature, and (b) the regions are formed from unclear, inhomogeneous gray structures. These two issues make the diagnosis of critical diseases very complex. To address them, this study presents a method of image segmentation based on the neutrosophic set (NS) theory and neutrosophic entropy information (NEI). By nature, the proposed method is adaptive in selecting the threshold value and is entitled the neutrosophic-entropy based adaptive thresholding segmentation algorithm (NEATSA). Experimental results, including statistical analyses, showed that NEATSA can segment the main regions of MRIs very clearly compared to the well-known image segmentation methods available in the literature of the pattern recognition and computer vision domains.

Keywords Neutrosophic set (NS) theory · Neutrosophic entropy information (NEI) · Parkinson's disease (PD) · Image segmentation · Magnetic resonance images (MRIs)

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
P. Singh, Biomedical Image Analysis, Brain Informatics and Health, https://doi.org/10.1007/978-981-99-9939-2_3

3.1 Introduction

Indeterminacy may come from multiple sources of information. Different interpretations of the same object by different people, for example, can contribute to neutrality in information. In real-world decision-making, information cannot always be represented by crisp values. This situation in the interpretation of information contributes to indeterminacy. To deal with such indeterminacy, the neutrosophic


set (NS) theory was proposed by Smarandache [18]. The NS theory can represent uncertainty in terms of a truth-membership, an indeterminacy-membership and a falsity-membership. The range of all three membership functions is the nonstandard real interval ]−0, 1+[ [8, 10, 11]. Applications of this theory have been found in image segmentation. For example, Guo and Cheng [6] introduced a neutrosophic-clustering based method (NCBM) by employing the concept of entropy. Zhang [24] proposed a NS-domain based image segmentation method by integrating the idea of the watershed. The main drawback of these methods [6, 24], however, was that they were non-adaptive threshold-based methods and required user intervention in segmenting input images. Ali et al. [2] presented a new fuzzy clustering algorithm based on neutrosophic orthogonal matrices for segmenting dental X-ray images. Anter and Hassenian [3] proposed a novel hybrid approach based on NS for automatic CT liver tumor segmentation. The main drawback of their approach was that, to obtain the final segmented image, they performed the segmentation operation twice. Radiologists and medical practitioners rely heavily on MRIs for reliable and faster decision-making [20]. For this reason, many algorithms for MR image segmentation are still being built by image analysts and computer vision experts, varying from objective to objective [12]. For example, many applications (such as expert systems, belief systems, and information fusion) need accurate analysis of MRIs to make appropriate decisions. The above literature survey showed that researchers have not yet fully accomplished this work, and there remains a critical need for a method that can provide accurate interpretation. Image segmentation methods such as those suggested in [6, 24] indicated that the NS theory is able to deal correctly with both indeterminate and uncertain conditions.
Inspired by this, this research proposes the NEATSA based on the theory of NS and NEI. The main advantage of the proposed NEATSA is that it is adaptive by nature in selecting the threshold value; therefore, MRIs can be segmented in an unsupervised manner. MRIs of PD [7] were used for experimental purposes, the details of which are given in the subsequent section. Experimental results, including statistical analysis, indicate the efficiency of the proposed method over existing well-known image segmentation methods such as OGTM [15], ITA [23], FDBM [5], KMC [13], FCM [9], and NCBM [6]. The remainder of this chapter is structured as follows. Section 3.2 presents mathematical formulations for the representation of uncertainty in images. Section 3.3 illustrates the proposed image segmentation algorithm. Experimental results are discussed in Sect. 3.4. Conclusions and future directions are discussed in Sect. 3.5.

3.2 Mathematical Formulation of Uncertainty

This section presents the definitions and representations of NS and NEI, followed by the mathematical representation of uncertainty using NS. A NS can be defined as follows.


Fig. 3.1 A three-dimensional representation of neutrosophic membership functions (i.e., T, I, and F) of NS

Definition 3.2.1 ((NS) [18]) Assume that U is a universe of discourse. A NS N for u ∈ U can be represented by a truth-membership function T, an indeterminacy-membership function I and a falsity-membership function F, where T, I, F : U → ]−0, 1+[, u ≡ u(T(u), I(u), F(u)) ∈ N, and −0 ≤ T(u) + I(u) + F(u) ≤ 3+.

A three-dimensional representation of the neutrosophic membership functions (i.e., T, I, and F) of NS is depicted in Fig. 3.1. In this figure, truth-membership, falsity-membership and indeterminacy-membership are shown along the x, y and z-axes, respectively. Wang et al. [22] defined an instance of the NS as a single valued neutrosophic set (SVNS), which can be presented as follows:

Definition 3.2.2 ((SVNS) [22]) A NS (denoted as N) can be represented as a SVNS on the universe of discourse U = {u_1, u_2, u_3, \ldots, u_n} as:

N = \sum_{i=1}^{n} \frac{u_i}{\langle T(u_i), I(u_i), F(u_i) \rangle}    (3.2.1)

Example 3.2.1 (For Definition 3.2.2) Assume that an image consists of different gray levels, whose universe of discourse U consists of n different values: U = {u_1, u_2, \ldots, u_n}. Their corresponding memberships can be defined using three membership functions, viz., T(u_i), I(u_i) and F(u_i). Now, a SVNS can be expressed for the set of gray levels on the U by:

N = \frac{u_1}{\langle T(u_1), I(u_1), F(u_1) \rangle} + \frac{u_2}{\langle T(u_2), I(u_2), F(u_2) \rangle} + \ldots + \frac{u_n}{\langle T(u_n), I(u_n), F(u_n) \rangle}    (3.2.2)

In the following, the definition of the neutrosophic membership function is given [17]:

Definition 3.2.3 (Neutrosophic Membership Function [17]) A neutrosophic membership function for the element u ∈ U can be defined in terms of a truth-membership function T, an indeterminacy-membership function I and a falsity-membership function F as follows:

T(u) = \frac{u - \min(U)}{\max(U) - \min(U)}    (3.2.3)

F(u) = 1 - T(u)    (3.2.4)

I(u) = \sqrt{T(u)^2 + F(u)^2}    (3.2.5)

In Eq. (3.2.3), min and max represent the minimum and maximum functions, which return the minimum and maximum values from the universe of discourse U, respectively. Any element u belonging to the universe of discourse U can thus be defined as a NS using Eqs. (3.2.3)–(3.2.5). For this representation, the minimum and maximum values of U, obtained from min(U) and max(U), are used. Eq. (3.2.3) determines only the truth-membership of u; for the falsity-membership and indeterminacy-membership of u, Eqs. (3.2.4) and (3.2.5) are used, respectively. The main advantage of these three formulas is that they restrict the truth-membership, falsity-membership and indeterminacy-membership to the range [0, 1].

Example 3.2.2 (For Definition 3.2.3) Consider an image in which one gray level pixel has value u_k = 164, and its universe of discourse lies within the range 0–255, i.e., U = [0, 255]. Hence, the NS for u_k with respect to U = [0, 255] can be represented by \frac{164}{\langle 0.64, 0.73, 0.36 \rangle}. Here, these memberships are derived from Eqs. (3.2.3)–(3.2.5).

Based on the above formulation of NS, the notion of neutrosophic information (NI) is introduced. An NI comprises an uncertain event and its corresponding neutrosophic information. For the above universe of discourse U, it can be defined as follows.


Definition 3.2.4 (NI) An NI for the element u is a paired set of elements {u, ⟨T(u), I(u), F(u)⟩}, denoted by NI. Mathematically, it can be expressed as:

NI = \{u, \langle T(u), I(u), F(u) \rangle\}, \quad u \in U    (3.2.6)

Definition 3.2.5 (Complement of NS) The complement of a NS N is denoted by N^c and can be defined as: T^c(u) = F(u), I^c(u) = 1 − I(u) and F^c(u) = T(u), such that ∀u ∈ U.

Definition 3.2.6 (Neutrosophication [17]) The operation of neutrosophication transforms a crisp set into a NS. Thus, a neutrosifier N applied to a crisp subset i of the universe of discourse U yields a neutrosophic subset N(i : N), which can be expressed as:

N(i : N) = \int_{U} (T(u), I(u), F(u))N(u)    (3.2.7)

Here, (T(u), I(u), F(u))N(u) represents the product of a scalar (T(u), I(u), F(u)) and the NS N(u); and \int is the union of the family of NS (T(u), I(u), F(u))N(u), u ∈ U.

Entropy can be used in unpredictable circumstances to measure inherited uncertainties [16]. If such uncertainties are expressed by a NS, then they can also be computed through entropy. The NEI function can be used for this purpose to measure the entropy of each NI, which can be defined as follows.

Definition 3.2.7 (NEI) The NEI of a NI (i.e., NI) is denoted as a function E(NI), where E(NI) : E(NI) → [0, 1], which can be defined as follows:

E(NI) = 1 - \frac{1}{3} \sum_{u \in U} (T(u) + I(u) + F(u)) \times E_1 E_2 E_3    (3.2.8)

Here, E_1 = |T(u) − T^c(u)|, E_2 = |I(u) − I^c(u)|, and E_3 = |F(u) − F^c(u)|, where each NI satisfies Eq. (3.2.6).

Example 3.2.3 (For Definition 3.2.7) By referring to Example 3.2.2, the NI for u_k can be represented as NI = \frac{164}{\langle 0.64, 0.73, 0.36 \rangle}. Hence, its corresponding NEI value is E(NI) = 0.98, which can be obtained from Eq. (3.2.8), where E_1 = |0.64 − 0.36|, E_2 = |0.73 − 0.27|, and E_3 = |0.36 − 0.64|.


3.3 The Proposed Algorithm

A description of the proposed NEATSA, which is based on the NS theory and NEI, is given in this section. Pseudocode, accompanied by a computational complexity analysis of the proposed NEATSA, is also provided in the following subsections.

3.3.1 Description of NEATSA

Each step of the proposed NEATSA is discussed as follows.

Step 1: Represent the MR input image I_p as a gray level space (GLS). The I_p has K gray levels, and G(i, j) is the gray level pixel at position (i, j), where i = 1, 2, \ldots, m and j = 1, 2, \ldots, n. A GLS is the set of all G(i, j), whose combination makes a space in the image. The GLS for the I_p can be represented in the following matrix form:

GLS = \begin{bmatrix} G(1,1) & G(1,2) & \cdots & G(1,n) \\ G(2,1) & G(2,2) & \cdots & G(2,n) \\ \vdots & \vdots & \ddots & \vdots \\ G(m,1) & G(m,2) & \cdots & G(m,n) \end{bmatrix}    (3.3.1)

Here, m and n represent the maximum number of rows and columns for the .Ip , respectively, where .∀G(i, j ) ∈ GLS (Fig. 3.2). Step 2: Determine the probability of each gray level value .Dk ∈ G(i, j ) based on the histogram function .H (Dk ) as: H (Dk ) =

.

θk ; k = 1, 2, . . . , K N

(3.3.2)

Here, \theta_k denotes the frequency of D_k and N is the total number of pixels.

Step 3: Define the normalized histogram \bar{H}(D_k) for the H(D_k) as:

\bar{H}(D_k) = \frac{1}{K} \sum_{k=1}^{K} H(D_k)    (3.3.3)

Step 4: Obtain the cumulative distributions of the \bar{H}(D_k) as:

CD(k) = \bar{H}(D_k)    (3.3.4)

CD(k+1) = \sum_{k=1}^{K} \bar{H}(D_k) = (k+1)N(D_{k+1})    (3.3.5)


Fig. 3.2 Example of the application of the proposed NEATSA in the segmentation of MRIs: (a) original image, (b) normalized histogram of (a), (c) cumulative distribution curves for the .CD(k) and .CD(k + 1), (d) entropy curves for the .N I (k) and .N I (k + 1), (e) threshold T at location L, and (f) segmented MR image with .M = 9.4763, .L = 77 and .T = 0.2941

Step 5: Represent CD(k) and CD(k+1) as two different NI, NI(k) and NI(k+1), respectively:

NI(k) = \{CD(k), \langle T(CD(k)), I(CD(k)), F(CD(k)) \rangle\}    (3.3.6)

NI(k+1) = \{CD(k+1), \langle T(CD(k+1)), I(CD(k+1)), F(CD(k+1)) \rangle\}    (3.3.7)


In Eq. (3.3.6), T(CD(k)), I(CD(k)) and F(CD(k)) denote the truth-membership, indeterminacy-membership and falsity-membership of the CD(k), each lying between 0 and 1, respectively. Similarly, in Eq. (3.3.7), T(CD(k+1)), I(CD(k+1)) and F(CD(k+1)) denote the truth-membership, indeterminacy-membership and falsity-membership of the CD(k+1), each lying between 0 and 1, respectively.

Step 6: Obtain the NEI for the NI(k) and NI(k+1) as:

E(NI(k)) = 1 - \frac{1}{3} \sum_{CD(k) \in U} (T(CD(k)) + I(CD(k)) + F(CD(k))) \times E_1 E_2 E_3    (3.3.8)

E(NI(k+1)) = 1 - \frac{1}{3} \sum_{CD(k+1) \in U} (T(CD(k+1)) + I(CD(k+1)) + F(CD(k+1))) \times E_1 E_2 E_3    (3.3.9)

In Eq. (3.3.8), E_1 = |T(CD(k)) − T^c(CD(k))|, E_2 = |I(CD(k)) − I^c(CD(k))|, and E_3 = |F(CD(k)) − F^c(CD(k))|. Similarly, in Eq. (3.3.9), E_1 = |T(CD(k+1)) − T^c(CD(k+1))|, E_2 = |I(CD(k+1)) − I^c(CD(k+1))|, and E_3 = |F(CD(k+1)) − F^c(CD(k+1))|.

Step 7: Determine the maximum NEI value, denoted as M. The M can be obtained by aggregating the E(NI(k)) and E(NI(k+1)) as:

M(L) = \max[E(NI(k)) + E(NI(k+1))]; \quad L \in [1, 256]    (3.3.10)

Here, max is the maximum function that returns the M value at location L by aggregating the E(NI(k)) and E(NI(k+1)) values.

Step 8: Choose the gray level D_k corresponding to the M as the threshold T at location L:

T = D_{k(L)}; \quad L \in [1, 256]    (3.3.11)

Step 9: Transform the I_p into the segmented image I_s using the T as:

I_s = \begin{cases} G(i, j) = 1 & \text{for } f(i, j) \geq T \\ G(i, j) = 0 & \text{for } b(i, j) < T \end{cases}    (3.3.12)

where G(i, j) = 1 and G(i, j) = 0 indicate the image pixels that belong to the foreground (f(i, j)) and background (b(i, j)) classes, respectively. An example to demonstrate the proposed NEATSA in PD's MR image segmentation is outlined below.


Fig. 3.3 Example of the application of the proposed NEATSA in the segmentation of MRIs: (a) original image, (b) normalized histogram of (a), (c) cumulative distribution curves for the .CD(k) and .CD(k + 1), (d) entropy curves for the .N I (k) and .N I (k + 1), (e) threshold T at location L, and (f) segmented MR image with .M = 9.4763, .L = 77 and .T = 0.2941

Example 3.3.1 Figure 3.3 shows the results of the important steps of image segmentation based on the proposed NEATSA. The normalized histogram of the input image (Fig. 3.3a) is shown in Fig. 3.3b. The cumulative distributions for the CD(k) and CD(k+1) are shown as curves in Fig. 3.3c. The CD(k) and CD(k+1) were transformed into the NI(k) and NI(k+1), respectively. The entropy values for the NI(k) and NI(k+1) were then determined using Eqs. (3.3.8) and (3.3.9), respectively. The individual NI(k) and NI(k+1) entropy curves are shown in Fig. 3.3d. Finally, by aggregating the entropy values of the NI(k) and NI(k+1), an aggregate entropy curve is plotted. The maximum NEI value (M) is derived from this curve, which


is 9.4753. This M is associated at location L = 77 with the gray level D_k = 0.2941. The gray level D_k = 0.2941 is chosen as the threshold T. Finally, this threshold value is used to segment the input image. The segmented output image is shown in Fig. 3.3f.

Input: image I_p with K gray levels and gray level pixel G(i, j) at the pixel position (i, j), where i = 1, 2, ..., m and j = 1, 2, ..., n.
for ∀G(i, j) do
    Represent I_p as the GLS.
    Determine the number of pixels G(i, j) with K number of gray levels with the H(D_k) function.
    Define the H̄(D_k) for the H(D_k).
    Obtain the CD(k) and CD(k+1) for the H̄(D_k).
    Represent CD(k) and CD(k+1) as the NI(k) and NI(k+1), respectively.
    Obtain the E(NI(k)) and E(NI(k+1)) for the NI(k) and NI(k+1), respectively.
    Obtain the M by aggregating the E(NI(k)) and E(NI(k+1)) with the max function.
    Choose the gray level D_k corresponding to the M as the threshold T at location L.
    Transform the I_p into the segmented image using the T.
end
Output: segmented image I_s.
Algorithm 2: PROCEDURE NEATSA()
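As a rough illustration, the procedure of Algorithm 2 can be sketched in Python. This is a hypothetical simplification, not the authors' implementation: intensities are assumed normalized to [0, 1], histogram bins stand in for the K gray levels, and each CD(k) contributes the single-element entropy term of Eq. (3.2.8):

```python
import numpy as np

def neutrosophic_memberships(x):
    # Eqs. (3.2.3)-(3.2.5) applied to an array, normalized over its own range
    t = (x - x.min()) / (x.max() - x.min())
    f = 1.0 - t
    i = np.sqrt(t ** 2 + f ** 2)
    return t, i, f

def neatsa_threshold(img, levels=256):
    # Steps 2-4: normalized histogram and its cumulative distribution
    hist, edges = np.histogram(img, bins=levels, range=(0.0, 1.0))
    h = hist / hist.sum()
    cd = np.cumsum(h)
    # Steps 5-6: neutrosophic memberships and entropy of each CD(k)
    t, i, f = neutrosophic_memberships(cd)
    e1, e2, e3 = np.abs(t - f), np.abs(2 * i - 1), np.abs(f - t)
    e = 1.0 - (t + i + f) * e1 * e2 * e3 / 3.0
    # Step 7: aggregate E(NI(k)) + E(NI(k+1)) and take the maximum
    agg = e[:-1] + e[1:]
    L = int(np.argmax(agg))
    # Step 8: the gray level at location L acts as the adaptive threshold T
    return edges[L]

def neatsa_segment(img):
    # Step 9: binarize into foreground (1) and background (0)
    return (img >= neatsa_threshold(img)).astype(np.uint8)

# Toy bimodal "image" with intensities in [0, 1]
rng = np.random.default_rng(0)
img = np.concatenate([rng.uniform(0.10, 0.40, 1000),
                      rng.uniform(0.60, 0.90, 1000)])
T = neatsa_threshold(img)   # lands between the two modes
seg = neatsa_segment(img)
```

With the toy data, the entropy is maximal where the cumulative distribution crosses 0.5, so the threshold is picked without any user intervention, mirroring the adaptive character claimed for NEATSA.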

3.3.2 Algorithm and Computational Complexity

An algorithmic representation of the proposed method is shown in Algorithm 2. The computational complexity of the proposed method has been evaluated in terms of time and space, as described below.
1. Time complexity: The NEATSA requires O(M × N) time to read an input image and to perform the segmentation operation, where M and N denote the number of rows and columns of gray level pixels.
2. Space complexity: The space complexity of the proposed NEATSA is the maximum amount of space used at any one time during the segmentation process. Hence, the total space complexity of the proposed NEATSA is O(M × N).
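The thresholding steps of Algorithm 2 can be sketched in code. This is a minimal illustration, not the book's implementation: the entropy functions of Eqs. (3.3.8)–(3.3.9) are not reproduced in this section, so a Shannon-style entropy over assumed neutrosophic memberships (T, F = 1 − T, I = √(T² + F²)) is used as a stand-in, and the resulting numbers will differ from the reported tables.

```python
import numpy as np

def neatsa_threshold(img):
    """Sketch of the NEATSA thresholding steps for an 8-bit grayscale image."""
    K = 256
    hist, _ = np.histogram(img, bins=K, range=(0, K))
    h = hist / hist.sum()                            # normalized histogram
    cd = np.cumsum(h)                                # cumulative distribution CD(k)
    cd_next = np.roll(cd, -1)                        # CD(k + 1)
    cd_next[-1] = cd[-1]

    def neutrosophic_entropy(x):
        # Represent x as NI via (T, I, F) and score it with a Shannon-style term.
        t = (x - x.min()) / (x.max() - x.min() + 1e-12)
        f = 1.0 - t
        i = np.sqrt(t**2 + f**2)
        p = np.stack([t, i, f]) + 1e-12
        return -(p * np.log(p)).sum(axis=0)

    e = neutrosophic_entropy(cd) + neutrosophic_entropy(cd_next)  # aggregate NEI curve
    L = int(np.argmax(e))                            # location of the maximum NEI value
    T = L / (K - 1)                                  # threshold as a normalized gray level
    return (img / (K - 1) >= T).astype(np.uint8), L, T

img = (np.arange(64).reshape(8, 8) * 4).astype(np.uint8)
seg, L, T = neatsa_threshold(img)
print(L, round(T, 4), seg.shape)
```

Reading the image and scanning the 256-entry entropy curve are both linear passes, which matches the O(M × N) time bound stated above.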


3.4 Experimental Results

3.4.1 Dataset Description

PD is regarded as one of the serious diseases in the group of neurodegenerative diseases. It is very difficult to diagnose the early stage of PD because diagnosis relies mostly on clinical or medical evidence. This disease primarily induces motor symptom dysfunction, including tremor, trembling, slow motion (bradykinesia) and altered gait [19]. The scientific community has paid great attention to identifying the signs of this disease based on an analysis of images or blood samples, but no effective solution for pre-diagnosis of PD has been found. Experts use positron emission tomography (PET) or single-photon emission computerized tomography (SPECT) scans to identify the PD level [1]. These two scanning tools, however, are very expensive and only available in sophisticated laboratories. It has been observed that experts often recognize PD very late, which leads to delays in treatment. In pathological terms, the extreme stage of PD is known as Braak Stage III–IV. At this stage, patients' neuronal systems become very weak or damaged [4]. Despite advancements in digital imaging, physicians or radiologists often prepare reports based on grayscale MRIs [14]. Because of this, it is quite likely that severely affected or impaired areas in the human brain go unrecognized. To some extent, this research attempted to solve this problem by applying the proposed NEATSA to a variety of PD MRIs. MRIs of PD were collected from the Image and Data Archive [7] website for the experimental purpose. The selected MRIs were divided into three categories: (a) training set, (b) validation set, and (c) testing set. Each set includes 10 different PD MRIs.

3.4.2 Performance Evaluation Metrics

The output of the proposed NEATSA was evaluated in terms of the consistency of the segmented images with the original images. Well-known metrics, namely the mean squared error (MSE), peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), were chosen for the performance evaluation [21]. These metrics are defined over the input image I_p and the segmented image I_s as described below:

• MSE: The MSE is used to quantify the error between the original and segmented images. A lower value of MSE indicates a better segmentation. It can be defined as:

MSE = \frac{1}{R \times C} \sum_{m=1}^{R} \sum_{n=1}^{C} \left( I_p(m, n) - I_s(m, n) \right)^2    (3.4.1)

Here, R × C gives the size of the image in pixels.


• PSNR: The PSNR measures the contrast between the original and segmented images. The PSNR has an inverse relation with the MSE; hence a higher value indicates a better segmentation. It can be defined as:

PSNR = 10 \times \log_{10} \left( \frac{(255)^2}{MSE} \right)    (3.4.2)

• SSIM: The SSIM is used to assess the quality of the segmented image based on the computation of three terms, viz., the luminance term, contrast term and structural term. A higher value of SSIM indicates a better segmentation. It can be defined as:

SSIM = [L(I_p, I_s)]^{\alpha} \times [C(I_p, I_s)]^{\beta} \times [S(I_p, I_s)]^{\gamma}    (3.4.3)

where L(I_p, I_s), C(I_p, I_s) and S(I_p, I_s) are called the luminance term, contrast term and structural term, respectively, and \alpha, \beta and \gamma are three non-negative real numbers. The three terms can be defined as:

L(I_p, I_s) = \frac{2\mu_{I_p}\mu_{I_s} + C_1}{\mu_{I_p}^2 + \mu_{I_s}^2 + C_1}    (3.4.4)

C(I_p, I_s) = \frac{2\sigma_{I_p}\sigma_{I_s} + C_2}{\sigma_{I_p}^2 + \sigma_{I_s}^2 + C_2}    (3.4.5)

S(I_p, I_s) = \frac{\sigma_{I_p I_s} + C_3}{\sigma_{I_p}\sigma_{I_s} + C_3}    (3.4.6)

where \mu_{I_p}, \mu_{I_s}, \sigma_{I_p}, \sigma_{I_s} and \sigma_{I_p I_s} are the local means, standard deviations and cross-covariance for the images I_p and I_s, respectively, and C_1, C_2 and C_3 are three non-negative real constants.
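The three metrics above can be computed directly from Eqs. (3.4.1)–(3.4.6). The sketch below evaluates SSIM globally over the whole image rather than over the local windows used in standard SSIM, and the constants C1, C2, C3 (the conventional (0.01·255)², (0.03·255)² and C2/2 defaults) are assumptions, since the chapter does not fix them:

```python
import numpy as np

def mse(ip, iseg):
    # Eq. (3.4.1): mean squared error over an R x C image
    return np.mean((ip.astype(float) - iseg.astype(float)) ** 2)

def psnr(ip, iseg):
    # Eq. (3.4.2): peak signal-to-noise ratio for 8-bit images
    return 10.0 * np.log10(255.0 ** 2 / mse(ip, iseg))

def ssim_global(ip, iseg, alpha=1.0, beta=1.0, gamma=1.0,
                c1=6.5025, c2=58.5225, c3=29.26125):
    # Eqs. (3.4.3)-(3.4.6) evaluated globally; constants are assumed defaults.
    x, y = ip.astype(float), iseg.astype(float)
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - mx) * (y - my)).mean()
    l = (2 * mx * my + c1) / (mx**2 + my**2 + c1)   # luminance term
    c = (2 * sx * sy + c2) / (sx**2 + sy**2 + c2)   # contrast term
    s = (sxy + c3) / (sx * sy + c3)                 # structural term
    return (l ** alpha) * (c ** beta) * (s ** gamma)

a = np.tile(np.arange(0, 256, 32, dtype=np.uint8), (8, 1))   # toy "original"
b = np.where(a >= 128, 255, 0).astype(np.uint8)              # toy "segmented"
print(round(mse(a, b), 2), round(psnr(a, b), 2), round(ssim_global(a, b), 4))
```

For identical images the global SSIM evaluates to 1, and a lower MSE yields a higher PSNR, matching the interpretation given above.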

3.4.3 Experimental Set-Up

The proposed NEATSA was implemented in the Matlab R2016a environment running on 64-bit Microsoft Windows 10 with a 3.20 GHz Core i7 processor and 8 GB main memory. During the experiment, the universe of discourse of I_p was initially assumed as U = [min(I_p), max(I_p)], where the min and max functions return the minimum and maximum values from the set of MRIs, respectively. The NEI was then determined using the proposed NEATSA for each NI to initiate the segmentation process.


3.4.4 Discussion on Experimental Results

In Figs. 3.4, 3.5, and 3.6a (column-wise), original MRIs of PD patients are shown for the training, validation and testing sets, respectively. Figures 3.4, 3.5, and 3.6b–g (column-wise) present the segmented MRIs obtained from the OGTM, ITA, FDBM, KMC, FCM and NCBM, respectively. Segmented MRIs generated by the proposed NEATSA are shown in Figs. 3.4, 3.5, and 3.6h (column-wise). The parameter values M, L and T used for MR image segmentation (Figs. 3.4, 3.5, and 3.6h (column-wise)) are given in Table 3.1. Visual analysis of the segmented images shows that the pixels were not smoothly segmented by the OGTM, ITA, FDBM, KMC, FCM and NCBM. By comparison, the segmented MRIs generated by the proposed NEATSA show clearly segmented pixels for the training, validation and testing sets.

The PSNR, MSE and SSIM were used to perform statistical analysis of the proposed NEATSA and the existing methods. Comparative statistics of the proposed NEATSA against the existing OGTM, ITA, FDBM, KMC, FCM and NCBM are provided in Tables 3.2, 3.3, and 3.4 for the training, validation and testing sets, respectively. Averages of PSNR, MSE and SSIM are listed in the last columns of Tables 3.2, 3.3, and 3.4 for ease of comparison. For the training, validation and testing sets, the average PSNR values obtained using the proposed NEATSA were 62.05, 62.37, and 62.99, respectively, which were far higher than those of the existing OGTM, ITA, FDBM, KMC, FCM and NCBM. Similarly, for the training, validation and testing sets, the average MSE values obtained using the proposed NEATSA were 0.0413, 0.0378 and 0.10, respectively, which were much lower than those of the existing methods. Finally, the average SSIM statistics for the training, validation and testing sets were obtained from the original and segmented images. The average SSIM values for the training, validation and testing sets were 0.6560, 0.6917, and 0.7006, respectively, which were much higher than those of the existing OGTM, ITA, FDBM, KMC, FCM and NCBM. These statistical analyses revealed that the proposed NEATSA surpassed the existing OGTM, ITA, FDBM, KMC, FCM and NCBM.

The above comparisons show that the existing methods cannot segment images with unclear and ambiguous boundaries, and real-world MRIs inherently contain this kind of uncertainty. Nonetheless, the results obtained using the proposed NEATSA clearly segmented the MRIs, making it easy to discern boundaries and features. The average CPU time (in milliseconds) was estimated for the existing methods (i.e., OGTM, ITA, FDBM, KMC, FCM and NCBM) and the proposed NEATSA for the segmentation of the training, validation and testing sets. These results are presented in Table 3.5. The proposed method required 150.05, 143.15 and 144.11 milliseconds of average CPU time to segment the training, validation and testing sets, respectively. On the other hand, the existing methods, i.e., OGTM, ITA, FDBM, KMC, FCM and NCBM, consumed much more CPU time than the


Fig. 3.4 Segmentation of training set (images 1–10): (a) original image, (b) OGTM, (c) ITA, (d) FDBM, (e) KMC, (f) FCM, (g) NCBM, and (h) NEATSA

Fig. 3.5 Segmentation of validation set (images 1–10): (a) original image, (b) OGTM, (c) ITA, (d) FDBM, (e) KMC, (f) FCM, (g) NCBM, and (h) NEATSA

Fig. 3.6 Segmentation of testing set (images 1–10): (a) original image, (b) OGTM, (c) ITA, (d) FDBM, (e) KMC, (f) FCM, (g) NCBM, and (h) NEATSA

Table 3.1 Information associated with the segmentation of MRIs based on the proposed NEATSA

        Training set              Validation set            Testing set
Image   M        L    T          M        L    T          M        L    T
1       9.9317   94   0.3608     9.6922   76   0.2902     9.6805   92   0.3529
2       9.5439   91   0.3490     10.1313  91   0.3490     9.8931   84   0.3216
3       10.4107  102  0.3922     10.2858  99   0.3804     9.9080   85   0.3255
4       10.2939  95   0.3647     10.4156  95   0.3647     9.7704   70   0.2667
5       9.3624   79   0.3020     9.4862   86   0.3294     9.1963   83   0.3176
6       9.5477   74   0.2824     10.8073  109  0.4196     9.7432   87   0.3333
7       10.4385  100  0.3843     10.7841  107  0.4118     10.6828  101  0.3882
8       9.6913   69   0.2627     9.5464   74   0.2824     10.7928  105  0.4039
9       9.4246   92   0.3529     10.5701  97   0.3725     9.4776   84   0.3216
10      10.1277  87   0.3333     9.8750   81   0.3098     10.5755  97   0.3725


Table 3.2 Comparison of the proposed NEATSA with the existing methods based on the training set images

Method  Metric  1         2         3         4         5         6         7         8         9         10        Average
OGTM    PSNR    13.22     13.14     12.40     14.07     14.77     16.89     13.51     15.26     13.97     13.53     14.08
        MSE     3121.50   3178.62   3772.38   2568.72   2187.44   1339.95   2917.81   1953.31   2625.00   2906.24   2344.90
        SSIM    0.4354    0.4714    0.4061    0.4065    0.4638    0.4511    0.4017    0.4277    0.5424    0.4428    0.4449
ITA     PSNR    9.46      9.38      8.64      10.30     11.00     13.13     9.75      11.47     10.21     9.77      10.31
        MSE     3121.65   3179.04   3773.00   2568.90   2187.76   1340.04   2917.81   1963.02   2625.10   2907.60   2658.40
        SSIM    0.4354    0.4715    0.4061    0.4065    0.4638    0.4511    0.4017    0.4275    0.5424    0.4429    0.4449
FDBM    PSNR    13.18     13.11     12.36     14.01     14.71     14.68     13.47     15.20     13.93     13.49     13.81
        MSE     3153.08   3203.03   3805.85   2602.96   2214.84   2229.84   2950.10   1981.22   2648.38   2935.27   2772.50
        SSIM    0.4354    0.4714    0.4060    0.4065    0.4637    0.4637    0.4017    0.4273    0.5423    0.4427    0.4461
KMC     PSNR    13.36     13.23     12.50     14.18     14.85     17.03     13.67     15.37     14.08     13.64     14.19
        MSE     3021.13   3116.12   3681.56   2501.18   2145.87   1298.36   2812.64   1903.05   2561.27   2832.49   2587.40
        SSIM    0.0027    0.0026    0.0018    0.0018    0.0024    0.0022    0.0019    0.0019    0.0021    0.0024    0.0022
FCM     PSNR    8.59      9.87      10.45     9.52      9.81      9.25      10.97     10.97     10.49     10.70     10.06
        MSE     9057.02   6746.12   5909.65   7316.57   6850.01   7783.94   5247.14   5245.95   5851.41   5581.45   6558.90
        SSIM    0.0449    0.0435    0.0703    0.0595    0.0450    0.0326    0.0528    0.0644    0.0434    0.0675    0.0524
NCBM    PSNR    6.37      6.11      13.88     6.48      6.29      13.47     6.21      6.53      14.40     13.23     9.30
        MSE     15111.00  16039.32  2681.60   14741.90  15389.27  2948.86   15700.30  14584.28  2379.97   3116.38   10269.00
        SSIM    0.0482    0.0465    0.0268    0.0500    0.0713    0.0230    0.0309    0.0459    0.0428    0.0409    0.0426
NEATSA  PSNR    61.83     61.88     62.02     62.36     61.39     62.87     62.93     60.84     62.58     61.78     62.05
        MSE     0.0430    0.0426    0.0412    0.0381    0.0476    0.0338    0.0334    0.0540    0.0362    0.0435    0.0413
        SSIM    0.6474    0.6870    0.6285    0.6138    0.6706    0.6602    0.6320    0.6437    0.7203    0.6564    0.6560

Table 3.3 Comparison of the proposed NEATSA with the existing methods based on the validation set images

Method  Metric  1         2         3         4         5         6         7         8         9         10        Average
OGTM    PSNR    17.09     13.48     13.62     13.43     14.12     12.54     13.50     16.34     13.73     15.32     14.32
        MSE     1281.91   2940.78   2848.45   2971.77   2536.31   3647.54   2930.29   1520.53   2774.69   1924.00   2537.60
        SSIM    0.4341    0.4553    0.3862    0.3772    0.4826    0.3673    0.3788    0.4584    0.3881    0.4459    0.4174
ITA     PSNR    13.32     10.23     10.84     11.11     12.23     11.07     12.41     15.64     13.39     15.32     12.56
        MSE     1281.98   2940.97   2848.60   2972.11   2536.55   3647.69   2930.89   1520.77   2774.69   1924.10   2537.80
        SSIM    0.4341    0.4554    0.3862    0.3772    0.4826    0.3673    0.3788    0.4585    0.3881    0.4459    0.4174
FDBM    PSNR    9.84      10.18     10.78     11.06     12.19     11.02     12.36     9.69      13.34     15.25     11.57
        MSE     2860.27   2971.42   2888.39   3009.26   2561.32   3687.05   2965.27   2960.27   2807.60   1955.88   2866.70
        SSIM    0.3787    0.4553    0.3862    0.3772    0.4825    0.3673    0.3787    0.3787    0.3880    0.4459    0.4039
KMC     PSNR    17.18     13.60     13.74     13.54     14.24     12.65     13.65     16.44     13.87     15.43     14.43
        MSE     1255.66   2860.07   2771.82   2899.21   2467.40   3560.74   2826.88   1486.88   2687.07   1876.35   2469.20
        SSIM    0.0024    0.0025    0.0020    0.0020    0.0029    0.0020    0.0018    0.0022    0.0018    0.0024    0.0022
FCM     PSNR    9.54      10.70     8.86      8.79      9.82      8.99      10.42     10.42     11.34     9.45      9.83
        MSE     7292.37   5578.80   8513.95   8652.01   6826.62   8265.10   5947.16   5949.02   4813.92   7432.18   6927.10
        SSIM    0.0361    0.0646    0.0575    0.0563    0.0435    0.0652    0.0630    0.0362    0.0612    0.0454    0.0529
NCBM    PSNR    2.43      10.45     3.67      12.09     13.40     5.09      12.90     5.39      6.00      13.09     8.45
        MSE     15741.06  2790.52   14866.88  2373.95   1941.83   14452.79  2618.64   16106.47  15218.13  3216.92   8932.70
        SSIM    0.0124    0.0098    0.0098    0.0050    0.0012    0.0272    0.0149    0.0453    0.0342    0.0418    0.0202
NEATSA  PSNR    63.41     62.17     61.94     61.71     61.71     61.93     62.96     62.58     62.80     62.47     62.37
        MSE     0.0299    0.0397    0.0419    0.0442    0.0442    0.0420    0.0332    0.0362    0.03      0.0371    0.0378
        SSIM    0.6712    0.6896    0.6891    0.7174    0.6747    0.6453    0.7004    0.6943    0.6913    0.7441    0.6917

Table 3.4 Comparison of the proposed NEATSA with the existing methods based on the testing set images

Method  Metric  1         2         3         4         5         6         7         8         9         10        Average
OGTM    PSNR    15.02     16.22     15.80     15.97     15.52     14.50     13.37     13.84     14.90     13.66     14.88
        MSE     2063.52   1564.71   1725.14   1655.94   1837.69   2327.49   3013.63   2706.59   2121.18   2822.28   2183.80
        SSIM    0.4977    0.4333    0.4395    0.4322    0.5752    0.4878    0.3828    0.3740    0.5365    0.3859    0.4545
ITA     PSNR    15.02     16.22     15.80     15.97     15.52     14.50     13.37     13.84     14.90     13.66     14.88
        MSE     2063.52   1564.78   1725.14   1656.90   1837.99   2327.75   3014.30   2706.91   2121.50   2822.44   2184.10
        SSIM    0.4977    0.4333    0.4395    0.4323    0.5752    0.4878    0.3828    0.3740    0.5365    0.3859    0.4545
FDBM    PSNR    14.98     16.14     15.72     15.89     15.48     14.45     13.33     13.78     14.86     13.61     14.82
        MSE     2083.71   1594.44   1754.65   1686.52   1856.95   2354.50   3047.32   2742.86   2142.56   2855.66   2211.90
        SSIM    0.4976    0.4332    0.4395    0.4320    0.5751    0.4878    0.3827    0.3739    0.5365    0.3859    0.4544
KMC     PSNR    15.18     16.34     15.90     16.08     15.69     14.63     13.50     14.00     15.04     13.80     15.02
        MSE     1989.17   1520.68   1683.40   1616.93   1768.80   2257.73   2926.32   2611.00   2052.66   2732.73   2115.90
        SSIM    0.0025    0.0026    0.0025    0.0024    0.0021    0.0023    0.0018    0.0020    0.0024    0.0017    0.0022
FCM     PSNR    11.88     9.45      9.53      10.18     11.36     10.12     11.29     9.70      11.42     11.40     0.10
        MSE     4250.73   7444.32   7310.91   6286.58   4788.59   6376.35   4868.32   7015.14   4722.99   4744.87   5780.90
        SSIM    0.0371    0.0384    0.0395    0.0480    0.0298    0.0416    0.0645    0.0603    0.0370    0.0608    0.0457
NCBM    PSNR    5.75      6.24      6.19      6.50      14.54     6.06      6.32      6.41      5.88      6.33      7.02
        MSE     17456.14  15589.09  15756.48  14684.40  2304.94   16221.64  15279.11  14990.17  16941.74  15244.67  14447.00
        SSIM    0.0244    0.0345    0.0378    0.0385    0.0312    0.0389    0.0333    0.0250    0.0317    0.0354    0.0331
NEATSA  PSNR    63.79     63.47     63.23     61.59     63.40     62.74     62.80     62.99     63.06     62.84     62.99
        MSE     0.0274    0.0295    0.0311    0.0454    0.0300    0.0349    0.0344    0.0330    0.0324    0.0341    0.10
        SSIM    0.7126    0.6922    0.6906    0.6829    0.7393    0.6882    0.6802    0.7192    0.7136    0.6872    0.7006

Table 3.5 Comparison of the proposed NEATSA with the existing methods in terms of average CPU time (in millisecond)

Method   Training set   Validation set   Testing set
OGTM     227.31         238.32           235.12
ITA      239.95         249.85           245.75
FDBM     267.66         257.66           258.58
KMC      282.91         272.81           273.61
FCM      281.91         262.81           271.61
NCBM     162.05         173.15           170.11
NEATSA   150.05         143.15           144.11

proposed NEATSA. This indicated that the proposed NEATSA was computationally efficient in segmenting the MRIs.

3.5 Conclusions and Future Directions

Effective MR image segmentation remains a problem for computer vision researchers. This study proposed a segmentation method (i.e., NEATSA) based on the theory of NS and NEI to analyze MRIs of PD patients. The NS theory was used to describe the inherent uncertainties in MRIs in terms of NI. The NEI function was used to measure the uncertainties associated with each NI. Based on the aggregation of NEI values, a maximum NEI value was determined. Finally, the gray level at the location of the maximum NEI value was selected as a threshold to segment the MRIs. The proposed NEATSA was validated through comparative studies based on the PSNR, MSE and SSIM. The proposed NEATSA also consumed less time for segmenting the MRIs of PD than existing methods of image segmentation. NEATSA-based segmented MRIs can help radiologists and medical practitioners to interpret the features in MRIs of PD more precisely. The proposed NEATSA is unsupervised and adaptive by nature, and can select the threshold value itself. The limitation of the study, however, is that the proposed NEATSA has only been validated with MRIs of PD. The applicability of the proposed NEATSA can be confirmed with other types of MRIs in the future.

References

1. Agrawal M, Biswas A (2015) Molecular diagnostics of neurodegenerative disorders. Front Mol Biosci 2:1–10
2. Ali M, Son LH, Khan M, Tung NT (2018) Segmentation of dental X-ray images in medical imaging using neutrosophic orthogonal matrices. Expert Syst Appl 91:434–441
3. Anter AM, Hassenian AE (2019) CT liver tumor segmentation hybrid approach using neutrosophic sets, fast fuzzy c-means and adaptive watershed algorithm. Artif Intell Med 97:105–117



4. Braak H, Tredici KD, Rüb U, Vos RAD, Steur ENJ, Braak E (2003) Staging of brain pathology related to sporadic Parkinson's disease. Neurobiol Aging 24(2):197–211
5. Chaira T, Ray AK (2003) Segmentation using fuzzy divergence. Pattern Recognit Lett 24(12):1837–1844
6. Guo Y, Cheng HD (2009) New neutrosophic approach to image segmentation. Pattern Recognit 42(5):587–595
7. IDA (2019) Image and Data Archive. https://ida.loni.usc.edu/
8. Jana C, Pal M, Karaaslan F, Qiang Wang J (2018) Trapezoidal neutrosophic aggregation operators and its application in multiple attribute decision making process. Sci Iranica 1–23. https://doi.org/10.24200/sci.2018.51136.2024
9. Jiang XL, Wang Q, He B, Chen SJ, Li BL (2016) Robust level set image segmentation algorithm using local correntropy-based fuzzy c-means clustering with spatial constraints. Neurocomputing 207:22–35
10. Karaaslan F (2018) Gaussian single-valued neutrosophic numbers and its application in multi-attribute decision making. Neutrosophic Sets Syst 22(1):101–117
11. Karaaslan F, Hayat K (2018) Some new operations on single-valued neutrosophic matrices and their applications in multi-criteria group decision making. Appl Intell 48(12):4594–4614
12. Keshavan A, Datta E, McDonough IM, Madan CR, Jordan K, Henry RG (2018) Mindcontrol: a web application for brain segmentation quality control. NeuroImage 170:365–372
13. Moftah HM, Azar AT, Al-Shammari ET, Ghali NI, Hassanien AE, Shoman M (2014) Adaptive k-means clustering algorithm for MR breast image segmentation. Neural Comput Appl 24(7):1917–1928
14. Ogura A, Kamakura A, Kaneko Y, Kitaoka T, Hayashi N, Taniguchi A (2017) Comparison of grayscale and color-scale renderings of digital medical images for diagnostic interpretation. Radiol Phys Technol 10(3):359–363
15. Otsu N (1979) A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern 9(1):62–66
16. Singh P, Dhiman G (2018) Uncertainty representation using fuzzy-entropy approach: special application in remotely sensed high-resolution satellite images (RSHRSIs). Appl Soft Comput 72:121–139
17. Singh P, Rabadiya K (2018) Information classification, visualization and decision-making: a neutrosophic set theory based approach. In: Proceedings of 2018 IEEE international conference on systems, man, and cybernetics, Miyazaki, Japan, pp 409–414
18. Smarandache F (2002) Neutrosophy, a new branch of philosophy. Mult-Valued Log 8(3):297–384
19. Stamford JA, Schmidt PN, Friedl KE (2015) What engineering technology could do for quality of life in Parkinson's disease: a review of current needs and opportunities. IEEE J Biomed Health Inf 19(6):1862–1872
20. Tu Z, Bai X (2009) Auto-context and its application to high-level vision tasks and 3D brain image segmentation. IEEE Trans Pattern Anal Mach Intell 32(10):1744–1757
21. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612
22. Wang H, Smarandache F, Zhang Y, Sunderraman R (2005) Single valued neutrosophic sets. In: Proceedings of 10th international conference on fuzzy theory and technology, Salt Lake City, Utah
23. Wu H, Barba J, Gil J (2000) Iterative thresholding for segmentation of cells from noisy images. J Microscopy 197(3):296–304
24. Zhang M, Zhang L, Cheng H (2010) A neutrosophic approach to image segmentation based on watershed method. Signal Process 90(5):1510–1517

Chapter 4

Parkinson’s Disease MRIs Analysis Using Neutrosophic-Entropy Clustering Approach

"There are no solved problems. There are only problems that are more or less solved." By Jules Henri Poincaré (1854–1912)

Abstract Brain MRIs consist of three major regions: gray matter, white matter and cerebrospinal fluid. Medical experts make decisions on serious diseases by evaluating developments in these regions. One of the significant approaches used in analyzing MRIs is segmenting these regions. However, their segmentation suffers from two major problems: (a) the boundaries of the gray matter and white matter regions are ambiguous in nature, and (b) the regions are formed of unclear, inhomogeneous gray structures. For these reasons, diagnosis of critical diseases is often very difficult. This study presents a new method for MRI segmentation, which consists of two main parts: (a) a neutrosophic-entropy based clustering algorithm (NEBCA), and (b) the HSV color system. The NEBCA's role in this study is to perform the segmentation of MR regions, while the HSV color system is used to provide better visual representation of the features in the segmented regions. Application of the proposed method is demonstrated on 30 different MRIs of Parkinson's disease (PD). Experimental results are presented individually for the NEBCA and the HSV color system. The performance of the proposed method was evaluated in terms of statistical metrics used in the image segmentation domain. Experimental results, including statistical analysis, reflect the efficiency of the proposed method over existing well-known image segmentation methods available in the literature. For the proposed method and the existing methods, the average CPU time (in nanoseconds) was computed, and it was found that the proposed method consumed less time to segment MRIs. The proposed method can effectively segment different regions of MRIs and can very clearly represent those segmented regions.

Keywords Neutrosophic set (NS) theory · Parkinson's disease (PD) · Image segmentation · Visualization · Magnetic resonance images (MRIs)

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 P. Singh, Biomedical Image Analysis, Brain Informatics and Health, https://doi.org/10.1007/978-981-99-9939-2_4



4.1 Introduction

Researchers moved in the direction of developing fuzzy-based methods (FBMs) for performing segmentation as well as the proper management of inherent uncertainties [1]. Several well-known FBMs have been introduced in the literature for MRI segmentation. For example, Udupa et al. [19] proposed a fuzzy-connectedness based method for the identification of multiple sclerosis (MS) disease through the segmentation of white matter. Khotanlou et al. [8] proposed a brain tumor detection method using fuzzy sets that deals with imprecision and variability in MRIs. Rueda et al. [14] designed a fuzzy connectedness method for the detection of tumors in MRIs. Pednekar and Kakadiaris [12] proposed a fuzzy-connectedness based method for MRI segmentation. Dou et al. [2] introduced an automatic fuzzy information fusion based framework for segmenting tumor areas of the human brain.

The neutrosophic set (NS) theory proposed by Smarandache [18] can represent uncertainty in terms of a truth-membership, an indeterminacy-membership and a falsity-membership. Some significant applications of this theory have been found in image segmentation. For example, Guo and Cheng [4] introduced a neutrosophic-clustering based method (NCBM) by employing the concept of entropy. Zhang [25] proposed a NS domain based image segmentation method by integrating the idea of the watershed. The image segmentation methods proposed in [4, 25] indicated that NS theory is able to deal precisely with indeterministic situations as well as uncertainties. In MRI analysis, however, the application of the NS theory had not yet been explored. Despite advances in digital imaging, medical practitioners or radiologists often render reports based on grayscale MRIs [11]. Because of this, it is quite likely that they often cannot find severely affected or impaired areas in the human brain. The main motivation of this study is therefore to expand our research towards the analysis of PD's MRIs.

The objectives, along with the contributions made in this study, are presented as follows:

1. Providing a way of showing uncertainty in MRIs: This research highlighted the use of the NS theory [18] to solve this problem. Based on this theory, the inherent uncertainties in pixels were represented in terms of neutrosophic information (NI). This NI provides a unique facility by integrating uncertainties with their respective degrees of membership. In other words, NI provides the basis for the representation of ambiguity in MRIs.
2. Quantifying the uncertainties: This research proposed the neutrosophic entropy information (NEI) function to quantify the uncertainties associated with each NI. These uncertainties were quantified depending on the degree of membership associated with each NI.
3. Segmenting the regions: This study segmented the regions based on the NEI available for the MRIs. For this purpose, this study proposed the NEBCA to effectively segment regions of MRIs. Based on similarity or dissimilarity, this algorithm performs the segmentation task by clustering NEI.


4. Transforming the segmented regions into a color system: The colormap is one of the most effective methods for visualizing scientific data by employing a color sequence [22]. In visualization systems, color is frequently employed to encode important features. Available colormap systems such as RGB and CMYK are unable to effectively represent features in MRIs due to their device-dependent nature [7]. As the device changes, the feature representations made by the RGB and CMYK color systems also change [26]. In most cases, such mechanisms cause problems for human visual and perception systems. The human visual system is more sensitive to light than to hue and saturation. However, our visual system can distinguish brightness from information about hue and saturation. Hence, for better visualization, brightness is more important than the color representation of features. The hue-saturation-value (HSV) color system [3] fulfills this requirement. In the HSV color system, H, S and V represent the hue, saturation and value (also called brightness), respectively. It is a frequently used color system for providing color effects in images due to its appropriateness for feature representation and visualization [13]. The HSV color system has been successfully used in MRI analysis [24]. With this motivation, this study transferred the segmented regions of MRIs into the HSV color system to provide better visualization of the features in the segmented regions.

The proposed method for MRI segmentation consists of two main components: (a) the NEBCA, and (b) the HSV color system. The main advantage of the proposed method is that the MRI segmentation can be carried out in a supervised way. Therefore, the segmentation can be performed at various levels until satisfactory results are obtained. For the experimental purpose, MRIs of PD [5] were employed. Experimental results, including statistical analysis, demonstrated the efficiency of the proposed method over existing well-known image segmentation methods available in the literature.

The remainder of this chapter is organized as follows. Section 4.2 presents the theoretical basis of NS and NEI, followed by the mathematical representation of uncertainty. The proposed method for MRI segmentation is illustrated in Sect. 4.3. Experimental results are discussed in Sect. 4.4. Conclusions and future directions are discussed in Sect. 4.5.
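The HSV idea described above, hue encoding a region while saturation and value stay fixed, can be illustrated with the standard library's `colorsys` module. This is only an illustrative mapping, not the book's exact colormap; the 0.8 hue scale is an assumption chosen so that the first and last regions do not wrap to the same hue:

```python
import colorsys

def hsv_colorize(level, n_levels):
    # Map a segmented region index to an RGB color via the HSV system:
    # hue encodes the region; saturation and value (brightness) stay at 1.0.
    h = level / max(n_levels - 1, 1)                    # hue position in [0, 1]
    r, g, b = colorsys.hsv_to_rgb(h * 0.8, 1.0, 1.0)    # 0.8 avoids hue wrap-around
    return tuple(int(255 * c) for c in (r, g, b))

palette = [hsv_colorize(k, 4) for k in range(4)]        # 4 segmented regions
print(palette)
```

Applying such a palette to the labels produced by a clustering step gives each segmented region a visually distinct color while keeping brightness constant, which is the property the passage above argues for.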

4.2 Theoretical Basis

This section presents the definitions and representation of NS and NEI, followed by the mathematical representation of uncertainty using NS. A NS can be defined as follows.

Definition 4.2.1 ((NS) [18]) Assume that U is a universe of discourse. A NS N for u ∈ U can be represented by a truth-membership function T, an indeterminacy-membership function I and a falsity-membership function F, where T, I, F : U → ]⁻0, 1⁺[, u ≡ u(T(u), I(u), F(u)) ∈ N, and ⁻0 ≤ T(u) + I(u) + F(u) ≤ 3⁺.


Fig. 4.1 A three-dimensional representation of the neutrosophic membership functions (i.e., T, I, and F) of a NS

A three-dimensional representation of the neutrosophic membership functions (i.e., T, I, and F) of a NS is depicted in Fig. 4.1. In this figure, the truth-membership, falsity-membership and indeterminacy-membership are shown along the x, y and z axes, respectively. Wang et al. [21] defined an instance of the NS, the single valued neutrosophic set (SVNS), as follows:

Definition 4.2.2 ((SVNS) [21]) A NS (denoted as N) can be represented as a SVNS on the universe of discourse U = {u_1, u_2, u_3, ..., u_n} as follows:

N = \sum_{i=1}^{n} \frac{u_i}{\langle T(u_i), I(u_i), F(u_i) \rangle}    (4.2.1)

Example 4.2.1 (For Definition 4.2.2) Assume that an image consists of different gray levels, whose universe of discourse U consists of n different values: U = {u_1, u_2, ..., u_n}. Their corresponding memberships can be defined using three membership functions, viz., T(u_i), I(u_i) and F(u_i). Now, a SVNS can be expressed for the set of gray levels on U by:

N = \frac{u_1}{\langle T(u_1), I(u_1), F(u_1) \rangle} + \frac{u_2}{\langle T(u_2), I(u_2), F(u_2) \rangle} + \cdots + \frac{u_n}{\langle T(u_n), I(u_n), F(u_n) \rangle}    (4.2.2)

Probability is used to measure the occurrence of an event, but in most cases the frequency of occurrence is not 100% certain. Additionally, changes in brain MRIs can be observed in terms of gray matter, white matter and cerebrospinal fluid. Such changes are very vague and cannot be precisely defined on the basis of probability alone. NS theory can therefore be considered an appropriate method to describe these uncertainties, where the uncertainties associated with these MRIs are expressed with three different degrees of membership. The definition of the neutrosophic membership function is given below [17]:

Definition 4.2.3 ((Neutrosophic Membership Function) [17]) A neutrosophic membership function for an element u ∈ U can be defined in terms of a truth-membership function T, an indeterminacy-membership function I and a falsity-membership function F as follows:

    T(u) = \frac{u - \min(U)}{\max(U) - \min(U)}    (4.2.3)

    F(u) = 1 - T(u)    (4.2.4)

    I(u) = \sqrt{T(u)^2 + F(u)^2}    (4.2.5)

In Eq. (4.2.3), min and max represent the minimum and maximum functions, which return the minimum and maximum values from the universe of discourse U, respectively. Any element u belonging to the universe of discourse U can thus be represented as a NS using Eqs. (4.2.3)-(4.2.5). By Eq. (4.2.3), only the truth-membership of u is determined; the falsity-membership and indeterminacy-membership of u are obtained from Eqs. (4.2.4) and (4.2.5), respectively. The main advantage of these three formulas is that they restrict the truth-membership, falsity-membership and indeterminacy-membership to the range [0, 1].

Example 4.2.2 (For Definition 4.2.3) Consider an MRI of a PD patient as shown in Fig. 4.2. In this image, assume that a gray level pixel has value u_k = 164, whose universe of discourse lies within the range 0-255, i.e., U = [0, 255]. Hence, the NS for u_k with respect to U = [0, 255] can be represented by 164 / <0.64, 0.73, 0.36>, where these memberships are derived from Eqs. (4.2.3)-(4.2.5).
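As a quick illustration, the membership computation of Eqs. (4.2.3)-(4.2.5) can be sketched in a few lines of Python (the function name is ours; the values reproduce Example 4.2.2 up to rounding):

```python
import math

def neutrosophic_memberships(u, u_min=0, u_max=255):
    """Compute (T, I, F) for a gray level u per Eqs. (4.2.3)-(4.2.5)."""
    T = (u - u_min) / (u_max - u_min)   # truth-membership, Eq. (4.2.3)
    F = 1.0 - T                         # falsity-membership, Eq. (4.2.4)
    I = math.sqrt(T**2 + F**2)          # indeterminacy-membership, Eq. (4.2.5)
    return T, I, F

# Example 4.2.2: u_k = 164 on U = [0, 255] gives approximately <0.64, 0.73, 0.36>
T, I, F = neutrosophic_memberships(164)
print(T, I, F)
```

Note that all three memberships stay in [0, 1], as claimed above.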


Fig. 4.2 An MRI of a PD patient

Based on the above formulation of NS, the notion of NI has been introduced. A NI comprises an uncertain event together with its neutrosophic information. For the above universe of discourse U, it can be defined as follows.

Definition 4.2.4 (NI) A NI for the element u is a paired set of elements {u, <T(u), I(u), F(u)>}, denoted by NI. Mathematically, it can be expressed as:

    NI = \{u, \langle T(u), I(u), F(u) \rangle\}, \quad u \in U    (4.2.6)

The main difference between NI and fuzzy information (FI) [16] is that NI depends on three degrees of membership, i.e., truth-membership, falsity-membership and indeterminacy-membership, while FI relies only on the truth-membership of the fuzzy set. Thus, NI can express the inherited uncertainty more precisely than FI. A single NI, however, does not provide complete information about the uncertainty in the problem space; complete information about the inherited uncertainty is obtained by combining the NI of all the elements belonging to U. Such a collection of NI can be expressed in the form of a matrix called the neutrosophic information matrix (NIM). It can be defined as follows.

Definition 4.2.5 (NIM) A collection of NI defined on the universe of discourse U is called the NIM, denoted by NIM. Mathematically, it can be expressed as:

    NIM = [\{u_1, \langle T(u_1), I(u_1), F(u_1) \rangle\} + \{u_2, \langle T(u_2), I(u_2), F(u_2) \rangle\} + \ldots + \{u_n, \langle T(u_n), I(u_n), F(u_n) \rangle\}]    (4.2.7)

Here, each {u_i, <T(u_i), I(u_i), F(u_i)>} represents an individual NI with respect to the universe of discourse U, where u_i ∈ U and <T(u_i), I(u_i), F(u_i)> : U → [0, 1].


Definition 4.2.6 (Complement of NS) The complement of a NS N is denoted by N^c, and can be defined as: T^c(u) = F(u), I^c(u) = 1 - I(u) and F^c(u) = T(u), such that u ∈ U.

Definition 4.2.7 ((Neutrosophication) [17]) The operation of neutrosophication transforms a crisp set into a NS. Thus, a neutrosifier N is applied to a crisp subset i of the universe of discourse U, yielding a neutrosophic subset N(i : N), which can be expressed as:

    N(i : N) = \int_{U} (T(u), I(u), F(u)) N(u)    (4.2.8)

Here, (T(u), I(u), F(u))N(u) represents the product of a scalar (T(u), I(u), F(u)) and the NS N(u); and \int denotes the union of the family of NS (T(u), I(u), F(u))N(u), u ∈ U.

Entropy can be used in unpredictable circumstances to measure inherited uncertainties [16]. If such uncertainties are expressed by a NS, their computation is also possible through entropy. The NEI function can be used for this purpose to measure the entropy of each NI, which can be defined as follows.

Definition 4.2.8 (NEI) The NEI of a NI (i.e., NI) is denoted by a function E(NI), where E(NI) ∈ [0, 1], and can be defined as follows:

    E(NI) = 1 - \frac{1}{3} \sum_{u \in U} (T(u) + I(u) + F(u)) \times E_1 E_2 E_3    (4.2.9)

Here, E_1 = |T(u) - T^c(u)|, E_2 = |I(u) - I^c(u)|, and E_3 = |F(u) - F^c(u)|, where each NI satisfies Eq. (4.2.6). The main difference between the NEI function and Shannon's entropy function [15] is that the NEI measures the amount of uncertainty represented by the NS, whereas the uncertainty measurement of Shannon's entropy function relies on the probability distribution of the event. The NEI uses the three degrees of membership of the NS, i.e., truth-membership, falsity-membership and indeterminacy-membership, to measure the uncertainty.

Example 4.2.3 (For Definition 4.2.8) By referring to Example 4.2.2, the NI for u_k can be represented as NI = 164 / <0.64, 0.73, 0.36>. Hence, its corresponding NEI value, obtained from Eq. (4.2.9), is E(NI) = 0.98, where E_1 = |0.64 - 0.36|, E_2 = |0.73 - 0.27|, and E_3 = |0.36 - 0.64|.

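The NEI of a single NI, as used in Example 4.2.3, can be sketched as follows (the function name is ours; the complements follow Definition 4.2.6):

```python
def nei(T, I, F):
    """Neutrosophic-entropy information of one NI, Eq. (4.2.9)
    specialised to a single element of U."""
    e1 = abs(T - F)        # |T(u) - T^c(u)|, since T^c(u) = F(u)
    e2 = abs(I - (1 - I))  # |I(u) - I^c(u)|, since I^c(u) = 1 - I(u)
    e3 = abs(F - T)        # |F(u) - F^c(u)|, since F^c(u) = T(u)
    return 1 - (T + I + F) / 3 * e1 * e2 * e3

# Example 4.2.3: NI = 164 / <0.64, 0.73, 0.36>
print(round(nei(0.64, 0.73, 0.36), 2))  # 0.98
```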

4.3 The Proposed Method

In this section, a description of the proposed method is provided, which comprises two main components: (a) the NEBCA and (b) the HSV color system. In the subsequent section, pseudocode followed by the computational complexity analysis of the proposed method is presented.

4.3.1 Description of the Proposed Method

Each step of the proposed method is discussed as follows.

Step 1. Represent the MR input image I_p as a gray level space (GLS). The I_p has L levels, and G(i, j) is the gray level pixel at position (i, j), where i = 1, 2, ..., m and j = 1, 2, ..., n. A GLS is the set of all G(i, j), whose combination makes up a space in the image. The GLS for the I_p, denoted GLS, can be represented in the following matrix form:

    GLS = \begin{bmatrix} G(1,1) & G(1,2) & \ldots & G(1,n) \\ G(2,1) & G(2,2) & \ldots & G(2,n) \\ \vdots & \vdots & \ddots & \vdots \\ G(m,1) & G(m,2) & \ldots & G(m,n) \end{bmatrix}    (4.3.1)

Here, m and n represent the maximum number of rows and columns of the I_p, respectively, where ∀G(i, j) ∈ GLS.

Step 2. Represent the GLS as the NIM. Each G(i, j) is represented by an individual NI using the neutrosophic membership function, denoted NI(i, j). The set of these NI(i, j) is called the NIM, which can be represented in the following matrix form:

    NIM = \begin{bmatrix} NI(1,1) & NI(1,2) & \ldots & NI(1,n) \\ NI(2,1) & NI(2,2) & \ldots & NI(2,n) \\ \vdots & \vdots & \ddots & \vdots \\ NI(m,1) & NI(m,2) & \ldots & NI(m,n) \end{bmatrix}    (4.3.2)

In Eq. (4.3.2), each NI(i, j) represents the individual NI of the gray level pixel G(i, j), whose memberships can be expressed on the universe of discourse U as <T(G(i, j)), I(G(i, j)), F(G(i, j))> : U → [0, 1].


Step 3. Determine the entropy of each NI(i, j). The entropy of each NI(i, j) is determined based on the NEI function and is denoted E(NI(i, j)). The set of E(NI(i, j)) values forms a matrix E, which can be represented as:

    E = \begin{bmatrix} E(NI(1,1)) & E(NI(1,2)) & \ldots & E(NI(1,n)) \\ E(NI(2,1)) & E(NI(2,2)) & \ldots & E(NI(2,n)) \\ \vdots & \vdots & \ddots & \vdots \\ E(NI(m,1)) & E(NI(m,2)) & \ldots & E(NI(m,n)) \end{bmatrix}    (4.3.3)

In Eq. (4.3.3), each E(NI(i, j)) satisfies Eq. (4.2.9).

Step 4. Perform the clustering operation. The following sub-steps are used to cluster each E(NI(i, j)) available in E.

sub-step 4.1. Select C neutrosophic cluster centres in the neutrosophic space as C_1(1), C_2(1), ..., C_C(1), where 1 represents the 1st iteration of the clustering operation.
sub-step 4.2. repeat
sub-step 4.3. At the cth iterative step, assign each E(NI(i, j)) to C_j(c) if it satisfies the following relation:

    ||D[E(NI(i, j)), C_j(c)]|| < ||D[E(NI(i, j)), C_i(c)]||    (4.3.4)

for i = 1, 2, ..., C, i ≠ j, where C_j(c) denotes the set of NEI values whose neutrosophic cluster centre is C_j(c). Here, D[E(NI(i, j)), C_j(c)] and D[E(NI(i, j)), C_i(c)] are defined in Eqs. (4.3.5) and (4.3.6), respectively:

    D[E(NI(i, j)), C_j(c)] = \sum_{i=1}^{m} \sum_{j=1}^{n} [E(NI(i, j)) - C_j(c)]^2    (4.3.5)

    D[E(NI(i, j)), C_i(c)] = \sum_{i=1}^{m} \sum_{j=1}^{n} [E(NI(i, j)) - C_i(c)]^2    (4.3.6)

sub-step 4.4. Determine the new neutrosophic cluster centres C_j(c + 1) using the following equation:

    C_j(c + 1) = \frac{1}{Z_j} \sum E(NI(i, j)), \quad \forall E(NI(i, j)) \in C_j(c)    (4.3.7)

where Z_j is the number of NEI values in C_j(c) and j = 1, 2, ..., C.
sub-step 4.5. Go to sub-step 4.2 until there is no more change in the neutrosophic cluster centres.
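Sub-steps 4.1-4.5 amount to a one-dimensional k-means-style iteration over the scalar NEI values. A minimal sketch is given below; the function name, the linear-spacing initialisation and the iteration cap are our own assumptions, not part of the book's method:

```python
import numpy as np

def cluster_entropy_map(E, C=3, max_iter=100):
    """Cluster the NEI matrix E (Step 4) into C groups, k-means style.

    E is an m-by-n array of entropy values E(NI(i, j)); returns a label
    map of the same shape and the final cluster centres."""
    values = E.ravel()
    # sub-step 4.1: pick C initial neutrosophic cluster centres
    centres = np.linspace(values.min(), values.max(), C)
    for _ in range(max_iter):
        # sub-step 4.3: assign each value to its nearest centre (Eq. 4.3.4)
        labels = np.argmin(np.abs(values[:, None] - centres[None, :]), axis=1)
        # sub-step 4.4: recompute centres as cluster means (Eq. 4.3.7)
        new_centres = np.array([values[labels == j].mean() if np.any(labels == j)
                                else centres[j] for j in range(C)])
        # sub-step 4.5: stop when centres no longer change
        if np.allclose(new_centres, centres):
            break
        centres = new_centres
    return labels.reshape(E.shape), centres
```

For instance, `cluster_entropy_map(E, C=2)` on a 2x2 entropy map with two well-separated value groups assigns each group its own label.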


Step 5. Get the segmented image I_s. The above clustering operation transforms the I_p into the I_s.

Step 6. Transform the segmented regions into the HSV color system. Segmented regions of the I_s can be converted from the RGB color system to the HSV color system using the following formulas [7]:

    H = \begin{cases} 0, & S = 0 \\ 60 \times \frac{G - B}{S \times V}, & \max(R, G, B) = R \text{ and } G \geq B \\ 60 \times \frac{2 + (B - R)}{S \times V}, & \max(R, G, B) = G \\ 60 \times \frac{4 + (R - B)}{S \times V}, & \max(R, G, B) = B \\ 60 \times \frac{6 + (G - B)}{S \times V}, & \max(R, G, B) = R \text{ and } G < B \end{cases}    (4.3.8)

    S = \frac{\max(R, G, B) - \min(R, G, B)}{\max(R, G, B)}    (4.3.9)

    V = \max(R, G, B)    (4.3.10)

where R (red), G (green) and B (blue) are the normalized values of RGB. Here, max represents the maximum function, which returns the maximum of R, G and B. The H is employed to differentiate one color from another, where H ∈ [0°, 360°]; the S indicates the purity of a particular color, where S ∈ [0, 100]; and the V represents the brightness of a particular color, where V ∈ [0, 100].

Input: I_p with L levels and gray level pixel G(i, j) at the pixel position (i, j), where i = 1, 2, ..., m and j = 1, 2, ..., n.
Represent the I_p as the GLS (i.e., GLS).
for ∀G(i, j) do
    Represent G(i, j) as an NI(i, j), forming the NIM (i.e., NIM).
    Obtain the NEI of NI(i, j) ∈ NIM, denoted E(NI(i, j)).
end
Represent the E(NI(i, j)) set in matrix form, denoted E.
Perform the clustering operation to cluster each E(NI(i, j)) available in E.
Output: Segmented image I_s.
Transform the segmented regions of the I_s into the HSV color system.

Algorithm 3: PROCEDURE NEBCA().

4.4 Experimental Results

This section discusses the dataset and experimental set-up and the performance evaluation metrics, followed by discussions of the empirical analyses.


4.4.1 Dataset Description and Experimental Set-Up

MRIs of PD patients were collected from the Image and Data Archive [5] for the experimental purpose. The selected MRIs were divided into three categories: (a) a training set, (b) a validation set, and (c) a testing set. Each set consists of 10 different MRIs of PD, comprising three-dimensional T1-weighted MRIs of male and female subjects in age groups from 40 to 70. During the experiment, the training set was used for learning the proposed method with an initial number of clusters. The validation set was used to adjust the number of clusters to improve the performance of the proposed method. Finally, the performance of the proposed method was assessed on the testing set. The proposed method was implemented in the Matlab R2016a environment running on 64-bit Microsoft Windows 10 with a 3.20 GHz Core i7 processor and 8 GB of main memory. During the experiment, the universe of discourse for the input image I_p was initially assumed to be U = [min(I_p), max(I_p)], where the min and max functions return the minimum and maximum values of the input image, respectively. Then, the NEI was determined for each NI belonging to the NIM to initiate the segmentation operation using the proposed NEBCA. Finally, the segmented regions were transformed into the HSV color system.

4.4.2 Performance Evaluation Metrics

Different sets of evaluation metrics were used for the two components of the proposed method, namely, the NEBCA and the HSV color system. The performance of the proposed NEBCA was evaluated with respect to the consistency of the segmented images with the original images, while the performance of the HSV color system was evaluated with respect to better visualization of the features of the segmented regions. Three well-known metrics, namely the mean squared error (MSE), peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), were used for the performance evaluation of the NEBCA. For the performance evaluation of the HSV color system, three different metrics were used: the standard deviation (SD), the proposed total neutrosophic entropy information (TNEI) function, and Shannon's total entropy (STE) function. These metrics, defined below, are based on the input image I_p and the segmented image I_s.


• MSE [20]: The MSE is used to quantify the error between the original and segmented images. A lower value of MSE indicates a better segmentation. It can be defined as:

    MSE = \frac{1}{R \times C} \sum_{m=1}^{R} \sum_{n=1}^{C} (I_p(m, n) - I_s(m, n))^2    (4.4.1)

Here, R × C gives the size of the image in pixels.

• PSNR [20]: The PSNR measures the contrast between the original and segmented images. The PSNR has an inverse relation with the MSE, hence a higher value indicates a better segmentation. It can be defined as:

    PSNR = 10 \times \log_{10} \left( \frac{(255)^2}{MSE} \right)    (4.4.2)

• SSIM [20]: The SSIM is used to assess the quality of the segmented image based on the computation of three terms, viz., the luminance term, contrast term and structural term. A higher SSIM value indicates a better segmentation. It can be defined as:

    SSIM = [L(I_p, I_s)]^{\alpha} \times [C(I_p, I_s)]^{\beta} \times [S(I_p, I_s)]^{\gamma}    (4.4.3)

where L(I_p, I_s), C(I_p, I_s) and S(I_p, I_s) are called the luminance term, contrast term and structural term, respectively. Here, α, β and γ are three non-negative real numbers. The L(I_p, I_s), C(I_p, I_s) and S(I_p, I_s) can be defined as:

    L(I_p, I_s) = \frac{2 \mu_{I_p} \mu_{I_s} + C_1}{\mu_{I_p}^2 + \mu_{I_s}^2 + C_1}    (4.4.4)

    C(I_p, I_s) = \frac{2 \sigma_{I_p} \sigma_{I_s} + C_2}{\sigma_{I_p}^2 + \sigma_{I_s}^2 + C_2}    (4.4.5)

    S(I_p, I_s) = \frac{\sigma_{I_p I_s} + C_3}{\sigma_{I_p} \sigma_{I_s} + C_3}    (4.4.6)

where \mu_{I_p} and \mu_{I_s} are the local means, \sigma_{I_p} and \sigma_{I_s} the standard deviations, and \sigma_{I_p I_s} the cross-covariance of the images I_p and I_s. Here, C_1, C_2 and C_3 are three non-negative real constants.

• SD [9]: The SD can be used to assess the quality of the HSV color system representation. A higher value of SD indicates a better representation and visualization of the segmented regions. It is defined as the square root of the variance:

    SD = \sqrt{ \frac{ \sum_{m=1}^{R} \sum_{n=1}^{C} (I_s(m, n) - \mu)^2 }{ R \times C } }    (4.4.7)

Here, μ is the mean of the I_s.
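As a quick illustration, the MSE and PSNR of Eqs. (4.4.1) and (4.4.2) can be computed as follows (function names are ours):

```python
import numpy as np

def mse(ip, isg):
    """Mean squared error between original and segmented images, Eq. (4.4.1)."""
    return float(np.mean((ip.astype(float) - isg.astype(float)) ** 2))

def psnr(ip, isg):
    """Peak signal-to-noise ratio in dB, Eq. (4.4.2)."""
    return float(10 * np.log10(255.0 ** 2 / mse(ip, isg)))

# Toy 2x2 example: a constant offset of 5 gray levels
a = np.full((2, 2), 100.0)
b = np.full((2, 2), 105.0)
print(mse(a, b))             # 25.0
print(round(psnr(a, b), 2))  # 34.15
```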


• TNEI: The TNEI is used to calculate the total quantity of inherent information after representing the segmented regions through the HSV color system. A higher TNEI value indicates a higher amount of information present in the HSV color system. For the HSV color system, the TNEI can be defined as:

    TNEI = 1 - \frac{1}{R \times C} \sum_{m=1}^{R} \sum_{n=1}^{C} (T(I_s(m, n)) + I(I_s(m, n)) + F(I_s(m, n))) \times E_1 E_2 E_3    (4.4.8)

Here, E_1 = |T(I_s(m, n)) - T^c(I_s(m, n))|, E_2 = |I(I_s(m, n)) - I^c(I_s(m, n))|, and E_3 = |F(I_s(m, n)) - F^c(I_s(m, n))|. The robustness of the TNEI function was compared with the STE function [15], which can be defined as follows.

• STE [15]: The STE quantifies the total amount of information inherent in the I_s based on the probability distribution. A higher STE value indicates a higher amount of information present in the I_s. It can be defined as:

    H = - \sum_{i=1}^{R} \sum_{j=1}^{C} P_{i,j} \log_2(P_{i,j})    (4.4.9)

Here, P_{i,j} represents the probability of the gray level of the pixel at position (i, j) in the I_s, where i = 1, 2, ..., R and j = 1, 2, ..., C. The function H is known as Shannon's entropy function.
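For reference, Shannon's entropy of a gray-level image can be sketched as follows. The function name is ours, and the sketch sums over distinct gray levels (the standard form of Shannon's entropy), whereas Eq. (4.4.9) indexes the sum by pixel position:

```python
import numpy as np

def shannon_total_entropy(img):
    """Shannon's entropy of a gray-level image in bits, cf. Eq. (4.4.9)."""
    levels, counts = np.unique(img, return_counts=True)
    p = counts / counts.sum()         # probability of each gray level
    return float(-np.sum(p * np.log2(p)))

img = np.array([[0, 0], [128, 255]])
print(shannon_total_entropy(img))     # 1.5
```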

4.4.3 Discussion on the Results Obtained by the NEBCA

In this section, segmented results obtained from the NEBCA are presented. For the quality assessment of segmented images, the NEBCA was compared with well-known image segmentation methods, namely ITA [23], FDBM¹ [1], KMC [10], FCM [6] and NCBM [4]. Original MRIs of PD patients are shown in Figs. 4.3, 4.4, and 4.5a (column-wise) for the training, validation and testing sets, respectively. The segmented MRIs obtained from the ITA, FDBM, KMC, FCM and NCBM are presented in Figs. 4.3, 4.4, and 4.5b-f (column-wise), respectively. In Figs. 4.3, 4.4, and 4.5g (column-wise), the segmented MRIs obtained from the NEBCA are shown. Based on visual analysis of the segmented images, it is easy to observe that the pixels were not smoothly segmented by the ITA, FDBM, KMC, FCM and NCBM. By contrast, the segmented MRIs produced by the NEBCA indicated that, in the case of the training,

¹ FDBM stands for fuzzy divergence-based method.

Fig. 4.3 Segmentation of training set (images 1-10): (a) original image, (b) ITA, (c) FDBM, (d) KMC (with k = 3), (e) FCM, (f) NCBM, (g) NEBCA, and (h) HSV color system of (g)

Fig. 4.4 Segmentation of validation set (images 1-10): (a) original image, (b) ITA, (c) FDBM, (d) KMC (with k = 3), (e) FCM, (f) NCBM, (g) NEBCA, and (h) HSV color system of (g)

Fig. 4.5 Segmentation of testing set (images 1-10): (a) original image, (b) ITA, (c) FDBM, (d) KMC (with k = 3), (e) FCM, (f) NCBM, (g) NEBCA, and (h) HSV color system of (g)


validation and testing sets, pixels associated with gray matter and white matter were clearly segmented. The performance of the NEBCA and the existing methods was also compared statistically using the PSNR, MSE and SSIM. Comparison statistics of the proposed NEBCA with the existing ITA, FDBM, KMC, FCM and NCBM are provided in Tables 4.1, 4.2, and 4.3 for the training, validation and testing sets, respectively. The averages of PSNR, MSE and SSIM are given in the last columns of Tables 4.1, 4.2, and 4.3 for ease of comparison. For the training, validation and testing sets, the corresponding average PSNR values obtained using the NEBCA were 32.00, 31.92 and 37.84, respectively, which were much higher than those of the existing ITA, FDBM, KMC, FCM and NCBM. Similarly, the average MSE values obtained for the training, validation and testing sets using the proposed NEBCA were 48.13, 49.92 and 28.86, respectively, which were much lower than those of the existing methods. Finally, the average SSIM statistics were obtained from the original and segmented images; for the training, validation and testing sets, the average SSIM values were 0.6560, 0.6917 and 0.7006, respectively, again much higher than those of the existing ITA, FDBM, KMC, FCM and NCBM. Hence, these statistical analyses showed that the NEBCA outperformed the existing ITA, FDBM, KMC, FCM and NCBM. From the above comparisons, it can be observed that the existing methods are unable to segment images that have uncertain and vague boundaries, and real-world MRIs always carry such inherent uncertainty. The results obtained using the NEBCA, however, clearly segmented the MRIs, where boundaries and features can easily be distinguished.

4.4.4 Discussion on the Results Obtained by the HSV Color System

Statistical analyses of the HSV color system images are presented in this section. The HSV color system representations of the segmented MRIs obtained from the NEBCA for the training, validation and testing sets are presented in Figs. 4.3, 4.4, and 4.5h (column-wise), respectively. Based on visual analysis of the HSV color system images, it can easily be observed that the HSV color system adequately provided the color effect in the segmented regions. The proposed method used the same HSV color system to represent more than two homogeneous regions in the segmented MRIs, so that the segmented regions can be visualized clearly, providing a better representation of the features. Finally, the HSV color system images were assessed with the SD and TNEI. For comparison purposes, the SD and STE were also derived for the segmented MRIs obtained from the ITA, FDBM, KMC, FCM and NCBM. The averages of SD, STE and TNEI are displayed in the last columns of Tables 4.4, 4.5, and 4.6 for ease of

Table 4.1 Comparison of the NEBCA with the existing methods based on the training set images (averages over images 1-10)

Method   PSNR     MSE        SSIM
ITA      10.31     2658.40   0.4449
FDBM     13.81     2772.50   0.4461
KMC      14.19     2587.40   0.0022
FCM      10.06     6558.90   0.0524
NCBM      9.30    10269.00   0.0426
NEBCA    32.00       48.13   0.6560

Table 4.2 Comparison of the NEBCA with the existing methods based on the validation set images (averages over images 1-10)

Method   PSNR     MSE       SSIM
ITA      12.56    2537.80   0.4174
FDBM     11.57    2866.70   0.4039
KMC      14.43    2469.20   0.0022
FCM       9.83    6927.10   0.0529
NCBM      8.45    8932.70   0.0202
NEBCA    31.92      49.92   0.6917

Table 4.3 Comparison of the NEBCA with the existing methods based on the testing set images (averages over images 1-10)

Method   PSNR     MSE        SSIM
ITA      14.88     2184.10   0.4545
FDBM     14.82     2211.90   0.4544
KMC      15.02     2115.90   0.0022
FCM       0.10     5780.90   0.0457
NCBM      7.02    14447.00   0.0331
NEBCA    37.84       28.86   0.7006

Table 4.4 Comparison of the HSV color system images with the existing methods based on the training set images (averages over images 1-10)

Method   SD           STE / TNEI
ITA      7991             33,237 (STE)
FDBM     9179             39,597 (STE)
KMC      980,240          59,364 (STE)
FCM      2,828,100        45,511 (STE)
NCBM     1,778,100        86,339 (STE)
HSV      13,413,000      323,440 (TNEI)

Table 4.5 Comparison of the HSV color system images with the existing methods based on the validation set images (averages over images 1-10)

Method   SD           STE / TNEI
ITA      6414             27,069 (STE)
FDBM     9037             37,963 (STE)
KMC      879,790          57,871 (STE)
FCM      2,941,100        46,352 (STE)
NCBM     2,142,400        92,507 (STE)
HSV      13,616,000      325,040 (TNEI)

Table 4.6 Comparison of the HSV color system images with the existing methods based on the testing set images (averages over images 1-10)

Method   SD           STE / TNEI
ITA      5783             24,485 (STE)
FDBM     7160             30,916 (STE)
KMC      850,250          51,267 (STE)
FCM      2,796,600        44,502 (STE)
NCBM     1,522,600        76,853 (STE)
HSV      13,885,000      323,670 (TNEI)


Table 4.7 Comparison of the proposed method with the existing methods in terms of average CPU time (in milliseconds)

Method            Training set   Validation set   Testing set
ITA               239.95         249.85           245.75
FDBM              267.66         257.66           258.58
KMC               282.91         272.81           273.61
FCM               281.91         262.81           271.61
NCBM              162.05         173.15           170.11
Proposed method   152.15         144.14           145.11

comparison. The corresponding average SD values obtained using the HSV color system for the training, validation and testing sets were 13,413,000, 13,616,000 and 13,885,000, respectively, which were much higher than those of the existing ITA, FDBM, KMC, FCM and NCBM. Similarly, the average TNEI values obtained using the HSV color system for the training, validation and testing sets were 323,440, 325,040 and 323,670, respectively. These average TNEI values were significantly higher than the average STE values obtained for the existing ITA, FDBM, KMC, FCM and NCBM. Hence, these statistical analyses showed that the HSV color system reflected a higher quantity of information compared to the existing ITA, FDBM, KMC, FCM and NCBM.

4.4.5 Discussion on the Computation Time

The average CPU time (in milliseconds) for segmenting the training, validation and testing sets was computed for the existing methods (i.e., ITA, FDBM, KMC, FCM and NCBM) and the proposed method. These results are presented in Table 4.7. The proposed method required 152.15, 144.14 and 145.11 milliseconds of average CPU time to segment the training, validation and testing sets, respectively. By contrast, the existing methods, viz., ITA, FDBM, KMC, FCM and NCBM, required much more CPU time than the proposed method. This indicates that the proposed method is computationally efficient for segmentation as well as for the representation of the features in terms of the HSV color system in MRIs.

4.4.6 Algorithm and Computational Complexity

An algorithmic representation of the proposed method is shown in Algorithm 3. The computational complexity of the proposed method has been evaluated in terms of time and space, as described below.


1. Time complexity of the proposed method:
   (a) The proposed method requires O(M) time to read the input image I_p and convert it into the GLS, where M denotes the number of gray level pixels G(i, j) ∈ I_p.
   (b) It requires O(N) time to represent each G(i, j) as NI(i, j), where N denotes the number of NI(i, j).
   (c) The proposed method requires O(M × N) time to compute the NEI of each NI(i, j) ∈ NIM, denoted E(NI(i, j)).
   (d) It requires O(M × N) time to perform the clustering operation.
   (e) It requires O(M × N) time to obtain the segmented image I_s.
   (f) The transformation of the segmented regions into the HSV color system requires O(M × N) time.
   Hence, the total time complexity of the proposed method is O(M × N) for the maximum number of iterations.
2. Space complexity of the proposed method: The space complexity of the proposed method is the maximum amount of memory required at any one time during the segmentation process. Hence, the total space complexity of the proposed method is O(M), where M denotes the number of gray level pixels G(i, j) ∈ I_p.

4.5 Conclusions and Future Directions

Effective MRI segmentation remains a challenge for computer vision researchers. This study proposed a segmentation method based on NS theory and the HSV color system to analyze MRIs of PD patients. NS theory was used to represent the inherent uncertainties of MRIs in terms of the NI. The NIM was prepared by collecting the NI, and the NEI function was used to quantify the uncertainties associated with each NI. Clustering was then carried out on the basis of the proposed NEBCA. Finally, the HSV color system was applied to the segmented images to better represent the features in the segmented regions. MRIs segmented by the proposed method will help radiologists and medical practitioners interpret PD features more accurately. The proposed method was validated in terms of the performance evaluation metrics MSE, PSNR, SSIM, SD, TNEI and STE. The segmented results showed that the proposed method outperformed other well-known image segmentation methods, and it also consumed less time than the existing methods. The proposed method is a supervised method in which MRIs can be segmented at various levels by selecting the number of clusters. A limitation of the study is that the proposed method was validated only with MRIs of PD patients; its applicability to other types of MRIs can be verified in the future.


References

1. Chaira T, Ray AK (2003) Segmentation using fuzzy divergence. Pattern Recognit Lett 24(12):1837–1844
2. Dou W, Ruan S, Chen Y, Bloyet D, Constans JM (2007) A framework of fuzzy information fusion for the segmentation of brain tumor tissues on MR images. Image Vision Comput 25(2):164–171
3. Fang M, Zhang YJ (2017) Query adaptive fusion for graph-based visual reranking. IEEE J Sel Top Signal Process 11(6):908–917
4. Guo Y, Cheng HD (2009) New neutrosophic approach to image segmentation. Pattern Recognit 42(5):587–595
5. IDA (2019) Image and Data Archive. https://ida.loni.usc.edu/
6. Jiang XL, Wang Q, He B, Chen SJ, Li BL (2016) Robust level set image segmentation algorithm using local correntropy-based fuzzy c-means clustering with spatial constraints. Neurocomputing 207:22–35
7. Jin X, Chen G, Hou J, Jiang Q, Zhou D, Yao S (2018) Multimodal sensor medical image fusion based on nonsubsampled shearlet transform and S-PCNNs in HSV space. Signal Process 153:379–395
8. Khotanlou H, Colliot O, Atif J, Bloch I (2009) 3D brain tumor segmentation in MRI using fuzzy classification, symmetry analysis and spatially constrained deformable models. Fuzzy Sets Syst 160(10):1457–1473
9. Manchanda M, Sharma R (2016) A novel method of multimodal medical image fusion using fuzzy transform. J Vis Commun Image Represent 40:197–217
10. Moftah HM, Azar AT, Al-Shammari ET, Ghali NI, Hassanien AE, Shoman M (2014) Adaptive k-means clustering algorithm for MR breast image segmentation. Neural Comput Appl 24(7):1917–1928
11. Ogura A, Kamakura A, Kaneko Y, Kitaoka T, Hayashi N, Taniguchi A (2017) Comparison of grayscale and color-scale renderings of digital medical images for diagnostic interpretation. Radiol Phys Technol 10(3):359–363
12. Pednekar AS, Kakadiaris IA (2006) Image segmentation based on fuzzy connectedness using dynamic weights. IEEE Trans Image Process 15(6):1555–1562
13. Pekel JF, Vancutsem C, Bastin L, Clerici M, Vanbogaert E, Bartholomé E, Defourny P (2014) A near real-time water surface detection method based on HSV transformation of MODIS multispectral time series data. Remote Sens Environ 140:704–716
14. Rueda S, Knight CL, Papageorghiou AT, Noble JA (2015) Feature-based fuzzy connectedness segmentation of ultrasound images with an object completion step. Med Image Anal 26(1):30–46
15. Shannon CE (2001) A mathematical theory of communication. SIGMOBILE Mob Comput Commun Rev 5(1):3–55
16. Singh P, Dhiman G (2018) Uncertainty representation using fuzzy-entropy approach: special application in remotely sensed high-resolution satellite images (RSHRSIs). Appl Soft Comput 72:121–139
17. Singh P, Rabadiya K (2018) Information classification, visualization and decision-making: a neutrosophic set theory based approach. In: Proceedings of the 2018 IEEE international conference on systems, man, and cybernetics, Miyazaki, Japan, pp 409–414
18. Smarandache F (2002) Neutrosophy, a new branch of philosophy. Mult-Valued Log 8(3):297–384
19. Udupa JK, Wei L, Samarasekera S, Miki Y, van Buchem MA, Grossman RI (1997) Multiple sclerosis lesion quantification using fuzzy-connectedness principles. IEEE Trans Med Imaging 16(5):598–609
20. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612


21. Wang H, Smarandache F, Zhang Y, Sunderraman R (2005) Single valued neutrosophic sets. In: Proceedings of the 10th international conference on fuzzy theory and technology, Salt Lake City, Utah
22. Ware C, Turton TL, Bujack R, Samsel F, Shrivastava P, Rogers DH (2019) Measuring and modeling the feature detection threshold functions of colormaps. IEEE Trans Vis Comput Graph 25(9):2777–2790
23. Wu H, Barba J, Gil J (2000) Iterative thresholding for segmentation of cells from noisy images. J Microsc 197(3):296–304
24. Yang Y, Yang M, Huang S, Que Y, Ding M, Sun J (2017) Multifocus image fusion based on extreme learning machine and human visual system. IEEE Access 5:6989–7000
25. Zhang M, Zhang L, Cheng H (2010) A neutrosophic approach to image segmentation based on watershed method. Signal Process 90(5):1510–1517
26. Zhou L, Hansen CD (2016) A survey of colormaps in visualization. IEEE Trans Vis Comput Graph 22(8):2051–2069

Chapter 5

Brain Tumor Segmentation Using Type-2 Neutrosophic Thresholding Approach

“The purpose of computing is insight, not numbers.”
— Richard W. Hamming (1915–1998)

Abstract In this chapter, we introduce an extension of neutrosophic set (NS) theory called the type-2 neutrosophic set (T2NS). This new theory provides a granular representation of features and models uncertainties very effectively with six different memberships. To demonstrate a real-time application of this theory, a new segmentation method for brain tumor tissue structures in magnetic resonance imaging (MRI) is presented. Inconsistencies in gray levels are observed in MRIs due to their low illumination. The proposed theory addresses this problem by performing a neutrosophication operation on the gray levels based on six different membership functions called type-2 neutrosophic membership functions. During segmentation, the concept of T2NS entropy is used to quantify each gray level of the MRIs. The proposed method is able to select multiple adaptive thresholds for the segmentation of brain tumor tissue structures in MRIs from the locations of the maximum entropy values of the gray levels. Finally, an image fusion operation is performed on the segmented images with different thresholds to include all features and identify the location of the brain tumor. The fused images are compared with the segmented images obtained by five different methods, including the fuzzy c-means algorithm, modified fuzzy c-means algorithm, fuzzy-K-means clustering algorithm, kernel intuitionistic fuzzy entropy c-means and the neutrosophic entropy-based adaptive thresholding method. The proposed method achieves Jaccard similarity coefficients of 97.07, 97.92 and 97.13% for three different sets of MRIs, namely Set I, Set II and Set III, respectively. It exhibits correlation coefficients of 0.9638, 0.9698 and 0.9610 and uniformity measures of 0.9624, 0.9633 and 0.9660 for Sets I, II and III, respectively. These three performance evaluation metrics show the effectiveness of the proposed method compared to the existing methods.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 P. Singh, Biomedical Image Analysis, Brain Informatics and Health, https://doi.org/10.1007/978-981-99-9939-2_5


Keywords Type-2 neutrosophic set · Entropy · Image fusion · Image segmentation · Magnetic resonance imaging (MRI) · Brain tumor tissue segmentation

5.1 Introduction

Brain tumors are considered one of the most deadly and complicated diseases to detect [3]. They are abnormal masses of brain tissue and, depending on their growth, are classified as meningiomas, gliomas and pituitary tumors [2]. Meningioma tumors do not spread to adjacent tissue and grow slowly. Glioma tumors grow from contiguous glial cells and are considered the most common brain tumors. Pituitary tumors are caused by irregular growth of cells in the pituitary gland. Although these tumors do not spread to neighboring cells, they can cause serious health problems by disrupting hormone secretion in the body. MRI is one of the most commonly used brain imaging techniques for brain tumor diagnosis. Figure 5.1a–c shows MRIs of meningioma, glioma and pituitary tumors, respectively. Meningioma tumors are usually located near the skull, gray matter and cerebrospinal fluid. Glioma tumors usually involve white matter. Pituitary tumors are located near the sphenoid sinus, internal carotid arteries and optic chiasm. The diagnosis and treatment of brain tumors depend largely on the physician's expertise in assessing the location, size, characteristics and edges of the tumor. The direct scan reports that the physician receives from the radiologist often provide less reliable results due to numerous limitations, such as the presence of noise in the MRIs and human interference. As a result, image processing methods have been used as tools to locate such tumors over the past decade.

Fig. 5.1 MRIs of three common types of brain tumors: (a) meningioma, (b) glioma, and (c) pituitary tumor

A non-deterministic situation can arise from multiple sources of information. For example, different meanings of the same object to different people can cause non-deterministic situations. In the real world, information cannot always be interpreted in terms of crisp or real values. In the case of information analysis and decision making, this problem mainly leads to indeterminacy. Smarandache [17] proposed the concept of the NS to deal with such indeterminacy. In this study, this NS is referred to as the Type-1 NS (T1NS) for simplicity. T1NS theory represents uncertainty with three different memberships: truth, indeterminacy and falsity. T1NS has been used by several researchers to model different uncertainties in images. For example, Guo and Cheng [5] introduced a T1NS-based image segmentation method using the concept of entropy. Zhang [22] proposed a T1NS domain-based image segmentation method integrating the idea of the watershed. Using the wavelet transform with T1NS, Sengur and Guo [12] proposed an unsupervised color-texture-based image segmentation method. Singh [14] presented a new MRI segmentation method for Parkinson's disease based on T1NS and entropy concepts, called the neutrosophic entropy-based adaptive thresholding method (NEATSA).

The remainder of this chapter is organized as follows. The motivation and contributions of this study are discussed in Sect. 5.2. The background required for the study is presented in Sect. 5.3. The proposed T2NS theory, its mathematical operations and properties are presented in Sect. 5.4. The proposed image segmentation method is described in Sect. 5.5. Experimental results are discussed in Sect. 5.6. Conclusions and future directions are discussed in Sect. 5.7.

5.2 Motivation and Contributions

It should be noted that the methods discussed above have not gained popularity among medical practitioners. There are also many inherent difficulties associated with the analysis of brain tumors in MRIs, which are discussed below:

• Massive MRI databases of brain tumors are generally limited to the research environment.
• Many applications (e.g., expert systems, belief systems and information fusion) require granular analysis of information for appropriate decision making. However, the above literature review has shown that this work is yet to be done by researchers in the case of MRIs. A theory that can provide such granular analysis is therefore definitely needed.
• MRIs have bias fields and inconsistent boundaries that make the process of segmentation very difficult [20].
• MRIs typically include four different imaging modes, namely T1-weighted, T2-weighted, post-contrast T1-weighted, and fluid-attenuated inversion-recovery (FLAIR). Different MRI modes are used for different medical purposes. However, it is still a challenging task for physicians to identify the tissue structures of a brain tumor and recognize its location in MRIs, because tumors may vary greatly in position, scale, shape and appearance.

Therefore, a fully automatic segmentation method is currently desirable in the processing of MRIs so that the tissue structures and locations of tumors can be identified.


From this motivation, this study chose T1NS and its features to derive a more robust method for granular feature representation in MRIs. However, extending the study in this direction faces the following difficulties:

Difficulty 1: The neutrosophic membership functions of T1NS are two-dimensional, which does not support the granular representation of uncertainties.
Difficulty 2: More granularization can make the uncertainty modeling process very complicated, because the dimension of the membership functions needs to be increased/extended.
Difficulty 3: For a new extension, it is necessary to derive the formulas that support various mathematical operations (e.g., union, intersection, complement, etc.).

This chapter aims to overcome the above difficulties by introducing an extension of T1NS theory, called T2NS theory. The T2NS has the potential to deal with the granular analysis of information. This is possible because the neutrosophic memberships in T2NS theory are themselves neutrosophic. With the possibility of additional degrees of neutrality in T2NS, it becomes possible to model uncertainty in a simple way. T2NS also covers the two-dimensional scope for modeling uncertainties through mutual correlation with the two-dimensional modeling of T1NS. Finally, solutions to Difficulty 3 are identified by relying on the existing formulas of T1NS.

This research demonstrates the application of T2NS to segmenting brain tumor tissue structures in MRIs. Tissue structure segmentation can help in detecting the size, type and location of tumors in MRIs. For this purpose, a new segmentation method based on T2NS, T2NS entropy (T2NSE) and image fusion (IF) has been proposed. The proposed segmentation method, titled T2NSEIF, has the following advantages:

• It performs the segmentation in an unsupervised manner.
• It chooses different gray levels as thresholds based on the concept of maximum T2NSE.
• It is able to generate multiple segmented images with multiple thresholds.
• An image fusion operation is performed on the multiple segmented images to include all significant features of the brain tumor tissue structures. The final fused image also includes the location of the segmented tumor structures.

Empirical results are analyzed based on the original MRIs and ground truth (GT) images of brain tumor structures. The performance of T2NSEIF is compared with five other methods, including FCM [23], MFCM [11], FKMCA [8], KIFECM [10] and NEATSA [14]. Comparative metrics such as the Jaccard similarity coefficient (JSC) [8], correlation coefficient (CC) [8] and uniformity measure (UM) [6] show the efficiency of T2NSEIF in terms of brain tumor tissue structure segmentation in MRIs. Moreover, another goal of this work is to express T2NS theory simply so that it can be widely used in other fields.


5.3 Background for the Study

First, we present the definition and representation of T1NS as follows:

Definition 5.3.1 ((T1NS) [17]) Assume that $U$ is a universe of discourse. A NS $N$ for $u \in U$ can be represented by a truth-membership function $T$, an indeterminacy-membership function $I$ and a falsity-membership function $F$, where $T, I, F : U \to \,]^{-}0, 1^{+}[\,$, $u \equiv u(T(u), I(u), F(u)) \in N$, and $^{-}0 \le T(u) + I(u) + F(u) \le 3^{+}$.

From a philosophical point of view, the T1NS takes values in $]^{-}0, 1^{+}[$ on real standard or non-standard subsets. For engineering applications, it is appropriate to take the interval $[0, 1]$ instead of $]^{-}0, 1^{+}[$, since $]^{-}0, 1^{+}[$ is difficult to use in real applications such as engineering and scientific problems.

A two-dimensional representation of the type-1 neutrosophic membership functions (T1NMFs) is shown in Fig. 5.2. In this figure, the truth, falsity and indeterminacy membership functions are plotted along the x, y and z axes, respectively. This representation of T1NMFs differs from the intuitionistic fuzzy set (IFS) [1], because the uncertainty representation in IFS depends mainly on two functions, namely the degree of membership and non-membership. Wang et al. [19] defined an instance of the T1NS as a single valued neutrosophic set (SVNS). When the universe of discourse $U$ is discrete and finite, the SVNS can be defined as follows:

$$N = \frac{\langle T(u_1), I(u_1), F(u_1)\rangle}{u_1} + \frac{\langle T(u_2), I(u_2), F(u_2)\rangle}{u_2} + \ldots = \sum_{i=1}^{n} \frac{\langle T(u_i), I(u_i), F(u_i)\rangle}{u_i}, \quad \forall u_i \in U \qquad (5.3.1)$$

Fig. 5.2 Representation of T1NMFs


When the universe of discourse $U$ is continuous and infinite, the SVNS can be defined as follows:

$$N = \int_{U} \frac{\langle T(u_i), I(u_i), F(u_i)\rangle}{u_i}, \quad \forall u_i \in U \qquad (5.3.2)$$

In Eqs. (5.3.1) and (5.3.2), the horizontal bar denotes a delimiter. The numerator in each term represents the membership values in the T1NS $N$ associated with the observation of the universe of discourse $U$ indicated in the denominator. In Eq. (5.3.1), the summation symbol indicates the aggregation of each observation; hence, the "$+$" signs denote an aggregation operator. In Eq. (5.3.2), the integral sign denotes a continuous function-theoretic aggregation operator for continuous variables [21]. In the following, definitions for the T1NMFs are given [16]:

Definition 5.3.2 ((T1NMFs) [16]) The T1NMFs, viz., $T$, $I$ and $F$ for $u \in U$, can be defined as:

$$T(u) = \frac{u - \min(U)}{\max(U) - \min(U)} \qquad (5.3.3)$$
$$F(u) = 1 - T(u) \qquad (5.3.4)$$
$$I(u) = \sqrt{T(u)^2 + F(u)^2} \qquad (5.3.5)$$

In Eq. (5.3.3), "$\min$" and "$\max$" represent the minimum and maximum functions, respectively. For $u \in U$, its Type-1 neutrosophic memberships (T1NMs) can be determined using Eqs. (5.3.3)–(5.3.5). In Eq. (5.3.3), $\min(U)$ and $\max(U)$ give the minimum and maximum values of $U$. Eq. (5.3.3) determines only $T$ of $u$; for the determination of $F$ and $I$, Eqs. (5.3.4) and (5.3.5) are used, respectively. The main benefit of these three formulas is that they restrict the ranges of $T$, $F$ and $I$ to between 0 and 1.

Example 5.3.1 (For Definition 5.3.2) In a gray-scale image, let $P_k = 164$ be the gray level value of the pixel at location $k$. The universe of discourse for the gray levels can be defined within the range $0$–$255$, i.e., $U = [0, 255]$, where $P_k \in U$. Hence, $P_k$ can be represented as a T1NS $N$ in terms of $T$, $F$ and $I$ (Eqs. (5.3.3)–(5.3.5), respectively) as:

$$N = \frac{\langle T(P_k), I(P_k), F(P_k)\rangle}{P_k} = \frac{\langle 0.64, 0.73, 0.36\rangle}{164} \qquad (5.3.6)$$
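As a minimal sketch of Eqs. (5.3.3)–(5.3.5), the memberships in Example 5.3.1 can be reproduced as follows (the function name `t1nm` is ours, purely illustrative):

```python
import math

def t1nm(u, u_min=0, u_max=255):
    """Type-1 neutrosophic memberships of a gray level u (Eqs. 5.3.3-5.3.5)."""
    t = (u - u_min) / (u_max - u_min)   # truth, Eq. (5.3.3)
    f = 1.0 - t                         # falsity, Eq. (5.3.4)
    i = math.sqrt(t ** 2 + f ** 2)      # indeterminacy, Eq. (5.3.5)
    return t, i, f

t, i, f = t1nm(164)
print(round(t, 2), round(f, 2))  # 0.64 0.36, as in Eq. (5.3.6)
```

Note that computing $I$ from the unrounded $T$ and $F$ gives approximately 0.74; the value 0.73 in Eq. (5.3.6) results from rounding $T$ and $F$ to two decimal places first.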

Definition 5.3.3 (Complement of T1NS) The complement of a T1NS $N$ is denoted by $N^c$ and is defined as $T^c(u) = F(u)$, $I^c(u) = 1 - I(u)$ and $F^c(u) = T(u)$, such that $u \in U$.


Definition 5.3.4 ((Neutrosophication) [16]) The operation of neutrosophication is the process of assigning degrees of membership to the elements of the universe of discourse $U$ using T1NMFs. A neutrosifier $N = (T, I, F)$ is a 3-tuple of membership functions $T, I, F : U \to [0, 1]$. When it is employed on $U$, the neutrosifier $N$ yields a NS $N(U)$ in $U$ as:

$$N(U) = \{\langle u, T(u), I(u), F(u)\rangle \,|\, u \in U\} \qquad (5.3.7)$$

Definition 5.3.5 ((Deneutrosophication) [16]) The operation of deneutrosophication transforms a T1NS into a crisp set.

5.4 The Proposed T2NS and Related Concepts

This section presents the theory, various mathematical operations and the concept of uncertainty measurement for the proposed T2NS.

5.4.1 T2NS Theory

The term "T2NS" and its related definitions are explained in this section. Well-defined mathematical terms and definitions are included here that will help us to represent the concept of T2NS effectively. These terms and definitions will be used throughout the rest of this chapter.

Let us consider the representation of the T1NMFs (Fig. 5.2). In this figure, if the three points of the T1NMs are randomly shifted left and right along the three axes (not necessarily by the same distances), the sharp boundary of the neutrosophic membership triangle becomes blurred. This blurred region is shown in Fig. 5.3. From this blurred region, we can see that the T1NMFs no longer have three T1NMs for a given value of $u \in U$. Instead, each of the T1NMFs generates two additional neutrosophic memberships for $u \in U$, which can be taken as $u_l$ and $u_r$. Here, the suffixes "$l$" and "$r$" denote the shift of the T1NMs from $u$ to the left ($L$) and right ($R$), respectively. These values cannot be assumed identical with respect to the neutrosophic memberships. Therefore, we choose these memberships randomly and represent them in the form of a triangle to demonstrate our concept. For $u \in U$, new two-dimensional membership features evolve: the type-2 neutrosophic membership functions (T2NMFs), which map to a T2NS.

A T2NS can be defined based on the T2NMFs. However, the left ($L$) and right ($R$) shifts of the T1NMs form the basis of the T2NS. This shift is called the LR-shift. Therefore, we give the definition of the LR-shift before introducing the definition of T2NS.


Fig. 5.3 Representation of T2NMFs

Definition 5.4.1 (LR-Shift) Shifting the T1NMFs for $u \in U$ to the left (i.e., $u_l$) and to the right (i.e., $u_r$) along the three axes (i.e., x, y and z) yields $\langle T_1(u, u_l), I_1(u, u_l), F_1(u, u_l)\rangle \in S_u \subseteq [0, 1]$ and $\langle T_2(u, u_r), I_2(u, u_r), F_2(u, u_r)\rangle \in S_u \subseteq [0, 1]$. The one-to-one mapping of $T_1(u, u_l)$ with $T_2(u, u_r)$, $I_1(u, u_l)$ with $I_2(u, u_r)$ and $F_1(u, u_l)$ with $F_2(u, u_r)$ is then called an LR-shift of $\langle T(u), I(u), F(u)\rangle$.

Now, we can define the T2NS based on the LR-shift. ˜ in U can be represented Definition 5.4.2 (T2NS) For .u ∈ U , a T2NS denoted as .N by the LR-shift of T1NMFs, i.e., .〈T (u), I (u), F (u)〉. That is, when .(u → ul ) and .(u → ur ), then .u ∈ U and .ul , ur ∈ Su ⊆ [0, 1], i.e., 〈T12 (ul , ur ), I12 (ul , ur ), F12 (ul , ur )〉 =

u r ∈Su

.

ul ∈Su

〈T12 (ul , ur ), I12 (ul , ur ), F12 (ul , ur )〉 ul , ur (5.4.1)

The “.→” symbol here indicates either a left shift (i.e., .ul ) or a right shift (i.e., .ur ) along the axes. For the purpose of representation, the left-hand side of the Eq. (5.4.1) ˜ in the rest of the article. is denoted as .N


If $U$ and $S_u$ are both discrete, the right-most part of Eq. (5.4.1) can be represented in the form of a discrete universe of discourse as follows:

$$\tilde{N} = \bigcup_{u_r \in S_u} \bigcup_{u_l \in S_u} \frac{\langle T_{12}(u_l, u_r), I_{12}(u_l, u_r), F_{12}(u_l, u_r)\rangle}{u_l, u_r} = \sum_{i=1}^{n} \frac{\Big\langle \bigcup_{u_r \in S_u} \bigcup_{u_l \in S_u} T_{12}(u_l, u_r), \; \bigcup_{u_r \in S_u} \bigcup_{u_l \in S_u} I_{12}(u_l, u_r), \; \bigcup_{u_r \in S_u} \bigcup_{u_l \in S_u} F_{12}(u_l, u_r) \Big\rangle}{u_l, u_r} \qquad (5.4.2)$$

Here, $\bigcup$ denotes the union operation, and $u \in U$ is discretized into $n$ values with equal-length discretization. In Eq. (5.4.2), $T_{12}(u_l, u_r)$ consists of the type-2 truth-membership functions $T_1(u, u_l)$ and $T_2(u, u_r)$, $I_{12}(u_l, u_r)$ consists of the type-2 indeterminacy-membership functions $I_1(u, u_l)$ and $I_2(u, u_r)$, and $F_{12}(u_l, u_r)$ consists of the type-2 falsity-membership functions $F_1(u, u_l)$ and $F_2(u, u_r)$, where $u \in U$ and $u_l, u_r \in S_u \subseteq [0, 1]$. For ease of simplification, a T2NS can also be represented by:

$$\tilde{N} = \{\langle u, T_1(u, u_l), T_2(u, u_r)\rangle, \langle u, I_1(u, u_l), I_2(u, u_r)\rangle, \langle u, F_1(u, u_l), F_2(u, u_r)\rangle \,|\, u \in U, \forall u_l, u_r \in S_u \subseteq [0, 1]\} \qquad (5.4.3)$$

where $\langle T_1(u, u_l), I_1(u, u_l), F_1(u, u_l)\rangle \subseteq [0, 1]$ and $\langle T_2(u, u_r), I_2(u, u_r), F_2(u, u_r)\rangle \subseteq [0, 1]$. In Definition 5.4.2, $u_l$ and $u_r$ are obtained from $u$, and $\forall u_l, u_r \in S_u \subseteq [0, 1]$.

A T2NS becomes a T1NS if the uncertainties somehow decrease or vanish. In such a scenario, the T2NMFs reduce to T1NMFs. This is only possible if the corresponding T2NMFs for each $u \in U$ are equivalent, i.e., $T_1(u, u_l) = T_2(u, u_r)$, $I_1(u, u_l) = I_2(u, u_r)$, and $F_1(u, u_l) = F_2(u, u_r)$. In this situation, the blurred region (as in Fig. 5.3) disappears, and Fig. 5.3 comes to resemble Fig. 5.2.

The blurred region shown in Fig. 5.3 is the neutrosophic region. Another example of the neutrosophic region is shown in Fig. 5.5. This region is extremely valuable because it fully represents the inherent uncertainties measured by the T2NMFs. It evolves due to the direct involvement of the uncertain nature of each non-stationary event. It also provides an exceptionally useful visual representation of all the uncertainties that result from the T2NMFs, and it allows us to represent a T2NS graphically. In doing so, we remove the main difficulty of the otherwise impractical representation of T2NS. The neutrosophic region shows how the uncertainties are distributed over the plane; its exact shape depends largely on the choice of T2NMFs. A T2NS can also be represented as a multi-valued neutrosophic set (MVNS) as follows:


Definition 5.4.3 (T2NS as a MVNS) For $u_i \in U$, a T2NS $\tilde{N}$ can be represented as a MVNS by considering all T2NMFs:

$$\tilde{N} = \sum_{i=1}^{n} \left\{ \frac{\langle T(u_i), I(u_i), F(u_i)\rangle / u_i}{\langle T_1(u_i, u_l), I_1(u_i, u_l), F_1(u_i, u_l)\rangle \langle T_2(u_i, u_r), I_2(u_i, u_r), F_2(u_i, u_r)\rangle / (u_l, u_r)} \right\} \qquad (5.4.4)$$

where $\forall u_l, u_r \in S_u \subseteq [0, 1]$. Eq. (5.4.4) can also be expanded as:

$$\tilde{N} = \frac{\langle T(u_1), I(u_1), F(u_1)\rangle / u_1}{\langle T_1(u_1, u_l), I_1(u_1, u_l), F_1(u_1, u_l)\rangle \langle T_2(u_1, u_r), I_2(u_1, u_r), F_2(u_1, u_r)\rangle / (u_l, u_r)} + \frac{\langle T(u_2), I(u_2), F(u_2)\rangle / u_2}{\langle T_1(u_2, u_l), I_1(u_2, u_l), F_1(u_2, u_l)\rangle \langle T_2(u_2, u_r), I_2(u_2, u_r), F_2(u_2, u_r)\rangle / (u_l, u_r)} + \ldots + \frac{\langle T(u_n), I(u_n), F(u_n)\rangle / u_n}{\langle T_1(u_n, u_l), I_1(u_n, u_l), F_1(u_n, u_l)\rangle \langle T_2(u_n, u_r), I_2(u_n, u_r), F_2(u_n, u_r)\rangle / (u_l, u_r)} \qquad (5.4.5)$$

Theorem 5.4.1 For $u \in U$, the corresponding type-2 neutrosophic memberships (T2NMs) cannot be the same.

Proof For $u \in U$, its corresponding T1NMs are not simultaneously equal, i.e., $T(u) \neq I(u) \neq F(u)$. Accordingly, the memberships obtained from $T_1(u, u_l)$ and $T_2(u, u_r)$ cannot be simultaneously equal. It follows that the memberships obtained by the remaining T2NMFs are not equal either, i.e., $T_1(u, u_l) \neq I_1(u, u_l) \neq F_1(u, u_l)$ and $T_2(u, u_r) \neq I_2(u, u_r) \neq F_2(u, u_r)$. ∎

Definition 5.4.4 (Scaling of Shifting) For $u \in U$, the derivation of T2NMs within the range or scale $[0, 1]$ by shifting $u$ to the left (i.e., $u \to u_l$) and right (i.e., $u \to u_r$) is called a scaling of shifting. The degree of membership $0.5$ is considered the mid-point of the scale for ease of computation. The T2NMs can be obtained by shifting $T(u)$ in either direction, based on the following scaling conditions:

Condition 1: if $T(u) > 0.5$, then $T_1(u, u_l) \in [0.5, T(u))$ and $T_2(u, u_r) \in (T(u), 1]$.
Condition 2: if $T(u) < 0.5$, then $T_1(u, u_l) \in [0, T(u))$ and $T_2(u, u_r) \in (T(u), 0.5]$.
Condition 3: if $T(u) = 0.5$, then $T_1(u, u_l) \in [0, 0.5]$ and $T_2(u, u_r) \in [0.5, 1]$.

The above three shifting conditions are shown in Fig. 5.4a–c, respectively.
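The three scaling conditions above can be sketched as a small helper (the function name `shift_ranges` is ours, purely illustrative; endpoint openness is not encoded in this sketch):

```python
def shift_ranges(t):
    """Intervals from which T1(u, ul) and T2(u, ur) are drawn (Conditions 1-3).

    Returns (left_interval, right_interval) as (lo, hi) pairs.
    """
    if t > 0.5:                        # Condition 1
        return (0.5, t), (t, 1.0)
    if t < 0.5:                        # Condition 2
        return (0.0, t), (t, 0.5)
    return (0.0, 0.5), (0.5, 1.0)      # Condition 3

left, right = shift_ranges(0.64)
print(left, right)  # (0.5, 0.64) (0.64, 1.0)
```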


Fig. 5.4 Scaling of shifting for T2NMs: (a) Condition 1, (b) Condition 2 and (c) Condition 3

Definition 5.4.5 (T2NMFs) For $u \in U$, the T2NMFs can be defined in terms of the scaling of shifting of the T1NMs as:

$$T_1(u, u_l) = \frac{T(u) + 0.5}{2} \qquad (5.4.6)$$
$$T_2(u, u_r) = \frac{T(u) + 1}{2} \qquad (5.4.7)$$
$$F_1(u, u_l) = 1 - T_1(u, u_l) \qquad (5.4.8)$$
$$F_2(u, u_r) = 1 - T_2(u, u_r) \qquad (5.4.9)$$
$$I_1(u, u_l) = \sqrt{T_1(u, u_l)^2 + F_1(u, u_l)^2} \qquad (5.4.10)$$
$$I_2(u, u_r) = \sqrt{T_2(u, u_r)^2 + F_2(u, u_r)^2} \qquad (5.4.11)$$
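A minimal sketch of Eqs. (5.4.6)–(5.4.11), building on the type-1 truth membership of Eq. (5.3.3) (the function name `t2nmf` is ours, purely illustrative):

```python
import math

def t2nmf(u, u_min=0, u_max=255):
    """Type-2 neutrosophic memberships of a gray level u (Eqs. 5.4.6-5.4.11)."""
    t = (u - u_min) / (u_max - u_min)   # basis: type-1 truth, Eq. (5.3.3)
    t1 = (t + 0.5) / 2                  # Eq. (5.4.6)
    t2 = (t + 1.0) / 2                  # Eq. (5.4.7)
    f1, f2 = 1.0 - t1, 1.0 - t2         # Eqs. (5.4.8)-(5.4.9)
    i1 = math.sqrt(t1 ** 2 + f1 ** 2)   # Eq. (5.4.10)
    i2 = math.sqrt(t2 ** 2 + f2 ** 2)   # Eq. (5.4.11)
    return (t1, i1, f1), (t2, i2, f2)

left, right = t2nmf(164)
```

For the gray level 164 of Example 5.3.1, this yields a left-shifted truth membership of about 0.57 and a right-shifted one of about 0.82.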

Definition 5.4.6 (Neutrosophic Region (NR)) For $u \in U$, the associated T2NS consists of a bounded region formed as a result of the LR-shift of the T1NMFs, i.e., $\langle T(u), I(u), F(u)\rangle$. This region is called the NR. Mathematically, it can be expressed by:

$$NR(\tilde{N}) = \bigcup_{u \in U} S_u \qquad (5.4.12)$$

where $\forall u_l, u_r \in u_i$ and $\forall u_l, u_r \in S_u \subseteq [0, 1]$. The NR represents not only all T2NMFs but also the nature of their uncertainties. In Fig. 5.3, the uncertainties of all T2NMFs occupy a region known as the NR.

Definition 5.4.7 (Centroid of NR (CNR)) A CNR is the point of concurrency of a NR, where all three medians intersect each other. It is denoted as $C_{NR}$.


Fig. 5.5 Representation of the CNR

Figure 5.5 shows the effectiveness of all T2NMs, which can be defined by the distance of each degree of membership from the CNR. The greater the distance of a particular T2NM from the CNR, the greater its effect on the information compared to the other T2NMs. In Fig. 5.5, the planes (i.e., xy, zx and yz) are shown for $u \in U$, whose axes represent the type-2 truth, indeterminacy and falsity memberships, respectively. Here, six different lines emanate from $u$ and intersect the three planes at two different points each. All of these points represent T2NMs.

Definition 5.4.8 (Basis of T2NMFs) A basis of the T2NMFs for $u \in U$ is the T1NMFs of $u$.

Definition 5.4.8 implies that if $\langle T(u), I(u), F(u)\rangle$ is the T1NMFs of $u$, then it can be considered the basis of any T2NMFs, i.e., $\langle T_1(u, u_l), I_1(u, u_l), F_1(u, u_l)\rangle$ and $\langle T_2(u, u_r), I_2(u, u_r), F_2(u, u_r)\rangle$.

Definition 5.4.9 (T1NS as a T2NS) The T2NS can be represented by three pairs of T2NMFs: $\langle u, T_1(u, u_l), T_2(u, u_r)\rangle$, $\langle u, I_1(u, u_l), I_2(u, u_r)\rangle$ and $\langle u, F_1(u, u_l), F_2(u, u_r)\rangle$. This representation implies that there must be a maximum value from each of the three pairs of T2NMFs.

Definition 5.4.10 (Connected Type-2 Neutrosophic Set (CT2NS)) For $u \in U$, a CT2NS denoted as $\tilde{C}_N$ has $n$ values, where $\tilde{C}_N$ has a one-to-one relation between the elements of $\langle T_{12}(u_l, u_r), I_{12}(u_l, u_r), F_{12}(u_l, u_r)\rangle$ and $\langle T(u), I(u), F(u)\rangle$, i.e., an element from the T1NS maps exactly to another element of the T2NS. Mathematically, it can be expressed by:

$$u \in U : T(u) \leftrightarrow T_{12}(u_l, u_r), \; I(u) \leftrightarrow I_{12}(u_l, u_r), \; F(u) \leftrightarrow F_{12}(u_l, u_r) \qquad (5.4.13)$$


5.4.2 Set-Theoretic Operations and Properties for T2NS

Some essential set-theoretic operations and properties related to T2NS are explained in this section. This collection of mathematical properties shows that T2NS is a well-defined notion with strong and effective mathematical support. The properties are given below in Definitions 5.4.11–5.4.20.

Definition 5.4.11 (Complement of T2NS) The complement of a T2NS $\tilde{N}$ is denoted by $\tilde{N}^c$ and is defined by:

$$\tilde{N}^c = \bigcup_{u_l, u_r \in S_u} \langle 1 - T_{12}(u_l, u_r), 1 - I_{12}(u_l, u_r), 1 - F_{12}(u_l, u_r)\rangle \qquad (5.4.14)$$

The complement of a T2NS, i.e., $\tilde{N}^c$, is itself a T2NS.

Definition 5.4.12 (Union Operation on T2NS) For two T2NSs $\tilde{N}_1$ and $\tilde{N}_2$, the union operation can be defined as:

$$\tilde{N}_1 \vee \tilde{N}_2 = \bigcup_{u_r \in S_u} \bigcup_{u_l \in S_u} \langle T_{12}^1(u_l, u_r) \vee T_{12}^2(u_l, u_r), \; I_{12}^1(u_l, u_r) \vee I_{12}^2(u_l, u_r), \; F_{12}^1(u_l, u_r) \vee F_{12}^2(u_l, u_r)\rangle \qquad (5.4.15)$$

Here, $\tilde{N}_1 = \langle T_{12}^1(u_l, u_r), I_{12}^1(u_l, u_r), F_{12}^1(u_l, u_r)\rangle$ and $\tilde{N}_2 = \langle T_{12}^2(u_l, u_r), I_{12}^2(u_l, u_r), F_{12}^2(u_l, u_r)\rangle$. In Eq. (5.4.15), $T_{12}^1(u_l, u_r) \vee T_{12}^2(u_l, u_r)$, $I_{12}^1(u_l, u_r) \vee I_{12}^2(u_l, u_r)$ and $F_{12}^1(u_l, u_r) \vee F_{12}^2(u_l, u_r)$ can be defined as follows:

$$T_{12}^1(u_l, u_r) \vee T_{12}^2(u_l, u_r) = \langle T_1^1(u, u_l) \vee T_1^2(u, u_l), \; T_2^1(u, u_r) \vee T_2^2(u, u_r)\rangle \qquad (5.4.16)$$
$$I_{12}^1(u_l, u_r) \vee I_{12}^2(u_l, u_r) = \langle I_1^1(u, u_l) \vee I_1^2(u, u_l), \; I_2^1(u, u_r) \vee I_2^2(u, u_r)\rangle \qquad (5.4.17)$$
$$F_{12}^1(u_l, u_r) \vee F_{12}^2(u_l, u_r) = \langle F_1^1(u, u_l) \vee F_1^2(u, u_l), \; F_2^1(u, u_r) \vee F_2^2(u, u_r)\rangle \qquad (5.4.18)$$

Here, $\vee$ returns the maximum value in the union operation.

Theorem 5.4.2 For two T2NSs $\tilde{N}_1$ and $\tilde{N}_2$, their highest T2NMs are attained on the support of $\tilde{N}_1 \vee \tilde{N}_2$.

Proof This follows from Definition 5.4.12. ∎


5 Brain Tumor Segmentation Using Type-2 Neutrosophic Thresholding Approach

Definition 5.4.13 (Intersection Operation on T2NS) For two T2NSs Ñ₁ and Ñ₂, the intersection operation is defined by:

Ñ₁ ∧ Ñ₂ = ⋀_{ul, ur ∈ Su} ⟨T¹₁₂(ul, ur) ∧ T²₁₂(ul, ur), I¹₁₂(ul, ur) ∧ I²₁₂(ul, ur), F¹₁₂(ul, ur) ∧ F²₁₂(ul, ur)⟩   (5.4.19)

Here, Ñ₁ = ⟨T¹₁₂(ul, ur), I¹₁₂(ul, ur), F¹₁₂(ul, ur)⟩ and Ñ₂ = ⟨T²₁₂(ul, ur), I²₁₂(ul, ur), F²₁₂(ul, ur)⟩. In Eq. (5.4.19), the pairwise minima are defined by:

T¹₁₂(ul, ur) ∧ T²₁₂(ul, ur) = ⟨T₁¹(u, ul) ∧ T₁²(u, ul), T₂¹(u, ur) ∧ T₂²(u, ur)⟩   (5.4.20)

I¹₁₂(ul, ur) ∧ I²₁₂(ul, ur) = ⟨I₁¹(u, ul) ∧ I₁²(u, ul), I₂¹(u, ur) ∧ I₂²(u, ur)⟩   (5.4.21)

F¹₁₂(ul, ur) ∧ F²₁₂(ul, ur) = ⟨F₁¹(u, ul) ∧ F₁²(u, ul), F₂¹(u, ur) ∧ F₂²(u, ur)⟩   (5.4.22)

Here, ∧ returns the minimum value in the intersection operation.

Theorem 5.4.3 For two T2NSs Ñ₁ and Ñ₂, their lowest T2NMs are attained on the support of Ñ₁ ∧ Ñ₂.

Proof It follows directly from Definition 5.4.13. □
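The complement, union, and intersection above reduce to element-wise 1 − x, maximum, and minimum over the type-2 membership values. The following minimal sketch illustrates this; the representation of a T2NS as a dict of (left, right) membership pairs, and the values in `n1` and `n2`, are hypothetical and not from the book.

```python
# Hypothetical sketch of the T2NS operations in Definitions 5.4.11-5.4.13.
# A T2NS is modeled as a dict mapping membership kind ('T', 'I', 'F') to a
# (left, right) pair of type-2 membership values in [0, 1].

def t2ns_complement(n):
    """Complement (Eq. 5.4.14): 1 minus each membership value."""
    return {k: (1 - l, 1 - r) for k, (l, r) in n.items()}

def t2ns_union(n1, n2):
    """Union (Eq. 5.4.15): element-wise maximum of memberships."""
    return {k: (max(n1[k][0], n2[k][0]), max(n1[k][1], n2[k][1])) for k in n1}

def t2ns_intersection(n1, n2):
    """Intersection (Eq. 5.4.19): element-wise minimum of memberships."""
    return {k: (min(n1[k][0], n2[k][0]), min(n1[k][1], n2[k][1])) for k in n1}

n1 = {'T': (0.57, 0.82), 'I': (0.71, 0.84), 'F': (0.43, 0.18)}
n2 = {'T': (0.60, 0.75), 'I': (0.65, 0.80), 'F': (0.40, 0.25)}

print(t2ns_union(n1, n2)['T'])          # (0.6, 0.82)
print(t2ns_intersection(n1, n2)['T'])   # (0.57, 0.75)
print(tuple(round(v, 2) for v in t2ns_complement(n1)['F']))  # (0.57, 0.82)
```

The union returning the larger membership pair and the intersection the smaller one is exactly the content of Theorems 5.4.2 and 5.4.3.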

Definition 5.4.14 (Algebraic Operations on T2NS) For two T2NSs Ñ₁ and Ñ₂, the algebraic operations are defined by:

Ñ₁ ✶ Ñ₂ = ⋁_{ul, ur ∈ Su} ⟨T¹₁₂(ul, ur) ✶ T²₁₂(ul, ur), I¹₁₂(ul, ur) ✶ I²₁₂(ul, ur), F¹₁₂(ul, ur) ✶ F²₁₂(ul, ur)⟩   (5.4.23)

Here, Ñ₁ = ⟨T¹₁₂(ul, ur), I¹₁₂(ul, ur), F¹₁₂(ul, ur)⟩ and Ñ₂ = ⟨T²₁₂(ul, ur), I²₁₂(ul, ur), F²₁₂(ul, ur)⟩, and the symbol "✶" denotes any one of the algebraic operations ×, ÷, +, −.

Definition 5.4.15 (Transitivity on T2NS) The transitivity property is defined for three T2NSs Ñ₁, Ñ₂ and Ñ₃ by:

Ñ₁ < Ñ₃ if Ñ₁ < Ñ₂ and Ñ₂ < Ñ₃   (5.4.24)


Here, Ñ₁ = ⟨T¹₁₂(ul, ur), I¹₁₂(ul, ur), F¹₁₂(ul, ur)⟩, Ñ₂ = ⟨T²₁₂(ul, ur), I²₁₂(ul, ur), F²₁₂(ul, ur)⟩, and Ñ₃ = ⟨T³₁₂(ul, ur), I³₁₂(ul, ur), F³₁₂(ul, ur)⟩.

Definition 5.4.16 (Anti-symmetry on T2NS) The anti-symmetry property is defined for two T2NSs Ñ₁ and Ñ₂ by:

Ñ₁ = Ñ₂ if Ñ₁ ≤ Ñ₂ and Ñ₂ ≤ Ñ₁   (5.4.25)

Definition 5.4.17 (Commutativity on T2NS) For two T2NSs Ñ₁ and Ñ₂, the commutative relation is:

Ñ₁ ∨ Ñ₂ = Ñ₂ ∨ Ñ₁   (5.4.26)

Definition 5.4.18 (Associativity on T2NS) For three T2NSs Ñ₁, Ñ₂ and Ñ₃, the associative relation is:

(Ñ₁ ∨ Ñ₂) ∨ Ñ₃ = Ñ₁ ∨ (Ñ₂ ∨ Ñ₃)   (5.4.27)

Definition 5.4.19 (Identity on T2NS) For a T2NS Ñ₁, the identity relations are:

Ñ₁ ∨ ∅ = Ñ₁ and Ñ₁ ∨ U = U   (5.4.28)

Definition 5.4.20 (Idempotence on T2NS) For a T2NS Ñ₁, the idempotent relations are:

Ñ₁ ∧ Ñ₁ = Ñ₁ and Ñ₁ ∨ Ñ₁ = Ñ₁   (5.4.29)

5.4.3 Uncertainty Measurement of T2NS

The inherent uncertainty of an event can be measured by entropy [15]. If such uncertainties are represented by a T2NS, then entropy can also be used to measure them quantitatively. In the following, we provide entropy measurement formulas for both T1NS and T2NS.

Definition 5.4.21 (T1NS Entropy (T1NSE) [16]) The T1NSE of an SVNS N is denoted as a measure EN(u), where EN(u) : N(u) → [0, 1], and is defined as:

EN(u) = 1 − (1/3)(T(u) + I(u) + F(u)) × E₁E₂E₃   (5.4.30)

Here, E₁ = |T(u) − Tᶜ(u)|, E₂ = |I(u) − Iᶜ(u)|, and E₃ = |F(u) − Fᶜ(u)|.
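A direct transcription of Eq. (5.4.30) is shown below. It assumes the complement memberships are Tᶜ(u) = 1 − T(u), Iᶜ(u) = 1 − I(u) and Fᶜ(u) = 1 − F(u), so each Eᵢ reduces to |2m − 1|; this is a hedged sketch, not the book's implementation.

```python
# Hedged sketch of the T1NSE of Eq. (5.4.30), assuming complements 1 - x.

def t1nse(T, I, F):
    """Type-1 neutrosophic entropy of a single element u."""
    e1, e2, e3 = abs(2 * T - 1), abs(2 * I - 1), abs(2 * F - 1)
    return 1 - (T + I + F) / 3 * (e1 * e2 * e3)

# Memberships at 0.5 make every Ei zero, so the entropy is maximal.
print(t1nse(0.5, 0.5, 0.5))            # 1.0
# A more decisive element yields lower entropy.
print(round(t1nse(0.9, 0.1, 0.1), 4))  # 0.8123
```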


More uncertainty can be represented by a T2NS than by a T1NS, so Eq. (5.4.30) needs to be modified for the T2NS. This modified equation is given in Eq. (5.4.31) under Definition 5.4.22.

Definition 5.4.22 (T2NS Entropy (T2NSE)) The T2NSE of a T2NS Ñ is denoted as a measure E_Ñ(u), where u ∈ U and ul, ur ∈ Su ⊆ [0, 1]. It is defined as:

E_Ñ(u) = 1 − (1/6)[(T₁(u, ul) + I₁(u, ul) + F₁(u, ul)) × (T₂(u, ur) + I₂(u, ur) + F₂(u, ur))] × (E₁E₂E₃ + E₄E₅E₆)   (5.4.31)

Here, E₁ = |T₁(u) − T₁ᶜ(u)|, E₂ = |I₁(u) − I₁ᶜ(u)|, E₃ = |F₁(u) − F₁ᶜ(u)|, E₄ = |T₂(u) − T₂ᶜ(u)|, E₅ = |I₂(u) − I₂ᶜ(u)|, and E₆ = |F₂(u) − F₂ᶜ(u)|.

Definition 5.4.23 (Normalized T2NSE Matrix) A normalized T2NSE matrix is formed by subtracting the mean of all T2NSE values from each individual T2NSE for N T2NSs. The normalized T2NSE matrix is given by:

N(E_Ñ(u)) = [ |E_Ñ₁₁(u) − Ē_Ñ(u)|   |E_Ñ₁₂(u) − Ē_Ñ(u)|   ···   |E_Ñ₁ₙ(u) − Ē_Ñ(u)| ;
              |E_Ñ₂₁(u) − Ē_Ñ(u)|   |E_Ñ₂₂(u) − Ē_Ñ(u)|   ···   |E_Ñ₂ₙ(u) − Ē_Ñ(u)| ;
              ⋮                      ⋮                     ⋱     ⋮ ;
              |E_Ñₘ₁(u) − Ē_Ñ(u)|   |E_Ñₘ₂(u) − Ē_Ñ(u)|   ···   |E_Ñₘₙ(u) − Ē_Ñ(u)| ]   (5.4.32)

In Eq. (5.4.32), each E_Ñᵢⱼ(u) denotes the T2NSE of the corresponding T2NS, where i = 1, 2, ..., m and j = 1, 2, ..., n, and Ē_Ñ(u) is the mean of all T2NSE values.

Definition 5.4.24 (T2NSE Covariance) For a set of N T2NSs, the T2NSE covariance is obtained by multiplying the normalized T2NSE matrix N(E_Ñ(u)) by its transpose N(E_Ñ(u))ᵀ. Mathematically:

N(E_Ñ(u))cov = N(E_Ñ(u)) × N(E_Ñ(u))ᵀ   (5.4.33)

Definition 5.4.25 (Distance Measurement for T2NSE Covariances) For two T2NSE covariances denoted as N(E1_Ñ(u))cov and N(E2_Ñ(u))cov, the corresponding distance can be obtained using measures such as the Euclidean, Manhattan and Minkowski distances [7]. Mathematically:

De = dist[N(E1_Ñ(u))cov, N(E2_Ñ(u))cov]   (5.4.34)


Here, De represents the distance between N(E1_Ñ(u))cov and N(E2_Ñ(u))cov. The Euclidean, Manhattan and Minkowski distances between N(E1_Ñ(u))cov and N(E2_Ñ(u))cov are defined in Eqs. (5.4.35)–(5.4.37), respectively:

De = √( Σ (N(E1_Ñ(u))cov − N(E2_Ñ(u))cov)² )   (5.4.35)

De = Σ | N(E1_Ñ(u))cov − N(E2_Ñ(u))cov |   (5.4.36)

De = ( Σ | N(E1_Ñ(u))cov − N(E2_Ñ(u))cov |^x )^(1/x)   (5.4.37)

In Eq. (5.4.37), x represents a real number such that x ≥ 1. In each case, the sum runs over the entries of the covariance matrices.
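The pipeline of Definitions 5.4.23–5.4.25 can be sketched as follows. The entropy matrices `E1` and `E2` are hypothetical stand-ins, and the three distances are used in their standard matrix forms.

```python
import numpy as np

# Sketch of Definitions 5.4.23-5.4.25 on hypothetical T2NSE values: build the
# normalized entropy matrix, its covariance, and compare two covariances with
# the Euclidean / Manhattan / Minkowski distances.

def normalized_entropy_matrix(E):
    """Eq. (5.4.32): absolute deviation of each T2NSE from the mean T2NSE."""
    E = np.asarray(E, dtype=float)
    return np.abs(E - E.mean())

def entropy_covariance(E):
    """Eq. (5.4.33): the normalized matrix times its transpose."""
    N = normalized_entropy_matrix(E)
    return N @ N.T

def distance(c1, c2, kind="euclidean", x=3):
    """Eqs. (5.4.35)-(5.4.37), sums running over matrix entries."""
    d = np.asarray(c1) - np.asarray(c2)
    if kind == "euclidean":
        return float(np.sqrt((d ** 2).sum()))
    if kind == "manhattan":
        return float(np.abs(d).sum())
    return float((np.abs(d) ** x).sum() ** (1.0 / x))   # Minkowski, x >= 1

E1 = [[0.91, 0.88], [0.84, 0.79]]   # hypothetical T2NSE matrices
E2 = [[0.90, 0.86], [0.83, 0.80]]
c1, c2 = entropy_covariance(E1), entropy_covariance(E2)
print(distance(c1, c1))   # 0.0
# Minkowski with x = 1 coincides with the Manhattan distance:
print(abs(distance(c1, c2, "minkowski", x=1)
          - distance(c1, c2, "manhattan")) < 1e-12)   # True
```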

5.5 The Proposed Image Segmentation Method

In this section, we explain the proposed image segmentation method (i.e., T2NSEIF) based on the concepts of T2NS, T2NSE and IF. Each step of the proposed T2NSEIF method is explained in the following subsections.

5.5.1 Gray Pixel Space of Input Image

An input image IX has gray-level pixels Pi (i = 1, 2, ..., n) that take their values in the gray-level range [0, G] with G = 255. That is, for IX, the 256 gray levels are the integers [0, ..., 255]. Therefore, the gray pixel space (GPS) [9] can be defined on the set of domains Pi forming a space in the plane U = [0, G]. The GPS is denoted as G(Pi) and is expressed by:

G(Pi) : Pi ⊂ R² → U ⊂ R   (5.5.1)

5.5.2 Histogram of the GPS

Determining the optimal threshold(s) is a search process on which the detection of the different significant regions, boundaries and textures in the image depends. Various thresholding methods have been proposed in the literature, and a detailed review is available in [13]. The key to effective segmentation is determining the optimal thresholds, and most methods are based on the histogram. A histogram records the frequencies of the different gray levels present in the pixels of IX. For each gray level, denoted GLg, its frequency is determined using the histogram function H(GLg):

H(GLg) = γg ; g ∈ U   (5.5.2)

Here, γg ⊆ G(Pi) denotes the number of pixels with gray level GLg.
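The GPS and its histogram amount to counting how often each of the 256 gray levels occurs. A minimal sketch, using a random 8-bit image as a stand-in for an actual MRI IX:

```python
import numpy as np

# Minimal sketch of Sects. 5.5.1-5.5.2: form the gray pixel space of an 8-bit
# image and its histogram H(GL_g), i.e., the count gamma_g of each gray level.
rng = np.random.default_rng(0)
I_X = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # stand-in image

G = 255
hist = np.bincount(I_X.ravel(), minlength=G + 1)  # gamma_g for g = 0..255

print(hist.shape)               # (256,)
print(hist.sum() == I_X.size)   # True: every pixel lies in exactly one bin
```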

5.5.3 Application of the T2NS

Each γg of IX is represented as a T2NS denoted Ñs. By following Eq. (5.4.3), Ñs is defined in Eq. (5.5.3):

Ñs = {⟨γg, T₁(γg, (γg)l), T₂(γg, (γg)r)⟩, ⟨γg, I₁(γg, (γg)l), I₂(γg, (γg)r)⟩, ⟨γg, F₁(γg, (γg)l), F₂(γg, (γg)r)⟩}   (5.5.3)

Here, γg ∈ U and ∀(γg)l, (γg)r ∈ Sγg ⊆ [0, 1]. In Eq. (5.5.3), the T2NMFs are determined by following Eqs. (5.4.6)–(5.4.11), respectively:

T₁(γg, (γg)l) = (T(γg) + 0.5) / 2   (5.5.4)

T₂(γg, (γg)r) = (T(γg) + 1) / 2   (5.5.5)

F₁(γg, (γg)l) = 1 − T₁(γg, (γg)l)   (5.5.6)

F₂(γg, (γg)r) = 1 − T₂(γg, (γg)r)   (5.5.7)

I₁(γg, (γg)l) = √( (T₁(γg, (γg)l))² + (F₁(γg, (γg)l))² )   (5.5.8)

I₂(γg, (γg)r) = √( (T₂(γg, (γg)r))² + (F₂(γg, (γg)r))² )   (5.5.9)

In Eq. (5.5.3), (γg)l and (γg)r denote the shifting of γg to the left and right, respectively. The following example illustrates the representation of a T2NS and its related T2NMs.

Example 5.5.1 This example is presented with reference to Example 5.5.2, where a T1NS N representation of Pk ∈ U is shown in Eq. (5.3.6). This N can now be represented as a T2NS Ñ, whose T2NMs are determined using Eqs. (5.4.6)–(5.4.11). For example, the type-1 truth-membership for Pk is T(Pk) = 0.64. Since T(Pk) > 0.5, Condition 1 (Definition 5.4.4) applies. The corresponding T₁(Pk, Pl) and T₂(Pk, Pr) follow from Eqs. (5.4.6) and (5.4.7), respectively:

T₁(Pk, Pl) = (0.64 + 0.5) / 2 = 0.57
T₂(Pk, Pr) = (0.64 + 1) / 2 = 0.82

Now, F₁(Pk, Pl) and F₂(Pk, Pr) are obtained using Eqs. (5.4.8) and (5.4.9), respectively:

F₁(Pk, Pl) = 1 − T₁(Pk, Pl) = 1 − 0.57 = 0.43
F₂(Pk, Pr) = 1 − T₂(Pk, Pr) = 1 − 0.82 = 0.18

Finally, I₁(Pk, Pl) and I₂(Pk, Pr) are determined using Eqs. (5.4.10) and (5.4.11), respectively:

I₁(Pk, Pl) = √( T₁(Pk, Pl)² + F₁(Pk, Pl)² ) = √( (0.57)² + (0.43)² ) = 0.71
I₂(Pk, Pr) = √( T₂(Pk, Pr)² + F₂(Pk, Pr)² ) = √( (0.82)² + (0.18)² ) = 0.84

Now, by following Eq. (5.4.3), the T2NS for Pk ∈ U can be expressed by including all T2NMs:

Ñ = {⟨Pk, T₁(Pk, Pl), T₂(Pk, Pr)⟩, ⟨Pk, I₁(Pk, Pl), I₂(Pk, Pr)⟩, ⟨Pk, F₁(Pk, Pl), F₂(Pk, Pr)⟩}
  = {⟨164, 0.57, 0.82⟩, ⟨164, 0.71, 0.84⟩, ⟨164, 0.43, 0.18⟩}   (5.5.10)

Here, Pk ∈ U and ∀Pl, Pr ∈ Su ⊆ [0, 1]. This T2NS representation shows that if Pk is shifted to the left (i.e., Pk → Pl) and right (i.e., Pk → Pr), the corresponding T2NMs are ⟨0.57, 0.71, 0.43⟩ and ⟨0.82, 0.84, 0.18⟩, respectively. The representation of T2NMs for Example 5.5.1 is shown in Fig. 5.6.
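The arithmetic of Example 5.5.1 can be checked in a few lines; the function below is a direct transcription of Eqs. (5.4.6)–(5.4.11) for the case T > 0.5.

```python
import math

# Reproduction of the arithmetic in Example 5.5.1: derive all six T2NMs from
# the type-1 truth-membership T(P_k) = 0.64 (the case T > 0.5).

def t2nms(T):
    t1 = (T + 0.5) / 2        # Eq. (5.4.6), left shift
    t2 = (T + 1.0) / 2        # Eq. (5.4.7), right shift
    f1, f2 = 1 - t1, 1 - t2   # Eqs. (5.4.8)-(5.4.9)
    i1 = math.hypot(t1, f1)   # Eq. (5.4.10): sqrt(T1^2 + F1^2)
    i2 = math.hypot(t2, f2)   # Eq. (5.4.11): sqrt(T2^2 + F2^2)
    return t1, t2, f1, f2, i1, i2

t1, t2, f1, f2, i1, i2 = t2nms(0.64)
print(round(t1, 2), round(t2, 2))   # 0.57 0.82
print(round(f1, 2), round(f2, 2))   # 0.43 0.18
print(round(i1, 2), round(i2, 2))   # 0.71 0.84
```

The printed values match the tuples ⟨0.57, 0.71, 0.43⟩ and ⟨0.82, 0.84, 0.18⟩ in Eq. (5.5.10).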


Fig. 5.6 Representation of T2NMs (for Example 5.5.1)

5.5.4 Computation of T2NSE for the T2NS

The total T2NSE for Ñs is determined as:

E_Ñs(γg) = Σ_{γg ∈ U} (1/6)[1 − {(T₁(γg, (γg)l) + I₁(γg, (γg)l) + F₁(γg, (γg)l)) × (T₂(γg, (γg)r) + I₂(γg, (γg)r) + F₂(γg, (γg)r))} × (E₁E₂E₃ + E₄E₅E₆)]   (5.5.11)

Here, E₁ = |T₁(γg) − T₁ᶜ(γg)|, E₂ = |I₁(γg) − I₁ᶜ(γg)|, E₃ = |F₁(γg) − F₁ᶜ(γg)|, E₄ = |T₂(γg) − T₂ᶜ(γg)|, E₅ = |I₂(γg) − I₂ᶜ(γg)|, and E₆ = |F₂(γg) − F₂ᶜ(γg)|. The E_Ñs(γg) is partitioned into two parts using the functions EA(Q) and EB(Q):

EA(Q) = Σ_{g=0}^{Q} (1/6)[1 − {(T₁(γg, (γg)l) + I₁(γg, (γg)l) + F₁(γg, (γg)l)) × (T₂(γg, (γg)r) + I₂(γg, (γg)r) + F₂(γg, (γg)r))} × (E₁E₂E₃ + E₄E₅E₆)]; Q ∈ [1, G − 2]   (5.5.12)

EB(Q) = Σ_{g=Q+1}^{G} (1/6)[1 − {(T₁(γg, (γg)l) + I₁(γg, (γg)l) + F₁(γg, (γg)l)) × (T₂(γg, (γg)r) + I₂(γg, (γg)r) + F₂(γg, (γg)r))} × (E₁E₂E₃ + E₄E₅E₆)]; Q ∈ [1, G − 2]   (5.5.13)


In Eq. (5.5.12), EA(Q) computes the total T2NSE of γg for g ∈ [0, Q]. Similarly, in Eq. (5.5.13), EB(Q) computes the total T2NSE of γg for g ∈ [Q + 1, G].

5.5.5 Determination of Thresholds

For images with a normal distribution of gray levels, a single global threshold is appropriate for segmentation. Such a threshold easily splits the image into a number of non-overlapping regions. In MRIs, however, wide variations in gray levels are observed due to inhomogeneous regions. Global threshold-based segmentation may therefore be inappropriate, as it sometimes fails to separate the regions of interest from the background. This study therefore proposes the selection of multiple thresholds based on different gray levels. Under this approach, different gray levels are selected from the GPS as multiple adaptive thresholds Tδ (δ = 1, 2, ..., n), defined as:

Tδ = argmaxδ (EA(Q) + EB(Q)); Q ∈ [1, G − 2]   (5.5.14)

Here, Tδ represents the δth threshold value and argmaxδ is an argument that returns the δth maximum total T2NSE.

Input: Image IX with gray-level pixels Pi (i = 1, 2, ..., n) at pixel positions i with G gray levels.
for all Pi do
    Represent IX as G(Pi) (Eq. (5.5.1)).
    Determine the frequency of each gray level GLg using H(GLg) (Eq. (5.5.2)), where γg ⊆ G(Pi) denotes the number of pixels with gray level GLg.
    Represent each γg as a T2NS denoted Ñs (Eq. (5.5.3)).
    Get the total T2NSE for Ñs as E_Ñs(γg) (Eq. (5.5.11)).
    Partition E_Ñs(γg) into two parts using the functions EA(Q) and EB(Q) (Eqs. (5.5.12) and (5.5.13), respectively).
    Determine the multiple adaptive thresholds Tδ (δ = 1, 2, ..., n) (Eq. (5.5.14)).
    Transform IX into IY = {S1, S2, ..., Sn} (Eq. (5.5.15)).
    Generate the IF by performing a fusion operation on IY (Eq. (5.5.16)).
end for
Output: Fusion image IF.

Algorithm 4: PROCEDURE T2NSEIF().
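The entropy partition of Eqs. (5.5.11)–(5.5.13) can be sketched as follows. One labeled assumption: the type-1 truth-membership of gray level g is taken here as g / G for illustration only; the book derives it from its own T1NS construction.

```python
import numpy as np

# Hedged sketch of Eqs. (5.5.11)-(5.5.13): compute a per-gray-level T2NSE and
# split it at a candidate threshold Q into E_A(Q) (levels 0..Q) and E_B(Q)
# (levels Q+1..G). By construction their sum is the total T2NSE for every Q.
# Assumption (not from the book): T(gamma_g) = g / G.

G = 255
g = np.arange(G + 1) / G                     # assumed type-1 truth T(gamma_g)
t1, t2 = (g + 0.5) / 2, (g + 1.0) / 2        # Eqs. (5.5.4)-(5.5.5)
f1, f2 = 1 - t1, 1 - t2                      # Eqs. (5.5.6)-(5.5.7)
i1, i2 = np.hypot(t1, f1), np.hypot(t2, f2)  # Eqs. (5.5.8)-(5.5.9)

e = lambda m: np.abs(2 * m - 1)              # |membership - its complement|
ent = 1 - (t1 + i1 + f1) * (t2 + i2 + f2) * (
    e(t1) * e(i1) * e(f1) + e(t2) * e(i2) * e(f2)) / 6  # per-level T2NSE

def E_A(Q):      # Eq. (5.5.12): entropy mass of gray levels 0..Q
    return ent[: Q + 1].sum()

def E_B(Q):      # Eq. (5.5.13): entropy mass of gray levels Q+1..G
    return ent[Q + 1 :].sum()

for Q in (1, 120, G - 2):
    assert np.isclose(E_A(Q) + E_B(Q), ent.sum())
```

The passing assertions confirm the partition property stated after Eq. (5.5.13): EA(Q) and EB(Q) together always account for the total T2NSE.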


5.5.6 Segmentation of Image

Based on multiple thresholds, an image can be segmented into foreground and background. This approach generates multiple segmented images with different features. The main goal of generating multiple segmented images is to avoid misclassifying or neglecting significant objects surrounding the regions of interest. Based on this approach, IX is converted into a set of multiple segmented images IY = {S1, S2, ..., Sn} with the different adaptive thresholds Tδ:

S1, S2, ..., Sn = { Cj = 1 for FGj ≥ Tδ ; Cj = 0 for BGj < Tδ }   (5.5.15)

Here, Cj = 1 and Cj = 0 denote segmented image pixels belonging to the foreground (FGj) and background (BGj) classes, respectively.

5.5.7 Fusion of Segmented Images

In image fusion, one or more images with different features obtained according to different criteria are combined into a single image [4]. In this approach, the different segmented images of IY obtained with different thresholds are combined to produce a single resulting image by performing the following fusion operation:

IF = ⋃_{i=1}^{n} Si   (5.5.16)

Here, IF denotes the desired resultant image and is called the IF. An algorithmic representation of the proposed T2NSEIF method is summarized in Algorithm 4, and Fig. 5.7 shows a flowchart that describes its working process. An example of the proposed T2NSEIF method for segmenting brain tumor tissue structures in an MRI is provided in Example 5.5.2.

Fig. 5.7 Flowchart of the proposed T2NSEIF method

Example 5.5.2 The main goal of the T2NSEIF method is to segment brain tumor tissue structures and to detect the location of the segmented tumor structures in MRIs by distinguishing foreground and background classes of pixels. An MRI of brain tumor tissue consists of predominantly white regions (their gray values are represented by the truth-membership), combinations of white and non-white regions (represented by the indeterminacy-membership) and non-white regions (represented by the falsity-membership). The results of the main steps for segmentation of brain tumor tissue in an MRI are shown in Fig. 5.8. In this figure, an original MRI of a brain tumor and the corresponding GT image of the tumor tissue are shown in Fig. 5.8a and b, respectively [2]. The histogram of the original image (Fig. 5.8a) is shown in Fig. 5.8c. Each γg of the histogram is transformed into a T2NS denoted Ñs. The total T2NSE is obtained for Ñs and denoted E_Ñs(γg). The E_Ñs(γg) is partitioned into two parts, EA(Q) and EB(Q), and multiple adaptive thresholds Tδ (δ = 1, 2, ..., n) are then chosen. The segmented images with five different thresholds, namely T1 = 0.2314, T2 = 0.2275, T3 = 0.2235, T4 = 0.2196 and T5 = 0.2157, are shown in Fig. 5.8d–h, respectively. By performing the fusion operation on the segmented images (Fig. 5.8d–h), a fusion image is created, shown in Fig. 5.8i. This fusion image suggests that the proposed T2NSEIF method is able to accurately segment the tumor tissue in the MRI. Finally, a skull-removal operation is performed on the fusion image (Fig. 5.8i) using the image morphological method [18]. The purpose of skull removal is to clearly visualize the size and structure of the tumor tissue. This skull-removal image is shown in Fig. 5.8j. The color effects on the fusion image (Fig. 5.8i) and on the skull-removal image (Fig. 5.8j) are shown in Fig. 5.8k and l, respectively. The color effect in Fig. 5.8k provides a better representation of the location of the tumor tissue in the skull, while the color effect in Fig. 5.8l provides a better visual representation of the size and structure of the tumor tissue. The normalized histogram for the skull-removal image (Fig. 5.8j) is shown in Fig. 5.8m, which shows correct segmentation of the foreground and background classes of pixels.

Fig. 5.8 Example of segmenting the brain tumor tissue structures and its location recognition based on the T2NSEIF method: (a) original MRI, (b) GT image of tumor tissue, (c) histogram of (a), (d) 1st level of segmentation with threshold T1 = 0.2314, (e) 2nd level of segmentation with threshold T2 = 0.2275, (f) 3rd level of segmentation with threshold T3 = 0.2235, (g) 4th level of segmentation with threshold T4 = 0.2196, (h) 5th level of segmentation with threshold T5 = 0.2157, (i) fusion image based on aggregation of segmented images (d)–(h), (j) skull-removal image from the fusion image (i), (k) color effect on fusion image (i), (l) color effect on skull-removal image (j), and (m) histogram of (j)
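The thresholding and fusion steps of Sects. 5.5.6–5.5.7 can be sketched as follows. The thresholds are the five values reported in Example 5.5.2; the random image is only a stand-in for a normalized MRI.

```python
import numpy as np

# Sketch of Eqs. (5.5.15)-(5.5.16): threshold an image at several adaptive
# thresholds and fuse the binary results by pixel-wise union.

rng = np.random.default_rng(1)
I_X = rng.random((64, 64))          # stand-in for a normalized MRI

thresholds = [0.2314, 0.2275, 0.2235, 0.2196, 0.2157]   # T^1 .. T^5
segments = [(I_X >= t).astype(np.uint8) for t in thresholds]  # Eq. (5.5.15)

I_F = np.bitwise_or.reduce(segments)   # Eq. (5.5.16): union of all S_i
print(I_F.shape == I_X.shape)          # True
# With descending thresholds the union equals the most permissive mask:
print(np.array_equal(I_F, segments[-1]))   # True
```

The second check makes the role of fusion explicit: each lower threshold can only add foreground pixels, so the union keeps every structure detected at any level.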

5.6 Experimental Results

In this section, we present the dataset description, performance evaluation metrics, visual analysis, statistical analysis, and computational complexity analysis related to brain tumor tissue structure segmentation and location detection in MRIs. These discussions and results are based on the proposed T2NSEIF method.

5.6.1 Dataset Description

The proposed T2NSEIF is evaluated with different types of MRIs of brain tumors selected from a set of 3064 T1-weighted contrast-enhanced images [2]. This dataset contains MRIs of three different brain tumors, namely meningioma, glioma and pituitary. For experimental purposes and simpler presentation of results, three sets of MRIs are prepared: (a) Set I (#1–#10), (b) Set II (#11–#20), and (c) Set III (#21–#30). Each set contains 10 separate brain tumor MRIs along with the corresponding GT images of the tumor tissue structures. In each GT image, the brain tumor tissue is shown in white in the foreground.

5.6.2 Performance Evaluation Metrics

The proposed T2NSEIF is evaluated in terms of its agreement with the GT images. For this purpose, three well-known statistical metrics, namely JSC, CC and UM, are used.

• JSC: The JSC indicates the degree of similarity between the segmented tumor tissue structures in the fusion image (i.e., IF) and the corresponding GT image (i.e., IT) of the tumor tissue structures. It is defined as the intersection of the pixel sets divided by the union of the pixel sets of IF and IT. The JSC value lies between 0 and 100%; a value near 100% indicates that IF has a perfect similarity to the corresponding IT. Mathematically, this can be expressed as:

J_{IF,IT}(θ) = ( W_{IF∩IT}(θ) / W_{IF∪IT}(θ) ) × 100%   (5.6.1)

In Eq. (5.6.1), W_{IF∩IT}(θ) and W_{IF∪IT}(θ) denote the intersection and union of the pixel sets of class θ for IF and IT, respectively.

• CC: The CC indicates the similarity between the segmented tumor tissue structures in the fusion image IF and the corresponding GT image IT. The range of CC is [−1, 1]; a value close to 1 indicates perfect similarity of the segmented regions of IF with the respective IT. Mathematically, it can be expressed as:

R = Σ_{m=1}^{M} Σ_{n=1}^{N} (IF − ĪF)(IT − ĪT) / √( Σ_{m=1}^{M} Σ_{n=1}^{N} (IF − ĪF)² × Σ_{m=1}^{M} Σ_{n=1}^{N} (IT − ĪT)² )   (5.6.2)

Here, R denotes the CC value. In Eq. (5.6.2), ĪF and ĪT represent the means of IF and IT, respectively, and M × N represents the pixel size of the image.

• UM: The UM evaluates the quality of the IF obtained by the multiple adaptive thresholds Tδ (δ = 1, 2, ..., n). The UM value lies between 0 and 1; a value close to 1 means a better-quality IF. Mathematically, it can be expressed as:

U = 1 − 2δ × Σ_{δ=1}^{n} (IF − ĪF)² / ( Σ_{m=1}^{M} Σ_{n=1}^{N} (max(IF) − min(IF)) )   (5.6.3)

Here, U indicates the UM value, ĪF represents the mean value of IF, "min" and "max" represent the minimum and maximum functions, respectively, and M × N represents the pixel size of the image.
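The JSC and CC metrics can be transcribed directly from Eqs. (5.6.1) and (5.6.2). The sketch below applies them to small synthetic binary masks standing in for IF and IT; the 8×8 masks are illustrative only.

```python
import numpy as np

# Sketch of the JSC (Eq. 5.6.1) and CC (Eq. 5.6.2) metrics for binary masks.

def jsc(i_f, i_t):
    """Jaccard similarity: |intersection| / |union| * 100%."""
    inter = np.logical_and(i_f, i_t).sum()
    union = np.logical_or(i_f, i_t).sum()
    return 100.0 * inter / union

def cc(i_f, i_t):
    """Correlation coefficient between the two images."""
    a = i_f - i_f.mean()
    b = i_t - i_t.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

i_t = np.zeros((8, 8)); i_t[2:6, 2:6] = 1   # synthetic GT: 16 pixels
i_f = np.zeros((8, 8)); i_f[2:6, 2:5] = 1   # synthetic result: 12 pixels

print(round(jsc(i_f, i_t), 2))   # 75.0  (12 / 16)
print(round(cc(i_f, i_t), 4))    # 0.8321
```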

5.6.3 Visual Analysis

Figures 5.9, 5.10, and 5.11a and b show MRIs of three different types of brain tumors (namely, meningioma, pituitary tumor and glioma) along with the respective GT images of the tumor tissue. Figures 5.9, 5.10, and 5.11c show histograms of the MRIs reflecting the different gray intensities in the images. The proposed T2NSEIF method is applied to these images to determine whether the tissue structures of these three tumor types can be segmented properly. The fusion images obtained by the proposed T2NSEIF method are shown in Figs. 5.9, 5.10, and 5.11d. The skull-removal operation is performed on the fusion images (Figs. 5.9, 5.10, and 5.11d), and its results are shown in Figs. 5.9, 5.10, and 5.11e. Finally, the fusion images and skull-removal images with color effect are shown in Figs. 5.9, 5.10, and 5.11f and g, respectively. Figures 5.9, 5.10, and 5.11f show that the proposed T2NSEIF method is able to effectively detect the location of brain tumor tissue, while Figs. 5.9, 5.10, and 5.11g show the structures of the tumor tissue. The histograms (Figs. 5.9, 5.10, and 5.11h) also show that the foreground and background pixel classes are clearly segmented.

Fig. 5.9 The meningioma tumor tissue structure segmentation and its location recognition based on the T2NSEIF method: (a) original MRI, (b) GT image of tumor tissue, (c) histogram of (a), (d) fusion image, (e) skull-removal image from the fusion image (d), (f) color effect on fusion image (d), (g) color effect on skull-removal image (e), and (h) histogram of (e)

Fig. 5.10 The pituitary tumor tissue structure segmentation and its location recognition based on the T2NSEIF method: (a) original MRI, (b) GT image of tumor tissue, (c) histogram of (a), (d) fusion image, (e) skull-removal image from the fusion image (d), (f) color effect on fusion image (d), (g) color effect on skull-removal image (e), and (h) histogram of (e)

Fig. 5.11 The glioma tumor tissue structure segmentation and its location recognition based on the T2NSEIF method: (a) original MRI, (b) GT image of tumor tissue, (c) histogram of (a), (d) fusion image, (e) skull-removal image from the fusion image (d), (f) color effect on fusion image (d), (g) color effect on skull-removal image (e), and (h) histogram of (e)


5.6.4 Multiple Adaptive Thresholds Selection

The respective multiple adaptive thresholds for brain tumor tissue segmentation are determined for Set I, Set II and Set III. With the proposed T2NSEIF method, five thresholds for tumor tissue structure segmentation are selected from each MRI. These five thresholds generate five segmented images with different features of the tumor tissue (see Example 5.5.2). By performing the fusion operation on these segmented images, a fusion image containing the final structures of the brain tumor tissue is produced. The multiple adaptive thresholds for Set I, Set II and Set III are shown in Figs. 5.12, 5.13, and 5.14, respectively.

5.6.5 Statistical Analysis

In this subsection, the advantages of the T2NSEIF method in terms of tumor tissue structure segmentation are presented. The whole simulation is performed in the Matlab R2016a environment on a 64-bit Microsoft Windows 10 system with an Intel Core i7 processor (3.20 GHz) and 8 GB of main memory. The performance of the proposed method is compared with five well-known approaches to MRI segmentation: FCM [23], MFCM [11], FKMCA [8], KIFECM [10] and NEATSA [14]. The MRIs of Set I, Set II and Set III and the respective GT images of tumor tissue are shown column by column in Figs. 5.15, 5.16, and 5.17a and b, respectively. Figures 5.15, 5.16, and 5.17c show the fusion images obtained by the proposed T2NSEIF method. These fusion images are obtained by aggregating the segmented images derived using the multiple adaptive thresholds shown in Figs. 5.12, 5.13, and 5.14. For better visualization of the tissue structures of the brain tumor, the fusion images of Figs. 5.15, 5.16, and 5.17c are shown with color effect in Figs. 5.15, 5.16, and 5.17d. The segmented images obtained using the existing methods FCM, MFCM, FKMCA, KIFECM and NEATSA are shown column by column in Figs. 5.15, 5.16, and 5.17e–i, respectively. From these segmented images, it can easily be seen that the tissue structures of the brain tumor are not sufficiently segmented by the existing methods [8, 10, 11, 14, 23], which cannot handle these images due to the presence of inconsistent and vague boundaries. By comparing the segmented images with the fusion images, it is clear that the proposed T2NSEIF method correctly segments the tissue structures of the brain tumor: the fusion images accurately highlight the tumor tissue and its structures in the MRIs, and they also show that the proposed T2NSEIF method detects the location of the tumor tissue more effectively than the existing methods.

For the statistical analyses, the JSC, CC and UM values are determined for the segmented tumor tissue structures. These statistical analyses are performed based


Fig. 5.12 Multiple adaptive thresholds for the brain tumor tissue structures segmentation of Set I (#1–#10) are shown in (a)–(j), respectively


Fig. 5.13 Multiple adaptive thresholds for the brain tumor tissue structures segmentation of Set II (#11–#20) are shown in (a)–(j), respectively


Fig. 5.14 Multiple adaptive thresholds for the brain tumor tissue structures segmentation of Set III (#21–#30) are shown in (a)–(j), respectively


Fig. 5.15 Brain tumor tissue structures segmentation of Set I (#1–#10): (a) original MRIs, (b) GT images of tumor tissue, (c) T2NSEIF (with skull-removal operation), (d) color effect on (c), (e) FCM with cluster numbers 4, (f) MFCM, (g) FKMCA with cluster numbers 4, (h) KIFECM, and (i) NEATSA

5.6 Experimental Results (a)

(b)

113 (c)

(d)

(e)

(f)

(g)

(h)

(i)

#11

#12

#13

#14

#15

#16

#17

#18

#19

#20

Fig. 5.16 Brain tumor tissue structures segmentation of Set II (#11–#20): (a) original MRIs, (b) GT images of tumor tissue, (c) T2NSEIF (with skull-removal operation), (d) color effect on (c), (e) FCM with cluster numbers 4, (f) MFCM, (g) FKMCA with cluster numbers 4, (h) KIFECM, and (i) NEATSA


Fig. 5.17 Brain tumor tissue structures segmentation of Set III (#21–#30): (a) original MRIs, (b) GT images of tumor tissue, (c) T2NSEIF (with skull-removal operation), (d) color effect on (c), (e) FCM with cluster numbers 4, (f) MFCM, (g) FKMCA with cluster numbers 4, (h) KIFECM, and (i) NEATSA


on the GT images of the tumor tissue structures. The values of JSC, CC and UM are shown in Tables 5.1, 5.2, and 5.3. The first column of Tables 5.1, 5.2, and 5.3 lists the different methods, including the proposed T2NSEIF method. The second column shows the three statistical metrics, namely JSC, CC and UM. The third to twelfth columns report the JSC, CC and UM values of the segmented images for each of the ten MRIs in the respective set, for the FCM, MFCM, FKMCA, KIFECM, NEATSA and proposed T2NSEIF methods. The average values of JSC, CC and UM are presented in the last column of Tables 5.1, 5.2, and 5.3 to facilitate comparison. For Set I, Set II and Set III, the average JSC values obtained by the proposed T2NSEIF method are 97.07, 97.92 and 97.13%, respectively, which are much higher than those of the existing FCM, MFCM, FKMCA, KIFECM and NEATSA methods. For Set I, Set II and Set III, the average CC values are 0.9638, 0.9698 and 0.9610, respectively, again much higher than those of the existing methods. The average UM values for Set I, Set II and Set III are 0.9624, 0.9633 and 0.9660, respectively, which are also much better than those of the existing methods. These statistical analyses show that the proposed T2NSEIF method outperforms the existing methods in segmenting brain tumor tissue structures in MRIs.

5.6.6 Computational Complexity Analysis

The computational complexity of the proposed T2NSEIF method is evaluated in terms of time and space.

1. Time Complexity: To read an MRI and perform the segmentation operation, the T2NSEIF requires O(M × N) time, where M × N represents the pixel size of the image.
2. Space Complexity: The space complexity of the proposed T2NSEIF method is the maximum amount of space required at any point during the segmentation of an MRI. Therefore, the proposed T2NSEIF method has O(M × N) total space complexity.

To compare the running times, the average CPU time (in milliseconds) for segmenting the tissue structures of brain tumors in the MRIs of Set I, Set II and Set III is calculated for the existing methods (i.e., FCM, MFCM, FKMCA, KIFECM and NEATSA) and the proposed T2NSEIF method. These results are presented in Table 5.4, where the proposed T2NSEIF method shows average CPU times of 150.05, 155.15 and 152.11 milliseconds for Set I, Set II and Set III, respectively. In contrast, the existing FCM, MFCM, FKMCA, KIFECM and NEATSA methods require much more CPU time than the proposed T2NSEIF method. This shows that the proposed T2NSEIF method is computationally efficient for segmenting the tissue structures of brain tumors in MRIs.

Table 5.1 Comparison of the proposed with the existing methods for the brain tumor tissue structures segmentation of Set I (#1–#10) in terms of JSC, CC and UM

| Method | Metric | #1 | #2 | #3 | #4 | #5 | #6 | #7 | #8 | #9 | #10 | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FCM | JSC (%) | 41.20 | 42.31 | 41.11 | 41.34 | 43.40 | 43.21 | 46.74 | 45.61 | 43.45 | 42.10 | 43.05 |
| | CC | 0.4010 | 0.4120 | 0.4065 | 0.4010 | 0.4221 | 0.4234 | 0.4535 | 0.4423 | 0.4231 | 0.4132 | 0.4198 |
| | UM | 0.4120 | 0.4230 | 0.4175 | 0.4120 | 0.4331 | 0.4344 | 0.4645 | 0.4533 | 0.4341 | 0.4242 | 0.4308 |
| MFCM | JSC (%) | 51.56 | 52.31 | 51.15 | 51.32 | 53.41 | 55.21 | 56.74 | 55.64 | 53.46 | 51.01 | 53.18 |
| | CC | 0.5020 | 0.5130 | 0.5075 | 0.5014 | 0.5222 | 0.5464 | 0.5565 | 0.5621 | 0.5235 | 0.5042 | 0.5239 |
| | UM | 0.5230 | 0.5340 | 0.5285 | 0.5230 | 0.5441 | 0.5465 | 0.5765 | 0.5643 | 0.5461 | 0.5362 | 0.5422 |
| FKMCA | JSC (%) | 76.45 | 78.45 | 79.56 | 73.25 | 78.42 | 79.42 | 76.41 | 79.67 | 77.48 | 77.45 | 77.66 |
| | CC | 0.7512 | 0.7732 | 0.7867 | 0.7211 | 0.7723 | 0.7867 | 0.7534 | 0.7870 | 0.7634 | 0.7632 | 0.7658 |
| | UM | 0.7231 | 0.7341 | 0.7287 | 0.7231 | 0.7442 | 0.7467 | 0.7767 | 0.7643 | 0.7461 | 0.7363 | 0.7423 |
| KIFECM | JSC (%) | 79.33 | 79.87 | 77.99 | 78.32 | 77.67 | 77.59 | 76.34 | 77.89 | 78.97 | 78.98 | 78.30 |
| | CC | 0.7823 | 0.7845 | 0.7667 | 0.7732 | 0.7612 | 0.7656 | 0.7567 | 0.7634 | 0.7746 | 0.7755 | 0.7704 |
| | UM | 0.7341 | 0.7451 | 0.7397 | 0.7341 | 0.7552 | 0.7587 | 0.7877 | 0.7753 | 0.7571 | 0.7473 | 0.7534 |
| NEATSA | JSC (%) | 89.87 | 88.97 | 87.97 | 88.87 | 88.93 | 88.92 | 88.32 | 88.67 | 88.45 | 88.17 | 88.71 |
| | CC | 0.8867 | 0.8788 | 0.8690 | 0.8770 | 0.8793 | 0.8745 | 0.8767 | 0.8745 | 0.8789 | 0.8745 | 0.8770 |
| | UM | 0.8493 | 0.8643 | 0.8480 | 0.8435 | 0.8844 | 0.8954 | 0.8958 | 0.8945 | 0.8494 | 0.8853 | 0.8710 |
| T2NSEIF | JSC (%) | 96.23 | 95.32 | 98.23 | 98.99 | 94.01 | 96.23 | 96.50 | 96.99 | 98.97 | 99.23 | 97.07 |
| | CC | 0.9590 | 0.9380 | 0.9888 | 0.9810 | 0.9367 | 0.9510 | 0.9540 | 0.9589 | 0.9899 | 0.9810 | 0.9638 |
| | UM | 0.9442 | 0.9552 | 0.9499 | 0.9442 | 0.9653 | 0.9699 | 0.9799 | 0.9864 | 0.9692 | 0.9594 | 0.9624 |


Table 5.2 Comparison of the proposed with the existing methods for the brain tumor tissue structures segmentation of Set II (#11–#20) in terms of JSC, CC and UM

| Method | Metric | #11 | #12 | #13 | #14 | #15 | #16 | #17 | #18 | #19 | #20 | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FCM | JSC (%) | 51.20 | 43.31 | 41.21 | 41.64 | 56.43 | 47.29 | 59.74 | 59.91 | 53.99 | 45.67 | 50.04 |
| | CC | 0.4090 | 0.4221 | 0.4068 | 0.4012 | 0.5521 | 0.4631 | 0.5835 | 0.5823 | 0.5281 | 0.4431 | 0.4791 |
| | UM | 0.4190 | 0.4321 | 0.4168 | 0.4112 | 0.5621 | 0.4731 | 0.5935 | 0.5923 | 0.5381 | 0.4531 | 0.4891 |
| MFCM | JSC (%) | 61.56 | 56.21 | 56.65 | 59.82 | 63.49 | 65.21 | 65.44 | 64.64 | 62.46 | 52.21 | 60.77 |
| | CC | 0.6020 | 0.5530 | 0.5575 | 0.5814 | 0.6222 | 0.6464 | 0.6565 | 0.6321 | 0.6135 | 0.5142 | 0.5979 |
| | UM | 0.5391 | 0.5542 | 0.5379 | 0.5333 | 0.6742 | 0.5852 | 0.6956 | 0.6944 | 0.6492 | 0.5652 | 0.6028 |
| FKMCA | JSC (%) | 76.11 | 62.39 | 67.76 | 66.76 | 76.59 | 75.23 | 67.74 | 76.69 | 73.41 | 71.01 | 71.37 |
| | CC | 0.7531 | 0.6132 | 0.6661 | 0.6511 | 0.7528 | 0.7465 | 0.6633 | 0.7571 | 0.7234 | 0.7038 | 0.7030 |
| | UM | 0.6391 | 0.6542 | 0.6379 | 0.6323 | 0.7742 | 0.6852 | 0.7956 | 0.7944 | 0.7492 | 0.6652 | 0.7027 |
| KIFECM | JSC (%) | 79.45 | 79.49 | 79.59 | 78.25 | 78.42 | 79.82 | 76.91 | 78.69 | 79.48 | 77.95 | 78.80 |
| | CC | 0.7853 | 0.7885 | 0.7867 | 0.7736 | 0.7712 | 0.7856 | 0.7568 | 0.7734 | 0.7857 | 0.7715 | 0.7778 |
| | UM | 0.7493 | 0.7643 | 0.7480 | 0.7435 | 0.7844 | 0.7954 | 0.7957 | 0.7945 | 0.7494 | 0.7753 | 0.7700 |
| NEATSA | JSC (%) | 84.45 | 82.25 | 85.65 | 85.65 | 87.55 | 86.75 | 86.75 | 86.75 | 89.45 | 83.65 | 85.89 |
| | CC | 0.8367 | 0.8188 | 0.8490 | 0.8470 | 0.8693 | 0.8545 | 0.8567 | 0.8545 | 0.8889 | 0.8245 | 0.8500 |
| | UM | 0.8342 | 0.8452 | 0.8399 | 0.8342 | 0.8553 | 0.8588 | 0.8888 | 0.8854 | 0.8582 | 0.8484 | 0.8548 |
| T2NSEIF | JSC (%) | 98.23 | 98.23 | 98.68 | 97.63 | 97.89 | 98.31 | 97.89 | 99.32 | 95.56 | 97.43 | 97.92 |
| | CC | 0.9790 | 0.9780 | 0.9788 | 0.9610 | 0.9667 | 0.9710 | 0.9640 | 0.9889 | 0.9499 | 0.9610 | 0.9698 |
| | UM | 0.9442 | 0.9552 | 0.9499 | 0.9542 | 0.9653 | 0.9699 | 0.9899 | 0.9854 | 0.9592 | 0.9594 | 0.9633 |


Table 5.3 Comparison of the proposed with the existing methods for the brain tumor tissue structures segmentation of Set III (#21–#30) in terms of JSC, CC and UM

| Method | Metric | #21 | #22 | #23 | #24 | #25 | #26 | #27 | #28 | #29 | #30 | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FCM | JSC (%) | 43.20 | 52.31 | 52.11 | 53.34 | 43.41 | 43.24 | 46.84 | 65.61 | 59.95 | 56.00 | 51.60 |
| | CC | 0.4290 | 0.5121 | 0.5168 | 0.5212 | 0.4521 | 0.4231 | 0.4535 | 0.6423 | 0.5881 | 0.5531 | 0.5091 |
| | UM | 0.4390 | 0.5221 | 0.5268 | 0.5312 | 0.4621 | 0.4331 | 0.4635 | 0.6523 | 0.5982 | 0.5631 | 0.5191 |
| MFCM | JSC (%) | 56.56 | 57.31 | 56.65 | 56.72 | 63.41 | 63.61 | 66.74 | 66.64 | 67.46 | 54.01 | 60.91 |
| | CC | 0.5520 | 0.5630 | 0.5565 | 0.5513 | 0.6252 | 0.6264 | 0.6525 | 0.6521 | 0.6735 | 0.5342 | 0.5987 |
| | UM | 0.5890 | 0.5821 | 0.5868 | 0.5852 | 0.5721 | 0.5831 | 0.5835 | 0.6623 | 0.5981 | 0.5751 | 0.5917 |
| FKMCA | JSC (%) | 69.81 | 66.52 | 61.96 | 69.36 | 63.79 | 68.29 | 66.99 | 69.97 | 69.99 | 61.93 | 66.86 |
| | CC | 0.6831 | 0.6532 | 0.6061 | 0.7011 | 0.6228 | 0.6765 | 0.6533 | 0.6871 | 0.6834 | 0.6038 | 0.6570 |
| | UM | 0.6890 | 0.6821 | 0.6868 | 0.6862 | 0.6721 | 0.6831 | 0.6836 | 0.6623 | 0.6983 | 0.6761 | 0.6820 |
| KIFECM | JSC (%) | 79.75 | 77.45 | 79.86 | 78.25 | 76.72 | 79.62 | 76.91 | 79.47 | 78.98 | 76.85 | 78.39 |
| | CC | 0.7863 | 0.7685 | 0.7877 | 0.7746 | 0.7512 | 0.7854 | 0.7588 | 0.7839 | 0.7797 | 0.7515 | 0.7728 |
| | UM | 0.7890 | 0.7821 | 0.7878 | 0.7872 | 0.7721 | 0.7831 | 0.7837 | 0.7723 | 0.7982 | 0.7771 | 0.7833 |
| NEATSA | JSC (%) | 87.57 | 85.90 | 88.99 | 82.89 | 87.98 | 88.99 | 88.38 | 84.89 | 87.87 | 85.80 | 86.93 |
| | CC | 0.8667 | 0.8488 | 0.8790 | 0.8170 | 0.8667 | 0.8746 | 0.8767 | 0.8347 | 0.8679 | 0.8445 | 0.8577 |
| | UM | 0.8890 | 0.8821 | 0.8888 | 0.8882 | 0.8821 | 0.8831 | 0.8838 | 0.8823 | 0.8982 | 0.8881 | 0.8866 |
| T2NSEIF | JSC (%) | 95.93 | 99.32 | 94.23 | 95.89 | 98.91 | 96.53 | 96.51 | 98.87 | 99.88 | 95.27 | 97.13 |
| | CC | 0.9490 | 0.9880 | 0.9388 | 0.9410 | 0.9787 | 0.9510 | 0.9540 | 0.9789 | 0.9899 | 0.9410 | 0.9610 |
| | UM | 0.9690 | 0.9621 | 0.9699 | 0.9692 | 0.9621 | 0.9631 | 0.9639 | 0.9623 | 0.9692 | 0.9691 | 0.9660 |


Table 5.4 Comparison of the proposed with the existing methods in terms of average CPU time (in milliseconds)

| Set | FCM | MFCM | FKMCA | KIFECM | NEATSA | T2NSEIF |
|---|---|---|---|---|---|---|
| Set I | 282.91 | 267.66 | 239.95 | 227.31 | 160.66 | 150.05 |
| Set II | 272.81 | 257.66 | 249.85 | 238.32 | 169.56 | 155.15 |
| Set III | 273.61 | 258.58 | 245.75 | 235.12 | 165.46 | 152.11 |

5.7 Conclusions and Future Directions

In this chapter, a new T2NS theory has been proposed along with its associated properties, theorems and definitions. The applicability of the proposed T2NS was demonstrated in the segmentation of brain tumor tissue structures and in identifying their locations in MRIs. Based on the proposed T2NS, an image segmentation method called T2NSEIF was proposed, into which the T2NSE and image fusion concepts were also integrated. In the proposed method, T2NS was used for the neutrosophication of the gray levels of MRIs, while their inherent uncertainties were measured using T2NSE. The proposed method performed segmentation in MRIs with multiple thresholds based on the locations of the maximum T2NSE values of the gray levels. Finally, the different segmented images were fused together to identify different features as well as precise brain tumor structures in MRIs. The experiments were performed on thirty different benchmark MRIs of brain tumors with their corresponding GT images. The performance of the proposed segmentation method was evaluated along with the well-known FCM, MFCM, FKMCA, KIFECM and NEATSA methods using the JSC, CC and UM metrics. The experimental results in brain tumor tissue structure segmentation showed the robustness of the proposed method over the existing methods. CPU time was also calculated for each of the methods, and it was found that the proposed T2NSEIF method consumes much less time than the existing methods. The proposed method was verified and validated only with MRIs of brain tumors, which is a limitation of the study. In the future, the robustness of the proposed method in the segmentation of other types of MRIs will be explored. Based on the proposed T2NS theory, other areas of image processing can also be explored, such as enhancement, segmentation using color or texture cues, change detection, edge detection and so on.
The T2NSE can be used to measure the similarity relationships and the information contained in images. It can also be used to measure the similarity or dissimilarity of DNA sequences in the field of bioinformatics. The proposed T2NS can be useful in the field of robotics for detecting different arm movements based on angular position. In the future, the proposed research work can also be extended to the detection of crack areas in concrete, electronic devices, steel, metals, etc.



Chapter 6

COVID-19 CT Scan Image Segmentation Using Quantum-Clustering Approach

“Sometimes the questions are complicated and the answers are simple.” By Dr. Seuss (1904–1991)

Abstract The World Health Organization (WHO) has declared Coronavirus Disease 2019 (COVID-19) one of the most contagious diseases and considered this epidemic a global health emergency. Medical professionals therefore urgently need an early diagnosis method for this new type of disease. In this research work, a new early screening method for the investigation of COVID-19 pneumonia using chest CT scan images is introduced. For this purpose, a new image segmentation method based on the K-means clustering (KMC) algorithm and a novel fast forward quantum optimization algorithm (FFQOA) is proposed. The proposed method, called FFQOAK (FFQOA+KMC), initiates by clustering gray level values with the KMC algorithm and generating an optimal segmented image with the FFQOA. The main objective of the proposed FFQOAK is to segment the chest CT scan images so that infected regions can be accurately detected. The proposed method is verified and validated with different chest CT scan images of COVID-19 patients, and the segmented images obtained using the FFQOAK method are compared with various benchmark image segmentation methods. The proposed method achieves a mean squared error, peak signal-to-noise ratio, Jaccard similarity coefficient and correlation coefficient of 712.30, 19.61, 0.90 and 0.91, respectively, across four experimental sets, namely Experimental_Set_1, Experimental_Set_2, Experimental_Set_3 and Experimental_Set_4. These four performance evaluation metrics show the effectiveness of the FFQOAK method over the existing methods. Keywords Coronavirus Disease 2019 (COVID-19) · K-means clustering (KMC) algorithm · Fast forward quantum optimization algorithm (FFQOA) · Computed tomography (CT) images · Image segmentation

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 P. Singh, Biomedical Image Analysis, Brain Informatics and Health, https://doi.org/10.1007/978-981-99-9939-2_6


6.1 Introduction

In late 2019, the novel COVID-19 pneumonia was observed in Wuhan City, China [6, 34]. Huang et al. [10] identified the typical manifestation symptoms of COVID-19 based on 41 patients in Wuhan City, which included fever, cough, myalgia and fatigue. All of these 41 patients suffered from pneumonia and showed abnormalities in their chest computed tomography (CT). These patients also had serious health problems, including acute respiratory illness, acute cardiac injury and secondary infections. Of these, 13 patients were transferred to the Intensive Care Unit (ICU), while 6 patients died during the course of treatment. Chan and colleagues [2] at the University of Hong Kong found the first evidence of human-to-human transmission of COVID-19. COVID-19 is associated with severe respiratory symptoms leading to ICU admissions and death with high frequency [16, 33]. The type of pneumonia caused by COVID-19 is highly infectious, and this outbreak has been declared a global public health emergency by the WHO [30]. A real-time RT-PCR approach was used to diagnose COVID-19 pneumonia, which indicated positive symptoms of severe acute respiratory syndrome coronavirus 2 in nine pregnant women [3]. However, in the case of COVID-19 infection, the RT-PCR approach has a very low positive rate and may not be effective for the early detection and treatment of suspected patients [8]. Nevertheless, medical imaging technologies such as X-ray, CT, magnetic resonance imaging (MRI), etc. have made a significant contribution to improving diagnostic accuracy, timeliness and performance [21]. A recent study shows that certain features associated with COVID-19 can be detected in the lungs by chest CT images [5]. Li et al. [20] have used a deep learning approach to separate COVID-19 from other viral pneumonias based on chest CT examinations. Such studies [5, 20] have shown that CT could be an effective tool for early COVID-19 testing and diagnosis.
Notwithstanding the advantages of CT scanning, CT images share some common image features between COVID-19 and other types of pneumonia, which makes it very difficult to distinguish between them. Such features can, however, be distinguished in terms of similarities and dissimilarities using various image processing methods. Image segmentation is one of the most tedious tasks in image processing and pattern recognition, with many applications in computer vision, robotics, object recognition and so on. The main purpose of image segmentation is to separate each object in the image from the rest of the artifacts [28]. Thus, it is a mechanism for dividing an image into different parts so that each part has its own region. According to Cheng et al. [4], it is a method of partitioning an image I into non-overlapping regions (I_1, I_2, ..., I_n) such that:

⋃_{i=1}^{n} I_i = I, and I_i ∩ I_j = ∅ for I_i ≠ I_j    (6.1.1)
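As a quick sanity check of this definition, the short sketch below (illustrative, not from the book) groups the pixels of a toy label map into regions I_i and verifies that the regions cover the whole image and are pairwise disjoint:

```python
def regions(labels):
    # Group pixel indices by segment label; the groups are the regions I_i.
    parts = {}
    for idx, lab in enumerate(labels):
        parts.setdefault(lab, set()).add(idx)
    return parts

labels = [0, 0, 1, 2, 1, 2]              # toy label map for a 6-pixel "image"
parts = regions(labels)
assert set().union(*parts.values()) == set(range(len(labels)))   # union of regions = I
assert all(a.isdisjoint(b) for i, a in parts.items()
           for j, b in parts.items() if i != j)                  # I_i ∩ I_j = ∅
```

Any labeling produced by a clustering-based segmenter satisfies Eq. 6.1.1 by construction, since every pixel receives exactly one label.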


Image segmentation is a tedious mechanism due to the complexities related to contrast, brightness, noise, etc. [31]. In segmentation, some issues always arise, such as:

1. How to distinguish objects from each other, since they often overlap in color?
2. How to distinguish objects from the background, since their color levels may be similar?
3. How to adequately quantify color levels so that certain objects belong to a specific set?

One of the most popular methods is the K-means clustering (KMC) algorithm [22], which assigns each color level to the respective cluster depending on certain distance criteria [32]. However, the KMC algorithm has the following inherent drawbacks [25]:

• It is observed that the KMC algorithm generates many cluster centers with local optima and often misses the global optima.
• The optimal results of the KMC algorithm are very sensitive to the initial definition of the cluster centers. It is observed that different clusters can be generated with different initial cluster centers.

Many improved methods have been proposed by hybridizing the following optimization algorithms with the KMC algorithm to overcome the above drawbacks:

• Genetic Algorithm (GA) + KMC = GAK [17],
• Particle Swarm Optimization (PSO) + KMC = PSOK [29],
• Dynamic PSO (DPSO) + KMC = DPSOK [19], and
• Ant Colony Optimization (ACO) + KMC = ACOK [23].

However, such hybridization of optimization algorithms is either too difficult or only partially overcomes the drawbacks of the KMC algorithm. This shows the need for further studies on the problems of the KMC algorithm, especially the problems of optimal cluster centers and global optima. With this motivation, this study proposes a novel FFQOA, which is an updated version of the existing quantum optimization algorithm (QOA) [27] and modified QOA (MQOA) [12]. These algorithms have certain weaknesses:

• The formulation of the quantum system that serves as a set of search agents to find the globally optimal solution is not properly defined.
• There is no description of appropriate ranges for the various constants.
• There is no consideration of the locally and globally optimal values for finding the optimal solution.

Significant improvements over the QOA and MQOA are made as part of this research and incorporated into the FFQOA:

• Well-defined ranges for the constants and parameters are given.
• Formulations for the quantum system are properly defined.


• Formulas for initializing the quantum's location, movement and displacement in the quantum system are included.
• Mechanisms for enhancing the search scope and updating the displacement are included. For this purpose, the FFQOA considers the personal best and the global best displacements achieved by the quanta in the quantum system.

This study proposes a new image segmentation method called FFQOAK based on the FFQOA and the KMC algorithm. Considering the prevalence of COVID-19, the proposed FFQOAK method has been employed to segment the chest CT scan images of COVID-19 patients [15]. The aim of this application is to segment these images into different regions and detect the infected regions. In this method, the KMC algorithm is used to cluster the gray level values of chest CT scan images. The main strategy of this algorithm is to cluster the gray level values in such a way that the Euclidean distance between the gray level values belonging to each cluster is minimized. Each cluster center is represented by the intensity of the gray level values, and the KMC algorithm tries to find the best cluster centers for the gray level values with each iteration. Since the KMC algorithm suffers from the problems of optimal cluster centers and global optima, the FFQOA is used to solve these problems by minimizing the Euclidean distance function. Experimental results show that the FFQOAK method is able to generate optimal segmented images by highlighting the infected regions with good visual effects in the CT scan images. The performance of the proposed FFQOAK method has been compared with five other methods, including KMC [14], GAK [17], PSOK [29], DPSOK [19] and ACOK [23]. Comparative metrics based on mean squared error (MSE), peak signal-to-noise ratio (PSNR), Jaccard similarity coefficient (JSC) and correlation coefficient (CC) show the efficiency of the proposed FFQOAK method. The remainder of this chapter is arranged as follows.
Section 6.2 presents the application of the KMC algorithm for image segmentation. The proposed FFQOA is presented in Sect. 6.3. The proposed FFQOAK method is presented in Sect. 6.4. Experimental results are discussed in Sect. 6.5. Conclusions and future directions are discussed in Sect. 6.6.

6.2 Image Segmentation Using KMC Algorithm

For an input image I_p, each gray level value P_i (i = 1, 2, ..., n) can be defined with n-dimensional vectors, which take their values in the range [0, G] with G = 255. That is, for I_p, the 256 gray levels belong to the universe of discourse U = [0, G]. Therefore, a gray level domain (GLD) [13] can be described on a set of domains P_i forming a space in the plane U. The GLD is denoted as G_ld, and can be expressed as:

G_ld : P_i ⊂ R^2 → U ⊂ R    (6.2.1)


In the context of clustering G_ld, this study uses the KMC algorithm. The steps involved in this algorithm are explained next.

Step 1. Input: an image I_p.
Step 2. A set of gray level values G_ld = {P_1, P_2, ..., P_n}.
Step 3. Initialize θ number of clusters z = 1, 2, ..., θ.
Step 4. Assume a set of randomly initialized cluster centers C(e) = [C_1(e), C_2(e), ..., C_θ(e)], where e represents the first epoch of the algorithm.
Step 5. Repeat.
(a) Calculate the Euclidean distance d[P_i, C_j(e)] between gray level value P_i ∈ G_ld and the cluster center C_j(e) ∈ C(e) using the relation given below:

d[P_i, C_j(e)] = |P_i − C_j(e)|^2    (6.2.2)

If C_j(e) is the nearest center for P_i, then P_i is assigned to the cluster Z_j.
(b) Assign all the gray level values to the closest cluster centers based on the minimum Euclidean distance.
(c) Recalculate the new cluster centers using the following equation:

C_j(e + 1) = (1/η) Σ_{i=1}^{η} P_i, (j = 1, 2, ..., θ)    (6.2.3)

Here, η represents the size of the cluster Z_j.
Step 6. Go to Step 5 and increment the epoch. This process continues until the cluster centers stop changing or the algorithm reaches the maximum number of epochs E, i.e., e = 1, 2, ..., E.
Step 7. Output: reshape the θ clustered gray level values into a segmented image I_s.
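The steps above can be sketched in a few lines of Python on a 1-D list of gray levels. The deterministic spread-out initialization and the toy data are illustrative choices (the chapter's Step 4 uses randomly initialized centers):

```python
def kmeans_gray(levels, k, epochs=100):
    # Step 4: initial cluster centers, taken here deterministically from the
    # sorted values (illustrative stand-in for the random initialization).
    centers = sorted(levels)[:: len(levels) // k][:k]
    labels = [0] * len(levels)
    for _ in range(epochs):                                  # Steps 5-6: repeat
        # (a)-(b): assign each gray level to the nearest center (Eq. 6.2.2)
        labels = [min(range(k), key=lambda j: (p - centers[j]) ** 2)
                  for p in levels]
        # (c): recompute each center as the mean of its cluster (Eq. 6.2.3)
        new_centers = []
        for j in range(k):
            members = [p for p, lab in zip(levels, labels) if lab == j]
            new_centers.append(sum(members) / len(members) if members else centers[j])
        if new_centers == centers:           # stop once the centers settle
            break
        centers = new_centers
    return centers, labels

gray = [12, 15, 14, 130, 128, 131, 250, 248, 251]   # toy gray-level set
centers, labels = kmeans_gray(gray, k=3)            # centers near 14, 130, 250
```

Step 7 then amounts to reshaping `labels` back to the M × N image grid, with each cluster index rendered as one segment.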

6.3 The Proposed FFQOA

In this section, the inspiration, background and mathematical modeling are presented along with the pseudocode of the proposed FFQOA.

6.3.1 Inspiration for the FFQOA

It is known from experiments that quantum motion is very different from the motion of rigid objects. Since rigid objects consist of many atoms, quantum effects in rigid objects are somehow considered to be averaged. Rigid objects in quantum physics


Fig. 6.1 A quantum system with q number of quanta .Qk (.k = 1, 2, . . . , q)

are composed of many quanta; such a collection is called a quantum system (Fig. 6.1). In a quantum system, the motion of a microscopic quantum can usually be described by classical mechanics [18]. In this approach, the location, movement and displacement of the quantum can be defined in the direction of motion. In this study, it has been assumed that there is an inertial time reference with respect to which a quantum can move to achieve the best displacement. In the quantum system, the search for the best displacement continues until each quantum achieves its own best displacement. Based on this motivation, a new optimization algorithm inspired by the displacement of quanta is proposed in this study. The proposed algorithm is referred to as FFQOA.

6.3.2 Background for the FFQOA

The FFQOA is a quantum-based heuristic search algorithm built on the simulation of quantum displacement activity within a quantum system. The original goal of the FFQOA is to mathematically simulate the elegant and uncertain displacements of quanta by discovering the patterns that excite the quanta to move, following the Schrödinger equation [24]. In the intermediate phase, this algorithm enhances the search scope of the instantaneous movement of each individual quantum by rearranging the quanta within the quantum system. Their movements lead to displacements that give both stability and an optimal structure to the quantum system. For this purpose, the FFQOA has been developed, which is very simple and effective in solving optimization problems. In FFQOA, the search agent is called a "quantum", which is allowed to move in the multidimensional search space; a collective form of quanta constitutes the quantum system. Each quantum has its own point of origin, which is called its location, and the state of change of its location is called movement. Movement of a quantum leads to a change in location, which is called displacement. Enhancements in the displacements of quanta are made on the basis of the exchange of information with a successful quantum. A successful quantum is evaluated by the effective


displacement. Therefore, the displacement of a quantum is affected by the effective displacement of its surrounding quanta, and consequently the search activity of a quantum is influenced by the other quanta within the quantum system. The result of modeling this interaction is that each quantum in the search space shifts in the direction of the preceding quanta. Thus, each quantum maintains the following information in the quantum system: (a) personal location, (b) personal movement, (c) personal displacement, (d) personal successes in the form of displacements, and (e) surrounding successes in the form of displacements. Finally, each quantum attempts to achieve stability in the multidimensional search space by mimicking personal and surrounding successes.

6.3.3 Mathematical Modeling for the FFQOA

In the following, we provide the mechanism of the FFQOA by formulating an optimization problem as:

Optimize (Max. or Min.) f_h(x), (h = 1, 2, ..., H), x ∈ Q    (6.3.1)

subject to the linear constraints

λ_j(x) ≥ 0, (j = 1, 2, ..., J),    (6.3.2)
Θ_m(x) ≥ 0, (m = 1, 2, ..., M),    (6.3.3)

where f_h(x), λ_j(x) and Θ_m(x) are functions of the following design vector:

x = (x_n), (n = 1, 2, ..., N)    (6.3.4)

where the components x_n of x are called decision variables, and N is the number of decision variables. In Eq. 6.3.1, the functions f_h(x) are referred to as the objective functions, where H = 1 indicates that there is only a single objective. The space spanned by the x_n is called the quantum system Q, whereas the space produced by the f_h(x) values is called the solution space. In Eqs. 6.3.2 and 6.3.3, λ_j and Θ_m are called constraints. For these constraints, a condition set is defined for each decision variable x_n as G_LB ≤ x_n ≤ G_UB, which restricts the value of x_n within a lower bound (G_LB) and an upper bound (G_UB).

Step 1. Initialization of quanta in the quantum system: First, assume that the solutions to the optimization problem are scattered in the quantum system. Each quantum is allowed to move to search for a solution in this system. A quantum system is defined by initializing each quantum in the search space with the following Schrödinger equation [24]:

Q_k(e) = φ · Q_k^1(e) + (1 − φ) · Q_k^2(e)    (6.3.5)


In Eq. 6.3.5, Q_k(e) represents the k-th quantum at epoch e, with k = 1, 2, ..., q, where q denotes the total number of quanta in Q. Here, Q_k^1(e) and Q_k^2(e) are two wave functions for the k-th quantum, and φ = a + ib is a complex number, where a and b are real numbers in [0, 1] and i is the imaginary unit, i = √−1. In the representation of complex numbers, multiplication by −1 refers to a 180-degree rotation of the k-th quantum about the origin; hence, multiplication by i refers to a 90-degree rotation of the k-th quantum in the "positive", counterclockwise direction [1]. Since it is not possible to use the complex number φ directly to initialize the quantum in the search space, its absolute value, defined as |φ| = √(a^2 + b^2), is used in the computation process. Both Q_k^1(e) and Q_k^2(e) can be defined as:

Q_k^1(e) = G_UB + r_1 · (G_UB − G_LB)    (6.3.6)
Q_k^2(e) = G_LB + r_2 · (G_UB − G_LB)    (6.3.7)

In Eqs. 6.3.6 and 6.3.7, r_1 and r_2 represent two different random functions, which are defined in [0, 1].

Step 2. Location of quantum: It is assumed that each quantum has a location in the quantum system. Mathematically, the location acquired by Q_k(e) is denoted as L_k(e), and can be defined as:

L_k(e) = (1/Q_k(e)) · e^(−2/Q_k(e))    (6.3.8)

Step 3. Movement of quantum: To search for the solution, each quantum is allowed to move in the quantum system. Mathematically, the movement exhibited by $Q_k(e)$ is called $M_k(e)$, and can be defined as:

$$M_k(e) = \left| Q_k(e) - \frac{L_k(e)}{2}\,\ln(1/m_f) \right| \quad (6.3.9)$$

Here, $m_f$ is called the quantum movement factor, which can be taken in $]0, 1]$.

Step 4. Displacement of quantum: The movement of each quantum provides a displacement to it. The displacement accompanying each quantum in the quantum system is defined by $L_k(e)$ and $M_k(e)$. The displacement of $Q_k(e)$ is denoted $D_k(e)$, and can be expressed as:

$$D_k(e) = 2 \cdot |L_k(e) - M_k(e)| \quad (6.3.10)$$
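Under the same notation, Steps 2-4 translate directly into code. The sketch below assumes $Q_k(e) > 0$ so the exponent in Eq. 6.3.8 is well defined, and uses the default $m_f = 0.14$ reported later in the experimental set-up; the function names are illustrative.

```python
import math

def location(q_k):
    # Eq. 6.3.8: L_k(e) = (1 / Q_k(e)) * exp(-2 / Q_k(e)), assuming Q_k(e) > 0
    return (1.0 / q_k) * math.exp(-2.0 / q_k)

def movement(q_k, l_k, mf=0.14):
    # Eq. 6.3.9: M_k(e) = |Q_k(e) - (L_k(e) / 2) * ln(1 / m_f)|
    return abs(q_k - (l_k / 2.0) * math.log(1.0 / mf))

def displacement(l_k, m_k):
    # Eq. 6.3.10: D_k(e) = 2 * |L_k(e) - M_k(e)|
    return 2.0 * abs(l_k - m_k)
```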


Step 5. Fitness evaluation of displacement: A fitness value is determined for $D_k(e)$, and it is updated if a better solution than the previous one exists.

Algorithm 5: PROCEDURE pBD()
  Define Q with q number of quanta: Q_k(e) (k = 1, 2, ..., q) ∈ Q (Eq. 6.3.5).
  while e < E do
    for ∀ Q_k(e) do
      /* get the personal best displacement */
      if Ĵ(D_i(e)) < Ĵ(D_j(e)) then D_j(e) = D_i(e); end
      /* update the personal best displacement */
      if Ĵ(D_j(e)) < Ĵ(pBD_k(e)) then pBD_k(e) = D_j(e); end
    end
    for ∀ Q_k(e) do
      Update M_k(e): M_k(e+1) (Eq. 6.3.11).
      Update D_k(e): D_k(e+1) (Eq. 6.3.16).
    end
    /* Repeat the process until the stopping criterion is satisfied */
    e = e + 1;
  end

Step 6. Enhancement of search scope of quantum: Every quantum enhances its search scope by adjusting its corresponding $M_k(e)$. This enhancement of $M_k(e)$ for the next epoch $e + 1$ is represented by $M_k(e+1)$, and can be defined as:

$$M_k(e+1) = M_1 + M_2 + M_3 \quad (6.3.11)$$

In Eq. 6.3.11, $M_1$, $M_2$ and $M_3$ can be defined using the following Eqs. 6.3.12-6.3.14, respectively:

$$M_1 = \alpha \cdot M_k(e) \quad (6.3.12)$$

$$M_2 = \ln(1/m_f) \cdot r_3 \cdot [pBD_k(e) - D_k(e)] \quad (6.3.13)$$

$$M_3 = \ln(1/m_f) \cdot r_4 \cdot [gBD(e) - D_k(e)] \quad (6.3.14)$$

In Eq. 6.3.12, $\alpha$ is called the quantum acceleration factor, which can be defined as:

$$\alpha = \alpha_{max} - e \times \frac{|\alpha_{max} - \alpha_{min}|}{E} \quad (6.3.15)$$


Here, $e = 1, 2, \ldots, E$, where E denotes the maximum number of epochs set for the algorithm. The parameters $\alpha_{min}$ and $\alpha_{max}$ can be taken in $[0.1, 0.9]$, where $\alpha_{max} > \alpha_{min}$. In Eq. 6.3.13, $pBD_k(e)$ is the personal best displacement achieved since the first epoch for the k-th quantum. In Eq. 6.3.14, $gBD(e)$ is the global best displacement achieved so far among all displacements. In Eqs. 6.3.13 and 6.3.14, $r_3$ and $r_4$ represent two different random functions defined in $[0, 1]$, respectively.

Step 7. Update the displacement of quantum: Every quantum updates its displacement using its previous displacement $D_k(e)$ and the enhanced search scope $M_k(e+1)$. This adjustment of $D_k(e)$ for the next epoch is represented by $D_k(e+1)$, and can be defined as:

$$D_k(e+1) = D_k(e) + M_k(e+1) \quad (6.3.16)$$
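Equations 6.3.11-6.3.16 can be sketched as plain Python helpers. Here $r_3$ and $r_4$ are drawn uniformly from $[0, 1]$, and all names are illustrative rather than part of the original method.

```python
import math
import random

def quantum_acceleration(e, E, alpha_max=0.9, alpha_min=0.2):
    # Eq. 6.3.15: alpha decays linearly from alpha_max towards alpha_min
    return alpha_max - e * abs(alpha_max - alpha_min) / E

def update_movement(m_k, d_k, pbd_k, gbd, alpha, mf=0.14):
    # Eq. 6.3.11 assembled from its three components (Eqs. 6.3.12-6.3.14)
    m1 = alpha * m_k                                           # preceding movement
    m2 = math.log(1.0 / mf) * random.random() * (pbd_k - d_k)  # personal network
    m3 = math.log(1.0 / mf) * random.random() * (gbd - d_k)    # global network
    return m1 + m2 + m3

def update_displacement(d_k, m_next):
    # Eq. 6.3.16: D_k(e+1) = D_k(e) + M_k(e+1)
    return d_k + m_next
```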

6.3.4 Personal Best and Global Best Displacements

The personal best displacement $pBD_k(e)$ is the best displacement experienced by the quantum since the first epoch. Considering the optimization problem (Eq. 6.3.1), the personal best displacement used in Eq. 6.3.13 at the next epoch $e + 1$ is determined as:

$$pBD_k(e+1) = \begin{cases} pBD_k(e) & \text{if } \hat{J}(D_k(e+1)) \geq \hat{J}(pBD_k(e)) \\ D_k(e+1) & \text{if } \hat{J}(D_k(e+1)) < \hat{J}(pBD_k(e)) \end{cases} \quad (6.3.17)$$

Here, $\hat{J}$ denotes the fitness function, which measures how close the corresponding displacement is to the optimal solution. For the global best displacement, the surrounding quanta act like a network within the quantum system. For the enhancement of the search scope, this network is used to extract information from all the quanta. In this case, the network information is the best displacement among the quanta, denoted $gBD(e)$ for epoch e. For Eq. 6.3.14, it is obtained as:

$$gBD(e) \in \{pBD_1(e), \ldots, pBD_q(e)\} \;\big|\; \hat{J}(gBD(e)) = \min\{\hat{J}(pBD_1(e)), \ldots, \hat{J}(pBD_q(e))\} \quad (6.3.18)$$
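The two update rules above reduce to a few lines if the fitness is passed in as a callable; this is a minimal sketch with illustrative names, not the chapter's implementation.

```python
def update_personal_best(pbd_k, d_next, fitness):
    # Eq. 6.3.17: keep the old personal best unless the new displacement
    # has a strictly smaller fitness value
    return d_next if fitness(d_next) < fitness(pbd_k) else pbd_k

def global_best(pbds, fitness):
    # Eq. 6.3.18: the personal best displacement with minimum fitness
    return min(pbds, key=fitness)
```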


Determination procedures for the personal best and global best displacements are summarised in Algorithms 5 and 6, respectively.

Algorithm 6: PROCEDURE gBD()
  Define Q with q number of quanta: Q_k(e) (k = 1, 2, ..., q) ∈ Q (Eq. 6.3.5).
  while e < E do
    for ∀ Q_k(e) do
      /* get the personal best displacement */
      if Ĵ(D_i(e)) < Ĵ(D_j(e)) then D_j(e) = D_i(e); end
      /* get the global best displacement */
      if Ĵ(D_j(e)) < Ĵ(gBD(e)) then gBD(e) = D_j(e); end
    end
    for ∀ Q_k(e) do
      Update M_k(e): M_k(e+1) (Eq. 6.3.11).
      Update D_k(e): D_k(e+1) (Eq. 6.3.16).
    end
    /* Repeat the process until the stopping criterion is satisfied */
    e = e + 1;
  end

6.3.5 The Search Scope Components

The search scope enhancement equation (Eq. 6.3.11) comprises three components:

• In $M_1$ (Eq. 6.3.12), the $M_k(e)$ term stores information about the preceding movement, i.e., about the immediately prior movement path. The integration of $\alpha$ with $M_k(e)$, i.e., the "$\alpha \cdot M_k(e)$" term, can be viewed as an acceleration component that prevents the quantum from radically altering its movement and biases it towards the present movement. Hence, this component is termed the preceding movement component of the quantum.

• In $M_2$ (Eq. 6.3.13), the "$\ln(1/m_f) \cdot r_3 \cdot [pBD_k(e) - D_k(e)]$" term maintains the personal network information of $Q_k(e)$ in terms of past displacements. In a sense, this component embodies individual knowledge of the displacement that was best for the quantum at the personal network level. The procedure for computing the personal best displacement at this network level is presented as Algorithm 5. The advantage of this component is that it attracts quanta towards their own personal best displacements. This activity resembles the quanta's propensity to return to those displacements that provided them the most stability among their past displacements. This component is referred to as the personal network component of the quantum.

• In $M_3$ (Eq. 6.3.14), the "$\ln(1/m_f) \cdot r_4 \cdot [gBD(e) - D_k(e)]$" term maintains the global network information of $Q_k(e)$ by quantifying the fitness of the k-th quantum with respect to the surrounding quanta. The effect of this component is that each quantum is attracted to the global best displacement discovered by its surroundings. The procedure for computing the global best displacement at this network level is presented as Algorithm 6. This component is termed the global network component of the quantum.

Fig. 6.2 A two-dimensional geometrical representation of movement and displacement enhancements for a quantum: (a) at epoch e, and (b) at epoch e + 1

A vector representation of the search scope enhancement equation (Eq. 6.3.11) can naively be depicted in a two-dimensional search space. For ease of interpretation, a quantum can be assumed to lie in a two-dimensional search space. An example of the search scope enhancement of the quantum is depicted in Fig. 6.2. Figure 6.2a indicates the stage of $D_k(e)$ for $Q_k(e)$ at epoch e. It should be noted that $D_k(e)$ brings $Q_k(e)$ closer to $pBD_k(e)$. For epoch $e + 1$, the process of search scope enhancement is updated, as shown in Fig. 6.2b. The figure shows that $D_k(e+1)$ contributes to the quantum by taking it closer to $gBD(e+1)$. The proposed FFQOA is summarized in Algorithm 7 in terms of the q quanta in Q.


Algorithm 7: PROCEDURE FFQOA()
  Define Q with q number of quanta: Q_k(e) (k = 1, 2, ..., q) ∈ Q (Eq. 6.3.5).
  for ∀ Q_k(e) do
    Define the location of each quantum: L_k(e) (Eq. 6.3.8).
    Define the movement of each quantum: M_k(e) (Eq. 6.3.9).
    Compute the displacement of each quantum: D_k(e) (Eq. 6.3.10).
    Evaluate the fitness of each displacement D_k(e) to get pBD_k(e) (Algorithm 5) and gBD(e) (Algorithm 6).
  end
  while e < E do
    for each Q_k(e) do
      Update M_k(e): M_k(e+1) (Eq. 6.3.11).
      Update D_k(e): D_k(e+1) (Eq. 6.3.16).
    end
    /* Repeat the process until the stopping criterion is satisfied */
    e = e + 1;
  end
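Putting the steps together, the FFQOA loop can be sketched as a scalar Python routine. This is a simplification under stated assumptions: quanta are one-dimensional, movements start at zero, displacements are seeded uniformly in $[G_{LB}, G_{UB}]$, and the parameter defaults follow the experimental set-up; it is not the chapter's Matlab implementation.

```python
import math
import random

def ffqoa(fitness, q=20, E=100, g_lb=0.0, g_ub=1.0, mf=0.14,
          alpha_max=0.9, alpha_min=0.2):
    """Illustrative one-dimensional sketch of Algorithm 7."""
    d = [g_lb + random.random() * (g_ub - g_lb) for _ in range(q)]  # displacements
    m = [0.0] * q                                                   # movements
    pbd = list(d)                                                   # personal bests
    gbd = min(pbd, key=fitness)                                     # global best
    for e in range(1, E + 1):
        alpha = alpha_max - e * abs(alpha_max - alpha_min) / E      # Eq. 6.3.15
        for k in range(q):
            m[k] = (alpha * m[k]
                    + math.log(1 / mf) * random.random() * (pbd[k] - d[k])
                    + math.log(1 / mf) * random.random() * (gbd - d[k]))  # Eq. 6.3.11
            d[k] += m[k]                                            # Eq. 6.3.16
            if fitness(d[k]) < fitness(pbd[k]):                     # Eq. 6.3.17
                pbd[k] = d[k]
        gbd = min(pbd, key=fitness)                                 # Eq. 6.3.18
    return gbd
```

Because a personal best is only ever replaced by a strictly better displacement, the fitness of the returned global best can never be worse than the best of the initial random displacements.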

6.4 The Proposed FFQOAK Method

This section introduces the proposed FFQOAK method along with its processes for searching for the optimal solution.

6.4.1 Phases of the FFQOAK Method

Each phase of the proposed method is explained next.

Step 1. Input: an image $I_p$.
Step 2. A set of gray level values $G_{ld} = \{P_1, P_2, \ldots, P_n\}$.
Step 3. Initialize $\theta$ number of clusters, $z = 1, 2, \ldots, \theta$.
Step 4. Obtain the set of initial cluster centers $C = [C_1, C_2, \ldots, C_\theta]$ using the KMC algorithm.
Step 5. Apply the FFQOAK to $G_{ld}$ to get the optimal segmented image $I_s$:
  Sub-step 5.1. Initialization of quantum in the quantum system: analogous to Step 1 of FFQOA.
  Sub-step 5.2. Location of quantum: analogous to Step 2 of FFQOA.
  Sub-step 5.3. Movement of quantum: analogous to Step 3 of FFQOA.


  Sub-step 5.4. Displacement of quantum: obtain the displacement (analogous to Step 4 of FFQOA), and assign the set of initial cluster centers C to any displacement $D_j(e)$.
  Sub-step 5.5. Repeat:
  (a) Calculation of Euclidean distance: calculate the distance between each gray level value $P_i \in G_{ld}$ and the displacement $D_j(e)$ using the relation:

$$d[P_i, X_z] = |P_i - X_z|^2; \quad \forall X_z \in D_j(e) \quad (6.4.1)$$

If $X_z$ is the nearest center to $P_i$, then $P_i$ is assigned to cluster $Z_z$.
  (b) Assignment of gray level values: assign all gray level values to the closest centers based on the minimum Euclidean distance.
  (c) Fitness evaluation of cluster center: evaluate the fitness of the j-th cluster center $D_j(e)$, defined as:

$$\hat{J}_j(e) = \sum_{i=1}^{n} |P_i - X_z|^2 \quad (6.4.2)$$

The value of $\hat{J}_j(e)$ is considered optimal when it satisfies the condition:

$$\frac{\partial \hat{J}_j(e)}{\partial X_z} = \frac{\partial}{\partial X_z}\sum_{i=1}^{n}|P_i - X_z|^2 = -2\sum_{i=1}^{n}(P_i - X_z) = 0 \quad (6.4.3)$$

The average fitness of all displacements in Q, where $Q \subseteq D_j(e)$, is computed as:

$$\hat{J}_{avg}(e) = \frac{1}{\theta}\sum_{j=1}^{\theta} \hat{J}_j(e) \quad (6.4.4)$$
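The assignment and fitness computation of sub-step 5.5 (Eqs. 6.4.1 and 6.4.2) amounts to a single k-means-style pass. The sketch below works on a flat list of gray level values; the function name is illustrative.

```python
def assign_and_score(gray_values, centers):
    """Assign each gray level to its nearest center (Eq. 6.4.1) and
    accumulate the clustering fitness J (Eq. 6.4.2)."""
    clusters = {z: [] for z in range(len(centers))}
    fitness = 0.0
    for p in gray_values:
        dists = [(p - x) ** 2 for x in centers]  # d[P_i, X_z] = |P_i - X_z|^2
        z = dists.index(min(dists))              # index of nearest center X_z
        clusters[z].append(p)
        fitness += dists[z]
    return clusters, fitness
```

For example, the values [0, 1, 10, 11] with centers [0, 10] split into two clusters with a total fitness of 2.0, the sum of the squared distances to the nearest centers.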


  Sub-step 5.6. Enhancement of search scope of quantum: analogous to Step 6 of FFQOA.
  Sub-step 5.7. Update the displacement of quantum: analogous to Step 7 of FFQOA.
  Sub-step 5.8. Go to sub-step 5.5 and step up the epoch. This process continues until the cluster centers stop changing or the algorithm reaches the maximum number of epochs E, i.e., $e = 1, 2, \ldots, E$.
Step 6. Output: reshape the $\theta$ clustered sets of gray level values into a segmented image $I_s$.

The proposed FFQOAK method is summarized in Algorithm 8 in terms of the q quanta in Q.

Algorithm 8: PROCEDURE FFQOAK()
  Input: an image I_p.
  A set of gray level values G_ld = {P_1, P_2, ..., P_n}.
  Initialize θ number of clusters, z = 1, 2, ..., θ.
  Obtain the initial cluster centers: C = [C_1, C_2, ..., C_θ] (using the KMC algorithm).
  /* Apply the FFQOAK to G_ld to get the optimal segmented image I_s */
  for ∀ Q_k(e) do
    Define Q with q number of quanta: Q_k(e) (k = 1, 2, ..., q) ∈ Q (Eq. 6.3.5).
    Define the location of each quantum: L_k(e) (Eq. 6.3.8).
    Define the movement of each quantum: M_k(e) (Eq. 6.3.9).
    Obtain the displacement (Eq. 6.3.10), and assign the set of initial cluster centers C to any displacement D_j(e).
  end
  while e < E do
    for ∀ Q_k(e) do
      Calculate the Euclidean distance between each gray level value P_i ∈ G_ld and the displacement D_j(e) (Eq. 6.4.1).
      Assign all gray level values to the closest centers based on the minimum Euclidean distance.
      Evaluate the fitness of the j-th cluster center D_j(e) (Eq. 6.4.2) to get pBD_k(e) (Algorithm 5) and gBD(e) (Algorithm 6).
      Update M_k(e): M_k(e+1) (Eq. 6.3.11).
      Update D_k(e): D_k(e+1) (Eq. 6.3.16).
    end
    /* Repeat the process until there is no change in the cluster centers. */
    e = e + 1;
  end
  Output: reshape the θ clustered sets of gray level values into a segmented image I_s.


6.4.2 Optimization Process of the Proposed FFQOAK Method

The FFQOAK comprises six major stages in the segmentation of images:

Stage I: For each quantum, a quantum system is defined and its displacement is calculated.
Stage II: The initial displacement of each quantum is taken as an initial cluster center.
Stage III: Each displacement (Stage II) is passed to the Euclidean distance function of the KMC algorithm.
Stage IV: The fitness function is used to evaluate the fitness of each cluster center.
Stage V: The search scope of each quantum is enhanced by updating its movement and displacement within the quantum system.
Stage VI: Go to Stage III with the updated displacements, until the global optimal solution, or its nearest approximation, is discovered.

The FFQOAK is implemented in Matlab R2014a (8.3.0.532) in a Microsoft Windows 8.1 environment on a Core i5 processor with 3.20 GHz and 8 GB memory. Based on Singh [26], the parameters of the proposed FFQOAK are set as:

• Number of clusters: θ = 3,
• Number of quanta: q = 20,
• Real numbers: a = 0.1 and b = 0.1,
• Quantum movement factor: m_f = 0.14,
• α_max = 0.9 and α_min = 0.2, and
• Maximum number of epochs: E = 100.

Each image is preprocessed before segmentation to address noise and low contrast. This is done using an adaptive filtering technique [7], followed by the histogram equalization method [9]. The preprocessed image is called the enhanced CT scan image. Some of the essential processes of the proposed FFQOAK are illustrated on a chest CT scan image of COVID-19 in Fig. 6.3. The input image and its enhanced version are shown in Fig. 6.3a and b, respectively. The proposed FFQOAK is applied to the enhanced CT scan image for segmentation. The search landscape of the fitness evaluation metric is shown in Fig. 6.3c. A comparison of fitness values between the first quantum and the optimal quantum is shown in Fig. 6.3d; their difference shows that the optimal quantum converges well by avoiding local optimal solutions. The convergence curve of the average fitness values, shown in Fig. 6.3e, indicates that all quanta perform well in finding the optimal solution at each epoch, but only certain quanta reach the global optimal solution. The search history curve, shown in Fig. 6.3f, traces the locations of the quanta during the search for the global optimal solution. Using the optimal cluster centers, the input image is reshaped to form the segmented image, shown in Fig. 6.3g.
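The contrast-enhancement step can be illustrated with plain histogram equalization on 8-bit gray values. The adaptive filtering step [7] is omitted here, and the code below is a generic textbook version of equalization, not the specific implementation of [9].

```python
def histogram_equalization(pixels, levels=256):
    """Equalize a flat list of integer gray values in [0, levels-1]."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:                       # cumulative histogram
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:                     # constant image: nothing to equalize
        return list(pixels)
    # map each value through the normalized CDF onto the full gray range
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]
```

A uniform four-level image such as [0, 0, 1, 1, 2, 2, 3, 3] is stretched across the full 0-255 range, which is exactly the contrast boost the preprocessing aims for.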


Fig. 6.3 Searching processes of the proposed FFQOAK during segmentation: (a) CT scan image of COVID-19, (b) enhanced CT scan image of (a), (c) search landscape, (d) comparison of fitness, (e) average fitness curve, (f) search history, and (g) segmented image of (b)

It can be observed that, by including FFQOA, FFQOAK achieves fast convergence. The initial cluster centers optimized in Stage III can be considered the global optimal cluster centers of the KMC algorithm. Therefore, the KMC algorithm no longer needs freshly chosen initial cluster centers in the subsequent stages.

6.5 Experimental Results

This section includes descriptions of the dataset and its preprocessing, performance evaluation metrics, statistical analyses, convergence analysis and, finally, visual analysis.


6.5.1 Dataset and Preprocessing Descriptions

The proposed FFQOAK method has been evaluated on different chest CT scan images of COVID-19 patients obtained from Jun et al. [15]. This dataset contains 20 labeled COVID-19 CT scan images. For experimental purposes, 10 labeled images are used, labeled Case 1, Case 2, and so on. From each case, four different images are extracted and arranged into four experimental sets, namely Experimental_Set_1, Experimental_Set_2, Experimental_Set_3 and Experimental_Set_4. Each experimental set contains the ground truth (GT) of the respective images. The features of each experimental set, in terms of case label, extracted image, original image size (in KB) and enhanced image size (in KB), are listed in Table 6.1. The main objective of splitting the dataset into four sets is to evaluate whether the parameter settings of FFQOAK used for Experimental_Set_1 are also suitable for the other experimental sets.

6.5.2 Performance Evaluation Metrics

The performance of the proposed FFQOAK method has been evaluated by comparing the segmented slices with the GT. For this purpose, well-known statistical metrics are used, namely MSE, PSNR, JSC and CC. Such metrics can evaluate the performance of the proposed FFQOAK in terms of its consistency with the GT. These metrics are defined in terms of the input image ($I_p$), the segmented image ($I_s$) and the respective GT ($I_g$) as [11]:

• MSE: The MSE value measures the average loss in intensity of gray level values during the segmentation of $I_p$. A smaller MSE value implies less intensity loss and results in a better $I_s$. Mathematically, this can be expressed as:

$$MSE = \frac{1}{U \times V}\sum_{u=1}^{U}\sum_{v=1}^{V}(I_p - I_s)^2 \quad (6.5.1)$$

Here, $U \times V$ represents the size of the image in pixels.

• PSNR: The PSNR is inversely related to the MSE, i.e., a higher value shows less distortion and thus a better $I_s$. Mathematically, this can be expressed as:

$$PSNR = 10 \times \log_{10}\left(\frac{(255)^2}{MSE}\right) \quad (6.5.2)$$

• JSC: The JSC evaluates the similarity between $I_s$ and $I_g$. It is defined as the size of the intersection of the pixel sets of $I_s$ and $I_g$ divided by the size of their union. The JSC value lies in the range 0-100%; a JSC value near 100% means that the region of interest of $I_s$ has a near-perfect similarity to the corresponding $I_g$. Mathematically, this can be expressed as:

$$J_{I_s,I_g}(X) = \frac{O_{I_s \cap I_g}(X)}{O_{I_s \cup I_g}(X)} \quad (6.5.3)$$

In Eq. 6.5.3, $O_{I_s \cap I_g}(X)$ and $O_{I_s \cup I_g}(X)$ denote the intersection and union of the pixel sets associated with the X class of $I_s$ and $I_g$, respectively.

• CC: The CC test is used to determine the similarity between $I_s$ and $I_g$. The range of CC is $[-1, 1]$; a value close to 1 indicates a perfect match between the segmented regions and the respective GT. Mathematically, this can be expressed as:

$$r = \frac{\sum_{u=1}^{U}\sum_{v=1}^{V}(I_s - \bar{I}_s)(I_g - \bar{I}_g)}{\sqrt{\sum_{u=1}^{U}\sum_{v=1}^{V}(I_s - \bar{I}_s)^2}\,\sqrt{\sum_{u=1}^{U}\sum_{v=1}^{V}(I_g - \bar{I}_g)^2}}; \quad -1 \leq r \leq 1 \quad (6.5.4)$$

where r indicates the CC value. In Eq. 6.5.4, $\bar{I}_s$ and $\bar{I}_g$ represent the means of $I_s$ and $I_g$, respectively.

Table 6.1 Features of selected chest CT scan images of COVID-19

Dataset             Case label  Extracted image  Extracted size (KB)  Enhanced size (KB)
Experimental_Set_1  1           #142             169                  129
                    2           #94              237                  172
                    3           #105             218                  154
                    4           #85              202                  146
                    5           #100             207                  155
                    6           #110             211                  140
                    7           #94              212                  134
                    8           #96              184                  126
                    9           #109             181                  130
                    10          #155             173                  121
Experimental_Set_2  1           #118             179                  120
                    2           #106             231                  7.15
                    3           #81              216                  152
                    4           #71              191                  144
                    5           #76              193                  147
                    6           #87              211                  150
                    7           #113             225                  149
                    8           #120             169                  127
                    9           #90              173                  136
                    10          #179             177                  130
Experimental_Set_3  1           #129             188                  124
                    2           #97              250                  164
                    3           #92              223                  150
                    4           #95              206                  139
                    5           #87              210                  143
                    6           #98              222                  141
                    7           #102             222                  139
                    8           #107             192                  129
                    9           #100             185                  137
                    10          #166             179                  127
Experimental_Set_4  1           #136             173                  71.0
                    2           #86              223                  92.5
                    3           #100             190                  81.6
                    4           #77              179                  82.2
                    5           #80              179                  80.9
                    6           #92              191                  83.0
                    7           #89              177                  76.5
                    8           #114             181                  75.3
                    9           #104             185                  77.8
                    10          #171             163                  70.6
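The four metrics are straightforward to compute. The sketch below treats the images as flat lists of pixel intensities, and the JSC operates on sets of pixel indices belonging to a class; the function names are illustrative.

```python
import math

def mse(ip, iseg):
    # Eq. 6.5.1: mean squared error over all U x V pixels
    return sum((a - b) ** 2 for a, b in zip(ip, iseg)) / len(ip)

def psnr(mse_value):
    # Eq. 6.5.2: PSNR in dB for 8-bit images
    return 10.0 * math.log10(255.0 ** 2 / mse_value)

def jaccard(pixels_s, pixels_g):
    # Eq. 6.5.3: |intersection| / |union| of the pixel sets of a class
    return len(pixels_s & pixels_g) / len(pixels_s | pixels_g)

def correlation(iseg, ig):
    # Eq. 6.5.4: Pearson correlation coefficient between two images
    n = len(iseg)
    ms, mg = sum(iseg) / n, sum(ig) / n
    num = sum((a - ms) * (b - mg) for a, b in zip(iseg, ig))
    den = (math.sqrt(sum((a - ms) ** 2 for a in iseg))
           * math.sqrt(sum((b - mg) ** 2 for b in ig)))
    return num / den
```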

6.5.3 Statistical Analyses

In Figs. 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10 and 6.11, columns (a) and (b) show the extracted CT scan images of COVID-19 patients and their respective enhanced images. Column (c) shows the GT of the respective images in column (a), and column (d) shows the segmented images obtained by the proposed FFQOAK method. The segmented images obtained using the existing methods KMC [14], GAK [17], PSOK [29], DPSOK [19] and ACOK [23] are shown in columns (e)-(i), respectively. These segmented images are obtained with the number of clusters $\theta = 3$. Since all these images are in gray scale, clustering with more than $\theta = 3$ clusters does not produce significant features of the infections in the lungs. Based on the segmented images, it is easy to see that the infected regions in the extracted images are not sufficiently segmented by the existing methods [14, 17, 19, 23, 29]. Comparing the segmented images of the existing methods with those of the proposed FFQOAK method, it is found that the proposed method properly segments the infected regions. The segmented images obtained from the existing methods, shown in columns (e)-(i) of Figs. 6.4-6.11, show that these methods cannot deal with such images, as the results have inconsistent and vague boundaries. On the other hand, the segmented

Fig. 6.4 Segmentation of CT scan images of Experimental_Set_1 (Case labels: 1-5; extracted images #142, #94, #105, #85, #100) of COVID-19 using the proposed FFQOAK and existing methods: (a) extracted CT scan image, (b) enhanced CT scan image of (a), (c) GT of (a), (d) proposed FFQOAK, (e) KMC, (f) GAK, (g) PSOK, (h) DPSOK, and (i) ACOK

Fig. 6.5 Segmentation of CT scan images of Experimental_Set_1 (Case labels: 6-10; extracted images #110, #94, #96, #109, #155) of COVID-19 using the proposed FFQOAK and existing methods: (a) extracted CT scan image, (b) enhanced CT scan image of (a), (c) GT of (a), (d) proposed FFQOAK, (e) KMC, (f) GAK, (g) PSOK, (h) DPSOK, and (i) ACOK

Fig. 6.6 Segmentation of CT scan images of Experimental_Set_2 (Case labels: 1-5; extracted images #118, #106, #81, #71, #76) of COVID-19 using the proposed FFQOAK and existing methods: (a) extracted CT scan image, (b) enhanced CT scan image of (a), (c) GT of (a), (d) proposed FFQOAK, (e) KMC, (f) GAK, (g) PSOK, (h) DPSOK, and (i) ACOK

Fig. 6.7 Segmentation of CT scan images of Experimental_Set_2 (Case labels: 6-10; extracted images #87, #113, #120, #90, #179) of COVID-19 using the proposed FFQOAK and existing methods: (a) extracted CT scan image, (b) enhanced CT scan image of (a), (c) GT of (a), (d) proposed FFQOAK, (e) KMC, (f) GAK, (g) PSOK, (h) DPSOK, and (i) ACOK

Fig. 6.8 Segmentation of CT scan images of Experimental_Set_3 (Case labels: 1-5; extracted images #129, #97, #92, #95, #87) of COVID-19 using the proposed FFQOAK and existing methods: (a) extracted CT scan image, (b) enhanced CT scan image of (a), (c) GT of (a), (d) proposed FFQOAK, (e) KMC, (f) GAK, (g) PSOK, (h) DPSOK, and (i) ACOK

Fig. 6.9 Segmentation of CT scan images of Experimental_Set_3 (Case labels: 6-10; extracted images #98, #102, #107, #100, #166) of COVID-19 using the proposed FFQOAK and existing methods: (a) extracted CT scan image, (b) enhanced CT scan image of (a), (c) GT of (a), (d) proposed FFQOAK, (e) KMC, (f) GAK, (g) PSOK, (h) DPSOK, and (i) ACOK

Fig. 6.10 Segmentation of CT scan images of Experimental_Set_4 (Case labels: 1-5; extracted images #136, #86, #100, #77, #80) of COVID-19 using the proposed FFQOAK and existing methods: (a) extracted CT scan image, (b) enhanced CT scan image of (a), (c) GT of (a), (d) proposed FFQOAK, (e) KMC, (f) GAK, (g) PSOK, (h) DPSOK, and (i) ACOK

Fig. 6.11 Segmentation of CT scan images of Experimental_Set_4 (Case labels: 6-10; extracted images #92, #89, #114, #104, #171) of COVID-19 using the proposed FFQOAK and existing methods: (a) extracted CT scan image, (b) enhanced CT scan image of (a), (c) GT of (a), (d) proposed FFQOAK, (e) KMC, (f) GAK, (g) PSOK, (h) DPSOK, and (i) ACOK

images derived from the proposed FFQOAK method clearly highlight the infected regions and their boundaries.

Finally, a statistical analysis of the proposed and existing methods is performed using the MSE, PSNR, JSC and CC metrics. To simplify the comparison, the average values of these metrics are considered, and the proposed FFQOAK method is compared with the existing methods [14, 17, 19, 23, 29].

Table 6.2 shows the results of the comparison in terms of average MSE values. Across the four experimental sets (Experimental_Set_1 through Experimental_Set_4), the average MSE values obtained by KMC, GAK, PSOK, DPSOK, ACOK and the proposed FFQOAK method are 5293.23, 2203.31, 1844.26, 1305.61, 1136.73 and 712.30, respectively. These statistics show that the average MSE values of all the existing methods are much higher than that of the proposed FFQOAK method.

Table 6.3 shows the results of the comparison in terms of average PSNR values. The average PSNR values of the four experimental sets obtained using KMC, GAK, PSOK, DPSOK, ACOK and the proposed FFQOAK method are 11.02, 14.70, 15.68, 17.01, 17.60 and 19.61, respectively. The comparison of these values shows that the proposed method has a higher PSNR value than the existing methods.

Table 6.4 shows the average JSC values of the existing methods and the proposed FFQOAK method. Across the four experimental sets, the respective average JSC values obtained by KMC, GAK, PSOK, DPSOK, ACOK and the proposed FFQOAK method are 0.37, 0.48, 0.74, 0.77, 0.84 and 0.90. These values show that the average JSC value of the proposed FFQOAK method is significantly higher than those of the existing methods.

Table 6.5 shows the average CC values of the existing methods and the proposed FFQOAK method. From the four experimental sets, the average CC values for KMC, GAK, PSOK, DPSOK, ACOK and the proposed FFQOAK method are 0.43, 0.53, 0.68, 0.77, 0.84 and 0.91, respectively. These statistics show that the proposed FFQOAK method has a higher CC value than the existing methods.

This statistical analysis shows that the proposed FFQOAK method outperforms the existing methods in segmenting the different regions of chest CT scan images of COVID-19 patients. It is thus evident that the performance of the proposed FFQOAK method is better than that of the KMC algorithm, and that it also outperforms the existing hybridized methods GAK, PSOK, DPSOK and ACOK.

6.5.4 Convergence Analysis

The main reason the proposed FFQOAK method outperforms the others is that FFQOA is more robust than the GA, PSO, DPSO and ACO algorithms hybridized with the KMC algorithm. To justify this, the fitness values of GAK, PSOK, DPSOK and ACOK are evaluated. The main goal of GAK, PSOK, DPSOK, ACOK and FFQOAK is to search for the best fitness value of the Euclidean distance function during the segmentation


Table 6.2 Comparison of MSE with the existing methods and the proposed FFQOAK for the chest CT scan images of COVID-19

Dataset             Case  Image  KMC      GAK      PSOK     DPSOK    ACOK     FFQOAK
Experimental_Set_1  1     #142   7106.24  2101.60  1579.67  1419.67  1119.34  719.12
                    2     #94    6140.72  2108.73  1800.24  1700.14  1010.15  810.21
                    3     #105   4465.27  2192.48  1218.01  1113.01  1013.01  713.32
                    4     #85    6294.26  2191.21  1265.11  1065.11  1065.11  615.45
                    5     #100   6310.28  2191.91  1214.44  1114.10  1014.23  614.12
                    6     #110   4976.81  2110.35  1786.87  1386.23  1086.45  711.45
                    7     #94    6014.15  2111.36  1848.87  1248.15  1018.12  718.12
                    8     #96    3219.35  2148.27  2690.30  1290.30  1090.13  710.11
                    9     #109   5737.93  2191.90  2636.93  1336.93  1036.93  736.12
                    10    #155   5664.07  2192.73  2844.77  1344.77  1044.77  714.14
Experimental_Set_2  1     #118   4798.78  2193.26  1413.13  1113.13  1013.13  709.12
                    2     #106   5098.64  2192.25  1406.40  1106.40  1006.14  800.11
                    3     #81    2026.53  2164.63  1231.64  1131.64  1011.64  703.12
                    4     #71    4830.01  2116.22  1794.11  1294.11  1024.11  725.15
                    5     #76    5393.30  2117.85  1276.13  1176.13  1016.23  714.12
                    6     #87    4594.02  2124.75  1156.87  1056.87  1016.27  716.45
                    7     #113   5739.52  2175.23  1645.72  1245.72  1045.72  718.12
                    8     #120   8951.75  2127.63  1585.28  1285.28  1085.28  810.11
                    9     #90    5436.72  2112.82  1132.82  1032.82  1032.82  716.12
                    10    #179   4594.02  2124.75  1156.87  1056.87  1016.87  714.14
Experimental_Set_3  1     #129   6116.24  2111.10  2519.67  1419.17  1219.54  619.45
                    2     #97    5140.72  2128.13  2810.24  1610.24  1110.15  640.22
                    3     #92    4365.17  2092.58  1318.11  1213.01  1113.21  733.32
                    4     #95    6284.16  2161.24  1365.31  1165.11  1265.11  715.45
                    5     #87    6410.18  2391.95  1312.42  1214.10  1314.23  614.12
                    6     #98    5976.31  2120.15  1686.57  1376.23  1286.45  716.45
                    7     #102   5012.15  2131.16  1948.77  1348.15  1118.12  728.12
                    8     #107   3119.15  2248.17  2590.32  1390.30  1191.13  717.11
                    9     #100   4737.13  2291.91  2646.83  1346.93  1138.93  711.12
                    10    #166   4614.17  2292.13  2644.87  1314.75  1244.77  745.64
Experimental_Set_4  1     #136   5126.64  2211.20  1219.67  1519.12  1319.14  729.33
                    2     #86    5210.72  2328.13  1910.14  1620.21  1210.25  710.12
                    3     #100   5165.17  2192.38  1818.21  1413.21  1213.23  723.66
                    4     #77    7184.16  2361.14  1765.32  1365.11  1165.21  625.15
                    5     #80    6310.38  2491.95  1612.45  1314.12  1214.24  724.22
                    6     #92    5878.36  2320.15  1786.67  1476.43  1486.41  716.65
                    7     #89    5115.35  2241.66  1848.71  1448.25  1218.22  768.22
                    8     #114   3219.35  2143.27  2690.31  1491.32  1291.23  717.11
                    9     #104   4837.23  2491.92  2746.82  1446.82  1238.92  732.42
                    10    #171   4514.27  2392.23  2844.82  1214.54  1344.17  715.45
Average             -     -      5293.23  2203.31  1844.26  1305.61  1136.73  712.30


Table 6.3 Comparison of PSNR with the existing methods and the proposed FFQOAK for the chest CT scan images of COVID-19

Dataset             Case  Image  KMC    GAK    PSOK   DPSOK  ACOK   FFQOAK
Experimental_Set_1  1     #142   9.61   14.91  16.15  16.61  17.64  19.56
                    2     #94    10.25  14.89  15.58  15.83  18.09  19.04
                    3     #105   11.63  14.72  17.27  17.67  18.07  19.60
                    4     #85    10.14  14.72  17.11  17.86  17.86  20.24
                    5     #100   10.13  14.72  17.29  17.66  18.07  20.25
                    6     #110   11.16  14.89  15.61  16.71  17.77  19.61
                    7     #94    10.34  14.89  15.46  17.17  18.05  19.57
                    8     #96    13.05  14.81  13.83  17.02  17.76  19.62
                    9     #109   10.54  14.72  13.92  16.87  17.97  19.46
                    10    #155   10.60  14.72  13.59  16.84  17.94  19.59
Experimental_Set_2  1     #118   11.32  14.72  16.63  17.67  18.07  19.62
                    2     #106   11.06  14.72  16.65  17.69  18.10  19.10
                    3     #81    15.06  14.78  17.23  17.59  18.08  19.66
                    4     #71    11.29  14.88  15.59  17.01  18.03  19.53
                    5     #76    10.81  14.87  17.07  17.43  18.06  19.59
                    6     #87    11.51  14.86  17.50  17.89  18.06  20.25
                    7     #113   10.54  14.76  15.97  17.18  17.94  19.57
                    8     #120   8.61   14.85  16.13  17.04  17.78  19.05
                    9     #90    10.78  14.88  17.59  17.99  17.99  19.58
                    10    #179   11.51  14.86  17.50  17.89  18.06  19.59
Experimental_Set_3  1     #129   10.27  14.89  14.12  16.61  17.27  20.21
                    2     #97    11.02  14.85  13.64  16.06  17.68  20.07
                    3     #92    11.73  14.92  16.93  17.29  17.67  19.48
                    4     #95    10.15  14.78  16.78  17.47  17.11  19.59
                    5     #87    10.06  14.34  16.95  17.29  16.94  19.59
                    6     #98    10.37  14.87  15.86  16.74  17.04  19.58
                    7     #102   11.13  14.84  15.23  16.83  17.65  19.51
                    8     #107   13.19  14.61  14.00  16.70  17.37  19.57
                    9     #100   11.38  14.53  13.90  16.84  17.57  19.61
                    10    #166   11.49  14.53  13.91  16.94  17.18  19.41
Experimental_Set_4  1     #136   11.03  14.68  17.27  16.31  16.93  19.50
                    2     #86    10.96  14.46  15.32  16.04  17.30  19.62
                    3     #100   11.00  14.72  15.53  16.63  17.29  19.54
                    4     #77    9.57   14.40  15.66  16.78  17.47  20.17
                    5     #80    10.13  14.17  16.06  16.94  17.29  19.53
                    6     #92    10.44  14.48  15.61  16.44  16.41  19.58
                    7     #89    11.04  14.63  15.46  16.52  17.27  19.28
                    8     #114   13.05  14.82  13.83  16.40  17.02  19.57
                    9     #104   11.28  14.17  13.74  16.53  17.20  19.48
                    10    #171   11.58  14.34  13.59  17.29  16.85  19.59
Average             -     -      11.02  14.70  15.68  17.01  17.60  19.61


Table 6.4 Comparison of JSC with the existing methods and the proposed FFQOAK for the chest CT scan images of COVID-19

Dataset             Case  Image  KMC   GAK   PSOK  DPSOK  ACOK  FFQOAK
Experimental_Set_1  1     #142   0.36  0.49  0.64  0.79   0.81  0.88
                    2     #94    0.39  0.50  0.65  0.77   0.80  0.87
                    3     #105   0.36  0.53  0.76  0.78   0.82  0.89
                    4     #85    0.34  0.51  0.75  0.79   0.83  0.90
                    5     #100   0.39  0.52  0.73  0.79   0.82  0.89
                    6     #110   0.31  0.48  0.77  0.76   0.83  0.90
                    7     #94    0.32  0.47  0.74  0.79   0.84  0.89
                    8     #96    0.33  0.52  0.73  0.78   0.82  0.89
                    9     #109   0.34  0.43  0.74  0.79   0.83  0.90
                    10    #155   0.38  0.43  0.73  0.79   0.83  0.90
Experimental_Set_2  1     #118   0.38  0.48  0.78  0.79   0.89  0.92
                    2     #106   0.38  0.48  0.75  0.78   0.84  0.91
                    3     #81    0.39  0.41  0.74  0.79   0.82  0.89
                    4     #71    0.39  0.49  0.72  0.76   0.86  0.91
                    5     #76    0.38  0.48  0.75  0.79   0.82  0.87
                    6     #87    0.39  0.41  0.76  0.76   0.88  0.91
                    7     #113   0.37  0.42  0.74  0.78   0.87  0.90
                    8     #120   0.40  0.47  0.75  0.77   0.82  0.89
                    9     #90    0.39  0.46  0.76  0.79   0.83  0.90
                    10    #179   0.39  0.45  0.74  0.78   0.84  0.91
Experimental_Set_3  1     #129   0.36  0.48  0.75  0.78   0.83  0.90
                    2     #97    0.38  0.51  0.64  0.79   0.84  0.91
                    3     #92    0.37  0.53  0.76  0.75   0.81  0.88
                    4     #95    0.34  0.53  0.74  0.77   0.85  0.91
                    5     #87    0.38  0.51  0.72  0.76   0.83  0.90
                    6     #98    0.34  0.58  0.76  0.76   0.85  0.91
                    7     #102   0.35  0.47  0.75  0.74   0.83  0.90
                    8     #107   0.36  0.52  0.76  0.76   0.81  0.89
                    9     #100   0.36  0.46  0.76  0.73   0.83  0.90
                    10    #166   0.36  0.48  0.74  0.76   0.84  0.91
Experimental_Set_4  1     #136   0.36  0.49  0.73  0.78   0.86  0.91
                    2     #86    0.37  0.47  0.75  0.79   0.83  0.90
                    3     #100   0.40  0.45  0.74  0.78   0.81  0.87
                    4     #77    0.38  0.47  0.73  0.76   0.87  0.91
                    5     #80    0.39  0.46  0.76  0.79   0.83  0.88
                    6     #92    0.39  0.47  0.75  0.76   0.86  0.91
                    7     #89    0.36  0.43  0.74  0.77   0.87  0.90
                    8     #114   0.40  0.46  0.73  0.78   0.85  0.89
                    9     #104   0.36  0.47  0.74  0.79   0.82  0.89
                    10    #171   0.37  0.46  0.75  0.77   0.85  0.90
Average             -     -      0.37  0.48  0.74  0.77   0.84  0.90

6.5 Experimental Results


Table 6.5 Comparison of CC with the existing methods and the proposed FFQOAK for the chest CT scan images of COVID-19

Dataset: Experimental_Set_1 | Experimental_Set_2 | Experimental_Set_3 | Experimental_Set_4 | Average
Case label: 1–10 within each experimental set
Extracted image: #142 #94 #105 #85 #100 #110 #94 #96 #109 #155 | #118 #106 #81 #71 #76 #87 #113 #120 #90 #179 | #129 #97 #92 #95 #87 #98 #102 #107 #100 #166 | #136 #86 #100 #77 #80 #92 #89 #114 #104 #171

KMC 0.40 0.41 0.41 0.40 0.40 0.42 0.42 0.45 0.44 0.42 | 0.41 0.42 0.44 0.40 0.40 0.45 0.46 0.48 0.48 0.42 | 0.42 0.41 0.43 0.40 0.41 0.42 0.46 0.45 0.44 0.42 | 0.43 0.45 0.40 0.44 0.40 0.42 0.41 0.45 0.46 0.42 | Average: 0.43

GAK 0.50 0.51 0.51 0.50 0.50 0.52 0.52 0.55 0.54 0.52 | 0.51 0.52 0.54 0.50 0.50 0.55 0.56 0.58 0.58 0.52 | 0.52 0.51 0.53 0.50 0.51 0.52 0.56 0.55 0.54 0.52 | 0.53 0.55 0.50 0.54 0.50 0.52 0.51 0.55 0.56 0.52 | Average: 0.53

PSOK 0.65 0.66 0.66 0.65 0.65 0.67 0.67 0.70 0.69 0.67 | 0.66 0.67 0.69 0.65 0.65 0.70 0.71 0.73 0.73 0.67 | 0.67 0.66 0.68 0.65 0.66 0.67 0.71 0.70 0.69 0.67 | 0.68 0.70 0.65 0.69 0.65 0.67 0.66 0.70 0.71 0.67 | Average: 0.68

DPSOK 0.74 0.75 0.75 0.74 0.74 0.76 0.76 0.81 0.80 0.78 | 0.75 0.76 0.78 0.74 0.74 0.79 0.80 0.82 0.82 0.76 | 0.76 0.75 0.77 0.74 0.75 0.76 0.80 0.79 0.78 0.76 | 0.77 0.79 0.74 0.78 0.74 0.76 0.75 0.79 0.80 0.76 | Average: 0.77

ACOK 0.81 0.82 0.82 0.81 0.81 0.83 0.83 0.88 0.87 0.85 | 0.82 0.82 0.85 0.81 0.81 0.86 0.87 0.89 0.89 0.83 | 0.83 0.82 0.84 0.81 0.82 0.83 0.87 0.86 0.85 0.83 | 0.84 0.86 0.81 0.85 0.81 0.83 0.82 0.86 0.87 0.83 | Average: 0.84

FFQOAK 0.94 0.88 0.91 0.94 0.90 0.91 0.89 0.92 0.91 0.89 | 0.89 0.92 0.90 0.89 0.89 0.90 0.91 0.93 0.93 0.90 | 0.89 0.90 0.89 0.88 0.91 0.92 0.91 0.90 0.89 0.88 | 0.90 0.92 0.94 0.95 0.93 0.95 0.92 0.93 0.95 0.92 | Average: 0.91
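The two quality measures tabulated above (JSC in Table 6.4 and CC in Table 6.5) are standard and can be sketched in a few lines of NumPy. This is an illustrative implementation, not the book's own code; the toy arrays are hypothetical:

```python
import numpy as np

def jaccard(seg, ref):
    """Jaccard similarity coefficient (JSC) between two binary masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    union = np.logical_or(seg, ref).sum()
    return np.logical_and(seg, ref).sum() / union if union else 1.0

def correlation(seg, ref):
    """Pearson correlation coefficient (CC) between two images."""
    return np.corrcoef(seg.ravel().astype(float), ref.ravel().astype(float))[0, 1]

# Toy example: the segmentation recovers 3 of the 4 reference foreground pixels.
ref = np.array([[1, 1], [1, 1]])
seg = np.array([[1, 1], [1, 0]])
print(jaccard(seg, ref))                                   # 0.75
print(round(correlation(np.array([1., 2., 3., 4.]),
                        np.array([1., 2., 3., 5.])), 2))   # 0.98
```

Both metrics approach 1 as the segmented image approaches the reference, which is why the FFQOAK rows in Tables 6.4 and 6.5 (around 0.90) dominate the other methods.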


process of the chest CT scan images of COVID-19 patients. Table 6.6 presents the outcomes of this comparison; to simplify it, the average of the best fitness values is considered. The averages of the best fitness values over Experimental_Set_1, Experimental_Set_2, Experimental_Set_3, and Experimental_Set_4 obtained by GAK, PSOK, DPSOK, and ACOK are 3.83 × 10^−2, 2.66 × 10^−3, 2.41 × 10^−4, and 2.51 × 10^−5, respectively. In contrast, the average of the best fitness values obtained by the FFQOAK method over the same four experimental sets is 3.17 × 10^−7, which is much lower than that of the existing methods, viz., GAK, PSOK, DPSOK, and ACOK. This shows that the FFQOA is very effective compared to the selected optimization algorithms, i.e., GA, PSO, DPSO, and ACO, and that it improves the performance of the proposed FFQOAK method (especially the KMC algorithm) relative to the existing GAK, PSOK, DPSOK, and ACOK methods.

To better understand the behavior of FFQOAK compared to the existing GAK, PSOK, DPSOK, and ACOK methods in finding the best fitness value, a convergence curve analysis is performed. The convergence curves of the selected methods, including FFQOAK, are shown in Figs. 6.12, 6.13, 6.14, and 6.15 for the four experimental sets. From these figures, four distinct behaviors of the GAK, PSOK, DPSOK, ACOK, and proposed FFQOAK methods can be observed within 100 epochs:

Behavior I: The FFQOAK method converges much faster over the entire search space than the GAK, PSOK, DPSOK, and ACOK methods on all four experimental sets.

Behavior II: The FFQOAK converges reliably toward the best fitness value on all four experimental sets.

Behavior III: The FFQOAK converges markedly from the very first epochs for Experimental_Set_1 compared to the GAK, PSOK, DPSOK, and ACOK methods. This pronounced early convergence is also observed for Experimental_Set_2, Experimental_Set_3, and Experimental_Set_4.

Behavior IV: The GAK and PSOK methods settle near locally optimal fitness values on all four experimental sets, while the DPSOK and ACOK methods fare only slightly better in finding the optimal fitness values. As shown in Figs. 6.12 and 6.15, the proposed FFQOAK method is able to search for the optimal fitness values without getting trapped in a local optimum, unlike the GAK, PSOK, DPSOK, and ACOK methods.

This analysis shows that the FFQOAK maintains the right balance between exploration and exploitation in finding the best fitness values. From Figs. 6.12, 6.13, 6.14, and 6.15, it is also evident that the FFQOAK method is highly competitive and has a high success rate compared with the GAK, PSOK, DPSOK, and ACOK methods in solving the clustering problem for chest CT scan images of COVID-19 patients.
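The convergence-curve idea behind these behaviors can be illustrated with a toy experiment. The sketch below is not the FFQOA: it uses plain random restarts of KMC on 1-D data as a generic stand-in for a global-search wrapper, and merely shows how tracking the best fitness per epoch yields a monotone convergence curve of the kind plotted in Figs. 6.12–6.15:

```python
import random

def sse(data, centers):
    """Within-cluster sum of squared errors: the fitness being minimized."""
    return sum(min((x - c) ** 2 for c in centers) for x in data)

def kmeans_step(data, centers):
    """One Lloyd iteration of KMC on 1-D data."""
    clusters = [[] for _ in centers]
    for x in data:
        nearest = min(range(len(centers)), key=lambda j: (x - centers[j]) ** 2)
        clusters[nearest].append(x)
    # Keep an empty cluster's old center so the cluster count never shrinks.
    return [sum(c) / len(c) if c else centers[j] for j, c in enumerate(clusters)]

def best_fitness_curve(data, k, epochs, restarts, seed=1):
    """Best fitness seen at each epoch across random restarts of KMC.
    The restarts stand in for a global-search wrapper such as the FFQOA."""
    rng = random.Random(seed)
    curve = [float("inf")] * epochs
    for _ in range(restarts):
        centers = rng.sample(data, k)
        for e in range(epochs):
            centers = kmeans_step(data, centers)
            curve[e] = min(curve[e], sse(data, centers))
    return curve

data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1, 8.9, 9.0, 9.1]
curve = best_fitness_curve(data, k=3, epochs=10, restarts=5)
# The tracked best fitness can only decrease or stay flat over the epochs.
assert all(a >= b for a, b in zip(curve, curve[1:]))
```

A single KMC run can stall at the first local optimum its initialization leads to; maintaining the best fitness over many candidate solutions per epoch is what lets a wrapper escape that situation, which is the balance of exploration and exploitation discussed above.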


Table 6.6 Comparison of the best fitness value with the existing methods and the proposed FFQOAK in terms of segmenting the chest CT scan images of COVID-19

Dataset: Experimental_Set_1 | Experimental_Set_2 | Experimental_Set_3 | Experimental_Set_4 | Average
Case label: 1–10 within each experimental set
Extracted image: #142 #94 #105 #85 #100 #110 #94 #96 #109 #155 | #118 #106 #81 #71 #76 #87 #113 #120 #90 #179 | #129 #97 #92 #95 #87 #98 #102 #107 #100 #166 | #136 #86 #100 #77 #80 #92 #89 #114 #104 #171

GAK (all values ×10^−2): 2.68 1.91 4.08 5.93 3.05 3.48 3.65 4.06 4.05 4.93 | 4.00 2.11 3.40 3.97 5.40 4.00 5.69 3.13 5.69 4.14 | 2.18 1.92 4.18 5.93 3.15 3.45 3.15 4.16 4.15 4.92 | 1.18 2.62 4.38 4.93 4.15 4.45 3.25 3.86 3.95 3.92 | Average: 3.83

PSOK (all values ×10^−3): 2.34 1.00 1.71 3.70 3.84 1.90 2.51 2.21 2.07 2.35 | 1.06 1.00 4.07 2.83 1.91 5.05 5.51 3.95 3.75 3.95 | 2.14 1.10 1.81 3.60 3.74 1.80 2.61 2.31 2.17 2.45 | 2.64 2.10 1.91 3.50 2.74 2.10 2.81 2.31 2.97 2.75 | Average: 2.66

DPSOK (all values ×10^−4): 1.87 1.48 1.65 1.81 3.47 2.66 3.31 1.77 2.18 2.00 | 2.78 1.18 3.02 2.27 4.07 1.92 2.37 2.00 3.61 2.00 | 1.83 1.58 1.75 1.91 3.57 2.36 3.41 1.87 2.28 2.10 | 2.83 1.68 1.75 2.91 3.87 2.86 2.71 2.67 2.38 2.50 | Average: 2.41

ACOK (all values ×10^−5): 6.44 1.25 8.58 2.25 1.46 1.11 1.45 2.17 1.02 2.50 | 3.24 1.77 1.64 1.23 2.33 1.16 2.59 1.83 1.16 1.83 | 5.44 2.25 6.58 2.15 1.36 1.21 1.75 2.27 1.12 2.60 | 4.44 3.15 4.58 3.15 2.16 1.61 1.45 2.57 1.62 2.10 | Average: 2.51

FFQOAK (all values ×10^−7): 2.66 3.83 1.02 2.12 2.66 2.20 1.24 3.36 1.40 6.38 | 3.94 7.88 5.36 4.64 1.78 2.50 3.33 7.18 1.39 7.18 | 2.16 3.93 1.12 2.22 2.86 2.40 1.14 3.46 1.20 6.48 | 2.66 3.95 2.12 3.22 2.56 2.60 1.54 3.56 1.26 4.48 | Average: 3.17


Fig. 6.12 Convergence curve analysis of Experimental_Set_1 (Case label: 1–10)

6.5.5 Visual Analysis of Segmented Images

A visual analysis is performed to evaluate the quality of the segmented chest CT scan images of COVID-19 patients. The evaluation criterion is based on the ability to detect foreground and background regions and to identify the infected regions. To


Fig. 6.13 Convergence curve analysis of Experimental_Set_2 (Case label: 1–10)

demonstrate the visual analysis, segmented images obtained from KMC, GAK, PSOK, DPSOK, ACOK and proposed FFQOAK method are selected. To perform the clustering operation of each of the methods, the number of clusters is chosen


Fig. 6.14 Convergence curve analysis of Experimental_Set_3 (Case label: 1–10)

as θ = 3. For demonstration, some of the CT scan images are selected from the four experimental sets. For visual analysis, the enhanced CT scan images and the corresponding segmented images obtained by each method are cropped and enlarged to clearly


Fig. 6.15 Convergence curve analysis of Experimental_Set_4 (Case label: 1–10)

show their features. Figures 6.16, 6.17, 6.18, and 6.19a show the enhanced and enlarged CT scan images for Case labels 4 and 5. Segmented images obtained using the proposed FFQOAK method are shown in Figs. 6.16, 6.17, 6.18, and 6.19b.
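With θ = 3 clusters, the KMC step that produces such label maps can be sketched on raw pixel intensities. This is a minimal illustration on a synthetic 8×8 "slice", not the chapter's full FFQOAK pipeline; the intensity values are hypothetical:

```python
import numpy as np

def segment_intensities(img, theta=3, iters=20):
    """Cluster pixel intensities into `theta` groups (KMC on gray levels)
    and return a label map with the same shape as `img`."""
    pixels = img.reshape(-1, 1).astype(float)
    # Spread initial centers evenly over the intensity range.
    centers = np.linspace(img.min(), img.max(), theta)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute centers.
        labels = np.argmin(np.abs(pixels - centers), axis=1)
        for k in range(theta):
            if np.any(labels == k):
                centers[k] = pixels[labels == k, 0].mean()
    return labels.reshape(img.shape)

# Synthetic slice: dark background, mid-gray lung field, bright dense region.
img = np.zeros((8, 8))
img[2:6, 2:6] = 120.0
img[3:5, 3:5] = 250.0
labels = segment_intensities(img, theta=3)
print(np.unique(labels).tolist())  # [0, 1, 2]
```

Each of the three labels corresponds to one intensity band, which is how the segmented panels in the figures separate background, lung tissue, and the dense (infected) regions.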

Fig. 6.16 Visual analysis of segmented results of Experimental_Set_1 (Case label 4: #85; Case label 5: #100): (a) enhanced and enlarged CT scan image, (b) proposed FFQOAK, (c) KMC, (d) GAK, (e) PSOK, (f) DPSOK, and (g) ACOK


Fig. 6.17 Visual analysis of segmented results of Experimental_Set_2 (Case label 4: #71; Case label 5: #76): (a) enhanced and enlarged CT scan image, (b) proposed FFQOAK, (c) KMC, (d) GAK, (e) PSOK, (f) DPSOK, and (g) ACOK


Fig. 6.18 Visual analysis of segmented results of Experimental_Set_3 (Case labels: 4 and 5): (a) enhanced and enlarged CT scan image, (b) proposed FFQOAK, (c) KMC, (d) GAK, (e) PSOK, (f) DPSOK, and (g) ACOK


Fig. 6.19 Visual analysis of segmented results of Experimental_Set_4 (Case label 4: #77; Case label 5: #80): (a) enhanced and enlarged CT scan image, (b) proposed FFQOAK, (c) KMC, (d) GAK, (e) PSOK, (f) DPSOK, and (g) ACOK



For visual analysis, the segmented images obtained by the existing KMC, GAK, PSOK, DPSOK and ACOK methods are shown in Figs. 6.16, 6.17, 6.18, and 6.19c–g, respectively. In Figs. 6.16, 6.17, 6.18, and 6.19b, the white dense areas in the lungs show the signs and symptoms of COVID-19. Therefore, in Figs. 6.16, 6.17, 6.18, and 6.19b, it can be seen that the proposed FFQOAK method produces better segmented images than KMC, GAK, PSOK, DPSOK and ACOK (Figs. 6.16, 6.17, 6.18, and 6.19c–g, respectively).

6.6 Conclusions and Future Directions

In this study, a new hybridized method called FFQOAK was proposed, based on the FFQOA and KMC algorithms. The proposed FFQOAK was applied to the segmentation of chest CT scan images of COVID-19 patients. In the proposed method, the KMC algorithm was used to segment the images, while the FFQOA was used to obtain the optimal segmented images. The proposed FFQOAK method was compared with the existing image segmentation methods KMC, GAK, PSOK, DPSOK, and ACOK. The performance of the proposed and existing methods was evaluated using statistical metrics, namely MSE, PSNR, JSC, and CC. The experimental results showed that the proposed FFQOAK method outperformed the existing methods. The empirical analysis showed that the proposed FFQOAK method not only preserved the fast convergence of the KMC, but also overcame its drawback of easily becoming trapped in a locally optimal solution by using the FFQOA. Therefore, it can be concluded that the proposed FFQOAK method was effective in analyzing the chest CT scan images of COVID-19 patients through a segmentation approach, and it proved to be an additional promising diagnostic aid for medical experts. A limitation of the study is that the proposed FFQOAK method was validated only on chest images. In future work, the proposed method can be extended to other types of medical images, such as X-rays and MRIs. We also intend to demonstrate the application of the proposed FFQOA in solving various engineering design problems.
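For completeness, the two remaining evaluation metrics named above, MSE and PSNR, can be sketched as follows. This is a generic illustration rather than the chapter's own code; the pixel lists are hypothetical, and the peak value of 255 assumes 8-bit images:

```python
import math

def mse(a, b):
    """Mean squared error between two equal-sized images (flat lists here)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the images are closer."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10 * math.log10(peak ** 2 / e)

orig = [10.0, 20.0, 30.0, 40.0]
seg = [12.0, 18.0, 30.0, 44.0]
print(mse(orig, seg))             # 6.0
print(round(psnr(orig, seg), 2))  # 40.35
```

Lower MSE and higher PSNR indicate a segmented image closer to the reference, complementing the overlap-based JSC and the correlation-based CC used in Tables 6.4 and 6.5.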

References

1. Berezin FA, Shubin MA (1991) The Schrödinger equation, 1st edn. Kluwer Academic Publishers, Dordrecht
2. Chan JFW, et al (2020) A familial cluster of pneumonia associated with the 2019 novel coronavirus indicating person-to-person transmission: a study of a family cluster. Lancet 395(10223):514–523
3. Chen H, et al (2020) Clinical characteristics and intrauterine vertical transmission potential of COVID-19 infection in nine pregnant women: a retrospective review of medical records. Lancet 395(10226):809–815
4. Cheng HD, Jiang X, Sun Y, Wang J (2001) Color image segmentation: advances and prospects. Pattern Recogn 34(12):2259–2281
5. Chung M, et al (2020) CT imaging features of 2019 novel coronavirus (2019-nCoV). Radiology 4:200230
6. Cohen J, Normile D (2020) New SARS-like virus in China triggers alarm. Science 367(6475):234–235
7. Douglas SC, Losada R (2002) Adaptive filters in Matlab: from novice to expert. In: Proceedings of 2002 IEEE 10th digital signal processing workshop, 2002 and the 2nd signal processing education workshop, Pine Mountain, GA, pp 168–173
8. Fang Y, et al (2020) Sensitivity of chest CT for COVID-19: comparison to RT-PCR. Radiology 19:200432
9. Han J, Yang S, Lee B (2011) A novel 3-D color histogram equalization method with uniform 1-D gray scale histogram. IEEE Trans Image Process 20(2):506–512
10. Huang C, et al (2020) Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet 395(10223):497–506
11. Huang YP, Singh P, Kuo HC (2020) A hybrid fuzzy clustering approach for the recognition and visualization of MRI images of Parkinson's disease. IEEE Access 8:25041–25051
12. Huang YP, Singh P, Kuo WL, Chu HC (2021) A type-2 fuzzy clustering and quantum optimization approach for crops image segmentation. Int J Fuzzy Syst 1–15. https://doi.org/10.1007/s40815-020-01009-2
13. Jourlin M (2016) Gray-level LIP model. Notations, recalls, and first applications. In: Jourlin M (ed) Logarithmic image processing: theory and applications, advances in imaging and electron physics, vol 195. Elsevier, New York, pp 1–26
14. Juang LH, Wu MN (2010) MRI brain lesion image detection based on color-converted K-means clustering segmentation. Measurement 43(7):941–949
15. Jun M, et al (2020) COVID-19 CT lung and infection segmentation dataset. https://doi.org/10.5281/zenodo.3757476
16. Katris C (2021) A time series-based statistical approach for outbreak spread forecasting: application of COVID-19 in Greece. Expert Syst Appl 166:114077
17. Khrissi L, Akkad NE, Satori H, Satori K (2020) Image segmentation based on K-means and genetic algorithms. In: Embedded systems and artificial intelligence, Fez, pp 489–497
18. Levi AFJ (2012) Applied quantum mechanics, 2nd edn. Cambridge University Press, Cambridge
19. Li H, He H, Wen Y (2015) Dynamic particle swarm optimization and K-means clustering algorithm for image segmentation. Optik 126(24):4817–4822
20. Li L, et al (2020) Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT. Radiology 19:200905
21. Nowaková J, Prílepok M, Snášel V (2017) Medical image retrieval using vector quantization and fuzzy s-tree. J Med Syst 41(2):18
22. Queen JM (1967) Some methods for classification and analysis of multivariate observations. In: Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, Oakland, CA, vol 1
23. Saatchi S, Hung CC (2005) Hybridization of the ant colony optimization with the K-means algorithm for clustering. In: Scandinavian conference on image analysis, Joensuu, pp 511–520
24. Schrödinger E (1935) The present status of quantum mechanics. Naturwissenschaften 23(48):1–26
25. Selim SZ, Ismail MA (1984) K-means-type algorithms: a generalized convergence theorem and characterization of local optimality. IEEE Trans Pattern Anal Mach Intell 6(1):81–87
26. Singh P (2021) FQTSFM: a fuzzy-quantum time series forecasting model. Inf Sci 566:57–79
27. Singh P, Dhiman G, Kaur A (2018) A quantum approach for time series data based on graph and Schrödinger equations methods. Mod Phys Lett A 33(35):1850208
28. Tobias OJ, Seara R (2002) Image segmentation by histogram thresholding using fuzzy sets. IEEE Trans Image Process 11(12):1457–1465
29. van der Merwe DW, Engelbrecht AP (2003) Data clustering using particle swarm optimization. In: The 2003 Congress on evolutionary computation, Canberra, ACT, vol 1, pp 215–220
30. WHO (2020) Laboratory testing for 2019 novel coronavirus (2019-nCoV) in suspected human cases. Interim guidance. Tech. rep.
31. Yang X, Zhao W, Chen Y, Fang X (2008) Image segmentation with a fuzzy clustering algorithm based on ant-tree. Signal Process 88(10):2453–2462
32. Yao H, Duan Q, Li D, Wang J (2013) An improved k-means clustering algorithm for fish image segmentation. Math Comput Model 58(3–4):790–798
33. Zhou P, et al (2020) A pneumonia outbreak associated with a new coronavirus of probable bat origin. Nature 579:270–273
34. Zhu N, et al (2020) A novel coronavirus from patients with pneumonia in China, 2019. N Engl J Med 382(8):727–733