Diabetes and Fundus OCT (ISBN 0128174404, 9780128174401)

Diabetes and Fundus OCT brings together a stellar cast of authors who review the computer-aided diagnostic (CAD) systems


English, 434 pages [417], 2020



Table of contents :
Cover
Diabetes and Fundus OCT
Copyright
Contributors
Computer-aided diagnosis system based on a comprehensive local features analysis for early diabetic retinopathy detection using OCTA
Introduction
Materials and methods
Contrast enhancement and noise elimination
Vessel segmentation
Local feature extraction and diagnosis
Blood vessel density estimation
Retinal blood vessel caliber
Width of the FAZ
Bifurcation points
Mild DR diagnosis
Experimental results
Conclusions
References
Deep learning approach for classification of eye diseases based on color fundus images
Introduction
Deep learning
Convolutional neural network
Fundus imaging
Image acquisition
Main vascular abnormalities
Tortuosity
Generalized arteriolar narrowing
Focal arteriolar narrowing
Bifurcations abnormalities
Crossing abnormalities
Main nonvascular abnormalities
Microaneurysms and red dots
Hemorrhages
Hard exudates
Cotton wool spots
Hypertensive retinopathy
Diabetic retinopathy
Research method
Research framework
Dataset
Preprocessing
Classifier configuration
Model training and cross validation
Experiment result
Conclusion
References
Further reading
Fundus retinal image analyses for screening and diagnosing diabetic retinopathy, macular edema, and glaucoma d ...
Introduction
A brief history of fundus retinal imaging
Public retinal image databases
Automatic retinal image analysis
Performance metrics used in ARIA
Retinal vessel segmentation
Optic disc (or ONH), macula and fovea: Detection and segmentation
Macular edema and DR classification using fundus image
Detection of microaneurysms and hemorrhage
Detection of exudates
Automatic classification of DR and DME severity levels
Glaucoma classification using ARIA
Future trends in fundus retinal imaging
Conclusion
References
Mobile phone-based diabetic retinopathy detection system
Introduction
Causes of diabetic retinopathy
Types of diabetic retinopathy
Related works
Proposed system
Artificial neural networks
Discrete wavelet transform
Smartphone as an ophthalmoscope
Software requirement and description
Algorithm and flow chart
Experimental results
Conclusion
References
Computer-aided diagnosis of age-related macular degeneration by OCT, fundus image analysis
Introduction
Risk factors
Symptoms of macular degeneration
Macular degeneration diagnosis, treatment, and prevention
Diagnosis
AMD treatment
Dry AMD treatment
Wet AMD treatment
Related work
Methodology of fundus image analysis for AMD diagnosis
Methodology
Method-1
Method-2
Method-3
Results
Discussion
OCT image analysis for AMD diagnosis
Methodology
Method-1
Method-1
Decision rule
Method-2
Method-3
Method-4
Results
Discussion
Conclusion
References
Further reading
Retinal diseases diagnosis based on optical coherence tomography angiography
Introduction
OCTA versus FA
Retinal and optic nerve disease diagnosis based on OCTA analysis
Diabetic retinopathy
Glaucoma
Age-related macular degeneration
Anterior ischemic optic neuropathy
Retinal vein occlusion
Retinal artery occlusion
Limitations of OCTA
Conclusion
References
Optical coherence tomography: A review
Introduction
Retina anatomy in OCT
Related work
Normal healthy eye
Glaucoma
Central serous chorioretinopathy
Anterior ischemic optic neuropathy
Diabetic macular edema
Cystoid macular edema
Age-related macular degeneration
Other different diseases
Challenges and future directions
Automatic segmentation techniques
OCT CAD systems
Standard number of layers
Weak layer boundaries
Artifacts
Conclusion
References
An accountable saliency-oriented data-driven approach to diabetic retinopathy detection
Introduction
State of the art
Methodology
Accountable machine learning
Global data-driven approach
Local saliency-oriented data-driven approach
Patches extraction
Fisher vector encoding
Integration
Per patient analysis
Contextualizing with state of the art
Experimental protocol
Validation protocol
Datasets
Results
Global data-driven approach
Local saliency-oriented data-driven approach
Global and local information
Comparison with state of the art
Conclusion
References
Machine learning-based abnormalities detection in retinal fundus images
Introduction
Abnormality detection
Preprocessing
Illumination equalization
Adaptive contrast equalization
Contrast limited adaptive histogram equalization (CLAHE)
Color normalization
Histogram equalization and histogram specification
Optic disc detection
Candidate region detection
Profile features
Shape features
Intensity features
Tortuous retinal vessel
Database
Radon transform
Extraction of vascular skeleton
Elimination of bifurcation and crossover points
Tortuosity measurement
Arc over chord ratio method
Curvature method
Conclusion
References
Further reading
Optical coherence tomography angiography of retinal vascular diseases
Introduction
OCT-A of normal eyes
Selected retinal pathologies in OCT-A
Diabetic retinopathy
Retinal vascular occlusions
Retinal vein occlusions
Branch retinal vein occlusion (BRVO)
Central retinal vein occlusion (CRVO)
Retinal artery occlusion
Branch retinal artery occlusion (BRAO)
Central retinal artery occlusion (CRAO)
Macular telangiectasia (MacTel)
Prepapillary vascular loop
Summary
References
Screening and detection of diabetic retinopathy by using engineering concepts
Introduction
Anatomy of the eye
Worldwide scenario
Mild NPDR
Moderate NPDR
Severe NPDR
PDR
Benefits of early screening of DR
Engineering concepts used in the diagnosis of DR
Collection of the database for the screening of the disease
Fundus photography
Online fundus database
Design of the automated system for DR screening
Methodology
Image enhancement
Vessel segmentation
Thinning
Graph-based approach
Crossover location identification
How to find optimal forest (set of vessels)
Feature extractions
Feature vectors
KNN classification for A/V differentiation in blood vessels
MI classification of vein based on thickness measurement
MI detection approach
MI classification based on SVM classifier
Software description
Results and conclusions
References
Further reading
Optical coherence tomography angiography in type 3 neovascularization
Introduction
Epidemiology and risk factors
Pathogenesis
Classification
Multimodal imaging and type 3 neovascularization
OCT-A and type 3 neovascularization
Optical coherence tomography angiography-Overview on technical aspects
OCT-A and type 3 neovascularization
OCT-A and nascent type 3 neovascularization
Treatment
Therapeutic rationale in the treatment of neovascular AMD
Treatment of active type 3 neovascularization
Treatment of nascent type 3 neovascularization
Conclusion
References
Diabetic retinopathy detection in ocular imaging by dictionary learning
Diabetes
Imaging biomarkers
Fundus
Preprocessing
Segmentation of different anatomical features
Blood vessel extraction methods
Optic disk segmentation approaches
Detection of biomarkers
Approaches used for MA detection
Approaches used for exudates detection
Approaches used for hemorrhages detection
OCT
DR classification
Fundus
OCT
Dictionary learning
Fundus
Learning
Classification
Result
OCT
Sparse representation and dictionary learning framework
Separating the particularity and the commonality dictionary learning (COPAR)
Fisher discrimination dictionary learning (FDDL)
Low-rank shared dictionary learning (LRSDL)
Experimental results
Conclusion
References
Lesion detection using segmented structure of retina
Introduction
Literature review
Lesion detection using segmentation structure of retina
Preprocessing
Gaussian filter
Adaptive histogram equalization
Segmentation
Fuzzy c-means clustering
Fuzzy c-means algorithm
Flow chart
Feature extraction and selection
Feature extraction and selection using morphological operations
Classification
SVM classifier
Result and discussions
Performance analysis
Conclusion
References
Index


Diabetes and Fundus OCT

Diabetes and Fundus OCT Edited by Ayman S. El-Baz University of Louisville, Louisville, KY, United States

Jasjit S. Suri AtheroPoint, CA, USA

Elsevier Radarweg 29, PO Box 211, 1000 AE Amsterdam, Netherlands The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom 50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States © 2020 Elsevier Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions. This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein). Notices Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility. To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein. Library of Congress Cataloging-in-Publication Data A catalog record for this book is available from the Library of Congress British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library ISBN: 978-0-12-817440-1 For information on all Elsevier publications visit our website at https://www.elsevier.com/books-and-journals

Publisher: Stacy Masucci Acquisitions Editor: Tari K. Broderick Editorial Project Manager: Samantha Allard Production Project Manager: Maria Bernard Cover Designer: Matthew Limbert Typeset by SPi Global, India

Contributors

Waleed Habib Abdulla, Department of Electrical, Computer and Software Engineering, The University of Auckland, Auckland, New Zealand
Edi Abdurachman, Binus Graduate Program, Bina Nusantara University, Jakarta, Indonesia
Ahmed Aboelfetouh, Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
Marah Talal Alhalabi, Electrical and Computer Engineering Department, Abu Dhabi University, Abu Dhabi, United Arab Emirates
Zahra Amini, Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
Punal M. Arabi, BME Dept, ACS College of Engineering, Bangalore, India
Sandra Avila, Institute of Computing, University of Campinas (Unicamp), Campinas, Brazil
Francesco Bandello, Department of Ophthalmology, University Vita-Salute, Scientific Institute San Raffaele, Milan, Italy
Enrico Borrelli, Department of Ophthalmology, University Vita-Salute, Scientific Institute San Raffaele, Milan, Italy
Widodo Budiharto, Binus Graduate Program, Bina Nusantara University, Jakarta, Indonesia
Adriano Carnevali, Department of Ophthalmology, University Magna Graecia, Catanzaro, Italy
Renoh Johnson Chalakkal, Department of Electrical, Computer and Software Engineering, The University of Auckland, Auckland; oDocs Eye Care Ltd., Dunedin, New Zealand
Eleonora Corbelli, Department of Ophthalmology, University Vita-Salute, Scientific Institute San Raffaele, Milan, Italy
Prema Daigavane, Electronics Engineering, G. H. Raisoni College of Engineering, Nagpur, India


Glan Devadas, Electronics and Instrumentation Engineering, Vimal Jyothi Engineering College, Kannur, Kerala, India
D. Anto Sahaya Dhas, Electronics and Communication Engineering, Vimal Jyothi Engineering College, Kannur, Kerala, India
Nabila Eladawi, Bioengineering Department, University of Louisville, Louisville, KY, United States
Ayman El-Baz, Bioengineering Department, University of Louisville, Louisville, KY, United States
Mohammed Elmogy, Bioengineering Department, University of Louisville, Louisville, KY, United States
Alexandre Ferreira, Institute of Computing, University of Campinas (Unicamp), Campinas, Brazil
Harry W. Flynn, Jr, Bascom Palmer Eye Institute, Department of Ophthalmology, University of Miami Miller School of Medicine, Miami, FL, United States
Mohammed Ghazal, Electrical and Computer Engineering Department, Abu Dhabi University, Abu Dhabi, United Arab Emirates
Sheng Chiong Hong, oDocs Eye Care Ltd., Dunedin, New Zealand
G. Indumathi, Mepco Schlenk Engineering College, Sivakasi, India
Gayatri Joshi, BME Dept, ACS College of Engineering, Bangalore, India
Anoop Balakrishnan Kadan, Electronics and Communication Engineering, Vimal Jyothi Engineering College, Kannur, Kerala, India
Rahele Kafieh, Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
Nikita Kashyap, Electronics and Communication Engineering, Dr. C.V. Raman University, Bilaspur, Chhattisgarh, India
Robert Keynton, Bioengineering Department, University of Louisville, Louisville, KY, United States
Ashraf Khalil, Computer Science Department, College of Engineering, Abu Dhabi University, Abu Dhabi, United Arab Emirates
Rosa Lozada, Department of Ophthalmology, University of Puerto Rico School of Medicine, San Juan, PR, United States
Ali H. Mahmoud, Bioengineering Department, University of Louisville, Louisville, KY, United States


Hatem Mahmoud, Department of Ophthalmology, Faculty of Medicine, Al-Azhar University, Cairo, Egypt; Bioengineering Department, University of Louisville, Louisville, KY, United States
K.C. Manoj, Electronics and Communication Engineering, Vimal Jyothi Engineering College, Kannur, Kerala, India
Elaheh Mousavi, Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
Wani Patil, Electronics Engineering, G. H. Raisoni College of Engineering, Nagpur, India
Ramon Pires, Institute of Computing, University of Campinas (Unicamp), Campinas, Brazil
Giuseppe Querques, Department of Ophthalmology, University Vita-Salute, Scientific Institute San Raffaele, Milan, Italy
Lea Querques, Department of Ophthalmology, University Vita-Salute, Scientific Institute San Raffaele, Milan, Italy
Hossein Rabbani, Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
Alaa Riad, Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
Anderson Rocha, Institute of Computing, University of Campinas (Unicamp), Campinas, Brazil
T.V. Roshini, Electronics and Communication Engineering, Vimal Jyothi Engineering College, Kannur, Kerala, India
Boy Subirosa Sabarguna, Binus Graduate Program, Bina Nusantara University, Jakarta, Indonesia
Riccardo Sacconi, Department of Ophthalmology, University Vita-Salute, Scientific Institute San Raffaele, Milan, Italy
Perumal Sankar, Electronics and Communication Engineering, TOC H Institute of Science and Technology, Ernakulam, Kerala, India
V. Sathananthavathi, Mepco Schlenk Engineering College, Sivakasi, India
Shlomit Schaal, Department of Ophthalmology and Visual Sciences, University of Massachusetts Medical School, Worcester, MA, United States
Stephen G. Schwartz, Bascom Palmer Eye Institute, Department of Ophthalmology, University of Miami Miller School of Medicine, Miami, FL, United States


Dharmendra Kumar Singh, Electrical and Electronics Engineering, Dr. C.V. Raman University, Bilaspur, Chhattisgarh, India
Girish Kumar Singh, Computer Science and Applications, Dr. Harisingh Gour University, Sagar, Madhya Pradesh, India
Bambang Krismono Triwijoyo, Binus Graduate Program, Bina Nusantara University, Jakarta; Bumigora University, Mataram, Indonesia
Victor M. Villegas, Department of Ophthalmology, University of Puerto Rico School of Medicine, San Juan, PR; Bascom Palmer Eye Institute, Department of Ophthalmology, University of Miami Miller School of Medicine, Miami, FL, United States
Jacques Wainer, Institute of Computing, University of Campinas (Unicamp), Campinas, Brazil


Computer-aided diagnosis system based on a comprehensive local features analysis for early diabetic retinopathy detection using OCTA

Nabila Eladawi (a), Mohammed Elmogy (a), Mohammed Ghazal (b), Hatem Mahmoud (c), Ali H. Mahmoud (a), Ashraf Khalil (d), Ahmed Aboelfetouh (e), Alaa Riad (e), Robert Keynton (a), Ayman El-Baz (a)

(a) Bioengineering Department, University of Louisville, Louisville, KY, United States; (b) Electrical and Computer Engineering Department, Abu Dhabi University, Abu Dhabi, United Arab Emirates; (c) Department of Ophthalmology, Faculty of Medicine, Al-Azhar University, Cairo, Egypt; (d) Computer Science Department, College of Engineering, Abu Dhabi University, Abu Dhabi, United Arab Emirates; (e) Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt

1 Introduction

Retinovascular diseases constitute a major cause of vision loss. Diabetes over a long period deteriorates the small retinal blood vessels. These vessels leak fluid and blood, which causes retinal tissue swelling. As a result, diabetic retinopathy (DR), a significant complication of diabetes, may develop [1]. Clinical features such as neovascularization, microaneurysms, and hemorrhages are seen in people suffering from DR. Neovascularization is the appearance of new, unusual blood vessels in many parts of the eye, including the retina. The walls of these new vessels are weak and may break and bleed. One of the primary effects of neovascularization and bleeding is the appearance of new vascular crossover and bifurcation points in the retinal vasculature network [2]. Therefore, early and accurate detection of these signs is important to prevent blindness and avoid DR complications. Manual diagnosis and analysis of retinal images is a time-consuming and tedious process. Thus, automatic detection and diagnosis will minimize time and effort, which will help in the early detection of the disease [3]. There are various imaging modalities for the retina that estimate the state of the blood vasculature network. These include fluorescein angiography (FA), color fundus, and optical coherence tomography angiography (OCTA) images.


OCTA is a new noninvasive imaging modality that captures the blood vasculature network in various plexuses corresponding to different layers of the retina. Using OCTA, the ophthalmologist can easily examine the avascular capillary, superficial capillary, and deep capillary plexuses, in addition to the choroid and choriocapillaris plexuses, separately [4]. Another advantage of OCTA is that it allows ophthalmologists to measure the foveal avascular zone (FAZ) and the nonperfusion area bilaterally without obscuration by leakage of fluorescein dye [5]. Therefore, in this chapter, we have analyzed OCTA images with two different retinal plexuses to detect and diagnose mild DR cases and distinguish them from normal cases.

In the literature, retinal image analysis is a very rich research area, and many studies have been conducted to diagnose various retinal diseases by analyzing retinal blood vessels, retinal layers, or both. For instance, Agemy et al. [6] introduced a method using OCTA to map retinal vascular perfusion density and to compare various stages of DR. They noticed a considerable decrease in capillary perfusion density values as DR progresses. Hwang et al. [7] introduced a method using OCTA to demonstrate the changes that occur in the FAZ area in DR patients. They noticed that the total area of the FAZ is greater in DR than in normal cases. Hwang et al. [8] investigated the most significant features that can be retrieved from OCTA scans of DR patients. Stanga et al. [9] differentiated between healthy subjects and DR patients in OCTA scans based on the enlargement of the FAZ area in DR cases. Takase et al. [10] evaluated the FAZ area using OCTA images. They noticed that diabetic eyes show an increase in FAZ area as compared with healthy eyes, irrespective of the existence of DR. Bhanushali et al. [11] analyzed the FAZ area, the vessel density, the spacing between small vessels, and the spacing between large ones by applying local fractal analysis to superficial and deep retinal OCTA images. Krawitz et al. [12] used OCTA images to examine the axis ratio of the FAZ to distinguish between healthy and diabetic patients. They found a significant difference between the values for normal cases and diabetic cases. Tarassoly et al. [13] conducted an experiment to assess the capability of OCTA in detecting abnormalities in the images of DR patients and compared it with fundus fluorescein angiography (FFA). Using 120 DR eyes, they were able to detect microaneurysms, intraretinal microvascular abnormalities, and neovascularization, and they concluded that OCTA has a higher detection rate for intraretinal microvascular abnormalities than FFA. Ishibazawa et al. [14] evaluated how OCTA images can capture the features of DR. They collected 47 eyes from DR patients and concluded that OCTA could clearly detect microaneurysms, neovascularization, and retinal nonperfused areas in DR patients, and that OCTA images could be used effectively to evaluate the treatment of DR. Soares et al. [15] performed an observational study to compare the ability of FA and OCTA in classifying patients with DR. They used 50 eyes, 26 of them from DR patients, and two graders to grade and classify the images. They found that OCTA is better than FA in grading the central subfield and parafoveal macular vasculature, especially for the FAZ and capillary dropout.


Freiberg et al. [16] conducted an experiment to analyze the difference in FAZ dimensions between healthy controls and DR patients using OCTA images. They used 29 images of DR patients and 25 of healthy controls. In the superficial layer, they noticed an enlargement of the FAZ diameter in DR subjects; the difference was even more noticeable in the deep layer. They concluded that OCTA can accurately distinguish between normal subjects and DR patients using FAZ dimensions. You et al. [17] used 22 DR patients and 15 healthy controls to investigate the ability of OCTA to measure vessel density and how accurately this measure can differentiate between normal and diseased subjects. They concluded that OCTA was able to measure the vessel density in all subjects accurately, and the results demonstrated that vessel density in DR patients is lower than in normal subjects.

From the literature mentioned earlier, it may be noticed that most of the published work concentrated on processing OCTA images manually. When processing is automated, it is mostly applied to fundus images, which lack depth information. Finally, most current computer-aided diagnosis (CAD) systems base their diagnosis on globally extracted features, which may not be sufficient to detect DR in its early stage. In this chapter, we address these limitations by presenting a CAD system that is able to segment blood vessels from different retinal plexuses using OCTA images. Then, four newly derived local features are extracted to characterize the appearance and spatial structure of the retinal blood vessels and aid the CAD system in its diagnosis.

The rest of the chapter is organized as follows. Section 2 discusses the structure of the proposed OCTA-based diagnosis system, and the four extracted features are elucidated in detail. Section 3 describes the experimental results. Finally, Section 4 presents the conclusion and future work.

2 Materials and methods

Our CAD system consists of four major phases, as illustrated in Fig. 1. First, a preprocessing phase is developed to improve the contrast of the processed OCTA images and to reduce the effect of noise. Second, an automated segmentation phase is implemented to separate the retinal blood vessels from other background tissues. Then, the feature extraction phase is developed to extract four local features from the segmented superficial and deep retinal OCTA plexuses. These local features are the width of the FAZ area, the retinal blood vessel density, the blood vessel caliber, and the vascular bifurcation and crossover points. Finally, a two-stage random forest (RF) classifier is trained using these extracted features to distinguish normal cases from patients with mild DR. Fig. 2 shows the flowchart of the proposed system. In the following sections, the phases of the proposed system are described in detail.


FIG. 1 The proposed OCTA diagnosis system for early signs of mild DR.


FIG. 2 The flowchart of the proposed OCTA diagnosis system for early signs of mild DR.

2.1 Contrast enhancement and noise elimination

First, we need to improve the contrast and homogeneity of the OCTA plexuses and reduce their noise before segmenting the retinal blood vessels. To this end, regional dynamic histogram equalization is applied to generate uniformly distributed gray levels for the processed images.


Then, a combination of the generalized Gauss-Markov random field (GGMRF) model and an adaptive gray level threshold estimation technique is utilized to improve the homogeneity of the processed images [18]. The resulting smoothed image is used as the input to the retinal blood vessel segmentation stage.
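To make this step concrete, here is a minimal preprocessing sketch. The chapter's actual pipeline uses regional dynamic histogram equalization followed by GGMRF-based smoothing [18]; OpenCV's tile-based CLAHE and a median filter are assumed here as rough stand-ins for those two steps, and the tile and kernel sizes are illustrative guesses.

```python
# Approximate preprocessing sketch; CLAHE and a median filter stand in for the
# regional dynamic histogram equalization and GGMRF smoothing of Ref. [18].
import cv2
import numpy as np

def preprocess_octa(gray: np.ndarray) -> np.ndarray:
    """Contrast-enhance and denoise a single-channel (8-bit) OCTA plexus image."""
    # Tile-based (regional) histogram equalization; tile size is a guess.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(gray)
    # Simple smoothing as a crude substitute for GGMRF-based homogeneity refinement.
    return cv2.medianBlur(equalized, 3)

# Example usage (hypothetical file name):
# img = cv2.imread("superficial.png", cv2.IMREAD_GRAYSCALE)
# smoothed = preprocess_octa(img)
```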

2.2 Vessel segmentation

The vessel segmentation phase retrieves and separates the retinal blood vessels from other background tissues in the different retinal OCTA plexuses, such as the superficial and deep plexuses. To segment the blood vessels, the segmentation technique combines three different models: a prior intensity model, a current intensity model, and a higher-order spatial model. First, the prior gray-intensity model is generated from a set of training images that were manually labeled by three different retina experts. Second, an enhanced version of the Expectation Maximization algorithm is used to generate the current intensity model. Finally, a higher-order Markov-Gibbs random field (HO-MGRF) is used to calculate the higher-order spatial model. Both the HO-MGRF and current intensity models are used to overcome the low contrast between the background tissues and the blood vessels. Finally, a Naïve Bayes (NB) classifier is applied by labeling and analyzing the connected components to generate a refined final result. Fig. 3 shows the output of the preprocessing and segmentation stages for normal and mild DR subjects using the superficial and deep OCTA plexuses.

FIG. 3 The output of the preprocessing and segmentation stages for normal and mild DR subjects by using superficial and deep OCTA plexuses.


For more detail about the segmentation technique, the reader is referred to Ref. [19].
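The full model of Ref. [19] fuses the three components described above and is not reproduced here. As a hedged illustration of the overall flow only (global thresholding followed by connected-component refinement, not the actual joint MGRF model), a crude stand-in might look like this:

```python
# Crude vessel-segmentation stand-in; the real method in Ref. [19] combines a
# prior intensity model, an EM-refined current intensity model, and an HO-MGRF
# spatial model. This sketch only thresholds and cleans small components.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects

def segment_vessels(smoothed: np.ndarray, min_size: int = 30) -> np.ndarray:
    """Return a binary vessel mask from a preprocessed OCTA plexus."""
    mask = smoothed > threshold_otsu(smoothed)             # global intensity split
    return remove_small_objects(mask, min_size=min_size)   # connected-component cleanup
```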

2.3 Local feature extraction and diagnosis

In this stage, we extract four distinguishing features that are used in the final step to classify the images as mild DR or normal cases. The details of these features are given in the following sections.

2.3.1 Blood vessel density estimation

Many retinal diseases can be observed by analyzing the vasculature of the retina, and detecting alterations in the retinal blood vessels requires a detailed analysis of that vasculature [20]. According to the literature and the opinion of our retina experts, the blood vessel density changes in DR patients; therefore, it can be used as a feature to differentiate normal from DR cases. To capture the vessel density changes in OCTA images, the local blood vessel density is calculated for both retinal OCTA plexuses. A Parzen window technique was implemented to compute the local density of the blood vessels. Using a given window size, the Parzen window technique calculates the vascular density P_PW(B_r) at a specific location r, which depends on the pixels neighboring the processed pixel in the segmented OCTA image (B_r). We tested five different squared Parzen window sizes (3 × 3, 5 × 5, 7 × 7, 9 × 9, and 11 × 11) to estimate the density and to ensure that our system is not sensitive to the choice of window size. Finally, we used the cumulative distribution function (CDF) to represent the density of the blood vessels for each tested window size. The generated CDFs were used as features to help discriminate normal from DR cases. The minimum incremental value for the CDF is chosen to be 0.01; therefore, the CDF for a specific window size is represented as a 100-element vector. These vectors are supplied to the classifier to assist in diagnosing the processed images. Algorithm 1 lists the steps for calculating the density of blood vessels using the Parzen window technique with different window sizes. Fig. 4 shows an example of the four extracted features.

Algorithm 1 The proposed blood vessel density estimation algorithm.
Data: The segmented superficial and deep plexuses, the Parzen window sizes, and the increment value of the CDF (0.01)
(1) Read the Parzen window size.
(2) Calculate P_PW(B_r) for each pixel in the segmented image.
(3) Count the number of occurrences of each probability value in the image.
(4) Read the increment value of the CDF.
(5) Calculate the CDF (P_CDF:PW(N)) for the current window size.
Result: The CDFs of the superficial and deep plexuses at different window sizes
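A minimal sketch of Algorithm 1 follows. Assuming a uniform Parzen kernel, P_PW(B_r) reduces to the local vessel fraction in the window around r, that is, a box-filter average of the binary mask; sampling the CDF at 0.01 increments yields the 100-element feature vector. Function and variable names are illustrative.

```python
# Minimal sketch of Algorithm 1, assuming `mask` is a binary vessel map and a
# uniform Parzen kernel (so the density is a box-filter average of the mask).
import numpy as np
from scipy.ndimage import uniform_filter

def density_cdf(mask: np.ndarray, window: int = 11) -> np.ndarray:
    density = uniform_filter(mask.astype(float), size=window)  # P_PW(B_r) per pixel
    levels = np.arange(0.01, 1.01, 0.01)                       # 100 CDF sample points
    # Fraction of pixels whose local density is <= each level.
    return np.array([(density <= t).mean() for t in levels])

# One vector per tested window size, concatenated into the density feature:
# feature = np.concatenate([density_cdf(m, w) for w in (3, 5, 7, 9, 11)])
```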


FIG. 4 The four extracted local features of a superficial retinal plexus for mild DR case.

2.3.2 Retinal blood vessel caliber

This feature distinguishes the small and large retinal vessels by analyzing the intensity levels and appearance of the segmented blood vessels from both OCTA plexuses. First, the segmented OCTA plexus is multiplied by the original plexus to obtain the intensity values of the segmented vessels. Then, a CDF is generated for the resulting intensity values, which indicates the variation of the retinal blood vessel caliber. The incremental value for the generated CDF was chosen to be 0.02, so each of these CDFs is represented as a 128-element vector. Fig. 4 shows the segmented blood vessel caliber and its CDF curve from a superficial plexus for a DR case.


The blue color indicates the lowest appearance level of the blood vessel (i.e., the small vessels), whereas the red color indicates the highest appearance level (i.e., the large vessels).
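A short sketch of the caliber feature under the same assumptions (binary mask times original plexus, then a CDF over the surviving intensities). Note that the chapter's 128-element sizing suggests the CDF is sampled over the raw intensity scale; this sketch normalizes intensities to [0, 1] for simplicity, so the sampling grid here is an assumption.

```python
# Sketch of the caliber feature: mask the original plexus, then take the CDF of
# the surviving intensities (small vessels appear dimmer than large ones).
import numpy as np

def caliber_cdf(original: np.ndarray, mask: np.ndarray, step: float = 0.02) -> np.ndarray:
    intensities = (original.astype(float) / 255.0) * mask   # keep vessel pixels only
    values = intensities[mask > 0]
    levels = np.arange(step, 1.0 + step, step)              # CDF sample points
    return np.array([(values <= t).mean() for t in levels])
```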

2.3.3 Width of the FAZ

One of the important indicators of change in visual acuity is the FAZ area. An enlargement of the FAZ area is usually found as a result of the loss of capillaries in DR cases [21]. A region growing algorithm is implemented to segment the FAZ area from the segmented OCTA plexuses. The central point of the segmented plexus is chosen as the seed point (r_seed), as all our images are centered at the macula. Morphological filters are applied after the region growing algorithm to fill small holes and to remove discontinuous regions in the FAZ area, and a median filter is applied to smooth the segmented area. To represent the extracted FAZ area as a local feature, a distance map is calculated using the Euclidean distance between each pixel in the extracted FAZ and its closest boundary pixel. The resulting distances are represented as a CDF curve calculated with an incremental value of 0.03. Each CDF curve is introduced as a 150-element vector, which reflects the maximum value of the distance map of the FAZ area. Fig. 4 shows the extracted FAZ area with its distance map and the CDF curve from a superficial plexus for a DR case.
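Assuming the FAZ mask has already been grown from the macula-centered seed r_seed, the distance-map feature can be sketched with SciPy's Euclidean distance transform; the binning scheme below is one plausible reading of the 150-element construction, not the authors' exact code.

```python
# Sketch of the FAZ width feature: CDF of the Euclidean distance map inside a
# previously segmented FAZ mask (nonzero pixels belong to the FAZ).
import numpy as np
from scipy.ndimage import distance_transform_edt

def faz_distance_cdf(faz_mask: np.ndarray, n_bins: int = 150) -> np.ndarray:
    dist = distance_transform_edt(faz_mask)   # distance to the nearest FAZ boundary
    values = dist[faz_mask > 0]
    # Sample the CDF at n_bins evenly spaced levels up to the maximum distance.
    levels = np.linspace(values.max() / n_bins, values.max(), n_bins)
    return np.array([(values <= t).mean() for t in levels])
```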

2.3.4 Bifurcation points

In retinal images, vascular bifurcation, branch, and crossover points can be considered special landmarks for predicting many retinal diseases, and changes in bifurcation and crossover points may be an indication of illness [22]. Bifurcation points can be recognized easily because of their T-shape with three surrounding branches [23]. At a bifurcation, a vessel divides into two vessels. A branch is a smaller vessel that grows out of a wider vessel. A crossover appears where two vessels (an artery and a vein) cross each other. Our main concern here is to detect bifurcation points. To do so, we first extract the large vessels by multiplying the original image by the segmented image and then applying a threshold, as shown in Fig. 4. Then, a thinning technique is used to erase border pixels and return the skeleton of the vessels without affecting the connectivity or the direction of the blood vessels. Bifurcation points are calculated by analyzing the neighborhood pixels of each point in the generated skeleton using the following equation:

N(X) = (1/2) Σ_{i=1}^{8} |M_i(X) − M_{i+1}(X)|    (1)

where N(X) is the number of intersections computed for each point X of the skeleton, and M_i(X) is the value of the i-th neighborhood pixel of X, with the eight neighbors named consecutively clockwise (so that M_9 wraps around to M_1). Each point is marked as one of four types according to its number of intersections: it is a vessel endpoint if N(X) = 1, an internal vessel point if N(X) = 2, a vessel bifurcation point if N(X) = 3, and a vessel crossover point if N(X) = 4.


In addition, a skeleton filtering step is performed to delete spurious points: we delete skeleton segments shorter than an established threshold, namely the maximum vessel width expected in the image. To use the detected bifurcation points as a feature in our system, we divided the image into 8 × 8, 16 × 16, …, 1024 × 1024 windows and calculated the number of bifurcation, crossover, and branch points in each window size. The 128 × 128 window gave the best result, so we used it in our system. Algorithm 2 lists the steps for generating the feature vectors for bifurcation, crossover, and branch points.

Algorithm 2 The proposed algorithm for extracting the vascular bifurcation points.
Data: The original and segmented superficial images.
(1) Multiply the original superficial image by the segmented image.
(2) Apply a threshold to extract the large blood vessels.
(3) Apply the thinning technique to generate the vessels' skeleton.
(4) Calculate the number of vessel endpoints, internal vessel points, vessel bifurcation points, and vessel crossover points.
(5) Apply the skeleton filtering technique to delete unreal points.
(6) Generate the feature vectors:
• Divide the image into 8 × 8, 16 × 16, …, 1024 × 1024 windows.
• Calculate the number of bifurcation, crossover, and branch points in each window size.
• Find the best window size.
• Generate the feature vectors for bifurcation, crossover, and branch points.
Result: The feature vectors for bifurcation, crossover, and branch points.
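Eq. (1) is the classical crossing-number test, which a short sketch makes explicit: traverse the eight neighbors of a skeleton point clockwise, wrap around to the first neighbor, and count half the number of value changes.

```python
# Sketch of Eq. (1) on a binary skeleton: the crossing number N(X) counts the
# 0->1 transitions around the 8-neighborhood, traversed clockwise with wraparound.
import numpy as np

def crossing_number(skel: np.ndarray, y: int, x: int) -> int:
    # 8 neighbors of (y, x) in clockwise order.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    m = [int(skel[y + dy, x + dx]) for dy, dx in offs]
    return sum(abs(m[i] - m[(i + 1) % 8]) for i in range(8)) // 2

# N == 1: endpoint, N == 2: internal point, N == 3: bifurcation, N == 4: crossover.
```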

2.4 Mild DR diagnosis

This is the final phase of our proposed system, where the classification of the images as normal or mild DR takes place. The classifier uses the four features extracted in the previous phase. The first three features (the vessel density, the vessel caliber, and the distance map of the FAZ area) are supplied to the classifier as CDF curves for both the superficial and deep plexuses. The fourth feature, the vascular bifurcation points, is represented as three vectors containing the numbers of bifurcation, branch, and crossover points in a specific window size (128 × 128) for the superficial retinal plexus. The proposed CAD system employs the RF classifier, which achieved the best results as compared with other state-of-the-art classifiers. The diagnosis phase is a two-stage classification process, as shown in Fig. 1: the first stage processes each extracted feature independently, and the second stage fuses the results of the first stage to produce the final diagnosis decision.
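A hedged sketch of this two-stage design follows: one RF per feature vector, and a second RF that fuses the per-feature class probabilities. The chapter does not specify classifier hyperparameters, so scikit-learn defaults are assumed; a careful implementation would also use out-of-fold probabilities for the fusion stage rather than training-set probabilities.

```python
# Two-stage RF sketch: stage 1 fits one forest per local feature, stage 2 fuses
# the per-feature class probabilities into a final diagnosis decision.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def two_stage_fit(feature_sets, y):
    """feature_sets: list of (n_samples, n_dims) arrays, one per local feature."""
    stage1 = [RandomForestClassifier().fit(F, y) for F in feature_sets]
    fused = np.hstack([clf.predict_proba(F) for clf, F in zip(stage1, feature_sets)])
    stage2 = RandomForestClassifier().fit(fused, y)   # final fusion classifier
    return stage1, stage2

def two_stage_predict(stage1, stage2, feature_sets):
    fused = np.hstack([clf.predict_proba(F) for clf, F in zip(stage1, feature_sets)])
    return stage2.predict(fused)
```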


3 Experimental results

A dataset of 133 cases was used to evaluate our diagnosis system (34 normal and 99 mild DR cases). A ZEISS AngioPlex OCT Angiography machine [24] was used to capture the images. This machine provides a noninvasive vascular imaging platform for blood flow in the retina. As mentioned earlier, our presented system was tested on both the deep and superficial retinal plexuses. We used an image size of 1024 × 1024 pixels, and the OCTA images were 6 × 6 mm² sections centered on the fovea. Five different metrics were used to evaluate our diagnosis system: accuracy (ACC), sensitivity (Sens.), specificity (Spec.), the area under the curve (AUC), and the dice similarity coefficient (DSC). Two cross-validation techniques were used: twofold and fourfold cross-validation. In addition, the performance of the proposed system was evaluated against five state-of-the-art classifiers: the support vector machine (SVM) with linear kernel, SVM with radial basis function (RBF), SVM with polynomial kernel, classification tree, and K-nearest neighbor (KNN). Table 1 lists the results of the classification based on the four extracted features for the tested classifiers. It shows that the RF classifier outperforms the other state-of-the-art techniques and provides promising results.

Table 1 The results of the classification based on the density of the blood vessels (11 × 11 window), the retinal blood vessel caliber, the distance map of the FAZ, and bifurcation points from both the superficial and deep retinal maps. Bold text presents the best results.

Classifier           Validation  ACC (%)  Sens. (%)  Spec. (%)  AUC (%)  DSC (%)
SVM (linear)         Fourfold    88.6     95.7       73.9       84.8     91.8
SVM (linear)         Twofold     68.6     85.1       34.8       59.9     78.4
SVM (polynomial)     Fourfold    91.4     95.7       82.6       89.2     93.8
SVM (polynomial)     Twofold     70       83         43.5       63.2     78.8
SVM (RBF)            Fourfold    88.6     95.7       73.9       84.8     91.8
SVM (RBF)            Twofold     85.7     91.5       73.9       82.7     89.6
KNN                  Fourfold    90       95.7       78.3       87       92.8
KNN                  Twofold     90       93.6       82.6       88.1     92.6
Classification tree  Fourfold    84.3     91.5       69.6       80.5     88.7
Classification tree  Twofold     84.3     93.6       65.2       79.4     88.9
RF                   Fourfold    94.3     97.9       87         92.4     95.8
RF                   Twofold     97.1     97.9       95.7       96.8     97.9

We conducted six experiments to understand the effect of the four extracted features and of their different combinations on the diagnosis result. First, we evaluated the system using the density feature with a given window size for the superficial plexus only and then for the deep plexus. Second, we combined the density feature for both tested plexuses with a given window size. The 11 × 11 window size provided the best results in the first two experiments; therefore, we fixed the 11 × 11 window size for the vessel density in the remaining experiments. In addition, the combination of the density features in both plexuses achieved the best results. Third, the density feature was combined with the vessel caliber for both plexuses. Fourth, the density feature was combined with the distance map of the FAZ area for both plexuses. Fifth, the first three extracted features were used to evaluate the proposed diagnosis system. Finally, all four features were used to evaluate the system. We found that the last scenario gives the best results as compared with the other scenarios. Table 2 lists the results of these feature combinations for the RF classifier based on twofold cross-validation.

Table 2 The results of some feature combinations on both the superficial and deep retinal plexuses using the RF classifier with twofold cross-validation. Bold text presents the best results.

Features                                                      ACC (%)  Sens. (%)  Spec. (%)  AUC (%)  DSC (%)
Vessels density + vessels caliber                             90       95.7       78.3       87       92.8
Vessels density + FAZ                                         81.4     85.1       73.9       79.5     86
Bifurcation points + vessels density                          84.3     91.5       69.9       80.5     88.7
Bifurcation points + vessels caliber                          71.4     85.1       43.5       64.3     80
Bifurcation points + FAZ                                      72.9     97.7       21.7       59.8     82.9
Vessels density + vessels caliber + bifurcation points        90       91.5       87         89.2     92.5
Vessels density + vessels caliber + FAZ + bifurcation points  97.1     97.9       95.7       96.8     97.9
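For reference, the threshold-based metrics reported in Tables 1 and 2 follow directly from the binary confusion matrix; a small helper is sketched below (AUC additionally requires ranked classifier scores, e.g., via sklearn.metrics.roc_auc_score).

```python
# Sketch of the reported metrics from a binary confusion matrix
# (tp/tn/fp/fn = true/false positives and negatives).
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    return {
        "ACC": (tp + tn) / (tp + tn + fp + fn),
        "Sens": tp / (tp + fn),
        "Spec": tn / (tn + fp),
        "DSC": 2 * tp / (2 * tp + fp + fn),   # Dice similarity coefficient
    }
```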

4 Conclusions

An OCTA-based diagnosis system for the detection of the early signs of DR has been discussed. First, we extracted the blood vessels from both the deep and superficial OCTA images. Then, four local features that represent shape and appearance were extracted: the blood vessel caliber, the blood vessel density, the distance map of the FAZ, and the bifurcation points. These features were used to train an RF classifier. Using 133 subjects, the system obtained an accuracy of 97%. This result demonstrates that our system has the ability to detect the early signs of DR. In the future, we will try to extract more features to enhance the diagnostic capability of our system. We will also try to apply the system to detect other diseases in their early stages and measure its accuracy.

This study could also be used for various other applications in medical imaging, such as the kidney, the heart, the prostate, the lung, and the brain, in addition to the retina, as well as several nonmedical applications [25–28]. One such application is renal transplant functional assessment, especially with the development of noninvasive CAD systems for renal transplant function assessment utilizing different imaging modalities (e.g., ultrasound, computed tomography [CT], MRI, etc.). Accurate assessment of renal transplant function is critically important for graft survival. Although transplantation can improve a patient's well-being, there is a potential posttransplantation risk of kidney dysfunction that, if not treated in a timely manner, can lead to the loss of the entire graft and even patient death. In particular, dynamic and diffusion MRI-based systems have been used clinically to assess transplanted kidneys, with the advantage of providing information on each kidney separately.


For more details about renal transplant functional assessment, the reader is referred to Refs. [29–56]. This study also finds an important application in cardiac imaging. The clinical assessment of myocardial perfusion plays a major role in the diagnosis, management, and prognosis of ischemic heart disease. Thus, there have been ongoing efforts to develop automated systems to accurately analyze myocardial perfusion using first-pass images [57–73]. Abnormalities of the lung could also be another promising area of research and a related application of this study. Radiation-induced lung injury is the main side effect of radiation therapy in lung cancer patients. Although higher radiation doses increase the effectiveness of radiation therapy for tumor control, they can lead to lung injury, as a greater quantity of normal lung tissue is included in the treated area. Almost one-third of patients who undergo radiation therapy develop lung injury following radiation treatment. The severity of radiation-induced lung injury ranges from ground-glass opacities and consolidation in the early phase to fibrosis and traction bronchiectasis in the late phase. Early detection of lung injury will thus help to improve management of the treatment [74–116]. This study can also be applied to other brain abnormalities, such as dyslexia, in addition to autism. Dyslexia is one of the most complicated developmental brain disorders that affect children's learning abilities. Dyslexia leads to the failure to develop age-appropriate reading skills in spite of a normal intelligence level and adequate reading instruction. Neuropathological studies have revealed an abnormal anatomy of some structures, such as the corpus callosum, in dyslexic brains. There have been many studies in the literature aimed at developing CAD systems for diagnosing this disorder, along with other brain disorders [117–139]. For the vascular system [140], this study could also be applied to the extraction of blood vessels, for example, from phase contrast magnetic resonance angiography (MRA). Accurate cerebrovascular segmentation using noninvasive MRA is crucial for the early diagnosis and timely treatment of intracranial vascular diseases [122, 123, 141–146].

References

[1] M. Purandare, K. Noronha, Hybrid system for automatic classification of diabetic retinopathy using fundus images, in: Proceedings of 2016 Online International Conference on Green Engineering and Technologies (IC-GET), 2016, pp. 1–5.
[2] M. Iqbal, A. Aibinu, M. Nilsson, I. Tijani, M.E. Salami, Detection of vascular intersection in retina fundus image using modified cross point number and neural network technique, in: Proceedings of the 2008 International Conference on Computer and Communication Engineering, 2008, pp. 241–246.
[3] J. Nayak, P.S. Bhat, R. Acharya U, C.M. Lim, M. Kagathi, Automated identification of diabetic retinopathy stages using digital fundus images, J. Med. Syst. 32 (2) (2008) 107–115.
[4] L. An, T.T. Shen, R.K. Wang, Using ultrahigh sensitive optical microangiography to achieve comprehensive depth resolved microvasculature mapping for human retina, J. Biomed. Opt. 16 (10) (2011) 106013.


[5] T. Somkijrungroj, S. Vongkulsiri, W. Kongwattananon, P. Chotcomwongse, S. Luangpitakchumpol, K. Jaisuekul, Assessment of vascular change using swept-source optical coherence tomography angiography: a new theory explains central visual loss in Behcet's disease, J. Ophthalmol. 2017 (2017) 2180723.
[6] S.A. Agemy, N.K. Scripsema, C.M. Shah, T. Chui, P.M. Garcia, J.G. Lee, R.C. Gentile, Y.-S. Hsiao, Q. Zhou, T. Ko, et al., Retinal vascular perfusion density mapping using optical coherence tomography angiography in normals and diabetic retinopathy patients, Retina 35 (11) (2015) 2353–2363.
[7] T.S. Hwang, Y. Jia, S.S. Gao, S.T. Bailey, A.K. Lauer, C.J. Flaxel, D.J. Wilson, D. Huang, Optical coherence tomography angiography features of diabetic retinopathy, Retina (Philadelphia, PA) 35 (11) (2015) 2371.
[8] T.S. Hwang, S.S. Gao, L. Liu, A.K. Lauer, S.T. Bailey, C.J. Flaxel, D.J. Wilson, D. Huang, Y. Jia, Automated quantification of capillary nonperfusion using optical coherence tomography angiography in diabetic retinopathy, JAMA Ophthalmol. 134 (4) (2016) 367–373.
[9] P.E. Stanga, A. Papayannis, E. Tsamis, F. Stringa, T. Cole, Y. D'Souza, A. Jalil, New findings in diabetic maculopathy and proliferative disease by swept-source optical coherence tomography angiography, in: OCT Angiography in Retinal and Macular Diseases, vol. 56, Karger Publishers, 2016, pp. 113–121.
[10] N. Takase, M. Nozaki, A. Kato, H. Ozeki, M. Yoshida, Y. Ogura, Enlargement of foveal avascular zone in diabetic eyes evaluated by en face optical coherence tomography angiography, Retina 35 (11) (2015) 2377–2383.
[11] D. Bhanushali, N. Anegondi, S.G.K. Gadde, P. Srinivasan, L. Chidambara, N.K. Yadav, A.S. Roy, Linking retinal microvasculature features with severity of diabetic retinopathy using optical coherence tomography angiography, Invest. Ophthalmol. Vis. Sci. 57 (9) (2016) OCT519–OCT525.
[12] B.D. Krawitz, S. Mo, L.S. Geyman, S.A. Agemy, N.K. Scripsema, P.M. Garcia, T.Y.P. Chui, R.B. Rosen, A circularity index and axis ratio of the foveal avascular zone in diabetic eyes and healthy controls measured by optical coherence tomography angiography, Vision Res. 139 (2017) 177–186, https://doi.org/10.1016/j.visres.2016.09.019.
[13] K. Tarassoly, A. Miraftabi, M. Soltan Sanjari, M.M. Parvaresh, The relationship between foveal avascular zone area, vessel density, and cystoid changes in diabetic retinopathy: an optical coherence tomography angiography study, Retina 38 (8) (2018) (online), Available from: https://journals.lww.com/retinajournal.
[14] A. Ishibazawa, T. Nagaoka, A. Takahashi, T. Omae, T. Tani, K. Sogawa, H. Yokota, A. Yoshida, Optical coherence tomography angiography in diabetic retinopathy: a prospective pilot study, Am. J. Ophthalmol. 160 (1) (2015) 35–44.e1, https://doi.org/10.1016/j.ajo.2015.04.021.
[15] M. Soares, C. Neves, I.P. Marques, I. Pires, C. Schwartz, M.Â. Costa, T. Santos, M. Durbin, J. Cunha-Vaz, Comparison of diabetic retinopathy classification using fluorescein angiography and optical coherence tomography angiography, Br. J. Ophthalmol. 101 (2017) 62–68, https://doi.org/10.1136/bjophthalmol-2016-309424.
[16] F.J. Freiberg, M. Pfau, J. Wons, M.A. Wirth, M.D. Becker, S. Michels, Optical coherence tomography angiography of the foveal avascular zone in diabetic retinopathy, Graefes Arch. Clin. Exp. Ophthalmol. 254 (6) (2016) 1051–1058, https://doi.org/10.1007/s00417-015-3148-2.
[17] Q. You, W.R. Freeman, R.N. Weinreb, L. Zangwill, P.I.C. Manalastas, L.J. Saunders, E. Nudleman, Reproducibility of vessel density measurement with optical coherence tomography angiography in eyes with and without retinopathy, Retina 37 (8) (2017) (online), Available from: https://journals.lww.com/retinajournal.
[18] G. Gimel'farb, A.A. Farag, A. El-Baz, Expectation-maximization for a linear combination of Gaussians, in: Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004, vol. 3, 2004, pp. 422–425, ISSN 1051-4651, https://doi.org/10.1109/ICPR.2004.1334556.
[19] N. Eladawi, M. Elmogy, O. Helmy, A. Aboelfetouh, A. Riad, H. Sandhu, S. Schaal, A. El-Baz, Automatic blood vessels segmentation based on different retinal maps from OCTA scans, Comput. Biol. Med. 89 (2017) 150–161.


[20] S.G.K. Gadde, N. Anegondi, D. Bhanushali, L. Chidambara, N.K. Yadav, A. Khurana, A. Sinha Roy, Quantification of vessel density in retinal optical coherence tomography angiography images using local fractal dimension vessel density in OCTA images, Invest. Ophthalmol. Vis. Sci. 57 (1) (2016) 246.
[21] M.H. Ahmad Fadzil, L.I. Iznita, H.A. Nugroho, Analysis of foveal avascular zone for grading of diabetic retinopathy, Int. J. Biomed. Eng. Technol. 6 (3) (2011) 232–250, https://doi.org/10.1504/IJBET.2011.041463.
[22] A. Bhuiyan, B. Nath, K. Ramamohanarao, Detection and classification of bifurcation and branch points on retinal vascular network, in: 2012 International Conference on Digital Image Computing Techniques and Applications (DICTA), December, 2012, pp. 1–8, https://doi.org/10.1109/DICTA.2012.6411742.
[23] L. Chen, Y. Xiang, Y. Chen, X. Zhang, Retinal image registration using bifurcation structures, in: 2011 18th IEEE International Conference on Image Processing, September, ISSN 1522-4880, 2011, pp. 2169–2172, https://doi.org/10.1109/ICIP.2011.6116041.
[24] ZEISS, AngioPlex OCT Angiography 2017, Available from: http://www.zeiss.com/meditec/us/c/octangiography.html.
[25] A.H. Mahmoud, Utilizing Radiation for Smart Robotic Applications Using Visible, Thermal, and Polarization Images (Ph.D. thesis), University of Louisville, 2014.
[26] A. Mahmoud, A. El-Barkouky, J. Graham, A. Farag, Pedestrian detection using mixed partial derivative based histogram of oriented gradients, in: 2014 IEEE International Conference on Image Processing (ICIP), IEEE, 2014, pp. 2334–2337.
[27] A. El-Barkouky, A. Mahmoud, J. Graham, A. Farag, An interactive educational drawing system using a humanoid robot and light polarization, in: 2013 IEEE International Conference on Image Processing, IEEE, 2013, pp. 3407–3411.
[28] A.H. Mahmoud, M.T. El-Melegy, A.A. Farag, Direct method for shape recovery from polarization and shading, in: 2012 19th IEEE International Conference on Image Processing, IEEE, 2012, pp. 1769–1772.
[29] A.M. Ali, A.A. Farag, A. El-Baz, Graph cuts framework for kidney segmentation with prior shape constraints, in: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI'07), Brisbane, Australia, October 29–November 2, vol. 1, 2007, pp. 384–392.
[30] A.S. Chowdhury, R. Roy, S. Bose, F.K.A. Elnakib, A. El-Baz, Non-rigid biomedical image registration using graph cuts with a novel data term, in: Proceedings of IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI'12), Barcelona, Spain, May 2–5, 2012, pp. 446–449.
[31] A. El-Baz, A.A. Farag, S.E. Yuksel, M.E.A. El-Ghar, T.A. Eldiasty, M.A. Ghoneim, Application of deformable models for the detection of acute renal rejection, in: Deformable Models, Springer, New York, NY, 2007, pp. 293–333.
[32] A. El-Baz, A. Farag, R. Fahmi, S. Yuksel, M.A. El-Ghar, T. Eldiasty, Image analysis of renal DCE MRI for the detection of acute renal rejection, in: Proceedings of IAPR International Conference on Pattern Recognition (ICPR'06), Hong Kong, August 20–24, 2006, pp. 822–825.
[33] A. El-Baz, A. Farag, R. Fahmi, S. Yuksel, W. Miller, M.A. El-Ghar, T. El-Diasty, M. Ghoneim, A new CAD system for the evaluation of kidney diseases using DCE-MRI, in: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI'08), Copenhagen, Denmark, October 1–6, 2006, pp. 446–453.
[34] A. El-Baz, G. Gimel'farb, M.A. El-Ghar, A novel image analysis approach for accurate identification of acute renal rejection, in: Proceedings of IEEE International Conference on Image Processing (ICIP'08), San Diego, California, USA, October 12–15, 2008, pp. 1812–1815.
[35] A. El-Baz, G. Gimel'farb, M.A. El-Ghar, Image analysis approach for identification of renal transplant rejection, in: Proceedings of IAPR International Conference on Pattern Recognition (ICPR'08), Tampa, Florida, USA, December 8–11, 2008, pp. 1–4.


[36] A. El-Baz, G. Gimel'farb, M.A. El-Ghar, New motion correction models for automatic identification of renal transplant rejection, in: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI'07), Brisbane, Australia, October 29–November 2, 2007, pp. 235–243.
[37] A. Farag, A. El-Baz, S. Yuksel, M.A. El-Ghar, T. Eldiasty, A framework for the detection of acute rejection with dynamic contrast enhanced magnetic resonance imaging, in: Proceedings of IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI'06), Arlington, Virginia, USA, April 6–9, 2006, pp. 418–421.
[38] F. Khalifa, G.M. Beache, M.A. El-Ghar, T. El-Diasty, G. Gimel'farb, M. Kong, A. El-Baz, Dynamic contrast-enhanced MRI-based early detection of acute renal transplant rejection, IEEE Trans. Med. Imaging 32 (10) (2013) 1910–1927.
[39] F. Khalifa, A. El-Baz, G. Gimel'farb, M.A. El-Ghar, Non-invasive image-based approach for early detection of acute renal rejection, in: Proceedings of International Conference Medical Image Computing and Computer-Assisted Intervention (MICCAI'10), Beijing, China, September 20–24, 2010, pp. 10–18.
[40] F. Khalifa, A. El-Baz, G. Gimel'farb, R. Ouseph, M.A. El-Ghar, Shape-appearance guided level-set deformable model for image segmentation, in: Proceedings of IAPR International Conference on Pattern Recognition (ICPR'10), Istanbul, Turkey, August 23–26, 2010, pp. 4581–4584.
[41] F. Khalifa, M.A. El-Ghar, B. Abdollahi, H. Frieboes, T. El-Diasty, A. El-Baz, A comprehensive non-invasive framework for automated evaluation of acute renal transplant rejection using DCE-MRI, NMR Biomed. 26 (11) (2013) 1460–1470.
[42] F. Khalifa, M.A. El-Ghar, B. Abdollahi, H.B. Frieboes, T. El-Diasty, A. El-Baz, Dynamic contrast-enhanced MRI-based early detection of acute renal transplant rejection, in: 2014 Annual Scientific Meeting and Educational Course Brochure of the Society of Abdominal Radiology (SAR'14), Boca Raton, Florida, March 23–28, 2014.
[43] F. Khalifa, A. Elnakib, G.M. Beache, G. Gimel'farb, M.A. El-Ghar, G. Sokhadze, S. Manning, P. McClure, A. El-Baz, 3D kidney segmentation from CT images using a level set approach guided by a novel stochastic speed function, in: Proceedings of International Conference Medical Image Computing and Computer-Assisted Intervention (MICCAI'11), Toronto, Canada, September 18–22, 2011, pp. 587–594.
[44] F. Khalifa, G. Gimel'farb, M.A. El-Ghar, G. Sokhadze, S. Manning, P. McClure, R. Ouseph, A. El-Baz, A new deformable model-based segmentation approach for accurate extraction of the kidney from abdominal CT images, in: Proceedings of IEEE International Conference on Image Processing (ICIP'11), Brussels, Belgium, September 11–14, 2011, pp. 3393–3396.
[45] M. Mostapha, F. Khalifa, A. Alansary, A. Soliman, J. Suri, A. El-Baz, Computer-aided diagnosis systems for acute renal transplant rejection: challenges and methodologies, in: A. El-Baz, L. Saba, J. Suri (Eds.), Abdomen and Thoracic Imaging, Springer, New York, NY, 2014, pp. 1–35.
[46] M. Shehata, F. Khalifa, E. Hollis, A. Soliman, E. Hosseini-Asl, M.A. El-Ghar, M. El-Baz, A.C. Dwyer, A. El-Baz, R. Keynton, A new non-invasive approach for early classification of renal rejection types using diffusion-weighted MRI, in: 2016 IEEE International Conference on Image Processing (ICIP), IEEE, 2016, pp. 136–140.
[47] F. Khalifa, A. Soliman, A. Takieldeen, M. Shehata, M. Mostapha, A. Shaffie, R. Ouseph, A. Elmaghraby, A. El-Baz, Kidney segmentation from CT images using a 3D NMF-guided active contour model, in: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, 2016, pp. 432–435.
[48] M. Shehata, F. Khalifa, A. Soliman, A. Takieldeen, M.A. El-Ghar, A. Shaffie, A.C. Dwyer, R. Ouseph, A. El-Baz, R. Keynton, 3D diffusion MRI-based CAD system for early diagnosis of acute renal rejection, in: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, 2016, pp. 1177–1180.

Chapter 1 • CAD system for DR detection from OCTA

17

[49] M. Shehata, F. Khalifa, A. Soliman, R. Alrefai, M.A. El-Ghar, A.C. Dwyer, R. Ouseph, A. El-Baz, A level set-based framework for 3D kidney segmentation from diffusion MR images, in: 2015 IEEE International Conference on Image Processing (ICIP), IEEE, 2015, pp. 4441–4445. [50] M. Shehata, F. Khalifa, A. Soliman, M.A. El-Ghar, A.C. Dwyer, G. Gimel’farb, R. Keynton, A. El-Baz, A promising non-invasive CAD system for kidney function assessment, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2016, pp. 613–621. [51] F. Khalifa, A. Soliman, A. Elmaghraby, G. Gimel’farb, A. El-Baz, 3D kidney segmentation from abdominal images using spatial-appearance models, Comput. Math. Methods Med. 2017 (2017) 1–10. [52] E. Hollis, M. Shehata, F. Khalifa, M.A. El-Ghar, T. El-Diasty, A. El-Baz, Towards non-invasive diagnostic techniques for early detection of acute renal transplant rejection: a review, Egypt. J. Radiol. Nucl. Med. 48 (1) (2016) 257–269. [53] M. Shehata, F. Khalifa, A. Soliman, M.A. El-Ghar, A.C. Dwyer, A. El-Baz, Assessment of renal transplant using image and clinical-based biomarkers, in: Proceedings of 13th Annual Scientific Meeting of American Society for Diagnostics and Interventional Nephrology (ASDIN’17), New Orleans, Louisiana, USA, February 10–12, 2017. [54] M. Shehata, F. Khalifa, A. Soliman, M.A. El-Ghar, A.C. Dwyer, A. El-Baz, Early assessment of acute renal rejection, in: Proceedings of 12th Annual Scientific Meeting of American Society for Diagnostics and Interventional Nephrology (ASDIN’16), Pheonix, Arizona, USA, February 19–21, 2016, 2017. [55] A. Eltanboly, M. Ghazal, H. Hajjdiab, A. Shalaby, A. Switala, A. Mahmoud, P. Sahoo, M. El-Azab, A. El-Baz, Level sets-based image segmentation approach using statistical shape priors, Appl. Math. Comput. 340 (2019) 164–179. [56] M. Shehata, A. Mahmoud, A. Soliman, F. Khalifa, M. Ghazal, M.A. El-Ghar, M. El-Melegy, A. El-Baz, 3D kidney segmentation from abdominal diffusion MRI using an appearance-guided deformable boundary, PLoS ONE 13 (7) (2018) e0200082. [57] F. Khalifa, G. Beache, A. El-Baz, G. Gimel’farb, Deformable model guided by stochastic speed with application in cine images segmentation, in: Proceedings of IEEE International Conference on Image Processing (ICIP’10), Hong Kong, September 26–29, 2010, pp. 1725–1728. [58] F. Khalifa, G.M. Beache, A. Elnakib, H. Sliman, G. Gimel’farb, K.C. Welch, A. El-Baz, A new shapebased framework for the left ventricle wall segmentation from cardiac first-pass perfusion MRI, in: Proceedings of IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’13), San Francisco, California, April 7–11, 2013, pp. 41–44. [59] F. Khalifa, G.M. Beache, A. Elnakib, H. Sliman, G. Gimel’farb, K.C. Welch, A. El-Baz, A new nonrigid registration framework for improved visualization of transmural perfusion gradients on cardiac first-pass perfusion MRI, in: Proceedings of IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’12), Barcelona, Spain, May 2–5, 2012, pp. 828–831. [60] F. Khalifa, G.M. Beache, A. Firjani, K.C. Welch, G. Gimel’farb, A. El-Baz, A new nonrigid registration approach for motion correction of cardiac first-pass perfusion MRI, in: Proceedings of IEEE International Conference on Image Processing (ICIP’12), Lake Buena Vista, Florida, September 30–October 3, 2012, pp. 1665–1668. [61] F. Khalifa, G.M. Beache, G. Gimel’farb, A. 
El-Baz, A novel CAD system for analyzing cardiac first-pass MR images, in: Proceedings of IAPR International Conference on Pattern Recognition (ICPR’12), Tsukuba Science City, Japan, November 11–15, 2012, pp. 77–80. [62] F. Khalifa, G.M. Beache, G. Gimel’farb, A. El-Baz, A novel approach for accurate estimation of left ventricle global indexes from short-axis cine MRI, in: Proceedings of IEEE International Conference on Image Processing (ICIP’11), Brussels, Belgium, September 11–14, 2011, pp. 2645–2649. [63] F. Khalifa, G.M. Beache, G. Gimel’farb, G.A. Giridharan, A. El-Baz, A new image-based framework for analyzing cine images, in: A. El-Baz, U.R. Acharya, M. Mirmedhdi, J.S. Suri (Eds.), Handbook of Multi

18

Diabetes and Fundus OCT

Modality State-of-the-Art Medical Image Segmentation and Registration Methodologies, vol. 2, Springer, New York, NY, 2011, pp. 69–98, ISBN 978-1-4419-8203-2 (Chapter 3). [64] F. Khalifa, G.M. Beache, G. Gimel’farb, G.A. Giridharan, A. El-Baz, Accurate automatic analysis of cardiac cine images, IEEE Trans. Biomed. Eng. 59 (2) (2012) 445–455. [65] F. Khalifa, G.M. Beache, M. Nitzken, G. Gimel’farb, G.A. Giridharan, A. El-Baz, Automatic analysis of left ventricle wall thickness using short-axis cine CMR images, in: Proceedings of IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’11), Chicago, Illinois, March 30– April 2, 2011, pp. 1306–1309. [66] M. Nitzken, G. Beache, A. Elnakib, F. Khalifa, G. Gimel’farb, A. El-Baz, Accurate modeling of tagged CMR 3D image appearance characteristics to improve cardiac cycle strain estimation, in: 2012 19th IEEE International Conference on Image Processing (ICIP), Orlando, Florida, USA, September, IEEE, 2012, pp. 521–524. [67] M. Nitzken, G. Beache, A. Elnakib, F. Khalifa, G. Gimel’farb, A. El-Baz, Improving full-cardiac cycle strain estimation from tagged CMR by accurate modeling of 3D image appearance characteristics, in: 2012 9th IEEE International Symposium on Biomedical Imaging (ISBI), Barcelona, Spain, May, IEEE, 2012, pp. 462–465 (selected for oral presentation). [68] M.J. Nitzken, A.S. El-Baz, G.M. Beache, Markov-Gibbs random field model for improved full-cardiac cycle strain estimation from tagged CMR, J. Cardiovasc. Magn. Reson. 14 (1) (2012) 1–2. [69] H. Sliman, A. Elnakib, G.M. Beache, A. Elmaghraby, A. El-Baz, Assessment of myocardial function from cine cardiac MRI using a novel 4D tracking approach, J. Comput. Sci. Syst. Biol. 7 (2014) 169–173. [70] H. Sliman, A. Elnakib, G.M. Beache, A. Soliman, F. Khalifa, G. Gimel’farb, A. Elmaghraby, A. El-Baz, A novel 4D PDE-based approach for accurate assessment of myocardium function using cine cardiac magnetic resonance images, in: Proceedings of IEEE International Conference on Image Processing (ICIP’14), Paris, France, October 27–30, 2014, pp. 3537–3541. [71] H. Sliman, F. Khalifa, A. Elnakib, G.M. Beache, A. Elmaghraby, A. El-Baz, A new segmentation-based tracking framework for extracting the left ventricle cavity from cine cardiac MRI, in: Proceedings of IEEE International Conference on Image Processing (ICIP’13), Melbourne, Australia, September 15–18, 2013, pp. 685–689. [72] H. Sliman, F. Khalifa, A. Elnakib, A. Soliman, G.M. Beache, A. Elmaghraby, G. Gimel’farb, A. El-Baz, Myocardial borders segmentation from cine MR images using bi-directional coupled parametric deformable models, Med. Phys. 40 (9) (2013) 1–13. [73] H. Sliman, F. Khalifa, A. Elnakib, A. Soliman, G.M. Beache, G. Gimel’farb, A. Emam, A. Elmaghraby, A. El-Baz, Accurate segmentation framework for the left ventricle wall from cardiac cine MRI, in: Proceedings of International Symposium on Computational Models for Life Science (CMLS’13), Sydney, Australia, 27–29 November, vol. 1559, 2013, pp. 287–296. [74] B. Abdollahi, A.C. Civelek, X.-F. Li, J. Suri, A. El-Baz, PET/CT nodule segmentation and diagnosis: a survey, in: L. Saba, J.S. Suri (Eds.), Multi Detector CT Imaging, Taylor & Francis, New York, NY, pp. 639–651, 2014, ISBN 978-1-4398-9397-5 (Chapter 30). [75] B. Abdollahi, A. El-Baz, A.A. Amini, A multi-scale non-linear vessel enhancement technique, in: 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC, IEEE, 2011, pp. 3925–3929. [76] B. Abdollahi, A. 
Soliman, A.C. Civelek, X.-F. Li, G. Gimel’farb, A. El-Baz, A novel Gaussian scale spacebased joint MGRF framework for precise lung segmentation, in: Proceedings of IEEE International Conference on Image Processing (ICIP’12), IEEE, 2012, pp. 2029–2032. [77] B. Abdollahi, A. Soliman, A.C. Civelek, X.-F. Li, G. Gimel’farb, A. El-Baz, A novel 3D joint MGRF framework for precise lung segmentation, in: Machine Learning in Medical Imaging, Springer, 2012, pp. 86–93.

Chapter 1 • CAD system for DR detection from OCTA

19

[78] A.M. Ali, A.S. El-Baz, A.A. Farag, A novel framework for accurate lung segmentation using graph cuts, in: Proceedings of IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’07), IEEE, 2007, pp. 908–911. [79] A. El-Baz, G.M. Beache, G. Gimel’farb, K. Suzuki, K. Okada, Lung imaging data analysis, Int. J. Biomed. Imaging 2013 (2013) 1–2. [80] A. El-Baz, G.M. Beache, G. Gimel’farb, K. Suzuki, K. Okada, A. Elnakib, A. Soliman, B. Abdollahi, Computer-aided diagnosis systems for lung cancer: challenges and methodologies, Int. J. Biomed. Imaging 2013 (2013) 1–46. [81] A. El-Baz, A. Elnakib, M. Abou El-Ghar, G. Gimel’farb, R. Falk, A. Farag, Automatic detection of 2D and 3D lung nodules in chest spiral CT scans, Int. J. Biomed. Imaging 2013 (2013) 1–11. [82] A. El-Baz, A.A. Farag, R. Falk, R. La Rocca, A unified approach for detection, visualization, and identification of lung abnormalities in chest spiral CT scans, in: International Congress Series, vol. 1256, Elsevier, 2003, pp. 998–1004. [83] A. El-Baz, A.A. Farag, R. Falk, R. La Rocca, Detection, visualization and identification of lung abnormalities in chest spiral CT scan: phase-I, in: Proceedings of International Conference on Biomedical Engineering, Cairo, Egypt, vol. 12, no. 1, 2002. [84] A. El-Baz, A. Farag, G. Gimel’farb, R. Falk, M.A. El-Ghar, T. Eldiasty, A framework for automatic segmentation of lung nodules from low dose chest CT scans, in: Proceedings of International Conference on Pattern Recognition (ICPR’06), vol. 3, IEEE, 2006, pp. 611–614. [85] A. El-Baz, A. Farag, G. Gimel’farb, R. Falk, M.A. El-Ghar, A novel level set-based computer-aided detection system for automatic detection of lung nodules in low dose chest computed tomography scans, in: Lung Imaging and Computer Aided Diagnosis, ,vol. 10, CRC Press, Boca Raton, FL, 2011, pp. 221–238. [86] A. El-Baz, G. Gimel’farb, M. Abou El-Ghar, R. Falk, Appearance-based diagnostic system for early assessment of malignant lung nodules, in: Proceedings of IEEE International Conference on Image Processing (ICIP’12), IEEE, 2012, pp. 533–536. [87] A. El-Baz, G. Gimel’farb, R. Falk, A novel 3D framework for automatic lung segmentation from low dose CT images, in: A. El-Baz, J.S. Suri (Eds.), Lung Imaging and Computer Aided Diagnosis, , Taylor & Francis, New York, NY, 2011, pp. 1–16, ISBN 978-1-4398-4558-5 (Chapter 1). [88] A. El-Baz, G. Gimel’farb, R. Falk, M. El-Ghar, Appearance analysis for diagnosing malignant lung nodules, in: Proceedings of IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’10), IEEE, 2010, pp. 193–196. [89] A. El-Baz, G. Gimel’farb, R. Falk, M.A. El-Ghar, A novel level set-based CAD system for automatic detection of lung nodules in low dose chest CT scans, in: A. El-Baz, J.S. Suri (Eds.), Lung Imaging and Computer Aided Diagnosis, vol. 1, Taylor & Francis, New York, NY, 2011, pp. 221–238, ISBN 978-1-4398-4558-5 (Chapter 10). [90] A. El-Baz, G. Gimel’farb, R. Falk, M.A. El-Ghar, A new approach for automatic analysis of 3D low dose CT images for accurate monitoring the detected lung nodules, in: Proceedings of International Conference on Pattern Recognition (ICPR’08), IEEE, 2008, pp. 1–4. [91] A. El-Baz, G. Gimel’farb, R. Falk, M.A. El-Ghar, A novel approach for automatic follow-up of detected lung nodules, in: Proceedings of IEEE International Conference on Image Processing (ICIP’07), vol. 5, IEEE, 2007, p. V-501. [92] A. El-Baz, G. Gimel’farb, R. Falk, M.A. 
El-Ghar, A new CAD system for early diagnosis of detected lung nodules, in: 2007 IEEE International Conference on Image Processing (ICIP), vol. 2, IEEE, 2007, p. II-461. [93] A. El-Baz, G. Gimel’farb, R. Falk, M.A. El-Ghar, H. Refaie, Promising results for early diagnosis of lung cancer, in: Proceedings of IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’08), IEEE, 2008, pp. 1151–1154.

20

Diabetes and Fundus OCT

[94] A. El-Baz, G.L. Gimel’farb, R. Falk, M. Abou El-Ghar, T. Holland, T. Shaffer, A new stochastic framework for accurate lung segmentation, in: Proceedings of Medical Image Computing and ComputerAssisted Intervention (MICCAI’08), 2008, pp. 322–330. [95] A. El-Baz, G.L. Gimel’farb, R. Falk, D. Heredis, M. Abou El-Ghar, A novel approach for accurate estimation of the growth rate of the detected lung nodules, in: Proceedings of International Workshop on Pulmonary Image Analysis, 2008, pp. 33–42. [96] A. El-Baz, G.L. Gimel’farb, R. Falk, T. Holland, T. Shaffer, A framework for unsupervised segmentation of lung tissues from low dose computed tomography images, in: Proceedings of British Machine Vision (BMVC’08), 2008, pp. 1–10. [97] A. El-Baz, G. Gimel’farb, R. Falk, M.A. El-Ghar, 3D MGRF-based appearance modeling for robust segmentation of pulmonary nodules in 3D LDCT chest images, in: Lung Imaging and Computer Aided Diagnosis, CRC Press, Boca Raton, FL, 2011, pp. 51–63 (Chapter 3). [98] A. El-Baz, G. Gimel’farb, R. Falk, M.A. El-Ghar, Automatic analysis of 3D low dose CT images for early diagnosis of lung cancer, Pattern Recogn. 42 (6) (2009) 1041–1051. [99] A. El-Baz, G. Gimel’farb, R. Falk, M.A. El-Ghar, S. Rainey, D. Heredia, T. Shaffer, Toward early diagnosis of lung cancer, in: Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI’09), Springer, 2009, pp. 682–689. [100] A. El-Baz, G. Gimel’farb, R. Falk, M.A. El-Ghar, J. Suri, Appearance analysis for the early assessment of detected lung nodules, in: Lung Imaging and Computer Aided Diagnosis, CRC Press, Boca Raton, FL, 2011, pp. 395–404 (Chapter 17). [101] A. El-Baz, F. Khalifa, A. Elnakib, M. Nitkzen, A. Soliman, P. McClure, G. Gimel’farb, M.A. El-Ghar, A novel approach for global lung registration using 3D Markov Gibbs appearance model, in: Proceedings of International Conference Medical Image Computing and Computer-Assisted Intervention (MICCAI’12), Nice, France, October 1–5, 2012, pp. 114–121. [102] A. El-Baz, M. Nitzken, A. Elnakib, F. Khalifa, G. Gimel’farb, R. Falk, M.A. El-Ghar, 3D shape analysis for early diagnosis of malignant lung nodules, in: Proceedings of International Conference Medical Image Computing and Computer-Assisted Intervention (MICCAI’11), Toronto, Canada, September 18–22, 2011, pp. 175–182. [103] A. El-Baz, M. Nitzken, G. Gimel’farb, E. Van Bogaert, R. Falk, M.A. El-Ghar, J. Suri, Three-dimensional shape analysis using spherical harmonics for early assessment of detected lung nodules, in: Lung Imaging and Computer Aided Diagnosis, 2011, pp. 421–438 (Chapter 19). [104] A. El-Baz, M. Nitzken, F. Khalifa, A. Elnakib, G. Gimel’farb, R. Falk, M.A. El-Ghar, 3D shape analysis for early diagnosis of malignant lung nodules, in: Proceedings of International Conference on Information Processing in Medical Imaging (IPMI’11), Monastery Irsee, Germany (Bavaria), July 3–8, 2011, pp. 772–783. [105] A. El-Baz, M. Nitzken, E. Vanbogaert, G. Gimel’Farb, R. Falk, M. Abo El-Ghar, A novel shape-based diagnostic approach for early diagnosis of lung nodules, in: 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, IEEE, 2011, pp. 137–140. [106] A. El-Baz, P. Sethu, G. Gimel’farb, F. Khalifa, A. Elnakib, R. Falk, M.A. El-Ghar, Elastic phantoms generated by microfluidics technology: validation of an imaged-based approach for accurate measurement of the growth rate of lung nodules, Biotechnol. J. 6 (2) (2011) 195–203. [107] A. El-Baz, P. Sethu, G. Gimel’farb, F. Khalifa, A. Elnakib, R. 
Falk, M.A. El-Ghar, A new validation approach for the growth rate measurement using elastic phantoms generated by state-of-the-art microfluidics technology, in: Proceedings of IEEE International Conference on Image Processing (ICIP’10), Hong Kong, September 26–29, 2010, pp. 4381–4383. [108] A. El-Baz, P. Sethu, G. Gimel’farb, F. Khalifa, A. Elnakib, R. Falk, M.A. El-Ghar, J. Suri, Validation of a new imaged-based approach for the accurate estimating of the growth rate of detected lung nodules using real CT images and elastic phantoms generated by state-of-the-art microfluidics technology,

Chapter 1 • CAD system for DR detection from OCTA

21

in: A. El-Baz, J.S. Suri (Eds.), Handbook of Lung Imaging and Computer Aided Diagnosis, vol. 1, Taylor & Francis, New York, 2011, pp. 405–420, ISBN 978-1-4398-4557-8 (Chapter 18). [109] A. El-Baz, A. Soliman, P. McClure, G. Gimel’farb, M.A. El-Ghar, R. Falk, Early assessment of malignant lung nodules based on the spatial analysis of detected lung nodules, in: Proceedings of IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’12), IEEE, 2012, pp. 1463–1466. [110] A. El-Baz, S.E. Yuksel, S. Elshazly, A.A. Farag, Non-rigid registration techniques for automatic follow-up of lung nodules, in: Proceedings of Computer Assisted Radiology and Surgery (CARS’05), vol. 1281, Elsevier, 2005, pp. 1115–1120. [111] A.S. El-Baz, J.S. Suri, Lung Imaging and Computer Aided Diagnosis, CRC Press, Boca Raton, FL, 2011. [112] A. Soliman, F. Khalifa, N. Dunlap, B. Wang, M. El-Ghar, A. El-Baz, An ISO-surfaces based local deformation handling framework of lung tissues, in: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, 2016, pp. 1253–1259. [113] A. Soliman, F. Khalifa, A. Shaffie, N. Dunlap, B. Wang, A. Elmaghraby, A. El-Baz, Detection of lung injury using 4D-CT chest images, in: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, 2016, pp. 1274–1277. [114] A. Soliman, F. Khalifa, A. Shaffie, N. Dunlap, B. Wang, A. Elmaghraby, G. Gimel’farb, M. Ghazal, A. El-Baz, A comprehensive framework for early assessment of lung injury, in: 2017 IEEE International Conference on Image Processing (ICIP), IEEE, 2017, pp. 3275–3279. [115] A. Shaffie, A. Soliman, M. Ghazal, F. Taher, N. Dunlap, B. Wang, A. Elmaghraby, G. Gimel’farb, A. El-Baz, A new framework for incorporating appearance and shape features of lung nodules for precise diagnosis of lung cancer, in: 2017 IEEE International Conference on Image Processing (ICIP), IEEE, 2017, pp. 1372–1376. [116] A. Soliman, F. Khalifa, A. Shaffie, N. Liu, N. Dunlap, B. Wang, A. Elmaghraby, G. Gimel’farb, A. El-Baz, Image-based CAD system for accurate identification of lung injury, in: 2016 IEEE International Conference on Image Processing (ICIP), IEEE, 2016, pp. 121–125. [117] B. Dombroski, M. Nitzken, A. Elnakib, F. Khalifa, A. El-Baz, M.F. Casanova, Cortical surface complexity in a population-based normative sample, Transl. Neurosci. 5 (1) (2014) 17–24. [118] A. El-Baz, M. Casanova, G. Gimel’farb, M. Mott, A. Switala, An MRI-based diagnostic framework for early diagnosis of dyslexia, Int. J. Comput. Assist. Radiol. Surg. 3 (3–4) (2008) 181–189. [119] A. El-Baz, M. Casanova, G. Gimel’farb, M. Mott, A. Switala, E. Vanbogaert, R. McCracken, A new CAD system for early diagnosis of dyslexic brains, in: Proceedings of the International Conference on Image Processing (ICIP’2008), IEEE, 2008, pp. 1820–1823. [120] A. El-Baz, M.F. Casanova, G. Gimel’farb, M. Mott, A.E. Switwala, A new image analysis approach for automatic classification of autistic brains, in: Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’2007), IEEE, 2007, pp. 352–355. [121] A. El-Baz, A. Elnakib, F. Khalifa, M.A. El-Ghar, P. McClure, A. Soliman, G. Gimel’farb, Precise segmentation of 3-D magnetic resonance angiography, IEEE Trans. Biomed. Eng. 59 (7) (2012) 2019–2029. [122] A. El-Baz, A. Farag, G. Gimel’farb, M.A. El-Ghar, T. Eldiasty, Probabilistic modeling of blood vessels for segmenting MRA images, in: 18th International Conference on Pattern Recognition (ICPR’06), vol. 3, IEEE, 2006, pp. 917–920. 
[123] A. El-Baz, A.A. Farag, G. Gimel’farb, M.A. El-Ghar, T. Eldiasty, A new adaptive probabilistic model of blood vessels for segmenting MRA images, in: Medical Image Computing and Computer-Assisted Intervention—MICCAI 2006, vol. 4191, Springer, 2006, pp. 799–806. [124] A. El-Baz, A.A. Farag, G. Gimel’farb, S.G. Hushek, Automatic cerebrovascular segmentation by accurate probabilistic modeling of TOF-MRA images, in: Medical Image Computing and ComputerAssisted Intervention—MICCAI 2005, Springer, 2005, pp. 34–42.

22

Diabetes and Fundus OCT

[125] A. El-Baz, A. Farag, A. Elnakib, M.F. Casanova, G. Gimel’farb, A.E. Switala, D. Jordan, S. Rainey, Accurate automated detection of autism related corpus callosum abnormalities, J. Med. Syst. 35 (5) (2011) 929–939. [126] A. El-Baz, A. Farag, G. Gimelfarb, Cerebrovascular segmentation by accurate probabilistic modeling of TOF-MRA images, in: Image Analysis. vol. 3540, Springer, 2005, pp. 1128–1137. [127] A. El-Baz, G. Gimel’farb, R. Falk, M.A. El-Ghar, V. Kumar, D. Heredia, A novel 3D joint Markov-Gibbs model for extracting blood vessels from PC-MRA images, in: Medical Image Computing and Computer-Assisted Intervention—MICCAI 2009, vol. 5762, Springer, 2009, pp. 943–950. [128] A. Elnakib, A. El-Baz, M.F. Casanova, G. Gimel’farb, A.E. Switala, Image-based detection of corpus callosum variability for more accurate discrimination between dyslexic and normal brains, in: Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’2010), IEEE, 2010, pp. 109–112. [129] A. Elnakib, M.F. Casanova, G. Gimel’farb, A.E. Switala, A. El-Baz, Autism diagnostics by centerlinebased shape analysis of the corpus callosum, in: Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’2011), IEEE, 2011, pp. 1843–1846. [130] A. Elnakib, M. Nitzken, M.F. Casanova, H. Park, G. Gimel’farb, A. El-Baz, Quantification of age-related brain cortex change using 3D shape analysis, in: 2012 21st International Conference on Pattern Recognition (ICPR), IEEE, 2012, pp. 41–44. [131] M. Mostapha, A. Soliman, F. Khalifa, A. Elnakib, A. Alansary, M. Nitzken, M.F. Casanova, A. El-Baz, A statistical framework for the classification of infant DT images, in: 2014 IEEE International Conference on Image Processing (ICIP), IEEE, 2014, pp. 2222–2226. [132] M. Nitzken, M.F. Casanova, G. Gimel’farb, A. Elnakib, F. Khalifa, A. Switala, A. El-Baz, 3D shape analysis of the brain cortex with application to dyslexia, in: 2011 18th IEEE International Conference on Image Processing (ICIP), Brussels, Belgium, September, IEEE, 2011, pp. 2657–2660 (Selected for oral presentation. Oral acceptance rate is 10% and the overall acceptance rate is 35%). [133] F.E.-Z.A. El-Gamal, M.M. Elmogy, M. Ghazal, A. Atwan, G.N. Barnes, M.F. Casanova, R. Keynton, A.S. El-Baz, A novel CAD system for local and global early diagnosis of Alzheimer’s disease based on PIB-PET scans, in: 2017 IEEE International Conference on Image Processing (ICIP), IEEE, 2017, pp. 3270–3274. [134] M. Ismail, A. Soliman, M. Ghazal, A.E. Switala, G. Gimel’farb, G.N. Barnes, A. Khalil, A. El-Baz, A fast stochastic framework for automatic MR brain images segmentation, PLoS ONE 12 (11) (2017) e0187391. [135] M.M.T. Ismail, R.S. Keynton, M.M.M.O. Mostapha, A.H. ElTanboly, M.F. Casanova, G.L. Gimel’farb, A. El-Baz, Studying autism spectrum disorder with structural and diffusion magnetic resonance imaging: a survey, Front. Hum. Neurosci. 10 (2016) 211. [136] A. Alansary, M. Ismail, A. Soliman, F. Khalifa, M. Nitzken, A. Elnakib, M. Mostapha, A. Black, K. Stinebruner, M.F. Casanova, et al., Infant brain extraction in T1-weighted MR images using BET and refinement using LCDG and MGRF models, IEEE J. Biomed. Health Inform. 20 (3) (2016) 925–935. [137] M. Ismail, A. Soliman, A. ElTanboly, A. Switala, M. Mahmoud, F. Khalifa, G. Gimel’farb, M. F. Casanova, R. Keynton, A. 
El-Baz, Detection of white matter abnormalities in MR brain images for diagnosis of autism in children, in: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, 2016, pp. 6–9. [138] M. Ismail, M. Mostapha, A. Soliman, M. Nitzken, F. Khalifa, A. Elnakib, G. Gimel’farb, M.F. Casanova, A. El-Baz, Segmentation of infant brain MR images based on adaptive shape prior and higher-order MGRF, in: 2015 IEEE International Conference on Image Processing (ICIP), IEEE, 2015, pp. 4327–4331. [139] E.H. Asl, M. Ghazal, A. Mahmoud, A. Aslantas, A. Shalaby, M. Casanova, G. Barnes, G. Gimel’farb, R. Keynton, A. El-Baz, Alzheimer’s disease diagnostics by a 3D deeply supervised adaptable convolutional network, Front. Biosci. (Landmark Ed.) 23 (2018) 584–596.

Chapter 1 • CAD system for DR detection from OCTA

23

[140] A. Mahmoud, A. El-Barkouky, H. Farag, J. Graham, A. Farag, A non-invasive method for measuring blood flow rate in superficial veins from a single thermal image, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2013, pp. 354–359. [141] A. El-Baz, A. Shalaby, F. Taher, M. El-Baz, M. Ghazal, M.A. El-Ghar, A.L.I. Takieldeen, J. Suri, Probabilistic modeling of blood vessels for segmenting magnetic resonance angiography images, Med. Res. Arch. 5 (3) (2017) 1–22. [142] A.S. Chowdhury, A.K. Rudra, M. Sen, A. Elnakib, A. El-Baz, Cerebral white matter segmentation from MRI using probabilistic graph cuts and geometric shape priors, in: ICIP, 2010, pp. 3649–3652. [143] Y. Gebru, G. Giridharan, M. Ghazal, A. Mahmoud, A. Shalaby, A. El-Baz, Detection of cerebrovascular changes using magnetic resonance angiography, in: Cardiovascular Imaging and Image Analysis, CRC Press, Boca Raton, FL, 2018, pp. 1–22. [144] A. Mahmoud, A. Shalaby, F. Taher, M. El-Baz, J.S. Suri, A. El-Baz, Vascular tree segmentation from different image modalities, in: Cardiovascular Imaging and Image Analysis, CRC Press, Boca Raton, FL, 2018, pp. 43–70. [145] F. Taher, A. Mahmoud, A. Shalaby, A. El-Baz, A review on the cerebrovascular segmentation methods, in: 2018 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), IEEE, 2018, pp. 359–364. [146] H. Kandil, A. Soliman, L. Fraiwan, A. Shalaby, A. Mahmoud, A. ElTanboly, A. Elmaghraby, G. Giridharan, A. El-Baz, A novel MRA framework based on integrated global and local analysis for accurate segmentation of the cerebral vascular system, in: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), IEEE, 2018, pp. 1365–1368.

2

Deep learning approach for classification of eye diseases based on color fundus images

Bambang Krismono Triwijoyo (a, b), Boy Subirosa Sabarguna (a), Widodo Budiharto (a), Edi Abdurachman (a)

(a) BINUS Graduate Program, Bina Nusantara University, Jakarta, Indonesia; (b) Bumigora University, Mataram, Indonesia

1 Introduction

Medical images are an important object of research in image recognition, and retinal images in particular are a key resource for ophthalmologists in diagnosing several eye diseases, since retinal microvascular signs occur in response to the presence of high blood pressure in the patient [1]. Automated analysis of digital retinal images can help ophthalmologists screen for eye diseases. Retinal image classification is an interesting computer vision problem with wide applications in the medical field. An understanding of the retinal image is very important for ophthalmologists in assessing eye diseases such as glaucoma and hypertension; left untreated, these diseases can cause vision problems and blindness. Many previous studies have concluded that understanding vascular abnormalities of retinal images helps doctors provide early diagnosis and treatment for stroke [2, 3], brain damage [4], carotid atherosclerosis [5], arterial disease [6], and cerebral amyloid angiopathy [7]. This evidence shows that periodic retinal examinations by specialists enhance the quality of life of patients.

The retina of the human eye is a light-sensitive tissue that is essential for vision. Anatomically, the retina has some similarities with the central nervous system. Because the retinal blood vessels are affected by the same microvascular conditions as those of the brain [8], damage to the retina can indicate systemic microvascular damage associated with several diseases such as hypertension or diabetes [9]. In the past decade, the classification of retinal images has gained widespread research attention, resulting in a large number of reports in the literature [10-13]. Although many methods have been proposed, recognizing retinal images remains a challenging computational problem; the challenges include the fundal variability of each patient. Many researchers have developed systems for automatic detection of retinal disease and diabetic retinopathy


using the support vector machine (SVM) method [14]. The classification of exudate regions using the fuzzy C-means clustering method has been proposed by Kande and Subbaiah [15]. Research by Palomera-Pérez et al. [16] presents an automatic segmentation of retinal blood vessels. A classification method for diabetic retinopathy based on spectral components of the visual neural response has been proposed by Sivakumar [17]. Further research by Marín et al. [13] also proposed artificial neural networks to detect diabetic retinopathy. Detection of retinal hemorrhage, a symptom of diabetic retinopathy, has been proposed by Ricci and Perfetti [18] using an SVM. The development of an automatic detection system for diabetic retinopathy in digital retinal images, and its potential evaluation in diabetic retinopathy screening, has been proposed by Usher et al. [19]. An intelligent automatic detection system for diabetic retinopathy for diagnostic purposes has been developed by Mohamed [20] using feed-forward neural networks. At present, periodic direct ophthalmoscopy examination is the best available approach for screening, despite the low sensitivity of ophthalmoscopy [21-23]. However, the number of available ophthalmologists is a limiting factor in initiating screening [24]. With the increasing availability of digital fundus cameras, automatic analysis of digital images can help ophthalmologists diagnose eye disease. Meanwhile, developments in computer science, especially deep learning, have produced models able to identify objects in many domains, one of which is the identification of objects in images [25]. Unlike conventional machine learning, which requires special algorithms for feature extraction, deep learning enables computational models consisting of several processing layers to learn data representations at various levels of abstraction, receiving raw input data and automatically discovering the representations needed for detection or classification. This chapter explains the implementation of a deep learning approach for the classification of eye diseases based on color fundus images as input. The remainder of this chapter is divided into four sections. After this introductory section, the fundus imaging section describes and discusses the relevant image acquisition, the main vascular and nonvascular abnormalities, and hypertensive and diabetic retinopathy grading. The third section discusses the deep learning methods and experimental results. The last section concludes this study and outlines our future work.

2 Deep learning

Deep learning is a representation-learning method with multiple levels of representation, obtained by composing nonlinear modules that each transform the representation at one level into a representation at a higher level. For classification tasks, the higher representation layers reinforce the aspects of the input that are important for discrimination and suppress irrelevant variations. For an image input in the form of an array of pixel values, the features learned in the first representation layer typically represent the presence or


absence of edges at particular orientations and locations in the image. The second layer typically detects motifs by spotting particular arrangements of edges, regardless of small variations in the edge positions. The third layer may assemble motifs into larger combinations that correspond to parts of familiar objects, and subsequent layers detect objects as combinations of these parts. Image classification is the process of labeling an image as one of several predefined classes. One deep learning model is the convolutional neural network (CNN) [26], a development of multilayered neural networks trained using supervised learning methods. In general, the process of image classification using machine learning, shown in Fig. 1, comprises the following (a minimal code sketch follows Fig. 1):

• Training phase: a machine learning algorithm is trained using a dataset composed of images and their corresponding labels. The training phase for an image classification problem has two main steps:
  - Feature extraction: utilizes domain knowledge to extract new features that will be used by the machine learning algorithm.
  - Model training: utilizes a clean dataset composed of the image features and the corresponding labels to train the machine learning model.
• Prediction phase: utilizes the trained model to predict the labels of unseen images. The prediction phase applies the same feature extraction process to the new images and passes the features to the trained machine learning algorithm to predict the label.

In conventional image classification, the feature extraction process is generally very limited: it applies only to certain datasets and relies on special algorithms. This is because of the many differences between images, among others different viewing angles, differences in scale, differences in lighting conditions, object deformation, and so on. Fig. 2 shows three kinds of image classification approach.

FIG. 1 Image classification using a machine learning algorithm has two phases: a training phase (images → feature extractor → features, with labels, → machine learning algorithm) and a prediction phase (image → feature extractor → features → trained classifier → label) [27].
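To make the two phases concrete, here is a minimal Python sketch; the channel-statistics feature extractor, the nearest-centroid classifier, and the random data are illustrative stand-ins, not the chapter's method:

```python
import numpy as np

def extract_features(image):
    # Hypothetical handcrafted extractor: mean and standard deviation
    # of each color channel (6 numbers per image).
    return np.concatenate([image.mean(axis=(0, 1)), image.std(axis=(0, 1))])

class NearestCentroid:
    # Minimal classifier standing in for "the machine learning algorithm".
    def fit(self, X, y):
        self.labels_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, X):
        # Assign each feature vector to the label of the closest centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.labels_[d.argmin(axis=1)]

# Training phase: images + labels -> features -> trained model.
train_images = [np.random.rand(256, 256, 3) for _ in range(10)]  # stand-in data
train_labels = np.array([0, 1] * 5)
X_train = np.stack([extract_features(im) for im in train_images])
model = NearestCentroid().fit(X_train, train_labels)

# Prediction phase: the same extractor, then predict an unseen image's label.
new_image = np.random.rand(256, 256, 3)
print(model.predict(extract_features(new_image)[None, :]))
```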


FIG. 2 Image classification approaches [25]: traditional pattern recognition uses a fixed/handcrafted feature extractor followed by a trainable classifier; mainstream modern pattern recognition inserts unsupervised mid-level features between the feature extractor and the trainable classifier; deep learning trains hierarchical low-, mid-, and high-level features together with the trainable classifier.

In traditional image pattern recognition, a preliminary feature extraction process using a special algorithm precedes the classification process. In mainstream modern pattern recognition, feature extraction is followed by an additional step that transforms the features into a more optimal (mid-level) form before classification. In deep learning, the separate feature extraction step disappears: feature extraction is performed directly by the neural network itself through its convolutional layers. Deep learning is a relatively new branch of machine learning that has developed rapidly owing to advances in GPU technology, and it has excellent capabilities in computer vision, among them object classification in images. One machine learning method for object image classification is the CNN.

2.1 Convolutional neural network

The CNN is a development of the multilayer perceptron (MLP) designed to process two-dimensional data. It is classified as a deep neural network because of its network depth, and it is applied to many kinds of image data. For image classification, the MLP is less appropriate because it does not preserve the spatial information of the image data and assumes each pixel is an independent feature, which yields poor results. The CNN was first developed under the name Neocognitron by Kunihiko Fukushima, a researcher at the NHK Broadcasting Science Research Laboratories, Kinuta, Setagaya, Tokyo, Japan [28]. The concept was later matured by Yann LeCun, a researcher at AT&T Bell Laboratories in Holmdel, New Jersey, United States; his LeNet network was successfully applied to numeral and handwriting recognition [29]. In 2012, Alex Krizhevsky's CNN implementation won the ImageNet Large-Scale Visual Recognition Challenge 2012 competition. That achievement was the moment that proved


that deep learning methods, specifically the CNN, can surpass other machine learning methods such as the SVM in object classification in images. The CNN procedure is similar to that of the MLP, but each neuron is arranged in two-dimensional form, whereas each MLP neuron has only one dimension. In a CNN the data propagated through the network are two-dimensional, so the linear operation and the weight parameters differ: the linear operation is a convolution, and the weights are no longer one-dimensional but take a four-dimensional form as a collection of convolution kernels. A CNN consists of various layers with several neurons in each layer; both are difficult to determine by definite rules and are chosen differently for different data [30]. Fig. 3 illustrates the CNN network architecture. A CNN is a type of feed-forward artificial neural network in which the pattern of connectivity between neurons is inspired by biological neural networks; the response of individual neurons to stimuli in their receptive fields can be approximated mathematically by a convolution operation. The convolutional network is inspired by biological processes and is a variation of the multilayer perceptron designed to use a minimal amount of preprocessing. CNNs have extensive applications such as image and video recognition, recommender systems, and natural language processing. A CNN uses third-order tensors: an input image of H × W pixels with three channels (color channels R, G, and B) is processed through a series of successive processing stages. One processing stage is called a layer; the layer types are the convolutional layer, pooling layer, normalization layer, fully connected layer, and output layer. The CNN operates sequentially, layer by layer, as illustrated in Fig. 3: x^1 is the input in the form of a third-order tensor, which is processed by the first layer with parameters w^1 to produce the output x^2; x^2 is in turn the input of the second layer, and so on up to the output layer. So, in the lth layer, the input is the third-order tensor x^l ∈ ℝ^(H^l × W^l × D^l), and the index triple (i^l, j^l, d^l) refers to one element of x^l, in channel d^l at spatial location (i^l, j^l), that is, at row i^l and column j^l. In the CNN learning process a mini-batch strategy is used, so x^l becomes the fourth-order tensor x^l ∈ ℝ^(H^l × W^l × D^l × N), where N is the mini-batch size. In the last layer, a backpropagation-based machine learning method is applied to the fully connected layer,

FIG. 3 CNN architecture [23].
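As a small illustration of the tensor orders just described (array contents are arbitrary placeholders):

```python
import numpy as np

H, W, D = 256, 256, 3        # one color image: third-order tensor
x = np.zeros((H, W, D))
print(x[10, 20, 2])          # element addressed by the index (i, j, d)

N = 32                       # mini-batch of N images: fourth-order tensor
batch = np.zeros((H, W, D, N))
print(batch.shape)           # (256, 256, 3, 32)
```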


wherein, for an image classification problem with C classes, the output of x^l is connected to a vector with C elements.

• Input layer: x^1 is a third-order tensor, x^1 ∈ ℝ^(H × W × D), representing a color image with H rows, W columns, and D color channels. In this case H = 256, W = 256, and the three channels are the red channel (R), green channel (G), and blue channel (B), so the number of image elements is 256 × 256 × 3, and each element is designated by an index (i, j, d), where 0 ≤ i < H, 0 ≤ j < W, and 0 ≤ d < 3. Fig. 4 shows the input layer format schema.
• Convolutional layer: the parameters w^l consist of multiple convolution kernels. Assume D kernels are used, each of size H × W; all kernels together are denoted f, where f is a fourth-order tensor in ℝ^(H × W × D^l × D), and the index variables 0 ≤ i < H, 0 ≤ j < W, 0 ≤ d^l < D^l, and 0 ≤ d < D point to one element of a kernel. Fig. 5 describes the process schema of the convolutional layer.

For the input and output images of the convolutional layer to be the same size, a padding technique is applied: if the input image is H^l × W^l × D^l and the kernel size is H × W × D^l × D, then a (valid) convolution produces a result of size (H^l − H + 1) × (W^l − W + 1) × D. To each input image channel, ⌊(H − 1)/2⌋ rows are added above the first row and below the last row, and ⌊(W − 1)/2⌋ columns are added to the left of the first column and to the right of the last column; the convolutional layer output then measures H^l × W^l × D, and the values of the added rows and columns from the padding are 0. A small helper computing these sizes is sketched below.

Stride (s) is a further concept of the convolution process: if s = 1, the kernel is convolved at every location of the input image, whereas if s > 1, each movement of the convolution kernel across the input image is shifted by s pixel locations.
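A small helper, sketched under the sizes stated above (the function names are hypothetical):

```python
def conv_output_size(H_l, W_l, H, W, D):
    # Valid convolution: (H^l - H + 1) x (W^l - W + 1) x D.
    return (H_l - H + 1, W_l - W + 1, D)

def same_padding(H, W):
    # Rows added above/below and columns added left/right so the output
    # keeps the input's spatial size; the padded values are 0.
    return ((H - 1) // 2, (W - 1) // 2)

print(conv_output_size(256, 256, 3, 3, 16))  # (254, 254, 16)
print(same_padding(3, 3))                    # (1, 1)
```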

FIG. 4 Input layer: an image of height i units (pixels) by width j units (pixels) with d color channels (d1, d2, d3), whose elements X11 … Xij are indexed by row, column, and channel.


FIG. 5 Convolutional layer.

The convolution process is expressed through the following equation:

y_{i^{l+1}, j^{l+1}, d} = \sum_{i=0}^{H} \sum_{j=0}^{W} \sum_{d^l=0}^{D^l} f_{i, j, d^l, d} \cdot x^l_{i^{l+1}+i,\; j^{l+1}+j,\; d^l}    (1)

for all 0 ≤ d < D = D^{l+1}, as well as for any spatial location (i^{l+1}, j^{l+1}) with 0 ≤ i^{l+1} < H^l − H + 1 = H^{l+1} and 0 ≤ j^{l+1} < W^l − W + 1 = W^{l+1}, where x^l_{i^{l+1}+i, j^{l+1}+j, d^l} refers to the element of x^l at the index (i^{l+1} + i, j^{l+1} + j, d^l). A bias b_d may be added to each y_{i^{l+1}, j^{l+1}, d} in Eq. (1). A direct NumPy transcription is sketched below.
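The following is a minimal, unoptimized NumPy sketch of Eq. (1) under the chapter's (H, W, D) axis ordering; the function name, stride argument, and test shapes are illustrative assumptions:

```python
import numpy as np

def conv_forward(x, f, b, s=1):
    """x: input (H_l, W_l, D_l); f: kernels (H, W, D_l, D); b: bias (D,)."""
    H_l, W_l, D_l = x.shape
    H, W, _, D = f.shape
    H_out = (H_l - H) // s + 1
    W_out = (W_l - W) // s + 1
    y = np.zeros((H_out, W_out, D))
    for i_out in range(H_out):
        for j_out in range(W_out):
            # Eq. (1): sum over the kernel window and input channels, plus bias.
            window = x[i_out * s:i_out * s + H, j_out * s:j_out * s + W, :]
            y[i_out, j_out, :] = np.tensordot(window, f,
                                              axes=([0, 1, 2], [0, 1, 2])) + b
    return y

x = np.random.rand(8, 8, 3)
f = np.random.rand(3, 3, 3, 4)   # four 3 x 3 x 3 kernels
print(conv_forward(x, f, b=np.zeros(4)).shape)  # (6, 6, 4): (8-3+1, 8-3+1, 4)
```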

• ReLU layer: the rectified linear unit (ReLU) layer does not change the input size; x^l and the output y have the same size. It can be considered a transfer function applied to each input element:

y_{i, j, d} = \max\{0,\; x^l_{i, j, d}\}    (2)

where 0 ≤ i < H^l = H^{l+1}, 0 ≤ j < W^l = W^{l+1}, and 0 ≤ d < D^l = D^{l+1}. Within the ReLU layer there are no learning parameters, as is also the case in the pooling layer. Fig. 6 shows the process of the ReLU layer.
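Since Eq. (2) is elementwise, a NumPy sketch is a one-liner (the helper name is an assumption):

```python
import numpy as np

def relu(x):
    # y_{i,j,d} = max{0, x_{i,j,d}}, applied elementwise; the shape is unchanged.
    return np.maximum(0, x)

print(relu(np.array([[15, -10], [0, 35]])))  # [[15  0] [ 0 35]]
```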


FIG. 6 ReLU layer: the transfer function y_{i,j,d} = max{0, x^l_{i,j,d}} is applied to each element; for example, y_{0,2,0} = max{0, −10} = 0.


• Pooling layer: using the same notation as for the convolutional layer, x^l ∈ ℝ^(H^l × W^l × D^l) is the input to the pooling layer. The pooling operation uses no learning or kernel parameters, so w^l = null. Pooling with spatial size H × W yields the result y (i.e., x^{l+1}) in the form of a third-order tensor of size H^{l+1} × W^{l+1} × D^{l+1}, where

H^{l+1} = \frac{H^l}{H}, \quad W^{l+1} = \frac{W^l}{W}, \quad D^{l+1} = D^l    (3)

The pooling operation is carried out channel by channel on the image: each channel, a matrix of H^l × W^l elements, is divided into H^{l+1} × W^{l+1} nonoverlapping submatrices of size H × W. Fig. 7 shows the process of the pooling layer for max pooling, where the maximum of each submatrix is taken; a code sketch follows.
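Below is a minimal NumPy sketch of nonoverlapping max pooling consistent with Eq. (3), assuming H^l and W^l are divisible by H and W; the helper name is illustrative:

```python
import numpy as np

def max_pool(x, H=2, W=2):
    """x: (H_l, W_l, D_l) -> (H_l // H, W_l // W, D_l), channel by channel."""
    H_l, W_l, D_l = x.shape
    # Split each channel into nonoverlapping H x W submatrices, take their max.
    x = x.reshape(H_l // H, H, W_l // W, W, D_l)
    return x.max(axis=(1, 3))

x = np.random.rand(4, 4, 3)
print(max_pool(x).shape)  # (2, 2, 3), as in the example of Fig. 7
```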

FIG. 7 Pooling layer: max pooling over nonoverlapping 2 × 2 windows (stride = 2); with H^l = W^l = 4, D^l = 3 and H = W = 2, Eq. (3) gives H^{l+1} = H^l/H = 2, W^{l+1} = W^l/W = 2, and D^{l+1} = D^l = 3, each output element y_{i^{l+1}, j^{l+1}, d} being the maximum of its window.

Table 4 Number of higher intensity (>200) pixels in fundus images.

S.No   Dry image (>200)   Wet image (>200)   Healthy image (>200)
1      952                585                761
2      5721               2892               3865
3      376                2398               242
4      1543               4406               138
5      29                 2506               680
6      0                  130                25
7      381                146                10
8      409                712                3092
AVG    1176.375           1721.875           1101.625

2.3 Discussion

For the first method, a set of 30 fundus photography retinal images is considered, with 10 images in each category: normal, DMD, and WMD. From the binary images, the number of white pixels is found, and the percentage of white pixels relative to the total number of pixels in each 512 × 512 binary image (262,144 pixels) is then calculated. The obtained results show that the normal eye images have from 83,304 to 119,989 white pixels, that is, from 32% to 45% of the 512 × 512 image. The DMD eye images have from 123,663 to 238,991 white pixels, that is, from 48% to 91%. The WMD eye images have from 10,139 to 52,548 white pixels, that is, from 5% to 20%. A decision rule is framed based on these results (a code sketch of the rule appears at the end of this discussion):
(i) If the number of white pixels present in the region of the image concerned lies in the range of 30%-50% of the total number of pixels, the eye may be healthy.
(ii) If the number of white pixels present in the region of the image concerned is >50% of the total number of pixels, the eye may be suffering from DMD.


(iii) If the number of white pixels present in the region of the image concerned is <30% of the total number of pixels, the eye may be suffering from WMD.

For the next method, the numbers of high-intensity pixels, that is, pixels whose intensity levels are >200, in these binary images are found and tabulated in Table 4. The average value of the number of high-intensity pixels, found separately for the healthy, DMD, and WMD images, is 1176, 1722, and 1102, respectively. It can be seen from the values obtained that the averages for the dry AMD and healthy eye images are close, and only the average value of the wet AMD images is higher. Compared with using the percentage of white pixels as the classifying feature, this method is inferior, for it cannot distinguish the dry AMD and healthy eye images effectively; it is also of limited use in the wet AMD case, since the value varies widely there.
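A sketch of the white-pixel decision rule in Python; the function name is hypothetical, and the thresholds are those stated above:

```python
import numpy as np

def classify_fundus(binary_image):
    """binary_image: 512 x 512 array of 0/1 (or boolean) values."""
    percent_white = 100.0 * binary_image.sum() / binary_image.size
    if percent_white > 50:
        return "DMD"        # dry macular degeneration suspected
    if percent_white < 30:
        return "WMD"        # wet macular degeneration suspected
    return "healthy"        # 30%-50% white pixels

print(classify_fundus(np.random.rand(512, 512) < 0.4))  # random boolean image
```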

3 OCT image analysis for AMD diagnosis

3.1 Methodology

3.1.1 Method-1
In the following section, three methods (together with a hybrid of the first and third) are proposed for the screening and detection of AMD using OCT images. In the first proposed method, two sets of images, 16 in total, comprising healthy OCT images and OCT images of age-related macular degeneration, are taken for experimentation. The algorithm of the proposed method is as follows:


Step 1: Consider an RGB healthy or AMD image.
Step 2: The acquired image is converted from RGB to gray.
Step 3: The gray image is filtered using a Gaussian filter and enhanced using contrast stretching.
Step 4: The enhanced image is converted to a binary image.
Step 5: The RPE layer is extracted.
Step 6: The extracted RPE layer is divided into eight equal portions, the number of white pixels in each portion is counted, and the mean pixel value is calculated as: Mean pixel value = (q1 + q2 + q3 + q4 + q5 + q6 + q7 + q8)/8.
Step 7: The mean pixel value is calculated for both the healthy set and the AMD set of images, giving m1 and m2, respectively. Using these mean pixel values, the average mean value M is calculated to fix a threshold: Average mean M = (m1 + m2)/2.
Step 8: Using this threshold value, a decision rule is made in order to classify healthy images and AMD-affected images.

Decision rule
If the mean pixel value of the image under test is lower than the threshold value, the image of interest is classified as AMD-affected; if the mean pixel value is greater than the fixed threshold, it is classified as a healthy image. That is, with M = 125: if the mean pixel value is >125, the image is a healthy image; if it is <125, the image is an AMD-affected image. A minimal code sketch of this method follows.
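A minimal sketch of Method-1 (NumPy/SciPy). The simple global threshold here is only a stand-in for the RPE-layer extraction, whose exact procedure is not specified in the steps above, and the absolute counts depend on image size:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def method1_classify(rgb_image, threshold=125):
    """rgb_image: (rows, cols, 3) array with values in 0-255."""
    # Steps 2-3: RGB to gray, Gaussian filtering, contrast stretching.
    gray = rgb_image @ np.array([0.299, 0.587, 0.114])
    smooth = gaussian_filter(gray, sigma=1)
    stretched = 255.0 * (smooth - smooth.min()) / (smooth.max() - smooth.min() + 1e-9)
    # Steps 4-5: binarization (a simplified stand-in for RPE extraction).
    rpe = stretched > 128
    # Step 6: divide into eight equal portions and count white pixels in each.
    q = [p.sum() for p in np.array_split(rpe, 8, axis=1)]
    mean_pixel_value = sum(q) / 8.0
    # Step 8: decision rule with the fixed threshold M = 125.
    return "healthy" if mean_pixel_value > threshold else "AMD"

print(method1_classify(np.random.randint(0, 256, (64, 64, 3))))
```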

3.1.2 Method-2
In this proposed method, two sets (16 images in total) of healthy OCT images and OCT images of age-related macular degeneration are taken for experimentation (Figs. 8-10). The algorithm of the proposed method is given below:
Step 1: A healthy or AMD image is acquired.
Step 2: The acquired image is converted from RGB to gray.
Step 3: The gray image is filtered using a Gaussian filter and enhanced using contrast stretching.
Step 4: The enhanced image is converted to a binary image.
Step 5: The RPE layer is extracted.
Step 6: The extracted RPE layer is divided into eight equal portions, the ratio of the number of white pixels to the number of black pixels in each portion is calculated, and the mean ratio is calculated as: Mean value = (p1 + p2 + p3 + p4 + p5 + p6 + p7 + p8)/8.
Step 7: The mean value is calculated for both the healthy set and the AMD set of images, giving m1 and m2, respectively. Using these mean values, the average mean value M is calculated to fix a threshold: Average mean M = (m1 + m2)/2.
Step 8: Using this threshold value, a decision rule is made in order to classify healthy images and AMD-affected images.


FIG. 8 Flow diagram of Method 1: image acquisition → RGB-to-gray conversion → preprocessing → binary thresholding → RPE layer extraction → sampling and white-pixel counting → decision (number of white pixels > 125: healthy image; otherwise: AMD-affected image).

3.1.3 Method-3
In this proposed method, two sets (16 images in total) of healthy OCT images and OCT images of age-related macular degeneration are taken for experimentation. The algorithm of the proposed method is given below:
Step 1: A healthy or AMD image is acquired.
Step 2: The acquired image is converted from RGB to gray.
Step 3: The gray image is filtered using a Gaussian filter and enhanced using contrast stretching.
Step 4: The enhanced image is converted to a binary image.
Step 5: The RPE layer is extracted.
Step 6: The extracted RPE layer is divided into eight equal portions, and the number of higher intensity pixels, i.e., pixels with intensity level >200, in each of these portions is


FIG. 9 Flow diagram of Method 2: image acquisition → RGB-to-gray conversion → preprocessing → binary thresholding → RPE layer extraction → calculation of the ratio of white to black pixels → decision making.

FIG. 10 Flow diagram of Method 3: image acquisition → RGB-to-gray conversion → preprocessing → binary thresholding → calculation of higher intensity pixels (>200) → comparison.


calculated, and the mean value of the number of higher intensity pixels is calculated as: Mean value = (p1 + p2 + p3 + p4 + p5 + p6 + p7 + p8)/8.
Step 7: The mean value of the number of higher intensity pixels is calculated for both the healthy set and the AMD set of images, giving m1 and m2, respectively. Using these mean values, the average mean value M is calculated to fix a threshold: Average mean M = (m1 + m2)/2.
Step 8: Using this threshold value, a decision rule is made in order to classify healthy images and AMD-affected images. A sketch of the per-portion statistic is given below.
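Only the per-portion statistic changes relative to Method-1; here is a sketch of Step 6 with a hypothetical helper (grayscale values assumed in 0-255):

```python
import numpy as np

def high_intensity_mean(rpe_gray):
    """Step 6 of Method-3: rpe_gray is the extracted RPE layer (0-255 gray)."""
    # Count pixels brighter than 200 in each of the eight portions.
    p = [(part > 200).sum() for part in np.array_split(rpe_gray, 8, axis=1)]
    return sum(p) / 8.0
```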

3.1.4 Method-4
Here it is proposed to combine Method-1 and Method-3 into a hybrid method. As in Method-1, the mean pixel intensity value is calculated for both the healthy set and the AMD set of images, giving m1 and m2, respectively; using these mean pixel values, the average mean value M1 is calculated to fix a threshold: Average mean M1 = (m1 + m2)/2. Then, as in Method-3, the mean value of the number of higher intensity pixels is calculated for both the healthy and AMD sets, again giving m1 and m2, and the average mean value M2 is calculated to fix a second threshold: Average mean M2 = (m1 + m2)/2.

A new decision rule is framed as follows: if M1 > 125 or M2 < 5891, the image under test is a healthy image; if M1 < 125 or M2 > 5891, the image under test is an AMD-affected image.
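A direct transcription of this hybrid rule as a sketch; note that the two conditions can overlap, and in this sketch the healthy test is applied first:

```python
def hybrid_classify(M1, M2):
    # M1: mean pixel value (as in Method-1); M2: mean number of pixels
    # with intensity > 200 (as in Method-3). Thresholds from the text.
    if M1 > 125 or M2 < 5891:
        return "healthy"
    return "AMD"   # M1 < 125 or M2 > 5891
```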

3.2 Results

Fig. 11 shows the set of healthy OCT images and Fig. 12 shows the set of AMD OCT images; Figs. 13 and 14 show the image processing steps involved in RPE layer extraction from healthy and AMD-affected OCT images, respectively. Tables 5 and 6 show the mean pixel numbers of the healthy and AMD-affected OCT images, respectively.

Threshold value calculation:
Mean pixel value = (quad1 + quad2 + quad3 + quad4 + quad5 + quad6 + quad7 + quad8)/8
m1 (Table 5) = 151.1875
m2 (Table 6) = 100.5625
Average mean M = (m1 + m2)/2 = (151.1875 + 100.5625)/2 ≈ 125 (threshold value)


FIG. 11 Set of healthy OCT images.

FIG. 12 Set of AMD OCT images.

Tables 7 and 8 show the ratio of the number of white pixels to the total number of pixels in the regions of the images, together with the mean value of these region ratios for every image; the average of the mean values is also calculated and given as Mean (AVG).

Threshold value calculation:
Mean of the ratio of the number of white pixels to the total number of pixels in the region = (quad1 + quad2 + quad3 + quad4 + quad5 + quad6 + quad7 + quad8)/8
m1 (Table 7) = 0.18023
m2 (Table 8) = 0.1917
Average mean M = (m1 + m2)/2 = (0.18023 + 0.1917)/2 ≈ 0.185 (threshold value)



FIG. 13 Preprocessing of healthy OCT images: original image, denoised image, binary image, extracted RPE layer, and the eight sampled portions (quad1-quad8).


FIG. 14 Preprocessing of AMD OCT images: original image, denoised image, binary image, extracted RPE layer, and the eight sampled portions (quad1-quad8).


Table 5 Mean pixel numbers of healthy OCT images.

No. of images   q1    q2    q3    q4    q5    q6    q7    q8    Mean
1               137   112   166   124   142   149   177   138   143.1250
2               116   113   117   91    96    76    81    68    97.7500
3               100   106   123   122   122   135   139   126   121.6250
4               124   174   177   227   202   247   251   198   200.0000
5               125   171   252   216   263   237   191   159   201.7500
6               168   162   166   242   226   170   172   88    174.2500
7               124   177   228   127   260   221   201   157   186.8750
8               97    143   75    92    112   49    53    76    87.1250
Mean pixel number (m1) = 151.1875

Table 6 Mean pixel numbers of AMD OCT images.

No. of images   q1    q2    q3    q4    q5    q6    q7    q8    Mean
1               107   129   116   228   236   206   91    76    148.6250
2               80    98    62    65    62    82    84    63    74.5000
3               111   91    74    124   123   97    91    92    100.3750
4               133   133   158   78    51    37    37    60    85.8750
5               0     83    134   126   144   78    0     30    70.6250
6               76    58    58    69    80    67    82    55    68.1250
7               123   138   163   99    57    37    37    56    88.7500
8               63    85    108   80    91    73    62    43    75.6250
Mean pixel number (m2) = 100.5625

Threshold value calculation for Method-3:
Mean of the number of higher intensity pixels in the regions = (quad1 + quad2 + quad3 + quad4 + quad5 + quad6 + quad7 + quad8)/8
m1 (Table 9) = 2753.59375
m2 (Table 10) = 9027.67188
Average mean M = (m1 + m2)/2 = (2753.59375 + 9027.67188)/2 ≈ 5891 (threshold value)


Table 7 Ratio of the number of white pixels to the total number of pixels in the regions of healthy eye images.

S. No      P11      P12      P13      P14      P15      P16      P17      P18      Mean
Healthy1   0.2305   0.2162   0.2134   0.1362   0.1240   0.1790   0.1879   0.1988   0.18575
Healthy2   0.3438   0.3299   0.3195   0.2100   0.2379   0.3463   0.3399   0.4592   0.3233
Healthy3   0.1902   0.1835   0.1782   0.1094   0.1034   0.1503   0.1558   0.1672   0.15475
Healthy4   0.2687   0.2145   0.2118   0.1388   0.1200   0.1795   0.1862   0.2300   0.19368
Healthy5   0.2165   0.1038   0.1248   0.1256   0.1184   0.1388   0.1102   0.0907   0.1286
Healthy6   0.1182   0.1198   0.0930   0.0678   0.0697   0.1091   0.1267   0.1731   0.1096
Healthy7   0.2069   0.1966   0.1884   0.1282   0.1444   0.2177   0.2271   0.2200   0.19066
Healthy8   0.1566   0.1734   0.1247   0.0991   0.0901   0.1505   0.1796   0.2226   0.1495
Mean (AVG), m1 = 0.18023

Table 8 Ratio of the number of white pixels to the total number of pixels in the regions of AMD eye images.

S. No   P11      P12      P13      P14      P15      P16      P17      P18      Mean
AMD1    0.1735   0.2009   0.1764   0.1530   0.2394   0.3472   0.2600   0.2060   0.2195
AMD2    0.0734   0.0896   0.0926   0.0884   0.0607   0.0538   0.0673   0.1404   0.0834
AMD3    0.2357   0.3448   0.4194   0.3701   0.3183   0.2771   0.2229   0.3380   0.3157
AMD4    0.1035   0.1127   0.1366   0.1244   0.1252   0.1454   0.1674   0.1674   0.1353
AMD5    0.2714   0.4148   0.5038   0.4366   0.3626   0.3164   0.2471   0.5916   0.3930
AMD6    0.1675   0.0992   0.1611   0.1416   0.1461   0.1433   0.1297   0.2234   0.1513
AMD7    0.1204   0.1006   0.0784   0.0480   0.0690   0.1033   0.1147   0.2782   0.1140
AMD8    0.1557   0.1095   0.0608   0.1117   0.1304   0.1175   0.1221   0.1676   0.1219
Mean (AVG), m2 = 0.1917

From Tables 5, 6, 9, and 10, it is seen that, of the 16 images taken, 15 images were classified correctly by the proposed method, whereas 1 image failed. Hence, the accuracy is improved to 94% (15/16 = 93.75%; Table 11).

3.3 Discussion

For the first method, two sets of images, healthy and AMD with eight images in each group, are taken for experimentation. The images first undergo a preprocessing step in which the Gaussian noise present in the OCT images is removed with a Gaussian filter, and contrast stretching is then applied to improve image quality. The preprocessed images are binary-thresholded to extract the RPE layer, whose continuity and thickness are considered by researchers as disease markers for AMD diagnosis. The extracted RPE layer of every OCT image of the healthy and AMD-affected eyes is then divided into eight parts, and the number of white pixels in each part is counted.
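A minimal sketch of this pipeline in Python follows, assuming OpenCV-style operations; the kernel size, binarization threshold, and split direction are illustrative assumptions, not the authors' settings.

import cv2
import numpy as np

def quadrant_white_pixel_features(oct_bscan, n_parts=8, bin_thresh=200):
    # 1. Remove Gaussian noise with a Gaussian filter (kernel size assumed).
    denoised = cv2.GaussianBlur(oct_bscan, (5, 5), 0)
    # 2. Contrast stretching to the full 0-255 range.
    stretched = cv2.normalize(denoised, None, 0, 255, cv2.NORM_MINMAX)
    # 3. Binary thresholding; the hyperreflective RPE appears as white pixels.
    _, binary = cv2.threshold(stretched, bin_thresh, 255, cv2.THRESH_BINARY)
    # 4. Divide the image into eight parts and count white pixels in each.
    counts, ratios = [], []
    for part in np.array_split(binary, n_parts, axis=1):
        white = int(np.count_nonzero(part))
        counts.append(white)               # per-part white-pixel numbers
        ratios.append(white / part.size)   # per-part white-pixel ratios
    return counts, ratios

The per-image mean of such counts (or ratios) is what the tables report and what the decision rules compare against their thresholds.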


Table 9  Number of higher intensity pixels in the regions of healthy eye images.

S. No     P11     P12   P13   P14   P15   P16   P17   P18     Mean
Healthy1  5969    4608  4658  4655  4608  4703  4678  16,286  4234.875
Healthy2  1017    1032  1035  1024  1026  1035  1045  1006    1027.5
Healthy3  3613    3584  3676  3584  3599  3615  3593  3596    3607.5
Healthy4  3214    2051  2100  2048  2048  2114  2082  2652    2288.625
Healthy5  18,245  1088  1066  1087  1026  1082  1057  1045    3212
Healthy6  1159    1227  1008  1998  1852  1606  1617  11,044  2688.875
Healthy7  63      123   141   56    81    350   95    266     146.875
Healthy8  2773    3844  2785  4346  4038  3967  3782  13,045  4822.5
Mean (AVG), m1 = 2753.59375

Table 10  Number of higher intensity pixels in the regions of AMD eye images.

S. No  P11   P12   P13     P14     P15   P16   P17   P18     Mean
AMD1   4016  2375  1446    842     609   287   3080  6618    2409.125
AMD2   2556  9216  9216    9230    9216  9434  9705  9429    1137.123
AMD3   7681  6643  5693    4608    5452  5630  4762  2075    5318
AMD4   8155  8294  8192    8195    8281  8192  8278  19,392  9622.375
AMD5   3703  4376  4218    4164    4606  3509  5719  6575    4608.75
AMD6   6637  6670  6659    6795    6805  6677  6690  47,190  11,765.625
AMD7   9707  6721  10,636  8391    8900  9164  8104  25,188  10,851.375
AMD8   1112  2441  1909    1077    2094  2649  4226  29,103  19,145.875
Mean (AVG), m2 = 9027.67188

Table 11  Hybrid method: M1 is the average of the mean pixel intensity values of the OCT images; M2 is the mean number of higher intensity pixels in the regions of the fundus eye images.

S. No  M1 (healthy)  M2 (healthy)  M1 (AMD)  M2 (AMD)
1      143           4235          149       2409
2       97           1028           75       1137
3      121           3608          100       5318
4      200           2289           86       9622
5      202           3212           71       4609
6      174           2689           68       11,766
7      187            147           89       10,851
8       87           4823           76       19,146


The results are tabulated in Tables 5 and 6. For every image, the mean value of the pixels present in all eight parts is calculated, and the average mean value for every RPE layer is found and tabulated. The grand average of all the average mean values of all the images is then calculated; m1 and m2 represent these values for the healthy and AMD eye images, respectively. From these values, the average value M is calculated as (m1 + m2)/2 and is found to be equal to 125. Hence, the decision rule is framed as follows: if the mean value of an image is >125, the image is of a healthy eye, and if the mean value is <125, the image is of an AMD-affected eye.

For the second method, the ratio of the number of white pixels to the total number of pixels is calculated for each of the eight regions of every image; the results are tabulated in Tables 7 and 8, and the corresponding threshold M is found to be 0.185. Here the rule is reversed: if the mean ratio of an image is >0.185, the image is of an AMD-affected eye, and if the mean ratio is <0.185, the image is of a healthy eye.

For the third method, the number of higher intensity pixels (intensity >200) in each part is calculated; the results are tabulated in Tables 9 and 10.


For every image, the mean value of the number of higher intensity pixels over all eight regions is also calculated. The average of all the mean values of all the images is then calculated and termed mean (AVG); m1 and m2 represent the mean (AVG) values for the healthy and AMD eye images, respectively. From these values, the average value M is calculated as (m1 + m2)/2 and is found to be equal to 5891. Hence, the decision rule is framed as follows: if the mean value of an image is >5891, the image is of an AMD-affected eye, and if the mean value is <5891, the image is of a healthy eye.

One study of OCT thickness maps in DME reported correlations of r > 0.8 in normal cases and r > 0.7 in abnormal ones, and found that the thickness of the inner layer in DME was 1.12 times that of normal cases. In DME cases, the INL appears relatively thicker, and the ONL appears irregular, exceeding 200 μm. Therefore, the thickness maps of the INL and ONL could be used as an indicator of disease severity and for tracking the changes in DME patients over time.

Correia et al. [60] identified the changes that occur in the ONL layer of DME patients. The OCT images were obtained from healthy subjects as controls and from patients with DME, and were distributed into three groups: healthy subjects, DME patients in whom the ONL thickness was significantly increased, and DME patients with normal ONL thickness. For all processed images, the ONL was segmented and processed to produce a representative A-scan. The optical and physical characteristics of the healthy human retina were used as a reference in the proposed system, and a Monte Carlo technique with a model of the ONL was used to simulate an A-scan for each group and compare it to the real OCT data. The results showed that there are two types of edema: cytotoxic (intracellular) and vasogenic (extracellular). In the DME group without changes in the ONL, the feature most consistent with the real OCT data was an increase in the volume of the nucleus. The data of patients with DME and increased ONL could be reproduced by increasing the ONL thickness from the healthy status by exactly the observed increase in ONL thickness.

Abhishek et al. [63] segmented the intraretinal layers, namely the ILM and RPE layers, for edema patients and normal subjects. The graph-based segmentation was based solely on pixel intensity variation and the distance between neighboring pixels; a weighting scheme and a shortest-path search were used to identify the neighborhood pixels. In this algorithm, the preprocessing step can be considered optional.


There were 12 diseased images and 9 normal images. According to expert validation, the algorithm was able to detect the ILM and RPE layers in 7 of the 9 normal subjects and 11 of the 12 DME-affected subjects. It was found that the range of thickness for the normal subjects was less than 50, whereas the range for the DME subjects was more than 50, in some cases even reaching 200, so the thickness can be used as a good sign of the presence of edema. If the search regions are limited, the method can also detect the other retinal layers; another advantage of this method is that it is less prone to noise.
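The following Python sketch illustrates the idea of such a shortest-path search (it is a generic dynamic-programming variant, not Abhishek et al.'s implementation): each pixel connects to its three neighbors in the next column, and the step cost is low where the vertical intensity gradient is strong, so the cheapest left-to-right path hugs a layer boundary.

import numpy as np

def trace_boundary(bscan):
    # Step costs are low where the vertical intensity gradient is strong,
    # so the cheapest left-to-right path follows a layer boundary.
    g = np.gradient(bscan.astype(float), axis=0)
    g = (g - g.min()) / (np.ptp(g) + 1e-9)      # gradient strength in [0, 1]
    w = 1.0 - g + 1e-5                          # per-pixel step cost
    rows, cols = w.shape
    cost = np.full((rows, cols), np.inf)
    back = np.zeros((rows, cols), dtype=int)
    cost[:, 0] = w[:, 0]
    for c in range(1, cols):                    # dynamic-programming pass
        for r in range(rows):
            for dr in (-1, 0, 1):               # neighbor rows in column c-1
                r0 = r + dr
                if 0 <= r0 < rows and cost[r0, c - 1] + w[r, c] < cost[r, c]:
                    cost[r, c] = cost[r0, c - 1] + w[r, c]
                    back[r, c] = r0
    path = [int(np.argmin(cost[:, -1]))]        # cheapest end point
    for c in range(cols - 1, 0, -1):            # backtrack to the first column
        path.append(int(back[path[-1], c]))
    return path[::-1]                           # one row index per column

Running the search twice, with costs tuned once for the ILM and once for the RPE, yields two row profiles whose per-column difference gives the thickness values compared against the cutoff of 50 mentioned above.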

2.6 Cystoid macular edema

Cystoid macular edema (CME) occurs in a variety of diseases such as diabetic retinopathy, AMD, retinal vein occlusion, and intraocular inflammation [64]. It affects the full thickness of the retinal tissue involving the anatomic fovea [65] and appears more often in people over 60 years old. It reduces visual acuity and may lead to loss of vision or even blindness. Several current studies discuss the diagnosis of CME from OCT retinal images. For example, Zhang et al. [66] determined the volume of CME in retinas with MH in 3D OCT images. Their system consisted of three main phases: preprocessing, coarse segmentation, and fine segmentation. The preprocessing phase included denoising, intraretinal layer segmentation and flattening, and elimination of the MH and vessel silhouettes. An AdaBoost classifier was used to obtain the seeds and constrained regions for graph cut in the second phase, and in the last phase a graph cut algorithm produced the fine segmentation results. The proposed system was evaluated on 3D OCT images from 18 patients with CME and MH. The true positive volume fraction (TPVF) was 84.6%, the false positive volume fraction (FPVF) was 1.7%, and the accuracy (ACC) of CME volume segmentation was 99.7%. For validation, a leave-one-out strategy was used during training and testing, and the CME regions were segmented manually under the supervision of an experienced ophthalmologist to serve as ground truth. However, a more accurate shape model of the MH (including its maximum and minimum diameters) was needed, and the proposed system detected only the obvious cysts.

Slokom et al. [67] identified CME regions in SD-OCT images of the macula. An algorithm was designed to detect cystoids: it first identified the borders of the cystoids, and then performed a quantitative analysis of the liquid in the cystoid regions to obtain the surface area of fluid in the image. They used data from six patients with CME associated with diverse retinopathies. In the central part of the retina, two patients had a single cystoid region, and the others had multiple cystoid regions. To detect cystoids in the OCT images, they applied a distribution metric for image segmentation that arises from prediction theory. Applying a level set process, an energy model based on this metric was incorporated into a geometric active contour algorithm. A clinical expert classified the extracted results from every image, and 95% were classified as good or fair extractions. The average precision was 95.02% and the average sensitivity was 88.46%. The area of each cystoid region was calculated and compared to the manual extraction; in all cases it was smaller than that of the clinical expert.
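The area quantification step in such studies reduces to counting mask pixels per connected region and scaling by the pixel spacing. A minimal Python sketch follows, assuming a binary cyst mask produced by a prior segmentation step; the pixel spacing values are assumptions, to be replaced by the scanner's actual spacing.

import numpy as np
from scipy import ndimage

def cystoid_areas_mm2(cyst_mask, pixel_w_mm=0.011, pixel_h_mm=0.004):
    # Label connected cystoid regions, then scale pixel counts to area.
    labels, n_regions = ndimage.label(cyst_mask)
    pixel_area = pixel_w_mm * pixel_h_mm        # spacing values are assumed
    return [np.count_nonzero(labels == i) * pixel_area
            for i in range(1, n_regions + 1)]

# Example: a mask with one 12-pixel region and one 4-pixel region.
mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 2:6] = True
mask[7:9, 7:9] = True
print(cystoid_areas_mm2(mask))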


2.7 Age-related macular degeneration

In the United States, there are approximately 8 million persons who have monocular or binocular AMD. The disease is detected by searching for drusen, defined as abnormalities between the basal lamina of the RPE and the inner collagenous layer of BM [68, 69]. Fig. 5 shows an OCT retinal image of an AMD patient. Khanifar et al. [70] categorized the drusen ultrastructure in AMD using OCT retinal images. A sample of 31 eyes of 31 AMD patients was utilized in this study. The images were analyzed, and the drusen were scored in four categories: shape, predominant internal reflectivity, homogeneity, and the presence of overlying hyperreflective foci. They calculated the prevalence of each morphologic pattern and of the combinations of the extracted morphologic patterns. In all, 21 images were chosen for adequate quality, and 17 drusen were analyzed from the 120 drusen found in the whole set of images. Most of the drusen found were convex and homogeneous, with medium internal reflectivity and without overlying hyperreflective foci. Nonhomogeneous drusen were found in 16 eyes, 5 of which had a distinct hyperreflective core, and hyperreflective foci overlying drusen were present in seven eyes. Schuman et al. [71] detected the retinal changes in AMD patients using OCT images. In all, 17 eyes of 12 patients with nonneovascular AMD and drusen were used, and 17 eyes of 10 age-matched subjects were used as controls. The photoreceptor layer (PRL) was thinned over 97% of drusen, and the average PRL thickness over drusen was reduced by 27.5%; this difference was statistically significant (P = .004). They observed two types of hyperreflective abnormalities over drusen in the neurosensory retina and concluded that distinct hyperreflective speckled patterns occur over drusen in 41% of AMD eyes and never in control eyes.

FIG. 5 An AMD case captured by a ZEISS OCT machine.


Oliveira et al. [72] proposed a method that integrates sparse higher-order potentials (SHOPs) into a multisurface segmentation framework. It was used to cope with local boundary variations caused by drusen, which is important for evaluating AMD progression. The mean unsigned error was 5.65 ± 6.26 μm for the inner retinal pigment epithelium (IRPE) and 4.37 ± 5.25 μm for BM. These results are comparable to those obtained by two experts, whose average interobserver variability was 7.30 ± 6.87 μm for the IRPE and 5.03 ± 4.37 μm for BM. The IRPE and the other boundaries were successfully segmented. The technique was evaluated on a dataset of 20 AMD patients, which also included manual segmentations of the ILM, IRPE, and BM boundaries performed by two expert graders.

2.8 Other diseases

In this section, we discuss work that does not fall under any of the above diseases, or that deals with more than one disease. First, we discuss PED, which is considered a notable feature of many chorioretinal disease processes, including AMD, polypoidal choroidal vasculopathy, CSC, and uveitis. It can be classified as serous, fibrovascular, or drusenoid. Shi et al. [35] proposed a framework for segmenting the retinal layers in 3D OCT images with serous retinal PED. Their framework consisted of three main stages: fast denoising and B-scan alignment; multiresolution graph search-based surface detection; and PED region detection with surface correction above the PED region. They evaluated their system on 20 PED patients. The experimental results showed that the TPVF was 87.1%, the FPVF was 0.37%, and the positive predictive value for PED was 81.2%, with an average running time of 220 seconds for OCT data of 512 × 64 × 480 voxels. On the other hand, Sugmk et al. [36] proposed a segmentation technique that divides OCT images to detect the shape of the drusen in the RPE layer, using the RPE layer to locate the RFL layer and detect a bubble in the blood area. A binary classification technique was used to distinguish AMD from DME based on the retrieved characteristics. Only 16 OCT images (10 AMD and 6 DME) were used in their experiments, and the proposed classification system achieved 87.5% accuracy. ElTanboly et al. [73] proposed a segmentation framework for retinal layers from 2D OCT data. Their framework was based on a joint model that includes shape, intensity, and spatial information, and it can segment 12 distinct retinal layers in normal and diseased subjects. The shape model was built using a subset of coaligned training OCT images, initially aligned using an innovative method employing multiresolution edge tracking. The visual appearance was then described using pixel-wise image intensities and spatial interaction features, with a linear combination of discrete Gaussians used to model the empirical gray-level distribution of the OCT data. To eliminate noise, the model was integrated with a second-order Markov-Gibbs random field spatial interaction model. They tested their framework on 250 normal and diseased OCT images with AMD and DR. The proposed segmentation method was evaluated using the Dice coefficient


(DSC = 0.763 ± 0.1598), agreement coefficient (AC1 = 73.2 ± 4.46%), and average deviation distance (AD = 6.87 ± 2.78 μm) metrics.

3 Challenges and future directions

As previously mentioned, the OCT image modality provides a great diagnostic aid for many organs. It is widely used for diagnosing retinal diseases such as glaucoma, CSC, AION, DME, and CME, and many studies have been conducted to extract specific features from OCT retinal images in order to diagnose such diseases. In this section, we discuss some challenges currently facing the analysis of OCT retinal images and briefly outline some future research directions in the following points.

3.1 Automatic segmentation techniques

OCT devices nowadays produce a large number of images, which makes manual investigation difficult. In addition, because of speckle noise, low image contrast, irregular shapes, and morphological features (retinal detachment, MHs, drusen), accurate manual segmentation of the retinal layers is a difficult task [57]. There is therefore a need for automatic segmentation techniques to handle this large number of images. Automatic segmentation can reduce time and effort and provide repeatable, quantitative results [74], enabling early diagnosis and therapy monitoring; such techniques should be accurate and robust to image degradation and low signal-to-noise ratio. Thickness measurements are necessary for detecting pathological changes and diagnosing retinal diseases [40]. Nevertheless, only a few segmentation approaches have addressed the problem of layers that are either invisible or missing [56, 75]. Proposed techniques should also be robust with respect to different types of OCT scanners from different manufacturers and to the presence of blood vessel artifacts in the OCT images [5].

3.2 OCT CAD systems

Regarding current work on OCT CAD systems, the experimental results show that these systems are used in off-line clinical or pathology studies. Therefore, additional speedup is required for OCT CAD systems to become suitable for clinical practice [35].

3.3 Standard number of layers

As far as we know, there is no standard number of detected layers in OCT retinal images. Some studies prefer to delineate all intraretinal layers, whereas others define only the most critical retinal layers needed to identify a disease.

3.4 Weak layer boundaries

The target layers in the retina lack strong boundaries and are surrounded by tissues with similar intensity profiles. In addition, many objects lie within a small region [41, 76].


3.5 Artifacts

Many types of artifacts are found in OCT images. Intensity inhomogeneity is an important factor that significantly affects the accuracy of retinal layer segmentation [77]. It has many causes, such as poor scan quality, multiframe averaging, opacity of the transparent ocular media, off-center acquisitions, and vignetting due to misalignment. This problem degrades the performance of subsequent processing techniques, especially segmentation, and to date little work has been done to correct it [6].

4 Conclusion

OCT is one of the fastest developing image modalities, especially in the last decade. OCT images have made it easier to differentiate between the anatomy of a diseased and a normal retina, and this difference can be used to detect the early signs of many retinal diseases, making OCT a very valuable modality. It can also be used to monitor the effect of treatment on different parts of the retina. As shown in this chapter, every disease has spatial features that can be detected readily using OCT. In this chapter, we discussed how OCT can detect and define the structure of diseased and healthy eyes, reviewed recent publications that demonstrate the ability of OCT to detect and diagnose many retinal diseases, and pointed out some of the challenges that researchers face in extracting and analyzing the information contained in OCT images. The experimental results surveyed in this review are promising and suggest that OCT is a reliable modality that can help in detecting and diagnosing retinal diseases even at an early stage. This work could also be applied to various other applications in medical imaging, such as the kidney, heart, prostate, lung, and brain, in addition to the retina, as well as several nonmedical applications [78–81]. One application is renal transplant functional assessment, especially in developing noninvasive CAD systems for renal transplant function assessment utilizing different image modalities (e.g., ultrasound, computed tomography, and MRI). Accurate assessment of renal transplant function is critically important for graft survival. Although transplantation can improve a patient's well-being, there is a potential posttransplantation risk of kidney dysfunction that, if not treated in a timely manner, can lead to the loss of the entire graft and even patient death. In particular, dynamic and diffusion MRI-based systems have been used clinically to assess transplanted kidneys, with the advantage of providing information on each kidney separately. For more details about renal transplant functional assessment, the reader is referred to Refs. [82–109]. The heart is also an important application of this work. The clinical assessment of myocardial perfusion plays a major role in the diagnosis, management, and prognosis of ischemic heart disease patients; thus, there have been ongoing efforts to develop automated systems for accurate analysis of myocardial perfusion using first-pass images [110–126].


Abnormalities of the lung are another promising area of research and a related application of this work. Radiation-induced lung injury is the main side effect of radiation therapy for lung cancer patients. Although higher radiation doses increase the effectiveness of radiation therapy for tumor control, they can lead to lung injury, as a greater quantity of normal lung tissue is included in the treated area. Almost one-third of patients who undergo radiation therapy develop lung injury following radiation treatment. The severity of radiation-induced lung injury ranges from ground-glass opacities and consolidation in the early phase to fibrosis and traction bronchiectasis in the late phase. Early detection of lung injury will thus help to improve management of the treatment [127–169]. This work can also be applied to other brain abnormalities, such as dyslexia and autism. Dyslexia is one of the most complicated developmental brain disorders affecting children's learning abilities: it leads to the failure to develop age-appropriate reading skills despite a normal intelligence level and adequate reading instruction. Neuropathological studies have revealed an abnormal anatomy of some structures, such as the corpus callosum, in dyslexic brains. Much work in the literature has aimed at developing CAD systems for diagnosing this disorder, along with other brain disorders [170–192]. For the vascular system [193], this work could also be applied to the extraction of blood vessels, for example from phase-contrast magnetic resonance angiography (MRA). Accurate cerebrovascular segmentation using noninvasive MRA is crucial for the early diagnosis and timely treatment of intracranial vascular diseases [175, 176, 194–199].

References [1] D. Huang, E.A. Swanson, C.P. Lin, J.S. Schuman, W.G. Stinson, W. Chang, M.R. Hee, T. Flotte, K. Gregory, C.A. Puliafito, et al., Optical coherence tomography, Science 254 (5035) (1991) 1178–1181, https://doi.org/10.1126/science.1957169. [2] R.A. Gabbay, S. Sivarajah, Optical coherence tomography-based continuous noninvasive glucose monitoring in patients with diabetes, Diabetes Technol. Ther. 10 (3) (2008) 188–193, https://doi. org/10.1089/dia.2007.0277. [3] R.V. Kuranov, J. Qiu, A.B. McElroy, A. Estrada, A. Salvaggio, J. Kiel, A.K. Dunn, T.Q. Duong, T.E. Milner, Depth-resolved blood oxygen saturation measurement by dual-wavelength photothermal (DWP) optical coherence tomography, Biomed. Opt. Express 2 (3) (2011) 491–504, https://doi.org/ 10.1364/BOE.2.000491. [4] V. Kajic, B. Povazˇay, B. Hermann, B. Hofer, D. Marshall, P.L. Rosin, W. Drexler, Robust segmentation of intraretinal layers in the normal human fovea using a novel statistical model based on texture and shape analysis, Opt. Express 18 (14) (2010) 14730–14744, https://doi.org/10.1364/OE.18.014730. [5] R. Kafieh, H. Rabbani, M.D. Abramoff, M. Sonka, Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map, Med. Image Anal. 17 (8) (2013) 907–928, https://doi.org/10.1016/j.media.2013.05.006. [6] A. Lang, A. Carass, M. Hauser, E.S. Sotirchos, P.A. Calabresi, H.S. Ying, J.L. Prince, Retinal layer segmentation of macular OCT images using boundary classification. Biomed. Opt. Express 4 (7) (2013) 1133–1152, https://doi.org/10.1364/BOE.4.001133.


[7] J. Cheng, D. Tao, Y. Quan, D.W.K. Wong, G.C.M. Cheung, M. Akiba, J. Liu, Speckle reduction in 3D optical coherence tomography of retina by A-scan reconstruction, IEEE Trans. Med. Imaging 35 (2016) 2270–2279, https://doi.org/10.1109/TMI.2016.2556080. [8] M.W. Jenkins, M. Watanabe, A.M. Rollins, Longitudinal imaging of heart development with optical coherence tomography, IEEE J. Sel. Top. Quantum Electron. 18 (3) (2012) 1166–1175. [9] B. Potsaid, I. Gorczynska, V.J. Srinivasan, Y. Chen, J. Jiang, A. Cable, J.G. Fujimoto, Ultrahigh speed spectral/Fourier domain OCT ophthalmic imaging at 70,000 to 312,500 axial scans per second, Opt. Express 16 (19) (2008) 15149–15169. [10] M.E.J. van Velthoven, D.J. Faber, F.D. Verbraak, T.G. van Leeuwen, M.D. de Smet, Recent developments in optical coherence tomography for imaging the retina, Prog. Retin. Eye Res. 26 (1) (2007) 57–77. [11] S.A. Boppart, G.J. Tearney, B.E. Bouma, J.F. Southern, M.E. Brezinski, J.G. Fujimoto, Noninvasive assessment of the developing Xenopus cardiovascular system using optical coherence tomography, Proc. Natl Acad. Sci. USA 94 (9) (1997) 4256–4261. [12] M.J. Suter, S.K. Nadkarni, G. Weisz, A. Tanaka, F.A. Jaffer, B.E. Bouma, G.J. Tearney, Intravascular optical imaging technology for investigating the coronary artery, JACC Cardiovasc. Imaging 4 (9) (2011) 1022–1039. [13] G.J. Tearney, M.E. Brezinski, J.G. Fujimoto, N.J. Weissman, S.A. Boppart, B.E. Bouma, J.F. Southern, Scanning single-mode fiber optic catheter-endoscope for optical coherence tomography, Opt. Lett. 21 (7) (1996) 543–545. [14] G.J. Tearney, M.E. Brezinski, B.E. Bouma, S.A. Boppart, C. Pitris, J.F. Southern, J.G. Fujimoto, In vivo endoscopic optical biopsy with optical coherence tomography, Science 276 (5321) (1997) 2037–2039. [15] T. Gambichler, G. Moussa, M. Sand, D. Sand, P. Altmeyer, K. Hoffmann, Applications of optical coherence tomography in dermatology, J. Dermatol. Sci. 40 (2) (2005) 85–94. [16] J.M. Schmitt, M.J. Yadlowsky, R.F. Bonner, Subsurface imaging of living skin with optical coherence microscopy, Dermatology 191 (2) (1995) 93–98. [17] K.D. Rao, Y. Verma, H.S. Patel, P.K. Gupta, Non-invasive ophthalmic imaging of adult zebrafish eye using optical coherence tomography, Curr. Sci. 90 (11) (2006) 1506. [18] L. Kagemann, H. Ishikawa, J. Zou, P. Charukamnoetkanok, G. Wollstein, K.A. Townsend, M. L. Gabriele, N. Bahary, X. Wei, J.G. Fujimoto, et al., Repeated, noninvasive, high resolution spectral domain optical coherence tomography imaging of zebrafish embryos, Mol. Vis. 14 (2008) 2157–2170. [19] S.H. Syed, K.V. Larin, M.E. Dickinson, I.V. Larina, Optical coherence tomography for high-resolution imaging of mouse development in utero, J. Biomed. Opt. 16 (4) (2011) 046004. [20] J.C. Burton, S. Wang, C.A. Stewart, R.R. Behringer, I.V. Larina, High-resolution three-dimensional in vivo imaging of mouse oviduct using optical coherence tomography, Biomed. Opt. Express 6 (7) (2015) 2713–2723. [21] A. Alex, A. Li, X. Zeng, R.E. Tate, M.L. McKee, D.E. Capen, Z. Zhang, R.E. Tanzi, C. Zhou, A circadian clock gene, Cry, affects heart morphogenesis and function in Drosophila as revealed by optical coherence microscopy, PLoS ONE 10 (9) (2015) e0137236. [22] A.P. Yow, J. Cheng, A. Li, C. Wall, D.W.K. Wong, J. Liu, H.L. Tey, Skin surface topographic assessment using in vivo high-definition optical coherence tomography, in: 2015 10th International Conference on Information, Communications and Signal Processing (ICICS), December, 2015, pp. 
1–4, https:// doi.org/10.1109/ICICS.2015.7459853. [23] A. Li, J. Cheng, A.P. Yow, C. Wall, D.W.K. Wong, H.L. Tey, J. Liu, Epidermal segmentation in highdefinition optical coherence tomography, in: 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), August, 2015, pp. 3045–3048, ISSN 1094-687X, https://doi.org/10.1109/EMBC.2015.7319034.


[24] S.S. Akhoury, L.N. Darlow, Extracting subsurface fingerprints using optical coherence tomography, in: 2015 Third International Conference on Digital Information, Networking, and Wireless Communications (DINWC), February, 2015, pp. 184–187, https://doi.org/10.1109/ DINWC.2015.7054240. [25] M. Xu, J. Cheng, D.W.K. Wong, J. Liu, A. Taruya, A. Tanaka, Graph based lumen segmentation in optical coherence tomography images, in: 2015 10th International Conference on Information, Communications and Signal Processing (ICICS), December, 2015, pp. 1–5, https://doi.org/10.1109/ICICS. 2015.7459951. [26] R. Shalev, D. Prabhu, K. Tanaka, A.M. Rollins, M. Costa, H.G. Bezerra, R. Soumya, D.L. Wilson, Intravascular optical coherence tomography image analysis method, in: 2015 41st Annual Northeast Biomedical Engineering Conference (NEBEC), April, 2015, pp. 1–2, ISSN 2160-6986, https://doi.org/ 10.1109/NEBEC.2015.7117058. [27] C. Kut, K.L. Chaichana, J. Xi, S.M. Raza, X. Ye, E.R. McVeigh, F.J. Rodriguez, A. Quin˜ones-Hinojosa, X. Li, Detection of human brain cancer infiltration ex vivo and in vivo using quantitative optical coherence tomography, Sci. Transl. Med. 7 (292) (2015) 292ra100. [28] V.J. Srinivasan, E.T. Mandeville, A. Can, F. Blasi, M. Climov, A. Daneshmand, J.H. Lee, E. Yu, H. Radhakrishnan, E.H. Lo, et al., Multiparametric, longitudinal optical coherence tomography imaging reveals acute injury and chronic recovery in experimental ischemic stroke, PLoS ONE 8 (8) (2013) e71478. [29] R.A. Leitgeb, M. Villiger, A.H. Bachmann, L. Steinmann, T. Lasser, Extended focus depth for Fourier domain optical coherence microscopy, Opt. Lett. 31 (16) (2006) 2450–2452. [30] F. Li, Y. Song, A. Dryer, W. Cogguillo, Y. Berdichevsky, C. Zhou, Nondestructive evaluation of progressive neuronal changes in organotypic rat hippocampal slice cultures using ultrahigh-resolution optical coherence microscopy, Neurophotonics 1 (2) (2014) 025002. [31] C. Leahy, H. Radhakrishnan, V.J. Srinivasan, Volumetric imaging and quantification of cytoarchitecture and myeloarchitecture with intrinsic scattering contrast, Biomed. Opt. Express 4 (10) (2013) 1978–1990. [32] V.J. Srinivasan, H. Radhakrishnan, J.Y. Jiang, S. Barry, A.E. Cable, Optical coherence microscopy for deep tissue imaging of the cerebral cortex with intrinsic contrast, Opt. Express 20 (3) (2012) 2220–2239. [33] O. Assayag, K. Grieve, B. Devaux, F. Harms, J. Pallud, F. Chretien, C. Boccara, P. Varlet, Imaging of nontumorous and tumorous human brain tissues with full-field optical coherence tomography, NeuroImage Clin. 2 (2013) 549–557. [34] C. Magnain, J.C. Augustinack, E. Konukoglu, M.P. Frosch, S. Sakadzˇi c, A. Varjabedian, N. Garcia, V.J. Wedeen, D.A. Boas, B. Fischl, Optical coherence tomography visualizes neurons in human entorhinal cortex, Neurophotonics 2 (1) (2015) 015004. [35] F. Shi, X. Chen, H. Zhao, W. Zhu, D. Xiang, E. Gao, M. Sonka, H. Chen, Automated 3-D retinal layer segmentation of macular optical coherence tomography images with serous pigment epithelial detachments, IEEE Trans. Med. Imaging 34 (2) (2015) 441–452, https://doi.org/10.1109/ TMI.2014.2359980. [36] J. Sugmk, S. Kiattisin, A. Leelasantitham, Automated classification between age-related macular degeneration and diabetic macular edema in OCT image using image segmentation, in: 2014 Seventh Biomedical Engineering International Conference (BMEiCON), November, 2014, pp. 1–4, https://doi.org/10.1109/BMEiCON.2014.7017441. [37] J. Novosel, Z. Wang, H. de Jong, M. van Velthoven, K.A. Vermeer, L.J. 
van Vliet, Locally-adaptive loosely-coupled level sets for retinal layer and fluid segmentation in subjects with central serous retinopathy, in: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), April, 2016, pp. 702–705, https://doi.org/10.1109/ISBI.2016.7493363.


[38] H. Ishikawa, D.M. Stein, G. Wollstein, S. Beaton, J.G. Fujimoto, J.S. Schuman, Macular segmentation with optical coherence tomography, Invest. Ophthalmol. Vis. Sci. 46 (6) (2005) 2012–2017. [39] M.K. Garvin, M.D. Abramoff, X. Wu, S.R. Russell, T.L. Burns, M. Sonka, Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images, IEEE Trans. Med. Imaging 28 (9) (2009) 1436–1447, https://doi.org/10.1109/TMI.2009.2016958. [40] A.M. Bagci, M. Shahidi, R. Ansari, M. Blair, N.P. Blair, R. Zelkha, Thickness profiles of retinal layers by optical coherence tomography image segmentation, Am. J. Ophthalmol. 146 (5) (2008) 679–687. e1, https://doi.org/10.1016/j.ajo.2008.06.010. [41] S. Lu, C.Y.L. Cheung, J. Liu, J.H. Lim, C.K.S. Leung, T.Y. Wong, Automated layer segmentation of optical coherence tomography images, IEEE Trans. Biomed. Eng. 57 (10) (2010) 2605–2608, https://doi. org/10.1109/TBME.2010.2055057. [42] F. Rossant, I. Ghorbel, I. Bloch, M. Paques, S. Tick, Automated segmentation of retinal layers in OCT imaging and derived ophthalmic measures, in: 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, June, 2009, pp. 1370–1373, ISSN 1945-7928, https://doi.org/10.1109/ ISBI.2009.5193320. [43] Q. Yang, C.A. Reisman, Z. Wang, Y. Fukuma, M. Hangai, N. Yoshimura, A. Tomidokoro, M. Araie, A.S. Raza, D.C. Hood, K. Chan, Automated layer segmentation of macular OCT images using dual-scale gradient information, Opt. Express 18 (20) (2010) 21293–21307. [44] A. ElTanboly, M. Ismail, A. Shalaby, A. Switala, A. El-Baz, S. Schaal, G. Gimel´farb, M. El-Azab, A computer-aided diagnostic system for detecting diabetic retinopathy in optical coherence tomography images, Med. Phys. 44 (3) (2017) 914–923. [45] R. Brancato, B. Lumbroso, Guide to Optical Coherence Tomography Interpretation, I.N.C Innovation-News Communication, 2004, ISBN 9788886193412, Available from: https://books. google.com/books?id¼x-ZiQgAACAAJ. [46] J.S. Schuman, C.A. Puliafito, J.G. Fujimoto, Optical Coherence Tomography of Ocular Diseases, SLACK Incorporated, 2004, ISBN 9781556426094, Available from: https://books.google.com/ books?id¼SjhpQgAACAAJ. [47] D. Koleva-Georgieva, Optical Coherence Tomography Findings in Diabetic Macular Edema, INTECH Open Access Publisher, Rijeka, 2012. [48] M. Salarian, R. Ansari, J. Wanek, M. Shahidi, Accurate segmentation of retina nerve fiber layer in OCT images, in: 2015 IEEE International Conference on Electro/Information Technology (EIT), May, 2015, pp. 653–656, ISSN 2154-0357, https://doi.org/10.1109/EIT.2015.7293411. [49] S. Roychowdhury, D.D. Koozekanani, M. Reinsbach, K.K. Parhi, 3-D localization of diabetic macular edema using OCT thickness maps, in: 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), August, 2015, pp. 4334–4337, ISSN 1094-687X, https://doi.org/10.1109/EMBC.2015.7319354. [50] S. Lee, E. Lebed, M.V. Sarunic, M.F. Beg, Exact surface registration of retinal surfaces from 3-D optical coherence tomography images, IEEE Trans. Biomed. Eng. 62 (2) (2015) 609–617, https://doi.org/ 10.1109/TBME.2014.2361778. [51] K.A. Vermeer, J. van der Schoot, H.G. Lemij, J.F. de Boer, Automated segmentation by pixel classification of retinal layers in ophthalmic OCT images, Biomed. Opt. Express 2 (6) (2011) 1743–1756. [52] H. Bogunovi c, M. Sonka, Y.H. Kwon, P. Kemp, M.D. Abra`moff, X. 
Wu, Multi-surface and multi-field co-segmentation of 3-D retinal optical coherence tomography, IEEE Trans. Med. Imaging 33 (12) (2014) 2242–2253. [53] S.-E. Ahn, J. Oh, J.-H. Oh, I.K. Oh, S.-W. Kim, K. Huh, Three-dimensional configuration of subretinal fluid in central serous chorioretinopathy 3-dimensional configuration of subretinal fluid, Invest. Ophthalmol. Vis. Sci. 54 (9) (2013) 5944–5952.


[54] C.M. Eandi, J.E. Chung, F. Cardillo-Piccolino, R.F. Spaide, Optical coherence tomography in unilateral resolved central serous chorioretinopathy, Retina 25 (4) (2005), ISSN 0275–004X. [55] Y. Imamura, T. Fujiwara, R. Margolis, R. Spaide, Enhanced depth imaging optical coherence tomography of the choroid in central serous chorioretinopathy, Retina 29 (10) (2009), ISSN 0275–004X. [56] J. Novosel, K.A. Vermeer, L. Pierrache, C.C.W. Klaver, L.I. van den Born, L.J. van Vliet, Method for segmentation of the layers in the outer retina, in: 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), August, 2015, pp. 5646–5649, ISSN 1094-687X, https://doi.org/10.1109/EMBC.2015.7319673. [57] M.K. Garvin, M.D. Abramoff, R. Kardon, S.R. Russell, X. Wu, M. Sonka, Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3-D graph search, IEEE Trans. Med. Imaging 27 (10) (2008) 1495–1505, https://doi.org/10.1109/TMI.2008.923966. [58] T.R. Hedges, L.N. Vuong, A.O. Gonzalez-Garcia, C.E. Mendoza-Santiesteban, M.L. Amaro-Quierza, Subretinal fluid from anterior ischemic optic neuropathy demonstrated by optical coherence tomography, Arch. Ophthalmol. 126 (6) (2008) 812–815. [59] R. Varma, N.M. Bressler, Q.V. Doan, et al., Prevalence of and risk factors for diabetic macular edema in the United States, JAMA Ophthalmol. 132 (11) (2014) 1334–1340, https://doi.org/10.1001/ jamaophthalmol.2014.2854. [60] A. Correia, L. Pinto, A. Arau´jo, S. Barbeiro, F. Caramelo, P. Menezes, M. Morgado, P. Serranho, R. Bernardes, Monte Carlo simulation of diabetic macular edema changes on optical coherence tomography data, in: IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI), IEEE, 2014, pp. 724–727. [61] G.R. Wilkins, O.M. Houghton, A.L. Oldenburg, Automated segmentation of intraretinal cystoid fluid in optical coherence tomography, IEEE Trans. Biomed. Eng. 59 (4) (2012) 1109–1114, https://doi. org/10.1109/TBME.2012.2184759. [62] S. Roychowdhury, D.D. Koozekanani, S. Radwan, K.K. Parhi, Automated localization of cysts in diabetic macular edema using optical coherence tomography images, in: 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, 2013, pp. 1426–1429. [63] A.M. Abhishek, T.T.J.M. Berendschot, S.V. Rao, S. Dabir, Segmentation and analysis of retinal layers (ILM & RPE) in optical coherence tomography images with edema, in: 2014 IEEE Conference on Biomedical Engineering and Sciences (IECBES), December, 2014, pp. 204–209, https://doi.org/ 10.1109/IECBES.2014.7047486. [64] T.G. Rotsos, M.M. Moschos, Cystoid macular edema, Clin. Ophthalmol. 2 (2008) 919–930. [65] H. Faghihi, F. Ghassemi, K.G. Falavarjani, G.S. Anari, M. Safizadeh, K. Shahraki, Spontaneous closure of traumatic macular holes, Can. J. Ophthalmol. 49 (4) (2014) 395–398. [66] L. Zhang, K. Lee, M. Niemeijer, R.F. Mullins, M. Sonka, M.D. Abra˜moff, Automated segmentation of the choroid from clinical SD-OCT automated segmentation of choroid from SD-OCT, Invest. Ophthalmol. Vis. Sci. 53 (12) (2012) 7510, https://doi.org/10.1167/iovs.12-10311. [67] N. Slokom, H. Trabelsi, I. Zghal, Segmentation of cystoid macular edema in optical coherence tomography, in: 2016 2nd International Conference on Advanced Technologies for Signal and Image Processing (ATSIP), March, 2016, pp. 303–306, https://doi.org/10.1109/ ATSIP.2016.7523096. [68] Q. Chen, T. Leng, L. Zheng, L. Kutzscher, J. Ma, L. de Sisternes, D.L. 
Rubin, Automated drusen segmentation and quantification in SD-OCT images, Med. Image Anal. 17 (8) (2013) 1058–1072, https:// doi.org/10.1016/j.media.2013.06.003. [69] P.A. Keane, P.J. Patel, S. Liakopoulos, F.M. Heussen, S.R. Sadda, A. Tufail, Evaluation of age-related macular degeneration with optical coherence tomography, Surv. Ophthalmol. 57 (5) (2012) 389–414, https://doi.org/10.1016/j.survophthal.2012.01.006.


[70] A.A. Khanifar, A.F. Koreishi, J.A. Izatt, C.A. Toth, Drusen ultrastructure imaging with spectral domain optical coherence tomography in age-related macular degeneration, Ophthalmology 115 (11) (2008) 1883–1890. e1, https://doi.org/10.1016/j.ophtha.2008.04.041. [71] S.G. Schuman, A.F. Koreishi, S. Farsiu, S. Ho Jung, J.A. Izatt, C.A. Toth, Photoreceptor layer thinning over drusen in eyes with age-related macular degeneration imaged in vivo with spectral-domain optical coherence tomography, Ophthalmology 116 (3) (2009) 488–496. e2, https://doi.org/10.1016/ j.ophtha.2008.10.006. [72] J. Oliveira, S. Pereira, L. Gonc¸alves, M. Ferreira, C.A. Silva, Sparse high order potentials for extending multi-surface segmentation of OCT images with drusen, in: 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), August, 2015, pp. 2952–2955, ISSN 1094-687X, https://doi.org/10.1109/EMBC.2015.7319011. [73] A. ElTanboly, A. Placio, A. Shalaby, A. Switala, O. Helmy, S. Schaal, A. El-Baz, An automated approach for early detection of diabetic retinopathy using SD-OCT images, Front. Biosci. (Elite Ed.) 10 (2018) 197–207. [74] A. Yazdanpanah, G. Hamarneh, B.R. Smith, M.V. Sarunic, Segmentation of intra-retinal layers from optical coherence tomography images using an active contour approach, IEEE Trans. Med. Imaging 30 (2) (2011) 484–496, https://doi.org/10.1109/TMI.2010.2087390. [75] L. Ngo, G. Yih, S. Ji, J.H. Han, A study on automated segmentation of retinal layers in optical coherence tomography images, in: 2016 4th International Winter Conference on Brain-Computer Interface (BCI), February, 2016, pp. 1–2, https://doi.org/10.1109/IWW-BCI.2016.7457465. [76] Q. Song, J. Bai, M.K. Garvin, M. Sonka, J.M. Buatti, X. Wu, Optimal multiple surface segmentation with shape and context priors, IEEE Trans. Med. Imaging 32 (2) (2013) 376–386. [77] I.C. Han, G.J. Jaffe, Evaluation of artifacts associated with macular spectral-domain optical coherence tomography, Ophthalmology 117 (6) (2010) 1177–1189. e4, https://doi.org/10.1016/j.ophtha. 2009.10.029. [78] A.H. Mahmoud, Utilizing Radiation for Smart Robotic Applications Using Visible, Thermal, and Polarization Images (Ph.D. dissertation), University of Louisville, 2014. [79] A. Mahmoud, A. El-Barkouky, J. Graham, A. Farag, Pedestrian detection using mixed partial derivative based histogram of oriented gradients, in: 2014 IEEE International Conference on Image Processing (ICIP), IEEE, 2014, pp. 2334–2337. [80] A. El-Barkouky, A. Mahmoud, J. Graham, A. Farag, An interactive educational drawing system using a humanoid robot and light polarization, in: 2013 IEEE International Conference on Image Processing, IEEE, 2013, pp. 3407–3411. [81] A.H. Mahmoud, M.T. El-Melegy, A.A. Farag, Direct method for shape recovery from polarization and shading, in: 2012 19th IEEE International Conference on Image Processing, IEEE, 2012, pp. 1769–1772. [82] A.M. Ali, A.A. Farag, A. El-Baz, Graph cuts framework for kidney segmentation with prior shape constraints, in: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI’07), Brisbane, Australia, October 29–November 2, vol. 1, 2007, pp. 384–392. [83] A.S. Chowdhury, R. Roy, S. Bose, F.K.A. Elnakib, A. El-Baz, Non-rigid biomedical image registration using graph cuts with a novel data term, in: Proceedings of IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’12), Barcelona, Spain, May 2–5, 2012, pp. 446–449. 
[84] A. El-Baz, A.A. Farag, S.E. Yuksel, M.E.A. El-Ghar, T.A. Eldiasty, M.A. Ghoneim, Application of deformable models for the detection of acute renal rejection, in: Deformable Models, Springer, New York, NY, 2007, pp. 293–333. [85] A. El-Baz, A. Farag, R. Fahmi, S. Yuksel, M.A. El-Ghar, T. Eldiasty, Image analysis of renal DCE MRI for the detection of acute renal rejection, in: Proceedings of IAPR International Conference on Pattern Recognition (ICPR’06), Hong Kong, August 20–24, 2006, pp. 822–825.


[86] A. El-Baz, A. Farag, R. Fahmi, S. Yuksel, W. Miller, M.A. El-Ghar, T. El-Diasty, M. Ghoneim, A new CAD system for the evaluation of kidney diseases using DCE-MRI, in: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI’08), Copenhagen, Denmark, October 1–6, 2006, pp. 446–453. [87] A. El-Baz, G. Gimel’farb, M.A. El-Ghar, A novel image analysis approach for accurate identification of acute renal rejection, in: Proceedings of IEEE International Conference on Image Processing (ICIP’08), San Diego, California, USA, October 12–15, 2008, pp. 1812–1815. [88] A. El-Baz, G. Gimel’farb, M.A. El-Ghar, Image analysis approach for identification of renal transplant rejection, in: Proceedings of IAPR International Conference on Pattern Recognition (ICPR’08), Tampa, Florida, USA, December 8–11, 2008, pp. 1–4. [89] A. El-Baz, G. Gimel’farb, M.A. El-Ghar, New Motion correction models for automatic identification of renal transplant rejection, in: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI’07), Brisbane, Australia, October 29–November 2, 2007, pp. 235–243. [90] A. Farag, A. El-Baz, S. Yuksel, M.A. El-Ghar, T. Eldiasty, A framework for the detection of acute rejection with dynamic contrast enhanced magnetic resonance imaging, in: Proceedings of IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’06), Arlington, Virginia, USA, April 6–9, 2006, pp. 418–421. [91] F. Khalifa, G.M. Beache, M.A. El-Ghar, T. El-Diasty, G. Gimel’farb, M. Kong, A. El-Baz, Dynamic contrast-enhanced MRI-based early detection of acute renal transplant rejection, IEEE Trans. Med. Imaging 32 (10) (2013) 1910–1927. [92] F. Khalifa, A. El-Baz, G. Gimel’farb, M.A. El-Ghar, Non-invasive image-based approach for early detection of acute renal rejection, in: Proceedings of International Conference Medical Image Computing and Computer-Assisted Intervention (MICCAI’10), Beijing, China, September 20–24, 2010, pp. 10–18. [93] F. Khalifa, A. El-Baz, G. Gimel’farb, R. Ouseph, M.A. El-Ghar, Shape-appearance guided level-set deformable model for image segmentation, in: Proceedings of IAPR International Conference on Pattern Recognition (ICPR’10), Istanbul, Turkey, August 23–26, 2010, pp. 4581–4584. [94] F. Khalifa, M.A. El-Ghar, B. Abdollahi, H. Frieboes, T. El-Diasty, A. El-Baz, A comprehensive noninvasive framework for automated evaluation of acute renal transplant rejection using DCE-MRI, NMR Biomed. 26 (11) (2013) 1460–1470. [95] F. Khalifa, M.A. El-Ghar, B. Abdollahi, H.B. Frieboes, T. El-Diasty, A. El-Baz, Dynamic contrastenhanced MRI-based early detection of acute renal transplant rejection, in: 2014 Annual Scientific Meeting and Educational Course Brochure of the Society of Abdominal Radiology (SAR’14), Boca Raton, Florida, March 23–28, 2014, p. CID:1855912. [96] F. Khalifa, A. Elnakib, G.M. Beache, G. Gimel’farb, M.A. El-Ghar, G. Sokhadze, S. Manning, P. McClure, A. El-Baz, 3D kidney segmentation from CT images using a level set approach guided by a novel stochastic speed function, in: Proceedings of International Conference Medical Image Computing and Computer-Assisted Intervention (MICCAI’11), Toronto, Canada, September 18–22, 2011, pp. 587–594. [97] F. Khalifa, G. Gimel’farb, M.A. El-Ghar, G. Sokhadze, S. Manning, P. McClure, R. Ouseph, A. 
El-Baz, A new deformable model-based segmentation approach for accurate extraction of the kidney from abdominal CT images, in: Proceedings of IEEE International Conference on Image Processing (ICIP’11), Brussels, Belgium, September 11–14, 2011, pp. 3393–3396. [98] M. Mostapha, F. Khalifa, A. Alansary, A. Soliman, J. Suri, A. El-Baz, Computer-aided diagnosis systems for acute renal transplant rejection: challenges and methodologies, in: A. El-Baz, L. Saba, J. Suri (Eds.), Abdomen and Thoracic Imaging, Springer, 2014, pp. 1–35. [99] M. Shehata, F. Khalifa, E. Hollis, A. Soliman, E. Hosseini-Asl, M.A. El-Ghar, M. El-Baz, A.C. Dwyer, A. El-Baz, R. Keynton, A new non-invasive approach for early classification of renal rejection types using diffusion-weighted MRI, in: IEEE International Conference on Image Processing (ICIP), IEEE, 2016, pp. 136–140.


[100] F. Khalifa, A. Soliman, A. Takieldeen, M. Shehata, M. Mostapha, A. Shaffie, R. Ouseph, A. Elmaghraby, A. El-Baz, Kidney segmentation from CT images using a 3D NMF-guided active contour model, in: IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, 2016, pp. 432–435. [101] M. Shehata, F. Khalifa, A. Soliman, A. Takieldeen, M.A. El-Ghar, A. Shaffie, A.C. Dwyer, R. Ouseph, A. El-Baz, R. Keynton, 3D diffusion MRI-based CAD system for early diagnosis of acute renal rejection, in: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, 2016, pp. 1177–1180. [102] M. Shehata, F. Khalifa, A. Soliman, R. Alrefai, M.A. El-Ghar, A.C. Dwyer, R. Ouseph, A. El-Baz, A level set-based framework for 3D kidney segmentation from diffusion MR images, in: IEEE International Conference on Image Processing (ICIP), IEEE, 2015, pp. 4441–4445. [103] M. Shehata, F. Khalifa, A. Soliman, M.A. El-Ghar, A.C. Dwyer, G. Gimel’farb, R. Keynton, A. El-Baz, A promising non-invasive CAD system for kidney function assessment, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2016, pp. 613–621. [104] F. Khalifa, A. Soliman, A. Elmaghraby, G. Gimel’farb, A. El-Baz, 3D kidney segmentation from abdominal images using spatial-appearance models, Comput. Math. Methods Med. 2017 (2017) 1–10. [105] E. Hollis, M. Shehata, F. Khalifa, M.A. El-Ghar, T. El-Diasty, A. El-Baz, Towards non-invasive diagnostic techniques for early detection of acute renal transplant rejection: a review, Egypt. J. Radiol. Nucl. Med. 48 (1) (2016) 257–269. [106] M. Shehata, F. Khalifa, A. Soliman, M.A. El-Ghar, A.C. Dwyer, A. El-Baz, Assessment of renal transplant using image and clinical-based biomarkers, in: Proceedings of 13th Annual Scientific Meeting of American Society for Diagnostics and Interventional Nephrology (ASDIN’17), New Orleans, Louisiana, USA, February 10–12, 2017. [107] M. Shehata, F. Khalifa, A. Soliman, M.A. El-Ghar, A.C. Dwyer, A. El-Baz, Early assessment of acute renal rejection, in: Proceedings of 12th Annual Scientific Meeting of American Society for Diagnostics and Interventional Nephrology (ASDIN’16), Pheonix, Arizona, USA, February 19–21, 2016, 2017. [108] A. Eltanboly, M. Ghazal, H. Hajjdiab, A. Shalaby, A. Switala, A. Mahmoud, P. Sahoo, M. El-Azab, A. El-Baz, Level sets-based image segmentation approach using statistical shape priors, Appl. Math. Comput. 340 (2019) 164–179. [109] M. Shehata, A. Mahmoud, A. Soliman, F. Khalifa, M. Ghazal, M.A. El-Ghar, M. El-Melegy, A. El-Baz, 3D kidney segmentation from abdominal diffusion MRI using an appearance-guided deformable boundary, PLoS ONE 13 (7) (2018) e0200082. [110] F. Khalifa, G. Beache, A. El-Baz, G. Gimel’farb, Deformable model guided by stochastic speed with application in cine images segmentation, in: Proceedings of IEEE International Conference on Image Processing (ICIP’10), Hong Kong, September 26–29, 2010, pp. 1725–1728. [111] F. Khalifa, G.M. Beache, A. Elnakib, H. Sliman, G. Gimel’farb, K.C. Welch, A. El-Baz, A new shapebased framework for the left ventricle wall segmentation from cardiac first-pass perfusion MRI, in: Proceedings of IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’13), San Francisco, California, USA, April 7–11, 2013, pp. 41–44. [112] F. Khalifa, G.M. Beache, A. Elnakib, H. Sliman, G. Gimel’farb, K.C. Welch, A. 
El-Baz, A new nonrigid registration framework for improved visualization of transmural perfusion gradients on cardiac first-pass perfusion MRI, in: Proceedings of IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’12), Barcelona, Spain, May 2–5, 2012, pp. 828–831. [113] F. Khalifa, G.M. Beache, A. Firjani, K.C. Welch, G. Gimel’farb, A. El-Baz, A new nonrigid registration approach for motion correction of cardiac first-pass perfusion MRI, in: Proceedings of IEEE International Conference on Image Processing (ICIP’12), Lake Buena Vista, Florida, September 30–October 3, 2012, pp. 1665–1668. [114] F. Khalifa, G.M. Beache, G. Gimel’farb, A. El-Baz, A novel CAD system for analyzing cardiac first-pass MR images, in: Proceedings of IAPR International Conference on Pattern Recognition (ICPR’12), Tsukuba Science City, Japan, November 11–15, 2012, pp. 77–80.


[115] F. Khalifa, G.M. Beache, G. Gimel’farb, A. El-Baz, A novel approach for accurate estimation of left ventricle global indexes from short-axis cine MRI, in: Proceedings of IEEE International Conference on Image Processing (ICIP’11), Brussels, Belgium, September 11–14, 2011, pp. 2645–2649. [116] F. Khalifa, G.M. Beache, G. Gimel’farb, G.A. Giridharan, A. El-Baz, A new image-based framework for analyzing cine images, in: A. El-Baz, U.R. Acharya, M. Mirmedhdi, J.S. Suri (Eds.), Handbook of Multi Modality State-of-the-Art Medical Image Segmentation and Registration Methodologies, vol. 2, Springer, New York, NY, 2011, pp. 69–98, ISBN 978-1-4419-8203-2 (Chapter 3). [117] F. Khalifa, G.M. Beache, G. Gimel’farb, G.A. Giridharan, A. El-Baz, Accurate automatic analysis of cardiac cine images, IEEE Trans. Biomed. Eng. 59 (2) (2012) 445–455. [118] F. Khalifa, G.M. Beache, M. Nitzken, G. Gimel’farb, G.A. Giridharan, A. El-Baz, Automatic analysis of left ventricle wall thickness using short-axis cine CMR images, in: Proceedings of IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’11), Chicago, Illinois, March 30– April 2, 2011, pp. 1306–1309. [119] M. Nitzken, G. Beache, A. Elnakib, F. Khalifa, G. Gimel’farb, A. El-Baz, Accurate modeling of tagged CMR 3D image appearance characteristics to improve cardiac cycle strain estimation, in: 2012 19th IEEE International Conference on Image Processing (ICIP), Orlando, Florida, USA, September, IEEE, 2012, pp. 521–524. [120] M. Nitzken, G. Beache, A. Elnakib, F. Khalifa, G. Gimel’farb, A. El-Baz, Improving full-cardiac cycle strain estimation from tagged CMR by accurate modeling of 3D image appearance characteristics, in: 2012 9th IEEE International Symposium on Biomedical Imaging (ISBI), Barcelona, Spain, May, IEEE, 2012, pp. 462–465 (selected for oral presentation). [121] M.J. Nitzken, A.S. El-Baz, G.M. Beache, Markov-Gibbs random field model for improved full-cardiac cycle strain estimation from tagged CMR, J. Cardiovasc. Magn. Reson. 14 (1) (2012) 1–2. [122] H. Sliman, A. Elnakib, G.M. Beache, A. Elmaghraby, A. El-Baz, Assessment of myocardial function from cine cardiac MRI using a novel 4D tracking approach, J. Comput. Sci. Syst. Biol. 7 (2014) 169–173. [123] H. Sliman, A. Elnakib, G.M. Beache, A. Soliman, F. Khalifa, G. Gimel’farb, A. Elmaghraby, A. El-Baz, A novel 4D PDE-based approach for accurate assessment of myocardium function using cine cardiac magnetic resonance images, in: Proceedings of IEEE International Conference on Image Processing (ICIP’14), Paris, France, October 27–30, 2014, pp. 3537–3541. [124] H. Sliman, F. Khalifa, A. Elnakib, G.M. Beache, A. Elmaghraby, A. El-Baz, A new segmentation-based tracking framework for extracting the left ventricle cavity from cine cardiac MRI, in: Proceedings of IEEE International Conference on Image Processing (ICIP’13), Melbourne, Australia, September 15–18, 2013, pp. 685–689. [125] H. Sliman, F. Khalifa, A. Elnakib, A. Soliman, G.M. Beache, A. Elmaghraby, G. Gimel’farb, A. El-Baz, Myocardial borders segmentation from cine MR images using bi-directional coupled parametric deformable models, Med. Phys. 40 (9) (2013) 1–13. [126] H. Sliman, F. Khalifa, A. Elnakib, A. Soliman, G.M. Beache, G. Gimel’farb, A. Emam, A. Elmaghraby, A. El-Baz, Accurate segmentation framework for the left ventricle wall from cardiac cine MRI, in: Proceedings of International Symposium on Computational Models for Life Science (CMLS’13), Sydney, Australia, November 27–29, vol. 1559, 2013, pp. 287–296. 
[127] B. Abdollahi, A.C. Civelek, X.-F. Li, J. Suri, A. El-Baz, PET/CT nodule segmentation and diagnosis: a survey, in: L. Saba, J.S. Suri (Eds.), Multi Detector CT Imaging, Taylor & Francis, 2014, pp. 639–651, ISBN 978-1-4398-9397-5 (Chapter 30). [128] B. Abdollahi, A. El-Baz, A.A. Amini, A multi-scale non-linear vessel enhancement technique, in: 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC, IEEE, 2011, pp. 3925–3929. [129] B. Abdollahi, A. Soliman, A.C. Civelek, X.-F. Li, G. Gimel’farb, A. El-Baz, A novel Gaussian scale spacebased joint MGRF framework for precise lung segmentation, in: Proceedings of IEEE International Conference on Image Processing (ICIP’12), IEEE, 2012, pp. 2029–2032.

Chapter 7 • Optical coherence tomography: A review

217

[130] B. Abdollahi, A. Soliman, A.C. Civelek, X.-F. Li, G. Gimel’farb, A. El-Baz, A novel 3D joint MGRF framework for precise lung segmentation, in: Machine Learning in Medical Imaging, Springer, 2012, pp. 86–93. [131] A.M. Ali, A.S. El-Baz, A.A. Farag, A novel framework for accurate lung segmentation using graph cuts, in: Proceedings of IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’07), IEEE, 2007, pp. 908–911. [132] A. El-Baz, G.M. Beache, G. Gimel’farb, K. Suzuki, K. Okada, Lung Imaging Data Analysis, Int. J. Biomed. Imaging 2013 (2013) 1–2. [133] A. El-Baz, G.M. Beache, G. Gimel’farb, K. Suzuki, K. Okada, A. Elnakib, A. Soliman, B. Abdollahi, Computer-Aided Diagnosis Systems for Lung Cancer: Challenges and Methodologies, Int. J. Biomed. Imaging 2013 (2013) 1–46. [134] A. El-Baz, A. Elnakib, M. Abou El-Ghar, G. Gimel’farb, R. Falk, A. Farag, Automatic detection of 2D and 3D lung nodules in chest spiral CT scans, Int. J. Biomed. Imaging 2013 (2013) 1–11. [135] A. El-Baz, A.A. Farag, R. Falk, R. La Rocca, A unified approach for detection, visualization, and identification of lung abnormalities in chest spiral CT scans, in: International Congress Series, vol. 1256, Elsevier, 2003, pp. 998–1004. [136] A. El-Baz, A.A. Farag, R. Falk, R. La Rocca, Detection, visualization and identification of lung abnormalities in chest spiral CT scan: phase-I, in: Proceedings of International Conference on Biomedical Engineering, Cairo, Egypt, vol. 12, 2002. [137] A. El-Baz, A. Farag, G. Gimel’farb, R. Falk, M.A. El-Ghar, T. Eldiasty, A framework for automatic segmentation of lung nodules from low dose chest CT scans, in: Proceedings of International Conference on Pattern Recognition (ICPR’06), vol. 3, IEEE, 2006, pp. 611–614. [138] A. El-Baz, A. Farag, G. Gimel’farb, R. Falk, M.A. El-Ghar, A novel level set-based computer-aided detection system for automatic detection of lung nodules in low dose chest computed tomography scans, Lung Imaging Comput. Aided Diagn. 10 (2011) 221–238. [139] A. El-Baz, G. Gimel’farb, M. Abou El-Ghar, R. Falk, Appearance-based diagnostic system for early assessment of malignant lung nodules, in: Proceedings of IEEE International Conference on Image Processing (ICIP’12), IEEE, 2012, pp. 533–536. [140] A. El-Baz, G. Gimel’farb, R. Falk, A novel 3D framework for automatic lung segmentation from low dose CT images, in: A. El-Baz, J.S. Suri (Eds.), Lung Imaging and Computer Aided Diagnosis, Taylor & Francis, 2011, pp. 1–16, ISBN 978-1-4398-4558-5 (Chapter 1). [141] A. El-Baz, G. Gimel’farb, R. Falk, M. El-Ghar, Appearance analysis for diagnosing malignant lung nodules, in: Proceedings of IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’10), IEEE, 2010, pp. 193–196. [142] A. El-Baz, G. Gimel’farb, R. Falk, M.A. El-Ghar, A novel level set-based CAD system for automatic detection of lung nodules in low dose chest CT scans, in: A. El-Baz, J.S. Suri (Eds.), Lung Imaging and Computer Aided Diagnosis, vol. 1, Taylor & Francis, 2011, pp. 221–238, ISBN 978-1-43984558-5 (Chapter 10). [143] A. El-Baz, G. Gimel’farb, R. Falk, M.A. El-Ghar, A new approach for automatic analysis of 3D low dose CT images for accurate monitoring the detected lung nodules, in: Proceedings of International Conference on Pattern Recognition (ICPR’08), IEEE, 2008, pp. 1–4. [144] A. El-Baz, G. Gimel’farb, R. Falk, M.A. 
El-Ghar, A novel approach for automatic follow-up of detected lung nodules, in: Proceedings of IEEE International Conference on Image Processing (ICIP’07), vol. 5, IEEE, 2007, p. V-501. [145] A. El-Baz, G. Gimel’farb, R. Falk, M.A. El-Ghar, A new CAD system for early diagnosis of detected lung nodules, in: IEEE International Conference on Image Processing, 2007. ICIP 2007, vol. 2, IEEE, 2007, p. II-461. [146] A. El-Baz, G. Gimel’farb, R. Falk, M.A. El-Ghar, H. Refaie, Promising results for early diagnosis of lung cancer, in: Proceedings of IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’08), IEEE, 2008, pp. 1151–1154.

218

Diabetes and Fundus OCT

[147] A. El-Baz, G.L. Gimel’farb, R. Falk, M. Abou El-Ghar, T. Holland, T. Shaffer, A new stochastic framework for accurate lung segmentation, in: Proceedings of Medical Image Computing and ComputerAssisted Intervention (MICCAI’08), 2008, pp. 322–330. [148] A. El-Baz, G.L. Gimel’farb, R. Falk, D. Heredis, M. Abou El-Ghar, A novel approach for accurate estimation of the growth rate of the detected lung nodules, in: Proceedings of International Workshop on Pulmonary Image Analysis, 2008, pp. 33–42. [149] A. El-Baz, G.L. Gimel’farb, R. Falk, T. Holland, T. Shaffer, A framework for unsupervised segmentation of lung tissues from low dose computed tomography images, in: Proceedings of British Machine Vision (BMVC’08), 2008, pp. 1–10. [150] A. El-Baz, G. Gimel’farb, R. Falk, M.A. El-Ghar, 3D MGRF-based appearance modeling for robust segmentation of pulmonary nodules in 3D LDCT chest images, in: Lung Imaging and Computer Aided Diagnosis, 2011, pp. 51–63 (Chapter 3). [151] A. El-Baz, G. Gimel’farb, R. Falk, M.A. El-Ghar, Automatic analysis of 3D low dose CT images for early diagnosis of lung cancer, Pattern Recogn. 42 (6) (2009) 1041–1051. [152] A. El-Baz, G. Gimel’farb, R. Falk, M.A. El-Ghar, S. Rainey, D. Heredia, T. Shaffer, Toward early diagnosis of lung cancer, in: Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI’09), Springer, 2009, pp. 682–689. [153] A. El-Baz, G. Gimel’farb, R. Falk, M.A. El-Ghar, J. Suri, Appearance analysis for the early assessment of detected lung nodules, in: Lung Imaging and Computer Aided Diagnosis, CRC Press, Boca Raton, FL, 2011, pp. 395–404 (Chapter 17). [154] A. El-Baz, F. Khalifa, A. Elnakib, M. Nitkzen, A. Soliman, P. McClure, G. Gimel’farb, M.A. El-Ghar, A novel approach for global lung registration using 3D Markov Gibbs appearance model, in: Proceedings of International Conference Medical Image Computing and Computer-Assisted Intervention (MICCAI’12), Nice, France, October 1–5, 2012, pp. 114–121. [155] A. El-Baz, M. Nitzken, A. Elnakib, F. Khalifa, G. Gimel’farb, R. Falk, M.A. El-Ghar, 3D shape analysis for early diagnosis of malignant lung nodules, in: Proceedings of International Conference Medical Image Computing and Computer-Assisted Intervention (MICCAI’11), Toronto, Canada, September 18–22, 2011, pp. 175–182. [156] A. El-Baz, M. Nitzken, G. Gimel’farb, E. Van Bogaert, R. Falk, M.A. El-Ghar, J. Suri, Three-dimensional shape analysis using spherical harmonics for early assessment of detected lung nodules, in: Lung Imaging and Computer Aided Diagnosis, CRC Press, Boca Raton, FL, 2011, pp. 421–438 (Chapter 19). [157] A. El-Baz, M. Nitzken, F. Khalifa, A. Elnakib, G. Gimel’farb, R. Falk, M.A. El-Ghar, 3D shape analysis for early diagnosis of malignant lung nodules, in: Proceedings of International Conference on Information Processing in Medical Imaging (IPMI’11), Monastery Irsee, Germany (Bavaria), July 3–8, 2011, pp. 772–783. [158] A. El-Baz, M. Nitzken, E. Vanbogaert, G. Gimel’Farb, R. Falk, M. Abo El-Ghar, A novel shape-based diagnostic approach for early diagnosis of lung nodules, in: 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, IEEE, 2011, pp. 137–140. [159] A. El-Baz, P. Sethu, G. Gimel’farb, F. Khalifa, A. Elnakib, R. Falk, M.A. El-Ghar, Elastic phantoms generated by microfluidics technology: validation of an imaged-based approach for accurate measurement of the growth rate of lung nodules, Biotechnol. J. 6 (2) (2011) 195–203. [160] A. El-Baz, P. Sethu, G. Gimel’farb, F. Khalifa, A. 
Elnakib, R. Falk, M.A. El-Ghar, A new validation approach for the growth rate measurement using elastic phantoms generated by state-of-the-art microfluidics technology, in: Proceedings of IEEE International Conference on Image Processing (ICIP’10), Hong Kong, September 26–29, 2010, pp. 4381–4383. [161] A. El-Baz, P. Sethu, G. Gimel’farb, F. Khalifa, A. Elnakib, R. Falk, M.A. El-Ghar, J. Suri, Validation of a new imaged-based approach for the accurate estimating of the growth rate of detected lung nodules using real CT images and elastic phantoms generated by state-of-the-art microfluidics technology,

Chapter 7 • Optical coherence tomography: A review

219

in: A. El-Baz, J.S. Suri (Eds.), Handbook of Lung Imaging and Computer Aided Diagnosis, vol. 1, Taylor & Francis, New York, NY, 2011, pp. 405–420, ISBN 978-1-4398-4557-8 (Chapter 18). [162] A. El-Baz, A. Soliman, P. McClure, G. Gimel’farb, M.A. El-Ghar, R. Falk, Early assessment of malignant lung nodules based on the spatial analysis of detected lung nodules, in: Proceedings of IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’12), IEEE, 2012, pp. 1463–1466. [163] A. El-Baz, S.E. Yuksel, S. Elshazly, A.A. Farag, Non-rigid registration techniques for automatic followup of lung nodules, in: Proceedings of Computer Assisted Radiology and Surgery (CARS’05), vol. 1281, Elsevier, 2005, pp. 1115–1120. [164] A.S. El-Baz, J.S. Suri, Lung Imaging and Computer Aided Diagnosis, CRC Press, Boca Raton, FL, 2011. [165] A. Soliman, F. Khalifa, N. Dunlap, B. Wang, M. El-Ghar, A. El-Baz, An ISO-surfaces based local deformation handling framework of lung tissues, in: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, 2016, pp. 1253–1259. [166] A. Soliman, F. Khalifa, A. Shaffie, N. Dunlap, B. Wang, A. Elmaghraby, A. El-Baz, Detection of lung injury using 4D-CT chest images, in: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, 2016, pp. 1274–1277. [167] A. Soliman, F. Khalifa, A. Shaffie, N. Dunlap, B. Wang, A. Elmaghraby, G. Gimel’farb, M. Ghazal, A. ElBaz, A comprehensive framework for early assessment of lung injury, in: 2017 IEEE International Conference on Image Processing (ICIP), IEEE, 2017, pp. 3275–3279. [168] A. Shaffie, A. Soliman, M. Ghazal, F. Taher, N. Dunlap, B. Wang, A. Elmaghraby, G. Gimel’farb, A. ElBaz, A new framework for incorporating appearance and shape features of lung nodules for precise diagnosis of lung cancer, in: 2017 IEEE International Conference on Image Processing (ICIP), IEEE, 2017, pp. 1372–1376. [169] A. Soliman, F. Khalifa, A. Shaffie, N. Liu, N. Dunlap, B. Wang, A. Elmaghraby, G. Gimel’farb, A. El-Baz, Image-based CAD system for accurate identification of lung injury, in: 2016 IEEE International Conference on Image Processing (ICIP), IEEE, 2016, pp. 121–125. [170] B. Dombroski, M. Nitzken, A. Elnakib, F. Khalifa, A. El-Baz, M.F. Casanova, Cortical surface complexity in a population-based normative sample, Transl. Neurosci. 5 (1) (2014) 17–24. [171] A. El-Baz, M. Casanova, G. Gimel’farb, M. Mott, A. Switala, An MRI-based diagnostic framework for early diagnosis of dyslexia, Int. J. Comput. Assist. Radiol. Surg. 3 (3–4) (2008) 181–189. [172] A. El-Baz, M. Casanova, G. Gimel’farb, M. Mott, A. Switala, E. Vanbogaert, R. McCracken, A new CAD system for early diagnosis of dyslexic brains, in: Proceedings of the International Conference on Image Processing (ICIP’2008), IEEE, 2008, pp. 1820–1823. [173] A. El-Baz, M.F. Casanova, G. Gimel’farb, M. Mott, A.E. Switwala, A new image analysis approach for automatic classification of autistic brains, in: Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’2007), IEEE, 2007, pp. 352–355. [174] A. El-Baz, A. Elnakib, F. Khalifa, M.A. El-Ghar, P. McClure, A. Soliman, G. Gimel’farb, Precise segmentation of 3-D magnetic resonance angiography, IEEE Trans. Biomed. Eng. 59 (7) (2012) 2019–2029. [175] A. El-Baz, A. Farag, G. Gimel’farb, M.A. El-Ghar, T. Eldiasty, Probabilistic modeling of blood vessels for segmenting MRA images, in: 18th International Conference on Pattern Recognition (ICPR’06), vol. 3, IEEE, 2006, pp. 917–920. 
[176] A. El-Baz, A.A. Farag, G. Gimel’farb, M.A. El-Ghar, T. Eldiasty, A new adaptive probabilistic model of blood vessels for segmenting MRA images, in: Medical Image Computing and Computer-Assisted Intervention—MICCAI 2006, vol. 4191, Springer, 2006, pp. 799–806. [177] A. El-Baz, A.A. Farag, G. Gimel’farb, S.G. Hushek, Automatic cerebrovascular segmentation by accurate probabilistic modeling of TOF-MRA images, in: Medical Image Computing and ComputerAssisted Intervention—MICCAI 2005, Springer, 2005, pp. 34–42.

220

Diabetes and Fundus OCT

[178] A. El-Baz, A. Farag, A. Elnakib, M.F. Casanova, G. Gimel’farb, A.E. Switala, D. Jordan, S. Rainey, Accurate automated detection of autism related corpus callosum abnormalities, J. Med. Syst. 35 (5) (2011) 929–939. [179] A. El-Baz, A. Farag, G. Gimelfarb, Cerebrovascular segmentation by accurate probabilistic modeling of TOF-MRA images, in: Image Analysis, vol. 3540, Springer, 2005, pp. 1128–1137. [180] A. El-Baz, G. Gimel’farb, R. Falk, M.A. El-Ghar, V. Kumar, D. Heredia, A novel 3D joint Markov-Gibbs model for extracting blood vessels from PC-MRA images, in: Medical Image Computing and Computer-Assisted Intervention—MICCAI 2009, vol. 5762, Springer, 2009, pp. 943–950. [181] A. Elnakib, A. El-Baz, M.F. Casanova, G. Gimel’farb, A.E. Switala, Image-based detection of corpus callosum variability for more accurate discrimination between dyslexic and normal brains, in: Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’2010), IEEE, 2010, pp. 109–112. [182] A. Elnakib, M.F. Casanova, G. Gimel’farb, A.E. Switala, A. El-Baz, Autism diagnostics by centerlinebased shape analysis of the corpus callosum, in: Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’2011), IEEE, 2011, pp. 1843–1846. [183] A. Elnakib, M. Nitzken, M.F. Casanova, H. Park, G. Gimel’farb, A. El-Baz, Quantification of agerelated brain cortex change using 3D shape analysis, in: 2012 21st International Conference on Pattern Recognition (ICPR), IEEE, 2012, pp. 41–44. [184] M. Mostapha, A. Soliman, F. Khalifa, A. Elnakib, A. Alansary, M. Nitzken, M.F. Casanova, A. El-Baz, A statistical framework for the classification of infant DT images, in: 2014 IEEE International Conference on Image Processing (ICIP), IEEE, 2014, pp. 2222–2226. [185] M. Nitzken, M.F. Casanova, G. Gimel’farb, A. Elnakib, F. Khalifa, A. Switala, A. El-Baz, 3D shape analysis of the brain cortex with application to dyslexia, in: 2011 18th IEEE International Conference on Image Processing (ICIP), September, IEEE, Brussels, Belgium, 2011, pp. 2657–2660 (Selected for oral presentation. Oral acceptance rate is 10% and the overall acceptance rate is 35%). [186] F.E.-Z.A. El-Gamal, M.M. Elmogy, M. Ghazal, A. Atwan, G.N. Barnes, M.F. Casanova, R. Keynton, A. S. El-Baz, A novel CAD system for local and global early diagnosis of Alzheimer’s disease based on PIB-PET scans, in: 2017 IEEE International Conference on Image Processing (ICIP), IEEE, 2017, pp. 3270–3274. [187] M. Ismail, A. Soliman, M. Ghazal, A.E. Switala, G. Gimel’farb, G.N. Barnes, A. Khalil, A. El-Baz, A fast stochastic framework for automatic MR brain images segmentation, PLoS ONE 12 (11) (2017) e0187391. [188] M.M.T. Ismail, R.S. Keynton, M.M.M.O. Mostapha, A.H. ElTanboly, M.F. Casanova, G.L. Gimel’farb, A. El-Baz, Studying autism spectrum disorder with structural and diffusion magnetic resonance imaging: a survey, Front. Hum. Neurosci. 10 (2016) 211. [189] A. Alansary, M. Ismail, A. Soliman, F. Khalifa, M. Nitzken, A. Elnakib, M. Mostapha, A. Black, K. Stinebruner, M.F. Casanova, et al., Infant brain extraction in T1-weighted MR images using BET and refinement using LCDG and MGRF models, IEEE J. Biomed. Health Inform. 20 (3) (2016) 925–935. [190] M. Ismail, A. Soliman, A. ElTanboly, A. Switala, M. Mahmoud, F. Khalifa, G. Gimel’farb, M. F. Casanova, R. Keynton, A. 
El-Baz, Detection of white matter abnormalities in MR brain images for diagnosis of autism in children, in: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, 2016, pp. 6–9. [191] M. Ismail, M. Mostapha, A. Soliman, M. Nitzken, F. Khalifa, A. Elnakib, G. Gimel’farb, M.F. Casanova, A. El-Baz, Segmentation of infant brain MR images based on adaptive shape prior and higher-order MGRF, in: 2015 IEEE International Conference on Image Processing (ICIP), IEEE, 2015, pp. 4327–4331. [192] E.H. Asl, M. Ghazal, A. Mahmoud, A. Aslantas, A. Shalaby, M. Casanova, G. Barnes, G. Gimel’farb, R. Keynton, A. El-Baz, Alzheimer’s disease diagnostics by a 3D deeply supervised adaptable convolutional network, Front. Biosci. (Landmark Ed.) 23 (2018) 584–596.

Chapter 7 • Optical coherence tomography: A review

221

[193] A. Mahmoud, A. El-Barkouky, H. Farag, J. Graham, A. Farag, A non-invasive method for measuring blood flow rate in superficial veins from a single thermal image, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2013, pp. 354–359. [194] A. El-Baz, A. Shalaby, F. Taher, M. El-Baz, M. Ghazal, M.A. El-Ghar, A.L.I. Takieldeen, J. Suri, Probabilistic modeling of blood vessels for segmenting magnetic resonance angiography images, Med. Res. Arch. 5 (3) (2017) 1–22. [195] A.S. Chowdhury, A.K. Rudra, M. Sen, A. Elnakib, A. El-Baz, Cerebral white matter segmentation from MRI using probabilistic graph cuts and geometric shape priors, in: ICIP, 2010, pp. 3649–3652. [196] Y. Gebru, G. Giridharan, M. Ghazal, A. Mahmoud, A. Shalaby, A. El-Baz, Detection of cerebrovascular changes using magnetic resonance angiography, in: Cardiovascular Imaging and Image Analysis, CRC Press, Boca Raton, FL, 2018, pp. 1–22. [197] A. Mahmoud, A. Shalaby, F. Taher, M. El-Baz, J.S. Suri, A. El-Baz, Vascular tree segmentation from different image modalities, in: Cardiovascular Imaging and Image Analysis, CRC Press, Boca Raton, FL, 2018, pp. 43–70. [198] F. Taher, A. Mahmoud, A. Shalaby, A. El-Baz, A review on the cerebrovascular segmentation methods, in: 2018 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), IEEE, 2018, pp. 359–364. [199] H. Kandil, A. Soliman, L. Fraiwan, A. Shalaby, A. Mahmoud, A. ElTanboly, A. Elmaghraby, G. Giridharan, A. El-Baz, A novel MRA framework based on integrated global and local analysis for accurate segmentation of the cerebral vascular system, in: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), IEEE, 2018, pp. 1365–1368.

8
An accountable saliency-oriented data-driven approach to diabetic retinopathy detection
Ramon Pires, Alexandre Ferreira, Sandra Avila, Jacques Wainer, Anderson Rocha
Institute of Computing, University of Campinas (UNICAMP), Campinas, Brazil

1 Introduction

Diabetes mellitus, a widespread chronic condition caused by elevated levels of glucose in the blood, affects some 425 million adults worldwide (1 in 11); if the growth trend continues, 629 million adults (1 in 10) will suffer from the condition by 2045 [1]. Diabetic retinopathy (DR) is among the chronic complications of diabetes and is the leading cause of vision loss in working-age adults. In the United States alone, nearly 7.7 million people aged 40 and older have DR [2]; owing to the shortage of ophthalmologists and optometrists, the prevalence in developing countries is even larger [3]. Besides the scarcity of ophthalmologists, another critical barrier to proper DR treatment is that about half of all diabetic patients are not even aware of their health condition [1]. These factors restrict consultations and the annual follow-ups recommended by the World Health Organization and professional ophthalmic organizations.

To alleviate the physician scarcity and substantially reduce the workload, several solutions based on machine-learning algorithms have been proposed. Extensive research addresses the issue with automated screening solutions that decide who should be referred to the ophthalmologist for further examination [4–15]. Initially, the methods focused on handcrafted approaches that rely on lesions' visual characteristics; they then moved toward unified approaches (some of them bypassing explicit lesion information) and have since evolved into advanced data-driven methods that dramatically improved performance and are the current state of the art for automated DR screening [10].

However, as the capacity of decision methods progresses and they become practically autonomous, they also become harder to understand and interpret. These concerns are notably prevalent in medical imaging: it is important not only to provide robust and accurate performance, but also intuitive explanations about how and why a particular decision was taken.


These concerns echo broader moves toward ensuring machine-learning accountability, expressed as strategic notes by the European Political Strategy Centre [16] and as standards by the scientific community [17].

In this work, we showcase a transparent referable diabetic retinopathy (RDR) detector that explores data-driven concepts such as convolutional neural networks (CNNs) [18], shows the pixel-level evidence that led to the decision, and uses those hotspots to extract local descriptors and encode them into a rich, discriminative representation. The method combines local and global information to triage patients who require referral to the ophthalmologist from those who can wait until the next screening, and presents to physicians and patients the reasons behind the decision.

We organized this chapter into five more sections. In Section 2, we overview the prior art on the topic. In Section 3, we describe the proposed accountable data-driven methodology for DR referral detection. In Section 4, we present the adopted experimental protocol, while in Section 5 we present the experimental results and discuss the decisions. Finally, in Section 6, we conclude the chapter and discuss possible future work.

2 State of the art

Several works explore handcrafted and data-driven methods to address the automated screening of DR. Bags of visual words (BoVW) are among the most explored handcrafted mid-level representations for retinal image analysis in several scopes: assessing quality [19, 20], detecting lesions [21–26], and assessing referability [5, 13–15]. With respect to referral assessment, classical approaches rely on lesion detection [13, 14, 20, 22, 24, 25], and most require specific techniques for each lesion of interest. In a different fashion, Pires et al. [13, 14] applied a unified BoVW-based methodology (one that works for any pathology) to detect lesions and gathered the individual responses to assess referability. Despite being explainable, lesion-based approaches that depend on the traditional two-tier route (lesion detection followed by referral decision-making) are prone to losing critical information. To alleviate that disadvantage, Pires et al. [15] proposed a direct referral assessment that bypasses lesion classifiers and, instead, relies directly on the retinal images, with all their cogent information, for referral analysis. This method outperformed several existing lesion-based methodologies. Although direct handcrafted approaches reduce the loss of critical information, they do not settle that limitation entirely: like any handcrafted technique, they may miss details that data-driven approaches can exploit to provide effective decisions.

Data-driven approaches with an end-to-end learning procedure are the current state of the art for RDR detection [8, 10, 12, 27]. By ensembling 10 convolutional networks pretrained with the ImageNet dataset [10], Gulshan et al. came up with multiple binary decisions regarding (1) moderate or worse DR; (2) severe or worse DR; (3) referable diabetic macular edema (ME); or (4) fully gradable, and reached an AUC of 99% for referral on the Messidor-2 dataset (an image is regarded as referable if it fulfills criterion (1), criterion (3), or both).

Pires et al. [27] proposed an effective and efficient data-driven model for the binary referral/nonreferral task. The designed CNN was inspired by VGG-16 and by the o_O network (from the team that ranked second in a recent Kaggle competition promoted by the California Healthcare Foundation, aimed at robust solutions for DR detection). The authors optimized the model by exploring a multiresolution training strategy, which allows them to cope with high-resolution input images, and a robust feature-extraction augmentation process. Under a rigorous scientific point of view, the authors show how much the employed procedural investigation improves performance, reaching an AUC of 98.2% in a patient-basis analysis under a cross-dataset protocol using the Messidor-2 dataset for testing.

In automated diagnostic tasks, classification accuracy is remarkably relevant, but understanding the reasons behind a computer-aided decision has lately become ever more required and appreciated. Notwithstanding, most data-driven approaches are not fully explainable. Here we present some recent works that propose accountable methods for tasks related to DR and/or perform final decisions based on local and global information. In this work, we use the terms heatmaps and saliency maps interchangeably to express pixel-importance analysis toward a decision.

To capture visual properties from retinal images, Nandy et al. [28] collect blobs from a limited set of annotated lesions and learn class-aware GMMs (control and disease), refining and combining them to provide universal GMMs. Those universal GMMs are used as prior distributions to adapt the blob distributions of individual images. Similarities between the components of the universal and adapted (individual) GMMs compose the feature vector of a particular image, ultimately used to train a referral model. The proposed GMM-based method achieved an AUC of 92.1% on the Messidor dataset under a fourfold cross-validation protocol.

Yang et al. [29] combine local and global mechanisms for lesion location and severity grading, respectively. The local stage has the purpose of not only detecting lesions but also reweighting the images based on a naïve strategy whereby predictions and probabilities are weighted: for example, control patches are totally removed (label 0), while exudates (label 3) are weighted more heavily than microaneurysms and hemorrhages (labels 1 and 2, respectively). After reweighting, the images are used as input to the global network, which grades the severity and categorizes an image as referable if the stage is beyond mild nonproliferative diabetic retinopathy (NPDR). The two-stage method provides an AUC of 95.9% for RDR on the EyePACS test dataset.

Wang et al. [30] proposed an interpretable approach for DR screening that predicts the stage of DR based on the whole image and suspicious patches. Their Zoom-in-Net mimics the usual clinical procedure for examining retinal images and comprises three subnetworks.

The first subnetwork classifies retinal images, producing probabilistic responses for each class, while the second receives feature maps from an intermediate point of the first network (before the fully connected stage) and produces scores and contextual heatmaps for each disease level. The third subnetwork processes and combines the high-dimensional patches extracted by virtue of the heatmaps: it combines features from all patches by global max pooling, concatenates them with image-level features from the first network, and classifies the image, generating class scores. Zoom-in-Net combines (by sum) the five-dimensional scores from each subnetwork and later trains an SVM as a DR-stage detector, whose decisions are subsequently converted for referral. Using the Messidor dataset under a 10-fold cross-validation protocol, the authors reached an AUC of 95.7%.

An end-to-end BoVW-like methodology that bypasses the previous stage of codebook learning was recently proposed by Costa et al. [31]. The method consists of jointly training two neural networks under the same objective function and a joint optimization process. The first network learns weights to encode local features, previously detected and described with the SURF algorithm [32]. The approach aggregates the encoded features by max pooling and uses the learned representation as input to the second network, which discriminates the image in terms of the presence/absence of lesions or signs of referable/nonreferable conditions. The authors enhance the model's interpretability by modifying the loss function according to the class, forcing sparse representations for control images and dense representations for disease cases.

To enhance the classification of DR severity, Roy et al. [33] proposed a hybrid approach that combines data-driven (global) information with BoVW-based (local) approaches by incorporating local representations encoded in terms of particular lesions. The image-based global features come from a well-performing deep neural network (DNN) for severity estimation, while the patch-based local features consist of nonoverlapping 224 × 224 patches described using VGG Net [34] (pretrained with the ImageNet dataset) and encoded into discriminative and generative pathology histograms. The discriminative histogram proposed by Roy et al. is based on encoding local patches with a dictionary created through random forest trees [33]. After encoding into a higher-dimensional, sparse space, each local feature is assigned to the lesion class for which it was classified by a pretrained multilesion SVM classifier, and the sparse activations are aggregated by sum pooling. The generative histogram is based on BoVW representations by Fisher vectors (FVs) [35]. Before calculating the gradients of local patches with respect to the Gaussians' means and standard deviations, the authors reduce the local feature dimensionality by applying principal component analysis. Roy et al. concatenate the global features (from the second fully connected layer), the discriminative histograms, and the generative histograms, and train a random forest classifier to evaluate DR severity. By combining local and global information, the authors boost the performance from 0.81 (global information only) to 0.86 in quadratic kappa score.

Quellec et al. [12] proposed an accountable solution for automated referral assessment and automated detection of DR-related lesions. The authors trained the o_O and AlexNet architectures for RDR and evaluated, without retraining, how well the CNNs could detect lesions.


The optimal checkpoints for referral and for each individual lesion (the one that provides the best performance during the learning process on a dataset with manually delineated lesions) were ensembled from a patient-basis viewpoint. The ensemble corresponds to a random forest trained with the responses of six different networks (checkpoints exported at different steps) for the left and right eyes. By combining those responses, the method reached an AUC of 95.4% on the Kaggle/EyePACS dataset.

3 Methodology

In this section, we present a robust, self-explainable detector of DR referability that encompasses global data-driven and local saliency-oriented representations. The method adopts a deep CNN trained in an end-to-end fashion for DR screening (referral) and uses locally significant regions to capture evidence that can be emphasized to enhance the model. Note that the second stage requires a reasonably well-optimized model trained in the first stage. In what follows, we briefly describe concepts related to accountability, a topic widely discussed in the artificial-intelligence community nowadays. We then describe the purely end-to-end data-driven approach and the sequential saliency-oriented local methodology that reinforces both the performance and the understanding of the solution. Fig. 1 depicts an overview of the proposed solution.

3.1 Accountable machine learning

The adoption of machine-learning techniques to support automated DR screening raises an essential question: how can the results of these so-called "black boxes" be explained? In this chapter, we use a post hoc interpretation [36], which explains predictions to ophthalmologists while hiding the computational details of the technique.

FIG. 1 Overview of the proposed method. Training: two neural networks (NN) are trained, the first based on features from a trained referral DNN and the second based on lesion patches. Testing: the testing phase combines the results of the two trained neural networks with the probabilities of the already trained referral DNN.


Considering our data-driven approach through a deep CNN, this explanation can be provided using saliency maps, which highlight the local input regions that influence the result, based on the output gradient. However, such maps are often noisy and difficult for human eyes to interpret. Smilkov et al. [37] discuss this problem, comparing several strategies for creating saliency maps. In this chapter, we use the Guided Backpropagation [38] strategy, which discards negative values in the network's backward flow: it underlines positive neuron contributions to the gradients and suppresses negative ones at the ReLU functions. The extracted saliency maps provide sharper visualizations of the activated regions of the screening images, which is pivotal for accountability. In addition, our proposed approach takes advantage of these maps by extracting saliency-oriented local features, described later in this section.
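For concreteness, the following is a minimal sketch of guided backpropagation, assuming a PyTorch model (the chapter does not commit to a framework; the function name and hook-based design are ours): during the backward pass, the gradient flowing through every ReLU is clipped so that only positive contributions survive, while the masking by the forward activation is already handled by the ReLU's standard backward rule.

    import torch
    import torch.nn as nn

    def guided_backprop_saliency(model, image, target_class):
        """Saliency map via guided backpropagation: clip the gradient at
        every ReLU so only positive contributions flow backward."""
        hooks = []

        def clip_negative_gradients(module, grad_in, grad_out):
            # Keep only positive gradients (the "guided" part of the rule).
            return (torch.clamp(grad_in[0], min=0.0),)

        for m in model.modules():
            if isinstance(m, nn.ReLU):
                hooks.append(m.register_full_backward_hook(clip_negative_gradients))

        model.eval()
        image = image.clone().requires_grad_(True)   # (1, 3, H, W) tensor
        model.zero_grad()
        model(image)[0, target_class].backward()

        for h in hooks:
            h.remove()
        return image.grad.detach()                   # same shape as the input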

3.2 Global data-driven approach

Our data-driven approach essentially comprises a deep CNN model for automated DR screening. The network design we employ herein is the Inception-ResNet-v2 [39], previously trained on the ImageNet dataset. We keep the cross-entropy loss function, with two neurons in the last layer corresponding to the positive and negative categories of referral. In this case, we are transferring knowledge learned on a source task to improve the learning process in a target task. Transfer learning is normally sought when the (target) training set is not large, there exists a reasonable solution for a related (source) problem, and both problems are similar. Since the source and target domains here are very different, we fine-tune the model with a considerably high learning rate (we are not so concerned about distorting pretrained parameters and practically need to train from scratch). Before actually training the entire network, we trained only the last layer, keeping most of the parameters frozen and avoiding the immediate destruction of learned patterns. Afterwards, we extended the backpropagation to the whole network and optimized the model with the RMSProp algorithm [40]. We employed an initial learning rate of 0.01 and decayed it with stochastic gradient descent with warm restarts [41], an aggressive learning-rate reduction combined with periodic restarts. The reduction follows a cosine function, and the ith restart takes place after 3.0 × 1.5^i epochs, resetting the learning rate to 0.01 × 0.8^i. Increasing the learning rate at each restart allows us to escape possible local minima and continue exploring the loss surface. To avoid overfitting, we use a weight decay of 0.0004 and keep the dropout with a fixed keep probability of 0.8 (i.e., preserving 80% of the units in the training stage). Regarding data augmentation, we employ both geometric and photometric transformations. Geometric perturbations comprise zoom, flipping, rotations, translations, and stretching, performed by randomly choosing a variable within a predefined interval (for instance, we apply rotations between 0 and 360 degrees, zoom between 1/1.15 and 1.15, and translations between −40 and 40).


For photometric transformations, we apply a color augmentation that adds to the training images multiples of the principal components previously found on the RGB pixel values throughout the training set: in each image channel, the magnitudes are proportional to the corresponding eigenvalues times a random variable drawn from a Gaussian distribution with mean 0 and standard deviation 0.5.
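The following is a minimal sketch of these two ingredients under our reading of the text (function names and defaults are illustrative, not the chapter's code): the cosine-annealed learning rate with warm restarts, where cycle i lasts 3.0 × 1.5^i epochs and restarts at 0.01 × 0.8^i, and the PCA-based color augmentation.

    import math
    import numpy as np

    def warm_restart_lr(epoch, lr0=0.01, lr_decay=0.8, t0=3.0, t_mult=1.5):
        """Cosine-annealed learning rate with warm restarts: the ith cycle
        lasts t0 * t_mult**i epochs and restarts at lr0 * lr_decay**i."""
        i, start = 0, 0.0
        while epoch >= start + t0 * t_mult ** i:
            start += t0 * t_mult ** i
            i += 1
        progress = (epoch - start) / (t0 * t_mult ** i)   # position in cycle, [0, 1)
        return 0.5 * (lr0 * lr_decay ** i) * (1.0 + math.cos(math.pi * progress))

    def pca_color_augmentation(image, eigenvalues, eigenvectors, sigma=0.5):
        """Add to every pixel a random multiple of the RGB principal
        components; eigenvalues/eigenvectors are precomputed over the
        training set's RGB values (AlexNet-style color augmentation)."""
        alphas = np.random.normal(0.0, sigma, size=3)
        shift = eigenvectors @ (alphas * eigenvalues)     # one offset per channel
        return np.clip(image.astype(np.float32) + shift, 0.0, 255.0)

For example, warm_restart_lr(0) returns 0.01, and the rate is reset to 0.008 at epoch 3, when the first cycle of 3.0 epochs ends.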

3.3 Local saliency-oriented data-driven approach

The main novelty in this work is our saliency-oriented local representation methodology, which relies on heatmaps to gather significant regions (from the previous data-driven global decision) to enhance the pipeline and provide a more robust screening method. After optimizing the CNN, we can use the model to diagnose new images as well as to extract saliency maps that highlight the pixel importance for the decision made. In this work, we use the guided backpropagation method to acquire pixel importance. Fig. 2 depicts a retinal image, its saliency map (expressing pixel importance for the referral decision), and their superposition. In this section, we describe the patch-extraction protocol from an image-preprocessing viewpoint, briefly review the encoding technique (FV), and detail the methodology for local feature representation.

FIG. 2 Retinal image (left); its saliency map extracted with guided backpropagation (middle); and the superposition of the map on the image, highlighting important regions and providing an explanation of the reasons behind the decision made by the model (right).

3.3.1 Patches extraction

After we pass the original image through the deep network and propagate back the pixel importance for the decision taken, generating a saliency map (see Fig. 2), we process the map and capture coordinates that are sequentially used to extract regions that could be relevant to enhance the decision. The saliency map has the same dimensions as the image. As we intend to capture importance and preserve locality, we initially convert the 3D tensor into a gray-scale 2D tensor by summing up the activations per channel. In general, heatmaps are subject to visual noise. To reduce undesirable effects in region selection, we apply a threshold for binarization purposes. Other recent alternatives, such as adding noise to reduce noise, could be explored [37], but a single threshold was a reasonable choice toward efficiency. We filter the maps with a threshold of 150, a good trade-off between removing noise and preserving small activations.


In sequence, we invert and erode the binary structure (a basic mathematical-morphology operation) in order to extend chunks and connect close components. After processing the saliency map, we identify contours in the 2D structure and capture their respective coordinates. To preserve the aspect ratio and produce visible regions toward the boundaries (e.g., keeping the borders of lesions), we square the regions and enlarge them by a factor inversely proportional to the original patch size: the operation doubles the height and width of small regions (microaneurysm candidates) and extends by 10% the dimensions of large regions (in general, blood vessels or possibly connected large lesions). In short, the smallest regions (in general, microaneurysms) are enlarged more than the largest ones. In Fig. 3, we show a fundus image superposed with its saliency map and the respective significant regions extracted based on pixel importance for the data-driven referral decision.

FIG. 3 Saliency-oriented squared patches from which we extract local descriptors. We enlarge the patches in a controlled design that takes the region sizes into account: for example, small regions (in general, microaneurysms) are enlarged more than large regions (e.g., soft and hard exudates).
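A minimal sketch of this protocol with OpenCV follows (the threshold of 150 comes from the text; the normalization step, the erosion kernel, and the 32-pixel cutoff between "small" and "large" regions are illustrative assumptions):

    import cv2
    import numpy as np

    def extract_salient_patches(image, saliency, threshold=150, small_side=32):
        """Binarize the saliency map, connect nearby chunks, find contours,
        and cut out squared, size-dependent enlarged patches."""
        gray = saliency.sum(axis=2)                       # 3D tensor -> 2D map
        gray = np.uint8(255 * gray / (gray.max() + 1e-8))
        _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
        # Eroding the inverted mask is equivalent to dilating the
        # foreground: it extends chunks and connects close components.
        mask = cv2.bitwise_not(cv2.erode(cv2.bitwise_not(mask),
                                         np.ones((5, 5), np.uint8)))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        h_img, w_img = mask.shape
        patches = []
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            side = max(w, h)                              # square the region
            # Enlarge inversely to size: 2x for tiny (microaneurysm-like)
            # regions, +10% for large (vessel-like) regions.
            side = int(side * (2.0 if side < small_side else 1.1))
            cx, cy = x + w // 2, y + h // 2
            x0, y0 = max(0, cx - side // 2), max(0, cy - side // 2)
            patches.append(image[y0:min(h_img, y0 + side),
                                 x0:min(w_img, x0 + side)])
        return patches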

3.3.2 Fisher vector encoding

Once we have the patches, an FV encoding strategy is used to capture their local descriptors by pooling the patch features [42, 43]. Combining generative and discriminative techniques, we rank the low-level patch descriptions based on their deviation from a GMM (generative model) by calculating each patch's gradient with respect to the model parameters.


When compared with BoVW-based approaches, FV encoding is more robust, since it is not restricted to occurrence counts over the GMM components but also captures higher-order statistics, namely the gradients with respect to the model parameters [35].
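For concreteness, a compact sketch of FV encoding over a diagonal-covariance GMM follows (here with scikit-learn; the lesion-aware GMM itself would be fitted beforehand on descriptors from the annotated DR1/IDRiD patches; the power and L2 normalizations are the commonly used ones and are assumptions here):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fisher_vector(descriptors, gmm):
        """Fisher vector of a set of local descriptors with respect to a
        diagonal-covariance GMM: gradients w.r.t. the Gaussians' means
        and standard deviations, then power and L2 normalizations."""
        X = np.atleast_2d(descriptors)            # (N, D)
        N = X.shape[0]
        q = gmm.predict_proba(X)                  # soft assignments, (N, K)
        mu, w = gmm.means_, gmm.weights_          # (K, D), (K,)
        sd = np.sqrt(gmm.covariances_)            # (K, D) for 'diag' covariances
        parts = []
        for k in range(len(w)):
            z = (X - mu[k]) / sd[k]               # standardized residuals
            parts.append((q[:, k:k+1] * z).sum(0) / (N * np.sqrt(w[k])))
            parts.append((q[:, k:k+1] * (z**2 - 1)).sum(0) / (N * np.sqrt(2 * w[k])))
        fv = np.concatenate(parts)
        fv = np.sign(fv) * np.sqrt(np.abs(fv))    # power normalization
        return fv / (np.linalg.norm(fv) + 1e-12)  # L2 normalization

    # The lesion-aware GMM would be fitted beforehand, e.g.:
    # gmm = GaussianMixture(n_components=64, covariance_type="diag").fit(train_descs)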

3.3.3 Integration

After training the deep model for RDR, generating the saliency map for interpretation and local representation, and encoding the multiple patch-based features, we train a shallow neural network to make a new, complementary decision regarding the need for consultation. To prevent the new model from being strictly dependent on the decisions of the baseline CNN model, we extract two different, separate mid-level representations for the test sets, one for each class; we extract those maps by guided backpropagation, each guided by one specific class/neuron. Given that the ground truth is known for the training set, we can fully use it to extract a unique saliency map and the respective mid-level representation. As shown in Fig. 1, at inference time, the final local saliency-oriented decision is taken by averaging the two per-class responses.
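At test time, the averaging rule amounts to something like the sketch below (hypothetical names; model_local stands for the shallow network trained on the FV representations, with a scikit-learn-style predict_proba):

    def local_saliency_decision(model_local, fv_positive, fv_negative):
        """Test-time rule: build one FV per class-guided saliency map and
        average the shallow network's referral probabilities."""
        p_pos = model_local.predict_proba(fv_positive[None, :])[0, 1]
        p_neg = model_local.predict_proba(fv_negative[None, :])[0, 1]
        return 0.5 * (p_pos + p_neg)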

3.4 Per patient analysis

The more data available, the more confident and effective the learned model. One requirement for a robust data-driven model is the availability of a large amount of data, except when transferring parameters previously learned on a different but similar task. Since our purpose is to examine whether or not a patient needs to see a doctor within 1 year, we can substantially leverage the accuracy of the model when photographs of the two retinas are available. As such, we combine image information to provide final patient responses at both the feature level and the score level. When the method involves feature extraction, we concatenate the features of both eyes and include a binary indicator variable that refers to left or right. At the score level, we assign to the patient the response of the retina that presents the highest probability of needing referral.
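The two fusion rules can be sketched as follows (the exact position of the left/right indicator within the feature vector is our assumption):

    import numpy as np

    def patient_features(features_left, features_right):
        """Feature-level fusion: concatenate both eyes' features, each
        followed by a binary left/right indicator."""
        return np.concatenate([features_left, [0.0], features_right, [1.0]])

    def patient_referral_probability(prob_left, prob_right):
        """Score-level fusion: the patient's referral probability is the
        maximum over the two eyes."""
        return max(prob_left, prob_right)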

3.5 Contextualizing with state of the art

Our accountable solution encompasses saliency-oriented data-driven local patches, encoded with FVs and combined with a global data-driven representation. In this section, we compare the proposed method with recent works related to retinal image analysis (not necessarily regarding referral decisions). Table 1 summarizes the comparison regarding accountability, the combination of local and global information, contextual patch extraction, and mid-level representation.


Table 1  Contrasting recent similar works with ours.

Work                  AML   GLI   CPE   MLR
Yang et al. [29]      ✓     ✓
Wang et al. [30]      ✓     ✓     ✓
Costa et al. [31]     ✓                 ✓
Roy et al. [33]       ✓a    ✓           ✓
Quellec et al. [12]   ✓           ✓
Ours                  ✓     ✓     ✓     ✓

AML, accountable machine learning; CPE, contextual patch extractor; GLI, global and local information; MLR, mid-level image representation.
a Partially accountable.

Yang et al. [29] apply a two-stage data-driven method to detect lesions and identify the severity of DR. Although it uses a naïve pixel-reweighting approach, the automated diagnosis of DR severity is regarded as accountable because the input images are weighted based on lesion type and location, aspects that altogether determine the DR stage. The authors also use local and global information; even though these are used directly for different purposes (local features for lesion detection and global features for severity analysis), the local information has a meaningful influence on the severity diagnosis. Note, however, that the patch extraction is performed in a sliding-window fashion rather than contextually, and the regions are used directly to make individual decisions about lesions instead of being encoded into a rich representation for severity/referral decisions.

The Zoom-in-Net methodology [30] involves three subnetworks with particular functions. The approach is accountable in the sense that it generates heatmaps that represent pixel importance. Similar to us, the authors also use those heatmaps for region extraction and combine the extracted regions in the third subnetwork; that local information is also combined with global information. Note, however, that the method only combines the features of all patches with a global pooling, instead of encoding the information into a richer, contextual-aware representation. The final result is achieved by combining three complete models, aggregating the three best-performing results.

Costa et al. [31] proposed a handcrafted BoVW-like methodology that simultaneously learns the encoding and classification steps. The approach is accountable as it can pinpoint the regions of the image that triggered the diagnostic decision, even when trained with weakly labeled data (image-level annotations regarding the presence/absence of lesions). Note, however, that the regions are entirely detected by the SURF algorithm: the method does not extract patches in a contextual manner, but rather describes noncontextual interest points and encodes them into a learned mid-level representation. The method also does not take a global representation into account.

The hybrid method proposed by Roy et al. [33] combines discriminative and generative local lesion-based representations with a global data-driven representation. Although individual patches are classified according to the presence of lesions (at a certain step of the discriminative-representation pipeline), the classification is not projected back onto the image to show why the decision regarding the DR stage was made (e.g., one specific region was classified as neovascularization, which justifies the severity decision). Therefore, Roy et al.'s approach is only partially accountable. The adopted patch extraction does not depend on pixel importance or the presence of lesions, for instance, but is used in a posterior lesion detection and encoding (mid-level representation).

Quellec et al. [12] proposed a solution that jointly detects RDR and what influences the decision at the pixel level (in terms of lesions or other biomarkers).


This accountable solution also produces heatmaps, with an additional strategy that reduces drafting artifacts during training. Although it also uses heatmaps to detect lesions at the lesion level (as well as at the image level), the approach does not necessarily combine global and local information. Indeed, lesion detection had an important impact on the final RDR performance, but it was achieved by ensembling different models specialized in detecting lesions, without retraining for it; the "different" models consist of parameters from the same architecture captured at distinct training steps. Additionally, the method uses heatmaps to extract regions with potential lesion candidates and uses them to detect/identify the lesion itself, but it neither encodes those regions nor combines them with global information to enhance the detection of RDR.

4 Experimental protocol

In this section, we describe the cross-dataset protocol, the robust validation protocol that we adopt in this work, as well as the datasets that we employ for training and testing.

4.1 Validation protocol

We adopt the strict cross-dataset validation protocol, which is close to real-world operational conditions, as training and testing are performed on datasets collected under different conditions: different hospitals, different cameras, and at least 1 year apart. In practice, in a clinical screening operation, new images under analysis will rarely have the same specifications as the images used to train the classification model. By using this protocol, and showing results when training the classifier with Kaggle/EyePACS data and testing on different datasets (e.g., Messidor-2), we arrive at a solution that highlights the model's strong capacity to generalize.

4.2 Datasets

A recent competition promoted at Kaggle (https://www.kaggle.com/c/diabetic-retinopathy-detection) by the California Healthcare Foundation was an important landmark for research in automated DR screening. The Kaggle/EyePACS dataset comprises 88,702 images, collected and annotated by EyePACS, a platform for retinal screening. In this work, we combine the sets originally used for training and testing in the competition into our training set. The images, whose sizes range from 320 × 211 to 5184 × 3456 pixels, were taken under a variety of imaging conditions, and the dataset comprises images of left and right eyes. We convert the labels, originally disease stages, into referral necessity following the International Clinical Diabetic Retinopathy (ICDR) recommendations [44], tagging as nonreferable only those patients with no DR signal or mild NPDR. The conversion of labels is not algorithmic; it is done manually before training the classifiers.



For testing, we use the DR2 and Messidor-2 datasets. DR2 (publicly available at http://dx.doi.org/10.6084/m9.figshare.953671) was collected at the Department of Ophthalmology, Federal University of São Paulo (UNIFESP), Brazil, and comprises 520 images captured using a TRC-NW8 (Topcon Inc., Tokyo, Japan) nonmydriatic retinal camera with a Nikon D90 camera. DR2 images were manually categorized by two independent specialists whose mean intergrader κ is 0.77. A total of 435 images have annotations for RDR: 98 images were graded by at least one expert as requiring referral (56 graded as positive by both experts), while 337 images were annotated by both experts as not requiring referral within 1 year.

The Messidor-2 dataset, an extension of the Messidor dataset, is a collection of DR examinations, each consisting of two macula-centered eye fundus images (one per eye) [45, 46]. These images were captured with a Topcon TRC NW6 nonmydriatic fundus camera with a 45-degree field of view (FOV). Messidor-2 contains 874 examinations (1748 retinal images). Images from Messidor-2 were independently graded by three board-certified retinal specialists according to the ICDR severity scale and a modified definition of ME [7, 8]. The mean κ value among the three experts is 0.822. The reference standard for RDR is available for researchers (see http://www.medicine.uiowa.edu/eye/abramoff/).

We also use two additional datasets with local annotations of DR-related lesions to create codebooks for the mid-level representation: DR1 and IDRiD. The DR1 dataset, also provided by the Department of Ophthalmology, UNIFESP, Brazil, encompasses 1077 retinal images, of which 595 are normal and 482 have at least one disease. Each image in the DR1 dataset was manually annotated for DR-related lesions (presence/absence) by three medical specialists, and only images on which the three specialists agreed were kept in the final dataset. The retinal images were captured using a TRC-50X (Topcon Inc.) mydriatic camera with a maximum resolution of one megapixel and a 45-degree FOV. A few images are not only labeled at the image level but also carry the locations of lesions (in terms of image coordinates), which is essential in our context of lesion-aware mid-level representation. DR1 is available at the same link as DR2, mentioned above.

Finally, IDRiD is a dataset captured by a retinal specialist at an eye clinic located in Nanded, Maharashtra, India, and provided by a recent competition on segmentation and grading of DR (see https://idrid.grand-challenge.org). IDRiD comprises 516 images acquired using a Kowa VX-10 alpha digital fundus camera with a 50-degree FOV. Images are graded regarding disease severity level and diabetic ME. Additionally, 81 color-fundus images have pixel-level annotations of lesions such as microaneurysms, cotton-wool spots, hard exudates, and hemorrhages. Some images contain multiple lesions.



5 Results

In this section, we present results for the proposed global data-driven approach and the local saliency-oriented data-driven approach to RDR detection, as well as for the combination of those image representations. We divide the section into three parts: in Section 5.1, we describe the experiments and present the global results, both for the CNN alone and with feature extraction for training a new classifier; in Section 5.2, we detail the contextual patch extraction and mid-level encoding and show the promising results obtained with local features; finally, in Section 5.3, we combine global and local information to enhance DR screening. In all three parts, we validate the solution on different datasets and evaluate its effectiveness.

As exposed in Section 4, both the Kaggle/EyePACS and Messidor-2 datasets contain images of the left and right eyes of each patient. We evaluate the DR2 dataset only under the per-image setup, as it does not comprise two images per patient. We analyze the Messidor-2 dataset by contrasting per-image decisions (each image receives a particular response) with per-patient decisions (combining the individual responses of both eyes). In this case, the patient's need for referral is based on the highest classification probability between the two eyes (see Section 3.4). For approaches that require feature extraction (including the local saliency-oriented data-driven method), the per-patient scenario also includes concatenating the features of both eyes.

5.1 Global data-driven approach

First, we investigate the efficacy of the purely data-driven method for referral assessment. We use the Inception-ResNet CNN [39], whose parameters were previously optimized on ImageNet (the parameters are available for research purposes at https://github.com/tensorflow/models/tree/master/research/slim). We then adapt those parameters for the two-class screening of DR. The weight adaptation starts by training, for a few iterations, only the parameters that are indeed optimized from scratch (the last layer), to avoid losing previously captured patterns. Thereafter, we propagate the training effort to the entire network. We achieved the optimal performance on the validation set (10% of the Kaggle/EyePACS dataset), in terms of AUC, after 26 epochs; from then on, we kept training the network but the result did not improve. This performance was reached after four learning-rate restarts (see Section 3.2 for more details regarding learning rates).

After training the model for RDR screening, we investigate its performance on distinct datasets. Training was performed with the Kaggle/EyePACS dataset, while testing was carried out with Messidor-2 and DR2, which have different acquisition conditions. Basically, we evaluate the global data-driven method in two different setups: simply passing the test set through the network once to obtain responses (softmax probabilities); and passing both training and test sets through the network to extract features, then training and validating a particular classifier.

FIG. 4 ROC results for referral assessment using the proposed global data-driven approach in a cross-dataset protocol: training with Kaggle/EyePACS and testing with DR2.

We emphasize that one of the most valuable advantages of extracting features is the flexibility to choose among different machine-learning algorithms later on.

Fig. 4 depicts the ROC curves for training with the Kaggle/EyePACS dataset and testing with DR2. The results were obtained in the per-image scenario with the two protocols adopted for global information (softmax probabilities and feature extraction). The CNN alone provided an AUC of 93.73% (95% CI: 89.9–96.9). By feeding a shallow neural network with global features extracted from the CNN, we reach an AUC of 95.8% (95% CI: 93.5–97.7), reducing the classification error by 33% relative to the softmax version. The results show that the models learned with the Kaggle/EyePACS images (which have higher variance) produce relevant results on a very different dataset (DR2).

Fig. 5 depicts the cross-dataset ROC curves and respective AUCs for testing with the Messidor-2 dataset. Using softmax probabilities (the chance of needing consultation), both the image and patient analyses provide an AUC of 98.3% (95% CI: 97.8–98.8 for the image analysis; 97.6–99.0 for the patient analysis).

h AUC of 98.3% (95% CI: 97.8–98.8) for image analysis and 98.3% (95% CI: 97.6–99.0) for patient analysis.


FIG. 5 ROC results for referral assessment using global data-driven approach in a cross-dataset protocol: training with Kaggle/EyePACS and testing with Messidor-2.

(95% CI: 96.8–98.3) for the per-image decision, while diagnosing the patient yields an AUC of 98.5% (95% CI: 97.8–99.0). We emphasize that all results for approaches that require feature extraction, whether on a per-image or per-patient basis, involve combining features from the left and right eyes. The difference lies in measuring the performance with the per-image decisions themselves or attributing the maximum score to the patient.

5.2 Local saliency-oriented data-driven approach
Now we turn our attention to the use of heatmaps that express pixel importance for extracting regions of interest, and to the encoding of their representation from the CNN, to provide an additional and complementary model for RDR detection. We use FVs for the mid-level representation, with lesion-aware GMMs constructed with two datasets that have regions annotated by experts (DR1 and IDRiD). We evaluate the performance through analyses per image and, when possible, per patient. Fig. 6 shows the results achieved with the mid-level representation for testing with DR2 and Messidor-2. For the former, only images are analyzed individually, while for the latter we consider both scenarios. FV provides an AUC of 97.3% (95% CI: 95.6–98.6) with DR2.
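To make the FV encoding step concrete, the following is a minimal, simplified sketch that keeps only the mean-gradient terms of the Fisher vector (the full formulation in [35] also includes weight and variance gradients). The descriptor dimensions, component count, and synthetic data are illustrative assumptions, not the chapter's settings.

import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical stand-in for CNN descriptors of saliency-oriented patches:
# rows are local descriptors, columns are feature dimensions.
rng = np.random.default_rng(0)
train_descriptors = rng.normal(size=(500, 64))  # would come from annotated lesions
gmm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(train_descriptors)

def fisher_vector(descriptors, gmm):
    # Simplified Fisher vector (mean-gradient terms only) for one image's
    # set of local descriptors.
    q = gmm.predict_proba(descriptors)       # (N, K) soft assignments
    sigma = np.sqrt(gmm.covariances_)        # (K, D) diagonal std devs
    n = descriptors.shape[0]
    parts = []
    for k in range(gmm.n_components):
        diff = (descriptors - gmm.means_[k]) / sigma[k]
        parts.append((q[:, [k]] * diff).sum(axis=0) /
                     (n * np.sqrt(gmm.weights_[k])))
    fv = np.concatenate(parts)
    fv = np.sign(fv) * np.sqrt(np.abs(fv))   # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12) # L2 normalization

image_fv = fisher_vector(rng.normal(size=(40, 64)), gmm)  # one image's encoding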


FIG. 6 ROC results for referral assessment using the proposed local saliency-oriented data-driven approach in a cross-dataset protocol: training with Kaggle/EyePACS and testing with DR2 and Messidor-2.

Regarding Messidor-2, features extracted through saliency-map activations give an AUC of 97.0% (95% CI: 96.0–97.9) per image, which is considerably improved to 98.7% (95% CI: 98.1–99.3) when the left- and right-eye responses are combined. On both DR2 and Messidor-2, the local-based results are significantly superior to the global-based ones, showing that the novel approach has potential and that the operation of emphasizing areas the CNN model could not assimilate sufficiently is promising.

5.3 Global and local information
Our next analysis involves the combination of the responses achieved with the local and global techniques. In this work, we perform late fusion by averaging the three softmax probabilities: (1) from the CNN, (2) from the shallow neural network trained/tested with data-driven features, and (3) from the shallow neural network trained/tested with the mid-level representations. We present results for DR2 in the per-image analysis only, and consider both image and patient scenarios for the Messidor-2 dataset. Fig. 7 depicts the ROC curves and respective AUCs for the late fusion. We observe that the combination of global and local information provides an AUC of 96.3% for the DR2 dataset (95% CI: 93.7–98.3). For Messidor-2, in turn, we achieved an AUC of 98.3% (95% CI: 97.8–98.8) in the image-based setup.


FIG. 7 ROC results for referral assessment using both the proposed global data-driven approach and the local saliency-oriented approach in a cross-dataset protocol: training with Kaggle/EyePACS and testing with DR2 and Messidor-2.

Combining eye responses and diagnosing patients improved the performance to 98.7% (95% CI: 98.1–99.2). We note the fusion does not outperform the local-based information (AUC = 97.3%) for the DR2 dataset, since the CNN alone achieved an AUC of only 93.7%. Possibly, the result did not improve because left and right eyes play an important role in any approach. The improvement is more evident for Messidor-2. In terms of image analysis, the AUC with fusion is the same as that of the CNN alone (98.3%); however, the fusion improves the classification accuracy from 85.2% to 89.9%. When we diagnose patients, the improvement is even more noticeable: the late fusion of global and local information yields an AUC equivalent to that achieved by the FV encoding (AUC = 98.7%), but it not only increases the accuracy (from 88.3% to 89.5%) but also reduces both the false-positive and false-negative rates (1 false negative for 91 false positives).
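A minimal sketch of the late-fusion rule used above: the three softmax probabilities are simply averaged. The variable names and example scores are hypothetical.

import numpy as np

def late_fusion(p_cnn, p_global_nn, p_local_fv):
    # Average the three softmax referral probabilities described above.
    return np.mean([p_cnn, p_global_nn, p_local_fv], axis=0)

fused = late_fusion(0.91, 0.84, 0.88)  # hypothetical per-image scores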

5.4 Comparison with state of the art
For the sake of completeness, we now compare our results with previous works that used the DR2 and/or Messidor-2 datasets for testing. The DR2 dataset is widely used


for referral assessment [13–15]; however, it has mostly been applied under a 5 × 2-fold cross-validation protocol. Here we compare the performance with a recent work that also employs the cross-dataset validation policy [27]. In this work, we reached an AUC of 97.3% using the mid-level representation, and 96.3% using the average of all individual responses. In turn, Pires et al. [27] also achieved 96.3%. Contrasting the local-based approach with that previous result, we reduce the classification error by 27%. We reinforce that although we use a larger training set here (the entire Kaggle/EyePACS dataset), the result in Ref. [27] was obtained with a robust feature-extraction augmentation and a CNN trained under a multiresolution training procedure. While we train the CNN with images of 299 × 299 pixels, the previous work performed the optimization and validation with images of 448 × 448 pixels. We also compare the performance with previous works that validated models on the Messidor-2 dataset [7, 8, 27]. Note, however, that this comparison is not totally direct, as the methods use different training sets. Considering only the per-patient scenario, Abràmoff et al. [7] reached an AUC of 93.7% and later improved this result significantly to 98.0% [8] (95% CI: 96.8–99.2). More recently, Pires et al. [27] reported an AUC of 98.2% (95% CI: 97.4–98.9) using a CNN with multiresolution training and extracting global data-driven features in an augmented fashion. Herein we provide a remarkable improvement by reaching an AUC of 98.7% (95% CI: 98.1–99.2), showing the robustness of applying local saliency-oriented region characteristics to reinforce what the network has learned globally. We reduce the classification error by 28% over the solution proposed by Pires et al. [27], and by 35% over the solution proposed by Abràmoff et al. [8].

6 Conclusion
In this work, we presented an accountable and robust framework for the automated screening of DR. A series of works in the prior art have focused on accurate data-driven approaches to effective diagnosis, whether using a single deep CNN or ensembling a set of models. However, the interpretability of those models, which has become a requirement for understanding the reasons behind a decision, is frequently disregarded. In this vein, we proposed the use of saliency maps whose objective is twofold: highlighting regions that potentially influence the decision taken, and capturing regions of interest that can be leveraged for the final model response. We trained the baseline data-driven CNN with the Inception-Resnet-v2 architecture and parameters previously learned in a different challenge. Initially, we trained just the last layer for a few iterations, and thereafter propagated the learning to the entire model. Purely data-driven CNNs naturally perform global evaluations, receiving the entire image and assigning decision probabilities. Herein, we use global information to extract two data-driven responses: the softmax probability itself as well as scores coming from a neural network that receives pre-softmax features. For the DR2 dataset, the CNN alone provided an AUC of 93.7%, while the neural network trained with deep features reached an AUC of 95.8%.


For Messidor-2, in turn, the CNN alone yielded an AUC of 98.3% and the additional classifier an AUC of 98.5%, both combining left- and right-eye responses. The main novelty of the current work is the breakdown of the global data-driven scenario. The pipeline involves extracting saliency-oriented regions of interest and combining that information through FVs. Exploring the encoded contextual local-based representations, we achieved an AUC of 97.3% on the DR2 dataset, while the performance on Messidor-2 reached 98.7%. By considerably enhancing the performance over both DR2 and Messidor-2 in comparison with the strictly global method, we showed that the guided mid-level representation, which emphasizes areas the CNN model could not assimilate sufficiently, is promising. Since the performance of combining global data-driven and local saliency-oriented characteristics depends on the robustness of the baseline CNN model, in future work we intend to explore higher image resolutions, possibly with strategies such as multiresolution training [27], to better capture very small lesions and subtle image details. We also intend to move in the opposite direction to the natural advance of research in DR diagnostics: instead of detecting lesions and using the assembled information to decide about disease stages or the need for referral, in a bottom-up manner, we intend to identify RDR and use pixel importance for pointing out and recognizing lesions or anatomical retinal structures, as a top-down approach. Finally, we want to explore the saliency-oriented representation for severity analysis and to perform an extensive comparison with the prior art.

References
[1] International Diabetes Federation, IDF Diabetes Atlas, eighth ed., 2017. Available from: http://www.idf.org/diabetesatlas. (Accessed 28 May 2016).
[2] Vision Problems in the U.S., Prevalence of Adult Vision Impairment and Age-Related Eye Disease in America. Available from: http://www.visionproblemsus.org/ (Accessed 28 May 2018).
[3] D.M. Gibson, The geographic distribution of eye care providers in the United States: implications for a national strategy to improve vision health, Prev. Med. 73 (2015) 30–36.
[4] E. Decencière, G. Cazuguel, X. Zhang, G. Thibault, J.-C. Klein, F. Meyer, B. Marcotegui, G. Quellec, M. Lamard, R. Danno, TeleOphta: machine learning and image processing methods for teleophthalmology, IRBM 34 (2013) 196–203.
[5] G. Quellec, M. Lamard, A. Erginay, A. Chabouis, P. Massin, B. Cochener, G. Cazuguel, Automatic detection of referral patients due to retinal pathologies through data mining, Med. Image Anal. 29 (2016) 47–64.
[6] M. Bhaskaranand, J. Cuadros, C. Ramachandra, S. Bhat, M. Nittala, S. Sadda, K. Solanki, EyeArt + EyePACS: automated retinal image analysis for diabetic retinopathy screening in a telemedicine system, in: Ophthalmic Medical Image Analysis Second International Workshop, 2015, pp. 105–112.
[7] M.D. Abràmoff, J.C. Folk, D.P. Han, J.D. Walker, D.F. Williams, S.R. Russell, P. Massin, B. Cochener, P. Gain, L. Tang, Automated analysis of retinal images for detection of referable diabetic retinopathy, JAMA Ophthalmol. 131 (3) (2013) 351–357.
[8] M.D. Abràmoff, Y. Lou, A. Erginay, W. Clarida, R. Amelon, J. Folk, M. Niemeijer, Improved automated detection of diabetic retinopathy on a publicly available dataset through integration of deep learning, Invest. Ophthalmol. Vis. Sci. 57 (13) (2016) 5200–5206.
[9] E. Colas, A. Besse, A. Orgogozo, B. Schmauch, N. Meric, E. Besse, Deep learning approach for diabetic retinopathy screening, Acta Ophthalmol. 94 (2016) 1755–3768.
[10] V. Gulshan, L. Peng, M. Coram, M.C. Stumpe, D. Wu, A. Narayanaswamy, S. Venugopalan, K. Widner, T. Madams, J. Cuadros, Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs, JAMA 316 (22) (2016) 2402–2410.
[11] R. Gargeya, T. Leng, Automated identification of diabetic retinopathy using deep learning, Ophthalmology 124 (7) (2017) 962–969.
[12] G. Quellec, K. Charrière, Y. Boudi, B. Cochener, M. Lamard, Deep image mining for diabetic retinopathy screening, Med. Image Anal. 39 (2017) 178–193.
[13] R. Pires, H. Jelinek, J. Wainer, S. Goldenstein, E. Valle, A. Rocha, Assessing the need for referral in automatic diabetic retinopathy detection, IEEE Trans. Biomed. Eng. 60 (12) (2013) 3391–3398.
[14] R. Pires, H.F. Jelinek, J. Wainer, E. Valle, A. Rocha, Advancing bag-of-visual-words representations for lesion classification in retinal images, PLoS ONE 9 (6) (2014) e96814, https://doi.org/10.1371/journal.pone.0096814.
[15] R. Pires, S. Avila, H. Jelinek, J. Wainer, E. Valle, A. Rocha, Beyond lesion-based diabetic retinopathy: a direct approach for referral, IEEE J. Biomed. Health Inform. 21 (1) (2017) 193–200.
[16] European Political Strategy Centre, The age of artificial intelligence. Towards a European strategy for human-centric machines, EPSC Strategic Notes, vol. 29, March 2018. Available from: https://ec.europa.eu/epsc/publications/strategic-notes/age-artificial-intelligence_en. (Accessed 17 October 2018).
[17] K. Crawford, R. Calo, There is a blind spot in AI research, Nat. News 538 (7625) (2016) 311.
[18] Y. LeCun, Y. Bengio, G. Hinton, Deep learning, Nature 521 (7553) (2015) 436–444.
[19] R. Pires, H.F. Jelinek, J. Wainer, A. Rocha, Retinal image quality analysis for automatic diabetic retinopathy detection, in: IEEE Conference on Graphics, Patterns and Images (SIBGRAPI), 2012, pp. 229–236.
[20] H.F. Jelinek, R. Pires, R. Padilha, S. Goldenstein, J. Wainer, A. Rocha, Quality control and multi-lesion detection in automated retinopathy classification using a visual words dictionary, in: IEEE Intl. Conf. Eng. Med. Biol. Soc., 2013, pp. 5857–5860.
[21] A. Rocha, T. Carvalho, H. Jelinek, S. Goldenstein, J. Wainer, Points of interest and visual dictionaries for automatic retinal lesion detection, IEEE Trans. Biomed. Eng. 59 (8) (2012) 2244–2253.
[22] H. Jelinek, R. Pires, R. Padilha, S. Goldenstein, J. Wainer, A. Rocha, Data fusion for multi-lesion diabetic retinopathy detection, in: Proc. IEEE Comput.-Based Med., 2012, pp. 1–4.
[23] R. Pires, S. Avila, H.F. Jelinek, J. Wainer, E. Valle, A. Rocha, Automatic diabetic retinopathy detection using BossaNova representation, in: IEEE Intl. Conf. Eng. Med. Biol. Soc., 2014, pp. 146–149.
[24] D. Sidibé, I. Sadek, F. Mériaudeau, Discrimination of retinal images containing bright lesions using sparse coded features and SVM, Comput. Biol. Med. 62 (2015) 175–184.
[25] R. Pires, T. Carvalho, G. Spurling, S. Goldenstein, J. Wainer, A. Luckie, H.F. Jelinek, A. Rocha, Automated multi-lesion detection for referable diabetic retinopathy in indigenous health care, PLoS ONE 10 (6) (2015) e0127664, https://doi.org/10.1371/journal.pone.0127664.
[26] S. Naqvi, M. Zafar, I. Ul Haq, Referral system for hard exudates in eye fundus, Comput. Biol. Med. 64 (2015) 217–235.
[27] R. Pires, S. Avila, J. Wainer, E. Valle, M.D. Abràmoff, A. Rocha, A data-driven approach to referable diabetic retinopathy detection, Artif. Intell. Med. 96 (2019) 93–106.
[28] J. Nandy, W. Hsu, M.L. Lee, An incremental feature extraction framework for referable diabetic retinopathy detection, in: 2016 IEEE 28th International Conference on Tools With Artificial Intelligence (ICTAI), IEEE, 2016, pp. 908–912.


[29] Y. Yang, T. Li, W. Li, H. Wu, W. Fan, W. Zhang, Lesion detection and grading of diabetic retinopathy via two-stages deep convolutional neural networks, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2017, pp. 533–540.
[30] Z. Wang, Y. Yin, J. Shi, W. Fang, H. Li, X. Wang, Zoom-in-net: deep mining lesions for diabetic retinopathy detection, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2017, pp. 267–275.
[31] P. Costa, A. Galdran, A. Smailagic, A. Campilho, A weakly-supervised framework for interpretable diabetic retinopathy detection on retinal images, IEEE Access 6 (2018) 18747–18758, https://doi.org/10.1109/access.2018.2816003.
[32] H. Bay, A. Ess, T. Tuytelaars, L.V. Gool, Speeded-up robust features (SURF), Comput. Vis. Image Underst. 110 (3) (2008) 346–359.
[33] P. Roy, R. Tennakoon, K. Cao, S. Sedai, D. Mahapatra, S. Maetschke, R. Garnavi, A novel hybrid approach for severity assessment of diabetic retinopathy in colour fundus images, in: IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), IEEE, 2017, pp. 1078–1082.
[34] K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, in: International Conference on Learning Representations, 2015.
[35] J. Sánchez, F. Perronnin, T. Mensink, J. Verbeek, Image classification with the Fisher vector: theory and practice, Int. J. Comput. Vis. 105 (3) (2013) 222–245.
[36] Z.C. Lipton, The mythos of model interpretability, Queue 16 (3) (2018) 30.
[37] D. Smilkov, N. Thorat, B. Kim, F. Viégas, M. Wattenberg, Smoothgrad: removing noise by adding noise, in: Workshop on Visualization for Deep Learning, 34th International Conference on Machine Learning (ICML), 2017.
[38] J.T. Springenberg, A. Dosovitskiy, T. Brox, M. Riedmiller, Striving for simplicity: the all convolutional net, in: 3rd International Conference on Learning Representations (ICLR), 2015.
[39] C. Szegedy, S. Ioffe, V. Vanhoucke, A.A. Alemi, Inception-v4, inception-resnet and the impact of residual connections on learning, in: Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), vol. 4, 2017, p. 12.
[40] T. Tieleman, G. Hinton, Lecture 6.5-rmsprop: divide the gradient by a running average of its recent magnitude, COURSERA Neural Netw. Mach. Learn. 4 (2) (2012) 26–31.
[41] I. Loshchilov, F. Hutter, SGDR: stochastic gradient descent with warm restarts, in: 5th International Conference on Learning Representations (ICLR), 2017.
[42] F. Perronnin, D. Larlus, Fisher vectors meet neural networks: a hybrid classification architecture, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3743–3752.
[43] F. Perronnin, C. Dance, Fisher kernels on visual vocabularies for image categorization, in: 2007 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2007, pp. 1–8.
[44] C. Wilkinson, F. Ferris III, R. Klein, P. Lee, C. Agardh, M. Davis, D. Dills, A. Kampik, R. Pararajasegaram, J. Verdaguer, Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales, Ophthalmology 110 (9) (2003) 1677–1682.
[45] G. Quellec, M. Lamard, P.M. Josselin, G. Cazuguel, B. Cochener, C. Roux, Optimal wavelet transform for the detection of microaneurysms in retina photographs, IEEE Trans. Med. Imaging 27 (9) (2008) 1230–1241.
[46] E. Decencière, X. Zhang, G. Cazuguel, B. Lay, B. Cochener, Feedback on a publicly distributed image database: the Messidor database, Image Anal. Stereol. 33 (3) (2014) 231–234.

9 Machine learning-based abnormalities detection in retinal fundus images
G. Indumathi, V. Sathananthavathi
MEPCO SCHLENK ENGINEERING COLLEGE, SIVAKASI, INDIA

1 Introduction
The human eye is a highly complex structure that translates our external environment into electrical signals for the brain to sense. The cornea is the front surface of the eye that focuses the incoming light, and the amount of light entering the eye is controlled by the iris, which acts as the diaphragm, or camera shutter, of the human eye. The average thickness of the central area of the cornea is about 550 μm, or slightly more than half a millimeter [1]. The conversion of the optical image signal into an electrical signal understandable by the brain is performed by the neurosensory tissue layer named the retina. The iris is the thin layer that controls the size of the pupil and thereby limits the amount of light that reaches the retina. For retinal imaging, both the entry and exit paths of light pass through the pupil; the imaged area depends on the field of view (FOV) of the fundus camera, which is classified as narrow- or wide-angle. A narrow-angle fundus camera captures a field of 20° or less, while a wide-angle camera captures from 45° to 135°. Some fundus photographs are captured with specialized dyes such as fluorescein and indocyanine green [2]. Most retinal diseases are diagnosed by observing retinal changes in fundus images. The retinal regions are imaged by passing light through the pupil, the central opening in the iris. Ophthalmologists prefer retinal fundus images for the continuous monitoring of eye disease progression, as retinal changes are early symptoms of most related diseases. Nowadays, diabetes is one of the most commonly seen diseases affecting a person's eyesight. Diabetic retinopathy is a vision-threatening disease that arises due to the following reasons:

• Having diabetes for a long period of years
• An irregular diet and poorly controlled blood sugar level
• High cholesterol and blood pressure


When the blood sugar level is not properly maintained by a diabetic patient, the blood supply to the retinal vessels is reduced. Blood vessels are the nutrient carriers for all parts of the eye; a lack of blood supply leads to an insufficient supply of nutrients and hence to a reduction in vision. At a later stage, the diabetic person may lose his or her entire vision. Early stages of diabetic retinopathy have no noticeable symptoms, and the disease will be sensed by the person only when its progression is advanced. Disease progression manifests as symptoms such as floaters (dark spots floating in the vision) and fluctuating, blurred, or impaired vision. By this noticeable stage, the vision loss is irreversible in most cases. Hence early prediction is advisable, and it is possible through regular eye checkups with fundus imaging. Doctors assess the diabetic retinopathy stage of a patient through several symptoms of internal anatomical changes. The National Eye Institute [3] categorizes diabetic retinopathy into proliferative and nonproliferative retinopathy. Nonproliferative retinopathy may be mild, moderate, or severe. In mild nonproliferative retinopathy, a small swelling of the blood vessels occurs; this swelling leads to a blockage condition named moderate nonproliferative retinopathy. As the number of blockages increases, the blood supply to the internal parts of the eye decreases, leading to the formation of new fragile blood vessels; this stage is called severe nonproliferative retinopathy. Since the newly formed blood vessels are abnormal, they may rupture and leak blood into the internal region, causing hemorrhages; this stage is identified as proliferative retinopathy. New blood vessel formation and an increase in vessel tortuosity are also commonly noticeable symptoms in fundus images. The initial stage of blood leakage is identified through microaneurysms and later through hemorrhages. Brian Bucca from the Barbara Davis Center for Diabetes [4] clearly explains the various symptoms of diabetic retinopathy. A microaneurysm is the initial visible symptom, caused by the enlargement of capillaries; it looks like a small red dot with a dimension smaller than that of the blood vessels. When microaneurysms rupture, blood leaks into the retinal region, producing hemorrhages. Hemorrhages are usually larger than microaneurysms and, based on their shapes, are classified as dot, blot, or flame-shaped hemorrhages. In addition to microaneurysms and hemorrhages, hard and soft exudates are identified as retinopathy symptoms. Hard and soft exudates, including cotton wool spots, are generally labeled as bright lesions; microaneurysms and hemorrhages are labeled as red lesions. Disease progression over time is identified through the above-mentioned symptoms. However, this is a time-consuming process when the locations and sizes of the abnormalities are considered. Also, in the measurement of blood vessel tortuosity, changes are not easily noticeable to the human expert, as image comparison takes too much time. Hence there is a need for an automated system to identify and locate all these abnormalities. The challenges behind this abnormality detection include an intensity range that coincides with other normal regions, and the nonuniformity of illumination and contrast throughout the image. Numerous works have already been proposed for lesion detection, in both unsupervised


and supervised manners. Compared to unsupervised methods, supervised methods are more popular for abnormality detection in retinal images. Baudoin, Lay, and Klein proposed a neural network classifier-based method [5] to detect red lesions in fluorescein angiographies. Preprocessing was done to reduce the uneven illumination by smoothing the image. Top-hat filtering was applied to detect the edges, followed by segmentation of the red lesions using morphological matched filtering and region-growing methods. The effectiveness of a classifier can be measured using the ROC (receiver operating characteristic) curve and accuracy. The Messidor database, which contains more than 1000 images, was used for their methodology, and the accuracy obtained was about 89.8%. Another neural network-based method was proposed by Sinthanayothin [6] to detect red lesions. Their methodology includes preprocessing steps for illumination equalization and contrast enhancement of the image. Recognition of the main retinal components using a multilayer perceptron neural network can be derived from principal component analysis. A pixel-based method is used to extract the features from the green channel of the image. A classification accuracy of about 89.9% was achieved on the Messidor database. Niemeijer et al. [7] proposed a bright lesion detection methodology based on a k-nearest neighbor classifier. The authors also address the illumination and vignetting effects in the image using illumination equalization and contrast equalization. Morphological opening and closing operations are applied to extract the brightest regions in the image. Lesions have better visibility in the green channel, and hence the candidate regions are extracted from the green channel; morphological top-hat filtering is also used for candidate region extraction. The images used for their experimentation were taken from the Retinopathy Online Challenge database, which contains 50 training images and 50 testing images, and an accuracy of about 87.7% was obtained. A linear discriminant analysis classification method [8] for lesions was proposed by T. Walter et al. In this work, preprocessing is also done before candidate region extraction, and the green channel is used to extract the candidate regions; local maxima found using morphological operations are considered as candidate regions. All three channels are individually subjected to illumination equalization, contrast equalization, and intensity adjustment. The Diaretdb1 [9, 10] database, consisting of 28 training images and 56 testing images, is used. Blood vessels are the major source of false positives in lesion detection, owing to their similar intensity range. Mizutani et al. [11] proposed a double ring filter to remove blood vessels. The authors claimed that preprocessing is required to reduce the differences in brightness and color of the retinal images. After preprocessing, initial detection of microaneurysms is done using morphological operations on the green channel, which has high contrast for red lesions. Next, blood vessels are removed using the double ring filter to avoid false positives, and candidate regions (local maxima) are extracted using morphological operations. After extraction of the candidate regions, feature extraction is done; area, degree of circularity, and length-to-width ratio are considered as features. The Retinopathy Online Challenge database is used for this work.
The intensity of lesions, their similarity to other anatomical parts of the eye, and the shape of the lesions are considered significant features


in most of the existing works. Ravishankar et al. [12] proposed lesion detection based on the shape of the lesions. Smoothing is applied as preprocessing to remove the noise present in the image. The optic disc is another challenge in lesion detection; hence it should be eliminated before candidate region extraction. In this work, the optic disc is removed using a maximum variance method and morphological operations. Candidate regions are separated using a region-growing method, and blood vessel extraction is done using the morphological closing operation. Sobel and Canny edge detection are applied to detect the edges, which are considered as features to train a support vector machine. Zhang et al. [13] proposed a multiscale correlation-based method to detect red lesions. In this work, preprocessing (illumination equalization and contrast enhancement) is included to improve the contrast quality. Multiple correlation coefficients, which measure the strength of association between the independent and dependent variables, are used as features. Finally, classification is done using the k-nearest neighbor method; the accuracy achieved is about 89.21%, and the Diaretdb1 database is used for the experimentation. Lazar et al. [14] proposed cross-section profile analysis for lesion detection. The image is smoothed to remove noise artifacts, and the methodology does not include any equalization. Candidate regions are identified through local regional maxima. Using a line detector method, the cross-sectional intensity profile is found; with the help of the profile dimensions, lesions are differentiated from other regions. They also note the false positives caused by the optic disc. The images used are taken from the E-ophtha database, which contains 50 training and testing images. Seoud [15] used dynamic shape features to detect the lesions. Nonuniform illumination and contrast effects are addressed by equalization methods in preprocessing, and morphological closing and opening operations are applied to remove the optic disc. Candidate regions are extracted using a dynamic transformation method, dynamic shapes are considered as features, and a random forest classifier is used for classification; the accuracy achieved is about 92.8% on the Diaretdb1 database. Zhang [16] used a mass screening approach for the clear identification of exudates in fundus images. In this work, preprocessing includes illumination equalization and contrast improvement, and spatial calibration is done to select the region of interest. The optic disc is removed by an entropy-based method, and candidate regions are taken as local minima. Area and perimeter are considered as features for classification by a random forest classifier; the performance of this method is about 92.89% accuracy on the Diaretdb1 database. Habib et al. [17] proposed an ensemble classifier-based technique for microaneurysm detection. The authors used the MESSIDOR database and created a ground truth dataset for the detection of microaneurysms. In this work, intensity and shape features are used; in addition, Gaussian matched filter, moment invariant, morphological, Gaussian, and Fleming features are included in the feature vector. Feature selection is implemented, and classification with only the selected features improves the classification accuracy. Su Wang et al. [18] proposed another microaneurysm detection method based on profile features. For all the candidate regions, cross-sectional profiles are extracted at equal intervals of


angle. They obtained 12 cross-sectional profiles in total, and singular spectrum analysis is then applied for decomposition. The correlation of the obtained profile with an ideal profile curve is calculated, and based on this, false positives are reduced: by observing the correlation coefficient, candidates that merely imitate microaneurysms can be eliminated. From the surveyed literature, it is observed that shape and profile features are predominantly used in most of the existing methodologies, and the severity of diabetic retinopathy can be identified through the shapes and dimensions of the abnormalities in the fundus images.

2 Abnormality detection
The detection of abnormalities in retinal fundus images is a challenging task due to the nature of fundus photographs. Nonuniformity of illumination and contrast needs to be equalized before the detection process. In addition, the intensity range of abnormalities falls within the intensity levels of other anatomical structures; hence those regions are identified first to reduce the number of false positives. Samples of normal and abnormal images are shown in Fig. 1. The region of interest for all the abnormalities can be identified with the above considerations. The spatial calibration results for some abnormal and normal images from the Diaretdb1 database are shown in Figs. 2 and 3. The green channel usually shows better visibility compared to the red and blue channels. The channel separation of a retinal image is shown in Fig. 4.

2.1 Preprocessing
Preprocessing is the preliminary step applied to retinal fundus images to enhance them for better visibility. Preliminary symptoms of diabetic retinopathy include new vessel formation, microaneurysms, and hemorrhages. In order to differentiate microaneurysms from the blood vessels and other regions, the abnormal regions should be well distinguished by the automated algorithm. Flawless identification of the abnormal regions can be achieved by correcting the nonuniform illumination and by enhancing the contrast. Most of the popular public database images need to be preprocessed in order to rectify the above-mentioned issues.

FIG. 1 Sample normal and abnormal images.


FIG. 2 Spatial calibration results for abnormal images: input image, spatial calibration output, and kernel sizes.

There are illumination variations in different regions of the image, which are corrected initially using an illumination equalization algorithm. The green channel is identified as the channel with the best visibility compared to the red and blue channels; hence the preprocessing is preferably done on the green channel.

2.1.1 Illumination equalization
Most of the retinal images available in public databases exhibit varying illumination across the image. To overcome this vignetting effect, the illumination equalization method compensates for the nonuniform illumination using a background-estimated image. A large mean filter (M1) of size d is applied to each color component of the original image to estimate its illumination.

FIG. 3 Spatial calibration results for abnormal images: input image, spatial calibration output, and kernel sizes.

The resulting mean-filtered image is then subtracted from the original one to correct for potential shade variations. Finally, the average intensity of the original image is added back to keep the same intensity range as the original image:

Illumination equalized image = I + μ − M1    (1)

where I is the image channel, μ is its average intensity, and M1 is the output of the mean filter of size d.
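A small sketch of Eq. (1), assuming scipy's uniform filter as the mean filter M1; the filter size d is an illustrative parameter.

import numpy as np
from scipy.ndimage import uniform_filter

def illumination_equalize(channel, d):
    # Eq. (1): I + mu - M1, with a large mean filter as background estimate.
    channel = channel.astype(np.float64)
    background = uniform_filter(channel, size=d)  # M1: mean filter of size d
    return channel + channel.mean() - background  # mu = average intensity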


FIG. 4 Channel separation of an input retinal image: input image and red, green, and blue channels.

2.1.2 Adaptive contrast equalization
In order to enhance the low-contrast areas of an image and sharpen its details, adaptive contrast equalization [19] can be applied. For this, the local standard deviation (Id) is computed for each color channel over a window of size d3, and a mean filter of the same size is applied to obtain M3. The mean-filtered image is subtracted from one, and this value is multiplied by the inverse of the local standard deviation, as specified by Eq. (2):

Ice = (1 / Id) × (1 − M3)    (2)

where Ice is the contrast-equalized image, Id is the local standard deviation of the channel, and M3 is the output of the mean filter of size d3.
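A sketch of Eq. (2) as printed, computing the local standard deviation from local first and second moments. Note that a common variant of adaptive contrast equalization subtracts the mean-filtered image from the image itself rather than from one; the code follows the chapter's printed form.

import numpy as np
from scipy.ndimage import uniform_filter

def contrast_equalize(channel, d3, eps=1e-6):
    # Eq. (2) as printed: (1 - M3) scaled by the inverse local std (Id).
    channel = channel.astype(np.float64)
    mean = uniform_filter(channel, size=d3)                   # M3
    sq_mean = uniform_filter(channel ** 2, size=d3)
    local_std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0))  # Id
    return (1.0 - mean) / (local_std + eps)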

2.1.3 Contrast limited adaptive histogram equalization (CLAHE)
Contrast limited adaptive histogram equalization is an enhancement algorithm [20] used in image processing to improve the contrast of images. CLAHE improves upon plain adaptive histogram equalization, which has a tendency to overamplify noise in relatively homogeneous regions of an image. CLAHE overcomes this limitation through its contrast limiting and is therefore well suited for local contrast enhancement of image regions. CLAHE limits the noise amplification by clipping the histogram at a predefined value before computing the cumulative distribution function, which in turn limits the slope of the cumulative distribution function and hence of the transformation function. The value at which the histogram is clipped is known as the clip limit; it depends on the size of the neighborhood region.
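CLAHE is available directly in OpenCV; the clip limit, tile grid size, and synthetic input below are illustrative values, not the chapter's settings.

import cv2
import numpy as np

# Synthetic low-contrast green channel as a stand-in for a real fundus image.
green = (np.random.default_rng(0).normal(120, 10, (256, 256))
         .clip(0, 255).astype(np.uint8))

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(green)  # CLAHE expects an 8-bit single-channel image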

2.1.4 Color normalization
Normalization is usually preferred in computer vision-based object recognition in order to achieve color constancy like that of human vision. Human vision has the ability to recognize an object or a person irrespective of the environmental lighting conditions; machine vision needs to eliminate or compensate for the distortions caused by lighting or shadows. As far as the capture of retinal fundus images is concerned, due to the shape of the eyeball, the light cannot illuminate all the internal regions uniformly, and there is also a possibility of shadows or darkened regions in the image. The solution to all these problems is to normalize the colors in each channel.

2.1.5 Histogram equalization and histogram specification
Histogram equalization [21] and histogram specification [22] are the most popular color normalization techniques. The desired intensity histogram can be obtained by changing the range of pixel intensities. Histogram equalization maps the pixel values of the input image to the output image using a nonlinear transfer function, which modifies the intensity values so that the resulting image has a uniform intensity distribution. Histogram specification, otherwise called histogram matching, is the method of transforming one image by matching its histogram to a desired histogram; in general, the destination histogram is uniformly distributed. Histogram matching is mainly suitable for correcting shadow effects in images. Usually, retinal image processing needs to undergo all the above-mentioned preprocessing steps to get better results. Without illumination, color, and contrast normalization, the information present in the nonilluminated regions is not recoverable, especially for the detection of abnormalities such as microaneurysms. Images with the different preprocessing steps applied, for normal and abnormal cases, are given in Figs. 5 and 6, respectively.
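Histogram specification is available, for example, in scikit-image; the sketch below matches a synthetic image to a synthetic reference and is illustrative only.

import numpy as np
from skimage.exposure import match_histograms

rng = np.random.default_rng(0)
image = rng.normal(100, 20, (128, 128)).clip(0, 255)      # image to normalize
reference = rng.normal(140, 35, (128, 128)).clip(0, 255)  # well-exposed target
matched = match_histograms(image, reference)              # histogram specification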

2.1.6 Optic disc detection
The optic disc is a significant source of false positives in red lesion detection [15]. Therefore its removal is a necessary preprocessing step to eliminate false positives in lesion detection. Morphological operations are the most widely used and simplest methodology to extract the optic disc region. Morphological opening removes some of the foreground (bright) pixels from the edges of foreground regions, while morphological closing enlarges the boundaries of foreground (bright) regions in an image. In this work, these operations are applied several times to extract only the brightest region in the image. Fig. 7 shows the detected optic disc.
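A hedged sketch of the repeated opening/closing idea: the kernel size, iteration count, and the 0.95 threshold below are our own illustrative choices, not the chapter's parameters.

import cv2
import numpy as np

def detect_optic_disc(gray, ksize=25, iters=3):
    # Rough optic-disc mask: repeated opening/closing keeps only large
    # bright structures, then pixels near the global maximum are kept.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    smooth = gray
    for _ in range(iters):
        smooth = cv2.morphologyEx(smooth, cv2.MORPH_OPEN, kernel)
        smooth = cv2.morphologyEx(smooth, cv2.MORPH_CLOSE, kernel)
    return (smooth >= 0.95 * smooth.max()).astype(np.uint8) * 255

# toy usage: a synthetic bright disc on a darker background
img = np.full((200, 200), 60, np.uint8)
cv2.circle(img, (140, 90), 20, 200, -1)  # hypothetical optic disc
mask = detect_optic_disc(img)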


FIG. 5 Preprocessed outputs for normal images: input images, illumination equalization, adaptive contrast equalization, and color normalization.

3 Candidate region detection
Abnormality detection is initiated by candidate region detection based on morphological operations. The ultimate aim of the morphological operations is to eliminate false positives due to blood vessels. The blood vessels can be suppressed by morphological operations using a line structuring element positioned at all possible orientations in the image. A bottom-hat transform can then be applied to extract the possible abnormal structures. The resulting image may contain isolated pixels other than lesion regions; those isolated pixels are removed so that only the abnormal regions are extracted. Even with all these processes, many false positives are observed due to the nature of retinal images. Hence features are extracted from all the resulting regions and used to train a classifier. Some of the significant features for abnormality detection include profile features, shape features, and intensity-based features.
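The following sketch illustrates the candidate-detection idea in the spirit described above (a bottom-hat transform to pick out dark structures, and rotated line elements to suppress vessel-like responses); the kernel lengths, angles, and threshold are illustrative assumptions, not the chapter's exact pipeline.

import cv2
import numpy as np

def line_kernel(length, angle_deg):
    # Binary line structuring element of the given length and orientation.
    k = np.zeros((length, length), np.uint8)
    c = length // 2
    t = np.deg2rad(angle_deg)
    for r in range(-c, c + 1):
        k[int(round(c + r * np.sin(t))), int(round(c + r * np.cos(t)))] = 1
    return k

def red_lesion_candidates(green, length=15, thresh=10):
    # Bottom-hat picks out dark structures; elongated (vessel-like) responses
    # are estimated as the maximum opening over rotated line elements and
    # subtracted, so that only compact candidates survive.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (length, length))
    dark = cv2.morphologyEx(green, cv2.MORPH_BLACKHAT, kernel)
    vessels = np.max([cv2.morphologyEx(dark, cv2.MORPH_OPEN, line_kernel(length, a))
                      for a in range(0, 180, 15)], axis=0)
    candidates = cv2.subtract(dark, vessels)
    return (candidates > thresh).astype(np.uint8)

# toy usage on a synthetic green channel with one small dark "lesion"
green = np.full((128, 128), 120, np.uint8)
green[60:64, 60:64] = 60
mask = red_lesion_candidates(green)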

FIG. 6 Preprocessed outputs for abnormal images: input images, illumination equalization, adaptive contrast equalization, and color normalization.

3.1 Profile features
Profile features [15] are the set of attributes that describe the intensity profile of a particular region of an image. Profile features are preferred because the profiles of abnormalities differ from those of vascular regions. The peak value, the width of the peak, the rising or falling slope, and the difference between the peak and the minima are some of the efficient profile features that distinguish microaneurysms or hemorrhages from vascular regions.

3.2 Shape features
Blood vessels are elongated regions with a certain thickness in the retinal image. Microaneurysms are tiny blood spots that usually take a circular or


FIG. 7 Optic disc detection: input image and detected optic disc.

elliptical shape. Hemorrhages may be circular or, in some cases, irregular in shape, but the area they occupy is large compared to microaneurysms. Hence the area of the abnormalities, their aspect ratio, and their symmetry can be used as shape features [23, 24] in order to differentiate the abnormalities from the remaining regions of the retinal fundus image.

3.3 Intensity features
Intensity features also provide vital information to discriminate abnormalities from the remaining regions. The intensity level of abnormalities is distinguishable from the background regions for both red and bright lesions. If the optic disc and blood vessels are removed before candidate extraction, false positives can be reduced to as small a number as possible. The mean and standard deviation of the abnormal regions can be used as useful features to train the classifier.


FIG. 8 Normal and tortuous image.

4 Tortuous retinal vessel
Twisted blood vessels are commonly observed in all living beings, including humans. Tortuous retinal arteries and veins are associated with hypertension, diabetes, and genetic disorders. While mild tortuosity is a common anomaly without clinical symptoms, severe tortuosity can lead to various serious symptoms. Clinical observations have linked tortuous arteries and veins with aging, atherosclerosis, hypertension, genetic defects, and diabetes mellitus. Arteries may take a tortuous path due to the abnormal development of vascular disease. Tortuous blood vessels have become a common angiographic finding in many studies and clinical screenings, and with the advance of imaging technology, more and more tortuous vessels are being detected. Fig. 8 shows sample normal and tortuous images. Blood vessel tortuosity is a widely observed vascular anomaly affecting a range of vessels, from large arteries and veins to small arterioles and venules, in almost all locations of the body. Tortuosity has often been reported in the aorta and capillaries, as well as in the vertebral, iliac, femoral, coronary, cerebral, and internal carotid arteries. Tortuous retinal arteries and veins have often been observed in patients with retinopathy and other diseases. Normal retinal blood vessels are smoothly curved, but because of retinal diseases they tend to expand and bend, becoming tortuous as the disease progresses. Abnormal retinal tortuosity arises from the accumulation of twists along the blood vessels. Retinal blood vessel tortuosity can be defined as unusual curves, loops, or crinkly shapes of vessels extending from the optic disc to the periphery. In general, tortuosity may be seen in a small vascular region, or it may be observed over the entire vascular tree. In recent years, tortuosity of the retinal vessels has been considered one of the earliest symptoms of a number of vascular diseases, such as diabetic retinopathy. Sasongko, Wong, Nguyen, Cheung, Shaw, and Wang [25] examined the association of retinal vessel tortuosity with diabetes and diabetic retinopathy (DR). They measured the retinal vessel tortuosity from disc-centered retinal photographs using a semiautomated computer program, with a single grader assessing arterioles and venules within 0.5–2 disc diameters of the optic disc. Emanuele Trucco, Hind Azegrouz, and Baljean Dhillon [26] measured the tortuosity of retinal blood vessels selected by ophthalmologists. This novel measure of tortuosity depends on boundary localization, curvature estimation,


and the thickness of the retinal blood vessels, and it shows good agreement with medical judgment. William E. Hart and Michael Goldbaum [27] proposed the automatic measurement of blood vessel tortuosity. The tortuosity measures were calculated at two levels of classification: the first classifies blood vessel segments as tortuous or not, and the second performs the classification on the entire vascular structure. The performance of the classifiers was compared using the classification rate and the integrated relative operating characteristic (ROC). The classification rate is the accuracy of classification, and the ROC measures the number of test samples correctly classified as true positives as a function of the negative test samples classified as false positives. Enrico Grisan et al. [28] proposed a new algorithm for the evaluation of tortuosity in blood vessels recognized in digital fundus images. The algorithm was compared with other available tortuosity measures on a set of 30 arteries and a set of 30 veins from 60 different images. These vessels were preliminarily ordered by a retina specialist by increasing perceived tortuosity, and the proposed algorithm proved to be the best at matching the clinically perceived vessel tortuosity. Masoud Aghamohamadian-Sharbaf and Hamid Reza Pourreza [29] proposed an automatic method for measuring single-vessel and vessel-network tortuosity. The authors claimed their method is simple, has a low computational burden, and matches the clinically perceived tortuosity. A curvature-based method is used to measure the tortuosity, curvature being a sign of the local inflection of a curve. Since the template disc method is not linear with respect to curvature, the proposed method introduces some alterations; it is used to measure tortuosity on a publicly available databank, with comparisons made both at the vessel level and at the vessel-network level. Mowda Abdalla et al. [30] noted that tortuosity of the retinal blood vessels is one of the earliest signs of many retinal diseases. In their work, a distance-based approach, a curvature-based approach, and a mixed method are discussed, with the mixed method achieving better accuracy than the others. Enea Poletti et al. [31] proposed an algorithm to estimate vessel tortuosity in images captured with a wide-field fundus camera for patients with retinopathy of prematurity. Vessels were manually traced in 20 images to provide error-free input data for the tortuosity estimation, which offers a quantitative diagnostic parameter. Adel Ghazikhani and Hamidreza Pourreza [32] introduced a novel algorithm for vessel detection in retinal images. The proposed algorithm incorporates the radon transform (RT) and a genetic algorithm (GA) to detect vessels. The RT is used for vessel detection because vessels can be approximated accurately by sets of lines. The RT is performed locally because vessels have different sizes and widths and a global RT is not capable of detecting them; the genetic algorithm is used to optimize the parameters of the local RT. Overall, the arc-over-chord ratio method and the curvature method offer better accuracy and lower computational complexity.

4.1 Database
The retinal images are obtained from the public databases DRIVE [33] and EIARG [34]. The DRIVE database contains 40 retinal images obtained during a


FIG. 9 Sample images (A) DRIVE database images (B) EIARG database images.

screening program in the Netherlands. The set of 40 images is divided into a training set and a test set, each containing 20 images, with a 45° field of view. The EIARG database contains 130 tortuous retinal images. Linear vessel structures are extracted from the DRIVE database and tortuous ones from the EIARG database. A total of 22 input images were used in this work: 12 images from the EIARG database and 10 images from the DRIVE database. Example images from the EIARG and DRIVE databases are shown in Fig. 9.


4.2 Radon transform
The RT transfers a function into a line-parameter domain. To achieve this, the RT computes the function's integral (or sum) over a set of lines, making it a powerful line detector. The RT is applied here because vessels are linear structures; an image is a two-dimensional signal, so the RT can be used to detect lines in images. If the whole image were applied as the input to the RT, it would detect lines running through the whole image. Since a retinal image contains vessels of different sizes and widths, this approach is not appropriate; therefore the RT is performed on small regions to detect short lines. A circular mask is used to perform the RT locally. The reason for using a circular mask is that a square mask would cause the RT to produce spurious peaks for lines running across the square mask's diagonals, leading to false line detections. In a circular mask, all the lines passing through the center point have the same length, so falsely detected lines are avoided. Before the RT is performed on the circular mask, the gray levels of the image under the mask are inverted so that the vessels have lighter gray levels than the background.
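A minimal sketch of the local RT idea, assuming scikit-image's radon transform: the patch is masked with an inscribed circle, inverted so vessels become bright, and the sinogram peak is read as the line response. The patch size, angle step, and toy input are illustrative assumptions.

import numpy as np
from skimage.transform import radon

def local_line_strength(patch, angles=np.arange(0.0, 180.0, 2.0)):
    # Mask the square patch with an inscribed circle, invert gray levels so
    # vessels become bright, and take the Radon-transform peak as the
    # line (vessel) response of the patch.
    h, w = patch.shape
    yy, xx = np.mgrid[:h, :w]
    r = min(h, w) / 2.0
    circle = ((yy - (h - 1) / 2) ** 2 + (xx - (w - 1) / 2) ** 2) <= r ** 2
    inverted = np.where(circle, patch.max() - patch, 0.0)
    sinogram = radon(inverted, theta=angles, circle=True)
    return sinogram.max()

patch = np.ones((31, 31)); patch[15, :] = 0.0  # a dark horizontal "vessel"
strength = local_line_strength(patch)          # large value => line present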

4.3 Extraction of vascular skeleton
After vessel detection, the next step is to extract the skeletonized vessel structures. Skeletonization is a morphological thinning process. To find the bifurcation and crossover points easily, the skeletonization process has to be done first; it removes the boundary pixels of an object without breaking the object apart, reducing the vessels to a width of one pixel. The morphological thinning operation is performed with a 2 × 2 structuring element. Bifurcation and crossover points can then be detected in the skeletonized image without ambiguity, and the skeletonized image is given as the input to the bifurcation and crossover elimination step. In vessel detection, first the green channel of the image is extracted; retinal images are usually low-contrast images, and vessels are most clearly visible in the green channel due to its higher contrast. Preprocessing is performed to enhance the contrast of the green channel, and the vessel networks are then segmented from the retinal image using the radon transform. In Fig. 10, the first two rows show the input image, the preprocessed image, and the vessel-detected image for images from the DRIVE database, and the last two rows show the same for the EIARG database. Compared with the DRIVE database, the vessels in the EIARG database are more twisted. The extraction of the vascular skeleton is done by the morphological thinning process, which yields a vessel network of width one. The skeletonized images for the DRIVE and EIARG databases are shown in Fig. 11.
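Skeletonization is available directly in scikit-image; the toy mask below stands in for the RT vessel segmentation.

import numpy as np
from skimage.morphology import skeletonize

vessel_mask = np.zeros((64, 64), dtype=bool)  # stands in for RT segmentation
vessel_mask[30:34, 5:60] = True               # a thick horizontal "vessel"
skeleton = skeletonize(vessel_mask)           # one-pixel-wide centerline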

4.4 Elimination of bifurcation and crossover points
Bifurcations and crossovers of the vessels can introduce errors into the tortuosity evaluation process. Therefore, prior to the curvature estimation procedure, all areas containing


FIG. 10 Vessel detection (A) input image, (B) preprocessed image, and (C) vessel detected image.

bifurcation and crossover points should be eliminated. The patches that are free of bifurcation and crossover points are then taken into consideration for measuring tortuosity. The mask given in Fig. 12 is used to detect the bifurcation and crossover points. The false detection of normal vessels as tortuous vessels by the classifier is due to the presence of bifurcation and crossover points, so they should be detected and removed from the retinal image. The detection of bifurcation and crossover points in an image is shown in Figs. 13 and 14. The image is separated into patches, and tortuosity is then measured for each individual patch using the curvature and arc-over-chord ratio methods. The patches used for tortuosity measurement are selected so that they do not contain any bifurcation or crossover points. The patches selected without bifurcation and crossover points are


FIG. 11 Skeletonized image (A) vessel detected image, (B) skeletonized image.

1 1 1
1 1 1
1 1 1

FIG. 12 Mask.

marked with a red bounding box. The patches that are free of bifurcation and crossover points are shown in Fig. 14. A sketch of this junction detection on the skeleton is given below.
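The following is a hedged sketch of junction detection on the one-pixel-wide skeleton, using the 3 × 3 all-ones neighborhood of Fig. 12; treating skeleton pixels with three or more skeleton neighbors as junctions is a common convention and an assumption here, not the chapter's stated rule.

import numpy as np
from scipy.ndimage import convolve

def junction_points(skeleton):
    # Count skeleton neighbors in the 3 x 3 all-ones neighborhood (Fig. 12);
    # skeleton pixels with >= 3 skeleton neighbors are flagged as
    # bifurcation/crossover points.
    sk = skeleton.astype(np.uint8)
    neighbors = convolve(sk, np.ones((3, 3), np.uint8), mode="constant") - sk
    return (sk == 1) & (neighbors >= 3)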

4.5 Tortuosity measurement
Tortuosity measurement methods are either distance-based or curvature-based. The arc-over-chord ratio is an example of a distance-based approach. Using


FIG. 13 DRIVE database (A) skeletonized image and (B) bifurcation and crossover points detected image.

the arc-over-chord ratio alone, the measured tortuosity value is not accurate. To achieve better accuracy, curvature-based tortuosity measurement along with the arc-over-chord ratio method may be considered for the grading (Fig. 15).

4.5.1 Arc over chord ratio method
Distance-based approaches have simple mathematical expressions. The arc-over-chord ratio is the simplest method used for tortuosity measurement. In this method, the length of the curve is labeled Lc, and the straight distance between the endpoints of the blood vessel segment is designated the chord length Lx; the tortuosity is measured as the ratio of Lc to Lx. To calculate tortuosity using the arc-over-chord ratio method, the bifurcation- and crossover-free vessels are first divided into blocks, and tortuosity is calculated for every labeled blood vessel segment. The mask shown in Fig. 16 is used to trace every blood vessel segment: it is centered on one end of the segment and passed along the blood vessel to the other end, and its main purpose is to determine the length of the blood vessel. The coordinates of the first and last pixels of the blood vessel segment are determined and stored. The mask moves along the blood vessel segment from the starting point to the ending point, one pixel at each step. At each step, the orientation of the blood vessel segment pixels is checked according to the mask, either 1 or 2 is added to a counter (which is later used to determine the segment length), and the current pixel is erased before the next movement.


FIG. 14 EIARG database (A) Skeletonized image and (B) bifurcation and crossover points detected image.

The count ultimately equals the length of the blood vessel segment when its last pixel is reached. The arc-to-chord ratio needs two quantities to find the tortuosity: the first is the length of the blood vessel segment, calculated previously in the variable count, while the second is the straight distance Dstraight between the first and last points of the blood vessel segment, calculated as the Euclidean distance:

Dstraight = sqrt((x2 − x1)² + (y2 − y1)²)    (3)

where (x1, y1) are the coordinates of the first point in the blood vessel segment and (x2, y2) are the coordinates of the second point in the blood vessel segment. The tortuosity of the blood vessel segment is defined as the ratio of the count to the Dstraight. The tortuosity is found by Tortuosity ¼

Count Dstraight

(4)
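A minimal sketch of Eqs. (3) and (4) for one ordered centerline segment follows. The 1-or-2 step counting mirrors the description above (1 for a horizontal or vertical step, 2 for a diagonal step, an interpretation consistent with the text rather than its stated rule), and the function name is illustrative.

import math

def arc_over_chord(points):
    """points: ordered list of (x, y) centerline pixels of one segment."""
    count = 0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        # Add 1 for a 4-connected step, 2 for a diagonal step.
        count += 1 if abs(x1 - x0) + abs(y1 - y0) == 1 else 2
    xs, ys = points[0]
    xe, ye = points[-1]
    d_straight = math.hypot(xe - xs, ye - ys)  # Eq. (3)
    return count / d_straight                  # Eq. (4)

# Example: a gently curved segment gives a ratio above 1.
print(arc_over_chord([(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]))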

4.5.2 Curvature method

Curvature is yet another approach to measuring tortuosity. Curvature can be described as the amount by which a curve deviates from a straight line. Curvature-based tortuosity


FIG. 15 Patches free from bifurcation and crossover points.

FIG. 16 Bifurcation and crossover mask.

measures are more reliable. In the Euclidean plane, the curvature is defined as the rate of change of slope as a function of arc length. Given a blood vessel segment S, as a plane curve represented by its centerline points S = [(x1, y1), (x2, y2), …, (x_{n-1}, y_{n-1}), (x_n, y_n)] and given parametrically in Cartesian coordinates as x and y, the curvature k is calculated by

$$k = \frac{y''x' - y'x''}{\left((x')^2 + (y')^2\right)^{3/2}} \tag{5}$$
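Eq. (5) can be evaluated numerically with finite differences. The sketch below assumes NumPy, treats the point index as the curve parameter, and takes the mean absolute curvature of the segment as its tortuosity feature; the aggregation choice is an assumption, not necessarily the chapter's exact one.

import numpy as np

def mean_curvature(points):
    """points: (n, 2) array of ordered, non-repeating centerline (x, y)."""
    x = points[:, 0].astype(float)
    y = points[:, 1].astype(float)
    dx, dy = np.gradient(x), np.gradient(y)      # x', y'
    ddx, ddy = np.gradient(dx), np.gradient(dy)  # x'', y''
    k = (ddy * dx - dy * ddx) / (dx**2 + dy**2) ** 1.5  # Eq. (5)
    return float(np.mean(np.abs(k)))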

Table 1 gives the arc over chord ratio and curvature measurement for the sample patches extracted for classifier training.


Table 1 Tortuosity values using arc over chord and curvature methods.

Patch | Arc over chord ratio | Curvature measurement
1 | 1.006 | 0.986
2 | 0.78736 | 0.0647
3 | 1.3959 | 1.025
4 | 1.2407 | 1.123
5 | 1.2189 | 1.301

From the observed results, the tortuosity value of normal patches lies between 0.5 and 1.09 with the arc over chord ratio method and between 0.06 and 0.99 with the curvature method. The tortuosity value of abnormal patches lies above 1.2 with the arc over chord ratio method and above 1 with the curvature method. These arc over chord ratio and curvature measurements may be used as features for the classifier to detect the tortuosity level.


5 Conclusion

In this chapter, the abnormal conditions normally seen in retinal fundus images in diabetic retinopathy were discussed. Diabetic patients have abnormal blood sugar levels, which may damage the retinal blood vessels and lead to diabetic retinopathy. The most commonly seen symptoms used for diagnosis, such as microaneurysms, hemorrhages, and tortuous vessels, were discussed. From a machine learning perspective, some of the features that need to be included to improve the accuracy of the classification process were described, as were the preprocessing steps required for detecting each abnormal condition. Some experiments on images from publicly available databases such as DiaretDB1, DRIVE, and EIARG were also provided.

References

[1] https://www.allaboutvision.com/resources/cornea.html.
[2] P.J. Saine, M.E. Tyler, Ophthalmic Photography: Retinal Photography, Angiography, and Electronic Imaging, second ed., Butterworth-Heinemann, Boston, 2002.
[3] http://www.visionaware.org.
[4] http://www.ucdenver.edu/academics/colleges/medicalschool/centers/BarbaraDavis/Clinical/Pages/Ophthalmology.aspx.
[5] C. Baudoin, B. Lay, J. Klein, Automatic detection of microaneurysms in diabetic fluorescein angiographies, Rev. Epidemiol. Sante Publique 32 (1984) 254–261.
[6] C. Sinthanayothin, et al., Automated detection of diabetic retinopathy on digital fundus images, Diabet. Med. 19 (2) (2002) 105–112.
[7] M. Niemeijer, B. van Ginneken, J. Staal, M.S.A. Suttorp-Schulten, M.D. Abràmoff, Automatic detection of red lesions in digital color fundus photographs, IEEE Trans. Med. Imag. 24 (5) (May 2005) 584–592.
[8] T. Walter, et al., Automatic detection of microaneurysms in color fundus images, Med. Image Anal. 11 (6) (2007) 555–566.
[9] T. Kauppi, V. Kalesnykiene, J.-K. Kamarainen, L. Lensu, I. Sorri, A. Raninen, R. Voutilainen, H. Uusitalo, H. Kälviäinen, J. Pietilä, DIARETDB1 diabetic retinopathy database and evaluation protocol, Technical report (PDF), 2007.
[10] T. Kauppi, V. Kalesnykiene, J.-K. Kamarainen, L. Lensu, I. Sorri, A. Raninen, R. Voutilainen, H. Uusitalo, H. Kälviäinen, J. Pietilä, DIARETDB1 diabetic retinopathy database and evaluation protocol, in: Proc. of the 11th Conf. on Medical Image Understanding and Analysis, 2007.
[11] Z. Mingzhu, New Method of Circle's Center and Radius Detection in Image, IEEE Xplore, 2008.
[12] A. Mizutani, C. Muramatsu, Y. Hatanaka, S. Suemori, T. Hara, H. Fujita, Automated microaneurysm detection method based on double ring filter in retinal fundus images, in: SPIE Med. Imag. Comput.-Aid. Diagnosis, 7260, 2009, pp. 72601N–72601N-8.
[13] S. Ravishankar, A. Jain, A. Mittal, Automated feature extraction for early detection of diabetic retinopathy in fundus images, in: Proc. IEEE Conf. Comput. Vis. Pattern Recognition, 2009, pp. 210–217.
[14] B. Zhang, X. Wu, J. You, Q. Li, F. Karray, Detection of microaneurysms using multi-scale correlation coefficients, Pattern Recognit. 43 (6) (2010) 2237–2248.
[15] I. Lazar, A. Hajdu, Retinal microaneurysm detection through local rotating cross-section profile analysis, IEEE Trans. Med. Imag. 32 (2) (2013) 400–407.
[16] A.M. Mendonca, A. Sousa, L. Mendonca, A. Campilho, Automatic localization of the optic disc by combining vascular and intensity information, Comput. Med. Imaging Graph. 37 (5–6) (2013) 409–417.


[17] M.M. Habib, R.A. Welikala, A. Hoppe, C.G. Owen, A.R. Rudnicka, S.A. Barman, Detection of microaneurysms in retinal images using an ensemble classifier, Inform. Med. Unlocked 9 (2017) 44–57.
[18] S. Wang, H.L. Tang, L.I. Al turk, H. Yin, S. Sanei, Localizing microaneurysms in fundus images through singular spectrum analysis, IEEE Trans. Biomed. Eng. 64 (5) (2017).
[19] S.M. Pizer, E.P. Amburn, J.D. Austin, et al., Adaptive histogram equalization and its variations, Comput. Vis. Graph. Image Process. 39 (1987) 355–368.
[20] K. Zuiderveld, Contrast limited adaptive histogram equalization, in: Graphic Gems IV, Academic Press Professional, San Diego, 1994, pp. 474–485.
[21] W. Burger, M.J. Burge, Digital Image Processing: An Algorithmic Introduction Using Java, Springer, ISBN 978-1846283796, 2008.
[22] R.C. Gonzalez, B.A. Fittes, Gray-level transformations for interactive image enhancement, in: 2nd Conference on Remotely Manned Systems: Technology and Applications, Los Angeles, California, 1975, pp. 17–19.
[23] C.I. Sanchez, R. Hornero, A. Mayo, M. Garcia, Mixture model-based clustering and logistic regression for automatic detection of microaneurysms in retinal images, Proc. SPIE 7260 (2009) 72601M-1–72601M-8.
[24] B. Zhang, F. Karray, Q. Li, L. Zhang, Sparse representation classifier for microaneurysm detection and retinal blood vessel extraction, Inf. Sci. 200 (2012) 78–90.
[25] M. Sasongko, T. Wong, T. Nguyen, C. Cheung, J. Shaw, Retinal vascular tortuosity in persons with diabetes and diabetic retinopathy, Diabetologia 54 (9) (2011) 2409–2416.
[26] E. Trucco, H. Azegrouz, B. Dhillon, Modeling the tortuosity of retinal vessels: does caliber play a role? IEEE Trans. Biomed. Eng. 57 (9) (Sep. 2010) 2239–2247.
[27] W.E. Hart, M. Goldbaum, B. Côté, et al., Automated measurement of retinal vascular tortuosity, Proc. AMIA Annu. Fall Symp. (1997) 459.
[28] E. Grisan, M. Foracchia, A. Ruggeri, A novel method for the automatic evaluation of retinal vessel tortuosity, in: Proceedings of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 1, 2003, pp. 866–869.
[29] M. Aghamohamadian-Sharbaf, H.R. Pourreza, A novel curvature-based algorithm for automatic grading of retinal blood vessel tortuosity, IEEE J. Biomed. Health Inform. 20 (2) (2016).
[30] M. Abdalla, Quantifying Retinal Blood Vessel's Tortuosity, University of Lincoln, London, UK, 2015.
[31] E. Poletti, E. Grisan, A. Ruggeri, Image-level tortuosity estimation in wide-field retinal images from infants with retinopathy of prematurity, in: 34th Annual International Conference of the IEEE EMBS, San Diego, California, USA, 28 August–1 September 2012.
[32] A. Ghazikhani, H. Pourreza, Vessel detection in retinal images using Radon transform and genetic algorithm, in: The 6th Iranian Machine Vision and Image Processing Conference, 2010.
[33] J.J. Staal, M.D. Abramoff, M. Niemeijer, M.A. Viergever, B. van Ginneken, Ridge-based vessel segmentation in color images of the retina, IEEE Trans. Med. Imaging 23 (2004) 501–509.
[34] M. Aghamohamadian-Sharbaf, H.R. Pourreza, T. Banaee, A novel curvature-based algorithm for automatic grading of retinal blood vessel tortuosity, IEEE J. Biomed. Health Inform. 20 (2) (March 2016) 586–595, https://doi.org/10.1109/JBHI.2015.2396198.

Further reading

[35] R.C. Gonzalez, Digital Image Processing Using Matlab, Publishing House of Electronics Industry, 2005.
[36] S.Y.K. Yuen, T.S.L. Lam, N.K.D. Leung, Connective Hough transform, Image Vis. Comput. 11 (5) (1993) 295–301.

10

Optical coherence tomography angiography of retinal vascular diseases

Rosa Lozadaa, Victor M. Villegasa,b, Harry W. Flynn Jr.b, Stephen G. Schwartzb

aDepartment of Ophthalmology, University of Puerto Rico School of Medicine, San Juan, PR, United States; bBascom Palmer Eye Institute, Department of Ophthalmology, University of Miami Miller School of Medicine, Miami, FL, United States

1 Introduction

Optical coherence tomography (OCT) is a noninvasive imaging technique that obtains high-resolution images and has broad clinical applications in the diagnosis and treatment of patients with retinal and choroidal diseases [1–12]. OCT provides excellent anatomic detail but cannot image retinal or choroidal blood flow. Historically, fluorescein angiography (FA) and indocyanine green angiography (ICGA) have been used to image vascular flow in the posterior segment. However, FA and ICGA generally require intravenous dye infusion, which can be associated with risks of nausea, vomiting, syncope, and potentially serious allergic reactions including anaphylaxis. Optical coherence tomography angiography (OCT-A) is a novel imaging technique that can be used to document the retinal and choroidal vasculature without the administration of intravenous dyes. OCT-A is an emerging technology that may provide excellent viewing of posterior segment blood flow, which is useful in the care of patients with many disorders, especially retinal vascular diseases. OCT-A performs automatic segmentation and images at different layers within the retina and choroid, allowing precise anatomic localization of pathology. However, leakage is not imaged on OCT-A; this permits detailed imaging of retinochoroidal vascular structures without the interruption of late sequences by dye leakage. OCT-A has certain advantages and disadvantages compared to FA and ICGA. Obtaining an entire OCT-A study is faster than obtaining an entire dye-based angiographic study. However, dye-based angiography takes individual photographs using a camera, each of which is very quick and requires less cooperation from the patient. OCT-A requires the patient to fixate for the entire duration of scanning (approximately 10 s per eye), which can be difficult for uncooperative patients and for patients with poor vision and eyes that cannot fixate well (Table 1).


Table 1 Comparison between OCT-A and dye-based angiography.

Optical coherence tomography angiography | Dye-based angiography
Noninvasive (no intravenous dye) | Invasive (requires intravenous dye)
No systemic risks | Systemic risks of nausea, vomiting, allergy, and anaphylaxis
Allows en face and B-scan images | Allows only en face images
Allows segmentation of retinal and choroidal layers | No segmentation
Images only dynamic blood flow | Leakage is identified
Rapid image acquisition | More time consuming
More dependent on patient's ability to fixate | Less dependent on patient's ability to fixate

OCT-A is only able to document vessels with moving red blood cells but does not image vessels with no or minimal flow. This may explain the inability of OCT-A to detect some of the microaneurysms seen on clinical examination and FA in patients with diabetic retinopathy and other retinal vascular diseases. Initially, all OCT-A scanners used spectral-domain (SD) technology. Newer devices use swept-source (SS) imaging. Compared to SD-OCT, SS-OCT uses longer-wavelength infrared light, enabling improved tissue penetration and better imaging through media opacities [13]. In this chapter, most OCT-A images were obtained using a commercially available Cirrus 5000 with AngioPlex (Zeiss, Jena, Germany), utilizing SD technology. A 6 × 6 mm slab was used, and no subsequent image processing was performed. In the outpatient clinics, 10 images (slabs) are typically obtained per eye: retina, vitreoretinal interface (VRI), superficial retina (superficial), deep retina (deep), avascular, choriocapillaris, choroid, two custom slabs, and a retina depth encoded slab. In clinical practice, the authors primarily use the retina slab for most patients with retinal vascular diseases, with specific examples below. Several OCT-A images were obtained using a prototype SS-OCT device which, at the time this chapter was prepared, was not commercially available in the United States (Zeiss).

2 OCT-A of normal eyes

An OCT-A of a healthy 48-year-old man is shown in Fig. 1. The left image represents the right eye and the right image represents the left eye. The upper images represent the en face flow images and the bottom images represent the corresponding B-scans (equivalent to a typical spectral-domain OCT image). In the B-scans, the purple segmentation lines mark the layers imaged. In the retina slab, imaging extends from the internal limiting membrane (ILM) to 60 μm above the retinal pigment epithelium (RPE) [2].

3 Selected retinal pathologies in OCT-A

3.1 Diabetic retinopathy

Diabetic retinopathy is the most common retinal vascular disease and the leading cause of visual loss among working-age patients in the United States [3]. Diabetic retinopathy may


FIG. 1 Normal optical coherence tomography angiography (OCT-A). (A) Right eye en face OCT-A above with structural OCT below and (B) left eye en face OCT-A above with structural OCT below.

be divided into nonproliferative diabetic retinopathy (NPDR) and proliferative diabetic retinopathy (PDR), which by definition is characterized by the presence of extraretinal neovascular tissue. An example of NPDR with diabetic macular edema (DME) is shown in Fig. 2, which also allows comparison between fluorescein angiography and OCT-A. A 74-year-old male with type 2 diabetes presented for evaluation. Color fundus photography demonstrates macular exudates, hemorrhages, and microaneurysms (Fig. 2A). Early-phase fluorescein angiography (1:08) demonstrates retinal vascular flow patterns and leakage from multiple microaneurysms (Fig. 2B). Late-phase fluorescein angiography (10:32) demonstrates leakage from the microaneurysms consistent with DME (Fig. 2C). The OCT-A retina slab (Fig. 2D) demonstrates a widened and irregular foveal avascular zone (FAZ). Multiple irregular vascular changes are apparent, consistent with microaneurysms, most notably superior


FIG. 2 Nonproliferative diabetic retinopathy (NPDR) with macular edema. (A) Color fundus photography, (B) early fluorescein angiography, (C) late fluorescein angiography, and (D) optical coherence tomography angiography (OCT-A).


FIG. 3 Proliferative diabetic retinopathy (PDR). (A) Color fundus photography, (B) optical coherence tomography angiography (OCT-A) retina slab, centered on the optic disc, and (C) OCT-A vitreoretinal interface slab, centered on the optic disc.

and nasal to the FAZ. Finally, an area of irregular vessels at the inferior border of the study, slightly nasally, is consistent with intraretinal microvascular abnormality (IRMA). An example of PDR is shown in Fig. 3. This patient also represents an example of clinically useful information being found in images other than the retina slab. A 56-year-old female with type 1 diabetes presented for evaluation. Color fundus photography (Fig. 3A) demonstrates prominent neovascularization of the disc (NVD). The OCT-A retina slab (Fig. 3B), centered on the optic nerve, demonstrates a dark area representing optical shadowing of NVD, which is anterior to the plane of the retina. The OCT-A of the vitreoretinal interface (VRI) slab (Fig. 3C) shows the flow signal through the NVD, with no normal retinal vessels in this plane. An example of PDR imaged with SS-OCT-A is shown in Fig. 4. A 43-year-old male with type 2 diabetes presented for evaluation. Color fundus photography (Fig. 4A) demonstrates prominent NVD. SS-OCT-A (Fig. 4B) demonstrates perfused neovascular tissue greater than that seen clinically. Another example of PDR imaged with SS-OCT-A is shown in Fig. 5. A 63-year-old male with type 2 diabetes presented for evaluation. Color fundus photography (Fig. 5A) demonstrates preretinal hemorrhage along the inferotemporal arcade and also superonasal to the center of the macula. Focal/grid photocoagulation marks are evident. Montage SS-OCT-A (Fig. 5B) reveals capillary nonperfusion and neovascularization elsewhere (NVE) along the inferotemporal vascular arcade just superior to the area of preretinal


FIG. 4 Proliferative diabetic retinopathy. (A) Color fundus photography and (B) swept-source optical coherence tomography angiography, centered on the optic disc.

FIG. 5 Proliferative diabetic retinopathy. (A) Color fundus photography and (B) montage swept-source optical coherence tomography angiography.

hemorrhage. Additional areas of capillary nonperfusion are evident superior to the disc and at the inferior edge of the image.

3.2 Retinal vascular occlusions

Retinal vascular occlusions may produce profound ischemic changes that are well imaged with OCT-A [4, 12].

3.2.1 Retinal vein occlusions

Retinal vein occlusion is the second most common retinal vascular disorder after diabetic retinopathy [5].


FIG. 6 Chronic branch retinal vein occlusion. (A) Color fundus photography and (B) optical coherence tomography angiography (OCT-A).

Branch retinal vein occlusion (BRVO)

OCT-A can provide striking images in patients with BRVO [6]. An example of a chronic BRVO is shown in Fig. 6. This patient demonstrates how OCT-A may reveal pathology not seen on clinical examination. A 75-year-old male with chronic BRVO, status-post treatment with anti-vascular endothelial growth factor (anti-VEGF) agents, presented for follow-up. Color fundus photography (Fig. 6A) demonstrates relatively mild retinal hemorrhages and microaneurysms, mostly temporal to the center of the macula. Collateral retinal vessels crossing the horizontal raphe are noted. OCT-A (Fig. 6B) more clearly demonstrates these collateral vessels. In addition, a widened and irregular FAZ is noted, as well as other areas of capillary nonperfusion, especially temporally. Another example of BRVO is shown in Fig. 7. This case represents an example in which the OCT-A arguably gives more clinically useful information than does the fluorescein angiography. An 85-year-old male with a chronic BRVO, status-post grid photocoagulation, returned for evaluation. Color fundus photography (Fig. 7A) shows sclerotic retinal vessels and some exudates temporal to the macula. Grid laser photocoagulation burns are evident. Early-phase fluorescein angiography (00:47) demonstrates delayed filling of the


FIG. 7 Chronic branch retinal vein occlusion (BRVO). (A) Color fundus photography, (B) early fluorescein angiography, (C) late fluorescein angiography, and (D) optical coherence tomography angiography (OCT-A).

superotemporal retinal vessels and early staining of the photocoagulation burns (Fig. 7B). Late-phase fluorescein angiography (5:09) demonstrates staining of vessel walls, staining of photocoagulation burns, and scattered areas of capillary nonperfusion (Fig. 7C). OCT-A, however, demonstrates a striking absence of flow signal in the area of the BRVO (Fig. 7D). The FAZ is widened and irregular as well. The fluorescein angiography is somewhat


misleading, because the widespread staining of photocoagulation burns may suggest the presence of macular edema. The OCT-A more clearly demonstrates the striking ischemia present in this case.

Central retinal vein occlusion (CRVO)

Central retinal vein occlusion (CRVO) is an occlusion of the central retinal vein at or near the optic nerve. An example of a CRVO is shown in Fig. 8. A 71-year-old male with a chronic history of CRVO, status-post anti-VEGF therapy, returned for follow-up. Color fundus photography (Fig. 8A) demonstrates retinal vascular tortuosity and dilatation, as well as scattered intraretinal hemorrhages. OCT-A clearly delineates the retinal vascular tortuosity and dilatation (Fig. 8B). There are relatively few areas of capillary nonperfusion in this example, suggesting a perfused (nonischemic) CRVO.

FIG. 8 Central retinal vein occlusion. (A) Color fundus photography and (B) optical coherence tomography angiography (OCT-A).


3.2.2 Retinal artery occlusion

Retinal artery occlusion results from obstruction of blood flow due to embolic, thrombotic, inflammatory, or other causes.

Branch retinal artery occlusion (BRAO)

An example of chronic BRAO is shown in Fig. 9. A 76-year-old male was evaluated. Color fundus photography shows a superotemporal embolus (Fig. 9A). The OCT thickness map shows wedge-shaped thinning in the distribution of the affected artery (Fig. 9B). OCT-A shows a diminished flow signal in the same area (Fig. 9C).

FIG. 9 Superior arcade branch retinal artery occlusion (BRAO). (A) Color fundus photography, (B) optical coherence tomography (OCT) thickness map, and (C) optical coherence tomography angiography (OCT-A).


Central retinal artery occlusion (CRAO)

The profound retinal ischemia in CRAO can be imaged by OCT-A. Examples of acute and chronic CRAO are shown in Figs. 10 and 11, respectively. A 77-year-old female presented with a 1-week history of visual loss. Color fundus photography (Fig. 10A) shows optic disc pallor and a macular cherry-red spot. OCT-A (Fig. 10B) shows widespread capillary nonperfusion with some perfusion centrally (consistent with the cherry-red spot).

FIG. 10 Acute central retinal artery occlusion (CRAO). (A) Color fundus photography and (B) optical coherence tomography angiography (OCT-A).

An example of a long-standing CRAO can be seen in Fig. 11. An 84-year-old female with a known history of chronic CRAO returned for follow-up. Color fundus photography (Fig. 11A) demonstrates optic disc atrophy and retinal vascular attenuation, as well as drusen and macular pigment alterations. OCT-A (Fig. 11B) reveals marked capillary nonperfusion with flow signal only through the major retinal vessels.


FIG. 11 Chronic central retinal artery occlusion (CRAO). (A) Color fundus photography and (B) optical coherence tomography angiography (OCT-A).

3.3 Macular telangiectasia (MacTel)

Macular telangiectasia (MacTel) is characterized by alterations in the capillary network of the macula and associated damage to all retinal layers, including the neurosensory retina. The most common form is MacTel type 2 [7]. OCT-A can provide clinically relevant images in patients with MacTel with or without secondary subretinal neovascularization [8]. An example of MacTel type 2 is shown in Fig. 12. This case represents another example of useful information being found in images other than the retina slab. A 71-year-old male presented for follow-up. Fundus photography (Fig. 12A and B) demonstrates gray discoloration of the retina with pigment changes and right-angle venules. The OCT-A retina slab of each eye (Fig. 12C–F) demonstrates a widened and irregular FAZ; the FAZ abnormality appears worse in the OCT-A superficial slabs of each eye. Recent studies evaluating MacTel with OCT-A have reported similar characteristics, in which the superficial retinal plexus appears to contract and extend into the deeper retinal plexus [9].

3.4 Prepapillary vascular loop

Prepapillary vascular loops can be arterial or venous and are a result of anomalous reabsorption of the primary vitreous. Most lesions are congenital, unilateral, and


FIG. 12 Macular telangiectasia (MacTel) type 2. (A) Color fundus photography, right eye, (B) color fundus photography, left eye, (C) optical coherence tomography angiography (OCT-A) retina slab, right eye, (D) OCT-A superficial slab, right eye, (E) OCT-A retina slab, left eye, and (F) OCT-A superficial slab, left eye.


FIG. 13 Prepapillary venous vascular loop. (A) Color fundus photography and (B) optical coherence tomography angiography (OCT-A), centered on the optic disc.

asymptomatic; however, associated vascular occlusion has been reported. Prepapillary vascular loops usually project from the optic disc into the vitreous cavity, return to the optic disc, and may be associated with a persistent posterior hyaloid vessel. A 56-year-old asymptomatic male with 20/20 visual acuity was found to have a venous prepapillary vascular loop in the left eye (Fig. 13A). OCT-A (Fig. 13B) demonstrates artifact toward the top of the image but does image part of the loop at the inferior pole of the disc; the loop is actually better visualized on the structural OCT.

4 Summary

OCT-A is an emerging noninvasive technology that allows the evaluation of posterior segment blood flow without dye injection. The benefits of OCT-A are most apparent in the care of patients with retinal vascular diseases. As clinical experience with OCT-A continues, ophthalmologists will develop greater expertise in interpreting the images.


References

[1] S.G. Schwartz, H.W. Flynn Jr., A. Grzybowski, A. Pathengay, I.U. Scott, Optical coherence tomography angiography, Case Rep. Ophthalmol. Med. 2018 (2018) 7140164.
[2] P.J. Rosenfeld, M.K. Durbin, L. Roisman, F. Zheng, A. Miller, G. Robbins, K.B. Schaal, G. Gregori, ZEISS Angioplex™ spectral domain optical coherence tomography angiography: technical aspects, Dev. Ophthalmol. 56 (2016) 18–29.
[3] S.Y. Wang, C.A. Andrews, W.H. Herman, T.W. Gardner, J.D. Stein, Incidence and risk factors for developing diabetic retinopathy among youths with type 1 or type 2 diabetes throughout the United States, Ophthalmology 124 (4) (2017) 424–430.
[4] S.C. Wu, V.M. Villegas, J.L. Kovach, Optical coherence tomography angiography of combined central retinal artery and vein occlusion, Case Rep. Ophthalmol. Med. 2018 (2018) 4342158.
[5] M. Ip, A. Hendrick, Retinal vein occlusion review, Asia Pac. J. Ophthalmol. (Phila) 7 (1) (2018) 40–45.
[6] S.G. Schwartz, A. Monroig, H.W. Flynn Jr., Multimodal images of chronic branch retinal vein occlusion, Int. Med. Case Rep. J. 10 (2017) 159–162.
[7] P. Charbel Issa, M.C. Gillies, E.Y. Chew, et al., Macular telangiectasia type 2, Prog. Retin. Eye Res. 34 (2013) 49–77.
[8] V.M. Villegas, J.L. Kovach, Optical coherence tomography angiography of macular telangiectasia type 2 with associated subretinal neovascular membrane, Case Rep. Ophthalmol. Med. 2017 (2017) 8186134.
[9] R.F. Spaide, M. Suzuki, L.A. Yannuzzi, A. Matet, F. Behar-Cohen, Volume-rendered angiographic and structural optical coherence tomography angiography of macular telangiectasia type 2, Retina 37 (3) (2017) 424–435.
[10] V.M. Villegas, A.L. Monroig, L.H. Aguero, S.G. Schwartz, Optical coherence tomography angiography of two choroidal nevi variants, Case Rep. Ophthalmol. Med. 2017 (2017) 1368581.
[11] V.S. Chang, S.G. Schwartz, H.W. Flynn Jr., Optical coherence tomography angiography of retinal arterial macroaneurysm before and after treatment, Case Rep. Ophthalmol. Med. 2018 (2018) 5474903.
[12] P. Shah, S.G. Schwartz, H.W. Flynn Jr., Multimodal images of acute central retinal artery occlusion, Case Rep. Ophthalmol. Med. 2017 (2017) 5151972.
[13] P.E. Stanga, E. Tsamis, A. Papayannis, F. Stringa, T. Cole, A. Jalil, Swept-source optical coherence tomography angio (Topcon Corp, Japan): technology review, Dev. Ophthalmol. 56 (2016) 13–17.

11

Screening and detection of diabetic retinopathy by using engineering concepts

Wani Patil, Prema Daigavane

Electronics Engineering, G. H. Raisoni College of Engineering, Nagpur, India

1 Introduction

Diabetic retinopathy (DR), a vision-threatening eye disease, is a complication of the systemic disease diabetes, caused by damage to the blood vessels of the sensitive retina [1]. Eventually, DR can lead to blindness, and nearly 80% of people who have had diabetes for 10 years or more suffer from DR [2]. When the pancreas is unable to produce sufficient amounts of insulin, or fails to produce insulin at all, the person suffers from the chronic disease called diabetes. Diabetes is caused by damage to the insulin-producing cells; insulin is an important hormone that regulates the blood sugar level of the human body [3]. If the blood sugar level rises above normal due to uncontrolled diabetes, a condition known as hyperglycemia, it can over a period of time cause severe damage to the sensitive organs of the body, including the kidneys, liver, nerves, and the delicate blood vessels of the retina [4]. There are two types of diabetes: Type 1 and Type 2. Insulin-dependent, childhood-onset diabetes falls under Type 1, which is not preventable and requires daily administration of insulin [3], whereas non-insulin-dependent or adult-onset diabetes is Type 2, in which the body fails to produce a sufficient amount of insulin. Type 2 diabetes is caused by an unhealthy lifestyle, including overweight, physical inactivity, etc., and the majority of people suffering from diabetes fall under this category; recently, many children have also been found to suffer from this type [3]. The World Health Organization (WHO) reported that the number of people with diabetes rose from 108 million to 422 million between 1980 and 2014, while the prevalence of diabetes over the same period rose from 4.7% to 8.5%. About 2.6% of blindness worldwide can be attributed to diabetes [3]. The blindness due to DR is preventable, but DR can cause moderate to severe vision impairment [5]. A 2010 estimate indicated that DR is found in more than one-third of diabetic patients and that about one-third of these were found to suffer from severe vision impairment due to DR, especially severe nonproliferative DR (NPDR) and proliferative DR (PDR) [6]. Thus, diabetes, which is


growing at an alarming rate, is one of the most challenging issues in health care; according to the WHO, the number of people with diabetes is expected to increase to 350 million over the next 25 years [7]. Only 50% of patients know the complications of the disease, which makes the condition worse. From a medical perspective, diabetes shows late complications, which include major and minor vascular changes resulting in heart disease as well as renal and retinal problems [8]. Due to inadequate treatment, developing countries face severe consequences, as people with diabetes have roughly 25 times the risk of developing blindness compared with those without diabetes [9]. Because DR is a silent disease, the patient often recognizes the changes in the retina only at an advanced stage, when treatment is complicated and nearly impossible; hence, the treatment of DR is most effective at the early stages of the disease. Thus, increased awareness and early detection of the disease through regular screening are of paramount importance. Annual DR screening plays a vital role in preventing vision impairment in diabetic patients; in the UK, annual DR screening is offered regularly to all diabetic patients aged 12 and above. As a result, DR screening is gaining attention in the research field due to the global increase in the number of patients diagnosed with diabetes. However, diagnosing disease across such a large volume of images manually is time consuming and costly. To lower the cost of such screening, semiautomatic screening aids are in high demand; partly as a result of systematic screening, DR is no longer the leading cause of blindness in the working-age population in England [9]. For the assessment of such large databases, the design of automated systems for DR screening has gained the attention of researchers in multidisciplinary fields. Further research in the diagnosis of DR is being conducted to help detect DR at the initial stage and thus prevent vision loss. The semiautomated or automated detection of DR has merged the engineering field with the medical field. In clinical analysis, a new database of fundus images taken from diabetic patients lays a test bed for the design of automatic or semiautomatic vessel extraction algorithms. Computer-aided systems help in the timely and accurate detection of DR using digital retinal images known as fundus images. The automated system uses fundus images, which contain details of the retinal blood vasculature, macula, optic disc, and fovea. DR changes the blood vessel structure of the retina, causing abnormalities, and the automated system should be able to detect these abnormalities accurately. Automated systems designed for DR screening combine computer vision with image processing, machine learning, and statistical analysis techniques. This state of the art yields results of scientific, technical, and clinical importance through a comprehensive, detailed, robust, and complete study. This chapter explores the engineering techniques used to thoroughly investigate the abnormalities caused to the retinal vasculature during the progression from diabetes to DR, utilizing and developing different tools involving image processing, statistical analysis, and machine learning.


1.1 Anatomy of the eye

The eye is the most important sensory organ and the source of vision in humans. The nervous system of the human brain processes visual input into information that can be recognized by the human being. Like a camera, the eye has a lens to focus light; but unlike the camera, which uses film, the eye uses a special layer of cells known as the retina to create a picture, enabling it to focus over a wide range of objects of different texture, size, and contrast at a higher speed than a camera. As shown in Fig. 1, light first enters the eye through the cornea, which refracts it; the aqueous humor fills the firm, curved anterior chamber of the eye. The iris then compensates for changing light conditions by contracting or relaxing as the light travels through the pupil. Subsequently, by squeezing and stretching, the lens focuses the rays onto the retina. The fundus is the interior surface of the eye that lies opposite the lens [10]. A multilayered sensory tissue known as the retina, which contains millions of photoreceptors, lies at the back of the eye. The main function of these photoreceptors is to convert the captured light into electrical impulses, which travel through the optic nerve and result in the formation of images [10]. Photoreceptors include rods and cones, named according to their shape. Rods detect the movement of objects, as they respond instantly to contrast changes even in low light; however, rods cannot sense color and are imprecise. The rods are located at


FIG. 1 Schematic view of the anatomy of the eye.


the periphery of the retina and generally enable scotopic (night) vision. Cones, on the other hand, are concentrated in the macula, which consists of high-precision cells responsible for photopic (day) vision, with the capability of identifying the colors of objects. The central part of the macula, known as the fovea, provides the best visualization of objects along with the capability of distinguishing them [11]. DR affects the blood vessels in the retina and causes peripheral vision loss, which may go unnoticed for some time, or central vision loss due to damage to the macula. Fig. 2 shows the serious effects of vision loss due to DR compared with the normal vision of a human being.

1.2 Worldwide scenario

Diabetes increases the blood sugar level, which damages the retinal blood capillaries and results in DR, causing blood and fluid to leak from the retinal blood vessels into the retina. This produces visual disorders, leading to the occurrence of microaneurysms and hemorrhages, which in turn lead to the formation of hard exudates and cotton wool spots or venous loops in the retina [12]. DR is classified into two types depending on the severity of the disease, nonproliferative DR (NPDR) and proliferative DR (PDR) [12, 13], which are further classified depending on the specific features of DR, as discussed in the following.

1.2.1 Mild NPDR

This is the initial stage of DR, in which the patient's vision is not affected much, but blurring of vision is caused by lens swelling from hyperglycemic episodes or by cataract formation due to diabetes, as shown in Fig. 3. Mild signs of NPDR are seen in approximately 40% of people with diabetes [14].

1.2.2 Moderate NPDR

The next, more advanced stage of DR is marked by the presence of numerous small bulges in the blood vessels, known as microaneurysms (indicated by the yellow circle in Fig. 4), and retinal

FIG. 2 Effects of the loss of central vision due to diabetic retinopathy. The left image shows the vision of a healthy person whereas the right image indicates the vision of a patient with damaged macula region. Image courtesy the National Eye Institute.


FIG. 3 An image of a patient affected by mild NPDR with no overt sign of DR. The retinal vessels can be seen clearly at the center of the OD, and the fovea lies within 1 disc diameter.

FIG. 4 Image showing the signs of moderate NPDR: microaneurysms in the background along with retinal hemorrhage(s) and exudates causing maculopathy.


hemorrhages. Hemorrhages are blood spots that appear on the retina due to blood leakage from the damaged blood vessels; these spots are temporary and disappear after some time. This stage is not very serious, but it causes pericyte loss in the blood vessels, which a patient may notice, although no further treatment is taken until the next screening. Patients may also notice some exudates, which may leak fluid into the retina, along with the formation of cotton wool spots and venous beading, as shown in Fig. 4.

1.2.3 Severe NPDR

Deep round hemorrhages occur due to the blockage of a greater number of blood vessels, resulting in pericyte damage to the vessels. Cotton wool spots (CWS) may form due to the blockage of axoplasmic flow, which may result in ischemic areas. Intraretinal microvascular abnormalities (IRMA) form in the retina due to the release of the vascular endothelial growth factor (VEGF) hormone triggered by the blockage of the blood vessels. This growth factor promotes the development of weak blood vessels in the retinal vascular structure; thus, tiny new vessels become trapped within the retina, as shown by the yellow circles in Fig. 5, along with venous beading and venous loops.

1.2.4 PDR

Blockage of the blood vessels that nourish the retina triggers the growth of new blood vessels in the retinal vasculature, giving rise to a condition known as

FIG. 5 The pre-PDR signs, including IRMA, hemorrhages, cotton wool spots, and venous loops, are indicated by yellow circles.


neovascularization. As a result, severe NPDR advances into PDR, with new blood vessels growing in the retina, typically as new vessels at the disc (NVD) or elsewhere (NVE). These newly grown blood vessels are weak, which results in leakage of blood or fluid into the vitreous region of the retina, clouding vision and causing vitreous hemorrhage. The leaked blood can contaminate the vitreous region and cause blindness or severe loss of vision [15], resulting in distortion or traction of the retina, as shown by the yellow circles in Fig. 6. Severe visual loss may occur in about 3% of people affected by PDR (Tables 1 and 2).

FIG. 6 NVD and NVE along with preretinal hemorrhage (circled in yellow on the right).

Table 1 Elaborating the early signs of DR.

Sr. no. | Hemorrhages | Disease | Feature | Region of eye
1. | Microaneurysms | Early sign of diabetic retinopathy | Small red dots | Secondary capillary wall
2. | Dot and blot hemorrhages | Early sign of diabetic retinopathy | Similar to microaneurysms | Deeper layers of inner nuclear layer
3. | Float hemorrhages | Hypertension | Horizontal red spots | In the small veins
4. | Flame-shaped hemorrhages | High pressure | Fire-flame shaped | Superficial nerve fiber layer from arterioles or veins
5. | Bolt hemorrhages | Intracranial pressure and trauma | Small and round in shape, like trauma | Bleeds and breaks in the space between retina and vitreous
6. | Sub-macular hemorrhage | Age-related macular degeneration | Leaks near the vessels (smaller in shape) | Rupture of choroidal vessels under fovea
7. | Cotton wool spots | Ischemia | Gray spots with soft edges | Superficial retina
8. | Hard exudates | Macular edema | Yellow-colored fatty lipids | Macular retina
9. | Venous loops and venous beading | Increase in ischemia | Dark red spots | Adjacent to areas of nonperfusion
10. | Intraretinal microvascular abnormalities | Abnormalities due to microvascular changes | Leaks near the vessels (smaller in shape) | Borders of nonperfused retina
11. | Macular edema | Complicated diabetic retinopathy | Leaking of fluid results in its development | Light-sensitive thin layer of tissue at the back of the retina

Table 2 Proliferative diabetic retinopathy.

Sr. no. | Disease | Feature | Region of eye
1. | Neovascularization | Leakage of blood in vessels | Retinal vessels
2. | Preretinal hemorrhages | Pockets of blood, boat shaped | Develops in the space between the retina and the posterior hyaloid face
3. | Vitreous hemorrhages | Clumps of blood clots within the gel | Vitreous region of the eye
4. | Fibrovascular tissue proliferation | Vessels regressed | Associated with neovascularization
5. | Traction retinal detachment | Complete loss of vision | —

1.3 Benefits of early screening of DR

DR is one of the major causes of vision loss in Western countries, and the WHO estimates that by 2025 the diabetic population may rise to 300 million [16], a major concern. However, severe vision loss in diabetic patients can be reduced by more than 50% if timely, early detection and regular treatment are provided at the initial stage of DR [17]. Fig. 7 compares a normal eye fundus image with that of a DR-affected eye; a number of abnormalities such as hemorrhages, abnormally grown blood vessels, cotton wool spots, and hard exudates can be observed in the figure. The computer-based systems used for the prevention and treatment of visual loss or blindness are relatively inexpensive compared with conventional health care and rehabilitation costs [18]. Due to the limited number of ophthalmologists and the increasing impact of diabetes, screening capacity is limited. Hence, the timely detection of the


FIG. 7 Difference between normal eye and DR affected eye.

FIG. 8 Data flow diagram of the diagnostic system described in Ref. [19].

various types of DR by means of an automatic or semiautomatic system is gaining researchers' attention for saving sight in the human population. Fig. 8 shows the system developed at Oak Ridge National Laboratory and the UT Hamilton Eye Institute for the diagnosis of DR [19]. The main objective of the designed


approach is to diagnose the visual content of the retinal fundus image by automatically estimating the presence and stratification of retinopathy. The automated system is intended to be suitable for nonclinical environments such as pharmacies or shopping malls, similar to the omnipresent automatic blood pressure monitors that anyone can use with basic training. The diagnostic system shown in Fig. 8 starts with the analysis of the fundus image captured from the camera, followed by the following assessments:

• Quality assessment: the fundus image is judged for its quality; the system requests a retake of the image if the test is not passed.
• Locating anatomic structures: different anatomical features of the eye, such as the blood vasculature, optic disc, and macula center, are located using different strategies as described in Ref. [20].
• Detection of lesions: lesions are detected by segmentation; morphological reconstruction methods are used for lesion segmentation [21].
• Extraction of features: different statistical features of the lesion are calculated, such as the vascular density within the lesion, the moments related to the central macula, and the sharpness and smoothness of the lesion boundary [22].

Depending on the extracted features, the test image is either graded for lesion severity using the previously trained database or transmitted to an ophthalmologist. Using the dynamically trained database, the ophthalmologist classifies the image with a particular disease according to its severity level. Based on the diagnosis for the recommended test image, a diagnosis report with the severity level (normal, moderate, or severe) is prepared and presented to the operator. Table 3 illustrates the classification of NPDR into different levels.

2 Engineering concepts used in the diagnosis of DR

Currently, a multidisciplinary approach is used in the medical field to diagnose and evaluate disease and to plan treatment more accurately and intuitively. Engineering is the most

Table 3 Nonproliferative diabetic retinopathy.

Sr. no. | Severity | Disease
1. | Mild | Indicated by the presence of at least one microaneurysm
2. | Moderate | Presence of hemorrhages, microaneurysms, and hard exudates
3. | Severe | Presence of hemorrhages, microaneurysms, and venous beading, along with intraretinal microvascular abnormalities


prominently used field for designing, implementing, and evaluating novel approaches that diagnose disease more accurately, intuitively, and effectively, conveying information through images. Image processing is an attractive, challenging, and versatile field in which an image is converted into digital form and subjected to operations such as enhancement or the extraction of useful information. The input (for example, to MATLAB) may be an image or video frames; signal processing is applied, and a new image is generated at the output whose characteristics are changed by the applied processes. The image is treated as a two-dimensional array carrying the image information, and depending on the characteristics of the image, a set of different signal processing techniques is applied to it. Today, image processing is a rapidly growing technology, widely used in various fields of business, and hence a center of attraction for researchers from different disciplines. The basic steps used in image processing are as follows:

• The image taken by an optical scanner or digital photography is imported as input to the system.
• The image is analyzed and manipulated depending on the application.
• Finally, depending on the applied processing, an output image is obtained with new and enhanced characteristics based on the required expectations.

The processing that alters the characteristics of the input image may be divided into five groups:

• Visualization: enhancement of image intensities to observe fine objects that are not visible to the naked eye.
• Image preprocessing: creating an enhanced image, for example by enhancing its contrast with different algorithms and filters.
• Image retrieval: extracting the region of interest (ROI) from the image.
• Pattern measurements: measuring different objects present in the image.
• Image recognition: discriminating the objects present in the image.

The diagnosis of DR based on digital image analysis has huge benefits, allowing researchers to examine a large number of retinal images at low cost in time-efficient automated systems. The analysis and comparison of human fundus images are the fundamental steps toward the investigation of diseases of the retina; this is done by recording the eye fundus at a particular instant of time, developing a dynamic process for the detection of the disease, and registering an instant of the visible state [23]. The design of an automated system has key benefits, including image viewing and image management, which allow the researcher to monitor disease progression. Earlier, photographic film was used for processing; currently, digital imaging is used, as it has all the advantages of visual review, immediate storage, easy transfer and transmission, and exact duplication without information loss or degradation of the original. It also allows easy manipulation of the image for filtering, digital contrast enhancement, magnification, illumination correction, etc. [24].


A novel framework and methodology for the design of an automated system is presented in this chapter, which will enable us to study the impairment of the blood vasculature and identify DR caused by diabetes. This chapter discusses the methodology used for the screening and validation of DR. The engineering approach applies the following image processing steps once the retinal image is fed into the system (Fig. 9); a minimal code skeleton of this pipeline is given after the list.

• preprocessing of the images
• segmentation of the images
• extraction of features from the images
• comparison of the extracted features of the affected image and the normal image
• classification of diseases along with severity grading on a normal, moderate, and severe scale
• validation of the diseases
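As a rough illustration of how these stages fit together, the skeleton below wires them into an SVM-graded pipeline with scikit-learn. The stage functions preprocess, segment_vessels, and extract_features are hypothetical placeholders for the techniques discussed in this chapter, not its actual implementations.

import numpy as np
from sklearn.svm import SVC

def screen_image(train_images, train_grades, test_image,
                 preprocess, segment_vessels, extract_features):
    """Grade a fundus image on a 0=normal, 1=moderate, 2=severe scale."""
    features = [extract_features(segment_vessels(preprocess(im)))
                for im in train_images]
    classifier = SVC(kernel="rbf").fit(np.asarray(features),
                                       np.asarray(train_grades))
    test_feats = extract_features(segment_vessels(preprocess(test_image)))
    return classifier.predict(np.asarray(test_feats).reshape(1, -1))[0]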

3 Collection of the database for the screening of the disease

3.1 Fundus photography

The fundus image of the eye can be captured by fundus photography using a specialized camera known as a fundus camera, a flash-enabled camera with an intricate microscope. A fundus photo contains a detailed visualization of the main structures of the eye, such as the central and peripheral retina, optic disc, and macula [25]. The fundus image is a 24-bit image with a resolution of 768 × 576 pixels, comprising 8-bit red, green, and blue (RGB) channels, in JPEG format. Retinal photography uses two types of fundus camera, mydriatic and nonmydriatic. A nonmydriatic digital fundus camera with a 45° orientation and one-field photography is used for screening macular ischemia. In one-field photography, dot or blot

[Flow chart: retinal images from databases → preprocessing of retinal images → segmentation for the detection of three DR diseases → feature extraction → SVM classification of the disease → normal / moderate / severe.]

FIG. 9 Flow chart of the automated system used for DR diagnosis.


hemorrhages and hard exudates are properly visualized, whereas in two-field photography, abnormalities like neovascularization and soft exudates are visualized.

3.2 Online fundus database

A researcher requires a huge database of fundus images for the design of an automated system, and the research commences with the collection of such a database. Online databases contain different types of abnormalities caused by diabetes, including diseases like early-stage and severe-stage DR, glaucoma, age-related macular degeneration, etc. They provide an alternative collection of data and hence good retinal images for carrying out the research and training phases. Freely available public datasets for the diagnosis of DR include Messidor, Retinopathy Online Challenge, STARE, DRIVE, DiaretDB1, DiaretDB0, CMIF, and ReviewDB. Researchers can download these databases from the internet and access them freely to develop, compare, and validate different automated methods for DR screening [26].

4 Design of the automated system for DR screening

DR causes morphological changes in the blood vascular structure of the retina, such as changes in the diameter, length, branching angles, or tortuosity of the vasculature, which result in different abnormalities. These abnormalities are caused by systemic diseases such as diabetes, hypertension, and cardiovascular disease [27]. The quantitative and descriptive study of the blood vascular structure within an automated system is a challenging task due to the variable contrast level in the images; the noise present in the image makes the task harder still, along with changes in blood vessel width and the lesions present in the image. As a result, segmentation of the blood vessels plays a vital role in the design of the automated system, and it can be accomplished using three distinct approaches: thresholding methods, tracking methods, and machine-trained classifiers. The thresholding approach uses different operators and filters, such as Sobel operators, Laplacian operators, and Gaussian filters, to enhance the contrast between blood vessels and background, and then selects an appropriate gray threshold value [28]. However, selecting the gray threshold value is a crucial task: too small a threshold adds noise to the image, while too large a threshold causes the loss of small fine vessels. Hence, adaptive or local thresholding is used for the segmentation of the image; a minimal sketch of this approach follows this paragraph. The tracking method automatically tracks the center of a vessel along its longitudinal axis from the starting point to the end point [29]; its disadvantage is that vessel crossings and bifurcations may create confusion during tracking. Third, the machine classifier method uses classifiers such as the Bayesian classifier, neural networks, and the support vector machine to discriminate vessels from nonvessels [29, 30].
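A minimal sketch of the local-thresholding approach, assuming scikit-image; the CLAHE enhancement step and all parameter values are illustrative choices, not this chapter's exact settings.

from skimage import exposure, filters

def segment_vessels(gray):
    """gray: 2-D float fundus image in [0, 1]; returns a binary vessel mask."""
    enhanced = exposure.equalize_adapthist(gray)  # CLAHE contrast enhancement
    # A local (adaptive) threshold avoids losing fine vessels to one global cutoff.
    local_t = filters.threshold_local(enhanced, block_size=35, offset=0.01)
    # Vessels are darker than their local background in grayscale fundus images.
    return enhanced < local_t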


In this chapter, an automated system for the detection and analysis of macular ischemia (MI) is proposed. MI causes small bulge-shaped blockages in the veins of the retinal blood vessels due to a lack of oxygenated blood flow toward the macula. MI is an early stage of DR, and research into its early detection is ongoing because timely detection prevents severe vision loss and blindness. Vessel segmentation plays a very important role in the detection of MI, since the depleted blood flow produces a geometric change in the width of the blood vessels: the veins shrink due to the lack of oxygen flow toward the macula. The macula is the central portion of vision in the eye, and any abnormality in it causes blindness through severe vision loss. The proposed automated MI system extracts blood vessels by a graph-trace method, identifying the nodes of the vessels, and extracts features for the detection of arteries and veins. The classification into arteries and veins is then done with the KNN algorithm. Next, the vein diameter is estimated by comparison with the diameter in a normal eye, and the grading of the MI is done with an SVM classifier. For the proposed system, the fundus image database is taken from the DRIVE database, and the approach is described by the flow chart shown in Fig. 10.

FIG. 10 Flow chart describing the automated system for MI detection: retinal image → image conversion → vessel segmentation → graph and feature extraction → grouping → classification into artery and vein.


4.1 Methodology

The test image taken from the database has a resolution of 768 × 584 pixels and is resized in the design. For better visualization of the blood vessels, the fundus image is first converted to grayscale, since the retinal vasculature has poorer reflectance values than the retinal background. In the proposed system, the red channel is extracted from the RGB fundus image: the blood vasculature is better delineated in the red channel's grayscale values, and since the red component is less noisy, it carries significant information that can be extracted from the fundus image (Fig. 11).

4.2 Image enhancement

In our proposed work, preprocessing of the image is an important part of the system: the test image is first filtered, which is accompanied by the generation of the point spread function (PSF) of the image. The PSF is the impulse response of the imaging system (i.e., the camera): the image of a point source, incorporating the spatial description of the optical transfer function of the camera [31]. The PSF is independent of the object position and hence shift invariant. With magnification M, and assuming no distortion, the image plane coordinates are given by

$$(x_i, y_i) = (M x_o, M y_o) \tag{1}$$

Thus, the object plane field is mathematically represented by the convolution integral

$$O(x_o, y_o) = \iint O(u, v)\,\delta(x_o - u,\; y_o - v)\,du\,dv \tag{2}$$

where δ(x_o − u, y_o − v) represents the shifting property of the impulse function for a 2D object. Hence the image plane field, containing the description of the object plane field, is expressed as a superposition of the images of the individual impulse functions:

$$I(x_i, y_i) = \iint O(u, v)\,\mathrm{PSF}(x_i/M - u,\; y_i/M - v)\,du\,dv \tag{3}$$

where the impulse function δ(x_o − u, y_o − v) gives rise to PSF(x_i/M − u, y_i/M − v). In the preprocessing stage, we use a filter that approximates the linear motion of the camera over a given number of pixels at an angle θ in the counterclockwise direction by convolving the test image with the PSF. To restore the features of the image from this degradation, inverse filtering recovers the image by deconvolution. However, the performance of inverse filtering is degraded by additive noise, so Wiener filtering is used to remove the additive noise and smooth the image [32]. Wiener filtering achieves the minimum mean square error (MMSE) using the Fourier-transform orthogonality principle:

$$W(f_1, f_2) = \frac{H^{*}(f_1, f_2)\,S_{xx}(f_1, f_2)}{\lvert H(f_1, f_2)\rvert^{2}\,S_{xx}(f_1, f_2) + S_{\eta\eta}(f_1, f_2)} \tag{4}$$

FIG. 11 Details of the proposed methodology: input retinal RGB image → grayscale conversion → filtering → morphological operations → thinning → extraction of features (intensity, hue, saturation, and standard deviation of centerline pixels) → artery/vein classification → performance graph.

where S_{xx}(f_1, f_2) is the power spectrum of the original image, S_{ηη}(f_1, f_2) is that of the additive noise, and H(f_1, f_2) is the blurring filter; the Wiener filter thus combines an inverse-filtering part with a noise-smoothing part.
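As a concrete illustration of this preprocessing stage, the sketch below (a Python approximation under assumed motion parameters, not the chapter's MATLAB implementation) builds a linear-motion PSF, degrades the red channel with it, and restores the result by Wiener deconvolution:

```python
# Sketch of the preprocessing stage: linear-motion PSF followed by
# Wiener deconvolution (Eq. 4); length, angle, and balance are assumptions.
import numpy as np
from scipy.ndimage import convolve, rotate
from skimage import io, restoration

def motion_psf(length=9, angle=30):
    """A horizontal line of `length` pixels rotated `angle` degrees
    counterclockwise, normalized to sum to 1."""
    psf = np.zeros((length, length))
    psf[length // 2, :] = 1.0
    psf = np.clip(rotate(psf, angle, reshape=False), 0, None)
    return psf / psf.sum()

red = io.imread("fundus.jpg")[:, :, 0] / 255.0  # red channel, as in Sec. 4.1
psf = motion_psf()
blurred = convolve(red, psf)                    # simulated degraded image
# `balance` plays the role of the noise-to-signal ratio in Eq. (4).
restored = restoration.wiener(blurred, psf, balance=0.01)
```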

4.3 Vessel segmentation

The fine and thin vessels can be extracted using morphological operations, which act on the form, structure, or shape of any object present in the test image. Morphology follows a nonlinear mathematical principle that takes two sets of data as input: the test image itself and a structuring element used as a mask. The test image is at gray level, and the structuring element is a matrix of zeros and ones [33], as shown in Fig. 12. Fig. 12 shows a disc-shaped structuring mask of radius 3; other mask formats, such as line-shaped, ball-shaped, disc-shaped, or square-shaped elements, can also be used, but in our proposed work we recommend a disc-shaped structuring element because the blood vasculature locally resembles a disc. In the morphological operation, the mask slides over each pixel of the test image, producing an output in which pixel value 1 indicates where the mask fits and value 0 where it does not, as discussed in the following. The erosion and dilation operators are given by Eqs. (5) and (6):

$$(A \ominus B)(x, y) = \min_{u,v}\{A(x+u,\; y+v) - B(u, v)\} \tag{5}$$

$$(A \oplus B)(x, y) = \max_{u,v}\{A(x-u,\; y-v) + B(u, v)\} \tag{6}$$

where A(x, y) is the gray-level image matrix and B(u, v) is the structuring element matrix. The dilation operation potentially fills in small holes and connects disjoint objects, thus expanding the test image, whereas the erosion operation etches away object boundaries, shrinking the image. The outcome of dilation and erosion depends on the proper selection of the structuring element, which is customized to the application.

0 0 0 1 0 0 0
0 0 1 1 1 0 0
0 1 1 1 1 1 0
1 1 1 1 1 1 1
0 1 1 1 1 1 0
0 0 1 1 1 0 0
0 0 0 1 0 0 0

FIG. 12 Disc-shaped structuring element (mask) with radius 3.

A combination of erosion and dilation is used for segmentation of the vessels, expressed by the opening and closing operations of Eqs. (7) and (8); in our proposed work, the closing function fills the vessels, whereas the opening function removes any peak values other than vessel values:

$$A \circ B = (A \ominus B) \oplus B \tag{7}$$

$$A \bullet B = (A \oplus B) \ominus B \tag{8}$$

where A is the grayscale image, B is the binary structuring element, ⊕ denotes dilation, and ⊖ denotes erosion. Thus, the opening function removes weak connections between objects and small details, and the closing function removes small holes. The morphological operations also aid contrast enhancement of the image through two transformations: the top-hat and the bottom-hat transformation. The choice between them depends on the polarity of the object under test: in general, the top-hat transformation suits light objects on a dark background, whereas the bottom-hat transformation suits dark objects on a light background. Eqs. (9) and (10) define both [34]:

$$\mathrm{TopHat}(A) = A_{TH} = A - (A \circ B) \tag{9}$$

$$\mathrm{BottomHat}(A) = A_{BH} = (A \bullet B) - A \tag{10}$$

As expressed by Eq. (9), the top-hat transformation subtracts the opening of the image from the original image, whereas Eq. (10) shows the bottom-hat transformation, in which the original image is subtracted from the closing of the image. The fundus gray image has dark objects on a white background, and in our proposed work we are interested in extracting blood vessels for the screening of MI. As a result, the bottom-hat transformation is used in our work, removing from the image the objects other than the blood vasculature; the top-hat transformation, by contrast, removes the objects that do not fit the structuring element. Thus, connectivity of the objects is obtained in the resultant image along with contrast enhancement. The retinal blood vasculature contains both large and thin vessels connected together. We are interested in extracting small vessels for MI screening, and hence larger vessels, whose area is >15 pixels, are removed from the image using the area property of the image. For the detection of MI, the vessel width plays a vital role, since MI decreases vessel width through shrinkage of the blood vessels. The perimeter pixel values of the extracted blood vessels are pivotal in measuring vessel width: a nonzero pixel belongs to the perimeter of an object if it is connected to at least one zero-valued pixel. For a good approximation of the vessel perimeter, 8-connectivity is used to determine the connectivity of pixels inside the region to pixels outside it (Fig. 13).
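A minimal sketch of this morphological stage, assuming scikit-image and a placeholder gray threshold, is given below; it applies the bottom-hat transform of Eq. (10) with the disc mask of Fig. 12 and then discards components larger than 15 pixels:

```python
# Sketch of the morphological stage: bottom-hat transform plus removal of
# large vessels; the gray threshold (10) is an illustrative assumption.
from skimage import io, morphology

gray = io.imread("fundus.jpg")[:, :, 0]   # grayscale working image
selem = morphology.disk(3)                # disc-shaped mask of radius 3

# Bottom-hat: (A closed by B) - A, which highlights dark vessels
# on a light background (Eq. 10).
bothat = morphology.black_tophat(gray, selem)
binary = bothat > 10                      # assumed gray threshold

# Keep only small vessels: remove connected components of area > 15 px
# by subtracting the image that contains only the large objects.
large = morphology.remove_small_objects(binary, min_size=16)
small_vessels = binary & ~large
```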


FIG. 13 Vessel segmented by morphological operation.

4.4 Thinning

Thinning produces a skeleton of the blood vessels with unit thickness by applying a morphological operation repeatedly, removing border pixels from the image until each object shrinks to a minimally connected stroke. This process is known as thinning of the blood vessels, and it yields the centerline image [35] shown in Fig. 14.

FIG. 14 Segmentation of vessels.
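A one-line sketch of this step, assuming the binary vessel mask from the previous stage is available under the hypothetical name `small_vessels`:

```python
# Sketch of the thinning step: iterative thinning down to a
# one-pixel-wide centerline image.
from skimage.morphology import skeletonize

centerline = skeletonize(small_vessels)   # unit-thickness vessel skeleton
```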


FIG. 15 (A) Segmented retinal vessel and (B) centerline pixels image.

FIG. 16 Feature extraction of the images.

The end points, split points, crossover points, and intersection points of the extracted blood vessels are identified from these centerline pixels by using graph trace method (Figs. 15 and 16).

4.5 Graph-based approach

The small bulges in the vascular structure of the image that indicate MI are tracked using a graph-based trace method. During the computation and identification of the blood vessels, crossover points and vessel bifurcations pose the major issues for a tracing algorithm. The advantage of the graph-based algorithm is that it can simultaneously segment the vessel edges and measure the width of the retinal vessels in fundus images. Our main objective is to extract the binary tree of the blood vessels for subsequent vessel measurements.


The graph-based approach identifies blood vessels along with their crossover points and searches for the optimal forest (set of vessel trees). The segmentation of the centerline pixel image starts with the extraction of seed points from the blood vessels, and the topological connectivity of the vessels is extracted as the lines present in the centerline pixel image. From these seed points the nodes are extracted, which helps to distinguish true vessel segments from false ones. The end points, split points, crossover points, and intersection points of the extracted blood vessels are identified by taking P as the set of all white pixels in the line image; two pixels p_i and p_j are considered connected if adj(p_i, p_j) holds within the set P\{p_i, p_j} [36]. The graph-based algorithm uses the eight neighborhood pixels p1 to p8 surrounding a central pixel p to compute the crossing number in the blood vessels. As shown in Fig. 17, xnum(p) counts the black-to-white transitions around p, and depending on xnum(p) the nodes are classified as follows (a code sketch follows this list):

• If xnum(p) = 1, the node is an end point.
• If xnum(p) = 2, the node is an interior (connecting) point.
• If xnum(p) = 3, the node is a split point.
• If xnum(p) ≥ 4, the node is an intersection point.
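The rule above can be sketched as follows (an illustrative reimplementation, not the code of Ref. [36]); `skel` is the binary centerline image as a NumPy boolean array and (r, c) an interior pixel of it:

```python
# Sketch of node labeling by crossing number: count 0->1 transitions
# around the 8-neighborhood of a centerline pixel.

def xnum(skel, r, c):
    """Number of black-to-white transitions when walking the eight
    neighbors of (r, c) in circular order; (r, c) must not touch
    the image border."""
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    vals = [int(skel[r + dr, c + dc]) for dr, dc in ring]
    return sum(a == 0 and b == 1
               for a, b in zip(vals, vals[1:] + vals[:1]))

def node_type(skel, r, c):
    n = xnum(skel, r, c)
    if n == 1:
        return "end point"
    if n == 2:
        return "interior point"
    if n == 3:
        return "split point"
    return "intersection point" if n >= 4 else "isolated pixel"
```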

Nonvessel points are detected along with the connected sets of pixels. The main aim of our proposed work is to detect the true blood vessels from the node analysis and to discard the false blood vessels from the extracted retinal vessel tree. The extracted tree contains vessels that cross each other and segments with directional changes between them; hence it is essential to identify the crossover points as well as the directional change between segments in order to detect the true blood vessels.

4.5.1 Crossover location identification In retinal blood vessels, frequently, the vessels cross each other over the point or over the segment forming the crossover point or crossover segment, respectively. The crossover point is detected as follows.

FIG. 17 Junction samples: (A) surrounding pixels p1 to p8; (B), (C) shaded pixels show junctions [36].


The centerline image consists of a set of white pixels P; a junction J is a crossover point if four or more segments are adjacent to it, i.e., cross(J) is true if |{s ∈ S_P : adj(s, J)}| ≥ 4. Similarly, the directional change between segments is identified by considering the crossover segment, which occurs when two different vessels share a segment. A short segment between two junctions can be mistaken for a crossover segment; to avoid this, the directional change between adjacent segments and their pixel intensity values are used to differentiate crossover segments. Let s_a and s_b be two segments adjacent to a common junction, and let p_a and p_b be the end points of s_a and s_b (Fig. 18). Let the vector v_a start on s_a and end at p_a, and let the vector v_b start from p_b and end on s_b. The directional change between s_a and s_b with respect to the vectors v_a and v_b is estimated as

$$\Delta D(s_a, s_b) = \cos^{-1}\!\left(\frac{v_a \cdot v_b}{\lVert v_a \rVert\,\lVert v_b \rVert}\right), \qquad \Delta D(s_a, s_b) \in [0°, 180°] \tag{11}$$

Eq. (11) measures the magnitude of the directional change when we go from s_a to s_b, as shown in Fig. 18.
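A direct transcription of Eq. (11), with v_a and v_b given as 2D direction vectors:

```python
# Sketch of Eq. (11): directional change between two adjacent segments.
import numpy as np

def directional_change(v_a, v_b):
    """Angle in degrees, in [0, 180], between vectors v_a and v_b."""
    cos = np.dot(v_a, v_b) / (np.linalg.norm(v_a) * np.linalg.norm(v_b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```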

4.5.2 How to find the optimal forest (set of vessels)

The best set of vessels is obtained from the blood vessel graph by modeling the segments as a graph and applying an optimization technique. A graph segment is considered connected if it shares a junction with another segment, and a vessel-verification procedure follows, because the graph-tracing algorithm detects nonvessel points along with the set of true blood vessels from the retinal image (Fig. 19).

4.6 Feature extraction

The features are extracted from the centerline pixel image and processed to identify veins and arteries. The extracted features are [37] (a code sketch follows this list):

i. red, green, and blue intensities of centerline pixels;
ii. hue, saturation, and intensity of centerline pixels;
iii. red, green, and blue mean values of centerline pixels;
iv. hue, saturation, and intensity mean values of the vessel;
v. red, green, and blue standard deviation values of the vessels;
vi. hue, saturation, and intensity standard deviation values of the vessels;
vii. red and green maximum and minimum intensity values of the vessel.
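A sketch of how such statistics could be gathered for one vessel segment is given below; `rgb` is the fundus image and `mask` the boolean centerline mask of the segment (both assumed available), and HSV is used as a stand-in for the hue/saturation/intensity space named in the text:

```python
# Sketch of per-segment feature extraction over centerline pixels.
from skimage.color import rgb2hsv

def segment_features(rgb, mask):
    hsv = rgb2hsv(rgb)
    feats = {}
    for name, img in (("rgb", rgb), ("hsv", hsv)):
        pix = img[mask]                        # (n_pixels, 3) array
        feats[name + "_mean"] = pix.mean(axis=0)
        feats[name + "_std"] = pix.std(axis=0)
    # Red and green extrema of the segment's pixels (feature vii).
    feats["rg_max"] = rgb[mask][:, :2].max(axis=0)
    feats["rg_min"] = rgb[mask][:, :2].min(axis=0)
    return feats
```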

FIG. 18 Measurement of the directional change between different segments [38].


FIG. 19 Image after node calculations.

4.7 Feature vectors

Using a 3 × 3 neighborhood, a feature vector is extracted from each centerline pixel. It consists of four features: the mean (MR) and standard deviation (SR) of the red channel (from RGB color space), and the mean (MH) and standard deviation (SH) of the hue over the region of interest. The differentiation of arteries and veins in the extracted blood vessel tree is based on the choice of particular color features [38, 39]. It is observed that the arteries are brighter than the veins, i.e., arteries have higher red-channel intensities than veins, as the two have different color and size features (Fig. 20) [38].

4.8 KNN classification for A/V differentiation in blood vessels

Feature vectors are extracted from the centerline pixels of the blood vessels and used for the A/V (artery/vein) classification. The discrimination of arteries and veins is done by selecting as features the red values and luminance values averaged over the pixels belonging to the skeleton of each vessel. To include the vessel walls in the width measurement, the mean value over the pixels of each vessel in the red channel and in the luminance channel of HSL space is also used. From the feature vectors associated with the centerline pixels, the vessels of a vessel tree are classified into two clusters/classes (with respective centroids) using the KNN classifier.


FIG. 20 (A) A/V discrimination; the artery appears darker than the vein in the fundus image. (B) Histogram of the red-channel values of artery and vein vessels, which are more separable than those of the other RGB channels.

The Euclidean distance between the cluster center and each pixel's position in feature space is measured, and accordingly each centerline pixel is assigned a degree of belonging to the predefined clusters/classes labeled arterial or venous; hence arteries and veins are discriminated. MRmean, SRmean, MHmean, and SHmean are the average values of the four features extracted from the centerline pixels in each cluster; the details are presented in Table 4. The KNN algorithm assigns a feature vector to either the positive or the negative class, depending on the training data, given as a set of vectors with assigned class labels. The value of k sets the number of neighbors influencing the classification; the choice of k can be made simply by running the algorithm many times with different k values and choosing the one with the best performance. The KNN algorithm is discussed in the following. Consider a data set containing samples, each with n attributes, which together form an n-dimensional independent vector:

$$x = (x_1, x_2, \ldots, x_n) \tag{12}$$

Along with this, consider a dependent variable, denoted by y, whose value depends on the n attributes x. Assume that y is a categorical variable and that a scalar function f assigns the class y = f(x) to every such vector. Suppose these vectors are collected, together with their corresponding classes, in a training set:

$$(x^{(i)}, y^{(i)}) \quad \text{for } i = 1, 2, \ldots, T \tag{13}$$

Table 4 The four best features resulting from feature selection [40].

1. Mean value of the red channel: MEAN(P), P = {p_i : p_i ∈ vessel skeleton and p_i ∈ RED image}.
2. Variance value in the luminance channel of HSL space for the pixels belonging to the skeleton of the related vessel segment: VARIANCE(P), P = {p_i : p_i ∈ vessel skeleton and p_i ∈ LUMINANCE image}.
3. Mean value of all vessel pixels inside a window located at the center of the vessel segment in the red channel; the window size equals the width of the thickest vessel segment: MEAN(P), P = {p_i : p_i ∈ window and p_i ∈ VESSEL and p_i ∈ RED image}.
4. Variance value of all vessel pixels inside a window located at the center of the vessel segment in the luminance channel; the window size equals the width of the thickest vessel segment: VARIANCE(P), P = {p_i : p_i ∈ window and p_i ∈ VESSEL and p_i ∈ LUMINANCE image}.


This set is the training set for the design. Now suppose a new sample x = u is presented to the classifier; the KNN algorithm should find the class to which this sample belongs. Using Eq. (13), we could classify the new sample simply by computing v = f(u) if the function f were known; the drawback is that we know nothing about f except that it is sufficiently smooth. The K-nearest neighbor (KNN) algorithm therefore identifies the k samples in the training set whose independent variables x are most similar to u and classifies the new sample into the class v indicated by those neighbors. The degree of neighborhood is calculated as a distance, or dissimilarity measure, between samples based on the independent variables, given by the Euclidean distance of Eq. (14). The Euclidean distance between the points x and u is

$$d(x, u) = \sqrt{\sum_{i=1}^{n} (x_i - u_i)^2} \tag{14}$$

Clustering methods can also be used to measure the distance between points in the space of independent predictor variables. The simplest case is k = 1: the sample in the training set closest to u is found, and we set v = y, where y is the class of that nearest neighbor. Classification using a single nearest neighbor can be a very powerful technique when the training set contains a large number of samples; it has been proved that, asymptotically, even an arbitrarily sophisticated classification rule can at best halve the misclassification error of the simple 1-NN rule. The k-NN algorithm finds the k nearest neighbors of u and classifies the new sample by a majority decision rule. A higher value of k smooths the data and reduces the risk of overfitting; for good accuracy, the value of k is chosen in units or tens rather than in hundreds or thousands. The drawback is that if k = n, the algorithm merely predicts the majority class of the training data for every sample, irrespective of u, and oversmoothing occurs.

Advantages of KNN

The kNN algorithm has many advantages: it is simple and easy to implement, and thus lends itself to any classification problem (a code sketch follows the list below).

• As it is easy to implement and debug, the process is transparent.
• k-NN reduces noise during its operation, smoothing the training set, and as a result can improve the accuracy of the classifier.
• Run-time improvement is possible on large case bases.
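A compact sketch of this classification step with scikit-learn, where X is the (n_segments × 4) feature matrix of Table 4, y holds the artery/vein labels, and X_new holds unlabeled segments (all three assumed available from the previous steps):

```python
# Sketch of A/V classification with KNN, including the simple
# "try several k values" selection described in the text.
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

best_k, best_score = None, -1.0
for k in (1, 3, 5, 7, 9):                    # small k values, as advised
    score = cross_val_score(KNeighborsClassifier(n_neighbors=k),
                            X, y, cv=5).mean()
    if score > best_score:
        best_k, best_score = k, score

knn = KNeighborsClassifier(n_neighbors=best_k).fit(X, y)
labels = knn.predict(X_new)                  # "artery" / "vein" per segment
```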

The processed image, after node calculation and feature extraction, undergoes classification of arteries and veins using the KNN algorithm, where red vessels indicate arteries and blue vessels represent veins, as shown in Fig. 21.


FIG. 21 A/V classification of the extracted blood vessels from the k-NN algorithm.

4.9 MI classification of veins based on thickness measurement

To find the diameter of a detected vein at a particular vessel centerline pixel position, we map the vessel-edge-detected image onto the centerline-detected image [41, 42]. The vessel-edge image is taken as the mask, and the mask is centered on a pixel taken from the vessel centerline image; we are interested in finding the potential edge pixels on either side of that centerline pixel position. In the width-measurement method, instead of searching all pixel positions inside the mask, we compute pixel positions by stepping outward up to the size of the mask and rotating the direction from 0 to 180 degrees, as shown in Fig. 22.

FIG. 22 Schematic representation of finding the mirror of an edge pixel (left) and the width, or minimum distance, among potential pairs of pixels (right).


In order to access every point in the mask, the rotation angle is increased in steps of size smaller than 180°/masklength. The edge pixel is searched for along each direction in the grayscale edge image, and once it is identified, its mirror is found by shifting the angle by 180° and increasing the distance from one up to the maximum size of the mask, as shown in Fig. 22. In this way, we find the width, or diameter, of the cross-sectional area by picking all the potential pixel pairs from the rotation-invariant mask:

$$x = x' + r\cos\theta, \qquad y = y' + r\sin\theta \tag{15}$$

where (x′, y′) is the vessel centerline pixel position, r = 1, 2, …, masksize/2, and θ = 0, …, 180°. During the measurement of the vessel width at a particular pixel position, if the edge image has grayscale value 255 (a white, or edge, pixel), then the mirror of this pixel is found as (x2, y2) by taking θ + 180° and varying r, as described in Ref. [42]. The pairs of pixels on opposite edges (at the line end points) are then extracted, assuming that imaginary lines pass through the centerline pixels. The Euclidean distance expressing the width of the cross section is found from these pixel pairs as

$$d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2} \tag{16}$$

As a result, the width can be measured for all pixels, including vessels one pixel wide (for which we have the edge and the centerline itself).
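The width search of Eqs. (15) and (16) can be sketched as follows (an illustrative reading of the procedure, with `edges` the binary vessel-edge image and the centerline pixel assumed far enough from the image border):

```python
# Sketch of width measurement: cast a ray and its mirror from the
# centerline pixel, stop at the first edge pixel on each side, and keep
# the smallest pair distance (Eqs. 15-16).
import numpy as np

def vessel_width(edges, x0, y0, mask_size=15, angle_step=10):
    best = np.inf
    for theta in np.deg2rad(np.arange(0, 180, angle_step)):
        hits = []
        for phi in (theta, theta + np.pi):    # ray and its 180-degree mirror
            for r in range(1, mask_size // 2 + 1):
                x = int(round(x0 + r * np.cos(phi)))   # Eq. (15)
                y = int(round(y0 + r * np.sin(phi)))
                if edges[y, x]:               # first edge pixel on this side
                    hits.append((x, y))
                    break
        if len(hits) == 2:                    # Eq. (16): pair distance
            (x1, y1), (x2, y2) = hits
            best = min(best, np.hypot(x1 - x2, y1 - y2))
    return best
```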

5 MI detection approach

By choosing the analysis option from the GUI, the detection of macular ischemia is performed by comparing each pixel value of the dataset 1 and dataset 2 images with a normal (healthy) image, as shown in Fig. 23. For the proposed MI approach, we estimate the vein width of the extracted blood vessels by applying the proposed methodology first to the normal image and then to the test image. A comparative analysis of the extracted vein vessels allows us to identify MI: the vein width of the normal image provides the threshold value for the identification of MI.

6 MI classification based on SVM classifier

For severity grading of the disease, an SVM classifier is implemented using the following steps (a minimal code sketch follows Fig. 24):
(i) data setup: the dataset consists of N samples spanning three classes;
(ii) calculation with the SVM linear kernel (kernel type t = 0), using twofold cross validation (training on one half of the data and testing on the other half) to find the best parameter C;
(iii) after finding the best value of C, retraining on the entire dataset with this parameter value;
(iv) plotting of the support vectors;
(v) plotting of the decision area.

FIG. 23 Image for the detection of macular ischemia.

The proposed method operates on vectors of higher dimensions, and SVM maps these vectors into a space in which an optimal hyperplane is constructed. The optimal hyperplane is selected from among many hyperplanes such that the distance between itself and the nearest sample vectors of each class is maximized, thereby maximizing the margin. The resulting hyperplane is known as the optimal separating hyperplane, and the margin is the sum of the distances from the hyperplane to the closest training vectors of each class. The expression for the hyperplane is

$$w \cdot x + b = 0 \tag{17}$$

where x denotes the training vectors, w the vector perpendicular to the separating hyperplane, and b the offset parameter, which allows the margin (d1 + d2) to be maximized. When the decision function of the data is not linear, a kernel function is used that maps the input vectors through a nonlinear transformation, rather than fitting nonlinear curves to the vector space to separate the data. This allows the classification of higher-dimensional data through an optimal kernel function implemented in the SVM model, at the cost of classifier complexity, while the classification error is controlled explicitly (Fig. 24).

FIG. 24 SVM classifier approach.
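Steps (i)-(v) can be sketched with scikit-learn as follows (a hypothetical reconstruction: X is the feature matrix, y the normal/moderate/severe labels, and X_test the images to grade, none of which come from the chapter itself):

```python
# Sketch of SVM severity grading: linear kernel, C chosen by twofold
# cross validation, then refit on all the data.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

search = GridSearchCV(SVC(kernel="linear"),           # step (ii): linear kernel
                      param_grid={"C": [0.01, 0.1, 1, 10, 100]},
                      cv=2)                           # twofold cross validation
search.fit(X, y)

svm = SVC(kernel="linear", C=search.best_params_["C"]).fit(X, y)  # step (iii)
print(svm.support_vectors_)                           # step (iv)
grades = svm.predict(X_test)                          # severity per test image
```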

implemented in SVM model at the cost of the classifier complexity and classification error is controlled explicitly (Fig. 24). n-class SVM classifier In the n-class SVM classifier, all the available classes are grouped into two disjoint groups of classes which enables to train the SVM classifier by generating a root node of the decision tree by considering the samples of the first group as positive examples and the samples of the second group as negative examples. As a consequence, a first (left) subtree is assigned to the classes from the first clustering group, while the classes of the second clustering group are being assigned to the (right) second subtree and this process recursively continues until there is only one class per group which defines a leaf in the decision tree (Figs. 25 and 26).

FIG. 25 n-SVM classifier approach.


FIG. 26 Images of different stages of DR: (A) normal stage, (B) moderate stage, and (C) severe stage.

7 Software description

MATLAB is a versatile high-level technical language with a highly interactive computing environment; it provides numerous algorithms for the analysis and visualization of data along with their numerical computation. MATLAB lets the user solve technical computing problems faster than traditional programming languages such as C and C++. It is a powerful data-analysis and visualization tool built around matrix operations, with strong programming and graphics facilities for composing sets of MATLAB programs for a particular task. MATLAB toolboxes bundle such sets of programs; the Image Processing Toolbox handles images under processing with dedicated functions, commands, and techniques that make MATLAB programs easy to write and understand. In image processing, the image is treated as a matrix, a standard data type whose elements are pixels characterized by gray values or RGB values.

8 Results and conclusions

The proposed work uses fundus images obtained from an ophthalmologist together with the DRIVE (Digital Retinal Images for Vessel Extraction) database for the screening of the disease. A total of 100 retinal images were screened in this study, including 40 color fundus images from the DRIVE database along with their ground-truth images. The fundus images processed in our system were digitized using a nonmydriatic 3CCD camera at 24 bits per pixel, with an image size of 565 × 584 and a 45° field of view (FoV). Fig. 27 shows some of the database images. For the classification of the disease, the available database is divided equally into training and test sets.


FIG. 27 Database images used under test set.

The proposed early extraction and detection of the blood vasculature enables an automated assessment technique for the analysis and classification of initial vascular changes, along with the discrimination of the vessel graph into arteries and veins. The proposed methodology classifies the whole vascular tree into arteries and veins and is not restricted to a specific region of interest, normally around the optic disc. The literature survey revealed that most available research methodologies have concentrated on intensity feature vectors for classifying the blood vessels into arteries and veins. Beyond the intensity features, our proposed work concentrates on extracting additional information from the extracted vascular network graph. From the extracted graph, the different node types (bifurcation, crossing, or meeting points) are obtained by extracting the node degree and the orientations of the links, along with the angles between the links and the caliber of each vessel link. Based on this information, the A/V classes are finally assigned to each vessel in the graph using the KNN classifier. From the extracted veins, the vein diameter is estimated by mapping the extracted vessels onto the extracted centerline vessels using the masking approach. The normal image is processed first, and its vein diameter is used as the threshold value for the detection and classification of MI. Based on this threshold, MI is detected and classified using a multiclass SVM classifier, which ensures fast classification and good performance compared with previous conventional classifiers, at the cost of a demanding computational training phase. The proposed MI-detection method is conceptually simple and provides an effective and innovative tool that should help ophthalmologists improve their detection of the disease. Severity grading is done by applying the SVM technique, with analysis results for images from both datasets as shown in Fig. 28. According to the severity results, the corresponding test images are stored in folders named normal, moderate, and severe, respectively (Table 5). Furthermore, we compared the performance of our approach with other recently proposed methods and conclude that we have achieved better results (Fig. 29).


FIG. 28 Final output with analysis result.

Table 5 Evaluation result and comparison of input images from datasets.

| No. of test | First dataset image | Second dataset image | Severity stage (first dataset) | Severity stage (second dataset) | Sensitivity (first, second) | Specificity (first, second) | Accuracy (first, second) | Error rate (first, second) |
|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 1 | Moderate | Moderate | 0, 0 | 1, 1 | 80%, 85% | 0.15, 0.20 |
| 2 | 2 | 20 | Normal | Moderate | 0, 0 | 1, 1 | 85%, 95% | 0.15, 0.05 |
| 3 | 4 | 17 | Moderate | Severe | 0, 0 | 1, 1 | 80%, 95% | 0.20, 0.05 |
| 4 | 3 | 6 | Normal | Moderate | 0, 0 | 1, 1 | 70%, 80% | 0.30, 0.20 |
| 5 | 8 | 3 | Moderate | Moderate | 0, 0 | 1, 1 | 80%, 85% | 0.20, 0.15 |
| 6 | 10 | 11 | Severe | Normal | 0, 0 | 1, 1 | 80%, 75% | 0.20, 0.25 |
| 7 | 3 | 3 | Normal | Moderate | 0.78, 0.78 | 0.90, 1 | 85%, 90% | 0.15, 0.1 |


FIG. 29 Comparison of the existing and the proposed method.

References

[1] R. Williams, M. Airey, H. Baxter, J. Forrester, T. Kennedy-Martin, A. Girach, Epidemiology of diabetic retinopathy and macular oedema: a systematic review, Eye 18 (10) (2004) 963–983.
[2] P.J. Kertes, T.M. Johnson, Evidence-Based Eye Care, Lippincott Williams and Wilkins, 2006; N. McBrien, Optometry: an evidence-based clinical discipline (Editorial), Clin. Exp. Optom. 81 (6) (1998) 234–235.
[3] K.G.M.M. Alberti, P.Z. Zimmet, Definition, diagnosis and classification of diabetes mellitus and its complications. Part 1: diagnosis and classification of diabetes mellitus. Provisional report of a WHO consultation, Diabet. Med. 15 (7) (1998) 539–553.
[4] M.A. Creager, T.F. Lüscher, F. Cosentino, J.A. Beckman, et al., Diabetes and vascular disease: pathophysiology, clinical consequences, and medical therapy: part I, Circulation 108 (12) (2003) 1527–1532.
[5] R.R. Bourne, G.A. Stevens, R.A. White, J.L. Smith, S.R. Flaxman, H. Price, J.B. Jonas, J. Keeffe, J. Leasher, K. Naidoo, et al., Causes of vision loss worldwide, 1990–2010: a systematic analysis, Lancet Glob. Health 1 (6) (2013) e339–e349.
[6] C.S. Lee, A.Y. Lee, D.A. Sim, P.A. Keane, H. Mehta, J. Zarranz-Ventura, M. Fruttiger, C.A. Egan, A. Tufail, Reevaluating the definition of intraretinal microvascular abnormalities and neovascularization elsewhere in diabetic retinopathy using optical coherence tomography and fluorescein angiography, Am. J. Ophthalmol. 159 (1) (2015) 101–110.
[7] World Diabetes, A Newsletter From the World Health Organization, 4, 1998.
[8] Cigna healthcare coverage position—A report, 2007. Retrieved from: http://www.cigna.com/customer_care/healthcare_professional/coverage_positions/medical/mm_0080_coveragepositioncriteria_imaging_systems_optical.pdf. Accessed 5 December 2007.
[9] G. Liew, M. Michaelides, C. Bunce, A comparison of the causes of blindness certifications in England and Wales in working age adults (16–64 years), 1999–2000 with 2009–2010, BMJ Open 4 (2) (2014) e004015.
[10] B. Cassin, S. Solomon, Dictionary of Eye Terminology, Triad Publishing Company, Gainesville, Florida, 1990.


[11] G. Wyszecki, W.S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, second ed., John Wiley & Sons, New York, NY, 1982.
[12] U.R. Acharya, E.Y.K. Ng, J.S. Suri, Image Modelling of Human Eye, Artech House, MA, 2008.
[13] Early Treatment Diabetic Retinopathy Study Research Group, Grading diabetic retinopathy from stereoscopic color fundus photographs: an extension of the modified Airlie House classification, ETDRS report number 10, Ophthalmology 98 (1991) 786–806.
[14] J. Nayak, P.S. Bhat, U.R. Acharya, C.M. Lim, M. Kagathi, Automated identification of different stages of diabetic retinopathy using digital fundus images, J. Med. Syst. 32 (2) (2008) 107–115.
[15] Diabetic Retinopathy. Retrieved from: http://www.hoptechno.com/book45.htm. Accessed 17 January 2009.
[16] A.F. Amos, D.J. McCarty, P. Zimmet, The rising global burden of diabetes and its complications: estimates and projections to the year 2010, Diabet. Med. 14 (Suppl. 5) (1997) S1–S85.
[17] ETDRS Research Group, Early photocoagulation for diabetic retinopathy. Early Treatment Diabetic Retinopathy Study report number 9, Ophthalmology 98 (1991) 766–785.
[18] J. Javitt, L. Aiello, Y. Chiang, F. Ferris, J. Canner, S. Greenfield, Preventive eye care in people with diabetes is cost-saving to the federal government, Diabetes Care 17 (1994) 909–917.
[19] K.W. Tobin, M.D. Abramoff, E. Chaum, L. Giancardo, V.P. Govindasamy, T.P. Karnowski, M.T.S. Tennant, S. Swainson, Using a patient image archive to diagnose retinopathy, in: Conf. of the IEEE EMBS, (submitted) 2008.
[20] K.W. Tobin, E. Chaum, V. Priya Govindasamy, T.P. Karnowski, Detection of anatomic structures in human retinal imagery, IEEE Trans. Med. Imaging 26 (12) (2007) 1729–1739.
[21] Z. Ben Sbeh, L.D. Cohen, G. Mimoun, G. Coscas, A new approach of geodesic reconstruction for drusen segmentation in eye fundus images, IEEE Trans. Med. Imaging 20 (12) (2001) 1321–1333.
[22] K.W. Tobin, M. Abdelrahman, E. Chaum, V. Govindasamy, T.P. Karnowski, A probabilistic framework for content-based diagnosis of retinal disease, Conf. Proc. IEEE Eng. Med. Biol. Soc. 2007 (2007) 6744–6747.
[23] W.V. Patil, P.T. Daigavane, A survey on the automated identification of retinal diseases caused by diabetes, Inventi Rapid Biomed. Eng. 2015 (1) (2014) 1–6.
[24] www.thediabetescentre.org.uk.
[25] Fundus Photography Overview—Ophthalmic Photographers' Society. www.opsweb.org. Retrieved 17 September 2015.
[26] W.V. Patil, P.T. Daigavane, A survey on the automated identification of retinal diseases caused by diabetes, Inventi Rapid Biomed. Eng. 2015 (1) (2014) 1–6.
[27] J. Sivakumar, J. Jeno, Automated extraction of blood vessels in retinal image, Int. J. Adv. Res. Electr. Electron. Instrum. Eng. 3 (3) (2014).
[28] A.R. Mohammed, Q. Munib, A. Mohammed, An improved matched filter for blood vessel detection of digital retinal images, Comput. Biol. Med. 37 (2007) 262–267.
[29] Y. Xu, H. Zhang, H. Li, G.S. Hu, An improved algorithm for vessel centerline tracking in coronary angiograms, Comput. Methods Prog. Biomed. 88 (2007) 131–143.
[30] R. Elisa, P. Renzo, Retinal blood vessel segmentation using line operators and support vector classification, IEEE Trans. Med. Imaging 26 (2007) 1357–1365.
[31] https://en.wikipedia.org/wiki/Point_spread_function.
[32] http://www.owlnet.rice.edu/~elec539/Projects99/BACH/proj2/wiener.html.
[33] K.Q. Sun, N. Sang, Morphological enhancement of vascular angiogram with multiscale detected by Gabor filters, Electron. Lett. 44 (2) (2008).


[34] T.Y. Zhang, C.Y. Suen, A fast parallel algorithm for thinning digital patterns, Commun. ACM 27 (3) (1984) 236–239.
[35] M. Vickerman, P. Keith, T. Mckay, VESGEN 2D: automated, user-interactive software for quantification and mapping of angiogenic and lymphangiogenic trees and networks, Anat. Rec. 292 (2009) 320–332.
[36] Q.P. Lau, M.L. Lee, W. Hsu, T.Y. Wong, Simultaneously identifying all true vessels from segmented retinal images, IEEE Trans. Biomed. Eng. 60 (7) (2013).
[37] S. Maheswari, S.V. Anandh, Artery vein classification of blood vessels in retinal image: an automated approach, Int. J. Adv. Res. Sci. Eng. 4 (Special Issue 02) (2015).
[38] E. Grisan, A. Ruggeri, A divide et impera strategy for automatic classification of retinal vessels into arteries and veins, in: Proceedings of the 25th Annual International Conference of the IEEE, vol. 1, 2003, pp. 890–893.
[39] D. Faber, M. Aalders, E. Mik, Oxygen saturation-dependent absorption and scattering of blood, Phys. Rev. Lett. 93 (2) (2004) 028102.
[40] G. Mirsharif, F. Tajeripour, F. Sobhanmanesh, H. Pourreza, T. Banaee, Developing an automatic method for separation of arteries from veins in retinal images, ieeexplore.ieee.org/document/6413351, 17 January 2013.
[41] R. Pal, S.K. Pal, Entropic thresholding, Signal Process. 16 (1989) 97–108.
[42] U.T.V. Nguyen, A. Bhuiyan, L.A.F. Park, K. Ramamohanarao, An effective retinal blood vessel segmentation method using multi-scale line detection, Pattern Recogn. 46 (3) (2012) 703–715.

Further reading

[43] L.Y. Wong, U.R. Acharya, Y.V. Venkatesh, C. Chee, C.M. Lim, E.Y.K. Ng, Identification of different stages of diabetic retinopathy using retinal optical images, Inf. Sci. 178 (1) (2008) 106–121.
[44] J.B. Jonas, U. Schneider, G.O.H. Naumann, Count and density of human retinal photoreceptors, Graefes Arch. Clin. Exp. Ophthalmol. 230 (1992) 505–510.

12

Optical coherence tomography angiography in type 3 neovascularization

Riccardo Sacconi^a, Enrico Borrelli^a, Adriano Carnevali^b, Eleonora Corbelli^a, Lea Querques^a, Francesco Bandello^a, Giuseppe Querques^a

^a Department of Ophthalmology, University Vita-Salute, Scientific Institute San Raffaele, Milan, Italy; ^b Department of Ophthalmology, University Magna Graecia, Catanzaro, Italy

1 Introduction

Type 3 neovascularization (NV) is a well-established form of neovascular age-related macular degeneration (AMD). The origin of this lesion has gathered significant attention because of its controversial nature. During the past decades, numerous terms were used to identify type 3 NV, including "abnormal deep retinal vascular complex" [1], used when it was first described in 1992 by Mary Elizabeth Hartnett, "retinal choroidal anastomosis," and "retinal anastomosis to the lesion." In 2001, Lawrence Yannuzzi [2] suggested the acronym RAP (retinal angiomatous proliferation) in support of the hypothesis that neovessels in type 3 NV arise within the inner half of the retina. On the contrary, in 2003, Donald Gass [3] presumed a choroidal origin for these neovessels; he therefore proposed describing type 3 NV with the term "occult chorioretinal anastomosis," hypothesizing an initial development of an occult choroidal NV followed by the formation of a choroidal-retinal anastomosis. Due to these conflicting theories on the origin, in 2008 Bailey Freund [4] proposed for this disease the descriptive term "type 3 NV," highlighting the intraretinal location of the NV and distinguishing this subgroup of neovascular AMD (nAMD) from types 1 and 2, previously described by Gass in his landmark textbook of macular disease. Nowadays, modern imaging technologies, including structural optical coherence tomography (OCT), real-time dye-based angiography, and OCT angiography (OCT-A), have demonstrated that type 3 NV originates from the deep capillary plexus (DCP), followed by disruption of the outer retinal layers and penetration through the retinal pigment epithelium (RPE) [5].


However, since type 3 NV initially develops in the retina and may not necessarily involve the choroid, in the latest AMD classification Spaide [6] has recommended using the term macular neovascularization rather than choroidal neovascularization, which may not be accurate for eyes affected by the type 3 NV form.

1.1 Epidemiology and risk factors

Type 3 NV accounts for more than 30% of cases of neovascular AMD [5]. Compared with types 1 and 2, this distinct subgroup of NV has peculiar clinical and epidemiologic findings, partly due to environmental and genetic factors that are not completely understood. Women are more frequently affected than men, accounting for 64.7%–71% of affected individuals [2–8]. Furthermore, a higher prevalence has been detected in the Caucasian race [2–9]. No cases of type 3 NV in blacks have been described [7], and the prevalence appears to be greater in hyperpigmented eyes [2–10]. Type 3 NV is a bilaterally aggressive disease with a predictable symmetry: annual and cumulative rates of neovascularization appearance in the fellow, unaffected eyes are greater than in the other forms of exudative AMD. In detail, Yannuzzi reported an average of 15 months until the development of neovascularization in the unaffected fellow eyes of patients with type 3 NV [2]. In a cohort of 52 fellow eyes, Gross and colleagues reported a cumulative incidence of neovascularization of 40% at 12 months, 56% at 24 months, and 100% at 36 months [9]. In a small Japanese cohort of 20 unaffected fellow eyes of patients with type 3 NV, the authors reported an incidence of neovascularization of 50% at 49 months of follow-up (range 24–108 months) [11]. Other risk factors, including age and arterial hypertension, have been associated with the development of both type 3 and the other types of exudative AMD. Even though the role of arterial hypertension is still uncertain, it is now widely evident that type 3 patients are significantly older than other neovascular AMD patients (average age 79 vs 76 years). Although several studies have evaluated genetic risk factors for exudative AMD, only a small number have specifically addressed the genetic variants associated with type 3 NV. As an example, some CFH gene variants are more associated with type 1/2 NV than with type 3 NV development; on the contrary, ARMS2 variants appear to have a stronger association with the type 3 neovascular phenotype [8, 12, 13]. These findings highlight the importance of identifying genetic biomarkers that may predispose to the development of different NV forms.

1.2 Pathogenesis

The presence of specific fundoscopic findings, including reticular pseudodrusen (RPD)—also termed subretinal drusenoid deposits—small hemorrhages, and pigmentary changes has been well documented in eyes with type 3 NV and in their unaffected fellow eyes, both in Caucasian and in Asian populations.


Although the overall frequency of RPD in neovascular AMD eyes ranges between 22% and 36%, they are far more commonly found in eyes with type 3 NV (68.4%–83%) [14]. Conversely, RPD are less commonly found in eyes with type 1 or type 2 neovascular AMD, with prevalences of 9%–13.9% and 2%–3.4%, respectively [15]. The exact mechanism linking RPD and type 3 NV is, however, unknown. It has been hypothesized that the persistent presence of extracellular lipid elements in the outer retina might cause the production of reactive oxygen species, which may subsequently stimulate NV development. Another possible explanation postulates that RPD may be the consequence of an RPE impairment, which may alter cytokine metabolism and concentration in the outer retina. Moreover, eyes with RPD were demonstrated to have a reduced expression of sFlt (soluble fms-like tyrosine kinase-1), a soluble receptor for the vascular endothelial growth factor (VEGF). This molecule is physiologically produced by the photoreceptors and is known to bind and inactivate free VEGF in the outer retina; a lower level of sFlt may thus result in an increased level of free VEGF, possibly triggering type 3 NV development [6]. Finally, both RPD and type 3 NV lesions are known to be associated with hypoperfusion of the choriocapillaris (CC). Marques and colleagues conducted a quantitative assessment of the fundoscopic features, comparing unaffected fellow eyes of patients with type 3 NV and fellow eyes of patients with type 1/2 NV [16]. The authors demonstrated that the total area occupied by drusen is significantly smaller in the fellow eyes of type 3 NV patients [16]. This observation further corroborates the hypothesis that type 3 NV is more associated with the presence of RPD.

2 Classification

In parallel with improvements in OCT imaging, the earlier type 3 progression model proposed by Yannuzzi, based on the interpretation of fluorescein angiography (FA) and clinical examination, was surpassed. He originally emphasized the significant contribution of the retinal circulation to the vasogenic process, in which the neovascularization could start within the retina (initial focal retinal proliferation and progression), the choroid (initial focal choroidal proliferation and progression), or both circulations (focal retinal proliferation with preexisting or simultaneous choroidal proliferation) [7]. In 2001, Yannuzzi et al. [2] classified RAP lesions in the following stages:

– Stage I: intraretinal neovascularization.
– Stage II: subretinal neovascularization with a retinal-retinal anastomosis, with or without a serous pigment epithelial detachment.
– Stage III: choroidal neovascularization with a vascularized pigment epithelial detachment and a retinal-choroidal anastomosis.

Later, in 2008, Yannuzzi [8] revised his version of “retinal origin” of type 3 neovascularization and proposed three different variants of the vasogenic process: (1) initial proliferation from the retinal capillary plexuses and progression to choroidal neovascularization;


(2) initial proliferation from the choroid and progression to retinal neovascularization; and (3) simultaneous retinal and choroidal proliferation.

In 2016, Daniel Su and colleagues [5] proposed a novel classification using OCT, which takes into account the anatomical relationships between type 3 NV and the surrounding structures. In detail, this classification has the following stages:

– Precursor stage, characterized by a hyperreflective focus in the outer retina.
– Stage 1, notable for a larger intraretinal hyperreflective lesion associated with cystoid macular edema (CME) but without outer retinal disruption.
– Stage 2, defined by outer retinal disruption, along with RPE atrophy in most cases.
– Stage 3, characterized by a more pronounced CME with an intraretinal neovascular lesion that goes through the RPE and connects with a pigment epithelium detachment (PED). In this stage, eccentric subretinal fluid overlying the margins of the PED can be found in a minority of cases.

Table 1 Proposed stages of type 3 neovascularization as observed with spectral domain OCT.

| Stage | Features |
|---|---|
| Precursor | Migrated retinal pigment epithelium (RPE) cells representing intraretinal hyperreflective foci in the outer retina. Preexisting drusenoid pigment epithelium detachment (PED) could be present at the site of the type 3 lesion |
| Stage 1 | Intraretinal hyperreflective focus associated with cystoid macular edema without outer retinal disruption. Preexisting drusenoid PED could be present at the site of the type 3 lesion |
| Stage 2 | Hyperreflective focus with cystoid macular edema and outer retinal disruption, with or without RPE disruption. Preexisting drusenoid PED could be present at the site of the type 3 lesion |
| Stage 3 | Progressive downward growth of the type 3 lesion, proliferating through the RPE and creating a vascularized serous pigment epithelium detachment. If present, a preexisting drusenoid PED becomes vascularized by the previously intraretinal lesion, creating a mixed PED with a serous component |

Adapted from D. Su, S. Lin, N. Phasukkijwatana, et al., An updated staging system of type 3 neovascularization using spectral domain optical coherence tomography, Retina 36 (2016) S40–S49.

Of note, since serous PEDs are common at the onset of type 3 NV, most cases are diagnosed as stage 3 lesions, whereas stage 1/2 lesions are observed in less than half of all cases (Table 1).

3 Multimodal imaging and type 3 neovascularization

Since the first description of RAP by Yannuzzi and colleagues in 2001 [2], which was based on clinical findings, FA, and indocyanine green angiography (ICGA), various authors have imaged these lesions with newer imaging techniques and a multimodal imaging approach, significantly expanding our knowledge of this lesion [17–20]. On FA (Fig. 1), the presence of type 3 neovascularization should be considered in cases showing intraretinal and subretinal leakage with indistinct margins or a vascularized PED.


FIG. 1 Fluorescein angiography of a patient affected by treatment-naïve type 3 neovascularization (A: early phases; B: late phases).

Moreover, a focal area of intraretinal staining surrounded by retinal edema should raise the suspicion of a type 3 lesion. Since PEDs are common in stages II and III of type 3 lesions, FA may be helpful to distinguish serous PEDs (stage II), which are characterized by intense early hyperfluorescence and progressive pooling, from vascularized PEDs, which characterize stage III and appear as hyperfluorescent circumscribed lesions with a notch [21]. Furthermore, in type 3 lesions associated with an RPE tear, FA may display a hyperfluorescent area with sharply demarcated borders [22].

FIG. 2 Indocyanine green angiography of a patient affected by treatment-naïve type 3 neovascularization (A: early phases; B: intermediate phases; C: late phases) and structural optical coherence tomography (D) passing through the lesion.


Using ICGA (Fig. 2), a type 3 lesion is displayed as a focal area of intense hyperfluorescence (hot spot) with late extension of the leakage within the retina from the intraretinal neovascularization. ICGA may also be important to distinguish a serous PED from a vascularized PED according to the presence or absence of choroidal neovascularization [22]. The advent of structural OCT has radically improved our ability to describe and characterize type 3 lesions. Using time-domain OCT, Brancato et al. [23] first demonstrated in 2002 that RAP appears as a focal hyperreflective lesion in the neuroretinal layers. Since this first OCT description, several OCT studies have characterized type 3 lesions and their precursors in depth. Using multimodal imaging, Querques et al. [20] sought to elucidate the true origin of type 3 neovascularization. In that study, type 3 lesions were demonstrated to be often preceded by an underlying drusenoid PED and focal outer retinal atrophy. Noteworthy, the authors localized early type 3 neovascularization to hyperreflective lesions in the outer retina adherent to underlying drusen or drusenoid PED, while there was no evidence of neovascular lesions originating from the choroid. Furthermore, using high-density spectral-domain OCT (SD-OCT) scans, the authors observed an apparent communication between the intraretinal neovascular complex and the material in the sub-RPE space (Fig. 3). Considering these findings altogether, the authors concluded that the intraretinal neovascular complex might vascularize the sub-RPE material from above. In detail, at the very early stage of the disease, there might be a neovascular complex within the outer nuclear layer, which may extend through the disrupted RPE into the sub-RPE space. These findings were further confirmed by Nagiel and colleagues [18]. In the latter study, the authors obtained images of type 3 precursor lesions in 17 eyes with AMD. In all the examined cases, small hyperreflective foci located at the outer nuclear layer/outer plexiform layer junction were shown to precede the development of type 3 lesions, as well as RPE atrophy. Moreover, a review of color fundus photographs of these 17 eyes revealed the presence of pigment at the site of future type 3 neovascularization in 12 cases (70%). In the same study, the authors described the imaging characteristics of 40 eyes with AMD and treatment-naïve type 3 neovascular lesions. On SD-OCT images, type 3 neovascularizations were characterized by an amorphous or round hyperreflective lesion of varying size located between the RPE and the outer plexiform layer, with adjacent macular edema and occasional sub-RPE fluid. In this study, all 40 eyes with type 3 lesions were treated with intravitreal anti-VEGF injections, and most neovascular lesions regressed after 1 or 2 injections.

4 OCT-A and type 3 neovascularization

4.1 Optical coherence tomography angiography—Overview on technical aspects

Optical coherence tomography angiography is an emerging noninvasive imaging technique that allows for rapidly producing three-dimensional (3D) angiographic images of the retinal and choroidal vasculature, allowing us to study the pathogenesis of several retinal diseases [21, 24–28].


FIG. 3 Structural optical coherence tomography B-scans of a patient affected by type 3 neovascularization at the preclinical stage (A), at diagnosis (B), and after treatment (C).

The basic concept of OCT-A is to detect the movement of erythrocytes within the vasculature as an intrinsic contrast agent. In detail, OCT-A devices perform several repeated B-scans at the same retinal location and the obtained structural information is compared to detect signal changes secondary to flowing erythrocytes (motion contrast). Furthermore, each B-scan is composed of several A-scans that are acquired at sequential positions to cover the whole B-scan size. Optical coherence tomography angiography may be captured with spectral-domain OCT, which in commercial devices employs a wavelength at around 840 nm, or with swept-source OCT, which uses a longer wavelength (1050 nm) [21, 24].
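The motion-contrast computation just described can be expressed compactly. Below is a minimal sketch, assuming the repeated B-scans are available as a NumPy amplitude array; the function name, array shapes, and the simplified pairwise decorrelation formula (the split-spectrum step used by commercial SSADA-type algorithms is omitted) are illustrative assumptions, not any device's actual processing chain.

```python
import numpy as np

def decorrelation_angiogram(bscans: np.ndarray) -> np.ndarray:
    """Compute a simple amplitude-decorrelation angiogram from repeated
    B-scans acquired at the same retinal location.

    bscans: array of shape (n_repeats, depth, width) with OCT amplitudes.
    Returns a (depth, width) map in which high values indicate flow,
    i.e., signal change between repeats caused by moving erythrocytes.
    """
    b = bscans.astype(np.float64)
    d = 0.0
    for a1, a2 in zip(b[:-1], b[1:]):       # consecutive repeat pairs
        d += 1.0 - (2.0 * a1 * a2) / (a1 ** 2 + a2 ** 2 + 1e-12)
    return d / (b.shape[0] - 1)

# Synthetic usage: 4 repeats of a 496 x 512 B-scan with a "flowing" band.
rng = np.random.default_rng(0)
static = rng.random((496, 512))
repeats = np.stack([static + 0.01 * rng.standard_normal((496, 512))
                    for _ in range(4)])
repeats[:, 200:210, :] = rng.random((4, 10, 512))   # decorrelated voxels
flow_map = decorrelation_angiogram(repeats)          # high inside the band
```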


OCT-A has several potential advantages, including the ability to perform depth-resolved analysis of the vasculature, which has improved our ability to image and investigate type 3 neovascularizations. This has significantly expanded our knowledge of the clinicopathological characteristics of this neovascular lesion. However, while some OCT-A studies use en face images to visualize the retinal and choroidal flow at different depths, this visualization is more susceptible to segmentation artifacts, especially in eyes with retinal and choroidal pathologies [24]. For this reason, cross-sectional OCT-A scans may be helpful for studying these eyes. These OCT-A B-scans typically show flow in red over the grayscale OCT image. Most studies employing OCT-A in eyes with type 3 neovascularization have used OCT-A B-scans to characterize these lesions.
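The red-on-grayscale display convention mentioned above is straightforward to reproduce. A minimal sketch, assuming the structural B-scan and the flow map are 2D arrays normalized to [0, 1]; the threshold value and the synthetic data are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

def overlay_flow(structure: np.ndarray, flow: np.ndarray,
                 threshold: float = 0.3) -> np.ndarray:
    """Paint flow pixels (e.g., from a decorrelation map) in red over a
    grayscale structural B-scan, mimicking the cross-sectional OCT-A
    display described above. Both inputs are 2D arrays in [0, 1]."""
    rgb = np.dstack([structure, structure, structure])  # gray -> RGB
    rgb[flow > threshold] = [1.0, 0.0, 0.0]             # flow shown in red
    return rgb

# Synthetic usage
rng = np.random.default_rng(1)
structure = rng.random((496, 512))
flow = np.zeros_like(structure)
flow[220:240, 150:260] = 1.0                # a patch of "flow"
plt.imshow(overlay_flow(structure, flow))
plt.axis("off")
plt.show()
```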

4.2 OCT-A and type 3 neovascularization

Using OCT-A, several studies have confirmed that type 3 neovascularization originates from the deep vascular complex (DVC) and progresses downward toward the RPE, leading to exudation and PED (Figs. 4 and 5) [29–33]. Importantly, the microvascular morphology of type 3 neovascularization secondary to AMD has been described using OCT-A [29–33]. In these studies, OCT-A revealed lesions characterized by a distinct high-flow, tuft-like capillary network. Miere and colleagues [32] described the OCT-A characteristics of 18 treatment-naïve AMD eyes with type 3 neovascularization. In this study, the authors showed that all 18 eyes were characterized by a retinal-retinal anastomosis ultimately abutting into the sub-RPE space. Of note, the choriocapillaris segmentation revealed the tuft-shaped lesion

FIG. 4 Optical coherence tomography angiography of a patient affected by treatment-naïve type 3 neovascularization (A: superficial capillary plexus, B: deep capillary plexus, C: ORCC slab).


FIG. 5 Optical coherence tomography angiography of a patient affected by treatment-naïve type 3 neovascularization (A: superficial capillary plexus, B: deep capillary plexus, C: ORCC slab).

apparently connected to a deeper small clew-like lesion in 15 out of 18 cases. Importantly, in two cases, the small clew-like lesion seemed connected with the choroid through a small-caliber vessel.

The functional and structural status of the choriocapillaris (CC) has been shown to be closely associated with the development of type 3 neovascularization. Outer retinal ischemia has been proposed as a mechanism driving the development of this unique form of neovascularization [34]. In a study with OCT-A, CC perfusion was investigated and compared between eyes affected by type 3 neovascularization and the fellow eyes without neovascularization [34]. Furthermore, the latter fellow eyes were compared with the fellow eyes of AMD patients with unilateral type 1 or 2 neovascularization. This study showed an overall decreased flow in the CC in eyes with type 3 neovascularization compared with the fellow eyes with non-neovascular AMD. Furthermore, the CC perfusion in the fellow non-neovascular eyes (with type 3 neovascularization in the other eye) was decreased in comparison with the non-neovascular AMD fellow eyes from patients with unilateral type 1 or type 2 neovascularization. This OCT-A study corroborated the hypothesis that CC hypoperfusion may be driving the development of type 3 neovascularization. It has been hypothesized that an imbalance between VEGF and other RPE-derived angiogenic cytokines arising from the apical side of the RPE cells may induce the development of type 3 neovascularization originating from the deep vascular complex [34]. Indeed, it was shown that in eyes with untreated neovascular AMD, aqueous humor levels of VEGF were significantly higher in eyes with type 3 lesion than in eyes with type 1 or type 2 lesion [35]. Finally, given that the fellow eye of a patient with a type 3 lesion harbors an increased risk of developing type 3 neovascularization and macular atrophy, the reduced CC perfusion in these eyes might explain, at least in part, this increased risk [36, 37].


4.3 OCT-A and nascent type 3 neovascularization

In a recent paper, our group reported for the first time the ability of OCT-A to detect neovascular flow before the clinical onset of type 3 neovascularization and before the detection of intraretinal edema with structural OCT [38]. Analyzing 15 eyes of 15 patients, we reported that this neovascular flow was seen inside the hyperreflective foci (HRF) detected on structural OCT, and we identified the presence of HRF with flow in all cases that progressed to the development of type 3 neovascularization. Previously, Su and associates [5] interpreted the presence of HRF on structural SD-OCT as a precursor lesion of type 3 neovascularization and attributed these foci to migrated retinal pigment epithelium cells. Other authors demonstrated that HRF detected using structural OCT corresponded to a small tuft of vessels on OCT-A, but only after the development of intraretinal edema [29, 32, 39]. Although we know that some HRF could represent migrated RPE cells, we reported for the first time the presence of flow in some HRF, even in the absence of exudation or an "active" form of type 3 neovascularization. Following these 15 eyes, we reported the downgrowth progression of HRF with flow from the deep capillary plexus (DCP) to the RPE. Only when the detectable flow reached the RPE and sub-RPE space was the type 3 neovascularization complicated by intraretinal fluid, an "active" form of type 3 neovascularization (Fig. 6). Analyzing all these features, we defined a new clinical entity, namely "nascent type 3 neovascularization". The main feature characterizing nascent type 3 NV is the downgrowth progression of HRF from the DCP to the RPE detected using OCT-A and structural OCT.

FIG. 6 Multimodal imaging evaluation of a patient affected by type 3 neovascularization at the preclinical stage (A: infrared reflectance [IR], B: B-scan with flow of optical coherence tomography angiography [OCT-A], C: structural OCT) and at the clinical “active” stage (D: IR, E: B-scan with flow of OCT-A, F: structural OCT).


At baseline, nascent type 3 NV was seen as HRF with flow detected mainly in the DCP and the avascular slabs, not continuous with the RPE. At this stage, nascent type 3 NV was characterized by the absence of intraretinal fluid (Fig. 7). FA and ICGA

FIG. 7 Multimodal imaging evaluation of a patient affected by "nascent" type 3 neovascularization (A: structural optical coherence tomography [OCT], B: en face OCT angiography of the avascular slab and corresponding B-scan with flow, C–D: early (C) and late (D) phases of fluorescein angiography, E–F: early (E) and late (F) phases of indocyanine green angiography).


showed a hyperfluorescent lesion corresponding to the HRF, characterized by dye leakage of a variable degree in the late frames of the FA but no leakage on ICGA. Typically, nascent type 3 NV displayed a downgrowth progression from the DCP to the RPE over time (Fig. 8). Only when the neovascular complex reaches the RPE and sub-RPE space does the type 3 neovascularization become "active" and complicated by intraretinal fluid. In this process, OCT-A is of paramount importance to detect the early stages of the disease and to change our approach to the follow-up and treatment of the patient.

The OCT-A findings of nascent type 3 NV supported the theory of the retinal origin of type 3 neovascularization (as proposed by Yannuzzi in 2001 [2]). In fact, nascent type 3 neovascularization without leakage originated from the DCP over a drusenoid PED, and later progressed into the RPE and sub-RPE space generating intraretinal exudation. Less commonly, however, the intraretinal neovascular complex was associated with concomitant type 1 neovascularization under the drusenoid PED, without an obvious anastomosis between the two structures. Interestingly, Miere et al. [32] speculated that the presence of a drusenoid PED associated with photoreceptor loss and outer retinal and RPE atrophy may promote the development of a neovascular complex originating from the DCP with subsequent downgrowth toward the RPE and sub-RPE space. Our findings also supported this theory. However, in our series, some HRF with flow on OCT-A did not change during the follow-up, and the flow remained isolated to the DCP and avascular slab. Moreover, we also reported a case of HRF with flow that showed complete resolution of the lesion as detected by OCT-A and dye angiography. For this reason, it is important to underline that not all HRF with flow progress to an "active" type 3 neovascularization, but only those HRF with flow that show a downgrowth progression from the DCP to the RPE (the main feature of nascent type 3 NV) and reach the RPE and sub-RPE space.

FIG. 8 Progression of a "nascent" type 3 lesion in the left eye of a patient affected by early stages of type 3 neovascularization (A: combined near-infrared reflectance and optical coherence tomography angiography B-scan with flow at first examination, B: after 5 months, C: after 8 months, D: after 12 months, E: after 15 months).

5 Treatment

5.1 Therapeutic rationale in the treatment of neovascular AMD

Advances in imaging technology, along with an increased understanding of the disease, have led to a paradigm shift in the management of neovascular AMD (nAMD). Vascular endothelial growth factor has been identified as the main driver of new vessel formation in nAMD [40]. Indeed, the development of molecules targeting VEGF, such as pegaptanib, bevacizumab, ranibizumab, and aflibercept, was a major breakthrough in the treatment of nAMD, and anti-VEGF injections were accepted as the gold standard in nAMD treatment after the landmark MARINA [40] and ANCHOR [41] clinical trials (two multicenter, randomized, controlled clinical trials that studied the efficacy of monthly intravitreal anti-VEGF injections in nAMD patients). However, monthly injections have added a heavy financial burden on patients with chronic nAMD. Subsequently, the PrONTO [42] and HARBOR trials demonstrated the advantages of giving intravitreal anti-VEGF injections on a pro re nata




or "as needed" basis. The current protocol of treatment practiced across various Western countries is a "treat and extend" or "treat and observe" [43] regimen. In the real-world observational AURA study, investigators observed a good initial response to therapy, which subsequently declined over time [44].

Three FDA-approved VEGF inhibitors have been used to treat nAMD. The first approved inhibitor, pegaptanib sodium, is a PEGylated nucleic acid aptamer that specifically inhibits the activity of one splice variant of VEGF: VEGFA165 [45]. The second approved inhibitor is ranibizumab, a high-affinity humanized monoclonal antibody fragment (Fab) that neutralizes all splice variants of VEGF, with a molecular weight of 48 kDa [46]. The reason to develop ranibizumab as a small Fab instead of a full-length antibody was to minimize systemic exposure and improve penetration across the retina [47]. Bevacizumab, a lower-affinity 149 kDa full-length anti-VEGF antibody approved by the FDA for oncology indications, has been used off-label to treat nAMD. The latest approved inhibitor is aflibercept, a recombinant decoy receptor fused to a human IgG1 constant region (Fc) that blocks the activity of all VEGFA splice variants, VEGFB, and placental growth factor (PlGF), with a molecular weight of 115 kDa [48]. In particular, aflibercept was approved in the United States in November 2011 and in the European Union in November 2012, following demonstration of non-inferiority of 8-weekly aflibercept to monthly ranibizumab in the VIEW 1 and 2 studies [49]. The recommended dose of aflibercept in nAMD is 3 monthly loading doses, followed by an injection once every 8 weeks.

Interestingly, despite the broader neutralizing activity of aflibercept (i.e., blocking VEGFB and PlGF in addition to VEGFA), this drug did not grant a greater magnitude of visual improvement in comparison with the other anti-VEGF therapies [49, 50]. However, studies to define the roles of PlGF and VEGFB in preclinical models of angiogenesis have been controversial [51, 52], and few clinical data are available to address this question; therefore, whether inhibiting PlGF and VEGFB is an advantage or a liability remains to be determined. Another factor reported to distinguish aflibercept is that it binds VEGFA with a much higher affinity than other anti-VEGF drugs [53]. However, these results remain controversial, since other studies conducted using multiple formats of affinity measurement found that ranibizumab can demonstrate a higher affinity than aflibercept under certain conditions [54], highlighting the importance of rigorous experimental design and cautious interpretation of in vitro binding data.

Besides the magnitude of efficacy, the dosing schedule is also a significant consideration in the care of nAMD patients. In two phase III clinical trials, 2-mg aflibercept injections every 2 months were demonstrated to be non-inferior to monthly injections of 0.5 mg of ranibizumab with regard to efficacy and safety profiles [49]. However, it is difficult to directly compare the durability of the two drugs based on these data, since there is no comparison arm with ranibizumab given every 2 months. Furthermore, the molar ratio between aflibercept and ranibizumab in these two trials was 1.7:1, with aflibercept in excess. In general, monthly dosing of ranibizumab is more effective than quarterly or pro re nata (PRN, "as needed") dosing.
It is worth noting that some patients also did well under dosing regimens less frequent than monthly [55]; it will be important to identify what determines dosing frequencies.
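As a quick arithmetic check of the 1.7:1 molar ratio quoted above, using the doses and molecular weights already given in the text:

\[
n_{\text{aflibercept}} = \frac{2\ \text{mg}}{115{,}000\ \text{g/mol}} \approx 17.4\ \text{nmol}, \qquad
n_{\text{ranibizumab}} = \frac{0.5\ \text{mg}}{48{,}000\ \text{g/mol}} \approx 10.4\ \text{nmol},
\]
\[
\frac{n_{\text{aflibercept}}}{n_{\text{ranibizumab}}} \approx \frac{17.4}{10.4} \approx 1.7.
\]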


Finally, emerging anecdotal data suggest that different patients and stages of the disease may influence the response to distinct anti-VEGF drugs. These observations urge the clinical and research communities to validate and understand the mechanism behind these findings, with the common goal of maximizing the benefits of existing treatments. Although VEGF inhibitors are the current standard of care for nAMD and have demonstrated impressive clinical activity, not all patients benefit from these therapies [50]. Identifying additional or alternative therapies that can improve on or surpass the current standard of care is of great interest. The development of new treatments will likely benefit from an in-depth understanding of what determines the efficacy of VEGF inhibitors. As an example, it is known that larger vascular lesions have a worse response to anti-VEGF treatment [56–60].

5.2 Treatment of active type 3 neovascularization

Different studies have tried to elucidate whether type 3 neovascularization is similar to or different from the other subtypes of NV in nAMD based on the clinical response to anti-VEGF treatment [61–65]. In 2016, Daniel and associates [61] reported the 2-year clinical outcomes of the Comparison of AMD Treatments Trials (CATT), which enrolled 126 eyes with type 3 NV. In this study, all patients were randomly assigned to treatment with ranibizumab or bevacizumab on a monthly or as-needed basis. The authors reported that eyes with type 3 NV treated with anti-VEGF drugs needed fewer injections than the other types of NV (6.1 vs 7.4 in the first year of treatment and 5.4 vs 6.6 in the second year). In addition, the mean BCVA improvement at 1 year of follow-up was greater in patients affected by type 3 NV than in those with other types of NV (10.6 vs 6.9 letters) [61]. However, type 3 lesions were demonstrated to have an increased occurrence of RPE atrophy at the end of the follow-up [61].

Another important study reported the outcomes of intravitreal anti-VEGF treatment for type 3 NV in real-life practice in seven Italian centers and confirmed that a lower number of anti-VEGF injections (ranibizumab or bevacizumab) is required in this kind of neovascularization [62]. In this series of 95 eyes, the patients underwent a mean of 4.4 anti-VEGF injections over the 1-year follow-up, with a significant BCVA improvement (from 0.66 to 0.53 LogMAR) [62]. Another study tried to clarify whether different stages of type 3 NV were associated with different treatment responses to anti-VEGF [63]. The authors suggested that three consecutive loading doses of intravitreal ranibizumab are an effective treatment in early-stage (stage I) type 3 NV [63]. These eyes showed a significantly lower recurrence rate in comparison with eyes with type 3 NV in the later stages.

While several studies have demonstrated the efficacy of bevacizumab and ranibizumab in the treatment of type 3 NV, less is known about the efficacy of aflibercept for this type of neovessel. Cho and associates [66] compared the efficacy of aflibercept with that of ranibizumab in a cohort of 63 treatment-naïve eyes affected by type 3 NV. The authors concluded that there was no difference between the two drugs in terms of visual acuity improvement at 1 year. However, the authors showed an increased occurrence of RPE atrophy in the group treated with aflibercept [66].


Matsumoto et al. [67] demonstrated a similar improvement in BCVA in the treatment of type 3 neovascularization with aflibercept injections using a treat-and-extend regimen. An average improvement of 9.5 ETDRS letters was achieved, and central retinal thickness dropped from 340 to 133 μm. Chou et al. [68] observed improvements during a short-term follow-up period after three consecutive injections of aflibercept in 47% of the enrolled eyes and stabilization of vision in 42%. The treatment was less effective probably because of a higher percentage of advanced type 3 lesions (73% of the enrolled eyes had PED at the baseline visit). In conclusion, both ranibizumab and aflibercept injections achieved good results in the treatment of type 3 NV in terms of BCVA and central macular thickness improvement. The goal is to treat the patient in the early stages of the disease, in order to perform fewer injections during the follow-up with a better visual acuity outcome. However, the number of injections should be managed carefully in order to minimize the development of atrophy.

5.3 Treatment of nascent type 3 neovascularization

Due to the "aggressive" nature of type 3 neovascularization, it is very important to treat these neovessels in the early stages of the lesion, thus offering hope for more favorable results [63, 69]. Current diagnostic methods based on OCT and OCT-A allow for an earlier detection of type 3 lesions and their differentiation from a typical exudative AMD form, thus enabling treatment in earlier phases of the disease. With the use of OCT-A, we are now able to identify "nascent type 3 NV", characterized by a downgrowth progression from the DCP to the RPE. However, the question is: when should we treat the patient? We need to clarify that HRF due to migrating intraretinal RPE cells are much more commonly encountered than HRF due to nascent type 3 NV; in the case of migrating RPE cells, HRF rarely display downgrowth progression. OCT-A evidence of flow associated with HRF, however, indicated a greater likelihood of downward progression, although nascent type 3 neovascularization may occasionally regress spontaneously without treatment, provided there has been no growth into the RPE and sub-RPE space. For this reason (the possibility of regression of nascent type 3 NV), we suggest not treating patients until the neovascularization grows toward the RPE and sub-RPE space. Therefore, the diagnosis of nascent type 3 NV should warrant a closer follow-up with OCT-A to document progressive growth toward the RPE and sub-RPE space. We suggest that the complete progression of nascent type 3 NV (i.e., from the DCP to the RPE) may indicate the need for early treatment.

6 Conclusion

In conclusion, using OCT-A, an emerging noninvasive imaging technique that allows for rapidly producing 3D angiographic images, several studies have confirmed that type 3


neovascularization originates from the DCP and progresses downward toward the RPE. These OCT-A findings supported the theory of the retinal origin of type 3 neovascularization, as proposed by Yannuzzi in 2001. Furthermore, OCT-A has allowed us to detect neovascular flow before the clinical onset of type 3 neovascularization and before the detection of intraretinal edema. Previously, HRF on structural OCT were regarded as precursor lesions of type 3 neovascularization and attributed to migrated RPE cells. However, using OCT-A, neovascular flow was shown inside the HRF detected on structural OCT; only when the detectable flow reaches the RPE and sub-RPE space is the type 3 neovascularization complicated by intraretinal fluid, an "active" form of type 3 NV. This "preclinical" stage of type 3 NV was defined as "nascent type 3 NV". Typically, nascent type 3 NV displayed a downgrowth progression from the DCP to the RPE over time. The identification of early stages of the disease is of paramount importance in the treatment and outcome of patients affected by type 3 NV. Indeed, anti-VEGF injections achieved good results in the treatment of type 3 NV in terms of BCVA and central macular thickness improvement. Several studies demonstrated that treatment in the early stages of the disease allows us to perform fewer injections during the follow-up with a better VA outcome. However, because some "nascent" type 3 lesions may regress without functional impairment, these lesions should be closely followed and treated only when flow progresses into the RPE and sub-RPE space, in order to prevent progression to late stages.

References
[1] M.E. Hartnett, J.J. Weiter, A. Gardts, A.E. Jalkh, Classification of retinal pigment epithelial detachments associated with drusen, Graefes Arch. Clin. Exp. Ophthalmol. 230 (1992) 11–19.
[2] L.A. Yannuzzi, S. Negrao, T. Iida, C. Carvalho, H. Rodriguez-Coleman, J. Slakter, K.B. Freund, J. Sorenson, D. Orlock, N. Borodoker, Retinal angiomatous proliferation in age-related macular degeneration, Retina 21 (5) (2001) 416–434.
[3] J.D. Gass, A. Agarwal, A.M. Lavina, K.A. Tawansy, Focal inner retinal hemorrhages in patients with drusen: an early sign of occult choroidal neovascularization and chorioretinal anastomosis, Retina 23 (6) (2003) 741–751.
[4] K.B. Freund, I.V. Ho, I.A. Barbazetto, et al., Type 3 neovascularization: the expanded spectrum of retinal angiomatous proliferation, Retina 28 (2) (2008) 201–211.
[5] D. Su, S. Lin, N. Phasukkijwatana, et al., An updated staging system of type 3 neovascularization using spectral domain optical coherence tomography, Retina 36 (2016) S40–S49.
[6] R.F. Spaide, Improving the age-related macular degeneration construct: a new classification system, Retina 38 (5) (2018) 891–899.
[7] L.A. Yannuzzi, K.B. Freund, B.S. Takahashi, Review of retinal angiomatous proliferation or type 3 neovascularization, Retina 28 (3) (2008) 375–384.
[8] A. Caramoy, T. Ristau, Y.T. Lechanteur, et al., Environmental and genetic risk factors for retinal angiomatous proliferation, Acta Ophthalmol. 92 (8) (2014) 745–748.
[9] N.E. Gross, A. Aizman, A. Brucker, J.M. Klancnik Jr., L.A. Yannuzzi, Nature and risk of neovascularization in the fellow eye of patients with unilateral retinal angiomatous proliferation, Retina 25 (2005) 713–718.


[10] R.F. Spaide, Fundus autofluorescence and age-related macular degeneration, Ophthalmology 110 (2003) 392–399.
[11] M. Sawa, C. Ueno, F. Gomi, K. Nishida, Incidence and characteristics of neovascularization in fellow eyes of Japanese patients with unilateral retinal angiomatous proliferation, Retina 34 (2014) 761–777.
[12] B.J. Wegscheider, M. Weger, W. Renner, et al., Association of complement factor H Y402H gene polymorphism with different subtypes of exudative age-related macular degeneration, Ophthalmology 114 (2007) 738–742.
[13] H. Hayashi, K. Yamashiro, N. Gotoh, et al., CFH and ARMS2 variations in age-related macular degeneration, polypoidal choroidal vasculopathy, and retinal angiomatous proliferation, Invest. Ophthalmol. Vis. Sci. 51 (2010) 5914–5919.
[14] F. De Bats, T. Mathis, M. Mauget-Faÿsse, F. Joubert, P. Denis, L. Kodjikian, Prevalence of reticular pseudodrusen in age-related macular degeneration using multimodal imaging, Retina 36 (1) (2016) 46–52.
[15] J.H. Kim, Y.S. Chang, J.W. Kim, T.G. Lee, C.G. Kim, Prevalence of subtypes of reticular pseudodrusen in newly diagnosed exudative age-related macular degeneration and polypoidal choroidal vasculopathy in Korean patients, Retina 35 (12) (2015) 2604–2612.
[16] J.P. Marques, I. Lains, M.A. Costa, et al., Retinal angiomatous proliferation: a quantitative analysis of the fundoscopic features of the fellow eye, Retina 35 (2015) 1985–1991.
[17] G. Querques, L. Querques, R. Forte, N. Massamba, R. Blanco, E.H. Souied, Precursors of type 3 neovascularization: a multimodal imaging analysis, Retina 33 (6) (2013) 1241–1248.
[18] A. Nagiel, D. Sarraf, S.R. Sadda, et al., Type 3 neovascularization: evolution, association with pigment epithelial detachment, and treatment response as revealed by spectral domain optical coherence tomography, Retina 35 (4) (2015) 638–647.
[19] H. Matsumoto, T. Sato, S. Kishi, Tomographic features of intraretinal neovascularization in retinal angiomatous proliferation, Retina 30 (3) (2010) 425–430.
[20] G. Querques, E.H. Souied, K.B. Freund, Multimodal imaging of early stage 1 type 3 neovascularization with simultaneous eye-tracked spectral-domain optical coherence tomography and high-speed real-time angiography, Retina 33 (9) (2013) 1881–1887.
[21] R.F. Spaide, J.G. Fujimoto, N.K. Waheed, S.R. Sadda, G. Staurenghi, Optical coherence tomography angiography, Prog. Retin. Eye Res. 64 (2018) 1–55.
[22] A.S.H. Tsai, N. Cheung, A.T.L. Gan, et al., Retinal angiomatous proliferation, Surv. Ophthalmol. 62 (4) (2017) 462–492.
[23] R. Brancato, U. Introini, L. Pierro, et al., Optical coherence tomography (OCT) in retinal angiomatous proliferation (RAP), Eur. J. Ophthalmol. 12 (6) (2002) 467–472.
[24] E. Borrelli, D. Sarraf, K.B. Freund, S.R. Sadda, OCT angiography and evaluation of the choroid and choroidal vascular disorders, Prog. Retin. Eye Res. (2018). https://doi.org/10.1016/j.preteyeres.2018.07.002.
[25] R. Sacconi, E. Corbelli, A. Carnevali, L. Querques, F. Bandello, G. Querques, Optical coherence tomography angiography in geographic atrophy, Retina 38 (12) (2018) 2350–2355.
[26] R. Sacconi, E. Borrelli, E. Corbelli, et al., Quantitative changes in the ageing choriocapillaris as measured by swept source optical coherence tomography angiography, Br. J. Ophthalmol. (2018). https://doi.org/10.1136/bjophthalmol-2018-313004.
[27] R. Sacconi, K.B. Freund, L.A. Yannuzzi, et al., The expanded spectrum of perifoveal exudative vascular anomalous complex, Am J. Ophthalmol. 184 (2017) 137–146.
[28] A. Carnevali, R. Sacconi, E. Corbelli, et al., Optical coherence tomography angiography analysis of retinal vascular plexuses and choriocapillaris in patients with type 1 diabetes without diabetic retinopathy, Acta Diabetol. 54 (7) (2017) 695–702.


[29] L. Kuehlewein, K.K. Dansingani, T.E. de Carlo, et al., Optical coherence tomography angiography of type 3 neovascularization secondary to age-related macular degeneration, Retina 35 (11) (2015) 2229–2235.
[30] N. Phasukkijwatana, A.C.S. Tan, X. Chen, K.B. Freund, D. Sarraf, Optical coherence tomography angiography of type 3 neovascularisation in age-related macular degeneration after antiangiogenic therapy, Br. J. Ophthalmol. 101 (5) (2016) 597–602.
[31] X. Chen, M. Al-Sheikh, C.K. Chan, et al., Type 1 versus type 3 neovascularization in pigment epithelial detachments associated with age-related macular degeneration after anti-vascular endothelial growth factor therapy: a prospective study, Retina 36 (Suppl. 1) (2016) S50–S64.
[32] A. Miere, G. Querques, O. Semoun, A. El Ameen, V. Capuano, E.H. Souied, Optical coherence tomography angiography in early type 3 neovascularization, Retina 35 (11) (2015) 2236–2241.
[33] G. Querques, A. Miere, E.H. Souied, Optical coherence tomography angiography features of type 3 neovascularization in age-related macular degeneration, Dev. Ophthalmol. 56 (2016) 57–61.
[34] E. Borrelli, E.H. Souied, K.B. Freund, et al., Reduced choriocapillaris flow in eyes with type 3 neovascularization due to age-related macular degeneration, Retina (2018). https://doi.org/10.1097/IAE.0000000000002198.
[35] F.G. Holz, D. Pauleikhoff, R. Klein, A.C. Bird, Pathogenesis of lesions in late age-related macular disease, Am J. Ophthalmol. 137 (3) (2004) 504–510.
[36] R. Dell'Omo, M. Cassetta, E. Dell'Omo, et al., Aqueous humor levels of vascular endothelial growth factor before and after intravitreal bevacizumab in type 3 versus type 1 and 2 neovascularization. A prospective, case-control study, Am J. Ophthalmol. 153 (1) (2012).
[37] M. Marsiglia, S. Boddu, C.Y. Chen, et al., Correlation between neovascular lesion type and clinical characteristics of nonneovascular fellow eyes in patients with unilateral, neovascular age-related macular degeneration, Retina 35 (5) (2015) 966–974.
[38] R. Sacconi, D. Sarraf, S. Garrity, et al., Nascent type 3 neovascularization in age-related macular degeneration, Ophthalmol. Retina (2018). https://doi.org/10.1016/j.oret.2018.04.016.
[39] K.K. Dansingani, J. Naysan, K.B. Freund, En face OCT angiography demonstrates flow in early type 3 neovascularization (retinal angiomatous proliferation), Eye 29 (2015) 703–706.
[40] P.J. Rosenfeld, D.M. Brown, J.S. Heier, et al., Ranibizumab for neovascular age-related macular degeneration, N. Engl. J. Med. 355 (2006) 1419–1431.
[41] D.M. Brown, P.K. Kaiser, M. Michels, et al., Ranibizumab versus verteporfin for neovascular age-related macular degeneration, N. Engl. J. Med. 355 (2006) 1432–1444.
[42] A.E. Fung, G.A. Lalwani, P.J. Rosenfeld, et al., An optical coherence tomography-guided, variable dosing regimen with intravitreal ranibizumab (Lucentis) for neovascular age-related macular degeneration, Am J. Ophthalmol. 143 (2007) 566–583.
[43] K.B. Freund, J.F. Korobelnik, R. Devenyi, et al., Treat-and-extend regimens with anti-VEGF agents in retinal diseases: a literature review and consensus recommendations, Retina 35 (2015) 1489–1506.
[44] C. Gianniou, A. Dirani, W. Ferrini, et al., Two-year outcome of an observe-and-plan regimen for neovascular age-related macular degeneration: how to alleviate the clinical burden with maintained functional results, Eye (London) 29 (2015) 342–349.
[45] S. Ishida, T. Usui, K. Yamashiro, et al., VEGF164-mediated inflammation is required for pathological, but not physiological, ischemia-induced retinal neovascularization, J. Exp. Med. 198 (2003) 483–489.
[46] Y. Chen, C. Wiesmann, G. Fuh, et al., Selection and analysis of an optimized anti-VEGF antibody: crystal structure of an affinity-matured Fab in complex with antigen, J. Mol. Biol. 293 (1999) 865–881.
[47] S. Lien, H.B. Lowman, Therapeutic anti-VEGF antibodies, Handb. Exp. Pharmacol. 181 (2008) 131–150.


[48] J. Holash, S. Davis, N. Papadopoulos, et al., VEGF-Trap: a VEGF blocker with potent antitumor effects, Proc. Natl. Acad. Sci. U. S. A. 99 (2002) 11393–11398.
[49] J.S. Heier, D.M. Brown, V. Chong, et al., Intravitreal aflibercept (VEGF trap-eye) in wet age-related macular degeneration, Ophthalmology 119 (12) (2012) 2537–2548.
[50] J.E. Frampton, Ranibizumab: a review of its use in the treatment of neovascular age-related macular degeneration, Drugs Aging 30 (2013) 331–358.
[51] D. Ribatti, The controversial role of placental growth factor in tumor growth, Cancer Lett. 307 (2011) 1–5.
[52] X. Li, A. Kumar, F. Zhang, et al., Complicated life, complicated VEGF-B, Trends Mol. Med. 18 (2012) 119–127.
[53] N. Papadopoulos, J. Martin, Q. Ruan, et al., Binding and neutralization of vascular endothelial growth factor (VEGF) and related ligands by VEGF Trap, ranibizumab and bevacizumab, Angiogenesis 15 (2012) 171–185.
[54] X. Wang, J. Yang, Analysis of the binding affinity of vascular endothelial growth factor A (VEGF) to ranibizumab, aflibercept and bevacizumab, in: ARVO (The Association for Research in Vision and Ophthalmology) Annual Meeting Abstracts, 2013. Session 255:10–11.
[55] P. Lanzetta, P. Mitchell, S. Wolf, et al., Different antivascular endothelial growth factor treatments and regimens and their outcomes in neovascular age-related macular degeneration: a literature review, Br. J. Ophthalmol. 97 (12) (2013) 1497–1507.
[56] P.J. Rosenfeld, H. Shapiro, L. Tuomi, et al., Characteristics of patients losing vision after 2 years of monthly dosing in the phase III ranibizumab clinical trials, Ophthalmology 118 (2011) 523–530.
[57] D.S. Boyer, A.N. Antoszyk, C.C. Awh, et al., Subgroup analysis of the MARINA study of ranibizumab in neovascular age-related macular degeneration, Ophthalmology 114 (2007) 246–252.
[58] P.K. Kaiser, D.M. Brown, K. Zhang, et al., Ranibizumab for predominantly classic neovascular age-related macular degeneration: subgroup analysis of first-year ANCHOR results, Am J. Ophthalmol. 144 (2007) 850–857.
[59] S. Kang, Y.J. Roh, One-year results of intravitreal ranibizumab for neovascular age-related macular degeneration and clinical responses of various subgroups, Jpn. J. Ophthalmol. 53 (2009) 389–395.
[60] G.S. Ying, J. Huang, M.G. Maguire, et al., Baseline predictors for one-year visual outcomes with ranibizumab or bevacizumab for neovascular age-related macular degeneration, Ophthalmology 120 (2013) 122–129.
[61] E. Daniel, J. Shaffer, G.S. Ying, et al., Comparison of Age-Related Macular Degeneration Treatments Trials (CATT) Research Group. Outcomes in eyes with retinal angiomatous proliferation in the comparison of age-related macular degeneration treatments trials (CATT), Ophthalmology 123 (3) (2016) 609–616.
[62] M.B. Parodi, S. Donati, F. Semeraro, et al., Intravitreal anti-vascular endothelial growth factor drugs for retinal angiomatous proliferation in real-life practice, J. Ocul. Pharmacol. Ther. 33 (2) (2017) 123–127.
[63] Y.G. Park, Y.J. Roh, One year results of intravitreal ranibizumab monotherapy for retinal angiomatous proliferation: a comparative analysis based on disease stages, BMC Ophthalmol. 15 (2015).
[64] M. Inoue, A. Arakawa, S. Yamane, K. Kadonosono, Long-term results of intravitreal ranibizumab for the treatment of retinal angiomatous proliferation and utility of an advanced RPE analysis performed using spectral-domain optical coherence tomography, Br. J. Ophthalmol. 98 (7) (2014) 956–960.
[65] A.H. Skalet, A.K. Miller, M.L. Klein, A.K. Lauer, D.J. Wilson, Clinicopathologic correlation of retinal angiomatous proliferation treated with ranibizumab, Retina 37 (8) (2017) 1620–1624.
[66] H.J. Cho, H.J. Hwang, H.S. Kim, J.I. Han, D.W. Lee, J.W. Kim, Intravitreal aflibercept and ranibizumab injections for type 3 neovascularization, Retina 38 (11) (2018) 2150–2158.


[67] H. Matsumoto, T. Sato, M. Morimoto, et al., Treat-and-extend regimen with aflibercept for retinal angiomatous proliferation, Retina 36 (12) (2016) 2282–2289.
[68] H.D. Chou, W.C. Wu, N.K. Wang, L.H. Chuang, K.J. Chen, C.C. Lai, Short-term efficacy of intravitreal aflibercept injections for retinal angiomatous proliferation, BMC Ophthalmol. 17 (1) (2017) 104. Erratum in: BMC Ophthalmol. 17 (1) (2017) 144.
[69] K.T. Tsaousis, V.E. Konidaris, S. Banerjee, et al., Intravitreal aflibercept treatment of retinal angiomatous proliferation: a pilot study and short-term efficacy, Graefes Arch. Clin. Exp. Ophthalmol. 253 (2015) 663–665.

13

Diabetic retinopathy detection in ocular imaging by dictionary learning

Zahra Amini*, Rahele Kafieh*, Elaheh Mousavi, Hossein Rabbani
Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
*Both authors contributed equally as first authors.

1 Diabetes

Diabetes is a metabolic disease characterized by increased levels of blood glucose (hyperglycemia), which usually leads to serious damage to different organs of the body, including the eyes. Based on the World Health Organization (WHO) report, 422 million adults are suffering from diabetes, and 1.5 million deaths are directly attributed to diabetes each year [1]. The most common form is type 2 diabetes [2], in which the body does not use insulin properly (insulin resistance). At first, the pancreas makes extra insulin, but over time it cannot produce enough insulin to maintain blood glucose at normal levels. Type 1 diabetes, or juvenile diabetes (in which the body does not produce insulin), occurs in only 5% of people with diabetes. The percentage of Americans with diagnosed diabetes is projected to increase by 165%, from a prevalence of 4.0% in 2000 to 7.2% in 2050 [3].

All forms of diabetic eye disease have the potential to cause severe vision loss and blindness [4], including diabetic retinopathy (DR), which is the leading cause of vision impairment and blindness among adults aged 20–74 years [4, 5]. DR is a progressive change in vascular permeability and an increase of fragile new blood vessels in the retina (the light-sensitive layer in the third, inner coat of the eye) [6]. It is estimated that over one-third of diabetic people worldwide have signs of DR, and more than 5 million have become blind owing to DR. This number is expected to double before 2030 [7]. Around 400 million people in the world [1, 8] and over 4.6 million people in Iran suffer from diabetes [8, 9]. In 2013, diabetes was categorized among the top ten causes of disability-adjusted life-years (DALYs) in the eastern Mediterranean region [10]. Over the past 20 years, diabetic retinopathy has been revealed to be one of the main complications of diabetes, with a prevalence of 95% and 60% in type 1 and type 2 diabetic patients, respectively [11, 12]. This makes diabetes one of the leading causes of blindness and low vision [8, 13–15]. Blindness is reported to occur as an effect of late

diagnosis of DR [5, 15–17] due to belated visits; this has resulted in blindness in 5.5% of cases with type 2 diabetes (the most common form of diabetes [18]) in Iran [15]. The low accessibility of specialists and of the devices required for DR detection results in delayed visits. Therefore, by designing an accessible automatic ocular health kiosk, more frequent and scheduled screening of diabetic patients can be provided, and early stages of preventable, treatable, and potentially blinding disorders can be detected. According to the current routine, only 35%–60% of diabetic patients receive an annual visit [2]. Regular DR screening requires specialist equipment and expertise in hospitals and specialist clinics, which enforces a high cost per visit [19]. Moreover, the problem is more crucial for DR staging, which needs more frequent visits.

Vision loss due to DR is irreversible in most cases. However, the risk of blindness can be reduced by up to 95% by early detection and treatment [4]. An annual eye exam is proposed for people with diabetes because early DR is often asymptomatic. In addition, people with diagnosed DR may need more frequent eye examinations. The regular DR screening routine leads to delayed diagnosis until progression to severe DR such as proliferative diabetic retinopathy (PDR) or diabetic macular edema (DME). People at risk of developing severe DR need a comprehensive dilated eye exam as frequently as every 2–4 months. Since current DR screening requires specialist equipment and clinical expertise [19], only 35%–60% of diabetic patients receive an annual visit [2, 20–22].

Different types of specialist equipment are commonly used in DR diagnosis, but two modalities are emphasized here: fundus imaging and optical coherence tomography (OCT). To provide information on these two modalities, all sections in the rest of the chapter are divided into fundus and OCT sub-sections. Section 2 discusses anatomical DR biomarkers; DR classification methods are briefly reviewed in Section 3; and, given the discrimination capabilities of the dictionary learning (DL) method, Section 4 presents two efficient DL-based classification works for DR detection, one for each modality.

2 Imaging biomarkers

2.1 Fundus

For automatic analysis of color fundus images, traditional methods are used to extract imaging biomarkers such as microaneurysms (MAs), hard exudates, hemorrhages, venous loops, or cotton wool spots (Fig. 1). The long-term damage caused by diabetes forms MAs and subsequently exudates, besides hemorrhages. In most cases, blindness caused by DR can be prevented by combining accurate and early diagnosis with the right treatment [23]. However, the manual analysis of these biomarkers is time-consuming and not applicable in general screening. Recently, digital imaging has provided a tool for DR screening. In addition, it has the potential to record high-quality permanent images of the retinal appearance, which can be used for monitoring progression or treatment response and can be reviewed by an ophthalmologist across different visits.


FIG. 1 Different features in a DR image: blood vessels, microaneurysms, optic disc, exudate, hemorrhage, and macula [23].

Actually, image processing is a tool to reduce the workload of clinicians and enhance the repeatability of diagnosis. Therefore, digital image processing techniques have already been applied to the automatic detection of DR. Most of the available techniques for this task have the following steps [24]:

• Preprocessing
• Segmentation of different anatomical features
• Detection of biomarkers

2.1.1 Preprocessing

Correction of nonuniform illumination and contrast enhancement are the most commonly used preprocessing techniques. For example, one of the commonly used methods for contrast enhancement in these images is contrast limited adaptive histogram equalization (CLAHE). Some works also change the color space or concentrate on specific color channels carrying more information (such as the green channel).
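As a concrete illustration of these two steps, the sketch below applies CLAHE to the green channel and a simple background-subtraction illumination correction; the file name, CLAHE parameters, and blur kernel size are illustrative assumptions, not values from the cited works.

```python
import cv2

# A minimal preprocessing sketch, assuming an 8-bit color fundus image
# on disk; the file name is illustrative.
img = cv2.imread("fundus.png")          # BGR color fundus image
green = img[:, :, 1]                    # green channel (index 1 in BGR order)

# Contrast limited adaptive histogram equalization (CLAHE)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(green)

# Simple nonuniform-illumination correction: subtract a heavily blurred
# estimate of the background from the enhanced channel.
background = cv2.medianBlur(enhanced, 51)
corrected = cv2.subtract(enhanced, background)
```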

2.1.2 Segmentation of different anatomical features

To reduce false biomarker detections, anatomical structures such as the optic disk (OD) and blood vessels have to be identified and eliminated to avoid misclassifications.

Blood vessel extraction methods
Several vessel detection methods have been proposed, including multilayer perceptron neural networks [25], adaptive thresholding to locate vessels [26], the Hough transform [27], top-hat transformation [28, 29], the Radon transform [30], two-dimensional (2D) matched filters [31], the fuzzy C-means classifier [32], a sparse tracking technique [33], scale- and orientation-selective Gabor filter banks [34], an algorithm based on regional recursive hierarchical decomposition using quadtrees and post-filtration of edges [35], and texture-based vessel segmentation methods [36].
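To make the 2D matched-filter idea [31] concrete, the sketch below builds oriented kernels with an inverted-Gaussian cross-section (vessels appear as dark ridges) and keeps the maximum response over orientations; all parameter values (scale, kernel length, angular step) are illustrative assumptions rather than the settings of the cited paper.

```python
import cv2
import numpy as np

def matched_kernel(sigma: float = 2.0, length: int = 9,
                   angle_deg: float = 0.0) -> np.ndarray:
    """One oriented matched-filter kernel: an inverted-Gaussian
    cross-section (vessels are dark) extended along the vessel axis."""
    half = int(np.ceil(np.hypot(length, 6 * sigma) / 2))
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    theta = np.deg2rad(angle_deg)
    u = xs * np.cos(theta) + ys * np.sin(theta)      # along the vessel
    v = -xs * np.sin(theta) + ys * np.cos(theta)     # across the vessel
    kernel = np.where(np.abs(u) <= length / 2,
                      -np.exp(-(v ** 2) / (2 * sigma ** 2)), 0.0)
    kernel -= kernel.mean()                          # zero-mean for matching
    return kernel.astype(np.float32)

def vessel_response(green: np.ndarray) -> np.ndarray:
    """Maximum filter response over a bank of orientations."""
    src = green.astype(np.float32)
    responses = [cv2.filter2D(src, -1, matched_kernel(angle_deg=a))
                 for a in np.arange(0.0, 180.0, 15.0)]
    return np.max(responses, axis=0)
```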


Some novel techniques based on the curvelet transform have also been introduced in Refs. [37–40]. In Ref. [37], a method was introduced to localize the vessels in the following three fundamental steps:
(1) Contrast enhancement was applied using the curvelet transform: curvelet coefficients were modified to enhance the edges in fundus images using a function defined by Starck et al. [41].
(2) The matched filter response (MFR) was calculated.
(3) Vessels were segmented as follows:
• taking the curvelet transform of the MFR of the enhanced retinal image;
• retaining high-frequency components and removing all other coefficients;
• applying the inverse curvelet transform;
• applying a threshold using the mean of the pixel values of the image;
• applying length filtering and removing misclassified pixels (a minimal sketch of these last two steps follows).
Fig. 2 shows the vessel segmentation results by this method for a sample fundus image.
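The last two steps (mean thresholding and length filtering) can be written directly; a minimal sketch that operates on any vessel-response image such as the MFR, with the minimum component size an assumption to be tuned:

```python
import numpy as np
from skimage.morphology import remove_small_objects

def threshold_and_length_filter(response: np.ndarray,
                                min_size_px: int = 60) -> np.ndarray:
    """Binarize a vessel-response image (e.g., the MFR) at its mean
    intensity, then apply length filtering by dropping small connected
    components. As discussed later in the text, such a fixed threshold
    can also delete correctly localized short vessels."""
    binary = response > response.mean()
    return remove_small_objects(binary, min_size=min_size_px)
```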

FIG. 2 Segmentation results: (A) the input image, (B) enhanced image, (C) MFR of the enhanced image, (D) high-frequency component of the image, (E) manual segmentation result, and (F) final result of vessel segmentation [37].


FIG. 3 (A) FFA images, (B) vessel centerline, and (C) ground truth images [38, 39].

An improved version of the method in Ref. [37] was then implemented for vessel extraction in fundus fluorescein angiograms (FFA) [38, 39]. In the preprocessing step, the nonuniform illumination and the low contrast between the background and the blood vessels were improved. Then, the directional subbands of the curvelet transform of the image were computed to make a set of directional images, and the following two computations were performed for each of the directional images in a multi-scale framework: first, the eigenvalues of the Hessian matrix were computed; second, the first-order derivative of the directional images was evaluated. These features were used to extract initial vessel centerlines, and finally each initial vessel centerline was confirmed or rejected based on length and intensity features and eigenvalue analysis [38, 39]. Fig. 3 shows the vessel segmentation results for a sample FFA image.

However, as in most of these methods, a length-filtering step at the end of the algorithm is inevitable to remove short and incorrect structures. The unavoidable problem associated with this step is that the length-filtering algorithm works with a predefined threshold, so short vessels localized correctly in the previous step are removed. To solve this problem, a feedback procedure was proposed in Ref. [40], through which the calculated 2D vessel profile (after length filtering) is analyzed again and the missed vessels are revitalized using information from the corresponding slices of OCT [40]. Fig. 4 shows an example of the proposed 2D vessel segmentation in different steps.

Optic disk segmentation approaches
Interference often exists between the OD and biomarkers like MAs, so the detection and elimination of the OD is an important part of biomarker segmentation. The Canny edge detector, principal component analysis (PCA), vessel-direction matched filters, and the circular Hough transform are a few of the methods used for detecting the OD.


FIG. 4 An example of the proposed 2D vessel segmentation in different steps: (A) manual segmentation, (B) the vessel segmentation introduced in Ref. [29] after length filtering, (C) the region scanned by OCT is replaced by the result of feedback method, and (D) the result after suppressing the false positives around the optic nerve head [40].

Furthermore, some methods were proposed for the detection of the OD in Refs. [39, 41–44]. In Refs. [41, 42], an automatic algorithm based on the curvelet transform was used for the detection of the OD. This method is robust to the changes found in the appearance of retinal fundus images and does not require user initialization. In this algorithm, the digital curvelet transform of the enhanced retinal image was first computed and its coefficients were modified to find the probable location of the OD. Following this step, since both the OD and exudates are bright features in fundus images, a bright lesions map (BLM) image was generated to distinguish between exudates and the OD. When the size of yellowish objects in the retinal image was negligible, the Canny edge detector was applied to directly detect the OD location. Otherwise, morphological operations were used to fill and erode circular OD region candidates in the edge map, and the candidate region with the maximum summation of pixels in the strongest edge map was selected as the final location of the OD. Finally, the boundary of the OD was extracted using a level set deformable model. Fig. 5 shows the results of OD detection for sample color fundus images using the method proposed in Refs. [41, 42].


FIG. 5 OD detection for sample color fundus images using the proposed method in Refs. [41, 42].

In another work [39], a modified version of [42] was introduced for the automatic extraction of the OD in FFA. For this purpose, the extracted blood vessels were first removed from the FFA image by multi-structure-element morphology, and modification of curvelet coefficients was used to enhance candidate regions for the OD. Then, the boundary of the candidate regions was extracted using the Canny edge detector and the Hough transform on the reconstructed image; information from the main arc of the retinal vessels surrounding the OD region also helped to determine the actual location of the OD. Finally, distance-regularized level set evolution was applied to detect the OD boundary.
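A minimal sketch of the Canny-plus-circular-Hough idea used for OD localization in these works (OpenCV's HoughCircles runs a Canny edge step internally); the radius bounds and accumulator thresholds are illustrative assumptions that depend on image resolution.

```python
import cv2
import numpy as np

def detect_optic_disc(gray: np.ndarray):
    """Locate the OD as the strongest roughly disc-sized circle using the
    circular Hough transform. `gray` is an 8-bit grayscale fundus image;
    all parameter values are illustrative."""
    blurred = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=2,
                               minDist=gray.shape[0],  # expect a single disc
                               param1=100,             # Canny high threshold
                               param2=40,              # accumulator threshold
                               minRadius=30, maxRadius=90)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)
    return (x, y), r                                   # center and radius
```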

2.1.3 Detection of biomarkers

Approaches used for MA detection
MAs constitute the earliest recognizable biomarker of DR, and their detection is very important for further analysis. Mathematical morphology is commonly used for MA detection. Many researchers, such as Purwita et al. [45], Karnowski et al. [46], and Xu et al. [47], used morphological algorithms for MA detection. In most of these studies, a mathematical morphology-based method was first applied to detect and extract MA candidates, and the extracted features were then classified and validated by a classifier. Some other groups used auto seed generation/region growing methods for MA detection [25, 48, 49]. For example, in Ref. [50] automatic seed generation was used to extract red lesion candidates; for true red lesion detection, a spatiotemporal feature map classifier was used. In Ref. [51], seeds for region growing were generated by match filtering and thresholding of the top-hat-processed image; geometric and hue features were then used for the classification.
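The morphological candidate-extraction step shared by many of these methods can be sketched as a black-hat filtering of the green channel followed by thresholding; the structuring-element size and the Otsu thresholding choice are illustrative assumptions, and in a full pipeline the surviving candidates would then be passed to a classifier, as described above.

```python
import cv2
import numpy as np

def ma_candidates(green: np.ndarray, vessel_mask: np.ndarray) -> np.ndarray:
    """Extract microaneurysm candidates from the 8-bit green channel. MAs
    are small dark round spots, so a black-hat with a disc slightly larger
    than an MA highlights them; candidates falling on the vessel tree are
    discarded."""
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    blackhat = cv2.morphologyEx(green, cv2.MORPH_BLACKHAT, se)
    _, cand = cv2.threshold(blackhat, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cand[vessel_mask > 0] = 0          # MAs lie off the vessel tree
    return cand
```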


The third group of researchers used different filters, such as the Laplacian of Gaussian filter [52] or a double-ring filter [53], to extract features of MA candidates. Quellec et al. [54] proposed a template-matching supervised MA detection method in wavelet subbands. Pallawala et al. [55] devised a two-step generalized eigenvector method for the detection of MAs in retinal images: the first step extracts the locations of MAs by suppressing other structures and blood vessels, and the second step differentiates true MAs by their specific features. Lazar et al. [56] proposed a local maxima map algorithm for identifying MAs in retinal images. Zhang et al. [57] developed a two-step approach for detecting and classifying MAs: first, a multi-scale Gaussian correlation filter (MSCF) and region growing detect MA candidates; in the second phase, these MA candidates are classified with a sparse representation classifier (SRC). Ram et al. [58] introduced an MA detection strategy based on clutter rejection. Seoud et al. [59] proposed a method for lesion detection using a shape feature set called dynamic shape features (DSF), which skips the precise candidate segmentation step. Lazar and Hajdu [60] developed a method based on the analysis of local rotating cross-sectional profiles. Akram et al. [61] presented a three-stage filter bank system for MA detection: in the first stage, candidate regions are extracted; in the second stage, a feature vector of shape, color, gray level, and statistical features is constructed; and in the third stage, classification is performed. In Ref. [62], a new method for MA detection in FFA images was proposed. As vessels and MAs have similar brightness in fundus and FFA images, the vessels extracted by Ref. [37] were first removed, and morphological operations were then applied on the resulting image to detect MAs. Fig. 6 shows the various steps of the method proposed in Ref. [62] for a sample FFA image.

Approaches used for exudates detection
Exudates are masses of lipid and protein in the retina that have seeped out of blood vessels. They are typically bright, reflective, white lesions, mostly seen together with MAs. After eliminating the prominent structures of the retina, such as the OD and the blood vessel tree, the exudates are detected using a sequence of image processing methods [63]. Different algorithms, including statistical classification [64], neural networks [65], fuzzy C-means [66, 67], support vector machines (SVM) [67, 68], and morphological algorithms [66, 69, 70], have been used for exudate extraction. Since both the OD and bright lesions are bright features in fundus images, an automatic algorithm for the detection of the OD and exudates on low-contrast images was introduced in Ref. [42]. The proposed algorithm was composed of three main stages. First, bright candidate lesions in the image were extracted by modification of the curvelet coefficients of the enhanced retinal image. For this purpose, a new bright-lesion enhancement was applied on the green plane of the retinal image to obtain adequate illumination normalization in the regions near the OD and to increase the brightness of lesions in dark areas such as the fovea. Following this step, a new OD detection and boundary extraction method based on the level set method was proposed. Finally, to distinguish between exudates and the OD (i.e., to remove a source of false detections from the final exudate map), the extracted candidate pixels in the BLM that were not in the OD region (detected in the previous step) were considered actual bright lesions.
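A minimal sketch of the "bright candidates minus OD" logic common to these exudate detectors, with plain CLAHE standing in for the curvelet-based enhancement of Ref. [42]; the percentile threshold is an illustrative assumption.

```python
import cv2
import numpy as np

def exudate_candidates(green: np.ndarray, od_mask: np.ndarray) -> np.ndarray:
    """Bright-lesion candidate map: enhance the 8-bit green channel, keep
    the brightest pixels, then discard the optic disc region -- the
    'bright lesions minus OD' logic described above."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)
    thresh = np.percentile(enhanced, 99)   # keep roughly the brightest 1%
    cand = (enhanced >= thresh).astype(np.uint8) * 255
    cand[od_mask > 0] = 0                  # the OD is bright but not a lesion
    return cand
```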


FIG. 6 (A) Original image after removing vessels, (B) applying morphological dilation on (A), (C) background image by applying morphological erosion on (B), (D) mapping zero pixels of (C) on (B), (E) removing background, (F) thresholding, (G) detected MAs, and (H) MAs on original image [62].

Fig. 7 shows the bright lesions extracted with this method for a sample fundus image.

Approaches used for hemorrhages detection
In advanced stages of DR, retinal hemorrhages become apparent. They are a symptom of increased ischemia (loss of oxygen) in the retina. Similar to MA detection methods, hemorrhage extraction algorithms can be divided into morphological processing, region growing methods, filters, and other miscellaneous approaches.


FIG. 7 Final extracted bright lesions: (A) color retinal fundus image, (B) green plane of color retinal image, (C) enhanced image, (D) reconstructed image from the modified curvelet coefficients, (E) extracted bright candidate features, (F) reconstructed image from modified curvelet coefficients for the determination of OD location, (G) smoothed image, (H) extracted OD region by level-set method (with removed vessels), (I) BLM image, and (J) final extracted bright lesion pixels (compared with the image shown in (E), the OD and some extra detections have been removed) [42].

In the first group, Shivaram et al. [71] presented an algorithm for detecting hemorrhages and removing the blood vessels using a set of optimally adjusted mathematical morphology operators. In Ref. [72], blot hemorrhage candidates were detected by the maximum of multiple linear top hats applied to the inverse image. Also, Zhang and Fan [73] proposed a lesion detection algorithm that handles spot lesions of variable intensities, shapes, and sizes by multiscale morphological processing techniques. In the second group, Bae et al. [74] proposed a hybrid method of hemorrhage detection: candidate hemorrhages were extracted using template matching with normalized cross correlation, and since these candidates lack information on the exact lesion shape, region growing segmentation was used to recover it.


Also, a three-stage red-lesion detection method was developed by Marino et al. [75]: initially, a correlation filter set was used to detect lesion candidates, and then region growing and feature-based filtering were applied to remove false positives. Besides these, some researchers have used other techniques. For instance, Zhang and Chutatape [76] used a top-down approach to extract hemorrhages, in which feature extraction was done by combining 2D PCA and the resulting features were given to an SVM classifier to achieve higher classification accuracy. In Ref. [77], an algorithm was proposed to separate MAs and HMs from the color retinal image. To prevent the fovea from being considered a red lesion, a new illumination equalization algorithm was proposed and applied to the green plane of the retinal image. In the next stage, curvelet coefficients were modified so as to drive dark objects toward zero, and the lesions were then extracted as candidate regions by applying an appropriate threshold. Finally, the total blood vessel structure was extracted by a curvelet-based technique, and the false positives (FPs) were eliminated by subtracting the vessel structure from the candidate images. Fig. 8 shows the red lesions extracted with this method for a sample color fundus image.

2.2 OCT

Optical coherence tomography (OCT) is being accepted as a routine standard for the diagnosis and evaluation of diabetic macular edema. OCT provides new information, namely retinal microstructure and thickness maps, to monitor and examine the progress of the disease [78]. Tissue reflectivity and optical characteristics derived from OCT data have already been explored in diseases such as glaucoma and DR; the optical characteristics, together with thickness information, are used to better discriminate between DR stages and healthy eyes [79]. Several biomarkers have been proposed for diabetic macular edema to relate the disease condition to its pathophysiology and to evaluate therapy [78]. Such biomarkers fall into two main subgroups: morphological biomarkers and retinal layer thickness information. Different morphological biomarkers are described in Table 1 and Figs. 9 and 10, and the thickness information of retinal layers as biomarkers for DR detection is reviewed in Table 2. Technical developments in OCT have increased imaging speed significantly, but the software behind the technology has not improved at the same pace. Image processing methods, as the most effective software requirement in OCT, provide image enhancement and quantitative analysis with acceptable speed and accuracy. Furthermore, the analysis of the optical properties of the layers (such as texture features) is also provided by image processing techniques.

3 DR classification

3.1 Fundus

Most of the clinical studies on DR detection are based on subject-dependent, nonautomatic classification by trained experts.


FIG. 8 Segmentation results: (A) the input image, (B) enhanced image, (C) modified image for specific parameters, (D) candidate lesions, (E) complemented of modified image, (F) low-frequency component of image, (G) vessel segmentation result, and (H) red lesions segmentation result [77].

Table 1

Prevalent morphological biomarkers in OCT data for DR analysis.

Morphological biomarker | Definition | Ref.
Sponge-like swelling | Thickening of the fovea with homogeneous optical reflectivity | [80]
Cystoid macular edema | Thickening of the fovea with markedly decreased optical reflectivity in outer layers | [80, 81]
Serous retinal detachment | Thickening of the fovea with subfoveal fluid accumulation and distinct outer border of detached neurosensory retina | [78, 80, 81]


FIG. 9 Example of morphological DR biomarkers in OCT. (A, B) Cyst and detachment, (C, D) cyst, (E, F) hyper-reflectivity (due to hard exudate), and (G, H) detachment [82].


FIG. 10 Examples of morphological DR biomarkers in OCT (grades of disruption).

Automatic image processing and image classification provide a new opportunity for fast and easy interpretation of DR data. A range of DR biomarkers (as described in Section 2.1) is usually extracted, and a classification method is then applied. A fair number of algorithms has already been proposed for automatic biomarker identification [37–39, 41, 42, 44, 54, 77, 98–101]; however, very few works are dedicated to DR detection. Furthermore, DR staging is another aspect of DR classification, which predicts the level of DR based on medical definitions. In a wide view, DR can be categorized into proliferative DR (PDR) and nonproliferative DR (NPDR). PDR is the advanced, high-risk stage and causes severe vision loss and even blindness (Fig. 11E). NPDR itself can be classified into more detailed subcategories based on the presence of specific DR features [102]:

• Mild NPDR: at least one MA with or without the presence of retinal hemorrhages, hard exudates, cotton wool spots, or venous loops (Fig. 11B) [23].
• Moderate NPDR: numerous MAs and retinal hemorrhages are present; a limited amount of venous beading and cotton wool spots can also be seen (Fig. 11C) [23].


Table 2

Thickness information of retinal layers as biomarkers for DR detection.

Data | Retinal layer thickness biomarkers | Ref.
T1DM, T2DM, and healthy subjects | In the pericentral ring: thinning of the RNFL, GCL, and IPL in patients with mild NPDR compared to controls; in the peripheral ring of the macula: thinning of the RNFL and IPL in patients | [83, 84]
T1DM and T2DM subjects | Thinning of the RNFL and thickening of the INL/OPL layer in patients with diabetes mellitus (DM) and no DR or initial DR | [85]
T1DM and no or early signs of DR | Thickening of the INL and thinning of the GCL retinal layer | [86]
En-face OCT imaging | Thinning of the RNFL and OPL, thickening of the ONL in patients with no DR when compared to the healthy controls | [87]
T1DM subjects with no DR, and age- and sex-matched controls | In the perifoveal subfields of NPDR subjects: thickening of the GCL+IPL layer | [88]
Children with T1DM and healthy controls | In the average macular, outer temporal superior, and outer temporal inferior ETDRS regions: thinning in patients | [89]
Children with T1DM | Thinning in the peripapillary RNFL and macular GCC in patients | [90]
T1DM and T2DM subjects | In all quadrants except the superior nasal quadrant: thinning of the GCL-IPL in DM; no change in macular RNFL thickness | [91]
T2DM and healthy subjects | In parts of the pericentral and peripheral areas: thinning of the GCL-IPL complex, and thinning of the INL and ONL in the T1DM patients, while in T2DM they were thicker; thinning of the GCL-IPL and RNFL in both no-DR and mild NPDR subjects | [92, 93]
T1DM and T2DM subjects | Thinning of the GCC in T2DM subjects; choroidal thinning did not correlate with the changes of the GCC (no such changes in T1DM) | [94]
T2DM, diabetic patients without DR | Thinning of the photoreceptor layer in the T2DM group | [95]
Uncontrolled DM, controlled DM, and controlled DR subjects | Thinning of the GCL, IPL, and GCL+IPL in the uncontrolled DM group (with no DR) | [96, 97]

T1DM and T2DM stand for type I and II of diabetes mellitus, respectively.



• Severe NPDR: characterized by any one of the following features: (1) numerous hemorrhages and MAs in all four quadrants of the retina, (2) venous beading in two or more quadrants, and (3) intraretinal microvascular abnormalities in at least one quadrant (Fig. 11D) [23].

A toy decision-rule sketch of these grading criteria is given after Fig. 11.

FIG. 11 Typical fundus images: (A) normal, (B) mild DR, (C) moderate DR, (D) severe DR, and (E) proliferative DR [23].
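Purely as a toy illustration of how the staging criteria above cascade, the function below encodes them as a decision rule; the inputs (lesion counts and quadrant flags) and thresholds are simplifying assumptions made here, not validated clinical logic.

```python
# Toy sketch of the NPDR staging rules above (assumed simplifications:
# "numerous" is reduced to explicit counts and quadrant flags).
def npdr_grade(n_mas, n_hemorrhages, hem_ma_quadrants,
               venous_beading_quadrants, irma_quadrants):
    if (hem_ma_quadrants >= 4 or venous_beading_quadrants >= 2
            or irma_quadrants >= 1):
        return "severe NPDR"
    if n_mas > 1 and n_hemorrhages > 0:
        return "moderate NPDR"    # numerous MAs and hemorrhages present
    if n_mas >= 1:
        return "mild NPDR"        # at least one MA
    return "no DR"
```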


The following works focus on DR detection or staging (with different numbers of stages). It is well known that DR affects the vessels of the retina, which leak into the retina. In higher stages, DR affects the fovea, which is responsible for central vision, and the shape and size of the foveal avascular zone (FAZ) change. Based on this, in Ref. [103] appropriate features were extracted from the FAZ and used for grading retinal images into normal and abnormal classes. To this end, the vessels and OD were localized, and the FAZ was extracted using information about its anatomical location relative to the OD and the endpoints of the segmented vessels [101]. Then, for each image, the area and regularity of the extracted FAZ were determined and used for DR grading. In Ref. [103], FFA and color fundus images were used simultaneously to extract six features that were fed to an SVM in order to determine DR severity stages; these features included the area of blood vessels, the area of exudates, the regularity of the foveal avascular zone, the number of MAs within it, and the total number of MAs. Seventy patients were involved in this study to classify DR into three groups: (1) no DR, (2) mild/moderate NPDR, and (3) severe NPDR/PDR [103]. In Ref. [104], features extracted from blood vessels, exudates, hemorrhages, and MAs were used for DR detection; the sensitivity and specificity of this method are 75% and 83%, respectively. In Ref. [105], a decision support system (DSS) was introduced to discriminate normal from diabetic images, with a sensitivity of 100% and specificity of 63%. In Ref. [106], contrast enhancement and segmentation were used for diabetic retinopathy screening with an artificial neural network; the sensitivity and specificity of this method were 95% and 53%, respectively. In Ref. [107], a discrimination methodology involving morphological techniques and texture analysis was proposed for extracting features from blood vessels and exudates; an accuracy of 94%, with sensitivity and specificity of 90% and 100%, respectively, was reported. In Ref. [63], the staging of DR was performed using morphological features from DR biomarkers and an SVM, with a reported accuracy of 86%, sensitivity of 82%, and specificity of 86%. In Refs. [108, 109], the marker-controlled watershed was applied for the segmentation of DR biomarkers.

3.2 OCT

As elaborated in Section 1, DME is the most prevalent form of vision-affecting retinopathy in people with diabetes [110]. DME is defined by the accumulation of exudative fluid in the macula following breakdown of the inner blood-retinal barrier (see Fig. 12). Six percent of diabetic patients suffer from DME [110].

FIG. 12 Examples of SD-OCT images in (left) DME and (right) normal B-scans.


As described in Section 2.2, OCT makes a significant contribution to DME detection. This section is dedicated to works on DME classification based on DR biomarkers extracted from OCT [111]. OCT has gained significant importance in recent years for detecting DME; therefore, automated methods for DME classification in OCT volumes are of undeniable importance [112–117]. In Ref. [113], histogram of oriented gradients (HOG) descriptors and SVM were used for the classification of age-related macular degeneration (AMD), DME, and normal cases, with accuracy of 100% for the detection of AMD and DME and 86.67% for normal subjects. In Ref. [114], linear configuration pattern (LCP) features were extracted and then refined by the correlation-based feature subset (CFS) selection method; with this method, accuracy of 100%, 100%, and 93.33% was achieved for DME, normal, and AMD subjects, respectively. In Ref. [115], after aligning and cropping the retinal regions (as a preprocessing step), sparse coding and a spatial pyramid were utilized for image representation; an SVM classifier was then applied, with accuracy of 100%, 100%, and 93.33% for DME, AMD, and normal subjects, respectively. In Ref. [118], artificial neural networks were applied to separate normal subjects from mild DR cases; thickness, total reflectance, and fractal dimension of macular retinal layers were used in the analyses. In Ref. [116], after noise reduction and other preprocessing, a deep learning algorithm with fine-tuning of a pretrained GoogLeNet [119] was used for classification; the reported accuracy was 99%, 89%, and 86% for normal, AMD, and DME, respectively. In Ref. [117], 2D and 3D texture-based features were extracted from retinal OCTs, and bag-of-words representations were tested on different classifiers; the RBF-SVM classifier showed the best result, with sensitivity of 81.2% and specificity of 93.7% for DME classification. In Ref. [120], a deep fusion classification network was applied to DR OCT biomarkers and achieved high accuracy in DR detection.
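As a rough illustration of the fine-tuning approach in Ref. [116], the snippet below adapts an ImageNet-pretrained GoogLeNet to the three OCT classes; the framework (PyTorch/torchvision) and hyperparameters are assumptions for illustration, not details from that work.

```python
# Hedged sketch: fine-tuning a pretrained GoogLeNet for 3-way OCT
# classification (normal / AMD / DME); hyperparameters are illustrative.
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.googlenet(weights="IMAGENET1K_V1")  # ImageNet-pretrained weights
model.fc = nn.Linear(model.fc.in_features, 3)      # replace the classifier head
optimizer = optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# ...standard training loop over preprocessed B-scan batches goes here...
```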

4 Dictionary learning

As discussed above, the available methods in this area are based on conventional classification pipelines and need preprocessing, segmentation, and feature extraction stages to prepare the required input for the classifiers. In the proposed method, such steps are removed, and the algorithm detects the class of each patch (or each image) more perceptually; errors due to wrong segmentation or variable illumination thus have no effect on the final decision. From a modeling point of view, we can propose a specific model for each class and assign new data to a class according to the fitness of the data to the models. Fig. 13 shows a classification of different image modeling methods [121]. According to Fig. 13, these methods are divided into two main branches, spatial domain and transform domain. Transform domain models are further divided into data-adaptive and non-data-adaptive transforms. Moreover, all models can be categorized into deterministic, stochastic, geometric, and energy-based groups.


FIG. 13 Image modeling tree [121]. The tree divides image models into spatial-domain and transform-domain branches; transform-domain models split into data-adaptive transforms (kernel regression, ICA, PCA, diffusion maps, dictionary learning, deep learning) and non-data-adaptive transforms (frequency domain: Fourier, DCT; translation-domain X-lets: wavelets, complex wavelets, wavelet packets, cosine packets, and geometrical X-lets such as curvelets, bandlets, contourlets, and wedgelets). Models are further categorized as deterministic, stochastic (e.g., GMM, GSM, RMF, HMM), geometric (deformable: active contour, level set; graph-based; morphological: dilation, erosion, opening, closing), or energy-based (variational methods, PDE).


As elaborated in the previous paragraphs, image modeling can be facilitated in the transform domain, and in this regard transform-based image processing approaches have developed dramatically. Transform-based modeling is based on using different types of atoms for data representation. An atomic representation is the decomposition of a signal over elementary waveforms chosen from a family called a dictionary [122] (Fig. 14):

Y = DX    (1)

where Y denotes the signals to be represented, D is the dictionary of atoms, and X is the matrix of transform coefficients. It is a well-known fact that if a correct dictionary is used, the result is a sparse representation in which the required information is carried by a few nonzero coefficients. However, it is not possible to obtain one ideal sparse transform that adapts to all kinds of signals; therefore, a great number of strategies in atomic representation have been designed to capture the energy of a signal with a few vectors. One categorization of such representations divides them into data-adaptive and non-data-adaptive methods, and each subclass can be studied in single-scale or multiscale schemes. In non-data-adaptive methods, the dictionaries (bases or representing atoms) are predetermined irrespective of the data; as a result, each non-data-adaptive method may be an ideal representation for a particular type of data while performing very weakly on other kinds. A simple example of a single-scale non-data-adaptive basis is the Fourier transform; wavelet transforms and geometrical X-lets are examples of multiscale non-data-adaptive bases. In data-adaptive methods, the basis yielding the sparsest result is identified by rearranging the coefficients and finding the proper basis for the data [123]. Examples of the single-scale data-adaptive class are principal component analysis (PCA) [123, 124], independent component analysis (ICA) [125, 126], diffusion maps [123], and dictionary learning (DL) [127, 128].

FIG. 14 A schematic for image atomic representation: X(u, v) = \sum_{\gamma \in \Gamma} d_\gamma \varphi_\gamma(u, v), \varphi_\gamma \in \Lambda, where the atoms \varphi_\gamma are elementary functions (a basis, frame, or tight frame) and the d_\gamma are the transform coefficients. Examples of \Gamma: frequency (Fourier), scale-translation (wavelets), scale-translation-frequency (wavelet packets), translation-duration-frequency (cosine packets), and scale-translation-angle (geometrical X-lets: curvelets, bandlets, contourlets, wedgelets, etc.).


PCA is designed to find a new basis, as a linear combination of the original basis, which reexpresses the dataset in the best form [123, 124]; the data approximately lie in a lower-dimensional subspace. Unlike PCA, ICA [125, 126] is designed to be overcomplete, i.e., the dimension of the data is in general smaller than the number of elements, and it assumes that the data are multichannel. Different from both, diffusion maps provide a nonlinear method that discovers an underlying manifold from which the data have been sampled [129]. In DL methods [127, 128], the dictionary is learned from the very signals that are to be decomposed over it. Such a learning process is based on algorithms like the method of optimal directions (MOD) or K-SVD.
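To make Eq. (1) concrete, the toy snippet below represents a signal over a fixed (non-data-adaptive) orthonormal DCT dictionary; it is purely illustrative and not tied to any specific method in this chapter.

```python
# Toy illustration of Eq. (1) with a fixed orthonormal DCT dictionary.
import numpy as np
from scipy.fft import dct

n = 64
D = dct(np.eye(n), norm="ortho")   # columns/rows form an orthonormal DCT basis
y = np.random.rand(n)              # a signal (e.g., a vectorized image patch)
x = D.T @ y                        # coefficients: for orthonormal D, X = D^T Y
assert np.allclose(D @ x, y)       # Y = DX is reproduced exactly
```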

4.1 Fundus

DL classification is based on K-SVD, and the overall procedure can be divided into two main steps: learning and classification [130]. The method can be implemented in two phases: in the first phase, only a rough classification between normal and abnormal cases is done; in the second phase, the staging is determined. The required steps of the proposed classification are elaborated in the following paragraphs.

4.1.1 Learning

In the learning step, patches (with overlap) are extracted from each image and each patch is labeled with the desired class. The K-SVD algorithm is then applied to each class, and the initial dictionaries (usually derived from the cosine transform) are changed to fit the input data. The K-SVD algorithm designs a dictionary using singular value decomposition (SVD) under strict sparsity constraints, and it is a generalization of the K-means clustering approach. Finding the best sparse representation of data samples Y is the solution to the following problem:

\min_{D, X} \|Y - DX\|_2^2 \quad \text{subject to} \quad \forall i, \; \|x_i\|_0 \le T_0    (2)

for a defined value of T0. In practice, the exact solution of Eq. (2) is computationally demanding, and approximate solutions are used instead. The procedure can be divided into two stages that iterate until convergence. In the first stage, D is assumed fixed and the representation is calculated in a sparse coding stage by orthogonal matching pursuit (OMP) [131]. Having the representations in hand, the dictionary is updated using the K-SVD approach [132] in the second stage.
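The two-stage iteration just described can be sketched compactly. The following is a minimal, illustrative K-SVD loop (initialization and parameter choices are assumptions, not the authors' exact code), with the sparse coding stage delegated to scikit-learn's OMP solver.

```python
# Minimal K-SVD sketch: Y is an (n, N) matrix whose columns are vectorized
# image patches; returns a learned dictionary D and sparse codes X.
import numpy as np
from sklearn.linear_model import orthogonal_mp  # OMP sparse coding stage

def ksvd(Y, n_atoms, sparsity, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize the dictionary with randomly chosen, normalized patches
    D = Y[:, rng.choice(Y.shape[1], n_atoms, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    for _ in range(n_iter):
        # Stage 1: sparse coding with D fixed (||x_i||_0 <= sparsity)
        X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)
        # Stage 2: update each atom (and its coefficients) via a rank-1 SVD
        for k in range(n_atoms):
            users = np.nonzero(X[k, :])[0]       # patches that use atom k
            if users.size == 0:
                continue
            X[k, users] = 0.0
            E = Y[:, users] - D @ X[:, users]    # residual without atom k
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]                    # new atom = top singular vector
            X[k, users] = s[0] * Vt[0, :]        # updated coefficients
    return D, X
```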

4.1.2 Classification

Two steps are defined for this part: (a) Finding the best atoms: the best atoms (Fig. 15) are defined as those having the largest elements in the sparse coefficient matrix.


FIG. 15 The best atoms in each class: (A) normal and (B) diabetic [130].

FIG. 16 The best discriminative atoms in each class: (A) normal and (B) diabetic [130].

In each normal/abnormal class, the atoms with high correlation to atoms of the other class are eliminated to retain the discriminative atoms. (b) Classification of test images: each new image is classified based on its representation over the discriminative dictionaries of the normal/abnormal classes (Fig. 16). The L0-norm of the sparse coding coefficients of each class is then used as the measure of fit to that class, instead of the reconstruction error. The procedure of this method is shown in Fig. 17.
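One plausible reading of this rule, hedged because Ref. [130] does not spell out the voting scheme here, is to code each patch over the concatenated discriminative dictionaries and accumulate the L0 activity of each class's block of coefficients; the names and the direction of the comparison below are assumptions.

```python
# Hedged sketch: image-level classification by accumulated L0 activity over
# each class's discriminative dictionary (illustrative reading, not Ref. [130]).
import numpy as np
from sklearn.linear_model import orthogonal_mp

def classify_image(patches, D_normal, D_diab, sparsity=5):
    D = np.hstack([D_normal, D_diab])        # concatenated dictionaries
    votes = np.zeros(2)
    for y in patches:                         # y: vectorized patch
        x = orthogonal_mp(D, y, n_nonzero_coefs=sparsity)
        votes[0] += np.count_nonzero(x[:D_normal.shape[1]])   # normal atoms used
        votes[1] += np.count_nonzero(x[D_normal.shape[1]:])   # diabetic atoms used
    return ["normal", "diabetic"][int(np.argmax(votes))]
```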

4.1.3 Result

The method was evaluated on 60 right- and left-eye fundus images from a public dataset available at [133]. The dataset information is provided in Table 3; block patches with a size of 20 × 20 pixels were extracted from the green channel of the color fundus images. The results of the method are given in Table 4.


FIG. 17 The proposed algorithm for the classification of color fundus images [130].

Table 3

The dataset information in Ref. [130].

Class | No. of train data | No. of test data
Normal | 15 | 10
Diabetic | 15 | 20

Table 4

The results of the method in Ref. [130].

TP | TN | FP | FN | Sensitivity (%) | Specificity (%)
18 | 7 | 3 | 2 | 90 | 70
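The sensitivity and specificity reported in Table 4 follow directly from its confusion-matrix counts; the small helper below (standard definitions, illustrative names) reproduces them.

```python
# Standard confusion-matrix metrics; metrics(18, 7, 3, 2) reproduces Table 4:
# sensitivity 0.90 and specificity 0.70.
def metrics(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)                 # true-positive rate
    specificity = tn / (tn + fp)                 # true-negative rate
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy
```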

4.2 OCT

4.2.1 Sparse representation and dictionary learning framework

Considering the basic idea of dictionary learning and its ability to represent data using a few dictionary bases, sparse representation-based classifiers (SRC) have been proposed. In this widely used category of classifiers, a dictionary is learned from training samples and its atoms are used as prior information to describe new images. In SRC, the main goal is the representation of a given signal y of size n as a linear combination of a limited number of atoms. If each atom is denoted by d_k, k = 1, …, N, the dictionary D \in \mathbb{R}^{n \times N} is composed of N atoms, where N is the total number of atoms. Using this notation, y is represented as

y = Dx = \sum_{k=1}^{N} x_k d_k    (3)

where x is the vector of coefficients and x_k represents its elements.


When N is greater than n, the dictionary is overcomplete and x is not a unique vector. To overcome this problem, sparsity constraints can be used: a sparse representation of y is achieved by a sparse vector x, most of whose elements are zero or nearly zero. Consider D a matrix whose columns are training samples of different classes. Consequently, the sparse representation of the signal y is obtained from the following optimization:

\arg\min_{x} \left\{ \|y - Dx\|_2^2 + \lambda \|x\|_1 \right\}    (4)

where λ determines the effect of the sparsity constraint. The dictionary learning process can be considered from different aspects. If we view it as a sparse representation of some samples, it can be achieved by the optimization of Eq. (4), which includes the sparsity constraint. From a different viewpoint, dictionary learning can be used for both reconstruction and discrimination: imposing a discriminative term and using the labels of the training data in the dictionary learning formulation leads to a sufficiently distinctive representation of each class. Therefore, dictionary learning minimizes the sparse approximation errors over different classes while imposing class discrimination [134]. In order to integrate all these goals, the general formulation of dictionary learning can be extended as follows:

\arg\min_{X, D} \left\{ \|Y - DX\|_2^2 + \lambda_1 \|X\|_1 + \lambda_2 \, h(D, X, \theta) \right\}    (5)

where Y and X denote the set of training samples and their corresponding sparse coefficient matrix, respectively, and h(D, X, θ) is the discriminative term, which can be developed based on different methods. The effectiveness of different discriminative terms has been demonstrated, but it has also been shown that for a specific type of signal or image some dictionaries provide better performance [134]. For the task of OCT classification, three different dictionary learning algorithms were investigated; based on the resulting accuracy, FDDL was suggested as the best classifier of HOG features of OCT images. The three methods are described next, and their results are briefly reviewed.
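Eq. (4) is the classical lasso/sparse-coding problem, so off-the-shelf solvers apply. A minimal sketch with scikit-learn follows; note that sklearn's objective rescales the residual term, so its `alpha` matches λ only up to a constant factor.

```python
# Minimal lasso-based sparse coding for Eq. (4) (illustrative sketch).
from sklearn.linear_model import Lasso

def sparse_code(D, y, lam):
    # D: (n, N) dictionary with atoms as columns; y: (n,) signal
    model = Lasso(alpha=lam, fit_intercept=False, max_iter=10_000)
    model.fit(D, y)
    return model.coef_   # sparse coefficient vector x
```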

4.2.2 Separating the particularity and the commonality dictionary learning (COPAR)

In the COPAR algorithm, the learning of the atoms specific to each class is performed simultaneously with the learning of the atoms common to all classes [135]. The aim of utilizing the shared atoms is to improve the reconstruction ability for all categories and to provide beneficial information at classification time. In general, classification based on dictionary learning involves learning C class-specific dictionaries D_c. In the ideal case, no overlap is expected between the subspaces and dictionaries specific to each class. For example, each sample of class c is represented by y \approx D_c x_c, where x_c is its coefficient vector. We can also represent y using the total dictionary D = [D_1, …, D_C], which is composed of C sub-dictionaries: y \approx Dx = D_1 x^1 + … + D_c x^c + … + D_C x^C. In this case, since y belongs to class c, it is expected that the most effective elements of x are those related to x^c and that x is sparse. In matrix form,


denoting the samples of class c by Y_c, the set of all samples \bar{Y} = [Y_1, …, Y_c, …, Y_C] yields the sparse coefficient matrix \bar{X}. Some shared atoms represented in the class-specific dictionaries D_c of different classes are used for the reconstruction step rather than for classification; although these shared atoms harm classification, they are integral for reconstruction. To circumvent this problem and improve classification performance, the COPAR algorithm proposes to separate out the learning of the commonality D_0, which provides the common bases for all classes. Therefore, the specific dictionary of each class and the commonality dictionary are learned jointly as \bar{D} = [D, D_0] \in \mathbb{R}^{d \times K}, where d is the dimension of each atom and K is the total number of learned atoms, thereby providing a more comprehensive representation of the samples of each class. Furthermore, the total sparse coefficient matrix is represented by \bar{X} = [X^T, (X^0)^T]^T, where X corresponds to the sparse coefficients over the specific atoms of the different classes and X^0 corresponds to the shared dictionary. Using the abovementioned concept, the general formulation of dictionary learning (Eq. 5) is reformulated as

f(\bar{Y}, \bar{D}, \bar{X}) = \frac{1}{2} g(\bar{Y}, \bar{D}, \bar{X}) + \lambda_1 \|\bar{X}\|_1 + \frac{\lambda_2}{2} h(\bar{D})    (6)

In the COPAR objective function, g(\bar{Y}, \bar{D}, \bar{X}) is defined as

g(\bar{Y}, \bar{D}, \bar{X}) = \sum_{c=1}^{C} \Big\{ \|Y_c - \bar{D}\bar{X}_c\|_F^2 + \|Y_c - D_0 X_c^0 - D_c X_c^c\|_F^2 + \sum_{j=1, j \ne c}^{C} \|X_c^j\|_F^2 \Big\}    (7)

where X^i and \bar{X}_c \in \mathbb{R}^{K \times N_c} express the sparse coefficients of \bar{Y} on D^i and of Y_c on \bar{D}, respectively, and X_c^i is the sparse coefficient of Y_c on D^i. It is worth mentioning that sub-dictionaries estimated based only on the first two terms of g(\bar{Y}, \bar{D}, \bar{X}) may result in atoms that are used interchangeably; therefore, with the aim of enforcing incoherency between sub-dictionaries, the Frobenius norm of the coefficients, except for their cth part, is added to g(\bar{Y}, \bar{D}, \bar{X}). In addition to imposing the incoherency of the class-specific sub-dictionaries, minimization of the last penalty term of Eq. (6),

h(\bar{D}) = \sum_{c=0}^{C} \sum_{i=0, i \ne c}^{C} \|D_i^T D_c\|_F^2,

has been used to improve the incoherency between the commonality and the particularities. Finally, the optimization of Eq. (6) yields the most representative D, D_0, X, and X^0. Following the dictionary learning step, the authors proposed two reconstruction-error-based rules for the classification of new data: a global classifier (GC) and a local classifier (LC). In the GC model, each new sample y is encoded by the global dictionary \bar{D}, whereas in the case of a higher number of training samples the LC model is employed: all class-specific dictionaries are examined for the reconstruction of the new sample, and the minimum of the resulting errors determines the class of the new sample. The GC and LC models are respectively formulated as follows:

\hat{e} = \arg\min_{\bar{X}} \|y - \bar{D}\bar{X}\|_2^2 + \gamma \|\bar{X}\|_1    (8)

\hat{e}_c = \arg\min_{X_c} \|y - D_c X_c\|_2^2 + \gamma \|X_c\|_1    (9)

4.2.3 Fisher discrimination dictionary learning (FDDL)

Considering that sub-dictionaries should be as discriminative as possible for the task of classification, one of the most helpful tools is the Fisher discrimination criterion. The FDDL classifier benefits from this criterion to learn a structured dictionary. In addition to providing discriminative residuals, FDDL also enforces the coefficients to be discriminative; to achieve coefficient discrimination, it uses the Fisher discrimination criterion, which minimizes the within-class scatter and maximizes the between-class scatter [136]. With regard to Eq. (5), the g(Y, D, X) of the FDDL algorithm, as the discriminator of the residuals, is defined as

g(Y, D, X) = \frac{1}{2} \sum_{c=1}^{C} \Big\{ \|Y_c - D X_c\|_F^2 + \|Y_c - D_c X_c^c\|_F^2 + \sum_{j=1, j \ne c}^{C} \|D_j X_c^j\|_F^2 \Big\}    (10)

While the first penalty term relies only on D to represent Y_c, it is possible that the corresponding residuals are less related to Y_c; therefore, the second penalty term is added. However, other sub-dictionaries may still cooperate in representing Y_c, which reduces the discrimination power of D_c; the task of the third term is to decrease the representation power of D_j over Y_c for c \ne j. As mentioned earlier, to boost the discriminative power of the dictionary, discrimination of the sparse coefficients is also enforced. This constraint is represented as h(X), which explicitly minimizes the within-class scatter and maximizes the between-class scatter as follows:

h(X) = \sum_{c=1}^{C} \big\{ \|X_c - M_c\|_F^2 - \|M_c - M\|_F^2 \big\} + \|X\|_F^2    (11)

where M_c = [m_c, …, m_c] \in \mathbb{R}^{K \times N_c} and M = [m, …, m] are the mean matrices, and m and m_c represent the mean values of the columns of X and X_c, respectively. Similar to COPAR, for the classification of new test samples with the FDDL algorithm, both the GC and LC models can be used; note that in FDDL no shared dictionary is learned.

4.2.4 Low-rank shared dictionary learning (LRSDL)

Inspired by the COPAR algorithm, the main focus of this algorithm is also on learning common structures besides the class-specific bases [137]. To improve the learning of the shared atoms, two constraints are imposed on the shared dictionary. The first concerns the number of shared atoms, since a wide subspace of the shared dictionary


involves some of the discriminative atoms; therefore, it was suggested to impose a low-rank constraint on the shared dictionary. The authors further note that the contribution of the shared dictionary to the reconstruction of each signal should be nearly the same; based on this concept, a second constraint is imposed on the sparse coefficients so that the sparse coefficients over the shared dictionary are close to each other. Considering the two new constraints and a slight change to Eq. (6), the LRSDL cost function is introduced as

f(\bar{Y}, \bar{D}, \bar{X}) = \frac{1}{2} g(\bar{Y}, \bar{D}, \bar{X}) + \lambda_1 \|\bar{X}\|_1 + \frac{\lambda_2}{2} h(\bar{X}) + \eta \|D_0\|_*    (12)

In the LRSDL cost function, g(\bar{Y}, \bar{D}, \bar{X}) is composed of the representation of Y_c by both the particular dictionary D_c and the shared dictionary D_0. If we use the discriminative fidelity term of Eq. (10) and consider the cooperation of D_0 and D_c, g(\bar{Y}, \bar{D}, \bar{X}) can be reformulated as

g(\bar{Y}, \bar{D}, \bar{X}) = \sum_{c=1}^{C} \Big\{ \|Y_c - \bar{D}\bar{X}_c\|_F^2 + \|Y_c - D_0 X_c^0 - D_c X_c^c\|_F^2 + \sum_{j=1, j \ne c}^{C} \|D_j X_c^j\|_F^2 \Big\}    (13)

Furthermore, to impose the effect of D_0 in the Fisher-based discriminative coefficient term, h(\bar{X}) appears as

h(\bar{X}) = \sum_{c=1}^{C} \big\{ \|X_c - M_c\|_F^2 - \|M_c - M\|_F^2 \big\} + \|X\|_F^2 + \|X^0 - M^0\|_F^2    (14)

The term \|X^0 - M^0\|_F^2, which relates the shared dictionary to its sparse coefficients, is included in Eq. (14) and makes the main difference with Eq. (11); this term, the second LRSDL constraint on the shared dictionary, entails the similarity between the coefficients of all training samples represented by the shared dictionary. The last term of Eq. (12) is related to the constraint on the rank of the shared dictionary: since minimizing rank(D_0) is not convex, its convex relaxation by the nuclear norm \|D_0\|_* is used. For the classification of a test sample y with the LRSDL method, its corresponding sparse coefficients are learned first; to do so, two constraints are imposed on them: sparsity of the coefficients, and encouraging x^0 to be close to m^0. This optimization is written as

\hat{\bar{x}} = \arg\min_{\bar{x}} \|y - \bar{D}\bar{x}\|_2^2 + \lambda_1 \|\bar{x}\|_1 + \frac{\lambda_2}{2} \|x^0 - m^0\|_2^2    (15)

Using the optimal \bar{x} and excluding the contribution of the shared dictionary via \bar{y} = y - D_0 x^0, the label of y is determined by

\arg\min_{1 \le c \le C} \big\{ w \|\bar{y} - D_c x_c\|_2^2 + (1 - w) \|x - m_c\|_2^2 \big\}    (16)

where w weights the two terms with respect to the problem.
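A hedged sketch of this two-step decision follows; the coefficient solver for Eq. (15) is abstracted away (assumed computed elsewhere), and all names are illustrative.

```python
# Hedged sketch of the LRSDL decision rule of Eqs. (15)-(16).
import numpy as np

def lrsdl_label(y, class_dicts, class_means, D0, x, x0, w=0.5):
    """class_dicts: list of D_c; class_means: list of m_c (same length as x);
    x, x0: coefficients obtained by solving Eq. (15) for the test sample y."""
    y_bar = y - D0 @ x0                      # exclude the shared contribution
    costs, start = [], 0
    for Dc, mc in zip(class_dicts, class_means):
        k = Dc.shape[1]
        xc = x[start:start + k]              # block of x belonging to class c
        start += k
        costs.append(w * np.linalg.norm(y_bar - Dc @ xc) ** 2
                     + (1 - w) * np.linalg.norm(x - mc) ** 2)
    return int(np.argmin(costs))             # Eq. (16)
```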


4.2.5 Experimental results

Focusing on the performance of dictionary learning methods, Ref. [138] probed FDDL, COPAR, and LRSDL for the classification of OCT images, using the publicly available dataset introduced in Ref. [113], which includes three groups: normal subjects, DME patients, and AMD patients. After denoising, flattening the retinal layers, and extracting the histogram of oriented gradients (HOG) as the feature of the preprocessed images, the authors examined the abovementioned classifiers. To test the efficiency of the dictionary learning methods with respect to the number of training samples, the authors reported the accuracy of the different DL classifiers under diverse cross-validation models. For example, in the leave-three-out experiments, one OCT volume of each group is taken as the test sample and the rest of the data is used for training; using the training set, dictionaries are trained and then used to classify the test samples. The left part of Table 5 shows the leave-three-out cross-validation results of COPAR, FDDL, and LRSDL on the mentioned OCT dataset. Among the three dictionary learning methods, FDDL classifies the dataset into the three groups (AMD, DME, and normal) with an accuracy of 98.37%, slightly higher than the COPAR and LRSDL methods. It has been shown that not only in the leave-three-out cross-validation experiments but also in experiments with other numbers of training samples, the performance of FDDL was slightly better than that of the other two. One of the positive points of dictionary learning methods for the classification of OCT images mentioned in Ref. [138] is their high ability in the early-stage detection of diseases: it was shown in Ref. [138] that the abnormality of AMD and DME exists in only 4% of the B-scans of an OCT volume, which is nevertheless sufficient for FDDL to determine the disease. As a final evaluation of the performance of dictionary learning methods in OCT image classification, the authors compared their results with three other methods applied to the same dataset; these comparisons for the leave-three-out experiments are integrated in Table 5. As a brief description of the compared methods: Srinivasan et al. [113], mentioned earlier as the reference of the examined dataset, proposed to extract multiscale HOG features of the images; with a support vector machine (SVM) as the classifier, they achieved an accuracy of 100% in classifying AMD

Table 5

Comparison of OCT image classification using different methods.

Group of OCT images | COPAR [138] (%) | FDDL [138] (%) | LRSDL [138] (%) | Srinivasan et al. [113] (%) | Wang Yu et al. [114] (%) | Yankui Sun et al. [115] (%)
Normal | 100 | 100 | 100 | 86.66 | 100 | 93.33
DME | 92.86 | 95.13 | 92 | 100 | 100 | 100
AMD | 100 | 100 | 100 | 100 | 93.33 | 100
Whole data set | 97.62 | 98.37 | 97.33 | 95.55 | 97.78 | 97.78


and DME patients. Noticing the high ability of linear configuration pattern (LCP) features in detecting microscopic configurations and local structures, Wang Yu et al. in Ref. [114] proposed a method to extract LCP features of OCT images; by selecting a subset of features using a correlation-based heuristic function and sequential minimal optimization (SMO), Ref. [114] reported completely correct identification of normal and AMD subjects. Yankui Sun et al. in Ref. [115] extracted SIFT descriptors of patches and used their sparse representation over a learned dictionary; by max-pooling the sparse-coded features and applying a linear SVM, classification of DME and AMD was done successfully. Comparing the results of the different methods on the same OCT dataset, the FDDL classifier achieves the best result with an accuracy of 98.37%, which shows its better performance in the classification of OCT images.
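For context, the HOG-plus-SVM baseline of Ref. [113] can be sketched as follows; the feature parameters and classifier settings shown are assumptions for illustration, not the exact configuration of that work.

```python
# Hedged sketch of a HOG + linear-SVM OCT classifier in the spirit of
# Ref. [113]; parameters are illustrative, and all B-scans are assumed to
# share the same size (so the HOG vectors have equal length).
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(bscans):
    # bscans: iterable of 2-D grayscale OCT B-scans (already denoised/flattened)
    return np.array([hog(b, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for b in bscans])

# Usage (train/test splits assumed prepared elsewhere):
# clf = LinearSVC().fit(hog_features(train_scans), train_labels)
# predictions = clf.predict(hog_features(test_scans))
```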

5 Conclusion

In this chapter, we illustrated the power of dictionary learning as an effective method for ocular image modeling. To this end, dictionary learning was used for the classification of normal/abnormal color fundus images and of OCT images separately. In dictionary learning, after finding the best model for each category, the obtained (sparse) coefficients are used for the classification instead of directly classifying the images. Note that conventional classification methods require preprocessing, segmentation, and feature extraction stages to prepare the needed input for the classifiers, whereas in the proposed classification method such steps are removed and the algorithm is able to detect the class of each patch (or each image) more perceptually. In future work, we will also test the ability of different discriminative dictionary learning methods for the classification and staging of different ocular diseases. Similar to the above strategy for classification, other discriminative dictionary learning methods can be proposed for ocular image segmentation: we can model the main subjects of ocular images by finding the best specific atoms of each subject and using discriminative dictionary learning for the purpose of segmentation. For example, we can find the best specific atoms of each intraretinal layer and represent the data by the specific atoms of each layer for OCT intra-layer segmentation. Similarly, one can identify different atoms (and therefore a different model) for abnormal data; for example, the proposed atoms for cysts in abnormal data are different from each layer's atoms, and by using only the specific atoms of the learned cysts, we would be able to segment the cysts in abnormal B-scans. Instead of discriminative dictionary learning, we can use representative dictionary learning for ocular image denoising/restoration [139–141]. For example, in OCT imaging we can find the best sparse combinations of the specific atoms of layers/cysts, which mostly model the texture properties of the main objects, and of global atoms, which mostly model the edge singularities, for OCT image restoration.


References

[1] J.E. Shaw, R.A. Sicree, P.Z. Zimmet, Global estimates of the prevalence of diabetes for 2010 and 2030, Diabetes Res. Clin. Pract. 87 (1) (2010) 4–14.
[2] I. Benjamin, R.C. Griggs, E.J. Wing, J.G. Fitz, Andreoli and Carpenter's Cecil Essentials of Medicine, Elsevier Health Sciences, 2015.
[3] J. Boye, L. Geiss, A. Honeycutt, Projection of diabetes burden through 2050, Diabetes Care 24 (2001) 1936–1940.
[4] https://nei.nih.gov/health/diabetic/retinopathy.
[5] Centers for Disease Control and Prevention, National Diabetes Fact Sheet: National Estimates and General Information on Diabetes and Prediabetes in the United States, 2011, vol. 201(1), US Department of Health and Human Services, Centers for Disease Control and Prevention, Atlanta, GA, 2011.
[6] R. Williams, M. Airey, H. Baxter, J. Forrester, T. Kennedy-Martin, A. Girach, Epidemiology of diabetic retinopathy and macular oedema: a systematic review, Eye 18 (10) (2004) 963–983.
[7] S. Wild, G. Roglic, A. Green, R. Sicree, H. King, Global prevalence of diabetes estimates for the year 2000 and projections for 2030, Diabetes Care 27 (5) (2004) 1047–1053.
[8] M. Katibeh, M. Pakravan, M. Yaseri, M. Pakbin, R. Soleimanizad, Prevalence and causes of visual impairment and blindness in central Iran; the Yazd eye study, J. Ophthalmic Vis. Res. 10 (3) (2015) 279.
[9] M. Rasolabadi, S. Khaledi, M. Ardalan, M.M. Kalhor, S. Penjvini, A. Gharib, Diabetes research in Iran: a scientometric analysis of publications output, Acta Inform. Med. 23 (3) (2015) 160.
[10] A.H. Mokdad, et al., Health in times of uncertainty in the eastern Mediterranean region, 1990–2013: a systematic analysis for the Global Burden of Disease Study 2013, Lancet Glob. Health 4 (10) (2016) e704–e713.
[11] S. Garg, R.M. Davis, Diabetic retinopathy screening update, Clin. Diabetes 27 (4) (2009) 140–145.
[12] N. Sayin, N. Kara, G. Pekel, Ocular complications of diabetes mellitus, World J. Diabetes 6 (1) (2015) 92–108.
[13] M.A. Javadi, et al., Prevalence of diabetic retinopathy in Tehran province: a population-based study, BMC Ophthalmol. 9 (1) (2009) 1.
[14] M. Amini, E. Parvaresh, Prevalence of macro- and microvascular complications among patients with type 2 diabetes in Iran: a systematic review, Diabetes Res. Clin. Pract. 83 (1) (2009) 18–25.
[15] N. Horri, M. Farmani, M. Ghassami, S. Haghighi, M. Amini, Visual acuity in an Iranian cohort of patients with type 2 diabetes: the role of nephropathy and ischemic heart disease, J. Res. Med. Sci. 16 (2011).
[16] P. Jeppesen, T. Bek, The occurrence and causes of registered blindness in diabetes patients in Århus County, Denmark, Acta Ophthalmol. Scand. 82 (5) (2004) 526–530.
[17] R. Klein, B.E. Klein, S.E. Moss, Visual impairment in diabetes, Ophthalmology 91 (1) (1984) 1–9.
[18] C. Wilson, M. Horton, J. Cavallerano, L.M. Aiello, Addition of primary care–based retinal imaging technology to an existing eye care professional referral program increased the rate of surveillance and treatment of diabetic retinopathy, Diabetes Care 28 (2) (2005) 318–322.
[19] P. Bragge, R.L. Gruen, M. Chau, A. Forbes, H.R. Taylor, Screening for presence or absence of diabetic retinopathy: a meta-analysis, Arch. Ophthalmol. 129 (4) (2011) 435–444.
[20] M.D. Abràmoff, M. Niemeijer, M.S. Suttorp-Schulten, M.A. Viergever, S.R. Russell, B. Van Ginneken, Evaluation of a system for automatic detection of diabetic retinopathy from color fundus photographs in a large population of patients with diabetes, Diabetes Care 31 (2) (2008) 193–198.


[21] B. Dupas, et al., Evaluation of automated fundus photograph analysis algorithms for detecting microaneurysms, haemorrhages and exudates, and of a computer-assisted diagnostic system for grading diabetic retinopathy, Diabetes Metab. 36 (3) (2010) 213–220.
[22] M. Niemeijer, M.D. Abramoff, B. van Ginneken, Image structure clustering for image quality verification of color retina images in diabetic retinopathy screening, Med. Image Anal. 10 (6) (2006) 888–898.
[23] O. Faust, R. Acharya, E.Y.-K. Ng, K.-H. Ng, J.S. Suri, Algorithms for the automated detection of diabetic retinopathy using digital fundus images: a review, J. Med. Syst. 36 (1) (2012) 145–157.
[24] X. Zhang, O. Chutatape, Top-down and bottom-up strategies in lesion detection of background diabetic retinopathy, in: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 2, IEEE, 2005, pp. 422–428.
[25] C. Sinthanayothin, et al., Automated detection of diabetic retinopathy on digital fundus images, Diabet. Med. 19 (2) (2002) 105–112.
[26] B. Zhang, F. Karray, Q. Li, L. Zhang, Sparse representation classifier for microaneurysm detection and retinal blood vessel extraction, Inf. Sci. 200 (2012) 78–90.
[27] B.M. Ege, et al., Screening for diabetic retinopathy using computer based image analysis and statistical classification, Comput. Methods Prog. Biomed. 62 (3) (2000) 165–175.
[28] J. Hipwell, F. Strachan, J. Olson, K. McHardy, P. Sharp, J. Forrester, Automated detection of microaneurysms in digital red-free photographs: a diabetic retinopathy screening tool, Diabet. Med. 17 (8) (2000) 588–594.
[29] M. Niemeijer, B. Van Ginneken, J. Staal, M.S. Suttorp-Schulten, M.D. Abràmoff, Automatic detection of red lesions in digital color fundus photographs, IEEE Trans. Med. Imaging 24 (5) (2005) 584–592.
[30] R. Pourreza, H. Pourreza, T. Banaee, Segmentation of blood vessels in fundus color images by Radon transform and morphological reconstruction, in: 2010 Third International Workshop on Advanced Computational Intelligence (IWACI), IEEE, 2010, pp. 522–526.
[31] S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, M. Goldbaum, Detection of blood vessels in retinal images using two-dimensional matched filters, IEEE Trans. Med. Imaging 8 (3) (1989) 263–269.
[32] A. Hoover, V. Kouznetsova, M. Goldbaum, Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response, IEEE Trans. Med. Imaging 19 (3) (2000) 203–210.
[33] E. Grisan, A. Pesce, A. Giani, M. Foracchia, A. Ruggeri, A new tracking system for the robust extraction of retinal vessel structure, in: Engineering in Medicine and Biology Society, 2004. IEMBS'04. 26th Annual International Conference of the IEEE, vol. 1, IEEE, 2004, pp. 1620–1623.
[34] D. Vallabha, R. Dorairaj, K.R. Namuduri, H. Thompson, Automated Detection and Classification of Vascular Abnormalities in Diabetic Retinopathy, 2004.
[35] S. Dua, N. Kandiraju, H.W. Thompson, Design and implementation of a unique blood-vessel detection algorithm towards early diagnosis of diabetic retinopathy, in: International Conference on Information Technology: Coding and Computing (ITCC'05)—Volume II, vol. 1, IEEE, 2005, pp. 26–31.
[36] A. Bhuiyan, B. Nath, J. Chua, R. Kotagiri, Blood vessel segmentation from color retinal images using unsupervised texture classification, in: 2007 IEEE International Conference on Image Processing, vol. 5, IEEE, 2007, pp. V-521–V-524.
[37] M. Esmaeili, H. Rabbani, A. Mehri, A. Dehghani, Extraction of retinal blood vessels by curvelet transform, in: 2009 16th IEEE International Conference on Image Processing (ICIP), IEEE, 2009, pp. 3353–3356.
[38] A. Soltanipour, S. Sadri, H. Rabbani, M. Akhlaghi, A. Doost-Hosseini, Vessel centerlines extraction from fundus fluorescein angiogram based on Hessian analysis of directional curvelet subbands, in: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, 2013, pp. 1070–1074.


[39] A. Soltanipour, S. Sadri, H. Rabbani, M.R. Akhlaghi, Analysis of fundus fluorescein angiogram based on the Hessian matrix of directional curvelet sub-bands and distance regularized level set evolution, J. Med. Signals Sens. 5 (3) (2015) 141.
[40] R. Kafieh, H. Rabbani, F. Hajizadeh, M. Ommani, An accurate multimodal 3-D vessel segmentation method based on brightness variations on OCT layers and curvelet domain fundus image analysis, IEEE Trans. Biomed. Eng. 60 (10) (2013) 2815–2823.
[41] M. Esmaeili, H. Rabbani, A.M. Dehnavi, A. Dehghani, Automatic optic disk detection by the use of curvelet transform, in: 2009 9th International Conference on Information Technology and Applications in Biomedicine, IEEE, 2009, pp. 1–4.
[42] M. Esmaeili, H. Rabbani, A. Dehnavi, A. Dehghani, Automatic detection of exudates and optic disk in retinal images using curvelet transform, IET Image Process. 6 (7) (2012) 1005–1013.
[43] T. Mahmudi, R. Kafieh, H. Rabbani, A. Mehri, M. Akhlagi, Asymmetry evaluation of fundus images in right and left eyes using radon transform and fractal analysis, in: 2015 IEEE International Conference on Image Processing (ICIP), IEEE, 2015, pp. 163–167.
[44] M. Jamshidi, H. Rabbani, Z. Amini, R. Kafieh, A. Ommani, V. Lakshminarayanan, Automatic detection of the optic disc of the retina: a fast method, J. Med. Signals Sens. 6 (1) (2016) 57.
[45] A.A. Purwita, K. Adityowibowo, A. Dameitry, M.W.S. Atman, Automated microaneurysm detection using mathematical morphology, in: 2011 2nd International Conference on Instrumentation, Communications, Information Technology, and Biomedical Engineering (ICICI-BME), IEEE, 2011, pp. 117–120.
[46] T.P. Karnowski, V.P. Govindasamy, K.W. Tobin, E. Chaum, M. Abramoff, Retina lesion and microaneurysm segmentation using morphological reconstruction methods with ground-truth data, in: 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE, 2008, pp. 5433–5436.
[47] L. Xu, S. Luo, Optimal algorithm for automatic detection of microaneurysms based on receiver operating characteristic curve, J. Biomed. Opt. 15 (6) (2010) 065004.
[48] L. Streeter, M.J. Cree, Microaneurysm detection in colour fundus images, in: Proceedings of Image and Vision Computing, New Zealand, 2003, pp. 280–284.
[49] A.D. Fleming, S. Philip, K.A. Goatman, J.A. Olson, P.F. Sharp, Automated microaneurysm detection using local contrast normalization and local vessel detection, IEEE Trans. Med. Imaging 25 (9) (2006) 1223–1232.
[50] S. Balasubramanian, S. Pradhan, V. Chandrasekaran, Red lesions detection in digital fundus images, in: 2008 15th IEEE International Conference on Image Processing, IEEE, 2008, pp. 2932–2935.
[51] G. Yang, L. Gagnon, S. Wang, M. Boucher, Algorithm for Detecting Micro-Aneurysms in Low-Resolution Color Retinal Images, 2001.
[52] A. Bhalerao, A. Patanaik, S. Anand, P. Saravanan, Robust detection of microaneurysms for sight threatening retinopathy screening, in: Sixth Indian Conference on Computer Vision, Graphics & Image Processing, 2008. ICVGIP'08, IEEE, 2008, pp. 520–527.
[53] Y. Hatanaka, T. Inoue, S. Okumura, C. Muramatsu, H. Fujita, Automated microaneurysm detection method based on double-ring filter and feature analysis in retinal fundus images, in: 2012 25th International Symposium on Computer-Based Medical Systems (CBMS), IEEE, 2012, pp. 1–4.
[54] G. Quellec, M. Lamard, P.M. Josselin, G. Cazuguel, B. Cochener, C. Roux, Optimal wavelet transform for the detection of microaneurysms in retina photographs, IEEE Trans. Med. Imaging 27 (9) (2008) 1230–1241.
[55] P. Pallawala, W. Hsu, M.L. Lee, S.S. Goh, Automated microaneurysm segmentation and detection using generalized eigenvectors, in: Seventh IEEE Workshops on Application of Computer Vision, 2005. WACV/MOTIONS'05, vol. 1, IEEE, 2005, pp. 322–327.


[56] I. Lazar, R.J. Qureshi, A. Hajdu, A novel approach for the automatic detection of microaneurysms in retinal images, in: 2010 6th International Conference on Emerging Technologies (ICET), IEEE, 2010, pp. 193–197.
[57] B. Zhang, L. Zhang, J. You, F. Karray, Microaneurysm (MA) detection via sparse representation classifier with MA and non-MA dictionary learning, in: 2010 20th International Conference on Pattern Recognition (ICPR), IEEE, 2010, pp. 277–280.
[58] K. Ram, G.D. Joshi, J. Sivaswamy, A successive clutter-rejection-based approach for early detection of diabetic retinopathy, IEEE Trans. Biomed. Eng. 58 (3) (2011) 664–673.
[59] L. Seoud, T. Hurtut, J. Chelbi, F. Cheriet, J.P. Langlois, Red lesion detection using dynamic shape features for diabetic retinopathy screening, IEEE Trans. Med. Imaging 35 (4) (2016) 1116–1126.
[60] I. Lazar, A. Hajdu, Retinal microaneurysm detection through local rotating cross-section profile analysis, IEEE Trans. Med. Imaging 32 (2) (2013) 400–407.
[61] M.U. Akram, S. Khalid, S.A. Khan, Identification and classification of microaneurysms for early detection of diabetic retinopathy, Pattern Recogn. 46 (1) (2013) 107–116.
[62] S.H.M. Alipour, H. Rabbani, Automatic detection of micro-aneurysms in retinal images based on curvelet transform and morphological operations, in: SPIE Optical Engineering + Applications, International Society for Optics and Photonics, 2013, 88561W.
[63] U. Acharya, C. Lim, E. Ng, C. Chee, T. Tamura, Computer-based detection of diabetes retinopathy stages using digital fundus images, Proc. Inst. Mech. Eng. H J. Eng. Med. 223 (5) (2009) 545–553.
[64] H. Wang, W. Hsu, K.G. Goh, M.L. Lee, An effective approach to detect lesions in color retinal images, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2000, vol. 2, IEEE, 2000, pp. 181–186.
[65] A. Hunter, J. Lowell, J. Owens, L. Kennedy, D. Steele, Quantification of diabetic retinopathy using neural networks and sensitivity analysis, in: Artificial Neural Networks in Medicine and Biology, Springer, 2000, pp. 81–86.
[66] A. Sopharak, B. Uyyanonvara, Automatic exudates detection from diabetic retinopathy retinal image using fuzzy c-means and morphological methods, in: Proceedings of the Third IASTED International Conference Advances in Computer Science and Technology, Thailand, 2007, pp. 359–364.
[67] Z. Xiaohui, A. Chutatape, Detection and classification of bright lesions in color fundus images, in: 2004 International Conference on Image Processing, 2004. ICIP'04, vol. 1, IEEE, 2004, pp. 139–142.
[68] A. Osareh, M. Mirmehdi, B. Thomas, R. Markham, Comparative exudate classification using support vector machines and neural networks, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2002, pp. 413–420.
[69] A. Sopharak, B. Uyyanonvara, S. Barman, T.H. Williamson, Automatic detection of diabetic retinopathy exudates from non-dilated retinal images using mathematical morphology methods, Comput. Med. Imaging Graph. 32 (8) (2008) 720–727.
[70] A.D. Fleming, S. Philip, K.A. Goatman, G.J. Williams, J.A. Olson, P.F. Sharp, Automated detection of exudates for diabetic retinopathy screening, Phys. Med. Biol. 52 (24) (2007) 7385.
[71] J.M. Shivaram, R. Patil, Automated detection and quantification of haemorrhages in diabetic retinopathy images using image arithmetic and mathematical morphology methods, Int. J. Recent Trends Eng. 2 (2009) 174–176.
[72] A. Fleming, K. Goatman, G. Williams, S. Philip, P. Sharp, J. Olson, Automated detection of blot haemorrhages as a sign of referable diabetic retinopathy, in: Proceedings of the Medical Image Understanding and Analysis, 2008.
[73] X. Zhang, G. Fan, Retinal spot lesion detection using adaptive multiscale morphological processing, in: International Symposium on Visual Computing, Springer, 2006, pp. 490–501.


[74] J.P. Bae, K.G. Kim, H.C. Kang, C.B. Jeong, K.H. Park, J.-M. Hwang, A study on hemorrhage detection using hybrid method in fundus images, J. Digit. Imaging 24 (3) (2011) 394–404.
[75] C. Marino, E. Ares, M. Penedo, M. Ortega, N. Barreira, F. Gomez-Ulla, Automated three stage red lesions detection in digital color fundus images, WSEAS Trans. Comput. 7 (4) (2008) 207–215.
[76] X. Zhang, O. Chutatape, A SVM approach for detection of hemorrhages in background diabetic retinopathy, in: Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005, vol. 4, IEEE, 2005, pp. 2435–2440.
[77] M. Esmaeili, H. Rabbani, A.M. Dehnavi, A. Dehghani, A new curvelet transform based method for extraction of red lesions in digital color retinal images, in: 2010 IEEE International Conference on Image Processing, IEEE, 2010, pp. 4093–4096.
[78] S. Ruia, S. Saxena, C.M.G. Cheung, J.S. Gilhotra, T.Y. Lai, Spectral domain optical coherence tomography features and classification systems for diabetic macular edema: a review, Asia Pac. J. Ophthalmol. 5 (5) (2016) 360–367.
[79] G.M. Somfai, H. Gerding, D.C. DeBuc, The use of optical coherence tomography for the detection of early diabetic retinopathy, Klin. Monatsbl. Augenheilkd. 235 (04) (2018) 377–384.
[80] T. Otani, S. Kishi, Y. Maruyama, Patterns of diabetic macular edema with optical coherence tomography, Am J. Ophthalmol. 127 (6) (1999) 688–693.
[81] G. Panozzo, et al., Diabetic macular edema: an OCT-based classification, in: Seminars in Ophthalmology, vol. 19, Taylor & Francis, 2004, pp. 13–20.
[82] G.G. Deák, et al., A systematic correlation between morphology and functional alterations in diabetic macular edema, Invest. Ophthalmol. Vis. Sci. 51 (12) (2010) 6710–6714.
[83] H.W. van Dijk, et al., Decreased retinal ganglion cell layer thickness in patients with type 1 diabetes, Invest. Ophthalmol. Vis. Sci. 51 (7) (2010) 3660–3665.
[84] H.W. Van Dijk, et al., Early neurodegeneration in the retina of type 2 diabetic patients, Invest. Ophthalmol. Vis. Sci. 53 (6) (2012) 2715–2719.
[85] S. Vujosevic, E. Midena, Retinal layers changes in human preclinical and early clinical diabetic retinopathy support early retinal neuronal and Müller cells alterations, J. Diabetes Res. 2013 (2013).
[86] F. Scarinci, et al., Single retinal layer evaluation in patients with type 1 diabetes with no or early signs of diabetic retinopathy: the first hint of neurovascular crosstalk damage between neurons and capillaries? Ophthalmologica 237 (4) (2017) 223–231.
[87] J. Wanek, et al., Alterations in retinal layer thickness and reflectance at different stages of diabetic retinopathy by en face optical coherence tomography, Invest. Ophthalmol. Vis. Sci. 57 (9) (2016) OCT341–OCT347.
[88] F.C. Gundogan, F. Akay, S. Uzun, U. Yolcu, E. Çağıltay, S.J.O. Toyran, Early neurodegeneration of the inner retinal layers in type 1 diabetes mellitus, Ophthalmologica 235 (3) (2016) 125–132.
[89] D. El-Fayoumi, N.M.B. Eldine, A.F. Esmael, D. Ghalwash, H.M. Soliman, Retinal nerve fiber layer and ganglion cell complex thicknesses are reduced in children with type 1 diabetes with no evidence of vascular retinopathy, Invest. Ophthalmol. Vis. Sci. 57 (13) (2016) 5355–5360.
[90] O. Karti, et al., Retinal ganglion cell loss in children with type 1 diabetes mellitus without diabetic retinopathy, Ophthalmic Surg. Lasers Imaging Retina 48 (6) (2017) 473–477.
[91] Y. Chen, J. Li, Y. Yan, X. Shen, Diabetic macular morphology changes may occur in the early stage of diabetes, BMC Ophthalmol. 16 (1) (2016) 12.
[92] P. Carpineto, et al., Neuroretinal alterations in the early stages of diabetic retinopathy in patients with type 2 diabetes mellitus, Eye 30 (5) (2016) 673.
[93] D.S. Ng, et al., Retinal ganglion cell neuronal damage in diabetes and diabetic retinopathy, Clin. Exp. Ophthalmol. 44 (4) (2016) 243–250.

376

Diabetes and Fundus OCT

[94] L. Pierro, L. Iuliano, M.V. Cicinelli, G. Casalino, F. Bandello, Retinal neurovascular changes appear earlier in type 2 diabetic patients, Eur. J. Ophthalmol. 27 (3) (2017) 346–351. [95] J.T. Ferreira, et al., Retinal neurodegeneration in diabetic patients without diabetic retinopathy, Invest. Ophthalmol. Vis. Sci. 57 (14) (2016) 6455–6460. [96] R. Shelton, J. Taibl, N. Shemonski, S. Sayegh, S. Boppart, Subretinal layer thickness ratio changes for early detection of diabetes, Invest. Ophthalmol. Vis. Sci. 54 (15) (2013) 2428. [97] B. Bhaduri, et al., Ratiometric analysis of optical coherence tomography-measured in vivo retinal layer thicknesses for the detection of early diabetic retinopathy, J. Biophotonics 10 (11) (2017) 1430–1441. [98] S.C. Lee, E.T. Lee, Y. Wang, R. Klein, R.M. Kingsley, A. Warn, Computer classification of nonproliferative diabetic retinopathy, Arch. Ophthalmol. 123 (6) (2005) 759–764. [99] K. Estabridis, R.J. de Figueiredo, Automatic detection and diagnosis of diabetic retinopathy, in: 2007 IEEE International Conference on Image Processing, vol. 2, IEEE, 2007, pp. II-445–II-448. [100] S.H.M. Alipour, H. Rabbani, Automatic detection of micro-aneurysms in retinal images based on curvelet transform and morphological operations, in: Applications of Digital Image Processing XXXVI, vol. 8856, International Society for Optics and Photonics, 2013p. 88561W. [101] S.H.M. Alipour, H. Rabbani, M.R. Akhlaghi, A new combined method based on curvelet transform and morphological operators for automatic detection of foveal avascular zone, SIViP 8 (2) (2014) 205–222. [102] E. T. D. R. S. R. Group, Grading diabetic retinopathy from stereoscopic color fundus photographs— an extension of the modified Airlie House classification: ETDRS report number 10, Ophthalmology 98 (5) (1991) 786–806. [103] S. Hajeb Mohammad Alipour, H. Rabbani, M.R. Akhlaghi, Diabetic retinopathy grading by digital curvelet transform, Comput. Math. Methods Med. 2012 (2012). [104] A. Singalavanija, J. Supokavej, P. Bamroongsuk, C. Sinthanayothin, S. Phoojaruenchanachai, V. Kongbunkiat, Feasibility study on computer-aided screening for diabetic retinopathy, Jpn. J. Ophthalmol. 50 (4) (2006) 361–366. [105] P. Kahai, K.R. Namuduri, H. Thompson, A decision support framework for automated screening of diabetic retinopathy, Int. J. Biomed. Imaging 2006 (2006). [106] D. Usher, M. Dumskyj, M. Himaga, T.H. Williamson, S. Nussey, J. Boyce, Automated detection of diabetic retinopathy in digital retinal images: a tool for diabetic retinopathy screening, Diabet. Med. 21 (1) (2004) 84–90. [107] J. Nayak, P.S. Bhat, R. Acharya, C.M. Lim, M. Kagathi, Automated identification of diabetic retinopathy stages using digital fundus images, J. Med. Syst. 32 (2) (2008) 107–115. [108] A.W. Reza, C. Eswaran, K. Dimyati, Diagnosis of diabetic retinopathy: automatic extraction of optic disc and exudates from retinal images using marker-controlled watershed transformation, J. Med. Syst. 35 (6) (2011) 1491–1501. [109] J. Amin, M. Sharif, M. Yasmin, H. Ali, S.L. Fernandes, A method for the detection and classification of diabetic retinopathy using structural predictors of bright lesions, J. Comput. Sci. 19 (2017) 153–164. [110] G.S. Tan, N. Cheung, R. Simo´, G.C. Cheung, T.Y. Wong, Diabetic macular oedema, Lancet Diabetes Endocrinol. 5 (2) (2017) 143–155. [111] A. Erginay, P. Massin, Optical coherence tomography in the management of diabetic macular edema, in: Optical Coherence Tomography, vol. 4, Karger Publishers, 2014, pp. 62–75. [112] D. 
Sidibe, et al., An anomaly detection approach for the identification of DME patients using spectral domain optical coherence tomography images, Comput. Methods Prog. Biomed. 139 (2017) 109–117.

Chapter 13 • Diabetic retinopathy detection in ocular imaging by dictionary learning 377

[113] P.P. Srinivasan, et al., Fully automated detection of diabetic macular edema and dry age-related macular degeneration from optical coherence tomography images, Biomed. Opt. Express 5 (10) (2014) 3568–3577. [114] Y. Wang, Y. Zhang, Z. Yao, R. Zhao, F. Zhou, Machine learning based detection of age-related macular degeneration (AMD) and diabetic macular edema (DME) from optical coherence tomography (OCT) images, Biomed. Opt. Express 7 (12) (2016) 4928–4940. [115] Y. Sun, S. Li, Z. Sun, Fully automated macular pathology detection in retina optical coherence tomography images using sparse coding and dictionary learning, J. Biomed. Opt. 22 (1) (2017) 016012. [116] S.P.K. Karri, D. Chakraborty, J. Chatterjee, Transfer learning based classification of optical coherence tomography images with diabetic macular edema and dry age-related macular degeneration, Biomed. Opt. Express 8 (2) (2017) 579–592. [117] G. Lemaıˆtre, et al., Classification of SD-OCT volumes using local binary patterns: experimental validation for DME detection, J. Ophthalmol. 2016 (2016). [118] G.M. Somfai, et al., Automated classifiers for early detection and diagnosis of retinopathy in diabetic eyes, BMC Bioinformatics 15 (1) (2014) 106. [119] C. Szegedy, et al., Going deeper with convolutions, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1–9. [120] A. ElTanboly, et al., A computer-aided diagnostic system for detecting diabetic retinopathy in optical coherence tomography images, Med. Phys. 44 (3) (2017) 914–923. [121] Z. Amini, H. Rabbani, Classification of medical image modeling methods: a review, Curr. Med. Imaging Rev. 12 (2) (2016) 130–148. [122] S. Mallat, A Wavelet Tour of Signal Processing, Academic Press, 1999. [123] J. Shlens, A tutorial on Principal Component Analysis, Systems Neurobiology Laboratory, University of California at San Diego, 2005. [124] I.T. Jolliffe, Principal Component Analysis, Springer-Verlag, New York, 1986. [125] P. Comon, Independent component analysis, a new concept? Signal Process. 36 (3) (1994) 287–314. €rinen, E. Oja, Independent component analysis: algorithms and applications, Neural Netw. [126] A. Hyva 13 (4) (2000) 411–430. [127] M. Varma, A. Zisserman, A statistical approach to material classification using image patch exemplars, IEEE Trans. Pattern Anal. Mach. Intell. 31 (11) (2009) 2032–2047. [128] B.A. Olshausen, Emergence of simple-cell receptive field properties by learning a sparse code for natural images, Nature 381 (6583) (1996) 607–609. [129] R.R. Coifman, S. Lafon, Diffusion maps, Appl. Comput. Harmon. Anal. 21 (1) (2006) 5–30. [130] N. Karami, H. Rabbani, A dictionary learning based method for detection of diabetic retinopathy in color fundus images, in: 2017 10th Iranian Conference on Machine Vision and Image Processing (MVIP), IEEE, 2017, pp. 119–122. [131] Y.C. Pati, R. Rezaiifar, P.S. Krishnaprasad, Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition, in: 1993 Conference Record of the 27th Asilomar Conference on Signals, Systems and Computers, vol. 1, 1993, pp. 40–44. [132] M. Aharon, M. Elad, A. Bruckstein, k-SVD: an algorithm for designing overcomplete dictionaries for sparse representation, IEEE Trans. Signal Process. 54 (11) (2006) 4311–4322. [133] http://www.biosigdata.com/?download_category¼ocular-images. [134] I. Tosic, P. Frossard, Dictionary learning, IEEE Signal Process. Mag. 28 (2) (2011) 27–38.

378

Diabetes and Fundus OCT

[135] S. Kong, D. Wang, A dictionary learning approach for classification: separating the particularity and the commonality, in: European Conference on Computer Vision, Springer, 2012, pp. 186–199. [136] M. Yang, L. Zhang, X. Feng, D. Zhang, Sparse representation based fisher discrimination dictionary learning for image classification, Int. J. Comput. Vis. 109 (3) (2014) 209–232. [137] T.H. Vu, V. Monga, Fast low-rank shared dictionary learning for image classification, IEEE Trans. Image Process. 26 (11) (2017) 5160–5175. [138] E. Mousavi, R. Kafieh and H. Rabbani, Classification of dry age-related macular degeneration and diabetic macular edema from optical coherence tomography images using dictionary learning, IET Image Process. J. (submitted). [139] R. Kafieh, H. Rabbani, I. Selesnick, Three dimensional data-driven multi scale atomic representation of optical coherence tomography, IEEE Trans. Med. Imaging 34 (5) (2015) 1042–1062. [140] A. Abbasi, A. Monadjemi, L. Fang, H. Rabbani, Optical coherence tomography retinal image reconstruction via nonlocal weighted sparse representation, J. Biomed. Opt. 23 (3) (2018), 036011. https:// doi.org/10.1117/1.JBO.23.3.036011. [141] A. Abbasi, A. Monadjemi, L. Fang, H. Rabbani, Y. Zhang, Three-dimensional optical coherence tomography image denoising through multi-input fully-convolutional networks, Comput. Biol. Med. (2019), https://doi.org/10.1016/j.compbiomed.2019.01.010.

14

Lesion detection using segmented structure of retina

Anoop Balakrishnan Kadan(a), Perumal Sankar(b), T.V. Roshini(a), K.C. Manoj(a), D. Anto Sahaya Dhas(a), Glan Devadas(c)

(a) Electronics and Communication Engineering, Vimal Jyothi Engineering College, Kannur, Kerala, India
(b) Electronics and Communication Engineering, Toc H Institute of Science and Technology, Ernakulam, Kerala, India
(c) Electronics and Instrumentation Engineering, Vimal Jyothi Engineering College, Kannur, Kerala, India

1 Introduction

Photographs of the back of the eye, the fundus, are captured by a technique called fundus photography. Specialized cameras, fitted with an intricate microscope and a flash, are used for this purpose. Fundus photography visualizes the complex structures of the retina and the optic disk; color filters and dyes such as fluorescein and indocyanine green are also used. The technique has evolved rapidly over the last century and has now reached an advanced level. Manufacturers compete in the market with sophisticated equipment and manufacturing technology; the most notable among them are Topcon, Canon, CSO, and CenterVue.

The optical design of fundus cameras is based on monocular indirect ophthalmoscopy, and the camera provides an upright, magnified view of the fundus. A typical fundus camera gives a magnification of 2.5× and a view of 30-50 degrees of the retinal area, adjustable with auxiliary lenses from 15 degrees. As in the indirect ophthalmoscope, the observation path of a fundus camera is separate from the illumination path (Fig. 1). Light from the illumination system passes through a doughnut-shaped aperture and a series of lenses to form an annulus, then through the camera objective lens and the cornea, and finally onto the retina. The light reflected from the retina passes back through the unilluminated central hole of the doughnut created by the illumination system. Because the two light paths are independent, reflections of the light source are minimal. The image is formed by rays passing through the low-powered eyepiece. When the shutter button is pressed, a mirror interrupts the illumination path, allowing light from the flashbulb to enter the eye and create the retinal image. A mirror in front of the observation


FIG. 1 Fundus image.

telescope redirects the light to the capturing medium, which is either film or a digital CCD, producing an in-focus image (Fig. 2).

The retina of the human eye consists of ten semitransparent layers, each serving specific visual perception functions. Fundus photography provides the following views:
1. A high-level view of the topmost layers
2. A view of the inner limiting membrane
3. A view of the inner layers

FIG. 2 An OCT (optical coherence tomography) image.


Retinal abnormalities initially appear as irregularities in a particular layer, advance to other layers, and finally lead to spots in the nerve fiber layer. Accurate estimation of the depth to which an abnormality has propagated is necessary for accurate diagnosis when examining a fundus. However, even with recent technological advancements, fundus photography still cannot provide accurate depth estimation: the two-dimensional view of the fundus is created by the superimposition of two images. DRIVE and STARE are the two databases used for the evaluation of the methodology proposed in this work. They were chosen because of their popularity for testing the effectiveness of segmentation methods; their characteristics are summarized below.

Database   Number of images   Camera                            Resolution                           Format
DRIVE      40                 Canon CR5 3CCD camera, 45° FOV    8 bits per color, 768 × 584 pixels   JPEG
STARE      81                 TopCon TRV-50, 35° FOV            8 bits per color, 700 × 605 pixels   PPM

(FOV, field of view.)

Diabetic retinopathy (DR) is an eye disease caused by a prolonged period of high blood sugar, which allows deposits of protein and lipids to form and ultimately damages the retina; prolonged damage results in blindness. Patients who have had diabetes for 20 years or more have an 80% chance of being affected by DR. Recent research indicates that more than 90% of new cases could be prevented with proper, vigilant treatment and timely monitoring. Each year in the United States, 12% of new cases of blindness in the 20-64 age group are due to diabetic retinopathy.

An important problem with DR is the lack of warning signs at its early stages. In DR, spots appear in the retina within a few days or a week. The spots are due to blood leakage, which also causes blurred vision; the leaks often occur during sleep. The blood may take from a few days to months or even years to clear from the inside of the eye. DR damages the small blood vessels and neurons of the retina. The following retinal changes are noticed in diabetes patients before diabetic retinopathy develops:
1. Narrowing of the retinal arteries
2. Reduced blood flow in retinal areas
3. Malfunctioning of the inner neurons
These changes are followed by changes in the outer retina, which include:
1. Modification of visual function
2. Malfunctioning of the blood-retinal barrier


The malfunctioning of the blood-retinal barrier curbs the movement of immune cells, leading to blood leakage into retinal areas. The basement membrane of the retinal blood vessels thickens, and the capillaries slowly lose cells, leading to progressive ischemia and microscopic aneurysms, which appear as balloon-like structures bulging from the capillary walls (Figs. 3 and 4).

Glaucoma is another group of eye diseases that results in permanent loss of vision. Its types are open angle, closed angle, and normal tension; the most common is open-angle glaucoma. It develops over a period of time without any pain: a decrease in side vision is noticed initially, and without timely treatment it may lead to gradual loss of vision. Closed-angle glaucoma develops gradually or suddenly and causes severe eye pain. Most glaucoma-related eyesight loss is permanent. The risk factors for glaucoma are:
1. Increased pressure in the eyes
2. Conditions such as migraines, high blood pressure, and obesity

FIG. 3 Normal vision.

FIG. 4 Same view with DR.


Increased eye pressure, greater than 21 mmHg, is a risk factor. In some cases, prolonged elevated eye pressure does not translate into damage; this is referred to as normal tension. Open-angle glaucoma is caused by slow leakage of aqueous humor through the trabecular meshwork, whereas closed-angle glaucoma occurs because the iris blocks the trabecular meshwork. A dilated eye examination reveals an abnormal amount of cupping of the optic nerve. Early treatment can slow down the progression of the disease and in some cases stop it. Treatment involves medication, laser treatment, or surgery; laser treatment is the preferred mode, followed by surgery if it is not effective. Closed-angle glaucoma is often considered a medical emergency.

Robust screening methods are important for the timely intervention needed to prevent vision loss caused by diabetic retinopathy. The International Diabetes Federation estimates that diabetes will surge to around 552 million people by 2030. Among the other complications of diabetes, vision loss in the 20-74 age group is found in most developed countries. Red and bright lesions are the symptoms: the early stage of DR is indicated by the presence of red lesions and/or hard exudates (bright lesions). Microaneurysms (MAs) are focal dilatations of the retinal capillaries and appear as red dots in retinal fundus images. Fig. 5 shows the regions affected in the retina.

The retina is a multilayered sensory tissue. Light enters the eye through the pupil and is focused on the retina, where millions of photoreceptors convert the light rays into electrical impulses. The impulses travel through the optic nerve to the brain, where an image is formed. The optic disk is the entry and exit point for the nerves, and it appears brighter than the rest of the retina. By careful examination of changes in the retinal vasculature, diseases can be diagnosed. The diseases detected by examining the retinal structures are:

FIG. 5 Retinal main regions and DR related pathologies.

1. Diabetic retinopathy
2. Cardiovascular disease
3. Hypertension
4. Stroke

Permanent loss of vision results if treatment is not given at an early stage. Segmentation of the blood vessels is necessary for the detection of these disorders. Among them, DR is a severe threat to vision. Diagnosis includes identification of the following abnormalities:
1. Cotton wool spots
2. Red lesions
3. Soft exudates
4. Hard exudates
5. Hemorrhages

Segmentation of blood vessels is an important operation for the detection of DR abnormalities; segmentation and detection of morphological changes in the retinal blood vessels help in DR identification. The breakdown of the blood-retinal barrier results in conditions such as bright lesions and intraretinal lipid exudates. The loss of lipids and proteins leads to retinal edema and exudation, and the weakening of the capillary walls finally produces dot hemorrhages and lesions. As DR progresses, macular edema and neovascularization are noticed, finally resulting in retinal detachment. Systematic screening of diabetic patients by trained eye care specialists enables early diagnosis of the disease, and an automated screening tool accommodates the screening and annual reviews in diabetes clinics. There are several methods to accurately diagnose specific DR-related lesions; Fig. 6 shows retinal images with lesions.

FIG. 6 Retinal images with lesions: nonproliferative diabetic retinopathy (aneurysm, hemorrhage, hard exudate) and proliferative diabetic retinopathy (growth of abnormal blood vessels).

The retina is the posterior compartment of the eye, and damage to it often leads to severe eye diseases. Fundus images, captured using special devices such as ophthalmoscopes, help in the diagnosis of retinal disorders. Each pixel of a fundus image consists of three bands, RGB (red, green, and blue), and the pixel values are

quantized to 256 levels. The blood vessels are important parts of the retinal images, with a clear depiction of the arteries, and a physician can diagnose diseases by observing changes in the retinal images. In retinal hemorrhage, another severe eye disorder like DR, bleeding occurs in the light-sensitive tissues of the eye. It is caused by:
1. Hypertension
2. Retinal vein occlusion
3. Diabetes mellitus
Shaking, particularly in infants (shaken baby syndrome), and severe blows to the head also result in retinal hemorrhages. Retinal hemorrhages occurring outside the macula can go undetected for many years; they can be found only by careful ophthalmoscopy, fundus photography, or a dilated fundus exam. Posterior vitreous detachment or retinal detachment is the final result of retinal hemorrhages. Retinal image diagnosis also helps in diagnosing some cardiovascular diseases, as well as in automatic laser surgery on the eye and in biometric applications. The different DR abnormal lesions are:
1. Visionless
2. Dark lesions such as microaneurysms
3. Hemorrhages
4. Bright lesions such as hard and soft exudates

The severity grows over the years from the first occurrence of the disease: the early stage is mild NPDR (nonproliferative diabetic retinopathy), which develops into severe NPDR. Microaneurysms, dot and blot hemorrhages, and hard or intraretinal exudates are seen in the retinal images in mild NPDR; these microaneurysms are small and are the first signs of retinopathy. Diabetes, and the resulting diabetic retinopathy, has become a very common disease due to an insufficient amount of insulin in the body. Over the past decade, more than 80% of all diabetes patients have developed diabetic retinopathy, which may result in loss of vision. Exudate formation in the retina is an indication of DR, and most ophthalmologists rely on direct inspection of the retina as the diagnosis method. Deposits of protein and cellular debris, escaping from the blood vessels and settling in the tissues of the eye, decrease visual perception. Dilating the pupil of the eye with a chemical solution facilitates screening of the retina and manual identification of exudates.

The objective of this work is lesion detection using the segmented structure of the retina. By detecting lesions in the retina, the early signs of DR can be recognized, and through analysis of the retinal blood vessels and optic disk in retinal images, the early onset of DR can be diagnosed. In this work, an effort is made to classify patients with proliferation of DR and damage to the retina. The different DR diseases targeted for diagnosis in this work are:
1. Red spots
2. Microaneurysm
3. Neovascularization
In the process of diagnosis, the blood vessels are segmented using the curvelet transform and morphological operations (erosion and dilation), and the optic disk is segmented using a circular fitting method. From the segmented part, blood vessels are detected using Canny edge detection; the remaining portion is the exudate region. Retinal hemorrhages are observed as dark patches of various sizes in retinal images; the dark patches are indicative of bleeding and are signs of retinal disease or injury.

Dot hemorrhages    Small isolated red spots
Blot hemorrhages   Irregular, dotted-textured, localized circular structures

Hemorrhages are a sign of diseases such as diabetic retinopathy and hypertensive retinopathy, and the severity of the disease can be measured using the location and extent of the hemorrhages. Most computer-assisted screening and grading tools therefore focus on accurate detection and segmentation of hemorrhages. Several diseases, including DR, can be diagnosed through the detection and quantitative measurement of variations in the retinal blood vessels. The challenge lies in the intrinsic characteristics of abnormal retinal images, which make blood vessel detection a difficult process; in existing solutions for blood vessel segmentation, the risk of false positives is high due to the presence of diabetic retinopathy lesions. A novel scheme using morphological component analysis (MCA) is proposed in this work to solve this problem. MCA exploits the sparse representation of signals: each signal is assumed to be a linear combination of distinct morphological components, and MCA applies an appropriate signal transformation to separate them. In this way, vessels and lesions are separated from each other, and the retinal blood vessels are extracted using the Morlet wavelet transform. The performance of the proposed method is measured on the publicly available DRIVE and STARE datasets, and the results are compared with several state-of-the-art methods in the same domain. The proposed solution achieves accuracies of 0.9523 and 0.9590 on the two datasets, respectively, higher than most methods and comparable to human observation. False positives are reduced in pathological regions compared to other existing methods, and the method is also robust in the presence of noise.

Many research efforts have been made in the area of retinal vessel segmentation and quantitative measurement of vessel variations; such retinal analysis helps in the diagnosis of several DR diseases. Manual segmentation of blood vessels is a time-consuming task


and can be error prone, which has motivated the development of automatic vessel segmentation and vessel parameter analysis methods. The existing solutions for retinal vessel segmentation are categorized as follows:
1. Applying matching filters
2. Use of multiscale algorithms
3. Use of pattern recognition models
4. Tracking of blood vessels
5. Using model-based techniques
6. Applying mathematical morphology operations

Various kernels can filter the blood vessel part of a retinal image: the vessel is modeled at different positions and orientations by the kernel, and the filter response provides the desired feature. This method reported an accuracy of 0.92 when evaluated on the DRIVE dataset. In another approach, the retinal image is enhanced using a 2D Gaussian matched filter and the result is simplified using a pulse-coupled neural network (PCNN), which segments the blood vessels; Otsu thresholding is used to search for the best segmentation result, and the final vessel map is obtained through analysis of regional connectivity. A true positive rate of 0.80 and a false positive rate of 0.02 were obtained on the STARE dataset.

Exudate lesions are manually detected and graded by clinicians, which is time consuming and error prone. Leakage from the small vessels of the retina can be correlated with sustained higher blood sugar levels. In the onset stage, known as nonproliferative retinopathy, bleeding of the capillaries or exudation is noticed, due to protein deposits in the retina. Until there is fluid buildup in the center of the eye, it may not lead to vision loss. As the disease progresses, neovascularization occurs; at this stage the disease is termed proliferative retinopathy, and it causes severe visual problems.

2 Literature review

Retinal image analysis is gaining prominence in modern ophthalmology because it is a nonintrusive diagnostic method. From retinal image analysis and morphological analysis of the blood vessels and optic disk, various abnormalities such as diabetic retinopathy, glaucoma, and hypertension can be diagnosed, and early signs of DR can be detected by localizing lesions in the retina.

Salazar-Gonzalez and Kaba [1] proposed a graph cut technique that segments the blood vessels from retinal images. After the blood vessels are segmented, the location of the optic disk is estimated, and its segmentation is done using two methods: the Markov random field (MRF) image reconstruction method, and a method based on prior local intensity knowledge of the blood vessels. Contrast enhancement is applied to the green component of the fundus image.


Kaur and Singh [2] proposed a soft-computing approach using fuzzy c-means for blood vessel segmentation. The method is able to detect abnormalities; lesions are found using a region growing algorithm and an artificial neural network (ANN) classifier. Wang et al. [3] proposed a statistical classification method for the detection of diabetes-related eye diseases. Contrast-limited adaptive histogram equalization (CLAHE) is used for contrast enhancement of the fundus images, making the blood vessels more distinguishable from the background and thus allowing efficient segmentation. Features based on color and texture are extracted from the image, and a minimum distance discriminant (MDD) function is trained on these features to classify diabetes-related diseases; the presence of lesions can be detected using the MDD classifier, which relies on statistical pattern recognition techniques. Gayathri and Narmadha [4] proposed a robust and computationally efficient approach that localizes different features and lesions in a fundus retinal image. Their method consists of preprocessing, contrast enhancement, blood vessel extraction, and dark lesion detection stages. Since the green channel shows well-contrasted arteries and veins, it is contrast enhanced using adaptive histogram equalization; curvelet transform and Gabor filter-based feature extraction are then performed, and these features are used to detect lesions in the image. Tripathi and Singh [5] proposed a noise suppression method that uses the histogram of the image to enhance it and remove noise, making the enhanced image more amenable to segmentation and later analysis. Akram et al. [6] proposed a three-stage system for the early detection of MAs using filter banks, since accurate MA detection is very important for early DR diagnosis. In the first stage, all possible candidate MA regions in the retinal image are extracted; the candidates are then classified as MA or non-MA using features such as shape, color, intensity, and statistics with a hybrid classifier, in which a Gaussian mixture model (GMM), a support vector machine (SVM), and a multimodel-based modeling approach are used together to improve the classification accuracy. Rathinam and Selvarajan [7] compared the performance of five preprocessing techniques: contrast adjustment, median filtering, average filtering, homomorphic filtering, and adaptive histogram equalization. The techniques are compared using the mean square error (MSE) and peak signal-to-noise ratio (PSNR); median and average filtering gave better results than the others, with the median filter achieving the higher PSNR and lower MSE. Meganathan and Sukumar [8] proposed median filtering to remove noise without fading sharp edges, making further retinal analysis more efficient. Median filtering is very robust and removes outliers; it is the preferred choice for removing salt-and-pepper noise, replacing each pixel by the median of all pixels in the neighborhood of a small sliding window. Gupta et al. [9] proposed the segmentation of hemorrhages for computer-assisted analysis of diseases such as diabetic retinopathy and hypertensive retinopathy. Segmentation is


done using the region growing method: centroids of connected sets of pixels with the same intensity are found, and an iterative intensity-based region growing is applied in which, at each iteration, the region adds neighboring pixels whose intensity is within a certain contrast limit of the current region mean. Adalarasan and Malathi [10] proposed a biogeography-based optimization algorithm to classify the pixels into two groups, vessel and nonvessel. The learning focuses on the extraction of normal and isolated characteristics in the green component of the fundus image; an adaptive filter is used to match the lump part of the vessel, and a thresholding-based method is used for segmentation. Whardana and Suciati [11] proposed a mechanism for the segmentation of the optic disk. Determining the position of the optic disk during segmentation is difficult because it lies very close to the blood vessels; the OD is localized using thresholding, and k-means clustering is used for segmentation. Walvekar and Salunke [12] proposed median filtering for noise removal and adaptive histogram equalization for contrast enhancement, with morphological operations for further noise removal; various features are extracted from the blood vessels and optic disk for retinal analysis, using different morphological filters. Kavitha and Palani [13] proposed a blood vessel extraction method in which the image is resized to 720 × 576 pixels without altering the original aspect ratio. The green color plane is used for analysis, and CLAHE is used to suppress noise in the low-contrast areas of the image. After inversion of the green component, edge detection is performed using the Canny method. In the retinal layer, exudates appear as bright yellow or white deposits whose shape and size may vary with the stage of retinopathy. The green channel image is converted to a uniform grayscale image in preprocessing, and the morphological closing operation is carried out to extract the blood vessels.

3 Lesion detection using segmented structure of retina

The process flow of the proposed system is documented in Fig. 7. Image acquisition is the retrieval of the image from a particular source, usually hardware-based. The acquired image is preprocessed to remove noise and enhance image quality; the preprocessing method is selected based on the computation time and cost, the desired noise-removal efficiency, and the quality of denoised image required. Linear or nonlinear methods can be used: in a linear method, a filter is applied uniformly to all pixels, so the corrupted image is filtered and an uncorrupted image is obtained, while nonlinear filters are generally more efficient than linear ones.

Segmentation involves dividing an image into regions based on properties such as color, texture, brightness, contrast, and gray level. A grayscale image is taken as input for segmentation, and the output of the process is the abnormalities. The purpose of segmentation is to give


FIG. 7 Flow diagram of the proposed system: input image → Gaussian filter + AHE → fuzzy c-means clustering → morphological feature extraction → SVM classifier → normal/abnormal decision, followed by abnormal region extraction.

greater information than exists in the raw medical images: image segmentation operations separate an image into clusters of connected pixel regions or shapes. The feature extraction process is then used to extract features of interest from the segmented image. Classification involves a broad range of machine learning algorithms for deciding the class of an image based on its features; the features must have a high correlation to a single class among several distinct classes. In supervised classification, the classes are known a priori and the dataset is organized into known clusters. In unsupervised classification, the analyst merely specifies the number of desired categories, and the data are organized automatically based on it.

3.1 Preprocessing

A Gaussian filter and adaptive histogram equalization are used for preprocessing the image.

3.1.1 Gaussian filter

The Gaussian filter is a linear filter applied to all pixels in the image, without defining an area of pixel corruption. Three different bands (red, green, blue) are present in the fundus image. The green band carries more information for retinal analysis and exudate detection than the red and blue bands, so it is sufficient to use the green band alone:

I = [I_R, I_G, I_B]  (1)

where the fundus RGB image is denoted as I and can be decomposed into red, green, and blue components, respectively. In the green component, exudates appear brighter than in the other two components, and thus it is chosen for the detection of exudates. The optic disk also appears bright, but in the green component it appears fragmented owing to the high contrast of the blood vessels; thus the optic disk can be easily removed. The image is preprocessed by applying the Gaussian function, which removes high-frequency components; for a smooth image, Gaussian noise must be removed, and this is possible with Gaussian processing of the image:

I_S(x, y) = I_G(x, y) * g(x, y)  (2)

where g(x, y) is the Gaussian function and * is the convolution operation:

g(x, y) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}}  (3)
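As a minimal sketch of this step (the input file name, kernel size, and σ below are illustrative assumptions, not values from the chapter), the green band can be extracted and smoothed with OpenCV as follows:

```python
import cv2

# Load the fundus image; OpenCV stores channels in B, G, R order.
image = cv2.imread('fundus.png')                 # hypothetical file name
green = image[:, :, 1]                           # green band I_G of Eq. (1)

# Gaussian smoothing, Eqs. (2)-(3); kernel size and sigma are illustrative.
smoothed = cv2.GaussianBlur(green, (5, 5), sigmaX=1.5)
```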

3.1.2 Adaptive histogram equalization

The contrast of the image is increased using adaptive histogram equalization. The adaptive method computes several histograms, each corresponding to a distinct part of the image, and uses them to redistribute the lightness values across the image. Local contrast is improved, and the edges in each region of the image are strengthened. The contrast between the background pixels is enhanced as well; noise pixels are also enhanced and appear together with the background information. To improve the contrast of the image and to correct uneven illumination, the following operation is applied:

I_{enhanced}(p) = M \left( \frac{1}{h^2} \sum_{p' \in R(p)} s(I(p) - I(p')) \right)^{r}  (4)

where I is the green channel of the fundus retinal color image, p denotes a pixel, and p' is a neighborhood pixel around p, with p' ∈ R(p), the square window neighborhood of side length h. Here s(d) = 1 if d > 0 and s(d) = 0 otherwise, with d = I(p) - I(p'); M = 255 is the maximum intensity value in the image; and r is a parameter controlling the level of enhancement. The contrast between the vessel pixels increases as r increases.
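A literal NumPy/SciPy sketch of Eq. (4) is given below; the window size h and exponent r are illustrative, and this direct implementation is slow on large images:

```python
import numpy as np
from scipy import ndimage

def enhance(I, h=15, r=3, M=255.0):
    """Local contrast enhancement following Eq. (4); h and r are illustrative."""
    def local_score(window):
        # s(d) = 1 if d > 0: count neighbors darker than the center pixel,
        # normalized by the window area h*h.
        center = window[window.size // 2]
        return np.sum(center > window) / float(window.size)

    scores = ndimage.generic_filter(I.astype(float), local_score,
                                    size=h, mode='reflect')
    return M * scores ** r
```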

3.2 Segmentation

The fuzzy c-means algorithm is the segmentation algorithm selected in this work. Different from Boolean logic, fuzzy logic reasons in terms of fuzzy variables whose values vary from 0 to 1, instead of the crisp values 0 or 1 of Boolean logic. It is designed to handle conditions of partial truth, whose values range from completely true to completely false.


Clustering involves grouping a set of objects in such a way that similar objects fall in one cluster and dissimilar objects appear in different clusters. Clustering methods are used for the statistical analysis of data in many fields.

3.2.1 Fuzzy c-means clustering

Fuzzy c-means clustering works by creating a fuzzy membership function that allocates each data point to a cluster: the closer a data point is to a cluster center, the higher the value of its fuzzy membership, and the memberships of each point sum to one. The memberships and cluster centers are updated according to the following formulas:

\mu_{ij} = \frac{1}{\sum_{k=1}^{c} \left( \frac{d_{ij}}{d_{ik}} \right)^{\frac{2}{m-1}}}  (5)

v_j = \frac{\sum_{i=1}^{n} \mu_{ij}^m x_i}{\sum_{i=1}^{n} \mu_{ij}^m}, \quad \forall j = 1, 2, \ldots, c  (6)

Here v_j represents the jth cluster center; m is the fuzziness index, m ∈ [1, ∞]; c is the number of cluster centers; μ_ij is the membership of the ith data point in the jth cluster; and d_ij is the Euclidean distance between the ith data point and the jth cluster center.

3.2.2 Fuzzy c-means algorithm

The fuzzy c-means algorithm iterates the two update steps above: initialize the membership matrix, compute the cluster centers with Eq. (6), update the memberships with Eq. (5), and repeat until the memberships stop changing.
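The listing itself appeared as a figure in the original; the following NumPy sketch implements the standard iteration of Eqs. (5) and (6), with random initialization and a convergence tolerance as assumed details:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Cluster the rows of X (n samples x d features) into c fuzzy clusters."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]               # Eq. (6)
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :])
                       ** (2.0 / (m - 1.0))).sum(axis=2)             # Eq. (5)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U
```

For image segmentation, X would hold the pixel intensities (one row per pixel), and each pixel is assigned to the cluster with the highest membership.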


3.2.3 Flow chart

The iteration above can be summarized as a flow chart: start from an initial membership matrix, compute the cluster centers with Eq. (6), update the memberships with Eq. (5), check for convergence, and loop back to the center computation until the memberships stabilize.

3.3 Feature extraction and selection

Feature extraction is an important precursor to image classification: the features extracted from the image must be informative and nonredundant. Feature selection techniques allow us to drop redundant features and retain relevant ones; feature selection is essentially dimensionality reduction. The dimension of the extracted features is quite large and must be reduced, which is done by eliminating redundant features and selecting the more relevant ones.


Without feature selection, the feature dimension is quite high, and the classification algorithms need large amounts of memory and computation power; the classifier may also overfit in certain cases, affecting the accuracy of the classification. Feature engineering is concerned with identifying feature constructs that increase classification accuracy. Feature engineering with expert guidance can help to reduce the dimension of the features; when such guidance is not available, statistical dimensionality reduction techniques can help.

3.3.1 Feature extraction and selection using morphological operations

In mathematics, mathematical morphology (MM) is used for the analysis and processing of geometrical structures. It draws on concepts from set theory, lattice theory, and the theory of random functions. MM is the foundation of morphological image processing, which transforms images according to two basic operations:
1. Erosion
2. Dilation

Initially, MM was developed for binary images and was later extended to grayscale images. MM was developed by Georges Matheron and Jean Serra in 1964 at the École des Mines de Paris, France; Matheron supervised the Ph.D. thesis of Serra, which worked on the quantification of mineral characteristics from thin cross sections. The result of this thesis was a novel practical approach, as well as theoretical advancements in integral geometry and topology. From the 1960s to the 1970s, MM dealt mainly with binary images, and the following techniques were developed:
1. Hit-or-miss transformation
2. Dilation
3. Erosion
4. Granulometry
5. Thinning
6. Skeletonization
7. Ultimate erosion
8. Conditional bisector

A random approach based on the novel image model was also developed. From the mid-1970s to the mid-1980s, MM was applied to grayscale images; the existing concepts of dilation and erosion were extended with the addition of new operations such as:
1. Morphological gradients
2. The top-hat transform
3. The watershed segmentation approach


Serra generalized MM in 1986 to a framework based on lattices. This generalization made MM applicable to a large number of structures, including RGB images, video, graphs, and meshes. Morphological filtering based on the new lattice framework was also proposed by Matheron and Serra. The existing morphological operations can be classified into the following categories:
1. Dilation
2. Erosion
3. Closing
4. Opening

The dilation process involves passing the structuring element over the image in a sliding-window manner, similar to a convolution, performing an operation at each window position. The sequence of steps is as follows:
1. If the origin of the structuring element coincides with a white pixel in the image, there is no change to the pixel, and we move to the next pixel.
2. If the origin of the structuring element coincides with a black pixel in the image, the pixel is made black.
Dilation is denoted as A ⊕ B. Any shape is possible for the structuring element; typical shapes are shown in Figs. 8 and 9.

In erosion, pixels are converted to white, not black. As in dilation, the structuring element is moved across the image. The sequence of steps in the erosion process is as follows:
1. If the origin of the structuring element coincides with a white pixel in the image, there is no change, and we move to the next pixel.
2. If the origin of the structuring element is at a black pixel in the image, and at least one of the black pixels of the structuring element falls over a white pixel in the image, the pixel is made white.
Erosion is denoted as A Θ B (Fig. 10).

FIG. 8 Structuring elements: 4-neighbor and 8-neighbor (white = 255, black = 0).


FIG. 9 Dilation. (A) Original image. (B) Structural element; x = origin. (C) Image after dilation; original in dashes.

FIG. 10 Erosion. (A) Original image. (B) Structural element; x = origin. (C) Image after erosion; original in dashes.

The basic operations of dilation and erosion can be combined into more complex sequences; opening and closing are the most widely used composite morphological filtering sequences. Erosion followed by dilation is called opening. This operation eliminates all pixels in regions that are too small to contain the structuring element; the structuring element in this case is called a probe, as it looks for small objects to be filtered out of the image. The opening operation is expressed as A ∘ B = (A Θ B) ⊕ B. Dilation followed by erosion is called closing; it has the effect of filling in holes and closing small gaps. Although dilation and erosion are involved in both closing and opening, the two generate different results because the operations are applied in a different order. The closing operation is expressed as A • B = (A ⊕ B) Θ B (Figs. 11 and 12).
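A brief OpenCV sketch of the four operations on a grayscale image (the file name is hypothetical; the elliptical 5 × 5 structuring element is an illustrative choice):

```python
import cv2

img = cv2.imread('segmented.png', cv2.IMREAD_GRAYSCALE)   # hypothetical file
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

dilated = cv2.dilate(img, kernel)                          # A ⊕ B
eroded = cv2.erode(img, kernel)                            # A Θ B
opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)     # erosion then dilation
closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)    # dilation then erosion
```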


FIG. 11 Opening. (A) Original image. (B) Structural element; x = origin. (C) Image after opening; erosion followed by dilation.

FIG. 12 Closing. (A) Original image. (B) Structural element; x = origin. (C) Image after closing; dilation followed by erosion; original in dashes.

3.4 Classification

Image classification categorizes the pixels in an image into different classes. Classification is performed on multispectral data: the data embedded in each pixel are used for classification, and different spectral reflectance and emittance lead to different classifications. Many classification methods have been proposed, and no specific classifier can be termed the best, as each performs best for different applications; it is therefore essential to analyze the alternatives and select the classifier best suited to the work at hand. The classification methods fall into two categories:
1. Unsupervised
2. Supervised


Unsupervised methods, often referred to as clustering, perform classification without any class labels given a priori; no labeled data is needed. In supervised classification, labeled data guides the training process, and further classification is based on that training; a labeled dataset is a prerequisite, and the labels must be defined so that there is no conflict in the association of data to classes. Unsupervised methods learn the class structure through the clustering process. Unsupervised methods can be more practical than supervised ones: supervised methods are made difficult by the problem of selecting a training set that captures the variability of the spectral response in each class, the data collection process is time consuming, and the set of classes is fixed, whereas unsupervised methods have no such restriction and multiple dynamic classes can be learned. On the other hand, when the training dataset is well prepared and the statistical correlation between the features and the labels is high, the accuracy of supervised classification is higher than that of unsupervised classification.

Labeled training data is not used in unsupervised classification: the classifier analyzes the unknown pixels of the image and maps each pixel to one of a number of classes based on natural groupings, or clusters, of the image values. When the data of different classes are well separated, the performance of unsupervised classification is higher. The classes that result from unsupervised classification are spectral classes whose identity is initially unknown; a comparison of the classified data with some baseline reference is needed for the analyst to establish their identity, and the spectrally separable classes are identified by defining their informational utility. In supervised classification, by contrast, pixels located within the training areas guide the classification algorithm, and specific spectral values are matched to the appropriate informational class.

3.4.1 SVM classifier

The SVM is a machine learning classifier introduced in 1992 by Boser, Guyon, and Vapnik at COLT-92. It is a supervised learning method used for classification and regression and belongs to the family of linear classifiers. It uses mathematical optimization to learn a hyperplane separating the training data into classes; the hyperplane is optimized while avoiding overfitting. Initially popular within the NIPS community, it later became an active area of machine learning research. The support vector machine is modeled mathematically as follows. We are given a set of training data {(x_1, y_1), ..., (x_l, y_l)} in R^n × R, sampled according to an unknown probability distribution P(x, y), and a loss function V(y, f(x)) that measures the error when, for a given x, f(x) is predicted instead of the actual value y. The problem consists of finding a function f that minimizes the expected error on new data, that is, finding the f that minimizes ∫ V(y, f(x)) P(x, y) dx dy.


In statistical modeling, we would choose a model from the hypothesis space that is closest (with respect to some error measure) to the underlying function in the target space. An SVM is a binary classifier that separates two classes by creating a hyperplane between them. When trained with labeled training data, the SVM provides an optimal separating hyperplane, which can then be used to classify new data instances. For linearly separable 2D points belonging to one of two classes, the hyperplane is a straight line, and the support vector machine provides the optimal line for the given training dataset; this case is shown in Fig. 13. In Fig. 13 there are multiple solutions, modeled as different straight lines. The criteria for judging the worth of a line are:
• A line passing close to the points is not good, as it is very sensitive to noise.
• A line passing far from all points is close to the optimal solution.
The optimization of the SVM is thus based on finding the hyperplane with the largest minimum distance to the training samples: the optimal separating hyperplane maximizes the margin of the training samples (Fig. 14). The hyperplane is defined mathematically as

f(x) = \beta_0 + \beta^T x  (7)

FIG. 13 SVM classifier: candidate separating lines in the (x1, x2) plane.


FIG. 14 SVM classification: the optimal hyperplane with maximum margin in the (x1, x2) plane.

where β is the weight vector and β_0 is the bias. Among all possible representations, the hyperplane is best represented as

|\beta_0 + \beta^T x| = 1  (8)

where x denotes the training examples closest to the hyperplane; these closest samples are the support vectors, and this representation is called the canonical hyperplane. From geometry, the distance between a point x and the hyperplane is

distance = \frac{|\beta_0 + \beta^T x|}{\|\beta\|}  (9)

For the canonical hyperplane the numerator is 1, so the distance to the support vectors is

distance = \frac{|\beta_0 + \beta^T x|}{\|\beta\|} = \frac{1}{\|\beta\|}  (10)

The margin M is twice the distance to the closest samples:

M = \frac{2}{\|\beta\|}  (11)


The problem of maximizing M is expressed as the problem of minimizing a function L(β) subject to constraints, the constraints being that the hyperplane classifies all the training samples x_i correctly. Formally,

\min_{\beta, \beta_0} L(\beta) = \frac{1}{2}\|\beta\|^2 \quad \text{subject to} \quad y_i(\beta^T x_i + \beta_0) \ge 1 \;\; \forall i  (12)

where the y_i are the corresponding labels of the training samples. This is a Lagrangian optimization problem and is solved using Lagrange multipliers, yielding the weight vector β and the bias β_0 of the optimal hyperplane.
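As an illustration only (the feature values below are invented, not the chapter's actual features), a linear SVM in scikit-learn learns the maximum-margin hyperplane of Eq. (12):

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical two-feature vectors per image; label 1 = lesion (abnormal).
X_train = np.array([[0.12, 0.80], [0.15, 0.75], [0.20, 0.70],
                    [0.70, 0.20], [0.65, 0.25], [0.75, 0.30]])
y_train = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel='linear', C=1.0)    # linear kernel: separating hyperplane
clf.fit(X_train, y_train)

print(clf.coef_, clf.intercept_)     # beta and beta_0 of Eq. (7)
print(clf.predict([[0.68, 0.28]]))   # -> [1], classified as abnormal
```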

4 Result and discussions

The DRIVE database is used for experimentation and measurement of the performance of the proposed solution. For each image in the database, a mask image is provided that delineates the FOV. Each fundus image is an RGB image with red, green, and blue components; since the red and blue bands do not carry significant information for exudate detection, they are dropped and the green component is used for analysis. The green component is smoothed by applying the Gaussian function to the image, and adaptive histogram equalization (AHE) is used to make the lesion portions of the fundus image clearer and more definite (Figs. 15-17). After contrast improvement, segmentation is performed using the fuzzy c-means algorithm (Fig. 18), which groups similar pixels into clusters. On the segmented portions, morphological feature extraction is performed to extract abnormality features from the fundus image; dilation and erosion are used to reduce the dimension of the feature set. The red line in Fig. 19 denotes the abnormality region. The extracted features are passed to the SVM classifier, which labels images with lesions as abnormal and images without lesions as normal; for abnormal images, the lesion part is extracted (Figs. 20 and 21).

4.1 Performance analysis

The following parameters are measured and used for the performance evaluation of the proposed solution:
1. Peak signal-to-noise ratio (PSNR)
2. Root-mean-square error (RMSE)
3. Sensitivity
4. Specificity
5. Accuracy


FIG. 15 Input image.

FIG. 16 Gaussian-filtered image.

FIG. 17 Histogram-equalized image.

FIG. 18 Fuzzy c-means segmentation.


FIG. 19 Feature extraction.

FIG. 20 SVM result.

FIG. 21 Extracted abnormal region.


PSNR is measured as

PSNR = 10 \log_{10}\left(\frac{R^2}{MSE}\right)  (13)

where R denotes the maximum pixel value in the image and MSE is the mean square error between the original image x and the denoised image y, calculated as

MSE = \frac{1}{M N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( x(i, j) - y(i, j) \right)^2  (14)

where M × N is the size of the image in horizontal and vertical pixels. The RMSE (root-mean-square error) is calculated as

RMSE = \sqrt{MSE}  (15)
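A small NumPy helper implementing Eqs. (13)-(15), assuming 8-bit images so that R = 255:

```python
import numpy as np

def psnr_rmse(x, y, R=255.0):
    """PSNR and RMSE between original x and denoised y (same shape)."""
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)   # Eq. (14)
    psnr = 10.0 * np.log10(R ** 2 / mse)                      # Eq. (13)
    return psnr, np.sqrt(mse)                                 # Eq. (15)
```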

PSNR and RMSE values for the different images are shown in Table 1. The performance of the Gaussian filter used in the proposed work can be evaluated using these values: a higher PSNR and a lower RMSE signify better preprocessing and removal of Gaussian noise, and decreasing RMSE with increasing PSNR represents an improvement in the objective quality of the image. The Gaussian filter is ideal for removing Gaussian noise, as opposed to salt-and-pepper or speckle noise. To evaluate the performance of the output result, the following performance measures are calculated:
1. Sensitivity
2. Specificity
3. Accuracy
To calculate these parameters, the following counts must be obtained over the N images:
1. True positives (TP)
2. True negatives (TN)
3. False positives (FP)
4. False negatives (FN)

Table 1  PSNR and RMSE values of images.

Image    PSNR (dB)    RMSE
1        33.2484      5.4157
2        32.4363      5.9546
3        33.0550      5.5266


Sensitivity is measured as

\[ \mathrm{Sensitivity} = \frac{TP}{TP + FN} \times 100 \]

Specificity is measured as

\[ \mathrm{Specificity} = \frac{TN}{TN + FP} \times 100 \]

Accuracy is measured as

\[ \mathrm{Accuracy} = \frac{TP + TN}{N} \times 100 \]
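In code, the three measures follow directly from the four counts; a minimal sketch with placeholder names, using TN + FP in the specificity denominator as in the standard definition:

def classification_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity, and accuracy (percent) over N = tp + tn + fp + fn images."""
    n = tp + tn + fp + fn
    sensitivity = 100.0 * tp / (tp + fn)   # true positive rate
    specificity = 100.0 * tn / (tn + fp)   # true negative rate
    accuracy = 100.0 * (tp + tn) / n
    return sensitivity, specificity, accuracy

# Example usage with illustrative counts only:
print(classification_metrics(tp=16, tn=20, fp=2, fn=2))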

The values of these parameters obtained by the proposed solution on the DRIVE dataset are as follows:

Sensitivity    89%
Specificity    92%
Accuracy       94%

The high specificity indicates a low false-positive rate, and the accuracy obtained by the proposed system is higher than that of the existing systems.

5 Conclusion

In this work, a classification system for retinal abnormalities is proposed. The system uses Gaussian filtering for preprocessing and adaptive histogram equalization for contrast enhancement: Gaussian noise is removed by applying a Gaussian filter to the input fundus image, and adaptive histogram equalization improves the contrast of the image better than traditional methods. The contrast-enhanced image is segmented using fuzzy c-means clustering, which delineates the lesions more cleanly than the other segmentation techniques compared. Morphological feature extraction with dilation and erosion operations reduces the dimensionality of the features, which in turn reduces the complexity of the classifier. Classification of images as normal or abnormal is performed with an SVM trained to label images with lesions as abnormal and images without lesions as normal. The accuracy of the proposed method is 94%, which is higher than that of the existing solutions.


Index

Note: Page numbers followed by f indicate figures, t indicate tables, and b indicate boxes.

A Accountable machine learning, 227–228 Accuracy, performance measures, 406 Adaptive contrast equalization, 252 Adaptive histogram equalization (AHE), 388, 391, 401, 402–403f Aflibercept, type 3 neovascularization, 334, 336 Age-related macular degeneration (AMD), 61, 133–137, 166–168 diagnosis, 135 dry macular degeneration, 135 exudative, 322, 336 fundus image analysis for, 138–144 neovascular, 321–322 reticular pseudodrusen, 323 treatment, 332–335 optical coherence tomography image analysis for, 144–156, 204–205, 359, 369–370 risk factors, 134 symptoms of, 134 treatment, 135–136 wet macular degeneration, 135–136 Amsler Grid testing, 135 Aneurysm, 114 Angiography, age-related macular degeneration, 135 ANN. See Artificial neural network (ANN) Anterior ischemic optic neuropathy (AION), 168–169, 199–200 Anti-vascular endothelial growth factor therapy, 135–136, 168 Arc over chord ratio method, 262–264, 266t Area under the curve (AUC), 69–70 ARIA. See Automatic retinal image analysis (ARIA) Arterial sclerosis, 37

Arteritic anterior ischemic optic neuropathy (AAION), 168–169 Artery/veins (A/V) classification, blood vessels, 307–309, 308f, 310f, 316–317f Artifacts, optical coherence tomography, 207 Artificial neural network (ANN), 116–119 classifier, 388 dependency graph, 119f structure of, 118f AUC. See Area under the curve (AUC) Augenspiegel, 61–63 Automated detection method, 116 Automated segmentation phase, 3 Automatic retinal image analysis (ARIA), 67–80, 83 fovea, 73–80 macula, 73–80 optic disc (OD), 73–80 optic nerve head (ONH), 73–80 performance metrics use in, 69–71 retinal vessel segmentation, 71–73 Automatic seed generation method, 349 Automatic vessel segmentation methods, 71–72 Axons, 117 B Babbage ophthalmoscope, 61–63 Bags of visual words (BoVW), 224, 226, 232 Barbara Davis Center for Diabetes, 246 Bayesian detection algorithm, 117 Bayesian regularization (BR), 115–116 Bevacizumab, 334–335 Bifurcation points, 9–10 Bifurcations abnormalities, 36 Biogeography-based optimization algorithm, 389


Blood vessel density estimation, 7, 7b, 8f Blood vessel extraction method, 345–347, 389 Blood vessel segmentation, 6–7, 6f, 297, 346–347, 346f, 348f diabetic retinopathy screening, 301–302, 303f feature selection, 308t fuzzy c-means, 388 manual, 386–387 retinal, 386–387 thinning results, 303, 303f Blood vessel tortuosity, 246–247, 257–258 Boolean logic, 391 Bottom-hat transformation, 302 Branch retinal artery occlusion (BRAO), 278, 278f Branch retinal vein occlusion (BRVO), 170, 275–277, 275–276f Bright lesions map (BLM), 348 C Candidate region detection, retinal fundus image, 247–249, 254 intensity features, 256 profile features, 255 shape features, 255–256 Canny edge detection, 247–248, 349 Canonical hyperplane, 399–400 Central retinal artery occlusion (CRAO), 279, 279–280f Central retinal thickness (CRT), 172 Central retinal vein occlusion (CRVO), 170, 277, 277f Central serous chorioretinopathy (CSC), 198–199 Child Heart and Health Study in England (CHASE) database, 67 Choriocapillaris (CC), 323, 329 Choroidal neovascularization (CNV), 135–136, 166–168 Choroidal-retinal anastomosis, 321 Chronic hyperglycemia, 59–61, 80 Circular Hough transform (CHT), 76 Circular mask, 260 Classifier configuration, 51–52

Class-imbalance problem, 71 Clinically Significant Macular Edema (CSME), 91–92 Clip limit, 252–253 Closed-angle glaucoma, 382–383 Closing operation, 302, 396, 397f Clustering, 88, 309, 392, 398 fuzzy c-means, 391–392 K-means clustering, 362, 389 CNNs. See Convolutional neural networks (CNNs) Color fundus photography, 64–65, 344. See also Fundus photography branch retinal artery occlusion, 278, 278f central retinal artery occlusion, 279, 279f central retinal vein occlusion, 277, 277f chronic branch retinal vein occlusion, 275–277, 275–276f chronic central retinal artery occlusion, 279, 280f macular telangiectasia, 280, 281f nonproliferative diabetic retinopathy with macular edema, 271–273, 272f optic disc detection, 348, 349f prepapillary venous vascular loop, 282, 282f proliferative diabetic retinopathy, 273–274, 273–274f proposed algorithm for classification of, 364f superior arcade branch retinal artery occlusion, 278, 278f Color normalization, retinal fundus image, 253 Comparison of AMD Treatments Trials (CATT), 335 Computer-aided diagnosis (CAD) systems, 3, 88–89, 90f, 206, 286 Conditional random fields (CRFs), 73 Cones, 287–288 Confusion matrix, 69, 70f Content-based image retrieval technique, 116 Contrast equalization, 247, 252 Contrast limited adaptive histogram equalization (CLAHE), 252–253, 345, 388–389 Convolutional neural networks (CNNs), 28–34, 54f, 55t, 73, 86, 88, 92


architecture, 29f configuration, 52t training time, 53f Cornea, 245 Corneal reflection, 63 Correlation-based feature subset (CFS) selection method, 359 Cotton-wool spots (CWS), 39, 39f, 80–83, 88, 290 CRFs. See Conditional random fields (CRFs) Crossover point detection, 305–306 CSME. See Clinically Significant Macular Edema (CSME) Cumulative distribution function (CDF), 7–8 Cup-to-disc area ratio (CAR), 73–74, 94–95 Cup-to-disc diameter ratio (CDR), 73–74, 94–96, 95f Cup-to-disc ratio, 61 Curvature method, 258, 262–266, 266t Curvelet transform, 73, 388 blood vessel extraction methods, 346–347 optic disk segmentation approach, 348 CWS. See Cotton-wool spots (CWS) Cystoid macular edema (CME), 203, 324 D Data-adaptive methods, diabetic retinopathy, 361–362 Data-driven approach, 224–225, 228–229, 235–237 Data mining approach, 136 Dataset, 42–50, 42–51t Decision support system (DSS), 358 Decision tree, 116 Deep capillary plexus (DCP), 321–322, 330–332, 336–337 Deep learning approach, 72–73, 92, 359 for classification of eye diseases based on color fundus images classifier configuration, 51–52 dataset, 42–50 experiment result, 53–55 fundus imaging, 34–40 model training and cross validation, 53 overview of, 26–34

random oversampling mechanism, 51 research framework, 40–41 retinal image, 25 Deep neural network (DNN), 226 D-Eye ophthalmoscope, 97 Diabetic macular edema (DME), 61, 80, 164, 344 detection methods, performance measures for, 93t optical coherence tomography, 201–203, 358–359, 358f, 369–370 severity levels, automatic classification of, 88–92 Diabetic maculopathy, 80 Diabetic retinopathy (DR), 1, 40, 80–92, 223–224, 245–246, 285–286, 343, 381, 383 abnormal lesions, 385 accountable machine learning, 227–228 automated diagnostic tasks, 225 caused by, 385 classification, 41t, 85f comparison with state of the art, 239–240 contextualizing with state of the art, 231–233 database images under test set, 314, 315f data-driven approach, 228–229, 235–237 datasets, 233–234 detection methods, performance measures for, 91t diagnosis, 10, 293–296, 293f, 296f, 343–344, 384–386 effects of central vision loss due to, 288, 288f fundus classification, 353–358 dictionary learning, 362–363, 363–364f, 364t imaging biomarkers, 344–353 fundus photography, 296–297 global and local information, 238–239 imaging biomarkers (see Imaging biomarkers, diabetic retinopathy) nonproliferative (see Nonproliferative diabetic retinopathy (NPDR)) normal retina vs., 293f


Diabetic retinopathy (DR) (Continued) normal vision vs., 382f online fundus database, 297 optical coherence tomography classification, 358–359, 358f dictionary learning method, 364–370 imaging biomarkers, 353 optical coherence tomography angiography, 160–161, 270–274 pathology, 383f per patient analysis, 231 prevalence of, 59–61 proliferative (see Proliferative diabetic retinopathy (PDR)) retinal images with lesions, 384, 384f retinal layer thickness biomarkers, 357t saliency-oriented data-driven approach, 229–231, 237–238 screening, 286, 344 automated system design, 297–311, 298f, 345 benefits, 292–294 database collection, 296–297 MATLAB programming, 314 segmentation performance, 315, 317f severity levels, automatic classification of, 88–92 sign of, 291–292t stages of, 84t, 246, 314f state of the art, 224–227 symptoms of, 249 validation protocol, 233 Diabetic Retinopathy Conference: Segmentation and Grading Challenge, 67 Diabetic Retinopathy Risk Index, 89–90 Diabetic Retinopathy Severity Scale, 40 Diabetic Retinopathy Study (DRS), 40 Diaretdb1 database, 248 Dictionary learning (DL) method, diabetic retinopathy, 359 fundus, 362 classification, 362–363, 363f experimental results, 363, 364t learning step, 362

image modeling methods, 360f ocular image denoising/restoration, 370 optical coherence tomography, 369t experimental results, 369–370 Fisher discrimination dictionary learning, 367 low-rank shared dictionary learning, 367–368 separating particularity and commonality dictionary learning, 365–367 sparse representation-based classifiers, 364–365 Digital fundus images, 258 Digital Retinal Images for Vessel Extraction (DRIVE) database, 65, 258–260, 259f, 314, 381, 386–387, 406 Dilation process, 301–302, 395, 396f Discrete wavelet transform (DWT), 116, 119–120, 121f Distance-based methods, 262–263 DME. See Diabetic macular edema (DME) DR. See Diabetic retinopathy (DR) DR2 images, 234–236, 239–240 DRIVE database. See Digital Retinal Images for Vessel Extraction (DRIVE) database DRS. See Diabetic Retinopathy Study (DRS) Drusen detection system, 136–137 Dry macular degeneration (DMD), 135, 140f Dual-tree complex wavelet transform, 116 Dynamic shape features (DSF), 343, 350 Dynamic transformation method, 248 Dyslexia, 13, 174 E Early Treatment Diabetic Retinopathy Study (ETDRS), 40 Echo state neural network (ESNN), 116 EIARG database, 258–260, 259f Energy minimization method, 116 Engineering field, diabetic retinopathy, 294–296 Ensemble classifier-based technique, 248–249 Entropy-based method, 248 Erosion process, 301–302, 395, 396f ESNN. See Echo state neural network (ESNN)


Euclidean distance, 126t Expectation Maximization algorithm, 6–7 Exudates detection, 86–88, 350–351, 352f segmentation methods, 89t Exudative age-related macular degeneration (AMD), 322, 336 Eye anatomy, 287–288, 287f EyePACS test dataset, 225 F False negative (FN), 69 False positive (FP), 69 FAZ. See Foveal avascular zone (FAZ) Feature extraction, 27, 388, 390, 393–396, 404f diabetic retinopathy screening, 304f, 306 and diagnosis, 7–10 phase, 3 Feature selection, 248–249, 308t, 393–396 Feature vectors, diabetic retinopathy screening, 307 FFA. See Fundus fluorescein angiography (FFA) Field of view (FOV), 59 Fisher discrimination dictionary learning (FDDL), 367, 369–370 Fisher vector (FV) encoding, 230–231 Fluorescein angiography (FA), 136, 160 chronic branch retinal vein occlusion, 275–277, 276f nonproliferative diabetic retinopathy with macular edema, 271–273, 272f optical coherence tomography angiography vs., 160–161 type 3 neovascularization, 323–326, 325f FOV. See Field of view (FOV) Fovea, 35, 73–80, 287–288 detection, 79f Foveal avascular zone (FAZ), 1–3, 9–10, 12, 159, 163, 271–273, 275–277, 280, 358 Fovea segmentation methods, 81–82t FP. See False positive (FP) F1-score, 71 Fully connected layer, 33, 33f Fundus, 379 diabetic retinopathy

classification, 353–358 dictionary learning, 362–363, 363–364f, 364t imaging biomarkers, 344–353 Fundus camera, 34–35, 59, 60f, 65, 296 Fundus fluorescein angiography (FFA), 2, 136, 346–347, 347f micro aneurysms detection, 350, 351f optic disk segmentation approach, 349 Fundus photography, 34–40, 36f, 61, 122f, 136, 245, 379, 380f. See also Color fundus photography for age-related macular degeneration, 138–144 database of, 297 diabetic retinopathy, 296–297 diagnosis of retinal disorders, 384–385 two-dimensional view, 381 views, 380–381 Fundus retinal image, 59–61, 60f automatic retinal image analysis (ARIA), 67–80 future trends in, 96–97 glaucoma classification using automatic retinal image analysis, 92–96 history of, 61–65, 64f macular edema and DR classification, 80–92 public retinal image databases, 65–67, 68t Fuzzy c-mean, 388, 391–392, 401, 403f G Gabor filter-based feature extraction, 388 Gabor wavelet transform coefficients, 72 Gaussian filter, 390–391, 402f Genetic algorithm (GA), 258 Geographic atrophy (GA), 167 Glaucoma, 73–74, 382–383 classification using automatic retinal image analysis, 92–96 optical coherence tomography, 196–198 optical coherence tomography angiography, 164–166 severity levels, 94f Glaucoma detection methods, 96t Global classifier (GC) model, 366–367


Global schemes (GSs), 91 G-mean, 71 GoogLeNet, 88, 359 Graph-based approach, 304–306 Graph cut technique, 76, 387 Graph trace method, 298, 304 Gray scale value, 88 Gray threshold values, 297 Green channel, 247, 249–250, 388 Grid laser photocoagulation, 275–277 GSs. See Global schemes (GSs) Gunn’s sign, 37, 37f H Handcrafted technique, 224 Hard exudates, 39, 39f Hard thresholding, 96 Hemorrhages (HEM), 38, 38f, 80, 246, 255–256, 386 detection, 84–86, 351–353, 354f segmentation, 87t, 388–389 Hidden layer, of artificial neural network, 118 High-definition optical coherence tomography (SD-OCT), 191–192 Higher-order Markov-Gibbs random field (HO-MGRF), 6–7 Histogram equalization, 253, 388, 391, 401, 402–403f Histogram matching, 253 Histogram of oriented gradients (HOG), 359, 369 Histogram specification, 253 HO-MGRF. See Higher-order Markov-Gibbs random field (HO-MGRF) Hough transform, 349 Hybrid classifier, 388 Hyperglycemia, 285, 343 Hyperreflective foci (HRF), 330–332, 336 Hypertensive retinopathy, 39–40, 40t I Illumination equalization method, 247, 250–251 Image acquisition, 34–35, 34f Image classification, 27, 27–28f

Image enhancement, 299–301 Image labeling, 52t ImageNet dataset, 228 Image processing, 115, 350, 353 digital, 345 enhancement algorithm, 252–253 MATLAB, 294–295, 314 retinal, 253 transform-based, 361 Imaging biomarkers, diabetic retinopathy features, 345f fundus, 344–345 blood vessel extraction methods, 345–347 exudates detection, 350–351, 352f hemorrhages detection, 351–353, 354f micro aneurysms detection, 349–350, 351f optic disk segmentation approaches, 347–349, 349f preprocessing, 345 2D vessel segmentation, 346–347, 346f, 348f optical coherence tomography, 353, 354t, 355–356f Inception-Resnet CNN, 235 Inception-ResNet-v2, 228 Independent component analysis (ICA), 361–362 Indian Diabetic Retinopathy Image Dataset (IDRiD) database, 67, 234 Indocyanine green angiography (ICGA), 160, 324–326, 325f, 331–332 Inner limiting membrane (ILM), 196–198 Inner retinal pigment epithelium (IRPE), 205 Intermediate age-related macular degeneration (iAMD), 167 International Clinical Diabetic Retinopathy scale, 67 International Diabetes Federation, 383 Intraocular pressure (IOP), 61, 164 Intraretinal hemorrhage, 80 Intraretinal microvascular abnormalities (IRMA), 80–83, 290 IOP. See Intraocular pressure (IOP)


Iris, 245 IRMA. See Intraretinal microvascular abnormalities (IRMA)

Low-rank shared dictionary learning (LRSDL), 367–369 LS. See Local schemes (LS)

J Juvenile diabetes, 343

M Machine classifier method, 297 Machine learning-based methods, 72 Macula, 35, 73–80, 133–134, 136 Macular capillaries, 160 Macular edema (ME), 41t, 61, 80–92, 160–161 Macular ischemia (MI) detection, 298, 302, 312f approaches, 311 automated system for, 298, 298f support vector machine classifier, 311–313, 313f vessel centerline pixel position, 310–311, 310f Macular microaneurysm (Mas), 164 Macular telangiectasia (MacTel), 280, 281f Markov random field (MRF) image reconstruction method, 387 Match filtering, 349 Mathematical morphology (MM), 393–396 MATLAB programming, 122, 314 Matthews correlation coefficient (MCC), 71 Maximum difference method, 76 Maximum variance method, 76 Max pooling, 33–34 MCC. See Matthews correlation coefficient (MCC) Mean square error (MSE), 388 Median filtering, 388–389 Messidor database, 246–247 Messidor-2 dataset, 234–236, 239–240 Method of optimal directions (MOD), 361–362 Methods to evaluate segmentation and indexing techniques in the field of retinal ophthalmology (MESSIDOR) database, 66 Microaneurysms (MAs), 37–38, 38f, 80, 246–249, 255–256, 288–290, 383, 388 biomarkers for diabetic retinopathy, 349–350, 351f detection of, 84–86

K Kaggle/EyePACS dataset, 233, 235–236 Kalman filter, 137 Keith-Wegener-Barker scoring system, 39–40 K-means clustering, 362, 389 K-nearest neighbor (KNN) classifier algorithm, 72, 88–89, 95–96, 247–248, 298, 307–309 K-singular-value decomposition algorithm, 362 L Lagrange multipliers, 401 Leave-one out cross validation method, 53 LED. See Light-emitting diode (LED) Length filtering algorithm, 347 Lesion detection, using retina segmentation structure, 389–390, 390f feature extraction and selection, 393–396 fuzzy c-means, 391–393 image classification, 397 supervising classification method, 398 support vector machine classifier, 398–401, 399–400f unsupervised classification method, 398 preprocessing, 390–391 support vector machine classifier, 398–401, 399–400f Level set deformable model, 348 Light-emitting diode (LED), 120–121 Linear configuration pattern (LCP), 359, 369–370 Linear discriminate analysis classification method, 247 Local classifier (LC) model, 366–367 Local rotating cross-sectional profiles analysis, 350 Local schemes (LS), 91 Low-pass filter method, 76


Microaneurysms (MAs) (Continued) fundus fluorescein angiograms, 350, 351f segmentation methods, performance measures for, 87t Mild nonproliferative diabetic retinopathy (NPDR), 246, 288, 289f, 356, 357f Minimum distance discriminant (MDD) function, 388 Minimum mean square error (MMSE), 299–301 MLP. See Multilayer perceptron (MLP) Mobile phone-based diabetic retinopathy detection system Bayesian detection algorithm, 117 BP neural network, 115 causes of, 113 database of, 123–127, 127t echo state neural network (ESNN), 116 overview of, 113 probabilistic neural network (PNN), 116 proposed system, 117–123 types of, 113–114 Model training, 27 Moderate nonproliferative diabetic retinopathy (NPDR), 246, 288–290, 289f, 356, 357f Monocular indirect ophthalmoscopy, 379 Morlet wavelet transform, 386 Multiclass support vector machine (SVM) classifier, 315 Multilayer perceptron (MLP), 28 Multimodal imaging nascent type 3 neovascularization, 331–332, 331f type 3 neovascularization, 324–326, 330f Multiple correlation coefficients, 248 Multiscale correlation-based method, 248 Multi-scale Gaussian correlation filter (MSCF), 350 Multiscale morphological processing techniques, 352 N Naïve Bayes (NB), 6–7 Nascent type 3 neovascularization (NV), 333f. See also Type 3 neovascularization (NV)

multimodal imaging, 331–332, 331f optical coherence tomography angiography, 330–332, 331f treatment, 336 National Eye Institute, 246 n-class support vector machine (SVM) classifier, 313, 313f Neovascular age-related macular degeneration (nAMD), 321–322 reticular pseudodrusen, 323 treatment, 332–335 Neovascularization, 1, 80–83, 290–291 Neovascularization elsewhere (NVE), 161, 273–274 Neovascularization of the disc (NVD), 161, 273 Nerve fiber layer (NFL), 196–198 Neural network, 117 Neural network classifier-based method, 246–247 New vessels at the disc (NVD), 290–291, 291f New vessels at the elsewhere (NVE), 290–291, 291f Node value, 117 No-discrimination line, 69–70 Noise removal, median filtering, 388–389 Noise suppression method, 388 Non-arteritic anterior ischemic optic neuropathy (NAION), 168–169, 168f Nonclinically Significant Macular Edema (Non-CSME), 91–92 Non-data-adaptive methods, diabetic retinopathy, 361 Nonlinear transfer functions, 253 Nonproliferative diabetic retinopathy (NPDR), 114, 246, 285–286, 288, 294t, 356–358, 357f, 387. See also Diabetic retinopathy (DR) diabetic macular edema, 271–273, 272f mild, 246, 288, 289f, 356 moderate, 246, 288–290, 289f, 356 retinal images with lesions, 384, 384f severe, 246, 290, 357 Normalization, color, 253


Normal tension glaucoma, 383 NPDR. See Nonproliferative diabetic retinopathy (NPDR) O Oak Ridge National Laboratory, 293–294 Occult chorioretinal anastomosis, 321 OCT. See Optical coherence tomography (OCT) OD. See Optic disc (OD) oDocs nun, 97, 98f Offline system, 88–89 One-field photography, 296–297 ONH. See Optic nerve head (ONH) Open-angle glaucoma, 382–383 Opening operation, 302, 396, 397f Ophthalmoscope, 120–121 Optical camera, 42 Optical coherence tomography (OCT), 137, 191–194, 380f for age-related macular degeneration, 144–156, 204–205 anterior ischemic optical neuropathy, 199–200 artifacts, 207 automatic segmentation techniques, 206 central serous chorioretinopathy, 198–199 challenges, 206–207 computer-aided diagnosis systems, 206 cystoids macular edema, 203 diabetic macular edema, 201–203 diabetic retinopathy classification, 358–359, 358f dictionary learning method, 364–370 imaging biomarkers, 353 glaucoma, 196–198 normal healthy eye, 194–196 retina anatomy in, 193–194, 194f standard number of layers, 206 type 3 neovascularization, 323–324, 326 for typical normal person in macular region of retina, 192f weak layer boundaries, 206 Optical coherence tomography angiography (OCT-A), 159–160 age-related macular degeneration, 166–168

anterior ischemic optic neuropathy, 168–169 diabetic retinopathy, 161–164, 270–274 dyslexia, 13 vs. fluorescein angiography, 160–161 glaucoma, 164–166 limitations of, 173 macular telangiectasia, 280, 281f materials and methods of, 3–10, 4–5f nascent type 3 neovascularization, 330–332, 331f of normal eyes, 270 overview of, 1–3 prepapillary vascular loop, 280–282, 282f results of, 11–12, 11–12t retinal artery occlusion, 172–173, 278 branch retinal artery occlusion, 278, 278f central retinal artery occlusion, 279, 279–280f retinal vein occlusion, 169–172, 170f, 274 branch retinal vein occlusion, 275–277, 275–276f central retinal vein occlusion, 277, 277f spectral-domain technology, 270 swept-source, 270, 273–274, 274f type 3 neovascularization, 326–329, 327–329f Optic disc (OD), 61, 73–80, 247–248 detection, 253, 256f fundus fluorescein angiograms, 349 performance measures for, 81–82t segmentation, 347–349, 349f, 387, 389 Optic disc diameter (ODD), 73–74 Optic nerve disease, 161–173 Optic nerve head (ONH), 61, 73–80, 77–78f Optimal separating hyperplane, 312 Orthonormal matching pursuit (OMP), 362 P Particularity and commonality dictionary learning (COPAR) algorithm, 365–367, 369 Parzen window technique, 7–8 PCA. See Principal component analysis (PCA) PDR. See Proliferative diabetic retinopathy (PDR)


Peak-signal-to-noise ratio (PSNR), 388, 405, 405t Performance metrics, in automatic retinal image analysis, 69–71 Peripapillary capillary density (PCD), 164 Peripapillary choriocapillaris (PCC), 169 Photoreceptors, 287–288 Pigment epithelium detachment (PED), 205, 324–326 Pixel-based method, 246–247 PNN. See Probabilistic neural network (PNN) Point spread function (PSF), 299 Polypoidal choroidal vasculopathy (PCV), 167 Pooling layer, 32, 32f Prediction phase, 27 Prepapillary vascular loop, 280–282, 282f Preprocessing, 246–250, 255f, 345, 388 adaptive contrast equalization, 252 color normalization, 253 contrast limited adaptive histogram equalization, 252–253 histogram equalization, 253 histogram specification, 253 illumination equalization, 250–251 optic disc detection, 253, 256f Preproliferative diabetic retinopathy, 80–83 Primary open angle glaucoma (POAG), 164 Principal component analysis (PCA), 76, 361–362 Probabilistic neural network (PNN), 116, 136 Probability map, 86 Proliferative diabetic retinopathy (PDR), 114, 163, 285–286, 290–291, 290f, 292t, 344. See also Diabetic retinopathy (DR) fundus images, 356, 357f, 358 retinal images with lesions, 384, 384f swept-source optical coherence tomography angiography, 273–274, 273f Proposed system, 117–123 algorithm and flow chart, 122–123 artificial neural network (ANN), 117–119 discrete wavelet transform (DWT), 119–120 flowchart of, 124f ophthalmoscope, 120–121 software requirement and description, 122

Public retinal image databases, 65–67, 68t Pulse-coupled neural network (PCNN), 387 R Radial basis function (RBF), 11 Radial peripapillary capillaries (RPC), 160 Radiation-induced lung injury, 174 Radon transform (RT), 258, 260 Random forest (RF), 3, 10–12 Random oversampling mechanism, 51 Ranibizumab, 334–336 RBF. See Radial basis function (RBF) Receiver operating characteristic (ROC), 69–70, 70f, 246–247 Rectified linear unit (ReLU) layer, 31, 32f Red dots, 37–38 Red lesions, 84–85 Red reflex, 61–63 Region growing method, 247–248, 349, 388 Region of interest (ROI), 69, 80, 138, 141f Relative operating characteristic (ROC), 258 ReLU layer. See Rectified linear unit (ReLU) layer Research framework, 40–41 Resilient back propagation (RP), 116 Reticular pseudodrusen (RPD), 322–323 Retina, 245, 287–288, 383–385, 383f Retina disease classification model, 41f Retinal angiomatous proliferation (RAP), 321 lesions classification, 321 time-domain optical coherence tomography, 326 Retinal artery occlusion (RAO), 172–173, 278 branch, 278, 278f central, 279, 279–280f Retinal blood vessel caliber, 8–9 Retinal disease, 161–173 Retinal eye bleeding, 26 Retinal fundus image, 82–84f, 245–249 abnormality detection, 249 candidate region detection, 254–256 normal vs., 249, 249f preprocessing, 249–253, 255f spatial calibration, 250–251f channel separation, 249, 252f


preprocessed output for normal images, 253, 254f tortuous retinal vessel, 257–258, 257f, 261f bifurcation and crossover points, elimination of, 260–262, 265f databases, 258–259, 259f, 263–264f extraction of vascular skeleton, 260 radon transform, 260 skeletonization process, 260, 262f tortuosity measurement methods, 262–266 Retinal image, 25, 52f, 93f, 115f, 125f, 126t, 127f Euclidean distance of, 126t wavelet transform of, 125f Retinal ischemia, 163 Retinal microaneurysms, 37–38 Retinal pigment epithelium (RPE), 133, 136 type 3 neovascularization, 321–322, 326, 329–332, 336 Retinal vein occlusion (RVO), 169–172, 170f, 274 branch, 275–277, 275–276f central, 277, 277f Retinal Vessel Image set for Estimation of Widths (REVIEW) database, 66 Retinal vessel segmentation, 71–73, 74f, 75t, 386–387 Retinopathy Online Challenge (ROC), 66, 247–248 Retinovascular diseases, 1 REVIEW database. See Retinal Vessel Image set for Estimation of Widths (REVIEW) database RMSProp algorithm, 228 Robust screening methods, 383 ROC. See Receiver operating characteristic (ROC) Rods, 287–288 ROI. See Region of interest (ROI) Root-mean-square error (RMSE), 405, 405t RP. See Resilient back propagation (RP) S Saliency-oriented data-driven approach diabetic retinopathy, 229–231, 237–238 Fisher vector encoding, 230–231

integration, 231 patches extraction, 229–230, 229–230f Salus’s sign, 37 Scale-Invariant Feature Transform (SIFT), 92 Segmentation, of centerline pixel image, 305 Semiautomated computer program, 257–258 Sensitivity, 406 Severe nonproliferative diabetic retinopathy, 80–83, 246, 290, 357, 357f Singular-value decomposition (SVD), 362 Skeletonization process, 260, 262f Smartphone-based retinal imaging, 97f Smartphone indirect ophthalmology, principle of, 121f Smartphone Ophthalmoscopy Reliability Trial (SORT), 97 Sobel edge detection, 247–248 Soft exudates, 80–83 Softmax Layer, 33, 33f Soluble fms-like tyrosine (sFlt) kinase-1, 323 Sparse higher-order potentials (SHOPs), 205 Sparse representation-based classifiers (SRC), 350, 364–365 Specificity, 406 Spectral-domain optical coherence tomography (SD-OCT), 165, 191–192, 270 diabetic macular edema, 358, 358f nascent type 3 neovascularization, 330 type 3 neovascularization, 324t, 326 Speeded-up robust features (SURF) algorithm, 232 Split-spectrum amplitude-decorrelation angiography (SSADA) algorithm, 159 Square mask, 260 SSD. See Sum of squared difference (SSD) Stacked sparse autoencoder (SSAE), 86 Standard Diabetic Retinopathy Database Calibration level 0 (DIARETDB0), 66 Standard Diabetic Retinopathy Database Calibration level 1 (DIARETDB1), 66 Stride, 30–31 Structured Analysis of the Retina (STARE) database, 42, 66, 381, 386–387 Sum of squared difference (SSD), 77 Supervising classification method, 398


Support vector machine (SVM) classifier, 11, 116, 353, 359, 398–401, 399–400f, 404f fundus retinal image analyses, 72, 86–90, 95–96 macular ischemia detection, 298, 311–313, 313f, 315 Swept-source optical coherence tomography angiography (SS-OCT-A), 270, 273–274, 274f T Template disc method, 258 Template matching, 76 Thinning of blood vessels, 303 Thinning technique, 9–10 Thresholding approach, 297 Time-domain optical coherence tomography (TD-OCT), 191–192, 326 TN. See True negative (TN) Top hat filtering, 246–247, 349 Top-hat transformation, 302 Tortuosity measurement methods, 35, 35f, 262–266 Tortuous retinal vessel, retinal fundus image, 257–258, 257f, 261f bifurcation and crossover points, elimination of, 260–262, 265f databases, 258–259, 259f, 263–264f extraction of vascular skeleton, 260 radon transform, 260 skeletonization process, 260, 262f tortuosity measurement methods, 262–266 Tracking method, 297 Training phase, 27 Transform-based modeling, diabetic retinopathy, 361 True negative (TN), 69 True positive (TP), 69 True positive volume fraction (TPVF), 203 Two cross-validation techniques, 11 2D vessel segmentation, 347, 348f Two-level discrete wavelet decompositions, 120f Type 1 diabetes (T1D), 59–61, 285, 343 Type 2 diabetes (T2D), 59–61, 285, 343 Type 3 neovascularization (NV), 321–322. See also Nascent type 3 neovascularization (NV)

aggressive nature, 336 classification, 323–324 epidemiology, 322 fluorescein angiography, 324–326, 325f indocyanine green angiography, 324–326, 325f multimodal imaging, 324–326, 330f optical coherence tomography angiography, 326–329, 327–329f pathogenesis, 322–323 risk factors, 322 spectral domain optical coherence tomography, 324t treatment, 332–336 treatment-naïve, 325f, 326, 328–329, 328–329f, 335 vasogenic process, 323–324 U University of Auckland Diabetic Retinopathy (UoA-DR) database, 67 Unsupervised classification method, 398 UT Hamilton Eye Institute, 293–294 V Vascular abnormalities, 35–37 Vascular bifurcation points, 10b Vascular endothelial growth factor (VEGF), 59–61, 290 type 3 neovascularization, 323, 329, 332–334 Vascular skeleton extraction, 260 Venous beading, 80–83 Vessel concavity property, 73 Vessel density (VD), 159 Vessel detection methods, 345 Vessel tortuosity, 71 W Wavelets, 116, 119, 125f Wiener filtering, 299–301 Wet macular degeneration (WMD), 135–136, 141f Z Zoom-in-Net methodology, 225–226, 232