Biomedical Photonic Technologies 3527346562, 9783527346561

Biomedical Photonic Technologies: a state-of-the-art examination of biomedical photonic research, technologies, and applications


English Pages 303 [305] Year 2023


Table of contents:
Cover
Title Page
Copyright
Contents
Preface
Chapter 1 Advanced Wide‐Field Fluorescent Microscopy for Biomedicine
1.1 Introduction
1.2 Optical Sectioning by Structured Illumination
1.2.1 Optical Section in Wide‐Field Microscopy
1.2.2 Principle of Optical Section with Structured Illumination
1.2.3 Methods for Generating Structured Illumination
1.2.4 Optical Section Algorithms with Structured Illumination
1.2.4.1 Simple Reconstruction Algorithm
1.2.4.2 HiLo Reconstruction Algorithm
1.2.4.3 Hilbert–Huang Transform Reconstruction Algorithm
1.3 Super‐Resolution Imaging with Structured Illumination
1.3.1 Lateral Resolution in a Wide‐Field Microscope
1.3.2 Principle of Super‐Resolution SIM
1.3.3 SR‐SIM Setup Based on Laser Interference
1.3.4 Super‐Resolution Reconstruction for SIM
1.3.5 Typical Artifacts and Removal Methods
1.4 3D Imaging with Light Sheet Illumination
1.4.1 Principle and History
1.4.2 Light Sheet with Orthogonal Objectives
1.4.2.1 Light Sheet with Cylindrical Lens
1.4.2.2 Scanning Light Sheet
1.4.2.3 Multidirection Illumination and Imaging
1.4.3 Single‐Lens Light‐Sheet Microscopy
1.5 Summary
References
Chapter 2 Fluorescence Resonance Energy Transfer (FRET)
2.1 Fluorescence
2.1.1 Fluorescence Emission
2.1.2 Molar Extinction Coefficient
2.1.3 Quantum Yield
2.1.4 Absorption and Emission Spectra
2.2 Characteristics of Resonance Energy Transfer
2.3 Theory of Energy Transfer for a Donor–Acceptor Pair
2.4 Types of FRET Application
2.5 Common Fluorophores for FRET
2.5.1 Chemical Fluorescence Probes
2.5.2 Gene‐Encoded Fluorescent Proteins (FPs)
2.5.3 Quantum Dot (QD)
2.6 Effect of FRET on the Optical Properties of Donor and Acceptor
2.7 Qualitative FRET Analysis
2.8 Quantitative FRET Measurement
2.8.1 Issue of Quantitative FRET Measurement: Spectral Crosstalk
2.8.2 Lifetime Method
2.8.3 Complete Acceptor Photobleaching
2.8.4 Partial Acceptor Photobleaching (pbFRET)
2.8.5 B/C‐PbFRET Method
2.8.6 Binomial Distribution‐Based Quantitative FRET Measurement for Constructs with Multiple Acceptors by Partially Photobleaching Acceptor (Mb‐PbFRET)
2.8.7 3‐Cube‐Based E‐FRET
2.8.8 Quantitative FRET Measurement Based on Linear Spectral Unmixing of Emission Spectra (Em‐spFRET)
2.8.8.1 Lux‐FRET Method
2.8.8.2 SpRET Method
2.8.8.3 Iem‐spFRET Method
2.9 Conventional Instrument for FRET Measurement
2.9.1 Fluorescence Lifetime Detector
2.9.2 Widefield Microscope
2.9.3 Confocal Fluorescence Microscope
2.9.4 Fluorescence Spectrometer
2.10 Applications of FRET in Biomedicine
2.10.1 Protein–Protein Interactions
2.10.2 Activation and Degradation of Protein Kinases
2.10.3 Spatio‐Temporal Imaging of Intracellular Ion Concentration
References
Chapter 3 Optical Coherence Tomography Structural and Functional Imaging
3.1 Introduction
3.2 Principles of OCT
3.3 Performances of OCT
3.3.1 Resolution
3.3.2 Imaging Speed
3.3.3 Signal‐to‐Noise Ratio (SNR)
3.3.4 Imaging Range
3.3.5 Sensitivity Falloff Effects in FD‐OCT
3.4 Development of OCT Imaging
3.4.1 Large Imaging Range
3.4.2 High‐Imaging Speed
3.4.3 Functional OCT
3.5 OCT Angiography
3.5.1 OCTA Contrast Origins
3.5.2 SID‐OCTA Imaging Algorithm
3.6 OCTA Quantification
3.6.1 Morphological Quantification
3.6.2 Hemodynamic Quantification
3.7 Applications of OCT
3.7.1 Brain
3.7.2 Ocular
3.7.3 Skin
3.8 Conclusion
References
Chapter 4 Coherent Raman Scattering Microscopy and Biomedical Applications
4.1 Introduction
4.1.1 Spontaneous Raman Scattering
4.1.2 Coherent Raman Scattering
4.2 Coherent Anti‐Stokes Raman Scattering (CARS) Microscopy
4.2.1 Principles and Limitations
4.2.2 Endoscopic CARS
4.3 Stimulated Raman Scattering (SRS) Microscopy
4.3.1 Principles and Advantages
4.3.2 Hyperspectral SRS
4.3.3 High Speed SRS
4.4 Biomedical Applications of CRS Microscopy
4.4.1 Label‐Free Histology for Rapid Diagnosis
4.4.2 Raman Tagging and Imaging
4.5 Prospects and Challenges
References
Chapter 5 Fluorescence Imaging‐Guided Surgery
5.1 Introduction
5.2 Basics of Fluorescence Image‐Guided Surgery
5.3 Fluorescence Probes for Imaging‐Guided Surgery
5.4 Typical Fluorescence Imaging‐Guided Surgeries
5.4.1 Brain Tumor Resection
5.4.2 Open Surgeries for Cancer Resection in Other Organs
5.4.3 Laparoscopic/Endoscopic Surgeries
5.4.3.1 Cholecystectomy
5.4.3.2 Gastrectomy
5.4.3.3 Pulmonary Ground‐Glass Opacity in Thoracoscopic Wedge Resection
5.4.3.4 Head and Neck
5.4.4 Organ Transplant Surgery
5.4.5 Plastic Surgery
5.4.6 Orthopedic Surgery
5.4.7 Parathyroid Gland Identification
5.5 Limitations, Challenges, and Possible Solutions
References
Chapter 6 Enhanced Photodynamic Therapy
6.1 Introduction
6.2 Photosensitizers for Enhanced PDT
6.3 Light Sources for Enhanced PDT
6.3.1 Extended Penetration Depth
6.3.1.1 Lasers
6.3.1.2 Light‐Emitting Diodes
6.3.1.3 Self‐Excitation Light Sources
6.3.1.4 X‐Ray
6.3.1.5 Acoustic Waves
6.3.2 Optimized Scheme of Irradiation
6.4 Oxygen Supply for Enhanced PDT
6.4.1 Oxygen Replenishment
6.4.1.1 Oxygen Carriers
6.4.1.2 Oxygen Generators
6.4.2 Reduced Oxygen Consumption
6.4.2.1 Irradiation Scheme
6.4.2.2 Hypoxia‐Activated Approaches
6.4.2.3 Reduction of Oxygen Dependence
6.5 Synergistic Therapy for Enhanced PDT
6.5.1 Dual‐Modal Therapy
6.5.1.1 Surgery
6.5.1.2 Chemotherapy
6.5.1.3 Radiotherapy
6.5.1.4 Photothermal Therapy
6.5.1.5 Immunotherapy
6.5.1.6 Magnetic Hyperthermia Therapy
6.5.1.7 Sonodynamic Therapy
6.5.2 Triple/Multiple‐Modal Therapy
6.6 PDT Dosimetry
6.6.1 Explicit Dosimetry
6.6.1.1 Irradiation Light
6.6.1.2 PS Concentration
6.6.1.3 Tissue Oxygen Partial Pressure
6.6.2 Implicit Dosimetry
6.6.3 Biological Response
6.6.4 Direct Dosimetry
6.7 Clinical Applications
6.7.1 Tumor‐Targeting PDT
6.7.2 Vascular‐Targeted PDT
6.7.3 Microbial‐Targeting PDT
6.8 Future Perspective
Acknowledgments
References
Chapter 7 Optogenetics
7.1 Introduction
7.2 Introduction of Optogenetics
7.2.1 Find the Right Photosensitive Protein
7.2.2 The Opsin Gene Is Introduced into the Receptor Cell
7.2.3 Time and Space Control of Stimulation Light
7.2.4 Collect Output Signals and Read Results
7.3 The History and Development of Optogenetics
7.4 Photosensitive Protein
7.4.1 Introduction and Development of Photosensitive Protein
7.4.2 Types of Photosensitive Proteins
7.4.3 Improvement of Photosensitive Protein
7.4.3.1 Improvements to Excitatory Photosensitive Proteins
7.4.3.2 Improvements to Inhibitory Photosensitive Protein
7.4.4 Other Modifications of Photosensitive Proteins
7.4.5 Application of Photosensitive Protein
7.5 Precise Optogenetics
7.5.1 Single‐Photon Optogenetics
7.5.2 Multiphoton Optogenetics
7.5.2.1 Serial Scanning
7.5.2.2 Parallel Mode Lighting Method
7.6 Application and Development of Optogenetics
7.6.1 Application of Optogenetics in Gene Editing and Transcription
7.6.1.1 Genome Editing
7.6.1.2 Genome Transcription
7.6.2 Application of Optogenetics at the Cellular Level
7.6.2.1 Movement and Localization of Organelles
7.6.2.2 Regulating Cellular Pathways
7.6.3 The Application of Optogenetics in Animal Behavior Research
7.6.3.1 Animal Eating Behavior
7.6.4 Application of Optogenetics in Disease Treatment
7.6.4.1 Cardiac Electrophysiology
7.6.4.2 Epilepsy
7.6.4.3 Parkinson's Disease
7.7 Prospects and Outlook
7.7.1 Accurate Time Control
7.7.2 Precise Targeting
7.7.3 Precise Cell Subtype
7.7.4 Minimal Interference
References
Chapter 8 Optical Theranostics Based on Gold Nanoparticles
8.1 Thermoplasmonic Effects of AuNP
8.1.1 Overview of Thermoplasmonic Effects
8.1.2 Plasmonic Absorption of AuNP
8.1.3 Electron–Phonon Energy Transfer
8.1.4 Heat Diffusion and Interface Conductance
8.1.5 Bubble Formation Threshold
8.2 Gold Nanoparticles‐Mediated Optical Diagnosis
8.2.1 Gold Nanoparticles‐Mediated Diagnosis of Disease Markers
8.2.1.1 In Vitro Gold Nanoparticles‐Mediated Biomarker Diagnosis of Disease
8.2.1.2 In Vivo Gold Nanoparticles‐Mediated Diagnosis of Disease
8.2.2 Gold Nanoparticles‐Mediated Optical Bioimaging
8.2.2.1 Dark Field Imaging
8.2.2.2 Fluorescence and Luminescence Imaging
8.2.2.3 Photothermal and Photoacoustic Imaging
8.2.2.4 Surface‐Enhanced Raman (SERS) Imaging
8.2.2.5 Optical Coherence Tomography
8.2.2.6 Summary
8.3 Gold Nanoparticle‐Based Anticancer Applications
8.3.1 Photothermal Therapy (PTT)
8.3.1.1 Photothermal Conversion Efficiency
8.3.1.2 Targeting Strategy
8.3.2 Photothermal Therapy Combined with Other Treatments
8.4 Precise Manipulation of Molecules by Laser Gold Nanoparticles Heating
8.4.1 Precise Manipulation of Protein Activity
8.4.2 DNA Melting, Detection, and Selective Destruction
8.4.3 Gold Nanoparticle‐Based Photoporation
8.5 Gold Nanoparticles in Clinical Trials
References
Index
EULA


Biomedical Photonic Technologies

Biomedical Photonic Technologies Edited by Zhenxi Zhang, Shudong Jiang, and Buhong Li

Editors

Prof. Zhenxi Zhang
Xi’an Jiaotong University, Institute of Biomedical Photonics and Sensing, 28 Xianning Xi Road, 710049 Xi’an, China

Prof. Shudong Jiang
Dartmouth College, Thayer School of Engineering, 14 Engineering Drive, Hanover, NH 03755, USA

Prof. Buhong Li
Hainan University, School of Science, 58 Renmin Road, 570228 Haikou, China
[email protected]

All books published by WILEY-VCH are carefully produced. Nevertheless, authors, editors, and publisher do not warrant the information contained in these books, including this book, to be free of errors. Readers are advised to keep in mind that statements, data, illustrations, procedural details, or other items may inadvertently be inaccurate.

Library of Congress Card No.: applied for

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at .

© 2023 WILEY-VCH GmbH, Boschstraße 12, 69469 Weinheim, Germany

Cover Image: © GrAl/Shutterstock

All rights reserved (including those of translation into other languages). No part of this book may be reproduced in any form – by photoprinting, microfilm, or any other means – nor transmitted or translated into a machine language without written permission from the publishers. Registered names, trademarks, etc. used in this book, even when not specifically marked as such, are not to be considered unprotected by law.

Print ISBN: 978-3-527-34656-1
ePDF ISBN: 978-3-527-82353-6
ePub ISBN: 978-3-527-82354-3
oBook ISBN: 978-3-527-82355-0

Typesetting: Straive, Chennai, India


Preface

The development of laser technology has brought tremendous changes to optical science and technology, making transformative research achievements and application technologies all but inevitable. Biomedical photonics concerns the physical mechanisms by which lasers act in biomedical engineering, and it has been widely applied in cell micro/nano surgery, ophthalmic refractive surgery, tumor diagnosis and treatment, and other fields. In-depth study of these physical mechanisms is of great significance for advancing the related application technologies. This book presents the latest developments in biomedical photonics and provides an effective reference for further research, offering a fresh perspective, clear and well-reasoned methods, and systematic, rich content. The book consists of eight chapters, which introduce the theoretical basis of biomedical photonics in fields ranging from imaging to therapy. Drawing on the latest technological achievements and applications, and on the current state of research and development, it discusses in detail the imaging and therapeutic technologies built on the classical physical mechanisms of laser–tissue interaction, and systematically introduces the latest developments and applications of biomedical photonics. Starting from basic theory and moving to applications, the book summarizes current research hotspots, research ideas, applied technologies and methods, and the future development trends of biomedical photonics.
The book also includes the authors’ main research achievements in biomedical photonics over the past 20 years, describing the basic principles behind the formation, development, and derivation of biomedical photonics phenomena at multiple scales, while also introducing related technologies and applications in laser optics, physics, biology, thermal science, and nanomaterials. It embodies the character of interdisciplinary research and presents a new interdisciplinary field of great vitality. It can be used as a textbook in information science, physical science, and life science, as well as a reference for researchers in related fields of science and technology.


This book focuses on biomedical applications, and its main contents are as follows. In Chapter 1, the principles and biomedical applications of advanced wide-field fluorescence microscopy based on optical sectioning are summarized. In Chapter 2, the physical and molecular processes, quantitative measurement, and applications of fluorescence resonance energy transfer in living cells are systematically analyzed. In Chapter 3, the basic theory and the structural and functional imaging modes of optical coherence tomography are summarized. In Chapter 4, the basic principles of coherent Raman scattering microscopy and its recent imaging applications in histopathology and biological studies are introduced. In Chapter 5, the history and basics of fluorescence imaging-guided surgery are systematically described, and research on and applications of typical fluorescence imaging-guided surgeries are summarized. In Chapter 6, the latest developments and clinical applications of photodynamic therapy are reviewed, and its main challenges and prospects are discussed. In Chapter 7, research on opsins and on precise optogenetics is introduced, and the limitations and challenges of the technology are set out in light of its applications in neuroscience. In Chapter 8, the classical physical mechanisms of the thermoplasmonic effects of gold nanoparticles are described, together with gold-nanoparticle-enhanced optical imaging, detection, and phototherapy.

The authors of the chapters are:
Chapter 1: Prof. Hui Li and Dr. Chong Chen
Chapter 2: Prof. Tongsheng Chen
Chapter 3: Prof. Peng Li and Prof. Zhihua Ding
Chapter 4: Prof. Minbiao Ji
Chapter 5: Prof. Shudong Jiang
Chapter 6: Profs. Buhong Li and Li Lin
Chapter 7: Prof. Ke Si
Chapter 8: Prof. Cuiping Yao, Dr. Xiaoxuan Liang, Prof. Sijia Wang, and Prof. Jing Xin

The draft of this book was completed by Ph.D. candidate Ping Wang.
We sincerely hope that the publication of this book will help readers understand the development of biomedical photonics, especially in China.

16 June 2022

Zhenxi Zhang, Shudong Jiang and Buhong Li


1 Advanced Wide-Field Fluorescent Microscopy for Biomedicine

Chong Chen and Hui Li
Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China

1.1 Introduction

Life is assembled, from molecules, subcellular organelles, cells, tissues, and organs up to the whole organism, in remarkably varied ways. The assembly at each level has its own structure, dynamics, and functions, making for a complex and beautiful living world. To study such complex structures, as Nobel Prize winner Feynman said, “It is very easy to understand many of these fundamental biology questions: you just look at the things.” However, different imaging tools need to be developed for different purposes to look at the various biological objects, whose scales run from nanometers to centimeters. Among all these imaging tools, optical microscopy plays the most important role in inspecting the microscale biological world. Although optical microscopy was invented more than 300 years ago, the last 30 years have seen significant improvements in optical microscope techniques. These improvements fall mostly into two areas: sample labeling techniques and new imaging modalities. Image contrast is the first concern in optical imaging. To date, fluorescence imaging has provided the highest contrast, because the excitation light is filtered out. Organic dyes, quantum dots, and fluorescent proteins are the three most widely used fluorescent labeling agents. Fluorescent proteins, which won the Nobel Prize in Chemistry in 2008, provide a genetic route to labeling, so that fluorescence imaging of live cells, organelles, and even live animals becomes possible. High-end microscopes fall into two categories: wide-field microscopes and point-scanning microscopes (Figure 1.1). A wide-field microscope captures images with a camera and usually offers high speed and high photon efficiency. Typical examples are the TIRF microscope, the structured illumination microscope, and the single-molecule localization super-resolution microscope.
A point-scanning microscope forms images by rapidly scanning the excitation laser beam or the sample; it is usually slower but has higher axial sectioning capability. Typical examples include laser scanning confocal microscopes, two-photon microscopes, and STED super-resolution microscopes.


Figure 1.1 Comparison of (a) wide-field microscopy and (b) point-scanning microscopy. Source: Chong Chen.

This chapter introduces advances in wide-field fluorescence microscopy over the last ten years. We first introduce methods that improve optical sectioning and resolution by structured illumination, and then methods based on light-sheet illumination. The optical principle, the setup, and the image processing methods are introduced in each section. The chapter ends with an outlook on future developments.

1.2 Optical Sectioning by Structured Illumination

1.2.1 Optical Section in Wide-Field Microscopy

The optical section in microscopy defines its capability to resolve structure axially. In an epi-fluorescence microscope, the entire sample volume is illuminated, and all of the excited fluorescence collected by the objective reaches the array detector. Consequently, when part of the sample goes out of focus, its image becomes blurred, but its signal does not disappear. This is a significant hindrance in wide-field microscopy. In optical microscopy, the depth of focus is how far the sample plane can move while the specimen remains in focus. The numerical aperture of the objective lens is the main factor that determines the depth of focus (D):

D = (λ · n)/NA² + (n/(M · NA)) · e    (1.1)

where λ is the wavelength of the fluorescent light, n is the refractive index of the medium [usually air (1.000) or immersion oil (1.515)], and NA is the numerical aperture of the objective lens. The variable e is the smallest distance that can be resolved by a detector placed in the image plane of the microscope objective, whose lateral magnification is M. For a high-end fluorescence microscope with an NA 1.4, 100× magnification objective, the depth of focus is on the order of 500 to 700 nm, depending on the fluorescence wavelength. This depth of field sets the best optical section capability an epi-fluorescence microscope can achieve. However, when imaging samples thicker than the microscope’s depth of focus, the sample’s out-of-focus planes are also excited and form defocused images at the camera plane (Figure 1.2). The superimposition of these defocused images lowers the contrast of the captured image and, in practice, the axial resolution. Imperfections in the microscope’s optics and scattering of the fluorescence by the sample itself make the situation even worse. So the priority for improving the wide-field microscope’s performance lies in eliminating the out-of-focus signal, yielding better optical sectioning capability.

Figure 1.2 (a) Sketch of the depth of field. (b) The defocused signals blur the focused image. Source: Chong Chen.
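As a quick numerical check of Eq. (1.1), the sketch below evaluates the depth of focus for the NA 1.4, 100× objective discussed above. The emission wavelength (0.60 µm) and the detector-limited distance e (6.5 µm, a typical sCMOS pixel pitch) are illustrative assumptions, not values given in the chapter.

```python
def depth_of_focus(wavelength_um, n, na, magnification, e_um):
    """Total depth of focus of Eq. (1.1): D = lambda*n/NA^2 + n/(M*NA)*e.
    All lengths are in micrometers."""
    diffraction_term = wavelength_um * n / na**2        # wave-optical term
    geometric_term = n / (magnification * na) * e_um    # detector-limited term
    return diffraction_term + geometric_term

# NA 1.4 oil-immersion objective (n = 1.515) with 100x magnification:
d = depth_of_focus(wavelength_um=0.60, n=1.515, na=1.4,
                   magnification=100, e_um=6.5)
print(f"depth of focus ~ {d * 1000:.0f} nm")  # ~534 nm
```

The result lands inside the 500–700 nm range quoted above; the exact value shifts with the emission wavelength, which is why the text gives a range rather than a single number.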

1.2.2 Principle of Optical Section with Structured Illumination

In a laser scanning confocal microscope, the out-of-focus light is rejected using a pinhole. In a wide-field microscope, no pinhole can be used, since it relies on an array detector. One way to reject the out-of-focus signals is structured illumination. Using a grating or a digital mirror device (DMD), stripe patterns are projected onto the image plane, so that a structured illumination is created that excites the fluorescent molecules within the focal plane. The in-focus contrast of the stripes can be made very sharp if a proper grating period is used. Out of focus, the stripe pattern washes out to uniform illumination, which generates an unmodulated background. Therefore, the image formed by the microscope consists of striped in-focus features superposed on uniformly illuminated out-of-focus features. A post-processing algorithm can then reject this background, yielding better optical section capability. So, obtaining optical sectioning with structured illumination requires two ingredients: optical instrumentation to create the structured illumination, and an optical section reconstruction algorithm. These two aspects are discussed in the following sections.
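The reconstruction step can be made concrete with the classic three-phase scheme of Neil, Juškaitis, and Wilson: three raw images are taken with the stripes shifted by 2π/3 between exposures, and the root-mean-square of their pairwise differences cancels the unmodulated background while keeping the modulated in-focus signal. A minimal numpy sketch on synthetic 1D data follows; the image model and parameter values are illustrative assumptions, not data from the chapter.

```python
import numpy as np

def si_section(i1, i2, i3):
    """Optically sectioned image from three raw images taken with stripe
    phases 0, 2*pi/3, 4*pi/3: the unmodulated out-of-focus background
    cancels in the pairwise differences."""
    return np.sqrt((i1 - i2)**2 + (i2 - i3)**2 + (i3 - i1)**2)

# Synthetic demonstration: an in-focus plane modulated by stripes,
# plus a constant out-of-focus background that should be rejected.
x = np.linspace(0, 4 * np.pi, 256)
in_focus = 1.0 + 0.5 * np.cos(0.7 * x)   # arbitrary in-focus structure
background = 3.0                         # uniform defocused light
m = 0.8                                  # stripe modulation depth
raw = [in_focus * (1 + m * np.cos(x + k * 2 * np.pi / 3)) + background
       for k in range(3)]

sectioned = si_section(*raw)
# The output is proportional to the in-focus signal (factor 3*m/sqrt(2))
# and independent of the constant background:
assert np.allclose(sectioned, 3 * m / np.sqrt(2) * in_focus)
```

The same formula applied pixelwise to three camera frames is the “simple reconstruction algorithm” of Section 1.2.4.1.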

1.2.3 Methods for Generating Structured Illumination

For structured illumination, the excitation light field needs to be patterned, and the pattern needs to be shifted or rotated to capture all sample information. Several methods have been developed for this purpose. (1) The fluorescence grating imager inserts a grid into the field diaphragm plane of a fluorescence microscope (Figure 1.3). The shift of the grid projection can be achieved by translating the grid or by tilting a plane-parallel glass plate located directly behind it. This method requires very little modification to the microscope, so it is easy to implement and low in cost. Zeiss uses the technique in its ApoTome attachment for 3D imaging [1]. However, the imaging speed is limited by the mechanical movement of the grid or the glass plate; generally, about 10 frames per second can be obtained, which is not fast enough for some subcellular dynamics imaging. Another drawback is that the period of the grid is fixed, so the grid must be exchanged to use a different period.

Figure 1.3 Structured illumination with a grating for optical sectioning. Source: Chong Chen.

Figure 1.4 Structured illumination with a digital micromirror device. Source: Chong Chen.

(2) To avoid mechanical translation, a DMD was introduced to project fringe patterns onto the sample plane [2]. The DMD is a micro-electro-mechanical system (MEMS) consisting of a few hundred thousand tiny switchable mirrors with two stable states (e.g. −12° and +12°). When a micromirror is tilted at +12° toward the illumination, it is referred to as the "on" state; at −12°, it is in the "off" state. The mirrors are highly reflective and offer a high refresh rate and a broad spectral response (Figure 1.4). However, since the DMD is a reflective device and has to be positioned at 12° in the light path, the optical layout using a DMD is more complex than one using a grating [3–6]. (3) Structured illumination can also be realized using an LED array as the light source, as implemented by V. Poher et al. [7]. The microstructured light source is an InGaN LED consisting of 120 side-by-side, individually addressable microstripe elements. Each stripe of the device is 17 μm wide and 3600 μm long, with a center-to-center spacing between stripes of 34 μm, giving an overall diode structure size of 3.6 × 4.08 mm. A dedicated electrical driver allows arbitrary combinations of the stripes to be driven simultaneously to produce programmable line patterns (Figure 1.5). Using LED

Figure 1.5 Structured illumination with an LED array. (a) Schematic of structured illumination microscopy; (b) schematic of the LED array. Source: Chong Chen.

array for structured illumination does not require modification of the microscope setup, only a change of the light source. The whole system contains no moving parts, and the LED array can display up to 50 000 independent line patterns per second. However, the brightness of such LED arrays is still limited at this point. (4) Instead of a well-defined stripe pattern, the structured illumination can also be a random speckle pattern from a laser, as introduced by the J. Mertz group [8]. Speckle patterns are random, granular intensity patterns that exhibit inherently high contrast. Fluorescence images obtained with speckle illumination are therefore also granular; however, the contrast of the observed granularity provides a measure of how in focus the sample is: high observed contrast indicates that the sample is dominantly in focus, whereas low observed contrast indicates that it is dominantly out of focus. A diffuser randomizes the phase front of the laser beam, producing a speckle pattern that is projected into the sample via a microscope objective. To obtain a uniform-illumination reference, many random speckle patterns are applied within a single exposure of the camera, effectively simulating uniform illumination. Randomization of the speckle pattern is easily achieved by translating or rotating the diffuser (Figure 1.6). When the diffuser is static, the speckle illumination exhibits high contrast (top right panel). When the diffuser is rapidly

Figure 1.6 Structured illumination with random speckle patterns. (a) Schematic of speckle illumination microscopy; (b) a random speckle illumination image and a uniform wide-field image. Source: Chong Chen.

oscillated by a galvanometric motor, the resulting speckle becomes blurred over the course of the camera exposure, effectively simulating uniform illumination (bottom right panel). The observed speckle contrast thus serves as a weighting function, indicating the in-focus to out-of-focus ratio in a fluorescence image. While effective, this technique proved to be slow, since several images were required to obtain an accurate estimate of the speckle contrast. A later implementation evaluated the speckle contrast in space rather than in time, using a single image [9].
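The spatial speckle-contrast weighting described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the published implementation; the window size and the uniform-filter contrast estimator are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_speckle_contrast(img, win=7):
    """Local speckle contrast C = sigma/mean over win x win windows.

    High C marks in-focus regions under static speckle illumination;
    low C marks out-of-focus (blurred) regions.
    """
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    var = np.clip(mean_sq - mean * mean, 0, None)   # guard tiny negatives
    return np.sqrt(var) / np.maximum(mean, 1e-12)

# demo: fully developed speckle vs. a blurred, "out-of-focus-like" version
rng = np.random.default_rng(1)
speckle = rng.exponential(1.0, (128, 128))   # fully developed speckle, C ~ 1
blurred = uniform_filter(speckle, 9)         # blurring lowers the contrast
c_sharp = local_speckle_contrast(speckle).mean()
c_blur = local_speckle_contrast(blurred).mean()
```

The two mean contrasts differ by roughly an order of magnitude, which is exactly the discrimination the weighting function exploits.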

1.2.4 Optical Sectioning Algorithms with Structured Illumination

In optical sectioning with structured illumination, multiple images are captured for each layer with the same stripe period but different phases. A reconstruction algorithm is then used to calculate a sectioned image from the multiple raw images. To date, several algorithms with different performance characteristics have been developed.

1.2.4.1 Simple Reconstruction Algorithm

The optical system consists simply of an illumination mask S(t0, w0), which is imaged onto an object of amplitude transmittance or reflectance 𝜏(t1, w1); the final image is recorded by a CCD camera in the image plane (t, w) [10, 11]. The mask is
illuminated incoherently, which permits us to write the image intensity as

I(t, w) = ∬ S(t0, w0) |∬ h1(t0 + t1, w0 + w1) 𝜏(t1, w1) h2(t1 + t, w1 + w) dt1 dw1|² dt0 dw0   (1.2)

where h1 and h2 represent the amplitude point-spread functions (PSFs) of the illumination and detection paths, respectively. The optical coordinates (t, w) are related to the real coordinates (x, y) through (t, w) = (2𝜋/𝜆)(x, y)·n sin 𝛼, where n sin 𝛼 is the numerical aperture (NA) and 𝜆 denotes the wavelength. The illumination mask takes the form of a one-dimensional grid and can be written for simplicity as

S(t0, w0) = 1 + m·cos(ṽt0 + 𝜙0)   (1.3)

where m denotes the modulation depth and 𝜙0 is an arbitrary spatial phase. The normalized spatial frequency ṽ is related to the actual spatial frequency 𝜈 through ṽ = 𝛽·𝜆·𝜈/NA, where 𝛽 denotes the magnification between the grid plane and the specimen plane. Substituting Eq. (1.3) into Eq. (1.2) gives

I(t, w) = I0 + Ic cos 𝜙0 + Is sin 𝜙0   (1.4)

where I0 represents a conventional wide-field image, and Ic and Is represent the images that are due to masks of the forms m·cos(ṽt0) and m·sin(ṽt0), respectively. These definitions suggest that if we are able to form Ip = (Ic² + Is²)^(1/2), we would remove the grid pattern from the image of the specimen. This result can be readily achieved by taking three images, I1, I2, and I3, corresponding to the relative spatial phases 𝜙0 = 0, 2𝜋/3, and 4𝜋/3, respectively. So,

Ip = [(I1 − I2)² + (I1 − I3)² + (I2 − I3)²]^(1/2)   (1.5)

which is analogous to square-law detection in communications systems. Alternatively, Ip can be formed as

Ip = |I1 + I2 exp(j·2𝜋/3) + I3 exp(j·4𝜋/3)|   (1.6)

which is the equivalent of homodyne detection. We note that the conventional image, I0, can be recovered from I1 + I2 + I3.

1.2.4.2 HiLo Reconstruction Algorithm

HiLo microscopy is based on the acquisition of two images with different types of illumination in order to obtain one optically sectioned image (Figure 1.7) [12, 13]. A uniform-illumination image is used to obtain the high-frequency (Hi) components of the image, and a nonuniform-illumination image is used to obtain the low-frequency (Lo) components. The corresponding intensity distributions of the uniform- and structured-illumination images are denoted as Iu (⃗r ) and
In (⃗r ), respectively. The intensity distributions of the high- and low-frequency images are referred to as IHi (⃗r ) and ILo (⃗r ), with the two-dimensional spatial coordinates ⃗r . The resulting full-frequency optically sectioned image is then obtained by

IHiLo (⃗r ) = IHi (⃗r ) + 𝜂 ILo (⃗r )   (1.7)

with 𝜂 being a scaling factor that depends on the experimental configuration of the setup. In order to obtain the high-frequency in-focus components, a typical characteristic of the optical transfer function (OTF) of a standard wide-field microscope is exploited: high-frequency components are only well resolved as long as they are in focus, while low-frequency components remain visible even when they are out of focus. Hence, the high-frequency components are directly extracted from Iu (⃗r ) using

IHi (⃗r ) = HP{Iu (⃗r )}   (1.8)

whereby HP denotes a Gaussian high-pass filter with cutoff frequency kc applied in the frequency domain. The low-frequency component of the image is obtained by calculating

ILo (⃗r ) = LP{CS (⃗r ) Iu (⃗r )}   (1.9)

with the complementary low-pass filter LP. The speckle contrast CS (⃗r ) acts as a weighting function for extracting the in-focus contributions and rejecting the out-of-focus contributions of the uniform-illumination image Iu (⃗r ). The overall spatial contrast Cn (⃗r ) is influenced by the speckles in the illumination as well as by sample-induced speckles. In order to correct for the influence of the sample-induced speckles, the difference image

I𝛿 (⃗r ) = In (⃗r ) − Iu (⃗r )   (1.10)

is used for the speckle contrast calculation. The speckle contrast is defined as

CS (⃗r ) = ⟨𝜎𝛿 (⃗r )⟩A / ⟨In (⃗r )⟩A   (1.11)

where ⟨In (⃗r )⟩A and ⟨𝜎𝛿 (⃗r )⟩A are the local mean of In (⃗r ) and the local standard deviation of I𝛿 , respectively. The speckle contrast is calculated over a partition of local evaluation areas A; it is assumed that each area is large enough to encompass several imaged speckle grains. The axial resolution is further increased by applying the band-pass filter

W(k⃗) = exp(−|k⃗|²/(2𝜎w²)) − exp(−|k⃗|²/𝜎w²)   (1.12)

to the difference image I𝛿 before evaluating ⟨𝜎𝛿 (⃗r )⟩A . As a result, the optical sectioning depth of the Lo component can be adjusted by tuning 𝜎w . In order to also adjust the optical sectioning depth of the Hi component, the cutoff frequency of the Gaussian high-pass filter is tuned by setting kc = 0.18𝜎w . Since the high- and low-frequency components of the image are now determined, the resulting optically sectioned HiLo image is eventually obtained using Eq. (1.7). Because

Figure 1.7 Algorithm for HiLo reconstruction. (a) The flow chart of the HiLo algorithm; (b) a reconstruction of a pumpkin stem section. Is-1, Is-2: structured illumination images; WF: wide-field image; HP: high-pass filtered image; LP: low-pass filtered image. Source: Chong Chen.

the measurement rate equals half of the camera frame rate, the HiLo technique provides a powerful method for fast two-dimensional optically sectioned image acquisition.

1.2.4.3 Hilbert–Huang Transform Reconstruction Algorithm

Xing Zhou et al. proposed a one-dimensional (1-D) sequence Hilbert transform (SHT) algorithm to decode the in-focus information (Figure 1.8) [14]. A key step of structured illumination microscopy (SIM) is to project a sinusoidal fringe onto the specimen of interest. The captured structured images can then be decomposed into in-focus and out-of-focus components:

Icap(x, y) = In(x, y) + Im(x, y) · sin(2𝜋𝜈x + 𝜑)   (1.13)

where In is the out-of-focus background, Im is the in-focus information, and 𝜈 and 𝜑 are the spatial frequency and initial phase of the projected sinusoidal fringe, respectively. Because the intensity of the out-of-focus background In remains constant,
we can subtract two phase-shifted raw images to eliminate the background. Thus, we obtain an input image Is with sinusoidal amplitude modulation, described by Eq. (1.14):

Is(x, y) = Im(x, y) · cos(2𝜋𝜈x + 𝜑s)   (1.14)

where 𝜑s is the mean value of the two arbitrary initial phases of the two raw images. The next step is to demodulate the sinusoidal amplitude and obtain the in-focus information Im from Eq. (1.14). In principle, Im can be solved for directly from Eq. (1.14); however, the result is then extremely sensitive to the precision of 𝜑s . Thus, Is must be decoded in another way to eliminate the residual sinusoidal pattern. Here, the Hilbert transform (HT) is used to construct a complex analytical signal IA of the form:

IA(x, y) = Is(x, y) + i · ISH(x, y)   (1.15)

where i is the imaginary unit and the imaginary part ISH of IA is the HT of the input pattern Is . In optical interferometry, interferograms may contain complex structures with different frequency components, so the decoding process must be based on the 2-D HT. In SIM, however, the projection fringe contains only a single spatial frequency in one orientation (either the x or the y direction), so the 2-D image Is can be treated as a sequence of 1-D sinusoidal amplitude-modulated signals, and a 1-D signal-processing algorithm can be used for the 2-D image demodulation. In 1-D signal analysis, the HT is a powerful demodulation tool. Based on the properties of the HT, the HT of a cosine-modulated function is a sine-modulated function:

HTx{b(x)·cos x} = 1/2 [HTx{b(x)·e^(ix)} + HTx{b(x)·e^(−ix)}] = 1/2 [−i·b(x)·e^(ix) + i·b(x)·e^(−ix)] = b(x)·sin x   (1.16)

where HTx denotes the HT operation in the x direction. Applying the 1-D Hilbert transform to Is , we obtain the analytical signal:

IA(x, y) = Is(x, y) + i · HTx{Is(x, y)} = Im(x, y) cos(2𝜋𝜈x + 𝜑s) + i · Im(x, y) sin(2𝜋𝜈x + 𝜑s)   (1.17)

Finally, the optically sectioned image Im can be obtained by

Im(x, y) = |IA(x, y)|   (1.18)
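The whole SHT pipeline of Eqs. (1.14)–(1.18) fits in a few lines. The sketch below (with a synthetic fringe image and assumed parameters) uses `scipy.signal.hilbert`, which returns the analytic signal of Eq. (1.15) directly:

```python
import numpy as np
from scipy.signal import hilbert

def sht_section(raw1, raw2):
    """Sequence-Hilbert-transform optical sectioning (Eqs. 1.14-1.18).

    raw1, raw2: raw structured-illumination images whose fringe phases
    differ, so subtraction removes the constant background (Eq. 1.14).
    """
    i_s = raw1 - raw2                  # background-free modulated input
    i_a = hilbert(i_s, axis=-1)        # analytic signal, 1-D HT along x (Eq. 1.17)
    return np.abs(i_a)                 # envelope = in-focus image (Eq. 1.18)

# synthetic demo: a slowly varying in-focus object under a fringe of
# assumed frequency nu = 0.125 cycles/pixel, plus a flat background
x = np.arange(256)
obj = 0.5 + 0.4 * np.exp(-(((x - 128) / 40.0) ** 2))
obj = np.tile(obj, (64, 1))
nu, bg = 0.125, 1.0
raw1 = bg + obj * np.sin(2 * np.pi * nu * x)           # phase 0
raw2 = bg + obj * np.sin(2 * np.pi * nu * x + np.pi)   # phase pi
sectioned = sht_section(raw1, raw2)                     # ~ 2 * obj
```

Because the envelope obj varies slowly compared with the fringe period, the recovered image matches 2·obj to within a few percent away from the edges.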

1.3 Super-Resolution Imaging with Structured Illumination

1.3.1 Lateral Resolution in a Wide-Field Microscope

Lateral resolution receives more attention in optical microscopy since it directly determines the finest structures that can be resolved. The lateral resolution of the optical system and the pixel size of the camera are the two factors that need to be

Figure 1.8 Optical sectioning algorithms with the Hilbert–Huang transform. Source: Chong Chen.

considered first when choosing the magnification of the optical system. According to the Nyquist sampling rule, the pixel resolution (pixel size divided by the magnification) should be no larger than half of the feature size to be observed. However, in optical microscopy, the lateral resolution is determined not only by the pixel resolution but also by the PSF. Due to the wave nature of light, collimated light focused by an optical system does not converge to an infinitesimal point but to a diffraction-limited spot. Similarly, a point-like object is imaged by an optical system as a blurry spot of finite size. The intensity distribution of this spot is defined as the PSF. The lateral PSF in 2-D can be described as an Airy pattern or, in general, approximated by a Gaussian function. The PSF depends on the light wavelength and the numerical aperture of the imaging objective (Figure 1.9). Two point-like objects next to each other will be imaged as the superposition of two corresponding PSFs. When the two point-like objects are too close to each other, they cannot be resolved as separate points. The lateral resolution of an optical system can thus be defined as the minimum resolvable distance between two point-like objects in the image, known as the Rayleigh criterion:

dmin = 0.61𝜆/NA   (1.19)

Another way to define the resolution is by utilizing the OTF. The OTF is the Fourier transform of the PSF. The resolution of a light microscope is determined by

Figure 1.9 (a) Vertical sections through the 3D PSFs and resulting definition. The contrast has been greatly enhanced to show the weak side lobes (which are the rings of the Airy disk, viewed edge on). (b) Schematic of horizontal resolution definition. (c) Schematic diagram of axial resolution definition. Source: Chong Chen.

the cutoff frequency of the OTF:

fcut−off = 1/dmin = 2NA/𝜆   (1.20)

Only those spatial frequencies of the object that lie inside the support of the OTF, i.e. smaller than the cutoff frequency, are detectable. In the last twenty years, several super-resolution techniques have been developed to break the diffraction limit and have been widely used in many biological applications. In the following, structured illumination microscopy is introduced as a typical wide-field super-resolution technique that is particularly suitable for live-cell dynamic imaging.
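For concreteness, plugging an assumed emission wavelength of 520 nm and an NA of 1.4 into Eqs. (1.19) and (1.20) gives the following numbers (note that the smallest period passed by the OTF, 𝜆/2NA, is slightly smaller than the Rayleigh distance):

```python
# Numeric check of Eqs. (1.19) and (1.20); the wavelength is an assumed example.
wavelength_nm = 520.0          # emission wavelength
NA = 1.4

d_min_nm = 0.61 * wavelength_nm / NA        # Rayleigh limit, Eq. (1.19)
f_cutoff = 2 * NA / wavelength_nm           # OTF cutoff in cycles/nm, Eq. (1.20)

print(round(d_min_nm, 1))      # 226.6 nm
print(round(1 / f_cutoff, 1))  # 185.7 nm: smallest period passed by the OTF
```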

1.3.2 Principle of Super-Resolution SIM

In 2000, M. G. Gustafsson proposed a wide-field imaging technique with resolution beyond the diffraction limit using structured illumination [15]. The principle is to project a spatially patterned illumination onto the sample such that Moiré fringes are created. The recorded Moiré fringes of the fluorescent image contain both the frequency of the illumination structure and the spatial frequencies of the sample.
Figure 1.10 Principle of the super-resolution SIM. (a) Schematic of structured illumination pattern; (b) 2D OTF of SIM with excitation pattern in only one orientation; (c) 2D OTF of SIM with excitation pattern in three orientations. Source: Chong Chen.

High-spatial-frequency information of the sample, which was outside the passband of the microscope's OTF, is now downmodulated into the passband of the OTF. By acquiring several images, a super-resolved SIM image can be computationally reconstructed (Figure 1.10). In order to achieve isotropic resolution improvement, nine images are usually acquired: three orientation angles with three phases for each orientation. In the last twenty years, the SR-SIM technique has been greatly studied and improved and has become the most popular super-resolution technique for live-cell dynamic imaging. With linear excitation, linear SIM can achieve a twofold resolution improvement [16]. Nonlinear SIM has no theoretical resolution limit and has experimentally attained better than 40 nm resolution with saturated excitation [17] or sequential activation and excitation [18]. Although structured illumination is used for both optical sectioning and super-resolution, the principles, optical setups, and image reconstruction algorithms are quite different. The structured illumination in OS-SIM is generally formed by projecting a fringe pattern onto the microscope field of view (FOV) using a noncoherent light source, while the structured illumination for SR-SIM is typically formed by laser interference because of the finer pattern period required.
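The downmodulation at the heart of SR-SIM can be illustrated numerically: multiplying a sample frequency by an illumination pattern produces a Moiré component at the difference frequency, which can fall inside the OTF passband even when the sample frequency itself does not. A 1-D sketch with assumed frequencies:

```python
import numpy as np

n = 1024
x = np.arange(n)
f_obj, f_ill = 0.3125, 0.25       # object and pattern frequencies, cycles/pixel (assumed)

sample = 1 + np.cos(2 * np.pi * f_obj * x)    # fine sample structure
pattern = 1 + np.cos(2 * np.pi * f_ill * x)   # structured illumination
emission = sample * pattern                    # fluorescence ~ sample x excitation

spec = np.abs(np.fft.rfft(emission))
freqs = np.fft.rfftfreq(n)

# The Moire mixing product appears at |f_obj - f_ill| = 0.0625 cycles/pixel,
# far below either original frequency.
band = (freqs > 0.01) & (freqs < 0.2)
moire_peak = freqs[band][np.argmax(spec[band])]
```

Shifting the pattern phase, as in the three-phase acquisition above, changes the complex weight of this mixing product, which is what allows the reconstruction to separate and restore the high-frequency content.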

1.3.3 SR-SIM Setup Based on Laser Interference

Many early SIM systems that use a mechanically moving physical grating to generate the spatially patterned illumination cannot guarantee a precise shift of the

Figure 1.11 The layout of the 2D-SIM setup based on the proposed polarization optimization method. (a) Schematic of interference structured illumination microscopy; (b) polarization of the three orientations; (c) schematic of the polarization control method. Source: Chong Chen.

illumination pattern during image acquisition [16, 19]. The spatial light modulator (SLM)-based SIM system arose from the need for fast illumination pattern switching and an accurate pattern shift for video-rate observation of living cells [20, 21]. A ferroelectric liquid-crystal-on-silicon (LCoS) display used as an SLM has two stable crystal axes driven by an applied voltage. Due to the pixelated structure of the display, additional unwanted diffraction orders are created and lead to a jagged edge in the illumination pattern in the sample plane. In SIM, a Fourier filter is used to block those unwanted diffraction orders. In the two-beam SIM system, the zeroth diffraction order is blocked (Figure 1.11); only the ±1st diffraction orders pass through the objective and form an interference pattern in the sample plane. This leads to the same interference pattern in the sample plane for both the grating pattern and its inverse image displayed on the SLM. Fast switching up to several kHz is offered by ferroelectric SLMs; however, low diffraction efficiency is a major drawback. In addition, the SLM has to display the inverse image of the previous grating pattern to prevent damage to the LCoS. Nevertheless, as the required illumination intensity for acquiring raw SIM images is relatively low, it is preferable to use a ferroelectric SLM to modulate the illumination light.


The simplified sketch of the first prototype of the fastSIM setup is shown in Figure 1.11. An acousto-optical tunable filter (AOTF) right behind the CW laser ensures precise and fast switching of the illumination light. An illumination grating pattern is generated by an SLM placed in the intermediate image plane and is projected onto the sample. In order to achieve high contrast of the illumination grating in the sample plane, a liquid crystal variable retarder (LCVR) and a customized azimuthally patterned polarizer (pizza polarizer) are placed after the SLM to achieve azimuthal polarization. A passive mask serving as a Fourier filter blocks all unwanted diffraction orders except the ±1st orders. Polarization modulation is the most important part of the SIM optical path, and in addition to the above methods, more optimized schemes have been developed in recent years. In one such scheme, a polarization beam splitter (PBS) is used to shorten the optical path. In order to modulate the polarization of the ±1st-order diffraction beams in three different orientations, a combinatorial waveplate composed of six fan-shaped achromatic half-wave plates (the pizza HWP) is employed. The three orientations of the illumination light are defined as D1, D2, and D3, respectively. The fast axes of the two opposite HWP segments for D1 are set parallel to the electrical vibration direction of the incident beam, while the fast axes of the remaining four segments for D2 and D3 are rotated by +30° and −30°, respectively. The polarization of the ±1st-order diffraction beams is accordingly rotated by 0° and ±60° in the three orientations. After pizza-HWP modulation, an LCVR is utilized to compensate for the additional phase introduced by the dichroic mirror (DM). The fast axis of the LCVR is set parallel to the electrical vibration direction of the incident beam, as shown in Figure 1.11.

1.3.4 Super-Resolution Reconstruction for SIM

Linear super-resolution structured illumination microscopy (SR-SIM) is a wide-field imaging method that doubles the spatial resolution of fluorescence images [22]. The final SR image is reconstructed by postprocessing algorithms. Here, we first briefly introduce the principle and implementation of the conventional Wiener SIM reconstruction algorithm (hereinafter referred to as "Wiener-SIM"). We start by considering two-dimensional (2D) SIM raw data of the form

D𝜃,n(r) ≅ {Sin(r) · [1 + m𝜃 cos(2𝜋k𝜃·r + 𝜑𝜃 + 2𝜋·(n − 1)/3)]} ⊗ PSF(r) + Sout(r) + N(r)   (1.21)

where the subscripts 𝜃 and n are the orientation and phase indices of the illumination pattern (𝜃 = 1, 2, 3; n = 1, 2, 3); r denotes the image spatial coordinates; Sin(r) is the sample at the objective focal plane; m𝜃 , k𝜃 , and 𝜑𝜃 are the modulation depth, spatial frequency, and initial phase of the illumination pattern in the raw data, respectively; the symbol ⊗ denotes convolution; PSF(r) is the PSF of the microscope; Sout(r) is the out-of-focus background fluorescence signal; and N(r) is noise.
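Eq. (1.21) is straightforward to simulate. The sketch below generates a noise-free raw stack with a Gaussian PSF and assumed pattern parameters; it is an illustration of the forward model, not the chapter's implementation:

```python
import numpy as np

def sim_raw_stack(sample, k_vecs, m=0.8, phi0=0.0, sigma_psf=2.0):
    """Generate noise-free SIM raw images D_{theta,n} following Eq. (1.21)
    (no out-of-focus term). sample: 2D array; k_vecs: one (kx, ky) pattern
    frequency per orientation, in cycles/pixel. All parameters are assumed."""
    h, w = sample.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Gaussian PSF applied via its analytic OTF in the frequency domain
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    otf = np.exp(-2.0 * (np.pi * sigma_psf) ** 2 * (fx ** 2 + fy ** 2))
    stack = []
    for kx, ky in k_vecs:
        for n in range(3):
            phase = phi0 + 2 * np.pi * n / 3
            illum = 1 + m * np.cos(2 * np.pi * (kx * xx + ky * yy) + phase)
            stack.append(np.fft.ifft2(np.fft.fft2(sample * illum) * otf).real)
    return np.array(stack)

rng = np.random.default_rng(2)
sample = rng.random((64, 64))
k_vecs = [(0.2, 0.0), (0.1, 0.17), (-0.1, 0.17)]   # three pattern orientations
raw = sim_raw_stack(sample, k_vecs)                # nine raw images
```

A useful sanity check: summing the three phases of one orientation cancels the cosine exactly, so the stack mean equals the sample mean.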


The spectrum of the SIM raw data is obtained in the frequency domain by applying the Fourier transform to Eq. (1.21):

D̃𝜃,n(k) ≅ [(m𝜃/2)·Sin(k + k𝜃)·e^(−j(𝜑𝜃 + 2𝜋(n−1)/3)) + Sin(k) + (m𝜃/2)·Sin(k − k𝜃)·e^(j(𝜑𝜃 + 2𝜋(n−1)/3))] · H(k) + Sout(k) + N(k)   (1.22)

where H(k) is the OTF of the microscope, i.e. the Fourier transform of PSF(r); Sin(k) is the spectral component of the sample at the focal plane; Sin(k ± k𝜃) are the spectral components that contain the otherwise unresolvable high-frequency signals; and Sout(k) and N(k) are the spectra of the out-of-focus background and the noise, respectively. For 2D-SIM, raw data at three different phases are collected in the same orientation, and all the spectral components in Eq. (1.22) can be separated by solving

[S𝜃,0(k), S𝜃,+1(k + k𝜃), S𝜃,−1(k − k𝜃)]ᵀ = M𝜃⁻¹ · [D̃𝜃,1(k), D̃𝜃,2(k), D̃𝜃,3(k)]ᵀ, where the nth row of the 3×3 mixing matrix M𝜃 is [1, e^(−j(𝜑𝜃 + 2𝜋(n−1)/3)), e^(j(𝜑𝜃 + 2𝜋(n−1)/3))]   (1.23)

To accurately extract the separated components in Eq. (1.23) and shift them to their correct positions, the illumination pattern parameters m𝜃 , k𝜃 , and 𝜑𝜃 must be accurately determined from the raw data. Typically, these reconstruction parameters are estimated by cross-correlation methods; in this chapter, we developed a robust parameter estimation method combining normalized cross-correlation and a spectrum notch for determining the correct reconstruction parameters. The separated components are then shifted back to their correct positions with sub-pixel precision:

C𝜃,0(k) ≅ S′𝜃,0(k)
C𝜃,+1(k + k𝜃) ≅ FFT{e^(j(𝜑𝜃 + 2𝜋/3)) · FFT⁻¹[S′𝜃,+1(k + k𝜃)](r)}(k)
C𝜃,−1(k − k𝜃) ≅ FFT{e^(−j(𝜑𝜃 + 4𝜋/3)) · FFT⁻¹[S′𝜃,−1(k − k𝜃)](r)}(k)   (1.24)

where C𝜃,0(k), C𝜃,+1(k + k𝜃), and C𝜃,−1(k − k𝜃) are the shifted spectral components. Thus, the reconstructed spectrum of SR-SIM can be obtained by the traditional generalized Wiener filtering deconvolution

SSR(k) = [Σ𝜃,L C𝜃,L(k + L·k𝜃) · H*𝜃,L(k + L·k𝜃)] / [Σ𝜃,L |H𝜃,L(k + L·k𝜃)|² + w²] · Ã(k)   (1.25)

where the symbol * denotes complex conjugation and w is the Wiener constant, an empirical value. Ã(k) is the apodization function used to suppress artifacts. In the Wiener-SIM we implemented, a theoretical OTF based on the imaging conditions was employed as the apodization function

Ã(k) = {2b(|k|) − sin[2b(|k|)]}/𝜋   (1.26)


Finally, the reconstructed SR image is

Ssr(r) = FFT⁻¹{Sinitial−SR(k) · W̃(k)}   (1.27)

where FFT⁻¹ represents the inverse Fourier transform. Conventionally, raw SIM data with high modulation contrast and a high signal-to-noise ratio (SNR), a PSF that matches the imaging conditions during raw data acquisition, and a reconstruction algorithm with excellent artifact suppression performance are the three key factors for obtaining SR-SIM images with minimal artifacts. In general, Wiener-SIM estimates the illumination pattern parameters from the raw SIM data and then reconstructs the SR images based on the generalized Wiener filtering deconvolution (Eq. (1.25)). However, Wiener-SIM is not the optimal solution for reconstructing high-quality SR-SIM images because it still faces the following challenges.
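The component separation of Eq. (1.23) amounts to inverting a 3×3 phase-mixing matrix per orientation. A minimal self-checking sketch (illustrative only, with the m𝜃/2 modulation factor absorbed into the ± components):

```python
import numpy as np

def separate_components(d_specs, phi):
    """Solve Eq. (1.23) for one orientation: unmix the three spectral
    components from three phase-shifted raw spectra (illustrative sketch)."""
    phases = phi + 2 * np.pi * np.arange(3) / 3
    mix = np.stack([np.ones(3, dtype=complex),
                    np.exp(-1j * phases),
                    np.exp(1j * phases)], axis=1)   # row n: [1, e^-j*phi_n, e^+j*phi_n]
    unmix = np.linalg.inv(mix)
    d = np.stack(d_specs)                           # shape (3, H, W)
    return np.tensordot(unmix, d, axes=1)           # (S_0, S_+1, S_-1)

# self-check: mix known components per Eq. (1.22), then unmix them
rng = np.random.default_rng(3)
shape = (16, 16)
s0, s_plus, s_minus = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
                       for _ in range(3))
phi = 0.7
d_specs = [s0 + s_plus * np.exp(-1j * (phi + 2 * np.pi * n / 3))
           + s_minus * np.exp(1j * (phi + 2 * np.pi * n / 3)) for n in range(3)]
sep = separate_components(d_specs, phi)             # recovers s0, s_plus, s_minus
```

The matrix is well conditioned as long as the three phases are spaced by 2𝜋/3, which is why equally spaced phase steps are used in practice.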

1.3.5 Typical Artifacts and Removal Methods

Typical artifacts often appear in SR-SIM images, such as "hatching," "honeycomb," "snowflake," "sidelobe," and "hammerstroke" artifacts, so the fidelity and quantifiability of SIM images remain a challenge [23]. To pursue high-quality SR images with minimal artifacts, many efforts have been made, including detailed imaging-system establishment protocols, accurate reconstruction parameter estimation, practical system calibration and sample preparation guidelines, guides for adjusting user-defined parameters, and open-source reconstruction tools. These works have given researchers insight into the causes, features, and suppression methods of typical artifacts. So far, the sources of artifacts in SR-SIM images have been well studied and summarized in recently published papers. Although many artifacts can be distinguished subjectively, they cannot be robustly eliminated, and certain artifacts still frequently appear in published SIM images. The presence of artifacts limits the accessibility of SIM to a few experts, hindering its wider use as a general imaging tool. More importantly, new structures found using SIM must be interpreted with special care to avoid incorrectly identifying artifacts as real features [24–26]. To reduce reconstruction artifacts, Wiener-SIM usually emphasizes collecting raw data with high modulation contrast and a high SNR [27, 28]. However, in actual experiments, since the modulation contrast and SNR of the raw SIM data are mainly determined by the quality of the illumination patterns, the contrast properties of the labeled biological samples, and the optical properties of the imaging system, such raw data cannot always be collected. For example, when implementing ultrafast SR-SIM imaging, it is easy to collect raw data with low SNR.
Meanwhile, as the imaging depth increases, the modulation contrast of the raw data also gradually decreases due to the rapid growth of the out-of-focus background fluorescence. Moreover, when the contrast of the labeled sample is poor, the raw data show low modulation contrast even if the illumination pattern at the objective focal plane has high modulation contrast. Furthermore, raw
data with suboptimal modulation contrast are also unavoidable in both home-built and commercial SIM systems when the imaging device is in an imperfect state or the user is unskilled in operation. For low-SNR raw data, Wiener-SIM may not be able to estimate the correct pattern wave vectors, and noise-related artifacts in the reconstructed image become more serious. For raw data with suboptimal modulation contrast, Wiener-SIM also faces the challenge that the correct reconstruction parameters may not be determined; for example, the estimated modulation depth may be less than the actual value. According to Eq. (1.25), a smaller modulation factor causes over-amplification of the high-frequency components, resulting in "snowflake" artifacts and artifacts related to high-frequency noise. In addition, for raw data with strong background fluorescence, out-of-focus fluorescence can cause "honeycomb" or "hammerstroke" artifacts. In practice, in order to avoid the risk of artifacts, a large number of suboptimal raw data sets are not effectively utilized or are even abandoned. This not only wastes time and money in SIM imaging experiments but, more importantly, limits the application scenarios in which SIM is suitable.

1.4 3D imaging with Light Sheet Illumination

1.4.1 Principle and History

Light sheet fluorescence microscopy (LSFM) uses a thin plane of light to optically section transparent tissues or whole organisms that have been labeled with a fluorophore. Compared with confocal and two-photon microscopy, LSFM can image thicker tissues (>1 cm) with reduced photobleaching and phototoxicity because the specimen is exposed only to a thin light sheet. In addition, LSFM is a nondestructive method that produces well-registered optical sections suitable for three-dimensional reconstruction, and the specimen can still be processed by other histological methods (e.g. mechanical sectioning) after imaging. The first published account of a very simple version of an LSFM (called ultramicroscopy) was given by Siedentopf and Zsigmondy (1903), in which sunlight was projected through a slit aperture to observe gold particles. In 1993, Voie and colleagues developed a light sheet microscope system called orthogonal-plane fluorescence optical sectioning (OPFOS) [29]. OPFOS was developed by investigators in Francis Spelman's laboratory at the University of Washington who were attempting to quantitatively assess hair cell structure and other cochlear features to improve the cochlear implant. OPFOS featured all of the elements present in current LSFM devices: a laser, a beam expander, a cylindrical lens to generate the light sheet, a specimen chamber, orthogonal illumination of the specimen, specimen movement for z-stack creation, and specimen clearing and staining for producing fluorescent optical sections. In 1994, an oblique illuminating confocal microscope was developed in Ernst Stelzer's laboratory to improve the axial resolution


1 Advanced Wide-Field Fluorescent Microscopy for Biomedicine

Figure 1.12 (a) Principle of a light sheet, indicating the Rayleigh length 2Zr and the beam waist 2w0. (b) The thickness and field of view of the light sheet. Source: Chong Chen.

of confocal microscopy. It was called the confocal theta microscope [30, 31]. Confocal theta microscopy laid the foundation for the laboratory's subsequent LSFM device, called selective or single-plane illumination microscopy (SPIM). As shown in Figure 1.12a, a typical light sheet microscope is an L-shaped structure composed of two objective lenses (one illumination objective and one detection objective). In an LSFM, the excitation and collection branches are uncoupled, so it is helpful to study the excitation and collection arms separately. The FOV, lateral resolution, and axial resolution are the key parameters to consider when constructing an LSFM system. In an LSFM, a focused light beam is used to produce the excitation light sheet (Figure 1.12). When a Gaussian beam is used, its beam waist (w0) can be related to the sectioning ability and therefore, to a first approximation, to the axial resolution, Raxial, of the final image as

Raxial = 2w0 = 2𝜆∕(𝜋𝜃) = 2 ⋅ 2𝜆f∕(𝜋D) = 2n𝜆∕(𝜋NA) (1.28)

where 𝜃 is the half-angle of the focused beam, f is the focal length of the illumination objective, D is the entrance pupil diameter of the illumination objective, and n is the refractive index (for a formal definition of resolution, see Section 1.2.1). Similarly, the Rayleigh range, Zr, can be related to the FOV of the image:

FOV = 2Zr = 2𝜋w0²∕𝜆 (1.29)
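Equations (1.28) and (1.29) make the thickness/FOV tradeoff explicit: the sheet thickness scales as 1/NA, while the usable FOV scales as 1/NA². A small numeric sketch (illustrative wavelength and NA values, n = 1):

```python
import math

def light_sheet_params(wavelength_um, na, n=1.0):
    """Sheet thickness (axial resolution) and FOV of a Gaussian light sheet,
    following Eqs. (1.28) and (1.29). All lengths in micrometers."""
    w0 = n * wavelength_um / (math.pi * na)    # beam waist radius
    r_axial = 2 * w0                           # Eq. (1.28)
    fov = 2 * math.pi * w0 ** 2 / wavelength_um  # Eq. (1.29), twice the Rayleigh range
    return r_axial, fov

thick, fov = light_sheet_params(0.488, 0.1)
print(round(thick, 2), round(fov, 1))    # ~3.1 um thick sheet, ~31 um FOV
thick2, fov2 = light_sheet_params(0.488, 0.05)
print(round(thick2, 2), round(fov2, 1))  # halving NA: 2x thicker, 4x the FOV
```

This is why a single Gaussian light sheet cannot be simultaneously thin and long, the motivation for the scanned, tiled, and Bessel-beam variants discussed below.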


1.4.2 Light Sheet with Orthogonal Objectives

1.4.2.1 Light Sheet with Cylinder Lens

The Ernst H. K. Stelzer laboratory published the first true light sheet fluorescence microscopy article [32]. They used SPIM to visualize all muscles in vivo in the transgenic Medaka line Arnie, which expresses green fluorescent protein in muscle tissue. SPIM can also be applied to visualize the embryogenesis of the relatively opaque Drosophila melanogaster in vivo. SPIM was capable of resolving the internal structures of the entire organism with high resolution (better than 6 μm) as deep as 500 μm inside the fish, a penetration depth that cannot be reached using confocal LSM. The main components of the L-type light sheet microscope are as follows. A series of lasers provides lines for fluorescence excitation. An optical system that includes a cylindrical lens focuses the laser light into a thin light sheet. The sample is mounted in a transparent, low-concentration (0.5%) agarose gel (Figure 1.13). This agarose is prepared from an aqueous solution adequate for the sample, in this case phosphate buffered saline (PBS), providing a suitable environment for a live sample. The cylinder of agarose containing the sample is immersed in PBS, which virtually eliminates refractive imaging artifacts at the agarose surface. The cylinder containing the sample is supported from above by a micropositioning device. Using the four available degrees of freedom (three translational and one rotational), the sample can be positioned such that the excitation light illuminates the plane of interest. An objective lens, detection filter, and tube lens image the distribution of fluorophores in the illumination plane onto a CCD camera, with the detection axis arranged perpendicular to the axis of illumination. The light sheet thickness is adapted to the detection lens, i.e. the light sheet is made as thin as possible while keeping it uniform across the complete FOV of the objective lens. Its thickness is typically between 3 and 10 μm; e.g. for a 10×, 0.30 NA objective lens, the light sheet beam waist can be reduced to 6 μm, and the resulting width will vary by less than 42% across the FOV of 660 μm.
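The uniformity figure quoted above follows from the Gaussian-beam propagation law w(z) = w0·sqrt(1 + (z/zR)²): if the FOV is chosen equal to the confocal parameter 2zR, the sheet thickness at the FOV edges exceeds the waist by a factor of sqrt(2), i.e. by about 41.4%, consistent with the "less than 42%" statement. A quick check with illustrative values:

```python
import math

def sheet_width(z_um, w0_um, wavelength_um, n=1.0):
    """Gaussian beam radius w(z) = w0*sqrt(1 + (z/zR)^2), zR = pi*n*w0^2/lambda."""
    z_r = math.pi * n * w0_um ** 2 / wavelength_um
    return w0_um * math.sqrt(1 + (z_um / z_r) ** 2)

w0 = 6.0     # illustrative waist (um)
lam = 0.488  # illustrative excitation wavelength (um)
z_r = math.pi * w0 ** 2 / lam
edge = sheet_width(z_r, w0, lam)  # width at the edge of a FOV equal to 2*zR
print(edge / w0)                  # sqrt(2) ~ 1.414, i.e. <42% variation
```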

Figure 1.13 Light sheet with a cylinder lens. (a) Top view and side view of the illumination path (cylindrical lens, galvo mirror, filter, tube lens, camera). (b) 3D diagram of a typical sample pool. Source: Chong Chen.


Any fluorescence imaging system suffers from scattering and absorption in the tissue; in large and highly scattering samples, the image quality decreases as the optical path length in the sample increases. This problem can be reduced by multiview reconstruction, in which multiple 3D data sets of the same object are collected from different directions and combined in a postprocessing step. The sample is rotated mechanically, and a stack is recorded for each orientation (0∘, 90∘, 180∘, and 270∘). The stacks are then reoriented in the computer to align them with the stack recorded at 0∘. The fusion of these four data stacks yields a superior representation with similar clarity and resolution throughout the entire specimen.

1.4.2.2 Scanning Light Sheet

As is well known, the intensity profile of a Gaussian beam is nonuniform. Digital scanned laser light sheet fluorescence microscopy (DSLM) was developed to achieve the imaging speed and quality required for recording large specimens (Figure 1.14) [32]. The idea behind DSLM is to generate a "plane of light" with a laser scanner that rapidly moves a micrometer-thin beam of laser light vertically and horizontally through the specimen. DSLM has several advantages over standard light sheet microscopy. First, DSLM illuminates each line in the specimen with the same intensity, a crucial prerequisite for quantitative imaging of large specimens. Second, in contrast to standard light sheet-based microscopy, DSLM does not rely on apertures to form the laser profile, which reduces optical aberrations and thereby provides exceptional image quality. Third, the entire illumination power of the light source is focused onto a single line, resulting in an illumination efficiency of 95% as compared with 3% in standard light sheet microscopy. Fourth, DSLM allows the generation of intensity-modulated illumination patterns (structured illumination), which can be used to enhance the image contrast in highly light-scattering specimens, such as large embryos. One of the fundamental novel ideas of the DSLM concept is the use of laser scanners to create a 2D sample illumination profile perpendicular to the detection axis. In the standard mode of DSLM operation, one of the scan mirrors of the scan

Figure 1.14 Light sheet with scanning beams (galvo mirrors, scan lens, tube lens, illumination objective, sample, and detection objective). (a) Gaussian beam. (b) Bessel beam. Source: Chong Chen.


head moves at a constant speed within a predefined angular range. An f-theta lens converts the angular scan range of the laser beam into a vertical set of parallel beams. Because of the scanning approach, a single diffraction-limited beam of light illuminates the sample at any time point along a well-defined line in space. Integration over time and space results in the illumination of an entire plane. Thus, to obtain a two-dimensional image, the camera integrates the signal while the laser scanners illuminate the respective plane in the detection system's FOV. By using a constant scan speed, a homogeneous light sheet-like profile is generated, i.e. all horizontal lines are illuminated with the same light intensity. Because of the high scan speed, the entire two-dimensional profile is created in less than 1 ms, regardless of the extent of the field of view. The complete DSLM illumination/excitation system consists, for example, of a multiline argon–krypton laser, an acousto-optic tunable filter (AOTF) for laser wavelength selection and intensity control, a two-axis high-speed scan head, an f-theta lens, and a low-NA illumination objective lens operated with a regular tube lens. The illumination/excitation objective lens is mounted on a piezo nanofocus, which can move the lens along its optical axis. The specimen is placed inside a custom specimen chamber made, for example, from inert black Delrin. The specimen chamber features a temperature control system, which includes a temperature sensor inside the chamber and a heating foil attached below the chamber [33, 34].

1.4.2.3 Multidirection Illumination and Imaging

The SiMView microscope for one-photon excitation consists of custom laser light sources, two scanned light-sheet illumination arms, two fluorescence detection arms equipped with sCMOS cameras, a custom four-view specimen chamber with a perfusion system, and a four-axis specimen positioning system that is magnetically connected to the specimen holder in the imaging chamber (Figure 1.15) [35–38]. The four-view specimen chamber comprises a custom specimen holder, a custom mechanical scaffold manufactured from black Delrin, a multistage adaptor module for connecting the specimen holder to the specimen positioning system, and a custom perfusion system. The specimen holder was produced from medical-grade stainless steel. Using the positioning system, the holder can be translated in three dimensions and rotated around its main axis without breaking the water seal. Although this approach has been successful for imaging large multicellular organisms at single-cell resolution, a tradeoff exists between the minimum thickness of the light sheet and the FOV over which it remains reasonably uniform: when imaging a 50-μm-diameter cultured cell, an optimized Gaussian light sheet diverges to a full-width at half-maximum (FWHM) thickness of 2.8 μm at either end. However, such light sheets are too thick to aid in improving axial resolution, which is 3–4 times poorer than transverse resolution, even when high-NA optics are used. To generate much thinner light sheets, Bessel beams are created by projecting an annular illumination pattern at the rear pupil of an excitation lens. Their central peak width, unlike that of Gaussian beams, can be decoupled from their longitudinal extent simply by changing the thickness of the annulus. The self-reconstructing property of such beams has recently been used to reduce shadowing and scattering artifacts


Figure 1.15 Light sheet with multidirection illumination and detection. (a, b) Schematics of the two light sheet illumination directions, respectively. (c, d) Top views of (a) and (b). Source: Chong Chen.

Figure 1.16 Light sheet microscopy for an extended field of view. (a) Schematic using an electrically tunable lens (ETL): a continuous light sheet is tiled by changing the ETL voltage. (b) Schematic using an LCOS (liquid crystal on silicon) modulator: a discontinuous light sheet with multiple waists. Source: Chong Chen.

in plane illumination microscopy of multicellular organisms. Eric Betzig et al. used scanned Bessel beams of higher NA to create light sheets sufficiently thin to achieve isotropic 3D resolution and to improve the expenditure of the photon budget to the point at which hundreds of 3D image stacks, comprising tens of thousands of frames, can be acquired from single living cells at rates of nearly 200 frames/s.


In order to improve the 3D imaging ability of SPIM on large specimens, tremendous effort has been devoted to optimizing the light sheet intensity profile, so that the excitation light is confined near the detection focal plane over as long a distance as possible. Unfortunately, the diffraction of light makes it impossible to optimize both properties at the same time: the excitation light becomes less confined as the length of a light sheet increases, which makes it challenging to image large specimens at high spatial resolution using SPIM. One of the most effective approaches is to quickly move the light sheet within the imaging plane along the excitation propagation direction, so that high spatial resolution and good optical sectioning can be maintained over a FOV much larger than the size of the light sheet. Tiling light sheet selective plane illumination microscopy (TLS-SPIM) uses this strategy to improve the 3D imaging ability of SPIM on large specimens. In TLS-SPIM, a large FOV is imaged by tiling a short but thin light sheet at multiple positions within the imaging plane and taking an image at each tiling position (Figure 1.16). Scanning coaxial beam arrays, which are often created by diffractive optical elements (DOEs), can produce discontinuous light sheets. However, a DOE generates only a coaxial beam array with a fixed intensity profile, which reduces the flexibility of TLS-SPIM. By contrast, the binary SLM included in common TLS microscopes can produce coaxial beam arrays with different intensity profiles, beam numbers, and periods. DOEs can be used in two ways in TLS-SPIM for 3D imaging. First, the imaging plane can be imaged by tiling the same DOE-generated light sheet at multiple positions, as in regular TLS-SPIM. Second, the imaging plane can be imaged using multiple different DOEs whose light sheet waists compensate for each other. The galvo mirror directs the illumination light into one of the two symmetrical illumination paths by offsetting the initial angle and creates a virtual excitation light sheet for sample illumination by scanning the laser beam [39].
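The tile-and-stitch acquisition described above can be sketched in a few lines. The helper `acquire` below is a hypothetical acquisition function, and the masking of an "in-focus strip" around each sheet waist is a deliberate simplification of the real reconstruction:

```python
import numpy as np

def tile_and_stitch(sample, tile_centers, half_width, acquire):
    """Sketch of TLS-SPIM: move a short, thin light sheet to each tiling
    position along x, take one image per position, and keep only the
    well-confined strip around each sheet waist.

    `acquire(sample, center) -> image` is a hypothetical acquisition
    function standing in for the real camera exposure."""
    x = np.arange(sample.shape[1])
    stitched = np.zeros_like(sample, dtype=float)
    for c in tile_centers:
        img = acquire(sample, c)
        strip = np.abs(x - c) <= half_width  # in-focus region for this tile
        stitched[:, strip] = img[:, strip]
    return stitched

# idealized acquisition: the image equals the sample near the sheet waist
ideal = lambda s, c: s.astype(float)
sample = np.arange(12, dtype=float).reshape(3, 4)
out = tile_and_stitch(sample, tile_centers=[0, 2], half_width=1, acquire=ideal)
print(np.allclose(out, sample))  # True: the tiles cover the whole FOV
```

The tradeoff is directly visible in the loop: more tiles mean a thinner usable sheet per tile but proportionally more exposures per plane, which is the speed cost of TLS-SPIM.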

1.4.3 Single-Lens Light-Sheet Microscopy

The potential drawback of SPIM is that it requires the sample to be illuminated with a lens that lies in the plane of the sample being imaged, which means that conventional sample preparation techniques, e.g. glass microscope slides, cannot be used [3]. Recently, highly inclined and laminated optical sheet (HILO) microscopy was developed to illuminate a thin sheet in a sample using the same objective that collects the fluorescence [40]. The approach is similar to SPIM but with two significant differences: the illumination and detection beams are not at 90∘ (as is usual for SPIM), and the sheet of illumination is not aligned with the focal plane of the imaging system used to collect the reflected/scattered light or fluorescence (Figure 1.17). Oblique plane microscopy (OPM) shows that a similar concept can be applied to HILO/SPIM, where a third microscope is used to tilt the image plane rather than to image planes located at different axial positions. Thus, in-focus imaging of an oblique plane within the specimen is achieved by taking advantage of the fact that high numerical aperture (NA) microscope objectives have an angular extent


Figure 1.17 Light sheet with a single objective. Source: Chong Chen.

that is much larger than 90∘. To create an oblique sheet of illumination within the specimen, the laser beam was first expanded using a 10× telescope. As shown in Figure 1.18, the light was then focused by a cylindrical lens onto mirror M1 and directed to the sample via tube lens 1, a dichroic mirror (DM), and the high numerical aperture objective lens 1. Mirror M1 allowed the angle of the oblique sheet illumination on the specimen to be conveniently adjusted. Fluorescence from the specimen was then collected by the same objective, and an intermediate image plane, IP1, was produced by the first infinity-corrected microscope formed by lenses L1 and L2. Light at image plane IP1 is then de-magnified by the second infinity-corrected microscope formed by objectives 2 and 3. In order to reimage any plane within the specimen, it is necessary to ensure that the lateral and axial magnifications between the specimen and FP3 are equal. This condition is achieved when the magnification is equal to n1/n2, where n1 is the refractive index of the immersion medium of the

Figure 1.18 Single-lens light sheet illumination with tilt angle corrected. Source: Chong Chen.


specimen and n2 is the refractive index of the immersion medium of objective lens 2. It was then possible to reimage any plane in the specimen using a third microscope system, formed here by objective lens 3 and tube lens 2.
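The magnification condition stated above reduces to a one-line calculation. Values below are illustrative (a water-immersed specimen re-imaged into an air-immersed remote space):

```python
def remote_magnification(n_specimen, n_remote):
    """Magnification between the specimen and the remote image space needed
    for equal lateral and axial magnification in oblique remote refocusing:
    M = n1 / n2."""
    return n_specimen / n_remote

# e.g. water-immersed specimen (n1 = 1.33) re-imaged into air (n2 = 1.0)
M = remote_magnification(1.33, 1.0)
print(M)  # 1.33: the intermediate image is slightly magnified
```

Only when this condition holds is the remote copy of the specimen volume an undistorted replica, so that tilting the third microscope re-images the oblique illuminated plane without aberration.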

1.5 Summary

It is very likely that the point-scanning fluorescence microscope and the wide-field fluorescence microscope will coexist in the market for biomedical applications in the near future. However, with the advantages of faster imaging speed and lower photobleaching, as well as greatly improved optical sectioning capability and resolution, the wide-field fluorescence microscope may find a wide range of applications, especially for dynamic imaging of live samples such as live cells and miniature model animals. New techniques to further improve their performance are also highly expected. Nowadays, biomedical studies are not satisfied with the observation of one cell or a few cells. Whole-slide or whole-sample observation is in high demand, which requires the optical microscope to have a much bigger FOV without compromising resolution. This trend in wide-field fluorescence microscopes has appeared with specially designed objectives and the use of a camera array for detection. The other trend in wide-field fluorescence microscopy will be further improvement of the imaging speed, especially for 3D imaging. Currently, the 3D imaging speed is limited by the slow axial translation of the objective or the sample stage. Using an electrically tunable lens to adjust the focus could improve the speed but may introduce additional optical aberration. Simultaneous imaging of multiple planes could be a solution for faster 3D imaging in the future. Introducing adaptive optics to reduce aberrations in thick samples is also highly expected. With bigger FOVs and faster imaging speeds, dramatic amounts of microscope data will be obtained, far beyond the capability of human eyes and of the currently used image-processing software. The development of automatic, precise, and intelligent image-processing algorithms and software will certainly be a hot topic for the application of wide-field fluorescence microscopy in modern biomedicine.

References

1 Bauch, H. and Jörg, S. (2006). Optical sections by means of structured illumination: background and application in fluorescence microscopy. Photonik Int. 86–88.
2 Dan, D., Yao, B.L., and Lei, M. (2014). Structured illumination microscopy for super-resolution and optical sectioning. Chin. Sci. Bull. 59: 1291–1307, https://doi.org/10.1007/s11434-014-0181-1.
3 Dan, D. et al. (2013). DMD-based LED-illumination super-resolution and optical sectioning microscopy. Sci. Rep. 3: 1116, https://doi.org/10.1038/srep01116.


4 Xu, D. et al. (2013). Fast optical sectioning obtained by structured illumination microscopy using a digital mirror device. J. Biomed. Opt. 18: 060503, https://doi.org/10.1117/1.JBO.18.6.060503.
5 Lin, C.Y., Lin, W.H., Chien, J.H. et al. (2016). In vivo volumetric fluorescence sectioning microscopy with mechanical-scan-free hybrid illumination imaging. Biomed. Opt. Express 7: 3968–3978, https://doi.org/10.1364/BOE.7.003968.
6 Yang, X. et al. (2019). Fringe optimization for structured illumination super-resolution microscope with digital micromirror device. J. Innovative Opt. Health Sci. 12, https://doi.org/10.1142/s1793545819500147.
7 Poher, V. et al. (2007). Optical sectioning microscopes with no moving parts using a micro-stripe array light emitting diode. Opt. Express 15: 11196–11206.
8 Lim, D., Ford, T.N., Chu, K.K., and Mertz, J. (2011). Optically sectioned in vivo imaging with speckle illumination HiLo microscopy. J. Biomed. Opt. 16: 016014.
9 Hoffman, Z.R. and DiMarzio, C.A. (2013). Structured illumination microscopy using random intensity incoherent reflectance. J. Biomed. Opt. 18: 061216, https://doi.org/10.1117/1.JBO.18.6.061216.
10 Neil, M., Juškaitis, R., and Wilson, T. (1997). Method of obtaining optical sectioning by using structured light in a conventional microscope. Opt. Lett. 22 (24): 1905–1907.
11 Neil, M., Squire, A., Juškaitis, R. et al. (2000). Wide-field optically sectioning fluorescence microscopy with laser illumination. J. Microsc. 197: 1–4.
12 Ford, T.N., Lim, D., and Mertz, J. (2012). Fast optically sectioned fluorescence HiLo endomicroscopy. J. Biomed. Opt. 17: 021105, https://doi.org/10.1117/1.JBO.17.2.021105.
13 O'Holleran, K. and Shaw, M. (2014). Optimized approaches for optical sectioning and resolution enhancement in 2D structured illumination microscopy. Biomed. Opt. Express 5: 2580–2590, https://doi.org/10.1364/BOE.5.002580.
14 Zhou, X. et al. (2015). Double-exposure optical sectioning structured illumination microscopy based on Hilbert transform reconstruction. PLoS One 10: e0120892, https://doi.org/10.1371/journal.pone.0120892.
15 Gustafsson, M.G. (2000). Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J. Microsc. 198: 82–87, https://doi.org/10.1046/j.1365-2818.2000.00710.x.
16 Gustafsson, M.G. et al. (2008). Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination. Biophys. J. 94: 4957–4970, https://doi.org/10.1529/biophysj.107.120345.
17 Gustafsson, M.G. (2005). Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution. Proc. Natl. Acad. Sci. U.S.A. 102: 13081–13086, https://doi.org/10.1073/pnas.0406877102.
18 Li, D. et al. (2015). Advanced imaging. Extended-resolution structured illumination imaging of endocytic and cytoskeletal dynamics. Science 349: aab3500, https://doi.org/10.1126/science.aab3500.


19 Schermelleh, L. et al. (2008). Subdiffraction multicolor imaging of the nuclear periphery with 3D structured illumination microscopy. Science 320: 1332–1336, https://doi.org/10.1126/science.1156947.
20 Shao, L., Kner, P., Rego, E.H., and Gustafsson, M.G. (2011). Super-resolution 3D microscopy of live whole cells using structured illumination. Nat. Methods 8: 1044–1046, https://doi.org/10.1038/nmeth.1734.
21 Fiolka, R., Shao, L., Rego, E.H. et al. (2012). Time-lapse two-color 3D imaging of live cells with doubled resolution using structured illumination. Proc. Natl. Acad. Sci. U.S.A. 109: 5311–5315, https://doi.org/10.1073/pnas.1119262109.
22 Lu-Walther, H.W. et al. (2015). fastSIM: a practical implementation of fast structured illumination microscopy. Methods Appl. Fluoresc. 3: 014001, https://doi.org/10.1088/2050-6120/3/1/014001.
23 Demmerle, J. et al. (2017). Strategic and practical guidelines for successful structured illumination microscopy. Nat. Protoc. 12: 988–1010, https://doi.org/10.1038/nprot.2017.019.
24 Forster, R., Wicker, K., Muller, W. et al. (2016). Motion artefact detection in structured illumination microscopy for live cell imaging. Opt. Express 24: 22121–22134, https://doi.org/10.1364/OE.24.022121.
25 Lahrberg, M., Singh, M., Khare, K., and Ahluwalia, B.S. (2018). Accurate estimation of the illumination pattern's orientation and wavelength in sinusoidal structured illumination microscopy. Appl. Opt. 57: 1019–1025, https://doi.org/10.1364/AO.57.001019.
26 Forster, R., Muller, W., Richter, R., and Heintzmann, R. (2018). Automated distinction of shearing and distortion artefacts in structured illumination microscopy. Opt. Express 26: 20680–20694, https://doi.org/10.1364/OE.26.020680.
27 Perez, V., Chang, B.J., and Stelzer, E.H. (2016). Optimal 2D-SIM reconstruction by two filtering steps with Richardson–Lucy deconvolution. Sci. Rep. 6: 37149, https://doi.org/10.1038/srep37149.
28 Huang, X. et al. (2018). Fast, long-term, super-resolution imaging with Hessian structured illumination microscopy. Nat. Biotechnol. 36: 451–459, https://doi.org/10.1038/nbt.4115.
29 Voie, A.H., Burns, D., and Spelman, F. (1993). Orthogonal-plane fluorescence optical sectioning: three-dimensional imaging of macroscopic biological specimens. J. Microsc. 170: 229–236.
30 Stelzer, E.H.K. and Lindek, S. (1994). Fundamental reduction of the observation volume in far-field light microscopy by detection orthogonal to the illumination axis: confocal theta microscopy. Opt. Commun. 111: 536–547.
31 Haar, F.-M., Swoger, J., and Stelzer, E. (1999). Developments and Applications of Confocal Theta Microscopy, vol. 3605 PWB. SPIE.
32 Keller, P.J., Schmidt, A.D., Wittbrodt, J., and Stelzer, E.H. (2008). Reconstruction of zebrafish early embryonic development by scanned light sheet microscopy. Science 322: 1065–1069, https://doi.org/10.1126/science.1162493.
33 Planchon, T.A. et al. (2011). Rapid three-dimensional isotropic imaging of living cells using Bessel beam plane illumination. Nat. Methods 8: 417–423, https://doi.org/10.1038/nmeth.1586.


34 Fahrbach, F.O., Voigt, F.F., Schmid, B. et al. (2013). Rapid 3D light-sheet microscopy with a tunable lens. Opt. Express 21: 21010–21026, https://doi.org/10.1364/OE.21.021010.
35 Ahrens, M.B., Orger, M.B., Robson, D.N. et al. (2013). Whole-brain functional imaging at cellular resolution using light-sheet microscopy. Nat. Methods 10: 413–420, https://doi.org/10.1038/nmeth.2434.
36 Voigt, F.F. et al. (2019). The mesoSPIM initiative: open-source light-sheet microscopes for imaging cleared tissue. Nat. Methods 16: 1105–1108, https://doi.org/10.1038/s41592-019-0554-0.
37 Wu, Y. et al. (2013). Spatially isotropic four-dimensional imaging with dual-view plane illumination microscopy. Nat. Biotechnol. 31: 1032–1038, https://doi.org/10.1038/nbt.2713.
38 Krzic, U., Gunther, S., Saunders, T.E. et al. (2012). Multiview light-sheet microscope for rapid in toto imaging. Nat. Methods 9: 730–733, https://doi.org/10.1038/nmeth.2064.
39 Wu, Y. et al. (2017). Reflective imaging improves spatiotemporal resolution and collection efficiency in light sheet microscopy. Nat. Commun. 8: 1452, https://doi.org/10.1038/s41467-017-01250-8.
40 Tokunaga, M., Imamoto, N., and Sakata-Sogawa, K. (2008). Highly inclined thin illumination enables clear single-molecule imaging in cells. Nat. Methods 5: 159–161, https://doi.org/10.1038/nmeth1171.


2 Fluorescence Resonance Energy Transfer (FRET)

Tongsheng Chen

South China Normal University, College of Biophotonics, MOE Key Laboratory of Laser Life Science & Guangdong Provincial Key Laboratory of Laser Life Science, Guangzhou 510631, China

Fluorescence resonance energy transfer (FRET) is widely used in all applications of fluorescence, such as medical diagnostics, DNA analysis, and optical imaging. The widespread use of FRET is due to the favorable distances for energy transfer, which are typically the size of a protein or the thickness of a membrane. Additionally, the extent of FRET is readily predictable from the spectral properties of the fluorophores. If the spectral properties of the fluorophores allow FRET, it will occur and will not be significantly affected by the biomolecules in the sample. These favorable properties allow for the design of experiments based on the known sizes and structural features of the sample. FRET is an electrodynamic phenomenon that can be explained using classical physics. FRET occurs between a donor (D) molecule in the excited state and an acceptor (A) molecule in the ground state. The donor molecules typically emit at shorter wavelengths that overlap with the absorption spectrum of the acceptor. Energy transfer occurs without the appearance of a photon and is the result of long-range dipole–dipole interactions between the donor and acceptor. The term resonance energy transfer (RET) is preferred because the process does not involve the appearance of a photon. The rate of energy transfer depends upon the extent of spectral overlap of the emission spectrum of the donor with the absorption spectrum of the acceptor, the quantum yield of the donor, the relative orientation of the donor and acceptor transition dipoles, and the distance between the donor and acceptor molecules. The distance dependence of RET allows measurement of the distances between donors and acceptors. The most common application of RET is to measure the distances between two sites on a macromolecule. RET is also used in studies in which the actual D–A distance is not being measured. Typical experiments of this type include DNA hybridization or any bioaffinity reactions. 
If the sample contains two types of macromolecules that are individually labeled as donor or acceptor, association of the molecules can usually be observed using RET. The observation of RET is sufficient to measure the extent of binding, even without calculation of the D–A

Biomedical Photonic Technologies, First Edition. Edited by Zhenxi Zhang, Shudong Jiang, and Buhong Li. © 2023 WILEY-VCH GmbH. Published 2023 by WILEY-VCH GmbH.


distance. At present, steady-state measurements are often used to measure binding interactions. Distances are usually obtained from time-resolved measurements. RET is also used to study macromolecular systems when there is more than a single acceptor molecule near a donor molecule. This situation often occurs for larger assemblies of macromolecules or when using membranes where the acceptor is a freely diffusing lipid analogue.
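The distance dependence invoked above is the standard Förster relation (treated formally for a donor–acceptor pair later in this chapter): the transfer efficiency falls off with the sixth power of the donor–acceptor separation. The numerical values below are illustrative:

```python
def fret_efficiency(r_nm, r0_nm):
    """Standard Förster distance dependence: E = R0^6 / (R0^6 + r^6),
    where R0 is the Förster distance at which E = 0.5."""
    return r0_nm ** 6 / (r0_nm ** 6 + r_nm ** 6)

print(fret_efficiency(5.0, 5.0))   # 0.5 exactly, at r = R0
print(fret_efficiency(2.5, 5.0))   # ~0.98: strong transfer well inside R0
print(fret_efficiency(10.0, 5.0))  # ~0.015: negligible beyond 2*R0
```

The steepness of the r⁻⁶ falloff around R0 is what makes FRET a sensitive "spectroscopic ruler" over protein-scale distances and an effective on/off reporter of binding.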

2.1 Fluorescence

2.1.1 Fluorescence Emission

In the past 30 years there has been remarkable growth in the use of fluorescence in the biological sciences. Fluorescence is the emission of light from singlet excited states, in which the electron in the excited orbital is paired (of opposite spin) with the second electron in the ground-state orbital; return to the ground state is therefore spin-allowed and occurs rapidly by the emission of a photon. The emission rates of fluorescence are typically 10⁸ s⁻¹, so that a typical fluorescence lifetime is near 10 ns (10 × 10⁻⁹ s) [1]. Phosphorescence is the emission of light from triplet excited states, in which the electron in the excited orbital has the same spin orientation as the ground-state electron. Transitions to the ground state are forbidden, and the emission rates are slow (10³–10⁰ s⁻¹), so that phosphorescence lifetimes are typically milliseconds to seconds. Even longer lifetimes are possible, as is seen with "glow-in-the-dark" toys. Figure 2.1 is one form of a Jablonski diagram showing the emission of both fluorescence and phosphorescence [1, 2]. The generation of fluorescence emission comprises three steps: absorption of an excitation photon, internal conversion, and fluorescence emission.
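The quoted rates and lifetimes are tied together by the simple relation τ = 1/k for a total excited-state decay rate k; a quick check of the orders of magnitude above:

```python
def lifetime_s(total_decay_rate):
    """Excited-state lifetime tau = 1 / k for a total decay rate k in s^-1."""
    return 1.0 / total_decay_rate

print(lifetime_s(1e8))  # 1e-08 s, i.e. the typical ~10 ns fluorescence lifetime
print(lifetime_s(1e3))  # 0.001 s: millisecond-scale phosphorescence
print(lifetime_s(1e0))  # 1.0 s: slow phosphorescence
```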

2.1.2 Molar Extinction Coefficient

A fundamental aspect of fluorescence is the measurement of light absorption. According to the Beer–Lambert law, the optical density (A) of a solution for incident light of wavelength 𝜆 is [1]

A = −log(I/I_0) = 𝜀(𝜆)·C·L   (2.1)

Figure 2.1 One form of a Jablonski diagram, showing absorption, internal conversion, intersystem crossing, fluorescence, and phosphorescence. Source: Reproduced from Ref. [1] Figure 1.5 with permission of Springer Nature.


where 𝜀 is the decadic molar extinction coefficient (in M⁻¹ cm⁻¹), I_0 is the initial incident light intensity, I is the output light intensity, C is the concentration of the sample in mol/l, and L is the thickness of the sample. Thus

𝜀(𝜆) = A / (C·L)   (2.2)
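As a quick numerical illustration of Eqs. (2.1) and (2.2), the following is a minimal sketch; the function and variable names are ours, not from the text:

```python
import math

def absorbance(I, I0):
    """Eq. (2.1): optical density A = -log10(I / I0)."""
    return -math.log10(I / I0)

def molar_extinction(A, C, L):
    """Eq. (2.2): epsilon(lambda) = A / (C * L), in M^-1 cm^-1
    when C is in mol/l and L is the path length in cm."""
    return A / (C * L)
```

For example, a 10 μM solution in a 1 cm cuvette that transmits 1% of the incident light has A = 2 and 𝜀 = 2 × 10⁵ M⁻¹ cm⁻¹.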

2.1.3 Quantum Yield

Quantum yield is one of the most important characteristics of a fluorophore: it is the number of emitted photons relative to the number of absorbed photons. Substances with the largest quantum yields, approaching unity, such as the rhodamines, display the brightest emission. The meaning of the quantum yield is best represented by a simplified Jablonski diagram (Figure 2.1). In this diagram we do not explicitly illustrate the individual relaxation processes leading to the relaxed S1 state. Instead, we focus on the processes responsible for the return to the ground state. In particular, we are interested in the emissive rate of the fluorophore (𝛤) and its rate of nonradiative decay to S0 (k_nr). The rate constants 𝛤 and k_nr both depopulate the excited state. The fraction of fluorophores that decay through emission, and hence the quantum yield, is given by Q = 𝛤/(𝛤 + k_nr).
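The competition between 𝛤 and k_nr can be sketched as follows (a minimal illustration; the names are ours):

```python
def quantum_yield(gamma, k_nr):
    """Q = Gamma / (Gamma + k_nr): fraction of excited fluorophores
    that decay by photon emission rather than nonradiatively."""
    return gamma / (gamma + k_nr)

def excited_state_lifetime(gamma, k_nr):
    """The excited-state lifetime is the inverse of the total
    depopulation rate, 1 / (Gamma + k_nr)."""
    return 1.0 / (gamma + k_nr)
```

With 𝛤 = 10⁸ s⁻¹ and negligible nonradiative decay, Q approaches unity and the lifetime is near the typical 10 ns quoted above.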

2.1.4 Absorption and Emission Spectra

Fluorescence spectra include an excitation spectrum (or absorption spectrum) and an emission spectrum. An absorption spectrum is a plot of the extinction coefficient 𝜀(𝜆) versus the excitation wavelength. Fluorescence spectral data are generally presented as emission spectra. A fluorescence emission spectrum is a plot of the fluorescence intensity versus wavelength (nanometers) or wavenumber (cm⁻¹). Emission spectra are typically independent of the excitation wavelength. The excitation (absorption) spectrum of a fluorophore can be obtained by recording the emission intensity at a fixed emission wavelength while varying the excitation wavelength, whereas the emission spectrum can be obtained by recording the emission intensity at different emission wavelengths with a fixed excitation wavelength [1]. Figure 2.2 shows the absorption and emission spectra of cyan fluorescent protein (CFP), and Figure 2.3 shows the absorption and emission spectra of yellow fluorescent protein (YFP) [1].

2.2 Characteristics of Resonance Energy Transfer

The distance at which RET efficiency is 50% is called the Förster distance [3], which is typically in the range of 20–60 Å. The rate of energy transfer from a donor to an acceptor, k_T(r), is given by [1, 3]:

k_T(r) = (1/𝜏_D)·(R_0/r)^6   (2.3)

Figure 2.2 Absorption and emission spectra of CFP. Source: Tongsheng Chen.

Figure 2.3 Absorption and emission spectra of YFP. Source: Tongsheng Chen.

where 𝜏_D is the decay time of the donor in the absence of an acceptor, R_0 is the Förster distance, and r is the donor-to-acceptor distance. Hence, the rate of transfer is equal to the decay rate of the donor (1/𝜏_D) when the D-to-A distance (r) is equal to the Förster distance (R_0), and the transfer efficiency is 50%. At this distance (r = R_0) the donor emission would be decreased to half its intensity in the absence of acceptors. Figure 2.4 shows the energy levels related to the donor and acceptor. Equation (2.3) is only suitable for a D–A pair with a single acceptor, not for a D–A pair with multiple acceptors [4–6].
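Equation (2.3) translates directly into code (a sketch with our own names; any consistent units may be used for r and R_0):

```python
def transfer_rate(r, R0, tau_D):
    """Eq. (2.3): k_T(r) = (1 / tau_D) * (R0 / r)**6.

    r:     donor-to-acceptor distance
    R0:    Foerster distance (same unit as r)
    tau_D: donor lifetime in the absence of acceptor (s)
    """
    return (1.0 / tau_D) * (R0 / r) ** 6
```

At r = R_0 the transfer rate equals the donor decay rate 1/𝜏_D, consistent with the 50% transfer efficiency at the Förster distance.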

Figure 2.4 Energy levels of FRET: donor excitation and emission, energy transfer to the acceptor, and FRET (sensitized) fluorescence. Source: Tongsheng Chen.

2.3 Theory of Energy Transfer for a Donor–Acceptor Pair

The Förster distance is given by [1, 3]:

R_0^6 = [9000·(ln 10)·𝜅²·Q_D / (128·𝜋⁵·N·n⁴)] ∫_0^∞ f_D(𝜆)·𝜀_A(𝜆)·𝜆⁴ d𝜆   (2.4)

where Q_D is the quantum yield of the donor in the absence of an acceptor, n is the refractive index of the medium, N is Avogadro's number, r is the distance between the donor and acceptor, 𝜏_D is the lifetime of the donor in the absence of an acceptor, and 𝜀_A(𝜆) is the extinction coefficient of the acceptor at 𝜆, typically in units of M⁻¹ cm⁻¹. The term 𝜅² is a factor describing the relative orientation in space of the transition dipoles of the donor and acceptor; 𝜅² is usually assumed to be equal to 2/3, which is appropriate for dynamic random averaging of the donor and acceptor. f_D(𝜆) is the corrected fluorescence intensity of the donor in the wavelength range 𝜆 to 𝜆 + Δ𝜆, with the total intensity (area under the curve) normalized to unity as follows,

f_D(𝜆) = F_D(𝜆) / ∫_0^∞ F_D(𝜆) d𝜆   (2.5)

Define the overlap integral J(𝜆) as

J(𝜆) = ∫_0^∞ F_D(𝜆)·𝜀_A(𝜆)·𝜆⁴ d𝜆 / ∫_0^∞ F_D(𝜆) d𝜆   (2.6)

Combining Eqs. (2.4)–(2.6) gives

R_0 = 0.211 × [𝜅²·n⁻⁴·Q_D·J(𝜆)]^{1/6} Å   (2.7)

For a given D–A pair, Q_D, n, 𝜅, and N are constant. FRET efficiency is defined as

E = k_T(r) / (k_T(r) + 𝜏_D⁻¹)   (2.8)

Substituting Eq. (2.3) into Eq. (2.8), we obtain:

E = R_0^6 / (R_0^6 + r^6) = 1 / (1 + (r/R_0)^6)   (2.9)

Figure 2.5 Dependence of the energy transfer efficiency (E) on distance (r/R_0): E = 0.985 at r = 0.5R_0, E = 0.50 at r = R_0, and E = 0.015 at r = 2R_0. R_0 is the Förster distance. Source: Reproduced from [1] Figure 13.2 with permission of Springer Nature.

According to Eq. (2.9), Figure 2.5 shows the dependence of E on r. Therefore, there are three prerequisites for FRET: (i) r is close to R_0; (ii) the emission spectrum of the donor overlaps with the absorption spectrum of the acceptor; and (iii) the transition dipoles of the donor and acceptor are not perpendicular to each other.
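Equation (2.9), and its inversion to recover a distance from a measured efficiency, can be sketched as follows (the names are ours):

```python
def fret_efficiency(r, R0):
    """Eq. (2.9): E = 1 / (1 + (r / R0)**6)."""
    return 1.0 / (1.0 + (r / R0) ** 6)

def distance_from_efficiency(E, R0):
    """Invert Eq. (2.9): r = R0 * (1/E - 1)**(1/6)."""
    return R0 * (1.0 / E - 1.0) ** (1.0 / 6.0)
```

The steep sixth-power dependence reproduces the values in Figure 2.5: E falls from 0.985 to 0.015 as r goes from 0.5R_0 to 2R_0.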

2.4 Types of FRET Application

There are two types of FRET applications: intramolecular FRET and intermolecular FRET [4, 5]. In intramolecular FRET, donor and acceptor fluorophores are linked to two different sites of the same molecule (Figure 2.6). A conformational change of the molecule changes the distance between the two labeled sites, thus affecting the FRET efficiency. FRET probes based on intramolecular FRET have been widely used to monitor the dynamics of calcium concentration, protein phosphorylation, and activation of protein kinases in living cells. Intermolecular FRET refers to two independent molecules labeled with donor and acceptor fluorophores, respectively (Figure 2.7), and is applicable to evaluating the interactions between molecules.

2.5 Common Fluorophores for FRET

According to FRET theory, ideal fluorophores for FRET have the following characteristics: (i) strong absorption capability and high quantum yield; (ii) large overlap between the emission spectrum of the donor and the absorption spectrum of the acceptor; (iii) excellent stability independent of environment; and (iv) no effect on the conformation and function of the target molecules. Widely used fluorophores include organic fluorescent molecules, gene-encoded fluorescent proteins (FPs), and inorganic nanoparticles.

Figure 2.6 Intramolecular FRET: a host protein labeled with CFP and YFP switches between "open" and "closed" conformations, changing both the distance and orientation factors. Source: Tongsheng Chen.

Figure 2.7 Intermolecular FRET: protein A and protein B form a protein–protein complex. Source: Tongsheng Chen.

2.5.1 Chemical Fluorescence Probes

There are various types of chemical fluorescence probes covering a broad spectral range. They have been widely used to label subcellular organelles, DNA, and protein molecules. At present, many mature and stable commercial chemical fluorescence probes are available for FRET measurements. The most commonly used FRET dye molecules include Cy3, Cy5, Cy7, and the Alexa series of dye molecules. Detailed information on chemical fluorescent dyes and their optical properties can be found from Molecular Probes. Tables 2.1 and 2.2 list some donor–acceptor pairs and their Förster distances R_0 [1].

2.5.2 Gene-Encoded Fluorescent Proteins (FPs)

Gene-encoded FPs with different spectral properties have been created by introducing mutations into the amino-acid sequence. In general, FPs have good photostability and display high quantum yields, probably because the β-barrel structure shields the chromophore from the local environment. A large number of FPs are now

Table 2.1 Representative Förster distances for various donor–acceptor pairs.

Donor            Acceptor    R0 (Å)
Naphthalene [6]  Dansyl      22
Dansyl [7]       FITC        33–41
Dansyl [6]       ODR         43
ε-A [6]          NBD         38
IAF [6]          TMR         37–50
Pyrene [6]       Coumarin    39
FITC [6]         TMR         49–54
IAEDANS [6]      FITC        49
IAEDANS [6]      IAF         46–56
IAF [6]          EIA         46
CF               TR          51
Bodipy [8]       Bodipy      57
BPE [6]          Cy5         72
Terbium [9]      Rhodamine   65
Europium [10]    Cy5         70
Europium [11]    APC         70

Source: Reproduced from [1] Table 13.3 with permission of Springer Nature.

available with emission maxima ranging from 448 to 600 nm, including blue fluorescent protein (BFP), cyan fluorescent proteins (CFP, Cerulean), green fluorescent protein (GFP), yellow fluorescent proteins (YFP, Venus, mVenus, Citrine), and red fluorescent proteins (RFP). FP-based FRET has been widely used to monitor the dynamics of intracellular biochemical events in single living cells. Table 2.3 lists the Förster distances of FRET FP pairs [12].

2.5.3 Quantum Dot (QD)

Quantum dots (QDs) have many excellent optical properties, such as high light-absorption capacity, high quantum yield, high photostability, and narrow emission spectra. QDs are not only brighter and more resistant to photobleaching than conventional organic fluorescent dyes, but also have longer fluorescence lifetimes. Therefore, from the standpoint of optical properties, QDs are well suited as FRET donors and acceptors. Currently, the use of QDs for live-cell imaging and labeling is increasing rapidly in cell research. However, the size and biological toxicity of QDs limit their use as FRET donors and acceptors in living cells.

Table 2.2 Förster distances for tryptophan–acceptor pairs.

Donor   Acceptor       R0 (Å)
Trp     Nitrobenzoyl   16
Trp     Dansyl         21–24
Trp     IAEDANS        22
Trp     Anthroyloxyl   24
Trp     TNB            24
Trp     Anthroyl       25
Trp     Tyr-NO2        26
Trp     Pyrene         28
Trp     Heme           29
Trp     NBS            30
Trp     DNBS           33
Trp     DPH            40

Source: Reproduced from [6] Table 2 with permission of Elsevier.

Table 2.3 Calculated Förster distances (in nanometers) for FP pairings.

                      Acceptor
Donor       EYFP   mVenus   mCitrine   phiYFPm   mEGFP   copGFP
ECFP        4.89   4.95     –          –         –       –
mCerulean   5.40   5.36     5.49       5.03      4.99    3.52
mEGFP       5.71   5.65     5.84       4.53      3.96    1.69
copCFP      5.61   5.57     5.72       4.80      4.34    1.95
phiGFP      3.91   3.83     4.46       2.27      1.87    0.83
mCitrine    5.15   5.05     5.67       3.09      2.51    1.33
mVenus      4.95   4.85     5.48       2.84      2.20    1.09

Source: Reproduced from [12] Table 2 with permission of Cambridge University Press.

2.6 Effect of FRET on the Optical Properties of Donor and Acceptor

In general, FRET affects the following optical properties of a donor–acceptor pair: the donor fluorescence lifetime; the donor emission intensity; the acceptor emission intensity; the overall fluorescence spectrum of the donor–acceptor pair; and the polarization of their fluorescence.

Lifetime of donor (𝜏_D): FRET decreases 𝜏_D because energy transfer provides excited donor molecules with an additional, nonradiative pathway back to the ground state.


Emission fluorescence intensity of the donor–acceptor pair: Because of the FRET effect, the donor transfers its energy to the acceptor, which reduces the intensity of the fluorescence emitted by the donor and increases the intensity emitted by the acceptor.

Emission spectra of the donor–acceptor pair: FRET shifts energy toward longer wavelengths, resulting in a decrease in the short-wavelength range and an increase in the long-wavelength range of the combined spectrum. In addition, FRET also affects the polarization of the emission fluorescence of both the donor and acceptor.

2.7 Qualitative FRET Analysis

The channel intensity-based ratio method and the partial acceptor photobleaching-based method are the two qualitative FRET measurement methods currently in use.

Channel intensity-based ratio method: Generally, a donor channel is selected to mainly or selectively collect the fluorescence emitted by the donor, and an acceptor channel is selected to mainly collect the fluorescence emitted by the acceptor. The fluorescence intensity ratio of the two detection channels is then used to qualitatively measure the FRET effect. For example, Miyawaki et al. constructed a CFP- and YFP-based calcium-ion FRET probe (Cameleon, using CFP as donor and YFP as acceptor) [13]. The higher the calcium-ion concentration, the closer CFP and YFP are to each other, resulting in higher FRET efficiency between them. A 470/30 nm channel was selected to collect the emission fluorescence of CFP, while a 530/40 nm channel was selected to collect the emission fluorescence of YFP. In living cells expressing Cameleon, 436 nm light was used to excite CFP, and the ratio of fluorescence intensity between the two detection channels was measured to determine the spatial distribution and changes of intracellular calcium ions (Figure 2.8).

Partial acceptor photobleaching-based method: For a donor–acceptor FRET pair, acceptor photobleaching results in an increase in donor fluorescence intensity,

Figure 2.8 Ratio imaging of intracellular calcium ion with the Cameleon probe (CFP–CaM–M13–YFP; 433 nm excitation, CFP emission at 476 nm, FRET emission at 527 nm). Source: Tsien et al. [14], from the American Association for the Advancement of Science.


which can be used to determine the FRET between donor and acceptor. This method needs a donor detection channel that selectively detects the emission fluorescence of the donor. In 2009, a gene-encoded FRET probe (SCAT3) that reports caspase-3 activity was used to detect caspase-3 activation during apoptosis in living cells [15, 16]. SCAT3 connects the donor CFP to the acceptor Venus by a peptide chain containing the caspase-3 cleavage substrate [15].

2.8 Quantitative FRET Measurement

Qualitative FRET signals depend not only on the degree of the FRET effect but also on the detection system and its state, including excitation wavelength, spectral response of the system, exposure time, gain coefficient of the detector, and background. As a result, FRET signals detected by different research groups, or even from different samples within the same laboratory and the same trial, cannot be directly compared; qualitative FRET detection can only establish whether there is a FRET effect or whether there is a relative change in FRET efficiency. Quantitative FRET measurement is necessary for scientific communication and cooperation between different laboratories [17].

2.8.1 Issue of Quantitative FRET Measurement: Spectral Crosstalk

FRET requires a large overlap between the emission spectrum of the donor and the excitation spectrum of the acceptor. From the perspective of quantitative FRET detection, however, one would like to selectively excite the donor and, at the same time, selectively detect the emission fluorescence of the donor and acceptor. Because of the trailing shape of fluorescence emission spectra, the emission spectrum of the donor usually overlaps with that of the acceptor, and donor excitation light usually excites the acceptor as well. FPs in particular have large overlaps in both excitation and emission spectra (Figure 2.9) [18]. When FPs are selected as donors and acceptors, it is very difficult to selectively excite, or to collect emission from, only one of the FPs [19]. For FRET analysis, four kinds of spectral crosstalk are considered in quantitative FRET measurement [17]:

Donor excitation crosstalk: acceptor excitation light also excites the donor.
Acceptor excitation crosstalk: donor excitation light also excites the acceptor.
Donor emission bleedthrough: the acceptor detection channel collects donor emission.
Acceptor emission bleedthrough: the donor detection channel collects acceptor emission.

2.8.2 Lifetime Method

FRET efficiency (E) can be determined by measuring the fluorescence lifetime of a donor in the absence (𝜏_D) and presence (𝜏_DA) of an acceptor as follows [1]:

E = 1 − 𝜏_DA/𝜏_D   (2.10)

41

100

100 Dsred2 ECFP EGFP EYFP HcRed

90

80 Relative fluorescence

Relative fluorescence

80 70 60 50 40 30

70 60 50 40 30

20

20

10

10

0 300 (a)

Figure 2.9

350

400

450

500

550

Wavelength (nm)

600

650

Dsred2 ECFP EGFP EYFP HcRed

90

0 400

700 (b)

Excitation (a) and Emission (b) spectra of FPs. Source: Tongsheng Chen.

450

500

550

600

650

Wavelength (nm)

700

750

800

2.8 Quantitative FRET Measurement

In practice, the fluorescence lifetime is usually determined by measuring the exponential decay of the fluorescence intensity of a large number of donor molecules. Since the decay curve of the donor fluorescence intensity is generally multi-exponential, E is generally calculated using the donor's average lifetime [20]. The lifetime method is usually considered the gold standard for quantitative FRET measurements [21, 22]. However, it requires detection channels that selectively collect donor fluorescence, and the instruments used are expensive, complex, and require professional operators [20, 23–28].
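Equation (2.10) in code (a minimal sketch; the names are ours, and the two lifetimes may be in any common unit):

```python
def efficiency_from_lifetimes(tau_DA, tau_D):
    """Eq. (2.10): E = 1 - tau_DA / tau_D.

    tau_DA: donor lifetime in the presence of acceptor
    tau_D:  donor lifetime in the absence of acceptor
    """
    return 1.0 - tau_DA / tau_D
```

A donor whose lifetime drops from 4 ns to 2 ns in the presence of an acceptor thus has E = 0.5.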

2.8.3 Complete Acceptor Photobleaching

FRET efficiency (E) can be determined by selectively measuring the donor fluorescence intensity before (I_DD^A) and after (I_DD^B) complete acceptor photobleaching as follows,

E = 1 − I_DD^A / I_DD^B   (2.11)

Generally, maximum acceptor excitation light is used to completely bleach the acceptor. However, it is difficult to achieve complete photobleaching of the acceptor, especially in living cells. In addition, irradiation with strong light induces serious damage to cells. Therefore, this method is generally suitable only for fixed samples with no molecular movement or diffusion.

2.8.4 Partial Acceptor Photobleaching (pbFRET)

To reduce the damage to living cells, Elder et al. proposed a partial acceptor photobleaching-based method to measure FRET efficiency (E) as follows [28]:

E = (1 − I_DD^A/I_DD^B) / [1 − (I_DD^A/I_DD^B)(1 − PB)]   (2.12)

where I_DD^A is the donor fluorescence intensity before partial acceptor photobleaching at donor excitation, I_DD^B is the donor fluorescence intensity after partial acceptor photobleaching at donor excitation, and PB is the acceptor photobleaching degree. Equation (2.12) is only applicable to FRET tandem constructs with one acceptor molecule. For a FRET tandem construct with multiple (n) acceptor molecules, Eq. (2.12) becomes [29]:

E = (1 − I_DD^A/I_DD^B) / [1 − (I_DD^A/I_DD^B)(1 − PB/n)]   (2.13)

In both Eqs. (2.12) and (2.13), the donor detection channel must selectively collect donor emission, and the acceptor excitation must selectively photobleach the acceptor.
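Equations (2.12) and (2.13) can be combined into one helper; for n = 1 and PB = 1 it reduces to the complete-photobleaching formula of Eq. (2.11). This is a sketch with our own names:

```python
def pbfret_efficiency(I_before, I_after, PB, n=1):
    """Eqs. (2.12)-(2.13): FRET efficiency from partial acceptor
    photobleaching of a tandem construct with n acceptors.

    I_before: donor-channel intensity before bleaching (I_DD^A)
    I_after:  donor-channel intensity after bleaching  (I_DD^B)
    PB:       acceptor photobleaching degree, 0 < PB <= 1
    n:        number of acceptors in the tandem construct
    """
    ratio = I_before / I_after
    return (1.0 - ratio) / (1.0 - ratio * (1.0 - PB / n))
```

For example, a single-acceptor construct with E = 0.5 yields donor intensities of 0.5 (before) and 0.75 (after bleaching half the acceptors, PB = 0.5) in units of the unquenched donor signal, and the formula recovers E = 0.5.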


2.8.5 B/C-PbFRET Method

In the following, CH1 and CH2 represent the donor channel and acceptor channel, respectively [30]. C_A−CH1 and C_A−CH2 are the fractions of the total acceptor fluorescence collected in CH1 and CH2, respectively, and C_D−CH1 and C_D−CH2 are the fractions of the total donor fluorescence collected in CH1 and CH2, respectively. Therefore,

C_A−CH1 = I_A(CH1)/I_A^total = ∫_{𝜆1}^{𝜆2} Fil1(𝜆)·SP_a(𝜆) d𝜆 / ∫_0^∞ SP_a(𝜆) d𝜆   (2.14)

C_A−CH2 = I_A(CH2)/I_A^total = ∫_{𝜆3}^{𝜆4} Fil2(𝜆)·SP_a(𝜆) d𝜆 / ∫_0^∞ SP_a(𝜆) d𝜆   (2.15)

C_D−CH1 = I_D(CH1)/I_D^total = ∫_{𝜆1}^{𝜆2} Fil1(𝜆)·SP_d(𝜆) d𝜆 / ∫_0^∞ SP_d(𝜆) d𝜆   (2.16)

C_D−CH2 = I_D(CH2)/I_D^total = ∫_{𝜆3}^{𝜆4} Fil2(𝜆)·SP_d(𝜆) d𝜆 / ∫_0^∞ SP_d(𝜆) d𝜆   (2.17)

where Fil1(𝜆) and Fil2(𝜆) stand for the spectral response of the optical system (detection-channel transmission and camera) for CH1 and CH2, and SP_d(𝜆) and SP_a(𝜆) are the normalized emission spectra of donor and acceptor, respectively. Our previous report demonstrated that Fil1(𝜆) and Fil2(𝜆) can be approximately considered constant for our system [16].

Before photobleaching the acceptor, the fluorescence in CH1 consists of two parts: direct excitation of the donor, and the acceptor emission crosstalk arising from both direct excitation of the acceptor and FRET. The fluorescence in CH2 normally includes three parts: direct excitation of the acceptor, FRET, and the donor emission spectral crosstalk. Therefore, the fluorescence intensity (I_D) in CH1 and the fluorescence intensity (I_A) in CH2 with donor excitation can be expressed as

I_D = I_D(CH1) + (I_A(CH1) + I_FRET(CH1)) = [I_ex·𝜀_D(𝜆_ex)·[D]·(1 − E)·𝜙_D·C_D−CH1 + (I_ex·𝜀_A(𝜆_ex)·[D] + I_ex·𝜀_D(𝜆_ex)·[D]·E)·𝜙_A·C_A−CH1]·S_D   (2.18)

I_A = I_A(CH2) + I_FRET(CH2) + I_D(CH2) = [(I_ex·𝜀_A(𝜆_ex)·[D] + I_ex·𝜀_D(𝜆_ex)·[D]·E)·𝜙_A·C_A−CH2 + I_ex·𝜀_D(𝜆_ex)·[D]·(1 − E)·𝜙_D·C_D−CH2]·S_A   (2.19)

where E is the FRET efficiency of the donor–acceptor pair; I_ex is the excitation laser intensity; 𝜀_D(𝜆_ex) and 𝜀_A(𝜆_ex) are the extinction coefficients of donor and acceptor at donor excitation; 𝜙_D and 𝜙_A are the quantum yields of donor and acceptor; and S_D and S_A are the signal collection coefficients of CH1 and CH2. We normalized I_D and I_A to 100 (in arbitrary units) at the start point of bleaching. Combining Eqs. (2.18) and (2.19), we obtain

S_D/S_A = [(R_ex + E)·𝜙_A·C_A−CH2 + (1 − E)·𝜙_D·C_D−CH2] / [(1 − E)·𝜙_D·C_D−CH1 + (R_ex + E)·𝜙_A·C_A−CH1]   (2.20)


where R_ex, the ratio of 𝜀_A(𝜆_ex) to 𝜀_D(𝜆_ex), is the excitation crosstalk coefficient. Selectively and partially photobleaching acceptors with the maximum dose of the photobleaching laser line leads to a decrease in the fluorescence intensity in CH2 and a corresponding increase in the fluorescence intensity in CH1. We introduce x, the photobleaching degree, to represent the percentage of bleached acceptors. The fluorescence intensity (I_DP) in CH1 and the fluorescence intensity (I_AP) in CH2 with donor excitation after partial photobleaching of acceptors can be given by

I_DP = {[I_ex·𝜀_D(𝜆_ex)·[D] − I_ex·𝜀_D(𝜆_ex)·([D] − [D]x)·E]·𝜙_D·C_D−CH1 + [I_ex·𝜀_A(𝜆_ex)·([D] − [D]x) + I_ex·𝜀_D(𝜆_ex)·([D] − [D]x)·E]·𝜙_A·C_A−CH1}·S_D   (2.21)

I_AP = {[I_ex·𝜀_A(𝜆_ex)·([D] − [D]x) + I_ex·𝜀_D(𝜆_ex)·([D] − [D]x)·E]·𝜙_A·C_A−CH2 + [I_ex·𝜀_D(𝜆_ex)·[D] − I_ex·𝜀_D(𝜆_ex)·([D] − [D]x)·E]·𝜙_D·C_D−CH2}·S_A   (2.22)

Combining Eqs. (2.18)–(2.22), we obtain

ΔI_D = I_DP − I_D = [I_ex·𝜀_D(𝜆_ex)·[D]·x·E·𝜙_D·C_D−CH1 − (I_ex·𝜀_A(𝜆_ex)·[D]·x + I_ex·𝜀_D(𝜆_ex)·[D]·x·E)·𝜙_A·C_A−CH1]·S_D   (2.23)

ΔI_A = I_A − I_AP = [(I_ex·𝜀_A(𝜆_ex)·[D]·x + I_ex·𝜀_D(𝜆_ex)·[D]·x·E)·𝜙_A·C_A−CH2 − I_ex·𝜀_D(𝜆_ex)·[D]·x·E·𝜙_D·C_D−CH2]·S_A   (2.24)

Combining Eqs. (2.23) and (2.24):

S_D/S_A = (ΔI_D/ΔI_A)·[(R_ex + E)·𝜙_A·C_A−CH2 − E·𝜙_D·C_D−CH2] / [E·𝜙_D·C_D−CH1 − (R_ex + E)·𝜙_A·C_A−CH1]   (2.25)

Combining Eqs. (2.20) and (2.25) yields the B-PbFRET formula

E = [−b + √(b² − 4ac)] / (2a)   (2.26)

where

a = (1 + ΔI_A/ΔI_D)·[(𝜙_D/𝜙_A) + (𝜙_D/𝜙_A)·r_ch2·r_ch1 − r_ch1 − (𝜙_D²/𝜙_A²)·r_ch2]

b = (1 + ΔI_A/ΔI_D)·(𝜙_D²/𝜙_A²)·r_ch2 − [1 − (1 + ΔI_A/ΔI_D)·R_ex]·(𝜙_D/𝜙_A) − 2·(1 + ΔI_A/ΔI_D)·R_ex·r_ch1 − [(1 − R_ex)·(ΔI_A/ΔI_D) − R_ex]·(𝜙_D/𝜙_A)·r_ch2·r_ch1

c = −(1 + ΔI_A/ΔI_D)·R_ex²·r_ch1 − (ΔI_A/ΔI_D)·(𝜙_D/𝜙_A)·R_ex·r_ch2·r_ch1 − (𝜙_D/𝜙_A)·R_ex

and where r_ch1 is the ratio of C_A−CH1 to C_D−CH1, representing the degree of acceptor emission spectral crosstalk, and r_ch2 is the ratio of C_D−CH2 to C_A−CH2, representing the degree of donor emission spectral crosstalk.
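The quadratic of Eq. (2.26) is easy to mis-transcribe, so a direct implementation is useful as a check. This is a sketch with our own names; ΔI_D and ΔI_A are the measured intensity changes of Eqs. (2.23) and (2.24):

```python
import math

def bpbfret_efficiency(dI_D, dI_A, Rex, phi_D, phi_A, r_ch1, r_ch2):
    """Solve Eq. (2.26): E = (-b + sqrt(b*b - 4*a*c)) / (2*a)."""
    k = phi_D / phi_A            # quantum-yield ratio
    rho = dI_A / dI_D            # delta-I_A / delta-I_D
    a = (1 + rho) * (k + k * r_ch2 * r_ch1 - r_ch1 - k * k * r_ch2)
    b = ((1 + rho) * k * k * r_ch2
         - (1 - (1 + rho) * Rex) * k
         - 2 * (1 + rho) * Rex * r_ch1
         - ((1 - Rex) * rho - Rex) * k * r_ch2 * r_ch1)
    c = -((1 + rho) * Rex * Rex * r_ch1
          + rho * k * Rex * r_ch2 * r_ch1
          + k * Rex)
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
```

A useful sanity check: in the zero-crosstalk limit (r_ch1 = r_ch2 = 0), the quadratic factorizes and the result reduces to E = 1/(1 + ΔI_A/ΔI_D), independent of R_ex and the quantum yields.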


2.8.6 Binomial Distribution-Based Quantitative FRET Measurement for Constructs with Multiple Acceptors by Partially Photobleaching the Acceptor (Mb-PbFRET)

Multiple-acceptor FRET can potently extend the detectable range of conventional FRET constructs with one acceptor [31]. This is especially important for live-cell measurements, where natural distribution differences and movement of proteins can influence the number of acceptors interacting with a given donor. For example, FRET constructs with one donor and ten acceptors can be used to determine distances as long as 20 nm. Therefore, multiple-acceptor FRET technology may become a powerful tool for molecule–molecule distance determination on a scale larger than 10 nm in biological applications.

In complete acceptor photobleaching, all the acceptors in the sample are photobleached and only donors are left. Partial acceptor photobleaching, however, results in various FRET constructs in the sample, especially for the 1D-nA constructs (see Figure 2.10). There are some empirical methods, including emp-PbFRET and Ma-PbFRET [29, 32]. Here, we introduce a rigorous quantitative measurement method for multiple-acceptor FRET constructs, named Mb-PbFRET.

Let us consider the complicated FRET system after partial acceptor photobleaching of n-acceptor (1D-nA) constructs. The randomness of photobleaching results in an identical photobleaching probability for each acceptor, equal to the photobleaching degree (x). Therefore, partially photobleaching the acceptors of 1D-nA constructs leads to a complicated system consisting of n + 1 kinds of FRET constructs, written with the general symbol (n−i)T-1D-iA, where "T" stands for a photobleached "A," i is the number of intact acceptors, and (n−i) is the number of photobleached acceptors (T); these constructs have different FRET efficiencies and proportions, as depicted in Figure 2.11.
We use E_i to stand for the FRET efficiency of the (n−i)T-1D-iA construct, and P_i(x) to represent the proportion of the (n−i)T-1D-iA construct in the complicated system after partial acceptor photobleaching.

Figure 2.10 Illustration of FRET constructs before and after complete and partial acceptor photobleaching, i = 0, 1, …, n − 1, n. (A and C) Single-acceptor construct; (B and D) multiple-acceptor construct. Source: Tongsheng Chen.

Figure 2.11 FRET constructs of 1D-nA after partial acceptor photobleaching with degree x: construct (n−i)T-1D-iA has FRET efficiency E_i and proportion P_i(x) = C_n^i·(1 − x)^i·x^{n−i}, i = 0, 1, 2, …, n. Source: Reproduced from Ref. [33] Figure 1 with permission of AIP Publishing.

P_i(x) is equal to the probability of photobleaching a 1D-nA construct to an (n−i)T-1D-iA construct. According to binomial distribution theory,

P_i(x) = C_n^i·(1 − x)^i·x^{n−i},  (i = 0, 1, 2, …, n)   (2.27)

where C_n^i = n!/[(n − i)!·i!] and Σ_i P_i(x) = 1. Considering the changes in donor fluorescence caused by photobleaching acceptors, we define R(x) as the ratio of the donor fluorescence intensity after to before partial acceptor photobleaching when the acceptor photobleaching degree is x. Both x and R(x) are directly measured in experiments. On the other hand, the unquenched donor fluorescence before and after bleaching is proportional to (1 − E_n) and Σ_i P_i(x)·(1 − E_i), respectively. So, R(x) is

R(x) = Σ_{i=0}^{n} P_i(x)·y_i   (2.28)

where the linear proportionality constants are

y_i = (1 − E_i)/(1 − E_n)   (2.29)

Rearranging Eq. (2.29) for i = 0, where E_0 = 0 because the nT-1D-0A construct has no intact acceptor, gives the formula to calculate E of the 1D-nA FRET construct as:

E_n = 1 − 1/y_0   (2.30)
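The binomial bookkeeping of Eqs. (2.27)–(2.30) can be sketched as follows (our names; `E_list` holds the efficiencies E_0 … E_n of the photobleached constructs):

```python
from math import comb

def proportions(n, x):
    """Eq. (2.27): P_i(x) = C(n, i) * (1 - x)**i * x**(n - i), i = 0..n."""
    return [comb(n, i) * (1 - x) ** i * x ** (n - i) for i in range(n + 1)]

def donor_recovery(n, x, E_list):
    """Eq. (2.28): R(x) = sum_i P_i(x) * y_i,
    with y_i = (1 - E_i) / (1 - E_n) as in Eq. (2.29)."""
    y = [(1 - Ei) / (1 - E_list[n]) for Ei in E_list]
    return sum(p * yi for p, yi in zip(proportions(n, x), y))

def efficiency_from_y0(y0):
    """Eq. (2.30): E_n = 1 - 1/y_0 (E_0 = 0 for the fully bleached construct)."""
    return 1.0 - 1.0 / y0
```

At x = 1 only the fully bleached construct remains, so R(1) = y_0 and E_n follows directly; in practice y_0 is obtained by fitting measured R(x) versus x.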


2.8.7 3-Cube-Based E-FRET

In 2004, Zal and colleagues proposed an E-FRET microscopic imaging method based on three cubes [17]. This method can be performed on confocal and widefield microscopes [17, 34]. In the E-FRET method, three cubes are used for donor excitation and donor collection (I_DD), acceptor excitation and acceptor collection (I_AA), and donor excitation and acceptor collection (I_DA). The formula of E-FRET is

E·f_D = F_c / (F_c + G·I_DD)   (2.31)

where f_D denotes the proportion of donor–acceptor pairs among all donor molecules; G is constant for a given choice of fluorophores (Q_D and Q_A) and imaging setup, and is independent of the underlying FRET efficiency. Thus, G can be calibrated using a reference FRET sample with a different E_max than in the experiment, as long as exposure timings remain proportional and the same donor and acceptor fluorophores are used. F_c is the acceptor-sensitized fluorescence due to FRET:

F_c = I_DA − a·(I_AA − c·I_DD) − d·(I_DD − b·I_AA)   (2.32)

where a, b, c, and d are spectral crosstalk parameters that can be premeasured using donor-only and acceptor-only samples, respectively, as follows [26]:

a = I_DA(A)/I_AA(A),  b = I_DD(A)/I_AA(A),  c = I_AA(D)/I_DD(D),  d = I_DA(D)/I_DD(D)   (2.33)

where I represents the pixel-by-pixel image intensity, minus background, using the combination of excitation and emission filters indicated by the lower index, for samples containing only donor or acceptor, as indicated in parentheses. A relatively simple and direct way to measure G is to merge donor recovery after acceptor photobleaching with sensitized-emission imaging. For a standard tandem FRET construct, incomplete photobleaching of the acceptor gives the formula for experimental determination of G [17]:

G = (F_c − F_c^post) / (I_DD^post − I_DD)   (2.34)

where the superscript post denotes the measured intensity after partial acceptor photobleaching, and F_c^post = I_DA^post − a·(I_AA^post − c·I_DD^post) − d·(I_DD^post − b·I_AA^post).

In 2006, Chen and colleagues proposed a method of using two tandem FRET constructs with different FRET efficiencies to determine the G parameter as follows [34]:

G = (F_c^1/I_AA^1 − F_c^2/I_AA^2) / (I_DD^2/I_AA^2 − I_DD^1/I_AA^1)   (2.35)

where superscripts 1 and 2 denote the two different constructs.


Once the G factor is determined, the total donor fluorescence can be numerically restored. We can thus determine the k factor using a 1:1 donor–acceptor fusion construct from [34]

k = (I_DD + F_c/G) / I_AA   (2.36)

Once the k and G factors are determined for a particular donor and acceptor FP pair, one can measure the relative abundance of the donor and acceptor FPs or FP-tagged proteins, regardless of stoichiometry (R_t), from

R_t = C_A^t/C_D^t = k·I_AA / (I_DD + F_c/G)   (2.37)
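The three-cube arithmetic of Eqs. (2.31) and (2.32) is a small pixel-wise computation; a minimal sketch follows (names are ours; crosstalk parameters a, b, c, d as in Eq. (2.33)):

```python
def sensitized_emission(I_DD, I_AA, I_DA, a, b, c, d):
    """Eq. (2.32): F_c = I_DA - a*(I_AA - c*I_DD) - d*(I_DD - b*I_AA)."""
    return I_DA - a * (I_AA - c * I_DD) - d * (I_DD - b * I_AA)

def efret(I_DD, I_AA, I_DA, a, b, c, d, G):
    """Eq. (2.31): E*f_D = F_c / (F_c + G*I_DD)."""
    Fc = sensitized_emission(I_DD, I_AA, I_DA, a, b, c, d)
    return Fc / (Fc + G * I_DD)
```

In an imaging pipeline these two functions would be applied per pixel to the background-subtracted I_DD, I_AA, and I_DA images.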

2.8.8 Quantitative FRET Measurement Based on Linear Spectral Unmixing of Emission Spectra (Em-spFRET)

With the development of spectral technology and instruments, spectral linear unmixing has been widely applied to quantitative FRET measurement. The emission spectrum of a FRET sample is a linear superposition of donor and acceptor emission spectra, while the emission spectrum of the acceptor consists of two parts: the direct acceptor emission and the FRET-sensitized acceptor emission. Because of the difference in spectral shape, it is easy to separate the donor emission spectrum from the acceptor emission spectrum by linear spectral unmixing. However, since the direct and FRET-sensitized acceptor emission spectra have exactly the same shape, it is impossible to resolve them directly by linear spectral unmixing. Therefore, it is necessary to predetermine the acceptor excitation crosstalk for quantitative FRET measurement. Three kinds of quantitative FRET detection based on linear spectral unmixing of the emission are described below.

2.8.8.1 Lux-FRET Method

In 2008, Wlodarczyk and colleagues established the Lux-FRET method, which uses two excitations at different wavelengths [35]. Consider a FRET sample that contains a free donor at concentration [D], a free acceptor at [A], and the complex at [DA], where the capital letters denote molecules with intact labels (the influence of incomplete labeling is considered in the next section). The measured emission spectrum is a linear combination of five contributions: two with the emission characteristics of the donor (free donors and the unquenched part of the fluorescence from donors within FRET complexes) and three with the emission characteristics of the acceptor (direct excitation of free acceptors, direct excitation of acceptors within FRET pairs, and sensitized emission):

$$F^i(\lambda) = I^i\eta^i(\lambda)\left\{\varepsilon_D^i Q_D e_D(\lambda)[D] + \varepsilon_D^i Q_D e_D(\lambda)[DA](1-E) + \varepsilon_A^i Q_A e_A(\lambda)[A] + \varepsilon_A^i Q_A e_A(\lambda)[DA] + \varepsilon_D^i Q_A e_A(\lambda)[DA]E\right\} \tag{2.38}$$


2 Fluorescence Resonance Energy Transfer (FRET)

The excitation wavelengths should be selected such that one of them (i = 1) excites mainly the donor fluorophore, while the other (i = 2) excites mainly the acceptor. For calibration samples with donor and acceptor fluorophore concentrations $[D^{\mathrm{ref}}]$ and $[A^{\mathrm{ref}}]$, respectively, the spectral intensities are given by

$$F_D^{i,\mathrm{ref}}(\lambda) = I^{i,\mathrm{ref}}\varepsilon_D^i Q_D \eta^i(\lambda)e_D(\lambda)[D^{\mathrm{ref}}] \tag{2.39}$$

$$F_A^{i,\mathrm{ref}}(\lambda) = I^{i,\mathrm{ref}}\varepsilon_A^i Q_A \eta^i(\lambda)e_A(\lambda)[A^{\mathrm{ref}}] \tag{2.40}$$

where $I^{i,\mathrm{ref}}$ is the excitation intensity; $\varepsilon_D^i$ and $\varepsilon_A^i$ are the extinction coefficients of donor and acceptor at the two excitation wavelengths $\lambda^i$ (i = 1, 2); $Q_D$ and $Q_A$ are the quantum yields of donor and acceptor; and $e_D(\lambda)$ and $e_A(\lambda)$ are the standard emission spectra of the two fluorophores normalized to unit area. The functions $\eta^i$ are the detection efficiencies of the instrument and may differ between excitation wavelengths because of differences in filters. From the reference measurements (Eqs. (2.39) and (2.40)), excitation ratios can be defined as

$$r^{\mathrm{ex},i} = \frac{F_A^{i,\mathrm{ref}}(\lambda)/\left[Q_A e_A(\lambda)\right]}{F_D^{i,\mathrm{ref}}(\lambda)/\left[Q_D e_D(\lambda)\right]} = \frac{\varepsilon_A^i[A^{\mathrm{ref}}]}{\varepsilon_D^i[D^{\mathrm{ref}}]} \tag{2.41}$$

Substituting Eqs. (2.39) and (2.40) into (2.38), we obtain

$$F^i(\lambda) = \frac{I^i}{I^{i,\mathrm{ref}}}\left[\frac{[D]+(1-E)[DA]}{[D^{\mathrm{ref}}]}F_D^{i,\mathrm{ref}}(\lambda) + \frac{[A]+\left(1+E\varepsilon_D^i/\varepsilon_A^i\right)[DA]}{[A^{\mathrm{ref}}]}F_A^{i,\mathrm{ref}}(\lambda)\right] \tag{2.42}$$

When the imaging conditions for the FRET sample and the reference sample are the same, $F^i(\lambda)$ can be decomposed by linear spectral unmixing as follows:

$$F^i(\lambda) = \delta^i F_D^{i,\mathrm{ref}}(\lambda) + \alpha^i F_A^{i,\mathrm{ref}}(\lambda) \tag{2.43}$$

where $\delta^i$ and $\alpha^i$ are the relative concentrations of donor and acceptor:

$$\delta^i = \frac{[D]+(1-E)[DA]}{[D^{\mathrm{ref}}]} \tag{2.44}$$

$$\alpha^i = \frac{[A]+\left(1+E\varepsilon_D^i/\varepsilon_A^i\right)[DA]}{[A^{\mathrm{ref}}]} \tag{2.45}$$

Therefore,

$$Ef_D = E\frac{[DA]}{[D]+[DA]} = \frac{\alpha^2-\alpha^1}{(r^{\mathrm{ex},2}-r^{\mathrm{ex},1})\delta^1+\alpha^2-\alpha^1} \tag{2.46}$$

$$Ef_A = E\frac{[DA]}{[A]+[DA]} = \frac{[D^{\mathrm{ref}}]}{[A^{\mathrm{ref}}]}\cdot\frac{\alpha^2-\alpha^1}{\alpha^1 r^{\mathrm{ex},2}-\alpha^2 r^{\mathrm{ex},1}} \tag{2.47}$$

$$R_C = \frac{[DA]+[A]}{[DA]+[D]} = \frac{[A^{\mathrm{ref}}]}{[D^{\mathrm{ref}}]}\cdot\frac{\alpha^1 r^{\mathrm{ex},2}-\alpha^2 r^{\mathrm{ex},1}}{(r^{\mathrm{ex},2}-r^{\mathrm{ex},1})\delta^1+\alpha^2-\alpha^1} \tag{2.48}$$

where $[D^{\mathrm{ref}}]/[A^{\mathrm{ref}}]$ can be predetermined by using a tandem reference:

$$R_C^T = \frac{[D^{\mathrm{ref}}]}{[A^{\mathrm{ref}}]} = \frac{\alpha^1 r^{\mathrm{ex},2}-\alpha^2 r^{\mathrm{ex},1}}{(r^{\mathrm{ex},2}-r^{\mathrm{ex},1})\delta^1+\alpha^2-\alpha^1} \tag{2.49}$$
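The unmixing step of Eq. (2.43) and the subsequent apparent-efficiency calculation of Eq. (2.46) can be sketched numerically. In the sketch below, the Gaussian reference spectra, the mixing coefficients, and the excitation ratios are all invented purely for illustration:

```python
import numpy as np

lam = np.linspace(450.0, 600.0, 151)             # emission wavelengths (nm)
# hypothetical Gaussian stand-ins for the reference spectra F_D^{i,ref}, F_A^{i,ref}
FD_ref = np.exp(-0.5 * ((lam - 480.0) / 15.0) ** 2)
FA_ref = np.exp(-0.5 * ((lam - 530.0) / 18.0) ** 2)

def unmix(F):
    """Least-squares solution of Eq. (2.43): F = delta*FD_ref + alpha*FA_ref."""
    M = np.column_stack([FD_ref, FA_ref])
    delta, alpha = np.linalg.lstsq(M, F, rcond=None)[0]
    return delta, alpha

# forward-simulate one FRET sample under both excitations (made-up coefficients)
delta_true = {1: 0.7, 2: 0.2}
alpha_true = {1: 0.4, 2: 0.9}
r_ex = {1: 0.05, 2: 1.2}                         # excitation ratios, Eq. (2.41)

d1, a1 = unmix(delta_true[1] * FD_ref + alpha_true[1] * FA_ref)
d2, a2 = unmix(delta_true[2] * FD_ref + alpha_true[2] * FA_ref)

# apparent donor-centric FRET efficiency, Eq. (2.46)
EfD = (a2 - a1) / ((r_ex[2] - r_ex[1]) * d1 + a2 - a1)
```

On noise-free synthetic spectra the least-squares fit recovers the mixing coefficients exactly; with real data the same pipeline applies, with the reference spectra measured on donor-only and acceptor-only samples.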


2.8.8.2 SpRET Method

In 2011, Levy and colleagues developed a method for highly sensitive and reliable spectral measurement of absolute FRET efficiency (SpRET) [36]. For a test sample, the emission spectra at 405 nm excitation ($\mathrm{Em}_s^{405}$) and 488 nm excitation ($\mathrm{Em}_s^{488}$) are

$$\mathrm{Em}_s^{405} = \sum_X W_X^{405}\times X_s^{405} \quad (X = \mathrm{BG,\ AU,\ D,\ and\ A}) \tag{2.50}$$

$$\mathrm{Em}_s^{488} = \sum_X W_X^{488}\times X_s^{488} \quad (X = \mathrm{BG,\ AU,\ and\ A}) \tag{2.51}$$

where $W_X^{405}$ and $W_X^{488}$ are the weight factors of background (BG), autofluorescence (AU), donor (D), and acceptor (A), and $X_s^{405}$ and $X_s^{488}$ are the corresponding fingerprint spectra. An acceptor-only sample is used to obtain the acceptor excitation crosstalk parameter SF:

$$SF = W_A^{405}/W_A^{488} \tag{2.52}$$

Thus, the acceptor-sensitized signal of a FRET sample is $SE = W_A^{405} - W_A^{488}\times SF$. The FRET efficiency E is

$$E = \frac{\kappa_t}{\kappa_f+\kappa_t+\kappa_{nr}} \tag{2.53}$$

where $\kappa_t$ is the RET rate, $\kappa_f$ is the fluorescence rate, and $\kappa_{nr}$ is the nonradiative rate. In steady-state fluorescence intensity measurements, the number of donor molecules excited during a given time period equals the number of donor molecules relaxing back to the ground state through these three pathways:

$$\mathrm{Ex}_D = \mathrm{Em}_D + \mathrm{ret} + \mathrm{nr} \tag{2.54}$$

where $\mathrm{Ex}_D$ is the number of excited donor molecules, $\mathrm{Em}_D$ is the number of photons emitted from donor molecules as fluorescence, ret is the number of photons transferred to acceptor molecules by FRET, and nr is the number of nonradiative events. Therefore,

$$\mathrm{Em}_D = \frac{\kappa_f}{\kappa_f+\kappa_t+\kappa_{nr}}\,\mathrm{Ex}_D,\qquad \mathrm{ret} = \frac{\kappa_t}{\kappa_f+\kappa_t+\kappa_{nr}}\,\mathrm{Ex}_D,\qquad \mathrm{nr} = \frac{\kappa_{nr}}{\kappa_f+\kappa_t+\kappa_{nr}}\,\mathrm{Ex}_D$$

$\mathrm{Em}_D$ is directly proportional to $W_D^{405}$: $\mathrm{Em}_D = K\times W_D^{405}$, where K is a constant related to the instrument setup. Similarly,

$$\mathrm{ret} = K\times \frac{SE}{\phi_a} \tag{2.55}$$


Define $R_D$ as the ratio of ret to $\mathrm{Em}_D$:

$$R_D = \frac{\mathrm{ret}}{\mathrm{Em}_D} = \frac{SE}{W_D^{405}\times\phi_a}$$

and

$$E = \frac{\kappa_f\times R_D}{\kappa_f\times(R_D+1)+\kappa_{nr}} \tag{2.56}$$

while

$$\kappa_{nr} = \frac{\kappa_f(1-\phi_d)}{\phi_d}$$

Thus

$$E = \frac{SE\times\phi_d}{SE\times\phi_d+W_D^{405}\times\phi_a} \tag{2.57}$$

$$[A^t] = K_A\times\left(W_A^{488}/\phi_a\right) \tag{2.58}$$

$$[D^t] = K_D\times\left(\frac{W_D^{405}}{\phi_d}+\frac{SE}{\phi_a}\right) \tag{2.59}$$

and

$$R_C = \frac{[A^t]}{[D^t]} = P_{AD}\times\frac{W_A^{488}\times\phi_d}{SE\times\phi_d+W_D^{405}\times\phi_a} \tag{2.60}$$

where $P_{AD} = K_A/K_D$ can be predetermined by using a tandem FRET construct with a 1:1 donor : acceptor ratio.

2.8.8.3 Iem-spFRET Method

Based on a novel notion of predetermining the molar extinction coefficient ratio (RC) of acceptor to donor to correct for acceptor excitation crosstalk, the Tongsheng Chen group presented a robust and independent emission-spectral unmixing FRET methodology, Iem-spFRET, which can simultaneously measure the E and RC of a FRET sample without any external references. Iem-spFRET thus circumvents the rigorous restriction of keeping the same imaging conditions for all FRET experiments and can be used for the direct measurement of FRET samples [31, 37, 38]. For the test measurement, we consider a sample that contains a free donor at concentration [D] and a free acceptor at [A] as well as the donor–acceptor complex at [DA]. The emission spectrum is a linear combination of five contributions: from the free donor, the paired donor, the free acceptor, and the paired acceptor (including both direct excitation and sensitized emission):

$$F^i(\lambda) = I^i\eta^i K(\lambda)\left\{\varepsilon_D^i Q_D e_D(\lambda)[D] + \varepsilon_D^i Q_D e_D(\lambda)[DA](1-E) + \varepsilon_A^i Q_A e_A(\lambda)[A] + \varepsilon_A^i Q_A e_A(\lambda)[DA] + \varepsilon_D^i Q_A e_A(\lambda)[DA]E\right\} \tag{2.61}$$

where $K(\lambda)$ is the spectral response curve of the system. Equation (2.61) can be expressed as

$$S^i(\lambda) = \alpha^i e_D(\lambda) + \beta^i e_A(\lambda) \tag{2.62}$$


where

$$\alpha^i = I^i\varepsilon_D(\lambda^i)\left(m[mD{\sim}nA] + [D] - m[mD{\sim}nA]E\right) \tag{2.63}$$

$$\beta^i = I^i\varepsilon_D(\lambda^i)\left[m[mD{\sim}nA]E\,Q_A/Q_D + \gamma^i\left(n[mD{\sim}nA]+[A]\right)Q_A/Q_D\right] \tag{2.64}$$

Define $\delta^i$ as

$$\delta^i = \frac{\alpha^i}{\beta^i} = \frac{m[mD{\sim}nA] + [D] - m[mD{\sim}nA]E}{m[mD{\sim}nA]E\,Q_A/Q_D + \gamma^i\left(n[mD{\sim}nA]+[A]\right)Q_A/Q_D} \tag{2.65}$$

Combining Eqs. (2.63) and (2.64), we obtain

$$Ef_D = E\frac{m[mD{\sim}nA]}{[D^t]} = \frac{\delta^2\gamma^2-\delta^1\gamma^1}{\delta^1\delta^2\left[\gamma^2-\gamma^1\right]Q_A/Q_D+\delta^2\gamma^2-\delta^1\gamma^1} \tag{2.66}$$

$$R_C = \frac{[A^t]}{[D^t]} = \frac{\delta^1-\delta^2}{\delta^1\delta^2\left[\gamma^2-\gamma^1\right]Q_A/Q_D+\delta^2\gamma^2-\delta^1\gamma^1} \tag{2.67}$$

where $\gamma(\lambda^i)$ can be premeasured by using a tandem FRET construct; in this case, [D] = [A] = 0, and solving Eq. (2.65) gives

$$\gamma(\lambda^i) = \frac{Q_D - Q_A E\delta^i - EQ_D}{Q_A\delta^i} \tag{2.68}$$

If the E of this reference construct is 0, then

$$\gamma(\lambda^i) = \frac{Q_D}{Q_A\delta^i} \tag{2.69}$$
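Given unmixed δ and γ values at the two excitation wavelengths, Eqs. (2.66)–(2.68) reduce to a few arithmetic operations. The following sketch uses invented numbers purely to exercise the formulas:

```python
def iem_spfret(d1, d2, g1, g2, qa_over_qd):
    # Eqs. (2.66) and (2.67): apparent efficiency EfD and
    # acceptor-to-donor concentration ratio RC
    denom = d1 * d2 * (g2 - g1) * qa_over_qd + d2 * g2 - d1 * g1
    EfD = (d2 * g2 - d1 * g1) / denom
    RC = (d1 - d2) / denom
    return EfD, RC

def gamma_from_tandem(E, delta, QD, QA):
    # Eq. (2.68): calibration of gamma from a tandem construct of known E;
    # with E = 0 this reduces to Eq. (2.69), QD/(QA*delta)
    return (QD - QA * E * delta - E * QD) / (QA * delta)

EfD, RC = iem_spfret(d1=2.0, d2=1.0, g1=0.5, g2=1.5, qa_over_qd=0.8)
gamma0 = gamma_from_tandem(E=0.0, delta=2.0, QD=0.6, QA=0.8)   # 0.6/1.6 = 0.375
```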

2.9 Conventional Instrument for FRET Measurement

2.9.1 Fluorescence Lifetime Detector

Fluorescence lifetime detection technology has been widely used to study material properties and biological problems. Fluorescence lifetime imaging (FLIM) combined with microscopic imaging has become an important tool for studying the dynamic behavior of intracellular events in living cells [22, 39]. The fluorescence lifetime of a fluorophore is independent of its concentration. FLIM can be used to obtain the fluorescence lifetime of the donor when the donor and acceptor coexist, and the FRET efficiency of a donor–acceptor sample can then be obtained by using Eq. (2.10). There are two main methods for measuring fluorescence lifetime: the time-domain method and the frequency-domain method. In time-domain measurement, picosecond or femtosecond pulses are used as excitation light, and the time-correlated single-photon counting technique is used for detection. Although the hardware requirements are high, this approach can be combined with multiphoton excitation scanning microscopy for FLIM imaging with high spatio-temporal resolution. Frequency-domain measurement uses modulated excitation light. These instruments have relatively low hardware


requirements and offer fast imaging speed, but they tend to be wide-field FLIM systems and therefore have lower spatial resolution. Moreover, when both donor and acceptor are present, FLIM of the donor requires selective detection of the donor fluorescence, which is often difficult for donor–acceptor pairs with severely overlapping emission spectra (e.g. the GFP and YFP pair). Therefore, an appropriate donor–acceptor pair must be selected carefully. In addition, FLIM detection generally requires strong excitation light and a long detection time, which may damage living cells.
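Although Eq. (2.10) is not reproduced in this excerpt, the standard lifetime route to FRET efficiency is $E = 1 - \tau_{DA}/\tau_D$, where $\tau_{DA}$ and $\tau_D$ are the donor lifetimes measured with and without the acceptor. A one-line sketch with invented lifetimes:

```python
def fret_efficiency_from_lifetimes(tau_da, tau_d):
    # E = 1 - tau_DA / tau_D: FRET shortens the donor lifetime,
    # independently of fluorophore concentration
    return 1.0 - tau_da / tau_d

# hypothetical lifetimes (ns): quenched donor 1.5 ns, donor alone 2.5 ns
E = fret_efficiency_from_lifetimes(1.5, 2.5)   # 0.4
```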

2.9.2 Widefield Microscope

The wide-field fluorescence microscope is one of the most commonly used instruments in materials science, the life sciences, and medicine, and it is the conventional instrument for quantitative FRET imaging. The best-known quantitative imaging technique, 3-cube FRET, can be performed on fluorescence microscopes. Quantitative 3-cube FRET detection is generally performed with three filter cubes, namely the DD cube, AA cube, and DA cube, each comprising an excitation filter, a dichroic mirror, and an emission filter. The DD cube is used to excite the donor and selectively detect donor fluorescence ($I_{DD}$). The AA cube is used to selectively excite the acceptor and detect acceptor fluorescence ($I_{AA}$). The DA cube is used to excite the donor and detect the FRET-sensitized acceptor fluorescence ($I_{DA}$). A wide-field fluorescence microscope is very stable; a large number of experimental studies have shown that its performance remains stable for at least three months. Therefore, once calibrated, quantitative measurements can be made over a long period without systematic correction on a wide-field fluorescence microscope. In fact, quantitative FRET detection based on partial acceptor photobleaching can also be rapidly achieved with the 3-cube approach [29, 31].
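The crosstalk coefficients that enter the $F_c$ expression of Section 2.8 are typically calibrated from donor-only and acceptor-only control cells imaged with the same three cubes. One common convention is sketched below; exact definitions vary between publications, so treat these as illustrative rather than the chapter's prescribed formulas:

```python
def crosstalk_from_controls(donor_only, acceptor_only):
    """Crosstalk coefficients a, b, c, d from three-channel (DD, DA, AA)
    mean intensities of donor-only and acceptor-only control samples.
    One common convention, consistent with
    Fc = IDA - a*(IAA - c*IDD) - d*(IDD - b*IAA)."""
    a = acceptor_only["DA"] / acceptor_only["AA"]   # acceptor bleed into DA channel
    b = acceptor_only["DD"] / acceptor_only["AA"]   # acceptor bleed into DD channel
    c = donor_only["AA"] / donor_only["DD"]         # donor bleed into AA channel
    d = donor_only["DA"] / donor_only["DD"]         # donor bleed into DA channel
    return a, b, c, d

# hypothetical control intensities
a, b, c, d = crosstalk_from_controls(
    donor_only={"DD": 100.0, "DA": 5.0, "AA": 1.0},
    acceptor_only={"DD": 2.0, "DA": 10.0, "AA": 100.0},
)
```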

2.9.3 Confocal Fluorescence Microscope

Compared with the wide-field fluorescence microscope, the laser confocal fluorescence microscope offers higher spatial resolution and a higher degree of automation, so it can realize dynamic quantitative FRET imaging with high spatio-temporal resolution. In particular, combined with multiphoton excitation, quantitative FRET imaging analysis with even higher spatial resolution can be achieved. With the development and widespread use of spectral confocal fluorescence microscopy and its combination with FLIM technology, nearly all quantitative FRET detection methods can be implemented on this instrument. It should be noted, however, that the spectral response function of a confocal fluorescence microscope may vary slightly each time it is turned on, so the system should be recalibrated for every quantitative FRET measurement.


2.9.4 Fluorescence Spectrometer

Owing to its high spectral resolution, a fluorescence spectrometer makes it easy to select a detection channel for selective donor fluorescence detection, so it is particularly suitable for the quantitative detection of liquid FRET samples. However, because a spectrometer measures the ensemble signal, a large number of living FRET samples are required for FRET measurement. When a spectrometer is combined with a fluorescence microscope, quantitative FRET measurement in a single living cell can be performed [22, 38].

2.10 Applications of FRET in Biomedicine

Owing to its advantages of fast analysis, high sensitivity, good selectivity, and little or no pollution, FRET has been widely applied to study the structure and dynamics of biomolecules, for example in nucleic acid analysis, immunoassays, protein analysis, carbohydrate analysis, and clinically relevant drug analysis. There are three types of FRET application: intermolecular interactions (see Figure 2.12), kinase activity (see Figure 2.13), and molecular conformational changes (see Figure 2.14). Below, examples of using FRET to study intracellular molecular behavior in single living cells, with FPs as donor and acceptor labels on target proteins, illustrate these three types of FRET methods.

2.10.1 Protein–Protein Interactions

The mysteries of life science, including growth and development, genetic variation, cognition and behavior, and evolution and adaptability, can be traced to proteins and their interactions with other biological molecules. Protein science studies the temporal and spatial distribution, structure, function, and interaction modes of proteins, aiming to reveal the nature and laws of life activities; it is a golden key to understanding life phenomena and a door to human health. In-depth study of the complex and diverse structures, functions, interactions, and dynamic changes of proteins will reveal the nature of life phenomena at the molecular, cellular, and organismal levels, which is a main task of the post-genome era. It is therefore necessary to develop advanced materials, methods, and technologies to analyze the complex structure, function, interactions, and dynamic changes of proteins.

Figure 2.12 Type 1: Interaction. Source: Tongsheng Chen.


Figure 2.13 Type 2: Kinase activity assay. Source: Tongsheng Chen.

Figure 2.14 Type 3: Conformational change of molecule. Source: Tongsheng Chen.

In the life activities of a cell, proteins and other biomolecules together form complex interaction and regulatory networks. Protein–protein interactions balance and constitute signaling pathways, which are an important molecular basis of complex life activities. It is still difficult to study proteins and their interactions in living cells in real time, dynamically, and at high resolution, so more reliable, sensitive, specific, and practical research techniques are urgently needed. FRET imaging is among the best tools for studying protein–protein interactions in living cells, and optical molecular probes based on GFP and its mutants have opened up a new field for studying protein functions in living cells and in vivo. As early as 1972, Kerr, Wyllie, and Currie found, from cell morphology, ultrastructure, and biochemical changes, that there are two types of cell death: the well-known necrosis and the then newly described programmed cell death (PCD), also called apoptosis. Apoptosis plays an important role in biological development and in the maintenance of normal physiological activities. It is a general physiological phenomenon: the spontaneous, programmed death of cells that maintains the stability of the organism under certain physiological or pathological conditions. According to the apoptotic signal, apoptosis can proceed through three signaling pathways: the death receptor pathway, the mitochondrial pathway (intrinsic apoptosis pathway), and the endoplasmic reticulum stress pathway. In the mitochondrial apoptosis pathway, the release of apoptotic factors through pores in the mitochondrial membrane and the decline of membrane potential depend on the formation of apoptotic channels in the mitochondrial membrane; Bax and Bak play the major roles in forming these channels and inducing apoptosis, and their pro-apoptotic activities are strictly controlled [40]. Normally,


Figure 2.15 The balance of anti-apoptotic and pro-apoptotic BCL-2 proteins dictates cellular fate. (a) The rheostat model. In a hypothetical basal state, the number of anti-apoptotic and pro-apoptotic molecules is equal; tipping this balance dictates cellular fate. If a stress (e.g. DNA damage) is applied, the induction of pro-apoptotic molecules provides the signal to engage MOMP. On the contrary, growth factor addition would promote cellular survival by increasing the amount of anti-apoptotic proteins. (b) The anti-apoptotic protein neutralization model. The BH3-only proteins BID, BIM, and PUMA engage MOMP because they bind and neutralize all anti-apoptotic BCL-2 members. A combination of other BH3-only proteins is required to promote apoptosis because each neutralizes only a subset of anti-apoptotic BCL-2 proteins (e.g. BAD and Noxa neutralize BCL-2/BCL-xL/BCL-w and MCL-1/A1, respectively). This model contends that the BCL-2 effector molecules are sufficiently active to oligomerize and promote MOMP once anti-apoptotic proteins are neutralized. Source: Reproduced from Ref. [41] Figure 2 with permission of Elsevier.

Bax and Bak exist in cells as non-activated monomers. Bax is mainly distributed in the cytoplasm, while Bak resides in the outer mitochondrial membrane. In response to an apoptotic signal, Bax undergoes a conformational change, translocates to mitochondria, and localizes there by inserting its C-terminal membrane anchor into the outer mitochondrial membrane; a permeable pore is then formed by oligomerization to mediate the release of apoptotic factors and the decrease of membrane potential. Bak is natively present in the outer mitochondrial membrane through its C-terminal membrane anchor, and it also oligomerizes during apoptosis. Bcl-2 family proteins play a key role in mitochondrial apoptosis. Figure 2.15 shows the regulatory network of Bcl-2 proteins [41, 42]. In the "mitochondrial" or "intrinsic" apoptosis pathway, mitochondrial outer membrane permeabilization (MOMP) represents the key


decisive process [43]. MOMP enables the release of mitochondrial intermembrane proteins into the cytosol, which then activate a family of cytosolic cysteine proteases, the caspases. Bax and Bak are multi-domain proapoptotic Bcl-2 family proteins required for MOMP. Although Bak is localized to mitochondria in nonapoptotic cells, large quantities of Bax are found in the cytosol and only translocate to mitochondria during apoptosis. To activate MOMP, Bax and Bak undergo specific conformational changes that enable both proteins to oligomerize and anchor in the mitochondrial outer membrane. Although the mechanisms of Bax and Bak activation during apoptosis are still an area of intensive research, evidence is growing that specific proapoptotic proteins of the Bcl-2 super-family, in particular the Bcl-2 homology-domain 3 (BH3)-only proteins tBid and Bim, may directly activate Bax and Bak. Bax and Bak homo-tetramers and higher oligomers are believed to physically form release channels in the mitochondrial outer membrane large enough to allow the release of intermembrane proteins. In 2010, the Prehn group performed a quantitative confocal imaging and mathematical modeling study of the temporal and spatial dynamics of Bax translocation and oligomerization relative to MOMP initiation in single living cells [43]. To establish a valid reporter model for Bax signaling, DU-145 cells, which are devoid of endogenous Bax expression, were stably transfected with expression vectors for YFP-Bax and/or CFP-Bax. DU-145 cells overexpressing wild-type non-tagged Bax served as an internal control. Meanwhile, TMRM (red) was used to mark mitochondria. TMRM is a membrane-potential-dependent probe whose fluorescence intensity decreases as the membrane potential declines; changes in mitochondrial membrane potential can therefore be detected in real time by monitoring the TMRM fluorescence intensity in cells. The time dynamics of the FRET efficiency and the changes of mitochondrial membrane potential in the cells were recorded after STS apoptosis stimulation (Figure 2.16). This study provides quantitative data on the process of Bax activation in intact cells and suggests that Bax pore formation and MOMP constitute an ultra-sensitive and robust process: Bax activation exceeds by far the quantities required for MOMP induction, and minimal Bax or Bak activation may be sufficient to trigger rapid pore formation [43].

2.10.2 Activation and Degradation of Protein Kinases

The activation and degradation of protein kinases and proteases are important life activities and often determine the fate of cells. Therefore, studying the activation and degradation of these intracellular enzymes is also key to studying cell signal transduction mechanisms. For example, caspases are important proteases that determine apoptosis [44–52]. Caspases (cysteinyl aspartate-specific proteinases) are a group of cysteine proteases with similar amino acid sequences and secondary structures; they are closely related to eukaryotic apoptosis and are also involved in cytokine maturation and in cell growth and differentiation.


Figure 2.16 Kinetics of Bax oligomerisation on the single-cell level as detected by FRET resemble kinetics of Bax translocation to polarized mitochondria. (a) Scheme of the CFP-Bax/YFP-Bax FRET approach. In untreated cells, Bax molecules do not interact and thus do not allow for resonance energy transfer between donor (CFP) and acceptor (YFP) fluorophores. During apoptosis, Bax interaction results in close proximity of donor and acceptor, allowing for YFP emission on CFP excitation. Acceptor photobleaching results in donor unquenching and serves as a control for FRET. (b, c) Acceptor photobleaching in a DU-145 cell expressing CFP-Bax and YFP-Bax undergoing apoptosis after treatment with 3 μM STS shows an increase in the donor (CFP) fluorescence intensity and a decrease in the sensitized YFP emission. YFP was bleached using 100% of the 514 nm line of the Argon laser running at 50% of its maximal power until no further decrease of the YFP fluorescence could be observed. Fluorescence emission spectra were recorded before and after the bleaching process using 4% excitation intensity of the 405 nm laser with the spectral detector set to a resolution of Δλ = 10.75 nm. Scale bar in confocal scans: 10 μm. (d) Time-lapse imaging of a representative DU-145 cell expressing CFP-Bax and YFP-Bax, stained with 30 nM TMRM and treated with 3 μM STS. Time stamps indicate time after STS addition. Scale bar = 10 μm. TMRM indicates polarized mitochondria. The TMRM image was used for cell segmentation to measure Bax translocation to and oligomerisation at mitochondria. (e) Single-cell kinetics plotted for the cell presented in (d). The vertical dashed line indicates the onset of mitochondrial depolarisation (TMRM channel). Bax translocation to and oligomerisation at polarized mitochondria (fourth and second panels, respectively) preceded the onset of TMRM loss, whereas Bax oligomerisation in depolarised mitochondria and other cell regions was only detected later (third panel). (f) Quantification and sensitivity of Bax translocation/oligomerisation measurements during STS-induced apoptosis. Bax translocation to and oligomerisation at polarised mitochondria seem to coincide in DU-145 cells. Bax oligomerisation in the rest of the cell was detected only with a significant delay. Whole-cell analysis of YFP-Bax redistribution (S.D. of the YFP signal) was less sensitive than the segmentation approach for polarised mitochondria. The onset of ΔΨM depolarisation was taken as the time of reference. Bax oligomerisation in the remaining cell (FRET at depolarised mitochondria and other cell regions) was significantly delayed relative to the FRET increase at polarised mitochondria (n = 25 cells from nine experiments analysed, P < 0.05, paired sample t-test). Source: Düssmann et al. [43], Reproduced with permission of Springer Nature.

At present, apoptosis is considered a cascade of active cell death triggered by various death signals. There are at least 11 members of the human caspase family, including caspase-1 to caspase-10 and caspase-13, while caspase-11 and caspase-12 exist only in mice. Caspase-14 has recently been found in mice. According to the similarity of their protease sequences, caspases can be divided into three subfamilies: the caspase-1 subfamily (caspase-1, -4, -5, and -13); the caspase-2 subfamily (caspase-2 and -9); and the caspase-3 subfamily (caspase-3, -6, -7, -8, and -10). Caspase-3 plays a key role in the late stage of apoptosis. Cytochrome c released from mitochondria combines with caspase-9 and apoptotic protease-activating factor 1 (Apaf-1) to form an apoptosome that activates caspase-3, finally killing the cell. To monitor dynamic caspase-3 activation during apoptosis in single living cells, Takemoto and colleagues constructed an FP-based FRET probe (SCAT3) consisting of the donor CFP, the acceptor YFP (Venus), and a peptide linker containing a caspase-3 substrate sequence (Figure 2.17) [15]. Figure 2.18 shows the time-dynamic emission ratio imaging of intracellular caspase


Figure 2.17 Improved SCAT probes for caspase activation. (a) Scheme of SCAT probes. (b) Linkers of SCATs. (c) Spectra of cells expressing SCAT3 before and after TNF-α/CHX stimulation. (d) Emission ratio at 530/475 nm. Source: Reproduced from Ref. [15] Figure 1 with permission of Rockefeller University Press.

activity in single living HeLa cells stably expressing SCAT3 after TNF-α/CHX apoptosis stimulation [15]. SCAT probes were also used by the Chen TS group to evaluate caspase activation in living cells during apoptosis induced by various anticancer drugs [46, 52–59].

2.10.3 Spatio-Temporal Imaging of Intracellular Ion Concentration

Intracellular ions such as calcium, potassium, iron, chloride, and zinc are involved in many cellular activities. Real-time detection of these ions in situ is very



Figure 2.18 Single-cell imaging analysis of SCAT3-expressing living HeLa cells. (a) Western blot analysis of caspase-3 activation after TNF-α/CHX treatment. (b) Dynamic imaging of caspase-3 activation in single living cells expressing SCAT3 after TNF-α/CHX treatment. (c) Dynamic Venus/ECFP emission ratio in the cells of (b). Source: Takemoto et al. [15], Reproduced with permission of Rockefeller University Press.

important to understand the precise regulatory mechanisms of cell signal transduction. To detect temporal and spatial changes of cellular calcium ions, Miyawaki et al. constructed a gene-encoded calcium ion probe, Cameleon [13, 53]. As shown in Figure 2.19, the donor of Cameleon is CFP and the acceptor is YFP or Venus; the calmodulin CaM is linked to CFP, the M13 peptide is linked to YFP or Venus, and a short peptide chain links CaM and M13. When calcium ions bind to CaM, CaM binds closely to M13 owing to a conformational change, bringing CFP into proximity with YFP/Venus and enhancing the FRET between them. Cameleon is a reversible probe. When the cellular


Figure 2.19 Construct of Cameleon. Source: Reproduced from Ref. [13] Figure 1 with permission of Springer Nature.


Figure 2.20 Ratio imaging of intracellular calcium ion. Source: Tsien et al. [14], Reproduced with permission of the American Association for the Advancement of Science.

calcium ion concentration increases, the FRET efficiency increases; conversely, when the intracellular calcium ion concentration decreases, the FRET efficiency decreases. Figure 2.20 shows the fluorescence emission ratio (530/470 nm) of a cell stably expressing Cameleon before and after the addition of CaCl2. On adding CaCl2, the emission ratio of the cell increased sharply; when the cells were then washed and a calcium ion chelator was added, the emission ratio decreased sharply. Owing to its non-toxic, sensitive, and reversible nature, Cameleon has been widely used to study the temporal and spatial dynamics of calcium ions in various cellular activities and their biological functions.



Figure 2.21 FRET sensor for estrogen ligand binding. Source: Awais et al. [62], Reproduced with permission of American Chemical Society.

Following the design principle of Cameleon, scientists have developed a large number of gene-encoded probes that can detect, in real time in living cells, the temporal and spatial dynamics of biological events such as the activation and degradation of other protein kinases and protein phosphorylation and methylation [60–63]. Figure 2.21 shows a gene-encoded probe and FRET imaging that provide the spatio-temporal characteristics of estrogen binding to ligands on the cell membrane in living cells [62]. Figure 2.22 shows a gene-encoded probe and live-cell spatio-temporal FRET imaging that reflect the temporal and spatial characteristics of protein phosphorylation in cells [63]. FRET, combined with other technologies, has been widely used in many disciplines and fields. For example, FRET combined with total internal reflection microscopy is used to study the mechanisms of intermolecular interactions at the cell membrane surface. Combined with single-molecule imaging and detection, FRET has been widely used to study intermolecular regulatory mechanisms at the single-molecule level. Various FRET sensors are also widely used to quantitatively detect trace constituents and molecules. It is difficult to give a comprehensive and detailed account of FRET applications, but this chapter has illustrated some representative ones. In conclusion, FRET has unique advantages for detecting molecular or particle events on the spatial scale of roughly 1–10 nm. Do not forget the FRET technique in the toolbox when encountering similar scientific questions in research.


[Figure: phosphorylation indicator consisting of CFP, a substrate domain, a flexible linker (GNNGGNNNGGS), and an SH2 phosphorylation-recognition domain fused to YFP; protein kinase and phosphatase action switches FRET (440 nm excitation; 480 nm vs. 535 nm emission). Emission-ratio I(480)/I(535) images at 0, 40, 80, 300, and 600 s after 100 nM insulin; ratio scale 0.3–1.1; scale bar 10 μm.]

Figure 2.22 FRET sensor for protein phosphorylation. Source: Sato et al. [63]. Reproduced with permission of Springer Nature.

References

1 Lakowicz, J.R. (2006). Principles of Fluorescence Spectroscopy. Boston, MA: Springer.
2 Jabłoński, A. (1935). Über den Mechanismus der Photolumineszenz von Farbstoffphosphoren. Zeitschrift für Phys. 94 (1–2): 38–46.
3 Förster, T. (1948). Intermolecular energy migration and fluorescence. Ann. Phys. 437 (1–2): 55–75.
4 Clegg, R.M. (1996). Fluorescence Imaging Spectroscopy and Microscopy. New York: Wiley.


5 Carlson, H.J. and Campbell, R.E. (2009). Genetically encoded FRET-based biosensors for multiparameter fluorescence imaging. Curr. Opin. Biotechnol. 20 (1): 19–27.
6 Wu, P.G. and Brand, L. (1994). Resonance energy transfer: methods and applications. Anal. Biochem. 218 (1): 1–13.
7 Fairclough, R.H. and Cantor, C.R. (1978). The use of singlet-singlet energy transfer to study macromolecular assemblies. Methods Enzymol. 48: 347–379.
8 Johnson, I.D., Kang, H.C., and Haugland, R.P. (1991). Fluorescent membrane probes incorporating dipyrrometheneboron difluoride fluorophores. Anal. Biochem. 198 (2): 228–237.
9 Selvin, P.R. (1995). Fluorescence resonance energy transfer. Methods Enzymol. 246: 300–334.
10 Berlman, I. (1973). Energy Transfer Parameters of Aromatic Compounds. Elsevier.
11 Mathis, G. (1993). Rare earth cryptates and homogeneous fluoroimmunoassays with human sera. Clin. Chem. 39 (9): 1953–1959.
12 Rizzo, M.A., Springer, G., Segawa, K. et al. (2006). Optimization of pairings and detection conditions for measurement of FRET between cyan and yellow fluorescent proteins. Microsc. Microanal. 12 (3): 238–254.
13 Miyawaki, A., Llopis, J., Heim, R. et al. (1997). Fluorescent indicators for Ca2+ based on green fluorescent proteins and calmodulin. Nature 388 (6645): 882–887.
14 Tsien, R.Y. and Miyawaki, A. (1998). Seeing the machinery of live cells. Science 280 (5371): 1954–1955.
15 Takemoto, K., Nagai, T., Miyawaki, A., and Miura, M. (2003). Spatio-temporal activation of caspase revealed by indicator that is insensitive to environmental effects. J. Cell Biol. 160 (2): 235–243.
16 Wang, L., Chen, T., Qu, J., and Wei, X. (2009). Quantitative analysis of caspase-3 activation by fitting fluorescence emission spectra in living cells. Micron 40 (8): 811–820.
17 Zal, T. and Gascoigne, N.R.J. (2004). Photobleaching-corrected FRET efficiency imaging of live cells. Biophys. J. 86 (6): 3923–3939.
18 Patterson, G., Day, R.N., and Piston, D. (2001). Fluorescent protein spectra. J. Cell Sci. 114 (5): 837–838.
19 Lansford, R., Bearman, G., and Fraser, S.E. (2001). Resolution of multiple green fluorescent protein color variants and dyes using two-photon microscopy and imaging spectroscopy. J. Biomed. Opt. 6 (3): 311.
20 Tramier, M., Gautier, I., Piolot, T. et al. (2002). Picosecond-hetero-FRET microscopy to probe protein-protein interactions in live cells. Biophys. J. 83 (6): 3570–3577.
21 Bastiaens, P. (1999). Fluorescence lifetime imaging microscopy: spatial resolution of biochemical processes in the cell. Trends Cell Biol. 9 (2): 48–52.
22 Biskup, C., Zimmer, T., Kelbauskas, L. et al. (2007). Multi-dimensional fluorescence lifetime and FRET measurements. Microsc. Res. Tech. 70 (5): 442–451.
23 Millington, M., Grindlay, G.J., Altenbach, K. et al. (2007). High-precision FLIM-FRET in fixed and living cells reveals heterogeneity in a simple CFP-YFP fusion protein. Biophys. Chem. 127 (3): 155–164.


24 Sarkar, P., Koushik, S.V., Vogel, S.S. et al. (2009). Photophysical properties of cerulean and venus fluorescent proteins. J. Biomed. Opt. 14 (3): 034047.
25 Hoppe, A., Christensen, K., and Swanson, J.A. (2002). Fluorescence resonance energy transfer-based stoichiometry in living cells. Biophys. J. 83 (6): 3652–3664.
26 Köllner, M. and Wolfrum, J. (1992). How many photons are necessary for fluorescence-lifetime measurements? Chem. Phys. Lett. 200 (1–2): 199–204.
27 Grailhe, R., Merola, F., Ridard, J. et al. (2006). Monitoring protein interactions in the living cell through the fluorescence decays of the cyan fluorescent protein. ChemPhysChem 7 (7): 1442–1454.
28 Elder, A., Domin, A., Kaminski Schierle, G. et al. (2009). A quantitative protocol for dynamic measurements of protein interactions by Förster resonance energy transfer-sensitized fluorescence emission. J. R. Soc. Interface 6 (suppl. 1): S59–S81.
29 Yu, H., Zhang, J., Li, H. et al. (2012). An empirical quantitative fluorescence resonance energy transfer method for multiple acceptors based on partial acceptor photobleaching. Appl. Phys. Lett. 100 (25): 16–20.
30 Li, H., Yu, H., and Chen, T. (2012). Partial acceptor photobleaching-based quantitative FRET method completely overcoming emission spectral crosstalks. Microsc. Microanal. 18 (5): 1021–1029.
31 Zhang, L., Qin, G., Chai, L. et al. (2015). Spectral wide-field microscopic fluorescence resonance energy transfer imaging in live cells. J. Biomed. Opt. 20 (8): 086011.
32 Yu, H., Zhang, J., Li, H., and Chen, T. (2013). Ma-PbFRET: multiple acceptors FRET measurement based on partial acceptor photobleaching. Microsc. Microanal. 19 (1): 171–179.
33 Zhang, L., Yu, H., Zhang, J., and Chen, T. (2014). Binomial distribution-based quantitative measurement of multiple-acceptors fluorescence resonance energy transfer by partially photobleaching acceptor. Appl. Phys. Lett. 104 (24): 243706.
34 Chen, H., Puhl, H.L., Koushik, S.V. et al. (2006). Measurement of FRET efficiency and ratio of donor to acceptor concentration in living cells. Biophys. J. 91 (5): L39–L41.
35 Wlodarczyk, J., Woehler, A., Kobe, F. et al. (2008). Analysis of FRET signals in the presence of free donors and acceptors. Biophys. J. 94 (3): 986–1000.
36 Levy, S., Wilms, C.D., Brumer, E. et al. (2011). SpRET: highly sensitive and reliable spectral measurement of absolute FRET efficiency. Microsc. Microanal. 17 (2): 176–190.
37 Zhang, J., Li, H., Chai, L. et al. (2015). Quantitative FRET measurement using emission-spectral unmixing with independent excitation crosstalk correction. J. Microsc. 257 (2): 104–116.
38 Zhang, J., Lin, F., Chai, L. et al. (2016). IIem-spFRET: improved Iem-spFRET method for robust FRET measurement. J. Biomed. Opt. 21 (10): 105003.
39 Becker, W. (2005). Advanced Time-Correlated Single Photon Counting Techniques. Berlin, Heidelberg: Springer.


40 Kim, H., Tu, H.-C., Ren, D. et al. (2009). Stepwise activation of BAX and BAK by tBID, BIM, and PUMA initiates mitochondrial apoptosis. Mol. Cell 36 (3): 487–499.
41 Chipuk, J.E. and Green, D.R. (2008). How do BCL-2 proteins induce mitochondrial outer membrane permeabilization? Trends Cell Biol. 18 (4): 157–164.
42 Leber, B., Lin, J., and Andrews, D.W. (2007). Embedded together: the life and death consequences of interaction of the Bcl-2 family with membranes. Apoptosis 12 (5): 897–911.
43 Düssmann, H., Rehm, M., Concannon, C.G. et al. (2010). Single-cell quantification of Bax activation and mathematical modelling suggest pore formation on minimal mitochondrial Bax accumulation. Cell Death Differ. 17 (2): 278–290.
44 Van De Craen, M., Van Den Brande, I., Declercq, W. et al. (1997). Cleavage of caspase family members by granzyme B: a comparative study in vitro. Eur. J. Immunol. 27 (5): 1296–1299.
45 Zhou, P., Chou, J., Olea, R.S. et al. (1999). Solution structure of Apaf-1 CARD and its interaction with caspase-9 CARD: a structural basis for specific adaptor/caspase interaction. Proc. Natl. Acad. Sci. U. S. A. 96 (20): 11265–11270.
46 Gao, W., Xiao, F., Wang, X., and Chen, T. (2013). Artemisinin induces A549 cell apoptosis dominantly via a reactive oxygen species-mediated amplification activation loop among caspase-9, -8 and -3. Apoptosis 18 (10): 1201–1213.
47 Nicholson, D. (1999). Caspase structure, proteolytic substrates, and function during apoptotic cell death. Cell Death Differ. 6 (11): 1028–1042.
48 Kumar, S. (2007). Caspase function in programmed cell death. Cell Death Differ. 14 (1): 32–43.
49 Zimmermann, K.C., Bonzon, C., and Green, D.R. (2001). The machinery of programmed cell death. Pharmacol. Ther. 92 (1): 57–70.
50 Shi, Y. (2002). Mechanisms of caspase activation and inhibition during apoptosis. Mol. Cell 9 (3): 459–470.
51 Wang, L., Du, F., and Wang, X. (2008). TNF-α induces two distinct caspase-8 activation pathways. Cell 133 (4): 693–703.
52 Xiao, F., Gao, W., Wang, X., and Chen, T. (2012). Amplification activation loop between caspase-8 and -9 dominates artemisinin-induced apoptosis of ASTC-a-1 cells. Apoptosis 17 (6): 600–611.
53 Kuner, T. and Augustine, G.J. (2000). A genetically encoded ratiometric indicator for chloride: capturing chloride transients in cultured hippocampal neurons. Neuron 27 (3): 447–459.
54 Lu, Y.-Y., Chen, T.-S., Wang, X.-P. et al. (2010). The JNK inhibitor SP600125 enhances dihydroartemisinin-induced apoptosis by accelerating Bax translocation into mitochondria in human lung adenocarcinoma cells. FEBS Lett. 584 (18): 4019–4026.
55 Lu, Y.-Y., Chen, T.-S., Wang, X.-P., and Li, L. (2010). Single-cell analysis of dihydroartemisinin-induced apoptosis through reactive oxygen species-mediated caspase-8 activation and mitochondrial pathway in ASTC-a-1 cells using fluorescence imaging techniques. J. Biomed. Opt. 15 (4): 046028.


56 Lu, Y.-Y., Chen, T.-S., Qu, J.-L. et al. (2009). Dihydroartemisinin (DHA) induces caspase-3-dependent apoptosis in human lung adenocarcinoma ASTC-a-1 cells. J. Biomed. Sci. 16 (1): 16.
57 Pang, Y., Qin, G., Wu, L. et al. (2016). Artesunate induces ROS-dependent apoptosis via a Bax-mediated intrinsic pathway in Huh-7 and Hep3B cells. Exp. Cell Res. 347 (2): 251–260.
58 Qin, G., Wu, L., Liu, H. et al. (2015). Artesunate induces apoptosis via a ROS-independent and Bax-mediated intrinsic pathway in HepG2 cells. Exp. Cell Res. 336 (2): 308–317.
59 Qin, G., Zhao, C., Zhang, L. et al. (2015). Dihydroartemisinin induces apoptosis preferentially via a Bim-mediated intrinsic pathway in hepatocarcinoma cells. Apoptosis 20 (8): 1072–1086.
60 Gao, X., Chen, T., Xing, D. et al. (2006). Single cell analysis of PKC activation during proliferation and apoptosis induced by laser irradiation. J. Cell. Physiol. 206 (2): 441–448.
61 Braun, D.C., Garfield, S.H., and Blumberg, P.M. (2005). Analysis by fluorescence resonance energy transfer of the interaction between ligands and protein kinase Cδ in the intact cell. J. Biol. Chem. 280 (9): 8164–8171.
62 Awais, M., Sato, M., Sasaki, K., and Umezawa, Y. (2004). A genetically encoded fluorescent indicator capable of discriminating estrogen agonists from antagonists in living cells. Anal. Chem. 76 (8): 2181–2186.
63 Sato, M., Ozawa, T., Inukai, K. et al. (2002). Fluorescent indicators for imaging protein phosphorylation in single living cells. Nat. Biotechnol. 20 (3): 287–294.


3 Optical Coherence Tomography Structural and Functional Imaging

Peng Li and Zhihua Ding

Zhejiang University, College of Optical Science and Engineering, State Key Lab of Modern Optical Instrumentation, Hangzhou 310027, China

3.1 Introduction

Optical coherence tomography (OCT) is a label-free, noninvasive, and high-resolution interferometric imaging modality developed in the early 1990s [1]. It is a mesoscopic (micrometer-scale) method for imaging living tissue that exploits the interference between the echoes from a reference mirror and those from the biological sample. Its principle is similar to that of ultrasound (US) imaging, except that it uses light instead of sound. Because it obtains tomographic information about a sample or biological tissue without cutting it, OCT is known as "optical biopsy" and avoids the disadvantages of traditional biopsy. As shown in Figure 3.1, compared with traditional imaging technologies such as US imaging, magnetic resonance imaging (MRI), and X-ray computed tomography (CT), OCT has a higher resolution (several micrometers). On the other hand, compared with ultrahigh-resolution technologies such as confocal microscopy and multiphoton microscopy, OCT has a larger imaging depth that is suitable for tissue-level imaging. OCT therefore fills the gap between these two classes of imaging technologies and has broad application prospects in fields such as ophthalmology, dermatology, gastroenterology, neuroscience, and angiography. This chapter focuses on the principle, development, and performance of OCT technology, on structural and functional (Doppler OCT and OCT angiography) imaging based on OCT, and on their applications in ophthalmology, dermatology, and brain imaging.

Biomedical Photonic Technologies, First Edition. Edited by Zhenxi Zhang, Shudong Jiang, and Buhong Li. © 2023 WILEY-VCH GmbH. Published 2023 by WILEY-VCH GmbH.


[Figure: imaging modalities compared by resolution (1 μm–1 mm) and penetration depth (100 μm–entire body): traditional US, MRI, and CT penetrate deeply (up to the entire body) at coarse resolution; high-resolution US, OCT, confocal microscopy, and ultrahigh-resolution optical microscopy trade penetration for resolution, with OCT offering micrometer-scale resolution over millimeter-scale depths.]

Figure 3.1 Penetration depth and resolution of OCT and other imaging technologies. Source: Peng Li.

3.2 Principles of OCT

OCT is based on a classic optical measurement technique known as low-coherence interferometry. The idea of using optical echoes to measure the internal structure of tissue originated from femtosecond optics [2]. In the 1980s, low-coherence interferometry was used to detect optical echoes and backscattering in optical fibers and waveguide devices [3]. In 1988, Fercher et al. first introduced low-coherence interferometry into biomedical imaging and successfully measured the axial length of the eye [4]. Since then, low-coherence interferometry has been used for a variety of biological measurements. Building on it, OCT imaging was first demonstrated in 1991 by Huang et al., with imaging of the retina and coronary arteries performed in vitro [1]. The work was published in Science, marking the birth of OCT technology. OCT can acquire three-dimensional (3D) information inside a sample at high speed with high resolution and sensitivity. In the depth (Z) direction, OCT obtains depth-resolved information based on the principle of optical low-coherence interference. Figure 3.2a shows the original version of OCT, termed time-domain OCT (TD-OCT), which is essentially a Michelson interferometer. The light emitted from the source is split by a beam splitter before entering the reference arm and the sample arm. The beams reflected from the two arms recombine at the beam splitter and are then detected by a photodetector. Here, the reference arm is a reflector that can move linearly, so that the optical path of the reference arm can be changed. For simplicity, the sample arm is assumed to be a single reflecting surface. If the interferometer adopts a monochromatic source (with long coherence length, such as a laser), the interference signal acquired by the detector will vary with the optical path difference Δl



Figure 3.2 Michelson interferometer and optical low coherence interference. (a) Michelson interferometer; (b) cosine-function simple harmonic oscillation of interference signal; (c, d) interferometric fringe; (e) short coherence length light. Source: Peng Li.

and can be quantitatively expressed as 2E_R E_S cos(2kΔl), as shown in Figure 3.2b. Here, E_R and E_S denote the electric fields returning from the reference and sample arms, and k is the wavenumber. However, if the light source of the interferometer is broadband (its wavenumber k has a certain width), each spectral component forms a cosine-function simple harmonic oscillation (interferometric fringe) with a different oscillation frequency, as shown in Figure 3.2c,d. Only when the optical path difference approaches 0 do the fringes of all spectral components interfere constructively; the interference signal decays rapidly as the optical path difference increases, producing the interferometric envelope shown in Figure 3.2e. This phenomenon, known as optical low-coherence interference, has been widely used in spatial positioning and ranging.

By introducing a two-dimensional spatial scanning mechanism in the X–Y plane (generally realized with X and Y galvanometer scanners), perpendicular to the depth direction, 3D OCT imaging can be realized. Following the naming convention of ultrasonic technology, a single-pass scan in the depth (Z) direction is usually called an A-scan, as shown in Figure 3.3. Multiple A-scans along X form a B-frame, and multiple B-frames along Y realize 3D imaging. OCT thus extends the original one-dimensional low-coherence ranging technique into a two-dimensional (or 3D) imaging technology. It has important application value in scientific research (brain science, heart development, etc.) as well as in clinical diagnosis and treatment (ophthalmology, cardiovascular and cerebrovascular imaging).
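The low-coherence gating described above can be reproduced numerically by summing the cosine fringes of many spectral components of a Gaussian spectrum: the fringes add constructively only near zero optical path difference. A minimal sketch (NumPy assumed available; the 840 nm source and spectral width are illustrative):

```python
import numpy as np

# Sum the fringes cos(k*dl) of a Gaussian spectrum S(k): constructive
# interference survives only near zero optical path difference, producing
# the low-coherence envelope of Figure 3.2e. Parameters are illustrative.
k0, dk = 2 * np.pi / 0.84e-6, 3e5        # central wavenumber (840 nm) and spectral width (1/m)
k = np.linspace(k0 - 3 * dk, k0 + 3 * dk, 2001)
S = np.exp(-((k - k0) / dk) ** 2)        # Gaussian power spectrum

dl = np.linspace(-30e-6, 30e-6, 1201)    # optical path difference (m)
fringes = (S[:, None] * np.cos(k[:, None] * dl[None, :])).sum(axis=0)
envelope = np.abs(fringes) / np.abs(fringes).max()

print(f"envelope peaks at dl = {abs(dl[np.argmax(envelope)]) * 1e6:.2f} um")
```

The envelope is maximal at dl = 0 and has essentially vanished a few tens of micrometers away, which is exactly the depth-gating mechanism TD-OCT exploits.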
A later version of OCT, called Fourier-domain OCT (FD-OCT), records the spectral components of the interference signal and uses a Fourier transform to reconstruct the depth-resolved interferometric envelope. There are two ways of recording the spectral signal in FD-OCT, a spectral (spectrometer-based) recording strategy and a frequency-swept recording strategy, whose basic structures are shown in Figure 3.4a, b,



Figure 3.3 Reconstruction of a 3D OCT image of a mouse eye in vivo. (a) Wide-field structural cross-section in the horizontal meridian using a full-range complex method. (b) Three-dimensional rendering of OCT full-eye imaging of a mouse. Source: Peng Li.


Figure 3.4 Schematic of Fourier-domain low-coherence interference. (a) Spectral-domain detection; (b) swept-source detection; (c) signal reconstruction from the frequency domain to the spatial domain. Source: Peng Li.

respectively. Spectral-domain OCT (SD-OCT) normally adopts a broadband light source for illumination and uses specific devices, such as a spectrometer with a high-speed line-scan digital camera, to disperse and record in parallel the spectral components of the low-coherence signal. By contrast, swept-source OCT (SS-OCT) adopts a swept source for illumination, and a point detector achieves time-sharing recording because the spectral components of the low-coherence signal are dispersed in time. Although SD-OCT and SS-OCT use different spectral recording methods, their signal reconstruction processes are almost the same. As shown in Figure 3.4c, the interferometric fringes corresponding to different depths Δz_1, Δz_2, and Δz_3 are used to reconstruct the depth-resolved information in the spatial domain through a Fourier transform of the frequency-domain data. To discuss the reconstruction quantitatively, for a single spectral component the fields incident on the beam splitter after returning from the reference and sample arms are given by:

E_R = E_0 r_R exp[j(k z_R − ωt)]   (3.1)

E_S = E_0 r_S exp[j(k z_S − ωt)]   (3.2)

where E_0 refers to the electric field of the light before reaching the reference or sample arm; r_R and r_S denote the reflection coefficients of the reference and sample arms, respectively; z_R and z_S are the optical paths of the reference and sample arms; and ω is the angular frequency. According to the light-wave superposition principle, the interference signal I_d(k, Δz) of the spectral component with wavenumber k can be expressed as Formula (3.3), a function of the spectral wavenumber k and the optical path difference Δz = z_R − z_S:

I_d(k, Δz) = ⟨|E_R + E_S|²⟩ = S(k)(R_R + R_S) + 2S(k)√(R_R R_S) · Re{exp(−jkΔz)}
           = S(k)(R_R + R_S) + 2S(k)√(R_R R_S) cos(kΔz)   (3.3)

where S(k) encodes the power spectral dependence of the light source, and R_R = r_R² and R_S = r_S² are the reflectivities of the reference and sample arms. In TD detection, by scanning the reference arm, the information i_d(Δz) is recorded by the single-point detector with the optical path difference Δz (encoded in scanning time) as the variable. Ignoring the DC (constant) term, the output of the single-point detector is the integral over all spectral components of the interference signal I_d(k, Δz):

i_d(Δz) = ∫_{−∞}^{+∞} I_d(k, Δz) dk = 2√(R_R R_S) · Re{∫_{−∞}^{+∞} S(k) exp(−jkΔz) dk}
        = 2√(R_R R_S) · Re{γ(Δz)}   (3.4)

In this formula, γ(Δz) represents the complex temporal coherence of the light source. The maximum amplitude of its envelope occurs where the optical path difference Δz = 0. The information i_d(Δz) in the depth direction is recorded by point-by-point scanning of the reference arm.

In Fourier-domain detection, the reference arm is fixed, and the interferometric spectral signal I_d(k, Δz) is recorded with the spectral wavenumber k as the variable through a specific spectral-resolution mechanism (a spectrometer for SD-OCT, a swept source for SS-OCT). The information i_d(z) is then reconstructed through a Fourier transform. Ignoring the DC term, Formula (3.3) can be converted by Fourier transform to:

i_d(z) = ℱ{I_d(k, Δz)} = ∫_{−∞}^{+∞} I_d(k, Δz) exp(−jkz) dk
       = √(R_R R_S) γ(Δz) ⊗ [δ(z + Δz) + δ(z − Δz)]   (3.5)


In this formula, ⊗ denotes the convolution operation, and δ(z ± Δz) reflects the conjugate mirror-image problem caused by the Fourier transform of a real signal. The spectrum-resolved detection technique therefore records the information in the depth direction concurrently, without point-by-point scanning of the reference arm. In the derivation of Formula (3.5), the Wiener–Khinchin theorem is used, which states that the power spectral density S(k) and the complex temporal coherence γ(Δz) form a Fourier transform pair:

γ(Δz) = ∫_{−∞}^{+∞} S(k) exp(−jkΔz) dk   (3.6)

For a light source with a Gaussian lineshape, the normalized power spectral density and the complex temporal coherence are given by:

S(k) = [1/(Δk√π)] exp{−[(k − k_0)/Δk]²}   (3.7)

γ(Δz) = exp[−(Δk Δz)²/4]   (3.8)

where k_0 and Δk represent the central wavenumber and spectral width of this Gaussian source. The coherence length l_c is defined as the full width at half maximum (FWHM) of the interferometric envelope, which can be expressed as:

l_c = 4√(ln 2)/Δk = (4 ln 2/π) · (λ_0²/Δλ)   (3.9)
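The Fourier-domain reconstruction of Formulas (3.3)–(3.5) can be sketched numerically: synthesize the interferometric spectrum of a few reflectors on a uniform wavenumber grid and Fourier-transform it; depth peaks appear at the corresponding Δz (together with their conjugate mirror images). A minimal sketch, assuming NumPy is available and using illustrative parameters:

```python
import numpy as np

# FD-OCT reconstruction sketch: record the AC part of I_d(k), Formula (3.3),
# on a uniform k grid for several reflectors, then FFT along k to recover
# depth-resolved peaks. All parameters are illustrative.
M = 2048
k = np.linspace(7.0e6, 8.0e6, M)                    # wavenumber samples (1/m)
S = np.exp(-((k - k.mean()) / 2.5e5) ** 2)          # source spectrum S(k)

depths = [50e-6, 120e-6, 200e-6]                    # reflector path differences dz (m)
R = [1.0, 0.5, 0.25]                                # sample reflectivities R_S
I = sum(2 * S * np.sqrt(r) * np.cos(k * dz) for r, dz in zip(R, depths))

a_line = np.abs(np.fft.fft(I))[: M // 2]            # keep one half (conjugate mirror image)
dk = k[1] - k[0]                                    # spectral sampling interval
z = 2 * np.pi * np.arange(M // 2) / (M * dk)        # depth axis from the FFT bin spacing
peak_z = z[np.argmax(a_line)]
print(f"strongest reflector found near {peak_z * 1e6:.1f} um")
```

The strongest peak lands at the depth of the most reflective interface (here ~50 μm), with weaker peaks at the other two depths, mirroring how SD-OCT and SS-OCT recover an A-line in a single spectral acquisition.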

3.3 Performances of OCT

3.3.1 Resolution

The lateral and axial resolutions of a traditional optical microscope both depend on the numerical aperture (NA) of the objective lens. The harsh NA requirement imposed by the axial resolution leads to a very small working distance, which limits practical application. By contrast, one of the most prominent advantages of OCT is that its axial resolution depends mainly on the coherence length of the light source, so very high axial resolution can still be achieved even with a small-NA objective. The axial resolution δ_z of OCT is determined by the coherence length l_c of the light source; for a source with a Gaussian lineshape it can be expressed as [5]:

δ_z = l_c/2 = 0.44 λ_0²/Δλ   (3.10)

Given the spectral bandwidth of the light source, the smaller the central wavelength λ_0, the higher the axial resolution of OCT. However, owing to scattering and absorption in biological tissue, the usable operation bands for OCT are limited, usually around the wavelengths of 800, 1000, and 1300 nm. Normally, there is a corresponding operation-band requirement for a specific


imaging sample. For instance, in OCT fundus imaging, the weak absorption band around 800 nm is generally selected in view of the water absorption of ocular tissue. For highly scattering biological tissue (such as skin), a weakly scattering band around 1300 nm is selected in most cases. For imaging the fundus choroid, where both absorption and scattering are encountered, a compromise band around 1000 nm can be used. In addition, for some special applications (such as inspection of optical materials), the visible band can be adopted to significantly improve the axial resolution. The operation band of the system therefore needs to be determined according to the specific application. After the central wavelength λ_0 is selected, a broadband light source can be used to improve the axial resolution of OCT. In theory, the axial resolution δ_z of OCT is simply reciprocal to the bandwidth Δλ of the light source; in practice, however, it is ultimately limited by the available broadband light sources, the detector, the passive optical elements (such as couplers and circulators), and the dispersion of the optical system. Nevertheless, with the continual development of technology, the axial resolution of OCT keeps improving.

As in a traditional optical microscope, the lateral resolution δ_x of OCT depends on the focusing of the probe beam, that is, on the NA of the objective lens:

δ_x = (4λ_0/π) · (f/D)   (3.11)

In this formula, f refers to the focal length of the focusing objective, and D is the beam diameter on the focusing objective in the sample arm. The lateral resolution of OCT can be improved by selecting a microscope objective with a higher NA. However, this improvement is limited by the depth of focus (2Z_R), or the confocal parameter b:

b = 2Z_R = πδ_x²/λ_0   (3.12)

As shown in Figure 3.5, when a focusing lens with large NA is used in the OCT system, the spot focused on the sample is small and the lateral resolution at the focal point is very high, but the depth of focus is reduced and the lateral resolution degrades rapidly outside it.
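The tradeoff expressed by Formulas (3.10)–(3.12) can be made concrete with a small calculation; a sketch using illustrative values (an 840 nm source with 50 nm bandwidth and an f = 36 mm objective with two different beam diameters):

```python
import math

# Axial resolution dz = 0.44 * lambda0^2 / d_lambda (Formula 3.10) depends only
# on the source; lateral resolution dx = (4*lambda0/pi)*(f/D) (Formula 3.11)
# depends on the focusing optics, and a sharper dx costs depth of focus
# b = pi*dx^2/lambda0 (Formula 3.12). All values are illustrative.
lam0 = 840e-9    # central wavelength (m)
dlam = 50e-9     # source FWHM bandwidth (m)
f = 36e-3        # objective focal length (m)

dz = 0.44 * lam0 ** 2 / dlam
print(f"axial resolution dz = {dz * 1e6:.1f} um")

for D in (2e-3, 8e-3):                  # beam diameter on the objective (m)
    dx = (4 * lam0 / math.pi) * (f / D)
    b = math.pi * dx ** 2 / lam0
    print(f"D = {D*1e3:.0f} mm: dx = {dx*1e6:5.1f} um, depth of focus b = {b*1e6:7.1f} um")
```

Quadrupling the beam diameter improves the lateral resolution fourfold but, because b scales with δ_x², shrinks the depth of focus sixteenfold, which is the tradeoff Figure 3.5 illustrates.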

3.3.2 Imaging Speed

Imaging speed is one of the representative performance parameters of OCT, and the generations of OCT are mainly distinguished by this parameter. The first generation, TD-OCT, has an A-scan line rate of hundreds of Hz, with a maximum of about 8 kHz. The second generation, FD-OCT, has an imaging rate of tens of kHz and is the mainstream commercial OCT at present. The third generation is still based on Fourier-domain spectral detection, but thanks to the development of line-scan digital CMOS cameras and high-speed swept sources, its imaging speed can reach hundreds of kHz and even several MHz.



Figure 3.5 Lateral resolution and depth of focus of OCT systems using objective lenses with different numerical apertures. Source: Peng Li.

As described above, TD-OCT obtains depth information by point-by-point scanning of the reference arm, and this mechanical scanning severely restricts the imaging speed. By contrast, in FD-OCT the depth information is reconstructed from spectral information acquired concurrently by spectrum-resolved detection, which can be regarded as line-by-line scanning and thus greatly improves the imaging speed. For SD-OCT, the imaging speed depends mainly on the line rate of the line-scan digital camera in the spectrometer; for SS-OCT, it is determined mainly by the sweep rate of the light source. Both the line-scan camera and the swept source rely on technical progress in the optoelectronic industry. Regarding the detection of the OCT signal, it is worth noting that ultrahigh-speed OCT imaging can be realized if parallel detection is also implemented in the X direction (or even the Y direction), i.e. area scanning (or even volume scanning).

3.3.3 Signal-to-Noise Ratio (SNR)

The system SNR can be defined as the ratio of the interference signal power to the variance of the noise photocurrent:

R_SNR = ⟨P_D⟩²/σ²   (3.13)

The different detection mechanisms adopted in TD-OCT and FD-OCT lead to different SNRs. Taking TD-OCT as an example, and assuming for simplicity that the sample is a single reflecting surface at distance z_S, the total detected photocurrent I_D of a TD-OCT system is:

I_D = (ηe/hν)[P_R + P_S + 2Re{⟨E_S E_R*⟩}]   (3.14)

In this formula, η denotes the quantum efficiency of the detector; e is the electron charge; hν is the energy of a single photon; and P_R and P_S are the optical powers of


the reference arm and sample arm, respectively. The third term in Formula (3.14) is the interference signal term, whose mean-square peak signal power occurs at z_R = z_S and is given by:

⟨P_D⟩² = (ηe/hν)² P_R P_S   (3.15)

In the shot-noise limit, and with P_S ≪ P_R so that P_S can be neglected in the DC term, the noise variance σ²_TD of the OCT system can be expressed as:

σ²_TD = 2e(ηe/hν) P_R B_NEB   (3.16)

In this formula, B_NEB refers to the noise-equivalent bandwidth of the system, which equals the bandwidth of the interference-signal detection circuit. In addition, B_NEB ∝ 1/τ, where τ is the integration time of the detector for each pixel. Therefore, the system SNR is:

R_SNR = (η/hν) · P_S/(2B_NEB) = (η/hν) · (P_S τ/2)   (3.17)

In this formula, P_S τ/hν is the number of photons returned from each pixel position of the sample; that is, the SNR of the OCT system is proportional to the number of photons collected by the detector for each pixel. Because of the second pass through the beam splitter (with a splitting ratio of 50 : 50), at least two photons must be reflected from the sample at each pixel position for one to be detected. To obtain an image with high SNR, the optical power must be enhanced or the integration time per pixel extended; such improvement, however, is restricted by the optical power that is safe for biological tissue, as well as by the imaging speed. According to Formula (3.17), Fourier-domain detection is clearly advantageous over time-domain detection in terms of SNR. To compare the SNRs of SD-OCT, SS-OCT, and TD-OCT, we assume the three systems have identical light-source power (the signal power returned from the sample arm is P_S in all cases), pixel number (M/2 pixels in the Z direction), and imaging speed (the same line rate, i.e. the same axial scan time t). Because of the conjugate mirror image, spectral information from M channels must be recorded in FD-OCT for reconstruction, twice as many samples as in time-domain detection. For TD-OCT, owing to the point-by-point scanning of the reference arm, the integration time per pixel is τ = 2t/M, and the corresponding SNR is:

R_SNR^TD = (η/hν) · (P_S τ/2) = (η/hν) · (P_S t/M)   (3.18)

For SD-OCT, because the M axial pixels are illuminated and detected simultaneously, the integration time per pixel is τ = t, and the corresponding SNR is:

R_SNR^SD = (η/hν) · (P_S t/2) = (M/2) · R_SNR^TD   (3.19)


3 Optical Coherence Tomography Structural and Functional Imaging

As for SS-OCT, because the M spectral components of the light source are output at separate moments, the total optical power can be as high as MP_S, while the integration time for each pixel is τ = t/M; the corresponding SNR is:

$$R_{SNR}^{SS} = \frac{\eta}{h\nu}\cdot\frac{M P_S\,\tau}{2} = \frac{\eta}{h\nu}\cdot\frac{P_S\,t}{2} = \frac{M}{2} R_{SNR}^{TD} \tag{3.20}$$

As we can see, compared with TD-OCT, the SNR of FD-OCT systems is increased by a factor of M/2. Comparing SD-OCT with TD-OCT, the enhancement of SNR mainly benefits from the parallel detection of a full axial line instead of point-by-point axial scanning, which increases the single-pixel integration time by M/2 times. If area-array detection can be implemented, then under the same total detection time the single-pixel integration time increases to τ = Nt (N represents the number of A-scans in each B-scan), and in theory the SNR can be further increased by MN/2 times.
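The scaling in Formulae (3.18)-(3.20) can be checked numerically. The sketch below uses illustrative parameter values (wavelength, power, scan time, and channel count are assumptions, not values from the text):

```python
# Photon-budget comparison of TD-, SD-, and SS-OCT per Formulae (3.18)-(3.20).
# All parameter values are illustrative assumptions.
PLANCK_H = 6.626e-34       # J*s
ETA = 0.8                  # detector quantum efficiency, assumed
NU = 3e8 / 1050e-9         # optical frequency at 1050 nm
P_S = 1e-6                 # signal power returned from the sample (W), assumed
T_AXIAL = 10e-6            # total axial scanning time t (s), assumed
M = 2048                   # number of spectral channels, assumed

photons_per_joule = ETA / (PLANCK_H * NU)

# TD-OCT: integration time per pixel tau = 2t/M  ->  R = (eta/h*nu) * P_S*t/M
r_td = photons_per_joule * P_S * T_AXIAL / M
# SD-OCT: tau = t  ->  R = (eta/h*nu) * P_S*t/2
r_sd = photons_per_joule * P_S * T_AXIAL / 2
# SS-OCT: power M*P_S but tau = t/M  ->  same as SD-OCT
r_ss = photons_per_joule * (M * P_S) * (T_AXIAL / M) / 2

print("SD/TD SNR ratio:", r_sd / r_td)   # equals M/2
print("SS/TD SNR ratio:", r_ss / r_td)   # equals M/2
```

Both Fourier-domain variants come out exactly M/2 times above the time-domain result, as the derivation states.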

3.3.4 Imaging Range

For an OCT system, the imaging range is defined as the maximum optical path difference Δz_max that can be detected. In time-domain OCT, the imaging range is determined by the stroke of the reference arm: under a given axial scanning speed, the larger the imaging range, the longer the time consumed in axial scanning. In Fourier-domain OCT, however, the maximum measuring range Δz_max is mainly determined by the spectral sampling rate δ_S k of the system. According to Figure 3.4 and Formula (3.3), the larger the optical path difference Δz, the higher the frequency of the interferometric spectral signal. According to the Nyquist sampling theorem, the spectral sampling rate δ_S k determines the highest frequency of the interferometric spectral signal that can be recorded by the system, that is, the maximum measuring range:

$$\Delta z_{max} = \frac{\pi}{2 n\,\delta_S k} \tag{3.21}$$

where n denotes the refractive index of the detected sample. In SD-OCT, the spectral sampling rate δ_S k is mainly determined by the spectral range of the spectrometer and the number of operating pixels of the line-scan digital camera, which also affect the axial resolution and imaging speed of OCT. Similarly, in SS-OCT, the spectral sampling rate δ_S k is mainly determined by the sweep range of the light source, the detector bandwidth, and the sampling rate of the analog-to-digital (AD) converter. Therefore, these factors should be considered comprehensively when designing the imaging range of an OCT system.
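As a worked example of Formula (3.21), the sketch below estimates Δz_max for an assumed spectrometer configuration (the wavelength range, pixel count, and refractive index are illustrative assumptions):

```python
# Maximum measuring range of an SD-OCT system per Formula (3.21).
import math

LAMBDA_MIN = 800e-9   # spectrometer range (m), assumed
LAMBDA_MAX = 880e-9
N_PIXELS = 2048       # operating pixels of the line-scan camera, assumed
N_SAMPLE = 1.38       # refractive index of the sample, assumed

# Wavenumber k = 2*pi/lambda; the per-pixel interval is the spectral
# sampling rate delta_S k of the system.
k_max = 2 * math.pi / LAMBDA_MIN
k_min = 2 * math.pi / LAMBDA_MAX
delta_sk = (k_max - k_min) / N_PIXELS

# Formula (3.21): Nyquist-limited maximum optical path difference
dz_max = math.pi / (2 * N_SAMPLE * delta_sk)
print(f"delta_S k  = {delta_sk:.3e} rad/m per pixel")
print(f"Delta z_max = {dz_max*1e3:.2f} mm")
```

Doubling the pixel count halves δ_S k and doubles the range, which is why the high-pixel-count cameras discussed in Section 3.4.1 extend the imaging depth.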

3.3.5 Sensitivity Falloff Effects in FD-OCT

As illustrated by Figure 3.4 and Formula (3.5), in FD-OCT the depth-resolved OCT signal can be reconstructed by taking the Fourier transform of the interferometric spectral signal. However, in practical applications, the spectral resolution δ_r k of the system is limited. With increasing detection depth, the high-frequency interferometric spectral signal cannot be effectively distinguished within the width of δ_r k, leading to a falloff of sensitivity with depth. Mathematically, the influence of δ_r k can be described by a Gaussian function, i.e. exp[−4 ln(2)k²/(δ_r k)²]. The actual interferometric spectrum is the convolution of the original interferometric spectrum I_d(k, Δz) and this Gaussian function. Therefore, the interferometric spectrum should be rewritten in convolution form:

$$I_d'(k,z) = I_d(k,z) \otimes \exp\left[-\frac{4\ln(2)\,k^2}{(\delta_r k)^2}\right] \tag{3.22}$$

Convolution in the frequency domain corresponds to multiplication in the space domain after the Fourier transform. Thus, according to Formula (3.22), the OCT signal can be expressed as:

$$i_d'(z) = \mathcal{F}\{I_d'(k,z)\} = i_d(z)\cdot\exp\left[-\frac{(\delta_r k)^2\, z^2}{4\ln(2)}\right] \tag{3.23}$$

The Fourier transform of a Gaussian function is still a Gaussian function. Hence, the signal i'_d(z) finally obtained is the product of the original signal i_d(z) and a Gaussian function. As shown in Figure 3.6, the original signal i_d(z) is modulated by a Gaussian lineshape due to the limited spectral resolution, which causes the amplitude of the OCT signal i'_d(z) (the system sensitivity) to decrease with depth.

Figure 3.6 Sensitivity falling off versus depth. The actual imaging depth of OCT is comprehensively affected by the system operation band, the probe power, the measuring range, the sensitivity falloff effect, the scattering and absorption properties of the detected sample, and so on. Source: Peng Li.

In SD-OCT, the spectral resolution depends on the spot size of the monochromatic spectral components in the spectrometer and the pixel size of the line-scan digital camera, while in SS-OCT the spectral resolution depends on the


instantaneous linewidth of the monochromatic spectral components of the light source (or the coherence length). Here, the spot size in the spectrometer can be improved by optimizing the optical design, but it is restricted by the optical diffraction limit. Other influencing factors rely on the development of the relevant devices.
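The Gaussian sensitivity falloff of Formula (3.23) can be sketched numerically; the spectral resolution value below is an illustrative assumption:

```python
# Depth-dependent sensitivity falloff per Formula (3.23).
import math

DELTA_RK = 500.0   # spectral resolution delta_r k (rad/m), assumed

def falloff(z_m: float) -> float:
    """Relative OCT amplitude at depth z: the Gaussian factor in (3.23)."""
    return math.exp(-((DELTA_RK * z_m) ** 2) / (4 * math.log(2)))

# Depth at which the sensitivity drops to one half (the 6 dB falloff depth):
z_6db = 2 * math.log(2) / DELTA_RK

for z_mm in (0.0, 1.0, 2.0, 3.0):
    print(f"z = {z_mm:.0f} mm -> relative sensitivity {falloff(z_mm*1e-3):.3f}")
print(f"6 dB falloff depth: {z_6db*1e3:.2f} mm")
```

A finer spectral resolution (smaller δ_r k) widens the Gaussian and pushes the 6 dB falloff depth further into the sample.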

3.4 Development of OCT Imaging

3.4.1 Large Imaging Range

Practical applications such as imaging of the anterior segment require an OCT system with a large imaging range in the depth direction. To realize a large imaging range, two essential aspects should be considered in FD-OCT: (i) suppressing the conjugate mirror image; (ii) improving the spectral sampling rate and resolution. Because the detector records only the real part of the interferometric spectral signal, the Fourier transform of the real signal leads to a conjugate mirror image, as indicated by the positive and negative terms in Formula (3.5). To avoid the interference of the conjugate mirror image, the sample is usually placed on one side of the zero optical path during imaging, so only half of the imaging range can be used in practice. To utilize the full imaging range effectively, several methods have been proposed to eliminate the mirror image [6–14]. The main idea is to construct the corresponding imaginary signal, which combines with the real signal to form the complex analytic signal, thus eliminating the conjugate mirror image caused by the positive and negative frequencies. Among these methods, an ingenious one is to introduce a certain level of phase modulation by offsetting the rotating axis of the scanning galvanometer, thus realizing full-range OCT imaging [6–10]. In SD-OCT, the spectral sampling rate and resolution mainly depend on the pixel number and pixel size of the camera. With the development of the photoelectronic industry, line-scan digital cameras with 2048 [7] and 4096 [5] pixels were successively introduced into OCT systems to improve the spectral sampling rate. By combining them with the full-range imaging technology mentioned above, fast OCT imaging of the whole anterior segment of the human eye (from the front surface of the cornea to the rear surface of the crystalline lens) was successfully implemented.
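A minimal sketch of the analytic-signal idea (not any specific published implementation): a carrier phase modulation along the lateral scan lets a Hilbert transform along the B-scan direction construct the complex spectral interferogram, whose Fourier transform is free of the conjugate mirror image. The synthetic single-reflector data and the carrier frequency below are assumptions:

```python
# Full-range reconstruction sketch: lateral carrier + Hilbert transform.
import numpy as np

M, X = 512, 128                     # spectral channels, lateral A-lines
k = np.linspace(-1.0, 1.0, M)       # normalized wavenumber axis
x = np.arange(X)
f_mod = 0.25                        # lateral carrier (cycles/A-line), assumed
z0 = 60.0                           # reflector depth (arbitrary units), assumed

# Real-valued spectral interferogram of a single reflector with the carrier
fringes = np.cos(2 * np.pi * (z0 * k[:, None] / 2 + f_mod * x[None, :]))

# Hilbert transform along x: keep only positive lateral frequencies to form
# the complex analytic signal
spec = np.fft.fft(fringes, axis=1)
spec[:, X // 2 :] = 0.0
analytic = 2 * np.fft.ifft(spec, axis=1)

depth_real = np.abs(np.fft.fft(fringes, axis=0))    # mirror peaks at +/- z0
depth_full = np.abs(np.fft.fft(analytic, axis=0))   # single peak: full range
```

In `depth_real` each A-line shows two conjugate peaks of equal height, while in `depth_full` the mirror peak is suppressed, so both halves of the depth axis become usable.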
As shown in Figure 3.7 [7], by using a new InGaAs line-scan digital camera and the full-range imaging technique, SD-OCT achieves an ultra-large measuring range of 12 mm, a high line-scanning speed of 120 kHz, and an axial resolution below 10 μm at an operating wavelength of 1050 nm [7]. It can not only capture the geometric structure of the whole anterior segment of the human eye in a single scan but also enables clear observation of the structure of the anterior chamber angle [8]. Because of these merits, this technology is especially applicable to the study of the dynamic accommodation of the human lens, studies of intraocular lens implantation, and the comprehensive inspection of angle-closure glaucoma (including the chamber angle, anterior chamber depth and capacity, iris thickness and curvature, crystalline lens position and thickness, and other important influencing factors). In addition, this technology can facilitate microangiography [15, 16], tissue elasticity imaging [17, 18], and other functional extension technologies.

Figure 3.7 3D imaging of the whole anterior segment of the human eye. (a) 3D rendering of the full anterior segment (covering a range of 12 × 18 × 18 mm); (b) typical cross-sectional image; (c) subregions of the corneoscleral limbus, cornea, and anterior and posterior parts of the crystalline lens. TM: trabecular meshwork, SS: sclera spur, AR: angle recess, EP: epithelium, CP: lens capsule, CR: lens cortex, S: stroma. Source: Li et al. [17], From SPIE.

In SS-OCT, the spectral sampling rate and resolution are mainly limited by the instantaneous linewidth of the light source. Grulkowski et al. [19] proposed an SS-OCT system that adopted a vertical-cavity surface-emitting laser (VCSEL) as the light source and could realize a centimeter-scale imaging range by balancing resolution, sweep speed, and other parameters. However, within a single sweep period the output spectrum of the light source has unstable nonlinear variations, and between two sweep periods there is a random time delay from sweep triggering to sampling, which makes the acquired frequency-swept spectral signals extremely unstable. Therefore, complex phase compensation is necessary to make this source usable for OCT functional imaging. In addition, other methods, such as the pixel-shift method, the optical frequency comb method, and the virtual image phased array (VIPA) method, have been proposed to extend the measuring range of spectral-domain OCT [18]. The pixel-shift method improves the spectral sampling rate by mechanically shifting the detection pixels, which doubles the imaging range in theory; however, mechanical shifting suffers from low speed, instability, and hard-to-control shift accuracy. The spectral sampling function can be improved by adding an optical frequency comb, so that the spectral resolution is improved and the sensitivity falloff effect is mitigated. The VIPA method uses a VIPA to further divide the spectrum of the diffraction grating and can thus achieve an imaging range of 81.87 mm [18].


3.4.2 High Imaging Speed

To avoid the dynamic artifacts caused by spontaneous tissue jitter in living biological tissues, it is important to enhance the speed of in vivo imaging. As described above, the imaging speed of FD-OCT is far higher than that of TD-OCT because it needs no axial scanning mechanism. In SD-OCT, the imaging speed is mainly limited by the line-scan rate of the line-scan digital camera. In recent years, CMOS cameras have been used extensively in OCT systems thanks to improvements in the CMOS manufacturing process, achieving line-scan rates up to hundreds of kHz [6]. As the camera speed is mainly limited by the integration time and readout time of the pixels, the larger the number of operating pixels, the longer the required readout time. CMOS cameras can flexibly configure the number of operating pixels, so a higher imaging speed can be obtained by reducing that number. Potsaid et al. [20] introduced a CMOS camera into SD-OCT as the data acquisition device, increasing the axial line-scan rate to 310 kHz. To further improve the imaging speed, the integration and readout times of two cameras can be accurately controlled so that they perform acquisition alternately, doubling the imaging speed relative to a single camera; in this way a line-scan rate of 500 kHz was realized [20, 21]. In SS-OCT, the rapid development of frequency-swept laser technology (especially Fourier-domain mode-locked laser [FDML] technology) and of high-speed data acquisition cards has efficiently improved the imaging speed. Huber et al. [22] applied FDML technology to SS-OCT imaging for the first time; after several years of development, this technique has achieved high-speed imaging at the MHz scale [20, 23]. According to Formulae (3.18)–(3.20), as the OCT line-scan rate increases, the integration time for a single pixel is greatly shortened.
To ensure a sufficient signal, the probe power on the sample should be increased accordingly. However, because of the strict safety standards for light radiation on biological tissues, there is little room to increase the probe power. Therefore, improving the OCT imaging speed by raising the line-scan rate (i.e. reducing the pixel integration time) has its limits. To overcome them, parallel detection with multiple A-lines (even area or volume scanning) was proposed to improve the OCT imaging speed effectively [11]. In practice, however, the signal channels in parallel detection usually suffer from cross-talk. Recently, a spectrum-subdivision method based on a VIPA was proposed to realize multichannel parallel detection; parallel detection with 16 channels was successfully implemented, and the OCT imaging speed was improved to 3.2 MHz [24]. By utilizing the subtle differences among the spectra of the different signal channels, the cross-talk can in theory be effectively suppressed. However, the VIPA device adopted in this technique still has considerable loss and needs further improvement.

3.4.3 Functional OCT

Conventional OCT can realize 3D imaging of microscopic structures inside the tissue, whose contrast mainly comes from the difference in optical backscattering


characteristics of different tissues. To further improve the biological specificity of imaging, much research has been devoted to extracting physiological function information with OCT. Utilizing the Doppler effect, 3D high-resolution information on tissue movement, such as blood flow, can be obtained [16, 20, 25]. Combining spectral information, 3D acquisition of tissue absorption characteristics can be realized [26, 27], especially the extraction of parameters such as blood oxygen level. By combining the polarization effect, 3D acquisition of the birefringence characteristics of tissue can be achieved, which can be used to identify characteristic tissues such as the trabecular meshwork [28] and the optic fiber layer [29]. According to Formula (3.3), movement in the depth direction inside the sample leads to a change of the optical path difference Δz, causing a phase change of the OCT interference signal. According to the optical Doppler effect, the phase change φ and the motion displacement D satisfy the relationship φ = (4πn/λ0)·D, where n refers to the optical refractive index of the sample. The lower limit of the phase-based displacement measurement is determined by the SNR of the system and approximates λ0/(4πn·√R_SNR), usually at the nm level. The upper limit of the displacement measurement is set by the π-wrapping effect in the phase calculation and is about λ0/(4n) [16–18, 20, 30–33]. Based on the above theory, quantitative measurement of the blood flow rate [20, 30] and quantitative characterization of the movement of biological tissue [17, 19, 20] can be realized using the phase information of the OCT signal. As shown in Figure 3.8 [34], during the development of the heart, the interaction between the heart wall and the intracavity blood flow constitutes the biomechanical environment of embryonic heart development. Based on OCT microstructural imaging, using the phase information of the OCT signal to measure the fast blood flow rate and the slow strain rate of the deforming heart wall tissue simultaneously provides an effective tool for studying the changes of this biomechanical environment during heart development [34].

Figure 3.8 OCT images of a chick embryonic heart. (a) Longitudinal section; (b) cross section; (c) M-mode structural image along the vertical dashed line in (b), in which the boundary of the myocardial wall is indicated by the solid curves; (d) M-mode structural image superimposed with the radial strain rate of the myocardial wall and the Doppler velocity of blood flow (the vertical scale bar is 200 μm, and the horizontal bar is 0.1 seconds). Source: Li et al. [34], From SPIE.
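The phase-to-displacement relation and its two measurement limits can be sketched as follows; the wavelength, refractive index, and SNR values are illustrative assumptions:

```python
# Doppler OCT displacement from interframe phase change, phi = (4*pi*n/lambda0)*D,
# bounded below by phase noise and above by pi wrapping. Values are assumed.
import math

LAMBDA0 = 1310e-9          # center wavelength (m), assumed
N_TISSUE = 1.38            # tissue refractive index, assumed
R_SNR = 10 ** (30 / 10)    # system SNR of 30 dB, assumed

def displacement_from_phase(phi_rad: float) -> float:
    """Axial displacement D for a measured interframe phase change phi."""
    return phi_rad * LAMBDA0 / (4 * math.pi * N_TISSUE)

# Lower limit: phase noise floor ~ lambda0 / (4*pi*n*sqrt(R_SNR))
d_min = LAMBDA0 / (4 * math.pi * N_TISSUE * math.sqrt(R_SNR))
# Upper limit: pi phase wrapping ~ lambda0 / (4*n)
d_max = LAMBDA0 / (4 * N_TISSUE)

print(f"measurable displacement: {d_min*1e9:.2f} nm to {d_max*1e9:.2f} nm")
print(f"D at phi = pi/2: {displacement_from_phase(math.pi/2)*1e9:.2f} nm")
```

Note that a phase change of exactly π maps to the wrapping limit λ0/(4n), which is why fast flow must be sampled with short interframe intervals.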

3.5 OCT Angiography

Blood flow perfusion is an important index of the physiological function and pathological state of the body; thus, blood flow detection is of great significance for the evaluation and diagnosis of tissue physiological and pathological states [35–38]. In vivo perfusion imaging can help us understand the pathological mechanisms related to perfusion, such as stroke, Alzheimer's disease, myocardial infarction, cancer, and fundus diseases, improve the efficiency of diagnosis, and provide important theoretical guidance and evaluation means for disease treatment and drug development. Utilizing flowing red blood cells (RBCs) as intrinsic contrast agents, optical coherence tomography angiography (OCTA) enables fast and safe 3D visualization of vascular perfusion down to the capillary level [39–41]. Because of its noninvasive, noncontact, and label-free properties, OCTA has rapidly found use in scientific research and clinical applications [42], such as ophthalmology [43–46], dermatology [47–49], neuroscience [50–52], brain imaging [53–55], and oncology [36]. Since the first idea of using OCTA to detect blood flow (based on the Doppler principle), a large spectrum of OCTA algorithms has been proposed to improve the sensitivity and flow contrast of angiography. This section systematically reviews OCTA techniques, including the OCTA contrast origins and the SID-OCTA imaging algorithm, which combines shape, inverse SNR (iSNR), and decorrelation features.

3.5.1 OCTA Contrast Origins

In most situations, each OCT pixel encompasses a large number of small phasors, arising from a collection of sub-resolution scatterers, as shown in Figure 3.9a. The contribution of each small phasor can be expressed as a random sub-phasor β exp(jφ), where β and φ are the amplitude and phase components [56]. As plotted in Figure 3.9b, the OCT signal a exp(jθ) of an individual voxel is the coherent sum of all the sub-phasors in a coherence volume, whose size is determined by the probe beam area and the coherence length of the light source [57], i.e.:

$$a\exp(j\theta) = \sum_{l=1}^{M}\beta_l\exp(j\varphi_l) \tag{3.24}$$

where a and θ are the amplitude and phase components of the random sum phasor and l is the index over the M independent sub-phasors. On the premise of a very large M, which generally holds for measured samples, it can be rigorously demonstrated that the random sum phasor is a circular complex Gaussian random variable; moreover, its amplitude and intensity follow Rayleigh and exponential distributions, respectively [57–59].

Figure 3.9 (a) Schematic of an OCT structural cross section. Due to finite resolution, each pixel encompasses a number of sub-resolution scatterers corresponding to a collection of random sub-phasors. (b) The OCT signal of an individual pixel is a sum of the random sub-phasors. (c) The dynamic and static resultant phasors (i.e. OCT signals) have different probability distributions. (d) In OCTA, an additional contrast algorithm is employed to quantify the magnitude of dynamic changes. The static surrounding tissues have low magnitude values (dashed lines) and are removed by an appropriate threshold, leaving the dynamic signals to generate angiograms. Averaging further suppresses the variances of the distributions (filled areas). σ_b and σ_f are the standard deviations of the static background and dynamic flow, respectively. Source: Li et al/The Optical Society.

The theoretical statistical characteristics of the OCT signal and the amplitude-difference (AD) OCTA signal are summarized in Table 3.1, where

Table 3.1 Statistical characteristics of OCT amplitudes and AD OCTA signals [58].

Item     | OCT amplitude $|a_{d/s}|$ | AD OCTA $|a_{AD,d/s}|$
Static   | Gauss: $\frac{1}{\sqrt{2\pi}\,\sigma_s}\exp\left[-\frac{(a_s - C)^2}{2\sigma_s^2}\right]$ | Gauss: $\frac{1}{\sqrt{\pi}\,\sigma_s}\exp\left(-\frac{a_{AD,s}^2}{4\sigma_s^2}\right)$
Dynamic  | Rayleigh: $\frac{a_d}{\sigma_d^2}\exp\left(-\frac{a_d^2}{2\sigma_d^2}\right)$ | Truncated Gauss: $\frac{2}{\sqrt{2\pi}\,\sigma_d}\exp\left(-\frac{a_{AD,d}^2}{2\sigma_d^2}\right)$

Source: Adapted from Table 1 of Cheng et al. [58].

σ_s and σ_d are the standard deviations of the statistical distributions of the static background and dynamic blood flow signals, respectively, C is a constant phasor, and the subscripts d and s indicate dynamic and static, respectively. In dynamic flow regions, the moving RBCs induce a rapid change in the spatial distribution of the sub-resolution scatterers, corresponding to time-variant sub-phasors. In static tissue regions, however, the interference pattern is stationary, and consequently the sub-phasors are stable. Besides, additive zero-mean complex Gaussian noise induces an additional fluctuation on top of the true OCT signals, generating the resultant OCT signals. As shown in Figure 3.9c, the probability density functions (PDFs) of the resultant OCT signals in dynamic flow regions and static tissue regions differ in the temporal dimension. Although such a statistical difference highlights the potential to differentiate dynamic and static signals, it requires a large number of repeated samples to reduce the classification error, which is almost impossible in practice. Therefore, additional contrast algorithms are necessary to extract the dynamic flow information from noisy backgrounds effectively. With an OCTA algorithm, a threshold is usually set to remove static tissues with low magnitude values, as shown by the dashed curves in Figure 3.9d. Besides, further averaging strategies are widely used to reduce the residual overlap (the solid curves in Figure 3.9d), as will be introduced in Section 3.5.2.
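The random-phasor-sum statistics behind Formula (3.24) can be checked with a small Monte Carlo sketch (the number of sub-phasors and trials are arbitrary assumptions):

```python
# Monte Carlo check: summing many unit sub-phasors with random phases yields
# a circular complex Gaussian resultant whose amplitude is Rayleigh distributed.
import cmath, math, random

random.seed(0)
M_SUB = 100      # sub-phasors per voxel, illustrative
TRIALS = 5000    # simulated voxels

amplitudes = []
for _ in range(TRIALS):
    # Coherent sum of unit sub-phasors with uniformly random phases (Eq. 3.24)
    total = sum(cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi))
                for _ in range(M_SUB))
    amplitudes.append(abs(total))

# For unit-amplitude sub-phasors the resultant has per-component variance M/2,
# so the amplitude is Rayleigh with sigma = sqrt(M/2), mean sigma*sqrt(pi/2).
sigma = math.sqrt(M_SUB / 2)
mean_theory = sigma * math.sqrt(math.pi / 2)
mean_sample = sum(amplitudes) / TRIALS
print(f"sample mean amplitude: {mean_sample:.3f}")
print(f"Rayleigh prediction:   {mean_theory:.3f}")
```

The sample mean lands close to the Rayleigh prediction, illustrating why the static OCT amplitude statistics in Table 3.1 take the forms they do.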

3.5.2 SID-OCTA Imaging Algorithm

Up to now, to accommodate various system configurations and situations, a wide spectrum of OCTA algorithms has been proposed to estimate the motion magnitude of RBCs and extract dynamic blood flow from static tissue; they are summarized in Table 3.2. Generally, OCTA algorithms distinguish dynamic flow from static tissue by analyzing the temporal changes of the OCT signal between successive tomograms acquired at the same location. In terms of signal utilization, phase-based algorithms enable high sensitivity, but they are highly susceptible to noise, especially when the local SNR is low [61, 62]. Amplitude-based algorithms, on the other hand, discard the phase information, which possesses the highest sensitivity and the ability to detect sub-wavelength motion; it is therefore difficult for them to realize highly sensitive recognition of the microvasculature [63–65]. By contrast, complex-based algorithms offer high motion contrast by

Table 3.2 Main research groups and their proposed OCTA algorithms [60].

Research groups          | Signal components   | Operations
Makita et al. [61]       | Phase               | Differential
Fingler et al. [62]      | Phase               | Variance
Mariampillai et al. [63] | Amplitude/intensity | Variance
Enfield et al. [64]      | Amplitude/intensity | Decorrelation
Jia et al. [65]          | Amplitude/intensity | Decorrelation
Wang et al. [66]         | Complex             | Differential
Li et al. [56, 67, 68]   | Complex             | Decorrelation

Source: Reproduced from Table 1 of Li and Li [60].

comprehensively using both phase and amplitude information [56, 67–69]. As for signal processing, the decorrelation (or correlation) operation calculates the dissimilarity (or similarity) between consecutive B-scans obtained at the same location [52, 64], which overcomes the insufficient sample size of the differential operation [61, 66] and the susceptibility to motion noise of the variance operation [62, 63]. Besides, the decorrelation technique is intrinsically insensitive to disturbances caused by overall variation of the light source intensity [70] and less sensitive to the Doppler angle [51, 71]. In this section, we focus on the complex-decorrelation algorithm because of these advantages. The local complex decorrelation D is calculated with a 4D spatio-temporal averaging kernel, defined as:

$$D = 1 - \frac{|C|}{I} \tag{3.25}$$

$$C = \frac{1}{S(T-1)}\sum_{s=1}^{S}\sum_{t=1}^{T-1} A(s_0+s,\,t)\cdot A^{*}(s_0+s,\,t+1) \tag{3.26}$$

$$I = \frac{1}{ST}\sum_{s=1}^{S}\sum_{t=1}^{T} A(s_0+s,\,t)\cdot A^{*}(s_0+s,\,t) \tag{3.27}$$

where C is the first-order auto-covariance at spatial index s_0 and I is the local zeroth-order auto-covariance, i.e. the intensity. A(s, t) is the complex resultant OCT signal, made up of the true OCT signal and a zero-mean complex circular Gaussian noise, and * denotes the complex conjugate. s and t are the indices in the three spatial dimensions and the temporal dimension, with averaging kernel sizes of S and T, respectively. As shown in Figure 3.10b, the OCT signals of pixels in dynamic flow areas change over time, while signals from static tissue remain steady. Accordingly, by calculating the correlation of OCT signals acquired at the same location at different time points, dynamic flow shows low correlation (high decorrelation) while static tissue exhibits high correlation (low decorrelation), and the two can therefore be distinguished. Furthermore, the decorrelation value calculated by the proposed algorithm can be used in parallel to correct the overall image misalignment caused by large movement


Figure 3.10 A schematic diagram of the calculation of complex decorrelation. (a) A simplified figure of a vessel surrounded by static tissues. (b) OCT signals for pixels in dynamic flow areas and static tissues acquired from successive frames. (c) Plot of correlation versus lag time for signals in dynamic flow areas and static tissue regions. Source: Peng Li.
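The complex-decorrelation computation of Formulae (3.25)-(3.27) can be sketched on synthetic data; the signal model and kernel below are illustrative assumptions (temporal averaging only, i.e. a single-pixel spatial kernel):

```python
# Complex decorrelation D = 1 - |C|/I on a synthetic stack of repeated B-scans:
# left half static (stable phasors), right half dynamic (phasors redrawn
# between repeats). Signal model and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(1)
T, H, W = 4, 32, 32   # repeats, depth pixels, lateral pixels
static = (1.0 + 0.02 * rng.standard_normal((T, H, W // 2))) * np.exp(1j * 0.3)
dynamic = (rng.standard_normal((T, H, W // 2))
           + 1j * rng.standard_normal((T, H, W // 2)))
A = np.concatenate([static, dynamic], axis=2)  # complex OCT signal A(s, t)

def complex_decorrelation(A: np.ndarray) -> np.ndarray:
    """D = 1 - |C|/I per Formulae (3.25)-(3.27), temporal kernel only."""
    C = np.mean(A[:-1] * np.conj(A[1:]), axis=0)  # first-order auto-covariance
    I = np.mean(np.abs(A) ** 2, axis=0)           # zeroth order (intensity)
    return 1.0 - np.abs(C) / I

D = complex_decorrelation(A)
print("mean D (static half): ", D[:, : W // 2].mean())   # small
print("mean D (dynamic half):", D[:, W // 2 :].mean())   # much larger
```

In a practical pipeline the averaging kernel would also span the S spatial neighbors, which tightens the static and dynamic distributions, as Figure 3.9d indicates.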


Figure 3.11 A mouse cortex microangiography via complex cross-correlation-based OCTA. (a) Three-dimensional rendering image; (b) corresponding en face angiogram; (c) OCT structural cross section overlaid with blood flow signals at the position marked by the yellow dashed line in (b). Source: Guo et al. [68], From IOP Publishing.

of biological tissues. Generally, the average value of the entire correlation map can be used to evaluate the quality of the image. Therefore, by changing the amount of offset between adjacent B-scans via pixel shifting, the influence of large movements of biological tissue can be greatly reduced, as illustrated by Figure 3.11 [68]. Figure 3.11 shows an angiographic image of the mouse cerebral cortex microvascular network obtained by the complex cross-correlation-based OCTA method. As shown in Figure 3.11a, the vasculature can be clearly visualized. Figure 3.11b shows the maximum intensity projection along the depth (Z) direction of Figure 3.11a. There is high blood flow contrast between the blood vessels and the surrounding


background tissue, and the connections between blood vessels are clearly visible. Figure 3.11c shows the two-dimensional (Z, X) OCT structural cross section overlaid with blood flow signals along the yellow dashed line in Figure 3.11b. Due to the limited scan rate and signal acquisition rate of the system, the complex cross-correlation OCTA algorithm extracts microvascular signals with a two-dimensional operation between B-scan frames, and the overall dislocation of the image is corrected in the process. However, the misalignment caused by biological tissue movement is often three-dimensional. If the system speed is further increased and a volume-based 3D operation is adopted, not only can the overall dislocation of the image be corrected more accurately, but the detection sensitivity of the blood flow signal can also be improved. Besides, it should be noted that the computed motion index is not simply related to the motion magnitude of RBCs but is also influenced by the local signal intensity, or local SNR. An SNR-dependent motion index degrades the classification accuracy and the visibility of the vascular network [43, 46] and confuses the interpretation of hemodynamic quantification results [71, 73, 74]. Therefore, several methods have been proposed to suppress the motion artifacts induced by random noise. The simplest is to set an intensity threshold and remove all voxels with low SNR [64, 68], but a balance must be struck between eliminating noise and retaining true flow in the vasculature [75]. To make full use of the acquired OCT data, SNR-adaptive algorithms were proposed. Makita et al. proposed a noise-immune algorithm that estimates the complex correlation coefficient of the true OCT signals rather than of the measured data [76]. Braaf et al. modified the complex differential variance (CDV) algorithm by normalizing the CDV signal with analytically derived upper and lower limits [77].
However, both of those modified algorithms involve complicated estimation of OCT parameters. Rather than correcting the SNR-dependent OCTA signal, an alternative solution is to build an SNR-adaptive classifier. The initial SNR-adaptive classifiers were based on numerical analyses. Zhang et al. distinguished flow signal from static background in a feature space by a learning method [78]. Gao et al. built a classifier by fitting the relationship between reflectance and decorrelation in the foveal avascular zone (FAZ) with linear regression analysis [79]. Li et al. solved the depth-dependent motion-based classification by performing a histogram analysis and differentiating dynamic flow from static tissue through fitting [80]. Recently, Huang et al. rigorously derived the theoretical asymptotic relation of decorrelation to iSNR and explored the distribution variance with numerical simulation [81]. In the proposed iSNR-decorrelation (ID-OCTA) algorithm, a range of 3σ was used as the distribution boundary of static signals. Accordingly, the classification line D_C was defined as:

$$D_C = E(D) + 3\sigma = \left(1 + \frac{3G}{\sqrt{N}}\right)\mathrm{iSNR} \tag{3.28}$$

where G is the coefficient-of-variation (CoV) parameter, approximately equal to 1.5, and N is the total temporal–spatial kernel size. A flow phantom experiment was implemented to validate the feasibility of the proposed ID-OCTA algorithm. As shown in Figure 3.12a, the structural cross section


Figure 3.12 Flow phantom data validate the feasibility of ID-OCTA. (a) Structural (intensity) cross section of the flow phantom. The left half region is the static area and the right half region is the flow area. The dashed boxes indicate the parts used for ID space mapping. The inset is the averaged depth profile indicating the SNR decay. (b) Decorrelation mapping of the cross section. (c) ID space mapping of the phantom data and the proposed classifier. The static and noise voxels are marked in blue, and the dynamic voxels with different B-scan intervals are marked in red (9.9 ms) and green (3.3 ms). The inset is an enlarged view of the dashed-box region. The corresponding theoretical asymptotic relation is the black solid curve, the ID classifier is the magenta dashed line using Eq. (3.5), and the intensity threshold in correlation mapping OCT (cmOCT) is the green dashed line. The circled area indicates flow signals excluded by cmOCT. (d) Cross-sectional angiogram by the proposed ID-OCTA; the black dashed line indicates the dynamic boundary determined by cmOCT. Source: Huang et al. [81], IEEE.

offers prior knowledge of static (left half) and dynamic (right half) regions. To avoid ambiguity at the static–dynamic boundary, only the rectangular regions marked with dashed boxes were used for further quantitative analysis. The decorrelation mapping is illustrated in Figure 3.12b. Generally, the dynamic region presents a high decorrelation value and the static region shows a low value. However, the SNR (or intensity) of the probing light decays exponentially with increasing penetration depth, as shown by the depth profile inset in Figure 3.12a. Accordingly, due to the influence of random noise, static regions at deep positions also exhibit high decorrelation, as indicated by the yellow ellipse in Figure 3.12b. Figure 3.12c illustrates the distribution of static signals (blue points) and dynamic signals (red and green points) with different B-scan intervals in ID space with log-scaled iSNR. The static and noise signals distribute around the theoretical asymptotic ID relation [the black curve in Figure 3.12c] and can be effectively removed by the ID classification line determined by Eq. (3.28) [the magenta line in Figure 3.12c], which demonstrates the validity of the proposed ID-OCTA algorithm.
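The noise-induced rise of decorrelation in deep, low-SNR static regions can be reproduced with a toy simulation; the sketch below assumes a constant static phasor plus independent circular Gaussian noise, with illustrative SNR values and kernel size, and is not the simulation used in [81]:

```python
import numpy as np

# Toy simulation of noise-induced decorrelation in static regions:
# a constant static phasor plus independent circular Gaussian noise.
# SNR values and ensemble size are illustrative.

rng = np.random.default_rng(1)

def decorrelation(a0, a1):
    cross = np.abs(np.sum(a0 * np.conj(a1)))
    auto = (np.sum(np.abs(a0) ** 2) + np.sum(np.abs(a1) ** 2)) / 2.0
    return 1.0 - cross / auto

def static_decorrelation(snr_linear, n=20000):
    s = np.full(n, np.sqrt(snr_linear), dtype=complex)  # static signal
    def noise():  # unit-power circular Gaussian noise
        return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    return decorrelation(s + noise(), s + noise())

D_high = static_decorrelation(100.0)  # high SNR: D stays near 0
D_low = static_decorrelation(1.0)     # iSNR = 1: D rises toward ~0.5
```

In this toy model the static decorrelation approaches iSNR/(1 + iSNR), i.e. roughly iSNR for weak signals, mirroring the asymptotic trend of the static cluster in Figure 3.12c.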

3.5 OCT Angiography

In addition, the signals in dynamic regions present higher decorrelation when calculated with a larger B-scan interval, corresponding to the conclusion in Figure 3.10c. In contrast, in correlation mapping OCT (cmOCT), a global intensity threshold was set to remove all signals without sufficient intensity [green dashed line in Figure 3.12c]. Therefore, ID-OCTA presents higher visibility than cmOCT in deep dynamic regions, as indicated by the preserved region below the black dashed line in Figure 3.12d. In addition, due to the limited data sets available in practice and other inevitable disturbance factors, such as breath and heartbeat, it is almost impossible to completely remove static tissues with OCTA algorithms alone [80]. Therefore, traditional image processing methods (including median filtering, Gaussian image smoothing, and other denoising methods) were widely used to enhance the flow contrast and improve vascular connectivity [72]. In addition to these customary methods, many algorithms have been introduced that utilize the continuous and tubular-like patterns of vessels in 3D space [refer to Figure 3.13a], mainly including Gabor filtering [82] and Hessian-based methods [44, 83, 84]. A widely used modified vesselness function V0(s) was defined as follows [80, 85, 86]:

V0(s) = 0, if λ3 > 0;
V0(s) = [1 − exp(−RA²/(2α²))] · exp(−RB²/(2β²)) · [1 − exp(−RC²/(2θ²))], otherwise   (3.29)

where s is the detection scale, RA = |λ2|/|λ3|, RB = |λ1|/√(|λ2 λ3|), RC = √(λ1² + λ2² + λ3²), and λ1, λ2, λ3 are the eigenvalues of the Hessian matrix (|λ1| < |λ2| < |λ3|). In addition, α, β, and θ are thresholds controlling the sensitivity of the filter to the measures RA, RB, and RC. To detect vessels of different sizes, the vesselness measure was analyzed at different scales. The response of the shape filter will be maximal at a scale that

Figure 3.13 Schematic of the vascular shape and performance of 3D Hessian filtering. (a) Ideal vessel shows continuous and tubular-like patterns. En face angiograms before (b) and after (c) Hessian-based shape filtering. (d) and (e) are enlarged views of the enclosed areas in (b) and (c), respectively. Scale bar = 400 μm. Source: Li et al. [80], From Optica Publishing Group.

approximately matches the detected vessel size. Accordingly, the final vesselness estimation was defined as:

V0(r) = max_{smin ≤ s ≤ smax} V0(s, r)   (3.30)

where r is the position in the original image, and smin and smax are the minimum and maximum vessel sizes expected to be detected. Furthermore, considering the elongated tail artifacts, Li, P. et al. proposed to calculate the second-order derivatives using a 3D anisotropic Gaussian kernel with an enlarged scale in the depth direction [80]:

G(r, s) = [1/√((2πs²)³/|Σ|²)] · exp(−rᵀ Σ² r/(2s²))   (3.31)

where Σ = diag(a1, a2, a3) is the anisotropic matrix, and a is the anisotropic factor. The third factor a3 corresponds to the depth direction and primarily depends on the tail length. As shown in Figure 3.13b, considerable noise appears in the initial en face angiogram. In contrast, both the flow contrast and the vascular connectivity were significantly improved with 3D Hessian analysis-based shape filtering [see Figure 3.13c and compare Figure 3.13d with Figure 3.13e]. Besides, some other novel shape filter algorithms were also proposed to improve OCTA images. Yousefi, S. et al. compounded Hessian-filtered OCTA results and intensity images with a weighted average scheme, which mitigates the Hessian filter's sensitivity to the scale parameters [86]. Li, A. et al. proposed another hybrid strategy: a large-vessel mask was generated by simply thresholding the filtered OCTA image, and the microvessels were obtained by top-hat enhancement and optimally oriented flux (OOF) algorithms [87]. Another recently proposed method rotates ellipses to find the most likely local orientation of vessels and then performs median filtering with the best-matching elliptical directional kernel [72].
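The multiscale vesselness of Eqs. (3.29)–(3.30) can be sketched for a single voxel as follows, assuming the Hessian eigenvalues are already available and sorted by magnitude; the thresholds α, β, θ and the eigenvalue triples are illustrative rather than the authors' settings, and the first factor is written in the standard Frangi form:

```python
import numpy as np

# Sketch of the vesselness of Eqs. (3.29)-(3.30) for one voxel, given
# Hessian eigenvalues sorted as |l1| <= |l2| <= |l3|. Thresholds and
# eigenvalues are illustrative; the degenerate l3 = 0 case is folded
# into the zero branch for safety.

def vesselness(l1, l2, l3, alpha=0.5, beta=0.5, theta=100.0):
    if l3 >= 0:  # bright tubular structures require l2, l3 < 0
        return 0.0
    Ra = abs(l2) / abs(l3)                     # plate vs. line discrimination
    Rb = abs(l1) / np.sqrt(abs(l2 * l3))       # blob discrimination
    Rc = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)  # overall structure strength
    return ((1 - np.exp(-Ra ** 2 / (2 * alpha ** 2)))
            * np.exp(-Rb ** 2 / (2 * beta ** 2))
            * (1 - np.exp(-Rc ** 2 / (2 * theta ** 2))))

def multiscale_vesselness(eigs_per_scale):
    # Eq. (3.30): keep the maximum response over the detection scales.
    return max(vesselness(*e) for e in eigs_per_scale)

# An ideal bright vessel (l1 ~ 0, l2 ~ l3 << 0) scores close to 1.
v = multiscale_vesselness([(0.01, -200.0, -210.0), (0.02, -50.0, -60.0)])
```

In a full pipeline this per-voxel measure is evaluated on Gaussian-smoothed second derivatives at each scale s, which is where the anisotropic kernel of Eq. (3.31) enters.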

3.6 OCTA Quantification

3.6.1 Morphological Quantification

Quantification of the morphological features of OCTA images enables automatic binarization by using the quantification index as feedback. An SNR-adaptive binarization method based on the linear boundary of static signals in ID space and the binary image similarity (BISIM) index was proposed. To quantitatively evaluate the linear SNR-adaptive threshold, we defined an ID threshold DT [see the red line in Figure 3.14f]:

DT = cot(αT) · iSNR   (3.32)

where αT is the angle defining the slope of the ID threshold line DT. A pixel was assigned white if its ID angle α ∈ [0, αT], and black otherwise. Changing the threshold αT from 1° to 90° with an interval of 1°, a binary image sequence can be generated [see Figure 3.14a] and can be roughly divided into three classes: (i) the binary image has only dynamic flow signal when αT ∈ [1°, α1] [see Figure 3.14b1];

Figure 3.14 Proposed ID-binary image similarity (ID-BISIM) thresholding method for data-based 3D-adaptive binarization. (a) Binary image sequence created by changing the ID threshold αT from 1° to 90° with an interval of 1°. Typical binary OCTA cross sections when (b1) αT = 5°, (b2) αT = 25°, and (b3) αT = 40°. Cross-sectional maps of the morphologic vector corresponding to (c1) b1, (c2) b2, and (c3) b3. Subtraction of the paired vector maps was performed to characterize the local structure differences between the two binary images: (d1) intra-class, (d2) inter-class, and (d3) inter-class. (e) Plot of the BISIM index versus the ID thresholds α1 and α2. The BISIM index is minimal when α1 = 7° and α2 = 37°. (f) Voxels projected into the ID space. ID-BISIM threshold αT = 7°, as indicated by the red solid line. (g) Binarized OCTA cross section with the ID-BISIM method. Source: Zhang et al. [88], AME Publishing Company.

(ii) the binary image has both flow signal and static tissue when αT ∈ (α1, α2] [see Figure 3.14b2]; (iii) the binary image has flow signal, static tissue, and noise when αT ∈ (α2, 90°] [see Figure 3.14b3]. The binary images of the same class have a similar structure, while the images of different classes exhibit different structures. To estimate the intra-class similarity and the inter-class difference, we defined a morphologic vector array v⃗(α, z, x) to represent the structure of the binary image B(α, z, x). As shown in Figure 3.14c, selecting a window from the binary image and setting the center of the window as the origin, the vector v⃗(α, z, x) is the coordinate center of the nonzero pixels in the window:

v⃗(α, z, x) = Σ_{h=−(k−1)/2}^{(k−1)/2} Σ_{j=−(k−1)/2}^{(k−1)/2} B(α, z + h, x + j) · (h, j)   (3.33)

where a square window with k pixels is used; here k is set to 35 pixels for an image with 300 × 300 pixels in total. h and j are the pixel indexes inside the window. The cross-sectional vector maps v⃗(α, z, x) reflect the local structure of the binary image at the position (z, x) [see Figure 3.14c]. The vector cross sections within each class were paired. The Euclidean distances between the paired vector maps were computed to characterize the local structure difference between two binary images [see Figure 3.14d], and they were further summed over the full z–x dimensions with a step of k as the global difference:

Δv(m, l) = Σ_{z=1}^{Z} Σ_{x=1}^{X} |v⃗(m, z, x) − v⃗(l, z, x)|   (3.34)


where |⋅| denotes the Euclidean distance, Z and X are the numbers of pixels in the depth and width directions of the image, respectively, and m and l are the α values of the paired vector maps v⃗(α, z, x). The binary images with similar structure [i.e. intra-class, see Figure 3.14d1] would have a smaller Δv score than the images with distinct structures [i.e. inter-class, see Figure 3.14d2–d3]. The binary image similarity of each class (V) was defined as the summation over all the paired images within the class:

Vi = Σ_{m,l} Δv(m, l),   m, l ∈ class i and m ≠ l   (3.35)

where i is the index of the class, and

BISIM = (Σ_{i=1}^{3} Vi) / (Σ_{i=1}^{3} C_{ni}^2)   (3.36)

where ni is the number of vector maps in class i, and C_{ni}^2 = ni(ni − 1)/2 is the number of image pairs in class i. The BISIM value was plotted versus the angles α1 and α2 [see Figure 3.14e, f]. The minimum of BISIM means that the obtained angles α1 and α2 divide the signals in the ID space into three groups with the minimum structural difference within each group, so the corresponding angle α1 was assigned to the threshold αT [see Figure 3.14f]. In this work, α1 and α2 were initially set to 15° and 30°, respectively. The gradient descent method [80] was used to quickly determine the minimum value of BISIM and the corresponding coordinates (α1*, α2*). Accordingly, the binarized cross-sectional angiogram was generated by applying the ID threshold with αT = α1* [see Figure 3.14g].
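The morphologic vector of Eq. (3.33) and the map difference of Eq. (3.34) can be sketched as follows; `morph_vector` and `map_difference` are hypothetical helpers, and the window size k = 5 on a small synthetic image is illustrative (the text uses k = 35 on 300 × 300 images):

```python
import numpy as np

# Sketch of the morphologic vector (Eq. 3.33) and the pairwise map
# difference (Eq. 3.34) on a small synthetic binary image.

def morph_vector(B, z, x, k=5):
    """Sum of window-relative coordinates of nonzero pixels (Eq. 3.33)."""
    r = (k - 1) // 2
    v = np.zeros(2)
    for h in range(-r, r + 1):
        for j in range(-r, r + 1):
            zz, xx = z + h, x + j
            if 0 <= zz < B.shape[0] and 0 <= xx < B.shape[1] and B[zz, xx]:
                v += (h, j)
    return v

def map_difference(Bm, Bl, k=5):
    """Eq. (3.34): summed Euclidean distance between paired vector maps."""
    total = 0.0
    for z in range(0, Bm.shape[0], k):       # step k over the z-x plane
        for x in range(0, Bm.shape[1], k):
            total += np.linalg.norm(morph_vector(Bm, z, x, k)
                                    - morph_vector(Bl, z, x, k))
    return total

B = np.zeros((20, 20))
B[8:12, :] = 1                      # a horizontal "vessel" band
same = map_difference(B, B.copy())  # identical binary images -> 0.0
```

Identical images within a class yield a zero (or small) Δv score, while shifting or adding structure, as happens between classes, produces a strictly positive score, which is what the BISIM index aggregates.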

3.6.2 Hemodynamic Quantification

In neuroscience, neural activity highly correlates with cerebral blood flow (CBF) changes, which is termed neurovascular coupling [89]. Accordingly, the hemodynamic response has been widely used to assess brain function [90]. Research has shown that the OCTA decorrelation signal closely correlates with hemodynamic parameters, such as flow velocity and flux [73, 91, 92], but the limited dynamic range (bounded by the lowest detectable flow and the fastest distinguishable flow) and the uncertainty of the decorrelation estimation hinder its further development [93]. To enhance the speed range of the decorrelation estimation, variable interscan time analysis (VISTA), which computes decorrelation from paired B-scans with different interscan times, was proposed [94, 95]. As shown in Figure 3.15a, the computed decorrelation is jointly influenced by the interscan time and the flow velocity. Another widely used approach was to enlarge the averaging kernel, in which case both the fastest distinguishable flow and the decorrelation uncertainty were improved [97, 98], as shown in Figure 3.15b. However, the kernel size is limited by the spatial resolution and the cost of imaging. Recently, Chen, R. et al. proposed an adaptive spatial–temporal (ST) kernel to improve the performance of OCTA decorrelation in monitoring stimulus-evoked hemodynamic responses [96]. In this study, decorrelation was computed with a

Figure 3.15 The numerical simulation results. (a) Plot of decorrelation versus flow velocity. Four different interscan times (4.0, 1.6, 0.8, and 0.4 ms) were selected and plotted in different colors. (b) Plots of the mean decorrelation (red) and standard variance (blue) versus the ensemble size (N) based on the simulation of totally dynamic signals. (c) ID space mapping of the simulated OCT voxels with different dynamic factors (black lines, totally static; blue lines, partially dynamic; red lines, totally dynamic) and ensemble size (N) (cyan patch: spatial kernel with 40 times averaging, N = nx nz = 15; pink patch: ST-kernel, N = nx nz nf nr = 600, setting nx = 3, nz = 5, nf = 4, nr = 10). Source: Chen et al. [96]/With permission of IEEE.

ST-kernel that has two general dimensions: spatial (denoted as S, including x and z) and temporal (denoted as T, including frame tf and trial tr):

DST(tf) = 1 − |ΣT ΣS A(tf, tr, x, z)A*(tf + 1, tr, x, z)| / {[ΣT ΣS A(tf, tr, x, z)A*(tf, tr, x, z) + ΣT ΣS A(tf + 1, tr, x, z)A*(tf + 1, tr, x, z)]/2}   (3.37)

where A is the complex OCT signal, ΣS is defined as Σ_{x=1}^{nx} Σ_{z=1}^{nz} and denotes the summation within the spatial sub-kernel, and nx and nz are the numbers of phasor pairs in the x and z directions, respectively. Moreover, ΣT is defined as Σ_{tr=1}^{nr} Σ_{tf=1}^{nf}, indicating the addition operation in the temporal dimension T, where nf and nr are the numbers of spatial sub-kernels in the B-frame and trial directions. DST is the decorrelation computed with an ST-kernel, which is composed of a total of NS·NT phasor pairs: NS = nx·nz phasor pairs in each spatial sub-kernel, and NT = nf·nr spatial sub-kernels. In contrast, the conventional estimator DS is computed with a spatial (S) kernel:

DS(tf) = 1 − (1/NT) ΣT |ΣS A(tf, tr, x, z)A*(tf + 1, tr, x, z)| / {[ΣS A(tf, tr, x, z)A*(tf, tr, x, z) + ΣS A(tf + 1, tr, x, z)A*(tf + 1, tr, x, z)]/2}   (3.38)
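The ST-kernel estimator of Eq. (3.37) can be sketched as follows, assuming the whole array serves as a single kernel (a practical implementation slides the kernel over the volume); the array shape and values are synthetic:

```python
import numpy as np

# Sketch of the ST-kernel decorrelation of Eq. (3.37). A is a complex
# OCT array indexed as [frame, trial, x, z]; values are synthetic.

def decorrelation_st(A, tf):
    a0 = A[tf]        # phasors of frame tf over all trials and positions
    a1 = A[tf + 1]    # phasors of the adjacent frame
    cross = np.abs(np.sum(a0 * np.conj(a1)))
    auto = (np.sum(np.abs(a0) ** 2) + np.sum(np.abs(a1) ** 2)) / 2.0
    return 1.0 - cross / auto

rng = np.random.default_rng(0)
# Two identical adjacent frames (perfectly static tissue) give D = 0;
# fully independent speckle would push D toward 1.
frame = rng.standard_normal((2, 3, 5)) + 1j * rng.standard_normal((2, 3, 5))
A_static = np.stack([frame, frame])     # shape: [frame, trial, x, z]
D = decorrelation_st(A_static, 0)
```

The only difference from the conventional S-kernel of Eq. (3.38) is where the modulus is taken: Eq. (3.37) sums the complex cross terms over both S and T before taking the magnitude, whereas Eq. (3.38) takes the magnitude per spatial sub-kernel and then averages over T.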

Figure 3.15c illustrates the benefit of the ST-kernel over the traditional S-kernel. The ST-kernel increased the decorrelation saturation limit from 0.77 [cyan in Figure 3.15c] to 0.96 [pink in Figure 3.15c] by enlarging the ensemble size by 40 times. On the other hand, the uncertainty of the two methods was similar because the same number of samples was used. However, the decorrelation calculated with the ST-kernel is susceptible to bulk motion because of the use of phasor pairs in the temporal dimension. To suppress the influence of bulk motion, the spatial sub-kernel was adaptively changed in the temporal dimension by solving a maximum entropy model. In the experiment, a transparent electrocorticographic (ECoG) electrode array was used to deliver electrical pulses to the desired area in the somatosensory cortex [Figure 3.16a].

Figure 3.16 Stimulus-evoked hemodynamic responses in rat cortex in vivo. (a) Photograph of the rat cortex and a transparent electrocorticographic (ECoG) electrode array. The yellow dots indicate the active electrodes. The red-dashed line indicates the location of the OCT B-scan. (b) Cross-sectional image (x–z) with contrast of decorrelation. (c) ID-OCTA cross-sectional image (x–z) with contrast of intensity-weighted decorrelation. (d) Time courses of the hemodynamic response to the stimulus (0.5 mA/20 pulses), using different settings: nonadaptive ST-kernel without (gray) and with (black) EMD filtering, adaptive ST-kernel without (cyan) and with (red) EMD. Hemodynamic signals were averaged over the whole B-frame (all the vessels in (c)). The bold arrow indicates an abrupt motion artifact at time t = 0.3 seconds. The adaptive ST-kernel is highly immune to the decorrelation artifacts induced by bulk motion. (e) Time courses of hemodynamic responses to different stimuli (red: 0.5 mA/20 pulses, blue: 0.2 mA/20 pulses, green: 0.5 mA/1 pulse, and black: blank) with S- (dashed curves) and adaptive ST-kernels (solid curves). Averaging was performed over a single vessel (red arrow in (c)). EMD filtering was applied to all curves. The double-headed arrows indicate the separability between different stimuli. The adaptive ST-kernel enables a larger dynamic range and a superior separability between different stimuli. (f) Cross-sectional mapping of the stimulus-evoked hemodynamic response (0.5 mA/20 pulses, t = 4.5 seconds). The pseudo-color indicates the response magnitude and the black area is the surrounding tissue. (g) Enlarged views of the blood vessel (yellow arrow in (f)). Stimuli: 0.5 mA/20 pulses (left) and 0.2 mA/20 pulses (right); kernels: S (top) and adaptive ST (bottom); time: averaged from t = 4 to 5 seconds. Scale bar = 0.6 mm. Source: Chen et al. [96], IEEE.

The stimulus-evoked hemodynamic responses were presented as a decorrelation change by subtracting the mean decorrelation value of the baseline, as shown in Figure 3.16d–g. Comparing the unfiltered raw curves [cyan and gray in Figure 3.16d], the adaptive ST-kernel (cyan) effectively suppressed the abrupt motion artifacts [red arrow in Figure 3.16d] and high-frequency fluctuations without changing the overall shape of the hemodynamic curves. In the adaptive ST-kernel, the effective suppression of the high-frequency fluctuations reduces the standard deviation (SD) of the baseline (defined as the time range [−1, 0]) by 57% ± 20% and helps to determine the onset time. Moreover, the adaptive ST-kernel had a larger dynamic range than the S-kernel and thus offered superior separability between different stimuli. In terms of the dynamic range, the decorrelation values were averaged from 0 to 5.25 seconds for each curve and were improved by 48% ± 13% (0.5/20), 45% ± 12% (0.2/20), and 49% ± 18% (0.5/1) with the adaptive ST-kernel compared with the S-kernel [see Figure 3.16e]. As for the improvement of separability between different stimuli, the decorrelation differences between the 0.5/20 and 0.2/20 curves were separately averaged from 0 to 5.25 seconds

for the S and ST curves [see the double-headed arrows in Figure 3.16e], and the separability was enlarged by 180% ± 266% with the ST-kernel. A cross-sectional mapping of the stimulus-evoked hemodynamic response with high spatial and temporal resolution can be generated [Figure 3.16f, t = 4.5 seconds]. As shown in Figure 3.16g, an enlarged view of a single vessel further demonstrated the enhanced dynamic range and the improved separability between different stimuli of the proposed adaptive ST-kernel over the conventional S-kernel. In addition to the spatial and temporal dimensions, independent samples can also be acquired in the wavelength and angular dimensions at the cost of spatial resolution [56]. Jia, Y. et al. proposed to split the full OCT spectrum into several narrower bands [65]. Decorrelation was computed using the spectral bands separately and then averaged. Referring to the concept of angular compounding by B-scan Doppler-shift encoding, which was used for speckle reduction [99], Li, P. et al. proposed a single-shot spatial angular compounding method to obtain independent samples [67]. In the proposed method, independent samples were obtained by encoding incident angles in full-space B-scan modulation frequencies and splitting the modulation spectrum in the spatial frequency domain. Figure 3.17a represents a typical sample arm in an OCT system. The collimated probe beam in the sample arm is centered on the pivot of the scanning mirror, and a Doppler shift is introduced during the course of a B-scan [100]. The B-scan modulation frequency (fm) is linearly proportional to the offset (δ) [9, 101, 102]; as shown in Figure 3.17b, fm can be expressed as:

fm = 2kδw/π   (3.39)

where k is the central wavenumber of the light source, and w is the angular velocity of the scanning mirror. Consequently, different incidence angles are encoded by different modulation frequencies fm, so samples with independent incident angles can be acquired by splitting the modulation frequency.
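A quick numeric check of Eq. (3.39), with illustrative parameter values that are not taken from the cited system:

```python
import numpy as np

# Numeric check of Eq. (3.39): B-scan modulation frequency induced by
# the off-pivot offset. Parameter values are illustrative only.

def modulation_frequency(wavelength, delta, omega):
    k = 2 * np.pi / wavelength       # central wavenumber of the source
    return 2 * k * delta * omega / np.pi

# e.g. a 1310 nm source, 0.5 mm off-pivot offset, 100 rad/s scan velocity
f_m = modulation_frequency(1310e-9, 0.5e-3, 100.0)   # ~153 kHz
```

As Eq. (3.39) states, the modulation frequency scales linearly with the offset δ, so doubling the offset doubles fm.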

Figure 3.17 (a) Schematic of a typical sample arm in an OCT system. (b) B-scan modulation frequency f m induced by off-pivot offset (𝛿). (c) Overlap between the negative and positive B-scan modulation frequencies. Source: Li et al. [67]/The Optical Society.


However, since the spectrometer only records a real-valued spectrum, the Fourier transform along the lateral direction is Hermitian, which results in overlapping of the negative and positive frequencies of the B-scan modulation [see Figure 3.17c]. Therefore, a complex-valued spectral interferogram is necessary to distinguish negative and positive modulation frequencies. In the proposed method, the sample was placed on one side of the zero path length, and the complex-valued spectrum S̃(k, x) is reconstructed by removing the conjugate images with a Heaviside function. Then the Fourier transform of S̃(k, x) along the x direction generates the full-space B-scan modulation spectrum in the spatial frequency (υ) domain without complex-conjugate ambiguity. A Gaussian filter bank was used to split the full spectrum into several subbands. Finally, independent samples were created from the angle-resolved subbands and used for further quantification. Moreover, the current temporal, wavelength, angular, and spatial averaging strategies trade imaging time and resolution for multiple independent samples. Although enough independent samples from an individual approach enable an ideal improvement of the quantification performance, the cost is unaffordable. Li, P. et al. demonstrated that the principle of these averaging approaches is equivalent and offers almost the same quantification enhancement. Accordingly, a hybrid averaging method was proposed to apportion the cost [56]. This study provides useful guidance for experimental design. The enhancement of the quantification performance is determined only by the total number of averaged samples. Therefore, the costs in imaging time, axial resolution, and lateral resolution caused by increasing the numbers of temporal, wavelength, and angular subbands can be apportioned, and the imaging parameters optimized for a given situation.
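The Gaussian filter-bank splitting can be sketched as follows; the band centers, width, and flat demo spectrum are assumptions for illustration, not the parameters of [67]:

```python
import numpy as np

# Sketch of splitting a B-scan modulation spectrum into angle-resolved
# subbands with a Gaussian filter bank. Centers, width, and the flat
# demo spectrum are illustrative.

def split_subbands(spectrum, freqs, centers, sigma):
    """Apply one Gaussian window per subband center; return the subbands."""
    return [spectrum * np.exp(-(freqs - c) ** 2 / (2 * sigma ** 2))
            for c in centers]

freqs = np.linspace(0, 1, 256)            # normalized spatial frequency axis
spectrum = np.ones(256, dtype=complex)    # flat spectrum for the demo
bands = split_subbands(spectrum, freqs, centers=[0.25, 0.5, 0.75], sigma=0.08)
# Each band keeps the spectrum near its center and suppresses the rest;
# an inverse Fourier transform of each band would yield one angle-resolved
# sample for the subsequent decorrelation averaging.
```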

3.7 Applications of OCT

3.7.1 Brain

The brain is a delicate and complex ecosystem supporting robust behavior, consisting of multiple tissue layers and compartments with different functions. Although the human brain has an average volume of ∼1450 cm³, compared to the rodent brain with an average volume of ∼450 mm³ [103], the building blocks are very similar. Neurons and glial cells, the major constituents of the brain, are on a scale of micrometers. Moreover, neuronal computation is energetically demanding, and the brain's limited energy supply constrains its information processing speed and hence the blood flow dynamics. The cortex of the brain is layered, and each layer contains various neuron subtypes and different input or output connections to the other regions of the central nervous system. The blood supply to the cortex includes large vessels over the surface of the brain and smaller diving vessels that penetrate almost perpendicular to the surface to feed the dense networks of capillaries in deeper layers. The capillary beds deliver blood to vital parts of the cortex that are metabolically active during neuronal firing. Compared with conventional technologies, OCTA, as an extension of OCT, meets the needs of in vivo monitoring of blood perfusion


and tissue damage in brain research. OCTA can realize label-free, high-resolution, and real-time in vivo 3D blood flow imaging at the capillary level. Stroke is caused by blockage (ischemic) or rupture (hemorrhagic) of a blood vessel within the brain. This injury leads to a series of functional and structural changes that may result in an infarct and peri-infarct brain tissue. The peri-infarct tissue region, also called the penumbra, is potentially recoverable if it can be identified and treated appropriately. In 2011, Jia and coworkers [104] presented the first preliminary stroke study using OCT angiography. They imaged both the ipsilateral and contralateral sides of middle cerebral artery occlusion (MCAO) through the intact skull in mice and observed that occlusive components of ischemic stroke still existed on the ipsilateral side after reperfusion from the occluded middle cerebral artery (MCA). In 2013, Srinivasan et al. [105] investigated ischemic stroke in mice through thinned skulls with multiparameter analysis, including capillary nonperfusion, CBF deficiency, and altered cellular scattering. In 2015, Baran et al. [106] studied the vascular dynamics of pial and penetrating arterioles after stroke in mice and discovered that arteriolar–arteriole anastomosis (AAA) plays a significant role in active vasodilation of pial arterioles, such that the ones away from an anastomosis shrink in diameter, whereas the ones close to an anastomosis dilate during stroke. More recently, Yang, S. et al. studied the spatiotemporal dynamics of blood perfusion and tissue scattering in the chronic rat photothrombotic (PT) stroke model [54, 55]. In the experiment, the PT occlusion model in rats was induced by injecting Rose Bengal (RB) and subsequent laser irradiation. Figure 3.18 presents typical blood perfusion images in a 3.6 × 1.8 mm² field of view over the chronic poststroke time course in a male rat.
In the baseline image before PT occlusion, the distal middle cerebral arteries (dMCAs), pial microvessels, and the cortical capillary bed could be visualized clearly. On day 1 (one hour after PT formation), all of the blood flow signals disappeared in the focal ischemic region. In the following three days, the focal ischemic region spread significantly with obvious disappearance of the

Figure 3.18 Longitudinal monitoring of chronic post-PT vascular response in the male rat. Chronic monitoring was performed over 14 days. Baseline means the projection view before PT and day 1 means the day of PT administration. Scale bar = 500 μm. Source: Peng Li.


Figure 3.19 The schematic diagram of INS-fOCT and INS-evoked spatial and temporal fOCT signals in rat cortex. (a) The schematic diagram of INS-fOCT. (b) Projection view of the 3D OCT structural image. Yellow arrows indicate pial blood vessels. Red circles indicate site of INS stimulation. SOI means section of interest in OCT. SOI 1, sites at INS center; SOI 2: sites near INS edge; SOI 3, sites distant from INS. The closer to the INS center, the stronger the signals. The yellow dashed lines indicate SOIs 1 to 3 in fOCT. (c) fOCT cross-sections (green–red color scale) superimposed with OCT anatomical image (gray scale) at time window t = 0.5 seconds. (d) fOCT signal for different radiant exposures. (e) The time course of flow velocity (derived from interframe decorrelation, 240 fps) in response to INS with radiant levels of 0 (blank), 0.5, 0.7, and 1.0 J/cm2 . Source: Zhang et al. [107], From SPIE.

capillary network outside the irradiation core, while the large vessels in the peripheral area retained their structure with enlarged diameters. Then, from day 5, massive newly formed blood flow could be observed both in the ischemic core and in the peripheral area. OCTA enables accurate assessment of the spatiotemporal dynamics of blood perfusion along the chronic recovery period, which is of great significance for understanding the pathological characteristics of these vascular diseases. In addition to ischemic stroke, OCTA has also been used to investigate neurovascular coupling, the close interplay between neural activity and the subsequent response in CBF. Shin et al. investigated the sensory-evoked hemodynamic changes in CBF (arteries, arterioles, veins, venules, and capillaries) in response to single-whisker stimulation using OCTA over a 1.45 mm × 1.45 mm field of view. Their results showed that the dilation of arterioles at the site of activation was accompanied by the dilation of upstream arteries, while relatively negligible dilation was observed in veins [51]. Functional optical coherence tomography (fOCT), which enables monitoring of neural activity, has also been used to map the functional response to visual stimulation in cat cortex [108–110] and the response to electrical stimulation in rat cortex [111, 112]. However, a label-free, all-optical approach with high spatial and temporal resolution was lacking for manipulating and mapping brain function. Recently, Zhang, Y. et al. combined OCT and infrared neural stimulation (INS),

Figure 3.20 OCTA angiograms of mouse retina with the ID-OCTA algorithm. (a–e) are angiograms from the whole retinal layer (depth is encoded in color), superficial vascular plexus (SVP), intermediate vascular plexus (IVP), deep vascular plexus (DVP), and choroid. The lab-built FDOCT system operated at an 840 nm central wavelength and a 120 kHz A-scan rate. 512 A-lines formed a B-scan, and in total 1536 B-scans were acquired at 512 y-locations with 3 repeated B-scans at each y-location, corresponding to a total acquisition time of 6.6 seconds. Scale bar = 0.5 mm. Source: Li et al. [115], From World Scientific Publishing.

which is a new stimulation technique for the study of cortical function [113, 114], to develop a label-free, all-optical approach for stimulating and mapping brain function in the cerebral cortex, in a contact-free, large-scale, depth-resolvable manner, up to a millimeter in depth [107]. As shown in Figure 3.19a, INS changes the membrane potential through transient heating that alters the membrane capacitance. Neuron scattering is then influenced through potential–scattering coupling and recorded by fOCT. The experiments showed that INS evoked a comparably localized fOCT signal in rat cortex, as shown in Figure 3.19c. SOI 1 contained the maximal number of significant pixels (∼449 pixels), spanning approximately the same lateral extent as the OISI activation; SOI 2 exhibited a reduced number of pixels (∼354 pixels); and SOI 3 exhibited no significant scattering response to INS. Figure 3.19d shows the fOCT response at different radiant exposures. As expected, increased INS radiant exposure led to an increase in the fOCT signal magnitude [116]. Figure 3.19e shows the relative blood flow velocity changes in response to INS. The onset of the velocity change was delayed by ∼1 second after INS, which is consistent with previous studies [117, 118].

3.7.2 Ocular

When OCT technology was first proposed by Huang et al. in 1991, one of its intended application fields was ophthalmology [1]; this work also marked the official birth of OCT technology. At present, ophthalmology remains the largest and most mature



3 Optical Coherence Tomography Structural and Functional Imaging

application field of OCT technology, because the transparent structure of the eye gives the probe light a large penetration depth. Imaging is performed in vivo without inflicting damage, allowing repeated scanning of the same area in the same eye, as well as imaging of different locations in the same eye. Multiple A-lines are aligned to produce a two-dimensional image that can be thought of as a form of "in vivo histology." With modern scanning techniques, volumetric scans can be acquired rather than two-dimensional cross-sectional data, providing the ophthalmologist with comprehensive information on the eye's morphology. This technology makes it possible to screen for a variety of ocular diseases and has become an important basis for clinical diagnosis. Glaucoma is the second leading cause of blindness worldwide, and estimates put the total number of suspected glaucoma cases at over 60 million [119]. The anterior segment is the front part of the human eye; it forms the optical system and hence directly impacts vision. It is also the part of the eye most exposed to the external environment. The corneoscleral limbus contains several biological components that are important for understanding, diagnosing, and managing several ocular pathologies, such as glaucoma and corneal abnormalities. An anterior segment optical coherence tomography (AS-OCT) system integrated with OCT angiography was proposed by Li et al. [15] to noninvasively visualize the 3D microstructural and microvascular properties of the limbal region. The proposed AS-OCT can correct the optical distortion of microstructural images, enabling quantification of relationships in the anterior chamber angle.
With microvascular images able to visualize the microcirculation in the limbal area without exogenous contrast agents, the AS-OCT enabled visualization of the aqueous outflow pathway by combining microstructural and microvascular information. As a clinical tool, it has the potential to detect early aqueous outflow system abnormalities that lead to pressure elevation in glaucoma. Owing to the ocular hemodynamics in the elastic eyeball, particularly the vascular volume changes of the choroid, several ocular elements experience a pulsatile movement, i.e. the ocular pulse (OP). The OP is highly correlated with the biomechanical properties of the eye and may also act as a surrogate for changes in choroidal blood flow. The ability to noninvasively monitor the OP in vivo may play an important role in a number of ocular diseases, such as glaucoma, age-related macular degeneration, and diabetic retinopathy. Li et al. reported a phase-based method for accurately measuring the OP in the anterior chamber in vivo. The pulsatile relative motion between the cornea and the crystalline lens in rodents was visualized and quantified. Their results showed that the velocity amplitude of the relative motion is 10.3 ± 2.4 μm/s, and that the displacement amplitudes at the respiratory and cardiac frequencies are 202.5 ± 64.9 and 179.9 ± 49.4 nm, respectively [16]. With the improvement of OCT systems' sensitivity and imaging speed, research on and applications of OCTA in ophthalmology are also emerging. As a fast and noninvasive imaging mode, OCTA provides a 3D profile of the blood vessel structure within the eye without the need for intravenous injection of fluorescent dye. Figure 3.20 shows representative ID-OCTA images of mouse retina acquired


Figure 3.21 ID-OCTA angiograms of human retina acquired with a prototype OCTA system. (a–e) are angiograms from the whole retinal layer (depth is encoded in color), SVP, DVP, avascular layer, and choroid. The prototype is a high-speed SDOCT system operating at an 840 nm central wavelength and a 250 kHz A-scan rate. 256 A-lines formed a B-scan, and a total of 1024 B-scans were acquired at 256 y-locations with 4 repeated B-scans at each y-location, corresponding to a total acquisition time of 1.3 seconds. Scale bar = 0.5 mm. Source: Li et al. [115], From World Scientific Publishing.

with a lab-built OCT system, and Figure 3.21 shows a typical ID-OCTA image of human retina obtained with a prototype commercial OCTA system (Tai HS 300, Meditco, Shanghai, China). Retinal blood flow, as in the rest of the brain, is actively regulated in response to neuronal activity [122–124]. Currently, there are limited studies exploring human retinal capillary responses during neurovascular coupling, which is key to understanding pathological disruption during disease. Nesper et al. assessed human retinal microvascular reactivity during dark adaptation, during the transition to ambient light, and after flicker stimulation using OCTA [123]. They divided the capillary plexuses of the retina into three parts: the superficial capillary plexus (SCP), middle capillary plexus, and deep capillary plexus, and explored the changes in vascular density and skeleton density of these three layers under different light conditions. Their experiments showed evidence suggesting constriction of deeper vessels and dilation of large SCP vessels during the transition from dark to light, in contrast with a redistribution of blood flow to the deeper layers during dark adaptation and flicker stimulation.
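The two metrics used in studies such as Nesper et al.'s — vascular (area) density and skeleton density — can be illustrated with a short sketch. The function names and the fixed global threshold below are illustrative assumptions; published pipelines typically use adaptive binarization and a dedicated thinning step (e.g. `skimage.morphology.skeletonize`) rather than these simplifications.

```python
import numpy as np

def vessel_density(angio, threshold):
    """Vascular (area) density: fraction of en face pixels above threshold.

    angio: 2-D en face OCTA projection. The fixed global threshold is an
    illustrative assumption; real pipelines use adaptive binarization.
    """
    return (np.asarray(angio) > threshold).mean()

def skeleton_density(skeleton_mask):
    """Skeleton density: vessel centerline length per unit area.

    skeleton_mask: binary mask after thinning every vessel to 1-pixel width
    (e.g. with skimage.morphology.skeletonize), which makes the metric
    insensitive to vessel caliber, unlike area density.
    """
    sk = np.asarray(skeleton_mask, dtype=bool)
    return sk.sum() / sk.size
```

The contrast between the two metrics is the point: a dilating vessel increases area density but leaves skeleton density unchanged, which is how caliber changes can be separated from capillary recruitment or dropout.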

3.7.3 Skin

When the first OCT devices were developed in the 1990s, one of the researchers' first targets was skin [125, 126]. In OCT imaging, depth penetration increases with wavelength, but image detail and contrast are greater at shorter wavelengths.




Figure 3.22 Representative OCT structural and angiographic images of mouse dorsal skin in vivo, before (column b) and after (column c) FPT treatment. (a) Cross-sectional schematic of the layered skin. (b1) and (c1) Structural cross-sections. (b2) and (c2) Cross-sectional angiograms. (b3) and (c3) Projection view of 3-D angiography. ED, epidermis; D, dermis; HD, hypodermis; M, muscles. Thin arrows indicate small vessels. Bold arrows indicate big vessels. Source: Guo et al. [120], From SPIE.

Most OCT research in dermatology has used the 1300 nm band, which offers a good compromise between image resolution and depth penetration. Normal skin comprises an epidermis of 50–100 μm above a dermis of up to 2000 μm or more in thickness. At around 1300 nm, water absorption is limited, and imaging depths of 1 mm or more can routinely be achieved, which meets clinical needs well. Although OCT has yet to be implemented as a standard procedure in clinical dermatological practice, many studies on various skin diseases have been conducted with this technology. Nonmelanoma skin cancer (NMSC), a disease originating from the epidermis and the most prevalent type of skin cancer [127], is one of the most active research areas for OCT. Ulrich et al. performed an observational and prospective study on 164 patients with 256 lesions suspicious for basal cell carcinoma (BCC) [128]. With the aid of OCT, depth-resolved structural information, such as cystic structures, can also be observed. This study demonstrated that the specificity in diagnosing BCC increased significantly from 28.6% by clinical assessment to 54.3% using dermoscopy and to 75.3% with the addition of OCT (P < 0.001). The accuracy of diagnosis for all lesions increased from 65.8% with clinical evaluation to 76.2% following additional dermoscopy and to 87.4% with the addition of OCT. In recent years, studies on inflammatory skin disorders (dermatitis, acne, and nail psoriasis) have also been conducted. Manfredini et al. studied the morphology and vascularity of acne and the surrounding skin with OCT [121]. They revealed the characteristic morphological features of acne vulgaris, including


vertical hypodense structures, an interrupted entrance signal, and granular hyperechogenic material inside comedones. The correlation between oral antibiotic treatment and normalization of the abovementioned features was also demonstrated. Cutaneous blood perfusion in particular is highly correlated with a number of peripheral vascular diseases, but optical imaging of cutaneous blood vessels is quite challenging due to the highly scattering nature of skin. By matching the refractive index between different tissue components, tissue optical clearing (TOC) has been found useful for reducing light scattering and improving imaging performance. Guo et al. used a mixture of fructose with PEG-400 and thiazone (FPT) as an optical clearing agent in mouse dorsal skin and evaluated the result with OCTA [120]. As shown in Figure 3.22, the imaging quality of OCTA improved significantly, most likely because of FPT-induced dehydration of the skin and the reduction of the scattering coefficient (by more than ∼40.5%) and of refractive-index mismatching (by more than ∼25.3%) in the superficial (epidermal, dermal, and hypodermal) layers. In addition, OCTA demonstrated enhanced performance in imaging cutaneous hemodynamics with satisfactory spatiotemporal resolution and contrast when combined with TOC, demonstrating strong practical potential for studying microcirculation.
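The diagnostic performance figures quoted earlier for the BCC study (specificity, accuracy) follow directly from a 2×2 confusion matrix. A minimal sketch, with hypothetical counts that are not taken from the cited study:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)          # diseased lesions correctly detected
    specificity = tn / (tn + fp)          # benign lesions correctly ruled out
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts: an added imaging modality that converts false
# positives into true negatives raises specificity and overall accuracy.
sens, spec, acc = diagnostic_metrics(tp=80, fp=10, tn=90, fn=20)
```

This makes the reported pattern intuitive: adding OCT to dermoscopy mostly reduced false-positive BCC calls, which is why the gain appeared primarily in specificity.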

3.8 Conclusion

Since its birth, OCT has developed rapidly and been widely adopted, with great improvements in imaging depth, speed, resolution, and sensitivity. The increase in imaging depth expands the field of application of OCT. The increase in imaging speed makes real-time 3D imaging possible. And the increase in imaging resolution extends imaging to the level of cell and molecular biology, which provides the basis and possibility for early detection of diseases such as cancer. This chapter has reviewed the principles, performance, and recent developments of OCT technology, with a detailed review of label-free, multi-sample OCTA technology. The main contents include the OCTA blood flow contrast mechanism based on the "sum of random vectors" model and the OCTA high-sensitivity detection algorithm based on shape, intensity, and decorrelation features, which we term SID-OCTA. In addition, hemodynamic quantification combining an adaptive ST-kernel with high-speed acquisition strategies for independent samples (spanning the time, space, spectrum, and angle dimensions), as well as morphological quantification, were introduced. Finally, some important recent research progress of OCT in the fields of brain, ophthalmology, and skin has also been reviewed. These advances are of great significance for improving the performance of OCT and OCTA and broadening their applications in biomedicine. In the future, OCT research will continue to focus on the development of higher-speed OCT, the development of higher-resolution OCT, the further optimization and expansion of functional OCT, and the practical application of OCT technology.




References

1 Huang, D., Swanson, E.A., Lin, C.P. et al. (1991). Optical coherence tomography. Science 254 (5035): 1178–1181. 2 Duguay, M.A. and Mattick, A.T. (1971). Ultrahigh speed photography of picosecond light pulses and echoes. Appl. Opt. 10 (9): 2162–2170. 3 Youngquist, R.C., Carr, S., and Davies, D.E. (1987). Optical coherence-domain reflectometry: a new optical evaluation technique. Opt. Lett. 12 (3): 158–160. 4 Fercher, A.F., Mengedoht, K., and Werner, W. (1988). Eye-length measurement by interferometry with partially coherent light. Opt. Lett. 13 (3): 186–188. 5 Fercher, A.F., Drexler, W., Hitzenberger, C.K., et al. (2003). Optical coherence tomography – principles and applications. Rep. Prog. Phys. 66 (2): 239–303. 6 Grulkowski, I., Gora, M., Szkulmowski, M., et al. (2009). Anterior segment imaging with Spectral OCT system using a high-speed CMOS camera. Opt. Express 17 (6): 4842–4858. 7 Li, P., An, L., Lan, G., et al. (2013). Extended imaging depth to 12 mm for 1050-nm spectral domain optical coherence tomography for imaging the whole anterior segment of the human eye at 120-kHz A-scan rate. J. Biomed. Opt. 18 (1): 16012. 8 Li, P., Johnstone, M., and Wang, R.K. (2014). Full anterior segment biometry with extended imaging range spectral domain optical coherence tomography at 1340 nm. J. Biomed. Opt. 19 (4): 046013. 9 An, L. and Wang, R.K. (2007). Use of a scanner to modulate spatial interferograms for in vivo full-range Fourier-domain optical coherence tomography. Opt. Lett. 32 (23): 3423–3425. 10 Li, P., Zhou, L., Ni, Y., et al. (2016). Angular compounding by full-channel B-scan modulation encoding for optical coherence tomography speckle reduction. J. Biomed. Opt. 21 (8): 86014. 11 Yasuno, Y., Endo, T., Makita, S., et al. (2006). Three-dimensional line-field Fourier domain optical coherence tomography for in vivo dermatological investigation. J. Biomed. Opt. 11 (1): 014014. 12 Wang, K., Ding, Z., Zeng, Y., et al. (2009).
Sinusoidal B-M method based spectral domain optical coherence tomography for the elimination of complex-conjugate artifact. Opt. Express 17 (19): 16820–16833. 13 Dhalla, A.H. and Izatt, J.A. (2012). Complete complex conjugate resolved heterodyne swept source optical coherence tomography using a dispersive optical delay line: erratum. Biomed. Opt. Express 3 (3): 630–632. 14 Davis, A.M., Choma, M.A., and Izatt, J.A. (2005). Heterodyne swept-source optical coherence tomography for complete complex conjugate ambiguity removal. J. Biomed. Opt. 10 (6): 064005. 15 Li, P., An, L., Reif, R., et al. (2011). In vivo microstructural and microvascular imaging of the human corneo-scleral limbus using optical coherence tomography. Biomed. Opt. Express 2 (11): 3109–3118.


16 Li, P., Ding, Z., Ni, Y., et al. (2014). Visualization of the ocular pulse in the anterior chamber of the mouse eye in vivo using phase-sensitive optical coherence tomography. J. Biomed. Opt. 19 (9): 090502. 17 Li, P., Shen, T.T., Johnstone, M., et al. (2013). Pulsatile motion of the trabecular meshwork in healthy human subjects quantified by phase-sensitive optical coherence tomography. Biomed. Opt. Express 4 (10): 2051–2065. 18 Li, P., Reif, R., Zhi, Z., et al. (2012). Phase-sensitive optical coherence tomography characterization of pulse-induced trabecular meshwork displacement in ex vivo nonhuman primate eyes. J. Biomed. Opt. 17 (7): 076026. 19 Grulkowski, I., Liu, J.J., Potsaid, B., et al. (2013). High-precision, high-accuracy ultralong-range swept-source optical coherence tomography using vertical cavity surface emitting laser light source. Opt. Lett. 38 (5): 673–675. 20 Potsaid, B., Gorczynska, I., Srinivasan, V.J., et al. (2008). Ultrahigh speed spectral/Fourier domain OCT ophthalmic imaging at 70,000 to 312,500 axial scans per second. Opt. Express 16 (19): 15149–15169. 21 An, L., Li, P., Shen, T.T., et al. (2011). High speed spectral domain optical coherence tomography for retinal imaging at 500,000 A lines per second. Biomed. Opt. Express 2 (10): 2770–2783. 22 Huber, R., Wojtkowski, M., and Fujimoto, J.G. (2006). Fourier domain mode locking (FDML): a new laser operating regime and applications for optical coherence tomography. Opt. Express 14 (8): 3225–3237. 23 Klein, T., Wieser, W., Reznicek, L., et al. (2013). Multi-MHz retinal oct. Biomed. Opt. Express 4 (10): 1890–1908. 24 Lee, H.Y., Marvdashti, T., Duan, L., et al. (2014). Scalable multiplexing for parallel imaging with interleaved optical coherence tomography. Biomed. Opt. Express 5 (9): 3192–3203. 25 Zhao, Y., Chen, Z., Saxer, C., et al. (2000). 
Phase-resolved optical coherence tomography and optical Doppler tomography for imaging blood flow in human skin with fast scanning speed and high velocity sensitivity. Opt. Lett. 25 (2): 114–116. 26 Faber, D.J., Mik, E.G., Aalders, M.C., et al. (2003). Light absorption of (oxy-)hemoglobin assessed by spectroscopic optical coherence tomography. Opt. Lett. 28 (16): 1436–1438. 27 Kuranov, R.V., Qiu, J., McElroy, A.B., et al. (2011). Depth-resolved blood oxygen saturation measurement by dual-wavelength photothermal (DWP) optical coherence tomography. Biomed. Opt. Express 2 (3): 491–504. 28 Yasuno, Y., Yamanari, M., Kawana, K., et al. (2010). Visibility of trabecular meshwork by standard and polarization-sensitive optical coherence tomography. J. Biomed. Opt. 15 (6): 061705. 29 Cense, B., Chen, T.C., Park, B.H., et al. (2004). Thickness and birefringence of healthy retinal nerve fiber layer tissue measured with polarization-sensitive optical coherence tomography. Invest. Opthalmol. Vis. Sci. 45 (8): 2606. 30 O’Hara, K.E., Schmoll, T., Vass, C., et al. (2013). Measuring pulse-induced natural relative motions within human ocular tissue in vivo using phase-sensitive optical coherence tomography. J. Biomed. Opt. 18 (12): 121506.




31 Park, B., Pierce, M.C., Cense, B., et al. (2005). Real-time fiber-based multi-functional spectral-domain optical coherence tomography at 1.3 microm. Opt. Express 13 (11): 3931–3944. 32 Li, P., Liu, A., Shi, L., et al. (2011). Assessment of strain and strain rate in embryonic chick heart in vivo using tissue Doppler optical coherence tomography. Phys. Med. Biol. 56 (22): 7081–7092. 33 Wang, R.K., Kirkpatrick, S., and Hinds, M. (2007). Phase-sensitive optical coherence elastography for mapping tissue microstrains in real time. Appl. Phys. Lett. 90 (16): 164105. 34 Li, P., Yin, X., Shi, L., et al. (2012). In vivo functional imaging of blood flow and wall strain rate in outflow tract of embryonic chick heart using ultrafast spectral domain optical coherence tomography. J. Biomed. Opt. 17 (9): 96006–96001. 35 Meyer, E.P., Ulmann-Schuler, A., Staufenbiel, M., et al. (2008). Altered morphology and 3D architecture of brain vasculature in a mouse model for Alzheimer’s disease. Proc. Natl. Acad. Sci. U. S. A. 105 (9): 3587–3592. 36 Vakoc, B.J., Lanning, R.M., Tyrrell, J.A., et al. (2009). Three-dimensional microscopy of the tumor microenvironment in vivo using optical frequency domain imaging. Nat. Med. 15 (10): 1219–1223. 37 Jain, R.K. (2005). Normalization of tumor vasculature: an emerging concept in antiangiogenic therapy. Science 307 (5706): 58–62. 38 Carmeliet, P. and Jain, R.K. (2000). Angiogenesis in cancer and other diseases. Nature 407 (6801): 249–257. 39 Zhang, A., Zhang, Q., Chen, C.L., et al. (2015). Methods and algorithms for optical coherence tomography-based angiography: a review and comparison. J. Biomed. Opt. 20 (10): 100901. 40 Gao, S.S., Jia, Y., Zhang, M., et al. (2016). Optical coherence tomography angiography. Invest. Ophthalmol. Vis. Sci. 57 (9): OCT27–OCT36. 41 Kashani, A.H., Chen, C.L., Gahm, J.K., et al. (2017). Optical coherence tomography angiography: a comprehensive review of current methods and clinical applications. Prog. Retin. Eye Res. 
60: 66–100. 42 Chen, C.L. and Wang, R.K. (2017). Optical coherence tomography based angiography [invited]. Biomed. Opt. Express 8 (2): 1056–1082. 43 Jia, Y., Bailey, S.T., Wilson, D.J., et al. (2014). Quantitative optical coherence tomography angiography of choroidal neovascularization in age-related macular degeneration. Ophthalmology 121 (7): 1435–1444. 44 Chu, Z., Lin, J., Gao, C., et al. (2016). Quantitative assessment of the retinal microvasculature using optical coherence tomography angiography. J. Biomed. Opt. 21 (6): 066008. 45 Chalam, K.V. and Sambhav, K. (2016). Optical coherence tomography angiography in retinal diseases. J. Ophthalmic. Vis. Res. 11 (1): 84–92. 46 Roisman, L., Zhang, Q., Wang, R.K., et al. (2016). Optical coherence tomography angiography of asymptomatic neovascularization in intermediate age-related macular degeneration. Ophthalmology 123 (6): 1309–1319.


47 Liew, Y.M., McLaughlin, R.A., Gong, P., et al. (2012). In vivo assessment of human burn scars through automated quantification of vascularity using optical coherence tomography. J. Biomed. Opt. 18 (6): 069801. 48 Baran, U., Choi, W.J. and Wang, R.K. (2016). Potential use of OCT-based microangiography in clinical dermatology. Skin Res. Technol. 22 (2): 238–246. 49 Ulrich, M., Themstrup, L., deCarvalho, N., et al. (2016). Dynamic optical coherence tomography in dermatology. Dermatology 232 (3): 298–311. 50 Baran, U. and Wang, R.K. (2016). Review of optical coherence tomography based angiography in neuroscience. Neurophotonics 3 (1): 010902. 51 Shin, P., Choi, W., Joo, J., et al. (2019). Quantitative hemodynamic analysis of cerebral blood flow and neurovascular coupling using optical coherence tomography angiography. J. Cereb. Blood Flow Metab. 39 (10): 1983–1994. 52 Jia, Y., Li, P., and Wang, R.K. (2011). Optical microangiography provides an ability to monitor responses of cerebral microcirculation to hypoxia and hyperoxia in mice. J. Biomed. Opt. 16 (9): 096019. 53 Jia, Y. and Wang, R.K. (2010). Label-free in vivo optical imaging of functional microcirculations within meninges and cortex in mice. J. Neurosci. Methods 194 (1): 108–115. 54 Yang, S., Liu, K., Ding, H., et al. (2019). Longitudinal in vivo intrinsic optical imaging of cortical blood perfusion and tissue damage in focal photothrombosis stroke model. J. Cereb. Blood Flow Metab. 39 (7): 1381–1393. 55 Yang, S., Liu, K., Yao, L., et al. (2019). Correlation of optical attenuation coefficient estimated using optical coherence tomography with changes in astrocytes and neurons in a chronic photothrombosis stroke model. Biomed. Opt. Express 10 (12): 6258–6271. 56 Li, P., Cheng, Y., Li, P., et al. (2016). Hybrid averaging offers high-flow contrast by cost apportionment among imaging time, axial, and lateral resolution in optical coherence tomography angiography. Opt. Lett. 41 (17): 3944–3947. 
57 Motaghiannezam, R. and Fraser, S. (2012). Logarithmic intensity and specklebased motion contrast methods for human retinal vasculature visualization using swept source optical coherence tomography. Biomed. Opt. Express 3 (3): 503–521. 58 Cheng, Y., Guo, L., Pan, C., et al. (2015). Statistical analysis of motion contrast in optical coherence tomography angiography. J. Biomed. Opt. 20 (11): 116004. 59 Goodman, J.W. (1985). Statistical Optics. Wiley. 60 Li, P. and Li, P. (2018). Mass sample optical coherence tomography angiography technology and application. Chin. J. Lasers 45 (3): 0307001. 61 Makita, S., Hong, Y., Yamanari, M., et al. (2006). Optical coherence angiography. Opt. Express 14 (17): 7821–7840. 62 Fingler, J., Schwartz, D., Yang, C., et al. (2007). Mobility and transverse flow visualization using phase variance contrast with spectral domain optical coherence tomography. Opt. Express 15 (20): 12636–12653. 63 Mariampillai, A., Standish, B.A., Moriyama, E.H., et al. (2008). Speckle variance detection of microvasculature using swept-source optical coherence tomography. Opt. Lett. 33 (13): 1530–1532.




64 Enfield, J., Jonathan, E., and Leahy, M. (2011). In vivo imaging of the microcirculation of the volar forearm using correlation mapping optical coherence tomography (cmOCT). Biomed. Opt. Express 2 (5): 1184–1193. 65 Jia, Y., Tan, O., Tokayer, J., et al. (2012). Split-spectrum amplitude-decorrelation angiography with optical coherence tomography. Opt. Express 20 (4): 4710–4725. 66 Wang, R.K., Jacques, S.L., Ma, Z., et al. (2007). Three dimensional optical angiography. Opt. Express 15 (7): 4083–4097. 67 Li, P., Cheng, Y., Zhou, L., et al. (2016). Single-shot angular compounded optical coherence tomography angiography by splitting full-space B-scan modulation spectrum for flow contrast enhancement. Opt. Lett. 41 (5): 1058–1061. 68 Guo, L., Li, P., Pan, C., et al. (2016). Improved motion contrast and processing efficiency in OCT angiography using complex-correlation algorithm. J. Opt. 18 (2): 025301. 69 Xu, J., Song, S., Li, Y., et al. (2017). Complex-based OCT angiography algorithm recovers microvascular information better than amplitude- or phase-based algorithms in phase-stable systems. Phys. Med. Biol. 63 (1): 015023. 70 Choi, W.J., Reif, R., Yousefi, S., et al. (2014). Improved microcirculation imaging of human skin in vivo using optical microangiography with a correlation mapping mask. J. Biomed. Opt. 19 (3): 36010. 71 Uribe-Patarroyo, N., Villiger, M., and Bouma, B.E. (2014). Quantitative technique for robust and noise-tolerant speed measurements based on speckle decorrelation in optical coherence tomography. Opt. Express 22 (20): 24411–24429. 72 Chlebiej, M., Gorczynska, I., Rutkowski, A., et al. (2019). Quality improvement of OCT angiograms with elliptical directional filtering. Biomed. Opt. Express 10 (2): 1013–1031. 73 Tokayer, J., Jia, Y., Dhalla, A.H., et al. (2013). Blood flow velocity quantification using split-spectrum amplitude-decorrelation angiography with optical coherence tomography. Biomed. Opt. Express 4 (10): 1909–1924. 
74 Wang, R.K., Zhang, Q., Li, Y., et al. (2017). Optical coherence tomography angiography-based capillary velocimetry. J. Biomed. Opt. 22 (6): 66008. 75 Cole, E.D., Moult, E.M., Dang, S., et al. (2017). The definition, rationale, and effects of Thresholding in OCT angiography. Ophthalmol. Retina 1 (5): 435–447. 76 Makita, S., Kurokawa, K., Hong, Y.J., et al. (2016). Noise-immune complex correlation for optical coherence angiography based on standard and Jones matrix optical coherence tomography. Biomed. Opt. Express 7 (4): 1525–1548. 77 Braaf, B., Donner, S., Nam, A.S., et al. (2018). Complex differential variance angiography with noise-bias correction for optical coherence tomography of the retina. Biomed. Opt. Express 9 (2): 486–506. 78 Zhang, A. and Wang, R.K. (2015). Feature space optical coherence tomography based micro-angiography. Biomed. Opt. Express 6 (5): 1919–1928. 79 Gao, S.S., Jia, Y., Liu, L., et al. (2016). Compensation for reflectance variation in vessel density quantification by optical coherence tomography angiography. Invest. Ophthalmol. Vis. Sci. 57 (10): 4485–4492.


80 Li, P., Huang, Z., Yang, S., et al. (2017). Adaptive classifier allows enhanced flow contrast in OCT angiography using a histogram-based motion threshold and 3D hessian analysis-based shape filtering. Opt. Lett. 42 (23): 4816–4819. 81 Huang, L., Fu, Y., Chen, R., et al. (2019). SNR-adaptive OCT angiography enabled by statistical characterization of intensity and decorrelation with multi-variate time series model. IEEE Trans. Med. Imaging 38 (11): 2695–2704. 82 Hendargo, H.C., Estrada, R., Chiu, S.J., et al. (2013). Automated non-rigid registration and mosaicing for robust imaging of distinct retinal capillary beds using speckle variance optical coherence tomography. Biomed. Opt. Express 4 (6): 803–821. 83 Frangi, A.F., Niessen, W.J., Vincken, K.L., et al., Multiscale vessel enhancement filtering. International conference on medical image computing and computer-assisted intervention. 1998. 1496: p. 130–137. 84 Lee, J., Jiang, J.Y., Wu, W., et al. (2014). Statistical intensity variation analysis for rapid volumetric imaging of capillary network flux. Biomed. Opt. Express 5 (4): 1160–1172. 85 Yousefi, S., Qin, J., Zhi, Z., et al. (2013). Label-free optical lymphangiography: development of an automatic segmentation method applied to optical coherence tomography to visualize lymphatic vessels using hessian filters. J. Biomed. Opt. 18 (8): 86004. 86 Yousefi, S., Liu, T., and Wang, R.K. (2015). Segmentation and quantification of blood vessels for OCT-based micro-angiograms using hybrid shape/intensity compounding. Microvasc. Res. 97: 37–46. 87 Li, A., You, J., Du, C., et al. (2017). Automated segmentation and quantification of OCT angiography for tracking angiogenesis progression. Biomed. Opt. Express 8 (12): 5604–5616. 88 Zhang, Y.M., Li, H.K., Cao, T.T., et al. (2021). Automatic 3D adaptive vessel segmentation based on linear relationship between intensity and complex-decorrelation in optical coherence tomography angiography. Quant. Imaging Med. Surg. 
11 (3): 895–906. 89 Attwell, D., Buchan, A.M., Charpak, S., et al. (2010). Glial and neuronal control of brain blood flow. Nature 468 (7321): 232–243. 90 Choi, W.J., Li, Y., Qin, W., et al. (2016). Cerebral capillary velocimetry based on temporal OCT speckle contrast. Biomed. Opt. Express 7 (12): 4859–4873. 91 Srinivasan, V.J., Radhakrishnan, H., Lo, E.H., et al. (2012). OCT methods for capillary velocimetry. Biomed. Opt. Express 3 (3): 612–629. 92 Su, J.P., Chandwani, R., Gao, S.S., et al. (2016). Calibration of optical coherence tomography angiography with a microfluidic chip. J. Biomed. Opt. 21 (8): 86015. 93 Liu, G., Jia, W., Sun, V., et al. (2012). High-resolution imaging of microvasculature in human skin in-vivo with optical coherence tomography. Opt. Express 20 (7): 7694–7705. 94 Choi, W., Moult, E.M., Waheed, N.K., et al. (2015). Ultrahigh-speed, swept-source optical coherence tomography angiography in nonexudative age-related macular degeneration with geographic atrophy. Ophthalmology 122 (12): 2532–2544.




95 Ploner, S.B., Moult, E.M., Choi, W., et al. (2016). Toward quantitative optical coherence tomography angiography: visualizing blood flow speeds in ocular pathology using variable interscan time analysis. Retina 36 (Suppl 1): S118–S126. 96 Chen, R., Yao, L., Liu, K., et al. (2020). Improvement of decorrelation-based OCT angiography by an adaptive spatial-temporal kernel in monitoring stimulus-evoked hemodynamic responses. IEEE Trans. Med. Imaging 39 (12): 4286–4296. 97 Grafe, M.G.O., Gondre, M., and de Boer, J.F. (2019). Precision analysis and optimization in phase decorrelation OCT velocimetry. Biomed. Opt. Express 10 (3): 1297–1314. 98 Grafe, M.G.O., Nadiarnykh, O., and De Boer, J.F. (2019). Optical coherence tomography velocimetry based on decorrelation estimation of phasor pair ratios (DEPPAIR). Biomed. Opt. Express 10 (11): 5470–5485. 99 Wang, H. and Rollins, A.M. (2009). Speckle reduction in optical coherence tomography using angular compounding by B-scan Doppler-shift encoding. J. Biomed. Opt. 14 (3): 030512. 100 Podoleanu, A.G., Dobre, G.M., and Jackson, D.A. (1998). En-face coherence imaging using galvanometer scanner modulation. Opt. Lett. 23 (3): 147–149. 101 Baumann, B., Pircher, M., Gotzinger, E., et al. (2007). Full range complex spectral domain optical coherence tomography without additional phase shifters. Opt. Express 15 (20): 13375–13387. 102 Leitgeb, R.A., Michaely, R., Lasser, T., et al. (2007). Complex ambiguity-free Fourier domain optical coherence tomography through transverse scanning. Opt. Lett. 32 (23): 3453–3455. 103 Vincent, T.J., Thiessen, J.D., Kurjewicz, L.M., et al. (2010). Longitudinal brain size measurements in APP/PS1 transgenic mice. Magn. Reson. Insights 4: MRI.S5885. 104 Jia, Y.L. and Wang, R.K.K. (2011). Optical micro-angiography images structural and functional cerebral blood perfusion in mice with cranium left intact. J. Biophotonics 4 (1–2): 57–63. 105 Srinivasan, V.J., Mandeville, E.T., Can, A., et al. (2013). 
Multiparametric, longitudinal optical coherence tomography imaging reveals acute injury and chronic recovery in experimental ischemic stroke. PLoS One 8 (8): e71478. 106 Baran, U., Li, Y., and Wang, R.K. (2015). Vasodynamics of pial and penetrating arterioles in relation to arteriolo-arteriolar anastomosis after focal stroke. Neurophotonics 2 (2): 025006. 107 Zhang, Y., Yao, L., Yang, F., et al. (2020). INS-fOCT: a label-free, all-optical method for simultaneously manipulating and mapping brain function. Neurophotonics 7 (1): 015014. 108 Uma Maheswari, R., Takaoka, H., Homma, R., et al. (2002). Implementation of optical coherence tomography (OCT) in visualization of functional structures of cat visual cortex. Opt. Commun. 202 (1–3): 47–54. 109 Maheswari, R.U., Takaoka, H., Kadono, H., et al. (2003). Novel functional imaging technique from brain surface with optical coherence tomography enabling visualization of depth resolved functional structure in vivo. J. Neurosci. Methods 124 (1): 83–92. 110 Rajagopalan, U.M. and Tanifuji, M. (2007). Functional optical coherence tomography reveals localized layer-specific activations in cat primary visual cortex in vivo. Opt. Lett. 32 (17): 2614–2616. 111 Aguirre, A.D., Chen, Y., Fujimoto, J.G., et al. (2006). Depth-resolved imaging of functional activation in the rat cerebral cortex using optical coherence tomography. Opt. Lett. 31 (23): 3459. 112 Chen, Y., Aguirre, A.D., Ruvinskaya, L., et al. (2009). Optical coherence tomography (OCT) reveals depth-resolved dynamics during functional brain activation. J. Neurosci. Methods 178 (1): 162–173. 113 Wells, J., Kao, C., Mariappan, K., et al. (2005). Optical stimulation of neural tissue in vivo. Opt. Lett. 30 (5): 504–506. 114 Richter, C.P., Matic, A.I., Wells, J.D., et al. (2011). Neural stimulation with optical radiation. Laser. Photon. Rev. 5 (1): 68–80. 115 Li, H.K., Liu, K.Y., Yao, L., et al. (2021). ID-OCTA: OCT angiography based on inverse SNR and decorrelation features. J. Innov. Opt. Health Sci. 14 (1). 116 Stepnoski, R.A., LaPorta, A., Raccuia-Behling, F., et al. (1991). Noninvasive detection of changes in membrane potential in cultured neurons by light scattering. Proc. Natl. Acad. Sci. U. S. A. 88 (21): 9382–9386. 117 Urban, A., Mace, E., Brunner, C., et al. (2014). Chronic assessment of cerebral hemodynamics during rat forepaw electrical stimulation using functional ultrasound imaging. Neuroimage 101: 138–149. 118 Matsuura, T., Fujita, H., Seki, C., et al. (1999). CBF change evoked by somatosensory activation measured by laser-Doppler flowmetry: independent evaluation of RBC velocity and RBC concentration. Jpn. J. Physiol. 49 (3): 289–296. 119 Quigley, H.A. and Broman, A.T. (2006). The number of people with glaucoma worldwide in 2010 and 2020. Br. J. Ophthalmol. 90 (3): 262–267. 120 Guo, L., Shi, R., Zhang, C., et al. (2016).
Optical coherence tomography angiography offers comprehensive evaluation of skin optical clearing in vivo by quantifying optical properties and blood flow imaging simultaneously. J. Biomed. Opt. 21 (8): 6. Manfredini, M., Greco, M., Farnetani, F., et al. (2017). Acne: morphologic and vascular study of lesions and surrounding skin by means of optical coherence tomography. J. Eur. Acad. Dermatol. Venereol. 31 (9): 1541–1546. Lott, M.E.J., Slocomb, J.E., Shivkumar, V., et al. (2013). Impaired retinal vasodilator responses in prediabetes and type 2 diabetes. Acta Ophthalmol. 91 (6): e462–e469. Nesper, P.L., Lee, H.E., Fayed, A.E., et al. (2019). Hemodynamic response of the three macular capillary plexuses in dark adaptation and flicker stimulation using optical coherence tomography angiography. Invest. Ophthalmol. Visual Sci. 60 (2): 694–703.

115

116

3 Optical Coherence Tomography Structural and Functional Imaging

124 Tan, B., Mason, E., MacLellan, B., et al. (2017). Correlation of visually evoked functional and blood flow changes in the rat retina measured with a combined OCT plus ERG system. Invest. Ophthalmol. Visual Sci. 58 (3): 1673–1681. 125 Andretzky, P., Lindner, M.W., Herrmann, J.M., et al. (1999). Optical coherence tomography by “spectral radar”: dynamic range estimation and in vivo measurements of skin. In: Proceedings of Optical and Imaging Techniques for Biomonitoring Iv (ed. M. DalFante et al.), 78–87. SPIE. 126 Welzel, J., Lankenau, E., Birngruber, R., et al. (1997). Optical coherence tomography of the human skin. J. Am. Acad. Dermatol. 37 (6): 958–963. 127 Lomas, A., Leonardi-Bee, J., and Bath-Hextall, F. (2012). A systematic review of worldwide incidence of nonmelanoma skin cancer. Br. J. Dermatol. 166 (5): 1069–1080. 128 Ulrich, M., von Braunmuehl, T., Kurzen, H., et al. (2015). The sensitivity and specificity of optical coherence tomography for the assisted diagnosis of nonpigmented basal cell carcinoma: an observational study. Br. J. Dermatol. 173 (2): 428–435.

117

4 Coherent Raman Scattering Microscopy and Biomedical Applications Minbiao Ji State Key Laboratory of Surface Physics and Department of Physics, Fudan University, Shanghai 200433, China

4.1 Introduction

Fluorescence microscopy has demonstrated enormous capabilities for biological and biomedical research, primarily owing to the excellent optical properties of fluorescent probes, including high brightness, photoswitching, and stochastic excitation, which gave rise to the research fields of single-molecule and super-resolution microscopy. While fluorescence relies on resonant transitions between electronic states of molecules, vibrational spectroscopy reflects nuclear motions of molecules, i.e. bond vibrations and rotations. Two major types of vibrational spectroscopy are in common use: infrared (IR) absorption and Raman scattering. IR absorption is a one-photon resonant transition process; it is dipole allowed and usually has a large cross-section. In contrast, Raman is an inelastic scattering process; it is sensitive to changes in polarizability and has a small cross-section. The two are often complementary, providing analytical information on molecular composition, structure, and concentration. Both can be integrated into microscopes, enabling label-free chemical imaging without the need for, and drawbacks of, exogenous labels. This chapter introduces the development of coherent Raman scattering microscopy and its potential for biomedical research.

4.1.1 Spontaneous Raman Scattering

Since the discovery of the Raman effect by C.V. Raman in 1928 [1], recognized with the Nobel Prize in Physics in 1930, Raman spectroscopy has evolved dramatically, especially since the invention of the laser. The energy diagram of vibrational transitions is illustrated in Figure 4.1a. Unlike IR absorption, in which the IR frequency is in direct resonance with the vibrational transition, the Raman frequency (shift) is the difference between the excitation (pump, ω_p) and emission (Stokes, ω_S) photons: ω_Raman = ω_p − ω_S, with the resonance condition ω_Raman = Ω_v, where Ω_v is the frequency of a vibrational mode of a molecule or a phonon mode of a crystal lattice.

Biomedical Photonic Technologies, First Edition. Edited by Zhenxi Zhang, Shudong Jiang, and Buhong Li. © 2023 WILEY-VCH GmbH. Published 2023 by WILEY-VCH GmbH.

Figure 4.1 (a) Energy diagram and optical transitions of IR absorption, Rayleigh scattering, and Raman scattering; (b) measured spontaneous Raman spectra of a typical lipid, oleic acid (OA), and a typical protein, bovine serum albumin (BSA). Source: Minbiao Ji.

Common molecular vibrational modes include the stretching
(symmetric and antisymmetric), wagging, scissoring, twisting, and rocking modes of a chemical bond/group. As shown in Figure 4.1b, typical Raman spectra of a lipid and a protein differ in both the high-frequency CH-stretch region and the fingerprint region (600–1800 cm−1), serving as the basis for distinguishing different chemical compositions. In biological cells and tissues, Raman spectra contain rich information about a mixture of biomolecules, including lipids, proteins, nucleic acids, carbohydrates, etc. [2]. Mining these spectra may reveal disease states for diagnosis. Despite the great success of Raman spectroscopy as a standard analytical tool in numerous research fields and industries, spontaneous Raman scattering suffers from weak cross-sections, and various techniques have been developed to enhance it, including surface-enhanced Raman scattering (SERS), tip-enhanced Raman scattering (TERS), and UV-enhanced Raman (UV-Raman). These techniques can enhance the Raman scattering efficiency by factors of up to 10^8–10^10, but they remain difficult to apply to fast imaging in biomedical research.


4.1.2 Coherent Raman Scattering

In the past 20 years, coherent Raman scattering (CRS) microscopy has emerged as a new technique for biological and biomedical imaging, taking advantage of optical nonlinearity and coherent amplification to allow orders-of-magnitude faster imaging (up to video rate) while keeping the spectroscopic character of the Raman effect. In CRS, the pump and Stokes laser beams interact simultaneously with molecular vibrations. Resonance occurs when the optical beat frequency between the pump and Stokes matches the vibrational frequency, ω_p − ω_S = Ω_v, causing the excited molecules to vibrate coherently (Figure 4.2a). The coherent bond vibrations further scatter the pump or Stokes photons with enhanced efficiency, resulting in either coherent anti-Stokes Raman scattering (CARS) or stimulated Raman scattering (SRS), depending on which photon is scattered (Figure 4.2b). Both CARS and SRS are third-order nonlinear optical processes, are generated simultaneously, and form the family of CRS, although they differ in various ways and may be better suited to different situations.
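The resonance condition ω_p − ω_S = Ω_v fixes which Stokes wavelength must accompany a given pump to address a chosen vibrational mode. A minimal sketch of the conversion (the 797 nm pump and the 2850 cm⁻¹ CH₂ mode are illustrative values, not taken from the text):

```python
def raman_shift_cm1(pump_nm: float, stokes_nm: float) -> float:
    """Pump-Stokes beat frequency expressed as a Raman shift in cm^-1.

    Wavenumber = 1/lambda; the factor 1e7 converts nm^-1 to cm^-1.
    """
    return 1e7 * (1.0 / pump_nm - 1.0 / stokes_nm)

def stokes_nm_for_shift(pump_nm: float, shift_cm1: float) -> float:
    """Stokes wavelength (nm) that tunes the beat onto a target mode."""
    return 1.0 / (1.0 / pump_nm - shift_cm1 / 1e7)

# Example: to drive the lipid CH2 stretch (~2850 cm^-1) with a 797 nm pump,
# the Stokes beam must sit near 1031 nm.
stokes = stokes_nm_for_shift(797.0, 2850.0)
assert abs(raman_shift_cm1(797.0, stokes) - 2850.0) < 1e-6  # round trip
```

The same arithmetic applies in reverse when reading a spontaneous Raman spectrum: a measured Stokes wavelength maps directly onto the shift axis of Figure 4.1b.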

Figure 4.2 Transition diagrams of (a) CARS and (b) SRS. The SRS diagram shown represents the stimulated Raman loss process. Source: Minbiao Ji.


4.2 Coherent Anti-Stokes Raman Scattering (CARS) Microscopy

4.2.1 Principles and Limitations

CARS microscopy was first demonstrated by Sunney Xie and Zumbusch in 1999 [3]. CARS is a four-wave-mixing (FWM) process in which the incident and emitted photons satisfy the optical parametric relationship ω_CARS = 2ω_p − ω_S. All photon energies are conserved, and the molecules return to the ground state without net energy exchange with the electromagnetic fields. This, however, causes a fatal problem for CARS: the nonresonant background (NRB). As illustrated in Figure 4.3a, even when the pump and Stokes photons are detuned from the vibrational resonance, purely electronic transitions can contribute to the FWM signal. In principle, most materials, including glass slides and solvents, contain rich electron clouds and will generate such an NRB. The NRB not only buries images in background but also distorts the CARS spectra through interference between the vibrational signal and the NRB [4–6]. Researchers have made various efforts to get rid of the NRB, but without much success. The signal generation of CARS can be described as:

E_CARS(ω_as) = iNχ^(3) E_p^2(ω_p) E_S^*(ω_S)    (4.1)

where E_p and E_S represent the electric fields of the excitation lasers, and E_CARS that of the signal. The CARS signal is usually detected directly as light intensity on a photomultiplier tube (PMT), so-called homodyne detection. Hence the signal intensity can be written as:

S_CARS = |E_CARS|^2 = N^2 |χ^(3)|^2 I_p^2 I_S    (4.2)

where χ^(3) is the third-order optical susceptibility, containing both the vibrational (Raman) and electronic (NRB) parts: χ^(3) = χ_Raman^(3) + χ_NRB^(3). Note that χ_Raman^(3) contains both imaginary and real parts, whereas χ_NRB^(3) is primarily real (for nonresonant transitions). It is the cross term χ_Raman^(3)·χ_NRB^(3) that leads to the spectral distortion of CARS (Figure 4.3b and c). The more recent development of broadband CARS is able to numerically separate the NRB and Raman contributions and may be promising for future applications [7].
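The distortion caused by the cross term can be made concrete with a toy model: a single Lorentzian Raman resonance plus a purely real nonresonant background (all amplitudes and linewidths below are assumed, for illustration only). The homodyne CARS signal |χ^(3)|^2 peaks off resonance because of the cross term, while Im[χ^(3)], the quantity SRS later measures, remains a clean Lorentzian:

```python
# chi_R(w) = A / (Omega - w - i*Gamma): one Lorentzian Raman resonance;
# chi_NR is purely real, as for nonresonant electronic transitions.
def chi3(w, Omega=1650.0, Gamma=10.0, A=1.0, chi_nr=0.05):
    chi_r = A / complex(Omega - w, -Gamma)
    return chi_r + chi_nr

shifts = [1600 + i for i in range(101)]        # 1600-1700 cm^-1
cars = [abs(chi3(w)) ** 2 for w in shifts]     # homodyne: distorted by NRB
srs = [chi3(w).imag for w in shifts]           # Im part: pure Lorentzian

# Im[chi3] peaks exactly on resonance ...
assert shifts[srs.index(max(srs))] == 1650
# ... while the chi_R * chi_NR cross term pulls the |chi3|^2 peak off
# resonance and produces a dispersive dip on the other side.
assert shifts[cars.index(max(cars))] != 1650
```

Setting `chi_nr=0` in the sketch removes the cross term and restores a symmetric CARS peak, which is exactly what NRB-removal schemes aim to recover.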

4.2.2 Endoscopic CARS

Although it suffers from the NRB, CARS microscopy has several advantages, the most important of which is its simple and direct detection scheme. The CARS signal can be collected in both forward (transmission) and backward (epi) modes, making it suitable for live-animal studies. Furthermore, recent advances in endoscopic CARS, with miniaturized fiber scanners and microscope objectives, have shown promise. Combined with other nonlinear optical signals such as second-harmonic generation (SHG) and two-photon excited fluorescence (TPEF),

Figure 4.3 Nonresonant background of CARS. (a) Diagrams of the four-wave-mixing processes that contribute to the NRB; (b) calculated spectra showing various components of χ^(3); (c) measured CARS and SRS spectra of oleic acid. Source: Minbiao Ji.

a multimodal nonlinear optical endoscope may provide new opportunities for endoscopic diagnosis [8].

4.3 Stimulated Raman Scattering (SRS) Microscopy

4.3.1 Principles and Advantages

The best solution to the NRB problem of CARS turned out to be the invention of SRS microscopy in 2008, mainly by the Xie group [9]. As seen from the transition diagram (Figure 4.4a), and akin to the principle of stimulated emission, the spontaneous scattering process becomes a stimulated one when both pump and Stokes photons interact with the molecules and the same resonance condition, ω_p − ω_S = Ω_v, is reached. Unlike in CARS, photon energy is not conserved in the course of SRS. There is a net energy flow from the photons to the molecules: pump photons are annihilated while an equal number of Stokes photons are generated, and the lost photon energy is transferred to the excited-state energy


Figure 4.4 Principle of stimulated Raman scattering. (a) Energy diagram of SRS. The molecular population is transferred from the ground state to the vibrational excited state through a virtual state under excitation by the pump and Stokes beams. (b) Energy transfer process of SRS. A pump photon is annihilated and a Stokes photon is generated, resulting in SRL of the pump and SRG of the Stokes. (c) Modulation transfer scheme for SRS. The intensity of the pump is reduced in the presence of the Stokes (SRL), while the intensity of the Stokes is increased in the presence of the pump (SRG). As a consequence, an initial modulation of the Stokes (pump) is transferred to the pump (Stokes). Source: Minbiao Ji.

of the molecules (Figure 4.4b). As a result, the pump beam undergoes an intensity loss, termed stimulated Raman loss (SRL), and the Stokes beam an intensity gain, termed stimulated Raman gain (SRG), denoted ΔI_p and ΔI_S, respectively. When the photons are detuned away from the vibrational resonance, this energy transfer is blocked and no SRL/SRG can occur, i.e. the NRB is intrinsically eliminated in SRS. From the stimulated-emission point of view, the intensity changes can be expressed as:

ΔI_p ∝ −Nσ_Raman I_p I_S    (4.3)

ΔI_S ∝ Nσ_Raman I_p I_S    (4.4)

where N is the number of molecules in the probe volume and σ_Raman is the molecular Raman scattering cross-section. In the language of nonlinear optics, the SRL field can be written as:

E_SRL(ω_p) = iNχ^(3) E_p(ω_p) E_S^*(ω_S) E_S(ω_S)    (4.5)

The generated signal field has the same frequency as the pump beam, which results in coherent interference between the signal and the pump field. The final SRS signal is therefore heterodyne detected, with the pump field acting as the local oscillator. The self-heterodyned SRL signal can thus be expressed as:

S_SRL = Re[E_SRL E_p^*] = N Im[χ^(3)] I_p I_S    (4.6)

Therefore, the SRS signal is proportional to the imaginary part of χ^(3), and since the NRB is mostly real, SRS is intrinsically free of NRB. In addition, Im[χ^(3)] retains the lineshape of the Raman spectrum, so SRS produces spectra that resemble those of spontaneous Raman, a big advantage over CARS (Figure 4.3c). Another important advantage of SRS is that the signal is linearly proportional to the concentration of the target molecules (ΔI ∝ N), which makes it convenient to generate quantitative chemical maps of samples, even for mixtures of different chemical species. Furthermore, the nonlinear optical effect gives SRS intrinsic optical sectioning similar to two-photon microscopy. SRS inherits the spectroscopic and chemical resolution of spontaneous Raman while overcoming the NRB of CARS, and has become an increasingly popular chemical imaging technique in various fields.

On the detection side, SRS derives its signal from SRL or SRG, a differential intensity ΔI that is a small fraction of the total intensity, ΔI/I ∼ 10^−4. To extract this faint signal, the modulation transfer method commonly used in pump–probe spectroscopy is applied (Figure 4.4c). In this approach, one of the excitation beams is intensity modulated at a radio frequency (RF) by an electro-optic modulator (EOM) or acousto-optic modulator (AOM). When SRL/SRG occurs, the intensity of the other beam becomes modulated, out of phase (SRL) or in phase (SRG), at the same RF frequency.
The weak ΔI on top of the large DC background can then be demodulated by a lock-in amplifier as the SRS signal. SRS detection therefore requires more sophisticated electronics than CARS and is less convenient for epi detection in live-animal experiments, which usually require a specialized detector to collect the back-scattered SRS signal [10].
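The modulation-transfer detection can be sketched numerically: a weak SRL dip (ΔI/I ∼ 10⁻⁴) imprinted on the pump at the modulation frequency is recovered by mixing with a reference and averaging, which is what a lock-in amplifier does. The frequencies, noise level, and sample counts below are illustrative assumptions, not values from the text:

```python
import math, random

f_mod = 10e6   # Stokes modulation frequency (Hz), illustrative
fs = 200e6     # sampling rate, 20 samples per modulation period
n = 200000     # 10000 full modulation periods
depth = 1e-4   # SRL depth DeltaI/I imprinted on the pump

random.seed(0)
t = [i / fs for i in range(n)]
# On/off (square-wave) Stokes modulation transfers a small dip to the pump;
# detector noise rides on the large DC background.
square = [0.5 * (1 + math.copysign(1, math.sin(2 * math.pi * f_mod * ti)))
          for ti in t]
pump = [1.0 - depth * sq + random.gauss(0, 1e-5) for sq in square]

# Lock-in: mix with the in-phase reference and average (low-pass).
# For a unit 0/1 square wave, <square * sin> = 1/pi, hence the pi rescale.
mixed = [p * math.sin(2 * math.pi * f_mod * ti) for p, ti in zip(pump, t)]
delta_i = -math.pi * sum(mixed) / n
assert abs(delta_i - depth) / depth < 0.1  # dip recovered within ~10%
```

The DC background and the broadband noise average away in the mixing step, which is why shifting the modulation to RF, where laser intensity noise is low, is essential in practice.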

4.3.2 Hyperspectral SRS

Exploiting spectroscopic information during SRS imaging is critically important for analyzing the chemical composition of biological specimens. There are quite a few ways to achieve hyperspectral SRS, most of which use an optical parametric oscillator (OPO) laser with dual outputs: one wavelength-tunable beam and one fixed beam. Two major types of technique are common: (i) the picosecond-OPO-based method simply sweeps the tunable wavelength while keeping the other beam fixed; (ii) a femtosecond OPO cannot be used for

Figure 4.5 The setup (a) and principle (b) of hyperspectral stimulated Raman scattering. The femtosecond pulses from a commercial laser are stretched by SF57 glass rods to picosecond duration, with a linear distribution of ω versus t. (c) Typical SRS spectra of oleic acid and (d) the setup of a rapid scanning optical delay line (RSODL) for hyperspectral SRS. Source: Minbiao Ji.

hyperspectral SRS directly because its broad frequency bandwidth destroys the spectral resolution; thus, either spectrally slicing out a narrowband or the so-called "spectral focusing" technique is widely adopted to maintain sufficient spectral resolution. Throughout this chapter, we will mainly introduce the principles and applications of the latter method. In a typical spectral-focusing SRS microscope (Figure 4.5a), femtosecond pulses from a commercial OPO laser serve as the light source. Both the pump and Stokes pulses are chirped by SF57 glass rods to several picoseconds (Figure 4.5b). When the linear chirps of the two pulses are matched, their frequency difference (the Raman frequency) is the same at every temporal position; hence, the SRS signal corresponds to a specific narrowed (focused) spectral region. In addition, simply scanning the time delay between the pump and Stokes is equivalent to scanning the wavelength. Therefore, SRS spectra can be conveniently acquired by scanning an optical delay line without changing any laser wavelength (Figure 4.5c). The rest of the setup is the same as in an ordinary SRS microscope (Figure 4.5a): intensity modulation of the Stokes beam is realized by an EOM, and the two laser beams are combined on a dichroic mirror, spatially and temporally overlapped, and delivered into the laser scanning microscope. Finally, the SRS signal is optically filtered, detected by a photodetector, and demodulated with a lock-in amplifier feeding the analog input of the microscope to form images.


A brief theoretical description is given below. The electric field of the linearly chirped pulses can be written as [11]:

E = (A_p/√τ) e^{−(t−t_0)^2/(2τ^2)} e^{ib(t−t_0)^2} e^{i(k_p z − ω_p t + ω_p t_0)} + (A_S/√τ) e^{−t^2/(2τ^2)} e^{ibt^2} e^{i(k_S z − ω_S t)} + c.c.    (4.7)

where A, k, ω, τ, t_0, and b represent the amplitude, wavevector, angular frequency, pulse duration, interpulse delay, and chirp of the propagating pulses, respectively, and c.c. denotes the complex conjugate. The relationship between the chirp parameter b, the chirped pulse duration Δτ, and the chirped pulse bandwidth Δλ is:

b = √[(πcΔλ/(λ^2 Δτ))^2 − 4(ln 2)^2/Δτ^4]    (4.8)

Note that the delay between the pump and Stokes pulses effects a change in the probed Raman frequency from ω_p − ω_S to ω_p − ω_S − 2bt_0. Therefore, SRS hyperspectral imaging can be realized by performing SRS imaging while sequentially scanning the interpulse delay t_0 between the pump and Stokes with an optical delay line. To obtain the SRS spectrum, a calibration is necessary to map time delay onto Raman shift (Figure 4.5c). It is also worth noting that as t_0 increases, the SRS signal is attenuated by a factor e^{−(t_0/2τ)^2} owing to the limited bandwidth; this can be removed by normalizing the SRS spectrum by a calibration spectrum (normally a two-photon absorption spectrum) that represents the cross-correlation profile.

Compared with the wavelength-tuning approach, the spectral focusing method is simpler, faster, and more reliable, since it only tunes the time delay without touching the laser itself. There are, of course, many ways to build an optical delay line, including a conventional motorized linear stage, a galvo mirror, and an AOM. Figure 4.5d illustrates the use of a galvo mirror to realize a rapid scanning optical delay line (RSODL) for hyperspectral SRS [12], with the rotation angle converted to time delay as:

τ = πfpλγ/(45c·cos θ)    (4.9)

where f is the focal length of the lens, λ is the center wavelength of the Stokes beam, γ is the rotation angle of the galvo (in degrees), c is the speed of light, p is the grating constant, and θ is the diffraction angle of the grating.
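Equation (4.8) together with the delay-shift relation gives a simple calibration from delay-line position to probed Raman shift. A sketch, with illustrative (not measured) pulse parameters:

```python
import math

C_NM_PER_PS = 299792.458           # speed of light in nm/ps
C_CM_PER_PS = C_NM_PER_PS * 1e-7   # the same in cm/ps

def chirp_b(d_tau_ps, d_lambda_nm, lambda_nm):
    """Chirp parameter b (rad/ps^2) from Eq. (4.8)."""
    term = (math.pi * C_NM_PER_PS * d_lambda_nm
            / (lambda_nm ** 2 * d_tau_ps)) ** 2
    return math.sqrt(term - 4.0 * math.log(2) ** 2 / d_tau_ps ** 4)

def probed_shift_cm1(t0_ps, b, center_shift_cm1):
    """Probed Raman shift (w_p - w_S - 2*b*t0), expressed in cm^-1."""
    return center_shift_cm1 - 2.0 * b * t0_ps / (2.0 * math.pi * C_CM_PER_PS)

# Illustrative: a 10 nm bandwidth pulse at 1040 nm chirped to 2 ps.
b = chirp_b(2.0, 10.0, 1040.0)
# Scanning the delay by 1 ps then tunes the probed shift by roughly
# 46 cm^-1 per ps of delay:
tuning = probed_shift_cm1(0.0, b, 2900.0) - probed_shift_cm1(1.0, b, 2900.0)
```

In practice the linear map from t_0 to Raman shift is anchored by measuring a standard sample with known peaks, as the calibration step in the text describes.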

4.3.3 High Speed SRS

The nonlinear and coherent optical processes amplify the Raman signal by 3–5 orders of magnitude, enabling fast imaging (up to video rate) and sensitive detection (millimolar) with CRS microscopy. However, reaching video rate requires pixel dwell times down to the nanosecond level, reducing the signal-to-noise ratio (SNR). In practice, the imaging rate is often still limited to about one second per frame at 512 × 512 pixels. For multicolor SRS, sequential wavelength-tuning methods are obviously time-consuming. In order to speed up the imaging rate while maintaining the

Figure 4.6 Techniques to speed up stimulated Raman scattering microscopy. (a) The conventional hyperspectral SRS scheme, in which part of the Stokes beam is wasted. (b) Optical layout of the dual-phase SRS setup. (c) Pulse trains of the modulated S1 and S2, showing the modulation phase difference. (d) Comparison of the operating modes and (e) acquired images (blue: protein; green: lipid) of the two mosaicking schemes. Source: (a–c) Minbiao Ji; (e) Zhang et al. [14], From Optica Publishing Group.

SNR, the recently developed dual-phase SRS may be a convenient choice for parallel two-color SRS imaging at the highest imaging speed [13, 14]. The dual-phase SRS setup is based on the spectral focusing geometry described in the previous section and makes use of the phase sensitivity of the lock-in amplifier [13]. Notice that in typical hyperspectral SRS, half of the Stokes beam intensity is discarded after intensity modulation (Figure 4.6a). The core idea of dual-phase SRS is to recycle the reflected Stokes beam as a second Stokes beam (S2) in a Mach–Zehnder interferometer (Figure 4.6b). By fine-tuning the interpulse delays between the chirped pump and the two Stokes beams (S1 and S2), two Raman frequencies can be driven simultaneously. In order to detect the two Raman signals simultaneously through the X and Y channels of the lock-in, S1


and S2 need to be modulated in quadrature phase. A neat design is to synchronize the modulation frequency to 1/4 of the pulse repetition rate (f_0) and then add one pulse interval T_0 to S2, so that the phase difference between S1 and S2 is shifted from π to π/2 without affecting the SRS signal (Figure 4.6c). In the end, the SRL signals at Raman frequencies Ω_1 and Ω_2 are generated in the pump beam with a quadrature phase difference, which can be written as:

I_SRL(t) = I_1(Ω_1) sin(πf_0 t/2 + φ_0) + I_2(Ω_2) cos(πf_0 t/2 + φ_0)    (4.10)

Hence, a phase-sensitive lock-in amplifier can detect I_1(Ω_1) and I_2(Ω_2) simultaneously through the in-phase (X) and quadrature (Y) output channels with a proper setting of the reference phase φ_0. The method has proven effective, with negligible interference and cross-talk artifacts, and can achieve video-rate imaging as fast as single-channel SRS. Generally speaking, dual-phase SRS acquires two images at a time, halving the data collection time; this is particularly suitable for two-color SRS histology imaging of large tissue sections and may provide real-time histology for rapid diagnosis without image postprocessing.

When imaging large-scale tissues, mosaicking methods are commonly used. However, sequential tiling in SRS is slow because of the long interval between adjacent images. To address this issue, a parallel strip mosaicking mode (Figure 4.6d) was developed [14]. Briefly, parallel strip mosaicking involves two steps. First, the normal "Framescan" mode is switched to "Linescan" mode, in which the laser spot scans only in the Y direction. Second, the sample stage is moved along the X axis at constant velocity to yield strip images. Consequently, parallel strip mosaicking removes one of the stitching dimensions and improves the final image quality with fewer stitching artifacts (Figure 4.6e).
With dual-phase SRS, an almost 10-fold reduction in total acquisition time can be achieved for large-scale tissue imaging. Such a method is practically useful when dealing with large numbers of tissue sections for statistical analysis and big-data science.
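The quadrature separation in Eq. (4.10) can be checked numerically: with the reference phase set correctly, the in-phase and quadrature outputs of a lock-in recover I_1 and I_2 independently. The amplitudes and sampling below are illustrative assumptions:

```python
import math

# Eq. (4.10): the two SRL channels ride on quadrature carriers at f0/4.
I1, I2, phi0 = 0.7, 0.3, 0.0   # illustrative amplitudes, reference phase
n, spp = 16000, 16             # total samples, samples per modulation period

phase = [2 * math.pi * (i % spp) / spp + phi0 for i in range(n)]
sig = [I1 * math.sin(p) + I2 * math.cos(p) for p in phase]

# Lock-in in-phase (X) and quadrature (Y) outputs at the modulation frequency;
# the factor 2/n rescales <sin^2> = 1/2 back to unit amplitude.
X = 2.0 / n * sum(s * math.sin(p) for s, p in zip(sig, phase))
Y = 2.0 / n * sum(s * math.cos(p) for s, p in zip(sig, phase))
assert abs(X - I1) < 1e-9 and abs(Y - I2) < 1e-9  # channels fully separated
```

The orthogonality of sine and cosine over full periods is what guarantees the negligible cross-talk reported for the scheme; a reference-phase error would mix a fraction sin(Δφ) of one channel into the other.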

4.4 Biomedical Applications of CRS Microscopy

4.4.1 Label-Free Histology for Rapid Diagnosis

Two-color SRS microscopy holds great potential for label-free digital histopathology, providing diagnostic results similar to standard hematoxylin and eosin (H&E) staining even though the two are based on different molecular contrasts: H&E stains proteins and nucleic acids, whereas SRS usually detects lipids and proteins [10, 15]. Both modalities are two-color imaging methods that can be well correlated [16]. Moreover, SRS has the flexibility to explore different spectroscopic features associated with pathological properties, such as nucleic acids [17] and misfolded proteins [18]. To date, several types of diseased tissue have been demonstrated with SRS imaging, including brain tumors, laryngeal squamous cell carcinoma (SCC), Alzheimer's disease, and

Figure 4.7 Label-free stimulated Raman scattering histology for rapid diagnosis. (a) Photograph of a tumor infiltration area as seen by the naked eye and (b) SRS image of the same region; a clear tumor border is visible only in the SRS image. (c) SRS image of an ex vivo mouse brain slice. The gray/white junction is evident owing to the differences in lipid concentration between cortical and subcortical tissue, with cell-to-cell correlation between the SRS (d) and H&E (e) images. (f) Large-scale tiled SRS image of human brain surgical tissue, showing the clear biochemical differences between the protein-rich gray matter, appearing blue (left), and the myelinated white matter, appearing green (right). Source: (a–f) Ji et al. [10, 15], From American Association for the Advancement of Science. SRS images of frozen sections [(g) cytological atypia accompanied by lymphocytes and architectural dysplasia; (h) a typical keratin pearl] and unprocessed fresh larynx surgical tissues [(i) clustered small nests; (j) cells with enriched nuclear contents (yellow arrows); (k) highly disordered cells with largely disrupted cell morphologies; (l) enlarged cell nuclei and abnormal nuclear morphology]. (m) Simulated surgical workflow using ResNet34-SRS to intraoperatively evaluate resection margins. Source: (g–m) Zhang et al. [19], From Ivyspring International Publisher.

others [10, 18–22]. In this section, a few typical label-free histology applications are reviewed.

In 2013, Ji et al. conducted two-color SRS detection of brain tumor margins in live mouse models [10]. In addition to the histoarchitectural differences between normal and cancerous brain tissues, the lipid/protein contrast delineates clear boundaries at the tumor margins (Figure 4.7a–c). Normal brain tissue is rich in lipids, mostly from the myelin sheaths of neuronal axons. Because glioma tissues are mostly made up of glial cells, they have a much lower lipid content but a much higher protein content. Two-color SRS imaging at the CH2 (2850 cm−1) and CH3 (2930 cm−1) vibrations conveniently differentiates lipids (green) and proteins (blue), providing the chemical contrast of tissue histology. The numerical decomposition of the two chemicals is based on the linear relationship between SRS intensity and chemical concentration (Eq. (4.6)). SRS and H&E can be compared on the same thin frozen tissue sections with cell-to-cell correlation accuracy (Figure 4.7d and e). In living mice with exposed brains under the SRS microscope, clear tumor margins were revealed in areas that appeared grossly normal to the naked eye (Figure 4.7a–c). Later, Ji and coworkers validated SRS histology on excised human brain surgical tissues (Figure 4.7f) and further created a classifier based on cellularity, axonal density, and the protein/lipid ratio [15]. This classifier was subsequently improved by combining machine-learning algorithms with SRS imaging [16].

SCC is the most common malignancy among laryngeal cancers. By combining two-color SRS with SHG microscopy, Zhang et al. acquired three-color images representing the distributions of lipids, proteins, and collagen fibers, false-colored green, blue, and red, respectively [19].
Both thin frozen sections and fresh unprocessed larynx tissues yielded clear cytological and histoarchitectural features for diagnosis (Figure 4.7g–l). The authors also demonstrated high diagnostic concordance between SRS and the H&E staining results of adjacent sister sections, as evaluated by three pathologists. Furthermore, a deep-learning model (ResNet34) was constructed and trained with laryngeal
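The per-pixel linear decomposition described above (Eq. (4.6)) can be sketched as a small numerical example. The response matrix `A` below uses made-up illustrative coefficients, not calibrated cross-sections for the CH2 and CH3 channels:

```python
import numpy as np

# Two-color SRS unmixing sketch: measured intensities at 2850 and 2930 cm-1
# are modeled as linear combinations of lipid and protein concentrations,
# I = A @ c, and inverted per pixel. The entries of A are illustrative only.
A = np.array([[1.00, 0.25],   # response at 2850 cm-1 to [lipid, protein]
              [0.40, 1.00]])  # response at 2930 cm-1 to [lipid, protein]
A_inv = np.linalg.inv(A)

def unmix(I_2850, I_2930):
    """Per-pixel lipid/protein concentration maps from two-color SRS images."""
    I = np.stack([I_2850, I_2930])        # shape (2, H, W)
    c = np.tensordot(A_inv, I, axes=1)    # shape (2, H, W)
    return c[0], c[1]                     # lipid map, protein map

# Toy 2x2 "images": a pure-lipid pixel, a pure-protein pixel, a mixture,
# and a background pixel, forward-modeled through A and then recovered.
lipid_true = np.array([[1.0, 0.0], [0.5, 0.0]])
protein_true = np.array([[0.0, 1.0], [0.5, 0.0]])
I1 = A[0, 0] * lipid_true + A[0, 1] * protein_true
I2 = A[1, 0] * lipid_true + A[1, 1] * protein_true
lipid, protein = unmix(I1, I2)
```

Because the model is linear and the matrix is well conditioned, the toy concentrations are recovered exactly; with real data the same inversion is applied after calibration with pure-component spectra.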


4 Coherent Raman Scattering Microscopy and Biomedical Applications

SRS images to automatically and accurately differentiate normal and neoplastic tissues. To test the efficacy of the model, they simulated the surgical process on totally removed larynxes and used ResNet34-SRS to intraoperatively evaluate the resection margins determined by the surgeon's naked eye (Figure 4.7m). The neuropathological hallmarks of Alzheimer's disease (AD) include the formation of amyloid plaques and neurofibrillary tangles. Although still under debate, the "amyloid hypothesis" emphasizes the important role of the misfolding of amyloid beta (Aβ) polypeptides, along with the consequent formation of oligomers and fibrils and the final deposition of extracellular plaques. The key molecular feature of Aβ plaques is their rich beta-sheet conformation, which can be spectroscopically identified by the frequency shift of the amide I vibrational mode of the polypeptide backbone. Figure 4.8a shows the blue shift of the amide I Raman peak of Aβ upon secondary structural change, based on which SRS spectral imaging can be applied to distinguish the distributions of lipids, normal proteins, and misfolded Aβ (plaques). Images at 1658 cm−1 have the maximum intensity of normal tissues, images at 1670 cm−1 show the maximum brightness of the plaques, and images taken at 1680 cm−1 reveal increased contrast of the surrounding cells. Applying the conventional numerical decomposition method, the distributions of lipids (green), normal proteins (blue), and amyloid plaques (magenta) could be decomposed from the raw imaging data (Figure 4.8b–e). Although the structures of the plaques can also be seen with high-frequency C–H stretch imaging, such images do not provide spectral features specific to the misfolded plaques, i.e. they are not sensitive to protein conformational changes. The existence of the plaques was further verified by labeling methods, including antibody staining and thioflavin-S staining (Figure 4.8f and g).
Interestingly, SRS microscopy may reveal additional histological changes that are missed by specific labeling. For instance, each plaque is surrounded by a lipid-rich halo structure, which might originate from degenerated neurites and myelin sheaths but does not appear in the Aβ staining (Figure 4.8f and g). Vibrational spectroscopy is generally not good at differentiating proteins, but misfolded proteins appear to be an exception. SRS microscopy may thus find opportunities in other neurodegenerative diseases, such as Parkinson's disease and Huntington's disease.

4.4.2 Raman Tagging and Imaging

The native vibrations of chemical bonds serve as intrinsic "fingerprints" of molecules, allowing us to identify them spectroscopically without additional labels. Being label-free has been the major advantage of CRS microscopy since its invention. However, this advantage comes with several disadvantages, including limited sensitivity and specificity. Fluorescence microscopy, on the other hand, has easily reached single-molecule detectability and achieves highly specific targeting when combined with various functional groups. Inspired by fluorescence labeling, Min et al. pioneered the introduction of Raman tags to label biomolecules for CRS microscopy and have demonstrated the remarkable capabilities of these "Raman labeling" approaches, which bridge the gap between Raman and fluorescence [23, 24].

4.4 Biomedical Applications of CRS Microscopy

[Figure 4.8 panels: (a) SRS spectra (x-axis: Raman shift, 1600–1720 cm−1; y-axis: SRS intensity, a.u.) of plaque versus normal tissue; (b–d) SRS images at 1658, 1670, and 1680 cm−1 (scale bar 30 μm); (e) composite image; (f) SRS and (g) ThioS images (scale bar 20 μm).]

Figure 4.8 (a) SRS spectra of plaque and normal tissue, showing the blue shift of the misfolded protein. (b–e) Individual SRS images of a 1-mm-thick fresh mouse brain section at 1658, 1670, and 1680 cm−1 and the composite three-color image showing the distributions of lipid (green), normal protein (blue), and amyloid plaque (magenta). (f, g) Comparison of SRS (f) and ThioS (g) images; the halo structure is visible only in the SRS image. Source: Minbiao Ji.


As the old saying goes, "Indigo blue is extracted from the indigo plant, but is bluer than the plant it comes from." Raman labeling and imaging have emerged as a new field of bioorthogonal chemical imaging for the following reasons. First, vibrational tags, including isotope-based and alkyne-based tags, contain only a few atoms and are expected to introduce minimal perturbation to the original molecules. Second, the Raman signals from these vibrational tags do not suffer from photobleaching. Third, leveraging the narrow linewidth of Raman peaks, super-multiplexed Raman dyes can overcome the limited number of usable fluorescence labels and achieve direct imaging of tens of labeled species in biological samples. Furthermore, these tags usually have Raman peaks in the cell-silent window (1800–2700 cm−1), away from tissue backgrounds. Here, to illustrate this nascent technique, concrete examples are presented. To study organism activity at a global level, deuterated water (D2O) has been applied as a universal and cost-effective tag to visualize the in situ metabolism of proteins, lipids, and DNA in cells, tissues, and animals. Glucose metabolism, meanwhile, can be traced with d7-glucose and spectral tracing of deuterium isotopes. With the distinct Raman spectra of C—D bonds in the Raman-silent spectral window, newly biosynthesized proteins, lipids, DNA, and glycogen can be monitored, revealing biomass turnover [25–27]. For example, by imaging the metabolism of deuterated fatty acids, saturated fatty acids were found to induce a phase separation that forms solid-like domains in the presumably fluidic endoplasmic reticulum (ER) membrane, indicating the important role of the ER membrane phase in the onset of lipotoxicity [28] (Figure 4.9a). Isotope-based vibrational tags have minimal size but only moderate Raman intensity, so they are more suitable for high-abundance species than for low-abundance ones.
By contrast, triple bonds, such as alkyne (C≡C) and nitrile (C≡N) bonds, not only have sharp Raman peaks in the cell-silent region but also exhibit much stronger Raman signals than isotope tags, enabling higher-sensitivity imaging (down to the sub-μM level under electronic preresonance). In the simple view of harmonic oscillators, the vibrational frequency of a chemical bond is proportional to the square root of the force constant and inversely proportional to the square root of the reduced mass:

𝜐 = (1/(2𝜋c)) √(K/𝜇), 𝜇 = m1m2/(m1 + m2) (4.11)

where K is the force constant (bond strength), c is the speed of light, and 𝜇 is the reduced mass of the two atoms composing the chemical bond. Therefore, by chemically designing the molecular structures, including substituting different end groups and isotopes, tens of chemicals with distinguishable Raman peaks can be synthesized and used as labels to achieve super-multiplexed imaging (Figure 4.9b) [30]. This allows a large number of different Raman dyes to be visualized simultaneously with high spatial and temporal resolution [29], which cannot be achieved with fluorescence labeling due to its broad emission bands (Figure 4.9c). Combined with the various photophysical and photochemical properties of fluorescent dye molecules, there is much to expect in the direction of Raman tagging and CRS imaging, opening up new opportunities for understanding the function and dynamics of complex biological and biomedical systems.
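A quick numerical check of Eq. (4.11): the force constant below is an assumed, order-of-magnitude value for a C≡C bond, chosen only to show how isotope editing (12C → 13C) shifts the alkyne peak within the cell-silent window:

```python
import math

C_CM = 2.99792458e10     # speed of light in cm/s
AMU = 1.66053906660e-27  # kg per atomic mass unit

def wavenumber(K, m1_amu, m2_amu):
    """Harmonic-oscillator wavenumber (cm^-1) from Eq. (4.11):
    nu = (1 / 2*pi*c) * sqrt(K / mu), mu = m1*m2 / (m1 + m2)."""
    mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * AMU
    return math.sqrt(K / mu) / (2 * math.pi * C_CM)

K_CC = 1.56e3  # N/m, assumed illustrative force constant for a C#C bond
nu_12C = wavenumber(K_CC, 12.0, 12.0)  # 12C-12C alkyne
nu_13C = wavenumber(K_CC, 13.0, 13.0)  # isotope-edited 13C-13C alkyne
print(f"12C: {nu_12C:.0f} cm-1, 13C: {nu_13C:.0f} cm-1")
```

With these assumed inputs the 12C peak lands near 2100 cm−1, and the 13C-edited peak shifts down by the factor √(12/13), consistent with the tens-of-cm−1 spacings that make the Carbow dyes spectrally resolvable.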

[Figure 4.9 panels: (a) C–D SRS images of d-palmitate versus palmitate, with maximum diffusion coefficient Dmax = (0.8 ± 0.1) × 10−4 μm2/s; (b) nitrile and alkyne dye structures with Raman shifts between roughly 2017 and 2242 cm−1, tuned by isotope editing, alkyne terminal group, xanthene 10-substitution, and xanthene ring expansion; (c) fifteen-color imaging with ten Carbow dyes (Carbow2017–Carbow2226) and five fluorescent probes; (d) two-color organelle-pair images (PM/ER, Golgi/Mito, LD/Lyso, nucleus/tubulin, actin/FM4-64, nucleus/Golgi, tubulin/actin, LD/Mito, nucleus/PM, ER/tubulin). Scale bars, 10–20 μm.]

Figure 4.9 Raman labeling and imaging. (a) SRS imaging with deuterated palmitic acids reveals a solid-phase membrane in living cells, with implications for lipotoxicity, according to the low maximum diffusion coefficient (Dmax). Scale bar, 10 μm. Source: (a) Shen et al. [28], From Proceedings of the National Academy of Sciences PNAS. (b) Design principles and structures of triple-bond dyes. (c) Fifteen-color imaging of live HeLa cells with the corresponding Carbow and fluorescent molecules. (d) Ten-color optical imaging of PM (Carbow2141), ER (Carbow2226), Golgi (BODIPY TR), Mito (Carbow2062), LD (Carbow2202), Lyso (Carbow2086), nucleus (NucBlue), tubulin (SiR650), actin (GFP), and FM 4-64 in living HeLa cells. An overlay of two species is shown in each image. Source: (c, d) Hu et al. [29], From Springer Nature.


4.5 Prospects and Challenges

Despite all the technical developments of CRS microscopy achieved so far, quite a few challenges await further breakthroughs. (i) Without Raman tagging, the detection sensitivity is still limited to the mM level, restricting applications to imaging relatively concentrated chemicals. Improving the sensitivity is crucial for detecting lower concentrations of metabolites such as neurotransmitters, glucose, and ATP. (ii) The spatial resolution of CRS microscopy is still limited by optical diffraction, which may be overcome by introducing special Raman tags or by exploiting higher-order nonlinear optical processes associated with Raman transitions. (iii) The current imaging depth is only around 200 μm, limited by tissue scattering and aberration. Opportunities may exist in applying wavefront-shaping techniques to the excitation beams, or in tissue-clearing methods that preserve the main chemical compositions, including lipids and proteins. Technical advances are always needed for CRS microscopy to fulfill the goals of seeing smaller, deeper, and faster, detecting fewer molecules, and resolving more detailed chemical makeup with improved spectroscopy. Moreover, it is challenging but critical to choose proper deep learning-based algorithms specifically tailored to and integrated with CRS microscopy in various settings, including imaging and spectroscopy. Nonetheless, the future of CRS is bright. Both the traditional label-free advantage and the emerging Raman-labeling capability will continue to evolve and will hopefully find more exciting opportunities in biological and biomedical research.

References

1 Raman, C.V. and Krishnan, K.S. (1928). A new type of secondary radiation. Nature 121: 501–502. https://doi.org/10.1038/121501c0.
2 Movasaghi, Z., Rehman, S., and Rehman, I.U. (2007). Raman spectroscopy of biological tissues. Appl. Spectrosc. Rev. 42: 493–541. https://doi.org/10.1080/05704920701551530.
3 Zumbusch, A., Holtom, G.R., and Xie, X.S. (1999). Three-dimensional vibrational imaging by coherent anti-Stokes Raman scattering. Phys. Rev. Lett. 82: 4142–4145. https://doi.org/10.1103/physrevlett.82.4142.
4 Cheng, J.X. and Xie, X.S. (2004). Coherent anti-Stokes Raman scattering microscopy: instrumentation, theory, and applications. J. Phys. Chem. B 108: 827–840. https://doi.org/10.1021/Jp035693v.
5 Cheng, J.X., Volkmer, A., and Xie, X.S. (2002). Theoretical and experimental characterization of coherent anti-Stokes Raman scattering microscopy. J. Opt. Soc. Am. B: Opt. Phys. 19: 1363–1375. https://doi.org/10.1364/Josab.19.001363.
6 Ganikhanov, F., Evans, C.L., Saar, B.G. et al. (2006). High-sensitivity vibrational imaging with frequency modulation coherent anti-Stokes Raman scattering (FM CARS) microscopy. Opt. Lett. 31: 1872–1874. https://doi.org/10.1364/OL.31.001872.
7 Camp, C.H., Lee, Y.J., Heddleston, J.M. et al. (2014). High-speed coherent Raman fingerprint imaging of biological tissues. Nat. Photonics 8: 627–634. https://doi.org/10.1038/Nphoton.2014.145.
8 Lombardini, A., Mytskaniuk, V., Sivankutty, S. et al. (2018). High-resolution multimodal flexible coherent Raman endoscope. Light Sci. Appl. 7. https://doi.org/10.1038/s41377-018-0003-3.
9 Freudiger, C.W., Min, W., Saar, B.G. et al. (2008). Label-free biomedical imaging with high sensitivity by stimulated Raman scattering microscopy. Science 322: 1857–1861. https://doi.org/10.1126/science.1165758.
10 Ji, M., Orringer, D.A., Freudiger, C.W. et al. (2013). Rapid, label-free detection of brain tumors with stimulated Raman scattering microscopy. Sci. Transl. Med. 5: 201ra119. https://doi.org/10.1126/scitranslmed.3005954.
11 Fu, D., Holtom, G., Freudiger, C. et al. (2013). Hyperspectral imaging with stimulated Raman scattering by chirped femtosecond lasers. J. Phys. Chem. B 117: 4634–4640. https://doi.org/10.1021/jp308938t.
12 He, R., Liu, Z., Xu, Y. et al. (2017). Stimulated Raman scattering microscopy and spectroscopy with a rapid scanning optical delay line. Opt. Lett. 42: 659–662. https://doi.org/10.1364/OL.42.000659.
13 He, R., Xu, Y., Zhang, L. et al. (2016). Dual-phase stimulated Raman scattering microscopy for real-time two-color imaging. Optica 4. https://doi.org/10.1364/optica.4.000044.
14 Zhang, B., Sun, M., Yang, Y. et al. (2018). Rapid, large-scale stimulated Raman histology with strip mosaicing and dual-phase detection. Biomed. Opt. Express 9: 2604–2613. https://doi.org/10.1364/BOE.9.002604.
15 Ji, M., Lewis, S., Camelo-Piragua, S. et al. (2015). Detection of human brain tumor infiltration with quantitative stimulated Raman scattering microscopy. Sci. Transl. Med. 7: 309ra163. https://doi.org/10.1126/scitranslmed.aab0195.
16 Orringer, D.A., Pandian, B., Niknafs, Y.S. et al. (2017). Rapid intraoperative histology of unprocessed surgical specimens via fibre-laser-based stimulated Raman scattering microscopy. Nat. Biomed. Eng. 1: 0027. https://doi.org/10.1038/s41551-016-0027.
17 Lu, F.K., Basu, S., Igras, V. et al. (2015). Label-free DNA imaging in vivo with stimulated Raman scattering microscopy. Proc. Natl. Acad. Sci. U. S. A. 112: 11624–11629. https://doi.org/10.1073/pnas.1515121112.
18 Ji, M., Arbel, M., Zhang, L. et al. (2018). Label-free imaging of amyloid plaques in Alzheimer's disease with stimulated Raman scattering microscopy. Sci. Adv. 4: eaat7715. https://doi.org/10.1126/sciadv.aat7715.
19 Zhang, L., Wu, Y., Zheng, B. et al. (2019). Rapid histology of laryngeal squamous cell carcinoma with deep-learning based stimulated Raman scattering microscopy. Theranostics 9: 2541–2554. https://doi.org/10.7150/thno.32655.
20 Yan, S., Cui, S., Ke, K. et al. (2018). Hyperspectral stimulated Raman scattering microscopy unravels aberrant accumulation of saturated fat in human liver cancer. Anal. Chem. 90: 6362–6366. https://doi.org/10.1021/acs.analchem.8b01312.
21 Tian, F., Yang, W., Mordes, D.A. et al. (2016). Monitoring peripheral nerve degeneration in ALS by label-free stimulated Raman scattering imaging. Nat. Commun. 7: 13283. https://doi.org/10.1038/ncomms13283.
22 Yue, S., Li, J., Lee, S.Y. et al. (2014). Cholesteryl ester accumulation induced by PTEN loss and PI3K/AKT activation underlies human prostate cancer aggressiveness. Cell Metab. 19: 393–406. https://doi.org/10.1016/j.cmet.2014.01.019.
23 Wei, L., Hu, F., Shen, Y. et al. (2014). Live-cell imaging of alkyne-tagged small biomolecules by stimulated Raman scattering. Nat. Methods 11: 410–412. https://doi.org/10.1038/nmeth.2878.
24 Hu, F., Wei, L., Zheng, C. et al. (2014). Live-cell vibrational imaging of choline metabolites by stimulated Raman scattering coupled with isotope-based metabolic labeling. Analyst 139: 2312–2317. https://doi.org/10.1039/c3an02281a.
25 Wei, L., Yu, Y., Shen, Y. et al. (2013). Vibrational imaging of newly synthesized proteins in live cells by stimulated Raman scattering microscopy. Proc. Natl. Acad. Sci. U. S. A. 110: 11226–11231. https://doi.org/10.1073/pnas.1303768110.
26 Shi, L., Zheng, C., Shen, Y. et al. (2018). Optical imaging of metabolic dynamics in animals. Nat. Commun. 9: 2995. https://doi.org/10.1038/s41467-018-05401-3.
27 Zhang, L., Shi, L., Shen, Y. et al. (2019). Spectral tracing of deuterium for imaging glucose metabolism. Nat. Biomed. Eng. 3: 402–413. https://doi.org/10.1038/s41551-019-0393-4.
28 Shen, Y., Zhao, Z., Zhang, L. et al. (2017). Metabolic activity induces membrane phase separation in endoplasmic reticulum. Proc. Natl. Acad. Sci. U. S. A. 114: 13394–13399. https://doi.org/10.1073/pnas.1712555114.
29 Hu, F., Zeng, C., Long, R. et al. (2018). Supermultiplexed optical imaging and barcoding with engineered polyynes. Nat. Methods 15: 194–200. https://doi.org/10.1038/nmeth.4578.
30 Wei, L., Chen, Z., Shi, L. et al. (2017). Super-multiplex vibrational imaging. Nature 544: 465–470. https://doi.org/10.1038/nature22051.


5 Fluorescence Imaging-Guided Surgery

Shudong Jiang
Thayer School of Engineering, Dartmouth College, 14 Engineering Drive, Hanover, NH 03755, USA

5.1 Introduction

Traditional surgery mainly relies on the surgeon's empirical judgment of the color and shape of different tissues to determine the extent of resection. However, the lack of objective evaluation standards, the limited experience of doctors, and the color differences of individual patients' tissues often lead to a high rate of positive margins, which is associated with increased recurrence and decreased survival [1]. On the other hand, minimizing damage to nerves, blood vessels, ureters, and bile ducts during surgery is also important to the patient's prognosis. Color-coded fluorescence imaging has been developed to enhance the visual difference between tissues and to provide objective criteria for identifying different tissues (based on tissue structure or pathology). This has led to its increasing impact on surgical care [2]. The earliest study of fluorescence image-guided surgery was published in 1948; in this study, the surgical removal of brain tumors was guided by the use of fluorescein sodium (FS) [3]. From the mid-1990s to the 2000s, following significant improvements in digital image processing that provided satisfactory resolution in real time, the technique gained widespread acceptance. Indocyanine green (ICG), which, like FS, originated in ophthalmology and in qualitative measurements of hepatic and cardiac function, has also been utilized as a vascular tracer in brain, lung, head and neck, breast, colorectal, melanoma, gastric, endometrial, and prostate cancer resections. In addition to FS and ICG, 5-aminolevulinic acid (ALA) was demonstrated as a highly specific intraoperative marker of high-grade glioma tissue in 2006, because ALA induces the accumulation of protoporphyrin IX (PpIX) in certain tumor cells [4]. Subsequent studies at the University of Toronto and Dartmouth College have characterized the use of ALA in low-grade gliomas and other brain tumors [5].
Since then, there has been rapid growth in the clinical use of fluorescence image-guided surgery [6], US Food and Drug Administration (FDA) 510(k) clearance of new imaging systems, and the development of more specific tumor-enhancing fluorescent probes to overcome certain limitations of the current FDA-approved agents FS, ICG, ALA, and methylene blue (MB). Figure 5.1 shows the publications related to fluorescence imaging-guided surgery from 1991 to 2021, as retrieved from PubMed on May 31, 2022. The search terms, matched in titles and abstracts, were (a) [intraoperative fluorescence imaging] and (b) [fluorescence imaging] AND [(surgical guide) OR (cancer resection) OR (plastic surgery) OR (laparoscopic) OR (orthopedic surgery)]. The figure shows a significant increase in the number of publications since the late 1990s. The most important advantages of fluorescence imaging-guided surgery are as follows: (i) high signal sensitivity and specificity, especially when multipoint fluorescence measurement is added to images of the entire surgical field of view; (ii) compared with most radiation technologies, which provide volume imaging, fluorescence technology better reflects tissue surface properties; with the ability to zoom into a particular location, the local properties of tissue within a complex tissue structure can be visualized, reducing the need for intraoperative biopsy; (iii) the visual contrast between cancer and surrounding normal tissue may allow surgeons to recognize residual cancer tissue covered by normal tissue, or to avoid nerve tissue buried under the surface; and (iv) compared with other surgical methods, the cost of fluorescence-mediated surgery is relatively low [7].

Biomedical Photonic Technologies, First Edition. Edited by Zhenxi Zhang, Shudong Jiang, and Buhong Li. © 2023 WILEY-VCH GmbH. Published 2023 by WILEY-VCH GmbH.

Figure 5.1 Publications related to fluorescence imaging-guided surgery from 1991 to 2021, searched in PubMed on 31 May 2022 using the terms (a) [intraoperative fluorescence imaging] and (b) [fluorescence imaging] AND [(surgical guide) OR (cancer resection) OR (plastic surgery) OR (laparoscopic) OR (orthopedic surgery)] in titles and abstracts. Source: Shudong Jiang.

5.2 Basics of Fluorescence Image-Guided Surgery

As described in Chapter 2, based on the source of the detected fluorescence, fluorescence imaging can be divided into two categories: exogenous and endogenous fluorescence imaging. Endogenous fluorescence imaging uses autofluorescence, Raman scattering, infrared reflectance, and microanatomical cytoarchitecture to directly access the physiopathological parameters of the tissue specific to an anomaly, whereas exogenous fluorescence imaging measures the intensity and/or time-related parameters of the fluorescence emitted from contrast agent probes (dyes) that target the specific disease/tissue type and are injected into the tissue.


The most significant advantage of using endogenous fluorescence imaging is the avoidance of added toxicity, because the endogenous fluorescence comes from the natural metabolism, structure, and amino acids of the tissue itself. However, compared to the contrast agent probes used in exogenous fluorescence imaging, the contrast of endogenous disease markers against the surrounding tissue is relatively low. Thus, to obtain real-time imaging over a wide field of view in the surgical setting, the current mainstream of clinical trials for fluorescence image-guided surgery is based on exogenous fluorescence imaging. Figure 5.2 shows the basic setup of a fluorescence imaging system for imaging-guided surgery. The surgical field is illuminated with two light sources: one is a single white light-emitting diode (LED) or an array of them, while the other is an LED or laser at the excitation wavelength of the probes. The white light and fluorescent light emitted from the surgical field of view are collected by the optical lens and split between two cameras. One is an RGB camera used to take white-light color images (under white light illumination), while the other takes fluorescence images through an optical bandpass filter (to block the white and excitation light). The two cameras are connected to a computer, whose software processes the fluorescence image data and overlays it on the color image. This setup can be adapted into a laparoscope or endoscope for surgery. Due to absorption, scattering, and inherent autofluorescence in tissue, visible light in the wavelength range of 350–740 nm does not penetrate deep into the tissue (i.e. less than several millimeters). In contrast, near-infrared (NIR) light at wavelengths ranging from 750 to 1000 nm can penetrate up to 20 mm deep.
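The overlay step performed by the display software can be sketched as follows; the pseudo-color choice (green) and the blending rule are illustrative assumptions, not the algorithm of any particular commercial system:

```python
import numpy as np

# Minimal sketch of overlaying a single-channel fluorescence image on an RGB
# white-light image: the fluorescence signal is pseudo-colored green and
# alpha-blended onto the color frame, with the blend weight tracking signal
# strength so that background pixels keep the original white-light color.

def overlay(rgb, fluo, alpha=0.6):
    """rgb: (H, W, 3) floats in [0, 1]; fluo: (H, W) floats in [0, 1]."""
    pseudo = np.zeros_like(rgb)
    pseudo[..., 1] = fluo                  # green pseudo-color channel
    w = alpha * fluo[..., None]            # per-pixel blend weight
    return (1 - w) * rgb + w * pseudo

# Toy frame: uniform gray scene with one bright fluorescent pixel
rgb = np.full((4, 4, 3), 0.5)
fluo = np.zeros((4, 4))
fluo[1, 1] = 1.0
out = overlay(rgb, fluo)
```

Pixels with no fluorescence are left untouched, while the fluorescent pixel is pushed toward green in proportion to its signal; this is the visual behavior a surgeon expects from the composite display.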
Figure 5.2 Basic setup of the fluorescence imaging system for imaging-guided surgery. Source: Shudong Jiang.

The sensitivity and spatial resolution of fluorescence imaging-guided surgery depend on the excitation light power and wavelength, the optics, the camera, and the probes. Increasing the fluence rate (the number of photons illuminating a unit surface per unit time) of the excitation light can increase the imaging sensitivity, resolution, and tissue penetration depth. However, the fluence rate is limited by skin and tissue heating, by irreversible photochemical bleaching of the fluorescence probes, and by the background light in the wavelength range of the fluorescence. In general, fluence rates are restricted to the range of 10–25 mW/cm2 [8]. The most important factor affecting the background light is leakage of excitation or ambient light into the fluorescence camera. One can optimize the lens, filter, and camera design and gate the signal by triggering on the excitation pulse to eliminate or reduce this leakage. The dynamic range of a fluorescence imaging system, which is determined by the camera and lens system and by the data acquisition and display process, is another important specification for imaging-guided surgery [2]. Typical optical CMOS/CCD cameras operate with an 8-bit gray-scale output, whereas fluorescence signals can easily vary by many orders of magnitude, so the 255 digitized gray-scale values of an 8-bit camera offer limited resolution. This can be improved by limiting the size of the field of view, or by zooming the field of view in or out, to reduce the variation of the fluorescence signal over the field of view. However, this approach can easily limit quantitative interpretation of the observed intensity, which corresponds to the concentration of the fluorescence contrast probe. High dynamic range (HDR) imaging is now widely implemented in commercial camera systems and provides a high analog-to-digital conversion bit depth for single images. HDR can achieve a much wider dynamic range (>16-bit images) than traditional 8-bit images [2].
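As a rough illustration of the bit-depth argument above, the number of decades of signal an N-bit digitizer can span between its smallest (1 count) and largest (2^N − 1 counts) values is log10(2^N − 1); the helper below uses only this textbook relation:

```python
import math

def usable_decades(bit_depth):
    """Orders of magnitude spanned between the smallest (1 count) and
    largest (2**bit_depth - 1 counts) digitizable signal levels."""
    return math.log10(2 ** bit_depth - 1)

print(f"8-bit : {usable_decades(8):.1f} decades")   # 255 levels
print(f"16-bit: {usable_decades(16):.1f} decades")  # 65535 levels
```

An 8-bit camera covers only about 2.4 decades, while a 16-bit HDR pipeline covers roughly twice that, which is why HDR acquisition helps when fluorescence intensities vary by many orders of magnitude across the field of view.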

5.3 Fluorescence Probes for Imaging-Guided Surgery

The fluorescent probe is crucial for viewing targeted tissues in the human body during image-guided surgery [9]. Many fluorescent probes are commercially available for various targeted tissues, but only a few are approved by the FDA for clinical trials. Among these, ICG, FS, and MB have been designed for angiographic purposes or lymphatic drainage, while ALA has been designed to assess tissue metabolism. Table 5.1 lists the excitation and emission wavelengths of these FDA-approved probes and their typical doses for clinical trials. ICG is the most widely used fluorescent probe in fluorescence-mediated surgery, in addition to brain tumor surgery. ICG was FDA-approved for intravenous injection for choroid and retinal angiography in ophthalmology in 1959. It has lately been utilized as a vascular tracer in surgical navigation because ICG is almost 98% plasma protein bound; therefore, once taken up into the microcirculation, it remains within the lymphatic and circulatory vasculature. In addition, the half-life of ICG is short, making repeated measurements possible, and it is indirectly activated, so that the dynamic fluorescence due to tissue perfusion can be captured by a video-rate imaging system. ICG is metabolized by the liver with a half-life of 3–4 minutes [10]. It is mainly used for intraoperative cardiovascular, liver function, and hepatic

5.3 Fluorescence Probes for Imaging-Guided Surgery

Table 5.1 Excitation and emission wavelengths of the FDA-approved probes and their typical doses for clinical trials.

Fluorescence probe   Excitation wavelength (nm)   Emission wavelength (nm)   Typical dose (mg/kg)
ICG                  780                          820                        0.03–0.5
FS                   460                          520                        7.5–20
MB                   665                          680                        1–2
ALA                  405                          635                        20

Source: Shudong Jiang.

blood flow angiography; for lung, head and neck, breast, colorectal, melanoma, gastric, endometrial, and prostate cancer resections; and to monitor the flow of blood, lymph fluid, cerebrospinal fluid, or urine during the operation. ICG absorbs NIR light in the wavelength range between 790 and 805 nm and has an emission peak at 835 nm. These NIR excitation and fluorescence emission wavelengths allow ICG to reveal blood vessels, bile ducts, lymphatic vessels, or tumors covered by approximately 1 cm of adipose tissue. ICG fluorescence is invisible in conventional operating microscopes or endoscopes and requires a special NIR detection device for detection and image display, together with corresponding analysis software [11]. The major limitation of using ICG as a tracer for surgical guidance is that it remains primarily in the blood; when it is used for brain tumor resection, tumor contrast is therefore limited and variable, since in regions of the infiltration margin, blood–brain barrier disruption may not yet have occurred [1]. ALA is a natural precursor in heme biosynthesis. In 1999, the FDA approved the use of ALA for the treatment of actinic keratosis by photodynamic therapy. Because ALA also induces the accumulation of PpIX in certain tumor cells, it has been tested since 2006 as a highly specific marker of high-grade glioma tissue [1]. PpIX is a photosensitive substance that emits red fluorescence at wavelengths of 610–720 nm under excitation by violet and blue light at around 405 nm. Although the mechanism by which malignant glioma cells take up exogenous ALA and convert it to PpIX is not fully understood, many experiments and clinical studies have confirmed that malignant gliomas accumulate more PpIX than the surrounding tissues in less than three hours after ALA administration [10]. Under blue light irradiation through a surgical fluorescence microscope, the tumor area shows red fluorescence to mediate tumor resection.
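The short ICG half-life mentioned above can be put into numbers with a simple exponential-decay sketch (first-order clearance is an assumption here; real ICG kinetics are multi-compartmental):

```python
# Fraction of an ICG bolus remaining after t minutes, assuming simple
# first-order hepatic clearance with a half-life of 3-4 minutes. This
# back-of-the-envelope model shows why repeat injections within one
# procedure are feasible: little of the previous dose remains.

def fraction_remaining(t_min, t_half=3.5):
    """0.5**(t / t_half): fraction of the injected dose still circulating."""
    return 0.5 ** (t_min / t_half)

f15 = fraction_remaining(15.0)  # roughly 5% left after 15 minutes
```

Under this assumed model, about 95% of the dose is cleared within 15 minutes, consistent with the text's point that the short half-life makes repeated measurements possible.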
Because ambient red light in the operating room passes through the filter of the fluorescence microscope, the surrounding normal brain tissue also appears red to a certain degree. To enhance the contrast of the tumor against normal surrounding tissue, red light sources in the operating room should be avoided as much as possible. Although ALA provides excellent visual contrast in high-grade gliomas, it lacks the sensitivity to differentiate between normal and diseased tissues in low-grade gliomas, owing to the reduced enhanced permeability and retention (EPR) and reduced downregulation of ferrochelatase associated with low-grade or marginal glioma cells [1].


MB was the first entirely synthetic drug used in medicine, with an absorption/excitation peak at 670 nm and a fluorescence emission peak at 690 nm. It has been used in the treatment of malaria, methemoglobinemia, and ifosfamide-induced encephalopathy since 1891. In the past decade, MB has also been used to map sentinel lymph nodes (SLNs) and to identify breast cancer, neuroendocrine tumors, urologic tumors, and tumors in the parathyroid glands (PGs) [12]. Although MB is safe in most cases, it carries risks of cardiac arrhythmias, coronary vasoconstriction, decreased cardiac output, decreased renal and mesenteric blood flow, and increased pulmonary vascular pressure [12]. In addition, the accumulation of MB varies with tumor type, so the dose of MB has to be matched to each tumor type. Similar to ICG, FS is a contrast agent for ophthalmology and brain tumors, based primarily on nonspecific vascular leakage. It is a water-soluble salt that emits green fluorescence: the main blue excitation peak lies in the range of 465–490 nm, while the green fluorescence peak lies at 510–530 nm. It has been safely used in humans since 1947, and the cost of FS is relatively low compared with that of ALA. As a probe for fluorescence image-guided surgery, FS has commonly been used to identify glioblastoma and metastatic brain tumors. In these cases, FS enters the tumor through the damaged blood–brain barrier after intravenous administration. Instead of binding to tumor cells, FS accumulates in the extracellular matrix of tumor cells and makes the tumor region fluoresce yellow-green or yellow. While malignancy, vascular leaks or pooling defects, and abnormal vasculature or neovascularization can cause local enhancement of fluorescence, blocking and filling defects and a normal blood–brain barrier can reduce the local fluorescence.
The major problem of using FS as a probe for fluorescence imaging-guided surgery is that FS may leak extravascularly, leading to staining of the peritumoral region due to its small molecular size and reversible binding to albumin and red blood cells. To overcome this problem, a fluorescein–albumin probe has been developed to increase the concentration gradient at the tumor–brain interface. A small clinical trial using this improved probe for glioma resection has demonstrated the promise of the improvement [10]. In addition to the four FDA-approved probes, a few others, including BLZ-100, ABY-029, and LUM015, have been approved for phase I clinical trials. BLZ-100 is a synthetic peptide–ICG conjugate that has shown strong contrast between pediatric glioma, head and neck cancer, sarcoma, breast, and skin cancers and their normal surrounding tissues. ABY-029 is a compound consisting of IRDye 800CW conjugated to a synthetic anti-EGFR Affibody. Preclinical studies have shown that for EGFR-positive tumors, ABY-029 provides higher tumor-to-normal-tissue contrast than ALA/PpIX, and that combining ABY-029 and PpIX can provide even better discrimination between tumor and normal brain tissue than either alone. LUM015 is a protease-activated fluorescent probe and has demonstrated safe administration and tumor-specific labeling [1].

5.4 Typical Fluorescence Imaging-Guided Surgeries

5.4.1 Brain Tumor Resection

Gliomas are the most common type of primary malignant brain tumor. The completeness of glioma resection primarily determines life expectancy. While even a few residual tumor cells can directly affect the efficacy of subsequent adjuvant treatments and therefore reduce the patient's survival [13], over-resection can directly affect the patient's quality of life. Preoperative magnetic resonance images (MRIs) and the surgeon's experience with the texture and color of the tissue are the only resources to guide tumor resection. However, the tumor position can shift by up to a couple of centimeters between before and after opening the endocranium. The heterogeneity of high-grade gliomas, the similarity of their appearance under the surgical microscope to the surrounding normal brain tissue, and their diffusely infiltrative behavior increase the difficulty of achieving complete tumor removal while minimizing damage to surrounding normal brain tissue [10]. To resolve this problem, ALA-based fluorescence imaging has been utilized because ALA is specific and sensitive within high-grade gliomas and its fluorescence can be visualized directly using an operating microscope equipped with standard filters. Patients usually take ALA orally 2–3 hours before the start of anesthesia, with a typical administered dose of 20 mg/kg of body weight (B.W.). As shown in Figure 5.3, the fluorescence imaging system is composed of a custom optical adapter, a liquid crystal tunable filter (LCTF), and a high-sensitivity charge-coupled device (CCD) camera. The custom adapter attaches these components (LCTF and camera) to an optical port on a traditional neurosurgical operating microscope, the Zeiss Pentero OM.
While the LCTF performs fast single-band filtering (7 nm) of incoming light in the wavelength range of 620–720 nm, fluorescence images at each single band are taken by the CCD and co-registered with the surgical field of view captured by the Pentero [14]. During the operation, the surgeon switched between the traditional white xenon lamp and violet-blue excitation light (a light source with a wavelength of 400 nm) to achieve the optimal tumor resection. Figure 5.4 shows an example of intraoperative paired fluorescence and white light images during multiple stages of ALA/PpIX fluorescence-guided resection of a glioma: strong PpIX fluorescence is visible at the beginning (a and b), middle (c and d), and near the end (e and f) of surgery, while no fluorescence is visible at the end (g and h). The patient underwent fluorescence-guided resection under a protocol approved by the institutional review board at Dartmouth–Hitchcock Medical Center [15]. One of the major problems of ALA-based fluorescence imaging-guided glioma resection is that the clinical decision on which tumor region to resect relies on the surgeon's subjective judgment of the fluorescence (color). However, in the tumor margin areas, the fluorescence (color) is ambiguous due to its metabolic

Figure 5.3 ALA-based fluorescence imaging system for glioma resection at Dartmouth. (a) System schematics. (b) Photo of the system with optical adapter components. (c) System connected to a Zeiss OPMI neurosurgical microscope. Source: Valdes et al. [14], with permission from Optica Publishing Group.

Figure 5.4 Intraoperative paired fluorescence and white light images during multiple stages of ALA/PpIX fluorescence imaging-guided resection of a glioma, at the beginning (a and b), middle (c and d), near end (e and f), and the end (g and h) of surgery. Source: Valdes et al. [15], with permission from American Association of Neurological Surgeons.

and structural nature in most surgical cases. To solve this problem, research groups at Dartmouth College and the University of Toronto have developed a fluorescence-guided surgery system that can quantitatively detect PpIX concentration. While displaying the fluorescence image of the entire surgical field, the surgeon can select a region of interest and carry out quantitative fluorescence intensity detection. Clinical study outcomes demonstrate that this approach significantly improves the resection accuracy of gliomas [16]. Figure 5.5 shows intraoperative fluorescence images of different types of gliomas and normal brain tissue, as well as their corresponding spectra. The images in the top row are

Figure 5.5 Intraoperative fluorescence images (top) of different types of gliomas and normal brain tissue, as well as their corresponding spectra (bottom). (a) Normal cerebral cortex, (b) low-grade glioma, (c) high-grade glioma, (d) meningioma, and (e) metastasis. Source: Valdés et al. [5], reproduced with permission from American Association of Neurological Surgeons.

the intraoperative fluorescence images, and the arrow indicates the quantitatively detected position. The bottom row displays the corresponding raw and fitted fluorescence spectra. The excitation light is centered at 635 nm, the fluorescence peak is at 710 nm, and the wavelength range of the spectrum is 600–725 nm. The columns from left to right correspond to normal cerebral cortex (a), low-grade glioma (b), high-grade glioma (c), meningioma (d), and metastasis (e). The fluorescence of high-grade gliomas is clearly visible in both the image and the spectrum, while low-grade gliomas, meningiomas, and metastases can only be identified by quantitative detection [5]. This result indicates that quantitative fluorescence probe measurement can significantly improve the completeness of glioma resection compared to resection based only on the surgeon's subjective color judgment.
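The quantitative detection described above amounts to decomposing each measured spectrum into a PpIX component plus background. The following is a minimal linear-unmixing sketch; the synthetic basis spectra, wavelength grid, and function name are illustrative assumptions, not the published fitting procedure of [5].

```python
import numpy as np

def fit_ppix(measured, ppix_basis, background_basis):
    """Estimate relative PpIX contribution by linear spectral unmixing.

    measured         : measured fluorescence spectrum (counts)
    ppix_basis       : normalized PpIX emission basis spectrum
    background_basis : normalized autofluorescence/background basis
    Returns (ppix_weight, background_weight, fitted_spectrum).
    """
    A = np.column_stack([ppix_basis, background_basis])
    # Non-negative weights would be more physical; plain least squares
    # keeps the sketch simple.
    weights, *_ = np.linalg.lstsq(A, measured, rcond=None)
    fitted = A @ weights
    return weights[0], weights[1], fitted

# Synthetic example: a Gaussian "PpIX" peak near 705 nm plus a flat
# background over the 600-725 nm range used in the text.
wl = np.linspace(600, 725, 126)
ppix = np.exp(-0.5 * ((wl - 705) / 12.0) ** 2)
bg = np.ones_like(wl)
measured = 3.0 * ppix + 0.5 * bg
w_ppix, w_bg, fitted = fit_ppix(measured, ppix, bg)
```

With a measured basis spectrum in place of the Gaussian, the recovered PpIX weight would serve as the surgeon's quantitative readout for an ambiguous margin region.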

5.4.2 Open Surgeries for Cancer Resection in Other Organs

In addition to fluorescence imaging-guided glioma resection, clinical studies of imaging-guided cancer surgery have been carried out in other organs. Since most cancers in other organs lie much deeper in tissue than gliomas, ICG has been used in these surgeries for its relatively deep tissue penetration. Successful ICG-based fluorescence imaging-guided open cancer surgeries include those for parapharyngeal space tumors (head and neck cancer) [17], liver cancer [18, 19], breast cancer [20], and ovarian cancer [21]. The most common complications when dissecting a tumor in the narrow parapharyngeal space are dysphagia and carotid artery rupture. ICG fluorescence can indicate the margin of the tumor accurately so that the tumor can be safely removed without damaging the surrounding normal tissue in the narrow parapharyngeal space. In this study, 0.5 mg/kg B.W. of ICG was injected via the cephalic vein. Thirty to sixty minutes after the injection, the hypopharyngeal cancer position was marked and the pharyngeal mucosa was incised trans-orally under the guidance of ICG fluorescence. Following the ICG fluorescence visualization, the submucosal tumor obscured by

Figure 5.6 Pre-surgery MRI image (a), intraoperative RGB (b), and fluorescence (c) images of a patient with paraganglioma. In (c), a: facial nerve, b: accessory nerve, c: internal jugular vein, and d: retractor. Source: Yokoyama et al. [17], Reproduced with permission from Elsevier.

fascia was separated from the carotid artery and lower cranial nerves and removed safely. Figure 5.6 [17] depicts a presurgery MRI image as well as intraoperative RGB and fluorescence images of a patient with a parapharyngeal space tumor. The fluorescence imaging system used in this study was the HyperEye (MIZUHO, Japan). In this system, a white light source and an LED with a center wavelength of 760–780 nm are used as the light sources, and a CCD camera is used for imaging. By selecting different optical filters, imaging can be switched between white light and ICG fluorescence modes. The processed ICG fluorescence images are displayed as an overlay on the RGB white light images to present the surgical view with the identification of lymph nodes, cancer, and blood vessels. Fluorescence imaging-guided liver cancer resection is similar to the paraganglioma resection shown above; however, in liver cancer resection, 0.5 mg/kg of ICG was injected intravenously three days [18] or 1–7 days [19] before the surgery. ICG fluorescence imaging has also been utilized for visualizing metastases during ovarian cancer resection [21]. In contrast to the two imaging systems used for glioma or paraganglioma resections above, the imaging system used in this study is a fluorescence-assisted resection and exploration (Mini-FLARE) image-guided surgery system, which can capture and display color video and NIR fluorescence images simultaneously. The light sources of this system are a "white" light source with a wavelength range of 400–650 nm and an NIR light source at a wavelength of 760 nm and a power of 7.7 mW/cm2. The imaging head was attached to a flexible gooseneck arm to obtain a stable view of the surgical field, even at extreme angles. After opening of the abdominal cavity, 20 mg of ICG was administered intravenously as a single bolus by the anesthesiologist. The resection was carried out within 37–141 minutes post-administration of ICG.
After resection of the primary tumor, uterus, and ovaries, the Mini-FLARE was used to identify residual tumor and metastases outside the pelvis through ICG signals. However, the high false-positive rate of 62% indicated that ICG may need to be replaced with a tumor-specific intraoperative agent [21].
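Overlay displays like the one used by the HyperEye system blend a pseudocolored fluorescence frame onto the white-light video. The following is a minimal sketch of such alpha blending; the threshold, green colormap, and function name are assumptions for illustration, not the vendor's actual processing.

```python
import numpy as np

def overlay_fluorescence(rgb, nir, threshold=0.2, alpha=0.6):
    """Blend a pseudocolored NIR fluorescence frame onto an RGB frame.

    rgb : (H, W, 3) float array in [0, 1], white-light image
    nir : (H, W) float array in [0, 1], fluorescence intensity
    Pixels whose fluorescence exceeds `threshold` are tinted green.
    """
    out = rgb.copy()
    mask = nir > threshold
    # Pseudocolor: map fluorescence intensity to the green channel only.
    green = np.zeros_like(rgb)
    green[..., 1] = nir
    out[mask] = (1 - alpha) * rgb[mask] + alpha * green[mask]
    return out

# Tiny example: a 2x2 gray frame with one strongly fluorescent pixel.
rgb = np.full((2, 2, 3), 0.5)
nir = np.array([[0.0, 0.9], [0.1, 0.0]])
blended = overlay_fluorescence(rgb, nir)
```

Running this per video frame reproduces the basic effect described in the text: fluorescent lymph nodes and vessels appear as a colored highlight within an otherwise ordinary white-light surgical view.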

Diagnosing and removing metastasized axillary lymph nodes is critical in early breast cancer surgery, as it is directly related to the patient's cancer recurrence rate and overall survival [22]. Different from the above cancer surgeries, fluorescence imaging-guided breast cancer surgery focuses on the identification and resection of the SLN, the first site of lymphatic metastasis in breast cancer. By accurately locating the SLN within a series of lymph nodes, the cancer metastasis can be contained while avoiding unnecessary lymph node removal and the associated loss of quality of life. Conventional clinical practice for SLN detection uses a radioisotope (Tc-99m) in combination with sulfan blue (SB). The disadvantages of this method include, but are not limited to, exposure of the patient and environment to radiation; the high cost of gamma probes; the variable time interval between injection and operation; and the extensive training the surgeon needs to capture the SLN based on the small color difference between the SLN and normal lymph nodes. Compared with this traditional method, ICG-based fluorescence imaging-guided SLN detection in breast cancer patients has achieved SLN identification rates of 94–100% [20]. As an example, Figure 5.7 shows intraoperative ICG fluorescence images of subcutaneous lymphatic streams (a) and a basin after a skin incision (b) in a breast cancer patient [19, 20]. In this case, 1 ml of 0.5% ICG was injected into the periareolar area after the surgical area was sterilized. The fluorescent lymphatic stream was observed on the skin surface of the breast by a hand-held imaging system (Figure 5.7a). Based on the end point of the subcutaneous lymph pathway (the end of the fluorescent line), the surgeon can easily find the position where the lymphatic channels drain from the subcutaneous tissue into the axillary space.
A skin incision of about 2 cm was made in this area, the subcutaneous connective tissue was dissected, and the fluorescent basin areas, including SLNs, were shown on the monitor (Figure 5.7b). The fluorescence imaging system used in this case is the PDE-2 system (Hamamatsu Photonics, Japan). It is very similar to the HyperEye; however, the light source is an LED with a center wavelength of 760 nm, and the detection subsystem is a CCD with a high-pass optical filter (cutoff wavelength at 820 nm).

Figure 5.7 Intraoperative ICG fluorescence images of a breast cancer patient. (a) Subcutaneous lymphatic streams. (b) A basin after a skin incision. Source: Sugie et al. [20], with permission from MDPI AG.


5.4.3 Laparoscopic/Endoscopic Surgeries

Minimally invasive laparoscopic and endoscopic surgeries have played increasingly important roles in the management of patients with cancer, as they can significantly reduce the area of trauma and thereby greatly reduce the complications, cost, and recovery time of surgery. In contrast to these advantages, the drawbacks of these minimally invasive surgeries are that the surgical view obtained from a laparoscope or endoscope is two-dimensional, which challenges eye–hand coordination, and that tactile feedback is lacking [23]. Compared to other modern optical imaging modalities, such as optical coherence tomography, diffuse optical spectroscopy, hyperspectral imaging, and optoacoustic imaging, NIR fluorescence imaging is the most promising technology for providing enhanced anatomic identification and physiologic tissue characterization intraoperatively [23]. The advantage of NIR fluorescence imaging is its capability of visualizing a wide variety of anatomical structures, such as extra-hepatic bile ducts, ureters, arteries, and SLNs, covered by adipose tissue to a depth of up to 1 cm [24]. Table 5.2 lists the types of fluorescence imaging-guided minimally invasive surgeries with their targeted tissue types, contrast agent probes, and probe administration methods, times, and doses. Most of these surgeries use ICG as the contrast agent probe due to its safety and deep tissue penetration.
Although the type of surgery and the targeted tissue differ, the basic procedure is the same: (i) a fluorescence probe is injected through a vein or directly under the mucosal layer; (ii) an excitation light source is used to generate fluorescence in the surgical field; (iii) a camera system equipped with a filter captures real-time fluorescence images of the targeted tissues; and (iv) since the nontargeted tissues produce lower fluorescence than the targeted tissue, the surgeon can relatively easily complete the surgery under fluorescence guidance.

5.4.3.1 Cholecystectomy

The imaging-guidance concept of cholecystectomy is based on the fact that ICG is excreted exclusively into the bile, so it can outline the biliary anatomy even before dissection of Calot's triangle [25]. 2.5 mg of ICG is injected intravenously into the patient 30 minutes before the patient enters the operating room. The minimally invasive surgical system consists of a laparoscope, a xenon light source, and a CCD camera. Light with wavelengths over 800 nm from the xenon light source is cut by a low-pass optical filter before it reaches the surgical field. An 810-nm high-pass optical filter is installed in front of the CCD camera to filter out background and excitation light with wavelengths lower than 810 nm. Figure 5.8 shows white light (a) and fluorescence (b) images before dissection of Calot's triangle, as well as the corresponding preoperative magnetic resonance cholangiopancreatography (MRCP) image (c). In (b) and (c), the arrowhead and arrow represent the cystic duct and an accessory hepatic duct, respectively [25]. The cystic duct and an accessory hepatic duct, corresponding to the draining of the accessory hepatic duct on MRCP, can be seen clearly in the fluorescence image (b), while both ducts are invisible in the white light image.

Table 5.2 Fluorescence imaging-guided minimally invasive surgeries with their targeted tissue types and contrast agent probes, as well as probe administration methods, times, and doses.

Surgery | Enhanced tissue | Probe | Probe injection | Guide through | Dose/injection time | Refs.
Cholecystectomy | Cystic duct, common hepatic duct junction, accessory bile ducts | ICG, MB | Intravenous (IV) | Enhanced cystic duct, bile duct, hepatic ducts | 2.5 mg/30 min before entering surgical room | [25, 26]
Esophagectomy | Lymphatic drainage | ICG | Esophageal submucosa injection | Lymphatic drainage pattern | — | [27]
Gastrectomy | Sentinel nodes (SLN) | ICG | Submucosa injection | Sentinel node mapping | 0.5 ml of 1.25 mg/ml at each of the four quadrants around tumor/immediate | [28]
Gastrectomy | Blood vessel | ICG | IV | The infrapyloric artery (IPA) type | 7.5 mg/immediate | [29]
Adrenalectomy | Adrenal vein | ICG | IV | Border of tumor | 5 mg each, up to 3 doses/immediate | [30, 31]
Brain aneurysm | Arterial, capillary, and venous | ICG | Peripheral vein injection | Arteries behind the internal carotid artery | 25 mg/immediate | [32]
Endonasal surgery | Blood vessel | ICG | Peripheral vein injection | Internal carotid artery | 12.5 mg/immediate | [33]
Early-stage lung cancer | Lymphatic mapping | ICG | Needle injection to the deep of the lesion | Tumor & SLN localization | 1.25–2.5 mg/immediate | [34]
Pulmonary ground-glass opacity resection | Pulmonary vessels | ICG | Inhaled ICG with 4 ml/min oxygen flow | Filling defect of vessels | 0.25 mg/kg/85 min prior to surgery | [35]
Nephrectomy | Blood vessels | ICG | IV | Tumor, renal vasculature identification | 0.75–7.5 mg/immediate | [36]
Head and neck tumor | Blood vessels | ICG | IV | Uptake curves in tumors | 8.3 mg/immediate | [37]
Prostatectomy | Sentinel nodes | ICG + Tc-99m | Peripheral zone of the prostate injection | Sentinel node mapping | 0.25 mg | [38]
Colonic tattooing | Serosal surface stain | ICG | Submucosal layer of the colon injection | Localization of colorectal lesions | 2.5–3.75 mg/1–2 days before surgery | [39]

Source: Shudong Jiang.


Figure 5.8 Images of the surgical field before dissection of Calot’s triangle. (a) Image of white light; (b) ICG fluorescence image. The cystic duct and an accessory hepatic duct are shown clearly at the arrowhead and arrow positions, respectively; (c) Corresponding preoperative MRCP, which shows an accessory hepatic duct draining the left paramedian sector. Source: Ishizawa et al. [25], Reproduced with permission from Oxford University Press.

5.4.3.2 Gastrectomy

Gastrectomy with prophylactic lymphadenectomy has been the standard procedure for patients at risk of lymph node metastasis because the metastatic status of the lymph nodes is difficult to predict accurately preoperatively and to assess intraoperatively [28]. As described in Section 5.4.2 for SLN detection in open breast cancer resection, ICG-based fluorescence imaging has been utilized in minimally invasive gastrectomy. In contrast to breast cancer resection, the timing of administering ICG is at the surgeon's discretion, either 1–3 days before surgery or intraoperatively. Endoscopically, 0.5 ml of 0.5% ICG solution (1.25 mg/ml, Diagnogreen 0.5%; Daiichi Pharmaceutical, Tokyo, Japan) was injected into the submucosa in four quadrants around the tumor. The fluorescence imaging system used in this study is the PDE-2 system (Hamamatsu Photonics, Japan). As shown in Figure 5.9, the cancer and the lymph nodes with metastasis that are invisible in the traditional laparoscopic view (a) are clearly identified in the ICG fluorescence view (b).

Figure 5.9 Comparison between traditional (a) and ICG fluorescence (b) laparoscopic views. ICG fluorescence makes it easy to distinguish the primary tumor (arrow) and fluorescent nodes (arrowhead) from the surrounding tissue (b), compared to what can be observed with the naked eye in the traditional laparoscopic view (a). Source: Tajima et al. [28], reproduced with permission from Springer Nature.

5.4.3.3 Pulmonary Ground-Glass Opacity in Thoracoscopic Wedge Resection

Huang et al. at Hainan General Hospital, China, recently reported an interesting case that utilized ICG for visualizing a pulmonary ground-glass opacity/nodule (GGO/GGN) during thoracoscopic surgery [35]. In contrast to all other applications, which administer fluorescence probes through intravenous or site injection and guide the resection through enhanced (positive) contrast against the normal surrounding tissue, in this case the patient inhaled ICG at a dose of 0.25 mg/kg using a 4 ml/min oxygen flow rate for 16 minutes. Because a GGO appears as an area of hazy opacity that does not hide the underlying pulmonary vessels or bronchial structures, the inhalation administration utilizes negative imaging, in which the lesion appears as a fluorescence defect.

5.4.3.4 Head and Neck

In contrast to most fluorescence imaging-guided open and minimally invasive surgeries, which use steady-state fluorescence images to guide cancer resection and/or avoid normal tissue injury, this fluorescence imaging-guided endoscopic procedure is based on the dynamic ICG uptake captured by fluorescence video imaging [37]. In this ICG-based fluorescence imaging-guided endoscopy, video-rate (12 frames/second) dynamic fluorescence imaging was performed with an NIR-optimized HD camera system (IMAGE1 S NIR/ICG system; Karl Storz, Tuttlingen, Germany) directly after 8.3 mg of ICG was injected intravenously. Each video had a duration of at least 60 seconds to record the initial ICG uptake phase. By evaluating the ICG video, premalignant or malignant tumors can be distinguished from benign lesions by (i) their fast uptake of ICG within 1–2 seconds after ICG injection and (ii) their constant ICG labeling after the uptake phase. Figure 5.10 shows an example of the ICG images of an ICG-positive laryngeal carcinoma (malignant) on the left anterior vocal cord at different time points, as well as its white light image.
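The two criteria above, fast uptake and a stable plateau, can be turned into a simple rule on a lesion's time–intensity curve. The sketch below is a hedged illustration: the 50% uptake fraction, the plateau-stability threshold, and the function name are assumptions, not the published criteria of [37], which were assessed qualitatively by the surgeons.

```python
import numpy as np

def classify_uptake(times, intensities, fast_window=2.0,
                    uptake_frac=0.5, plateau_cv=0.1):
    """Flag a lesion as ICG-positive from its fluorescence uptake curve.

    A lesion is flagged when (i) its signal reaches `uptake_frac` of the
    curve maximum within `fast_window` seconds of injection and (ii) the
    signal after the uptake phase stays roughly constant (coefficient of
    variation below `plateau_cv`).
    """
    times = np.asarray(times, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    peak = intensities.max()
    # First sample at which the signal crosses the uptake fraction.
    crossed = np.nonzero(intensities >= uptake_frac * peak)[0]
    if len(crossed) == 0:
        return False
    t_uptake = times[crossed[0]]
    plateau = intensities[crossed[0]:]
    cv = plateau.std() / plateau.mean()
    return bool(t_uptake <= fast_window and cv < plateau_cv)

# Synthetic 60-second curves sampled at the 12 fps video rate's scale.
t = np.arange(0, 60, 0.5)
fast_stable = np.where(t < 1.5, t / 1.5, 1.0)  # rapid uptake, stable label
slow = np.where(t < 20, t / 20, 1.0)           # slow uptake (benign-like)
```

Applied per lesion region of the video, such a rule would mirror the qualitative reading described in the text: `fast_stable` is flagged, `slow` is not.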

5.4.4 Organ Transplant Surgery

Similar to liver cancer resections, one of the major complications of hepatobiliary transplantation is the incidence of bile duct and hepatic artery injuries due to the

Figure 5.10 ICG images of an ICG-positive laryngeal carcinoma (malignant) on the left anterior vocal cord at 1, 5, 13, and 28 seconds after ICG injection, together with the white light image (a). The lesion becomes clearly ICG-positive 5 seconds after the intravenous ICG bolus injection. After 28 seconds, the retention of ICG in the lesion is complete. Source: Schmidt et al. [37], reproduced with permission from John Wiley & Sons.


Figure 5.11 Intraoperative fluorescence images of the recipient of a living liver transplantation, taken 10 seconds after intravenous injection of 1 ml of ICG, together with a presurgery CT image of the donor. (a) Hepatic artery; (b) portal vein (PV); (c) preoperative analysis of the donor hepatic vein by three-dimensional (3D) CT; (d) liver parenchyma. RHV, right hepatic vein; MHV, middle hepatic vein; LHV, left hepatic vein. Source: Mitsuhashi et al. [40], reproduced with permission from Springer Nature.

difficulties in evaluating the local anatomy. In addition, evaluation of blood flow in the hepatic artery and portal vein and of liver function after transplantation also plays an important role in a successful transplant. The results of a research group in Japan have shown that ICG-based fluorescence imaging can be utilized in hepatobiliary transplantation to visualize the hepatic artery, portal vein, and bile ducts [40]. The fluorescence imaging system is the PDE-2 system (Hamamatsu Photonics, Japan). Figure 5.11 shows intraoperative fluorescence images of the recipient of a living liver transplantation taken 10 seconds after intravenous injection of 1 ml of ICG (Figure 5.11a,b,d), as well as a presurgery CT image of the donor (Figure 5.11c). As shown in Figure 5.11a,b, excellent blood flow in the hepatic artery and portal vein, without kinking or stenosis at the anastomosis, was confirmed. As expected before the surgery, the graft contained an irregularly shaped region lacking blood perfusion (fluorescence) (Figure 5.11d), since the graft did not have a middle hepatic vein and thus the venous return from the left side of the anterior segment had been impaired (Figure 5.11c).

5.4.5 Plastic Surgery

Microvascular free tissue transfer is commonly used in the plastic surgery of head and neck, breast, jejunal, and skin reconstructions. The success of the reconstructive

Figure 5.12 Commercialized fluorescence imaging systems for plastic surgery. (a) SPY PHI. (b) PDE-NEO II. Source: Shudong Jiang.

surgery is largely dependent on adequate perfusion and drainage of the reconstructive flap. In addition to surgeons continuously analyzing skin paddle color, capillary refill time, and bleeding from the flap edges to assess blood flow intraoperatively, various technologies have been used to provide an objective assessment of flap perfusion [41, 42]. Among these technologies, fluorescence imaging is now well standardized due to its ability to assess tissue vascularization and lymphatic drainage intraoperatively in a rapid, reliable, safe, and user-friendly manner [42]. MB and ICG are the two fluorescence probes used to assess the flap's perfusion and drainage. Since ICG can assess much deeper tissue than MB, recent clinical practice has mostly focused on using ICG as the probe. Several commercial fluorescence imaging systems have been developed and have obtained FDA approval for this purpose. Figure 5.12 shows the two most popular commercial imaging systems: SPY-PHI (Stryker, US) and PDE-neo II (Hamamatsu Photonics, Japan). In both imaging systems, LED light sources with a wavelength of 805 nm (SPY-PHI) or 760 nm (PDE-neo II) are used for excitation of ICG, and the fluorescence light in the wavelength range from 820 to 900 nm is detected by a CCD camera. Table 5.3 lists the major plastic surgeries that use ICG-based fluorescence imaging to evaluate flaps.

5.4.6 Orthopedic Surgery

Infection is a common and potentially catastrophic complication of open fractures, resulting in prolonged morbidity, loss of function, and potential loss of limb [48, 49]. In an effort to minimize infection and healing complications, management of open fractures is therefore based on aggressive and thorough debridement of all poorly perfused or devitalized bone, since vascular perfusion plays a critical role in the health of bone by delivering necessary oxygen, nutrients, antibiotics, and endogenous immune cells [50, 51]. Many believe that the surgeon's propensity to excise as little as possible is the primary source of error in the treatment of open fractures [51].


Table 5.3 Major plastic surgeries that use ICG-based fluorescence imaging to evaluate flaps.

Reconstruction site | Donor site | Probe dose | Admission method
Breast [41, 43] | Lower abdomen | 5 mg/subject | IV
Head & neck [44, 45] | Latissimus, gracilis, vastus lateralis, anterior lateral thigh, fibular, the iliac crest | 7.5 mg/subject; 5 mg/subject | Injected through parent vessel; IV
Hypopharyngeal or cervical esophageal [46] | Jejunal | 5 mg/subject | IV
Skin [47] | Skin | 0.5 mg/kg (B.W.) | IV

Source: Shudong Jiang.

Animal and human bone perfusion have been imaged using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) [52], computed tomography (CT) angiography [53], and combined positron emission tomography (PET)/CT [50]. However, these imaging modalities are impractical in the orthopedic surgical population because they cannot be translated to the operating room, have problems with image resolution, particularly in the setting of metallic implants, and have lengthy time requirements for data acquisition. ICG-based dynamic contrast-enhanced fluorescence imaging (DCE-FI) has been developed at Dartmouth to provide a quantifiable method that can objectively assess bone perfusion intraoperatively and reduce the risk of recurrent infection and treatment failure due to substantial practice variation [54–56]. Figure 5.13 shows the imaging setup for a patient during surgery for an open tibia fracture with segmental bone loss. The SPY Elite system (Stryker, USA) was positioned 300 mm from the fractured tibia, and the field of view is about 260 mm along

Figure 5.13 Intraoperative bone blood perfusion imaging setup. Source: Shudong Jiang.

the tibia. ICG images were recorded for 4.5 minutes at a frame interval of 0.27 seconds. Following 20 seconds of pre-injection imaging, 0.1 mg/kg of ICG was administered intravenously to the patient. For display purposes, a white-light image was taken after the DCE-FI acquisition. By applying a bone-specific kinetic model [54, 55] to the fluorescence image series, perfusion-related metrics, such as the peak intensity, total bone perfusion (TBP), and "late-to-total perfusion fraction" (LPF), were calculated to assess the damage level of the bone at different regions of interest (ROIs). Figure 5.14 summarizes the ICG-based DCE-FI data acquired from this patient. In this figure, ROI #1 was at the fracture, ROI #2 was closest to the fracture/segmental bone defect, and ROIs #3 and #4 are more proximal to the fracture/segmental bone defect. The more proximal ROIs (#3 and #4 in Figure 5.14a) were, as expected, better perfused than the ROIs in or close to the fracture site, as demonstrated by the brighter regions in the fluorescence intensity images at 75 and 215 seconds (Figure 5.14b,c) and the faster rises in the temporal dynamic curves (Figure 5.14g). This trend is further reflected in the LPF (Figure 5.14d) and TBP (Figure 5.14e) images, where more proximal ROIs demonstrate a brighter signal than more distal ROIs. Figure 5.14f demonstrates the use of machine learning classification to establish a boundary between healthy and damaged bone based on the TBP and LPF (%) values, which were the variables that performed best in a porcine preclinical model.
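The perfusion metrics named above can be approximated from each ROI's time–intensity curve. The sketch below uses simplified surrogates, taking TBP as the area under the baseline-subtracted curve and LPF as the late fraction of that area; the function names, the 120-second late-phase cutoff, and the synthetic curves are assumptions for illustration, not the bone-specific kinetic model of [54, 55].

```python
import numpy as np

def _trapezoid(y, x):
    """Trapezoidal integration (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def perfusion_metrics(times, intensities, late_start=120.0):
    """Simple perfusion surrogates from a bone DCE-FI time-intensity curve.

    Returns the peak intensity, a TBP surrogate (area under the
    baseline-subtracted curve), and an LPF surrogate (fraction of that
    area accumulated after `late_start` seconds).
    """
    t = np.asarray(times, dtype=float)
    y = np.asarray(intensities, dtype=float)
    y = y - y[0]                    # subtract pre-injection baseline
    peak = float(y.max())
    tbp = _trapezoid(y, t)          # total area under the curve
    late = t >= late_start
    lpf = _trapezoid(y[late], t[late]) / tbp
    return peak, tbp, lpf

# Synthetic 4.5-minute curves: a well-perfused region rises quickly,
# while a poorly perfused region accumulates ICG slowly, shifting more
# of its area to late times (higher LPF, lower peak and TBP).
t = np.linspace(0.0, 270.0, 271)
well = 1.0 - np.exp(-t / 20.0)
poor = 1.0 - np.exp(-t / 150.0)
peak_w, tbp_w, lpf_w = perfusion_metrics(t, well)
peak_p, tbp_p, lpf_p = perfusion_metrics(t, poor)
```

Consistent with the trend reported for the patient data, the slowly rising curve yields a higher late-to-total fraction and lower peak and total perfusion than the quickly rising one.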

Figure 5.14 Images acquired intraoperatively during surgical treatment of an open tibia fracture with segmental bone loss. (a) White-light image; circles #1–#4 mark the ROIs used for the imaging data analysis shown in (g); (b, c) fluorescence images acquired 75 and 215 seconds after ICG injection, respectively; (d, e) LPF (d) and TBP (e) images; (g) temporal dynamic curves for the ROIs shown in (a). Source: Shudong Jiang.


5 Fluorescence Imaging-Guided Surgery

5.4.7 Parathyroid Gland Identification

The most common complication after thyroidectomy is hypoparathyroidism, which results when the PGs are damaged and/or removed during thyroidectomy. This occurs because PGs are small and hard to distinguish from the surrounding tissue [57, 58]. Unlike most fluorescence imaging-guided surgeries, which use exogenously administered fluorescent contrast agents, the autofluorescence of the PGs themselves has been imaged for their intraoperative identification. As shown in Figure 5.15, PGs exhibit a unique autofluorescence, with an intensity 2–11 times that of the surrounding tissue, in the 820–830 nm range when excited at wavelengths of 750–785 nm [57]. Currently, the FDA has approved two imaging systems for intraoperative PG identification: Fluobeam® (Fluoptics©, Grenoble, France) and the Parathyroid Detection PTeye System (AIBiomed Inc., Santa Barbara, CA, USA). The excitation source of Fluobeam is a 750 nm laser delivering 5 mW/cm2 at a 20 cm working distance. The optical signal at wavelengths above 800 nm is imaged to provide a real-time grayscale video image of the detected and enhanced PGs. Figure 5.16

Figure 5.15 Comparison of autofluorescence intensity (a.u.) versus wavelength (nm) for different tissues (parathyroid, thyroid, fat, muscle, trachea) in the surgical field of thyroidectomy. Source: Adapted from [57] Figure 1 with permission of Springer Nature.

Figure 5.16 Intraoperative autofluorescence images of the PGs (arrows). (a) Two PGs after right lobectomy; (b) two PGs after left thyroid lobectomy. Source: Demarchi et al. [57]. Reproduced with permission from MDPI.


shows the intraoperative images of the PGs. While Fluobeam provides images of the surgical field of view with the PGs enhanced, the PTeye System is a sterile probe that detects the optical properties of the tissue at its tip and provides an audio and visual warning when the probe touches a PG. The limitations of this technique are that thyroid and parathyroid tissues have similar levels of autofluorescence, making PGs difficult to distinguish from thyroid tissue, and that false positives occur because the autofluorescence of PGs overlaps with that of brown fat, colloid nodules, or metastatic lymph nodes [57].
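The 2–11× intensity ratio reported in [57] suggests a simple way to think about PG detection: normalize the NIR frame to the surrounding-tissue level and threshold at the low end of the reported ratio. The sketch below is a toy illustration under that assumption only; the function name, threshold, and synthetic image are hypothetical, and clinical systems such as Fluobeam and PTeye apply their own proprietary processing.

```python
import numpy as np

def pg_candidate_mask(nir_image, background_mask, ratio_threshold=2.0):
    """Flag pixels whose autofluorescence exceeds `ratio_threshold` times the
    mean surrounding-tissue level (the low end of the reported 2-11x
    PG-to-background ratio [57]). Purely illustrative."""
    background = float(nir_image[background_mask].mean())
    return nir_image / background >= ratio_threshold

# Toy 4x4 NIR frame: ~10 a.u. background with one bright 2x2 "gland" region
frame = np.full((4, 4), 10.0)
frame[1:3, 1:3] = 45.0
background = frame < 20.0          # dim pixels taken as surrounding tissue
mask = pg_candidate_mask(frame, background)
```

On this toy frame, only the four bright pixels exceed twice the background mean. The false-positive problem described above corresponds exactly to bright non-PG tissue (brown fat, colloid nodules) passing the same ratio test.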

5.5 Limitations, Challenges, and Possible Solutions

As shown above, fluorescence imaging-guided surgeries have great potential to become a clinical standard for identifying anatomical structures, assessing tissue perfusion, and enhancing cancer treatment by decreasing or eliminating positive tumor margins while avoiding damage to the healthy surrounding tissue. However, to translate this novel imaging tool into regular clinical practice, several limitations need to be overcome. One limitation of fluorescence imaging-guided surgery is that surgeons need to switch views frequently between the monitor and the surgical site, which interrupts the surgical workflow. To overcome this limitation, a vision goggle was developed to enable fluorescence-guided visualization directly at the surgical site [59]. This head-mounted, all-plastic imaging system consists of a dual-mode imaging system, a see-through goggle, and autofocusing and auto-contrast-tuning modules. A surgeon can view the fluorescence images of the surgical site directly through the goggle. Figure 5.17 shows a photo of the goggle (Figure 5.17a) and the surgical cavity of a patient right after a lumpectomy (Figure 5.17b). The high-fluorescence areas at the edges of the cavity corresponded to the later pathologically confirmed positive margins.

Figure 5.17 Photo of the goggle and the surgical cavity seen by the surgeon through it. This patient had a lumpectomy. (a) A photo of the goggle, with its image projector, see-through display, and camera labeled; (b) fluorescence image overlaid on a white-light image of the surgical cavity. The high-fluorescence areas at the edges of the cavity corresponded to later pathologically confirmed positive margins. Source: Mondal et al. [59]. Reproduced with permission from Springer Nature.

The second limitation is the timing of probe injection. For example, for the fluorescent probe in biliary surgery, an injection 24 hours before surgery is much more effective than an injection two hours before surgery; however, injection 24 hours in advance is practically infeasible for outpatient cases. In addition, the development of new fluorescent probes with higher binding capacity, extinction coefficient, and quantum yield, shorter onset time, and longer imaging time would be a huge advantage for fluorescence-mediated minimally invasive surgery. To improve binding and increase the target cells' sensitivity to light, Yaseen et al. developed a nanoencapsulation system for ICG to stabilize it and direct its distribution. This simple and cost-effective design also protects ICG from rapid degradation via encapsulation and produces greater uptake by the lungs and spleen [60]. Yu et al. coated ICG nanocapsules with an anti-epidermal growth factor receptor antibody and achieved ICG-targeted phototherapy-induced apoptosis in 90–100% of cervical cancer and head and neck squamous cell cancer cell lines [61]. Though these new probes have not yet been approved for clinical use, these in vitro experiments demonstrate their potential for future use in fluorescence imaging-guided surgeries.

One of the major challenges of fluorescence imaging-guided surgery is the limited tissue penetration depth, which depends on the sensitivity of the fluorescence imaging systems and probes. While existing systems can image through up to 10 mm of adipose tissue, the adipose tissue covering the surgical site is typically thicker than this. To improve imaging system sensitivity, a bioinspired imager has been created [62]. Inspired by the compound eye of the Morpho butterfly, this single-chip multispectral imager achieves 1000× higher sensitivity and 7× better spatial co-registration accuracy than clinical imaging systems in current use [62].
Existing analysis tools can produce objective, quantitative images of relative and absolute fluorescence intensity and, through kinetic-model analysis of dynamic fluorescence imaging, of perfusion-related properties. The number of different quantitative measurements provided by different analysis programs has created controversy over which parameter is accurate and reliable for a particular clinical application [28]. Thus, the future of this technology will rely on standardization [63] and the continued refinement of analysis tools to provide accurate on-screen images of the clinically relevant parameters [55, 64].

References

1 Samkoe, K.S., Bates, B.D., Elliott, J.T. et al. (2018). Application of fluorescence-guided surgery to subsurface cancers requiring wide local excision: literature review and novel developments toward indirect visualization. Cancer Control 25 (1): 1073274817752332.
2 Pogue, B.W., Paulsen, K.D., Samkoe, K.S. et al. (2016). Vision 20/20: molecular-guided surgical oncology based upon tumor metabolism or immunologic phenotype: technological pathways for point of care imaging and intervention. Med. Phys. 43 (6): 3143–3156.


3 Moore, G.E., Peyton, W.T., Hunter, S.W., and French, L. (1948). The clinical use of sodium fluorescein and radioactive diiodofluorescein in the localization of tumors of the central nervous system. Minn. Med. 31 (10): 1073–1076.
4 Stummer, W., Pichlmeier, U., Meinel, T. et al. (2006). Fluorescence-guided surgery with 5-aminolevulinic acid for resection of malignant glioma: a randomised controlled multicentre phase III trial. Lancet Oncol. 7 (5): 392–401.
5 Valdés, P.A., Leblond, F., Kim, A. et al. (2011). Quantitative fluorescence in intracranial tumor: implications for ALA-induced PpIX as an intraoperative biomarker. J. Neurosurg. 115 (1): 11–17.
6 Zhang, Z., He, K., Chi, C. et al. (2022). Intraoperative fluorescence molecular imaging accelerates the coming of precision surgery in China. Eur. J. Nucl. Med. Mol. Imaging.
7 Dip, F.D., Asbun, D., Rosales-Velderrain, A. et al. (2014). Cost analysis and effectiveness comparing the routine use of intraoperative fluorescent cholangiography with fluoroscopic cholangiogram in patients undergoing laparoscopic cholecystectomy. Surg. Endosc. 28: 1838–1843.
8 Vahrmeijer, A.L., Hutteman, M., van der Vorst, J.R. et al. (2013). Image-guided cancer surgery using near-infrared fluorescence. Nat. Rev. Clin. Oncol. 10 (9): 507–518.
9 van Beurden, F., van Willigen, D.M., Vojnovic, B. et al. (2020). Multi-wavelength fluorescence in image-guided surgery, clinical feasibility and future perspectives. Mol. Imag. 19: 1536012120962333.
10 Li, Y., Rey-Dios, R., Roberts, D.W. et al. (2014). Intraoperative fluorescence-guided resection of high-grade gliomas: a comparison of the present techniques and evolution of future strategies. World Neurosurg. 82 (1–2): 175–185.
11 Reinhart, M.B., Huntington, C.R., Blair, L.J. et al. (2016). Indocyanine green: historical context, current applications, and future considerations. Surg. Innov. 23 (2): 166–175.
12 Nagaya, T., Nakamura, Y.A., Choyke, P.L., and Kobayashi, H. (2017). Fluorescence-guided surgery. Front. Oncol. 7: 314.
13 Stummer, W. and Kamp, M.A. (2009). The importance of surgical resection in malignant glioma. Curr. Opin. Neurol. 22 (6): 645–649.
14 Valdes, P.A., Jacobs, V.L., Wilson, B.C. et al. (2013). System and methods for wide-field quantitative fluorescence imaging during neurosurgery. Opt. Lett. 38 (15): 2786–2788.
15 Valdes, P.A., Roberts, D.W., Lu, F.-K., and Golby, A. (2016). Optical technologies for intraoperative neurosurgical guidance. Neurosurg. Focus 40 (3): E8.
16 Valdes, P.A., Bekelis, K., Harris, B.T. et al. (2014). 5-Aminolevulinic acid-induced protoporphyrin IX fluorescence in meningioma: qualitative and quantitative measurements in vivo. Neurosurgery 10 (1): 74–82.
17 Yokoyama, J., Ooba, S., Fujimaki, M. et al. (2014). Impact of indocyanine green fluorescent image-guided surgery for parapharyngeal space tumours. J. Craniomaxillofac. Surg. 42 (6): 835–838.


18 Kawaguchi, Y., Ishizawa, T., Masuda, K. et al. (2011). Hepatobiliary surgery guided by a novel fluorescent imaging technique for visualizing hepatic arteries, bile ducts, and liver cancers on color images. J. Am. Coll. Surg. 212 (6): e33–e39.
19 Hu, Z., Fang, C., Li, B. et al. (2020). First-in-human liver-tumour surgery guided by multispectral fluorescence imaging in the visible and near-infrared-I/II windows. Nat. Biomed. Eng. 4 (3): 259–271.
20 Sugie, T., Kassim, A., Takeuchi, M. et al. (2010). A novel method for sentinel lymph node biopsy by indocyanine green fluorescence technique in breast cancer. Cancers 2: 713–720.
21 Tummers, Q.R., Hoogstins, C.E., Peters, A.A. et al. (2015). The value of intraoperative near-infrared fluorescence imaging based on enhanced permeability and retention of indocyanine green: feasibility and false-positives in ovarian cancer. PloS One 10 (6): e0129766.
22 Hirano, A., Kamimura, M., Ogura, K. et al. (2012). A comparison of indocyanine green fluorescence imaging plus blue dye and blue dye alone for sentinel node navigation surgery in breast cancer patients. Ann. Surg. Oncol. 19: 4112–4116.
23 Schols, R.M., Bouvy, N.D., van Dam, R.M., and Stassen, L.P.S. (2013). Advanced intraoperative imaging methods for laparoscopic anatomy navigation: an overview. Surg. Endosc. 27: 1851–1859.
24 Schols, R.M., Connell, N.J., and Stassen, L.P.S. (2015). Near-infrared fluorescence imaging for real-time intraoperative anatomical guidance in minimally invasive surgery: a systematic review of the literature. World J. Surg. 39: 1069–1079.
25 Ishizawa, T., Bandai, Y., Ijichi, M. et al. (2010). Fluorescent cholangiography illuminating the biliary tree during laparoscopic cholecystectomy. Br. J. Surg. 97: 1369–1377.
26 Armstrong, N., Hou, W., and Tang, Q. (2017). Biological and historical overview of Zika virus. World J. Virol. 6 (1): 1–8.
27 Schlottmann, F., Barbetta, A., Mungo, B. et al. (2017). Identification of the lymphatic drainage pattern of esophageal cancer with near-infrared fluorescent imaging. J. Laparoendosc. Adv. Surg. Tech. A 27 (3): 268–271.
28 Tajima, Y., Murakami, M., Yamazaki, K. et al. (2010). Sentinel node mapping guided by indocyanine green fluorescence imaging during laparoscopic surgery in gastric cancer. Ann. Surg. Oncol. 17: 1787–1793.
29 Kim, M., Son, S.Y., Cui, L.H. et al. (2017). Real-time vessel navigation using indocyanine green fluorescence during robotic or laparoscopic gastrectomy for gastric cancer. J. Gastric Cancer 17 (2): 145–153.
30 Colvin, J., Zaidi, N., and Berber, E. (2016). The utility of indocyanine green fluorescence imaging during robotic adrenalectomy. J. Surg. Oncol. 114 (2): 153–156.
31 DeLong, J.C., Chakedis, J.M., Hosseini, A. et al. (2015). Indocyanine green (ICG) fluorescence-guided laparoscopic adrenalectomy. J. Surg. Oncol. 112 (6): 650–653.
32 Nishiyama, Y., Kinouchi, H., Senbokuya, N. et al. (2012). Endoscopic indocyanine green video angiography in aneurysm surgery: an innovative method for intraoperative assessment of blood flow in vasculature hidden from microscopic view. J. Neurosurg. 117 (2): 302–308.
33 Hide, T., Yano, S., Shinojima, N., and Kuratsu, J. (2015). Usefulness of the indocyanine green fluorescence endoscope in endonasal transsphenoidal surgery. J. Neurosurg. 122 (5): 1185–1192.
34 Hachey, K.J., Digesu, C.S., Armstrong, K.W. et al. (2017). A novel technique for tumor localization and targeted lymphatic mapping in early-stage lung cancer. J. Thorac. Cardiovasc. Surg. 154 (3): 1110–1118.
35 Huang, W., Wang, K., Chen, F. et al. (2022). Intraoperative fluorescence visualization in thoracoscopic surgery. Ann. Thorac. Surg.
36 Tobis, S., Knopf, J., Silvers, C. et al. (2011). Near infrared fluorescence imaging with robotic assisted laparoscopic partial nephrectomy: initial clinical experience for renal cortical tumors. J. Urol. 186 (1): 47–52.
37 Schmidt, F., Dittberner, A., Koscielny, S. et al. (2017). Feasibility of real-time near-infrared indocyanine green fluorescence endoscopy for the evaluation of mucosal head and neck lesions. Head Neck 39 (2): 234–240.
38 van der Poel, H.G., Buckle, T., Brouwer, O.R. et al. (2011). Intraoperative laparoscopic fluorescence guidance to the sentinel lymph node in prostate cancer patients: clinical proof of concept of an integrated functional imaging approach using a multimodal tracer. Eur. Urol. 60 (4): 826–833.
39 Lee, S.J., Sohn, D.K., Han, K.S. et al. (2018). Preoperative tattooing using indocyanine green in laparoscopic colorectal surgery. Ann. Coloproctol. 34 (4): 206–211.
40 Mitsuhashi, N., Kimura, F., Shimizu, H. et al. (2008). Usefulness of intraoperative fluorescence imaging to evaluate local anatomy in hepatobiliary surgery. J. Hepato-Biliary-Pancreat. Surg. 15 (5): 508–514.
41 Johnson, A.C., Colakoglu, S., Chong, T.W., and Mathes, D.W. (2020). Indocyanine green angiography in breast reconstruction: utility, limitations, and search for standardization. Plast. Reconstr. Surg. Glob. Open 8 (3): e2694.
42 Burnier, P., Niddam, J., Bosc, R. et al. (2017). Indocyanine green applications in plastic surgery: a review of the literature. J. Plast. Reconstr. Aesthet. Surg. 70 (6): 814–827.
43 Losken, A., Zenn, M.R., Hammel, J.A. et al. (2012). Assessment of zonal perfusion using intraoperative angiography during abdominal flap breast reconstruction. Plast. Reconstr. Surg. 129 (4): 618e–624e.
44 Green, J.M. III, Thomas, S., Sabino, J. et al. (2013). Use of intraoperative fluorescent angiography to assess and optimize free tissue transfer in head and neck reconstruction. J. Oral Maxillofac. Surg. 71 (8): 1439–1449.
45 Iida, T., Mihara, M., Yoshimatsu, H. et al. (2014). Versatility of the superficial circumflex iliac artery perforator flap in head and neck reconstruction. Ann. Plast. Surg. 72 (3): 332–336.
46 Kamiya, K., Unno, N., Miyazaki, S. et al. (2015). Quantitative assessment of the free jejunal graft perfusion. J. Surg. Res. 194 (2): 394–399.


47 Holm, C., Tegeler, J., Mayr, M. et al. (2002). Monitoring free flaps using laser-induced fluorescence of indocyanine green: a preliminary experience. Microsurgery 22 (7): 278–287.
48 Gitajn, I.L., Titus, A.J., Tosteson, A.N. et al. (2018). Deficits in preference-based health-related quality of life after complications associated with tibial fracture. Bone Jt. J. 100 (9): 1227–1233.
49 Farr, J.N., Drake, M.T., Amin, S. et al. (2014). In vivo assessment of bone quality in postmenopausal women with type 2 diabetes. J. Bone Miner. Res. 29 (4): 787–795.
50 Jodal, L., Nielsen, O.L., Afzelius, P. et al. (2017). Blood perfusion in osteomyelitis studied with [(15)O]water PET in a juvenile porcine model. EJNMMI Res. 7 (1): 4.
51 Gustilo, R.B., Mendoza, R.M., and Williams, D.N. (1984). Problems in the management of type III (severe) open fractures: a new classification of type III open fractures. J. Trauma 24 (8): 742–746.
52 Poot, D.H.J., van der Heijden, R.A., van Middelkoop, M. et al. (2018). Dynamic contrast-enhanced MRI of the patellar bone: how to quantify perfusion. J. Magn. Reson. Imaging 47 (3): 848–858.
53 Udagawa, A., Sato, S., Hasuike, A. et al. (2013). Micro-CT observation of angiogenesis in bone regeneration. Clin. Oral Implants Res. 24 (7): 787–792.
54 Elliott, J.T., Jiang, S., Pogue, B.W., and Gitajn, I.L. (2019). Bone-specific kinetic model to quantify periosteal and endosteal blood flow using indocyanine green in fluorescence guided orthopedic surgery. J. Biophotonics 12 (8): e201800427.
55 Elliott, J.T., Addante, R.R., Slobegean, G.P. et al. (2020). Intraoperative fluorescence perfusion assessment should be corrected by a measured subject-specific arterial input function. J. Biomed. Opt. 25 (6): 1–14.
56 Gitajn, I.L., Elliott, J.T., Gunn, J.R. et al. (2020). Evaluation of bone perfusion during open orthopedic surgery using quantitative dynamic contrast-enhanced fluorescence imaging. Biomed. Opt. Express 11 (11): 6458–6469.
57 Demarchi, M.S., Karenovics, W., Bédat, B., and Triponez, F. (2020). Intraoperative autofluorescence and indocyanine green angiography for the detection and preservation of parathyroid glands. J. Clin. Med. 9 (3): 830.
58 De Leeuw, F., Breuskin, I., Abbaci, M. et al. (2016). Intraoperative near-infrared imaging for parathyroid gland identification by auto-fluorescence: a feasibility study. World J. Surg. 40 (9): 2131–2138.
59 Mondal, S.B., Gao, S., Zhu, N. et al. (2017). Optical see-through cancer vision goggles enable direct patient visualization and real-time fluorescence-guided oncologic surgery. Ann. Surg. Oncol. 24 (7): 1897–1903.
60 Yaseen, M.A., Yu, J., Jung, B. et al. (2009). Biodistribution of encapsulated indocyanine green in healthy mice. Mol. Pharmaceutics 6 (5): 1321–1332.
61 Yu, J., Javier, D., Yaseen, M.A. et al. (2010). Self-assembly synthesis, tumor cell targeting, and photothermal capabilities of antibody-coated indocyanine green nanocapsules. J. Am. Chem. Soc. 132 (6): 1929–1938.


62 Garcia, M., Edmiston, C., York, T. et al. (2018). Bio-inspired imager improves sensitivity in near-infrared fluorescence image-guided surgery. Optica 5 (4): 413–422.
63 Ruiz, A.J., Wu, M., LaRochelle, E.P.M. et al. (2020). Indocyanine green matching phantom for fluorescence-guided surgery imaging system characterization and performance assessment. J. Biomed. Opt. 25 (5): 1–15.
64 Elliott, J.T., Dsouza, A.V., Davis, S.C. et al. (2015). Review of fluorescence guided surgery visualization and overlay techniques. Biomed. Opt. Express 6 (10): 3765–3782.


6 Enhanced Photodynamic Therapy

Buhong Li 1 and Li Lin 2

1 Hainan University, School of Science, 58 Renmin Road, Haikou 570228, China
2 Fujian Normal University, Key Laboratory of OptoElectronic Science and Technology for Medicine of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fuzhou 350117, China

6.1 Introduction

Photodynamic therapy (PDT) is a minimally invasive therapeutic approach in which a photosensitizer (PS), activated by irradiation with light at a specific wavelength, interacts with oxygen to generate cytotoxic reactive oxygen species (ROS) for treating malignant and nonmalignant diseases. PS, light, and oxygen are the three critical elements of PDT [1]. Most recently, the clinical applications of PDT have been extended to the treatment of antimicrobial infections, such as those of bacterial, fungal, and viral origin. PDT, an important modality of light-activated therapy, has gradually become the fourth most widely used cancer treatment after surgery, radiotherapy, and chemotherapy, and is also the preferred treatment for special diseases such as port wine stains (PWS). To enhance the efficacy of PDT, great progress has been made in both fundamental research and clinical applications [2, 3]. As shown in Figure 6.1, the number of PDT-related publications per year in Web of Science increased substantially during the past decade, owing to the advantages of lesion adaptability, ease of repeated treatment, and low side effects. As indicated in Figure 6.2, the development of novel PSs, the elucidation of biological mechanisms, the expansion of clinical applications, and the investigation of dosimetry are the four hot topics in PDT research. Publications directly related to PSs for PDT, aiming to enhance photodynamic effects and decrease side effects, accounted for the highest proportion and have increased to 50% over the past decade. The other research fields, representing 5% of the total publications, mainly focused on light sources and synergistic therapy. During photosensitization activated by light irradiation, an excited PS can undergo type I and/or type II, as well as type III, PDT reactions for disease treatment [4]. The corresponding mechanisms are shown graphically in Figure 6.3.
The type I reaction generates superoxide anion (O2·−), hydrogen peroxide (H2O2), and hydroxyl radical (·OH) via electron-transfer reactions of the PS, while the type II reaction yields singlet oxygen (1O2) through an oxygen-dependent energy-transfer process. For most

Figure 6.1 Publications related to PDT in Web of Science over 2012–2021. Source: Buhong Li.

Figure 6.2 Hot topics in PDT research: novel photosensitizers 50%, biological mechanism 20%, clinical applications 15%, dosimetry 10%, others 5%. Source: Buhong Li.

clinically approved PSs, 1O2 is generally regarded as the primary cytotoxic agent. However, PDT mediated by aminolevulinic acid hydrochloride (ALA, Levulan®)-induced protoporphyrin IX (PpIX) or by verteporfin simultaneously involves both type I and type II reactions. In addition, the excited PS in type III PDT (or photoactivated chemotherapy, PACT) undergoes electron transfer to biomolecules such

Figure 6.3 Simplified Jablonski diagram for PDT of types I, II, and III. Source: Buhong Li.

as DNA, thereby allowing little oxygen dependence [5]. Accordingly, the development of both oxygen-dependent and oxygen-independent PSs for enhanced PDT will be discussed. Figure 6.4 indicates the typical pathways of biological response in PDT with specific targets [6], showing that tumor cell death can be induced directly by apoptosis, necrosis, autophagy, and ferroptosis, or indirectly through vascular destruction, as well as through an immune response triggering cytotoxic T cells or antibody-mediated cellular cytotoxicity [7–9]. Differing from direct damage to tumor cells, vascular-targeted PDT (V-PDT) produces local hypoxia through vessel constriction, indirectly leading to cell death by limiting the oxygen and nutrient supply. As for the immune response, PDT-induced damage in tumor tissue triggers an inflammatory response, which exerts an antitumor effect. Normally, the PDT-induced inflammatory response activates cytotoxic T cells that are subsequently transported to the targeted location for tumor cell killing. Currently, most attention is focused

Figure 6.4 Pathways of PDT response: vessel constriction in capillaries causes regional hypoxia; tumor cells die by apoptosis, necrosis, autophagy, or ferroptosis; and the immune response (cytotoxic T cells, antibody-mediated cellular cytotoxicity) contributes to tumor remission. Source: Buhong Li.


on either damaging the malignant cells directly or on the PDT-induced immunologic response against metastases and recurrence. However, shutdown of the tumor vasculature also plays a vital role in the clinical treatment of vascular-related diseases, including age-related macular degeneration, PWS, and prostate cancer.

6.2 Photosensitizers for Enhanced PDT

PSs strongly affect PDT efficacy through their delivery efficiency, cellular localization, and peak absorption wavelength. Moreover, other PS properties, such as purity, clearance period, and oxygen-mediated ROS yield, all significantly determine the effectiveness of clinical PDT treatment. Ever since the approval of Photofrin for clinical PDT in 1993, researchers have been increasingly active in exploring new PSs with enhanced efficiency. To date, great progress has been made in developing clinically available PSs [10, 11], but several constraints still limit the clinical broadening of PS-mediated PDT. Photofrin® (porfimer sodium), the typical first-generation PS, has been widely used for PDT against various cancerous indications but, like most first-generation PSs, suffers from poor chemical purity, low water solubility, limited selectivity to cancer cells, a peak absorption wavelength far from the near-infrared (NIR) spectrum, and a long clearance time. Specifically, Photofrin-mediated PDT leads to skin photosensitivity lasting for up to six to eight weeks as a consequence of continuing photodynamic reactions. Therefore, after PS administration, patients must avoid strong light exposure, which causes inconvenience and additional discomfort. Researchers have put much effort into improving the situation [12, 13]; in contrast, the second-generation PSs generally possess improved water solubility, enhanced tumor-targeting ability, excitation at longer wavelengths, shortened clearance times, and enhanced overall efficacy. However, critical limitations in water solubility, tumor targeting, tumor regression efficiency, therapeutic depth, and hypoxia resistance still exist for the second-generation PSs, calling for further investigation into improved PSs. Table 6.1 shows several typical PSs together with commonly used treatment parameters.

Nowadays, aiming to overcome the previously discussed limitations of traditional PSs, nanostructures and functional moieties have been extensively studied as typical approaches for upgrading PSs [26–29]. Among the extensive studies assisted by nanoscience and nanotechnology, the development of new PSs has taken seven primary directions: (i) Develop novel photosensitive materials possessing a high quantum yield of 1O2, such as C60, black phosphorus, graphene quantum dots, or aggregation-induced emission (AIE) PSs; (ii) Improve the delivery efficiency of PSs, for example with magnetic nanoparticle-based PSs that can be driven by an external magnetic field, or with acidic pH-, glutathione-, H2O2-, matrix metalloproteinase-2-responsive or photo-responsive PS release to avoid premature leakage during delivery; (iii) Enhance cellular absorption of PSs via

Table 6.1 Treatment parameters of typical clinical PSs.

PSs                             Excitation λ (nm)   Time to PDT after PS delivery   Light dose                            References
Porfimer sodium (Photofrin)     630                 48 h                            240 J/cm2 (400 mW for 600 s)          [14]
5-aminolevulinic acid (5-ALA)   635                 2–4 h                           12 J/cm2                              [15]
Methyl aminolevulinate (MAL)    630                 3 h                             37 J/cm2                              [16]
Methylene blue                  670                 180 s                           1911 J/cm2 (6370 mW/cm2 for 300 s)    [17]
Temoporfin (mTHPC)              652                 1–2 d                           5–120 J/cm2                           [18]
HiPorfin                        630                 4 h                             150 J/cm2                             [19]
Hemoporfin (HMME)               532                 10 min                          96–120 J/cm2                          [20]
Talaporfin sodium (NPe6)        664                 4 h                             100 J/cm2                             [21]
Chlorin e6 (Ce6)                660                 4 h                             240 J/cm2                             [22]
Padeliporfin (Tookad)           763                 6 min                           100–360 J/cm2                         [23]
Verteporfin                     690                 60–90 min                       40 J                                  [24]
AlPcS4                          675                 24 h                            30 J/cm2                              [25]

surface decoration techniques that specifically recognize cancer cells or subcellular organelles, or using photochemical internalization to destroy endocytic vesicles and thereby facilitate drug release into the cytoplasm; (iv) Optimize the utilization rate of optical energy via nanoflowers with multilayer structures or nanosheets with high specific surface area for stronger interaction between PSs and photons, or via combination with heavy atoms or transition-metal atoms for higher intersystem crossing (ISC) efficiency or a longer triplet-state lifetime of the PS; (v) Develop anti-hypoxia PSs by adding oxygen carriers/generators, or by replacing oxygen-consuming type II PSs with type I or III PSs; (vi) Integrate internal light sources, such as bioluminescent materials, into photosensitive ROS generators using a genetic encoding approach to obtain a theoretically unlimited treatment depth [30]; (vii) Employ PSs with nano-platform-based synergistic therapeutic function to carry drugs or adjuvants required for different therapies (such as chemotherapy drugs or immune adjuvants), or to convert excitation energy into the antitumor products of different therapies (such as 1O2 or hyperpyrexia).

Although research on PSs for enhanced PDT is in the ascendant, clinical trial data and biosafety assessments are scarce and await further investigation. Most research work remains at the principle-based concept exploration stage, with a limited number of novel PSs entering clinical trials. Ideal PSs for clinical applications of enhanced PDT should possess the following basic characteristics: (i) Abundant source materials, an easy chemical synthesis process, and good biocompatibility;


(ii) Clear chemical composition and structure; (iii) A maximum absorption peak located in the NIR for deep treatment depth; (iv) Strong hypoxia resistance; (v) High quantum yield of 1O2; (vi) Superior light stability and little photobleaching; (vii) Good selectivity with specific cell or tissue targeting; (viii) Low side effects and fast metabolic clearance; (ix) Multifunctionality, such as diagnosis and efficacy monitoring.
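The light doses in Table 6.1 follow from the basic relation fluence = irradiance × exposure time. A minimal consistency check on the parenthesized table entries (the helper function name is ours, not from the cited protocols):

```python
def fluence_j_per_cm2(irradiance_mw_per_cm2, seconds):
    """Fluence (J/cm2) = irradiance (mW/cm2) x time (s) / 1000 (mW -> W)."""
    return irradiance_mw_per_cm2 * seconds / 1000.0

# Methylene blue entry: 6370 mW/cm2 for 300 s
mb_dose = fluence_j_per_cm2(6370.0, 300.0)     # 1911.0 J/cm2, matching [17]

# Photofrin entry quotes total delivered energy: 400 mW for 600 s
photofrin_energy_j = 400.0 * 600.0 / 1000.0    # 240.0 J
```

Note that the Photofrin parenthetical multiplies to a total energy in joules rather than a fluence in J/cm2; converting between the two additionally requires the illuminated area.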

6.3 Light Sources for Enhanced PDT

The dual selectivity of PDT allows precise killing of cancerous cells [31]. The treatment accuracy is realized, firstly, by the preferential retention of PSs in the lesion area, where cellular metabolism is more active, and, secondly, by the selective illumination of the lesion area, indicating the importance of PDT light sources. In PDT, it is critical to first consider matching the light source emission spectrum to the PS absorption spectrum in order to achieve efficient PDT activation [32, 33]. Meanwhile, the diversity and complexity of target tissues strongly affect the required optical dosage, illumination areas, and treatment depths, which largely determine the lighting schemes to be adopted as well as the PDT light sources to be employed. Sunlight, the most ancient and the only natural PDT light source, currently prevails for actinic keratosis treatment. Sunlight contains primarily UV, VIS, and NIR light and is usually used with endogenous PpIX-based PDT (largest absorption peak at 408 nm and further absorption peaks at 506, 542, 577, and 630 nm), which can be initiated by ALA or methyl aminolevulinate (MAL). Sunlight-based PDT provides a large treatment area and simple operation, and thereby offers irreplaceable advantages even with the speedy development of artificial PDT light sources. The current artificial PDT light sources are lasers, light-emitting diodes (LEDs), and broad-spectrum lamps, each of which shows unique strengths in PDT; hence, all have been widely used for extensive indications. The main limitation of PDT light sources is the penetration depth of light in biological tissues, resulting from optical absorption by oxygenated hemoglobin, deoxygenated hemoglobin, melanin, cytochrome, and other biomolecules. Commonly, clinical PSs possess absorption bands ranging from the visible (VIS) to the NIR region.
The tissue penetration depth for VIS photons is normally limited to less than 5 mm; consequently, VIS-based PDT applies only to superficial cancers or to lesions at endoscope- or fiber-reachable locations (oral/nasal cavities or ocular regions), but not to deep-seated tumors. In contrast with VIS light, NIR light allows a greater treatment depth, owing to the tissue optical window lying in the orange/red to NIR spectral range [34]. Mostly, indocyanine dyes, including ICG, IR-825, and IR-780, which present strong NIR absorption, are favored for use with NIR light sources in clinical PDT of deep-seated tumors. To significantly enhance the photosensitizing process, further extension of the penetration ability in biological tissues and careful design of the lighting scheme are both needed for PDT light sources to achieve higher PDT efficacy.
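The VIS versus NIR contrast can be estimated from the diffusion approximation, where the effective attenuation coefficient is mu_eff = sqrt(3 mu_a (mu_a + mu_s')) and the 1/e penetration depth is 1/mu_eff. A minimal sketch, with assumed, illustrative soft-tissue optical properties (not measured values):

```python
import math

def mu_eff_per_cm(mu_a, mu_s_prime):
    """Effective attenuation coefficient (1/cm), diffusion approximation."""
    return math.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))

def penetration_depth_mm(mu_a, mu_s_prime):
    """1/e penetration depth in mm from absorption and reduced scattering (1/cm)."""
    return 10.0 / mu_eff_per_cm(mu_a, mu_s_prime)

# Assumed, illustrative soft-tissue optical properties:
vis = penetration_depth_mm(mu_a=0.3, mu_s_prime=10.0)   # red VIS, ~630 nm
nir = penetration_depth_mm(mu_a=0.1, mu_s_prime=8.0)    # NIR, ~800 nm
print(f"~630 nm: {vis:.1f} mm   ~800 nm: {nir:.1f} mm")
```

With these values the visible-light depth comes out in the few-millimeter range, consistent with the sub-5 mm limit quoted above, while the NIR depth is roughly twice as large.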

6.3.1 Extended Penetration Depth

6.3.1.1 Lasers

A laser is the most common narrow-spectrum PDT light source, attributed not only to its monochromatic, coherent, high-power illumination but also to its precisely controllable and fiber-guidable output. With lasers, up-conversion nanoparticles (UCNPs) or two-photon excitation (TPE) are normally combined to realize a deep treatment depth. In UCNP-involved PDT, UCNPs convert the incident NIR laser light to the VIS region and indirectly excite the PSs through fluorescence resonance energy transfer (FRET) [35]. Generally, UCNPs are produced by doping lanthanide, transition, or actinide metal ions into an inorganic crystal matrix, and trivalent lanthanide ions are regarded as the most favored dopants for UCNPs owing to their multiple metastable energy states for up-conversion processes. After administration of UCNP-containing PSs followed by NIR laser illumination, the UCNPs absorb two or more photons and convert them into a single VIS photon through an anti-Stokes process, thereby exciting the VIS-absorbing PS. Currently, for UCNPs, Yb3+, possessing a large NIR absorption cross section, and Er3+ are usually employed and doped into NaYF4 for light absorption and up-conversion, respectively. Differing from the most common one-photon excitation-based PDT, TPE-based PDT was not feasible until femtosecond tunable lasers and TPE-responsive PSs became commercially available [36]. TPE-responsive PSs require both a considerable NIR two-photon absorption cross section and a high generation efficiency of phototoxic ROS. Under irradiation with a sufficiently high photon density from a femtosecond tunable laser, TPE-responsive PSs, such as semiconductor quantum dots (QDs), carbon QDs (CQDs), Au NPs, or polymer-based NPs, simultaneously absorb two NIR photons to activate the subsequent PDT process, presenting superior spatial selectivity since this process takes place only in the focal volume of the laser.

6.3.1.2 Light-Emitting Diodes

LEDs, as one of the primary PDT light sources, possess the advantages of light weight, small size, low cost, and long lifetime; consequently, as shown in Figure 6.5, they have become increasingly attractive in PDT applications as wearable, disposable, or household light sources. In addition, LEDs can be wirelessly powered without complex power-supply wiring, allowing simple and easy operation and showing great potential as implantable light sources for internal lighting during deep PDT. At present, ultrasonic power supply, electric field coupling, near-field communication (NFC), electromagnetic induction, and magnetic field coupling resonance are the commonly used techniques for wirelessly powered LEDs. The cost-efficiency of LEDs is particularly advantageous, since implantable light sources must be affordable enough to be disposable. According to current studies, the combination of micrometer-sized micro-LEDs (μ-LEDs) with wireless powering technology has become a highlight of novel PDT internal light sources.
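For such low-irradiance wearable or implantable sources, the required illumination time simply scales inversely with irradiance, since the delivered light dose is irradiance multiplied by time. A minimal sketch with assumed, illustrative numbers (not values from the text):

```python
def exposure_time_s(dose_J_cm2, irradiance_mW_cm2):
    """Illumination time (s) needed to deliver a target light dose (J/cm2)
    at a constant irradiance (mW/cm2)."""
    return 1000.0 * dose_J_cm2 / irradiance_mW_cm2

# Assumed, illustrative numbers: a 100 J/cm2 target dose from a 10 mW/cm2 LED
t = exposure_time_s(100.0, 10.0)
print(f"{t:.0f} s (about {t / 3600:.1f} h)")
```

The hours-long exposure this implies is exactly why wearable or implanted LEDs, which can run unattended, are attractive for low-irradiance regimes.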


Figure 6.5 New LEDs for potential PDT applications: wearable, domestic, implantable, disposable, and flexible. Source: Buhong Li.

NFC is a high-frequency wireless communication technology with a working frequency of 13.56 MHz, in which a rectifier converts the received radio-frequency signal into the desired LED driving power; ultrasonic powering, in turn, requires a piezoelectric receiver to convert the ultrasonic waves into electrical energy for illumination. These two are currently the most feasible powering modalities for implantable μ-LEDs.

6.3.1.3 Self-Excitation Light Sources

Self-excitation lighting allows direct internal illumination of the target tissue, not only presenting less optical loss but also largely protecting normal tissue from cytotoxicity [37]. The most commonly used energy transfer mechanisms for self-excitation lighting are resonance energy transfer (RET) for chemiluminescence/bioluminescence and radiological excitation for Cerenkov luminescence. In chemiluminescence/bioluminescence, a donor, such as the Rluc mutant Rluc8.6 or peroxalate, is necessary to generate luminescence via biological activities or chemical reactions for PS activation. To obtain Cerenkov illumination, a charged particle must travel faster than light propagates in the dielectric medium, which usually gives blue emission suitable for triggering Soret band-absorbing PSs [38]. Recently, a genetic encoding method was reported to realize genetically encoded bioluminescence RET-activated PDT through synthesis of a NanoLuc-miniSOG BRET pair, combining the NanoLuc luciferase with the phototoxic flavoprotein miniSOG, which triggers ROS generation upon injection of the luciferase substrate [30]. Also, in Cerenkov illumination-activated PDT, successful applications of radioactive isotopes including 64Cu, 18F, 90Y, and 89Zr have been reported, though it has been noted that the Cerenkov illumination-involved treatment works not as a standalone PDT effect but as a combination therapy with radiotherapy, in which free ionizing radiation decreases the PDT accuracy. Hopefully, in the future, well-confined Cerenkov illumination will be developed for more accurate Cerenkov illumination-activated PDT with reduced nuclide dosage.
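The Cerenkov condition, a charged particle outrunning light in the medium, fixes a threshold electron kinetic energy that follows directly from the refractive index. A minimal sketch (the tissue index of n ~ 1.40 is an assumed, illustrative value):

```python
import math

M_E_C2_MEV = 0.511   # electron rest energy (MeV)

def cerenkov_threshold_MeV(n):
    """Minimum electron kinetic energy for Cerenkov emission in a medium of
    refractive index n: the particle must move faster than c/n."""
    beta_th = 1.0 / n                               # threshold speed, units of c
    gamma_th = 1.0 / math.sqrt(1.0 - beta_th ** 2)  # Lorentz factor at threshold
    return M_E_C2_MEV * (gamma_th - 1.0)            # kinetic energy (MeV)

print(f"water  (n = 1.33): {cerenkov_threshold_MeV(1.33):.3f} MeV")
print(f"tissue (n ~ 1.40, assumed): {cerenkov_threshold_MeV(1.40):.3f} MeV")
```

For water the threshold comes out near 0.26 MeV, which is why only sufficiently energetic beta emitters produce usable Cerenkov light; a higher refractive index lowers the threshold.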

6.3.1.4 X-Ray

Unlike NIR light, whose biological penetration depth is greater than that of VIS light but still quite limited, X-rays penetrate biological tissues, and even bones, almost without limit, hence becoming attractive as a deep-penetration excitation source for PDT, commonly abbreviated X-PDT [33]. In X-PDT, nanoscintillators, such as SrAl2O4:Eu2+ or Y2.99Pr0.01Al5O12, need to be combined with the PSs, working as energy transducers for X-ray-to-VIS optical conversion, because most clinically approved PSs possess insufficient X-ray absorption. Unfortunately, two limitations still hinder the broadening of X-PDT: (i) the energy transfer efficiency and luminescence intensity of the PSs in X-PDT need enhancement; (ii) high-accuracy X-PDT targeting only the lesions must be developed, since X-PDT acts as a combination of PDT and radiotherapy, both of which interact and cooperate to damage cell membranes and DNA, and the radiotherapy can destroy normal tissues around the lesions.

6.3.1.5 Acoustic Waves

In addition to X-rays, acoustic waves, with their lower attenuation coefficient, offer superior directionality as well as good tissue penetration, and have hence obtained increasing attention as an excitation source in PDT. During PS activation, acoustic waves demonstrate both satisfactory treatment depth for lesions and sufficient safety for normal cells [39]. Unlike PSs in X-PDT, which require synthesis of a specific PS containing nanoscintillators, some traditional organic PSs can directly respond to acoustic waves. Additionally, Au NPs have been extensively studied as novel acoustic wave-responsive PSs in PDT, owing to the good chemical stability of gold. The actual mechanisms behind acoustic excitation for PDT still await more investigation, but the most convincing explanation is that acoustic wave-produced sonoluminescence triggers the PS for subsequent ROS generation, provided that the sonoluminescence spectrum overlaps with the PS absorption band. For future clinical translation of acoustic wave-activated PDT, much effort needs to be put into improving the efficiency of acoustic-to-optical energy conversion.

6.3.2 Optimized Scheme of Irradiation

During clinical treatment, continuous irradiation with a high luminous flux density is generally applied to patients, which limits the rate of oxygen supply, reduces the safety of surrounding normal cells, and affects the optical properties of target tissues. By optimizing the PDT lighting scheme, the above problems can be significantly relieved, thereby improving the PDT efficacy. Presently, there are four most-studied lighting schemes, two of which, termed fractionation PDT and metronomic PDT (mPDT), are both used for hypoxia alleviation during PDT. In contrast to the conventional lighting scheme, which easily causes oxygen consumption in tissues much faster than the supply from the surrounding blood vessels, fractionation PDT uses segmented light to avoid rapid depletion of oxygen in the tissue, while mPDT maintains the tissue oxygen partial pressure by a reduced irradiation intensity with a prolonged treatment time, thus obtaining stable 1O2 production. With identical PDT dosage,
fractionation PDT and mPDT ensure the oxygen supply during treatment, hence giving an improved PDT efficacy. In addition to fractionation PDT and mPDT, there exists another lighting scheme referred to as the pulse/super-pulse illumination mode. According to a recent study, the pulse illumination mode during PDT, and especially the super-pulse illumination mode, effectively improves the PDT effect, with experimental results showing apparent relief of thermal damage and tissue hypoxia due to the oxygen diffusion and heat dissipation taking place in the intervals between pulses [40]. Additionally, the pulse/super-pulse illumination mode avoids temperature fluctuation-induced changes of tissue optical properties, realizing more accurate and stable treatment. Beyond the above, making use of nonlinear optical interactions is also a potential lighting scheme, converting NIR light nonlinearly in situ into the PS excitation spectrum, with a biological medium realizing the nonlinear wavelength conversion [41]. According to the reported results, using cellular biomolecules under NIR laser irradiation, Ce6 can be successfully excited through nonlinear up-conversion by second-harmonic generation and by four-wave mixing including coherent anti-Stokes Raman scattering, with in vitro PDT efficacy enhancement factors of 5 and 2, respectively, in comparison with TPE-based PDT.
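The notion of identical PDT dosage across lighting schemes reduces to total fluence being fluence rate multiplied by the illuminated time. A minimal sketch with assumed, illustrative numbers showing three schemes delivering the same dose:

```python
def fluence_J_cm2(irradiance_mW_cm2, time_s, duty_cycle=1.0):
    """Delivered light dose (J/cm2): irradiance x illuminated time."""
    return irradiance_mW_cm2 * time_s * duty_cycle / 1000.0

# Three schemes delivering the same total dose (values assumed for illustration):
traditional  = fluence_J_cm2(100.0, 600.0)                   # continuous, high rate
fractionated = fluence_J_cm2(100.0, 1200.0, duty_cycle=0.5)  # dark intervals for reoxygenation
metronomic   = fluence_J_cm2(10.0, 6000.0)                   # low rate, prolonged time
print(traditional, fractionated, metronomic)  # all 60.0 J/cm2
```

The three schemes differ only in how the same photon budget is spread over time, which is exactly the degree of freedom the fractionation and metronomic strategies exploit for oxygen perfusion.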

6.4 Oxygen Supply for Enhanced PDT

Insufficient oxygen supply is a nonnegligible issue for PDT, which significantly limits 1O2 generation as well as PDT efficacy, primarily because: (i) a hypoxic environment exists inside the solid tumor [42, 43]; (ii) the oxygen-consuming type II PDT process aggravates the hypoxia problem [44]; (iii) vascular shutdown by PDT inhibits oxygen transportation. Therefore, to alleviate hypoxia during PDT, strategies of enhancing the oxygen supply and reducing the oxygen consumption should be adopted; typical strategies are shown in Figure 6.6.

6.4.1 Oxygen Replenishment

To enhance PDT by replenishing oxygen, both oxygen carriers and in situ oxygen generators have been intensively studied.

6.4.1.1 Oxygen Carriers

Oxygen carriers, made of natural substances, inorganics, or organic polymers, physically adsorb oxygen and transport it to the target sites for subsequent 1O2 generation, which requires strong oxygen storage ability [45, 46]. In particular, biocompatible red blood cells (RBCs) or hemoglobin molecules have often been used to deliver oxygen into solid tumors by bonding or encapsulation with the PS, since, in mammals, hemoglobin functions as the primary vehicle for oxygen transportation by reversibly binding oxygen, and a single RBC contains roughly 270 million hemoglobin molecules. In addition, perfluorocarbon NPs and fluorinated materials, such as fluorinated polypeptides and fluorinated polymers, are also popular oxygen-replenishing carriers with high oxygen capacity. Moreover, organic polymer materials, metal–organic frameworks, covalent organic polymers, and covalent organic frameworks, which have already drawn increasing attention for drug delivery, have also become very attractive for oxygen storage due to their porosity and large surface area.

Figure 6.6 Oxygen supply strategies for enhanced PDT: oxygen replenishment (carriers based on hemoglobin-related NPs, inorganics, or organic polymers; generators based on H2O2- or CaO2-involved reactions, oxide/peroxide decomposition, or photosynthetic bacteria) and reduced oxygen consumption (irradiation schemes; hypoxia-activated prodrugs and hypoxia-cleavable linkers; type I or III mediated PSs). Source: Reproduced from Li et al. [3] Figure 5 with permission of Chinese Laser Press.

6.4.1.2 Oxygen Generators

The most common approach for in situ oxygen generation is to exploit chemical reactions of H2O2 with MnO2, fully taking advantage of the high H2O2 concentration in specific tumor microenvironments. For this, PSs combined with MnO2 are transported to the target, where MnO2 is reduced by H2O2 to Mn2+, generating oxygen for the PDT process. The resulting Mn2+ can react with H2O2 to regenerate MnO2, which again reacts with H2O2, repeating the cycle of oxygen production and reduction to Mn2+. One outstanding advantage of this strategy is the excellent solubility and fast excretion of Mn2+. Additionally, another biocompatible oxygen generator, CaO2, has been widely studied for hypoxia-resistant PDT, since it gradually produces oxygen in a humid environment. Furthermore, oxygen generation from water photodecomposition for PDT is realized with C3N4 or Fe–C3N4 as the water-splitting material, taking advantage of the abundant water in the human body. Similarly, H2O2 decomposition also provides oxygen for PDT but requires catalase as the catalyst to accelerate the reaction; this strategy offers excellent biocompatibility due to its metal-free process. Beyond the above, there is another high-biosafety oxygen-generating method that integrates photosynthetic bacteria, which possess oxygen-enrichment ability, with the PSs; benefiting from controllable photoautotrophy, satisfactory oxygen yield, and cost-efficiency, it is being intensively explored for many diseases, including cardiovascular conditions.
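For the catalase-driven route, the stoichiometry 2 H2O2 -> 2 H2O + O2 fixes the maximum oxygen yield, and the ideal-gas law gives a feel for the corresponding gas volume at body temperature. A minimal sketch (the 1 umol H2O2 amount is assumed purely for illustration):

```python
R_L_ATM_PER_MOL_K = 0.082057   # ideal-gas constant, L*atm/(mol*K)

def o2_umol_from_h2o2(h2o2_umol):
    """O2 yield of catalase-type decomposition, 2 H2O2 -> 2 H2O + O2."""
    return 0.5 * h2o2_umol

def gas_volume_uL(n_umol, temp_K=310.15, pressure_atm=1.0):
    """Ideal-gas volume in microliters (umol x L/mol = uL)."""
    return n_umol * R_L_ATM_PER_MOL_K * temp_K / pressure_atm

n_o2 = o2_umol_from_h2o2(1.0)   # 1 umol H2O2, amount assumed for illustration
print(f"{n_o2:.1f} umol O2, about {gas_volume_uL(n_o2):.1f} uL at 37 C")
```

Note that the MnO2-catalyzed route described above follows a different (cyclic) pathway; the catalase stoichiometry is used here only because it is the simplest fixed-yield case.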

Figure 6.7 Light fractionation and metronomic PDT: light density (mW/cm2) versus treatment time (s). Source: Buhong Li.

6.4.2 Reduced Oxygen Consumption

6.4.2.1 Irradiation Scheme

During PDT lighting, continuous irradiation leads to fast consumption of oxygen, calling for careful design of the lighting scheme to slow oxygen consumption and thus spare more time for oxygen perfusion. As shown in Figure 6.7, the metronomic strategy avoids premature hypoxia by using a decreased illumination density with a prolonged irradiation time, while the fractionation mode, also called pulse mode, alternates repeatedly between sessions with and without illumination. Both schemes provide a more consistent oxygen supply for PDT.

6.4.2.2 Hypoxia-Activated Approaches

PDT unavoidably consumes oxygen for 1O2 production. Once hypoxia occurs, hypoxia-activated approaches are triggered and start their cancer-killing tasks, so as to compensate for the hypoxia-limited PDT effect. Hypoxia-cleavable linkers are usually utilized to control the chemotherapy drug: after PDT depletes the oxygen, the linkers are broken, accelerating on-demand release of the drug to damage cancerous cells. Differing from hypoxia-cleavable linkers, hypoxia-activated prodrugs, such as AQ4N and tirapazamine, directly generate cytotoxic species once oxygen depletion arises after PDT.

6.4.2.3 Reduction of Oxygen Dependence

Oxygen carriers or generators increase the intratumoral oxygen level, but the oxygen supply is only temporarily improved because of the fast gas-release process [47, 48]. Therefore, it is also important to reduce the oxygen requirement of PDT itself. One strategy is to inhibit the oxygen-consuming intrinsic intracellular chemical reactions, specifically mitochondria-associated oxidative phosphorylation (OXPHOS), which can be retarded using the inhibitory drug atovaquone, so as to suppress oxygen consumption. Another strategy is to replace the oxygen-consuming Type II PDT process with the Type I PDT process, which requires appropriate PSs; however, the number of available Type I PSs is much smaller than that of Type II PSs. Currently, inorganic PSs, including ZnO nanorods and TiO2 NPs, have been widely
studied to help Type I PDT produce hydroxyl radicals. Apart from Type I PDT, Type III PDT (also referred to as PACT) is another promising candidate to substitute for the Type II process; it undergoes electron transfer from the excited PS to biomolecules, including DNA, thereby circumventing the hypoxia problem through its oxygen-independent PDT process.

6.5 Synergistic Therapy for Enhanced PDT

Currently, clinical surgery, chemotherapy, immunotherapy, radiotherapy, photothermal therapy (PTT), PDT, sonodynamic therapy (SDT), and magnetic hyperthermia therapy (MHT) are the common monotherapies against cancers. As shown in Table 6.2, among these therapies, clinical surgery presents the most straightforward process, while chemotherapy and radiotherapy provide high-efficiency tumor inhibition. In addition, PTT, PDT, and MHT possess processes free of ionizing radiation, with high safety as well as good tumor selectivity. Notably, immunotherapy is superior in reducing tumor metastasis and recurrence, while SDT offers a deep-penetrating excitation process without ionizing radiation. As shown in Figure 6.8, synergistic treatment combining PDT with the other monotherapies has been extensively adopted to boost the antitumor efficiency beyond that of the monotherapies, through the realization of mutual benefits as well as the offset of respective shortcomings [49, 50]. Depending on the number of therapies combined, PDT-involving synergistic therapies can be divided into dual-modal and triple/multiple-modal therapies.

6.5.1 Dual-Modal Therapy

6.5.1.1 Surgery

Clinical surgeries treat tumors through resection; however, only nonmalignant tumors can be cured by complete resection. For malignant tumors, the surgery becomes much more complicated but, fortunately, can achieve improved therapeutic efficacy when assisted by PDT pre- or post-resection: (i) For large-volume tumors, pretreatment by PDT reduces the tumor size, benefiting the subsequent surgical operation, lowering the risk of large-area wounds, and shortening the recovery period. (ii) After surgery, PDT further eliminates residual diseased cells to reduce the cancerous recurrence rate.

Table 6.2 Advantages of different monotherapies.

Clinical surgery: Straightforward process
Chemotherapy, Radiotherapy: High efficiency for tumor inhibition
Photothermal therapy, Magnetic hyperthermia therapy: Good tumor selectivity
Immunotherapy: Reduction of metastasis and recurrence
Sonodynamic therapy: Deep-penetrating excitation process with high safety

Figure 6.8 Synergistic therapy for enhanced PDT, combining PDT with chemotherapy, photothermal therapy, immunotherapy, radiotherapy, sonodynamic therapy, clinical surgery, and magnetic hyperthermia therapy. Source: Buhong Li.

6.5.1.2 Chemotherapy

In the synergistic therapy of PDT and chemotherapy, a chemotherapy drug such as doxorubicin (DOX) is normally loaded either on a nanoplatform together with the PS or directly on a nanocarrier-like nano-PS. After administration, light excites the PSs loaded on the nanoplatform, while the chemotherapy drug is usually released through a stimuli-responsive process. The primary advantages of this combination are: (i) The chemotherapy process makes tumor cells more sensitive to the cytotoxicity of ROS from PDT, resulting in efficient PDT treatment with a lower dosage and fewer side effects. (ii) Proteins related to the drug efflux effect, which negatively affect the therapeutic efficacy of chemotherapy, can be deactivated by ROS, thus enhancing drug utilization in chemotherapy.

6.5.1.3 Radiotherapy

Mostly, PDT and radiotherapy are carried out separately in their synergistic treatment. Sometimes, nanoplatforms carrying PSs are also integrated with heavy metal elements to amplify the radiotherapy effect. Although radiotherapy degrades the high precision of PDT, this limitation can be addressed through conformal radiotherapy techniques, such as three-dimensional conformal radiotherapy. For radiotherapy, charged-particle beams consisting of protons or carbon ions from a cyclotron or synchrotron are a rising candidate, while X-ray radiation is still the most conventional method for locally treating tumors together with PDT. Specifically, X-PDT is usually treated as a combined effect of PDT and radiotherapy, using X-ray transducers to convert X-ray photons into photons absorbable by PSs.

6.5.1.4 Photothermal Therapy

PTT is the therapy most commonly combined with PDT, and the combination results in increased cancer-cell killing efficacy. In particular, the thermal effects of PTT accelerate intratumoral blood circulation, providing PDT with a better oxygen supply. Usually, in their synergistic treatment, two lasers emitting distinct peak wavelengths are utilized, with both excitation wavelengths falling in the red/NIR region (normally ∼650 nm for PDT and ∼810 nm for PTT) [51], which increases not only the complexity of the medical setup but also the probability of extra pain and risk for patients. To address this issue, sensitizers whose PDT and PTT absorption spectra overlap are required, so as to simultaneously absorb the incident NIR light for ROS generation while converting the NIR energy into hyperthermia. In particular, gold-based nanomaterials, such as gold nanorods, are among the most used sensitizers for combined PDT and PTT, owing to their strong localized surface plasmon resonance for energy conversion and their photochemical catalytic ability.

6.5.1.5 Immunotherapy

Damage caused by PDT in tumor tissues triggers an inflammatory response, activating cytotoxic T cells that are then transported to the target location to kill tumor cells [52]. In this combination with immunotherapy, an immune adjuvant is needed to cooperate with the PS. During immunotherapy, immunosuppressive signals, which can retard T-cell activation, can be removed through a strategy termed immune checkpoint blockade. Specifically, cytotoxic T-lymphocyte-associated protein 4 (CTLA-4) blockade and programmed death 1/programmed death ligand 1 (PD-1/PD-L1) blockade have obtained FDA approval for tumor immunotherapy. However, the effectiveness of immunotherapy against the primary tumor is quite limited; this can be improved through synergistic treatment with PDT, which in turn gives an enhanced PDT effect through reduced recurrence and metastasis rates.

6.5.1.6 Magnetic Hyperthermia Therapy

The combination of PDT and MHT utilizes magnetic NPs not only as nanocarriers for PSs but also as a critical component of MHT. For PDT, iron oxide (Fe3O4 and γ-Fe2O3)-based magnetic NPs are widely used with an external guiding magnetic field for accurate drug delivery, while in MHT the generation of magnetic hyperthermia also requires the magnetic NPs to interact with an alternating magnetic field, commonly generated by an MHT device. The shortcoming of this strategy is that the magnetic field strength falls off rapidly with increasing distance, implying effective treatment only in the vicinity of the body surface; fortunately, this could be improved through implantation of magnets near the target tumor.

6.5.1.7 Sonodynamic Therapy

Synergistic treatment with PDT and SDT is referred to as sonophotodynamic therapy (SPDT), which requires a light source for PDT and an acoustic source for SDT.

Table 6.3 PDT-related triple/multiple-modal therapy.

PDT + Chemotherapy + Radiotherapy
PDT + Photothermal therapy + Photoimmunotherapy
PDT + Photothermal therapy + Chemotherapy
PDT + Photothermal therapy + Radiotherapy
PDT + Magneto-mechanical therapy + Photothermal therapy + Chemotherapy

SDT is actually an extended method of PDT, producing ROS from ultrasonically excited sonosensitizers [53]. Fortunately, traditional PSs, such as Rose Bengal, can respond to both optical and acoustic energy, avoiding the need for different sensitizers or the synthesis of an exclusive agent for SPDT. Because of the deep-penetrating acoustic waves, SPDT presents an extended treatment depth with high safety. However, the ROS generation mechanism behind SDT still awaits further elucidation, with high-temperature pyrolysis and cavitation-based sonoluminescence being the most supported explanations.

6.5.2 Triple/Multiple-Modal Therapy

Triple/multiple-modal synergistic therapies have also been extensively studied, aiming for a stronger antitumor effect [54]. As shown in Table 6.3, the most common modalities include chemo/PDT/radiotherapy, PDT/photothermal/photoimmunotherapy, photodynamic/photothermal/chemotherapy, photothermal/photodynamic/radiotherapy, and magneto-mechanical/photothermal/photodynamic/chemotherapy. Among the combined modalities, chemotherapy, radiotherapy, and PTT are the most prevalent, which can be attributed to their relatively long history, mature techniques, and respective uniqueness in antitumor efficacy. Moreover, immunotherapy, owing to its irreplaceable ability against recurrence and metastasis, shows a rising trend in triple/multiple-modal therapies. It is believed that, in the future, more combinations will be explored, and newly developed monotherapies will keep emerging and play a role alongside PDT in synergistic cancer treatment, to fully exploit the advantages of current treatment methods and further boost the antitumor efficiency.

6.6 PDT Dosimetry

The PDT dose needs to be quantified for real-time detection, adjustment, and optimization according to the individual patient [55–57]. Measurable dosimetric parameters and available techniques are listed in Table 6.4. It has been shown that, given the same PS dose (per kilogram of body weight), drug metabolism time, and light dose, the actual clinical efficacy of PDT varies significantly among individual patients. The research methods of PDT dosimetry are generally divided into four types: explicit dosimetry, implicit dosimetry, direct dosimetry, and biological response dosimetry. Explicit dosimetry involves measuring the three components of the photodynamic reaction (i.e. PS, light, and oxygen), while implicit dosimetry is based on monitoring the photobleaching of the PS, e.g. measuring the fluorescence decay. Monitoring the biological tissue response has been demonstrated as another means of following therapeutic outcomes and predicting PDT-induced vascular and tissue damage. Additionally, direct dosimetry relies on directly monitoring the generation of 1O2 via detection of its characteristic 1270 nm luminescence.

Table 6.4 Available techniques for measuring dosimetric parameters.

Explicit dosimetry
PS (absorbance, concentration): absorption spectroscopy, fluorescence spectroscopy
Irradiation light (fluence rate, fluence): flat and isotropic detectors
Molecular oxygen (oxygen saturation): diffuse optical spectroscopy, spatial frequency domain imaging
Molecular oxygen (oxygen partial pressure): oxygen-sensitive probes
Implicit dosimetry
PS photobleaching (concentration): fluorescence spectroscopy
Biological response
Cell death (apoptosis/necrosis/autophagy): molecular biomarkers
Blood flow: laser Doppler flowmetry, laser Doppler imaging, laser speckle imaging, Doppler OCT
Vascular damage (diameter and depth of blood vessels): optical coherence tomography, spatial frequency domain imaging, reflectance confocal microscopy
Immune modulation: molecular biomarkers
Other indicators (NADH fluorescence): fluorescence lifetime imaging
Direct dosimetry
Singlet oxygen (concentration): time-resolved luminescence, chemical probes
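As a toy illustration of the explicit-dosimetry idea, one can accumulate a dose metric as the time integral of fluence rate times PS concentration, with a simple exponential photobleaching term standing in for the PS decay that implicit dosimetry tracks. All parameter values below are assumed, for illustration only:

```python
import math

def pdt_dose_metric(fluence_rate_mW_cm2, c0_uM, k_bleach_per_s, duration_s, dt=0.05):
    """Explicit-dosimetry-style metric: time integral of fluence rate x PS
    concentration, with simple exponential photobleaching of the PS.
    Returned units: mW*s*uM/cm2 (an illustrative, uncalibrated metric)."""
    dose = 0.0
    steps = int(duration_s / dt)
    for i in range(steps):
        c = c0_uM * math.exp(-k_bleach_per_s * i * dt)  # remaining PS
        dose += fluence_rate_mW_cm2 * c * dt
    return dose

# Assumed parameters: 100 mW/cm2, 5 uM PS, slow bleaching, 300 s session
print(f"dose metric: {pdt_dose_metric(100.0, 5.0, 1e-3, 300.0):.0f}")
```

A real explicit-dosimetry model would also track the oxygen term and use calibrated spectroscopic measurements; the sketch only shows why the delivered dose falls below the naive fluence x concentration product once photobleaching is included.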


6.6.1 Explicit Dosimetry

6.6.1.1 Irradiation Light

The distribution and dose of light in tissue during PDT are key dosimetric parameters for predicting the therapeutic outcome. The light distribution in biological tissues can be calculated from the measured optical properties of the tissue model, and can also be estimated by using appropriate photoelectric detectors and probes to directly measure the fluence rate. For the four main modes of light delivery (i.e. anterior surface, point, cylindrical, and inserted-fiber illumination), the dependence of PDT efficacy on the incident light dose can be quantitatively evaluated. At the same time, the absorption efficiency of the PS differs at different wavelengths, which also affects the fluence rate distribution.

6.6.1.2 PS Concentration

Noninvasive quantitative determination of PS concentration by calibrating raw fluorescence or absorption spectra (e.g. for nonfluorescent PSs) is challenging, because both the excitation light and the fluorescence emitted by the excited PS are absorbed and scattered by the tissue, resulting in varying degrees of spectral distortion. To determine the absolute PS concentration in biological tissue, different methods have been explored based on diffuse reflectance spectroscopy (DRS) and fluorescence spectroscopy (FS), from which the PS concentration can be obtained quantitatively with additional empirical correction and calibration. In another approach, the quantitative fluorescence components (such as tissue autofluorescence and PS fluorescence) are extracted from the raw fluorescence to eliminate the distortion caused by tissue absorption and scattering, and the PS concentration is then determined accurately by spectral unmixing and calibration; differential path-length spectroscopy is one successful example of this class. The PS concentration can also be quantified using pulsed-laser excitation, with the measured fluorescence reconstructed using the normalized Born approximation, and the fluorescence signal can be detected with a CCD array. Several newer optical imaging techniques for quantitative fluorescence measurement have been studied to determine the pharmacokinetics of PSs for PDT, including fluorescence-to-reference imaging, dual-wavelength excitation, NIR imaging, spatial frequency domain imaging, tomography, and time-gated fluorescence tomography. These methods are designed to decouple absorption and scattering effects in the frequency, spatial, and time domains, respectively.

6.6.1.3 Tissue Oxygen Partial Pressure

The oxygen content in normal human tissue is about 5%. Some solid tumors have poor vascular permeability, and the tissue oxygen partial pressure (pO2) may be low or the tissue even deoxygenated, limiting the efficacy of PDT. In addition, since 600–700 nm light is absorbed much more strongly by deoxyhemoglobin than by oxyhemoglobin, the penetration depth of light in tissue increases significantly at high tissue oxygen partial pressure. This is another important reason to ensure sufficient pO2 in the tissue. There are three main approaches for detecting tissue pO2: (i) needle probes based on polarographic (Clark-type) electrodes or dynamic fluorescence quenching, such as the Eppendorf probe and the OxyLite system, inserted directly into the tissue to measure pO2; (ii) various spectroscopic techniques based on the difference in absorption between oxyhemoglobin and deoxyhemoglobin, such as frequency-domain diffuse reflectance spectroscopy, photon migration spectroscopy, and Fourier transform spectral imaging; and (iii) blood-oxygen-level-dependent NMR and ESR spectroscopy. In parallel, a series of optical methods have been developed to evaluate microvascular StO2, which can be quantitatively determined by measuring the spectral fingerprints of oxyhemoglobin and deoxyhemoglobin absorption with different temporal and spatial resolutions. Among them, broadband/near-infrared DRS, frequency-domain photon migration spectroscopy, and Fourier transform spectroscopy have been used to measure PDT-induced changes in tumor oxygenation. Recently, differential path-length spectroscopy, which overcomes the excessively deep sampling volume of DRS through a differential probe design, has been used to measure the changes in blood oxygen saturation and blood volume fraction at the surface before and after PDT. Spatial frequency domain imaging (SFDI) can map optical properties and oxygen saturation in two dimensions, and can therefore be used to measure biochemical composition changes of PWS, including changes in oxygen saturation.
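As an illustrative sketch of the spectral-unmixing step behind such StO2 measurements, the tissue absorption coefficient can be modeled as a linear combination of oxy- and deoxyhemoglobin extinction spectra and inverted by least squares. The extinction values below are placeholders, not tabulated data; real implementations use published HbO2/Hb spectra and account for scattering:

```python
import numpy as np

# Illustrative (not tabulated) extinction coefficients at four wavelengths.
wavelengths = np.array([660.0, 700.0, 760.0, 850.0])  # nm
eps_hbo2 = np.array([0.08, 0.09, 0.14, 0.27])         # hypothetical HbO2 spectrum
eps_hb   = np.array([0.85, 0.45, 0.40, 0.18])         # hypothetical Hb spectrum

def sto2_from_absorption(mu_a):
    """Linear unmixing: mu_a(lambda) = c_HbO2*eps_HbO2 + c_Hb*eps_Hb,
    solved in the least-squares sense; StO2 = oxy / total hemoglobin."""
    E = np.column_stack([eps_hbo2, eps_hb])
    c, *_ = np.linalg.lstsq(E, mu_a, rcond=None)
    c = np.clip(c, 0, None)            # concentrations are non-negative
    return c[0] / (c[0] + c[1])

# Forward-simulate a measurement at 70% oxygen saturation, then invert it.
c_true = np.array([0.7, 0.3])
mu_a_meas = eps_hbo2 * c_true[0] + eps_hb * c_true[1]
print(round(sto2_from_absorption(mu_a_meas), 2))  # -> 0.7
```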

6.6.2 Implicit Dosimetry

Once quantitative measurement of PS fluorescence or concentration can be achieved during PDT treatment, PS fluorescence photobleaching can be used as an implicit dosimetry parameter for evaluating PDT efficacy under certain specific conditions, namely 1O2-mediated bleaching under high pO2. In addition to its killing effect on the target tissue, the 1O2 produced during PDT may also react with ground-state PS molecules to form irreversible photoproducts (i.e. photobleaching), decreasing the PS concentration and the 1O2 yield in the reaction system. By measuring the photobleaching of the PS, the amount of 1O2 produced during PDT can be indirectly evaluated. Mathematical expressions for photobleaching, photoproduct formation, and photodynamic dose in cell suspensions, as well as numerical models of PDT, have been successfully established.
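As a numerical sketch of this idea, a common first-order simplification treats photobleaching as exponential in the delivered fluence, so the drop in PS concentration serves as an implicit surrogate for the cumulative 1O2 dose. The parameter values below are illustrative only:

```python
import math

def photobleach(ps0, beta, fluence_rate, t):
    """First-order photobleaching model (a common simplification):
    d[PS]/dt = -beta * phi * [PS]  ->  [PS](t) = [PS]0 * exp(-beta*phi*t)."""
    return ps0 * math.exp(-beta * fluence_rate * t)

def implied_dose(ps0, ps_t):
    """Implicit-dose surrogate: the amount of PS consumed, taken as
    proportional to cumulative singlet-oxygen generation."""
    return ps0 - ps_t

# beta in cm^2/J, fluence rate in mW/cm^2-equivalent units, t in seconds.
ps_t = photobleach(ps0=1.0, beta=0.002, fluence_rate=50.0, t=10.0)
print(round(ps_t, 3), round(implied_dose(1.0, ps_t), 3))  # -> 0.368 0.632
```

Real bleaching kinetics can deviate from first order (e.g. when 1O2 itself mediates the bleaching), which is why the text restricts the validity of this surrogate to specific conditions.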

6.6.3 Biological Response

Biological monitoring has proven to be another effective method for monitoring treatment and predicting PDT-induced tissue injury, which arises from a combination of direct target-cell death (necrosis, apoptosis, and autophagy), tissue ischemia and cell death due to vascular injury, and immune modulation. In particular, several nondestructive optical techniques for monitoring vascular injury during PDT have been widely studied. Vascular injury can be assessed by noninvasive measurement of blood flow, blood perfusion, blood volume fraction, and vascular diameter and density [56, 57].


Blood flow and perfusion can be measured by laser Doppler flowmetry, laser Doppler imaging, and laser speckle imaging [58]. Diffuse correlation spectroscopy (DCS) has been used to continuously measure the blood flow of fibrosarcoma in mice during PDT; the data suggested that DCS can serve as a real-time monitor of the PDT vascular response and an indicator of therapeutic efficacy. Blood flow and microvascular StO2 have been monitored by combining intrinsic optical signal imaging with laser speckle imaging. In addition, the feasibility of monitoring the microvascular response during PDT in animal models by Doppler optical coherence tomography (DOCT) has been demonstrated; real-time monitoring of PDT-induced vascular changes by DOCT is conducive to optimizing PDT dosimetry. Changes in StO2 and blood perfusion have been measured by DRS and laser Doppler imaging, respectively: the local oxygen saturation decreased rapidly while blood perfusion increased to compensate for PDT-induced oxygen depletion. Recently, laser speckle imaging has been used for microcirculation imaging and for monitoring PWS microcirculation changes during vascular-targeted PDT; perfusion of PWS lesions can be quantified before, during, and after treatment. In vascular-targeted PDT, measurement of the blood volume fraction and vessel diameter is particularly important for PWS, and DRS is the most commonly used technique. Vessel diameter during PDT can also be measured by real-time photoacoustic imaging.
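As an illustration of how laser speckle imaging encodes flow, the local speckle contrast K = sigma/mean is computed over a small sliding window; faster flow blurs the speckle during the exposure and lowers K (flow speed is often taken proportional to 1/K^2). A minimal sketch on synthetic data:

```python
import numpy as np

def speckle_contrast(image, win=7):
    """Local speckle contrast K = std/mean over a win x win window.
    Motion blur during the exposure lowers K, so low K indicates flow."""
    img = image.astype(float)
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (win, win))
    mean = windows.mean(axis=(-2, -1))
    std = windows.std(axis=(-2, -1))
    return std / np.maximum(mean, 1e-12)

rng = np.random.default_rng(0)
static = rng.exponential(1.0, (64, 64))        # fully developed speckle, K ~ 1
blurred = 0.2 * static + 0.8 * static.mean()   # motion-blurred speckle, lower K
print(speckle_contrast(static).mean() > speckle_contrast(blurred).mean())  # -> True
```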

6.6.4 Direct Dosimetry

1O2 is generally considered the main cytotoxic species inducing biological damage, so its luminescence could serve as the gold standard of PDT dosimetry. Given the complicated interactions between PS, light, and oxygen, direct dosimetry is particularly attractive because it circumvents this complication. The feasibility of detecting the 1270 nm luminescence was successfully demonstrated in animal models and has been reported in humans. Compared with indirect determination of 1O2, the 1O2 luminescence can be detected without adding chemical probes. Time- and spectral-resolved detection systems for 1O2 luminescence measurement have been developed using a high-sensitivity near-infrared photomultiplier tube and a camera, respectively. For direct dosimetry, the effects of PS dose, photobleaching, light dose, oxygen concentration, and tissue optical properties on the production of 1O2 during PDT should be quantitatively studied to optimize and adjust the treatment regimens for various indications. A typical system for time- and spectral-resolved 1O2 luminescence detection is shown in Figure 6.9 [59]. Briefly, to detect the time-resolved 1O2 luminescence, a pulsed laser was used as the excitation source. Different band-pass filters (e.g. 1190, 1230, 1270, 1310, and 1350 nm) were installed in a filter wheel, achieving spectral-resolved 1O2 luminescence measurement with higher collection efficiency than the spectral discrimination achievable with a monochromator. A state-of-the-art commercial PMT (H10330-45, Hamamatsu, Japan) and a fast photon counter


Figure 6.9 Schematic diagram of the time- and spectral-resolved 1O2 luminescence detection system. Source: Reproduced from Li et al. [59] Figure 2 with permission of John Wiley and Sons.

(MSA-300 multichannel scaler, Becker & Hickl GmbH, Germany) were chosen for measuring and counting the NIR luminescence, respectively. Figure 6.10a shows an example of time-resolved NIR luminescence at each of the five detection wavelengths for the porphyrin-based PS hematoporphyrin monomethyl ether (HMME, Honglv Photosensitizer Co., Ltd., Shanghai, China) [59]. The signal at 1270 nm can be clearly identified and shows the expected rise and subsequent fall, consistent with the lifetimes of the PS triplet state and of the 1O2 decay, respectively. There is also a strong background signal below about 1 μs, even after all the spectral filtering. Hence, the time-resolved signals at each wavelength were integrated from 1 to 25 μs, yielding a clean spectral peak at 1270 nm, as indicated in Figure 6.10b. This 1O2 peak was linearly dependent on HMME concentration (Figure 6.10b, inset), as expected. Recently, PDT-generated near-infrared 1O2 luminescence was measured together with cell viability for ALA-induced PpIX and exogenous PpIX at different incubation times. As shown in Figure 6.11, the clonogenic survival curves for ALA-induced, mitochondrially localized PpIX as a function of the cumulative 1O2 luminescence counts were steeper by a factor of 3.1 ± 0.2 than those for the plasma-membrane-localized exogenous PpIX. The control cells had an average final surviving fraction of 0.94 ± 0.04, indicating that the experimental manipulations did not have a significant effect on cell viability (data not shown). This finding indicates that ALA-induced PpIX is more effective than exogenous PpIX for the same cumulative 1O2 luminescence counts. In each case, the cumulative 1O2 signal correlated well and reproducibly with the relative cell viability. One-way ANOVA

Figure 6.10 Time-resolved (a) and spectrally resolved (b) NIR luminescence from 15 μM HMME in distilled water. Source: Reproduced from Li et al. [59] Figure 3 with permission of John Wiley and Sons.

Figure 6.11 Clonogenic surviving fraction versus cumulative 1O2 luminescence counts for ALA-induced PpIX (◽) and exogenous PpIX (◼). Each curve is normalized to the pretreatment value for a given experiment and the error bars are 1 standard deviation on the colony counts.

demonstrated a statistically significant difference (p < 0.05) between the cell viability for ALA-induced PpIX and exogenous PpIX. A simple interpretation is that the mitochondria are more sensitive targets than the plasma membrane. As illustrated in Figure 6.12, the 1O2 luminescence imaging system consists of a thermoelectrically cooled InGaAs camera (Xenics model XEVA-1.7-320, 14 bit, Leuven, Belgium) and a front-end optical collection unit that was further optimized for in vivo studies [60]. The spectral response range of the NIR camera is in the

Figure 6.12 Schematic diagram of the 1O2 luminescence imaging system. Source: Lin et al. [60], Reproduced with permission from John Wiley & Sons.

900–1700 nm range, and the pixel size is 30 μm at a resolution of 320 × 256 pixels; the maximal sensing area of the camera is therefore 9.60 × 7.68 mm2. To maximize the collection and transmission efficiency of NIR light, an optical collection unit with a large relative aperture (1 : 0.86) was designed in ZEMAX software (ZEMAX Development Corporation, Bellevue, USA), with a magnification of 1×. More importantly, the distortion of the optical imaging system was less than 0.1%. All lenses in this system received antireflection coatings optimized for the 1100–1400 nm spectral range, and the transmission was no less than 99.0%. The irradiation light was provided by a semiconductor laser (532 nm, Changchun New Industries Optoelectronics Technology Co., Ltd., Changchun, China), delivered by an optical fiber (R400-7-UV–VIS, Ocean Optics Inc., Dunedin, USA). The dynamic changes of NIR luminescence in blood vessels can be monitored by using the 1270 nm BP filter. For this, three mice received RB at 25.0 mg/kg BW, and images were recorded every 10 seconds (2 s per frame) for 150 seconds. A typical example is shown in Figure 6.13a, where the vascular structure in the DSWC model and the vascular occlusion (i.e. reduction in blood vessel diameter) can be clearly visualized [60]. As shown in Figure 6.13b, partial constriction was detected for the veins in ROI-1 and ROI-2 after V-PDT, labeled by the red arrows in Figure 6.13a, while the arterioles were completely constricted, as labeled by the white arrows in Figure 6.13b. Figure 6.13c indicates that the NIR luminescence and fluorescence intensities in the blood vessels of the DSWC model gradually decreased during V-PDT. However, the 1O2 luminescence intensity decreased much faster than the RB fluorescence intensity (i.e. photobleaching) over the first 30 s. One possible explanation is that the oxygen concentration in the blood vessels was significantly reduced at the beginning of V-PDT, since oxygen was continuously consumed by the photodynamic reaction and the generated 1O2 caused vasoconstriction. Although these preliminary findings are encouraging, the mechanisms relating 1O2 luminescence intensity to PS photobleaching under different conditions should be further investigated.
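The time-resolved 1270 nm signals discussed above, with a rise set by the PS triplet lifetime and a decay set by the 1O2 lifetime, are commonly modeled with a two-lifetime (biexponential) expression. A minimal sketch, with illustrative (not measured) lifetimes:

```python
import math

def singlet_oxygen_signal(t, amplitude, tau_triplet, tau_delta):
    """Rise-and-fall model for time-resolved 1O2 luminescence, a common
    simplification of triplet -> 1O2 -> ground-state kinetics:
        L(t) = A * (exp(-t/tau_delta) - exp(-t/tau_triplet)),
    where tau_triplet governs the rise and tau_delta the decay."""
    return amplitude * (math.exp(-t / tau_delta) - math.exp(-t / tau_triplet))

# Illustrative lifetimes in microseconds, not measured values.
tau_t, tau_d = 2.0, 3.5
# The signal peaks where dL/dt = 0:
peak_t = (tau_t * tau_d / (tau_d - tau_t)) * math.log(tau_d / tau_t)
print(round(peak_t, 2))  # -> 2.61
```

Fitting this model to measured decays (e.g. the 1270 nm curves in Figure 6.10a) is one way lifetimes, and hence the local oxygen environment, are extracted in direct dosimetry.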

Figure 6.13 Dynamic monitoring of NIR luminescence in blood vessels in the DSWC model with a 1270 nm BP filter (a), white-light images of blood vessels in the DSWC model before and after V-PDT (b), and NIR luminescence and RB fluorescence intensities versus treatment time during V-PDT (c). Source: Lin et al. [60], Reproduced with permission from John Wiley & Sons.


6.7 Clinical Applications

Based on the diseases treated, clinical applications of PDT are primarily divided into three types: tumor-targeting PDT, V-PDT, and antimicrobial PDT, among which antimicrobial PDT shows particular promise for adoption into future clinical practice. Clinical applications of PDT, together with the typically employed PSs, are summarized in Table 6.5.

6.7.1 Tumor-Targeting PDT

Successful applications of tumor-targeting PDT have been reported as a promising therapeutic modality in dermatological cancer, head and neck tumors, brain tumors, oral cancer, tongue cancer, nasopharyngeal carcinoma, lung cancer, breast cancer, bladder cancer, osteosarcoma, and digestive system tumors, including

Table 6.5 Clinical applications of PDT with typical PSs.

Indications                   PSs                                        References
Dermatological cancer         5-ALA, MAL                                 [61]
Head and neck tumor           Photofrin                                  [62]
Brain tumor                   Talaporfin sodium                          [63]
Oral cancer                   Photofrin, Foscan®                         [64]
Tongue cancer                 mTHPC                                      [65]
Nasopharyngeal carcinoma      Foscan                                     [66]
Lung cancer                   WST11, Fotolon®, Talaporfin                [67]
Breast cancer                 Visudyne, Purlytin, Lutex                  [68]
Bladder cancer                5-ALA, Hexaminolevulinic acid              [69]
Osteosarcoma                  Hiporfin, mTHPC, ALA                       [70]
Esophageal cancer             Photofrin, Foscan, 5-ALA                   [71]
Gastric cancer                Photofrin                                  [72]
Colorectal cancer             5-ALA, Chlorins                            [73]
Liver cancer                  Photofrin, Sinoporphyrin sodium            [74]
Bile duct cancer              Photofrin, Photosan                        [75]
Pancreatic cancer             mTHPC, Photofrin, 5-ALA                    [76]
PWS                           Hematoporphyrin monomethyl ether (HMME)    [77]
Prostate cancer               TOOKAD                                     [78]
AMD                           Visudyne                                   [79]
Wound infection               Methylene blue, Toluidine blue O           [80]
HPV                           5-ALA                                      [81]
HIV                           5-ALA                                      [82]


esophageal cancer, gastric cancer, colorectal cancer, liver cancer, bile duct cancer, and pancreatic cancer [83]. PDT is especially attractive for sites prone to large-area lesions, for inoperable sites, and for operable sites that would require difficult surgery, such as the eyelid or nasal margin, owing to its treatment accuracy and minimal invasiveness, which spare the skin and provide good safety and cosmetic outcomes. PDT for dermatological cancers, superficial tumors, or precancerous lesions, such as actinic keratosis and Bowen's disease, normally uses topically applied ALA or MAL as the PS. In deep PDT for squamous cell carcinoma or breast Paget disease, HpD-PDT is commonly carried out via intravenous administration. For intracavitary tumors, PDT shows good clinical efficacy with little effect on the functional structures of organs. In addition, PDT provides radical cure for early tumors, as well as palliative treatment for mid-stage and/or advanced tumors. For example, lung cancer, one of the earliest indications treated by PDT, commonly receives PDT at an early stage in patients unsuitable for resection, or as a preoperative treatment to shrink the lesion and narrow the resection scope. For bile duct carcinoma, which typically develops insidiously so that radical cure is often impossible at diagnosis, PDT is an appropriate option for palliative treatment of unresectable cholangiocarcinoma, which is comparatively insensitive to chemotherapy and radiotherapy. Furthermore, to address the high recurrence rate of bladder cancer after surgery, cystoscopy-guided PDT is combined with transurethral electroresection to effectively remove residual cancerous cells, reduce the recurrence rate, and alleviate painful symptoms such as hematuria.
In clinical tumor-targeting PDT, the current focus remains the development of novel PSs with higher ROS yield and good targeting of cancerous cells. An extended treatment depth is also critically desired for PDT of deep-seated tumors. Along with the availability of novel PSs, the clinical coverage of PDT will continue to expand, hopefully making it a prevailing therapy for tumor suppression in future clinical practice.

6.7.2 Vascular-Targeted PDT

In contrast to tumor-targeting PDT, V-PDT damages blood vessels through ROS-induced vascular injury. In response to the injury, thrombin is produced, released, aggregated, or activated, followed by hemagglutination, thrombosis, and finally vascular closure, resulting in insufficient oxygen and nutrient supply to the lesion and thereby triggering cell death and tissue necrosis [84]. To date, V-PDT has become one of the three major clinical applications of PDT. The vascular diseases currently treated by PDT include: (i) Cutaneous microvascular diseases, including hemangioma and PWS. PWS in particular is a congenital, benign vascular malformation that preferentially affects the face and neck; to treat it, Ying Gu developed V-PDT in the early 1990s, and after nearly 30 years of clinical practice with satisfactory safety and effectiveness, V-PDT has proved to be a favored treatment for PWS. (ii) Fundus microvascular diseases, such as age-related macular degeneration (AMD), in which choroidal neovascularization is the typical manifestation causing blindness and is difficult to treat by traditional laser photocoagulation; V-PDT is much safer, with less harm to the normal macular tissue around the lesions. (iii) Gastrointestinal mucosal microvascular diseases, including esophageal varices, gastric antral vascular ectasia, and radiation gastroenteritis, which often lead to severe anemia; here PDT is uniquely suited to the dispersed distribution of lesions because of its good selectivity, small trauma, short recovery period, high safety, and durable results. (iv) Tumors with particularly rich vasculature, such as prostate cancer in elderly men, for which PDT is also an attractive treatment modality. Specifically, TOOKAD®, developed jointly by the Weizmann Institute of Science and Steba Biotech in Israel, has been approved by the European Medicines Agency for V-PDT of prostate cancer with remarkable results. The present research focus for boosting clinical V-PDT is to quantitatively establish the relationship between the V-PDT dosage and the vascular biological response, and to develop protective strategies for the non-target vasculature during V-PDT.

6.7.3 Microbial-Targeting PDT

Pathogenic microorganisms present great diversity and rapid variation, posing major challenges for antimicrobial treatment. Additionally, decades of antibiotic abuse have induced microbial drug resistance, making antimicrobial treatment even more problematic. Microbial-targeting PDT, or antimicrobial PDT (aPDT), damages microorganisms mainly by acting on cell walls, cell membranes, or DNA through a multi-target killing effect that provokes little drug resistance [85, 86]. Presently, aPDT has become an effective clinical method for antibacterial, antiviral, and antifungal treatment. The killing mechanisms behind aPDT fall primarily into two types: (i) leakage of intracellular substances, or inactivation of the membrane transport system and related proteases, both induced by ROS damage to the cell wall; and (ii) damage to the DNA double strand and interference with biological activities, such as proliferation and metabolism, resulting from ROS irreversibly destroying DNA bases and sugar components. Compared with the clinical applications of tumor-targeting PDT and V-PDT, the development of aPDT is still in its infancy. Clinical studies show a wide range of promising prospects for aPDT in a growing variety of bacterial, viral, and fungal infections. At present, the clinical indications for antibacterial PDT include wound infection, chronic ulcer infection, acne, and periodontal disease. Considering that bacterial structural diversity leads to variable aPDT sensitivity, it is critical to establish aPDT dosimetry against different bacterial strains to maximize therapeutic efficacy. Viruses play a major role in human infectious disease. In particular, high-risk human papillomavirus (HPV) is known for producing lesions in the lower genital tract (cervix, vagina, vulva). With the age of high-risk HPV infection decreasing, aPDT meets the demand not only for efficient treatment but also for good protection of organ function, and is thereby increasingly used against the


high-risk HPV infection. In addition, for human skin and mucosal infections with condyloma acuminatum caused by low-risk HPV, aPDT can effectively clear subclinical infections invisible to the naked eye, thereby greatly reducing recurrence. For fungal infections of body surfaces or cavities, the advantages of aPDT are only beginning to be revealed. For instance, Qiu et al. [87] applied PDT to treat two patients with early esophageal cancer combined with extensive Candida albicans infection; the patients were completely cured after only one to two aPDT treatments, with good preservation of esophageal structure and function. Additionally, in human immunodeficiency virus (HIV) patients, oropharyngeal/esophageal candidiasis is the most common opportunistic infection, but it can be treated satisfactorily by methylene blue-mediated aPDT [88]. Beyond the cases discussed above, aPDT has also been applied preliminarily to fungal infections of the cornea, oral cavity, and nails. During aPDT against fungi, it is important to select PSs that absorb maximally outside the absorption peaks of the abundant fungal pigments, to avoid large optical losses.

6.8 Future Perspective

With continuous progress in the basic research and clinical study of PDT, individualized treatment schemes are critically important for enhanced PDT but remain very challenging in the clinic. As presented in Figure 6.14, to improve clinical PDT efficacy, an all-in-one PDT protocol with precise spatiotemporal regulation of the dosimetric parameters is urgently needed to realize individualized treatment. The ideal protocol for enhanced PDT should incorporate all functions, including diagnosis, personalized treatment, precise monitoring, and real-time treatment

Figure 6.14 Schematic diagram for individual PDT with precise dosimetry. Source: Buhong Li.


regulation. During PDT, the integrative protocol first performs diagnosis on the patient, followed by formulation and execution of a personalized treatment scheme. Meanwhile, a spatiotemporal monitoring system continuously measures the variations in dosimetric parameters and simultaneously predicts the optimal values for higher PDT efficacy. The predicted optimization and modification are then fed back to recalculate the treatment scheme for the patient, realizing patient-specific treatment with real-time regulation of the therapeutic parameters. Such spatiotemporal regulation depends strongly on precise monitoring of the dosimetric parameters. The dosimetric parameters that affect PDT efficacy include not only the doses of PS, light, and oxygen, but also PS pharmacokinetics, tissue optical properties, and the efficiency of the drug delivery system. Conventional measurements of the administered PS dosage and the irradiation intensity cannot fully account for individual variations in pharmacokinetics and tissue optical properties, or for the dynamic interactions between PS, light, and oxygen. Currently, most clinical PDT dosimetry is based mainly on the administered PS dose, the drug-light interval, and the light intensity measured directly at the light source rather than in the tissue in vivo. Therefore, precise spatiotemporal regulation remains a barrier to achieving individualized PDT treatment. The first challenge for monitoring dosimetric parameters in PDT is the heterogeneity of the targeted tissue, which results in highly nonuniform spatial distributions of PS, light, and oxygen. To address this issue, instead of point-resolved fiber-optic measurement of the dosimetric parameters, 3D optical molecular imaging techniques need further investigation to achieve real-time measurement in vivo.
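The measure-predict-adjust loop described above can be sketched as a simple feedback iteration; everything below is a conceptual placeholder, not a clinical algorithm, and all names and values are illustrative:

```python
def closed_loop_pdt(measure, predict, adjust, steps):
    """Conceptual sketch of the feedback protocol described above:
    repeatedly measure dosimetric parameters, predict optimal settings,
    and feed the adjustment back into the treatment plan."""
    plan = {"fluence_rate": 0.1}          # initial light setting (arbitrary units)
    for _ in range(steps):
        params = measure(plan)            # e.g. PS, light, oxygen readouts
        target = predict(params)          # predicted optimal fluence rate
        plan = adjust(plan, target)       # real-time regulation step
    return plan

# Toy run: drive the fluence rate toward a predicted optimum of 0.15.
final = closed_loop_pdt(
    measure=lambda plan: plan["fluence_rate"],
    predict=lambda measured: 0.15,
    adjust=lambda plan, t: {"fluence_rate": plan["fluence_rate"] + 0.5 * (t - plan["fluence_rate"])},
    steps=8,
)
print(round(final["fluence_rate"], 3))  # -> 0.15
```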
Secondly, a dosimetry strategy that can overcome the respective limitations of the different approaches and simultaneously monitor PS, light, and oxygen without additional interruption remains attractive for clinical PDT. More importantly, the mechanisms by which the administered PS dose, photobleaching, irradiation intensity, surrounding oxygen concentration, and tissue optical properties affect the yield of 1O2 require further elucidation. Finally, the correlation between PDT-induced biological responses and the doses of PS, light, and oxygen should be established quantitatively for PDT efficacy prediction. Beyond the all-in-one protocol with spatiotemporal regulation of dosimetric parameters, other factors can improve PDT. Firstly, most studies of novel PSs present inadequate clinical evidence and unsatisfactory clinical translation. One solution is to facilitate clinical translation of the PSs already under in vivo study, or to synthesize PSs from materials that are either already clinically approved or inherently present in the human body. Another possibility is to employ advanced nanotechnology-based strategies to upgrade PSs that are already used clinically, rather than newly emerging materials with unknown biosafety and prolonged, sophisticated formulation procedures. Additionally, unlike conventional PDT, which relies on physicians' experience and complex medical devices, domestic (home-based) PDT, long an attractive subject, depends significantly on


not only the effectiveness of personalized PDT but also the combination of intelligent systems with simple, integrated medical devices. Moreover, a domestic PDT device must be portable, straightforward to operate, and cost-efficient to achieve commercial availability. Finally, considering the prospect of widespread domestic PDT devices, exploiting naturally degradable materials will be critical for coping with the increased quantity of disposed electronic devices. Although enhanced PDT is still at a developing stage, its fast progress is highly encouraging. It can be anticipated that ideal PS-mediated PDT for personalized treatment, perhaps even at home, is around the corner, further spreading the benefits of PDT.

Acknowledgments

This study was supported by the National Natural Science Foundation of China (61935004, 62227823, 62005047) and the Science and Technology Foundation of Fujian Province of China (2019Y4004). We would like to thank all the researchers who have contributed to this field and whose names are listed in the references.

References

1 Li, X., Lovell, J.F., Yoon, J. et al. (2020). Clinical development and potential of photothermal and photodynamic therapies for cancer. Nat. Rev. Clin. Oncol. 17 (11): 657–674.
2 Hu, T., Wang, Z., Shen, W. et al. (2021). Recent advances in innovative strategies for enhanced cancer photodynamic therapy. Theranostics 11 (7): 3278–3300.
3 Li, B., Chen, T., Lin, L. et al. (2022). Recent progress in photodynamic therapy: from fundamental research to clinical applications. Chin. J. Lasers 49 (5): 0507101.
4 Wilson, B.C., Patterson, M.S., Li, B. et al. (2015). Correlation of in vivo tumor response and singlet oxygen luminescence detection in mTHPC-mediated photodynamic therapy. J. Innov. Opt. Heal. Sci. 8 (1): 1540006.
5 Bolze, F., Jenni, S., Sour, A. et al. (2017). Molecular photosensitisers for two-photon photodynamic therapy. Chem. Commun. 53 (96): 12857–12877.
6 Sorbellini, E., Rucco, M., and Rinaldi, F. (2018). Photodynamic and photobiological effects of light-emitting diode (LED) therapy in dermatological disease: an update. Lasers Med. Sci. 33 (7): 1431–1439.
7 Chen, D., Tang, Q., Zou, J. et al. (2018). pH-Responsive PEG-doxorubicin encapsulated Aza-BODIPY nanotheranostic agent for imaging-guided synergistic cancer therapy. Adv. Healthcare Mater. 7 (7): 1701272.
8 Cai, J., Zheng, Q., Huang, H. et al. (2018). 5-Aminolevulinic acid mediated photodynamic therapy inhibits proliferation and promotes apoptosis of A375 and A431 cells. Photodiagn. Photodyn. Ther. 21: 257–262.

References

9 Shui, S., Zhao, Z., Wang, H. et al. (2021). Non-enzymatic lipid peroxidation initiated by photodynamic therapy drives a distinct ferroptosis-like cell death pathway. Redox Biol. 45: 102056. 10 Zhang, J., Jiang, C., Longo, J.P.F. et al. (2018). An updated overview on the development of new photosensitizers for anticancer photodynamic therapy. Acta Pharmacol. Sin. 8 (2): 137–146. 11 Lin, L., Song, X., Dong, X. et al. (2021). Nano-photosensitizers for enhanced photodynamic therapy. Photodiagn. Photodyn. Ther. 36: 102597. 12 Wang, K., Zhang, Y., Wang, J. et al. (2016). Self-assembled IR780-loaded transferrin nanoparticles as an imaging, targeting and PDT/PTT agent for cancer therapy. Sci. Rep. 6 (1): 1–11. 13 Kondo, K., Akita, M., and Yoshizawa, M. (2016). Solubility switching of metallophthalocyanines and their larger derivatives upon encapsulation. Chem. Eur. J. 22 (6): 1937–1940. 14 Park, Y.K. and Park, C.H. (2016). Clinical efficacy of photodynamic therapy. Obstet. Gynecol. Sci. 59 (6): 479–488. 15 Eskiler, G.G., Ozkan, A.D., Kucukkara, E.S. et al. (2020). Optimization of 5-aminolevulinic acid-based photodynamic therapy protocol for breast cancer cells. Photodiagn. Photodyn. Ther. 31: 101854. 16 Alcántara-González, J., Calzado-Villarreal, L., Sánchez-Largo, M.E. et al. (2020). Recalcitrant viral warts treated with photodynamic therapy methyl aminolevulinate and red light (630 nm): a case series of 51 patients. Lasers Med. Sci. 35 (1): 229–231. 17 Cecatto, R.B., de Magalhães, L.S., Rodrigues, M.F.S.D. et al. (2020). Methylene blue mediated antimicrobial photodynamic therapy in clinical human studies: the state of the art. Photodiagn. Photodyn. Ther. 31: 101828. 18 Horlings, R.K., Terra, J.B., and Witjes, M.J. (2015). mTHPC mediated, systemic photodynamic therapy (PDT) for nonmelanoma skin cancers: case and literature review. Lasers Surg. Med. 47 (10): 779–787. 19 Sun, M., Zhou, C., Zeng, H. et al. (2015). 
Hiporfin-mediated photodynamic therapy in preclinical treatment of osteosarcoma. Photochem. Photobiol. 91 (3): 533–544. 20 Zhao, Y., Tu, P., Zhou, G. et al. (2016). Hemoporfin photodynamic therapy for port-wine stain: a randomized controlled trial. PLoS One 11 (5): e0156219. 21 Yano, T., Minamide, T., Takashima, K. et al. (2021). Clinical practice of photodynamic therapy using talaporfin sodium for esophageal cancer. J. Clin. Med. 10 (13): 2785. 22 Phuong, P.T.T., Lee, S., Lee, C. et al. (2018). Beta-carotene-bound albumin nanoparticles modified with chlorin e6 for breast tumor ablation based on photodynamic therapy. Colloids Surf., B 171: 123–133. 23 Karges, J. (2022). Clinical development of metal complexes as photosensitizers for photodynamic therapy of cancer. Angew. Chem. Int. Ed. 61 (5): e202112236. 24 Huggett, M.T., Jermyn, M., Gillams, A. et al. (2014). Phase I/II study of verteporfin photodynamic therapy in locally advanced pancreatic cancer. Br. J. Cancer 110 (7): 1698–1704.

195

196

6 Enhanced Photodynamic Therapy

25 Xin, J., Wang, S., Zhang, L. et al. (2018). Comparison of the synergistic anticancer activity of AlPcS4 photodynamic therapy in combination with different low-dose chemotherapeutic agents on gastric cancer cells. Oncol. Rep. 40 (1): 165–178. 26 Teng, X., Li, F., Lu, C. et al. (2020). Carbon dots-assisted luminescence of singlet oxygen: generation dynamics but not cumulative amount of singlet oxygen responsible for photodynamic therapy efficacy. Nanoscale Horiz. 5 (6): 978–985. 27 Sarbu, M.I., Matei, C., Mitran, C.I. et al. (2019). Photodynamic therapy: a hot topic in dermato-oncology. Oncol. Lett. 17 (5): 4085–4093. 28 Xu, W., Qian, J., Hou, G. et al. (2019). A dual-targeted hyaluronic acid-gold nanorod platform with triple-stimuli responsiveness for photodynamic/photothermal therapy of breast cancer. Acta Biomater. 83: 400–413. 29 Tang, B.Z. and Liu, B. (2020). Catalyst: aggregation-induced emission-how far have we come, and where are we going next? Chemistry 6 (6): 1195–1198. 30 Shramova, E.I., Chumakov, S.P., Shipunova, V.O. et al. (2022). Genetically encoded BRET-activated photodynamic therapy for the treatment of deep-seated tumors. Light Sci. Appl. 11: 38. 31 Li, B. and Lin, L. (2022). Internal light source for deep photodynamic therapy. Light Sci. Appl. 11: 85. 32 Kalluru, P., Vankayala, R., Chiang, C.S. et al. (2013). Photosensitization of singlet oxygen and in vivo photodynamic therapeutic effects mediated by PEGylated W18 O49 nanowires. Angew. Chem. Int. Ed. 52 (47): 12332–12336. 33 Wang, G.D., Nguyen, H.T., Chen, H. et al. (2016). X-ray induced photodynamic therapy: a combination of radiotherapy and photodynamic therapy. Theranostics 6 (13): 2295–2305. 34 Wang, S., Dai, X.Y., Ji, S. et al. (2022). Scalable and accessible personalized photodynamic therapy optimization with FullMonte and PDT-SPACE. J. Biomed. Opt. 27 (8): 083006. 35 Li, F., Du, Y., Liu, J. et al. (2018). 
Responsive assembly of upconversion nanoparticles for pH-activated and near-infrared-triggered photodynamic therapy of deep tumors. Adv. Mater. 30 (35): 1802808. 36 Shen, Y., Shuhendler, A.J., Ye, D. et al. (2016). Two-photon excitation nanoparticles for photodynamic therapy. Chem. Soc. Rev. 45 (24): 6725–6741. 37 Blum, N.T., Zhang, Y., Qu, J. et al. (2020). Recent advances in self-exciting photodynamic therapy. Front. Bioeng. Biotechnol. 8: 1136. 38 Kamkaew, A., Cheng, L., Goel, S. et al. (2016). Cerenkov radiation induced photodynamic therapy using chlorin e6-loaded hollow mesoporous silica nanoparticles. ACS Appl. Mater. Interfaces 8 (40): 26630–26637. 39 Kawamura, K., Hikosou, D., Inui, A. et al. (2019). Ultrasonic activation of water-soluble Au25 (SR)18 nanoclusters for singlet oxygen production. J. Phys. Chem. C 123 (43): 26644–26652. 40 Kamanli, A.F. and Çetinel, G. (2021). Radiation mode and tissue thickness impact on singlet oxygen dosimetry methods for antimicrobial photodynamic therapy. Photodiagn. Photodyn. Ther. 36: 102483.

References

41 Kachynski, A.V., Pliss, A., Kuzmin, A.N. et al. (2014). Photodynamic therapy by in situ nonlinear photon conversion. Nat. Photonics 8 (6): 455–461. 42 Pucelik, B., Sułek, A., Barzowska, A. et al. (2020). Recent advances in strategies for overcoming hypoxia in photodynamic therapy of cancer. Cancer Lett. 492: 116–135. 43 Hong, L., Li, J., Luo, Y. et al. (2022). Recent advances in strategies for addressing hypoxia in tumor photodynamic therapy. Biomolecules 12 (1): 81. 44 Qin, S., Xu, Y., Li, H. et al. (2022). Recent advances in in-situ oxygen-generating and oxygen-replenishing strategies for hypoxic-enhanced photodynamic therapy. Biomater. Sci. 10: 51–84. 45 Jiang, L., Bai, H., Liu, L. et al. (2019). Luminescent, oxygen-supplying, hemoglobin-linked conjugated polymer nanoparticles for photodynamic therapy. Angew. Chem. Int. Ed. 131 (31): 10770–10775. 46 Ji, Y., Lu, F., Hu, W. et al. (2019). Tandem activated photodynamic and chemotherapy: Using pH-sensitive nanosystems to realize different tumour distributions of photosensitizer/prodrug for amplified combination therapy. Biomaterials 219: 119393. 47 Chen, D., Yu, Q., Huang, X. et al. (2020). A highly-efficient type I photosensitizer with robust vascular-disruption activity for hypoxic-and-metastatic tumor specific photodynamic therapy. Small 16 (23): 2001059. 48 Chen, D., Tang, Y., Zhu, J. et al. (2019). Photothermal-pH-hypoxia responsive multifunctional nanoplatform for cancer photo-chemo therapy with negligible skin phototoxicity. Biomaterials 221: 119422. 49 Yang, X., Yu, Q., Yang, N. et al. (2019). Thieno[3,2-b]thiophene-DPP based near-infrared nanotheranostic agent for dual imaging-guided photothermal/ photodynamic synergistic therapy. J. Mater. Chem. B 7 (15): 2454–2462. 50 Liu, L., Xie, H.J., Mu, L.M. et al. (2018). Functional chlorin gold nanorods enable to treat breast cancer by photothermal/photodynamic therapy. Int. J. Nanomed. 13: 8119–8135. 51 Curcio, A., Silva, A.K.A., Cabana, S. et al. (2019). 
Iron oxide nanoflowers @ cushybrids for cancer tri-therapy: interplay of photothermal therapy, magnetic hyperthermia and photodynamic therapy. Theranostics 9 (5): 1288–1302. 52 Xu, J., Xu, L., Wang, C. et al. (2017). Near-infrared-triggered photodynamic therapy with multitasking upconversion nanoparticles in combination with checkpoint blockade for immunotherapy of colorectal cancer. ACS Nano 11 (5): 4463–4474. 53 Bakhshizadeh, M., Moshirian, T., Esmaily, H. et al. (2017). Sonophotodynamic therapy mediated by liposomal zinc phthalocyanine in a colon carcinoma tumor model: Role of irradiating arrangement. Iran. J. Basic Med. Sci. 20 (10): 1088–1092. 54 Qiu, J., Xiao, Q., Zheng, X. et al. (2015). Single W18 O49 nanowires: a multifunctional nanoplatform for computed tomography imaging and photothermal/photodynamic/radiation synergistic cancer therapy. Nano Res. 8 (11): 3580–3590.

197

198

6 Enhanced Photodynamic Therapy

55 Lin, H., Shen, Y., Chen, D. et al. (2013). Feasibility study on quantitative measurements of singlet oxygen generation using singlet oxygen sensor green. J. Fluoresc. 23 (1): 41–47. 56 Pogue, B.W., Elliott, J.T., Kanick, S.C. et al. (2016). Revisiting photodynamic therapy dosimetry: reductionist & surrogate approaches to facilitate clinical success. Phys. Med. Biol. 61 (7): R57–R89. 57 Li, B., Gu, Y., and Wilson, B.C. (2017). Structural and functional imaging for vascular targeted photodynamic therapy. Proc. SPIE 10065: 1006504. 58 Chen, D., Ren, J., and Wang, Y. (2016). Relationship between the blood perfusion values determined by laser speckle imaging and laser Doppler imaging in normal skin and port wine stains. Photodiagn. Photodyn. Ther. 13: 1–9. 59 Li, B., Lin, L., and Lin, H. (2016). Photosensitized singlet oxygen generation and detection: recent advances and future perspectives in cancer photodynamic therapy. J. Biophotonics 9 (11–12): 1314–1325. 60 Lin, L., Lin, H., Shen, Y. et al. (2020). Singlet oxygen luminescence image in blood vessels during vascular targeted photodynamic therapy. Photochem. Photobiol. 96 (3): 646–651. 61 Queirós, C., Garrido, P.M., Maia Silva, J. et al. (2020). Photodynamic therapy in dermatology: beyond current indications. Dermatol. Thera. 33 (6): e13997. 62 Biel, M.A. (2010). Photodynamic therapy of head and neck cancers. In: Photodynamic Therapy, 281–293. Totowa, NJ: Humana Press. 63 Akimoto, J. (2016). Photodynamic therapy for malignant brain tumors. Neurol. Medico-Chirurgica 2015–0296. 64 Saini, R., Lee, N.V., Liu, K.Y. et al. (2016). Prospects in the application of photodynamic therapy in oral cancer and premalignant lesions. Cancers 8 (9): 83. 65 Karakullukcu, B., Nyst, H.J., van Veen, R.L. et al. (2012). mTHPC mediated interstitial photodynamic therapy of recurrent nonmetastatic base of tongue cancers: development of a new method. Head Neck 34 (11): 1597–1606. 66 Succo, G., Rosso, S., Fadda, G.L. et al. (2014). 
Salvage photodynamic therapy for recurrent nasopharyngeal carcinoma. Photodiagn. Photodyn. Ther. 11 (2): 63–70. 67 Wang, K., Yu, B., and Pathak, J.L. (2021). An update in clinical utilization of photodynamic therapy for lung cancer. J. Cancer 12 (4): 1154. 68 Banerjee, S.M., MacRobert, A.J., Mosse, C.A. et al. (2017). Photodynamic therapy: inception to application in breast cancer. The Breast 31: 105–113. 69 Railkar, R. and Agarwal, P.K. (2018). Photodynamic therapy in the treatment of bladder cancer: past challenges and current innovations. Eur. Urol. Focus 4 (4): 509–511. 70 Yu, W., Zhu, J., Wang, Y. et al. (2017). A review and outlook in the treatment of osteosarcoma and other deep tumors with photodynamic therapy: from basic to deep. Oncotarget 8 (24): 39833. 71 Wu, H., Minamide, T., and Yano, T. (2019). Role of photodynamic therapy in the treatment of esophageal cancer. Dig. Endosc. 31 (5): 508–516. 72 Yano, T. and Wang, K.K. (2020). Photodynamic therapy for gastrointestinal cancer. Photochem. Photobiol. 96 (3): 517–523.

References

73 Simelane, N.W.N., Kruger, C.A., and Abrahamse, H. (2020). Photodynamic diagnosis and photodynamic therapy of colorectal cancer in vitro and in vivo. RSC Adv. 10 (68): 41560–41576. 74 Zou, H., Wang, F., Zhou, J.J. et al. (2020). Application of photodynamic therapy for liver malignancies. J. Gastrointestinal Oncol. 11 (2): 431. 75 Lee, T.Y., Cheon, Y.K., and Shim, C.S. (2013). Current status of photodynamic therapy for bile duct cancer. Clin. Endosc. 46 (1): 38. 76 Wang, Y., Wang, H., Zhou, L. et al. (2020). Photodynamic therapy of pancreatic cancer: Where have we come from and where are we going? Photodiagn. Photodyn. Ther. 31: 101876. 77 Han, Y., Ying, H., Zhang, X. et al. (2020). Retrospective study of photodynamic therapy for pulsed dye laser-resistant port-wine stains. J. Dermatol. 47 (4): 348–355. 78 Noweski, A., Roosen, A., Lebdai, S. et al. (2019). Medium-term follow-up of vascular-targeted photodynamic therapy of localized prostate cancer using TOOKAD soluble WST-11 (Phase II Trials). Eur. Urol. Focus 5 (6): 1022–1028. 79 Mellish, K.J. and Brown, S.B. (2001). Verteporfin: a milestone in opthalmology and photodynamic therapy. Expert Opin. Pharmacother. 2 (2): 351–361. 80 Jia, Q., Song, Q., Li, P. et al. (2019). Rejuvenated photodynamic therapy for bacterial infections. Adv. Healthcare Mater. 8 (14): e1900608. 81 Hu, Z., Liu, L., Zhang, W. et al. (2018). Dynamics of HPV viral loads reflect the treatment effect of photodynamic therapy in genital warts. Photodiagn. Photodyn. Ther. 21: 86–90. 82 Xu, J., Xiang, L., Chen, J. et al. (2013). The combination treatment using CO2 laser and photodynamic therapy for HIV seropositive men with intraanal warts. Photodiagn. Photodyn. Ther. 10 (2): 186–193. 83 Li, G., Wang, Q., Liu, J. et al. (2021). Innovative strategies for enhanced tumor photodynamic therapy. J. Mater. Chem. B 9 (36): 7347–7370. 84 Xu, X., Shen, Y., Lin, L. et al. (2022). 
Multi-step deep neural network for identifying subfascial vessels in a dorsal skinfold window chamber model. Biomed. Opt. Exp. 13 (1): 426–437. 85 Cieplik, F., Deng, D., Crielaard, W. et al. (2018). Antimicrobial photodynamic therapy-what we know and what we don’t. Crit. Rev. Microbiol. 44 (5): 571–589. 86 Gilaberte, Y., Rezusta, A., Juarranz, A. et al. (2021). Antimicrobial photodynamic therapy: a new paradigm in the fight against infections. Front. Med. 8: 788888. 87 Qiu, H., Mao, Y., Gu, Y. et al. (2014). The potential of photodynamic therapy to treat esophageal candidiasis coexisting with esophageal cancer. J. Photochem. Photobiol. B 130: 305–309. 88 Zeng, B., Zeng, B., Hung, C. et al. (2021). Efficacy and acceptability of different anti-fungal interventions in oropharyngeal or esophageal candidiasis in HIV co-infected adults: a pilot network meta-analysis. Expert Rev. Anti-Infect. Ther. 19 (11): 1469–1479.

199

201

7 Optogenetics

Prof. Ke Si
Zhejiang University, 38 Zheda Road, Hangzhou 310027, China

7.1 Introduction

Since ancient times, humans have never stopped exploring the brain. In this most complex organ of the human body, tens of billions of cells crisscross and connect to each other, forming a huge network. Through this network flows the vast stream of information that constitutes our thoughts, language, and behavior. The neuronal network transmitting this information is as brilliant and profound as a sea of stars, and we have glimpsed only a very small part of it. To unravel the mystery of the brain, scientists turned to light. In 2005, researchers from Stanford University used light to control the brains of mice: with the help of bacterial opsins, light was used to switch brain cells on and off, allowing a mouse model of Parkinson's disease to stand up, and even walk, again. They called this technology optogenetics. Afterward, researchers all over the world used this technology to study a variety of cells regulated by electrical signals, such as nerve cells, heart cells, and stem cells, where electrical signals refer to the flow of ions across membranes. With the development of optogenetics, scientists insert light-delivering fibers into animal brains or use two-photon excitation to make opsins conduct excitatory cation currents or inhibitory anion currents, thereby controlling cell behavior with high precision.

7.2 Introduction of Optogenetics

Before the nineteenth century, scientists believed that the brain performs its various functions through overall activity. In 1870, the young German physicians Fritsch and Hitzig electrically stimulated the cerebral cortex of dogs and discovered that a certain cortical area controls the movement of the opposite side of the body; they were the first to propose the concept of a "motor zone." Later, it was discovered that there is a finer division of labor within the cortical motor area, with specific parts for the upper limbs, lower limbs, and trunk, in addition to other functional areas; see Figure 7.1.

Biomedical Photonic Technologies, First Edition. Edited by Zhenxi Zhang, Shudong Jiang, and Buhong Li. © 2023 WILEY-VCH GmbH. Published 2023 by WILEY-VCH GmbH.

[Figure 7.1: panels (a)–(d) contrasting past and present methods of measuring and manipulating neural activity; panel annotations include dendrite/soma/axon labels, 19% ΔF/F, and 0.5 ms and 2 ms scale bars.]

Figure 7.1 Recording and stimulation: past and present. (a) The first action potential recorded intracellularly from a neuron, by Hodgkin and Huxley; the inset shows the electrode inserted into a giant squid axon. (b) Multisite optical recording of action potentials in a cerebellar Purkinje neuron using voltage-sensitive dyes. (c) Electrical stimulation of a frog nerve. (d) Optical deep-brain stimulation of neurons expressing microbial opsin genes [1]. Source: Scanziani et al. [1], From Springer Nature.

In 2005, an article from Karl Deisseroth's laboratory at Stanford University appeared in Nature Neuroscience. It announced that nerve function could be regulated by expressing photosensitive proteins in nerve cells that respond to light of the corresponding wavelengths. This achievement meant that humankind officially had a tool to precisely control the brain: researchers would be able to manipulate individual neurons at will and explore the mechanisms of their interaction. The target neuron is our key to the door of the brain; we need only stimulate certain neurons to activate brain functions such as vision, hearing, and smell. This is an exciting technology. For a long time, our understanding of the roles of neurons remained at the level of correlation. With optogenetics, we are finally able to explore the causal relationship between specific neural circuits and brain functions. Moreover, this technology is minimally invasive and precise, which is undoubtedly a leap forward for neuroscience research tools.

Optogenetics is an emerging biotechnology that combines genetic techniques and optical methods to achieve precise control of specific cells, tissues, and circuits in living organisms. Optogenetics has developed rapidly in the past ten years and has achieved great results. Different from previous biochemical research methods, optogenetics introduces light-controlled methods to regulate the physiological processes of cells. It achieves precise targeting of nerve cells and reaches research goals that electrical control methods cannot, providing new ideas for neuroscientists to explore the nervous system and brain functions in depth. Currently, cell activities that can be controlled by optogenetics include, but are not limited to, activating or inhibiting cell signaling pathways, regulating metabolic pathways, performing gene editing, and changing cell migration. Compared with traditional methods, optogenetics has unparalleled advantages: it only requires transferring photosensitive proteins into the cell, which is highly practical; it uses light as the stimulating medium, which readily achieves control of nerve cells with millisecond precision; and it achieves regulation of specific cells through tissue-specific promoters. Optogenetics is also noninvasive: cells express foreign genes encoding photosensitive proteins, and various light stimuli are then used to change cell behavior, so no foreign bodies invade the tissues and experimental animals suffer very little trauma. Optogenetic research includes the development of photosensitive proteins, the transfer of photosensitive protein-coding genes into target cells, directed light control, and the detection of output signals or changes in animal behavior. There are two important components of optogenetics: one is the photosensitive element that can sense light stimulation and respond accordingly; the other is the equipment that realizes this activity and the related technology to record and analyze it. There are usually four steps in optogenetic experiments and research, as introduced in the following.

7.2.1 Find the Right Photosensitive Protein

Photosensitive proteins are an important component of optogenetics. After microbial rhodopsin was shown to be able to activate neurons, a series of naturally occurring photosensitive proteins with different properties, named opsins, were discovered. Most microbial opsins (type I) are ion channels and ion pumps, whereas most vertebrate opsins (type II) are G-protein-coupled receptors. According to their effects on neurons, photosensitive proteins can also be divided into two types, activating and inhibitory, which excite or inhibit neural activity, respectively. Photosensitive proteins can be either natural (such as rhodopsin) or chemically modified to be sensitive to light. They are essential to optogenetics, as we will discuss in a later section. According to the characteristics of the photosensitive protein and the target neurons, an appropriate photosensitive protein can be selected, and a plasmid vector containing a promoter, the target gene, and a fluorescent protein gene can then be constructed through genetic engineering. The promoter targets the gene to specific cells, and the fluorescent protein gene allows the labeled cells to be displayed, tracked, and located under a microscope.
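To make this selection step concrete, the sketch below shows one way opsin properties and the promoter/opsin/reporter layout of a vector could be represented in code. The opsin effects and approximate peak wavelengths are standard literature values, but the data model, field names, and the example construct are purely illustrative assumptions, not a description of any specific protocol in this chapter.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Opsin:
    name: str
    effect: str   # "activating" (excites neurons) or "inhibitory"
    peak_nm: int  # approximate peak activation wavelength

# Two widely used microbial (type I) opsins; wavelengths are approximate.
OPSINS = {
    "ChR2": Opsin("ChR2", "activating", 470),  # light-gated cation channel
    "NpHR": Opsin("NpHR", "inhibitory", 590),  # light-driven chloride pump
}

@dataclass(frozen=True)
class Construct:
    promoter: str  # restricts expression to a cell type
    opsin: Opsin
    reporter: str  # fluorescent protein used to visualize expression

def pick_opsin(effect: str) -> Opsin:
    """Return an opsin with the requested effect on neural activity."""
    return next(o for o in OPSINS.values() if o.effect == effect)

# A hypothetical excitatory construct with a pan-neuronal promoter.
construct = Construct("hSyn", pick_opsin("activating"), "EYFP")
print(construct.opsin.name, construct.opsin.peak_nm, "nm")
```

The point of the model is only that the three parts listed in the text (promoter, target gene, fluorescent reporter) travel together as one unit.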

7.2.2 The Opsin Gene Is Introduced into the Receptor Cell

After construction of the target gene is complete, it is necessary to transfer the gene into specific cells, tissues, or organs by physical or chemical means and to express it there. Immunofluorescence imaging under a fluorescence microscope is then usually used to monitor the expression of the target gene. Because of the high diversity and complexity of neural circuits, marking specific types of cells or specific neural projections is one of the key problems for optogenetics in neurobiology. At present, cell-specific expression of opsins in neural circuits can be achieved by infection with viral vectors containing opsin genes and cell-specific promoters, by establishing transgenic mouse strains, and by gene knock-in.

7.2.3 Time and Space Control of Stimulation Light

By controlling different parameters of the light, such as wavelength, intensity, and duty cycle, temporal regulation of neuronal activity is achieved. Spatial regulation is achieved by selectively irradiating the cells under study. The light sources typically used are lasers and LEDs. At present, the most common method of light delivery is an implantable laser-coupled fiber, which can deliver light stimulation to deep tissue with high efficiency. Optical fibers, typically tens to hundreds of microns in diameter, are implanted in the target brain region and coupled to a light source to allow light input in freely moving animals. When the input light intensity exceeds the threshold required for opsin activation, the opsin is activated to trigger neuronal activation or inhibition. For example, the light intensity required to trigger a ChR2 action potential is generally 5 mW/mm2 [2]. Although the light intensity at the fiber tip can be well controlled, the intensity beyond the tip decreases gradually with distance because of scattering and absorption by the surrounding brain tissue. An implanted optical fiber can therefore only activate opsins within a limited region rather than delivering a uniform intensity to the surrounding tissue. Although stronger light gives a larger effective activation range, light that is too strong will heat the tissue and damage neurons [3]. Therefore, in addition to choosing an appropriate light intensity and running controlled experiments, reducing light attenuation and thermal damage is a future development direction for optical fiber delivery. Optical fiber technology has been widely used in experiments on deep-brain stimulation in freely moving animals, but implanting an optical fiber inevitably causes some brain tissue damage and local bleeding. Two-photon optogenetics, developed in recent years, achieves noninvasive, cell-precise light stimulation with accurate control in time and space, minimizes illumination time while illuminating the cell, and reduces response times to milliseconds.
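The intensity falloff beyond the fiber tip described above can be sketched with a toy model combining conical beam spread with exponential attenuation by scattering and absorption. The 5 mW/mm2 ChR2 threshold comes from the text; every other number (source irradiance, fiber radius, divergence angle, effective attenuation coefficient) is an illustrative placeholder, not a measured tissue property.

```python
import math

def irradiance(depth_mm, i0=50.0, fiber_radius_mm=0.1,
               half_angle_deg=15.0, mu_eff_per_mm=0.5):
    """Irradiance (mW/mm^2) at a given depth below the fiber tip.

    Geometric dilution from conical beam spread is combined with
    exponential attenuation; all parameter defaults are illustrative.
    """
    r = fiber_radius_mm + depth_mm * math.tan(math.radians(half_angle_deg))
    geometric = (fiber_radius_mm / r) ** 2  # beam-area dilution factor
    return i0 * geometric * math.exp(-mu_eff_per_mm * depth_mm)

def activation_depth(threshold_mw_mm2=5.0, step_mm=0.01, **kwargs):
    """Depth at which irradiance first falls below the opsin threshold."""
    z = 0.0
    while irradiance(z, **kwargs) >= threshold_mw_mm2:
        z += step_mm
    return z

print(f"effective activation depth ~ {activation_depth():.2f} mm")
```

Under these placeholder parameters the model predicts activation only within a fraction of a millimeter of the tip, illustrating why enlarging the activated volume requires more source power, at the cost of tissue heating.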


Researchers can precisely stimulate a specific neuron and observe the responses of the cells connected to it, or stimulate several neighboring neurons to see whether their downstream targets are dominated by one of them or controlled jointly. Optogenetics can control time with millisecond precision and space with single-cell precision, which is unmatched by electrical stimulation and chemical drugs.

7.2.4 Collect Output Signals and Read Results

These include using electrodes to read electrophysiological signals, functional optical imaging to read changes in cell membrane voltage or calcium ion concentration, and functional magnetic resonance imaging (FMRI) to read blood oxygen levels in relevant brain tissues or neural circuits, which facilitates optogenetic manipulation of large brain regions. Animal behavior can also be assessed to evaluate the consequences of optogenetics for the whole animal. The electrophysiological signal after light stimulation can be read out with a patch clamp or a microelectrode array (MCA). Thus, the relationships between different light stimulation modes, current, and voltage can be clarified, and the corresponding theoretical basis can be deduced from in vivo animal experiments. Patch clamp includes two types of recording: current clamp and voltage clamp. Based on the type of recording (such as an inward current or a light-induced voltage), the optogenetic construct can be modified or the light parameters adjusted to obtain better control of the neurons, cells, or even the entire animal's behavior. Electrodes are a common tool for detecting membrane potential; single metal electrodes and multi-electrode arrays are also used to detect cells excited or inhibited by optogenetics in vivo or in vitro. The MCA is a detection technique for synchronous, noninvasive, long-term recording of cell and tissue action potentials. It uses multiple densely arranged electrodes to record current or voltage signals simultaneously, and can accurately reflect the conduction direction, conduction velocity, and pathway of action potentials in local tissue groups, as well as intercellular interactions. It can also record membrane potential changes noninvasively from the extracellular side. The MCA has been widely used in optogenetic research on brain slices, neuron networks, nerve cells, and cardiomyocytes.
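As a simple illustration of how an extracellular voltage trace from such an array might be turned into spike times, the sketch below uses threshold crossing with a refractory window. Real spike detection and sorting are far more involved; the threshold and window values here are arbitrary.

```python
def detect_spikes(trace, threshold, refractory=3):
    """Return sample indices where the trace crosses above `threshold`,
    skipping crossings within `refractory` samples of the last spike so
    a single spike is not counted twice."""
    spikes, last = [], -refractory
    for i in range(1, len(trace)):
        crossed = trace[i - 1] < threshold <= trace[i]
        if crossed and i - last >= refractory:
            spikes.append(i)
            last = i
    return spikes

# Toy trace: baseline noise with two clear events after light pulses.
trace = [0.1, 0.0, 0.2, 1.5, 0.9, 0.1, 0.0, 1.8, 1.2, 0.1]
print(detect_spikes(trace, threshold=1.0))  # → [3, 7]
```

Comparing such spike times against the light-pulse times is one basic way to relate the stimulation mode to the evoked response.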
In addition, MCA recording can provide a platform for studying cardiac electrical activity and conduction in the whole heart, parts of the heart, and myocardial tissue, offering a new method for in vivo electrophysiological recording in optogenetics. Functional optical imaging includes calcium ion imaging and voltage imaging. The principle is to perform fluorescence imaging of calcium indicators or voltage-sensitive dyes and then record the calcium concentration or voltage signal dynamically by measuring the change in fluorescence intensity [4]. The excitation sequence on the surface of the heart can be obtained using voltage-sensitive dyes, and the characteristics of cardiac electrical activity and the mechanisms of arrhythmia can be analyzed.
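The fluorescence-change readout just described is usually quantified as ΔF/F. A minimal sketch, assuming the baseline F0 is simply the mean of the first few frames (rolling-percentile baselines are also common in practice):

```python
from statistics import fmean

def delta_f_over_f(trace, baseline_frames=10):
    """Convert a raw fluorescence trace to ΔF/F = (F - F0) / F0,
    with F0 taken as the mean of the first `baseline_frames` samples."""
    f0 = fmean(trace[:baseline_frames])
    return [(f - f0) / f0 for f in trace]

# Toy trace: flat baseline of 100 a.u., then a calcium transient.
trace = [100.0] * 10 + [119.0, 110.0, 103.0]
dff = delta_f_over_f(trace)
print(f"peak ΔF/F = {max(dff):.0%}")  # a 19-a.u. rise on F0 = 100
```

Normalizing by F0 makes traces comparable across cells with different indicator expression levels, which is why ΔF/F rather than raw intensity is reported (compare the 19% ΔF/F annotation in Figure 7.1).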


[Figure 7.2 panels: manipulate and record at successive scales, from synapse, neuron, and local circuit to intralayer, whole brain, and behavior.]

Figure 7.2 A variety of applications use optogenetic probes to both read out and manipulate activity [7]. Source: Häusser [7]/With permission of Springer Nature.

FMRI is an important measure of brain function at the whole-animal level; combined with optogenetics, the approach is called optogenetic FMRI. It allows certain types of nerve cells at certain nodes of the whole-brain network to be stimulated accurately while the hemodynamic information of the whole brain is acquired. Because it reflects the activity of groups of cells in the body, it helps identify the activity of neural circuits associated with disease. The projection of specific local cells, or of specific axons to distant cells, can be identified directly from blood-oxygen-level-dependent signals, which cannot be done with microelectrodes. Therefore, optogenetic manipulation of large brain regions can be performed accurately. Animal behavior testing observes the effect of optogenetics on the behavior of the entire animal. Although all the above recording methods are important for evaluating the cellular, tissue, and organ activities caused by optogenetics, the whole animal is also regulated by neurohumoral and other mechanisms, so behavioral tests are needed to establish the net effect. For example, in repeated experiments on zebrafish somatosensory neurons, light stimulation caused the fish to engage in avoidance-like swimming [5]. In Drosophila experiments, activation of ChR2 expressed under an upstream gene sequence caused the release of monoamine neurotransmitters, which in turn caused an anorexia-like response [6]. A significant feature of optogenetic approaches is the ability to target the probe to the cell, which allows optogenetics to be used to study the nervous system at multiple levels. Because of this, optogenetics has permeated all areas of neuroscience, from exploring the properties of individual synapses, to studying specific cell types within and between neural circuits, to imaging entire brains and manipulating animal behavior. These methods are increasingly used not only to study the basic mechanisms of brain function but also to probe disease mechanisms in animal models; see Figure 7.2.

7.3 The History and Development of Optogenetics

People often compare the human brain to a computer. There are hundreds of millions of transistors in a computer and tens of billions of nerve cells in a brain. In both, the basic components are connected to each other to form huge networks in which vast amounts of information are quickly transmitted as electrical signals. How do neuroscientists understand brain function by measuring the flow of information in the brain? The traditional method uses electrodes to measure the electrical signals of nerve cells, acting like a wiretap to reveal how they work. This method, known as electrophysiology, has been used for centuries, but its drawback is obvious: it measures the activity of only a few of the hundreds of millions of interacting nerve cells, which is far from sufficient to investigate the function of the brain. As early as the 1960s, the young Professor Lawrence B. Cohen of Yale University began looking for ways to introduce an element of light, turning the electrical signals of nerve cells into light and allowing neuroscientists to watch the "drama" of brain activity. His idea was to dye nerve cells with a chemical dye whose molecules did not enter the cells but only attached to the membranes. When a nerve cell produces an electrical signal, the dye's molecular structure is affected by the electric field and its color changes, so that the nerve tissue can be imaged by measuring the dye's fluorescence. However, this method has an obvious disadvantage: it is not selective among nerve cells, so the contributions of different nerve cells to the light signal cannot be separated. The question baffled a generation of neuroscientists. In 1962, the discovery of green fluorescent protein (GFP) in a jellyfish officially opened the door to bioluminescence research [8], and in 2008 the Nobel Prize in Chemistry was awarded for the discovery and development of GFP. One of the most famous applications of fluorescent proteins is Brainbow. In a 2007 study led by Joshua R. Sanes and Jeff W. Lichtman, fluorescent proteins of three colors, red, yellow, and blue, were inserted into mouse genomes, successfully coloring different mouse cells.
The three colors combine with one another to produce nearly a hundred distinguishable hues in the mouse brainstem slices finally observed under the microscope, like a brilliant rainbow [9], see Figure 7.3. This gives us a wonderful picture of the interweaving of different cells in the brain, and raises a further question: how do these cells interact with each other? In a single neuron, information is transmitted in the form of electrical signals. As a signal is conducted, the potential across the cell membrane reverses and a large number of calcium ions flow in. The traditional method uses electrodes to measure potential changes in nerve cells, but it has obvious shortcomings: it requires many electrodes, and the scope of observation is limited. With fluorescent proteins, scientists found a new solution. A fluorescent protein can be attached to a protein that senses changes in voltage or calcium-ion concentration, causing it to flash brightly as neurons engage in brain activity.

In 1979, Francis Crick argued that the biggest challenge in neurobiology was how to control one class of neurons under study precisely, without affecting the function of surrounding neurons, and thereby to study the relationship between neural circuits and behavior. Traditional research generally uses electrode stimulation or drug treatment, but electrode stimulation is poorly targeted, drug action is too slow, and drugs affect diverse cell types, so these approaches cannot solve the problem. In the 1970s, the mechanisms by which light could activate cells were still unclear, yet Crick proposed that light might serve as a control method, which provided a preliminary basis for optogenetics.


7 Optogenetics

[Figure 7.3 panels: (a) Brainbow-1.0; (b) Brainbow-1.1; (c) Brainbow-2.0; (d) Brainbow-2.1; (e) restriction of recombination.]

Figure 7.3 XFP expression in Brainbow transgenic mice. (a, b) Thy1-Brainbow-1.0 and Thy1-Brainbow-1.1 transgenic mice were crossed with CreERT2-expressing animals; tamoxifen injection led to mosaic XFP expression throughout the brain. (a) Brainstem. (b) Hippocampal mossy fiber axons and their terminals. (c) In Thy1-Brainbow-2.0 mice, transient recombination with the CreERT2/tamoxifen system triggers expression of M-CFP (peripheral motor axons). (d) In Thy1-Brainbow-2.1 mice, CreERT2-mediated recombination leads to expression of multiple XFPs. Left: oculomotor nerve. Right: hippocampus (dentate gyrus, labeled neurons, and astrocytes) [9]. Source: Livet et al. [9], From Springer Nature.

With fluorescent proteins that can be genetically encoded, the next questions for optogenetics were how to use fluorescent proteins to report the activity of nerve cells, and how to use light to direct and control that activity. The inspiration for optogenetics comes from vision. The human eye contains many photoreceptor cells; light from the outside enters the eye and is converted


into chemical and electrical signals by the retina, which then reach the brain through neuronal transmission. Can we likewise use light to give commands to neurons and manipulate their activity? In 1971, Stoeckenius and Oesterhelt first discovered that bacteriorhodopsin, an ion pump, can be rapidly activated by visible-light photons [10]. In the following years, halorhodopsin [11], channelrhodopsin [12], and other photosensitive proteins were discovered. In 2002, Gero Miesenböck, then a professor at Memorial Sloan Kettering Cancer Center, first expressed photosensitive proteins from invertebrates in rat cells and saw neurons respond to light stimuli in a petri dish [13]. In 2005, he succeeded in using light to make headless fruit flies flap their wings [14]; Miesenböck is therefore also known as a founder of optogenetics. In 2005, scientists discovered that neurons respond precisely to light after the introduction of a single microbial opsin gene [15]. Optogenetic elements built from photosensitive proteins can control the action potentials of nerve cells in living animals rapidly, precisely, and nondestructively, activating or inhibiting neuronal activity noninvasively. Since then, the era of optogenetics has arrived. Optogenetics provides a new interface between physical optics, photochemistry, and biology, combining genetic engineering with the latest optical technology. The method has many advantages, such as practical in vivo expression, accurate timing, reduced trauma, and no foreign bodies intruding into tissue. A localized optical fiber can stimulate cells locally, or diffuse light can be designed to stimulate a brain area over a wide range.
Many laboratories have become extensively involved in optogenetics: some use photosensitive channels in place of classic electrode stimulation, some search for more photosensitive proteins to enrich the toolkit, and some hope to use photosensitive proteins in clinical therapeutic applications. In 2010, Nature Methods named optogenetics its Method of the Year, and in the same year, Science highlighted this progress in its ten-year technology review. In 2015, Nature Neuroscience published an article commemorating the tenth anniversary of optogenetics, saying that "optogenetics opened the door to the coveted experiment." To promote the development of optogenetics, a variety of new devices and systems are needed. The introduction of fiberoptic tools and laser diodes in 2007 led to further advances in optogenetic control, allowing light to reach deep into the brain of live animals. At present, optical-fiber technology is widely used in deep-brain stimulation experiments in freely moving animals. However, implanting an optical fiber inevitably causes some brain-tissue damage and local bleeding, especially when high-intensity light is delivered through a large-diameter fiber. Therefore, to stimulate more brain areas and to deliver light to multiple specific spatial locations, researchers have developed miniaturized micro-optical fibers and tapered fibers that reduce mechanical damage to tissue [16], see Figure 7.4. In addition, these neural interfaces can illuminate only one point: in an experiment, the experimenter can only determine one insertion position for


[Figure 7.4 panels: (a) schematic of a tapered optical fiber optrode (Au coating, insulating epoxy) combining electrical recording with localized photon delivery through an optical aperture near a neuron; (b) SEM images of the optrode tip.]

Figure 7.4 Single optrode with dual stimulation and recording functions [17]. (a) Concept schematic. Light is delivered locally through the aperture at the tip of the tapered optical fiber to nearby neurons; neuronal activity is recorded through the thermally metalized gold tip of the optrode. (b) SEM images of the optrode tip. The exposed metallic part of the tip is approximately 50 μm long and appears brighter in the upper image. The diameter of the optical aperture (outlined by the white circle) is about 1 μm in the lower image. Scale bars are 10 and 1 μm for the upper and lower images, respectively. Source: Zhang et al. [17], From IOP Publishing.

detection, so multipoint illumination is impossible. An effective solution is the fiber bundle: hundreds or thousands of micro-optical fibers wrapped in an insulating casing and implanted in the target brain area, enabling simultaneous stimulation of a larger region or sequential stimulation of different regions [18]. Connecting a silicon chip to one end of the implanted fiber and coupling it to a light-source array is another scheme for multipoint stimulation of the brain [19]. In short, these diversified light-delivery strategies make optical input to multiple brain regions and multiple spatial locations more convenient and effective. However, difficulties remain in the development of optogenetics. On the one hand, the axons and dendrites of nerve cells are interwoven, so the response of a photosensitive protein can easily interfere with surrounding nerve cells. On the other hand, optical technology had not reached single-cell accuracy: it is difficult to control a single cell precisely in time and space, because the illuminated area in the brain is relatively large and traditional methods therefore target many cells at once. To achieve light stimulation of single cells, researchers turned to two new tools: a more sensitive, targeted photosensitive channel protein, and an optimized holographic two-photon microscope. Dr. Boyden thought of using genetic engineering to modify photosensitive channel proteins. In 2014, Dr. Boyden's team discovered a new photosensitive channel protein, CoChR [20]. It is more sensitive to light and produces a stronger current, about


ten times that of the first photosensitive channel protein, ChR2. To CoChR, Dr. Boyden's team added a targeting segment from the kainate receptor subunit KA2 [21]. After this modification, the new protein, soma-targeted CoChR (soCoChR), concentrates in the cell body of the neuron, avoiding interference from axons and dendrites with peripheral nerve cells. With the interference problem addressed, the remaining requirement is optical technology with single-cell precision. Two-photon computer-generated holography (CGH) achieves precise control in time and space: it minimizes the illumination time while still illuminating the target cells and reduces the response time to milliseconds. The technique also supports 3D imaging, laying the foundation for generating complex stimulation patterns. Using this single-cell optogenetic technology, researchers can precisely stimulate specific neurons and observe the responses of the cells connected to them. In other words, researchers can stimulate several neighboring neurons to see which one dominates or which is under the control of the others. This will help us understand how thinking, feeling, and movement happen.

Optogenetics originated in neuroscience, where it has developed rapidly and is now widely applied, and it has since been extended to the study of animal behavior. Because it is rapid, precise, and highly specific, optogenetics makes it possible to explore the relationship between the activity of specific types of neurons and changes in animal behavior. Optogenetic approaches have provided insight into a wide range of issues in behavior, physiology, and pathology, including sensation, cognition, and action. Although these studies are currently performed mainly in mammals (rats and mice), optogenetic methods have become an essential tool for studying neural circuits.

7.4 Photosensitive Protein

7.4.1 Introduction and Development of Photosensitive Protein

In neuronal cells, a short electrical signal (a spike, or action potential) appears after the cell membrane is depolarized. This electrical signal is the basis of communication between neurons; conversely, hyperpolarization of the cell membrane inhibits the appearance of such spikes. If neuroscience researchers could master this spike switch, they could readily study the function of nerve cells, the interactions between nerve cells, and the mechanisms by which neural circuits regulate biological behavior. Neuroscientists reasoned that if an exogenous gene encoding a photosensitive protein able to change the membrane potential could be expressed in neuronal cells, then spikes could be controlled by light. As early as the 1970s, microbiologists discovered that certain microorganisms express single-component photosensitive ion-transport proteins. Stoeckenius and Oesterhelt demonstrated in 1971 that bacteriorhodopsin can be activated under visible light and acts as a light-driven pump, transporting protons across the


membrane. Later, in 1977, Matsuno-Yagi and Mukohata discovered more members of this family. In 2002, Hegemann et al. further discovered channelrhodopsin [12]. Also in 2002, Zemelman et al. used Drosophila photoreceptor genes, including those for rhodopsin, arrestin-2, and the G-protein α subunit, to develop a multicomponent photoactivation strategy named "chARGe" [13]; with the help of gene-expression technology, this was the first time light was used to activate a small group of specific neurons within a mixed population. Similarly, Banghart et al. achieved precise and reversible light control of neuronal activity in rats by coupling photosensitive chemical molecules to potassium channels [22]. However, further application of multicomponent optogenetics was hindered by the technical limitations of multicomponent protein expression, chemical modification of proteins, and the limited tissue penetration of small chemical molecules. The turning point came in 2005, when scientists showed that after the introduction of a microbial opsin gene, without any other components or chemicals, neurons become highly sensitive to light. This finding was quickly confirmed by successive studies. Boyden et al. expressed a microbial rhodopsin in mammalian neurons for the first time and achieved photoactivation of neurons without additional cofactors or components. Subsequent studies showed that vertebrate tissue contains natural all-trans-retinal, the cofactor required for light control of microbial rhodopsins. The researchers thus showed that optogenetic control is feasible even in intact mammalian brain tissue and in freely moving mammals. The microbial opsin gene therefore provides a single-component strategy.
By 2010, a variety of microbial rhodopsins, including channelrhodopsin (ChR), bacteriorhodopsin (BR), and halorhodopsin (NpHR), had been shown to enable optical activation or inhibition of mammalian neurons, realizing optogenetic control of neurons in intact mammalian brain tissue and even in freely moving mammals. Thus, the door to single-component optogenetics was opened; such cell-type-specific control is impossible with electrical stimulation.

7.4.2 Types of Photosensitive Proteins

Existing optical control elements fall into six general categories: rhodopsins, xanthopsins, phytochromes (Phy), cryptochromes (CRY), light-oxygen-voltage (LOV) domains, and blue-light sensors using flavin adenine dinucleotide (BLUF). Rhodopsins are currently the most widely found and most widely used photosensitive proteins, and they are of great value in optogenetic experiments. Bacteriorhodopsin (BR) is a transmembrane protein of bacteria such as Halobacterium salinarum. After the protein absorbs light energy, its retinal chromophore changes from all-trans to 13-cis and then back to all-trans; this isomerization triggers a conformational change in bacteriorhodopsin that pumps protons from the cytoplasm to the outside of the cell. Halorhodopsin (NpHR) is a light-driven Cl− pump that can be activated by yellow light of 560–580 nm; the resulting anion influx hyperpolarizes the neuron and inhibits neuronal activity.


[Figure 7.5 panels: (a) single-component optogenetic tool families (BR/PR, HR, ChR, XR, opto-receptors, and bacterial cyclases converting ATP to cAMP), with transported ions and signaling pathways indicated; (b) peak activation wavelength (≈400–650 nm) plotted against off-kinetics (τoff, from 1 ms to 10 000 s) for fast-excitation, fast-inhibition, step-function (bistable depolarizing), red-shifted, ChETA, and biochemical-modulation variants such as Opto-α1 and Opto-β2.]

Figure 7.5 Basic properties of known single-component optogenetic tools with published spectral and kinetic information [23]. (a) Single-component optogenetic tool families; transported ions and signaling pathways are indicated. (b) Kinetic and spectral attributes of optogenetic tool variants for which both of these properties have been reported and for which minimal activity in the dark is observed. Source: Yizhar et al. [23], From Elsevier.

Channelrhodopsin (ChR), found in Chlamydomonas reinhardtii, is a nonselective cation channel protein with seven transmembrane domains that is activated by blue light. Its maximum absorption wavelength is about 470 nm, see Figure 7.5.
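The opsins introduced in this section differ mainly in peak activation wavelength, transport mechanism, and effect on the membrane potential. As a quick reference, these properties can be collected into a small lookup table; the following Python sketch uses approximate values drawn from the surrounding text (the table and helper function are illustrative, not part of any standard library):

```python
# Summary of the opsins discussed in this section.
# Wavelengths are approximate peak activation values.
OPSINS = {
    # name: (peak wavelength nm, mechanism, transported ion, effect on neuron)
    "ChR2":  (470, "cation channel", "Na+/H+/Ca2+", "depolarize"),
    "NpHR":  (578, "pump",           "Cl-",          "hyperpolarize"),
    "BR":    (568, "pump",           "H+",           "hyperpolarize"),
    "ArchT": (566, "pump",           "H+",           "hyperpolarize"),
}

def excitatory(name):
    """True if activating this opsin depolarizes (excites) the neuron."""
    return OPSINS[name][3] == "depolarize"

print(excitatory("ChR2"), excitatory("NpHR"))  # -> True False
```

A table like this makes the later discussion of combining opsins concrete: two tools can be co-expressed usefully only if their activation wavelengths are well separated.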

7.4.3 Improvement of Photosensitive Protein

But the first generation of optogenetic technology was not perfect: the excitatory photosensitive proteins are very effective, but the inhibitory opsins are far less efficient; in other words, stimulating electrical signals is much easier than inhibiting them. An opsin is usually expressed on the cell membrane as a transmembrane channel protein. A light pulse causes the opsin to open its channel, and cations flow into the cell. For excitatory opsins, a single light pulse is enough to activate the protein and keep the channel open; cations then continue to flow in, the cell remains excited even after the light pulse is switched off, and the continuous ion current makes the cell more responsive to light stimulation. These features allow neurons deep in an animal's brain to be activated simply, without optical fibers penetrating the tissue. In contrast, an inhibitory opsin is not a channel but a "pump": each incoming photon moves only one ion across the membrane. This mechanism is very inefficient and cannot keep the inhibitory opsin switched on continuously, so more light pulses are needed for an experiment, and the opsin pump therefore cannot


be an effective inhibitory switch. These inhibitory protein pumps also differ from the brain's normal inhibitory mechanism, which makes neurons leakier and therefore more resistant to excitatory currents. Moreover, for these photosensitive proteins to function well, high levels of gene expression and light intensity must be achieved in the living nervous system while maintaining cellular specificity and minimizing toxicity. However, neurons are highly susceptible to overexpression of membrane proteins, and because of their intrinsic sensitivity they are prone to side effects of light and heat.

7.4.3.1 Improvements to Excitatory Photosensitive Proteins

Currently, the most commonly used excitatory photosensitive protein is ChR2, which is activated by 470 nm blue light and rapidly triggers cation inflow. The response of ChR2 is very fast, an electrophysiological characteristic that can be exploited, for example, to pace cardiomyocytes. ChR1 and ChR2 have similar effects, and ChR1 is mainly activated by red light. Both channelrhodopsins open and close quickly, but ChR2 performs better than ChR1 in animal cell membranes; ChR2 is therefore more widely used than ChR1 in life-science optogenetics research.

Improve Expression and Photocurrent
Improvement of ChR2 has focused on increasing its expression and photocurrent in mammalian systems. Scientists first replaced the histidine at position 134 of ChR2 with arginine (H134R) to increase the steady-state current [24]. However, this mutation works by delaying channel closure, which significantly impairs temporal accuracy and the shape of the current peak. Endoplasmic reticulum (ER) export motifs, identified in 2008, help to obtain high and safe expression, as first shown for halorhodopsin. It was then discovered that a chimera strategy (a hybrid of ChR1 and ChR2) improves both the expression and the spiking characteristics of channelrhodopsin.

Improve Inactivation and Desensitization
On the other hand, the inactivation time constant and desensitization of ChR2 limit the temporal accuracy of pulsed stimulation. If the next light pulse arrives before ChR2 has recovered, desensitization may cause activation to fail; if the pulse arrives after all ChR2 molecules have inactivated, activation also fails. To address this, scientists found that the chimera method above effectively reduces desensitization of the opsin in neurons.
In another approach to desensitization and inactivation, guided by the crystal structure of BR, the glutamate at position 123 of ChR2 was mutated to threonine or alanine; the resulting faster-responding opsin is called ChETA [25]. In many experiments, however, the neurons must remain activated for long enough that rapid channel closure after the light pulse is unnecessary. The step-function opsins (SFOs) solve this problem well. An SFO is a member of the ChR mutant family [26]. It exhibits a bistable state,


which can keep neurons activated long after the light stimulus ends. Unlike the ChETA mutations, the SFO mutation stabilizes the active retinal isomer, extending the active state of the channel. The SFO mutant is a slowly closing ChR2 variant: under blue light, it keeps the neuron in a stably activatable state for a long time, whereas green light has the opposite effect.

7.4.3.2 Improvements to Inhibitory Photosensitive Protein

Currently, the commonly used inhibitory photosensitive proteins include NpHR and Archaerhodopsin-T (ArchT). NpHR is a chloride-ion pump with a photoactivation spectrum of 525–651 nm (center wavelength 578 nm) and can be activated by yellow light. Under illumination at these wavelengths the chloride pump is activated: chloride ions are pumped from outside the cell to the inside, the intracellular chloride concentration rises, the cell membrane hyperpolarizes, and neuronal activity is inhibited. ArchT is a proton pump that can be activated by yellow-green light, and the mechanism of its inhibitory effect is similar to that of the chloride pump. Other inhibitory photosensitive proteins, such as the light-driven proton pumps Mac and eBR and the anion channelrhodopsins GtACRs, can likewise hyperpolarize cells and inhibit the generation of action potentials. Unlike the excitatory channelrhodopsins, NpHR is a true pump. For excitatory photosensitive proteins, a light pulse causes the opsin to open a channel in the cell membrane, and cations flow into the cell; a single light pulse is therefore enough to open the protein and keep the channel open, cations continue to flow in, the cell stays excited even after the light pulse is switched off, and the continuous ion current makes the cell more responsive to light. In contrast, these inhibitory protein pumps differ from the brain's normal inhibitory mechanism, which makes neurons leakier and more resistant to excitatory currents. An inhibitory opsin is not a channel but a "pump": each incoming photon moves one ion across the membrane. This mechanism is very inefficient, and the inhibitory opsin cannot be kept on continuously; more light pulses are needed for an experiment, so the opsin pump cannot be an effective inhibitory switch.
To improve the function of NpHR, several modifications were made. Initially, codon optimization and sequences enhancing subcellular trafficking were used (eNpHR2.0 [27] and eNpHR3.0 [28]), resulting in better membrane targeting and higher currents. In 2012, it was discovered that negatively charged amino acids lining the inner wall of excitatory channels attract cations into the pore. On this basis, a new strategy for constructing inhibitory channels was found: an excitatory opsin was modified so that its inner wall was lined with positively charged amino acids, attracting anions into the cell. This inhibitory channel is so photosensitive that a single blue light pulse shuts neurons down for several minutes; in addition, the process can be reversed by red light. Professor Deisseroth called this new opsin SwiChR. It is more effective at inhibiting neuronal


activity, more sensitive to light stimulation, and able to remain on for a long time. Its long-lasting action and stable response are believed to be particularly useful for animal behavior research. It may even be used to treat diseases such as severe epilepsy, which would open new opportunities for optogenetics.

7.4.4 Other Modifications of Photosensitive Proteins

Modifications of photosensitive proteins continue. Researchers from the Howard Hughes Medical Institute and other institutions in the United States have found a new way to modify the rhodopsin class of photosensitive proteins: by flipping these proteins around in the membrane, they produced tools with different properties, a technique that could double the number of proteins available for optogenetics. The newly engineered rhodopsins allow researchers to conduct new experiments that help analyze brain circuits and study the neuroscience of Parkinson's disease. If every existing modified or newly discovered rhodopsin acquires a new function when flipped, the protein toolkit of optogenetics could double. The researchers not only changed the orientation of the proteins in the cell membrane but also found that the modified rhodopsins have unique and useful new functions. One of them, FLInChR (full-length inversion of ChR), originally activates neurons; when flipped, it becomes a powerful and fast inhibitor that can be used in new experiments [29]. Thomas McHugh of the RIKEN Center for Brain Science in Japan and his colleagues have found a new way to introduce light noninvasively deep into the brain. They used up-conversion nanoparticles (UCNPs) to relay laser light deep beneath the skull: such nanoparticles absorb near-infrared light and convert it into visible light at depths that traditional optogenetics cannot reach. In mice with fear memories, the researchers used light-emitting UCNPs to successfully evoke fear memories in the hippocampus; these effects of neuronal activation, inhibition, and memory recall were observed only in mice injected with the nanoparticles [30].

7.4.5 Application of Photosensitive Protein

Photosensitive proteins can be applied in many fields, including new materials, plant science, cellular electrophysiology, and optogenetics. The discovery of microbial photosensitive proteins breaks through the limits of traditional electrophysiological methods. ChR2 and NpHR require different light wavelengths and can be co-expressed in the same nerve cell, see Figure 7.6; different wavelengths can then selectively activate one of them to adjust the frequency of action potentials. If cardiomyocytes express both ChR2 and NpHR, these electrophysiological properties can be used to regulate heart rhythm precisely. Different photosensitive proteins have different characteristics, and the required light wavelength, intensity, power density, and duration differ accordingly. As more and


[Figure 7.6 panels: (a) NpHR–EYFP, ChR2–mCherry, and overlay images; (b) cell-attached and whole-cell recordings from −62 mV (scale bars: 60 pA, 80 mV, 200 ms); (c) voltage-clamp trace (scale bars: 25 pA, 400 ms).]

Figure 7.6 Combining NpHR with ChR2 for noninvasive optical control. (a) Hippocampal neurons co-expressing NpHR–EYFP under control of the EF1α promoter and ChR2–mCherry under control of the synapsin I promoter. (b) Cell-attached and whole-cell recording of neurons co-expressing NpHR–EYFP and ChR2–mCherry. Action potentials are evoked by brief pulses of blue light (473 nm, 15 ms per pulse; length of blue bars is not to scale for ease of visualization). Simultaneous illumination with yellow light inhibited spike firing. (c) Voltage-clamp recording from a single neuron co-expressing NpHR–EYFP and ChR2–mCherry, showing independently addressable outward and inward photocurrents in response to yellow and blue light, respectively [31]. Source: Zhang et al. [31], From Springer Nature.

more photosensitive proteins and their variants are discovered or synthesized, optogenetic technology develops rapidly as well. At present, photosensitive proteins have been expressed in a variety of cells, such as central neurons, peripheral neurons, retinal cells, skeletal muscle cells, cardiomyocytes, and human induced pluripotent stem cells (hiPSCs). Photosensitive proteins are widely used in research on the nervous system and on vision restoration (e.g. in retinitis pigmentosa): the photosensitive proteins ChR2 and NpHR are introduced into retinal cells that are not completely damaged to restore their photosensitivity and let them retransmit nerve signals along the visual pathway, and some studies have already put this method into practice in animals. The development of optogenetics-related tools is very rapid: scientists have screened many ecosystems and discovered new photosensitive proteins, have done extensive engineering work on existing proteins, and have recombined these photosensitive proteins into a variety of systems for regulating the activity of nerve cells.
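The bidirectional control demonstrated in Figure 7.6, blue light driving ChR2 to depolarize and yellow light driving NpHR to hyperpolarize the same cell, can be mimicked with a toy leaky-integrator membrane model. The sketch below is purely illustrative: the current amplitudes, input resistance, and membrane time constant are invented round numbers, not values from the published recordings.

```python
# Toy leaky-integrator membrane driven by two light-gated currents.
# Blue light opens ChR2 (inward, depolarizing current); yellow light
# activates NpHR (outward, hyperpolarizing current). All parameter
# values are illustrative, not measured.

V_REST = -62.0   # mV, resting potential (the value quoted in Figure 7.6b)
TAU = 10.0       # ms, membrane time constant (assumed)
R_IN = 100.0     # megaohms, input resistance (assumed; MΩ × nA = mV)

def simulate(blue_on, yellow_on, dt=0.1, t_end=400.0):
    """Integrate dV/dt = (-(V - V_rest) + R*I_light)/tau with forward Euler.
    blue_on(t) and yellow_on(t) report whether each light is on at time t (ms)."""
    v, trace, t = V_REST, [], 0.0
    while t < t_end:
        i_light = (0.15 if blue_on(t) else 0.0) + (-0.10 if yellow_on(t) else 0.0)  # nA
        v += dt * (-(v - V_REST) + R_IN * i_light) / TAU
        trace.append(v)
        t += dt
    return trace

# A blue pulse at 50-100 ms depolarizes; a yellow pulse at 200-300 ms hyperpolarizes.
trace = simulate(lambda t: 50 <= t < 100, lambda t: 200 <= t < 300)
print(max(trace) > V_REST, min(trace) < V_REST)  # -> True True
```

Because the two opsins respond to well-separated wavelengths, the two light schedules act as independently addressable "up" and "down" inputs on the same cell, which is the design point the figure illustrates.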

7.5 Precise Optogenetics

Once the desired opsin is targeted to the neurons of interest, the next issue to consider is light delivery. For example, the rapid changes in a


variety of opsins in brain slices and long-term stimulation in behavioral experiments in animal brains require different light-delivery methods. Especially for in vivo optogenetic control, laser light decays approximately exponentially with depth in tissue, see Figure 7.7 [23]. Whether a photosensitive protein can be activated depends directly on the optical power, so the activation range of the laser can be predicted from the attenuation curve. Taking a 473 nm blue laser as an example, researchers found that the laser intensity (optical power) reaches a critical level at a depth of approximately 1 mm, and the laser power will not reach sufficient intensity to

[Figure 7.7 panels plot percent light transmission and percent initial power density versus distance from the fiber tip (0–2 mm) for 473, 561, 594, and 635 nm light, together with lateral light-spread maps in saline and brain tissue for blue (473 nm) and yellow (594 nm) illumination.]

Figure 7.7 Light propagation in brain tissue for in vivo optogenetics [23]. (a) Schematic showing that the maximum activation depth is the depth at which the light power density falls below the activation threshold, PDmin . (b) Measured percent transmission of light power at 473, 561, 594, and 635 nm light from a fiberoptic (200 μm, NA = 0.37) shown as a function of distance from the fiber tip in brain tissue. Solid lines represent fits to the measured data. (c) Predicted fraction of initial light power density as a function of depth in brain tissue for the same fiber; includes effects of absorption, scattering, and geometric light spread. (d and e) Lateral light spread as a function of sample thickness. Saline solution (top) or rat gray matter (bottom) was illuminated by either blue (473 nm; left) or yellow (594 nm; right) light delivered through a 200 μm optical fiber (NA = 0.37). Images are sections through a 3D map of light intensity along the axis of an illuminating fiber. Contour maps of the image data show iso-intensity lines at 50%, 10%, 5%, and 1% of maximum. Note conical spread of light in saline due to fiber properties, and more symmetrical light propagation shape in brain tissue. Source: Yizhar et al. [23], From Elsevier.

7.5 Precise Optogenetics

activate the corresponding neurons at depths greater than about 1 mm. Therefore, microscope objectives are mainly used for optogenetic control of the cortex, and new optical methods are needed for optogenetic studies of deep brain regions. Illumination for optogenetics falls into two categories: light sources for in vitro research and light delivery for in vivo experiments. Light sources for in vitro research include light-emitting diodes (LEDs) and lasers. An LED is a semiconductor component, built from compounds containing phosphorus (P), arsenic (As), nitrogen (N), gallium (Ga), etc., that converts electrical signals into optical signals. Building on conventional LEDs, the new generation of micro-light-emitting diodes (μ-ILEDs) from display technology has attracted much attention in optogenetics because of its much smaller size [32]. However, LEDs have a broader spectrum and lower output power. A laser produces amplified light by stimulated emission and offers small divergence, good monochromaticity, and extremely high brightness; it is therefore currently the most widely used light source in optogenetics.
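The attenuation behavior described above can be illustrated with a toy calculation. The sketch below estimates the depth at which the power density falls below an activation threshold; the effective attenuation coefficient (≈2 mm⁻¹ for blue light) and the purely conical geometric spread set by the fiber NA are illustrative assumptions, not the exact model fitted in Figure 7.7.

```python
import math

def irradiance_at_depth(p_mw, fiber_radius_mm=0.1, na=0.37, n_tissue=1.36,
                        mu_eff_mm=2.0, z_mm=0.5):
    """Estimate power density (mW/mm^2) at depth z below a fiber tip.

    Toy model (illustrative assumptions, not the book's exact fit):
    - exponential loss exp(-mu_eff * z) for scattering/absorption
    - conical geometric spread set by the fiber NA
    """
    theta = math.asin(na / n_tissue)                 # half-angle of the cone in tissue
    r_z = fiber_radius_mm + z_mm * math.tan(theta)   # beam radius at depth z
    geom = (fiber_radius_mm / r_z) ** 2              # geometric dilution
    tip_density = p_mw / (math.pi * fiber_radius_mm ** 2)
    return tip_density * geom * math.exp(-mu_eff_mm * z_mm)

def max_activation_depth(p_mw, pd_min=1.0, step=0.001):
    """Depth (mm) at which the power density first drops below pd_min (mW/mm^2)."""
    z = 0.0
    while irradiance_at_depth(p_mw, z_mm=z) >= pd_min:
        z += step
    return z
```

With 10 mW launched from a 200 μm, NA 0.37 fiber, this toy model predicts a maximum activation depth on the order of 1 mm, consistent with the behavior shown in Figure 7.7.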

7.5.1

Single-Photon Optogenetics

Fiber implantation is currently one of the most important methods for studying optogenetics in deep brain regions. In this method, an optical fiber is coupled to a laser and stereotactically implanted into the target area. Optical fiber has excellent light-guiding properties, and its diameter can be kept at the level of tens of microns with high spatial accuracy. Light stimulation can therefore be delivered efficiently to deep tissue, and light can be brought into freely moving animals. When the delivered light intensity exceeds the threshold required for opsin activation, the opsin is activated and the neuron is excited or inhibited; for example, the light intensity required to trigger ChR2 action potentials is generally about 5 mW/mm². Zhang, Aravanis, and colleagues were the first to use light to control the behavior of freely moving rodents and built the first optical neural interface for light stimulation in rodents [33]. A typical fiber-based optogenetics system is shown in Figure 7.8. A cannula guide is implanted into the skull above the target area to guide both the virus-injection needle and the optical fiber. A fiber with its outer jacket stripped is then advanced into the brain tissue through the cannula, and its end is secured by the inner tube and a locking nut. This system ensures co-registration between opsin expression and the illuminated brain volume, allows different depths within the targeted area, and lets optogenetic and pharmacological manipulations be combined. However, because the light path passes through the locking nut, cannula guide, and catheter together, it is difficult to keep the three parts coaxial, and the stripped fiber is fragile and easily broken inside the skull as the animal moves. Usually, while animals are being housed, the cannula is closed with a dust cap containing an iron core.
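The threshold quoted above can be connected to launch power with simple geometry. Assuming uniform emission from a 200 μm core (an idealization), the sketch below computes the power density right at the fiber tip and the minimum fiber output needed to reach the ChR2 threshold there; attenuation with depth, discussed earlier, raises the power actually required in practice.

```python
import math

FIBER_CORE_DIAMETER_UM = 200          # typical multimode fiber used in vivo
CHR2_THRESHOLD_MW_PER_MM2 = 5.0       # approximate activation threshold cited above

def tip_power_density(power_mw, core_diameter_um=FIBER_CORE_DIAMETER_UM):
    """Power density (mW/mm^2) at the fiber tip, assuming uniform emission."""
    radius_mm = core_diameter_um / 2 / 1000.0
    return power_mw / (math.pi * radius_mm ** 2)

def min_launch_power(threshold=CHR2_THRESHOLD_MW_PER_MM2,
                     core_diameter_um=FIBER_CORE_DIAMETER_UM):
    """Smallest fiber output power (mW) that meets the threshold at the tip."""
    radius_mm = core_diameter_um / 2 / 1000.0
    return threshold * math.pi * radius_mm ** 2
```

Under these assumptions about 0.16 mW at the tip already meets the 5 mW/mm² threshold, while 1 mW produces roughly 32 mW/mm²; the large margin is what compensates for attenuation at depth.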
When light stimulation is required, the dust cap is removed and the optical fiber inserted, which is very stressful for the animals. Therefore, research


7 Optogenetics

Figure 7.8 Overview of the optical neural interface [2]. (a) Schematic of optical neural interface mounted on a rat skull, showing optical fiber guide, optical fiber inserted, and blue light transmitted to the cortex. Two weeks prior to testing, lentivirus carrying the ChR2 gene fused to mCherry under control of the CaMKIIα promoter was injected through the fiber guide. (b) Close-up schematic of the stimulated region. The optical fiber tip is flush with the fiber guide and blue light is illuminating the deeper layers of motor cortex. Only glutamatergic pyramidal neurons that are both in the cone of illumination and genetically ChR2+ will be activated to fire action potentials. (c) Rat with optical neural interface implanted; blue light is transmitted to target neurons via the optical fiber. (d) Close-up view of the optical neural interface showing fiber guide attached with translucent cranioplastic cement. Note that no scalp or bone is exposed. (e) Low-power mCherry fluorescence image of acute brain slice showing rat motor cortex after removal of optical neural interface. The edge of the tissue displaced by the optical fiber is demarcated with a dashed line. Numerous mCherry+ neurons around the distal end of the fiber guide are present. Scale bar is 250 μm. (f) High-power image of mCherry+ neurons showing membrane-localized fluorescence characteristic of ChR2-mCherry fusion protein expression. Source: Aravanis et al. [2], From IOP Publishing.


on micro-optical-fiber implantation devices has appeared one after another. At present, there are integrated micro-implantation devices with a size of less than 200 μm and a weight of less than 0.5 g [34]. Such a device can simultaneously perform virus injection, optical transmission through the fiber, and electrical-signal recording, greatly reducing damage to living tissue. Another solution is a permanently implanted optical fiber terminated outside the skull in a metal or ceramic ferrule; only during testing is it connected to the light source through a patch cord [16]. This system reduces the risk of tissue damage and infection caused by repeated fiber insertion. The cannula system loses the ability to combine optogenetic stimulation with drug injection, and there can be some offset between the opsin expression area and the implantation site, but it is ideal for high-throughput behavioral experiments with chronic implants. A related implant is the gradient-index (GRIN) lens, which can transmit light with lower loss than an optical fiber. However, besides strong scattering, biological tissue also exhibits strongly varying optical aberrations. Implanted fibers can circumvent the absorption of light by tissue, but using the light within its mean free path to reliably fire neurons requires correcting the tissue-induced aberrations. Emerging machine-learning algorithms can solve this problem effectively, allowing the stimulus light to be focused tightly enough to reach the neuronal firing threshold [35, 36]. The neural interfaces described above can only illuminate a single point.
In an experiment, the experimenter can therefore choose only one insertion position, which rules out multipoint illumination. To stimulate more brain areas, or multiple points in a specific spatial arrangement, one effective approach is to implant multiple optical fibers into the target brain areas, enabling simultaneous stimulation of a larger region or sequential stimulation of different regions [19]. Another scheme for multipoint stimulation is to connect a silicon chip to one end of an implanted waveguide or fiber and couple it to a light-source array, as shown in Figure 7.9. Many neuroengineers have applied micro-electro-mechanical systems (MEMS) technology to the development of optoelectrodes and have made major breakthroughs. An implanted optoelectrode developed by Fan Wu et al. integrates insulated waveguides with electrical recording contacts [39]. It carries eight recording sites on a silicon shank; the straight shank is 5 mm long, as thin as 15 μm, and as narrow as 70 μm, close to the diameter of an optical fiber. Its biggest feature is that it packs eight recording channels into a very small overall footprint. It causes trauma comparable to that of an optical fiber while offering good spatial resolution and scalability, making it especially suitable for deep-brain regions with high neuron density [37].


Figure 7.9 Fiberless optical stimulation using μLEDs [37]. (a) GaN μLEDs grown on sapphire wafers and transferred onto a polymer substrate by laser-liftoff achieved 50 × 50 μm2 μLEDs [38]. (b) First demonstration of monolithic integration of multiple GaN μLEDs on silicon neural probes and capable of a 50 μm pitch. Scale = 15 μm. (c) In vivo demonstration of same optoelectrode controlling pyramidal cells (PYR) in distinct parts of the CA1 pyramidal cell layer [39]. Source: Seymour et al. [37], From Springer Nature.

Standard optical fibers remain the most common delivery method in optogenetic research. However, because biological tissue scatters strongly, light inside tissue usually forms unfocused speckle. In recent years, research on focusing light through scattering media has addressed this problem: a wavefront-shaping device modulates the incident light so that it focuses through the scattering medium. In 2020, Professor K. Si's laboratory at Zhejiang University proposed a binary transmission-matrix method for scattering media, which uses a DMD to apply binary amplitude modulation to the incident light, achieving focusing at arbitrary positions through the medium and refocusing the speckle emerging from an optical fiber [40]. Still, the geometry of a probe array fixes the distance between photoactive sites and therefore limits the possible stimulation patterns. Selective illumination and activation of target neurons can instead be achieved with digital micromirror devices (DMDs) or with phase-modulation techniques such as computer-generated holography (CGH). Vivien Szabo et al. proposed an all-optical method for recording mouse neuronal activity using a DMD [41]. The system supports several forms of fluorescence imaging via the DMD and is therefore used to locate neurons, design light-activation patterns, and record neuronal activity after photostimulation (Figure 7.10). Fiber implantation is usually permanent and always causes some mechanical damage to brain tissue and local hemorrhage; moreover, a fiber implanted in the target area is fixed and cannot be repositioned. For experiments that require multipoint illumination, such as bilateral stimulation, large-volume illumination, or illuminating multiple points in a specific spatial pattern, using multiple fibers makes the procedure even more invasive.
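The binary amplitude strategy can be illustrated with a toy numerical experiment (a sketch, not the actual algorithm of [40]): model the medium as one random complex row of a transmission matrix, keep only the DMD pixels whose contributions add constructively at the target, and compare the focal intensity with the unshaped case.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PIXELS = 1024  # DMD segments, each switchable on/off (binary amplitude)

# Toy scattering medium: couplings from each DMD pixel to ONE target output
# mode, drawn as circular complex Gaussians (one row of a transmission matrix).
tm_row = (rng.normal(size=N_PIXELS) + 1j * rng.normal(size=N_PIXELS)) / np.sqrt(2)

def focus_intensity(mask):
    """Intensity at the target for a given binary DMD pattern (0/1 per pixel)."""
    return abs(tm_row @ mask) ** 2

all_on = np.ones(N_PIXELS)            # unshaped illumination -> plain speckle

# Keep only pixels whose transmitted field adds constructively with the
# overall field at the target (binary amplitude optimization):
phase_ref = np.angle(tm_row @ all_on)
keep = (np.cos(np.angle(tm_row) - phase_ref) > 0).astype(float)

enhancement = focus_intensity(keep) / focus_intensity(all_on)
```

Even though roughly half the pixels are switched off, the selected constructive subset raises the target intensity far above the unshaped speckle level, which is the essence of focusing through a scattering medium with amplitude-only control.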


Figure 7.10 Characterization of fluorescence imaging with intensity modulation [41]. (a) Implementation of epifluorescence and SIM. (b and c) Characterization of epifluorescence and SIM imaging. (b) Fluorescence images recorded with conventional epifluorescence imaging (left) and SIM (right). (c) (Left) SIM intensity measured with the fiberscope from a fluorescence plane as a function of axial position z (z = 0 at the imaging plane of the micro-objective). (Right) Axial resolution of SIM as a function of the grid period. (d) Implementation of the scanless multipoint confocal microscope for fluorescence recording from a few somata (in this case, three). (e) (Left) Average of 600 raw images registered at the camera in multipoint confocal mode for simultaneous functional imaging of three somata. (Right) GCaMP5-G calcium signal (ΔF/F trace) measured from 4.2 μm ROI, circled in green, and placed on the soma. (f) x–z profile of the confocal PSF. (g) Lateral (left) and axial (right) profiles of the PSF. Lateral and axial resolutions are 3.1 and 8.6 μm, respectively (FWHM). Source: Szabo et al. [41], From Elsevier.

The development of single-photon optogenetics has long been hindered by tissue scattering. Fortunately, the rapid development of single-photon noninvasive focusing technology in recent years has provided powerful tools for optogenetics. Professor K. Si's lab at Zhejiang University proposed a parallel wavefront-distortion correction algorithm for thick tissue that combines conjugate adaptive optics with coherent-light adaptive correction, achieving one-shot correction of aberrations over a large field of view. This method provides a feasible scheme for high-speed, high-resolution, multipoint light stimulation deep inside biological tissue [42–44]. In addition, the rapid development of machine learning offers another route to noninvasive focusing. In 2018, Yuncheng Jin used a convolutional neural network


(CNN), after training, to rapidly infer Zernike-mode aberrations from point-spread-function images, realizing fast noninvasive focusing in tissue [35]. In 2020, S. Hu introduced a CNN into the Shack–Hartmann wavefront sensor (SHWS) pipeline to simplify optical-distortion detection; this work may enable wavefront-sensor-based adaptive optics for focusing deep in living tissue [45]. A traditional SHWS must measure the wavefront slope under each microlens to reconstruct the wavefront; in 2020, L. Hu applied deep learning to the SHWS to predict the wavefront distribution directly, without measuring the slopes [46], further advancing deep focusing in living tissue. Luo Y. used a genetic algorithm for further optimization, evolving the focus toward the global optimum [47]. In the same year, Y. Jin et al. addressed the difficulty of training a unified model on data sets drawn from diverse and heterogeneous biological tissues by adopting a transfer-learning network to overcome insufficient data [48]. This series of advances has driven single-photon optogenetics forward and promoted deep focusing of single-photon methods in living tissue. Nevertheless, the disadvantages of single-photon optogenetics are obvious: single-photon systems usually lack single-cell resolution, so they are unsuited to studying the fine dynamics of neural circuits, such as how many neurons are needed to initiate a particular behavior, or whether certain nodes in a circuit dominate others.
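The kind of training data such networks use can be sketched in a few lines: a Zernike-mode phase aberration is placed on a circular pupil and the corresponding PSF is obtained by Fourier transform. This is a simplified scalar model; the defocus term and the amplitude chosen here are illustrative, not those used in [35].

```python
import numpy as np

def circular_pupil(npix=128):
    """Unit circular pupil and a defocus-like Zernike phase (Z4 ~ 2r^2 - 1)."""
    y, x = np.mgrid[-1:1:npix * 1j, -1:1:npix * 1j]
    r2 = x ** 2 + y ** 2
    return (r2 <= 1.0).astype(float), 2.0 * r2 - 1.0

def psf_from_phase(pupil, phase, amplitude_rad):
    """Incoherent PSF of an aberrated pupil (simple scalar Fourier-optics model)."""
    field = pupil * np.exp(1j * amplitude_rad * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()  # normalize total energy to 1

pupil, defocus = circular_pupil()
psf_ideal = psf_from_phase(pupil, defocus, 0.0)   # diffraction-limited reference
psf_aberr = psf_from_phase(pupil, defocus, 3.0)   # 3 rad of defocus: blurred PSF
```

A network trained on many such (PSF image, Zernike coefficient) pairs can then invert the mapping, predicting the aberration directly from a measured PSF.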

7.5.2

Multiphoton Optogenetics

Two-photon optogenetics, developed in recent years, achieves noninvasive, cell-precision light stimulation with precise control in time and space; it minimizes illumination time while still driving the cells, reducing response times to milliseconds. The basic principle of two-photon excitation is that, at high photon density, a fluorescent molecule can absorb two long-wavelength photons simultaneously and, after relaxation, emit one shorter-wavelength photon; the effect is the same as excitation by a single photon of half the wavelength. Two-photon excitation requires a very high photon density. To avoid damaging cells, two-photon microscopes use mode-locked pulsed lasers, whose output has a very high peak power but a low average power: the pulse width is only on the order of 100 femtoseconds, at a repetition rate of 80–100 MHz. The advantages of the two-photon microscope are: (1) When a high-numerical-aperture objective focuses the pulsed laser, the photon density at the focus is extremely high, so two-photon excitation occurs only at the focal point; the microscope therefore needs no confocal pinhole, and the spatial resolution is very high. (2) The long two-photon excitation wavelength is less affected by scattering than single-photon excitation light, so it penetrates deeper into specimens. (3) Long-wavelength near-infrared light is less toxic to cells than short-wavelength light. (4) When using a two-photon microscope to observe


specimens, photobleaching and phototoxicity occur only at the focal plane. The two-photon microscope is therefore better suited than single-photon microscopes for observing thick specimens and living cells and for spot-photobleaching experiments. Two-photon optogenetics, derived from the two-photon microscope, offers high spatial resolution and noninvasive deep stimulation in vivo, enabling single-cell and even subcellular optical manipulation that traditional optogenetics cannot achieve. It is very powerful for exploring the role of one or a few (2–3) neurons in neuronal microcircuits; Rafael Yuste and others have praised this technology as the most ideal experimental tool in neuroscience [49]. Two-photon optogenetics was first proposed by Rafael Yuste (Columbia University, head of neuroscience technology for the U.S. BRAIN Initiative and one of the proponents of calcium imaging), Karl Deisseroth (Stanford University, one of the authors of the BRAIN Initiative and an inventor of optogenetics), and David Tank (Princeton University, one of the proponents of the BRAIN Initiative and a member of the National Academy of Sciences). They performed two-photon stimulation, initially without expressed light-sensitive proteins, in 2007. Around 2009, David Tank et al. calculated the photocurrent expected from two-photon optogenetic stimulation. By 2011, Karl Deisseroth had developed the C1V1 opsin, which generates sufficient photocurrent under two-photon stimulation, and the first generation of two-photon optogenetics was born. In 2013, Rafael Yuste merged calcium imaging with two-photon optogenetics to achieve real-time calcium imaging together with single-cell two-photon stimulation. In 2014 and 2015, two Nature Methods papers described the technology in detail. From 2015 to 2016, Rafael Yuste's group continued to improve the original technology.
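The pulse parameters quoted above (pulses of roughly 100 fs at an 80 MHz repetition rate) imply an enormous peak-to-average power ratio, which is what makes two-photon excitation efficient at modest average power:

```python
def peak_power_w(avg_power_w, rep_rate_hz, pulse_width_s):
    """Peak power of a mode-locked pulse train (rectangular-pulse approximation)."""
    duty_cycle = rep_rate_hz * pulse_width_s   # fraction of time the light is "on"
    return avg_power_w / duty_cycle

# 10 mW average power, 80 MHz repetition rate, 100 fs pulses:
p_peak = peak_power_w(0.01, 80e6, 100e-15)
# duty cycle = 8e-6, so the peak power is 1250 W -- five orders of magnitude
# above the average; two-photon absorption scales with intensity squared,
# so it is dominated entirely by these brief peaks.
```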
Through an SLM and temporal focusing, it became possible to stimulate arbitrary neurons in multiple layers (two layers separated by up to about 200 μm), extending the original single-layer, multi-neuron stimulation, with the stimulation pattern shaped to the neuronal cell body. Using flexible multi-target two-photon stimulation and imaging, in-depth investigations of biological questions were also carried out, bringing the composition of neural microcircuits within reach. The results were published in Science, Annual Review of Biophysics, and other journals, and have been repeatedly highlighted by Nature, Neuron, and others. In 2017, at the Optics and the Brain conference organized by OSA, multiple groups presented two-photon holographic 3D single-cell stimulation techniques and their applications. Within two-photon optogenetics, two light-targeting strategies for single-cell stimulation have emerged, usually called serial scanning and parallel light-targeting.

7.5.2.1

Serial Scanning

In serial scanning, the laser-scanning optical system generally consists of a scanning unit, a telescope system, and an objective lens; see Figure 7.11b. The light is focused into a micrometer-scale Gaussian spot. When the scanning unit is working,


Figure 7.11 Schematic diagram of laser scanning optical system [49]. (a) Schematic diagram of continuous light stimulation of a series of neurons. T_dw represents the light stimulation residence time of each neuron, and T_(i, n-m) represents the positioning time required for the beam to redirect from the nth neuron to the m th neuron. (b) Optical scheme based on scanning light aiming technology: The scanning unit enables the laser beam to be tilted in two different directions (x and y) over a wide range of angles. This scanning unit passes through a magnifying glass at the back focal plane (BFP) of the microscope objective, including a scanning lens (L1) and a tube lens (L2). This arrangement makes the illumination spot in the target front focal plane (FFP) have a two-dimensional (2D) lateral displacement. (c) Common scanning components: 2D scanning galvanometer mirror (top), two consecutive acousto-optic deflectors (AOD) (bottom). Source: Ronzitti et al. [49]/IOP Publishing / CC BY 4.0.

the spot moves rapidly across the sample surface, continuously stimulating a series of target sites; see Figure 7.11a. Two types of scanning elements are commonly used: the standard galvanometer scanner unit and the acousto-optic deflector; see Figure 7.11c.

Standard Galvanometer Scanner (GMs)

The standard galvanometer scanner is composed of two mirrors, each mounted on a galvanometric actuator that steers the laser beam in the two orthogonal directions x and y. Because mirror reflection loses very little light, optical efficiency is high; the mirrors are achromatic and can withstand high excitation power densities without damage.

Acousto-Optic Deflector (AOD)

As an alternative to the standard galvanometer scanner, the acousto-optic deflector allows inertia-free scanning of the laser beam. In an AOD, an acoustic wave at radio frequency propagates transversely through a crystal (such as TeO2), imprinting a phase grating on the crystal itself. The grating diffracts the incident laser beam into multiple orders; for a specific angle of incidence (the Bragg angle), most of the light is directed into the first diffraction order. In this case, the first-order deflection angle θ ≈ λf/v (with λ the optical wavelength, f the acoustic frequency, and v the acoustic velocity in the crystal) can be tuned


by changing the frequency of the acoustic wave. For 2D scanning, two orthogonal AODs can be cascaded. This configuration allows random access to regions of interest, that is, nonsequential scanning, because the position of the laser spot is completely determined by the acousto-optic frequencies along x and y. Since no optical components move during AOD scanning, the process is not limited by inertia.

7.5.2.2

Parallel Light-Targeting Methods

Parallel light-targeting methods produce patterned light activation, expanding spatial flexibility to match different experimental requirements; importantly, the light reaches all targets simultaneously. However, compared with scanning-based strategies, this multi-target spatiotemporal flexibility demands a more powerful laser source: illuminating N neurons simultaneously means the laser must supply a power of roughly N × P_th, where P_th is the power per neuron needed to excite it or to elicit a detectable fluorescence response. By coupling the microscope objective with a spatial light modulator and modulating either the intensity [50–52] or the phase [53–55] of the incident beam, any defined illumination pattern can be produced at the sample, enabling simultaneous multisite stimulation; see Figure 7.12.
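This power budget is easy to make concrete. The sketch below scales the per-neuron threshold power P_th to N simultaneous targets; the 50% system throughput is an illustrative assumption, not a figure from the text.

```python
def required_laser_power_mw(n_targets, p_th_mw, system_efficiency=0.5):
    """Source power needed to hold n_targets at threshold p_th simultaneously.

    system_efficiency lumps SLM diffraction efficiency and optical-path losses
    into one hypothetical factor (the 0.5 here is illustrative, not from the text).
    """
    return n_targets * p_th_mw / system_efficiency

# e.g. 50 neurons at a 10 mW per-cell threshold with 50% throughput:
p_source = required_laser_power_mw(50, 10.0)   # 1000 mW at the source
```

The linear scaling in N is why parallel methods typically rely on high-power (often amplified, low-repetition-rate) laser sources.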


Figure 7.12 Parallel light-targeting methods [49]. (a) Optical devices used to spatially modulate the illumination in parallel light-targeting optogenetics applications: micro-LED array, DMD, and LC-SLM. (b) Patterned intensity-modulation setups. Illumination intensity patterns in the sample plane (objective FFP) are obtained by spatially shaping the intensity of the light by means of LEDs (top) or DMDs (bottom) placed in a conjugated plane (FFP*). (c) CGH setup. Illumination intensity patterns in FFP are obtained by modulating the phase of the illumination beam in the BFP of the objective by means of an LC-SLM placed in a conjugated plane (BFP*). (d) Generalized phase contrast (GPC) setup. Illumination intensity patterns in the FFP are obtained by modulating the phase of the illumination beam by means of an LC-SLM placed in an FFP* and a phase contrast filter (PCF) placed in a BFP* plane. Source: Ronzitti et al. [49], From IOP Publishing.


When a patterned excitation area illuminates the sample, the optical axial resolution is severely degraded, hindering control of neurons at cellular and subcellular resolution. Introducing temporal focusing (TF) greatly improves the axial resolution of laterally extended patterns. In TF, the duration of the laser pulse is manipulated during propagation: the pulse is compressed as it travels toward the focal plane, reaches its shortest duration at the focal plane, and stretches again beyond it, so the peak intensity falls rapidly away from the focal plane. Thanks to this spatiotemporal focusing, the axial resolution of large holographic illumination patterns can be confined to the order of microns. However, studying brain circuits in three dimensions at single-cell resolution requires new light-shaping schemes, and three-dimensional addressing schemes have been proposed [56–58]. Figure 7.13 shows a common configuration for multiplexed

Figure 7.13 Experimental setup and principle of MTF-LS [57]. (a) CGH based on the use of a liquid-crystal SLM (SLM1). Inset: CGH using a static holographic phase mask. (b) GPC interferometry. (c) An amplitude/phase modulation scheme for simultaneous generation of multiple shapes, in which SLM1 both defined the 2D illumination patterns and shaped the illumination of SLM2. Source: Accanto et al. [57], From Optica Publishing Group.


time-focused light shaping (MTF-LS). The implementation can be divided into two steps: first, a beam-shaping unit generates the required two-dimensional shape and focuses it onto the TF grating; then a second SLM placed after the grating multiplexes the two-dimensional shape into the sample volume. To clarify the causal relationship between behavior or pathology and neural activity patterns, optogenetics must continue to develop: it must be able to selectively perturb single neurons while monitoring overall network activity in awake animals. To achieve this goal, all-optical approaches to neuron research were begun, combining real-time calcium imaging with single-cell two-photon stimulation. Calcium imaging uses calcium-ion indicators to monitor the calcium concentration in tissue. In studies of the nervous system in vivo or in vitro, calcium imaging is widely used to monitor calcium changes in hundreds of neurons simultaneously as a readout of neuronal activity. With calcium imaging, previously silent neural activity becomes a magnificent flickering image, and scientists can finally watch neural signals travel through the network with their own eyes. As soon as this technology appeared it was embraced by neuroscientists worldwide, and it remains the most direct way to observe neural activity. In the mammalian nervous system, calcium ions are an important class of intracellular signaling molecules in neurons, and neuronal calcium imaging rests on the strict correspondence between calcium concentration and neuronal activity.
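The fluorescence readout in such experiments is usually expressed as ΔF/F, the fractional change relative to a baseline F0 (as in the traces of Figure 7.14). A minimal sketch, taking the baseline as the mean of an initial window (an illustrative choice; real pipelines often use a running percentile):

```python
import numpy as np

def delta_f_over_f(trace, baseline_frames=50):
    """ΔF/F for a fluorescence time series: (F - F0) / F0.

    F0 is the mean of an initial baseline window; this simple choice is
    illustrative and assumes the recording starts in a quiescent period.
    """
    trace = np.asarray(trace, dtype=float)
    f0 = trace[:baseline_frames].mean()
    return (trace - f0) / f0

# A flat baseline at 100 with a brief transient to 120 yields a 20% ΔF/F peak.
trace = np.concatenate([np.full(50, 100.0), np.full(10, 120.0), np.full(40, 100.0)])
dff = delta_f_over_f(trace)
```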
Special fluorescent dyes or fluorescent protein probes (calcium indicators) report the calcium concentration in neurons as fluorescence intensity, thereby detecting neuronal activity. In 2013, Rafael Yuste merged calcium imaging and two-photon optogenetics into a system achieving real-time calcium imaging with single-cell two-photon stimulation. Figure 7.14 shows a holographic two-photon multipoint illumination system that enables simultaneous activation of multiple neurons and calcium imaging [59]. Multiple beams are generated by SLM-based phase modulation of the laser and then coupled into the two-photon microscope through a galvanometer; this scheme scans multiple individual beams simultaneously across multiple targets in the sample. To provide two-photon imaging and single-cell-precision photostimulation at the same time, a second mode-locked femtosecond laser was added to the optical path, and the SLM was incorporated into an in vivo two-photon resonant-scanning microscope. The SLM splits the stimulation laser into separate, spatially targeted beamlets, activating multiple neurons simultaneously. Independent operation of the two femtosecond beams enables precise control of activated neurons together with high-spatiotemporal-resolution recording. Using this two-photon optogenetics technology, researchers in the Rafael Yuste laboratory controlled mouse visual behavior by activating a handful of neurons in the mouse visual cortex [60]. Subsequent experiments showed that neuronal


Figure 7.14 Experimental device for parallelization of light stimulus based on scanning [59]. (a) Schematic illustration of the experimental goal. (b) Optical layout of the SLM-based two-photon patterned photostimulation, two-photon resonant scanning, moving in vivo microscope. (c) A large field of view of neurons coexpressing GCaMP6s (green) and C1V1-2A-mCherry (pink; scale bar, 100 μm). (d) Inset from a large field of view (200 μm × 200 μm) for the experiment shown in e (scale bar, 50 μm). A two-photon targeted cellattached patch-clamp recording was obtained from neuron i. This neuron was targeted for optogenetic stimulation in a spiral pattern. (e) Top, electrophysiological recording during photostimulation trials (pink bar). Single sweep (from trial 2), raster plot, and peristimulus time histogram are shown. Bottom, calcium imaging recordings obtained simultaneously from neurons i–iii in d (mean ± s.e.m., n = 3 neurons). Source: Packer et al. [59], From Springer Nature.


populations can be divided according to their functions [59]. This discovery is very important for studying the mechanisms of neural microcircuits. As Stanford University professor Karl Deisseroth put it: “We don’t know how many cells can trigger complex thoughts, feelings or emotions, but given our findings in mice, this may be a surprisingly small number.” Precision optogenetics, highlighted by Nature Methods in 2015, realizes the manipulation of neurons at single-cell resolution, taking the first step toward a functional microanatomy of neural circuits.

7.6 Application and Development of Optogenetics

Optogenetics is widely used across many fields and plays an important role in basic research, disease treatment, and industrial applications. It can be said that the development of optogenetics has opened up a new landscape for neuroscience. At present, optogenetics has been widely applied in research on neurological disorders, epilepsy, metabolic diseases, retinopathy, myocardial electrophysiology, acute kidney injury, and pain.

7.6.1 Application of Optogenetics in Gene Editing and Transcription

7.6.1.1 Genome Editing

The genome in the nucleus determines the function of the cell. If optogenetics is combined with gene-editing technology, genes in the genome can be edited with precision, helping us study and understand the complex biological functions of cells. For example, researchers designed a light-induced genome-editing device based on the recombinase Cre with high temporal and spatial specificity. The device consists of two parts: the first is a fusion of the 19th–104th amino-acid polypeptide of Cre recombinase with the photosensitive cryptochrome 2 (CRY2) (CRY2-CreN); the second fuses CIB1, the ligand that interacts with CRY2, to the 106th–343rd amino-acid polypeptide of Cre recombinase (CIB1-CreC). Without blue-light stimulation (the inactive state), CRY2-CreN and CIB1-CreC diffuse freely in the nucleus. Under blue-light illumination, the conformation of CRY2 changes and the two fusion proteins are recruited into a dimer, so that the complex regains full Cre recombinase activity and performs the corresponding functions [61]. With the continuous development of optogenetics, gene-editing technology has seen continuous innovation and breakthroughs. Light-controlled gene editing not only offers high temporal and spatial specificity and efficient induction but also avoids the toxicity and side effects associated with regulation by chemical drugs, providing brand-new technical support for gene editing.
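The light-gated reconstitution described above can be caricatured as a simple binding-kinetics model. The sketch below (Python, with purely hypothetical rate constants; none of these numbers come from [61]) integrates the fraction of CRY2-CreN/CIB1-CreC pairs that are dimerized, and hence carry Cre activity, under blue light and in the dark:

```python
# Toy kinetic sketch of light-induced split-Cre reconstitution
# (CRY2-CreN + CIB1-CreC). All rate constants are hypothetical
# illustration values, not measured data.

def dimer_fraction(light_on, t_end, dt=0.01, k_on=2.0, k_off=0.5, d0=0.0):
    """Euler-integrate dD/dt = k_on*L*(1-D) - k_off*D.

    D is the fraction of CRY2-CreN/CIB1-CreC pairs dimerized
    (i.e. with reconstituted Cre activity); L = 1 under blue light.
    """
    d = d0
    L = 1.0 if light_on else 0.0
    for _ in range(int(t_end / dt)):
        d += dt * (k_on * L * (1.0 - d) - k_off * d)
    return d

# Blue light on: dimer fraction approaches k_on / (k_on + k_off) = 0.8.
d_light = dimer_fraction(light_on=True, t_end=10.0)
# Light off afterwards: the complex dissociates and activity decays,
# matching the reversibility described in the text.
d_dark = dimer_fraction(light_on=False, t_end=10.0, d0=d_light)
```

The single first-order equation captures only the qualitative on/off behavior; the real system also involves CRY2 photocycle kinetics and Cre enzymatics.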


7 Optogenetics

7.6.1.2 Genome Transcription

Compared with genome editing, regulating the transcription of genomic genes is, in most cases, reversible and transient. It plays an important role in the treatment of certain diseases, the study of complex gene networks, and the differentiation of stem cells. Compared with chemical regulation of transcription, light-induced genome-transcription devices offer a high degree of temporal and spatial specificity and can provide high-precision tools for studying complex gene networks in time and space. This provides strong technical support for the study of cell programming, metabolism, body homeostasis, and memory formation. A TALE-based light-induced genome-transcription device was designed by researchers from the Massachusetts Institute of Technology and Harvard University. The device consists of two parts: the first is a fusion of a TALE, which provides genome anchoring through a customizable DNA-binding domain, with the photosensitive cryptochrome 2 (CRY2) (TALE-CRY2); the second is the CRY2-interacting ligand protein CIB1 fused with the desired effector protein (CIB1-effector). Without blue-light stimulation (the inactive state), TALE-CRY2 binds the promoter region of the target gene while the CIB1-effector remains free in the nucleus. Under blue-light irradiation, the conformation of CRY2 changes and the CIB1-effector is recruited to the promoter region of the target gene, thereby inducing its expression [62].
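The transient, reversible induction emphasized above can be illustrated with a one-variable production/decay model: transcription runs only while light recruits the effector, and first-order decay restores baseline afterwards. This is a minimal sketch with made-up parameters (`alpha`, `gamma`), not a model fitted to [62]:

```python
# Minimal sketch of light-inducible transcription (TALE-CRY2 recruiting
# a CIB1-effector to a promoter). Parameters are illustrative only.

def simulate_expression(light_schedule, dt=0.1, alpha=5.0, gamma=0.2):
    """Return target-gene mRNA levels over time.

    light_schedule: iterable of 0/1 flags per time step (1 = blue light).
    dm/dt = alpha * L(t) - gamma * m : production only while the
    CIB1-effector is recruited; first-order decay restores baseline.
    """
    m, trace = 0.0, []
    for L in light_schedule:
        m += dt * (alpha * L - gamma * m)
        trace.append(m)
    return trace

# 50 steps of blue light, then 150 steps of dark: expression rises,
# then relaxes back toward baseline (transient, reversible induction).
schedule = [1] * 50 + [0] * 150
trace = simulate_expression(schedule)
```

The peak occurs at light-off and the trace then decays, mirroring the reversibility that distinguishes transcriptional control from permanent genome editing.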

7.6.2 Application of Optogenetics at the Cellular Level

Optogenetics combines genetic expression of proteins with laser-based light control and imaging, providing a new research method for cell biology. Conventional approaches to studying signaling pathways, such as protein mutation and knockout, act over long timescales and therefore have nonspecific effects on cell function. Although specific chemical inhibitors can quickly block particular signaling pathways, the irreversibility of drug action and the lack of spatial control make it difficult to probe the temporal and spatial transduction and regulation of these pathways in depth. Optogenetics is not only nondestructive, noninvasive, and highly reversible but, combined with microscopic imaging, also enables recording of signaling-pathway molecules and the cellular activities they control with high temporal and spatial resolution, revealing signal transmission and its spatiotemporal regulation. The photochemical reactions of photosensitive proteins commonly used in cells are shown schematically in Figure 7.15.

7.6.2.1 Movement and Localization of Organelles

Researchers have developed an optically controlled intracellular transport system that uses blue-light-induced dimerization of CRY2 and CIB1 to link molecular motors to specific organelles. Under the induction of blue


[Figure 7.15 panels: UVR8–COP1 (280–315 nm), LOV (440–473 nm, Jα helix), CRY2–CIB1 (450–488 nm), Dronpa (490/390 nm), and PhyB–PIF (650 nm on / 750 nm off); most systems revert in the dark.]

Figure 7.15 Photochemical reactions of various photosensitive elements under light conditions [63]. Source: Zhang and Cui [63]/with permission of Elsevier.

light, organelles (mitochondria, peroxisomes, lysosomes, etc.) can be transported toward the nucleus by recruiting dynein; likewise, organelles can be transported to the plasma membrane by recruiting kinesins. The process is reversible: in the absence of blue light, the CRY2–CIB1 dimer dissociates, the organelles detach from the molecular motors, and the organelles are finally redistributed in the cell [64].

7.6.2.2 Regulating Cellular Pathways

Conventional experimental methods such as protein mutation and knockout often take several days before the effect of altered signaling molecules on cell function can be detected. This delay often introduces nonspecific effects, such as compensatory expression of certain functional proteins or artifacts caused by protein overexpression. To date, researchers have achieved optical regulation of many important cell signaling pathways through careful engineering of the light-controlled components described above, including the receptor tyrosine kinase Ras–MAPK pathway, the PI3K–Akt pathway, and Rho GTPase activation-related pathways. Rapid, effective, and specific switching of these pathways helps us understand the cascade


transmission of signaling pathways and the roles of different signaling molecules. These methods are also of great significance for research on diseases closely tied to signaling pathways, such as cancer, diabetes, and related inflammatory responses.

7.6.3 Application of Optogenetics in Animal Behavior Research

7.6.3.1 Animal Eating Behavior

Research by Heydendael et al. showed that the hypothalamic neuropeptide orexin (hypocretin) can modulate neuroendocrine and behavioral responses to stress, and clarified the interactions of the orexin/hypocretin system in specific situations using optogenetic techniques [65]. Researchers at Yale University School of Medicine discovered a group of specific neurons in the mouse brain that, when activated, trigger binge-eating behavior [66]. Optogenetics therefore has great application prospects in studies of animal feeding behavior, enabling such studies by regulating the activity of specific neurons in the animal’s brain.


Figure 7.16 Photostimulation in freely moving mice performing a detection task. (a) Schematic of the photostimulation setup. (b) Schematic of the behavioral apparatus and reward contingencies. The mouse initiates a trial by sticking its snout into the central port. Photostimuli are applied during a stimulation period (300 ms) accompanied by a series of bright blue light flashes delivered to the behavioral arena (30 Hz, 300 ms) to mask possible scattered light from the portable light source. The mouse then decides to enter either the left or the right port for a water reward. If a photostimulus was present, the choice of the left port was rewarded with a drop of water (hit, green star), whereas the choice of the right port led to a short timeout (4 s; miss, red star). If the stimulus was absent, only the choice of the right port was rewarded (correct reject, green circle), whereas the left port led to a timeout (4 s; false alarm, red circle). (c) Data from one session (200 trials) with a single stimulus (1 ms) at decreasing light intensities. Source: Huber et al. [67]/with permission of Springer Nature.
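Go/no-go sessions of this kind are conventionally scored with signal-detection measures. The sketch below (with invented trial counts, not data from [67]) turns hit/miss/false-alarm/correct-reject counts into a sensitivity index d′:

```python
# Sketch: scoring a go/no-go photostimulation detection session like the
# one in Figure 7.16 into hit/false-alarm rates and sensitivity (d').
# Trial counts below are made-up examples, not data from [67].
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejects):
    """d' = z(hit rate) - z(false-alarm rate), with a small correction
    (log-linear rule) to keep rates away from 0 and 1."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejects + 1.0)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

# e.g. 90 hits / 10 misses on stimulus trials,
# 20 false alarms / 80 correct rejects on blank trials
sensitivity = d_prime(90, 10, 20, 80)
```

d′ separates true detection sensitivity from response bias, which raw percent-correct does not; chance performance gives d′ = 0.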


7.6.3.2 Animal Reward Behavior

Karel Svoboda et al. expressed the excitatory photosensitive protein ChR2 in mouse barrel-cortex neurons and studied the relationship between light stimulation and mouse behavior using the behavioral training paradigm shown in Figure 7.16. The experiments demonstrated that the reward behavior of the mice was correlated with the light stimulation [67].

7.6.3.3 Animal Depression and Anxiety Behavior

Taking advantage of the single-cell precision of optogenetics, Chaudhury et al. of Stanford University demonstrated a direct link between VTA dopamine neuronal firing patterns and susceptibility to a depression-related phenotype: light stimulation of the corresponding neurons made normal mice more susceptible to social defeat stress [68].

7.6.3.4 Animal Pain Behavior

Pain is a subjectively unpleasant sensory or emotional experience caused by harmful or potentially destructive stimuli. The first application of optogenetic methods in pain research came when Daou et al. stimulated Nav1.8-ChR2 transgenic mice with blue light. In these experiments, the mice showed strong nociceptive behavior, indicating that pain in mammals can be induced by light. This result will contribute to the development of pain medicines and pain treatments [69].

7.6.4 Application of Optogenetics in Disease Treatment

7.6.4.1 Cardiac Electrophysiology

Optogenetics was first used for heart research in zebrafish and rats in 2010, with transgenic expression of ChR2 and NpHR in zebrafish models used to study heart rhythm. In this study, researchers combined optogenetics and optical microscopy to map the origin of cardiac pacemaker activity during zebrafish development and found that optogenetic methods can accurately trigger reversible rhythm disorders [70]. The use of light to control myocardial membrane potential in vitro and in vivo has also been validated in mice [71]; see Figure 7.17. Bruegmann et al. used light-activated ChR2 to stimulate mouse cardiomyocytes, achieving precise local stimulation and continuous depolarization of heart tissue and thereby restoring heart rhythm [72]. Ambrosi et al. further developed a method that uses optogenetics for selective cell-specific excitation, in which the optical excitation threshold reflects treatment efficiency; it is mainly used to test structure–function relationships of heart tissue after treatment and to restore the excitability of cardiomyocytes [73]. Karathanos et al. used a biophysically based human ventricular model to determine whether light stimulation can terminate ventricular fibrillation. The results showed that defibrillation succeeded when a large fraction (>16.6%) of ventricular tissue was directly stimulated with light bright enough to induce action potentials in uncoupled cells [74]. In the diagnosis and prevention of heart disease, optogenetics has significant advantages over conventional electric defibrillation and drug therapy. In heart studies, optogenetic methods have been used to explain cell interactions in vitro and in vivo. In view of the clinical prospects of this field, optogenetics may be a breakthrough in the prevention and treatment of cardiac disease. Although many technical problems remain to be solved, optogenetics is an innovative approach that can provide new methods and ideas for treating cardiovascular diseases, especially as a potential alternative to electric defibrillation.
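Optical pacing of the kind sketched in Figure 7.17 can be caricatured with a leaky integrate-and-fire cell carrying a ChR2-like light-gated depolarizing current: a sufficiently bright, sufficiently long blue pulse drives the membrane past threshold and triggers one action potential per pulse. All parameters below are illustrative assumptions, not values from [71, 72]:

```python
# Hedged sketch: a leaky integrate-and-fire cardiomyocyte with a
# ChR2-like light-gated depolarizing current, illustrating optical
# pacing as in Figure 7.17. All parameters are illustrative.

def paced_spike_times(light_pulses, t_end=1000.0, dt=0.1,
                      tau=50.0, v_rest=-80.0, v_thresh=-50.0,
                      i_chr2=1.2):
    """light_pulses: list of (start_ms, end_ms) blue-light intervals.
    Returns times (ms) at which the membrane crosses threshold."""
    v, spikes = v_rest, []
    refractory_until = -1.0
    t = 0.0
    while t < t_end:
        lit = any(a <= t < b for a, b in light_pulses)
        i_light = i_chr2 if lit else 0.0
        v += dt * ((v_rest - v) / tau + i_light)
        if v >= v_thresh and t >= refractory_until:
            spikes.append(t)
            v = v_rest                  # reset after the action potential
            refractory_until = t + 200  # crude refractory period (ms)
        t += dt
    return spikes

# Three 100 ms light pulses pace three action potentials,
# one locked to each pulse.
pulses = [(0, 100), (500, 600), (900, 1000)]
spikes = paced_spike_times(pulses)
```

With these toy numbers the light-driven steady state (−20 mV) sits above threshold, so each pulse reliably elicits a spike after roughly 35 ms; halving `i_chr2` would leave the steady state below threshold and abolish pacing, a crude analogue of the excitation threshold discussed for [73, 74].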


[Figure 7.17 panels: 475 nm illumination gating Na+/K+ flux; light-activated action potentials in heart cells; transgenesis or cell transplant; heart expressing light-sensitive ChR2 channels; optical pacing in vivo.]
Figure 7.17 Transfer of the gene encoding ChR2 renders heart muscle cells sensitive to blue light emitted by an LED. The electrical transduction of the optical signal can be used to control heart muscle membrane potential in vitro and in vivo [71]. Source: [71]/With permission of Springer Nature.

7.6.4.2 Epilepsy

At present, the main treatment for epilepsy is drug therapy, but in some patients drugs remain ineffective and drug-refractory epilepsy develops. Traditional antiepileptic drugs lack specificity for the particular cell types in epileptic neural circuits, and the overexcitation of many neurons during seizures is dynamic, so effective treatment requires precise temporal control of neural activity. Optogenetics is specific in both time and space, can be targeted precisely to a group of neurons or even a single neuron, and allows further study of the pathogenesis of epilepsy. Tønnesen [75] and Sukhotinsky [76] induced epilepsy models in vitro and in vivo, respectively, and used vectors to express the NpHR gene in the hippocampus. The results showed that yellow-light irradiation can shorten the duration of epileptic seizures.

7.6.4.3 Parkinson’s Disease

In recent years, deep brain stimulation has gradually entered clinical practice, but it lacks spatial specificity and cannot be precisely targeted to particular cells. The emergence of optogenetics allows a deeper understanding of the pathogenesis of Parkinson’s disease and offers new perspectives on its diagnosis and treatment at the level of neurons and neural circuits.


α-Synuclein (α-Syn) and its mutants are small presynaptic proteins known to be directly related to Parkinson’s disease. Overexpression of mutant α-Syn can reduce dopamine release, leading to motor dysfunction. In one study, knocking out a dopamine-neuron gene in Drosophila caused dyskinesia in the larvae; when ChR2 was expressed in these flies, parkinsonism-related dyskinesia symptoms improved after exposure to blue light at a wavelength of 470 nm. These results suggest that light activation of ChR2 can ameliorate Parkinson’s disease symptoms caused by disordered dopaminergic synaptic transmission [77].

7.7 Prospects

Taken together, optogenetics has many advantages:

7.7.1 Accurate Time Control

Unlike chemical and electrical stimulation, optogenetic methods use light as a switch to activate cells, and the timing of light stimulation can be controlled precisely. In addition, by varying the frequency, amplitude, and other parameters of the light, the distinct effects of different stimulation parameters can be explored.
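As a concrete illustration, the stimulation parameters mentioned above (frequency, pulse width, amplitude) can be captured in a simple waveform generator. This is a purely illustrative helper, not a driver for any real stimulation hardware:

```python
# Sketch: building a millisecond-resolution light-stimulation protocol,
# parameterized by frequency, pulse width, and amplitude (the knobs
# described in the text). Purely illustrative, not a device driver.

def pulse_train(freq_hz, pulse_ms, amplitude, duration_ms, dt_ms=1.0):
    """Return a sampled light waveform: `amplitude` during each pulse,
    0 otherwise, with pulses repeating at `freq_hz`."""
    period_ms = 1000.0 / freq_hz
    n = int(duration_ms / dt_ms)
    return [amplitude if (i * dt_ms) % period_ms < pulse_ms else 0.0
            for i in range(n)]

# A 20 Hz train of 5 ms pulses for 1 s: 20 pulses, 10% duty cycle.
wave = pulse_train(freq_hz=20, pulse_ms=5, amplitude=1.0, duration_ms=1000)
```

Sweeping `freq_hz` or `amplitude` over such trains is how the parameter-dependence experiments described above are typically organized.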

7.7.2 Precise Targeting

Photosensitive proteins, the key tools of optogenetics, are encoded by DNA. Different promoters or localization signals can be attached according to specific needs, so that expression is obtained selectively in any desired cell type, achieving precise targeting.

7.7.3 Precise Cell Subtype

With traditional chemical and electrical stimulation methods, many cells with different functions are stimulated at the same time, making it impossible to dissect the specific functions of each cell type. With optogenetics, however, scientists can restrict the expression of photosensitive proteins to a particular cell type; during light stimulation, only the cells expressing photosensitive proteins are activated or inhibited.

7.7.4 Minimal Interference

We know that cells are very sensitive to the external environment. Traditional electrical stimulation and chemical stimulation methods require electrodes or drug delivery tubes to be inserted into the nerve tissue. The invasion of these foreign bodies will cause drastic changes in the environment around the cells, which will


change the physiological state of the cells, for example by causing inflammation such as glial-cell aggregation around the electrode or drug-delivery tube. The biggest advantage of optogenetics is that its noninvasive light stimulation allows cells to remain, as far as possible, in their pre-stimulation physiological environment.

Optogenetics has been developing since 2005, groping in the mud and advancing in glory; every step forward reflects the dedication and hard work of thousands of researchers. The term optogenetics appeared in 2005, and since then its novel ideas and huge application prospects have attracted scientists all over the world to develop the tool. In the following years, breakthroughs at different levels came one after another: more and more opsins were developed, the construction of stable and well-tolerated expression vectors matured, and the level of imaging and behavioral experiments kept rising. By 2009, with the close integration of optics and genetics, optogenetic manipulation using microbially derived opsins had been widely adopted. At the same time, genetic targeting of neural circuits developed from the initial hippocampal cell lines to nematodes, fruit flies, zebrafish, and rodents, finally yielding mouse lines that specifically express opsins. Since 2005, papers on optogenetics have grown exponentially, appearing continually in the top journals Nature and Science. The emergence of optogenetics has made research on neural circuits more controllable: humans can use this technology to achieve targeted regulation of complex biological systems, and even of physiological events in freely moving mammals. Optogenetics has also gradually become a benchmark tool for dissecting the neural circuits underlying behavior, first established in invertebrate research. Optical control is extremely fast, reaching millisecond timescales, with accurate targeting.

On this basis, researchers have combined optogenetics with fMRI or positron emission tomography (PET) to perform whole-brain imaging of the activity patterns produced by a limited set of nerve cells. It can be said that optogenetics has led biological research into a new field, promoted the further development of biology, and provided new ideas for understanding the nature of the human body and of disease. The vigorous development of optogenetics has also inspired analogous methods such as magnetogenetics, sonogenetics, and even mechanogenetics. Optogenetics can also be combined with gene-editing technology, using light to perform precise gene editing at fixed time points, and, in the future, with big-data analysis: whether for improving optogenetic tools or for interpreting the information fed back from neural circuits, high-throughput analysis and artificial-intelligence data analysis will be required. Optogenetics will continue to develop and to connect with more disciplines and technical branches. There is no doubt that researchers will discover more light-controlled switch molecules and will develop more experimental methods to use them to control various cellular


molecular functions. Professor Deisseroth believes that the most concrete conceivable goal for optogenetic technology is to control every cell in the mammalian brain independently. A more open question is how the technology can be applied in clinical research; precisely locating target nerve cells in the clinic requires more solid basic scientific evidence. As with other emerging technologies, we cannot predict what changes optogenetics will bring to biology or to humanity, nor which fields it will transform. What is certain is that, through the application of optogenetics in basic neuroscience, more molecular targets for drug development, more circuit sites for computer simulation of the human brain, and more methods and strategies for regenerative medicine, such as repairing the human brain, will be discovered in the future. As Edward S. Boyden said, “We are just beginning.”

References

1 Scanziani, M. and Hausser, M. (2009). Electrophysiology in the age of light. Nature 461: 930–939. https://doi.org/10.1038/nature08540.
2 Aravanis, A.M., Wang, L.P., Zhang, F. et al. (2007). An optical neural interface: in vivo control of rodent motor cortex with integrated fiberoptic and optogenetic technology. J. Neural Eng. 4: S143–S156. https://doi.org/10.1088/1741-2560/4/3/s02.
3 Yaroslavsky, A.N., Schulze, P.C., Yaroslavsky, I.V. et al. (2002). Optical properties of selected native and coagulated human brain tissues in vitro in the visible and near infrared spectral range. Phys. Med. Biol. 47: 2059–2073. https://doi.org/10.1088/0031-9155/47/12/305.
4 Peterka, D.S., Takahashi, H., and Yuste, R. (2011). Imaging voltage in neurons. Neuron 69: 9–21. https://doi.org/10.1016/j.neuron.2010.12.010.
5 Arrenberg, A.B., Del Bene, F., and Baier, H. (2009). Optical control of zebrafish behavior with halorhodopsin. Proc. Natl. Acad. Sci. U.S.A. 106: 17968–17973. https://doi.org/10.1073/pnas.0906252106.
6 Borue, X., Cooper, S., Hirsh, J. et al. (2009). Quantitative evaluation of serotonin release and clearance in Drosophila. J. Neurosci. Methods 179: 300–308. https://doi.org/10.1016/j.jneumeth.2009.02.013.
7 Haeusser, M. (2014). Optogenetics: the age of light. Nat. Methods 11: 1012–1014. https://doi.org/10.1038/nmeth.3111.
8 Shimomura, O., Johnson, F.H., and Saiga, Y. (1962). Extraction, purification and properties of aequorin, a bioluminescent protein from luminous hydromedusan, Aequorea. J. Cell. Comp. Physiol. 59: 223. https://doi.org/10.1002/jcp.1030590302.
9 Livet, J., Weissman, T.A., Kang, H.N. et al. (2007). Transgenic strategies for combinatorial expression of fluorescent proteins in the nervous system. Nature 450: 56. https://doi.org/10.1038/nature06293.


10 Oesterhelt, D. and Stoeckenius, W. (1971). Rhodopsin-like protein from purple membrane of Halobacterium halobium. Nat. New Biol. 233: 149. https://doi.org/10.1038/newbio233149a0.
11 Matsuno-Yagi, A. and Mukohata, Y. (1977). Two possible roles of bacteriorhodopsin; a comparative study of strains of Halobacterium halobium differing in pigmentation. Biochem. Biophys. Res. Commun. 78: 237–243. https://doi.org/10.1016/0006-291X(77)91245-1.
12 Nagel, G., Ollig, D., Fuhrmann, M. et al. (2002). Channelrhodopsin-1: a light-gated proton channel in green algae. Science 296: 2395–2398. https://doi.org/10.1126/science.1072068.
13 Zemelman, B.V., Lee, G.A., Ng, M. et al. (2002). Selective photostimulation of genetically chARGed neurons. Neuron 33: 15–22. https://doi.org/10.1016/s0896-6273(01)00574-8.
14 Lima, S.Q. and Miesenbock, G. (2005). Remote control of behavior through genetically targeted photostimulation of neurons. Cell 121: 141–152. https://doi.org/10.1016/j.cell.2005.02.004.
15 Boyden, E.S., Zhang, F., Bamberg, E. et al. (2005). Millisecond-timescale, genetically targeted optical control of neural activity. Nat. Neurosci. 8: 1263–1268. https://doi.org/10.1038/nn1525.
16 Sileo, L., Pisanello, M., Della Patria, A. et al. (2015). Optical fiber technologies for in-vivo light delivery and optogenetics. In: 2015 17th International Conference on Transparent Optical Networks (ICTON), 1–5. IEEE.
17 Zhang, J., Laiwalla, F., Kim, J.A. et al. (2009). Integrated device for optical stimulation and spatiotemporal electrical recording of neural activity in light-sensitized brain tissue. J. Neural Eng. 6: 055007. https://doi.org/10.1088/1741-2560/6/5/055007.
18 Abaya, T.V.F., Blair, S., Tathireddy, P. et al. (2012). A 3D glass optrode array for optical neural stimulation. Biomed. Opt. Express 3: 3087–3104. https://doi.org/10.1364/boe.3.003087.
19 Royer, S., Zemelman, B.V., Barbic, M. et al. (2010). Multi-array silicon probes with integrated optical fibers: light-assisted perturbation and recording of local neural circuits in the behaving animal. Eur. J. Neurosci. 31: 2279–2291. https://doi.org/10.1111/j.1460-9568.2010.07250.x.
20 Klapoetke, N.C., Murata, Y., Kim, S.S. et al. (2014). Independent optical excitation of distinct neural populations. Nat. Methods 11: 338–346. https://doi.org/10.1038/nmeth.2836.
21 Valluru, L., Xu, J., Zhu, Y.L. et al. (2005). Ligand binding is a critical requirement for plasma membrane expression of heteromeric kainate receptors. J. Biol. Chem. 280: 6085–6093. https://doi.org/10.1074/jbc.M411549200.
22 Banghart, M., Borges, K., Isacoff, E. et al. (2004). Light-activated ion channels for remote control of neuronal firing. Nat. Neurosci. 7: 1381–1386. https://doi.org/10.1038/nn1356.
23 Yizhar, O., Fenno, L.E., Davidson, T.J. et al. (2011). Optogenetics in neural systems. Neuron 71: 9–34. https://doi.org/10.1016/j.neuron.2011.06.004.


24 Nagel, G., Brauner, M., Liewald, J.F. et al. (2005). Light activation of channelrhodopsin-2 in excitable cells of Caenorhabditis elegans triggers rapid behavioral responses. Curr. Biol. 15: 2279–2284. https://doi.org/10.1016/j.cub.2005.11.032.
25 Gunaydin, L.A., Yizhar, O., Berndt, A. et al. (2010). Ultrafast optogenetic control. Nat. Neurosci. 13: 387–392. https://doi.org/10.1038/nn.2495.
26 Berndt, A., Yizhar, O., Gunaydin, L.A. et al. (2009). Bi-stable neural state switches. Nat. Neurosci. 12: 229–234. https://doi.org/10.1038/nn.2247.
27 Gradinaru, V., Thompson, K.R., and Deisseroth, K. (2008). eNpHR: a Natronomonas halorhodopsin enhanced for optogenetic applications. Brain Cell Biol. 36: 129–139. https://doi.org/10.1007/s11068-008-9027-6.
28 Gradinaru, V., Zhang, F., Ramakrishnan, C. et al. (2010). Molecular and cellular approaches for diversifying and extending optogenetics. Cell 141: 154–165. https://doi.org/10.1016/j.cell.2010.02.037.
29 Brown, J., Behnam, R., Coddington, L. et al. (2018). Expanding the optogenetics toolkit by topological inversion of rhodopsins. Cell 175: 1131. https://doi.org/10.1016/j.cell.2018.09.026.
30 Chen, S., Weitemier, A.Z., Zeng, X. et al. (2018). Near-infrared deep brain stimulation via upconversion nanoparticle-mediated optogenetics. Science 359: 679–683. https://doi.org/10.1126/science.aaq1144.
31 Zhang, F., Wang, L.P., Brauner, M. et al. (2007). Multimodal fast optical interrogation of neural circuitry. Nature 446: 633–639. https://doi.org/10.1038/nature05744.
32 Cui, Y., Li, Y., Xing, Y. et al. (2017). Thermal design of rectangular microscale inorganic light-emitting diodes. Appl. Therm. Eng. 122: 653–660. https://doi.org/10.1016/j.applthermaleng.2017.05.020.
33 Zhang, F., Gradinaru, V., Adamantidis, A.R. et al. (2010). Optogenetic interrogation of neural circuits: technology for probing mammalian brain structures. Nat. Protoc. 5: 439–456. https://doi.org/10.1038/nprot.2009.226.
34 Park, S., Guo, Y., Jia, X. et al. (2017). One-step optogenetics with multifunctional flexible polymer fibers. Nat. Neurosci. 20: 612. https://doi.org/10.1038/nn.4510.
35 Jin, Y., Zhang, Y., Hu, L. et al. (2018). Machine learning guided rapid focusing with sensor-less aberration corrections. Opt. Express 26: 30162–30171. https://doi.org/10.1364/oe.26.030162.
36 Hu, L., Hu, S., Gong, W. et al. (2019). Learning-based Shack–Hartmann wavefront sensor for high-order aberration detection. Opt. Express 27: 33504–33517. https://doi.org/10.1364/oe.411191.
37 Seymour, J.P., Wu, F., Wise, K.D. et al. (2017). State-of-the-art MEMS and microsystem tools for brain research. Microsyst. Nanoeng. 3. https://doi.org/10.1038/micronano.2016.66.
38 Kim, T.I., McCall, J.G., Jung, Y.H. et al. (2013). Injectable, cellular-scale optoelectronics with applications for wireless optogenetics. Science 340: 211–216. https://doi.org/10.1126/science.1232437.


39 Wu, F., Stark, E., Ku, P.C. et al. (2015). Monolithically integrated μLEDs on silicon neural probes for high-resolution optogenetic studies in behaving animals. Neuron 88: 1136–1148. https://doi.org/10.1016/j.neuron.2015.10.032.
40 Si, K., Tang, L., Du, J. et al. (2020). Light focusing through scattering medium based on binary transmission matrix. Chin. J. Lasers 47. https://doi.org/10.3788/cjl202047.0207038.
41 Szabo, V., Ventalon, C., De Sars, V. et al. (2014). Spatially selective holographic photoactivation and functional fluorescence imaging in freely behaving mice with a fiberscope. Neuron 84: 1157–1169. https://doi.org/10.1016/j.neuron.2014.11.005.
42 Zhao, Q., Shi, X., Gong, W. et al. (2018). Large field-of-view and deep tissue optical micro-imaging based on parallel wavefront correction algorithm. Chin. J. Lasers 45. https://doi.org/10.3788/cjl201845.1207001.
43 Zhao, Q., Shi, X., Zhu, X. et al. (2019). Large field of view correction by using conjugate adaptive optics with multiple guide stars. J. Biophotonics 12. https://doi.org/10.1002/jbio.201800225.
44 Wu, C., Chen, J., Zhang, B. et al. (2020). Multiple guide stars optimization in conjugate adaptive optics for deep tissue imaging. Opt. Commun. 459. https://doi.org/10.1016/j.optcom.2019.124891.
45 Hu, S., Hu, L., Zhang, B. et al. (2020). Simplifying the detection of optical distortions by machine learning. J. Innovative Opt. Health Sci. 13. https://doi.org/10.1142/s1793545820400015.
46 Hu, L., Hu, S., Gong, W. et al. (2020). Deep learning assisted Shack–Hartmann wavefront sensor for direct wavefront detection. Opt. Lett. 45: 3741–3744. https://doi.org/10.1364/ol.395579.
47 Luo, Y., Yan, S., Li, H. et al. (2020). Focusing light through scattering media by reinforced hybrid algorithms. APL Photonics 5. https://doi.org/10.1063/1.5131181.
48 Jin, Y., Chen, J., Wu, C. et al. (2020). Wavefront reconstruction based on deep transfer learning for microscopy. Opt. Express 28: 20738–20747. https://doi.org/10.1364/oe.396321.
49 Ronzitti, E., Ventalon, C., Canepari, M. et al. (2017). Recent advances in patterned photostimulation for optogenetics. J. Opt. 19. https://doi.org/10.1088/2040-8986/aa8299.
50 Wang, S., Szobota, S., Wang, Y. et al. (2007). All optical interface for parallel, remote, and spatiotemporal control of neuronal activity. Nano Lett. 7: 3859–3863. https://doi.org/10.1021/nl072783t.
51 Warp, E., Agarwal, G., Wyart, C. et al. (2012). Emergence of patterned activity in the developing zebrafish spinal cord. Curr. Biol. 22: 93–102. https://doi.org/10.1016/j.cub.2011.12.002.
52 Lutz, C., Otis, T.S., DeSars, V. et al. (2008). Holographic photolysis of caged neurotransmitters. Nat. Methods 5: 821–827. https://doi.org/10.1038/nmeth.1241.
53 Packer, A.M., Peterka, D.S., Hirtz, J.J. et al. (2012). Two-photon optogenetics of dendritic spines and neural circuits. Nat. Methods 9: 1202–1205. https://doi.org/10.1038/nmeth.2249.
54 Yang, W., Miller, J.E.K., Carrillo-Reid, L. et al. (2016). Simultaneous multi-plane imaging of neural circuits. Neuron 89: 269–284. https://doi.org/10.1016/j.neuron.2015.12.012.
55 Farah, N., Reutsky, I., Shoham, S. et al. (2007). Patterned optical activation of retinal ganglion cells. In: 2007 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vols 1–16, 6368–6370. IEEE.
56 Hernandez, O., Papagiakoumou, E., Tanese, D. et al. (2016). Three-dimensional spatiotemporal focusing of holographic patterns. Nat. Commun. 7. https://doi.org/10.1038/ncomms11928.
57 Accanto, N., Molinier, C., Tanese, D. et al. (2018). Multiplexed temporally focused light shaping for high-resolution multi-cell targeting. Optica 5. https://doi.org/10.1364/optica.5.001478.
58 Pegard, N.C., Mardinly, A.R., Oldenburg, I.A. et al. (2017). Three-dimensional scanless holographic optogenetics with temporal focusing (3D-SHOT). Nat. Commun. 8: 1228. https://doi.org/10.1038/s41467-017-01031-3.
59 Packer, A.M., Russell, L.E., Dalgleish, H.W. et al. (2015). Simultaneous all-optical manipulation and recording of neural circuit activity with cellular resolution in vivo. Nat. Methods 12: 140–146. https://doi.org/10.1038/nmeth.3217.
60 Carrillo-Reid, L., Yang, W., Kang Miller, J.E. et al. (2017). Imaging and optically manipulating neuronal ensembles. Annu. Rev. Biophys. 46: 271–293. https://doi.org/10.1146/annurev-biophys-070816-033647.
61 Konermann, S., Brigham, M.D., Trevino, A.E. et al. (2014). Optical control of mammalian endogenous transcription and epigenetic states. Mol. Ther. 22: S93–S94.
62 Liu, H., Yu, X., Li, K. et al. (2008). Photoexcited CRY2 interacts with CIB1 to regulate transcription and floral initiation in Arabidopsis. Science 322: 1535–1539. https://doi.org/10.1126/science.1163927.
63 Zhang, K. and Cui, B. (2015). Optogenetic control of intracellular signaling pathways. Trends Biotechnol. 33: 92–100.

References

54 Yang, W., Miller, J.E.K., Carrillo-Reid, L. et al. (2016). Simultaneous multi-plane imaging of neural circuits. Neuron 89: 269–284. https://doi.org/10.1016/j.neuron .2015.12.012. 55 Farah, N., Reutsky, I., Shoham, S. et al. (2007). Patterned optical activation of retinal ganglion cells. In: 2007 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vols 1–16, 6368–6370. IEEE. 56 Hernandez, O., Papagiakoumou, E., Tanese, D. et al. (2016). Three-dimensional spatiotemporal focusing of holographic patterns. Nat. Commun. 7: https://doi .org/10.1038/ncomms11928. 57 Accanto, N., Molinier, C., Tanese, D. et al. (2018). Multiplexed temporally focused light shaping for high-resolution multi-cell targeting. Optica 5: https:// doi.org/10.1364/optica.5.001478. 58 Pegard, N.C., Mardinly, A.R., Oldenburg, I.A. et al. (2017). Three-dimensional scanless holographic optogenetics with temporal focusing (3D-SHOT). Nat. Commun. 8: 1228. 2017/11/02. https://doi.org/10.1038/s41467-017-01031-3. 59 Packer, A.M., Russell, L.E., Dalgleish, H.W. et al. (2015). Simultaneous all-optical manipulation and recording of neural circuit activity with cellular resolution in vivo. Nat. Methods 12: 140–146. 2014/12/23. https://doi.org/10.1038/nmeth.3217. 60 Carrillo-Reid, L., Yang, W., Kang Miller, J.E. et al. (2017). Imaging and optically manipulating neuronal ensembles. Annu. Rev. Biophys. 46: 271–293. 2017/03/17. https://doi.org/10.1146/annurev-biophys-070816-033647. 61 Konermann, S., Brigham, M.D., Trevino, A.E. et al. (2014). Optical control of mammalian endogenous transcription and epigenetic states. Mol. Ther. 22: S93–S94. 62 Liu, H., Yu, X., Li, K. et al. (2008). Photoexcited CRY2 interacts with CIB1 to regulate transcription and floral initiation in Arabidopsis. Science 322: 1535–1539. https://doi.org/10.1126/science.1163927. 63 Zhang, K. and Cu, B. (2015). Optogenetic control of intracellular signaling pathways. Trends Biotechnol. 33: 92–100. 
https://doi.org/10.1016/j.tibtech.2014.11 .007. 64 Taslimi, A., Zoltowski, B., Miranda, J.G. et al. (2016). Optimized secondgeneration CRY2-CIB dimerizers and photoactivatable Cre recombinase. Nat. Chem. Biol. 12: 425–430. https://doi.org/10.1038/nchembio.2063. 65 Heydendael, W. and Jacobson, L. (2010). Widespread hypothalamic-pituitaryadrenocortical axis-relevant and mood-relevant effects of chronic fluoxetine treatment on glucocorticoid receptor gene expression in mice. Eur. J. Neurosci. 31: 892–902. https://doi.org/10.1111/j.1460-9568.2010.07131.x. 66 Zhang, X. and van den Pol, A.N. (2017). Rapid binge-like eating and body weight gain driven by zona incerta GABA neuron activation. Science 356: 853–858. https://doi.org/10.1126/science.aam7100. 67 Huber, D., Petreanu, L., Ghitani, N. et al. (2008). Sparse optical microstimulation in barrel cortex drives learned behaviour in freely moving mice. Nature 451: 61–64. https://doi.org/10.1038/nature06445.

243

244

7 Optogenetics

68 Chaudhury, D., Walsh, J.J., Friedman, A.K. et al. (2013). Rapid regulation of depression-related behaviours by control of midbrain dopamine neurons. Nature 493: 532–536. https://doi.org/10.1038/nature11713. 69 Daou, I., Tuttle, A.H., Longo, G. et al. (2013). Remote optogenetic activation and sensitization of pain pathways in freely moving mice. J. Neurosci. 33: 18631. https://doi.org/10.1523/JNEUROSCI.2424-13.2013. 70 Arrenberg Aristides, B., Stainier Didier, Y.R., Baier, H. et al. (2010). Optogenetic control of cardiac function. Science 330: 971–974. https://doi.org/10.1126/science .1195929. 71 Knollmann, B.C. (2010). Pacing lightly: optogenetics gets to the heart. Nat. Methods 7: 889–891. 2010/10/30. https://doi.org/10.1038/nmeth1110-889. 72 Vogt, C.C., Bruegmann, T., Malan, D. et al. (2015). Systemic gene transfer enables optogenetic pacing of mouse hearts. Cardiovasc. Res. 106: 338–343. https://doi.org/10.1093/cvr/cvv004. 73 Boyle, P.M., Williams, J.C., Ambrosi, C.M. et al. (2013). A comprehensive multiscale framework for simulating optogenetics in the heart. Nat. Commun. 4: https://doi.org/10.1038/ncomms3370. 74 Bruegmann, T., Boyle, P.M., Vogt, C.C. et al. (2016). Optogenetic defibrillation terminates ventricular arrhythmia in mouse hearts and human simulations. J. Clin. Invest. 126: 3894–3904. https://doi.org/10.1172/JCI88950. 75 Tønnesen, J., Sørensen Andreas, T., Deisseroth, K. et al. (2009). Optogenetic control of epileptiform activity. Proc. Natl. Acad. Sci. 106: 12162–12167. https://doi .org/10.1073/pnas.0901915106. 76 Sukhotinsky, I., Chan, A.M., Ahmed, O.J. et al. (2013). Optogenetic delay of status epilepticus onset in an in vivo rodent epilepsy model. PLoS One 8: e62013. 2013/05/03. https://doi.org/10.1371/journal.pone.0062013. 77 Zabrocki, P., Bastiaens, I., Delay, C. et al. (2008). Phosphorylation, lipid raft interaction and traffic of alpha-synuclein in a yeast model for Parkinson. Biochim. Biophys. Acta 1783: 1767–1780. 
https://doi.org/10.1016/j.bbamcr.2008 .06.010.

245

8 Optical Theranostics Based on Gold Nanoparticles

Cuiping Yao¹, Xiao-Xuan Liang², Sijia Wang¹, Jing Xin¹, Luwei Zhang³, and Zhenxi Zhang¹

¹ Institute of Biomedical Photonics and Sensor, School of Life Science and Technology, Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
² Institute of Biomedical Optics, University of Lübeck, Lübeck 23562, Germany
³ School of Food Equipment Engineering & Science, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China

8.1 Thermoplasmonic Effects of AuNP

Photo-based diagnoses and therapeutic treatments have gained great attention because of their minimally invasive and precise modalities in biomedical fields [1]. Noble gold nanoparticles (AuNPs) exhibit significant advantages, such as tunable optical and photothermal features, which have led to manifold applications, particularly in AuNP-mediated optoporation [2–5], photothermal therapy [1, 6], photoacoustic (PA) detection [7], and in vivo theranostics [8]. The general physics behind these applications relies on the thermoplasmonic properties of AuNPs, which, at the resonant wavelength, effectively convert the photonic energy of the incident light into thermal energy of the AuNPs [9]. Depending on the incident laser energy, the induced thermal effects range from a moderate temperature increase in the surrounding medium to the formation of micrometer-sized photothermal bubbles and associated acoustic effects. In this section, we briefly depict the essential physical picture. We focus on spherical AuNPs since they are frequently used in practice and can be treated theoretically with ease.

8.1.1 Overview of Thermoplasmonic Effects

Thermoplasmonic effects generally involve the generation and dissipation of heat from an AuNP irradiated by a laser pulse. This is a multiscale and multiphysics challenge that typically spans time scales from femtoseconds to microseconds and length scales from nanometers to micrometers, and involves multiple physical events. Figure 8.1 briefly illustrates the involved physical events on a schematic time axis, which guides the rationale of this section.

Biomedical Photonic Technologies, First Edition. Edited by Zhenxi Zhang, Shudong Jiang, and Buhong Li. © 2023 WILEY-VCH GmbH. Published 2023 by WILEY-VCH GmbH.


[Figure 8.1 plot area: events on a 1 fs to 1 ns time axis: electron heating; electron–phonon energy transfer; heat diffusion and interface conductance; cavitation and bubble dynamics.]

Figure 8.1 Schematic of the transient events involved in thermoplasmonic effects, with their approximate time scales. Source: Xiao-Xuan Liang.

First, laser energy is deposited into the electron system of the AuNP via plasmonic absorption. It excites electrons within the AuNP to higher electronic levels, which then redistribute the energy over roughly 500 fs until reaching a new equilibrium distribution [1]. Because the electron temperature is higher than the phonon temperature, energy then passes from the heated electrons to the lattice phonons within a few picoseconds, a process known as electron–phonon energy transfer [10]. Owing to the temperature gradient between the hot AuNP and the cold surrounding water, heat is subsequently conducted from the AuNP into the water. Because of the interfacial thermal resistance (Kapitza resistance) at the AuNP–water interface, this conduction can be viewed as a stepwise process: heat first crosses the NP–water interface and then diffuses into the surrounding water. Depending on NP size, this occurs on a time scale of tens to hundreds of ps. If the surrounding water is heated beyond a threshold, a phase explosion occurs, leading to cavitation and bubble dynamics within hundreds of ns to μs.

8.1.2 Plasmonic Absorption of AuNP

When an electromagnetic (EM) wave passes through an AuNP, the free electrons in gold are excited by the external EM wave to form a plasmon. If the plasmon frequency coincides with the driving frequency of the EM wave, the plasmon is in resonance with the EM wave and absorbs it strongly, the so-called resonant plasmonic absorption. The AuNP also absorbs EM waves at other wavelengths, but less efficiently. In general, the plasmonic absorption efficiency depends on the size of the AuNP and the laser wavelength. It can be quantitatively described by the plasmonic absorption cross section σ_abs, which can be calculated analytically by Mie theory for AuNPs of spherical shape [11]. Mie theory has been covered by many classic textbooks [12–14] and will not be elucidated here. Interested readers can refer to Refs. [10, 13] and the affiliated MATLAB codes. Figure 8.2 shows σ_abs as a function of wavelength λ and NP radius R_NP. In Figure 8.2a, one can observe that for NPs of different sizes, the resonant plasmonic absorption peak is located at λ ≈ 540 nm. For λ < 500 nm, σ_abs becomes slightly smaller than at the resonant frequency. This is because the bound 5d-band electrons are excited at shorter wavelengths, which gives rise to the friction term in the Drude–Lorentz oscillator and reduces the oscillation amplitude. For longer wavelengths, σ_abs becomes much smaller than that at resonant

[Figure 8.2 plot area: panel (a) shows σ_abs versus λ (400–1000 nm) for R_NP = 10, 30, 50, and 100 nm; panel (b) shows σ_abs versus R_NP for λ = 400 nm and λ = 532 nm.]
Figure 8.2 Plasmonic absorption cross section σ_abs as a function of wavelength λ (a) and of NP radius R_NP (b), calculated by Mie theory. For small NPs, σ_abs = βR_NP³, as reflected in the inset of (b) (log–log scale), where the dashed lines have a slope of three to demonstrate the linearity of σ_abs with respect to R_NP³. Source: Xiao-Xuan Liang.

frequency, indicating that much less heat is generated in this wavelength region, the so-called off-resonance region. It can also be seen that σ_abs(λ) becomes larger as R_NP increases. This is demonstrated more clearly in Figure 8.2b, where for different wavelengths σ_abs exhibits a general increasing trend with R_NP (solid lines). However, for NPs with radii smaller than approximately 30 nm, σ_abs scales linearly with R_NP³ with a coefficient β (dashed lines), whereas for larger NPs σ_abs stays below this linearity. This is because small NPs can be regarded as a pure dipolar charge oscillator, whereas for large NPs multipolar terms of the charge distribution need to be taken into account, which leads to the nonlinearity of σ_abs with respect to R_NP³. β is found to be 0.430 nm⁻¹ at 532 nm and 0.237 nm⁻¹ at 400 nm. The energy deposited in the AuNP via plasmonic absorption can be calculated by

E_abs = σ_abs × F_0   (8.1)

Here, σ_abs is the plasmonic absorption cross section, and F_0 is the fluence of the laser pulse. The temperature increase of the NP, ΔT_NP, is in fact a reflection of the energy conversion from light to heat. In the ideal situation, the laser energy is totally deposited into the NP and no heat diffuses between the NP and its surroundings. In this case, ΔT_NP is simply given by

ΔT_NP(R_NP) = σ_abs × F_0 / (V_NP × C_Au)   (8.2)

where V_NP = (4/3)πR_NP³ is the volume of the AuNP and C_Au = 2.5 J/cm³/K denotes the volumetric heat capacity of gold. Equation (8.2) gives a rough estimate of the maximum NP temperature, which is proportional to σ_abs/V_NP. If σ_abs were linearly proportional to V_NP regardless of NP size and no heat dissipation occurred, one would expect no size dependence of ΔT_NP, because the influence of R_NP in σ_abs and V_NP cancels out. Likewise, one would expect a size-independent threshold fluence F_th for reaching a fixed bubble formation temperature. Inserting the value of β into Eq. (8.2), one


can obtain an ideal threshold fluence of ≈1 mJ/cm² at 532 nm. In reality, however, F_th as a function of R_NP features a V-shape, with a minimum at R_NP ≈ 30 nm (see Figure 8.8). Therefore, a more elaborate understanding is required.
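The size independence implied by Eq. (8.2) in the dipole regime can be checked numerically. The sketch below (Python, SI units) assumes σ_abs = βR_NP³ with the β value for 532 nm quoted above, so that σ_abs/V_NP = 3β/(4π) is independent of R_NP; the implied ideal threshold fluence for a ≈280 K temperature rise then comes out of order 1 mJ/cm², consistent with the estimate above. The 280 K rise to the bubble threshold is an illustrative assumption.

```python
import math

# Ideal temperature rise of a small AuNP (Eq. (8.2)), assuming sigma_abs = beta * R^3
# (dipole regime, R_NP < ~30 nm) and no heat dissipation. All quantities in SI units.
C_AU = 2.5e6          # volumetric heat capacity of gold, J/m^3/K
BETA_532 = 0.430e9    # beta at 532 nm, converted from 0.430 nm^-1 to m^-1

def ideal_temperature_rise(fluence_mJ_cm2):
    """Size-independent Delta-T for small NPs: sigma_abs / V_NP = 3*beta/(4*pi)."""
    f0 = fluence_mJ_cm2 * 10.0               # 1 mJ/cm^2 = 10 J/m^2
    return 3.0 * BETA_532 * f0 / (4.0 * math.pi * C_AU)

def ideal_threshold_fluence(dT_th=280.0):
    """Fluence (mJ/cm^2) producing a rise dT_th (assumed ~280 K to the bubble threshold)."""
    return dT_th * 4.0 * math.pi * C_AU / (3.0 * BETA_532) / 10.0

print(ideal_temperature_rise(5.0))   # ~2.1e3 K for 5 mJ/cm^2, for any small R_NP
print(ideal_threshold_fluence())     # ~0.7 mJ/cm^2, of order the ~1 mJ/cm^2 quoted
```
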

8.1.3 Electron–Phonon Energy Transfer

The absorbed photonic energy does not heat up the AuNP instantaneously. Instead, it first heats the electrons, which then thermalize the AuNP via electron–phonon energy transfer. The first process occurs on the same time scale as the pulse duration, whereas the second happens on the order of 10 ps, much shorter than the thermal conduction time from the AuNP into the surrounding water, which is about hundreds of ps. Therefore, for fs to ps laser pulses, the absorbed energy is thermally confined in the AuNP and heat conduction from the AuNP into water occurs much later; heating and dissipation can in this case be viewed as decoupled. For ns pulses, however, heating and dissipation occur concurrently. The whole thermalization process can be described by the well-established two-temperature model (TTM) [10, 15–17]. Since thermal equilibration within the AuNP via both electrons and phonons is very fast (≈1 ps), thermal diffusion terms within the AuNP can be neglected and the temperature distribution in the AuNP can be viewed as homogeneous [18]. Under these assumptions, the TTM reads

C_e dT_e(t)/dt = G(T_NP(t) − T_e(t)) + (σ_abs F_0 / V_NP) q(t)
C_Au dT_NP(t)/dt = G(T_e(t) − T_NP(t))   (8.3)

Here, T_e and T_NP represent the temperatures of the electrons and the AuNP, respectively; C_e denotes the volumetric heat capacity of the electrons; G is the electron–phonon coupling factor; and q(t) is the normalized Gaussian pulse profile

q(t) = (1/τ_L) exp(−π((t − 2τ_L)/τ_L)²)   (8.4)

In general, C_e is a function of the electron temperature T_e. At low temperatures (T_e < 3000 K), C_e is linearly proportional to T_e with an electron heat capacity coefficient γ, i.e. C_e(T_e) = γT_e. The experimentally determined value γ_exp = 68 J/m³/K² is not far from the theoretical value γ_th = 63 J/m³/K², calculated from the free-electron gas model [17]. However, when T_e > 3000 K, the bound d-band electrons contribute to C_e, and γ deviates strongly from its low-temperature value.
In a similar fashion, at low temperatures the electron–phonon coupling factor G is a constant, G ≈ 2 × 10¹⁶ W/m³/K. However, when T_e > 3000 K, G increases rapidly with T_e [17]. Figure 8.3 shows the calculation results simulated by the TTM, with T_e-dependent C_e and G taken from first principles [17]. For a Gaussian pulse centered at 2τ_L, the electron temperature increases very rapidly from room temperature to its maximum value T_e,max = 6109 K, which is reached at t ≈ 320 fs, at the end of the

Figure 8.3 Temporal evolution of the electron temperature T_e (dashed line) and AuNP temperature T_NP (red solid line) calculated by the two-temperature model, and T_NP calculated by the one-temperature model (blue solid line). Calculation parameters are R_NP = 50 nm, λ = 532 nm, F_0 = 5 mJ/cm² and τ_L = 100 fs, with room temperature T_0 = 293 K. The dash-dot line shows the laser pulse profile. Source: Xiao-Xuan Liang.

laser pulse. Afterwards, T_e starts to decrease while the AuNP temperature T_NP starts to increase because of the electron–phonon energy transfer. T_e and T_NP become thermally equilibrated within 40 ps and reach an equilibrium temperature of 1080 K. In most cases, however, it is the temporal evolution of the NP temperature rather than that of the electron temperature that is the object of study. In this regard, the complex TTM can be approximated by a one-temperature model (OTM). In the OTM, we assume a characteristic electron–phonon energy transfer time τ_ep = 1.7 ps [10]. The normalized energy transfer rate from electrons to phonons is

p_NP(t) = (1/τ_ep) exp(−t/τ_ep)   (8.5)

and the normalized energy deposited from the electrons into the NP lattice is the integral of p_NP(t) over time, given by the growth function

e_NP(t) = ∫_0^t p_NP(t′) dt′ = 1 − exp(−t/τ_ep)   (8.6)

The temperature increase rate of the AuNP and the temperature increase in the AuNP are then described by

dT_NP(t)/dt = (σ_abs F_0 / (C_Au V_NP)) p_NP(t)   (8.7)

and

T_NP(t) = (σ_abs F_0 / (C_Au V_NP)) e_NP(t)   (8.8)

respectively. From Figure 8.3, one can observe that the maximum AuNP temperatures simulated by the TTM and the OTM are almost the same. Thus, many studies still use the simple OTM to reduce the degree of complexity [10], whereas other studies extend the TTM to consider more complex scenarios for pulse durations longer than 1 ns [16]. Both models are applicable in the pulse duration range from fs to ns.
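As a rough numerical illustration of the agreement between the two models, the sketch below integrates the TTM of Eq. (8.3) using the low-temperature approximations C_e = γT_e and constant G (so it cannot reproduce the first-principles values of Figure 8.3) and compares the resulting lattice temperature rise with the long-time limit of the OTM, Eq. (8.8). The 20 nm radius, 5 mJ/cm² fluence, and dipole-regime cross section σ_abs = βR_NP³ are illustrative assumptions, not values from this chapter; heat loss to water is neglected.

```python
import math

# Numerical TTM (Eq. (8.3)) versus closed-form OTM limit (Eq. (8.8)).
# Low-temperature approximations: Ce = gamma*Te, constant G. SI units throughout.
GAMMA = 68.0      # electron heat capacity coefficient, J/m^3/K^2
G_EP = 2.0e16     # electron-phonon coupling factor, W/m^3/K
C_AU = 2.5e6      # volumetric heat capacity of gold, J/m^3/K

def peak_rise(r_np=20e-9, fluence=50.0, tau_l=100e-15, t0=293.0,
              dt=0.5e-15, t_end=150e-12):
    """Return (TTM lattice temperature rise, OTM maximum rise)."""
    sigma_abs = 0.430e9 * r_np**3          # hypothetical dipole-regime cross section, m^2
    v_np = 4.0 / 3.0 * math.pi * r_np**3   # NP volume, m^3
    eps = sigma_abs * fluence / v_np       # absorbed energy density, J/m^3
    u_e = 0.5 * GAMMA * t0**2              # electron energy density for Ce = gamma*Te
    t_np = t0
    for i in range(int(t_end / dt)):
        t = i * dt
        te = math.sqrt(2.0 * u_e / GAMMA)  # Te from the electron energy density
        q = math.exp(-math.pi * ((t - 2.0 * tau_l) / tau_l) ** 2) / tau_l  # Eq. (8.4)
        transfer = G_EP * (te - t_np)      # electron-phonon flux density, W/m^3
        u_e += dt * (eps * q - transfer)
        t_np += dt * transfer / C_AU
    dT_otm = eps / C_AU                    # Eq. (8.8) as t -> infinity
    return t_np - t0, dT_otm

dT_ttm, dT_otm = peak_rise()
print(dT_ttm, dT_otm)  # both ~2e3 K; OTM slightly higher (electron heat capacity neglected)
```

Integrating the electron energy density u_e = γT_e²/2 instead of T_e itself keeps the explicit Euler update well behaved during the steep heating phase.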


8.1.4 Heat Diffusion and Interface Conductance

Due to the temperature gradient between the hot AuNP and the relatively cold surrounding water, heat diffuses from the AuNP into the water. This process is described by the heat diffusion equations, which, under the assumption of radial symmetry, read

C_Au ∂_t T(r, t) = κ_Au ∇²T(r, t) + ε_P,NP(t),   for r < R_NP
C_w ∂_t T(r, t) = κ_w ∇²T(r, t),   for r > R_NP   (8.9)

Here, r denotes the radial coordinate; T represents the temperature rise within the AuNP (r < R_NP) and outside of it (r > R_NP); C_w denotes the volumetric heat capacity of water; and κ_Au and κ_w represent the thermal conductivities of gold and water, respectively. The symbols ∂_t T and ∇²T denote the temporal and spatial change of temperature, respectively, and ε_P,NP denotes the volumetric power density deposited in the AuNP, given by

ε_P,NP(t) = (σ_abs F_0 / V_NP) p_NP(t)   (8.10)

with p_NP(t) given by Eq. (8.5). At the Au–water interface (r = R_NP), the interfacial thermal resistance (Kapitza resistance) exists and can play a big role in the heat release. The interface resistance arises from the differences in the electronic and vibrational properties of the materials on either side of the interface when an energy carrier (phonon or electron, depending on the material) attempts to traverse it. The roughness and coating of the interface also influence the value of the Kapitza resistance [19, 20]. In general, the Kapitza conductance of the Au–water interface is about 100 MW/m²/K [21]. The direct consequence of a finite interface conductivity g (or resistivity 1/g) is a temperature discontinuity at the AuNP interface, as illustrated in Figure 8.4. In this figure, R_NP⁻ and R_NP⁺ denote the immediate boundary on the AuNP side and on the water side, respectively. The temperature drop at the interface is denoted as ΔT = T(R_NP⁻) − T(R_NP⁺). The interface is regarded as infinitely thin. From the equation of continuity, the heat flux from the AuNP side equals the heat flux into the

Figure 8.4 Illustration of heat conduction through the Au–water interface. Source: Xiao-Xuan Liang.

water side, and both are equal to −g × ΔT. The boundary condition is thus

κ_w ∂_r T(R_NP⁺, t) = κ_Au ∂_r T(R_NP⁻, t) = −g ΔT(t)
ΔT(t) = T(R_NP⁻, t) − T(R_NP⁺, t)   (8.11)

The energy deposition in the AuNP and the heat diffusion from the AuNP into the surrounding water are then unambiguously described by Eqs. (8.9)–(8.11), which can be solved numerically. Figure 8.5 shows the spatiotemporal plot of the generation and dissipation of heat in an AuNP embedded in water. One can observe that the stacked temperature–radius plots T(r) at different time instants maintain a general feature: a rectangular shape for r < R_NP, indicating a homogeneous temperature distribution within the AuNP, followed by a sharp drop at the Au–water interface arising from the Kapitza resistance, and continued by a decay for r > R_NP reflecting thermal diffusion into the surrounding water. The temporal evolutions of the AuNP temperature and of the water temperature at the interface are depicted in the T(t) plane as red and green trajectories, respectively. The maximum temperature rise of the AuNP, 783 K, is reached at t = 10 ps, whereas the maximum temperature rise of the water at the interface, 330 K, is reached at t = 212 ps. Note that the temperature values in Figures 8.3 and 8.5 differ by the room temperature of 20 ∘C, since Figure 8.5 shows the temperature rise rather than the absolute temperature. To better resolve the temporal change of the AuNP and of the water at the interface, T(t) plots with longer diffusion times are presented in 2D in Figure 8.6. From Figure 8.6, one can observe that the sharp temperature gradient between the AuNP and the water is quickly reconciled within 212 ps, which is the characteristic time for heat to cross the Kapitza resistance. Afterward, the reconciliation proceeds much more slowly, since heat diffusion into water then dominates. The characteristic time

Figure 8.5 Spatiotemporal plot of the generation and dissipation of heat in an AuNP embedded in water. Black lines represent temperature profiles at specific time instants; red and green lines show the temperature rise within the AuNP and in the water in the vicinity of the AuNP, respectively. Calculation parameters are R_NP = 50 nm, λ = 532 nm, F_0 = 5 mJ/cm² and τ_L = 100 fs. Source: Xiao-Xuan Liang.



Figure 8.6 Temporal evolution of the temperature rise for the AuNP (red) and for the water at the interface (green). The inset shows the calculations for the first 500 ps. Calculation parameters are the same as in Figure 8.5. Source: Xiao-Xuan Liang.


of heat diffusion in water for R_NP = 50 nm is about 3.5 ns, which corresponds to the time for the water temperature at R_NP⁺ to drop to 1/e of its maximum value. More generally, the characteristic times for heat diffusion through the Kapitza interface and for heat diffusion in water depend on the size of the AuNP. Baffou et al. [20] introduced intuitive characteristic times for diffusion through the Kapitza interface, τ_d^K, and for diffusion in water, τ_d^NP, given by

τ_d^K = R_NP ρ_Au c_Au / (3g)   (8.12)

and

τ_d^NP = R_NP² ρ_Au c_Au / (3κ_w)   (8.13)

Figure 8.7 shows the characteristic diffusion times τ_d^K and τ_d^NP as functions of nanoparticle radius. It predicts τ_d^K = 400 ps and τ_d^NP = 3.5 ns for R_NP = 50 nm; the latter is very close to the numerically calculated value shown in Figure 8.6.
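Equations (8.12) and (8.13) are straightforward to evaluate; the sketch below reproduces the values quoted for R_NP = 50 nm and the crossover radius of the two regimes. Here ρ_Au c_Au is taken as the volumetric heat capacity C_Au = 2.5 × 10⁶ J/m³/K used above, and κ_w = 0.6 W/m/K is an assumed textbook value for water.

```python
# Characteristic diffusion times of Eqs. (8.12) and (8.13). SI units.
RHO_C_AU = 2.5e6   # rho_Au * c_Au, volumetric heat capacity of gold, J/m^3/K
KAPPA_W = 0.6      # thermal conductivity of water, W/m/K (assumed value)
G_K = 1.0e8        # Kapitza conductance of the Au-water interface, W/m^2/K

def tau_kapitza(r_np):
    """Diffusion time through the Kapitza interface, Eq. (8.12)."""
    return r_np * RHO_C_AU / (3.0 * G_K)

def tau_water(r_np):
    """Diffusion time in the surrounding water, Eq. (8.13)."""
    return r_np**2 * RHO_C_AU / (3.0 * KAPPA_W)

r = 50e-9
print(tau_kapitza(r), tau_water(r))  # ~4e-10 s and ~3.5e-9 s, as in Figure 8.7
print(KAPPA_W / G_K)                 # crossover radius where both are equal: 6e-9 m
```

Setting Eq. (8.12) equal to Eq. (8.13) gives the crossover radius R_NP = κ_w/g, which evaluates to 6 nm for the parameters above.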


Figure 8.7 Characteristic diffusion times as a function of nanoparticle radius, with g = 100 MW/m²/K in the calculations. Source: Xiao-Xuan Liang.

One can also observe that the two lines cross at R_NP = 6 nm. For R_NP > 6 nm, τ_d^NP is larger than τ_d^K, indicating that heat diffusion in water is slower than diffusion through the Kapitza interface and serves as the bottleneck for the whole heat diffusion process; this region is labeled the thermal-resistance region. In contrast, for very small AuNPs (R_NP < 6 nm), τ_d^K is larger than τ_d^NP: heat diffusion in water is very fast owing to the much larger surface-to-volume ratio, and heat transfer through the Kapitza interface becomes the limiting factor of the whole heat conduction process; this region is labeled the Kapitza-resistance region. For the extreme case of R_NP = 1 nm, heat diffusion would take about 1 ps without Kapitza resistance, but about 10 ps when the Kapitza resistance is considered, which is on the same order as the thermalization of the AuNP via electron–phonon energy transfer. Finally, the characteristic diffusion time for an AuNP of a given size is given by the larger of τ_d^K and τ_d^NP.
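In the Kapitza-limited regime just described, the NP cooling can be sketched by a lumped-capacitance law, C_Au V dT/dt = −gA(T − T_w), whose time constant C_Au R_NP/(3g) is exactly τ_d^K of Eq. (8.12), since V/A = R_NP/3 for a sphere. A minimal sketch, assuming the surrounding water stays at ambient temperature (valid only when the interface is the bottleneck):

```python
import math

# Lumped-capacitance cooling of a small AuNP through the Kapitza interface:
# C_Au * V * dT/dt = -g * A * (T - T_w). With V/A = R/3 this decays exponentially
# with time constant C_Au*R/(3g), i.e. tau_d^K of Eq. (8.12). SI units.
C_AU = 2.5e6   # volumetric heat capacity of gold, J/m^3/K
G_K = 1.0e8    # Kapitza conductance of the Au-water interface, W/m^2/K

def cool_np(r_np, dT0=500.0, dt=1e-13, t_end=2e-9):
    """Integrate the lumped cooling law; return (time for dT to fall to dT0/e,
    analytic time constant C_Au*R/(3g))."""
    tau_analytic = C_AU * r_np / (3.0 * G_K)
    dT, t, t_1e = dT0, 0.0, None
    while t < t_end:
        dT += dt * (-3.0 * G_K / (C_AU * r_np)) * dT  # explicit Euler step
        t += dt
        if t_1e is None and dT <= dT0 / math.e:
            t_1e = t
    return t_1e, tau_analytic

t_1e, tau = cool_np(5e-9)
print(t_1e, tau)  # both ~4e-11 s for a 5 nm NP
```
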

8.1.5 Bubble Formation Threshold

Cavitation occurs when a threshold temperature T_th in water is exceeded, leading to vapor bubble formation. Generally, the value of T_th depends on the nucleation and stress confinement conditions. While the nucleation condition describes how "pure" the liquid is, stress confinement refers to whether the energy deposition time is short enough for a thermoelastic wave to build up [22]. Boiling occurs at 100 ∘C at ambient pressure due to heterogeneous nucleation, where abundant small nucleation sites (e.g. impurities and dust) exist in water. Traditionally, bubble formation due to heating under isobaric conditions is called "boiling," while the rupture of a liquid by tensile stress under isothermal conditions is called "cavitation" [23]. This distinction becomes blurred when targets are both heated and stretched under conditions of stress confinement. In fs laser-induced optical breakdown under tight focusing, where impurities are unlikely to be found (homogeneous nucleation) and the stress confinement condition applies, cavitation occurs at T_th = 152 ∘C for NA = 1.3 and 168 ∘C for NA = 0.8 [23, 24]. Under tight focusing, the bipolar thermoelastic stresses caused by the temperature rise stay confined in the focal volume, leading to the rupture of the liquid. For weak focusing or longer pulse durations, no stress confinement is guaranteed; cavitation then occurs at T_th ≈ 300 ∘C, corresponding to the kinetic spinodal limit at ambient pressure [23]. In some literature, a slightly smaller value of about 280 ∘C is adopted [10, 25]. In the case of bubble formation around an AuNP, no stress confinement occurs, since energy deposition in water takes place through heat diffusion and the characteristic diffusion time (Figure 8.7) is much longer than the acoustic transit time (≈30 ps for traveling over 50 nm). Therefore, we identify the bubble formation threshold around AuNPs as T_th = 300 ∘C.
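The no-stress-confinement argument can be checked by comparing the acoustic transit time across the heated region with the characteristic heating time, for which Eq. (8.13) is taken as a proxy here. The speed of sound in water is an assumed standard value; material constants are the same assumptions as above.

```python
# Stress confinement check around an AuNP: energy deposition in the water occurs
# on the heat diffusion time scale (Eq. (8.13) used as a proxy), which far exceeds
# the acoustic transit time, so thermoelastic stress can relax. SI units.
RHO_C_AU = 2.5e6     # rho_Au * c_Au, J/m^3/K
KAPPA_W = 0.6        # thermal conductivity of water, W/m/K (assumed value)
C_SOUND_W = 1483.0   # speed of sound in water, m/s (assumed value)

def stress_confined(r_np):
    """Return (confined?, acoustic transit time, characteristic heating time)."""
    t_acoustic = r_np / C_SOUND_W                     # ~34 ps for 50 nm
    t_heating = r_np**2 * RHO_C_AU / (3.0 * KAPPA_W)  # diffusion time, Eq. (8.13)
    return t_heating < t_acoustic, t_acoustic, t_heating

confined, t_ac, t_heat = stress_confined(50e-9)
print(confined, t_ac, t_heat)  # False: heating (~3.5 ns) is far slower than ~34 ps
```
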
In some literature, the critical temperature T_cri = 373.9 ∘C is still used instead of the kinetic spinodal temperature as the threshold temperature for bubble formation around AuNPs. This is incorrect, because the heating of an AuNP is isobaric, whereas the critical point is reached at 22 MPa, much higher than ambient pressure. Other studies have found that bubble formation occurs at "≈90% of the critical temperature" [25], which is very close to the kinetic spinodal temperature.


Figure 8.8 Fluence thresholds as a function of NP radius for 355 nm (blue) and 532 nm (green), and for fs and ns laser pulses. The ideal fluence F_th,0 (dashed lines) was obtained by inserting the values of β into the σ_abs term of Eq. (8.2). Source: Xiao-Xuan Liang.


Figure 8.8 depicts the fluence threshold F_th as a function of NP radius for 355 nm and 532 nm, and for 100 fs and 1 ns laser pulses. F_th as a function of R_NP features a V-shape, with a minimum at R_NP ≈ 30 nm. This arises from two size-dependent factors. For small NPs (R_NP < 30 nm), the increase of F_th comes from the fast heat dissipation into the surrounding water, considering the characteristic NP cooling time τ_d^NP ∝ R_NP² [2] (see Figure 8.7). For large NPs (R_NP > 30 nm), the increase of F_th stems from the nonlinearity of σ_abs with respect to the NP volume (see Figure 8.2b). For fs laser pulses, F_th at R_NP = 30 nm is close to the ideal fluence F_th,0, whereas for 1 ns pulses the deviation is stronger. This is because for ns laser pulses, heat diffusion already starts within the pulse duration, while plasmonic absorption is still ongoing. For even longer pulse durations, e.g. 5 ns, a temperature gradient in the water in the vicinity of the AuNP forms before cavitation occurs. This creates a mass density gradient in the surrounding water that correlates with a change in refractive index. As a result, the plasmonic absorption cross section is no longer constant but becomes time- and temperature-dependent. Since an increased temperature correlates with a reduced mass density and a reduced refractive index, the resulting absorption cross section is reduced. In this regard, an even larger bubble formation threshold is expected [16].

8.2 Gold Nanoparticles-Mediated Optical Diagnosis

The detection of biomarkers in biofluids is widely used in point-of-care (POC) and clinical applications, and bioimaging with high resolution and high signal-to-noise ratio (SNR) can reveal the structures and functions of biological and pathological tissue. Owing to their unique physicochemical and optical properties, such as localized surface plasmon resonance (LSPR), surface-enhanced Raman scattering (SERS), and metal-enhanced fluorescence (MEF), AuNPs have shown great potential in optical diagnosis, including in vitro and in vivo diagnostics


for cancer and other specific diseases, as well as enhanced optical imaging of biostructures and macromolecules. These remarkable optical properties make AuNPs sensitive sensors and amplifiers for disease biomarkers and bioimaging.

8.2.1 Gold Nanoparticles-Mediated Diagnosis of Disease Markers

The microscopic observation of biopsied samples has been the mainstay of diagnostic processes. However, this diagnostic technology suffers from observer subjectivity. Moreover, diagnosis at the microscopic level often occurs only in the middle and late stages of disease, which greatly limits the success of intervention. Pathological changes at a diseased site are often accompanied by abnormal activities of various biomolecules in and around the involved cells. Identifying the location and expression levels of these biomolecules could enable diagnosis of the related disease at an early stage. Hence, the detection of disease biomarkers is very important for clinical diagnostics. The most widely used method for clinical biomarker detection is the enzyme-linked immunosorbent assay (ELISA). However, ELISA usually allows detection only after biomarker levels have reached critical threshold concentrations, at which point the disease has already advanced considerably, which does not fully exploit the advantages of biomarker-based disease detection. With the development of nanotechnology and medical science, numerous nanomaterials have been applied to biomarker-based disease detection and signal amplification. Among them, AuNPs have attracted great attention for their unique optical and surface plasmon resonance (SPR) properties. In addition, the high loading capacity of AuNPs is very useful for surface coating with biomolecules such as proteins, DNA, and antibodies, so that disease diagnosis can be facilitated at very early stages. AuNPs are typically functionalized via Au–S or Au–N covalent bonds, or via Au–protein/antibody interactions. Hence, AuNP-mediated methods hold great promise for the detection of disease biomarkers.

8.2.1.1 In Vitro Gold Nanoparticles-Mediated Biomarker Diagnosis of Disease

In vitro diagnosis of disease using biological samples (such as blood, urine, and tissue) is a crucial component of clinical diagnosis. It can make clinical diagnosis faster, easier, and less painful for patients, and it has therefore received much public attention. Compared with in vivo diagnosis, in vitro diagnosis has the following advantages. First, it does not interact with the human body directly, causing minimal discomfort to patients. Second, its procedures are performed on biological samples, avoiding possible biological safety problems for patients. Third, it can quickly provide disease information, saving diagnostic time. Last, for acute infectious diseases, in vitro diagnosis can effectively reduce the infection rate when the instruments are operated remotely. With the development of nanotechnology, in vitro diagnosis of disease has made tremendous progress. Some highly sensitive diagnostic systems can detect biomarkers at pico-, femto-, atto-, and even zepto-molar levels.
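As a sense of scale for these molar detection limits, the sketch below converts molarity into absolute molecule counts; the 100 µL sample volume is an illustrative assumption, not a value from this chapter.

```python
# Convert molar detection limits into absolute numbers of analyte
# molecules. Illustrative only; the 100 uL sample volume is assumed.
AVOGADRO = 6.022e23  # molecules per mole

def molecules_in_sample(concentration_molar, volume_liters):
    """Average number of analyte molecules at a given molarity and volume."""
    return concentration_molar * volume_liters * AVOGADRO

for name, c in [("pM", 1e-12), ("fM", 1e-15), ("aM", 1e-18), ("zM", 1e-21)]:
    n = molecules_in_sample(c, 100e-6)  # 100 uL sample
    print(f"1 {name}: {n:.3g} molecules in 100 uL")
```

At the zeptomolar level, a 100 µL sample contains on average less than one analyte molecule, so claims at this level effectively imply single-molecule counting statistics.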


8 Optical Theranostics Based on Gold Nanoparticles

Detection of very low concentrations of analytes can improve the early diagnosis of disease. AuNPs have high surface-to-volume ratios and can easily be functionalized to detect specific targets, offering lower detectable analyte concentrations than conventional strategies. Therefore, AuNP-mediated diagnosis of disease, especially in vitro, is widely used. The past few years have witnessed a variety of in vitro AuNP-mediated diagnostic systems based on various analytes, including small molecules, proteins, nucleic acids, pathogens, and cancer cells. These systems are primarily classified by their signal transducers, including LSPR, fluorescence, electrical, and SERS readouts, and by their integration with point-of-care (POC) systems.

Gold Nanoparticles-Mediated Biomarker Diagnosis of Disease by LSPR

LSPR is a spectroscopic phenomenon in which the collective resonant oscillation of conduction electrons at the interfaces of noble metal nanoparticles is stimulated by incident light. When the frequency of the incident photons matches the natural frequency of the surface electrons oscillating against the attraction of the positive nuclei, resonance is achieved. Strong EM fields then form at the AuNP surfaces, and a distinctive absorption peak appears in the visible frequency range. Because the EM wave polarizes the surface charges at the boundary between the nanoparticle surface and the external medium, such as air or water, the oscillations are very sensitive to events at the nanoparticle–medium interface. These events are transduced into a shift of the absorption band and a color change of the nanoparticle solution. Based on this phenomenon, LSPR assays using absorption band shifts and colorimetric sensing are employed to detect diseases.
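The sensitivity of the LSPR peak to its local dielectric environment is often summarized with a simple phenomenological model, Δλ = m·Δn·(1 − e^(−2d/l_d)), where m is the refractive index sensitivity, Δn the index change introduced by the adsorbed layer, d the layer thickness, and l_d the EM field decay length. The sketch below uses illustrative parameter values that are assumptions, not numbers from this chapter.

```python
import math

def lspr_shift(m_nm_per_riu, delta_n, layer_nm, decay_nm):
    """LSPR peak shift (nm) for an adsorbed layer, using the common
    exponential-decay model: shift = m * dn * (1 - exp(-2d / l_d))."""
    return m_nm_per_riu * delta_n * (1.0 - math.exp(-2.0 * layer_nm / decay_nm))

# Assumed values: sensitivity 200 nm/RIU, protein layer index 1.48 vs
# water 1.33, 8 nm antibody layer; compare short vs long decay lengths.
for decay_nm in (5.0, 15.0):
    shift = lspr_shift(200.0, 1.48 - 1.33, 8.0, decay_nm)
    print(f"decay length {decay_nm} nm -> shift {shift:.2f} nm")
```

For a thin adlayer, a longer decay length yields a smaller immediate shift but responds more linearly as the bound mass grows, which is consistent with the broader linear range reported for large AuNPs.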
In addition, the local dielectric environment of the nanoparticles can be changed by the specific binding of analytes, such as disease-related biomarkers, to the nanoparticle surfaces, producing a dramatic LSPR shift. Therefore, a targeted AuNP-based LSPR assay is a more sensitive strategy for detecting disease. The absorption band peak shift and colorimetric analysis are the main LSPR assay strategies. The LSPR-induced absorption band peak shift provides a label-free detection method for disease. Antibodies, biotin, DNA strands, polymers, and other recognition molecules are used to modify AuNPs and generate the LSPR spectral shift. However, relying only on biomarker modification without further amplification limits the sensitivity of the label-free LSPR shift assay. One amplification method is to increase the refractive index sensitivity of the AuNPs, either by adjusting the AuNPs to a specific size and shape or by increasing the mass of the biomolecules on the AuNP surfaces. Gold nanorods (AuNRs) are the main AuNP material for highly sensitive LSPR assays. Nanoplates, which have a larger contact surface area, can also be used for LSPR assays because they yield a large red shift. Regarding size, research shows that large AuNPs have a much longer EM field decay length than small AuNPs and therefore offer a much broader linear range and much higher sensitivity. An aptamer–antigen–antibody sandwich structure on AuNPs has also been designed to increase the mass of the detected biomolecules. Another method to improve

8.2 Gold Nanoparticles-Mediated Optical Diagnosis

the sensitivity of the LSPR shift assay is labeling the antibody with an enzyme, such as horseradish peroxidase (HRP). Even single-molecule sensitivity can be achieved by conjugating HRP to secondary antibodies for the detection of clinically relevant antigens. The sensitivity of the AuNP-based LSPR shift assay can also be improved by conjugating the detection antibody with a magnetic nanoparticle. The magnetic nanoparticles not only increase the LSPR shift but also make detection convenient, because with an external magnet the analytes can be separated and enriched from complex solutions. Current research shows that the detection limit of the LSPR shift assay aided by magnetic nanoparticles can be as low as the picomolar level. AuNP dimers, rather than individual AuNPs, can also provide much higher detection sensitivity in the LSPR shift assay. Although many strategies have been designed to improve the sensitivity of the LSPR assay, the magnitude of the wavelength shift is directly proportional to the analyte concentration. Hence, when the analyte concentration is very low, the induced LSPR shift is very small; in very early-stage disease, the clinical concentration of biomarkers is often too low to induce a detectable LSPR shift. Recently, Stevens and coworkers designed a new LSPR assay in which the magnitude of the wavelength shift is inversely proportional to the analyte concentration: the lower the concentration, the larger the shift. This method is based on the growth of Ag crystals in the presence of Au nanostars. H2O2 generated by glucose oxidase (GOx) is the decisive factor determining the rate of Ag crystallization, which in turn determines whether Ag nanoparticles nucleate in solution or Ag nanocrystals grow epitaxially around the Au nanostars.
A high GOx concentration produces a large amount of H2O2, resulting in the formation of more Ag nanoparticles in solution and less Ag deposited around the Au nanostars, and hence a weak LSPR shift. In contrast, a low GOx concentration produces a small amount of H2O2, a thick Ag coating on the Au nanostar surfaces, and a significant blue shift. Using this method for prostate-specific antigen detection, the detection limit can be as low as 10−18 g/ml (4 × 10−20 M). Hence, the “inverse sensitivity” LSPR assay offers ultrahigh detection sensitivity for the diagnosis of disease.

AuNP colorimetric assays based on the aggregation of AuNPs are simple and fast detection methods for disease, because AuNPs have ultrahigh extinction coefficients (e.g. 2.7 × 108 M−1 cm−1 for 13 nm AuNPs). Upon aggregation, AuNPs change color from red to light purple. Within AuNP colorimetric assays, the enzyme cascade assay is a classic method for detecting low (nanomolar) levels of biomolecules. In 1997, Mirkin and coworkers first proposed an AuNP-based colorimetric assay, and colorimetric assays then developed explosively. Generally, these assays can be divided into two categories: (i) AuNPs are used directly in the assays by forming cross-linked assemblies without signal amplification; and (ii) AuNPs are used as substrates, replacing conventional small organic molecules in enzyme-catalyzed reactions. In the first category, AuNPs are triggered to aggregate by the targeted analytes, directly or indirectly, resulting in a wavelength red shift in the visible region. For example, DNA-mediated AuNP assembly can


be used for a colorimetric assay of DNA. Hybridization of two complementary DNA strands on AuNPs upon addition of the target DNA molecules results in the formation of cross-linked AuNP aggregates, and a red-to-blue (or purple) color change can be observed. Citrate-mediated AuNP aggregation can be used for ATP detection: hybridization of the ATP aptamer with its complementary oligonucleotide forms dsDNA, which is unable to stabilize unmodified AuNPs in high-salt solutions. In the presence of ATP, the aptamer binds ATP and releases a random-coil-like ssDNA, which can stabilize the AuNPs and prevent salt-induced aggregation. Because the AuNP-based colorimetric assay relies on the formation of large AuNP aggregates, its detection sensitivity is moderate, generally at the nanomolar level. To enhance detection sensitivity, enzyme-based amplification strategies have been used in the AuNP-based colorimetric assay. Chan and coworkers used this strategy to diagnose infectious diseases. The initially inactive multicomponent nucleic acid enzyme (MNAzyme) is activated in the presence of the DNA target. The active MNAzyme recognizes its substrate, a linker DNA, and in turn catalyzes the cleavage of multiple linker DNAs. The degraded linker DNAs cannot induce the aggregation of AuNPs, and the color remains red. If the DNA target is absent, the MNAzyme is not activated and cannot cleave the linker DNA; hybridization of the intact linkers with their complementary strands on the AuNP surfaces then induces aggregation of the AuNPs and a color change to blue. Based on this signal amplification, the detection limit can be lowered to 50 pM, much lower than that of direct AuNP-based colorimetric assays. To further enhance the sensitivity of the AuNP-based colorimetric assay, the enzymatic ligation chain reaction (LCR) has also been used.
In the presence of Ampligase, the template and target DNA hybridize with the capture sequences coated on AuNPs and are ligated covalently to form DNA–AuNP assemblies. Denaturation of the solution releases the target DNA and the ligated AuNPs. Repetition of the hybridization and ligation exponentially amplifies the number of ligated AuNPs, changing the color of the solution from red to purple. Because the ligated AuNPs are amplified exponentially through thermal cycling, ultrahigh sensitivity at the aM level can be achieved. However, the LCR-based AuNP colorimetric assay has at least three notable limitations. First, the precise temperature control required for the hybridization and denaturation processes makes rapid detection difficult. Second, thermal cycling at high temperatures easily causes nonspecific aggregation of AuNPs. Third, restricted interactions between the enzymes and their ligation sites can lead to low amplification efficiency.

Gold Nanoparticles-Mediated Diagnosis of Disease by Fluorescent Assays

Fluorescence, as a highly sensitive technique, plays an important role in the biomedical diagnosis of disease because it can offer even single-molecule detection sensitivity. However, traditional fluorescent assays have many problems, such as poor photostability and the autofluorescence of biological samples. Compared with traditional fluorescent assays, nanomaterial-mediated fluorescent assays have great potential for detecting disease due to their excellent photostability and biocompatibility, facile surface tailorability, color tunability, high surface-to-volume ratios, and high emission rates. Gold nanoclusters (AuNCs), which are made up of 2–100 Au atoms, are a new type


of fluorescent nanomaterial with unique optical features. In principle, when the size of AuNCs approaches the Fermi wavelength of electrons, molecule-like optical properties such as discrete electronic states and size-dependent fluorescence can be observed. With one-photon excitation, two different emission wavelengths can be found, in the visible and NIR regions. From the energy diagram of AuNCs, the visible emission is most likely associated with the Au core, while the NIR emission is usually associated with the interaction of the surface ligands with the AuNCs. Therefore, the emission efficiency is affected by both the size of the Au core and the polarity of the surface ligand. Usually, the surface ligands of AuNCs are small thiol-based molecules, polymers, DNA oligonucleotides, peptides, or proteins. By controlling the interactions between the AuNCs and their ligands, AuNC-based fluorescent assays can be designed. Activatable fluorescent assays composed of a fluorophore (donor) and a quencher (acceptor) have also been designed to monitor disease. Förster resonance energy transfer (FRET), a classical mode of energy transfer, provides a powerful tool for probing conformational changes of biomolecules and molecular interactions in biochemical processes when these processes occur across distances of less than 10 nm. Nanoparticle surface energy transfer (NSET) can overcome this distance limit of FRET. As in FRET, when an AuNP (acceptor) and a fluorophore (donor) are brought into proximity, the energy of the fluorophore is transferred to the AuNP surface via dipole–surface interactions. Because the free conduction electrons of the AuNP are distributed isotropically, the interaction between the conduction electrons and the oscillating dipole of the fluorophore is strongly enhanced, making the dipole–surface interactions in NSET stronger than the dipole–dipole interactions in FRET.
Continuous excitation of electron–hole pairs at the AuNP surface increases the probability of dipole–surface interactions and enhances the energy transfer efficiency in NSET. The dipole–surface energy transfer distance is more than twice that of traditional FRET.
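The different distance dependences of FRET and NSET can be compared with the standard efficiency expression E = 1/(1 + (r/r0)^n), with n = 6 for FRET and n = 4 for NSET. The characteristic distances used below (5 nm for FRET, 10 nm for NSET) are illustrative assumptions, not values from this chapter.

```python
# Distance dependence of energy transfer efficiency:
# FRET falls off as 1/r^6, NSET as 1/r^4, so NSET stays
# efficient over roughly twice the distance.
def transfer_efficiency(r_nm, r0_nm, n):
    """E = 1 / (1 + (r/r0)^n); r0 is the 50%-efficiency distance."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** n)

for r in (5, 10, 15, 20):
    e_fret = transfer_efficiency(r, 5.0, 6)   # assumed R0 = 5 nm
    e_nset = transfer_efficiency(r, 10.0, 4)  # assumed d0 = 10 nm
    print(f"r = {r:2d} nm  FRET: {e_fret:.3f}  NSET: {e_nset:.3f}")
```

At 10 nm, where FRET efficiency has collapsed to about 1.5%, NSET is still at 50%, illustrating the more-than-twofold extension of the working distance noted above.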

8.2.1.2 In Vivo Gold Nanoparticles-Mediated Diagnosis of Disease

In recent years, gold nanoparticle-mediated photo-imaging has emerged as a novel approach, particularly in cancer diagnosis and treatment, to identify early-stage tumors and to guide surgeons toward precision treatment by distinguishing tumor cells from healthy cells. AuNPs have excellent tunability, allowing their plasmonic resonance to be shifted from 520 nm to 800–1200 nm. The 800–1200 nm range, in the near-infrared region, is therapeutically useful because tissue is moderately transparent to near-infrared light. Therefore, gold nanoparticles provide an opportunity for cancer diagnostics and therapy in deep tissues by photothermal or photo-imaging methods. Recently, various types of gold nanoparticles, such as gold nanospheres (AuNSs), gold nanorods, gold nanoshells, gold nanocages, gold nanostars, gold nanorings, and gold nanoflowers, have been used for cancer diagnostics and therapeutics in vitro or in vivo. For example, for in vivo cancer diagnosis, Jeffrey et al. developed a gold nanoparticle CT contrast agent conjugated with an anti-EGFR antibody and showed that it could detect tumor receptor overexpression, improving the detection and accurate diagnosis of lung cancer by dual-energy CT imaging. Wang and coworkers designed a targeted gold nanoparticle probe by carrying


hyaluronic acid and manganese chelates on the surface of AuNPs to target the CD44 receptor for detecting hepatocellular carcinoma by CT/MR dual-mode imaging. Dinish et al. reported an AuNP-based biofunctionalized molecular probe, labeled with NIR-active organic molecules on the AuNP surface and targeted with anti-EGFR antibodies, for single-molecule detection by in vivo PA imaging and SERS biosensing. Kim et al. synthesized a fibrin-targeting glycol-chitosan-coated AuNP probe for direct cerebral thrombus imaging. However, clinical trials using AuNPs for cancer diagnostics and therapy remain very few because the toxicity of AuNPs is still controversial. The size, morphology, and production method of the AuNPs, as well as the environmental scenario, are the main factors affecting their toxicity. Modification of AuNPs with PEG is also an effective method to reduce their toxicity. Generally, AuNP-mediated diagnosis of disease focuses on the early diagnosis of disease.
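The value of the 800–1200 nm window can be sketched with a simple exponential attenuation model for light in tissue. The penetration depths below are rough order-of-magnitude assumptions for generic soft tissue, not measurements from this chapter.

```python
import math

# Assumed effective 1/e penetration depths for generic soft tissue.
PENETRATION_DEPTH_MM = {
    "520 nm (visible)": 0.6,
    "1064 nm (NIR window)": 3.0,
}

def surviving_fraction(depth_mm, penetration_mm):
    """Fraction of incident light remaining after depth_mm of tissue,
    assuming simple exponential (Beer-Lambert-like) attenuation."""
    return math.exp(-depth_mm / penetration_mm)

for label, pd_mm in PENETRATION_DEPTH_MM.items():
    print(f"{label}: {surviving_fraction(5.0, pd_mm):.1e} of light at 5 mm")
```

Even with these rough numbers, orders of magnitude more NIR light than visible light reaches a target 5 mm deep, which is why plasmon resonances tuned into the NIR are preferred for deep-tissue theranostics.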

8.2.2

Gold Nanoparticles-Mediated Optical Bioimaging

Based on their absorption, scattering, and fluorescence properties, AuNPs have been widely used as probes to enhance bioimaging and have been applied in dark-field (DF) imaging, fluorescence imaging, photothermal and PA imaging, surface-enhanced Raman scattering (SERS) imaging, OCT, CT, and MRI. However, the resolution, speed, sensitivity, and penetration depth in tissue of these techniques are still not fully satisfactory. In this chapter, we summarize recent achievements and advances in using AuNPs to improve the resolution and sensitivity of bioimaging in vitro and in vivo.

8.2.2.1 Dark Field Imaging

Because of LSPR, plasmonic AuNPs have strong light-scattering properties, which are determined by their optical cross sections, reflection coefficients, and surface topographies. These parameters are in turn affected by AuNP shape and size. Larger AuNPs have larger cross sections and scatter light more efficiently than smaller AuNPs. Therefore, the light-scattering efficiency of AuNPs is tunable by changing their size and shape, making them excellent probes for light-scattering imaging. Under DF microscopy, an individual AuNP has an SNR five orders of magnitude higher than common fluorescein, with nanometer spatial precision (limited by the optical diffraction limit), which makes it possible to directly observe the intracellular behavior of AuNPs by optical microscopy. A single Au nanosphere usually scatters light at a wavelength of about 530 nm due to the LSPR effect and appears as a small green dot under a DF microscope; when the AuNP size increases or small particles aggregate into clusters, the scattered wavelength red-shifts and the particles appear as yellow dots. Carsten et al. combined optical dark-field microscopy with transmission electron microscopy (TEM) to track the uptake of AuNPs by mammalian cells [1, 26], and Fan et al. used size and color changes to track the dynamic processes of AuNPs in live cells [2, 27]. Beyond studying AuNPs themselves, AuNP-labeled DF microscopy can also be used to study cellular biological processes such as cell division or microorganism–cell interactions. AuNPs localized in the nucleus, for example, can be used to monitor the nuclear morphological changes


in cell mitosis, and virus-labeled AuNPs have been used to study virus–cell interactions, particularly the process of viruses invading cells, in real time [28]. Due to the strong scattering signal of the biological tissue around AuNPs, AuNP-labeled DF microscopy has mainly been applied to cell imaging. However, it can also be used for in vivo studies via tissue slide sections, for example for pharmacokinetic studies of AuNPs in ex vivo tissues. SoRelle et al. studied the biodistribution of single AuNPs with hyperspectral DF imaging in mouse tissue sections [29]. Although DF microscopy is restricted to in vitro or ex vivo imaging, it remains an important imaging strategy for AuNPs because of its easy implementation and nanoscale spatial resolution. With the development of optical technologies and instruments, further biomedical applications of AuNP-labeled DF microscopy will emerge.

8.2.2.2 Fluorescence and Luminescence Imaging

Though DF microscopy can detect a single AuNP, the scattered light intensity depends on the sixth power of the AuNP size, so the scattering intensity becomes too low to detect when the AuNP size is less than about 10 nm. Conversely, the absorption intensity depends only on the third power of the size. Thus, imaging strategies based on the absorption of AuNPs, such as fluorescence, photothermal, and PA imaging, are more suitable for probing smaller AuNPs. Metal materials, unlike semiconductor and organic materials, typically lack a band gap and have very low fluorescence quantum yields (10−10), so fluorescence imaging with bulk metals has attracted little interest. However, due to the quantum size effect and LSPR, small metal nanoclusters have much higher quantum yields (10−4) than conventional metal materials. For fluorescent gold nanoclusters, the fluorescence may arise from LSPR and trans-particle electron exchange, and the detailed mechanism is still under investigation. In addition, the fluorescence of a fluorophore near an AuNP depends strongly on their separation: when the fluorophore is far enough from the AuNP surface (20 nm), its fluorescence is no longer affected by the AuNP. The plasmon-enhanced fluorescence (PEF) effect of AuNPs can help image weak fluorescence emitters and enables high-resolution imaging, even down to the single-molecule level.
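The d^6 versus d^3 size scaling stated above can be made concrete with a quick ratio calculation; the 40 nm reference diameter is an arbitrary normalization choice.

```python
# Scattering scales roughly as d^6 and absorption as d^3 for small AuNPs,
# so absorption-based imaging wins for sub-10 nm particles.
def relative_signal(d_nm, d_ref_nm, power):
    """Signal of a d_nm particle relative to a d_ref_nm reference."""
    return (d_nm / d_ref_nm) ** power

D_REF = 40.0  # arbitrary reference diameter, nm
for d in (5.0, 10.0, 20.0, 40.0):
    scattering = relative_signal(d, D_REF, 6)
    absorption = relative_signal(d, D_REF, 3)
    print(f"{d:4.0f} nm  scattering: {scattering:.1e}  absorption: {absorption:.1e}")
```

Shrinking a particle from 40 nm to 10 nm cuts scattering by a factor of 4096 but absorption by only 64, which is why fluorescence, photothermal, and PA imaging are preferred for small AuNPs.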


8.2.2.3 Photothermal and Photoacoustic Imaging

Due to their strong SPR absorbance, small AuNPs can convert light energy into heat and raise the temperature of the surrounding medium; this is called the “photothermal effect” and causes a refractive index change in the medium. Using this mechanism, we can image even small AuNPs with a size